[jira] [Commented] (TRAFODION-2873) LOB: Cleanup usage of LOBLoad which is deprecated and LobGlobals

2018-01-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331064#comment-16331064
 ] 

ASF GitHub Bot commented on TRAFODION-2873:
---

Github user sandhyasun commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1405#discussion_r162448037
  
--- Diff: core/sql/exp/ExpLOBaccess.h ---
@@ -589,7 +589,7 @@ class ExLobGlobals
 {
   public :
   
-ExLobGlobals(); 
+ExLobGlobals(NAHeap *lobHeap=NULL); 
--- End diff --

Yes, this was only done to be able to compile the call in ExpLobProcess.cpp. 
Currently this code is unused and is kept around in case we need to offload any 
processing to another process. This code invokes LobGlobals and there is no 
heap available, so I had to make it default just for this unused code. Will 
put in a comment that callers should always pass a heap. In any case a caller 
cannot initialize the LobGlobals without passing in a caller's heap - i.e. the 
only way to initialize is by calling ExpLOBinterfaceInit or 
ExpLOBoper::initLobGlobal. Both require a heap to be passed in. 
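
A minimal standalone sketch of the pattern being discussed (only the 
constructor signature comes from the diff; the class body and member are 
assumed for illustration):

```
// Sketch only, not the actual Trafodion header.
class NAHeap;                          // opaque forward declaration

class ExLobGlobalsSketch
{
public:
  // Callers should always pass a heap; the NULL default exists only so the
  // dormant ExpLOBprocess.cpp path compiles. The supported entry points
  // (ExpLOBinterfaceInit / ExpLOBoper::initLobGlobal) both require one.
  ExLobGlobalsSketch(NAHeap *lobHeap = nullptr) : heap_(lobHeap) {}

private:
  NAHeap *heap_;                       // owning context's heap
};
```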


> LOB: Cleanup usage of LOBLoad which is deprecated and LobGlobals
> ---
>
> Key: TRAFODION-2873
> URL: https://issues.apache.org/jira/browse/TRAFODION-2873
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.3
>Reporter: Sandhya Sundaresan
>Assignee: Sandhya Sundaresan
>Priority: Major
>
> The LOBGlobals structure contains information relevant to LOBLoad, which was 
> an operator initially designed to operate at the disk level. It is no longer 
> needed/relevant, so we are cleaning up that code and simplifying the LOBGlobals 
> structure as well to keep only the relevant data members. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2913) Tweak some MDAM-related heuristics

2018-01-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331121#comment-16331121
 ] 

ASF GitHub Bot commented on TRAFODION-2913:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1403


> Tweak some MDAM-related heuristics
> --
>
> Key: TRAFODION-2913
> URL: https://issues.apache.org/jira/browse/TRAFODION-2913
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
> Attachments: MdamTests2.py.txt
>
>
> While debugging a plan choice issue on a customer query, two issues were 
> noted with MDAM heuristics.
>  1. When CQD FSO_TO_USE is set to '0', FileScanOptimizer::optimize attempts to 
> perform logic similar to that in ScanOptimizer::getMdamStatus, checking the 
> mdamFlag that is stored in the index descriptor. But the logic is not the 
> same (the inevitable result of having two copies of something!); in the 
> latter case the mdamFlag is ignored if CQD RANGESPEC_TRANSFORMATION is 'ON', 
> while in the FileScanOptimizer::optimize logic no such additional check is 
> made. Now, 'ON' is presently the default for RANGESPEC_TRANSFORMATION. So, 
> we have the anomaly that using CQD FSO_TO_USE '0' to force consideration of 
> MDAM might still not get MDAM because of a flag that we would ignore 
> otherwise.
>  2. The mdamFlag in the IndexDesc object is set by IndexDesc::pruneMdam 
> (optimizer/IndexDesc.cpp). There is heuristic logic there to guess whether 
> MDAM will be useful for a given access path. The major purpose of this logic 
> is index elimination: if we have several indexes, and some look like good 
> choices for MDAM and others not, we eliminate the ones that are not. Only 
> secondarily is this mdam flag later looked at by the scan optimizer, as 
> described above in 1. The major purpose of this logic still seems reasonable, 
> though the computation logic itself can be criticized for not considering the 
> possibility of a parallel predicate on a leading "_SALT_" column, for 
> example. But the computation involves a CQD, MDAM_SELECTION_DEFAULT, which is 
> set to a low value by default. The customer query involved showed that the 
> value used is too low; this flag ended up eliminating a favorable MDAM plan. 
> The default was likely last determined in the predecessor product; given that 
> the HBase engine has different execution dynamics, this value needs to be 
> recalibrated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2873) LOB: Cleanup usage of LOBLoad which is deprecated and LobGlobals

2018-01-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331374#comment-16331374
 ] 

ASF GitHub Bot commented on TRAFODION-2873:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1405#discussion_r162492341
  
--- Diff: core/sql/cli/Cli.cpp ---
@@ -9765,12 +9797,27 @@ Lng32 SQLCLI_LOBddlInterface

goto error_return;
  }
-   
+   //Initialize LOB interface 
+
+Int32 rc= 
ExpLOBoper::initLOBglobal(exLobGlob,currContext.exHeap(),,hdfsServer,hdfsPort);
+if (rc)
+  {
+{
--- End diff --

And here


> LOB: Cleanup usage of LOBLoad which is deprecated and LobGlobals
> ---
>
> Key: TRAFODION-2873
> URL: https://issues.apache.org/jira/browse/TRAFODION-2873
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.3
>Reporter: Sandhya Sundaresan
>Assignee: Sandhya Sundaresan
>Priority: Major
>
> The LOBGlobals structure contains information relevant to LOBLoad, which was 
> an operator initially designed to operate at the disk level. It is no longer 
> needed/relevant, so we are cleaning up that code and simplifying the LOBGlobals 
> structure as well to keep only the relevant data members. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2873) LOB: Cleanup usage of LOBLoad which is deprecated and LobGlobals

2018-01-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331373#comment-16331373
 ] 

ASF GitHub Bot commented on TRAFODION-2873:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1405#discussion_r162492113
  
--- Diff: core/sql/cli/Cli.cpp ---
@@ -9611,12 +9611,28 @@ Lng32 SQLCLI_LOBddlInterface

  } // for
 
+//Initialize LOB interface 
+
+Int32 rc= 
ExpLOBoper::initLOBglobal(exLobGlob,currContext.exHeap(),,hdfsServer,hdfsPort);
+if (rc)
+  {
+{
--- End diff --

It doesn't hurt anything, but I'm curious: Why two sets of braces? Looks 
like one would do?
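
For comparison, a minimal sketch of the same check with the single set of 
braces suggested above (the diagnostics step and the plain int return are 
assumed; the real CLI code uses Lng32 and a goto to error_return):

```
static int initLobInterfaceSketch(int rc)   // int stands in for Lng32
{
  if (rc)
    {
      // ... populate the diagnostics area, then take the error exit ...
      return rc;                            // one brace pair is sufficient
    }
  return 0;
}
```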


> LOB: Cleanup usage of LOBLoad which is deprecated and LobGlobals
> ---
>
> Key: TRAFODION-2873
> URL: https://issues.apache.org/jira/browse/TRAFODION-2873
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.3
>Reporter: Sandhya Sundaresan
>Assignee: Sandhya Sundaresan
>Priority: Major
>
> The LOBGlobals structure contains information relevant to LOBLoad, which was 
> an operator initially designed to operate at the disk level. It is no longer 
> needed/relevant, so we are cleaning up that code and simplifying the LOBGlobals 
> structure as well to keep only the relevant data members. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2894) Catalog Api GetTypeInfo does not support TINYINT, BIGINT UNSIGNED and LONG VARCHAR

2018-01-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331351#comment-16331351
 ] 

ASF GitHub Bot commented on TRAFODION-2894:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1381


> Catalog Api GetTypeInfo does not support TINYINT, BIGINT UNSIGNED and LONG 
> VARCHAR
> --
>
> Key: TRAFODION-2894
> URL: https://issues.apache.org/jira/browse/TRAFODION-2894
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Affects Versions: any
> Environment: Centos 6.7
>Reporter: XuWeixin
>Assignee: XuWeixin
>Priority: Major
> Fix For: any
>
>
> Using ODBC SQLGetTypeInfo to get the descriptions of TINYINT, BIGINT 
> UNSIGNED, and LONG VARCHAR fails.
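
For context, a minimal sketch of the failing call path (assumes an 
already-connected ODBC statement handle; error handling is trimmed and the 
function name is illustrative):

```
#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

// Ask the driver for the catalog row describing TINYINT and print TYPE_NAME.
void describeTinyint(SQLHSTMT hstmt)
{
  if (SQLGetTypeInfo(hstmt, SQL_TINYINT) != SQL_SUCCESS)
    return;
  if (SQLFetch(hstmt) == SQL_SUCCESS)   // no row fetched => the reported defect
    {
      SQLCHAR typeName[129];
      SQLLEN len = 0;
      // Column 1 of the SQLGetTypeInfo result set is TYPE_NAME.
      SQLGetData(hstmt, 1, SQL_C_CHAR, typeName, sizeof(typeName), &len);
      printf("TYPE_NAME: %s\n", typeName);
    }
}
```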



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2909) Add ROLLUP Function for *Trafodion SQL Reference Manual*

2018-01-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326829#comment-16326829
 ] 

ASF GitHub Bot commented on TRAFODION-2909:
---

GitHub user liuyu000 opened a pull request:

https://github.com/apache/trafodion/pull/1399

[TRAFODION-2909] Add ROLLUP Function for *Trafodion SQL Reference Manual*



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/liuyu000/trafodion ROLLUP

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1399.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1399


commit 233d096295d239ac6ecf6a70c247b5ec33007dfb
Author: liu.yu 
Date:   2018-01-16T07:17:00Z

Add ROLLUP Function for *Trafodion SQL Reference Manual*




> Add ROLLUP Function for *Trafodion SQL Reference Manual* 
> -
>
> Key: TRAFODION-2909
> URL: https://issues.apache.org/jira/browse/TRAFODION-2909
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2910) Add Details for Load Statement 4 and Correct Typos for *Trafodion SQL Reference Manual*

2018-01-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326844#comment-16326844
 ] 

ASF GitHub Bot commented on TRAFODION-2910:
---

GitHub user liuyu000 opened a pull request:

https://github.com/apache/trafodion/pull/1400

[TRAFODION-2910] Add Details for Load Statement 4 and Correct Typos for 
*Trafodion SQL Reference Manual* 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/liuyu000/trafodion LoadStatement4

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1400.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1400


commit 1e3707985078d1f5d338bfa69d9c4600c85085b7
Author: liu.yu 
Date:   2018-01-16T07:53:28Z

Add Details for Load Statement 4 and Correct Typos for *Trafodion SQL 
Reference Manual*




> Add Details for Load Statement 4 and Correct Typos for *Trafodion SQL 
> Reference Manual* 
> 
>
> Key: TRAFODION-2910
> URL: https://issues.apache.org/jira/browse/TRAFODION-2910
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2857) Remove incubating from trafodion web pages

2018-01-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327506#comment-16327506
 ] 

ASF GitHub Bot commented on TRAFODION-2857:
---

GitHub user svarnau opened a pull request:

https://github.com/apache/trafodion/pull/1402

[TRAFODION-2857] Remove remaining "incubator" path on downloads page

As Seth pointed out on the user list, I missed updating paths on the
downloads web page.  The old release names still have "incubating" in
the dir and file names, but the whole trafodion dist site has moved
out of the incubator directory.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/svarnau/trafodion j2857_3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1402.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1402


commit c26bac1668723721b1f8115c8c5f7e6b334a90ff
Author: Steve Varnau 
Date:   2018-01-16T18:08:34Z

[TRAFODION-2857] Remove remaining "incubator" path on downloads page

As Seth pointed out on the user list, I missed updating paths on the
downloads web page.  The old release names still have "incubating" in
the dir and file names, but the whole trafodion dist site has moved
out of the incubator directory.




> Remove incubating from trafodion web pages
> --
>
> Key: TRAFODION-2857
> URL: https://issues.apache.org/jira/browse/TRAFODION-2857
> Project: Apache Trafodion
>  Issue Type: Improvement
>Reporter: Steve Varnau
>Assignee: Steve Varnau
>Priority: Major
> Fix For: 2.3
>
>
> Preparing for announcement of graduation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2891) fix the bufoverrun Critical error checked by TScanCode

2018-01-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327317#comment-16327317
 ] 

ASF GitHub Bot commented on TRAFODION-2891:
---

Github user traflm commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1394#discussion_r161806194
  
--- Diff: core/conn/odbc/src/odbc/Common/linux/sqmem.cpp ---
@@ -400,7 +400,7 @@ void TestPool(void * membase, int length)
long j, index;
short error;
long pass;
-   void * pool_ptrs[256];
+   void * pool_ptrs[256 + 1];
--- End diff --

very good catch!
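
The likely shape of the finding, sketched standalone (the inclusive loop bound 
is an assumption about TestPool, not lifted from sqmem.cpp):

```
void testPoolSketch()
{
  void *pool_ptrs[256 + 1];            // 257 slots so index 256 is legal
  for (int i = 0; i <= 256; i++)       // inclusive upper bound (the assumption)
    pool_ptrs[i] = nullptr;            // with [256], i == 256 writes past the end
}
```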


> fix the bufoverrun Critical error checked by TScanCode
> --
>
> Key: TRAFODION-2891
> URL: https://issues.apache.org/jira/browse/TRAFODION-2891
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: xiaozhong.wang
>Priority: Major
> Attachments: Critical_trafodion_tscancode_codecheck.xml
>
>
> Accessing the buffer overruns it; if the buffer is at the end of memory, this 
> will cause a core dump. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2861) Remove incubating reference(s) from code base

2018-01-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327298#comment-16327298
 ] 

ASF GitHub Bot commented on TRAFODION-2861:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1393


> Remove incubating reference(s) from code base
> -
>
> Key: TRAFODION-2861
> URL: https://issues.apache.org/jira/browse/TRAFODION-2861
> Project: Apache Trafodion
>  Issue Type: Sub-task
>  Components: documentation, website
>Reporter: Pierre Smits
>Priority: Major
> Fix For: 2.3
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2891) fix the bufoverrun Critical error checked by TScanCode

2018-01-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327319#comment-16327319
 ] 

ASF GitHub Bot commented on TRAFODION-2891:
---

Github user traflm commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1394#discussion_r161806802
  
--- Diff: core/sqf/src/seatrans/tm/hbasetmlib2/hbasetm.h ---
@@ -139,9 +139,9 @@ class CHbaseTM : public JavaObjectInterfaceTM
   JM_REGTRUNCABORT,
   JM_DROPTABLE,
   JM_RQREGINFO,
-  JM_LAST
+  JM_TMLAST
};
-   JavaMethodInit JavaMethods_[JM_LAST];
+   JavaMethodInit JavaMethods_[JM_TMLAST];
--- End diff --

This is critical!
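
One plausible mechanism behind the rename, sketched standalone (this is an 
assumption, not lifted from the Trafodion sources): if a derived enum continues 
past the base class's JM_LAST terminator, any table still sized by JM_LAST is 
too small for the derived entries.

```
struct JavaObjectInterfaceSketch
{
  enum { JM_CTOR, JM_INIT, JM_LAST };            // base terminator: 2
};

struct HbaseTMSketch : JavaObjectInterfaceSketch
{
  enum { JM_REGTRUNCABORT = JM_LAST,             // derived entries continue at 2
         JM_DROPTABLE, JM_RQREGINFO,
         JM_TMLAST };                            // derived terminator: 5
  int javaMethods_[JM_TMLAST];                   // correct: 5 slots
  // int javaMethods_[JM_LAST];                  // wrong: 2 slots; writes at 2..4 overrun
};
```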


> fix the bufoverrun Critical error checked by TScanCode
> --
>
> Key: TRAFODION-2891
> URL: https://issues.apache.org/jira/browse/TRAFODION-2891
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: xiaozhong.wang
>Priority: Major
> Attachments: Critical_trafodion_tscancode_codecheck.xml
>
>
> Accessing the buffer overruns it; if the buffer is at the end of memory, this 
> will cause a core dump. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2873) LOB: Cleanup usage of LOBLoad which is deprecated and LobGlobals

2018-01-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331830#comment-16331830
 ] 

ASF GitHub Bot commented on TRAFODION-2873:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1405


> LOB: Cleanup usage of LOBLoad which is deprecated and LobGlobals
> ---
>
> Key: TRAFODION-2873
> URL: https://issues.apache.org/jira/browse/TRAFODION-2873
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.3
>Reporter: Sandhya Sundaresan
>Assignee: Sandhya Sundaresan
>Priority: Major
>
> The LOBGlobals structure contains information relevant to LOBLoad, which was 
> an operator initially designed to operate at the disk level. It is no longer 
> needed/relevant, so we are cleaning up that code and simplifying the LOBGlobals 
> structure as well to keep only the relevant data members. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2921) Add Syntax and Examples of *UPSERT USING LOAD* for *LOAD Statement* for *Trafodion SQL Reference Manual*

2018-01-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331786#comment-16331786
 ] 

ASF GitHub Bot commented on TRAFODION-2921:
---

GitHub user liuyu000 opened a pull request:

https://github.com/apache/trafodion/pull/1408

[TRAFODION-2921] Add Syntax and Examples of *UPSERT USING LOAD* for *LOAD 
Statement* for *Trafodion SQL Reference Manual*



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/liuyu000/trafodion LoadStatement5

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1408.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1408


commit 629a369fefca80e59bf91f1951c4e584c593d250
Author: liu.yu 
Date:   2018-01-19T06:04:07Z

Add Syntax and Examples of *UPSERT USING LOAD* for *LOAD Statement* for 
*Trafodion SQL Reference Manual*




> Add Syntax and Examples of *UPSERT USING LOAD* for *LOAD Statement* for 
> *Trafodion SQL Reference Manual*
> 
>
> Key: TRAFODION-2921
> URL: https://issues.apache.org/jira/browse/TRAFODION-2921
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2918) various regression tests fail due to TRAFODION-2805 fix

2018-01-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331972#comment-16331972
 ] 

ASF GitHub Bot commented on TRAFODION-2918:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1406


> various regression tests fail due to TRAFODION-2805 fix
> ---
>
> Key: TRAFODION-2918
> URL: https://issues.apache.org/jira/browse/TRAFODION-2918
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: liu ming
>Assignee: liu ming
>Priority: Major
>
> fix the regression



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2873) LOB: Cleanup usage of LOBLoad which is deprecated and LobGlobals

2018-01-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331822#comment-16331822
 ] 

ASF GitHub Bot commented on TRAFODION-2873:
---

Github user sandhyasun commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1405#discussion_r162546998
  
--- Diff: core/sql/exp/ExpLOBprocess.cpp ---
@@ -575,7 +575,7 @@ Lng32 main(Lng32 argc, char *argv[])
 // setup log4cxx
 QRLogger::initLog4cxx(QRLogger::QRL_LOB);
 // initialize lob globals
-lobGlobals = new ExLobGlobals();
+lobGlobals = new ExLobGlobals(NULL);
--- End diff --

ExpLOBprocess.cpp is dormant code for now. If we enable it and create a 
LOB process to offload some operations like GC, we will need to create and 
set up globals, send messages from the master process to it, etc. For now I 
just had to ensure it compiled. 


> LOB: Cleanup usage of LOBLoad which is deprecated and LobGlobals
> ---
>
> Key: TRAFODION-2873
> URL: https://issues.apache.org/jira/browse/TRAFODION-2873
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.3
>Reporter: Sandhya Sundaresan
>Assignee: Sandhya Sundaresan
>Priority: Major
>
> The LOBGlobals structure contains information relevant to LOBLoad, which was 
> an operator initially designed to operate at the disk level. It is no longer 
> needed/relevant, so we are cleaning up that code and simplifying the LOBGlobals 
> structure as well to keep only the relevant data members. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2919) report a negligible error when compiling without setting QT_TOOLKIT

2018-01-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331872#comment-16331872
 ] 

ASF GitHub Bot commented on TRAFODION-2919:
---

GitHub user xiaozhongwang opened a pull request:

https://github.com/apache/trafodion/pull/1409

[TRAFODION-2919] fixed a negligible error when compiling without setting 
QT_TOOLKIT

make sure QT_TOOLKIT is set before executing the following command:
 cp -f ../$(basename $(CMPGUI_OBJ))/$(LIBPREFIX)$(notdir $(obj)) 
$(CMPGUI_FINAL) 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xiaozhongwang/trafodion TRAFODION-2919

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1409.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1409


commit f8acc2f5d8a12d25c7d063393cb79c1b2098b3f2
Author: Kenny 
Date:   2018-01-19T08:05:45Z

fixed a negligible error when compiling without setting QT_TOOLKIT




> report a negligible error when compiling without setting QT_TOOLKIT
> -
>
> Key: TRAFODION-2919
> URL: https://issues.apache.org/jira/browse/TRAFODION-2919
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: Build Infrastructure
> Environment: CentOS 6.*
>Reporter: xiaozhong.wang
>Priority: Trivial
>
> When we build without setting QT_TOOLKIT, the build can be completed, but it 
> reports an error as follows:
> cd ../SqlCompilerDebugger && . ./mk.sh ##(SQL)
> Skipping Build of SQL Compiler Debugger ##(SQL)
> cp -f ../SqlCompilerDebugger/libSqlCompilerDebugger.so 
> ../lib/linux/64bit/debug/libSqlCompilerDebugger.so ##(SQL)
> cp: cannot stat "../SqlCompilerDebugger/libSqlCompilerDebugger.so": No such 
> file or directory ##(SQL)
> make[4]: [cmpdbg_qt_build] Error 1 (ignored) ##(SQL)
> I found the problem to be the following:
> when QT_TOOLKIT is not set, the command "cd ../SqlCompilerDebugger && . 
> ./mk.sh" is skipped, so the file libSqlCompilerDebugger.so is not generated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2921) Add Syntax and Examples of *UPSERT USING LOAD* for *LOAD Statement* for *Trafodion SQL Reference Manual*

2018-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335420#comment-16335420
 ] 

ASF GitHub Bot commented on TRAFODION-2921:
---

Github user liuyu000 closed the pull request at:

https://github.com/apache/trafodion/pull/1408


> Add Syntax and Examples of *UPSERT USING LOAD* for *LOAD Statement* for 
> *Trafodion SQL Reference Manual*
> 
>
> Key: TRAFODION-2921
> URL: https://issues.apache.org/jira/browse/TRAFODION-2921
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2922) calling the int[] executeBatch() method to execute a create/drop/delete/insert/update/upsert statement always returns -2

2018-01-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338440#comment-16338440
 ] 

ASF GitHub Bot commented on TRAFODION-2922:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1410


> calling the int[] executeBatch() method to execute a 
> create/drop/delete/insert/update/upsert statement always returns -2
> --
>
> Key: TRAFODION-2922
> URL: https://issues.apache.org/jira/browse/TRAFODION-2922
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-jdbc-t4
>Affects Versions: 2.3
>Reporter: LiZe
>Priority: Major
> Fix For: 2.3
>
>
> Calling the int[] executeBatch() method to execute a 
> create/drop/delete/insert/update/upsert statement always returns -2, but JDBC 
> T2 passes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2802) Prepare the build environment with one command

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340457#comment-16340457
 ] 

ASF GitHub Bot commented on TRAFODION-2802:
---

Github user xiaozhongwang commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1413#discussion_r164020994
  
--- Diff: install/traf_checkset_env.sh ---
@@ -244,11 +244,14 @@ if [ ! -d ${javapath} ]; then
  exit 1
 fi
 
+source $HOME/.bashrc
--- End diff --

Before, we had to prepare the environment step by step following this document: 
[https://cwiki.apache.org/confluence/display/TRAFODION/Create+Build+Environment]
I only wrote those steps into one command. 
Yes, I agree this is not a good way either.


> Prepare the build environment with one command
> --
>
> Key: TRAFODION-2802
> URL: https://issues.apache.org/jira/browse/TRAFODION-2802
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: Build Infrastructure
>Affects Versions: any
> Environment: Red Hat and CentOS first
>Reporter: xiaozhong.wang
>Priority: Major
> Fix For: any
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A newbie can't create the build environment without a hitch.
> Although there is a script traf_tools_setup.sh, there is a lot of preparation 
> needed before that.
> A script that can create the build environment with one command would be very 
> useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2917) Refactor Trafodion implementation of hdfs scan for text formatted hive tables

2018-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341394#comment-16341394
 ] 

ASF GitHub Bot commented on TRAFODION-2917:
---

Github user eowhadi commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1417#discussion_r164186395
  
--- Diff: core/sql/executor/ExFastTransport.h ---
@@ -407,6 +408,7 @@ class ExHdfsFastExtractTcb : public ExFastExtractTcb
   
   NABoolean isSequenceFile();
   void createSequenceFileError(Int32 sfwRetCode);
+  void createHdfsClientFileError(Int32 sfwRetCode);
--- End diff --

should be hdfsClientRetCode instead of sfwRetCode



> Refactor Trafodion implementation of hdfs scan for text formatted hive tables
> -
>
> Key: TRAFODION-2917
> URL: https://issues.apache.org/jira/browse/TRAFODION-2917
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: Selvaganesan Govindarajan
>Priority: Major
> Fix For: 2.3
>
>
> Find below the general outline of hdfs scan for text formatted hive tables.
> The compiler returns a list of scan ranges, and the begin range and number of 
> ranges to be done by each instance of TCB, in the TDB. This list of scan 
> ranges is also re-computed at run time, possibly based on a CQD.
> The scan ranges for a TCB can come from the same or different hdfs files. The 
> TCB creates two threads to read these ranges. Two ranges (for the TCB) are 
> initially assigned to these threads. As and when a range is completed, the 
> next range (assigned for the TCB) is picked up by the thread. Ranges are read 
> in multiples of the hdfs scan buffer size at the TCB level. The default hdfs 
> scan buffer size is 64 MB. Rows from the hdfs scan buffer are processed and 
> moved into the up queue. If the range contains a record split, then the range 
> is extended to read up to the range tail IO size to get the full row. The 
> range that has the latter part of the row ignores it because the former range 
> processes it. A record split at the file level is not possible and/or not 
> supported.
>  For compression, the compiler returns the range info such that the hdfs scan 
> buffer can hold the full uncompressed buffer.
>  Cons:
> The reader threads feature is too complex to maintain in C++.
> Error handling at the layer below the TCB is missing, or errors are not 
> propagated to the work method, causing incorrect results.
> Possible multiple copying of data.
> Libhdfs calls are not optimized. It was observed that the method Ids are 
> being obtained many times. Need to check if this problem still exists.
> Now that we clearly know what is expected, it could be optimized better:
>   - Reduced scan buffer size for smoother data flow
>   - Better thread utilization
>   - Avoid multiple copying of data.
> Unable to comprehend the need for two threads for pre-fetch, especially when 
> one range is completed fully before the data from the next range is processed.
>  Following are the hdfs calls used by programs in the exp and executor 
> directories.
>   U hdfsCloseFile
>  U hdfsConnect
>  U hdfsDelete
>  U hdfsExists
>  U hdfsFlush
>  U hdfsFreeFileInfo
>  U hdfsGetPathInfo
>  U hdfsListDirectory
>  U hdfsOpenFile
>  U hdfsPread
>  U hdfsRename
>  U hdfsWrite
>  U hdfsCreateDirectory
>  New implementation
>  Make changes to use direct Java APIs for these calls. However, come up with a 
> better mechanism to move the data between Java and JNI, avoid unnecessary 
> copying of data, and get better thread management via Executor concepts in 
> Java. Hence it won’t be a direct mapping of these calls to the hdfs Java API. 
> Instead, use an abstraction like what is being done for HBase access.
>  I believe the newer implementation will be better optimized and hence improve 
> performance (but not by many fold).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2917) Refactor Trafodion implementation of hdfs scan for text formatted hive tables

2018-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341414#comment-16341414
 ] 

ASF GitHub Bot commented on TRAFODION-2917:
---

Github user eowhadi commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1417#discussion_r164189696
  
--- Diff: core/sql/executor/ExHbaseAccess.cpp ---
@@ -502,6 +506,8 @@ void ExHbaseAccessTcb::freeResources()
  NADELETEBASIC(directRowBuffer_, getHeap());
   if (colVal_.val != NULL)
  NADELETEBASIC(colVal_.val, getHeap());
+  if (hdfsClient_ != NULL) 
+ NADELETE(hdfsClient_, HdfsClient, getHeap());
 }
--- End diff --

I am wondering if there is a need to delete the newly introduced 
loggingFileName_ here?
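
If loggingFileName_ is a buffer allocated from the TCB's heap (an assumption), 
the cleanup would mirror the hdfsClient_ deletion in the quoted hunk. A 
standalone mock, with std::free standing in for NADELETEBASIC(..., getHeap()):

```
#include <cstdlib>

class TcbSketch
{
public:
  ~TcbSketch() { freeResources(); }
  void freeResources()
  {
    if (loggingFileName_ != nullptr)
      {
        std::free(loggingFileName_);   // stands in for NADELETEBASIC(..., getHeap())
        loggingFileName_ = nullptr;    // guard against a second freeResources() call
      }
  }
private:
  char *loggingFileName_ = nullptr;
};
```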


> Refactor Trafodion implementation of hdfs scan for text formatted hive tables
> -
>
> Key: TRAFODION-2917
> URL: https://issues.apache.org/jira/browse/TRAFODION-2917
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: Selvaganesan Govindarajan
>Priority: Major
> Fix For: 2.3
>
>
> Find below the general outline of hdfs scan for text formatted hive tables.
> The compiler returns a list of scan ranges, and the begin range and number of 
> ranges to be done by each instance of TCB, in the TDB. This list of scan 
> ranges is also re-computed at run time, possibly based on a CQD.
> The scan ranges for a TCB can come from the same or different hdfs files. The 
> TCB creates two threads to read these ranges. Two ranges (for the TCB) are 
> initially assigned to these threads. As and when a range is completed, the 
> next range (assigned for the TCB) is picked up by the thread. Ranges are read 
> in multiples of the hdfs scan buffer size at the TCB level. The default hdfs 
> scan buffer size is 64 MB. Rows from the hdfs scan buffer are processed and 
> moved into the up queue. If the range contains a record split, then the range 
> is extended to read up to the range tail IO size to get the full row. The 
> range that has the latter part of the row ignores it because the former range 
> processes it. A record split at the file level is not possible and/or not 
> supported.
>  For compression, the compiler returns the range info such that the hdfs scan 
> buffer can hold the full uncompressed buffer.
>  Cons:
> The reader threads feature is too complex to maintain in C++.
> Error handling at the layer below the TCB is missing, or errors are not 
> propagated to the work method, causing incorrect results.
> Possible multiple copying of data.
> Libhdfs calls are not optimized. It was observed that the method Ids are 
> being obtained many times. Need to check if this problem still exists.
> Now that we clearly know what is expected, it could be optimized better:
>   - Reduced scan buffer size for smoother data flow
>   - Better thread utilization
>   - Avoid multiple copying of data.
> Unable to comprehend the need for two threads for pre-fetch, especially when 
> one range is completed fully before the data from the next range is processed.
>  Following are the hdfs calls used by programs in the exp and executor 
> directories.
>   U hdfsCloseFile
>  U hdfsConnect
>  U hdfsDelete
>  U hdfsExists
>  U hdfsFlush
>  U hdfsFreeFileInfo
>  U hdfsGetPathInfo
>  U hdfsListDirectory
>  U hdfsOpenFile
>  U hdfsPread
>  U hdfsRename
>  U hdfsWrite
>  U hdfsCreateDirectory
>  New implementation
>  Make changes to use direct Java APIs for these calls. However, come up with a 
> better mechanism to move the data between Java and JNI, avoid unnecessary 
> copying of data, and get better thread management via Executor concepts in 
> Java. Hence it won’t be a direct mapping of these calls to the hdfs Java API. 
> Instead, use an abstraction like what is being done for HBase access.
>  I believe the newer implementation will be better optimized and hence improve 
> performance (but not by many fold).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2910) Add Details for Load Statement 4 and Correct Typos for *Trafodion SQL Reference Manual*

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329526#comment-16329526
 ] 

ASF GitHub Bot commented on TRAFODION-2910:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1400#discussion_r162189054
  
--- Diff: docs/sql_reference/src/asciidoc/_chapters/olap_functions.adoc ---
@@ -260,7 +260,7 @@ and <>.
 
 * `_inline-window-specification_`
 +
-specifies_the_window_over_which_the_avg_is_computed. The
+specifies the window over which the avg is computed. The
--- End diff --

It would be better to spell out "avg" (as "average"). Or if we are 
referring to the AVG syntax, capitalize it.


> Add Details for Load Statement 4 and Correct Typos for *Trafodion SQL 
> Reference Manual* 
> 
>
> Key: TRAFODION-2910
> URL: https://issues.apache.org/jira/browse/TRAFODION-2910
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2909) Add ROLLUP Function for *Trafodion SQL Reference Manual*

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329569#comment-16329569
 ] 

ASF GitHub Bot commented on TRAFODION-2909:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1399#discussion_r162192952
  
--- Diff: 
docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc ---
@@ -6337,6 +6337,300 @@ UPDATE persnl.job
 SET jobdesc = RIGHT (jobdesc, 12);
 ```
 
+<<<
+[[rollup_function]]
+== ROLLUP Function
+
+The ROLLUP function calculates multiple levels of subtotals aggregating 
from right to left through the comma-separated list of columns, and provides a 
grand total. It is a an extension to the `GROUP BY` clause and can be used with 
`ORDER BY` to sort the results.
+
+```
+SELECT…GROUP BY ROLLUP (column 1, [column 2,]…[column n])
+```
+
+ROLLUP generates n+1 levels of subtotals and grand total, where n is the 
number of the selected column(s).
+
+For example, a query that contains three rollup columns returns the 
following rows:
+
+* First-level: stand aggregate values calculated by GROUP BY clause 
without using ROLLUP.
--- End diff --

Possible wordsmith: "the usual aggregate values as calculated..."


> Add ROLLUP Function for *Trafodion SQL Reference Manual* 
> -
>
> Key: TRAFODION-2909
> URL: https://issues.apache.org/jira/browse/TRAFODION-2909
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2909) Add ROLLUP Function for *Trafodion SQL Reference Manual*

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329568#comment-16329568
 ] 

ASF GitHub Bot commented on TRAFODION-2909:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1399#discussion_r162192522
  
--- Diff: 
docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc ---
@@ -6337,6 +6337,300 @@ UPDATE persnl.job
 SET jobdesc = RIGHT (jobdesc, 12);
 ```
 
+<<<
+[[rollup_function]]
+== ROLLUP Function
+
+The ROLLUP function calculates multiple levels of subtotals aggregating 
from right to left through the comma-separated list of columns, and provides a 
grand total. It is a an extension to the `GROUP BY` clause and can be used with 
`ORDER BY` to sort the results.
--- End diff --

There is no "ORDER BY ROLLUP" syntax. But it looks like there are functions 
such as GROUPING that can refer to whether a column is used as a grouping 
column in a rollup result row, so one can order the detail vs. the summary 
rows. There's also a small typo here "...is a an..." Possible wordsmith for the 
last sentence: "It is an extension to the 'GROUP BY' clause. Related features 
such as the GROUPING function can be used with 'ORDER BY' to control the 
placement of summary results."


> Add ROLLUP Function for *Trafodion SQL Reference Manual* 
> -
>
> Key: TRAFODION-2909
> URL: https://issues.apache.org/jira/browse/TRAFODION-2909
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2909) Add ROLLUP Function for *Trafodion SQL Reference Manual*

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328768#comment-16328768
 ] 

ASF GitHub Bot commented on TRAFODION-2909:
---

Github user traflm commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1399#discussion_r162050572
  
--- Diff: 
docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc ---
@@ -6337,6 +6337,300 @@ UPDATE persnl.job
 SET jobdesc = RIGHT (jobdesc, 12);
 ```
 
+<<<
+[[rollup_function]]
--- End diff --

I think this is not a function but a clause, so I suggest moving this chapter 
into chapter 6. I am not sure about this; @DaveBirdsall, what do you think?


> Add ROLLUP Function for *Trafodion SQL Reference Manual* 
> -
>
> Key: TRAFODION-2909
> URL: https://issues.apache.org/jira/browse/TRAFODION-2909
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2909) Add ROLLUP Function for *Trafodion SQL Reference Manual*

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328773#comment-16328773
 ] 

ASF GitHub Bot commented on TRAFODION-2909:
---

Github user traflm commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1399#discussion_r162052172
  
--- Diff: 
docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc ---
@@ -6337,6 +6337,300 @@ UPDATE persnl.job
 SET jobdesc = RIGHT (jobdesc, 12);
 ```
 
+<<<
+[[rollup_function]]
+== ROLLUP Function
+
+The ROLLUP function calculates multiple levels of subtotals aggregating 
from right to left through the comma-separated list of columns, and provides a 
grand total. It is a an extension to the `GROUP BY` clause and can be used with 
`ORDER BY` to sort the results.
+
+```
+SELECT…GROUP BY ROLLUP (column 1, [column 2,]…[column n])
+```
+
+ROLLUP generates n+1 levels of subtotals and grand total, where n is the 
number of the selected column(s).
+
+For example, a query that contains three rollup columns returns the 
following rows:
+
+* First-level: stand aggregate values calculated by GROUP BY clause 
without using ROLLUP.
+* Second-level: subtotals aggregating across column 3 for each combination 
of column 1 and column 2.
+* Third-level: subtotals aggregating across column 2 and column 3 for each 
column 1.
+* Fourth-level: the grand total row.
+
+NOTE: Trafodion does not support CUBE function which works slightly 
differently from ROLLUP.
+
+[[considerations_for_rollup]]
+=== Considerations for ROLLUP
+
+[[null_in_result_sets]]
+ NULL in Result Sets
+
+* The NULLs in each super-aggregate row represent subtotals and grand 
total.
+* The NULLs in selected columns are considered equal and sorted into one 
NULL group in result sets.
+
+[[using_rollup_with_the_column_order_reversed]]
+ Using ROLLUP with the Column Order Reversed
+
+ROLLUP removes the right-most column at each step, therefore the result 
sets vary with the column order specified in the comma-separated list. 
+
+[cols="50%,50%"]
+|===
+| If the column order is _country_, _state_, _city_ and _name_, ROLLUP 
returns following groupings. 
+| If the column order is _name_, _city_, _state_ and _country_, ROLLUP 
returns following groupings.
+| _country_, _state_, _city_ and _name_  | _name_, _city_, _state_ and 
_country_
+| _country_, _state_ and _city_  | _name_, _city_ and _state_
+| _country_ and _state_  | _name_ and _city_
+| _country_  | _name_
+| grand total| grand total
+|===
+
+[[examples_of_rollup]]
+=== Examples of ROLLUP
+
+[[examples_of_grouping_by_one_or_multiple_rollup_columns]]
+ Examples of Grouping By One or Multiple Rollup Columns
+
+Suppose that we have a _sales1_ table like this:
+
+```
+SELECT * FROM sales1;
+
+DELIVERY_YEAR REGION PRODUCT  REVENUE
+- --  ---
+ 2016 A  Dress100
+ 2016 A  Dress200
+ 2016 A  Pullover 300
+ 2016 B  Dress400
+ 2017 A  Pullover 500
+ 2017 B  Dress600
+ 2017 B  Pullover 700
+ 2017 B  Pullover 800
+
+--- 8 row(s) selected.
+```
+
+* This is an example of grouping by one rollup column.
++
+```
+SELECT delivery_year, SUM (revenue) AS total_revenue 
+FROM sales1
+GROUP BY ROLLUP (delivery_year);
+```
+
++
+```
+DELIVERY_YEAR TOTAL_REVENUE   
+- 
+ 2016 1000
+ 2017 2600
+ NULL 3600
+
+--- 3 row(s) selected.
+```
+
+* This is an example of grouping by two rollup columns.
++ 
+ROLLUP firstly aggregates at the lowest level (_region_) and then rollup 
those aggregations to the next
+level (_delivery_year_), finally it produces a grand total across these 
two levels.
+
++
+```
+SELECT delivery_year, region, SUM (revenue) AS total_revenue 
+FROM sales1
+GROUP BY ROLLUP (delivery_year, region);
+```
+
++
+```
+DELIVERY_YEAR REGION TOTAL_REVENUE   
+- -- 
+ 2016 A  

[jira] [Commented] (TRAFODION-2891) fix the bufoverrun Critical error checked by TScanCode

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329822#comment-16329822
 ] 

ASF GitHub Bot commented on TRAFODION-2891:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1394#discussion_r162226893
  
--- Diff: core/conn/odb/src/odb.c ---
@@ -5313,7 +5313,7 @@ static void etabadd(char type, char *run, int id)
 }
 }
 if ( etab[no].type == 'e' ) { /* name & create output file 
*/
-for ( i = j = 0; etab[no].tgt[i] && i < sizeof(buff); 
i++ ) {
+for ( i = j = 0; i < sizeof(buff) && etab[no].tgt[i]; 
i++ ) {
--- End diff --

I tried to make sense of this too. It looks like etab[no].tgt is populated 
in function parseopt from some string buffer; I can't tell if it is buff (which 
is a static variable declared as a char[256]) or something else. My suspicion 
is that the test "i < sizeof(buff)" has nothing to do with etab[no].tgt but 
rather some other buffer being copied to. In any case, reversing the order of 
these tests seems harmless.
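
A minimal sketch of why the operand order matters (names are illustrative, not 
odb.c's actual variables): with short-circuit evaluation, putting the bounds 
test first guarantees the index is validated before either buffer is touched.

```
#include <cstddef>

static size_t copyBounded(char (&buff)[256], const char *tgt)
{
  size_t i;
  // i < sizeof(buff) is evaluated before tgt[i] is read or buff[i] written.
  for (i = 0; i < sizeof(buff) && tgt[i]; i++)
    buff[i] = tgt[i];
  return i;                            // bytes copied, at most 256
}
```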


> fix the bufoverrun Critical error checked by TScanCode
> --
>
> Key: TRAFODION-2891
> URL: https://issues.apache.org/jira/browse/TRAFODION-2891
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: xiaozhong.wang
>Priority: Major
> Attachments: Critical_trafodion_tscancode_codecheck.xml
>
>
> Accessing the buffer overruns it; if the buffer is at the end of memory, this 
> will cause a core dump. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2909) Add ROLLUP Function for *Trafodion SQL Reference Manual*

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329906#comment-16329906
 ] 

ASF GitHub Bot commented on TRAFODION-2909:
---

Github user liuyu000 commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1399#discussion_r162237017
  
--- Diff: 
docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc ---
@@ -6337,6 +6337,300 @@ UPDATE persnl.job
 SET jobdesc = RIGHT (jobdesc, 12);
 ```
 
+<<<
+[[rollup_function]]
+== ROLLUP Function
+
+The ROLLUP function calculates multiple levels of subtotals aggregating 
from right to left through the comma-separated list of columns, and provides a 
grand total. It is a an extension to the `GROUP BY` clause and can be used with 
`ORDER BY` to sort the results.
+
+```
+SELECT…GROUP BY ROLLUP (column 1, [column 2,]…[column n])
+```
+
+ROLLUP generates n+1 levels of subtotals and grand total, where n is the 
number of the selected column(s).
--- End diff --

Thanks Dave, your comment has been incorporated :)


> Add ROLLUP Function for *Trafodion SQL Reference Manual* 
> -
>
> Key: TRAFODION-2909
> URL: https://issues.apache.org/jira/browse/TRAFODION-2909
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2909) Add ROLLUP Function for *Trafodion SQL Reference Manual*

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329921#comment-16329921
 ] 

ASF GitHub Bot commented on TRAFODION-2909:
---

Github user liuyu000 commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1399#discussion_r162238300
  
--- Diff: 
docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc ---
@@ -6337,6 +6337,300 @@ UPDATE persnl.job
 SET jobdesc = RIGHT (jobdesc, 12);
 ```
 
+<<<
+[[rollup_function]]
+== ROLLUP Function
+
+The ROLLUP function calculates multiple levels of subtotals aggregating 
from right to left through the comma-separated list of columns, and provides a 
grand total. It is a an extension to the `GROUP BY` clause and can be used with 
`ORDER BY` to sort the results.
+
+```
+SELECT…GROUP BY ROLLUP (column 1, [column 2,]…[column n])
+```
+
+ROLLUP generates n+1 levels of subtotals and grand total, where n is the 
number of the selected column(s).
+
+For example, a query that contains three rollup columns returns the 
following rows:
+
+* First-level: stand aggregate values calculated by GROUP BY clause 
without using ROLLUP.
+* Second-level: subtotals aggregating across column 3 for each combination 
of column 1 and column 2.
+* Third-level: subtotals aggregating across column 2 and column 3 for each 
column 1.
+* Fourth-level: the grand total row.
+
+NOTE: Trafodion does not support CUBE function which works slightly 
differently from ROLLUP.
+
+[[considerations_for_rollup]]
+=== Considerations for ROLLUP
+
+[[null_in_result_sets]]
+ NULL in Result Sets
+
+* The NULLs in each super-aggregate row represent subtotals and grand 
total.
+* The NULLs in selected columns are considered equal and sorted into one 
NULL group in result sets.
+
+[[using_rollup_with_the_column_order_reversed]]
+ Using ROLLUP with the Column Order Reversed
+
+ROLLUP removes the right-most column at each step, therefore the result 
sets vary with the column order specified in the comma-separated list. 
+
+[cols="50%,50%"]
+|===
+| If the column order is _country_, _state_, _city_ and _name_, ROLLUP 
returns following groupings. 
+| If the column order is _name_, _city_, _state_ and _country_, ROLLUP 
returns following groupings.
+| _country_, _state_, _city_ and _name_  | _name_, _city_, _state_ and 
_country_
+| _country_, _state_ and _city_  | _name_, _city_ and _state_
+| _country_ and _state_  | _name_ and _city_
+| _country_  | _name_
+| grand total| grand total
+|===
+
+[[examples_of_rollup]]
+=== Examples of ROLLUP
+
+[[examples_of_grouping_by_one_or_multiple_rollup_columns]]
+ Examples of Grouping By One or Multiple Rollup Columns
+
+Suppose that we have a _sales1_ table like this:
+
+```
+SELECT * FROM sales1;
+
+DELIVERY_YEAR REGION PRODUCT  REVENUE
+- --  ---
+ 2016 A  Dress100
+ 2016 A  Dress200
+ 2016 A  Pullover 300
+ 2016 B  Dress400
+ 2017 A  Pullover 500
+ 2017 B  Dress600
+ 2017 B  Pullover 700
+ 2017 B  Pullover 800
+
+--- 8 row(s) selected.
+```
+
+* This is an example of grouping by one rollup column.
++
+```
+SELECT delivery_year, SUM (revenue) AS total_revenue 
+FROM sales1
+GROUP BY ROLLUP (delivery_year);
+```
+
++
+```
+DELIVERY_YEAR TOTAL_REVENUE   
+- 
+ 2016 1000
+ 2017 2600
+ NULL 3600
+
+--- 3 row(s) selected.
+```
+
+* This is an example of grouping by two rollup columns.
++ 
+ROLLUP firstly aggregates at the lowest level (_region_) and then rollup 
those aggregations to the next
+level (_delivery_year_), finally it produces a grand total across these 
two levels.
+
++
+```
+SELECT delivery_year, region, SUM (revenue) AS total_revenue 
+FROM sales1
+GROUP BY ROLLUP (delivery_year, region);
+```
+
++
+```
+DELIVERY_YEAR REGION TOTAL_REVENUE   
+- -- 
+ 2016 

[jira] [Commented] (TRAFODION-2909) Add ROLLUP Function for *Trafodion SQL Reference Manual*

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329908#comment-16329908
 ] 

ASF GitHub Bot commented on TRAFODION-2909:
---

Github user liuyu000 commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1399#discussion_r162237087
  
--- Diff: 
docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc ---
@@ -6337,6 +6337,300 @@ UPDATE persnl.job
 SET jobdesc = RIGHT (jobdesc, 12);
 ```
 
+<<<
+[[rollup_function]]
+== ROLLUP Function
+
+The ROLLUP function calculates multiple levels of subtotals aggregating 
from right to left through the comma-separated list of columns, and provides a 
grand total. It is a an extension to the `GROUP BY` clause and can be used with 
`ORDER BY` to sort the results.
+
+```
+SELECT…GROUP BY ROLLUP (column 1, [column 2,]…[column n])
+```
+
+ROLLUP generates n+1 levels of subtotals and grand total, where n is the 
number of the selected column(s).
+
+For example, a query that contains three rollup columns returns the 
following rows:
+
+* First-level: stand aggregate values calculated by GROUP BY clause 
without using ROLLUP.
--- End diff --

Thanks Dave, your comment has been incorporated :)


> Add ROLLUP Function for *Trafodion SQL Reference Manual* 
> -
>
> Key: TRAFODION-2909
> URL: https://issues.apache.org/jira/browse/TRAFODION-2909
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2909) Add ROLLUP Function for *Trafodion SQL Reference Manual*

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329918#comment-16329918
 ] 

ASF GitHub Bot commented on TRAFODION-2909:
---

Github user liuyu000 commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1399#discussion_r162238124
  
--- Diff: 
docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc ---
@@ -6337,6 +6337,300 @@ UPDATE persnl.job
 SET jobdesc = RIGHT (jobdesc, 12);
 ```
 
+<<<
+[[rollup_function]]
+== ROLLUP Function
+
+The ROLLUP function calculates multiple levels of subtotals aggregating 
from right to left through the comma-separated list of columns, and provides a 
grand total. It is a an extension to the `GROUP BY` clause and can be used with 
`ORDER BY` to sort the results.
+
+```
+SELECT…GROUP BY ROLLUP (column 1, [column 2,]…[column n])
+```
+
+ROLLUP generates n+1 levels of subtotals and grand total, where n is the 
number of the selected column(s).
+
+For example, a query that contains three rollup columns returns the 
following rows:
+
+* First-level: stand aggregate values calculated by GROUP BY clause 
without using ROLLUP.
+* Second-level: subtotals aggregating across column 3 for each combination 
of column 1 and column 2.
+* Third-level: subtotals aggregating across column 2 and column 3 for each 
column 1.
+* Fourth-level: the grand total row.
+
+NOTE: Trafodion does not support CUBE function which works slightly 
differently from ROLLUP.
+
+[[considerations_for_rollup]]
+=== Considerations for ROLLUP
+
+[[null_in_result_sets]]
+ NULL in Result Sets
+
+* The NULLs in each super-aggregate row represent subtotals and grand 
total.
--- End diff --

Thanks Dave, your comment has been incorporated :)


> Add ROLLUP Function for *Trafodion SQL Reference Manual* 
> -
>
> Key: TRAFODION-2909
> URL: https://issues.apache.org/jira/browse/TRAFODION-2909
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2909) Add ROLLUP Function for *Trafodion SQL Reference Manual*

2018-01-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329920#comment-16329920
 ] 

ASF GitHub Bot commented on TRAFODION-2909:
---

Github user liuyu000 commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1399#discussion_r162238258
  
--- Diff: 
docs/sql_reference/src/asciidoc/_chapters/sql_functions_and_expressions.adoc ---
@@ -6337,6 +6337,300 @@ UPDATE persnl.job
 SET jobdesc = RIGHT (jobdesc, 12);
 ```
 
+<<<
+[[rollup_function]]
+== ROLLUP Function
+
+The ROLLUP function calculates multiple levels of subtotals aggregating 
from right to left through the comma-separated list of columns, and provides a 
grand total. It is an extension to the `GROUP BY` clause and can be used with 
`ORDER BY` to sort the results.
+
+```
+SELECT…GROUP BY ROLLUP (column 1, [column 2,]…[column n])
+```
+
+ROLLUP generates n+1 levels of subtotals, including the grand total, where n 
is the number of selected columns.
+
+For example, a query that contains three rollup columns returns the 
following rows:
+
+* First-level: standard aggregate values calculated by the GROUP BY clause 
without using ROLLUP.
+* Second-level: subtotals aggregating across column 3 for each combination 
of column 1 and column 2.
+* Third-level: subtotals aggregating across column 2 and column 3 for each 
column 1.
+* Fourth-level: the grand total row.
+
+NOTE: Trafodion does not support the CUBE function, which works slightly 
differently from ROLLUP.
+
+[[considerations_for_rollup]]
+=== Considerations for ROLLUP
+
+[[null_in_result_sets]]
+ NULL in Result Sets
+
+* The NULLs in each super-aggregate row represent subtotals and the grand 
total.
+* The NULLs in selected columns are considered equal and sorted into one 
NULL group in result sets.
+
+[[using_rollup_with_the_column_order_reversed]]
+ Using ROLLUP with the Column Order Reversed
+
+ROLLUP removes the right-most column at each step; therefore, the result 
sets vary with the column order specified in the comma-separated list. 
+
+[cols="50%,50%"]
+|===
+| If the column order is _country_, _state_, _city_ and _name_, ROLLUP 
returns the following groupings. 
+| If the column order is _name_, _city_, _state_ and _country_, ROLLUP 
returns the following groupings.
+| _country_, _state_, _city_ and _name_  | _name_, _city_, _state_ and 
_country_
+| _country_, _state_ and _city_  | _name_, _city_ and _state_
+| _country_ and _state_  | _name_ and _city_
+| _country_  | _name_
+| grand total| grand total
+|===
+
+[[examples_of_rollup]]
+=== Examples of ROLLUP
+
+[[examples_of_grouping_by_one_or_multiple_rollup_columns]]
+ Examples of Grouping By One or Multiple Rollup Columns
+
+Suppose that we have a _sales1_ table like this:
+
+```
+SELECT * FROM sales1;
+
+DELIVERY_YEAR REGION PRODUCT  REVENUE
+------------- ------ -------- -------
+         2016 A      Dress        100
+         2016 A      Dress        200
+         2016 A      Pullover     300
+         2016 B      Dress        400
+         2017 A      Pullover     500
+         2017 B      Dress        600
+         2017 B      Pullover     700
+         2017 B      Pullover     800
+
+--- 8 row(s) selected.
+```
+
+* This is an example of grouping by one rollup column.
++
+```
+SELECT delivery_year, SUM (revenue) AS total_revenue 
+FROM sales1
+GROUP BY ROLLUP (delivery_year);
+```
+
++
+```
+DELIVERY_YEAR TOTAL_REVENUE
+------------- -------------
+         2016          1000
+         2017          2600
+         NULL          3600
+
+--- 3 row(s) selected.
+```
+
+* This is an example of grouping by two rollup columns.
++ 
+ROLLUP first aggregates at the lowest level (_region_) and then rolls up 
those aggregations to the next
--- End diff --

Thanks Dave, your comment has been incorporated :)


> Add ROLLUP Function for *Trafodion SQL Reference Manual* 
> -
>
> Key: TRAFODION-2909
> URL: https://issues.apache.org/jira/browse/TRAFODION-2909
> Project: Apache Trafodion
>  Issue Type: 

[jira] [Commented] (TRAFODION-2840) ORDER BY clause on a view circumvents [first n] updatability check

2018-01-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338480#comment-16338480
 ] 

ASF GitHub Bot commented on TRAFODION-2840:
---

GitHub user DaveBirdsall opened a pull request:

https://github.com/apache/trafodion/pull/1414

[TRAFODION-2840] Make [first n] with ORDER BY views non-updatable

JIRA TRAFODION-2822 attempted to make views containing [first n] in the 
select list non-updatable. It missed one case, the case where both [first n] 
and ORDER BY are present in the view definition.

It missed that case because of a design oddity in how [first n] is 
implemented. When there is no ORDER BY clause, the Binder pass introduces 
a FirstN node into the query tree. But if an ORDER BY is present, it defers 
doing this to the Generator. That means any FirstN-related semantic checks in 
earlier passes will not be applied when ORDER BY is present.

This set of changes refactors the FirstN implementation a little bit so 
that the FirstN can be introduced in the Binder when ORDER BY is present. Then 
the check added in JIRA TRAFODION-2822 will apply, and such views will be 
marked non-updatable as desired.

The refactored design copies a bound and transformed version of the ORDER 
BY clause into the FirstN node at Normalize time. (Aside: I tried copying the 
unbound ORDER BY tree into the FirstN node at Bind time initially, but this 
fails if there are expressions in the select list referenced by the ORDER BY; 
different ValueIds get assigned to parts of the expressions than in the 
original resulting in expression coverage check failures at Optimize time.)
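
To make the refactoring concrete, here is a minimal sketch of the bind-time 
idea, using hypothetical simplified types (RelExpr, FirstN, and bindFirstN 
below are stand-ins, not the actual Trafodion classes): the [first n] wrapper 
is introduced in the Binder whether or not an ORDER BY is present, so 
binder-level semantic checks always see it.

```cpp
// Hypothetical simplified types; the real Trafodion classes differ.
#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

struct RelExpr {                        // stand-in for a query-tree node
  std::string name;
  std::shared_ptr<RelExpr> child;
  explicit RelExpr(std::string n, std::shared_ptr<RelExpr> c = nullptr)
      : name(std::move(n)), child(std::move(c)) {}
};

struct FirstN : RelExpr {               // stand-in for the FirstN operator
  long rows;                            // the [first n] row count
  std::vector<std::string> reqdOrder;   // ORDER BY key, copied at Normalize
  FirstN(long n, std::shared_ptr<RelExpr> c)
      : RelExpr("FirstN", std::move(c)), rows(n) {}
};

// New behavior sketched here: always wrap at bind time when [first n] is
// present, instead of deferring to the Generator when an ORDER BY exists.
std::shared_ptr<RelExpr> bindFirstN(std::shared_ptr<RelExpr> tree, long n) {
  if (n < 0) return tree;               // no [first n] clause
  return std::make_shared<FirstN>(n, std::move(tree));
}

int main() {
  auto tree = bindFirstN(std::make_shared<RelExpr>("Scan(t1)"), 5);
  for (auto p = tree; p; p = p->child)  // prints: FirstN, then Scan(t1)
    std::cout << p->name << '\n';
}
```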

The refactored design has been implemented for all code paths that are 
involved in view definition, which is all that is required to satisfy this 
JIRA. However, there is another code path that hasn't been addressed: if a 
query uses output rowsets and has [first n] + ORDER BY, then the FirstN is 
still added in the Generator. I took a look at fixing this one too, but it proved 
more tedious than I wanted to tackle just now. A follow-on JIRA, 
TRAFODION-2924, has been written to track that case.

In addition, I noticed some code paths that use [first n] today that do not 
work correctly. The parser supports [first n] syntax on UPDATE and DELETE 
statements. But I discovered (even before my changes) that UPDATE ignores its 
[first n] specification. And if [first n] is specified for DELETE, the query 
fails with a compiler error 2235 (cannot produce plan). So those evidently are 
incomplete features, or features that broke sometime in the past and no-one 
noticed until now.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/DaveBirdsall/trafodion Trafodion2840

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1414.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1414


commit ebf7283de994729f898fa5a9f5e476fa03b40a4f
Author: Dave Birdsall 
Date:   2018-01-25T00:16:08Z

[TRAFODION-2840] Make [first n] with ORDER BY views non-updatable




> ORDER BY clause on a view circumvents [first n] updatability check
> --
>
> Key: TRAFODION-2840
> URL: https://issues.apache.org/jira/browse/TRAFODION-2840
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> The following script fails:
> >>create table t1 (a int not null, b int, primary key (a));
> --- SQL operation complete.
> >>
> >>insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5),(6,6);
> --- 6 row(s) inserted.
> >>
> >>create view v1 as select [first 5] * from t1 order by a;
> --- SQL operation complete.
> >>
> >>create view v2 as select [first 5] * from t1;
> --- SQL operation complete.
> >>
> >>update v1 set b = 6;
> --- 6 row(s) updated.
> >> -- should fail; v1 should be non-updatable
> >>
> >>update v2 set b = 7;
> *** ERROR[4028] Table or view TRAFODION.SEABASE.V2 is not updatable.
> *** ERROR[8822] The statement was not prepared.
> >>-- does fail; v2 is non-updatable (correctly)
> >>
> It seems the presence of the ORDER BY clause in the view definition 
> circumvents the [first n] updatability check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2840) ORDER BY clause on a view circumvents [first n] updatability check

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339472#comment-16339472
 ] 

ASF GitHub Bot commented on TRAFODION-2840:
---

Github user zellerh commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1414#discussion_r163898763
  
--- Diff: core/sql/optimizer/OptPhysRelExpr.cpp ---
@@ -15499,6 +15499,95 @@ GenericUtilExpr::synthPhysicalProperty(const 
Context* myContext,
   return sppForMe;
 } //  GenericUtilExpr::synthPhysicalProperty()
 
+// ---
+// FirstN::createContextForAChild()
+// The FirstN node may have an order by requirement that it needs to
+// pass to its child context. Other than that, this method is quite
+// similar to the default implementation, RelExpr::createContextForAChild.
+// The arity of FirstN is always 1, so some logic from the default
+// implementation that deals with childIndex > 0 is unnecessary and has
+// been removed.
+// ---
+Context * FirstN::createContextForAChild(Context* myContext,
+ PlanWorkSpace* pws,
+ Lng32& childIndex)
+{
+  const ReqdPhysicalProperty* rppForMe =
+myContext->getReqdPhysicalProperty();
+
+  CMPASSERT(getArity() == 1);
+
+  childIndex = getArity() - pws->getCountOfChildContexts() - 1;
+
+  // return if we are done
+  if (childIndex < 0)
+return NULL;
+
+  RequirementGenerator rg(child(childIndex), rppForMe);
+
+  if (reqdOrder().entries() > 0)
+{
+  // replace our sort requirement with that implied by our ORDER BY 
clause
+
+  rg.removeSortKey();
+
+  ValueIdList sortKey;
+  sortKey.insert(reqdOrder());
--- End diff --

Just a nit: it isn't really necessary to make a copy of the required order 
here; couldn't reqdOrder() be passed directly to the rg.addSortKey method below?
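
As a generic illustration of the nit (the names below are stand-ins, not the 
actual Trafodion signatures): when the callee takes its key list by const 
reference, the caller can skip the intermediate copy.

```cpp
#include <string>
#include <vector>

using ValueIdList = std::vector<std::string>;   // hypothetical stand-in

void addSortKey(const ValueIdList& key) { (void)key; /* record requirement */ }

void withCopy(const ValueIdList& reqdOrder) {
  ValueIdList sortKey;                          // extra copy of the key list
  sortKey.insert(sortKey.end(), reqdOrder.begin(), reqdOrder.end());
  addSortKey(sortKey);
}

void withoutCopy(const ValueIdList& reqdOrder) {
  addSortKey(reqdOrder);                        // pass the list directly
}

int main() {
  ValueIdList order{"a", "b"};
  withCopy(order);
  withoutCopy(order);
}
```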


> ORDER BY clause on a view circumvents [first n] updatability check
> --
>
> Key: TRAFODION-2840
> URL: https://issues.apache.org/jira/browse/TRAFODION-2840
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> The following script fails:
> >>create table t1 (a int not null, b int, primary key (a));
> --- SQL operation complete.
> >>
> >>insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5),(6,6);
> --- 6 row(s) inserted.
> >>
> >>create view v1 as select [first 5] * from t1 order by a;
> --- SQL operation complete.
> >>
> >>create view v2 as select [first 5] * from t1;
> --- SQL operation complete.
> >>
> >>update v1 set b = 6;
> --- 6 row(s) updated.
> >> -- should fail; v1 should be non-updatable
> >>
> >>update v2 set b = 7;
> *** ERROR[4028] Table or view TRAFODION.SEABASE.V2 is not updatable.
> *** ERROR[8822] The statement was not prepared.
> >>-- does fail; v2 is non-updatable (correctly)
> >>
> It seems the presence of the ORDER BY clause in the view definition 
> circumvents the [first n] updatability check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2840) ORDER BY clause on a view circumvents [first n] updatability check

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339621#comment-16339621
 ] 

ASF GitHub Bot commented on TRAFODION-2840:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1414#discussion_r163930862
  
--- Diff: core/sql/optimizer/OptPhysRelExpr.cpp ---
@@ -15499,6 +15499,95 @@ GenericUtilExpr::synthPhysicalProperty(const 
Context* myContext,
   return sppForMe;
 } //  GenericUtilExpr::synthPhysicalProperty()
 
+// ---
+// FirstN::createContextForAChild()
+// The FirstN node may have an order by requirement that it needs to
+// pass to its child context. Other than that, this method is quite
+// similar to the default implementation, RelExpr::createContextForAChild.
+// The arity of FirstN is always 1, so some logic from the default
+// implementation that deals with childIndex > 0 is unnecessary and has
+// been removed.
+// ---
+Context * FirstN::createContextForAChild(Context* myContext,
+ PlanWorkSpace* pws,
+ Lng32& childIndex)
+{
+  const ReqdPhysicalProperty* rppForMe =
+myContext->getReqdPhysicalProperty();
+
+  CMPASSERT(getArity() == 1);
+
+  childIndex = getArity() - pws->getCountOfChildContexts() - 1;
+
+  // return if we are done
+  if (childIndex < 0)
+return NULL;
+
+  RequirementGenerator rg(child(childIndex), rppForMe);
+
+  if (reqdOrder().entries() > 0)
+{
+  // replace our sort requirement with that implied by our ORDER BY 
clause
+
+  rg.removeSortKey();
+
+  ValueIdList sortKey;
+  sortKey.insert(reqdOrder());
+
+  // Shouldn't/Can't add a sort order type requirement
+  // if we are in DP2
+  if (rppForMe->executeInDP2())
+rg.addSortKey(sortKey,NO_SOT);
+  else
+rg.addSortKey(sortKey,ESP_SOT);
+}
+
+  if (NOT pws->isEmpty())
--- End diff --

I think you're right. Perhaps this is one of those code blocks I copied 
from the default implementation that gets executed only when arity > 1? I'll 
look into it more carefully and remove this if it is dead code.


> ORDER BY clause on a view circumvents [first n] updatability check
> --
>
> Key: TRAFODION-2840
> URL: https://issues.apache.org/jira/browse/TRAFODION-2840
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> The following script fails:
> >>create table t1 (a int not null, b int, primary key (a));
> --- SQL operation complete.
> >>
> >>insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5),(6,6);
> --- 6 row(s) inserted.
> >>
> >>create view v1 as select [first 5] * from t1 order by a;
> --- SQL operation complete.
> >>
> >>create view v2 as select [first 5] * from t1;
> --- SQL operation complete.
> >>
> >>update v1 set b = 6;
> --- 6 row(s) updated.
> >> -- should fail; v1 should be non-updatable
> >>
> >>update v2 set b = 7;
> *** ERROR[4028] Table or view TRAFODION.SEABASE.V2 is not updatable.
> *** ERROR[8822] The statement was not prepared.
> >>-- does fail; v2 is non-updatable (correctly)
> >>
> It seems the presence of the ORDER BY clause in the view definition 
> circumvents the [first n] updatability check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2840) ORDER BY clause on a view circumvents [first n] updatability check

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339620#comment-16339620
 ] 

ASF GitHub Bot commented on TRAFODION-2840:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1414#discussion_r163930378
  
--- Diff: core/sql/optimizer/OptPhysRelExpr.cpp ---
@@ -15499,6 +15499,95 @@ GenericUtilExpr::synthPhysicalProperty(const 
Context* myContext,
   return sppForMe;
 } //  GenericUtilExpr::synthPhysicalProperty()
 
+// ---
+// FirstN::createContextForAChild()
+// The FirstN node may have an order by requirement that it needs to
+// pass to its child context. Other than that, this method is quite
+// similar to the default implementation, RelExpr::createContextForAChild.
+// The arity of FirstN is always 1, so some logic from the default
+// implementation that deals with childIndex > 0 is unnecessary and has
+// been removed.
+// ---
+Context * FirstN::createContextForAChild(Context* myContext,
+ PlanWorkSpace* pws,
+ Lng32& childIndex)
+{
+  const ReqdPhysicalProperty* rppForMe =
+myContext->getReqdPhysicalProperty();
+
+  CMPASSERT(getArity() == 1);
+
+  childIndex = getArity() - pws->getCountOfChildContexts() - 1;
+
+  // return if we are done
+  if (childIndex < 0)
+return NULL;
+
+  RequirementGenerator rg(child(childIndex), rppForMe);
+
+  if (reqdOrder().entries() > 0)
+{
+  // replace our sort requirement with that implied by our ORDER BY 
clause
+
+  rg.removeSortKey();
+
+  ValueIdList sortKey;
+  sortKey.insert(reqdOrder());
--- End diff --

Will fix. Thanks. I think I copied this code from another method.


> ORDER BY clause on a view circumvents [first n] updatability check
> --
>
> Key: TRAFODION-2840
> URL: https://issues.apache.org/jira/browse/TRAFODION-2840
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> The following script fails:
> >>create table t1 (a int not null, b int, primary key (a));
> --- SQL operation complete.
> >>
> >>insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5),(6,6);
> --- 6 row(s) inserted.
> >>
> >>create view v1 as select [first 5] * from t1 order by a;
> --- SQL operation complete.
> >>
> >>create view v2 as select [first 5] * from t1;
> --- SQL operation complete.
> >>
> >>update v1 set b = 6;
> --- 6 row(s) updated.
> >> -- should fail; v1 should be non-updatable
> >>
> >>update v2 set b = 7;
> *** ERROR[4028] Table or view TRAFODION.SEABASE.V2 is not updatable.
> *** ERROR[8822] The statement was not prepared.
> >>-- does fail; v2 is non-updatable (correctly)
> >>
> It seems the presence of the ORDER BY clause in the view definition 
> circumvents the [first n] updatability check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2840) ORDER BY clause on a view circumvents [first n] updatability check

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339619#comment-16339619
 ] 

ASF GitHub Bot commented on TRAFODION-2840:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1414#discussion_r163930312
  
--- Diff: core/sql/optimizer/OptPhysRelExpr.cpp ---
@@ -15499,6 +15499,95 @@ GenericUtilExpr::synthPhysicalProperty(const 
Context* myContext,
   return sppForMe;
 } //  GenericUtilExpr::synthPhysicalProperty()
 
+// ---
+// FirstN::createContextForAChild()
+// The FirstN node may have an order by requirement that it needs to
+// pass to its child context. Other than that, this method is quite
+// similar to the default implementation, RelExpr::createContextForAChild.
+// The arity of FirstN is always 1, so some logic from the default
+// implementation that deals with childIndex > 0 is unnecessary and has
+// been removed.
+// ---
+Context * FirstN::createContextForAChild(Context* myContext,
+ PlanWorkSpace* pws,
+ Lng32& childIndex)
+{
+  const ReqdPhysicalProperty* rppForMe =
+myContext->getReqdPhysicalProperty();
+
+  CMPASSERT(getArity() == 1);
+
+  childIndex = getArity() - pws->getCountOfChildContexts() - 1;
+
+  // return if we are done
+  if (childIndex < 0)
+return NULL;
+
+  RequirementGenerator rg(child(childIndex), rppForMe);
+
+  if (reqdOrder().entries() > 0)
+{
+  // replace our sort requirement with that implied by our ORDER BY 
clause
+
+  rg.removeSortKey();
--- End diff --

Hmm... Suppose the parent is different at normalize time than it is here. 
(I don't think that ever happens but let's suppose in the future some Optimizer 
rule might make that happen.) Suppose further that the order requirement for 
the parent is different. We can generate a valid plan by doing a sort before 
doing first n (satisfying the first n + ORDER BY semantic) and then doing 
another sort with different criteria afterward (satisfying the different sort 
order of the parent). I think that's what this code achieves. If I remove this 
line, then we won't generate a plan (which might be OK because there might be 
other plan alternatives, but it might not in which case we'll get an error 
2235). So my take is that leaving this line here is slightly more robust. What 
do you think?


> ORDER BY clause on a view circumvents [first n] updatability check
> --
>
> Key: TRAFODION-2840
> URL: https://issues.apache.org/jira/browse/TRAFODION-2840
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> The following script fails:
> >>create table t1 (a int not null, b int, primary key (a));
> --- SQL operation complete.
> >>
> >>insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5),(6,6);
> --- 6 row(s) inserted.
> >>
> >>create view v1 as select [first 5] * from t1 order by a;
> --- SQL operation complete.
> >>
> >>create view v2 as select [first 5] * from t1;
> --- SQL operation complete.
> >>
> >>update v1 set b = 6;
> --- 6 row(s) updated.
> >> -- should fail; v1 should be non-updatable
> >>
> >>update v2 set b = 7;
> *** ERROR[4028] Table or view TRAFODION.SEABASE.V2 is not updatable.
> *** ERROR[8822] The statement was not prepared.
> >>-- does fail; v2 is non-updatable (correctly)
> >>
> It seems the presence of the ORDER BY clause in the view definition 
> circumvents the [first n] updatability check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2840) ORDER BY clause on a view circumvents [first n] updatability check

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339671#comment-16339671
 ] 

ASF GitHub Bot commented on TRAFODION-2840:
---

Github user zellerh commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1414#discussion_r163941065
  
--- Diff: core/sql/optimizer/OptPhysRelExpr.cpp ---
@@ -15499,6 +15499,95 @@ GenericUtilExpr::synthPhysicalProperty(const 
Context* myContext,
   return sppForMe;
 } //  GenericUtilExpr::synthPhysicalProperty()
 
+// ---
+// FirstN::createContextForAChild()
+// The FirstN node may have an order by requirement that it needs to
+// pass to its child context. Other than that, this method is quite
+// similar to the default implementation, RelExpr::createContextForAChild.
+// The arity of FirstN is always 1, so some logic from the default
+// implementation that deals with childIndex > 0 is unnecessary and has
+// been removed.
+// ---
+Context * FirstN::createContextForAChild(Context* myContext,
+ PlanWorkSpace* pws,
+ Lng32& childIndex)
+{
+  const ReqdPhysicalProperty* rppForMe =
+myContext->getReqdPhysicalProperty();
+
+  CMPASSERT(getArity() == 1);
+
+  childIndex = getArity() - pws->getCountOfChildContexts() - 1;
+
+  // return if we are done
+  if (childIndex < 0)
+return NULL;
+
+  RequirementGenerator rg(child(childIndex), rppForMe);
+
+  if (reqdOrder().entries() > 0)
+{
+  // replace our sort requirement with that implied by our ORDER BY 
clause
+
+  rg.removeSortKey();
--- End diff --

If the parent really produces a different order requirement, I agree that 
the solution is two sorts. Line 15532 does not achieve that, however. To get 
two sorts, we need to wait until we have a sort as a parent, and that sort will 
not require a sort order. Leaving the line in may result in an inconsistent 
plan that will be ignored by the Cascades engine.


> ORDER BY clause on a view circumvents [first n] updatability check
> --
>
> Key: TRAFODION-2840
> URL: https://issues.apache.org/jira/browse/TRAFODION-2840
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> The following script fails:
> >>create table t1 (a int not null, b int, primary key (a));
> --- SQL operation complete.
> >>
> >>insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5),(6,6);
> --- 6 row(s) inserted.
> >>
> >>create view v1 as select [first 5] * from t1 order by a;
> --- SQL operation complete.
> >>
> >>create view v2 as select [first 5] * from t1;
> --- SQL operation complete.
> >>
> >>update v1 set b = 6;
> --- 6 row(s) updated.
> >> -- should fail; v1 should be non-updatable
> >>
> >>update v2 set b = 7;
> *** ERROR[4028] Table or view TRAFODION.SEABASE.V2 is not updatable.
> *** ERROR[8822] The statement was not prepared.
> >>-- does fail; v2 is non-updatable (correctly)
> >>
> It seems the presence of the ORDER BY clause in the view definition 
> circumvents the [first n] updatability check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2840) ORDER BY clause on a view circumvents [first n] updatability check

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339706#comment-16339706
 ] 

ASF GitHub Bot commented on TRAFODION-2840:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1414#discussion_r163943604
  
--- Diff: core/sql/optimizer/OptPhysRelExpr.cpp ---
@@ -15499,6 +15499,95 @@ GenericUtilExpr::synthPhysicalProperty(const 
Context* myContext,
   return sppForMe;
 } //  GenericUtilExpr::synthPhysicalProperty()
 
+// ---
+// FirstN::createContextForAChild()
+// The FirstN node may have an order by requirement that it needs to
+// pass to its child context. Other than that, this method is quite
+// similar to the default implementation, RelExpr::createContextForAChild.
+// The arity of FirstN is always 1, so some logic from the default
+// implementation that deals with childIndex > 0 is unnecessary and has
+// been removed.
+// ---
+Context * FirstN::createContextForAChild(Context* myContext,
+ PlanWorkSpace* pws,
+ Lng32& childIndex)
+{
+  const ReqdPhysicalProperty* rppForMe =
+myContext->getReqdPhysicalProperty();
+
+  CMPASSERT(getArity() == 1);
+
+  childIndex = getArity() - pws->getCountOfChildContexts() - 1;
+
+  // return if we are done
+  if (childIndex < 0)
+return NULL;
+
+  RequirementGenerator rg(child(childIndex), rppForMe);
+
+  if (reqdOrder().entries() > 0)
+{
+  // replace our sort requirement with that implied by our ORDER BY 
clause
+
+  rg.removeSortKey();
--- End diff --

Gotcha. Will remove the line.


> ORDER BY clause on a view circumvents [first n] updatability check
> --
>
> Key: TRAFODION-2840
> URL: https://issues.apache.org/jira/browse/TRAFODION-2840
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> The following script fails:
> >>create table t1 (a int not null, b int, primary key (a));
> --- SQL operation complete.
> >>
> >>insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5),(6,6);
> --- 6 row(s) inserted.
> >>
> >>create view v1 as select [first 5] * from t1 order by a;
> --- SQL operation complete.
> >>
> >>create view v2 as select [first 5] * from t1;
> --- SQL operation complete.
> >>
> >>update v1 set b = 6;
> --- 6 row(s) updated.
> >> -- should fail; v1 should be non-updatable
> >>
> >>update v2 set b = 7;
> *** ERROR[4028] Table or view TRAFODION.SEABASE.V2 is not updatable.
> *** ERROR[8822] The statement was not prepared.
> >>-- does fail; v2 is non-updatable (correctly)
> >>
> It seems the presence of the ORDER BY clause in the view definition 
> circumvents the [first n] updatability check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2802) Prepare the build environment with one command

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339783#comment-16339783
 ] 

ASF GitHub Bot commented on TRAFODION-2802:
---

Github user zellerh commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1413#discussion_r163950904
  
--- Diff: install/traf_checkset_env.sh ---
@@ -244,11 +244,14 @@ if [ ! -d ${javapath} ]; then
  exit 1
 fi
 
+source $HOME/.bashrc
 javahome=`grep JAVA_HOME ~/.bashrc | wc -l`
--- End diff --

See comment above


> Prepare the build environment with one command
> --
>
> Key: TRAFODION-2802
> URL: https://issues.apache.org/jira/browse/TRAFODION-2802
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: Build Infrastructure
>Affects Versions: any
> Environment: Red Hat and CentOS first
>Reporter: xiaozhong.wang
>Priority: Major
> Fix For: any
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A newbie can't create the build environment without a hitch.
> Although there is a script, traf_tools_setup.sh, a lot of preparation is 
> needed before running it.
> A script that can create the build environment with one command would be 
> very useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2802) Prepare the build environment with one command

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339782#comment-16339782
 ] 

ASF GitHub Bot commented on TRAFODION-2802:
---

Github user zellerh commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1413#discussion_r163951143
  
--- Diff: install/traf_checkset_env.sh ---
@@ -244,11 +244,14 @@ if [ ! -d ${javapath} ]; then
  exit 1
 fi
 
+source $HOME/.bashrc
 javahome=`grep JAVA_HOME ~/.bashrc | wc -l`
 if [ "x${javahome}" = "x0" ]; then
 echo -en "\n# Added by traf_checkset_env.sh of trafodion\n" >> 
$HOME/.bashrc
 echo -en "export JAVA_HOME=${javapath}\n" >> $HOME/.bashrc
 echo -en "export PATH=\$PATH:\$JAVA_HOME/bin\n" >> $HOME/.bashrc
--- End diff --

See comment above. Writing into the user's .bashrc is probably not a very 
good idea. You can create a file ~/.trafodion instead.


> Prepare the build environment with one command
> --
>
> Key: TRAFODION-2802
> URL: https://issues.apache.org/jira/browse/TRAFODION-2802
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: Build Infrastructure
>Affects Versions: any
> Environment: Red Hat and CentOS first
>Reporter: xiaozhong.wang
>Priority: Major
> Fix For: any
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A newbie can't create the build environment without a hitch.
> Although there is a script, traf_tools_setup.sh, a lot of preparation is 
> needed before running it.
> A script that can create the build environment with one command would be 
> very useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2802) Prepare the build environment with one command

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339781#comment-16339781
 ] 

ASF GitHub Bot commented on TRAFODION-2802:
---

Github user zellerh commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1413#discussion_r163950816
  
--- Diff: install/traf_checkset_env.sh ---
@@ -244,11 +244,14 @@ if [ ! -d ${javapath} ]; then
  exit 1
 fi
 
+source $HOME/.bashrc
--- End diff --

Sourcing ~/.bashrc and looking into this file is not a very good solution, 
IMHO. There are so many other ways we can set JAVA_HOME. For example: 
/etc/bashrc, ~/.trafodion, $TRAF_HOME/.sqenvcom.sh.

I would suggest just checking whether JAVA_HOME is set; otherwise, duplicate 
the code in $TRAF_HOME/.sqenvcom.sh that sets JAVA_HOME. If JAVA_HOME is not 
already set, you can then add a line 

export JAVA_HOME=...

to the file ~/.trafodion.


> Prepare the build environment with one command
> --
>
> Key: TRAFODION-2802
> URL: https://issues.apache.org/jira/browse/TRAFODION-2802
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: Build Infrastructure
>Affects Versions: any
> Environment: Red Hat and CentOS first
>Reporter: xiaozhong.wang
>Priority: Major
> Fix For: any
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A newbie can't create the build environment without a hitch.
> Although there is a script, traf_tools_setup.sh, a lot of preparation is 
> needed before running it.
> A script that can create the build environment with one command would be 
> very useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2928) Add news articles about Trafodion to Trafodion web site

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340242#comment-16340242
 ] 

ASF GitHub Bot commented on TRAFODION-2928:
---

GitHub user DaveBirdsall opened a pull request:

https://github.com/apache/trafodion/pull/1415

[TRAFODION-2928] Add recent articles to Trafodion web site

This pull request adds recent articles about Apache Trafodion becoming a 
Top-Level Project and Apache Trafodion features to the Apache Trafodion web 
site.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/DaveBirdsall/trafodion UpdateWebSite

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1415.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1415


commit 3c0700c5ffd840a8de93907009ca952c2f1c9279
Author: Dave Birdsall 
Date:   2018-01-25T22:49:52Z

[TRAFODION-2928] Add recent articles to Trafodion web site




> Add news articles about Trafodion to Trafodion web site
> ---
>
> Key: TRAFODION-2928
> URL: https://issues.apache.org/jira/browse/TRAFODION-2928
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: website
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> Add recent articles about Trafodion as a top-level project to the Trafodion 
> web site.
> [http://globenewswire.com/news-release/2018/01/10/1286517/0/en/The-Apache-Software-Foundation-Announces-Apache-Trafodion-as-a-Top-Level-Project.html]
> [https://thenewstack.io/sql-hadoop-database-trafodion-bridges-transactions-analysis-divide/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2840) ORDER BY clause on a view circumvents [first n] updatability check

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339463#comment-16339463
 ] 

ASF GitHub Bot commented on TRAFODION-2840:
---

Github user zellerh commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1414#discussion_r163894233
  
--- Diff: core/sql/optimizer/OptPhysRelExpr.cpp ---
@@ -15499,6 +15499,95 @@ GenericUtilExpr::synthPhysicalProperty(const 
Context* myContext,
   return sppForMe;
 } //  GenericUtilExpr::synthPhysicalProperty()
 
+// ---
+// FirstN::createContextForAChild()
+// The FirstN node may have an order by requirement that it needs to
+// pass to its child context. Other than that, this method is quite
+// similar to the default implementation, RelExpr::createContextForAChild.
+// The arity of FirstN is always 1, so some logic from the default
+// implementation that deals with childIndex > 0 is unnecessary and has
+// been removed.
+// ---
+Context * FirstN::createContextForAChild(Context* myContext,
+ PlanWorkSpace* pws,
+ Lng32& childIndex)
+{
+  const ReqdPhysicalProperty* rppForMe =
+myContext->getReqdPhysicalProperty();
+
+  CMPASSERT(getArity() == 1);
+
+  childIndex = getArity() - pws->getCountOfChildContexts() - 1;
+
+  // return if we are done
+  if (childIndex < 0)
+return NULL;
+
+  RequirementGenerator rg(child(childIndex), rppForMe);
+
+  if (reqdOrder().entries() > 0)
+{
+  // replace our sort requirement with that implied by our ORDER BY 
clause
+
+  rg.removeSortKey();
--- End diff --

I don't think we should remove the sort requirement from the parent here: 
the FirstN does not itself do any sorting, so whatever the parent needs 
will have to be provided by the child. Therefore we need to pass the parent's 
requirement on to the child. If what the parent wants is incompatible with what 
the FirstN needs (probably not possible in the current design), we'll return 
NULL on line 15562, which is what we want.
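
A rough sketch of that propagation idea, with hypothetical types in place of 
the Cascades classes: keep the parent's sort requirement, combine it with the 
FirstN's ORDER BY when one key list is a prefix of the other, and give up 
(the analogue of returning NULL) when the two are incompatible.

```cpp
#include <algorithm>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

using SortKey = std::vector<std::string>;   // hypothetical stand-in

static bool isPrefix(const SortKey& a, const SortKey& b) {
  return a.size() <= b.size() && std::equal(a.begin(), a.end(), b.begin());
}

// Merged requirement for the child, or nullopt when no plan is possible here.
std::optional<SortKey> childRequirement(const SortKey& parentReq,
                                        const SortKey& firstNOrderBy) {
  if (isPrefix(parentReq, firstNOrderBy)) return firstNOrderBy;
  if (isPrefix(firstNOrderBy, parentReq)) return parentReq;
  return std::nullopt;                      // incompatible requirements
}

int main() {
  std::cout << (childRequirement({"a"}, {"a", "b"}) ? "compatible"
                                                    : "no plan") << '\n';
}
```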


> ORDER BY clause on a view circumvents [first n] updatability check
> --
>
> Key: TRAFODION-2840
> URL: https://issues.apache.org/jira/browse/TRAFODION-2840
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> The following script fails:
> >>create table t1 (a int not null, b int, primary key (a));
> --- SQL operation complete.
> >>
> >>insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5),(6,6);
> --- 6 row(s) inserted.
> >>
> >>create view v1 as select [first 5] * from t1 order by a;
> --- SQL operation complete.
> >>
> >>create view v2 as select [first 5] * from t1;
> --- SQL operation complete.
> >>
> >>update v1 set b = 6;
> --- 6 row(s) updated.
> >> -- should fail; v1 should be non-updatable
> >>
> >>update v2 set b = 7;
> *** ERROR[4028] Table or view TRAFODION.SEABASE.V2 is not updatable.
> *** ERROR[8822] The statement was not prepared.
> >>-- does fail; v2 is non-updatable (correctly)
> >>
> It seems the presence of the ORDER BY clause in the view definition 
> circumvents the [first n] updatability check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2840) ORDER BY clause on a view circumvents [first n] updatability check

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339464#comment-16339464
 ] 

ASF GitHub Bot commented on TRAFODION-2840:
---

Github user zellerh commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1414#discussion_r163895532
  
--- Diff: core/sql/optimizer/OptPhysRelExpr.cpp ---
@@ -15499,6 +15499,95 @@ GenericUtilExpr::synthPhysicalProperty(const 
Context* myContext,
   return sppForMe;
 } //  GenericUtilExpr::synthPhysicalProperty()
 
+// ---
+// FirstN::createContextForAChild()
+// The FirstN node may have an order by requirement that it needs to
+// pass to its child context. Other than that, this method is quite
+// similar to the default implementation, RelExpr::createContextForAChild.
+// The arity of FirstN is always 1, so some logic from the default
+// implementation that deals with childIndex > 0 is unnecessary and has
+// been removed.
+// ---
+Context * FirstN::createContextForAChild(Context* myContext,
+ PlanWorkSpace* pws,
+ Lng32& childIndex)
+{
+  const ReqdPhysicalProperty* rppForMe =
+myContext->getReqdPhysicalProperty();
+
+  CMPASSERT(getArity() == 1);
+
+  childIndex = getArity() - pws->getCountOfChildContexts() - 1;
+
+  // return if we are done
+  if (childIndex < 0)
+return NULL;
+
+  RequirementGenerator rg(child(childIndex), rppForMe);
+
+  if (reqdOrder().entries() > 0)
+{
+  // replace our sort requirement with that implied by our ORDER BY 
clause
+
+  rg.removeSortKey();
+
+  ValueIdList sortKey;
+  sortKey.insert(reqdOrder());
+
+  // Shouldn't/Can't add a sort order type requirement
+  // if we are in DP2
+  if (rppForMe->executeInDP2())
+rg.addSortKey(sortKey,NO_SOT);
+  else
+rg.addSortKey(sortKey,ESP_SOT);
+}
+
+  if (NOT pws->isEmpty())
--- End diff --

I don't think we need this code (line 15545 - 15559). We already returned 
from this method if pws contains previous contexts (line 15524), so this if 
condition should never be true.


> ORDER BY clause on a view circumvents [first n] updatability check
> --
>
> Key: TRAFODION-2840
> URL: https://issues.apache.org/jira/browse/TRAFODION-2840
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> The following script fails:
> >>create table t1 (a int not null, b int, primary key (a));
> --- SQL operation complete.
> >>
> >>insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5),(6,6);
> --- 6 row(s) inserted.
> >>
> >>create view v1 as select [first 5] * from t1 order by a;
> --- SQL operation complete.
> >>
> >>create view v2 as select [first 5] * from t1;
> --- SQL operation complete.
> >>
> >>update v1 set b = 6;
> --- 6 row(s) updated.
> >> -- should fail; v1 should be non-updatable
> >>
> >>update v2 set b = 7;
> *** ERROR[4028] Table or view TRAFODION.SEABASE.V2 is not updatable.
> *** ERROR[8822] The statement was not prepared.
> >>-- does fail; v2 is non-updatable (correctly)
> >>
> It seems the presence of the ORDER BY clause in the view definition 
> circumvents the [first n] updatability check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2917) Refactor Trafodion implementation of hdfs scan for text formatted hive tables

2018-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341531#comment-16341531
 ] 

ASF GitHub Bot commented on TRAFODION-2917:
---

Github user selvaganesang commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1417#discussion_r164211946
  
--- Diff: core/sql/executor/ExHbaseAccess.cpp ---
@@ -502,6 +506,8 @@ void ExHbaseAccessTcb::freeResources()
  NADELETEBASIC(directRowBuffer_, getHeap());
   if (colVal_.val != NULL)
  NADELETEBASIC(colVal_.val, getHeap());
+  if (hdfsClient_ != NULL) 
+ NADELETE(hdfsClient_, HdfsClient, getHeap());
 }
--- End diff --

Yes, it is a good catch. I will fix this too in my next commit. This won't 
cause a memory leak as such, because the heap is destroyed.
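
For context, a minimal region-allocator sketch in plain C++ (not the actual 
NAHeap implementation) showing why a missed per-object delete is not a true 
leak once the whole heap is torn down:

```cpp
#include <cstdlib>
#include <iostream>
#include <new>

class Arena {                    // toy stand-in for a heap like NAHeap
  char*  base_;
  size_t used_ = 0, cap_;
 public:
  explicit Arena(size_t cap)
      : base_(static_cast<char*>(std::malloc(cap))), cap_(cap) {}
  ~Arena() { std::free(base_); } // one free reclaims every allocation
  void* alloc(size_t n) {
    if (used_ + n > cap_) return nullptr;
    void* p = base_ + used_;
    used_ += n;
    return p;
  }
};

int main() {
  Arena heap(1024);
  // Never individually deleted, yet reclaimed when 'heap' is destroyed.
  int* v = new (heap.alloc(sizeof(int))) int(42);
  std::cout << *v << '\n';
}
```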


> Refactor Trafodion implementation of hdfs scan for text formatted hive tables
> -
>
> Key: TRAFODION-2917
> URL: https://issues.apache.org/jira/browse/TRAFODION-2917
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: Selvaganesan Govindarajan
>Priority: Major
> Fix For: 2.3
>
>
> Find below the general outline of the hdfs scan for text-formatted Hive tables.
> The compiler returns a list of scan ranges, plus the begin range and the number 
> of ranges to be done by each instance of TCB, in the TDB. This list of scan 
> ranges is also re-computed at run time, possibly based on a CQD.
> The scan range for a TCB can come from the same or different hdfs files. TCB 
> creates two threads to read these ranges. Two ranges (for the TCB) are 
> initially assigned to these threads. As and when a range is completed, the 
> next range (assigned for the TCB) is picked up by the thread. Ranges are read 
> in multiples of the hdfs scan buffer size at the TCB level. The default hdfs 
> scan buffer size is 64 MB. Rows from the hdfs scan buffer are processed and 
> moved into the up queue. If the range contains a record split, the range is extended to 
> read up to range tail IO size to get the full row. The range that had the 
> latter part of the row ignores it because the former range processes it. 
> Record split at the file level is not possible and/or not supported.
>  For compression, the compiler returns the range info such that the hdfs scan 
> buffer can hold the full uncompressed buffer.
>  Cons:
> Reader threads feature too complex to maintain in C++
> Error handling at the layer below the TCB is missing, or errors are not 
> propagated to the work method, causing incorrect results
> Possible multiple copying of data
> Libhdfs calls are not optimized. It was observed that the method Ids are 
> being obtained many times. Need to check if this problem still exists.
> Now that we clearly know what is expected, it could be optimized better
>   - Reduced scan buffer size for smoother data flow
>   - Better thread utilization
>   - Avoid multiple copying of data.
> Unable to comprehend the need for two threads for pre-fetch, especially when 
> one range is completed fully before the data from the next range is processed.
>  Following are the hdfsCalls used by programs at exp and executor directory.
>   U hdfsCloseFile
>  U hdfsConnect
>  U hdfsDelete
>  U hdfsExists
>  U hdfsFlush
>      U hdfsFreeFileInfo
>  U hdfsGetPathInfo
>  U hdfsListDirectory
>  U hdfsOpenFile
>  U hdfsPread
>  U hdfsRename
>  U hdfsWrite
>  U hdfsCreateDirectory
>  New implementation
> Make changes to use direct Java APIs for these calls. However, come up with a 
> better mechanism to move the data between Java and JNI, avoid unnecessary 
> copying of data, and provide better thread management via Executor concepts in 
> Java. Hence it won't be a direct mapping of these calls to the hdfs Java API. 
> Instead, use an abstraction like what is being done for HBase access.
>  I believe the newer implementation will be better optimized and hence improve 
> performance (but not by many folds)
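
As a rough, generic illustration of the range-reader pattern described above 
(standard C++ threads; all names are hypothetical and none of this is the 
actual executor code), two workers pull scan ranges from a shared counter, so 
a thread picks up the next range as soon as it finishes one:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

struct ScanRange { long offset; long length; };  // hypothetical stand-in

void readRange(const ScanRange& r) {
  // Placeholder for reading this byte range in scan-buffer-sized chunks.
  std::printf("read range [%ld, %ld)\n", r.offset, r.offset + r.length);
}

int main() {
  std::vector<ScanRange> ranges{{0, 64}, {64, 64}, {128, 64}, {192, 64}};
  std::atomic<size_t> next{0};
  auto worker = [&] {
    while (true) {
      size_t i = next.fetch_add(1);   // claim the next unassigned range
      if (i >= ranges.size()) break;
      readRange(ranges[i]);
    }
  };
  std::thread t1(worker), t2(worker);  // two reader threads, as in the outline
  t1.join();
  t2.join();
}
```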



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2917) Refactor Trafodion implementation of hdfs scan for text formatted hive tables

2018-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341705#comment-16341705
 ] 

ASF GitHub Bot commented on TRAFODION-2917:
---

Github user selvaganesang commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1417#discussion_r164237633
  
--- Diff: core/sql/executor/ExFastTransport.h ---
@@ -407,6 +408,7 @@ class ExHdfsFastExtractTcb : public ExFastExtractTcb
   
   NABoolean isSequenceFile();
   void createSequenceFileError(Int32 sfwRetCode);
+  void createHdfsClientFileError(Int32 sfwRetCode);
--- End diff --

Will do in my next commit, though it is just a parameter name in the method 
declaration.


> Refactor Trafodion implementation of hdfs scan for text formatted hive tables
> -
>
> Key: TRAFODION-2917
> URL: https://issues.apache.org/jira/browse/TRAFODION-2917
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: Selvaganesan Govindarajan
>Priority: Major
> Fix For: 2.3
>
>
> Find below the general outline of the hdfs scan for text-formatted Hive tables.
> The compiler returns a list of scan ranges, plus the begin range and the number 
> of ranges to be done by each instance of TCB, in the TDB. This list of scan 
> ranges is also re-computed at run time, possibly based on a CQD.
> The scan range for a TCB can come from the same or different hdfs files. TCB 
> creates two threads to read these ranges. Two ranges (for the TCB) are 
> initially assigned to these threads. As and when a range is completed, the 
> next range (assigned for the TCB) is picked up by the thread. Ranges are read 
> in multiples of the hdfs scan buffer size at the TCB level. The default hdfs 
> scan buffer size is 64 MB. Rows from the hdfs scan buffer are processed and 
> moved into the up queue. If the range contains a record split, the range is extended to 
> read up to range tail IO size to get the full row. The range that had the 
> latter part of the row ignores it because the former range processes it. 
> Record split at the file level is not possible and/or not supported.
>  For compression, the compiler returns the range info such that the hdfs scan 
> buffer can hold the full uncompressed buffer.
>  Cons:
> Reader threads feature too complex to maintain in C++
> Error handling at the layer below the TCB is missing, or errors are not 
> propagated to the work method, causing incorrect results
> Possible multiple copying of data
> Libhdfs calls are not optimized. It was observed that the method Ids are 
> being obtained many times. Need to check if this problem still exists.
> Now that we clearly know what is expected, it could be optimized better
>   - Reduced scan buffer size for smoother data flow
>   - Better thread utilization
>   - Avoid multiple copying of data.
> Unable to comprehend the need for two threads for pre-fetch, especially when 
> one range is completed fully before the data from the next range is processed.
>  Following are the hdfsCalls used by programs at exp and executor directory.
>   U hdfsCloseFile
>  U hdfsConnect
>  U hdfsDelete
>  U hdfsExists
>  U hdfsFlush
>      U hdfsFreeFileInfo
>  U hdfsGetPathInfo
>  U hdfsListDirectory
>  U hdfsOpenFile
>  U hdfsPread
>  U hdfsRename
>  U hdfsWrite
>  U hdfsCreateDirectory
>  New implementation
> Make changes to use direct Java APIs for these calls. However, come up with a 
> better mechanism to move the data between Java and JNI, avoid unnecessary 
> copying of data, and provide better thread management via Executor concepts in 
> Java. Hence it won't be a direct mapping of these calls to the hdfs Java API. 
> Instead, use an abstraction like what is being done for HBase access.
>  I believe the newer implementation will be better optimized and hence improve 
> performance (but not by many folds)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2899) Catalog API SQLColumns does not support ODBC2.x

2018-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16319914#comment-16319914
 ] 

ASF GitHub Bot commented on TRAFODION-2899:
---

GitHub user Weixin-Xu opened a pull request:

https://github.com/apache/trafodion/pull/1386

[TRAFODION-2899] Catalog API SQLColumns support ODBC2.x

When using ODBC 2.x to get the description of columns, the call fails but no 
error is returned.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Weixin-Xu/incubator-trafodion T2899

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1386.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1386


commit 79001ea0d31b0c2d8c2c48e40876f79528a7edf5
Author: Weixin-Xu 
Date:   2018-01-10T08:49:10Z

Catalog API SQLColumns support ODBC2.x




> Catalog API SQLColumns does not support ODBC2.x
> ---
>
> Key: TRAFODION-2899
> URL: https://issues.apache.org/jira/browse/TRAFODION-2899
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Affects Versions: any
> Environment: Centos 6.7
>Reporter: XuWeixin
>Assignee: XuWeixin
> Fix For: 2.2-incubating
>
>
> When using ODBC 2.x to get the description of columns, the call fails but no 
> error is returned.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (TRAFODION-2886) fix the null pointer Critical error checked by TScanCode

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324148#comment-16324148
 ] 

ASF GitHub Bot commented on TRAFODION-2886:
---

Github user xiaozhongwang commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1380#discussion_r161255440
  
--- Diff: core/sql/executor/cluster.cpp ---
@@ -2395,7 +2395,7 @@ NABoolean Cluster::checkAndSplit(ExeErrorCode * rc)
 rc);
   
   if ( !next_ || *rc ) {
--- End diff --

I can't understand why we should remove the || *rc.
I think there are two types of results:
1. First, if there is no memory, next_ will get a NULL value.
2. Second, something wrong happened in Cluster::Cluster.
In the second case, next_ will get a value, but *rc will not be EXE_OK.
If we remove the || *rc, this check will pass, but an error has 
happened.

Is my understanding wrong?


> fix the null pointer Critical error checked by TScanCode
> 
>
> Key: TRAFODION-2886
> URL: https://issues.apache.org/jira/browse/TRAFODION-2886
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: xiaozhong.wang
>Priority: Critical
> Attachments: Critical_trafodion_tscancode_codecheck.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (TRAFODION-2886) fix the null pointer Critical error checked by TScanCode

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324239#comment-16324239
 ] 

ASF GitHub Bot commented on TRAFODION-2886:
---

Github user xiaozhongwang commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1380#discussion_r161273985
  
--- Diff: core/sql/executor/cluster.cpp ---
@@ -2395,7 +2395,7 @@ NABoolean Cluster::checkAndSplit(ExeErrorCode * rc)
 rc);
   
   if ( !next_ || *rc ) {
--- End diff --

OK, I see.


> fix the null pointer Critical error checked by TScanCode
> 
>
> Key: TRAFODION-2886
> URL: https://issues.apache.org/jira/browse/TRAFODION-2886
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: xiaozhong.wang
>Priority: Critical
> Attachments: Critical_trafodion_tscancode_codecheck.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (TRAFODION-2886) fix the null pointer Critical error checked by TScanCode

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324226#comment-16324226
 ] 

ASF GitHub Bot commented on TRAFODION-2886:
---

Github user selvaganesang commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1380#discussion_r161272011
  
--- Diff: core/sql/executor/cluster.cpp ---
@@ -2395,7 +2395,7 @@ NABoolean Cluster::checkAndSplit(ExeErrorCode * rc)
 rc);
   
   if ( !next_ || *rc ) {
--- End diff --

You are correct. In that case you can change it to ((!next_) || (*rc != 
EXE_OK)). I hope we get this right this time.
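
A small self-contained illustration of the agreed check (generic C++ with 
stand-in types, not the actual cluster.cpp code): it catches both failure 
modes, a NULL pointer from an allocation failure and a non-OK error code from 
a constructor-time error.

```cpp
#include <iostream>

enum ExeErrorCode { EXE_OK = 0, EXE_NO_MEMORY = 1 };

struct Cluster { /* ... */ };            // stand-in type

bool splitSucceeded(Cluster* next_, ExeErrorCode* rc) {
  if ((!next_) || (*rc != EXE_OK))       // both failure modes handled
    return false;
  return true;
}

int main() {
  Cluster c;
  ExeErrorCode ok = EXE_OK, err = EXE_NO_MEMORY;
  std::cout << std::boolalpha
            << splitSucceeded(nullptr, &ok) << ' '   // mode 1: no memory
            << splitSucceeded(&c, &err) << ' '       // mode 2: ctor error
            << splitSucceeded(&c, &ok) << '\n';      // success
}
```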


> fix the null pointer Critical error checked by TScanCode
> 
>
> Key: TRAFODION-2886
> URL: https://issues.apache.org/jira/browse/TRAFODION-2886
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: xiaozhong.wang
>Priority: Critical
> Attachments: Critical_trafodion_tscancode_codecheck.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (TRAFODION-2881) Multiple node failures occur during HA testing

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324252#comment-16324252
 ] 

ASF GitHub Bot commented on TRAFODION-2881:
---

Github user trinakrug commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1392#discussion_r161276162
  
--- Diff: core/sqf/monitor/linux/cmsh.cxx ---
@@ -128,31 +168,97 @@ int CCmsh::GetClusterState( PhysicalNodeNameMap_t &physicalNodeMap )
 if (it != physicalNodeMap.end())
 {
// TEST_POINT and Exclude List : to force state down on 
node name 
-   const char *downNodeName = getenv( TP001_NODE_DOWN );
-   const char *downNodeList = getenv( TRAF_EXCLUDE_LIST );
-  string downNodeString = " ";
-  if (downNodeList)
-  {
-downNodeString += downNodeList;
-downNodeString += " ";
-  }
-  string downNodeToFind = " ";
-  downNodeToFind += nodeName.c_str();
-  downNodeToFind += " ";
-   if (((downNodeList != NULL) && 
- strstr(downNodeString.c_str(),downNodeToFind.c_str())) ||
-   ( (downNodeName != NULL) && 
- !strcmp( downNodeName, nodeName.c_str()) ))
-  {
-   nodeState = StateDown;
-  }
-  
+const char *downNodeName = getenv( TP001_NODE_DOWN );
+const char *downNodeList = getenv( TRAF_EXCLUDE_LIST );
+string downNodeString = " ";
+if (downNodeList)
+{
+downNodeString += downNodeList;
+downNodeString += " ";
+}
+string downNodeToFind = " ";
+downNodeToFind += nodeName.c_str();
+downNodeToFind += " ";
+if (((downNodeList != NULL) && 
+  strstr(downNodeString.c_str(),downNodeToFind.c_str())) ||
+((downNodeName != NULL) && 
+ !strcmp(downNodeName, nodeName.c_str())))
+{
+nodeState = StateDown;
+}
+  
 // Set physical node state
 physicalNode = it->second;
 physicalNode->SetState( nodeState );
 }
 }
-}  
+}  
+
+TRACE_EXIT;
+return( rc );
+}
+

+///
+//
+// Function/Method: CCmsh::GetNodeState
+//
+// Description: Updates the state of the nodeName in the physicalNode 
passed in
+//  as a parameter. Caller should ensure that the node names 
are already
+//  present in the physicalNodeMap. 
+//
+// Return:
+//0 - success
+//   -1 - failure
+//

+///
+int CCmsh::GetNodeState( char *name ,CPhysicalNode  *physicalNode )
+{
+const char method_name[] = "CCmsh::GetNodeState";
+TRACE_ENTRY;
+
+int rc;
+
+rc = PopulateNodeState( name );
+
+if ( rc != -1 )
+{
+// Parse each line extracting name and state
+string nodeName;
+NodeState_t nodeState;
+PhysicalNodeNameMap_t::iterator it;
+
+StringList_t::iterator alit;
+for ( alit = nodeStateList_.begin(); alit != nodeStateList_.end() 
; alit++ )
+{
+ParseNodeStatus( *alit, nodeName, nodeState );
+
+// TEST_POINT and Exclude List : to force state down on node 
name 
+const char *downNodeName = getenv( TP001_NODE_DOWN );
+const char *downNodeList = getenv( TRAF_EXCLUDE_LIST );
--- End diff --

Is TRAF_EXCLUDE_LIST obsolete?
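
As background for this exchange, a small self-contained sketch of the 
whole-token matching technique the quoted diff uses for the exclude list. In 
the real code the list comes from getenv( TRAF_EXCLUDE_LIST ); here it is 
passed in directly:

{noformat}
#include <cstring>
#include <string>

// Both the list and the name are padded with spaces so that "node1" cannot
// match inside "node10"; this mirrors the downNodeString/downNodeToFind
// construction in the quoted cmsh.cxx diff.
bool nodeInDownList(const char *downNodeList, const std::string &nodeName)
{
  if (downNodeList == NULL)
    return false;
  std::string haystack = " ";
  haystack += downNodeList;
  haystack += " ";
  std::string needle = " " + nodeName + " ";
  return strstr(haystack.c_str(), needle.c_str()) != NULL;
}
{noformat}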


> Multiple node failures occur during HA testing
> --
>
> Key: TRAFODION-2881
> URL: https://issues.apache.org/jira/browse/TRAFODION-2881
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Affects Versions: 2.3
>Reporter: Gonzalo E Correa
>Assignee: Gonzalo E Correa
> Fix For: 2.3
>
>
> Inflicting server failure in certain modes will cause multiple monitor 
> process to also bring their nodes down along with the intended target of the 
> test.
> Server down modes:
> init 6
> reboot -f
> shutdown -r now
> shell node down command
> In addition, after a server down, the shell 'node up' command 

[jira] [Commented] (TRAFODION-2886) fix the null pointer Critical error checked by TScanCode

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324132#comment-16324132
 ] 

ASF GitHub Bot commented on TRAFODION-2886:
---

Github user selvaganesang commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1380#discussion_r161252479
  
--- Diff: core/sql/executor/cluster.cpp ---
@@ -2395,7 +2395,7 @@ NABoolean Cluster::checkAndSplit(ExeErrorCode * rc)
 rc);
   
   if ( !next_ || *rc ) {
--- End diff --

Oops. Please remove the || *rc.


> fix the null pointer Critical error checked by TScanCode
> 
>
> Key: TRAFODION-2886
> URL: https://issues.apache.org/jira/browse/TRAFODION-2886
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: xiaozhong.wang
>Priority: Critical
> Attachments: Critical_trafodion_tscancode_codecheck.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (TRAFODION-2881) Multiple node failures occur during HA testing

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324361#comment-16324361
 ] 

ASF GitHub Bot commented on TRAFODION-2881:
---

Github user zcorrea commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1392#discussion_r161296519
  
--- Diff: core/sqf/monitor/linux/cmsh.cxx ---
@@ -128,31 +168,97 @@ int CCmsh::GetClusterState( PhysicalNodeNameMap_t &physicalNodeMap )
 if (it != physicalNodeMap.end())
 {
// TEST_POINT and Exclude List : to force state down on 
node name 
-   const char *downNodeName = getenv( TP001_NODE_DOWN );
-   const char *downNodeList = getenv( TRAF_EXCLUDE_LIST );
-  string downNodeString = " ";
-  if (downNodeList)
-  {
-downNodeString += downNodeList;
-downNodeString += " ";
-  }
-  string downNodeToFind = " ";
-  downNodeToFind += nodeName.c_str();
-  downNodeToFind += " ";
-   if (((downNodeList != NULL) && 
- strstr(downNodeString.c_str(),downNodeToFind.c_str())) ||
-   ( (downNodeName != NULL) && 
- !strcmp( downNodeName, nodeName.c_str()) ))
-  {
-   nodeState = StateDown;
-  }
-  
+const char *downNodeName = getenv( TP001_NODE_DOWN );
+const char *downNodeList = getenv( TRAF_EXCLUDE_LIST );
+string downNodeString = " ";
+if (downNodeList)
+{
+downNodeString += downNodeList;
+downNodeString += " ";
+}
+string downNodeToFind = " ";
+downNodeToFind += nodeName.c_str();
+downNodeToFind += " ";
+if (((downNodeList != NULL) && 
+  
strstr(downNodeString.c_str(),downNodeToFind.c_str())) ||
+((downNodeName != NULL) && 
+ !strcmp(downNodeName, nodeName.c_str())))
+{
+nodeState = StateDown;
+}
+  
 // Set physical node state
 physicalNode = it->second;
 physicalNode->SetState( nodeState );
 }
 }
-}  
+}  
+
+TRACE_EXIT;
+return( rc );
+}
+

+///
+//
+// Function/Method: CCmsh::GetNodeState
+//
+// Description: Updates the state of the nodeName in the physicalNode 
passed in
+//  as a parameter. Caller should ensure that the node names 
are already
+//  present in the physicalNodeMap. 
+//
+// Return:
+//0 - success
+//   -1 - failure
+//

+///
+int CCmsh::GetNodeState( char *name ,CPhysicalNode  *physicalNode )
+{
+const char method_name[] = "CCmsh::GetNodeState";
+TRACE_ENTRY;
+
+int rc;
+
+rc = PopulateNodeState( name );
+
+if ( rc != -1 )
+{
+// Parse each line extracting name and state
+string nodeName;
+NodeState_t nodeState;
+PhysicalNodeNameMap_t::iterator it;
+
+StringList_t::iterator alit;
+for ( alit = nodeStateList_.begin(); alit != nodeStateList_.end() 
; alit++ )
+{
+ParseNodeStatus( *alit, nodeName, nodeState );
+
+// TEST_POINT and Exclude List : to force state down on node 
name 
+const char *downNodeName = getenv( TP001_NODE_DOWN );
+const char *downNodeList = getenv( TRAF_EXCLUDE_LIST );
--- End diff --

Created JIRA (TRAFODION-2907) to clean up the use of TRAF_EXCLUDE_LIST in 
the monitor code.


> Multiple node failures occur during HA testing
> --
>
> Key: TRAFODION-2881
> URL: https://issues.apache.org/jira/browse/TRAFODION-2881
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Affects Versions: 2.3
>Reporter: Gonzalo E Correa
>Assignee: Gonzalo E Correa
> Fix For: 2.3
>
>
> Inflicting server failure in certain modes will cause multiple monitor 
> process to also bring their nodes down along with the intended target of the 
> test.
> Server down modes:
> init 6
> reboot -f
> shutdown -r now
> shell node down command
> In 

[jira] [Commented] (TRAFODION-2881) Multiple node failures occur during HA testing

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324353#comment-16324353
 ] 

ASF GitHub Bot commented on TRAFODION-2881:
---

Github user zcorrea commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1392#discussion_r161294809
  
--- Diff: core/sqf/monitor/linux/cmsh.cxx ---
@@ -128,31 +168,97 @@ int CCmsh::GetClusterState( PhysicalNodeNameMap_t &physicalNodeMap )
 if (it != physicalNodeMap.end())
 {
// TEST_POINT and Exclude List : to force state down on 
node name 
-   const char *downNodeName = getenv( TP001_NODE_DOWN );
-   const char *downNodeList = getenv( TRAF_EXCLUDE_LIST );
-  string downNodeString = " ";
-  if (downNodeList)
-  {
-downNodeString += downNodeList;
-downNodeString += " ";
-  }
-  string downNodeToFind = " ";
-  downNodeToFind += nodeName.c_str();
-  downNodeToFind += " ";
-   if (((downNodeList != NULL) && 
- strstr(downNodeString.c_str(),downNodeToFind.c_str())) ||
-   ( (downNodeName != NULL) && 
- !strcmp( downNodeName, nodeName.c_str()) ))
-  {
-   nodeState = StateDown;
-  }
-  
+const char *downNodeName = getenv( TP001_NODE_DOWN );
+const char *downNodeList = getenv( TRAF_EXCLUDE_LIST );
+string downNodeString = " ";
+if (downNodeList)
+{
+downNodeString += downNodeList;
+downNodeString += " ";
+}
+string downNodeToFind = " ";
+downNodeToFind += nodeName.c_str();
+downNodeToFind += " ";
+if (((downNodeList != NULL) && 
+  
strstr(downNodeString.c_str(),downNodeToFind.c_str())) ||
+((downNodeName != NULL) && 
+ !strcmp(downNodeName, nodeName.c_str())))
+{
+nodeState = StateDown;
+}
+  
 // Set physical node state
 physicalNode = it->second;
 physicalNode->SetState( nodeState );
 }
 }
-}  
+}  
+
+TRACE_EXIT;
+return( rc );
+}
+

+///
+//
+// Function/Method: CCmsh::GetNodeState
+//
+// Description: Updates the state of the nodeName in the physicalNode 
passed in
+//  as a parameter. Caller should ensure that the node names 
are already
+//  present in the physicalNodeMap. 
+//
+// Return:
+//0 - success
+//   -1 - failure
+//

+///
+int CCmsh::GetNodeState( char *name ,CPhysicalNode  *physicalNode )
+{
+const char method_name[] = "CCmsh::GetNodeState";
+TRACE_ENTRY;
+
+int rc;
+
+rc = PopulateNodeState( name );
+
+if ( rc != -1 )
+{
+// Parse each line extracting name and state
+string nodeName;
+NodeState_t nodeState;
+PhysicalNodeNameMap_t::iterator it;
+
+StringList_t::iterator alit;
+for ( alit = nodeStateList_.begin(); alit != nodeStateList_.end() 
; alit++ )
+{
+ParseNodeStatus( *alit, nodeName, nodeState );
+
+// TEST_POINT and Exclude List : to force state down on node 
name 
+const char *downNodeName = getenv( TP001_NODE_DOWN );
+const char *downNodeList = getenv( TRAF_EXCLUDE_LIST );
--- End diff --

Yes, it is! Good catch!


> Multiple node failures occur during HA testing
> --
>
> Key: TRAFODION-2881
> URL: https://issues.apache.org/jira/browse/TRAFODION-2881
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Affects Versions: 2.3
>Reporter: Gonzalo E Correa
>Assignee: Gonzalo E Correa
> Fix For: 2.3
>
>
> Inflicting server failure in certain modes will cause multiple monitor 
> process to also bring their nodes down along with the intended target of the 
> test.
> Server down modes:
> init 6
> reboot -f
> shutdown -r now
> shell node down command
> In addition, after a server down, the shell 'node up' command will also 

[jira] [Commented] (TRAFODION-2861) Remove incubating reference(s) from code base

2018-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322724#comment-16322724
 ] 

ASF GitHub Bot commented on TRAFODION-2861:
---

GitHub user svarnau opened a pull request:

https://github.com/apache/trafodion/pull/1393

[TRAFODION-2861] Backport to release2.2 branch

This includes doc changes for 2.2 docs to be generated without incubating 
references. (Main website is generated from master branch.)

Also includes packaging changes to remove incubating. 2.2.0 will be first 
TLP release.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/svarnau/trafodion tlp_2.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1393.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1393


commit f077620ae434bfdc115a599a591bec1f5eb7bff6
Author: Steve Varnau 
Date:   2017-12-22T19:23:57Z

[TRAFODION-2857] Web-site changes to remove incubating status

Project status and URL changes.
Email lists and git repo name changes still to come based on infra changes.

commit 6047c512707f72fd32f4b6e7e91bd3834c272b10
Author: Steve Varnau 
Date:   2017-12-29T19:03:25Z

[TRAFODION-2857] Adding repo and email address name changes.

commit 6429a0c2839dd22eefbd6ed368236492e8f730fc
Author: Steve Varnau 
Date:   2018-01-04T20:07:47Z

[TRAFODION-2861][TRAFODION-2869] Remove incubating from release packaging

Remove disclaimer file and incubating string from packaging file names.




> Remove incubating reference(s) from code base
> -
>
> Key: TRAFODION-2861
> URL: https://issues.apache.org/jira/browse/TRAFODION-2861
> Project: Apache Trafodion
>  Issue Type: Sub-task
>  Components: documentation, website
>Reporter: Pierre Smits
> Fix For: 2.3
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (TRAFODION-2895) Update Messages Guide for range 1700-1999 and some other cleanups

2018-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322777#comment-16322777
 ] 

ASF GitHub Bot commented on TRAFODION-2895:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1388


> Update Messages Guide for range 1700-1999 and some other cleanups
> -
>
> Key: TRAFODION-2895
> URL: https://issues.apache.org/jira/browse/TRAFODION-2895
> Project: Apache Trafodion
>  Issue Type: Sub-task
>  Components: documentation
> Environment: All
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>
> The range 1700-1999 is the last remaining range of DDL messages to be updated.
> There are a few issues to fix in earlier ranges as well (for example a few 
> messages have since been added to the code that were not added to the 
> Messages Guide).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (TRAFODION-2890) When using failed connection handle to alloc statement handle, crash happens

2018-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322745#comment-16322745
 ] 

ASF GitHub Bot commented on TRAFODION-2890:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1378


> When using failed connection handle to alloc statement handle, crash happens
> 
>
> Key: TRAFODION-2890
> URL: https://issues.apache.org/jira/browse/TRAFODION-2890
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-odbc-linux
>Affects Versions: any
> Environment: centos 6.7
>Reporter: XuWeixin
>Assignee: XuWeixin
> Fix For: 2.2-incubating
>
>
> Using ODBC to connect to the server.
> A segmentation fault happens when SQLAllocHandle is called to allocate a
> statement handle after the connection to the server has failed.
> The reason is that the driver manager did not check whether the connection
> handle is valid.
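
A hedged sketch of the calling pattern behind this bug, using only standard 
ODBC calls; the DSN and credentials are placeholders, and the fix itself makes 
the driver manager perform an equivalent validity check internally instead of 
crashing:

{noformat}
#include <sql.h>
#include <sqlext.h>

/* Allocate a statement handle only when the connect succeeded; the crash
   reported here came from skipping this check. */
SQLHSTMT allocStmtSafely(SQLHENV henv)
{
  SQLHDBC hdbc = SQL_NULL_HDBC;
  SQLHSTMT hstmt = SQL_NULL_HSTMT;

  if (!SQL_SUCCEEDED(SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc)))
    return SQL_NULL_HSTMT;

  SQLRETURN rc = SQLConnect(hdbc, (SQLCHAR *)"TRAFDSN", SQL_NTS,
                            (SQLCHAR *)"user", SQL_NTS,
                            (SQLCHAR *)"pass", SQL_NTS);
  if (!SQL_SUCCEEDED(rc))
  {
    /* Connection failed: do not reuse hdbc for
       SQLAllocHandle(SQL_HANDLE_STMT, ...). */
    SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
    return SQL_NULL_HSTMT;
  }

  SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);
  return hstmt;
}
{noformat}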



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (TRAFODION-2891) fix the bufoverrun Critical error checked by TScanCode

2018-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323529#comment-16323529
 ] 

ASF GitHub Bot commented on TRAFODION-2891:
---

GitHub user xiaozhongwang opened a pull request:

https://github.com/apache/trafodion/pull/1394

[TRAFODION-2891] fix the bufoverrun Critical error checked by TScanCode

fix the bufoverrun Critical error checked by TScanCode

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xiaozhongwang/trafodion TRAFODION-2891

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1394.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1394


commit 0166c96321a104636e7dec67eb98b67e931ea84e
Author: Kenny 
Date:   2018-01-11T08:32:39Z

fix the bufoverrun Critical error checked by TScanCode

commit d6573f9c0a1ca7a7f63f76a629e64750a37f038c
Author: Kenny 
Date:   2018-01-12T01:37:08Z

fix the bufoverrun Critical error checked by TScanCode

commit 1fe8890705310df729c35401d53d55531e8e8398
Author: Kenny 
Date:   2018-01-12T04:02:26Z

fix the bufoverrun Critical error checked by TScanCode




> fix the bufoverrun Critical error checked by TScanCode
> --
>
> Key: TRAFODION-2891
> URL: https://issues.apache.org/jira/browse/TRAFODION-2891
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: xiaozhong.wang
> Attachments: Critical_trafodion_tscancode_codecheck.xml
>
>
> Reading past the end of the buffer is a buffer overrun; if the buffer sits at
> the end of a mapped memory region, the overrun can cause a core dump.
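
A generic illustration of this defect class, not the actual Trafodion code: a 
read of len bytes at offset pos must be rejected when it would run past the 
end of the buffer.

{noformat}
#include <cstddef>
#include <cstring>

// Overflow-safe bounds check: "pos + len > bufLen" is rewritten so the
// addition itself cannot wrap around.
bool safeCopy(char *dst, const char *buf, size_t bufLen,
              size_t pos, size_t len)
{
  if (pos > bufLen || len > bufLen - pos)
    return false;               // would read past the end of buf
  memcpy(dst, buf + pos, len);
  return true;
}
{noformat}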



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (TRAFODION-2903) The COLUMN_SIZE fetched from mxosrvr is wrong

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323686#comment-16323686
 ] 

ASF GitHub Bot commented on TRAFODION-2903:
---

GitHub user Weixin-Xu opened a pull request:

https://github.com/apache/trafodion/pull/1395

[TRAFODION-2903] correct Column_Size fetched from mxosrvr

DATE precision expect: 10 and actual: 11
TIME precision expect: 8 and actual: 9
TIMESTAMP precision expect: 19 and actual: 20

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Weixin-Xu/incubator-trafodion T2903

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1395.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1395


commit bb306026bdcec65050c364596a52d6c4bedb7837
Author: Weixin-Xu 
Date:   2018-01-12T07:28:46Z

correct Column_Size fetched from mxosrvr




> The COLUMN_SIZE fetched from mxosrvr is wrong
> -
>
> Key: TRAFODION-2903
> URL: https://issues.apache.org/jira/browse/TRAFODION-2903
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: connectivity-mxosrvr
>Affects Versions: any
>Reporter: XuWeixin
>Assignee: XuWeixin
> Fix For: 2.2-incubating
>
>
> 1. DDL: create table TEST (C1 date, C2 time, C3 timestamp)
> 2. 
> SQLColumns(hstmt,(SQLTCHAR*)"TRAFODION",SQL_NTS,(SQLTCHAR*)"SEABASE",SQL_NTS,(SQLTCHAR*)"TEST",SQL_NTS,(SQLTCHAR*)"%",SQL_NTS);
> 3. SQLBindCol(hstmt,7,SQL_C_LONG,,0,)
> 4. SQLFetch(hstmt)
> return  DATE ColPrec expect: 10 and actual: 11
>TIME ColPrec expect: 8 and actual: 9
>TIMESTAMP ColPrec expect: 19 and actual: 20
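
The repro steps above, written out as a hedged sketch; hstmt is assumed to be 
a statement handle on a live connection. COLUMN_SIZE is column 7 of the 
SQLColumns result set, and after the fix the fetched values should match the 
expected 10, 8, and 19:

{noformat}
#include <sql.h>
#include <sqlext.h>
#include <cstdio>

void printColumnSizes(SQLHSTMT hstmt)
{
  SQLINTEGER colSize = 0;
  SQLLEN ind = 0;

  /* Steps 2-4 of the repro: list the columns of TRAFODION.SEABASE.TEST and
     bind COLUMN_SIZE (result-set column 7) to a long. */
  SQLColumns(hstmt, (SQLCHAR *)"TRAFODION", SQL_NTS,
             (SQLCHAR *)"SEABASE", SQL_NTS,
             (SQLCHAR *)"TEST", SQL_NTS,
             (SQLCHAR *)"%", SQL_NTS);
  SQLBindCol(hstmt, 7, SQL_C_LONG, &colSize, 0, &ind);
  while (SQL_SUCCEEDED(SQLFetch(hstmt)))
    printf("COLUMN_SIZE = %d\n", (int)colSize);
}
{noformat}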



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (TRAFODION-2912) Non-deterministic scalar UDFs not executed once per row

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345283#comment-16345283
 ] 

ASF GitHub Bot commented on TRAFODION-2912:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1420


> Non-deterministic scalar UDFs not executed once per row
> ---
>
> Key: TRAFODION-2912
> URL: https://issues.apache.org/jira/browse/TRAFODION-2912
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.0-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>Priority: Major
> Fix For: 2.3
>
>
> This problem was found by Andy Yang.
> Andy created a random generator scalar UDF and found that it did not return a 
> different random value for each row:
> {noformat}
> >>select scalar_rand_udf(), scalar_rand_udf()
> +>from (values (1), (2), (3)) T(s);
> RND RND 
> --- ---
> 846930886 1804289383
> 846930886 1804289383
> 846930886 1804289383
> --- 3 row(s) selected.
> >>
> {noformat}
> Here is the explain, it shows that we are using hash joins, not nested joins, 
> to evaluate the UDFs:
> {noformat}
> >>explain options 'f' s;
> LC   RC   OP   OPERATOR             OPT              DESCRIPTION       CARD
> ---- ---- ---- -------------------- ---------------- ----------------- ---------
> 5    .    6    root                                                    3.00E+000
> 4    1    5    hybrid_hash_join                                        3.00E+000
> 3    2    4    hybrid_hash_join                                        1.00E+000
> .    .    3    isolated_scalar_udf  SCALAR_RAND_UDF                    1.00E+000
> .    .    2    isolated_scalar_udf  SCALAR_RAND_UDF                    1.00E+000
> .    .    1    tuplelist                                               3.00E+000
> --- SQL operation complete.
> >>
> {noformat}
> The problem is that we don't check for non-deterministic UDFs when we 
> transform a TSJ to a regular join in the transformer or normalizer. We don't 
> even set the non-deterministic flag in the group attributes of the 
> IsolatedScalarUDF node.
> The fix is to set this flag correctly and to add a check and not transform 
> routine joins for non-deterministic isolated scalar UDFs into a regular join.
> To recreate:
> Here is the source code of the UDF:
> {noformat}
> #include "sqludr.h"
> #include <stdlib.h>
> SQLUDR_LIBFUNC SQLUDR_INT32 scalar_rand_udf(SQLUDR_INT32 *out1,
> SQLUDR_INT16 *outInd1,
> SQLUDR_TRAIL_ARGS)
> {
>   if (calltype == SQLUDR_CALLTYPE_FINAL)
> return SQLUDR_SUCCESS;
>   (*out1) = rand();
>   return SQLUDR_SUCCESS;
> }
> {noformat}
> Compile the UDF:
> {noformat}
> gcc -g -Wall -I$TRAF_HOME/export/include/sql -shared -fPIC -o 
> scalar_rand_udf.so scalar_rand_udf.c
> {noformat}
> Create the UDF and run it:
> {noformat}
> drop function scalar_rand_udf;
> drop library scalar_rand_udf_lib;
> create library scalar_rand_udf_lib
>  file '/home/zellerh/src/scalar_rand_udf/scalar_rand_udf.so';
> create function scalar_rand_udf() returns (rnd int)
>   external name 'scalar_rand_udf' library scalar_rand_udf_lib
>   not deterministic no sql no transaction required;
> prepare s from
> select scalar_rand_udf(), scalar_rand_udf()
> from (values (1), (2), (3)) T(s);
> explain options 'f' s;
> execute s;
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2871) Trafodion Security Team and ML

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346059#comment-16346059
 ] 

ASF GitHub Bot commented on TRAFODION-2871:
---

GitHub user svarnau opened a pull request:

https://github.com/apache/trafodion/pull/1424

[TRAFODION-2871] Add web references to security mailing list



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/svarnau/trafodion j2871

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1424.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1424


commit e0b1dbc5d3ba15ac5b5b93cf7f1905be65c4b0d8
Author: Steve Varnau 
Date:   2018-01-31T00:43:41Z

[TRAFODION-2871] Add web references to security mailing list




> Trafodion Security Team and ML
> --
>
> Key: TRAFODION-2871
> URL: https://issues.apache.org/jira/browse/TRAFODION-2871
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: documentation, website
>Reporter: Pierre Smits
>Assignee: Steve Varnau
>Priority: Major
>  Labels: security
>
> Security threats are everywhere. We should have the constructs in place to 
> address these properly, meaning:
> * have a Trafodion security ml
> * etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2938) Add *GET PRIVILEGES* for GET Statement in *Trafodion SQL Reference Manual*

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346353#comment-16346353
 ] 

ASF GitHub Bot commented on TRAFODION-2938:
---

GitHub user liuyu000 opened a pull request:

https://github.com/apache/trafodion/pull/1426

[TRAFODION-2938] Add *GET PRIVILEGES* for GET Statement in *Trafodion SQL 
Reference Manual*



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/liuyu000/trafodion GET_Statement

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1426.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1426


commit c59195e62d68a9878d141ed6c711772d0e7a1114
Author: liu.yu 
Date:   2018-01-31T07:09:07Z

Add GET PRIVILEGES for GET Statement




> Add *GET PRIVILEGES* for GET Statement in *Trafodion SQL Reference Manual*
> --
>
> Key: TRAFODION-2938
> URL: https://issues.apache.org/jira/browse/TRAFODION-2938
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2929) Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*

2018-01-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342851#comment-16342851
 ] 

ASF GitHub Bot commented on TRAFODION-2929:
---

Github user liuyu000 commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1416#discussion_r164331021
  
--- Diff: docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc ---
@@ -517,6 +518,18 @@ Bulk Loader is executing.
 specifies that the target table, which is an index, be populated with
 data from the parent table.
 
+** `REBUILD INDEXES`
++
+specifies that indexes of the target table will be updated automatically 
when the source table 
+is updated. 
++
+This is the default behavior of the LOAD Statement, that is, even if this 
option is not 
+specified, the LOAD Statement will rebuild indexes except the 
--- End diff --

Thanks Dave, your comment has been incorporated :)


> Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*
> 
>
> Key: TRAFODION-2929
> URL: https://issues.apache.org/jira/browse/TRAFODION-2929
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2929) Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*

2018-01-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342902#comment-16342902
 ] 

ASF GitHub Bot commented on TRAFODION-2929:
---

Github user liuyu000 commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1416#discussion_r164336688
  
--- Diff: docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc ---
@@ -517,6 +518,18 @@ Bulk Loader is executing.
 specifies that the target table, which is an index, be populated with
 data from the parent table.
 
+** `REBUILD INDEXES`
++
+specifies that indexes of the target table will be updated automatically 
when the source table 
+is updated. 
++
+This is the default behavior of the LOAD Statement, that is, even if this 
option is not 
+specified, the LOAD Statement will rebuild indexes except the 
+CQD `TRAF_LOAD_ALLOW_RISKY_INDEX_MAINTENANCE` is turned *ON*. This CQD is 
turned *OFF* by default, 
--- End diff --

Thanks Dave, your comment has been incorporated :)


> Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*
> 
>
> Key: TRAFODION-2929
> URL: https://issues.apache.org/jira/browse/TRAFODION-2929
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2934) Add ROLLUP Function in Aggregate (Set) Functions in *Trafodion SQL Reference Manual*

2018-01-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342917#comment-16342917
 ] 

ASF GitHub Bot commented on TRAFODION-2934:
---

GitHub user liuyu000 opened a pull request:

https://github.com/apache/trafodion/pull/1419

[TRAFODION-2934] Add *ROLLUP Function* in Aggregate (Set) Functions in 
*Trafodion SQL Reference Manual*



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/liuyu000/trafodion ROLLUP2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1419.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1419


commit e36a94da5357abc7692d62e46eb756e3f0ea2650
Author: liu.yu 
Date:   2018-01-29T05:35:40Z

Add ROLLUP Function in Aggregate (Set) Function




> Add ROLLUP Function in Aggregate (Set) Functions in *Trafodion SQL Reference 
> Manual*
> 
>
> Key: TRAFODION-2934
> URL: https://issues.apache.org/jira/browse/TRAFODION-2934
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2929) Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*

2018-01-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342910#comment-16342910
 ] 

ASF GitHub Bot commented on TRAFODION-2929:
---

Github user liuyu000 commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1416#discussion_r164338657
  
--- Diff: docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc ---
@@ -1173,6 +1186,362 @@ SQL> POPULATE INDEX index_target_table4 ON 
target_table4;
 SQL> DROP INDEX index_target_table4;
 --- SQL operation complete.
 ```
+
+[[rebuild_indexes_examples]]
+ Examples of `REBUILD INDEXES`
+
+Suppose that we have following tables:
+
+_source_table_:
+
+```
+SQL>select count(*) from source_table;
+(EXPR)
+--------------------
+ 100
+
+--- 1 row(s) selected. 
+```
+
+_target_table1_ has the same structure as _target_table2_, here takes 
_target_table1_ for example:
+
+```
+SQL>CREATE TABLE target_table1
+  ( 
+ID   INT NO DEFAULT NOT NULL NOT DROPPABLE 
NOT
+  SERIALIZED
+  , NUM  INT DEFAULT NULL NOT SERIALIZED
+  , CARD_ID  LARGEINT DEFAULT NULL NOT SERIALIZED
+  , PRICE    DECIMAL(11, 3) DEFAULT NULL NOT 
SERIALIZED
+  , START_DATE   DATE DEFAULT NULL NOT SERIALIZED
+  , START_TIME   TIME(0) DEFAULT NULL NOT SERIALIZED
+  , END_TIME TIMESTAMP(6) DEFAULT NULL NOT 
SERIALIZED
+  , B_YEAR   INTERVAL YEAR(10) DEFAULT NULL NOT
+  SERIALIZED
+  , B_YM INTERVAL YEAR(5) TO MONTH DEFAULT 
NULL NOT
+  SERIALIZED
+  , B_DS INTERVAL DAY(10) TO SECOND(3) DEFAULT 
NULL
+  NOT SERIALIZED
+  , PRIMARY KEY (ID ASC)
+  )
+  SALT USING 9 PARTITIONS
+  ATTRIBUTES ALIGNED FORMAT NAMESPACE 'TRAF_150' 
+  HBASE_OPTIONS 
+  ( 
+MEMSTORE_FLUSH_SIZE = '1073741824' 
+  ) 
+;
+```
+
+* This example compares the execution time of using LOAD Statement without 
options and 
+using `LOAD WITH REBUILD INDEXES` when the CQD 
`TRAF_LOAD_ALLOW_RISKY_INDEX_MAINTENANCE` 
+is turned *OFF* by default. These two statements take almost the same time.
+
++
+```
+SQL>CREATE INDEX index_target_table1 ON target_table1(id);
+--- SQL operation complete.
+
+SQL>SET STATISTICS ON;
+
+SQL>LOAD INTO target_table1 SELECT * FROM source_table WHERE id < 301;
+
+UTIL_OUTPUT
+-----------------------------------------------------------------------------
+Task:  LOAD            Status: Started    Object: TRAFODION.SEABASE.TARGET_TABLE1
+Task:  CLEANUP         Status: Started    Time: 2018-01-18 13:33:52.310
+Task:  CLEANUP         Status: Ended      Time: 2018-01-18 13:33:52.328
+Task:  CLEANUP         Status: Ended      Elapsed Time:    00:00:00.019
+Task:  DISABLE INDEXE  Status: Started    Time: 2018-01-18 13:33:52.328
+Task:  DISABLE INDEXE  Status: Ended      Time: 2018-01-18 13:34:04.709
+Task:  DISABLE INDEXE  Status: Ended      Elapsed Time:    00:00:12.381
+Task:  LOADING DATA    Status: Started    Time: 2018-01-18 13:34:04.709
+       Rows Processed: 300
+       Error Rows: 0
+Task:  LOADING DATA    Status: Ended      Time: 2018-01-18 13:34:21.629
+Task:  LOADING DATA    Status: Ended      Elapsed Time:    00:00:16.919
+Task:  COMPLETION      Status: Started    Time: 2018-01-18 13:34:21.629
+       Rows Loaded:    300
+Task:  COMPLETION      Status: Ended      Time: 2018-01-18 13:34:22.436
+Task:  COMPLETION      Status: Ended      Elapsed Time:    00:00:00.808
+Task:  POPULATE INDEX  Status: Started    Time: 2018-01-18 13:34:22.436
+Task:  POPULATE INDEX  Status: Ended      Time: 2018-01-18 13:34:31.116
+Task:  POPULATE INDEX  Status: Ended      Elapsed Time:    00:00:08.680
+--- SQL operation complete.
+
+Start Time 2018/01/18 13:33:51.478782
+End Time   2018/01/18 13:34:31.549491
+Elapsed Time  00:00:40.070709 
+Compile Time  00:00:00.510024   
+Execution Time00:00:39.559433 
+
+SQL>LOAD INTO target_table1 SELECT * FROM source_table WHERE id > 300;  
+UTIL_OUTPUT


[jira] [Commented] (TRAFODION-2929) Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*

2018-01-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342909#comment-16342909
 ] 

ASF GitHub Bot commented on TRAFODION-2929:
---

Github user liuyu000 commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1416#discussion_r164338652
  
--- Diff: docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc ---
@@ -1173,6 +1186,362 @@ SQL> POPULATE INDEX index_target_table4 ON 
target_table4;
 SQL> DROP INDEX index_target_table4;
 --- SQL operation complete.
 ```
+
+[[rebuild_indexes_examples]]
+ Examples of `REBUILD INDEXES`
+
+Suppose that we have following tables:
+
+_source_table_:
+
+```
+SQL>select count(*) from source_table;
+(EXPR)
+
+ 100
+
+--- 1 row(s) selected. 
+```
+
+_target_table1_ has the same structure as _target_table2_, here takes 
_target_table1_ for example:
--- End diff --

Thanks Dave, your comment has been incorporated :)


> Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*
> 
>
> Key: TRAFODION-2929
> URL: https://issues.apache.org/jira/browse/TRAFODION-2929
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2917) Refactor Trafodion implementation of hdfs scan for text formatted hive tables

2018-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342238#comment-16342238
 ] 

ASF GitHub Bot commented on TRAFODION-2917:
---

Github user sureshsubbiah commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1417#discussion_r164276782
  
--- Diff: core/sql/executor/HdfsClient_JNI.cpp ---
@@ -0,0 +1,452 @@
+//**
+// @@@ START COPYRIGHT @@@
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+//
+// @@@ END COPYRIGHT @@@
+// **
+
+#include "QRLogger.h"
+#include "Globals.h"
+#include "jni.h"
+#include "HdfsClient_JNI.h"
+
+// 
===
+// = Class HdfsScan
+// 
===
+
+JavaMethodInit* HdfsScan::JavaMethods_ = NULL;
+jclass HdfsScan::javaClass_ = 0;
+bool HdfsScan::javaMethodsInitialized_ = false;
+pthread_mutex_t HdfsScan::javaMethodsInitMutex_ = 
PTHREAD_MUTEX_INITIALIZER;
+
+static const char* const hdfsScanErrorEnumStr[] = 
+{
--- End diff --

I am surprised that this is empty. Is that because HdfsScan java side is 
now in preview and error handling has not been introduced yet?
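
For reference, the pattern the reviewer expects to see populated, as used by 
the other *_JNI classes: an error enum with a parallel string table. The 
entries below are illustrative guesses, not the actual HdfsScan error list:

{noformat}
// Sketch of the enum-plus-string-table convention; keep HDFS_SCAN_LAST
// last so it bounds the string table.
enum HDFS_Scan_RetCode {
  HDFS_SCAN_OK = 0,
  HDFS_SCAN_ERROR_SETSCANRANGES_PARAM,
  HDFS_SCAN_ERROR_SETSCANRANGES_EXCEPTION,
  HDFS_SCAN_ERROR_TRAFHDFSREAD_EXCEPTION,
  HDFS_SCAN_LAST
};

static const char* const hdfsScanErrorEnumStr[] =
{
  "No error.",
  "Invalid parameter to setScanRanges().",
  "Java exception in setScanRanges().",
  "Java exception in trafHdfsRead().",
};
{noformat}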


> Refactor Trafodion implementation of hdfs scan for text formatted hive tables
> -
>
> Key: TRAFODION-2917
> URL: https://issues.apache.org/jira/browse/TRAFODION-2917
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: Selvaganesan Govindarajan
>Priority: Major
> Fix For: 2.3
>
>
> Find below the general outline of hdfs scan for text formatted hive tables.
> Compiler returns a list of scan ranges and the begin range and number of 
> ranges to be done by each instance of TCB in TDB. This list of scan ranges is 
> also re-computed at run time possibly based on a CQD
> The scan range for a TCB can come from the same or different hdfs files.  TCB 
> creates two threads to read these ranges.Two ranges (for the TCB) are 
> initially assigned to these threads. As and when a range is completed, the 
> next range (assigned for the TCB) is picked up by the thread. Ranges are read 
> in multiples of hdfs scan buffer size at the TCB level. Default hdfs scan 
> buffer size is 64 MB. Rows from hdfs scan buffer is processed and moved into 
> up queue. If the range contains a record split, then the range is extended to 
> read up to range tail IO size to get the full row. The range that had the 
> latter part of the row ignores it because the former range processes it. 
> Record split at the file level is not possible and/or not supported.
>  For compression, the compiler returns the range info such that the hdfs scan 
> buffer can hold the full uncompressed buffer.
>  Cons:
> Reader threads feature too complex to maintain in C++
> Error handling at the layer below the TCB is missing or errors are not 
> propagated to work method causing incorrect results
> Possible multiple copying of data
> Libhdfs calls are not optimized. It was observed that the method Ids are 
> being obtained many times. Need to check if this problem still exists.
> Now that we clearly know what is expected, it could be optimized better
>   - Reduced scan buffer size for smoother data flow
>   - Better thread utilization
>   - Avoid multiple copying of data.
> Unable to comprehend the need for two threads for pre-fetch especially when 
> one range is completed fully before the data from next range is processed.
>  Following are the hdfsCalls used by programs at exp and executor directory.
>   U hdfsCloseFile
>  U hdfsConnect
>  U hdfsDelete
>  U 

[jira] [Commented] (TRAFODION-2917) Refactor Trafodion implementation of hdfs scan for text formatted hive tables

2018-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342290#comment-16342290
 ] 

ASF GitHub Bot commented on TRAFODION-2917:
---

Github user sureshsubbiah commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1417#discussion_r164280205
  
--- Diff: core/sql/src/main/java/org/trafodion/sql/HDFSClient.java ---
@@ -0,0 +1,319 @@
+// @@@ START COPYRIGHT @@@
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+//
+// @@@ END COPYRIGHT @@@
+
+package org.trafodion.sql;
+
+import org.apache.log4j.PropertyConfigurator;
+import org.apache.log4j.Logger;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.conf.Configuration;
+import java.nio.ByteBuffer;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.concurrent.Callable;
+import java.util.concurrent.Future;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.io.compress.CodecPool;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.GzipCodec;
+import org.apache.hadoop.io.SequenceFile.CompressionType;
+import org.apache.hadoop.util.ReflectionUtils;
+
+public class HDFSClient 
+{
+   static Logger logger_ = Logger.getLogger(HDFSClient.class.getName());
+   private static Configuration config_ = null;
+   private static ExecutorService executorService_ = null;
+   private static FileSystem defaultFs_ = null;
+   private FileSystem fs_ = null;
+   private int bufNo_;
+   private FSDataInputStream fsdis_; 
+   private OutputStream outStream_;
+   private String filename_;
+   private ByteBuffer buf_;
+   private int bufLen_;
+   private int bufOffset_ = 0;
+   private long pos_ = 0;
+   private int len_ = 0;
+   private int lenRemain_ = 0; 
+   private int blockSize_; 
+   private int bytesRead_;
+   private Future future_ = null;
+
+   static {
+  String confFile = System.getProperty("trafodion.log4j.configFile");
+  System.setProperty("trafodion.root", System.getenv("TRAF_HOME"));
+  if (confFile == null) {
+ confFile = System.getenv("TRAF_CONF") + "/log4j.sql.config";
+  }
+  PropertyConfigurator.configure(confFile);
+  config_ = TrafConfiguration.create(TrafConfiguration.HDFS_CONF);
+  executorService_ = Executors.newCachedThreadPool();
+  try {
+ defaultFs_ = FileSystem.get(config_);
+  }
+  catch (IOException ioe) {
+ throw new RuntimeException("Exception in HDFSClient static 
block", ioe);
+  }
+   }
+
+   class HDFSRead implements Callable 
--- End diff --

Could you please explain how the classes HdfsClient, HdfsClient.HDFSRead 
and HdfsScan are related? Thank you for the nice comments in HdfsScan.java. I 
took HdfsClient to be the class that contains all the methods that we removed 
from SequenceFileWriter. If that is true, why do we need an HDFSRead inner 
class? Is this for the future, or is there some functionality that I missed? 
Do we need read support for error row logging?
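
A rough C++ analogue of the relationship being asked about, under the 
assumption that HDFSRead is simply the Callable task that HDFSClient submits 
to its executor service. std::async stands in for the thread pool, and the 
stand-in readRange() for fsdis_.read():

{noformat}
#include <algorithm>
#include <cstddef>
#include <functional>
#include <future>
#include <vector>

// Stand-in for fsdis_.read(): copy bytes [pos, pos+len) of a fake "file"
// into buf and return the count, like the Integer HDFSRead.call() returns.
static size_t readRange(const std::vector<char> &file, std::vector<char> &buf,
                        size_t pos, size_t len)
{
  if (pos >= file.size())
    return 0;
  size_t n = std::min({len, file.size() - pos, buf.size()});
  std::copy(file.begin() + pos, file.begin() + pos + n, buf.begin());
  return n;
}

// Issue the read on a worker thread and hand back the future, mirroring how
// HDFSClient submits an HDFSRead task and keeps future_ to wait on later.
std::future<size_t> startRead(const std::vector<char> &file,
                              std::vector<char> &buf, size_t pos, size_t len)
{
  return std::async(std::launch::async, readRange, std::cref(file),
                    std::ref(buf), pos, len);
}
{noformat}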


> Refactor Trafodion implementation of hdfs scan for text formatted hive tables
> -
>
> Key: TRAFODION-2917
> URL: https://issues.apache.org/jira/browse/TRAFODION-2917
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: 

[jira] [Commented] (TRAFODION-2917) Refactor Trafodion implementation of hdfs scan for text formatted hive tables

2018-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342233#comment-16342233
 ] 

ASF GitHub Bot commented on TRAFODION-2917:
---

Github user sureshsubbiah commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1417#discussion_r164276571
  
--- Diff: core/sql/executor/ExHdfsScan.cpp ---
@@ -283,6 +285,8 @@ void ExHdfsScanTcb::freeResources()
  ExpLOBinterfaceCleanup(lobGlob_, (NAHeap 
*)getGlobals()->getDefaultHeap());
  lobGlob_ = NULL;
   }
+  if (hdfsClient_ != NULL) 
+ NADELETE(hdfsClient_, HdfsClient, getHeap());
--- End diff --

Same comment as Eric on ExHBaseAccess::freeResources(). Should we release 
loggingFileName_ here? Constructor guarantees it is never null.
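
To make the suggestion concrete, a small sketch of the ownership pattern in 
question; plain new/delete stand in for the NAHeap allocation and matching 
NADELETEBASIC call the executor actually uses:

{noformat}
#include <cstring>

// Illustrative owner: a name buffer allocated in the constructor must be
// released in the matching teardown path, which is what the review asks
// ExHdfsScanTcb::freeResources() to do for loggingFileName_.
class ScanLikeTcb
{
public:
  explicit ScanLikeTcb(const char *name)
  {
    loggingFileName_ = new char[strlen(name) + 1];
    strcpy(loggingFileName_, name);
  }
  void freeResources()
  {
    delete [] loggingFileName_;
    loggingFileName_ = NULL;   // safe against a second freeResources() call
  }
  ~ScanLikeTcb() { freeResources(); }
private:
  char *loggingFileName_;
};
{noformat}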


> Refactor Trafodion implementation of hdfs scan for text formatted hive tables
> -
>
> Key: TRAFODION-2917
> URL: https://issues.apache.org/jira/browse/TRAFODION-2917
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: Selvaganesan Govindarajan
>Priority: Major
> Fix For: 2.3
>
>
> Find below the general outline of hdfs scan for text formatted hive tables.
> Compiler returns a list of scan ranges and the begin range and number of 
> ranges to be done by each instance of TCB in TDB. This list of scan ranges is 
> also re-computed at run time possibly based on a CQD
> The scan range for a TCB can come from the same or different hdfs files.  TCB 
> creates two threads to read these ranges.Two ranges (for the TCB) are 
> initially assigned to these threads. As and when a range is completed, the 
> next range (assigned for the TCB) is picked up by the thread. Ranges are read 
> in multiples of hdfs scan buffer size at the TCB level. Default hdfs scan 
> buffer size is 64 MB. Rows from hdfs scan buffer is processed and moved into 
> up queue. If the range contains a record split, then the range is extended to 
> read up to range tail IO size to get the full row. The range that had the 
> latter part of the row ignores it because the former range processes it. 
> Record split at the file level is not possible and/or not supported.
>  For compression, the compiler returns the range info such that the hdfs scan 
> buffer can hold the full uncompressed buffer.
>  Cons:
> Reader threads feature too complex to maintain in C++
> Error handling at the layer below the TCB is missing or errors are not 
> propagated to work method causing incorrect results
> Possible multiple copying of data
> Libhdfs calls are not optimized. It was observed that the method Ids are 
> being obtained many times. Need to check if this problem still exists.
> Now that we clearly know what is expected, it could be optimized better
>   - Reduced scan buffer size for smoother data flow
>   - Better thread utilization
>   - Avoid multiple copying of data.
> Unable to comprehend the need for two threads for pre-fetch especially when 
> one range is completed fully before the data from next range is processed.
>  Following are the hdfsCalls used by programs at exp and executor directory.
>   U hdfsCloseFile
>  U hdfsConnect
>  U hdfsDelete
>  U hdfsExists
>  U hdfsFlush
>      U hdfsFreeFileInfo
>  U hdfsGetPathInfo
>  U hdfsListDirectory
>  U hdfsOpenFile
>  U hdfsPread
>  U hdfsRename
>  U hdfsWrite
>  U hdfsCreateDirectory
>  New implementation
>  Make changes to use direct Java APIs for these calls. However, come up with 
> better mechanism to move the data from Java and JNI, avoid unnecessary 
> copying of data, better thread management via Executor concepts in Java. 
> Hence it won’t be direct mapping of these calls to hdfs Java API. Instead, 
> use the abstraction like what is being done for HBase access.
>  I believe newer implementation will be optimized better and hence improved 
> performance. (but not many folds)
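
One way to picture the "range tail IO" rule described above, as a hedged 
sketch with illustrative names rather than the executor's actual ones: a scan 
range that ends mid-row is extended by up to tailIoSize bytes so the thread 
that owns the range can finish the split row, while the neighbouring range 
skips that partial row.

{noformat}
#include <algorithm>
#include <cstdint>

// Returns the file offset up to which this range may read. The last range
// of the file has nothing to extend into; any other range may read at most
// tailIoSize extra bytes to complete a row split across the boundary.
int64_t effectiveRangeEnd(int64_t rangeStart, int64_t rangeLen,
                          int64_t fileLen, int64_t tailIoSize)
{
  int64_t end = rangeStart + rangeLen;
  if (end >= fileLen)
    return fileLen;
  return std::min(end + tailIoSize, fileLen);
}
{noformat}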



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2917) Refactor Trafodion implementation of hdfs scan for text formatted hive tables

2018-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342270#comment-16342270
 ] 

ASF GitHub Bot commented on TRAFODION-2917:
---

Github user selvaganesang commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1417#discussion_r164278860
  
--- Diff: core/sql/executor/ExHdfsScan.cpp ---
@@ -1948,6 +1948,54 @@ short ExHdfsScanTcb::handleDone(ExWorkProcRetcode &rc)
   return 0;
 }
 
+void ExHdfsScanTcb::handleException(NAHeap *heap,
--- End diff --

JavaObjectInterface is the base class for all these classes.


> Refactor Trafodion implementation of hdfs scan for text formatted hive tables
> -
>
> Key: TRAFODION-2917
> URL: https://issues.apache.org/jira/browse/TRAFODION-2917
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: Selvaganesan Govindarajan
>Priority: Major
> Fix For: 2.3
>
>
> Find below the general outline of hdfs scan for text formatted hive tables.
> Compiler returns a list of scan ranges and the begin range and number of 
> ranges to be done by each instance of TCB in TDB. This list of scan ranges is 
> also re-computed at run time possibly based on a CQD
> The scan range for a TCB can come from the same or different hdfs files.  TCB 
> creates two threads to read these ranges.Two ranges (for the TCB) are 
> initially assigned to these threads. As and when a range is completed, the 
> next range (assigned for the TCB) is picked up by the thread. Ranges are read 
> in multiples of hdfs scan buffer size at the TCB level. Default hdfs scan 
> buffer size is 64 MB. Rows from hdfs scan buffer is processed and moved into 
> up queue. If the range contains a record split, then the range is extended to 
> read up to range tail IO size to get the full row. The range that had the 
> latter part of the row ignores it because the former range processes it. 
> Record split at the file level is not possible and/or not supported.
>  For compression, the compiler returns the range info such that the hdfs scan 
> buffer can hold the full uncompressed buffer.
>  Cons:
> Reader threads feature too complex to maintain in C++
> Error handling at the layer below the TCB is missing or errors are not 
> propagated to work method causing incorrect results
> Possible multiple copying of data
> Libhdfs calls are not optimized. It was observed that the method Ids are 
> being obtained many times. Need to check if this problem still exists.
> Now that we clearly know what is expected, it could be optimized better
>   - Reduced scan buffer size for smoother data flow
>   - Better thread utilization
>   - Avoid multiple copying of data.
> Unable to comprehend the need for two threads for pre-fetch especially when 
> one range is completed fully before the data from next range is processed.
>  Following are the hdfsCalls used by programs at exp and executor directory.
>   U hdfsCloseFile
>  U hdfsConnect
>  U hdfsDelete
>  U hdfsExists
>  U hdfsFlush
>      U hdfsFreeFileInfo
>  U hdfsGetPathInfo
>  U hdfsListDirectory
>  U hdfsOpenFile
>  U hdfsPread
>  U hdfsRename
>  U hdfsWrite
>  U hdfsCreateDirectory
>  New implementation
>  Make changes to use direct Java APIs for these calls. However, come up with 
> better mechanism to move the data from Java and JNI, avoid unnecessary 
> copying of data, better thread management via Executor concepts in Java. 
> Hence it won’t be direct mapping of these calls to hdfs Java API. Instead, 
> use the abstraction like what is being done for HBase access.
>  I believe newer implementation will be optimized better and hence improved 
> performance. (but not many folds)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2917) Refactor Trafodion implementation of hdfs scan for text formatted hive tables

2018-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342288#comment-16342288
 ] 

ASF GitHub Bot commented on TRAFODION-2917:
---

Github user sureshsubbiah commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1417#discussion_r164280041
  
--- Diff: core/sql/src/main/java/org/trafodion/sql/HDFSClient.java ---
@@ -0,0 +1,319 @@
+// @@@ START COPYRIGHT @@@
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+//
+// @@@ END COPYRIGHT @@@
+
+package org.trafodion.sql;
+
+import org.apache.log4j.PropertyConfigurator;
+import org.apache.log4j.Logger;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.conf.Configuration;
+import java.nio.ByteBuffer;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.concurrent.Callable;
+import java.util.concurrent.Future;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.io.compress.CodecPool;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.GzipCodec;
+import org.apache.hadoop.io.SequenceFile.CompressionType;
+import org.apache.hadoop.util.ReflectionUtils;
+
+public class HDFSClient 
+{
+   static Logger logger_ = Logger.getLogger(HDFSClient.class.getName());
+   private static Configuration config_ = null;
+   private static ExecutorService executorService_ = null;
+   private static FileSystem defaultFs_ = null;
+   private FileSystem fs_ = null;
+   private int bufNo_;
+   private FSDataInputStream fsdis_; 
+   private OutputStream outStream_;
+   private String filename_;
+   private ByteBuffer buf_;
+   private int bufLen_;
+   private int bufOffset_ = 0;
+   private long pos_ = 0;
+   private int len_ = 0;
+   private int lenRemain_ = 0; 
+   private int blockSize_; 
+   private int bytesRead_;
+   private Future future_ = null;
+
+   static {
+  String confFile = System.getProperty("trafodion.log4j.configFile");
+  System.setProperty("trafodion.root", System.getenv("TRAF_HOME"));
+  if (confFile == null) {
+ confFile = System.getenv("TRAF_CONF") + "/log4j.sql.config";
+  }
+  PropertyConfigurator.configure(confFile);
+  config_ = TrafConfiguration.create(TrafConfiguration.HDFS_CONF);
+  executorService_ = Executors.newCachedThreadPool();
+  try {
+ defaultFs_ = FileSystem.get(config_);
+  }
+  catch (IOException ioe) {
+ throw new RuntimeException("Exception in HDFSClient static 
block", ioe);
+  }
+   }
+
+   class HDFSRead implements Callable 
+   {
+  int length_;
+
+  HDFSRead(int length) 
+  {
+ length_ = length;
+  }
+ 
+  public Object call() throws IOException 
+  {
+ int bytesRead;
+ if (buf_.hasArray())
+bytesRead = fsdis_.read(pos_, buf_.array(), bufOffset_, 
length_);
+ else
+ {
+buf_.limit(bufOffset_ + length_);
+bytesRead = fsdis_.read(buf_);
+ }
+ return new Integer(bytesRead);
+  }
+   }
+   
+   public HDFSClient() 
+   {
+   }
+ 
+   public HDFSClient(int bufNo, String filename, ByteBuffer buffer, long 
position, int length) throws IOException
+   {
+  bufNo_ = bufNo; 
+  filename_ = filename;
+  Path filepath = new 

[jira] [Commented] (TRAFODION-2917) Refactor Trafodion implementation of hdfs scan for text formatted hive tables

2018-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342235#comment-16342235
 ] 

ASF GitHub Bot commented on TRAFODION-2917:
---

Github user sureshsubbiah commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1417#discussion_r164276621
  
--- Diff: core/sql/executor/ExHdfsScan.cpp ---
@@ -1948,6 +1948,54 @@ short ExHdfsScanTcb::handleDone(ExWorkProcRetcode &rc)
   return 0;
 }
 
+void ExHdfsScanTcb::handleException(NAHeap *heap,
--- End diff --

Makes me wish we had a common base class for HBaseAccess, HdfsClient, and 
maybe SequenceFileRead/Write: anything that requires Java/JNI to reach another 
file format. Could some of this logic be shared in that common base class? This 
is not a request for change, simply a question that will help me understand 
future refactoring choices.


> Refactor Trafodion implementation of hdfs scan for text formatted hive tables
> -
>
> Key: TRAFODION-2917
> URL: https://issues.apache.org/jira/browse/TRAFODION-2917
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: Selvaganesan Govindarajan
>Priority: Major
> Fix For: 2.3
>
>
> Find below the general outline of hdfs scan for text formatted hive tables.
> The compiler returns a list of scan ranges, plus the begin range and number 
> of ranges to be done by each instance of the TCB, in the TDB. This list of 
> scan ranges is also re-computed at run time, possibly based on a CQD.
> The scan ranges for a TCB can come from the same or from different hdfs 
> files. The TCB creates two threads to read these ranges. Two ranges (for 
> the TCB) are initially assigned to these threads. As and when a range is 
> completed, the next range (assigned to the TCB) is picked up by the thread. 
> Ranges are read in multiples of the hdfs scan buffer size at the TCB level. 
> The default hdfs scan buffer size is 64 MB. Rows from the hdfs scan buffer 
> are processed and moved into the up queue. If the range contains a record 
> split, then the range is extended to read up to the range tail IO size to 
> get the full row. The range that has the latter part of the row ignores it, 
> because the former range processes it. A record split at the file level is 
> not possible and/or not supported.
>  For compression, the compiler returns the range info such that the hdfs 
> scan buffer can hold the full uncompressed buffer.
>  Cons:
> The reader-threads feature is too complex to maintain in C++
> Error handling at the layer below the TCB is missing, or errors are not 
> propagated to the work method, causing incorrect results
> Possible multiple copying of data
> Libhdfs calls are not optimized. It was observed that the method IDs are 
> being obtained many times. Need to check if this problem still exists.
> Now that we clearly know what is expected, it could be optimized better
>   - Reduced scan buffer size for smoother data flow
>   - Better thread utilization
>   - Avoid multiple copying of data.
> Unable to comprehend the need for two threads for pre-fetch, especially 
> when one range is completed fully before the data from the next range is 
> processed.
>  Following are the hdfs calls used by programs in the exp and executor 
> directories.
>  U hdfsCloseFile
>  U hdfsConnect
>  U hdfsDelete
>  U hdfsExists
>  U hdfsFlush
>  U hdfsFreeFileInfo
>  U hdfsGetPathInfo
>  U hdfsListDirectory
>  U hdfsOpenFile
>  U hdfsPread
>  U hdfsRename
>  U hdfsWrite
>  U hdfsCreateDirectory
>  New implementation
>  Make changes to use direct Java APIs for these calls. However, come up 
> with a better mechanism to move the data between Java and JNI, avoid 
> unnecessary copying of data, and get better thread management via Executor 
> concepts in Java. Hence it won't be a direct mapping of these calls to the 
> hdfs Java API. Instead, use an abstraction like what is being done for 
> HBase access.
>  I believe the newer implementation will be optimized better and hence 
> improve performance (but not many-fold).
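
As a rough illustration of the proposed Java Executor approach, here is a
minimal sketch of a single positioned range read (hypothetical: the path,
buffer size, and class name are made up, and the real HDFSClient carries
more state per buffer than shown here):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch of reading one scan range via an ExecutorService.
public class RangeReadSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        FileSystem fs = FileSystem.get(new Configuration());
        try (FSDataInputStream in = fs.open(new Path("/hive/warehouse/t1/part-00000"))) {
            final long pos = 0;                      // start of this scan range
            final byte[] buf = new byte[64 * 1024];  // illustrative buffer size
            // Positioned read submitted to the pool; the caller can overlap
            // processing of the previous buffer with this read.
            Callable<Integer> task = () -> in.read(pos, buf, 0, buf.length);
            Future<Integer> pending = pool.submit(task);
            System.out.println("read " + pending.get() + " bytes");
        } finally {
            pool.shutdown();
        }
    }
}
```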



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2929) Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*

2018-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342440#comment-16342440
 ] 

ASF GitHub Bot commented on TRAFODION-2929:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1416#discussion_r164288969
  
--- Diff: docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc ---
@@ -517,6 +518,18 @@ Bulk Loader is executing.
 specifies that the target table, which is an index, be populated with
 data from the parent table.
 
+** `REBUILD INDEXES`
++
+specifies that indexes of the target table will be updated automatically when the source table is updated. 
++
+This is the default behavior of the LOAD Statement, that is, even if this option is not specified, the LOAD Statement will rebuild indexes except the 
--- End diff --

Suggested wordsmith: instead of "except the", consider "unless"


> Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*
> 
>
> Key: TRAFODION-2929
> URL: https://issues.apache.org/jira/browse/TRAFODION-2929
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2929) Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*

2018-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342438#comment-16342438
 ] 

ASF GitHub Bot commented on TRAFODION-2929:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1416#discussion_r164289112
  
--- Diff: docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc ---
@@ -1173,6 +1186,362 @@ SQL> POPULATE INDEX index_target_table4 ON 
target_table4;
 SQL> DROP INDEX index_target_table4;
 --- SQL operation complete.
 ```
+
+[[rebuild_indexes_examples]]
+ Examples of `REBUILD INDEXES`
+
+Suppose that we have following tables:
+
+_source_table_:
+
+```
+SQL>select count(*) from source_table;
+(EXPR)
+
+ 100
+
+--- 1 row(s) selected. 
+```
+
+_target_table1_ has the same structure as _target_table2_, here takes 
_target_table1_ for example:
+
+```
+SQL>CREATE TABLE target_table1
+  ( 
+ID   INT NO DEFAULT NOT NULL NOT DROPPABLE 
NOT
+  SERIALIZED
+  , NUM  INT DEFAULT NULL NOT SERIALIZED
+  , CARD_ID  LARGEINT DEFAULT NULL NOT SERIALIZED
+  , PRICE    DECIMAL(11, 3) DEFAULT NULL NOT 
SERIALIZED
+  , START_DATE   DATE DEFAULT NULL NOT SERIALIZED
+  , START_TIME   TIME(0) DEFAULT NULL NOT SERIALIZED
+  , END_TIME TIMESTAMP(6) DEFAULT NULL NOT 
SERIALIZED
+  , B_YEAR   INTERVAL YEAR(10) DEFAULT NULL NOT
+  SERIALIZED
+  , B_YM INTERVAL YEAR(5) TO MONTH DEFAULT 
NULL NOT
+  SERIALIZED
+  , B_DS INTERVAL DAY(10) TO SECOND(3) DEFAULT 
NULL
+  NOT SERIALIZED
+  , PRIMARY KEY (ID ASC)
+  )
+  SALT USING 9 PARTITIONS
+  ATTRIBUTES ALIGNED FORMAT NAMESPACE 'TRAF_150' 
+  HBASE_OPTIONS 
+  ( 
+MEMSTORE_FLUSH_SIZE = '1073741824' 
+  ) 
+;
+```
+
+* This example compares the execution time of using LOAD Statement without 
options and 
+using `LOAD WITH REBUILD INDEXES` when the CQD 
`TRAF_LOAD_ALLOW_RISKY_INDEX_MAINTENANCE` 
+is turned *OFF* by default. These two statements take almost the same time.
+
++
+```
+SQL>CREATE INDEX index_target_table1 ON target_table1(id);
+--- SQL operation complete.
+
+SQL>SET STATISTICS ON;
+
+SQL>LOAD INTO target_table1 SELECT * FROM source_table WHERE id < 301;
+
+UTIL_OUTPUT
+-----------------------------------------------------------------------------
+Task:  LOAD            Status: Started    Object: TRAFODION.SEABASE.TARGET_TABLE1
+Task:  CLEANUP         Status: Started    Time: 2018-01-18 13:33:52.310
+Task:  CLEANUP         Status: Ended      Time: 2018-01-18 13:33:52.328
+Task:  CLEANUP         Status: Ended      Elapsed Time:    00:00:00.019
+Task:  DISABLE INDEXE  Status: Started    Time: 2018-01-18 13:33:52.328
+Task:  DISABLE INDEXE  Status: Ended      Time: 2018-01-18 13:34:04.709
+Task:  DISABLE INDEXE  Status: Ended      Elapsed Time:    00:00:12.381
+Task:  LOADING DATA    Status: Started    Time: 2018-01-18 13:34:04.709
+       Rows Processed: 300
+       Error Rows: 0
+Task:  LOADING DATA    Status: Ended      Time: 2018-01-18 13:34:21.629
+Task:  LOADING DATA    Status: Ended      Elapsed Time:    00:00:16.919
+Task:  COMPLETION      Status: Started    Time: 2018-01-18 13:34:21.629
+       Rows Loaded:    300
+Task:  COMPLETION      Status: Ended      Time: 2018-01-18 13:34:22.436
+Task:  COMPLETION      Status: Ended      Elapsed Time:    00:00:00.808
+Task:  POPULATE INDEX  Status: Started    Time: 2018-01-18 13:34:22.436
+Task:  POPULATE INDEX  Status: Ended      Time: 2018-01-18 13:34:31.116
+Task:  POPULATE INDEX  Status: Ended      Elapsed Time:    00:00:08.680
+--- SQL operation complete.
+
+Start Time        2018/01/18 13:33:51.478782
+End Time          2018/01/18 13:34:31.549491
+Elapsed Time      00:00:40.070709
+Compile Time      00:00:00.510024
+Execution Time    00:00:39.559433
+
+SQL>LOAD INTO target_table1 SELECT * FROM source_table WHERE id > 300;  
+UTIL_OUTPUT


[jira] [Commented] (TRAFODION-2929) Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*

2018-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342437#comment-16342437
 ] 

ASF GitHub Bot commented on TRAFODION-2929:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1416#discussion_r164288987
  
--- Diff: docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc ---
@@ -517,6 +518,18 @@ Bulk Loader is executing.
 specifies that the target table, which is an index, be populated with
 data from the parent table.
 
+** `REBUILD INDEXES`
++
+specifies that indexes of the target table will be updated automatically when the source table is updated. 
++
+This is the default behavior of the LOAD Statement, that is, even if this option is not specified, the LOAD Statement will rebuild indexes except the 
+CQD `TRAF_LOAD_ALLOW_RISKY_INDEX_MAINTENANCE` is turned *ON*. This CQD is turned *OFF* by default, 
--- End diff --

"This CQD is turned *OFF* by default, ..." Suggest making this a complete 
sentence. What comes after it seems to stand well on its own.


> Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*
> 
>
> Key: TRAFODION-2929
> URL: https://issues.apache.org/jira/browse/TRAFODION-2929
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2929) Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*

2018-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342439#comment-16342439
 ] 

ASF GitHub Bot commented on TRAFODION-2929:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1416#discussion_r164289100
  
--- Diff: docs/sql_reference/src/asciidoc/_chapters/sql_utilities.adoc ---
@@ -1173,6 +1186,362 @@ SQL> POPULATE INDEX index_target_table4 ON 
target_table4;
 SQL> DROP INDEX index_target_table4;
 --- SQL operation complete.
 ```
+
+[[rebuild_indexes_examples]]
+ Examples of `REBUILD INDEXES`
+
+Suppose that we have following tables:
+
+_source_table_:
+
+```
+SQL>select count(*) from source_table;
+(EXPR)
+
+ 100
+
+--- 1 row(s) selected. 
+```
+
+_target_table1_ has the same structure as _target_table2_, here takes 
_target_table1_ for example:
--- End diff --

Possible wordsmith: "Suppose _target_table1_ and _target_table2_ both have 
the following structure:"


> Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*
> 
>
> Key: TRAFODION-2929
> URL: https://issues.apache.org/jira/browse/TRAFODION-2929
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2932) ConnectionTimeout value for jdbc can't lager than 32768

2018-01-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342512#comment-16342512
 ] 

ASF GitHub Bot commented on TRAFODION-2932:
---

GitHub user mashengchen opened a pull request:

https://github.com/apache/trafodion/pull/1418

TRAFODION-2932 ConnectionTimeout value for jdbc cannot lager than 32768

If the given value is >= 32768, it throws the err msg: 
org.trafodion.jdbc.t4.TrafT4Exception: Invalid connection property setting: 
Incorrect value for connectionTimeout set: [40001]. Max value is: [32767]

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mashengchen/trafodion connectionTimeout

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1418.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1418


commit c72e47eb49e3612d531353cd56d4a81735ba95f7
Author: mashengchen 
Date:   2018-01-28T09:45:23Z

TRAFODION-2932 ConnectionTimeout value for jdbc cannot lager than 32768




> ConnectionTimeout value for jdbc can't lager than 32768
> ---
>
> Key: TRAFODION-2932
> URL: https://issues.apache.org/jira/browse/TRAFODION-2932
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-jdbc-t4
>Affects Versions: any
>Reporter: mashengchen
>Assignee: mashengchen
>Priority: Major
> Fix For: any
>
>
> The type of connectionTimeout is short in mxosrvr, while it is int in jdbc, so 
> if users give a value larger than 32767, a connection timeout exception or 
> some other abnormal behavior may result.
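
For illustration, a sketch of the range check this implies (hypothetical
helper, not the actual TrafT4 driver code; the message format follows the
error text quoted above):

```java
// Illustrative only; not the actual TrafT4 driver code.
public class TimeoutCheck {
    static short toServerTimeout(int connectionTimeout) {
        // mxosrvr carries the timeout as a 16-bit (short) value, so anything
        // above Short.MAX_VALUE (32767) cannot be represented on the wire.
        if (connectionTimeout < 0 || connectionTimeout > Short.MAX_VALUE)
            throw new IllegalArgumentException(
                "Incorrect value for connectionTimeout set: [" + connectionTimeout
                + "]. Max value is: [" + Short.MAX_VALUE + "]");
        return (short) connectionTimeout;
    }
}
```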



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2917) Refactor Trafodion implementation of hdfs scan for text formatted hive tables

2018-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347522#comment-16347522
 ] 

ASF GitHub Bot commented on TRAFODION-2917:
---

Github user selvaganesang commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1417#discussion_r165170894
  
--- Diff: core/sql/src/main/java/org/trafodion/sql/HDFSClient.java ---
@@ -0,0 +1,319 @@
+// @@@ START COPYRIGHT @@@
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+//
+// @@@ END COPYRIGHT @@@
+
+package org.trafodion.sql;
+
+import org.apache.log4j.PropertyConfigurator;
+import org.apache.log4j.Logger;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.conf.Configuration;
+import java.nio.ByteBuffer;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.concurrent.Callable;
+import java.util.concurrent.Future;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.io.compress.CodecPool;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.GzipCodec;
+import org.apache.hadoop.io.SequenceFile.CompressionType;
+import org.apache.hadoop.util.ReflectionUtils;
+
+public class HDFSClient 
+{
+   static Logger logger_ = Logger.getLogger(HDFSClient.class.getName());
+   private static Configuration config_ = null;
+   private static ExecutorService executorService_ = null;
+   private static FileSystem defaultFs_ = null;
+   private FileSystem fs_ = null;
+   private int bufNo_;
+   private FSDataInputStream fsdis_; 
+   private OutputStream outStream_;
+   private String filename_;
+   private ByteBuffer buf_;
+   private int bufLen_;
+   private int bufOffset_ = 0;
+   private long pos_ = 0;
+   private int len_ = 0;
+   private int lenRemain_ = 0; 
+   private int blockSize_; 
+   private int bytesRead_;
+   private Future future_ = null;
+
+   static {
+  String confFile = System.getProperty("trafodion.log4j.configFile");
+  System.setProperty("trafodion.root", System.getenv("TRAF_HOME"));
+  if (confFile == null) {
+ confFile = System.getenv("TRAF_CONF") + "/log4j.sql.config";
+  }
+  PropertyConfigurator.configure(confFile);
+  config_ = TrafConfiguration.create(TrafConfiguration.HDFS_CONF);
+  executorService_ = Executors.newCachedThreadPool();
+  try {
+ defaultFs_ = FileSystem.get(config_);
+  }
+  catch (IOException ioe) {
+ throw new RuntimeException("Exception in HDFSClient static block", ioe);
+  }
+   }
+
+   class HDFSRead implements Callable 
--- End diff --

HdfsClient is meant for both reads of and writes to hdfs text files. Need to 
check if we can use it for other file formats too. HdfsClient is also used to 
create Hdfs files.

I have made a 2nd commit that might help to understand it better, with some 
comments in ExHdfsScan.h
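
For context, a hypothetical usage sketch of the HDFSClient constructor shown
in this diff (the path, buffer number, and sizes are made up; the call that
would collect the read result is omitted because it is not shown in the diff):

```java
import java.nio.ByteBuffer;

// Sketch of driving the HDFSClient constructor from the diff above.
public class HDFSClientUsage {
    public static void main(String[] args) throws java.io.IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);
        // Schedule a read of 64 KB starting at offset 0 of one scan range;
        // buffer number 0 identifies the buffer on the C++/JNI side.
        HDFSClient reader = new HDFSClient(0, "/user/trafodion/hive/t1/part-00000",
                                           buf, 0, buf.capacity());
    }
}
```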


> Refactor Trafodion implementation of hdfs scan for text formatted hive tables
> -
>
> Key: TRAFODION-2917
> URL: https://issues.apache.org/jira/browse/TRAFODION-2917
> Project: Apache Trafodion
>  Issue Type: New Feature
>  Components: sql-general
>Reporter: Selvaganesan Govindarajan
>Priority: Major
> Fix For: 2.3
>
>
> Find below the general outline of hdfs scan for text 

[jira] [Commented] (TRAFODION-2874) LOB: Add syntax to return filename of a LOB data file for external LOBs.

2018-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347361#comment-16347361
 ] 

ASF GitHub Bot commented on TRAFODION-2874:
---

GitHub user sandhyasun opened a pull request:

https://github.com/apache/trafodion/pull/1428

[TRAFODION-2874] New syntax to retrieve the LOB HDFS filename, offset for a 
LOB handle

 for both external and internal LOBs. Also added syntax to return the starting 
offset of a particular LOB handle in the LOB HDFS data file.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sandhyasun/trafodion traf_lob_global_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1428.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1428


commit a96968e401666a99f36b1729597dab23dd87b74b
Author: Sandhya Sundaresan 
Date:   2018-01-31T18:35:48Z

New syntax to retrieve the LOB HDFS filename for both external and internal 
LOBs . Also added syntax to return starting offset of a particular LOB handle 
in the LOB Hdfs data file.




> LOB: Add syntax to return filename  of a LOB data file for external LOBs.
> -
>
> Key: TRAFODION-2874
> URL: https://issues.apache.org/jira/browse/TRAFODION-2874
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.3
>Reporter: Sandhya Sundaresan
>Assignee: Sandhya Sundaresan
>Priority: Major
>
> For external LOBs, Trafodion does not save the LOB data in its internal 
> Trafodion namespace. It saves only the LOB handle information, and the actual 
> LOB data remains outside in the user's namespace in HDFS, so inserts are very 
> efficient. During an extract from an external LOB today, we extract the LOB 
> data from the external file and return it to the user. Need to extend the 
> extract syntax to also be able to return the external LOB data filename 
> alone. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2650) The SQLite Trafodion Configuration database storage method can become stale.

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349663#comment-16349663
 ] 

ASF GitHub Bot commented on TRAFODION-2650:
---

GitHub user zcorrea opened a pull request:

https://github.com/apache/trafodion/pull/1437

[TRAFODION-2650] Restructured Trafodion Configuration code into separate

directory structure.

Move code out of monitor source tree into its own directory structure. Move 
Trafodion Configuration API header file into the export/include/trafconf 
directory.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zcorrea/trafodion TRAFODION-2650

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1437.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1437


commit 87849fcf8199857e0c525d332a061db8744c86cb
Author: Zalo Correa 
Date:   2018-02-02T01:51:35Z

[TRAFODION-2650] Restructured Trafodion Configuration code into separate
directory structure.




> The SQLite Trafodion Configuration database storage method can become stale.
> 
>
> Key: TRAFODION-2650
> URL: https://issues.apache.org/jira/browse/TRAFODION-2650
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Affects Versions: 2.2-incubating
>Reporter: Gonzalo E Correa
>Assignee: Gonzalo E Correa
>Priority: Major
> Fix For: 2.3
>
>
> The SQLite storage method can prevent a node from initializing properly. This 
> is due to the nature of SQLite, a database in a file. The file is updated 
> locally on each node. In certain failure scenarios, such as Cloudera Manager 
> and potentially Ambari installations, the file will not be updated locally. 
> This happens when the monitor process is not running and changes are made to 
> the configuration: the file on the node where the monitor process is not 
> executing will become stale and, on a node-up event, may contain old 
> configuration information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2943) shebang in file core/sqf/sql/scripts/cleanat is broken

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349756#comment-16349756
 ] 

ASF GitHub Bot commented on TRAFODION-2943:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1432


> shebang in file core/sqf/sql/scripts/cleanat is broken
> --
>
> Key: TRAFODION-2943
> URL: https://issues.apache.org/jira/browse/TRAFODION-2943
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dev-environment
>Affects Versions: any
>Reporter: Wenjun Zhu
>Priority: Trivial
> Fix For: 2.3
>
>
> The file core/sqf/sql/scripts/cleanat is broken: the shebang should be '#!', 
> but it lacks the '#' at present.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2931) call char_length on Chinese character will cause core dump.

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349758#comment-16349758
 ] 

ASF GitHub Bot commented on TRAFODION-2931:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1431


> call char_length on Chinese character will cause core dump.
> ---
>
> Key: TRAFODION-2931
> URL: https://issues.apache.org/jira/browse/TRAFODION-2931
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-general
>Affects Versions: 2.4
>Reporter: Yang, Yongfeng
>Assignee: chenyunren
>Priority: Major
>
> >>set terminal_charset utf8;
> >>select char_lentgh('中国人') from (values(1));
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x004024ea, pid=13636, tid=140591300516352
> #
> # JRE version: OpenJDK Runtime Environment (7.0_161) (build 
> 1.7.0_161-mockbuild_2017_12_06_14_28-b00)
> # Java VM: OpenJDK 64-Bit Server VM (24.161-b00 mixed mode linux-amd64 
> compressed oops)
> # Derivative: IcedTea 2.6.12
> # Distribution: CentOS release 6.9 (Final), package 
> rhel-2.6.12.0.el6_9-x86_64 u161-b00
> # Problematic frame:
> # C  [sqlci+0x24ea]  folly::fbstring_core::category() const+0xc
> #
> # Core dump written. Default location: 
> /home/centos/incubator-trafodion/core or core.13636
> #
> # An error report file with more information is saved as:
> # /tmp/jvm-13636/hs_error.log
> #
> # If you would like to submit a bug report, please include
> # instructions on how to reproduce the bug and visit:
> #   http://icedtea.classpath.org/bugzilla
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> Aborted (core dumped)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2938) Add *GET PRIVILEGES* for GET Statement in *Trafodion SQL Reference Manual*

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349764#comment-16349764
 ] 

ASF GitHub Bot commented on TRAFODION-2938:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1426


> Add *GET PRIVILEGES* for GET Statement in *Trafodion SQL Reference Manual*
> --
>
> Key: TRAFODION-2938
> URL: https://issues.apache.org/jira/browse/TRAFODION-2938
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2871) Trafodion Security Team and ML

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349767#comment-16349767
 ] 

ASF GitHub Bot commented on TRAFODION-2871:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1424


> Trafodion Security Team and ML
> --
>
> Key: TRAFODION-2871
> URL: https://issues.apache.org/jira/browse/TRAFODION-2871
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: documentation, website
>Reporter: Pierre Smits
>Assignee: Steve Varnau
>Priority: Major
>  Labels: security
>
> Security threats are everywhere. We should have the constructs in place to 
> address these properly, meaning:
> * have a Trafodion security ml
> * etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2930) [FIRST N] will cause sqlci core dump.

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349760#comment-16349760
 ] 

ASF GitHub Bot commented on TRAFODION-2930:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1430


> [FIRST N] will cause sqlci core dump.
> -
>
> Key: TRAFODION-2930
> URL: https://issues.apache.org/jira/browse/TRAFODION-2930
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.4
>Reporter: Yang, Yongfeng
>Assignee: chenyunren
>Priority: Major
>
> >>create table mytable(id int);
> --- SQL operation complete.
> >>insert into mytable values(1);
> --- 1 row(s) inserted.
> >>select [first 6000] * from mytable;
> *** EXECUTOR ASSERTION FAILURE
> *** Time: Fri Jan 26 06:48:54 2018
> *** Process: 13301
> *** File: ../executor/ExFirstN.cpp
> *** Line: 383
> *** Message: ExFirstNTcb::work(): only last 0 and last 1 supported.
> Aborted (core dumped)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2936) nullpointer error where server return without value

2018-02-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349771#comment-16349771
 ] 

ASF GitHub Bot commented on TRAFODION-2936:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1421


> nullpointer error where server return without value
> ---
>
> Key: TRAFODION-2936
> URL: https://issues.apache.org/jira/browse/TRAFODION-2936
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: xiaozhong.wang
>Priority: Critical
>
> We met a bug: when the server returns without a value, a null pointer 
> error occurs.
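
For illustration only, the class of fix implied here is a guard before
dereferencing an empty reply (hypothetical code, not the actual T4 driver
patch):

```java
import java.nio.charset.StandardCharsets;

// Hypothetical guard; the real fix lives in the driver's reply parsing.
public class ReplyGuard {
    static String valueOrEmpty(byte[] serverValue) {
        // A reply carrying no value must not be dereferenced blindly.
        if (serverValue == null || serverValue.length == 0)
            return "";
        return new String(serverValue, StandardCharsets.UTF_8);
    }
}
```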



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2927) Keep log information for UPDATE STATISTICS in case of errors

2018-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347874#comment-16347874
 ] 

ASF GitHub Bot commented on TRAFODION-2927:
---

Github user zellerh commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1429#discussion_r165237344
  
--- Diff: core/sql/ustat/hs_log.cpp ---
@@ -389,44 +371,64 @@ NABoolean HSLogMan::GetLogFile(NAString & path, const 
char* cqd_value)
 /*  until either StartLog() or */
 /*  ClearLog() methods are called. */
 /***/
-void HSLogMan::StartLog(NABoolean needExplain, const char* logFileName)
+void HSLogMan::StartLog(NABoolean needExplain)
   {
-// The GENERATE_EXPLAIN cqd captures explain data pertaining to dynamic
-// queries. Ordinarily we want it on, but for just-in-time logging 
triggered
-// by an error, we don't need it, and can't set it because 
HSFuncExecQuery
-// clears the diagnostics area, which causes the error to be lost.
-explainOn_ = needExplain;
-if (!needExplain ||
-HSFuncExecQuery("CONTROL QUERY DEFAULT GENERATE_EXPLAIN 'ON'") == 
0)
+if (!logNeeded_)  // if logging isn't already on
   {
-CollIndex activeNodes = gpClusterInfo->numOfSMPs();
-if (logFileName)
-{
-  *logFile_ = logFileName;
-   currentTimingEvent_ = -1;
-   startTime_[0] = 0;   /* reset timer   */
-   logNeeded_ = TRUE;
-}
-else if(activeNodes > 2)
-{//we consider we are running on cluster 
- //if gpClusterInfo->numOfSMPs() > 2
-   NABoolean ret = FALSE;
-   if(GetLogFile(*logFile_, 
ActiveSchemaDB()->getDefaults().getValue(USTAT_LOG)))
- ret = ContainDirExist(logFile_->data());
-
-   if(ret)
- logNeeded_ = TRUE;
-
-   currentTimingEvent_ = -1;
-   startTime_[0] = 0;   /* reset timer   */
-}
+// Construct logfile name incorporating process id and node 
number. Note that
+// the 2nd parameter of processhandle_decompose is named cpu but 
is actually
+// the node number for Seaquest (the 4th param, named nodenumber, 
is the cluster
+// number).
+Int32 nodeNum;
+Int32 pin;
+SB_Phandle_Type procHandle;
+XPROCESSHANDLE_GETMINE_(&procHandle);
+XPROCESSHANDLE_DECOMPOSE_(&procHandle, &nodeNum, &pin);
+long currentTime = (long)time(0);
+
+const size_t MAX_FILENAME_SIZE = 50;
+char qualifiers[MAX_FILENAME_SIZE];
+sprintf(qualifiers, ".%d.%d.%ld.txt", nodeNum, pin, currentTime);
+   
+std::string logFileName;
+QRLogger::getRootLogDirectory(CAT_SQL_USTAT, logFileName /* out 
*/);
+if (logFileName.size() > 0)
+  logFileName += '/';
+
+const char * ustatLog = 
ActiveSchemaDB()->getDefaults().getValue(USTAT_LOG);
+const char * fileNameStem = ustatLog + strlen(ustatLog);
+if (ustatLog == fileNameStem) 
+  fileNameStem = "ULOG";  // CQD USTAT_LOG is the empty string
 else
-{
-  *logFile_ = ActiveSchemaDB()->getDefaults().getValue(USTAT_LOG);
-   currentTimingEvent_ = -1;
-   startTime_[0] = 0;   /* reset timer   */
-   logNeeded_ = TRUE;
-}
+  {
+// strip off any directory path name; we will always use the 
logs directory
+// as configured via QRLogger
+while ((fileNameStem > ustatLog) && (*(fileNameStem - 1) != 
'/'))
+  fileNameStem--;
+  }
+
+logFileName += fileNameStem;
+logFileName += qualifiers;
+
+NABoolean logStarted = 
QRLogger::startLogFile(CAT_SQL_USTAT,logFileName.c_str());
--- End diff --

This could allow a user to create a file anywhere the trafodion user has 
write access. For UDFs, we are creating a sandbox in $TRAF_HOME/udr for 
different types of users, beginning with a public area $TRAF_HOME/udr/public. I 
wonder whether it would be better here to sandbox the log files as well, e.g. 
into something like $TRAF_HOME/logs/ustat_logs. In the UDF code we also try to 
catch names like a/../../../export/lib/myfile that would escape the sandbox.
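
For illustration, a minimal Java sketch of the canonicalize-then-prefix-check
idea described above ($TRAF_HOME/logs/ustat_logs is the reviewer's suggested
sandbox; the class and method names are made up):

```java
import java.io.File;
import java.io.IOException;

// Minimal sketch, assuming $TRAF_HOME/logs/ustat_logs as the sandbox root.
public class SandboxCheck {
    static boolean insideSandbox(String candidate) throws IOException {
        File root = new File(System.getenv("TRAF_HOME"), "logs/ustat_logs");
        // getCanonicalPath() resolves ".." and symlinks, so names like
        // "a/../../../export/lib/myfile" fail the prefix test below.
        String canonical = new File(root, candidate).getCanonicalPath();
        return canonical.startsWith(root.getCanonicalPath() + File.separator);
    }
}
```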

A related comment/question (a non-issue), maybe I can bring it up here as 
well, as it's related to the choice of log file directory: As far as I 
understand, these log files don't conform to the format we are using for 
executor, TM, DCS, etc. Therefore they can't be read by the 

[jira] [Commented] (TRAFODION-2929) Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*

2018-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344265#comment-16344265
 ] 

ASF GitHub Bot commented on TRAFODION-2929:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1416


> Add *Rebuild Indexes* for LOAD Statement in *Trafodion SQL Reference Manual*
> 
>
> Key: TRAFODION-2929
> URL: https://issues.apache.org/jira/browse/TRAFODION-2929
> Project: Apache Trafodion
>  Issue Type: Documentation
>Reporter: Liu Yu
>Assignee: Liu Yu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2912) Non-deterministic scalar UDFs not executed once per row

2018-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344276#comment-16344276
 ] 

ASF GitHub Bot commented on TRAFODION-2912:
---

Github user sureshsubbiah commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1420#discussion_r164609598
  
--- Diff: core/sql/regress/udr/TEST103_functions.cpp ---
@@ -117,11 +117,41 @@ SQLUDR_LIBFUNC SQLUDR_INT32 
genRandomNumber(SQLUDR_INT32 *in1,
 }
   }
   
-  strcpy(out, result.c_str());
+  memcpy(out, result.c_str(), result.length());
   return SQLUDR_SUCCESS;
 }
 
 
+SQLUDR_LIBFUNC SQLUDR_INT32 nonDeterministicRandom(SQLUDR_INT32 *out1,
+   SQLUDR_INT16 *outInd1,
+   SQLUDR_TRAIL_ARGS)
+{
+  if (calltype == SQLUDR_CALLTYPE_FINAL)
+return SQLUDR_SUCCESS;
+
+  // pointer to the buffer in the state area that is
+  // available for the lifetime of this statement,
+  // this can be used by the UDF to maintain state
+  int *my_state = (int *) statearea->stmt_data.data;
+
+  if (calltype == SQLUDR_CALLTYPE_INITIAL && *my_state == 0)
+{
+  *my_state = 555;
+}
+  else
+// Use a simple linear congruential generator, we
+// want deterministic regression results, despite
+// the name of this function. Note that a
--- End diff --

Typo. Extra "a". Smallest nit possible :)
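
For reference, the linear congruential generator mentioned in the diff above
advances its state by the recurrence

    x_{n+1} = (a \cdot x_n + c) \bmod m

with fixed constants a, c, and m (the classic ANSI C sample values are
a = 1103515245, c = 12345, m = 2^31), so starting from the seeded state 555
the whole sequence, and hence the regression output, is deterministic.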


> Non-deterministic scalar UDFs not executed once per row
> ---
>
> Key: TRAFODION-2912
> URL: https://issues.apache.org/jira/browse/TRAFODION-2912
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.0-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>Priority: Major
> Fix For: 2.3
>
>
> This problem was found by Andy Yang.
> Andy created a random generator scalar UDF and found that it did not return a 
> different random value for each row:
> {noformat}
> >>select scalar_rand_udf(), scalar_rand_udf()
> +>from (values (1), (2), (3)) T(s);
> RND RND 
> --- ---
> 846930886 1804289383
> 846930886 1804289383
> 846930886 1804289383
> --- 3 row(s) selected.
> >>
> {noformat}
> Here is the explain, it shows that we are using hash joins, not nested joins, 
> to evaluate the UDFs:
> {noformat}
> >>explain options 'f' s;
> LC   RC   OP   OPERATOR  OPT   DESCRIPTION   CARD
>          -
> 5.6root  3.00E+000
> 415hybrid_hash_join  3.00E+000
> 324hybrid_hash_join  1.00E+000
> ..3isolated_scalar_udf SCALAR_RAND_UDF   1.00E+000
> ..2isolated_scalar_udf SCALAR_RAND_UDF   1.00E+000
> ..1tuplelist 3.00E+000
> --- SQL operation complete.
> >>
> {noformat}
> The problem is that we don't check for non-deterministic UDFs when we 
> transform a TSJ to a regular join in the transformer or normalizer. We don't 
> even set the non-deterministic flag in the group attributes of the 
> IsolatedScalarUDF node.
> The fix is to set this flag correctly and to add a check so that routine 
> joins for non-deterministic isolated scalar UDFs are not transformed into a 
> regular join.
> To recreate:
> Here is the source code of the UDF:
> {noformat}
> #include "sqludr.h"
> #include 
> SQLUDR_LIBFUNC SQLUDR_INT32 scalar_rand_udf(SQLUDR_INT32 *out1,
> SQLUDR_INT16 *outInd1,
> SQLUDR_TRAIL_ARGS)
> {
>   if (calltype == SQLUDR_CALLTYPE_FINAL)
> return SQLUDR_SUCCESS;
>   (*out1) = rand();
>   return SQLUDR_SUCCESS;
> }
> {noformat}
> Compile the UDF:
> {noformat}
> gcc -g -Wall -I$TRAF_HOME/export/include/sql -shared -fPIC -o 
> scalar_rand_udf.so scalar_rand_udf.c
> {noformat}
> Create the UDF and run it:
> {noformat}
> drop function scalar_rand_udf;
> drop library scalar_rand_udf_lib;
> create library scalar_rand_udf_lib
>  file '/home/zellerh/src/scalar_rand_udf/scalar_rand_udf.so';
> create function scalar_rand_udf() returns (rnd int)
>   external name 'scalar_rand_udf' library scalar_rand_udf_lib
>   not deterministic no sql no transaction required;
> prepare s from
> select scalar_rand_udf(), scalar_rand_udf()
> from (values (1), (2), (3)) T(s);
> explain options 'f' s;
> execute s;
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2940) In HA env, one node lose network, when recover, trafci can't use

2018-01-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346593#comment-16346593
 ] 

ASF GitHub Bot commented on TRAFODION-2940:
---

GitHub user mashengchen opened a pull request:

https://github.com/apache/trafodion/pull/1427

TRAFODION-2940 In HA env, one node lose network, when recover, trafci can't 
use

When the network is lost for a long time and then recovers, the ZooKeeper 
session expires. At that point, check whether the current DCS master is the 
leader; if not, unbind this node's floating IP, put the DCS master back into 
its init status, and then rerun the DCS master.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mashengchen/trafodion TRAFODION-2940

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1427.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1427


commit c92bd619a310f6b1b13d4164250ee8c8cb93f1e6
Author: aven 
Date:   2018-01-31T10:33:48Z

TRAFODION-2940 In HA env, one node lose network, when recover, trafci can't 
use




> In HA env, one node lose network, when recover, trafci can't use
> 
>
> Key: TRAFODION-2940
> URL: https://issues.apache.org/jira/browse/TRAFODION-2940
> Project: Apache Trafodion
>  Issue Type: Bug
>Affects Versions: any
>Reporter: mashengchen
>Assignee: mashengchen
>Priority: Major
> Fix For: 2.3
>
>
> In an HA env, if one node loses the network for a long time, then once the 
> network recovers there will be two floating IPs and two working DCS masters, 
> and trafci can't be used.
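
For illustration, a minimal sketch of reacting to a ZooKeeper session
expiration as the fix describes (the unbind/restart hooks are hypothetical
placeholders for what DcsMaster actually does):

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;

// Minimal sketch; unbindFloatingIp()/restartMaster() are placeholders.
public class SessionExpiryWatcher implements Watcher {
    public void process(WatchedEvent event) {
        if (event.getState() == Event.KeeperState.Expired) {
            // The session expired while the network was down, so this node
            // may no longer be the leader: release the floating IP and go
            // back through leader election instead of staying bound.
            unbindFloatingIp();
            restartMaster();
        }
    }

    private void unbindFloatingIp() { /* placeholder */ }
    private void restartMaster()    { /* placeholder */ }
}
```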



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2936) nullpointer error where server return without value

2018-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344507#comment-16344507
 ] 

ASF GitHub Bot commented on TRAFODION-2936:
---

GitHub user xiaozhongwang opened a pull request:

https://github.com/apache/trafodion/pull/1421

[TRAFODION-2936] nullpointer error where server return without value

We met a bug: when the server returns without a value, a null pointer 
error occurs.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xiaozhongwang/trafodion TRAFODION-2936

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/trafodion/pull/1421.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1421


commit 5e8e9c08a99d72773eb1fbe6c1ab5c70cf31ddb2
Author: Kenny 
Date:   2018-01-30T05:02:45Z

nullpointer error where server return without value




> nullpointer error where server return without value
> ---
>
> Key: TRAFODION-2936
> URL: https://issues.apache.org/jira/browse/TRAFODION-2936
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: xiaozhong.wang
>Priority: Critical
>
> We met a bug: when the server returns without a value, a null pointer 
> error occurs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2772) retrieve a value from Json string got an error: Json value is invalid

2018-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352112#comment-16352112
 ] 

ASF GitHub Bot commented on TRAFODION-2772:
---

Github user andyyangcn commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1264#discussion_r165899916
  
--- Diff: core/sql/exp/exp_function.cpp ---
@@ -6503,8 +6503,19 @@ ex_expr::exp_return_type 
ex_function_json_object_field_text::eval(char *op_data[
 Int32 prec2 = ((SimpleType *)getOperand(2))->getPrecision();
 len2 = Attributes::trimFillerSpaces( op_data[2], prec2, len2, cs );
 }
+
 char *rltStr = NULL;
-JsonReturnType ret = json_extract_path_text(&rltStr, op_data[1], 1, op_data[2]);
+char *jsonStr = new(heap) char[len1+1];
+char *jsonAttr = new(heap) char[len2+1];
+if (jsonStr == NULL || jsonAttr == NULL)
+{
+return ex_expr::EXPR_ERROR;
+}
+memset(jsonStr, 0, len1+1);
+memset(jsonAttr, 0, len2+1);
--- End diff --

strncpy(jsonStr, op_data[1], len1);
jsonStr[len1] = '\0';
Do you mean the change should look like above?


> retrieve a value from Json string got an error: Json value is invalid
> -
>
> Key: TRAFODION-2772
> URL: https://issues.apache.org/jira/browse/TRAFODION-2772
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: Yang, Yongfeng
>Assignee: Yang, Yongfeng
>Priority: Major
>
> >>create table json(str varchar(200));
> >>select json_object_field_text('{"f2":1}', 'f2') from json;
> *** ERROR[8971] JSON value is invalid.
> --- 0 row(s) selected.
> >>
> the expected result should like below.
> >>select json_object_field_text('{"f2":1}', 'f2') from (values(1)) T;
> (EXPR)  
> 
> 1  
> --- 1 row(s) selected.
> >>
> >>select json_object_field_text('{"f2":1}', 'f2') from (values(1)) T;
> (EXPR)  
> 
> 1  
> --- 1 row(s) selected.
> >>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2943) shebang in file core/sqf/sql/scripts/cleanat is broken

2018-02-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352739#comment-16352739
 ] 

ASF GitHub Bot commented on TRAFODION-2943:
---

Github user sbroeder closed the pull request at:

https://github.com/apache/trafodion/pull/1436


> shebang in file core/sqf/sql/scripts/cleanat is broken
> --
>
> Key: TRAFODION-2943
> URL: https://issues.apache.org/jira/browse/TRAFODION-2943
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: dev-environment
>Affects Versions: any
>Reporter: Wenjun Zhu
>Priority: Trivial
> Fix For: 2.3
>
>
> The file core/sqf/sql/scripts/cleanat is broken: the shebang should be '#!', 
> but it lacks the '#' at present.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2816) ODBC data convert function split

2018-02-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353211#comment-16353211
 ] 

ASF GitHub Bot commented on TRAFODION-2816:
---

Github user Weixin-Xu commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1310#discussion_r166166997
  
--- Diff: core/conn/unixodbc/odbc/odbcclient/unixcli/cli/ctosqlconv.cpp ---
@@ -157,852 +157,982 @@ unsigned long ODBC::Ascii_To_Interval_Helper(char 
*source,
 
 
 unsigned long ODBC::ConvertCToSQL(SQLINTEGER   ODBCAppVersion,
-   SQLSMALLINT 
CDataType,
-   SQLPOINTER  
srcDataPtr,
-   SQLINTEGER  
srcLength,
-   SQLSMALLINT 
ODBCDataType,
-   SQLSMALLINT 
SQLDataType,
-   SQLSMALLINT 
SQLDatetimeCode,
-   SQLPOINTER  
targetDataPtr,
-   SQLINTEGER  
targetLength,
-   SQLINTEGER  
targetPrecision,
-   SQLSMALLINT 
targetScale,
-   SQLSMALLINT 
targetUnsigned,
-   SQLINTEGER  
targetCharSet,
-   BOOL
byteSwap,
-// FPSQLDriverToDataSource 
fpSQLDriverToDataSource,
-// DWORD   
translateOption,
-   ICUConverter* iconv,
-   UCHAR   
*errorMsg,
-   SWORD   
errorMsgMax,
-   SQLINTEGER  
EnvironmentType,
-   BOOL
RWRSFormat,
-   SQLINTEGER  
datetimeIntervalPrecision)
+SQLSMALLINTCDataType,
+SQLPOINTER srcDataPtr,
+SQLINTEGER srcLength,
+SQLPOINTER targetDataPtr,
+CDescRec*targetDescPtr,
+BOOL   byteSwap,
+#ifdef unixcli
+ICUConverter* iconv,
+#else
+FPSQLDriverToDataSource fpSQLDriverToDataSource,
+DWORD  translateOption,
+#endif
+UCHAR  *errorMsg,
+SWORD  errorMsgMax,
+SQLINTEGER EnvironmentType,
+BOOL   RWRSFormat,
+SQLINTEGER datetimeIntervalPrecision)
 {
 
-   unsigned long retCode = SQL_SUCCESS;
-   SQLPOINTER  DataPtr = NULL;
-   SQLPOINTER  outDataPtr = targetDataPtr;
-   SQLINTEGER  DataLen = DRVR_PENDING;
-   short   Offset = 0; // Used for VARCHAR fields
-   SQLINTEGER  OutLen = targetLength;
-   short targetType = 0;  //for bignum datatype
+unsigned long   retCode = SQL_SUCCESS;
+if(pdwGlobalTraceVariable && *pdwGlobalTraceVariable){
+TraceOut(TR_ODBC_DEBUG,"ConvertCToSQL(%d, %d, %#x, %d, %d, %d, %d, 
%#x, %d, %d, %d, %d, %d, %d, %#x, %d, %d, %d)",
+ODBCAppVersion,
+CDataType,
+srcDataPtr,
+srcLength,
+targetDescPtr->m_ODBCDataType,
+targetDescPtr->m_SQLDataType,
+targetDescPtr->m_SQLDatetimeCode,
+targetDataPtr,
+targetDescPtr->m_SQLOctetLength,
+targetDescPtr->m_ODBCPrecision,
+targetDescPtr->m_ODBCScale,
+targetDescPtr->m_SQLUnsigned,
+targetDescPtr->m_SQLCharset,
+byteSwap,
+errorMsg,
+errorMsgMax,
+EnvironmentType,
+RWRSFormat);
+}
+else
+RESET_TRACE();
 
+if (CDataType == SQL_C_DEFAULT)
+{
+retCode = getCDefault(targetDescPtr->m_ODBCDataType, 
ODBCAppVersion, targetDescPtr->m_SQLCharset, CDataType);
+if (retCode != SQL_SUCCESS)
+return retCode;
+}
 
-   int dec;
-   int sign;
-   int tempLen;
-   int

[jira] [Commented] (TRAFODION-2927) Keep log information for UPDATE STATISTICS in case of errors

2018-02-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355758#comment-16355758
 ] 

ASF GitHub Bot commented on TRAFODION-2927:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1429


> Keep log information for UPDATE STATISTICS in case of errors
> 
>
> Key: TRAFODION-2927
> URL: https://issues.apache.org/jira/browse/TRAFODION-2927
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 2.3
>Reporter: David Wayne Birdsall
>Assignee: David Wayne Birdsall
>Priority: Major
>
> Presently, UPDATE STATISTICS keeps a detailed log of its internal activities 
> if one has specifed "update statistics log on" in advance.
> In production scenarios, this is typically not done. That means when a 
> long-running UPDATE STATISTICS command fails on a large table, one has to 
> redo it with logging turned on in order to troubleshoot.
> A better practice might be to always log, and delete the log if the operation 
> succeeds.
> Another issue with UPDATE STATISTICS logs is their location. The directory is 
> different than other Trafodion logs and is sometimes hard to find. As part of 
> this JIRA, consideration should be given to writing the logs to the Trafodion 
> logs directory instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2772) retrieve a value from Json string got an error: Json value is invalid

2018-02-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355770#comment-16355770
 ] 

ASF GitHub Bot commented on TRAFODION-2772:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1439


> retrieve a value from Json string got an error: Json value is invalid
> -
>
> Key: TRAFODION-2772
> URL: https://issues.apache.org/jira/browse/TRAFODION-2772
> Project: Apache Trafodion
>  Issue Type: Bug
>Reporter: Yang, Yongfeng
>Assignee: Yang, Yongfeng
>Priority: Major
>
> >>create table json(str varchar(200));
> >>select json_object_field_text('{"f2":1}', 'f2') from json;
> *** ERROR[8971] JSON value is invalid.
> --- 0 row(s) selected.
> >>
> the expected result should like below.
> >>select json_object_field_text('{"f2":1}', 'f2') from (values(1)) T;
> (EXPR)  
> 
> 1  
> --- 1 row(s) selected.
> >>
> >>select json_object_field_text('{"f2":1}', 'f2') from (values(1)) T;
> (EXPR)  
> 
> 1  
> --- 1 row(s) selected.
> >>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2955) odb crash when executing sql statements in a file and the file contains space only line.

2018-02-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355788#comment-16355788
 ] 

ASF GitHub Bot commented on TRAFODION-2955:
---

Github user asfgit closed the pull request at:

https://github.com/apache/trafodion/pull/1442


> odb crash when executing sql statements in a file and the file contains space 
> only line. 
> -
>
> Key: TRAFODION-2955
> URL: https://issues.apache.org/jira/browse/TRAFODION-2955
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: db-utility-odb
>Reporter: 苏锦佩
>Assignee: 苏锦佩
>Priority: Major
>
> 1. create a sql file:
> vi temp.sql
>   1 drop table odb_copy_customer1;$ 
>   2 create table odb_copy_customer1(id int, name varchar(32));$ 
>   3 insert into odb_copy_customer1 values (1, 'hadoop');$ 
>   4 $ 
>   5 insert into odb_copy_customer1 values (2, 'hive'); $ 
> 2. ./odb64luo -u odb -p odb -d oracle -f temp.sql
> 3. [0.0.2]--- 1 row(s) inserted in 0.021s (prep 0.000s, exec 0.021s, fetch 
> 0.000s/0.000s)
> Segmentation fault



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TRAFODION-2874) LOB: Add syntax to return filename of a LOB data file for external LOBs.

2018-02-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355891#comment-16355891
 ] 

ASF GitHub Bot commented on TRAFODION-2874:
---

Github user sandhyasun commented on a diff in the pull request:

https://github.com/apache/trafodion/pull/1428#discussion_r166720182
  
--- Diff: core/sql/exp/ExpLOBaccess.cpp ---
@@ -870,6 +870,67 @@ Ex_Lob_Error ExLob::getLength(char *handleIn, Int32 
handleInLen,Int64 
   }
   return err;
 }
+Ex_Lob_Error ExLob::getOffset(char *handleIn, Int32 handleInLen, Int64 &offset, LobsSubOper so, Int64 transId)
+{
+  char logBuf[4096];
+  Int32 cliErr = 0;
+  Ex_Lob_Error err=LOB_OPER_OK; 
+  char *blackBox = new(getLobGlobalHeap()) char[MAX_LOB_FILE_NAME_LEN+6];
+  Int32 blackBoxLen = 0;
+  Int64 dummy = 0;
+  Int32 dummy2 = 0;
+  if (so != Lob_External_File)
+{
+  
+  cliErr = SQL_EXEC_LOBcliInterface(handleIn, handleInLen, NULL, NULL, NULL, NULL, LOB_CLI_SELECT_LOBOFFSET, LOB_CLI_ExecImmed, &offset, 0, 0, 0, 0, transId, lobTrace_);
+
+  if (cliErr < 0 ) {
+str_sprintf(logBuf,"CLI SELECT_LOBOFFSET returned error 
%d",cliErr);
+lobDebugInfo(logBuf, 0,__LINE__,lobTrace_);
+  
+return LOB_DESC_READ_ERROR;
+  }
+}
+ 
+  return err;
+}
+
+Ex_Lob_Error ExLob::getFileName(char *handleIn, Int32 handleInLen, char *outFileName, Int32 &outFileLen, LobsSubOper so, Int64 transId)
+{
+  char logBuf[4096];
+  Int32 cliErr = 0;
+  Ex_Lob_Error err=LOB_OPER_OK; 
+  Int64 dummy = 0;
+  Int32 dummy2 = 0;
+  if (so != Lob_External_File)
+{
+  //Derive the filename from the LOB handle and return
+  str_cpy_all(outFileName, (char *)lobDataFile_.data(), lobDataFile_.length());
--- End diff --

Internally we allocate enough to hold the name. But when the final result 
is copied out, it's copied out to a caller-provided address (a 64-bit 
address), so we are assuming the caller (mxosrvr) has allocated enough space. 


> LOB: Add syntax to return filename  of a LOB data file for external LOBs.
> -
>
> Key: TRAFODION-2874
> URL: https://issues.apache.org/jira/browse/TRAFODION-2874
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 2.3
>Reporter: Sandhya Sundaresan
>Assignee: Sandhya Sundaresan
>Priority: Major
>
> For external LOBs, Trafodion does not save the LOB data in its internal 
> Trafodion namespace. It saves only the LOB handle information, and the actual 
> LOB data remains outside in the user's namespace in HDFS, so inserts are very 
> efficient. During an extract from an external LOB today, we extract the LOB 
> data from the external file and return it to the user. Need to extend the 
> extract syntax to also be able to return the external LOB data filename 
> alone. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

