pyodbc-tests failure

2020-02-12 Thread Selva Govindarajan
How do I run pyodbc-tests in my own workspace instead of via Jenkins? I want to 
debug and confirm if the issue is caused by PR 1868.

Thanks in advance.
Selva


RE: Trafodion master rh6 Daily Test Result - 1076 - Still Failing

2020-02-07 Thread Selva Govindarajan
It looks like the failure in core/TEST131 is triggered by my PR 1868. I will 
run the privs2 regression test in my workspace to investigate the timeout 
issue and check whether it is caused by PR 1868.

The rest of the failures in the last 2 days after PR 1868 seem to be random.
Does anyone know what could cause such randomness in the test failures?

Regards
Selva

-Original Message-
From: steve.var...@esgyn.com  
Sent: Friday, February 7, 2020 4:09 AM
To: dev@trafodion.apache.org
Subject: Trafodion master rh6 Daily Test Result - 1076 - Still Failing

External

Daily Automated Testing master rh6

Jenkins Job:   https://jenkins.esgyn.com/job/Check-Daily-master-rh6/1076/
Archived Logs: http://traf-testlogs.esgyn.com/Daily-master/1076
Bld Downloads: http://traf-builds.esgyn.com

Changes since previous daily build:
No changes


Test Job Results:

FAILURE core-regress-core-cdh (59 min)
FAILURE core-regress-core-hdp (1 hr 16 min)
FAILURE core-regress-privs2-hdp (2 hr 40 min)
FAILURE core-regress-qat-cdh (16 min)
FAILURE pyodbc_test-cdh (15 min)
FAILURE pyodbc_test-hdp (20 min)
SUCCESS build-rh6-master-debug (30 min)
SUCCESS build-rh6-master-release (35 min)
SUCCESS core-regress-charsets-cdh (38 min)
SUCCESS core-regress-charsets-hdp (51 min)
SUCCESS core-regress-compGeneral-cdh (59 min)
SUCCESS core-regress-compGeneral-hdp (1 hr 5 min)
SUCCESS core-regress-executor-cdh (1 hr 18 min)
SUCCESS core-regress-executor-hdp (1 hr 45 min)
SUCCESS core-regress-fullstack2-cdh (10 min)
SUCCESS core-regress-fullstack2-hdp (20 min)
SUCCESS core-regress-hive-cdh (56 min)
SUCCESS core-regress-hive-hdp (1 hr 9 min)
SUCCESS core-regress-privs1-cdh (50 min)
SUCCESS core-regress-privs1-hdp (49 min)
SUCCESS core-regress-privs2-cdh (1 hr 5 min)
SUCCESS core-regress-qat-hdp (29 min)
SUCCESS core-regress-seabase-cdh (1 hr 31 min)
SUCCESS core-regress-seabase-hdp (2 hr 1 min)
SUCCESS core-regress-udr-cdh (36 min)
SUCCESS core-regress-udr-hdp (34 min)
SUCCESS jdbc_test-cdh (37 min)
SUCCESS jdbc_test-hdp (46 min)
SUCCESS phoenix_part1_T2-cdh (1 hr 1 min)
SUCCESS phoenix_part1_T2-hdp (1 hr 18 min)
SUCCESS phoenix_part1_T4-cdh (56 min)
SUCCESS phoenix_part1_T4-hdp (1 hr 23 min)
SUCCESS phoenix_part2_T2-cdh (59 min)
SUCCESS phoenix_part2_T2-hdp (1 hr 17 min)
SUCCESS phoenix_part2_T4-cdh (58 min)
SUCCESS phoenix_part2_T4-hdp (1 hr 23 min)


RE: Maven central change causes build failure

2020-01-23 Thread Selva Govindarajan
Thanks Steve for taking care of this. Once this issue is fixed, I will create a 
release plan for the Trafodion R2.4 release. Can the previous release managers 
send me pointers to help me jump-start this process?

Regards
Selva

-Original Message-
From: Steve Varnau  
Sent: Thursday, January 23, 2020 10:14 AM
To: dev@trafodion.apache.org
Subject: RE: Maven central change causes build failure

External

I have been looking at this a bit. I have seen issues in both the JDBC build 
and the TRX builds. I want to prove I can do a complete build starting from an 
empty maven repo (~/.m2/repository), since having dependencies cached there can 
mask problems.

I can get past some of the issues by using the newest (3.6.3) version of maven 
instead of the 3.0.5 version. Then java 1.7 has some issues that I can get past 
by setting the TLSv1.2 option 
(JAVA_TOOL_OPTIONS="-Dhttps.protocols=TLSv1.2,TLSv1.1,TLSv1"). But I still run 
into a handshake_failure with repository.cloudera.com when building the 
CDH-compatible TRX jar.

Alternatively, I found that I was able to get a complete successful build if I 
used the new maven and set JAVA_HOME to a 1.8 JDK (except for the DCS docs, for 
some reason).

Are there any concerns with moving up to requiring JDK 1.8 for trafodion?
I have not yet regression tested this combination.

Meanwhile, I will also try upgrading to the latest 1.7 JDK to see if that helps.

I have logged Jira TRAFODION- for this.

-Steve

From: Steve Varnau
Sent: Thursday, January 16, 2020 10:00 AM
To: dev@trafodion.apache.org
Subject: Maven central change causes build failure

Hi folks,

Looks like a change at maven central to not allow http access has caused the 
daily trafodion build to fail:


[INFO] Building Trafodion JDBC Type4 Driver 2.4.0##(JDBCT4)
[INFO] BUILD FAILURE   ##(JDBCT4)
[ERROR] Plugin org.codehaus.mojo:properties-maven-plugin:1.0.0 or one of its 
dependencies could not be resolved: Failed to read artifact descriptor for 
org.codehaus.mojo:properties-maven-plugin:jar:1.0.0: Could not transfer 
artifact org.codehaus.mojo:properties-maven-plugin:pom:1.0.0 from/to central 
(http://repo.maven.apache.org/maven2): Failed to transfer file: 
http://repo.maven.apache.org/maven2/org/codehaus/mojo/properties-maven-plugin/1.0.0/properties-maven-plugin-1.0.0.pom.
 Return code is: 501 , ReasonPhrase:HTTPS Required. -> [Help 1]  ##(JDBCT4)
[ERROR]##(JDBCT4)

Does someone have time to look into fixing this up? I think all builds will 
fail until this gets fixed.

-Steve


RE: Trafodion master rh6 Daily Test Result - 1018 - Failure

2019-12-12 Thread Selva Govindarajan
Thanks Steve. I will try to work on it when time permits, after completing my 
bread-earning responsibilities.

Regards
Selva

-Original Message-
From: Steve Varnau  
Sent: Wednesday, December 11, 2019 12:51 PM
To: dev@trafodion.apache.org
Subject: RE: Trafodion master rh6 Daily Test Result - 1018 - Failure

External

Hi Selva,

Looks like your changes broke one udr test.

-Steve

-Original Message-
From: steve.var...@esgyn.com 
Sent: Wednesday, December 11, 2019 3:47 AM
To: dev@trafodion.apache.org
Subject: Trafodion master rh6 Daily Test Result - 1018 - Failure

External

Daily Automated Testing master rh6

Jenkins Job:   https://jenkins.esgyn.com/job/Check-Daily-master-rh6/1018/
Archived Logs: http://traf-testlogs.esgyn.com/Daily-master/1018
Bld Downloads: http://traf-builds.esgyn.com

Changes since previous daily build:
[Selvaganesan Govindarajan] Avoid reading VERSIONS and DEFAULTS table after it 
is done once in a

[Selvaganesan Govindarajan] Possible fix for the core files seen when 2 
Contexts are involved.

[Selvaganesan Govindarajan] Changes to generate core when the error 2055 is 
seen to debug this issue

[Selvaganesan Govindarajan] Reverting the change made in commit

[Selvaganesan Govindarajan] Possible fix for check-PR failures.

[Selvaganesan Govindarajan] Changes to ensure 
NADefaults::updateSystemParameters is done correctly.

[Selvaganesan Govindarajan] Fix for debug build failure with commit 
ec426b7afea5e



Test Job Results:

FAILURE core-regress-udr-cdh (36 min)
FAILURE core-regress-udr-hdp (45 min)
SUCCESS build-rh6-master-debug (33 min)
SUCCESS build-rh6-master-release (37 min)
SUCCESS core-regress-charsets-cdh (38 min)
SUCCESS core-regress-charsets-hdp (50 min)
SUCCESS core-regress-compGeneral-cdh (59 min)
SUCCESS core-regress-compGeneral-hdp (1 hr 16 min)
SUCCESS core-regress-core-cdh (58 min)
SUCCESS core-regress-core-hdp (1 hr 16 min)
SUCCESS core-regress-executor-cdh (1 hr 12 min)
SUCCESS core-regress-executor-hdp (1 hr 37 min)
SUCCESS core-regress-fullstack2-cdh (10 min)
SUCCESS core-regress-fullstack2-hdp (14 min)
SUCCESS core-regress-hive-cdh (55 min)
SUCCESS core-regress-hive-hdp (1 hr 3 min)
SUCCESS core-regress-privs1-cdh (48 min)
SUCCESS core-regress-privs1-hdp (1 hr 2 min)
SUCCESS core-regress-privs2-cdh (1 hr 4 min)
SUCCESS core-regress-privs2-hdp (1 hr 25 min)
SUCCESS core-regress-qat-cdh (29 min)
SUCCESS core-regress-qat-hdp (24 min)
SUCCESS core-regress-seabase-cdh (1 hr 30 min)
SUCCESS core-regress-seabase-hdp (1 hr 59 min)
SUCCESS jdbc_test-cdh (36 min)
SUCCESS jdbc_test-hdp (46 min)
SUCCESS phoenix_part1_T2-cdh (1 hr 0 min)
SUCCESS phoenix_part1_T2-hdp (1 hr 18 min)
SUCCESS phoenix_part1_T4-cdh (57 min)
SUCCESS phoenix_part1_T4-hdp (1 hr 14 min)
SUCCESS phoenix_part2_T2-cdh (59 min)
SUCCESS phoenix_part2_T2-hdp (1 hr 17 min)
SUCCESS phoenix_part2_T4-cdh (55 min)
SUCCESS phoenix_part2_T4-hdp (1 hr 13 min)
SUCCESS pyodbc_test-cdh (17 min)
SUCCESS pyodbc_test-hdp (22 min)


RE: Odd Jar problem in Trafodion builds

2019-04-10 Thread Selva Govindarajan
Hi Dave,

You can try the following

cd $TRAF_HOME
cd ../..
make clean
make all

Selva
-Original Message-
From: Dave Birdsall  
Sent: Wednesday, April 10, 2019 4:30 PM
To: dev@trafodion.apache.org
Subject: Odd Jar problem in Trafodion builds

External

Hi,

Recently in my Trafodion instance, I've been seeing the following error message:

JDBC Library Version Error - Jar: Traf_JDBC_Type2_Build_0ab8d50 Jni: 
Traf_JDBC_Type2_Build_2ffb876

I see this when running compGeneral/TEST072, where it attempts to create a Java 
stored procedure.

I also see it at "initialize trafodion" time when it tries to create the 
libraries.

In both cases, the UDR server aborts.

Any idea of how to cure this problem?

Thanks,

Dave


RE: Trafodion master rh6 Daily Test Result - 768 - Failure

2019-04-04 Thread Selva Govindarajan
I am working on fixing the core files seen with the T2 driver. Please accept my 
apology for causing this failure. I need to confirm whether the check-PR tests 
have any JDBC T2 driver tests.

Selva

-Original Message-
From: steve.var...@esgyn.com  
Sent: Thursday, April 4, 2019 5:16 AM
To: dev@trafodion.apache.org
Subject: Trafodion master rh6 Daily Test Result - 768 - Failure

External

Daily Automated Testing master rh6

Jenkins Job:   https://jenkins.esgyn.com/job/Check-Daily-master-rh6/768/
Archived Logs: http://traf-testlogs.esgyn.com/Daily-master/768
Bld Downloads: http://traf-builds.esgyn.com

Changes since previous daily build:
[selvaganesang] [TRAFODION-3280] Reduce path length in Trafodion for improved 
performance

[selvaganesang] [TRAFODION-3280] Reduce path length in Trafodion for improved 
performance

[selvaganesang] [TRAFODION-3280] Reduce path length in Trafodion for improved

[selvaganesang] Changes as per review comments of PR 1820



Test Job Results:

FAILURE phoenix_part1_T2-cdh (18 min) *Corefiles*
FAILURE phoenix_part1_T2-hdp (33 min) *Corefiles*
FAILURE phoenix_part2_T2-cdh (18 min) *Corefiles*
FAILURE phoenix_part2_T2-hdp (54 min) *Corefiles*
SUCCESS build-rh6-master-debug (33 min)
SUCCESS build-rh6-master-release (38 min)
SUCCESS core-regress-charsets-cdh (36 min)
SUCCESS core-regress-charsets-hdp (44 min)
SUCCESS core-regress-compGeneral-cdh (48 min)
SUCCESS core-regress-compGeneral-hdp (1 hr 15 min)
SUCCESS core-regress-core-cdh (53 min)
SUCCESS core-regress-core-hdp (1 hr 14 min)
SUCCESS core-regress-executor-cdh (1 hr 11 min)
SUCCESS core-regress-executor-hdp (1 hr 46 min)
SUCCESS core-regress-fullstack2-cdh (10 min)
SUCCESS core-regress-fullstack2-hdp (20 min)
SUCCESS core-regress-hive-cdh (1 hr 3 min)
SUCCESS core-regress-hive-hdp (1 hr 13 min)
SUCCESS core-regress-privs1-cdh (48 min)
SUCCESS core-regress-privs1-hdp (1 hr 0 min)
SUCCESS core-regress-privs2-cdh (1 hr 8 min)
SUCCESS core-regress-privs2-hdp (1 hr 12 min)
SUCCESS core-regress-qat-cdh (27 min)
SUCCESS core-regress-qat-hdp (29 min)
SUCCESS core-regress-seabase-cdh (1 hr 29 min)
SUCCESS core-regress-seabase-hdp (1 hr 47 min)
SUCCESS core-regress-udr-cdh (35 min)
SUCCESS core-regress-udr-hdp (33 min)
SUCCESS jdbc_test-cdh (37 min)
SUCCESS jdbc_test-hdp (54 min)
SUCCESS phoenix_part1_T4-cdh (59 min)
SUCCESS phoenix_part1_T4-hdp (1 hr 25 min)
SUCCESS phoenix_part2_T4-cdh (56 min)
SUCCESS phoenix_part2_T4-hdp (1 hr 14 min)
SUCCESS pyodbc_test-cdh (16 min)
SUCCESS pyodbc_test-hdp (16 min)



RE: Google Summer of Code 2019 is coming

2019-01-29 Thread Selva Govindarajan
It is an amazing idea. I am wondering whether it should be a feature, because 
developing a new feature needs a somewhat deeper understanding of the Trafodion 
product. I wonder if it could instead be a rewrite of existing code, or an 
isolated feature, to make it more efficient or robust.

Selva

-Original Message-
From: Dave Birdsall  
Sent: Tuesday, January 29, 2019 8:53 AM
To: dev@trafodion.apache.org
Subject: FW: Google Summer of Code 2019 is coming

External

Is this something of interest to the Trafodion community? Is there some feature 
in our engine that we could ask a student to attack over the summer?

Dave

-Original Message-
From: Ulrich Stärk 
Sent: Tuesday, January 29, 2019 4:25 AM
Subject: Google Summer of Code 2019 is coming

Hello PMCs (incubator Mentors, please forward this email to your podlings),

Google Summer of Code [1] is a program sponsored by Google allowing students to 
spend their summer working on open source software. Students will receive 
stipends for developing open source software full-time for three months. 
Projects will provide mentoring and project ideas, and in return have the 
chance to get new code developed and - most importantly - to identify and bring 
in new committers.

The ASF will apply as a participating organization meaning individual projects 
don't have to apply separately.

If you want to participate with your project we ask you to do the following 
things by no later than
2019-01-31 19:00 UTC (applications from organizations close a week later)

1. understand what it means to be a mentor [2].

2. record your project ideas.

Just create issues in JIRA, label them with gsoc2019, and they will show up at 
[3]. Please be as specific as possible when describing your idea. Include the 
programming language, the tools and skills required, but try not to scare 
potential students away. They are supposed to learn what's required before the 
program starts.

Use labels, e.g. for the programming language (java, c, c++, erlang, python, 
brainfuck, ...) or technology area (cloud, xml, web, foo, bar, ...).

Please use the COMDEV JIRA project for recording your ideas if your project 
doesn't use JIRA (e.g.
httpd, ooo). Contact d...@community.apache.org if you need assistance.

[4] contains some additional information (will be updated for 2019 shortly).

3. subscribe to ment...@community.apache.org; restricted to potential mentors, 
meant to be used as a private list - general discussions on the public 
d...@community.apache.org list as much as possible please). Use a recognized 
address when subscribing (@apache.org or one of your alias addresses on record).

Note that the ASF isn't accepted as a participating organization yet, 
nevertheless you *have to* start recording your ideas now or we might not get 
accepted.

Over the years we were able to complete hundreds of projects successfully. Some 
of our prior students are active contributors now! Let's make this year a 
success again!

P.S.: this email is free to be shared publicly if you want to.

[1] https://summerofcode.withgoogle.com/
[2] http://community.apache.org/guide-to-being-a-mentor.html
[3] https://s.apache.org/gsoc2019ideas
[4] http://community.apache.org/gsoc.html


RE: Someone could help to review and merge code

2018-12-20 Thread Selva Govindarajan
Thanks @Dave Birdsall for taking care of this. I have added new comments for 
https://github.com/apache/trafodion/pull/1694
and requested more info for https://github.com/apache/trafodion/pull/1603

It is better to wait for the author to make changes as per the comments, or to 
respond, before the PR is merged.

Selva

-Original Message-
From: Dave Birdsall  
Sent: Wednesday, December 19, 2018 6:18 PM
To: dev@trafodion.apache.org; d...@trafodion.incubator.apache.org
Subject: RE: Someone could help to review and merge code

External

Hi,

I took a quick look at these.

Regarding [TRAFODION-3246] support TLS for jdbc, 
https://github.com/apache/trafodion/pull/1765: I'm hoping someone qualified 
will review it soon.

Regarding [TRAFODION-3250] optimize get/set schema, 
https://github.com/apache/trafodion/pull/1764: It looks like the JDBC tests 
have failed. You need to take a look to see why. I took a quick look and it 
doesn't seem to be an environmental failure. It looks like someone has approved 
the changes, though.

Regarding [TRAFODION-3247] support customer define client charset, 
https://github.com/apache/trafodion/pull/1760: I merged it just now.

Regarding [TRAFODION-3183] fetch huge data give rise to core, 
https://github.com/apache/trafodion/pull/1694: It wasn't clear to me if the 
reviewer approved the changes. I pinged him to re-review. I also started a 
retest for the Jenkins tests as this particular change has been sitting there 
for a while.

Regarding [TRAFODION-3089] DatabaseMetaData.getIndexInfo not work well, 
https://github.com/apache/trafodion/pull/1603: It also wasn't clear to me if 
the reviewers approved the changes. So I pinged them. I also started a retest 
since this has been sitting there for a while.

Dave


-Original Message-
From: shengchen...@esgyn.cn 
Sent: Tuesday, December 18, 2018 5:39 PM
To: d...@trafodion.incubator.apache.org
Subject: Someone could help to review and merge code

hi all:
I have opened several PRs against TRAFODION, but it has been a really long time 
with no response. Is there anyone with free time who could help review and 
merge them?

[TRAFODION-3246] support TLS for jdbc
https://github.com/apache/trafodion/pull/1765

[TRAFODION-3250] optimize get/set schema
https://github.com/apache/trafodion/pull/1764

[TRAFODION-3247] support customer define client charset
https://github.com/apache/trafodion/pull/1760

[TRAFODION-3183] fetch huge data give rise to core
https://github.com/apache/trafodion/pull/1694

[TRAFODION-3089] DatabaseMetaData.getIndexInfo not work well
https://github.com/apache/trafodion/pull/1603


thanks,
shengchen.ma



RE: questionable `sprintf` usage

2018-12-19 Thread Selva Govindarajan
The code is not erroneous, though it is a bit strange.

Declaration of sprintf is 

int sprintf ( char * str, const char * format, ... );

It needs just 2 parameters; the rest are optional. In this case, when the 
format parameter has no format specifiers, sprintf just copies the format 
parameter to str.

Trafodion code is compiled with -Wformat -Werror. This should emit compilation 
errors when printf or sprintf is used incorrectly, such as passing fewer 
arguments than the format specification requires, an incompatible format and 
argument, and other errors.

snprintf might be good to avoid buffer overflow, but in this case I am not sure 
there was a buffer overflow condition.
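
To make the behavior concrete, here is a minimal standalone sketch (the buffer 
and id names below are hypothetical, not taken from the Trafodion source):

  #include <cstdio>

  int main()
  {
    char ident[32];
    const char *descID = "HS_STMT_01";   // hypothetical id containing no '%'

    // With no conversion specifiers in descID, sprintf simply copies it to
    // ident, so the 2-argument call "works" -- but it breaks if descID ever
    // contains '%' or is longer than the buffer.
    sprintf(ident, descID);

    // The safer form proposed later in this thread:
    snprintf(ident, sizeof(ident), "%s", descID);

    printf("%s\n", ident);
    return 0;
  }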

Selva
-Original Message-
From: wenjun@esgyn.cn  
Sent: Wednesday, December 19, 2018 2:35 AM
To: dev@trafodion.apache.org
Subject: questionable `sprintf` usage

Hi,

 

I suspect the following code in core/sql/ustat/hs_read.cpp is erroneous:

  desc = new SQLDESC_ID;
  init_SQLCLI_OBJ_ID(desc);

  desc->name_mode = cursor_name;
  desc->module = 
  desc->identifier = new char[HS_STMTID_LENGTH];
  desc->handle = 0;

  sprintf((char*)desc->identifier, descID);
  desc->identifier_len = strlen(descID);

The parameters to the function `sprintf` should be 3, but there are only 2.

I'd like to change it to:

  snprintf((char*)desc->identifier, HS_STMTID_LENGTH, "%s", descID);

What do you think?

 

Regards,

Wenjun Zhu



RE: What does (m) mean in esp_exchange

2018-11-08 Thread Selva Govindarajan
The 'm' in esp_exchange means that this ESP exchange acts as a merging 
exchange. The tuple producers provide the tuples in sorted order of the chosen 
key; the exchange operator can then pick the rows in sorted order from the 
producers and project the sorted rows to its parent.

Selva

-Original Message-
From: Liu, Yuan (Yuan)  
Sent: Thursday, November 8, 2018 4:58 PM
To: dev@trafodion.apache.org
Subject: RE: What does (m) mean in esp_exchange

Sorry, in case you can not see the screenshot, just paste the query plan as 
below.


SQL>explain options 'f' select stat_mon_year,count(*) from (
+>select stat_mon_year,stat_mon,count(*) from DMA_ENTTYPE_STAT group by 1,2) a
+>group by 1;

LC   RC   OP   OPERATOR              OPT       DESCRIPTION           CARD
---------------------------------------------------------------------------
7    .    8    root                                                  6.00E+000
6    .    7    hash_partial_groupby                                  6.00E+000
5    .    6    esp_exchange                    1:6(hash2)  (m)       6.00E+000
4    .    5    hash_partial_groupby                                  6.00E+000
3    .    4    hash_partial_groupby                                  4.07E+002
2    .    3    esp_exchange                    6(hash2):8(hash2)     4.07E+002
1    .    2    hash_partial_groupby                                  4.07E+002
.    .    1    trafodion_scan                  DMA_ENTTYPE_STAT      5.03E+006

--- SQL operation complete.


Best regards
Yuan


From: Liu, Yuan (Yuan) 
Sent: Friday, November 09, 2018 8:56 AM
To: dev@trafodion.apache.org
Subject: What does (m) mean in esp_exchange

Hi Trafodioneers,

I got the query plan below; can anyone tell me what the (m) in the red box 
means? Thanks ahead.

[screenshot of the query plan omitted]



Best regards
Yuan




RE: refCount on ComDiagsArea

2018-08-15 Thread Selva Govindarajan
Hi Wen-Jun,

Are you planning a general cleanup, or attempting to solve a specific problem? 
If it is the latter, can you please describe the problem? I might be able to 
help you resolve it, possibly differently. This suggestion is based on the 
same fear outlined in Dave's earlier comment.

Below is my understanding of how reference counts are used in Trafodion.

An object gets a refCount_ only when it is derived from IpcMessageObj.  Only 
the objects that would be shipped between processes need to be derived from 
IpcMessageObj. ComDiagsArea is one such object.  You can look at any object 
derived from IpcMessageObj to determine how reference count is used.
 
See the code snippet below:

  *this << *cqReply;

  if (!(didAttemptCancel || cancelByPid))
  {
    if (diags == NULL)
    {
      diags = ComDiagsArea::allocate(getHeap());
      *diags << DgSqlCode(-EXE_CANCEL_QID_NOT_FOUND);
    }
    *this << *diags;
  }
  send(FALSE);

  cqReply->decrRefCount();
  if (diags)
    diags->decrRefCount();
  request->decrRefCount();
 
In the above code, the concatenation operator << increments the reference 
count. The "send" call queues the request to be sent. When decrRefCount sees 
the reference count drop to 0, it deallocates the IpcMessageObj. "Send" also 
calls decrRefCount. Whichever decrRefCount happens last, either in "send" or in 
this function, deletes the IpcMessageObj to avoid a memory leak. If you don't 
manage the reference count correctly, you could also end up looking at a 
deleted object.

Also, in general, passing pointers between different objects in the executor 
layer doesn't involve reference count management, because the refCount_ member 
variable is available only on objects that would be shipped between processes.
 
You can also consult the comments about reference counting in 
export/IpcMessageObj.h.

Selva
-Original Message-
From: Qifan Chen  
Sent: Wednesday, August 15, 2018 6:13 AM
To: dev@trafodion.apache.org
Subject: Re: refCount on ComDiagsArea

Hi Wen-Jun,


Thanks for the info.


I wonder if you can also provide an example of a ComDiagsArea object that is 
mistakenly deleted (see the two relevant methods below) due to an incorrectly 
maintained reference count.


Thanks --Qifan


IpcMessageRefCount ComDiagsArea::decrRefCount()
{
  if (getRefCount() == 1)
  {
    deAllocate();
    return 0;
  }

  // Let base class do the work.
  return this->IpcMessageObj::decrRefCount();
}

inline
void ComDiagsArea::deAllocate()
{
  if (collHeapPtr_ == NULL)
    delete this;
  else {
    // save collHeapPtr, because destroyMe() sets it to NULL
    // Better solution: derive ComDiagsArea from NABasicObject and get
    // rid of allocate() / deAllocate()
    CollHeap * p = collHeapPtr_;
    destroyMe();
    p->deallocateMemory(this);
  };
}


From: Zhu, Wen-Jun 
Sent: Tuesday, August 14, 2018 9:51:40 PM
To: dev@trafodion.apache.org
Subject: RE: refCount on ComDiagsArea

Hi,

It is true that there are some other functions incrementing the reference 
count, as in atp_struct::copyAtp:

  if (from->getDiagsArea())
    from->getDiagsArea()->incrRefCount();

  setDiagsArea(from->getDiagsArea());

incrRefCount() is called to increase the ref count.

But when I check the result of the command

  grep setDiagsArea * -B4 -A4 -rw

I find that a lot of code does not invoke incrRefCount(), as in 
core/sql/exp/exp_eval.cpp:

  if (retcode == ex_expr::EXPR_ERROR)
    {
      if (diagsArea != atp1->getDiagsArea())
        atp1->setDiagsArea(diagsArea);

      return retcode;
    }

There's a bug here.

So, how about wrapping the operator= invocation in atp_struct::setDiagsArea(), 
and dealing with the ref count in operator=, as Qifan suggested?


-Original Message-
From: Selva Govindarajan 
Sent: August 15, 2018 0:23
To: dev@trafodion.apache.org
Subject: RE: refCount on ComDiagsArea

I second Dave's  comment. I was about to comment in similar lines. When I saw 
Dave's message, I had no second thoughts to second it.

Selva

-Original Message-
From: Qifan Chen 
Sent: Tuesday, August 14, 2018 9:11 AM
To: dev@trafodion.apache.org
Subject: Re: refCount on ComDiagsArea

Hi,


I personally also like the 2nd option which is to overload the operator= for 
ComDiagsArea.


But because of the high number of calls to atp_struct::setDiagsArea() in the 
executor, it is better to hide the call to operator=(ComDiagsArea&) inside 
atp_struct::setDiagsArea().


Also keep in mind that we cannot ubiquitously use C++11 constructs such as 
shared_ptr or make_shared in our code base yet, since we still need to support 
platforms such as CentOS6 that do not have a C++11 support package.

RE: refCount on ComDiagsArea

2018-08-14 Thread Selva Govindarajan
I second Dave's  comment. I was about to comment in similar lines. When I saw 
Dave's message, I had no second thoughts to second it.

Selva

-Original Message-
From: Qifan Chen  
Sent: Tuesday, August 14, 2018 9:11 AM
To: dev@trafodion.apache.org
Subject: Re: refCount on ComDiagsArea

Hi,


I personally also like the 2nd option which is to overload the operator= for 
ComDiagsArea.


But because of the high number of calls to atp_struct::setDiagsArea() in the 
executor, it is better to hide the call to operator=(ComDiagsArea&) inside 
atp_struct::setDiagsArea().


Also keep in mind that we cannot ubiquitously use C++11 constructs such as 
shared_ptr or make_shared in our code base yet, since we still need to support 
platforms such as CentOS6 that do not have a C++11 support package.


Thanks --Qifan


From: Dave Birdsall 
Sent: Tuesday, August 14, 2018 10:10:43 AM
To: dev@trafodion.apache.org
Subject: RE: refCount on ComDiagsArea

Hi,

I have not researched this area, but it strikes me as one that could be very 
delicate. It may be that in most code paths it is assumed that some other 
function is incrementing the reference count. Great care should be taken in 
modifying this; otherwise it may lead to memory leaks. I am hoping others who 
are more knowledgeable will add to this discussion.

Can you give more insight into what problem led you here?

Dave

-Original Message-
From: Zhu, Wen-Jun 
Sent: Tuesday, August 14, 2018 4:11 AM
To: dev@trafodion.apache.org
Subject: refCount on ComDiagsArea

hi,

When setting a ComDiagsArea, I find the refCount does not increase, in the 
function atp_struct::setDiagsArea of file core/sql/exp/ExpAtp.h:

inline void atp_struct::setDiagsArea(ComDiagsArea* diagsArea) {
  if (diagsArea_)
diagsArea_->decrRefCount();

  diagsArea_ = diagsArea;
}

I guess this is a problem.

There are two solutions to fix this:

1. Invoke incrRefCount on the ComDiagsArea just after the assignment.

2. Overload operator= for ComDiagsArea, and increment the count within 
operator=. I find operator= is declared in ComDiags.h, but there is no 
implementation.

The 2nd solution may be better, as both the increment for the left-hand-side 
ComDiagsArea and the decrement for the right-hand-side ComDiagsArea can be 
handled within a single operator=, which is friendly to users, like shared_ptr 
in C++. A sketch of the bookkeeping is given below.
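
For illustration only, a minimal sketch of hiding the bookkeeping inside 
setDiagsArea() as discussed in this thread (hypothetical code, assuming the 
incrRefCount()/decrRefCount() API shown earlier):

  inline void atp_struct::setDiagsArea(ComDiagsArea* diagsArea)
  {
    if (diagsArea)                  // take the new reference first, so that
      diagsArea->incrRefCount();    // self-assignment remains safe
    if (diagsArea_)
      diagsArea_->decrRefCount();   // release the old reference
    diagsArea_ = diagsArea;
  }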

Regards,
Wenjun Zhu


RE: Trafodion master rh6 Daily Test Result - 532 - Still Failing

2018-08-13 Thread Selva Govindarajan
It look like the issue seems to be with compressed sequence files.  This test 
passes in my environment but fails to fetch all rows in Jenkins environment  I 
am trying to figure out the difference in execution between my environment and 
Jenkins.

Selva

-Original Message-
From: Liu, Ming (Ming)  
Sent: Saturday, August 11, 2018 8:29 AM
To: dev@trafodion.apache.org
Subject: RE: Trafodion master rh6 Daily Test Result - 532 - Still Failing

The failure is for hive/TEST006, a simple select from a sequence file.
I feel the latest PR refactoring the sequence file reader is the most likely 
related change: https://github.com/apache/trafodion/pull/1674
So Selva, could you help take a look at this issue?

thanks,
Ming

-Original Message-
From: steve.var...@esgyn.com 
Sent: Saturday, August 11, 2018 8:33 PM
To: dev@trafodion.apache.org
Subject: Trafodion master rh6 Daily Test Result - 532 - Still Failing

Daily Automated Testing master rh6

Jenkins Job:   https://jenkins.esgyn.com/job/Check-Daily-master-rh6/532/
Archived Logs: http://traf-testlogs.esgyn.com/Daily-master/532
Bld Downloads: http://traf-builds.esgyn.com

Changes since previous daily build:
No changes


Test Job Results:

FAILURE core-regress-hive-cdh (58 min)
FAILURE core-regress-hive-hdp (1 hr 5 min)
SUCCESS build-rh6-master-debug (32 min)
SUCCESS build-rh6-master-release (38 min)
SUCCESS core-regress-charsets-cdh (41 min)
SUCCESS core-regress-charsets-hdp (48 min)
SUCCESS core-regress-compGeneral-cdh (56 min)
SUCCESS core-regress-compGeneral-hdp (1 hr 14 min)
SUCCESS core-regress-core-cdh (1 hr 1 min)
SUCCESS core-regress-core-hdp (1 hr 30 min)
SUCCESS core-regress-executor-cdh (1 hr 33 min)
SUCCESS core-regress-executor-hdp (1 hr 55 min)
SUCCESS core-regress-fullstack2-cdh (19 min)
SUCCESS core-regress-fullstack2-hdp (27 min)
SUCCESS core-regress-privs1-cdh (42 min)
SUCCESS core-regress-privs1-hdp (51 min)
SUCCESS core-regress-privs2-cdh (1 hr 16 min)
SUCCESS core-regress-privs2-hdp (1 hr 17 min)
SUCCESS core-regress-qat-cdh (26 min)
SUCCESS core-regress-qat-hdp (25 min)
SUCCESS core-regress-seabase-cdh (1 hr 44 min)
SUCCESS core-regress-seabase-hdp (2 hr 4 min)
SUCCESS core-regress-udr-cdh (24 min)
SUCCESS core-regress-udr-hdp (43 min)
SUCCESS jdbc_test-cdh (42 min)
SUCCESS jdbc_test-hdp (51 min)
SUCCESS phoenix_part1_T2-cdh (1 hr 16 min)
SUCCESS phoenix_part1_T2-hdp (1 hr 29 min)
SUCCESS phoenix_part1_T4-cdh (1 hr 12 min)
SUCCESS phoenix_part1_T4-hdp (1 hr 26 min)
SUCCESS phoenix_part2_T2-cdh (1 hr 10 min)
SUCCESS phoenix_part2_T2-hdp (1 hr 34 min)
SUCCESS phoenix_part2_T4-cdh (1 hr 7 min)
SUCCESS phoenix_part2_T4-hdp (1 hr 22 min)
SUCCESS pyodbc_test-cdh (10 min)
SUCCESS pyodbc_test-hdp (14 min)



RE: How to analyze the completed sql statement

2018-07-30 Thread Selva Govindarajan
You can issue:

explain options 'f' for qid <query-id>, or explain options 'f' <query-id> from 
RMS - to get the explain plan from the RMS shared segment
explain options 'f' <query-id> from repository - to get the explain plan from 
the repository

What command did you use?

Selva

From: Yang, Peng-Peng 
Sent: Monday, July 30, 2018 2:28 AM
To: dev@trafodion.apache.org
Subject: RE: How to analyze the completed sql statement


[screenshot of the explain plan omitted]

Regards,
Pengpeng



-Original Message-
From: Yang, Peng-Peng 
Sent: July 30, 2018 17:12
To: dev@trafodion.apache.org
Subject: RE: How to analyze the completed sql statement

Add attachment.

Regards,
Pengpeng

-Original Message-
From: Yang, Peng-Peng 
Sent: July 30, 2018 17:08
To: dev@trafodion.apache.org
Subject: RE: How to analyze the completed sql statement

Hi Selva, Yuan, Carol,

Thanks for your feedback.
I have set up dcs-site.xml, but I can't get the correct explain plan; it looks 
like it's garbled. How can I decode it into the text explain plan?
The attachment is a screenshot of the explain plan as it looks now.

Regards,
Pengpeng



-Original Message-
From: Carol Pearson 
Sent: July 30, 2018 14:53
To: dev@trafodion.apache.org
Subject: Re: How to analyze the completed sql statement

I have experimented with this a bit in various configurations and with 
different workloads. For short transactions and high concurrency, the impact 
is pretty high. But that's really the absolute worst case: thousands of 
concurrent transactions with hundreds of thousands of queries in under a 
minute that previously weren't logged, all on a cluster that was tuned for the 
workload, not the workload + query logging.

For the more general case, a workload where the queries are in, say, the 10-59 
second category and there aren't thousands running at once, the impact of 
setting the threshold down to zero is much lower. Depending on the cluster 
load, it may not be highly noticeable.

So this is definitely worth trying, but be aware that there can be performance 
impacts to setting the threshold to 0, depending on the workload. In some 
cases, there might be a need to increase the cluster capacity (add more 
nodes/disks) to achieve workload SLAs while permanently capturing data for all 
queries.

-Carol P.



On Sun, Jul 29, 2018 at 11:13 PM Liu, Yuan (Yuan)  wrote:

> We can also add the configuration below into dcs-site.xml to save queries
> that take less than 60 seconds into the repository.
>
> <property>
>   <name>dcs.server.user.program.statistics.limit.time</name>
>   <value>0</value>
> </property>
>
> The value 0 means all sql will be saved into the repository. We can set
> the value as we want.
>
> Best regards
>
> Yuan Liu
> Shanghai Esgyn Information Technology Co., Ltd.
> Address: Suite 603, Tower A, Changtai Plaza, 2889 Jinke Road, Pudong New
> District, Shanghai
> Mobile: 13671935540
> Email: yuan@esgyn.cn
>
> -Original Message-
> From: Selva Govindarajan 
> Sent: Monday, July 30, 2018 1:49 PM
> To: dev@trafodion.apache.org
> Subject: RE: How to analyze the completed sql statement
>
> Statistics of the running queries are stored in a shared segment. To
> avoid the shared segment becoming full, the statistics are removed as
> soon as the query is deallocated. I believe that if the query takes
> longer than 60 seconds, the end statistics will be written to the
> repository by default. The time duration for the query to be written
> can be configured.
>
> In addition, the monitored queries are kept in the shared segment even
> after the query is deallocated. You can start monitoring the query by
> issuing get statistics for qid <query-id> or any other equivalent
> command involving the query id. However, if the query takes a very
> short time, it is possible that the query could complete before you
> start monitoring it.
>
> Thanks, Selva
>
> -Original Message-
> From: Yang, Peng-Peng 
> Sent: Sunday, July 29, 2018 9:55 PM
> To: dev@trafodion.apache.org
> Subject: How to analyze the completed sql statement
>
> Hi Trafodioneers,
>
> Sometimes we need to analyze the sql after it has finished running; how
> can we get the statistics of the completed sql?
> Or configure sql statistics to stay longer in the offender?
> Not "get statistics for qid current;"
>
> Regards,
> Pengpeng


RE: How to analyze the completed sql statement

2018-07-29 Thread Selva Govindarajan
Statistics of the running queries are stored in a shared segment. To avoid the 
shared segment becoming full, the statistics are removed as soon as the query 
is deallocated. I believe that if the query takes longer than 60 seconds, the 
end statistics will be written to the repository by default. The time duration 
for the query to be written can be configured.

In addition, the monitored queries are kept in the shared segment even after 
the query is deallocated. You can start monitoring the query by issuing get 
statistics for qid <query-id> or any other equivalent command involving the 
query id. However, if the query takes a very short time, it is possible that 
the query could complete before you start monitoring it.

Thanks, Selva

-Original Message-
From: Yang, Peng-Peng  
Sent: Sunday, July 29, 2018 9:55 PM
To: dev@trafodion.apache.org
Subject: How to analyze the completed sql statement

Hi Trafodioneers,


Sometimes we need to analyze the sql after it has finished running; how can we 
get the statistics of a completed sql statement?
Or configure sql statistics to stay longer in the offender?
Not "get statistics for qid current;"


Regards,
Pengpeng



RE: [VOTE] New Trafodion Release

2018-07-26 Thread Selva Govindarajan
+1  

I have also updated the JIRAs that were resolved by me but shown as open.

Selva

-Original Message-
From: Eric Owhadi  
Sent: Thursday, July 26, 2018 2:38 PM
To: dev@trafodion.apache.org
Subject: RE: [VOTE] New Trafodion Release

+1

-Original Message-
From: Hans Zeller  
Sent: Thursday, July 26, 2018 4:09 PM
To: dev@trafodion.apache.org
Subject: RE: [VOTE] New Trafodion Release

+1

-Original Message-
From: Xu, Kai-Hua (Kevin)  
Sent: Thursday, July 26, 2018 1:12 AM
To: dev@trafodion.apache.org
Subject: Re: [VOTE] New Trafodion Release

+1



Regards,
Xu, Kai-Hua (Kevin)
Esgyn Information Technology Co., Ltd.
+86 136 4197 5902

On 2018/7/26 10:50, "Jin, Jian (Seth)" wrote:

+1

Br,

Seth

-Original Message-
From: Roberta Marton  
Sent: July 26, 2018 1:24
To: dev@trafodion.apache.org
Subject: RE: [VOTE] New Trafodion Release

+1

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Wednesday, July 25, 2018 8:36 AM
To: dev@trafodion.apache.org
Subject: RE: [VOTE] New Trafodion Release

+1

-Original Message-
From: Sean Broeder  
Sent: Tuesday, July 24, 2018 2:28 PM
To: dev@trafodion.apache.org
Subject: [VOTE] New Trafodion Release

Hi All,
Discussion has been absent on my previous email on this subject, so I'm 
moving ahead with a vote.

It's time to start thinking about the next release of Apache Trafodion as 
well as any features that should be present prior to the release.

I have updated the confluence pages with new pointers at:
https://cwiki.apache.org/confluence/display/TRAFODION/Roadmap

And also:
https://cwiki.apache.org/confluence/display/TRAFODION/Release+2.3

I propose a release date for discussion of 31 August 2018.

As part of this procedure I will be creating a new 2.3 Trafodion branch and 
master will be relabeled to 2.4. I will perform regular merges from the 2.3 
branch to master to keep master reasonably caught up.

Regards,
Sean






RE: [DISCUSS] Filter out SYSTEM object for various GET utilities

2018-06-19 Thread Selva Govindarajan
I believe " get all tables", " get user tables",  "get system tables"  commands 
are already available. But it seems to return the same information.  I would 
think  you might consider fixing these commands if possible.

Selva

-Original Message-
From: Ming  
Sent: Tuesday, June 19, 2018 1:54 AM
To: dev@trafodion.apache.org
Subject: [DISCUSS] Filter out SYSTEM object for various GET utilities

Hi, all,

 

I have received some requirements from the support engineers of Trafodion, who 
wish to filter out system objects when doing 'get tables': for example, not 
showing the histogram tables.

I have created JIRA: https://issues.apache.org/jira/browse/TRAFODION-3113

 

What do others think about this? Do we need to do this? Or do you have better 
ideas?

 

I noticed there are various GET commands. Maybe we can show all objects when 
the user is DB__ROOT, but for a normal user show only normal objects. Or we 
could add new commands like 'get all tables', 'get user tables', and 'get 
system tables', for example?

 

thanks,

Ming



RE: issues on error message

2018-04-10 Thread Selva Govindarajan
The enums are just a way to access the error codes symbolically. Usually, the 
error codes in Trafodion have ranges assigned to the different components of 
the Trafodion engine. The errors originating in the catalog manager component 
are expected to be in CmpDDLCatErrorCodes.h, and the errors originating in the 
executor component are in exp/ExpErrorEnums.h. However, these groupings are 
not strictly enforced. At times, the error codes are used without enums 
(symbolic names).

There are many helper functions to populate the diagnostics area. The most 
often used are:

ExRaiseSqlError
DgSqlCode, DgString, DgInt

ExRaiseSqlError is used in the executor, mostly in the work methods of 
operators. It can be used when the diagnostics area is not already allocated; 
this method allocates the diagnostics area and passes it back to the caller.

The DgSqlCode concept is used in other areas of Trafodion where a diagnostics 
area is already allocated and available.
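
For illustration only, a hedged sketch of the two styles (the heap, diags 
pointer, and error codes below are hypothetical placeholders, and the exact 
signatures may differ from the current source):

  // Executor style: the diags area may not exist yet; the helper
  // allocates one and passes it back through the pointer.
  ComDiagsArea *diagsArea = NULL;
  ExRaiseSqlError(heap, &diagsArea, EXE_INTERNAL_ERROR);

  // Compiler style: a diags area is already available, so stream the
  // condition in, using the symbolic enum rather than a magic number.
  *CmpCommon::diags() << DgSqlCode(-SOME_ERROR_ENUM)
                      << DgString0("detail text for the error");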

See https://issues.apache.org/jira/browse/TRAFODION-3009 and 
https://issues.apache.org/jira/browse/TRAFODION-2853 for details of the change 
that streamlined this concept to some extent.

Selva

-Original Message-
From: Zhu, Wen-Jun  
Sent: Tuesday, April 10, 2018 1:39 AM
To: dev@trafodion.apache.org
Subject: issues on error message

Hi,

I am trying to add some error messages to SqlciErrors.txt for when new errors 
happen. As far as I know, it uses a message catalog, and the error number is 
an enum in CmpDDLCatErrorCodes.h.

I guess there is a map between these two files, but it is not exactly mapped.

Are there rules about this? Like feeding the DgSqlCode constructor an enum, 
not a magic number, or something like that? And what should I take care of?



RE: Question about Trafodion MXOSRVR

2018-03-27 Thread Selva Govindarajan
Please refer to my earlier response. AWAITIOX and the other communication APIs 
that are not TCP/IP APIs belong to the IPC (inter-process communication) 
mechanism between MXOSRVR and the ESPs (Executor Server Processes) and to the 
clustering infrastructure used in Trafodion.

Selva

-Original Message-
From: Song, Hao-Lin  
Sent: Tuesday, March 27, 2018 9:21 AM
To: dev@trafodion.apache.org
Subject: 答复: Question about Trafodion MXOSRVR

Hi


Thanks for your explanation.

In Listener_srvr_ps.cpp, a thread is started to run the tcpip listener and 
handle messages and tasks. I mean, could we simply separate network processing 
from task processing instead of converting it to be completely multi-threaded? 
Besides, does anyone know the role of the AWAITIOX API and related ones?

I could not find it on the Internet, and I don't know what the 
'while (m_bKeepRunning)' loop means.

// Start tcpip listener thread
tcpip_tid = tcpip_listener_thr.create("TCPIP_listener",
                CNSKListenerSrvr::tcpip_listener, this);

// Persistently wait for input on $RECEIVE and then act on it.
while (m_bKeepRunning)
{
    RESET_ERRORS((long)0);

    timeout = -1;
    fnum = m_ReceiveFnum;

    cc = AWAITIOX(, OMITREF, , , timeout);



From: Dave Birdsall 
Sent: March 27, 2018 23:26:03
To: dev@trafodion.apache.org
Subject: RE: Question about Trafodion MXOSRVR

Hi,

This is an accident of history. The predecessor product was developed on a 
platform that did not support operator multi-threading.

Yes, it is certainly possible to rearchitect mxosrvr to make it multi-threaded. 
This can be tricky and must be done carefully. One must take into account any 
global variables the code uses and decide whether to leave them as globals (and 
coordinate access with a mutex), make them thread-globals, or refactor them 
into some object that is not a global instead. A toy illustration of these 
options follows.
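
A toy sketch of the three options (purely hypothetical names, not from the 
mxosrvr code):

  #include <mutex>

  int g_count = 0;                  // option 1: keep the global, but
  std::mutex g_countMutex;          // coordinate access with a mutex

  void bumpCount()
  {
    std::lock_guard<std::mutex> lock(g_countMutex);
    ++g_count;
  }

  thread_local int t_count = 0;     // option 2: make it a thread-global

  struct WorkerState {              // option 3: refactor the former global
    int count = 0;                  // into an object owned by each worker
  };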

Dave

-Original Message-
From: Song, Hao-Lin 
Sent: Tuesday, March 27, 2018 8:20 AM
To: dev@trafodion.apache.org
Subject: Question about Trafodion MXOSRVR

Hi all


I found that mxosrvr cannot handle other network messages while a query is 
being processed, since network processing and data processing are in the same 
thread. I am confused about this. Can we put them in different threads to make 
the program clearer and more capable?


Best,

Haolin


RE: Question about Trafodion MXOSRVR

2018-03-27 Thread Selva Govindarajan
What are the other network messages that need to be acted upon by mxosrvr? 
ODBC/JDBC APIs are blocking. Other than cancelling a query, there should be no 
other messages sent from the driver to the server, at least on behalf of the 
application. The messages between mxosrvr and the ESPs are encapsulated in 
Trafodion's own IPC mechanism. I believe mxosrvr is capable of alternately 
listening for IPC messages and for messages from the driver via TCP/IP.

Are you planning to make mxosrvr multi-threaded, or do you just want a way to 
process other messages while mxosrvr is busy processing a query?

Selva

-Original Message-
From: Dave Birdsall  
Sent: Tuesday, March 27, 2018 8:26 AM
To: dev@trafodion.apache.org
Subject: RE: Question about Trafodion MXOSRVR

Hi,

This is an accident of history. The predecessor product was developed on a 
platform that did not support operator multi-threading.

Yes, it is certainly possible to rearchitect mxosrvr to make it multi-threaded. 
This can be tricky and must be done carefully. One must take into account any 
global variables the code uses and decide whether to leave them as globals (and 
coordinate access with a mutex), make them thread-globals, or refactor them 
into some object that is not a global instead.

Dave

-Original Message-
From: Song, Hao-Lin  
Sent: Tuesday, March 27, 2018 8:20 AM
To: dev@trafodion.apache.org
Subject: Question about Trafodion MXOSRVR

Hi all


I found that mxosrvr cannot handle other network messages while a query is 
being processed, since network processing and data processing are in the same 
thread. I am confused about this. Can we put them in different threads to make 
the program clearer and more capable?


Best,

Haolin


ComDiagsArea and Error handling in Trafodion SQL - Guideline for Trafodion developers

2018-03-13 Thread Selva Govindarajan
ComDiagsArea is a class containing errors and warnings encountered during SQL 
compilation or execution. This object is passed around between SQL processes 
and finally displayed by the end user application.

ComDiagsArea is populated and handled in many ways.


  1.  The caller allocates the ComDiagsArea and passes it to the callee. The 
callee populates the diagnostics area when there are errors and warnings. The 
caller is responsible for deallocating it.
  2.  In case of process hop, the ComDiagsArea is shipped from the child 
process to the parent process via IPC mechanism.
  3.  ComDiagsArea is also embedded within a container object. The container 
object could be a CLI context (ContextCli) or CLI Statement (Statement) or the 
compiler context(CmpContext).
  4.  During compilation, the error/warning messages are mostly populated in 
the current CmpContext's ComDiagsArea. There can be more than one CmpContext; 
there are at least 2, namely user and META. User queries are compiled in the 
user CmpContext while the internal metadata queries are compiled in the META 
CmpContext.
  5.  The errors/warnings gathered in steps 1) and 2) are usually copied to 
the respective object of item 3), then passed around between the objects of 
item 3), like CmpContext to Statement or Context, before they can be obtained 
by the client applications.

Cons of the above methods:


  *   In step 1, a ComDiagsArea is always allocated, even when the statement 
would succeed without needing one.
  *   In step 2, an empty ComDiagsArea is shipped from the child to the parent 
even when there are no errors or warnings. This results in a ComDiagsArea 
being allocated on the parent side and populated with an empty error/warning 
condition.
  *   Because of steps 1 and 2, the empty ComDiagsArea is copied to the 
objects of step 3.
  *   Prone to leaks in ComDiagsArea due to many unnecessary allocations and 
de-allocations.
  *   The sqlci application, at least, was always using an expensive older way 
of obtaining the diagnostic error messages, even when there were no 
errors/warnings.

I have created a PR https://github.com/apache/trafodion/pull/1470  to take care 
of these issues. The strategy now is


  1.  The caller never allocates the ComDiagsArea, but passes a reference to a 
pointer to the callee; the caller initializes the pointer to NULL. When an 
error/warning is raised, the callee allocates the ComDiagsArea and populates 
it. Then the caller moves it to the objects of step 3 and destroys the 
ComDiagsArea allocated by the callee (see the sketch after this list).
  2.  In case of a process hop, the ComDiagsArea is shipped from the child via 
the IPC mechanism only when there is an error or warning.
  3.  While switching back from the "META" CmpContext to the user CmpContext, 
the errors/warnings from META are copied to the user CmpContext.
  4.  Applications like sqlci and mxosrvr should attempt to obtain the 
diagnostics info based on the return code of the CLI call. When the return 
code is 100, get the number of error conditions via the less expensive call 
SQL_EXEC_GetDiagnosticsStmtInfo2. When this call returns 2 or more conditions, 
there are warnings other than 100.
  5.  Use mark/rewind and other methods of ComDiagsArea to manipulate it 
rather than creating and copying it.
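
A minimal sketch of item 1 (the function name, error code, and condition below 
are hypothetical, not taken verbatim from the PR):

  // Callee: allocates the diags area only when something is actually raised.
  void callee(ComDiagsArea *&diagsArea, CollHeap *heap, bool errorDetected)
  {
    if (errorDetected) {
      if (diagsArea == NULL)
        diagsArea = ComDiagsArea::allocate(heap);   // allocate only on error
      *diagsArea << DgSqlCode(-1234);               // illustrative error code
    }
  }

  // Caller: owns the NULL-initialized pointer, merges any conditions into the
  // step-3 object, and then releases the callee-allocated area.
  ComDiagsArea *diagsArea = NULL;   // no allocation on the success path
  callee(diagsArea, heap, errorDetected);
  if (diagsArea) {
    // ... move the conditions into the CLI Statement/Context diags here ...
    diagsArea->decrRefCount();
  }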

These changes enable us to create a ComDiagsArea only when there are errors or 
warnings, rather than always keeping one in a primed-up state. This should 
also help in fixing the ComDiagsArea leaks seen with Trafodion.

It is important that developers and reviewers do not let the earlier 
inefficient code creep back in. The purpose of this message is to make all SQL 
developers aware of the new concept and to enforce it whenever error-handling 
code is modified or added.

Selva


RE: [VOTE] Apache Trafodion release 2.2.0 RC3

2018-03-04 Thread Selva Govindarajan
+1 (binding)

Thanks
Selva

-Original Message-
From: Suresh Subbiah  
Sent: Sunday, March 4, 2018 1:44 PM
To: dev@trafodion.apache.org
Subject: Re: [VOTE] Apache Trafodion release 2.2.0 RC3

+1  (binding)

Thanks
Suresh

On Sat, Mar 3, 2018 at 3:46 PM, Steve Varnau  wrote:

> +1  -- the extraneous files are removed now from the artifacts directory.
> Looks good to me.
>
> --Steve
>
> > -Original Message-
> > From: Steve Varnau [mailto:steve.var...@esgyn.com]
> > Sent: Wednesday, February 28, 2018 10:08 AM
> > To: dev@trafodion.apache.org
> > Subject: RE: [VOTE] Apache Trafodion release 2.2.0 RC3
> >
> > Ming,
> >
> > I compared the source tar file to the rc3 tag in git. That looks good.
> >
> > I see a couple extra files in the artifacts directory 
> > (apache-trafodion-regress.tgz, dcs-tests.tgz, phoenix-tests.tgz). 
> > These don't have
> checksums
> > and are just sub-sets of the source tree used in testing, so I think
> they should
> > be removed.
> >
> > Once that is addressed, I'll give a positive vote.
> >
> > --Steve
> >
> > > -Original Message-
> > > From: Ming Liu [mailto:lium...@apache.org]
> > > Sent: Tuesday, February 27, 2018 6:59 PM
> > > To: dev@trafodion.apache.org
> > > Subject: [VOTE] Apache Trafodion release 2.2.0 RC3
> > >
> > > Hi to everyone in the Trafodion Community,
> > >
> > > This is a call to vote on release 2.2.0 of Apache Trafodion .
> > >
> > > This is a major release and includes over 300 fixes and some 
> > > important features. The highlights are all documented here:
> > > https://cwiki.apache.org/confluence/display/TRAFODION/Release+2.2
> > > They include :
> > >
> > > * Trafodion graduates as top level Apache project
> > > * DTM enhancements by porting EsgynDB DTM changes to Trafodion
> > > * jdbcT4 for publish to maven central
> > > * Trafodion Elasticity enhancements
> > > * LOB support in JDBC
> > > * RMS enhancements
> > > * Bug fixes
> > >
> > > JIRA Release Notes :
> > > Trafodion 2.2 JIRA Release Notes
> > >
> >  > 3186
> > > 20=12338559>
> > > Highlights :
> > > https://cwiki.apache.org/confluence/display/TRAFODION/Release+2.2
> > >
> > > GIT info:
> > > The tag for this candidate is "2.2.0rc3". Git repository: git:// 
> > > git.apache.org/trafodion.git
> > >
> > > Release artifacts :
> > > https://dist.apache.org/repos/dist/dev/trafodion/trafodion-2.2.0-R
> > > C3/ Artifacts are signed with my key : E1502B16 which is in 
> > > https://dist.apache.org/repos/dist/release/trafodion/KEYS
> > >
> > > Instructions :
> > >
> > > Installing Trafodion using convenience binaries using the Python
> installer or
> > > install with
> > > Ambari :
> > > http://trafodion.apache.org/docs/provisioning_guide/index.html
> > >
> > > Setting up build environment and building from source :
> > >
> > https://cwiki.apache.org/confluence/display/TRAFODION/Create+Build+E
> > nvi
> > > ronment
> > > https://cwiki.apache.org/confluence/display/TRAFODION/Build+Source
> > >
> > > If you don't have a hadoop, you can refer to this page for how to
> install a
> > local
> > > hadoop environment
> > >
> > https://cwiki.apache.org/confluence/display/TRAFODION/Create+Test+En
> > vi
> > > ronment
> > >
> > > [ ] +1 approve
> > >
> > > [ ] +0 no opinion
> > >
> > > [ ] -1 disapprove (and reason why)
> > >
> > > Vote will be open until the community has had a chance to try out 
> > > the instructions and we get sufficient feedback ( at least 72 
> > > hours), unless canceled.
> > >
> > > Thanks,
> > > Ming
>


RE: [VOTE] Apache Trafodion release 2.2.0 RC 2

2018-02-04 Thread Selva Govindarajan
+1

Thanks Ming for all your explanations. 

Selva
-Original Message-
From: Ming Liu [mailto:lium...@apache.org] 
Sent: Saturday, February 3, 2018 6:01 PM
To: dev@trafodion.apache.org
Subject: Re: RE: [VOTE] Apache Trafodion release 2.2.0 RC 2

Hi, Selva,

I verified on a fresh CentOS 6.9 and could successfully finish all the 
instructions, build, and start Trafodion.
"initialize trafodion" also works fine in my environment.

As for the web page 'issues', here are my comments:
There is a separate page that tells the user how to set up a testing 
environment, including the installation of hadoop using the 
'install_local_hadoop' script: 
https://cwiki.apache.org/confluence/display/TRAFODION/Create+Test+Environment
So next time I will add this page to the VOTE message.

For the git command, the page is about how to get the code from git, and it is 
generic, so it is not a good place to mention a specific tag. But I can add an 
example to show that. And again, I feel it is good to add this to the VOTE 
message, to tell the voter how to set the tag after getting the code.
And I think it would be more official to test the src tarball from SVN as 
well, so I will add that to the VOTE message too.

So, in sum:
CentOS 6.9 is fine.
I will update the web page about how to switch to a specific tag/version of 
trafodion.
I will update the VOTE message so voters know which tag to use, and it will be 
good to build from the src tarball as well.

Hope this solves your doubts. ^_^

Ming


On 2018/01/27 01:13:38, Selva Govindarajan <selva.govindara...@esgyn.com> 
wrote: 
> +0
> 
> I was able to build from scratch using instructions verbatim at the 
> section "Create Build Environment" and  "Build Source" at 
> https://cwiki.apache.org/confluence/display/TRAFODION/Trafodion+Contri
> butor+Guide
> I used centos 6.9
> [centos@gselva-trafodion-2 conf]$ lsb_release -a
> LSB Version:
> :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
> Distributor ID: CentOS
> Description:CentOS release 6.9 (Final)
> Release:6.9
> Codename:   Final
> 
> The webpage has two minor issues.
> 
> 1. git clone git://git.apache.org/trafodion.git
> It shows the steps to build from the latest source. It doesn't show the
> steps to build from a tag or a given release. The "Source Download" link
> has some outdated incubator-trafodion links.
> 
> 2. Set Default (Debug) Environment
> cd <Trafodion source directory>
> source ./env.sh
> 
> It should show cd <Trafodion source directory>/core/sqf. But the release
> build shows the correct directory.
> 
> Otherwise, I was delighted to see the instructions work like a charm. 
> 
> I missed seeing the "install_local_hadoop" sections in the next steps, but 
> used my own way to install Hadoop and start trafodion.
> 
> But the initialize trafodion failed with the following error
> [centos@gselva-trafodion-2 sqf]$ sqlci
> Apache Trafodion Conversational Interface 2.2.0
> Copyright (c) 2015-2017 Apache Software Foundation
> >>initialize trafodion ;
> 
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::create() returned error HBASE_CREATE_ERROR(701). 
> Cause: java.io.IOException: createTable exception. Unable to create 
> table TRAFODION._MD_.AUTHS Reason: java.io.IOException: createTable 
> call error
> org.trafodion.dtm.HBaseTxClient.callCreateTable(HBaseTxClient.java:765
> ) Caused by
> java.io.IOException: java.util.concurrent.ExecutionException: 
> java.io.IOException: DataStreamer Exception:
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructo
> rAccessorImpl.java:57)
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingCo
> nstructorAccessorImpl.java:45)
> java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteExcep
> tion.java:106) 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteExce
> ption.java :95)
> org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(Foreig
> nExceptionUtil.java:45)
> org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.convertResul
> t(HBaseAdmin.java:4448)
> org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedur
> eResult(HBaseAdmin.java:4406)
> org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdm
> in.java:4339)
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:
> 674)
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:
> 604)
> org.apache.hadoop.hbase.client.transactional.TransactionManager.create

RE: [VOTE] Apache Trafodion release 2.2.0 RC 2

2018-01-26 Thread Selva Govindarajan
Which version of CentOS are you trying it on? It would be good if you tried a 
version other than CentOS 6.9.

Selva

-Original Message-
From: Roberta Marton [mailto:roberta.mar...@esgyn.com] 
Sent: Friday, January 26, 2018 5:25 PM
To: dev@trafodion.apache.org
Subject: RE: [VOTE] Apache Trafodion release 2.2.0 RC 2

I am setting up a VM to test the generated source files.  Will let you know if 
it works.

 Roberta

-Original Message-
From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
Sent: Friday, January 26, 2018 5:14 PM
To: dev@trafodion.apache.org
Subject: RE: [VOTE] Apache Trafodion release 2.2.0 RC 2

+0

I was able to build from scratch using instructions verbatim at the section 
"Create Build Environment" and  "Build Source" at 
https://cwiki.apache.org/confluence/display/TRAFODION/Trafodion+Contributor+Guide
I used centos 6.9
[centos@gselva-trafodion-2 conf]$ lsb_release -a
LSB Version:
:base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: CentOS
Description:CentOS release 6.9 (Final)
Release:6.9
Codename:   Final

The webpage has two minor issues. 

1, git clone git://git.apache.org/trafodion.git
It shows the steps to build from the latest source, but it doesn't show the 
steps to build from a tag or a given release. The "Source Download" link has 
some outdated incubator-trafodion links.

2,  Set Default (Debug) Environment
cd <trafodion source directory>
source ./env.sh

It should show cd <trafodion source directory>/core/sqf. But, the release 
build shows the correct directory. 

Otherwise, I was delighted to see the instructions work like a charm. 

I missed seeing the "install_local_hadoop" sections in the next steps, but used 
my own way to install Hadoop and start trafodion.

But the initialize trafodion command failed with the following error:
[centos@gselva-trafodion-2 sqf]$ sqlci
Apache Trafodion Conversational Interface 2.2.0 Copyright (c) 2015-2017 Apache 
Software Foundation
>>initialize trafodion ;

*** ERROR[8448] Unable to access Hbase interface. Call to 
ExpHbaseInterface::create() returned error HBASE_CREATE_ERROR(701). Cause: 
java.io.IOException: createTable exception. Unable to create table 
TRAFODION._MD_.AUTHS Reason: java.io.IOException: createTable call error
org.trafodion.dtm.HBaseTxClient.callCreateTable(HBaseTxClient.java:765) Caused 
by
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: DataStreamer Exception:
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.convertResult(HBaseAdmin.java:4448)
org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4406)
org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4339)
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:604)
org.apache.hadoop.hbase.client.transactional.TransactionManager.createTable(TransactionManager.java:2759)
org.trafodion.dtm.HBaseTxClient.callCreateTable(HBaseTxClient.java:760) Caused 
by
org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
java.util.concurrent.ExecutionException: java.io.IOException: DataStreamer 
Exception:
org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:186)
org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:141)
org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:118)
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure$3.createHdfsRegions(CreateTableProcedure.java:370)
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.createFsLayout(...)
org.apache.hadoop.hbase.client.transactional.RMInterface.createTable(RMInterface.java:475)
org.trafodion.sql.HBaseClient.create(HBaseClient.java:532).

--- SQL operation failed with errors.
>>quit

I can change my vote to +1 if someone else is able to initialize trafodion 
with the R2.2.0 RC 2 build from scratch, either from the source bundle or from 
github.
Selva

-Original Message-
From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
Sent: Friday, January 26, 2018 9:15 AM
To: dev@trafodion.apache.org
Subject: ***UNCHECKED*** RE: [VOTE] Apache Trafodion release 2.2.0 RC 2

I wanted to check if I am able to b

RE: Contributing to apache/trafodion

2018-01-08 Thread Selva Govindarajan
Hi Ming,

You can continue to use the same URL. But you need to create a new workspace 
with the new git URL:

git clone g...@github.com:apache/trafodion

using your Apache committer logon.
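If you would rather keep your existing workspace than re-clone, a possible 
alternative (a sketch, assuming your remote for the ASF repo is named 
'apache', as in your earlier setup) is to repoint that remote:

git remote set-url apache git@github.com:apache/trafodion
git remote -v        # verify the new URL before pushing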

Selva

-Original Message-
From: Liu, Ming (Ming) [mailto:ming@esgyn.cn] 
Sent: Sunday, January 7, 2018 10:45 PM
To: dev@trafodion.apache.org
Subject: RE: Contributing to apache/trafodion

Hi, Selva,

As a committer, I used to use the URL below to push approved PRs into the ASF 
repos. What is the new URL now?

git remote add apache 
https://lium...@git-wip-us.apache.org/repos/asf/trafodion.git

Thanks,
Ming

-Original Message-
From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
Sent: Wednesday, January 03, 2018 7:00 AM
To: dev@trafodion.apache.org
Subject: Contributing to apache/trafodion

The existing incubator-trafodion contributors can migrate to contribute to 
apache/trafodion using the following steps:


  1.  Create a new workspace using

git clone g...@github.com:apache/trafodion

  2.  Find the remote pointing to the existing fork. Usually, you would have 
created with your name. In my case

git remote -v

origin  g...@github.com:apache/trafodion (fetch)

origin  g...@github.com:apache/trafodion (push)

selvaganesang  g...@github.com:selvaganesang/incubator-trafodion (fetch)

selvaganesang   g...@github.com:selvaganesang/incubator-trafodion (push)



selvaganesang is my git id.


  3.  The existing cloned repository <your_git_id>/incubator-trafodion needs to 
be renamed as <your_git_id>/trafodion in the github.

In github

Go to profile->Repositories ->incubator-trafodion->settings



Change the name from incubator-trafodion to trafodion



  4.  Change the remotes in your workspace to point to the new repository

git remote rm <remote_name>
git remote add <remote_name> g...@github.com:<your_git_id>/trafodion

Then you can push your changes as before. If you are not concerned about 
having "incubator" in the fork name, you only need to do step 1 and can use 
the existing remote. A consolidated sketch of these steps follows.
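A minimal end-to-end sketch, using the example id 'selvaganesang' from above 
(substitute your own git id, remote name, and branch; the branch name here is 
hypothetical):

git clone git@github.com:apache/trafodion       # step 1: new workspace
cd trafodion
# after renaming the fork in github (step 3), add it back as a remote (step 4)
git remote add selvaganesang git@github.com:selvaganesang/trafodion
git remote -v                                   # verify both remotes
git push selvaganesang my_fix_branch            # push a work branch to the fork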

Selva






RE: Contributing to apache/trafodion

2018-01-02 Thread Selva Govindarajan
It looks like junk got added in command 1 somehow.

It should be 

git clone g...@github.com:apache/trafodion

Selva
-Original Message-
From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com] 
Sent: Tuesday, January 2, 2018 3:00 PM
To: dev@trafodion.apache.org
Subject: Contributing to apache/trafodion

The existing incubator-trafodion contributors can migrate to contribute to 
apache/trafodion using the following steps:


  1.  Create a new workspace using

git clone g...@github.com:apache/trafodion

  2.  Find the remote pointing to the existing fork. Usually, you would have 
created with your name. In my case

git remote -v

origin  g...@github.com:apache/trafodion (fetch)

origin  g...@github.com:apache/trafodion (push)

selvaganesang  g...@github.com:selvaganesang/incubator-trafodion (fetch)

selvaganesang   g...@github.com:selvaganesang/incubator-trafodion (push)



selvaganesang is my git id.


  3.  The existing cloned repository <your_git_id>/incubator-trafodion needs to 
be renamed as <your_git_id>/trafodion in the github.

In github

Go to profile->Repositories ->incubator-trafodion->settings



Change the name from incubator-trafodion to trafodion



  4.  Change the remotes in your workspace to point to the new repository

git remote rm <remote_name>
git remote add <remote_name> g...@github.com:<your_git_id>/trafodion

Then you can push your changes as before. If you are not concerned about 
having "incubator" in the fork name, you only need to do step 1 and can use 
the existing remote.

Selva






Contributing to apache/trafodion

2018-01-02 Thread Selva Govindarajan
The existing incubator-trafodion contributors can migrate to contribute to 
apache/trafodion using the following steps:


  1.  Create a new workspace using

git clone g...@github.com:apache/trafodion

  2.  Find the remote pointing to the existing fork. Usually, you would have 
created with your name. In my case

git remote -v

origin  g...@github.com:apache/trafodion (fetch)

origin  g...@github.com:apache/trafodion (push)

selvaganesang  g...@github.com:selvaganesang/incubator-trafodion (fetch)

selvaganesang   g...@github.com:selvaganesang/incubator-trafodion (push)



selvaganesang is my git id.


  3.  The existing cloned repository <your_git_id>/incubator-trafodion needs to 
be renamed as <your_git_id>/trafodion in the github.

In github

Go to profile->Repositories ->incubator-trafodion->settings



Change the name from incubator-trafodion to trafodion



  4.  Change the remotes in your workspace to point to the new repository

git remote rm <remote_name>
git remote add <remote_name> g...@github.com:<your_git_id>/trafodion

Then you can push your changes as before. If you are not concerned about 
having "incubator" in the fork name, you only need to do step 1 and can use 
the existing remote.

Selva






RE: how would esp do when it was launched?

2018-01-02 Thread Selva Govindarajan
Hi Joshua,

I tried the steps I had outlined earlier in my workspace via sqlci and didn't 
see that much variation. Can you please try the following in your cluster and 
send us the output. 

drop table tstat;
create table tstat (a char(5) not null ,
 b smallint not null ,
 c char(4),
 d integer,
 primary key (a,b) )
 salt using 2 partitions ;
insert into tstat values (' ',11,'',11);
insert into tstat values (' ',12,'',12);
insert into tstat values (' ',21,'',21);
insert into tstat values ('X',22,'', 22);

drop table tstat_1;
create table tstat_1 (a char(5) not null ,
 b smallint not null ,
 c char(4),
 d integer,
 primary key (a,b) )
 salt using 2 partitions ;
insert into tstat_1 values (' ',11,'',11);
insert into tstat_1 values (' ',12,'',12);
insert into tstat_1 values (' ',21,'',21);
insert into tstat_1 values ('X',22,'', 22);

control query default attempt_esp_parallelism 'on' ;
control query default parallel_num_esps '2' ;
control query shape esp_exchange(hash_groupby(esp_exchange(scan)));

Create a script sample.sql with the following commands:

values(current_timestamp);
control query shape esp_exchange(hash_groupby(esp_exchange(scan)));
prepare s1 from select distinct d from tstat ;
get statistics;
execute s1 ;
get statistics;
control query shape cut;
values(current_timestamp);
execute s1 ;
get statistics;
values(current_timestamp);
control query shape esp_exchange(hash_groupby(esp_exchange(scan)));
prepare s1 from select distinct d from tstat_1 ;
get statistics;
execute s1 ;
get statistics;
control query shape cut;
values(current_timestamp);

Please capture the output of the following commands in sqlci/trafci and send it.

log sample.log clear ;
obey sample.sql ;

BTW, how many regions/tables are there in your HBase cluster?

Selva

-Original Message-
From: Liu, Yao-Hua (Joshua) [mailto:yaohua@esgyn.cn] 
Sent: Monday, January 1, 2018 10:36 PM
To: dev@trafodion.apache.org; d...@trafodion.incubator.apache.org
Subject: RE: how would esp do when it was launched?

Hi Selva,

Thanks a lot for your explanation! Very clear.

I understand what you wanted us to do in your steps. But my concern is why 
ESPs launched for the first time can take so long to execute a query.
Using offender to monitor the query, I can see that in the first 75 seconds 
only 20 thousand rows are scanned, but the final 3 seconds scan the remaining 
4 million rows.

Thanks
Joshua

-Original Message-
From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com] 
Sent: Tuesday, January 2, 2018 2:19 PM
To: dev@trafodion.apache.org; d...@trafodion.incubator.apache.org
Subject: RE: how would esp do when it was launched?

When a query involves ESPs (Executor Server Processes), the following is done.

1. The master process creates ESP processes on different nodes via the 
Trafodion clustering infrastructure. This process creation is no-waited, and 
hence all the ESP processes should be started almost simultaneously.
2. The master process then sends the fragment of the query that needs to be 
executed to every ESP. When this completes, the query starts executing.
3. As part of query execution, an ESP establishes a connection to HBase the 
first time. The HBase connection is then reused for subsequent query 
executions. Currently, this connection time is not tracked separately. I think 
it might be good to track it separately.  

You can try the following (a sqlci sketch follows this list):

Prepare s1, a query with table 1 involving ESPs
Execute the query
Re-execute the query

Prepare s1, a query with table 2 involving ESPs
Execute the query
Re-execute the query

Use the same statement name s1. This would cause the ESPs to be re-used for 
the 2nd query.
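A minimal sqlci sketch of this experiment, assuming two placeholder tables t1 
and t2 with a column d, and using values(current_timestamp) for rough timing 
as elsewhere in this thread:

control query default attempt_esp_parallelism 'on';
prepare s1 from select distinct d from t1;
values(current_timestamp);
execute s1;      -- first execution: ESP startup plus first HBase connection
values(current_timestamp);
execute s1;      -- re-execution: ESPs and HBase connections are reused
values(current_timestamp);
prepare s1 from select distinct d from t2;
execute s1;      -- same statement name s1, so the ESPs are re-used
values(current_timestamp);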

Selva

-Original Message-
From: Liu, Yao-Hua (Joshua) [mailto:yaohua@esgyn.cn]
Sent: Monday, January 1, 2018 7:07 PM
To: dev@trafodion.apache.org; d...@trafodion.incubator.apache.org
Subject: RE: how would esp do when it was launched?

By the way, we can rule out caching, since if in step 3 we select from another 
table, the elapsed time is also super fast.


-Original Message-
From: Liu, Yao-Hua (Joshua) [mailto:yaohua@esgyn.cn]
Sent: Tuesday, January 2, 2018 11:02 AM
To: dev@trafodion.apache.org; d...@trafodion.incubator.apache.org
Subject: RE: how would esp do when it was launched?

Hi Dave,

Thanks for your suggestion!
Actually the table is a Trafodion table which is just named with _HIVE. 
For your 3 steps:
1. prepare
  It takes 2 seconds.
2. execute the first time
  It takes 78 seconds. Starting all the ESPs here takes less than 1 second.
3. execute the second time
  It takes 3 seconds.

So I am wondering what does ESP do