RE: UDR support query cancel?

2019-10-14 Thread Dave Birdsall
Hi,

I don't know off the top of my head. But if I were trying to answer this 
question for myself, I'd explore what tdm_udrserv does when it receives a 
CANCEL request. Do we even send a CANCEL request to tdm_udrserv? A simple test 
might be "select [first n] * from table(udr(whatever))".

I would hope that we don't do something gross like kill the tdm_udrserv process.

Dave

-Original Message-
From: Eric Owhadi  
Sent: Monday, October 14, 2019 1:43 PM
To: dev@trafodion.apache.org
Subject: UDR support query cancel?


Hi Trafodioneers,

I am developing a TMUDF in Java, and I am struggling with the case where the
query gets cancelled by a limit n, where I want to cleanly stop the TMUDF from
producing data and close all the resources it occupies. But I can't find a
handler to hook for query cancellation. Am I missing it?
Thanks in advance for the help,
Eric Owhadi


RE: Trafodion master rh6 Daily Test Result - 934 - Failure

2019-09-18 Thread Dave Birdsall
Looks like the Phoenix test suite failure is due to a network difficulty and 
likely transient. Will watch for this again tomorrow.

Dave

-Original Message-
From: steve.var...@esgyn.com  
Sent: Wednesday, September 18, 2019 4:47 AM
To: dev@trafodion.apache.org
Subject: Trafodion master rh6 Daily Test Result - 934 - Failure


Daily Automated Testing master rh6

Jenkins Job:   https://jenkins.esgyn.com/job/Check-Daily-master-rh6/934/
Archived Logs: http://traf-testlogs.esgyn.com/Daily-master/934
Bld Downloads: http://traf-builds.esgyn.com

Changes since previous daily build:
[dbirdsall] [TRAFODION-3325] Pass down index hints for non-VEG equality 
predicates

[dbirdsall] Rework to address review comment



Test Job Results:

FAILURE phoenix_part2_T2-hdp (2 min 9 sec)
SUCCESS build-rh6-master-debug (33 min)
SUCCESS build-rh6-master-release (38 min)
SUCCESS core-regress-charsets-cdh (37 min)
SUCCESS core-regress-charsets-hdp (51 min)
SUCCESS core-regress-compGeneral-cdh (59 min)
SUCCESS core-regress-compGeneral-hdp (1 hr 10 min)
SUCCESS core-regress-core-cdh (59 min)
SUCCESS core-regress-core-hdp (1 hr 16 min)
SUCCESS core-regress-executor-cdh (1 hr 11 min)
SUCCESS core-regress-executor-hdp (1 hr 31 min)
SUCCESS core-regress-fullstack2-cdh (10 min)
SUCCESS core-regress-fullstack2-hdp (14 min)
SUCCESS core-regress-hive-cdh (55 min)
SUCCESS core-regress-hive-hdp (1 hr 3 min)
SUCCESS core-regress-privs1-cdh (49 min)
SUCCESS core-regress-privs1-hdp (1 hr 3 min)
SUCCESS core-regress-privs2-cdh (1 hr 5 min)
SUCCESS core-regress-privs2-hdp (1 hr 25 min)
SUCCESS core-regress-qat-cdh (29 min)
SUCCESS core-regress-qat-hdp (35 min)
SUCCESS core-regress-seabase-cdh (1 hr 30 min)
SUCCESS core-regress-seabase-hdp (1 hr 58 min)
SUCCESS core-regress-udr-cdh (36 min)
SUCCESS core-regress-udr-hdp (45 min)
SUCCESS jdbc_test-cdh (37 min)
SUCCESS jdbc_test-hdp (48 min)
SUCCESS phoenix_part1_T2-cdh (1 hr 1 min)
SUCCESS phoenix_part1_T2-hdp (1 hr 18 min)
SUCCESS phoenix_part1_T4-cdh (56 min)
SUCCESS phoenix_part1_T4-hdp (1 hr 15 min)
SUCCESS phoenix_part2_T2-cdh (59 min)
SUCCESS phoenix_part2_T4-cdh (57 min)
SUCCESS phoenix_part2_T4-hdp (1 hr 15 min)
SUCCESS pyodbc_test-cdh (17 min)
SUCCESS pyodbc_test-hdp (23 min)


RE: A heads-up about a recent pull request

2019-09-11 Thread Dave Birdsall
Hi Zalo,

Thanks!

And BTW, reinstalling local Hadoop worked for me as expected. My Trafodion 
instance came up fine.

Dave

-Original Message-
From: Zalo Correa  
Sent: Wednesday, September 11, 2019 6:27 PM
To: dev@trafodion.apache.org
Subject: RE: A heads-up about a recent pull request


The changes require that Hadoop be uninstalled and reinstalled. The conf
directory (the configuration files) is now copied to the local_hadoop/traf_conf
directory by the Hadoop installation script.

Zalo

-Original Message-
From: Dave Birdsall 
Sent: Wednesday, September 11, 2019 5:59 PM
To: dev@trafodion.apache.org
Subject: A heads-up about a recent pull request


Hi,

I recently merged a pull request: https://github.com/apache/trafodion/pull/1854

I'm noticing one side effect from this pull request. I had a successful 
Trafodion development instance, using local Hadoop, based on code before this 
pull request.

I did an sqstop + swstophbase. Then I did a git fetch origin, picking up Zalo's 
change, then created a new branch and did a make clean + make all. Then I did 
swstarthbase. HBase comes up successfully. When I do an sqstart, however, it
hangs at 5 processes and never comes up. An sqps command fails with:

[birdsall@edev05 trafodion]$ sqps
2019-09-11 23:47:54,, FATAL, TRAFCONFIG,,, PIN: 17917 TID: 17917, Message 
ID: 101091501, [CClusterConfig::CClusterConfig], Environment variable 
TRAF_CLUSTER_ID is undefined, exiting!

So I did a pkillall.

I then did an sqgen, followed by another attempt at sqstart. It still hangs at 
5 processes. I noticed that sqgen did not recreate ms.env, which is where 
environment variables are defined. So I tried deleting it, and doing sqgen + 
sqstart again. Didn't help, same hang.

I noticed something else. When running sqgen, I see:

Can't open 
/mnt2/birdsall/trafodion/core/sqf/sql/local_hadoop/traf_conf/sqconfig.persist: 
No such file or directory at ./gensq.pl line 606, <> line 12.

So, perhaps I need to blow away my instance and reinstall local_hadoop. So, I 
tried pkillall + git clean -fxd + make all + install_local_hadoop + 
install_traf_components + sqgen + sqstart. That's in progress now; will let you 
know how it turns out.

So this e-mail is just a heads-up that this particular change seems to require 
a fresh install.

I notice that Jenkins Trafodion tests are running fine, so clean installs 
should work.

Dave





A heads-up about a recent pull request

2019-09-11 Thread Dave Birdsall
Hi,

I recently merged a pull request: https://github.com/apache/trafodion/pull/1854

I'm noticing one side effect from this pull request. I had a successful 
Trafodion development instance, using local Hadoop, based on code before this 
pull request.

I did an sqstop + swstophbase. Then I did a git fetch origin, picking up Zalo's 
change, then created a new branch and did a make clean + make all. Then I did 
swstarthbase. HBase comes up successfully. When I do an sqstart, however, it
hangs at 5 processes and never comes up. An sqps command fails with:

[birdsall@edev05 trafodion]$ sqps
2019-09-11 23:47:54,, FATAL, TRAFCONFIG,,, PIN: 17917 TID: 17917, Message 
ID: 101091501, [CClusterConfig::CClusterConfig], Environment variable 
TRAF_CLUSTER_ID is undefined, exiting!

So I did a pkillall.

I then did an sqgen, followed by another attempt at sqstart. It still hangs at 
5 processes. I noticed that sqgen did not recreate ms.env, which is where 
environment variables are defined. So I tried deleting it, and doing sqgen + 
sqstart again. Didn't help, same hang.

I noticed something else. When running sqgen, I see:

Can't open 
/mnt2/birdsall/trafodion/core/sqf/sql/local_hadoop/traf_conf/sqconfig.persist: 
No such file or directory at ./gensq.pl line 606, <> line 12.

So, perhaps I need to blow away my instance and reinstall local_hadoop. So, I 
tried pkillall + git clean -fxd + make all + install_local_hadoop + 
install_traf_components + sqgen + sqstart. That's in progress now; will let you 
know how it turns out.

So this e-mail is just a heads-up that this particular change seems to require 
a fresh install.

I notice that Jenkins Trafodion tests are running fine, so clean installs 
should work.

Dave





FW: PRIORITY Action required: Security review for non-https dependency urls

2019-05-21 Thread Dave Birdsall
Hi,

Anyone want to take a look at our Trafodion build scripts?

Dave

-Original Message-
From: m...@gsuite.cloud.apache.org  On Behalf Of 
Apache Security Team
Sent: Tuesday, May 21, 2019 4:30 AM
To: Apache Security Team 
Subject: PRIORITY Action required: Security review for non-https dependency urls

ASF Security received a report that a number of Apache projects have build 
dependencies downloaded using insecure urls. The reporter states this could be 
used in conjunction with a man-in-the-middle attack to compromise project 
builds.  The reporter claims this is a significant issue and will be making an
announcement on June 10th; a number of press releases and industry reactions
are expected.

We have already contacted each of the projects the reporter detected.
However, we have not run any scanning ourselves to identify other instances;
hence this email.

We request that you review any build scripts and configurations for insecure 
urls where appropriate to your projects, fix them asap, and report back if you 
had to change anything to secur...@apache.org by the 31st May 2019.

The most common finding was HTTP references to repos like maven.org in build 
files (Gradle, Maven, SBT, or other tools).  Here is an example showing 
repositories being used with http urls that should be changed to https:

https://github.com/apache/flink/blob/d1542e9561c6235feb902c9c6d781ba416b8f784/pom.xml#L1017-L1038

Note that searching for http:// might not be enough, look for http\:// too due 
to escaping.

Although this issue is public on June 10th, please make fixes to insecure urls 
immediately.  Also note that some repos will be moving to blocking http 
transfers in June and later:

https://central.sonatype.org/articles/2019/Apr/30/http-access-to-repo1mavenorg-and-repomavenapacheorg-is-being-deprecated/

The reporter claims that a full audit of affected projects is required to 
ensure builds were not made with tampered dependencies, and that CVE names 
should be given to each project; however, we are not requiring this -- we
believe it's more likely that a third-party repo could be compromised with a
malicious build than a MITM attack. If you
disagree, let us know. Projects like Lucene do checksum whitelists of all their 
build dependencies, and you may wish to consider that as a protection against 
threats beyond just MITM.

Best Regards,
Mark J Cox
VP, ASF Security Team


RE: Project Dashboard

2019-05-20 Thread Dave Birdsall
Hi,

Very nice dashboard. Thank you for creating it.

Dave

-Original Message-
From: Pierre Smits  
Sent: Saturday, May 18, 2019 12:19 AM
To: dev@trafodion.apache.org
Subject: Project Dashboard

Hi All,

I have made a project Dashboard available regarding our JIRA. See:
https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12332181

If you have any questions and/or remarks (e.g. improvements) feel free to share.

Best regards,

Pierre Smits

*Apache Trafodion , Vice President* *Apache 
Directory , PMC Member* Apache Incubator 
, committer *Apache OFBiz 
, contributor (without privileges) since 2008* Apache 
Steve , committer


Odd Jar problem in Trafodion builds

2019-04-10 Thread Dave Birdsall
Hi,

Recently in my Trafodion instance, I've been seeing the following error message:

JDBC Library Version Error - Jar: Traf_JDBC_Type2_Build_0ab8d50 Jni: 
Traf_JDBC_Type2_Build_2ffb876

I see this when running compGeneral/TEST072, where it attempts to create a Java 
stored procedure.

I also see it at "initialize trafodion" time when it tries to create the 
libraries.

In both cases, the UDR server aborts.

Any idea of how to cure this problem?

Thanks,

Dave


RE: workstation instance change

2019-03-05 Thread Dave Birdsall
Hi,

What effect should we see?

If we do a git fetch origin and do a make, will our existing instances break? 
Do we need to do a fresh install_local_hadoop?

Thanks,

Dave

-Original Message-
From: Steve Varnau  
Sent: Tuesday, March 5, 2019 12:40 PM
To: dev@trafodion.apache.org
Subject: workstation instance change


Trafodion developers,

FYI. I just merged the TRAFODION-3272 changes to master.  These mostly affect
cluster-installed instances, since they change the location of TRAF_CONF to
/etc/trafodion/conf, as well as the default locations of TRAF_LOG and TRAF_VAR.
(The cdl and cdt functions continue to work, of course.)

The locations of DCS and REST conf directories also changed, so that they are 
located relative to TRAF_CONF.  This created a small conflict between 
install_local_hadoop modifying config files and the makefile packaging that 
packages the default config files.  To resolve this, and to allow good packages
to be built in the same workspace as install_local_hadoop, the setting of
TRAF_CONF on workstation instances was changed to be under the local_hadoop
directory.

This mostly does not affect regular usage, since there is little need to change
the default config files. But I wanted to give you this heads-up.

Also, I redefined the cdc function, which was doing something very outdated, to
instead change to the TRAF_CONF directory, reducing the need to remember where
it is in the various environments.

--Steve



Logsort change coming

2019-02-05 Thread Dave Birdsall
Hi,

I've been playing with changes to logsort to teach it to sort output from GET 
statements. A little while back we had some regression failures because GET 
output was not ordered.

At the moment, I have it sorting the output of the following GET statements:

Tables
Indexes
Libraries
Schemas
Views
Privileges
Functions
Procedures
Sequences
Roles
Objects

If you expect GET output to be ordered for one or more of these, please speak
up! In that case, logsort should *not* sort the output of that statement.

Let me know. A pull request will be created shortly.

Thanks,

Dave


FW: Google Summer of Code 2019 is coming

2019-01-29 Thread Dave Birdsall
Is this something of interest to the Trafodion community? Is there some feature 
in our engine that we could ask a student to attack over the summer?

Dave

-Original Message-
From: Ulrich Stärk  
Sent: Tuesday, January 29, 2019 4:25 AM
Subject: Google Summer of Code 2019 is coming

Hello PMCs (incubator Mentors, please forward this email to your podlings),

Google Summer of Code [1] is a program sponsored by Google allowing students to 
spend their summer working on open source software. Students will receive 
stipends for developing open source software full-time for three months. 
Projects will provide mentoring and project ideas, and in return have the 
chance to get new code developed and - most importantly - to identify and bring 
in new committers.

The ASF will apply as a participating organization meaning individual projects 
don't have to apply separately.

If you want to participate with your project we ask you to do the following 
things by no later than
2019-01-31 19:00 UTC (applications from organizations close a week later)

1. understand what it means to be a mentor [2].

2. record your project ideas.

Just create issues in JIRA, label them with gsoc2019, and they will show up at 
[3]. Please be as specific as possible when describing your idea. Include the 
programming language, the tools and skills required, but try not to scare 
potential students away. They are supposed to learn what's required before the 
program starts.

Use labels, e.g. for the programming language (java, c, c++, erlang, python, 
brainfuck, ...) or technology area (cloud, xml, web, foo, bar, ...).

Please use the COMDEV JIRA project for recording your ideas if your project 
doesn't use JIRA (e.g.
httpd, ooo). Contact d...@community.apache.org if you need assistance.

[4] contains some additional information (will be updated for 2019 shortly).

3. subscribe to ment...@community.apache.org; restricted to potential mentors, 
meant to be used as a private list - general discussions on the public 
d...@community.apache.org list as much as possible please). Use a recognized 
address when subscribing (@apache.org or one of your alias addresses on record).

Note that the ASF isn't accepted as a participating organization yet, 
nevertheless you *have to* start recording your ideas now or we might not get 
accepted.

Over the years we were able to complete hundreds of projects successfully. Some 
of our prior students are active contributors now! Let's make this year a 
success again!

P.S.: this email is free to be shared publicly if you want to.

[1] https://summerofcode.withgoogle.com/
[2] http://community.apache.org/guide-to-being-a-mentor.html
[3] https://s.apache.org/gsoc2019ideas
[4] http://community.apache.org/gsoc.html


RE: One question about the code of mxosrvr

2019-01-28 Thread Dave Birdsall
Hi,

Did you get an answer to your question?

Dave

-Original Message-
From: Song Hao-Lin  
Sent: Monday, January 21, 2019 3:19 AM
To: dev@trafodion.apache.org
Subject: One question about the code of mxosrvr


Hi all

I found that the function GETMXCSWARNINGORERROR, which is used to set warnings
or errors in mxosrvr, can only record one warning or error during one client
request.
If you call the function twice, it removes the message recorded the first time
(by bzero(WarningOrError,sizeof(WarningOrError))), while errors or warnings
from the SQL engine can be recorded by SRVR::GETSQLWARNINGORERROR2 no matter
how many there are. If I have made a mistake, please let me know.
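
To make the symptom concrete, here is a standalone sketch (invented names, not
the mxosrvr code itself) of how a static buffer that is zeroed on entry keeps
only the condition from the most recent call:

#include <stdio.h>
#include <string.h>

// Standalone sketch, invented names: a static buffer that is zeroed on entry
// keeps only the condition from the most recent call.
static const char *recordCondition(const char *msg)
{
  static char buf[64];
  memset(buf, 0, sizeof(buf));          // like bzero(WarningOrError, ...) below
  strncpy(buf, msg, sizeof(buf) - 1);   // overwrites whatever was recorded
  return buf;
}

int main()
{
  recordCondition("first warning");
  const char *kept = recordCondition("second warning");
  printf("kept: %s\n", kept);           // prints only "second warning"
  return 0;
}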

The function GETMXCSWARNINGORERROR is in the file 
core/conn/odbc/src/odbc/nsksrvrcore/srvrcommon.cpp.
I want to modify the related code and suggestions are welcome. 


// DO NOT call this function using pSrvrStmt->sqlWarningOrErrorLength and 
pSrvrStmt->sqlWarningOrError, // Since the WarningOrError is static and 
pSrvrStmt->sqlWarningOrError will deallocate this memory.
extern "C" void GETMXCSWARNINGORERROR(
/* In */ Int32 sqlcode
, /* In */ char *sqlState
, /* In */ char *msg_buf
, /* Out */ Int32 *MXCSWarningOrErrorLength
, /* Out */ BYTE *&MXCSWarningOrError) {
Int32 total_conds = 1;
Int32 buf_len;
Int32 curr_cond = 1;
Int32 msg_buf_len = strlen(msg_buf)+1;
Int32 time_and_msg_buf_len = 0;
Int32 msg_total_len = 0;
Int32 rowId = 0; // use this for rowset recovery.
char tsqlState[6];
static BYTE WarningOrError[1024];
char strNow[TIMEBUFSIZE + 1];
char* time_and_msg_buf = NULL;
memset(tsqlState,0,sizeof(tsqlState));
memcpy(tsqlState,sqlState,sizeof(tsqlState)-1);

bzero(WarningOrError,sizeof(WarningOrError));
*MXCSWarningOrErrorLength = 0;
MXCSWarningOrError = WarningOrError; // Size of internally generated message should be enough
*(Int32 *)(MXCSWarningOrError+msg_total_len) = total_conds;
msg_total_len += sizeof(total_conds);
*(Int32 *)(MXCSWarningOrError+msg_total_len) = rowId;
msg_total_len += sizeof(rowId);
*(Int32 *)(MXCSWarningOrError+msg_total_len) = sqlcode;
msg_total_len += sizeof(sqlcode);
time_and_msg_buf_len = msg_buf_len + TIMEBUFSIZE;
*(Int32 *)(MXCSWarningOrError+msg_total_len) = time_and_msg_buf_len;
msg_total_len += sizeof(time_and_msg_buf_len);
//Get the timetsamp
time_and_msg_buf = new char[time_and_msg_buf_len];
strncpy(time_and_msg_buf, msg_buf, msg_buf_len);
time_t now = time(NULL);
bzero(strNow, sizeof(strNow));
strftime(strNow, sizeof(strNow), " [%Y-%m-%d %H:%M:%S]", localtime(&now));
strcat(time_and_msg_buf, strNow);
memcpy(MXCSWarningOrError+msg_total_len, time_and_msg_buf, 
time_and_msg_buf_len);
msg_total_len += time_and_msg_buf_len;
delete time_and_msg_buf;
memcpy(MXCSWarningOrError+msg_total_len, tsqlState, sizeof(tsqlState));
msg_total_len += sizeof(tsqlState);
*MXCSWarningOrErrorLength = msg_total_len;
return;
}


Repository migration difficulty

2019-01-14 Thread Dave Birdsall
Hi Trafodion developers,

In preparation for the gitbox repo migration, I attempted to link my apache and 
github IDs.

I successfully added my github ID to the apache profile, and successfully 
received an invitation.

Now, when I go to https://gitbox.apache.org/setup/, I can log in successfully 
to apache, and that automatically auths me into github as well. So the first 
two windows show "Authed". But the third window, MFA Status, says: "MFA 
DISABLED Write access suspended. Please make sure you are a part of the ASF 
Organization on Github and have 2FA enabled. Visit id.apache.org and set your 
GitHub ID to be invited to the org. Please allow up to an hour for your MFA 
status to propagate ... "

Well, I did all that last week and it still hasn't migrated.

Any suggestions on what I should do?

Thanks,

Dave


RE: Repo Migration

2019-01-09 Thread Dave Birdsall
Hi,

I just did the "add your github ID to your ASF profile" step.

One oddity is that when I log into the id.apache.org application, it showed two 
"Github Username" fields. The first contained "" while the second 
was blank. It would not allow me to edit the first one, but it did allow me to 
type in something into the second box. I typed in my Github user name there, 
and submitted the changes.

When I log back in again, it shows my Github user name in the first of these 
"Github Username" fields.

So it's quirky but seems to work so far.

Dave

-Original Message-
From: Steve Varnau  
Sent: Wednesday, January 9, 2019 10:15 AM
To: dev@trafodion.apache.org
Subject: RE: Repo Migration


The instructions are there on the gitbox site: https://gitbox.apache.org/setup/

You have to add your github ID to your ASF profile at: https://id.apache.org/ 
And then your github ID will be invited to join the ASF group in github.

You also have to enable 2-factor authentication in github.

--Steve

> -Original Message-
> From: Dave Birdsall 
> Sent: Wednesday, January 9, 2019 10:06 AM
> To: dev@trafodion.apache.org
> Subject: RE: Repo Migration
>
>
> Hi Steve,
>
> How does one link github and ASF accounts together? (Or tell if they 
> have already done so?)
>
> Thanks,
>
> Dave
>
> -Original Message-
> From: Steve Varnau 
> Sent: Wednesday, January 9, 2019 9:55 AM
> To: dev@trafodion.apache.org
> Subject: RE: Repo Migration
>
>
> Looks like I, as a committer who has linked my github and ASF accounts 
> together, have permission to merge your PR (merge button is green).
> https://github.com/apache/trafodion-site/pull/1
>
> I don't see any way to know who all have write permission.
>
> Looks like I can check which repos I have access to by logging into 
> https://gitbox.apache.org/setup/ (with apache account).
> It also checks that my accounts are linked. So those committers who 
> have not done so, should take care of that.  (Otherwise you'll have to 
> merge commits the old fashioned way instead of clicking the green 
> button.)
>
> --Steve
>
> > -Original Message-
> > From: Pierre Smits 
> > Sent: Wednesday, January 9, 2019 12:57 AM
> > To: dev@trafodion.apache.org
> > Subject: Re: Repo Migration
> >
> > Hi all,
> >
> > Our trafodion site repo has been migrated by INFRA to GitBox. It 
> > should now offer a write-able mirror on GitHub, allowing committers 
> > to be able to merge GitHub pull requests directly via functionalities there.
> >
> > Can we test this (and report findings) before I submit a request to 
> > migrate our code repo?
> >
> > Best regards,
> >
> > Pierre Smits
> >
> > *Apache Trafodion <https://trafodion.apache.org>, Vice President* 
> > *Apache Directory <https://directory.apache.org>, PMC Member* Apache 
> > Incubator <https://incubator.apache.org>, committer *Apache OFBiz 
> > <https://ofbiz.apache.org>, contributor (without privileges) since
> > 2008* Apache Steve <https://steve.apache.org>, committer
> >
> >
> > On Sun, Jan 6, 2019 at 10:36 AM Pierre Smits 
> > 
> > wrote:
> >
> > >
> > > Hi all the request to migrate the website repo has been submitted.
> > > It can be tracked here:
> > > https://issues.apache.org/jira/browse/INFRA-17562
> > >
> > > Best regards,
> > >
> > > Pierre Smits
> > >
> > > *Apache Trafodion <https://trafodion.apache.org>, Vice President* 
> > > *Apache Directory <https://directory.apache.org>, PMC Member*
> Apache
> > > Incubator <https://incubator.apache.org>, committer *Apache OFBiz 
> > > <https://ofbiz.apache.org>, contributor (without privileges) since
> > > 2008* Apache Steve <https://steve.apache.org>, committer
> > >
> > >
> > > On Fri, Jan 4, 2019 at 1:54 AM ming.liu  wrote:
> > >
> > >> Thanks Pierre,
> > >>
> > >> This is a good plan. +1
> > >>
> > >> Ming
> > >>
> > >> -Original Message-
> > >> From: Pierre Smits 
> > >> Sent: Thursday, January 03, 2019 9:58 PM
> > >> To: dev@trafodion.apache.org
> > >> Subject: Repo Migration
> > >>
> > >> Hi All,
> > >>
> > >> Happy New Year to all.
> > >>
> > >> As you may have noticed from postings by the INFRA team the 
> > >> mandatory migration of our repos
> > >

RE: Repo Migration

2019-01-09 Thread Dave Birdsall
Hi Steve,

How does one link github and ASF accounts together? (Or tell if they have 
already done so?)

Thanks,

Dave

-Original Message-
From: Steve Varnau  
Sent: Wednesday, January 9, 2019 9:55 AM
To: dev@trafodion.apache.org
Subject: RE: Repo Migration


Looks like I, as a committer who has linked my github and ASF accounts 
together, have permission to merge your PR (merge button is green).  
https://github.com/apache/trafodion-site/pull/1

I don't see any way to know who all have write permission.

Looks like I can check which repos I have access to by logging into 
https://gitbox.apache.org/setup/ (with apache account).
It also checks that my accounts are linked. So those committers who have not 
done so, should take care of that.  (Otherwise you'll have to merge commits the 
old fashioned way instead of clicking the green button.)

--Steve

> -Original Message-
> From: Pierre Smits 
> Sent: Wednesday, January 9, 2019 12:57 AM
> To: dev@trafodion.apache.org
> Subject: Re: Repo Migration
>
> Hi all,
>
> Our trafodion site repo has been migrated by INFRA to GitBox. It 
> should now offer a write-able mirror on GitHub, allowing committers to 
> be able to merge GitHub pull requests directly via functionalities there.
>
> Can we test this (and report findings) before I submit a request to 
> migrate our code repo?
>
> Best regards,
>
> Pierre Smits
>
> *Apache Trafodion , Vice President* 
> *Apache Directory , PMC Member* Apache 
> Incubator , committer *Apache OFBiz 
> , contributor (without privileges) since 
> 2008* Apache Steve , committer
>
>
> On Sun, Jan 6, 2019 at 10:36 AM Pierre Smits 
> wrote:
>
> >
> > Hi all the request to migrate the website repo has been submitted. 
> > It can be tracked here: 
> > https://issues.apache.org/jira/browse/INFRA-17562
> >
> > Best regards,
> >
> > Pierre Smits
> >
> > *Apache Trafodion , Vice President* 
> > *Apache Directory , PMC Member* Apache 
> > Incubator , committer *Apache OFBiz 
> > , contributor (without privileges) since 
> > 2008* Apache Steve , committer
> >
> >
> > On Fri, Jan 4, 2019 at 1:54 AM ming.liu  wrote:
> >
> >> Thanks Pierre,
> >>
> >> This is a good plan. +1
> >>
> >> Ming
> >>
> >> -Original Message-
> >> From: Pierre Smits 
> >> Sent: Thursday, January 03, 2019 9:58 PM
> >> To: dev@trafodion.apache.org
> >> Subject: Repo Migration
> >>
> >> Hi All,
> >>
> >> Happy New Year to all.
> >>
> >> As you may have noticed from postings by the INFRA team the 
> >> mandatory migration of our repos
> >>
> >>- trafodion.git
> >>- trafodion-site.git
> >>
> >> is intended to happen soon (shortly after Feb 7th, 2019).
> >>
> >> In order not to put all our eggs in one basket (combined with those
> >> of other Apache projects) I propose to stagger our migration efforts
> >> before the kick-off of the mandatory process, by
> >>
> >>1. initiating the migration of our website repo first a.s.a.p.;
> >>2. initiate the migration of our code base repo second and last.
> >>
> >> This way we will build the experience of all our contributors and 
> >> will - most probably - minimise the migration issues.
> >>
> >> If you have any questions or concerns please let us know.  I will 
> >> wait until the beginning of next week before submitting a ticket to 
> >> infra regarding the migration of our website repo ( aspect #1), 
> >> giving consent for the move.  Once that move has taken place and 
> >> the dust has settled again, we'll initiate the move of the code 
> >> repo. This will give us a 3-4 week window to address any issues 
> >> raised.
> >>
> >> Lazy consensus rules apply, meaning silence implies consent.
> >>
> >> Best regards,
> >>
> >> Pierre Smits
> >>
> >> *Apache Trafodion , Vice President* 
> >> *Apache Directory , PMC Member* 
> >> Apache Incubator , committer *Apache 
> >> OFBiz , contributor (without
> >> privileges)
> >> since 2008*
> >> Apache Steve , committer
> >>
> >>


RE: questionable `sprintf` usage

2018-12-19 Thread Dave Birdsall
Hi,

Just a follow-up. A while back Anoop Sharma and I removed some of the 
obsolete static SQL code from UPDATE STATISTICS. You can find a reference here: 
https://issues.apache.org/jira/browse/TRAFODION-3138 (the pull request is here 
https://github.com/apache/trafodion/pull/1639). There is still some left to be 
removed, including, apparently, this OpenCursor method. Feel free to open a new 
JIRA and remove the rest of the static SQL code if you wish.

Dave 

-Original Message-
From: Dave Birdsall  
Sent: Wednesday, December 19, 2018 9:22 AM
To: dev@trafodion.apache.org
Subject: RE: questionable `sprintf` usage


Hi,

I think this is dead code, actually. I looked for callers to OpenCursor but did 
not find any.

The comment at the top of the function says that it is used for static modules. 
Static SQL compilation was supported in Trafodion's predecessor product, but 
Trafodion itself does not support it.

We probably can remove this function.

Dave


-Original Message-
From: Selva Govindarajan 
Sent: Wednesday, December 19, 2018 7:38 AM
To: dev@trafodion.apache.org
Subject: RE: questionable `sprintf` usage


The code is not erroneous, though it is a bit strange.

Declaration of sprintf is

int sprintf ( char * str, const char * format, ... );

It needs just 2 parameters; the rest are optional. In this case, when the
format parameter contains no format specifications, sprintf just copies the
format parameter to str.

Trafodion code is compiled with -Wformat -Werror. This should emit compilation
errors when printf or sprintf is used in an incorrect way, such as fewer
arguments than required by the format specification, an incompatible format and
argument, and other errors.

snprintf might be good to avoid buffer overflow, but in this case I am not sure 
if there was a buffer overflow condition.
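
For illustration, here is a minimal standalone sketch (not Trafodion code) of
why passing a message as the format string is still risky even when there is
no overflow:

#include <stdio.h>

int main()
{
  char buf[64];
  const char *msg = "loaded 100% of rows";  // text containing a stray '%'
  // sprintf(buf, msg);                     // compiles, but "% o" is parsed as
  //                                        // a format specifier: undefined
  //                                        // behavior, no argument supplied
  snprintf(buf, sizeof(buf), "%s", msg);    // copies the text verbatim, bounded
  printf("%s\n", buf);
  return 0;
}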

Selva
-Original Message-
From: wenjun@esgyn.cn 
Sent: Wednesday, December 19, 2018 2:35 AM
To: dev@trafodion.apache.org
Subject: questionable `sprintf` usage

Hi,



I suspect the following code in core/sql/ustat/hs_read.cpp is erroneous:

2120   desc = new SQLDESC_ID;

2121   init_SQLCLI_OBJ_ID(desc);

2122

2123   desc->name_mode = cursor_name;

2124   desc->module = &module;

2125   desc->identifier = new char[HS_STMTID_LENGTH];

2126   desc->handle = 0;

2127

2128   sprintf((char*)desc->identifier, descID);

2129   desc->identifier_len = strlen(descID);



The parameters to function `sprintf` should be 3, but there are only 2.



I’d like to change it to:

   snprintf((char*)desc->identifier, HS_STMTID_LENGTH, "%s", descID);



What do you think?



Regards,

Wenjun Zhu



RE: questionable `sprintf` usage

2018-12-19 Thread Dave Birdsall
Hi,

I think this is dead code, actually. I looked for callers to OpenCursor but did 
not find any.

The comment at the top of the function says that it is used for static modules. 
Static SQL compilation was supported in Trafodion's predecessor product, but 
Trafodion itself does not support it.

We probably can remove this function.

Dave


-Original Message-
From: Selva Govindarajan  
Sent: Wednesday, December 19, 2018 7:38 AM
To: dev@trafodion.apache.org
Subject: RE: questionable `sprintf` usage


The code is not erroneous, though it is a bit strange.

Declaration of sprintf is

int sprintf ( char * str, const char * format, ... );

It needs just 2 parameters; the rest are optional. In this case, when the
format parameter contains no format specifications, sprintf just copies the
format parameter to str.

Trafodion code is compiled with -Wformat -Werror. This should emit compilation
errors when printf or sprintf is used in an incorrect way, such as fewer
arguments than required by the format specification, an incompatible format and
argument, and other errors.

snprintf might be good to avoid buffer overflow, but in this case I am not sure 
if there was a buffer overflow condition.

Selva
-Original Message-
From: wenjun@esgyn.cn 
Sent: Wednesday, December 19, 2018 2:35 AM
To: dev@trafodion.apache.org
Subject: questionable `sprintf` usage

Hi,



I suspect the following code in core/sql/ustat/hs_read.cpp is erroneous:

2120   desc = new SQLDESC_ID;

2121   init_SQLCLI_OBJ_ID(desc);

2122

2123   desc->name_mode = cursor_name;

2124   desc->module = &module;

2125   desc->identifier = new char[HS_STMTID_LENGTH];

2126   desc->handle = 0;

2127

2128   sprintf((char*)desc->identifier, descID);

2129   desc->identifier_len = strlen(descID);



The parameters to function `sprintf` should be 3, but there are only 2.



I’d like to change it to:

   snprintf((char*)desc->identifier, HS_STMTID_LENGTH, "%s", descID);



What do you think?



Regards,

Wenjun Zhu



RE: Merges seem to be going slow

2018-11-19 Thread Dave Birdsall
Hi,

Just noticed that these pull requests were reported as merged just now. So
whatever hiccup was happening has now resolved itself.

Dave

-Original Message-
From: Dave Birdsall  
Sent: Monday, November 19, 2018 10:25 AM
To: dev@trafodion.apache.org
Subject: Merges seem to be going slow


Hi,

Just a heads-up: I've merged three pull requests this morning into Trafodion 
master (#1743, 1744 and 1745) but the merge infrastructure has not reported 
them as merged yet. So something is hung up somewhere. Will wait to see if it 
resolves on its own.

Dave


Merges seem to be going slow

2018-11-19 Thread Dave Birdsall
Hi,

Just a heads-up: I've merged three pull requests this morning into Trafodion 
master (#1743, 1744 and 1745) but the merge infrastructure has not reported 
them as merged yet. So something is hung up somewhere. Will wait to see if it 
resolves on its own.

Dave


RE: destructor has been called twice?

2018-11-12 Thread Dave Birdsall
Hi,

I'm not familiar with this code.

Looking at it now though I see some things that look odd to me.

First, struct CacheEntry inherits from NABasicObject. If I understand 
export/NABasicObject.h correctly, we should use "delete" on it, and not 
NADELETE.

Another thing I wonder about is CacheEntry has its member "KeyDataPair data_;". 
The destructor for struct KeyDataPair does nothing; note that the only members 
of struct KeyDataPair are pointers.

So, calling ~KeyDataPair() at line 1357 appears to be harmless. I agree with 
you that it is also redundant.

Dave

-Original Message-
From: Zhu, Wen-Jun  
Sent: Saturday, November 10, 2018 12:49 AM
To: dev@trafodion.apache.org
Subject: destructor has been called twice?

Hi,

When I read code in core/sql/sqlcomp/QCache.h, I found that:
1349   // removes the element at iterator position pos and
1350   // returns the position of the next element
1351   iterator erase(iterator pos)
1352 {
1353   if (pos == end()) return end();
1354   iterator tmp = (iterator)(pos.node_->next_);
1355   pos.node_->prev_->next_ = pos.node_->next_;
1356   pos.node_->next_->prev_ = pos.node_->prev_;
1357   pos.node_->data_.~KeyDataPair(); // destroy data_
1358   putNode(pos.node_);
1359   --length_;
1360   return tmp;
1361 }


1278   void putNode(CacheEntry* p) {
1279 NADELETE(p,CacheEntry,heap_); // implicitly calls p.data_.~KeyDataPair()
1280   }

The data in that node gets destructed twice. So this is a problem.

My suggestion is to remove the first destruction, i.e., not to call the
destructor explicitly.
Also, the function name `putNode` is not very accurate.

What do you think?

Regards,
Wenjun Zhu


RE: How to get a sort_groupby plan

2018-10-15 Thread Dave Birdsall
Hi,

If the data is already in sorted order, you might get a sort group by plan.

For example:

Create table t1 (a int not null, b int);

Insert a million rows into t1

Prepare select a,count(*) from t1 group by a;

You are likely to get a sort group by plan for this, since the data is already 
ordered by a.

If the table is partitioned, you might still get a hash group by plan though.

Dave

-Original Message-
From: Qifan Chen  
Sent: Saturday, October 13, 2018 9:35 AM
To: dev@trafodion.apache.org
Subject: Re: How to get a sort_groupby plan


Hi Yuan,


In this plan, I think the compiler does the right thing in selecting the
hash_groupby, because a sort groupby would be more expensive and the sorting
work (on column 1) does not help the final sort (on column 2).


Nevertheless, you can use a "control query shape" statement to force the sort
groupby.


control query shape sort(sort_groupby(cut));

;


Thanks --Qifan


From: Liu, Yuan (Yuan) 
Sent: Saturday, October 13, 2018 10:57:37 AM
To: dev@trafodion.apache.org
Subject: How to get a sort_groupby plan


Hi trafodioneers,



I am trying to get a sort_groupby query plan, but I always get hash_groupby 
plan.

Do you have any idea about how to get a sort_groupby plan?



>>explain options 'f' select a.INDUSTRYPHY,sum(a.REGCAP) from DMA_ENTTYPE_STAT 
>>a group by 1 order by 2;



LC   RC   OP   OPERATOR              OPT       DESCRIPTION         CARD
---------------------------------------------------------------------------

3    .    4    root                                                2.50E+001

2    .    3    sort                                                2.50E+001

1    .    2    hash_groupby                                        2.50E+001

.    .    1    trafodion_scan                  DMA_ENTTYPE_STAT    5.03E+00







Best regards



Liu Yuan (Yuan)

Shanghai Esgyn Information Technology Co., Ltd.

Address: Suite 603, Tower A, Changtai Plaza, 2889 Jinke Road, Pudong New District, Shanghai

Mobile: 13671935540

Email: yuan@esgyn.cn





RE: [DISCUSS] gcc compiler support

2018-08-29 Thread Dave Birdsall
Hi,

I dimly remember that someone was playing with moving to 4.8.5... so there 
might be a JIRA already. But I don't remember who.

Dave

-Original Message-
From: Sean Broeder  
Sent: Wednesday, August 29, 2018 11:58 AM
To: dev@trafodion.apache.org
Subject: [DISCUSS] gcc compiler support

Hi All,
During the current vote for the 2.3.0 release of Trafodion it was pointed out 
that using gcc compiler version 4.8.5 would result in build errors.
I've looked around and I cannot find any documentation indicating which 
compiler versions are supported, however, it appears the version that gets used 
most prominently on the systems I've seen is version 4.4.7.

While I think it is a good idea to move forward with a newer compiler, this 
change is not without risk and could result in a variety of other errors that 
are not yet known.

Therefore, at this stage of the release, I would like to propose we open a JIRA 
to move to a newer compiler version that could be tested on the main branch 
during the next release cycle and list gcc compiler 4.4.7 as a build 
requirement.

Opinions?

Thanks,
Sean


RE: problems in ExHbaseAccessTcb and MdamColumn

2018-08-15 Thread Dave Birdsall
Hi Qifan,

I explained the CAST business poorly.

Actually, in the compiler we are generating a Narrow node. This is a 
non-standard node that casts a value from a larger data type to a smaller one.

It is necessary for MDAM in order to properly populate key buffers. Key buffers 
have the data type of the key, which may be smaller than the value the key is 
being compared against.

It is perfectly fine in ANSI SQL to issue a query such as SELECT * FROM T WHERE
X > 100000, given a SMALLINT column X. Such a query should always return zero
rows, as the predicate can never be true for a SMALLINT.

If we choose a non-key access path, the compiler internally will convert
X > 100000 to CAST(X TO INTEGER) > 100000. And the predicate will evaluate just
fine.

For a key access path, however, we need to make the 100000 smaller rather than
making X bigger, for the aforementioned reason. Hence the Narrow node.

The Narrow node has the property that it has two outputs, one being the value 
being converted and the other being a "data conversion error" flag. That flag 
can then be used to manipulate inclusion/exclusion on a key predicate. So, for 
example, X < 100000 can be changed at run time to X <= 32767 if X is SMALLINT
SIGNED.
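
To make the idea concrete, here is a standalone sketch of the narrowing step
(illustration only; this is not the Narrow node's actual implementation):

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

// Narrow a 32-bit value to the 16-bit key type, clamping and reporting
// whether anything was lost (illustration only).
static int16_t narrowToSmallint(int32_t v, int *convError)
{
  if (v > SHRT_MAX) { *convError = 1; return SHRT_MAX; }
  if (v < SHRT_MIN) { *convError = 1; return SHRT_MIN; }
  *convError = 0;
  return (int16_t)v;
}

int main()
{
  int err;
  int16_t key = narrowToSmallint(100000, &err);
  // For X < 100000, the flag lets the key logic use X <= 32767 instead;
  // for X > 100000, it lets the predicate collapse to FALSE.
  printf("narrowed to %d, conversion error: %d\n", key, err);
  return 0;
}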

So, no, we do not want to raise a SQL error in this case. Ironically, to do so 
would make us non-standard.

Dave

-Original Message-
From: Qifan Chen  
Sent: Wednesday, August 15, 2018 2:13 PM
To: dev@trafodion.apache.org
Subject: Re: problems in ExHbaseAccessTcb and MdamColumn

Hi Dave,


Thanks a lot for the good explanation on the MDAM code.


On the data conversion error you gave as an example, it seems the best 
treatment is to raise it as a SQL error, per SQL standard quoted below.  
Perhaps we are already doing it?


Thanks --Qifan


ISO/IEC 9075-2:2003 (E)

6.12



yielding value TVEi.

d) If TD is an array type, then let TC be the maximum cardinality of TD. Case:

i) If SC is greater than TC, then an exception condition is raised: data 
exception - array data, right truncation.

ii) Otherwise, TV is the array with elements TVEi, 1 (one) ≤ i ≤ SC.

e) If TD is a multiset type, then TV is the multiset with elements TVEi, 1 
(one) ≤ i ≤ SC.

If SD is a row type, then:

a) For i varying from 1 (one) to DSD, the <cast specification> is applied:

CAST ( FSDi AS TFTDi ) yielding a value TVEi.

b) TV is ROW ( TVE1, TVE2, ..., TVEDSD ).

If TD is exact numeric, then Case:

a) If SD is exact numeric or approximate numeric, then

Case:

i) If there is a representation of SV in the data type TD that does not lose 
any leading significant digits after rounding or truncating if necessary, then 
TV is that representation. The choice of whether to round or truncate is 
implementation-defined.

ii) Otherwise, an exception condition is raised: data exception - numeric value 
out of range.

____
From: Dave Birdsall 
Sent: Wednesday, August 15, 2018 1:03:43 PM
To: dev@trafodion.apache.org
Subject: RE: problems in ExHbaseAccessTcb and MdamColumn

Hi,

My previous answer was incorrect.

The code at hand is execution time code, not compile time code.

When a predicate of the form X > 100000 for a SMALLINT column X is executed,
predPtr->getValue(atp0,workAtp) returns ex_expr::EXPR_OK; the data conversion
error of converting 100000 to a SMALLINT is reflected instead in
dataConvErrorFlag.

So, I am guessing that the problem you are trying to solve is that the 
expression evaluator is returning a genuine error, one that does not involve 
data conversions.

However I just tried such an example but observed that the error was returned 
(see below). So the error is returned in spite of MdamColumn::buildDisjunct() 
not reporting it via a return code. I will guess that instead higher level code 
looks for the presence of errors in ComDiagsArea.

Perhaps the issue is we don't respond to the error quickly enough?

Thanks,

Dave



>>showddl tmdam;

CREATE TABLE TRAFODION.SEABASE.TMDAM
  (
    A    INT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
  , B    INT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
  , C    INT DEFAULT NULL NOT SERIALIZED
  , PRIMARY KEY (A ASC, B ASC)
  )
 ATTRIBUTES ALIGNED FORMAT
;

-- GRANT SELECT, INSERT, DELETE, UPDATE, REFERENCES ON TRAFODION.SEABASE.TMDAM 
TO DB__ROOT WITH GRANT OPTION;

--- SQL operation complete.
>>log example.txt;
>>showddl tmdam;

CREATE TABLE TRAFODION.SEABASE.TMDAM
  (
    A    INT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
  , B    INT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
  , C    INT DEFAULT NULL NOT SERIALIZED
  , PRIMARY KEY (A ASC, B ASC)
  )
 ATTRIBUTES ALIGNED FORMAT
;

-- GRANT SELECT, INSERT, DELETE, UPDATE, REFERENCES ON TRAFODION.SEABASE.TMDAM 
TO DB__ROOT WITH GRANT

RE: problems in ExHbaseAccessTcb and MdamColumn

2018-08-15 Thread Dave Birdsall
Hi,

My previous answer was incorrect.

The code at hand is execution time code, not compile time code.

When a predicate of the form X > 100000 for a SMALLINT column X is executed,
predPtr->getValue(atp0,workAtp) returns ex_expr::EXPR_OK; the data conversion
error of converting 100000 to a SMALLINT is reflected instead in
dataConvErrorFlag.

So, I am guessing that the problem you are trying to solve is that the 
expression evaluator is returning a genuine error, one that does not involve 
data conversions.

However I just tried such an example but observed that the error was returned 
(see below). So the error is returned in spite of MdamColumn::buildDisjunct() 
not reporting it via a return code. I will guess that instead higher level code 
looks for the presence of errors in ComDiagsArea.

Perhaps the issue is we don't respond to the error quickly enough?

Thanks,

Dave



>>showddl tmdam;

CREATE TABLE TRAFODION.SEABASE.TMDAM
  (
    A    INT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
  , B    INT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
  , C    INT DEFAULT NULL NOT SERIALIZED
  , PRIMARY KEY (A ASC, B ASC)
  )
 ATTRIBUTES ALIGNED FORMAT
;

-- GRANT SELECT, INSERT, DELETE, UPDATE, REFERENCES ON TRAFODION.SEABASE.TMDAM 
TO DB__ROOT WITH GRANT OPTION;

--- SQL operation complete.
>>log example.txt;
>>showddl tmdam;

CREATE TABLE TRAFODION.SEABASE.TMDAM
  (
    A    INT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
  , B    INT NO DEFAULT NOT NULL NOT DROPPABLE NOT SERIALIZED
  , C    INT DEFAULT NULL NOT SERIALIZED
  , PRIMARY KEY (A ASC, B ASC)
  )
 ATTRIBUTES ALIGNED FORMAT
;

-- GRANT SELECT, INSERT, DELETE, UPDATE, REFERENCES ON TRAFODION.SEABASE.TMDAM 
TO DB__ROOT WITH GRANT OPTION;

--- SQL operation complete.
>>showstats for table tmdam on existing columns;

Histogram data for Table TRAFODION.SEABASE.TMDAM
Table ID: 8581215122595296219

   Hist ID   # Ints   Rowcount      UEC   Colname(s)
==========  =======  =========  =======  ============
1638572183       24      16384       24   C
1638572188       48      16384     8192   B
1638572193        2      16384        2   A
1638572198        1      16384    16384   A, B


--- SQL operation complete.
>>prepare s2 from select * from tmdam where b = 'abc';

--- SQL command prepared.
>>showshape select * from tmdam where b = 'abc';
control query shape scan(path 'TRAFODION.SEABASE.TMDAM', forward
, blocks_per_access 4 , mdam forced, mdam_columns all(dense, sparse));

--- SQL operation complete.
>>execute s2;

Breakpoint 2, MdamPred::getValue_ (this=0x77eec520, value=..., 
atp0=0x77eb7390, atp1=0x77eb6cf8) at ../comexe/ComKeyMDAM.cpp:600
600   ex_expr::exp_return_type returnExpReturnType = ex_expr::EXPR_OK;
(gdb) c
Continuing.

*** ERROR[8413] The string argument contains characters that cannot be 
converted. Source data(in hex): 616263

--- 0 row(s) selected.
>>



-Original Message-
From: Dave Birdsall 
Sent: Wednesday, August 15, 2018 9:37 AM
To: dev@trafodion.apache.org
Subject: RE: problems in ExHbaseAccessTcb and MdamColumn

Hi,

Could you elaborate on what problem you are trying to solve?

I think I might have written this code, but it was over 20 years ago. I'll try 
to refresh my memory to explain what is going on in this code.

Looking at the code:

587ex_expr::exp_return_type errorCode = predPtr->getValue(atp0,workAtp);
588
589Int32 dcErrFlag1 = dataConvErrorFlag;
590Int32 dcErrFlag2 = 0;
591if (errorCode == ex_expr::EXPR_OK &&
592predPtr->getPredType() == MdamPred::MDAM_BETWEEN)
593  {
594dataConvErrorFlag = 0;
595errorCode = predPtr->getValue2(atp0,workAtp);
596dcErrFlag2 = dataConvErrorFlag;
597  }
598
599MdamPred::MdamPredType predType = MdamPred::MDAM_RETURN_FALSE;
600// Next 2 used only for MDAM_BETWEEN.
601MdamEnums::MdamInclusion startInclusion = 
predPtr->getStartInclusion();
602MdamEnums::MdamInclusion endInclusion   = predPtr->getEndInclusion();
603if (errorCode == ex_expr::EXPR_OK)
604  predType = predPtr->getTransformedPredType(dcErrFlag1, dcErrFlag2,
605 startInclusion, 
endInclusion);

errorCode is set at line 587. It is checked at line 591; if everything is OK we 
go ahead and process the second value if it is a BETWEEN. Then errCode is again 
set at line 595.

At line 599, we are pessimistic; we set predType to FALSE. Then if there were 
no errors at line 603, we then compute the predType at line 604.

If I remember correctly, the point is that when processing certain key 
predicates (MDAM predicat

RE: problems in ExHbaseAccessTcb and MdamColumn

2018-08-15 Thread Dave Birdsall
Hi,

Could you elaborate on what problem you are trying to solve?

I think I might have written this code, but it was over 20 years ago. I'll try 
to refresh my memory to explain what is going on in this code.

Looking at the code:

587ex_expr::exp_return_type errorCode = predPtr->getValue(atp0,workAtp);
588
589Int32 dcErrFlag1 = dataConvErrorFlag;
590Int32 dcErrFlag2 = 0;
591if (errorCode == ex_expr::EXPR_OK &&
592predPtr->getPredType() == MdamPred::MDAM_BETWEEN)
593  {
594dataConvErrorFlag = 0;
595errorCode = predPtr->getValue2(atp0,workAtp);
596dcErrFlag2 = dataConvErrorFlag;
597  }
598
599MdamPred::MdamPredType predType = MdamPred::MDAM_RETURN_FALSE;
600// Next 2 used only for MDAM_BETWEEN.
601MdamEnums::MdamInclusion startInclusion = 
predPtr->getStartInclusion();
602MdamEnums::MdamInclusion endInclusion   = predPtr->getEndInclusion();
603if (errorCode == ex_expr::EXPR_OK)
604  predType = predPtr->getTransformedPredType(dcErrFlag1, dcErrFlag2,
605 startInclusion, 
endInclusion);

errorCode is set at line 587. It is checked at line 591; if everything is OK we 
go ahead and process the second value if it is a BETWEEN. Then errCode is again 
set at line 595.

At line 599, we are pessimistic; we set predType to FALSE. Then if there were 
no errors at line 603, we then compute the predType at line 604.

If I remember correctly, the point is that when processing certain key 
predicates (MDAM predicates are key predicates) we expect to get certain 
errors. For example, if we have a SMALLINT column X, and we have a predicate
X > 100000, the compiler will down-cast the 100000 for the comparison. So when
we get here the predicate is transformed to X > CAST(100000 AS SMALLINT). This
CAST results in an error, because 100000 is greater than any possible SMALLINT
value. So in this case, we want the predicate to always evaluate as FALSE.

And that is precisely what this code does.

The handling of errorCode locally in this procedure is quite intentional.

If we change this code to instead pass the error up to the caller, we may get 
compile time errors for a predicate that is perfectly OK.

Thanks,

Dave


-Original Message-
From: Zhu, Wen-Jun  
Sent: Tuesday, August 14, 2018 9:10 PM
To: dev@trafodion.apache.org
Subject: problems in ExHbaseAccessTcb and MdamColumn

Hi,

in MdamColumn::buildDisjunct() at ../executor/ex_mdam.cpp:
587 ex_expr::exp_return_type errorCode = 
predPtr->getValue(atp0,workAtp);
588
589 Int32 dcErrFlag1 = dataConvErrorFlag;
590 Int32 dcErrFlag2 = 0;
591 if (errorCode == ex_expr::EXPR_OK &&
592 predPtr->getPredType() == MdamPred::MDAM_BETWEEN)
593   {
594 dataConvErrorFlag = 0;
595 errorCode = predPtr->getValue2(atp0,workAtp);
596 dcErrFlag2 = dataConvErrorFlag;
597   }
when errorCode is not OK on line 587, it is not checked immediately and is then 
reused.

So I suggest changing the interface of the function MdamColumn::buildDisjunct(),
changing the return type from NABoolean to int, to distinguish normal values
from error codes.

And in the function keyMdamEx::buildNetwork(), which calls buildDisjunct(), the
current logic requires that the returned value be true. Why? If an error
occurred, can we ignore that ex_assert() and just return the errorCode?





Another problem, ExHbaseAccessTcb::initNextKeyRange() in 
core/sql/executor/ExHbaseAccess.cpp:
2430 Lng32 ExHbaseAccessTcb::initNextKeyRange(sql_buffer_pool *pool,
2431  atp_struct * atp)
2432 {
2433   if (keyExeExpr())
2434 keyExeExpr()->initNextKeyRange(pool, atp);
2435   else
2436 return -1;
2437
2438   return 0;
2439 }
which does not check the value returned by keyExeExpr()->initNextKeyRange(), so
if something goes wrong inside it while keyExeExpr() itself is fine, this
function still returns 0, indicating success.
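
A standalone sketch of the suggested check (the types here are invented
minimal stand-ins, not the actual ExHbaseAccessTcb fix):

#include <stdio.h>

struct KeyExpr
{
  int initNextKeyRange() { return -1; }  // pretend the inner call fails
};

// Propagate the inner call's return code instead of dropping it.
static int initNextKeyRangeChecked(KeyExpr *keyExeExpr)
{
  if (!keyExeExpr)
    return -1;
  return (keyExeExpr->initNextKeyRange() < 0) ? -1 : 0;
}

int main()
{
  KeyExpr k;
  printf("returns %d\n", initNextKeyRangeChecked(&k));  // -1 now, not 0
  return 0;
}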


Regards,
Wenjun Zhu


RE: refCount on ComDiagsArea

2018-08-14 Thread Dave Birdsall
Hi,

I have not researched this area, but it strikes me as one that could be very 
delicate. It may be that in most code paths it is assumed that some other 
function is incrementing the reference count. Great care should be taken in 
modifying this otherwise it may lead to memory leaks. I am hoping others who 
are more knowledgeable will add to this discussion.

Can you give more insight into what problem led you here?

Dave

-Original Message-
From: Zhu, Wen-Jun  
Sent: Tuesday, August 14, 2018 4:11 AM
To: dev@trafodion.apache.org
Subject: refCount on ComDiagsArea

hi,

When setting a ComDiagsArea, I find that the refCount does not increase, in the
function atp_struct::setDiagsArea in the file core/sql/exp/ExpAtp.h:

inline void atp_struct::setDiagsArea(ComDiagsArea* diagsArea) {
  if (diagsArea_)
diagsArea_->decrRefCount();

  diagsArea_ = diagsArea;
}

I guess this is a problem.

There are two solutions to fix this:

1. Invoke incrRefCount on the ComDiagsArea just after the assignment.

2. Overload operator= for ComDiagsArea, doing the increment within operator=.
I find operator= is declared in ComDiags.h, but there is no implementation.

The 2nd solution may be better, as both the increment for the left-hand-side
ComDiagsArea and the decrement for the right-hand-side ComDiagsArea can be
handled within a single operator=, which is friendly to users, like shared_ptr
in C++.
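
Here is a self-contained sketch of suggestion 1 (the class bodies are invented
stand-ins, not the real ComDiagsArea or atp_struct):

#include <stdio.h>

struct ComDiagsArea
{
  int refCount_;
  ComDiagsArea() : refCount_(1) {}
  void incrRefCount() { ++refCount_; }
  void decrRefCount() { if (--refCount_ == 0) delete this; }
};

struct AtpSketch
{
  ComDiagsArea *diagsArea_;
  AtpSketch() : diagsArea_(0) {}
  void setDiagsArea(ComDiagsArea *diagsArea)
  {
    if (diagsArea)
      diagsArea->incrRefCount();   // take the new reference first, so even
    if (diagsArea_)                // self-assignment cannot free the area
      diagsArea_->decrRefCount();
    diagsArea_ = diagsArea;
  }
};

int main()
{
  ComDiagsArea *d = new ComDiagsArea;  // refCount_ == 1
  AtpSketch atp;
  atp.setDiagsArea(d);                 // refCount_ == 2
  d->decrRefCount();                   // caller drops its reference
  printf("refCount now %d\n", atp.diagsArea_->refCount_);  // prints 1
  atp.setDiagsArea(0);                 // releases the last reference
  return 0;
}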

Regards,
Wenjun Zhu


New release branch, JIRAs, pull requests

2018-08-02 Thread Dave Birdsall
Hi,

As you may have noticed, our release manager, Sean Broeder, has created a new 
branch, apache/release2.3, for Trafodion release 2.3.

This means that any changes merged into apache/master going forward will be in 
Trafodion release 2.4.

Therefore for any JIRAs whose pull requests are merged today or going forward 
into apache/master: you should mark those JIRAs as fixed in 2.4. (Example: This 
morning I merged PR 1673 (Trafodion 3108) and PR 1671 (Trafodion 3710). The 
authors of those pull requests should mark the JIRAs as Resolved/Fixed in 2.4.)

For JIRAs that were merged before Sean created the branch (yesterday), mark 
those JIRAs as fixed in 2.3.

You can still fix things in 2.3 of course by creating your pull request against 
apache/release2.3. But I imagine Sean will want to limit those to critical 
issues. Look for a policy statement from Sean on this issue.

Dave


Where does cerr go in sqlci?

2018-05-23 Thread Dave Birdsall
Hi,

I am debugging some stuff in NormItemExpr.cpp.

I see some nice debugging macros in there, e.g. DBGIF, containing code that
writes things to "cerr".

So I tried it. But damned if I can figure out where "cerr" is being written to.

I would have thought it would go to a log file in the $TRAF_HOME/logs 
directory. But nothing shows up there.

I tried "sqlci 2>dave.txt" but to no avail.

I will guess the Monitor is redirecting cerr somewhere. But where?

Any help would be appreciated.

Thanks,

Dave


RE: Tables left over from regression test runs

2018-05-01 Thread Dave Birdsall
Hi,

Regarding why stopping hbase takes a long time: I was watching the HBase log 
today while doing a swstophbase. It was doing individual region closes on each 
table. It took a long time to get through all of them. Of course, one can 
always just kill the HMaster process (I sometimes do this) but that sometimes 
results in not being able to bring the instance up again, with loss of any 
working data. So that's risky.

Regarding time to drop tables: I'm noticing that many of the tests that don't 
drop tables at the end do so at the beginning. If they are run on a clean 
instance, that's fast (because it fails fast or it has "drop if exists"). If 
they are run on an instance where they have been run before, we pay the cost of 
dropping the table anyway. Agreed, for Jenkins it's better because we just 
throw the instance away after one run. For developers who are keeping test 
tables around, it's not so good.

Regarding the convenience of having objects around when there's a need to debug 
something: I've been unlucky at this. Almost always, the particular object I 
need is in a test that cleans up its objects. So I end up having to recreate it 
from a stripped down version of the test script. I suspect this is true more 
often than not. So I haven't found this particular argument persuasive.

Regarding speeding up HBase drop: Yes, that would be a great idea.

Dave

-Original Message-
From: Anoop Sharma <anoop.sha...@esgyn.com> 
Sent: Tuesday, May 1, 2018 4:51 PM
To: dev@trafodion.apache.org
Subject: RE: Tables left over from regression test runs

yes, it is true that some tests do not drop all the tables that are created as 
part of that test. 
This is not always intentional and at times it is because one missed cleaning 
them up.

But there are some advantages of not dropping tables at the end of a test run.

- dropping hbase tables takes a non-trivial amount of time.  Dropping all
tables will increase the time it takes to run a test.
  This will also impact Jenkins, as it runs tests after an init traf, which
cleans up everything.
- is there a way to make dropping a table, or dropping a whole schema, faster?
Using concurrent drops? Or a drop without disable (disable is where most of the
time is spent, due to mem flush)? There is an hbase jira on the drop issue but
no one has volunteered to fix it.
- some tables are permanent (like from QAT) that should not be cleaned up
- many tests drop tables at the beginning of the test or have an 'if not 
exists' clause. 
- one advantage of not dropping a table at the end is that sometimes an issue 
could be diagnosed without having to recreate the table and associated 
dependent objects.
- if the only objects on a dev instance are regression test objects, then doing 
ilh_trafinit will be much faster to clean up everything after full regressions.
  But this would also nuke any non-regression traf objects, so one needs to be 
careful about it
- should we also find out why stopping hbase takes a long time? Is there 
something that can be done to 'stop abruptly' on the dev platform?

anoop

-Original Message-
From: Dave Birdsall <dave.birds...@esgyn.com>
Sent: Tuesday, May 1, 2018 3:57 PM
To: dev@trafodion.apache.org
Subject: Tables left over from regression test runs

Hi,

I've noticed after running full regressions that there are a boatload of tables 
that don't get cleaned up.

These tables occupy regions in our instance's region server and I think may 
cause excessive memory usage and/or increasingly long times when stopping HBase.

So, I'm thinking about cleaning up some of our regression tests to drop these 
tables when they finish.

Does anyone object to this? Or is there some pressing need to keep any of these 
tables around after regressions complete?

Thanks,

Dave


Tables left over from regression test runs

2018-05-01 Thread Dave Birdsall
Hi,

I've noticed after running full regressions that there are a boatload of tables 
that don't get cleaned up.

These tables occupy regions in our instance's region server and I think may 
cause excessive memory usage and/or increasingly long times when stopping HBase.

So, I'm thinking about cleaning up some of our regression tests to drop these 
tables when they finish.

Does anyone object to this? Or is there some pressing need to keep any of these 
tables around after regressions complete?

Thanks,

Dave


RE: Conferences, meet-ups, etc.

2018-04-22 Thread Dave Birdsall
Hi,

Another way to ask the question (or maybe this is simply a related question): 
What do we think folks at conferences would want to hear about Trafodion?

Dave

-Original Message-
From: Rohit Jain  
Sent: Friday, April 20, 2018 8:56 AM
To: dev@trafodion.apache.org
Subject: Re: Conferences, meet-ups, etc.

Pierre,

We have had product overview presentations before. We may have some product 
updates we could talk about. But what would you suggest we present that would be 
accepted as a proposal at conferences?

Rohit

> On Apr 20, 2018, at 2:04 AM, Pierre Smits  wrote:
> 
> Hi all,
> 
> Participation in conferences, meet-ups, etc. is important to create 
> awareness of our product and our project and thus grow adoption and 
> our community. Are any such participations planned in the near future (e.g.
> in the next 2 quarters)?
> 
> Therefore I ask each of us to update the following wiki page a.s.a.p. when 
> we are (going to be) participating in one of such events (or where 
> Trafodion is a major subject):
> 
> https://cwiki.apache.org/confluence/display/TRAFODION/Events
> 
> 
> Best regards,
> 
> Pierre Smits
> 
> Apache Trafodion, Vice President; Apache Directory, PMC Member; Apache 
> Incubator, committer; Apache OFBiz, contributor since 2008; Apache Steve, 
> committer


RE: 2.3 Release Manager

2018-04-19 Thread Dave Birdsall
+1 Hear! Hear! Bless you, my son.

-Original Message-
From: Sean Broeder  
Sent: Thursday, April 19, 2018 1:47 PM
To: dev@trafodion.apache.org
Subject: RE: 2.3 Release Manager

I have some other commitments that would need to take priority over the next 
week or two, but if no one else has a strong desire to take over this release I 
would be willing to have a shot at it.

Regards,
Sean

-Original Message-
From: Steve Varnau  
Sent: Wednesday, April 18, 2018 11:09 AM
To: dev@trafodion.apache.org
Subject: 2.3 Release Manager

Thanks to Ming for guiding the 2.2 release, especially through some re-sets as 
we transitioned to TLP.

I think it is time we get a release manager lined up to get 2.3 release 
defined.  Release Manager needs to be a committer, though some of the tasks can 
certainly be done by other contributors if someone is willing to assist.

One of the strengths of Trafodion project is that we've had a different release 
manager for each release, so each person has contributed to better defining and 
documenting the process.  So a new release manager is certainly welcome and 
would get support from those of us that have done it before.

--Steve



FW: REMINDER - TAC Applications closes in 2 weeks for ACNA Montréal

2018-04-17 Thread Dave Birdsall
FYI

From: Gavin McDonald 
Sent: Tuesday, April 17, 2018 5:41 PM
To: travel-assista...@apache.org
Subject: REMINDER - TAC Applications closes in 2 weeks for ACNA Montréal

Hello PMCs.

Please could you forward on the below email to your dev and user lists.

Thanks

Gav…

—
Reminder that travel assistance applications for ApacheCon NA 2018 are still 
open but only for another 2 weeks!
Please get your applications in NOW.

We will be supporting ApacheCon NA Montréal, Canada on 24th - 29th September 
2018

 TAC exists to help those that would like to attend ApacheCon events, but are 
unable to do so for financial reasons.
For more info on this years applications and qualifying criteria, please visit 
the TAC website at < http://www.apache.org/travel/ >. Applications are now open 
and will close 1st May.

Important: Applications close on May 1st, 2018. Applicants have until the 
closing date above to submit their applications (which should contain as much 
supporting material as required to efficiently and accurately process their 
request), this will enable TAC to announce successful awards shortly afterwards.

As usual, TAC expects to deal with a range of applications from a diverse range 
of backgrounds. We therefore encourage (as always) anyone thinking about 
sending in an application to do so ASAP.
We look forward to greeting many of you in Montreal.

Kind Regards,
Gavin - (On behalf of the Travel Assistance Committee)



RE: issues on error message

2018-04-10 Thread Dave Birdsall
Hi,

Yes! Glad you asked.

The error message numbers are organized by what part of the software raises 
them.

Messages 1000 - 1999 are raised by the DDL code (core/sql/sqlcomp/* modules) -- 
use the enum in sqlcomp/CmpDDLCatErrorCodes.h
Messages 2000 - 2999 are raised by the mainline code of the SQL compiler 
process (core/sql/arkcmp/* modules + the core/sql/sqlcomp/* modules that aren't 
DDL modules)
Messages 3000 - 3999 are raised by the parser code (core/sql/parser/*)
Messages 4000 - 4999 are raised by the binder code (core/sql/optimizer/Bind*)
Messages 5000 - 5999 are raised by the normalizer code 
(core/sql/optimizer/Norm*)
Messages 6000 - 6999 are raised by the optimizer code (the remaining 
core/sql/optimizer/ modules) -- use the enum in optimizer/opt_error.h
Messages 7000 - 7999 are raised by the generator code (core/sql/generator/*)
Messages 8000 - 8999 are raised by the executor code (core/sql/executor/* and 
core/sql/exp/*) -- use the enum in exp/ExpErrorEnums.h
Messages 9200 - 9299 are raised by the update statistics code 
(core/sql/ustat/*) -- use the enum in ustat/hs_const.h
Messages 10000 - 10049 are raised by the run-time sort code (core/sql/sort/*) 
-- use the enum in sort/SortError.h
Messages 11100 - 11399 are raised by the UDR server and language manager code 
(core/sql/udrserv/*) -- use the enum in udrserv/udrdefs.h

As you can see, not all of the error message ranges have corresponding enum 
files; we did not have a unified convention for this when we originally wrote 
the code unfortunately.
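
As a concrete illustration, here is roughly what raising one of these looks 
like in compiler code (the constant name here is made up; the stream idiom and 
the DgSqlCode/DgString0 classes are the real ones; a negative code raises an 
error, a positive one a warning):

  // Illustrative sketch: report an error through the current diagnostics
  // area using a named constant instead of a magic number.
  // (SOME_PARSER_ERROR is hypothetical; it would come from the enum file
  // for the 3000 - 3999 range.)
  *CmpCommon::diags() << DgSqlCode(-SOME_PARSER_ERROR)
                      << DgString0("text for the first substitution parameter");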

When you add an error message, be sure to also add a description to the SQL 
Messages Guide. See http://trafodion.apache.org/docs/messages_guide/index.html 
for the latest version. The source for it can be found in 
$TRAF_HOME/../../docs/messages_guide/src/asciidoc/_chapters.

Dave





-Original Message-
From: Zhu, Wen-Jun  
Sent: Tuesday, April 10, 2018 1:39 AM
To: dev@trafodion.apache.org
Subject: issues on error message

Hi,

I am trying to add some error messages in SqlciErrors.txt for when a new error 
happens.
As far as I know, it uses a message catalog, and the error number is an enum in 
CmpDDLCatErrorCodes.h.

I guess there is a mapping between these two files, but it is not exact.

Are there rules about this? Like feeding the DgSqlCode constructor with an 
enum, not a magic number, or something like this? And what should I take care of?



RE: Question about Trafodion MXOSRVR

2018-03-27 Thread Dave Birdsall
Hi,

This is an accident of history. The predecessor product was developed on a 
platform that did not support operator multi-threading.

Yes, it is certainly possible to rearchitect mxosrvr to make it multi-threaded. 
This can be tricky and must be done carefully. One must take into account any 
global variables the code uses and decide whether to leave them as globals (and 
co-ordinate access with mutex), make them thread-globals, or refactor them into 
some object that is not a global instead.
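
To make that concrete, a sketch (illustrative only, not actual mxosrvr code) of 
the three usual treatments for an existing global:

  // Illustrative only: three ways to treat an existing global when making
  // a single-threaded process multi-threaded.
  #include <mutex>

  int g_counter = 0;                  // (1) stays global: guard with a mutex
  std::mutex g_counterMutex;

  thread_local int tl_state = 0;      // (2) becomes a per-thread global

  struct SessionCtx { int state; };   // (3) refactored into an object that is
                                      //     passed around instead of global

  void bumpCounter()
  {
    std::lock_guard<std::mutex> guard(g_counterMutex);
    ++g_counter;                      // serialized access to the shared global
  }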

Dave

-Original Message-
From: Song, Hao-Lin  
Sent: Tuesday, March 27, 2018 8:20 AM
To: dev@trafodion.apache.org
Subject: Question about Trafodion MXOSRVR

Hi all


I found that mxosrvr cannot handle other network messages while a query is 
being processed, since network processing and data processing are in the same 
thread. I am confused about this.  Can we put them in different threads to make 
the program clearer and give it more potential?


Best,

Haolin


RE: JIRA tickets with Status = resolved AND Resolution != NULL

2018-03-26 Thread Dave Birdsall
Hi Pierre,

Some of these are fixed in R2.3. Do we want to close them before R2.3 is 
released?

Dave

-Original Message-
From: Pierre Smits  
Sent: Sunday, March 25, 2018 7:47 AM
To: dev@trafodion.apache.org
Subject: JIRA tickets with Status = resolved AND Resolution != NULL

Hi all,

While browsing Trafodion tickets I stumbled upon [1]. This overview lists about 
660 tickets, which should all go to Status = closed (IIC).

Do we still want to keep one, some or all open? If there are no objections I 
will move these tickets to Status = Closed at the end of the week.


[1]
https://issues.apache.org/jira/issues/?jql=project%20%3D%20TRAFODION%20AND%20status%20%3D%20Resolved%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC

Best regards,

Pierre Smits

V.P. Apache Trafodion


RE: Roadmap and 2.3

2018-03-25 Thread Dave Birdsall
Hi Pierre,

The master branch is presently 2.3. When we are ready to distinguish 2.4 
content from 2.3, we can create an explicit release2.3 branch and use master 
for 2.4. In past releases, we've done this when the current release was near 
the end of its development cycle. Of course going forward the development 
community can agree on some other treatment.

Dave

-Original Message-
From: Pierre Smits  
Sent: Sunday, March 25, 2018 7:40 AM
To: dev@trafodion.apache.org
Subject: Roadmap and 2.3

Hi all,

Now that we have 2.2.0 in play as our latest release, what needs to be done to 
make our upcoming release visible to our contributors (including our 
committers)?

I see the following in play regarding 2.3:

   - https://cwiki.apache.org/confluence/display/TRAFODION/Roadmap
   - https://issues.apache.org/jira/projects/TRAFODION/versions/12340979

But I don't see a 2.3 branch in
https://github.com/apache/trafodion/branches/all

Best regards,

Pierre Smits

V.P. Apache Trafodion


RE: ComDiagsArea and Error handling in Trafodion SQL - Guideline for Trafodion developers

2018-03-18 Thread Dave Birdsall
Hi all,

I added a discussion of SQL Diagnostics and ComDiagsArea, including the 
recommendations below, to the wiki. Please take a look and let me know what you 
think. Suggestions welcome. Or feel free to edit directly.

https://cwiki.apache.org/confluence/display/TRAFODION/SQL+Diagnostics+Architecture+and+Design

Dave

-Original Message-
From: Selva Govindarajan  
Sent: Tuesday, March 13, 2018 9:14 AM
To: dev@trafodion.apache.org
Subject: ComDiagsArea and Error handling in Trafodion SQL - Guideline for 
Trafodion developers 

ComDiagsArea is a class containing errors and warnings encountered during SQL 
compilation or execution. This object is passed around between SQL processes 
and finally displayed by the end user application.

ComDiagsArea is populated and handled in many ways.


  1.  The caller allocates the ComDiagsArea and passes it to the callee. The 
callee populates the diagnostics area when there are errors and warnings. The 
caller is responsible for deallocating it.
  2.  In case of process hop, the ComDiagsArea is shipped from the child 
process to the parent process via IPC mechanism.
  3.  ComDiagsArea is also embedded within a container object. The container 
object could be a CLI context (ContextCli) or CLI Statement (Statement) or the 
compiler context(CmpContext).
  4.  During compilation, the error/warning messages are mostly populated in 
the current CmpContext's ComDiagsArea. There can be more than one CmpContext; 
there should be at least 2, namely user and META.  User-given queries 
are compiled in the user CmpContext while the internal meta-data queries are 
compiled in the META CmpContext.
  5.  The errors/warnings info gathered in steps 1) and 2) is usually copied 
to the respective object of item 3), then passed around between the objects of 
item 3), like CmpContext to Statement or Context, before it can be obtained by 
the client applications.

Cons of the above methods:


  *   In step 1, a ComDiagsArea is always allocated, even when the statement 
would succeed without the need for a ComDiagsArea.
  *   In step 2, an empty ComDiagsArea is shipped from the child to the parent 
even when there are no errors or warnings. This resulted in a ComDiagsArea 
being allocated on the parent side and populated with empty error/warning 
conditions.
  *   Because of step 1 and step 2, the empty ComDiagsArea is copied to the 
objects of step 3.
  *   Prone to leaks in ComDiagsArea due to many unnecessary allocations and 
de-allocations.
  *   At least the sqlci application always used an expensive older way of 
obtaining the diagnostic error message, even when there were no 
errors/warnings.

I have created a PR https://github.com/apache/trafodion/pull/1470  to take care 
of these issues. The strategy now is


  1.  The caller never allocates the ComDiagsArea, but passes a reference to a 
pointer to the callee. The caller initializes the pointer to NULL.  When an 
error/warning is raised, the callee allocates the ComDiagsArea and populates 
it. Then the caller moves it to the objects of step 3 and destroys the 
ComDiagsArea allocated by the callee. (See the sketch after this list.)
  2.  In case of process hop, the ComDiagsArea is shipped from the child only 
when there is an error or warning via IPC mechanism.
  3.  While switching back from "META" CmpContext to user CmpContext, the 
errors/warnings from the META are copied to the user CmpContext.
  4.  Applications like sqlci and mxosrvr should attempt to obtain the 
diagnostics info based on the return code of the CLI call. When the return code 
is 100, get the number of error conditions via the less expensive call 
SQL_EXEC_GetDiagnosticsStmtInfo2. When this call returns 2 or more conditions, 
then there are warnings other than 100.
  5.  Use mark/rewind and other methods of ComDiagsArea to manipulate it rather 
than creating and copying it.
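
Here is a minimal sketch of the item 1 contract (somethingWentWrong and 
moveDiagsToOwningObject are hypothetical placeholders, and the error number is 
made up; ComDiagsArea::allocate, decrRefCount and the DgSqlCode stream idiom 
are the real entry points):

  // Sketch of the new contract: the caller passes a reference to a NULL
  // pointer; the callee allocates a ComDiagsArea only if it must report
  // something; the caller hands the area off and then releases it.
  NABoolean somethingWentWrong();                  // hypothetical failure check
  void moveDiagsToOwningObject(ComDiagsArea *);    // hypothetical hand-off

  void callee(ComDiagsArea *&diagsArea, CollHeap *heap)
  {
    if (somethingWentWrong())
    {
      if (diagsArea == NULL)
        diagsArea = ComDiagsArea::allocate(heap);  // allocate only on demand
      *diagsArea << DgSqlCode(-1234);              // hypothetical error number
    }
  }

  void caller(CollHeap *heap)
  {
    ComDiagsArea *diagsArea = NULL;                // no up-front allocation
    callee(diagsArea, heap);
    if (diagsArea != NULL)
    {
      moveDiagsToOwningObject(diagsArea);          // e.g. into CmpContext/Statement
      diagsArea->decrRefCount();                   // release the callee's allocation
    }
  }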

These changes have enabled us to create a ComDiagsArea only when there are 
errors or warnings in the primed-up state.  This should also help in fixing the 
leak in ComDiagsArea seen with Trafodion.

It is important that developers and reviewers do not let the earlier 
inefficient code creep back in. The purpose of this message is to make all SQL 
developers aware of the new convention and to enforce it whenever 
error-handling code is modified or added.

Selva


RE: Our past releases and the mirror issue

2018-03-18 Thread Dave Birdsall
Hi,

FYI: I've committed Steve's change and rebuilt the web site. I checked the 
download URLs for R2.2.0; they say ".../dyn/closer.lua/..." now.

Dave

-Original Message-----
From: Dave Birdsall <dave.birds...@esgyn.com> 
Sent: Sunday, March 18, 2018 1:40 PM
To: dev@trafodion.apache.org
Subject: RE: Our past releases and the mirror issue

Hi,

I will commit Steve's change today and rebuild the web site.

Dave

-Original Message-
From: Pierre Smits <pierresm...@apache.org>
Sent: Sunday, March 18, 2018 2:37 AM
To: dev@trafodion.apache.org
Subject: Re: Our past releases and the mirror issue

Hi Ming, all,

I feel this was not addressed/stressed enough while undergoing incubation.
So I brought that forward today on general@incubator.a.o as a lesson learned.

With it popping up with our 2.2.0 release, and the subsequent inability to 
announce the release to the wider audience because of it, we - in essence - lost 
the momentum to catch the wave to drive adoption. And with each day that passes 
before this is fixed, the wave decreases in strength.

Please let us have this in place a.s.a.p.


Best regards,

Pierre Smits

V.P. Apache Trafodion

On Sun, Mar 18, 2018 at 10:02 AM, Liu, Ming (Ming) <ming@esgyn.cn>
wrote:

> Hi, Pierre,
>
> As I recall, we only have R2.1.0 (incubating) that sent announcement 
> to apache announce mail list. That is in 2017, May, and it was sent by 
> you Pierre.
> IMHO, the issue is in the download link. Our initial link address was 
> correct, going through dyn/closer.lua. I misunderstood the 
> comments and changed it to a direct address, sorry about this!
> Steve already has a fix for this.
> The download link should use https, but the normal web page can be 
> http. I checked several other release announcements and some use 'http'
> without any concern from others. But we must use dyn/closer.lua in the 
> download link.
>
> Thanks,
> Ming
>
> -Original Message-
> From: Pierre Smits <pierresm...@apache.org>
> Sent: Sunday, March 18, 2018 4:27 PM
> To: dev@trafodion.apache.org
> Subject: Our past releases and the mirror issue
>
> Hi All,
>
> Did we, while undergoing incubation, experience a rejection of our 
> release announcement as we experience now while being a TLP?
>
>
>
> Best regards,
>
> Pierre Smits
>
> V.P. Apache Trafodion
>


RE: JIRA tickets & working with 'Fix Version'

2018-03-18 Thread Dave Birdsall
Hi,

My practice has been to leave the Fix Version field blank until my fix for a 
JIRA has been committed. At that point, when marking the JIRA resolved, I 
update the Fix Version field to the next release. (I will admit to having 
forgotten to do this a few times though.)

I suppose if one were personally committing to fix something in the next 
release, it might make sense to set Fix Version before committing a fix but 
that might be confusing if intents don't come to fruition.

I don't have strong feelings either way on the issue.

Dave

-Original Message-
From: Pierre Smits  
Sent: Sunday, March 18, 2018 3:16 AM
To: dev@trafodion.apache.org
Subject: JIRA tickets & working with 'Fix Version'

Hi all,

Currently we have the possibility to set 'any' as the value for the 'Fix 
Version' field. And we have - at this moment - about 10 open tickets there (see 
[1]), which is a mix of old (out of date?) and newer issues.

Having this may lead to confusion that is undesirable.

I propose that we change our M.O. on this one, into:

   1. on creation of a new ticket the 'Fix Version' is *not* set;
   2. we rename that version to *Upcoming release* and use it only while a
   ticket is in progress;
   3. as soon as a contributor closes a ticket, he/she changes the Fix
   Version from Upcoming release to the next actual named release
   (e.g. right now that is 2.3)

What are your thoughts? Please share.



Best regards,

Pierre Smits

V.P. Apache Trafodion


RE: Our past releases and the mirror issue

2018-03-18 Thread Dave Birdsall
Hi,

I will commit Steve's change today and rebuild the web site.

Dave

-Original Message-
From: Pierre Smits  
Sent: Sunday, March 18, 2018 2:37 AM
To: dev@trafodion.apache.org
Subject: Re: Our past releases and the mirror issue

Hi Ming, all,

I feel this was not addressed/stressed enough while undergoing incubation.
So I brought that forward today on general@incubator.a.o as a lesson learned.

With it popping up with our 2.2.0 release, and the subsequent inability to 
announce the release to the wider audience because of it, we - in essence - lost 
the momentum to catch the wave to drive adoption. And with each day that passes 
before this is fixed, the wave decreases in strength.

Please let us have this in place a.s.a.p.


Best regards,

Pierre Smits

V.P. Apache Trafodion

On Sun, Mar 18, 2018 at 10:02 AM, Liu, Ming (Ming) 
wrote:

> Hi, Pierre,
>
> As I recall, we only have R2.1.0 (incubating) that sent announcement 
> to apache announce mail list. That is in 2017, May, and it was sent by 
> you Pierre.
> IMHO, the issue is in the download link. Our initial link address was 
> correct, going through dyn/closer.lua. I misunderstood the 
> comments and changed it to a direct address, sorry about this!
> Steve already has a fix for this.
> The download link should use https, but the normal web page can be 
> http. I checked several other release announcements and some use 'http' 
> without any concern from others. But we must use dyn/closer.lua in the 
> download link.
>
> Thanks,
> Ming
>
> -Original Message-
> From: Pierre Smits 
> Sent: Sunday, March 18, 2018 4:27 PM
> To: dev@trafodion.apache.org
> Subject: Our past releases and the mirror issue
>
> Hi All,
>
> Did we, while undergoing incubation, experience a rejection of our 
> release announcement as we experience now while being a TLP?
>
>
>
> Best regards,
>
> Pierre Smits
>
> V.P. Apache Trafodion
>


RE: debug problem about sqlci

2018-03-18 Thread Dave Birdsall
Hi Kenny,

See also 
https://cwiki.apache.org/confluence/display/TRAFODION/Debugging+Tips#DebuggingTips-DebuggingMixedC++/JavaProcesses.

Dave

-Original Message-
From: Sandhya Sundaresan  
Sent: Friday, March 16, 2018 8:26 PM
To: dev@trafodion.apache.org
Subject: RE: debug problem about sqlci

Hi,
  You can avoid hitting this by setting this at the gdb prompt in the debug 
session, or better, by setting it in your ~/.gdbinit file itself so it's always 
there:

handle SIGSEGV pass nostop noprint

Sandhya

-Original Message-
From: Wang, Xiao-Zhong [mailto:xiaozhong.w...@esgyn.cn] 
Sent: Friday, March 16, 2018 6:21 PM
To: dev@trafodion.apache.org
Subject: RE: debug problem about sqlci

I debugged the disassembled code, and the error point is in hard-coded 
disassembly; it looks like it is not an error in our code.
(gdb) bt
#0  0x73087e83 in JNI_CreateJavaVM@plt () from 
/work/esgyn/core/sqf/export/lib64d/libexecutor.so
#1  0x039d in ?? ()
#2  0x733c78ff in JavaObjectInterface::createJVM (this=0x77e974e0, 
options=0x0)
at ../executor/JavaObjectInterface.cpp:290
#3  0x733c7a24 in JavaObjectInterface::initJVM (this=0x77e974e0, 
options=0x0)
at ../executor/JavaObjectInterface.cpp:317
#4  0x733c7f0b in JavaObjectInterface::init (this=0x77e974e0,
className=0x7388b9b0 "org/trafodion/sql/HBaseClient", 
javaClass=@0x7397edc8, JavaMethods=0xc09a40,
howManyMethods=52, methodsInitialized=false) at 
../executor/JavaObjectInterface.cpp:416
#5  0x733e68fc in HBaseClient_JNI::init (this=0x77e974e0) at 
../executor/HBaseClient_JNI.cpp:341
#6  0x71385b11 in ExpHbaseInterface_JNI::init (this=0x7fffe14273b8, 
hbs=0x0) at ../exp/ExpHbaseInterface.cpp:471
#7  0x7fffee924227 in CmpSeabaseDDL::allocEHI (this=0x7ffee980, 
connectParam1=0x7fffe1416ce8 "",
connectParam2=0x7fffe1416de8 "", raiseError=1, 
storageType=COM_STORAGE_HBASE)
at ../sqlcomp/CmpSeabaseDDLcommon.cpp:1322
#8  0x7fffee9253ef in CmpSeabaseDDL::validateVersions (this=0x7ffee980, 
defs=0x7fffe13f6870, inEHI=0x0,
mdMajorVersion=0x0, mdMinorVersion=0x0, mdUpdateVersion=0x0, 
sysSWMajorVersion=0x0, sysSWMinorVersion=0x0,
sysSWUpdVersion=0x0, mdSWMajorVersion=0x0, mdSWMinorVersion=0x0, 
mdSWUpdateVersion=0x0, hbaseErrNum=0x7ffeeb0c,
hbaseErrStr=0x7ffeeaa0) at ../sqlcomp/CmpSeabaseDDLcommon.cpp:1591
#9  0x7fffeeaa1bdb in NADefaults::readFromSQLTables (this=0x7fffe13f6870, 
overwriteIfNotYet=NADefaults::SET_BY_CQD,
errOrWarn=1) at ../sqlcomp/nadefaults.cpp:4947
#10 0x7fffeeaa1da3 in NADefaults::getValueWhileInitializing 
(this=0x7fffe13f6870, attrEnum=635)
at ../sqlcomp/nadefaults.cpp:4982
#11 0x7fffeeaa1ddb in NADefaults::getCatalogAndSchema (this=0x7fffe13f6870, 
cat=..., sch=...)
at ../sqlcomp/nadefaults.cpp:4991
#12 0x7fffecdccdb4 in SchemaDB::initPerStatement (this=0x7fffe13f65a8, 
lightweight=0)
at ../optimizer/SchemaDB.cpp:136
#13 0x7fffecdccc1d in SchemaDB::SchemaDB (this=0x7fffe13f65a8, 
rtd=0x7fffe13f6168) at ../optimizer/SchemaDB.cpp:106
#14 0x7427a791 in CmpContext::CmpContext (this=0x7fffe13ef090, f=1, 
h=0x77e96f38)
at ../arkcmp/CmpContext.cpp:207
#15 0x77507925 in arkcmp_main_entry () at ../common/arkcmp_proc.cpp:164
#16 0x74d1c5d4 in ContextCli::switchToCmpContext (this=0x77f052c0, 
cmpCntxtType=0)
at ../cli/Context.cpp:6204
#17 0x74d6b48c in CliStatement::prepare2 (this=0x77e8c4c0,
source=0x77e8a1d0 "SET TRANSACTION AUTOCOMMIT ON;", diagsArea=..., 
passed_gen_code=0x0, passed_gen_code_len=0,
charset=15, unpackTdbs=1, cliFlags=144) at ../cli/Statement.cpp:1556
#18 0x74d6b138 in CliStatement::prepare (this=0x77e8c4c0,
source=0x77e8a1d0 "SET TRANSACTION AUTOCOMMIT ON;", diagsArea=..., 
passed_gen_code=0x0, passed_gen_code_len=0,
charset=15, unpackTdbs=1, cliFlags=144) at ../cli/Statement.cpp:1448
#19 0x74cdac66 in SQLCLI_ExecDirect2(CliGlobals *, SQLSTMT_ID *, 
SQLDESC_ID *, Int32, SQLDESC_ID *, Lng32, typede\ f __va_list_tag __va_list_tag 
*, SQLCLI_PTR_PAIRS *) (cliGlobals=0xbecff0, statement_id=0xc07790, 
sql_source=0xc077d0,
prepFlags=0, input_descriptor=0x0, num_ptr_pairs=0, ap=0x7ffef4e0, 
ptr_pairs=0x0) at ../cli/Cli.cpp:3654
#20 0x74d87dd3 in SQL_EXEC_ExecDirect2 (statement_id=0xc07790, 
sql_source=0xc077d0, prep_flags=0,
input_descriptor=0x0, num_ptr_pairs=0) at ../cli/CliExtern.cpp:2348
#21 0x77782fed in SqlCmd::executeQuery (query=0x7779eb48 "SET 
TRANSACTION AUTOCOMMIT ON;",
sqlci_env=0xbd5480) at ../sqlci/SqlCmd.cpp:2476
#22 0x7776ca27 in SqlciEnv::autoCommit (this=0xbd5480) at 
../sqlci/SqlciEnv.cpp:417
#23 0x7776cdbb in SqlciEnv_prologue_to_run (sqlciEnv=0xbd5480) at 
../sqlci/SqlciEnv.cpp:524
#24 0x7776d00a in SqlciEnv::run (this=0xbd5480) at 
../sqlci/SqlciEnv.cpp:599
#25 

RE: JIRA couldn't assign

2018-03-07 Thread Dave Birdsall
Hi,

I've added you as an administrator on JIRA. User name haolin.song, e-mail 
403438...@qq.com. Let me know if this is correct.

Dave

From: Song, Hao-Lin <haolin.s...@esgyn.cn>
Sent: Tuesday, March 6, 2018 7:32 PM
To: Seth <jian....@esgyn.cn>; Dave Birdsall <dave.birds...@esgyn.com>
Subject: RE: JIRA couldn't assign

Hi

Sorry, my jira username is haolin.song .

Best,
宋昊霖 (Haolin Song)

Esgyn China
上海易鲸捷信息技术有限公司
Mobile +86 18351935898
Email haolin.s...@esgyn.cn
What doesn’t kill you makes you stronger

From: Jin, Jian (Seth)
Sent: March 7, 2018 11:29
To: Song, Hao-Lin <haolin.s...@esgyn.cn>; Dave 
Birdsall <dave.birds...@esgyn.com>
Subject: RE: JIRA couldn't assign

Hi HaoLin,

Could you provide your jira account and username for Dave?

Br,

Seth

From: Song, Hao-Lin
Sent: March 7, 2018 11:20
To: Dave Birdsall <dave.birds...@esgyn.com>
Cc: Jin, Jian (Seth) <jian@esgyn.cn>
Subject: JIRA couldn't assign

Hi

I have created some jira issues for Trafodion, but I couldn’t assign those 
issues to others. I think I am not a member of the Trafodion group. Could you 
put me in that group?

Best,
宋昊霖 (Haolin Song)

Esgyn China
上海易鲸捷信息技术有限公司
Mobile +86 18351935898
Email haolin.s...@esgyn.cn
What doesn’t kill you makes you stronger



RE: Upcoming 2.2.0 release

2018-03-04 Thread Dave Birdsall
Hi Pierre,

It is only the open tickets on [2] that you are concerned with?

I notice some closed tickets that appear on this Warnings list, but I don't 
know why they are listed there. Example: 
https://issues.apache.org/jira/browse/TRAFODION-2474, which is a JIRA that I 
worked on. Not sure what to do with it if anything.

Dave

-Original Message-
From: Pierre Smits [mailto:pierresm...@apache.org] 
Sent: Sunday, March 4, 2018 1:44 AM
To: dev@trafodion.apache.org
Subject: Upcoming 2.2.0 release

Hi all,

Now that we're on the brink of releasing 2.2.0 (vote is happening in thread 
[1]), can we have a look at the open tickets that are associated with this 
release (see [2]). And see whether we can do some housekeeping (e.g. by moving 
tickets to a next intended release, like 2.3.0, 2.4.0, etc.), so that our JIRA 
reflects reality and intentions a bit more.

[1] [VOTE] Apache Trafodion release 2.2.0 RC3 

[2] https://issues.apache.org/jira/projects/TRAFODION/versions/12338559

Best regards,

Pierre Smits

V.P. Apache Trafodion


RE: Make failure in latest Trafodion, uuid.h missing

2018-02-27 Thread Dave Birdsall
That worked! Thank you, Hans.

-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
Sent: Tuesday, February 27, 2018 12:10 PM
To: dev@trafodion.apache.org
Subject: RE: Make failure in latest Trafodion, uuid.h missing

Hi, just installed libuuid-devel on edev08. This should bring it in line with 
the other edev machines. The others do not have uuid-c++ and uuid-c++-devel 
installed, so I didn't do that on edev08, either.

Can you try again?

Hans

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Tuesday, February 27, 2018 12:00 PM
To: dev@trafodion.apache.org
Subject: RE: Make failure in latest Trafodion, uuid.h missing

Thank you. Will try that.

Should the dependencies on the wiki be updated? See 
https://cwiki.apache.org/confluence/display/TRAFODION/Create+Build+Environment

Dave

-Original Message-
From: Wang, Xiao-Zhong [mailto:xiaozhong.w...@esgyn.cn] 
Sent: Tuesday, February 27, 2018 11:49 AM
To: dev@trafodion.apache.org
Subject: RE: Make failure in latest Trafodion, uuid.h missing

I think you need to install uuid.
You can execute the following command:
yum install libuuid.x86_64 uuid.x86_64 uuid-c++.x86_64 uuid-c++-devel.x86_64

北京易鲸捷信息技术有限公司
Address: Room 1302, Tower A, Huibin Building, No. 8 Beichen East Road, Chaoyang District, Beijing
Mobile: 18513493336
Email: xiaozhong.w...@esgyn.cn

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: February 28, 2018 3:39
To: dev@trafodion.apache.org
Subject: Make failure in latest Trafodion, uuid.h missing

Hi,

I just brought one of my Trafodion instances up-to-date and did a clean build. 
The build failed with the following errors:

[birdsall@edev08 trafodion]$ grep ": err" make.out
../exp/exp_function.cpp:53:23: error: uuid/uuid.h: No such file or directory
  ##(SQL)
../exp/exp_function.cpp:6812: error: 'uuid_t' was not declared in this scope
 ##(SQL)
../exp/exp_function.cpp:6812: error: expected ';' before 'uu'  ##(SQL)
../exp/exp_function.cpp:6813: error: 'uu' was not declared in this scope ##(SQL)
../exp/exp_function.cpp:6813: error: 'uuid_generate' was not declared in this 
scope ##(SQL)
../exp/exp_function.cpp:6814: error: 'uuid_unparse' was not declared in this 
scope  ##(SQL)
cc1plus: error: unrecognized command line option "-Wno-conversion-null" 
  ##(SQL)
[birdsall@edev08 trafodion]$

Looking at exp/exp_function.cpp, this file now contains the following statement:

#include <uuid/uuid.h>

I looked around in the /usr/include directory on my workstation for the file 
uuid.h. I found it in /usr/include/linux. So I tried changing the include to 
the following:

#include <linux/uuid.h>

Now I get a different set of build errors:

[birdsall@edev08 trafodion]$ grep ": err" make.out
../exp/exp_function.cpp:6812: error: 'uuid_t' was not declared in this scope
 ##(SQL)
../exp/exp_function.cpp:6812: error: expected ';' before 'uu'  ##(SQL)
../exp/exp_function.cpp:6813: error: 'uu' was not declared in this scope ##(SQL)
../exp/exp_function.cpp:6813: error: 'uuid_generate' was not declared in this 
scope ##(SQL)
../exp/exp_function.cpp:6814: error: 'uuid_unparse' was not declared in this 
scope  ##(SQL)
cc1plus: error: unrecognized command line option "-Wno-conversion-null" 
  ##(SQL)
[birdsall@edev08 trafodion]$

So, I am guessing something needs to be updated or reconfigured in the 
Trafodion build environment. Or at least there is a new dependency that needs 
to be taken into account.

Can anyone suggest what must be done to get a clean build?

Thanks,

Dave

PS - The Jenkins machines seem to have the right configuration as builds aren't 
failing there...




Make failure in latest Trafodion, uuid.h missing

2018-02-27 Thread Dave Birdsall
Hi,

I just brought one of my Trafodion instances up-to-date and did a clean build. 
The build failed with the following errors:

[birdsall@edev08 trafodion]$ grep ": err" make.out
../exp/exp_function.cpp:53:23: error: uuid/uuid.h: No such file or directory
  ##(SQL)
../exp/exp_function.cpp:6812: error: 'uuid_t' was not declared in this scope
 ##(SQL)
../exp/exp_function.cpp:6812: error: expected ';' before 'uu'  ##(SQL)
../exp/exp_function.cpp:6813: error: 'uu' was not declared in this scope ##(SQL)
../exp/exp_function.cpp:6813: error: 'uuid_generate' was not declared in this 
scope ##(SQL)
../exp/exp_function.cpp:6814: error: 'uuid_unparse' was not declared in this 
scope  ##(SQL)
cc1plus: error: unrecognized command line option "-Wno-conversion-null" 
  ##(SQL)
[birdsall@edev08 trafodion]$

Looking at exp/exp_function.cpp, this file now contains the following statement:

#include <uuid/uuid.h>

I looked around in the /usr/include directory on my workstation for the file 
uuid.h. I found it in /usr/include/linux. So I tried changing the include to 
the following:

#include <linux/uuid.h>

Now I get a different set of build errors:

[birdsall@edev08 trafodion]$ grep ": err" make.out
../exp/exp_function.cpp:6812: error: 'uuid_t' was not declared in this scope
 ##(SQL)
../exp/exp_function.cpp:6812: error: expected ';' before 'uu'  ##(SQL)
../exp/exp_function.cpp:6813: error: 'uu' was not declared in this scope ##(SQL)
../exp/exp_function.cpp:6813: error: 'uuid_generate' was not declared in this 
scope ##(SQL)
../exp/exp_function.cpp:6814: error: 'uuid_unparse' was not declared in this 
scope  ##(SQL)
cc1plus: error: unrecognized command line option "-Wno-conversion-null" 
  ##(SQL)
[birdsall@edev08 trafodion]$

So, I am guessing something needs to be updated or reconfigured in the 
Trafodion build environment. Or at least there is a new dependency that needs 
to be taken into account.

Can anyone suggest what must be done to get a clean build?

Thanks,

Dave

PS - The Jenkins machines seem to have the right configuration as builds aren't 
failing there...




RE: Release 2.2

2018-02-15 Thread Dave Birdsall
Thank you Venkat, for stepping up on this issue (and finding another one).

-Original Message-
From: Venkat Muthuswamy [mailto:venkat.muthusw...@esgyn.com] 
Sent: Thursday, February 15, 2018 5:06 PM
To: dev@trafodion.apache.org
Subject: Release 2.2

I found another issue with the install. If I select LDAP authentication, the 
installer accepts LDAP parameters and the install completes successfully. 

But the authentication flag is not really enabled. I opened a JIRA, 
TRAFODION-2962, for this and will deliver a fix.

Venkat


RE: volunteer needed for JIRA 2395

2018-02-14 Thread Dave Birdsall
Seems to me we should fix it.

Easy for me to say though.

I won't volunteer as it is outside my area of expertise.

-Original Message-
From: Roberta Marton [mailto:roberta.mar...@esgyn.com] 
Sent: Wednesday, February 14, 2018 1:44 PM
To: dev@trafodion.apache.org
Subject: RE: volunteer needed for JIRA 2395

Does anyone have an opinion on this?  When I tested Trafodion 2.2 with Kerberos 
enabled, the python installation procedure failed. 
Since previous releases allow Trafodion to run on Kerberos-enabled systems, 
this is a regression.
Does someone else want to verify installation of Trafodion on a Kerberos-enabled 
system, in case it was some procedural error on my part?
Is this something we can document instead of fixing?  What this means is that 
you won't be able to install Trafodion on a Kerberos-enabled system.
Don't know if anyone is running Trafodion on a Kerberos-enabled system.  If they 
are, would they be able to install release 2.2?  Probably should test this 
scenario also.

  Roberta

-Original Message-
From: Liu, Ming (Ming) [mailto:ming@esgyn.cn] 
Sent: Saturday, February 10, 2018 2:11 AM
To: dev@trafodion.apache.org
Subject: volunteer needed for JIRA 2395

Hi, all,

We need a volunteer to work on 
https://issues.apache.org/jira/browse/TRAFODION-2935
It is an issue found when we tried to release R2.2. So it will be great to fix 
it asap, so I can create RC3 for release R2.2.
I hope there is a volunteer to work on that. It is in the python installer and 
needs some background knowledge in security.

Or do we feel we can start RC3 with this as a known issue?

thanks,
Ming


Travel Assistance to ApacheCon North America 2018

2018-02-14 Thread Dave Birdsall
FYI

From: Gavin McDonald [mailto:ga...@16degrees.com.au]
Sent: Wednesday, February 14, 2018 1:34 AM
To: travel-assista...@apache.org
Subject: Travel Assistance applications open. Please inform your communities

Hello PMCs.

Please could you forward on the below email to your dev and user lists.

Thanks

Gav…

—
The Travel Assistance Committee (TAC) are pleased to announce that travel 
assistance applications for ApacheCon NA 2018 are now open!

We will be supporting ApacheCon NA Montreal, Canada on 24th - 29th September 
2018

 TAC exists to help those that would like to attend ApacheCon events, but are 
unable to do so for financial reasons.
For more info on this years applications and qualifying criteria, please visit 
the TAC website at < http://www.apache.org/travel/ >. Applications are now open 
and will close 1st May.

Important: Applications close on May 1st, 2018. Applicants have until the 
closing date above to submit their applications (which should contain as much 
supporting material as required to efficiently and accurately process their 
request), this will enable TAC to announce successful awards shortly afterwards.

As usual, TAC expects to deal with a range of applications from a diverse range 
of backgrounds. We therefore encourage (as always) anyone thinking about 
sending in an application to do so ASAP.
We look forward to greeting many of you in Montreal.

Kind Regards,
Gavin - (On behalf of the Travel Assistance Committee)
—




RE: [ANNOUNCEMENT] liu Yu and Sean Broeder join the committer pool of the Apache Trafodion Project

2018-02-02 Thread Dave Birdsall
Congratulations to Sean Broeder and Liu Yu for becoming committers!

-Original Message-
From: Carol Pearson [mailto:carol.pearson...@gmail.com] 
Sent: Thursday, February 1, 2018 8:34 AM
To: dev@trafodion.apache.org
Subject: Re: [ANNOUNCEMENT] liu Yu and Sean Broeder join the committer pool of 
the Apache Trafodion Project

Congratulations Sean and Yu!

-Carol P.

---
Email:carol.pearson...@gmail.com
Twitter:  @CarolP222
---

On Thu, Feb 1, 2018 at 12:23 AM, Pierre Smits 
wrote:

> The Apache Trafodion Project is pleased to be able to announce that 
> contributors Liu Yu and Sean Broeder have chosen to accept the 
> invitation to receive commit privileges.
>
> With commit privileges, a contributor of the Apache Trafodion Project 
> can do more to improve the works and further the project.
>
> Apache Trafodion
> Pierre Smits
>
> Vice President
>


RE: Info session

2018-01-29 Thread Dave Birdsall
Hi,

I'd look at buffer[379] and see what's there. That seems to be a UTF-8 
character that can't be mapped to UCS2.
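
If it helps, a quick sketch (illustrative only) for dumping the bytes around 
the failing offset so you can see the exact sequence:

  // Hypothetical debugging aid: hex-dump the bytes around offset 379 of
  // the statement buffer to see which UTF-8 sequence fails to map.
  #include <stdio.h>

  static void dumpAround(const char *buffer, int offset, int radius)
  {
    for (int i = offset - radius; i <= offset + radius; i++)
      if (i >= 0)
        printf("byte %4d: 0x%02x\n", i, (unsigned char)buffer[i]);
  }

A byte >= 0x80 that is not part of a well-formed UTF-8 sequence would explain 
the ERROR[2109].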

Dave

From: ?? ? [mailto:alex_peng1...@hotmail.com]
Sent: Sunday, January 28, 2018 9:18 PM
To: d...@trafodion.incubator.apache.org
Subject: Info session


Did anyone ever encounter the following error:



*** ERROR[2109] Invalid Character error converting SQL statement from character 
set UTF8 to character set UCS2 (character position 379, byte offset 379). 
[2018-01-26 10:11:47]



As we checked, LANG=zh_CN.UTF-8, and the authentication details are as below:

[inline screenshot of authentication details]

I found the source code for the query in the mxosrvr, in 
SrvrConnect::isInfoSession():
SELECT [first 1]"
"'%s' as \"SESSION_ID\","
"'%s' as \"SERVER_PROCESS_NAME\","
"'%s' as \"SERVER_PROCESS_ID\","
"'%s' as \"SERVER_HOST\","
"'%s' as \"SERVER_PORT\","
"'%s' as \"MAPPED_SLA\","
"'%s' as \"MAPPED_CONNECT_PROFILE\","
"'%s' as \"MAPPED_DISCONNECT_PROFILE\","
"'%d' as \"CONNECTED_INTERVAL_SEC\","
"'%s' as \"CONNECT_TIME\","
"'%s' as \"DATABASE_USER_NAME\","
"'%s' as \"USER_NAME\","
"'%s' as \"ROLE_NAME\","
"'%s' as \"APP_NAME\","
"'%s' as \"TENANT_NAME\""

"FROM (values(1)) X(A);

then sprintf is used to fill in the parameters:
sprintf (buffer, pattern,
  srvrGlobal->sessionId,
  serverProcessName.c_str(),
  serverProcessId.c_str(),
  serverHost.c_str(),
  serverPort.c_str(),
  sla.c_str(),
  connectProfile.c_str(),
  disconnectProfile.c_str(),
  (int)(time(NULL) - connectedTimestamp),
  connecttime,
  srvrGlobal->QSDBUserName,
  srvrGlobal->QSUserName,
  srvrGlobal->QSRoleName,
  appName.c_str(),
  srvrGlobal->tenantName);

I guess maybe some value's encoding was not properly parsed, which caused the 
error. Since I could not reproduce it locally yet, I am still checking.


Any clue will be appreciated.



Nice Day

Alex


RE: [VOTE] Apache Trafodion release 2.2.0 RC 2

2018-01-29 Thread Dave Birdsall
Is there a follow-up JIRA for the security item?

-Original Message-
From: Roberta Marton [mailto:roberta.mar...@esgyn.com] 
Sent: Sunday, January 28, 2018 6:55 PM
To: dev@trafodion.apache.org
Subject: RE: [VOTE] Apache Trafodion release 2.2.0 RC 2

+0

Running centos 6.7 and Cloudera 5.7.6 with Kerberos enabled.
I installed the source files for Trafodion release 2.2.
Successfully build binaries.

However, when I ran the python installer, it failed:

Host [rm1.novalocal]: Script [hdfs_cmds.py] 
. [ FAIL ]


Failed to run command  su - hdfs -c '/usr/bin/hdfs dfs -chgrp 18/01/28 21:15:25 
WARN security.UserGroupInformation: PriviledgedActionException as:root 
(auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed 
[Caused by GSSException: No valid credentials provided (Mechanism level: Failed 
to find any Kerberos tgt)]
18/01/28 21:15:25 WARN ipc.Client: Exception encountered while connecting to 
the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)

There is a valid Kerberos ticket:

[centos@rm1 distribution]$ sudo su hdfs
bash-4.1$ klist
Ticket cache: FILE:/tmp/krb5cc_494
Default principal: hdfs/rm1.novalo...@trafkdc.com

Valid starting ExpiresService principal
01/28/18 21:11:10  01/29/18 21:11:10  krbtgt/trafkdc@trafkdc.com
renew until 02/02/18 21:11:10

I was able to manually run the hdfs requests when connected as the hdfs user. 

After manually running the HDFS steps, the installation step completed and the 
trafodion database was initialized.  However, neither authentication nor 
authorization was enabled.  I was able to manually enable both and successfully 
run some SQL queries.

I am concerned that when security features are enabled, things do not work 
correctly.

  Roberta

-Original Message-
From: Liu, Ming (Ming) [mailto:ming@esgyn.cn]
Sent: Friday, January 26, 2018 7:09 PM
To: dev@trafodion.apache.org
Subject: RE: [VOTE] Apache Trafodion release 2.2.0 RC 2

Hi, Steve,

The policy page does require the year to reflect the distribution of the 
CURRENT and past versions of the product, so we should update it. But does that 
mean we have to update all headers in most source files, or only the NOTICE 
file?

And I think this is NOT that strict; I checked a few other Apache projects:

Kylin 2.2.0: NOTICE says 2014-2016, but Kylin 2.2.0 was released in Nov 2017
ZooKeeper 3.4.8: NOTICE says 2009-2015, but ZooKeeper 3.4.8 was released in Feb 2016
Drill 1.12: NOTICE says 2013-2014, but Drill 1.12 was released in Dec 2017
HBase is very good and updates NOTICE for most releases, but 2.0.0-beta-1 was 
released in Jan 2018 and the NOTICE year was not updated, still 2007-2017
Kudu 1.2.0: NOTICE says 2016, but it was released in Jan 2017
Hive 1.2.2: NOTICE says 2008-2015, but 1.2.2 was released in April 2017
...

Hadoop is very strict about this, not only updating the year for each release, 
but also listing all third-party license headers in its NOTICE file.

So in sum, I think it will be good for Trafodion to strictly follow the rule in 
the next release, but it is not strictly enforced for now. Could you consider 
changing your vote?

Thanks,
Ming

 

-Original Message-
From: Steve Varnau [mailto:steve.var...@esgyn.com]
Sent: Friday, January 26, 2018 8:03 AM
To: dev@trafodion.apache.org
Subject: RE: [VOTE] Apache Trafodion release 2.2.0 RC 2

I'm not certain, but the NOTICE file, which contains the copyright for the 
entire release, is wrong.

This is the policy: http://apache.org/legal/src-headers.html#notice 

I don't see any guidance about how strictly the dates must be correct.

--Steve

> -Original Message-
> From: Hans Zeller [mailto:hans.zel...@esgyn.com]
> Sent: Thursday, January 25, 2018 2:49 PM
> To: dev@trafodion.apache.org
> Subject: RE: [VOTE] Apache Trafodion release 2.2.0 RC 2
> 
> Hi Steve, does this really justify an entire new round? All the code 
> (ok, maybe only 99.98 %) for this release was written in 2017. The 
> fact that we voted in 2018, does that really justify an update to the 
> copyright year?
> 
> -Original Message-
> From: Steve Varnau [mailto:steve.var...@esgyn.com]
> Sent: Thursday, January 25, 2018 2:42 PM
> To: dev@trafodion.apache.org
> Subject: RE: [VOTE] Apache Trafodion release 2.2.0 RC 2
> 
> It occurred to me that I missed something when I reviewed all those 
> NOTICE files (that are all the same).
> 
> The copyright line should be updated to include 2018.
> 
>   Copyright 2015-2017 The Apache Software Foundation
> 
> Not certain if that is a legal showstopper, but it looks like an 
> oversight we should fix while we have the chance.
> 
> Changing my vote to -1.
> 
> --Steve
> 
> > -Original Message-
> > From: Steve Varnau [mailto:steve.var...@esgyn.com]
> > Sent: Thursday, January 25, 2018 12:39 PM
> > To: 

Re: how to read debug info after set SUBQUERY_UNNESTING to 'debug'

2018-01-28 Thread Dave Birdsall
Hi Ming,


I don't know the particulars of that CQD.


However, have you used the DISPLAY utility? That shows the shape of the query 
tree after each pass of the compiler. So you will be able to see what pass 
transforms a subquery into a join.
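
If I remember right, you invoke it from sqlci by prefixing the query with the 
keyword, e.g. "display select a from t where a in (select b from s);" -- but 
double-check the syntax, it has been a while since I used it.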


Dave


From: Liu, Ming (Ming) 
Sent: Sunday, January 28, 2018 12:58:22 AM
To: dev@trafodion.apache.org
Subject: how to read debug info after set SUBQUERY_UNNESTING to 'debug'

hi, all,

I am trying to understand subquery optimization in Trafodion better. So I 
set the SUBQUERY_UNNESTING CQD to 'DEBUG' and tried a few queries, but I don't 
know how to see the output of that DEBUG setting. I saw that the code will 
generate some WARNs into the diagsArea, but I am not seeing them.
If someone knows how to use this debug facility, please help me.

thanks,
Ming


RE: mxosrvr debugging

2018-01-24 Thread Dave Birdsall
Hi,

Never mind... I found it. See 
http://trafodion.apache.org/docs/dcs_reference/index.html#_configuration_files, 
the section "Servers". Just edit the conf/servers file and you're all set.

Dave

-Original Message-----
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Wednesday, January 24, 2018 1:12 PM
To: dev@trafodion.apache.org
Subject: mxosrvr debugging

Hi,

I'm doing some debugging on the executor side of a JDBC T4 program. A JDBC T4 
program of course connects to an mxosrvr process; the executor runs inside of 
that. On a workstation local_hadoop environment, there are four of these and 
you don't know which one you're going to get in advance. So you have to put all 
four of them into debug.

That's a hassle, of course.

I remember there was mention of a knob somewhere that controlled the number of 
mxosrvrs in your workstation instance. I'd like to set that to one.

Can anyone remind me of what the knob was?

Thanks,

Dave


mxosrvr debugging

2018-01-24 Thread Dave Birdsall
Hi,

I'm doing some debugging on the executor side of a JDBC T4 program. A JDBC T4 
program of course connects to an mxosrvr process; the executor runs inside of 
that. On a workstation local_hadoop environment, there are four of these and 
you don't know which one you're going to get in advance. So you have to put all 
four of them into debug.

That's a hassle, of course.

I remember there was mention of a knob somewhere that controlled the number of 
mxosrvrs in your workstation instance. I'd like to set that to one.

Can anyone remind me of what the knob was?

Thanks,

Dave


Rowsets in Trafodion

2018-01-23 Thread Dave Birdsall
Hi,

Is there a way to use rowsets in Trafci? (Regression test executor/TEST015 
suggests that sqlci does not support rowsets.)

Or must one write JDBC code to use rowsets?

Are there examples of JDBC code that use rowsets for input? (I did find an 
output example, in 
trafodion/core/conn/jdbc_type2/samples/JdbcRowSetSample.java).

Thanks,

Dave


FW: Google Summer of Code 2018 is coming

2018-01-22 Thread Dave Birdsall
Hi Trafodion developers,

Can we think of some ideas that might be useful for a summer student to do? If 
so, see below...

Dave

-Original Message-
From: Ulrich Stärk [mailto:u...@apache.org] 
Sent: Sunday, January 21, 2018 1:23 PM
To: ment...@community.apache.org
Subject: Google Summer of Code 2018 is coming

Hello PMCs (incubator Mentors, please forward this email to your podlings),

Google Summer of Code [1] is a program sponsored by Google allowing students to 
spend their summer working on open source software. Students will receive 
stipends for developing open source software full-time for three months. 
Projects will provide mentoring and project ideas, and in return have the 
chance to get new code developed and - most importantly - to identify and bring 
in new committers.

The ASF will apply as a participating organization meaning individual projects 
don't have to apply separately.

If you want to participate with your project we ask you to do the following 
things as soon as possible but please no later than 2018-01-30:

1. understand what it means to be a mentor [2].

2. record your project ideas.

Just create issues in JIRA, label them with gsoc2018, and they will show up at 
[3]. Please be as specific as possible when describing your idea. Include the 
programming language, the tools and skills required, but try not to scare 
potential students away. They are supposed to learn what's required before the 
program starts.

Use labels, e.g. for the programming language (java, c, c++, erlang, python, 
brainfuck, ...) or technology area (cloud, xml, web, foo, bar, ...) and record 
them at [5].

Please use the COMDEV JIRA project for recording your ideas if your project 
doesn't use JIRA (e.g.
httpd, ooo). Contact d...@community.apache.org if you need assistance.

[4] contains some additional information (will be updated for 2017 shortly).

3. subscribe to ment...@community.apache.org (restricted to potential mentors, 
meant to be used as a private list - general discussions on the public 
d...@community.apache.org list as much as possible please). Use a recognized 
address when subscribing (@apache.org or one of your alias addresses on record).

Note that the ASF isn't accepted as a participating organization yet, 
nevertheless you *have to* start recording your ideas now or we will not get 
accepted.

Over the years we were able to complete hundreds of projects successfully. Some 
of our prior students are active contributors now! Let's make this year a 
success again!

Cheers,

Uli

P.S.: Except for the private parts (label spreadsheet mostly), this email is 
free to be shared publicly if you want to.

[1] https://summerofcode.withgoogle.com/
[2] http://community.apache.org/guide-to-being-a-mentor.html
[3] http://s.apache.org/gsoc2018ideas
[4] http://community.apache.org/gsoc.html
[5] http://s.apache.org/gsoclabels



RE: purgedata can not be put in a transaction?

2018-01-14 Thread Dave Birdsall
That's cool, Sean! Thanks for the suggestion... 

-Original Message-
From: Sean Broeder [mailto:sean.broe...@esgyn.com] 
Sent: Sunday, January 14, 2018 12:17 PM
To: dev@trafodion.apache.org
Subject: RE: purgedata can not be put in a transaction?

I think we could treat it like a drop.  There we disable the table during 
phase0 of the transaction and allow any other phase0 operations to continue.  
There may be both DML and other DDL in the transaction.  But we don't actually 
truncate the table until phase2, when all participants have voted yes.  If any 
participant votes no, we can roll back by simply enabling the table again.

Technically, I think it's very doable.  I would be interested to hear if others 
think it's important.

Regards,
Sean

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Sunday, January 14, 2018 12:08 PM
To: dev@trafodion.apache.org
Subject: RE: purgedata can not be put in a transaction?

Hi,

DROP TABLE and CREATE TABLE have been transactional in Trafodion's predecessor 
products since the beginning. As have most DDL operations.

In the beginning, when Oracle was born, it was not well-understood how to make 
DDL transactional. At that time there was no SQL standard that specified DDL 
transactional behavior either. That has since changed -- such behavior is 
described in the SQL standard as an optional feature.

PURGEDATA is a bit different. PURGEDATA is not a DDL operation; rather, it is 
the same as "DELETE FROM T". If you want a transactional version of 
PURGEDATA, you can get it by using DELETE instead. But the implementation is 
quite inefficient: All the rows from T will be written to the audit log to be 
used in case of rollback. PURGEDATA, being non-transactional, just does 
truncates on the underlying HBase table instead; nothing goes into the audit 
log.

It is conceivable that we could make PURGEDATA transactional, but we'd have to 
under the covers map it to something like "DROP TABLE T" + "CREATE TABLE T". 
And there are complications such as preserving any dependent objects and 
privileges on top of that. DDL operations are expensive in Trafodion so it 
might not turn out to be particularly efficient.

Dave

-Original Message-
From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn] 
Sent: Friday, January 12, 2018 8:04 PM
To: dev@trafodion.apache.org
Subject: RE: purgedata can not be put in a transaction?

Thanks Anoop.

DROP TABLE and CREATE TABLE are both DDL. As far as I know, such DDL is 
non-transactional in RDBMSs such as Oracle.
So I am curious that DROP TABLE can be rolled back in Trafodion.

By the way, is it possible to make PURGEDATA transactional?

Best regards,
Yuan

-Original Message-
From: Anoop Sharma [mailto:anoop.sha...@esgyn.com] 
Sent: Saturday, January 13, 2018 12:24 AM
To: dev@trafodion.apache.org
Subject: RE: purgedata can not be put in a transaction?

Most of the DDL operations are transactional and supported by traf transaction 
manager (DTM) layer. This is a traf feature that enables DDL operations to be 
handled in an atomic transactional way.

It means that one can do (for ex):
  begin work;
  drop table t;
  create table t1 (a..);
  rollback work;
and get to the same state that existed before the begin work.

Is there a reason or some confusion on 'drop table' being a transactional 
operation?

One can set autocommit to ON in a session by doing:
  set transaction autocommit ON;

This is automatically set to on from sqlci and trafci.
There may be a conn property to set it to ON as well.

anoop

-Original Message-
From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn]
Sent: Thursday, January 11, 2018 10:20 PM
To: dev@trafodion.apache.org
Subject: RE: purgedata can not be put in a transaction?

Hi Anoop, 

Thanks for your feedback. It is strange that 'drop table'  is a transactional 
operation.

When using "purgedata" in a java application flow,  we saw below error,

ERROR[20124] This DDL operation cannot be performed if AUTOCOMMIT is OFF.

Can we set AUTOCOMMIT to ON in trafodion? How do we set it? 


Best regards,
Yuan

-Original Message-
From: Anoop Sharma [mailto:anoop.sha...@esgyn.com]
Sent: Friday, January 12, 2018 12:44 PM
To: dev@trafodion.apache.org
Subject: RE: purgedata can not be put in a transaction?

currently, purgedata(or truncate) is a non-transactional operation.
It is performed by truncating the underlying traf/hbase object.
That truncate operation cannot be undone or rolled back as it is not protected 
by traf transactional layer (dtm).

'drop table' on the other hand, is a transactional operation.
One can 'rollback' a 'drop table' and get the table back.

anoop


-Original Message-
From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn]
Sent: Thursday, January 11, 2018 8:15 PM
To: dev@trafodion.apache.org
Subject: purgedata can not be put in a transaction?

Hi Trafodioneers,

I found that purgedata cannot be put in a transaction.

RE: purgedata can not be put in a transaction?

2018-01-14 Thread Dave Birdsall
Hi,

DROP TABLE and CREATE TABLE have been transactional in Trafodion's predecessor 
products since the beginning. As have most DDL operations.

In the beginning, when Oracle was born, it was not well-understood how to make 
DDL transactional. At that time there was no SQL standard that specified DDL 
transactional behavior either. That has since changed -- such behavior is 
described in the SQL standard as an optional feature.

PURGEDATA is a bit different. PURGEDATA is not a DDL operation; rather, it is 
the same as "DELETE FROM T". If you want a transactional version of 
PURGEDATA, you can get it by using DELETE instead. But the implementation is 
quite inefficient: All the rows from T will be written to the audit log to be 
used in case of rollback. PURGEDATA, being non-transactional, just does 
truncates on the underlying HBase table instead; nothing goes into the audit 
log.

It is conceivable that we could make PURGEDATA transactional, but we'd have to 
under the covers map it to something like "DROP TABLE T" + "CREATE TABLE T". 
And there are complications such as preserving any dependent objects and 
privileges on top of that. DDL operations are expensive in Trafodion so it 
might not turn out to be particularly efficient.

Dave

-Original Message-
From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn] 
Sent: Friday, January 12, 2018 8:04 PM
To: dev@trafodion.apache.org
Subject: RE: purgedata can not be put in a transaction?

Thanks Anoop.

DROP TABLE and CREATE TABLE are both DDL. As far as I know, DDL is 
non-transactional in RDBMSs such as Oracle.
So I am curious that DROP TABLE can be rolled back in Trafodion.

By the way, is it possible to make PURGEDATA transactional?

Best regards,
Yuan

-Original Message-
From: Anoop Sharma [mailto:anoop.sha...@esgyn.com] 
Sent: Saturday, January 13, 2018 12:24 AM
To: dev@trafodion.apache.org
Subject: RE: purgedata can not be put in a transaction?

Most of the DDL operations are transactional and supported by traf transaction 
manager (DTM) layer. This is a traf feature that enables DDL operations to be 
handled in an atomic transactional way.

It means that one can do (for ex):
  begin work;
  drop table t;
  create table t1 (a..)
  rollback work;
and get to the same state that existed before the begin work.

Is there a reason or some confusion on 'drop table' being a transactional 
operation?

One can set autocommit to ON in a session by doing:
  set transaction autocommit ON;

This is automatically set to on from sqlci and trafci.
There may be a conn property to set it to ON as well.

anoop

-Original Message-
From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn]
Sent: Thursday, January 11, 2018 10:20 PM
To: dev@trafodion.apache.org
Subject: RE: purgedata can not be put in a transaction?

Hi Anoop, 

Thanks for your feedback. It is strange that 'drop table' is a transactional 
operation.

When using "purgedata" in a Java application flow, we saw the error below:

ERROR[20124] This DDL operation cannot be performed if AUTOCOMMIT is OFF.

Can we set AUTOCOMMIT to ON in trafodion? How to set it? 


Best regards,
Yuan

-Original Message-
From: Anoop Sharma [mailto:anoop.sha...@esgyn.com]
Sent: Friday, January 12, 2018 12:44 PM
To: dev@trafodion.apache.org
Subject: RE: purgedata can not be put in a transaction?

currently, purgedata(or truncate) is a non-transactional operation.
It is performed by truncating the underlying traf/hbase object.
That truncate operation cannot be undone or rolled back as it is not protected 
by traf transactional layer (dtm).

'drop table' on the other hand, is a transactional operation.
One can 'rollback' a 'drop table' and get the table back.

anoop


-Original Message-
From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn]
Sent: Thursday, January 11, 2018 8:15 PM
To: dev@trafodion.apache.org
Subject: purgedata can not be put in a transaction?

Hi Trafodioneers,

I found that purgedata cannot be put in a transaction. If we run the 
statements below after beginning a transaction, error 20123 occurs.

>> begin;
>>purgedata test1;

*** ERROR[20123] A user-defined transaction has been started.

This DDL operation cannot be performed.

However, "drop" and "delete with no rollback form" can run normally. From my 
side, I think both drop and purgedata are non-transactional, but why they 
behave different?
Is there any that we can workaournd this error?


Best regards,
Yuan



RE: How to fix this InvalidVarArgs problem, remove parameters or add print format?

2018-01-09 Thread Dave Birdsall
This looks like dead code to me. The NEO.HP_DEFINITION_SCHEMA stuff is from the 
predecessor Neoview product.

Perhaps this should simply be removed?

-Original Message-
From: Wang, Xiao-Zhong [mailto:xiaozhong.w...@esgyn.cn] 
Sent: Tuesday, January 9, 2018 12:15 AM
To: dev@trafodion.apache.org
Subject: How to fix this InvalidVarArgs problem, remove parameters or add print 
format?

file="/home/wangxz/code/trafodion/core/conn/odbc/src/odbc/nsksrvr/SrvrConnect.cpp"
 line="6289"
id="logic" subid="InvalidVarArgs" severity="error"
msg="The count of parameters mismatches the format string in sprintf" 
web_identify="{identify:sprintf}"
func_info="bool GetHashInfo ( char * sqlString , char * genRequestError , char 
* HashTableInfo )"
6279: break;
6280: case 5:
6281: if(syskeysPresent)
6282: sprintf(ControlQuery,"select 
concat(cast(ac.COLUMN_NUMBER as varchar(10) character set ISO88591),',') from 
%s.SYSTEM_SCHEMA.SCHEMATA sc, NEO.HP_DEFINITION_SCHEMA.OBJECTS ob, 
NEO.HP_DEFINITION_SCHEMA.ACCESS_PATH_COLS ac where sc.SCHEMA_NAME = '%s' and 
ob.OBJECT_NAME = '%s'and ac.PART_KEY_SEQ_NUM > 0and sc.SCHEMA_UID = 
ob.SCHEMA_UID and ob.OBJECT_UID = ac.ACCESS_PATH_UID ORDER BY 
ac.POSITION_IN_ROW FOR READ UNCOMMITTED ACCESS", srvrGlobal->SystemCatalog, 
schemaToken, tableName);
6283: else
6284: sprintf(ControlQuery,"select 
concat(cast(ac.COLUMN_NUMBER+1 as varchar(10) character set ISO88591),',') from 
%s.SYSTEM_SCHEMA.SCHEMATA sc, NEO.HP_DEFINITION_SCHEMA.OBJECTS ob, 
NEO.HP_DEFINITION_SCHEMA.ACCESS_PATH_COLS ac where sc.SCHEMA_NAME = '%s' and 
ob.OBJECT_NAME = '%s'and ac.PART_KEY_SEQ_NUM > 0and sc.SCHEMA_UID = 
ob.SCHEMA_UID and ob.OBJECT_UID = ac.ACCESS_PATH_UID ORDER BY 
ac.POSITION_IN_ROW FOR READ UNCOMMITTED ACCESS", srvrGlobal->SystemCatalog, 
schemaToken, tableName);
6285: strcpy(HashTableInfo+ControlQueryLen, ";HC="); // HC 
means HASH COLUMNS in the TABLE.
6286: ControlQueryLen = ControlQueryLen + 4;
6287: break;
6288: case 6:
6289: sprintf(ControlQuery,"select cast(cast((52 * 1024 * 128) 
/ (sum(co.column_size)) as integer) as varchar(10) character set ISO88591) from 
 %s.SYSTEM_SCHEMA.SCHEMATA sc, NEO.HP_DEFINITION_SCHEMA.OBJECTS ob, 
NEO.HP_DEFINITION_SCHEMA.COLS co where sc.SCHEMA_NAME = '%s' and ob.OBJECT_NAME 
= '%s' and sc.SCHEMA_UID = ob.SCHEMA_UID and ob.OBJECT_UID = co.OBJECT_UID and 
ob.OBJECT_TYPE = 'BT' FOR READ UNCOMMITTED ACCESS", srvrGlobal->SystemCatalog, 
verBuffer, verBuffer, atol(verBuffer), schemaToken, tableName);
6290: strcpy(HashTableInfo+ControlQueryLen, ";HE="); // HE 
means Guesstimated rowset size. Change 128 to HP soon.
6291: ControlQueryLen = ControlQueryLen + 4;
6292: break;
6293: default:
6294: break;
6295: }
6296: iqqcode = QryControlSrvrStmt->ExecDirect(NULL, ControlQuery, 
EXTERNAL_STMT, TYPE_SELECT, SQL_ASYNC_ENABLE_OFF, 0);
6297: if (iqqcode != SQL_SUCCESS)
6298: {
6299: ERROR_DESC_def *p_buffer = 
QryControlSrvrStmt->sqlError.errorList._buffer;


Beijing Esgyn Information Technology Co., Ltd.
Address: Room 1302, Tower A, Huibin Building, 8 Beichen East Road, Chaoyang District, Beijing
Mobile: 18513493336
Email: xiaozhong.w...@esgyn.cn



RE: Anomaly with [first n] and ORDER BY

2018-01-08 Thread Dave Birdsall
Hi Hans,

Cool example! Thanks, this will help a lot. Hope to have an implementation by 
tomorrow.

Dave

-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
Sent: Monday, January 8, 2018 5:40 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Dave, here is the case where this won't work:

1. The root node requires an order by A from its child.

2. The sort enforcer rule fires and the sort that gets created requires no 
order from its child (the FirstN)

3. The FirstN gets the required physical properties from the sort, which 
specifies no order

4. Now, the FirstN does not require any order from its child

If we force the FirstN to require an order for every context it creates, this 
won't happen. I was also thinking of the equivalent way to do this: select * 
from t where row_number() over(order by A) <= 10. This works the same way, we 
include the "order by A" in the Sequence function RelExpr and solve the 
required property issue that way.
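
A sketch of that equivalent in a more portable form, since most SQL dialects do 
not allow a window function directly in WHERE (t and A are placeholders):

  select *
  from (select t.*, row_number() over (order by A) as rn
        from t) x
  where x.rn <= 10
  order by A;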

Thanks,

Hans

-Original Message-----
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Monday, January 8, 2018 4:50 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Hans,

Thanks. I was looking just now at RelExpr::createAContextForAChild. It seems to 
pass its own required property down to its left-most child (at least for 
sorting). I'm confused why this isn't good enough.

Dave

-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
Sent: Monday, January 8, 2018 4:30 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Dave, you could just add a ValueIdList data member to the FirstN and store 
the ORDER BY there as well as in the Root operator. Then, in a new virtual 
method createAContextForAChild() method of the FirstN, it would need to add a 
required order, like the Root operator does. This is not ideal, having to store 
the order by list twice, but I guess that's the price we pay for having an 
operator that does not cleanly separate logical and physical properties. Also, 
the meaning of this order is different from the meaning in the root node, we 
just syntactically force the two orders to be the same.

Thanks,

Hans

-Original Message-----
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Monday, January 8, 2018 3:40 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Hans,

Thanks. An elemental question: How do I make both operators require an order?

Dave

-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
Sent: Monday, January 8, 2018 3:28 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Dave,

Overall, I like the idea of moving some of this logic into the optimizer. It's 
about time to do that.

One small comment: The sort enforcer rule does not have a pattern, so it cannot 
look at the FIRST_N node. It can look at the group, so you could probably mark 
the group of the FIRST_N somehow.

To be honest, I don't really like this solution, because it uses some flags to 
deal with required physical properties. Ideally, we would figure out some way 
to make it work in a way that required physical properties are passed as such, 
while things that alter the logical properties are stored in RelExprs. The 
problem is that the FIRST_N operator, especially with an ORDER BY, violates 
this separation of logical and physical properties. We have another operator 
like that, the partial groupby.

Here is what I would do - it's not a perfect solution but hopefully it makes 
the required physical properties more accurate:

For queries with a [FIRST n] and an ORDER BY, let both the root and the FIRST_N 
node (which is then always added in the binder) ask for the sort order. This is 
more realistic. Both operators require an order. We might consider a sort on 
top of the FIRST_N, but that sort would be eliminated, because it is 
unnecessary, as the FIRST_N would always return its result in order.

Thanks,

Hans

-Original Message-----
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Monday, January 8, 2018 2:37 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Hans,

So my first attempt at a fix for Trafodion 2840 is to add code to 
RelRoot::isUpdatableBasic to check if getFirstNRows != -1 or getFirstNParams is 
non-null.

That fix does half the trick. It makes new [first n] + ORDER BY views not 
updatable. The half of the trick that it doesn't do is take care of existing 
[first n] + ORDER BY views; they remain marked updatable in the metadata.

So I was trying to imagine approaches that would catch existing views as well.

In RelRoot::codeGen I saw these comments:

// if root has GET_N indication set, insert a FirstN node.
  // Usually this transformation is done in the binder, but in
  // some special cases it is not.
  // For example, if there is an 'order by' in the query, then
  // the Sort node is added by the optimizer. In this case, we
  // want to add the FirstN node on top of the Sort node and not
  // below it. If we add the FirstN node in the binder, the optimizer
  // will add the Sort node on top of the FirstN node. Maybe we
  // can teach optimizer to do this.

RE: Anomaly with [first n] and ORDER BY

2018-01-08 Thread Dave Birdsall
Hi Hans,

Thanks. I was looking just now at RelExpr::createAContextForAChild. It seems to 
pass its own required property down to its left-most child (at least for 
sorting). I'm confused why this isn't good enough.

Dave

-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
Sent: Monday, January 8, 2018 4:30 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Dave, you could just add a ValueIdList data member to the FirstN and store 
the ORDER BY there as well as in the Root operator. Then, in a new virtual 
method createAContextForAChild() method of the FirstN, it would need to add a 
required order, like the Root operator does. This is not ideal, having to store 
the order by list twice, but I guess that's the price we pay for having an 
operator that does not cleanly separate logical and physical properties. Also, 
the meaning of this order is different from the meaning in the root node, we 
just syntactically force the two orders to be the same.

Thanks,

Hans

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Monday, January 8, 2018 3:40 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Hans,

Thanks. An elemental question: How do I make both operators require an order?

Dave

-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
Sent: Monday, January 8, 2018 3:28 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Dave,

Overall, I like the idea of moving some of this logic into the optimizer. It's 
about time to do that.

One small comment: The sort enforcer rule does not have a pattern, so it cannot 
look at the FIRST_N node. It can look at the group, so you could probably mark 
the group of the FIRST_N somehow.

To be honest, I don't really like this solution, because it uses some flags to 
deal with required physical properties. Ideally, we would figure out some way 
to make it work in a way that required physical properties are passed as such, 
while things that alter the logical properties are stored in RelExprs. The 
problem is that the FIRST_N operator, especially with an ORDER BY, violates 
this separation of logical and physical properties. We have another operator 
like that, the partial groupby.

Here is what I would do - it's not a perfect solution but hopefully it makes 
the required physical properties more accurate:

For queries with a [FIRST n] and an ORDER BY, let both the root and the FIRST_N 
node (which is then always added in the binder) ask for the sort order. This is 
more realistic. Both operators require an order. We might consider a sort on 
top of the FIRST_N, but that sort would be eliminated, because it is 
unnecessary, as the FIRST_N would always return its result in order.

Thanks,

Hans

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Monday, January 8, 2018 2:37 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Hans,

So my first attempt at a fix for Trafodion 2840 is to add code to 
RelRoot::isUpdatableBasic to check if getFirstNRows != -1 or getFirstNParams is 
non-null.

That fix does half the trick. It makes new [first n] + ORDER BY views not 
updatable. The half of the trick that it doesn't do is take care of existing 
[first n] + ORDER BY views; they remain marked updatable in the metadata.

So I was trying to imagine approaches that would catch existing views as well.

In RelRoot::codeGen I saw these comments:

// if root has GET_N indication set, insert a FirstN node.
  // Usually this transformation is done in the binder, but in
  // some special cases it is not.
  // For example, if there is an 'order by' in the query, then
  // the Sort node is added by the optimizer. In this case, we
  // want to add the FirstN node on top of the Sort node and not
  // below it. If we add the FirstN node in the binder, the optimizer
  // will add the Sort node on top of the FirstN node. Maybe we
  // can teach optimizer to do this.

So, I thought: Well, let's explore the idea of having the Optimizer insert the 
Sort below FirstN instead of above it. I've coded a first attempt (still 
getting it to compile cleanly at this moment though). The idea was: Change the 
binder to always insert FirstN (even when ORDER BY is present). Change 
SortEnforcerRule::topMatch so it does not match on FirstN nodes (that prevents 
the Sort from being placed on top of FirstN). Add a new rule 
SortEnforcerFirstNRule that matches FirstN trees only, and transforms 
FirstN(CutOp) to FirstN(Sort(CutOp)). Hopefully that eliminates the need for 
the generator to insert the FirstN node, and gets the Sort node in the right 
place.

This should catch existing [first n] + ORDER BY views since view composition 
happens in the binder. Now the FirstN node will be generated there and the 
existing Normalizer check will catch it and flag

RE: Anomaly with [first n] and ORDER BY

2018-01-08 Thread Dave Birdsall
Hi Hans,

Thanks. An elemental question: How do I make both operators require an order?

Dave

-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
Sent: Monday, January 8, 2018 3:28 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Dave,

Overall, I like the idea of moving some of this logic into the optimizer. It's 
about time to do that.

One small comment: The sort enforcer rule does not have a pattern, so it cannot 
look at the FIRST_N node. It can look at the group, so you could probably mark 
the group of the FIRST_N somehow.

To be honest, I don't really like this solution, because it uses some flags to 
deal with required physical properties. Ideally, we would figure out some way 
to make it work in a way that required physical properties are passed as such, 
while things that alter the logical properties are stored in RelExprs. The 
problem is that the FIRST_N operator, especially with an ORDER BY, violates 
this separation of logical and physical properties. We have another operator 
like that, the partial groupby.

Here is what I would do - it's not a perfect solution but hopefully it makes 
the required physical properties more accurate:

For queries with a [FIRST n] and an ORDER BY, let both the root and the FIRST_N 
node (which is then always added in the binder) ask for the sort order. This is 
more realistic. Both operators require an order. We might consider a sort on 
top of the FIRST_N, but that sort would be eliminated, because it is 
unnecessary, as the FIRST_N would always return its result in order.

Thanks,

Hans

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Monday, January 8, 2018 2:37 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Hans,

So my first attempt at a fix for Trafodion 2840 is to add code to 
RelRoot::isUpdatableBasic to check if getFirstNRows != -1 or getFirstNParams is 
non-null.

That fix does half the trick. It makes new [first n] + ORDER BY views not 
updatable. The half of the trick that it doesn't do is take care of existing 
[first n] + ORDER BY views; they remain marked updatable in the metadata.

So I was trying to imagine approaches that would catch existing views as well.

In RelRoot::codeGen I saw these comments:

// if root has GET_N indication set, insert a FirstN node.
  // Usually this transformation is done in the binder, but in
  // some special cases it is not.
  // For example, if there is an 'order by' in the query, then
  // the Sort node is added by the optimizer. In this case, we
  // want to add the FirstN node on top of the Sort node and not
  // below it. If we add the FirstN node in the binder, the optimizer
  // will add the Sort node on top of the FirstN node. Maybe we
  // can teach optimizer to do this.

So, I thought: Well, let's explore the idea of having the Optimizer insert the 
Sort below FirstN instead of above it. I've coded a first attempt (still 
getting it to compile cleanly at this moment though). The idea was: Change the 
binder to always insert FirstN (even when ORDER BY is present). Change 
SortEnforcerRule::topMatch so it does not match on FirstN nodes (that prevents 
the Sort from being placed on top of FirstN). Add a new rule 
SortEnforcerFirstNRule that matches FirstN trees only, and transforms 
FirstN(CutOp) to FirstN(Sort(CutOp)). Hopefully that eliminates the need for 
the generator to insert the FirstN node, and gets the Sort node in the right 
place.

This should catch existing [first n] + ORDER BY views since view composition 
happens in the binder. Now the FirstN node will be generated there and the 
existing Normalizer check will catch it and flag it not updatable.

What do you think?

Dave


-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
Sent: Monday, January 8, 2018 2:17 PM
To: dev@trafodion.apache.org
Subject: RE: Anomaly with [first n] and ORDER BY

Hi Dave,

The simple reason is that the person who implemented the [first n] feature is 
not a compiler developer.

Ideally, we would be aware of the [first n] throughout the compilation and have 
a new required property in the optimizer that says "optimize for first N rows", 
so that we could favor certain query plans such as nested joins, but this is 
not happening today and it would be a significant project.

One other comment about being able to update a [first n] view: Ideally, such a 
view would be updatable if no WITH CHECK OPTION was specified, and it would not 
be updatable when the WITH CHECK OPTION was specified in the CREATE VIEW DDL. 
Again, that's the ideal case, and we may not be able to make that happen today.

Thanks,

Hans

-Original Message-----
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Monday, January 8, 2018 12:24 PM
To: dev@trafodion.apache.org
Subject: Anomaly with [first n] and ORDER BY

Hi,

I've been studying https://issues.apache.org/jira/browse/TRAFODION-2840, and 
the related case https://issues.apache.org/jira/browse/TRAFODION-2822.

Anomaly with [first n] and ORDER BY

2018-01-08 Thread Dave Birdsall
Hi,

I've been studying https://issues.apache.org/jira/browse/TRAFODION-2840, and 
the related case https://issues.apache.org/jira/browse/TRAFODION-2822.

I attempted to fix the latter case by making [first n] views not updatable.

But the former case documents a hole in my fix. It seems that if we add ORDER 
BY to the view definition, the checks in 2822 are circumvented.

I figured out why.

At bind time, [first n] scans are transformed to a firstN(scan) tree (that is, 
a firstN node is created and inserted on top of the scan). EXCEPT, if there is 
an ORDER BY clause, we don't do this. Instead, we generate the firstN node at 
code generation time.

But that means the Normalizer sees a [first n] + ORDERBY as just a scan, and a 
[first n] without ORDER BY as firstN(scan). The fix for 2822 was in the 
Normalizer; so this anomaly explains why the fix didn't work when ORDER BY was 
present.
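
Sketching the two cases (t and A are placeholders):

  -- The binder builds firstN(scan); the Normalizer sees the firstN node:
  select [first 10] * from t;

  -- firstN insertion is deferred to code generation; the Normalizer
  -- sees only a scan:
  select [first 10] * from t order by A;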

Now, I've figured out how to improve the fix so the Normalizer catches the 
ORDER BY example.

But I am curious why we do this strange thing of deferring firstN insertion to 
generation time. It seems to me doing so could defeat many other checks for 
firstN processing. For example, an optimizer rule that does something for 
firstNs wouldn't fire if an ORDER BY is present.

I'm wondering, for example, why we didn't have the Binder simply insert a 
firstN and a sort node into the tree.

Any thoughts?

Dave


RE: What functions or procedures are used by a library?

2018-01-02 Thread Dave Birdsall
Thanks, Venkat.

-Original Message-
From: Venkat Muthuswamy [mailto:venkat.muthusw...@esgyn.com] 
Sent: Tuesday, January 2, 2018 2:03 PM
To: dev@trafodion.apache.org
Subject: RE: What functions or procedures are used by a library?

Dave,

Yes, there is. 

GET PROCEDURES FOR LIBRARY SCH.LIB1
GET FUNCTIONS FOR LIBRARY SCH.LIB1

Regards
Venkat

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Tuesday, January 2, 2018 1:50 PM
To: dev@trafodion.apache.org
Subject: What functions or procedures are used by a library?

Hi,

If I try to drop a library that has UDRs defined on it, the DROP LIBRARY will 
fail with SQL error 1366. All well and good.

I can figure out what UDRs are defined on it by joining the metadata ROUTINES 
table to the metadata LIBRARIES table on ROUTINES.LIBRARY_UID = 
LIBRARIES.LIBRARY_UID, and placing an appropriate predicate on the column 
LIBRARIES.LIBRARY_FILENAME.
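
A sketch of that join (the filename is a placeholder, and this assumes the 
default TRAFODION catalog):

  select r.*
  from "_MD_".ROUTINES r, "_MD_".LIBRARIES l
  where r.LIBRARY_UID = l.LIBRARY_UID
    and l.LIBRARY_FILENAME = '/path/to/mylib.jar';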

But I'm wondering if there is a simple utility that would do this for me? 
Something like GET ROUTINES ON LIBRARY x?

Thanks,

Dave


What functions or procedures are used by a library?

2018-01-02 Thread Dave Birdsall
Hi,

If I try to drop a library that has UDRs defined on it, the DROP LIBRARY will 
fail with SQL error 1366. All well and good.

I can figure out what UDRs are defined on it by joining the metadata ROUTINES 
table to the metadata LIBRARIES table on ROUTINES.LIBRARY_UID = 
LIBRARIES.LIBRARY_UID, and placing an appropriate predicate on the column 
LIBRARIES.LIBRARY_FILENAME.

But I'm wondering if there is a simple utility that would do this for me? 
Something like GET ROUTINES ON LIBRARY x?

Thanks,

Dave


RE: is there docs about systables of trafodion?

2018-01-02 Thread Dave Birdsall
I don't think there is any documentation of the metadata tables at the moment. 
At least I don't see any in the manuals.

If you simply want to know what columns exist in each table, you can do the 
following:

Set schema "_MD_";
Get tables;

Then for each table, do "showddl <table-name>".
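
For example (assuming OBJECTS appears in the list of tables):

  set schema "_MD_";
  get tables;
  showddl OBJECTS;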

Some of it is self-explanatory.

If you would like to see documentation in the manuals, please feel free to 
write a JIRA requesting that. (Or I can do it for you if you wish.)

Thanks,

Dave

-Original Message-
From: yingshuai...@esgyn.cn [mailto:yingshuai...@esgyn.cn] 
Sent: Friday, December 22, 2017 12:28 AM
To: dev 
Subject: is there docs about systables of trafodion?

Hi all,
    I want to get information about the system tables of Trafodion in schema 
"_MD_", including column values and their descriptions. Are there any docs about that?



Li Yingshuai
Esgyn Information Technology Co., Ltd.
Mobile: 18701691214
Email: yingshuai...@esgyn.cn


RE: 2018

2018-01-02 Thread Dave Birdsall
And to you likewise!

Dave

-Original Message-
From: Pierre Smits [mailto:pierresm...@apache.org] 
Sent: Tuesday, January 2, 2018 2:21 AM
To: Pierre Smits 
Subject: 2018

Hi all,

I wish you all a blessed and fruitful 2018.

Best regards,

Pierre Smits

V.P. Apache Trafodion
PMC Member Apache Directory


RE: [Release R2.2] items to fix

2018-01-02 Thread Dave Birdsall
I assume that copyrights need to change to 2018 also?

-Original Message-
From: Steve Varnau [mailto:steve.var...@esgyn.com] 
Sent: Monday, January 1, 2018 7:28 PM
To: dev@trafodion.apache.org
Subject: RE: [Release R2.2] items to fix

Those changes are primarily docs (for master branch), and web-site.  The 
packaging changes to take incubating out of release name have not been made yet.

--Steve

> -Original Message-
> From: Liu, Ming (Ming) [mailto:ming@esgyn.cn]
> Sent: Monday, January 1, 2018 6:55 PM
> To: dev@trafodion.apache.org
> Subject: RE: [Release R2.2] items to fix
> 
> Thank you, I see it. So after that is merged, I can start the VOTE :)
> 
> -Original Message-
> From: Zhang, Yi (Eason) [mailto:yi.zh...@esgyn.cn]
> Sent: Tuesday, January 02, 2018 10:41 AM
> To: dev@trafodion.apache.org
> Subject: RE: [Release R2.2] items to fix
> 
> Hi Ming,
> 
> We already have PR for the changes:
> 
> https://github.com/apache/trafodion/pull/1359
> 
> 
> Thanks,
> Eason
> 
> -Original Message-
> From: Liu, Ming (Ming) [mailto:ming@esgyn.cn]
> Sent: Tuesday, January 2, 2018 10:37
> To: dev@trafodion.apache.org
> Subject: [Release R2.2] items to fix
> 
> hi, all,
> 
> We are going to renew the release R2.2 of Apache Trafodion.
> Some documentation may need to change since Trafodion is now TLP. The 
> first is the DISCLAIMER; it still says 'incubation'. I will file a JIRA to fix it.
> Is there anything else people can think of before I start a new VOTE?
> 
> thanks,
> Ming


RE: how would esp do when it was launched?

2017-12-30 Thread Dave Birdsall
Hi,

How big is the table? How many esps are we creating? Perhaps we are creating 
the esps serially; maybe that is what is taking the time.

Another factor to look at is compile time. You can separate that out as follows:

Step 1: using trafci, do a "prepare" of your query. See how long that takes.
Step 2: then execute the query. How long does that take?
Step 3: re-execute the query. How long does that take?
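
In trafci, that might look like the following (a sketch reusing the query from 
the original mail):

  prepare s1 from select count(*) from CELL_INDICATOR_HIVE where starttime=201708010;  -- step 1
  execute s1;  -- step 2
  execute s1;  -- step 3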

I expect part of that 80 seconds will be consumed in step 1 as compile time. 
Would be interesting to know if compile time is, say, 2 seconds or 78 seconds. 
If the latter, perhaps the issue is how we read statistics for a Hive table 
with many partitions.

Dave

-Original Message-
From: Liu, Yao-Hua (Joshua) [mailto:yaohua@esgyn.cn] 
Sent: Friday, December 22, 2017 1:01 AM
To: d...@trafodion.incubator.apache.org
Subject: how would esp do when it was launched?

Hi all,

   Suresh and I found something interesting when running some queries.

   Step 1:
   Using trafci, run this query: select count(*) from CELL_INDICATOR_HIVE where 
starttime=201708010;  // CELL_INDICATOR_HIVE has 100 billion rows and 
each starttime has 4346483 rows. Starttime is the first column in the store 
by keys.
   This takes about 1 minute and 20 seconds to finish.
   Step 2:
   Run the above SQL again; this time it takes 3 seconds to finish.
   Here it is 80s vs 3s; we might guess it is due to ESP start time or caching. But we 
checked:

1. Starting all the ESPs takes less than 1 second.

2. If it were due to caching, we can test with another table:
   Step 3:
   Run another query: select count(*) from SERVERIP_INDICATOR_BAK where 
starttime=201708010;  // SERVERIP_INDICATOR_BAK has 64 billion rows and 
each starttime has 2.8 million rows. Starttime is also the first column 
in the store by keys. This takes 2 seconds to finish.

   By the way, if we start another trafci (not the same mxosrvr as above) 
and run the above select count(*) from SERVERIP_INDICATOR_BAK where 
starttime=201708010, it also takes 1 minute or more.

   So we are wondering: what does an ESP do when it is started? Why does the first 
scan of one table take so much time, while a second scan of another table can be 
much faster?

Thanks
Joshua


RE: Migration of our repos

2017-12-28 Thread Dave Birdsall
Hi,

FYI: Today I did a "git clone" to "trafodion" (rather than incubator-trafodion) 
and was successfully able to build, install local hadoop, and bring up an 
instance.

Dave

-Original Message-
From: Pierre Smits [mailto:pierresm...@apache.org] 
Sent: Thursday, December 28, 2017 2:16 AM
To: dev@trafodion.apache.org
Subject: Migration of our repos

Hi all,

Today I have been informed that our repos (on [1],[2]) have been migrated.

The new addresses are:
https://git-wip-us.apache.org/repos/asf?p=trafodion.git
git://git.apache.org/trafodion.git
https://github.com/apache/trafodion

Please be aware that the old (incubator) references may still be available for 
a short time (but may link to nowhere), so I advise everybody to adjust their 
settings as soon as possible. Also, the references in our website and wiki may 
be out of sync for a short period.

[1] http://git.apache.org
[2] https://github.com

Best regards,

Pierre Smits