RE: About sqgen

2017-11-16 Thread Sean Broeder
Yes, that's a possibility if you have an idea what you would like to test for.  
You might gain support for adding an option to sqstart that automatically 
triggers sqgen.

If this is something you'd like to pursue, I'd suggest creating a JIRA so it 
can be discussed and tracked.

Thanks,
Sean

-Original Message-
From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn] 
Sent: Thursday, November 16, 2017 7:27 PM
To: dev@trafodion.incubator.apache.org
Subject: RE: About sqgen

Thanks Sean.

Maybe we could add sqgen into sqstart behind a condition. For example:

if [ ... ]; then
    sqgen
fi
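
Yuan's sketch above could be fleshed out with a freshness check. The following 
is a hypothetical sketch, not existing sqstart code; the file locations 
($TRAF_HOME/sql/scripts/sqconfig as the input, $TRAF_HOME/etc/ms.env as a 
generated artifact) and the timestamp heuristic are assumptions.

```shell
# Hypothetical helper: sqgen is needed if the generated artifact is
# missing, or is older than the configuration file it is derived from.
needs_sqgen() {
  cfg="$1"   # e.g. $TRAF_HOME/sql/scripts/sqconfig (assumed location)
  gen="$2"   # e.g. $TRAF_HOME/etc/ms.env (generated by sqgen)
  [ ! -e "$gen" ] || [ "$cfg" -nt "$gen" ]
}

# sqstart could then run sqgen only when required:
# if needs_sqgen "$TRAF_HOME/sql/scripts/sqconfig" "$TRAF_HOME/etc/ms.env"; then
#   sqgen
# fi
```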

Best regards,
Yuan

-Original Message-
From: Sean Broeder [mailto:sean.broe...@esgyn.com] 
Sent: Friday, November 17, 2017 10:31 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: About sqgen

In a development environment, sqgen is frequently required when you pull new 
software that has some new feature set, but not always.

Running sqgen as part of sqstart would slow the start down and is probably 
undesirable when a simple restart is all that's needed.  I don't often run on a 
real cluster, so my experience there is limited, but I would think a simple way 
to achieve what you are suggesting, without changing the behavior for everyone, 
is either to execute the two commands separated by a semicolon or to create an 
alias.
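
Concretely, either approach needs no changes to the shipped scripts. A hedged 
sketch (sqgen and sqstart stand for the real Trafodion scripts, assumed to be 
on PATH; the wrapper name is made up):

```shell
# Run the two commands back to back:
#   sqgen; sqstart
# Or wrap them in a shell function (or an alias) so a "full" restart
# is one command; sqstart only runs if sqgen succeeds.
sqrestart() {
  sqgen && sqstart
}
```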

Others may have additional input.

Thanks,
Sean

 

-Original Message-
From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn] 
Sent: Thursday, November 16, 2017 4:47 PM
To: dev@trafodion.incubator.apache.org
Subject: RE: About sqgen

Hi Sean,

Thanks a lot for your answer. Now I know that we need to rerun sqgen when we 
change sqconfig; are there any other cases where we also need to rerun sqgen?
Also, is it possible to make sqgen part of sqstart? Sometimes we might forget 
to rerun it because it is a separate step from sqstart.


Best regards,
Yuan

-Original Message-
From: Sean Broeder [mailto:sean.broe...@esgyn.com] 
Sent: Friday, November 17, 2017 7:24 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: About sqgen

Hi Yuan,
Sqgen is a script that takes a mixture of configuration and release-specific 
parameters and creates or sets up a number of files that are used at startup 
and runtime.

For example, there is a file named sqconfig that describes the cluster (the 
number of nodes, the location of $TRAF_HOME, etc).  Sqgen takes this file and 
other scripts to programmatically generate $TRAF_HOME/etc/ms.env, gomon.cold, 
etc.  These files are tailored specifically to the installed cluster and the 
release it is running.  In this way, when a user 
types 'sqstart' the scripts, directories, and files are all in place for the 
monitor and other processes to read and come up quickly and consistently.

If you later decide to alter the configuration of your cluster, for example if 
you change the number of nodes in sqconfig, you can stop Trafodion and rerun 
sqgen.  When Trafodion is restarted it will reflect the new changes you've 
specified.
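
The stop, regenerate, restart cycle described above can be summarized as a 
command sequence. This is an illustrative sketch (sqstop, sqgen, and sqstart 
are the real Trafodion scripts; the wrapper itself is hypothetical):

```shell
# Hypothetical wrapper for the reconfiguration flow described above;
# editing sqconfig (e.g. changing the node count) happens beforehand.
reconfigure_trafodion() {
  sqstop  || return 1   # stop Trafodion first
  sqgen   || return 1   # regenerate ms.env, gomon.cold, etc. from sqconfig
  sqstart               # restart; the new configuration takes effect
}
```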
 
Regards,
Sean

-Original Message-
From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn] 
Sent: Thursday, November 16, 2017 1:58 AM
To: dev@trafodion.incubator.apache.org
Subject: About sqgen

Hi Trafodioneers,

As we know, when we install Trafodion for the first time we are required to run 
"sqgen". Can anyone give a brief introduction to sqgen? What does sqgen do?
Thanks in advance.

Best regards,
Yuan





RE: [VOTE] Graduate from Incubator and become a top-level project

2017-11-16 Thread Sean Broeder
+1
Regards,
Sean

-Original Message-
From: Pierre Smits [mailto:pierre.sm...@gmail.com] 
Sent: Thursday, November 16, 2017 8:21 AM
To: dev@trafodion.incubator.apache.org
Subject: [VOTE] Graduate from Incubator and become a top-level project

Hi all,

Following up on our [DISCUSSION] Graduation of The (incubating) Apache
Trafodion Project thread, I hereby propose that we graduate from incubating
and have our Apache Trafodion Project established as an independent (and
top-level ASF) project. According to The Graduation Process, a vote is not
required, but votes provided show that the project is ready to embark on the
next phase in its lifecycle. So please all come forward and express your vote.

Votes from our Mentors (Jacques, Michael) will count as binding votes.

Vote options are:
+1 - meaning: Yes, I want the project to graduate
-1 - meaning: No, I don't want the project to graduate
+0 - meaning: Abstain

A normal majority (50% of the number of voters + 1 vote) will be applicable.


Here is my vote: +1

Best regards,


Pierre Smits

ORRTIZ.COM 
OFBiz based solutions & services

OEM - The OFBiz Extensions Marketplace1
http://oem.ofbizci.net/oci-2/
1 not affiliated to (and not endorsed by) the OFBiz project


RE: Question about transaction manager on Trafodion architecture page

2017-11-07 Thread Sean Broeder
Hi Dave,
Thanks for pointing this out.

I think stating a specific release of HBase support in the architecture is a 
mistake because it's just begging to become obsolete.  Perhaps a better 
replacement for the first bullet is something like "Upgraded to support HBase 
coprocessor mechanism" because this will not likely change in the near future.

I think the third bullet remains true.  We do support global transactions that 
span multiple rows, regions and tables within an HBase cluster.

Thanks,
Sean

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Tuesday, November 7, 2017 8:49 AM
To: dev@trafodion.incubator.apache.org
Subject: Question about transaction manager on Trafodion architecture page

Hi Trafodioners,

On the Trafodion architecture page 
(http://trafodion.apache.org/architecture-overview.html), in the section on 
Transactions, there is the following text:

Trafodion supports distributed ACID transaction semantics using the 
Multi-Version Concurrency Control (MVCC) model. The transaction management is 
built on top of a fork of the HBase-trx project implementing the following 
changes:
* Upgraded it to work on HBase version 0.98.1 (for CDH 5.1) or 0.98.0 
(for HDP 2.1).
* Added support for parallel worker processes doing work on behalf of 
the same transaction.
* Added support for global transactions, that is, transactions that can 
encompass resources (regions/HTables) across an HBase cluster.
* Added transaction recovery after server failure.
I believe the first and third bullets above are out of date.
Can you suggest up-to-date text for these? I will then update the web page.
Thanks,
Dave



RE: Jenkins tests failing in JDBC tests

2017-08-16 Thread Sean Broeder
Seems like a reasonable short-term solution to me.

+1

Sean

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Wednesday, August 16, 2017 10:34 AM
To: dev@trafodion.incubator.apache.org
Subject: Jenkins tests failing in JDBC tests

Hi Trafodion developers,

I'm noticing that almost all outstanding pull requests are failing the Jenkins 
tests with the same error. The error is in the jdbc_test-cdh suite:

2017-08-16 01:12:27 Running testBatchInsertFKNotExist..
2017-08-16 01:12:56 Batch Update Failed, See next exception for details
2017-08-16 01:12:56 *** ERROR[8103] The operation is prevented by referential 
integrity constraint TRAFODION.T4QA.BATCH_TEST_TABLE_FK_911645628_4551 on table 
TRAFODION.T4QA.BATCH_TEST_TABLE_FK.
2017-08-16 01:13:11 *** ERROR[2026] Server Process $ZSM0 has reached allowed 
depth for nowait operation from the process 0,15846. [2017-08-16 01:13:11]


I know that Weiqing is working hard on a solution to this problem.

In the meantime, though, I wonder if we should liberalize our commit policy and 
allow commits of pull requests if this is the only failure seen?

What do you think?

Dave


RE: Speeding up development builds?

2017-08-03 Thread Sean Broeder
Another thing you can do in your sandbox is change the Makefile in 
core/sqf/src/seatrans/hbase-trx to make only the jar file you need instead of 
making the jar files for all the possible distros.  For example, I comment out 
the first line and shorten it to just build the jar files for CDH5.4

#build_all: jdk_1_7_cdh54 jdk_1_7_cdh55 jdk_1_7_cdh57 jdk_1_7_hdp jdk_1_7_apache10 jdk_1_7_apache11 jdk_1_7_apache12
build_all: jdk_1_7_cdh54

Thanks,
Sean

-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
Sent: Thursday, August 3, 2017 9:07 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: Speeding up development builds?

One thing I do when I make changes in the SQL layer is that I only build the 
SQL code:

  cd .../incubator-trafodion/core/sql/nskgmake
  gmake -j 4 -ks linuxdebug

That is a lot faster.

-Original Message-
From: Zhang, Yi (Eason) [mailto:yi.zh...@esgyn.cn] 
Sent: Thursday, August 3, 2017 4:16 AM
To: dev@trafodion.incubator.apache.org
Subject: Re: Speeding up development builds?

+1

I’m also wondering how to speed it up.


Thanks,
Eason



On 03/08/2017, 02:41, "Dave Birdsall"  wrote:

Hi,

I notice that when I make a change to a C++ module in Trafodion, many 
(all?) of the Java modules get built as well. This slows down development; a 
mistake in C++ coding can easily cost you 5 to 10 minutes after correction 
waiting for a rebuild.

I wonder if there is a way to do just the C++ part of the build?

Thanks,

Dave




RE: make all suboptimal?

2017-06-15 Thread Sean Broeder
Hi Dave,
Several of the java files in the DTM are created using a template for each 
distribution and release.  So these java files are new as far as the make is 
concerned.

Regards,
Sean

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Thursday, June 15, 2017 11:33 AM
To: dev@trafodion.incubator.apache.org
Subject: make all suboptimal?

Hi,

Dumb question time.

Scenario:

I have a Trafodion development instance with a clean build. I make a one-line 
change to a C++ file and save it. Then I do a "make all". I notice that the 
"make all" seems to be rebuilding all the Java files - watching in "top" I see 
lots of heavy java processes going.

Question:

Why are the Java files rebuilt when they have not changed?

Dave


RE: TRAFODION-2001

2017-05-11 Thread Sean Broeder
It would be $TSID0 and should be running on node 0 by default.

-Original Message-
From: Zalo Correa [mailto:zalo.cor...@esgyn.com] 
Sent: Thursday, May 11, 2017 5:27 AM
To: dev@trafodion.incubator.apache.org
Subject: Re: TRAFODION-2001

Yes, where can I find the process name used to open it? There is a process name 
change and it looks like I missed it in this code.

Thanks,

Zalo
_
From: Sean Broeder <sean.broe...@esgyn.com<mailto:sean.broe...@esgyn.com>>
Sent: Wednesday, May 10, 2017 8:17 PM
Subject: RE: TRAFODION-2001
To: 
<dev@trafodion.incubator.apache.org<mailto:dev@trafodion.incubator.apache.org>>


Looking at the exception, it implies the IdTm is not running. Can you verify 
that the process is running?

Thanks,
Sean

-Original Message-
From: Zalo Correa [mailto:zalo.cor...@esgyn.com]
Sent: Wednesday, May 10, 2017 5:44 PM
To: 
dev@trafodion.incubator.apache.org<mailto:dev@trafodion.incubator.apache.org>
Subject: RE: TRAFODION-2001

Regarding the Jenkins test failure, I am seeking developer help to correct the 
failure of 
core-regress-core-hdp<https://jenkins.esgyn.com/job/core-regress-core-hdp/2224/console>.

Perhaps someone has seen this failure and would know how to correct the problem?

The following is the diff file contents:

cat DIFF116
946c946,958
< --- 1 row(s) inserted.
---
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::checkAndInsertRow returned error HBASE_ACCESS_ERROR(-706). 
> Cause: java.io.IOException: getTmId: IdTm threw exception
> org.apache.hadoop.hbase.client.transactional.RMInterface.getTmId(RMInterface.java:240)
> org.apache.hadoop.hbase.client.transactional.RMInterface.checkAndPutRegionTx(RMInterface.java:402)
> org.TRAFODION.sql.HTableClient.putRow(HTableClient.java:1585)
> org.TRAFODION.sql.HBaseClient.insertRow(HBaseClient.java:1729) Caused by
> org.apache.hadoop.hbase.regionserver.transactional.IdTmException: id 
> threw:org.apache.hadoop.hbase.regionserver.transactional.IdTmException: 
> ferr=11
> org.apache.hadoop.hbase.regionserver.transactional.IdTm.id(IdTm.java:134)
> org.apache.hadoop.hbase.client.transactional.RMInterface.getTmId(RMInterface.java:236)
> org.apache.hadoop.hbase.client.transactional.RMInterface.checkAndPutRegionTx(RMInterface.java:402)
> org.TRAFODION.sql.HTableClient.putRow(HTableClient.java:1585)
> org.TRAFODION.sql.HBaseClient.insertRow(HBaseClient.java:1729).
>
> --- 0 row(s) inserted.
956,958c968
< *** ERROR[8616] A conflict was detected during commit processing. Transaction 
has been aborted. Detail @conflict details@
<
< --- SQL operation failed with errors.
---
> --- SQL operation complete.
964c974
< 1 2
---
> 1 1
1054c1064,1076
< --- 1 row(s) inserted.
---
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::insertRow returned error HBASE_ACCESS_ERROR(-706). Cause: 
> java.io.IOException: getTmId: IdTm threw exception
> org.apache.hadoop.hbase.client.transactional.RMInterface.getTmId(RMInterface.java:240)
> org.apache.hadoop.hbase.client.transactional.RMInterface.putRegionTx(RMInterface.java:365)
> org.TRAFODION.sql.HTableClient.putRow(HTableClient.java:1599)
> org.TRAFODION.sql.HBaseClient.insertRow(HBaseClient.java:1729) Caused by
> org.apache.hadoop.hbase.regionserver.transactional.IdTmException: id 
> threw:org.apache.hadoop.hbase.regionserver.transactional.IdTmException: 
> ferr=22
> org.apache.hadoop.hbase.regionserver.transactional.IdTm.id(IdTm.java:134)
> org.apache.hadoop.hbase.client.transactional.RMInterface.getTmId(RMInterface.java:236)
> org.apache.hadoop.hbase.client.transactional.RMInterface.putRegionTx(RMInterface.java:365)
> org.TRAFODION.sql.HTableClient.putRow(HTableClient.java:1599)
> org.TRAFODION.sql.HBaseClient.insertRow(HBaseClient.java:1729).
>
> --- 0 row(s) inserted.
1064,1066c1086
< *** ERROR[8616] A conflict was detected during commit processing. Transaction 
has been aborted. Detail @conflict details@
<
< --- SQL operation failed with errors.
---
> --- SQL operation complete.
1072c1092
< 1 2
---
> 1 1
1163c1183,1195
< --- 1 row(s) deleted.
---
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::deleteRow returned error HBASE_ACCESS_ERROR(-706). Cause: 
> java.io.IOException: getTmId: IdTm threw exception
> org.apache.hadoop.hbase.client.transactional.RMInterface.getTmId(RMInterface.java:240)
> org.apache.hadoop.hbase.client.transactional.RMInterface.deleteRegionTx(RMInterface.java:338)
> org.TRAFODION.sql.HTableClient.deleteRow(HTableClient.java:1391)
> org.TRAFODION.sql.HBaseClient.deleteRow(HBaseClient.java:1776) Caused by
> org.apache.hadoop.hbase.regionserver.transactional.IdTmException: id 
> threw:org.apache.hadoop.hbase.region

RE: TRAFODION-2001

2017-05-10 Thread Sean Broeder
Looking at the exception, it implies the IdTm is not running.  Can you verify 
that the process is running?

Thanks,
Sean

-Original Message-
From: Zalo Correa [mailto:zalo.cor...@esgyn.com] 
Sent: Wednesday, May 10, 2017 5:44 PM
To: dev@trafodion.incubator.apache.org
Subject: RE: TRAFODION-2001

Regarding the Jenkins test failure, I am seeking developer help to correct the 
failure of 
core-regress-core-hdp.

Perhaps someone has seen this failure and would know how to correct the problem?

The following is the diff file contents:

cat DIFF116
946c946,958
< --- 1 row(s) inserted.
---
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::checkAndInsertRow returned error HBASE_ACCESS_ERROR(-706). 
> Cause: java.io.IOException: getTmId: IdTm threw exception
> org.apache.hadoop.hbase.client.transactional.RMInterface.getTmId(RMInterface.java:240)
> org.apache.hadoop.hbase.client.transactional.RMInterface.checkAndPutRegionTx(RMInterface.java:402)
> org.TRAFODION.sql.HTableClient.putRow(HTableClient.java:1585)
> org.TRAFODION.sql.HBaseClient.insertRow(HBaseClient.java:1729) Caused by
> org.apache.hadoop.hbase.regionserver.transactional.IdTmException: id 
> threw:org.apache.hadoop.hbase.regionserver.transactional.IdTmException: 
> ferr=11
> org.apache.hadoop.hbase.regionserver.transactional.IdTm.id(IdTm.java:134)
> org.apache.hadoop.hbase.client.transactional.RMInterface.getTmId(RMInterface.java:236)
> org.apache.hadoop.hbase.client.transactional.RMInterface.checkAndPutRegionTx(RMInterface.java:402)
> org.TRAFODION.sql.HTableClient.putRow(HTableClient.java:1585)
> org.TRAFODION.sql.HBaseClient.insertRow(HBaseClient.java:1729).
>
> --- 0 row(s) inserted.
956,958c968
< *** ERROR[8616] A conflict was detected during commit processing. Transaction 
has been aborted. Detail @conflict details@
<
< --- SQL operation failed with errors.
---
> --- SQL operation complete.
964c974
<  1 2
---
>  1 1
1054c1064,1076
< --- 1 row(s) inserted.
---
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::insertRow returned error HBASE_ACCESS_ERROR(-706). Cause: 
> java.io.IOException: getTmId: IdTm threw exception
> org.apache.hadoop.hbase.client.transactional.RMInterface.getTmId(RMInterface.java:240)
> org.apache.hadoop.hbase.client.transactional.RMInterface.putRegionTx(RMInterface.java:365)
> org.TRAFODION.sql.HTableClient.putRow(HTableClient.java:1599)
> org.TRAFODION.sql.HBaseClient.insertRow(HBaseClient.java:1729) Caused by
> org.apache.hadoop.hbase.regionserver.transactional.IdTmException: id 
> threw:org.apache.hadoop.hbase.regionserver.transactional.IdTmException: 
> ferr=22
> org.apache.hadoop.hbase.regionserver.transactional.IdTm.id(IdTm.java:134)
> org.apache.hadoop.hbase.client.transactional.RMInterface.getTmId(RMInterface.java:236)
> org.apache.hadoop.hbase.client.transactional.RMInterface.putRegionTx(RMInterface.java:365)
> org.TRAFODION.sql.HTableClient.putRow(HTableClient.java:1599)
> org.TRAFODION.sql.HBaseClient.insertRow(HBaseClient.java:1729).
>
> --- 0 row(s) inserted.
1064,1066c1086
< *** ERROR[8616] A conflict was detected during commit processing. Transaction 
has been aborted. Detail @conflict details@
<
< --- SQL operation failed with errors.
---
> --- SQL operation complete.
1072c1092
<  1 2
---
>  1 1
1163c1183,1195
< --- 1 row(s) deleted.
---
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::deleteRow returned error HBASE_ACCESS_ERROR(-706). Cause: 
> java.io.IOException: getTmId: IdTm threw exception
> org.apache.hadoop.hbase.client.transactional.RMInterface.getTmId(RMInterface.java:240)
> org.apache.hadoop.hbase.client.transactional.RMInterface.deleteRegionTx(RMInterface.java:338)
> org.TRAFODION.sql.HTableClient.deleteRow(HTableClient.java:1391)
> org.TRAFODION.sql.HBaseClient.deleteRow(HBaseClient.java:1776) Caused by
> org.apache.hadoop.hbase.regionserver.transactional.IdTmException: id 
> threw:org.apache.hadoop.hbase.regionserver.transactional.IdTmException: 
> ferr=11
> org.apache.hadoop.hbase.regionserver.transactional.IdTm.id(IdTm.java:134)
> org.apache.hadoop.hbase.client.transactional.RMInterface.getTmId(RMInterface.java:236)
> org.apache.hadoop.hbase.client.transactional.RMInterface.deleteRegionTx(RMInterface.java:338)
> org.TRAFODION.sql.HTableClient.deleteRow(HTableClient.java:1391)
> org.TRAFODION.sql.HBaseClient.deleteRow(HBaseClient.java:1776).
>
> --- 0 row(s) deleted.
1173,1175c1205
< *** ERROR[8616] A conflict was detected during commit processing. Transaction 
has been aborted. Detail @conflict details@
<
< --- SQL operation failed with errors.
---
> --- SQL operation complete.
1267c1297,1309
< --- 1 row(s) deleted.
---
> *** ERROR[8448] Unable to access Hbase interface. Call to 
> ExpHbaseInterface::deleteRow returned error HBASE_ACCESS_ERROR(-706). Cause: 
> java.io.IOException: getTmId: IdTm threw exception
> 

RE: Release manager for our next Trafodion release - any volunteers?

2017-05-09 Thread Sean Broeder
Thanks for volunteering, Ming.  I think you will make an excellent release 
manager.

Regards,
Sean

-Original Message-
From: Liu, Ming (Ming) [mailto:ming@esgyn.cn] 
Sent: Monday, May 8, 2017 10:40 PM
To: dev@trafodion.incubator.apache.org
Subject: RE: Release manager for our next Trafodion release - any volunteers?

Hi, Hans and all,

I would like to volunteer as the next Release Manager for Trafodion.

Thanks,
Ming
-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
Sent: Tuesday, May 09, 2017 8:26 AM
To: dev@trafodion.incubator.apache.org
Subject: Release manager for our next Trafodion release - any volunteers?

Hi,

Sandhya did a wonderful job getting Trafodion 2.1.0-incubating released, thank 
you for all your work!!

Just wanted to start a discussion on who might be willing to be the Release 
Manager for our next release, may that be 2.2.0 or a smaller patch.

If you are wondering what you would have to do as the Release Manager, here is 
a link to the description: http://www.apache.org/dev/release-publishing.html. A 
more specific task list, kept up-to-date by Sandhya, is on the Trafodion wiki: 
https://cwiki.apache.org/confluence/display/TRAFODION/Create+Release.

The Release Manager must be a Trafodion committer.

Thanks,

Hans



RE: Trafodion master rh6 Daily Test Result - 67 - Still Failing

2017-05-05 Thread Sean Broeder
Hi Sandhya,
There were repeated UnknownTransactionExceptions and connection issues with 
various tables in the TRAFODION.LOB130 schema.  I believe some of 
these should be addressed by the large Esgyn contribution I am trying to get 
in.  There are a variety of fixes in it to address duplicate registration and 
UnknownTransactionExceptions.

Regards,
Sean

2017-05-05 11:56:19,102 ERROR transactional.TransactionManager: doAbortX 
UnknownTransactionException for transaction 1394 participantNum 4 Location 
TRAFODION.LOB130.LOBDescHandle__02949986368004701567_0002,\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1493985287325.108d602de2770ef49ec7c86419e4da55.
org.apache.hadoop.hbase.client.transactional.UnknownTransactionException: 
java.io.IOException: UnknownTransactionException
at 
org.apache.hadoop.hbase.client.transactional.TransactionManager$TransactionManagerCallable.doAbortX(TransactionManager.java:973)
at 
org.apache.hadoop.hbase.client.transactional.TransactionManager$10.call(TransactionManager.java:2405)
at 
org.apache.hadoop.hbase.client.transactional.TransactionManager$10.call(TransactionManager.java:2403)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

-Original Message-
From: Sean Broeder [mailto:sean.broe...@esgyn.com] 
Sent: Friday, May 5, 2017 9:02 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: Trafodion master rh6 Daily Test Result - 67 - Still Failing

Sure, I can take a look.  Steve mentioned that the Trafodion timeout was quite a 
bit lower than the private timeout, so my comment was more aimed at making the 
timeouts equivalent.  I'll take a closer look in any case.

Thanks,
Sean

-Original Message-
From: Sandhya Sundaresan [mailto:sandhya.sundare...@esgyn.com] 
Sent: Friday, May 5, 2017 8:26 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: Trafodion master rh6 Daily Test Result - 67 - Still Failing


Hi Sean,
 This may indicate a real problem. A simple "create table" is hanging. I guess 
it needs to be debugged - maybe you can look at the TM logs around the time 
executor/TEST130 ran. 
http://traf-testlogs.esgyn.com/Daily-master/67/regress-executor-cdh-rh6/sql-regress-logs/rundir/executor/LOG130
Thanks
Sandhya

-Original Message-----
From: Sean Broeder [mailto:sean.broe...@esgyn.com] 
Sent: Friday, May 5, 2017 7:16 AM
To: dev@trafodion.incubator.apache.org; no-re...@trafodion.org
Subject: RE: Trafodion master rh6 Daily Test Result - 67 - Still Failing

This test timed out.  Can we increase the timeout?

2017-05-05 13:20:36 Build timed out (after 160 minutes). Marking the build as 
failed.
2017-05-05 13:20:36 Build was aborted

Thanks,
Sean

-Original Message-
From: steve.var...@esgyn.com [mailto:steve.var...@esgyn.com]
Sent: Friday, May 5, 2017 6:21 AM
To: dev@trafodion.incubator.apache.org
Subject: Trafodion master rh6 Daily Test Result - 67 - Still Failing

Daily Automated Testing master rh6

Jenkins Job:   https://jenkins.esgyn.com/job/Check-Daily-master-rh6/67/
Archived Logs: http://traf-testlogs.esgyn.com/Daily-master/67
Bld Downloads: http://traf-builds.esgyn.com

Changes since previous daily build:
No changes


Test Job Results:

FAILURE core-regress-executor-cdh (2 hr 40 min)
FAILURE core-regress-executor-hdp (2 hr 40 min)
SUCCESS build-rh6-master-debug (49 min)
SUCCESS build-rh6-master-release (38 min)
SUCCESS core-regress-charsets-cdh (34 min)
SUCCESS core-regress-charsets-hdp (58 min)
SUCCESS core-regress-compGeneral-cdh (1 hr 1 min)
SUCCESS core-regress-compGeneral-hdp (1 hr 17 min)
SUCCESS core-regress-core-cdh (1 hr 13 min)
SUCCESS core-regress-core-hdp (1 hr 15 min)
SUCCESS core-regress-fullstack2-cdh (9 min 30 sec)
SUCCESS core-regress-fullstack2-hdp (26 min)
SUCCESS core-regress-hive-cdh (50 min)
SUCCESS core-regress-hive-hdp (57 min)
SUCCESS core-regress-privs1-cdh (54 min)
SUCCESS core-regress-privs1-hdp (1 hr 3 min)
SUCCESS core-regress-privs2-cdh (1 hr 8 min)
SUCCESS core-regress-privs2-hdp (1 hr 18 min)
SUCCESS core-regress-qat-cdh (23 min)
SUCCESS core-regress-qat-hdp (23 min)
SUCCESS core-regress-seabase-cdh (1 hr 26 min)
SUCCESS core-regress-seabase-hdp (2 hr 10 min)
SUCCESS core-regress-udr-cdh (37 min)
SUCCESS core-regress-udr-hdp (57 min)
SUCCESS jdbc_test-cdh (40 min)
SUCCESS jdbc_test-hdp (39 min)
SUCCESS phoenix_part1_T2-cdh (1 hr 7 min)
SUCCESS phoenix_part1_T2-hdp (1 hr 39 min)
SUCCESS phoenix_part1_T4-cdh (1 hr 3 min)
SUCCESS phoenix_part1_T4-hdp (1 hr 14 min)
SUCCESS phoenix_part2_T2-cdh (1 hr 0 min)
SUCCESS phoenix_part2_T2-hdp (1 hr 19 min)
SUCCESS phoenix_part2_T4-cdh (1 hr 14 min)
SUCCESS phoenix_part2_T4-hdp (1 hr 35 min)
SUCCESS pyodbc_test-cd

RE: Trafodion master rh6 Daily Test Result - 67 - Still Failing

2017-05-05 Thread Sean Broeder
Sure I can take a look.  Steve mentioned that the Trafodion timeout was quite a 
bit lower than the private timeout, so my comment was more at making the 
timeout equivalent.  I'll take a closer look in any case.

Thanks,
Sean

-Original Message-
From: Sandhya Sundaresan [mailto:sandhya.sundare...@esgyn.com] 
Sent: Friday, May 5, 2017 8:26 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: Trafodion master rh6 Daily Test Result - 67 - Still Failing


Hi Sean,
 This may indicate a real problem. A simple "create table" is  hanging. I guess 
it needs to be debugged - maybe you can look at TM logs around the time 
executor/TEST130 ran. 
http://traf-testlogs.esgyn.com/Daily-master/67/regress-executor-cdh-rh6/sql-regress-logs/rundir/executor/LOG130
Thanks
Sandhya

-Original Message-----
From: Sean Broeder [mailto:sean.broe...@esgyn.com] 
Sent: Friday, May 5, 2017 7:16 AM
To: dev@trafodion.incubator.apache.org; no-re...@trafodion.org
Subject: RE: Trafodion master rh6 Daily Test Result - 67 - Still Failing

Tis test timed out.  Can we increase the timeout?

2017-05-05 13:20:36 Build timed out (after 160 minutes). Marking the build as 
failed.
2017-05-05 13:20:36 Build was aborted

Thanks,
Sean

-Original Message-
From: steve.var...@esgyn.com [mailto:steve.var...@esgyn.com]
Sent: Friday, May 5, 2017 6:21 AM
To: dev@trafodion.incubator.apache.org
Subject: Trafodion master rh6 Daily Test Result - 67 - Still Failing

Daily Automated Testing master rh6

Jenkins Job:   https://jenkins.esgyn.com/job/Check-Daily-master-rh6/67/
Archived Logs: http://traf-testlogs.esgyn.com/Daily-master/67
Bld Downloads: http://traf-builds.esgyn.com

Changes since previous daily build:
No changes


Test Job Results:

FAILURE core-regress-executor-cdh (2 hr 40 min)
FAILURE core-regress-executor-hdp (2 hr 40 min)
SUCCESS build-rh6-master-debug (49 min)
SUCCESS build-rh6-master-release (38 min)
SUCCESS core-regress-charsets-cdh (34 min)
SUCCESS core-regress-charsets-hdp (58 min)
SUCCESS core-regress-compGeneral-cdh (1 hr 1 min)
SUCCESS core-regress-compGeneral-hdp (1 hr 17 min)
SUCCESS core-regress-core-cdh (1 hr 13 min)
SUCCESS core-regress-core-hdp (1 hr 15 min)
SUCCESS core-regress-fullstack2-cdh (9 min 30 sec)
SUCCESS core-regress-fullstack2-hdp (26 min)
SUCCESS core-regress-hive-cdh (50 min)
SUCCESS core-regress-hive-hdp (57 min)
SUCCESS core-regress-privs1-cdh (54 min)
SUCCESS core-regress-privs1-hdp (1 hr 3 min)
SUCCESS core-regress-privs2-cdh (1 hr 8 min)
SUCCESS core-regress-privs2-hdp (1 hr 18 min)
SUCCESS core-regress-qat-cdh (23 min)
SUCCESS core-regress-qat-hdp (23 min)
SUCCESS core-regress-seabase-cdh (1 hr 26 min)
SUCCESS core-regress-seabase-hdp (2 hr 10 min)
SUCCESS core-regress-udr-cdh (37 min)
SUCCESS core-regress-udr-hdp (57 min)
SUCCESS jdbc_test-cdh (40 min)
SUCCESS jdbc_test-hdp (39 min)
SUCCESS phoenix_part1_T2-cdh (1 hr 7 min)
SUCCESS phoenix_part1_T2-hdp (1 hr 39 min)
SUCCESS phoenix_part1_T4-cdh (1 hr 3 min)
SUCCESS phoenix_part1_T4-hdp (1 hr 14 min)
SUCCESS phoenix_part2_T2-cdh (1 hr 0 min)
SUCCESS phoenix_part2_T2-hdp (1 hr 19 min)
SUCCESS phoenix_part2_T4-cdh (1 hr 14 min)
SUCCESS phoenix_part2_T4-hdp (1 hr 35 min)
SUCCESS pyodbc_test-cdh (17 min)
SUCCESS pyodbc_test-hdp (14 min)



RE: Trafodion master rh6 Daily Test Result - 67 - Still Failing

2017-05-05 Thread Sean Broeder
This test timed out.  Can we increase the timeout?

2017-05-05 13:20:36 Build timed out (after 160 minutes). Marking the build as 
failed.
2017-05-05 13:20:36 Build was aborted

Thanks,
Sean

-Original Message-
From: steve.var...@esgyn.com [mailto:steve.var...@esgyn.com] 
Sent: Friday, May 5, 2017 6:21 AM
To: dev@trafodion.incubator.apache.org
Subject: Trafodion master rh6 Daily Test Result - 67 - Still Failing

Daily Automated Testing master rh6

Jenkins Job:   https://jenkins.esgyn.com/job/Check-Daily-master-rh6/67/
Archived Logs: http://traf-testlogs.esgyn.com/Daily-master/67
Bld Downloads: http://traf-builds.esgyn.com

Changes since previous daily build:
No changes


Test Job Results:

FAILURE core-regress-executor-cdh (2 hr 40 min)
FAILURE core-regress-executor-hdp (2 hr 40 min)
SUCCESS build-rh6-master-debug (49 min)
SUCCESS build-rh6-master-release (38 min)
SUCCESS core-regress-charsets-cdh (34 min)
SUCCESS core-regress-charsets-hdp (58 min)
SUCCESS core-regress-compGeneral-cdh (1 hr 1 min)
SUCCESS core-regress-compGeneral-hdp (1 hr 17 min)
SUCCESS core-regress-core-cdh (1 hr 13 min)
SUCCESS core-regress-core-hdp (1 hr 15 min)
SUCCESS core-regress-fullstack2-cdh (9 min 30 sec)
SUCCESS core-regress-fullstack2-hdp (26 min)
SUCCESS core-regress-hive-cdh (50 min)
SUCCESS core-regress-hive-hdp (57 min)
SUCCESS core-regress-privs1-cdh (54 min)
SUCCESS core-regress-privs1-hdp (1 hr 3 min)
SUCCESS core-regress-privs2-cdh (1 hr 8 min)
SUCCESS core-regress-privs2-hdp (1 hr 18 min)
SUCCESS core-regress-qat-cdh (23 min)
SUCCESS core-regress-qat-hdp (23 min)
SUCCESS core-regress-seabase-cdh (1 hr 26 min)
SUCCESS core-regress-seabase-hdp (2 hr 10 min)
SUCCESS core-regress-udr-cdh (37 min)
SUCCESS core-regress-udr-hdp (57 min)
SUCCESS jdbc_test-cdh (40 min)
SUCCESS jdbc_test-hdp (39 min)
SUCCESS phoenix_part1_T2-cdh (1 hr 7 min)
SUCCESS phoenix_part1_T2-hdp (1 hr 39 min)
SUCCESS phoenix_part1_T4-cdh (1 hr 3 min)
SUCCESS phoenix_part1_T4-hdp (1 hr 14 min)
SUCCESS phoenix_part2_T2-cdh (1 hr 0 min)
SUCCESS phoenix_part2_T2-hdp (1 hr 19 min)
SUCCESS phoenix_part2_T4-cdh (1 hr 14 min)
SUCCESS phoenix_part2_T4-hdp (1 hr 35 min)
SUCCESS pyodbc_test-cdh (17 min)
SUCCESS pyodbc_test-hdp (14 min)



RE: Why is this test run a failure?

2017-05-03 Thread Sean Broeder
Hi Dave,
I'm working with Anoop on this.  I need to get the correct file in as the 
expected022 file, and separately there is an ordering issue.  I will get these 
fixed before the changes are merged.

Regards,
Sean

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Wednesday, May 3, 2017 1:13 PM
To: dev@trafodion.incubator.apache.org
Subject: Why is this test run a failure?

Hi,

I'm looking at the latest Jenkins test run for pull request 
https://github.com/apache/incubator-trafodion/pull/1075. (This is the rather 
large contribution from Esgyn concerning transaction manager changes.)

It looks like just one test failed, seabase/TEST022. The diff file is:

http://traf-testlogs.esgyn.com/PullReq/1075/1760/regress-seabase-hdp-rh6/sql-regress-logs/seabase/DIFF022:

638c638
< -- Definition of table TRAFODION.TRAFODION.T022HBM1
---
> -- Definition of table #CAT.#SCH.T022HBM1
723c723
< -- Definition of table TRAFODION.TRAFODION.T022HBM1_LIKE
---
> -- Definition of table #CAT.#SCH.T022HBM1_LIKE
773c773
< *** ERROR[4082] Object TRAFODION.TRAFODION.T022HBM1 does not exist or is 
inaccessible.
---
> *** ERROR[4082] Object #CAT.#SCH.T022HBM1 does not exist or is inaccessible.

This just looks like a filtering problem to me.

Is that all it is? I wonder why this doesn't fail for everyone?

Thanks,

Dave


RE: Loggers in Trafodion Java classes

2017-01-09 Thread Sean Broeder
Also, from the top of the source file you can see the file is part of package 
org.trafodion.sql;  So in the file Selva mentions you should put a line like 
log4j.logger.org.trafodion.sql=
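
For illustration, a minimal log4j fragment along these lines might look as 
follows - the DEBUG level here is an assumption, not something stated in this 
thread:

```properties
# Hypothetical example: DEBUG is an assumed level; pick whatever level
# you need (TRACE, DEBUG, INFO, ...).
log4j.logger.org.trafodion.sql=DEBUG
```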

Thanks,
Sean
-Original Message-
From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com] 
Sent: Monday, January 9, 2017 1:35 PM
To: dev@trafodion.incubator.apache.org
Subject: RE: Loggers in Trafodion Java classes

Logger configuration is at $TRAF_HOME/conf/log4j.hdfs.config
The log file is written to $TRAF_HOME/logs/trafodion.hdfs.log

It is the same on a workstation and on a cluster. On a cluster, you need to set 
it up on every node.

Selva

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com] 
Sent: Monday, January 9, 2017 1:23 PM
To: dev@trafodion.incubator.apache.org
Subject: Loggers in Trafodion Java classes

Hi,

I'm looking at HBaseClient.java, and I would like to use the logger there.

Can someone remind me how to turn on the logging?

Also, where will the log file go? (If the answer is different on a workstation 
than it is on a cluster, please answer for each.)

I'll add this info to the wiki Debugging Tips page.

Thanks,

Dave


RE: A few questions

2016-12-23 Thread Sean Broeder
Rinka,
As Rohit mentioned, HBase is not exactly pluggable into Trafodion.  More 
specifically, you seem to want to know whether the storage engine is 
interchangeable.  It could be done, and in fact HBase is not the first storage 
engine that our SQL and transaction engines have utilized, but the current 
incarnation makes heavy use of HBase coprocessors, and this would be rather 
cumbersome to remove or duplicate in a replacement engine.  Still, Trafodion is 
open source, so with additional contributors come additional possibilities, and 
we would welcome that.

Cheers,
Sean

-Original Message-
From: Rohit Jain [mailto:rohit.j...@esgyn.com] 
Sent: Friday, December 23, 2016 10:27 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: A few questions

Rinka,

Here is a response to your questions:

1.  Is HBase pluggable?
For a query engine like Trafodion to get the best out of a storage engine like 
HBase, very deep integration is required.  We can certainly provide a list of 
the deep integration we have done with HBase.  But suffice it to say that 
supporting another storage engine requires a few months of development and 
performance-tuning effort.  So while there is nothing preventing Trafodion from 
integrating with other storage engines, a fair amount of effort is required to 
do so well.  In that sense, another storage engine is not "pluggable".  Esgyn 
has a supported, enterprise database engine called EsgynDB, powered by 
Trafodion, which also integrates with Apache ORC to support BI and analytics 
workloads, for example.

 2.  Is Trafodion production ready?
Trafodion has been production ready for OLTP since R1.0 which was released very 
early 2015.  So yes, it has been production ready for two years now and is in 
production at a number of customer sites.

3.  OLTP workloads supported?
Actually, while HBase does not support in-place updates, it uses an MVCC model 
in which an update translates to a new version of the row being inserted.  
This model is followed by a number of database implementations, not just 
HBase.  You can choose the number of versions of a row you want to keep, 
thereby providing temporal and change-data-capture capabilities as well.  With 
compaction, old versions are deleted and the new versions are integrated into 
the main HFile.  With this MVCC model, Trafodion supports full OLTP 
capabilities with very impressive OLTP performance, proven by internal 
benchmarks we run at Esgyn based on TPC-C.
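
As an illustrative sketch of the version-retention knob mentioned above (the 
table and column-family names here are invented, and these are standard hbase 
shell commands, not Trafodion-specific ones):

```ruby
# hbase shell: choose how many MVCC row versions a column family retains
create 't1', {NAME => 'f1', VERSIONS => 3}
alter 't1', {NAME => 'f1', VERSIONS => 5}   # keep more history, e.g. for change capture
```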

Please feel free to contact us for more detailed information, presentations, 
and architectural capabilities between Trafodion and HBase.

Rohit

-Original Message-
From: Rinka Singh [mailto:rinka.si...@gmail.com] 
Sent: Friday, December 23, 2016 2:34 PM
To: dev@trafodion.incubator.apache.org
Subject: A few questions

Hi,
we are an open source startup and are considering using Trafodion for 
various deployments as well as contributing to it.  We have a customer 
where we would like to propose Trafodion, as we see a pretty good fit.

We had the following questions for you:
* Is HBase pluggable?  How easy would it be to plug in another storage 
engine instead of HBase?  What is involved?
* Is Trafodion production ready?  We have a mid-sized customer who 
would like to use it for OLTP transactions.  Can we put it into their 
production environment?
* What kind of OLTP workloads is HBase suitable for if there are no 
in-place updates in HDFS?

Looking forward to a quick reply,

Thanks in advance,

Rinka.

-- 
Rinka Singh
CoFounder, Melt Iron Accelerating the Enterprise




RE: HBase Meetup on 8th @ Splice...

2016-11-05 Thread Sean Broeder
Hi Stack,
It's a Thursday, so some members might not have the availability to get up to 
SF, but I'm fairly certain we would like to send a Trafodion representative as 
well.  We'll probably have a better idea of who could go next week.

Thanks,
Sean

-Original Message-
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of Stack
Sent: Saturday, November 5, 2016 10:25 AM
To: dev@trafodion.incubator.apache.org
Subject: HBase Meetup on 8th @ Splice...

We're thinking of doing a meetup on the 8th of December up in SF @ Splice.
The Yahoo! Israel folks will be in town. In particular, the Omid folks will do 
an update presentation on Omid2 (there will also be a talk on in-memory 
compaction from the Y! folks too). Any chance of Trafodion sending along a 
transactions rep? It would be good if we could get together for a chat on TMs 
over HBase. I wrote the Tephra crew. You lot should be in on it. If you want to 
present, just say so. We could squeeze you in under a TMs umbrella.

Thanks,
S


RE: [DISCUSS] Loading coprocessors from the client side dynamically

2016-10-25 Thread Sean Broeder
Hi Selva,
Adding the coprocessor as an attribute of the table is a good plan.  That would 
prevent the case where a coprocessor is not loaded at all for a table, which is 
what I was concerned about.  Thanks for clarifying this issue for me!

I am still not clear how software migration would occur.  Would disabling and 
enabling the tables work in your proposal?  If not, how can we guarantee a 
clean exit of one version of coprocessor code before starting up with another?

Thanks,
Sean

-Original Message-
From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com] 
Sent: Tuesday, October 25, 2016 9:11 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: [DISCUSS] Loading coprocessors from the client side dynamically

Sean,

My proposal and the implementation of PR 777 is to make Trafodion behave 
consistently irrespective of the distro and the type of installation. Trafodion 
will now always add table coprocessors so that the region server doesn't call 
into Trafodion hooks for non-Trafodion tables. In addition, the proposal is to 
have a resource to customize Trafodion instead of using the standard 
hbase-site.xml.

Though PR 777 doesn't remove 'hbase.coprocessor.region.classes' from the 
hbase-site.xml in the cluster installer, that will be done sometime later. 
This shouldn't prevent any recovery process. The first attempt is to try this 
out on a workstation before implementing it in the cluster install.

For existing Trafodion installations, all the Trafodion tables should be 
altered to add this coprocessor via the hbase shell before the above property 
is removed from hbase-site.xml.

Do you anticipate any issues with this?

By adding table coprocessor, Trafodion needs to stick with these coprocessor 
classes. If there is a need to add more coprocessor classes, it should be added 
to all the existing tables manually or it should be designed such that 
Trafodion can tolerate the absence of the newly added coprocessors.
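
A hedged sketch of what adding the coprocessor to an existing table via the 
hbase shell might look like - the attribute string follows the form visible in 
a describe of a Trafodion table, but treat the exact command as illustrative, 
not as the installer's actual procedure:

```ruby
# hbase shell: attach a coprocessor as a table attribute (illustrative)
disable 'TRAFODION._MD_.OBJECTS'
alter 'TRAFODION._MD_.OBJECTS',
  'coprocessor' => '|org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionObserver|1073741823|'
enable 'TRAFODION._MD_.OBJECTS'
```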

If there is a need to avoid restarting hbase, it might require extensive 
testing and at least the following needs to be done:
1) Trafodion shouldn't require the modifications of ACL of some of the hdfs 
directories for the fresh Trafodion cluster installation.
2) Assuming region server supports class reloading concept for the table 
coprocessors, Trafodion needs to adapt to it for its upgrade.

Selva

-Original Message-
From: Sean Broeder [mailto:sean.broe...@esgyn.com] 
Sent: Tuesday, October 25, 2016 3:00 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: [DISCUSS] Loading coprocessors from the client side dynamically

Selva,
I wasn't thinking of a particular pull request when I sent this out for 
discussion.  I was just thinking in general because there was the hope of being 
able to install Trafodion without restarting HBase.

Regarding the specific pull request you mentioned, I see no issue because it 
only affects install_local_hadoop and that explicitly does an initialize 
Trafodion, so there is no recovery to worry about.  This should be fine.

Also, I have no issue with configuring a coprocessor as a table attribute.  
This certainly loads the coprocessors.

It's unclear to me whether there is still an intent to avoid an HBase restart.  
If not, then this discussion is moot.  But if there is, can you tell me how the 
software migration from version X to version Y would work?  If you change the 
class path to point away from version X and toward version Y, then disable and 
re-enable each table in order to close and reopen all the regions, this might 
work.  I think you'd want to lock the database to prevent transactions during 
this time.
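
The migration sequence described above could be sketched in the hbase shell as 
follows; the table name is just an example, and whether the region servers 
actually reload the new coprocessor classes this way is exactly the open 
question of this thread:

```ruby
# 1. Repoint the coprocessor classpath from version X to version Y
#    (done outside the hbase shell).
# 2. Bounce each Trafodion table so its regions close under X and reopen under Y:
disable 'TRAFODION.SCH.EXAMPLE_TABLE'
enable 'TRAFODION.SCH.EXAMPLE_TABLE'
# 3. Unlock the database and allow transactions again.
```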

Am I understanding your proposal correctly?

Thanks,
Sean

-Original Message-
From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
Sent: Monday, October 24, 2016 10:37 PM
To: dev@trafodion.incubator.apache.org
Subject: RE: [DISCUSS] Loading coprocessors from the client side dynamically

The coprocessors are neither pushed nor loaded dynamically from the SQL client 
side. They are simply added as the table attribute and hence configured as 
table coprocessor at the time of creation via Trafodion. So, if you do describe 
in hbase shell, you will see something like below

describe 'TRAFODION._MD_.OBJECTS'
Table TRAFODION._MD_.OBJECTS is ENABLED
TRAFODION._MD_.OBJECTS, {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionObserver|1073741823|', coprocessor$2 => '|org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint|1073741823|', coprocessor$3 => '|org.apache.hadoop.hbase.coprocessor.AggregateImplementation|1073741823|'}
COLUMN FAMILIES DESCRIPTION
{NAME => '#1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIO

[DISCUSS] Loading coprocessors from the client side dynamically

2016-10-24 Thread Sean Broeder
Hi,
I have seen some discussion related to pushing/loading coprocessors in the 
regions dynamically from the SQL client side when Trafodion is started or when 
a table is opened.  The motivation here is to be able to install Trafodion 
without having to stop/restart HBase where a customer might have a previous 
HBase installation.

I want to point out that Trafodion uses a TrxRegionEndpoint and a 
TrxRegionObserver coprocessor.  These two coprocessors work in tandem to ensure 
region split, region rebalance, and region recovery are possible for 
multi-table transactions.  These two coprocessors ensure data consistency and 
form the basis of our ACID transaction implementation.  We currently mandate 
that these two coprocessors have entries in the hbase-site.xml file.

While I have no problem loading other coprocessors dynamically, I do not think 
it is a good idea to load these two on the fly without stopping/restarting 
HBase, at least not in the current design.

The issue is that other components in the DTM may have code changes that assume 
the corresponding versions of these coprocessors are loaded and running.  If 
the DTM components are not compatible with the coprocessors, unpredictable 
results could occur, including system hangs, process failures, and data 
corruption.  Furthermore, if the coprocessors are loaded dynamically and the 
entries are not in the hbase-site.xml file, then Trafodion recovery would 
effectively be abandoned before the regions are started and there would be no 
way to correct the inconsistencies short of dropping and recreating the tables.

So the 2 main points I hope the readers of this discussion will take away based 
on the current Trafodion design are:
1) The TrxRegionEndpoint and TrxRegionObserver coprocessors must remain in the 
hbase-site.xml file to ensure recovery occurs before the regions are started 
and open for HBase activity.
2) Trafodion and the above coprocessors must run compatible versions of 
software to ensure proper function.  These two coprocessors are not static, so 
it is likely that if Trafodion code has changed, so have the coprocessors.
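
For reference, the mandated hbase-site.xml entries might look roughly like 
this - the property name hbase.coprocessor.region.classes is the one discussed 
elsewhere in this thread, but the exact value list shown here is an assumption:

```xml
<!-- Assumed sketch: register the transactional coprocessors globally -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionObserver,org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint</value>
</property>
```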

Thanks,
Sean


RE: Trafodion master Daily Test Result - 332 - Still Failing

2016-09-15 Thread Sean Broeder
In this case we are trying to drop a table and we want to disable all the
regions.  Before doing so, each region makes sure it allows any pending
transactions to complete before closing.  One region finds an active
transactional scanner and continues to wait for the scanner to clear.



2016-09-15 10:26:55,574 INFO
[PriorityRpcServer.handler=11,queue=1,port=16020]
transactional.SplitBalanceHelper: scannersListClear Active Scanner found,
ScannerId: 0 Txid: 1054 Region:
TRAFODION.SCH.T106A,\x00\x00\x00\x02\x00\x00\x00\x00,1473934644524.39d9ea92d6f2109a682f66f38a795b79.

2016-09-15 10:26:55,726 INFO
[PriorityRpcServer.handler=8,queue=0,port=16020]
transactional.SplitBalanceHelper: scannersListClear Active Scanner found,
ScannerId: 0 Txid: 1054 Region:
TRAFODION.SCH.T106A,\x00\x00\x00\x02\x00\x00\x00\x00,1473934644524.39d9ea92d6f2109a682f66f38a795b79.

2016-09-15 10:26:56,075 INFO
[PriorityRpcServer.handler=11,queue=1,port=16020]
transactional.SplitBalanceHelper: scannersListClear Active Scanner found,
ScannerId: 0 Txid: 1054 Region:
TRAFODION.SCH.T106A,\x00\x00\x00\x02\x00\x00\x00\x00,1473934644524.39d9ea92d6f2109a682f66f38a795b79.

2016-09-15 10:26:56,226 INFO
[PriorityRpcServer.handler=8,queue=0,port=16020]
transactional.SplitBalanceHelper: scannersListClear Active Scanner found,
ScannerId: 0 Txid: 1054 Region:
TRAFODION.SCH.T106A,\x00\x00\x00\x02\x00\x00\x00\x00,1473934644524.39d9ea92d6f2109a682f66f38a795b79.



Since it doesn’t clear it essentially locks up the drop forever.  I know
there was a bug for a while where a transaction’s own scanner would block a
drop, but a change was made to allow the close to go through if the scanner
was for our own transaction.  The logging doesn’t show the transaction of
the drop (that I see).



I think Prashanth should have a look at this.



Regards,

Sean



*From:* Sean Broeder [mailto:sean.broe...@esgyn.com]
*Sent:* Thursday, September 15, 2016 9:47 AM
*To:* 'Narain Arvind' <narain.arv...@gmail.com>; '
dev@trafodion.incubator.apache.org' <dev@trafodion.incubator.apache.org>
*Subject:* RE: Trafodion master Daily Test Result - 332 - Still Failing



Sure.  I’ll have a look



*From:* Narain Arvind [mailto:narain.arv...@gmail.com
<narain.arv...@gmail.com>]
*Sent:* Thursday, September 15, 2016 9:22 AM
*To:* dev@trafodion.incubator.apache.org
*Cc:* Sean Broeder <sean.broe...@esgyn.com>
*Subject:* RE: Trafodion master Daily Test Result - 332 - Still Failing



Timeout in core-regress-executor-hdp is during drop of table T106A.

https://jenkins.esgyn.com/job/core-regress-executor-hdp/369/console

Messages related to transactional.SplitBalancerHelper repeated in Region
Server logs.

Sean could you please take a look ? Thanks.

http://traf-testlogs.esgyn.com/Daily-master/332/regress-executor-hdp/hbase-logs/hbase-hbase-regionserver-slave-ahw23.log

2016-09-15 10:25:01,806 INFO  [RS_CLOSE_REGION-slave-ahw23:16020-2]
regionserver.HRegion: Closed
TRAFODION.SCH.T106A,\x00\x00\x00\x01\x00\x00\x00\x00,1473934644524.f32b6c8d8f3932729b9824f51a95a63e.

2016-09-15 10:25:02,059 INFO
[PriorityRpcServer.handler=8,queue=0,port=16020]
transactional.SplitBalanceHelper: scannersListClear Active Scanner found,
ScannerId: 0 Txid: 1054 Region:
TRAFODION.SCH.T106A,\x00\x00\x00\x02\x00\x00\x00\x00,1473934644524.39d9ea92d6f2109a682f66f38a795b79.

…

2016-09-15 10:31:37,204 INFO  [regionserver/
slave-ahw23.trafodion.org/172.31.3.234:16020.leaseChecker]
regionserver.RSRpcServices: Scanner 12971 lease expired on region
TRAFODION.SCH.T106A,\x00\x00\x00\x01\x00\x00\x00\x00,1473934644524.f32b6c8d8f3932729b9824f51a95a63e.

2016-09-15 10:31:37,205 ERROR [regionserver/
slave-ahw23.trafodion.org/172.31.3.234:16020.leaseChecker]
regionserver.RSRpcServices: Closing scanner for
TRAFODION.SCH.T106A,\x00\x00\x00\x01\x00\x00\x00\x00,1473934644524.f32b6c8d8f3932729b9824f51a95a63e.

org.apache.hadoop.hbase.NotServingRegionException: Region
TRAFODION.SCH.T106A,\x00\x00\x00\x01\x00\x00\x00\x00,1473934644524.f32b6c8d8f3932729b9824f51a95a63e.
is not online on slave-ahw23.trafodion.org,16020,1473931541282

at
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2898)

at
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2875)

at
org.apache.hadoop.hbase.regionserver.RSRpcServices$ScannerListener.leaseExpired(RSRpcServices.java:285)

at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:121)

at java.lang.Thread.run(Thread.java:745)

…

2016-09-15 11:29:49,857 INFO
[PriorityRpcServer.handler=1,queue=1,port=16020]
transactional.SplitBalanceHelper: scannersListClear Active Scanner found,
ScannerId: 0 Txid: 1054 Region:
TRAFODION.SCH.T106A,\x00\x00\x00\x02\x00\x00\x00\x00,1473934644524.39d9ea92d6f2109a682f66f38a795b79.

2016-09-15 11:29:49,864 INFO
[PriorityRpcServer.handler=19,queue=1,port=16020]
transact

RE: trafodion get stuck and hbase log shows "followed by a smaller key"

2016-09-13 Thread Sean Broeder
Resending with a smaller email as the previous one was rejected by the
apache mail daemon for being too large.



Sorry for the inconvenience,

Sean



*From:* Sean Broeder [mailto:sean.broe...@esgyn.com]
*Sent:* Tuesday, September 13, 2016 6:26 AM
*To:* 'u...@trafodion.incubator.apache.org' <
u...@trafodion.incubator.apache.org>; 'dev@trafodion.incubator.apache.org' <
dev@trafodion.incubator.apache.org>
*Subject:* RE: trafodion get stuck and hbase log shows "followed by a
smaller key"



Hi Qiao,

I am very sorry you ran into this issue.  I’ve never seen this before, but
I don’t typically use hive to load my tables.  I think you should open a
JIRA for this. Do you know if the problem is repeatable?  If so, please put
any details to reproduce it in the JIRA.  Also, it would be helpful if you
could set log4j.logger.org.apache.hadoop.hbase.coprocessor.transactional=trace
in hbase/conf/log4j.properties.



Thanks,

Sean



*From:* 乔彦克 [mailto:qya...@gmail.com <qya...@gmail.com>]
*Sent:* Tuesday, September 13, 2016 1:37 AM
*To:* dev@trafodion.incubator.apache.org;
u...@trafodion.incubator.apache.org
*Subject:* trafodion get stuck and hbase log shows "followed by a smaller
key"





Hi, all,

I used hive to load some data to do some tests. This morning, after I
loaded some data, I ran a sum query which led to an HBase regionserver
crash. Then I restarted HBase hoping to do some load balancing, but just got
these errors in the HBase regionserver log.

The worse part is that I have no choice but to execute "initialize
trafodion, drop" because trafci can only execute this command.

Some other Trafodion table regions have the same problem.

Have you ever encountered this problem?

Any reply is appreciated.

Thanks,

Qiao


RE: discuss namespace in java code

2016-08-24 Thread Sean Broeder
My opinion is that the names themselves are not all that significant, unless
Apache has a naming requirement.  What seems more significant, to me anyway,
is that the names are different from other components, so that tracing has
some granularity when deciding which components should have tracing turned
on.

Dave's point about being able to use the DTM on other platforms might have
been very relevant a few years ago, prior to adopting the coprocessor model.
But now that the coprocessor model has been adopted it would be more
difficult to move to other platforms.  Still, the idea that we might revert
away from the coprocessor model is not that far-fetched for other reasons,
so perhaps we should at least consider it moving forward.

Regards,
Sean

-Original Message-
From: Narendra Goyal [mailto:narendra.go...@esgyn.com]
Sent: Wednesday, August 24, 2016 8:39 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: discuss namespace in java code

I think the package prefix 'org.trafodion' was picked before we moved to
being in Apache incubation.

I don’t know whether being an Apache project requires the package name be
altered - one might 'want' to do it but perhaps not required.

My vote would be to keep the package name org.trafodion for now. Perhaps
think about changing it as per Aven's suggestion once we are a TLP :)

Thanks,
-Narendra

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com]
Sent: Wednesday, August 24, 2016 8:24 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: discuss namespace in java code

Hi,

The DTM developers might want to speak up.

My understanding was that there was a desire to design the DTM in such a way
that it could be a stand-alone project. That is, it should be possible to
use the DTM with other SQL engines or frameworks. If we still want to keep
that possibility open, then separate name spaces are appropriate.

Dave

-Original Message-
From: Ma, Sheng-Chen (Aven) [mailto:shengchen...@esgyn.cn]
Sent: Wednesday, August 24, 2016 2:46 AM
To: dev@trafodion.incubator.apache.org
Subject: discuss namespace in java code

Hi all:
I noticed that our Trafodion namespace is org.trafodion.sql or
org.trafodion.dtm.
Since Trafodion is now an Apache project, in my view we should use
org.apache.trafodion.xxx, but this is a huge change, so I am sending this
email to discuss it. If it's OK to change, I will do the job.

Thanks
Aven


RE: [ANNOUNCE] New Trafodion committer and PPMC member: Liu Ming

2016-08-19 Thread Sean Broeder
Congratulations Ming!  Well deserved,

-Original Message-
From: Suresh Subbiah [mailto:suresh.subbia...@gmail.com]
Sent: Friday, August 19, 2016 6:54 AM
To: dev@trafodion.incubator.apache.org
Subject: [ANNOUNCE] New Trafodion committer and PPMC member: Liu Ming

The Podling Project Management Committee (PPMC) for Apache Trafodion has
asked Liu Ming, to be a committer, and to join the PPMC. We are pleased to
announce that he has accepted.

Ming has contributed several features and numerous fixes in different
components including DTM, compiler and executor. The approach taken has
always been pragmatic, consultative and as simple as it can be. He has
helped the user community with his steady responses to questions and the dev
community with code reviews and JIRAs filed.  With all this and more he will
be able to bring his unique experience to the Trafodion PPMC.

Please join us in congratulating Ming,

The Trafodion PPMC


RE: Admin permissions

2016-07-25 Thread Sean Broeder
Thanks Dave!

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com]
Sent: Monday, July 25, 2016 1:11 PM
To: dev@trafodion.incubator.apache.org
Subject: RE: Admin permissions

Done! Happy JIRA-ing.

Dave

-Original Message-
From: Sean Broeder [mailto:sean.broe...@esgyn.com]
Sent: Monday, July 25, 2016 11:58 AM
To: dev@trafodion.incubator.apache.org
Subject: Admin permissions

Hi,
I have some cleanup of some JIRAs assigned to me, but I'm not able to close
them.  Can someone grant me admin permissions?

Thanks,
Sean


Admin permissions

2016-07-25 Thread Sean Broeder
Hi,
I have some cleanup of some JIRAs assigned to me, but I'm not able to close
them.  Can someone grant me admin permissions?

Thanks,
Sean


RE: Proposal to add hive regression tests to check-PR tests

2016-07-18 Thread Sean Broeder
I'd prefer not to leave it up to authors to select which tests are
appropriate.  Sometimes we get it right and others we are horribly wrong.

Thanks,
Sean

-Original Message-
From: Qifan Chen [mailto:qifan.c...@esgyn.com]
Sent: Monday, July 18, 2016 9:20 AM
To: dev 
Subject: Re: Proposal to add hive regression tests to check-PR tests

I agree with Sandhya and wonder if we can enhance check-PR tests (hive for
example, in question) with the following twist.

   1. Randomly select several (say 3) tests from regress/hive. The rationale
   is that we only need to sanity-check the changes, and a full daily build
   with tests will follow the merge.
   2. Before the check-in, we always run the full regression test, and I do
   not see the value of running the full hive suite again in check-PR.
   3. In the future, we could find the most appropriate tests for check-PR
   (instead of selecting randomly, or selecting the full set).  The author can
   point out the nature of the change and the check-in tool does the
   selection.  For example, a change in DoP for HBase tables would select some
   tests from regress/seabase, but not from regress/hive.

Thanks

--Qifan

On Mon, Jul 18, 2016 at 10:46 AM, Sandhya Sundaresan <
sandhya.sundare...@esgyn.com> wrote:

> +0 for me.
> I am not sure of the need to add the whole test suite to check tests.
> The hive regressions do run nightly anyway, so failures should be clear
> on each nightly run on a daily basis.
> My concern is that long-running tests like hive/TEST018 are more to test
> features like bulkload/unload, and since we already have the option to
> run "extra tests" in Jenkins, I'm not sure bringing entire test
> suites into check tests is the right approach or trend going forward,
> adding time and resources to what is supposed to be a sanity test for
> every single PR.
>
> Sandhya
>
> -Original Message-
> From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
> Sent: Monday, July 18, 2016 7:22 AM
> To: dev@trafodion.incubator.apache.org
> Subject: RE: Proposal to add hive regression tests to check-PR tests
>
> Hive regressions take a little less than an hour. As I said before, the
> time is not a factor because the regressions are run in parallel in
> different VMs. The seabase regressions, which run as part of check-PR,
> take around 1 hour and 40 mins. Hence hive regressions shouldn't add
> more time for check-PR to complete, but of course it would need another
> VM.
>
> Selva
>
> -Original Message-
> From: Jin, Jian (Seth) [mailto:jian@esgyn.cn]
> Sent: Sunday, July 17, 2016 7:31 PM
> To: dev@trafodion.incubator.apache.org
> Subject: RE: Proposal to add hive regression tests to check-PR tests
>
> How long will it take for Hive regression?
>
> Br,
>
> Seth
>
> -Original Message-
> From: Liu, Ming (Ming) [mailto:ming@esgyn.cn]
> Sent: 2016年7月16日 9:16
> To: dev@trafodion.incubator.apache.org
> Subject: RE: Proposal to add hive regression tests to check-PR tests
>
> +1 to this
>
> -Original Message-
> From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
> Sent: Saturday, July 16, 2016 9:08 AM
> To: dev@trafodion.incubator.apache.org
> Subject: Proposal to add hive regression tests to check-PR tests
>
> If you have subscribed to Trafodion Daily Build, you would have
> noticed that the daily build has been failing for some days. Most
> often, this is due to failures in the hive regression tests run as part
> of the daily build.
> Lately, there has been some successful, conscious effort to ensure
> that the hive regression tests can be run reliably. To keep the
> Trafodion daily build in that state, I am proposing to add the hive
> regressions to the check-PR tests. It shouldn't add to the overall time
> taken by the regression tests because tests are run in parallel on
> different VMs, though it would consume more resources.
>
>
>
> -  Selva
>



--
Regards, --Qifan


RE: [VOTE] Apache Trafodion release 2.0.0 ready for release - release candidate 1

2016-05-16 Thread Sean Broeder
I thought someone recently reported a problem with the java version
checking needing to go out 3 characters beyond the underscore.  Could the
problem be the '-' character beyond the 91?
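A hedged sketch of version parsing that tolerates such a suffix; the helper below is illustrative only, not the installer's actual code:

```shell
# Parse the update number out of a java version string such as
# "1.8.0_91-b14": keep everything after the '_', then drop anything
# from the first '-' on, so a build suffix cannot break the comparison.
java_update() {
  echo "$1" | sed -e 's/.*_//' -e 's/-.*//'
}

java_update '1.8.0_91-b14'   # -> 91
java_update '1.7.0_67'       # -> 67
```

A check like "greater than 1.7.0_65" could then compare the extracted number with `-gt` instead of comparing raw strings.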

-Original Message-
From: Gunnar Tapper [mailto:tapper.gun...@gmail.com]
Sent: Monday, May 16, 2016 1:29 PM
To: dev@trafodion.incubator.apache.org
Subject: Re: [VOTE] Apache Trafodion release 2.0.0 ready for release -
release candidate 1

pdsh -w trafodion-[1-3].openstack "/usr/bin/java -version" | sort
trafodion-1: openjdk version "1.8.0_91"
trafodion-1: OpenJDK Runtime Environment (build 1.8.0_91-b14)
trafodion-1: OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
trafodion-3: openjdk version "1.8.0_91"
trafodion-3: OpenJDK Runtime Environment (build 1.8.0_91-b14)
trafodion-3: OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
trafodion-2: openjdk version "1.8.0_91"
trafodion-2: OpenJDK Runtime Environment (build 1.8.0_91-b14)
trafodion-2: OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)

This is why the configuration file specifies the version Ambari installed:

cat my_config | grep JAVA_HOME

export JAVA_HOME="/usr/jdk64/jdk1.7.0_67"

On Mon, May 16, 2016 at 2:00 PM, Amanda Moran 
wrote:

> Can you do...
>
> pdsh -w trafodion-[1-3]  "$JAVA_HOME/bin/java -version"
>
> and send the output?
>
> Thanks.
>
>
> On Mon, May 16, 2016 at 12:57 PM, Gunnar Tapper
> 
> wrote:
>
> > Hi Amanda:
> >
> > Yes, they do:
> >
> > pdsh -w trafodion-[1-3] "ls /usr/jdk64"
> > trafodion-1: jdk1.7.0_67
> > trafodion-3: jdk1.7.0_67
> > trafodion-2: jdk1.7.0_67
> >
> > Thanks,
> >
> > Gunnar
> >
> > On Mon, May 16, 2016 at 1:53 PM, Amanda Moran
> > 
> > wrote:
> >
> > > Hi there Gunnar-
> > >
> > > Does every node of your cluster have that version of java
> > > installed in
> > the
> > > same directory?
> > >
> > > Thanks.
> > >
> > > On Mon, May 16, 2016 at 11:51 AM, Gunnar Tapper <
> tapper.gun...@gmail.com
> > >
> > > wrote:
> > >
> > > > I tried to install this on Hortonworks HDP-2.3.4.7-4 and ran
> > > > into the following issue:
> > > >
> > > > JAVA HOME
> > > >
> > > > ***ERROR: Your existing JAVA_HOME on trafodion-1.novalocal is
> > > > less
> than
> > > > 1.7.0_65
> > > >
> > > > ***ERROR: Your Java Version on trafodion-1.novalocal =
> > > >
> > > > ***ERROR: Required java version on trafodion-1.novalocal should
> > > > be
> > > greater
> > > > than 1.7.0_65
> > > > [centos@trafodion-1 installer]$ /usr/jdk64/jdk1.7.0_67/bin/java
> > -version
> > > >
> > > > java version "1.7.0_67"
> > > >
> > > > Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
> > > >
> > > > Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
> > > >
> > > > [centos@trafodion-1 installer]$ cat my_config | grep JAVA_HOME
> > > >
> > > > export JAVA_HOME="/usr/jdk64/jdk1.7.0_67"
> > > >
> > > >
> > > > Am I missing something?
> > > >
> > > > Thanks,
> > > >
> > > >
> > > > Gunnar
> > > >
> > > > On Mon, May 9, 2016 at 1:01 PM, Gunnar Tapper <
> tapper.gun...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Thanks Amanda. I'm still -1 since the standard install should
> > > > > work
> if
> > > we
> > > > > are to release 2.0.
> > > > >
> > > > > On Mon, May 9, 2016 at 12:44 PM, Amanda Moran <
> > amanda.mo...@esgyn.com>
> > > > > wrote:
> > > > >
> > > > >> Gunnar, you could actually continue with an easy work around.
> > > > >>
> > > > >> tar -xzf apache-trafodion-2.0.0-incubating-src.tar.gz
> > > > >>
> > > > >> Enter full path (including .tar or .tar.gz) of trafodion tar
> > > > >> file
> > > > >> [/home/centos/Downloads/apache-trafodion_server-2.0.0-
> > > > >> incubating-bin.tar.gz]:
> > > > >> ***INFO: tar file is not a package tar file which includes
> > Trafodion &
> > > > DCS
> > > > >> Note: This sounds scary, but it is just trying to tell you that
> > > > >> you will need to enter the other items (like the location of the
> > > > >> DCS tar) by hand.
> > > > >> ***INFO: assuming it is a Trafodion build only tar file
> > > > >>
> > > > >> It will then ask you the location of the DCS tar file and the
> > > > >> REST tar file, which will now be where you did the "tar -xzf".
> > > > >> From there it should work :)
> > > > >>
> > > > >> Like Venkat said, we are missing the "build-version.txt" file
> > > > >> that the installer uses, but we are not missing "DCS".
> > > > >>
> > > > >> Thanks!
> > > > >>
> > > > >>
> > > > >>
> > > > >> On Mon, May 9, 2016 at 11:33 AM, Venkat Muthuswamy <
> > > > >> venkat.muthusw...@esgyn.com> wrote:
> > > > >>
> > > > >> > DCS is included in the package. It looks like a
> > > > >> > "build-version.txt" file is missing from the bin tar.
> > > > >> > The traf_apache_hadoop_config_setup and traf_config_setup
> > > > >> > scripts of the installer check for this build-version.txt and
> > > > >> > if found missing,
> > > > 

RE: debugging tricks?

2016-04-13 Thread Sean Broeder
There are also the stdout_* files in sql/scripts

-Original Message-
From: Eric Owhadi [mailto:eric.owh...@esgyn.com]
Sent: Wednesday, April 13, 2016 10:12 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: debugging tricks?

Ah OK, thanks I will look,
Eric

-Original Message-
From: Sean Broeder [mailto:sean.broe...@esgyn.com]
Sent: Wednesday, April 13, 2016 12:11 PM
To: dev@trafodion.incubator.apache.org
Subject: RE: debugging tricks?

Hi Eric,
Have you looked at the mon*.log files?  The monitor may redirect stdout for
various processes.

Regards,
Sean

-Original Message-
From: Eric Owhadi [mailto:eric.owh...@esgyn.com]
Sent: Wednesday, April 13, 2016 9:47 AM
To: dev@trafodion.incubator.apache.org
Subject: debugging tricks?

Hi Trafodionners,

I see in the code some debugging statement logging stuff in stderr.

Like

fprintf(stderr,
        "  Attr(%d): dataType: %d nullable: %d variable: %d "
        "offset: %d voaOff: %d align: %d\n",
        k, attr->getDatatype(), attr->getNullFlag(),
        (attr->getVCIndicatorLength() > 0 ? 1 : 0), attr->getOffset(),
        attr->getVoaOffset(), attr->getDataAlignmentSize());



However, I tried sqlci 2> err.out, with no success.



I tried to run it with the debugger in Eclipse (internally using gdb), and
usually when I debug Java programs this way, stderr is redirected to the
Console view in Eclipse. But not with this.



So the question is: how do I redirect or view stderr when debugging
Trafodion C++ code?

Eric
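Since a parent process (such as the monitor, per the replies above) may have redirected a child's output, one way to see where a process's stderr actually goes on Linux is to inspect file descriptor 2 under /proc; a minimal sketch (in practice the pid would be the Trafodion process being debugged):

```shell
# Show where file descriptor 2 (stderr) of a given pid points on Linux.
# Useful when a parent process has silently redirected a child's output.
stderr_target() {
  readlink "/proc/$1/fd/2"
}

# Example: inspect the current shell's own stderr target
stderr_target $$
```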


RE: Instruction about how to become Trafodion contributor

2016-04-13 Thread Sean Broeder
Can someone please grant write access to confluence ID sbroeder so I can
update the wiki?



The first thing I will update is the link that says to request write access
by posting an email to d...@incubator.trafodion.org rather than
dev@trafodion.incubator.apache.org. :)



I copied the user list here in case someone else has attempted to gain
access and received a failed email response from d...@incubator.trafodion.org
.



Regards,

Sean



*From:* Gunnar Tapper [mailto:tapper.gun...@gmail.com]
*Sent:* Wednesday, April 13, 2016 6:11 AM
*To:* u...@trafodion.incubator.apache.org
*Subject:* Re: Instruction about how to become Trafodion contributor



The link is provided directly by clicking on "Contribute" on the website.



On Wed, Apr 13, 2016 at 3:38 AM, Pierre Smits 
wrote:

Hi Jian,

First of all: welcome!



https://cwiki.apache.org/confluence/display/TRAFODION/Trafodion+Contributor+Guide
should provide a good starting point.



Best regards,


Pierre Smits


*ORRTIZ.COM *

OFBiz based solutions & services


*OFBiz Extensions Marketplace*

http://oem.ofbizci.net/oci-2/



On Wed, Apr 13, 2016 at 9:50 AM, Jin, Jian (Seth)  wrote:

Hello there,



It is not easy to find instructions on how to become a Trafodion
contributor on the official website or wiki page.

Did I miss it, or do we need to add this in the proper place?



Best regards,



金剑 (Seth)









-- 

Thanks,



Gunnar

*If you think you can you can, if you think you can't you're right.*


RE: Non deteminism in tests (Trafodion master Daily Test Result - 165)

2016-04-04 Thread Sean Broeder
Should just committers be able to add such a comment?  Why not open it up to
anyone?

Sean

-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com]
Sent: Monday, April 4, 2016 1:09 PM
To: dev 
Subject: Re: Non deteminism in tests (Trafodion master Daily Test Result -
165)

Hi Sandhya,

Yes, that's a good idea. Would it be hard to implement a "jenkins,
run-full-tests" comment that would trigger the same tests as a nightly
build? That would allow testing some complex or risky changes more
thoroughly, without holding up the small and easy ones.

Hans

On Mon, Apr 4, 2016 at 12:12 PM, Sandhya Sundaresan <
sandhya.sundare...@esgyn.com> wrote:

>  RE: Non deteminism in tests (Trafodion master Daily Test Result -
> 165)
>
> For Option 2, we could have an option in Jenkins so that a committer
> could add a comment like "jenkins, run full tests" that would kick off
> the full test run.
>
> For  a small change we may not need to run full tests and committers
> could decide that.
>
> Sandhya
>
> -Original Message-
> From: Sandhya Sundaresan [mailto:sandhya.sundare...@esgyn.com
> ]
> Sent: Monday, April 4, 2016 12:05 PM
> To: 'dev@trafodion.incubator.apache.org' <
> dev@trafodion.incubator.apache.org
> >
> Subject: RE: Non deteminism in tests (Trafodion master Daily Test
> Result -
> 165)
>
> Today, when this daily email comes out from Steve, I think only a
> couple of us are paying attention to the results. We try to get after
> the folks who checked in and try to see if their checkin caused the
> failures.
>
> Clearly this adhoc process isn't going to work well.
>
> So 2 options to get past this :
>
> 1.  Every pull request should kick off all the tests - not just the
> small core set.
>
> To avoid long queues, we could build some intelligence into the system
> that kicks off tests to club 3 or 4 PRs that come within the hour and
> run one full test run for all of them combined. Unless we automate this
> full regression testing, whatever Hans lists below will continue to be a
> problem. (Do we have the test machine resources to do this?)
>
> 2. The other option is for Trafodion committers to NOT commit/merge
> the PR until the results of the entire SQL  regression  and/or Phoenix
> tests run have been posted as a comment in the PR .
>
> Thanks
>
> Sandhya
>
> -Original Message-
>
> From: Hans Zeller [mailto:hans.zel...@esgyn.com
> ]
>
> Sent: Monday, April 4, 2016 10:59 AM
>
> To: dev 
>
> Subject: Re: Trafodion master Daily Test Result - 165
>
> Don't know about you, but seeing tests fail every single day makes me
> kind of indifferent to these test failures, after a few hundred of
> them... I really wish we could have an environment where the daily
> email we get is at least 8 out of 10 times a clear indicator whether a
> build is good or not.
>
> In other words, all tests pass for a good build, or tests fail for a
> bad build. Right now, every single day we see failures, so what does that
> mean?
>
> A few things we could consider:
>
>    - Categorize. IMHO that's one of the key ways to deal with errors:
>       - Deterministic issues:
>          - Failure to run relevant tests - update to an expected file
>            missing.
>          - Deterministic bugs introduced.
>       - Non-deterministic bugs - those are much harder to deal with:
>          - Non-deterministic issues in our code.
>          - Instability of the underlying platform.
>    - Document: Make sure we have JIRAs for all issues that affect
>      regression test failures.
>    - Communicate what and who causes test failures. A side-effect of the
>      previous bullet. Most of us break the build once in a while, but we
>      should try not to do it too often.
>    - A few additional things:
>       - Some of our tests are poorly designed, leading to a lot of false
>         failures. Usually because they try to test too much.
>       - Some of our tests cause failures when run twice on the
>         development platform, usually due to missing cleanup.
>
> This would take some effort. On the other hand, having a clear
> pass/fail indication for a build saves us all a lot of time.
>
> Hans
>
> On Mon, Apr 4, 2016 at 10:49 AM, Qifan Chen  wrote:
>
> > It probably will be too time consuming to test 3 or  flavors by
> > developers in general.
> >
> > Would it be possible to use a default flavor out of box (i.e.,
> > configured during a local install), or select a particular flavor if
> > there is a need?
> >
> > Thanks --Qifan
> >
> > On Mon, Apr 4, 2016 at 11:52 AM, Sandhya Sundaresan <
> > sandhya.sundare...@esgyn.com> wrote:
> >
> > > Hi Steve,
> > >  I was not suggesting taking any out of the build. I was suggesting
> > > that all 3 flavors get run nightly in the official build that you kick 

RE: RMIInterface.java and synchronized transactional functions?

2016-03-23 Thread Sean Broeder
Hi Eric,
There has been some effort in the past to remove some of the
synchronization.  Much has been removed, but unfortunately much remains.  I
probably don't recall all the exceptions that we hit when the
synchronization was removed, but among them was
ConcurrentModificationException.  Additionally, there may have been issues
with keeping the various lists in agreement, but I'm not 100% certain on
this.

Regards,
Sean

-Original Message-
From: Eric Owhadi [mailto:eric.owh...@esgyn.com]
Sent: Wednesday, March 23, 2016 9:47 AM
To: dev@trafodion.incubator.apache.org
Subject: RMIInterface.java and synchronized transactional functions?

Hi Trafodioneers,



I was investigating the code path to implement parallel scanner, when I
stumbled on RMIInterface.java.

In that file, you see different treatment for transactional vs.
non-transactional get, delete, delete list, put, put list, checkAndPut,
checkAndDelete, and getScanner.



All transactional functions are "synchronized", while all non-transactional
ones are not.

Given the strong drawback of synchronization on concurrency, I am wondering
whether these synchronizations are left over from debugging to make the log
more readable, or whether there is a strong reason why we must have them.

Is someone familiar with the code?



Thanks in advance for the help,
Eric


Test hanging

2016-01-25 Thread Sean Broeder
Hi,

I am trying to validate some changes before committing them and I am
consistently seeing a hive test hang at the same statement.  Even after
backing my changes out the test hangs.  Is this a known issue or likely
still something wrong in my environment?



>>-- insert some data and add one more partition

>>sh regrhive.ksh -v -f $REGRTSTDIR/TEST005_b.hive.sql;



insert overwrite table customer_ddl
select
  c_customer_sk,
  c_customer_id,
  c_current_cdemo_sk,
  c_current_hdemo_sk,
  c_current_addr_sk,
  c_first_shipto_date_sk,
  c_first_sales_date_sk,
  c_salutation,
  c_first_name,
  c_last_name,
  c_preferred_cust_flag,
  c_birth_day,
  c_birth_month,
  c_birth_year,
  c_birth_country,
  c_login,
  c_email_address,
  c_last_review_date
from customer
where c_customer_sk < 2



--Hangs here



I executed this test through the ./tools/runallsb from the sql/regress
directory.



Thanks,

Sean


RE: Test hanging

2016-01-25 Thread Sean Broeder
Thanks Hans,
I'm trying a suggestion to add a property to the mapred-site.xml file -
…
<property>
  <name>mapreduce.framework.name</name>
  <value>local</value>   <!-- changed from yarn to local -->
</property>
…

I've restarted all and am waiting to see if the condition hits again.

Regards,
Sean
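One quick way to confirm which framework value is in effect is to pull it out of the site file; a sketch (the sample file below stands in for a real mapred-site.xml, whose path varies by install):

```shell
# Extract the <value> of mapreduce.framework.name from a mapred-site.xml.
framework_name() {
  grep -A1 '<name>mapreduce.framework.name</name>' "$1" \
    | sed -n 's|.*<value>\(.*\)</value>.*|\1|p'
}

# Demo against a sample snippet; a real run would point at the cluster's
# actual mapred-site.xml (location depends on the Hadoop install).
cat > /tmp/mapred-sample.xml <<'EOF'
<property>
  <name>mapreduce.framework.name</name>
  <value>local</value>
</property>
EOF
framework_name /tmp/mapred-sample.xml   # -> local
```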

-Original Message-
From: Hans Zeller [mailto:hans.zel...@esgyn.com]
Sent: Monday, January 25, 2016 11:09 AM
To: dev <dev@trafodion.incubator.apache.org>
Subject: Re: Test hanging

Hi Sean, this is probably related to MapReduce, this particular statement
will start MapReduce jobs. You can check whether Yarn and the other required
Hadoop components are up, or just try restarting your Hadoop environment.
You can also look at the Yarn web page to see whether you have any stuck
requests. If you use install_local_hadoop, open page
$MY_SQROOT/sql/scripts/swurls.html to get to the Yarn GUI.

Hans

On Mon, Jan 25, 2016 at 10:13 AM, Eric Owhadi <eric.owh...@esgyn.com> wrote:

> It is working fine for me, last time I ran was Friday night.
> Eric
>
> -Original Message-----
> From: Sean Broeder [mailto:sean.broe...@esgyn.com]
> Sent: Monday, January 25, 2016 11:12 AM
> To: dev@trafodion.incubator.apache.org
> Subject: Test hanging
>
> Hi,
>
> I am trying to validate some changes before committing them and I am
> consistently seeing a hive test hang at the same statement.  Even
> after backing my changes out the test hangs.  Is this a known issue or
> likely still something wrong in my environment?
>
>
>
> >>-- insert some data and add one more partition
>
> >>sh regrhive.ksh -v -f $REGRTSTDIR/TEST005_b.hive.sql;
>
>
>
> insert overwrite table customer_ddl
> select
>   c_customer_sk,
>   c_customer_id,
>   c_current_cdemo_sk,
>   c_current_hdemo_sk,
>   c_current_addr_sk,
>   c_first_shipto_date_sk,
>   c_first_sales_date_sk,
>   c_salutation,
>   c_first_name,
>   c_last_name,
>   c_preferred_cust_flag,
>   c_birth_day,
>   c_birth_month,
>   c_birth_year,
>   c_birth_country,
>   c_login,
>   c_email_address,
>   c_last_review_date
> from customer
> where c_customer_sk < 2
>
>
>
> --Hangs here
>
>
>
> I executed this test through the ./tools/runallsb from the sql/regress
> directory.
>
>
>
> Thanks,
>
> Sean
>


RE: New version of proposed web site

2015-12-01 Thread Sean Broeder
Hi Dave,
The FAQ is correct.  We do still use 2 logs and there is no support
currently that allows a log to be shipped to a remote site.  There was a
question about this on the dev list in late November titled ' Question re
Transaction Logs'

Let me know if you'd like me to forward that again.

Regards,
Sean

-Original Message-
From: Dave Birdsall [mailto:dave.birds...@esgyn.com]
Sent: Tuesday, December 1, 2015 11:29 AM
To: dev@trafodion.incubator.apache.org
Subject: RE: New version of proposed web site

Hi Gunnar,

I don't know the answers to your questions re: DCS, REST, replacing the
current website. Sorry I'm not much help there.

Some feedback on the proposed web site:

1. I love it! You've done an enormous amount of work.

2. I hate the animation on the index.html page. Just when I'm looking
carefully at one of the pictures, it moves away and another takes its place.
I click on the arrows to bring the desired picture back. But again it moves.
I wish it would stop. (FWIW I'm using the Chrome browser.)

3. faq.html: The first section needs minor updating. HP Labs is no longer
involved (though we are grateful for their early support). Possible
wordsmith: "What is Apache Trafodion?" / "Apache Trafodion is an open source
initiative originally cultivated by HP Labs to develop..."

The section, "Is there a transaction log and can the log be shipped to a DR
site?" needs updating. I don't think we use two logs anymore. Perhaps one of
the transaction manager folks can comment. There is a commercial solution
available now for DR on Trafodion but I don't know if it is appropriate to
mention that here. Perhaps something like, "Trafodion itself does not
support shipping and replaying logs..."

But this is low-level detail that can be updated after your new web site is
live.

Dave


-Original Message-
From: Gunnar Tapper [mailto:tapper.gun...@gmail.com]
Sent: Tuesday, December 1, 2015 12:12 AM
To: dev@trafodion.incubator.apache.org
Subject: New version of proposed web site

https://drive.google.com/open?id=0BxlwNhWxn8iTbU44V0FlVldVd2c

Much more documentation included. *Almost* finished porting release notes.
The look-and-feel is pretty much finalized.

Next, I am trying to figure out how to generate the asciidocs and how to
copy DCS and REST documentation to the web-site's directory.

I don't yet know how we replace the current website. For now, it ought to be
enough to simply copy all these files manually?

--
Thanks,

Gunnar
*If you think you can you can, if you think you can't you're right.*


RE: Question re Transaction Logs

2015-11-20 Thread Sean Broeder
Hi Carol,
There is the standard HBase HLog, but we do not currently have the ability
to support replication to a DR site because we can't ensure transaction
consistency on the remote site for transactions that span multiple HLogs
(WALs).

Also, you are right that there is currently a short circuit for single
region/single row operations that skips the DTM, but that is a bug and
cannot be supported.  It skips conflict checking, and there is no relation
maintained to prior or subsequent operations.  So this can result in
undetected conflicts.

I do have on my plate a project to introduce autonomous region transactions
that will bypass the DTM, but still perform conflict checking in the
regions.

Regards,
Sean

-Original Message-
From: Carol Pearson [mailto:carol.pearson...@gmail.com]
Sent: Friday, November 20, 2015 10:45 AM
To: dev@trafodion.incubator.apache.org
Subject: Question re Transaction Logs

Hi,

I'm updating the Trafodion FAQ as part of the website migration.  One of the
questions in the current FAQ is:

Is there a transaction log and can the log be shipped to a DR site?

Yes, there is a log that audits all the transactional activity. There is no
support for shipping and replaying the log on a remote DR site.


https://wiki.trafodion.org/wiki/index.php/FAQ#Is_there_a_transaction_log_and_can_the_log_be_shipped_to_a_DR_site.3F

Are singleton transactions included in the transaction log?  My
understanding is that the TM doesn't get involved in singletons like "INSERT
INTO T VALUES (1,1);" if there's no index on table T.

So should this FAQ answer just say  that Trafodion does not generate a
complete transaction log that can be used to replicate the entire database
on a remote system?  Or am I missing something?

Thanks!
-Carol P.

---
Email:carol.pearson...@gmail.com
Twitter:  @CarolP222
---


RE: core/sqf/src/tm/tools/pwd. is not valid filename in Windows

2015-08-05 Thread Sean Broeder
Hi Weiqing,
It looks as though this is used for printing various tm and transaction
states.  You should have no problems in renaming the file for compilation.

Regards,
Sean

-Original Message-
From: Weiqing Xu [mailto:xuweiqing...@gmail.com]
Sent: Wednesday, August 5, 2015 6:16 AM
To: dev@trafodion.incubator.apache.org
Subject: core/sqf/src/tm/tools/pwd. is not valid filename in Windows

In order to maintain the win-odbc code, we need to clone the code on the
Windows platform. But there is always an error because this file name is
not valid on the Windows system.

It looks like it's a C++ source code file. Can anyone who knows the use of
this file rename it?

Best Regards,
Weiqing Xu


RE: how to make select go through DTM in sqlci?

2015-07-28 Thread Sean Broeder
Hi Ming,
Have you tried overriding the non transactional methods in the
SsccTransactionalTable?  These appear to call the base class, so
HTable.java.  If you provide an override method you should be able to
control what data is returned and force it through the SSCC coprocessor.
Within the region we can make it behave like a DP@ transaction, so the DTM
is not involved.  This will improve efficiency and we'll probably want all
IDU operations to behave in this manner.

Regards,
Sean

-Original Message-
From: ovis_p...@sina.com [mailto:ovis_p...@sina.com]
Sent: Tuesday, July 21, 2015 10:49 PM
To: dev dev@trafodion.incubator.apache.org
Subject: how to make select go through DTM in sqlci?

Hi all,

This is Ming from Shanghai. I am trying to make SSCC pass the SQL
regression test. The SQL regression queries are all run through sqlci, so
all queries are in 'autocommit' mode if no explicit 'begin/commit' is
invoked. As per Joanie's help, she modified the generator to let all
singleton IDU operations go through DTM if the DTM is in SSCC mode. But I
find that SELECT in sqlci is still bypassing the DTM and invoking the
native HBase API directly. How can I force the select to go through DTM as
well? Any help will be appreciated, like which file and which function to
look at, so I can study and test. HBase-trx will still retain the current
behavior, so singleton IDU will bypass DTM and rely on HBase's per-row
transaction protection.

Thanks,
Ming


Re: Trailing whitespace

2015-07-14 Thread Sean Broeder
While I like this tool, I think it only shows white space in lines you've
changed (i.e. the diff).  If you want to go looking for other white space
another tool is needed, but I don't know what that tool is.

Sean
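For searching a whole tree rather than just a diff, plain grep can serve as that missing tool; a sketch (the --include patterns and path are examples, not project conventions):

```shell
# List all lines with trailing spaces or tabs across a source tree;
# grep's [[:blank:]] class covers both.  Returns success even when the
# tree is clean, so it is safe in scripts running under 'set -e'.
find_trailing_ws() {
  grep -rn '[[:blank:]]$' "$@" || true
}

# Example (hypothetical paths and file patterns):
# find_trailing_ws --include='*.cpp' --include='*.h' core/sql
```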

On Tue, Jul 14, 2015 at 3:22 PM, Dave Birdsall dave.birds...@esgyn.com
wrote:

 Hi,

 Awhile back I went looking for a tool that would check for trailing white
 space in source code. (I was looking for this because our earlier Gerrit
 code review tool made a big deal about trailing white space and it was
 distracting.)

 I found that git has a good tool for checking for it.

 git diff --check

 This command will look at any files you've changed and tell you what lines
 have trailing white space.

 I don't know if trailing white space will be an annoyance for us in the
 future, but if it is, here's a tool that can help you manage it.

 Dave



Re: Anyone working on putting up a website at trafodion.incubator.apache.org and....

2015-07-10 Thread Sean Broeder
Hi Dave,
Just a quick comment on the contributors section of the wiki '
https://cwiki.apache.org/confluence/display/TRAFODION/Contributors'
Most of the email addresses are still the old HP addresses.  We should
probably change that; otherwise, if someone tries to contact a contributor,
the message will bounce.

Regards,
Sean

On Fri, Jul 10, 2015 at 9:31 AM, Dave Birdsall dave.birds...@esgyn.com
wrote:

 Hi,

 At the moment I don't think anyone is working on it. Alice Chen was doing
 most of the work there but was redirected, so some regrouping is needed.

 To get that thought process going, I'll think out loud a bit, a mixture of
 statements and questions. Comments welcome.

 Before we went Apache, we had a single web site, www.trafodion.org (still
 out there). That had a variety of content, broken down into the following
 areas:

 1. Understanding the software -- contained a description of Trafodion's
 architecture (in some detail), lists of recently released features, and
 roadmaps
 2. Using the software -- contained information about download and
 installation (including a pointer to a downloads page), and high level
 description of how to use the software (e.g., how to start a Trafodion
 instance)
 3. Contributing -- detailed descriptions of the mechanics of contributing
 to Trafodion (this has been updated with Apache practices, so it is
 current)
 4. Community -- description of governance, and lists of people who were
 currently working on Trafodion and their areas of specialization, and
 community events
 5. Documentation -- the SQL manual and related documentation
 6. Other stuff, e.g. videos and an FAQ

 In the Apache world, Trafodion itself is on a course to have two web sites:
 http://trafodion.incubator.apache.org/ and a wiki,
 https://cwiki.apache.org/confluence/display/TRAFODION/Apache+Trafodion+home
 .
 First question that pops into my mind is, how should content be broken down
 between the two?

 Alice's idea for http://trafodion.incubator.apache.org/ was for it to be
 built as part of the daily build from the source repository. One benefit of
 that is that the workflow for changing the web site was identical to that
 for changing Trafodion code itself. It would be subject to the same review
 standards and go through continuous integration. Updating the wiki of
 course uses a different workflow and has a different permissions structure.

 And of course some of the material on www.trafodion.org will no longer be
 relevant in the Apache world so it goes away, or is replaced with pointers
 to general Apache web sites.

 The current state of things seems to be:

 1. Understanding the software -- none of this is on Apache yet; the stuff
 on the old wiki is up-to-date
 2. Using the software -- none of this is on Apache yet; the stuff on the
 old wiki is up-to-date
 3. Contributing -- general high level info + a list of current contributors
 is present on the Apache wiki (
 https://cwiki.apache.org/confluence/display/TRAFODION/Apache+Trafodion+home
 ),
 while detailed instructions about navigating Trafodion development
 infrastructure (git, Jenkins and the like) are up to date on the old wiki
 www.trafodion.org.
 4. Community -- future events has a page on the Apache wiki; past events
 are on the old wiki. Governance material on the old wiki is obsolete as it
 is pre-Apache.
 5. Documentation -- none of this is on Apache yet; the manuals are
 up-to-date with respect to the last release of Trafodion but do not contain
 new features developed since (which is an additional concern)
 6. Other stuff -- none of this is on Apache yet

 Looking at this list: Governance stuff should go away, replaced by pointers
 to the appropriate ASF pages. As to the rest...

 Getting back to the question of which site should hold what content: I'd
 like to suggest that technical content (documentation, architectural
 description, and the like) should be on the
 http://trafodion.incubator.apache.org/ web site, and go through the same
 workflow as code does. Information about contributing and community, along
 with other stuff seems like appropriate content for the wiki. My
 rationale is that the technical content should reflect the code and
 therefore be in sync with it. What do others think?

 A procedural question: Once we get consensus on what goes where, the next
 step seems to be to structure a work program around it. Are JIRAs the right
 mechanism to do this? A JIRA sounds right for the stuff that goes through
 normal workflow. Would it be appropriate to use a JIRA for the wiki site as
 well?

 Welcoming thoughts and further discussion,

 Dave



 On Fri, Jul 10, 2015 at 8:14 AM, Stack st...@duboce.net wrote:

  Anyone working on the website? This don't look too good as home page:
  http://trafodion.incubator.apache.org/
 
  Yours,
  St.Ack
 
  On Mon, Jun 29, 2015 at 3:20 PM, Stack st...@duboce.net wrote:
 
   I updated our status page
   http://incubator.apache.org/projects/trafodion.html
  
   Anyone on the website?
 

Re: Permissions (was Re: Copyright rules)

2015-07-10 Thread sean . broeder
Hi Dave,
My Jira id is sbroeder.

Thanks,
Sean

Sent from my iPad

 On Jul 10, 2015, at 1:29 PM, Dave Birdsall dave.birds...@esgyn.com wrote:
 
 OK, all. I need the practice. So if someone needs to be added to JIRA as a
 contributor, just send me your JIRA log-on name (which is the user ID you
 use when you go to
 https://issues.apache.org/jira/browse/TRAFODION/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel),
 and I will add you. If you haven't been on JIRA before, you'll need to
 create a JIRA log-on. Pick a user name that you would be happy with if you
 eventually become a committer -- then it can become your apache.org e-mail
 ID too. (I didn't, so I have to keep two IDs straight.)
 
 Dave
 
 On Fri, Jul 10, 2015 at 1:11 PM, Stack st...@duboce.net wrote:
 
 I added you Roberta.
 
 Any administrator can do this: Admins are listed here:
 
 https://issues.apache.org/jira/plugins/servlet/project-config/TRAFODION/roles
 
 St.Ack
 
 
 
 On Fri, Jul 10, 2015 at 11:44 AM, Roberta Marton roberta.mar...@esgyn.com
 wrote:
 
 How do you know if you are on the JIRA contributors list or not?
 If I am not on the list please add me:  Apache ID:  rmarton
 
 There seem to be many places where a contributor needs to be added, etc.,
 in order to do work.  Unfortunately, I have not been keeping track of
 this.  We may want to produce a list so when a new contributor is added,
 we can make sure everything is set up.  Or if someone is no longer a
 contributor, they can be removed in all the correct places.
 
  Thanks,
   Roberta
 
 On Fri, Jul 10, 2015 at 11:11 AM, Stack st...@duboce.net wrote:
 
 On Fri, Jul 10, 2015 at 10:48 AM, Pierre Smits pierre.sm...@gmail.com
 
 wrote:
 
 ...
 I advise any new contributor to send an email to this ml with the
 request to be added to the contributors list, stating their JIRA
 username, so we can add their name to that list.
 
 
 I will add whoever does the above as a JIRA contributor, np.
 St.Ack
 
 
 
 
 On Fri, Jul 10, 2015 at 6:49 PM, Stack st...@duboce.net wrote:
 
 On Fri, Jul 10, 2015 at 9:32 AM, Qifan Chen qifan.c...@esgyn.com
 wrote:
 
 Hi Dave,
 
 Just wondering how you assigned a JIRA to yourself.  Thanks!

 Dave is an administrator, so he can do that now.
 
 Can you not assign issues to yourself, Qifan? If not, we need to add you
 as a contributor in JIRA. Then you can.
 
 St.Ack
 
 
 On Fri, Jul 10, 2015 at 10:51 AM, Dave Birdsall 
 dave.birds...@esgyn.com
 
 wrote:
 
 Done. See https://issues.apache.org/jira/browse/TRAFODION-28. I have
 assigned it to myself.
 
 On Fri, Jul 10, 2015 at 1:50 AM, Stack st...@duboce.net
 wrote:
 
 On Thu, Jul 9, 2015 at 5:22 PM, Dave Birdsall 
 dave.birds...@esgyn.com
 
 wrote:
 
 Perfect! This is exactly the information I needed.
 
 So, presently all Trafodion files contain a Hewlett-Packard copyright.
 From the link you sent, it sounds like those should now be removed?

 It is pretty explicit about what files should have in them regarding
 copyright:
 
 If the source file is submitted with a copyright notice included in it,
 the copyright owner (or owner's agent) must either:

   1. remove such notices, or
   2. move them to the NOTICE file associated with each applicable
      project release, or
   3. provide written permission for the ASF to make such removal or
      relocation of the notices.
 
 
 Later in the doc it talks about the NOTICE file format.
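
 For reference, the standard ASF source header described on that page
 looks like the following (quoted from the src-headers doc linked below;
 shown here as a C/Java-style block comment -- the comment syntax would
 of course change per file type):

```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
```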
 
 As Pierre suggests, a JIRA and a patch to make the change would be the
 way to go.
 
 St.Ack
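
 A hedged sketch of how one might enumerate the files such a patch needs
 to touch (the /tmp/hdr_demo tree and file names here are made up purely
 for illustration, and the exact wording of the old notice is assumed to
 contain the string "Hewlett-Packard"):

```shell
# Illustrative only: stage two toy source files, one carrying the old notice.
rm -rf /tmp/hdr_demo && mkdir -p /tmp/hdr_demo/src
printf '// Copyright Hewlett-Packard\nint x;\n' > /tmp/hdr_demo/src/a.cpp
printf 'int y;\n' > /tmp/hdr_demo/src/b.cpp

# List every file that still contains the notice, so a single cleanup
# patch can cover all of them.
grep -rl "Hewlett-Packard" /tmp/hdr_demo
```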
 
 
 Thanks,
 
 Dave
 
 On Thu, Jul 9, 2015 at 5:11 PM, Stack st...@duboce.net
 wrote:
 
 Does this help, Dave?
 http://www.apache.org/legal/src-headers.html
 St.Ack
 
 On Thu, Jul 9, 2015 at 5:05 PM, Dave Birdsall 
 dave.birds...@esgyn.com
 
 wrote:
 
 Hi,
 
 What are the proper rules for updating copyrights in Trafodion source
 code now that we are in Apache?
 
 Thanks,
 
 Dave
 
 
 
 --
 Regards, --Qifan