Re: when a query will be cancelled?

2016-06-07 Thread Liu, Ming (Ming)
The SPJ is using Type 2 driver.

Thanks,
Ming

From: Venkat Muthuswamy [mailto:venkat.muthusw...@esgyn.com]
Sent: June 8, 2016 8:58
To: user@trafodion.incubator.apache.org
Subject: RE: when a query will be cancelled?

Hi Ming,

Is your SPJ using type 2 JDBC or type 4 JDBC?

Venkat

From: Liu, Ming (Ming) [mailto:ming@esgyn.cn]
Sent: Tuesday, June 07, 2016 5:48 PM
To: user@trafodion.incubator.apache.org
Subject: Re: when a query will be cancelled?

Hi, Hans,

In my test SPJ, there is only one statement. It is one big UPDATE query that tries to update 10M rows in a single thread, so it takes 1.5 hours.
We will try to narrow down the issue further by debugging. We want to check with the community whether there are any special settings around a 1-hour timeout.

Ming

From: Hans Zeller [mailto:hans.zel...@esgyn.com]
Sent: June 8, 2016 0:42
To: user@trafodion.incubator.apache.org
Subject: Re: when a query will be cancelled?

Hi Ming,

One thing to test would be where you get the timeout: whether it is in the JDBC calls done in the SPJ, or in the communication between the master executor and the UDR server. When you simulate it in your dev environment, do you also issue a single JDBC call that takes more than an hour?

I have to admit I haven't tried it, but hopefully these instructions will work 
for SPJs as well: 
https://cwiki.apache.org/confluence/display/TRAFODION/Tutorial%3A+The+object-oriented+UDF+interface#Tutorial:Theobject-orientedUDFinterface-DebuggingUDFcode

Hans

On Tue, Jun 7, 2016 at 9:04 AM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:
Hi,

We have an SPJ that performs some insert/select operations against a big table. Each time the SPJ has run for 1 hour, the CALL statement returns error -8007, saying it was cancelled. What are the possible reasons for a query to be cancelled?

>>CALL QUERY1SPJ();
*** ERROR[8007] The operation has been canceled.
*** ERROR[8811] Trying to close a statement that is either not in the open 
state or has not reached EOF.
--- SQL operation failed with errors.

I have a dev environment and simulated a long-running SPJ there (not the same as the SPJ on the real cluster above), but I was not able to reproduce the problem. The test SPJ runs for 1 hour 50 minutes and finishes correctly. So this does not seem to be a general SPJ issue, but I am not sure.

Any suggestions for debugging this issue will be very much appreciated.

Thanks,
Ming




Re: how to reference a column from a TMUDF?

2016-06-01 Thread Liu, Ming (Ming)
   1  super-user      21233158559950  12.345.567.345
   2 2  super-services  21233158559950  12.345.567.345
   2 3  super-services  21233158559955  12.345.567.345

--- 4 row(s) selected.
>>

Notice how column names are in upper case in the output.




On Wed, Jun 1, 2016 at 4:18 AM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:
Hi, all,

I wrote a simple TMUDF that performs a Solr query and returns the result as a table value.

It can work like this:

>>select * from  udf(solrUDF('db','iphone'));
id  description
--  -----------
1   iphone 5
2   iphone 5s
--- 2 row(s) selected.


As you can see, it returns two columns: 'id' and 'description'. Now I want to filter on id, so I try this:

>>select * from  udf(solrUDF('db','iphone')) u where u.id = 1;

It failed and report this error:

*** ERROR[4003] Column U.ID is not a column in table U, or, after 
a NATURAL JOIN or JOIN USING, is no longer allowed to be specified with a table 
correlation name.

*** ERROR[8822] The statement was not prepared.

Because I want to join the UDF result with a source Trafodion table, I have to reference the columns of the UDF.

Please help: how can I reference a column returned from a UDF?

Thanks,
Ming





RMS questions

2016-02-27 Thread Liu, Ming (Ming)
Hi, all,

I am trying to gather query's run-time statistics using RMS command 'get 
statistics'. It works fine, but I have some questions below:

As I understand it, RMS saves stats for a given query in shared memory, so it cannot save all the history. It only saves the stats of CURRENTLY running queries. Is this true?
For a long-running query, I can start another session and use 'get statistics for qid xxx' periodically to get the stats. For a short-running query (finishing in ms), it seems hard to start another session, find the qid, and run 'get statistics'. I think there is a small time window during which one can still get stats for a query after it has finished.
What is that time window, 30 seconds?


If I have a busy system with a rate like 3000 queries/s, can RMS keep all of them for 30 seconds? That seems huge, and memory is limited. If it works like a ring buffer or cache (aging out the oldest entries), what strategy does RMS use to decide which stats to keep and which to age out?
What happens if the active queries run out of RMS memory? I know we can enlarge the size of that memory, but I don't know exactly how; any instructions?
With those instructions, how can one calculate the required memory size from the number of queries one wants to save?

Maybe we can only save stats for 'slow queries'?

Many questions, thanks in advance for any help.

Thanks,
Ming



how to use BLOB/CLOB in Trafodion?

2016-02-16 Thread Liu, Ming (Ming)
Hi, all,

I am interested in how a user can use BLOB/CLOB in Trafodion.
I see usage of stringtolob() and lobtostring() like this:

insert into tbl values(1,stringtolob('clob values'));
select lobtostring(c2,10) from tbl;

This is fine if the string is not too long, but if I have a big text file and want to insert the whole file into a CLOB column, how can I do that?

Thanks,
Ming



Re: [VOTE] Apache Trafodion Logo Proposal

2016-02-18 Thread Liu, Ming (Ming)
+1 for 13

Very good drawing; looking forward to seeing the dragon born.

Thanks,
Ming

From: Roberta Marton [mailto:roberta.mar...@esgyn.com]
Sent: February 19, 2016 10:02
To: user@trafodion.incubator.apache.org
Subject: [VOTE] Apache Trafodion Logo Proposal

There has been quite a lot of discussion on our user list regarding the proposed Apache Trafodion logos.
It has now come time for a formal vote on the two most popular logos, fondly known as 4g and 13.
Both have been attached for your reference.

Please respond as follows:

[ ] +1-4g approve option 4g
[ ] +1-13 approve option 13
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

The vote will be open for 72 hours.

   Regards,
   Roberta Marton



DDL column DEFAULT question

2016-03-14 Thread Liu, Ming (Ming)
Hi, all,

I need to create a table in Trafodion with columns having DEFAULT values, but I cannot make it work; any help will be very much appreciated:

CREATE TABLE DBMARTA.ARPT_DIM_AUTORPT_PUB_MARK
(MARK_ID      SMALLINT NOT NULL,
  BEGIN_TIME  DATE NOT NULL DEFAULT date'2008-01-01',
  END_TIME    DATE NOT NULL DEFAULT date'2018-01-01',
  ACTIVE_FLAG SMALLINT,
  MARK_NAME   VARCHAR(20),
  DESC_TXT    VARCHAR(80),
  primary key(MARK_ID, BEGIN_TIME, END_TIME)
);
*** ERROR[15001] A syntax error occurred at or before:
CREATE TABLE DBMARTA.ARPT_DIM_AUTORPT_PUB_MARK  (MARK_ID SMALLINT NOT NULL,   BEGIN_TIME  DATE NOT NULL DEFAULT date'2008-01-01',   END
  ^ (134 characters from start of SQL statement)

*** ERROR[8822] The statement was not prepared.

It seems NOT NULL and DEFAULT are conflicting? Or does my syntax have some other issue?

Thanks,
Ming




Re: DDL column DEFAULT question

2016-03-14 Thread Liu, Ming (Ming)
It works!

Thanks Anoop

From: anoop [mailto:anoop.sha...@esgyn.com]
Sent: March 15, 2016 0:05
To: user@trafodion.incubator.apache.org
Subject: RE: DDL column DEFAULT question

... DATE DEFAULT date'2008-01-01' NOT NULL ...

The DEFAULT clause comes before the NULL clause; that
is the ANSI definition.


anoop
 Original message 
From: "Liu, Ming (Ming)" <ming@esgyn.cn>
Date: 3/14/2016 9:01 AM (GMT-08:00)
To: user@trafodion.incubator.apache.org
Subject: DDL column DEFAULT question

Hi, all,

I need to create a table in Trafodion with columns having DEFAULT values, but I cannot make it work; any help will be very much appreciated:

CREATE TABLE DBMARTA.ARPT_DIM_AUTORPT_PUB_MARK
(MARK_ID      SMALLINT NOT NULL,
  BEGIN_TIME  DATE NOT NULL DEFAULT date'2008-01-01',
  END_TIME    DATE NOT NULL DEFAULT date'2018-01-01',
  ACTIVE_FLAG SMALLINT,
  MARK_NAME   VARCHAR(20),
  DESC_TXT    VARCHAR(80),
  primary key(MARK_ID, BEGIN_TIME, END_TIME)
);
*** ERROR[15001] A syntax error occurred at or before:
CREATE TABLE DBMARTA.ARPT_DIM_AUTORPT_PUB_MARK  (MARK_ID SMALLINT NOT NULL,   BEGIN_TIME  DATE NOT NULL DEFAULT date'2008-01-01',   END
  ^ (134 characters from start of SQL statement)

*** ERROR[8822] The statement was not prepared.

It seems NOT NULL and DEFAULT are conflicting? Or does my syntax have some other issue?

Thanks,
Ming




Re: Anyway to start Trafodion without sqstart

2016-03-15 Thread Liu, Ming (Ming)
From my understanding, it is not valid to start Trafodion node by node. The monitor simulates a single operating-system image over a set of nodes. A process can be configured as a pair: active and standby. So starting one Trafodion process will start two processes on two nodes. It is hard to start a single node.
Is there any use case for doing so?

Thanks,
Ming

From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
Sent: March 15, 2016 22:27
To: user@trafodion.incubator.apache.org
Subject: RE: Anyway to start Trafodion without sqstart

Yes. The sqgen command takes the configuration file for the Trafodion cluster and generates gomon.cold, gomon.warm, and other relevant scripts. These generated scripts are copied to all nodes in the cluster. They are nothing but commands to sqshell. sqstart uses either gomon.cold or gomon.warm to start the Trafodion instance.

Selva

From: Gunnar Tapper [mailto:tapper.gun...@gmail.com]
Sent: Monday, March 14, 2016 10:03 PM
To: user@trafodion.incubator.apache.org
Subject: Re: Anyway to start Trafodion without sqstart

DCS and REST follow the HBase model so that should be a simple matter of 
invoking the *-daemon.sh scripts.

I think the rest is a matter of using sqshell:

[centos@trafodion incubator-trafodion]$ sqshell
Processing cluster.conf on local host trafodion.novalocal
[SHELL] Shell/shell Version 1.0.1 Apache_Trafodion Release 2.0.0 (Build debug 
[centos], date 11Mar16)
[SHELL] %help
[SHELL] usage: shell {[-a|-i] []} | {-c }
[SHELL] - commands:
[SHELL] -- Command line environment variable replacement: ${}
[SHELL] -- ! comment statement
[SHELL] -- cd 
[SHELL] -- delay 
[SHELL] -- down  [, ]
[SHELL] -- dump [{path }]  | <nid,pid>
[SHELL] -- echo []
[SHELL] -- event [{ASE|TSE|DTM|AMP|BO|VR|CS}]  [<nid,pid> [ event-data] ]
[SHELL] -- exec [{[debug][nowait][pri ][name ]
[SHELL]   [nid ][type 
{AMP|ASE|BO|CS|DTM|PSD|SMS|SPX|SSMP|TSE|VR}]
[SHELL] --[in |#default][out |#default]}] path [[]...]
[SHELL] -- exit [!]
[SHELL] -- help
[SHELL] -- kill [{abort}]  | <nid,pid>
[SHELL] -- ldpath [[,]...]
[SHELL] -- ls [{[detail]}] []
[SHELL] -- measure | measure_cpu
[SHELL] -- monstats
[SHELL] -- node [info []]
[SHELL] -- path [[,]...]
[SHELL] -- ps [{ASE|TSE|DTM|AMP|BO|VR|CS}] [|<nid,pid>]
[SHELL] -- pwd
[SHELL] -- quit
[SHELL] -- scanbufs
[SHELL] -- set [{[nid ]|[process ]}] key=
[SHELL] -- show [{[nid ]|[process ]}] [key]
[SHELL] -- shutdown [[immediate]|[abrupt]|[!]]
[SHELL] -- startup [trace] []
[SHELL] -- suspend []
[SHELL] -- time 
[SHELL] -- trace 
[SHELL] -- up 
[SHELL] -- wait [ | <nid,pid>]
[SHELL] -- warmstart [trace] []
[SHELL] -- zone [nid |zid ]

Obviously, you can up/down nodes but I don't know how that works in 
relationship to the startup command.

On Mon, Mar 14, 2016 at 11:52 AM, Amanda Moran wrote:
Hi there-

Is there a way to start up Trafodion not by using sqstart...? I would like to 
be able to start up/stop each node individually.

Thanks!

--
Thanks,

Amanda Moran



--
Thanks,

Gunnar
If you think you can you can, if you think you can't you're right.


Re: Re: Anyway to start Trafodion without sqstart

2016-03-15 Thread Liu, Ming (Ming)
The purpose, I guess, is to bypass time-consuming database recovery when it is a warm start. Say you have had a clean shutdown of Trafodion; then you can warm restart it. If one uses 'ckillall', or simply loses power, one needs a cold start. But I think this is just the concept and may not be supported yet. We studied the two gomon scripts and they seem identical except for some minor differences.

And Narendra is correct, I misunderstood the startup code. It is possible.

Thanks,
Ming

From: Gunnar Tapper [mailto:tapper.gun...@gmail.com]
Sent: March 16, 2016 0:36
To: user@trafodion.incubator.apache.org
Subject: Re: Re: Anyway to start Trafodion without sqstart

BTW, in what cases should gomon.warm or the sqstart warm argument be used?

Thanks,

Gunnar

On Tue, Mar 15, 2016 at 10:30 AM, Narendra Goyal <narendra.go...@esgyn.com> wrote:
Yes, one can follow the commands in 'gomon.cold' (which sqstart uses, as Selva mentioned) to start up without sqstart.

Note that before 'sqstart' executes the commands in 'gomon.cold', it does some other checks (such as checking for orphan processes) and cleans up IPC constructs (semaphores, queues, shared memory) via sqipcrm.

Thanks,
-Narendra

From: Gunnar Tapper [mailto:tapper.gun...@gmail.com]
Sent: Tuesday, March 15, 2016 8:10 AM
To: user@trafodion.incubator.apache.org
Subject: Re: Re: Anyway to start Trafodion without sqstart

The use case is how Apache Ambari works: it assumes node-by-node management of services. This is how you achieve rolling upgrades, stopping all services on a node, etc.
I wonder if it's possible to first start the monitor (using the startup 
command) and then start/stop other components per node. Kind of a hybrid model.

DCS/REST should already support this use case since it's based on the HBase 
model where the scripts start daemons on each configured node.

Gunnar

On Tue, Mar 15, 2016 at 8:57 AM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:
From my understanding, it is not valid to start Trafodion node by node. The monitor simulates a single operating-system image over a set of nodes. A process can be configured as a pair: active and standby. So starting one Trafodion process will start two processes on two nodes. It is hard to start a single node.
Is there any use case for doing so?

Thanks,
Ming


Re: Apache Trafodion At San Jose Strata + Hadoop World Developer Showcase!

2016-03-08 Thread Liu, Ming (Ming)
Great news! I hope Trafodion becomes known to more and more people!

From: Carol Pearson [mailto:carol.pearson...@gmail.com]
Sent: March 9, 2016 1:42
To: user@trafodion.incubator.apache.org
Subject: Apache Trafodion At San Jose Strata + Hadoop World Developer Showcase!

Hi Trafodion Fans,

Great news if you're going to Strata + Hadoop World in San Jose at the end of March: Apache Trafodion was selected to be part of the Developer Showcase on Wednesday, March 30! Stop by to see Apache Trafodion in action and to talk to some of the people in the Trafodion community in person.

This is also a great opportunity to get some ideas on how you could join in on 
the Trafodion fun!

-Carol P.
---
Email:carol.pearson...@gmail.com
Twitter:  @CarolP222
---


how to tell the most time-consuming part for a given Trafodion query plan?

2016-03-08 Thread Liu, Ming (Ming)
Hi, all,

We have been running some complex queries using Trafodion and need to analyze their performance. One question: if we want to know which part of the plan takes the longest time, is there any good tool or technique to answer this?

I can use 'get statistics for qid  default' to get runtime stats, but it is rather hard to interpret the output. I assume the "Oper CPU Time" is the best one we can trust? But I am not sure whether it is pure CPU time or also includes 'waiting time'. If I want to know the whole time an operation takes from start to end, is there any way?
And if it is CPU time, is the unit ns or something else, or is it just a relative number?

Here is an example output of 'get statistics'

LC  RC  Id  PaId ExId Frag TDB Name                  DOP  Dispatches  Oper CPU Time  Est. Records Used  Act. Records Used  Details
12  .   13  .    7    0    EX_ROOT                   1    1           69             0                  0                  1945
11  .   12  13   6    0    EX_SPLIT_TOP              1    1           32             99,550,560         0
10  .   11  12   6    0    EX_SEND_TOP               10   32          1,844          99,550,560         0
9   .   10  11   6    2    EX_SEND_BOTTOM            10   20          666            99,550,560         0
8   .   9   10   6    2    EX_SPLIT_BOTTOM           10   40          411            99,550,560         0                  53670501
6   7   8   9    5    2    EX_TUPLE_FLOW             10   10          57             99,550,560         0
.   .   7   8    4    2    EX_TRAF_LOAD_PREPARATION  10   0           0              1                  0                  TRAFODION.SEABASE.BLTEST|0|0
5   .   6   8    3    2    EX_SORT                   10   316,410     40,033,167     99,550,560         0                  0|15880|10
4   .   5   6    2    2    EX_SPLIT_TOP              10   316,411     559,691        99,550,560         5,690,184
3   .   4   5    2    2    EX_SEND_TOP               160  474,849     13,076,509     99,550,560         5,690,196
2   .   3   4    2    3    EX_SEND_BOTTOM            160  919,425     90,107,363     99,550,560         5,695,235
1   .   2   3    2    3    EX_SPLIT_BOTTOM           16   94,836      4,236,816      99,550,560         5,698,863          350792654
.   .   1   2    1    3    EX_HDFS_SCAN              16   48,227      256,448,475    0                  5,715,193          HIVE.BLTEST|5715193|1664264993

Thanks in advance.

Thanks,
Ming



Re: Trafodion support of TO_DATE

2016-03-19 Thread Liu, Ming (Ming)
Thanks Anoop,

I see how to change it now.
It will be some work to replace all such literals in the application, so the coming TO_DATE feature of Trafodion 2.0 will be a great help for database migration projects.

Thanks,
Ming

From: Anoop Sharma [mailto:anoop.sha...@esgyn.com]
Sent: March 17, 2016 12:05
To: user@trafodion.incubator.apache.org
Subject: RE: Trafodion support of TO_DATE


You will have to convert the TO_DATE part of the query to an explicit datetime literal that looks like:
   timestamp '2016-03-17 11:47:06'

With TO_DATE support (this is on Trafodion now but will be externally available as part of Traf 2.0), you
can use the statement that you have listed.
But the format part needs to match the string datetime value. In your example, it doesn't.
The correct syntax will be:
  to_date('20160317114706', 'YYYYMMDDHH24MISS')

or

  to_date('2016-03-17 11:47:06', 'YYYY-MM-DD HH24:MI:SS')

anoop

From: Liu, Ming (Ming) [mailto:ming@esgyn.cn]
Sent: Wednesday, March 16, 2016 8:51 PM
To: user@trafodion.incubator.apache.org
Subject: Trafodion support of TO_DATE

Hi, all,

I know there will be a new feature to support TO_DATE. Currently, without this feature, is there any way to migrate a query that uses TO_DATE?

Example:
insert into myTbl (staff_id, 
orbat_code, operate_time, ip_arr, browser_version, orbat_desc) 
values('ADMIN', 'V1369930001010', to_date('20160317114706', 'yyyy-MM-dd 
hh24:mi:ss')


thanks,
Ming


Re: how to tell the most time-consuming part for a given Trafodion query plan?

2016-03-09 Thread Liu, Ming (Ming)
Thank you Selva,

Gunnar pointed me to the SQL reference manual before. I should read it more carefully.

There is a lot of information in your reply, and I need some time to understand all of it. But my major question, 'how to tell the running time of different parts of a given query', is partially answered by your reply: I can use 'Oper CPU Time' for this purpose, along with the other tips described in your message. But as you correctly pointed out, it is an art ☺ so I need more practice to fully grasp it.
Once I have a more concrete question, I will ask for help again.

Thanks,
Ming

From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
Sent: March 9, 2016 13:36
To: user@trafodion.incubator.apache.org
Subject: RE: how to tell the most time-consuming part for a given Trafodion query plan?

Hi Ming,

The counters or metrics returned by RMS in Trafodion are documented at 
http://trafodion.apache.org/docs/2.0.0/sql_reference/index.html#sql_runtime_statistics.

Counters displayed in operator stats:

The DOP (Degree of Parallelism) determines the number of ESPs involved in executing a Trafodion operator, or TDB (Task Definition Block). The TDB can be identified by the number in the Id column. LC and RC denote the left and right child of the operator. Using these IDs and the parent TDB ID (PaId), one can construct the query plan from this output. The Dispatches column gives an indication of how often the operator is scheduled for execution by the Trafodion SQL Engine scheduler. An operator is scheduled and runs, traversing the different steps within itself, until it can't continue or it gives up on its own so that other operators can be scheduled.

During query execution, you will see these metrics change continuously for all the operators as the data flows across them, until a blocking operator is encountered in the plan. The blocking operators are EX_SORT, EX_HASH_JOIN and EX_HASH_GRBY.

The operator CPU time is the sum of the CPU time spent in the operator in the executor thread of all the processes hosting the operator. Operator CPU time is real, measured in microseconds; it is NOT a relative number. It does not include the CPU time spent by other threads executing tasks on behalf of the executor thread. Usually, a Trafodion executor instance runs in a single thread, and the engine can have multiple executor instances running in a process to support multi-threaded client applications. Most notably, the Trafodion engine uses a thread pool to pre-fetch rows while rows are fetched sequentially. It is also possible that HBase uses thread pools to complete the operations requested by Trafodion. These thread timings are not included in the operator CPU time. To account for this, RMS provides additional counters in a different view: the pertable view.



GET STATISTICS FOR QID  PERTABLE provides the following counters:

HBase/Hive IOs: the number of messages sent to HBase Region Servers (RS).

HBase/Hive IO MBytes: the cumulative size of these messages in MB, accounted at the Trafodion layer.

HBase/Hive Sum IO Time: the cumulative time taken in microseconds by the RS to respond, summed up across all ESPs.

HBase/Hive Max IO Time: the maximum of the cumulative time taken in microseconds by the RS to respond for any ESP. This gives an indication of how much of the elapsed time is spent in HBase, because the messages to the RS are blocking.

The Sum and Max IO times are elapsed times, measured as wall clock time in microseconds.



The max IO time should be less than the elapsed (response) time of the query. If the max IO time is close to the elapsed time, then most of the time is spent in HBase.

The sum IO time should be less than DOP * elapsed time.

The operator time is CPU time.

I sincerely hope you will find the above information useful for digesting the output from RMS. I would say that reading, analyzing and interpreting the output from RMS is an art that you develop over time, and it is always difficult to document every usage scenario. If you find something that needs to be added or isn't correct, please let us know.



Selva


From: Liu, Ming (Ming) [mailto:ming@esgyn.cn]
Sent: Tuesday, March 8, 2016 5:41 PM
To: user@trafodion.incubator.apache.org
Subject: how to tell the most time-consuming part for a given Trafodion query 
plan?

Hi, all,

We have been running some complex queries using Trafodion and need to analyze their performance. One question: if we want to know which part of the plan takes the longest time, is there any good tool or technique to answer this?

I can use 'get statistics for qid  default' to get runtime stats, but it is rather hard to interpret the output. I assume the "Oper CPU Time" is the best one we can trust? But I am not sure whether it is pure CPU time or also includes 'waiting time'. If I want to know the whole time an operation takes from start to end, i

How to specify default charset of column in DDL create table?

2016-04-06 Thread Liu, Ming (Ming)
Hi, all,

When creating a table in Trafodion, CHAR/VARCHAR columns have ISO88591 as the default charset. Is there any way to change this default behavior, so that by default CREATE TABLE uses UTF8 as the charset, without explicitly specifying it in the DDL?

Thanks,
Ming



Trafodion support of TO_DATE

2016-03-19 Thread Liu, Ming (Ming)
Hi, all,

I know there will be a new feature to support TO_DATE. Currently, without this feature, is there any way to migrate a query that uses TO_DATE?

Example:
insert into myTbl (staff_id, 
orbat_code, operate_time, ip_arr, browser_version, orbat_desc) 
values('ADMIN', 'V1369930001010', to_date('20160317114706', 'yyyy-MM-dd 
hh24:mi:ss')


thanks,
Ming


How to implement a TO_DATE UDF in Trafodion

2016-03-19 Thread Liu, Ming (Ming)
Hi, all,

I just learned from Anoop that TO_DATE support will be available in Trafodion R2.0; that is great! At present, due to the urgent requirements of the current migration project, we cannot wait, so I want to write a UDF to do the TO_DATE.
When I tried to write it, I found that a UDF can seemingly only return simple data types like INT and VARCHAR; I cannot find a definition for returning a TIMESTAMP/DATE. Could anyone help here?

In sqludr.h, I cannot find a corresponding data type for DATE/TIMESTAMP.
So I don't know how to define the UDF's entry function; an example is below:

SQLUDR_LIBFUNC SQLUDR_INT32 to_date(SQLUDR_VC_STRUCT *srcStr,//input string
SQLUDR_CHAR *pattern, //date format
??? *out1, //the output
SQLUDR_INT16 *inInd1,
SQLUDR_INT16 *inInd2,
SQLUDR_INT16 *outInd1,
SQLUDR_TRAIL_ARGS)

How can I specify the out1 type to be a DATE or TIMESTAMP? I marked it with '???'.

Thanks in advance,
Ming


Re: How to implement a TO_DATE UDF in Trafodion

2016-03-23 Thread Liu, Ming (Ming)
Thanks Hans,

Yes, SQLUDR_CHAR can support input/output values as TIMESTAMP. This solves my problem.
So a Trafodion UDF can support TIMESTAMP as both an input parameter and an output; this is great!

Thanks,
Ming

From: Hans Zeller [mailto:hans.zel...@esgyn.com]
Sent: March 24, 2016 1:34
To: user@trafodion.incubator.apache.org
Subject: Re: How to implement a TO_DATE UDF in Trafodion
主题: Re: How to implement a TO_DATE UDF in Trafodion

Hi Ming, I think the C data type corresponding to datetime values is 
SQLUDR_CHAR. You create a string that looks like "2016-03-23 01:23:45" (this 
example is for a TIMESTAMP(0)) and return that. Trafodion will convert it to 
the datetime type.

Sorry, I have not tested it but hope it will work.

Hans

On Thu, Mar 17, 2016 at 8:43 PM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:
Hi, all,

I just learned from Anoop that TO_DATE support will be available in Trafodion R2.0; that is great! At present, due to the urgent requirements of the current migration project, we cannot wait, so I want to write a UDF to do the TO_DATE.
When I tried to write it, I found that a UDF can seemingly only return simple data types like INT and VARCHAR; I cannot find a definition for returning a TIMESTAMP/DATE. Could anyone help here?

In sqludr.h, I cannot find a corresponding data type for DATE/TIMESTAMP.
So I don't know how to define the UDF's entry function; an example is below:

SQLUDR_LIBFUNC SQLUDR_INT32 to_date(SQLUDR_VC_STRUCT *srcStr,//input string
SQLUDR_CHAR *pattern, //date format
??? *out1, //the output
SQLUDR_INT16 *inInd1,
SQLUDR_INT16 *inInd2,
SQLUDR_INT16 *outInd1,
SQLUDR_TRAIL_ARGS)

How can I specify the out1 type to be a DATE or TIMESTAMP? I marked it with '???'.

Thanks in advance,
Ming



Re: Does Trafodion support to use 'order by' in a subquery

2016-03-24 Thread Liu, Ming (Ming)
Thanks Rohit,

That is good to know. After thinking about it again, yes, it is wrong in the first place. I tested in Hive and it doesn't support that syntax either, but it seems some RDBMSs (Oracle, DB2) allow this syntax and simply ignore it.

However, it would be more user friendly if the optimizer or parser gave a warning or ignored it, since other databases allow it; but because the syntax is wrong in the first place, this 'enhancement' is not necessary. It would just seem more flexible if Trafodion could do that.

Thanks,
Ming

From: Rohit [mailto:rohit.j...@esgyn.com]
Sent: March 24, 2016 18:12
To: Liu, Ming (Ming) <ming@esgyn.cn>; user@trafodion.incubator.apache.org
Subject: RE: Does Trafodion support to use 'order by' in a subquery

ORDER BY is allowed only on the outermost SELECT in ANSI. It is meant to define the order of the final results of the query returned to the user. It is not allowed in subqueries.

Rohit


 Original message 
From: "Liu, Ming (Ming)" <ming@esgyn.cn>
Date: 03/24/2016 2:15 AM (GMT-06:00)
To: user@trafodion.incubator.apache.org
Subject: Does Trafodion support to use 'order by' in a subquery
Hi, all,

I don't know if this is the same in other databases, but in Trafodion, if I want a subquery with an 'order by', it seems not to be allowed. Is this normal, or a limitation in Trafodion?
For example:

create table t1 ( c1 int, c2 int);

select * from (
   Select * from t1 order by c1);
*** ERROR[15001] A syntax error occurred at or before:
select * from (select * from t1 order by c1);
   ^ (44 characters from start of SQL 
statement)

Thanks in advance,
Ming



Re: Re: RMS questions

2016-03-02 Thread Liu, Ming (Ming)
Very nice document, Gunnar; I didn't realize this existed. I read it through and it is very informative and helpful.
It looks very good to me; maybe Selva can review it, as Gunnar suggested.

Thanks,
Ming

From: Gunnar Tapper [mailto:tapper.gun...@gmail.com]
Sent: March 3, 2016 6:42
To: user@trafodion.incubator.apache.org
Subject: Re: Re: RMS questions

Hi,

Also, RMS is documented in the Trafodion SQL Reference Manual: 
http://trafodion.apache.org/docs/sql_reference/index.html#displaying_sql_runtime_statistics

Selva: Please let me know if this documentation needs to be updated. If so, 
send me the info and I'll incorporate it into this guide.

Thanks,

Gunnar

On Wed, Mar 2, 2016 at 3:37 PM, Liu, Ming (Ming) 
<ming@esgyn.cn<mailto:ming@esgyn.cn>> wrote:
This is a very clear and good answer; it solves all my confusion! I now
understand how it works, and I also understand the relationship among the ODBC
collected stats, the repository, and RMS. Thanks a lot, Selva!

From: Selva Govindarajan
[mailto:selva.govindara...@esgyn.com<mailto:selva.govindara...@esgyn.com>]
Sent: March 3, 2016 3:35
To:
user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>
Subject: RE: RMS questions
主题: RE: RMS questions

Hi Ming,

We are sorry for the delayed response.

Please see my responses embedded.

From: Liu, Ming (Ming) [mailto:ming@esgyn.cn<mailto:ming@esgyn.cn>]
Sent: Saturday, February 27, 2016 8:05 PM
To: 
user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>
Subject: RMS questions

Hi, all,

I am trying to gather query’s run-time statistics using RMS command ‘get 
statistics’. It works fine, but I have some questions below:

As I understand it, RMS saves stats for a given query in shared memory, so it
cannot save all of the history. It only saves CURRENT running queries’ stats.
Is this true?
[Selva] RMS uses the shared segment to provide near-real-time statistics for
the query. The metrics are captured at the relevant components in near real
time and updated in the shared segment directly while the query is being
executed. RMS doesn’t poll for metrics collection; it is the infrastructure for
providing real-time statistics.
For a long-running query, I can start another session and use ‘get statistics
for qid xxx’ to periodically get the stats. For a short-running query
(finishing in ms), it seems hard to start another session, find the qid, and
run ‘get statistics’. I think there is a small time window during which one can
still get stats for a query after it has finished.
[Selva] For short-running queries, you can get the statistics after the query
is completed, before the next query is run in the same session, using the
command “get statistics for qid current”. If the query is issued from a
non-interactive application, then you might be able to get some kind of summary
info from the Trafodion repository, if it is enabled.
What is that time window, 30 seconds?
[Selva] Generally, the statistics are retained until the statement is
deallocated. The server deallocates the statement only when the user initiates
SQLDrop or Statement.close, or the connection is closed, or the statement
object on the client side is somehow garbage collected and triggers resource
deallocation on the server side. RMS extends the statistics lifetime a bit
more, until the next statement is prepared or executed in the same session
after the statement is deallocated. In the case of a non-interactive
application, this time period could be very short.
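A minimal sketch of that flow in a trafci/sqlci session (the query itself is a made-up example):

```sql
-- run the short query in the session
select count(*) from t1;

-- then, before the next statement is prepared or executed in the SAME session,
-- fetch its statistics without having to look up the QID from another session
get statistics for qid current;
```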


If I have a busy system with a TPS like 3000 queries/s, can RMS save all of
them for 30 seconds? That seems huge, and memory is limited. If it works like a
ring buffer or cache (aging out the oldest entries), what is the strategy RMS
uses to keep stats or age them out?
[Selva] As I said earlier, RMS is an infrastructure that aids in providing
real-time statistics; it is not a statistics-gathering tool. In Trafodion, Type
4 JDBC applications and ODBC applications use the common infrastructure DCS to
execute queries. DCS is capable of providing summary info or detailed query
statistics based on the configuration settings in DCS.
What will happen if all active queries run RMS out of memory? I know we can
enlarge the size of that memory, but I do not know exactly how; any
instructions? With those instructions, how can one calculate the required
memory size if one knows how many queries one wants to save?
[Selva] The default size of the RMS shared segment is 64 MB. We have been able
to manage within this space for hundreds of concurrent queries because RMS
kicks in garbage collection every 10 minutes to GC any orphaned statistics
info. Statistics can become orphaned if the server component went away abruptly
or the server component itself failed to deallocate resources. Of course, a
badly written application that doesn’t deallocate statements can make the RMS
shared segment become full. RMS relies on the trusted DCS components / Type 2
JDBC driver to put some capacity limit

converting INTEGER to CHAR in Trafodion

2016-03-30 Thread Liu, Ming (Ming)
Hi, all,

I know it would be much easier to do in C or Java after getting the result set,
but Oracle users are used to the to_char() function. Is there a similar
function in Trafodion?

Or maybe we can write a UDF for it, but the problem with a UDF is that it
cannot support polymorphism; that is, one can only define the syntax/signature
of a UDF once. Is that true?
For example, one cannot define a TO_CHAR that can handle both:
TO_CHAR(123), returning '123'.
TO_CHAR(date'2001/11/11','/MM/DD'), returning '2001-11-11'.

It seems rather difficult to replace Oracle's TO_CHAR completely.

Or did I miss something? Thanks in advance.

Thanks,
Ming





Re: converting INTEGER to CHAR in Trafodion

2016-03-30 Thread Liu, Ming (Ming)
Thanks Anoop for the help again :) Yes, I forgot CAST …

发件人: Anoop Sharma [mailto:anoop.sha...@esgyn.com]
发送时间: 2016年3月30日 22:04
收件人: user@trafodion.incubator.apache.org
主题: RE: converting INTEGER to CHAR in Trafodion

Trafodion supports to_date and to_char to convert datetime values from string
to datetime, or datetime to string.
It does not support all of Oracle's to_date/to_char functionality and formats,
but a subset of it.
One can also use the CAST function to convert a numeric from/to a string.

For the two examples, you can use to_char to convert from DATE to string,
and use CAST to convert from numeric to string.

>>select to_char(date '2016-10-10', '/MM/DD') from (values(1)) X(a);

(EXPR)
--

2016/10/10

--- 1 row(s) selected.
>>select cast(123 as char(3)) from (values(1)) x(a);

(EXPR)
--

123

--- 1 row(s) selected.
>>

From: Liu, Ming (Ming) [mailto:ming@esgyn.cn]
Sent: Wednesday, March 30, 2016 5:12 AM
To: 
user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>
Subject: converting INTEGER to CHAR in Trafodion

Hi, all,

I know it would be much easier to do in C or Java after getting the result set,
but Oracle users are used to the to_char() function. Is there a similar
function in Trafodion?

Or maybe we can write a UDF for it, but the problem with a UDF is that it
cannot support polymorphism; that is, one can only define the syntax/signature
of a UDF once. Is that true?
For example, one cannot define a TO_CHAR that can handle both:
TO_CHAR(123), returning ‘123’.
TO_CHAR(date’2001/11/11’,’/MM/DD’), returning ‘2001-11-11’.

It seems rather difficult to replace Oracle’s TO_CHAR completely.

Or did I miss something? Thanks in advance.

Thanks,
Ming





How DCS will response if a connected client crash?

2016-03-29 Thread Liu, Ming (Ming)
Hi, all,

Say Trafodion is configured to have a maximum of 10 concurrent connections,
that is, 10 mxosrvrs.
Now 10 clients are connected, and 1 of them crashes due to some defect of its
own, so it doesn't call the normal connection close() operation. Will this
connection be reused by another new client that tries to connect immediately?

Or say one client has a problem in the network; for example, its network cable
was unplugged. Will its connection be detected as closed, and will that
connection be reused by a new one?

Thanks,
Ming



RE: MDAM on index

2016-03-28 Thread Liu, Ming (Ming)
Thanks all,

I wanted to confirm that if all conditions are met, index access can also use
MDAM. It is supported, and that is great!

I believe in practice, if all the PKs and indexes still cannot cover the query
pattern, it is time to check the design ☺

Thanks,
Ming
From: Qifan Chen [mailto:qifan.c...@esgyn.com]
Sent: March 28, 2016 23:27
To: user@trafodion.incubator.apache.org
Cc: Dave Birdsall <dave.birds...@esgyn.com>
Subject: Re: MDAM on index

The scan optimizer picks the MDAM scan or the subset scan based on cost. For
MDAM to win, low UEC on the leading key columns is a precondition.

Thanks --Qifan

On Mon, Mar 28, 2016 at 10:23 AM, Rohit 
<rohit.j...@esgyn.com<mailto:rohit.j...@esgyn.com>> wrote:
And remember, the key available for MDAM in a secondary index includes the
secondary index columns followed by the primary key columns, or c3, c4, c1, c2
in this case. The same MDAM rules should apply to the secondary index as to the
clustering index, since it's a clustering index too.

Rohit


 Original message 
From: Dave Birdsall <dave.birds...@esgyn.com<mailto:dave.birds...@esgyn.com>>
Date: 03/28/2016 10:12 AM (GMT-06:00)
To: 
user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>
Subject: RE: MDAM on index
Hi,

In principle at least, MDAM should be possible with Query 2. Whether it is a
good plan or not depends on many things. If the UEC of column c3 is high, then
MDAM on the index for a predicate on c4 may not be a good choice. If the query
accesses other columns in the base table besides c3 and c4, then there is an
extra join using index access, which raises the cost. It still might be a good
plan, though; for example, if there is a highly selective predicate on c3 and
c4, resulting in just a few accesses to the base table, then it still may be
good. Your mileage will vary.

Dave

From: Liu, Ming (Ming) [mailto:ming@esgyn.cn<mailto:ming@esgyn.cn>]
Sent: Monday, March 28, 2016 5:12 AM
To: 
user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>
Subject: MDAM on index

Hi, all,

If we create a table t(c1, c2, c3, c4, c5, primary key(c1, c2)) and then
create an index indx on t(c3, c4).
Query 1: select * from t where c2 =10;
Query 2: select * from t where c4 = 10;
I think Query 1 will use MDAM, can Query 2 use MDAM to access indx as well?

Thanks,
Ming




--
Regards, --Qifan



RE: add a comment to a table

2016-04-01 Thread Liu, Ming (Ming)
Shall we file a JIRA to track this requirement?

Thanks,
Ming

From: Suresh Subbiah [mailto:suresh.subbia...@gmail.com]
Sent: April 1, 2016 11:54
To: user@trafodion.incubator.apache.org
Subject: Re: add a comment to a table

Hi,

I do not think Trafodion currently supports Oracle's COMMENT ON syntax
https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_4009.htm

It will be a relatively simple feature to add but we do not have it today as 
far as I know.

Thank you
Suresh


On Thu, Mar 31, 2016 at 9:56 PM,
yongqiang.z...@microinsight.com.cn
wrote:
Hi , all,


I want to add a comment to a table, like Oracle's "comment on table tablename
is 'annotate'". How can I implement this via JDBC?





yongqiang.z...@microinsight.com.cn



RE: HDFS/HBase/Zookeeper Settings set by Installer

2016-05-03 Thread Liu, Ming (Ming)
Yes, this is a very important topic.

I can confirm removing two of them:
Name: hbase.bulkload.staging.dir
Value: /hbase-staging
Why:
Still Needed: NO

Name: hbase.regionserver.region.transactional.tlog
Value: true
Why:
Still Needed: NO

We need more test for :

Name: hbase_coprocessor_region_classes
Value: 
"org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionObserver,org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint,org.apache.hadoop.hbase.coprocessor.AggregateImplementation"
Why: Not fully tested to remove it.
Still needed: Yes

There is already a fix that can get rid of this setting, but we need more
testing; that is, remove it from the installer and see if there are any issues.
We tested it briefly and it works well, but it never got fully QAed. Trafodion
should now add the coprocessor at runtime if the installer is not doing this.

Another one I have a comment on:

Name: hbase.hregion.impl
Value: org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion
Why: The Trafodion coprocessor needs to invoke a private API of the HBase
Region, so we have to override this class for now. When Trafodion moves to
HBase 1.2, it may be possible to get rid of it, since HBase 1.2 makes public
the one method required for us.
See https://issues.apache.org/jira/browse/HBASE-15076?filter=-2
Still Needed: Yes.


IMHO, all the other settings should not be mandatory; they are kinds of
performance/stability tunings. However, I am not the author of those settings,
so I am not very sure; it would be better for others to comment further.

Yes, thanks Amanda for bringing this up, and I hope you can keep pushing this
topic. I feel some of the settings there are so old that nobody knows the
reason anymore, or it has simply been forgotten. I think testing is a solution:
remove a setting and test to see if there is any issue. But if someone knows
that a setting is MUST-KEEP or DELETE-IT, it is better to reply.

Thanks,
Ming

From: Pierre Smits [mailto:pierre.sm...@gmail.com]
Sent: May 3, 2016 15:20
To: user@trafodion.incubator.apache.org
Subject: Re: HDFS/HBase/Zookeeper Settings set by Installer

Hi Amanda,

Thanks for bringing this up. Especially the 'why' aspect. This will be good 
input for the documentation.

Best regards,

Pierre Smits

ORRTIZ.COM
OFBiz based solutions & services

OFBiz Extensions Marketplace
http://oem.ofbizci.net/oci-2/

On Mon, May 2, 2016 at 10:54 PM, Amanda Moran wrote:
Hi there All-

I have been looking over the HDFS/HBase/Zookeeper settings that get set in the 
installer and I am wondering if they are all still needed.

If you have requested a setting in the past, could you please add a description 
of why it is needed (and if it is still needed)?

Thanks a bunch!

**Note: I know this would have looked much better in a spreadsheet, but I want
to make sure everyone can see it... and email is best for that!

HDFS Settings

Name: namenode_java_heapsize
Value: 1GB (or 1073741824 bytes)
Why:
Still needed:

Name: secondary_namenode_java_heapsize
Value: 1 GB (or 1073741824 bytes)
Why:
Still needed:

Name: dfs_namenode_acls_enabled
Value: true
Why:
Still needed:

HBase Master Settings

=HBase Master Config Safety Valve=

Name: hbase_master_distributed_log_splitting
Value: false
Why:
Still needed:

Name: hbase_snapshot_master_timeoutMillis
Value: 60
Why:
Still needed:

HBase Region Server Settings

Name: hbase_coprocessor_region_classes
Value: 
"org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionObserver,org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint,org.apache.hadoop.hbase.coprocessor.AggregateImplementation"
Why:
Still needed:

Name: hbase_regionserver_lease_period
Value: 60
Why:
Still Needed:

=HBase RegionServer Config Safety Valve=

Name: hbase.hregion.impl
Value: org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion
Why:
Still Needed:

Name: hbase.regionserver.region.split.policy
Value: org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy
Why:
Still Needed:

Name: hbase.snapshot.enabled
Value: true
Why:
Still Needed:

Name: hbase.bulkload.staging.dir
Value: /hbase-staging
Why:
Still Needed:

Name: hbase.regionserver.region.transactional.tlog
Value: true
Why:
Still Needed:

Name: hbase.snapshot.region.timeout
Value: 60
Why:
Still Needed:

Zookeeper Settings

Name: maxClientCnxns
Value: 0
Why:
Still Needed:

--
Thanks,

Amanda Moran



RE: install trafodion on 2 nodes cluster

2016-08-08 Thread Liu, Ming (Ming)
Hi,

IMHO, 2 nodes is not a recommended configuration, but it is OK for study and
test.

For Trafodion itself, all of its components are peer-to-peer except the DCS
master, and even that is rather transparent to the end user. During
installation, you don't need to explicitly specify which components run on
which node. You just enter the list of nodes at the trafodion_install prompt,
and the installer will take care of the rest.

But before you install Trafodion, you have to install a Hadoop system with at
least HDFS, HBase, and Hive. The recommended way is to install a good Hadoop
distribution like CDH.

For Hadoop, I don't have much experience with 2-node configurations, but it
seems there are not too many choices there. You can install all the 'masters'
on one node and the 'slaves' on both nodes; for example, 1 name node and 2 data
nodes. You may also need to change the HDFS replication factor from 3 to 2, but
I am not an expert on HDFS. With CDH, this should not be too difficult.
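As a sketch, the replication change would look like the fragment below; the property name is standard HDFS, and whether you set it in hdfs-site.xml or through the CDH manager UI depends on your setup:

```xml
<!-- hdfs-site.xml: lower block replication from the default 3 to 2 so a
     2-node cluster does not report permanently under-replicated blocks -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```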

Other people may have better suggestions.

Thanks,
Ming

From: 乔彦克 [mailto:qya...@gmail.com]
Sent: Monday, August 08, 2016 11:15 AM
To: user@trafodion.incubator.apache.org
Subject: install trafodion on 2 nodes cluster

Hello team,
I want to install trafodion for study and test. I have 2 nodes; can someone
tell me how to install the different components across the 2 machines?
Any advice will be appreciated.
Qiao Yanke


RE: create table failed

2016-08-17 Thread Liu, Ming (Ming)
Hi, Qiao,

This is a defect; would you please help file a JIRA?

I can reproduce it, and I will do some investigation into this issue.

One workaround for you is to change the DDL a little:
  uid VARCHAR(255) to VARCHAR(254)

Hope it works for you.

Thanks,
Ming

From: 乔彦克 [mailto:qya...@gmail.com]
Sent: Wednesday, August 17, 2016 6:23 PM
To: user@trafodion.incubator.apache.org
Subject: create table failed

Hi,all
Now I've run into a new problem. Since I have dates in my columns, I want to
try the division feature of Trafodion.
I used the SQL below to create tables, but only got these errors:
 "*** ERROR[29157] There was a problem reading from the server
*** ERROR[29160] The message header was not long enough
*** ERROR[29157] There was a problem reading from the server
*** ERROR[29160] The message header was not long enough".
Can someone help me or show me the error? Many thanks.

sql:
"CREATE TABLE page (
  sid varchar(255) CHARACTER SET UTF8 NOT NULL DEFAULT '',
  v_date timestamp(6) NOT NULL,
  uid varchar(255)  CHARACTER SET UTF8 NOT NULL,
  vid int unsigned NOT NULL,
  stime int unsigned NOT NULL,
  etime int unsigned NOT NULL,
  pid bigint  NOT NULL,
  cnum int unsigned NOT NULL DEFAULT 0,
  enum int unsigned NOT NULL DEFAULT 0,
 primary key (sid,v_date desc,uid,vid)
)
salt using 4 partitions on (sid,v_date,uid,vid)
division by (date_trunc('day', v_date))
HBASE_OPTIONS( DATA_BLOCK_ENCODING = 'FAST_DIFF',
COMPRESSION='GZ',
MEMSTORE_FLUSH_SIZE = '1073741824');"

Any reply is appreciated!
Thank you.
Qiao


RE: command

2016-09-12 Thread Liu, Ming (Ming)
Hi, Forling,

You can try searching the system metadata; here is an example:

select ROW_TOTAL_LENGTH, ROW_DATA_LENGTH from "_MD_".tables, "_MD_".objects
where "_MD_".objects.OBJECT_UID = "_MD_".tables.table_uid and
objects.OBJECT_NAME='your_table_name';

You can also check the other columns in "_MD_".tables to see if there is other
info you need.

Others may have a better approach, since I heard there are some newly defined
system dictionary tables and views, or maybe some new utility to grab this
information. But the query above is one method.

Thanks,
Ming

From: Dido_vansa [mailto:523766...@qq.com]
Sent: Monday, September 12, 2016 2:52 PM
To: user ; dev 

Subject: command

Hi!

I have a question about a SQL command in Trafodion.
I want to obtain the row length of a table, but I do not understand which
command I can use in this case.
I'm looking forward to your reply.


Best regards,
Forling


RE: Load with log error rows gets Trafodion not work

2016-09-08 Thread Liu, Ming (Ming)

Not sure if this log info helps to find the root cause of the metadata
corruption? I am still investigating.

Thanks,
Ming

From: 乔彦克 [mailto:qya...@gmail.com]
Sent: Friday, September 09, 2016 11:27 AM
To: d...@trafodion.incubator.apache.org; user@trafodion.incubator.apache.org
Cc: Amanda Moran <amanda.mo...@esgyn.com>; Selva Govindarajan 
<selva.govindara...@esgyn.com>; Liu, Ming (Ming) <ming@esgyn.cn>
Subject: Re: Load with log error rows gets Trafodion not work

Thanks to Selva and Amanda. I loaded three data sets from Hive into Trafodion
yesterday; the other two succeeded and the last one got the error.
This error meant that I could not execute any query from trafci except
"initialize trafodion, drop" (thanks @Liuming for telling me to do so). Ming
analyzed the HBase log and found that the data region belonging to Trafodion
could not be opened.
After I initialized Trafodion again, I reloaded the three data sets and it went
well.

@Selva, Trafodion and HBase are running normally, and below is the output of
'sqvers -u':
   perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
cat: /opt/hptc/pdsh/nodes: No such file or directory
MY_SQROOT=/home/trafodion/apache-trafodion_server-2.0.1-incubating
who@host=trafodion@hadoop2slave7
JAVA_HOME=/usr/lib/jvm/jdk1.7.0_67
linux=2.6.32-220.el6.x86_64
redhat=6.2
NO patches
Most common Apache_Trafodion Release 2.0.1 (Build release [DEV], branch -, date 
24Jun16)
UTT count is 2
[8]Apache_Trafodion Release 2.0.1 (Build release [DEV], branch 
release2.0, date 24Jun16)
 export/lib/hbase-trx-apache1_0_2-2.0.1.jar
 export/lib/hbase-trx-hdp2_3-2.0.1.jar
 export/lib/sqmanvers.jar
 export/lib/trafodion-dtm-apache1_0_2-2.0.1.jar
 export/lib/trafodion-dtm-hdp2_3-2.0.1.jar
 export/lib/trafodion-sql-apache1_0_2-2.0.1.jar
 export/lib/trafodion-sql-hdp2_3-2.0.1.jar
 export/lib/trafodion-utility-2.0.1.jar
[3]Release 2.0.1 (Build release [DEV], branch release2.0, date 24Jun16)
 export/lib/jdbcT2.jar
 export/lib/jdbcT4.jar
 export/lib/lib_mgmt.jar

@Amanda:
The HDFS /user directory does not contain the user trafodion, just root and
hive. But I can load and insert data into Trafodion, so I don't think the
problem is there.

Thank you for your replies.
Many thanks again,
Qiao



Amanda Moran
<amanda.mo...@esgyn.com<mailto:amanda.mo...@esgyn.com>> wrote on Friday,
September 9, 2016 at 1:03 AM:
Please run this command:

sudo su hdfs --command "hadoop fs -ls /user"

Please verify you have the trafodion user id listed there.

Thanks!

Amanda

On Thu, Sep 8, 2016 at 8:08 AM, Selva Govindarajan <
selva.govindara...@esgyn.com<mailto:selva.govindara...@esgyn.com>> wrote:

> Hi Qiao,
>
>
>
> The JIRA you mentioned in the message is already fixed and merged to
> Trafodion on July 20th.  It is unfortunate that this JIRA wasn’t marked
> as resolved. I have marked it as resolved now. This JIRA deals with the
> issue of trafodion process aborting when there is an error while logging
> the error rows. The error rows are logged in hdfs directly.  Most likely
> the “Trafodion” user has no write permission to the hdfs directory where
> the error is logged.
>
>
>
> You can try “Load with continue on error … “  command instead and check if
> it works.
>
>
>
> Can you also please send the output of the command below to confirm if the
> version installed has the above fix.
>
>
>
> sqvers -u
>
>
>
> Can you also issue the following command to confirm if the Trafodion and
> hbase are started successfully.
>
>
>
> hbcheck
>
> sqcheck
>
>
>
>
>
> Selva
>
> *From:* 乔彦克 [mailto:qya...@gmail.com<mailto:qya...@gmail.com>]
> *Sent:* Thursday, September 8, 2016 12:20 AM
> *To:* 
> user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>;
>  dev@trafodion.incubator.apache<mailto:dev@trafodion.incubator.apache>
> .org
> *Subject:* Load with log error rows gets Trafodion not work
>
>
>
> Hi, all,
>
>I used load with log error rows to load data from hive, and got the
> following error:
>
> [image: loaderr.png]
>
> which leading to hbase-region server crashed.
>
> I restart Hbase region serve and Trafodion, but query in Trafodion has no
> response, even the simplest query "get tables;"  or " get schemas".
>
> Can someone help me to let Trafodion go normal?
>
> https://issues.apache.org/jira/browse/TRAFODION-2109, this jira describe
> the same problem.
>
>
>
> Any reply is appreciated.
>
> Thank you
>
> Qiao
>



--
Thanks,

Amanda Moran


RE: Trafodion meta table region in hbase cannot be opened

2016-09-22 Thread Liu, Ming (Ming)
Hi, Qiao,

Before this region failed to open, did you do a bulkload from Hive?
I know you hit the same issue several times before, but with a different Java
error stack each time, so I want to confirm with you.
You pasted the error stack from sqlci; is it possible to find the
corresponding error stack in the Region Server log?

Last time, it was something like the stack below; could you find the same issue
this time? So, two questions: did you do a bulkload? And could you find the
same error stack?

2016-09-08 16:44:36,327 ERROR [RS_OPEN_REGION-hadoop2slave7:60020-0] 
handler.OpenRegionHandler:
Failed open of 
region=TRAFODION._MD_.COLUMNS,,1471946223350.b6191867e73d4203d3ac6fad3c860138.,
starting to roll back the global memstore size.
org.apache.hadoop.hbase.DroppedSnapshotException: region: 
TRAFODION._MD_.COLUMNS,,1471946223350.b6191867e73d4203d3ac6fad3c860138.
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2243)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1972)
at 
org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:3826)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:969)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:841)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:814)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5828)
at 
org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion.openHRegion(TransactionalRegion.java:101)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5794)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5765)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5721)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5672)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:356)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:126)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.AssertionError: Key \xB9"b*M3c\x00ADMCKID ... 
/#1:\x01/1473306352163/Put/vlen=8/seqid=1749 followed
by a smaller key \xB9"b*M3c\x00ADMCKID ... 
/#1:\x01/1473306352163/Put/vlen=8/seqid=4003 in cf #1
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.checkScanOrder(StoreScanner.java:699)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:493)
at 
org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:115)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:71)
at 
org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:940)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2217)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2197)
... 17 more

Ming
-Original Message-
From: 乔彦克 [mailto:qya...@gmail.com] 
Sent: Friday, September 23, 2016 9:59 AM
To: d...@trafodion.incubator.apache.org; user@trafodion.incubator.apache.org
Cc: Dave Birdsall 
Subject: Re: Trafodion meta table region in hbase cannot be opened

Thanks for your reply, Dave.
A scan of "TRAFODION._MD_.VERSIONS" doesn't work because the table's region is
not online and cannot be assigned.
There was nothing I could do but delete all the Trafodion tables in HBase;
after the clearing work, I restarted all the services, and then the system
worked OK.
I've gotten stuck on such problems several times due to regions failing to
open. There may be some bugs when loading data into Trafodion from Hive; I
don't know for sure.

Best Regards,
Qiao

Dave Birdsall 

how to setup/enable/disable AQR

2017-08-08 Thread Liu, Ming (Ming)
Hi, all,

I want to disable AQR in Trafodion; is this possible? Or how should I disable
AQR for a specific SQL error?

Thanks,
Ming


RE: how to setup/enable/disable AQR

2017-08-09 Thread Liu, Ming (Ming)
thanks Anoop and Sandhya, this really helps.

Best Regards,
Ming

-Original Message-
From: Anoop Sharma [mailto:anoop.sha...@esgyn.com] 
Sent: Wednesday, August 09, 2017 10:08 PM
To: user@trafodion.incubator.apache.org; d...@trafodion.incubator.apache.org
Subject: RE: how to setup/enable/disable AQR

To disable AQR for a particular error (for example, error 8551), do:
  set session default aqr_entries '- 8551';

To add an error number, do:
  set session default aqr_entries '+ 8551';

Other aqr parameters can also be added.
For example,
  set session default aqr_entries '+ 8551, 73, 2, 120';
will do AQR for 8551/73 (primary error 8551, secondary error 73), and will
retry 2 times, with a delay of 120 secs between each retry.

To see all current aqr errors, do:
  get all aqr entries;

These set statements apply to that session only.
They cannot be added to the defaults table.


anoop

-Original Message-
From: Sandhya Sundaresan [mailto:sandhya.sundare...@esgyn.com] 
Sent: Tuesday, August 8, 2017 10:39 PM
To: d...@trafodion.incubator.apache.org; user@trafodion.incubator.apache.org
Subject: RE: how to setup/enable/disable AQR

Hi Ming,

  Yes, use the CQD AUTO_QUERY_RETRY and set it to 'OFF'.
If you want to disable it completely, set it in the defaults table.

Sandhya

-Original Message-
From: Liu, Ming (Ming) [mailto:ming@esgyn.cn] 
Sent: Tuesday, August 8, 2017 10:36 PM
To: d...@trafodion.incubator.apache.org; user@trafodion.incubator.apache.org
Subject: how to setup/enable/disable AQR

Hi, all,

I want to disable AQR in Trafodion; is this possible? Or how should I disable
AQR for a specific SQL error?

Thanks,
Ming


RE: Error 8413 shows source hex value?

2017-08-04 Thread Liu, Ming (Ming)
hi, yuan,

The reason for the HEX display is that it can show every character correctly
(for example, Chinese characters) on any terminal, with any encoding settings.
For example, if the terminal is using UTF8 but the source is encoded in GBK,
and the data were not displayed in HEX, the terminal would show unreadable
characters, making it even harder to know what the real source is.
It is rather difficult to read, but it is a safe way to display the data. One
can develop tools to further interpret the HEX if needed, but the SQL engine
should not bother to do this display itself, IMHO.
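Such a tool can be a few lines on the client side. Here is a sketch in Python that tries a few candidate encodings; the encoding list is an assumption, and the hex string is the one from the error message quoted below:

```python
def decode_source_hex(hex_str, encodings=("utf-8", "gbk", "latin-1")):
    """Turn the 'Source data (in hex)' string from ERROR[8413] back into text.

    Tries each candidate encoding in order and returns (text, encoding);
    falls back to the raw bytes if none of them can decode the data.
    """
    raw = bytes.fromhex(hex_str)
    for enc in encodings:
        try:
            return raw.decode(enc), enc
        except UnicodeDecodeError:
            continue
    return raw, None

# Hex string taken from the ERROR[8413] message quoted below
text, enc = decode_source_hex("6c655f31343739313937313833393837383038")
print(text, enc)  # le_1479197183987808 utf-8
```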

thanks,
Ming

From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn]
Sent: Friday, August 04, 2017 3:22 PM
To: user@trafodion.incubator.apache.org
Subject: Error 8413 shows source hex value?

Hi Trafodioneers,

Sometimes we may have 8413 error, error message is as below,

[Error Code: -8413, SQL State: 22007]  *** ERROR[8413] The string argument 
contains characters that cannot be converted. Source data(in hex): 
6c655f31343739313937313833393837383038 [2017-08-04 14:21:24]

I am wondering why the source data is shown as a hex value, which is hard to
read. So I am suggesting: could we show the original character value here?

Best regards,
Yuan



RE: auto generated primary keys

2017-05-21 Thread Liu, Ming (Ming)
R2.1.0 supports identity columns as well; the documentation was just not
updated for R2.1.
If you do not specify a primary key in the DDL, Trafodion will generate a
hidden system column, ‘SYSKEY’, as the primary key (as Anoop mentioned below).
However, it is a random unique number; you cannot use it in a query as a key
for lookups, so it is recommended to define a primary key.

thanks,
Ming

From: pieter gmail [mailto:pieter.mar...@gmail.com]
Sent: Sunday, May 21, 2017 2:16 AM
To: user@trafodion.incubator.apache.org
Subject: Re: auto generated primary keys

Hi,

Yes, that is it, thanks.
I was reading the 2.1.0 docs, as that's the docker image I installed; I don't
see Identity Column mentioned there.
I'll try to install 2.2.0.

Is it correct then to define a auto increment primary key as,

CREATE TABLE identity_employee (
  id LARGEINT GENERATED ALWAYS AS IDENTITY,
  description VARCHAR(40),
  PRIMARY KEY (id)
);

As an aside, I see in section 3.17.1 that "A PRIMARY KEY constraint is required
in Trafodion SQL."
However, I see many examples where the primary key is not specified. Is there
some default behavior in this case?

Thanks
Pieter

On 20/05/2017 19:55, Eric Owhadi wrote:

Hi Pieter,

Did you look at section 5.11 Identity Column?

Is it what you are looking for?

Regards,

Eric



-Original Message-

From: pieter gmail [mailto:pieter.mar...@gmail.com]

Sent: Saturday, May 20, 2017 12:41 PM

To: user@trafodion.incubator.apache.org

Subject: RE: auto generated primary keys



Hi,



Does/will trafodion support auto generated primary keys?

I can not see any mention of it in the docs.



Thanks

Pieter



RE: docker

2017-05-21 Thread Liu, Ming (Ming)
Hi, Pieter,

There is no such plan as far as I know. Please kindly file a JIRA if you think
it would be useful, so developers can pick it up if it is high priority.
@Zhang, Yi (Eason), what do you think?

thanks,
Ming

-Original Message-
From: pieter gmail [mailto:pieter.mar...@gmail.com] 
Sent: Saturday, May 20, 2017 10:59 PM
To: user@trafodion.incubator.apache.org
Subject: RE: docker

Hi,

Are there plans for a docker image for version 2.1.0?

Thanks
Pieter


does trafodion have null value in the index?

2017-06-01 Thread Liu, Ming (Ming)
Hi, all,

I heard that some database indexes cannot store null values, so if the predicate
contains 'IS NULL' or 'IS NOT NULL', the index will not be used. Is this true
for Trafodion as well?

thanks,
Ming



RE: [NEWS]: We got noticed!!!

2017-05-06 Thread Liu, Ming (Ming)
This is really exciting news! Thanks for advertising Trafodion :-) It deserves
more attention.

thanks,
Ming

From: Pierre Smits [mailto:pierre.sm...@gmail.com]
Sent: Saturday, May 06, 2017 5:22 PM
To: user@trafodion.incubator.apache.org
Subject: Fwd: [NEWS]: We got noticed!!!

Forwarding to user@ to reach a broader audience!

Hi all,

GREAT NEWS

With our release announcement made to announce@a.o. we got
noticed by a researcher from Gartner. He asked the following question:

Hi – can you point me to any organizations using Trafodion who would be willing 
to talk about their experiences (anonymized is fine)?
And what can you tell me about the incubation status – are you guys expecting 
TLP anytime soon?

While sending the notification to announce@a.o. will have
us reaching a greater audience than just the general incubator
ml and our own, this is truly an opportunity (when referenced by Gartner in one
of their publications) to get better recognized by potential adopters, and
potentially to increase our contributor growth.

Therefore my request to you all (and please ask your company if you're not an
independent contributor):

Please provide your references (of adopters) that I may forward.

Of course I will name you/your company too. If you don't want to send it via
the public list, please send it to our
private@trafodion.a.o/private@trafodion.incubator.a.o
ml, or even directly to me.

Again, this is important for us given our current incubation status. It may 
help us move forward. So please help!

Best regards,

Pierre Smits


RE: trafodion odbc driver dump

2017-06-27 Thread Liu, Ming (Ming)
Thanks Jack,

This looks like a defect.
Please file a JIRA when you have free time, so we can track this issue.
Otherwise, we will create a JIRA to track it ourselves.

thanks,
Ming

From: Huang, Jack [mailto:jack.hu...@dell.com]
Sent: Wednesday, June 28, 2017 9:34 AM
To: user@trafodion.incubator.apache.org
Subject: trafodion odbc driver dump

Hi Trafodioner,
There is a core dump that may be related to the Trafodion ODBC driver. I was
testing a Trafodion workload with HammerDB, and after 5 hours the dump was
created. Please see the detailed log in the attached file.

[root@trafodion HammerDB-2.22]# file core.60054
core.60054: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 
'wish8.6 -file ./hammerdb.tcl auto Driver_TPCC.tcl', real uid: 0, effective 
uid: 0, real gid: 0, effective gid: 0, execfn: 
'/usr/local/HammerDB-2.22/bin/wish8.6', platform: 'x86_64'
[root@trafodion HammerDB-2.22]# gdb /usr/local/HammerDB-2.22/bin/wish8.6 
core.60054
..
Program terminated with signal 11, Segmentation fault.
#0  0x7f1809a79b7a in CHandleGlobal::validateHandle(short, void*) () from 
/usr/lib64/libtrafodbc_drvr64.so

(gdb) where
#0  0x7f1809a79b7a in CHandleGlobal::validateHandle(short, void*) () from 
/usr/lib64/libtrafodbc_drvr64.so
#1  0x7f1809aa6830 in ODBC::getDescSize(void*, short, short*) () from 
/usr/lib64/libtrafodbc_drvr64.so
#2  0x7f1809aa15fc in NeoNumResultCols(void*, short*) () from 
/usr/lib64/libtrafodbc_drvr64.so
#3  0x7f1809ad8959 in SQLNumResultCols () from 
/usr/lib64/libtrafodbc_drvr64.so
#4  0x003b3121f8d1 in SQLNumResultCols () from /usr/lib64/libodbc.so
#5  0x7f180a070cfe in GetResultSetDescription () from 
/usr/local/HammerDB-2.22/lib/tdbcodbc1.0.0/libtdbcodbc1.0.0.so
#6  0x7f180a0735ba in ResultSetConstructor () from 
/usr/local/HammerDB-2.22/lib/tdbcodbc1.0.0/libtdbcodbc1.0.0.so
#7  0x7f18177e2474 in TclOO_Class_Create () from ./lib/libtcl8.6.so
#8  0x7f18177dd411 in TclOOObjectCmdCore () from ./lib/libtcl8.6.so
#9  0x7f18176f0257 in TclNRRunCallbacks () from ./lib/libtcl8.6.so
#10 0x7f18176f1d9d in TclEvalEx () from ./lib/libtcl8.6.so
#11 0x7f18176f2156 in Tcl_EvalEx () from ./lib/libtcl8.6.so
#12 0x7f1810888d72 in ThreadEventProc () from 
/usr/local/HammerDB-2.22/lib/thread2.7.0/libthread2.7.0.so
#13 0x7f1817798a3f in Tcl_ServiceEvent () from ./lib/libtcl8.6.so
#14 0x7f1817798dab in Tcl_DoOneEvent () from ./lib/libtcl8.6.so
#15 0x7f1810887f3e in ThreadWaitObjCmd () from 
/usr/local/HammerDB-2.22/lib/thread2.7.0/libthread2.7.0.so
#16 0x7f18176f115d in TclNREvalObjv () from ./lib/libtcl8.6.so
#17 0x7f18176f1394 in Tcl_EvalObjv () from ./lib/libtcl8.6.so
#18 0x7f18176f1d9d in TclEvalEx () from ./lib/libtcl8.6.so
#19 0x7f18176f2156 in Tcl_EvalEx () from ./lib/libtcl8.6.so
#20 0x7f1810889550 in NewThread () from 
/usr/local/HammerDB-2.22/lib/thread2.7.0/libthread2.7.0.so
#21 0x003b30e07aa1 in start_thread () from /lib64/libpthread.so.0
#22 0x003b30ae8aad in clone () from /lib64/libc.so.6


Jack Huang
Dell EMC | CTD MRES Cyclone Group
mobile +86-13880577652
jack.hu...@dell.com





MDAM has sparse and dense mode, for string column, is it possible to use dense mode?

2017-10-20 Thread Liu, Ming (Ming)
Hi,

I know Trafodion can use MDAM to reduce the number of scanned rows. For probing,
there are two modes: sparse and dense. If the column is INT, a dense probe is
just a +1 operation, but if the column is VARCHAR or FLOAT, how can a dense
probe be done? Or should we say dense probing is only for INT data types?
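To illustrate the question, here is a rough conceptual sketch in Python (my own illustration, not Trafodion's actual MDAM implementation; the function names and data are made up) of the difference between the two probe styles:

```python
# Conceptual sketch of MDAM probing. Assume a sorted key column and a
# predicate key BETWEEN lo AND hi.

def dense_probe(sorted_keys, lo, hi):
    """Enumerate every possible value in [lo, hi] with +1 steps and look
    each one up. This only works for discrete types like INT, where a
    successor function exists."""
    keys = set(sorted_keys)
    return [v for v in range(lo, hi + 1) if v in keys]

def sparse_probe(sorted_keys, lo, hi):
    """Ask the storage layer for the next *actual* key at or after the
    current position. This works for any ordered type (VARCHAR, FLOAT,
    ...), since no successor function is needed."""
    return [k for k in sorted_keys if lo <= k <= hi]

keys = [3, 7, 8, 42, 90]
# Both modes find the same matching keys; they differ in how many
# probes it takes to get there.
assert dense_probe(keys, 5, 50) == sparse_probe(keys, 5, 50) == [7, 8, 42]
```

This is also why the question arises for VARCHAR and FLOAT: neither type has a natural "+1" successor, so a dense-style enumeration is not well defined for them in the way it is for INT.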

thanks,
Ming