[jira] [Commented] (PHOENIX-1103) Remove hash join special case for ChunkedResultIterator

2014-07-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069882#comment-14069882
 ] 

James Taylor commented on PHOENIX-1103:
---

+1. Makes sense, [~gabriel.reid]. Thanks for the patch!

 Remove hash join special case for ChunkedResultIterator
 ---

 Key: PHOENIX-1103
 URL: https://issues.apache.org/jira/browse/PHOENIX-1103
 Project: Phoenix
  Issue Type: Improvement
Reporter: Gabriel Reid
Assignee: Gabriel Reid
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: PHOENIX-1103.patch


 This is a follow-up issue to PHOENIX-539. There is currently a special case 
 which disables the ChunkedResultIterator in the case of a hash join. Disabling 
 the ChunkedResultIterator is needed because a hash join scan can return 
 multiple rows with the same row key.
 As discussed in the comments of PHOENIX-539, the ChunkedResultIterator should 
 be updated to only end a chunk between different row keys.
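 A rough sketch of the rule proposed above, with hypothetical names (not the 
 actual patch); a peeking iterator is assumed here:
 // Sketch only: consume rows into the current chunk, but refuse to close
 // the chunk until the next row has a different row key.
 while (delegate.hasNext()) {
     Tuple tuple = delegate.peek();                   // assumed peeking iterator
     if (rowsInChunk >= chunkSize
             && !Arrays.equals(rowKeyOf(tuple), lastRowKey)) {  // java.util.Arrays
         break;                                       // end chunks only between row keys
     }
     tuple = delegate.next();
     lastRowKey = rowKeyOf(tuple);                    // hypothetical helper
     rowsInChunk++;
     emit(tuple);                                     // hypothetical helper
 }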



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-998) SocketTimeoutException under high concurrent write access to phoenix indexed table

2014-07-22 Thread Vikas Vishwakarma (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069897#comment-14069897
 ] 

Vikas Vishwakarma commented on PHOENIX-998:
---

The client does scan all the records:
ResultScanner scanner = table.getScanner(scan);
Iterator<Result> iterator = scanner.iterator();
while (iterator.hasNext()) {
    Result next = iterator.next();
    next.getRow();
    next.getValue(Bytes.toBytes(historyColumnFamily),
        Bytes.toBytes(historyColumnQualifier));
    scancounter++;
}

Also, I don't see this issue when running the same test against an hbase-0.94 
build. The service logs in DEBUG mode are not giving me any more information 
on this. I will try to put some trace logs in the client and check further. 


 SocketTimeoutException under high concurrent write access to phoenix indexed 
 table
 --

 Key: PHOENIX-998
 URL: https://issues.apache.org/jira/browse/PHOENIX-998
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.0.0
 Environment: HBase 0.98.1-SNAPSHOT, Hadoop 2.3.0-cdh5.0.0
Reporter: wangxianbin
Priority: Critical

 We have a small HBase cluster with one master and six slaves. We test Phoenix 
 index concurrent write performance with four write clients; each client has 
 100 threads, and each thread has one Phoenix JDBC connection. We encounter 
 the SocketTimeoutException below, and it retries for a very long time. How 
 can I deal with such an issue?
 2014-05-22 17:22:58,490 INFO  
 [storm4.org,60020,1400750242045-index-writer--pool3-t10] client.AsyncProcess: 
 #16016, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
 tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
 tableName=IPHOENIX10M
 2014-05-22 17:23:00,436 INFO  
 [storm4.org,60020,1400750242045-index-writer--pool3-t6] client.AsyncProcess: 
 #16027, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
 tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
 tableName=IPHOENIX10M
 2014-05-22 17:23:00,440 INFO  
 [storm4.org,60020,1400750242045-index-writer--pool3-t1] client.AsyncProcess: 
 #16013, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
 tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
 tableName=IPHOENIX10M
 2014-05-22 17:23:00,449 INFO  
 [storm4.org,60020,1400750242045-index-writer--pool3-t7] client.AsyncProcess: 
 #16028, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
 tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
 tableName=IPHOENIX10M
 2014-05-22 17:23:00,473 INFO  
 [storm4.org,60020,1400750242045-index-writer--pool3-t8] client.AsyncProcess: 
 #16020, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
 tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
 tableName=IPHOENIX10M
 2014-05-22 17:23:00,494 INFO  [htable-pool20-t13] client.AsyncProcess: 
 #16016, table=IPHOENIX10M, attempt=12/350 failed 1 ops, last exception: 
 java.net.SocketTimeoutException: Call to storm3.org/172.16.2.23:60020 failed 
 because java.net.SocketTimeoutException: 2000 millis timeout while waiting 
 for channel to be ready for read. ch : 
 java.nio.channels.SocketChannel[connected local=/172.16.2.24:52017 
 remote=storm3.org/172.16.2.23:60020] on storm3.org,60020,1400750242156, 
 tracking started Thu May 22 17:21:32 CST 2014, retrying after 20189 ms, 
 replay 1 ops.
 2014-05-22 17:23:02,439 INFO  
 [storm4.org,60020,1400750242045-index-writer--pool3-t4] client.AsyncProcess: 
 #16022, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
 tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
 tableName=IPHOENIX10M
 2014-05-22 17:23:02,496 INFO  [htable-pool20-t3] client.AsyncProcess: #16013, 
 table=IPHOENIX10M, attempt=12/350 failed 1 ops, last exception: 
 java.net.SocketTimeoutException: Call to storm3.org/172.16.2.23:60020 failed 
 because java.net.SocketTimeoutException: 2000 millis timeout while waiting 
 for channel to be ready for read. ch : 
 java.nio.channels.SocketChannel[connected local=/172.16.2.24:52017 
 remote=storm3.org/172.16.2.23:60020] on storm3.org,60020,1400750242156, 
 tracking started Thu May 22 17:21:32 CST 2014, retrying after 20001 ms, 
 replay 1 ops.
 2014-05-22 17:23:02,496 INFO  [htable-pool20-t16] client.AsyncProcess: 
 #16028, table=IPHOENIX10M, attempt=12/350 failed 1 ops, last exception: 
 java.net.SocketTimeoutException: Call to storm3.org/172.16.2.23:60020 failed 
 because java.net.SocketTimeoutException: 2000 millis timeout while waiting 
 for channel to be ready for read. ch : 
 java.nio.channels.SocketChannel[connected local=/172.16.2.24:52017 
 

[jira] [Updated] (PHOENIX-1102) Query Finds No Rows When Using Multiple Column Families in where clause

2014-07-22 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated PHOENIX-1102:


Attachment: PHOENIX-1102_V3.patch

 Query Finds No Rows When Using Multiple Column Families in where clause
 ---

 Key: PHOENIX-1102
 URL: https://issues.apache.org/jira/browse/PHOENIX-1102
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: James Taylor
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: PHOENIX-1102.patch, PHOENIX-1102_V2.patch, 
 PHOENIX-1102_V3.patch


 When using multiple column families, a query does not find all expected rows.
 My table schema:
 CREATE TABLE IF NOT EXISTS FAMILY_TEST (
   NUM1 INTEGER NOT NULL,
   AA.NUM2 INTEGER,
   BB.NUM3 INTEGER,
   CONSTRAINT my_pk PRIMARY KEY (NUM1));
 I populated it with one row, assigning 1 to each field. I can verify that 
 the record is there, but I cannot get a simple expression working that uses 
 fields across two column families:
 SELECT * FROM FAMILY_TEST;
 NUM1    NUM2    NUM3
 ------  ------  ------
 1       1       1
 Time: 0.038 sec(s)
 SELECT * FROM FAMILY_TEST WHERE NUM2=1 AND NUM3=1;
 no rows selected
 Time: 0.039 sec(s)
 I understand that columns queried together should usually be in the same 
 column family for efficiency, but I did not expect my second query to fail 
 entirely. If it is not supported, I would expect an error. I get the same 
 results if I use AA.NUM2 and BB.NUM3 as well.
 I am using Phoenix 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Column Mapping issue via view

2014-07-22 Thread Ahmed.Faraz
Hi there,

While trying to map a locally created HBase table in Phoenix via CREATE VIEW 
(mapping to an existing HBase table), I am seeing strange, seemingly corrupt 
data in columns of the following data types:

INTEGER
BIGINT
DECIMAL


I can only see the correct data if I declare all the column mappings as 
VARCHAR in the view, which is not acceptable: there could be DML operations 
(like DELETE, UPDATE, SUM) needed on these fields according to their data 
types, and semantically INTEGER data should be mapped as INTEGER rather than 
as CHARACTERS. More puzzling is that I am seeing strange data from this view 
in the numeric data types even though the INT data in HBase has no such issue. 
In one of the articles I found via Google, it was suggested that this could be 
due to the SERIALIZATION of the data while loading into HBase. The data looks 
fine in HBase but not when queried in Phoenix.

https://groups.google.com/forum/#!topic/phoenix-hbase-user/wvgzItxliZs

I used the importtsv and completebulkload tools for the HBase table data 
load. How can I control the correct serialization of INTEGER data types while 
loading into HBase with these loaders? What could be the cause of this, and 
how is it best fixed?

Best Regards,
Ahmed.







Re: Column Mapping issue via view

2014-07-22 Thread James Taylor
Hi Ahmed,

First, take a look at this FAQ if you haven't already seen it:
http://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table

If you're mapping to an existing HBase table, then the serialization
that was done to create the existing data must match the serialization
expected by Phoenix. In general, our UNSIGNED_* types match the way
the HBase Bytes.toBytes(Java primitive type) methods do
serialization. For example, if the data was serialized using
Bytes.toBytes(int), then declare your type as an UNSIGNED_INT. See
http://phoenix.apache.org/language/datatypes.html for a complete list
of our data types (and how they map to HBase serialized data). Note
that if your data contains negative numbers, you're out of luck, as
this data will not sort correctly wrt positive numbers.
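A minimal sketch of the round trip, with hypothetical table and column names 
(HBase 0.98-era client API assumed; the Phoenix DDL is shown as a comment):

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Writing side: Bytes.toBytes(int) produces the 4-byte big-endian form
// that Phoenix's UNSIGNED_INT reads back correctly.
Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("CF"), Bytes.toBytes("NUM"), Bytes.toBytes(42));

// Reading side, run via sqlline or JDBC:
// CREATE VIEW "my_table" (PK VARCHAR PRIMARY KEY, "CF"."NUM" UNSIGNED_INT);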

As for DECIMAL, HBase does not have a comparable type. Take a look
instead at the UNSIGNED_FLOAT or UNSIGNED_DOUBLE types.

There are limitations in what can be mapped to. In particular, if the
row key of the existing data represents multiple columns of data, then
more often than not you won't be able to directly map the HBase table
to a Phoenix table since the separator character used will often not
match what Phoenix expects.

One other alternative is to just re-write the table in a
Phoenix-compliant manner. There are many ways this can be done: using a Pig
script and our Pig integration, writing out a CSV file and then
loading it with our Bulk CSV Loader, or just using map-reduce and some
of our utility functions.

Thanks,
James

On Tue, Jul 22, 2014 at 12:45 AM,  ahmed.fa...@swisscom.com wrote:
 Hi there,

 While trying to map a locally created HBase table in Phoenix via CREATE VIEW 
 (mapping to an existing HBase table), I am seeing strange, seemingly corrupt 
 data in columns of the following data types:

 INTEGER
 BIGINT
 DECIMAL

 I can only see the correct data if I declare all the column mappings as 
 VARCHAR in the view, which is not acceptable: there could be DML operations 
 (like DELETE, UPDATE, SUM) needed on these fields according to their data 
 types, and semantically INTEGER data should be mapped as INTEGER rather than 
 as CHARACTERS. More puzzling is that I am seeing strange data from this view 
 in the numeric data types even though the INT data in HBase has no such 
 issue. In one of the articles I found via Google, it was suggested that this 
 could be due to the SERIALIZATION of the data while loading into HBase. The 
 data looks fine in HBase but not when queried in Phoenix.

 https://groups.google.com/forum/#!topic/phoenix-hbase-user/wvgzItxliZs

 I used the importtsv and completebulkload tools for the HBase table data 
 load. How can I control the correct serialization of INTEGER data types 
 while loading into HBase with these loaders? What could be the cause of 
 this, and how is it best fixed?

 Best Regards,
 Ahmed.







[jira] [Commented] (PHOENIX-763) Support for Sqoop

2014-07-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070530#comment-14070530
 ] 

James Taylor commented on PHOENIX-763:
--

Thanks so much for the patch, [~maghamravikiran]. A couple of high-level comments:
- Can you add a detailed comment to the javadoc for the 
org/apache/phoenix/sqoop/PhoenixSqlManager.java class that explains the high 
level flow for import and export? Also, any limitations that may exist. Or if 
you think this is more appropriate in a different class or classes, that's fine 
too.
- Can you add a test with a composite/multi-part primary key, as that's an 
important use case for Phoenix? Is that supported?
- Can you file a sub-task to this one to document the Sqoop integration on our 
website?
- As far as hadoop1/hadoop2 support for the 4.0 branch, it's ok if you just get 
something working on hadoop2 as we're planning on dropping hadoop1 support for 
4.x soon. If you need help, [~jesse_yates] may be able to see something awry if 
you point him to the right place.

[~gabriel.reid] - are you familiar with Sqoop? Would you have any spare cycles 
to review this?


 Support for Sqoop
 -

 Key: PHOENIX-763
 URL: https://issues.apache.org/jira/browse/PHOENIX-763
 Project: Phoenix
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: James Taylor
Assignee: mravi
  Labels: patch
 Attachments: PHOENIX-763-300B.patch


 Not sure anything is required from our end, but you should be able to use 
 Sqoop to create and populate Phoenix tables.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (PHOENIX-1102) Query Finds No Rows When Using Multiple Column Families in where clause

2014-07-22 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John resolved PHOENIX-1102.
-

Resolution: Fixed

Pushed to all branches. Thanks for the review, James.

 Query Finds No Rows When Using Multiple Column Families in where clause
 ---

 Key: PHOENIX-1102
 URL: https://issues.apache.org/jira/browse/PHOENIX-1102
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: James Taylor
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: PHOENIX-1102.patch, PHOENIX-1102_3.0.patch, 
 PHOENIX-1102_V2.patch, PHOENIX-1102_V3.patch


 When using multiple column families, a query does not find all expected rows.
 My table schema:
 CREATE TABLE IF NOT EXISTS FAMILY_TEST (
   NUM1 INTEGER NOT NULL,
   AA.NUM2 INTEGER,
   BB.NUM3 INTEGER,
   CONSTRAINT my_pk PRIMARY KEY (NUM1));
 I populated it with one row, assigning 1 to each field. I can verify that 
 the record is there, but I cannot get a simple expression working that uses 
 fields across two column families:
 SELECT * FROM FAMILY_TEST;
 NUM1    NUM2    NUM3
 ------  ------  ------
 1       1       1
 Time: 0.038 sec(s)
 SELECT * FROM FAMILY_TEST WHERE NUM2=1 AND NUM3=1;
 no rows selected
 Time: 0.039 sec(s)
 I understand that columns queried together should usually be in the same 
 column family for efficiency, but I did not expect my second query to fail 
 entirely. If it is not supported, I would expect an error. I get the same 
 results if I use AA.NUM2 and BB.NUM3 as well.
 I am using Phoenix 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-1102) Query Finds No Rows When Using Multiple Column Families in where clause

2014-07-22 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated PHOENIX-1102:


Attachment: PHOENIX-1102_3.0.patch

What I applied to the 3.0 branch.

 Query Finds No Rows When Using Multiple Column Families in where clause
 ---

 Key: PHOENIX-1102
 URL: https://issues.apache.org/jira/browse/PHOENIX-1102
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: James Taylor
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: PHOENIX-1102.patch, PHOENIX-1102_3.0.patch, 
 PHOENIX-1102_V2.patch, PHOENIX-1102_V3.patch


 When using multiple column families, a query does not find all expected rows.
 My table schema:
 CREATE TABLE IF NOT EXISTS FAMILY_TEST (
   NUM1 INTEGER NOT NULL,
   AA.NUM2 INTEGER,
   BB.NUM3 INTEGER,
   CONSTRAINT my_pk PRIMARY KEY (NUM1));
 I populated it with one row, assigning 1 to each field. I can verify that 
 the record is there, but I cannot get a simple expression working that uses 
 fields across two column families:
 SELECT * FROM FAMILY_TEST;
 NUM1    NUM2    NUM3
 ------  ------  ------
 1       1       1
 Time: 0.038 sec(s)
 SELECT * FROM FAMILY_TEST WHERE NUM2=1 AND NUM3=1;
 no rows selected
 Time: 0.039 sec(s)
 I understand that columns queried together should usually be in the same 
 column family for efficiency, but I did not expect my second query to fail 
 entirely. If it is not supported, I would expect an error. I get the same 
 results if I use AA.NUM2 and BB.NUM3 as well.
 I am using Phoenix 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-1104) Do not shutdown threadpool when initialization fails

2014-07-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070563#comment-14070563
 ] 

Hudson commented on PHOENIX-1104:
-

SUCCESS: Integrated in Phoenix | 3.0 | Hadoop1 #144 (See 
[https://builds.apache.org/job/Phoenix-3.0-hadoop1/144/])
PHOENIX-1104 Do not shutdown threadpool when initialization fails - added 
comment (mujtaba: rev 38de8fd567bc756d9dbb091b34d89098bccae3dd)
* phoenix-core/src/main/java/org/apache/phoenix/query/BaseQueryServicesImpl.java


 Do not shutdown threadpool when initialization fails
 

 Key: PHOENIX-1104
 URL: https://issues.apache.org/jira/browse/PHOENIX-1104
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 3.1, 4.1
Reporter: Mujtaba Chohan
Priority: Minor
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: P-1104.patch


 If Phoenix first connects while the HBase master is initializing, Phoenix 
 throws an incompatible-jar exception on subsequent connections from the same JVM.
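 A hedged sketch of the idea behind the fix (hypothetical names, not the 
 actual Phoenix code): the JVM-wide thread pool must survive a failed first 
 initialization so that later connections from the same JVM can succeed once 
 the master is up.
 try {
     connection = initializeHBaseConnection();   // hypothetical helper
 } catch (IOException e) {
     // Do NOT shut down the shared thread pool here: it is reused by every
     // subsequent connection in this JVM, and killing it turns a transient
     // "master still initializing" failure into a permanent one.
     throw new SQLException(e);
 }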



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-1102) Query Finds No Rows When Using Multiple Column Families in where clause

2014-07-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070593#comment-14070593
 ] 

Hudson commented on PHOENIX-1102:
-

SUCCESS: Integrated in Phoenix | 3.0 | Hadoop1 #145 (See 
[https://builds.apache.org/job/Phoenix-3.0-hadoop1/145/])
PHOENIX-1102 Query Finds No Rows When Using Multiple Column Families in where 
clause. (Anoop) (anoopsamjohn: rev 343d9262cda3a10461bb301ee0089e6df3867d99)
* 
phoenix-core/src/main/java/org/apache/phoenix/filter/MultiCFCQKeyValueComparisonFilter.java
* 
phoenix-core/src/main/java/org/apache/phoenix/filter/MultiKeyValueComparisonFilter.java
* 
phoenix-core/src/main/java/org/apache/phoenix/filter/MultiCQKeyValueComparisonFilter.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ColumnProjectionOptimizationIT.java
* phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java


 Query Finds No Rows When Using Multiple Column Families in where clause
 ---

 Key: PHOENIX-1102
 URL: https://issues.apache.org/jira/browse/PHOENIX-1102
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: James Taylor
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: PHOENIX-1102.patch, PHOENIX-1102_3.0.patch, 
 PHOENIX-1102_V2.patch, PHOENIX-1102_V3.patch


 When using multiple column families, a query does not find all expected rows.
 My table schema:
 CREATE TABLE IF NOT EXISTS FAMILY_TEST (
   NUM1 INTEGER NOT NULL,
   AA.NUM2 INTEGER,
   BB.NUM3 INTEGER,
   CONSTRAINT my_pk PRIMARY KEY (NUM1));
 I populated it with one row, assigning 1 to each field. I can verify that 
 the record is there, but I cannot get a simple expression working that uses 
 fields across two column families:
 SELECT * FROM FAMILY_TEST;
 NUM1    NUM2    NUM3
 ------  ------  ------
 1       1       1
 Time: 0.038 sec(s)
 SELECT * FROM FAMILY_TEST WHERE NUM2=1 AND NUM3=1;
 no rows selected
 Time: 0.039 sec(s)
 I understand that columns queried together should usually be in the same 
 column family for efficiency, but I did not expect my second query to fail 
 entirely. If it is not supported, I would expect an error. I get the same 
 results if I use AA.NUM2 and BB.NUM3 as well.
 I am using Phoenix 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (PHOENIX-1105) TableNotFoundException does not get caught if SYSTEM.TABLE is not present for whitelisted upgrade

2014-07-22 Thread Mujtaba Chohan (JIRA)
Mujtaba Chohan created PHOENIX-1105:
---

 Summary: TableNotFoundException does not get caught if 
SYSTEM.TABLE is not present for whitelisted upgrade
 Key: PHOENIX-1105
 URL: https://issues.apache.org/jira/browse/PHOENIX-1105
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 3.0-Release, 5.0.0, 3.1, 4.1
Reporter: Mujtaba Chohan
Assignee: Mujtaba Chohan
Priority: Trivial
 Fix For: 5.0.0, 3.1, 4.1


The Phoenix client gets the following exception (not a warning) during a 
whitelisted upgrade if SYSTEM.TABLE is not present in HBase.

org.apache.phoenix.exception.PhoenixIOException: SYSTEM.TABLE
at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:101)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:866)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1146)
at 
org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:114)
at 
org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1265)
at 
org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:434)
at 
org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
at 
org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:247)
at 
org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-1105) TableNotFoundException does not get caught if SYSTEM.TABLE is not present for whitelisted upgrade

2014-07-22 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan updated PHOENIX-1105:


Attachment: (was: PHOENIX-1105.patch)

 TableNotFoundException does not get caught if SYSTEM.TABLE is not present for 
 whitelisted upgrade
 -

 Key: PHOENIX-1105
 URL: https://issues.apache.org/jira/browse/PHOENIX-1105
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 3.0-Release, 5.0.0, 3.1, 4.1
Reporter: Mujtaba Chohan
Assignee: Mujtaba Chohan
Priority: Trivial
 Fix For: 5.0.0, 3.1, 4.1


 The Phoenix client gets the following exception (not a warning) during a 
 whitelisted upgrade if SYSTEM.TABLE is not present in HBase.
 org.apache.phoenix.exception.PhoenixIOException: SYSTEM.TABLE
   at 
 org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:101)
   at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:866)
   at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1146)
   at 
 org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:114)
   at 
 org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1265)
   at 
 org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:434)
   at 
 org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:247)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-1105) TableNotFoundException does not get caught if SYSTEM.TABLE is not present for whitelisted upgrade

2014-07-22 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan updated PHOENIX-1105:


Attachment: PHOENIX-1105.patch

Fixed to catch org.apache.hadoop.hbase.TableNotFoundException rather than 
org.apache.phoenix.schema.TableNotFoundException, which is never thrown here.
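A minimal illustration of the change (the surrounding code is hypothetical; 
the stack trace below points at ConnectionQueryServicesImpl.ensureTableCreated):

try {
    checkSystemTableExists();   // hypothetical wrapper around the HBase admin call
} catch (org.apache.hadoop.hbase.TableNotFoundException e) {
    // This is the exception HBase actually throws when SYSTEM.TABLE is absent.
    // A catch clause for org.apache.phoenix.schema.TableNotFoundException never
    // matched, so the error escaped as the PhoenixIOException shown below.
    // A missing SYSTEM.TABLE simply means there is nothing to upgrade.
}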

 TableNotFoundException does not get caught if SYSTEM.TABLE is not present for 
 whitelisted upgrade
 -

 Key: PHOENIX-1105
 URL: https://issues.apache.org/jira/browse/PHOENIX-1105
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 3.0-Release, 5.0.0, 3.1, 4.1
Reporter: Mujtaba Chohan
Assignee: Mujtaba Chohan
Priority: Trivial
 Fix For: 5.0.0, 3.1, 4.1


 The Phoenix client gets the following exception (not a warning) during a 
 whitelisted upgrade if SYSTEM.TABLE is not present in HBase.
 org.apache.phoenix.exception.PhoenixIOException: SYSTEM.TABLE
   at 
 org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:101)
   at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:866)
   at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1146)
   at 
 org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:114)
   at 
 org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1265)
   at 
 org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:434)
   at 
 org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:247)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-1105) TableNotFoundException does not get caught if SYSTEM.TABLE is not present for whitelisted upgrade

2014-07-22 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan updated PHOENIX-1105:


Attachment: PHOENIX-1105.patch

Patch attached

 TableNotFoundException does not get caught if SYSTEM.TABLE is not present for 
 whitelisted upgrade
 -

 Key: PHOENIX-1105
 URL: https://issues.apache.org/jira/browse/PHOENIX-1105
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 3.0-Release, 5.0.0, 3.1, 4.1
Reporter: Mujtaba Chohan
Assignee: Mujtaba Chohan
Priority: Trivial
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: PHOENIX-1105.patch


 The Phoenix client gets the following exception (not a warning) during a 
 whitelisted upgrade if SYSTEM.TABLE is not present in HBase.
 org.apache.phoenix.exception.PhoenixIOException: SYSTEM.TABLE
   at 
 org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:101)
   at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:866)
   at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1146)
   at 
 org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:114)
   at 
 org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1265)
   at 
 org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:434)
   at 
 org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:247)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-1102) Query Finds No Rows When Using Multiple Column Families in where clause

2014-07-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070901#comment-14070901
 ] 

Hudson commented on PHOENIX-1102:
-

SUCCESS: Integrated in Phoenix | Master | Hadoop1 #287 (See 
[https://builds.apache.org/job/Phoenix-master-hadoop1/287/])
PHOENIX-1102 Query Finds No Rows When Using Multiple Column Families in where 
clause. (Anoop) (anoopsamjohn: rev 545abe5357a6e1a5d61d9d5516dca463893b6f6b)
* phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* 
phoenix-core/src/main/java/org/apache/phoenix/filter/MultiCQKeyValueComparisonFilter.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ColumnProjectionOptimizationIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/filter/MultiKeyValueComparisonFilter.java
* 
phoenix-core/src/main/java/org/apache/phoenix/filter/MultiCFCQKeyValueComparisonFilter.java


 Query Finds No Rows When Using Multiple Column Families in where clause
 ---

 Key: PHOENIX-1102
 URL: https://issues.apache.org/jira/browse/PHOENIX-1102
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: James Taylor
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: PHOENIX-1102.patch, PHOENIX-1102_3.0.patch, 
 PHOENIX-1102_V2.patch, PHOENIX-1102_V3.patch


 When using multiple column families, a query does not find all expected rows.
 My table schema:
 CREATE TABLE IF NOT EXISTS FAMILY_TEST (
   NUM1 INTEGER NOT NULL,
   AA.NUM2 INTEGER,
   BB.NUM3 INTEGER,
   CONSTRAINT my_pk PRIMARY KEY (NUM1));
 I populated it with one row, assigning 1 to each field. I can verify that 
 the record is there, but I cannot get a simple expression working that uses 
 fields across two column families:
 SELECT * FROM FAMILY_TEST;
 NUM1    NUM2    NUM3
 ------  ------  ------
 1       1       1
 Time: 0.038 sec(s)
 SELECT * FROM FAMILY_TEST WHERE NUM2=1 AND NUM3=1;
 no rows selected
 Time: 0.039 sec(s)
 I understand that columns queried together should usually be in the same 
 column family for efficiency, but I did not expect my second query to fail 
 entirely. If it is not supported, I would expect an error. I get the same 
 results if I use AA.NUM2 and BB.NUM3 as well.
 I am using Phoenix 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-726) Multi-module phoenix to support hadoop1 and hadoop2.

2014-07-22 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-726:


Reporter: Jesse Yates  (was: Jason Yates)

 Multi-module phoenix to support hadoop1 and hadoop2.
 

 Key: PHOENIX-726
 URL: https://issues.apache.org/jira/browse/PHOENIX-726
 Project: Phoenix
  Issue Type: Task
Reporter: Jesse Yates

 Allow build-time selection of supporting hadoop1 or hadoop2. 
 This setup is almost exactly what HBase uses (and it works pretty well). For 
 right now, we just publish artifacts against hadoop1 (similar to hbase 0.94 
 series). Later we can make it work with different names for the different 
 versions supported.
 Check jars/tar by hand - looks ok to me (but I'm not an expert in everything 
 we need).
 To import into Eclipse, remove the phoenix project, then import it as a Maven 
 project (same as in build.txt - no need to run maven eclipse:eclipse).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (PHOENIX-177) Collect usage and performance metrics

2014-07-22 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates reassigned PHOENIX-177:
---

Assignee: Jesse Yates  (was: Jason Yates)

 Collect usage and performance metrics
 -

 Key: PHOENIX-177
 URL: https://issues.apache.org/jira/browse/PHOENIX-177
 Project: Phoenix
  Issue Type: Task
Affects Versions: 3.0-Release
Reporter: ryang-sfdc
Assignee: Jesse Yates
  Labels: enhancement

 I'd like to know how much cpu, physical io, logical io, wait time, blocking 
 time, transmission time was spent for each thread of execution across the 
 hbase cluster, within coprocessors, and within the client's phoenix 
 threadpools for each query.
 Here are some of the problems I want to solve:
 1) every component has one or more configurable threadpools, and I have no 
 idea how to gather data to make any decisions.
 2) queries that I think should be fast turn out to be dog slow, e.g., select 
 foo from bar where foo like 'abc%' group by foo. Without attaching a profiler 
 to hbase, which most people won't bother with, it's not clear why it's slow.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-177) Collect usage and performance metrics

2014-07-22 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-177:


Affects Version/s: (was: 3.0-Release)
   4.1
   5.0.0

 Collect usage and performance metrics
 -

 Key: PHOENIX-177
 URL: https://issues.apache.org/jira/browse/PHOENIX-177
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.0.0, 4.1
Reporter: ryang-sfdc
Assignee: Jesse Yates
  Labels: enhancement

 I'd like to know how much cpu, physical io, logical io, wait time, blocking 
 time, transmission time was spent for each thread of execution across the 
 hbase cluster, within coprocessors, and within the client's phoenix 
 threadpools for each query.
 Here are some of the problems I want to solve:
 1) every component has one or more configurable threadpools, and I have no 
 idea how to gather data to make any decisions.
 2) queries that I think should be fast turn out to be dog slow, e.g., select 
 foo from bar where foo like 'abc%' group by foo. Without attaching a profiler 
 to hbase, which most people won't bother with, it's not clear why it's slow.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-177) Collect usage and performance metrics

2014-07-22 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-177:


Attachment: phoenix-177-master-v0.patch

Attaching a patch for master, probably pretty close to (if not exactly) what 
would be used for 4.X. Also, more easily parsed code can be seen [on 
github|https://github.com/jyates/phoenix/tree/tracing] - I can do a pull 
request for easier review if people want as well.

For an overview: like my original proposal, it uses HTrace to generate spans 
(segments of work, which may or may not have children), which are then written 
to the metrics2 framework. The framework then has a receiver (only Hadoop2 is 
supported at the moment) which writes them to a Phoenix table. All these 
different pipes can be configured to be more specialized, depending on the use 
case/need.
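A hedged sketch of what span generation looks like, assuming the htrace API 
of this era (span name hypothetical; the receiver wiring described above is 
configured separately):

import org.htrace.Sampler;
import org.htrace.Trace;
import org.htrace.TraceScope;

// Open a span around a unit of work; child spans started on this thread
// attach to it automatically.
TraceScope scope = Trace.startSpan("phoenix-query", Sampler.ALWAYS);
try {
    // traced work goes here
} finally {
    scope.close();  // completes the span and hands it to registered receivers
}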

FYI [~jamestaylor]

 Collect usage and performance metrics
 -

 Key: PHOENIX-177
 URL: https://issues.apache.org/jira/browse/PHOENIX-177
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.0.0, 4.1
Reporter: ryang-sfdc
Assignee: Jesse Yates
  Labels: enhancement
 Attachments: phoenix-177-master-v0.patch


 I'd like to know how much cpu, physical io, logical io, wait time, blocking 
 time, transmission time was spent for each thread of execution across the 
 hbase cluster, within coprocessors, and within the client's phoenix 
 threadpools for each query.
 Here are some of the problems I want to solve:
 1) every component has one or more configurable threadpools, and I have no 
 idea how to gather data to make any decisions.
 2) queries that I think should be fast turn out to be dog slow, e.g., select 
 foo from bar where foo like 'abc%' group by foo. Without attaching a profiler 
 to hbase, which most people won't bother with, it's not clear why it's slow.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[GitHub] phoenix pull request: Implement HTrace based tracing

2014-07-22 Thread jyates
GitHub user jyates opened a pull request:

https://github.com/apache/phoenix/pull/5

Implement HTrace based tracing

Small issue in that not everyone serializes htrace annotations, but that's an
open question of the right way to do that anyway.

Adding tracing to:
 - MutationState
 - query plan tracing
 - iterators

Metrics writing is generalized to support eventual hadoop1 implementation.
Also, supporting test-skipping with a custom Hadoop1 test runner + annotation.

Default builds to hadoop2, rather than hadoop1 (particularly as hadoop1 is
now a second-class citizen).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jyates/phoenix tracing

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/5.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5


commit 59b1ad2e6cb90e4906ca29bd1964617b2ac7f6c9
Author: Jesse Yates jya...@apache.org
Date:   2014-06-06T23:11:32Z

Implement HTrace based tracing

Small issue in that not everyone serializes htrace annotations, but that's an
open question of the right way to do that anyway.

Adding tracing to:
 - MutationState
 - query plan tracing
 - iterators

Metrics writing is generalized to support eventual hadoop1 implementation.
Also, supporting test-skipping with a custom Hadoop1 test runner + annotation.

Default builds to hadoop2, rather than hadoop1 (particularly as hadoop1 is
now a second-class citizen).




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-177) Collect usage and performance metrics

2014-07-22 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071045#comment-14071045
 ] 

Jesse Yates commented on PHOENIX-177:
-

Pull request: https://github.com/apache/phoenix/pull/5

About time - this has taken way longer than I thought.

 Collect usage and performance metrics
 -

 Key: PHOENIX-177
 URL: https://issues.apache.org/jira/browse/PHOENIX-177
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.0.0, 4.1
Reporter: ryang-sfdc
Assignee: Jesse Yates
  Labels: enhancement
 Attachments: phoenix-177-master-v0.patch


 I'd like to know how much cpu, physical io, logical io, wait time, blocking 
 time, transmission time was spent for each thread of execution across the 
 hbase cluster, within coprocessors, and within the client's phoenix 
 threadpools for each query.
 Here are some of the problems I want to solve:
 1) every component has one or more configurable threadpools, and I have no 
 idea how to gather data to make any decisions.
 2) queries that I think should be fast turn out to be dog slow, e.g., select 
 foo from bar where foo like 'abc%' group by foo. Without attaching a profiler 
 to hbase, which most people won't bother with, it's not clear why it's slow.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-938) Use higher priority queue for index updates to prevent deadlock

2014-07-22 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071063#comment-14071063
 ] 

Jesse Yates commented on PHOENIX-938:
-

Now that HBase 0.98.4 has been released (thanks again [~apurtell]!), it looks 
like it's time to commit this guy too. I plan on committing tonight/early 
tomorrow, unless there are any objections. 

 Use higher priority queue for index updates to prevent deadlock
 ---

 Key: PHOENIX-938
 URL: https://issues.apache.org/jira/browse/PHOENIX-938
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.0.0, 4.1
Reporter: James Taylor
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.1

 Attachments: PHOENIX-938-master-v3.patch, phoenix-938-4.0-v0.patch, 
 phoenix-938-master-v0.patch, phoenix-938-master-v1.patch, 
 phoenix-938-master-v2.patch, phoenix-938-master-v4.patch


 With our current global secondary indexing solution, a batched Put of table 
 data causes an RS to do a batch Put to other RSs. This has the potential to 
 lead to a deadlock if all RSs are overloaded and unable to process the pending 
 batched Put. To prevent this, we should use a higher priority queue to submit 
 these Puts so that they're always processed before other Puts. This will 
 prevent the potential for a deadlock under high load. Note that this will 
 likely require some HBase 0.98 code changes and would not be feasible to 
 implement for HBase 0.94.
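 A hedged, much-simplified illustration of the queueing idea (hypothetical 
 names; the real change hooks into HBase's RPC handler scheduling rather than 
 plain executors): reserve capacity for index Puts so they can never be stuck 
 behind the client Puts that generated them.
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;

 // Sketch only: two pools, so index writes cannot be starved by client writes.
 ExecutorService clientWriteHandlers = Executors.newFixedThreadPool(30);
 ExecutorService indexWriteHandlers  = Executors.newFixedThreadPool(10);
 // An RS saturated with client Puts (each blocked on index Puts to other
 // servers) can still accept incoming index Puts on the reserved pool,
 // breaking the circular wait.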



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-938) Use higher priority queue for index updates to prevent deadlock

2014-07-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071074#comment-14071074
 ] 

James Taylor commented on PHOENIX-938:
--

+1, as long as you've tested the new Phoenix snapshot jar against a pre-0.98.4 
release to make sure we'll still run (albeit without your fix).

 Use higher priority queue for index updates to prevent deadlock
 ---

 Key: PHOENIX-938
 URL: https://issues.apache.org/jira/browse/PHOENIX-938
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.0.0, 4.1
Reporter: James Taylor
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.1

 Attachments: PHOENIX-938-master-v3.patch, phoenix-938-4.0-v0.patch, 
 phoenix-938-master-v0.patch, phoenix-938-master-v1.patch, 
 phoenix-938-master-v2.patch, phoenix-938-master-v4.patch


 With our current global secondary indexing solution, a batched Put of table 
 data causes an RS to do a batch Put to other RSs. This has the potential to 
 lead to a deadlock if all RSs are overloaded and unable to process the pending 
 batched Put. To prevent this, we should use a higher priority queue to submit 
 these Puts so that they're always processed before other Puts. This will 
 prevent the potential for a deadlock under high load. Note that this will 
 likely require some HBase 0.98 code changes and would not be feasible to 
 implement for HBase 0.94.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-938) Use higher priority queue for index updates to prevent deadlock

2014-07-22 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071140#comment-14071140
 ] 

Andrew Purtell commented on PHOENIX-938:


Confirmed that artifacts for 0.98.4-hadoop1 and 0.98.4-hadoop2 are available 
now on repository.apache.org

 Use higher priority queue for index updates to prevent deadlock
 ---

 Key: PHOENIX-938
 URL: https://issues.apache.org/jira/browse/PHOENIX-938
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.0.0, 4.1
Reporter: James Taylor
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.1

 Attachments: PHOENIX-938-master-v3.patch, phoenix-938-4.0-v0.patch, 
 phoenix-938-master-v0.patch, phoenix-938-master-v1.patch, 
 phoenix-938-master-v2.patch, phoenix-938-master-v4.patch


 With our current global secondary indexing solution, a batched Put of table 
 data causes an RS to do a batch Put to other RSs. This has the potential to 
 lead to a deadlock if all RSs are overloaded and unable to process the pending 
 batched Put. To prevent this, we should use a higher priority queue to submit 
 these Puts so that they're always processed before other Puts. This will 
 prevent the potential for a deadlock under high load. Note that this will 
 likely require some HBase 0.98 code changes and would not be feasible to 
 implement for HBase 0.94.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (PHOENIX-1106) Documentation

2014-07-22 Thread ravi (JIRA)
ravi created PHOENIX-1106:
-

 Summary: Documentation 
 Key: PHOENIX-1106
 URL: https://issues.apache.org/jira/browse/PHOENIX-1106
 Project: Phoenix
  Issue Type: Sub-task
Reporter: ravi


Have a one-page document stating the steps for, and any limitations of, the 
process of importing data from Sqoop.



--
This message was sent by Atlassian JIRA
(v6.2#6252)