[jira] [Commented] (PHOENIX-1208) Check for existence of views doesn't take into account the fact that SYSTEM.CATALOG could be split across regions

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14114896#comment-14114896
 ] 

Hudson commented on PHOENIX-1208:
-

SUCCESS: Integrated in Phoenix | 3.0 | Hadoop1 #200 (See 
[https://builds.apache.org/job/Phoenix-3.0-hadoop1/200/])
PHOENIX-1208 Check for existence of views doesn't take into account the fact 
that SYSTEM.CATALOG could be split across regions (jtaylor: rev 
87b223fcc88e030f4e85677f49fc6004ca13c78e)
* phoenix-core/src/main/java/org/apache/phoenix/coprocessor/SuffixFilter.java
* phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java


 Check for existence of views doesn't take into account the fact that 
 SYSTEM.CATALOG could be split across regions
 -

 Key: PHOENIX-1208
 URL: https://issues.apache.org/jira/browse/PHOENIX-1208
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 3.1, 4.1
Reporter: Jan Fernando
Priority: Minor
 Fix For: 5.0.0, 4.2, 3.2

 Attachments: PHOENIX-1208.patch


 It is possible that when SYSTEM.CATALOG gets very large it will be split 
 across multiple regions. The parent table metadata is guaranteed via 
 MetaDataSplitPolicy to be in the same region. However, child tenant-specific 
 views could end up split across multiple regions. The hasViews() method, when 
 checking for the existence of any views, scans only the region the parent 
 table metadata is located in. We should detect whether the views span 
 multiple regions and scan across them all.
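As a rough illustration of the idea only (not the committed MetaDataEndpointImpl/SuffixFilter change; the class and method names below are assumptions), an existence check issued through the ordinary HBase client spans region boundaries transparently, unlike a scan confined to the parent table's region:
{code}
// Hypothetical sketch: check whether any row exists in a key range of
// SYSTEM.CATALOG by scanning via the HBase client, which crosses regions.
import java.io.IOException;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ViewExistenceSketch {
    static boolean anyRowInRange(HTableInterface catalog, byte[] startKey, byte[] stopKey)
            throws IOException {
        Scan scan = new Scan(startKey, stopKey);
        scan.setCaching(1); // we only care whether at least one row exists
        ResultScanner scanner = catalog.getScanner(scan);
        try {
            Result first = scanner.next();
            return first != null && !first.isEmpty();
        } finally {
            scanner.close();
        }
    }
}
{code}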



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-1208) Check for existence of views doesn't take into account the fact that SYSTEM.CATALOG could be split across regions

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14114923#comment-14114923
 ] 

Hudson commented on PHOENIX-1208:
-

SUCCESS: Integrated in Phoenix | 4.0 | Hadoop2 #76 (See 
[https://builds.apache.org/job/Phoenix-4.0-hadoop2/76/])
PHOENIX-1208 Check for existence of views doesn't take into account the fact 
that SYSTEM.CATALOG could be split across regions (jtaylor: rev 
6fb2b22b9a30ecf74b608cc1d6081b7889763f20)
* phoenix-core/src/main/java/org/apache/phoenix/coprocessor/SuffixFilter.java
* phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java


 Check for existence of views doesn't take into account the fact that 
 SYSTEM.CATALOG could be split across regions
 -

 Key: PHOENIX-1208
 URL: https://issues.apache.org/jira/browse/PHOENIX-1208
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 3.1, 4.1
Reporter: Jan Fernando
Priority: Minor
 Fix For: 5.0.0, 4.2, 3.2

 Attachments: PHOENIX-1208.patch


 It is possible that when SYSTEM.CATALOG gets very large it will be split 
 across multiple regions. The parent table metadata is guaranteed via 
 MetaDataSplitPolicy to be in the same region. However, child tenant-specific 
 views could end up split across multiple regions. The hasViews() method, when 
 checking for the existence of any views, scans only the region the parent 
 table metadata is located in. We should detect whether the views span 
 multiple regions and scan across them all.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-1098) Support CASCADE option on DROP TABLE that drops all VIEWs

2014-08-29 Thread Jan Fernando (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Fernando updated PHOENIX-1098:
--

Attachment: PHOENIX-1098-4.1.patch

 Support CASCADE option on DROP TABLE that drops all VIEWs
 -

 Key: PHOENIX-1098
 URL: https://issues.apache.org/jira/browse/PHOENIX-1098
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
Reporter: James Taylor
Assignee: Jan Fernando
 Fix For: 4.1

 Attachments: PHOENIX-1098-4.1.patch


 It's inconvenient to have to manually drop all of the views of a multi-tenant 
 table before being able to drop the table. We should support a CASCADE option 
 on DROP TABLE that does this automatically, for example:
 DROP TABLE foo CASCADE



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (PHOENIX-1220) NullPointerException in PArrayDataType.toObject() when baseType is CHAR or BINARY

2014-08-29 Thread Maryann Xue (JIRA)
Maryann Xue created PHOENIX-1220:


 Summary: NullPointerException in PArrayDataType.toObject() when 
baseType is CHAR or BINARY
 Key: PHOENIX-1220
 URL: https://issues.apache.org/jira/browse/PHOENIX-1220
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0
Reporter: Maryann Xue
Priority: Minor


We now assume that for PDataType, if isFixedWidth() returns true, we can use 
getByteSize() to get the byte array length of this type. But for the BINARY and 
CHAR types, isFixedWidth() returns true while getByteSize() returns null, and 
that's why we get an NPE from code like:
{code:title=PArrayDataType.createPhoenixArray()}
if (!baseDataType.isFixedWidth()) {
    ...
} else {
    int elemLength = (maxLength == null ? baseDataType.getByteSize() : maxLength);
    ...
}
{code}
There is more than one occurrence of such code besides this one.
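A minimal defensive variant of that pattern, assuming the caller has the declared maxLength at hand (illustrative only, not the actual Phoenix fix):
{code}
// Hedged sketch: compute a fixed-width element length without assuming
// getByteSize() is non-null, since it returns null for CHAR and BINARY.
static int fixedElementLength(Integer maxLength, Integer typeByteSize, String typeName) {
    if (maxLength != null) {
        return maxLength;           // CHAR/BINARY carry their length here
    }
    if (typeByteSize != null) {
        return typeByteSize;        // e.g. INTEGER, LONG
    }
    throw new IllegalStateException(
            "Fixed-width type " + typeName + " has no byte size and no max length");
}
{code}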




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (PHOENIX-1221) Conflict of SEPERATOR_BYTE with user data byte in PArrayDataType

2014-08-29 Thread Maryann Xue (JIRA)
Maryann Xue created PHOENIX-1221:


 Summary: Conflict of SEPERATOR_BYTE with user data byte in 
PArrayDataType
 Key: PHOENIX-1221
 URL: https://issues.apache.org/jira/browse/PHOENIX-1221
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0
Reporter: Maryann Xue
Priority: Minor


SEPERATOR_BYTE can also appear in VARCHAR_ARRAY or VARBINARY_ARRAY



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (PHOENIX-1221) Conflict of SEPERATOR_BYTE with user data byte in PArrayDataType

2014-08-29 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue resolved PHOENIX-1221.
--

Resolution: Invalid

 Conflict of SEPERATOR_BYTE with user data byte in PArrayDataType
 

 Key: PHOENIX-1221
 URL: https://issues.apache.org/jira/browse/PHOENIX-1221
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0
Reporter: Maryann Xue
Priority: Minor
   Original Estimate: 24h
  Remaining Estimate: 24h

 SEPERATOR_BYTE can also appear in VARCHAR_ARRAY or VARBINARY_ARRAY



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-136) Support derived tables in from clause

2014-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-136:
-

Issue Type: Bug  (was: Sub-task)
Parent: (was: PHOENIX-1167)

 Support derived tables in from clause
 -

 Key: PHOENIX-136
 URL: https://issues.apache.org/jira/browse/PHOENIX-136
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Maryann Xue
  Labels: enhancement
 Fix For: 5.0.0, 3.1, 4.1


 Add support for derived queries of the form:
 SELECT * FROM (SELECT company, revenue FROM Company ORDER BY revenue) LIMIT 10
 Adding support for this requires a compile-time change as well as a runtime 
 execution change. The first version of the compile-time change could limit 
 aggregation to be allowed in either the inner or the outer query, but not both. 
 In this case, the inner and outer queries can be combined into a single query, 
 with the outer select becoming just a remapping of a subset of the projection 
 from the inner select. The second version of the compile-time change could 
 handle aggregation in both the inner and outer select by performing client-side 
 aggregation (this is likely a less common scenario).
 For the runtime execution change, the UngroupedAggregateRegionObserver would 
 be modified to look for a new TopNLimit attribute with an int value in the 
 Scan. This would control the maximum number of values for the coprocessor to 
 hold on to as the scan is performed. Then the GroupedAggregatingResultIterator 
 would be modified to handle keeping the topN values received back from all 
 the child iterators.
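To make the proposed hand-off concrete, here is a hedged sketch of how a scan attribute like the TopNLimit described above could be set on the client and read back in the coprocessor; the attribute name and the "no limit" default are assumptions, since this design sketch was never committed in this form:
{code}
// Hedged sketch of the proposed TopNLimit hand-off; illustrative only.
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

class TopNLimitSketch {
    static final String TOP_N_LIMIT = "TopNLimit";

    // Client side: tag the scan before sending it to the region server.
    static void tagScan(Scan scan, int topN) {
        scan.setAttribute(TOP_N_LIMIT, Bytes.toBytes(topN));
    }

    // Coprocessor side: read the cap, defaulting to "no limit" when absent.
    static int readLimit(Scan scan) {
        byte[] bytes = scan.getAttribute(TOP_N_LIMIT);
        return bytes == null ? Integer.MAX_VALUE : Bytes.toInt(bytes);
    }
}
{code}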



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-944) Support derived tables in FROM clause that needs extra steps of client-side aggregation or other processing

2014-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-944:
-

Issue Type: Bug  (was: Sub-task)
Parent: (was: PHOENIX-1167)

 Support derived tables in FROM clause that needs extra steps of client-side 
 aggregation or other processing
 ---

 Key: PHOENIX-944
 URL: https://issues.apache.org/jira/browse/PHOENIX-944
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: Maryann Xue
Assignee: Maryann Xue
 Fix For: 3.0.0, 4.0.0, 5.0.0

   Original Estimate: 168h
  Remaining Estimate: 168h

 A GROUP BY in both the outer and inner queries cannot be flattened. We can 
 apply an extra step of client-side aggregation to handle such cases.
 See DerivedTableIT.java for the examples that are currently not supported.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-136) Support derived tables in from clause

2014-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-136:
-

Issue Type: Sub-task  (was: Bug)
Parent: PHOENIX-1222

 Support derived tables in from clause
 -

 Key: PHOENIX-136
 URL: https://issues.apache.org/jira/browse/PHOENIX-136
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: Maryann Xue
  Labels: enhancement
 Fix For: 5.0.0, 3.1, 4.1


 Add support for derived queries of the form:
 SELECT * FROM (SELECT company, revenue FROM Company ORDER BY revenue) LIMIT 10
 Adding support for this requires a compile-time change as well as a runtime 
 execution change. The first version of the compile-time change could limit 
 aggregation to be allowed in either the inner or the outer query, but not both. 
 In this case, the inner and outer queries can be combined into a single query, 
 with the outer select becoming just a remapping of a subset of the projection 
 from the inner select. The second version of the compile-time change could 
 handle aggregation in both the inner and outer select by performing client-side 
 aggregation (this is likely a less common scenario).
 For the runtime execution change, the UngroupedAggregateRegionObserver would 
 be modified to look for a new TopNLimit attribute with an int value in the 
 Scan. This would control the maximum number of values for the coprocessor to 
 hold on to as the scan is performed. Then the GroupedAggregatingResultIterator 
 would be modified to handle keeping the topN values received back from all 
 the child iterators.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-944) Support derived tables in FROM clause that needs extra steps of client-side aggregation or other processing

2014-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-944:
-

Issue Type: Sub-task  (was: Bug)
Parent: PHOENIX-1222

 Support derived tables in FROM clause that needs extra steps of client-side 
 aggregation or other processing
 ---

 Key: PHOENIX-944
 URL: https://issues.apache.org/jira/browse/PHOENIX-944
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: Maryann Xue
Assignee: Maryann Xue
 Fix For: 3.0.0, 4.0.0, 5.0.0

   Original Estimate: 168h
  Remaining Estimate: 168h

 A GROUP BY in both the outer and inner queries cannot be flattened. We can 
 apply an extra step of client-side aggregation to handle such cases.
 See DerivedTableIT.java for the examples that are currently not supported.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-927) Support derived tables in joins

2014-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-927:
-

Issue Type: Sub-task  (was: Bug)
Parent: PHOENIX-1222

 Support derived tables in joins
 ---

 Key: PHOENIX-927
 URL: https://issues.apache.org/jira/browse/PHOENIX-927
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Maryann Xue
Assignee: Maryann Xue
  Labels: enhancement
 Fix For: 5.0.0, 3.1, 4.1

   Original Estimate: 240h
  Remaining Estimate: 240h

 Support grammar like:
 SELECT a.col1, b.col2, c.col3 FROM 
 (SELECT rk, col1 FROM table1 WHERE col1 LIKE 'foo%' AND col300 IS NULL) AS a 
 JOIN (SELECT rk, col2 FROM table2 WHERE col2 LIKE 'bar%') AS b ON a.rk=b.rk 
 JOIN (SELECT rk, col3 FROM table3 ) AS c ON a.rk=c.rk;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-852) Optimize child/parent foreign key joins

2014-08-29 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue updated PHOENIX-852:


Attachment: 852-3.patch

Moved logic of getting key expression combination to WhereOptimizer.

 Optimize child/parent foreign key joins
 ---

 Key: PHOENIX-852
 URL: https://issues.apache.org/jira/browse/PHOENIX-852
 Project: Phoenix
  Issue Type: Improvement
Reporter: James Taylor
Assignee: Maryann Xue
 Attachments: 852-2.patch, 852-3.patch, 852.patch, PHOENIX-852.patch


 Oftentimes a join will occur from a child to a parent. Our current algorithm 
 would do a full scan of one side or the other. We can do much better than 
 that if the HashCache contains the PK (or even part of the PK) of the table 
 being joined to. In these cases, we should drive the second scan through a 
 skip scan on the server side.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (PHOENIX-1222) Support derived/nested queries

2014-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1222.
---

   Resolution: Fixed
Fix Version/s: 4.1
   3.1
   5.0.0

See PHOENIX-944 for some follow-up work.

 Support derived/nested queries
 --

 Key: PHOENIX-1222
 URL: https://issues.apache.org/jira/browse/PHOENIX-1222
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
 Fix For: 5.0.0, 3.1, 4.1






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-944) Support derived tables in FROM clause that needs extra steps of client-side aggregation or other processing

2014-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-944:
-

Issue Type: Bug  (was: Sub-task)
Parent: (was: PHOENIX-1222)

 Support derived tables in FROM clause that needs extra steps of client-side 
 aggregation or other processing
 ---

 Key: PHOENIX-944
 URL: https://issues.apache.org/jira/browse/PHOENIX-944
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: Maryann Xue
Assignee: Maryann Xue
 Fix For: 3.0.0, 4.0.0, 5.0.0

   Original Estimate: 168h
  Remaining Estimate: 168h

 A GROUP BY in both the outer and inner queries cannot be flattened. We can 
 apply an extra step of client-side aggregation to handle such cases.
 See DerivedTableIT.java for the examples that are currently not supported.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-1098) Support CASCADE option on DROP TABLE that drops all VIEWs

2014-08-29 Thread Jan Fernando (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Fernando updated PHOENIX-1098:
--

Attachment: PHOENIX-1098-master.patch

Almost identical to the 4.1 patch file. No code changes. Adding for 
completeness.

 Support CASCADE option on DROP TABLE that drops all VIEWs
 -

 Key: PHOENIX-1098
 URL: https://issues.apache.org/jira/browse/PHOENIX-1098
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
Reporter: James Taylor
Assignee: Jan Fernando
 Fix For: 4.1

 Attachments: PHOENIX-1098-3.1.patch, PHOENIX-1098-4.1.patch, 
 PHOENIX-1098-master.patch


 It's inconvenient to have to manually drop all of the views of a multi-tenant 
 table before being able to drop the table. We should support a CASCADE option 
 on DROP TABLE that does this automatically, for example:
 DROP TABLE foo CASCADE



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (PHOENIX-1223) arrays of byte[]s don't encode for null bytes

2014-08-29 Thread Jesse Yates (JIRA)
Jesse Yates created PHOENIX-1223:


 Summary: arrays of byte[]s don't encode for null bytes
 Key: PHOENIX-1223
 URL: https://issues.apache.org/jira/browse/PHOENIX-1223
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.1
Reporter: Jesse Yates
 Fix For: 5.0.0, 4.2


When encoding arrays of byte[]s, Phoenix doesn't correctly encode the null byte 
(0x00). Phoenix treats it as the terminating character for the element. But when 
you do something like org.apache.hadoop.hbase.util.Bytes.toBytes(int), it 
creates a byte[4] and sets bytes from the right to the left (so 1 would be 
converted to [0,0,0,1]), and Phoenix will then see the leading 0-byte as the 
terminator of the element and just return a null element.

Instead, arrays of byte[]s need to include a length (probably as a prefix) so 
the decoder knows how many bytes to read in. It's a bigger overhead than any 
other encoding type, but that may be the cost if you want to support 
anything-goes byte arrays.
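A hedged sketch of the length-prefix idea (illustrative only; this is not Phoenix's actual array encoding): each element is written as a 4-byte length followed by its raw bytes, so embedded 0x00 bytes no longer act as terminators.
{code}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class LengthPrefixedArraySketch {
    static byte[] encode(byte[][] elements) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        out.writeInt(elements.length);        // element count up front
        for (byte[] element : elements) {
            out.writeInt(element.length);     // per-element length prefix
            out.write(element);               // raw bytes, 0x00 is fine here
        }
        out.flush();
        return buffer.toByteArray();
    }
}
{code}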



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (PHOENIX-1224) Dead loop in hbase scan when hint SKIP_SCAN is set and there is partial key match in RowValueConstructor

2014-08-29 Thread Maryann Xue (JIRA)
Maryann Xue created PHOENIX-1224:


 Summary: Dead loop in hbase scan when hint SKIP_SCAN is set and 
there is partial key match in RowValueConstructor
 Key: PHOENIX-1224
 URL: https://issues.apache.org/jira/browse/PHOENIX-1224
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: Maryann Xue


The test below will end up in a dead loop in the HBase scan.

{code}
@Test
public void testForceSkipScan() throws Exception {
    String tempTableWithCompositePK = TEMP_TABLE_COMPOSITE_PK;
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    Connection conn = DriverManager.getConnection(getUrl(), props);
    try {
        conn.createStatement().execute("CREATE TABLE " + tempTableWithCompositePK
                + "(col0 INTEGER NOT NULL, "
                + "col1 INTEGER NOT NULL, "
                + "col2 INTEGER NOT NULL, "
                + "col3 INTEGER "
                + "CONSTRAINT pk PRIMARY KEY (col0, col1, col2)) "
                + "SALT_BUCKETS=4");

        PreparedStatement upsertStmt = conn.prepareStatement(
                "upsert into " + tempTableWithCompositePK + "(col0, col1, col2, col3) "
                + "values (?, ?, ?, ?)");
        for (int i = 0; i < 3; i++) {
            upsertStmt.setInt(1, i + 1);
            upsertStmt.setInt(2, i + 2);
            upsertStmt.setInt(3, i + 3);
            upsertStmt.setInt(4, i + 5);
            upsertStmt.execute();
        }
        conn.commit();

        String query = "SELECT /*+ SKIP_SCAN*/ * FROM " + tempTableWithCompositePK
                + " WHERE (col0, col1) in ((2, 3), (3, 4), (4, 5))";
        PreparedStatement statement = conn.prepareStatement(query);
        ResultSet rs = statement.executeQuery();
        assertTrue(rs.next());
        assertEquals(rs.getInt(1), 2);
        assertEquals(rs.getInt(2), 3);
        assertEquals(rs.getInt(3), 4);
        assertEquals(rs.getInt(4), 6);
        assertTrue(rs.next());
        assertEquals(rs.getInt(1), 3);
        assertEquals(rs.getInt(2), 4);
        assertEquals(rs.getInt(3), 5);
        assertEquals(rs.getInt(4), 7);

        assertFalse(rs.next());
    } finally {
        conn.close();
    }
}
{code}

The dead-loop thread:
{panel}
"defaultRpcServer.handler=4,queue=0,port=58945" daemon prio=10 tid=0x7fe4d408c000 nid=0x7bba runnable [0x7fe4c10cf000]
   java.lang.Thread.State: RUNNABLE
    at java.util.ArrayList.size(ArrayList.java:177)
    at java.util.AbstractList$Itr.hasNext(AbstractList.java:339)
    at org.apache.hadoop.hbase.filter.FilterList.filterAllRemaining(FilterList.java:199)
    at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:263)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:469)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3937)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4017)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3885)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3876)
    at org.apache.phoenix.coprocessor.ScanRegionObserver$2.nextRaw(ScanRegionObserver.java:366)
    at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:76)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3157)
    - locked 0x000778d5dbd8 (a org.apache.phoenix.coprocessor.BaseScannerRegionObserver$1)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
    at java.lang.Thread.run(Thread.java:662)
{panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-476) Support declaration of DEFAULT in CREATE statement

2014-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-476:
-

Assignee: (was: mravi)

 Support declaration of DEFAULT in CREATE statement
 --

 Key: PHOENIX-476
 URL: https://issues.apache.org/jira/browse/PHOENIX-476
 Project: Phoenix
  Issue Type: Task
Affects Versions: 3.0-Release
Reporter: James Taylor
  Labels: enhancement

 Support the declaration of a default value in the CREATE TABLE/VIEW statement 
 like this:
 CREATE TABLE Persons (
 Pid int NOT NULL PRIMARY KEY,
 LastName varchar(255) NOT NULL,
 FirstName varchar(255),
 Address varchar(255),
 City varchar(255) DEFAULT 'Sandnes'
 )
 To implement this, we'd need to:
 1. add a new DEFAULT_VALUE key value column in SYSTEM.TABLE and pass through 
 the value when the table is created (in MetaDataClient).
 2. always set NULLABLE to ResultSetMetaData.columnNoNulls if a default value 
 is present, since the column will never be null.
 3. add a getDefaultValue() accessor in PColumn
 4.  for a row key column, during UPSERT use the default value if no value was 
 specified for that column. This could be done in the PTableImpl.newKey method.
 5. for a key value column with a default value, we can get away without 
 persisting anything. Although this takes a little more effort than persisting 
 the default value on an UPSERT for key value columns, this approach has the 
 benefit of not incurring any storage cost for a default value.
 * serialize any default value into KeyValueColumnExpression
 * in the evaluate method of KeyValueColumnExpression, conditionally use 
 the default value if the column value is not present (a sketch follows this 
 list). If doing partial evaluation, you should not yet return the default 
 value, as we may not have encountered the KeyValue for the column yet (since 
 a filter evaluates each time it sees each KeyValue, and there may be more 
 than one KeyValue referenced in the expression). Partial evaluation is 
 determined by calling Tuple.isImmutable(), where false means it is doing 
 partial evaluation, while true means it is not.
 * modify EvaluateOnCompletionVisitor by adding a visitor method for 
 RowKeyColumnExpression and KeyValueColumnExpression to set 
 evaluateOnCompletion to true if they have a default value specified. This 
 will cause filter evaluation to execute one final time after all KeyValues 
 for a row have been seen, since it's at this time we know we should use the 
 default value.
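A hedged sketch of the conditional-default behavior described in the evaluate bullet above (illustrative only; the family, qualifier, and serialized default are placeholder names, and this is not the real KeyValueColumnExpression):
{code}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.phoenix.schema.tuple.Tuple;

class DefaultValueSketch {
    // Returns true when ptr points at a value: either the stored column value
    // or, once the row is complete, the serialized default.
    static boolean evaluateWithDefault(Tuple tuple, byte[] family, byte[] qualifier,
            byte[] defaultValueBytes, ImmutableBytesWritable ptr) {
        if (tuple.getValue(family, qualifier, ptr)) {
            return true;              // stored value wins over the default
        }
        if (!tuple.isImmutable()) {
            return false;             // partial evaluation: the KeyValue may still arrive
        }
        ptr.set(defaultValueBytes);   // row complete and no value: use the default
        return true;
    }
}
{code}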



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-476) Support declaration of DEFAULT in CREATE statement

2014-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14116174#comment-14116174
 ] 

James Taylor commented on PHOENIX-476:
--

[~gabriel.reid] - interested in this one? It would be very useful to be able to 
declare an auto-increment field in the DDL as follows, to support use cases 
like this one: http://s.apache.org/Ur1
{code}
CREATE TABLE my_table (
my_pk BIGINT PRIMARY KEY DEFAULT NEXT VALUE FOR my_seq,
my_col VARCHAR
)
{code}

 Support declaration of DEFAULT in CREATE statement
 --

 Key: PHOENIX-476
 URL: https://issues.apache.org/jira/browse/PHOENIX-476
 Project: Phoenix
  Issue Type: Task
Affects Versions: 3.0-Release
Reporter: James Taylor
  Labels: enhancement

 Support the declaration of a default value in the CREATE TABLE/VIEW statement 
 like this:
 CREATE TABLE Persons (
 Pid int NOT NULL PRIMARY KEY,
 LastName varchar(255) NOT NULL,
 FirstName varchar(255),
 Address varchar(255),
 City varchar(255) DEFAULT 'Sandnes'
 )
 To implement this, we'd need to:
 1. add a new DEFAULT_VALUE key value column in SYSTEM.TABLE and pass through 
 the value when the table is created (in MetaDataClient).
 2. always set NULLABLE to ResultSetMetaData.columnNoNulls if a default value 
 is present, since the column will never be null.
 3. add a getDefaultValue() accessor in PColumn
 4.  for a row key column, during UPSERT use the default value if no value was 
 specified for that column. This could be done in the PTableImpl.newKey method.
 5. for a key value column with a default value, we can get away without 
 persisting anything. Although this takes a little more effort than persisting 
 the default value on an UPSERT for key value columns, this approach has the 
 benefit of not incurring any storage cost for a default value.
 * serialize any default value into KeyValueColumnExpression
 * in the evaluate method of KeyValueColumnExpression, conditionally use 
 the default value if the column value is not present. If doing partial 
 evaluation, you should not yet return the default value, as we may not have 
 encountered the KeyValue for the column yet (since a filter evaluates each 
 time it sees each KeyValue, and there may be more than one KeyValue 
 referenced in the expression). Partial evaluation is determined by calling 
 Tuple.isImmutable(), where false means it is doing partial evaluation, while 
 true means it is not.
 * modify EvaluateOnCompletionVisitor by adding a visitor method for 
 RowKeyColumnExpression and KeyValueColumnExpression to set 
 evaluateOnCompletion to true if they have a default value specified. This 
 will cause filter evaluation to execute one final time after all KeyValues 
 for a row have been seen, since it's at this time we know we should use the 
 default value.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-1139) Failed to disable local index when index update fails

2014-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14116176#comment-14116176
 ] 

James Taylor commented on PHOENIX-1139:
---

[~jeffreyz] - would you mind reviewing this one?

 Failed to disable local index when index update fails
 -

 Key: PHOENIX-1139
 URL: https://issues.apache.org/jira/browse/PHOENIX-1139
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.2
Reporter: Jeffrey Zhong
Assignee: rajeshbabu
 Attachments: PHOENIX-1139.patch


 When PhoenixIndexFailurePolicy is triggered because an index update failed, we 
 get the following error:
 {noformat}
 2014-07-29 18:24:53,552 WARN  [defaultRpcServer.handler=0,queue=0,port=61926] 
 org.apache.phoenix.index.PhoenixIndexFailurePolicy(136): Attempt to disable 
 index _LOCAL_IDX_T failed with code = TABLE_NOT_FOUND. Will use default 
 failure policy instead.
 {noformat}
 The reason is that in the PhoenixIndexFailurePolicy code, we construct the 
 Phoenix index table name from the underlying HBase index table name. The local 
 index table name can't be derived this way because the underlying local index 
 table name is always of the form _LOCAL_IDX_ followed by the data table name.
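As a rough illustration of the naming problem (the prefix constant is an assumption here, and this is not the committed patch), the data table name would have to be recovered by stripping the local-index prefix rather than through the usual index-to-table mapping:
{code}
// Hedged sketch: recover the data table name from a local index's physical
// HBase table name by stripping the assumed "_LOCAL_IDX_" prefix.
static String dataTableFromLocalIndex(String physicalTableName) {
    final String LOCAL_INDEX_PREFIX = "_LOCAL_IDX_";
    return physicalTableName.startsWith(LOCAL_INDEX_PREFIX)
            ? physicalTableName.substring(LOCAL_INDEX_PREFIX.length())
            : physicalTableName;
}
{code}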



--
This message was sent by Atlassian JIRA
(v6.2#6252)