[jira] [Commented] (PHOENIX-3045) Data regions in transition forever if RS holding them down during drop index

2016-07-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15373143#comment-15373143
 ] 

Jesse Yates commented on PHOENIX-3045:
--

FWIW, I don't remember the thoughts on using the caching factory. I can imagine 
it was meant to save as much time and as many resources as possible, but truly 
new HTables need to re-look up region locations and possibly aren't using the 
same threadpool (if the latter, that could be a huge issue). However, in the 
scheme of things, it's probably better to go with the CP factory instead, unless 
someone has a compelling reason... and then documents that in the code :)
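
For illustration, a rough sketch of what the CP-factory option looks like 
(class and method names here are illustrative, not the actual Phoenix index 
code):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTableInterface;

// Hand table creation to the coprocessor environment so we reuse the region
// server's managed connection (and its threadpool) instead of caching HTables
// ourselves - region locations are then already cached for us.
public class CoprocessorBackedTableFactory {
  private final CoprocessorEnvironment env;

  public CoprocessorBackedTableFactory(CoprocessorEnvironment env) {
    this.env = env;
  }

  public HTableInterface getTable(byte[] tableName) throws IOException {
    return env.getTable(TableName.valueOf(tableName));
  }
}
{code}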

> Data regions in transition forever if RS holding them down during drop index
> 
>
> Key: PHOENIX-3045
> URL: https://issues.apache.org/jira/browse/PHOENIX-3045
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergio Peleato
>Assignee: Ankit Singhal
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3045.patch, PHOENIX-3045_v1.patch
>
>
> There is a chance that the region server holding the data regions might be 
> abruptly killed before flushing the data table. This leads to the same 
> failure case: the data regions won't be opened, which leaves the regions in 
> transition forever. We need to handle this case by checking for dropped 
> indexes on recovery write failures and skipping the corresponding mutations 
> instead of writing to them.





[jira] [Commented] (PHOENIX-2892) Scan for pre-warming the block cache for 2ndary index should be removed

2016-05-16 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286053#comment-15286053
 ] 

Jesse Yates commented on PHOENIX-2892:
--

We need the locks when the edits are updating rows to ensure that we get the 
atomicity between the index update and the primary table write.

Looking back at the commit history, maybe this made more sense in 0.94? There 
were some changes around that code to bring Phoenix up to 0.98. I don't 
remember exactly what we did to validate the caching. 

bq. doing a skip scan before a batch of gets, not instead of

I believe the rationale was that we should only be touching a few rows, so 
using the skip scan to load the rows will be faster (overall) than doing the 
batched gets/scans later in the CP, since the rows will be in cache when we 
query them a moment later to build the index update.

At a high level, that sounded like fine logic, but we may have been missing 
some internals insight.

All that said, 5 mins for a read seems excessively long. Is there some sort of 
network issue or something else funky going on?
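
To make that rationale concrete, a minimal sketch of the idea (the table and 
skip-scan filter are placeholders for the real index-builder state, not the 
actual Phoenix code):

{code}
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

// One skip scan pulls the touched blocks into the block cache, so the per-row
// gets/scans issued later in the CP should be served from cache.
Scan warmup = new Scan();
warmup.setCacheBlocks(true);       // keep the blocks we touch in the cache
warmup.setFilter(skipScanFilter);  // placeholder: restricts to the few target rows
ResultScanner scanner = table.getScanner(warmup);
try {
  for (Result r : scanner) {
    // discard results; we only want the caching side effect
  }
} finally {
  scanner.close();
}
{code}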

> Scan for pre-warming the block cache for 2ndary index should be removed
> ---
>
> Key: PHOENIX-2892
> URL: https://issues.apache.org/jira/browse/PHOENIX-2892
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 4.8.0
>
> Attachments: phoenix-2892_v1.patch
>
>
> We have run into an issue in a mid-sized cluster with secondary indexes. The 
> problem is that all handlers for doing writes were blocked waiting on a 
> single scan from the secondary index to complete for > 5mins, thus causing 
> all incoming RPCs to timeout and causing write un-availability and further 
> problems (disabling the index, etc). We've taken jstack outputs continuously 
> from the servers to understand what is going on. 
> In the jstack outputs from that particular server, we can see three types of 
> stacks (this is raw jstack so the thread names are not there unfortunately). 
>   - First, there are a lot of threads waiting for the MVCC transactions 
> started previously: 
> {code}
> Thread 15292: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - 
> org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl.waitForPreviousTransactionsComplete(org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl$WriteEntry)
>  @bci=86, line=253 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl.completeMemstoreInsertWithSeqNum(org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl$WriteEntry,
>  org.apache.hadoop.hbase.regionserver.SequenceId) @bci=29, line=135 (Compiled 
> frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
>  @bci=1906, line=3187 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
>  @bci=79, line=2819 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.client.Mutation[],
>  long, long) @bci=12, line=2761 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
>  org.apache.hadoop.hbase.regionserver.Region, 
> org.apache.hadoop.hbase.quotas.OperationQuota, java.util.List, 
> org.apache.hadoop.hbase.CellScanner) @bci=150, line=692 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(org.apache.hadoop.hbase.regionserver.Region,
>  org.apache.hadoop.hbase.quotas.OperationQuota, 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionAction, 
> org.apache.hadoop.hbase.CellScanner, 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
>  java.util.List, long) @bci=547, line=654 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(com.google.protobuf.RpcController,
>  org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest) 
> @bci=407, line=2032 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(com.google.protobuf.Descriptors$MethodDescriptor,
>  com.google.protobuf.RpcController, com.google.protobuf.Message) @bci=167, 
> line=32213 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.ipc.RpcServer.call(com.google.protobuf.BlockingService,
>  com.google.protobuf.Descriptors$MethodDescriptor, 
> com.google.protobuf.Message, org.apache.hadoop.hbase.CellScanner, long, 
> 

[jira] [Commented] (PHOENIX-2756) FilteredKeyValueScanner should not implement KeyValueScanner

2016-03-10 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15190177#comment-15190177
 ] 

Jesse Yates commented on PHOENIX-2756:
--

Seems ok to me. Nice change - love all the code going away.

> FilteredKeyValueScanner should not implement KeyValueScanner
> 
>
> Key: PHOENIX-2756
> URL: https://issues.apache.org/jira/browse/PHOENIX-2756
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: churro morales
> Attachments: PHOENIX-2756.patch
>
>
> In HBASE-14355, the API for KeyValueScanner changed.  More specifically the 
> method shouldUseScanner() had a signature change.  Phoenix has a class: 
> FilteredKeyValueScanner which implements KeyValueScanner.  For HBase 98, I 
> will put up a patch that doesn't change the API signature ( a little hacky) 
> but this signature change is already in HBase-1.2+.  Either we can raise the 
> scope of KeyValueScanner to @LimitedPrivate in HBase land.  Right now its 
> @Private so people don't assume that external services are depending on the 
> API.  I'll look into how we can make things work in Phoenix land.





[jira] [Commented] (PHOENIX-2674) PhoenixMapReduceUtil#setInput doesn't honor condition clause

2016-02-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145219#comment-15145219
 ] 

Jesse Yates commented on PHOENIX-2674:
--

Committed to 1.0, 4.x-HBase-1.0, and 4.x-HBase-0.98. Thanks for taking a look 
everyone!

> PhoenixMapReduceUtil#setInput doesn't honor condition clause
> 
>
> Key: PHOENIX-2674
> URL: https://issues.apache.org/jira/browse/PHOENIX-2674
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Attachments: PHOENIX-2674.patch, phoenix-2674-v0-without-test.patch
>
>
> The parameter is completely unused in the method. Further, it looks like we 
> don't actually test this method or any m/r tools directly.
> It would be good to (a) have explicit tests for the MapReduce code - rather 
> than relying on indirect tests like the index util - and, (b) have an example 
> in code for using the mapreduce tools, rather than just the web docs (which 
> can become out of date).





[jira] [Resolved] (PHOENIX-2674) PhoenixMapReduceUtil#setInput doesn't honor condition clause

2016-02-12 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates resolved PHOENIX-2674.
--
   Resolution: Fixed
Fix Version/s: 4.7.0

> PhoenixMapReduceUtil#setInput doesn't honor condition clause
> 
>
> Key: PHOENIX-2674
> URL: https://issues.apache.org/jira/browse/PHOENIX-2674
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2674.patch, phoenix-2674-v0-without-test.patch
>
>
> The parameter is completely unused in the method. Further, it looks like we 
> don't actually test this method or any m/r tools directly.
> It would be good to (a) have explicit tests for the MapReduce code - rather 
> than relying on indirect tests like the index util - and, (b) have an example 
> in code for using the mapreduce tools, rather than just the web docs (which 
> can become out of date).





[jira] [Reopened] (PHOENIX-2674) PhoenixMapReduceUtil#setInput doesn't honor condition clause

2016-02-12 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates reopened PHOENIX-2674:
--

Huh, those test failures are a little surprising. Looking into it, but I have a 
full afternoon - won't be able to dig in until almost 5PM PST

> PhoenixMapReduceUtil#setInput doesn't honor condition clause
> 
>
> Key: PHOENIX-2674
> URL: https://issues.apache.org/jira/browse/PHOENIX-2674
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2674.patch, phoenix-2674-v0-without-test.patch
>
>
> The parameter is completely unused in the method. Further, it looks like we 
> don't actually test this method or any m/r tools directly.
> It would be good to (a) have explicit tests for the MapReduce code - rather 
> than relying on indirect tests like the index util - and, (b) have an example 
> in code for using the mapreduce tools, rather than just the web docs (which 
> can become out of date).





[jira] [Commented] (PHOENIX-2674) PhoenixMapReduceUtil#setInput doesn't honor condition clause

2016-02-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145460#comment-15145460
 ] 

Jesse Yates commented on PHOENIX-2674:
--

Sorry about that [~giacomotaylor] - it just seemed so innocuous! I'll revert if 
I can't figure out a fix by EOD

> PhoenixMapReduceUtil#setInput doesn't honor condition clause
> 
>
> Key: PHOENIX-2674
> URL: https://issues.apache.org/jira/browse/PHOENIX-2674
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2674.patch, phoenix-2674-v0-without-test.patch
>
>
> The parameter is completely unused in the method. Further, it looks like we 
> don't actually test this method or any m/r tools directly.
> It would be good to (a) have explicit tests for the MapReduce code - rather 
> than relying on indirect tests like the index util - and, (b) have an example 
> in code for using the mapreduce tools, rather than just the web docs (which 
> can become out of date).





[jira] [Commented] (PHOENIX-2674) PhoenixMapReduceUtil#setInput doesn't honor condition clause

2016-02-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145516#comment-15145516
 ] 

Jesse Yates commented on PHOENIX-2674:
--

OK, I think I found it - a stupid missed line. Anyway, running the suite right 
now to make sure.

> PhoenixMapReduceUtil#setInput doesn't honor condition clause
> 
>
> Key: PHOENIX-2674
> URL: https://issues.apache.org/jira/browse/PHOENIX-2674
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2674.patch, phoenix-2674-v0-without-test.patch
>
>
> The parameter is completely unused in the method. Further, it looks like we 
> don't actually test this method or any m/r tools directly.
> It would be good to (a) have explicit tests for the MapReduce code - rather 
> than relying on indirect tests like the index util - and, (b) have an example 
> in code for using the mapreduce tools, rather than just the web docs (which 
> can become out of date).





[jira] [Resolved] (PHOENIX-2674) PhoenixMapReduceUtil#setInput doesn't honor condition clause

2016-02-12 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates resolved PHOENIX-2674.
--
Resolution: Fixed

> PhoenixMapReduceUtil#setInput doesn't honor condition clause
> 
>
> Key: PHOENIX-2674
> URL: https://issues.apache.org/jira/browse/PHOENIX-2674
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2674.patch, phoenix-2674-v0-without-test.patch
>
>
> The parameter is completely unused in the method. Further, it looks like we 
> don't actually test this method or any m/r tools directly.
> It would be good to (a) have explicit tests for the MapReduce code - rather 
> than relying on indirect tests like the index util - and, (b) have an example 
> in code for using the mapreduce tools, rather than just the web docs (which 
> can become out of date).





[jira] [Commented] (PHOENIX-2674) PhoenixMapReduceUtil#setInput doesn't honor condition clause

2016-02-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145603#comment-15145603
 ] 

Jesse Yates commented on PHOENIX-2674:
--

Test suite passed locally. Pretty sure about this one. Committed to appropriate 
branches.

> PhoenixMapReduceUtil#setInput doesn't honor condition clause
> 
>
> Key: PHOENIX-2674
> URL: https://issues.apache.org/jira/browse/PHOENIX-2674
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2674.patch, phoenix-2674-v0-without-test.patch
>
>
> The parameter is completely unused in the method. Further, it looks like we 
> don't actually test this method or any m/r tools directly.
> It would be good to (a) have explicit tests for the MapReduce code - rather 
> than relying on indirect tests like the index util - and, (b) have an example 
> in code for using the mapreduce tools, rather than just the web docs (which 
> can become out of date).





[jira] [Assigned] (PHOENIX-2674) PhoenixMapReduceUtil#setInput doesn't honor condition clause

2016-02-11 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates reassigned PHOENIX-2674:


Assignee: Jesse Yates

> PhoenixMapReduceUtil#setInput doesn't honor condition clause
> 
>
> Key: PHOENIX-2674
> URL: https://issues.apache.org/jira/browse/PHOENIX-2674
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Attachments: phoenix-2674-v0-without-test.patch
>
>
> The parameter is completely unused in the method. Further, it looks like we 
> don't actually test this method or any m/r tools directly.
> It would be good to (a) have explicit tests for the MapReduce code - rather 
> than relying on indirect tests like the index util - and, (b) have an example 
> in code for using the mapreduce tools, rather than just the web docs (which 
> can become out of date).





[jira] [Updated] (PHOENIX-2674) PhoenixMapReduceUtil#setInput doesn't honor condition clause

2016-02-11 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-2674:
-
Attachment: PHOENIX-2674.patch

Attaching a formal patch that includes an m/r test both with and without a 
column specification. It follows the example from the website, so maybe we can 
link to this later.

[~giacomotaylor] what do you think?
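
For reference, the rough shape of the test's setup - a sketch assuming the 
setInput overload that takes a condition string plus field names, with table 
and class names following the website's stock-data example (assumptions, not 
the committed test):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

Configuration conf = new Configuration();
Job job = Job.getInstance(conf);
// one case with an explicit column specification, one without
PhoenixMapReduceUtil.setInput(job, StockWritable.class, "STOCK",
    "RECORDING_YEAR = 2009",           // the condition clause under test
    "STOCK_NAME", "RECORDING_YEAR");   // explicit columns
{code}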

> PhoenixMapReduceUtil#setInput doesn't honor condition clause
> 
>
> Key: PHOENIX-2674
> URL: https://issues.apache.org/jira/browse/PHOENIX-2674
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Attachments: PHOENIX-2674.patch, phoenix-2674-v0-without-test.patch
>
>
> The parameter is completely unused in the method. Further, it looks like we 
> don't actually test this method or any m/r tools directly.
> It would be good to (a) have explicit tests for the MapReduce code - rather 
> than relying on indirect tests like the index util - and, (b) have an example 
> in code for using the mapreduce tools, rather than just the web docs (which 
> can become out of date).





[jira] [Commented] (PHOENIX-2667) Race condition between IndexBuilder and Split for region lock

2016-02-10 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141568#comment-15141568
 ] 

Jesse Yates commented on PHOENIX-2667:
--

Was thinking we might be able to get away with a SameThreadExecutor, at least 
to start with. Yes, it impacts speed, but it makes initial testing easier...?
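
Something like this is what I had in mind - a minimal sketch, assuming the 
index builder's pool can simply be swapped out:

{code}
import java.util.concurrent.ExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

// Run index builds on the calling handler thread, so the region read lock is
// only ever taken by the thread that already owns the region operation.
// Slower, but it removes the cross-thread lock handoff while we test this.
ExecutorService indexBuilderPool = MoreExecutors.sameThreadExecutor();
{code}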

Thanks Enis for the great, deep responses.

> Race condition between IndexBuilder and Split for region lock
> -
>
> Key: PHOENIX-2667
> URL: https://issues.apache.org/jira/browse/PHOENIX-2667
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>
> In a production cluster, we have seen a condition where the split did not 
> finish for 30+ minutes. Also due to this, no request was being serviced in 
> this time frame, effectively making the region offline. 
> The jstack provides 3 types of threads waiting on the regions read or write 
> locks. 
> First, the handlers are all blocked on trying to acquire the read lock on the 
> region in multi(), most of the handlers are like this:
> {code}
> Thread 2328: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long) 
> @bci=20, line=226 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(int,
>  long) @bci=122, line=1033 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(int,
>  long) @bci=25, line=1326 (Compiled frame)
>  - java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(long, 
> java.util.concurrent.TimeUnit) @bci=10, line=873 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock,
>  int) @bci=27, line=7754 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock)
>  @bci=3, line=7741 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(org.apache.hadoop.hbase.regionserver.Region$Operation)
>  @bci=211, line=7650 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
>  @bci=21, line=2803 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.client.Mutation[],
>  long, long) @bci=12, line=2760 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
>  org.apache.hadoop.hbase.regionserver.Region, org.apache.
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(org.apache.hadoop.hbase.regionserver.Region,
>  org.apache.hadoop.hbase.quotas.OperationQuota, 
> org.apache.hadoop.hbase.protobuf
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(com.google.protobuf.RpcController,
>  org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest) 
> @bci=407, line=2032 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(com.google.protobuf.Descriptors$MethodDescriptor,
>  com.google.protobuf.RpcController, com.google.protobuf.Messa
>  - 
> org.apache.hadoop.hbase.ipc.RpcServer.call(com.google.protobuf.BlockingService,
>  com.google.protobuf.Descriptors$MethodDescriptor, 
> com.google.protobuf.Message, org.apache.hadoop.hbase.CellScanner, long,
>  - org.apache.hadoop.hbase.ipc.CallRunner.run() @bci=345, line=101 (Compiled 
> frame)
>  - 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(java.util.concurrent.BlockingQueue)
>  @bci=54, line=130 (Compiled frame)
>  - org.apache.hadoop.hbase.ipc.RpcExecutor$1.run() @bci=20, line=107 
> (Interpreted frame)
>  - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
> {code}
> second, the IndexBuilder threads from Phoenix index are also blocked waiting 
> on the region read locks: 
> {code}
> Thread 17566: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long) 
> @bci=20, line=226 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(int,
>  long) @bci=122, line=1033 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(int,
>  long) @bci=25, line=1326 (Compiled frame)
>  - java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(long, 
> java.util.concurrent.TimeUnit) @bci=10, line=873 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock,

[jira] [Updated] (PHOENIX-2674) PhoenixMapReduceUtil#setInput doesn't honor condition clause

2016-02-10 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-2674:
-
Attachment: phoenix-2674-v0-without-test.patch

Attaching a simple patch that just builds the select statement. Obviously, 
people could just do this themselves, but then we should either remove this 
method (not easy to do outside of major version boundaries) or fix it, kinda 
sorta like this.
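
Roughly the shape of the fix (an illustrative sketch only, not the committed 
patch; the variables stand in for the setInput(...) parameters):

{code}
// Build the select statement so the conditions parameter is actually honored.
StringBuilder sb = new StringBuilder("SELECT ");
sb.append(columns == null || columns.isEmpty() ? "*" : columns);
sb.append(" FROM ").append(tableName);
if (conditions != null && !conditions.isEmpty()) {
  sb.append(" WHERE (").append(conditions).append(")");
}
String selectStatement = sb.toString();
{code}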

> PhoenixMapReduceUtil#setInput doesn't honor condition clause
> 
>
> Key: PHOENIX-2674
> URL: https://issues.apache.org/jira/browse/PHOENIX-2674
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jesse Yates
> Attachments: phoenix-2674-v0-without-test.patch
>
>
> The parameter is completely unused in the method. Further, it looks like we 
> don't actually test this method or any m/r tools directly.
> It would be good to (a) have explicit tests for the MapReduce code - rather 
> than relying on indirect tests like the index util - and, (b) have an example 
> in code for using the mapreduce tools, rather than just the web docs (which 
> can become out of date).





[jira] [Created] (PHOENIX-2674) PhoenixMapReduceUtil#setInput doesn't honor condition clause

2016-02-10 Thread Jesse Yates (JIRA)
Jesse Yates created PHOENIX-2674:


 Summary: PhoenixMapReduceUtil#setInput doesn't honor condition 
clause
 Key: PHOENIX-2674
 URL: https://issues.apache.org/jira/browse/PHOENIX-2674
 Project: Phoenix
  Issue Type: Bug
Reporter: Jesse Yates


The parameter is completely unused in the method. Further, it looks like we 
don't actually test this method or any m/r tools directly.

It would be good to (a) have explicit tests for the MapReduce code - rather 
than relying on indirect tests like the index util - and, (b) have an example 
in code for using the mapreduce tools, rather than just the web docs (which can 
become out of date).





[jira] [Created] (PHOENIX-2677) PhoenixMapReduceUtil#setOutput() doesn't build correct column names

2016-02-10 Thread Jesse Yates (JIRA)
Jesse Yates created PHOENIX-2677:


 Summary: PhoenixMapReduceUtil#setOutput() doesn't build correct 
column names
 Key: PHOENIX-2677
 URL: https://issues.apache.org/jira/browse/PHOENIX-2677
 Project: Phoenix
  Issue Type: Bug
Reporter: Jesse Yates


When you specify primary key columns to write, they get dropped from the final 
set of columns in the upsert PreparedStatement, leading to a row that you 
physically cannot write correctly (even if you have all the correct 
information).

This may only occur when also using setInput(), but I'd have to check.





[jira] [Commented] (PHOENIX-2667) Race condition between IndexBuilder and Split for region lock

2016-02-10 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141954#comment-15141954
 ] 

Jesse Yates commented on PHOENIX-2667:
--

Yeah, that's what I was thinking. Correct wrt read vs write concurrency

> Race condition between IndexBuilder and Split for region lock
> -
>
> Key: PHOENIX-2667
> URL: https://issues.apache.org/jira/browse/PHOENIX-2667
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>
> In a production cluster, we have seen a condition where the split did not 
> finish for 30+ minutes. Also due to this, no request was being serviced in 
> this time frame, effectively making the region offline. 
> The jstack provides 3 types of threads waiting on the regions read or write 
> locks. 
> First, the handlers are all blocked on trying to acquire the read lock on the 
> region in multi(), most of the handlers are like this:
> {code}
> Thread 2328: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long) 
> @bci=20, line=226 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(int,
>  long) @bci=122, line=1033 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(int,
>  long) @bci=25, line=1326 (Compiled frame)
>  - java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(long, 
> java.util.concurrent.TimeUnit) @bci=10, line=873 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock,
>  int) @bci=27, line=7754 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock)
>  @bci=3, line=7741 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(org.apache.hadoop.hbase.regionserver.Region$Operation)
>  @bci=211, line=7650 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
>  @bci=21, line=2803 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.client.Mutation[],
>  long, long) @bci=12, line=2760 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
>  org.apache.hadoop.hbase.regionserver.Region, org.apache.
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(org.apache.hadoop.hbase.regionserver.Region,
>  org.apache.hadoop.hbase.quotas.OperationQuota, 
> org.apache.hadoop.hbase.protobuf
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(com.google.protobuf.RpcController,
>  org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest) 
> @bci=407, line=2032 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(com.google.protobuf.Descriptors$MethodDescriptor,
>  com.google.protobuf.RpcController, com.google.protobuf.Messa
>  - 
> org.apache.hadoop.hbase.ipc.RpcServer.call(com.google.protobuf.BlockingService,
>  com.google.protobuf.Descriptors$MethodDescriptor, 
> com.google.protobuf.Message, org.apache.hadoop.hbase.CellScanner, long,
>  - org.apache.hadoop.hbase.ipc.CallRunner.run() @bci=345, line=101 (Compiled 
> frame)
>  - 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(java.util.concurrent.BlockingQueue)
>  @bci=54, line=130 (Compiled frame)
>  - org.apache.hadoop.hbase.ipc.RpcExecutor$1.run() @bci=20, line=107 
> (Interpreted frame)
>  - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
> {code}
> second, the IndexBuilder threads from Phoenix index are also blocked waiting 
> on the region read locks: 
> {code}
> Thread 17566: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long) 
> @bci=20, line=226 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(int,
>  long) @bci=122, line=1033 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(int,
>  long) @bci=25, line=1326 (Compiled frame)
>  - java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(long, 
> java.util.concurrent.TimeUnit) @bci=10, line=873 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock,
>  int) @bci=27, line=7754 (Interpreted frame)
>  - 
> 

[jira] [Commented] (PHOENIX-2667) Race condition between IndexBuilder and Split for region lock

2016-02-09 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15139672#comment-15139672
 ] 

Jesse Yates commented on PHOENIX-2667:
--

Well, that sucks. Is there any way we can process that read under the same 
lock? That might be an HBase change, but basically adding an HRegion method, 
something like #scanWithReadLock(), that gets called from the standard scan 
method. Anyone else who reads region state on update from a CP should have this 
same problem, yes?

I don't understand why we don't have this problem with 
HRegion#checkAndMutateRow() since that takes a lock twice by calling 
startRegionOperation() twice.

However, I noticed this comment:
bq.   // split, merge or compact region doesn't need to check the 
closing/closed state or lock the region

In HRegion#startRegionOperation... so I'm a bit confused
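
To sketch what I mean (no such HRegion method exists today - the name comes 
from my comment above, and this is only the idea, not working HBase code):

{code}
// Hypothetical HBase-side addition: perform the index-building read on the
// thread that already holds the region read lock from the mutation path,
// instead of handing it to an index-builder thread that must re-acquire the
// lock and can then deadlock against a pending split's write lock.
public RegionScanner scanWithReadLock(Scan scan) throws IOException {
  // assumes the caller is already inside startRegionOperation(), i.e. the
  // read lock for this operation is held by the current thread
  return getScanner(scan);
}
{code}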

> Race condition between IndexBuilder and Split for region lock
> -
>
> Key: PHOENIX-2667
> URL: https://issues.apache.org/jira/browse/PHOENIX-2667
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>
> In a production cluster, we have seen a condition where the split did not 
> finish for 30+ minutes. Also due to this, no request was being serviced in 
> this time frame, effectively making the region offline. 
> The jstack provides 3 types of threads waiting on the regions read or write 
> locks. 
> First, the handlers are all blocked on trying to acquire the read lock on the 
> region in multi(), most of the handlers are like this:
> {code}
> Thread 2328: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long) 
> @bci=20, line=226 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(int,
>  long) @bci=122, line=1033 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(int,
>  long) @bci=25, line=1326 (Compiled frame)
>  - java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(long, 
> java.util.concurrent.TimeUnit) @bci=10, line=873 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock,
>  int) @bci=27, line=7754 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock)
>  @bci=3, line=7741 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(org.apache.hadoop.hbase.regionserver.Region$Operation)
>  @bci=211, line=7650 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
>  @bci=21, line=2803 (Interpreted frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.client.Mutation[],
>  long, long) @bci=12, line=2760 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
>  org.apache.hadoop.hbase.regionserver.Region, org.apache.
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(org.apache.hadoop.hbase.regionserver.Region,
>  org.apache.hadoop.hbase.quotas.OperationQuota, 
> org.apache.hadoop.hbase.protobuf
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(com.google.protobuf.RpcController,
>  org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest) 
> @bci=407, line=2032 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(com.google.protobuf.Descriptors$MethodDescriptor,
>  com.google.protobuf.RpcController, com.google.protobuf.Messa
>  - 
> org.apache.hadoop.hbase.ipc.RpcServer.call(com.google.protobuf.BlockingService,
>  com.google.protobuf.Descriptors$MethodDescriptor, 
> com.google.protobuf.Message, org.apache.hadoop.hbase.CellScanner, long,
>  - org.apache.hadoop.hbase.ipc.CallRunner.run() @bci=345, line=101 (Compiled 
> frame)
>  - 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(java.util.concurrent.BlockingQueue)
>  @bci=54, line=130 (Compiled frame)
>  - org.apache.hadoop.hbase.ipc.RpcExecutor$1.run() @bci=20, line=107 
> (Interpreted frame)
>  - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
> {code}
> second, the IndexBuilder threads from Phoenix index are also blocked waiting 
> on the region read locks: 
> {code}
> Thread 17566: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long) 
> @bci=20, line=226 (Compiled frame)
>  - 
> 

[jira] [Commented] (PHOENIX-2535) Create shaded clients (thin + thick)

2016-01-05 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083425#comment-15083425
 ] 

Jesse Yates commented on PHOENIX-2535:
--

I think Storm already shades all its dependencies, so I'm not too concerned 
there... and don't really have the bandwidth right now to pick this up.

> Create shaded clients (thin + thick) 
> -
>
> Key: PHOENIX-2535
> URL: https://issues.apache.org/jira/browse/PHOENIX-2535
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
> Fix For: 4.7.0
>
>
> Having shaded client artifacts helps greatly in minimizing the dependency 
> conflicts at the run time. We are seeing more of Phoenix JDBC client being 
> used in Storm topologies and other settings where guava versions become a 
> problem. 
> I think we can do a parallel artifact for the thick client with shaded 
> dependencies and also using shaded hbase. For thin client, maybe shading 
> should be the default since it is new? 





[jira] [Commented] (PHOENIX-2503) Multiple Java NoClass/Method Errors with Spark and Phoenix

2015-12-10 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051439#comment-15051439
 ] 

Jesse Yates commented on PHOENIX-2503:
--

Meh, seems OK. Maybe add a comment to that build target pointing to the Spark issue.

> Multiple Java NoClass/Method Errors with Spark and Phoenix
> --
>
> Key: PHOENIX-2503
> URL: https://issues.apache.org/jira/browse/PHOENIX-2503
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: Debian 8 (Jessie) x64
> hadoop-2.6.2
> hbase-1.1.2
> phoenix-4.6.0-HBase-1.1
> spark-1.5.2-bin-without-hadoop
>Reporter: Jonathan Cox
>Priority: Blocker
> Attachments: PHOENIX-2503.patch
>
>
> I have encountered a variety of Java errors while trying to get Apache 
> Phoenix working with Spark. In particular, I encounter these errors when 
> submitting Python jobs to the spark-shell, or running interactively in the 
> scala Spark shell. 
> --- Issue 1 ---
> The first issue I encountered was that Phoenix would not work with the binary 
> Spark release that includes Hadoop 2.6 (spark-1.5.2-bin-hadoop2.6.tgz). I 
> tried adding the phoenix-4.6.0-HBase-1.1-client.jar to both spark-env.sh and 
> spark-defaults.conf, but encountered the same error when launching 
> spark-shell:
> 15/12/08 18:38:05 WARN ObjectStore: Version information not found in 
> metastore. hive.metastore.schema.verification is not enabled so recording the 
> schema version 1.2.0
> 15/12/08 18:38:05 WARN ObjectStore: Failed to get database default, returning 
> NoSuchObjectException
> 15/12/08 18:38:05 WARN Hive: Failed to access metastore. This class should 
> not accessed in runtime.
> org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: 
> Unable to instantiate 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1236)
> - Issue 2 -
> Alright, having given up on getting Phoenix to work with the Spark package 
> that includes Hadoop, I decided to download hadoop-2.6.2.tar.gz and 
> spark-1.5.2-bin-without-hadoop.tgz. I installed these, and again added 
> phoenix-4.6.0-HBase-1.1-client.jar to spark-defaults.conf. In addition, I 
> added the following lines to spark-env.sh:
> SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
> export 
> SPARK_DIST_CLASSPATH="$SPARK_DIST_CLASSPATH:/usr/local/hadoop/share/hadoop/tools/lib/*"
>  
> export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
> This solved "Issue 1" described above, and now spark-shell launches without 
> generating an error. Nevertheless, other Spark functionality is now broken:
> 15/12/09 13:55:46 INFO repl.SparkILoop: Created spark context..
> Spark context available as sc.
> 15/12/09 13:55:46 INFO repl.SparkILoop: Created sql context..
> SQL context available as sqlContext.
> scala> val textFile = sc.textFile("README.md")
> java.lang.NoSuchMethodError: 
> com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer$.handledType()Ljava/lang/Class;
>   at 
> com.fasterxml.jackson.module.scala.deser.NumberDeserializers$.(ScalaNumberDeserializersModule.scala:49)
>   at 
> com.fasterxml.jackson.module.scala.deser.NumberDeserializers$.(ScalaNumberDeserializersModule.scala)
> Note, this error goes away if I omit phoenix-4.6.0-HBase-1.1-client.jar (but 
> then I have no Phoenix support, obviously). This makes me believe that 
> phoenix-4.6.0-HBase-1.1-client.jar contains some conflicting version of 
> Jackson FastXML classes, which are overriding Spark's Jackson classes with an 
> earlier version that doesn't include this particular method. In other words, 
> Spark needs one version of Jackson JARs, but Phoenix is including another 
> that breaks Spark. Does this make any sense?
> Sincerely,
> Jonathan





[jira] [Resolved] (PHOENIX-1910) Sort out maven assembly dependencies

2015-11-06 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates resolved PHOENIX-1910.
--
Resolution: Won't Fix

> Sort out maven assembly dependencies
> 
>
> Key: PHOENIX-1910
> URL: https://issues.apache.org/jira/browse/PHOENIX-1910
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Cody Marcel
>Assignee: Jesse Yates
>Priority: Minor
>
> It's unclear how to correctly add a dependency for maven assembly. Moving the 
> module last is a temp work around, but we should figure out a more explicit 
> way.





[jira] [Commented] (PHOENIX-1910) Sort out maven assembly dependencies

2015-10-28 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14978880#comment-14978880
 ] 

Jesse Yates commented on PHOENIX-1910:
--

I think people have figured this out - can we drop it?

> Sort out maven assembly dependencies
> 
>
> Key: PHOENIX-1910
> URL: https://issues.apache.org/jira/browse/PHOENIX-1910
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Cody Marcel
>Assignee: Jesse Yates
>Priority: Minor
>
> It's unclear how to correctly add a dependency for maven assembly. Moving the 
> module last is a temp work around, but we should figure out a more explicit 
> way.





[jira] [Commented] (PHOENIX-2183) Write to log when actually doing WAL replay

2015-08-17 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14700515#comment-14700515
 ] 

Jesse Yates commented on PHOENIX-2183:
--

Seems trivial. My only concern would be renaming the JIRA - at first glance the 
title implies writing to the WAL when doing WAL replay, not just doing the 
debug/progress logging.
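
The fix is just a reordering - a sketch of the patched snippet, matching the 
code quoted below:

{code}
//if we have no pending edits to complete, then we are done
if (updates == null || updates.size() == 0) {
  return;
}
// only log once we know a replay is actually happening
LOG.info("Found some outstanding index updates that didn't succeed during"
    + " WAL replay - attempting to replay now.");
{code}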

 Write to log when actually doing WAL replay
 ---

 Key: PHOENIX-2183
 URL: https://issues.apache.org/jira/browse/PHOENIX-2183
 Project: Phoenix
  Issue Type: Bug
Reporter: Yuhao Bi
 Attachments: PHOENIX-2183.patch


 In Indexer#postOpen(...) we write the log unconditionally.
 {code}
 LOG.info("Found some outstanding index updates that didn't succeed during"
 +  " WAL replay - attempting to replay now.");
 //if we have no pending edits to complete, then we are done
 if (updates == null || updates.size() == 0) {
   return;
 }
 {code}
 We should only write the log when we are actually doing a replay.





[jira] [Commented] (PHOENIX-2183) Write to log when actually doing WAL replay

2015-08-17 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14700517#comment-14700517
 ] 

Jesse Yates commented on PHOENIX-2183:
--

oh, and +1 on patch

 Write to log when actually doing WAL replay
 ---

 Key: PHOENIX-2183
 URL: https://issues.apache.org/jira/browse/PHOENIX-2183
 Project: Phoenix
  Issue Type: Bug
Reporter: Yuhao Bi
 Attachments: PHOENIX-2183.patch


 In Indexer#postOpen(...) we write the log unconditionally.
 {code}
 LOG.info("Found some outstanding index updates that didn't succeed during"
 +  " WAL replay - attempting to replay now.");
 //if we have no pending edits to complete, then we are done
 if (updates == null || updates.size() == 0) {
   return;
 }
 {code}
 We should only write the log when we are actually doing a replay.





[jira] [Updated] (PHOENIX-2183) Fix debug log line when doing secondary index WAL replay

2015-08-17 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-2183:
-
Summary: Fix debug log line when doing secondary index WAL replay  (was: 
Add debug log line when doing secondary index WAL replay)

 Fix debug log line when doing secondary index WAL replay
 

 Key: PHOENIX-2183
 URL: https://issues.apache.org/jira/browse/PHOENIX-2183
 Project: Phoenix
  Issue Type: Bug
Reporter: Yuhao Bi
 Attachments: PHOENIX-2183.patch


 In Indexer#postOpen(...) we write the log unconditionally.
 {code}
 LOG.info("Found some outstanding index updates that didn't succeed during"
 +  " WAL replay - attempting to replay now.");
 //if we have no pending edits to complete, then we are done
 if (updates == null || updates.size() == 0) {
   return;
 }
 {code}
 We should only write the log when we are actually doing a replay.





[jira] [Commented] (PHOENIX-2025) Phoenix-core's hbase-default.xml prevents HBaseTestingUtility from starting up in client apps

2015-06-03 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571631#comment-14571631
 ] 

Jesse Yates commented on PHOENIX-2025:
--

Kind of a major issue for anyone running unit tests using the phoenix test jars 
(so people can leverage the nice setup/teardown utils in phoenix test) - 
surprised no one else has seen this yet.

Maybe we can split out the test utils (e.g. BaseTest and its brethren) into a 
separate module - phoenix-test-utils - and then just have the phoenix-core 
tests depend on that module and set its own hbase-default.xml (since 
hbase-site.xml doesn't work). Downstream projects would just import 
phoenix-test-utils jar, not phoenix-core:tests.

Or maybe it will just take some pom fiddling to not include the 
hbase-default.xml when we build the tests jar...not sure what that would take.
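
To see the failure mode in isolation, a minimal repro sketch (assuming the 
phoenix-core test jar is on the classpath):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

Configuration conf = HBaseConfiguration.create();
// Phoenix's one-property hbase-default.xml shadows HBase's, so only the
// version-skip flag survives and the other HBase defaults are missing -
// which is why HBaseTestingUtility can't start.
System.out.println(conf.get("hbase.defaults.for.version.skip")); // "true"
System.out.println(conf.get("hbase.zookeeper.quorum"));          // null instead of "localhost"
{code}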

 Phoenix-core's hbase-default.xml prevents HBaseTestingUtility from starting 
 up in client apps
 -

 Key: PHOENIX-2025
 URL: https://issues.apache.org/jira/browse/PHOENIX-2025
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.0
Reporter: Geoffrey Jacoby

 Phoenix seems to have long had its own version of hbase-default.xml as a test 
 resource in phoenix-core with a single setting to override 
 hbase.defaults.for.version.skip to true. Sometime around Phoenix 4.3, 
 phoenix-core seems to have been split into a main jar and a test jar, and the 
 hbase-default.xml went into the test jar.
 The odd result of this is that in client apps that include the test jar, the 
 classloader in HBaseConfiguration.create() now sees Phoenix's 
 hbase-default.xml, rather than HBase's, and creates a Configuration object 
 without HBase's defaults. One major consequence of this is that the 
 HBaseTestingUtility can't start up, because it relies on those HBase defaults 
 being set. This is a huge problem in a client app that includes the 
 phoenix-core test jar in order to make use of the PhoenixTestDriver and 
 BaseTest classes; the upgrade to 4.3 breaks all tests using the 
 HBaseTestingUtility. 
 I've verified that phoenix-core's own tests don't pass if its internal 
 hbase-default.xml is missing (ZK has problems starting up), and that renaming 
 it to hbase-site.xml doesn't seem to fix the problem either. I looked around 
 for a central point in code to manually set the 
 hbase.defaults.for.version.skip flag, but couldn't find one; BaseTest didn't 
 seem to cover all the needed test cases. 





[jira] [Commented] (PHOENIX-1977) Always getting No FileSystem for scheme: hdfs when exported HBASE_CONF_PATH

2015-05-18 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549775#comment-14549775
 ] 

Jesse Yates commented on PHOENIX-1977:
--

bq. Phoenix needs hadoop jars in the class path in the first place.

Yeah, I think it's for the M/R integration and the dependency tree that gets 
built out of that. I think the bigger question is what kind of installs people 
are _actually_ doing and how they use phoenix. I feel like people want:

* a drop in jar to their client
* a standalone tarball with a simple HBase config
* drop in, thin server jars (so not including HBase/Hadoop/etc.)

It gets tricky when we start trying to be smart and create semi-thin jars 
that contain phoenix + some dependencies - which do we want to include? Gut says 
everything that doesn't come bundled with HBase/Hadoop by default; this would 
make even more sense if Phoenix were part of HBase proper (as a module), in 
which case it would just be a jar in the appropriate tarball.

bq. Maybe there was a reason for including them explicitly

Yeah, because you don't actually need a lot of the dependencies that maven 
would throw in, just to run a client - the result just ends up being massive. 
However, you do need a base set across all of them, so this was that base set. 
In the other building files, we manage the other components as needed. 

Maybe using the maven-shade plugin would be a better solution here, especially 
if we are considering dropping the tarball... though I think we can come to a 
consensus that people still want it? If we keep it, then it's better to keep 
things in the build files rather than split across two different plugins.

 Always getting No FileSystem for scheme: hdfs when exported HBASE_CONF_PATH
 ---

 Key: PHOENIX-1977
 URL: https://issues.apache.org/jira/browse/PHOENIX-1977
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Attachments: PHOENIX-1977.patch


 Always getting this exception when HBASE_CONF_PATH is exported with a 
 configuration directory. Connection creation always expects the hadoop-hdfs 
 jar to be present in that case. I think we can check for and load the hdfs jar 
 from any of the places HADOOP_HOME, HBASE_HOME/lib, and the current directory.
 For UDFs the hadoop-common and hadoop-hdfs jars are compulsory.
 {code}
 java.io.IOException: No FileSystem for scheme: hdfs
   at 
 org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2579)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2586)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
   at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:229)
   at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
   at 
 org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
   at 
 org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833)
   at 
 org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
   at 
 org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:410)
   at 
 org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:319)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)
   at 
 org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
   at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:286)
   at 
 

[jira] [Commented] (PHOENIX-1979) FamilyOnlyFilter should test for != INCLUDE instead of == SKIP

2015-05-18 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549777#comment-14549777
 ] 

Jesse Yates commented on PHOENIX-1979:
--

Yeah, I think we can drop those classes. IIRC they were from the generic SI 
framework days, but since no one but Phoenix is using it, we might as well just 
rip out the things that Phoenix doesn't use :)

 FamilyOnlyFilter should test for != INCLUDE instead of == SKIP
 --

 Key: PHOENIX-1979
 URL: https://issues.apache.org/jira/browse/PHOENIX-1979
 Project: Phoenix
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 5.0.0, 4.3.0, 4.4.0

 Attachments: PHOENIX-1979.patch


 TestFamilyOnlyFilter wants to confirm that the HBase FamilyFilter filters out 
 cells as expected, but is unnecessarily brittle in that it checks for a 
 specific return hint (SKIP) when it should just be checking that the cell was 
 not included (INCLUDE). This breaks after HBASE-13122, which optimizes the 
 FamilyFilter return hints.





[jira] [Updated] (PHOENIX-1965) Upgrade Pig to version 0.13

2015-05-14 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1965:
-
Fix Version/s: 4.5.0

 Upgrade Pig to version 0.13
 ---

 Key: PHOENIX-1965
 URL: https://issues.apache.org/jira/browse/PHOENIX-1965
 Project: Phoenix
  Issue Type: Improvement
Reporter: Prashant Kommireddi
Assignee: Prashant Kommireddi
 Fix For: 4.5.0


 Currently phoenix uses 0.12. The next version has been out and stable for a 
 while now.





[jira] [Updated] (PHOENIX-1965) Upgrade Pig to version 0.13

2015-05-14 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1965:
-
Fix Version/s: 4.4.0

 Upgrade Pig to version 0.13
 ---

 Key: PHOENIX-1965
 URL: https://issues.apache.org/jira/browse/PHOENIX-1965
 Project: Phoenix
  Issue Type: Improvement
Reporter: Prashant Kommireddi
Assignee: Prashant Kommireddi
 Fix For: 4.4.0, 4.5.0


 Currently phoenix uses 0.12. The next version has been out and stable for a 
 while now.





[jira] [Commented] (PHOENIX-1965) Upgrade Pig to version 0.13

2015-05-14 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544263#comment-14544263
 ] 

Jesse Yates commented on PHOENIX-1965:
--

Committed to master. Happy to commit to the 4.x branches if [~jamestaylor], 
[~rajeshbabu], [~samarthjain] are OK with it? Seems like we won't have compat 
issues, but I'm worried that once we bump the version on the point branches we 
might start adding things from 0.13 that aren't backwards compatible.

 Upgrade Pig to version 0.13
 ---

 Key: PHOENIX-1965
 URL: https://issues.apache.org/jira/browse/PHOENIX-1965
 Project: Phoenix
  Issue Type: Improvement
Reporter: Prashant Kommireddi
Assignee: Prashant Kommireddi

 Currently phoenix uses 0.12. The next version has been out and stable for a 
 while now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1965) Upgrade Pig to version 0.13

2015-05-13 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14542232#comment-14542232
 ] 

Jesse Yates commented on PHOENIX-1965:
--

So, I'm only going to commit this to the master branch. I'm looking to preserve 
semver in the release branches and I'm concerned that if we bump the dependency 
on the 4.X branches, then we will start incorporating pig 0.13-only features.

Sound reasonable? I really have no experience with pig though...

 Upgrade Pig to version 0.13
 ---

 Key: PHOENIX-1965
 URL: https://issues.apache.org/jira/browse/PHOENIX-1965
 Project: Phoenix
  Issue Type: Improvement
Reporter: Prashant Kommireddi
Assignee: Prashant Kommireddi

 Currently phoenix uses 0.12. The next version has been out and stable for a 
 while now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1954) Reserve chunks of numbers for a sequence

2015-05-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540851#comment-14540851
 ] 

Jesse Yates commented on PHOENIX-1954:
--

I'm ok with {{NEXT n VALUE FOR seq}} as long as we are explicit about what 
happens in this case:
{code}
NEXT VALUE FOR seq
NEXT 100 VALUE FOR seq
NEXT VALUE FOR seq
{code}
Which would be something like: 1, 101, 201. Essentially throwing away the 
cache from the first NEXT VALUE FOR (which was 1-100), assigning the batch 
the next 100 ids (101-200), and then getting the next sequential ID (201) with 
a fresh cache (201-300).
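
Spelled out against a concrete sequence definition (illustrative only; none of 
this syntax was final at this point):
{code}
-- Illustrative walk-through, assuming a sequence created with CACHE 100.
CREATE SEQUENCE seq START WITH 1 CACHE 100;
NEXT VALUE FOR seq      -- returns 1; client caches 1-100
NEXT 100 VALUE FOR seq  -- returns 101; discards the rest of 1-100, reserves 101-200
NEXT VALUE FOR seq      -- returns 201; client starts a fresh cache 201-300
{code}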

 Reserve chunks of numbers for a sequence
 

 Key: PHOENIX-1954
 URL: https://issues.apache.org/jira/browse/PHOENIX-1954
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl

 In order to be able to generate many ids in bulk (for example in map reduce 
 jobs) we need a way to generate or reserve large sets of ids. We also need to 
 mix ids reserved with incrementally generated ids from other clients. 
 For this we need to atomically increment the sequence and return the value it 
 had when the increment happened.
 If we're OK to throw the current cached set of values away we can do
 {{NEXT VALUE FOR seq(,N)}}, that needs to increment value and return the 
 value it incremented from (i.e. it has to throw the current cache away, and 
 return the next value it found at the server).
 Or we can invent a new syntax {{RESERVE VALUES FOR seq, N}} that does the 
 same, but does not invalidate the cache.
 Note that in either case we won't retrieve the reserved set of values via 
 {{NEXT VALUE FOR}} because we'd need to be idempotent in our case, all we 
 need to guarantee is that after a call to {{RESERVE VALUES FOR seq, N}}, 
 which returns a value M is that the range [M, M+N) won't be used by any 
 other user of the sequence. We might need to reserve 1bn ids this way ahead of a 
 map reduce run.
 Any better ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1954) Reserve chunks of numbers for a sequence

2015-05-11 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538672#comment-14538672
 ] 

Jesse Yates commented on PHOENIX-1954:
--

In the original discussion that came up, we arrived at the same syntax, but 
with the following problem. Suppose the client first gets a sequence value 
(with a cache of 100), so they reserve {{0-99}} and get the value 0. Then, to 
reserve a batch, they use {{NEXT 1000 VALUE FOR seq}}, which bumps the 
external next id to {{1100}}. When they next do {{NEXT VALUE FOR seq}}, what 
should the next value be?

There are a couple of possible solutions:
* They get value 1. Then if they call it 99 more times, they would get 
2,3,...99, 1100 - which skips the reserved range. This is however a bit odd, 
and is why Lars proposed the different syntax, so the client is aware that the 
next sequence is unmanaged.
* They get value 1100. This would 'throw away' the client cache of {{0-99}} 
and just get the next logical element of the sequence. Simpler, and it 
reserves the number space.
* They get 1, followed by 2,3,...99,100, 101,...1099. However, this would 
conflict with the idea of a 'reserved' space which is allocated as needed from 
the client's perspective.

The reserved ID space is somewhat separate from the client's standard sequence 
logic, but in many cases needs to interoperate in the same sequence. For 
instance, batch-generating UUIDs (reserving an appropriately sized block) 
interleaved with stream/on-demand generation of UUIDs.

{{ALLOCATE}} differentiates the above cases since it somewhat decouples the 
client's two usages.

 Reserve chunks of numbers for a sequence
 

 Key: PHOENIX-1954
 URL: https://issues.apache.org/jira/browse/PHOENIX-1954
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl

 In order to be able to generate many ids in bulk (for example in map reduce 
 jobs) we need a way to generate or reserve large sets of ids. We also need to 
 mix ids reserved with incrementally generated ids from other clients. 
 For this we need to atomically increment the sequence and return the value it 
 had when the increment happened.
 If we're OK to throw the current cached set of values away we can do
 {{NEXT VALUE FOR seq(,N)}}, that needs to increment value and return the 
 value it incremented from (i.e. it has to throw the current cache away, and 
 return the next value it found at the server).
 Or we can invent a new syntax {{RESERVE VALUES FOR seq, N}} that does the 
 same, but does not invalidate the cache.
 Note that in either case we won't retrieve the reserved set of values via 
 {{NEXT VALUE FOR}} because we'd need to be idempotent in our case, all we 
 need to guarantee is that after a call to {{RESERVE VALUES FOR seq, N}}, 
 which returns a value M is that the range [M, M+N) won't be used by any 
 other user of the sequence. We might need to reserve 1bn ids this way ahead of a 
 map reduce run.
 Any better ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1954) Reserve chunks of numbers for a sequence

2015-05-11 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538769#comment-14538769
 ] 

Jesse Yates commented on PHOENIX-1954:
--

I think it's more that we want to reserve those values.

{code}
statement specifies NEXT 1000 VALUES FOR seq and executes it 1001 times
{code}

To me, that would imply that they want 1001*1000 values.

bq. is the caller committing to allocate no more than 1000 values?

I think it's more that it is asking for a contiguous allocation of 1000 
values, of which it may use some part. Any overrun should be expected to 
produce duplicates outside the range. However, it is possible that, 
interleaved with the reserved batch they just claimed, they could be asking 
for the next unique sequence value, in which case {{NEXT VALUE FOR}} could 
mean either:
 * the next value in the range
 * the next unreserved value

Depending on how the semantics are defined, we don't necessarily want that, 
especially in an M/R context where we are integrating with the current NEXT 
VALUE FOR semantics; it's not clear which should take place (and both are 
valid).

Maybe we could do something like {{NEXT n VALUE FOR seq}} and then a 
{{SKIP || RESERVE || ALLOCATE n VALUE FOR seq}}, where:
* the first has the semantics of allocating a larger batch, from which 
{{NEXT VALUE FOR}} gives you the next value in the range
* the second skips ahead of the batch, and {{NEXT VALUE FOR}} gives you the 
next unreserved number.
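
As a sketch of the two behaviors side by side (purely illustrative; neither 
syntax was final at this point):
{code}
NEXT 1000 VALUE FOR seq
-- claims 1000 values; subsequent NEXT VALUE FOR calls walk through that batch

ALLOCATE 1000 VALUE FOR seq
-- reserves a contiguous range [M, M+1000) out of band; subsequent
-- NEXT VALUE FOR calls return the next unreserved value
{code}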


 Reserve chunks of numbers for a sequence
 

 Key: PHOENIX-1954
 URL: https://issues.apache.org/jira/browse/PHOENIX-1954
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl

 In order to be able to generate many ids in bulk (for example in map reduce 
 jobs) we need a way to generate or reserve large sets of ids. We also need to 
 mix ids reserved with incrementally generated ids from other clients. 
 For this we need to atomically increment the sequence and return the value it 
 had when the increment happened.
 If we're OK to throw the current cached set of values away we can do
 {{NEXT VALUE FOR seq(,N)}}, that needs to increment value and return the 
 value it incremented from (i.e. it has to throw the current cache away, and 
 return the next value it found at the server).
 Or we can invent a new syntax {{RESERVE VALUES FOR seq, N}} that does the 
 same, but does not invalidate the cache.
 Note that in either case we won't retrieve the reserved set of values via 
 {{NEXT VALUE FOR}} because we'd need to be idempotent in our case, all we 
 need to guarantee is that after a call to {{RESERVE VALUES FOR seq, N}}, 
 which returns a value M is that the range [M, M+N) won't be used by any 
 other user of the sequence. We might need to reserve 1bn ids this way ahead of a 
 map reduce run.
 Any better ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1728) Pherf - Make tests use mini cluster

2015-04-23 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509710#comment-14509710
 ] 

Jesse Yates commented on PHOENIX-1728:
--

bq. moving the phoenix-assembly module to be listed last so that the other unit 
tests run before the tar building?

IIRC, that's not how it works. Instead, phoenix-assembly runs last because it 
depends on the other projects. So, [~cody.mar...@gmail.com] needs to add pherf 
to the dependencies list in the phoenix-assembly pom.
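
Concretely, that would be a sketch along these lines in 
phoenix-assembly/pom.xml (the artifactId is an assumption based on the module 
name):
{code}
<!-- Hypothetical addition to phoenix-assembly/pom.xml; artifactId assumed. -->
<dependency>
  <groupId>org.apache.phoenix</groupId>
  <artifactId>phoenix-pherf</artifactId>
  <version>${project.version}</version>
</dependency>
{code}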

 Pherf - Make tests use mini cluster
 ---

 Key: PHOENIX-1728
 URL: https://issues.apache.org/jira/browse/PHOENIX-1728
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Cody Marcel
Assignee: Cody Marcel
Priority: Minor
  Labels: newbie
 Fix For: 5.0.0, 4.4.0


 Some unit tests currently depend on a cluster being available or they will 
 fail. Make these tests use mini cluster.
 Tests are currently disabled in the build. We need to enable these.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1728) Pherf - Make tests use mini cluster

2015-04-23 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509808#comment-14509808
 ] 

Jesse Yates commented on PHOENIX-1728:
--

This is one of the caveats of using an assembly module. It requires doing a mvn 
install (or likely, a mvn install -DskipTests) so there is something there to 
build with. Try removing your ~/.m2/repository and see if it builds without the 
mvn install :) (hint: it won't).

 Pherf - Make tests use mini cluster
 ---

 Key: PHOENIX-1728
 URL: https://issues.apache.org/jira/browse/PHOENIX-1728
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Cody Marcel
Assignee: Cody Marcel
Priority: Minor
  Labels: newbie
 Fix For: 5.0.0, 4.4.0


 Some unit tests currently depend on a cluster being available or they will 
 fail. Make these tests use mini cluster.
 Tests are currently disabled in the build. We need to enable these.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1728) Pherf - Make tests use mini cluster

2015-04-23 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509926#comment-14509926
 ] 

Jesse Yates commented on PHOENIX-1728:
--

+1 and mind filing a follow-up on me to look into how to do it 'right'?

 Pherf - Make tests use mini cluster
 ---

 Key: PHOENIX-1728
 URL: https://issues.apache.org/jira/browse/PHOENIX-1728
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Cody Marcel
Assignee: Cody Marcel
Priority: Minor
  Labels: newbie
 Fix For: 5.0.0, 4.4.0


 Some unit tests currently depend on a cluster being available or they will 
 fail. Make these tests use mini cluster.
 Tests are currently disabled in the build. We need to enable these.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1824) Run all IT's through Query Server

2015-04-08 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485704#comment-14485704
 ] 

Jesse Yates commented on PHOENIX-1824:
--

Are you thinking we will want to run all the ITs both ways - with the query 
server and standard client?

How would the @Parameterized annotation work cleanly? I'd assume you would 
still subclass off a general parameterizing class that sets the values. 
Alternatively, you could get fancy and create your own test runner that does 
the parameterization, so you only add @RunWith.

I've been supportive of a phoenix-it for a while, but we haven't had an 
explicit need for it yet... setting up the query service/config might be a 
reasonable use case. 
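
For what it's worth, a minimal sketch of the @Parameterized route; the 
base-class name and JDBC URLs are illustrative, not Phoenix's actual test 
scaffolding:
{code}
import java.util.Arrays;
import java.util.Collection;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

// Hypothetical base class: every IT subclassing this runs once per driver.
@RunWith(Parameterized.class)
public abstract class BaseDriverParameterizedIT {

    @Parameterized.Parameters(name = "{0}")
    public static Collection<Object[]> drivers() {
        return Arrays.asList(new Object[][] {
            { "jdbc:phoenix:localhost" },                       // standard client
            { "jdbc:phoenix:thin:url=http://localhost:8765" },  // query server
        });
    }

    protected final String jdbcUrl; // subclasses open connections against this

    public BaseDriverParameterizedIT(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }
}
{code}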



 Run all IT's through Query Server
 -

 Key: PHOENIX-1824
 URL: https://issues.apache.org/jira/browse/PHOENIX-1824
 Project: Phoenix
  Issue Type: Test
Affects Versions: 4.4.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: 1824.wip.patch


 Once we have PHOENIX-971 merged, we can increase our confidence in the server 
 by parameterizing our IT suite to run over either driver, or both. This will 
 probably require refactoring the IT suite out of phoenix-core/src/it into a 
 separate module so that module can depend on both phoenix-core and 
 phoenix-server modules.
 This is looking like it will also depend on improvements to Calcite that may 
 not make it into 1.2 release (as RC's for that release have already started).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-971) Query server

2015-04-07 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484267#comment-14484267
 ] 

Jesse Yates commented on PHOENIX-971:
-

Have you tried renaming TestQueryServerBasics.java to something like 
QueryServerBasicsIT.java? Looks like failsafe just looks for the files in the 
/it directory, whereas the surefire plugin isn't specially configured based on 
filename, and the other IT tests are specifically named with the *IT suffix.

Side note, phoenix uses the <name of class>Test.java convention, rather than 
HBase's convention of Test<name of class>.java... though it's not perfectly 
used throughout :-/ 

 Query server
 

 Key: PHOENIX-971
 URL: https://issues.apache.org/jira/browse/PHOENIX-971
 Project: Phoenix
  Issue Type: New Feature
Reporter: Andrew Purtell
Assignee: Nick Dimiduk
 Fix For: 4.4.0

 Attachments: PHOENIX-971.00.patch, PHOENIX-971.01.patch, image-2.png


 Host the JDBC driver in a query server process that can be deployed as a 
 middle tier between lighter weight clients and Phoenix+HBase. This would 
 serve a similar optional role in Phoenix deployments as the 
 [HiveServer2|https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2]
  does in Hive deploys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-39) Add sustained load tester that measures throughput

2015-03-10 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-39?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355576#comment-14355576
 ] 

Jesse Yates commented on PHOENIX-39:


+1 overall; this is massive though, so I'd want another committer who knows the 
added code more closely to also +1, i.e. [~mujtabachohan].

There are probably some nits we can clean up, but that will likely be an 
as-we-go kind of thing.

 Add sustained load tester that measures throughput
 --

 Key: PHOENIX-39
 URL: https://issues.apache.org/jira/browse/PHOENIX-39
 Project: Phoenix
  Issue Type: Improvement
Reporter: James Taylor
Assignee: Cody Marcel

 We should add a YCSB-like [1] sustained load tester that measures throughput 
 over an extended time period for a fully loaded cluster using Phoenix. 
 Ideally, we'd want to be able to dial up/down the read/write percentages, and 
 control the types of queries being run (scan, aggregate, joins, array usage, 
 etc). Another interesting dimension is simultaneous users and on top of that 
 multi-tenant views.
 This would be a big effort, but we can stage it and increase the knobs and 
 dials as we go.
 [1] http://hbase.apache.org/book/apd.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1241) Add typing to trace annotations

2015-03-06 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350494#comment-14350494
 ] 

Jesse Yates commented on PHOENIX-1241:
--

Yeah, byte[] was nice in a general sense, but most people want to store 
strings, so we definitely saw cases where deserialization of the byte[] led to 
a garbled tag/annotation :-/

Here you could cheat and just say that, for instance, if the byte[] is 
prefixed by 0x00,0x00 then the next bytes are the type info, followed by the 
value; otherwise, the byte[] is just stored as a string. Or something like that.
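
A minimal sketch of that prefix scheme; the magic prefix and type codes here 
are assumptions, not an agreed format:
{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class AnnotationCodec {
    // Assumed type code; the 0x00,0x00 magic prefix follows the idea above.
    private static final byte TYPE_LONG = 0x01;

    /** Encode a long as: 0x00 0x00 <type byte> <8 value bytes>. */
    static byte[] encodeLong(long value) {
        return ByteBuffer.allocate(2 + 1 + 8)
            .put((byte) 0x00).put((byte) 0x00)
            .put(TYPE_LONG).putLong(value)
            .array();
    }

    /** Decode: typed if the magic prefix is present, else a plain string. */
    static String decode(byte[] raw) {
        if (raw.length >= 11 && raw[0] == 0x00 && raw[1] == 0x00
                && raw[2] == TYPE_LONG) {
            return Long.toString(ByteBuffer.wrap(raw, 3, 8).getLong());
        }
        return new String(raw, StandardCharsets.UTF_8);
    }
}
{code}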

 Add typing to trace annotations
 ---

 Key: PHOENIX-1241
 URL: https://issues.apache.org/jira/browse/PHOENIX-1241
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 5.0.0, 4.1.1
Reporter: Jesse Yates
 Fix For: 5.0.0, 4.1.1


 Currently traces only support storing string valued annotations - this works 
 for known trace sources. However, phoenix will have trace annotations with 
 specific types. We can improve the storage format to know about these custom 
 types, rather than just storing strings, making the query interface more 
 powerful.
 See PHOENIX-1226 for more discussion



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-938) Use higher priority queue for index updates to prevent deadlock

2015-03-05 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349091#comment-14349091
 ] 

Jesse Yates commented on PHOENIX-938:
-

Yup, that's a bug.

Over on PHOENIX-1676 we are tracking down all the issues with the index 
priority - mind adding your comment over there?

At some point I was trying to refactor HBase Rpc Schedulers to handle generic 
queues so scheduler impls wouldn't have to actually manage their own queues, 
but alas, that started to get very convoluted and was never finished.

 Use higher priority queue for index updates to prevent deadlock
 ---

 Key: PHOENIX-938
 URL: https://issues.apache.org/jira/browse/PHOENIX-938
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.0.0, 4.1
Reporter: James Taylor
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.1

 Attachments: PHOENIX-938-master-v3.patch, phoenix-938-4.0-v0.patch, 
 phoenix-938-4.0-v0.patch, phoenix-938-master-v0.patch, 
 phoenix-938-master-v1.patch, phoenix-938-master-v2.patch, 
 phoenix-938-master-v4.patch, phoenix-938-master-v5.patch


 With our current global secondary indexing solution, a batched Put of table 
 data causes a RS to do a batch Put to other RSs. This has the potential to 
 lead to a deadlock if all RS are overloaded and unable to process the pending 
 batched Put. To prevent this, we should use a higher priority queue to submit 
 these Puts so that they're always processed before other Puts. This will 
 prevent the potential for a deadlock under high load. Note that this will 
 likely require some HBase 0.98 code changes and would not be feasible to 
 implement for HBase 0.94.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1676) Set priority of Index Updates correctly

2015-02-27 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340926#comment-14340926
 ] 

Jesse Yates commented on PHOENIX-1676:
--

bq. Could this be why PhoenixIndexRpcScheduler.dispatch is not being called for 
the index updates?
So the stack you posted is just from the client side. As I mentioned (and you 
copied above): the getClient method just goes straight into the rpc scheduler 
of the HRegionServer - which, looking again at the code, should instead be the 
rpc services of the HRegionServer (my bad!). This is what is used by the 
server-side indexing mechanism, so the indexing queues are not used when 
*index regions are on the same server*.

What I think you need to do is verify that it is being processed, by writing a 
test that runs *2 regionservers*, ensures that the primary and index tables 
are on different servers, and then runs the update and checks that it goes 
through the dispatch (maybe by having a signaling mechanism - latch? static 
variable? write to another table? - in a custom subclass that you look for 
during the test).

bq. I also tried creating a simple htable, but 
IndexQosRpcController.setPriority is not getting called for this table. 
That is not quite what you are doing in that test. What I was getting at 
offline was that you should just use the standard HBase APIs, rather than 
dealing with *any phoenix components*, to ensure that nothing else is coming 
into play. So just use a MiniHBaseCluster (via a HBaseTestingUtility), set the 
expected properties in the configs, create the tables using an HBaseAdmin 
(from Util#getHBaseAdmin) and then write to the table. You should see the 
updates going through the expected paths (or there is something wrong).
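
Roughly, a sketch of that plain-HBase repro; the controller factory name comes 
from this discussion, while the config-key wiring and table details are 
illustrative:
{code}
// Sketch only: standard 0.98-era HBase APIs, no Phoenix components.
HBaseTestingUtility util = new HBaseTestingUtility();
// Point clients at the controller factory under test (class assumed on the
// classpath).
util.getConfiguration().set("hbase.rpc.controllerfactory.class",
    IndexQosRpcControllerFactory.class.getName());
util.startMiniCluster(2); // 2 RSes so primary and index regions can be split up
try {
  HBaseAdmin admin = util.getHBaseAdmin();
  HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("plain_table"));
  desc.addFamily(new HColumnDescriptor("f"));
  admin.createTable(desc);

  HTable table = new HTable(util.getConfiguration(), "plain_table");
  Put put = new Put(Bytes.toBytes("row"));
  put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
  table.put(put); // setPriority should fire on the configured controller
  table.close();
} finally {
  util.shutdownMiniCluster();
}
{code}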

You have already found a small bug which will need to be fixed as part of this 
patch - specifically, what you said offline:
 bq. In IndexQosCompat.rpcControllerExists(), should the first check be if 
(!checked) ?
But that is a small aside to the larger test you should be writing.

 Set priority of Index Updates correctly 
 

 Key: PHOENIX-1676
 URL: https://issues.apache.org/jira/browse/PHOENIX-1676
 Project: Phoenix
  Issue Type: Bug
Reporter: Thomas D'Silva
Assignee: Thomas D'Silva

 I spoke to Jesse offline about this. 
 The priority of index updates isn't being set correctly because of the use of 
 CoprocessorHConnection (which all coprocessors use if they create an HTable 
 via the CPEnvironment).
 Specifically the flow happens like this: the CoprocessorHTableFactory 
 attempts to set the connection qos factory, but it is ignored because the 
 CoprocessorHConnection is used (instead of a standard HConnection) and the 
 #getClient method just goes straight into the rpc scheduler of the 
 HRegionServer, if it's on the same server. This allows the region to be 
 directly accessed, but without actually going over the loopback or 
 serializing any information.
 However, this means it ignores the configured rpccontroller factory and the 
 override setting of the rpc priority. We probably shouldn't be runtime 
 changing the configuration - instead we should probably be using some other 
 serialized information.
 The primary fix would seems to be that the regionserver needs to be 
 configured with the IndexQosRpcControllerFactory and then use a static map 
 (or cache of the index metadata) to set the qos for the index servers. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2014-12-30 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14261730#comment-14261730
 ] 

Jesse Yates commented on PHOENIX-1567:
--

bq. Is it good practice to publish über jars to Maven? I thought it was not 
good practice but could be mistaken.

This was my first thought as well.

bq. Perhaps a documentation improvement could help clarify the roles of the 
client and server jar in the distribution?

And this was my second thought. Maybe a doc page on 'using phoenix' with 
explicit sections on 'from code' and and 'from tarball'.

If you pull in phoenix-core (and possibly -flume, etc) then maven does all the 
hard work of finding all the transitive dependencies and resolving them 
properly. Including the -client and -server jars in maven would actually make 
life very painful - maven would have no way of knowing about the uber-packaged 
dependencies, almost assuredly leading to weird classpath issues for whoever 
is using phoenix-server or -client.

 Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
 

 Key: PHOENIX-1567
 URL: https://issues.apache.org/jira/browse/PHOENIX-1567
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Jeffrey Zhong

 Phoenix doesn't publish Phoenix Client & Server jars into the Maven 
 repository. This makes things quite hard for downstream projects/applications 
 that use maven to resolve dependencies.
 I tried to modify the pom.xml under phoenix-assembly, but it shows the 
 following. 
 {noformat}
 [INFO] Installing 
 /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
  
 to 
 /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
 {noformat}
 Basically the jar published to maven repo will become  
 phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
 phoenix-assembly-4.3.0-SNAPSHOT-server.jar
 The artifact id phoenix-assembly has to be the prefix of the names of jars.
 Therefore, the possible solutions are:
 1) rename the current client & server jars to phoenix-assembly-client/server.jar 
 to match the jars published to the maven repo.
 2) rename phoenix-assembly to something more meaningful and rename our client 
 & server jars accordingly
 3) split phoenix-assembly and move the corresponding artifacts into 
 phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
 tar ball files.
 [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
 Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1233) Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, preempting StackOverflowError

2014-12-19 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1233:
-
Attachment: phoenix-1233-v1.patch

I think we still need something like what I'm attaching. You want to have the 
slf4j-log4j jar in the built client tarball, but just not unpacked in the 
client jar.

I haven't tested to see if this is exactly what you need, but it's along the 
right path.
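
The shape of the change is roughly this kind of assembly-descriptor fragment 
(element names follow the maven-assembly-plugin schema; the actual layout of 
the phoenix-assembly descriptors is an assumption):
{code}
<!-- Hypothetical fragment: keep slf4j-log4j12 out of the unpacked client
     jar, but still ship it as a separate jar in the tarball's lib/ dir. -->
<dependencySets>
  <dependencySet>
    <unpack>true</unpack>
    <excludes>
      <exclude>org.slf4j:slf4j-log4j12</exclude>
    </excludes>
  </dependencySet>
  <dependencySet>
    <unpack>false</unpack>
    <outputDirectory>lib</outputDirectory>
    <includes>
      <include>org.slf4j:slf4j-log4j12</include>
    </includes>
  </dependencySet>
</dependencySets>
{code}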

 Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, 
 preempting StackOverflowError
 -

 Key: PHOENIX-1233
 URL: https://issues.apache.org/jira/browse/PHOENIX-1233
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
Reporter: Brian Johnson
Assignee: Ted Yu
 Attachments: 1233-v1.txt, phoenix-1233-v1.patch


 When adding the phoenix jar to the Storm (https://storm.incubator.apache.org) 
 classpath I get the following message and then Storm fails to start:
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/Users/bjohnson/Documents/workspace/korrelate/O2O/jruby/target/dependency/storm/default/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/Users/bjohnson/Documents/workspace/korrelate/O2O/jruby/target/dependency/topology/default/phoenix-4.1.0-client-hadoop2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type 
 [ch.qos.logback.classic.selector.DefaultContextSelector]
 SLF4J: Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class 
 path, preempting StackOverflowError. 
 SLF4J: See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more 
 details.
 NameError: cannot initialize Java class backtype.storm.LocalCluster



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1233) Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, preempting StackOverflowError

2014-12-19 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14254482#comment-14254482
 ] 

Jesse Yates commented on PHOENIX-1233:
--

That patch doesn't preclude any of the other slf4j-log4j dependencies (which 
are myriad - take a look at the dependency hierarchy). You still end up with 
org/slf4j/impl/StaticLoggerBinder.class in the client jar, which is still from 
the slf4j-api jar.

We've gotten the unpacked jar impls removed from the -client* jars by removing 
the explicit includes statements in the phoenix-assembly component descriptors. 
We still want the jar in the /lib directory of the tar, so it can be a 
standalone client package.

Patch coming momentarily.

 Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, 
 preempting StackOverflowError
 -

 Key: PHOENIX-1233
 URL: https://issues.apache.org/jira/browse/PHOENIX-1233
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
Reporter: Brian Johnson
Assignee: Ted Yu
 Attachments: 1233-v1.txt, phoenix-1233-v1.patch


 When adding the phoenix jar to the Storm (https://storm.incubator.apache.org) 
 classpath I get the following message and then Storm fails to start:
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/Users/bjohnson/Documents/workspace/korrelate/O2O/jruby/target/dependency/storm/default/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/Users/bjohnson/Documents/workspace/korrelate/O2O/jruby/target/dependency/topology/default/phoenix-4.1.0-client-hadoop2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type 
 [ch.qos.logback.classic.selector.DefaultContextSelector]
 SLF4J: Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class 
 path, preempting StackOverflowError. 
 SLF4J: See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more 
 details.
 NameError: cannot initialize Java class backtype.storm.LocalCluster



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1233) Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, preempting StackOverflowError

2014-12-19 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1233:
-
Attachment: phoenix-1233-master-v1.patch

Attaching patch that removes slf4j-log4j12 from client jars, but keeps it in 
the standalone client tarball.

 Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, 
 preempting StackOverflowError
 -

 Key: PHOENIX-1233
 URL: https://issues.apache.org/jira/browse/PHOENIX-1233
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
Reporter: Brian Johnson
Assignee: Ted Yu
 Attachments: 1233-v1.txt, phoenix-1233-master-v1.patch, 
 phoenix-1233-v1.patch


 When adding the phoenix jar to the Storm (https://storm.incubator.apache.org) 
 classpath I get the following message and then Storm fails to start:
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/Users/bjohnson/Documents/workspace/korrelate/O2O/jruby/target/dependency/storm/default/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/Users/bjohnson/Documents/workspace/korrelate/O2O/jruby/target/dependency/topology/default/phoenix-4.1.0-client-hadoop2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type 
 [ch.qos.logback.classic.selector.DefaultContextSelector]
 SLF4J: Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class 
 path, preempting StackOverflowError. 
 SLF4J: See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more 
 details.
 NameError: cannot initialize Java class backtype.storm.LocalCluster



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1115) Provide a SQL command to turn tracing on/off

2014-12-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244688#comment-14244688
 ] 

Jesse Yates commented on PHOENIX-1115:
--

So that was just using the standard HTrace trace allocation mechanism (or 
allowing the client to specify the id via properties). Since the trace id is 
part of the key, you just add a where clause for that id and get all the trace 
info back. Or filter as you see fit on various properties.
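
For example, something along these lines (the table name assumes the default 
for phoenix.trace.statsTableName, and the trace id is a placeholder):
{code}
-- Illustrative; assumes the default trace table name and schema.
SELECT * FROM SYSTEM."TRACING_STATS" WHERE trace_id = 123456789;
{code}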

 Provide a SQL command to turn tracing on/off
 

 Key: PHOENIX-1115
 URL: https://issues.apache.org/jira/browse/PHOENIX-1115
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 5.0.0, 4.1
Reporter: James Taylor
Assignee: Jeffrey Zhong
 Attachments: Screen Shot 2014-11-21 at 3.41.41 PM.png


 Provide a SQL command that turns tracing on and off. For example, Oracle has 
 this:
 {code}
 ALTER SESSION SET sql_trace = true;
 ALTER SESSION SET sql_trace = false;
 {code}
 We might consider allowing the sampling rate to be set as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1470) KEEP_DELETED_CELLS interface changed in HBase 0.98.8

2014-11-20 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219520#comment-14219520
 ] 

Jesse Yates commented on PHOENIX-1470:
--

Yes, the 0.98.8 release has already been made. The patch above makes it 
backwards compatible for now, but going forward we need to be careful in 
phoenix about using KEEP_DELETED_CELLS. Maybe we need to add a phoenix RC 
checklist task to compile against both the newest version of HBase (e.g. the 
default build) as well as the older versions (maybe just one or two back).

 KEEP_DELETED_CELLS interface changed in HBase 0.98.8
 

 Key: PHOENIX-1470
 URL: https://issues.apache.org/jira/browse/PHOENIX-1470
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Jesse Yates
 Attachments: keep-deleted-cells-v1.patch, keep-deleted-cells.patch


 HBASE-12363 changed the contract on HColumnDescriptor#getKeepDeletedCells to 
 no longer return true/false, but instead returns an enum of KeepDeletedCells.
 This seems to be fine at runtime (haven't checked) but it certainly breaks 
 compilation against 0.98.8 and I don't see an obvious way of fixing it that 
 is backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1470) KEEP_DELETED_CELLS interface changed in HBase 0.98.8

2014-11-19 Thread Jesse Yates (JIRA)
Jesse Yates created PHOENIX-1470:


 Summary: KEEP_DELETED_CELLS interface changed in HBase 0.98.8
 Key: PHOENIX-1470
 URL: https://issues.apache.org/jira/browse/PHOENIX-1470
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Jesse Yates


HBASE-12363 changed the contract on HColumnDescriptor#getKeepDeletedCells to no 
longer return true/false, but instead returns an enum of KeepDeletedCells.

This seems to be fine at runtime (haven't checked) but it certainly breaks 
compilation against 0.98.8 and I don't see an obvious way of fixing it that is 
backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1470) KEEP_DELETED_CELLS interface changed in HBase 0.98.8

2014-11-19 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1470:
-
Attachment: keep-deleted-cells.patch

Attaching a basic patch that at least allows compilation against 0.98.8, but 
doesn't work against older versions.

 KEEP_DELETED_CELLS interface changed in HBase 0.98.8
 

 Key: PHOENIX-1470
 URL: https://issues.apache.org/jira/browse/PHOENIX-1470
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Jesse Yates
 Attachments: keep-deleted-cells.patch


 HBASE-12363 changed the contract on HColumnDescriptor#getKeepDeletedCells to 
 no longer return true/false, but instead returns an enum of KeepDeletedCells.
 This seems to be fine at runtime (haven't checked) but it certainly breaks 
 compilation against 0.98.8 and I don't see an obvious way of fixing it that 
 is backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1470) KEEP_DELETED_CELLS interface changed in HBase 0.98.8

2014-11-19 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1470:
-
Attachment: keep-deleted-cells-v1.patch

Updated slightly with something that at least tests backwards compatibility 
correctly (and is a little bit cleaner), with [~lhofhansl]'s help.

 KEEP_DELETED_CELLS interface changed in HBase 0.98.8
 

 Key: PHOENIX-1470
 URL: https://issues.apache.org/jira/browse/PHOENIX-1470
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Jesse Yates
 Attachments: keep-deleted-cells-v1.patch, keep-deleted-cells.patch


 HBASE-12363 changed the contract on HColumnDescriptor#getKeepDeletedCells to 
 no longer return true/false, but instead returns an enum of KeepDeletedCells.
 This seems to be fine at runtime (haven't checked) but it certainly breaks 
 compilation against 0.98.8 and I don't see an obvious way of fixing it that 
 is backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1470) KEEP_DELETED_CELLS interface changed in HBase 0.98.8

2014-11-19 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14218837#comment-14218837
 ] 

Jesse Yates commented on PHOENIX-1470:
--

I think we have to keep carrying on at this point. Might need to make a wrapper 
in Phoenix to manage between the two cases if it actually comes up at runtime, 
but I don't think we actually use the value of KeepDeletedCells anywhere 
that I can find.
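
A sketch of what such a wrapper could look like - reflection-based, so it 
compiles against either HBase line; the helper itself is hypothetical:
{code}
// Hypothetical compat helper: getKeepDeletedCells() returns boolean before
// HBase 0.98.8 and the KeepDeletedCells enum (FALSE, TRUE, TTL) afterwards.
static boolean keepDeletedCells(HColumnDescriptor column) {
    try {
        Object value = HColumnDescriptor.class
            .getMethod("getKeepDeletedCells").invoke(column);
        if (value instanceof Boolean) {
            return (Boolean) value; // pre-0.98.8: plain boolean
        }
        return !"FALSE".equals(((Enum<?>) value).name()); // 0.98.8+ enum
    } catch (Exception e) {
        throw new RuntimeException("Unable to read KEEP_DELETED_CELLS", e);
    }
}
{code}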

 KEEP_DELETED_CELLS interface changed in HBase 0.98.8
 

 Key: PHOENIX-1470
 URL: https://issues.apache.org/jira/browse/PHOENIX-1470
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Jesse Yates
 Attachments: keep-deleted-cells-v1.patch, keep-deleted-cells.patch


 HBASE-12363 changed the contract on HColumnDescriptor#getKeepDeletedCells to 
 no longer return true/false, but instead returns an enum of KeepDeletedCells.
 This seems to be fine at runtime (haven't checked) but it certainly breaks 
 compilation against 0.98.8 and I don't see an obvious way of fixing it that 
 is backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2014-11-17 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215016#comment-14215016
 ] 

Jesse Yates commented on PHOENIX-1457:
--

Seems very feasible. The only thing I would worry about is that HBase uses the 
PayloadCarryingRpcController the same way as everything else, so it can be used 
to get the RPC priority. Then you would just need to create the logic to handle 
those requests in a separate queue, just like with the index updates.
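
As a rough sketch of that wiring (RpcControllerFactory and 
PayloadCarryingRpcController are real 0.98 types; the subclass name, priority 
value, and table check are assumptions):
{code}
// Hypothetical factory: bump priority for calls against SYSTEM.CATALOG so
// MetaDataService endpoint invocations land in a dedicated high-qos queue.
public class MetadataPriorityControllerFactory extends RpcControllerFactory {
  private static final int METADATA_QOS = 300; // assumed, above normal priority

  public MetadataPriorityControllerFactory(Configuration conf) {
    super(conf);
  }

  @Override
  public PayloadCarryingRpcController newController() {
    return new PayloadCarryingRpcController() {
      @Override
      public void setPriority(TableName table) {
        if ("SYSTEM.CATALOG".equals(table.getNameAsString())) {
          setPriority(METADATA_QOS);
        } else {
          super.setPriority(table);
        }
      }
    };
  }
}
{code}
The factory would then be set via hbase.rpc.controllerfactory.class, with a 
matching scheduler routing the high-priority calls, just like the index 
updates.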

 Use high priority queue for metadata endpoint calls
 ---

 Key: PHOENIX-1457
 URL: https://issues.apache.org/jira/browse/PHOENIX-1457
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor

 If the RS hosting the system table gets swamped, then we'd be bottlenecked 
 waiting for the response back before running a query when we check if the 
 metadata is in sync. We should run endpoint coprocessor calls for 
 MetaDataService at a high priority to avoid that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1331) DropIndexDuringUpsertIT.testWriteFailureDropIndex times out

2014-11-05 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14199015#comment-14199015
 ] 

Jesse Yates edited comment on PHOENIX-1331 at 11/5/14 8:33 PM:
---

Well, is it failing on the jenkins builds? Also is this on master, 4.0? Maybe a 
stacktrace of the hanging test so it can be analyzed?


was (Author: jesse_yates):
Well, is it failing on the jenkins builds?

 DropIndexDuringUpsertIT.testWriteFailureDropIndex times out
 ---

 Key: PHOENIX-1331
 URL: https://issues.apache.org/jira/browse/PHOENIX-1331
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor

 The DropIndexDuringUpsertIT.testWriteFailureDropIndex() test consistently 
 fails by timing out on my Mac laptop and Mac desktop with the following 
 exception:
 {code}
 testWriteFailureDropIndex(org.apache.phoenix.end2end.index.DropIndexDuringUpsertIT)
   Time elapsed: 341.902 sec   ERROR!
 java.lang.Exception: test timed out after 300000 milliseconds
   at sun.misc.Unsafe.park(Native Method)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:969)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1281)
   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
   at 
 org.apache.phoenix.end2end.index.DropIndexDuringUpsertIT.testWriteFailureDropIndex(DropIndexDuringUpsertIT.java:150)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1381) NPE in CellUtil.matchingFamily() for IndexedKeyValue

2014-10-27 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185583#comment-14185583
 ] 

Jesse Yates commented on PHOENIX-1381:
--

My only concern is if this will work on older versions of HBase as well. It's 
changing what getFamily returns, which may or may not affect the 
implementation. If you tested it on an older version (pre 0.98.4) and it still 
works, then we are good. Phoenix attempts to support all the HBase versions 
within a major version, e.g. the entire 0.98.x series.

 NPE in CellUtil.matchingFamily() for IndexedKeyValue
 

 Key: PHOENIX-1381
 URL: https://issues.apache.org/jira/browse/PHOENIX-1381
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
 Environment: hbase 0.98.4+
Reporter: Shunsuke Nakamura
Assignee: Jeffrey Zhong
Priority: Critical
 Fix For: 5.0.0, 4.2

 Attachments: PHOENIX-1381.patch


 NPE in replayRecoveredEdits of phoenix table with local index:
 {code}
 2014-10-16 10:54:29,871 ERROR [RS_OPEN_REGION-XX.XX.XX.XX:53489-1] 
 handler.OpenRegionHandler: Failed open of 
 region=USERTABLE,,1413366337840.f170ee16e795c42b607b962a557a8c2c., starting 
 to roll back the global memstore size.
 java.lang.NullPointerException
 at org.apache.hadoop.hbase.util.Bytes.toShort(Bytes.java:845)
 at org.apache.hadoop.hbase.util.Bytes.toShort(Bytes.java:832)
 at org.apache.hadoop.hbase.KeyValue.getRowLength(KeyValue.java:1303)
 at 
 org.apache.hadoop.hbase.KeyValue.getFamilyOffset(KeyValue.java:1319)
 at org.apache.hadoop.hbase.CellUtil.matchingFamily(CellUtil.java:329)
 at org.apache.hadoop.hbase.CellUtil.matchingColumn(CellUtil.java:344)
 at 
 org.apache.hadoop.hbase.regionserver.wal.WALEdit.getCompaction(WALEdit.java:285)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:3271)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:3135)
 {code}
 IndexedKeyValue is an incomplete Cell implementation, so some CellUtil 
 methods such as matchingFamily don't work well for the IndexedKeyValue. 
 With HBase 0.98.4+, this issue can occur because the kv.matchingColumn 
 matching in WALEdit.getCompaction was replaced with CellUtil's at 
 HBASE-11475. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1286) Remove hadoop2 compat modules

2014-10-27 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1286:
-
Fix Version/s: 4.2
   5.0.0

 Remove hadoop2 compat modules
 -

 Key: PHOENIX-1286
 URL: https://issues.apache.org/jira/browse/PHOENIX-1286
 Project: Phoenix
  Issue Type: Bug
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.2

 Attachments: phoenix-1286-master-v0.patch, 
 phoenix-1286-master-v1.patch


 Now that PHOENIX-103 is committed, we can actually remove the 
 compatibility modules entirely and all the reflection they use to get the 
 right classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1286) Remove hadoop2 compat modules

2014-10-27 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates resolved PHOENIX-1286.
--
Resolution: Fixed

Committed to 4.2 and main. Thanks for the review [~jamestaylor]!

 Remove hadoop2 compat modules
 -

 Key: PHOENIX-1286
 URL: https://issues.apache.org/jira/browse/PHOENIX-1286
 Project: Phoenix
  Issue Type: Bug
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.2

 Attachments: phoenix-1286-master-v0.patch, 
 phoenix-1286-master-v1.patch


 Now that PHOENIX-103 is committed, we can actually remove the 
 compatibility modules entirely and all the reflection they use to get the 
 right classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1286) Remove hadoop2 compat modules

2014-10-22 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14180260#comment-14180260
 ] 

Jesse Yates commented on PHOENIX-1286:
--

Nope, because it never got reviewed :-/ It's a little bit behind now though, so 
it needs to be rebased before commit.

 Remove hadoop2 compat modules
 -

 Key: PHOENIX-1286
 URL: https://issues.apache.org/jira/browse/PHOENIX-1286
 Project: Phoenix
  Issue Type: Bug
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: phoenix-1286-master-v0.patch


 Now that PHOENIX-103 is committed, we can actually remove the 
 compatibility modules entirely and all the reflection they use to get the 
 right classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1329) Correctly support varbinary arrays

2014-10-07 Thread Jesse Yates (JIRA)
Jesse Yates created PHOENIX-1329:


 Summary: Correctly support varbinary arrays
 Key: PHOENIX-1329
 URL: https://issues.apache.org/jira/browse/PHOENIX-1329
 Project: Phoenix
  Issue Type: Bug
Reporter: Jesse Yates
 Fix For: 5.0.0, 4.3


Arrays of binary data can contain 0x00, which Phoenix uses as the field 
separator. This leads phoenix to return arrays incorrectly - shortening them 
prematurely.
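
A minimal illustration of the ambiguity (not Phoenix's actual encoder):
{code}
// With 0x00 as the element separator, an element that itself contains 0x00
// cannot be told apart from two shorter elements.
byte[] encoded = { 0x01, 0x00, 0x02, 0x00, 0x03 };
// Reading left to right this could be [ {0x01}, {0x02}, {0x03} ]
// or [ {0x01, 0x00, 0x02}, {0x03} ] - the separator byte alone cannot
// distinguish them; the encoding needs explicit lengths or escaping.
{code}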



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1329) Correctly support varbinary arrays

2014-10-07 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1329:
-
Attachment: phoenix-1329-bug.patch

Attaching patch to _demonstrate_ the issue. It's going to take an encoding 
change to actually do this correctly.

 Correctly support varbinary arrays
 --

 Key: PHOENIX-1329
 URL: https://issues.apache.org/jira/browse/PHOENIX-1329
 Project: Phoenix
  Issue Type: Bug
Reporter: Jesse Yates
 Fix For: 5.0.0, 4.3

 Attachments: phoenix-1329-bug.patch


 Arrays of binary data can contain 0x00, which Phoenix uses as the 
 field separator. This leads phoenix to return arrays incorrectly - shortening 
 them prematurely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1107) Support mutable indexes over replication

2014-10-06 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14160597#comment-14160597
 ] 

Jesse Yates commented on PHOENIX-1107:
--

Not sure if I kept it around... will have to dig around my git repo.

 Support mutable indexes over replication
 

 Key: PHOENIX-1107
 URL: https://issues.apache.org/jira/browse/PHOENIX-1107
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 3.1, 4.1
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: PHOENIX-1107-3.0.V1.patch, phoenix-1107-3.0.v0, 
 phoenix-1107-master-passes.patch


 Mutable indexes don't support usage with replication. For starters, the 
 replication WAL Listener checks the family of the edits, which can throw an 
 NPE for the IndexedKeyValue 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1318) Tables with Put coprocessor cannot use HRegion.mutateRowsWithLocks()

2014-10-06 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14160607#comment-14160607
 ] 

Jesse Yates commented on PHOENIX-1318:
--

Yeah, that sounds right. There are a couple of places where we don't use the 
same path - including appends/increments and mutateRowWithLocks.

 Tables with Put coprocessor cannot use HRegion.mutateRowsWithLocks()
 

 Key: PHOENIX-1318
 URL: https://issues.apache.org/jira/browse/PHOENIX-1318
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.1, 4.1
Reporter: James Taylor

 Ran into this originally with the SYSTEM.CATALOG table, and we hacked around 
 it by not putting the Put coprocessors on it (which means it can't have 
 mutable indexes). Ran into it again now with the SYSTEM.STATS table (FYI, 
 [~ramkrishna]). If you remember why, [~jesse_yates] would you mind adding a 
 comment here? We should get to the bottom of the issue and get a fix in if 
 possible (maybe requires an HBase change?).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1107) Support mutable indexes over replication

2014-10-06 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14160609#comment-14160609
 ] 

Jesse Yates commented on PHOENIX-1107:
--

hehe, so it is. Future-proofing myself :) I'll commit today, along with the 
rest of the patches I have in queue.

 Support mutable indexes over replication
 

 Key: PHOENIX-1107
 URL: https://issues.apache.org/jira/browse/PHOENIX-1107
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 3.1, 4.1
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: PHOENIX-1107-3.0.V1.patch, phoenix-1107-3.0.v0, 
 phoenix-1107-master-passes.patch


 Mutable indexes don't support usage with replication. For starters, the 
 replication WAL Listener checks the family of the edits, which can throw an 
 NPE for the IndexedKeyValue 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1289) Drop index during upsert may abort RS

2014-10-06 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates resolved PHOENIX-1289.
--
   Resolution: Fixed
Fix Version/s: 4.2
   5.0.0

Committed to master and 4.0. Thanks [~daniel.M]!

 Drop index during upsert may abort RS
 -

 Key: PHOENIX-1289
 URL: https://issues.apache.org/jira/browse/PHOENIX-1289
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: daniel meng
Assignee: daniel meng
 Fix For: 5.0.0, 4.2

 Attachments: DropIndexDuringUpsertIT.java, PHOENIX-1289.PATCH, 
 phoenix-1289-v1.patch


 The execution path below will abort the RS:
 1. Client A writes to table T with mutation m, and T has an index named IDX.
 2. m arrives at the RS, but processing has not started yet.
 3. Client B drops index IDX.
 4. The RS tries to process m, and we get m' for IDX.
 5. The RS tries to write m' but fails, as HBase table IDX does not exist.
 6. The RS tries to disable IDX but fails, as its metadata has been deleted.
 7. KillServerOnFailurePolicy is triggered and the server aborts.
 8. Recovery will fail for the same reason.
 An IT is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1289) Drop index during upsert may abort RS

2014-10-06 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14160681#comment-14160681
 ] 

Jesse Yates commented on PHOENIX-1289:
--

hmm, looks like a flaky test issue

 Drop index during upsert may abort RS
 -

 Key: PHOENIX-1289
 URL: https://issues.apache.org/jira/browse/PHOENIX-1289
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: daniel meng
Assignee: daniel meng
 Fix For: 5.0.0, 4.2

 Attachments: DropIndexDuringUpsertIT.java, PHOENIX-1289.PATCH, 
 phoenix-1289-v1.patch


 The execution path below will abort the RS:
 1. Client A writes to table T with mutation m, and T has an index named IDX.
 2. m arrives at the RS, but processing has not started yet.
 3. Client B drops index IDX.
 4. The RS tries to process m, and we get m' for IDX.
 5. The RS tries to write m' but fails, as HBase table IDX does not exist.
 6. The RS tries to disable IDX but fails, as its metadata has been deleted.
 7. KillServerOnFailurePolicy is triggered and the server aborts.
 8. Recovery will fail for the same reason.
 An IT is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1107) Support mutable indexes over replication

2014-10-06 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14160706#comment-14160706
 ] 

Jesse Yates commented on PHOENIX-1107:
--

Committed to 4.2 and master, leaving it open for 3.x until we decide on a 
solution there.

 Support mutable indexes over replication
 

 Key: PHOENIX-1107
 URL: https://issues.apache.org/jira/browse/PHOENIX-1107
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 3.1, 4.1
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: PHOENIX-1107-3.0.V1.patch, phoenix-1107-3.0.v0, 
 phoenix-1107-master-passes.patch


 Mutable indexes don't support usage with replication. For starters, the 
 replication WAL Listener checks the family of the edits, which can throw an 
 NPE for the IndexedKeyValue 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1107) Support mutable indexes over replication

2014-10-06 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1107:
-
Fix Version/s: 4.2
   5.0.0

 Support mutable indexes over replication
 

 Key: PHOENIX-1107
 URL: https://issues.apache.org/jira/browse/PHOENIX-1107
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 3.1, 4.1
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.2

 Attachments: PHOENIX-1107-3.0.V1.patch, phoenix-1107-3.0.v0, 
 phoenix-1107-master-passes.patch


 Mutable indexes don't support usage with replication. For starters, the 
 replication WAL Listener checks the family of the edits, which can throw a 
 NPE for the IndexedKeyValue 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1289) Drop index during upsert may abort RS

2014-09-30 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14150896#comment-14150896
 ] 

Jesse Yates edited comment on PHOENIX-1289 at 9/30/14 11:52 PM:


[~jesse_yates] It seems strange to me; here is what I understand after some 
debugging: there is only one index, S.I, in the unit test, so the first 
parameter in 
 {code}
handleFailure(Multimap<HTableInterfaceReference, Mutation> attempted, Exception cause)
{code}
should have only one entry, for S.I, and then there is no loop. I cannot 
understand why indexTableNames.clear(); is necessary; it always works correctly 
on my side.



was (Author: daniel.m):
[~jesse_yates] It seems strange to me; here is what I understand after some 
debugging: there is only one index, S.I, in the unit test, so the first 
parameter in 
 ```java
handleFailure(Multimap<HTableInterfaceReference, Mutation> attempted, Exception cause)
```
should have only one entry, for S.I, and then there is no loop. I cannot 
understand why indexTableNames.clear(); is necessary; it always works correctly 
on my side.


 Drop index during upsert may abort RS
 -

 Key: PHOENIX-1289
 URL: https://issues.apache.org/jira/browse/PHOENIX-1289
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: daniel meng
 Attachments: DropIndexDuringUpsertIT.java, PHOENIX-1289.PATCH


 The following execution path will abort the RS:
 1. Client A writes to table T with mutation m; T has an index named IDX.
 2. m arrives at the RS, but processing has not started yet.
 3. Client B drops index IDX.
 4. The RS processes m and derives the index mutation m' for IDX.
 5. The RS tries to write m' but fails because the HBase table IDX no longer exists.
 6. The RS tries to disable IDX but fails because its metadata has already been deleted.
 7. KillServerOnFailurePolicy is triggered and the server aborts.
 8. Recovery fails for the same reason.
 An IT is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1305) create index throws NPE when dataTable has specified default column family

2014-09-30 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154066#comment-14154066
 ] 

Jesse Yates edited comment on PHOENIX-1305 at 10/1/14 12:11 AM:


This seems more like a band-aid than a fix for the real issue (though it does 
fix it), which is: why aren't the properties set correctly? They are going to be 
null unless fam_properties is set, as seen in the grammar (ln 390-397):
{code}
create_index_node returns [CreateIndexStatement ret]
:   CREATE l=LOCAL? INDEX (IF NOT ex=EXISTS)? i=index_name ON 
t=from_table_name
(LPAREN pk=index_pk_constraint RPAREN)
(INCLUDE (LPAREN icrefs=column_names RPAREN))?
(p=fam_properties)?
(SPLIT ON v=value_expression_list)?
{ret = factory.createIndex(i, factory.namedTable(null,t), pk, icrefs, 
v, p, ex!=null, l==null ? IndexType.getDefault() : IndexType.LOCAL, 
getBindCount()); }
;
{code}

As generated in the PhoenixSQLParser (ln 1297-1305):
{code}
case 1 :
// PhoenixSQL.g:394:10: p= fam_properties
{
pushFollow(FOLLOW_fam_properties_in_create_index_node1555);
p=fam_properties();
state._fsp--;
if (state.failed) return ret;
}
break;
{code}

What I can't answer is why we have this logic at all (though I didn't dive 
into the parser).

Maybe [~jamestaylor] can shed some light here?


was (Author: jesse_yates):
This seems more like a band-aid than a fix for the real issue (though it does 
fix it), which is: why aren't the properties set correctly? They are going to be 
null unless fam_properties is set, as seen in the grammar (ln 390-397):
{code}
create_index_node returns [CreateIndexStatement ret]
:   CREATE l=LOCAL? INDEX (IF NOT ex=EXISTS)? i=index_name ON 
t=from_table_name
(LPAREN pk=index_pk_constraint RPAREN)
(INCLUDE (LPAREN icrefs=column_names RPAREN))?
(p=fam_properties)?
(SPLIT ON v=value_expression_list)?
{ret = factory.createIndex(i, factory.namedTable(null,t), pk, icrefs, 
v, p, ex!=null, l==null ? IndexType.getDefault() : IndexType.LOCAL, 
getBindCount()); }
;
{code}

As generated in the PhoenixSQLParser (ln 1297-1305):
{code}
case 1 :
// PhoenixSQL.g:394:10: p= fam_properties
{
pushFollow(FOLLOW_fam_properties_in_create_index_node1555);
p=fam_properties();
state._fsp--;
if (state.failed) return ret;
}
break;
{code}

What I can't answer is why we have this logic at all (though I didn't dive 
into the parser).

Maybe [~jamestaylor] can shed some light here?

 create index throws NPE when dataTable has specified default column family
 --

 Key: PHOENIX-1305
 URL: https://issues.apache.org/jira/browse/PHOENIX-1305
 Project: Phoenix
  Issue Type: Bug
Reporter: daniel meng
 Attachments: PHOENIX-1305.patch


 {code:sql}
 create table S.T (k varchar not null primary key, v1 varchar, v2 varchar) 
 DEFAULT_COLUMN_FAMILY='A'
 create index I on S.T (v1) include (v2)
 {code}
 {code}
 java.lang.NullPointerException
at 
 org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:704)
at 
 org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:76)
at 
 org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:252)
at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
at 
 org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:242)
at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:966)
at 
 org.apache.phoenix.end2end.index.CreateIndexIT.testWriteFailureDropIndex(CreateIndexIT.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
 

[jira] [Commented] (PHOENIX-1289) Drop index during upsert may abort RS

2014-09-25 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14148546#comment-14148546
 ] 

Jesse Yates commented on PHOENIX-1289:
--

Yup, that seems like a legitimate bug. It should be pretty easy, though, to 
check in the initial failure policy whether the index has been removed.

 Drop index during upsert may abort RS
 -

 Key: PHOENIX-1289
 URL: https://issues.apache.org/jira/browse/PHOENIX-1289
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: daniel meng
 Attachments: DropIndexDuringUpsertIT.java


 The following execution path will abort the RS:
 1. Client A writes to table T with mutation m; T has an index named IDX.
 2. m arrives at the RS, but processing has not started yet.
 3. Client B drops index IDX.
 4. The RS processes m and derives the index mutation m' for IDX.
 5. The RS tries to write m' but fails because the HBase table IDX no longer exists.
 6. The RS tries to disable IDX but fails because its metadata has already been deleted.
 7. KillServerOnFailurePolicy is triggered and the server aborts.
 8. Recovery fails for the same reason.
 An IT is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1289) Drop index during upsert may abort RS

2014-09-25 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14148550#comment-14148550
 ] 

Jesse Yates commented on PHOENIX-1289:
--

Any interest in taking a shot at the fix, [~daniel.M]? It should be just a 
couple of lines in PhoenixIndexFailurePolicy, where we loop through the index 
names.
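
A minimal sketch of the guard being discussed, assuming the policy can get at an HBaseAdmin; the class and method names below are illustrative, not the actual patch:
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Sketch only: check whether the index table still exists before disabling it
// or escalating to KillServerOnFailurePolicy.
public class DroppedIndexGuard {
  public static boolean shouldSkipIndex(HBaseAdmin admin, String indexTable)
      throws IOException {
    // If the index was dropped while the mutation was in flight, there is
    // nothing to disable and no reason to abort the region server.
    return !admin.tableExists(indexTable);
  }
}
{code}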

 Drop index during upsert may abort RS
 -

 Key: PHOENIX-1289
 URL: https://issues.apache.org/jira/browse/PHOENIX-1289
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: daniel meng
Assignee: Jesse Yates
 Attachments: DropIndexDuringUpsertIT.java


 The following execution path will abort the RS:
 1. Client A writes to table T with mutation m; T has an index named IDX.
 2. m arrives at the RS, but processing has not started yet.
 3. Client B drops index IDX.
 4. The RS processes m and derives the index mutation m' for IDX.
 5. The RS tries to write m' but fails because the HBase table IDX no longer exists.
 6. The RS tries to disable IDX but fails because its metadata has already been deleted.
 7. KillServerOnFailurePolicy is triggered and the server aborts.
 8. Recovery fails for the same reason.
 An IT is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-103) Drop hadoop1.0 specifics from code

2014-09-23 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145462#comment-14145462
 ] 

Jesse Yates commented on PHOENIX-103:
-

Sounds good to me. I'll commit it now.

 Drop hadoop1.0 specifics from code
 --

 Key: PHOENIX-103
 URL: https://issues.apache.org/jira/browse/PHOENIX-103
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.0.0
Reporter: Jeffrey Zhong
Assignee: Jesse Yates
 Attachments: phoenix-103-master-v0.patch


 This JIRA is to track the discuss we had in the dev list:
 The discussion thread is here:
 https://www.mail-archive.com/dev@phoenix.incubator.apache.org/msg00964.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-103) Drop hadoop1.0 specifics from code

2014-09-23 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-103:

Attachment: phoenix-103-4.0-v0.patch

Attaching the patch for the change committed to 4.0.

 Drop hadoop1.0 specifics from code
 --

 Key: PHOENIX-103
 URL: https://issues.apache.org/jira/browse/PHOENIX-103
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.0.0, 4.2
Reporter: Jeffrey Zhong
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.2

 Attachments: phoenix-103-4.0-v0.patch, phoenix-103-master-v0.patch


 This JIRA is to track the discuss we had in the dev list:
 The discussion thread is here:
 https://www.mail-archive.com/dev@phoenix.incubator.apache.org/msg00964.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1286) Remove hadoop2 compat modules

2014-09-23 Thread Jesse Yates (JIRA)
Jesse Yates created PHOENIX-1286:


 Summary: Remove hadoop2 compat modules
 Key: PHOENIX-1286
 URL: https://issues.apache.org/jira/browse/PHOENIX-1286
 Project: Phoenix
  Issue Type: Bug
Reporter: Jesse Yates
Assignee: Jesse Yates


Now that PHOENIX-103 is committed, we can actually remove the compatibility 
modules entirely, along with all the reflection they use to get the right classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-103) Drop hadoop1.0 specifics from code

2014-09-23 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145474#comment-14145474
 ] 

Jesse Yates commented on PHOENIX-103:
-

Created PHOENIX-1286 to finish the work for removing the compat modules + their 
reflection.

 Drop hadoop1.0 specifics from code
 --

 Key: PHOENIX-103
 URL: https://issues.apache.org/jira/browse/PHOENIX-103
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.0.0, 4.2
Reporter: Jeffrey Zhong
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.2

 Attachments: phoenix-103-4.0-v0.patch, phoenix-103-master-v0.patch


 This JIRA is to track the discuss we had in the dev list:
 The discussion thread is here:
 https://www.mail-archive.com/dev@phoenix.incubator.apache.org/msg00964.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1286) Remove hadoop2 compat modules

2014-09-23 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1286:
-
Attachment: phoenix-1286-master-v0.patch

Attaching patch for master. Quite a bit changed here, so I'll open a pull 
request as well.

 Remove hadoop2 compat modules
 -

 Key: PHOENIX-1286
 URL: https://issues.apache.org/jira/browse/PHOENIX-1286
 Project: Phoenix
  Issue Type: Bug
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: phoenix-1286-master-v0.patch


 Now that PHOENIX-103 is committed, we can actually remove the compatibility 
 modules entirely, along with all the reflection they use to get the right 
 classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1272) Avoid pulling in unintended HBase dependencies in phoenix-core

2014-09-23 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1272:
-
Attachment: phoenix-1272-master-v2.patch

Since PHOENIX-103 got committed, it unfortunately looks like the patch doesn't 
apply cleanly :-/ 

However, here is an updated version based on [~apurtell]'s original. It would 
have gotten a +1 from me though :)

 Avoid pulling in unintended HBase dependencies in phoenix-core
 --

 Key: PHOENIX-1272
 URL: https://issues.apache.org/jira/browse/PHOENIX-1272
 Project: Phoenix
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: PHOENIX-1272.patch, phoenix-1272-master-v2.patch


 I think Phoenix might be pulling in all of the other HBase modules because 
 phoenix-core specifies the 'hbase-testing-util' HBase module as a dependency, 
 and not at test scope:
 {noformat}
   <dependencies>
     <dependency>
       <groupId>org.apache.hbase</groupId>
       <artifactId>hbase-testing-util</artifactId>
       <exclusions>
         <exclusion>
           <groupId>org.jruby</groupId>
           <artifactId>jruby-complete</artifactId>
         </exclusion>
       </exclusions>
     </dependency>
   ...
 {noformat}
 hbase-testing-util doesn't contain any code. It is a module you can use that 
 will pull in everything needed to start up mini-cluster tests: all of the 
 HBase modules, including hbase-server and the compat modules, with compile 
 scope. The Maven doc says about compile scope: "This is the default scope, used 
 if none is specified. Compile dependencies are available in all classpaths of 
 a project. Furthermore, those dependencies are propagated to dependent 
 projects."
 Other test dependencies in the phoenix-core POM are included at test scope 
 and tagged as optional, e.g.
 {noformat}
   <dependency>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-test</artifactId>
     <optional>true</optional>
     <scope>test</scope>
   </dependency>
 {noformat}
 Perhaps the same should be done for hbase-testing-util?
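
Presumably that change would look something like the following in the phoenix-core POM; a sketch mirroring the hadoop-test entry above, not the committed diff:
{noformat}
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-testing-util</artifactId>
    <optional>true</optional>
    <scope>test</scope>
  </dependency>
{noformat}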



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-103) Drop hadoop1.0 specifics from code

2014-09-22 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-103:

Attachment: phoenix-103-master-v0.patch

Initial patch for removing the dependency on hadoop1.

This doesn't remove the hadoop-compat and hadoop2-compat modules or remove all 
the reflection stuff inherent in those modules. For now, it just removes the 
hadoop1 build profiles and builds the correct tarballs, in the hopes that it 
will make reviewing easier :) I'll do a follow-up patch/JIRA that completes 
the update.

Ping [~apurtell] - know you were interested in this.

 Drop hadoop1.0 specifics from code
 --

 Key: PHOENIX-103
 URL: https://issues.apache.org/jira/browse/PHOENIX-103
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.0.0
Reporter: Jeffrey Zhong
 Attachments: phoenix-103-master-v0.patch


 This JIRA is to track the discuss we had in the dev list:
 The discussion thread is here:
 https://www.mail-archive.com/dev@phoenix.incubator.apache.org/msg00964.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-103) Drop hadoop1.0 specifics from code

2014-09-22 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143853#comment-14143853
 ] 

Jesse Yates commented on PHOENIX-103:
-

FWIW, I successfully ran $ mvn install -DskipTests on both OSX and Linux.

 Drop hadoop1.0 specifics from code
 --

 Key: PHOENIX-103
 URL: https://issues.apache.org/jira/browse/PHOENIX-103
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.0.0
Reporter: Jeffrey Zhong
Assignee: Jesse Yates
 Attachments: phoenix-103-master-v0.patch


 This JIRA is to track the discuss we had in the dev list:
 The discussion thread is here:
 https://www.mail-archive.com/dev@phoenix.incubator.apache.org/msg00964.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-103) Drop hadoop1.0 specifics from code

2014-09-22 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143886#comment-14143886
 ] 

Jesse Yates commented on PHOENIX-103:
-

Cool, I'll commit tonight/early tomorrow unless there are any objections (and 
file a follow-up to remove the reflection and compat modules entirely).

 Drop hadoop1.0 specifics from code
 --

 Key: PHOENIX-103
 URL: https://issues.apache.org/jira/browse/PHOENIX-103
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.0.0
Reporter: Jeffrey Zhong
Assignee: Jesse Yates
 Attachments: phoenix-103-master-v0.patch


 This JIRA is to track the discuss we had in the dev list:
 The discussion thread is here:
 https://www.mail-archive.com/dev@phoenix.incubator.apache.org/msg00964.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1245) Remove usage of empty KeyValue object BATCH_MARKER from Indexer

2014-09-10 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129253#comment-14129253
 ] 

Jesse Yates commented on PHOENIX-1245:
--

You should also run the IT tests - those really exercise the code paths.

 Remove usage of empty KeyValue object BATCH_MARKER from Indexer
 ---

 Key: PHOENIX-1245
 URL: https://issues.apache.org/jira/browse/PHOENIX-1245
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.0.0, 5.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 5.0.0, 4.2

 Attachments: PHOENIX-1245.patch


 This is added to the WALEdit in one CP hook and removed in a later CP hook, 
 but not really used.
 Now, after HBASE-11805, the removal won't actually happen. Later, this empty 
 KV is used in other parts of HBase core code, resulting in an NPE since the 
 bytes ref in this KV is null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1245) Remove usage of empty KeyValue object BATCH_MARKER from Indexer

2014-09-09 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127349#comment-14127349
 ] 

Jesse Yates commented on PHOENIX-1245:
--

The analysis here is correct - it's there so we keep the WALEdit around for the 
second CP call. But since the HBase API changed, we should be good. However, we 
have to be careful to maintain backwards compatibility with older versions of 
HBase.
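
For context, a hedged sketch of the pattern under discussion - an empty marker KV appended in the first CP hook so the WALEdit survives to the second, then stripped before the edit is written; the hook names here are illustrative:
{code}
// Sketch only, not the actual Indexer code.
private static final KeyValue BATCH_MARKER = new KeyValue();

void onFirstCpHook(WALEdit edit) {
  edit.add(BATCH_MARKER); // keeps the edit alive for the second CP call
}

void onSecondCpHook(WALEdit edit) {
  // After HBASE-11805 this removal no longer takes effect, so the empty KV
  // leaks into HBase core code and NPEs on its null bytes reference.
  edit.getKeyValues().remove(BATCH_MARKER);
}
{code}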

 Remove usage of empty KeyValue object BATCH_MARKER from Indexer
 ---

 Key: PHOENIX-1245
 URL: https://issues.apache.org/jira/browse/PHOENIX-1245
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.0.0, 5.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 5.0.0, 4.2


 This is added to the WALEdit in one CP hook and removed in a later CP hook, 
 but not really used.
 Now, after HBASE-11805, the removal won't actually happen. Later, this empty 
 KV is used in other parts of HBase core code, resulting in an NPE since the 
 bytes ref in this KV is null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1245) Remove usage of empty KeyValue object BATCH_MARKER from Indexer

2014-09-09 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127658#comment-14127658
 ] 

Jesse Yates commented on PHOENIX-1245:
--

Yeah, that's kinda what I'm getting at

 Remove usage of empty KeyValue object BATCH_MARKER from Indexer
 ---

 Key: PHOENIX-1245
 URL: https://issues.apache.org/jira/browse/PHOENIX-1245
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.0.0, 5.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 5.0.0, 4.2


 This is added to the WALEdit in one CP hook and removed in a later CP hook, 
 but not really used.
 Now, after HBASE-11805, the removal won't actually happen. Later, this empty 
 KV is used in other parts of HBase core code, resulting in an NPE since the 
 bytes ref in this KV is null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1107) Support mutable indexes over replication

2014-09-05 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1107:
-
Attachment: phoenix-1107-master-passes.patch

In the process of actually tracking down what's going on, I wrote an integration 
test to verify that replication is in fact not working for indexes.

Here's the interesting thing... they actually do work!

You just need to turn on replication and add the replication scopes to the 
column families. In IndexedKeyValue, getMatchingFamily will always match the 
METADATA family, which then doesn't even try to get replicated in the 
ReplicationSink.

Attaching a patch with the running test, which passes locally for me.
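
For reference, a hedged example of what "add the replication scopes to the column families" means with the HBase 0.98 admin API; the table and family names are placeholders:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class EnableIndexReplication {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // "0" is the default Phoenix column family; adjust for your schema.
    HColumnDescriptor cf = new HColumnDescriptor("0");
    cf.setScope(HConstants.REPLICATION_SCOPE_GLOBAL); // 1 = replicate edits
    admin.disableTable("MY_INDEX");
    admin.modifyColumn("MY_INDEX", cf);
    admin.enableTable("MY_INDEX");
    admin.close();
  }
}
{code}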

 Support mutable indexes over replication
 

 Key: PHOENIX-1107
 URL: https://issues.apache.org/jira/browse/PHOENIX-1107
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 3.1, 4.1
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: phoenix-1107-3.0.v0, phoenix-1107-master-passes.patch


 Mutable indexes don't support usage with replication. For starters, the 
 replication WAL Listener checks the family of the edits, which can throw a 
 NPE for the IndexedKeyValue 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1107) Support mutable indexes over replication

2014-09-05 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123810#comment-14123810
 ] 

Jesse Yates commented on PHOENIX-1107:
--

The reason this works (at least in HBase 0.98, so Phoenix 4.X) is that the 
"should we replicate this" mechanism is inherently different in 0.98. Here's 
the code:
{code}
  @Override
  public void visitLogEntryBeforeWrite(HTableDescriptor htd, HLogKey logKey,
      WALEdit logEdit) {
    scopeWALEdits(htd, logKey, logEdit);
  }

  public static void scopeWALEdits(HTableDescriptor htd, HLogKey logKey,
      WALEdit logEdit) {
    NavigableMap<byte[], Integer> scopes =
        new TreeMap<byte[], Integer>(Bytes.BYTES_COMPARATOR);
    byte[] family;
    for (KeyValue kv : logEdit.getKeyValues()) {
      family = kv.getFamily();
      // This is expected and the KV should not be replicated
      if (kv.matchingFamily(WALEdit.METAFAMILY)) continue;
      // Unexpected, has a tendency to happen in unit tests
      assert htd.getFamily(family) != null;

      int scope = htd.getFamily(family).getScope();
      if (scope != REPLICATION_SCOPE_LOCAL
          && !scopes.containsKey(family)) {
        scopes.put(family, scope);
      }
    }
    if (!scopes.isEmpty()) {
      logKey.setScopes(scopes);
    }
  }
{code}

which is inherently different from the WAL logic in my first comment on this 
JIRA.

I think we can mark this as Won't Fix and move on.

 Support mutable indexes over replication
 

 Key: PHOENIX-1107
 URL: https://issues.apache.org/jira/browse/PHOENIX-1107
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 3.1, 4.1
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: phoenix-1107-3.0.v0, phoenix-1107-master-passes.patch


 Mutable indexes don't support usage with replication. For starters, the 
 replication WAL Listener checks the family of the edits, which can throw a 
 NPE for the IndexedKeyValue 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1226) Exception in Tracing

2014-09-05 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates resolved PHOENIX-1226.
--
Resolution: Fixed

Committed to 4.0 and master. Thanks for filing the issue, [~dispalt], and for 
the lively discussion, [~jamestaylor]!

 Exception in Tracing
 

 Key: PHOENIX-1226
 URL: https://issues.apache.org/jira/browse/PHOENIX-1226
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
 Environment: 0.98.5 hbase, 4.1.0 phoenix
Reporter: Dan Di Spaltro
Assignee: Jesse Yates
 Attachments: phoenix-1226-4.0-v0.patch, phoenix-1226-4.0-v1.patch


 I was exposed to an exception in the tracing code during my test setup of 
 Phoenix, shown in the following log:
 {code}
 58062 [defaultRpcServer.handler=2,queue=0,port=53950] WARN  
 org.apache.hadoop.ipc.RpcServer  - 
 defaultRpcServer.handler=2,queue=0,port=53950: caught: 
 java.lang.IllegalArgumentException: offset (0) + length (4) exceed the 
 capacity of the array: 3
   at 
 org.apache.hadoop.hbase.util.Bytes.explainWrongLengthOrOffset(Bytes.java:600)
   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:749)
   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:725)
   at 
 org.apache.phoenix.trace.TracingCompat.readAnnotation(TracingCompat.java:56)
   at 
 org.apache.phoenix.trace.TraceMetricSource.receiveSpan(TraceMetricSource.java:121)
   at org.cloudera.htrace.Tracer.deliver(Tracer.java:81)
   at org.cloudera.htrace.impl.MilliSpan.stop(MilliSpan.java:70)
   at org.cloudera.htrace.TraceScope.close(TraceScope.java:70)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
   at java.lang.Thread.run(Thread.java:744)
 {code}
 It is related to the following line of code where we interpret all KV 
 annotation values as byte-wise integers here:
 https://github.com/apache/phoenix/blob/v4.1.0/phoenix-hadoop-compat/src/main/java/org/apache/phoenix/trace/TracingCompat.java#L56
 Here is where HBase is adding a non-integer KV annotation:
 https://github.com/apache/hbase/blob/0.98.5/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RequestContext.java#L105
 The fix should be simple, but I am not aware of all the related issues in 
 changing this.
 cc [~jesse_yates], [~samarth.j...@gmail.com], [~giacomotaylor]
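
A minimal reproduction of that failure mode - decoding a 3-byte String value as a 4-byte int:
{code}
import org.apache.hadoop.hbase.util.Bytes;

public class AnnotationDecodeRepro {
  public static void main(String[] args) {
    byte[] value = Bytes.toBytes("abc"); // 3 bytes, written as a String
    // Throws IllegalArgumentException: offset (0) + length (4) exceed the
    // capacity of the array: 3 -- the same error as in the stack trace above.
    Bytes.toInt(value);
  }
}
{code}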



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1241) Add typing to trace annotations

2014-09-05 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14123848#comment-14123848
 ] 

Jesse Yates commented on PHOENIX-1241:
--

For the second point, are you proposing that we start looking through some 
HBase traces, figure out what they could be, and add those columns by default? 
It would be nice to just dynamically discover them as we get the annotations. 
I know there isn't a real cost to having the columns if we don't fill them on 
every request, but that's a little cruft too.

bq. it is inconvenient that we'd have . 

It's entirely possible for any annotation key we get to be converted to 
something parseable (remember, they are all created as byte[]s). We probably 
need to double-quote everything not Phoenix-sourced, just to be sure. And how 
do we make sure it's not from Phoenix? Probably with some annotation prefix (at 
least I don't see another way, except for a static map of known annotations, 
which is also kinda ugly).

Also, Eli raised a good issue offline about user-specified annotations. It 
would be nice to type those as well, which means parsing the annotation key (or 
value) for the type. But then we end up in a nasty situation where it's 
possible that a user writes two different types for the same key in the same 
trace (not likely, but possible). We would have to decide if we want to 
support externally specified types and then 

 Add typing to trace annotations
 ---

 Key: PHOENIX-1241
 URL: https://issues.apache.org/jira/browse/PHOENIX-1241
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 5.0.0, 4.1.1
Reporter: Jesse Yates
 Fix For: 5.0.0, 4.1.1


 Currently, traces only support storing string-valued annotations - this works 
 for known trace sources. However, Phoenix will have trace annotations with 
 specific types. We can improve the storage format to know about these custom 
 types, rather than just storing strings, making the query interface more 
 powerful.
 See PHOENIX-1226 for more discussion



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1240) Add TTL to SYSTEM.TRACING_STATS table

2014-09-04 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1240:
-
Issue Type: Sub-task  (was: Improvement)
Parent: PHOENIX-1121

 Add TTL to SYSTEM.TRACING_STATS table
 -

 Key: PHOENIX-1240
 URL: https://issues.apache.org/jira/browse/PHOENIX-1240
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 5.0.0, 4.1.1
Reporter: Jesse Yates
 Fix For: 5.0.0, 4.1.1


 The tracing table should have a configurable TTL so it doesn't fill up with 
 too much data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1240) Add TTL to SYSTEM.TRACING_STATS table

2014-09-04 Thread Jesse Yates (JIRA)
Jesse Yates created PHOENIX-1240:


 Summary: Add TTL to SYSTEM.TRACING_STATS table
 Key: PHOENIX-1240
 URL: https://issues.apache.org/jira/browse/PHOENIX-1240
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.0.0, 4.1.1
Reporter: Jesse Yates
 Fix For: 5.0.0, 4.1.1


The tracing table should have a configurable TTL so it doesn't fill up with 
too much data.
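
A hedged illustration of one way to get that behavior - an HBase-level TTL on the tracing table's column family; the family name and the 7-day value are assumptions:
{code}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class TracingTableTtl {
  public static void main(String[] args) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    // "0" is assumed to be the table's column family; verify before running.
    HColumnDescriptor cf = new HColumnDescriptor("0");
    cf.setTimeToLive(7 * 24 * 60 * 60); // expire trace rows after 7 days
    admin.disableTable("SYSTEM.TRACING_STATS");
    admin.modifyColumn("SYSTEM.TRACING_STATS", cf);
    admin.enableTable("SYSTEM.TRACING_STATS");
    admin.close();
  }
}
{code}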



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1196) Add ability to add custom tracing annotations for connections

2014-09-04 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121710#comment-14121710
 ] 

Jesse Yates edited comment on PHOENIX-1196 at 9/4/14 6:26 PM:
--

Over in PHOENIX-1226 we are talking about typing the annotations that come 
through so we can actually have columns that are not just strings, but real 
types. It would be nice if we started with something like:
{code}
phoenix.tracing.custom.annotation.type.annotation key=annotation value
{code}

where the only 'type' we support at the moment is string or varchar. It's 
slightly more verbose, but provides us some flexibility later.

Also, I'm just a big fan of proper prefixing to avoid clashes, so adding 
"phoenix" and "custom" to the expected key is something that should be done 
regardless of the rest of the key used. 


was (Author: jesse_yates):
Over in PHOENIX-1126 we are talking about typing the annotations that come 
through so we can actually have columns that are not just strings, but real 
types. It would be nice if we started with something like:
{code}
phoenix.tracing.custom.annotation.type.annotation key=annotation value
{code}

where the only 'type' we support at the moment is string or varchar. It's 
slightly more verbose, but provides us some flexibility later.

Also, I'm just a big fan of proper prefixing to avoid clashes, so adding 
"phoenix" and "custom" to the expected key is something that should be done 
regardless of the rest of the key used. 

 Add ability to add custom tracing annotations for connections
 -

 Key: PHOENIX-1196
 URL: https://issues.apache.org/jira/browse/PHOENIX-1196
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.1






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1196) Add ability to add custom tracing annotations for connections

2014-09-04 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121756#comment-14121756
 ] 

Jesse Yates commented on PHOENIX-1196:
--

Offline discussion w/ Eli came to the conclusion that we could just make the 
property:
{code}
phoenix.trace.custom.annotation.annotation key=annotation value
{code}
That would be compatible with the existing tracing properties, which start with 
phoenix.trace (in QueryServices.java), and just assume that the value is a 
String. We can add typing later by checking the next word after the common 
prefix. For example, the value from
{code}
phoenix.trace.custom.annotation.my-custom-key
{code}
would still be treated as a String/varchar, and the value from
{code}
phoenix.trace.custom.annotation.smallint.my-other-custom-key
{code}
would be a smallint, etc. I'm merely making these examples to show that we have 
the flexibility to add typing later - not that it should be tackled as part of 
this JIRA.
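
A hedged usage sketch of that property scheme from the client side; the key suffix and value are made up for illustration:
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class CustomAnnotationExample {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    // Every span traced on this connection would carry tenant=acme.
    props.setProperty("phoenix.trace.custom.annotation.tenant", "acme");
    try (Connection conn =
        DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
      // run traced statements here
    }
  }
}
{code}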




 Add ability to add custom tracing annotations for connections
 -

 Key: PHOENIX-1196
 URL: https://issues.apache.org/jira/browse/PHOENIX-1196
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.1






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1241) Add typing to trace annotations

2014-09-04 Thread Jesse Yates (JIRA)
Jesse Yates created PHOENIX-1241:


 Summary: Add typing to trace annotations
 Key: PHOENIX-1241
 URL: https://issues.apache.org/jira/browse/PHOENIX-1241
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 5.0.0, 4.1.1
Reporter: Jesse Yates
 Fix For: 5.0.0, 4.1.1


Currently, traces only support storing string-valued annotations - this works 
for known trace sources. However, Phoenix will have trace annotations with 
specific types. We can improve the storage format to know about these custom 
types, rather than just storing strings, making the query interface more 
powerful.

See PHOENIX-1226 for more discussion
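
A hedged sketch of what a typed storage format might look like - a type tag embedded in the annotation key so readers stop assuming every value is a 4-byte int; the names and encoding are assumptions, not a committed design:
{code}
import org.apache.hadoop.hbase.util.Bytes;

public final class TypedAnnotation {
  // e.g. encodeKey("smallint", "my-key") -> bytes of "smallint.my-key"
  public static byte[] encodeKey(String type, String key) {
    return Bytes.toBytes(type + "." + key);
  }

  public static Object decodeValue(String type, byte[] value) {
    if ("smallint".equals(type)) return Bytes.toShort(value);
    if ("integer".equals(type)) return Bytes.toInt(value);
    return Bytes.toString(value); // default: treat as String/varchar
  }
}
{code}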



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1226) Exception in Tracing

2014-09-04 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14121763#comment-14121763
 ] 

Jesse Yates commented on PHOENIX-1226:
--

I'm just going to commit this fix as-is and we can look at how to encode types 
later - it's a complex problem. Added PHOENIX-1241 for doing the typing work.

 Exception in Tracing
 

 Key: PHOENIX-1226
 URL: https://issues.apache.org/jira/browse/PHOENIX-1226
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
 Environment: 0.98.5 hbase, 4.1.0 phoenix
Reporter: Dan Di Spaltro
Assignee: Jesse Yates
 Attachments: phoenix-1226-4.0-v0.patch, phoenix-1226-4.0-v1.patch


 I was exposed to an exception in the tracing code during my test setup of 
 Phoenix, shown in the following log:
 {code}
 58062 [defaultRpcServer.handler=2,queue=0,port=53950] WARN  
 org.apache.hadoop.ipc.RpcServer  - 
 defaultRpcServer.handler=2,queue=0,port=53950: caught: 
 java.lang.IllegalArgumentException: offset (0) + length (4) exceed the 
 capacity of the array: 3
   at 
 org.apache.hadoop.hbase.util.Bytes.explainWrongLengthOrOffset(Bytes.java:600)
   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:749)
   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:725)
   at 
 org.apache.phoenix.trace.TracingCompat.readAnnotation(TracingCompat.java:56)
   at 
 org.apache.phoenix.trace.TraceMetricSource.receiveSpan(TraceMetricSource.java:121)
   at org.cloudera.htrace.Tracer.deliver(Tracer.java:81)
   at org.cloudera.htrace.impl.MilliSpan.stop(MilliSpan.java:70)
   at org.cloudera.htrace.TraceScope.close(TraceScope.java:70)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
   at java.lang.Thread.run(Thread.java:744)
 {code}
 It is related to the following line of code where we interpret all KV 
 annotation values as byte-wise integers here:
 https://github.com/apache/phoenix/blob/v4.1.0/phoenix-hadoop-compat/src/main/java/org/apache/phoenix/trace/TracingCompat.java#L56
 Here is where HBase is adding a non-integer KV annotation:
 https://github.com/apache/hbase/blob/0.98.5/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RequestContext.java#L105
 The fix should be simple, but I am not aware of all the related issues in 
 changing this.
 cc [~jesse_yates], [~samarth.j...@gmail.com], [~giacomotaylor]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1234) QueryUtil doesn't parse zk hosts correctly

2014-09-03 Thread Jesse Yates (JIRA)
Jesse Yates created PHOENIX-1234:


 Summary: QueryUtil doesn't parse zk hosts correctly
 Key: PHOENIX-1234
 URL: https://issues.apache.org/jira/browse/PHOENIX-1234
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.1
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 4.1.1, 5.0.0


QueryUtil uses the output of ZKConfig.getZKQuorumServersString to build the 
server list and then uses that raw output (+ slight cleanup) to get a 
PhoenixConnection. However, when there is more than one server present in the 
hbase.zookeeper.quorum config param, the output is incorrectly formatted for 
Phoenix.
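
A hedged illustration of the mismatch as described, plus the kind of normalization the fix needs; the exact formats and the helper below are assumptions, not the committed patch:
{code}
// ZKConfig style output:   "zk1.example.com:2181,zk2.example.com:2181"
// Phoenix JDBC URL wants:  "jdbc:phoenix:zk1.example.com,zk2.example.com:2181"
public final class ZkQuorumFormat {
  public static String toPhoenixQuorum(String zkServers, int port) {
    StringBuilder hosts = new StringBuilder();
    for (String hostAndPort : zkServers.split(",")) {
      if (hosts.length() > 0) hosts.append(',');
      hosts.append(hostAndPort.split(":")[0]); // drop the per-host port
    }
    return hosts.append(':').append(port).toString();
  }
}
{code}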



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1234) QueryUtil doesn't parse zk hosts correctly

2014-09-03 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14120677#comment-14120677
 ] 

Jesse Yates commented on PHOENIX-1234:
--

Linking to PHOENIX-883 since its dealing with a similar issue in handling ZK 
hostnames.

 QueryUtil doesn't parse zk hosts correctly
 --

 Key: PHOENIX-1234
 URL: https://issues.apache.org/jira/browse/PHOENIX-1234
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.1
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.1.1


 QueryUtil uses the output of ZKConfig.getZKQuorumServersString to build the 
 server list and then uses that raw output (+ slight cleanup) to get a 
 PhoenixConnection. However, when there is more than one server present in the 
 hbase.zookeeper.quorum config param, the output is incorrectly formatted for 
 Phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1234) QueryUtil doesn't parse zk hosts correctly

2014-09-03 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1234:
-
Attachment: fix-query-util-failure.patch

Attaching patch that demonstrates the issue (with a little tweaking in 
QueryUtil to just get back the connection string).

 QueryUtil doesn't parse zk hosts correctly
 --

 Key: PHOENIX-1234
 URL: https://issues.apache.org/jira/browse/PHOENIX-1234
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.1
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.1.1

 Attachments: fix-query-util-failure.patch


 QueryUtil uses the output of ZKConfig.getZKQuorumServersString to build the 
 server list and then uses that raw output (+ slight cleanup) to get a 
 PhoenixConnection. However, when there is more than one server present in the 
 hbase.zookeeper.quorum config param, the output is incorrectly formatted for 
 Phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1234) QueryUtil doesn't parse zk hosts correctly

2014-09-03 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14120678#comment-14120678
 ] 

Jesse Yates edited comment on PHOENIX-1234 at 9/3/14 11:31 PM:
---

Attaching patch that fixes issue (with a little tweaking in QueryUtil to just 
get back the connection string)  and a test.


was (Author: jesse_yates):
Attaching patch that demonstrates the issue (with a little tweaking in 
QueryUtil to just get back the connection string).

 QueryUtil doesn't parse zk hosts correctly
 --

 Key: PHOENIX-1234
 URL: https://issues.apache.org/jira/browse/PHOENIX-1234
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.1
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.1.1

 Attachments: fix-query-util-failure.patch


 QueryUtil uses the output of ZKConfig.getZKQuorumServersString to build the 
 server list and then uses that raw output (+ slight cleanup) to get a 
 PhoenixConnection. However, when there is more than one server present in the 
 hbase.zookeeper.quorum config param, the output is incorrectly formatted for 
 Phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1234) QueryUtil doesn't parse zk hosts correctly

2014-09-03 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated PHOENIX-1234:
-
Attachment: show-query-util-failure.patch

Attaching patch that demonstrates the failure (using the same kind of tweak to 
QueryUtil to just get the url).

 QueryUtil doesn't parse zk hosts correctly
 --

 Key: PHOENIX-1234
 URL: https://issues.apache.org/jira/browse/PHOENIX-1234
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.1
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 5.0.0, 4.1.1

 Attachments: fix-query-util-failure.patch, 
 show-query-util-failure.patch


 QueryUtil uses the output of ZKConfig.getZKQuorumServersString to build the 
 server list and then uses that raw output (+ slight cleanup) to get a 
 PhoenixConnection. However, when there is more than one server present in the 
 hbase.zookeeper.quorum config param, the output is incorrectly formatted for 
 Phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

