Re: [ANNOUNCE] New Phoenix committer: Mihir Monani

2019-04-29 Thread Mihir Monani
Thanks Thomas and Apache Phoenix Community. :)

On Sun, Apr 28, 2019 at 6:12 AM Thomas D'Silva  wrote:

> On behalf of the Apache Phoenix PMC, I am pleased to announce that
> Mihir Monani has accepted our invitation to become a committer.
> Mihir has done some nice work fixing several bugs related to indexing [1].
>
> Please welcome him to the Apache Phoenix team.
>
> Thanks,
> Thomas
>
> [1]
>
> https://issues.apache.org/jira/browse/PHOENIX-5199?jql=project%20%3D%20PHOENIX%20AND%20assignee%3D%22mihir6692%22%20AND%20status%3DResolved
>


-- 
Mihir Monani
(+91)-9429473434


[jira] [Created] (PHOENIX-5264) Implement the toString method of EncodedQualifiersColumnProjectionFilter

2019-04-29 Thread Kadir OZDEMIR (JIRA)
Kadir OZDEMIR created PHOENIX-5264:
--

 Summary: Implement the toString method of 
EncodedQualifiersColumnProjectionFilter  
 Key: PHOENIX-5264
 URL: https://issues.apache.org/jira/browse/PHOENIX-5264
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.14.0
Reporter: Kadir OZDEMIR
Assignee: Priyank Porwal
 Fix For: 5.0.0, 4.14.0


The current implementation of the toString method of 
EncodedQualifiersColumnProjectionFilter returns an empty string. A 
proper implementation helps during debugging. Please see some other Filter 
classes to get an idea of how to implement this, e.g., 
SingleColumnValueFilter, SkipScanFilter, and PrefixFilter.
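
For illustration, a minimal sketch of the toString pattern used by such Filter classes; the class and field names below are placeholders and are not taken from the actual EncodedQualifiersColumnProjectionFilter source.

{code}
import java.util.BitSet;

// Minimal illustration of the toString pattern used by HBase Filter classes
// such as SingleColumnValueFilter. The class and field names are placeholders,
// not the actual fields of EncodedQualifiersColumnProjectionFilter.
class ColumnProjectionFilterSketch {
    private final String emptyColumnFamilyName;   // hypothetical field
    private final BitSet trackedColumnQualifiers; // hypothetical field

    ColumnProjectionFilterSketch(String emptyColumnFamilyName, BitSet trackedColumnQualifiers) {
        this.emptyColumnFamilyName = emptyColumnFamilyName;
        this.trackedColumnQualifiers = trackedColumnQualifiers;
    }

    @Override
    public String toString() {
        // Report the class name plus the key state, so debug logs show which
        // columns the filter projects instead of an empty string.
        return String.format("%s [emptyColumnFamilyName=%s, trackedColumnQualifiers=%s]",
                getClass().getSimpleName(), emptyColumnFamilyName, trackedColumnQualifiers);
    }
}
{code}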



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5156) Consistent Mutable Global Indexes for Non-Transactional Tables

2019-04-29 Thread Kadir OZDEMIR (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5156:
---
Attachment: PHOENIX-5156.master.010.patch

> Consistent Mutable Global Indexes for Non-Transactional Tables
> --
>
> Key: PHOENIX-5156
> URL: https://issues.apache.org/jira/browse/PHOENIX-5156
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Attachments: PHOENIX-5156.master.001.patch, 
> PHOENIX-5156.master.002.patch, PHOENIX-5156.master.003.patch, 
> PHOENIX-5156.master.004.patch, PHOENIX-5156.master.005.patch, 
> PHOENIX-5156.master.006.patch, PHOENIX-5156.master.007.patch, 
> PHOENIX-5156.master.008.patch, PHOENIX-5156.master.009.patch, 
> PHOENIX-5156.master.010.patch
>
>  Time Spent: 12.5h
>  Remaining Estimate: 0h
>
> Without transactional tables, mutable global indexes can easily get out 
> of sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent mutable global indexes without 
> the need to use transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5238) Provide an option to pass hints with PhoenixRDD and Datasource v2

2019-04-29 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5238:
---
Attachment: PHOENIX-5238.patch

> Provide an option to pass hints with PhoenixRDD and Datasource v2
> -
>
> Key: PHOENIX-5238
> URL: https://issues.apache.org/jira/browse/PHOENIX-5238
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: William Shen
>Assignee: Xinyi Yan
>Priority: Major
> Fix For: connectors-1.0.0
>
> Attachments: PHOENIX-5238.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As a Spark developer, I want to query with the NO_CACHE hint using PhoenixRDD, so 
> I can prevent large one-time scans from affecting the block cache.
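
A hedged sketch of how such a hint might be passed once this feature exists: the "hint" option is hypothetical (it is what this issue requests), "table" and "zkUrl" are intended to follow the usual phoenix-spark connector options, and the table name and ZooKeeper quorum are placeholders.

{code}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Sketch only: the "hint" option does not exist yet -- it illustrates the
// capability requested in this issue. "table" and "zkUrl" are intended to match
// the usual phoenix-spark options; the values are placeholders.
public class PhoenixNoCacheHintSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("phoenix-no-cache-hint-sketch")
                .getOrCreate();

        Dataset<Row> rows = spark.read()
                .format("phoenix")
                .option("table", "MY_TABLE")        // placeholder table
                .option("zkUrl", "zkhost:2181")     // placeholder ZooKeeper quorum
                .option("hint", "NO_CACHE")         // hypothetical option proposed here
                .load();

        // Large one-time scans would then carry the NO_CACHE hint and avoid
        // polluting the HBase block cache.
        System.out.println(rows.count());
        spark.stop();
    }
}
{code}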



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5238) Provide an option to pass hints with PhoenixRDD and Datasource v2

2019-04-29 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan reassigned PHOENIX-5238:
--

Assignee: Xinyi Yan

> Provide an option to pass hints with PhoenixRDD and Datasource v2
> -
>
> Key: PHOENIX-5238
> URL: https://issues.apache.org/jira/browse/PHOENIX-5238
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: William Shen
>Assignee: Xinyi Yan
>Priority: Major
> Fix For: connectors-1.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As a Spark developer, I want to query with the NO_CACHE hint using PhoenixRDD, so 
> I can prevent large one-time scans from affecting the block cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Board report due in ~1 week

2019-04-29 Thread Thomas D'Silva
Jaanai,

We are waiting for a few Omid bug fixes before doing the 4.15/5.1 release,
which will have the splittable system catalog and the Omid integration.
Are you interested in doing a 5.0.1 release that has the HBase 2.0.x
compatibility fixes that were discussed in a previous thread [1]?

The steps to create an RC are straightforward and documented here:
https://phoenix.apache.org/release.html.
The main thing you need to do is add your code signing key to
https://dist.apache.org/repos/dist/dev/phoenix/KEYS (follow the steps at
the start of that file) and then commit it using svn. Then you can follow
the rest of the steps listed in "How to do a release".

Thanks,
Thomas

[1]
https://lists.apache.org/thread.html/99fcc737d7a8f82ddffb1b34a64f7099f7909900b8bea36dd6afca16@%3Cdev.phoenix.apache.org%3E

On Mon, Apr 29, 2019 at 6:33 PM Jaanai Zhang  wrote:

> I would like to volunteer for a new 5.x release if someone can guide me
> through the release process. Thanks.
>
> 
>Jaanai Zhang
>Best regards!
>
>
>
> Josh Elser  wrote on Tue, Apr 30, 2019 at 12:39 AM:
>
> > Hiya folks,
> >
> > It's about that time for another board report. Please reply here with
> > anything of merit that you think the board might find
> > interesting/useful. As a reminder, the board is typically more
> > concerned with high-level project/community details than the
> > nuts-and-bolts of the code changes for the project.
> >
> > On my radar already is...
> >
> > * Multiple new committers and PMC'ers (thanks so much to the folks who
> > have been driving votes!)
> > * NoSQL day in May
> > * 4.14.2 in vote
> > * Need for a new 5.x.y release (if there are no volunteers, I may have
> > to find the time to make this happen. It's been too long)
> >
> > Thanks!
> >
> > - Josh
> >
>


Re: Board report due in ~1 week

2019-04-29 Thread Jaanai Zhang
I would like to volunteer for a new 5.x release if someone can guide me
through the release process. Thanks.


   Jaanai Zhang
   Best regards!



Josh Elser  wrote on Tue, Apr 30, 2019 at 12:39 AM:

> Hiya folks,
>
> It's about that time for another board report. Please reply here with
> anything of merit that you think the board might find
> interesting/useful. As a reminder, the board is typically more
> concerned with high-level project/community details than the
> nuts-and-bolts of the code changes for the project.
>
> On my radar already is...
>
> * Multiple new committers and PMC'ers (thanks so much to the folks who
> have been driving votes!)
> * NoSQL day in May
> * 4.14.2 in vote
> * Need for a new 5.x.y release (if there are no volunteers, I may have
> to find the time to make this happen. It's been too long)
>
> Thanks!
>
> - Josh
>


[jira] [Updated] (PHOENIX-5262) Wrong Result on Salted table with Varbinary PK

2019-04-29 Thread Daniel Wong (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Wong updated PHOENIX-5262:
-
Attachment: PHOENIX-5262v2.patch

> Wrong Result on Salted table with Varbinary PK
> --
>
> Key: PHOENIX-5262
> URL: https://issues.apache.org/jira/browse/PHOENIX-5262
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Attachments: PHOENIX-5262.patch, PHOENIX-5262v2.patch
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5228) use slf4j for logging in phoenix project

2019-04-29 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5228:
---
Attachment: (was: PHOENIX-5228.patch)

> use slf4j for logging in phoenix project
> 
>
> Key: PHOENIX-5228
> URL: https://issues.apache.org/jira/browse/PHOENIX-5228
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1, 5.1.0
>Reporter: Mihir Monani
>Assignee: Xinyi Yan
>Priority: Trivial
>  Labels: SFDC
> Attachments: PHOENIX-5228-4.x-HBase-1.2.patch, 
> PHOENIX-5228-4.x-HBase-1.3.patch, PHOENIX-5228-4.x-HBase-1.4.patch, 
> PHOENIX-5228.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> It would be good to use slf4j for logging in the Phoenix project. Here is a list 
> of files that do not use slf4j (a minimal slf4j usage sketch follows the file 
> lists below). 
> phoenix-core :-
> {noformat}
> WALRecoveryRegionPostOpenIT.java
> WALReplayWithIndexWritesAndCompressedWALIT.java
> BasePermissionsIT.java
> ChangePermissionsIT.java
> IndexRebuildIncrementDisableCountIT.java
> InvalidIndexStateClientSideIT.java
> MutableIndexReplicationIT.java
> FailForUnsupportedHBaseVersionsIT.java
> SecureUserConnectionsIT.java
> PhoenixMetricsIT.java
> BaseTracingTestIT.java
> PhoenixTracingEndToEndIT.java
> PhoenixRpcSchedulerFactory.java
> IndexHalfStoreFileReaderGenerator.java
> BinaryCompatibleBaseDecoder.java
> ServerCacheClient.java
> CallRunner.java
> MetaDataRegionObserver.java
> PhoenixAccessController.java
> ScanRegionObserver.java
> TaskRegionObserver.java
> DropChildViewsTask.java
> IndexRebuildTask.java
> BaseQueryPlan.java
> HashJoinPlan.java
> CollationKeyFunction.java
> Indexer.java
> LockManager.java
> BaseIndexBuilder.java
> IndexBuildManager.java
> NonTxIndexBuilder.java
> IndexMemStore.java
> BaseTaskRunner.java
> QuickFailingTaskRunner.java
> TaskBatch.java
> ThreadPoolBuilder.java
> ThreadPoolManager.java
> IndexManagementUtil.java
> IndexWriter.java
> IndexWriterUtils.java
> KillServerOnFailurePolicy.java
> ParallelWriterIndexCommitter.java
> RecoveryIndexWriter.java
> TrackingParallelWriterIndexCommitter.java
> PhoenixIndexFailurePolicy.java
> PhoenixTransactionalIndexer.java
> SnapshotScanner.java
> PhoenixEmbeddedDriver.java
> PhoenixResultSet.java
> QueryLogger.java
> QueryLoggerDisruptor.java
> TableLogWriter.java
> PhoenixInputFormat.java
> PhoenixOutputFormat.java
> PhoenixRecordReader.java
> PhoenixRecordWriter.java
> PhoenixServerBuildIndexInputFormat.java
> PhoenixMRJobSubmitter.java
> PhoenixConfigurationUtil.java
> Metrics.java
> DefaultStatisticsCollector.java
> StatisticsScanner.java
> PhoenixMetricsSink.java
> TraceReader.java
> TraceSpanReceiver.java
> TraceWriter.java
> Tracing.java
> EquiDepthStreamHistogram.java
> PhoenixMRJobUtil.java
> QueryUtil.java
> ServerUtil.java
> ZKBasedMasterElectionUtil.java
> IndexTestingUtils.java
> StubAbortable.java
> TestIndexWriter.java
> TestParalleIndexWriter.java
> TestParalleWriterIndexCommitter.java
> TestWALRecoveryCaching.java
> LoggingSink.java
> ParameterizedPhoenixCanaryToolIT.java
> CoprocessorHConnectionTableFactoryTest.java
> TestUtil.java{noformat}
> phoenix-tracing-webapp :-
> {noformat}
> org/apache/phoenix/tracingwebapp/http/Main.java
> {noformat}
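
For reference, the change this issue asks for is a switch to slf4j with parameterized log messages; a minimal sketch follows (the class name is illustrative and not one of the files above).

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Minimal slf4j usage sketch; the class name is illustrative and is not one of
// the files listed above.
public class Slf4jLoggingSketch {
    private static final Logger LOGGER = LoggerFactory.getLogger(Slf4jLoggingSketch.class);

    public void doWork(String tableName) {
        // Parameterized messages avoid string concatenation when the level is disabled.
        LOGGER.info("Starting work on table {}", tableName);
        try {
            Thread.sleep(10); // stand-in for real work
        } catch (Exception e) {
            // Passing the exception as the last argument logs the full stack trace.
            LOGGER.error("Work failed for table {}", tableName, e);
        }
    }
}
{code}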



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5228) use slf4j for logging in phoenix project

2019-04-29 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5228:
---
Attachment: PHOENIX-5228.patch

> use slf4j for logging in phoenix project
> 
>
> Key: PHOENIX-5228
> URL: https://issues.apache.org/jira/browse/PHOENIX-5228
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1, 5.1.0
>Reporter: Mihir Monani
>Assignee: Xinyi Yan
>Priority: Trivial
>  Labels: SFDC
> Attachments: PHOENIX-5228-4.x-HBase-1.2.patch, 
> PHOENIX-5228-4.x-HBase-1.3.patch, PHOENIX-5228-4.x-HBase-1.4.patch, 
> PHOENIX-5228.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> It would be good to use slf4j for logging in the Phoenix project. Here is a list 
> of files that do not use slf4j. 
> phoenix-core :-
> {noformat}
> WALRecoveryRegionPostOpenIT.java
> WALReplayWithIndexWritesAndCompressedWALIT.java
> BasePermissionsIT.java
> ChangePermissionsIT.java
> IndexRebuildIncrementDisableCountIT.java
> InvalidIndexStateClientSideIT.java
> MutableIndexReplicationIT.java
> FailForUnsupportedHBaseVersionsIT.java
> SecureUserConnectionsIT.java
> PhoenixMetricsIT.java
> BaseTracingTestIT.java
> PhoenixTracingEndToEndIT.java
> PhoenixRpcSchedulerFactory.java
> IndexHalfStoreFileReaderGenerator.java
> BinaryCompatibleBaseDecoder.java
> ServerCacheClient.java
> CallRunner.java
> MetaDataRegionObserver.java
> PhoenixAccessController.java
> ScanRegionObserver.java
> TaskRegionObserver.java
> DropChildViewsTask.java
> IndexRebuildTask.java
> BaseQueryPlan.java
> HashJoinPlan.java
> CollationKeyFunction.java
> Indexer.java
> LockManager.java
> BaseIndexBuilder.java
> IndexBuildManager.java
> NonTxIndexBuilder.java
> IndexMemStore.java
> BaseTaskRunner.java
> QuickFailingTaskRunner.java
> TaskBatch.java
> ThreadPoolBuilder.java
> ThreadPoolManager.java
> IndexManagementUtil.java
> IndexWriter.java
> IndexWriterUtils.java
> KillServerOnFailurePolicy.java
> ParallelWriterIndexCommitter.java
> RecoveryIndexWriter.java
> TrackingParallelWriterIndexCommitter.java
> PhoenixIndexFailurePolicy.java
> PhoenixTransactionalIndexer.java
> SnapshotScanner.java
> PhoenixEmbeddedDriver.java
> PhoenixResultSet.java
> QueryLogger.java
> QueryLoggerDisruptor.java
> TableLogWriter.java
> PhoenixInputFormat.java
> PhoenixOutputFormat.java
> PhoenixRecordReader.java
> PhoenixRecordWriter.java
> PhoenixServerBuildIndexInputFormat.java
> PhoenixMRJobSubmitter.java
> PhoenixConfigurationUtil.java
> Metrics.java
> DefaultStatisticsCollector.java
> StatisticsScanner.java
> PhoenixMetricsSink.java
> TraceReader.java
> TraceSpanReceiver.java
> TraceWriter.java
> Tracing.java
> EquiDepthStreamHistogram.java
> PhoenixMRJobUtil.java
> QueryUtil.java
> ServerUtil.java
> ZKBasedMasterElectionUtil.java
> IndexTestingUtils.java
> StubAbortable.java
> TestIndexWriter.java
> TestParalleIndexWriter.java
> TestParalleWriterIndexCommitter.java
> TestWALRecoveryCaching.java
> LoggingSink.java
> ParameterizedPhoenixCanaryToolIT.java
> CoprocessorHConnectionTableFactoryTest.java
> TestUtil.java{noformat}
> phoenix-tracing-webapp :-
> {noformat}
> org/apache/phoenix/tracingwebapp/http/Main.java
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-5101) ScanningResultIterator getScanMetrics throws NPE

2019-04-29 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reopened PHOENIX-5101:
-

> ScanningResultIterator getScanMetrics throws NPE
> 
>
> Key: PHOENIX-5101
> URL: https://issues.apache.org/jira/browse/PHOENIX-5101
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Reid Chan
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5101.414-HBase-1.4.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.getScanMetrics(ScanningResultIterator.java:92)
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.close(ScanningResultIterator.java:79)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.close(TableResultIterator.java:144)
>   at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.close(LookAheadResultIterator.java:42)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:1439)
>   at 
> org.apache.phoenix.iterate.MergeSortResultIterator.close(MergeSortResultIterator.java:44)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.close(PhoenixResultSet.java:176)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:807)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.frame(JdbcResultSet.java:148)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:101)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:81)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.prepareAndExecute(JdbcMeta.java:759)
>   at 
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:206)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:927)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:879)
>   at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:123)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:121)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback$1.run(QueryServer.java:500)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback.doAsRemoteUser(QueryServer.java:497)
>   at 
> org.apache.calcite.avatica.server.HttpServer$Builder$1.doAsRemoteUser(HttpServer.java:884)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:120)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5101) ScanningResultIterator getScanMetrics throws NPE

2019-04-29 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5101:

Priority: Blocker  (was: Major)

> ScanningResultIterator getScanMetrics throws NPE
> 
>
> Key: PHOENIX-5101
> URL: https://issues.apache.org/jira/browse/PHOENIX-5101
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Reid Chan
>Assignee: Karan Mehta
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5101.414-HBase-1.4.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.getScanMetrics(ScanningResultIterator.java:92)
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.close(ScanningResultIterator.java:79)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.close(TableResultIterator.java:144)
>   at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.close(LookAheadResultIterator.java:42)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:1439)
>   at 
> org.apache.phoenix.iterate.MergeSortResultIterator.close(MergeSortResultIterator.java:44)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.close(PhoenixResultSet.java:176)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:807)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.frame(JdbcResultSet.java:148)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:101)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:81)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.prepareAndExecute(JdbcMeta.java:759)
>   at 
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:206)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:927)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:879)
>   at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:123)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:121)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback$1.run(QueryServer.java:500)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback.doAsRemoteUser(QueryServer.java:497)
>   at 
> org.apache.calcite.avatica.server.HttpServer$Builder$1.doAsRemoteUser(HttpServer.java:884)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:120)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5263) Provide an option to fully rebuild LOCAL indexes asynchronously through SQL

2019-04-29 Thread Gokcen Iskender (JIRA)
Gokcen Iskender created PHOENIX-5263:


 Summary: Provide an option to fully rebuild LOCAL indexes 
asynchronously through SQL
 Key: PHOENIX-5263
 URL: https://issues.apache.org/jira/browse/PHOENIX-5263
 Project: Phoenix
  Issue Type: Bug
Reporter: Gokcen Iskender
Assignee: Gokcen Iskender


Currently, if we run "ALTER INDEX ... REBUILD", all the rows in the index are 
deleted and the index is rebuilt synchronously.

"ALTER INDEX ... REBUILD ASYNC" seems to be used for the IndexTool's partial 
rebuild option, rebuilding from ASYNC_REBUILD_TIMESTAMP (PHOENIX-2890).

So currently the only way to fully rebuild is to drop the index and recreate 
it. This is burdensome as it requires having the schema DDL.

We should have an option to fully rebuild asynchronously that has the same 
semantics as dropping and recreating the index. A further advantage is that we 
can maintain the splits of the index table while dropping its data. We are 
currently seeing issues where rebuilding a large table via an MR job results in 
hotspotting, because all data regions write to the same index region at the 
start.
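
For context, the existing statements discussed above can be issued through JDBC as in the sketch below; the index name, table name, and ZooKeeper quorum are placeholders.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch of the existing ALTER INDEX syntax discussed above; MY_IDX, MY_TABLE,
// and the connection URL are placeholders.
public class RebuildIndexSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181");
             Statement stmt = conn.createStatement()) {
            // Synchronous full rebuild: deletes all index rows and rebuilds in place.
            stmt.execute("ALTER INDEX MY_IDX ON MY_TABLE REBUILD");
            // ASYNC variant: currently tied to the IndexTool partial-rebuild path
            // (PHOENIX-2890); this issue proposes a fully asynchronous rebuild with
            // drop-and-recreate semantics.
            stmt.execute("ALTER INDEX MY_IDX ON MY_TABLE REBUILD ASYNC");
        }
    }
}
{code}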

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5168) IndexScrutinyTool cannot output to table when analyzing tenant-owned indexes

2019-04-29 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5168:
-
Attachment: PHOENIX-5168-4.x.patch

> IndexScrutinyTool cannot output to table when analyzing tenant-owned indexes
> 
>
> Key: PHOENIX-5168
> URL: https://issues.apache.org/jira/browse/PHOENIX-5168
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Attachments: PHOENIX-5168-4.x.patch, PHOENIX-5168.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> IndexScrutiny has a new feature of using tenant connections (when the tenant-id 
> parameter is used) to look up the indexes on tenant views.
> When the tenant-id option is provided and output-format is set to TABLE, we get 
> an error that PHOENIX_SCRUTINY_TABLE cannot be created due to permissions. 
> We should be able to output to a table when the tenant-id option is used.
> Note that output-format set to FILE is supported.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Board report due in ~1 week

2019-04-29 Thread Josh Elser

Hiya folks,

It's about that time for another board report. Please reply here with 
anything of merit that you think the board might find 
interesting/useful. As a reminder, the board is typically more 
concerned with high-level project/community details than the 
nuts-and-bolts of the code changes for the project.


On my radar already is...

* Multiple new committers and PMC'ers (thanks so much to the folks who 
have been driving votes!)

* NoSQL day in May
* 4.14.2 in vote
* Need for a new 5.x.y release (if there are no volunteers, I may have 
to find the time to make this happen. It's been too long)


Thanks!

- Josh


[jira] [Updated] (PHOENIX-5262) Wrong Result on Salted table with Varbinary PK

2019-04-29 Thread Bin Shi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bin Shi updated PHOENIX-5262:
-
Comment: was deleted

(was: [~dbwong], The description for this bug isn't accurate. This is generally 
a bug for salted tables regardless of the PK. If the PK is "k INTEGER PRIMARY 
KEY", we have the same issue.)

> Wrong Result on Salted table with Varbinary PK
> --
>
> Key: PHOENIX-5262
> URL: https://issues.apache.org/jira/browse/PHOENIX-5262
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Attachments: PHOENIX-5262.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-5258) Add support for header in input CSV for CsvBulkLoadTool

2019-04-29 Thread Prashant Vithani (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prashant Vithani reopened PHOENIX-5258:
---

> Add support for header in input CSV for CsvBulkLoadTool
> ---
>
> Key: PHOENIX-5258
> URL: https://issues.apache.org/jira/browse/PHOENIX-5258
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Prashant Vithani
>Priority: Minor
>
> Currently, CsvBulkLoadTool does not support reading a header from the input CSV 
> and expects the content of the CSV to match the table schema. Support for a 
> header can be added to dynamically map the CSV columns to the table schema.
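
As an illustration of the requested behavior (a sketch, not the actual CsvBulkLoadTool change), commons-csv can treat the first record as a header and expose a name-to-position mapping:

{code}
import java.io.Reader;
import java.io.StringReader;
import java.util.Map;

import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

// Header-driven column mapping with commons-csv; a sketch of the proposed
// behavior, not the actual CsvBulkLoadTool change.
public class CsvHeaderSketch {
    public static void main(String[] args) throws Exception {
        Reader in = new StringReader("ID,NAME\n1,alice\n2,bob\n");
        try (CSVParser parser = CSVFormat.DEFAULT.withFirstRecordAsHeader().parse(in)) {
            Map<String, Integer> headerMap = parser.getHeaderMap(); // column name -> position
            System.out.println("Detected columns: " + headerMap.keySet());
            for (CSVRecord record : parser) {
                // Columns can now be addressed by name instead of fixed position,
                // which is what mapping the CSV header onto the table schema needs.
                System.out.println(record.get("ID") + " -> " + record.get("NAME"));
            }
        }
    }
}
{code}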



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)