[jira] [Created] (PHOENIX-3871) Incremental stats collection

2017-05-22 Thread Eli Levine (JIRA)
Eli Levine created PHOENIX-3871:
---

 Summary: Incremental stats collection
 Key: PHOENIX-3871
 URL: https://issues.apache.org/jira/browse/PHOENIX-3871
 Project: Phoenix
  Issue Type: Improvement
Reporter: Eli Levine


Phoenix automatically gathers statistics at [major compaction 
time|http://phoenix.apache.org/update_statistics.html]. While this is useful 
and accurate, it also means that statistics can become stale due to the 
infrequency of major compactions (there can be days between them), reducing 
their usefulness. 

This jira asks the question: Is it possible for Phoenix to collect statistics 
at a more granular level, say on every (or a sampling of) UPSERT, or at minor 
compaction time? Since statistics are always approximations, it is OK for this 
incremental approach to not be 100% accurate.

The current stats collection mechanism should be kept to accurately "fix up" 
stats at major compaction time.
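A minimal sketch of the sampling idea (editor's illustration in Python; `IncrementalStats`, `on_upsert`, and `fix_up` are hypothetical names, not Phoenix APIs): sampled upserts update scaled estimates, and major compaction replaces them with exact values.

```python
import random

class IncrementalStats:
    """Approximate per-region stats, updated from a sample of upserts and
    "fixed up" with exact values at major compaction time."""

    def __init__(self, sample_rate=0.01, seed=None):
        self.sample_rate = sample_rate
        self.estimated_rows = 0
        self.estimated_bytes = 0
        self._rng = random.Random(seed)

    def on_upsert(self, row_bytes):
        # Only a fraction of upserts touch the stats; scale each contribution
        # by 1/sample_rate so the expected totals match the true totals.
        if self._rng.random() < self.sample_rate:
            self.estimated_rows += int(1 / self.sample_rate)
            self.estimated_bytes += int(row_bytes / self.sample_rate)

    def fix_up(self, exact_rows, exact_bytes):
        # Major compaction still computes exact stats and replaces the estimates.
        self.estimated_rows = exact_rows
        self.estimated_bytes = exact_bytes
```

With a 1% sample rate only one in a hundred upserts touches the stats, yet the expected totals match the true ones, which is exactly the "approximate between compactions, exact at compaction" behavior proposed.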

[~jamestaylor], FYI. We talked about this in person a few weeks ago. Creating 
this Jira for posterity. Please add anything that I missed. Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3744) Support snapshot scanners for MR-based queries

2017-03-21 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15935579#comment-15935579
 ] 

Eli Levine commented on PHOENIX-3744:
-

Thanks for the @-mention, [~giacomotay...@gmail.com]! This is indeed 
interesting. 

> Support snapshot scanners for MR-based queries
> --
>
> Key: PHOENIX-3744
> URL: https://issues.apache.org/jira/browse/PHOENIX-3744
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Akshita Malhotra
>
> HBase supports scanning over snapshots, with a SnapshotScanner that accesses 
> the region directly in HDFS. We should make sure that Phoenix can support 
> that.
> Not sure how we'd want to decide when to run a query over a snapshot. Some 
> ideas:
> - if there's an SCN set (i.e. the query is running at a point in time in the 
> past)
> - if the memstore is empty
> - if the query is being run at a timestamp earlier than any memstore data
> - as a config option on the table
> - as a query hint
> - based on some kind of optimizer rule (i.e. based on estimated # of bytes 
> that will be scanned)
> Phoenix typically runs a query at the timestamp at which it was compiled. Any 
> data committed after this time should not be seen while a query is running.
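The candidate heuristics above can be sketched as one decision function (editor's illustration in Python; all parameter names are hypothetical, not Phoenix configuration keys):

```python
def use_snapshot_scanner(scn=None, memstore_empty=False, query_ts=None,
                         min_memstore_ts=None, table_opt_in=False,
                         hint=False, est_scan_bytes=0,
                         bytes_threshold=1 << 30):
    """Decide whether a query should run over an HBase snapshot,
    combining the heuristics listed in the issue description."""
    if hint or table_opt_in:
        return True           # explicit query hint or table config option
    if scn is not None:
        return True           # SCN set: query runs at a point in the past
    if memstore_empty:
        return True           # nothing unflushed that the snapshot would miss
    if (query_ts is not None and min_memstore_ts is not None
            and query_ts < min_memstore_ts):
        return True           # query is earlier than any memstore data
    # Optimizer-style rule: large scans favor reading HFiles directly.
    return est_scan_bytes >= bytes_threshold
```

A real implementation would also have to honor Phoenix's compile-time timestamp semantics noted above, since the snapshot must cover everything visible at that timestamp.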





Re: [ANNOUNCE] New Apache Phoenix committer - Geoffrey Jacoby

2017-02-28 Thread Eli Levine
Congrats and welcome!

On Tue, Feb 28, 2017 at 1:08 PM Mujtaba Chohan  wrote:

> Congrats Geoffrey!
>
> On Tue, Feb 28, 2017 at 1:05 PM, Josh Elser  wrote:
>
> > Congrats, Geoffrey! Well deserved!
> >
> >
> > James Taylor wrote:
> >
> >> On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> Geoffrey
> >> Jacoby has accepted our invitation to become a committer on the Apache
> >> Phoenix project. He's been involved with Phoenix for two years and his
> >> list
> >> of fixed issues and enhancements is impressive [1], including in
> >> particular
> >> allowing our MR integration to write to a different target cluster [2],
> >> having the batch size of commits be byte-based instead of row-based [3],
> >> enabling replication to only occur for multi-tenant views in the system
> >> catalog [4], and putting resource controls in place to prevent too many
> >> simultaneous connections [5].
> >>
> >> Welcome aboard, Geoffrey. Looking forward to many more contributions!
> >>
> >> Regards,
> >> James
> >>
> >> [1]
> >> https://issues.apache.org/jira/issues/?jql=project%20%3D%
> >> 20PHOENIX%20and%20assignee%3Dgjacoby
> >> [2] https://issues.apache.org/jira/browse/PHOENIX-1653
> >> [3] https://issues.apache.org/jira/browse/PHOENIX-541
> >> [4] https://issues.apache.org/jira/browse/PHOENIX-3639
> >> [5] https://issues.apache.org/jira/browse/PHOENIX-3663
> >>
> >>
>


Re: [ANNOUNCE] Ankit Singhal added as Apache Phoenix committer

2016-02-02 Thread Eli Levine
Congrats!

> On Feb 2, 2016, at 3:46 PM, Jesse Yates  wrote:
> 
> Congrats and welcome!
> 
>> On Tue, Feb 2, 2016 at 3:44 PM Mujtaba Chohan  wrote:
>> 
>> Congrats!! :)
>> 
>> On Tue, Feb 2, 2016 at 3:41 PM, Thomas D'Silva 
>> wrote:
>> 
>>> Congrats Ankit!
>>> 
>>> On Tue, Feb 2, 2016 at 2:53 PM, Enis Söztutar 
>> wrote:
 Congratz! Keep up the great work.
 
 Enis
 
 On Mon, Feb 1, 2016 at 6:25 AM, Dumindu Buddhika <
 dumindukarunathil...@gmail.com> wrote:
 
> Congratulations Ankit!
> 
> On Mon, Feb 1, 2016 at 7:43 PM, Josh Mahonin 
>>> wrote:
> 
>> Great work Ankit, well deserved!
>> 
>> On Sun, Jan 31, 2016 at 2:51 PM, Jesse Yates <
>> jesse.k.ya...@gmail.com
 
>> wrote:
>> 
>>> Congratulations and welcome!
>>> 
>>> On Sun, Jan 31, 2016, 11:49 AM Ravi Kiran <
>>> maghamraviki...@gmail.com>
>>> wrote:
>>> 
 Congrats Ankit !!
 
 On Sun, Jan 31, 2016 at 11:41 AM, Nick Dimiduk <
>>> ndimi...@apache.org>
 wrote:
 
> Thanks for the contributions Ankit, and congratulations!
> 
> On Sunday, January 31, 2016, Anoop John <
>> anoop.hb...@gmail.com>
>> wrote:
> 
>> Great   congrats Ankit
>> 
>> On Sunday, January 31, 2016, rajeshb...@apache.org
> 
>> <
>> chrajeshbab...@gmail.com >
>> wrote:
>>> Great work Ankit.
>>> Congratulations!!!
>>> On Jan 31, 2016 5:33 AM, "James Taylor" <
> jamestay...@apache.org
>> > wrote:
>>> 
 On behalf of the Apache Phoenix PMC, I'm pleased to
>>> announce
>> that
> Ankit
 Singhal has accepted our invitation to become a committer
>>> on
> the
> Apache
 Phoenix project. He's done some great work improving
> performance
> around
 aggregate and order by queries as well as reworking our
>> statistics
 collection representation.
 
 Great job, Ankit. Looking forward to many more
>>> contributions!
 
 Regards,
 James
>> 


Re: [ANNOUNCE] Thomas D'Silva added to Apache Phoenix PMC

2015-11-22 Thread Eli Levine
Well done, Thomas. Congrats!

> On Nov 22, 2015, at 4:37 PM, Andrew Purtell  wrote:
> 
> Congratulations Thomas. 
> 
>> On Nov 22, 2015, at 11:39 AM, Nick Dimiduk  wrote:
>> 
>> Congrats Thomas!
>> 
>> On Sunday, November 22, 2015, rajeshb...@apache.org <
>> chrajeshbab...@gmail.com> wrote:
>> 
>>> Great job. Congratulations Thomas!!!
>>> 
>>> On Sun, Nov 22, 2015 at 10:21 AM, James Taylor >> >
>>> wrote:
>>> 
 On behalf of the Apache Phoenix PMC, I'm pleased to announce that Thomas
 D'Silva has accepted our invitation to become a member of the Apache
 Phoenix project management committee (PMC). Most recently, he's been busy
 integrating transaction support into Phoenix in a massive 100+ file pull
 request[1][2].
 
 Great job, Thomas. Welcome aboard. Looking forward to continued
 collaboration.
 
 Regards,
 James
 
 [1] https://github.com/apache/phoenix/pull/131
 [2] https://github.com/apache/phoenix/pull/132
>>> 


[jira] [Commented] (PHOENIX-2429) PhoenixConfigurationUtil.CURRENT_SCN_VALUE for phoenix-spark plugin does not work

2015-11-19 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014226#comment-15014226
 ] 

Eli Levine commented on PHOENIX-2429:
-

[~prkommireddi], FYI. Looks like there might be an issue with the Pig loader 
correctly setting the CurrentSCN property.
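For context, the semantics the reporter expects can be sketched as follows (editor's illustration in Python; `put_cell` and its parameters are hypothetical, not Phoenix or phoenix-spark APIs): a client-supplied CurrentSCN should become the cell timestamp, with server time only as the fallback.

```python
def put_cell(store, key, value, current_scn=None,
             server_time=lambda: 1700000000000):
    """Write a versioned cell. If the client set CurrentSCN, the cell is
    written at that timestamp; otherwise the server clock is used. The bug
    report below describes the SCN being ignored and server time always used."""
    ts = current_scn if current_scn is not None else server_time()
    store.setdefault(key, []).append((ts, value))
    return ts
```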

> PhoenixConfigurationUtil.CURRENT_SCN_VALUE for phoenix-spark plugin does not 
> work
> -
>
> Key: PHOENIX-2429
> URL: https://issues.apache.org/jira/browse/PHOENIX-2429
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0, 4.6.0
>Reporter: Diego Fustes Villadóniga
>
> When I call the method saveToPhoenix to store the contents of a ProductDD, 
> passing a Hadoop configuration in which I set 
> PhoenixConfigurationUtil.CURRENT_SCN_VALUE to a given timestamp, the 
> values are not stored with that timestamp; the server time is used instead.





[jira] [Commented] (PHOENIX-2431) Make SYSTEM.CATALOG table transactional

2015-11-19 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014219#comment-15014219
 ] 

Eli Levine commented on PHOENIX-2431:
-

There is code that allows add/drop columns on tables with views as long as the 
parent table and view metadata are in the same region. Once SYSTEM.CATALOG is 
transactional, we can probably remove that logic and always allow metadata 
mutations for tables with views.

> Make SYSTEM.CATALOG table transactional
> ---
>
> Key: PHOENIX-2431
> URL: https://issues.apache.org/jira/browse/PHOENIX-2431
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> We currently update the SYSTEM.CATALOG table atomically by using the 
> region.mutateRowsWithLocks() call. This works only if the mutations are all 
> in the same region which can break down if enough views are created on a base 
> table. Instead, now that we have transactions, we should change our 
> SYSTEM.CATALOG table to transactional=true and stop using an endpoint 
> coprocessor to update the table.
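The same-region constraint described above can be illustrated with a small sketch (editor's illustration in Python; not HBase code): an atomic multi-row mutation is only possible when every row key falls inside one region's key range.

```python
def mutate_rows_atomically(regions, rows):
    """Sketch of the mutateRowsWithLocks() limitation: atomicity holds only
    within one region. 'regions' maps region name -> (start_key, end_key);
    an end_key of None means the range is unbounded above."""
    def region_of(row):
        for name, (start, end) in regions.items():
            if row >= start and (end is None or row < end):
                return name
        raise KeyError(row)

    owners = {region_of(r) for r in rows}
    if len(owners) != 1:
        raise RuntimeError(
            "rows span regions %s; cannot mutate atomically" % sorted(owners))
    return owners.pop()
```

Once enough views pile metadata rows into SYSTEM.CATALOG, the rows for one logical change can land in different regions and the atomic path breaks, which is what a transactional catalog would avoid.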





Re: [ANNOUNCE] Josh Mahonin added to Apache Phoenix PMC

2015-11-09 Thread Eli Levine
Congrats, Josh!

On Mon, Nov 9, 2015 at 10:03 AM, Ravi Kiran 
wrote:

> Congratulations Josh!!
>
> On Sun, Nov 8, 2015 at 12:35 PM, Andrew Purtell 
> wrote:
>
> > Congratulations Josh!
> >
> > On Sat, Nov 7, 2015 at 5:25 PM, James Taylor 
> > wrote:
> >
> > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Josh
> > > Mahonin has accepted our invitation to become a member of the Apache
> > > Phoenix project management committee (PMC). He's been the force behind
> > our
> > > Phoenix-Spark integration[1] and has done an excellent job supporting
> it
> > > and moving it forward.
> > >
> > > Great job, Josh. Welcome aboard. Looking forward to continued
> > > collaboration.
> > >
> > > Regards,
> > > James
> > >
> > > [1] https://phoenix.apache.org/phoenix_spark.html
> > >
> >
> >
> >
> > --
> > Best regards,
> >
> >- Andy
> >
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)
> >
>


Re: [VOTE] Release of Apache Phoenix 4.6.0-HBase-0.98 RC0

2015-10-20 Thread Eli Levine
+1

Tested for regressions using internal Salesforce use-cases. Looks good!

On Mon, Oct 19, 2015 at 4:33 PM, Samarth Jain  wrote:

> Hi Everyone,
>
> This is a call for a vote on Apache Phoenix 4.6.0-HBase-0.98 RC0. This is a
> patch release of Phoenix 4, compatible with the 0.98 branch of Apache
> HBase. The release includes both a source-only release and a convenience
> binary release.
>
> The source tarball, including signatures, digests, etc can be found at:
>
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.6.0-HBase-0.98-rc0/src/
>
> The binary artifacts can be found at:
>
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.6.0-HBase-0.98-rc0/bin/
>
> For a complete list of changes, see:
>
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12333284
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/mujtaba.asc
>
> KEYS file available here:
> https://dist.apache.org/repos/dist/release/phoenix/KEYS
>
> The hash and tag to be voted upon:
>
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=ac2d04481e955f263c503ac89e4c2afdb383456a
>
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.6.0-HBase-0.98-rc0
>
>
> Vote will be open for at least 72 hours. Please vote:
>
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
>
> Thanks,
> The Apache Phoenix Team
>


[ANNOUNCE] New Apache Phoenix committer - Jan Fernando

2015-09-29 Thread Eli Levine
On behalf of the Apache Phoenix project I am happy to welcome Jan Fernando
as a committer. Jan has been an active user and contributor to Phoenix in
the last couple of years. Some of his major contributions are:
1) Worked deeply in the sequence code, including implementing Bulk Sequence
Allocation (PHOENIX-1954) and debugging and fixing several tricky sequence
bugs (PHOENIX-2149, PHOENIX-1096).
2) Implemented DROP TABLE...CASCADE to support dropping tenant-specific
views: PHOENIX-1098.
3) Worked closely with Cody and Mujtaba on the design of the interfaces for
Pherf and contributed patches to increase support for tenant-specific use
cases (PHOENIX-1791, PHOENIX-2227). Pioneered creating Pherf scenarios at
Salesforce.
4) Worked closely with Samarth on requirements, API design, and validation
for Phoenix global- and query-level metrics (PHOENIX-1452, PHOENIX-1819) to
get better visibility into Phoenix internals.

Looking forward to continuing to work with Jan on Apache Phoenix!

Thanks,

Eli Levine
elilev...@apache.org


Re: [ANNOUNCE] Welcome our newest Committer Dumindu Buddhika

2015-09-18 Thread Eli Levine
Welcome, Dumindu!

On Fri, Sep 18, 2015 at 9:33 AM, Thomas D'Silva 
wrote:

> Congrats Dumindu!
>
> On Fri, Sep 18, 2015 at 8:56 AM, Jesse Yates 
> wrote:
> > Welcome and congrats!
> >
> > On Fri, Sep 18, 2015 at 8:27 AM rajeshb...@apache.org <
> > chrajeshbab...@gmail.com> wrote:
> >
> >> Congratulations Dumindu!!!
> >> Great work.
> >>
> >> Thanks,
> >> Rajeshbabu.
> >>
> >> On Fri, Sep 18, 2015 at 7:43 PM, Ted Yu  wrote:
> >>
> >> > Congratulations Dumindu
> >> >
> >> > On Fri, Sep 18, 2015 at 7:10 AM, Gabriel Reid  >
> >> > wrote:
> >> >
> >> > > Welcome and congratulations Dumindu!
> >> > >
> >> > > - Gabriel
> >> > >
> >> > > On Fri, Sep 18, 2015 at 6:18 AM, Vasudevan, Ramkrishna S
> >> > >  wrote:
> >> > > > Hi All
> >> > > >
> >> > > >
> >> > > >
> >> > > > Please welcome our newest committer Dumindu Buddhika to the Apache
> >> > > Phoenix
> >> > > > team.  Dumindu,  a student and an intern in the GSoC  program, has
> >> > > > contributed lot of new functionalities related to the PHOENIX
> ARRAY
> >> > > feature
> >> > > > and also has involved himself in lot of critical bug fixes even
> after
> >> > the
> >> > > > GSoC period was over.
> >> > > >
> >> > > > He is a quick learner and a very young blood eager to contribute
> to
> >> > > Phoenix
> >> > > > and its roadmap.
> >> > > >
> >> > > >
> >> > > >
> >> > > > All the best and congratulations, Dumindu  Welcome on board
> !!!
> >> > > >
> >> > > >
> >> > > >
> >> > > > Regards
> >> > > >
> >> > > > Ram
> >> > >
> >> >
> >>
>


[jira] [Commented] (PHOENIX-2214) ORDER BY optimization incorrect for queries over views containing WHERE clause

2015-08-27 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14717149#comment-14717149
 ] 

Eli Levine commented on PHOENIX-2214:
-

Come to think of it, ORDER BY can be optimized away for queries that order 
only by the parts of the PK not already bound in the WHERE clause of the 
view's DDL, such as SELECT * FROM v2 ORDER BY k2 (refer to the patch for V2's 
DDL). What do you think, [~jamestaylor]?
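The proposed rule can be sketched as a small predicate (editor's illustration in Python; hypothetical names, not the Phoenix optimizer): if the view's WHERE clause binds a leading prefix of the PK to constants, rowkey order already sorts rows by the remaining PK columns, so an ORDER BY on those columns can be dropped.

```python
def can_drop_order_by(pk_columns, view_bound_prefix, order_by):
    """True when the ORDER BY is already satisfied by rowkey order.
    view_bound_prefix: PK columns fixed to constants by the view's WHERE.
    Every row of the view shares that prefix, so rows arrive sorted by
    the remaining PK columns."""
    if pk_columns[:len(view_bound_prefix)] != list(view_bound_prefix):
        return False  # WHERE must bind a *leading* prefix of the PK
    remaining = pk_columns[len(view_bound_prefix):]
    # ORDER BY must be a leading prefix of the remaining PK columns, in order.
    return remaining[:len(order_by)] == list(order_by)
```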

 ORDER BY optimization incorrect for queries over views containing WHERE clause
 --

 Key: PHOENIX-2214
 URL: https://issues.apache.org/jira/browse/PHOENIX-2214
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.5.1
Reporter: Eli Levine
 Attachments: 
 0001-Test-case-to-outline-issue-with-view-ORDER-BY-optimi.patch


 Phoenix optimizes away ORDER BY clauses if they are the same order as the 
 default PK order. However, this optimization is not done correctly for views 
 (tenant-specific and regular) if the view has been created with a WHERE 
 clause.
 See attached patch for repro, in which the last assertEquals() fails due to 
 the fact that ORDER BY is not optimized away, as expected.





[jira] [Updated] (PHOENIX-2214) ORDER BY optimization incorrect for queries over views containing WHERE clause

2015-08-26 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-2214:

Attachment: 0001-Test-case-to-outline-issue-with-view-ORDER-BY-optimi.patch

 ORDER BY optimization incorrect for queries over views containing WHERE clause
 --

 Key: PHOENIX-2214
 URL: https://issues.apache.org/jira/browse/PHOENIX-2214
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.5.1
Reporter: Eli Levine
 Attachments: 
 0001-Test-case-to-outline-issue-with-view-ORDER-BY-optimi.patch


 Phoenix optimizes away ORDER BY clauses if they are the same order as the 
 default PK order. However, this optimization is not done correctly for views 
 (tenant-specific and regular) if the view has been created with a WHERE 
 clause.
 See attached patch for repro, in which the last assertEquals() fails due to 
 the fact that ORDER BY is not optimized away, as expected.





[jira] [Created] (PHOENIX-2214) ORDER BY optimization incorrect for queries over views containing WHERE clause

2015-08-26 Thread Eli Levine (JIRA)
Eli Levine created PHOENIX-2214:
---

 Summary: ORDER BY optimization incorrect for queries over views 
containing WHERE clause
 Key: PHOENIX-2214
 URL: https://issues.apache.org/jira/browse/PHOENIX-2214
 Project: Phoenix
  Issue Type: Bug
Reporter: Eli Levine


Phoenix optimizes away ORDER BY clauses if they are the same order as the 
default PK order. However, this optimization is not done correctly for views 
(tenant-specific and regular) if the view has been created with a WHERE clause.

See attached patch for repro, in which the last assertEquals() fails because 
ORDER BY is not optimized away as expected.





[jira] [Updated] (PHOENIX-2214) ORDER BY optimization incorrect for queries over views containing WHERE clause

2015-08-26 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-2214:

Affects Version/s: 4.5.1

 ORDER BY optimization incorrect for queries over views containing WHERE clause
 --

 Key: PHOENIX-2214
 URL: https://issues.apache.org/jira/browse/PHOENIX-2214
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.5.1
Reporter: Eli Levine

 Phoenix optimizes away ORDER BY clauses if they are the same order as the 
 default PK order. However, this optimization is not done correctly for views 
 (tenant-specific and regular) if the view has been created with a WHERE 
 clause.
 See attached patch for repro, in which the last assertEquals() fails due to 
 the fact that ORDER BY is not optimized away, as expected.





Re: [VOTE] Release of Apache Phoenix 4.5.1-HBase-0.98 RC1

2015-08-17 Thread Eli Levine
+1 on the release.

- Tried 4.5.1-HBase-0.98 RC1 using binary artifacts. Ran the jars through a
bunch of internal Salesforce tests, everything checks out.
- Verified that PHOENIX-2177 is fixed in this release. Admin.modifyTable()
is no longer invoked on column creation for views. No more C_M_MODIFY_TABLE
in HBase logs with subsequent table compaction and reassignment.

Thanks,

Eli


On Fri, Aug 14, 2015 at 9:50 PM, James Taylor jamestay...@apache.org
wrote:

 Hi Everyone,

 This is a call for a vote on Apache Phoenix 4.5.1-HBase-0.98 RC1. This is a
 patch release of Phoenix 4, compatible with the 0.98 branch of Apache
 HBase. The release includes both a source-only release and a convenience
 binary release. The previous RC was sunk due to PHOENIX-2181.

 The source tarball, including signatures, digests, etc can be found at:

 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.1-HBase-0.98-rc1/src/

 The binary artifacts can be found at:

 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.1-HBase-0.98-rc1/bin/

 For a complete list of changes, see:

 https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12333272

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/mujtaba.asc

 KEYS file available here:
 https://dist.apache.org/repos/dist/release/phoenix/KEYS

 The hash and tag to be voted upon:

 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=7352ce573830a91fd3751ded0f8db78c8bc62867

 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.5.1-HBase-0.98-rc1

 Vote will be open for at least 72 hours. Please vote:

 [ ] +1 approve
 [ ] +0 no opinion
 [ ] -1 disapprove (and reason why)

 Thanks,
 The Apache Phoenix Team



[jira] [Commented] (PHOENIX-2177) Adding a column to the view shouldn't call admin.modifyTable() for the base table.

2015-08-17 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14700108#comment-14700108
 ] 

Eli Levine commented on PHOENIX-2177:
-

Makes sense, thanks.

 Adding a column to the view shouldn't call admin.modifyTable() for the base 
 table.
 --

 Key: PHOENIX-2177
 URL: https://issues.apache.org/jira/browse/PHOENIX-2177
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain
 Fix For: 4.5.1

 Attachments: PHOENIX-2177.patch, PHOENIX-2177_v2.patch








[jira] [Commented] (PHOENIX-2177) Adding a column to the view shouldn't call admin.modifyTable() for the base table.

2015-08-17 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14700071#comment-14700071
 ] 

Eli Levine commented on PHOENIX-2177:
-

Curious, what's the reason to call Admin.modifyTable() when a column is added 
to a Phoenix table? [~jamestaylor]

 Adding a column to the view shouldn't call admin.modifyTable() for the base 
 table.
 --

 Key: PHOENIX-2177
 URL: https://issues.apache.org/jira/browse/PHOENIX-2177
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain
 Fix For: 4.5.1

 Attachments: PHOENIX-2177.patch, PHOENIX-2177_v2.patch








[jira] [Commented] (PHOENIX-2177) Adding a column to the view shouldn't call admin.modifyTable() for the base table.

2015-08-12 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694540#comment-14694540
 ] 

Eli Levine commented on PHOENIX-2177:
-

Looks good, [~samarthjain]. Thanks for the quick turnaround!

 Adding a column to the view shouldn't call admin.modifyTable() for the base 
 table.
 --

 Key: PHOENIX-2177
 URL: https://issues.apache.org/jira/browse/PHOENIX-2177
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain
 Fix For: 4.5.1

 Attachments: PHOENIX-2177.patch








[jira] [Commented] (PHOENIX-1673) Allow tenant ID to be of any integral data type

2015-08-10 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14680632#comment-14680632
 ] 

Eli Levine commented on PHOENIX-1673:
-

[~neverendingqs], there is a discussion happening over at 
https://github.com/apache/phoenix/pull/104. Would be good to hear your opinion, 
since you filed this issue. Thanks.

 Allow tenant ID to be of any integral data type
 ---

 Key: PHOENIX-1673
 URL: https://issues.apache.org/jira/browse/PHOENIX-1673
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.3.0
Reporter: Mark Tse
  Labels: Newbie, multi-tenant
 Fix For: 4.4.1


 When creating multi-tenant tables and views, the column that identifies the 
 tenant (first primary key column) must be of type 'VARCHAR' or 'CHAR'.
 It should be possible to relax this restriction to use any integral data 
 type. The tenant ID from the connection property can be converted based on 
 the data type of the first primary key column.





[jira] [Commented] (PHOENIX-1779) Parallelize fetching of next batch of records for scans corresponding to queries with no order by

2015-07-10 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622640#comment-14622640
 ] 

Eli Levine commented on PHOENIX-1779:
-

I think you are right. If RVCs are used for purposes other than paging, and I 
think they will be, then users might not always want results ordered. Cool, no 
further concerns from me.

 Parallelize fetching of next batch of records for scans corresponding to 
 queries with no order by 
 --

 Key: PHOENIX-1779
 URL: https://issues.apache.org/jira/browse/PHOENIX-1779
 Project: Phoenix
  Issue Type: Improvement
Reporter: Samarth Jain
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1779.patch, PHOENIX-1779_v2.patch, 
 PHOENIX-1779_v3.patch, wip.patch, wip3.patch, wipwithsplits.patch


 Today in Phoenix we parallelize the first execution of scans i.e. we load 
 only the first batch of records up to the scan's cache size in parallel. 
 Loading of subsequent batches of records in scanners is essentially serial. 
 This could be improved especially for queries, including the ones with no 
 order by clauses,  that do not need any kind of merge sort on the client. 
 This could also potentially improve the performance of UPSERT SELECT 
 statements that load data from one table and insert into another. One such 
 use case being creating immutable indexes for tables that already have data. 
 It could also potentially improve the performance of our MapReduce solution 
 for bulk loading data by improving the speed of the loading/mapping phase. 





[jira] [Commented] (PHOENIX-1779) Parallelize fetching of next batch of records for scans corresponding to queries with no order by

2015-07-10 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622586#comment-14622586
 ] 

Eli Levine commented on PHOENIX-1779:
-

Going through some code that uses row-value constructors got me thinking: How 
does the fact that rows are no longer guaranteed to be returned in rowkey order 
impact row-value constructors in Phoenix in general? At the end of 
http://phoenix.apache.org/paged.html we suggest the user grab values from the 
last row processed and use them in the next RVC call. After PHOENIX-1779 this 
is no longer guaranteed to work, right? Does the optimization for PHOENIX-1779 
make sense with RVC at all? I see a few options:
1. Force users to supply ORDER BY whenever they use RVCs. Seems pretty onerous.
2. Don't do PHOENIX-1779's optimization in the presence of RVCs.
3. Instruct users to use the previous result's largest (or lowest, depending on 
PK sort order) PK value seen, instead of just grabbing values from the last 
row, to use in the RVC. Also pretty onerous for users IMHO.

Imagine this simple use case: somebody is writing code for paging over Phoenix 
results. The first query does not use RVCs. Subsequent queries, if any, will 
use RVCs with values filled in based on previous results. Ideally, the caller 
could tell Phoenix to run a query, with or without RVCs, and return results in 
row-key order, because they want to use the results for paging and easily grab 
the last PK values to use for subsequent RVCs.

Maybe the right thing to do is: (1) Force row-key ordered results in the 
presence of RVCs and (2) Allow users to pass in a query hint that forces 
ordered results for use in the first paged query with no RVCs.

[~samarthjain], [~jamestaylor], thoughts?

CC [~jfernando_sfdc]
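Option 3 above can be sketched as follows (editor's illustration in Python; `fetch_page` simulates a scan whose page contents are correct but arrive in arbitrary order, as could happen after PHOENIX-1779): paging stays correct if the client advances using the largest PK seen rather than the last row processed.

```python
import random

def fetch_page(rows, after, page_size, rng):
    """Simulated scan: the page CONTAINS the next page_size rows by PK,
    but their order within the page is arbitrary."""
    nxt = sorted(r for r in rows if after is None or r > after)[:page_size]
    rng.shuffle(nxt)
    return nxt

def page_through(rows, page_size, seed=0):
    """RVC-style paging: conceptually WHERE (pk) > (last_pk) LIMIT page_size,
    advancing with the max PK seen in the page, not the last row returned."""
    rng = random.Random(seed)
    after, seen = None, []
    while True:
        page = fetch_page(rows, after, page_size, rng)
        if not page:
            return seen
        seen.extend(page)
        after = max(page)  # largest PK seen, not "last row processed"
```

If `after` were instead taken from the final row of each shuffled page, rows greater than it within the same page would be skipped or re-fetched, which is the hazard discussed above.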

 Parallelize fetching of next batch of records for scans corresponding to 
 queries with no order by 
 --

 Key: PHOENIX-1779
 URL: https://issues.apache.org/jira/browse/PHOENIX-1779
 Project: Phoenix
  Issue Type: Improvement
Reporter: Samarth Jain
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1779.patch, PHOENIX-1779_v2.patch, 
 PHOENIX-1779_v3.patch, wip.patch, wip3.patch, wipwithsplits.patch


 Today in Phoenix we parallelize the first execution of scans i.e. we load 
 only the first batch of records up to the scan's cache size in parallel. 
 Loading of subsequent batches of records in scanners is essentially serial. 
 This could be improved especially for queries, including the ones with no 
 order by clauses,  that do not need any kind of merge sort on the client. 
 This could also potentially improve the performance of UPSERT SELECT 
 statements that load data from one table and insert into another. One such 
 use case being creating immutable indexes for tables that already have data. 
 It could also potentially improve the performance of our MapReduce solution 
 for bulk loading data by improving the speed of the loading/mapping phase. 





[jira] [Commented] (PHOENIX-978) Allow views to extend base table's PK (only if last PK column is fixed length)

2015-07-09 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620929#comment-14620929
 ] 

Eli Levine commented on PHOENIX-978:


Thanks for the comments, [~jamestaylor]. [~tdsilva] and I chatted about this. 
What we agreed on is to disallow a parent from extending its PK if it has any 
views. This is easy to explain to users and allows us not to worry about a 
multitude of edge cases we would otherwise have to deal with. Thoughts?

 Allow views to extend base table's PK (only if last PK column is fixed length)
 --

 Key: PHOENIX-978
 URL: https://issues.apache.org/jira/browse/PHOENIX-978
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.5.0

 Attachments: PHOENIX-978.diff


 CREATE VIEW syntax currently disallows PK constraint to be defined.  As a 
 result views and tenant-specific tables created using CREATE VIEW 
 automatically inherit their base table's PK with no way to extend it.
 Base tables should be allowed to be created with a minimum of PK columns to 
 support views, and views to extend PKs as desired.  This would allow a single 
 base table to support a heterogeneous set of views on top of it.
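The fixed-length restriction in the issue summary can be illustrated with a toy rowkey encoder (editor's illustration in Python; this mimics, but is not, Phoenix's actual encoding): variable-length columns get a \x00 terminator only when another column follows, so appending a view's PK column changes how a variable-length last base column is encoded.

```python
SEP = b"\x00"

def encode_pk(cols):
    """Concatenate PK column values into a rowkey. A variable-length column
    is followed by a separator byte only when it is not the last column;
    fixed-width columns never need one. Each col is (bytes_value, is_fixed)."""
    out = b""
    for i, (value, fixed) in enumerate(cols):
        out += value
        if not fixed and i != len(cols) - 1:
            out += SEP  # boundary marker so the next column can be found
    return out
```

With a variable-length last base column, encoding the base key alone and encoding it as the prefix of an extended key disagree (a separator appears only in the latter), so existing base rows would not line up with their view rows; with a fixed-width last column the two encodings agree, which is why the extension is only safe in that case.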





[jira] [Commented] (PHOENIX-978) Allow views to extend base table's PK (only if last PK column is fixed length)

2015-07-08 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619770#comment-14619770
 ] 

Eli Levine commented on PHOENIX-978:


[~tdsilva], can you take a look at the pull request now? I enabled the 
@Ignore'd test.

 Allow views to extend base table's PK (only if last PK column is fixed length)
 --

 Key: PHOENIX-978
 URL: https://issues.apache.org/jira/browse/PHOENIX-978
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.5.0

 Attachments: PHOENIX-978.diff


 CREATE VIEW syntax currently disallows PK constraint to be defined.  As a 
 result views and tenant-specific tables created using CREATE VIEW 
 automatically inherit their base table's PK with no way to extend it.
 Base tables should be allowed to be created with a minimum of PK columns to 
 support views, and views to extend PKs as desired.  This would allow a single 
 base table to support a heterogeneous set of views on top of it.





[jira] [Updated] (PHOENIX-978) Allow views to extend base table's PK (only if last PK column is fixed length)

2015-07-08 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-978:
---
Summary: Allow views to extend base table's PK (only if last PK column is 
fixed length)  (was: Allow views to extend base table's PK)

 Allow views to extend base table's PK (only if last PK column is fixed length)
 --

 Key: PHOENIX-978
 URL: https://issues.apache.org/jira/browse/PHOENIX-978
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.5.0

 Attachments: PHOENIX-978.diff


 CREATE VIEW syntax currently disallows PK constraint to be defined.  As a 
 result views and tenant-specific tables created using CREATE VIEW 
 automatically inherit their base table's PK with no way to extend it.
 Base tables should be allowed to be created with a minimum of PK columns to 
 support views, and views to extend PKs as desired.  This would allow a single 
 base table to support a heterogeneous set of views on top of it.





[jira] [Commented] (PHOENIX-978) Allow views to extend base table's PK

2015-07-08 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619693#comment-14619693
 ] 

Eli Levine commented on PHOENIX-978:


[~tdsilva], many thanks for finding the issue. James and I have chatted offline 
about it and decided to limit the scope of this Jira to allow views to extend 
parent's PK only if its last PK column is fixed length. I am working on this 
now and will submit a new pull request shortly.

 Allow views to extend base table's PK
 -

 Key: PHOENIX-978
 URL: https://issues.apache.org/jira/browse/PHOENIX-978
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.5.0

 Attachments: PHOENIX-978.diff


 CREATE VIEW syntax currently disallows PK constraint to be defined.  As a 
 result views and tenant-specific tables created using CREATE VIEW 
 automatically inherit their base table's PK with no way to extend it.
 Base tables should be allowed to be created with a minimum of PK columns to 
 support views, and views to extend PKs as desired.  This would allow a single 
 base table to support a heterogeneous set of views on top of it.





[jira] [Commented] (PHOENIX-978) Allow views to extend base table's PK (only if last PK column is fixed length)

2015-07-08 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619730#comment-14619730
 ] 

Eli Levine commented on PHOENIX-978:


Working on fixing AlterTableIT now...

 Allow views to extend base table's PK (only if last PK column is fixed length)
 --

 Key: PHOENIX-978
 URL: https://issues.apache.org/jira/browse/PHOENIX-978
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.5.0

 Attachments: PHOENIX-978.diff


 CREATE VIEW syntax currently disallows PK constraint to be defined.  As a 
 result views and tenant-specific tables created using CREATE VIEW 
 automatically inherit their base table's PK with no way to extend it.
 Base tables should be allowed to be created with a minimum of PK columns to 
 support views, and views to extend PKs as desired.  This would allow a single 
 base table to support a heterogeneous set of views on top of it.





[jira] [Commented] (PHOENIX-1673) Allow tenant ID to be of any integral data type

2015-07-06 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615655#comment-14615655
 ] 

Eli Levine commented on PHOENIX-1673:
-

Appreciate the help, [~Jeffrey.Lyons]. I think your approach is a sound one. 
WRT your question about returning no data (for queries) vs. returning an error, 
we should return an error. Failing fast is the right approach here for both 
writing and reading.

 Allow tenant ID to be of any integral data type
 ---

 Key: PHOENIX-1673
 URL: https://issues.apache.org/jira/browse/PHOENIX-1673
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.3.0
Reporter: Mark Tse
  Labels: Newbie, multi-tenant
 Fix For: 5.0.0, 4.4.1


 When creating multi-tenant tables and views, the column that identifies the 
 tenant (first primary key column) must be of type 'VARCHAR' or 'CHAR'.
 It should be possible to relax this restriction to use any integral data 
 type. The tenant ID from the connection property can be converted based on 
 the data type of the first primary key column.
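The conversion described above can be sketched as follows; the `coerce_tenant_id` helper and its type-name table are hypothetical illustrations, not Phoenix's actual API:

```python
def coerce_tenant_id(tenant_id: str, pk_type: str):
    """Convert the TenantId connection property (a string) to the data
    type of the first primary-key column."""
    converters = {"VARCHAR": str, "CHAR": str, "INTEGER": int, "BIGINT": int}
    try:
        return converters[pk_type](tenant_id)
    except (KeyError, ValueError):
        # Fail fast rather than silently returning no data.
        raise ValueError(f"tenant ID {tenant_id!r} is not valid for {pk_type}")
```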





[jira] [Commented] (PHOENIX-978) Allow views to extend base table's PK

2015-06-27 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604386#comment-14604386
 ] 

Eli Levine commented on PHOENIX-978:


[~jamestaylor], mind taking a look at the pull request? At this point I have 
the code and associated tests to allow views to extend parent's PK.

What is missing is the extra check that you mention above: during column 
creation on a parent table, make sure no child views contain clashing columns. 
My take on the definition of such a clash is that we don't want to allow a 
parent to add a column if (1) it's a PK column and a child view has 
extended its PK (meaning the view has a PK column in the same slot as the new 
parent column being added), or (2) it's a non-PK column and a child view 
already has a column with the same name. Anything else?
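Those two rules can be sketched as a small check; the view representation here is a hypothetical simplification, not Phoenix's actual PTable metadata:

```python
def clashes(new_col, is_pk, new_pk_slot, view):
    """Return True if adding new_col to the parent table should be rejected."""
    if is_pk:
        # (1) A new parent PK column clashes if a child view has already
        # extended its PK into the same slot.
        return new_pk_slot < view["pk_slots"]
    # (2) A new non-PK column clashes if the view already has a column
    # with the same name.
    return new_col in view["columns"]

# Hypothetical view: a 2-column parent PK extended by one view PK column,
# plus two regular view columns.
view = {"pk_slots": 3, "columns": {"NAME", "CITY"}}
```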



 Allow views to extend base table's PK
 -

 Key: PHOENIX-978
 URL: https://issues.apache.org/jira/browse/PHOENIX-978
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.5.0


 CREATE VIEW syntax currently disallows PK constraint to be defined.  As a 
 result views and tenant-specific tables created using CREATE VIEW 
 automatically inherit their base table's PK with no way to extend it.
 Base tables should be allowed to be created with a minimum of PK columns to 
 support views, and views to extend PKs as desired.  This would allow a single 
 base table to support a heterogeneous set of views on top of it.





[jira] [Commented] (PHOENIX-978) Allow views to extend base table's PK

2015-06-27 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604414#comment-14604414
 ] 

Eli Levine commented on PHOENIX-978:


Ah, thanks for pointing out PHOENIX-2058. I'll close out this jira without the 
extra check and we'll track that follow-up work in PHOENIX-2058, unless there 
are objections.

Also, I already implemented support for ALTER VIEW ADD pk_column PRIMARY KEY. 
See [this 
test|https://github.com/apache/phoenix/pull/91/files#diff-948389f1e38c4a930643f478633aef95R517]
 in the pull request for example usage. Seems like if we support PKs for CREATE 
VIEW, we should do the same for ALTER VIEW.

 Allow views to extend base table's PK
 -

 Key: PHOENIX-978
 URL: https://issues.apache.org/jira/browse/PHOENIX-978
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.5.0

 Attachments: PHOENIX-978.diff


 CREATE VIEW syntax currently disallows PK constraint to be defined.  As a 
 result views and tenant-specific tables created using CREATE VIEW 
 automatically inherit their base table's PK with no way to extend it.
 Base tables should be allowed to be created with a minimum of PK columns to 
 support views, and views to extend PKs as desired.  This would allow a single 
 base table to support a heterogeneous set of views on top of it.





[jira] [Updated] (PHOENIX-2058) Check for existence and compatibility of columns being added in view

2015-06-27 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-2058:

Description: 
One check I realized we're not doing, but need to do, is ensuring that the 
column being added by the base table doesn't already exist in the view. If the 
column does already exist, ideally we can allow the addition to the base table 
if the type matches and the scale is null or <= existing scale and the 
maxLength is null or <= existing maxLength. Also, if a column is a PK column 
and it already exists in the view, the position in the PK must match. 

The fact that we've materialized a PTable for the view should make the addition 
of this check easier.

  was:
One check I realized we're not doing, but need to do, is ensuring that the 
column being added by the base table doesn't already exist in the view. If the 
column does already exist, ideally we can allow the addition to the base table 
if the type matches and the scale is null or <= existing scale and the 
maxLength is null or <= existing maxLength. Also, if a column is a PK column 
and it already exists in the view, the position in the PK must match. However, 
since we don't allow a view to update its PK, this check isn't yet necessary 
(maybe just a TODO for this).

The fact that we've materialized a PTable for the view should make the addition 
of this check easier.


 Check for existence and compatibility of columns being added in view
 

 Key: PHOENIX-2058
 URL: https://issues.apache.org/jira/browse/PHOENIX-2058
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Samarth Jain

 One check I realized we're not doing, but need to do, is ensuring that the 
 column being added by the base table doesn't already exist in the view. If 
 the column does already exist, ideally we can allow the addition to the base 
 table if the type matches and the scale is null or <= existing scale and the 
 maxLength is null or <= existing maxLength. Also, if a column is a PK column 
 and it already exists in the view, the position in the PK must match. 
 The fact that we've materialized a PTable for the view should make the 
 addition of this check easier.
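The compatibility rule described above can be sketched as follows, using hypothetical column dicts rather than Phoenix's real metadata objects:

```python
def compatible(new_col, existing):
    """True if a column added to the base table is compatible with the
    column of the same name that already exists in a view."""
    if new_col["type"] != existing["type"]:
        return False
    for attr in ("scale", "max_length"):
        new_v, old_v = new_col[attr], existing[attr]
        # The new value must be null or no larger than the existing one.
        if new_v is not None and (old_v is None or new_v > old_v):
            return False
    # For PK columns, the position in the PK must match (None for non-PK).
    return new_col["pk_position"] == existing["pk_position"]

# Hypothetical DECIMAL columns: the view's column is wider than the parent's.
parent_col = {"type": "DECIMAL", "scale": 2, "max_length": 10, "pk_position": None}
view_col = {"type": "DECIMAL", "scale": 4, "max_length": 12, "pk_position": None}
```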





[jira] [Updated] (PHOENIX-978) Allow views to extend base table's PK

2015-06-27 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-978:
---
Attachment: PHOENIX-978.diff

 Allow views to extend base table's PK
 -

 Key: PHOENIX-978
 URL: https://issues.apache.org/jira/browse/PHOENIX-978
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.5.0

 Attachments: PHOENIX-978.diff


 CREATE VIEW syntax currently disallows PK constraint to be defined.  As a 
 result views and tenant-specific tables created using CREATE VIEW 
 automatically inherit their base table's PK with no way to extend it.
 Base tables should be allowed to be created with a minimum of PK columns to 
 support views, and views to extend PKs as desired.  This would allow a single 
 base table to support a heterogeneous set of views on top of it.





[jira] [Assigned] (PHOENIX-978) Allow views to extend base table's PK

2015-06-23 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine reassigned PHOENIX-978:
--

Assignee: Eli Levine

 Allow views to extend base table's PK
 -

 Key: PHOENIX-978
 URL: https://issues.apache.org/jira/browse/PHOENIX-978
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine

 CREATE VIEW syntax currently disallows PK constraint to be defined.  As a 
 result views and tenant-specific tables created using CREATE VIEW 
 automatically inherit their base table's PK with no way to extend it.
 Base tables should be allowed to be created with a minimum of PK columns to 
 support views, and views to extend PKs as desired.  This would allow a single 
 base table to support a heterogeneous set of views on top of it.





[jira] [Updated] (PHOENIX-978) Allow views to extend base table's PK

2015-06-23 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-978:
---
Fix Version/s: 4.5.0
   5.0.0

 Allow views to extend base table's PK
 -

 Key: PHOENIX-978
 URL: https://issues.apache.org/jira/browse/PHOENIX-978
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.5.0


 CREATE VIEW syntax currently disallows PK constraint to be defined.  As a 
 result views and tenant-specific tables created using CREATE VIEW 
 automatically inherit their base table's PK with no way to extend it.
 Base tables should be allowed to be created with a minimum of PK columns to 
 support views, and views to extend PKs as desired.  This would allow a single 
 base table to support a heterogeneous set of views on top of it.





Re: thinking about cutting an RC for 4.5.0 soon

2015-06-23 Thread Eli Levine
I'll pick PHOENIX-978 up. Shooting for getting it in early next week.

Thanks,

Eli

On Mon, Jun 22, 2015 at 10:36 AM, James Taylor jamestay...@apache.org
wrote:

 We'd like to cut the RC and start the vote next week.
 Thanks,
 James

 On Mon, Jun 22, 2015 at 10:29 AM, Eli Levine elilev...@gmail.com wrote:
  I was hoping to get https://issues.apache.org/jira/browse/PHOENIX-978
 into
  4.5, which Jan Fernando (CC'd) is planning to tackle. James, what is your
  time-frame for getting 4.5 out?
 
  Thanks,
 
  Eli
 
 
  On Sun, Jun 21, 2015 at 2:11 PM, James Taylor jamestay...@apache.org
  wrote:
 
  Please let me know if you have any outstanding work you'd like to get
  in. We're thinking of cutting it at the end of the week.
  Thanks,
  James
 



Re: [ANNOUNCE] Josh Mahonin added as Apache Phoenix committer

2015-06-23 Thread Eli Levine
Welcome, Josh!

On Tue, Jun 23, 2015 at 1:03 PM, Thomas D'Silva tdsi...@salesforce.com
wrote:

 Welcome Josh!

 On Tue, Jun 23, 2015 at 10:24 AM, Samarth Jain sama...@apache.org wrote:
  Congratulations and welcome, Josh.
  On Tuesday, June 23, 2015, Gabriel Reid gabriel.r...@gmail.com wrote:
 
  Congrats and welcome Josh!
 
 
  On Tue, Jun 23, 2015 at 7:10 PM Jesse Yates jesse.k.ya...@gmail.com
  javascript:; wrote:
 
   Congrats and welcome!
  
   On Tue, Jun 23, 2015, 10:01 AM Ravi Kiran maghamraviki...@gmail.com
  javascript:;
   wrote:
  
Welcome Josh!!
   
On Tue, Jun 23, 2015 at 11:55 AM, Nick Dimiduk ndimi...@apache.org
  javascript:;
wrote:
   
 Nice work, congratulations Josh, and welcome!

 On Tue, Jun 23, 2015 at 9:40 AM, James Taylor 
  jamestay...@apache.org javascript:;
 wrote:

  On behalf of the Apache Phoenix PMC, I'm pleased to announce
 that
   Josh
  Mahonin has accepted our invitation to become a committer on the
  Apache Phoenix project. He's been the force behind our
  Phoenix-Spark
  integration[1] and has done an excellent job moving it forward.
 
  Great job, Josh. Welcome aboard. Looking forward to many more
  contributions!
 
  Regards,
  James
 
  [1] https://phoenix.apache.org/phoenix_spark.html
 

   
  
 



Re: thinking about cutting an RC for 4.5.0 soon

2015-06-22 Thread Eli Levine
I was hoping to get https://issues.apache.org/jira/browse/PHOENIX-978 into
4.5, which Jan Fernando (CC'd) is planning to tackle. James, what is your
time-frame for getting 4.5 out?

Thanks,

Eli


On Sun, Jun 21, 2015 at 2:11 PM, James Taylor jamestay...@apache.org
wrote:

 Please let me know if you have any outstanding work you'd like to get
 in. We're thinking of cutting it at the end of the week.
 Thanks,
 James



[jira] [Commented] (PHOENIX-1981) PhoenixHBase Load and Store Funcs should handle all Pig data types

2015-06-16 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14588200#comment-14588200
 ] 

Eli Levine commented on PHOENIX-1981:
-

This is now in master. Will do the rest later today. [~rajeshbabu], should this 
go into the 4.4 branches? Seems like a good candidate to me.

 PhoenixHBase Load and Store Funcs should handle all Pig data types
 --

 Key: PHOENIX-1981
 URL: https://issues.apache.org/jira/browse/PHOENIX-1981
 Project: Phoenix
  Issue Type: Improvement
Reporter: Prashant Kommireddi
Assignee: Prashant Kommireddi

 The load and store func (Pig integration) currently do not handle all Pig 
 types. Here is a complete list 
 http://pig.apache.org/docs/r0.13.0/basic.html#data-types
 In addition to handling all simple types (BigInteger and BigDecimal are 
 missing in the LoadFunc currently), we should also look into handling complex 
 Pig types.





[jira] [Created] (PHOENIX-1961) Tenant-specific View TTLs

2015-05-08 Thread Eli Levine (JIRA)
Eli Levine created PHOENIX-1961:
---

 Summary: Tenant-specific View TTLs
 Key: PHOENIX-1961
 URL: https://issues.apache.org/jira/browse/PHOENIX-1961
 Project: Phoenix
  Issue Type: Bug
Reporter: Eli Levine


It would be useful for customers to define TTLs for tenant-specific views  (and 
maybe general-purpose views, as well). One way would be to leverage per-cell 
TTLs (https://issues.apache.org/jira/browse/HBASE-10560). Another possible 
approach is via a custom co-processor.





[jira] [Resolved] (PHOENIX-900) Partial results for mutations

2015-04-23 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine resolved PHOENIX-900.

Resolution: Fixed

 Partial results for mutations
 -

 Key: PHOENIX-900
 URL: https://issues.apache.org/jira/browse/PHOENIX-900
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-900.patch


 HBase provides a way to retrieve partial results of a batch operation: 
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List,%20java.lang.Object[]%29
 Chatted with James about this offline:
 Yes, this could be included in the CommitException we throw 
 (MutationState:412). We already include the batches that have been 
 successfully committed to the HBase server in this exception. Would you be up 
 for adding this additional information? You'd want to surface this in a 
 Phoenix-y way in a method on CommitException, something like this: ResultSet 
 getPartialCommits(). You can easily create an in memory ResultSet using 
 MaterializedResultIterator plus the PhoenixResultSet constructor that accepts 
 this (just create a new empty PhoenixStatement with the PhoenixConnection for 
 the other arg).
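The shape of that API can be sketched like this; `CommitException` matches the class named above, but the batch model and `send` callback are hypothetical simplifications of MutationState's actual commit path:

```python
class CommitException(Exception):
    """Raised on a failed commit; carries the batches that did succeed."""
    def __init__(self, partial_commits):
        super().__init__("mutation batch partially committed")
        self.partial_commits = partial_commits

def commit_batches(batches, send):
    """Send each batch in order; on failure, surface what already committed."""
    committed = []
    for batch in batches:
        try:
            send(batch)
        except Exception:
            # Everything in `committed` made it to the server; the caller can
            # inspect it, analogous to getPartialCommits() on the exception.
            raise CommitException(committed)
        committed.append(batch)
    return committed
```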





[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-04-22 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14507415#comment-14507415
 ] 

Eli Levine commented on PHOENIX-900:


[~samarthjain] Sorry about that. Fixing it now.

 Partial results for mutations
 -

 Key: PHOENIX-900
 URL: https://issues.apache.org/jira/browse/PHOENIX-900
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-900.patch


 HBase provides a way to retrieve partial results of a batch operation: 
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List,%20java.lang.Object[]%29
 Chatted with James about this offline:
 Yes, this could be included in the CommitException we throw 
 (MutationState:412). We already include the batches that have been 
 successfully committed to the HBase server in this exception. Would you be up 
 for adding this additional information? You'd want to surface this in a 
 Phoenix-y way in a method on CommitException, something like this: ResultSet 
 getPartialCommits(). You can easily create an in memory ResultSet using 
 MaterializedResultIterator plus the PhoenixResultSet constructor that accepts 
 this (just create a new empty PhoenixStatement with the PhoenixConnection for 
 the other arg).





[jira] [Commented] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-22 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508047#comment-14508047
 ] 

Eli Levine commented on PHOENIX-1682:
-

Just pushed fixes from [~ivanweiss] to master, 4.3 and 4.x-HBase-0.98.

 PhoenixRuntime.getTable() does not work with case-sensitive table names
 ---

 Key: PHOENIX-1682
 URL: https://issues.apache.org/jira/browse/PHOENIX-1682
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2.0
Reporter: Eli Levine
Assignee: Ivan Weiss
  Labels: Newbie
 Fix For: 5.0.0, 4.4.0, 4.3.2


 PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
 because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
 without breaking up _name_ into table name and schema name components. In 
 cases where a table is case sensitive (created with _schemaName.tableName_) 
 this will result in getTable not finding the table.
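The bug can be illustrated with a simplified model of identifier normalization (these helpers are illustrative; the real SchemaUtil also handles dots inside quoted identifiers and other edge cases):

```python
def normalize_identifier(part):
    """Unquoted identifiers fold to upper case; quoted ones keep their case."""
    if len(part) >= 2 and part[0] == '"' and part[-1] == '"':
        return part[1:-1]
    return part.upper()

# Buggy behavior: normalizing the whole schemaName.tableName as one identifier
# mangles a quoted, case-sensitive name.
buggy = normalize_identifier('"MySchema"."MyTable"')

# Fixed behavior: split into components first, then normalize each one.
def normalize_full_name(name):
    return ".".join(normalize_identifier(p) for p in name.split("."))
```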





[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-04-21 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506141#comment-14506141
 ] 

Eli Levine commented on PHOENIX-900:


It was a bug in my original PR that caused a bunch of test failures. Thanks for 
taking a look! Will commit to master and 4.x-HBase-0.98 now.

 Partial results for mutations
 -

 Key: PHOENIX-900
 URL: https://issues.apache.org/jira/browse/PHOENIX-900
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-900.patch


 HBase provides a way to retrieve partial results of a batch operation: 
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List,%20java.lang.Object[]%29
 Chatted with James about this offline:
 Yes, this could be included in the CommitException we throw 
 (MutationState:412). We already include the batches that have been 
 successfully committed to the HBase server in this exception. Would you be up 
 for adding this additional information? You'd want to surface this in a 
 Phoenix-y way in a method on CommitException, something like this: ResultSet 
 getPartialCommits(). You can easily create an in memory ResultSet using 
 MaterializedResultIterator plus the PhoenixResultSet constructor that accepts 
 this (just create a new empty PhoenixStatement with the PhoenixConnection for 
 the other arg).





[jira] [Commented] (PHOENIX-1834) Collect row counts per tenant view

2015-04-14 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493627#comment-14493627
 ] 

Eli Levine commented on PHOENIX-1834:
-

This is a great first step. +1 on making stats info easily accessible. Most 
relational DBs make such info available from system tables, e.g. [Postgres 
stores this in its *pg_class* 
table|https://wiki.postgresql.org/wiki/Count_estimate].

 Collect row counts per tenant view
 --

 Key: PHOENIX-1834
 URL: https://issues.apache.org/jira/browse/PHOENIX-1834
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Eli Levine

 In multi-tenant environments it would be useful to query stats for row count 
 per tenant view. [~ram_krish], any thoughts on the feasibility of this?
 It would probably be useful to extend this to all views but starting with 
 tenant views is a good start IMHO, since their view definitions are simpler 
 and tenant_id is guaranteed to be the first rowkey component (after salt 
 byte).





[jira] [Commented] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-14 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14494985#comment-14494985
 ] 

Eli Levine commented on PHOENIX-1682:
-

I think that's the fix, yeah.

 PhoenixRuntime.getTable() does not work with case-sensitive table names
 ---

 Key: PHOENIX-1682
 URL: https://issues.apache.org/jira/browse/PHOENIX-1682
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2.0
Reporter: Eli Levine
Assignee: Ivan Weiss
  Labels: Newbie

 PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
 because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
 without breaking up _name_ into table name and schema name components. In 
 cases where a table is case sensitive (created with _schemaName.tableName_) 
 this will result in getTable not finding the table.





Re: PHOENIX-1452 in 4.3.1 ?

2015-03-05 Thread Eli Levine
There was a long thread a few months ago on private@ about the semantics of
different types of branches in Phoenix, and there seemed to be consensus (or
at least no disagreement) that point releases (e.g. 4.3.1 -> 4.3.2) should
only contain bug fixes, with this exception: minor features that are strictly
additive are OK. PHOENIX-1452's code is fairly localized and can be turned
off with a config, so it seems to fall under that exception.

I'm +1 for checking this into 4.3.

As a side note, it's awesome this change is behind a config. We should do
that more where it makes sense IMHO. Thanks for your work, Samarth!

Eli



On Thu, Mar 5, 2015 at 4:23 PM, Samarth Jain samarth.j...@gmail.com wrote:

 Hello Phoenix devs,

 I am planning on checking in PHOENIX-1452 to our 4.3 branch so that it can
 be part of our next 4.3.1 release. The feature doesn't break any backward
 compatibility and doesn't put any restrictions on which side of jar, client
 or server, should be upgraded first.

 PHOENIX-1452 provides us a way of looking into various global phoenix
 client side metrics. It's a step towards providing more visibility into
 what phoenix is doing and how much it is doing. Additionally, one has the
 capability to toggle the metrics collection on/off via config -
 phoenix.query.metrics.enabled

 Please let me know if you have any concerns. If none, I will proceed with
 checking it in to 4.3.

 Thanks,
 Samarth



[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-02-27 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340650#comment-14340650
 ] 

Eli Levine commented on PHOENIX-900:


Just reverted both master and 4.0. Will take a look at failure causes and check 
in again early next week. Pardon the churn.

 Partial results for mutations
 -

 Key: PHOENIX-900
 URL: https://issues.apache.org/jira/browse/PHOENIX-900
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 4.0.0, 5.0.0

 Attachments: PHOENIX-900.patch


 HBase provides a way to retrieve partial results of a batch operation: 
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List,%20java.lang.Object[]%29
 Chatted with James about this offline:
 Yes, this could be included in the CommitException we throw 
 (MutationState:412). We already include the batches that have been 
 successfully committed to the HBase server in this exception. Would you be up 
 for adding this additional information? You'd want to surface this in a 
 Phoenix-y way in a method on CommitException, something like this: ResultSet 
 getPartialCommits(). You can easily create an in memory ResultSet using 
 MaterializedResultIterator plus the PhoenixResultSet constructor that accepts 
 this (just create a new empty PhoenixStatement with the PhoenixConnection for 
 the other arg).





[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-02-27 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340631#comment-14340631
 ] 

Eli Levine commented on PHOENIX-900:


Looking at it now. Will fix or revert today.

 Partial results for mutations
 -

 Key: PHOENIX-900
 URL: https://issues.apache.org/jira/browse/PHOENIX-900
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 4.0.0, 5.0.0

 Attachments: PHOENIX-900.patch


 HBase provides a way to retrieve partial results of a batch operation: 
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List,%20java.lang.Object[]%29
 Chatted with James about this offline:
 Yes, this could be included in the CommitException we throw 
 (MutationState:412). We already include the batches that have been 
 successfully committed to the HBase server in this exception. Would you be up 
 for adding this additional information? You'd want to surface this in a 
 Phoenix-y way in a method on CommitException, something like this: ResultSet 
 getPartialCommits(). You can easily create an in memory ResultSet using 
 MaterializedResultIterator plus the PhoenixResultSet constructor that accepts 
 this (just create a new empty PhoenixStatement with the PhoenixConnection for 
 the other arg).





[jira] [Reopened] (PHOENIX-900) Partial results for mutations

2015-02-27 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine reopened PHOENIX-900:




[jira] [Resolved] (PHOENIX-900) Partial results for mutations

2015-02-26 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine resolved PHOENIX-900.

Resolution: Fixed

Pushed to 4.0 and main. [~jamestaylor], thanks for the review! Do we need this 
in 3.0?



[jira] [Updated] (PHOENIX-900) Partial results for mutations

2015-02-26 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-900:
---
Fix Version/s: 5.0.0
   4.0.0



[jira] [Created] (PHOENIX-1671) Add CommitException.getBindParametersAtIndex(int index)

2015-02-18 Thread Eli Levine (JIRA)
Eli Levine created PHOENIX-1671:
---

 Summary: Add CommitException.getBindParametersAtIndex(int index)
 Key: PHOENIX-1671
 URL: https://issues.apache.org/jira/browse/PHOENIX-1671
 Project: Phoenix
  Issue Type: Bug
Reporter: Eli Levine
Priority: Minor
 Fix For: 4.0.0, 5.0.0


This is an enhancement to PHOENIX-900. See [this 
discussion|https://github.com/apache/phoenix/pull/37#discussion-diff-24786924] 
for details.





[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-02-16 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323324#comment-14323324
 ] 

Eli Levine commented on PHOENIX-900:


Done. This is a good checkpoint.

Because MutationState does not check the partial results returned by HBase from 
HTableInterface.batch(), the partial saves surfaced in 
CommitException.getFailures() are granular only to the HTableInterface level.

Next step is to modify MutationState to process partial results from HBase 
correctly, which I am working on now.



[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-02-16 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323373#comment-14323373
 ] 

Eli Levine commented on PHOENIX-900:


Musings on properly handling partial save in Phoenix:

MutationState sorts mutations by HTable, iterates over HTables, and calls 
batch() on each. MutationState catches the first exception it sees 
(index-related corner case in MutationState.commit() notwithstanding) and 
rethrows it as CommitException, leaving any mutations related to 
not-yet-processed HTables uncommitted. In order to handle partial commits 
correctly it seems Phoenix needs to:
1. Call HTableInterface.batch() and process partial results per HTable.
2. Remember partial results for each HTable.batch() call and keep going even 
after an IOException from HTable.batch().
3. At the end of MutationState.commit() if any exceptions were caught throw 
CommitException with partial save info.

Any thoughts on this, [~giacomotaylor]? This is a separate enough chunk of work 
that it warrants its own Jira IMHO.
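The three steps above can be sketched as follows. All names here (KeepGoingCommit, Table, AggregateCommitException, commitAll) are hypothetical stand-ins for illustration, not Phoenix's actual MutationState or HBase's HTableInterface: batch per table, keep going after a failure, and throw a single exception at the end carrying the per-table partial-save info.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class KeepGoingCommit {
    interface Table {
        // Returns one result object per mutation, in the spirit of
        // HTable.batch(List, Object[]).
        Object[] batch(List<String> mutations) throws Exception;
    }

    static class AggregateCommitException extends Exception {
        final Map<String, Object[]> partialResultsByTable;

        AggregateCommitException(Map<String, Object[]> partial, Exception firstFailure) {
            super("commit failed", firstFailure);
            this.partialResultsByTable = partial;
        }
    }

    static void commitAll(Map<String, Table> tables, Map<String, List<String>> mutations)
            throws AggregateCommitException {
        Map<String, Object[]> partial = new LinkedHashMap<>();
        Exception firstFailure = null;
        for (Map.Entry<String, Table> e : tables.entrySet()) {
            try {
                // Step 1: batch per table, capturing per-mutation results.
                partial.put(e.getKey(), e.getValue().batch(mutations.get(e.getKey())));
            } catch (Exception ex) {
                // Step 2: remember the failure but keep committing other tables.
                if (firstFailure == null) {
                    firstFailure = ex;
                }
            }
        }
        // Step 3: surface partial-save info in one exception at the end.
        if (firstFailure != null) {
            throw new AggregateCommitException(partial, firstFailure);
        }
    }
}
```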



Re: [VOTE] Release of Apache Phoenix 4.3.0 RC0

2015-02-11 Thread Eli Levine
+1

Tested for regressions using a test suite for internal Salesforce use cases
with Phoenix binary artifacts from
https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.3.0-rc0/bin/

On Tue, Feb 10, 2015 at 7:25 PM, James Taylor jamestay...@apache.org
wrote:

 Hello everyone,

 This is a call for a vote on Apache Phoenix 4.3.0 RC0. This is the
 next minor release of Phoenix 4, compatible with the 0.98 branch of
 Apache HBase. The release includes both a source-only release and a
 convenience binary release.

 Highlights of the release include:
 - functional indexing [1]
 - many-to-many and cross join support [2]
 - map-reduce over Phoenix tables [3]
 - query hinting to force index usage [4]
 - setting of HBase properties through ALTER TABLE
 - ISO-8601 date format support on input
 - RAND built-in and DATE/TIME literals
 - query timeout support in JDBC Statement
 - over 90 bug fixes

 For a complete list of changes, see:
 https://raw.githubusercontent.com/apache/phoenix/4.0/CHANGES

 The source tarball, including signatures, digests, etc can be found at:
 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.3.0-rc0/src/

 The binary artifacts can be found at:
 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.3.0-rc0/bin/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/mujtaba.asc

 KEYS file available here:
 https://dist.apache.org/repos/dist/release/phoenix/KEYS

 The hash and tag to be voted upon:

 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=cce5b92853f9c83cf2ba965150adf8f1b6616d80

 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.3.0-rc0

 Vote will be open for at least 72 hours. Please vote:

 [ ] +1 approve
 [ ] +0 no opinion
 [ ] -1 disapprove (and reason why)

 Thanks,
 The Apache Phoenix Team

 [1] http://phoenix.apache.org/secondary_indexing.html#Functional_Indexes
 [2] http://phoenix.apache.org/joins.html
 [3] http://phoenix.apache.org/phoenix_mr.html
 [4] http://phoenix.apache.org/secondary_indexing.html#Examples



Simulating HBase write failures in Phoenix tests

2015-02-09 Thread Eli Levine
Greetings Phoenix devs,

I'm working on https://issues.apache.org/jira/browse/PHOENIX-900 (Partial
results for mutations). In order to test this functionality properly, I
need to write one or more tests that simulate write failures in HBase.

I think this will involve having a test deploy a custom test-only
coprocessor that will cause some predefined writes to fail, which the test
will verify. Does that sound like the right approach? Any examples of
similar tests in Phoenix or anywhere else in HBase-land?

Thanks,

Eli


Re: Simulating HBase write failures in Phoenix tests

2015-02-09 Thread Eli Levine
Thanks, Jesse. Very useful. Any pointers to specific tests that spin up
Coprocessors dynamically in Phoenix?

On Mon, Feb 9, 2015 at 11:51 AM, Jesse Yates jesse.k.ya...@gmail.com
wrote:

 Yeah, I've done that a handful of times in HBase-land (not sure where
 though). It gets tricky with phoenix using all the BaseTest stuff because
 it does a lot of setup things that could conflict with what you are trying
 to do.*

 What I was frequently doing was using a static latch for turning on/off
 errors since there are a lot of reads/writes that happen on startup that
 you don't want to interfere with. Then you trip the latch when the test
 starts (avoiding any errors setting up .META. or -ROOT-) and you are good
 to go.

 However, in HBase-land we already run mini-cluster things in separate JVMs,
 so the static use is just easier; in Phoenix this may not be as feasible.
 The
 alternative is to get the coprocessors from the coprocessor environment of
 the regionserver in the test and pull out the latch from there.

 -J

 * This has been an issue when working on an internal project using Phoenix - 
 we wanted to use a bunch of the BaseTest methods, but not all of them, and 
 extend them a little more - and it was notably uncomfortable to mess with; 
 we just ended up copying out what we needed. Something to look at in the 
 future.
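The static-latch pattern Jesse describes can be sketched as follows. WriteFailureInjector and preWrite are hypothetical names standing in for a test-only coprocessor hook, not the real HBase coprocessor API: the write path consults a static flag, so setup-time writes (META, system tables) succeed, and the test flips the flag on only once it wants failures injected.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class WriteFailureInjector {
    // The "latch": off during cluster/table setup, flipped on by the test.
    static final AtomicBoolean FAIL_WRITES = new AtomicBoolean(false);

    // Stand-in for the coprocessor's pre-write hook.
    static void preWrite(String row) {
        if (FAIL_WRITES.get()) {
            throw new RuntimeException("injected write failure for " + row);
        }
    }
}
```

A test would leave FAIL_WRITES false while the mini-cluster starts, set it to true at the beginning of the test body, and reset it to false in teardown so later tests are unaffected.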



[jira] [Updated] (PHOENIX-900) Partial results for mutations

2015-02-05 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-900:
---
Attachment: PHOENIX-900.patch

Initial attempt. No tests yet. Need to figure out how to simulate partial 
failures.



[jira] [Created] (PHOENIX-1640) Remove PhoenixConnection.executeStatements()

2015-02-04 Thread Eli Levine (JIRA)
Eli Levine created PHOENIX-1640:
---

 Summary: Remove PhoenixConnection.executeStatements()
 Key: PHOENIX-1640
 URL: https://issues.apache.org/jira/browse/PHOENIX-1640
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
 Environment: PhoenixConnection.executeStatements() only seems to be 
used by integration tests in Phoenix and is not part of the java.sql.Connection 
interface. Should be moved into test-specific code.
Reporter: Eli Levine
Priority: Minor
 Fix For: 4.0.0, 5.0.0








[jira] [Commented] (PHOENIX-1640) Remove PhoenixConnection.executeStatements()

2015-02-04 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306373#comment-14306373
 ] 

Eli Levine commented on PHOENIX-1640:
-

Ah cool, thanks. What about moving that code to PhoenixRuntime? Seems like 
PhoenixConnection should be for core JDBC functionality.



[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-02-03 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303946#comment-14303946
 ] 

Eli Levine commented on PHOENIX-900:


Some notes:
- In MutationState need to use HTable.batch(List, Object[]) to be able to 
obtain partial results per HTable. Currently we use deprecated 
HTable.batch(List), meaning a partial failure within an HTable is treated as a 
full failure.
- It would be useful to expose partial results in CommitException as a list of 
result objects (similar to how HTable.batch() does it). A position in the 
results object would correspond to the order of the UPSERT/DELETE statements 
issued on a Connection. CommitException has getUncommittedState() and 
getCommittedState(), which is a good start. However, they return MutationState, 
a low-level construct that does not seem to preserve the order of Mutations. Need 
to carry per-Connection mutation order somehow...
- [~jamestaylor], any idea how a partial failure might be simulated in a unit 
test or IT?
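One way to carry per-Connection statement order can be sketched with hypothetical types (TaggedResult and inStatementOrder are illustrative names, not Phoenix API): tag each mutation with the index of the UPSERT/DELETE statement that produced it, so per-mutation results can be re-sorted into issue order even after mutations have been regrouped by table for batching.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class OrderedMutations {
    static class TaggedResult {
        final int stmtIndex;   // position of the originating statement on the Connection
        final boolean success; // outcome reported by the (simulated) batch call

        TaggedResult(int stmtIndex, boolean success) {
            this.stmtIndex = stmtIndex;
            this.success = success;
        }
    }

    // Restore per-statement order after table-grouped execution.
    static List<TaggedResult> inStatementOrder(List<TaggedResult> grouped) {
        List<TaggedResult> ordered = new ArrayList<>(grouped);
        ordered.sort(Comparator.comparingInt(r -> r.stmtIndex));
        return ordered;
    }
}
```

The design point is that the statement index has to be attached when the mutation is created, because the table-grouped batches discard the original interleaving.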



[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-01-30 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14299472#comment-14299472
 ] 

Eli Levine commented on PHOENIX-900:


FYI, shooting for starting this next week.



Re: cut an RC for 4.3 release next week?

2015-01-06 Thread Eli Levine
I'd like to start working on PHOENIX-900 (Partial results for mutations)
and PHOENIX-978 (Allow views to extend base table's PK) soon. These are
likely 2-3 weeks out. Any chance we delay 4.3 until PHOENIX-900 and
PHOENIX-978 are in? Meanwhile we can cut 4.2.3 with the bug fixes.

Thanks,

Eli



On Tue, Jan 6, 2015 at 6:27 PM, James Taylor jamestay...@apache.org wrote:

 Happy New Years, everyone!

 I'd like to propose we cut a 4.3 RC next week as there's a lot of
 important bug fixes and enhancements in there. There are a couple of
 in-flight JIRAs that I'm hoping can make it in too, but please let me
 know one way or the other:

 PHOENIX-1409 Allow ALTER TABLE to update HBase properties
 PHOENIX-1453 Collect row counts per region in stats table
 PHOENIX-1516 Add RANDOM built-in function

 Thoughts? Am I missing any in-progress work?

 Thanks,
 James



[jira] [Commented] (PHOENIX-1496) Further reduce work in StatsCollector

2014-12-19 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14253781#comment-14253781
 ] 

Eli Levine commented on PHOENIX-1496:
-

Sounds like this is a bug fix so +1 for 4.2 backport. Thanks for checking. I 
also looked at the code and it looks innocuous enough (famous last words).

 Further reduce work in StatsCollector
 -

 Key: PHOENIX-1496
 URL: https://issues.apache.org/jira/browse/PHOENIX-1496
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 5.0.0, 4.3, 3.3

 Attachments: 1496-v2.txt, 1496.txt








[jira] [Commented] (PHOENIX-1469) Binary columns do not work correctly for indexing

2014-12-19 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14254397#comment-14254397
 ] 

Eli Levine commented on PHOENIX-1469:
-

+1 on getting this into 4.2. Thanks for checking, [~jamestaylor].

 Binary columns do not work correctly for indexing
 -

 Key: PHOENIX-1469
 URL: https://issues.apache.org/jira/browse/PHOENIX-1469
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
Reporter: Jesse Collins
Assignee: Dave Hacker
Priority: Minor
 Attachments: 33.patch


 I recently added a secondary index to the readonlydb.read_only_auth_session 
 table and some queries started to fail at runtime with the error below. My 
 index (as checked in) has the parent_session_id column in an INCLUDE clause, 
 but I found that even if I rebuild the index without that column I still get 
 the error.
 java.lang.Throwable: ( username: ad...@701048957670893.com )
 java.lang.IllegalArgumentException: Unsupported non nullable index type BINARY
 at 
 org.apache.phoenix.util.IndexUtil.getIndexColumnDataType(IndexUtil.java:104)
 at 
 org.apache.phoenix.util.IndexUtil.getIndexColumnDataType(IndexUtil.java:80)
 at 
 org.apache.phoenix.compile.IndexStatementRewriter.visit(IndexStatementRewriter.java:99)
 at 
 org.apache.phoenix.compile.IndexStatementRewriter.visit(IndexStatementRewriter.java:41)
 at 
 org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:50)
 at 
 org.apache.phoenix.parse.ParseNodeRewriter.rewrite(ParseNodeRewriter.java:96)
 at 
 org.apache.phoenix.compile.IndexStatementRewriter.translate(IndexStatementRewriter.java:74)
 at 
 org.apache.phoenix.compile.IndexStatementRewriter.translate(IndexStatementRewriter.java:61)
 at 
 org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:127)
 at 
 org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:81)
 at 
 org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:67)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:222)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:217)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:216)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:183)
 at 
 phoenix.connection.ProtectedPhoenixPreparedStatement.executeQuery(ProtectedPhoenixPreparedStatement.java:61)
...
 Table definition:
 CREATE TABLE IF NOT EXISTS TEST.AUTH_SESSION(
 RAW_SESSION_ID BINARY(64) NOT NULL,
 USERS_ID VARCHAR,
 CREATED_DATE TIME,
 LAST_MODIFIED_DATE TIME,
 NUM_SECONDS_VALID INTEGER,
 USER_TYPE VARCHAR,
 PARENT_SESSION_ID BINARY(64),
 SESSION_TYPE VARCHAR,
 PARENT_SESSION_ID_HEX VARCHAR
 CONSTRAINT PK PRIMARY KEY (
 RAW_SESSION_ID
 )
 )
 Index definition:
 CREATE INDEX IF NOT EXISTS IE4AUTH_SESSION_PARENT 
 ON TEST.AUTH_SESSION (PARENT_SESSION_ID_HEX)
 INCLUDE (SESSION_TYPE)
 SQL:
 select RAW_SESSION_ID,
 CREATED_DATE,
 LAST_MODIFIED_DATE,
 NUM_SECONDS_VALID,
 PARENT_SESSION_ID,
 SESSION_TYPE,
 PARENT_SESSION_ID_HEX
 from TEST.AUTH_SESSION
 where USERS_ID = ?
 and RAW_SESSION_ID != ?





[jira] [Commented] (PHOENIX-1469) Binary columns do not work correctly for indexing

2014-12-19 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14254401#comment-14254401
 ] 

Eli Levine commented on PHOENIX-1469:
-

Thanks for the fix, [~dhacker1341]. 



[jira] [Updated] (PHOENIX-1469) Binary columns do not work correctly for indexing

2014-12-19 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-1469:

Fix Version/s: 4.2.3
   4.3
   5.0.0

 Binary columns do not work correctly for indexing
 -

 Key: PHOENIX-1469
 URL: https://issues.apache.org/jira/browse/PHOENIX-1469
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
Reporter: Jesse Collins
Assignee: Dave Hacker
Priority: Minor
 Fix For: 5.0.0, 4.3, 4.2.3

 Attachments: 33.patch


 I recently added a secondary index to the readonlydb.read_only_auth_session 
 table and some queries started to fail at runtime with the error below. My 
 index (as checked in) has the parent_session_id column in an INCLUDE clause, 
 but I found that even if I rebuild the index without that column I still get 
 the error.
 java.lang.Throwable: ( username: ad...@701048957670893.com )
 java.lang.IllegalArgumentException: Unsupported non nullable index type BINARY
 at 
 org.apache.phoenix.util.IndexUtil.getIndexColumnDataType(IndexUtil.java:104)
 at 
 org.apache.phoenix.util.IndexUtil.getIndexColumnDataType(IndexUtil.java:80)
 at 
 org.apache.phoenix.compile.IndexStatementRewriter.visit(IndexStatementRewriter.java:99)
 at 
 org.apache.phoenix.compile.IndexStatementRewriter.visit(IndexStatementRewriter.java:41)
 at 
 org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:50)
 at 
 org.apache.phoenix.parse.ParseNodeRewriter.rewrite(ParseNodeRewriter.java:96)
 at 
 org.apache.phoenix.compile.IndexStatementRewriter.translate(IndexStatementRewriter.java:74)
 at 
 org.apache.phoenix.compile.IndexStatementRewriter.translate(IndexStatementRewriter.java:61)
 at 
 org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:127)
 at 
 org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:81)
 at 
 org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:67)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:222)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:217)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:216)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:183)
 at 
 phoenix.connection.ProtectedPhoenixPreparedStatement.executeQuery(ProtectedPhoenixPreparedStatement.java:61)
...
 Table definition:
 CREATE TABLE IF NOT EXISTS TEST.AUTH_SESSION(
 RAW_SESSION_ID BINARY(64) NOT NULL,
 USERS_ID VARCHAR,
 CREATED_DATE TIME,
 LAST_MODIFIED_DATE TIME,
 NUM_SECONDS_VALID INTEGER,
 USER_TYPE VARCHAR,
 PARENT_SESSION_ID BINARY(64),
 SESSION_TYPE VARCHAR,
 PARENT_SESSION_ID_HEX VARCHAR,
 CONSTRAINT PK PRIMARY KEY (
 RAW_SESSION_ID
 )
 )
 Index definition:
 CREATE INDEX IF NOT EXISTS IE4AUTH_SESSION_PARENT 
 ON TEST.AUTH_SESSION (PARENT_SESSION_ID_HEX)
 INCLUDE (SESSION_TYPE)
 SQL:
 select RAW_SESSION_ID,
 CREATED_DATE,
 LAST_MODIFIED_DATE,
 NUM_SECONDS_VALID,
 PARENT_SESSION_ID,
 SESSION_TYPE,
 PARENT_SESSION_ID_HEX
 from TEST.AUTH_SESSION
 where USERS_ID = ?
 and RAW_SESSION_ID != ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1542) Remove language about upgrading from 2.2 from /upgrading.html

2014-12-18 Thread Eli Levine (JIRA)
Eli Levine created PHOENIX-1542:
---

 Summary: Remove language about upgrading from 2.2 from 
/upgrading.html
 Key: PHOENIX-1542
 URL: https://issues.apache.org/jira/browse/PHOENIX-1542
 Project: Phoenix
  Issue Type: Task
Reporter: Eli Levine
Priority: Minor


Since Phoenix 2.x is no longer being developed, it is time to remove the section 
about upgrading from 2.x to 3.0 from /upgrading.html. The section features 
prominently there and takes away from arguably more important information on 
the page related to actively supported versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1514) Break up PDataType Enum

2014-12-18 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14252026#comment-14252026
 ] 

Eli Levine commented on PHOENIX-1514:
-

Sounds good, [~ndimiduk]. +1 for this to go into 4.0 from me, as well. Glad we 
are having the branch destination discussion. Let's do more of that in the 
future. :o)

 Break up PDataType Enum
 ---

 Key: PHOENIX-1514
 URL: https://issues.apache.org/jira/browse/PHOENIX-1514
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 5.0.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 5.0.0

 Attachments: PHOENIX-1514.00.patch, PHOENIX-1514.01.patch, 
 PHOENIX-1514.02.patch, PHOENIX-1514.04.patch, PHOENIX-1514.05.patch, 
 hung-phoenix-verify.txt, stack.txt


 A first step in adopting (a portion of) HBase's type encodings is to break up 
 the PDataType enum into an interface. This will pave the way for more 
 flexibility going forward.
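The enum-to-interface refactor described above can be sketched as follows. This is a hedged illustration under assumed names (PDataTypeSketch, PVarcharLike), not Phoenix's actual class layout: each former enum constant becomes a singleton class implementing a shared interface, which leaves room for new type encodings later.

```java
// Hedged sketch of "breaking up an enum into an interface". Names are
// illustrative only, not Phoenix's real PDataType hierarchy.
public class PDataTypeSketch {
    // The shared contract that all type singletons implement.
    interface PDataType<T> {
        String getSqlTypeName();
        byte[] toBytes(T value);
    }

    // One former enum constant, now a private-constructor singleton class.
    static final class PVarcharLike implements PDataType<String> {
        static final PVarcharLike INSTANCE = new PVarcharLike();
        private PVarcharLike() {}
        public String getSqlTypeName() { return "VARCHAR"; }
        public byte[] toBytes(String value) {
            return value.getBytes(java.nio.charset.StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) {
        System.out.println(PVarcharLike.INSTANCE.getSqlTypeName()); // prints VARCHAR
    }
}
```

Unlike an enum, new implementations (e.g. HBase-style encodings) can be added without touching the existing constants.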



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1514) Break up PDataType Enum

2014-12-17 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14250457#comment-14250457
 ] 

Eli Levine commented on PHOENIX-1514:
-

Thanks for the work, Nick! IMHO we might be better off keeping this in the main 
and 5.0 branches. Looking at the pull req a bit... this change is quite 
pervasive, and putting it into 4.0 means that version 4.3, the next minor 
Phoenix 4 release, will get it. This is feeling like a major-version feature. 
Keeping it in 5.0 and main might make more sense, and maybe only backport it 
if/when people express a need for this in 4.x.

 Break up PDataType Enum
 ---

 Key: PHOENIX-1514
 URL: https://issues.apache.org/jira/browse/PHOENIX-1514
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 5.0.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 5.0.0

 Attachments: PHOENIX-1514.00.patch, PHOENIX-1514.01.patch, 
 PHOENIX-1514.02.patch, PHOENIX-1514.04.patch, hung-phoenix-verify.txt, 
 stack.txt


 A first step in adopting (a portion of) HBase's type encodings is to break up 
 the PDataType enum into an interface. This will pave the way for more 
 flexibility going forward.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1514) Break up PDataType Enum

2014-12-17 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14250457#comment-14250457
 ] 

Eli Levine edited comment on PHOENIX-1514 at 12/17/14 8:21 PM:
---

Thanks for the work, Nick! IMHO we might be better off keeping this in the main 
and 5.0 branches. Looking at the pull req a bit... this change is quite 
pervasive, and putting it into 4.0 means that version 4.3, the next minor 
Phoenix 4 release, will get it. This is feeling like a major-version feature. 
Keeping it in 5.0 and main might make more sense, and maybe only backport it 
if/when people express a need for this in 4.x?


was (Author: elilevine):
Thanks for the work, Nick! IMHO we might be better off keeping this in the main 
and 5.0 branches. Looking at the pull req a bit... this change is quite 
pervasive, and putting it into 4.0 means that version 4.3, the next minor 
Phoenix 4 release, will get it. This is feeling like a major-version feature. 
Keeping it in 5.0 and main might make more sense, and maybe only backport it 
if/when people express a need for this in 4.x.

 Break up PDataType Enum
 ---

 Key: PHOENIX-1514
 URL: https://issues.apache.org/jira/browse/PHOENIX-1514
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 5.0.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 5.0.0

 Attachments: PHOENIX-1514.00.patch, PHOENIX-1514.01.patch, 
 PHOENIX-1514.02.patch, PHOENIX-1514.04.patch, hung-phoenix-verify.txt, 
 stack.txt


 A first step in adopting (a portion of) HBase's type encodings is to break up 
 the PDataType enum into an interface. This will pave the way for more 
 flexibility going forward.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1514) Break up PDataType Enum

2014-12-17 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14251189#comment-14251189
 ] 

Eli Levine commented on PHOENIX-1514:
-

bq. Or by main and 5.0 branch you just mean master in git? Thanks.
Sorry, [~ndimiduk]. Yes, just the master branch.

The reason I started this conversation is that I feel we (developers of 
Phoenix) could be clearer about the semantics of what major and minor versions 
mean in Phoenix. Major features would only go into major versions; minor 
features into minor versions. This Jira falls somewhere between the two (maybe 
closer to major?): there are a lot of small changes but probably no significant 
change in terms of how Phoenix works. It's really hard to have a discussion 
about such things because we don't have clarity about what kinds of features 
should go into what kinds of versions.

I'll kick off a conversation at private@ about this as the first step.

 Break up PDataType Enum
 ---

 Key: PHOENIX-1514
 URL: https://issues.apache.org/jira/browse/PHOENIX-1514
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 5.0.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 5.0.0

 Attachments: PHOENIX-1514.00.patch, PHOENIX-1514.01.patch, 
 PHOENIX-1514.02.patch, PHOENIX-1514.04.patch, PHOENIX-1514.05.patch, 
 hung-phoenix-verify.txt, stack.txt


 A first step in adopting (a portion of) HBase's type encodings is to break up 
 the PDataType enum into an interface. This will pave the way for more 
 flexibility going forward.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1520) Provide a means of tracking progress of secondary index population

2014-12-15 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14247259#comment-14247259
 ] 

Eli Levine commented on PHOENIX-1520:
-

Another question that needs working through is how to restart a failed index 
build. Is a full index rebuild required, or can we find a way to be more 
incremental and restart closer to where the previous index build left off?

 Provide a means of tracking progress of secondary index population
 --

 Key: PHOENIX-1520
 URL: https://issues.apache.org/jira/browse/PHOENIX-1520
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Dave Hacker

 When an index is created against a table that already has a substantial 
 amount of data, the initial population of the index can take a long time. We 
 should provide a means of monitoring the percentage complete of the task.
 It's possible that this could be done in a way that is general enough to 
 apply to any Phoenix query. The secondary index population is done through an 
 UPSERT SELECT statement that selects from the data table and upserts into the 
 index table. We have table stats up front that tell us how many guidepost 
 chunks will be iterated over. We could monitor the thread pool based on the 
 tasks queued in the pool by ParallelIterators to get an idea of the total 
 number of remaining tasks.
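 The monitoring idea above (guidepost-chunk count known up front from table 
 stats, completed tasks counted as they drain from the pool) reduces to simple 
 arithmetic. A minimal sketch with illustrative names, not a Phoenix API:

```java
// Hedged sketch: percent complete for an index build when the total number
// of guidepost chunks is known up front. Names are illustrative only.
public class IndexBuildProgress {
    static double percentComplete(int totalChunks, int completedChunks) {
        if (totalChunks <= 0) return 100.0;  // nothing to scan: already done
        return 100.0 * completedChunks / totalChunks;
    }

    public static void main(String[] args) {
        // 400 guidepost chunks total, 100 parallel-scan tasks finished so far.
        System.out.println(percentComplete(400, 100)); // prints 25.0
    }
}
```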



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [ANNOUNCE] Samarth Jain added as Apache Phoenix committer

2014-12-06 Thread Eli Levine
Well deserved. Congrats and welcome, Samarth!

On Sat, Dec 6, 2014 at 12:10 PM, James Taylor jamestay...@apache.org
wrote:

 On behalf of the Apache Phoenix PMC, I'm pleased to announce that
 Samarth Jain has accepted our invitation to become a committer on the
 Apache Phoenix project. He's been a steady contributor over the past
 year and we're looking forward to many more future contributions.

 Great job, Samarth!

 Regards,
 James



Re: [VOTE] Release of Apache Phoenix 4.2.2 RC0

2014-12-05 Thread Eli Levine
+1 approve. Ran though internal Salesforce integration tests.

On Fri, Dec 5, 2014 at 10:19 AM, James Taylor jamestay...@apache.org
wrote:

 Hello everyone,

 This is a call for a vote on Apache Phoenix 4.2.2 RC0. This is a bug
 fix/patch release of Phoenix 4.2, compatible with the 0.98 branch of
 Apache HBase with feature parity against the Phoenix 3.2 release. The
 release includes both a source-only release and a convenience binary
 release.

 For a complete list of changes, see:
 https://raw.githubusercontent.com/apache/phoenix/4.2/CHANGES

 The source tarball, including signatures, digests, etc can be found at:
 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.2.2-rc0/src/

 The binary artifacts can be found at:
 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.2.2-rc0/bin/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/mujtaba.asc

 KEYS file available here:
 https://dist.apache.org/repos/dist/release/phoenix/KEYS

 The hash and tag to be voted upon:

 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=5c6fc2f02d01805255fff335abb675ece07d07d0

 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.2.2-rc0

 Vote will be open for at least 72 hours. Please vote:

 [ ] +1 approve
 [ ] +0 no opinion
 [ ] -1 disapprove (and reason why)

 Thanks,
 The Apache Phoenix Team



Re: cut 3.2.2/4.2.2 patch release this week

2014-12-02 Thread Eli Levine
PHOENIX-1498 shouldn't go into 4.2 IMHO because (1) it has dependencies on
other, still unresolved, jiras and (2) a workaround exists via the ALTER
HBase shell command.

Thanks,

Eli


On Tue, Dec 2, 2014 at 1:11 PM, lars hofhansl la...@apache.org wrote:

 Should we have PHOENIX-1498? Or is that too big of a change?
 As for release notes... We do not have them in HBase per se. From Jira we
 generate the list of changes, add those into CHANGES.txt, and mention
 anything noteworthy in the release announcement email. We do have a Release
 Note field in Jira, but we don't do much with it, yet.

 Andy, anything to add?

 -- Lars

   From: James Taylor jamestay...@apache.org
  To: dev@phoenix.apache.org dev@phoenix.apache.org; lars hofhansl 
 la...@apache.org; Andrew Purtell apurt...@apache.org; michael stack 
 st...@duboce.net
  Sent: Tuesday, December 2, 2014 12:06 PM
  Subject: Re: cut 3.2.2/4.2.2 patch release this week

 Ok, thanks in advance, Gabriel.

 Lars, Andrew, Stack - how do you handle release notes in HBase?

 Thanks,
 James



 On Mon, Dec 1, 2014 at 11:39 PM, Gabriel Reid gabriel.r...@gmail.com
 wrote:
  Sounds like a good plan.
 
  I'd like to get PHOENIX-1485 in there if possible -- I was planning on
  getting that taken care of this evening (Tuesday), so it shouldn't
  prevent any issues in terms of cutting a release on Wednesday.
 
  There's also the topic of release notes that I brought up on the dev
  list earlier this week. I can start a release_notes.txt file in the
  source root for tracking known issues with a release, etc, but I was
  thinking it would be good to be able to pull this directly from Jira
  (some other projects have a release note field), and I was wondering
  if anyone had any suggestions on this.
 
  - Gabriel
 
 
  On Mon, Dec 1, 2014 at 7:29 PM, James Taylor jamestay...@apache.org
 wrote:
  I'd like to propose that we start the vote for a 3.2.2 and 4.2.2 patch
  release as we have some important fixes committed already. I was
  thinking we could start the vote this Wednesday. Please let me know if
  there's any pending work you'd like to get in before then.
 
  Thanks,
  James






[jira] [Commented] (PHOENIX-1496) Further reduce work in StatsCollector

2014-12-01 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14231043#comment-14231043
 ] 

Eli Levine commented on PHOENIX-1496:
-

Feels like a non-essential enhancement to me. Let's not put this into 4.2 and 
3.2, IMHO. Thanks for checking. 

 Further reduce work in StatsCollector
 -

 Key: PHOENIX-1496
 URL: https://issues.apache.org/jira/browse/PHOENIX-1496
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 5.0.0, 4.3, 3.3

 Attachments: 1496.txt






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Next Phoenix release

2014-11-25 Thread Eli Levine
Hi Flavio,

The next bug fix release will be 4.2.2. No set date yet but likely within a
week or so. WRT the Maven issues, is there a specific Jira you are
interested in? Transaction support is on the roadmap but is further out.

Eli


On Tue, Nov 25, 2014 at 12:57 AM, Flavio Pompermaier pomperma...@okkam.it
wrote:

 Hi guys,

 I'm curious to know when there will be a new release of Phoenix (I hope it
 will improve a lot maven dependencies management..). Are you going to add
  transaction support sooner or later..?

 Best,
 Flavio



[jira] [Commented] (PHOENIX-1467) Upgrade to 4.12 Junit and update tests by removing @Category annotation

2014-11-25 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14224968#comment-14224968
 ] 

Eli Levine commented on PHOENIX-1467:
-

+1 on committing test- and POM-only changes to the 4.2 branch. Thanks for 
checking, [~jamestaylor].

 Upgrade to 4.12 Junit and update tests by removing @Category annotation
 ---

 Key: PHOENIX-1467
 URL: https://issues.apache.org/jira/browse/PHOENIX-1467
 Project: Phoenix
  Issue Type: Improvement
Reporter: Samarth Jain
Assignee: Samarth Jain
 Attachments: PHOENIX-1467.patch


 The 4.12 JUnit release makes the @Category annotation inheritable. This means 
 we no longer need to annotate each of our test classes with category annotations 
 like @Category(NeedsOwnMiniClusterTest.class). 
 Test classes that inherit from one of these base test classes - 
 BaseOwnClusterIT, BaseClientManagedTimeIT and BaseHBaseManagedTimeIT will get 
 automatically categorized into @Category(NeedsOwnMiniClusterTest.class), 
 @Category(ClientManagedTimeTest.class) and 
 @Category(HBaseManagedTimeTest.class) respectively. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: why syntax Error LPAREN?

2014-11-19 Thread Eli Levine
Try without the outermost parenthesis like this: select
Records.Operation, Records.status, Records.timestamp from
History where Records.timestamp=(SELECT MAX(Records.timestamp)
FROM History where rowId like 'xyz');

Eli

On Wed, Nov 19, 2014 at 1:29 AM, Ahmed Hussien aahussi...@gmail.com wrote:

 for the following query:

 select Records.Operation, Records.status, Records.timestamp
 from History where (Records.timestamp=(SELECT
 MAX(Records.timestamp) FROM History where rowId like 'xyz'));



 I got this Error:


 org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00):
 Syntax error. Missing LPAREN at line 1, column 95.
 at org.apache.phoenix.exception.PhoenixParserException.newException(
 PhoenixParserException.java:33)
 at org.apache.phoenix.parse.SQLParser.parseStatement(
 SQLParser.java:111)
 at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.
 parseStatement(PhoenixStatement.java:775)
 at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(
 PhoenixStatement.java:856)
 at org.apache.phoenix.jdbc.PhoenixPreparedStatement.init(
 PhoenixPreparedStatement.java:91)
 at org.apache.phoenix.jdbc.PhoenixConnection.prepareStatement(
 PhoenixConnection.java:506)
 at uaCore.DBQuerys.chkClDel(DBQuerys.java:143)
 at uaCore.DBQuerys.ScUpsert(DBQuerys.java:56)
 at uaCore.ReadInsertDelete.insDel(ReadInsertDelete.java:41)
 at uaCore.operate.main(operate.java:6)
 Caused by: MissingTokenException(inserted [@-1,0:0='missing
 LPAREN',77,1:94] at Records)
 at org.apache.phoenix.parse.PhoenixSQLParser.
 recoverFromMismatchedToken(PhoenixSQLParser.java:299)
 at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
 at org.apache.phoenix.parse.PhoenixSQLParser.not_
 expression(PhoenixSQLParser.java:5509)
 at org.apache.phoenix.parse.PhoenixSQLParser.and_
 expression(PhoenixSQLParser.java:5329)
 at org.apache.phoenix.parse.PhoenixSQLParser.or_
 expression(PhoenixSQLParser.java:5266)
 at org.apache.phoenix.parse.PhoenixSQLParser.expression(
 PhoenixSQLParser.java:5231)
 at org.apache.phoenix.parse.PhoenixSQLParser.not_
 expression(PhoenixSQLParser.java:5511)
 at org.apache.phoenix.parse.PhoenixSQLParser.and_
 expression(PhoenixSQLParser.java:5329)
 at org.apache.phoenix.parse.PhoenixSQLParser.or_
 expression(PhoenixSQLParser.java:5266)
 at org.apache.phoenix.parse.PhoenixSQLParser.expression(
 PhoenixSQLParser.java:5231)
 at org.apache.phoenix.parse.PhoenixSQLParser.select_node(
 PhoenixSQLParser.java:3543)
 at org.apache.phoenix.parse.PhoenixSQLParser.hinted_
 select_node(PhoenixSQLParser.java:3685)
 at org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(
 PhoenixSQLParser.java:537)
 at org.apache.phoenix.parse.PhoenixSQLParser.statement(
 PhoenixSQLParser.java:443)
 at org.apache.phoenix.parse.SQLParser.parseStatement(
 SQLParser.java:108)
 ... 8 more



Re: why syntax Error LPAREN?

2014-11-19 Thread Eli Levine
Maryann, is the JOIN statement Ahmed is running supposed to work?

Thanks,

Eli


On Wed, Nov 19, 2014 at 1:16 PM, Ahmed Hussien aahussi...@gmail.com wrote:

 Yes,
 but it didn't work either!


 On Nov 20, 2014, 12:13 AM, Chris Tarnas wrote:

 I haven't tested it to verify, but you are missing a double quote at the
 end of History in the nested query's from.


 -chris


  On Nov 19, 2014, at 1:11 PM, Ahmed Hussien aahussi...@gmail.com wrote:

 The same problem!!

 On Nov 19, 2014, 06:55 PM, Eli Levine wrote:

 Try without the outermost parenthesis like this: select
 Records.Operation, Records.status, Records.timestamp from
 History where Records.timestamp=(SELECT MAX(Records.timestamp)
 FROM History where rowId like 'xyz');

 Eli

 On Wed, Nov 19, 2014 at 1:29 AM, Ahmed Hussien aahussi...@gmail.com
 wrote:

  for the following query:

 select Records.Operation, Records.status, Records.timestamp
 from History where (Records.timestamp=(SELECT
 MAX(Records.timestamp) FROM History where rowId like 'xyz'));



 I got this Error:


 org.apache.phoenix.exception.PhoenixParserException: ERROR 602
 (42P00):
 Syntax error. Missing LPAREN at line 1, column 95.
  at org.apache.phoenix.exception.PhoenixParserException.
 newException(
 PhoenixParserException.java:33)
  at org.apache.phoenix.parse.SQLParser.parseStatement(
 SQLParser.java:111)
  at org.apache.phoenix.jdbc.PhoenixStatement$
 PhoenixStatementParser.
 parseStatement(PhoenixStatement.java:775)
  at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(
 PhoenixStatement.java:856)
  at org.apache.phoenix.jdbc.PhoenixPreparedStatement.init(
 PhoenixPreparedStatement.java:91)
  at org.apache.phoenix.jdbc.PhoenixConnection.prepareStatement(
 PhoenixConnection.java:506)
  at uaCore.DBQuerys.chkClDel(DBQuerys.java:143)
  at uaCore.DBQuerys.ScUpsert(DBQuerys.java:56)
  at uaCore.ReadInsertDelete.insDel(ReadInsertDelete.java:41)
  at uaCore.operate.main(operate.java:6)
 Caused by: MissingTokenException(inserted [@-1,0:0='missing
 LPAREN',77,1:94] at Records)
  at org.apache.phoenix.parse.PhoenixSQLParser.
 recoverFromMismatchedToken(PhoenixSQLParser.java:299)
  at org.antlr.runtime.BaseRecognizer.match(
 BaseRecognizer.java:115)
  at org.apache.phoenix.parse.PhoenixSQLParser.not_
 expression(PhoenixSQLParser.java:5509)
  at org.apache.phoenix.parse.PhoenixSQLParser.and_
 expression(PhoenixSQLParser.java:5329)
  at org.apache.phoenix.parse.PhoenixSQLParser.or_
 expression(PhoenixSQLParser.java:5266)
  at org.apache.phoenix.parse.PhoenixSQLParser.expression(
 PhoenixSQLParser.java:5231)
  at org.apache.phoenix.parse.PhoenixSQLParser.not_
 expression(PhoenixSQLParser.java:5511)
  at org.apache.phoenix.parse.PhoenixSQLParser.and_
 expression(PhoenixSQLParser.java:5329)
  at org.apache.phoenix.parse.PhoenixSQLParser.or_
 expression(PhoenixSQLParser.java:5266)
  at org.apache.phoenix.parse.PhoenixSQLParser.expression(
 PhoenixSQLParser.java:5231)
  at org.apache.phoenix.parse.PhoenixSQLParser.select_node(
 PhoenixSQLParser.java:3543)
  at org.apache.phoenix.parse.PhoenixSQLParser.hinted_
 select_node(PhoenixSQLParser.java:3685)
  at org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(
 PhoenixSQLParser.java:537)
  at org.apache.phoenix.parse.PhoenixSQLParser.statement(
 PhoenixSQLParser.java:443)
  at org.apache.phoenix.parse.SQLParser.parseStatement(
 SQLParser.java:108)
  ... 8 more





[jira] [Updated] (PHOENIX-1463) phoenix.query.timeoutMs doesn't work as expected

2014-11-18 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-1463:

Fix Version/s: 4.3
   5.0.0

 phoenix.query.timeoutMs doesn't work as expected
 

 Key: PHOENIX-1463
 URL: https://issues.apache.org/jira/browse/PHOENIX-1463
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Jan Fernando
Assignee: Samarth Jain
Priority: Minor
 Fix For: 5.0.0, 4.3

 Attachments: PHOENIX-1463.patch


 In doing performance testing with Phoenix I noticed that under heavy load we 
 saw queries taking as long as 300 secs even though we had set 
 phoenix.query.timeoutMs to 120 secs. It looks like the timeout is applied 
 when the parent thread waits for all the parallel scans to complete. Each 
 time we call rs.next() and need to load a new chunk of data from HBase we 
 again run parallel scans with a new 120 sec timeout. Therefore total query 
 time could be timeout * # chunks scanned. I think it would be more intuitive 
 if the query timeout applied to the query as a whole versus resetting for 
 each chunk.
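 One way to make a single timeout bound the whole query, rather than resetting 
 per chunk, is to fix an absolute deadline up front and hand each chunk of 
 parallel scans only the remaining budget. A hedged sketch; class and method 
 names here are illustrative, not Phoenix APIs:

```java
// Hedged sketch: track an absolute deadline so phoenix.query.timeoutMs
// covers the query as a whole instead of restarting per chunk.
// Illustrative names only.
public class QueryDeadline {
    private final long deadlineMs;

    QueryDeadline(long startMs, long queryTimeoutMs) {
        this.deadlineMs = startMs + queryTimeoutMs;
    }

    /** Budget left for the next chunk of scans; a value <= 0 means timed out. */
    long remainingMs(long nowMs) {
        return deadlineMs - nowMs;
    }

    public static void main(String[] args) {
        QueryDeadline d = new QueryDeadline(0, 120_000);
        // After 100s spent on earlier chunks, the next chunk gets only 20s,
        // not a fresh 120s window.
        System.out.println(d.remainingMs(100_000)); // prints 20000
    }
}
```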



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1463) phoenix.query.timeoutMs doesn't work as expected

2014-11-18 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14217162#comment-14217162
 ] 

Eli Levine commented on PHOENIX-1463:
-

Thanks for taking care of this, [~samarth.j...@gmail.com]! Since this is a 
subtle change in behavior and not a bug per se I don't think it needs to go 
into the 4.2 branch.

 phoenix.query.timeoutMs doesn't work as expected
 

 Key: PHOENIX-1463
 URL: https://issues.apache.org/jira/browse/PHOENIX-1463
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Jan Fernando
Assignee: Samarth Jain
Priority: Minor
 Fix For: 5.0.0, 4.3

 Attachments: PHOENIX-1463.patch


 In doing performance testing with Phoenix I noticed that under heavy load we 
 saw queries taking as long as 300 secs even though we had set 
 phoenix.query.timeoutMs to 120 secs. It looks like the timeout is applied 
 when the parent thread waits for all the parallel scans to complete. Each 
 time we call rs.next() and need to load a new chunk of data from HBase we 
 again run parallel scans with a new 120 sec timeout. Therefore total query 
 time could be timeout * # chunks scanned. I think it would be more intuitive 
 if the query timeout applied to the query as a whole versus resetting for 
 each chunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1463) phoenix.query.timeoutMs doesn't work as expected

2014-11-18 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14217162#comment-14217162
 ] 

Eli Levine edited comment on PHOENIX-1463 at 11/19/14 12:45 AM:


Thanks for taking care of this, [~samarth.j...@gmail.com]! Since this is a 
subtle change in behavior and not a bug per se I don't think it needs to go 
into the 4.2 branch. [~jfernando_sfdc], thoughts?


was (Author: elilevine):
Thanks for taking care of this, [~samarth.j...@gmail.com]! Since this is a 
subtle change in behavior and not a bug per se I don't think it needs to go 
into the 4.2 branch.

 phoenix.query.timeoutMs doesn't work as expected
 

 Key: PHOENIX-1463
 URL: https://issues.apache.org/jira/browse/PHOENIX-1463
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Jan Fernando
Assignee: Samarth Jain
Priority: Minor
 Fix For: 5.0.0, 4.3

 Attachments: PHOENIX-1463.patch


 In doing performance testing with Phoenix I noticed that under heavy load we 
 saw queries taking as long as 300 secs even though we had set 
 phoenix.query.timeoutMs to 120 secs. It looks like the timeout is applied 
 when the parent thread waits for all the parallel scans to complete. Each 
 time we call rs.next() and need to load a new chunk of data from HBase we 
 again run parallel scans with a new 120 sec timeout. Therefore total query 
 time could be timeout * # chunks scanned. I think it would be more intuitive 
 if the query timeout applied to the query as a whole versus resetting for 
 each chunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1463) phoenix.query.timeoutMs doesn't work as expected

2014-11-18 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-1463:

Fix Version/s: 4.2

 phoenix.query.timeoutMs doesn't work as expected
 

 Key: PHOENIX-1463
 URL: https://issues.apache.org/jira/browse/PHOENIX-1463
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Jan Fernando
Assignee: Samarth Jain
Priority: Minor
 Fix For: 5.0.0, 4.2, 4.3

 Attachments: PHOENIX-1463.patch


 In doing performance testing with Phoenix I noticed that under heavy load we 
 saw queries taking as long as 300 secs even though we had set 
 phoenix.query.timeoutMs to 120 secs. It looks like the timeout is applied 
 when the parent thread waits for all the parallel scans to complete. Each 
 time we call rs.next() and need to load a new chunk of data from HBase we 
 again run parallel scans with a new 120 sec timeout. Therefore total query 
 time could be timeout * # chunks scanned. I think it would be more intuitive 
 if the query timeout applied to the query as a whole versus resetting for 
 each chunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1463) phoenix.query.timeoutMs doesn't work as expected

2014-11-18 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14217210#comment-14217210
 ] 

Eli Levine commented on PHOENIX-1463:
-

You are right, James. The name phoenix.query.timeoutMs implies it's the total 
query timeout, so this is a bug.

 phoenix.query.timeoutMs doesn't work as expected
 

 Key: PHOENIX-1463
 URL: https://issues.apache.org/jira/browse/PHOENIX-1463
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Jan Fernando
Assignee: Samarth Jain
Priority: Minor
 Fix For: 5.0.0, 4.2, 4.3

 Attachments: PHOENIX-1463.patch


 In doing performance testing with Phoenix I noticed that under heavy load we 
 saw queries taking as long as 300 secs even though we had set 
 phoenix.query.timeoutMs to 120 secs. It looks like the timeout is applied 
 when the parent thread waits for all the parallel scans to complete. Each 
 time we call rs.next() and need to load a new chunk of data from HBase we 
 again run parallel scans with a new 120 sec timeout. Therefore total query 
 time could be timeout * # chunks scanned. I think it would be more intuitive 
 if the query timeout applied to the query as a whole versus resetting for 
 each chunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1456) Incorrect query results caused by reusing buffers in SpoolingResultIterator

2014-11-15 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14213720#comment-14213720
 ] 

Eli Levine commented on PHOENIX-1456:
-

Thanks for reporting, Maryann. Are 4.1 and 4.2 affected?

 Incorrect query results caused by reusing buffers in SpoolingResultIterator
 ---

 Key: PHOENIX-1456
 URL: https://issues.apache.org/jira/browse/PHOENIX-1456
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: Maryann Xue
   Original Estimate: 120h
  Remaining Estimate: 120h

 The SpoolingResultIterator#OnDiskResultIterator switches between two 
 pre-allocated buffers as reading buffers for the tuple result, based on the 
 assumption that the outer ResultIterator consumes the returned tuples in a 
 streaming fashion and never looks back/forward outside a two-tuple span.
 However, some usages violate this assumption:
 1. OrderedResultIterator: it adds all tuples into its MappedByteBufferQueue 
 on initialization, which is maintained as a priority queue before the 
 threshold is reached and spooling to files begins. 
 This is not revealed in most test cases because, most importantly, 
 OrderedResultIterator is not commonly used on the client side (only 
 ClientProcessingPlan does so).
 2. Child/parent hash-join optimization, which uses a list of PK values to 
 create an InListExpression.
 It might be easy to work around the second usage, but the first one needs 
 more consideration.
 I am thinking of removing SpoolingResultIterator altogether when the outer 
 ResultIterator is an OrderedResultIterator.
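A toy model of the failure mode (illustrative Java only, not the actual Phoenix classes): a consumer that buffers more than the last two results, as OrderedResultIterator does, sees earlier ones silently overwritten because only two backing buffers exist.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy model of the two-buffer reuse described above: each next() fills one
// of two pre-allocated buffers, alternating. Any consumer holding references
// to more than the last two results sees earlier results overwritten.
public class TwoBufferIterator {
    private final byte[][] buffers = { new byte[4], new byte[4] };
    private int calls = 0;

    // Returns a buffer filled with the value n; reuses buffers round-robin.
    byte[] next(byte n) {
        byte[] buf = buffers[calls++ % 2];
        Arrays.fill(buf, n);
        return buf;
    }

    public static void main(String[] args) {
        TwoBufferIterator it = new TwoBufferIterator();
        // Like OrderedResultIterator, buffer ALL tuples before consuming.
        List<byte[]> held = new ArrayList<>();
        for (byte n = 1; n <= 3; n++) held.add(it.next(n));
        // The first and third results are the SAME array object, so the
        // first tuple now reads back as the third tuple's contents.
        System.out.println(held.get(0) == held.get(2)); // true
        System.out.println(held.get(0)[0]);             // 3, not 1
    }
}
```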





[jira] [Commented] (PHOENIX-1429) Cancel queued threads when limit reached

2014-11-12 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208564#comment-14208564
 ] 

Eli Levine commented on PHOENIX-1429:
-

Looking over the patch...

In MergeSortResultIterator you left the
{code}
if (iterators != null) {
    SQLCloseables.closeAll(iterators);
}
{code}
at the bottom of the *close()* method. Intentional? The same code is invoked in 
the *finally* block in the new code.

 Cancel queued threads when limit reached
 

 Key: PHOENIX-1429
 URL: https://issues.apache.org/jira/browse/PHOENIX-1429
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
 Attachments: PHOENIX-1429.patch


 We currently spawn a thread per guidepost to process a query. For a full scan 
 with a LIMIT of 1, this is particularly inefficient, as once we get our one 
 row back, we no longer need to execute the other queued threads. We should 
 cancel those threads once the limit is reached.
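A hedged sketch of the idea (CancelOnLimit and the trivial tasks are illustrative stand-ins, not the attached patch): once enough results for the LIMIT have been taken, cancel every future still queued so those scans never run.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch of the approach described above: submit one task per
// guidepost, consume until the LIMIT is satisfied, then cancel the rest.
public class CancelOnLimit {
    public static int runWithLimit(int tasks, int limit) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // queue builds up
        ExecutorCompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < tasks; i++) {
            final int row = i;
            futures.add(cs.submit(() -> row)); // stand-in for a per-guidepost scan
        }
        int rows = 0;
        for (int i = 0; i < limit; i++) { // take only as many results as the LIMIT
            cs.take().get();
            rows++;
        }
        // LIMIT reached: cancel everything still waiting in the queue.
        // cancel() is a no-op on tasks that already completed.
        for (Future<Integer> f : futures) f.cancel(true);
        pool.shutdown();
        return rows;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runWithLimit(10, 1)); // 1
    }
}
```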





[jira] [Commented] (PHOENIX-1429) Cancel queued threads when limit reached

2014-11-12 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208566#comment-14208566
 ] 

Eli Levine commented on PHOENIX-1429:
-

Otherwise, looks good!

 Cancel queued threads when limit reached
 

 Key: PHOENIX-1429
 URL: https://issues.apache.org/jira/browse/PHOENIX-1429
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
 Attachments: PHOENIX-1429.patch


 We currently spawn a thread per guidepost to process a query. For a full scan 
 with a LIMIT of 1, this is particularly inefficient, as once we get our one 
 row back, we no longer need to execute the other queued threads. We should 
 cancel those threads once the limit is reached.





[jira] [Commented] (PHOENIX-1437) java.lang.OutOfMemoryError: unable to create new native thread

2014-11-11 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14206607#comment-14206607
 ] 

Eli Levine commented on PHOENIX-1437:
-

Taylor, can you add the following to get started: Phoenix version, your schema 
and the query used. Thanks.

 java.lang.OutOfMemoryError: unable to create new native thread
 --

 Key: PHOENIX-1437
 URL: https://issues.apache.org/jira/browse/PHOENIX-1437
 Project: Phoenix
  Issue Type: Bug
Reporter: Taylor Finnell

 Getting a java.lang.OutOfMemoryError when using Phoenix on Storm. Here is the 
 full stack trace.
 {code}
 java.lang.OutOfMemoryError: unable to create new native thread
   at java.lang.Thread.start0(Native Method) ~[na:1.7.0_45]
   at java.lang.Thread.start(java/lang/Thread.java:713) ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.addWorker(java/util/concurrent/ThreadPoolExecutor.java:949)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.execute(java/util/concurrent/ThreadPoolExecutor.java:1360)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.AbstractExecutorService.submit(java/util/concurrent/AbstractExecutorService.java:132)
  ~[na:1.7.0_45]
   at 
 org.apache.phoenix.iterate.ParallelIterators.submitWork(org/apache/phoenix/iterate/ParallelIterators.java:356)
  ~[stormjar.jar:na]
   at 
 org.apache.phoenix.iterate.ParallelIterators.getIterators(org/apache/phoenix/iterate/ParallelIterators.java:265)
  ~[stormjar.jar:na]
   at 
 org.apache.phoenix.iterate.ConcatResultIterator.getIterators(org/apache/phoenix/iterate/ConcatResultIterator.java:44)
  ~[stormjar.jar:na]
   at 
 org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(org/apache/phoenix/iterate/ConcatResultIterator.java:66)
  ~[stormjar.jar:na]
   at 
 org.apache.phoenix.iterate.ConcatResultIterator.next(org/apache/phoenix/iterate/ConcatResultIterator.java:86)
  ~[stormjar.jar:na]
   at 
 org.apache.phoenix.jdbc.PhoenixResultSet.next(org/apache/phoenix/jdbc/PhoenixResultSet.java:732)
  ~[stormjar.jar:na]
   at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606) 
 ~[na:1.7.0_45]
   at 
 RUBY.each(file:/mnt/hadoop/storm/supervisor/stormdist/korrelate_match_log_processor_staging_KOR-2325-online_sync_to_hbase_tf_part_three-1-1415715986/stormjar.jar!/lib/korrelate_match_log_processor/cleanroom_online_event_adapter.rb:51)
  ~[na:na]
   at 
 RUBY.finish_batch(file:/mnt/hadoop/storm/supervisor/stormdist/korrelate_match_log_processor_staging_KOR-2325-online_sync_to_hbase_tf_part_three-1-1415715986/stormjar.jar!/lib/korrelate_match_log_processor/bolt/abstract_event_reader_bolt.rb:68)
  ~[na:na]
   at 
 RUBY.finishBatch(/Users/tfinnell/.rvm/gems/jruby-1.7.11@O2O-jruby/gems/redstorm-0.6.6/lib/red_storm/proxy/batch_bolt.rb:51)
  ~[na:na]
   at 
 redstorm.proxy.BatchBolt.finishBatch(redstorm/proxy/BatchBolt.java:149) 
 ~[stormjar.jar:na]
   at 
 redstorm.storm.jruby.JRubyTransactionalBolt.finishBatch(redstorm/storm/jruby/JRubyTransactionalBolt.java:56)
  ~[stormjar.jar:na]
   at 
 backtype.storm.coordination.BatchBoltExecutor.finishedId(backtype/storm/coordination/BatchBoltExecutor.java:76)
  ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
   at 
 backtype.storm.coordination.CoordinatedBolt.checkFinishId(backtype/storm/coordination/CoordinatedBolt.java:259)
  ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
   at 
 backtype.storm.coordination.CoordinatedBolt.execute(backtype/storm/coordination/CoordinatedBolt.java:322)
  ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
   at 
 backtype.storm.daemon.executor$fn__4329$tuple_action_fn__4331.invoke(executor.clj:630)
  ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
   at 
 backtype.storm.daemon.executor$fn__4329$tuple_action_fn__4331.invoke(backtype/storm/daemon/executor.clj:630)
  ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
   at 
 backtype.storm.daemon.executor$mk_task_receiver$fn__4252.invoke(executor.clj:398)
  ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
   at 
 backtype.storm.daemon.executor$mk_task_receiver$fn__4252.invoke(backtype/storm/daemon/executor.clj:398)
  ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
   at 
 backtype.storm.disruptor$clojure_handler$reify__1747.onEvent(disruptor.clj:58)
  ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
   at 
 backtype.storm.disruptor$clojure_handler$reify__1747.onEvent(backtype/storm/disruptor.clj:58)
  ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
   at 
 backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(backtype/storm/utils/DisruptorQueue.java:104)
  ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
  {code}

Re: [VOTE] Release of Apache Phoenix 4.2.1 RC0

2014-11-11 Thread Eli Levine
Mujtaba, is this a new issue or is it there in 4.2.0? If it's in 4.2.0 not
sure it's enough to sink this RC, since it's not a regression.

Thanks,

Eli

On Mon, Nov 10, 2014 at 12:11 PM, Mujtaba Chohan mujt...@apache.org wrote:

 -1.

 Drop table doesn't work with Phoenix 4.1 client with 4.2.1-RC0 on server
 side.
 Exception: com.google.protobuf.UninitializedMessageException: Message
 missing required fields: cascade

 On Sun, Nov 9, 2014 at 11:49 PM, Gabriel Reid gabriel.r...@gmail.com
 wrote:

  +1 to release
 
  * Verified signatures and checksums
  * Ran rat
  * Verified contents of source distribution vs tag in git
  * Successfully ran full integration test suite
 
 
  On Fri, Nov 7, 2014 at 8:54 PM, James Taylor jamestay...@apache.org
  wrote:
   Hello everyone,
  
   This is a call for a vote on Apache Phoenix 4.2.1 RC0. This is a bug
   fix/patch release of Phoenix 4.2, compatible with the 0.98 branch of
   Apache HBase with feature parity against the Phoenix 3.2 release. The
   release includes both a source-only release and a convenience binary
   release.
  
   For a complete list of changes, see:
   https://raw.githubusercontent.com/apache/phoenix/4.2/CHANGES
  
   The source tarball, including signatures, digests, etc can be found at:
   https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.2.1-rc0/src/
  
   The binary artifacts can be found at:
   https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.2.1-rc0/bin/
  
   Release artifacts are signed with the following key:
   https://people.apache.org/keys/committer/mujtaba.asc
  
   KEYS file available here:
   https://dist.apache.org/repos/dist/release/phoenix/KEYS
  
   The hash and tag to be voted upon:
  
 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=73738f4cd6fc2be7b07e660ba4915b1e850627c6
  
 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.2.1-rc0
  
   Vote will be open for at least 72 hours. Please vote:
  
   [ ] +1 approve
   [ ] +0 no opinion
   [ ] -1 disapprove (and reason why)
  
   Thanks,
   The Apache Phoenix Team
 



Re: [VOTE] Release of Apache Phoenix 4.2.1 RC0

2014-11-11 Thread Eli Levine
Agree on the other issues. Just wanted to clarify that we are not sinking
this RC solely because of the compat issue.

On Tue, Nov 11, 2014 at 1:54 PM, Mujtaba Chohan mujt...@apache.org wrote:

 Eli - Drop table compatibility issue would be there in 4.2.0 however stats
 not getting updated correctly is important
 https://issues.apache.org/jira/browse/PHOENIX-1434 and also good if we get
 fix for PHOENIX-1428 and PHOENIX-1429.

 -mujtaba


 On Tue, Nov 11, 2014 at 1:43 PM, Eli Levine elilev...@gmail.com wrote:

  Mujtaba, is this a new issue or is it there in 4.2.0? If it's in 4.2.0
 not
  sure it's enough to sink this RC, since it's not a regression.
 
  Thanks,
 
  Eli
 
  On Mon, Nov 10, 2014 at 12:11 PM, Mujtaba Chohan mujt...@apache.org
  wrote:
 
   -1.
  
   Drop table doesn't work with Phoenix 4.1 client with 4.2.1-RC0 on
 server
   side.
   Exception: com.google.protobuf.UninitializedMessageException: Message
   missing required fields: cascade
  
   On Sun, Nov 9, 2014 at 11:49 PM, Gabriel Reid gabriel.r...@gmail.com
   wrote:
  
+1 to release
   
* Verified signatures and checksums
* Ran rat
* Verified contents of source distribution vs tag in git
* Successfully ran full integration test suite
   
   
On Fri, Nov 7, 2014 at 8:54 PM, James Taylor jamestay...@apache.org
 
wrote:
 Hello everyone,

 This is a call for a vote on Apache Phoenix 4.2.1 RC0. This is a
 bug
 fix/patch release of Phoenix 4.2, compatible with the 0.98 branch
 of
 Apache HBase with feature parity against the Phoenix 3.2 release.
 The
 release includes both a source-only release and a convenience
 binary
 release.

 For a complete list of changes, see:
 https://raw.githubusercontent.com/apache/phoenix/4.2/CHANGES

 The source tarball, including signatures, digests, etc can be found
  at:

  https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.2.1-rc0/src/

 The binary artifacts can be found at:

  https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.2.1-rc0/bin/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/mujtaba.asc

 KEYS file available here:
 https://dist.apache.org/repos/dist/release/phoenix/KEYS

 The hash and tag to be voted upon:

   
  
 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=73738f4cd6fc2be7b07e660ba4915b1e850627c6

   
  
 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.2.1-rc0

 Vote will be open for at least 72 hours. Please vote:

 [ ] +1 approve
 [ ] +0 no opinion
 [ ] -1 disapprove (and reason why)

 Thanks,
 The Apache Phoenix Team
   
  
 



Re: [VOTE] Release of Apache Phoenix 4.2.1 RC0

2014-11-07 Thread Eli Levine
+1

Grabbed binaries and ran them through Salesforce's internal tests. Looks
good. Thanks!

On Fri, Nov 7, 2014 at 11:54 AM, James Taylor jamestay...@apache.org
wrote:

 Hello everyone,

 This is a call for a vote on Apache Phoenix 4.2.1 RC0. This is a bug
 fix/patch release of Phoenix 4.2, compatible with the 0.98 branch of
 Apache HBase with feature parity against the Phoenix 3.2 release. The
 release includes both a source-only release and a convenience binary
 release.

 For a complete list of changes, see:
 https://raw.githubusercontent.com/apache/phoenix/4.2/CHANGES

 The source tarball, including signatures, digests, etc can be found at:
 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.2.1-rc0/src/

 The binary artifacts can be found at:
 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.2.1-rc0/bin/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/mujtaba.asc

 KEYS file available here:
 https://dist.apache.org/repos/dist/release/phoenix/KEYS

 The hash and tag to be voted upon:

 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=73738f4cd6fc2be7b07e660ba4915b1e850627c6

 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.2.1-rc0

 Vote will be open for at least 72 hours. Please vote:

 [ ] +1 approve
 [ ] +0 no opinion
 [ ] -1 disapprove (and reason why)

 Thanks,
 The Apache Phoenix Team



Re: [VOTE] Release of Apache Phoenix 4.2.0 RC0

2014-10-22 Thread Eli Levine
+1 for 4.2 RC0. Ran extensive functional testing using code that uses
Phoenix over 2M rows, concurrent queries, with and without stats, multiple
region splits.

Thanks,

Eli

On Wed, Oct 22, 2014 at 9:21 AM, James Taylor jamestay...@apache.org
wrote:

 Hi everyone,
 This is a call for a vote on Apache Phoenix 4.2.0 RC0. This is the
 next minor release of Phoenix compatible with the 0.98 branch of
 Apache HBase with feature parity against the upcoming Phoenix 3.2
 release. The release includes both a source-only release and a
 convenience binary release.

 In addition to 50+ bug fixes, the following new features were added:
 - Statistics collection to improve query performance[1]
 - Semi/anti joins (IN and EXISTS subqueries)
 - Correlated subqueries (with some restrictions)
 - Optimized parent/child join execution
 - Annotated connection properties included in tracing and logging
 - Local immutable indexes to decrease network IO
 - Delete support for tables with immutable rows
 - New operators and built-ins: ILIKE (case insensitive LIKE), REGEXP_SPLIT

 For a complete list of changes, see:
 https://raw.githubusercontent.com/apache/phoenix/4.0/CHANGES

 The source tarball, including signatures, digests, etc can be found at:
 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.2.0-rc0/src/

 The binary artifacts can be found at:
 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.2.0-rc0/bin/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/mujtaba.asc

 KEYS file available here:
 https://dist.apache.org/repos/dist/release/phoenix/KEYS

 The tag and hash to be voted upon:

 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.2.0-rc0

 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=59647fc3480b7f00fa741a29b674c81f414a9f93

 Vote will be open for at least 72 hours. Please vote:

 [ ] +1 approve
 [ ] +0 no opinion
 [ ] -1 disapprove (and reason why)

 Thanks,
 The Apache Phoenix Team

 [1] http://phoenix.apache.org/update_statistics.html



Re: Hbase namespaces

2014-10-01 Thread Eli Levine
Ah I see what you mean. Currently HBase namespaces (HBASE-8015) are not
surfaced/used by Phoenix. I think it would be an interesting feature
request. Phoenix has the concept of table schemas and they are currently
used to construct the underlying HBase tables' names. One possibility is to
map Phoenix schemas to HBase namespaces.

Want to file a Jira? If you want to take a stab at implementation that
would be great, too. Would require a bit of thought with respect to design
and how HBase namespaces are surfaced to Phoenix users.
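A hypothetical illustration of the mapping floated above (SchemaToNamespace is not existing Phoenix behavior): treat the Phoenix schema as an HBase namespace, so the full name MYDB.MYTABLE maps to the HBase table name "MYDB:MYTABLE" rather than a flat dotted name in the default namespace.

```java
// Hypothetical sketch of mapping Phoenix schemas to HBase namespaces:
// SCHEMA.TABLE becomes the namespaced HBase table name SCHEMA:TABLE.
public class SchemaToNamespace {
    static String toHBaseName(String phoenixFullName) {
        int dot = phoenixFullName.indexOf('.');
        // No schema: the table lives in the default HBase namespace.
        if (dot < 0) return phoenixFullName;
        return phoenixFullName.substring(0, dot) + ":"
             + phoenixFullName.substring(dot + 1);
    }

    public static void main(String[] args) {
        System.out.println(toHBaseName("MYDB.MYTABLE")); // MYDB:MYTABLE
        System.out.println(toHBaseName("MYTABLE"));      // MYTABLE
    }
}
```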

Thanks,

Eli




On Wed, Oct 1, 2014 at 12:28 PM, Nicolas Maillard nmaill...@hortonworks.com
 wrote:

 Hello eli

 Thank you for the insight. My requirement is actually to have a database-like
 system in Phoenix, essentially leveraging HBase namespaces. This is part of
 the broader effort of bringing new HBase features up into Phoenix.
 I was thinking of adding to the ANTLR grammar something like a create
 myDb.myTable construct, to be able to create Phoenix/HBase tables that are
 not in the default namespace, or, on the view side, creating a view on an
 HBase table that is not in the default HBase namespace.
 Does this make more sense?
 If nothing has been done and it is a valid feature, maybe I could take a
 stab at it; if there are other ways I would love to know.

 Hope this is clearer

 On Mon, Sep 29, 2014 at 5:22 PM, Eli Levine elilev...@gmail.com wrote:

  I thought you wanted to have a single physical Phoenix table (backed by a
  single HBase table) be shared for multiple purposes. If so, views are a
  good way to do that. You create a regular table in Phoenix and then
 define
  views on top of that table based on your requirements. Views share the
  parent table's PK structure but can add their own columns and indexes.
 
  I'm not 100% sure I'm groking your requirements, though.
 
  On Mon, Sep 29, 2014 at 1:07 AM, Nicolas Maillard 
  nmaill...@hortonworks.com
   wrote:
 
   Hello Eli
  
   Views were one way I was thinking about, but actually what mean is
  creating
   and using an Hbase table like
   mydb:mytable so either creatng it directly through Phoenix or
 creating
  a
   view on top of the hbase table to access my phoenix semantics.
   Hope this makes sense
  
   On Fri, Sep 26, 2014 at 4:22 PM, Eli Levine elilev...@gmail.com
 wrote:
  
Views are a good way to reuse a single physical table in Phoenix for
multiple purposes. They can be used to essentially partition a
 Phoenix
table. http://phoenix.apache.org/views.html
   
Is that something like what you have in mind?
   
Thanks,
   
Eli
   
   
   
 On Sep 25, 2014, at 11:44 PM, Nicolas Maillard 
nmaill...@hortonworks.com wrote:

 Hello everyone

 I was wondering if phoenix had a way of making use of Hbase
  namespaces,
 that is creating a phoenix table is a given namespace, maybe even
creating
 this namespace.
 As we move n more heaby usage of phoenix it makes for a nice way to
 logically split up tables.

 regards

 --
 CONFIDENTIALITY NOTICE
 NOTICE: This message is intended for the use of the individual or
   entity
to
 which it is addressed and may contain information that is
  confidential,
 privileged and exempt from disclosure under applicable law. If the
   reader
 of this message is not the intended recipient, you are hereby
  notified
that
 any printing, copying, dissemination, distribution, disclosure or
 forwarding of this communication is strictly prohibited. If you
 have
 received this communication in error, please contact the sender
immediately
 and delete it from your system. Thank You.
   
  
  
 




Re: Hbase namespaces

2014-09-29 Thread Eli Levine
I thought you wanted to have a single physical Phoenix table (backed by a
single HBase table) be shared for multiple purposes. If so, views are a
good way to do that. You create a regular table in Phoenix and then define
views on top of that table based on your requirements. Views share the
parent table's PK structure but can add their own columns and indexes.

I'm not 100% sure I'm groking your requirements, though.

On Mon, Sep 29, 2014 at 1:07 AM, Nicolas Maillard nmaill...@hortonworks.com
 wrote:

 Hello Eli

 Views were one way I was thinking about, but actually what I mean is creating
 and using an HBase table like
 mydb:mytable, so either creating it directly through Phoenix or creating a
 view on top of the HBase table to access my Phoenix semantics.
 Hope this makes sense

 On Fri, Sep 26, 2014 at 4:22 PM, Eli Levine elilev...@gmail.com wrote:

  Views are a good way to reuse a single physical table in Phoenix for
  multiple purposes. They can be used to essentially partition a Phoenix
  table. http://phoenix.apache.org/views.html
 
  Is that something like what you have in mind?
 
  Thanks,
 
  Eli
 
 
 
   On Sep 25, 2014, at 11:44 PM, Nicolas Maillard 
  nmaill...@hortonworks.com wrote:
  
   Hello everyone
  
   I was wondering if phoenix had a way of making use of Hbase namespaces,
   that is creating a phoenix table is a given namespace, maybe even
  creating
   this namespace.
   As we move n more heaby usage of phoenix it makes for a nice way to
   logically split up tables.
  
   regards
  
 




Re: Hbase namespaces

2014-09-26 Thread Eli Levine
Views are a good way to reuse a single physical table in Phoenix for multiple 
purposes. They can be used to essentially partition a Phoenix table. 
http://phoenix.apache.org/views.html

Is that something like what you have in mind?

Thanks,

Eli



 On Sep 25, 2014, at 11:44 PM, Nicolas Maillard nmaill...@hortonworks.com 
 wrote:
 
 Hello everyone
 
  I was wondering if Phoenix had a way of making use of HBase namespaces,
  that is, creating a Phoenix table in a given namespace, maybe even creating
  this namespace.
  As we move on to heavier usage of Phoenix, it makes for a nice way to
  logically split up tables.
 
 regards
 


[jira] [Resolved] (PHOENIX-1198) Add ability to pass custom annotations to be added to log lines

2014-09-26 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine resolved PHOENIX-1198.
-
Resolution: Fixed

I have made a pass and added annotations to all log lines that are within the 
scope of a user operation. This is currently done for (1) all existing (and a 
few new) client-side log lines, and (2) all existing server-side log lines in 
code related to query execution.

More work is required around server-side code for indexes and upserts. I'll 
file a separate JIRA for that.


 Add ability to pass custom annotations to be added to log lines
 ---

 Key: PHOENIX-1198
 URL: https://issues.apache.org/jira/browse/PHOENIX-1198
 Project: Phoenix
  Issue Type: Improvement
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.1


 Users need a way to associate log lines emitted by Phoenix with other logs 
 generated by higher levels of applications. This JIRA calls for allowing 
 callers to pass custom annotations to Phoenix connections. The mechanism for 
 passing in these annotations is the same as PHOENIX-1196. Phoenix will look 
 for properties that start with phoenix.annotation. and use them when 
 generating log lines.
 e.g. If a connection was created with a property 
 +phoenix.annotation.userid=123+ Phoenix would emit log lines that would look 
 like this: {code}{userid=123} some log line{code}
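 The convention above can be sketched as follows (the property prefix and the
 {{{}userid=123{}}} log format are from this JIRA's text; the LogAnnotations helper
 itself is hypothetical, not the Phoenix implementation):

```java
import java.util.Map;
import java.util.Properties;
import java.util.TreeMap;

// Sketch of the convention described above: connection properties starting
// with "phoenix.annotation." become a prefix on each emitted log line.
public class LogAnnotations {
    static final String PREFIX = "phoenix.annotation.";

    static String annotate(Properties props, String logLine) {
        Map<String, String> anns = new TreeMap<>(); // sorted, stable prefix
        for (String name : props.stringPropertyNames()) {
            if (name.startsWith(PREFIX)) {
                anns.put(name.substring(PREFIX.length()), props.getProperty(name));
            }
        }
        // TreeMap.toString renders as {key=value, ...}, matching the
        // "{userid=123} some log line" format from the JIRA description.
        return anns.isEmpty() ? logLine : anns + " " + logLine;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("phoenix.annotation.userid", "123");
        System.out.println(annotate(props, "some log line")); // {userid=123} some log line
    }
}
```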





[jira] [Updated] (PHOENIX-1198) Add ability to pass custom annotations to be added to log lines

2014-09-25 Thread Eli Levine (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Levine updated PHOENIX-1198:

Summary: Add ability to pass custom annotations to be added to log lines  
(was: Add ability to pass custom tags to be added to log lines)

 Add ability to pass custom annotations to be added to log lines
 ---

 Key: PHOENIX-1198
 URL: https://issues.apache.org/jira/browse/PHOENIX-1198
 Project: Phoenix
  Issue Type: Improvement
Reporter: Eli Levine
Assignee: Eli Levine
 Fix For: 5.0.0, 4.1


 These tags can be passed in either when creating connections or when calling 
 upsert/select. Similar to PHOENIX-1196. Maybe they can share the same 
 mechanism for passing in values to be logged/traced.




