Reported as
https://issues.apache.org/jira/browse/PHOENIX-2256
James
On 14/09/15 11:56, James Heather wrote:
Table "b" should get evicted first, which creates enough space for
"d". But in fact "c" gets evicted first, and then "b" needs to be
evicted as well to make enough room.
I don't
Reported as
https://issues.apache.org/jira/browse/PHOENIX-2257
On 14/09/15 12:24, James Heather wrote:
I also have two failing integration tests in DerivedTableIT:
Failed tests:
DerivedTableIT.testDerivedTableWithGroupBy:320 expected:<['e']> but
was:<['b', 'c', 'e']>
This looks like a bug in PMetaDataImpl to me. The test looks correct.
Table "b" should get evicted first, which creates enough space for "d".
But in fact "c" gets evicted first, and then "b" needs to be evicted as
well to make enough room.
I don't know if there's a race condition in here
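For reference, the eviction order the test expects is standard LRU behavior, which can be sketched with a plain `LinkedHashMap` in access order (a generic illustration of LRU semantics, not the actual PMetaDataImpl code):

```java
import java.util.*;

// Minimal LRU cache: least-recently-ACCESSED entry is evicted first.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;
    LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true: iteration order tracks access
        this.maxEntries = maxEntries;
    }
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}

public class LruDemo {
    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");    // "a" becomes most recently used
        cache.get("c");    // "c" becomes most recently used; "b" is now eldest
        cache.put("d", 4); // exceeds capacity: evicts "b", not "c"
        System.out.println(cache.keySet()); // [a, c, d]
    }
}
```

Under these semantics only one eviction is ever needed to admit one new entry, which is what the test asserts.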
I've set up a repo at
https://github.com/chiastic-security/phoenix-for-cloudera
It is a fork of the vanilla Phoenix github mirror. I've created a branch
called "4.5-HBase-1.0-cdh5", which we can use for making a
CDH5-compatible version. I've not made any of the necessary changes so far.
I
On Mon, Sep 14, 2015 at 9:21 AM, James Heather
wrote:
> I'm not certain of the best way to manage this. Perhaps we need a new
> mailing list for those who want to help, to avoid cluttering this list up.
Just my opinion, but maybe a tag in the email subject,
Thanks for filing these issues. I believe these failures occur on Java 8,
but not on 7. Not sure why, though.
James
On Monday, September 14, 2015, James Heather
wrote:
> Reported as
>
> https://issues.apache.org/jira/browse/PHOENIX-2256
>
> James
>
> On 14/09/15
Thanks!
That sounds a bit fragile. It might mean that the implementation will be
incorrect if the code is compiled under Java 8, which wouldn't be ideal
(especially now that Java 7 has been EOL'd).
Is anyone likely to be looking into the cause?
James
On 14/09/15 16:24, James Taylor wrote:
Yes, I'll look into it. Thanks,
James
On Monday, September 14, 2015, James Heather
wrote:
> Thanks!
>
> That sounds a bit fragile. It might mean that the implementation will be
> incorrect if the code is compiled under Java 8, which wouldn't be ideal
> (especially
This is great, James.
Since this is conveniently on Github, maybe we use the issue tracker there?
Interested parties can set a watch. Would you be willing to add 'apurtell' as a
collaborator on the repo? I will fork and send over PRs of course, but you
might want help?
> On Sep 14, 2015, at
Thank you, James! I have assigned the issue to myself.
On Mon, Sep 14, 2015 at 7:39 AM James Heather
wrote:
> Reported as
>
> https://issues.apache.org/jira/browse/PHOENIX-2257
>
> On 14/09/15 12:24, James Heather wrote:
> > I also have two failing integration tests
Jeffrey,
Can you tell us how you are creating your view over the existing HBase table?
For example this works for me:
HBase shell:
create 'T', 'f1', 'f2', 'f3'
Phoenix sql line:
create view T ("f1".col1 INTEGER)
Note that HBase shell is case sensitive. So we need to put the family name
f1 in
Sumit,
To add to what Samarth said, even now PreparedStatements help by saving the
parsing cost. Soon, too, for UPSERT VALUES, we'll also avoid recompilation
when using a PreparedStatement. I'd encourage you to use them.
Thanks,
James
On Mon, Sep 14, 2015 at 9:32 PM, Samarth Jain
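The saving James describes comes from reusing one compiled statement across executions. A toy sketch of the idea (a hypothetical client-side cache, not Phoenix's actual internals): when parameters are bound with `?`, every execution shares one SQL text and hits the cache; when literals are embedded, every SQL string is distinct and must be parsed again.

```java
import java.util.*;

// Hypothetical parse cache keyed by SQL text, illustrating why bind
// parameters avoid repeated parsing. Not Phoenix's real implementation.
public class ParseCacheDemo {
    static final Map<String, Object> parseCache = new HashMap<>();
    static int parses = 0;

    // Stand-in for an expensive SQL parse/compile step, cached by SQL text.
    static Object compile(String sql) {
        return parseCache.computeIfAbsent(sql, s -> { parses++; return new Object(); });
    }

    public static void main(String[] args) {
        // Statement style: literals embedded, so every SQL string is distinct.
        for (int i = 0; i < 100; i++) {
            compile("UPSERT INTO t VALUES (" + i + ")");
        }
        System.out.println(parses); // 100 parses for 100 executions

        parses = 0;
        parseCache.clear();
        // PreparedStatement style: one SQL text, parameters bound separately.
        for (int i = 0; i < 100; i++) {
            compile("UPSERT INTO t VALUES (?)");
        }
        System.out.println(parses); // 1 parse reused across 100 executions
    }
}
```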
Hello,
I am using Phoenix 4.5 with HBase 0.98.1.
PreparedStatement is preferred with databases such as Oracle to help with
effective reuse of query plans (via bind parameters). Does it give the same
guarantees with Phoenix, or does the Phoenix query engine treat both Statement
and PreparedStatement
Thank you James.
Will switch over to PreparedStatement. Interestingly, in some load tests,
prepared statements did not offer any significant advantage. But as I
understand, this is going to change soon.
Best regards,
Sumit
From: James Taylor
To: user
Thanks!
Note James's comments on the unit tests: probably this is another issue
that fails on Java 8 but succeeds on Java 7.
That's likely to indicate a fairly subtle bug, or reliance on a Java 7
implementation detail that isn't contractual...
James
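One common culprit for tests that pass on Java 7 but fail on Java 8 (an assumption here, not a confirmed diagnosis for these particular failures) is code that depends on HashMap or HashSet iteration order, which is unspecified and changed between JDK releases. Making the order explicit removes the fragility:

```java
import java.util.*;

// Illustration: HashMap iteration order is unspecified and can differ between
// JDK versions, while LinkedHashMap guarantees insertion order.
public class IterationOrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> fragile = new HashMap<>();       // order unspecified
        Map<String, Integer> stable  = new LinkedHashMap<>(); // insertion order
        for (String k : new String[] {"b", "c", "e"}) {
            fragile.put(k, 1);
            stable.put(k, 1);
        }
        // A test asserting on fragile.keySet() order may pass on one JDK and
        // fail on another; stable.keySet() is always [b, c, e].
        System.out.println(stable.keySet()); // [b, c, e]
    }
}
```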
On 14/09/15 18:38, Maryann Xue wrote:
Done! Thanks for helping!
The branches in the repo mirror those in vanilla Phoenix. We shouldn't
push any changes to the vanilla branches, but only to "*-cdh5" branches
(or any temporary side branches we need to create).
The issue tracker will be very useful, yes.
James
On 14/09/15 17:22,
Noted. Thanks, James!
On Mon, Sep 14, 2015 at 1:48 PM, James Heather
wrote:
> Thanks!
>
> Note James's comments on the unit tests: probably this is another issue
> that fails on Java 8 but succeeds on Java 7.
>
> That's likely to indicate a fairly subtle bug, or
Yes, we're discussing this right now over on PHOENIX-1598. Please feel free
to chime in over there.
Thanks,
James
On Mon, Sep 14, 2015 at 12:00 PM, Satish Iyengar wrote:
> One of the recommendations in HBase is to have short column names. Now
> when phoenix user defines a
One of the recommendations in HBase is to have short column names. Now when
phoenix user defines a logical but longer column names this affects the
HFile size of such a table (column names are repeated). Even if we use
compression and data block encoding, the size while in transit and in
memory is
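To make the overhead concrete: HBase stores the full key, including the column qualifier, with every cell, so a long logical column name is paid once per cell. A rough back-of-the-envelope sketch, using the uncompressed KeyValue key layout and illustrative (assumed) row, family, and qualifier lengths:

```java
// Back-of-the-envelope cost of repeated column qualifiers in HBase cells.
// Row key, family, and qualifier lengths below are illustrative assumptions.
public class KeySizeDemo {
    // Uncompressed KeyValue key size: 2-byte row length + row + 1-byte family
    // length + family + qualifier + 8-byte timestamp + 1-byte key type.
    static int keySize(int rowLen, int famLen, int qualLen) {
        return 2 + rowLen + 1 + famLen + qualLen + 8 + 1;
    }

    public static void main(String[] args) {
        long cells = 1_000_000L;
        int shortQual = keySize(9, 1, 1);  // row "user00001", family "f", qualifier "n"
        int longQual  = keySize(9, 1, 18); // same, but qualifier "customer_full_name"
        System.out.println(shortQual);                      // 23 bytes per cell key
        System.out.println(longQual);                       // 40 bytes per cell key
        System.out.println((longQual - shortQual) * cells); // 17000000 extra bytes
    }
}
```

Compression and data block encoding shrink this on disk, but, as the message notes, the uncompressed form is what travels over the wire and sits in memory structures.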
Does anyone else get a test failure when they build Phoenix?
If I make a fresh clone of the repo, and then run mvn package, I get a
test failure:
---
Test set: org.apache.phoenix.schema.PMetaDataImplTest