Hi Amit,
I guess processing with HBase + Phoenix is not working for your use case;
it needs a lot of memory and, of course, swap. I imagine there's no direct
solution, but post here if you find one. Some options worth trying:
splitting the query into smaller ones, salting the table in more
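The "split the query into smaller ones" idea can be sketched as chunking the scan range on the client side and issuing one smaller query per chunk; this is pure illustration (no Phoenix API involved, names are mine):

```python
def split_range(start, end, chunks):
    """Split [start, end) into `chunks` contiguous sub-ranges.

    Each sub-range can then be issued as its own, smaller query
    (e.g. WHERE ts >= lo AND ts < hi) instead of one huge scan.
    """
    if chunks <= 0 or end <= start:
        raise ValueError("need a positive chunk count and a non-empty range")
    step = (end - start) // chunks
    # The last bound is pinned to `end` so a non-divisible range
    # still covers everything.
    bounds = [start + i * step for i in range(chunks)] + [end]
    return list(zip(bounds[:-1], bounds[1:]))

print(split_range(0, 100, 4))  # [(0, 25), (25, 50), (50, 75), (75, 100)]
```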
Yes, of course it's possible.
Just not with Phoenix: try writing a Spark job (or MapReduce), and if you
pick the right join condition it might actually not be that slow at all
(including the time to read the 2 tables in Spark).
If you still want to do it in Phoenix, try to increase those limits.
2016-05-12 15:14 GMT+02:00 Ciureanu Constantin <
ciureanu.constan...@gmail.com>:
Just create a new unique first field, CustomerID + TelephoneType, to play the
PK role; something has to be unique there, and an HBase table needs a key.
(This concatenation of 2 or more values is valid in case it's unique;
otherwise invent some other 3rd part, or risk losing phone numbers that are
ta
> would hotspot on a single region if keys are monotonically increasing.
>
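To avoid that hotspotting, Phoenix can pre-split writes with salting; a minimal sketch (the table name, columns, and bucket count here are illustrative):

```sql
-- SALT_BUCKETS prepends a hash byte to each row key, spreading
-- monotonically increasing keys across 8 regions.
CREATE TABLE IF NOT EXISTS events (
    created_at BIGINT NOT NULL,
    event_id VARCHAR NOT NULL
    CONSTRAINT pk PRIMARY KEY (created_at, event_id))
    SALT_BUCKETS = 8;
```

The trade-off is that range scans over the leading key must now touch every bucket, so salting helps write-heavy, time-ordered tables more than scan-heavy ones.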
> On Tue, Oct 4, 2016 at 8:04 AM, Ciureanu Constantin <
> ciureanu.constan...@gmail.com> wrote:
>
> select * from metric_table where metric_type='x'
> -- so far so good
>
> and timestamp > 'start_date' and timestamp < 'end_date'.
In Spark 1.4 it worked via JDBC; surely it would work in 1.6 / 2.0 without
issues.
Here's the sample code I used (it fetched data in parallel across 24 partitions):
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.rdd.JdbcRDD
import java.sql.{Connection, DriverManager, ResultSet}
select * from metric_table where metric_type='x'
-- so far so good
and timestamp > 'start_date' and timestamp < 'end_date'.
-- here, in case the timestamp is a long (BIGINT in Phoenix), it should work
fine!
Try also "timestamp BETWEEN x AND y".
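Put together, the range-scan query discussed above could look like this (table and column names from the thread; the literal bounds are illustrative epoch-millisecond values):

```sql
SELECT *
FROM metric_table
WHERE metric_type = 'x'
  AND timestamp BETWEEN 1430000000000 AND 1440000000000
LIMIT 100;
```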
Anyway - my proposal would be to reverse the
But Phoenix does this for you (it creates a composite key with a special
separator) - you just have to specify the PK while creating the table.
CREATE TABLE IF NOT EXISTS us_population (
    state CHAR(2) NOT NULL,
    city VARCHAR NOT NULL,
    population BIGINT
    CONSTRAINT my_pk PRIMARY KEY (state, city));
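A short usage sketch against that table (the values are illustrative): the composite PK means there is exactly one row per (state, city) pair, and filtering on the leading PK column becomes a range scan rather than a full scan.

```sql
UPSERT INTO us_population VALUES ('CA', 'Los Angeles', 3971883);
UPSERT INTO us_population VALUES ('CA', 'San Diego', 1394928);

-- Leading-PK filter: Phoenix turns this into a range scan.
SELECT city, population FROM us_population WHERE state = 'CA';
```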
Then please post a small part of your code (the part reading from Phoenix and
processing the RDD contents).
2016-10-14 11:12 GMT+02:00 Antonio Murgia :
> For the record, autocommit was set to true.
>
> On 10/14/2016 10:08 AM, James Taylor wrote:
> On Fri, Oct 14, 2016
Not sure what to say; check your apache-commons version, perhaps it's
picking up an older one from the classpath.
On Fri, 21 Oct 2016 at 09:36, Vivek Paranthaman (JIRA)
wrote:
> Vivek Paranthaman shared an issue with you
Not sure if this works for the view use case you have, but it's working for
a Phoenix table.
The table create statement should contain just the stable columns.
CREATE TABLE IF NOT EXISTS TESTC (
TIMESTAMP BIGINT NOT NULL,
NAME VARCHAR NOT NULL
CONSTRAINT PK PRIMARY KEY (TIMESTAMP, NAME)
);
-- as timestamps (long)
2016-12-02 9:11 GMT+01:00 Ciureanu Constantin <ciureanu.constan...@gmail.com>:
> Try using WHERE clause...
>
> ... FROM FARM_PRODUCT_PRICE
> WHERE date=TO_DATE('2015-06-01','yyyy-MM-dd')
> LIMIT 100;
>
> 2016-12-02 6:43 GMT+01:00 lk_phoenix <lk_phoe.
What about using the VLH (Value List Handler) pattern?
And keep the offsets for each page on the server side, for a while... (the
client might not need all of them, and might never ask for the next page)
http://www.oracle.com/technetwork/java/valuelisthandler-142464.html
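The VLH idea above can be sketched as a server-side cache that remembers the last row key of each page served, with a TTL so unused offsets expire; all names here are illustrative, not a real Phoenix API:

```python
import time

class PageOffsetCache:
    """Caches the last row key of each served page, per query, with a TTL.

    When the client asks for page N and we cached page N-1's last key,
    the scan can resume from that key instead of skipping N * page_size rows.
    """

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._entries = {}  # (query_id, page) -> (last_row_key, stored_at)

    def remember(self, query_id, page, last_row_key):
        self._entries[(query_id, page)] = (last_row_key, time.time())

    def resume_key(self, query_id, page):
        """Return the cached last key of the previous page, or None."""
        entry = self._entries.get((query_id, page - 1))
        if entry is None:
            return None
        key, stored_at = entry
        if time.time() - stored_at > self.ttl:
            # Expired: drop it and fall back to a scan from the start.
            del self._entries[(query_id, page - 1)]
            return None
        return key

cache = PageOffsetCache(ttl_seconds=300)
cache.remember("q1", 1, ("CA", "Los Angeles"))
print(cache.resume_key("q1", 2))  # ('CA', 'Los Angeles')
```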
On May 18, 2017 20:02, "James Taylor"
te older cached results)
2017-05-18 22:02 GMT+02:00 James Taylor <jamestay...@apache.org>:
> HBase does not lend itself to that pattern. Rows overlap in HFiles (by
> design). There's no facility to jump to the Nth row. Best to use the RVC
> mechanism.
>
> On Thu,
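The RVC (row value constructor) paging James refers to can be sketched like this (using the us_population table from earlier in the thread; the resume values are illustrative):

```sql
-- First page, ordered by the primary key:
SELECT state, city FROM us_population
ORDER BY state, city
LIMIT 20;

-- Next page: resume strictly after the last (state, city) already seen.
-- The RVC comparison maps onto a row-key range scan, so no rows are skipped.
SELECT state, city FROM us_population
WHERE (state, city) > ('CA', 'San Jose')
ORDER BY state, city
LIMIT 20;
```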
Hello,
Check the Java version.
Phoenix was compiled with JDK 7 and you are probably using JDK 6 at runtime.
From: 聪聪 [mailto:175998...@qq.com]
Sent: Friday, December 19, 2014 9:39 AM
To: user
Subject: sqlline.py operation error
I use HBase version hbase-0.98.6-cdh5.2.0, so I download
Hello Ralph,
Try to check whether the Pig script produces keys that overlap (that would
explain the drop in the number of rows).
Good luck,
Constantin
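The row-count drop is consistent with Phoenix UPSERT semantics: rows that share a primary key overwrite each other rather than accumulating. A small sketch (table and values illustrative):

```sql
CREATE TABLE IF NOT EXISTS dedup_demo (
    k VARCHAR NOT NULL,
    v VARCHAR
    CONSTRAINT pk PRIMARY KEY (k));

UPSERT INTO dedup_demo VALUES ('same-key', 'first');
UPSERT INTO dedup_demo VALUES ('same-key', 'second');  -- overwrites 'first'

SELECT COUNT(*) FROM dedup_demo;  -- 1 row, not 2
```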
From: Ravi Kiran [mailto:maghamraviki...@gmail.com]
Sent: Tuesday, February 03, 2015 2:42 AM
To: user@phoenix.apache.org
Subject: Re: Pig vs
what the real issue is,
could you give a general overview of how your MR job is implemented (or even
better, give me a pointer to it on GitHub or something similar)?
- Gabriel
On Thu, Jan 15, 2015 at 2:19 PM, Ciureanu, Constantin (GfK)
constantin.ciure...@gfk.com wrote:
Hello all,
I finished
machines, 24 tasks can run at the
same time).
Could this be because of some limitation on the number of connections to Phoenix?
Regards,
Constantin
-Original Message-
From: Ciureanu, Constantin (GfK) [mailto:constantin.ciure...@gfk.com]
Sent: Wednesday, January 14, 2015 9:44 AM
To: user
Hello all,
Is there any Cascading / Scalding Tap to read / write data from and to Phoenix?
I couldn’t find anything on the internet so far.
I know that there is a Cascading Tap to read from HBase and Cascading
integration with JDBC.
Thank you,
Constantin
-specific InputFormat and OutputFormat implementations were recently
added to Phoenix, so if there's an easy way to wrap an existing InputFormat and
OutputFormat as a Tap in Cascading, then this would probably be the easiest way
to go.
- Gabriel
On Tue, Feb 10, 2015 at 5:47 PM, Ciureanu
Hi Matthew,
Is it working without the quotes? (I see you are using 2 types of
quotes, which is weird.)
I guess they're not needed, and probably causing trouble. I don't have to use
quotes anyway.
Alternatively, check the types of data in those 2 tables (if the field types are
not the same in
at this class:
org.apache.phoenix.schema.PDataType.
Good luck,
Vaclav;
On 01/13/2015 10:58 AM, Ciureanu, Constantin (GfK) wrote:
Thank you Vaclav,
I just started today to write some code :) for an MR job that will
load data into HBase + Phoenix. Previously I wrote an application
to load
should hit the bottleneck of HBase itself. It should be from 10 to 30+
times faster than your current solution, depending on HW of course.
I'd prefer this solution for stream writes.
Vaclav
On 01/13/2015 10:12 AM, Ciureanu, Constantin (GfK) wrote:
Hello all,
(Due to the slow speed of Phoenix JDBC
Hello all,
1. Is there a good explanation why updating the statistics:
update statistics tableX;
made this query 2x slower? (it was 27 seconds before; now it's
somewhere between 60 and 90 seconds)
select count(*) from tableX;
Thanks,
James
On Tue, Mar 3, 2015 at 4:23 AM, Ciureanu, Constantin (GfK)
constantin.ciure...@gfk.com wrote:
Hello James,
Btw, I noticed some other issues:
- My table key is (DATUM, … ) ordered ascending by key (LONG, in
milliseconds) – I have changed
.
Cheers,
Matt
From: Ciureanu, Constantin (GfK)
[mailto:constantin.ciure...@gfk.com]
Sent: 20 February 2015 14:40
To: user@phoenix.apache.org
Subject: RE: Inner Join not returning any results in Phoenix
Hi Matthew,
Is it working