I also faced many stability problems with Phoenix... it's very complicated to
tune the tables in order to have decent performance for all kinds of queries.
Since we need to be performant for every type of query (analytics and
exploration), we use Elasticsearch + the Join plugin (i.e. the Siren platform [1]).
> You weren't doing anything weird on your own -- you wrote data via the
> JDBC driver? Any index tables?
>
> Aside from weirdness in the client with statistics, there isn't much I've
> seen that ever causes a "bad" table. You have to try pretty hard to
> "corrupt"
Hi to all,
today I had a very weird, and potentially critical, behavior in my Phoenix
installation (4.13.2 on CDH 5.11.2).
I started upserting rows in a salted table, but fortunately I discovered
that some of them were missing (and the PK was unique).
After 3 hours of attempts and debugging I gave
a.io.tmpdir system property.
>
>
> On 3/20/18 12:47 PM, Flavio Pompermaier wrote:
>
>> Hi to all,
>> I've just discovered that Phoenix continues to create .tmp files in the /tmp
>> directory, causing the disk to run out of space... is there a way to change
>> this directory and run a cleanup?
>>
>> Best,
>> Flavio
>>
>
Hi to all,
I've just discovered that Phoenix continues to create .tmp files in the /tmp
directory, causing the disk to run out of space... is there a way to change
this directory and run a cleanup?
Best,
Flavio
Any insight here..?
On Fri, Mar 16, 2018 at 7:23 PM, Flavio Pompermaier <pomperma...@okkam.it>
wrote:
> Thanks everybody for the help.
> I'm just curious to understand why the first query didn't complete. The
> query is quite complex but the available memory should be m
scan (and use a fair amount
>>> of memory depending on the cardinality of SOMEFIELD). The above with a
>>> secondary index would skip to the distinct values instead.
>>>
>>> Thanks,
>>> James
>>>
>>> On Fri, Mar 16, 2018 at 8:30
EFIELD IS NOT NULL.
>
> Otherwise, you'll end up doing a full table scan (and use a fair amount of
> memory depending on the cardinality of SOMEFIELD). The above with a
> secondary index would skip to the distinct values instead.
>
> Thanks,
> James
>
> On Fri, Mar 16, 2018 at 8:30
Hi to all,
I'm running a query like this one on my Phoenix 4.13 (on CDH 5.11.2):
SELECT COUNT(*) FROM (
SELECT DISTINCT(SOMEFIELD)
FROM TEST.MYTABLE
WHERE VALID = TRUE AND SOMEFIELD IS NOT NULL
)
Unfortunately the query times out (the timeout is 10 min). Any suggestion
about how to tune my
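Following James's advice elsewhere in this thread, a secondary index with SOMEFIELD as the leading column would let the engine skip to the distinct values instead of doing a full table scan. A minimal sketch — the index name is hypothetical, and the table/column names are taken from the query above:

```sql
-- Hypothetical index name; SOMEFIELD as leading column enables skipping
-- over distinct values, and INCLUDE (VALID) covers the WHERE filter.
CREATE INDEX IF NOT EXISTS IDX_MYTABLE_SOMEFIELD
    ON TEST.MYTABLE (SOMEFIELD) INCLUDE (VALID);
```

Whether the optimizer actually uses the index can be checked by running EXPLAIN on the original query.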
I also had similar troubles and I fixed them by changing the following params
(on both server and client side, and restarting HBase):
hbase.rpc.timeout (to 60)
phoenix.query.timeoutMs (to 60)
hbase.client.scanner.timeout.period (from 1 m to 10m)
hbase.regionserver.lease.period (from 1 m to
> Thanks
> Kunal
>
> -Original Message-
> From: Flavio Pompermaier [mailto:pomperma...@okkam.it]
> Sent: Monday, February 05, 2018 7:29 AM
> To: u...@drill.apache.org; user@phoenix.apache.org
> Cc: Bridget Bevens <bbev...@mapr.com>; James Taylor <
>
ts collection (but you can leave off usage to parallelize queries) and
> the do a SUM over the size column for the table using stats table directly,
> or 2) do a count(*) using TABLESAMPLE clause (again enabling stats as
> described above) to prevent a full scan.
>
> On Thu, Feb
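The two alternatives quoted above can be sketched roughly as follows. This is a hedged sketch: the table name is hypothetical, and the SYSTEM.STATS column names are an assumption that may differ between Phoenix versions:

```sql
-- 1) Approximate row count from the stats table (assumes stats collection
--    is enabled; column names may vary by version):
SELECT SUM(GUIDE_POSTS_ROW_COUNT)
FROM SYSTEM.STATS
WHERE PHYSICAL_NAME = 'TEST.MYTABLE';

-- 2) COUNT(*) over a sample instead of a full scan (~5% of rows):
SELECT COUNT(*) FROM TEST.MYTABLE TABLESAMPLE (5);
```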
ebsite, so she can do
> the final touches and help it find a home on the website.
>
> -Original Message-----
> From: Flavio Pompermaier [mailto:pomperma...@okkam.it]
> Sent: Friday, February 02, 2018 9:04 AM
> To: u...@drill.apache.org
> Cc: James Taylor <jamestay...@apache.o
//phoenix.apache.org/presentations/Drillix.pdf
>> > [2] https://github.com/jacques-n/drill/tree/phoenix_plugin
>> >
>> >> On Fri, Feb 2, 2018 at 10:21 AM, Kunal Khatua <kkha...@mapr.com>
>> wrote:
>> >>
>> >> That's great, Flavio!
>&
e/Phoenix will work well.
> Also, if you want to do a SELECT COUNT(*), the HBase RowCounter job will be
> much faster than Phoenix queries.
>
> Thanks,
> Anil Gupta
>
> On Thu, Feb 1, 2018 at 7:35 AM, Flavio Pompermaier <pomperma...@okkam.it>
> wrote:
>
>> I was abl
sult is same. The difference is that I use 4.12.
>
>
> On Thu, Feb 1, 2018 at 8:23 PM, Flavio Pompermaier <pomperma...@okkam.it>
> wrote:
>
>> Hi to all,
>> I'm trying to use the brand-new Phoenix 4.13.2-cdh5.11.2 over HBase and
>> everything was fine until the data was
Hi to all,
I'm trying to use the brand-new Phoenix 4.13.2-cdh5.11.2 over HBase and
everything was fine as long as the data was quite small (a few million rows). Once
I had inserted 170 M rows in my table I could not get the row count anymore
(using SELECT COUNT) because of
Any answer on this..?
On Fri, Jan 12, 2018 at 10:38 AM, Flavio Pompermaier <pomperma...@okkam.it>
wrote:
> Hi to all,
> looking at the documentation (https://phoenix.apache.org/tuning_guide.html),
> in the writing section, there's the following sentence: "Phoenix uses
Hi to all,
I've tested a program that writes (UPSERTs) to Phoenix using executeBatch().
In the logs I see "*Sent batch of 2 for SOMETABLE*".
Is this correct? I fear that the batch is not executed as a batch but
statement by statement... the code within
PhoenixStatement.executeBatch() is:
for (i
Hi to all,
looking at the documentation (https://phoenix.apache.org/tuning_guide.html),
in the writing section, there's the following sentence: "Phoenix uses
commit() instead of executeBatch() to control batch updates". I am using a
Phoenix connection with autocommit enabled +
guidePosts is found in the region
> and no count will be stored for a region whose size is smaller than the
> guidepost width, or for the remaining region after the last guidepost. So this
> row_count should not be used in place of the actual count.
>
>
> On Wed, Sep 13, 2017 at 4:04 PM
Hi to all,
I've opened this issue https://issues.apache.org/jira/browse/PHOENIX-4523.
Can someone give it a look please?
It seems to be a double problem: first, the mutex table should not be created;
second, it seems that TableExistsException is not caught because it is wrapped by
a
Here it is: https://issues.apache.org/jira/browse/PHOENIX-4508
On Thu, Dec 28, 2017 at 9:19 AM, Flavio Pompermaier <pomperma...@okkam.it>
wrote:
> Hi James,
> What should be the subject of the JIRA?
> Could you open it for me...? I'm on vacation and opening tickets on JIR
understanding.
>
>
>
> Regards,
>
> Dor
>
>
>
> *From:* Flavio Pompermaier [mailto:pomperma...@okkam.it]
> *Sent:* יום ה 28 דצמבר 2017 10:06
> *To:* user@phoenix.apache.org
> *Subject:* Re: Phoenix and Cloudera
>
>
>
> I don't think it will work on C
I don't think it will work on Cloudera 5.10.1.
There are 2 Phoenix parcels: the latest official one (Phoenix 4.7 on CDH 4.9)
and the unofficial one (under release), which is Phoenix 4.13 on CDH 5.11.2.
You can find more info at
RHS table being joined) must be small enough to fit into memory on the
region server. If it's too big, you can use the USE_SORT_MERGE_JOIN which
would not have this restriction.
On Wed, Dec 27, 2017 at 3:16 PM, Flavio Pompermaier <pomperma...@okkam.it>
wrote:
> Just to summarize things...is th
Just to summarize things... is the sort merge join the best approach, in terms
of required memory, for Apache Phoenix queries? Should inner
queries be avoided?
On 22 Dec 2017 22:47, "Flavio Pompermaier" <pomperma...@okkam.it> wrote:
MYTABLE is definitely much bigger than PEOPLE
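For reference, the sort merge join discussed here is selected with a query hint. A hedged sketch: USE_SORT_MERGE_JOIN is the actual Phoenix hint, but the column names and join condition are hypothetical:

```sql
-- Avoids materializing the RHS table in region-server memory,
-- at the cost of sorting both sides of the join:
SELECT /*+ USE_SORT_MERGE_JOIN */ m.ID, p.NAME
FROM MYTABLE m
JOIN PEOPLE p ON m.PERSON_ID = p.ID;
```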
OPLE and MYTABLE
>>> (after filtered) respectively?
>>>
>>> For sort merge join, anyone knows are the both sides get shipped to
>>> client to do the merge sort?
>>>
>>> Thanks,
>>>
>>>
>>> On December 22, 2017 at 9:58:30 AM,
Any help here...?
On 20 Dec 2017 17:58, "Flavio Pompermaier" <pomperma...@okkam.it> wrote:
> Hi to all,
> I'm trying to find the best query for my use case but I found that one
> version works and the other one does not (unless I apply some
> tuning to timeo
Yes you can (at least on CDH 5.11.2).
I'm using the latest version of the parcel... take a look at
http://community.cloudera.com/t5/Cloudera-Labs/Apache-Phoenix-Support-for-CDH-5-8-x-5-9x-5-10-x-5-11-x/m-p/62687#M416?eid=1=1
On 22 Dec 2017 04:09, wrote:
> Hi,
>
> Can we install
I don't think you can define a table over HBase (which is a key-value
store) without a key... however, it would be helpful in many use cases.
On 22 Dec 2017 04:06, wrote:
> Hi,
>
> I created a phoenix without specifying primary key, it returned me error.
> I am new to
Hi to all,
I'm trying to find the best query for my use case but I found that one
version works and the other one does not (unless I apply some
tuning to timeouts etc. as explained in [1]).
The 2 queries extract the same data but, while the first query terminates,
the second does not.
> REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE',
> MIN_VERSIONS => '0', TTL => '900 SECONDS (15 MINUTES)', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY =>
> 'false', BLOCKCACHE => 'true'}
>
> 1 row(s) in 0.1110 secon
this situation...
On 18 Dec 2017 00:49, "Ethan" <ew...@apache.org> wrote:
> System.mutex should come with Phoenix, so you should have it even though
> it sometimes doesn't show up. To truncate that table you may try a delete
> statement in sqlline.
>
>
> On December 17, 2017
t in HBase shell and/or scan it to check whether it's there.
>
> On 17 Dec 2017 22:28, "Flavio Pompermaier" <pomperma...@okkam.it> wrote:
>
> The problem is that I don't have that table... how can I create it from
> the HBase shell?
>
> On Sun, Dec 17, 2017 at 11:24
> On 17 December 2017 at 22:01, Flavio Pompermaier <pomperma...@okkam.it>
> wrote:
>
>> I've got Phoenix 4.13 both on client and server side..How can I truncate
>> Mutex table? I don't have any..
>>
>> On Sun, Dec 17, 2017 at 10:29 PM, Ethan <ew...@apache.or
at server side has to have a higher version than client
> side.
>
> Another note: I got a similar issue the other day. I solved it by truncating
> the system Mutex table. Hope it helps yours too.
>
> Thanks,
>
>
> On December 16, 2017 at 3:23:47 PM, Flavio Pompermaier (
> po
Hi to all,
I've recently updated my Cloudera + Phoenix from CDH 5.9 + 4.7 to CDH
5.11.2 + 4.13 but now I can't connect with Phoenix anymore. When I run
phoenix-sqlline.sql I get:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where
Hi to all,
I just wanted to know how Apache Phoenix ensures that, each time a paginated
query is executed with a different offset, different rows are returned [1].
Is there any insight (or constraint) about this?
[1] https://phoenix.apache.org/paged.html
Thanks in advance,
Flavio
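For what it's worth, the page linked in [1] describes two paging styles; here is a hedged sketch with hypothetical table and column names. OFFSET paging only returns consistent pages if the ORDER BY is a total order (e.g. the full primary key) and the data does not change between requests; row-value-constructor paging restarts from the last row seen and avoids rescanning skipped rows:

```sql
-- LIMIT/OFFSET paging (the server still scans and discards skipped rows):
SELECT * FROM TEST.MYTABLE ORDER BY ID, S LIMIT 20 OFFSET 40;

-- Row value constructor paging: bind the PK of the last row of the
-- previous page and continue from there:
SELECT * FROM TEST.MYTABLE WHERE (ID, S) > (?, ?) ORDER BY ID, S LIMIT 20;
```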
rc Spaggiari <
>> jean-m...@spaggiari.org> wrote:
>>
>>> We "simply" need to have a place to host the file, right? From a code
>>> perspective, it can be another branch in the same repo?
>>>
>>> 2017-11-09 8:48 GMT-05:00 Flavio Pompermaie
; allows column IS true or column IS 4 as some kind of alternative to using
> an equality expression. More on that here:
> https://stackoverflow.com/questions/859762/is-this-the-proper-way-to-do-boolean-test-in-sql
>
> On Tue, Dec 5, 2017 at 8:28 AM Flavio Pompermaier <
wrote:
> How about just using VALID = true or just VALID like this: select * from t
> where VALID
>
> On Tue, Dec 5, 2017 at 2:52 AM Flavio Pompermaier <pomperma...@okkam.it>
> wrote:
>
>> Hi to all,
>> I'm using Phoenix 4.7 and I cannot use IS operator on boole
Hi to all,
I'm using Phoenix 4.7 and I cannot use the IS operator on boolean values (e.g.
VALID IS TRUE).
Would it be that difficult to support it?
Best,
Flavio
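As the replies above point out, the usual workaround until IS is supported on booleans is a plain equality test (the table name T is hypothetical):

```sql
-- Instead of: ... WHERE VALID IS TRUE
SELECT * FROM T WHERE VALID = TRUE;
-- or, since VALID is already a boolean expression:
SELECT * FROM T WHERE VALID;
```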
Hi to all,
as stated in the documentation [1], "for optimal performance, number of
salt buckets should match number of region servers".
So, why not add an AUTO/DEFAULT option for salting that defaults this
parameter to the number of region servers?
Otherwise I have to manually connect to
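Until such an option exists, the bucket count has to be hard-coded at table creation. A minimal sketch assuming a cluster with 8 region servers (table and column names are hypothetical):

```sql
CREATE TABLE IF NOT EXISTS TEST.SALTED_EXAMPLE (
    ID  BIGINT NOT NULL,
    VAL VARCHAR
    CONSTRAINT pk PRIMARY KEY (ID)
) SALT_BUCKETS = 8;  -- set to the number of region servers
```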
s. You can track that on
>> PHOENIX-4372.
>>
>> @Flavio - Pedro is targeting a CDH 5.11.2 release.
>>
>> On Fri, Nov 24, 2017 at 8:53 AM Flavio Pompermaier <pomperma...@okkam.it>
>> wrote:
>>
>>> Hi to all,
>>> is there any Parcel
>>>> * Improvements to statistics collection [3]
>>>> * New COLLATION_KEY built-in function for linguistic sort [4]
>>>>
>>>> Source and binary downloads are available here [5].
>>>>
>>>> [1] https://issues.apache.org/jira/browse/PHOENIX-4335
>>>> [2] https://issues.apache.org/jira/issues/?jql=labels%20%3D%20rowDeletion
>>>> [3] https://issues.apache.org/jira/issues/?jql=labels%20%3D%20statsCollection
>>>> [4] https://phoenix.apache.org/language/functions.html#collation_key
>>>> [5] http://phoenix.apache.org/download.html
>>>>
>>>
>>>
>>
>
--
Flavio Pompermaier
Development Department
OKKAM S.r.l.
Tel. +(39) 0461 041809
immediate
>>>>>> availability of the 4.13.0 release. Apache Phoenix enables SQL-based OLTP
>>>>>> and operational analytics for Apache Hadoop using Apache HBase as its
>>>>>> backing store and providing integration with other projects
No interest from the Phoenix PMC in supporting the creation of
official Cloudera parcels (at least from the Phoenix side)...?
On Tue, Oct 31, 2017 at 8:09 AM, Flavio Pompermaier <pomperma...@okkam.it>
wrote:
> Anyone from Phoenix...?
>
> On 27 Oct 2017 16:47, "Pe
Great! At this point we just need an official Phoenix mentor...
On Fri, Oct 27, 2017 at 4:19 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> See below.
>
> 2017-10-27 8:45 GMT-04:00 Flavio Pompermaier <pomperma...@okkam.it>:
>
>> I just need someone
I can give it a try... is there someone who can lead this thing?
>
> I think this is not about sharing a parcel now and stop doing it, right?
> Also there are "financial" issues in terms of servers that I'd probably
> need help with.
>
> On 27 Oct 2017 08:49, "Flavio Pompermaier" <pomperma...@okkam.it> wrote:
>
>> If
failing and I
> haven't looked further into them yet.
>
> On 26 Oct 2017 18:42, "Flavio Pompermaier" <pomperma...@okkam.it> wrote:
>
> I could give it a try... any reference about it? Where can I find this
> latest parcel you produced? Any feedback from Cloudera?
different format. It requires some modifications. However, it's doable...
Andrew applied those modifications on a later version and we packaged it
into a Parcel. So it's definitely doable. Might be interesting to do that
for the last version, but will require some efforts...
2017-10-26 12:07 GMT-04:00 Fl
I'll take care of it ;)
On 25 Oct 2017 23:45, "Sergey Soldatov" <sergeysolda...@gmail.com> wrote:
> Hi Flavio,
>
> It looks like you need to ask the vendor, not the community about their
> plan for further releases.
>
> Thanks,
> Sergey
>
> On Wed,
f how you can interact with the server. If it could
become part of HBase and support the full wire protocol, then it might be
an option.
Thanks,
James
On Thu, Oct 5, 2017 at 7:00 AM, Flavio Pompermaier <pomperma...@okkam.it>
wrote:
> Maybe Phoenix could benefit from https://github.com/OpenTSDB/
2017, 9:36 AM James Taylor <jamestay...@apache.org> wrote:
>>
>>> Hi Flavio,
>>> Phoenix supports JDBC. The implementation may do gets, scans, etc., but
>>> it's completely transparent to the user.
>>> Thanks,
>>> James
>>>
>>>
Does it need HDP? Could it also be installed on CDH?
On Mon, Sep 11, 2017 at 3:40 PM, Sudhir Babu Pothineni <
sbpothin...@gmail.com> wrote:
> I think there is Hortonworks ODBC driver:
>
> https://hortonworks.com/hadoop-tutorial/bi-apache-phoenix-odbc/
>
>
> On Sep 11, 2017, at 12:52 AM, Bulvik,
,
Flavio
On Mon, Apr 13, 2015 at 6:35 PM, Ravi Kiran maghamraviki...@gmail.com
wrote:
Hi Flavio,
One good blog for reference is
http://gbif.blogspot.com/2012/07/optimizing-writes-in-hbase.html. Hope it
helps.
Regards
Ravi
On Mon, Apr 13, 2015 at 2:31 AM, Flavio Pompermaier pomperma
Hi to all,
when running a mr job on my Phoenix table I get this exception:
Caused by: org.apache.phoenix.exception.PhoenixIOException: 299364ms passed
since the last invocation, timeout is currently set to 6
at
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
at
, Flavio Pompermaier pomperma...@okkam.it
wrote:
Hi to all,
when running a mr job on my Phoenix table I get this exception:
Caused by: org.apache.phoenix.exception.PhoenixIOException: 299364ms
passed since the last invocation, timeout is currently set to 6
I tried to set hbase.client.scanner.caching = 1 on both client and server
side and I still get that error :(
On Mon, Apr 13, 2015 at 10:31 AM, Flavio Pompermaier pomperma...@okkam.it
wrote:
Will disabling caching turn off this kind of error? Is that possible?
Or is it equivalent to setting
Any help here?
On Tue, Mar 31, 2015 at 10:46 AM, Flavio Pompermaier pomperma...@okkam.it
wrote:
Hi to all,
I'd like to know the best way to read a salted key with Phoenix.
If I read it during a MapReduce job I see a byte in front of the key
(probably the salt prefix) that I don't
If you are writing a MapReduce job, I would highly recommend using the custom
InputFormat classes that handle these:
http://phoenix.apache.org/phoenix_mr.html.
Regards
Ravi
On Wed, Apr 1, 2015 at 12:16 AM, Flavio Pompermaier pomperma...@okkam.it
wrote:
Any help here?
On Tue, Mar
Hi to all,
I'm using Phoenix 4.2.2 and I'm trying to create a table with both
compression and salting, but this seems not to be possible.
Here's my SQL:
CREATE TABLE IF NOT EXISTS %s (ID BIGINT NOT NULL, S VARCHAR NOT NULL,
MODEL VARCHAR CONSTRAINT pk PRIMARY KEY (ID, S)) SALT_BUCKETS=10
(ID BIGINT NOT NULL,
S VARCHAR NOT NULL,
MODEL VARCHAR
CONSTRAINT pk PRIMARY KEY (ID, S)) SALT_BUCKETS=10,
COMPRESSION='GZ', BLOCKSIZE='4096';
--
--
CertusNet
*From:* Flavio
Just a curiosity... what is the difference between HBase on hadoop1 and
hadoop2 from a functional point of view?
Does HBase on hadoop2 (Hoya?) rely on YARN features?
On Tue, Sep 23, 2014 at 8:15 PM, James Taylor jamestay...@apache.org
wrote:
We'll definitely remove hadoop1 support from 4.x, as
...@apache.org
wrote:
I see. That makes sense, but it's more of an HBase request than a
Phoenix request. If HBase had a client-only pom, then Phoenix could
have a client-only pom as well.
Thanks,
James
On Thu, Sep 18, 2014 at 1:52 PM, Flavio Pompermaier
pomperma...@okkam.it wrote:
Because
Any help about this..?
What if I save a field as an array? How could I read it from a MapReduce
job? Is there a separator char to use for splitting, or what?
On Tue, Sep 9, 2014 at 10:36 AM, Flavio Pompermaier pomperma...@okkam.it
wrote:
Hi to all,
I'd like to know which is the correct way
, Flavio Pompermaier pomperma...@okkam.it
wrote:
Any help about this..?
What if I save a field as an array? How could I read it from a MapReduce
job? Is there a separator char to use for splitting, or what?
On Tue, Sep 9, 2014 at 10:36 AM, Flavio Pompermaier pomperma...@okkam.it
wrote:
Hi