Re: Materialized views in Hbase/Phoenix

2019-09-27 Thread Pedro Boado
data consistency between the tables created for each matrix.

Re: Materialized views in Hbase/Phoenix

2019-09-27 Thread Pedro Boado
column when all rows are grouped by a certain row property). Precomputing seems much more efficient.

Re: Materialized views in Hbase/Phoenix

2019-09-27 Thread Pedro Boado
needed to scale to that degree. If one of the tables fails to write, we need some kind of rollback mechanism, which is why I was considering a transaction. We cannot be in a partial state where some of the ‘views’ are written and some aren’t.

Re: Materialized views in Hbase/Phoenix

2019-09-27 Thread Pedro Boado
For just a few million rows I would go for an RDBMS and not Phoenix / HBase. You don't really need transactions to control completion, just write a flag (a COMPLETED empty file, for instance) as a final step in your job. On Fri, 27 Sep 2019, 15:03 Gautham Acharya wrote: > Thanks Anil.

Re: is Apache phoenix reliable enough?

2019-06-24 Thread Pedro Boado
My former employer has been running, for the last 3 years, thousands of queries per second (with millisecond response times) scanning thousands of rows in tables with a few billion rows, without further issues. Combined with an additional write load of a few thousand writes per second. But it didn't

Re: Error using VARBINARY in Index

2019-05-13 Thread Pedro Boado
Hi, Indexes in Phoenix are implemented using an additional HBase table, and the index key fields are serialized as the HBase table key. So the same limitations apply to VARBINARY and VARCHAR when used as index columns: they can only be used as the last column in the index key. Cheers, Pedro.
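A minimal sketch of the limitation described above; the table and column names are illustrative, not from the thread:

```sql
-- Hypothetical table for illustration.
CREATE TABLE docs (
    id BIGINT NOT NULL PRIMARY KEY,
    category VARCHAR,
    payload VARBINARY
);

-- OK: the variable-length binary column is the last column in the index key.
CREATE INDEX idx_docs_ok ON docs (category, payload);

-- Fails: a VARBINARY column cannot be followed by another key column,
-- because no separator can be written after its variable-length bytes.
CREATE INDEX idx_docs_bad ON docs (payload, category);
```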

Re: Phoenix Performance Improvement

2019-01-15 Thread Pedro Boado
What type of queries are being thrown at the cluster? What's the average row size? 5M rows seems a tiny table size. 30ms is OK for scans over a few thousand records, but maybe not for full table scans.
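One way to see which category a query falls into is EXPLAIN; the table and column names below are hypothetical:

```sql
-- A point lookup or range scan on the leading PK columns stays fast:
EXPLAIN SELECT * FROM metrics WHERE id = 42;

-- A filter on a non-key column forces a full scan over every region,
-- which is where latencies well beyond 30ms start to appear:
EXPLAIN SELECT * FROM metrics WHERE payload_size > 1000;
```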

Re: phoenix classpath

2018-12-31 Thread Pedro Boado
Are you connecting to Phoenix from a Java app? Just add it to your JVM classpath... Depending on how you're running it, it can be added in one way or another. If, for instance, it is a Spring Boot app: java -jar app.war -cp folder_containing_additional_classpath_resources Or just include it as part of

Re: column mapping schema decoding

2018-12-26 Thread Pedro Boado
Hi, Column mapping is stored in the SYSTEM.CATALOG table. There is only one column mapping strategy, using between 1 and 4 bytes to represent the column number. Regardless of the encoded column size, the column name lookup strategy remains the same. Hope it helps, Pedro.
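Assuming you want to inspect that mapping yourself, recent Phoenix versions expose the encoded qualifier in the catalog; the table name below is a placeholder:

```sql
-- Shows the encoded column qualifier assigned to each column of a table
-- (MY_TABLE is a placeholder; COLUMN_QUALIFIER exists in versions with
-- column encoding, i.e. 4.10+).
SELECT COLUMN_NAME, COLUMN_FAMILY, COLUMN_QUALIFIER
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'MY_TABLE' AND COLUMN_NAME IS NOT NULL;
```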

Re: modifiy parcel to work with cdh 5.16

2018-12-07 Thread Pedro Boado
Hi, change the CDH version for the dependencies in pom.xml and recompile (mvn clean package; you'll find your parcels under the phoenix-server module). But do this at your own risk! Potentially a number of IT tests won't pass. That said, claiming both run the same HBase 1.2 is quite imprecise; Cloudera keeps applying

Re: Regarding upgrading from 4.7 to 4.14

2018-11-14 Thread Pedro Boado
Have you tried disabling column name mapping, either globally or on a per-table basis? Column names are stored in every cell, so there is no direct workaround other than disabling it. On Wed, 14 Nov 2018, 15:34 talluri abhishek wrote: > Hi All, we are upgrading from Phoenix 4.7 to 4.14 and observed that
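As a sketch of the per-table option (the table name is hypothetical): setting COLUMN_ENCODED_BYTES to 0 at creation time turns column mapping off for that table.

```sql
-- COLUMN_ENCODED_BYTES = 0 disables column mapping for this table, so
-- full column names are stored in each cell instead of encoded qualifiers.
CREATE TABLE my_table (
    pk VARCHAR PRIMARY KEY,
    v1 VARCHAR,
    v2 BIGINT
) COLUMN_ENCODED_BYTES = 0;
```

For the global route, the client-side property phoenix.default.column.encoded.bytes.attrib set to 0 changes the default for newly created tables.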

Re: Phoenix 5.x and CDH6.x

2018-10-21 Thread Pedro Boado
Yes, but the first release supporting CDH will be delayed to some point in the next couple of months. On Sun, 21 Oct 2018, 09:48 Bulvik, Noam wrote: > Hi, do you plan to issue a Phoenix 5.x parcel based on CDH 6, like there were Phoenix 4.x parcels based on CDH 5.x? > Regards,

Re: Concurrent phoenix queries throw unable to create new native thread error

2018-10-10 Thread Pedro Boado
Are you reaching any of the ulimits for the user running your application? On Wed, 10 Oct 2018, 17:00 Hemal Parekh wrote: > We have an analytical application running concurrent Phoenix queries against a Hortonworks HDP 2.6 cluster. The application uses a Phoenix JDBC connection to run queries.

Re: Issue with Restoration on Phoenix version 4.12

2018-09-07 Thread Pedro Boado
Does updating statistics on the table help? On Fri, 7 Sep 2018, 13:51 Azharuddin Shaikh wrote: > Hi All, we have upgraded the Phoenix version from 4.8 to 4.12 to resolve a duplicate count issue, but we are now facing an issue with the restoration of tables on Phoenix 4.12. Our HBase

Re: Upsert is EXTREMELY slow

2018-07-12 Thread Pedro Boado
Have you checked that all RSs receive the same traffic? On Thu, 12 Jul 2018, 23:10 Pedro Boado wrote: > I believe it's related to your client code - in our use case we easily do 15k writes/sec in a cluster lower specced than yours.

Re: Upsert is EXTREMELY slow

2018-07-12 Thread Pedro Boado
I believe it's related to your client code - in our use case we easily do 15k writes/sec in a cluster lower specced than yours. Check that your JDBC connection has autocommit off, so Phoenix can batch writes, and that the table has a reasonable UPDATE_CACHE_FREQUENCY (more than 6
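The table-side setting can be sketched as follows; the table name and the frequency value are placeholders (the figure in the original message is cut off), and autocommit itself is toggled on the JDBC connection with conn.setAutoCommit(false):

```sql
-- Placeholder value: how many milliseconds the client may reuse cached
-- table metadata before re-reading SYSTEM.CATALOG on each statement.
ALTER TABLE my_table SET UPDATE_CACHE_FREQUENCY = 60000;
```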

Re: Phoenix connection from Jaspersoft is hanging

2018-06-14 Thread Pedro Boado
Can you set log4j to DEBUG? That will give you a hint about what's going on on the server. On Thu, 14 Jun 2018, 18:40 Susheel Kumar Gadalay wrote: > Can someone please help me to resolve this. Thanks, Susheel Kumar > On Tuesday, June 12, 2018, Susheel Kumar Gadalay wrote: > Hi,

Re: Optimisation on join in case of all the data to be joined present in the same machine (region server)

2018-04-16 Thread Pedro Boado
I guess this thread is not about Kafka Streams, but what Josh suggested is basically my last-resort plan for building Kafka streams, as you'll be constrained by the HBase/Phoenix upsert rate (you'll be doing 5x the number of upserts). In my experience Kafka Streams is not bad at all at doing this kind

Re: KrbException: Checksum failed when attempting connection to PQS

2018-04-12 Thread Pedro Boado
Just to rule it out: might you need the Java Cryptography Extension (JCE) Unlimited Strength policy files installed to deal with the cipher algorithms in your keytabs? On Thu, 12 Apr 2018, 20:47 Yan Koyfman wrote: > We are attempting to create a connection to PQS (Phoenix 4.13.1) in a Kerberized

Re: HBase Timeout on queries

2018-02-05 Thread Pedro Boado
Flavio, I get the same behaviour: a count(*) over 180M records needs a couple of minutes to complete for a table with 10 regions and 4 RSs serving it. Why are you evaluating robustness in terms of full scans? As Anil said, I wouldn't expect a NoSQL database to run quick counts on hundreds of millions or

Apache Phoenix + Solr integration?

2018-02-01 Thread Pedro Boado
Hi all, Do you know of any integration approach to stream documents from Phoenix to Solr in a similar way to what Lily HBase Indexer does? Thanks!

Re: High CPU usage on Hbase region Server with GlobalMemoryManager warnings

2018-01-31 Thread Pedro Boado
Maybe the warnings are not the cause but a consequence (GC calls finalize(), not the other way around). Any details on memory usage? G1? Non-full vs full GC ratio, average freed memory... Did any GC run in the 3rd RS? Memory percentage assigned to memstores? You have an average memory assigned

Re: Is first query to a table region way slower?

2018-01-30 Thread Pedro Boado
"Mujtaba Chohan" <mujt...@apache.org> wrote: > Just to remove one variable, can you repeat the same test after truncating the Phoenix stats table? (Either truncate SYSTEM.STATS from the HBase shell or use SQL: DELETE FROM SYSTEM.STATS.) > On Mon, Jan 29, 2018 at 4:36 PM, Pedro Boado

Re: Is first query to a table region way slower?

2018-01-29 Thread Pedro Boado
, James > On Sun, Jan 28, 2018 at 5:39 PM Pedro Boado <pedro.bo...@gmail.com> wrote: > Hi all, I'm running into issues with a Java Spring Boot app that ends up querying a Phoenix cluster (from out of the cluster) through the non-thin clien

Is first query to a table region way slower?

2018-01-28 Thread Pedro Boado
Hi all, I'm running into issues with a Java Spring Boot app that ends up querying a Phoenix cluster (from outside the cluster) through the non-thin client. Basically this application has high latency - around 2 to 4 seconds - for the first query per primary key to each region of a table with

RE: [ANNOUNCE] Apache Phoenix 4.13.2 for CDH 5.11.2 released

2018-01-22 Thread Pedro Boado
On Sat, Jan 20, 2018 at 12:29 PM Pedro Boado <pedro.bo...@gmail.com> wrote: > The Ap

[ANNOUNCE] Apache Phoenix 4.13.2 for CDH 5.11.2 released

2018-01-20 Thread Pedro Boado
The Apache Phoenix team is pleased to announce the immediate availability of the 4.13.2 release for CDH 5.11.2. Apache Phoenix enables SQL-based OLTP and operational analytics for Apache Hadoop using Apache HBase as its backing store and providing integration with other projects in the Apache

Re: Phoenix 4.13 on Hortonworks

2018-01-17 Thread Pedro Boado
servers. > Regards, Sumanta > -Pedro Boado <pedro.bo...@gmail.com> wrote: > Hi,

Re: Phoenix 4.13 on Hortonworks

2018-01-17 Thread Pedro Boado
Hi, Afaik Hortonworks already includes Apache Phoenix as part of the platform, doesn't it? Cheers. On 17 Jan 2018 10:30, "Sumanta Gh" wrote: > I am eager to learn if anyone has installed Phoenix 4.13 on Hortonworks > HDP cluster. > Please let me know the version number of

Re: can you add the parcels download location to phonix download page (http://phoenix.apache.org/download.html) ?

2017-12-20 Thread Pedro Boado
We haven't made a public release *yet*. Once it's done it will be published on the download page. Thanks. On 20 Dec 2017 08:54, "Bulvik, Noam" wrote:

RE: problem to run phoenix client 4.13.1 for CDH5.11.2 on windows

2017-12-18 Thread Pedro Boado
Hi Noam, thanks for your feedback. PHOENIX-4454 and PHOENIX-4453 were opened for looking into these issues and a fix for both has already been applied to the git branch. I'll publish a new dev release of the parcel in the next couple of days in the same repo as the previous one. Cheers. On 6

Re: Error upgrading from from 4.7.x to 4.13.x

2017-12-17 Thread Pedro Boado
you should have it even though sometimes it doesn't show up. To truncate that table you may try a DELETE statement in sqlline. > On December 17, 2017 at 3:14:58 PM, Flavio Pompermaier (pomperma...@okkam.it) wrote: > I did a hbase shell list and

Re: Error upgrading from from 4.7.x to 4.13.x

2017-12-17 Thread Pedro Boado
I create it from the HBase shell? On Sun, Dec 17, 2017 at 11:24 PM, Pedro Boado <pedro.bo...@gmail.com> wrote: > You can do that through the hbase shell: > hbase(main):011:0> truncate 'SYSTEM.MUTEX' > On 17 December 2017 at 22:01, Flavio Pompermaier <

Re: Error upgrading from from 4.7.x to 4.13.x

2017-12-17 Thread Pedro Boado
andler.execute(ReflectiveCommandHandler.java:38) at sqlline.SqlLine.dispatch(SqlLine.java:809) at sqlline.SqlLine.initArgs(SqlLine.java:588) at sqlline.SqlLine.begin(SqlLine.java:661) at sqlline.SqlLine.start(SqlLine.java:398) at sqlline.SqlLine.main(SqlLine.java:291) sqlline version 1.2.0 > How can I repair my installation? I can't find any log nor anything strange in the SYSTEM.CATALOG HBase table... Thanks in advance, Flavio > Flavio Pompermaier, Development Department, OKKAM S.r.l., Tel. +(39) 0461 041809 -- Regards. Pedro Boado.

Help: setting hbase row timestamp in phoenix upserts ?

2017-11-29 Thread Pedro Boado
obscure hidden JDBC property? I want to avoid by all means doing a checkAndPut, as the volume of changes is going to be quite big. -- Regards. Pedro Boado.

RE: [ANNOUNCE] Apache Phoenix 4.13 released

2017-11-27 Thread Pedro Boado
:kpalaniap...@marinsoftware.com] wrote: > You mean CDH 5.9 and 5.10? And also HBASE-17587? > On Mon, Nov 27, 2017 at 12:37 AM, Ped

Re: [ANNOUNCE] Apache Phoenix 4.13 released

2017-11-27 Thread Pedro Boado
re.com> wrote: > @James, are you still planning to release 4.13-HBase-1.2? > On Sun, Nov 19, 2017 at 1:21 PM, James Taylor <jamestay...@apache.org> wrote: > Hi Kumar,

Re: [ANNOUNCE] Apache Phoenix 4.13 released

2017-11-19 Thread Pedro Boado
On Nov 19, 2017, at 3:43 PM, Pedro Boado <pedro.bo...@gmail.com> wrote: > As I have volunteered to keep a CDH-compatible release for Phoenix, and as for now CDH 5.x is based on HBase 1.2 is

Re: [ANNOUNCE] Apache Phoenix 4.13 released

2017-11-19 Thread Pedro Boado
there were no plans for a release. Subsequently we've heard from a few folks that they needed it, and Pedro Boado volunteered to do a CDH-compatible release (see PHOENIX-4372), which requires an up-to-date HBase 1.2 based release. > So I've volunteered to do one more Phoenix 4.13.1 relea

Re: Cloudera parcel update

2017-11-09 Thread Pedro Boado
from Phoenix PMCs to provide support to the creation of official Cloudera parcels (at least from the Phoenix side)...? > On Tue, Oct 31, 2017 at 8:09 AM, Flavio Pompermaier <pomperma...@okkam.it> wrote: > Anyone from Phoenix

Re: Cloudera parcel update

2017-10-27 Thread Pedro Boado
help because we also need Phoenix on CDH. Maybe I could write some documentation about its installation and usage, on the README or on the official Phoenix site. Let's set up an unofficial (but working) repo of Phoenix parcels! > On Fri, Oct 27, 2017 at 9:12 AM, Pedro Bo

Re: Cloudera parcel update

2017-10-27 Thread Pedro Boado
HDP releases. > Thanks, James > On Thu, Oct 26, 2017 at 2:43 PM, Pedro Boado <pedro.bo...@gmail.com> wrote: > Sorry, it's provided "as is". Try a "mvn clean package -DskipTests=true". And grab the parcel f

Re: Cloudera parcel update

2017-10-26 Thread Pedro Boado
any documentation about this? > On 26 Oct 2017 20:37, "Pedro Boado" <pedro.bo...@gmail.com> wrote: > I've done it for Phoenix 4.11 and CDH 5.11.2, based on previous work from chiastic-security. > https://github.com/pboado/phoenix-for

Re: does anyone have 4.10 or 4.11 compiled with CDH compatible hbase jars?

2017-09-23 Thread Pedro Boado
-- Regards. Pedro Boado.

Re: Potential causes for very slow DELETEs?

2017-08-19 Thread Pedro Boado
, but not IndexRPC (there was a bug where the client sends all RPCs with index priority). If you see it, remove the controller factory property on the client side. > Thanks, Sergey > On Fri, Aug 18, 2017 at 4:46 AM, Pedro Boado <pedro.bo...@gmail.com> wrote:

Potential causes for very slow DELETEs?

2017-08-18 Thread Pedro Boado
Hi all, We have two HBase 1.0 clusters running the same process in parallel (effectively keeping the same data in both Phoenix tables). This process feeds data into Phoenix 4.5 via HFiles, and once the data is loaded a Spark process deletes a few thousand rows from the tables (secondary indexing is

Best strategy for UPSERT SELECT in large table

2017-06-16 Thread Pedro Boado
Hi guys, We are trying to populate a Phoenix table based on a 1:1 projection of another table with around 15,000,000,000 records, via an UPSERT SELECT in the Phoenix client. We've noticed very poor performance (I suspect the client is using a single-threaded approach) and lots of issues with

Why can Cache of region boundaries are out of date be happening in 4.5.x?

2017-05-20 Thread Pedro Boado
Hi, we're just hitting in production an org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 (XCL08): Cache of region boundaries are out of date, and we can't find much information about the error apart from https://issues.apache.org/jira/browse/PHOENIX-2599 The error

Re: ERROR 201 (22000): Unable to load CSV files using CsvBulkLoadTool due to blank columns

2017-03-30 Thread Pedro Boado
It doesn't make a lot of sense having quotes in an integer column, does it? Maybe removing these quotes from the source would solve the problem. On 30 Mar 2017 18:43, "anil gupta" wrote: > Hi Brian, it seems like Phoenix is not liking '' (single quotes) in an integer

Read only user permissions to Phoenix table - Phoenix 4.5

2017-02-16 Thread Pedro Boado
Hi all, I have a quick question. We are still running on Phoenix 4.5 (I know, it's not my fault) and we're trying to set up a read-only user on a Phoenix table. The minimum set of permissions to get access through sqlline is: grant 'readonlyuser', 'RXC', 'SYSTEM.CATALOG'; grant 'readonlyuser',

Re: ROW_TIMESTAMP weird behaviour

2017-02-06 Thread Pedro Boado
Hi. I don't think it's weird. That column is the PK and you've upserted the same key value twice, so the first one is inserted and the second one is updated. Regards. On 7 Feb 2017 04:59, "Dhaval Modi" wrote: > Hi All, I am facing abnormal scenarios with ROW_TIMESTAMP. > I
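A minimal reproduction of that behaviour, with an illustrative table: the second UPSERT on the same primary key overwrites the first row rather than adding a new one.

```sql
CREATE TABLE events (
    created DATE NOT NULL,
    val VARCHAR
    CONSTRAINT pk PRIMARY KEY (created ROW_TIMESTAMP)
);

-- Same key value twice: the first is an insert, the second an update.
UPSERT INTO events VALUES (TO_DATE('2017-02-06'), 'first');
UPSERT INTO events VALUES (TO_DATE('2017-02-06'), 'second');

-- Returns a single row, with val = 'second'.
SELECT * FROM events;
```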

Missing support for HBase 1.0 in Phoenix 4.9 ?

2017-02-06 Thread Pedro Boado
we are stuck at 4.8.2 until we upgrade our cluster to HBase 1.1/1.2. Is there any plan to support HBase 1.0 again in this or newer versions? Thanks for the great work! Regards. -- Regards. Pedro Boado.