Re: Slow query help

2018-03-16 Thread Samarth Jain
A less resource-intensive approach would be to use approx count distinct - https://phoenix.apache.org/language/functions.html#approx_count_distinct You would still need the secondary index though, as James suggested, if you want it to run fast. On Fri, Mar 16, 2018 at 10:26 AM Flavio Pompermaier
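
For illustration, a minimal sketch of the suggested query (the connection URL, table, and column names here are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ApproxCountDistinctExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 Statement stmt = conn.createStatement();
                 // APPROX_COUNT_DISTINCT trades a small error margin for far less memory
                 // than an exact COUNT(DISTINCT ...).
                 ResultSet rs = stmt.executeQuery(
                         "SELECT APPROX_COUNT_DISTINCT(CUSTOMER_ID) FROM SALES")) {
                if (rs.next()) {
                    System.out.println("approx distinct: " + rs.getLong(1));
                }
            }
        }
    }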

Re: error when using hint on global index where table is using row timestamp mapping

2017-10-02 Thread Samarth Jain
Hi Noam, Can you pass on the DDL statements for the table and index and the query you are executing, please? Thanks! On Sun, Oct 1, 2017 at 2:01 AM, Bulvik, Noam wrote: > Hi > > > > I have created a table and used the row timestamp mapping functionality. > The key of the

Re: Apache Phoenix Column Mapping Feature

2017-06-12 Thread Samarth Jain
Column mapping is enabled by default. See details on various configs and table properties here - http://phoenix.apache.org/columnencoding.html On Sun, Jun 11, 2017 at 11:55 PM, Udbhav Agarwal wrote: > Hi, > > I am using apache Phoenix 4.10 on Hbase. I want to use

Re: Scan phoenix created columns, hbase

2017-06-05 Thread Samarth Jain
Cheyene, with Phoenix 4.10, the column mapping feature is enabled by default, which means the column names declared in the Phoenix schema are going to be different from the column qualifiers in hbase. If you would like to disable column mapping, set the COLUMN_ENCODED_BYTES=NONE property in your ddl. On
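
A minimal sketch of such a DDL, assuming Phoenix 4.10+ and a made-up table:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DisableColumnMapping {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
                // COLUMN_ENCODED_BYTES=NONE keeps the hbase column qualifiers
                // identical to the column names declared in the Phoenix schema.
                conn.createStatement().execute(
                        "CREATE TABLE MY_TABLE (ID VARCHAR PRIMARY KEY, COL1 VARCHAR) " +
                        "COLUMN_ENCODED_BYTES=NONE");
            }
        }
    }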

Re: Short Tables names and column names

2017-05-30 Thread Samarth Jain
Yes, Phoenix will take care of mapping the column name to the hbase column qualifier. Before using the column mapping feature (which is on by default), make sure that the limits on the number of columns, as highlighted on the website, work for you. On Tue, May 30, 2017 at 7:21 PM Ash N

Re: Unexpected dynamic column issues

2017-04-06 Thread Samarth Jain
Thanks for reporting the issue, Dave. This has to do with the new column mapping feature that we rolled out in 4.10. To disable it for your table, please create your table like this: create table TMP_SNACKS(k bigint primary key, c1 varchar) COLUMN_ENCODED_BYTES=0; I will file a JIRA and get a

Re: Row timestamp

2017-03-10 Thread Samarth Jain
This is because you are using now() for created. If you used a different date, then with TEST_ROW_TIMESTAMP1 the cell timestamp would be that date, whereas with TEST_ROW_TIMESTAMP2 it would be the server side time. Also, which examples are broken on the page? On Thu, Mar 9, 2017 at 11:28 AM,
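
A sketch of the distinction, assuming a table whose leading PK column CREATED is declared ROW_TIMESTAMP (the table name and columns are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class RowTimestampUpsert {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
                conn.setAutoCommit(true);
                // An explicit date lands in the hbase cell timestamp as-is...
                conn.createStatement().executeUpdate(
                        "UPSERT INTO EVENTS (CREATED, ID) VALUES (TO_DATE('2017-01-01 00:00:00'), 'a')");
                // ...whereas now() evaluates to the current time, making the result
                // indistinguishable from a server-assigned timestamp.
                conn.createStatement().executeUpdate(
                        "UPSERT INTO EVENTS (CREATED, ID) VALUES (now(), 'b')");
            }
        }
    }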

Re: Memory leak

2016-12-05 Thread Samarth Jain
Thanks for reporting this, Jonathan. Would you mind filing a JIRA preferably with the object tree that you are seeing in the leak. Also, what version of hbase and phoenix are you using? On Mon, Dec 5, 2016 at 9:53 AM Jonathan Leech wrote: > Looks like PHOENIX-2357 introduced

Re: Recover from "Cluster is being concurrently upgraded from 4.7.x to 4.8.x"

2016-10-06 Thread Samarth Jain
Patrick, Do you have multiple 4.8.1 clients connecting to the cluster at the same time? On Thu, Oct 6, 2016 at 8:11 AM, Patrick FICHE wrote: > Hi, > > I upgraded Phoenix server from 4.7.0 to 4.8.1 on HDP cluster. > > Now, when I try to connect to my server using

Re: Combining an RVC query and a filter on a datatype smaller than 8 bytes causes an Illegal Data Exception

2016-09-19 Thread Samarth Jain
Kumar, Can you try with the 4.8 release? On Mon, Sep 19, 2016 at 2:54 PM, Kumar Palaniappan < kpalaniap...@marinsoftware.com> wrote: > > Any one had faced this issue? > > https://issues.apache.org/jira/browse/PHOENIX-3297 > > And this one gives no rows > > SELECT * FROM TEST.RVC_TEST WHERE

Re: question on calltimeout

2016-06-28 Thread Samarth Jain
+user@phoenix Larry, which version of HBase and Phoenix are you using? Starting from 4.7, Phoenix takes care of automatically renewing scanner leases, which should prevent such timeouts. To take advantage of that feature, you would need an HBase version at least as recent as 0.98.17 if you are

Re: phoenix task rejected

2016-06-22 Thread Samarth Jain
Please look at this tuning guide: https://phoenix.apache.org/tuning.html You probably would want to adjust these client side properties to deal with your workload: phoenix.query.threadPoolSize and phoenix.query.queueSize. On Wed, Jun 22, 2016 at 9:34 AM, 金砖 wrote: > 16
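
These are client-side settings that normally live in the client's hbase-site.xml; a hedged sketch of supplying them programmatically instead (the numbers are placeholders, and pool sizing generally only takes effect for the first connection a JVM makes to a given cluster):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ClientThreadPoolTuning {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Size the pool and queue for your workload; do not copy these numbers blindly.
            props.setProperty("phoenix.query.threadPoolSize", "256");
            props.setProperty("phoenix.query.queueSize", "10000");
            try (Connection conn =
                         DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
                // run the workload that was being rejected...
            }
        }
    }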

Re: Getting swamped with Phoenix *.tmp files on SELECT.

2016-04-21 Thread Samarth Jain
" > + " ,SITE_ID" > + " ,EMAIL" > + " FROM user.SESSION_EXPIRATION " > + " WHERE NEXT_CHECK <= CURRENT_TIME()" > + " LIMIT " + batchSize > + " ) AS TSE"

Re: Getting swamped with Phoenix *.tmp files on SELECT.

2016-04-19 Thread Samarth Jain
D" > + " AND TS1.BRAND_ID = TSE.BRAND_ID" > + " GROUP BY TSE.ID" > + " ,TSE.CLIENT_ID" > + " ,TSE.BRAND_ID" > + " ,TSE.SITE_ID" > + " ,TSE.EMAIL" > + " ) AS TR" > + " LEFT OUTER

Re: Getting swamped with Phoenix *.tmp files on SELECT.

2016-04-18 Thread Samarth Jain
w to address this? These .tmp files > never seem to be cleaned up after each query. Is there any work-around? > > > ------ > *From:* Samarth Jain <samarth.j...@gmail.com> > *To:* "user@phoenix.apache.org" <user@phoenix.apache.org>

Re: Getting swamped with Phoenix *.tmp files on SELECT.

2016-04-15 Thread Samarth Jain
h-resources process ( >> http://www.mastertheboss.com/jboss-server/jboss-datasource/using-try-with-resources-to-close-database-connections >> , >> https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html). >> >> >> -

Re: Getting swamped with Phoenix *.tmp files on SELECT.

2016-04-15 Thread Samarth Jain
What version of phoenix are you using? Is the application properly closing statements and result sets? On Friday, April 15, 2016, wrote: > I am running into an issue where a huge number of temporary files are being > created in my C:\Users\myuser\AppData\Local\Temp
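
The closing discipline being asked about, sketched with try-with-resources (the URL, table, and column are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CloseEverything {
        public static void main(String[] args) throws Exception {
            // try-with-resources closes rs, stmt, and conn even when the query throws,
            // so spooled .tmp files and server-side scanners are not leaked.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT ID FROM MY_TABLE")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }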

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-04-13 Thread Samarth Jain
Srinivas, Are you trying to create a phoenix view over an existing HBase table? On Wed, Apr 13, 2016 at 11:47 AM, Pindi, Srinivas < srinivas.pi...@epsilon.com> wrote: > *Problem* *Statement*: > > While trying to create a phoenix view, we are getting the > following exception. > > > >

Re: Phoenix table is unaccessable...

2016-03-11 Thread Samarth Jain
Saurabh, another option for you would be to upgrade your phoenix to our just released 4.7 version. It is possible that you might be hitting a bug that has been fixed now. Worth a try. On Fri, Mar 11, 2016 at 4:07 PM, Sergey Soldatov wrote: > Hi Saurabh, > It seems that

Re: How to set up different phoenix timeout values for different client applications?

2016-03-09 Thread Samarth Jain
Hi Simon, phoenix.query.timeoutMs is a client side phoenix property. You can set it in the client side hbase-site.xml for a global setting, or set it programmatically per JDBC statement via stmt.setQueryTimeout(int seconds). There are a couple of other hbase level timeouts that are in play:
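
A sketch of the per-statement override (URL and query are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PerStatementTimeout {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 Statement stmt = conn.createStatement()) {
                // Seconds; overrides phoenix.query.timeoutMs for this statement only.
                stmt.setQueryTimeout(60);
                try (ResultSet rs = stmt.executeQuery("SELECT ID FROM MY_TABLE")) {
                    while (rs.next()) {
                        // consume rows
                    }
                }
            }
        }
    }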

Re: Can't change TTL using alter table

2016-03-04 Thread Samarth Jain
Also, are you using the open source version or a vendor supplied distro? On Fri, Mar 4, 2016 at 10:44 AM, Samarth Jain <sama...@apache.org> wrote: > Rafit, > > Changing TTL the way you are doing it should work. Do you have any > concurrent requests going on that are issuing

Re: Can't change TTL using alter table

2016-03-04 Thread Samarth Jain
Rafit, Changing TTL the way you are doing it should work. Do you have any concurrent requests going on that are issuing some kind of ALTER TABLE statements? Also, would you mind posting the DDL statement for your table? - Samarth On Fri, Mar 4, 2016 at 9:20 AM, Rafit Izhak-Ratzin
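
For reference, the kind of statement under discussion, sketched with an arbitrary table name and TTL value (in seconds):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class AlterTtl {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
                // TTL is an HBase table property that Phoenix passes through on ALTER TABLE.
                conn.createStatement().execute("ALTER TABLE MY_TABLE SET TTL=86400");
            }
        }
    }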

Re: WARN client.ScannerCallable: Ignore, probably already closed

2016-01-19 Thread Samarth Jain
This likely has to do with hbase scanners running into lease expiration. Try overriding the value of hbase.client.scanner.timeout.period in the server side hbase-site.xml to a large value. We have a feature coming out in Phoenix 4.7 (soon to be released) that will take care of automatically

Re: Phoenix JDBC connection pool

2015-12-15 Thread Samarth Jain
Kannan, See my response here: https://mail-archives.apache.org/mod_mbox/phoenix-user/201509.mbox/%3CCAMfSBK+WKzd5EscXLJcn9nVpDYd66dH=nL=devdc9n_skww...@mail.gmail.com%3E There is a JIRA in place https://issues.apache.org/jira/browse/PHOENIX-2388 to help pooling of phoenix connections. Would be a

Re: weird result I got when I try row_timestamp feature

2015-12-11 Thread Samarth Jain
Hi Roc, FWIW, looking at your schema, it doesn't look like you are using the ROW_TIMESTAMP feature. The constraint part of your DDL needs to be changed like this: CONSTRAINT my_pk PRIMARY KEY ( server_timestamp ROW_TIMESTAMP, app_id, client_ip, cluster_id, host_id, api ). For the issue of getting
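
Putting that constraint into a complete DDL, as a sketch: only the PK columns come from the message above, while the table name and column types are guesses:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class RowTimestampDdl {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
                // ROW_TIMESTAMP on the leading PK column maps it to the hbase cell timestamp.
                conn.createStatement().execute(
                        "CREATE TABLE METRICS (" +
                        " SERVER_TIMESTAMP DATE NOT NULL," +
                        " APP_ID VARCHAR NOT NULL," +
                        " CLIENT_IP VARCHAR NOT NULL," +
                        " CLUSTER_ID VARCHAR NOT NULL," +
                        " HOST_ID VARCHAR NOT NULL," +
                        " API VARCHAR NOT NULL," +
                        " PAYLOAD VARCHAR," +
                        " CONSTRAINT my_pk PRIMARY KEY" +
                        " (SERVER_TIMESTAMP ROW_TIMESTAMP, APP_ID, CLIENT_IP, CLUSTER_ID, HOST_ID, API))");
            }
        }
    }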

Re: CsvBulkUpload not working after upgrade to 4.6

2015-12-09 Thread Samarth Jain
Zack, What version of HBase are you running? And which version of Phoenix (specifically 4.6-0.98 version or 4.6-1.x version)? FWIW, I don't see the MetaRegionTracker.java file in HBase branches 1.x and master. Maybe you don't have the right hbase-client jar in place? - Samarth On Wed, Dec 9,

Re: Help tuning for bursts of high traffic?

2015-12-09 Thread Samarth Jain
Zack, These stats are collected continuously and at the global client level. So collecting them only when the query takes more than 1 second won't work. A better alternative for you would be to report stats at a request level. You could then conditionally report the metrics for queries that

Re: Row timestamp support in 4.6

2015-12-04 Thread Samarth Jain
Pierre, Thanks for reporting this. Do you mind filing a JIRA? Also, as a workaround, can you check if changing the data type from UNSIGNED_LONG to BIGINT resolves the issue? -Samarth On Friday, December 4, 2015, pierre lacave wrote: > > Hi, > > I am trying to use the

Re: Get a count of open connections?

2015-12-03 Thread Samarth Jain
Hi Zack, One simple way to expose the number of open phoenix connections would be via global client metrics that Phoenix exposes at the client JVM level. I have filed https://issues.apache.org/jira/browse/PHOENIX-2485. The client side metrics capability of Phoenix needs to be documented. I have

Re: JRuby on rails -> Phoenix connection error - cannot load java class

2015-12-02 Thread Samarth Jain
Josh, One step worth trying would be to register the PhoenixDriver instance and see if that helps. Something like this: DriverManager.registerDriver(PhoenixDriver.INSTANCE) Connection con = DriverManager.getConnection("jdbc:phoenix:localhost:2181") - Samarth On Wed, Dec 2, 2015 at 3:41 PM,

Re: blog describing new time-series data optimization

2015-11-08 Thread Samarth Jain
> > > On Sat, Nov 7, 2015 at 6:53 PM, James Taylor <jamestay...@apache.org> > wrote: > >> If you have time-series data for which you'd like to improve query >> performance, take a look at this[1] blog written by Samarth Jain on a new >> feature in our 4.6 release: >> >> https://blogs.apache.org/phoenix/entry/new_optimization_for_time_series >> >> Enjoy! >> >> James >> > >

Re: Phoenix JDBC driver hangs/timeouts

2015-10-18 Thread Samarth Jain
Alok, Please answer the below questions to help us figure out what might be going on: 1) How many region servers are on the cluster? 2) What is the value configured for hbase.regionserver.handler.count? 3) What kind of queries is your test executing - point lookup / range / aggregate / full

Re: ResultSet size

2015-10-06 Thread Samarth Jain
To add to what Jesse said, you can override the default scanner fetch size programmatically via Phoenix by calling statement.setFetchSize(int). On Tuesday, October 6, 2015, Jesse Yates wrote: > So HBase (and by extension, Phoenix) does not do true "streaming" of rows > -
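
A sketch of that call (table name hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FetchSizeOverride {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 Statement stmt = conn.createStatement()) {
                // Rows are fetched from the region server in chunks of this size
                // instead of the default scanner caching value.
                stmt.setFetchSize(500);
                try (ResultSet rs = stmt.executeQuery("SELECT ID FROM MY_TABLE")) {
                    while (rs.next()) {
                        // consume rows as they arrive
                    }
                }
            }
        }
    }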

Re: Can't understand reason for rejected from org.apache.phoenix.job.JobManager: Running, pool size = 128, active threads = 128, queued tasks = 5000, completed tasks = 204

2015-10-06 Thread Samarth Jain
Serega, any chance you have other queries concurrently executing on the client? What version of Phoenix are you on? On Tuesday, October 6, 2015, Serega Sheypak wrote: > Hi, found smth similar here: > >

Re: Can't add views on HBase tables after upgrade

2015-09-15 Thread Samarth Jain
o some docs that reiterate > that, it would help my case to refactor all the scripts to fit the > supported format. > > > > Thanks, > > Jeff > > > > *From:* Samarth Jain [mailto:sama...@apache.org] > *Sent:* September-14-15 7:32 PM > *To:* user@phoenix.apache.org

Re: Can't add views on HBase tables after upgrade

2015-09-14 Thread Samarth Jain
e for offering to look into it, Samarth. > > On Sat, Sep 12, 2015 at 11:22 AM, Samarth Jain <sama...@apache.org> wrote: > >> Jeffrey, >> >> I will look into this and get back to you. >> >> - Samarth >> >> On Thu, Sep 10, 2015 at 8:44 AM, Jeffrey

Re: Can't add views on HBase tables after upgrade

2015-09-12 Thread Samarth Jain
Jeffrey, I will look into this and get back to you. - Samarth On Thu, Sep 10, 2015 at 8:44 AM, Jeffrey Lyons wrote: > Hey all, > > > > I have recently tried upgrading my Phoenix version from 4.4-HBase-0.98 to > build 835 on 4.x-HBase-0.98 to get some of the new changes.

Re: How to force timeout when connection fails

2015-09-03 Thread Samarth Jain
Zack, The configs that you overrode do not apply when establishing connection to HBase via phoenix. You might want to muck around with hbase.client.retries.number and zookeeper.recovery.retry to see if you can get a faster response if HBase is down. I am not an expert in that area though. Someone
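
A hedged sketch of lowering those two retry settings on the client so a down cluster fails fast (the values are illustrative, not recommendations):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class FastFailConnect {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Fewer retries mean a quicker error when HBase/ZooKeeper is unreachable.
            props.setProperty("hbase.client.retries.number", "3");
            props.setProperty("zookeeper.recovery.retry", "1");
            try (Connection conn =
                         DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
                // connected
            }
        }
    }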

Re: Phoenix JDBC in web-app, what is the right pattern?

2015-09-03 Thread Samarth Jain
ava.sql.Connection, right? > > 2015-09-03 21:26 GMT+02:00 Samarth Jain <sama...@apache.org>: > >> Your pattern is correct. >> >> Phoenix doesn't cache connections. You shouldn't pool them and you >> shouldn't share them with multiple threads. >> >>

help diagnosing issue

2015-09-01 Thread Samarth Jain
Ralph, A couple of questions: Do you have phoenix stats enabled? Can you send us a stacktrace of the RegionTooBusy exception? Looking at the HBase code, it is thrown in a few places, so it would be good to check where the resource crunch is occurring. On Tue, Sep 1, 2015 at 2:26 PM, Perko, Ralph J

Re: select * from table throws scanner timeout

2015-08-26 Thread Samarth Jain
, 2015 at 6:02 AM, Sunil B bsunil...@gmail.com wrote: Hi Samarth, The patch definitely solves the issue. The query select * from table retrieves all the records. Thanks for the patch. Thanks, Sunil On Tue, Aug 25, 2015 at 1:21 PM, Samarth Jain samarth.j...@gmail.com wrote: Thanks

Re: select * from table throws scanner timeout

2015-08-25 Thread Samarth Jain
Sunil, Can you tell us a little bit more about the table - 1) How many regions are there? 2) Do you have phoenix stats enabled? http://phoenix.apache.org/update_statistics.html 3) Is the table salted? 4) Do you have any overrides for scanner caching ( hbase.client.scanner.caching) or result

Re: HBase rowkey filter impl in Phoenix for scanning specific time range rows

2015-08-24 Thread Samarth Jain
Hi Sun, There is no custom HBase filter that phoenix uses to scan specific time range rows. Having said that, I am currently working on https://issues.apache.org/jira/browse/PHOENIX-914 that is going to provide the capability of having a column directly map to HBase cell level timestamp. By

Re: ERROR 201 (22000): Illegal data on Upsert Select

2015-08-20 Thread Samarth Jain
Yiannis, Can you please provide a reproducible test case (schema, minimum data to reproduce the error) along with the phoenix and hbase versions so we can take a look at it further. Thanks, Samarth On Thu, Aug 20, 2015 at 2:09 PM, Yiannis Gkoufas johngou...@gmail.com wrote: Hi there, I am

Re: How to do true batch updates in Phoenix

2015-08-19 Thread Samarth Jain
You can do this via phoenix by doing something like this: try (Connection conn = DriverManager.getConnection(url)) { conn.setAutoCommit(false); int batchSize = 0; int commitSize = 1000; // number of rows you want to commit per batch. Change this value according to your needs. while (there are
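
The snippet above is cut off by the archive; a completed sketch of the same commit-every-N pattern, with a made-up table and a counted loop standing in for "while there are rows":

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchUpsert {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
                conn.setAutoCommit(false);
                int batchSize = 0;
                int commitSize = 1000; // rows per commit; tune to your needs
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPSERT INTO MY_TABLE (ID, VAL) VALUES (?, ?)")) {
                    for (int i = 0; i < 100000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "value-" + i);
                        ps.executeUpdate(); // buffered client-side until commit
                        if (++batchSize % commitSize == 0) {
                            conn.commit(); // flush this batch of mutations to hbase
                        }
                    }
                }
                conn.commit(); // flush any remaining rows
            }
        }
    }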

Re: PHOENIX-2000

2015-07-17 Thread Samarth Jain
around this issue? Or if this needs fixing? -Akshat On Wed, Jul 15, 2015 at 2:50 PM, Samarth Jain sama...@apache.org wrote: Changing the email group to user@phoenix.apache.org. Please don't use phoenix-hbase-u...@googlegroups.com as that group is deprecated. Can you try upgrading

Re: Phoenix's behavior when applying limit to the query

2015-05-15 Thread Samarth Jain
Prasanth, To help us answer you better, please let us know your table schema. Also, what does EXPLAIN select * from phoenix_table_name limit 1000; tell you? -Samarth On Friday, May 15, 2015, Chagarlamudi, Prasanth prasanth.chagarlam...@epsilon.com wrote: Hello, I would appreciate if someone

Re: Socket timeout while counting number of rows of a table

2015-04-09 Thread Samarth Jain
Looking at the exception java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions: Thu Apr 09 16:49:33 CEST 2015, null, java.net.SocketTimeoutException: callTimeout=6, callDuration=62366

[ANNOUNCE] Apache Phoenix 4.3.1 released

2015-04-08 Thread Samarth Jain
The Apache Phoenix team is pleased to announce the immediate availability of the 4.3.1 release. Highlights of the release being: - Global client side resource metrics - SQL command to turn Phoenix tracing ON/OFF - SQL command to allow setting tracing sampling frequency - Capability to pass guide

Re: Fwd: java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString

2015-03-18 Thread Samarth Jain
You also want to make sure that you are using compatible versions of client and server jars: a phoenix-core jar at version 4.3.0 and a phoenix-server.jar at version 4.2.3 are *NOT* compatible. The server side jar version should always be the same as or newer (in version) than the client side jar. In

Re: - Multitenancy ...

2015-03-11 Thread Samarth Jain
Hi Naga, Can you try create table mt3 ( tenant_id varchar NOT NULL, tenant_name varchar constraint mt_pk primary key (tenant_id, tenant_name) ) multi_tenant=true; - Samarth On Wednesday, March 11, 2015, Naga Vijayapuram naga_vijayapu...@gap.com wrote: I am on HDP 2.2 ; it uses

Re: TTL

2015-02-11 Thread Samarth Jain
+1 to what Ralph said. FWIW, starting with 4.3 (soon to be out) we allow setting HBase properties like TTL through ALTER TABLE. However, you can't have different TTL for different column families. On Wednesday, February 11, 2015, Perko, Ralph J ralph.pe...@pnnl.gov wrote: That is a great

Re: Scan performance using JDBC result impacted by limit

2015-01-21 Thread Samarth Jain
Vijay, Is there a reason why you are doing PhoenixResultSet.string()? Is it for logging purposes? Regarding your question about the increase in object creation time, that doesn't seem like it is phoenix related. Are you seeing an increase in time for resultset.next() or are you seeing an

Re: Phoenix4.2.1 against HBase0.98.6 encountered a strange problem when using connection with props

2014-12-04 Thread Samarth Jain
The value of timestamp provided by CURRENT_SCN_ATTRIBUTE has to be greater than the table timestamp. So it really is any arbitrary value >= table create timestamp. Providing timestamps on connections helps us with executing point in time or snapshot queries. In other words, it's a way of surfacing
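
A sketch of a point-in-time connection; the timestamp shown is arbitrary, and "CurrentSCN" is the property name behind CURRENT_SCN_ATTRIBUTE:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class SnapshotQuery {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Queries on this connection see data as of this epoch-millis value,
            // which must be at or after the table's create timestamp.
            props.setProperty("CurrentSCN", "1417597200000");
            try (Connection conn =
                         DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
                // SELECTs here return a snapshot as of the SCN above.
            }
        }
    }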

Re: Phoenix4.2.1 against HBase0.98.6 encountered a strange problem when using connection with props

2014-12-03 Thread Samarth Jain
Is there a reason why you are using CURRENT_SCN_ATTRIBUTE while you are getting a phoenix connection? Is it because you want to query data at a point of time? If yes, you probably want to check that the create time stamp of the table MYTEST1 <= 141759720L. If you don't want any snapshot like

Re: Regionserver Crashing whenever join query run_tables with 10lac rows

2014-11-12 Thread Samarth Jain
10 lac is 1 million. Siddharth, please let us know the schema and the query you are executing too. Thanks! On Wednesday, November 12, 2014, Vladimir Rodionov vladrodio...@gmail.com wrote: What does RS log file say, Siddharth? Btw, what does 'lac' stand for? In '10 lac'? -Vladimir On Wed,

Re: Can't connect to Phoenix via JDBC in Scala

2014-08-29 Thread Samarth Jain
Hi Russell, I am not a Scala guy, but do you know if calling classOf[com.salesforce.phoenix.jdbc.PhoenixDriver] ends up loading the java class and hence executing the static block? If it doesn't, you might want to try DriverManager.registerDriver( com.salesforce.phoenix.jdbc.PhoenixDriver.INSTANCE) and

Re: ManagedTests and 4.1.0-RC1

2014-08-28 Thread Samarth Jain
Dan, Can you tell me how you are running your tests? Do you have the test class annotated with the right category annotation - @Category( HBaseManagedTimeTest.class)? Also, can you send over your test class so we can see what might be causing problems? Thanks, Samarth On Thu, Aug 28, 2014 at 10:34