Re: taking a backup of a Phoenix database

2015-08-10 Thread Ankit Singhal
Yes, snapshots will work as Yuhao mentioned; we have done this during a data center migration. On Mon, Aug 10, 2015 at 9:19 PM, Yuhao Bi byh0...@gmail.com wrote: Phoenix is based on HBase; since we can take a snapshot of an HBase table, maybe we can achieve it by this method? Taking snapshot
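For example, a minimal sketch from the HBase shell (table and snapshot names are illustrative):

    hbase> snapshot 'MY_TABLE', 'MY_TABLE_backup_20150810'
    hbase> list_snapshots
    # restore later if needed (the table must be disabled first)
    hbase> disable 'MY_TABLE'
    hbase> restore_snapshot 'MY_TABLE_backup_20150810'
    hbase> enable 'MY_TABLE'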

Re: out of memory - unable to create new native thread

2015-07-09 Thread Ankit Singhal
Hi Ralph, Try increasing the ulimit for the number of open files (ulimit -n) and processes (ulimit -u) for the users below: hbase, hdfs. Regards, Ankit Singhal On Tue, Jul 7, 2015 at 4:13 AM, Perko, Ralph J ralph.pe...@pnnl.gov wrote: Hi, I am using a pig script to regularly load data into hbase
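A sketch of making the limits persistent via /etc/security/limits.conf, assuming the hbase and hdfs service users (the values are illustrative starting points):

    # /etc/security/limits.conf
    hbase  -  nofile  32768
    hbase  -  nproc   32000
    hdfs   -  nofile  32768
    hdfs   -  nproc   32000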

Re: Backup and Recovery for disaster recovery

2015-12-26 Thread Ankit Singhal
+1 for taking snapshots and exporting them to the DR cluster if there is no requirement for the DR cluster to stay up-to-date in real time. I am not sure if an incremental snapshot feature is out yet, but taking snapshots on a periodic basis is also not that heavy. On Thu, Dec 24, 2015 at 3:44 PM,
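A minimal sketch of exporting a snapshot to a DR cluster (the snapshot name and DR namenode URL are illustrative):

    hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
      -snapshot MY_TABLE_snapshot \
      -copy-to hdfs://dr-namenode:8020/hbase \
      -mappers 16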

Re: Dropping of Index can still leave some non-replayed writes Phoenix-2915

2016-06-15 Thread Ankit Singhal
Yes, restart your cluster. On Wed, Jun 15, 2016 at 8:17 AM, anupama agarwal <anu1...@gmail.com> wrote: > I have created async index with same name. But I am still getting the same > error. Should I restart my cluster for changes to reflect? > On Jun 15, 2016 8:38 PM, "Ankit S

Re: Dropping of Index can still leave some non-replayed writes Phoenix-2915

2016-06-15 Thread Ankit Singhal
Hi Anupama, Option 1: You can create an ASYNC index so that the WAL can be replayed. And once your regions are up, remember to flush the data table before dropping the index. Option 2: Create a table in HBase with the same name as the index table by using the HBase shell. Regards, Ankit
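A sketch of both options, with hypothetical table/index/column names (Phoenix's default column family '0' is assumed for option 2):

    -- Option 1: recreate the index as ASYNC so the WAL can be replayed
    CREATE INDEX MY_INDEX ON MY_TABLE (COL1) ASYNC;
    -- once regions are up, flush the data table before dropping the index:
    --   hbase> flush 'MY_TABLE'

    -- Option 2: recreate a bare HBase table matching the index name:
    --   hbase> create 'MY_INDEX', '0'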

Re: Bulk loading and index

2016-06-25 Thread Ankit Singhal
(v) ASYNC. But if you are only using CsvBulkLoadTool for the bulk load, then it will automatically prepare and bulk load the index data as well, so separate index maintenance would not be required. Regards, Ankit Singhal On Sat, Jun 25, 2016 at 4:13 PM, Tongzhou Wang (Simon) < tongzhou.wang.1...@gmail.com>
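A minimal CsvBulkLoadTool invocation (table name and input path are illustrative); index HFiles are built and loaded in the same run:

    hadoop jar phoenix-<version>-client.jar \
      org.apache.phoenix.mapreduce.CsvBulkLoadTool \
      --table MY_TABLE \
      --input /data/my_table.csv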

Re: Custom udf

2016-01-29 Thread Ankit Singhal
You can enable remote debugging on a regionserver by appending "-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071" to HBASE_REGIONSERVER_OPTS and debugging through Eclipse. On Tue, Jan 12, 2016 at 7:08 PM, Gaurav Agarwal wrote: > Hello > How to debug
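For example, in hbase-env.sh (port 8071 as above; suspend=n lets the region server start without waiting for a debugger to attach):

    # hbase-env.sh
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
      -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"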

Re: strange behavior with DATE columns

2016-01-29 Thread Ankit Singhal
As Afshin also said, you need to adjust your timezone with phoenix.query.dateFormatTimeZone (see https://phoenix.apache.org/tuning.html), e.g. phoenix.query.dateFormatTimeZone = IST, and upsert like this: jdbc:phoenix:localhost> UPSERT INTO DESTINATION_METRICS_TABLE VALUES (to_date('2015-09-12
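A sketch of the client-side setting in hbase-site.xml (IST mirrors the example above; any timezone ID works):

    <property>
      <name>phoenix.query.dateFormatTimeZone</name>
      <value>IST</value>
    </property>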

Re: ORDER BY Error on Windows

2016-02-24 Thread Ankit Singhal
Hi Yiannis, You may need to set phoenix.spool.directory to a valid Windows folder, as by default it is set to /tmp. This is fixed in 4.7: https://issues.apache.org/jira/browse/PHOENIX-2348 Regards, Ankit Singhal On Wed, Feb 24, 2016 at 10:05 PM, Yiannis Gkoufas <johngou...@gmail.com> wrote:
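A sketch of the client-side override, assuming a writable Windows path (illustrative):

    <property>
      <name>phoenix.spool.directory</name>
      <value>C:\temp\phoenix</value>
    </property>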

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Ankit Singhal
Hi Steve, Can you check whether the properties are picked up by the SQL/application client? Regards, Ankit Singhal On Wed, Feb 24, 2016 at 11:09 PM, Steve Terrell <sterr...@oculus360.us> wrote: > HI, I hope someone can tell me what I'm doing wrong… > > I set *phoenix.sche

Re: Unexpected region splits

2016-02-25 Thread Ankit Singhal
(Split size is min(R^2 * "hbase.hregion.memstore.flush.size", "hbase.hregion.max.filesize"), where R is the number of regions of the same table hosted on the same regionserver.) You may read the article below to understand splitting policies: http://hortonworks.com/blog/apache-hbase-region-splitting-and-merging/ Regards, Ankit Singhal On Mon, Feb 15, 2016 at 8:52 PM, Pedro Gandola

Re: Cache of region boundaries are out of date - during index creation

2016-02-25 Thread Ankit Singhal
Can you try after truncating the SYSTEM.STATS table, or after deleting the records of only the parent table from SYSTEM.STATS, like below? DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME='media'; Regards, Ankit Singhal On Wed, Feb 24, 2016 at 8:16 PM, Jaroslav Šnajdr <jsna...@gmail.com> wrote:

Re: Error : starting spark-shell with phoenix client jar

2016-02-18 Thread Ankit Singhal
Hi Divya, It is fixed in 4.7; please find the JIRA for the same: https://issues.apache.org/jira/browse/PHOENIX-2608 Regards, Ankit Singhal On Thu, Feb 18, 2016 at 2:03 PM, Divya Gehlot <divya.htco...@gmail.com> wrote: > Hi, > I am getting following error while starting spark shell

Re: Problem Updating Stats

2016-03-18 Thread Ankit Singhal
> {... BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'true', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} > 1 row(s) in 0.0280 seconds

Re: Problem Updating Stats

2016-03-18 Thread Ankit Singhal
91 | > | | SYSTEM| STATS | > GUIDE_POSTS_ROW_COUNT | -5 | > > I have attached the SYSTEM.CATALOG contents. > > Thanks, > Ben > > > > On Mar 16, 2016, at 9:34 AM, Ankit Singhal <ankitsingha...@gmail.com> >

Re: [Query:]Table creation with column family in Phoenix

2016-03-11 Thread Ankit Singhal
You may check discussion from below mail chain. https://www.mail-archive.com/dev@phoenix.apache.org/msg19448.html On Fri, Mar 11, 2016 at 3:20 PM, Divya Gehlot wrote: > > Hi, > I created a table in Phoenix with three column families and Inserted the > values as shown

Re: Problem Updating Stats

2016-03-16 Thread Ankit Singhal
Yes, it seems so. Did you get any error related to SYSTEM.STATS when the client connected for the first time? Can you please describe your SYSTEM.STATS table and paste the output here? On Wed, Mar 16, 2016 at 3:24 AM, Benjamin Kim wrote: > When trying to run update status on an

Re: Problem Updating Stats

2016-03-19 Thread Ankit Singhal
g Phoenix and reinstalling it again. I had > to wipe clean all components. > > Thanks, > Ben > > > On Mar 16, 2016, at 10:47 AM, Ankit Singhal <ankitsingha...@gmail.com> > wrote: > > It seems from the attached logs that you have upgraded phoenix to 4.7 > ver

Re: Phoenix DB Migration with Flyway

2016-03-09 Thread Ankit Singhal
Awesome!! Great work Josh. On Tue, Mar 8, 2016 at 8:59 PM, James Heather wrote: > Cool. That's big news for us. > On 8 Mar 2016 2:15 p.m., "Josh Mahonin" wrote: > >> Hi all, >> >> Just thought I'd let you know that Flyway 4.0 was recently

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-04-22 Thread Ankit Singhal
ALTER TABLE SYSTEM.CATALOG ADD BASE_COLUMN_COUNT INTEGER, IS_ROW_TIMESTAMP BOOLEAN; > !quit Quit the shell and start a new session without CurrentSCN: > ./sqlline.py localhost > !describe system.catalog This should resolve the issue of the missing column. Regards, Ankit Singhal On Fri, Apr 22, 2016 at 3:0

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-04-29 Thread Ankit Singhal
and major_compaction on SYSTEM.CATALOG - when you don't see those columns, open a connection at currentSCN=9 and alter the table to add both columns - you may then set KEEP_DELETED_CELLS back to true in SYSTEM.CATALOG. Regards, Ankit Singhal On Tue, Apr 26, 2016 at 11:26 PM, Arun

Re: Extract report from phoenix table

2016-04-25 Thread Ankit Singhal
Sanooj, it is not necessary that output be written only to a table when using MR; you can have your own custom reducer with an appropriate OutputFormat set in the driver. Similar solutions with Phoenix are: 1. Phoenix MR 2. Phoenix Spark 3. Phoenix Pig On Thu, Apr 21, 2016 at 11:06 PM, Sanooj

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-04-30 Thread Ankit Singhal
sqlline.Commands.execute(Commands.java:822) > at sqlline.Commands.sql(Commands.java:732) > at sqlline.SqlLine.dispatch(SqlLine.java:808) > at sqlline.SqlLine.begin(SqlLine.java:681) > at sqlline.SqlLine.start(SqlLine.java:398) > at sqlline.SqlLine.main(SqlLine.java:292) > > Thanks, > Arun > >

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-04-30 Thread Ankit Singhal
00CATALOG\x00IS_ROW_TIMESTAMP\x00", STOPROW => > "\x00SYSTEM\x00CATALOG\x00IS_ROW_TIMESTAMP_\x00"} > > We still see the same error. > > > > Do we need to explicitly delete from phoenix as well? > > > > Thanks, > > Bharathi. > > > &

Re: java.io.EOFException on phoenix table

2016-04-30 Thread Ankit Singhal
It seems the guideposts collected for your table got corrupted somehow. You may try deleting the guideposts for that physical table from the SYSTEM.STATS table. On Sun, May 1, 2016 at 10:20 AM, Michal Medvecky wrote: > If anyone experiences the same problem (hello, google!), here is my >

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-04-30 Thread Ankit Singhal
, Bavineni, Bharata < bharata.bavin...@epsilon.com> wrote: > Ankit, > > We will try restarting HBase cluster. Adding explicit “put” in HBase with > timestamp as 9 for these two columns has any side effects? > > > > Thank you, > > Bharathi. > > > > *From

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-05-01 Thread Ankit Singhal
.phoenix.schema.MetaDataClient.processMutationResult(MetaDataClient.java:2345) > > at > org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:2641) > > > > Any other suggestions? > > > > Thank you, > > Bharathi. > > > >

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-04-25 Thread Ankit Singhal
xecuteMutation(PhoenixStatement.java:312) >> at >> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1435) >> at sqlline.Commands.execute(Commands.java:822) >> at sqlline.Commands.sql(Commands.java:732) >> at sq

Re: Global Index stuck in BUILDING state

2016-05-11 Thread Ankit Singhal
Try recreating your index with ASYNC and update the index using the IndexTool, so that you don't face timeouts or get stuck during the initial load of huge data. https://phoenix.apache.org/secondary_indexing.html On Tue, May 10, 2016 at 7:26 AM, anupama agarwal wrote: >
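A minimal sketch, assuming data table MY_TABLE and index MY_INDEX (names and output path illustrative):

    -- recreate the index without blocking on the initial build
    CREATE INDEX MY_INDEX ON MY_TABLE (COL1) ASYNC;

    # then build and populate it with the MR-based IndexTool
    hbase org.apache.phoenix.mapreduce.index.IndexTool \
      --data-table MY_TABLE --index-table MY_INDEX \
      --output-path /tmp/MY_INDEX_HFILES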

Re: [while doing select] getting exception - ERROR 1108 (XCL08): Cache of region boundaries are out of date.

2016-05-09 Thread Ankit Singhal
Yes, Vishnu, you may be hitting https://issues.apache.org/jira/browse/PHOENIX-2249, so you can try deleting the stats for the table EVENTS_PROD. On Mon, May 9, 2016 at 10:56 AM, vishnu rao wrote: > hi guys need help ! > > i was getting this exception while doing a select.

Re: SYSTEM.CATALOG table - VERSIONS attribute

2016-05-09 Thread Ankit Singhal
Yes, you can, provided you don't need to go back in time to a schema version older than 5. On Mon, May 9, 2016 at 8:16 AM, Bavineni, Bharata < bharata.bavin...@epsilon.com> wrote: > Hi, > > SYSTEM.CATALOG table is created with VERSIONS => '1000' by default. Can we > change this value to 5 or
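A one-line sketch from the HBase shell, assuming the catalog's default column family '0' (verify with describe 'SYSTEM.CATALOG' first):

    hbase> alter 'SYSTEM.CATALOG', {NAME => '0', VERSIONS => 5}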

Re: phoenix : timeouts for long queries

2016-05-13 Thread Ankit Singhal
You can try increasing phoenix.query.timeoutMs (and hbase.client.scanner.timeout.period) on the client. https://phoenix.apache.org/tuning.html On Fri, May 13, 2016 at 1:51 PM, 景涛 <844300...@qq.com> wrote: > When I query from a very big table > It get errors as follow: > >
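A sketch of the client-side hbase-site.xml, assuming a 10-minute budget (values illustrative):

    <property>
      <name>phoenix.query.timeoutMs</name>
      <value>600000</value>
    </property>
    <property>
      <name>hbase.client.scanner.timeout.period</name>
      <value>600000</value>
    </property>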

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-05-01 Thread Ankit Singhal
you looking > for any specific data? so that I can filter and send the results? > > > > Thank you for your time looking into this, > > Bharathi. > > > > *From:* Ankit Singhal [mailto:ankitsingha...@gmail.com] > *Sent:* Sunday, May 01, 2016 1:01 AM > *To:* user@phoe

Re: phoenix.spool.directory

2016-04-20 Thread Ankit Singhal
Can you please check that the hbase-site.xml (where you are setting this property) is on the Phoenix classpath? On Wed, Apr 20, 2016 at 3:10 AM, wrote: > I am having trouble setting the "phoenix.spool.directory" > (QueryServices.SPOOL_DIRECTORY) property value. Any

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-04-20 Thread Ankit Singhal
scan 'SYSTEM.CATALOG', {RAW=>true} Regards, Ankit Singhal On Wed, Apr 20, 2016 at 4:25 AM, Arun Kumaran Sabtharishi < arun1...@gmail.com> wrote: > After further investigation, we found that the Phoenix upsert query > SYSTEM.CATALOG has IS_ROW_TIMESTAMP column, but PTableImpl.getColumn

Re: problems about using phoenix over hbase

2016-04-20 Thread Ankit Singhal
Hi, I think when you are doing a put from the shell, the value goes in as a String, not as an Integer, so Phoenix can decode it only as VARCHAR. If you want to put an Integer into your HBase table, use the byte representation of the integer via the Java API instead. Regards, Ankit Singhal On Wed, Apr 20, 2016 at 8:00
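A minimal Java sketch of the difference (table, row, and column names are hypothetical). The shell's put 'tbl','row1','cf:q','42' stores the string "42"; the client below stores the 4-byte integer encoding instead:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class IntPut {
        public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("tbl"))) {
                Put put = new Put(Bytes.toBytes("row1"));
                // 4-byte big-endian int; for non-negative values this matches
                // Phoenix UNSIGNED_INT (Phoenix INTEGER also flips the sign bit)
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(42));
                table.put(put);
            }
        }
    }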

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-04-21 Thread Ankit Singhal
is blocking us in the production environment. Any help to > resolve or workaround is highly appreciated. > > Thanks, > Arun > > > On Wed, Apr 20, 2016 at 12:01 PM, Ankit Singhal <ankitsingha...@gmail.com> > wrote: > >> It's ok if you can just post after grep for CATALOG

Re: Problems with Phoenix bulk loader when using row_timestamp feature

2016-08-11 Thread Ankit Singhal
Samarth, filed PHOENIX-3176 for the same. On Wed, Aug 10, 2016 at 11:42 PM, Ryan Templeton wrote: > 0: jdbc:phoenix:localhost:2181> explain select count(*) from > historian.data; (explain plan output truncated)

[ANNOUNCE] Apache Phoenix 4.8.0 released

2016-08-12 Thread Ankit Singhal
Apache Phoenix enables OLTP and operational analytics for Hadoop through SQL support and integration with other projects in the ecosystem such as Spark, HBase, Pig, Flume, MapReduce and Hive. We're pleased to announce our 4.8.0 release which includes: - Local Index improvements[1] - Integration

Re: Errors while launching sqlline

2016-07-13 Thread Ankit Singhal
Hi Vasanth, The RC for 4.8 (with support for HBase 1.2) is just out today; you can try with the latest build. Regards, Ankit Singhal On Thu, Jul 14, 2016 at 10:06 AM, Vasanth Bhat <vasb...@gmail.com> wrote: > Thanks James. > > When are the early builds going to be availab

Re: Java Query timeout

2016-08-09 Thread Ankit Singhal
a timeout period. You need to increase the scanner timeout period along with the properties you mentioned above: hbase.client.scanner.timeout.period 6 Regards, Ankit Singhal On Mon, Aug 8, 2016 at 6:55 PM, <kannan.ramanat...@barclays.com> wrote: > Thanks Brian. I have added HBASE

Re: PhoenixFunction

2016-06-29 Thread Ankit Singhal
(FloorYearExpression.class), CeilWeekExpression(CeilWeekExpression.class), CeilMonthExpression(CeilMonthExpression.class), CeilYearExpression(CeilYearExpression.class); Regards, Ankit Singhal On Wed, Jun 29, 2016 at 9:08 AM, Yang Zhang <zhang.yang...@gmail.com> wrote: > when I use the

Re: For multiple local indexes on Phoenix table only one local index table is being created in HBase

2016-06-29 Thread Ankit Singhal
Hi Vamsi, Phoenix uses a single local index table for all the local indexes created on a particular data table. Rows are differentiated by a local index sequence ID and filtered as needed when a query uses a particular index. Regards, Ankit Singhal On Tue, Jun 28, 2016 at 4:18 AM, Vamsi

Re: phoenix explain plan not showing any difference after adding a local index on the table column that is used in query filter

2016-06-29 Thread Ankit Singhal
, you can read https://phoenix.apache.org/secondary_indexing.html Regards, Ankit Singhal On Tue, Jun 28, 2016 at 4:25 AM, Vamsi Krishna <vamsi.attl...@gmail.com> wrote: > Team, > > I'm using HDP 2.3.2 (HBase : 1.1.2, Phoenix : 4.4.0). > *Question: *phoenix explain pl

Re: How to troubleshoot 'Could not find hash cache for joinId' which is failing always for some users and never for others

2016-08-15 Thread Ankit Singhal
rched data is different. Yes, it could be possible, because some users are hitting only a certain key range, depending upon the first column (prefix) of the row key. Regards, Ankit Singhal On Mon, Aug 15, 2016 at 6:29 PM, Chabot, Jerry <jerry.p.cha...@hpe.com> wrote: > I’ve added the hint

Re: ROW_TIMESTAMP weird behaviour

2017-02-07 Thread Ankit Singhal
I think you are also hitting https://issues.apache.org/jira/browse/PHOENIX-3176. On Tue, Feb 7, 2017 at 2:18 PM, Dhaval Modi wrote: > Hi Pedro, > > Upserted key are different. One key is for July month & other for January > month. > 1. '2017-*07*-02T15:02:21.050' > 2.

Re: Phoenix tracing did not start

2017-01-19 Thread Ankit Singhal
Hi Pradheep, It seems tracing is not distributed as a part of HDP 2.4.3.0, please work with your vendor for an appropriate solution. Regards, Ankit Singhal On Thu, Jan 19, 2017 at 4:48 AM, Pradheep Shanmugam < pradheep.shanmu...@infor.com> wrote: > Hi, > > I am using hdp 2.

Re: Cannot select data from a system table

2016-08-21 Thread Ankit Singhal
Aaron, you can escape the reserved-keyword check with double quotes: SELECT * FROM SYSTEM."FUNCTION" Regards, Ankit Singhal On Fri, Aug 19, 2016 at 10:47 PM, Aaron Molitor <amoli...@splicemachine.com> wrote: > Looks like the SYSTEM.FUNCTION table

Re: CONVERT_TZ for TIMESTAMP column

2016-09-02 Thread Ankit Singhal
(*"iso_8601" TIMESTAMP NOT NULL* PRIMARY KEY); upsert into test values(TO_DATE('2016-04-01 22:45:00')); select * from test; +--+ | iso_8601 | +--+ | 2016-04-01 22:45:00.000 | +--+ Regards, Ankit Singhal

Re: Cannot select data from a system table

2016-08-31 Thread Ankit Singhal
Ted Yu <yuzhih...@gmail.com> wrote: >> >> Ankit: >> Is this documented somewhere ? >> >> Thanks >> >> On Sun, Aug 21, 2016 at 6:07 AM, Ankit Singhal <ankitsingha...@gmail.com> >> wrote: >> >>> Aaron, >>> >>> you c

Re: Creating view on a phoenix table throws Mismatched input error

2016-10-07 Thread Ankit Singhal
Currently, Phoenix doesn't support projecting selective columns of a table, or expressions, in a view. You need to project all the columns with (SELECT *). Please see the section "Limitations" on this page or PHOENIX-1507: https://phoenix.apache.org/views.html On Thu, Oct 6, 2016 at 10:05 PM, Mich
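For example (names illustrative), only the first form is accepted:

    CREATE VIEW MY_VIEW AS SELECT * FROM MY_TABLE WHERE KIND = 'A';  -- supported
    -- CREATE VIEW MY_VIEW AS SELECT COL1, COL2 FROM MY_TABLE;       -- not supported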

Re: can I prevent rounding of a/b when a and b are integers

2016-09-21 Thread Ankit Singhal
Adding some more workarounds if you are working on columns: select cast(col_int1 as decimal)/col_int2; select col_int1*1.0/3; On Wed, Sep 21, 2016 at 8:33 PM, James Taylor wrote: > Hi Noam, > Please file a JIRA. As a workaround, you can do SELECT 1.0/3. > Thanks, >

Re: Phoenix ResultSet.next() takes a long time for first row

2016-09-22 Thread Ankit Singhal
Please share some more details about the query, the DDL, and the explain plan. In Phoenix, there are cases where we do some server-side processing the first time rs.next() is called, but subsequent next() calls should be faster. On Thu, Sep 22, 2016 at 9:52 AM, Sasikumar Natarajan wrote: >

Re: Phoenix ResultSet.next() takes a long time for first row

2016-09-28 Thread Ankit Singhal
; (col1 VARCHAR NOT NULL, >>> col2 VARCHAR NOT NULL, >>> col3 INTEGER NOT NULL, >>> col4 INTEGER NOT NULL, >>> col5 VARCHAR NOT NULL, >>> col6 VARCHAR NOT NULL, >>> col7 TIMESTAMP NOT NULL, >>> col8 TIMESTAMP NOT NULL, >>>

Re: PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found

2016-10-23 Thread Ankit Singhal
You need to increase the Phoenix timeout as well (phoenix.query.timeoutMs). https://phoenix.apache.org/tuning.html On Sun, Oct 23, 2016 at 3:47 PM, Parveen Jain wrote: > hi All, > > I just realized that phoenix doesn't provide "group by" and "distinct" > methods if we use

Re: Analytic functions in Phoenix

2016-10-23 Thread Ankit Singhal
I think a query with an OVER clause can be rewritten using SELF JOINs in many cases. Regards, Ankit Singhal On Sun, Oct 23, 2016 at 3:11 PM, Mich Talebzadeh <mich.talebza...@gmail.com> wrote: > Hi, > > I was wondering whether analytic functions work in Phoenix. For example > something
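A sketch of one such rewrite, assuming a table T(K, GRP, VAL) (illustrative): a per-group max that would use OVER (PARTITION BY GRP) becomes a join against a grouped subquery:

    -- analytic form (not supported):
    --   SELECT K, VAL, MAX(VAL) OVER (PARTITION BY GRP) FROM T;
    -- self-join rewrite:
    SELECT t.K, t.VAL, m.MX
    FROM T t
    JOIN (SELECT GRP, MAX(VAL) AS MX FROM T GROUP BY GRP) m
      ON t.GRP = m.GRP;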

Re: Setting default timezone for Phoenix

2016-10-23 Thread Ankit Singhal
https://phoenix.apache.org/language/functions.html#to_date [3] https://phoenix.apache.org/tuning.html Regards, Ankit Singhal On Sun, Oct 23, 2016 at 6:15 PM, Mich Talebzadeh <mich.talebza...@gmail.com> wrote: > Hi, > > My queries in Phoenix pickup GMT timezone as default. > > I need them to def

Re: Index in Phoenix view on Hbase is not updated

2016-10-23 Thread Ankit Singhal
bq. Will bulk load from Phoenix update the underlying HBase table? Yes. Instead of using ImportTsv, use the Phoenix CSV bulk load tool. bq. Do I need to replace the Phoenix view on HBase with CREATE TABLE? You can still keep the VIEW. Regards, Ankit Singhal On Sun, Oct 23, 2016 at 6:37 PM, Mich Talebzadeh

Re: Ordering of numbers generated by a sequence

2016-10-17 Thread Ankit Singhal
JFYI, phoenix.query.rowKeyOrderSaltedTable is deprecated and is not honored from v4.4, so please use phoenix.query.force.rowkeyorder instead. I have updated the docs (https://phoenix.apache.org/tuning.html) accordingly. On Mon, Oct 17, 2016 at 3:14 AM, Josh Elser wrote: >

Re: huge query result miss some fields

2016-11-24 Thread Ankit Singhal
Do you have bigger rows? If yes, it may be similar to https://issues.apache.org/jira/browse/PHOENIX-3112, and increasing hbase.client.scanner.max.result.size can help. On Thu, Nov 24, 2016 at 6:00 PM, 金砖 wrote: > thanks Abel. > > > I tried update statistics, it did not

Re: Inconsistent null behavior

2016-12-06 Thread Ankit Singhal
@James, is this similar to https://issues.apache.org/jira/browse/PHOENIX-3112? @Mac, can you try increasing hbase.client.scanner.max.result.size and see if it helps? On Tue, Dec 6, 2016 at 10:53 PM, James Taylor wrote: > Looks like a bug to me. If you can reproduce the issue

Re: View timestamp on existing table (potential defect)

2017-04-20 Thread Ankit Singhal
This is because we cap the scan with the current timestamp, so anything beyond the current time will not be seen. This is needed mainly to prevent UPSERT SELECT from seeing its own new writes. https://issues.apache.org/jira/browse/PHOENIX-3176 On Thu, Apr 20, 2017 at 11:52 PM, Randy

Re: Bad performance of the first resultset.next()

2017-04-20 Thread Ankit Singhal
+1 for Jonathan's comment. -- Take multiple jstacks of the client during the query time and check which thread is working for long. If you find merge sort is the bottleneck, then removing salting and using a SERIAL scan will help for the query given above. Ensure that your queries are not causing

Re: load kafka to phoenix

2017-04-21 Thread Ankit Singhal
It seems we don't pack the dependencies into the phoenix-kafka jar yet. Try including flume-ng-configuration-1.3.0.jar in your classpath to resolve the above issue. On Thu, Apr 20, 2017 at 9:27 AM, lk_phoenix wrote: > hi,all: > I try to read data from kafka_2.11-0.10.2.0 , I get

Re: Passing arguments (schema name) to .sql file while executing from command line

2017-04-21 Thread Ankit Singhal
If you are using Phoenix 4.8 onwards, then you can try giving the ZooKeeper string appended with a schema, like below: psql.py <zookeeper>;schema=<schema_name> /create_table.sql e.g. psql.py zookeeper1;schema=TEST_SCHEMA /create_table.sql On Sat, Apr 15, 2017 at 2:25 AM, sid red wrote: > Hi, > > I am

Re: phoenix.schema.isNamespaceMappingEnabled

2017-04-20 Thread Ankit Singhal
Sudhir, Relevant JIRA for the same. https://issues.apache.org/jira/browse/PHOENIX-3288 Let me see if I can crack this for the coming release. On Fri, Apr 21, 2017 at 8:42 AM, Josh Elser wrote: > Sudhir, > > Didn't meant to imply that asking the question was a waste of

Re: Limit of phoenix connections on client side

2017-04-21 Thread Ankit Singhal
bq. 1. How many concurrent phoenix connections can the application open? I don't think there is any limit on this. bq. 2. Are there any limitations regarding the number of connections I should consider? I think as many as your JVM permits. bq. 3. Is the client side config parameter

Re: Can set default value for column in phoenix ?

2017-07-14 Thread Ankit Singhal
Phoenix 4.9 onwards you can specify an expression as the default for a column (I'm not sure if there is any limitation called out). For syntax: https://phoenix.apache.org/language/index.html#column_def For examples:
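A minimal DDL sketch (names illustrative; constant defaults shown, as that is the common case):

    CREATE TABLE EVENTS (
        ID BIGINT NOT NULL PRIMARY KEY,
        STATUS VARCHAR DEFAULT 'NEW',
        RETRIES INTEGER DEFAULT 0
    );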

Re: Apache Spark Integration

2017-07-17 Thread Ankit Singhal
You can take a look at our IT tests for phoenix-spark module. https://github.com/apache/phoenix/blob/master/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala On Mon, Jul 17, 2017 at 9:20 PM, Luqman Ghani wrote: > > -- Forwarded message

Re: RegionNotServingException when using Phoenix

2017-07-14 Thread Ankit Singhal
Yes, the value 1 for "hbase.client.retries.number" is the root cause of the above exception. A general guideline/formula could be (not official): (time taken for region movement in your cluster + ZooKeeper timeout) / hbase.client.pause. Or, with intuition, you can set it to at least 10. On Fri, Jul 14,

Re: Can I reset a row's TTL?

2017-08-07 Thread Ankit Singhal
You can do this by UPSERT SELECT. On Mon, Aug 7, 2017 at 4:13 PM, Ankit Singhal <ankitsingha...@gmail.com> wrote: > You can read KVs for that user and write them again with current time. > > On Sun, Aug 6, 2017 at 8:44 PM, Cheyenne Forbes < > cheyenne.osanu.for...@gmail
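A minimal sketch, assuming a SESSIONS table keyed by USER_ID (illustrative): rewriting the row stamps its cells with the current server time, which restarts the TTL clock:

    UPSERT INTO SESSIONS SELECT * FROM SESSIONS WHERE USER_ID = 'u123';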

Re: Can I reset a row's TTL?

2017-08-07 Thread Ankit Singhal
You can read KVs for that user and write them again with current time. On Sun, Aug 6, 2017 at 8:44 PM, Cheyenne Forbes < cheyenne.osanu.for...@gmail.com> wrote: > I'm using phoenix to store user sessions. The table's TTL is set to 3 days > and I'd like to have the 3 days start over if the user

Re: Renaming table schema in hbase

2017-06-08 Thread Ankit Singhal
d you will just save time by doing so. Regards, Ankit Singhal On Thu, Jun 8, 2017 at 1:34 PM, Michael Young <yomaiq...@gmail.com> wrote: > I have a doubt about step 2 from Ankit Singhal's response in > http://apache-phoenix-user-list.1124778.n5.nabble.com/Phoeni > x-4-4-Rename-tabl

Re: phoenix query modtime

2017-06-23 Thread Ankit Singhal
> way to do this? > > Nan > > On Fri, Jun 23, 2017 at 1:23 AM, Ankit Singhal <ankitsingha...@gmail.com> > wrote: > >> If you have composite columns in your row key of HBase table and they are >> not formed through Phoenix then you can't access an individu

Re: phoenix query modtime

2017-06-23 Thread Ankit Singhal
If you have composite columns in the row key of your HBase table and they were not formed through Phoenix, then you can't access an individual column of the primary key through Phoenix SQL either. Try composing the whole PK and using it in a filter, or check whether you can use regex functions[1] or the LIKE operator.

Re: Getting too many open files during table scan

2017-06-23 Thread Ankit Singhal
bq. A leading date column is in our schema model. Don't you have any other column which is obligatory in read queries but not monotonically increasing during ingestion? A pre-split on such a column can help you avoid hot-spotting. For a parallelism/performance comparison, have you tried running a query on a

Re: Phoenix UDF jar cache?

2017-06-24 Thread Ankit Singhal
Yes, this is a limitation[1] of the current implementation of UDFs and the class loader used. It is recommended either to reboot the cluster if the implementation changes, or to use a new jar name. [1] https://phoenix.apache.org/udf.html On Wed, May 3, 2017 at 4:41 AM, Randy Hu wrote: >

Re: How to create new table as existing table with same structure and data ??

2017-06-23 Thread Ankit Singhal
You can map an existing HBase table to a view or table in Phoenix, but we expect the HBase table name to match the Phoenix table name. (However, you can rename your existing HBase table with snapshot and restore.) The DDLs you are using to map the table are not correct or are not supported. You can
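A sketch of the rename-and-map path, with hypothetical names and columns:

    hbase> disable 'old_table'
    hbase> snapshot 'old_table', 'old_table_snap'
    hbase> clone_snapshot 'old_table_snap', 'MY_TABLE'

    -- then map it in Phoenix under the matching name:
    CREATE VIEW "MY_TABLE" ("pk" VARCHAR PRIMARY KEY, "cf"."col1" VARCHAR);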

Re: Safest migration path for Apache Phoenix 4.5 to 4.9

2017-06-27 Thread Ankit Singhal
If you don't have secondary indexes, views, or immutable tables, then an upgrade from 4.5 to 4.9 will just add some new columns in SYSTEM.CATALOG and re-create the STATS table. But we still have not tested an upgrade from 4.5 to 4.9; it is always advisable to stop at every two versions (especially

Re: Why can Cache of region boundaries are out of date be happening in 4.5.x?

2017-05-20 Thread Ankit Singhal
It could be because of stale stats due to the merging of region or something, you can try deleting the stats from SYSTEM.STATS. http://apache-phoenix-user-list.1124778.n5.nabble.com/Cache-of-region-boundaries-are-out-of-date-during-index-creation-td1213.html On Sat, May 20, 2017 at 8:29 PM, Pedro

Re: checking-in on hbase 1.3.1 support

2017-05-25 Thread Ankit Singhal
The next release of Phoenix (v4.11.0) will support HBase 1.3.1 (see PHOENIX-3603), and no timeline has been decided yet for the release. But you may expect some updates in the next 1-2 months. On Thu, May 25, 2017 at 3:32 AM, Anirudha Jadhav wrote: > hi, > > just checking in, any

Re: Fwd: Apache Phoenix

2017-05-03 Thread Ankit Singhal
some other purpose. Regards, Ankit Singhal On Tue, May 2, 2017 at 7:55 PM, Josh Elser <els...@apache.org> wrote: > Planning for unexpected outages with HBase is a very good idea. At a > minimum, there will likely be points in time where you want to change HBase > configuration, app

Re: Upsert-Select NullPointerException

2017-05-05 Thread Ankit Singhal
I think you have a salted table and you are hitting the bug below. https://issues.apache.org/jira/browse/PHOENIX-3800 Do you mind trying out the patch? We will have this fixed in 4.11 at least (probably 4.10.1 too). On Fri, May 5, 2017 at 11:06 AM, Bernard Quizon <

Re: Row count

2017-09-13 Thread Ankit Singhal
The best approach is to do "SELECT COUNT(*) FROM MYTABLE" with an index; since the index table has less data, it can be read faster. If you have time-series data, or your data is always incremental with some ID, then you can do an incremental count with ROW_TIMESTAMP filters or an ID filter. bq. however the result
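A sketch of the incremental pattern, assuming a ROW_TIMESTAMP column named CREATED_TS (illustrative): count only the new window and add it to a previously stored total:

    SELECT COUNT(*) FROM MYTABLE
    WHERE CREATED_TS >= TO_DATE('2017-09-13 00:00:00');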

Re: Phoenix CSV Bulk Load fails to load a large file

2017-09-07 Thread Ankit Singhal
bq. This runs successfully if I split this into 2 files, but I'd like to avoid doing that. Do you run a different job for each file? If your HBase cluster is not co-located with your YARN cluster, then it may be possible that copying of a large HFile is timing out (this may happen due to the fewer

Re: Phoenix CSV Bulk Load Tool Date format for TIMESTAMP

2017-09-07 Thread Ankit Singhal
Yes, you can write your own custom mapper to do conversions (look at CsvToKeyValueMapper, CsvUpsertExecutor#createConversionFunction), or consider chaining jobs (where a first job with multiple inputs standardizes the date format, followed by the CsvBulkLoadTool), or writing a custom

Re: Phoenix system tables in multitenant setup

2017-10-23 Thread Ankit Singhal
bq. But for tables inside, I am assuming the user needs access to the Phoenix SYSTEM tables (and CREATE rights for the namespace in question on the HBase level)? Is that the case? And if so, what are they able to see, as in, only their information, or all information from other tenants as well? If

Re: java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.

2018-05-24 Thread Ankit Singhal
Probably you are affected by https://issues.apache.org/jira/browse/HBASE-20172. Are you on JDK 1.7 or lower? Can you upgrade to JDK 1.8 and check? On Sun, May 6, 2018 at 9:29 AM, anil gupta wrote: > As per following line: > "Caused by: java.lang.RuntimeException: Could

Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-13 Thread Ankit Singhal
"Configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks" Regards, Ankit Singhal On Sun, Aug 12, 2018 at 6:46 PM, 倪项菲 wrote: > Thanks all. >

Re: [DISCUSS] Include python-phoenixdb into Phoenix

2018-03-08 Thread Ankit Singhal
able for people from the ASF. I don't > >> know what, if any, infrastructure exists to distribute Python modules. > >> https://packaging.python.org/glossary/#term-built-distribution > >> > >> I feel like a sub-directory in the phoenix repository would be th

[DISCUSS] Include python-phoenixdb into Phoenix

2018-03-01 Thread Ankit Singhal
-phoenixdb [3] https://github.com/Pirionfr/pyPhoenix [4] https://issues.apache.org/jira/browse/PHOENIX-4636 Regards, Ankit Singhal On Tue, Apr 11, 2017 at 1:30 AM, James Taylor <jamestay...@ap

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-09-26 Thread Ankit Singhal
You might be hitting PHOENIX-4785 (https://jira.apache.org/jira/browse/PHOENIX-4785); you can apply the patch on top of 4.14 and see if it fixes your problem. Regards, Ankit Singhal On Wed, Sep 26, 2018 at 2:33 PM Batyrshin Alexander <0x62...@gmail.com> wrote: > Any advices?

Re: MutationState size is bigger than maximum allowed number of bytes

2018-09-20 Thread Ankit Singhal
Regards, Ankit Singhal On Thu, Sep 20, 2018 at 10:24 AM Batyrshin Alexander <0x62...@gmail.com> wrote: > Nope, it was client side config. > Thank you for response. > > On 20 Sep 2018, at 05:36, Jaanai Zhang wrote: > > Are you configuring these on the server sid

Re: JDBC Connection to Apache Phoenix failing

2018-12-04 Thread Ankit Singhal
Is connecting and running some commands through the HBase shell working? As per the stack trace, it seems your HBase is not up; look at the master and regionserver logs for errors. On Tue, Dec 4, 2018 at 4:17 AM Raghavendra Channarayappa < raghavendra.channaraya...@myntra.com> wrote: > Can someone

Re: Hbase vs Phienix column names

2019-01-08 Thread Ankit Singhal
String phoenixColumnName = pTable.getColumnForColumnQualifier("0".getBytes(), hbaseColumnQualifierBytes).getName(); Regards, Ankit Singhal On Tue, Jan 8, 2019 at 10:03 AM Josh Elser wrote: > (from the peanut-gallery) > > That sounds to me like a useful utility to share with others if you're > going to write

Re: Slow query on Secondary Index

2018-09-18 Thread Ankit Singhal
To better understand the problem, we may require your DDL for both the indexes and the data table, and also the query using your secondary index. And please try some of the tuning documented at https://phoenix.apache.org/secondary_indexing.html and see if it helps. On Tue, Sep 18, 2018 at 11:25 AM Josh

Re: split count for mapreduce jobs with PhoenixInputFormat

2019-01-30 Thread Ankit Singhal
As Thomas said, the number of splits will be equal to the number of guideposts available for the table, or the ones required to cover the filter. If you are seeing one split per region, then either stats are disabled or the guidepost width (phoenix.stats.guidepost.width) is set higher than the size of the region, so try reducing the

Re: Is there any way to using appropriate index automatically?

2019-08-20 Thread Ankit Singhal
code to fix the issue, so the patch would really be appreciated. Also, can you try running "select a,b,c,d,e,f,g,h,i,j,k,m,n from TEST_PHOENIX.APP where c=2 and h = 1 limit 5" and see if the index is getting used? Regards, Ankit Singhal On Tue, Aug 20, 2019 at 1:49 AM you Zhuang wrote
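If the optimizer still doesn't pick the index, it can be forced with a hint; a sketch assuming an index named IDX_C on the APP table (illustrative):

    SELECT /*+ INDEX(APP IDX_C) */ a,b,c,d,e,f,g,h,i,j,k,m,n
    FROM TEST_PHOENIX.APP
    WHERE c = 2 AND h = 1 LIMIT 5;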

Re: Buckets VS regions

2019-08-20 Thread Ankit Singhal
sensitive information about your environment and data (hbase:meta has ip-addresses/hostnames and SYSTEM.STATS has data row keys), so upload only if you think it's test data and the hostnames have no significance. Thanks, Ankit Singhal On Mon, Aug 19, 2019 at 11:17 PM venkata subbarayudu wrote

Re: Multi-Tenancy and shared records

2019-09-03 Thread Ankit Singhal
>> If not possible I guess we have to look at doing something at the HBase level. As Josh said, it's not yet supported in Phoenix, though you may try using HBase's cell-level security with some Phoenix internal APIs; let us know if it works for you. Sharing sample code if you want to try: /**
