Re: Too many connections from / - max is 60

2020-06-16 Thread anil gupta
has a lot of indexing improvements related to > consistency and also performance. > > -- > Sukumar > > On Mon, Jun 8, 2020, 10:15 PM anil gupta wrote: > >> You were right from the beginning. It is a problem with Phoenix secondary >> index! >> I tried 4LW zk com

Re: Too many connections from / - max is 60

2020-06-08 Thread anil gupta
nnections. These numbers should match (X connections from a > RS to a ZK, and X connections in the Java RS process). The focus would > need to be on what opens a new connection and what is not properly > closing that connection (in every case). > > On 6/3/20 4:57 AM, anil gupta wrote:

Re: Too many connections from / - max is 60

2020-06-03 Thread anil gupta
> which results in a new ZK connection. There have certainly been bugs > like that in the past (speaking generally, not specifically). > > On 6/1/20 5:59 PM, anil gupta wrote: > > Hi Folks, > > > > We are running into HBase problems due to hitting the limit of ZK >

Re: timestamp and pre-existing HBase tables

2020-03-18 Thread anil gupta
to add a timestamp with view, I get an error “Declaring > a column as row_timestamp is not allowed for views" > > > > So is there a way to take advantage of built-in timestamps on preexisting > HBase tables? If so, could someone please point me in the right direction? > > > > Thanks! > > --Willie > > > -- Thanks & Regards, Anil Gupta

Re: Apache Phoenix website search seems to be broken

2019-12-13 Thread anil gupta
> I'll drop it from the website for now. Maybe we can find something akin > to a `site:phoenix.apache.org ` Google search that we can embed? > > On 12/13/19 4:42 PM, anil gupta wrote: > > Hi, > > > > When i try to use the search feature on https://phoenix.apa

Apache Phoenix website search seems to be broken

2019-12-13 Thread anil gupta
Hi, When I try to use the search feature on https://phoenix.apache.org/ it takes me to: http://www1.search-hadoop.com/?subid4=1576273131.0028120806 and there are no results. Is this a temporary error, or is the search-hadoop website gone? -- Thanks & Regards, Anil Gupta

Re: Phoenix non-Kerberos security ?

2019-11-04 Thread anil gupta
workaround would be to put Phoenix query server behind a homegrown webservice that authenticates and authorizes the users before forwarding the request to Queryserver. HTH, Anil Gupta On Mon, Nov 4, 2019 at 12:45 AM Aleksandr Saraseka wrote: > Hello community. > Does Phoenix have some kind of se

Re: Materialized views in Hbase/Phoenix

2019-09-27 Thread anil gupta
-truecarfinal IMO, Hive integration with HBase is not fully baked and it has a lot of rough edges. So, it is better to stick with native Phoenix/HBase if you care about performance and ease of operations. HTH, Anil Gupta On Wed, Sep 25, 2019 at 10:01 AM Gautham Acharya < gauth...@alleninstitute.org>

Re: A strange question about Phoenix

2019-06-19 Thread anil gupta
table. > 2) select my_column from my_table limit 1 works fine. > > However, select * from my_table limit 1; returns no row. > > Do I need to perform some extra operations? > > thanks > > > > > > > -- Thanks & Regards, Anil Gupta

Re: Heap Size Recommendation

2018-11-20 Thread anil gupta
not used for MR/Spark jobs), I think it's gonna be OK if you have a heap size of 24 GB for the RS. Hope this helps, Anil Gupta On Tue, Nov 20, 2018 at 3:45 AM Azhar Shaikh wrote: > Hi All, > > Is there any update on Heap Size Recommendation. > > Your help is greatly apprecia

Re: Row Scan In HBase Not Working When Table Created With Phoenix

2018-07-28 Thread anil gupta
olumn=0:LNAME, >> timestamp=1532651151877, value=B >> >> \x80\x00\x00\x02column=0:_0, >> timestamp=1532651151877, value=x >> >> \x80\x00\x00\x03column=0:FNAME, >> timestamp=1532651164899, value=C >> >> \x80\x00\x00\x03column=0:LNAME, >> timestamp=1532651164899, value=B >> >> >> >> -- >> Sent from: http://apache-phoenix-user-list.1124778.n5.nabble.com/ >> > > -- Thanks & Regards, Anil Gupta

Re: java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.

2018-05-06 Thread anil gupta
> CompatibilitySingletonFactory.java:59) > ... 47 more > Caused by: java.lang.IllegalAccessError: tried to access class > org.apache.hadoop.metrics2.lib.MetricsInfoImpl from class > org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry > at org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry.newGauge( > DynamicMetricsRegistry.java:139) > at org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl.( > MetricsZooKeeperSourceImpl.java:59) > at org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl.( > MetricsZooKeeperSourceImpl.java:51) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at sun.reflect.NativeConstructorAccessorImpl.newInstance( > NativeConstructorAccessorImpl.java:62) > at sun.reflect.DelegatingConstructorAccessorImpl.newInstance( > DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at java.lang.Class.newInstance(Class.java:442) > at java.util.ServiceLoader$LazyIterator.nextService( > ServiceLoader.java:380) > > -- Thanks & Regards, Anil Gupta

Re: hotspot in System.catalog table

2018-04-13 Thread anil gupta
We saw at least a 5x improvement in upsert performance from our streaming app just by altering the tables and adding UPDATE_CACHE_FREQUENCY=6 to all of them. Overall, our cluster, the system.catalog table, and our apps look happier. Thanks Again! On Thu, Apr 12, 2018 at 11:37 PM, anil gupta wrote
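A minimal sketch of the ALTER TABLE approach described above, assuming a hypothetical table name and ZooKeeper quorum and a cache frequency of 60000 ms (the exact value used in the thread is truncated):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AlterCacheFrequency {
        public static void main(String[] args) throws Exception {
            // Hypothetical quorum and table name; adjust for your cluster.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                // Let clients cache table metadata for 60 s so SYSTEM.CATALOG
                // is not consulted on every statement compilation.
                stmt.execute("ALTER TABLE MY_TABLE SET UPDATE_CACHE_FREQUENCY = 60000");
            }
        }
    }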

Re: hotspot in System.catalog table

2018-04-12 Thread anil gupta
Thanks a lot. Going to do that and see the impact. On Thu, Apr 12, 2018 at 11:33 PM, James Taylor wrote: > It’s client side, but that’ll only impact newly created tables. You’ll > need to use the ALTER TABLE command for existing tables. > > On Thu, Apr 12, 2018 at 11:30 PM anil

Re: hotspot in System.catalog table

2018-04-12 Thread anil gupta
I have set phoenix.default.update.cache.frequency=6 in hbase-site.xml via Ambari (we barely alter schema). Is this a client-side or server-side property? On Thu, Apr 12, 2018 at 11:14 PM, anil gupta wrote: > I see. As per documentation[1], even for commits of an upsert, system.catalog is > calle

Re: hotspot in System.catalog table

2018-04-12 Thread anil gupta
into a lot of Phoenix scalability issues due to this. [1] https://phoenix.apache.org/language/index.html#options On Thu, Apr 12, 2018 at 11:06 PM, James Taylor wrote: > No, that won’t make a difference. > > On Thu, Apr 12, 2018 at 10:51 PM anil gupta wrote: > >> Thanks for quic

Re: hotspot in System.catalog table

2018-04-12 Thread anil gupta
ere[2]. > > In the future, we'll let the SYSTEM.CATALOG table span multiple regions - > keep an eye on PHOENIX-3534. > > Thanks, > James > > [1] https://phoenix.apache.org/#Altering > [2] https://phoenix.apache.org/language/index.html#options > > On

hotspot in System.catalog table

2018-04-12 Thread anil gupta
PreparedStatement (contrary to Statement), the system.catalog table is queried first. Hence, it is resulting in hotspotting. Is my analysis correct? (I have already suggested that my colleagues try using Statement instead of PS if they have to create a new one every time.) -- Thanks & Regards, Anil Gupta

Re: HBase Timeout on queries

2018-02-01 Thread anil gupta
count(*). The HBase row_counter job will be much faster than Phoenix queries. Thanks, Anil Gupta On Thu, Feb 1, 2018 at 7:35 AM, Flavio Pompermaier wrote: > I was able to make it work changing the following params (both on server > and client side and restarting hbase) and now the query a
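As a rough, hedged sketch of the row-counting approach mentioned above, the HBase RowCounter MapReduce job can be launched programmatically (the table name and cluster config are placeholders; the same job is usually run from the command line):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.RowCounter;
    import org.apache.hadoop.util.ToolRunner;

    public class CountRows {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Count rows of MY_TABLE with a MapReduce job instead of a Phoenix count(*).
            int exitCode = ToolRunner.run(conf, new RowCounter(), new String[] {"MY_TABLE"});
            System.exit(exitCode);
        }
    }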

Re: Salt Number

2017-09-11 Thread anil gupta
(non-random). Example: time-series data with time as the leading part of the rowkey. Another way to avoid salting with an incremental rowkey is to reverse the leading number of your rowkey. Example: reverse(45668) = 86654. HTH, Anil Gupta On Fri, Sep 8, 2017 at 10:23 AM, Pradheep Shanmugam < pradheep.sha
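A small illustration of both suggestions, with made-up table and column names (SALT_BUCKETS and the reversed-key trick are independent options; pick one):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SaltingExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                // Option 1: let Phoenix salt the row key to spread sequential writes.
                stmt.execute("CREATE TABLE IF NOT EXISTS EVENTS ("
                    + " EVENT_TIME TIMESTAMP NOT NULL,"
                    + " EVENT_ID BIGINT NOT NULL"
                    + " CONSTRAINT PK PRIMARY KEY (EVENT_TIME, EVENT_ID))"
                    + " SALT_BUCKETS = 16");
            }
            // Option 2: reverse the leading number of the key yourself before writing.
            String reversed = new StringBuilder("45668").reverse().toString();
            System.out.println(reversed); // prints 86654
        }
    }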

Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-21 Thread anil gupta
And I forgot to mention that we invoke our Pig scripts through Oozie. On Mon, Aug 21, 2017 at 2:20 PM, anil gupta wrote: > Sorry, can't share the Pig script. > Here is what we are registering: > REGISTER /usr/lib/phoenix/phoenix-4.7.0-HBase-1.2-client.jar; > REGISTER /usr/lib/pig/lib/p

Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-21 Thread anil gupta
ferently. >> >> One thing I did not mention because I thought it should not matter is >> that to avoid extra costs while testing, I was only running a master node >> with no slaves (no task or core nodes). Maybe lack of slaves causes >> problems not normally seen. Interestin

Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-20 Thread anil gupta
r.java:2554) >> at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2359) >> Caused by: java.lang.ClassNotFoundException: >> org.apache.phoenix.shaded.org.codehaus.jackson.jaxrs.Jackson >> JaxbJsonProvider >> at java.net.URLClassLoader.findClass(URLClassLoader.java:381) >> at java.lang.ClassLoader.loadClass(ClassLoader.java:424) >> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) >> at java.lang.ClassLoader.loadClass(ClassLoader.java:357) >> ... 28 more >> >> Has anyone been able to get org.apache.phoenix.pig.PhoenixHBaseStorage() >> to work on recent EMR versions? Please help if you can. >> >> Thank you, >> Steve >> > > -- Thanks & Regards, Anil Gupta

Re: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread anil gupta
line 106: > > checkClosed(); > > if (off < 0 || len < 0 || off > b.length - len) { > throw new ArrayIndexOutOfBoundsException(); > > You didn't get ArrayIndexOutOfBoundsException - maybe b was null ? > > On Thu, Jul 6, 2017 at 2:08 PM, anil gupta

Re: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread anil gupta
read > > Do you see similar line in region server log ? > > Cheers > > On Thu, Jul 6, 2017 at 1:48 PM, anil gupta wrote: > > > Hi All, > > > > We are running HBase/Phoenix on EMR5.2(HBase1.2.3 and Phoenix4.7) and we > running into following exception whe

org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread anil gupta
wiping out this table and rebuilding the dataset. We tried to drop the table and recreate it, but that didn't fix it. Can anyone please let us know how we can get rid of the above problem? Are we running into https://issues.apache.org/jira/browse/HBASE-16960? -- Thanks & Regards, Anil Gupta

[IndexTool NOT working] mapreduce.LoadIncrementalHFiles: Split occured while grouping HFiles

2017-05-25 Thread anil gupta
/BI.VIN_IDX/I/d0a6c4b727bb416f840ed254658f3982] failed. This is recoverable and they will be retried. 2017-05-24 18:00:11,793 INFO [main] mapreduce.LoadIncrementalHFiles: Split occured while grouping HFiles, retry attempt 1 with 15 files remaining to group or split -- Thanks & Regards, Anil Gupta

Re: Async Index Creation fails due to permission issue

2017-05-23 Thread anil gupta
; UserGroupInformation.java:1698) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)/ > > The hack i am using right now is to set the permissions manually for these > files when the IndexTool job is running. Is there a better way ? > > > > -- > View this message in context: http://apache-phoenix-user- > list.1124778.n5.nabble.com/Async-Index-Creation-fails- > due-to-permission-issue-tp3573.html > Sent from the Apache Phoenix User List mailing list archive at Nabble.com. > -- Thanks & Regards, Anil Gupta

Re: ERROR 201 (22000): Unable to load CSV files using CsvBulkLoadTool due to blank columns

2017-03-30 Thread anil gupta
ng quotes in an integer column, does > it? > > Maybe removing this quotes from the source would solve the problem. > > On 30 Mar 2017 18:43, "anil gupta" wrote: > >> Hi Brian, >> >> It seems like Phoenix is not liking ''(single quotes) in an in

Re: ERROR 201 (22000): Unable to load CSV files using CsvBulkLoadTool due to blank columns

2017-03-30 Thread anil gupta
Hi Brian, It seems like Phoenix does not like '' (single quotes) in an integer column. IMO, it would be better if Phoenix could handle that by providing an option in CsvBulkLoadTool to specify that '' be treated as null. Single quotes work fine for varchar columns. Thanks, Anil G

ERROR 201 (22000): Unable to load CSV files using CsvBulkLoadTool due to blank columns

2017-03-30 Thread anil gupta
oadTool? I have looked at the documentation but I am unable to find a solution for this. -- Thanks & Regards, Anil Gupta
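For reference, a hedged sketch of a basic CsvBulkLoadTool invocation from Java (table name, input path, and quorum are placeholders; the tool is normally run via hadoop jar with the same arguments):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.util.ToolRunner;
    import org.apache.phoenix.mapreduce.CsvBulkLoadTool;

    public class BulkLoad {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Load a comma-delimited file from HDFS into the Phoenix table MY_TABLE.
            int exitCode = ToolRunner.run(conf, new CsvBulkLoadTool(), new String[] {
                "--table", "MY_TABLE",
                "--input", "/data/my_table.csv",
                "--zookeeper", "zk-host:2181"
            });
            System.exit(exitCode);
        }
    }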

Re: Phoenix4.8+ support for HBase(1.2+) on EMR 5.x

2017-02-06 Thread anil gupta
re: > https://github.com/apache/bigtop/blob/master/bigtop.bom#L323 > > > > *From: *anil gupta > *Reply-To: *"user@phoenix.apache.org" > *Date: *Monday, February 6, 2017 at 4:22 PM > *To: *"user@phoenix.apache.org" > *Subject: *Re: Phoenix4.8+ support for HBase

Re: Phoenix4.8+ support for HBase(1.2+) on EMR 5.x

2017-02-06 Thread anil gupta
It seems like Phoenix version is mentioned here in Bigtop: https://github.com/apache/bigtop/blob/2b0d4a56fdf8a2427470c3fa454bb18bd173884b/pom.xml#L54 On Mon, Feb 6, 2017 at 4:19 PM, anil gupta wrote: > Hi Folks, > > I have been using EMR 5.x(http://docs.aws.amazon. > com

Phoenix4.8+ support for HBase(1.2+) on EMR 5.x

2017-02-06 Thread anil gupta
Phoenix4.7 with HBase1.2.3? -- Thanks & Regards, Anil Gupta

Re: Index in Phoenix view on Hbase is not updated

2016-10-23 Thread anil gupta
gt; "price_info"."price"); >>>> > >>>> > that showed the records OK at that time. I verified this using explain >>>> > >>>> > >>>> > 0: jdbc:phoenix:rhes564:2181> explain select count(1) from >>>> > "marketDataHbase"; >>>> > +-+ >>>> > | PLAN | >>>> > +-+ >>>> > | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER INDEX_DX1 | >>>> > | SERVER FILTER BY FIRST KEY ONLY | >>>> > | SERVER AGGREGATE INTO SINGLE ROW| >>>> > +-+ >>>> > >>>> > Now the issue is that the above does not show new data since build in >>>> Hbase >>>> > table unless I do the following: >>>> > >>>> > 0: jdbc:phoenix:rhes564:2181> alter index INDEX_DX1 on >>>> "marketDataHbase" >>>> > rebuild; >>>> > >>>> > >>>> > Which is not what an index should do (The covered index should be >>>> > maintained automatically). >>>> > The simple issue is how to overcome this problem? >>>> > >>>> > As I understand the index in Phoenix ia another file independent of >>>> the >>>> > original phoenix view so I assume that this index file is not updated >>>> for >>>> > one reason or other? >>>> > >>>> > Thanks >>>> >>> >> > -- Thanks & Regards, Anil Gupta

Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

2016-10-23 Thread anil gupta
own risk. Any and all responsibility for any > loss, damage or destruction of data or any other property which may arise > from relying on this email's technical content is explicitly disclaimed. > The author will in no case be liable for any monetary damages arising from > such

Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

2016-10-23 Thread anil gupta
ve Method) >>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcce >>>> ssorImpl.java:62) >>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe >>>> thodAccessorImpl.java:43) >>>> at java.lang.reflect.Method.invoke(Method.java:498) >>>> at org.apache.hadoop.util.RunJar.run(RunJar.java:221) >>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:136) >>>> >>>> I tried putting it inside "" etc but no joy I am afraid! >>>> >>>> Dr Mich Talebzadeh >>>> >>>> >>>> >>>> LinkedIn * >>>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw >>>> <https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>* >>>> >>>> >>>> >>>> http://talebzadehmich.wordpress.com >>>> >>>> >>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for >>>> any loss, damage or destruction of data or any other property which may >>>> arise from relying on this email's technical content is explicitly >>>> disclaimed. The author will in no case be liable for any monetary damages >>>> arising from such loss, damage or destruction. >>>> >>>> >>>> >>> >>> >> > -- Thanks & Regards, Anil Gupta

Re: Bulk dataload and dynamic columns

2016-10-07 Thread anil gupta
It didn't work when we tried and hence posting it here. Thanks in advance > > -- > Thanks, > Sanooj Padmakumar > -- Thanks & Regards, Anil Gupta

Re: java.util.ConcurrentModificationException at java.util.HashMap while doing Upserts

2016-10-02 Thread anil gupta
Thanks for the quick response, James. I'll try out some stuff. On Sun, Oct 2, 2016 at 5:00 PM, James Taylor wrote: > Option #2 is fine. Connections are cheap in Phoenix. > > > On Sunday, October 2, 2016, anil gupta wrote: > >> Hi James, >> >> There is a hig
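A rough sketch of option #2 (one connection per thread), with placeholder URL, table, and columns, since a PhoenixConnection itself is not safe to share across threads:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class UpsertTask implements Runnable {
        private static final String URL = "jdbc:phoenix:zk-host:2181"; // placeholder quorum

        @Override
        public void run() {
            // Each thread opens and closes its own connection instead of sharing one.
            try (Connection conn = DriverManager.getConnection(URL);
                 PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO MY_TABLE (ID, VAL) VALUES (?, ?)")) {
                ps.setLong(1, Thread.currentThread().getId());
                ps.setString(2, "value");
                ps.executeUpdate();
                conn.commit(); // Phoenix connections default to autoCommit=false
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }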

Re: java.util.ConcurrentModificationException at java.util.HashMap while doing Upserts

2016-10-02 Thread anil gupta
en multiple threads > as it's not thread safe. > Thanks, > James > > > On Sunday, October 2, 2016, anil gupta wrote: > >> Hi, >> >> We are running HDP2.3.4(HBase 1.1 and Phoenix 4.4). I have a MapReduce >> job thats writing data to a very simple Phoenix

java.util.ConcurrentModificationException at java.util.HashMap while doing Upserts

2016-10-02 Thread anil gupta
much higher throughput and volume of data but we never ran into this problem. Can anyone provide me more details on why we are getting ConcurrentModificationException while doing upserts? -- Thanks & Regards, Anil Gupta

Re: Full text query in Phoenix

2016-09-18 Thread anil gupta
Hi James, > > I found this for Hbase > https://issues.apache.org/jira/browse/HBASE-3529 > > its patch that can be added to hbase based on what I am seeing > -- Thanks & Regards, Anil Gupta

Re: Joins dont work

2016-09-18 Thread anil gupta
ks & Regards, Anil Gupta

Re: Phoenix upsert query time

2016-08-02 Thread anil gupta
r/upsert/HBaseUpsertDataLayer.java> > > On Tue, Aug 2, 2016 at 10:31 PM, Anil Gupta wrote: > >> Are you using a prepared statement for upserts? IMO, query should be >> compiled only once when prepared statement is used. >> >> Sent from my iPhone >> >>

Re: Phoenix upsert query time

2016-08-02 Thread Anil Gupta
Are you using a prepared statement for upserts? IMO, the query should be compiled only once when a prepared statement is used. Sent from my iPhone > On Aug 2, 2016, at 7:56 AM, Samarth Jain wrote: > > Best bet is to upgrade your Cloudera version to cdh5.7. It supports Phoenix > 4.7. See - > http
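A minimal sketch of the prepared-statement pattern, assuming a hypothetical table with two columns; the UPSERT is compiled once and only the bind variables change per row:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class PreparedUpsert {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO STOCKS (STOCK_ID, STOCK_NAME) VALUES (?, ?)")) {
                for (int i = 0; i < 10_000; i++) {
                    ps.setInt(1, i);              // only the bind values change per row
                    ps.setString(2, "stock-" + i);
                    ps.executeUpdate();
                    if (i % 1000 == 0) {
                        conn.commit();            // flush mutations in batches
                    }
                }
                conn.commit();
            }
        }
    }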

Re: Table replication

2016-06-09 Thread anil gupta
Phoenix system HBase tables, the global/local secondary index tables, and then the primary Phoenix table. I haven't done it yet, but the above is the way I would approach it. Thanks, Anil Gupta. On Thu, Jun 9, 2016 at 6:49 AM, Jean-Marc Spaggiari wrote: > Hi, > > When Phoenix is used, what is t

Re: MapReduce code runs only in local. How to execute my code in distributed mode?

2016-05-26 Thread anil gupta
cluster? Thanks, Anil Gupta On Thu, May 26, 2016 at 7:19 AM, Lucie Michaud < lucie.mich...@businessdecision.com> wrote: > Hello everybody, > > > > For a few days I developed a MapReduce code to insert values in HBase with > Phoenix. But the code runs only in local an

Re: Extending bulk csv loader

2016-05-19 Thread Anil Gupta
You can simply write a MapReduce job to accomplish your business logic. The output format of the job will be PhoenixOutputFormat. Have a look at PhoenixOutputFormat for more details. Sent from my iPhone > On May 18, 2016, at 10:53 PM, anupama agarwal wrote: > > Hi All, > > I have a use case in which
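A hedged sketch of a driver that wires PhoenixOutputFormat into a MapReduce job (table, columns, and the DBWritable value class are placeholders; PhoenixMapReduceUtil in phoenix-core has the full API):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
    import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

    public class PhoenixMrDriver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "upsert-into-phoenix");
            job.setJarByClass(PhoenixMrDriver.class);
            // Direct the job's output to the Phoenix table MY_TABLE, columns ID and VAL.
            PhoenixMapReduceUtil.setOutput(job, "MY_TABLE", "ID,VAL");
            job.setOutputFormatClass(PhoenixOutputFormat.class);
            job.setOutputKeyClass(NullWritable.class);
            // job.setOutputValueClass(MyTableWritable.class); // a DBWritable for ID and VAL
            // job.setMapperClass(...); job.setReducerClass(...); configure the input side too.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }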

Re: org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 (XCL08): Cache of region boundaries are out of date.

2016-05-09 Thread anil gupta
at > org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(PhoenixRecordReader.java:111) > ... 8 more > > > -- > Thanks, > Sanooj Padmakumar > -- Thanks & Regards, Anil Gupta

Re: csvbulkload CDH-5.7.0

2016-05-02 Thread anil gupta
either recompile (or modify the code of) Phoenix to work with CDH 5.7.0 or ask CDH to support Phoenix. > at > org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1156) > ... 10 more > > Any input on this will be extremely helpful. > > -- > Thanks, > Sanooj Padmakumar > -- Thanks & Regards, Anil Gupta

Re: ORM for Apache Phoenix

2016-04-29 Thread anil gupta
DataNucleus but the idea was to use the power of > Phoenix. > > What is your opinion? Has anyone ever tested, use or design phase? > Thank you for your help. > -- Thanks & Regards, Anil Gupta

Re: Map the hbase column qualifier which is in byte type to phoenix table view

2016-04-15 Thread anil gupta
Hi Vishwanathan, Column qualifiers (cq) in HBase are supposed to be UTF-8 encoded strings. Can you use a UTF-8 string as the cq? Is there any particular reason to use a non-string cq? I am not sure that non-string cqs are supported by Phoenix. HTH, Anil Gupta On Thu, Apr 14, 2016 at 11:

Re: Secondary index memory footprint

2016-03-22 Thread Anil Gupta
for such a prompt reply. > Sumit > > > > From: anil gupta > To: "user@phoenix.apache.org" ; Sumit Nigam > > Sent: Tuesday, March 22, 2016 9:11 PM > Subject: Re: Secondary index memory footprint > > Global indexes are stored in a separate hbase ta

Re: Secondary index memory footprint

2016-03-22 Thread anil gupta
Global indexes are stored in a separate hbase table. So, you can estimate memory footprint by looking at the data size of that index currently. HTH, Anil Gupta On Tue, Mar 22, 2016 at 7:19 AM, Sumit Nigam wrote: > Hi, > > I am trying to estimate what (if any) are the implic

Re: how to decode phoenix data under hbase

2016-03-19 Thread anil gupta
3 PM, kevin wrote: > >> HI,all >> I create a table under phoenix and upsert somedata. I turn to hbase >> client and scan the new table. >> I got data like : >> column=0:NAME, timestamp=1458028540810, value=\xE5\xB0\x8F\xE6\x98\x8E >> >> I don't know how to decode the value to normal string.what's the >> codeset? >> > > > > -- > Thanks, > Sanooj Padmakumar > -- Thanks & Regards, Anil Gupta

Re: Why Phoenix creates an extra CF named as "0"

2016-03-15 Thread anil gupta
Hi James, Due to a typo we forgot to put the CF name as a prefix to the CQ name in 1 of the 1100 columns of that table. That led to the creation of a CF named "0". After fixing the typo, we only have 2 CFs. Thanks, Anil Gupta On Thu, Feb 18, 2016 at 11:20 AM, James Taylor wrote: > Hi Anil, >
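As background for the "0" family: columns declared without a family prefix go into Phoenix's default column family, which is named "0". A small hedged illustration with made-up names:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DefaultFamilyDemo {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                // CF1.COL_A is stored in column family CF1; COL_B has no prefix,
                // so Phoenix places it in the default column family "0".
                stmt.execute("CREATE TABLE IF NOT EXISTS DEMO ("
                    + " ID BIGINT NOT NULL PRIMARY KEY,"
                    + " CF1.COL_A VARCHAR,"
                    + " COL_B VARCHAR)");
            }
        }
    }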

Re: Kerberos ticket renewal

2016-03-14 Thread anil gupta
settings. How are you authenticating with Phoenix/HBase? Sorry, I don't remember the exact Kerberos setting that we had. HTH, Anil Gupta On Mon, Mar 14, 2016 at 11:00 AM, Sanooj Padmakumar wrote: > Hi > > We have a rest style micro service application fetching data from hbase > using

Re: Excessive region splitting of Global Index table(8 Megabyte regions)

2016-03-12 Thread anil gupta
Taylor wrote: > Yes, good idea. Please file a JIRA. > > On Sat, Mar 12, 2016 at 1:07 PM, anil gupta wrote: > >> To provide more insight, This table has around 1100 columns. I create >> this index on one column. (1/1100) * 8GB comes around 8MB. So, i think, we >> need

Re: Excessive region splitting of Global Index table(8 Megabyte regions)

2016-03-12 Thread anil gupta
12:45 PM, anil gupta wrote: > Ok, oversight on my side. MAX_FILESIZE => '11994435' for the secondary > index table. > The main table still doesn't show the MAX_FILESIZE attribute. > > On Sat, Mar 12, 2016 at 12:41 PM, James Taylor > wrote: > >> It should show up

Re: Excessive region splitting of Global Index table(8 Megabyte regions)

2016-03-12 Thread anil gupta
this is what I see: > > hbase(main):004:0> describe 'FOO_IDX' > Table FOO_IDX is ENABLED > > FOO_IDX, {TABLE_ATTRIBUTES => {MAX_FILESIZE => '6710886400', ... > > On Sat, Mar 12, 2016 at 12:36 PM, anil gupta > wrote: > >> 8GB setting of r

Re: Excessive region splitting of Global Index table(8 Megabyte regions)

2016-03-12 Thread anil gupta
f your data table is 8GB, I'd guess your index should be 4GB at the > smallest. I think 1GB would be too low. > > Thanks, > James > > On Sat, Mar 12, 2016 at 12:23 PM, anil gupta > wrote: > >> Thanks for the reply, James. We have 2 global secondary index in this >

Re: Excessive region splitting of Global Index table(8 Megabyte regions)

2016-03-12 Thread anil gupta
u can verify the > setting took effect through the HBase shell by running the following > command: > > describe 'MY_TABLE_SCHEMA.MY_INDEX_NAME' > > HTH, > > James > > > On Sat, Mar 12, 2016 at 10:18 AM, anil gupta > wrote: > >> Hi, >>

Excessive region splitting of Global Index table(8 Megabyte regions)

2016-03-12 Thread anil gupta
-- Thanks & Regards, Anil Gupta

Re: Phoenix table is inaccessible...

2016-03-12 Thread Anil Gupta
Yes, global indexes are stored in a separate HBase table and their region locations are not related to the main table's regions. Sent from my iPhone > On Mar 12, 2016, at 4:34 AM, Saurabh Agarwal (BLOOMBERG/ 731 LEX) > wrote: > > Thanks. I will try that. > > Having regions of the secondary indexes o

Feedback on using Phoenix QueryServer in production

2016-03-08 Thread anil gupta
ssentially, doing some aggregate queries. -- Thanks & Regards, Anil Gupta

Fwd: Ruby gem of Apache Phoenix: https://rubygems.org/gems/ruby-phoenix/versions/0.0.8

2016-03-08 Thread anil gupta
Phoenix4.4 with a Ruby gem of Phoenix4.2? If not, then what would we need to do? (Upgrade the Ruby gem to Phoenix4.4?) Here is the git: https://github.com/wxianfeng/ruby-phoenix -- Thanks & Regards, Anil Gupta

Re: Dynamic Fields And Views

2016-02-25 Thread anil gupta
d syntax; pseudo code of what I wish I could do. > > select "dynamic_field" from MY_VIEW > > Should I create a JIRA for a new feature? Or is this fundamentally not > possible? > > Thanks, > Steve > -- Thanks & Regards, Anil Gupta

Why Phoenix creates an extra CF named as "0"

2016-02-18 Thread anil gupta
-- Thanks & Regards, Anil Gupta

Re: ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread anil gupta
phoenix.upsert.batch.size. Its at the >> default 1000. >> >> On Wed, Feb 17, 2016 at 12:48 PM, anil gupta >> wrote: >> >>> I think, this has been answered before: >>> http://search-hadoop.com/m/9UY0h2FKuo8RfAPN >>> >>> Ple

Re: ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread anil gupta
java:456) > ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1] > at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) > ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1] > at > org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:

Re: org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish

2016-02-14 Thread anil gupta
, phoenix.upsert.batch.size is 1000. Hence, the commits were failing with a huge batch size of 1000. Thanks, Anil Gupta On Sun, Feb 14, 2016 at 8:03 PM, Heng Chen wrote: > I am not sure whether "upsert batch size in phoenix" equals HBase Client > batch puts size or not. > > But
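A hedged sketch of overriding the upsert batch size on the client, using the property name mentioned in this thread (the exact key can differ between Phoenix versions, so check the tuning guide for yours):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class SmallBatchConnection {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Property name as referenced in the thread; 50 is a hypothetical value.
            props.setProperty("phoenix.upsert.batch.size", "50");
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181", props)) {
                // ... run upserts; commits now flush smaller batches of mutations.
            }
        }
    }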

Re: org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish

2016-02-14 Thread anil gupta
My Phoenix upsert batch size is 50. Do you mean to say that 50 is also a lot? However, AsyncProcess is complaining about 2000 actions. I tried with an upsert batch size of 5 also, but it didn't help. On Sun, Feb 14, 2016 at 7:37 PM, anil gupta wrote: > My phoenix upsert batch size is 50. You mean

Re: org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish

2016-02-14 Thread anil gupta
would not timeout in 18ms On Sun, Feb 14, 2016 at 12:44 PM, anil gupta wrote: > Hi, > > We are using phoenix4.4, hbase 1.1(hdp2.3.4). > I have a MR job that is using PhoenixOutputFormat. My job keeps on failing > due to following error: > > 2016-02-14 12

org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish

2016-02-14 Thread anil gupta
2000 actions to finish. I have never seen anything like this. Can anyone give me pointers about this problem? -- Thanks & Regards, Anil Gupta

Re: TinyInt, SmallInt not supported in Pig Phoenix loader

2016-02-13 Thread anil gupta
pache/phoenix/blob/master/phoenix-pig/src/main/java/org/apache/phoenix/pig/util/TypeUtil.java#L94 >> . Can you please share the error you are seeing. >> >> HTH >> >> Ravi. >> >> On Sat, Feb 13, 2016 at 3:16 AM, anil gupta >> wrote: >> >>>

TinyInt, SmallInt not supported in Pig Phoenix loader

2016-02-13 Thread anil gupta
we can still use TinyInt and SmallInt in a Phoenix table while using the Pig-Phoenix loader? If there is currently no way to do it, can we enhance the Pig-Phoenix loader to support TinyInt and SmallInt? -- Thanks & Regards, Anil Gupta

Re: Global Secondary Index: ERROR 2008 (INT10): Unable to find cached index metadata. (PHOENIX-1718)

2016-01-15 Thread anil gupta
s in the region > server hbase-site.xml. See our Tuning page[1] for more info. > > FWIW, 500K rows would be much faster to insert via our standard UPSERT > statement. > > Thanks, > James > [1] https://phoenix.apache.org/tuning.html > > On Sun, Jan 10, 2016 at 10:18

Re: Global Secondary Index: ERROR 2008 (INT10): Unable to find cached index metadata. (PHOENIX-1718)

2016-01-10 Thread Anil Gupta
> On Jan 6, 2016, at 11:14 PM, anil gupta wrote: > > Hi All, > > I am using Phoenix4.4, i have created a global secondary in one table. I am > running MapReduce job with 20 reducers to load data into this table(maybe i m > doing 50 writes/second/reducer). Dataset is arou

Re: Phoenix MR integration api only accepts Index of column for setting column value

2016-01-06 Thread anil gupta
Hi James, Maybe I am missing your point. I don't see the following method in the PreparedStatement interface: pstmt.setString("STOCK_NAME", stockName); Do I need to use something other than the Phoenix MR integration to get that method? Thanks, Anil Gupta On Tue, Jan 5, 2016 at 8:48 PM, Ja

Getting null pointer exception while invoking sqlline

2016-01-06 Thread anil gupta
) sqlline version 1.1.8 Any clue why this would happen? We are also having lots of problems while dropping tables. Wondering if this is related? -- Thanks & Regards, Anil Gupta

Re: Issue with connecting to Phoenix in kerberised cluster.

2016-01-05 Thread anil gupta
-hadoop/ 4. https://community.cloudera.com/t5/CDH-Manual-Installation/Kerberos-integration-issue-s-with-hadoop-HA/td-p/24794 Thanks, Anil Gupta On Tue, Jan 5, 2016 at 8:18 PM, Ns G wrote: > Hi Team, > > Any idea with this issue? We are struck up with this issue and we need to > provid

Phoenix MR integration api only accepts Index of column for setting column value

2016-01-05 Thread anil gupta
lity it's a key/value pair. :) Is there any way I can achieve the above? Would the community like to have the key/value API? -- Thanks & Regards, Anil Gupta

Re: Unable to Use bulkloader to load Control-A delimited file

2016-01-04 Thread anil gupta
. That's a valid error, but I can't update the server jars just like that because of our contract with HDP. I am trying to take this up with HDP and request that they patch their distro. Happy new year. Thanks for your time and support. ~Anil Gupta On Thu, Dec 31, 2015 at 1:02 AM, Gabriel Reid

Re: Unable to Use bulkloader to load Control-A delimited file

2015-12-30 Thread anil gupta
ans to do a 4.5.3 release. > FYI, Andrew put together a 4.6 version that works with CDH here too: > https://github.com/chiastic-security/phoenix-for-cloudera. We also plan > to do a 4.7 release soon. > > Thanks, > James > > > On Wed, Dec 30, 2015 at 4:30 PM, anil gupta wrote: &g

Re: Unable to Use bulkloader to load Control-A delimited file

2015-12-30 Thread anil gupta
ava:2405) at org.apache.hadoop.conf.Configuration.get(Configuration.java:1232) at org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.initialize(MRWebAppUtil.java:51) at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1444) Is my bulkloader command incorrect?

Re: Unable to Use bulkloader to load Control-A delimited file

2015-12-30 Thread anil gupta
I don't see the 4.5.3 release over here: http://download.nextag.com/apache/phoenix/ Is 4.5.3 not released yet? On Wed, Dec 30, 2015 at 11:14 AM, anil gupta wrote: > Hi Gabriel, > > Thanks for the info. What is the backward compatibility policy of Phoenix > releases? Would 4.5.3 client j

Re: Unable to Use bulkloader to load Control-A delimited file

2015-12-30 Thread anil gupta
Hi Gabriel, Thanks for the info. What is the backward compatibility policy of Phoenix releases? Would a 4.5.3 client jar work with a Phoenix4.4 server jar? Are 4.4 and 4.5 considered two major releases or minor releases? Thanks, Anil Gupta On Tue, Dec 29, 2015 at 11:11 PM, Gabriel Reid wrote: >

Unable to Use bulkloader to load Control-A delimited file

2015-12-29 Thread anil gupta
nf.Configuration.get(Configuration.java:1232) at org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.initialize(MRWebAppUtil.java:51) at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1444) -- Thanks & Regards, Anil Gupta

Re: REG: Accessing Phoenix in a kerberised Cluster

2015-12-22 Thread anil gupta
, Please follow the post and let us know how it goes. It should be pretty easy. Thanks, Anil Gupta On Tue, Dec 22, 2015 at 10:24 PM, James Taylor wrote: > See > https://phoenix.apache.org/faq.html#How_do_I_connect_to_secure_HBase_cluster > > On Tue, Dec 22, 2015 at 8:42 PM, Ns G wr
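For completeness, the FAQ linked above describes a JDBC URL form that embeds the principal and keytab; a hedged example with placeholder values:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class SecureConnect {
        public static void main(String[] args) throws Exception {
            // jdbc:phoenix:<zk quorum>:<port>:<hbase znode>:<principal>:<keytab path>
            // All values below are placeholders for a kerberized cluster.
            String url = "jdbc:phoenix:zk-host:2181:/hbase-secure:"
                + "myuser@EXAMPLE.COM:/etc/security/keytabs/myuser.keytab";
            try (Connection conn = DriverManager.getConnection(url)) {
                System.out.println("Connected: " + !conn.isClosed());
            }
        }
    }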

Re: Phoenix JDBC connection to secure HBase fails

2015-12-09 Thread anil gupta
Hi Akhilesh, You can add the HBase/Hadoop config directories to the application classpath. You don't need to copy the conf files into your app's lib folder. Thanks, Anil Gupta On Wed, Dec 9, 2015 at 2:54 PM, Biju N wrote: > Thanks Akhilesh/Mujtaba for your suggestions. Adding core-site.xml from > the

Re: does phoenix+hbase work for tables larger than a few GB?

2015-09-30 Thread anil gupta
On a side note: did you enable short-circuit reads? Did you try Snappy compression on your tables? (IMO, a 7200 RPM disk is on the slower side, so try compressing data on disk.) There are also some data block encoding schemes in HBase; have a look at those too. On Wed, Sep 30, 2015 at 3:45 PM, anil gupta wrote
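A minimal sketch of enabling Snappy compression and a data block encoding when creating a Phoenix table (names and the choice of FAST_DIFF are illustrative; these options are passed through to the underlying HBase column family):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CompressedTable {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                // COMPRESSION and DATA_BLOCK_ENCODING are HBase properties that
                // Phoenix forwards to the table's column families.
                stmt.execute("CREATE TABLE IF NOT EXISTS METRICS ("
                    + " ID BIGINT NOT NULL PRIMARY KEY,"
                    + " VAL VARCHAR)"
                    + " COMPRESSION='SNAPPY', DATA_BLOCK_ENCODING='FAST_DIFF'");
            }
        }
    }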

Re: does phoenix+hbase work for tables larger than a few GB?

2015-09-30 Thread anil gupta
em to have some memory pressure and I see some slow GC's in the logs. > Anil: 16GB ram should help in some cases. Try to disable blockcache for full table scans. > > Cheers > > > > > > On 30/09/15 21:18, anil gupta wrote: > > Hi Konstantinos, > Please find

Re: unexpected throwable? probably due to query

2015-09-30 Thread anil gupta
hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2395) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) > at java.lang.Thread.run(Thread.java:745) > > -- Thanks & Regards, Anil Gupta

Re: does phoenix+hbase work for tables larger than a few GB?

2015-09-30 Thread anil gupta
uggest you to evaluate data model of HBase tables and try to convert queries to small range scan or lookups. > > I believe it is a great project and the functionality is really useful. > What's lacking is 3 sample configs for 3 different strength clusters. > Anil: I agree that guidance on configuration of HBase and Phoenix can be improved so that people can get going quickly. > > Thanks > -- Thanks & Regards, Anil Gupta

Re: Problems getting started with Apache Phoenix

2015-09-20 Thread anil gupta
SL (unknown error) >>>> 15/09/17 19:37:18 INFO zookeeper.ClientCnxn: Socket connection >>>> established to localhost/127.0.0.1:2181, initiating session >>>> 15/09/17 19:37:18 INFO zookeeper.ClientCnxn: Session establishment >>>> complete on server localhost

Re: Problems getting started with Apache Phoenix

2015-09-19 Thread anil gupta
when I am writing into Phoenix tables using Java application it is > reflecting in the corresponding Hbase table also. So Phoenix and Hbase > tables are one and the same, right > On Sep 19, 2015 11:35 AM, "anil gupta" wrote: > >> Phoenix api has to be used to read/wri

Re: Problems getting started with Apache Phoenix

2015-09-19 Thread anil gupta
at >> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1924) >> at >> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1896) >> at >> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77) >> at >> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1896) >> at >> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180) >> at >> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132) >> at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151) >> at java.sql.DriverManager.getConnection(DriverManager.java:664) >> at java.sql.DriverManager.getConnection(DriverManager.java:270) >> at phoenixTest.main(phoenixTest.java:16) >> >> >> Seems more like a JAR file version mismatch issue. >> Here are the JAR files that I am using: >> Please refer to the screen shot >> >> I have followed all the guidelines for setting up Phoenix at: >> https://phoenix.apache.org/installation.html >> >> My connection from Squirrel is working fine...but from Java Program >> getting the errors. >> -- >> With best Regards: >> Ashutosh Sharma >> > > > > -- > With best Regards: > Ashutosh Sharma > -- Thanks & Regards, Anil Gupta

Re: Phoenix Client create too many threads

2015-09-19 Thread anil gupta
t.close(); > con.close(); > } > } > > > Working fine. Only Phoenix client JAR is needed...nothing more than that. > Few questions, I can see that table that i created using Phoenix is also > created into HBase. But how they are working int

Re: Phoenix Client create too many threads

2015-09-19 Thread anil gupta
t here: > https://phoenix.apache.org/secondary_indexingha.html > > I set the parameter in `hbase-site.xml` and restart the hbase. I also use > the `hbase-site.xml` in client side, but the threads number in my client do > not reduce. > > How can I control the threads in client? > > Thanks ! > -- Thanks & Regards, Anil Gupta

Re: Column comments

2015-09-19 Thread anil gupta
> like: > > CREATE TABLE IF NOT EXISTS us_population ( > state CHAR(2) NOT NULL , > city VARCHAR NOT NULL , > population BIGINT ’ > CONSTRAINT my_pk PRIMARY KEY (state, city)); > > > If this is possible will I be able to read comments via ResultSetMetaData? > > Regards, > Matjaž > > > -- Thanks & Regards, Anil Gupta
