sc.phoenixTableAsRDD number of initial partitions

2016-10-13 Thread Antonio Murgia
Hello everyone, I'm trying to read data from a Phoenix table using Apache Spark. I use the suggested method, sc.phoenixTableAsRDD, without issuing any query (i.e. reading the whole table), and I noticed that the number of partitions that Spark creates is equal to the number of
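A minimal sketch of the call being described, assuming the phoenix-spark integration shipped with Phoenix 4.x; the table name, projected columns, and ZooKeeper quorum below are placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.phoenix.spark._ // adds phoenixTableAsRDD to SparkContext

object PhoenixReadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("phoenix-read"))
    // Table name, columns, and ZooKeeper quorum are placeholders.
    val rdd = sc.phoenixTableAsRDD(
      "MY_TABLE",
      Seq("ID", "COL1"),            // columns to project
      zkUrl = Some("zk-host:2181")) // no predicate => full-table read
    // With no predicate, the partition count mirrors the table's splits.
    println(s"partitions = ${rdd.partitions.length}")
    sc.stop()
  }
}
```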

PrepareAndExecute statement returns only 100 rows

2016-10-13 Thread Puneeth Prasad
Hi, the PrepareAndExecute statement has a default limit of returning 100 rows. To avoid that, we use maxRowCount = -1, but it still gives only 100 rows. I've copied the PHP code below; the highlighted part is the necessary change to fetch all the rows possible. Can you please suggest where

Re: PrepareAndExecute statement returns only 100 rows

2016-10-13 Thread Josh Elser
Hi Puneeth, What version of Phoenix are you using? Indeed per [1], maxRowCount should control the number of rows returned in the ExecuteResponse. However, given that you see 100 rows (which is the default), it sounds like the value is not being respected. The most recent docs may not align
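For reference, a sketch of what the raw Avatica request to the Phoenix Query Server looks like; this is not the poster's PHP code (which was elided), and the endpoint, connection id, and SQL below are hypothetical. Field names follow the Avatica JSON protocol that [1] presumably documents:

```scala
import java.net.{HttpURLConnection, URL}

object PrepareAndExecuteSketch {
  def main(args: Array[String]): Unit = {
    val body =
      """{"request": "prepareAndExecute",
        |  "connectionId": "conn-1",
        |  "statementId": 1,
        |  "sql": "SELECT * FROM MY_TABLE",
        |  "maxRowCount": -1}""".stripMargin
    val conn = new URL("http://pqs-host:8765/").openConnection()
      .asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setDoOutput(true)
    conn.setRequestProperty("Content-Type", "application/json")
    conn.getOutputStream.write(body.getBytes("UTF-8"))
    // Note: even with maxRowCount = -1, the first frame of the response may
    // hold only a slice of the result; Avatica clients are expected to keep
    // issuing "fetch" requests until the frame's "done" flag is true.
    println(scala.io.Source.fromInputStream(conn.getInputStream).mkString)
  }
}
```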

Re: Region start row and end row

2016-10-13 Thread Cheyenne Forbes
May I ask which in-memory DB you are using?

Re: Region start row and end row

2016-10-13 Thread Cheyenne Forbes
Check out this post on loading data from MySQL into Ignite: https://dzone.com/articles/apache-ignite-how-to-read-data-from-persistent-sto and this one (recommended) on how to UPSERT to Phoenix on Ignite put, delete, etc.: https://apacheignite.readme.io/docs/persistent-store#cachestore-example
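The second link describes Ignite's CacheStore write-through hook. A minimal sketch of the Phoenix side of such a store, assuming a hypothetical PERSONS(ID BIGINT PRIMARY KEY, NAME VARCHAR) table and a placeholder JDBC URL; Phoenix uses UPSERT for both insert and update, and buffers mutations until commit:

```scala
import java.sql.DriverManager
import javax.cache.Cache
import org.apache.ignite.cache.store.CacheStoreAdapter

class PhoenixPersonStore extends CacheStoreAdapter[java.lang.Long, String] {
  // Placeholder ZooKeeper quorum.
  private def connect() =
    DriverManager.getConnection("jdbc:phoenix:zk-host:2181")

  override def load(key: java.lang.Long): String = {
    val conn = connect()
    try {
      val ps = conn.prepareStatement("SELECT NAME FROM PERSONS WHERE ID = ?")
      ps.setLong(1, key)
      val rs = ps.executeQuery()
      if (rs.next()) rs.getString(1) else null
    } finally conn.close()
  }

  // Phoenix has no INSERT/UPDATE; UPSERT covers both, as the thread notes.
  override def write(e: Cache.Entry[_ <: java.lang.Long, _ <: String]): Unit = {
    val conn = connect()
    try {
      val ps = conn.prepareStatement("UPSERT INTO PERSONS (ID, NAME) VALUES (?, ?)")
      ps.setLong(1, e.getKey)
      ps.setString(2, e.getValue)
      ps.executeUpdate()
      conn.commit() // Phoenix buffers mutations until commit
    } finally conn.close()
  }

  override def delete(key: Any): Unit = {
    val conn = connect()
    try {
      val ps = conn.prepareStatement("DELETE FROM PERSONS WHERE ID = ?")
      ps.setLong(1, key.asInstanceOf[java.lang.Long])
      ps.executeUpdate()
      conn.commit()
    } finally conn.close()
  }
}
```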

Re: Region start row and end row

2016-10-13 Thread Anil
Apache Ignite.

Can Phoenix support HBase's TimeStamp?

2016-10-13 Thread Yang Zhang
Hello everyone, I saw that we can create a Phoenix table from an existing HBase table. My question is whether Phoenix can support the historical versions of my rows. I am trying to use Phoenix to store
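One relevant mechanism is Phoenix's CurrentSCN connection property, which pins reads on that connection to a given HBase timestamp, giving point-in-time queries over older cell versions (the underlying column families must keep VERSIONS > 1 to retain history). A minimal sketch, with a placeholder JDBC URL and table name:

```scala
import java.sql.DriverManager
import java.util.Properties

object TimeTravelSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Read the table as of one hour ago.
    val oneHourAgo = System.currentTimeMillis() - 3600 * 1000L
    props.setProperty("CurrentSCN", oneHourAgo.toString)
    val conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181", props)
    try {
      val rs = conn.createStatement()
        .executeQuery("SELECT * FROM MY_TABLE LIMIT 10")
      while (rs.next()) println(rs.getString(1))
    } finally conn.close()
  }
}
```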

HBase throttling and Phoenix issue

2016-10-13 Thread Sumit Nigam
Hi, I am trying to use the HBase throttling feature with Phoenix. HBase is 1.1.2 and Phoenix 4.6. When I specify a big number of SALT_BUCKETS, HBase throws a ThrottlingException even when quotas are high. Please note that this error occurs only when we scan from the Phoenix shell. From the HBase shell,
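A plausible explanation for the asymmetry: Phoenix executes a scan over a salted table as one scan per salt bucket in parallel, so a single query can issue many concurrent requests against the same quota, while an HBase shell scan proceeds sequentially. A sketch of the kind of table involved, with a placeholder JDBC URL, table name, and bucket count:

```scala
import java.sql.DriverManager

object SaltedScanSketch {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")
    try {
      val st = conn.createStatement()
      // 64 salt buckets means Phoenix runs a full scan as ~64 parallel
      // per-bucket scans -- far more requests/sec against a per-user quota
      // than a single sequential scan from the HBase shell.
      st.execute(
        """CREATE TABLE IF NOT EXISTS T_SALTED (
          |  ID BIGINT PRIMARY KEY,
          |  V VARCHAR
          |) SALT_BUCKETS = 64""".stripMargin)
      val rs = st.executeQuery("SELECT COUNT(*) FROM T_SALTED")
      rs.next()
      println(rs.getLong(1))
    } finally conn.close()
  }
}
```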

How and where can I get help to set up my "phoenix cluster" for production?

2016-10-13 Thread Cheyenne Forbes
Are there people who do this for free?

Re: How and where can I get help to set up my "phoenix cluster" for production?

2016-10-13 Thread Ted Yu
If there are people who do this for free, would you trust them?

Re: How and where can I get help to set up my "phoenix cluster" for production?

2016-10-13 Thread Cheyenne Forbes
That's the question I should've asked myself, no? How can I get it done paid?

Re: Region start row and end row

2016-10-13 Thread Anil
Hi Cheyenne, Thank you very much. The load cannot be done in parallel with one JDBC connection. To make it parallel, each node must read a set of records. Following is my approach (step 2 is sketched below). 1. Create a cluster-wide singleton distributed custom service. 2. Get all region(s) information (for each records has
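A sketch of step 2 of the approach above, assuming the HBase 1.1 client API; "MY_TABLE" is a placeholder. Each (start, end) key pair can then be assigned to one node of the cluster-wide service, which loads only that slice of the table:

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory
import org.apache.hadoop.hbase.util.Bytes

object RegionRangesSketch {
  def main(args: Array[String]): Unit = {
    val conn = ConnectionFactory.createConnection(HBaseConfiguration.create())
    try {
      val locator = conn.getRegionLocator(TableName.valueOf("MY_TABLE"))
      // One (startKey, endKey) pair per region of the table.
      val keys = locator.getStartEndKeys
      keys.getFirst.zip(keys.getSecond).foreach { case (start, end) =>
        println(Bytes.toStringBinary(start) + " .. " + Bytes.toStringBinary(end))
      }
    } finally conn.close()
  }
}
```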

Re: How and where can I get help to set up my "phoenix cluster" for production?

2016-10-13 Thread Ted Yu
Hortonworks does offer support.

Re: How and where can I get help to set up my "phoenix cluster" for production?

2016-10-13 Thread Antonio Murgia
As far as I know, Cloudera lets you install Phoenix through a Parcel, for free. But they do not offer support for Phoenix.

Re: Views and alter table

2016-10-13 Thread Sanooj Padmakumar
Hi James, I managed to reproduce this.
create table test1 (key1 varchar not null, key2 varchar not null, "p".val1 varchar, CONSTRAINT pk PRIMARY KEY (key1, key2)) Compression = 'SNAPPY'
ALTER TABLE test1 ADD "p".val2 varchar, "p".val3 varchar, "p".val4 varchar
create view test_view1(key2 varchar, val1

Re: Bulk dataload and dynamic columns

2016-10-13 Thread Sanooj Padmakumar
Thanks for the confirmation, Anil. On Fri, Oct 7, 2016 at 11:22 PM, anil gupta wrote: > I don't think that feature is supported yet in the bulk load tool.

Re: Region start row and end row

2016-10-13 Thread Cheyenne Forbes
Hi Anil, Basically what you want to do is copy all the data you have inserted with Phoenix to your in-memory DB?

Re: Region start row and end row

2016-10-13 Thread Anil
You are correct. Not all the data, but with a specific start row and end row. Thanks.