Hi Matthew,
I was able to run the app successfully on the cluster and get data, thanks.
I am able to telnet from my local machine to one of the zookeeper nodes on port
2181 successfully also.
I think the code might just be timing out on my machine.
Is there a way to increase the timeout time?
Hi John,
Glad that you are able to get data when running on the cluster; it means
that at least your Phoenix server is working. Can you send the entire log
output for your app, up to and including the stacktrace? I will compare it
with the output from one of my apps that works.
To set the
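(The reply above is cut off; as a hedged sketch of how client-side timeouts are usually raised for Phoenix, the properties below go in an hbase-site.xml on the client's classpath. phoenix.query.timeoutMs and hbase.rpc.timeout are standard Phoenix/HBase settings, but verify the names and defaults against your version; the 600000 ms value is only illustrative.)

```xml
<!-- hbase-site.xml on the Phoenix client's classpath; values are in milliseconds.
     Illustrative values only - tune for your environment. -->
<configuration>
  <property>
    <!-- How long a Phoenix query may run before the client times out -->
    <name>phoenix.query.timeoutMs</name>
    <value>600000</value>
  </property>
  <property>
    <!-- HBase RPC timeout; should generally be at least as large -->
    <name>hbase.rpc.timeout</name>
    <value>600000</value>
  </property>
</configuration>
```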
Hi John,
Yup, that is exactly the error I was having before I put in the aliases. It
does mean that your app should work fine if you deploy and run it from
inside Amazon, but is a bit of a pain when you are developing / running /
testing anything locally. If you figure out a cleverer way to fix
Thanks Pariksheet.
On Wed, Feb 25, 2015 at 1:35 PM, Pariksheet Barapatre pbarapa...@gmail.com
wrote:
Hi Dhaval,
You can use the below code:

$SQLLINE $ZKEEPER <<END
!outputformat csv
!record $TMP_FILE
select * from system.catalog limit 10;
!record
!quit
END
## $SQLLINE is the sqlline.py path
The Apache Phoenix team is pleased to announce the immediate
availability of the 4.3 release. Highlights include:
- functional indexes [1]
- map-reduce over Phoenix tables [2]
- cross join support [3]
- query hint to force index usage [4]
- set HBase properties through ALTER TABLE
- ISO-8601 date
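(One of the 4.3 highlights above, setting HBase properties through ALTER TABLE, can be sketched as follows. The table name is made up; TTL is a standard HBase column-family property, and the seconds value is only illustrative.)

```sql
-- Hedged sketch: set an HBase property on an existing Phoenix table
ALTER TABLE my_table SET TTL = 86400;
```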
The Apache Phoenix team is pleased to announce the immediate
availability of the 3.3 release. Highlights include:
- map-reduce over Phoenix tables [1]
- cross join support [2]
- query hinting to force index usage [3]
- csv date/time/timestamp loading improvements
- over 50 bug fixes
The release
Sounds like a bug. I'll try to repro on my end. Thanks for the details, Gary.
James
On Tue, Feb 24, 2015 at 1:49 PM, Gary Schulte
gschu...@marinsoftware.com wrote:
On Tue, Feb 24, 2015 at 12:29 AM, James Taylor jamestay...@apache.org
wrote:
Based on your query plan, the skip scan is
Hi Sun,
Take a look at the files tab of the Meetup page and you'll find our
slides posted: http://www.meetup.com/San-Francisco-Apache-Phoenix-Meetup/files/
We'll post the recording of the live stream we had soon too. Would
love to hear some feedback from you on what's important for you in
For big tables, keep the salt bucket count high, generally around 60-90.
Smaller tables or join tables should have a minimal number of salt buckets, maybe 1-4.
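(As a hedged illustration of the advice above: SALT_BUCKETS is the actual Phoenix table option, but the table and column names, and the specific bucket counts, are made up for the example.)

```sql
-- Hypothetical large table: a higher salt bucket count spreads writes
-- across region servers and avoids hotspotting on a monotonic key.
CREATE TABLE metrics_big (
    host VARCHAR NOT NULL,
    ts   DATE    NOT NULL,
    val  DOUBLE
    CONSTRAINT pk PRIMARY KEY (host, ts)
) SALT_BUCKETS = 64;

-- Hypothetical small lookup/join table: keep salt buckets minimal.
CREATE TABLE host_info (
    host VARCHAR PRIMARY KEY,
    dc   VARCHAR
) SALT_BUCKETS = 2;
```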
Thanks
-- Original message--
From: Krishna
Date: Thu, Feb 26, 2015 07:42
To: d...@phoenix.apache.org;user@phoenix.apache.org;
Subject:Salt
Hi Sun, there are region-splitting-related issues in HBase 0.98.6-0.98.8,
which can cause inconsistencies in the cluster if any splits fail.
It would be better to upgrade HBase to the latest version.
Thanks,
Rajeshbabu.
On Thu, Feb 26, 2015 at 8:28 AM, su...@certusnet.com.cn
su...@certusnet.com.cn
Many thanks, James. Will scan that ASAP.
Regards,
Sun.
CertusNet
From: James Taylor
Date: 2015-02-26 09:42
To: su...@certusnet.com.cn; user
Subject: Re: Re: [ANNOUNCE] Apache Phoenix meetup in SF on Tue, Feb 24th
Hi Sun,
Take a look at the files tab of the Meetup page and you'll find our
Are there any recommendations for estimating and optimizing salt buckets
at table creation time? What, if any, are the cons of having a high
number (200+) of salt buckets? Is it possible to update salt buckets after
a table is created?
Thanks
Anybody? If the feature is not supported yet, I would like to contribute,
given some pointers to start with.
On Mon, Feb 16, 2015 at 8:40 PM, Vikas Agarwal vi...@infoobjects.com
wrote:
Hi,
I was looking for query and Java code for the data model represented here