hi, ALL:
we are using phoenix-4.7 with hbase-1.1.3, running a 17-node cluster with
200+ tables and 3500 regions.
Recently some weird phenomena have happened, and I'm certain we have not done
anything to the cluster:
First:
the RS that holds the SYSTEM.CATALOG region comes up with lots of zk
that are bigger than 1KB.
I've tried to put a LIMIT in the Pig query, but it's not possible; I'm trying to do it
with an MR job.
jinzhuan, what is your framework for doing the dump?
The problem doesn't exist when dumping through Flink.
Regards,
2016-11-24 14:19 GMT+01:00 金砖 jinzh...@wacai.com:
thanks Ankit.
PHOENIX-3112 is not my case.
Each row is less than 1KB.
金砖
Wacai Network Technology Co., Ltd.
Address: 12F, Zhejiang Science & Technology Industry Building, 80 Gucui Road, Xihu District, Hangzhou
Mobile: 15558015995
Original Message
From: Ankit singhal <ankitsingha...@gmail.com>
To: user <u...@phoenix.apache.org>
Sent: 2016-11-24 (Thu) 20
hi, all:
I’m using phoenix-4.8.0-hbase-1.1 with hbase 1.1.3.
When querying a lot of rows (e.g. 100,000), some fields of the rows do not exist in
the result set.
steps
1. I created a table test(pk varchar primary key, id bigint, name varchar, age
bigint).
2. then populated with 10 rows, with key
these client side properties to deal
with your workload: phoenix.query.threadPoolSize and
phoenix.query.queueSize.
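Both are client-side settings, typically placed in an hbase-site.xml on the client's classpath. A sketch with illustrative values (not recommendations; tune for your own workload):

```xml
<!-- client-side hbase-site.xml; example values only -->
<property>
  <name>phoenix.query.threadPoolSize</name>
  <value>256</value>
</property>
<property>
  <name>phoenix.query.queueSize</name>
  <value>10000</value>
</property>
```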
On Wed, Jun 22, 2016 at 9:34 AM, 金砖 <jinzh...@wacai.com
<mailto:jinzh...@wacai.com>> wrote:
16 regionservers, 1500+ regions.
each regionserver holds 100+ regions
in your cluster? How many regions in each server?
This is a client side configuration.
Alicia
On 6/22/16, 12:07 AM, "金砖" <jinzh...@wacai.com> wrote:
Thanks Alicia Shu.
Is this a client-side error or a server-side error?
I did not see anything suspicious in the server logs.
PS: there a
or 5000.
Alicia
On 6/21/16, 3:26 AM, "金砖" <jinzh...@wacai.com> wrote:
While selecting from the Phoenix shell,
I receive an error like this:
Error: Task
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@3af37506
rejected from org.apache.phoenix.job.JobManager$1@91c4a3f[Running,
Thanks for your explanation, William, now I understand.
Really appreciated!
On 2016-05-31 09:41, William wrote:
Hi, 金砖
For signed data types, phoenix will flip the first bit to guarantee
the expected sort order for binary comparison. For example:
100, its binary: 0x00 00 00 64
-100, its binary: 0xFF FF FF 9C
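William's point can be checked with a few lines of Python (used here only to illustrate the byte math; a sketch of the encoding idea, not Phoenix's actual code path):

```python
import struct

def phoenix_encode_int(v):
    """Sketch of Phoenix PInteger encoding: big-endian two's
    complement with the first (sign) bit flipped."""
    b = bytearray(struct.pack('>i', v))  # 4-byte big-endian two's complement
    b[0] ^= 0x80                         # flip the sign bit
    return bytes(b)

# Plain two's complement sorts -100 AFTER 100 in unsigned byte order:
assert struct.pack('>i', -100) > struct.pack('>i', 100)
# With the sign bit flipped, byte order matches numeric order:
assert phoenix_encode_int(-100) < phoenix_encode_int(100)
print(phoenix_encode_int(100).hex())   # 80000064
print(phoenix_encode_int(-100).hex())  # 7fffff9c
```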
hi, ALL:
Recently I've been importing an existing hbase table into phoenix, and I found
that phoenix's toBytes
method (org.apache.phoenix.schema.types.PInteger.toBytes) is different from
hbase's (org.apache.hadoop.hbase.util.Bytes.toBytes).
And the documentation says only positive hbase values are supported.
Why not just
http://phoenix.apache.org/secondary_indexing.html#Index_Usage
On Mon, May 9, 2016 at 11:36 PM, 金砖 <jinzh...@wacai.com
<mailto:jinzh...@wacai.com>> wrote:
hi, I'm using phoenix-4.7, and I found phoenix did not use index
efficiently enough.
Can phoenix optimize more for this situation?
EX:
table:
create table t (pk varchar primary key, name varchar, age
unsigned_int);
and created 2 indexes on it:
create index name on t (name);
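A common reason Phoenix skips a secondary index is that the query selects columns the index does not cover. A hedged sketch against the table above (index name and usage assumed, not taken from the thread):

```sql
-- Covered index: age is stored in the index rows, so a query like
-- SELECT age FROM t WHERE name = '...' can be served entirely from
-- the index without going back to the data table.
CREATE INDEX name_age_idx ON t (name) INCLUDE (age);

-- Alternatively, force a specific index with a hint:
-- SELECT /*+ INDEX(t name_age_idx) */ age FROM t WHERE name = '...';
```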
hi:
I'm working on phoenix-4.7-hbase-1.1 on hbase-1.1.3.
1. I have a hbase table :create 'test', 'cf'
2. then created a phoenix view: create view "test" (pk varchar
primary key, "cf"."column").
for now, everything works fine, I put data through hbase shell: put
'test', 'row1',
Async Index job http://phoenix.apache.org/secondary_indexing.html:
${HBASE_HOME}/bin/hbase org.apache.phoenix.mapreduce.index.IndexTool
--schema MY_SCHEMA --data-table MY_TABLE --index-table ASYNC_IDX
--output-path ASYNC_IDX_HFILES
How do I submit that job to a YARN cluster?
On a single
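One way this is usually handled: the IndexTool is a plain MapReduce job, so it goes to YARN when the client picks up the cluster's YARN/MapReduce configuration. A hedged sketch, with paths as illustrative assumptions:

```shell
# Make sure the client sees yarn-site.xml and mapred-site.xml
# (with mapreduce.framework.name=yarn), otherwise the job runs locally.
export HADOOP_CONF_DIR=/etc/hadoop/conf          # assumed path
export HADOOP_CLASSPATH=$(${HBASE_HOME}/bin/hbase mapredcp)

${HBASE_HOME}/bin/hbase org.apache.phoenix.mapreduce.index.IndexTool \
  --schema MY_SCHEMA --data-table MY_TABLE --index-table ASYNC_IDX \
  --output-path ASYNC_IDX_HFILES
```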
1. I have hbase table:
hbase shell: create 'table', 'cf'
2. then create view on that
phoenix: create view "table" (pk varchar primary key, "cf"."field"
varchar);
3. create index on view
phoenix: create index idx on "table" ("cf"."field");
IDX created in phoenix, and I can select
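To check whether the index on the view is actually used, EXPLAIN shows the chosen plan. A sketch against the view above (the literal in the WHERE clause is a placeholder):

```sql
-- If IDX is chosen, the plan scans IDX instead of "table".
EXPLAIN SELECT "cf"."field" FROM "table" WHERE "cf"."field" = 'x';
```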
hi, I read in the documentation that an index can be created asynchronously.
After creating the index with the ASYNC keyword, you then kick off a MapReduce job to
populate the index.
${HBASE_HOME}/bin/hbase org.apache.phoenix.mapreduce.index.IndexTool
--schema MY_SCHEMA --data-table MY_TABLE --index-table ASYNC_IDX
instead.
Regards,
Ankit Singhal
On Wed, Apr 20, 2016 at 8:00 AM, 金砖 <jinzh...@wacai.com
<mailto:jinzh...@wacai.com>> wrote:
thanks for your reply rafa!
It works. Now the value isn't null, but is -1322110156.
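The -1322110156 is consistent with the byte encodings involved: the hbase shell put stored the ASCII string '123456', while an INTEGER view column decodes raw bytes with the sign bit flipped back. A small Python sketch of that byte math (illustration only, not Phoenix's actual decode path):

```python
import struct

stored = b'123456'            # hbase shell stores the ASCII string bytes
four = bytearray(stored[:4])  # an INTEGER is 4 bytes: b'1234' = 31 32 33 34
four[0] ^= 0x80               # Phoenix un-flips the sign bit when decoding
value = struct.unpack('>i', bytes(four))[0]
print(value)  # -1322110156, the value reported above
```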
But when using this: create view if not exists "test" (pk VARCHAR primary key,
"family"."number" INTEGER);
regards,
rafa
On Tue, Apr 19, 2016 at 12:58 PM, 金砖 <jinzh...@wacai.com
<mailto:jinzh...@wacai.com>> wrote:
I'm new to phoenix
using phoenix-4.7.0-HBase-1.1 on hbase-1.1.3
my steps:
1. create hbase table
create 'test', 'family'
2. put row in hbase table
put 'test', 'row1', 'family:number', '123456'
3. create view in phoenix
create view if not exists "test" (pk VARCHAR primary key,