The ARRAY_APPEND function will appear in the 4.5.0 release.
Thanks,
James
On Thu, Jun 18, 2015 at 1:53 AM, guxiaobo1982 guxiaobo1...@qq.com wrote:
Hi,
I tried the following examples regarding the array data type:
create table artest(a integer, b integer[], constraint pk primary key(a));
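For example, a minimal sketch (values made up; the last statement assumes the
4.5.0 ARRAY_APPEND mentioned above):
upsert into artest values (1, ARRAY[1, 2, 3]);
-- arrays are one-based in Phoenix:
select b[1] from artest where a = 1;
-- once 4.5.0 is available:
upsert into artest(a, b) select a, ARRAY_APPEND(b, 4) from artest where a = 1;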
Hi Alex,
We don't have a way of globally enabling/disabling the normalization
we do for column names by uppercasing them. However, there are a
couple of features that might help you:
1) You don't need to reference column family names unless column names
are ambiguous without it.
2) You can set a
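To illustrate 1), a minimal sketch (table and family names are made up):
-- given columns f1.col1 and f2.col2 with distinct names, no prefix is needed:
select col1, col2 from my_table;
-- qualify only when both families have a column with the same name:
select f1.col, f2.col from my_table;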
Hey Leon,
You can have an array in an index, but it has to be at the end of the
PK constraint which is not very useful and likely not what you want -
it'd essentially be the equivalent of having the array at the end of your
primary key constraint.
The other alternative I can think of that may be more
Thanks for digging, Arun. That's super helpful. Doing that check for a
view is really a bug as it's not traversing the correct link anyway.
If you have a table -> v1 -> v2, it won't prevent you from dropping
v1. It's really meant to check whether or not you're trying to drop a
table that has views.
Hi Yanlin,
What version of Phoenix are you using? I tried the following in
sqlline, and it worked fine:
0: jdbc:phoenix:localhost> create table t1 (k varchar primary key,
col1 varchar);
No rows affected (10.29 seconds)
0: jdbc:phoenix:localhost> select fact.col1 from (select col1 from t1) as fact;
Hi Yufan,
The outer query should use the alias name (c1). If it doesn't, please
file a JIRA when you have a chance.
Thanks,
James
On Tue, Jun 16, 2015 at 2:03 PM, yanlin wang wangyan...@gmail.com wrote:
Thanks James. My example is bad …
On Jun 16, 2015, at 1:39 PM, James Taylor jamestay
tool I am using tries to generate SQL with double quotes in it. If I
remove the double-quoted aliases, Phoenix works fine.
Thx
Yanlin
On Jun 16, 2015, at 2:16 PM, James Taylor jamestay...@apache.org wrote:
Hi Yufan,
The outer query should use the alias name (c1). If it doesn't, please file
Hi Nishant,
Have you seen this:
https://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table
Your row key is a byte[] in HBase. It has no column qualifier, so you
wouldn't want to prefix those columns with any column family.
Thanks,
James
On Sun, Jun 14, 2015 at
and contribute back?
somewhere in here?
https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/expression/function/RoundDecimalExpression.java
On Saturday 13 June 2015 05:10 AM, James Taylor wrote:
Hello,
It's a bug - thanks for the detail on how to reproduce
Good idea. Please file a JIRA. Would be good to quantify the potential gain
with and without FAST_DIFF encoding (the default) and/or Snappy compression.
On Sunday, June 14, 2015, yanlin wang wangyan...@gmail.com wrote:
Thanks for the reply Anil.
Is this what you referring:
create view my
.
On Sun, Jun 14, 2015 at 5:59 PM, James Taylor jamestay...@apache.org wrote:
Good idea. Please file a JIRA. Would be good to quantify the potential
gain with and without FAST_DIFF encoding (the default) and/or Snappy
compression.
On Sunday
On Jun 12, 2015 4:55 AM, James Taylor jamestay...@apache.org wrote:
Hi Nishant,
So your row key has the '|' embedded in the row key as a separator
character? Are the qualifiers fixed length or variable length? Are
they strings?
Thanks
Hello,
Please see the following link for our support of sequences:
http://phoenix.apache.org/sequences.html
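For example, a minimal sketch (sequence and table names are made up):
create sequence my_seq;
upsert into my_table(id, name) values (next value for my_seq, 'foo');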
Thanks,
James
On Fri, Jun 12, 2015 at 4:17 AM, Ns G nsgns...@gmail.com wrote:
Hello Friends,
I have a requirement in my project to create a record (primary key being id).
This is
:06, James Taylor wrote:
Dawid,
Perhaps a dumb question, but did you execute a CREATE TABLE statement
in sqlline for the tables you're importing into? Phoenix needs to be
told the schema of the table (i.e. it's not enough to just create the
table in HBase).
Thanks,
James
On Mon, Jun 8
That's the lock we take out on the server-side when you drop a view.
The timeout is controlled by the hbase.rowlock.wait.duration config
parameter. The default is 30 seconds which is already way more time
than you'd need to drop the view (which amounts to deleting a handful
of rows from the
based and that is what the PK is based on.
Thanks,
Ralph
On 6/8/15, 10:00 AM, James Taylor jamestay...@apache.org wrote:
Hi Ralph,
What kind of workload do you expect on your cluster? Will there be
many users accessing many different parts of your table(s)
simultaneously? Have you considered
Both DATE and TIME have millisecond granularity (both are stored as 8
byte longs), so I'd recommend using either of those. Phoenix also
supports date arithmetic, so you can do queries like this to get the
last week's worth of data:
SELECT * FROM SENSOR_DATA
WHERE sid = 'ID1' AND dt > CURRENT_TIME() - 7
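For reference, a minimal sketch of the kind of schema that query assumes
(names made up):
create table sensor_data (sid varchar not null, dt date not null, val double
constraint pk primary key (sid, dt));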
Hi Vijay,
You've got a couple of options:
1) Force the query to do a skip scan (the Phoenix equivalent of the
FuzzyRowKeyFilter) by adding a hint like this:
select /*+ SKIP_SCAN */ * from my_table where cid = ? and tid = ?
By default, Phoenix won't do a skip scan when there are gaps in the PK
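A sketch of the pattern (schema made up; the middle PK column is what the
skip scan jumps over):
create table my_table (cid varchar not null, gap_col varchar not null,
tid varchar not null, v varchar
constraint pk primary key (cid, gap_col, tid));
select /*+ SKIP_SCAN */ * from my_table where cid = 'c1' and tid = 't1';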
What's different between the two environments (i.e. the working and
not working ones)? Do they have the same amount of data (i.e. number
of views)? Do you mean 1.3M views or 1.3M rows? Do you have any
indexes on your views? If not, then dropping a view is nothing more
than issuing a delete over
I'd look toward Cloudera to help with this. Phoenix has no control over
what's in the CDH distro, so ensuring compatibility isn't really feasible.
I'd encourage any and all Phoenix users running on CDH to let Cloudera know
that you'd like to see Phoenix added to their distro. They've started down
Rather than use a SALT_BUCKET of 2, just don't salt the table at all. It
never makes sense to have a SALT_BUCKET of 1, though.
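For example (minimal sketches with made-up names):
-- salted across 4 buckets:
create table t_salted (k varchar primary key, v varchar) SALT_BUCKETS = 4;
-- unsalted:
create table t_plain (k varchar primary key, v varchar);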
How many total tables do you have? Are you using views at all (
http://phoenix.apache.org/views.html)?
Thanks,
James
On Wednesday, June 3, 2015, Puneet Kumar Ojha
Glad that it worked. I think one of our HBase committers can probably
explain that better than me. Nick?
Thanks,
James
On Tue, Jun 2, 2015 at 3:07 PM, Arun Kumaran Sabtharishi
arun1...@gmail.com wrote:
James,
Thanks for your reply. It worked!
But, can you help me understand how does it make
Yes, Phoenix supports DOUBLE, but only as of 4.4.0 does it support
specifying a literal in E notation. Prior to 4.4.0, you'll need to
express the number using plain decimal (###.###) notation instead.
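For example, a minimal sketch (a table t with a DOUBLE column d is assumed):
-- 4.4.0 and later:
upsert into t(k, d) values (1, 1.5E7);
-- prior to 4.4.0:
upsert into t(k, d) values (1, 15000000.0);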
Thanks,
James
On Fri, May 29, 2015 at 12:16 PM, Yufan Liu yli...@kent.edu wrote:
Hi,
Phoenix supports Double
Hi Isart,
That code isn't Phoenix code. This sounds like a Node JS issue. Vaclav
has done a lot with Node JS, so he may be able to give you some tips.
Thanks,
James
On Mon, May 18, 2015 at 9:06 AM, Isart Montane isart.mont...@gmail.com wrote:
Hi Eli,
thanks a lot for your comments. I think you
Hello,
A query like the following:
select * from phoenix_table_name limit 1000;
will apply a limit on the server side for each parallel scan being
done (using a PageFilter) and then also apply a limit on the client
side returning only the first 1000 rows.
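The plan reflects this (a sketch; exact output varies by version and table):
explain select * from phoenix_table_name limit 1000;
-- CLIENT PARALLEL n-WAY FULL SCAN OVER PHOENIX_TABLE_NAME
--     SERVER 1000 ROW LIMIT
-- CLIENT 1000 ROW LIMIT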
Thanks,
James
On Fri, May
-incubating version right?(We are currently using
this version).
Thanks
Prasanth Chagarlamudi
-Original Message-
From: James Taylor [mailto:jamestay...@apache.org]
Sent: Friday, May 15, 2015 11:27 AM
To: user
Subject: Re: Phoenix's behavior when applying limit to the query
Hello
You'll want to derive from BaseHBaseManagedTimeIT. The
BaseConnectionlessQueryTest class is for compile-time only or negative
tests as it doesn't spin up any mini cluster.
Thanks,
James
On Fri, May 15, 2015 at 5:41 AM, Ron van der Vegt
ron.van.der.v...@openindex.io wrote:
Hello everyone,
I'm
with a non-default
namespace now?
2015-05-12 6:04 GMT+08:00 James Taylor jamestay...@apache.org:
See PHOENIX-1311.
Thanks,
James
On Mon, May 11, 2015 at 5:24 AM, 娄帅 louis.hust...@gmail.com wrote:
Hi all,
I installed Phoenix on HBase, and I found that Phoenix uses the 'default'
namespace as its workspace. I manage HBase with many namespaces, and the
default namespace is not used by anyone.
:
James,
It is not always true. For DOUBLE we tend to convert the null to 0.0
instead of throwing an exception.
We should make the behavior uniform in this case. For varchar we support
NULLs anyway explicitly.
Regards
Ram
From: James Taylor [mailto:jamestay...@apache.org]
Sent
+1 to Jaime's suggestion of providing multiple arguments. You can have
a variable number of arguments to a function by providing default
values for trailing arguments. I wouldn't rely on the Tuple argument
in the evaluate method as it might go away in the future
(PHOENIX-1887).
Thanks,
James
On
FWIW, there's an option in sqlline that will cause it to display the
full date granularity, but I don't know what it is. Maybe someone else
does?
Thanks,
James
On Mon, May 4, 2015 at 12:00 AM, Gabriel Reid gabriel.r...@gmail.com wrote:
Hi Siva,
Yes, that's pretty much correct -- TO_DATE is
Hey Jude,
Would you mind trying with 4.3.1 release and letting us know if the
issue is resolved?
Thanks,
James
On Tue, Apr 28, 2015 at 6:07 PM, Jude K j2k...@gmail.com wrote:
Hi,
Yesterday, we created a secondary index on one of our tables to help improve
the read speed of performing select
Please add a sub task under PHOENIX-1665. It's related to PHOENIX-953
(Support for UNNEST for ARRAY), but more flexible.
Thanks,
James
On Fri, Apr 24, 2015 at 7:03 PM, Kathiresan S
kathiresanselva...@gmail.com wrote:
Thank you!
Also, wanted to know if there is any JIRA already for this,
Any improvement on the situation with your cluster, Ralph?
I don't have a recommendation specific to HBase
0.98.4. In general, Phoenix improves with each release, so the later
the better IMHO. The same would likely be said in the HBase community.
I can share the combinations of
Any JIRA yet, Marek? It'd be good to get to the bottom of this. Your
workaround is not going to perform nearly as well as using TRUNC on
the date.
Thanks,
James
On Tue, Apr 7, 2015 at 8:53 AM, James Taylor jamestay...@apache.org wrote:
Yes, please open a JIRA and attach that CSV (or ideally
No, that's not possible. Phoenix needs to know the type information
and that's what the table/view definition is telling it.
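For example, a minimal sketch (an existing HBase table t1 with family f1 is
assumed):
create view "t1" (pk varchar primary key, "f1"."col1" varchar);
select * from "t1";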
Thanks,
James
On Tue, Apr 7, 2015 at 4:00 AM, Bradman, Dale
dale.brad...@capgemini.com wrote:
Hello,
Is it possible to issue a SELECT statement on a pre-existing HBase
+1 to Thomas' idea. Please file a new JIRA - perhaps a subtask of
PHOENIX-400 for your idea.
Thanks,
James
On Tue, Apr 7, 2015 at 11:28 AM, Thomas D'Silva tdsi...@salesforce.com wrote:
Ashish,
If you want to step through server side code you can enable remote
debugging in hbase-env.sh. I
Hi Marek,
How did you input the data and what does your CREATE TABLE/VIEW
statement look like? What version of Phoenix and HBase are you using?
Also, would you mind running the following query and letting us know the output?
select to_char(hu_ts,'yyyy-MM-dd
)
Shall I open a jira for that?
Regards,
Marek
2015-04-06 20:16 GMT+02:00 James Taylor jamestay...@apache.org:
Hi Marek,
How did you input the data and what does your CREATE TABLE/VIEW
statement look like? What version of Phoenix and HBase are you using?
Also, would you mind running
double constraint pk
PRIMARY KEY(hu_ts,hu_ho_id,hu_stream_id) );
Phoenix: 4.3.0
Thanks,
Marek
2015-04-06 22:25 GMT+02:00 James Taylor jamestay...@apache.org:
Hi Marek,
How did you input the data and what does your CREATE TABLE/VIEW
statement look like? What version of Phoenix and HBase
The advantage is that you don't have to know about salted keys and how
to read them.
On Wed, Apr 1, 2015 at 12:56 AM, Flavio Pompermaier
pomperma...@okkam.it wrote:
There's no other way to read salted keys?
Could you briefly describe the advantages of that input format?
it reads
Hi Bryan,
Prior to the 4.2 release, if you want to delete rows from a table
declared as immutable, you need to drop the table (in which case the
index would be dropped as well). With 4.2 and above, the index of an
immutable table will be kept in sync when rows are deleted from the
data table with
I suspect you may not have auto commit on, in which case Phoenix would
attempt to buffer the results of the select in memory so that you
could commit it when you choose. Try setting auto commit on
(connection.setAutoCommit(true)) for your connection before issuing
the UPSERT SELECT statement.
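If you're using sqlline, a minimal sketch (table names are made up):
!autocommit on
upsert into target_table select * from source_table;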
On
Hi Naga,
It's lacking an owner at this time, but if you're up for it, it'd be a good
contribution.
Thanks,
James
On Friday, March 27, 2015, Naga Vijayapuram naga_vijayapu...@gap.com
wrote:
Spotted this JIRA - https://issues.apache.org/jira/browse/PHOENIX-1311
Is this planned for resolution
Are you familiar with this project:
https://github.com/simplymeasured/phoenix-spark/
https://github.com/simplymeasured/phoenix-spark/pull/2 ?
On Thursday, March 19, 2015, junaid khalid junaid.kha...@platalytics.com
wrote:
I tried adding hbase-protocol to the SPARK_CLASSPATH but that didn't
Hi Brian,
What version of HBase and Phoenix are you using? I tried the following
on 4.3.0 with 0.98.9-hadoop2 and it worked fine for me:
- From sqlline:
create table FOO(k bigint primary key, v varchar);
upsert into foo values(1,'a');
1 row affected (0.047 seconds)
select * from foo;
No, not currently, but PHOENIX-1550 would provide that.
On Sat, Mar 14, 2015 at 2:36 PM, Brian Johnson br...@brianjohnson.cc wrote:
But is there a way to avoid the create table step? It makes the restore
process much more complicated
On Mar 14, 2015, at 2:34 PM, James Taylor jamestay
Hi Noam,
We're tuning CSV bulk load in PHOENIX-1711, but it won't get you a 7x
speedup (maybe 30% at the most if we're lucky). The other thing you'd
lose by writing all values into one column is incremental update speed
which may or may not apply for your use case. To update a single
value, you'd
Hi Yohan,
Have you done a major compaction on your table and are stats generated
for your table? You can run this to confirm:
SELECT sum(guide_posts_count) from SYSTEM.STATS where
physical_name = 'your full table name';
Phoenix does intra-region parallelization based on these guideposts as
described
on bytes?
On Thu, Mar 5, 2015 at 3:44 PM, James Taylor jamestay...@apache.org
wrote:
This worked fine for me.
In HBase shell:
create 't1', {NAME => 'f1'}
In Phoenix sqlline:
create view v1(a VARCHAR PRIMARY KEY, f1.b INTEGER) as select * from
t1;
create view v2(a VARCHAR PRIMARY KEY
Hi Ryan,
I suspect it's a mismatch between commons-collections versions, as
the HTrace metrics framework requires a fairly new version (3.2.1). We
ran into that with PHOENIX-1613 and had to disable tracing when the
metrics initialization would fail. Perhaps this is another flavor of
something
Mujtaba - do you know where our 4.0.0-incubating artifacts are?
On Thu, Mar 5, 2015 at 9:58 PM, anil gupta anilgupt...@gmail.com wrote:
Hi Ted,
This morning I downloaded 4.1 from the link you provided. The problem
is that I was unable to find the 4.0.0-incubating release artifacts. So, I
There's a JIRA for supporting user defined functions with a few
comments here: PHOENIX-538
There's also a middle ground that's possible that wouldn't be full
blown UDFs, but would allow you to define new built-in functions in
your own jar which would need to be available on the client and server
Constantin,
I've filed PHOENIX-1693 for this issue, as we seem to be seeing a similar
phenomenon too. It seems to only occur if we've never run a major compaction
on the table, though. Is that the case for you as well?
Thanks,
James
On Mon, Feb 16, 2015 at 8:44 AM, Vasudevan, Ramkrishna S
Hi Noam,
Java 1.6 reached end of life more than a year ago, so Phoenix binaries no
longer support it. You can likely compile the Phoenix 4.3 source
against Java 1.6 yourself - I don't think we rely on 1.7 features
much.
Thanks,
James
On Sun, Mar 1, 2015 at 10:43 AM, Bulvik, Noam noam.bul...@teoco.com
When you create your view from an HBase table, you need to tell
Phoenix your types. Like you said, your values are an arbitrary array
of bytes. How would Phoenix know how they were serialized?
In general, storing everything as strings won't get you very far in
terms of querying (in Phoenix or
Gary,
I've got a patch available on PHOENIX-1690 that fixes the issue for my
tests. Would you mind giving it a whirl?
Thanks,
James
On Fri, Feb 27, 2015 at 6:40 PM, James Taylor jamestay...@apache.org wrote:
Thanks, Gary. That should be enough for me to repro (though it's a lot
of data
this is due to the timeout in the stats
update?
-Gary
On Fri, Feb 27, 2015 at 12:30 PM, James Taylor jamestay...@apache.org
wrote:
See inline. Thanks for your help on this one, Gary. It'd be good to
get to the bottom of it so it doesn't bite you again.
On Fri, Feb 27, 2015 at 11:13 AM, Gary
I'd recommend dropping the SYSTEM.SEQUENCE table from the HBase shell
(instead of deleting the folder in HDFS). Everything else sounded
fine, but make sure to bounce your cluster and restart your clients
after doing this.
Thanks,
James
On Thu, Feb 26, 2015 at 12:28 PM, Vamsi Krishna
running the loop to collect
the guideposts and let me know if you see that stats output?
Thanks again.
On Thu, Feb 26, 2015 at 5:55 PM, James Taylor jamestay...@apache.org
wrote:
Gary,
I'm not able to repro the issue - I filed PHOENIX-1690 to track it and
attached my test case
The problem with the old way that HBase represents BigDecimal is that
the serialized bytes don't sort the same way that the BigDecimal does
(FWIW, but orthogonal to this discussion and not something that will
help you with this particular situation, a new type system was
introduced in HBase to fix
Gary,
One possible workaround. Can you try adding the SKIP_SCAN hint to your
query (instead of the AND device_type in
('MOBILE','DESKTOP','OTHER','TABLET')), like this?
SELECT /*+ SKIP_SCAN */ count(1) cnt,
...
Thanks,
James
On Wed, Feb 25, 2015 at 10:16 AM, James Taylor jamestay...@apache.org
the original query.
Thanks,
James
On Thu, Feb 26, 2015 at 10:52 AM, James Taylor jamestay...@apache.org wrote:
Gary,
One possible workaround. Can you try adding the SKIP_SCAN hint to your
query (instead of the AND device_type in
('MOBILE','DESKTOP','OTHER','TABLET')), like this?
SELECT
The Apache Phoenix team is pleased to announce the immediate
availability of the 4.3 release. Highlights include:
- functional indexes [1]
- map-reduce over Phoenix tables [2]
- cross join support [3]
- query hint to force index usage [4]
- set HBase properties through ALTER TABLE
- ISO-8601 date
The Apache Phoenix team is pleased to announce the immediate
availability of the 3.3 release. Highlights include:
- map-reduce over Phoenix tables [1]
- cross join support [2]
- query hinting to force index usage [3]
- csv date/time/timestamp loading improvements
- over 50 bug fixes
The release
Sounds like a bug. I'll try to repro on my end. Thanks for the details, Gary.
James
On Tue, Feb 24, 2015 at 1:49 PM, Gary Schulte
gschu...@marinsoftware.com wrote:
On Tue, Feb 24, 2015 at 12:29 AM, James Taylor jamestay...@apache.org
wrote:
Based on your query plan, the skip scan
.
CertusNet
From: James Taylor
Date: 2015-01-26 12:21
To: su...@certusnet.com.cn
CC: user; James Taylor
Subject: Re: [ANNOUNCE] Apache Phoenix meetup in SF on Tue, Feb 24th
Hi Sun,
Yes, we'll make sure to share any slides of presentations. We're also
planning
,
KEYWORD_ID and CUSTOMER_ID formed your primary key constraint, then the
skip scan would work well.
Thanks,
James
On Mon, Feb 23, 2015 at 5:24 PM, James Taylor jamestay...@apache.org
wrote:
Hi Gary,
Would you mind posting your schema and query as well?
Thanks,
James
On Mon, Feb 23, 2015 at 5:08 PM
FYI, SQuirrel sets the max rows to return as 100. You can change this in
the tool, though.
On Tuesday, February 24, 2015, Maryann Xue maryann@gmail.com wrote:
Thanks a lot, Matt, for the reply! Very helpful. *SERVER FILTER BY
PageFilter 100* does look like a bug here. I will try again to
Hi Mark,
There's not a great reason for this restriction, so it can likely be
relaxed. The tenant ID comes from a connection property, so it'll be a
string, but we could convert it based on the data type of the first
column. Please file a JIRA if this is important for your use case.
Thanks,
James
We should likely set the TTL automatically on indexes. We'd need to do
something special for shared indexes (local and view indexes), as the
TTL would only apply to a certain set of rows. Samarth - can you file
a JIRA?
FYI, the reason we don't allow a different TTL for different column
families
One more feature to mention is the ability to set a default timezone
through the phoenix.query.dateFormatTimeZone config property. This will be
available in our 4.3 release (see PHOENIX-1485).
On Tue, Feb 10, 2015 at 6:55 PM, Thanaphol Prasitphaithoon
thanapho...@mindterra.com wrote:
Hi All
Mike,
Nothing is required that I'm aware of to just run the unit tests. I do
remember having to add a line in my /etc/host on my Mac laptop that
wasn't required on my linux box. Something like this, where the second
entry after localhost is the ip address of your machine.
127.0.0.1
data and have custom Pig UDFs to apply various
transformations and then finally have the tuples upserted back to Phoenix
tables using our PhoenixHBaseStorage.
My two cents :)
Regards
Ravi
On Wed, Jan 28, 2015 at 9:35 AM, James Taylor jamestay...@apache.org
Glad to hear it, Ralph. Still sounds like there's a bug here (or at a
minimum a usability issue), but not a showstopper for the 4.3 release.
Would you mind filing a JIRA for it?
Thanks,
James
On Tue, Feb 3, 2015 at 4:31 PM, Ravi Kiran maghamraviki...@gmail.com wrote:
Hi Ralph,
Glad it is
additional columns to be added after
the enrichment process).
Thank you for your time and I'd appreciate your thoughts about this.
-Jaime
On Jan 27, 2015 11:51 PM, James Taylor jamestay...@apache.org wrote:
Hi Jaime,
Would it be possible to see a few examples of the kind of
transformations
Hi Kevin,
This is a bug that has been fixed in 4.2.2. I tried the following to
verify. Does this reproduce your situation?
Thanks,
James
0: jdbc:phoenix:localhost> create table test.product(k varchar primary
key, f.upc integer, f.name varchar);
No rows affected (0.263 seconds)
0:
An index on a view created over an HBase table is not maintained by
Phoenix, as the updates to the table do not go through Phoenix APIs.
If you want to use secondary indexing, your best bet is to use Phoenix
tables, instead of views.
Thanks,
James
On Tue, Jan 27, 2015 at 9:51 PM, Chandu
row key?
Thanks again!
On Sun, Jan 25, 2015 at 12:49 PM, James Taylor jamestay...@apache.org
wrote:
You can also use our map reduce integration which will use secondary
indexes automatically/transparently just as is done when using SQL APIs.
If you use map reduce outside of this against
be used with prefix encoding, there is a contradiction between
these two features
-Original Message-
From: James Taylor [mailto:jamestay...@apache.org]
Sent: Monday, January 19, 2015 7:00 PM
To: user
Subject: Re: short name for columns
Good idea. Phoenix doesn't do that today. I'm hoping
Good idea. Phoenix doesn't do that today. I'm hoping that HBase can
come up with better block encodings that factor this kind of
information out without perf taking a hit. They actually have one
(TRIE), but I'm not sure how stable it is. Also, I'm not sure how well
the existing encodings do for
need to investigate more deeply into
the query. I will check the configurations you
provided in the later tests.
Thanks,
Sun
--
--
CertusNet
*From:* James Taylor
*Date:* 2015-01-15
:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
On Tue, Jan 13, 2015 at 10:02 PM, James Taylor jamestay...@apache.org
wrote:
The warning for [1] can be ignored, but [2] is problematic. You're
coming from
Wow, that's really awesome, Josh. Nice work. Can you let us know
if/when it makes it in?
One modification you may want to consider in a future revision to
protect yourself in case the SYSTEM.CATALOG schema changes down the
road: Use the DatabaseMetaData APIs[1] instead of querying the
(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
On Mon, Jan 12, 2015 at 7:05 PM, James Taylor jamestay...@apache.org
wrote:
Hi Kristoffer,
You'll need to upgrade first from 2.2.3 to 3.0.0-incubating, and then
to each minor version (3.1 and then 3.2.2) to trigger
Hi Sumanta,
Another alternative option is to leverage support for VIEWs in Phoenix (
http://phoenix.apache.org/views.html). In many use cases I've seen where
there are hundreds of sparse columns defined for a schema, there's a column
that determines *which* sparse columns are applicable for a
Thanks for sharing your experiences, Vaclav. That's very valuable.
Yes, for (1) bad things can happen if a region server doesn't have the
Phoenix jar. This was improved as of HBase 0.98.9 with HBASE-12573 and
HBASE-12575. For (3), this was fixed as of Phoenix 3.1/4.1 with
PHOENIX-1075. If you have
.* to solve
this kind of issue with simplicity.
Regards
Sumanta
-James Taylor jamestay...@apache.org wrote: -
To: user user@phoenix.apache.org
From: James Taylor jamestay...@apache.org
Date: 01/07/2015 01:35PM
Subject: Re: Select dynamic column content
Hi Sumanta,
Another alternative
case class Row (
rowkey: String,
columns: Map[String, Column] // colname -> Column
)
case class Table (
name: String,
rows: Map[String, Row] // rowkey -> Row
)
On Tue, Dec 23, 2014 at 9:06 PM, James Taylor jamestay...@apache.org
wrote:
No, that's currently not possible. You may be able to leverage
Hi David,
bq. Too many regions can easily burden the HBase cluster heavily,
even if the regions are empty
Is that true, HBase-committers?
You'll need to delete the SYSTEM.SEQUENCE, make sure to set the
phoenix.sequence.saltBuckets property, and bounce your cluster. Then
the next time you try to establish
No, that's currently not possible. You may be able to leverage one
of the following to help you, though:
- parallel arrays as you've mentioned
- different tables with an FK (and likely an index) between them
- dynamic columns (http://phoenix.apache.org/dynamic_columns.html)
- on-the-fly
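For the dynamic columns option, a minimal sketch (table and column names are
made up):
upsert into event_log (k, created_date date) values ('k1', to_date('2015-01-01 00:00:00'));
select k, created_date from event_log (created_date date);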
The scan is initiated on the client, but intercepted by coprocessors which
do the aggregation on the server side. Take a look at the presentations
here, as they go into the client/server interaction in more depth:
http://phoenix.apache.org/resources.html
On Monday, December 22, 2014, Komal
for ever,
are there other side effects to dropping the SYSTEM.SEQUENCE table?
If there are indeed other side effects, how can I reduce the number of regions?
Thanks again.
At 2014-12-20 15:13:33, James Taylor jamestay...@apache.org wrote:
Hi,
The system tables store and manage your metadata (i.e. tables
Hi Jamie,
No, it's currently not possible to store value information in column
qualifiers. There is interest in supporting this, though, if it can be
done in a SQL friendly way: see PHOENIX-1497 and PHOENIX-150.
I think the best you can do is use two parallel arrays, but that
starts to break down if
Hi Jerry,
In upgrading between patch releases (i.e. 4.2.1 -> 4.2.2), there's
nothing you need to do other than:
- replace the jar on all the HBase region servers with the new one
(make sure to remove the old one)
- use the new jar instead of the old jar on the client side.
- bounce your
?
<hbase.version>0.98.4-hadoop2</hbase.version>
<hadoop-two.version>2.2.0</hadoop-two.version>
On Thu, Dec 18, 2014 at 1:44 PM, James Taylor jamestay...@apache.org
wrote:
The 3.3 release is not out yet, but I plan to propose that we cut it
soon. Have you had a chance to try out the many-to-many
Hi Kristoffer,
Yes, you're correct - for a non aggregate, non join query, the
underlying result set is backed by the bloated HBase Result and
KeyValue/Cell. See PHOENIX-1489 - maybe we can continue the discussion
there? Your comments here would be valuable over there too.
Thanks,
James
On Wed,
What version are you using, Abe? The important parameter to bump up if
you get a rejected exception is the queue depth
(phoenix.query.queueSize). When you upgrade to 4.2.x, the default for
this went up to 5000, the reason being that we chunk up work into
smaller pieces. If you're already
.
CertusNet
From: James Taylor
Date: 2014-12-16 14:34
To: user
CC: dev
Subject: Re: hbase 0.96.1.1-cdh5.0.1 for phoenix compatibility
HBase 0.96 is not supported by Phoenix - only HBase 0.98.1 and above.
The CDH 5.1 releases package HBase 0.98, so these are ok.
On Mon, Dec