Hi Deepak Noam,
The issue to which you're referring, PHOENIX-1127, is specific to the
CSV bulk loader and is not directly related to Mark's original
question. Let's discuss that on the JIRA. Please let me know there if
you've tried the workarounds I've mentioned. Also, if you could
comment on
Thanks for that somewhat tricky workaround, Gabriel. Might be worth
filing a JIRA to see if we could support specifying \t for tab if it's
easy/feasible?
On Thu, Dec 11, 2014 at 2:04 AM, Gabriel Reid gabriel.r...@gmail.com wrote:
I just discovered how to get this working properly (I had wrongly
cluster? And a related question: how to safely detect the failure
;)
Thanks,
JM
2014-12-09 20:48 GMT-05:00 James Taylor jamestay...@apache.org:
No, we're not saying to avoid replication: at SFDC, we rely on
replication to provide an active/active configuration for failover.
Lars H. co
Hello Phoenix users,
We'd like to get an idea of how many users are still relying on the
Phoenix 3.x releases that work with HBase 0.94. We've been diligently
maintaining feature parity (as much as possible) between the 3.x
releases and 4.x releases, but the cost of continuing to do so is
becoming
The Apache Phoenix team is pleased to announce the immediate
availability of the 4.2.2/3.2.2 release. For details of the release,
see our release announcement[1].
The Apache Phoenix team
[1] https://blogs.apache.org/phoenix/entry/announcing_phoenix_4_2_2
The timestamp of a Cell is not surfaced directly in Phoenix. See
PHOENIX-914 for some related discussion. You could create a new
built-in function that would return the remaining TTL for a given row.
For a guide on how to create a new built-in function, see
Hi Siddharth,
Phoenix does no locking, so I'm not sure what functionality you're after
wrt the hint you're proposing. Have you tried adding secondary indexes on
your foreign key columns? Also, join order is important: you'd want to join
from the biggest (on the lefthand side) to the smallest (on
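For instance (hypothetical tables, with orders assumed much larger than customers):
-- index the foreign key column
CREATE INDEX orders_cust_idx ON orders (customer_id);
-- join from the biggest table (lefthand side) to the smallest
SELECT o.order_id, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id;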
Hi Vijay,
Yes, you can declare a composite primary key with fixed length sizes
for each part. The types you use for each column depend on how you
serialized the data into the rowkey. Are they all strings with a fixed
length? If so, it'd look something like this:
create table events (
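Purely as a sketch, assuming three fixed-length string key parts (hypothetical names and lengths), the complete statement might read:
create table events (
    host char(10) not null,
    event_type char(4) not null,
    event_date char(8) not null,
    payload varchar
    constraint pk primary key (host, event_type, event_date));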
See https://issues.apache.org/jira/browse/PHOENIX-976
On Wed, Dec 3, 2014 at 5:38 PM, Rama Ramani rama.ram...@live.com wrote:
Sorry, send to dev alias by mistake, sending to the user DL
When running the MapReduce command from
http://phoenix.apache.org/bulk_dataload.html, I am getting an Access
Hi Sun,
Support for UDFs would be great (see PHOENIX-538) if someone would
like to contribute this. We'd need a way of disabling them too,
though, IMO.
Adding your own built-in function is pretty easy too, though. See
Hi Noam,
Have you tried the -h option when you do the bulk load? See
http://phoenix.apache.org/bulk_dataload.html
Thanks,
James
On Thu, Nov 27, 2014 at 1:03 AM, Bulvik, Noam noam.bul...@teoco.com wrote:
Hi,
Currently (4.1) there seems to be a limitation that the order of columns in the
primary key
Yes, as Ted points out, Phoenix will use a reverse scan to optimize an ORDER BY.
On Mon, Dec 1, 2014 at 7:52 PM, Ted Yu yuzhih...@gmail.com wrote:
Please take a look at BaseQueryPlan#iterator():
if (OrderBy.REV_ROW_KEY_ORDER_BY.equals(orderBy)) {
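As a rough illustration (hypothetical table), a descending ORDER BY over the leading row key column can be served by a reverse scan rather than a re-sort:
CREATE TABLE metrics (ts DATE NOT NULL PRIMARY KEY, val BIGINT);
-- row key order, traversed backwards: eligible for a reverse scan
SELECT * FROM metrics ORDER BY ts DESC;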
See also PHOENIX-922 which would be a more elegant way of selecting a
single row. It'd be great if someone could pick this one up.
Thanks,
James
On Thu, Nov 27, 2014 at 11:11 AM, Bulvik, Noam noam.bul...@teoco.com wrote:
Sure I can,
I was looking for a more elegant solution
From:
Hi Abe,
Our backward compatibility story between client and server is
evolving. Our model to date has been:
1) update the server Phoenix jar first with the new release. Clients
one minor release back will continue to work with the new server jar
(this is the scenario for which we test).
2) upgrade
FYI, the default column family name is '0'
On Monday, November 24, 2014, Eli Levine elilev...@gmail.com wrote:
Guillermo, Phoenix by default puts columns into a CF named '_0'. You can
specify a different CF when creating tables or columns like this:
mycf.col1, in which case Phoenix would put
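For instance (a hypothetical table):
-- col1 goes to column family MYCF; col2 goes to the default family '0'
CREATE TABLE t (k VARCHAR PRIMARY KEY, mycf.col1 VARCHAR, col2 VARCHAR);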
Thanks, Thomas and Noam - that's very useful info. If you wouldn't mind
filing a JIRA, that'd be much appreciated. Of course, patches are welcome
as well.
James
On Wed, Nov 19, 2014 at 10:15 AM, Bulvik, Noam noam.bul...@teoco.com
wrote:
Thanks for the detailed info,
I think that Phoenix
Hi Ido,
ORDER BY is an expensive operation. Phoenix essentially needs to
re-write the table (the portion that you're selecting) on the RS using
memory-mapped files (likely the cause of the /tmp files you're seeing)
over which a merge sort will be performed.
I assume that the ORDER BY is the
To add to what Samarth said, please take a look also at the unit tests
in QueryMoreIT as they give you a pretty good idea on how to implement
paged queries in a scalable manner over big data.
Thanks,
James
On Sun, Nov 23, 2014 at 7:33 PM, Samarth Jain samarth.j...@gmail.com wrote:
Supporting
Hi Komal,
The CLIENT keyword in an explain plan means that action will be run on
the client side while SERVER means it'll run on the HBase region
server. We have not documented the meaning of an explain plan, though.
Would you mind filing a JIRA for this?
Thanks,
James
On Wed, Nov 12, 2014 at
Hi Ralph,
When an index is created, we make an estimation of its MAX_FILE_SIZE
in relation to the data table in an attempt to have it split
approximately the same number of times as the data table (because it's
typically smaller). You can override this, though, if it's not optimal
for your use
If you salt your table (which pre-splits the table into SALT_BUCKETS
regions), by default your index will be salted and pre-split the same
way.
FWIW, you can also presplit your table and index using the SPLIT ON
(...) syntax: http://phoenix.apache.org/language/index.html#create_table
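For example (hypothetical tables; the bucket count and split points are arbitrary):
-- salting pre-splits into 16 regions
CREATE TABLE t1 (k VARCHAR PRIMARY KEY, v VARCHAR) SALT_BUCKETS = 16;
-- explicit pre-split at chosen row key boundaries
CREATE TABLE t2 (k VARCHAR PRIMARY KEY, v VARCHAR) SPLIT ON ('g', 'n', 't');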
With the
details, James? Is it fast region reassignment
based on load statistics?
-Vladimir Rodionov
On Fri, Nov 7, 2014 at 10:29 AM, James Taylor jamestay...@apache.org
wrote:
If you salt your table (which pre-splits the table into SALT_BUCKETS
regions), by default your index will be salted and pre
Awesome! Thanks, Otis. This is great!
James
On Fri, Oct 31, 2014 at 9:51 AM, Otis Gospodnetic
otis.gospodne...@gmail.com wrote:
Hi everyone,
Quick announcement that we've added Apache Phoenix and made all its
resources searchable:
http://search-hadoop.com/phoenix
This lets you
Also, probably best to move this to the HBase mailing list, as this
isn't Phoenix-specific.
Thanks,
James
On Fri, Oct 31, 2014 at 10:05 AM, Jeffrey Zhong jzh...@hortonworks.com wrote:
From the following error, it means region 69e9a7efb9ee00b1ecfe50f825e7cc5b
is opening and can't serve any
It's used automatically. Please read
http://phoenix.apache.org/secondary_indexing.html
On Thu, Oct 23, 2014 at 6:52 PM, xuxc1120 xuxc1...@vip.qq.com wrote:
I use: create index name_idx on table(info.qulifier);
It is a global index.
So, how do I use name_idx to speed up queries?
--
Yes, local indexes are supported. Please read
http://phoenix.apache.org/secondary_indexing.html
On Thursday, October 23, 2014, xuxc1120 xuxc1...@vip.qq.com wrote:
hindex is excellent for local indexes. If I build hindex (based on
hbase-0.94.8) and hadoop-1.0.4, and use phoenix-2.2.1, does it
use 2.2.1 instead,
can phoenix-2.2.1 convert SQL to an HBase scan and then call hindex's
coprocessor to query data?
Thank you!
xuxc
-- Original Message --
From: James Taylor jamestay...@apache.org
Sent: Friday, October 24, 2014, 11:05 AM
To: user@phoenix.apache.org
Hi Bob,
Yes, you're correct - dynamic columns end up as column qualifiers.
Column names can't be supplied as parameters, though. How about if you
generate the UPSERT statement, and double quote your dynamic column
names if you want a case sensitive match? It'd look like this, then:
UPSERT INTO
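A sketch of such a generated statement (hypothetical table and dynamic column; the trailing VARCHAR declares the dynamic column's type):
-- the double quotes force a case-sensitive column qualifier match
UPSERT INTO events (id, "myDynamicCol" VARCHAR) VALUES ('id1', 'val1');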
:)
Thanks.
-- Lars
- Original Message -
From: James Taylor jamestay...@apache.org
To: user user@phoenix.apache.org; lars hofhansl la...@apache.org
Cc:
Sent: Monday, September 29, 2014 4:26 PM
Subject: Re: Recreating SYSTEM.CATALOG metadata
@Lars - any idea why Krishna may run
Hi JM,
Yes, that's possible - it's more-or-less what a secondary index is. So
you'd define your table as you did in your first CREATE TABLE
statement, and then you'd define a secondary index like this:
CREATE INDEX S_W_IDX ON asset_metadata (W);
You could also include other columns in the index
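For example, to make it a covered index (assuming A and R are other columns in the table):
CREATE INDEX S_W_IDX ON asset_metadata (W) INCLUDE (A, R);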
the existing CQ?
Thanks,
JM
2014-09-29 12:47 GMT-04:00 James Taylor jamestay...@apache.org:
Hi JM,
Yes, that's possible - it's more-or-less what a secondary index is. So
you'd define your table as you did in your first CREATE TABLE
statement, and then you'd define a secondary index like
/bin/phoenix-2.2.3-incubating.tar.gz
On Tue, Sep 30, 2014 at 12:34 AM, James Taylor jamestay...@apache.org
wrote:
Hi Kristoffer,
Did something change on your cluster prior to the breaking of your
SELECT queries? An upgrade of something?
From the look of the exceptions, it seems like
@Lars - any idea why Krishna may run into issues using Phoenix after a
restore from an HBase backup?
On Sun, Sep 28, 2014 at 9:00 PM, James Taylor jamestay...@apache.org wrote:
Hi Krishna,
I think that's what we need to figure out - why is Phoenix having
trouble when you restore
Hi JM,
Sure, you'd do that like this:
CREATE VIEW t1 ( USER unsigned_long,
ID unsigned_long,
VERSION unsigned_long,
f1.A unsigned_long,
f1.R unsigned_long,
f1.L unsigned_long,
f1.W unsigned_long,
f1.P bigint,
f1.N varchar,
f1.E varchar,
f1.S unsigned_long,
f1.M
The salt byte is the first byte in your row key and that's the max
value for a byte (i.e. it'll be 0-255).
On Wed, Sep 24, 2014 at 10:12 AM, Krishna research...@gmail.com wrote:
Hi,
According to Phoenix documentation
Phoenix provides a way to transparently salt the row key with a salting
Would you be able to talk about your use case a bit and explain why you'd
need this to be higher?
Thanks,
James
On Wednesday, September 24, 2014, Krishna research...@gmail.com wrote:
Thanks... any plans of raising the number of bytes for the salt value?
On Wed, Sep 24, 2014 at 10:22 AM, James Taylor
of usage of HBase 0.94 on top of Hadoop 1.
So maybe keep it alive in 3.0? 3.0 can be retired when HBase 0.94 is retired
(although I have no plans for 0.94 retirement, yet).
-- Lars
- Original Message -
From: James Taylor jamestay...@apache.org
To: d...@phoenix.apache.org d
+1 to doing the same for hbase-testing-util. Thanks for the analysis, Andrew!
James
On Mon, Sep 22, 2014 at 9:18 AM, Andrew Purtell apurt...@apache.org wrote:
On Thu, Sep 18, 2014 at 3:01 PM, James Taylor jamestay...@apache.org wrote:
I see. That makes sense, but it's more of an HBase
Take a look at my blog on how SequenceIQ set up a Docker for
Phoenix+HBase to make it super easy to get started:
https://blogs.apache.org/phoenix/entry/getting_started_with_phoenix_just
Thanks,
James
correct me if my understanding is wrong.
Thanks Regards,
Prakash Hosalli
-Original Message-
From: James Taylor [mailto:jamestay...@apache.org]
Sent: Tuesday, September 09, 2014 11:56 PM
To: user; anil gupta
Subject: Re: Hive
+1. Thanks, Alex. I added a blog pointing folks there as well:
https://blogs.apache.org/phoenix/entry/connecting_hbase_to_elasticsearch_through
On Wed, Sep 10, 2014 at 2:12 PM, Andrew Purtell apurt...@apache.org wrote:
Thanks for writing in with this pointer Alex!
On Wed, Sep 10, 2014 at 11:11
Hi Vikas,
Please post your schema and query as it's difficult to have a discussion
without those. Also if you could post your HBase code, that would be
interesting as well.
Thanks,
James
On Friday, September 5, 2014, yeshwanth kumar yeshwant...@gmail.com wrote:
hi vikas,
we used phoenix on a
the block cache after each run (if you
don't).
Thanks,
James
On Fri, Sep 5, 2014 at 9:00 AM, James Taylor jamestay...@apache.org wrote:
Hi Vikas,
Please post your schema and query as it's difficult to have a discussion
without those. Also if you could post your HBase code, that would be
interesting
Vikas,
Please post your schema and query.
Thanks,
James
On Fri, Sep 5, 2014 at 9:18 PM, Vikas Agarwal vi...@infoobjects.com wrote:
Ours is also a single-node setup right now, and as of now there are fewer than
1 million rows, which is expected to grow to around 100M at minimum.
I am aware of
Hi Liang,
I recommend you try this with the binaries we package in our 4.1 release
instead: http://phoenix.apache.org/download.html
Thanks,
James
On Tue, Sep 2, 2014 at 10:50 AM, 夏凉 luoxiulu...@163.com wrote:
Hi Alex,
I changed the code, but it still doesn't work. It outputs the following error
Hi Vikas,
Glad you got it working. Just curious - why did you install Phoenix
via yum when HDP 2.1 already comes with Phoenix pre-installed?
Thanks,
James
On Mon, Sep 1, 2014 at 10:16 AM, Vikas Agarwal vi...@infoobjects.com wrote:
Yes, I am using HDP 2.1 and installed Phoenix via yum and it
In addition to the above, in our 3.1/4.1 release, you can pass through
the principal and keytab file on the connection URL to connect to
different secure clusters, like this:
DriverManager.getConnection("jdbc:phoenix:h1,h2,h3:2181:user/principal:/user.keytab");
The full URL is now of the form
Hello everyone,
On behalf of the Apache Phoenix team, I'm pleased to announce the
immediate availability of our 3.1 and 4.1 releases:
http://phoenix.apache.org/download.html
These include many bug fixes along with support for nested/derived
tables, tracing, and local indexing. For details of the
You can try a few things:
- salt your table by tacking on a SALT_BUCKETS=n where n is related to
the size of your cluster. Perhaps start with 16.
- lead your primary key constraint with core desc if this is your
primary means of accessing this table.
- add a secondary index over core desc if this
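A rough sketch combining those options (hypothetical schema; adjust names and bucket count to your data):
CREATE TABLE t (
    core BIGINT NOT NULL,
    id VARCHAR NOT NULL,
    val VARCHAR
    CONSTRAINT pk PRIMARY KEY (core DESC, id))
    SALT_BUCKETS = 16;
-- or, if core cannot lead the primary key, index it instead:
CREATE INDEX core_idx ON t (core DESC);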
Hey Dan,
There were some changes in the test framework to make them run faster.
Our entire test suite can run in about 10-15mins instead of 60mins
now. One of the new requirements is adding the annotation that Samarth
indicated. Once JUnit releases 4.12, this will no longer be necessary,
as the
Hi JM,
Let me make sure I understand your use case. You have 156M rows worth
of data in the form CustID (BIGINT), URL (VARCHAR). You have a CSV
file with the data. Is CustID already unique in the CSV file? If not,
won't you run into issues trying to load the data, as you'll be
overwriting row
Sounds like you may have an out-of-sync issue with your
SYSTEM.CATALOG. What version of Phoenix were you using before you
tried the 4.1 RC? Is this the 4.1 RC1? Did you upgrade from an earlier
Phoenix version, as the IS_VIEW_REFERENCED column was added in the 3.0/4.0
release, I believe? If you upgraded,
Thanks, JM. It'd be great to have support for Phoenix 4.1 once it's
officially released (hopefully in a few days if the RC holds up).
On Tue, Aug 26, 2014 at 4:46 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
I faced this and also, BigTop doesn't compile against Phoenix 4.0.1. And
Spaggiari
jean-m...@spaggiari.org wrote:
Hi James,
I can see 4.0.1 and 4.0, but not 4.1. Which branch will be used for 4.1?
Will it be from 4.0.1?
Thanks,
JM
2014-08-26 19:54 GMT-04:00 James Taylor jamestay...@apache.org:
Thanks, JM. It'd be great to have support for Phoenix 4.1 once it's
I think Ravi committed this as part of a different JIRA. Ravi?
Thanks,
James
On Mon, Aug 25, 2014 at 9:03 PM, Randy Martin randy.mar...@ds-iq.com wrote:
It looks like JIRA issue PHOENIX-898 was originally tracking this, but the
fix appears to have been reverted in 4.1.0 RC 0 and 1. Can
Hi Jan,
Yes, this works as designed. Would you mind filing a JIRA for us to enhance
our multi tenant docs, as it sounds like it's unclear?
Without creating a view, you won't be able to add tenant specific columns
or indexes (i.e. evolve each tenant's schema independently). You can, of
course,
The dependencies on HBase 0.98.4 are *compile time* dependencies. Is it
necessary for you to compile against CDH 5.1 or just run against it?
On Tuesday, August 19, 2014, Russell Jurney russell.jur...@gmail.com
wrote:
That's really bad. That means... CDH 5.x can't run Phoenix? How can this be
A secondary index will only be maintained if you go through Phoenix APIs
when you update your data table. Create a table over your HBase table
instead of a view and use Phoenix UPSERT and DELETE statements to update
your data instead of HBase APIs and your mutable secondary index will be
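In other words (hypothetical table), writes like these keep a mutable index in sync, while direct HBase Puts and Deletes bypass index maintenance:
UPSERT INTO my_table (pk, indexed_col) VALUES ('row1', 'value1');
DELETE FROM my_table WHERE pk = 'row1';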
Are you running against a secure cluster? If so, you'd need to compile
Phoenix yourself, as the jars in our distribution are for a non-secure
cluster.
On Mon, Aug 11, 2014 at 10:29 AM, Jesse Yates jesse.k.ya...@gmail.com wrote:
That seems correct. I'm not sure where the issue is either. It seems
Hi Faisal,
Yes, you can use a built-in Boolean function as you've shown in your query.
You can also omit the =TRUE part like this:
SELECT name
FROM profileTable
WHERE name LIKE 'Ale%' AND myFunc(name);
How this is processed depends on whether or not the table is salted and
the name is the
Hi Michael,
Take a look at Paged Queries here:
http://phoenix.apache.org/paged.html as well as this email thread:
http://s.apache.org/Dct. Phoenix does not support the OFFSET keyword
in SQL and likely never will as it cannot be implemented efficiently.
Thanks,
James
On Mon, Aug 4, 2014 at 8:36
Looks like you may be the first, Mike. If you try it, would you mind
reporting back how it works?
Thanks,
James
On Mon, Jul 28, 2014 at 10:52 AM, Alex Kamil alex.ka...@gmail.com wrote:
Mike, I'm on cdh4, but generally the extra steps are rebuilding phoenix with
hadoop and hbase jars from cdh,
Looks like a bug - it should not be necessary to have an IS NOT NULL
filter. Please file a JIRA.
Thanks,
James
On Fri, Jul 4, 2014 at 2:50 PM, puneet puneet.ku...@pubmatic.com wrote:
It seems it is only happening for Phoenix 4.0.0 and not for Phoenix 3.0.0
On Friday 04 July 2014 05:54 PM,
(SqlLine.java:699)
at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
at sqlline.SqlLine.main(SqlLine.java:424)
On Sun, Jun 29, 2014 at 10:27 AM, Kristoffer Sjögren sto...@gmail.com
wrote:
Thanks, that's good to know.
On Sun, Jun 29, 2014 at 10:20 AM, James Taylor jamestay
Increase the client-side Phoenix timeout (phoenix.query.timeoutMs) and
the server-side HBase timeout (hbase.regionserver.lease.period).
Thanks,
James
On Fri, Jun 20, 2014 at 6:30 PM, Andrew a...@starfishzone.com wrote:
Using Phoenix 4 and the bundled SqlLine client, I am attempting the following
our existing data first to CDH5 and try out a
few things on HBase 0.96.
On Sun, Jun 29, 2014 at 9:50 AM, James Taylor jamestay...@apache.org
wrote:
The default column family (i.e. the name of the column family used for
your table when one is not explicitly specified) was changed from _0
to 0.
On 28/05/2014 5:51 PM, James Taylor
jamestay...@apache.org wrote:
Hi Roberto,
Yes, thank you very much for asking - there's
definitely interest. Does it handle the case with a table that has a
composite primary key definition