Re: OutOfOrderScannerNextException

2015-11-12 Thread James Taylor
Hi Kamran, Couple of suggestions: - Take a look at the various discussions on CDH + Phoenix on the mailing list: http://search-hadoop.com/?fc_project=Phoenix=Phoenix+on+CDH - Use the Phoenix that's in Cloudera labs - Try the latest builds put together that would get you up to Phoenix 4.6.0 and see

Feedback on pull request for Apache Phoenix support?

2015-11-10 Thread James Taylor
Hello Sqoop community, We've had users request Sqoop support for Apache Phoenix in our community, so one of our PMC members (Ravi Kiran) put together this nifty patch on Sqoop 1.4.6 to get us there: https://github.com/apache/sqoop/pull/10 https://issues.apache.org/jira/browse/SQOOP-2649 Would

Re: Issue with single quote in columns with String data type

2015-11-09 Thread James Taylor
Either escape the quote by using two of them in a row like this: upsert into user_address (uid, city, state, country) values(123, 'L''Anse', 'MI', 'USA') or use bind parameters like this: upsert into user_address (uid, city, state, country) values(?, ?, ?, ?) Thanks, James On Mon, Nov
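
For readability, the same two statements as a standalone sketch (table and values taken from the thread; the doubled '' inside the string literal is the escape):

  UPSERT INTO user_address (uid, city, state, country) VALUES (123, 'L''Anse', 'MI', 'USA');
  -- or let the driver handle quoting by binding the values through a PreparedStatement:
  UPSERT INTO user_address (uid, city, state, country) VALUES (?, ?, ?, ?);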

Re: blog describing new time-series data optimization

2015-11-08 Thread James Taylor
PE, >> EVENT_DATE *ROW_TIMESTAMP*)) >> >> I think the column EVENT_TIME should be EVENT_DATE. Or maybe I'm not >> understanding this correctly. >> >> Greetings, >> >> Juan >> >> >> >> On Sat, Nov 7, 2015 at 6:53 PM, Jame

blog describing new time-series data optimization

2015-11-07 Thread James Taylor
If you have time-series data for which you'd like to improve query performance, take a look at this[1] blog written by Samarth Jain on a new feature in our 4.6 release: https://blogs.apache.org/phoenix/entry/new_optimization_for_time_series Enjoy! James

Re: Phoenix view over existing HBase table - timestamps

2015-11-02 Thread James Taylor
Also, there's a new 4.6.0 feature available to declare a column in your primary key as mapping to the Cell timestamp: https://phoenix.apache.org/rowtimestamp.html Thanks, James On Mon, Nov 2, 2015 at 1:36 PM, Thomas D'Silva wrote: > Camelia > > You can specify the
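
A sketch of that 4.6.0 syntax (table and column names are hypothetical; the designated column must be a date/time or long primary-key column):

  CREATE TABLE event_log (
      event_date DATE NOT NULL,
      host       VARCHAR NOT NULL,
      payload    VARCHAR
      CONSTRAINT pk PRIMARY KEY (event_date ROW_TIMESTAMP, host)
  );
  -- UPSERTs that omit event_date get the server-side cell timestamp;
  -- supplying it explicitly sets the cell timestamp to that value.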

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-10-31 Thread James Taylor
If you're trying to run locally on a Mac, see this thread: http://search-hadoop.com/m/9UY0h2s6mxw4Opgf2/%252Fetc%252Fhosts=Re+testing+problem On Saturday, October 31, 2015, Naor David wrote: > run netstat -a 1 | grep "SYN_SENT" and check if port 2181 is blocked in > your

Re: replace CsvToKeyValueMapper with my implementation

2015-10-29 Thread James Taylor
I seem to remember you starting down that path, Gabriel - a kind of pluggable transformation for each row. It wasn't pluggable on the input format, but that's a nice idea too, Ravi. I'm not sure if this is what Noam needs or if it's something else. Probably good to discuss a bit more at the use

Re: does anyone have working phoenix version for CDH 5.4.x

2015-10-26 Thread James Taylor
Have you seen this thread[1]? JM was kind enough to build a parcel for phoenix 4.5. Thanks, James [1] http://search-hadoop.com/m/9UY0h2txHR7Ilo811/JM%2527s+4.5.2+parcel+for+CDH5=Re+setting+up+community+repo+of+Phoenix+for+CDH5+ On Mon, Oct 26, 2015 at 11:44 AM, ALEX K

Re: change data type of column

2015-10-21 Thread James Taylor
t's done for >> many schema changes. Any support in Phoenix for online schema changes will >> be a major plus point. >> >> James >> On 20 Oct 2015 5:38 p.m., "James Taylor" <jamestay...@apache.org> wrote: >> >>> We don't support altering the data

Re: Phoenix JDBC driver hangs/timeouts

2015-10-20 Thread James Taylor
Hi Alok, Thanks for the additional information. I'm curious about your use of salting on your table. We typically recommend salting to overcome hotspotting which occurs when you have a row key that is monotonically increasing. A salted table will put a higher load on your cluster during range
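
For reference, salting is declared when the table is created; a sketch with hypothetical names (SALT_BUCKETS may be 1-256):

  CREATE TABLE metrics (
      created_ts DATE NOT NULL,
      metric_id  BIGINT NOT NULL,
      val        DOUBLE
      CONSTRAINT pk PRIMARY KEY (created_ts, metric_id)
  ) SALT_BUCKETS = 16;
  -- helpful when the leading key increases monotonically and would hotspot one region;
  -- range scans over a salted table fan out across all buckets, hence the higher load noted above.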

Re: change data type of column

2015-10-20 Thread James Taylor
We don't support altering the data type of a column directly. The way I've seen this done is: - create a new column with the new type - copy the old value over, coercing it to the new type (either using UPSERT SELECT or MR) - start using the new column instead of the old column - drop the old
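
A sketch of that sequence with hypothetical names (old column amount INTEGER, new column amount_v2 BIGINT):

  ALTER TABLE orders ADD amount_v2 BIGINT;
  -- copy and coerce the existing values (UPSERT SELECT shown here; MR works for very large tables)
  UPSERT INTO orders (order_id, amount_v2) SELECT order_id, CAST(amount AS BIGINT) FROM orders;
  -- once readers and writers have switched to amount_v2:
  ALTER TABLE orders DROP COLUMN amount;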

Re: Phoenix 4.4 to 4.6 Client Errors

2015-10-16 Thread James Taylor
; > > Is it possible we also need the changes at > https://github.com/apache/phoenix/commit/567218ad2520869e8912b61f687305c81aa919cd > ? > > > > Mark Tse > > > > *From:* James Taylor [mailto:jamestay...@apache.org >

Re: Phoenix 4.4 to 4.6 Client Errors

2015-10-16 Thread James Taylor
You'll likely need the addendum patch on that JIRA too, but Samarth could confirm. On Friday, October 16, 2015, James Taylor <jamestay...@apache.org> wrote: > Yes, you'd need that too. > > On Friday, October 16, 2015, Mark Tse <mark@d2l.com >

Re: REG: Issue when creating a Phoenix table through JDBC

2015-10-15 Thread James Taylor
e quotes we > need to surrond the word "SNAPPY". I have tried escaping them but no > success. > > Thanks > On 15-Oct-2015 11:46 am, "James Taylor" <jamestay...@apache.org> wrote: > >> Kind of a guess, but if you're parsing a single SQL statement, it >>
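
For context, the property value takes single quotes in the SQL itself; a hypothetical example (COMPRESSION is passed through to the underlying HBase column family):

  CREATE TABLE IF NOT EXISTS web_stat (
      host VARCHAR NOT NULL PRIMARY KEY,
      hits BIGINT
  ) COMPRESSION = 'SNAPPY';
  -- when the statement is embedded in a Java string, only the surrounding double
  -- quotes need escaping; the single quotes around SNAPPY stay as they are.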

Re: Row value constructors failed on the index, when len(table's pks) > 2 and table's 1st pk is index's last pk

2015-10-15 Thread James Taylor
or this issue, UPSERT seems ok, "SELECT * FROM IDX_T" works fine. > > Thanks, > Chunhui > > 2015-10-15 14:26 GMT+08:00 James Taylor <jamestay...@apache.org>: > >> Any difference if you apply the patch for PHOENIX-2319? >> Thanks, >> James >&

Re: Phoenix 4.4 to 4.6 Client Errors

2015-10-15 Thread James Taylor
Good catch, Mark. I filed PHOENIX-2326. We'll get this tweaked before we cut the RC. On Thu, Oct 15, 2015 at 12:15 PM, Mark Tse wrote: > Hi everyone, > > > > I currently have Phoenix 4.4 installed on HBase 0.98, and am trying out a > Phoenix 4.6 build ( >

added HomeAway, the company behind VRBO, to our Who's Using Phoenix page

2015-10-05 Thread James Taylor
We're excited to include HomeAway, the company behind VRBO, to our Who's Using Phoenix page[1]. If you have a company using Phoenix and would like to be included as well, please let us know. Thanks, James [1] http://phoenix.apache.org/who_is_using.html

Re: Current time from Hbase

2015-10-04 Thread James Taylor
eating a single column, single row table would be > better? > > Thanks again. > > ---------- > *From:* James Taylor <jamestay...@apache.org> > *To:* user <user@phoenix.apache.org>; Sumit Nigam <sumit_o...@yahoo.com> > *Sent:* Monday, October

Re: Current time from Hbase

2015-10-04 Thread James Taylor
In the latest Phoenix release (4.5.2), you can just do the following, as the FROM clause is now optional: SELECT CURRENT_TIME() On Sun, Oct 4, 2015 at 8:45 PM, Sumit Nigam wrote: > Hi, > > How can I get the current time from Hbase? > > I can use Phoenix function

Re: Fixed bug in PMetaDataImpl

2015-10-02 Thread James Taylor
Patch looks great - thanks so much, James. Would you mind prefixing the commit message with "PHOENIX-2256" as that's what ties the pull to the JIRA? I'll get this committed today. James On Fri, Oct 2, 2015 at 7:34 AM, James Heather wrote: > Hi all (@James T in

Re: HBase MOB

2015-10-02 Thread James Taylor
Hi Cristofer, Though I haven't explicitly tried this, in theory you should be able to set the IS_MOB and MOB_THRESHOLD on a column family in the CREATE TABLE or ALTER TABLE calls. You can prefix the property with the column family name if you want it to only apply to that column family. Phoenix
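
Following the "in theory" caveat above, a speculative and untested sketch (names hypothetical; IS_MOB and MOB_THRESHOLD are the HBase MOB column-family properties mentioned in the thread):

  CREATE TABLE documents (
      id     BIGINT NOT NULL PRIMARY KEY,
      b.data VARBINARY
  ) b.IS_MOB = true, b.MOB_THRESHOLD = 102400;
  -- prefixing the property with the family name (b.) scopes it to that family only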

Re: HBase MOB

2015-10-02 Thread James Taylor
Forgot to mention, for syntax examples, see http://phoenix.apache.org/language/index.html#create_table On Fri, Oct 2, 2015 at 9:24 AM, James Taylor <jamestay...@apache.org> wrote: > Hi Cristofer, > > Though I haven't explicitly tried this, in theory you should be able to &

Re: Estimating the "cost" of a query

2015-10-02 Thread James Taylor
Hi Alok, Yes, you could calculate an estimate for this information, but it isn't currently exposed through JDBC or through the explain plan (which would be a good place for it to live). You'd need to dip down to the implementation to get it. Something like this: PhoenixStatement statement =

Re: [DISCUSS] discontinue 4-x HBase 1.0 releases?

2015-09-30 Thread James Taylor
ole Phoenix project for us if there were no >> 1.0 support. >> >> James >> >> >> On 19/09/15 01:53, James Taylor wrote: >> >> +user list >> >> Please let us know if you're counting on HBase 1.0 support given that we >> have HBase 1.1 supp

Re: [ANNOUNCE] New Apache Phoenix committer - Jan Fernando

2015-09-29 Thread James Taylor
Welcome, Jan. Great to have you onboard as a committer! James On Tuesday, September 29, 2015, Andrew Purtell wrote: > Congratulations Jan, and welcome! > > > On Tue, Sep 29, 2015 at 11:23 AM, Eli Levine > wrote: > > > On behalf of the

Re: Setting a TTL in an upsert

2015-09-23 Thread James Taylor
Hi Alex, I can think of a couple of ways to support this: 1) Surface support for per Cell TTLs (HBASE-10560) in Phoenix (PHOENIX-1335). This could have the kind of syntax you mentioned (or alternatively rely on a connection property and no syntactic change would be necessary, and then in

Re: Setting a TTL in an upsert

2015-09-23 Thread James Taylor
Also, for more information on (2), see https://phoenix.apache.org/faq.html#Can_phoenix_work_on_tables_with_arbitrary_timestamp_as_flexible_as_HBase_API On Wed, Sep 23, 2015 at 10:55 AM, James Taylor <jamestay...@apache.org> wrote: > Hi Alex, > I can think of a couple of ways to suppo

Re: Setting a TTL in an upsert

2015-09-23 Thread James Taylor
t > will also be possible to emulate (2) by using multiple connections, one per > retention policy duration, so maybe this is a good starting point. > > > > I’m new to the project so will dive into the code to get my bearings > before pulling together a plan of attack. &

Re: Using Phoenix from Python or Go

2015-09-22 Thread James Taylor
Hi Bipin, Take a look at this thread for a way to use Python to talk to Phoenix: http://search-hadoop.com/m/9UY0h2bzSGk2gyXOb/Python=Python+client+for+the+query+server Thanks, James On Mon, Sep 21, 2015 at 8:58 PM, Bipin Nag wrote: > Hi everyone, > > Can Phoenix be used

Re: default values in CREATE statements

2015-09-22 Thread James Taylor
Hi James, We've gotten as far as figuring *how* to do it, but the implementation hasn't been done. I'd be happy to provide guidance if someone would like to volunteer to pursue it. Thanks, James On Tue, Sep 22, 2015 at 3:37 AM, James Heather wrote: > I'm wondering

Re: REG: Query not working correctly

2015-09-22 Thread James Taylor
Hi Satya, It sounds like a bug from your description, but we'll need more information to fix it. How about a standalone test where you anonymize the column names? Thanks, James On Tuesday, September 22, 2015, Ns G wrote: > Hi Team, > > I have a column with unsigned_int as

Re: Local and global indexes

2015-09-21 Thread James Taylor
Hi Sumit, Every use case is different, which is why we created the Pherf tool[1] so you could try representative data sets over various schema variations in a multi-threaded environment. Please share what you find with us so the community can build up this knowledge. Thanks, James [1]

Re: Any known issues with Phoenix Schema feature

2015-09-21 Thread James Taylor
Hi Vamsi, Phoenix currently only supports the default HBase namespace (see PHOENIX-1966), but as far as I recall, there's been some work (over at Yahoo! I believe) to have a Phoenix schema map to an HBase namespace. Can anyone out there update us on this? Thanks, James On Mon, Sep 21, 2015 at

Re: Does apache phoenix works with MapRDB aka M7?

2015-09-21 Thread James Taylor
Yes, JM is right. It might be more feasible for MapR to implement coprocessors now given the slimmed down and more stable HBase interfaces (as of HBase 1.1). On Mon, Sep 21, 2015 at 9:47 AM, Jean-Marc Spaggiari < jean-m...@spaggiari.org> wrote: > Hi Ashutosh, > > If I'm not mistaken, there is

Re: Problems getting started with Apache Phoenix

2015-09-19 Thread James Taylor
Hi Ashutosh, Yes, you can use HBase APIs to write to the HBase-backed Phoenix tables, but you have to do it in the way Phoenix expects, using the Phoenix serialization format. Also, you won't be able to leverage some Phoenix features such as secondary indexing which rely on you going through the

Re: [DISCUSS] discontinue 4-x HBase 1.0 releases?

2015-09-18 Thread James Taylor
nds on where the vendors are going, if there will be a long-running > 1.0.x release line, we should keep it around. From Apache perspective, it's > probably fine to drop 1.0.x in favor of 1.1.x. HBase 1.2 is right around > the corner too... > > On Thu, Sep 17, 2015 at 4:42 PM, James Tay

Re: timeouts for long queries

2015-09-15 Thread James Taylor
The other important timeout is Phoenix specific: phoenix.query.timeoutMs. Set this in your hbase-site.xml on the client side to the value in milliseconds for the amount of time you're willing to wait before the query finishes. I might be wrong, but I believe the hbase.rpc.timeout config parameter

Re: Can't add views on HBase tables after upgrade

2015-09-15 Thread James Taylor
eptember-14-15 7:32 PM >> *To:* user@phoenix.apache.org >> *Subject:* Re: Can't add views on HBase tables after upgrade >> >> >> >> Jeffrey, >> >> >> >> Can you tell us how are creating your view over the existing HBase table? >>

Re: Is it appropriate to switch between immutable and mutable ?

2015-09-15 Thread James Taylor
For most cases, you're able to delete from a table with immutable rows (I believe as of 4.2 release), so that kind of switching shouldn't be necessary. In theory, that switching should be ok, but I'm not sure we've tested that code path when the table has an index. Thanks, James On Tuesday,

Re: simple commands that mutate a large number of rows

2015-09-15 Thread James Taylor
That config setting (phoenix.mutate.maxSize) is just a safety valve to prevent out-of-memory errors and may be set to whatever you like. However, if you're going to just turn around and do a commit after running your upsert statement, performance will improve if you turn on auto commit instead

Re: failing unit test in Phoenix source (master branch)

2015-09-14 Thread James Taylor
Thanks for filing these issues. I believe these failures occur on Java 8, but not on 7. Not sure why, though. James On Monday, September 14, 2015, James Heather wrote: > Reported as > > https://issues.apache.org/jira/browse/PHOENIX-2256 > > James > > On 14/09/15

Re: failing unit test in Phoenix source (master branch)

2015-09-14 Thread James Taylor
deal > (especially now that Java 7 has been EOL'd). > > Is anyone likely to be looking into the cause? > > James > > On 14/09/15 16:24, James Taylor wrote: > > Thanks for filing these issues. I believe these failures occur on Java 8, > but not on 7. Not sure why, though

Re: Phoenix with PreparedStatement

2015-09-14 Thread James Taylor
Sumit, To add to what Samarth said, even now PreparedStatements help by saving the parsing cost. Soon, too, for UPSERT VALUES, we'll also avoid recompilation when using a PreparedStatement. I'd encourage you to use them. Thanks, James On Mon, Sep 14, 2015 at 9:32 PM, Samarth Jain

Re: Column names in Phoenix backed HBase tables

2015-09-14 Thread James Taylor
Yes, we're discussing this right now over on PHOENIX-1598. Please feel free to chime in over there. Thanks, James On Mon, Sep 14, 2015 at 12:00 PM, Satish Iyengar wrote: > One of the recommendations in HBase is to have short column names. Now > when phoenix user defines a

Re: Can't add views on HBase tables after upgrade

2015-09-12 Thread James Taylor
If we've broken views over HBase tables, we'll need to -1 the RC and get a fix IMO. Thanks in advance for offering to look into it, Samarth. On Sat, Sep 12, 2015 at 11:22 AM, Samarth Jain wrote: > Jeffrey, > > I will look into this and get back to you. > > - Samarth > > On

Re: Add a new column in phoenix existing table

2015-09-10 Thread James Taylor
See ALTER TABLE command: http://phoenix.apache.org/language/index.html#alter On Thursday, September 10, 2015, Serega Sheypak wrote: > https://phoenix.apache.org/dynamic_columns.html > > It works, 100% feel free to ask if it doesn't work for you. > > 2015-09-10 11:08
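
A one-line sketch with hypothetical names:

  ALTER TABLE my_table ADD IF NOT EXISTS new_col VARCHAR;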

Re: missing rows after using performance.py

2015-09-08 Thread James Taylor
Hi James, Looks like currently you'll get a error log message generated if a row is attempted to be imported but cannot be (usually due to the data not being compatible with the schema). For psql.py, this would be the client side log and messages would look like this: LOG.error("Error

Re: paged queries failed with index on 3.3.1

2015-09-01 Thread James Taylor
Hello, Both 2.2.3 and 3.3.1 are no longer supported. Would it be possible for you to move on to our 4.x code line on top of HBase 0.98? Thanks, James On Mon, Aug 31, 2015 at 11:04 PM, 刘春珲 wrote: > Hi, > > Recently, I've updated Phoenix from 2.2.3 to 3.3.1. I am confused

Re: select Dynamic column Name

2015-08-27 Thread James Taylor
sunile.man...@teradata.com wrote: James, Where can I find info on JDBC Metadata APIs for each view entity From: James Taylor jamestay...@apache.org Reply-To: user@phoenix.apache.org

Re: select Dynamic column Name

2015-08-27 Thread James Taylor
. Thanks, Satya On 27-Aug-2015 8:26 pm, James Taylor jamestay...@apache.org wrote: Use these standard JDBC APIs: http://docs.oracle.com/javase/7/docs/api/java/sql/DatabaseMetaData.html Querying directly against the SYSTEM.CATALOG is not recommended as underlying schema changes may impact you

Re: Sequence Issue in Phoenix with JDBC

2015-08-25 Thread James Taylor
I suspect you may not be doing a connection.commit() from your JDBC client. Sqlline runs with autocommit on, but regular Phoenix connections do not (unless you set phoenix.connection.autoCommit to true in your hbase-site.xml). Thanks, James On Tue, Aug 25, 2015 at 12:28 AM, divye sheth

Re: Group by a divided value (e.g., time/10) returns NULL.

2015-08-23 Thread James Taylor
Hi Rafit, Looks like a bug. Please file a JIRA. The following seems to work as a workaround: select cast(time/10.0 as integer) as tm, hostname, avg(usage) from test group by hostname, tm; You might also consider using a date[1] type instead of an integer and then using the TRUNC function[2]
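
A sketch of the suggested DATE/TRUNC alternative, assuming a hypothetical event_time DATE column on the same table:

  SELECT TRUNC(event_time, 'HOUR') AS tm, hostname, AVG(usage)
  FROM test
  GROUP BY TRUNC(event_time, 'HOUR'), hostname;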

Re: REG: Getting the current value of a sequence

2015-08-22 Thread James Taylor
Hi Satya, You can do a NEXT VALUE FOR in a SELECT clause without a from clause like this: SELECT NEXT VALUE FOR my_seq; This will allocate a block of sequences from the server (as determined by the CACHE clause when you create the sequence), cache them on the client, and dole them out as NEXT
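
A sketch of the CACHE clause being described (sequence name hypothetical):

  CREATE SEQUENCE my_seq START WITH 1 INCREMENT BY 1 CACHE 100;
  SELECT NEXT VALUE FOR my_seq;
  -- each client grabs a block of 100 values from the server and hands them out locally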

Re: phoenix-server JAR not packaged in 4.5.1?

2015-08-22 Thread James Taylor
No, not intentional. Would you mind filing a JIRA and reference the pull request you found that caused the issue? Thanks, James On Saturday, August 22, 2015, Lukáš Lalinský lalin...@gmail.com wrote: I have just downloaded the 4.5.1 bin package and it seems that there is no phoenix-server JAR

Re: Phoenix 4.4+ on CDH

2015-08-21 Thread James Taylor
Hello, That'd be a good question for Cloudera. Thanks, James On Fri, Aug 21, 2015 at 2:54 PM, Buntu Dev buntu...@gmail.com wrote: The current version of Phoenix on CDH seems to be v4.3. Will there be a new release planned? I'm mainly interested in the Phoenix Query Server which seems to open

Re: Table salting

2015-08-18 Thread James Taylor
You can use UPSERT SELECT from the old table to the new table and do this with a single statement: https://phoenix.apache.org/language/index.html#upsert_select Make sure you set your timeouts high if the table is big. Thanks, James On Tue, Aug 18, 2015 at 9:40 AM, Sumanta Gh sumanta...@tcs.com
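
A sketch, assuming the new table has already been created with the desired salting and a matching schema:

  UPSERT INTO my_table_salted SELECT * FROM my_table;
  -- raise the client-side query timeout (e.g. phoenix.query.timeoutMs) first if the table is big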

Re: Table salting

2015-08-18 Thread James Taylor
though On 18-Aug-2015 10:42 pm, James Taylor jamestay...@apache.org wrote: You can use UPSERT SELECT from the old table to the new table and do this with a single statement: https://phoenix.apache.org/language/index.html#upsert_select Make

Re: how to check index's status?

2015-08-18 Thread James Taylor
I'd recommend updating to a new version of Phoenix and HBase. The Phoenix 2.2.3 release is about 1.5 years old and there are probably more than 400 bug fixes between then and now (not to mention many performance improvements and tons of new features). Thanks, James On Tue, Aug 18, 2015 at 8:20

Re: how to check index's status?

2015-08-18 Thread James Taylor
:38 PM, 刘春珲 leeyc...@gmail.com wrote: Agreed. But, It's a existing system. I have not more time to upgrade it. I need to make it online asap, then I will update to a new version of Phoenix and Hbase. Is there any suggestions? Thanks, Chunhui 2015-08-19 11:32 GMT+08:00 James Taylor

Re: how to check index's status?

2015-08-18 Thread James Taylor
. Thanks, James On Tue, Aug 18, 2015 at 9:02 PM, 刘春珲 leeyc...@gmail.com wrote: If I update Phoenix to 3.3.1 and keep HBase on 0.94.27, will the 'REBUILD' action be a little quicker? Or will it run in the background? 2015-08-19 11:49 GMT+08:00 James Taylor jamestay...@apache.org: Other than upgrading

Re: how to write a row_number function in phoenix?

2015-08-17 Thread James Taylor
You might be able to mimic what NEXT VALUE FOR does for sequences, but you wouldn't need any server-side code. We do something similar for query more support by creating a sequence on the fly and using a very big cache value (see QueryMoreIT). On Monday, August 17, 2015, 曾柏棠 zengbait...@ppdai.com

Re: Calcite version

2015-08-10 Thread James Taylor
AM, James Taylor jamestay...@apache.org wrote: Hey , Thanks for letting us know. Too bad we didn't catch this before the release. Would you mind filing a JIRA with a patch attached if you have time? Is there an advantage to waiting until Calcite 1.4 is out instead? Thanks, James On Monday

Re: Poor SELECT performance in wide table

2015-07-29 Thread James Taylor
Hi Sumanta, Would it be possible to get more detail? Phoenix/HBase version, schema, queries? Where are you finding the bottlenecks to be? Thanks, James On Tue, Jul 28, 2015 at 10:46 PM, Sumanta Gh sumanta...@tcs.com wrote: Hi All, Has anyone noticed that with the increasing number of

Re: Signed long values in column

2015-07-29 Thread James Taylor
-HBase-0.98/phoenix-core/src/main/java/org/apache/phoenix/expression/function/ToNumberFunction.java for reference. I really appreciate your help. - Anchal On Wednesday, July 29, 2015 8:43 AM, James Taylor jamestay...@apache.org wrote: Hi Anchal, Phoenix depends on the sort order

Re: Exception from RowCounter

2015-07-26 Thread James Taylor
and that doesn't work if the table is created with 'salt_buckets': https://issues.apache.org/jira/browse/PHOENIX-1248 -- *From:* James Taylor [jamestay...@apache.org] *Sent:* Saturday, July 25, 2015 1:23 PM *To:* user *Cc:* Haisty, Geoffrey *Subject:* Re

Re: ClassNotFoundException for UDF class

2015-07-24 Thread James Taylor
I don't believe you'd want to bundle the dependent jars inside your jar - I wasn't completely sure if that's what you've done. Also there's a config you need to enable in your client-side hbase-site.xml to use this feature. Thanks, James On Friday, July 24, 2015, Anchal Agrawal

Re: StaleRegionBoundaryCacheException

2015-07-23 Thread James Taylor
? Thanks, Baahu On Tue, Jul 21, 2015 at 3:33 AM, James Taylor jamestay...@apache.org wrote: Thanks for the information, Baahu. If you could figure out how to reproduce this and file a JIRA, that would be much appreciated. James On Mon, Jul 20, 2015 at 1:34 AM, Bahubali Jain bahub

Re: Importing existing HBase table's rowkey

2015-07-22 Thread James Taylor
. Thank you, Anchal On Wednesday, July 22, 2015 5:56 PM, James Taylor jamestay...@apache.org wrote: If it leads with a long that was serialized using Bytes.toBytes(long), then you can map that to the UNSIGNED_LONG type in Phoenix. What's the rest of your row key look like? On Wed, Jul 22

Re: Importing existing HBase table's rowkey

2015-07-22 Thread James Taylor
If it leads with a long that was serialized using Bytes.toBytes(long), then you can map that to the UNSIGNED_LONG type in Phoenix. What's the rest of your row key look like? On Wed, Jul 22, 2015 at 5:54 PM, Anchal Agrawal anc...@yahoo-inc.com wrote: Anil and Krishna, Thanks for your replies.

Re: Help with secondary index

2015-07-21 Thread James Taylor
For (1): ALTER TABLE fma.er_keyed_gz_meterkey_split_custid SET IMMUTABLE_ROWS=true; For (2): You won't need that property if your table is immutable, but it'd still be good to add it for if/when you use mutable secondary indexes. Not sure which of those you'd need to add it to - maybe all of them

Re: StaleRegionBoundaryCacheException

2015-07-20 Thread James Taylor
this error. This table is not being written to during count(*) execution. Thanks, Baahu On Fri, Jul 17, 2015 at 10:27 PM, James Taylor jamestay...@apache.org wrote: This exception means that the region boundary cache kept on the client is out of sync with the actual region boundaries on the HBase

Re: TTL and IMMUTABLE_ROWS

2015-07-15 Thread James Taylor
Yes, it's ok, but if you have secondary indexes, make sure to set the same TTL on them as well (just tack on the same TTL=691200 at the end of your CREATE INDEX statement). On Tue, Jul 14, 2015 at 1:48 PM, Serega Sheypak serega.shey...@gmail.com wrote: Hi, here is my table CREATE TABLE IF NOT
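
For illustration, the same TTL tacked onto both table and index (names hypothetical):

  CREATE TABLE IF NOT EXISTS events (
      id      BIGINT NOT NULL PRIMARY KEY,
      kind    VARCHAR,
      payload VARCHAR
  ) TTL = 691200;
  CREATE INDEX events_kind_idx ON events (kind) TTL = 691200;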

Re: How to adjust primary key on existing table

2015-07-14 Thread James Taylor
ALTER TABLE t ADD my_new_col VARCHAR PRIMARY KEY The new column must be nullable and the last existing PK column cannot be nullable and fixed width (or varbinary or array). On Tue, Jul 14, 2015 at 10:01 AM, Riesland, Zack zack.riesl...@sensus.com wrote: This is probably a lame question, but

Re: How to adjust primary key on existing table

2015-07-14 Thread James Taylor
...@sensus.com wrote: Thanks James, To clarify: the column already exists on the table, but I want to add it to the primary key. Is that what your example accomplishes? *From:* James Taylor [mailto:jamestay...@apache.org] *Sent:* Tuesday, July 14, 2015 1:11 PM *To:* user *Subject

Re: How to adjust primary key on existing table

2015-07-14 Thread James Taylor
to copy the data from the old table to the new one? *From:* James Taylor [mailto:jamestay...@apache.org] *Sent:* Tuesday, July 14, 2015 1:17 PM *To:* user *Subject:* Re: How to adjust primary key on existing table Ah, we don't support that currently. You can drop the existing column first

Re: Problem in finding the largest value of an indexed column

2015-07-10 Thread James Taylor
tables. Maybe there is another issue than PHOENIX-2096? The phoenix I am using is pulled from latest 4.x-HBase-0.98 branch which includes the patch of PHOENIX-2096. 2015-07-02 19:55 GMT-07:00 James Taylor jamestay...@apache.org: On further investigation, I believe it should have been working before

Re: Could not find hash cache for joinId

2015-07-08 Thread James Taylor
Alex, Do you pool the PhoenixConnection and, if so, can you try it without pooling? Phoenix connections are not meant to be pooled. Thanks, James On Wed, Jul 8, 2015 at 12:05 PM, Alex Kamil alex.ka...@gmail.com wrote: Maryann, - the patch didn't help when applied to the client (we haven't put

Re: Can't UPSERT into a VIEW?

2015-07-06 Thread James Taylor
Phoenix inserts an empty key value for existing rows when you do a CREATE TABLE on an existing HBase table. If it's a big table, just set your timeouts really high so it has time to complete. Thanks, James On Mon, Jul 6, 2015 at 7:43 AM, Martin Pernollet mpernol...@octo.com wrote: Hi, (using

Re: Problem in finding the largest value of an indexed column

2015-07-02 Thread James Taylor
unexpected result, I will dig more into this. Thank you, James! 2015-07-02 9:58 GMT-07:00 Yufan Liu yli...@kent.edu: Sure, let me have a try 2015-07-02 9:46 GMT-07:00 James Taylor jamestay...@apache.org: Thanks, Yufan. I found an issue and filed PHOENIX-2096 with a patch. Would you mind

Re: MD5 hash function in Phoenix

2015-07-02 Thread James Taylor
Hi Divye, Our MD5 function accepts only a single argument, not four. Would it be possible for you to post some sample code? The code for our MD5 built-in is in org.apache.phoenix.expression.function.MD5Function if you want to take a look with tests in org.apache.phoenix.end2end.MD5FunctionIT.

Re: Problem in finding the largest value of an indexed column

2015-07-02 Thread James Taylor
Thanks, Yufan. I found an issue and filed PHOENIX-2096 with a patch. Would you mind confirming that this fixes the issue you're seeing? James On Thu, Jul 2, 2015 at 9:45 AM, Yufan Liu yli...@kent.edu wrote: I'm using 4.4.0-HBase-0.98 2015-07-01 22:31 GMT-07:00 James Taylor jamestay

Re: Hbase and Phoenix Performance improvement

2015-07-01 Thread James Taylor
Also, try separating your columns into multiple column families to prevent having to scan past your 75+ column qualifiers for every query. On Wed, Jul 1, 2015 at 4:47 AM, Puneet Kumar Ojha puneet.ku...@pubmatic.com wrote: Yes …Salting will improve the scan performance. Try with numbers
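
A sketch of splitting columns across families (family prefixes a and b are hypothetical; unprefixed columns go to the default family):

  CREATE TABLE wide_table (
      id       BIGINT NOT NULL PRIMARY KEY,
      a.hot_1  VARCHAR,
      a.hot_2  VARCHAR,
      b.rare_1 VARCHAR,
      b.rare_2 VARCHAR
  );
  -- queries that touch only the a.* columns no longer have to read the b family's cells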

Re: StackOverflowError

2015-07-01 Thread James Taylor
Baahu, We're having a difficult time reproducing the StackOverflowError you encountered over on PHOENIX-2074. Do you think you could help us reproduce it? Maybe you can upload a test case and/or a CSV file with some sample data that reproduces it? Thanks, James On Tue, Jun 23, 2015 at 12:24 AM,

Re: Problem in finding the largest value of an indexed column

2015-07-01 Thread James Taylor
table has one region, the query returns correct result: 144048443, but when I manually split it into 4 regions (use hbase tool), it returns 143024961. Let know if you find anything. Thanks! 2015-07-01 11:27 GMT-07:00 James Taylor jamestay...@apache.org: If you could put a complete

Re: Problem in finding the largest value of an indexed column

2015-06-30 Thread James Taylor
Yes, reverse scan will be leveraged when possible. Make sure you use NULLS LAST in your ORDER BY as rows are ordered with nulls first. On Tue, Jun 30, 2015 at 5:25 PM, Yufan Liu yli...@kent.edu wrote: I used the HBase reverse scan to find the last row on the index table. It returned the expected
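
A sketch of the query shape being discussed (table and column hypothetical):

  SELECT my_col FROM my_table ORDER BY my_col DESC NULLS LAST LIMIT 1;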

Re: How to count table rows from Java?

2015-06-29 Thread James Taylor
different query clients. But how do I set a high timeout so that I can do a large query via Java/JDBC? Thanks! From: James Taylor [mailto:jamestay...@apache.org] Sent: Friday, June 26, 2015 2:14 PM To: user@phoenix.apache.org Subject: Re: How to count table rows from Java? Zach, I wouldn't

new blog on Phoenix/Spark integration

2015-06-29 Thread James Taylor
I've posted a new blog, courtesy of Josh Mahonin, that explains how to use the new Phoenix/Spark integration by walking through a nice simple example: https://blogs.apache.org/phoenix/entry/spark_integration_in_apache_phoenix Thanks so much, Josh! James

Re: Avoid deleting Hbase table when droping table with Phoenix

2015-06-29 Thread James Taylor
You can avoid the creation of the empty value and avoid data being dropped by using the CREATE VIEW command instead of creating a table. Follow the link that Thomas posted to see some of the trade-offs between a view and a table. The main one is that a view is read-only and doesn't support

Re: create a view on existing production table ?

2015-06-26 Thread James Taylor
Hi Sergey, Yes, you can create a Phoenix view over this HBase table, but you have to explicitly list columns by name (i.e. column qualifier) either at view creation time or at read time (using dynamic columns). Also, the row key must conform to what Phoenix expects if there are multiple columns in
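
A hedged sketch of both options (HBase table name "t1", family "cf", and qualifiers are hypothetical; the quoted identifiers preserve case):

  -- columns declared at view creation time
  CREATE VIEW "t1" (pk VARCHAR PRIMARY KEY, "cf"."col1" VARCHAR, "cf"."col2" UNSIGNED_LONG);
  -- or declared at read time as dynamic columns
  SELECT * FROM "t1" ("cf"."col3" VARCHAR);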

Re: create a view on existing production table ?

2015-06-26 Thread James Taylor
, and it could be just one column per family for one PK, and hundreds of thousands for another PK. How can I possibly accommodate it in a view specification, if I need to explicitly define column by name ? Or I misunderstand something ? Thank you, Sergey From: James Taylor jamestay

Re: How to count table rows from Java?

2015-06-26 Thread James Taylor
Zach, I wouldn't at all say that doing a count(*) is not recommended. It's important to know that 1) this requires a full table scan and 2) this is done by Phoenix asynchronously. You'll need to set the timeouts high enough for this to complete. Phoenix will be much faster than running a MR job,

Re: webcast on Phoenix this Thu @ 10am

2015-06-23 Thread James Taylor
Thursday, June 25th @ 10am PST On Tuesday, June 23, 2015, Ns G nsgns...@gmail.com wrote: Hi James, Can you specify the time zone please? Thanks Satya On 23-Jun-2015 9:32 am, James Taylor jamestay...@apache.org wrote: If you're

Re: count distinct

2015-06-23 Thread James Taylor
Michael, You're correct, count distinct doesn't support multiple arguments currently (I filed PHOENIX-2062 for this). Another workaround is to combine a.col1 and b.col2 into an expression, for example concatenating them. If order matters, you could do this: select count(distinct col1 || col2) ...

Re: error: no valid quorum servers found

2015-06-23 Thread James Taylor
You can specify the zookeeper quorum in the connection string as described here: https://phoenix.apache.org/#SQL_Support. All of the hosts are expected to use the same port (which may be specified as well). On Tue, Jun 23, 2015 at 1:05 PM, Alex Kamil alex.ka...@gmail.com wrote: it's running in
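
For example, a connection URL naming the quorum hosts and their shared port (hosts hypothetical):

  jdbc:phoenix:zk1.example.com,zk2.example.com,zk3.example.com:2181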

Re: What's column family name for columns of table created by phoniex create table statement without a specific cf name?

2015-06-23 Thread James Taylor
To add to what Gabriel said, you can also specify your own default column family name with the DEFAULT_COLUMN_FAMILY property when you create your table: https://phoenix.apache.org/language/index.html#create_table On Tue, Jun 23, 2015 at 11:07 AM, Gabriel Reid gabriel.r...@gmail.com wrote: The
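
A sketch (names hypothetical); unprefixed columns then land in family F instead of the built-in default:

  CREATE TABLE t (
      id   BIGINT NOT NULL PRIMARY KEY,
      col1 VARCHAR,
      col2 VARCHAR
  ) DEFAULT_COLUMN_FAMILY = 'F';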

Re: Can't understand why phoenix saves but not selects

2015-06-23 Thread James Taylor
)); } On Tue, Jun 23, 2015 at 3:41 PM, James Taylor jamestay...@apache.org wrote: Make sure you run the commit on the same connection from which you do the upsert. Looks like you're opening a new connection with each statement. Instead, open it once in the beginning and include

Re: Can't understand why phoenix saves but not selects

2015-06-23 Thread James Taylor
Make sure you run the commit on the same connection from which you do the upsert. Looks like you're opening a new connection with each statement. Instead, open it once in the beginning and include the commit like Samarth mentioned: Connection conn = getJdbcFacade().createConnection(); int result

Re: Deleting phoenix tables and views using hbase shell

2015-06-22 Thread James Taylor
Arun, Manually running DDL against the SYSTEM.CATALOG table can be problematic for a few reasons: - if a write failure occurs in the middle of running that statement, your SYSTEM.CATALOG table can be left in an inconsistent state. We prevent this internally by using a mutateRowsWithLocks call

webcast on Phoenix this Thu @ 10am

2015-06-22 Thread James Taylor
If you're interested in learning more about Phoenix, tune in this Thursday @ 10am where I'll be talking about Phoenix in a free Webcast hosted by O'Reilly: http://www.oreilly.com/pub/e/3443 Thanks, James
