Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-24 Thread Steve Terrell
the Pig script as an EMR step to see if I get better results. Thanks, Steve On Mon, Aug 21, 2017 at 4:48 PM, Steve Terrell <sterr...@oculus360.us> wrote: > Thanks for the extra info! Will let everyone know if I solve this. > > On Mon, Aug 21, 2017 at 4:24 PM, anil

Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-21 Thread Steve Terrell
t; aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/ddb/lib/emr-ddb- >> hadoop.jar:/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies >> .jar:/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop >> .jar:/usr/share/aws/emr/cloudwatch-sink/lib/*:/usr/share/ >> aws/emr/security/conf:/usr/share/aws

Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-21 Thread Steve Terrell
the command line like pig try.pig Maybe other people are calling their Phoenix Pig script some other way (EMR steps) or with different parameters? Details where this works would really help out a lot. Thanks, Steve On Mon, Aug 21, 2017 at 10:23 AM, Steve Terrell <sterr...@oculus360.us>

Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-21 Thread Steve Terrell
<anilgupt...@gmail.com> wrote: > Hey Steve, > > We are currently using EMR5.2 and pig-phoenix is working fine for us. We > are gonna try EMR5.8 next week. > > HTH, > Anil > > On Fri, Aug 18, 2017 at 9:00 AM, Steve Terrell <sterr...@oculus360.us> > wrote: >

Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-18 Thread Steve Terrell
this list saves other people some time and headache. Thanks, Steve On Thu, Aug 17, 2017 at 2:40 PM, Steve Terrell <sterr...@oculus360.us> wrote: > I'm running EMR 5.8.0 with these applications installed: > Pig 0.16.0, Phoenix 4.11.0, HBase 1.3.1 > > Here is my p

Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-17 Thread Steve Terrell
I'm running EMR 5.8.0 with these applications installed: Pig 0.16.0, Phoenix 4.11.0, HBase 1.3.1 Here is my pig script (try.pig): REGISTER /usr/lib/phoenix/phoenix-4.11.0-HBase-1.3-client.jar; A = load '/steve/a.txt' as (TXT:chararray); store A into 'hbase://A_TABLE' using
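
A minimal sketch of such a script, following the Phoenix Pig integration page (https://phoenix.apache.org/pig_integration.html); the ZooKeeper quorum ('localhost') and batch size below are placeholders rather than values from the message above:

    REGISTER /usr/lib/phoenix/phoenix-4.11.0-HBase-1.3-client.jar;

    A = LOAD '/steve/a.txt' AS (TXT:chararray);

    -- PhoenixHBaseStorage takes the ZooKeeper quorum and an optional batch size.
    STORE A INTO 'hbase://A_TABLE'
        USING org.apache.phoenix.pig.PhoenixHBaseStorage('localhost', '-batchSize 1000');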

Re: Random rows

2017-03-28 Thread Steve Terrell
Here's what I do in one of my applications. A two-step process minimum (three if you get a total row count first): upsert into DEMO(KEY_FIELD_1,KEY_FIELD_2,"random_sample" boolean) select KEY_FIELD_1,KEY_FIELD_2,(rand()<(50.0/1000)) Where in this example, I want to randomly select 50 rows from
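
A sketch of the full sequence described above, assuming the DEMO table, its two key fields, and the 50-out-of-1000 sampling ratio from the example:

    -- Optional first step: get the total row count to set the sampling ratio.
    SELECT COUNT(*) FROM DEMO;

    -- Flag roughly 50 of 1000 rows at random via the "random_sample" dynamic column.
    UPSERT INTO DEMO(KEY_FIELD_1, KEY_FIELD_2, "random_sample" BOOLEAN)
        SELECT KEY_FIELD_1, KEY_FIELD_2, (RAND() < (50.0 / 1000)) FROM DEMO;

    -- Read back only the sampled rows.
    SELECT KEY_FIELD_1, KEY_FIELD_2
    FROM DEMO("random_sample" BOOLEAN)
    WHERE "random_sample" = TRUE;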

Re: MultipleInput in Phoenix mapreduce job

2017-03-24 Thread Steve Terrell
I have been using https://phoenix.apache.org/pig_integration.html for years with much success. Hope this helps, Steve On Fri, Mar 24, 2017 at 7:40 AM, Anil wrote: > Hi, > > I have two table called PERSON and PERSON_DETAIL. i need to populate the > of the person Detail

Re: Problems running queries (not same amount of results)

2016-09-22 Thread Steve Terrell
Could it be due to a mistake in your SQL? select c1, c4, c5 from TABLE1 where (c4 = 'B')*)* AND (c1 <= TO_DATE('22.09.2016 17:15:59', 'dd.MM. HH:mm:ss')); looks like an out of place ")". On Thu, Sep 22, 2016 at 4:10 AM, Jure Buble wrote: > Hi, > > Anyone faced same

Re: Using COUNT() with columns that don't use COUNT() when the table is join fails

2016-09-19 Thread Steve Terrell
I'm not an expert in traditional SQL or in Phoenix SQL, but my best guess is "probably not". But I'm curious as to why you would like to avoid the group by or the list of columns. I know it looks very wordy, but are there any technical reasons? In my experience SQL is hard to read by human eyes

Re: Using COUNT() with columns that don't use COUNT() when the table is join fails

2016-09-19 Thread Steve Terrell
Hi! I think you need something like group by u.first_name on the end. Best guess. :) On Sun, Sep 18, 2016 at 11:03 PM, Cheyenne Forbes < cheyenne.osanu.for...@gmail.com> wrote: > this query fails: > > SELECT COUNT(fr.friend_1), u.first_name >> >> FROM users AS u >> >> LEFT
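
Spelled out, the suggestion is to group by the non-aggregated column; since the original query is truncated here, the join target and ON condition below are only guesses:

    -- The table and ON clause after LEFT JOIN are assumptions; the point is
    -- that u.first_name must appear in the GROUP BY.
    SELECT COUNT(fr.friend_1), u.first_name
    FROM users AS u
    LEFT JOIN friend_requests AS fr ON fr.friend_1 = u.id
    GROUP BY u.first_name;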

Re: create schema on write

2016-06-03 Thread Steve Terrell
tsException and adding each field through ALTER VIEW calls. > > This is how we've modeled time-series data in support of Argus[1], not as > JSON in this case, but as tags and a metric value. > > HTH. Thanks, > James > > [1] https://github.com/SalesforceEng/Argus > &

Re:

2016-06-03 Thread Steve Terrell
I have a similar situation. I have records with varying fields that I wanted to access individually and also as a group. My actual records are JSON objects, so they look like this: {"field1": value1, "field2": value2, …} To make matters harder, the fields are also varying types: ints,

Re: FOREIGN KEY

2016-05-12 Thread Steve Terrell
If you don't have any unique data, you could use a Phoenix Sequence to generate keys as you upsert. or some kind of guid. On Thu, May 12, 2016 at 8:22 AM, Ciureanu Constantin < ciureanu.constan...@gmail.com> wrote: > CREATE TABLE IF NOT EXISTS
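
A sketch of the sequence idea, with illustrative sequence and table names:

    -- Create the sequence once, then draw a new key on every upsert.
    CREATE SEQUENCE IF NOT EXISTS row_key_seq;

    UPSERT INTO SOME_TABLE (ID, NAME)
        VALUES (NEXT VALUE FOR row_key_seq, 'example');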

Re: Failed to make the connection

2016-04-25 Thread Steve Terrell
Are you using Amazon EMR as your cluster? Are you trying to connect to Phoenix on an EMR master or from outside the cluster? On Mon, Apr 25, 2016 at 8:25 AM, Asanka Sanjaya Herath wrote: > I'm using a simple Phoenix hello world program in an Amazon cluster. When I > run

Re: prepareAndExecute with UPSERT not working

2016-04-14 Thread Steve Terrell
I found it much easier and more reliable to make my own Phoenix HTTP server with my own JSON API. It was too confusing for me to send multiple requests for what would normally be just one SQL statement. And I had problems getting upserts working, to boot (even with the thin server). Now I can make

Re: Missing Rows In Table After Bulk Load

2016-04-08 Thread Steve Terrell
Are the primary keys in the .csv file all unique? (no rows overwriting other rows) On Fri, Apr 8, 2016 at 10:21 AM, Amit Shah wrote: > Hi, > > I am using phoenix 4.6 and hbase 1.0. After bulk loading 10 mil records > into a table using the psql.py utility, I tried

Re: Error while attempting join query

2016-04-07 Thread Steve Terrell
Thanks for sharing this info, Steve. > > James > > On Thu, Apr 7, 2016 at 8:30 AM, Steve Terrell <sterr...@oculus360.us> > wrote: > >> I've been successful at running HBase 0.98.15 and Phoenix 4.6.0 on EMR. >> Found someone else's solution for this on the intern

Re: Phoenix transactions not committing.

2016-04-01 Thread Steve Terrell
You might try looking up previous emails from me in this mailing list. I had some problems doing commits when using the thin client and Phoenix 4.6.0. Hope this helps, Steve On Thu, Mar 31, 2016 at 11:25 PM, F21 wrote: > As I mentioned about a week ago, I am working

Re: How do I query the phoenix query server?

2016-03-24 Thread Steve Terrell
commit to be true by default (set phoenix.connection.autoCommit to >> true)? In 4.7 this has been fixed, but prior to this, I believe commit >> and rollback were a noop. Is that right, Josh? >> Thanks, >> James >> >> On Thursday, March 24, 2016, Steve Terrell &l

Re: How do I query the phoenix query server?

2016-03-24 Thread Steve Terrell
I forgot to mention: although the docs say only one jar is needed, I found that I also had to have commons-collections-3.2.1.jar on the class path. On Thu, Mar 24, 2016 at 10:07 AM, Steve Terrell <sterr...@oculus360.us> wrote: > Hi! Everything I say below pertains only to Phoe

Re: How do I query the phoenix query server?

2016-03-24 Thread Steve Terrell
Hi! Everything I say below pertains only to Phoenix 4.6.0. Don't know what changes in 4.7.0. Judging from the port number, you must be using the thin client server. Have you seen this page? https://phoenix.apache.org/server.html . It has JDBC info. I got the thin client jar to work with

Re: Dynamic Fields And Views

2016-02-25 Thread Steve Terrell
age_title" varchar) as > select * from TMP_SNACKS; > No rows affected (0.048 seconds) > 0: jdbc:phoenix:localhost> select * from MY_VIEW; > ++-+-+ > | K | C1 | page_title | > ++-+-+ > | 1 | a | b | > +---

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-25 Thread Steve Terrell
he snapshot, drop the table, >> recreate it with the new definition, then restore the snapshot. >> >> >> >> On Feb 25, 2016, at 8:44 AM, Steve Terrell <sterr...@oculus360.us> wrote: >> >> I like your outside-the-box thinking. Unfortunately, my end

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-25 Thread Steve Terrell
, Jonathan Leech <jonat...@gmail.com> wrote: > You could also take a snapshot in hbase just prior to the drop table, then > restore it afterward. > > > > On Feb 24, 2016, at 12:25 PM, Steve Terrell <sterr...@oculus360.us> wrote: > > Thanks for your quick and accurate

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Steve Terrell
around you may try is by opening a connection at a timestamp a > little greater than last modified timestamp of table and then run drop > table command. But remember you may still lose some data inserted before > that timestamp > > Regards, > Ankit Singhal > > On Wed, Feb

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Steve Terrell
> Hi Steve, > > can you check whether the properties are picked by the sql/application > client. > > Regards, > Ankit Singhal > > On Wed, Feb 24, 2016 at 11:09 PM, Steve Terrell <sterr...@oculus360.us> > wrote: > >> HI, I hope so

Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Steve Terrell
Hi, I hope someone can tell me what I'm doing wrong… I set *phoenix.schema.dropMetaData* to *false* in hbase-site.xml on both the client and server side. I restarted the HBase master service. I used Phoenix to create a table and upsert some values. I used Phoenix to drop the table. I expected
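
For reference, the property described above would sit in hbase-site.xml roughly like this (a sketch, not the poster's actual file):

    <property>
      <name>phoenix.schema.dropMetaData</name>
      <value>false</value>
    </property>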

Dynamic Fields And Views

2016-02-23 Thread Steve Terrell
I have a table with many dynamic fields. Works great. However, it's a bit of a nuisance to have to supply each dynamic field's type in every query. Example: select "dynamic_field" from MY_TABLE("dynamic_field" varchar) This example is not too bad, but imagine it with 5+ dynamic fields being
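
One way to avoid repeating the types, as shown in the 2016-02-25 reply earlier in this list, is to declare the dynamic columns once in a view; the names here are illustrative:

    -- The view records each dynamic column and its type once.
    CREATE VIEW MY_TABLE_VIEW ("dynamic_field" VARCHAR) AS
        SELECT * FROM MY_TABLE;

    -- Queries against the view no longer need to restate the type.
    SELECT "dynamic_field" FROM MY_TABLE_VIEW;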

Re: Looks Like a SELECT Bug, But LIMIT Makes It Work

2016-02-23 Thread Steve Terrell
Done! https://issues.apache.org/jira/browse/PHOENIX-2709 . On Tue, Feb 23, 2016 at 3:15 PM, Sergey Soldatov <sergeysolda...@gmail.com> wrote: > Hi Steve, > It looks like a bug. So, please file a JIRA. > > Thanks, > Sergey > > On Tue, Feb 23, 2016 at 12:52 PM, Steve

Looks Like a SELECT Bug, But LIMIT Makes It Work

2016-02-23 Thread Steve Terrell
I came across a 4.6.0 query that I could not make work unless I add a "limit" to the end, where it should be totally unnecessary. select * from BUGGY where F1=1 and F3 is null results in no records found select * from BUGGY where F1=1 and F3 is null limit 999 results (correctly) in one record

Re: Thin Client Commits?

2016-02-22 Thread Steve Terrell
ad (Phoenix-4.6 is using 1.3.0-incubating). This should be included > in the upcoming Phoenix-4.7.0. > > Sadly, I'm not sure why autoCommit=true wouldn't be working. I don't have > any experience with the SQuirreL. > > [1] https://issues.apache.org/jira/browse/CALCITE-767 > > Ste

Re: Thin Client Commits?

2016-02-22 Thread Steve Terrell
really use a clue, here, if anyone knows what is going on. Thanks again, Steve On Sun, Feb 21, 2016 at 9:56 AM, Steve Terrell <sterr...@oculus360.us> wrote: > I'm surprised that no one knew the answer to this, but I eventually > figured out that I could set phoenix.connection.autoCo

Re: Thin Client Commits?

2016-02-21 Thread Steve Terrell
:35 AM, Steve Terrell <sterr...@oculus360.us> wrote: > I found this page: > http://apache-phoenix-user-list.1124778.n5.nabble.com/Thin-Client-Connection-Refused-td822.html > that says "dbConnection.commit() is not supported" (in a discussion about > thin client). > >

Re: Thin Client Commits?

2016-02-18 Thread Steve Terrell
b 17, 2016 at 9:49 AM, Steve Terrell <sterr...@oculus360.us> wrote: > It seems that when I use phoenix-4.6.0-HBase-0.98-thin-client.jar , that > deletes and upserts do not take effect. Is this expected behavior? > > Thanks, > Steve >

Re: Problem with String Concatenation with Fields

2016-02-17 Thread Steve Terrell
Done! https://issues.apache.org/jira/browse/PHOENIX-2689 Thanks, Steve On Wed, Feb 17, 2016 at 5:58 PM, Thomas D'Silva <tdsi...@salesforce.com> wrote: > Steve, > > That is a bug, can you please file a JIRA. > > Thanks, > Thomas > > On Wed, Feb 17, 2016 at

Problem with String Concatenation with Fields

2016-02-17 Thread Steve Terrell
Can someone please tell me if this is a bug in Phoenix 4.6.0 ? This works as expected: 0: jdbc:phoenix:localhost> select * from BUGGY where (*'tortilla'* ||F2)='tortillachip'; PK1 0 *F1 tortilla* F2 chip But this does not: 0: jdbc:phoenix:localhost> select * from BUGGY where (*F1*

Re: Dynamic column using Pig STORE function

2016-02-17 Thread Steve Terrell
I would be interested in knowing, too. My solution was to write a Pig streaming function that executed the Phoenix upsert command for every row. On Wed, Feb 17, 2016 at 7:21 AM, Sumanta Gh wrote: > Hi, > I was going through the Phoenix Pig integration [1]. > I need to store

Re: Pagination with Phoenix

2016-02-17 Thread Steve Terrell
I was just thinking about this today. I was going to try to implement it by using a LIMIT on every query, with the addition of a WHERE clause like (rowkey_field_1 > last_rowkey_field_1_value_from_previous_query) OR (rowkey_field_2 > last_rowkey_field_2_value_from_previous_query) OR … But I haven't tried it
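
Phoenix's paged-queries documentation (https://phoenix.apache.org/paged.html) suggests a row value constructor over the full row key rather than OR-ing the key fields; a sketch with illustrative names:

    -- Fetch the next page after the last row key seen on the previous page.
    SELECT * FROM MY_TABLE
    WHERE (rowkey_field_1, rowkey_field_2) > (?, ?)
    ORDER BY rowkey_field_1, rowkey_field_2
    LIMIT 20;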

Thin Client Commits?

2016-02-17 Thread Steve Terrell
It seems that when I use phoenix-4.6.0-HBase-0.98-thin-client.jar, deletes and upserts do not take effect. Is this expected behavior? Thanks, Steve

Re: Phoenix Query Server Avatica Upsert

2016-02-05 Thread Steve Terrell
not released yet. A couple of issues came up in the last >> RC, so we'll roll a new one very soon. >> >> Thanks, >> James >> >> On Fri, Feb 5, 2016 at 9:23 AM, Steve Terrell <sterr...@oculus360.us> >> wrote: >> >>> Oh, I didn't know t

Re: Phoenix Query Server Avatica Upsert

2016-02-05 Thread Steve Terrell
rting the 4.7 server in JSON mode sometime soon. On Fri, Feb 5, 2016 at 1:59 PM, Lukáš Lalinský <lalin...@gmail.com> wrote: > On Fri, Feb 5, 2016 at 8:46 PM, Steve Terrell <sterr...@oculus360.us> > wrote: >> >> When I tried to send a "createStatement" via curl an

Re: Phoenix Query Server Avatica Upsert

2016-02-05 Thread Steve Terrell
eshoot pythondb for https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.7.0-HBase-0.98-rc1/ . Bye, Steve On Fri, Feb 5, 2016 at 3:07 PM, Steve Terrell <sterr...@oculus360.us> wrote: > Thanks, Lukas. Half the battle is won, now. With your help I was able to > see the JS

Phoenix Query Server and/or Avatica Bug and/or My Misunderstanding

2016-02-04 Thread Steve Terrell
I can query Phoenix by doing something like this: curl -v -XPOST -H 'request: {"request":"prepareAndExecute","connectionId":"aaa","sql":"select * from CAT_MAP"}' http://10.0.100.57:8765/ However, I am unable to make such a request in Javascript in my web page because the POST method, along with

Re: select all dynamic columns by primary key if columns names are unknown

2016-02-02 Thread Steve Terrell
I would like to know as well. Today when I upsert and create dynamic columns, I have to also create a second table to keep track of the dynamic field names and data types that were upserted so that the person writing queries for the first table can know what fields are available. Also would
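
A sketch of the kind of tracking table described above; the names and types are illustrative rather than taken from the message:

    -- One row per dynamic column ever upserted, so query writers can discover
    -- which fields exist in the data table and how to type them.
    CREATE TABLE IF NOT EXISTS DYNAMIC_FIELD_REGISTRY (
        TABLE_NAME VARCHAR NOT NULL,
        FIELD_NAME VARCHAR NOT NULL,
        FIELD_TYPE VARCHAR,
        CONSTRAINT pk PRIMARY KEY (TABLE_NAME, FIELD_NAME)
    );

    UPSERT INTO DYNAMIC_FIELD_REGISTRY (TABLE_NAME, FIELD_NAME, FIELD_TYPE)
        VALUES ('MY_TABLE', 'page_views', 'INTEGER');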

Re: select all dynamic columns by primary key if columns names are unknown

2016-02-02 Thread Steve Terrell
ou can dynamically add/remove >> columns from a view and this way Phoenix keeps track of it for you and you >> get all the other standard features. >> Thanks, >> James >> >> [1] https://phoenix.apache.org/views.html >> >> On Tue, Feb 2, 2016 at 1:42 PM, Serega

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-12-07 Thread Steve Terrell
in case someone else is scratching their head. Meanwhile, does anyone know why the region server IPs are important? I thought communication was only between the client and the master node. Thanks, Steve On Sun, Nov 1, 2015 at 9:29 AM, Steve Terrell <sterr...@oculus360.us> wrote: >

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-11-01 Thread Steve Terrell
Thanks, but I'm trying to run remotely. I'm sure my /etc/hosts is fine as I can ssh and "telnet " OK. On Sun, Nov 1, 2015 at 9:21 AM, Steve Terrell <sterr...@oculus360.us> wrote: > Thank you, but I'm sure this is not the case as I can easily run Squirrel > client on my

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-11-01 Thread Steve Terrell
check if port 2181 is blocked in > your network.. > > On Sat, Oct 31, 2015 at 8:00 PM, Steve Terrell <sterr...@oculus360.us> > wrote: > >> OK, did some more troubleshooting. Still can't run sqlline.py from my >> macbook laptop. Still hangs. >> >> My H

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-10-31 Thread Steve Terrell
? (My ultimate goal was to get SQuirreL working, but thought sqlline.py would be an easier problem to tackle. SQuirreL is getting timeouts which I suspect are due to the same hanging that I see with sqlline.py.) Thanks, Steve On Wed, Oct 28, 2015 at 5:04 PM, Steve Terrell <sterr...@oculus360

Re: Best Way To Copy Table From Old Phoenix/HBase versions to Newer?

2015-10-28 Thread Steve Terrell
not sure if > that'll read from a Phoenix 3.x cluster, but it might. > > Thanks, > James > > > On Tue, Oct 27, 2015 at 5:09 PM, Steve Terrell <sterr...@oculus360.us> > wrote: > >> Hi! >> >> I'm trying to copy my tables from an old cluster with HBase 0.94

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-10-28 Thread Steve Terrell
rt 2181 from the machine that you are running > sqlline.py on? > > Alok > > Alok > > a...@cloudability.com > > On Wed, Oct 28, 2015 at 2:23 PM, Steve Terrell <sterr...@oculus360.us> > wrote: > >> I can get "sqlline.py localhost" to work fine from the m

Best Way To Copy Table From Old Phoenix/HBase versions to Newer?

2015-10-27 Thread Steve Terrell
Hi! I'm trying to copy my tables from an old cluster with HBase 0.94.18 & Phoenix 3.2.2 over to a new cluster with HBase 0.98.15 and Phoenix 4.6.0. I was thinking about doing it in Pig using org.apache.phoenix.pig.PhoenixHBaseLoader('old ip') and org.apache.phoenix.pig.PhoenixHBaseStorage(''new

Re: NoSuchMethodError From org.apache.phoenix.pig.PhoenixHBaseLoader in 4.6.0

2015-10-26 Thread Steve Terrell
at 5:14 PM, Steve Terrell <sterr...@oculus360.us> wrote: > Hi! Please help me with resolving this problem. I am porting our > Pig/Phoenix/HBase project to all newer versions, but this one thing is > blocking me. > > Observed When running Phoenix 4.6.0 and HBase 0.98.12.