Correction - the function is meant to DECODE the double columns, not encode
them.
On Oct 17, 2016, at 2:43 AM, Michael McAllister <mmcallis...@homeaway.com> wrote:
Hi
I’m trying to access a Phoenix table from Datameer. Datameer can read HBase,
but doesn’t support using the Phoenix JDBC driver, as the Phoenix JDBC driver
only does a partial implementation of the JDBC standard.
To get around this I’m using the Datameer SDK to write a function that will
decode the double columns, though that may prove expensive.
Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com | C: 512.423.7447 | skype: michael.mcallister.ha | webex: https://h.a/mikewebex
This is really an ANSI SQL question. If you use an aggregate function, then you
need to specify what columns to group by. Any columns not being referenced in
the aggregate function(s) need to be in the GROUP BY statement.
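As a hypothetical illustration (table and column names invented):

```sql
-- region is not wrapped in an aggregate, so it must appear in GROUP BY.
SELECT region, SUM(amount) AS total_amount
FROM orders
GROUP BY region;
```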
Nithin
> when I tried enabling region replication, I was not able to bring the HBase
> cluster.
I’m not sure what you mean here, specifically by “bring the HBase cluster”.
then no Phoenix
connection can be made.
Not yet, sorry.
Can you please provide the sample rowkey? It is blank in your email below.
Alternatively, an UPSERT VALUES statement I can use to insert a row that I can
work with myself.
Can you provide an example of one of the rowkeys, the values you are expecting
out of it, and the full table definition? Of importance in the table definition
will be whether you have salt buckets defined.
Anil
If you split the rowkey on the zero byte character, you should end up with the
individual columns that made up your primary key. Details are here:-
https://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table
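A rough sketch of the split in Python; the sample rowkey and column names are invented. Note this only applies to variable-length PK columns (e.g. VARCHAR): fixed-width types are not null-separated, and a salted table would prefix the key with a salt byte.

```python
# Hypothetical composite rowkey: three VARCHAR PK columns joined by \x00.
rowkey = b"USA\x00TX\x00Austin"

# Splitting on the zero byte recovers the individual PK column values.
parts = rowkey.split(b"\x00")
country, state, city = (p.decode("utf-8") for p in parts)
print(country, state, city)  # USA TX Austin
```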
in a schema named (null).
Seeing as we’re talking COALESCE and NULLs, depending on the version Ankit is
running, this could also be the issue in PHOENIX-2994:-
https://issues.apache.org/jira/browse/PHOENIX-2994
Perhaps you could create a test case with table creation, upsert and then the
select that reproduces and illustrates the problem?
All
Are there any known issues if we use the hbase shell to alter a Phoenix table
to apply compression? We're currently using Phoenix 4.4 on HDP 2.3.4.
I plan on testing, but also want to double check for any gotchas.
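For reference, an alter of that kind would look roughly like the following in the hbase shell. The table and column family names are placeholders; note that Phoenix-created tables use upper-case names, and the default column family is '0' unless one was specified.

```
alter 'MY_TABLE', {NAME => '0', COMPRESSION => 'SNAPPY'}
major_compact 'MY_TABLE'
```

Compression only takes effect on newly written files, so a major compaction is needed to rewrite existing data.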
Technically, it depends on which version of HDP you are on. Here are the
versions:-
HDP 2.1 = Apache Phoenix 4.0
HDP 2.2 = Apache Phoenix 4.2
HDP 2.3 = Apache Phoenix 4.4
HDP 2.4 = Apache Phoenix 4.4
(From this page -> http://hortonworks.com/hdp/whats-new/)
Use ORDER BY in your SELECT statement:-
https://phoenix.apache.org/language/#select
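For example (hypothetical table and columns):

```sql
SELECT event_id, event_time
FROM events
ORDER BY event_time DESC;
```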
... however I was thinking that HA was enabled on HBase tables simply by
setting region replication to 2 or 3?
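For reference, setting region replication looks roughly like this in the hbase shell (table name is a placeholder; I haven't confirmed how this interacts with Phoenix):

```
alter 'MY_TABLE', {REGION_REPLICATION => 2}
```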
Mujtaba
Thanks for this information. Seeing as I am using Phoenix 4.2, what is the safe
and approved sequence of steps to drop this table and recreate it as you
mention? Additionally, how do we ensure we don’t lose sequence data?
regions on them. That means inefficient use of our cluster.
Have there been any best practices developed as to how to deal with this
situation?
Syntax error. Mismatched input. Expecting "NAME", got
"sequence" at line 1, column 29. (state=42P00,code=604)
I do understand this is a system table, but it would be nice to see inside it.
This is from Apache Phoenix 4.2 on HDP 2.2.6.
caps.
James
On 22/09/15 21:18, Michael McAllister wrote:
More failed attempts ...
0: jdbc:phoenix:redacted,> select count(*) from system."sequence";
Error: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.sequence
(state=42M03,code=1012)
0: jdbc:phoenix:redacted,> select
I think you need a comma between your column definition and your constraint
definition.
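That is, something along these lines (names invented):

```sql
CREATE TABLE example (
    id BIGINT NOT NULL,
    name VARCHAR,
    CONSTRAINT pk PRIMARY KEY (id)
)
```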
On Sep 1, 2015, at 2:54 PM, Serega Sheypak wrote:
Hi, I wrote an integration test that uses the HBase testing utility and Phoenix. The test
creates a table and
Hi
I am trying to test the performance of a query, and I want to make sure I am
not getting cached results from a previous call. Is there anything to be done
to clear any notional cache?
Something like the following:
DELETE FROM SYSTEM.CATALOG WHERE TENANT_ID IS NULL
AND TABLE_SCHEM = 'MYSCHEMA'
AND TABLE_NAME = 'MYTABLE';
You'll need to bounce your cluster to cause the server-side cache to be cleared
too.
On Tue, Aug 25, 2015 at 10:29 AM, Michael McAllister <mmcallis...@homeaway.com> wrote:
is get them dropped.
warnings, 1 errors]
to extract the day/month/year. Perhaps you can backport them or upgrade? One more built-in that might help you is the TRUNC function, which lets you "bucketize" based on HOUR, MINUTE, SECOND, DAY, etc.
Thanks,
James
On Tue, Aug 4, 2015 at 12:14 PM, Michael McAllister <mmcallis...@homeaway.com> wrote:
Hi
I have a table with a date stored in it. The date is always the last day of a
month. For example, 31-AUG-2015 or 28-FEB-2015. I have the date stored in two
separate columns. In one place it is an integer (20150831 and 20150228). In
another place it uses the DATE datatype.
I want to write a
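A sketch of the TRUNC suggestion above, with invented table and column names:

```sql
-- Bucket rows by day, using the time units James lists (DAY, HOUR, etc.).
SELECT TRUNC(event_date, 'DAY') AS event_day, COUNT(*)
FROM events
GROUP BY TRUNC(event_date, 'DAY');
```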
Yes, it’s true as far as I can tell; you’ll at least have to get the PK columns
as well.
Satya
Try the UPSERT SELECT statement:-
https://phoenix.apache.org/language/#upsert_select
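For example, with hypothetical source and target tables whose column types match:

```sql
UPSERT INTO target_table (id, name)
SELECT id, name FROM source_table;
```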
in the attached log:-
Line 33 - first delete
Line 39 - shows delete was successful as count(*) returns no rows for the key
deleted.
Line 53 - second delete reports 1 row affected
Yufan
Have you tried using the EXPLAIN command to see what plan is being used to
access the data?
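For example (hypothetical query): EXPLAIN prints the query plan instead of running the query, showing whether a full scan, range scan, or index is used.

```sql
EXPLAIN SELECT * FROM my_table WHERE id = 123;
```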
For the second one, it's
CLIENT 67-CHUNK SERIAL 1-WAY REVERSE FULL SCAN OVER TIMESTAMP_INDEX; SERVER
FILTER BY FIRST KEY ONLY; SERVER 1 ROW LIMIT which looks correct, but still
returns the unexpected result.
2015-06-26 16:59 GMT-07:00 Michael McAllister <mmcallis...@homeaway.com>:
Your second query should work with Phoenix 4.3 or later.
Thanks, unfortunately at the moment I’m stuck with Phoenix 4.2.
I will investigate the problem with the first one and get back to you.
Appreciate this.
/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html
On Mon, Jun 22, 2015 at 5:07 PM Michael McAllister <mmcallis...@homeaway.com> wrote:
Hi everyone
New user, first time post. I have a question about the TO_DATE function seeming
to misbehave. I issue