Hi,
I created a table using Phoenix. It has two fields: an integer and a string.
The integer field is the primary key column. Then I inserted two rows into this
table. I am able to see the data by querying through the sqlline.py shell (using
select *). I am also able to see the table and its data on HBase
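For context, a minimal sketch of the kind of table described above. The table and column names are hypothetical (the original names are not given in the thread); note that Phoenix uses UPSERT rather than INSERT.

```sql
-- Hypothetical names; assumes a working Phoenix connection via sqlline.py.
CREATE TABLE example_table (
    id INTEGER NOT NULL PRIMARY KEY,  -- integer primary key column
    val VARCHAR                       -- string column
);

-- Phoenix has no INSERT statement; UPSERT inserts or updates.
UPSERT INTO example_table VALUES (1, 'first row');
UPSERT INTO example_table VALUES (2, 'second row');

SELECT * FROM example_table;
```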
Hi James,
From your reply I understand that it is NOT possible to create such a view,
because each family can have a different number of columns, and it could be just
one column per family for one PK, and hundreds of thousands for another PK.
How can I possibly accommodate that in a view
Hi Sergey,
Yes, you can create a Phoenix view over this HBase table, but you have to
explicitly list columns by name (i.e. column qualifier) either at view
creation time or at read time (using dynamic columns). Also, the row key
must conform to what Phoenix expects if there are multiple columns in
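A hedged sketch of what James describes here: a Phoenix view over an existing HBase table, with column qualifiers either listed explicitly at view creation time or supplied at read time. All table, family, and qualifier names below are illustrative, not from the thread.

```sql
-- View over an existing HBase table "t"; "cf" is a hypothetical column family.
CREATE VIEW "t" (
    pk VARCHAR PRIMARY KEY,     -- row key, which must match Phoenix's expectations
    "cf"."col1" VARCHAR,        -- explicitly listed column qualifier
    "cf"."col2" UNSIGNED_LONG
);

-- Alternatively, declare a qualifier only at read time via dynamic columns:
SELECT pk, "cf"."col3" FROM "t" ("cf"."col3" VARCHAR);
```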
I wrote a Java program that runs nightly and collects metrics about our Hive
tables.
I would like to include HBase tables in this as well.
Since select count(*) is slow and not recommended on Phoenix, what are my
alternatives from Java?
Is there a way to call
RowCounter is a MapReduce program. After the program completes execution of
the job, it returns information about that job, including job counters.
RowCounter includes its counts in the job counters, so they're easily
accessed programmatically from the returned object. It's not a ResultSet,
but it
Sergey,
It is possible, but maybe in your case it's not feasible.
Thanks,
James
On Friday, June 26, 2015, Sergey Malov sma...@collective.com wrote:
Hi James,
From your reply I understand that it is NOT possible to create such a view,
because each family can have a different number of columns,
Hi James
I was under impression that UPSERT SELECT with WHERE would do the trick ?
If I got it wrong what would be correct way of thinking about UPSERT SELECT
with WHERE ?
regards,
S
On Fri, Jun 26, 2015 at 11:53 AM, James Taylor jamestay...@apache.org
wrote:
Once transaction support goes
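For reference, a hedged sketch of the UPSERT SELECT with WHERE pattern Sergey asks about above: copying a filtered subset of rows from one table into another. Table and column names are hypothetical.

```sql
-- Copy a filtered subset of rows from source_table into target_table.
-- Assumes both tables exist with compatible column types.
UPSERT INTO target_table (id, val)
SELECT id, val
FROM source_table
WHERE val IS NOT NULL;
```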
Zach,
I wouldn't at all say that doing a count(*) is not recommended. It's
important to know that 1) this requires a full table scan and 2) this is
done by Phoenix asynchronously. You'll need to set the timeouts high enough
for this to complete. Phoenix will be much faster than running a MR job,
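The timeouts James mentions are typically raised in the client-side hbase-site.xml; a sketch (property values are illustrative, not recommendations):

```xml
<!-- Client-side hbase-site.xml: raise both the Phoenix query timeout
     and the HBase RPC timeout so a long full-table scan can complete. -->
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value> <!-- 10 minutes -->
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value>
</property>
```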
Yufan
Have you tried using the EXPLAIN command to see what plan is being used to
access the data?
Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com | C: 512.423.7447 |
skype:
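A hedged example of the EXPLAIN usage Michael suggests, applied to the query from earlier in the thread (t1 and its indexed timestamp column; the column is quoted here since TIMESTAMP is also a Phoenix type name):

```sql
-- Show the query plan without executing the query.
EXPLAIN SELECT MAX("timestamp") FROM t1;
```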
Hi Michael,
Thanks for the advice. For the first one, it's:
CLIENT 67-CHUNK PARALLEL 1-WAY FULL SCAN OVER TIMESTAMP_INDEX; SERVER FILTER BY FIRST KEY ONLY; SERVER AGGREGATE INTO SINGLE ROW
which is as expected. For the second one, it's:
CLIENT 67-CHUNK SERIAL 1-WAY REVERSE FULL SCAN OVER
OK, I’m a Phoenix newbie, so that was the extent of the advice I could give
you. There are people here far more experienced than I am who should be able to
give you deeper advice. Have a great weekend!
Mike
From: Yufan Liu [mailto:yli...@kent.edu]
Sent: Friday, June 26, 2015 7:19 PM
To:
Thank you anyway, Michael!
2015-06-26 17:21 GMT-07:00 Michael McAllister mmcallis...@homeaway.com:
OK, I’m a Phoenix newbie, so that was the extent of the advice I could
give you. There are people here far more experienced than I am who should
be able to give you deeper advice. Have a great
Hi Sergey,
Since you have hundreds of thousands of columns, you can query your data
using the dynamic columns feature of Phoenix. This way, you won't need to
predefine hundreds of thousands of columns.
Thanks,
Anil Gupta
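A hedged sketch of the dynamic-columns usage Anil refers to: the column is declared inline in the FROM clause at query time instead of in the schema. Table, family, and qualifier names are hypothetical.

```sql
-- "cf"."metric_42" is declared only for this query, not in the table schema.
SELECT pk, "cf"."metric_42"
FROM my_view ("cf"."metric_42" UNSIGNED_LONG)
WHERE pk = 'some-key';
```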
On Fri, Jun 26, 2015 at 11:34 AM, James Taylor jamestay...@apache.org
wrote:
Hi,
We have created a table (e.g., t1) and a global index on one numeric column
of t1 (e.g., timestamp). Now we want to find the largest value of timestamp.
We have tried two approaches:
1. select max(timestamp) from t1; This query takes forever to finish, so I
think it may be doing a full table
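A common pattern for reading the largest value of an indexed column, consistent with the REVERSE FULL SCAN plan quoted elsewhere in the thread (a sketch, not necessarily the poster's exact second approach):

```sql
-- Read the maximum via a reverse scan, avoiding a full aggregate over the table.
SELECT "timestamp" FROM t1 ORDER BY "timestamp" DESC LIMIT 1;
```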
Or where can I find the documentation if it is already supported?
Thanks