Hi Josh,
Haven't tried without salting yet. The problem we encountered is only on
the firstKey: when we do a select with a where condition on the secondKey
or thirdKey, it returns the correct result. So I'm guessing it has
something to do with the salted key.
Will try getSaltedKey first.
[root@namenode phoenix]# findjar . org.apache.phoenix.jdbc.PhoenixDriver
Starting search for JAR files from directory .
Looking for the class org.apache.phoenix.jdbc.PhoenixDriver
This might take a while...
./phoenix-4.8.0-HBase-1.1-client.jar
./phoenix-4.8.0-HBase-1.1-server.jar
If you are using YARN as the resource negotiator, you will get
containers (CPU + memory) allocated from all the nodes. FYI:
http://spark.apache.org/docs/latest/running-on-yarn.html
It's a scalable parallel calculation. MapReduce (Phoenix) will do the same
thing; it's just that its way of doing the calculation is
If I were to use Spark (via the Python API, for example), would the query
be processed on my web servers or on a separate server, as with Phoenix?
Regards,
Cheyenne Forbes
Chief Executive Officer
Avapno Omnitech
Chief Operating Officer
Avapno Solutions, Co.
Chairman
Avapno Assets, LLC
Bethel Town
Hi Cheyenne,
That's a very interesting question. If secondary indexes are created well
on the Phoenix table, HBase will use a coprocessor to do the join operation
(a Java-based MapReduce job, if I understand correctly) and then
return the result. By contrast, Spark is famous for its great
I've been thinking: is Spark SQL faster than Phoenix (or phoenix-spark)
for selects with joins on large data (for example, Instagram's size)?
Regards,
Cheyenne Forbes
Hi,
The trailing semicolon on the URL seems odd, but I do not think it
would cause issues in parsing, judging from the logic in
PhoenixEmbeddedDriver#acceptsURL(String).
Does the Class.forName(..) call succeed? Do you have Phoenix properly on
the classpath for your mappers?
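Since the error points at a missing driver class, a quick stdlib-only diagnostic like the sketch below can confirm whether the Phoenix client jar actually reached the JVM in question. Note that calling Class.forName(..) in the job's main() method only affects the client JVM; the mappers run in their own JVMs, so the jar must be shipped to them (e.g. via -libjars or the distributed cache). The class and method names here are illustrative, not from the original thread:

```java
// Minimal classpath diagnostic (stdlib only). Run it, or log its result
// from a mapper's setup(), to see whether the Phoenix jar shipped.
public class DriverCheck {
    static boolean onClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String driver = "org.apache.phoenix.jdbc.PhoenixDriver";
        System.out.println(driver + (onClasspath(driver) ? " found" : " missing"));
    }
}
```

If this prints "missing" inside a mapper but "found" on the client, the problem is jar distribution, not driver registration.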
Dong-iL, Kim
Hi Marica,
Are you able to successfully write your rowkey without salting? If not, it
could be that your 'generateRowKey' function is the culprit.
FWIW, we have some code that does something similar, though we use
'getSaltedKey':
// If salting, we need to prepend an empty byte to 'rowKey', then
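For illustration, the salting idea can be sketched as below. This is a toy stand-in: the hash used here is not Phoenix's actual SaltingUtil hash, and SaltSketch/saltRowKey are hypothetical names, but the shape (salt byte = hash of row key modulo bucket count, prepended to the key) matches how salted tables work:

```java
import java.util.Arrays;

public class SaltSketch {
    // Toy illustration of key salting: derive a salt byte from a hash of
    // the row key modulo the bucket count, then prepend it to the key.
    // Phoenix's real SaltingUtil uses its own hash; this only shows the shape.
    static byte[] saltRowKey(byte[] rowKey, int saltBuckets) {
        int hash = Arrays.hashCode(rowKey);
        byte salt = (byte) Math.abs(hash % saltBuckets);
        byte[] salted = new byte[rowKey.length + 1];
        salted[0] = salt; // the prepended salt byte
        System.arraycopy(rowKey, 0, salted, 1, rowKey.length);
        return salted;
    }

    public static void main(String[] args) {
        byte[] salted = saltRowKey("firstKey\u0000secondKey".getBytes(), 8);
        System.out.println("salt bucket: " + salted[0]);
    }
}
```

The point is that every read path against the raw HBase table has to account for that leading byte, which fits the symptom above: lookups on the leading firstKey miss, while filters on secondKey/thirdKey still match.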
Hi Dalin,
Thanks for the information, I'm glad to hear that the spark integration is
working well for your use case.
Josh
On Mon, Sep 12, 2016 at 8:15 PM, dalin.qin wrote:
> Hi Josh,
>
> before the project kicked off, we got the idea that HBase is more
> suitable for
Hi.
I've tested the MapReduce code on the homepage.
It couldn't find the JDBC driver, as shown below.
I've inserted the line Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
in the MapReduce main method, but it has no effect.
What shall I do?
Regards.
Error: java.lang.RuntimeException:
Hi,
We have a table created via Phoenix with salt buckets, but we're using the
HBase API to insert records since we need to manually set the HBase version,
and I believe that isn't possible via Phoenix.
Our table has a composite key (firstKey varchar, secondKey varchar,
thirdKey varchar), and when