Thanks Russell, I’ve posted it there as well now. If you remember the fix then
let me know!
Cheers
I've seen this before in JRuby, but I can't recall the fix. Maybe try the
JRuby list if nobody knows?
On Wednesday, December 2, 2015, Josh Harrison
wrote:
Thanks for your help, Samarth; unfortunately I’ve still got the same error after
these steps. To give the full error:
NameError: cannot link Java class org.apache.phoenix.jdbc.PhoenixDriver,
probable missing dependency: Could not initialize class
org.apache.phoenix.jdbc.PhoenixDriver
Josh,
One step worth trying is to register the PhoenixDriver instance explicitly
and see if that helps. Something like this:
DriverManager.registerDriver(PhoenixDriver.INSTANCE);
Connection con = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
- Samarth
On Wed, Dec 2, 2015 at 3:41 PM,
Thanks for your help and quick response, Russell. This does seem to have
progressed things; however, I’m now getting the following error:
NameError: cannot link Java class org.apache.phoenix.jdbc.PhoenixDriver,
probable missing dependency: Could not initialize class
org.apache.phoenix.jdbc.Pho
It does. Under the hood, the DataFrame/RDD makes use of the
PhoenixInputFormat, which derives the split information from the query
planner and passes those back through to Spark to use for its
parallelization.
After you have the RDD / DataFrame handle, you're also free to use Spark's
repartition()
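To make that split-to-partition mapping concrete, here is a toy, stdlib-only analogy (plain java.util.concurrent; none of this is Spark or Phoenix API, and all names are invented): each split is handed off as an independent task, so the number of splits bounds the initial parallelism, which is why repartition() can still be useful afterwards.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Toy model: "splits" from a query planner become parallel tasks,
// roughly the way input-format splits become partitions in Spark.
public class SplitParallelism {
    public static int processSplits(List<String> splits, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Integer>> results = new ArrayList<>();
        for (String split : splits) {
            // Each split is scanned independently; here we just "count rows"
            // by using the split's length as a stand-in.
            results.add(pool.submit(() -> split.length()));
        }
        int total = 0;
        for (Future<Integer> f : results) {
            total += f.get();
        }
        pool.shutdown();
        return total;
    }
}
```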
Yes, I will create new tickets for any issues that I may run into.
Another question: For now I'm pursuing the option of creating a dataframe
as shown in my previous email. How does Spark handle parallelization in
this case? Does it use Phoenix metadata on splits?
On Wed, Dec 2, 2015 at 11:02 AM,
This seems like a class path problem. Try specifying the class path to the
jar with that class in it via: CLASSPATH=/foo/bar.jar jruby ...
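If it helps to narrow this down, a small stdlib check shows whether the running JVM can see a given class on its classpath (the class names in the test are just examples; running Class.forName on the Phoenix driver class from inside your JRuby process would tell you whether the jar is actually visible):

```java
// Quick diagnostic: can the current JVM classpath resolve a class?
public class ClasspathCheck {
    public static boolean canSee(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            // ClassNotFoundException: class missing entirely.
            // NoClassDefFoundError: class found but failed to initialize,
            // often a missing transitive dependency.
            return false;
        }
    }
}
```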
On Wednesday, December 2, 2015, Josh Harrison
wrote:
Hi Guys,
We’re trying to spin up a testing version of Phoenix and integrate it with a
JRuby on Rails application. I have Phoenix and HBase successfully installed,
configured, and talking to each other, but am coming up with a ‘cannot load java
class’ error when trying to make the connection to
Hi Krishna,
That's great to hear. You're right, the plugin itself should be backwards
compatible with Spark 1.3.1, and should be for any version of Phoenix past
4.4.0, though I can't guarantee that to be the case forever. As well, I
don't know how much usage there is across the board using the Java A
Are you sure your HA configuration is working properly? I doubt this is
related to Phoenix.
Are these parameters set up correctly?
hbase-site.xml:
  hbase.rootdir = hdfs://nameservice/hbase
hdfs-site.xml:
  dfs.nameservices = nameservice
  dfs.ha.namenodes.nameservice = nn
Yes, that works for Spark 1.4.x. The website says Spark 1.3.1+ for the Spark
plugin; is that accurate?
For Spark 1.3.1, I created a DataFrame as follows (could not use the
plugin):
Map options = new HashMap();
options.put("url", PhoenixRuntime.JDBC_PROTOCOL +
PhoenixRuntime.JDBC_PROTOCO
Hi guys,
We're planning to upgrade from Phoenix 4.2.0 to Phoenix 4.4. As part of this
process, we'll need to migrate our custom UDFs using the new feature
provided by 4.4. However, the documentation only describes how to create a
scalar function, not an aggregate one.
Are User Defined Aggregator Fun
Hi,
We are using Phoenix 4.4.0 in our project, along with Apache Commons Pool.
We are facing an intermittent issue: sometimes we don't get the Phoenix
connection and the thread stays in the waiting state forever. Below is one
such thread trace:
Stack trace:
sun.misc.Unsafe.park(Native Method)
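For what it's worth, a thread parked forever in Unsafe.park is what an unbounded borrow from an exhausted pool looks like; a pool with a bounded wait fails fast instead of hanging. Below is a stdlib-only sketch of that idea (java.util.concurrent, not the Commons Pool API; in Commons Pool the analogous knob is the pool's max-wait setting):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy connection pool: borrow() waits at most maxWaitMs for a free
// connection instead of parking forever.
public class BoundedPool {
    private final BlockingQueue<String> idle;

    public BoundedPool(int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add("conn-" + i); // placeholder connections
        }
    }

    // Returns a connection, or null if none frees up within maxWaitMs.
    public String borrow(long maxWaitMs) throws InterruptedException {
        return idle.poll(maxWaitMs, TimeUnit.MILLISECONDS);
    }

    public void giveBack(String conn) {
        idle.offer(conn);
    }
}
```

A null return (or, with a real pool, a timeout exception) at least surfaces the exhaustion instead of leaving threads in WAITING forever.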