That's not enough context for us to diagnose why your job is failing,
but this stack trace exists only in Hadoop code. Given that, I would
have to assume it's a bug in Hadoop 3.0.0.
There are many newer versions of Hadoop, and I would suggest you use one
of the latest releases of Hadoop 3.
Hello Team,
Kindly help with the issue below: we are not able to write data from a
Phoenix table to HDFS using MapReduce.
We are getting the error below; the connection is lost intermittently
during the MapReduce job.
Before the mapper launches, the connection is established successfully.
map 0% and reduce 0%
If you are not using Kerberos authentication, then you need to figure
out how your DNS names are resolving for your VM. A
SocketTimeoutException means that your client could not open a socket
to the host:port that was specified. Likely, this is something you
need to figure out with your VM.
(-to: dev@phoenix, +bcc: dev@phoenix, +to: user@phoenix)
I've taken the liberty of moving this over to the user list.
Typically, such an exception is related to Kerberos authentication, when
the HBase service denies an incoming, non-authenticated client.
However, since you're running
Can you set log4j to DEBUG? That will give you a hint about what's going on
on the server.
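For reference, raising client-side logging typically means editing the log4j.properties on the client's classpath; a minimal sketch, with the logger names assumed to be the usual HBase/Phoenix/ZooKeeper package roots:

```properties
# Hypothetical log4j.properties fragment: turn up client-side logging.
# Package names below are the conventional logger roots for these clients.
log4j.logger.org.apache.hadoop.hbase=DEBUG
log4j.logger.org.apache.phoenix=DEBUG
log4j.logger.org.apache.zookeeper=DEBUG
```

Remember to turn these back down afterwards; DEBUG on these roots is very chatty.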
On Thu, 14 Jun 2018, 18:40 Susheel Kumar Gadalay,
wrote:
> Can someone please help me to resolve this.
>
> Thanks
> Susheel Kumar
>
> On Tuesday, June 12, 2018, Susheel Kumar Gadalay
> wrote:
> > Hi,
> >
>
Hi,
I want to generate a report from Jaspersoft fetching data from Phoenix
tables. I have placed Phoenix client jar, HDFS commons and Utilities jar,
HBase jar in the Jaspersoft reporting server.
It is hanging when making the JDBC connection. If I give wrong credentials,
it throws an error.
Any idea
After we did an upgrade to 4.10 on CDH 5.10, we see too many ZK
connections (we increased the max connections from 60 -> 300 -> 1000), but
the problem still exists.
Are there any known issues? Phoenix connection leak?
Appreciate your time.
But problem remains.
Best regards,
---R
--
View this message in context:
http://apache-phoenix-user-list.1124778.n5.nabble.com/Phoenix-connection-to-kerberized-hbase-fails-tp3419p3422.html
> t/configuring-phoenix-to-run-in-a-secure-cluster.html
> , and linked those configuration files under phoenix bin directory.
>
> But problem remains.
>
> Best regards,
> ---R
>
>
>
Version information: phoenix: phoenix-4.10.0-HBase-1.2, hbase: hbase-1.2.4
Hi group,
I used the following command to execute sqlline.py:
bin/sqlline.py
hadoop-offline034.dx.momo.com,hadoop-offline035.dx.momo.com,hadoop-offline036.dx.momo.com:2181:/hbase:phoenix/hadoop-offline032.dx.momo.com@MOMO.OFFLINE:/opt/hadoop/etc/hadoop/security/phoenix.keytab
from the debug,
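For anyone parsing that long argument: sqlline.py takes a single colon-separated connect string. A sketch of its layout, with all hostnames, the principal, and the keytab path hypothetical:

```shell
# The secure sqlline.py connect string is colon-separated:
#   <zk-quorum>:<zk-port>:<znode-parent>:<kerberos-principal>:<keytab-path>
# All values below are illustrative.
QUORUM="zk1.example.com,zk2.example.com,zk3.example.com"
PORT="2181"
ZNODE="/hbase"
PRINCIPAL="phoenix/gateway.example.com@EXAMPLE.COM"
KEYTAB="/etc/security/keytabs/phoenix.keytab"
echo "${QUORUM}:${PORT}:${ZNODE}:${PRINCIPAL}:${KEYTAB}"
```

The principal and keytab fields are only consulted when Kerberos is in play; an insecure cluster stops after the znode-parent field.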
Looking for some recommendations around managing connections.
We have a REST service using Phoenix + HBase and are looking to support
a high volume of requests, and it's my understanding from previous postings
that the Phoenix JDBC connections all share the same underlying HBase
connection regardless the
This seems like a class path problem. Try specifying the class path to the
jar with that class in it via: CLASSPATH=/foo/bar.jar jruby ...
On Wednesday, December 2, 2015, Josh Harrison
wrote:
> Hi Guys,
>
> We’re trying to spin up a testing version of Phoenix and
Thanks for your help, Samarth; unfortunately I’ve still got the same error
after these steps. To give the full error:
NameError: cannot link Java class org.apache.phoenix.jdbc.PhoenixDriver,
probable missing dependency: Could not initialize class
org.apache.phoenix.jdbc.PhoenixDriver
Josh,
One step worth trying would be to register the PhoenixDriver instance
and see if that helps. Something like this:
DriverManager.registerDriver(PhoenixDriver.INSTANCE);
Connection con = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
- Samarth
On Wed, Dec 2, 2015 at 3:41 PM,
I've seen this before in jruby, but I can't recall the fix. Maybe try the
JRuby list if nobody knows?
On Wednesday, December 2, 2015, Josh Harrison
wrote:
> Thanks for your help Samarth, unfortunately I’ve still got the same error
> after these steps, to give the
Phoenix is an embedded driver and it automatically manages connection pooling.
In the case of multi-tenancy, tenant-specific connections are created by
specifying the tenantId property in JDBC. Does Phoenix still automatically
handle connection pooling if I create multiple connections, each with
Hey Nick, Sergey,
Thanks for taking a look. It was just going through a really really long
HBase connection retry loop (20+ minutes!). However, even setting the
retries and delays to 0 in the hbase-site.xml, the connection still takes
about 3 minutes to timeout. Now I'm wondering if there is a
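For what it's worth, the residual delay often comes from ZooKeeper-level retries inside the HBase client rather than the region-level retries alone. A sketch of the client-side hbase-site.xml knobs involved (property names are standard HBase client keys; the values are illustrative, not recommendations):

```xml
<!-- Client-side hbase-site.xml fragment: shorten the connection retry loop.
     Values are illustrative only. -->
<property>
  <name>hbase.client.retries.number</name>
  <value>1</value>
</property>
<property>
  <!-- milliseconds between retries -->
  <name>hbase.client.pause</name>
  <value>100</value>
</property>
<property>
  <!-- ZooKeeper-level retries inside the HBase client -->
  <name>zookeeper.recovery.retry</name>
  <value>0</value>
</property>
```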
Hi Alex,
It's probably not hanging forever, but going through the -- very long by
default -- HBase connection retry loop. Probably you can enable more
verbose logging and see exactly what's happening. Can you confirm this is
the case and file a ticket against Phoenix at
Hello,
I'm wondering if there is a way to timeout a PhoenixDriver connection
attempt.
For example if I run:
// invalidZkQuorum = some unreachable ip address
Connection conn = DriverManager.getConnection("jdbc:phoenix:" +
invalidZkQuorum);
It just hangs forever. Is there something I can put in
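One avenue (a sketch, under the assumption that Phoenix forwards JDBC connection Properties through to the underlying HBase client configuration, using standard HBase client keys) is to pass retry overrides alongside the JDBC URL; the helper name below is hypothetical:

```java
import java.util.Properties;

public class ShortRetrySketch {
    // Hypothetical helper: HBase client overrides to cut the retry loop short.
    // Property names are standard HBase client keys; values are illustrative.
    static Properties shortRetryProps() {
        Properties props = new Properties();
        props.setProperty("hbase.client.retries.number", "1"); // far fewer than the default
        props.setProperty("hbase.client.pause", "100");        // ms between retries
        return props;
    }

    public static void main(String[] args) {
        // In real use these props would accompany the JDBC URL, e.g.:
        //   DriverManager.getConnection("jdbc:phoenix:" + zkQuorum, shortRetryProps());
        System.out.println(shortRetryProps().getProperty("hbase.client.retries.number"));
    }
}
```

Note that an unreachable ZooKeeper quorum may still block for the ZK session timeout before these retry settings even come into play.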
It definitely hangs forever from what I experienced.
It also does not allow you to exit, and only killing the terminal session
helps.
On Fri, May 22, 2015 at 6:13 PM, Nick Dimiduk ndimi...@gmail.com wrote:
Hi Alex,
It's probably not hanging forever, but going through the -- very long by
Hi Russell,
Just a naive question: are you able to connect to HBase via the HBase shell or
any other HBase client from that node? I have seen this kind of behavior
when the client does not have the HBase cluster conf in its classpath.
Thanks,
Anil Gupta
On Wed, Jun 25, 2014 at 10:51 PM, Russell