How do you know that PQS isn't just processing your query?
Are there errors in the query server log? Have you used a tool like
jstack to obtain a thread dump from the query server to see what it is
doing?
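(For anyone following along: running "jstack <pid>" against the query server process prints such a dump. If attaching an external tool is awkward, roughly the same information is available in-process through the standard ThreadMXBean API; a minimal, Phoenix-independent sketch:)

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Print a stack dump of every live thread in the current JVM --
    // similar in spirit to what `jstack <pid>` produces from outside.
    public class ThreadDump {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            // true, true: include lock and synchronizer information
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                System.out.print(info);
            }
        }
    }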
Cheyenne Forbes wrote:
Yes James, through the query server. Josh, it doesn't show any errors, it
just hangs there for minutes.
Regards,
Cheyenne Forbes
Josh Elser wrote:
The error you see would also be rather helpful.
James Taylor wrote:
Hi Cheyenne,
Are you referring to joins through the query server?
Thanks,
James
Hi Xindian,
A couple of initial things that come to mind...
* Make sure that you're using HDP "bits" (jars) everywhere to remove any
possibility that there's an issue between what Hortonworks ships and
what's in Apache.
* Make sure that your Java application/Spark job has the correct [...]
Cool, thanks for the info, JM. Thinking out loud...
* Could be missing/inaccurate /etc/krb5.conf on the nodes running spark
tasks
* Could try setting the Java system property
sun.security.krb5.debug=true in the Spark executors
* Could try to set org.apache.hadoop.security=DEBUG in log4j config
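A minimal sketch of the second idea, assuming the property can be set before the first Kerberos login in the JVM (for executors it is normally passed as -Dsun.security.krb5.debug=true via spark.executor.extraJavaOptions instead, since executor JVMs are not under your code's control at startup); the connection URL here is a placeholder:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class Krb5Debug {
        public static void main(String[] args) throws Exception {
            // Equivalent of launching with -Dsun.security.krb5.debug=true;
            // must happen before the first Kerberos login in this JVM.
            System.setProperty("sun.security.krb5.debug", "true");
            // Placeholder URL -- substitute your real connection string.
            try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:zk1:2181:/hbase")) {
                System.out.println("connected: " + !conn.isClosed());
            }
        }
    }

The log4j idea is one line in log4j.properties: "log4j.logger.org.apache.hadoop.security=DEBUG".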
On Thu, Sep 15, 2016 at 1:37 PM, Cheyenne Forbes <cheyenne.osanu.for...@gmail.com> wrote:
I was using Phoenix 4.4, then I switched to 4.8 because I thought it was
related to version 4.4 (both on HBase 1.1.2); neither JSON nor protobuf
serialization works.
I tried (also using the OUTER keyword):
left join
right join
inner join
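For reference, a repro sketch of one of those attempts through the thin (query server) driver; the host, table, and column names are made up:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JoinRepro {
        public static void main(String[] args) throws Exception {
            // Thin (query server) driver URL; serialization=JSON also exists.
            String url = "jdbc:phoenix:thin:url=http://pqs-host:8765;"
                + "serialization=PROTOBUF";
            // Hypothetical tables, just to exercise one of the join shapes.
            String sql = "SELECT o.id, c.name"
                + " FROM orders o LEFT OUTER JOIN customers c"
                + " ON o.customer_id = c.id";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " " + rs.getString(2));
                }
            }
        }
    }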
Using the keytab in the JDBC URL. That's the way we use it locally, and we
also tried running command-line applications directly from the worker nodes
and it works. But inside the Spark executor it doesn't...
2016-09-15 13:07 GMT-04:00 Josh Elser:
How do you expect Kerberos authentication to work for JDBC on Spark? Are you
using the principal+keytab options in the Phoenix JDBC URL, or is Spark
itself obtaining a ticket for you (via some "magic")?
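For reference, the principal+keytab variant embeds everything in the connection string (a sketch; quorum, realm, and keytab path are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class KeytabUrl {
        public static void main(String[] args) throws Exception {
            // jdbc:phoenix:<zk quorum>:<zk port>:<zk root>:<principal>:<keytab>
            String url = "jdbc:phoenix:zk1,zk2,zk3:2181:/hbase-secure"
                + ":appuser@EXAMPLE.COM"
                + ":/etc/security/keytabs/appuser.keytab";
            try (Connection conn = DriverManager.getConnection(url)) {
                System.out.println("logged in from keytab and connected");
            }
        }
    }

The keytab has to exist at that path on whichever node actually opens the connection, which becomes the crux once the connection is opened inside an executor.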
Jean-Marc Spaggiari wrote:
Hi,
I tried to build a small app all under Kerberos.
JDBC to Phoenix works.
Client to HBase works.
Client (puts) on Spark to HBase works.
But JDBC on Spark to HBase fails with a message like "GSSException: No
valid credentials provided (Mechanism level: Failed to find any Kerberos
tgt)"
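A sketch of the failing shape, assuming the connection is opened per partition inside the executors (app name, quorum, principal, and keytab path are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkJdbcRepro {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("phoenix-jdbc-krb");
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                sc.parallelize(Arrays.asList(1, 2, 3), 3)
                  .foreachPartition(partition -> {
                      // The keytab must exist at this path on the worker node.
                      String url = "jdbc:phoenix:zk1,zk2,zk3:2181:/hbase-secure"
                          + ":appuser@EXAMPLE.COM"
                          + ":/etc/security/keytabs/appuser.keytab";
                      // The GSSException surfaces here when the executor JVM
                      // has no usable Kerberos credentials.
                      try (Connection conn = DriverManager.getConnection(url)) {
                          System.out.println("connected: " + !conn.isClosed());
                      }
                  });
            }
        }
    }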