Ns G wrote:
Hi All,
I have written a simple class to access Phoenix.
I am able to establish a connection, but when executing the line below I
get an error.
conn = DriverManager.getConnection(dbUrl);
I am facing the below exception when accessing Phoenix through JDBC from
Eclipse.
INFO - Call
/hdp/current/phoenix-client/phoenix-client.jar
and run the following groovy script, assuming ZooKeeper is running on
zknode:
import groovy.sql.Sql
Sql.newInstance("jdbc:phoenix:zknode:/hbase-unsecure",
'foo',
'bar',
"org.apache.phoenix.jdbc.PhoenixDriver")
On Jun 6, 2016, at
Looks like you're knocking up against Hadoop (in o.a.h.c.Configuration).
Have you checked search results without Phoenix specifically?
I haven't run into anything like this before, but I'm also not a big
Groovy aficionado. If you can share your environment (or some sample
project that can
FWIW, I've also reproduced this with Groovy 2.4.3, Oracle Java 1.7.0_79
and Apache Phoenix 4.8.0-SNAPSHOT locally.
Will dig some more.
Brian Jeltema wrote:
Groovy 2.4.3
JDK 1.8
On Jun 8, 2016, at 11:26 AM, Josh Elser <josh.el...@gmail.com> wr
Koert,
Apache Phoenix goes through a lot of work to provide multiple versions
of Phoenix for various versions of Apache HBase (0.98, 1.1, and 1.2
presently). The builds for each of these branches are tested against
those specific versions of HBase, so I doubt that there are issues
between
Negative, sorry :\
I'm not really sure how this all is supposed to work in Groovy. I'm a
bit out of my element.
Brian Jeltema wrote:
Any luck with this?
On Jun 9, 2016, at 10:07 PM, Josh Elser <josh.el...@gmail.com> wrote:
FWIW, I've als
Hi Tongzhou,
Maybe you can try `ALTER INDEX index ON table DISABLE`. And then the
same command with USABLE after you update the index. Are you attempting
to do this incrementally? Like, a bulk load of data then a bulk load of
index data, repeat?
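A rough sketch of that cycle over JDBC, hedged since I haven't tested this
exact flow (index and table names are placeholders):

try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zknode");
     Statement stmt = conn.createStatement()) {
    // Take the index offline before the bulk load
    stmt.execute("ALTER INDEX my_index ON my_table DISABLE");
    // ... bulk load the table data and the index data here ...
    // Mark the index usable again once the index data is in place
    stmt.execute("ALTER INDEX my_index ON my_table USABLE");
}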
Regarding the TTL, I assume so, but I'm not
Hi,
I was just made aware of a neat little .NET driver for Avatica
(specifically, the authors were focused on Phoenix's use of Avatica in
the Phoenix Query Server).
https://www.nuget.org/packages/Microsoft.Phoenix.Client/1.0.0-preview
I'll have to try it out at some point, but would love to
I only wired up commit/rollback in Calcite/Avatica in Calcite-1.6.0 [1],
so Phoenix-4.6 isn't going to have that in the binaries that you can
download (Phoenix-4.6 is using 1.3.0-incubating). This should be
included in the upcoming Phoenix-4.7.0.
Sadly, I'm not sure why autoCommit=true
Hi Steve,
Sorry for the delayed response.
Putting the "payload" (json or protobuf) into the POST instead of the
header should be the 'recommended' way forward to avoid the limit as you
ran into [1]. I think Phoenix >=4.6 was using Calcite-1.4, but my memory
might be failing me.
Regarding
Hi Jared,
Sounds like https://issues.apache.org/jira/browse/CALCITE-780
That version of Phoenix (probably) is using Calcite-1.2.0-incubating.
You could ask the vendor to update to a newer version, or use Phoenix
4.7.0 (directly from Apache) which is using Calcite-1.6.0.
Jared Katz wrote:
(-cc dev@phoenix)
Deepak,
As the name suggests, that release is targeted for HBase-0.98.x release
lines. Any compatibility of an older release of HBase than 0.98 is
likely circumstantial.
I can't speak on behalf of the HBase community, but I feel relatively
confident in suggesting that it
Hi Jared,
This is just a bad error message on PQS' part. Sorry about that. IIRC,
it was something obtuse like not finding the server-endpoint for the
JSON message you sent.
If you want to do a POST and use the body, you can just put the bytes
for your JSON blob in there and that should be
Correct, James:
Phoenix-4.7.0 uses Calcite-1.6.0. This included lots of goodies,
including commit/rollback support. Phoenix-4.6.0 used Calcite-1.3.0. In general,
if you want to use the QueryServer, I'd strongly recommend trying to go
with Phoenix-4.7.0. You'll inherit *lots* of
Also, setting -Dsun.security.krb5.debug=true when you launch your Java
application will give you lots of very helpful information about what is
happening "under the hood".
Sanooj Padmakumar wrote:
Thanks Josh and everyone else .. Shall try this suggestion
On 22 Mar 2016 09:36, &
Yeah, I don't think the inclusion of Python code should be viewed as a
barrier to inclusion (maybe just a hurdle). I've seen other projects
(Ambari, iirc) which have tons of Python code and lots of integration.
The serialization for PQS can be changed via a single configuration
property in
Hi Pierre,
1.1.2.2.4 is not a version of Apache HBase. Might you be needing to
contact a vendor for specific information?
Either way, the phoenix shaded client and server (targeted for HBase
server) are not attached to the Maven build which means that they are
not deployed via Maven as a
Let's think back to before transactions were added to Phoenix.
With autoCommit=false, updates to HBase will be batched in the Phoenix
driver, eventually flushing on their own or whenever you invoke
commit() on the connection.
With autoCommit=true, updates to HBase are flushed with every
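To make the batching concrete, a hedged sketch (table and column names are
made up):

try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zknode")) {
    conn.setAutoCommit(false); // upserts are batched client-side
    try (PreparedStatement ps = conn.prepareStatement(
            "UPSERT INTO my_table (pk, col1) VALUES (?, ?)")) {
        for (int i = 0; i < 10000; i++) {
            ps.setInt(1, i);
            ps.setString(2, "value-" + i);
            ps.executeUpdate(); // buffered, not yet visible in HBase
        }
    }
    conn.commit(); // flushes the batched mutations to HBase
}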
If you invoked a commit on PQS, it should have flushed any cached values
to HBase. The general messages you described in your initial post look
correct at a glance.
If you have an end-to-end example of this that I can play with, I can
help explain what's happening inside of PQS. If you want
For reference materials: definitely check out
https://calcite.apache.org/avatica/
While JSON is easy to get started with, there are zero guarantees on
compatibility between versions. If you use protobuf, we should be able
to hide all schema drift from you as a client (e.g. applications you
Hi Mariana,
You could try defining an array of whatever type you need.
See https://phoenix.apache.org/array_type.html for more details.
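A hedged sketch of using a Phoenix array over JDBC (the schema here is
purely illustrative):

try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zknode");
     Statement stmt = conn.createStatement()) {
    stmt.execute("CREATE TABLE IF NOT EXISTS examples ("
        + "id VARCHAR PRIMARY KEY, items VARCHAR ARRAY)");
    try (PreparedStatement ps = conn.prepareStatement(
            "UPSERT INTO examples (id, items) VALUES (?, ?)")) {
        // Build a java.sql.Array from the connection
        Array items = conn.createArrayOf("VARCHAR", new String[] {"a", "b", "c"});
        ps.setString(1, "row1");
        ps.setArray(2, items);
        ps.executeUpdate();
    }
    conn.commit();
}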
- Josh
Mariana Medeiros wrote:
Hello :)
I have a Java class Example with a String and an ArrayList fields.
I am using Apache phoenix to insert and read
Hi Naveen,
The Protocol Buffer dependency on 2.5 is very unlikely to change in
Phoenix as that is directly inherited from HBase (as you can imagine,
these need to be kept in sync).
There are efforts, in both HBase and Phoenix, underway to provide
shaded-jars for each project which would
Nope, you shouldn't need to do this.
"statements" that you create using the CreateStatementRequest are very
similarly treated to the JDBC Statement interface (they essentially
refer to an instance of a PhoenixStatement inside PQS, actually).
You should be able to create one statement and
"statementId": 20,
"sql": "SELECT * FROM us_population",
"maxRowCount": -1
}
And this is the commit command response (if it can give you more
insights)
{
"response": "resultSet",
"connectionId": "8",
"state
Also, you're using the wrong command.
You want "prepareAndExecute" not "prepareAndExecuteBatch".
Josh Elser wrote:
Thanks, will fix this.
Plamen Paskov wrote:
Ah i found the error. It should be "sqlCommands": instead of
"sqlCommands",
The documentati
", "UPSERT INTO
us_population(STATE,CITY,POPULATION) VALUES('C2','City 2',100)" ]
}
And this is the response I receive:
Error 500
HTTP ERROR: 500
Problem accessing /. Reason:
com.fasterxml.jackson.core.JsonParseException: Unexpected
character (',' (code 44)): was ex
; ]
}
And this is the response I receive:
Error 500
HTTP ERROR: 500
Problem accessing /. Reason:
com.fasterxml.jackson.core.JsonParseException: Unexpected
character (',' (code 44)): was expecting a colon to separate field
name and value
at [Source: java.io.StringReader@41709697; line: 5,
What version of Phoenix are you using?
Plamen Paskov wrote:
Hey folks,
I'm trying to UPSERT some data via the json api but no luck for now. My
requests look like:
{
"request": "openConnection",
"connectionId": "6"
}
{
"request": "createStatement",
"connectionId": "6"
}
{
"request":
Hi Youngwoo,
The inclusion of hadoop-common is probably the source of most of the
bloat. We really only needed the UserGroupInformation code, but Hadoop
doesn't provide a proper artifact with just that dependency for us to
use downstream.
What dependency issues are you running into? There
You can check the dev list for the VOTE thread which contains a link to
the release candidate but it is not an official Apache Phoenix release yet.
Vasanth Bhat wrote:
Thanks a lot Ankit.
where do I download this from? I am looking at
http://mirror.fibergrid.in/apache/phoenix/ and don't seem
Did you read James' response in PHOENIX-2271? [1]
Restating for you: as a work-around, you could try to use the recent
transaction support which was added via Apache Tephra to prevent
multiple clients from modifying a cell. This would be much less
efficient than the "native" checkAndPut API
It sounds like whatever query you were running was just causing the
error to happen again locally. Like you said, if you launched a new
instance of sqlline.py, you would have a new JVM and thus a new
ThreadPool (and backing queue).
vishnu rao wrote:
hi
i was using the "sqlline.py" client ..
Looking into this on the HDP side. Please feel free to reach out via HDP
channels instead of Apache channels.
Thanks for letting us know as well.
Josh Mahonin wrote:
Hi Robert,
I recommend following up with HDP on this issue.
The underlying problem is that the
Can you share the error that your RegionServers report in the log before
they crash? It's hard to give an explanation without knowing the error
you're facing.
Thanks.
kevin wrote:
hi,all
I have a test with HBase running on top of Alluxio. In my HBase there is
a table created by Phoenix an
Short answer is (likely) that your mail provider (Gmail) is rejecting posts
to user@p.a.o which hit its spam trigger but did not hit the ASF's spam
trigger.
This triggers the mailing list to tell you that a message it tried to send
you was rejected. So, you get a warning about a message that you
See
https://github.com/apache/calcite/blob/5181563f9f26d1533a7d98ecca8443077e7b7efa/avatica/core/src/main/java/org/apache/calcite/avatica/remote/Service.java#L1759-L1768
This should be passed down just fine. If you can provide details as to
how it isn't, that'd be great.
Josh Elser wrote:
I
Done
Done
sqlline version 1.1.8
0: jdbc:phoenix:thin:url=http://pqs1.mydomain> !list
1 active connection:
#0 open
jdbc:phoenix:thin:url=http://pqs1.mydomain:8765;serialization=PROTOBUF
Is this something that has changed in newer versions of Phoenix?
On Mon, Feb 20, 2017 at 1:47 PM, Josh El
I believe the sequences track the current value of the sequence.
When your client requests 100 values, it would use 1-100, but Phoenix
only needs to know that the next value it can give out is 101. I'm not
100% sure, but I think this is how it works.
What are you concerned about?
Cheyenne
I thought arbitrary properties would be passed through, but I'm not sure
off the top of my head anymore
Would have to dig through the Avatica JDBC driver to (re)figure this one
out.
Michael Young wrote:
Is it possible to pass the TenantID attribute on the URL when using the
phoenix
Please be aware that you're now only communicating with a single ZK
server instead of the three you have deployed. If that ZK server is
unavailable, your client will fail upon the next read to ZK it needs to
make.
Presently, Phoenix doesn't support multiple ports for separate ZK
servers. It
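For illustration, all quorum members can go in the URL as long as they
share one client port (hosts, port, and znode below are placeholders):

// All three ZK servers, one shared client port, one root znode
String url = "jdbc:phoenix:zk1,zk2,zk3:2181:/hbase";
Connection conn = DriverManager.getConnection(url);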
in our
production environment, unfortunately.
We are using Phoenix 4.7 (from HDP 2.5 Community release).
On Wed, Feb 22, 2017 at 4:07 PM, Josh Elser <els...@apache.org> wrote:
So, just that I'm on the same page as you, when you invoke the Java
ap
No, PQS is just a proxy to the Phoenix (thick) JDBC driver.
You are still limited to the capabilities of the Phoenix JDBC driver.
You might be able to do something with a custom UDF, but I'm not sure.
Sudhir Babu Pothineni wrote:
Sorry for not asking the question properly, my understanding
This is a non-issue...
Avatica's use of protobuf is completely shaded (relocated classes). You
can use whatever version of protobuf you'd like in your client application.
Mark Heppner wrote:
If Cheyenne is talking about the query server, I'm not sure where you're
getting that from, Ted. It
nServer logs*
How can the above problem be resolved?
Thanks.
On Mon, Jan 16, 2017 at 10:22 PM, Josh Elser <els...@apache.org> wrote:
Did you check the RegionServers logs I asked in the last message?
Chetan Khatri
Tulasi,
Any property which you can provide in the `Properties` object when
instantiating the PhoenixDriver (outside of PQS), you can pass into PQS
via the same `Properties` object when instantiating the thin Driver.
The OpenConnectionRequest[1] is the RPC mechanism which passes along
this
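A hedged sketch of the thin-driver side (the PQS host and the property
chosen here are placeholders):

Properties props = new Properties();
props.setProperty("phoenix.query.timeoutMs", "120000"); // example property
String thinUrl = "jdbc:phoenix:thin:url=http://pqs-host:8765;serialization=PROTOBUF";
try (Connection conn = DriverManager.getConnection(thinUrl, props)) {
    // props travel to PQS in the OpenConnectionRequest described above
}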
You could create a new table with the same schema and then flip the
underlying table out.
* Rename the existing table to "foo"
* Create your table via Phoenix with correct schema and desired name
* Delete underlying HBase table that Phoenix created
* Rename "foo" to the desired name
I _think_
Phoenix's grammar is documented at
http://phoenix.apache.org/language/index.html
Cheyenne Forbes wrote:
Can I use the SQL WITH clause in Phoenix instead of "untidy" sub queries?
Did you check the RegionServers logs I asked in the last message?
Chetan Khatri wrote:
Any updates for the above error guys ?
On Fri, Jan 13, 2017 at 9:35 PM, Josh Elser <els...@apache.org> wrote:
(-cc dev@phoenix)
phoenix-4.8.2-HBase-1.
No, I don't believe there is any log4j logging done in PQS that would show
queries being executed.
Ideally, we would have a "query log" in Phoenix which would present an
interface to this data and it wouldn't require anything special in PQS.
However, I wouldn't be opposed to some trivial
Hi,
The trailing semi-colon on the URL seems odd, but I do not think it
would cause issues in parsing when inspecting the logic in
PhoenixEmbeddedDriver#acceptsURL(String).
Does the Class.forName(..) call succeed? You have Phoenix properly on
the classpath for your mappers?
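A quick way to verify inside the mapper itself (just a sketch):

try {
    // Succeeds only if the Phoenix client jar is on this JVM's classpath
    Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
} catch (ClassNotFoundException e) {
    throw new RuntimeException("Phoenix driver not on classpath", e);
}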
Dong-iL, Kim
phoenix-4.8.0-HBase-1.1-client.jar is the jar which should be used. The
phoenix-4.8.0-HBase-1.1-hive.jar is to be used with the Hive integration.
dalin.qin wrote:
[root@namenode phoenix]# findjar . org.apache.phoenix.jdbc.PhoenixDriver
Starting search for JAR files from directory .
Looking for
I was going to say that
https://issues.apache.org/jira/browse/PHOENIX-3223 might be related,
but it looks like the HADOOP_CONF_DIR is already put on the classpath.
Glad to see you got this working :)
On Thu, Sep 8, 2016 at 5:56 AM, F21 wrote:
> Glad you got it working! :)
Yup, Francis got it right. There are POJOs in Avatica which Jackson
(un)marshals the JSON in-to/out-of and logic which constructs the POJOs
from Protobuf and vice versa.
In some hot-code paths, there are implementations in the server which
can use protobuf objects directly (to avoid extra
How do you expect JDBC on Spark Kerberos authentication to work? Are you
using the principal+keytab options in the Phoenix JDBC URL or is Spark
itself obtaining a ticket for you (via some "magic")?
Jean-Marc Spaggiari wrote:
Hi,
I tried to build a small app all under Kerberos.
JDBC to
com>
Mobile: 876-881-7889
skype: cheyenne.forbes1
On Thu, Sep 15, 2016 at 4:42 PM, Josh Elser <josh.el...@gmail.com> wrote:
The error you see would also be rather helpful.
James Taylor wrote:
Hi Cheyenne,
Are you referrin
Puneeth -- One extra thing to add to Francis' great explanation; the
response message told you what you did wrong:
"missingStatement":true
This is telling you that the server does not have a statement with the
ID 12345 as you provided.
F21 wrote:
Hey,
You mentioned that you sent a
Hi Vikram,
See https://issues.apache.org/jira/browse/PHOENIX-1754
This is presently an open issue. You will not be able to use the
convenience "Kerberos login via URL" when on Windows. You will need to
manually perform your Kerberos login (via JAAS or Hadoop's
UserGroupInformation class) and
to run command line applications directly from the worker
nodes and it works, but inside the Spark Executor it doesn't...
2016-09-15 13:07 GMT-04:00 Josh Elser <josh.el...@gmail.com>:
How do you expect JDBC on Spark Kerberos authentication to work? Are
The error you see would also be rather helpful.
James Taylor wrote:
Hi Cheyenne,
Are you referring to joins through the query server?
Thanks,
James
On Thu, Sep 15, 2016 at 1:37 PM, Cheyenne Forbes
> wrote:
I was
Hi Xindian,
A couple of initial things that come to mind...
* Make sure that you're using HDP "bits" (jars) everywhere to remove any
possibility that there's an issue between what Hortonworks ships and
what's in Apache.
* Make sure that your Java application/Spark job has the correct
Thanks Prabhjyot. Feel free to assign it directly to me. I can help
triage/fix it.
Prabhjyot Singh wrote:
Thank you Josh, sure I'll do that.
On 2016-09-22 08:23 (+0530), Josh Elser <j...@gmail.com> wrote:
> Sounds like the thin driver should b
Sounds like the thin driver should be making a copy of the properties if
it's going to be modifying them. Want to open a JIRA issue?
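Something along these lines inside the driver would do it (a sketch, not
the actual patch; `callerProps` is a made-up name):

// Defensive copy so the caller's Properties object is never mutated
Properties copy = new Properties();
copy.putAll(callerProps);
// ... apply driver-internal defaults and overrides to `copy` only ...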
Prabhjyot Singh wrote:
Hi,
I'm using DriverManager.getConnection(url, properties) using following
properties
url ->
Thanks bud!
Prabhjyot Singh wrote:
Yes, I did, https://issues.apache.org/jira/browse/PHOENIX-3315. and
somehow didn't tag you.
On 27 September 2016 at 07:57, Josh Elser <josh.el...@gmail.com> wrote:
Did you ever create this issue, Prabhjyot? I
Did you ever create this issue, Prabhjyot? I don't recall seeing it come
across my inbox but I might have missed it...
Josh Elser wrote:
Thanks Prabhjyot. Feel free to assign it directly to me. I can help
triage/fix it.
Prabhjyot Singh wrote:
Thank you Josh, sure I'll do that.
On 2016-09-22
If they're generic to Apache Avatica (Apache Calcite sub-project) and
not tied to Apache Phoenix, we'd also love to have you recognized, if
not have the code directly committed to Avatica :). Avatica is the
underlying tech behind the Phoenix Query Server.
Minor clarification with my Phoenix
Hi Noam,
Can you quantify the query you run that shows this error? Also, when you
change the criteria to retrieve less data, do you mean that you're
fetching fewer rows?
Bulvik, Noam wrote:
I am using Phoenix 4.5.2 and in my table the data is in an Array.
When I issue a query sometimes the
Hi Dequn,
There should be more to this stacktrace than you provided as the actual
cause is not included. Can you please include the entire stacktrace? If
you are not seeing this client-side, please check the Phoenix Query
Server log file to see if there is more there.
Dequn Zhang wrote:
IIRC, the SALT_BUCKETS configuration from the data table is implicitly
applied to any index tables you create from that data table.
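For illustration (table name, column names, and bucket count are made up):

// Salted data table; IIRC the index created from it inherits the salting
stmt.execute("CREATE TABLE t (pk VARCHAR PRIMARY KEY, v VARCHAR) SALT_BUCKETS = 8");
stmt.execute("CREATE INDEX t_v_idx ON t (v)");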
Pradheep Shanmugam wrote:
Hi,
I have a hbase table created using phoenix which is salted.
Since the queries on the table required a secondary index, I created
index
Hi Pradeep,
No, this is one you will likely have to work around on your own by
building a custom Phoenix client jar that does not include the
javax-servlet classes. They are getting transitively pulled into Phoenix
via Hadoop (IIRC). If your web application already has the classes
present,
Is the directory containing hbase-site.xml where you have made the
modification included on your overridden CLASSPATH? How are you running
this query -- is it on the classpath for that program?
ashish tapdiya wrote:
Query:
SELECT /*+ NO_STAR_JOIN*/ IP, RANK, TOTAL FROM (SELECT SOURCEIPADDR as
(cc: -dev +user, bcc: +dev)
Hi Krishna,
Might you be able to share the stacktrace that accompanied that Exception?
Shiva Krishna wrote:
Hi All,
Can anyone give me a small example of Phoenix upserts using Threads in Java?
I wrote a sample; it is working fine in the local environment, but when
Hi Puneeth,
What version of Phoenix are you using?
Indeed per [1], maxRowCount should control the number of rows returned
in the ExecuteResponse. However, given that you see 100 rows (which is
the default), it sounds like the value is not being respected. The most
recent docs may not align
: Josh Elser [mailto:josh.el...@gmail.com]
Sent: 13 October 2016 14:58
To: user@phoenix.apache.org
Subject: Re: PrepareAndExecute statement return only 100 rows
Hi Puneeth,
What version of Phoenix are you using?
Indeed per [1], maxRowCount should control the number of rows returned
Not 100% sure, but yes, I believe this is correct. One of the servers
would get 0-99, the other 100-199. The server to use up that batch of
100 values would then request 200-299, etc. Setting the cache to be 0
would likely impact the performance of Phoenix.
Using some external system to
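A small sketch of the cache clause, hedged (sequence name and sizes are
made up):

// Each client connection reserves a batch of 100 values up front
stmt.execute("CREATE SEQUENCE my_seq START WITH 0 INCREMENT BY 1 CACHE 100");
try (ResultSet rs = stmt.executeQuery("SELECT NEXT VALUE FOR my_seq")) {
    while (rs.next()) {
        // 0 for this client; a second client's batch would start at 100
        System.out.println(rs.getLong(1));
    }
}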
Or make sure that Zeppelin is adding hbase-site.xml to the classpath.
You can easily test this by making a copy of your phoenix-client.jar and
manually adding in a copy of hbase-site.xml to the jar.
James Taylor wrote:
https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html
On
com> wrote:
thanks josh looking into it.
On Wed, Nov 23, 2016 at 1:48 PM, Josh Elser <josh.el...@gmail.com> wrote:
Hi Pradeep,
No, this is one you will likely have to work around on your
Just to clarify, like near all other services on Linux, you do not want
to run the Phoenix Query Server as root. Running it as the "hbase" user
(or the user you are running hbase as) is the common way to do this.
Will Xu wrote:
OK, this means you probably don't have Phoenix query server
What's the rest of the stacktrace? You cut off the cause.
venkata subbarayudu wrote:
I faced a strange issue, that, Phoenix hbase upsert query fails with
ArrayIndexOutOfBounds exception.
Query looks like:
upsert into table (pk,col1, col2, col3) select a.pk
for org.apache.phoenix:phoenix ?
we need to have the jdbc driver
thanks
On 06.01.2017 18:38, Josh Elser wrote:
There is no JAR for org.apache.phoenix:phoenix. There's a
source-release tarball [1]
Use org.apache.phoenix:phoenix-core [2]
[1]
http://repo1.maven.org/maven2/org/apache/phoenix/phoenix/4.4.0-HBase
(-cc dev@phoenix)
phoenix-4.8.2-HBase-1.2-server.jar in the top-level binary tarball of
Apache Phoenix 4.8.2 is the jar which is meant to be deployed to every
HBase server's classpath.
I would check the RegionServer logs -- I'm guessing that it never
started correctly or failed. The error message is
(-cc the reply-all)
I'm not sure what your reasons are for not accepting "add-ons" as a
valid implementation, but this release doesn't change Phoenix's use of
Tephra to enable cross-row and cross-table transactions which are not
natively provided by HBase.
If you want to see this feature
Yes, use the HBase-provided access control mechanisms.
lk_phoenix wrote:
hi,all:
I want to know how to add access control to the tables I create via Phoenix.
Do I need to add the privileges through HBase?
2016-12-05
lk_phoenix
avatica.statementcache.expiryduration - Statement cache expiration
duration. Any statements older than this value will be discarded.
Default is 5 minutes (value: 5).
avatica.statementcache.expiryunit - Statement cache expiration unit.
Unit modifier applied to the value provided in
avatica.statem
Thanks.
Tulasi Paradarami wrote:
I created CALCITE-1565 for this issue.
On Thu, Jan 5, 2017 at 12:10 PM, Josh Elser <josh.el...@gmail.com> wrote:
Hrm, that's frustrating. No stack trace is a bug. I remember there
being one of these I fixed
There is no JAR for org.apache.phoenix:phoenix. There's a source-release
tarball [1]
Use org.apache.phoenix:phoenix-core [2]
[1]
http://repo1.maven.org/maven2/org/apache/phoenix/phoenix/4.4.0-HBase-1.1/
[2]
http://repo1.maven.org/maven2/org/apache/phoenix/phoenix-core/4.4.0-HBase-1.1/
You should really work backwards from your use cases. The amount of
hardware you need is dependent on your requirements and what else you're
going to be running on the hardware. You're not likely to get a good
answer here because the question is so open-ended.
Cheyenne Forbes wrote:
are
Maybe you could separate some of the columns into separate column
families so you have some physical partitioning on disk?
Whether you select one or many columns, you presently have to read
through each column on disk.
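For example (names made up), assigning columns to families in the DDL:

// cf1 and cf2 become separate HBase column families (separate store files),
// so a query that only touches cf1 columns skips cf2's data on disk
stmt.execute("CREATE TABLE wide_table ("
    + "pk VARCHAR PRIMARY KEY, "
    + "cf1.col_a VARCHAR, cf1.col_b VARCHAR, "
    + "cf2.col_c VARCHAR)");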
AFAIK, there shouldn't really be an upper limit here (in terms of what
e got.
>
>
> Thanks & Regards,
> Rohit R. K.
>
>
> On Tue, 14 Mar 2017 20:50:30 +0530 Josh Elser <els...@apache.org> wrote
>
>
> When you provide the principal and keytab options in the JDBC URL, the
> ticket cache (created by your kinit invoca
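For reference, a hedged sketch of that URL form (host, port, znode, realm,
and keytab path are all placeholders for your environment):

// quorum : port : root znode : Kerberos principal : path to keytab
String url = "jdbc:phoenix:zk1:2181:/hbase-secure"
    + ":myuser@EXAMPLE.COM:/etc/security/keytabs/myuser.keytab";
Connection conn = DriverManager.getConnection(url);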
On Mon, Mar 20, 2017 at 12:55 AM, Adi Meller wrote:
> Hello.
> I need to move some (5-6) big (2 tera each) tables from hive to Phoenix
> every day.
>
> I have CDH 5.7 and installed Phoenix 4.7 through parcel.
> I have 4 region servers with 94GB physical memory and 32 cores
-11-07 18:12 GMT+01:00 Josh Elser<els...@apache.org>:
+1
I was poking around with it this weekend. I had some issues (trying to
use it from the Avatica side, instead of PQS, specifically), but for the
most part it worked. Definitely feel free to report any issues you run into:
https://b
The root cause of your exception is a ConnectionRefusedException. This
means that your client was unable to make a network connection to
192.168.1.147:52540 (from earlier in your stacktrace).
Typically, this is an OS or HBase level issue. I'd first try to rule out
a networking level issue
s,
>>>> Michael
>>>>
>>>> On Mon, Apr 3, 2017 at 2:28 PM, Ryan Templeton
>>>> <rtemple...@hortonworks.com> wrote:
>>>>>
>>>>> I see there’s a phoenix-tracing-webapp project in the build plus this
>>>>> on the web
Since you're writing the UDF yourself, you can have it do anything you'd like.
However, I wouldn't think that it would be a good idea to do a remote
RPC for every potential row that you're processing in a query...
On Wed, Apr 12, 2017 at 5:45 PM, Cheyenne Forbes
default to
true. thanks for the explanation why it shouldn't
-Sudhir
On Thu, Apr 20, 2017 at 11:56 AM, Josh Elser <els...@apache.org> wrote:
Most likely to avoid breaking existing functionality.
As this mapping is a relatively new feature, we w
the high read load of the
SYSTEM.STATS table.
On Thu, Apr 13, 2017 at 11:42 AM, Josh Elser <josh.el...@gmail.com> wrote:
What version of Phoenix are you using? (an Apache release? some
vendors' packaging?)
Academically speaking, when you bu
Just some extra context here:
From your original message, you noted how the ZK connection succeeded
but the HBase connection didn't. The JAAS configuration file you
provided is *only* used by ZooKeeper. As you have eventually realized,
hbase-site.xml is the configuration file which controls
I'm guessing that you're using a version of HDP? If you're using those
versions from Apache, please update as they're dreadfully out of date.
What is the DDL of the table you're reading from? Do you have any
secondary indexes on this table (if so, on what columns)? What kind of
query are you
Most likely to avoid breaking existing functionality.
As this mapping is a relatively new feature, we wouldn't want to force
it upon new users.
The need for Phoenix to have the proper core-site, hdfs-site, and
hbase-site XML files on the classpath is a fair knock though (although,
the lack
Reid wouldn't have seen the message he did in the original message about
a successful login if that were the case.
Try adding in "-Dsun.security.krb5.debug" to your PHOENIX_OPTS (I think
that is present in that version of Phoenix). It should give you a lot
more debug information, providing
So properties like phoenix.query.timeoutMs have to be in the app-side hbase-site?
But I see the above property being set in the server-side hbase-site through
Ambari...
Is that not going to be used by the Phoenix client?
Thanks,
Pradheep
On 3/8/17, 3:30 PM, "Josh Elser" <els...@apache.org>