Are you giving equal amounts of Java heap to both applications?
On 7/21/17 5:04 AM, Siddharth Ubale wrote:
Hi,
Using phoenix 4.10 with hbase0.98.
Thanks,
Siddharth
*From:*Siddharth Ubale [mailto:siddharth.ub...@syncoms.com]
*Sent:* Friday, July 21, 2017 12:24 PM
*To:*
On 7/13/17 1:48 PM, Tanujit Ghosh wrote:
Hi All,
We are facing a problem in our cluster as stated below.
We have a long-running Java process which performs various selects on the
underlying Phoenix/HBase table structure and returns data. This process
gets requests from other upstream apps and
The phoenix-client jar is not published. Please see
https://issues.apache.org/jira/browse/PHOENIX-1567
On 6/29/17 9:26 AM, Juvenn Woo wrote:
Hi all,
For convenience of deployment, I am trying to specify Phoenix as a Maven
dependency instead of putting the client jar in the git repo.
While I am able
I think this is more of an issue of your 78 salt buckets than the width
of your table. Each chunk, running in parallel, is spilling incremental
counts to disk.
I'd check your ulimit settings on the node which you run this query from
and try to increase the number of open files allowed before
Yes. Please just try things like this in the future :)
On 6/11/17 12:33 AM, Cheyenne Forbes wrote:
Can I have something like "select id from table limit 10"
Regards,
Cheyenne
-----Original Message-----
From: Josh Elser [mailto:els...@apache.org]
Sent: Thursday, May 25, 2017 9:30 AM
To: user@phoenix.apache.org
Subject: Re: Phoenix driver query rejection - note change in config?
Hi Megan,
Did you happen to restart Squirrel and/or re-connect to Phoenix after making
Hey Anil,
That's a relatively old version of HDP. Do you have options to update to
something more recent?
This reads like a bug to me (though a JIRA ID escapes me). However, even
if you manage to get around this, there are more issues to come.
On 5/25/17 7:38 PM, anil gupta wrote:
Hi,
We are
-HBase-1.2-bin/bin/log4j.properties
/usr/local/apache-phoenix-4.10.0-HBase-1.2-bin/bin/sandbox-log4j.properties
/usr/local/apache-phoenix-4.10.0-HBase-1.2-bin/bin/config/log4j.properties
Regards,
Cheyenne O. Forbes
On Thu, May 25, 2017 at 11:31 AM, Josh Elser <els...@apache.org
<mail
Hi Megan,
Did you happen to restart Squirrel and/or re-connect to Phoenix after
making the change? The steps you took (Squirrel, notwithstanding) should
have sufficiently fixed the issue you described.
Another sanity check would be to make sure you didn't have any
mis-typing of the
and many others) that you'd
like to see HDP move to a newer version of Phoenix. The 4.7 release is
four releases back from the current 4.10 release.
Any ballpark timeframe for an HDP release with the latest Phoenix release?
On Fri, May 19, 2017 at 7:56 AM Josh Elser <josh.el...@gmail.
Hi Bernard,
(wearing my Hortonworks hat)
No, there is no way for you to in-place upgrade Phoenix. We do a
significant amount of compatibility testing for the versions of all
components shipped in HDP.
Please note that the version of Phoenix that is shipped inside HDP is
*based* on an
How about you try it :)
Cheyenne Forbes wrote:
Will that give me the quorum servers used by phoenix? for example if the
value in the hbase config is "zk1.aob.net
<http://zk1.aob.net>,zk2.aob.net <http://zk2.aob.net>"
Regards,
Cheyenne O. Forbes
On Tue, May 16, 2017 a
```HBaseConfiguration.create().get("hbase.zookeeper.quorum");```
Cheyenne Forbes wrote:
Can I access the value of "hbase.zookeeper.quorum" in my UDF?
Regards,
Cheyenne O. Forbes
I am not aware of any mechanisms in Phoenix that will automatically
write formatted data, locally or remotely. This will require you to
write some code.
cmbendre wrote:
Hi,
Some of our queries on the Phoenix cluster give millions of rows as a result.
How do I export these results to a CSV file
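A minimal, hedged sketch of what "write some code" could look like here: stream the ResultSet row by row and quote fields RFC-4180-style. The JDBC URL, table name, and output path are all hypothetical.

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class CsvExport {
    // Quote a field RFC-4180-style when it contains a comma, quote, or newline.
    static String csvEscape(String field) {
        if (field == null) {
            return "";
        }
        if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }
        return field;
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM MY_TABLE");
             PrintWriter out = new PrintWriter(
                     new BufferedWriter(new FileWriter("export.csv")))) {
            ResultSetMetaData md = rs.getMetaData();
            int cols = md.getColumnCount();
            StringBuilder row = new StringBuilder();
            for (int i = 1; i <= cols; i++) {          // header line
                if (i > 1) row.append(',');
                row.append(csvEscape(md.getColumnName(i)));
            }
            out.println(row);
            while (rs.next()) {                        // one CSV line per row
                row.setLength(0);
                for (int i = 1; i <= cols; i++) {
                    if (i > 1) row.append(',');
                    row.append(csvEscape(rs.getString(i)));
                }
                out.println(row);
            }
        }
    }
}
```

Since this iterates the ResultSet directly, the millions of rows are never held in memory at once; tuning the statement's fetch size may also help.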
Hi Mike,
Yes, this is a known missing feature in Apache Avatica, the tech behind the
thin client/query server. I've recently committed the implementation in
Avatica, we just need to cut a new release upstream and also here in
Phoenix.
The workaround is to use the thick client or avoid the use of
default to
true. Thanks for the explanation of why it shouldn't
-Sudhir
On Thu, Apr 20, 2017 at 11:56 AM, Josh Elser <els...@apache.org
<mailto:els...@apache.org>> wrote:
Most likely to avoid breaking existing functionality.
As this mapping is a relatively new feature, we w
Most likely to avoid breaking existing functionality.
As this mapping is a relatively new feature, we wouldn't want to force
it upon new users.
The need for Phoenix to have the proper core-site, hdfs-site, and
hbase-site XML files on the classpath is a fair knock though (although,
the lack
I'm guessing that you're using a version of HDP? If you're using those
versions from Apache, please update as they're dreadfully out of date.
What is the DDL of the table you're reading from? Do you have any
secondary indexes on this table (if so, on what columns)? What kind of
query are you
Reid wouldn't have seen the message he did in the original message about
a successful login if that were the case.
Try adding in "-Dsun.security.krb5.debug" to your PHOENIX_OPTS (I think
that is present in that version of Phoenix). It should give you a lot
more debug information, providing
Just some extra context here:
From your original message, you noted how the ZK connection succeeded
but the HBase connection didn't. The JAAS configuration file you
provided is *only* used by ZooKeeper. As you have eventually realized,
hbase-site.xml is the configuration file which controls
the high read load of the
SYSTEM.STATS table.
On Thu, Apr 13, 2017 at 11:42 AM, Josh Elser <josh.el...@gmail.com
<mailto:josh.el...@gmail.com>> wrote:
What version of Phoenix are you using? (an Apache release? some
vendors' packaging?)
Academically speaking, when you bu
Thanks,
>>>> Michael
>>>>
>>>> On Mon, Apr 3, 2017 at 2:28 PM, Ryan Templeton
>>>> <rtemple...@hortonworks.com> wrote:
>>>>>
>>>>> I see there’s a phoenix-tracing-webapp project in the build plus this
>>>>> on the web
Since you're writing the UDF yourself, you can have it do anything you'd like.
However, I wouldn't think that it would be a good idea to do a remote
RPC for every potential row that you're processing in a query...
On Wed, Apr 12, 2017 at 5:45 PM, Cheyenne Forbes
The root cause of your exception is a ConnectionRefusedException. This
means that your client was unable to make a network connection to
192.168.1.147:52540 (from earlier in your stacktrace).
Typically, this is an OS or HBase level issue. I'd first try to rule out
a networking level issue
-11-07 18:12 GMT+01:00 Josh Elser<els...@apache.org>:
+1
I was poking around with it this weekend. I had some issues (trying to
use
it from the Avatica side, instead of PQS, specifically), but for the
most
part it worked. Definitely feel free to report any issues you run into:
https://b
On Mon, Mar 20, 2017 at 12:55 AM, Adi Meller wrote:
> Hello.
> I need to move some (5-6) big (2 tera each) tables from hive to Phoenix
> every day.
>
> I have CDH 5.7 and installed Phoenix 4.7 through parcel.
> I have 4 region servers with 94 GB physical memory and 32 cores
e got.
>
>
> Thanks & Regards,
> Rohit R. K.
>
>
> On Tue, 14 Mar 2017 20:50:30 +0530 Josh Elser <els...@apache.org> wrote
>
>
> When you provide the principal and keytab options in the JDBC URL, the
> ticket cache (created by your kinit invoca
When you provide the principal and keytab options in the JDBC URL, the
ticket cache (created by your kinit invocation) is not used.
What does the other logging say from your client? You should see a
message about Phoenix performing a Kerberos login given the information
you provided.
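The URL shape being described can be sketched as a tiny helper: the principal and keytab ride along as the last two colon-separated components of the thick-driver URL. The quorum, principal, and keytab path below are invented.

```java
public class KerberosUrl {
    // Builds jdbc:phoenix:<quorum>:<port>:<rootNode>:<principal>:<keytab>.
    static String buildUrl(String quorum, int port, String rootNode,
                           String principal, String keytabPath) {
        return "jdbc:phoenix:" + quorum + ":" + port + ":" + rootNode
                + ":" + principal + ":" + keytabPath;
    }

    public static void main(String[] args) {
        // With these options present, the driver performs its own login and
        // the ticket cache from kinit is ignored.
        System.out.println(buildUrl("zk1,zk2,zk3", 2181, "/hbase",
                "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab"));
    }
}
```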
So properties like phoenix.query.timeoutMs have to be in the app-side hbase-site?
But I see the above property being set in the server-side hbase-site through
Ambari.
Is that not going to be used by the Phoenix client?
Thanks,
Pradheep
On 3/8/17, 3:30 PM, "Josh Elser"<els...@apache.org>
Unsubscribe yourself just like you subscribed yourself.
user-unsubscr...@phoenix.apache.org
Saravanan A wrote:
unsubscribe.
re #1, you need to ensure that the correct hbase-site.xml is on the
classpath on your application. Phoenix does *not* do this for you.
Pradheep Shanmugam wrote:
Hi,
1. When using the Phoenix thick client (4.4.0) at the application, where
does the client hbase-site.xml reside? I don't see one.
I'm confused as to how those two points are related.
Are you saying that without setting the default serialization, you got an
error about being unable to load a commons-http SSL class?
On Mar 5, 2017 17:18, "Cheyenne Forbes"
wrote:
turns out that the only way
, 2017 at 4:15 AM, Josh Elser <els...@apache.org
<mailto:els...@apache.org>> wrote:
No, I don't believe there is any log4j logging done in PQS that
would show queries being executed.
Ideally, we would have a "query log" in Phoenix which would present
an
You're using the wrong jar, Cheyenne.
The client.jar is for the "thick" JDBC driver. The thin-client.jar is
for the "thin" JDBC driver.
Cheyenne Forbes wrote:
I've used Squirrel SQL Client before but now I'm trying Squirrel
snapshot-20170214_2214 with phoenix-4.9.0-HBase-1.2-client.jar it
No, I don't believe there is any log4j logging done in PQS that would show
queries being executed.
Ideally, we would have a "query log" in Phoenix which would present an
interface to this data and it wouldn't require anything special in PQS.
However, I wouldn't be opposed to some trivial
I believe the sequences track the current value of the sequence.
When your client requests 100 values, it would use 1-100, but Phoenix
only needs to know that the next value it can give out is 101. I'm not
100% sure, but I think this is how it works.
What are you concerned about?
Cheyenne
in our
production environment, unfortunately.
We are using Phoenix 4.7 (from HDP 2.5 Community release).
On Wed, Feb 22, 2017 at 4:07 PM, Josh Elser <els...@apache.org
<mailto:els...@apache.org>> wrote:
So, just that I'm on the same page as you, when you invoke the Java
ap
Done
Done
sqlline version 1.1.8
0: jdbc:phoenix:thin:url=http://pqs1.mydomain> !list
1 active connection:
#0 open
jdbc:phoenix:thin:url=http://pqs1.mydomain:8765;serialization=PROTOBUF
Is this something that has changed in newer versions of Phoenix?
On Mon, Feb 20, 2017 at 1:47 PM, Josh El
See
https://github.com/apache/calcite/blob/5181563f9f26d1533a7d98ecca8443077e7b7efa/avatica/core/src/main/java/org/apache/calcite/avatica/remote/Service.java#L1759-L1768
This should be passed down just fine. If you can provide details as to
how it isn't, that'd be great.
Josh Elser wrote:
I
Short answer is (likely) that your mail provider (Gmail) is rejecting posts
to user@p.a.o which hit its spam trigger but did not hit the ASF's spam
trigger.
This triggers the mailing list to tell you that a message it tried to send
you was rejected. So, you get a warning about a message that you
Please be aware that you're now only communicating with a single ZK
server instead of the three you have deployed. If that ZK server is
unavailable, your client will fail upon the next read to ZK it needs to
make.
Presently, Phoenix doesn't support multiple ports for separate ZK
servers. It
I thought arbitrary properties would be passed through, but I'm not sure
off the top of my head anymore
Would have to dig through the Avatica JDBC driver to (re)figure this one
out.
Michael Young wrote:
Is it possible to pass the TenantID attribute on the URL when using the
phoenix
No, PQS is just a proxy to the Phoenix (thick) JDBC driver.
You are still limited to the capabilities of the Phoenix JDBC driver.
You might be able to do something with a custom UDF, but I'm not sure.
Sudhir Babu Pothineni wrote:
Sorry for not asking the question properly, my understanding
This is a non-issue...
Avatica's use of protobuf is completely shaded (relocated classes). You
can use whatever version of protobuf in your client application you'd like.
Mark Heppner wrote:
If Cheyenne is talking about the query server, I'm not sure where you're
getting that from, Ted. It
Tulasi,
Any property which you can provide in the `Properties` object when
instantiating the PhoenixDriver (outside of PQS), you can pass into PQS
via the same `Properties` object when instantiating the thin Driver.
The OpenConnectionRequest[1] is the RPC mechanism which passes along
this
*RegionServer logs*
How can the above problem be resolved?
Thanks.
On Mon, Jan 16, 2017 at 10:22 PM, Josh Elser <els...@apache.org
<mailto:els...@apache.org>> wrote:
Did you check the RegionServer logs I asked about in the last message?
Chetan Khatri
Phoenix's grammar is documented at
http://phoenix.apache.org/language/index.html
Cheyenne Forbes wrote:
Can I use the SQL WITH clause in Phoenix instead of "untidy" sub-queries?
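For what it's worth, a WITH query can usually be flattened into an inline derived table; a hedged sketch with made-up table and column names:

```sql
-- WITH form (not supported here):
--   WITH recent AS (SELECT id, ts FROM events WHERE ts > 1000)
--   SELECT id FROM recent WHERE id > 5;

-- Equivalent inline sub-query:
SELECT id
FROM (SELECT id, ts FROM events WHERE ts > 1000) AS recent
WHERE id > 5;
```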
You could create a new table with the same schema and then flip the
underlying table out.
* Rename the existing table to "foo"
* Create your table via Phoenix with correct schema and desired name
* Delete underlying HBase table that Phoenix created
* Rename "foo" to the desired name
I _think_
Did you check the RegionServer logs I asked about in the last message?
Chetan Khatri wrote:
Any updates for the above error guys ?
On Fri, Jan 13, 2017 at 9:35 PM, Josh Elser <els...@apache.org
<mailto:els...@apache.org>> wrote:
(-cc dev@phoenix)
phoenix-4.8.2-HBase-1.
(-cc dev@phoenix)
phoenix-4.8.2-HBase-1.2-server.jar in the top-level binary tarball of
Apache Phoenix 4.8.2 is the jar which is meant to be deployed to all
HBase's classpath.
I would check the RegionServer logs -- I'm guessing that it never
started correctly or failed. The error message is
for org.apache.phoenix:phoenix ?
we need to have the jdbc driver
thanks
On 06.01.2017 18:38, Josh Elser wrote:
There is no JAR for org.apache.phoenix:phoenix. There's a
source-release tarball [1]
Use org.apache.phoenix:phoenix-core [2]
[1]
http://repo1.maven.org/maven2/org/apache/phoenix/phoenix/4.4.0-HBase
There is no JAR for org.apache.phoenix:phoenix. There's a source-release
tarball [1]
Use org.apache.phoenix:phoenix-core [2]
[1]
http://repo1.maven.org/maven2/org/apache/phoenix/phoenix/4.4.0-HBase-1.1/
[2]
http://repo1.maven.org/maven2/org/apache/phoenix/phoenix-core/4.4.0-HBase-1.1/
Thanks.
Tulasi Paradarami wrote:
I created CALCITE-1565 for this issue.
On Thu, Jan 5, 2017 at 12:10 PM, Josh Elser <josh.el...@gmail.com
<mailto:josh.el...@gmail.com>> wrote:
Hrm, that's frustrating. No stack trace is a bug. I remember there
being one of these I fixed
avatica.statementcache.expiryduration — Statement cache expiration
duration. Any statements older than this value will be discarded.
Default is 5 minutes.
avatica.statementcache.expiryunit — Statement cache expiration unit.
Unit modifier applied to the value provided in
avatica.statem
You should really work backwards from your use cases. The amount of
hardware you need is dependent on your requirements and what else you're
going to be running on the hardware. You're not likely to get a good
answer here because the question is so open-ended.
Cheyenne Forbes wrote:
are
Maybe you could separate some of the columns into separate column
families so you have some physical partitioning on disk?
Whether you select one or many columns, you presently have to read
through each column on disk.
AFAIK, there shouldn't really be an upper limit here (in terms of what
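A hedged sketch of the column-family idea — in Phoenix DDL, a `cf.` prefix on a column name places it into that HBase column family, giving the physical on-disk split described above (all names below are invented):

```sql
CREATE TABLE metrics (
  id BIGINT NOT NULL PRIMARY KEY,
  a.small_col1 VARCHAR,  -- frequently-read columns in family "a"
  a.small_col2 VARCHAR,
  b.wide_blob  VARCHAR   -- rarely-read wide column isolated in family "b"
);
-- A scan touching only a.* columns can skip family "b"'s files on disk.
```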
Just to clarify: like nearly all other services on Linux, you do not want
to run the Phoenix Query Server as root. Running it as the "hbase" user
(or the user you are running hbase as) is the common way to do this.
Will Xu wrote:
OK, this means you probably don't have Phoenix query server
What's the rest of the stacktrace? You cut off the cause.
venkata subbarayudu wrote:
I faced a strange issue: a Phoenix HBase upsert query fails with an
ArrayIndexOutOfBounds exception.
Query looks like:
upsert into table (pk,col1, col2, col3) select a.pk
Yes, use the HBase-provided access control mechanisms.
lk_phoenix wrote:
hi, all:
I want to know how to add access control to the table I create via Phoenix.
Do I need to add the privilege through HBase?
2016-12-05
lk_phoenix
(-cc the reply-all)
I'm not sure what your reasons are for not accepting "add-ons" as a
valid implementation, but this release doesn't change Phoenix's use of
Tephra to enable cross-row and cross-table transactions which are not
natively provided by HBase.
If you want to see this feature
Or make sure that Zeppelin is adding hbase-site.xml to the classpath.
You can easily test this by making a copy of your phoenix-client.jar and
manually adding in a copy of hbase-site.xml to the jar.
James Taylor wrote:
https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html
On
com <mailto:pradeep.b...@gmail.com>> wrote:
thanks josh looking into it.
On Wed, Nov 23, 2016 at 1:48 PM, Josh Elser <josh.el...@gmail.com
<mailto:josh.el...@gmail.com>> wrote:
Hi Pradeep,
No, this is one you will likely have to work around on your
Hi Pradeep,
No, this is one you will likely have to work around on your own by
building a custom Phoenix client jar that does not include the
javax-servlet classes. They are getting transitively pulled into Phoenix
via Hadoop (IIRC). If your web application already has the classes
present,
Hi Noam,
Can you quantify the query you run that shows this error? Also, when you
change the criteria to retrieve less data, do you mean that you're
fetching fewer rows?
Bulvik, Noam wrote:
I am using Phoenix 4.5.2 and in my table the data is in an Array.
When I issue a query, sometimes the
IIRC, the SALT_BUCKETS configuration from the data table is implicitly
applied to any index tables you create from that data table.
Pradheep Shanmugam wrote:
Hi,
I have a hbase table created using phoenix which is salted.
Since the queries on the table required a secondary index, I created
index
Hi Dequn,
There should be more to this stacktrace than you provided as the actual
cause is not included. Can you please include the entire stacktrace? If
you are not seeing this client-side, please check the Phoenix Query
Server log file to see if there is more there.
Dequn Zhang wrote:
(cc: -dev +user, bcc: +dev)
Hi Krishna,
Might you be able to share the stacktrace that accompanied that Exception?
Shiva Krishna wrote:
Hi All,
Can any one give me a small example of Phoenix upserts using Threads in Java.
I wrote a sample it is working fine in local environment but when
Is the directory containing hbase-site.xml where you have made the
modification included on your overridden CLASSPATH? How are you running
this query -- is it on the classpath for that program?
ashish tapdiya wrote:
Query:
SELECT /*+ NO_STAR_JOIN*/ IP, RANK, TOTAL FROM (SELECT SOURCEIPADDR as
If they're generic to Apache Avatica (an Apache Calcite sub-project) and
not tied to Apache Phoenix, we'd also love to have you recognized, if
not have the code directly committed to Avatica :). Avatica is the
underlying tech behind the Phoenix Query Server.
Minor clarification with my Phoenix
Not 100% sure, but yes, I believe this is correct. One of the servers
would get 0-99, the other 100-199. The server to use up that batch of
100 values would then request 200-299, etc. Setting the cache to be 0
would likely impact the performance of Phoenix.
Using some external system to
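The batch-of-100 behavior described above comes from the sequence's CACHE setting; a sketch with invented names:

```sql
-- Each connection reserves 100 values per server round trip.
CREATE SEQUENCE my_schema.my_seq START WITH 0 INCREMENT BY 1 CACHE 100;

SELECT NEXT VALUE FOR my_schema.my_seq;
-- CACHE 1 would force a round trip for every NEXT VALUE FOR call,
-- trading throughput for denser value allocation.
```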
: Josh Elser [mailto:josh.el...@gmail.com]
Sent: 13 October 2016 14:58
To: user@phoenix.apache.org
Subject: Re: PrepareAndExecute statement return only 100 rows
Hi Puneeth,
What version of Phoenix are you using?
Indeed per [1], maxRowCount should control the number of rows returned
Hi Puneeth,
What version of Phoenix are you using?
Indeed per [1], maxRowCount should control the number of rows returned
in the ExecuteResponse. However, given that you see 100 rows (which is
the default), it sounds like the value is not being respected. The most
recent docs may not align
Hi Vikram,
See https://issues.apache.org/jira/browse/PHOENIX-1754
This is presently an open issue. You will not be able to use the
convenience "Kerberos login via URL" when on Windows. You will need to
manually perform your Kerberos login (via JAAS or Hadoop's
UserGroupInformation class) and
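A hedged sketch of the manual UserGroupInformation route mentioned above; the principal, keytab path, and ZK host are invented, and this needs the Hadoop client libraries on the classpath plus a reachable KDC, so treat it as an outline rather than tested code:

```java
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class ManualKerberosLogin {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Log in explicitly instead of via the principal/keytab URL options,
        // which is the workaround for the Windows issue in PHOENIX-1754.
        UserGroupInformation.loginUserFromKeytab(
                "user@EXAMPLE.COM", "C:/keytabs/user.keytab");

        // Plain URL: no principal/keytab suffix, the login above is reused.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host")) {
            System.out.println(conn.getMetaData().getDatabaseProductName());
        }
    }
}
```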
Thanks bud!
Prabhjyot Singh wrote:
Yes, I did, https://issues.apache.org/jira/browse/PHOENIX-3315. and
somehow didn't tag you.
On 27 September 2016 at 07:57, Josh Elser <josh.el...@gmail.com
<mailto:josh.el...@gmail.com>> wrote:
Did you ever create this issue, Prabhjyot? I
Did you ever create this issue, Prabhjyot? I don't recall seeing it come
across my inbox but I might have missed it...
Josh Elser wrote:
Thanks Prabhjyot. Feel free to assign it directly to me. I can help
triage/fix it.
Prabhjyot Singh wrote:
Thank you Josh, sure I'll do that.
On 2016-09-22
Thanks Prabhjyot. Feel free to assign it directly to me. I can help
triage/fix it.
Prabhjyot Singh wrote:
Thank you Josh, sure I'll do that.
On 2016-09-22 08:23 ( 0530), Josh Elser <j...@gmail.com
<mailto:j...@gmail.com>> wrote:
> Sounds like the thin driver should b
Sounds like the thin driver should be making a copy of the properties if
it's going to modify them. Want to open a JIRA issue?
Prabhjyot Singh wrote:
Hi,
I'm using DriverManager.getConnection(url, properties) using following
properties
url ->
com>
Mobile: 876-881-7889
skype: cheyenne.forbes1
On Thu, Sep 15, 2016 at 4:42 PM, Josh Elser <josh.el...@gmail.com
<mailto:josh.el...@gmail.com>> wrote:
The error you see would also be rather helpful.
James Taylor wrote:
Hi Cheyenne,
Are you referrin
The error you see would also be rather helpful.
James Taylor wrote:
Hi Cheyenne,
Are you referring to joins through the query server?
Thanks,
James
On Thu, Sep 15, 2016 at 1:37 PM, Cheyenne Forbes
> wrote:
I was
Hi Xindian,
A couple of initial things that come to mind...
* Make sure that you're using HDP "bits" (jars) everywhere to remove any
possibility that there's an issue between what Hortonworks ships and
what's in Apache.
* Make sure that your Java application/Spark job has the correct
to run command line applications directly from the worker
nodes and it works, but inside the Spark Executor it doesn't...
2016-09-15 13:07 GMT-04:00 Josh Elser <josh.el...@gmail.com
<mailto:josh.el...@gmail.com>>:
How do you expect JDBC on Spark Kerberos authentication to work? Are
How do you expect JDBC on Spark Kerberos authentication to work? Are you
using the principal+keytab options in the Phoenix JDBC URL or is Spark
itself obtaining a ticket for you (via some "magic")?
Jean-Marc Spaggiari wrote:
Hi,
I tried to build a small app all under Kerberos.
JDBC to
phoenix-4.8.0-HBase-1.1-client.jar is the jar which should be used. The
phoenix-4.8.0-HBase-1.1-hive.jar is to be used with the Hive integration.
dalin.qin wrote:
[root@namenode phoenix]# findjar . org.apache.phoenix.jdbc.PhoenixDriver
Starting search for JAR files from directory .
Looking for
Hi,
The trailing semi-colon on the URL seems odd, but I do not think it
would cause issues in parsing when inspecting the logic in
PhoenixEmbeddedDriver#acceptsURL(String).
Does the Class.forName(..) call succeed? You have Phoenix properly on
the classpath for your mappers?
Dong-iL, Kim
Puneeth -- One extra thing to add to Francis' great explanation; the
response message told you what you did wrong:
"missingStatement":true
This is telling you that the server does not have a statement with the
ID 12345 as you provided.
F21 wrote:
Hey,
You mentioned that you sent a
Yup, Francis got it right. There are POJOs in Avatica which Jackson
(un)marshals the JSON in-to/out-of and logic which constructs the POJOs
from Protobuf and vice versa.
In some hot-code paths, there are implementations in the server which
can use protobuf objects directly (to avoid extra
I was going to say that
https://issues.apache.org/jira/browse/PHOENIX-3223 might be related,
but it looks like the HADOOP_CONF_DIR is already put on the classpath.
Glad to see you got this working :)
On Thu, Sep 8, 2016 at 5:56 AM, F21 wrote:
> Glad you got it working! :)
Hi Youngwoo,
The inclusion of hadoop-common is probably the source of most of the
bloat. We really only needed the UserGroupInformation code, but Hadoop
doesn't provide a proper artifact with just that dependency for us to
use downstream.
What dependency issues are you running into? There
Did you read James' response in PHOENIX-2271? [1]
Restating for you: as a work-around, you could try to use the recent
transaction support which was added via Apache Tephra to prevent
multiple clients from modifying a cell. This would be much less
efficient than the "native" checkAndPut API
You can check the dev list for the VOTE thread which contains a link to
the release candidate but it is not an official Apache Phoenix release yet.
Vasanth Bhat wrote:
Thanks a lot Ankit.
where do I download this from? I am looking at
http://mirror.fibergrid.in/apache/phoenix/ but don't seem
It sounds like whatever query you were running was just causing the
error to happen again locally. Like you said, if you launched a new
instance of sqlline.py, you would have a new JVM and thus a new
ThreadPool (and backing queue).
vishnu rao wrote:
hi
i was using the "sqlline.py" client ..
Looking into this on the HDP side. Please feel free to reach out via HDP
channels instead of Apache channels.
Thanks for letting us know as well.
Josh Mahonin wrote:
Hi Robert,
I recommend following up with HDP on this issue.
The underlying problem is that the
Can you share the error that your RegionServers report in the log before
they crash? It's hard to give an explanation without knowing the error
you're facing.
Thanks.
kevin wrote:
hi, all
I have a test with HBase running on top of Alluxio. In my HBase there is
a table created by Phoenix an
Hi,
I was just made aware of a neat little .NET driver for Avatica
(specifically, the authors were focused on Phoenix's use of Avatica in
the Phoenix Query Server).
https://www.nuget.org/packages/Microsoft.Phoenix.Client/1.0.0-preview
I'll have to try it out at some point, but would love to
Hi Tongzhou,
Maybe you can try `ALTER INDEX index ON table DISABLE`. And then the
same command with USABLE after you update the index. Are you attempting
to do this incrementally? Like, a bulk load of data then a bulk load of
index data, repeat?
Regarding the TTL, I assume so, but I'm not
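Sketched out, the incremental pattern suggested above would look something like this (index and table names are invented):

```sql
ALTER INDEX my_idx ON my_table DISABLE;  -- pause index maintenance
-- ... bulk load the data table, then bulk load the index data ...
ALTER INDEX my_idx ON my_table USABLE;   -- re-enable the index for queries
```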
Negative, sorry :\
I'm not really sure how this all is supposed to work in Groovy. I'm a
bit out of my element.
Brian Jeltema wrote:
Any luck with this?
On Jun 9, 2016, at 10:07 PM, Josh Elser <josh.el...@gmail.com
<mailto:josh.el...@gmail.com>> wrote:
FWIW, I've als
FWIW, I've also reproduced this with Groovy 2.4.3, Oracle Java 1.7.0_79
and Apache Phoenix 4.8.0-SNAPSHOT locally.
Will dig some more.
Brian Jeltema wrote:
Groovy 2.4.3
JDK 1.8
On Jun 8, 2016, at 11:26 AM, Josh Elser <josh.el...@gmail.com
<mailto:josh.el...@gmail.com>> wr
Koert,
Apache Phoenix goes through a lot of work to provide multiple versions
of Phoenix for various versions of Apache HBase (0.98, 1.1, and 1.2
presently). The builds for each of these branches are tested against
those specific versions of HBase, so I doubt that there are issues
between
/hdp/current/phoenix-client/phoenix-client.jar
and run the following groovy script, assuming zookeeper is running on
zknode:
import groovy.sql.Sql
Sql.newInstance("jdbc:phoenix:zknode:/hbase-unsecure",
'foo',
'bar',
"org.apache.phoenix.jdbc.PhoenixDriver")
On Jun 6, 2016, at
Looks like you're knocking up against Hadoop (in o.a.h.c.Configuration).
Have you checked search results without Phoenix specifically?
I haven't run into anything like this before, but I'm also not a big
Groovy aficionado. If you can share your environment (or some sample
project that can