Your assumptions are not unreasonable :) Phoenix 5.0.x should certainly
work with HBase 2.0.x. Glad to see that it's been corrected already
(embarrassing that I don't even remember reviewing this).
Let me start a thread on dev@phoenix about a 5.0.1 or a 5.1.0. We need
to have a Phoenix 5.x
What version of Phoenix are you using? Is this the full stack trace you
see that touches Phoenix (or HBase) classes?
On 9/19/18 12:42 PM, Batyrshin Alexander wrote:
Is there any reason for this exception? Which server exactly is shutting
down if we use a quorum of ZooKeepers?
Engineer
IR.ee
Your question is suitable for the user@phoenix mailing list. Please do
not cross post questions to multiple lists.
On 9/18/18 10:57 AM, Vishwajeet Rana wrote:
Hi,
I have two salted global secondary indexes (A and B) on a table with row
key (primary key) as the covered column. For both of
/apache/twill/twill-discovery-core/0.13.0/twill-discovery-core-0.13.0.jar
Not sure which one I could be missing??
On Fri, Sep 14, 2018 at 7:34 PM Josh Elser <mailto:els...@apache.org>> wrote:
Uh, you're definitely not using the right JARs :)
You'll want the phoenix-client.jar for th
Yeah, I think that's his point :)
For a fine-grained facet, the hotspotting is desirable to co-locate the
data for query. To try to make an example to drive this point home:
Consider a primary key constraint(col1, col2, col3, col4);
If I defined the SALT_HASH based on "col1" alone, you'd get
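For concreteness, a minimal JDBC sketch of such a salted DDL (table, columns, and quorum below are made up). Note that Phoenix derives the salt byte from the entire rowkey, so a salt computed from col1 alone isn't something the SALT_BUCKETS option gives you:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SaltedTableDdl {
        public static void main(String[] args) throws Exception {
            // "localhost:2181" is a placeholder ZooKeeper quorum.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 Statement stmt = conn.createStatement()) {
                // The salt byte is computed from the full rowkey (col1..col4),
                // so rows sharing col1 may still land in different buckets.
                stmt.execute("CREATE TABLE IF NOT EXISTS DEMO_FACETS ("
                    + " COL1 VARCHAR NOT NULL, COL2 VARCHAR NOT NULL,"
                    + " COL3 VARCHAR NOT NULL, COL4 VARCHAR NOT NULL,"
                    + " VAL BIGINT,"
                    + " CONSTRAINT PK PRIMARY KEY (COL1, COL2, COL3, COL4))"
                    + " SALT_BUCKETS = 8");
            }
        }
    }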
se). We'll be looking into this but if you
have any further advice, appreciated.
Saif
On Wed, Sep 12, 2018 at 5:50 PM Josh Elser
<mailto:els...@apache.org>> wrote:
Reminder: Using Phoenix internals forces you to understand exactly how
the version of Phoenix that you're using serializes data. Is there a
reason you're not using SQL to interact with Phoenix?
Sounds to me that Phoenix is expecting more data at the head of your
rowkey. Maybe a salt bucket
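As a sketch of the SQL alternative (table, column, and quorum names are hypothetical), the driver handles all rowkey serialization, salt byte included:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class SqlInsteadOfInternals {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT COL1, VAL FROM DEMO_FACETS WHERE COL1 = ?")) {
                ps.setString(1, "some-key");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // The driver decodes the rowkey for us; no need to
                        // understand Phoenix's serialization internals.
                        System.out.println(rs.getString(1) + " -> " + rs.getLong(2));
                    }
                }
            }
        }
    }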
The versions you provided in the description make it sound like you're
actually using HDP's distribution of Apache Phoenix, not an official
Apache Phoenix release.
Please test against an Apache Phoenix release or contact Hortonworks for
support. It would not be unheard of that this issue has
Did you update the HBase jars on all RegionServers?
Make sure that you have all of the Regions assigned (no RITs). There
could be a pretty simple explanation as to why the index can't be
written to.
On 9/9/18 3:46 PM, Batyrshin Alexander wrote:
Correct me if I'm wrong.
But looks like if you
Lots of details missing here about how you're trying to submit these
Spark jobs, but let me try to explain how things work now:
Phoenix provides spark(1) and spark2 jars. These JARs provide the
implementation for Spark *on top* of what the phoenix-client.jar
provides. You want to include both the
In released versions of Apache Phoenix, there are only two
authentication models for PQS: None and Kerberos via SPNEGO.
Our Karan and Alex have been doing some good work (in what is presently
slated to be 4.15.0 and 5.1.0) around allowing pluggable authentication
and authorization as a part
Note that the functionality that Thomas describes is how we intend
Phoenix to work, and may not be how the 4.9 release of Phoenix works
(due to changes that have been made).
On 8/23/18 12:42 PM, Thomas D'Silva wrote:
On a new cluster, the first time a client connects is when the SYSTEM
it seems has not changed since
it was first created as part of PHOENIX-3572 - and is still the same
in master (I checked a bit earlier).
Sure - will have a go at creating a JIRA for this.
Regards,
Hi Jack,
Given your assessment, it sounds like you've stumbled onto a race
condition! Thanks for bringing it to our attention.
A few questions:
* Have you checked if the same code exists in the latest
branches/releases (4.x-HBase-1.{2,3,4} or master)?
* Want to create a Jira issue to track
(-cc user@hbase, +bcc user@hbase)
How about the rest of the stacktrace? You didn't share the cause.
On 8/20/18 1:35 PM, Mich Talebzadeh wrote:
This was working fine before my HBase upgrade to 1.2.6
I have HBase version 1.2.6 and Phoenix
version apache-phoenix-4.8.1-HBase-1.2-bin
This
SQL doesn't work like this.
You can use the DatabaseMetaData class, obtained off of the JDBC
Connection class, to inspect the available columns for a query. However,
I'd caution you against constructing a massive disjunction, e.g.
select * from demotable where colA like ".." or colB like
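A minimal sketch of the DatabaseMetaData approach (the table name and quorum are placeholders):

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class ListColumns {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
                DatabaseMetaData md = conn.getMetaData();
                // null catalog/schema pattern, exact table name, all columns.
                try (ResultSet rs = md.getColumns(null, null, "DEMOTABLE", null)) {
                    while (rs.next()) {
                        System.out.println(rs.getString("COLUMN_NAME") + " : "
                            + rs.getString("TYPE_NAME"));
                    }
                }
            }
        }
    }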
You don't have to create a new Connection every time, but it is not
directly harmful to do so. This recommendation only goes one way (just
because you can create new connections each time doesn't imply that you
have to, nor that you necessarily want to).
I wouldn't be worried about any sort of
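A sketch of the per-request pattern being described (table name, columns, and URL are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ConnectionPerRequest {
        static void writeOne(String url, long id, String val) throws Exception {
            // A fresh Connection per request: cheap after the first one
            // created in this JVM.
            try (Connection conn = DriverManager.getConnection(url);
                 PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO T (ID, VAL) VALUES (?, ?)")) {
                ps.setLong(1, id);
                ps.setString(2, val);
                ps.executeUpdate();
                conn.commit();
            } // close() only releases lightweight, per-connection state
        }
    }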
"Phoenix-server" refers to the phoenix-$VERSION-server.jar that is
either included in the binary tarball or is generated by the official
source-release.
"Deploying" it means copying the jar to $HBASE_HOME/lib.
On 8/6/18 9:56 PM, 倪项菲 wrote:
Hi Zhang Yun,
the link you mentioned tells us
Besides the distribution and parallelism of Spark as a distributed
execution framework, I can't really see how phoenix-spark would be
faster than the JDBC driver :). Phoenix-spark and the JDBC driver are
using the same code under the hood.
Phoenix-spark is using the PhoenixOutputFormat (and
'hbase:meta' at
region=hbase:meta,,1.1588230740,
hostname=Regionserver,60020,1533093258500, seqNum=0
...
Thank you,
BR,
Anung
I don't recall any big issues on 4.13.2, but I, admittedly, haven't
followed it closely.
You weren't doing anything weird on your own -- you wrote data via the
JDBC driver? Any index tables?
Aside from weirdness in the client with statistics, there isn't much
I've seen that ever causes a
Did you enable DEBUG logging on the client or server side? Certainly if
you got a connection timeout, you at least got a stack trace that you
could share.
You need to provide more information if you want help debugging your setup.
On 7/31/18 6:29 AM, anung wrote:
Hi All,
I have CDH 5.11
Use the absolute path to your keytab, not the tilde character to refer
to your current user's home directory.
On 7/23/18 1:11 AM, Sumanta Gh wrote:
Hi,
I am trying to connect a Kerberos enabled Hbase 2.0 cluster from Phoenix
5.0 client (sqlline).
This is my connection URL -
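For illustration, the general shape of a Kerberos-enabled Phoenix JDBC URL with an absolute keytab path; the quorum, ZK root, principal, realm, and paths below are all placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class KerberosConnect {
        public static void main(String[] args) throws Exception {
            // Shape: jdbc:phoenix:<quorum>:<port>:<zk root>:<principal>:<keytab>
            String url = "jdbc:phoenix:zk1,zk2,zk3:2181:/hbase"
                + ":myuser@EXAMPLE.COM"
                + ":/etc/security/keytabs/myuser.keytab"; // absolute path, no '~'
            try (Connection conn = DriverManager.getConnection(url)) {
                System.out.println("connected");
            }
        }
    }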
be slower to update secondary indexes than a use case
would be. Both have to do the writes to a second table to keep it in sync.
On Fri, Jul 13, 2018 at 8:39 AM Josh Elser <mailto:els...@apache.org>> wrote:
Also, they're relying on Phoenix to do secondary index updates for them.
Obviously, you can do this faster than Phoenix can if you know the exact
use-case.
On 7/12/18 6:31 PM, Pedro Boado wrote:
A tip for performance is reusing the same PreparedStatement: just
clearParameters(), set values
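A small sketch of that reuse pattern (table, values, and quorum are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ReuseStatement {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO T (ID, VAL) VALUES (?, ?)")) {
                for (long i = 0; i < 10_000; i++) {
                    ps.setLong(1, i);
                    ps.setString(2, "row-" + i);
                    ps.executeUpdate();
                    ps.clearParameters(); // same statement, new bind values
                }
                conn.commit();
            }
        }
    }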
of batching which you are completely
missing out on. There are multiple manifestations of this. Row-locks are
just one (network overhead, serialization, and RPC scheduling/execution
are three others I can easily see)
On 7/11/18 4:10 PM, alchemist wrote:
Josh Elser-2 wrote
Josh thanks so much for all
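One way to pick that batching back up with the plain JDBC driver is to buffer upserts and commit every N rows; a sketch with a made-up table and an arbitrary batch size:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchedCommits {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO T (ID, VAL) VALUES (?, ?)")) {
                conn.setAutoCommit(false); // buffer mutations client-side
                for (long i = 0; i < 100_000; i++) {
                    ps.setLong(1, i);
                    ps.setString(2, "row-" + i);
                    ps.executeUpdate();
                    if (i % 1000 == 999) {
                        conn.commit(); // flush ~1000 buffered mutations at once
                    }
                }
                conn.commit(); // flush the remainder
            }
        }
    }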
Phoenix does not recommend connection pooling because Phoenix
Connections are not expensive to create, unlike most DB connections.
The first connection you make from a JVM is expensive. Every subsequent
one is cheap.
On 7/11/18 2:55 PM, alchemist wrote:
Since Phoenix does not recommend
Your real-world situation is not a single-threaded application, is it?
You will have multiple threads which are all updating Phoenix concurrently.
Given the semantics that your application needs from the requirements
you stated, I'm not sure what else you can do differently. You can get
Some thoughts:
* Please _remove_ commented lines before sharing configuration next
time. We don't need to see all of the things you don't have set :)
* 100 salt buckets is really excessive for a 4 node cluster. Salt
buckets are not synonymous with pre-splitting HBase tables. This many
salt
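If the goal is only pre-splitting, the Phoenix DDL can declare split points directly, without salting; a sketch with made-up table and split points:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PreSplitTable {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 Statement stmt = conn.createStatement()) {
                // Four regions at creation time, no salt byte in the rowkey.
                stmt.execute("CREATE TABLE IF NOT EXISTS EVENTS ("
                    + " K VARCHAR PRIMARY KEY, V BIGINT)"
                    + " SPLIT ON ('g', 'n', 't')");
            }
        }
    }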
The explain plan for your tables isn't a substitute for the DDLs. Please
provide those.
How about sharing your complete hbase-site.xml and hbase-env.sh files,
rather than just snippets like you have. A full picture is often needed.
Given that HBase cannot directly run on S3, please also
Moving this over to the dev list since this is a thing for developers to
make the call on. Would ask users who have interest to comment over
there as well :)
I think having a "one-button" Phoenix environment is a big win,
especially for folks who want to do one-off testing with a specific
Please reach out to Hortonworks for more information about supported
versions of Phoenix with HDP.
On 6/15/18 6:51 AM, rahuledavalath1 wrote:
Hi All
We are using Hortonworks' latest HDP stack 2.6.5. There the HBase version
is 1.1.2.
We downloaded apache-phoenix-4.14.0-HBase-1.1
You shouldn't be putting the phoenix-client.jar on the HBase server
classpath.
There is the phoenix-server.jar, which is specifically built
to be included in HBase (to avoid issues such as these).
Please remove all phoenix-client jars and provide the
phoenix-5.0.0-server jar
That sounds like the implementation of a HashJoin. You would want to
make sure your smaller relation is serialized for this HashJoin, not the
larger one.
Phoenix also supports a sort-merge join which may perform better
when you read a large percentage of data for both relations.
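A sketch of requesting the sort-merge join via a hint (table and column names are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SortMergeJoin {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 Statement stmt = conn.createStatement();
                 // The hint asks the optimizer to sort-merge instead of
                 // building an in-memory hash table for one relation.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT /*+ USE_SORT_MERGE_JOIN */ a.ID, b.VAL"
                     + " FROM BIG_A a JOIN BIG_B b ON a.ID = b.ID")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " " + rs.getString(2));
                }
            }
        }
    }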
ent" in this case is
queryserver but not ODBC driver. And now I need to check why queryserver
doesn't apply this property.
-----Original Message-----
From: Josh Elser [mailto:els...@apache.org]
Sent: Wednesday, May 23, 2018 6:52 PM
To: user@phoenix.apache.org
Subject: Re: Phoenix ODBC driver l
Try enabling DEBUG logging for HBase and take a look at the RegionServer
log identified by the hostname in the log message.
Most of the time when you see this error, it's a result of HBase
rejecting the incoming request for a Kerberos authentication issue.
On 5/23/18 12:10 PM, Nicolas Paris
I'd be surprised to hear that the ODBC driver would need to know
anything about namespace-mapping.
Do you have an error? Steps to reproduce an issue which you see?
The reason I am surprised is that namespace mapping is an implementation
detail of the JDBC driver which lives inside of PQS --
Yeah, as Francis says, this should already be exposed via the expected
JDBC APIs.
Kevin -- can you share more details about what version(s) you're
running? A sample program?
If you're running a new enough version, you can set the following log
level via Log4j
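For example, with Log4j 1.x on the classpath this can be done programmatically as well as via a log4j.properties entry; a minimal sketch:

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class PhoenixDebugLogging {
        public static void main(String[] args) {
            // Raise the whole org.apache.phoenix namespace to DEBUG at
            // runtime; an equivalent log4j.properties line works too.
            Logger.getLogger("org.apache.phoenix").setLevel(Level.DEBUG);
        }
    }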
Thanks for that, Lew. I'm snowed under this week, but will try to dig
into this some more given the information you provided.
On 4/20/18 4:26 PM, Lew Jackman wrote:
We have a bit more of a stack trace for our bind parameter exception,
not sure if this is very revealing but we have:
As a general statement, the protobuf serialization is much better
maintained and comes with a degree of backwards compatibility (whereas
the JSON serialization guarantees none).
Thanks for sharing the solution.
On 4/20/18 9:53 AM, Lu Wei wrote:
I did some digging, and the reason is that I
This question is better asked on the Phoenix users list.
The phoenix-client.jar is the one you need and is distinct from the
phoenix-core jar. Logging frameworks are likely not easily
relocated/shaded to avoid issues which is why you're running into this.
Can you provide the error you're
We've received some requests to extend the CFP a few more days. The new
closing date will be this Friday, 2018/04/20, end of day.
Please keep them coming in!
On 4/15/18 9:25 PM, Josh Elser wrote:
The PhoenixCon 2018 call for proposals is scheduled to close Monday,
April 16th. If you have
<100M rows
per stream and have a few GB of disk space per processing node available
it should be doable.
On Mon, 16 Apr 2018, 18:49 Rabin Banerjee, <dev.rabin.baner...@gmail.com
<mailto:dev.rabin.baner...@gmail.com>> wrote:
Thanks Josh !
On Mon, Apr 16, 2018 at 11:16
and generate a
combined one as realtime as possible.
Hi Wei,
Have you searched JIRA for issues that relate to the Phoenix-Hive
integration? There have been a few in the recent past around invalid
queries being generated, especially around column names.
On 4/14/18 7:39 PM, Lu Wei wrote:
## Version:
phoenix: 4.13.2-cdh5.11.2
hive:
Short-answer: no.
You're going to be much better off de-normalizing your five tables into
one table and eliminate the need for this JOIN.
What made you decide to want to use Phoenix in the first place?
On 4/16/18 6:04 AM, Rabin Banerjee wrote:
HI all,
I am new to phoenix, I wanted to know
The PhoenixCon 2018 call for proposals is scheduled to close Monday,
April 16th. If you have an idea for a talk, make sure you get it
submitted ASAP!
Submit your talks at https://easychair.org/conferences/?conf=pc18
If you need more information, please see
Any chance you can share the complete stacktrace you see as well as the
version of Phoenix that you're using, Lew?
This code bridges Phoenix and Avatica -- having line numbers and a
version of code to compare against will help get to the bottom of the issue.
On 4/12/18 4:48 PM, Lew Jackman
Hi all,
There's just one week left to submit abstracts to PhoenixCon 2018, held
in San Jose, CA on June 18th.
We need all of you -- developers, users, admins -- to submit all talks
to make this event a great success. No talk is too small.
Please reach out if there are any questions!
You
(field1, field2) INCLUDE (field3, field4)
On 2018/04/09 17:04:03, Josh Elser <els...@apache.org> wrote:
Have you looked at DEBUG logging client and server(HBase) side?
The "Call exception" log messages imply that the client is repeatedly
trying to issue an RPC to a RegionServer and failing. This should be
where you focus your attention. It may be something trivial to fix
related to
This looks reminiscent of
https://issues.apache.org/jira/browse/PHOENIX-4588 but I'm not certain
if they're the same issue.
On 4/4/18 6:38 PM, spark receiver wrote:
Hi everyone,
I'm using Phoenix 4.11, facing a strange issue when using upsert.
I'm joining 2 tables to get the "flag" column
Hi Lew,
I believe the snippet of the stack trace you have provided isn't telling
us anything other than that the real PhoenixDriver code threw an error
(Avatica/PQS are just doing pass-through to the real PhoenixDriver inside).
Can you please look closer at the bottom of the stack trace to
Hi Jins,
Check out
https://community.hortonworks.com/articles/9377/deploying-the-phoenix-query-server-in-production-e.html
which should cover a bit of this (specifically, via HAProxy).
I wrote this prior to the ZK-based discovery and client-driven load
balancer which have shown up recently.
with Phoenix
> 4.7. Am I wrong about that? And do you have an idea when there will be a new HDP
with Phoenix > 4.7?
Best regards
Martin
-----Original Message-----
From: Josh Elser [mailto:els...@apache.org]
Sent: Monday, March 26, 2018 11:40 PM
To: user@phoenix.apache.org
Subject: Re: Call que
All,
I'm pleased to announce PhoenixCon 2018 which is to be held in San Jose,
CA on June 18th.
A call for proposals is available now[1], and we encourage all Phoenix
users and developers to contribute a talk and plan to attend the event
(however, event registration is not yet available).
Hey Anil,
You sure there isn't another exception earlier on in the output of your
application? The exception you have here looks more like the JVM was
already shutting down and Phoenix had closed the connection (the
exceptions were about queued tasks being cleared out after the decision
to
Hard to say at a glance, but this issue is happening down in the
MapReduce framework, not in Phoenix itself.
It looks similar to problems I've seen many years ago around
mapreduce.task.io.sort.mb. You can try reducing that value. It also may
be related to a bug in your Hadoop version.
Good
You'll likely have to shade the use of Protobuf3 in your application.
This is not something "optional". HBase2 (and the sister release
Phoenix5) will have done this shading internally which will make this
easier for you downstream. However, these releases have not yet been made.
On 3/21/18
I would assume that they would be cleaned up, but I don't have specifics
to point you to off the top of my head.
On 3/21/18 4:32 AM, Flavio Pompermaier wrote:
Is there a way to clean them up? Are they left there when client queries
are interrupted?
On Wed, Mar 21, 2018 at 3:00 AM, Josh Elser
Are they ResultSpooler files?
If so, you want to set `phoenix.spool.directory` which defaults to the
java.io.tmpdir system property.
On 3/20/18 12:47 PM, Flavio Pompermaier wrote:
Hi to all,
I've just discovered that Phoenix continues to create .tmp files in the /tmp
directory, causing the disk
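A sketch of overriding that property on the client connection (the directory path is a placeholder):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class SpoolDirOverride {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Redirect ResultSpooler*.tmp files away from java.io.tmpdir (/tmp).
            props.setProperty("phoenix.spool.directory", "/var/tmp/phoenix-spool");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:phoenix:localhost:2181", props)) {
                // queries on this connection spool to the directory above
            }
        }
    }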
Jay,
I'm not sure what led your infrastructure team to come to this
conclusion. My only guess is that they have observed some now-stale
documentation. PQS is supported and has plenty of information available,
both via the Apache Phoenix website and the Apache Avatica website.
Is there any way to force Spark to distribute the workload evenly? I have
tried to pre-split my Phoenix table (now it has about 1200 regions), but it
didn't help.
-----Original Message-----
From: Josh Elser [mailto:els...@apache.org]
Sent: Friday, March 9, 2018 2:17 AM
To: user@phoenix.apache.org
Subject: Re:
on servers, both tables salted with SALT_BUCKETS=42.
The Spark job is running via YARN.
-----Original Message-----
From: Josh Elser [mailto:els...@apache.org]
Sent: Monday, March 5, 2018 9:14 PM
To: user@phoenix.apache.org
Subject: Re: Phoenix as a source for Spark processing
Hi Stepan,
Can you bette
I would guess that Hive would always be capable of out-matching what
HBase/Phoenix can do for this type of workload (bulk-transformation).
That said, I'm not ready to tell you that you can't get the
Phoenix-Spark integration better performing. See the other thread where
you provide more
Yes. If you're starting from a blank slate, please use the Phoenix SQL
statements to create tables, not the HBase shell.
On 3/5/18 4:33 AM, Dominic Egger wrote:
Hi Phoenix Users
So I have a somewhat baffling error. I have created a view on the
following HBase table:
create 'xx:yy', {NAME =>
Hi Stepan,
Can you better ballpark the Phoenix-Spark performance you've seen (e.g.
how much hardware do you have, how many spark executors did you use, how
many region servers)? Also, what versions of software are you using?
I don't think there are any firm guidelines on how you can solve
The issue is commonly that sqlline.py is adding the HBase configuration
to the classpath on your behalf. This obviously would not happen in
Squirrel (which Phoenix doesn't control or know about).
* If Squirrel has the ability to add additional classpath entries, you
can try adding
ure out some automation here to make that
happen. IIRC you have something already with Docker. I'm less worried
about this part :)
> Lukas
>
>
>
> On Thu, Mar 1, 2018 at 8:38 PM, Josh Elser <josh.el...@gmail.com> wrote:
>>
>> Obviously, I'm in favor of th
not
a very complete answer (as it doesn't mention hinting or local indexes),
so it'd be good if it was updated.
Thanks,
James
[1]
https://phoenix.apache.org/faq.html#Why_isnt_my_secondary_index_being_used
On Mon, Feb 26, 2018 at 7:43 AM, Josh Elser <els...@apache.org
<mailto:els...@apac
IIRC, Phoenix will only choose to use an index when all columns are
covered (either the index is on the columns or the columns are
explicitly configured to be covered in the DDL).
On 2/26/18 6:45 AM, Alexey Karpov wrote:
Hi.
Let’s say I have a table CREATE TABLE test (id integer NOT NULL
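For illustration, a covered index can be declared with INCLUDE so that queries selecting the extra column are served entirely from the index; table and column names below are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CoveredIndex {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 Statement stmt = conn.createStatement()) {
                // INCLUDE copies OTHER_COL into the index rows, so a query
                // selecting it is fully covered and the index can be chosen.
                stmt.execute("CREATE INDEX TEST_NAME_IDX ON TEST (NAME)"
                    + " INCLUDE (OTHER_COL)");
            }
        }
    }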
Nope, no tools down in Phoenix.
You can just use the normal `alter` command in the HBase shell to clean
it up.
On 2/22/18 10:04 PM, Reid Chan wrote:
Hi team,
I created a table through the HBase API, and then created a view for it in
Phoenix.
And for some reason, I dropped the view, but
The Apache Phoenix PMC is happy to announce the release of Phoenix
5.0.0-alpha for Apache Hadoop 3 and Apache HBase 2.0. The release is
available for download at here[1].
Apache Phoenix enables OLTP and operational analytics in Hadoop for low
latency applications by combining the power of
Hey Anil,
Check out the MultiHfileOutputFormat class.
You can see how AbstractBulkLoadTool invokes it inside the `submitJob`
method.
On 12/28/17 5:33 AM, Anil wrote:
HI Team,
I was looking at PhoenixOutputFormat and PhoenixRecordWriter.java, and
could not see where the connection autocommit is set
.
Thanks,
Marcelo.
On 21 December 2017 at 20:02, Josh Elser <els...@apache.org
<mailto:els...@apache.org>> wrote:
Hi Marcelo,
The requirement for hbase-site.xml and core-site.xml to be on the
classpath are a "wart", resulting from the close ties to HBase
I'm a little hesitant of this for a few things I've noticed from lots of
various installations:
* Salted tables are *not* always more efficient. In fact, I've found
myself giving advice to not use salted tables a bit more than expected.
Certain kinds of queries will require much more work if
va:906)
at sqlline.Commands.closeall(Commands.java:880)
at sqlline.SqlLine.begin(SqlLine.java:714)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
at
org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:93)
Here the bigdata-namenode is like local
I can't seem to track down that error message to any specific line of code.
Do you have a stacktrace in the PQS log? I'm not sure whether the message is
implying that "localhost" is being interpreted as a class name or if
it's saying the PQS at localhost had an error. The more details you can
Please note that the following does not constitute legal advice. Please
consult a lawyer.
--
The LICENSE and NOTICE files are provided, as a convenience, to users of
Apache Phoenix so that they are aware of the licenses and copyright of
the third-party software that is distributed as a part
s contrary to my understanding.
Thought SSL enables secure connections.
Input, as always, is appreciated.
Thanks.
On Nov 26, 2017 8:58 PM, "Josh Elser" <els...@apache.org
<mailto:els...@apache.org>> wrote:
Thanks, Ash. Just to confirm, there are definitely the
> me.
>
> Meanwhile, I will look up the link you have provided and will continue to
> do research on this topic.
>
> thanks,
> -ash
>
> On Fri, Nov 24, 2017 at 12:11 PM, Josh Elser <els...@apache.org> wrote:
>
>> Why do you have a hard-requirement on using
Why do you have a hard-requirement on using SSL?
HBase itself does not use SSL to provide confidentiality on its wire
communication, it relies on jGSS and SASL to implement this security.
Under the hood, this actually boils down to using GSSAPI, Kerberos
specifically, to implement privacy
There is no such configuration which would preclude your ability to
issue two queries concurrently.
Some relevant information you should share if you'd like more help:
* Versions of HDFS, HBase, and Phoenix
* What your "request" is
* Thread-dump (e.g. jstack) of your client and the PQS
* DEBUG
that the required classpath entries are updated
properly by running phoenix_utils.py; except for phoenix_classpath, all
other variables have proper values.
Query:
Could you please tell me what else I'm missing here regarding the classpath?
Regards,
Mallieswari D
On Thu, Nov 9, 2017 at 12:00 AM, Josh Elser <els...@apache.
Please note that there is a difference between Phoenix Tracing and the
TRACE log4j level.
It appears that you're using a version of Phoenix which is incompatible
with the version of HBase/Hadoop that you're running. The implementation
of PhoenixMetricsSink is incompatible with the
I don't know why running it inside of Spark would cause issues.
I would double-check the classpath of your application when running in
Spark as well as look at the PQS log (HTTP/500 is a server error).
On 10/25/17 6:39 AM, cmbendre wrote:
I am trying to connect to Phoenix queryserver from
On 10/3/17 3:00 AM, Andrzej wrote:
On 03.10.2017 at 01:35, Josh Elser wrote:
Apache Phoenix does not provide/ship an ODBC driver.
One is provided by Hortonworks, but it is not open source. Would
recommend you use their forums if you need more information than is
included in the below tutorial.
https://hortonworks.com/hadoop-tutorial/bi-apache-phoenix-odbc/
On
All,
The Apache Phoenix PMC has recently voted to extend an invitation to
Sergey to join the PMC in recognition of his continued contributions to
the community. We are happy to share that he has accepted this offer.
Please join me in congratulating Sergey! Congratulations on a
well-deserved
Hi there,
In general, no, you should not see issues. We (really, Avatica -- the
project "powering" PQS) strive to make changes to the wire-protocol in
a backwards-compatible manner. Avatica also includes a basic framework
to do forward and backwards compatibility testing.
You are more
PQS "finds" where to talk to ZooKeeper based on hbase-site.xml on the
classpath.
This uses the environment variable HBASE_CONF_DIR. When this environment
variable is not set, it defaults to /etc/hbase/conf on unix-like
systems. Would recommend you investigate the classpath of PQS on your
y_indexing.html#Local_Indexes
<https://phoenix.apache.org/secondary_indexing.html#Local_Indexes>
On Tue, Sep 5, 2017 at 11:48 AM, Josh Elser <els...@apache.org
<mailto:els...@apache.org>> wrote:
500 writes/second seems very low to me. On my wimpy laptop,
Sriram,
Did you set the timezone and date-format configuration properties
correctly for your environment?
See `phoenix.query.dateFormatTimeZone` and `phoenix.query.dateFormat` as
described http://phoenix.apache.org/tuning.html
On 9/5/17 2:05 PM, Sriram Nookala wrote:
I'm trying to bulkload
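A sketch of setting those two properties on the client connection (the values shown are just examples):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class DateFormatProps {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Example values; match them to the timestamps in your input data.
            props.setProperty("phoenix.query.dateFormatTimeZone", "America/New_York");
            props.setProperty("phoenix.query.dateFormat", "yyyy-MM-dd HH:mm:ss");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:phoenix:localhost:2181", props)) {
                // TO_DATE() and friends now parse using the settings above
            }
        }
    }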
Calls to put in the HBase shell, to the best of my knowledge, are
synchronous. You should not have control returned to you until the
update was committed by the RegionServers. HBase's data guarantees are
that once a call to write data returns to you, all other readers *must*
be able to see
Yup! Those are only passed through to the standard HBase APIs to control
region boundaries for a table when the table is created without other
implications.
You can use the HBase shell commands or Java API to split/merge regions
to your heart's content.
On 8/8/17 2:19 PM, Michael Young
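For illustration, a minimal sketch of a region split from the Java Admin API (table name and split point are made up); the HBase shell's split command does the same thing:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SplitRegionExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Split MY_TABLE at rowkey "m" (hypothetical). Phoenix does
                // not mind; region boundaries carry no other implications.
                admin.split(TableName.valueOf("MY_TABLE"), Bytes.toBytes("m"));
            }
        }
    }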
On 7/27/17 4:36 PM, Lew Jackman wrote:
I am joining two tables using only the key fields in the two tables.
(if this were straight hbase, I know I would code with some range scans)
There are many billions of rows in each table.
I am trying to understand the explain plan as I am having
https://phoenix.apache.org/language/index.html#update_statistics
On 7/22/17 8:40 PM, Batyrshin Alexander wrote:
Hello,
We accidentally lost SYSTEM.STATS. How to recover/recreate it?