On 9/18/18 12:08 PM, Jaanai Zhang wrote:

I don't understand what performance issues you think exist based solely on
the above. Those numbers appear to be precisely in line with my
expectations. Can you please describe what issues you think exist?


1. The thick client's performance is roughly 1~4 times higher than the thin
client's, and the thin client's performance degrades as the number of
concurrent requests increases. For some web-server applications, this is
not enough.
2. A highly available (HA) thin client.
3. A SQL audit function.

A lot of developers like using the thin client, which has a lower
maintenance cost on the client side. Sorry, that's all that comes to mind. :)

The thin-client is always doing more work to execute the same query than the thick-client (shipping results to/from PQS), so it shouldn't be surprising that the thin-client is slower. This is the trade-off for doing less in the client and also having a well-defined API for other languages to talk to PQS.
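For reference, a minimal sketch of the two connection styles (hostnames, ports, and the probe query below are placeholders, not a recommendation):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixClients {
    public static void main(String[] args) throws Exception {
        // Older client jars may need explicit driver registration.
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        Class.forName("org.apache.phoenix.queryserver.client.Driver");

        // Thick client: the JVM talks to ZooKeeper/HBase directly.
        try (Connection thick = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")) {
            probe(thick);
        }

        // Thin client: the JVM sends Avatica RPCs over HTTP to PQS, which runs
        // the query with the thick client and ships the results back.
        try (Connection thin = DriverManager.getConnection(
                "jdbc:phoenix:thin:url=http://pqs-host:8765;serialization=PROTOBUF")) {
            probe(thin);
        }
    }

    private static void probe(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM SYSTEM.CATALOG")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}

That extra HTTP/serialization hop is exactly where the gap you measured comes from.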

Out of curiosity, did you increase the JVM heap for PQS or increase configuration property defaults for PQS to account for the increased concurrency?

Please be more specific. Asking for "more documentation" doesn't help us
actually turn this around into more documentation. What are the specific
pain points you have experienced? What topics do you want to know more
about? Be as specific as possible.


About documentation:
1. I think we could add documentation about migration tools and migration
case studies, since many users migrate from an RDBMS (MySQL/PostgreSQL/SQL
Server) to Phoenix for non-transactional applications.
2. How to design primary keys and indexes.

About pain points:
Stability is a big problem. Most people use Phoenix like a regular RDBMS
and run queries casually; they don't understand why the server crashes when
a full table scan is executed. So defining Phoenix's usage boundaries is
important: reject certain queries and report that back to the user's client.
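On the stability point: until there's a proper admission-control story, one pragmatic stopgap is to gate ad-hoc queries on their EXPLAIN plan before running them. A rough sketch (the "FULL SCAN" string match is a heuristic against today's plan output, not a stable API):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class FullScanGuard {
    // Returns true if Phoenix's EXPLAIN output for the query mentions a
    // full table scan. Current explain plans print "FULL SCAN OVER <table>",
    // but treat the string match as best-effort.
    public static boolean looksLikeFullScan(Connection conn, String sql) throws Exception {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("EXPLAIN " + sql)) {
            while (rs.next()) {
                if (rs.getString(1).contains("FULL SCAN")) {
                    return true;
                }
            }
        }
        return false;
    }
}

A gateway in front of PQS could use something like this to reject the query and return an error to the user's client instead of letting it hammer the region servers.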

A migration document would be great. Something that could supplement the existing "Quick Start" document.

What kinds of points would you want to have centralized about designing PKs or indexes?
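For example, I could imagine such a doc walking through a small worked schema like the one below (table and column names are made up), showing how the leading primary key columns should match the most common filters, and when a covered index pays off:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaDesignExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {
            // Leading PK columns match the most common filter (host, metric),
            // so those queries become range scans instead of full scans.
            stmt.execute("CREATE TABLE IF NOT EXISTS METRICS ("
                + " HOST VARCHAR NOT NULL,"
                + " METRIC VARCHAR NOT NULL,"
                + " TS TIMESTAMP NOT NULL,"
                + " VAL DOUBLE"
                + " CONSTRAINT PK PRIMARY KEY (HOST, METRIC, TS))");

            // Covered index for queries that filter by metric/time but not host;
            // INCLUDE(VAL) avoids a lookup back to the data table.
            stmt.execute("CREATE INDEX IF NOT EXISTS METRIC_IDX"
                + " ON METRICS (METRIC, TS) INCLUDE (VAL)");
        }
    }
}

A handful of examples like that, with the matching EXPLAIN output, would go a long way.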

Are you referring to the hbase-spark (and thus, Spark SQL) integration? Or
something that some company is building?


Some companies are building on Spark SQL to access Phoenix in order to
support both OLAP and OLTP requirements. Reading Phoenix tables from Spark
puts a heavy load on the HBase cluster, so my co-workers want to read the
HFiles of Phoenix tables directly for some offline jobs, but that depends
on a more flexible Phoenix API.
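For what it's worth, a sketch of how the phoenix-spark read path looks today (the table name and zkUrl are placeholders); every executor opens Phoenix scans against the region servers, which is where the load you're describing comes from:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PhoenixSparkRead {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("phoenix-read")
            .getOrCreate();

        // Each Spark partition maps to Phoenix scans served by the region
        // servers, so a large job puts real read load on the HBase cluster.
        Dataset<Row> df = spark.read()
            .format("org.apache.phoenix.spark")
            .option("table", "METRICS")
            .option("zkUrl", "zk-host:2181")
            .load();

        df.show(10);
        spark.stop();
    }
}

Reading snapshots/HFiles directly would bypass that, but, as you say, it needs more flexible APIs on the Phoenix side.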

Just beware of calling it "HBase native SQL", as this implies that it is something that is part of Apache HBase (which is not the case).

I doubt anyone in the Phoenix community would take offense to saying that a basic read/write SQL-esque language on top of HBase would be much simpler/faster than Phoenix is now. The value that Phoenix provides is a _robust_ SQL implementation and consistent secondary indexing support. Going beyond a "SQL skin" and implementing a database management system is where Phoenix excels above the rest.

I have also gotten feedback that some features are important to users. For
example, "ALTER TABLE ... MODIFY COLUMN" would avoid reloading data again,
which is an expensive operation for tables with massive amounts of data. I
have uploaded patches to JIRA (PHOENIX-4815
<https://issues.apache.org/jira/browse/PHOENIX-4815>), but nobody has
responded to me :(.
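To be clear, the pain there is real: without an ALTER TABLE ... MODIFY COLUMN, changing a column's type today essentially means rewriting the table, roughly like this (table and column names are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class WidenColumnWorkaround {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {
            // 1. Create a new table with the widened column type.
            stmt.execute("CREATE TABLE IF NOT EXISTS EVENTS_V2 ("
                + " ID VARCHAR PRIMARY KEY,"
                + " EVENT_COUNT BIGINT)");   // was INTEGER in EVENTS

            // 2. Rewrite every row -- the expensive part for a massive table.
            conn.setAutoCommit(true);
            stmt.execute("UPSERT INTO EVENTS_V2 (ID, EVENT_COUNT)"
                + " SELECT ID, CAST(EVENT_COUNT AS BIGINT) FROM EVENTS");

            // 3. Point clients at EVENTS_V2 and drop the old table.
        }
    }
}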

You should already know that we're all volunteers here, with our own responsibilities. You can ask for assistance/help in reviews, but, as always, be respectful of everyone's time. This goes for code reviews, as well as new documentation.

Now I am devoting my time to developing chaos tests and improving PQS
stability (this work was developed on my company's branch; the patches will
be contributed to the community once they have been running stably). If you
have any suggestions, please tell me what you're thinking. I would
appreciate your reply.

Would be happy to see what you create.
