> We spent a significant amount of
> time and effort in stabilizing secondary indexes in the past 1-2 years,
> not to mention others spending time on a local index implementation.
> Judging Phoenix in its entirety based off of an arbitrarily old version
> of Phoenix is disingenuous.
>
I think this is, in some sense, an unavoidable problem if global indexes
are used. Essentially, global indexes create a graph of dependent region
servers due to index RPC calls from one RS to another. Any single failure
is bound to affect the entire graph, which under reasonable load becomes
the
Another observation with Phoenix global indexes: at very large write
volumes, a single region server failure cascades to the entire cluster
very quickly.
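To make the cascade argument concrete, here is a toy model (plain Python, not Phoenix code; server names and the dependency shape are made up). Each region server synchronously sends index updates to the servers hosting the relevant index regions, so under sustained write load a server is effectively blocked once any server it depends on, directly or transitively, is down:

```python
def writable_servers(depends_on, failed):
    """Return the servers whose writes still succeed when the servers in
    `failed` are down, given depends_on[rs] = set of RSes it sends index
    RPCs to. A server is treated as blocked once any of its (transitive)
    dependencies is blocked: its handlers fill up waiting on that RPC."""
    blocked = set(failed)
    changed = True
    while changed:
        changed = False
        for rs, deps in depends_on.items():
            if rs not in blocked and deps & blocked:
                blocked.add(rs)
                changed = True
    return set(depends_on) - blocked

# With index regions spread across the cluster, the dependency graph is
# dense: every RS depends on every other. One failure then blocks all:
deps = {rs: {"rs1", "rs2", "rs3", "rs4"} - {rs}
        for rs in ("rs1", "rs2", "rs3", "rs4")}
print(writable_servers(deps, {"rs3"}))  # set() -> cluster-wide cascade
```

The same function shows why a sparse dependency graph fails more gracefully: with `{"a": {"b"}, "b": set(), "c": set()}` and `b` down, only `a` is dragged along while `c` keeps writing.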
On Sat, Oct 27, 2018, 4:50 AM Nicolas Paris wrote:
> Hi
>
> I am benchmarking phoenix to better understand its strength and
> weaknesses.
"rajeshb...@apache.org" <chrajeshbab...@gmail.com> wrote:
Slides 9 and 10 give details on how the read path works.
https://www.slideshare.net/rajeshbabuchintaguntla/local-secondary-indexes-in-apache-phoenix
Let us know if you need more information.
Thanks,
Rajeshbabu.
On Fri, Jun 30, 2017 at
Hi,
The documentation says - "From 4.8.0 onwards we are storing all local
index data in the separate shadow column families in the same data table".
It is not quite clear to me how the read path works with local indexes. Is
there any document with some details on how it works?
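The key point from the documentation quoted above is that local index rows live in shadow column families of the same data table, so the index scan and the follow-up fetch of the data row are served by the same region, with no cross-server RPC. A toy model of that read path (plain Python dicts, not Phoenix internals; the `L#0` family name, `\x00` separator, and data are all illustrative):

```python
# One region hosting both data rows and their local index entries.
region = {
    # default data column family: row key -> columns
    "0": {
        "row1": {"city": "Oslo", "pop": 700},
        "row2": {"city": "Lyon", "pop": 500},
    },
    # shadow column family holding a local index on `city`;
    # each index row key is: indexed value + separator + data row key
    "L#0": {"Oslo\x00row1": {}, "Lyon\x00row2": {}},
}

def query_by_city(region, city):
    """Scan the shadow CF for the indexed value, then fetch the full data
    row from the same region -- a local lookup, not a remote RPC."""
    results = []
    for index_key in region["L#0"]:
        value, _, row_key = index_key.partition("\x00")
        if value == city:
            results.append(region["0"][row_key])
    return results

print(query_by_city(region, "Oslo"))  # [{'city': 'Oslo', 'pop': 700}]
```

Because the index rows sort by the indexed value, a real scan can seek directly to the matching range; the dict iteration here just stands in for that range scan.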
2016 at 2:35 PM Neelesh <neele...@gmail.com> wrote:
>
>> Hello,
>> When a region server is under stress (hotspotting, or large
>> replication, call queue sizes hitting the limit, other processes competing
>> with HBase etc), we experience latency spikes
Hello,
When a region server is under stress (hotspotting, or large replication,
call queue sizes hitting the limit, other processes competing with HBase
etc), we experience latency spikes for all regions hosted by that region
server. This is somewhat expected in the plain HBase world.
However,
Hi All,
we are using phoenix 4.4 with HBase 1.1.2 (HortonWorks distribution).
We're struggling with the following error on pretty much all our region
servers. The indexes are global, and the data table has more than 100B rows
2016-11-26 12:15:41,250 INFO
<jmaho...@gmail.com> wrote:
Hi Neelesh,
The saveToPhoenix method uses the MapReduce PhoenixOutputFormat under the
hood, which is a wrapper over the JDBC driver. It's likely not as efficient
as the CSVBulkLoader, although there are performance improvements over a
simple JDBC client as the writes a
Hi ,
Does phoenix-spark's saveToPhoenix use the JDBC driver internally, or
does it do something similar to CSVBulkLoader using HFiles?
Thanks!
Also, was your change to phoenix.upsert.batch.size on the client or on the
region server or both?
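For context on the client side, `phoenix.upsert.batch.size` controls how many rows the Phoenix client buffers before flushing them in one round trip. The underlying pattern is ordinary batched inserts with a periodic commit; a minimal sketch using Python's sqlite3 as a stand-in for a Phoenix JDBC connection (the batch size, table, and data are made up):

```python
import sqlite3

BATCH_SIZE = 1000  # plays the role of phoenix.upsert.batch.size (value made up)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(i, f"v{i}") for i in range(2500)]

# Buffer rows and flush a full batch per round trip instead of one
# statement per row; commit after each flushed batch.
for start in range(0, len(rows), BATCH_SIZE):
    batch = rows[start:start + BATCH_SIZE]
    conn.executemany("INSERT INTO t (id, val) VALUES (?, ?)", batch)
    conn.commit()

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 2500
```

Larger batches mean fewer round trips but more client-side memory held between commits, which is the trade-off the setting exposes.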
On Wed, Feb 17, 2016 at 2:57 PM, Neelesh <neele...@gmail.com> wrote:
> Thanks Anil. We've upped phoenix.coprocessor.maxServerCacheTimeToLiveMs,
> but haven't t
> http://search-hadoop.com/m/9UY0h2FKuo8RfAPN
>
> Please let us know if the problem still persists.
>
> On Wed, Feb 17, 2016 at 12:02 PM, Neelesh <neele...@gmail.com> wrote:
>
>> We've been running phoenix 4.4 client for a while now with HBase 1.1.2.
>> Once in a while whil