Hi,
I am trying to use the HBase throttling feature with Phoenix. HBase is 1.1.2 and
Phoenix is 4.6.
When I specify a large number of SALT_BUCKETS, HBase throws a ThrottlingException
even when quotas are high. Please note that this error occurs only when we scan
from the Phoenix shell. From the HBase shell,
Hi Maryann,
I created https://issues.apache.org/jira/browse/PHOENIX-3354 for this issue. I
could not assign it to you.
Best regards, Sumit
From: Maryann Xue <maryann@gmail.com>
To: "user@phoenix.apache.org" <user@phoenix.apache.org>; Sumit Nigam
<sumit_o...@yah
INTO ORDERED DISTINCT ROWS BY ["ID"]
CLIENT MERGE SORT
DYNAMIC SERVER FILTER BY (A.CURRENT_TIMESTAMP, A.ID) IN ((TMP.MCT, TMP.TID))
8 rows selected (0.033 seconds)
Looking forward to hearing from you.
Best regards, Sumit
From
does not miss any data but has the issue of not fitting in memory
(the actual issue with which I started this thread).
Thanks again! Sumit
From: Maryann Xue <maryann@gmail.com>
To: Sumit Nigam <sumit_o...@yahoo.com>; "user@phoenix.apache.org"
<user@phoenix.ap
ld assume that
changing the hash join to a sort-merge join would not alter the query results,
right? Or do I need to rewrite my query?
I am using global index.
Thanks, Sumit
From: Maryann Xue <maryann@gmail.com>
To: Sumit Nigam <sumit_o...@yahoo.com>
Cc: "user@phoenix.apache.org
to interpret explain plan?
Thanks, Sumit
From: Maryann Xue <maryann@gmail.com>
To: Sumit Nigam <sumit_o...@yahoo.com>
Cc: "user@phoenix.apache.org" <user@phoenix.apache.org>
Sent: Thursday, September 29, 2016 11:03 AM
Subject: Re: Hash join confusion
issue.
Switching to a sort-merge join helped, but I am not sure that is the right
solution going forward.
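For anyone landing here from the archives, here is a hedged sketch of how the sort-merge join is typically forced in Phoenix, via the USE_SORT_MERGE_JOIN hint. The subquery shape and the MCT/TID aliases are reconstructed from the explain-plan fragment quoted earlier in the thread; TBL and the column list are placeholders, not the actual query:

```sql
-- Without the hint, Phoenix uses a broadcast hash join, which must fit the
-- build side into the server-side cache; the hint switches to sort-merge,
-- which streams both sides instead.
SELECT /*+ USE_SORT_MERGE_JOIN */ a.ID, a.CURRENT_TIMESTAMP
FROM TBL a
INNER JOIN (
    SELECT ID TID, MAX(CURRENT_TIMESTAMP) MCT
    FROM TBL
    GROUP BY ID
) tmp
ON a.CURRENT_TIMESTAMP = tmp.MCT AND a.ID = tmp.TID;
```

The hint only changes the physical join strategy, not the join semantics, so results should match the hash-join plan.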
Thanks again! Sumit
From: Maryann Xue <maryann@gmail.com>
To: "user@phoenix.apache.org" <user@phoenix.apache.org>; Sumit Nigam
<sumit_o...@yahoo.com>
S
rn ON a more verbose explain plan? Like seeing the number of bytes and rows
that each step produces?
Thanks, Sumit
From: Sumit Nigam <sumit_o...@yahoo.com>
To: Users Mail List Phoenix <user@phoenix.apache.org>
Sent: Tuesday, September 27, 2016 9:17 PM
Subject: Hash join confusion
Hi,
I am using HBase 1.1 with Phoenix 4.6.
I have a table with row key (current_timestamp, id), which is salted, and an
index on (id). This table has ~3 million records.
I have a query like the one given below.
SELECT ID, CURRENT_TIMESTAMP FROM TBL
as a inner join (
/forcedotcom/phoenix/wiki/Secondary-Indexing mentions that
the index is written to first.
I use Phoenix 4.5.1
Thanks, Sumit
From: James Taylor <jamestay...@apache.org>
To: "user@phoenix.apache.org" <user@phoenix.apache.org>; Sumit Nigam
<sumit_o...@yahoo.com>
Sent: Th
Hi,
I recently noticed that one of my secondary indexes was short by 2 entries
compared to the data table.
AFAIK, the first update always goes to the index table. So, the only way an
index table could fall behind the main table is if the index was disabled by
Phoenix. Maybe the region server hosting
Hi,
I was benchmarking some Phoenix queries with different compaction tuning
levels.
A strange thing is observed when there is a huge number of HFiles on disk.
Queries that return no data (result set size 0) execute very quickly (5-10 ms
or so), but just doing an rs.next() on a result
cache, right?
Thanks for such a prompt reply. Sumit
From: anil gupta <anilgupt...@gmail.com>
To: "user@phoenix.apache.org" <user@phoenix.apache.org>; Sumit Nigam
<sumit_o...@yahoo.com>
Sent: Tuesday, March 22, 2016 9:11 PM
Subject: Re: Secondary index memory
Hi,
I am trying to estimate what (if any) the implications are of accumulating data
in a Phoenix secondary index. I have a secondary index on 3 columns and would
like to know if anyone has an idea of how to estimate the memory footprint of a
secondary index (if any) based on the number of entries in the data
Hi,
Is there an easy way to know the number of splits a Phoenix table has?
Preferably through JDBC metadata API?
Thanks, Sumit
ows regions to be explicitly split as well as pre-split and
auto-split. SALT_BUCKETS seems like a pre-split equivalent of sorts, so I am
interested to see what there may be in terms of auto- and explicit-salting.
Thanks,
- Ken
On Mon, Jan 11, 2016 at 6:10 AM Sumit Nigam <sumit_o...@yahoo.c
org>
To: user <user@phoenix.apache.org>; Sumit Nigam <sumit_o...@yahoo.com>
Sent: Thursday, December 10, 2015 11:34 PM
Subject: Re: Help with LIMIT clause
Hi Sumit, I agree, these two queries should return the same result, as long as
you have the ORDER BY clause. What version
Hi,
The link for salted tables https://phoenix.apache.org/salted.html mentions
"Since salting table would not store the data sequentially, a strict sequential
scan would not return all the data in the natural sorted fashion. Clauses that
currently would force a sequential scan, for example,
Hi,
Is there an easy way to completely turn off block cache for a specific table at
table creation time itself? Something like, CREATE TABLE X ( . )
BLOCK_CACHE=FALSE;
I could likely hint the queries at read time, but I'd like to turn it off
completely.
Thanks, Sumit
Sorry, do not bother. I figured it out - it can be done by specifying
BLOCKCACHE=false at CREATE TABLE time.
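For anyone searching the archive later, a minimal sketch of what that looks like. The table and column names are hypothetical; BLOCKCACHE is the underlying HBase column-family property, which Phoenix passes through at creation time:

```sql
-- Disables the HBase block cache for this table's column family,
-- so scans do not evict hot data belonging to other tables.
CREATE TABLE X (
    ID  BIGINT NOT NULL PRIMARY KEY,
    VAL VARCHAR
) BLOCKCACHE = false;
```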
From: Sumit Nigam <sumit_o...@yahoo.com>
To: Users Mail List Phoenix <user@phoenix.apache.org>
Sent: Thursday, November 5, 2015 1:58 PM
Subject: Block Cache
Hi,
Is t
,Sumit
From: Sumit Nigam <sumit_o...@yahoo.com>
To: "user@phoenix.apache.org" <user@phoenix.apache.org>; Sumit Nigam
<sumit_o...@yahoo.com>
Sent: Thursday, November 5, 2015 2:42 PM
Subject: Re: Block Cache
Sorry, do not bother. I figured out - can be done by spec
Hi,
I have logic to create some tables at startup of my application, as - "CREATE
TABLE IF NOT EXISTS ". This can happen from multiple instances of this app
concurrently.
Sometimes, when I restart the application, I get the error -
org.apache.hadoop.hbase.client.RpcRetryingCaller@761e074,
Hi,
Quite often I notice a log statement such as this (it happens for all tables) -
Re-resolved stale table ldm:exDocStore with seqNum 0 at timestamp 1445502988636
with 12 columns: [_SALT, CURRENT_TIMESTAMP, ID, 0.CURR_EXDOC, 0.CURR_CHECKSUM,
0.PREV_EXDOC, 0.PREV_CHECKSUM, 0.PREV_TIMESTAMP,
this one. In this case, a time-based range query can return
different results when executed twice.
Or am I missing something?
Thanks, Sumit
From: James Taylor <jamestay...@apache.org>
To: user <user@phoenix.apache.org>; Sumit Nigam <sumit_o...@yahoo.com>
Sent: Tuesday, October
org>
To: Sumit Nigam <sumit_o...@yahoo.com>
Cc: "user@phoenix.apache.org" <user@phoenix.apache.org>
Sent: Thursday, October 8, 2015 10:40 PM
Subject: Re: Salting and pre-splitting
1. So, explicitly setting phoenix.query.rowKeyOrderSaltedTable to true should
be done
From: Samarth Jain <sama...@apache.org>
To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Cc: Sumit Nigam <sumit_o...@yahoo.com>
Sent: Wednesday, October 7, 2015 10:53 PM
Subject: Re: Salting and pre-splitting
- Default value of phoenix.query.ro
of
phoenix.query.rowKeyOrderSaltedTable is true, and that ensures that the LIMIT
clause returns data in row key order
Thanks, Sumit
From: Sumit Nigam <sumit_o...@yahoo.com>
To: Users Mail List Phoenix <user@phoenix.apache.org>
Sent: Wednesday, October 7, 2015 12:41 PM
Subject: Sal
Hi,
I am somewhat confused by salting and pre-splitting. I would be grateful if any
of you could clarify the following:
1. Do I need to use pre-splitting along with salting to get the performance
advantage? Or can I still have single region server hot-spotting until there
are enough regions to split
<samarth.j...@gmail.com>
To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Cc: Sumit Nigam <sumit_o...@yahoo.com>
Sent: Tuesday, October 6, 2015 9:20 PM
Subject: Re: ResultSet size
To add to what Jesse said, you can override the default scanner fetch size
Hi,
I have 2 queries that fetch some data from HBase using the Phoenix input
format. Is there some easy way to create a prepared statement and pass it on to
the record reader? I am currently using a raw Statement instance.
Thanks, Sumit
Hi,
How can I get the current time from HBase?
I can use the Phoenix function CURRENT_TIME(). One way would be to query this
against any table, as - SELECT CURRENT_TIME() AS TIME FROM TABL LIMIT 1;
Or, I could possibly create a single row, single column table and query against
that table for
Thank you, James. Unfortunately, I am using 4.5.1 and moving up might take some
time. In that case, I assume creating a single-column, single-row table would
be better?
Thanks again.
From: James Taylor <jamestay...@apache.org>
To: user <user@phoenix.apache.org>; Sumit Ni
You can print the stack trace to see what the issue is. I ran into a similar
problem when my connection was shared between multiple threads.
From: Hafiz Mujadid
To: user@phoenix.apache.org
Sent: Friday, October 2, 2015 11:24 PM
Subject: Exception while executing
Hi Buntu,
Possibly the following schema can help?
A row key with columns user, X, Y, timestamp (a composite PK with user as the
leading column). You can MD5 each field to make it fixed-length if you want.
Then, also make the timestamp column your secondary index. Salt the table.
I think a single table is enough
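The suggestion above can be sketched roughly as follows. All names are hypothetical (Phoenix reserves some identifiers, so USR is used instead of USER), and the MD5 step is left out for brevity:

```sql
-- Composite PK with the user as the leading column; salting spreads
-- writes across region servers to avoid hot-spotting.
CREATE TABLE EVENTS (
    USR VARCHAR NOT NULL,
    X   VARCHAR NOT NULL,
    Y   VARCHAR NOT NULL,
    TS  TIMESTAMP NOT NULL,
    CONSTRAINT PK PRIMARY KEY (USR, X, Y, TS)
) SALT_BUCKETS = 16;

-- Secondary index on the timestamp column for time-range queries.
CREATE INDEX EVENTS_TS_IDX ON EVENTS (TS);
```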
Hi,
I am using Statement's executeBatch method. Intermittently, I get XCL06, which,
per the Phoenix error codes, means - "An executeUpdate is prohibited when
the batch is not empty. Use clearBatch to empty the batch first."
Now, I never reuse the statement used to construct the batch
Hello,
I am planning on using to_number(current_time()) as my primary key, with
salting enabled.
However, multiple rows can be upserted at the same current_time(). Will salting
still prevent one row from overwriting another? Or do I need to postfix another
column into
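On the "postfix another column" idea: the salt byte is a deterministic hash of the row key, so two rows with an identical timestamp key get the same salt and still collide. Adding a per-row unique column to the PK avoids this. A sketch with hypothetical names:

```sql
-- TS alone can collide across concurrent upserts; (TS, TXN_ID) cannot,
-- as long as TXN_ID is unique per transaction. Salting spreads load
-- but does not make duplicate keys distinct.
CREATE TABLE TXN_LOG (
    TS      DECIMAL NOT NULL,
    TXN_ID  VARCHAR NOT NULL,
    PAYLOAD VARCHAR,
    CONSTRAINT PK PRIMARY KEY (TS, TXN_ID)
) SALT_BUCKETS = 16;
```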
Hi,
I have a table as:
CREATE TABLE EXP (ID BIGINT NOT NULL PRIMARY KEY, TEXT VARCHAR);
If I explain the select:
EXPLAIN SELECT ID FROM EXP;
Then it shows CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER EXP
I assume it still uses the row key. Or should the explain plan have shown the
row key being used?
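For contrast, a query that fully qualifies the PK does make the key usage explicit in the plan. The exact wording varies by Phoenix version, so the output line below is indicative rather than exact:

```sql
EXPLAIN SELECT ID FROM EXP WHERE ID = 5;
-- e.g. CLIENT 1-CHUNK 1 ROWS ... POINT LOOKUP ON 1 KEY OVER EXP
```

A projection with no WHERE clause has no key range to restrict, so a FULL SCAN is expected even though the scan still walks the table in row key order.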
ing you, I think you can trust
it...
James
On 30/09/15 15:36, Sumit Nigam wrote:
Thanks so much James ...
I am sorry to be asking so many questions ... but one last one -
When I - EXPLAIN SELECT ID, TEXT FROM EXP WHERE ID > 5 AND ID < 10 -
here TEXT is not a part of the PK, th
primary key column
Yup
On 30/09/15 15:25, Sumit Nigam wrote:
Thanks James.
So, if I did a range lookup like - EXPLAIN SELECT ID FROM EXP WHERE ID > 5
AND ID < 10 - then I get RANGE SCAN OVER EXP [6] - [10]
Is that enough indication that the PK/index is used?
F
Hi Thomas,
You are right. Somehow, the packaged libs were for HBase 1.1.
Thanks for testing this out, Sumit
From: Thomas D'Silva <tdsi...@salesforce.com>
To: user@phoenix.apache.org; Sumit Nigam <sumit_o...@yahoo.com>
Sent: Tuesday, September 22, 2015 3:38 AM
Thanks James,
I will test with this tool and report back.
Sumit
From: James Taylor <jamestay...@apache.org>
To: user <user@phoenix.apache.org>; Sumit Nigam <sumit_o...@yahoo.com>
Sent: Monday, September 21, 2015 11:52 AM
Subject: Re: Local and global indexes
Hi Su
Hi,
I am using HBase 0.98.6 with Phoenix 4.5.1. As per the Phoenix JIRA and some
blogs (https://phoenix.apache.org/secondary_indexing.html), this combination
should also be affected by the deadlock that can occur during index maintenance
for global indexes
Hi,
I had some doubts about local vs. global indexes in Phoenix.
In my use case, I have both read-heavy and write-heavy workloads against the
same tables. So, is it a better idea to use local indexes? It would also help
reduce network chatter.
Plus, my reads may issue queries which fetch
Hello,
I am planning on supplying all guidepost properties through my
DriverManager.getConnection method by passing a java.util.Properties of
key=value pairs.
The documentation at https://phoenix.apache.org/tuning.html mentions that the
guidepost parameters are server-side parameters. Does that
Hi James,
Is it right to assume that with auto-commit set to true, the "mutate maxSize
exceeded" error would not occur? This should be because the commit now happens
automatically on the server side when the batch size is reached/buffered.
Thanks, Sumit
From: James Taylor
Hello,
I am using Phoenix 4.5 with HBase 0.98.1.
PreparedStatement is preferred in, say, Oracle to help with effective reuse of
query plans (with bind params). Does it carry the same guarantees with Phoenix,
or does the Phoenix query engine treat both Statement and PreparedStatement
<user@phoenix.apache.org>
Cc: Sumit Nigam <sumit_o...@yahoo.com>
Sent: Tuesday, September 15, 2015 10:12 AM
Subject: Re: Phoenix with PreparedStatement
Sumit, To add to what Samarth said, even now PreparedStatements help by saving
the parsing cost. Soon, too, for UPDATE VALUES