Yes, snapshots will work as Yuhao mentioned; we did this during a
data center migration.
On Mon, Aug 10, 2015 at 9:19 PM, Yuhao Bi byh0...@gmail.com wrote:
Phoenix is based on HBase; since we can take a snapshot of an HBase table,
maybe we can archive using this method?
Taking snapshot
Hi Ralph,
Try increasing the ulimit for the number of open files (ulimit -n) and
processes (ulimit -u) for the following users:
hbase
hdfs
Regards,
Ankit Singhal
On Tue, Jul 7, 2015 at 4:13 AM, Perko, Ralph J ralph.pe...@pnnl.gov wrote:
Hi,
I am using a pig script to regularly load data into hbase
+1 for taking snapshots and exporting them to the DR cluster if there is no
requirement for the DR cluster to stay up to date in real time.
I am not sure whether an incremental snapshot feature is out yet, but
taking snapshots on a periodic basis is also not that heavy.
On Thu, Dec 24, 2015 at 3:44 PM,
Yes, restart your cluster
On Wed, Jun 15, 2016 at 8:17 AM, anupama agarwal <anu1...@gmail.com> wrote:
> I have created an async index with the same name, but I am still getting the same
> error. Should I restart my cluster for the changes to take effect?
> On Jun 15, 2016 8:38 PM, "Ankit S
Hi Anupama,
Option 1:
You can create an ASYNC index so that the WAL can be replayed. Once your
regions are up, remember to flush the data table before dropping the
index.
Option 2:
Create a table in HBase with the same name as the index table, using the
hbase shell.
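A minimal sketch of Option 1, assuming a hypothetical data table
DATA_TABLE with an index IDX on COL1 (names are illustrative, not from
this thread):

    CREATE INDEX IDX ON DATA_TABLE (COL1) ASYNC;
    -- Once the regions are up, flush the data table from the HBase shell
    -- before dropping the index:
    --   hbase> flush 'DATA_TABLE'
    DROP INDEX IDX ON DATA_TABLE;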
Regards,
Ankit
(v) ASYNC
But if you are only using CSVBulkLoadTool for bulk loads, then it will
automatically prepare and bulk load the index data as well, so index
maintenance would not be required.
Regards,
Ankit Singhal
On Sat, Jun 25, 2016 at 4:13 PM, Tongzhou Wang (Simon) <
tongzhou.wang.1...@gmail.com>
You can enable remote debugging on the regionserver by appending "-Xdebug
-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071" to
HBASE_REGIONSERVER_OPTS and debugging through Eclipse.
On Tue, Jan 12, 2016 at 7:08 PM, Gaurav Agarwal
wrote:
> Hello
> How to debug
As Afshin also said, you need to adjust your timezone with
phoenix.query.dateFormatTimeZone:
https://phoenix.apache.org/tuning.html
phoenix.query.dateFormatTimeZone = IST
For example, upsert like this:
jdbc:phoenix:localhost> UPSERT INTO DESTINATION_METRICS_TABLE VALUES
(to_date('2015-09-12
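The statement above is truncated in the archive; a hypothetical complete
statement of the same shape (the column values are illustrative, not from
the thread):

    -- Illustrative only; adjust to your schema.
    UPSERT INTO DESTINATION_METRICS_TABLE
    VALUES (TO_DATE('2015-09-12 10:00:00'), 'metric_name', 42);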
Hi Yiannis,
You may need to set phoenix.spool.directory to a valid Windows folder, as by
default it is set to /tmp.
It is fixed in 4.7.
https://issues.apache.org/jira/browse/PHOENIX-2348
Regards,
Ankit Singhal
On Wed, Feb 24, 2016 at 10:05 PM, Yiannis Gkoufas <johngou...@gmail.com>
wrote:
Hi Steve,
Can you check whether the properties are picked up by the SQL/application
client?
Regards,
Ankit Singhal
On Wed, Feb 24, 2016 at 11:09 PM, Steve Terrell <sterr...@oculus360.us>
wrote:
> Hi, I hope someone can tell me what I'm doing wrong…
>
> I set *phoenix.sche
Min(R^2 * “hbase.hregion.memstore.flush.size”, “hbase.hregion.max.filesize”), where R
is the number of regions of the same table hosted on the same regionserver.
You may read the article below to understand splitting policies:
http://hortonworks.com/blog/apache-hbase-region-splitting-and-merging/
Regards,
Ankit Singhal
On Mon, Feb 15, 2016 at 8:52 PM, Pedro Gandola
Can you try after truncating the SYSTEM.STATS table, or after deleting only
the records of the parent table from SYSTEM.STATS, like below:
DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME='media';
Regards,
Ankit Singhal
On Wed, Feb 24, 2016 at 8:16 PM, Jaroslav Šnajdr <jsna...@gmail.com> wrote:
&
Hi Divya,
It is fixed in 4.7; please find the JIRA for the same:
https://issues.apache.org/jira/browse/PHOENIX-2608
Regards,
Ankit Singhal
On Thu, Feb 18, 2016 at 2:03 PM, Divya Gehlot <divya.htco...@gmail.com>
wrote:
> Hi,
> I am getting following error while starting spark shell
=> 'ROW',
> REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE',
> MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP
> _DELETED_CELLS => 'true', BLOCKSIZE => '65536', IN_MEMORY => 'false',
> BLOCKCACHE => 'true'}
>
> 1 row(s) in 0.0280 seconds
>
&
91 |
> | | SYSTEM| STATS |
> GUIDE_POSTS_ROW_COUNT | -5 |
>
> I have attached the SYSTEM.CATALOG contents.
>
> Thanks,
> Ben
>
>
>
> On Mar 16, 2016, at 9:34 AM, Ankit Singhal <ankitsingha...@gmail.com>
>
You may check the discussion in the mail chain below.
https://www.mail-archive.com/dev@phoenix.apache.org/msg19448.html
On Fri, Mar 11, 2016 at 3:20 PM, Divya Gehlot
wrote:
>
> Hi,
> I created a table in Phoenix with three column families and Inserted the
> values as shown
Yes, it seems so.
Did you get any error related to SYSTEM.STATS when the client connected for
the first time?
Can you please describe your SYSTEM.STATS table and paste the output here?
On Wed, Mar 16, 2016 at 3:24 AM, Benjamin Kim wrote:
> When trying to run update status on an
g Phoenix and reinstalling it again. I had
> to wipe clean all components.
>
> Thanks,
> Ben
>
>
> On Mar 16, 2016, at 10:47 AM, Ankit Singhal <ankitsingha...@gmail.com>
> wrote:
>
> It seems from the attached logs that you have upgraded phoenix to 4.7
> ver
Awesome!! Great work Josh.
On Tue, Mar 8, 2016 at 8:59 PM, James Heather
wrote:
> Cool. That's big news for us.
> On 8 Mar 2016 2:15 p.m., "Josh Mahonin" wrote:
>
>> Hi all,
>>
>> Just thought I'd let you know that Flyway 4.0 was recently
ALTER TABLE SYSTEM.CATALOG ADD BASE_COLUMN_COUNT INTEGER,
IS_ROW_TIMESTAMP BOOLEAN;
>!quit
Quit the shell and start new session without CurrentSCN.
> ./sqlline.py localhost
> !describe system.catalog
This should resolve the issue of the missing column.
Regards,
Ankit Singhal
On Fri, Apr 22, 2016 at 3:0
and major_compaction on SYSTEM.CATALOG
- when you don't see those columns, open a connection at currentSCN=9 and
alter the table to add both columns.
- you may set KEEP_DELETED_CELLS back to true on SYSTEM.CATALOG
Regards,
Ankit Singhal
On Tue, Apr 26, 2016 at 11:26 PM, Arun
Sanooj,
It is not necessary that output can only be written to a table when using
MR; you can have your own custom reducer with an appropriate OutputFormat
set in the driver.
Similar solutions with Phoenix are:
1. Phoenix MR
2. Phoenix Spark
3. Phoenix Pig
On Thu, Apr 21, 2016 at 11:06 PM, Sanooj
sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>
> Thanks,
> Arun
>
>
00CATALOG\x00IS_ROW_TIMESTAMP\x00", STOPROW =>
> "\x00SYSTEM\x00CATALOG\x00IS_ROW_TIMESTAMP_\x00"}
>
> We still see the same error.
>
>
>
> Do we need to explicitly delete from phoenix as well?
>
>
>
> Thanks,
>
> Bharathi.
>
>
>
&
It seems the guideposts collected for the table got corrupted somehow. You
may try deleting the guideposts for that physical table from the
SYSTEM.STATS table.
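A hedged sketch of that cleanup, assuming a hypothetical physical table
name:

    -- Replace MY_TABLE with the physical table whose stats are corrupted.
    DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME = 'MY_TABLE';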
On Sun, May 1, 2016 at 10:20 AM, Michal Medvecky wrote:
> If anyone experiences the same problem (hello, google!), here is my
>
, Bavineni, Bharata <
bharata.bavin...@epsilon.com> wrote:
> Ankit,
>
> We will try restarting HBase cluster. Adding explicit “put” in HBase with
> timestamp as 9 for these two columns has any side effects?
>
>
>
> Thank you,
>
> Bharathi.
>
>
>
> *From
.phoenix.schema.MetaDataClient.processMutationResult(MetaDataClient.java:2345)
>
> at
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:2641)
>
>
>
> Any other suggestions?
>
>
>
> Thank you,
>
> Bharathi.
>
>
>
>
xecuteMutation(PhoenixStatement.java:312)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1435)
>> at sqlline.Commands.execute(Commands.java:822)
>> at sqlline.Commands.sql(Commands.java:732)
>> at sq
Try recreating your index with ASYNC and updating it with the IndexTool, so
that you don't face timeouts or get stuck during the initial load of huge
data.
https://phoenix.apache.org/secondary_indexing.html
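A minimal sketch, assuming a hypothetical data table MY_TABLE and index
MY_IDX (the IndexTool flags follow the secondary-indexing page linked
above):

    CREATE INDEX MY_IDX ON MY_TABLE (MY_COL) ASYNC;
    -- Then build the index with the MapReduce IndexTool, e.g.:
    --   hbase org.apache.phoenix.mapreduce.index.IndexTool \
    --     --data-table MY_TABLE --index-table MY_IDX --output-path /tmp/my_idx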
On Tue, May 10, 2016 at 7:26 AM, anupama agarwal wrote:
>
Yes Vishnu, you may be hitting
https://issues.apache.org/jira/browse/PHOENIX-2249, so you can try deleting
the stats for the table "EVENTS_PROD".
On Mon, May 9, 2016 at 10:56 AM, vishnu rao wrote:
> hi guys need help !
>
> i was getting this exception while doing a select.
Yes, you can, provided you don't need to go back in time to a version of
the schema older than 5 versions.
On Mon, May 9, 2016 at 8:16 AM, Bavineni, Bharata <
bharata.bavin...@epsilon.com> wrote:
> Hi,
>
> SYSTEM.CATALOG table is created with VERSIONS => '1000' by default. Can we
> change this value to 5 or
You can try increasing phoenix.query.timeoutMs (and
hbase.client.scanner.timeout.period) on the client.
https://phoenix.apache.org/tuning.html
On Fri, May 13, 2016 at 1:51 PM, 景涛 <844300...@qq.com> wrote:
> When I query from a very big table
> It get errors as follow:
>
>
you looking
> for any specific data? so that I can filter and send the results?
>
>
>
> Thank you for your time looking into this,
>
> Bharathi.
>
>
>
> *From:* Ankit Singhal [mailto:ankitsingha...@gmail.com]
> *Sent:* Sunday, May 01, 2016 1:01 AM
> *To:* user@phoe
Can you please check that the hbase-site.xml (where you are setting this
property) is on the Phoenix classpath?
On Wed, Apr 20, 2016 at 3:10 AM, wrote:
> I am having trouble setting the "phoenix.spool.directory"
> (QueryServices.SPOOL_DIRECTORY) property value. Any
scan 'SYSTEM.CATALOG', {RAW=>true}
Regards,
Ankit Singhal
On Wed, Apr 20, 2016 at 4:25 AM, Arun Kumaran Sabtharishi <
arun1...@gmail.com> wrote:
> After further investigation, we found that Phoenix Upsert query
> SYSTEM.CATALOG has IS_ROW_TIMESTAMP column, but PTableImpl.getColumn
Hi,
I think when you are doing a put from the shell, the value goes in as a
String, not as an Integer, so Phoenix can decode it only as VARCHAR.
If you want to put an Integer into your HBase table, use the byte
representation of the integer, or the Java API, instead.
Regards,
Ankit Singhal
On Wed, Apr 20, 2016 at 8:00
is blocking us in the production environment. Any help to
> resolve or workaround is highly appreciated.
>
> Thanks,
> Arun
>
>
> On Wed, Apr 20, 2016 at 12:01 PM, Ankit Singhal <ankitsingha...@gmail.com>
> wrote:
>
>> It's ok if you can just post after grep for CATALOG
Samarth, filed PHOENIX-3176 for the same.
On Wed, Aug 10, 2016 at 11:42 PM, Ryan Templeton wrote:
> 0: jdbc:phoenix:localhost:2181> explain select count(*) from
> historian.data;
>
> +--+
>
> |  PLAN
Apache Phoenix enables OLTP and operational analytics for Hadoop through
SQL support and integration with other projects in the ecosystem such as
Spark, HBase, Pig, Flume, MapReduce and Hive.
We're pleased to announce our 4.8.0 release which includes:
- Local Index improvements[1]
- Integration
Hi Vasanth,
The RC for 4.8 (with support for HBase 1.2) is just out today; you can try
the latest build.
Regards,
Ankit Singhal
On Thu, Jul 14, 2016 at 10:06 AM, Vasanth Bhat <vasb...@gmail.com> wrote:
> Thanks James.
>
> When are the early builds going to be availab
a timeout period. You need to increase the scanner timeout
period along with the properties you mentioned above:
hbase.client.scanner.timeout.period = 6
Regards,
Ankit Singhal
On Mon, Aug 8, 2016 at 6:55 PM, <kannan.ramanat...@barclays.com> wrote:
> Thanks Brian. I have added HBASE
(FloorYearExpression.class),
CeilWeekExpression(CeilWeekExpression.class),
CeilMonthExpression(CeilMonthExpression.class),
CeilYearExpression(CeilYearExpression.class);
Regards,
Ankit Singhal
On Wed, Jun 29, 2016 at 9:08 AM, Yang Zhang <zhang.yang...@gmail.com> wrote:
> when I use the
Hi Vamsi,
Phoenix uses a single local index table for all the local indexes created
on a particular data table.
Rows are differentiated by a local index sequence ID and filtered, when
requested, during the query for a particular index.
Regards,
Ankit Singhal
On Tue, Jun 28, 2016 at 4:18 AM, Vamsi
, you can read
https://phoenix.apache.org/secondary_indexing.html
Regards,
Ankit Singhal
On Tue, Jun 28, 2016 at 4:25 AM, Vamsi Krishna <vamsi.attl...@gmail.com>
wrote:
> Team,
>
> I'm using HDP 2.3.2 (HBase : 1.1.2, Phoenix : 4.4.0).
> *Question: *phoenix explain pl
rched data is different.
Yes, it could be possible, because some users are hitting only a certain
key range, depending upon the first column (prefix) of the row key.
Regards,
Ankit Singhal
On Mon, Aug 15, 2016 at 6:29 PM, Chabot, Jerry <jerry.p.cha...@hpe.com>
wrote:
> I’ve added the hint
I think you are also hitting
https://issues.apache.org/jira/browse/PHOENIX-3176.
On Tue, Feb 7, 2017 at 2:18 PM, Dhaval Modi wrote:
> Hi Pedro,
>
> Upserted key are different. One key is for July month & other for January
> month.
> 1. '2017-*07*-02T15:02:21.050'
> 2.
Hi Pradheep,
It seems tracing is not distributed as part of HDP 2.4.3.0; please work
with your vendor for an appropriate solution.
Regards,
Ankit Singhal
On Thu, Jan 19, 2017 at 4:48 AM, Pradheep Shanmugam <
pradheep.shanmu...@infor.com> wrote:
> Hi,
>
> I am using hdp 2.
Aaron,
You can escape the reserved-keyword check with double quotes ("").
SELECT * FROM SYSTEM."FUNCTION"
Regards,
Ankit Singhal
On Fri, Aug 19, 2016 at 10:47 PM, Aaron Molitor <amoli...@splicemachine.com>
wrote:
> Looks like the SYSTEM.FUNCTION table
(*"iso_8601" TIMESTAMP NOT NULL* PRIMARY
KEY);
upsert into test values(TO_DATE('2016-04-01 22:45:00'));
select * from test;
+--+
| iso_8601 |
+--+
| 2016-04-01 22:45:00.000 |
+--+
Regards,
Ankit Singhal
Ted Yu <yuzhih...@gmail.com> wrote:
>>
>> Ankit:
>> Is this documented somewhere ?
>>
>> Thanks
>>
>> On Sun, Aug 21, 2016 at 6:07 AM, Ankit Singhal <ankitsingha...@gmail.com>
>> wrote:
>>
>>> Aaron,
>>>
>>> you c
Currently, Phoenix doesn't support projecting selective columns of a table,
or expressions, in a view. You need to project all the columns with
SELECT *. Please see the "Limitations" section on the page below, or
PHOENIX-1507.
https://phoenix.apache.org/views.html
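A minimal sketch of the supported shape (hypothetical names):

    -- Supported: a view must project all columns of its base table.
    CREATE VIEW MY_VIEW AS SELECT * FROM MY_TABLE WHERE KIND = 'A';
    -- Not supported (PHOENIX-1507): selective columns or expressions, e.g.
    -- CREATE VIEW MY_VIEW2 AS SELECT COL1, COL2 + 1 FROM MY_TABLE;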
On Thu, Oct 6, 2016 at 10:05 PM, Mich
Adding some more workarounds, if you are working on columns:
select cast(col_int1 as decimal)/col_int2;
select col_int1*1.0/3;
On Wed, Sep 21, 2016 at 8:33 PM, James Taylor
wrote:
> Hi Noam,
> Please file a JIRA. As a workaround, you can do SELECT 1.0/3.
> Thanks,
>
Share some more details about the query, the DDL, and the explain plan. In
Phoenix, there are cases where we do some server-side processing the first
time rs.next() is called, but subsequent next() calls should be faster.
On Thu, Sep 22, 2016 at 9:52 AM, Sasikumar Natarajan
wrote:
>
; (col1 VARCHAR NOT NULL,
>>> col2 VARCHAR NOT NULL,
>>> col3 INTEGER NOT NULL,
>>> col4 INTEGER NOT NULL,
>>> col5 VARCHAR NOT NULL,
>>> col6 VARCHAR NOT NULL,
>>> col7 TIMESTAMP NOT NULL,
>>> col8 TIMESTAMP NOT NULL,
>>>
You need to increase the Phoenix timeout as well (phoenix.query.timeoutMs).
https://phoenix.apache.org/tuning.html
On Sun, Oct 23, 2016 at 3:47 PM, Parveen Jain wrote:
> hi All,
>
> I just realized that phoenix doesn't provide "group by" and "distinct"
> methods if we use
I think a query with an OVER clause can be re-written by using SELF JOINs
in many cases.
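A hedged sketch of one such rewrite, assuming a hypothetical table
T(k, ts, v): a per-key running count, which would otherwise use
COUNT(*) OVER (PARTITION BY k ORDER BY ts), expressed as a self join:

    -- Hypothetical: running count per key without an OVER clause.
    SELECT a.k, a.ts, COUNT(*) AS running_cnt
    FROM T a JOIN T b ON b.k = a.k AND b.ts <= a.ts
    GROUP BY a.k, a.ts;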
Regards,
Ankit Singhal
On Sun, Oct 23, 2016 at 3:11 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:
> Hi,
>
> I was wondering whether analytic functions work in Phoenix. For example
> something
[2] https://phoenix.apache.org/language/functions.html#to_date
[3] https://phoenix.apache.org/tuning.html
Regards,
Ankit Singhal
On Sun, Oct 23, 2016 at 6:15 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:
> Hi,
>
> My queries in Phoenix pickup GMT timezone as default.
>
> I need them to def
bq. Will bulk load from Phoenix update the underlying HBase table?
Yes. Instead of using ImportTsv, try to use the CSV bulk load only.
bq. Do I need to replace the Phoenix view on HBase with CREATE TABLE?
You can still keep the VIEW.
Regards,
Ankit Singhal
On Sun, Oct 23, 2016 at 6:37 PM, Mich Talebzadeh
JFYI, phoenix.query.rowKeyOrderSaltedTable is deprecated and is
not honored from v4.4 onwards, so please use phoenix.query.force.rowkeyorder
instead.
I have updated the docs (https://phoenix.apache.org/tuning.html) accordingly.
On Mon, Oct 17, 2016 at 3:14 AM, Josh Elser wrote:
>
Do you have bigger rows? If yes, it may be similar to
https://issues.apache.org/jira/browse/PHOENIX-3112, and
increasing hbase.client.scanner.max.result.size can help.
On Thu, Nov 24, 2016 at 6:00 PM, 金砖 wrote:
> thanks Abel.
>
>
> I tried update statistics, it did not
@James, is this similar to
https://issues.apache.org/jira/browse/PHOENIX-3112?
@Mac, can you check whether increasing hbase.client.scanner.max.result.size helps?
On Tue, Dec 6, 2016 at 10:53 PM, James Taylor
wrote:
> Looks like a bug to me. If you can reproduce the issue
This is because we cap the scan with the current timestamp, so anything
beyond the current time will not be seen. This is needed mainly to prevent
an UPSERT SELECT from seeing its own new writes.
https://issues.apache.org/jira/browse/PHOENIX-3176
On Thu, Apr 20, 2017 at 11:52 PM, Randy
+1 for Jonathan's comment.
-- Take multiple jstacks of the client during the query and check which
thread works for long. If you find the merge sort is the bottleneck, then
removing salting and using a SERIAL scan will help in the query given above.
Ensure that your queries are not causing
It seems we don't pack the dependencies into the phoenix-kafka jar yet. Try
including flume-ng-configuration-1.3.0.jar in your classpath to resolve the
above issue.
On Thu, Apr 20, 2017 at 9:27 AM, lk_phoenix wrote:
> hi,all:
> I try to read data from kafka_2.11-0.10.2.0 , I get
If you are using Phoenix 4.8 onwards, you can try giving the ZooKeeper
string appended with a schema, like below:
psql.py <zookeeper>;schema=<schema_name> <ddl_file>
psql.py zookeeper1;schema=TEST_SCHEMA /create_table.sql
On Sat, Apr 15, 2017 at 2:25 AM, sid red wrote:
> Hi,
>
> I am
Sudhir,
Relevant JIRA for the same.
https://issues.apache.org/jira/browse/PHOENIX-3288
Let me see if I can crack this for the coming release.
On Fri, Apr 21, 2017 at 8:42 AM, Josh Elser wrote:
> Sudhir,
>
> Didn't meant to imply that asking the question was a waste of
bq. 1. How many concurrent Phoenix connections can the application open?
I don't think there is any limit on this.
bq. 2. Are there any limitations regarding the number of connections I
should consider?
I think as many as your JVM permits.
bq. 3. Is the client side config parameter
From Phoenix 4.9 onwards you can specify any expression as a column default.
(I'm not sure if there is any limitation called out.)
For syntax:
https://phoenix.apache.org/language/index.html#column_def
For examples, you can take a look at our IT tests for the phoenix-spark
module.
https://github.com/apache/phoenix/blob/master/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
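A minimal DDL sketch, assuming a 4.9+ cluster and hypothetical names
(constant expressions only, to stay on safe ground):

    -- Hypothetical table with expression defaults.
    CREATE TABLE EVENTS (
        ID BIGINT NOT NULL PRIMARY KEY,
        SCORE INTEGER DEFAULT 5 * 2,
        CREATED DATE DEFAULT TO_DATE('2017-01-01 00:00:00')
    );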
On Mon, Jul 17, 2017 at 9:20 PM, Luqman Ghani wrote:
>
> -- Forwarded message
Yes, the value 1 for "hbase.client.retries.number" is the root cause of the
above exception.
A general (not official) guideline/formula could be:
(time taken for region movement in your cluster + ZooKeeper timeout) /
hbase.client.pause
Or, going by intuition, you can set it to at least 10.
On Fri, Jul 14,
You can do this by UPSERT SELECT.
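A hedged sketch of that approach, assuming a hypothetical SESSIONS table
keyed by USER_ID: rewriting the rows re-stamps their cells with the current
time, so the TTL clock restarts.

    -- Hypothetical: rewrite one user's rows to reset the HBase TTL.
    UPSERT INTO SESSIONS
    SELECT * FROM SESSIONS WHERE USER_ID = 'user-123';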
On Mon, Aug 7, 2017 at 4:13 PM, Ankit Singhal <ankitsingha...@gmail.com>
wrote:
> You can read KVs for that user and write them again with current time.
>
> On Sun, Aug 6, 2017 at 8:44 PM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail
You can read the KVs for that user and write them again with the current time.
On Sun, Aug 6, 2017 at 8:44 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> I'm using phoenix to store user sessions. The table's TTL is set to 3 days
> and I'd like to have the 3 days start over if the user
d you will just save
time by doing so.
Regards,
Ankit Singhal
On Thu, Jun 8, 2017 at 1:34 PM, Michael Young <yomaiq...@gmail.com> wrote:
> I have a doubt about step 2 from Ankit Singhal's response in
> http://apache-phoenix-user-list.1124778.n5.nabble.com/Phoeni
> x-4-4-Rename-tabl
> way to do this?
>
> Nan
>
> On Fri, Jun 23, 2017 at 1:23 AM, Ankit Singhal <ankitsingha...@gmail.com>
> wrote:
>
>> If you have composite columns in your row key of HBase table and they are
>> not formed through Phoenix then you can't access an individu
If you have composite columns in the row key of your HBase table and they
were not formed through Phoenix, then you can't access an individual column
of the primary key through Phoenix SQL either.
Try composing the whole PK and using it in a filter, or check whether you
can use regex functions[1] or the LIKE operator.
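A hedged sketch, assuming a hypothetical row key composed of a tenant ID
and a date outside Phoenix:

    -- Hypothetical: filter on the composed key rather than its parts.
    SELECT * FROM T WHERE PK = 'tenant1' || '2016-01-01';
    -- Or prefix-match the leading component:
    SELECT * FROM T WHERE PK LIKE 'tenant1%';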
bq. A leading date column is in our schema model.
Don't you have any other column which is obligatory in read queries but not
monotonic with ingestion? A pre-split can help you avoid hot-spotting.
For parallelism/performance comparison, have you tried running a query on a
Yes, this is a limitation[1] of the current implementation of UDFs and the
class loader used. It is recommended either to restart the cluster if the
implementation changes, or to use a new jar name.
[1] https://phoenix.apache.org/udf.html
On Wed, May 3, 2017 at 4:41 AM, Randy Hu wrote:
>
You can map an existing HBase table to a view or table in Phoenix, but we
expect the table name to match the Phoenix table name. (However, you can
rename your existing HBase table with snapshot and restore.)
The DDLs you are using to map the table are not correct or are not
supported. You can
If you don't have secondary indexes, views, or immutable tables, then an
upgrade from 4.5 to 4.9 will just add some new columns to SYSTEM.CATALOG
and re-create the STATS table.
But we still have not tested an upgrade from 4.5 to 4.9; it is always
advisable to stop after every two versions (especially
It could be because of stale stats due to region merges or something; you
can try deleting the stats from SYSTEM.STATS.
http://apache-phoenix-user-list.1124778.n5.nabble.com/Cache-of-region-boundaries-are-out-of-date-during-index-creation-td1213.html
On Sat, May 20, 2017 at 8:29 PM, Pedro
The next release of Phoenix (v4.11.0) will support HBase 1.3.1 (see
PHOENIX-3603); no timeline has been decided yet for the release, but you
may expect some updates in the next 1-2 months.
On Thu, May 25, 2017 at 3:32 AM, Anirudha Jadhav wrote:
> hi,
>
> just checking in, any
some other purpose.
Regards,
Ankit Singhal
On Tue, May 2, 2017 at 7:55 PM, Josh Elser <els...@apache.org> wrote:
> Planning for unexpected outages with HBase is a very good idea. At a
> minimum, there will likely be points in time where you want to change HBase
> configuration, app
I think you have a salted table and you are hitting the bug below.
https://issues.apache.org/jira/browse/PHOENIX-3800
Do you mind trying out the patch? We will have this fixed in 4.11 at
least (probably 4.10.1 too).
On Fri, May 5, 2017 at 11:06 AM, Bernard Quizon <
Best is to do "SELECT COUNT(*) FROM MYTABLE" with an index; as the index
table will have less data, it can be read faster.
If you have time-series data, or your data is always incremental with some
ID, then you can do an incremental count with ROW_TIMESTAMP filters or an
ID filter, as sketched below.
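A hedged sketch, assuming a hypothetical ROW_TIMESTAMP column CREATED_AT
on MYTABLE:

    -- Hypothetical: count only the newly arrived window, then add it to a
    -- running total kept by the application.
    SELECT COUNT(*) FROM MYTABLE
    WHERE CREATED_AT >= TO_DATE('2017-06-01 00:00:00')
      AND CREATED_AT <  TO_DATE('2017-06-02 00:00:00');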
bq. however the result
bq. This runs successfully if I split this into 2 files, but I'd like to
avoid doing that.
Do you run a different job for each file?
If your HBase cluster is not co-located with your YARN cluster, then it may
be possible that copying the large HFiles is timing out (this may happen
due to the fewer
Yes, you can write your own custom mapper to do the conversions (look at
CsvToKeyValueMapper and CsvUpsertExecutor#createConversionFunction), or
consider chaining jobs (where a first job with multiple inputs
standardizes the date format, followed by CSVBulkLoadTool), or writing a
custom
bq. But for tables inside, I am assuming the user needs access to the
Phoenix SYSTEM tables (and CREATE rights for the namespace in question
on the HBase level)? Is that the case? And if so, what are they able
to see, as in, only their information, or all information from other
tenants as well? If
Probably you are affected by
https://issues.apache.org/jira/browse/HBASE-20172. Are you on JDK 1.7 or
lower? Can you upgrade to JDK 1.8 and check?
On Sun, May 6, 2018 at 9:29 AM, anil gupta wrote:
> As per following line:
> "Caused by: java.lang.RuntimeException: Could
gured region split policy
'org.apache.phoenix.schema.MetaDataSplitPolicy'
for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
or table descriptor if you want to bypass sanity checks"
Regards,
Ankit Singhal
On Sun, Aug 12, 2018 at 6:46 PM, 倪项菲 wrote:
> Thanks all.
>
able for people from the ASF. I don't
> >> know what, if any, infrastructure exists to distribute Python modules.
> >> https://packaging.python.org/glossary/#term-built-distribution
> >>
> >> I feel like a sub-directory in the phoenix repository would be th
-phoenixdb
[3] https://github.com/Pirionfr/pyPhoenix
[4] https://issues.apache.org/jira/browse/PHOENIX-4636
Regards,
Ankit Singhal
On Tue, Apr 11, 2017 at 1:30 AM, James Taylor <jamestay...@ap
You might be hitting PHOENIX-4785
(https://jira.apache.org/jira/browse/PHOENIX-4785); you can apply the
patch on top of 4.14 and see if it fixes your problem.
Regards,
Ankit Singhal
On Wed, Sep 26, 2018 at 2:33 PM Batyrshin Alexander <0x62...@gmail.com>
wrote:
> Any advices?
Regards,
Ankit Singhal
On Thu, Sep 20, 2018 at 10:24 AM Batyrshin Alexander <0x62...@gmail.com>
wrote:
> Nope, it was client side config.
> Thank you for response.
>
> On 20 Sep 2018, at 05:36, Jaanai Zhang wrote:
>
> Are you configuring these on the server sid
Is connecting to HBase and running some commands through the HBase shell
working? As per the stack trace, it seems your HBase is not up; look at the
master and regionserver logs for errors.
On Tue, Dec 4, 2018 at 4:17 AM Raghavendra Channarayappa <
raghavendra.channaraya...@myntra.com> wrote:
> Can someone
String phoenixColumnName =
pTable.getColumnForColumnQualifier("0".getBytes(),
hbaseColumnQualifierBytes).getName();
Regards,
Ankit Singhal
On Tue, Jan 8, 2019 at 10:03 AM Josh Elser wrote:
> (from the peanut-gallery)
>
> That sounds to me like a useful utility to share with others if you're
> going to write
To better understand the problem, we may need your DDL for both the
indexes and the data table, and also the query using your secondary index.
And please try some of the tuning documented at
https://phoenix.apache.org/secondary_indexing.html and see if it helps.
On Tue, Sep 18, 2018 at 11:25 AM Josh
As Thomas said, the number of splits will be equal to the number of
guideposts available for the table, or the ones required to cover the
filter.
If you are seeing one split per region, then either stats are disabled or
the guidepost width is set higher than the size of the region, so try
reducing the
code to fix the issue, so the patch
would really be appreciated.
And also, can you try running "select a,b,c,d,e,f,g,h,i,j,k,m,n from
TEST_PHOENIX.APP where c=2 and h = 1 limit 5" and see if the index is
getting used?
Regards,
Ankit Singhal
On Tue, Aug 20, 2019 at 1:49 AM you Zhuang
wrote
sensitive information about your environment and data, e.g. hbase:meta
has IP addresses/hostnames and SYSTEM.STATS has data row keys, so upload
only if you think it's test data and the hostnames have no significance).
Thanks,
Ankit Singhal
On Mon, Aug 19, 2019 at 11:17 PM venkata subbarayudu
wrote
>> If not possible I guess we have to look at doing something at the HBase
level.
As Josh said, it's not yet supported in Phoenix, though you may try using
HBase's cell-level security with some Phoenix internal APIs and let us know
if it works for you.
Sharing some sample code if you want to try.
/**