} because ${psql.root.logger} doesn't contain 'DRFA'.
This is the log on the Phoenix client side.
At 2016-01-12 03:30:23, "Billy Watson" <williamrwat...@gmail.com> wrote:
Phoenix should log to the HBase logs, if I'm not mistaken.
William Watson
Software Engineer
(904) 705-7056 PCS
On M
ned data and do nothing for unsigned data.
I hope this will help you solve your problem.
Thanks.
William.
At 2016-05-30 22:24:15, "Christian Hellström" <psilonl...@gmail.com> wrote:
Because HBase does not store metadata, which Phoenix needs to be a true SQL
skin.
On 30
Hi all,
For time and date data types, such as Time, Date and Timestamp, we cannot
say a Time value is negative or positive, and they don't have the 'binary sort
issue' that signed integers have. So I think they should be implemented as
UNSIGNED_LONG instead of LONG.
Can someone tell me why they
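For context, the 'binary sort issue' refers to raw two's-complement bytes of negative values sorting after positive values in HBase's lexicographic row-key order; Phoenix's signed types flip the sign bit to correct this, while the UNSIGNED_* types keep HBase's raw Bytes.toBytes() layout. A hedged sketch (table and column names are hypothetical):

```sql
-- Hypothetical view over a pre-existing HBase table whose values were
-- written with HBase's raw big-endian Bytes.toBytes() serialization:
CREATE VIEW "events" (
    pk VARCHAR PRIMARY KEY,
    "cf"."created" UNSIGNED_DATE  -- matches raw HBase byte layout
);

-- A table created through Phoenix itself should use DATE/TIME/TIMESTAMP;
-- their encoding flips the sign bit so pre-1970 values still sort correctly.
CREATE TABLE events_native (
    pk VARCHAR PRIMARY KEY,
    created DATE
);
```

The UNSIGNED_* variants exist mainly for mapping data already written by other HBase clients; for tables Phoenix owns, the signed types are the usual choice.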
eature personally in your personal
branch, but I don't know the best way to support this in an official Phoenix
release. What do you think of this? Any suggested design?
Thanks.
William.
At 2016-10-13 18:12:56, "Yang Zhang" <zhang.yang...@gmail.com> wrote:
Hello everyone
I
solution. Otherwise, I strongly recommend that you
use the existing solution that James provided.
Thanks,
William
At 2016-10-17 13:42:07, "James Taylor" <jamestay...@apache.org> wrote:
FYI, a couple of timestamp-related features that Phoenix supports today include:
- specify/fil
, firstname:firstname,
lastname:lastname')
tblproperties ('hbase.table.name' = 'Person');
It threw an exception and said the table 'Person' does not exist. I assume
I can do that because these tables are created on HBase. Please help.
Thanks,
William.
'
with serdeproperties ('hbase.columns.mapping' = ':key, 0:FIRSTNAME, 0:LASTNAME')
tblproperties ('hbase.table.name' = 'PERSON');
it’s getting more fun now :).
William.
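For anyone hitting the same error: HBase table names are case-sensitive, and Phoenix upper-cases unquoted identifiers, which is why 'Person' had to become 'PERSON'. A complete sketch of the working Hive DDL under that assumption (column names taken from the thread):

```sql
-- Hive external table over the Phoenix-created HBase table 'PERSON':
CREATE EXTERNAL TABLE person (rowkey STRING, firstname STRING, lastname STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,0:FIRSTNAME,0:LASTNAME')
TBLPROPERTIES ('hbase.table.name' = 'PERSON');
```

Treat this as a sketch rather than a tested statement; the column family ('0') and qualifier names must match exactly what Phoenix wrote to HBase.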
From: Ravi Kiran [mailto:maghamraviki...@gmail.com]
Sent: Sunday, March 15, 2015 6:09 AM
To: user@phoenix.apache.org
Subject: Re: Using
Hi does anyone know which AMI version I should choose when creating EMR
cluster in AWS?
The phoenix.apache.org site says to use 3.0.1, but it is no longer available
as a choice for EMR. I used 3.7.0, but it errors out.
Thanks,
William.
Hi all,
Looking to see if there are any best practices or recommendations on
monitoring Phoenix performance and load in production (so that we can
detect degradation before it leads to serious issues, or troubleshoot
slowdowns).
I see that there are some metrics available at the HBase level,
Hi everyone,
I am investigating a strange looking entry in our SYSTEM.CATALOG table. The
row is an index table (TABLE_TYPE = i) but it does not contain any other
index information (no DATA_TABLE_NAME and INDEX_TYPE, etc.).
Has anyone encountered a similar situation, or is there any other way to
--+
4 rows selected (0.288 seconds)
On Fri, Feb 2, 2018 at 1:23 PM William Shen <wills...@marinsoftware.com>
wrote:
> Hi everyone,
>
> I am investigating a strange looking entry in our SYSTEM.CATALOG table.
> The row is an index table (TABLE_TYPE = i) but it does not contain any
&
Thank you James!
On Tue, Feb 6, 2018 at 10:21 AM James Taylor <jamestay...@apache.org> wrote:
> Hi William,
> The system catalog table changes as new features are implemented. The API
> that you can count on being stable is JDBC and in particular for metadata,
> ou
Miles beat me to it, but yes that's what I was referring to.
On Mon, Jul 16, 2018 at 10:22 AM alchemist
wrote:
> Thanks so much William.
>
> I have a table, say TestTable, with 5 columns, say rowPK, col1, col2,
> col3, mediaId etc. I need to update all the columns o
I believe you can try using UPSERT/SELECT on the same table (without a temp
table).
Maybe you can elaborate a little more on what you are trying to do, because
your example query doesn't totally make sense to me: it is upserting
values of col3 and col4 into col1 and col2?
On Mon, Jul 16, 2018 at 6:19 AM
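For reference, a same-table UPSERT SELECT along the lines suggested above might look like this (column names are taken from the example in the thread; treat it as a sketch, not a tested statement):

```sql
-- Copies col3/col4 into col1/col2 for every row, without a temp table;
-- rows are matched on the primary key (rowPK):
UPSERT INTO TestTable (rowPK, col1, col2)
SELECT rowPK, col3, col4 FROM TestTable;
```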
Unsubscribe
Thanks,
William.
Hi all,
We are running Phoenix 4.13, and periodically we would encounter the
following exception when querying from Phoenix in our staging environment.
Initially, we thought we had some incompatible client version connecting
and creating data corruption, but after ensuring that we are only
x00\x00\x00\x00\x00\x14'\x00\x07\x80\x00\x00\x00\x00\xBC\xF3^"))
=> "00fd00000014270007fdfdfd5e"
On Tue, Oct 16, 2018 at 1:15 PM William Shen
wrote:
> Hi there,
>
> I am trying to scan using a partial match on the row key (derived from the
> Phoenix primary key), however, hbase shell is returning
Hi there,
I am trying to scan using a partial match on the row key (derived from the
Phoenix primary key), however, hbase shell is returning results that do not
look like a match. Can someone help me understand why the following row
keys are considered a match and returned?
In addition, I am not
xed length data types we just store the individual elements.
>
> On Fri, Oct 19, 2018 at 2:40 PM, William Shen
> wrote:
>
>> Hi,
>>
>> Sorry if this is too basic of a question. I tried to look through the
>> documentation but could not find the information. How are P
Hi,
Sorry if this is too basic of a question. I tried to look through the
documentation but could not find the information. How are Phoenix Arrays
stored in HBase, and in particular, how are varchar arrays stored?
I tried to upsert data in phoenix, and compare the HBase value:
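For anyone trying to reproduce the comparison, a minimal sketch (table and values are hypothetical):

```sql
CREATE TABLE array_demo (id VARCHAR PRIMARY KEY, names VARCHAR ARRAY);
UPSERT INTO array_demo VALUES ('row1', ARRAY['John', 'Doe']);
-- Note that Phoenix array indexes are 1-based:
SELECT names[1] FROM array_demo WHERE id = 'row1';
```

The whole array is stored in a single HBase cell, so the raw cell value seen from hbase shell is Phoenix's own array serialization rather than a simple concatenation of the elements.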
'0'
)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
On Wed, Oct 17, 2018 at 3:21 PM William Shen
wrote:
> Thank Jaa
wrote:
> It looks like a bug where the remaining part is greater than the length
> retrieved in the ByteBuffer. Maybe there is a problem with the position of
> the ByteBuffer or the length of the target byte array.
>
>
>Jaanai Zhang
>Best regards!
>
Hi there,
I have encountered the following exception while trying to query from
Phoenix (was able to generate the exception doing a simple SELECT
count(1)). I have verified (MD5) that each region server has the correct
phoenix jars. Would appreciate any guidance on how to proceed further in
figuration.java:2550)
... 27 more
On Wed, Sep 19, 2018 at 2:15 PM William Shen
wrote:
> Hi there,
>
> I have encountered the following exception while trying to query from
> Phoenix (was able to generate the exception doing a simple SELECT
> count(1)). I have verified (MD5) that
Vincent,
Do we expect to see the same behavior with SELECT?
I observed the following. Not sure what about applying the limit is adding
to the time... especially since there is only one row, much less than the
actual LIMIT.
SELECT tb1."updBy" FROM "prod"."ADGROUPS" tb1 WHERE
Shawn, in my own investigation with the SELECT statements running slower
with LIMIT, I have found that with the limit under certain threshold,
Phoenix will perform the scan in SERIAL instead of PARALLEL. Not sure why
that is the case, but maybe your explain plan would yield the same insight.
On
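A hedged way to check this on your own table is to compare plans with and without a small LIMIT (table and column names are taken from the thread; the exact plan text depends on your Phoenix version and statistics):

```sql
-- A small LIMIT may switch the plan to a SERIAL scan:
EXPLAIN SELECT tb1."updBy" FROM "prod"."ADGROUPS" tb1 LIMIT 10;

-- The same query without the LIMIT (or with a large one) typically
-- shows a PARALLEL n-WAY scan instead:
EXPLAIN SELECT tb1."updBy" FROM "prod"."ADGROUPS" tb1;
```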
Hi there,
I am setting up Pherf to do some simple benchmarking to compare different
queries. The instructions seem straightforward; however, I noticed that even
though I had added the NO_CACHE hint, only the first execution of the query
seems to run with an expected duration, and the subsequent
8053 | -5 |
| null | 80 | 8054 | 12 |
| null | 81 | 8055 | 12 |
| null | 82 | 8056 | 12 |
| null | 83 | 8057 | 12 |
| nul
Hi there,
We've encountered the following compaction failure(#1) for a Phoenix table,
and are not sure how to make sense of it. Using the HBase row key from the
error, we are able to query data directly from hbase shell, and by
examining the data, there isn't anything immediately obvious about
box/hbase-user/201504.mbox/%3c44434b16-823d-43ef-aab1-337bfba6f...@5dlab.com%3E
>
>
>
>Jaanai Zhang
> Best regards!
>
>
>
> William Shen wrote on Thu, Nov 22, 2018 at 8:28 AM:
>
>> Further narrowed down the HBase row, and was able
Realized the message didn't go through on the dev list... reposting on
user@phoenix
-- Forwarded message -
From: William Shen
Date: Thu, Sep 13, 2018 at 11:57 AM
Subject: Access Client Side Metrics for PhoenixRDD usage
To: d...@phoenix.apache.org
Hi all,
I see that LLAM-1819 had
Hi all,
Do we have any way of passing in hints when querying Phoenix using
PhoenixRDD in Spark? I reviewed the implementation of PhoenixRDD
and PhoenixRecordWritable, but was not able to find an obvious way to do
so. Is it supported?
Thanks in advance!
- Will
I keep getting this every time I send to the users list. Can we force
remove the subscriber (martin.pernollet-...@sgcib.com) ? Or is this only
happening to me?
Thanks
- Will
-- Forwarded message -
From:
Date: Thu, Apr 4, 2019 at 12:17 PM
Subject: DELIVERY FAILURE: Error
val. Trying it now to see if
> > I'm a moderator (I don't think I am, but might be able to add myself as
> > one).
> >
> > On 4/4/19 7:15 PM, William Shen wrote:
> >> I kept getting this every time I send to the users list. Can we force
> >> remo
some time, mainly including two aspects: 1. access the
> SYSTEM.CATALOG table to get schema information of the table; 2. access the
> meta table of HBase to get region information of the table
>
>
>Jaanai Zhang
>Best regards!
>
>
query
>>
>> I am not sure what the reasons are; perhaps you can enable TRACE logging
>> to find what leads to the slowness. I guess that some meta information is
>> reloaded under a heavy write workload.
>>
>> --------
>>Jaanai Z
Hi there,
I have a component that makes Phoenix queries via the Phoenix JDBC
Connection. I noticed that consistently, the Phoenix Client takes longer to
execute a PreparedStatement and it takes longer to read through the
ResultSet for a period of time (~15m) after a restart of the component. It
Hi all,
I've tried looking around for documentation and in source code, but did not
have much luck trying to understand the following logging from Phoenix JDBC
that gets logged in DEBUG mode. Does anyone know what it means, and is it a
problem to see a lot of these in the log? Thanks!
DEBUG
o pass in?
>
> On Wed, Apr 10, 2019 at 10:42 AM William Shen
> wrote:
>
>> Anyone still using PhoenixRDD with Spark, or anyone had used it in the
>> past that might be able to answer this?
>>
>> Thanks!
>>
>> On Thu, Apr 4, 2019 at 12:16 PM William She
Thanks Thomas. I've created
https://issues.apache.org/jira/browse/PHOENIX-5238
On Wed, Apr 10, 2019 at 8:39 PM Thomas D'Silva
wrote:
> Can you please file a JIRA for this?
>
> On Wed, Apr 10, 2019 at 5:53 PM William Shen
> wrote:
>
>> Thanks for chiming in Thomas. W
Hi,
It is possible to collect metrics on the client side following instructions
on https://phoenix.apache.org/metrics.html. However, is there any metrics
collection support when using PhoenixRDD?
Thanks,
- Will
Jestan,
It seems like a bug to me. What version of Phoenix are you using, and did
you create a ticket already?
On Tue, May 14, 2019 at 10:26 AM Jestan Nirojan
wrote:
> Hi,
>
> I am trying to use COALESCE function to handle default value in WHERE
> condition like below.
>
> select * from table1
ing should work:
select coalesce(functionThatMightReturnNull(), now()) as date;
On Tue, May 14, 2019 at 11:14 AM William Shen
wrote:
> Jestan,
> It seems like a bug to me. What version of Phoenix are you using, and did
> you create a ticket already?
>
> On Tue, May 14, 2019 a
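Building on the suggestion above, COALESCE can also supply a default for a bind parameter inside a WHERE clause (the column name here is hypothetical):

```sql
-- Falls back to the last 7 days when the bind parameter is null:
SELECT * FROM table1
WHERE created_date >= COALESCE(?, CURRENT_DATE() - 7);
```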
Hieu,
You're welcome to file a JIRA and submit a fix for the CDH branches
(4.14-cdh5.11, 4.x-cdh5.15, etc). I think while 4.x-HBase-1.2 is EOL, the
CDH branches are not. CDH 5.x is based on HBase-1.2, but it also contains a
concoction of 1.2.x, 1.3, 1.4 patches back-ported by Cloudera. (Thomas,
Thank you for clarifying Thomas!
On Thu, May 23, 2019 at 11:27 AM Thomas D'Silva
wrote:
> The CDH branches have been maintained by Pedro; I don't know what the
> release plan is for those branches.
>
> On Tue, May 21, 2019 at 11:13 PM William Shen
> wrote:
>
>> Hi
Josh,
Any luck on getting Infra to help, or finding a moderator? I'm still
getting this spam... (Who are the moderators anyway?)
Thanks,
- Will
On Fri, Apr 5, 2019 at 3:38 PM William Shen
wrote:
> Thanks Josh for looking into this!
>
> On Fri, Apr 5, 2019 at 12:52 PM Josh Els
-----
>Jaanai Zhang
>Best regards!
>
>
>
> Jestan Nirojan <jestanniro...@gmail.com> wrote on Wed, May 15, 2019 at 5:04 AM:
>
> Hi William,
>
> Thanks, It is working with
> coalesce(functionThatMightReturnNull(), now()) w
Anyone still using PhoenixRDD with Spark, or anyone who has used it in the
past who might be able to answer this?
Thanks!
On Thu, Apr 4, 2019 at 12:16 PM William Shen
wrote:
> Hi all,
>
> Do we have any way of passing in hints when querying Phoenix using
> PhoenixRDD in Spark
Hello,
I'm new to HBase/Phoenix and trying to reconcile the guidance for making a
Phoenix view for a preexisting table in HBase (found in the FAQ), and the
guidance for being able to search versions using the built-in HBase timestamp (
https://phoenix.apache.org/rowtimestamp.html)
I can
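For what it's worth, the two features don't combine directly: ROW_TIMESTAMP maps a primary-key date column onto the HBase cell timestamp, but it applies to tables created through Phoenix, not to views over pre-existing HBase tables. A sketch under that assumption (table and column names are hypothetical):

```sql
-- The ROW_TIMESTAMP column must be part of the primary key:
CREATE TABLE events (
    created_date DATE NOT NULL,
    host VARCHAR NOT NULL,
    payload VARCHAR
    CONSTRAINT pk PRIMARY KEY (created_date ROW_TIMESTAMP, host)
);
```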