Glad you have a workaround. Would you mind filing a Calcite bug for the
Avatica component after you finish your testing?
Thanks,
James
On Sat, Apr 2, 2016 at 4:10 AM, F21 wrote:
> I was able to successfully commit a transaction if I set the serialization
> of the phoenix
Excellent, Vijay. Nice work - the APIs look very clean. We'll put up a page
on the Phoenix site that points folks toward this (and other Phoenix
add-ons).
James
On Thursday, March 31, 2016, Josh Mahonin wrote:
> This looks awesome, good work!
>
> On Thu, Mar 24, 2016 at
Hi Ankur,
Try setting the UPDATE_CACHE_FREQUENCY on your table (4.7.0 or above) to
prevent the client from checking with the server on whether or not your
table metadata is up to date. See here[1] for more information. You can
issue a command like this which will hold on to your metadata on the
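For example, a sketch of such a command (the table name is made up; the frequency is in milliseconds, so this caches metadata client-side for 15 minutes):

```sql
-- Hypothetical table; skip the server metadata check for 15 minutes
ALTER TABLE my_table SET UPDATE_CACHE_FREQUENCY = 900000;
```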
Hi Amit,
Using 4.7.0-HBase-1.1 release, I see the index being used for that query
(see below). An index will help some, as the aggregation can be done in
place as the scan over the index is occurring (as opposed to having to hold
the distinct values found during grouping in memory per chunk of
Hi Amit,
Have you seen our documentation and examples for ALTER TABLE [1]?
So you could do ALTER TABLE my_table SET BLOCKCACHE=false;
If you want to prevent rows from being put in the block cache on a per
query basis, you can use the /*+ NO_CACHE */ hint [2] on a query like this:
SELECT /*+
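For instance, with a hypothetical table and filter:

```sql
-- Rows read by this query won't be put in the HBase block cache
SELECT /*+ NO_CACHE */ * FROM my_table WHERE created_date > CURRENT_DATE() - 7;
```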
Steve,
For deletes and upserts through the query server in 4.6, did you set auto
commit to be true by default (set phoenix.connection.autoCommit to true)?
In 4.7 this has been fixed, but prior to this, I believe commit and
rollback were a noop. Is that right, Josh?
Thanks,
James
On Thursday, March
Our only public APIs are JDBC and our various integrations with Spark, MR,
etc. Though it's unlikely these APIs will change as Sergey mentioned, it's
possible. The actual binary format won't change, though (at least for
existing tables).
Thanks,
James
On Wed, Mar 23, 2016 at 1:05 PM, Sergey
Alok,
Please file a JIRA with this info. We need a representative data set that
exhibits this bug - would it be possible to provide that? The smaller the
better.
Thanks,
James
On Monday, March 21, 2016, Alok Singh wrote:
> Environment:
> * Phoenix 4.6
> * Hbase 1.1.2
> *
ed
>>>for hbase.dynamic.jars.dir.
>>>
>>>
>>> My question is, can that be any 'udf-user-specific' jar which need to be
>>> copied to HDFS or would it need to register the function and update the
>>> custom UDF classes inside phoenix-core.jar and
DatabaseMetaData interface from
> Connection.getMetaData().
>
> I may have this detail wrong, but the point remains: applications are
> getting an incorrect value, or misinterpreting the correct value they
> receive. From what I can see, this issue is unique to Phoenix.
>
> On Th
ase server class path since this
> will be executed on the server side at runtime.*
>
> Does that mean, to register my custom function, i should edit the
> *ExpressionType
> enum *exists in Phoenix and rebuild the *phoenix jar?*
>
>
>
>
> On Thu, Mar 17, 2016 at 6:17 PM,
re.jar and rebuild the
> 'phoenix-core.jar'
>
> Regards
> Swapna
>
>
>
>
> On Fri, Jan 29, 2016 at 6:31 PM, James Taylor <jamestay...@apache.org>
> wrote:
>
>> Hi Swapna,
>> We currently don't support custom aggregate UDF, and it looks like you
>
Saurabh,
Is your table write-once, append-only data (as it looks to be based on the
primary key constraint)? If that's the case, I'd recommend adding
IMMUTABLE_ROWS=true to the end of your CREATE TABLE statement as this will
improve performance.
For the INDEX hint, it needs to be /*+ INDEX */
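If the optimizer still doesn't pick the index, the hint can name it explicitly. A sketch with made-up table, index, and column names:

```sql
-- Force the optimizer to consider my_index for this query
SELECT /*+ INDEX(my_table my_index) */ cust_id
FROM my_table
WHERE indexed_col = 'abc';
```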
ry
>> index table.
>> Main table still doesn't show the MAX_FILESIZE attribute.
>>
>> On Sat, Mar 12, 2016 at 12:41 PM, James Taylor <jamestay...@apache.org>
>> wrote:
>>
>>> It should show up for the index table. I did a test on my local HBase,
Hi Anil,
Phoenix estimates the ratio between the data table and index table as shown
below to attempt to get the same number of splits in your index table as
your data table.
/*
* Approximate ratio between index table size and data table size:
* More or less equal to the ratio between the
FYI, I added Gabriel's excellent answer to our FAQs here:
https://phoenix.apache.org/faq.html#Why_empty_key_value
On Fri, Mar 11, 2016 at 4:10 AM, Ankit Singhal
wrote:
> You may check discussion from below mail chain.
>
Apache Phoenix enables OLTP and operational analytics for Hadoop through
SQL support and integration with other projects in the ecosystem such as
Spark, HBase, Pig, Flume, and MapReduce.
We're pleased to announce our 4.7.0 release which includes:
- ACID transaction support (beta) [1]
- Enhanced
Applications should never query the SYSTEM.CATALOG directly. Instead they
should go through the DatabaseMetaData interface from
Connection.getMetaData(). For column type information, you'd use the
DatabaseMetaData#getColumn method[1] which would return the standard SQL
type for ARRAY in the
Awesome work, Josh. Thanks for letting us know - how about a tweet with an
@ mention of flywaydb and ApachePhoenix to help spread the word further?
James
On Tuesday, March 8, 2016, Josh Mahonin wrote:
> Hi all,
>
> Just thought I'd let you know that Flyway 4.0 was
Hi Rafit,
Did you confirm from the HBase shell whether or not the TTL took effect?
Thanks,
James
On Sat, Mar 5, 2016 at 6:20 PM, Rafit Izhak-Ratzin
wrote:
> Hi Samarth,
>
> The alter table request I am issuing is the following:
> 0: jdbc:phoenix:localhost> alter table
Hi Amit,
For Phoenix 4.6 on CDH, try using this git repo instead, courtesy of Andrew
Purtell:
https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.6-HBase-1.0-cdh5.5
Thanks,
James
On Mon, Feb 29, 2016 at 10:19 PM, Amit Shah wrote:
> Hi Sergey,
>
> I get lot
Hi Peter,
We'd appreciate it if you could start a new thread with an appropriate
subject (rather than the confirm subscribe email you got). We get a lot of
questions on our dev and user lists and having a relevant subject helps
other users find the answer to the same question.
Thanks,
James
On
Lalinský <lalin...@gmail.com> wrote:
> On Sat, Feb 6, 2016 at 7:28 AM, James Taylor <jamestay...@apache.org>
> wrote:
>
>> Lukás - your Python Query Server support would be a welcome addition to
>> Phoenix or Avatica. Send us a pull request for a new module
665a1ac6da8ffe95454a5299a8e55f3. ...
>
> I may not have described my problem very well, but I have already played
> around with the syntax a lot and am pretty sure there is no current
> solution. But I would love to be wrong. :)
>
> Thanks,
> Steve
>
> On Thu, Feb
+1 to Ankit's suggestion. If you haven't altered the table, then you can
just connect at the timestamp of one more than the timestamp at which the
table was created (see [1]), and issue the DROP TABLE command from that
connection. If you have altered the table, then you have to be more careful
as
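As a sketch of that connection (the URL, table name, and timestamp are placeholders; "CurrentSCN" is the Phoenix connection property for operating at a fixed timestamp):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class DropAtScn {
    public static void main(String[] args) throws Exception {
        long creationTimestamp = 1453939200000L; // placeholder value
        Properties props = new Properties();
        // Connect at one past the table's creation timestamp
        props.setProperty("CurrentSCN", Long.toString(creationTimestamp + 1));
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost", props);
             Statement stmt = conn.createStatement()) {
            stmt.execute("DROP TABLE my_table");
        }
    }
}
```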
Hi Steve,
You can do what you want with a view today, but the syntax is just a bit
different than what you tried. You declare your dynamic columns after the
view name, like this:
CREATE VIEW MY_VIEW("dynamic_field" VARCHAR) AS SELECT * FROM MY_TABLE
You can also alter a view and dynamically
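Picking up that last point, a sketch of the ALTER form (view and column names are made up):

```sql
-- Add a column to an existing view after the fact
ALTER VIEW my_view ADD another_field VARCHAR;
```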
Hi Johannes,
To override this behavior, you can do the following:
- set phoenix.sequence.saltBuckets in your client-side hbase-site.xml to 0.
- manually disable and drop SYSTEM.SEQUENCE from an HBase shell. Note -
this assumes you're not using sequences - if you are, let us know.
- re-connect a
Hi Dor,
Whether or not Phoenix becomes part of CDH is not under our control. It
*is* under your control, though (assuming you're a customer of CDH). The
*only* way Phoenix will transition from being in Cloudera Labs to being
part of the official CDH distro is if you and other customers demand it.
Hi Nanda,
This error occurs if your table is immutable, you have an index on the
table, and your WHERE clause is filtering on a column not contained in all
of the indexes. If that's not the case, would you mind posting a complete
end-to-end test, as it's possible you're hitting a bug?
Thanks,
James
Yes
On Fri, Feb 19, 2016 at 11:33 AM, ashish tapdiya
wrote:
> Hi,
>
> Is phoenix.query.maxGlobalMemoryPercentage a server side property?
>
> Thanks,
> ~Ashish
>
I believe some folks over in the HBase community have revived YCSB, but I'm
not sure where its new home is. Also, not sure if they applied Mujtaba's
patch. I'd recommend asking over on the HBase dev or user list.
FWIW, we developed Pherf to enable Phoenix users to compare various Phoenix
Hi Anil,
Please post your CREATE TABLE and CREATE INDEX statement.
Thanks,
James
On Thu, Feb 18, 2016 at 11:06 AM, anil gupta wrote:
> Hi,
>
> I have a phoenix table, in that table we defined 2CF. I have one global
> secondary index in this table.
> We also see a 3rd
See https://phoenix.apache.org/paged.html and the unit test for
QueryMoreIT. The row value constructor (RVC) was implemented specifically
to provide an efficient means of pagination over HBase data.
Thanks,
James
On Wed, Feb 17, 2016 at 10:54 AM, Steve Terrell
wrote:
> I
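For reference, the RVC pagination pattern looks roughly like this (table and key column names are made up; the bind values come from the last row of the previous page):

```sql
-- Fetch the next page of 20 rows after the last row already seen
SELECT * FROM my_table
WHERE (pk1, pk2) > (?, ?)
ORDER BY pk1, pk2
LIMIT 20;
```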
Thanks, Jonathan. I haven't seen this, but a patch would be much
appreciated.
James
On Monday, February 15, 2016, Jonathan Leech wrote:
> Has anyone else seen this? Happening under load in jdk 1.7.0_80 / phoenix
> 4.5.2 - cloudera labs. Based on the source code, It
I think the question Anil is asking is "Does Pig have support for TinyInt
(byte) and SmallInt (short)?" I don't know the answer.
On Sat, Feb 13, 2016 at 9:46 AM, Ravi Kiran
wrote:
> Hi Anil,
>
> We do a mapping of PTinyInt and PSmallInt to Pig DataType.INTEGER.
>
Specifically schema changes from an HBase POV, like removing a column
family or renaming the table.
On Friday, February 12, 2016, Jesse Yates wrote:
> Just have to make sure you don't have schema change during snapshots
>
> On Fri, Feb 12, 2016 at 6:24 PM Gaurav
Hi Kannan,
Yes, you can keep 3 versions of a cell in Phoenix (just add VERSIONS=3 to
your DDL statement), however you'll only see one version when you query (by
default, the latest - see [1] for how to see an earlier
version). PHOENIX-590 (not implemented) is about seeing all versions.
HTH,
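A minimal sketch of such a DDL statement (table and column names are made up; VERSIONS is passed through to the underlying HBase column family):

```sql
CREATE TABLE my_table (
    k VARCHAR PRIMARY KEY,
    v VARCHAR
) VERSIONS=3;
```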
Hi Noam,
We don't support table rename currently - please file a JIRA. Depending on
how you're using Phoenix, you may be able to do this yourself by using
views[1]. For example, given a regular Phoenix table named my_table, you
can create a view on it like this:
CREATE VIEW my_view AS SELECT * FROM my_table
Glad you got it working, Steve. If you have a chance to file JIRAs where
you ran into issues, that'd be much appreciated.
Lukás - your Python Query Server support would be a welcome addition to
Phoenix or Avatica. Send us a pull request for a new module if you're
interested.
James
On
Phoenix 4.7.0 is not released yet. A couple of issues came up in the last
RC, so we'll roll a new one very soon.
Thanks,
James
On Fri, Feb 5, 2016 at 9:23 AM, Steve Terrell wrote:
> Oh, I didn't know there was a 4.7. Following the links on
>
But please feel free to play around with the last RC:
http://mail-archives.apache.org/mod_mbox/phoenix-dev/201601.mbox/%3CCAAF1JdgFzrwWBBcs586hkJnoZaZFBYDGxtaqUZjAuQM1XwBgOQ%40mail.gmail.com%3E
On Fri, Feb 5, 2016 at 9:26 AM, James Taylor <jamestay...@apache.org> wrote:
> Phoe
The problem is that EMR hasn't updated its HBase version past 0.94 in
the last two years. Phoenix stopped doing releases supporting 0.94 a year
ago and HBase has moved well past 0.94 as well. Phoenix will run just fine
on EMR if they update their HBase version. My two cents: I'd recommend
Let the folks at EMR know. I will too.
On Thu, Feb 4, 2016 at 5:14 PM, j pimmel <frankly.wat...@gmail.com> wrote:
> It would appear not at this point. Though it would be great to get the
> full Hbase + Phoenix stacks supported out-of-the-box.
>
>
>
> On Thu, 4 Feb 20
re-appearing in current
> EMR releases, it seems like standards are getting baked in and it would
> likely ease adoption?
>
>
> http://docs.aws.amazon.com/ElasticMapReduce/latest/ReleaseGuide/emr-release-components.html
>
> Thanks
>
> J
>
> On Thu, 4 Feb 201
See https://phoenix.apache.org/language/index.html#alter_index
On Thu, Feb 4, 2016 at 12:11 PM, Kumar Palaniappan <
kpalaniap...@marinsoftware.com> wrote:
> While data migration, we simply drop the indices on the tables and
> recreate. Would like to avoid.
>
> Is there disable all index syntax
If auto commit is off or the table is transactional, we batch deletes when
you do a commit (see HTable.batchMutation() call in MutationState) or if
auto commit is on (depending on the query) we process completely on server
side (like BulkDeleteProtocol) in our UngroupedAggregateRegionObserver
The actual Delete marker is created in PRowImpl which lives inside of
PTableImpl.
On Wed, Feb 3, 2016 at 8:44 AM, James Taylor <jamestay...@apache.org> wrote:
> MutationState.java
>
> On Wed, Feb 3, 2016 at 8:40 AM, Arun Kumaran Sabtharishi <
> arun1...@gmail.com> wrote:
MutationState.java
On Wed, Feb 3, 2016 at 8:40 AM, Arun Kumaran Sabtharishi wrote:
> After trying to dig through and debug the phoenix source code several
> hours, could not find the one place where the actual phoenix delete
> happens. Kindly point me where does the delete
I encourage you to use views[1] instead. You can dynamically add/remove
columns from a view and this way Phoenix keeps track of it for you and you
get all the other standard features.
Thanks,
James
[1] https://phoenix.apache.org/views.html
On Tue, Feb 2, 2016 at 1:42 PM, Serega Sheypak
I've submitted a Phoenix talk for Hadoop Summit. I'd appreciate a vote for
it here:
http://hadoopsummit.uservoice.com/forums/344967-future-of-apache-hadoop
Regards,
James
Hi William,
Our date types are signed so that you get a larger range: both positive
millis after 1/1/1970 and negative millis before 1/1/1970.
Thanks,
James
On Monday, February 1, 2016, William wrote:
> Hi all,
>For time and date data types, such as Time, Date and
Thanks, Gabriel. If you have any spare cycles, it might be good to add this
to the CSV Bulk Load page and/or as an FAQ as it's come up a few times.
James
On Fri, Jan 29, 2016 at 11:03 PM, Parth Sawant
wrote:
> Hi Gabriel,
> This worked perfectly.
>
> Thanks a lot.
Hi Swapna,
We currently don't support custom aggregate UDF, and it looks like you
found the JIRA here: PHOENIX-2069. It would be a natural extension of UDFs.
Would be great to capture your use case and requirements on the JIRA to
make sure the functionality will meet your needs.
Thanks,
James
On
This sounds like a good idea. Please file a JIRA and we'll get this on the
roadmap. What tooling are you using, and would support for
Statement.cancel() do the trick?
On Wed, Jan 27, 2016 at 7:27 PM, Ken Hampson wrote:
> I would be interested in this as well, knowing how
Hi Binu,
Phoenix has never supported HBase 0.96, so I'm not sure where you got the
release from.
I recommend upgrading to a later, supported version of HBase and a later
version of Phoenix. Give the 4.7.0 RC a try.
One other tip in particular for views you create over existing HBase
tables. Use
Hi Venkat,
I believe this issue has been fixed in PHOENIX-2601. Please give our 4.7.0
a try as this fix is included there.
Thanks,
James
On Friday, January 22, 2016, Venkat Raman wrote:
> Hi All,
>
> We are using secondary indexes and see the following issue. Consider main
>
>>>>>>>>>>
>>>>>>>>>> No other exceptions that I can find. YARN apparently doesn't want
>>>>>>>>>> to aggregate spark's logs.
>>>>>>>>>>
>>>>>>>
Hi Willem,
Let us know how we can help as you start getting into this, in particular
with your schema design based on your query requirements.
Thanks,
James
On Mon, Jan 18, 2016 at 8:50 AM, Pariksheet Barapatre
wrote:
> Hi Willem,
>
> Use Phoenix bulk load. I guess your
See https://phoenix.apache.org/secondary_indexing.html
Hints are not required unless you want Phoenix to join between the index
and data table because the index isn't fully covered and some of these
non-covered columns are referenced in the query.
bq. Doesn't a single global covered index
>
> On Jan 18, 2016, at 10:07 AM
Hi Anil,
This error occurs if you're performing an update that takes a long time on
a mutable table that has a secondary index. In this case, we make an RPC
before the update which sends index metadata to the region server which
it'll use for the duration of the update to generate the secondary
That was my first thought too, Alicia. However, our MR and Spark
integration uses a different code path. See my comment on PHOENIX-2599 for
a potential fix.
Thanks,
James
On Thu, Jan 14, 2016 at 2:55 PM, Alicia Shu wrote:
> With the fix of PHOENIX-2447
>
Hi Noam,
Please file a JIRA that includes the Phoenix version, the HBase version,
the DDL, query, and sample data (if needed to repro). It's possible you're
hitting an HBase bug in the reverse scan (HBASE-14155).
Thanks,
James
On Wed, Jan 13, 2016 at 4:22 AM, Bulvik, Noam
Thanks for reporting this, Noam. I filed PHOENIX-2593 for it.
James
On Wed, Jan 13, 2016 at 4:09 AM, Bulvik, Noam wrote:
> Hi,
>
>
>
> I am using Phoenix parcel for cloudera (latest parcel for CDH 5.4) and I
> have a table with data in varchar_array column.
>
>
>
>
An empty string is treated the same as a null string in Phoenix (just like
in Oracle). See PHOENIX-2422.
On Tue, Jan 12, 2016 at 12:24 PM, Nick Dimiduk wrote:
> Hi there,
>
> I have a question about 0-length VARCHAR columns used in primary keys.
> Specifically, I have a
Hi Ken,
PHOENIX-2434 improved our CSV handling of booleans and will appear in our
upcoming 4.7.0 release. It'd be good if you can confirm whether or not this
is what you need. We definitely want to support ingest of CSVs from other
RDBMSs.
There are a couple of other avenues of ingest into
ce the SQOOP-2649
> enhancement was still in patch form. It's definitely something I will keep
> an eye on going forward.
>
> Thanks again,
> - Ken
>
>
> On Fri, Jan 8, 2016 at 1:13 PM James Taylor <jamestay...@apache.org>
> wrote:
>
>> Hi Ken,
>> P
> pstmt.setString("STOCK_NAME", stockName);
>
> Do i need to use some other stuff than Phoenix MR integration to get that
> method?
>
> Thanks,
>
> Anil Gupta
>
>
>
> On Tue, Jan 5, 2016 at 8:48 PM, James Taylor <jamestay...@apache.org
> <ja
With JDBC, both will already work.
pstmt.setString("STOCK_NAME", stockName);
pstmt.setString(1, stockName);
On Tuesday, January 5, 2016, anil gupta wrote:
> Hi,
>
> I am using Phoenix4.4. I am trying to integrate my MapReduce job with
> Phoenix following this doc:
Gabriel may have meant the Cloudera labs release of 4.5.2, but I'm not sure
if that fix is there or not. We have no plans to do a 4.5.3 release. FYI,
Andrew put together a 4.6 version that works with CDH here too:
https://github.com/chiastic-security/phoenix-for-cloudera. We also plan to
do a 4.7
Have you seen the presentations page[1] on our website?
Thanks,
James
[1] https://phoenix.apache.org/resources.html
On Tue, Dec 29, 2015 at 3:09 PM, Sachin Katakdound <
sachin.katakdo...@gmail.com> wrote:
> Does anyone know of a good Book or online articles that describe the deep
> Apache
See
https://phoenix.apache.org/faq.html#How_do_I_connect_to_secure_HBase_cluster
On Tue, Dec 22, 2015 at 8:42 PM, Ns G wrote:
> Hi There,
>
> Can any one provide me guidelines on how to install and access phoenix on
> a kerberised cluster? How to use keytab in jdbc to
Correct - it has to do with the way we encode column values in the row key.
Since a VARBINARY can be any length with any bytes, we cannot know where it
ends. Thus we only allow it at the end of the row key. With a BINARY,
you're telling Phoenix how big it is, so it can occur anywhere in the PK
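A sketch of what that allows (table and column names are made up):

```sql
-- BINARY(16) is fixed-width, so it can appear anywhere in the key;
-- VARBINARY is variable-length, so it must come last
CREATE TABLE my_table (
    id BINARY(16) NOT NULL,
    created DATE NOT NULL,
    payload VARBINARY,
    CONSTRAINT pk PRIMARY KEY (id, created, payload)
);
```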
You'd need to create multiple views, one for each metric_type:
CREATE VIEW mobile_product_metrics (new_col1 varchar) AS SELECT * FROM
product_metrics WHERE metric_type = 'm';
CREATE VIEW phone_product_metrics (new_col2 varchar) AS SELECT * FROM
product_metrics WHERE metric_type = 'p';
On
TIMESTAMP column to a new UNSIGNED_TIMESTAMP column?
>
> On Mon, Dec 21, 2015 at 10:29 PM, James Taylor <jamestay...@apache.org
> wrote:
>
>> Use UNSIGNED_TIMESTAMP instead.
>>
>> On Mon, Dec 21, 2015
Another good contribution would be to add this question to our FAQ.
On Tue, Dec 15, 2015 at 2:20 PM, Samarth Jain wrote:
> Kannan,
>
> See my response here:
>
>
bq. When this simple query with Order by and limit clause is executed, does
it return a valid data considering the fact that the data will be spread
across 4 region servers?
Yes
bq. Does this mean that 15 rows are gathered from each region server and
then the limit clause applied on the client?
Your analysis of the row key structure is correct. Those are all fixed
types (4 + 4 + 8 + 8 + 2 = 26 bytes for the key).
If you're going from 0.94 to 0.98, there's stuff you need to do to get your
data into the new format. Best to ask about this on the HBase user list or
look it up in the
Hi Li,
That's not performance degradation. Your query requires a full table scan
to calculate the row count. It's going to get slower as the table grows.
Your new queries will remain fast as the table grows in size.
Thanks,
James
On Fri, Dec 11, 2015 at 7:46 PM, Li Gao
There's some work that was done to make Phoenix and Hive work well together
here: https://github.com/apache/phoenix/pull/74 and here:
https://github.com/nmaillard/Phoenix-Hive
The pull is out of date, but could likely be revived - just needs an owner
and needs to be brought over the finish line.
Hi Venu,
Do you mean that you'd connect to zookeeper and read from zNode from a
Phoenix UDF? That sounds dangerous as a UDF gets executed for every row
when used in a WHERE clause during scanning and filtering from a region
server.
Thanks,
James
On Thu, Dec 10, 2015 at 9:45 AM, Venu Madhav
queries and let us know if this makes a difference.
5) make sure SYSTEM.STATS is still empty - a major compaction would cause
stats to be regenerated
SELECT sum(guide_posts_count) FROM SYSTEM.STATS -- should return 0
On Thu, Dec 10, 2015 at 12:33 PM, James Taylor <jamestay...@apache.org>
Hi Sumit,
I agree, these two queries should return the same result, as long as you
have the ORDER BY clause. What version of Phoenix are you using? What does
your DDL look like? Please file a JIRA that ideally includes a way of
reproducing the issue.
select current_timestamp from TBL order by
Thanks - most helpful would be a complete test case that reproduces it.
Would be helpful if you tried against 4.6 and/or master.
On Thursday, December 10, 2015, Sumit Nigam wrote:
> Thank you James.
>
> I am using Phoenix 4.5.1 with HBase-0.98.14.
>
> I am also noticing
Would it make sense to tweak the Spark installation instructions slightly
with this information, Josh?
On Wed, Dec 9, 2015 at 9:11 AM, Cox, Jonathan A wrote:
> Josh,
>
>
>
> Previously, I was using the SPARK_CLASSPATH, but then read that it was
> deprecated and switched to the
Zack,
Have you asked Hortonworks through your support channel? This sounds like
an issue related to the HDP version you have - you need to confirm with
them that upgrading to Phoenix 4.6.0 will work (and if there are any extra
steps you need to take).
Thanks,
James
On Wed, Dec 9, 2015 at 10:41
s to wait as much as 2 minutes to
> execute (I’m guessing from the pattern that it’s not actually the query
> that is slow, but a very long between when it gets queued and when it
> actually gets executed).
>
>
>
> Oh and the methods you mentioned aren’t in my version of PhoenixRuntime,
> evidently. I’m on 4.2
mmend doing a major compaction prior to running the
queries.
> Q3. Can I get the same population script so that I can report numbers from
> the local cluster.
>
You can use our bin/performance.py script to generate the data.
>
> Thanks,
> Ashish
>
> On Thu, Dec 3, 2015 at
You can disable stats through setting the phoenix.stats.guidepost.width
config parameter to a larger value in the server side hbase-site.xml. The
default is 104857600 bytes (100 MB). If you set it to your MAX_FILESIZE (the
size you allow a region to grow to before it splits - default 20GB), then
you're
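A sketch of that server-side setting (the value shown is 20 GB, the default MAX_FILESIZE; adjust it to match your own region size):

```xml
<!-- Server-side hbase-site.xml: a guidepost width matching MAX_FILESIZE
     means at most one guidepost per region, effectively disabling stats -->
<property>
  <name>phoenix.stats.guidepost.width</name>
  <value>21474836480</value>
</property>
```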
I've set phoenix.stats.guidepost.per.region to 1 and continue to see
> entries added to the system.stats table. I believe this should have the
> same effect? I'll try setting the guidepost width though.
>
>
> On Mon, Dec 7, 2015 at 12:11 PM, James Taylor <jamestay...@apache.org
> <java
--+
> | 653 |
> +--+
> 1 row selected (0.036 seconds)
>
>
> On Mon, Dec 7, 2015 at 2:41 PM, James Taylor <jamestay...@apache.org>
> wrote:
>
>> Yes, setting that property is another way to disable st
Zack,
Thanks for reporting this and for the detailed description. Here's a bunch
of questions and some things you can try in addition to what Andrew
suggested:
1) Is this reproducible in a test environment (perhaps through Pherf:
https://phoenix.apache.org/pherf.html) so you can experiment more?
Make sure it's the right jar too: there are two with the word "server" in
them.
On Saturday, November 28, 2015, Jesse Yates wrote:
> I think with that version of Phoenix you should have that class.
>
> 1. Can you grep the jar contents and ensure the class
>
and totally ignore the `autoGeneratedKeys`.
> Would this be acceptable for you (if so I would do a PR) ?
>
> Thanks and Regards,
>
> Clement
>
> On 20 novembre 2015 at 17:37:36, James Taylor (jamestay...@apache.org)
> wrote:
>
> Hi Clement,
> Can you tell us a l
java:115)
> ~[phoenix-4.4.0-HBase-1.1-client-minimal.jar:na]
>
> at
> org.apache.phoenix.parse.PhoenixSQLParser.upsert_node(PhoenixSQLParser.java:4454)
> ~[phoenix-4.4.0-HBase-1.1-client-minimal.jar:na]
>
> at
> org.apache.phoenix.parse.PhoenixSQLParser.oneS
Hi Clement,
Can you tell us a little more about your use case and how you'd
like prepareStatement(String sql, int autoGeneratedKeys) to behave?
Thanks,
James
On Fri, Nov 20, 2015 at 2:31 AM, clement escoffier
wrote:
> Hello,
>
> I’m facing an issue with the prepared
t; table.
>
> Let me know your thoughts about this.
>
> Best,
> -Jaime
> On Nov 20, 2015 1:19 PM, "James Taylor" <jamestay...@apache.org> wrote:
>
>> Hi Jaime,
>> Not sure exactly what you mean. Would you mind explaining a bit more what
>> yo
Hi Jaime,
Not sure exactly what you mean. Would you mind explaining a bit more what
you're trying to do (and why)?
Thanks,
James
On Fri, Nov 20, 2015 at 9:01 AM, Jaime Solano wrote:
> Hi guys,
>
> As part of a swapping strategy we're testing, we want to know if it's
>
Yes, please file a JIRA.
On Wed, Nov 18, 2015 at 10:06 AM, Stephen Wilcoxon
wrote:
> I think he's asking for trunc() to support the higher levels (not just
> ways to retrieve the higher level parts). Although, it's a little unclear
> to me exactly what the expected behavior
ed to be the last.
>>
>> On Wed, Sep 30, 2015 at 10:36 AM, James Taylor <jamestay...@apache.org>
>> wrote:
>>
>>> Thanks for letting us know, Anirudha & James. Makes sense to keep the
>>> 1.0 branch going in light of this hard dependency you h