It is possible to use a transaction started by a client in a coprocessor.
The transaction is serialized as the TxConstants.TX_OPERATION_ATTRIBUTE_KEY
attribute on the operation.
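For instance, a minimal sketch of reading it back inside a coprocessor hook
(assuming Tephra's TransactionCodec; package names differ across Tephra
releases):

import co.cask.tephra.Transaction;
import co.cask.tephra.TransactionCodec;
import co.cask.tephra.TxConstants;

// inside a RegionObserver hook such as prePut(); 'put' is the client's operation
byte[] txBytes = put.getAttribute(TxConstants.TX_OPERATION_ATTRIBUTE_KEY);
if (txBytes != null) {
    // decode the serialized transaction and use its snapshot state
    Transaction tx = new TransactionCodec().decode(txBytes);
    long readPointer = tx.getReadPointer();
}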
On Wed, Apr 13, 2016 at 7:42 AM, Mohammad Adnan Raza
wrote:
> Hello everyone,
>
> I have
Got it!
Thanks!
On 2016-04-20 14:20, Ankit Singhal wrote:
Hi,
I think when you are doing a put from the shell, the value is going in as a
String, not as an Integer, so Phoenix can only decode it as VARCHAR.
If you want to put an Integer into your HBase table, use the byte
representation of the integer, or use the Java API.
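For instance, a rough sketch with the Java API (table and column names are
made up; note Phoenix's INTEGER encoding flips the sign bit for sort order,
so Phoenix's own codec is safer than raw Bytes.toBytes(int)):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.phoenix.schema.types.PInteger;

try (org.apache.hadoop.hbase.client.Connection conn =
         ConnectionFactory.createConnection(HBaseConfiguration.create());
     Table table = conn.getTable(TableName.valueOf("MY_TABLE"))) {
    Put p = new Put(Bytes.toBytes("row1"));
    // write the Phoenix byte encoding of 42, not the string "42"
    p.addColumn(Bytes.toBytes("0"), Bytes.toBytes("MY_COL"),
            PInteger.INSTANCE.toBytes(42));
    table.put(p);
}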
Can you please check that the hbase-site.xml where you are setting this
property is on the Phoenix classpath?
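If editing hbase-site.xml is not convenient, one hedged alternative is
passing the property on the JDBC connection; I believe Phoenix merges
connection Properties into its configuration, but please verify that it
works for this particular key:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

Properties props = new Properties();
// hypothetical spool path
props.setProperty("phoenix.spool.directory", "/data/tmp/phoenix-spool");
Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181", props);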
On Wed, Apr 20, 2016 at 3:10 AM, wrote:
> I am having trouble setting the "phoenix.spool.directory"
> (QueryServices.SPOOL_DIRECTORY) property value. Any
Hi Arun,
Do you see the 'IS_ROW_TIMESTAMP' column in SYSTEM.CATALOG when you do
!describe on system.catalog?
If not, can you share the output of the command below? It seems
SYSTEM.CATALOG was updated with a timestamp greater than the v4.6 timestamp,
which is stopping the upgrade code from adding the new column.
scan
Hi,
I think when you are doing a put from the shell, the value is going in as a
String, not as an Integer, so Phoenix can only decode it as VARCHAR.
If you want to put an Integer into your HBase table, use the byte
representation of the integer, or use the Java API instead.
Regards,
Ankit Singhal
On Wed, Apr 20, 2016 at 8:00
Josh,
I hope someone familiar with this can answer the question :)
On 19.04.2016 22:59, Josh Elser wrote:
Thanks for helping out, Francis!
Interesting that Jackson didn't fail when the connectionId was being
passed as a number and not a string (maybe it's smart enough to
convert that?).
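A quick way to check that coercion, assuming Jackson 2.x:

import com.fasterxml.jackson.databind.ObjectMapper;

public class CoercionCheck {
    public static class Request { public String connectionId; }

    public static void main(String[] args) throws Exception {
        Request r = new ObjectMapper()
                .readValue("{\"connectionId\": 12345}", Request.class);
        // prints "12345" -- Jackson coerces scalar JSON numbers into
        // String fields by default
        System.out.println(r.connectionId);
    }
}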
Why does
Hello,
I am using phoenix 4.6 and trying to bulk load data into a table from a csv
file using the psql.py utility. How do I map the table columns to the
header values in the csv file through the "-h" argument?
For example, assume my Phoenix table does not match the columns in the CSV. The
phoenix
Hey Plamen,
I just spun up some clean Docker containers running HBase 1.1.4 and
Phoenix 4.7.0 to replicate what you did. It appears to work correctly.
Using SquirrelSQL, I created the table: CREATE TABLE IF NOT EXISTS
us_population (state CHAR(2) NOT NULL, city VARCHAR NOT NULL, population
!describe SYSTEM.CATALOG is not returning the IS_ROW_TIMESTAMP column.
But we do see this column from a select statement:
select * from SYSTEM.CATALOG where TABLE_NAME='TEST_TABLE_1' AND
TABLE_SCHEM IS NULL AND TENANT_ID IS NULL;
Thanks,
Arun
On Wed, Apr 20, 2016 at 1:37 AM, Ankit Singhal
I pretty much came to a similar conclusion, that I might have to create an
hbase-site.xml and put it on my path. I was hoping for an alternative. Is
there any other way to get this "phoenix.spool.directory" property set?
Also, do you know the reasoning as to why this property cannot be set
Note that it's case sensitive, so try upper casing your column names in
your psql.py call.
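For example (hypothetical table and file names), something like:
./psql.py -t US_POPULATION -h STATE,CITY,POPULATION localhost /tmp/us_population.csv
If I remember right, -h also accepts the special value in-line, which takes
the column names from the first line of the CSV itself.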
On Wednesday, April 20, 2016, Amit Shah wrote:
> Hello,
>
> I am using phoenix 4.6 and trying to bulk load data into a table from a
> csv file using the psql.py utility. How do I map
Arun,
Please run the command Ankit mentioned in an HBase shell and post the
output back here.
Thanks,
James
On Wednesday, April 20, 2016, Arun Kumaran Sabtharishi
wrote:
> !describe SYSTEM.CATALOG is not returning the IS_ROW_TIMESTAMP column.
>
> But we do see this column from
James,
Table SYSTEM.CATALOG is ENABLED
SYSTEM.CATALOG, {TABLE_ATTRIBUTES => {coprocessor$1 =>
'|org.apache.phoenix.coprocessor.ScanRegionObserver|1|', coprocessor$2 =>
'|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|',
coprocessor$3 =>
One more question to add: do we need to have 1000 versions and
KEEP_DELETED_CELLS=true?
I have limited the scan in HBase, and here is the resulting data.
\x00\x00TEST_TABLE_2 column=0:, timestamp=1460455162842, type=DeleteFamily
_0_1460354090089
\x00\x00TEST_TABLE_2 column=0:BASE_COLUMN_COUNT,
The issue below was resolved by using the Phoenix package version
compatible with our CDH version.
org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException:
org.apache.hadoop.hbase.DoNotRetryIOException:
java.lang.AbstractMethodError
As of now, I see the
> the shell and find the empty ones, another to merge a given region into
> a neighbor. We've run them without incident; it looks like it all works
> fine. One thing we did notice is that the AM leaves the old "retired"
> regions around in its counts -- the master status page shows a large number of
Hi Michal,
As a workaround for the issue you're encountering, can you try dropping the
index and then issuing your CREATE INDEX DDL statement over again? If you
have a minute to file a JIRA on this, that'd be much appreciated.
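A minimal sketch of that workaround over JDBC (next_update_at is only a
guess at the indexed column, based on the index name):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
     Statement stmt = conn.createStatement()) {
    stmt.execute("DROP INDEX idx_media_next_update_at ON media");
    // recreate the index; adjust the column list to your schema
    stmt.execute("CREATE INDEX idx_media_next_update_at ON media (next_update_at)");
}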
Thanks,
James
On Wed, Apr 20, 2016 at 5:45 PM, Michal Medvecky
Hello,
0: jdbc:phoenix> drop index idx_media_next_update_at ON media;
Error: org.apache.phoenix.exception.PhoenixIOException:
org.apache.hadoop.hbase.DoNotRetryIOException:
_LOCAL_IDX_MEDIA,\x04VkV6,1461181689836.2ee0eac45800bb1f76ef5e28936972d8.:
null
at
Hello,
my HBase cluster was damaged by network outages in AWS. Now I cannot select
some data from my tables:
0: jdbc:phoenix> select id from media where id like '%fhCch_Y8la8%';
+--+
|ID|
Circling back here and adding user@phoenix.
I put together one script to dump region info from the shell and find the
empty ones, another to merge a given region into a neighbor. We've run them
without incident; it looks like it all works fine. One thing we did notice
is that the AM leaves the old
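In case it's useful to others, the merge step boils down to something like
this against the HBase 1.x Admin API (table name is hypothetical; finding
the empty region and its neighbor is left out):

import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
     Admin admin = conn.getAdmin()) {
    List<HRegionInfo> regions = admin.getTableRegions(TableName.valueOf("MY_TABLE"));
    HRegionInfo empty = regions.get(0);    // an empty region identified beforehand
    HRegionInfo neighbor = regions.get(1); // its adjacent region
    // asynchronous merge request; 'false' = only merge adjacent regions
    admin.mergeRegions(empty.getEncodedNameAsBytes(),
            neighbor.getEncodedNameAsBytes(), false);
}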
Yeah, that sounds interesting.
Do you think it should be a script (a command runnable from the client
side), or some chore on the master?
Are you going down this route because the region normalizer lacks features
you guys need?
-Mikhail
> Circling back here and adding user@phoenix. I put together one
Another observation (after upgrading from Phoenix 4.4 to 4.6.1):
In a new SYSTEM.CATALOG table, when connected from a Phoenix 4.6.1 client,
!describe SYSTEM.CATALOG does not show IS_ROW_TIMESTAMP,
but select * from SYSTEM.CATALOG shows the IS_ROW_TIMESTAMP column.
Is this expected behavior?