You can use !describe tablename or !tables from sqlline. Also, by typing
!help you can find other helpful commands.
On Tue, 14 Jul 2015 at 06:52 Eli Levine elilev...@gmail.com wrote:
The standard JDBC way is to use Connection.getMetaData(). See if that does
what you need. You can also query
Hi,
I have two questions,
- When I run the same COUNT query on Hive and Phoenix, they give me different
results. While Hive gives 40K, Phoenix gives 800K. What could cause this big
difference?
- Is there a way to do major compaction via phoenix?
Thanks.
Hi there,
I have a query like one below
UPDATE table_1
SET id1 = (SELECT MIN(id1)
           FROM table_1 t2
           WHERE table_1.id3 = t2.id3)
WHERE id3 = id4
I am trying to implement this in Apache Phoenix. Can anyone suggest how
I should do this, since Phoenix does not have an UPDATE ... SET statement?
If the counts are, indeed, different, then the next question is: how are you
getting data from Hive to Phoenix?
From: anil gupta [mailto:anilgupt...@gmail.com]
Sent: Tuesday, July 14, 2015 3:48 AM
To: user@phoenix.apache.org
Subject: Re: Phoenix vs Hive
You can do major compaction via the HBase shell.
Satya
Try the UPSERT SELECT statement:
https://phoenix.apache.org/language/#upsert_select
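As a sketch (table and column names are taken from the question above; pk_col is a hypothetical stand-in for table_1's actual primary key, and id1 is assumed not to be part of it), the UPDATE could be rewritten as an UPSERT SELECT that re-writes each affected row with the per-id3 minimum:

```sql
-- Hypothetical rewrite of the UPDATE as a Phoenix UPSERT SELECT.
-- Re-upserting a row with the same key overwrites its id1 value.
UPSERT INTO table_1 (pk_col, id1)
SELECT t1.pk_col, m.min_id1
FROM table_1 t1
JOIN (SELECT id3, MIN(id1) AS min_id1
      FROM table_1
      GROUP BY id3) m
  ON t1.id3 = m.id3
WHERE t1.id3 = t1.id4;
```

Whether this exact join form is accepted depends on the Phoenix version in use; if not, the grouped subquery can be materialized into a temporary table first and joined from there.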
Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com | C: 512.423.7447 |
skype: michael.mcallister.ha |
This should do it.
DatabaseMetaData dbmd = connection.getMetaData();
ResultSet resultSet = dbmd.getColumns(null, schemaName, tableName, null);
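If you prefer plain SQL over the JDBC metadata API, the same information can usually be read from Phoenix's SYSTEM.CATALOG table. The schema and table names below are placeholders, and the catalog column layout can vary between Phoenix versions:

```sql
-- Rows with a non-null COLUMN_NAME describe the table's columns.
SELECT COLUMN_NAME, DATA_TYPE
FROM SYSTEM.CATALOG
WHERE TABLE_SCHEM = 'MY_SCHEMA'   -- placeholder schema
  AND TABLE_NAME  = 'MY_TABLE'    -- placeholder table
  AND COLUMN_NAME IS NOT NULL;
```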
On Mon, Jul 13, 2015 at 8:51 PM, Eli Levine elilev...@gmail.com wrote:
The standard JDBC way is to use Connection.getMetaData(). See if that does
You can do major compaction via the HBase shell (the major_compact command).
What's the exact query you are running? How did you map the HBase table to Hive?
You can also run HBase's RowCounter job
(http://thinkonhadoop.blogspot.com/2013/11/hbase-table-row-count.html) to
count the number of rows. This would help figure out whether
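As a first cross-check before digging deeper, the same aggregate can be run in both engines (the table name is a placeholder, and identifier handling differs: Phoenix upper-cases unquoted identifiers):

```sql
-- In Hive:
SELECT COUNT(*) FROM my_table;
-- In Phoenix (unquoted identifiers are upper-cased):
SELECT COUNT(*) FROM MY_TABLE;
```

If the two numbers still disagree, the discrepancy is in how rows are mapped between the two systems rather than in the query itself.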
This is probably a lame question, but can anyone point me in the right
direction for CHANGING an EXISTING primary key on a table?
I want to add a column.
Is it possible to do that without dropping the table?
Thanks!
ALTER TABLE t ADD my_new_col VARCHAR PRIMARY KEY
The new column must be nullable and the last existing PK column cannot be
nullable and fixed width (or varbinary or array).
On Tue, Jul 14, 2015 at 10:01 AM, Riesland, Zack zack.riesl...@sensus.com
wrote:
This is probably a lame question, but
Ah, we don't support that currently. You can drop the existing column first
(but you lose your data, though you could set a CURRENT_SCN property on
your connection to prevent this data loss). Then you could run the command
I mentioned.
On Tue, Jul 14, 2015 at 10:14 AM, Riesland, Zack
Thanks James,
That’s what I thought.
If I were to make a NEW table with the same columns, is there a simple way to
copy the data from the old table to the new one?
From: James Taylor [mailto:jamestay...@apache.org]
Sent: Tuesday, July 14, 2015 1:17 PM
To: user
Subject: Re: How to adjust
Thanks James,
To clarify: the column already exists on the table, but I want to add it to the
primary key.
Is that what your example accomplishes?
From: James Taylor [mailto:jamestay...@apache.org]
Sent: Tuesday, July 14, 2015 1:11 PM
To: user
Subject: Re: How to adjust primary key on existing
Hi,
I am trying to connect from Phoenix 4.0.0-HBase1.0 to Cloudera 5.4.3, HBase
1.0. I am getting the following exception. Put(Mutation).setWriteToWAL(z)
is deprecated. Is the error because of this?
java.lang.NoSuchMethodError:
Hi, here is my table
CREATE TABLE IF NOT EXISTS cross_id_reference
(
  id1 VARCHAR NOT NULL,
  id2 VARCHAR NOT NULL,
CONSTRAINT my_pk PRIMARY KEY (id1)
) IMMUTABLE_ROWS=true, TTL=691200;
Is it ok to set TTL and IMMUTABLE_ROWS at the same time? TTL should delete
expired
I'm using Phoenix 4.2.2 and am having problems with using either a CAST or the
TO_DATE function in WHERE clauses in views. The view query is apparently parsed
into an invalid syntax that will not execute. Possibly these are related to bug
Michael,
To use UPSERT, I need to fetch at least the primary ID, then do the
processing and update the columns. I would need to read the entire table to
fetch the primary keys, do the processing at the service layer, and then do
an upsert, which I think is quite a task for a huge table.
Please let me know
Thank you for your input, I appreciate your help!
Kevin
From: Cody Marcel [mailto:cmar...@salesforce.com]
Sent: Tuesday, July 14, 2015 9:18 AM
To: user@phoenix.apache.org
Subject: Re: Query DDL from Phoenix
Yes. UPSERT SELECT. Have you seen this yet?
https://phoenix.apache.org/language/index.html
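Assuming the new table has already been created with the desired key (all table and column names below are placeholders), the copy is a single statement:

```sql
-- Copy every row from the old table into the new one.
-- Columns are listed explicitly so the new PK column is populated
-- in the order the new table expects.
UPSERT INTO new_table (id1, id2, my_new_col)
SELECT id1, id2, my_new_col
FROM old_table;
```

For very large tables it may be worth batching this by key range so individual commits stay small.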
On Tue, Jul 14, 2015 at 10:21 AM, Riesland, Zack zack.riesl...@sensus.com
wrote:
Thank you!
From: James Taylor [mailto:jamestay...@apache.org]
Sent: Tuesday, July 14, 2015 1:34 PM
To: user
Subject: Re: How to adjust primary key on existing table