Hi there,
we have been using the Phoenix client without a problem on Linux systems, but we
have encountered some problems on Windows.
We run the queries through SQuirreL SQL using the 4.5.2 client jar.
A query such as SELECT * FROM TABLE WHERE ID='TEST' works
without a problem. But when
ithin the MR job. The extracted KeyValues are
> then written to the HFile.
>
> - Gabriel
>
> On Tue, Sep 15, 2015 at 2:12 PM Yiannis Gkoufas <johngou...@gmail.com>
> wrote:
>
>> Hi there,
>>
>> I was going through the code related to index creation via MapRedu
Hi there,
I was going through the code related to index creation via MapReduce job
(IndexTool) and I have some questions.
If I am not mistaken, for a global secondary index Phoenix creates a new
HBase table whose row key is the column value of the original
table you want to index.
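For context, a hedged sketch of how the MapReduce index build (IndexTool) is typically launched; the table names and output path here are made up, and the exact flags should be checked against the 4.x IndexTool help:

```shell
# Build the global index READINGS_IDX for table READINGS as HFiles
# via MapReduce, then bulk-load them (names and paths are hypothetical).
hadoop jar phoenix-4.5.2-client.jar \
    org.apache.phoenix.mapreduce.index.IndexTool \
    --data-table READINGS \
    --index-table READINGS_IDX \
    --output-path /tmp/readings_idx_hfiles
```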
Hi there,
I am using phoenix-spark to insert multiple entries into a Phoenix table.
I get the following errors:
..Exception while committing to database..
..Caused by: java.lang.NumberFormatException..
I couldn't find in the logs which row was causing the issue.
Is it possible to
Hi there,
I was trying to experiment a bit with accessing Phoenix-enabled tables
using the HBase API directly. My primary key is a composite consisting of a
String and an Unsigned Long.
By printing the bytes of the row key I realized that the byte separating the
values is 0.
Moreover I realized that
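What that 0 byte is doing can be sketched in plain Java. This is a simplified imitation of Phoenix's composite-key layout, not its actual code, and the widths are assumptions: a variable-length VARCHAR is followed by a 0 separator byte, then a fixed 8-byte big-endian unsigned long.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RowKeySketch {
    // Hypothetical helper mimicking Phoenix's composite-key encoding:
    // variable-length string, 0 separator, 8-byte big-endian long.
    static byte[] buildKey(String id, long value) {
        byte[] idBytes = id.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(idBytes.length + 1 + 8);
        buf.put(idBytes);
        buf.put((byte) 0);   // the separator byte you see when printing the row
        buf.putLong(value);  // big-endian, so byte order matches numeric order
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] key = buildKey("TEST", 42L);
        System.out.println(key.length); // 4 + 1 + 8 = 13
        System.out.println(key[4]);     // 0
    }
}
```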
Hi there,
I am getting an error while executing:
UPSERT INTO READINGS
SELECT R.SMID, R.DT, R.US, R.GEN, R.USEST, R.GENEST, RM.LAT, RM.LON,
RM.ZIP, RM.FEEDER
FROM READINGS AS R
JOIN
(SELECT SMID,LAT,LON,ZIP,FEEDER
FROM READINGS_META) AS RM
ON R.SMID = RM.SMID
the full stacktrace is:
,
Mujtaba
On Thu, Aug 13, 2015 at 8:33 AM, Yiannis Gkoufas johngou...@gmail.com
wrote:
Hi there,
When I try to include the following in my pom.xml:
<dependency>
<groupId>org.apache.phoenix</groupId>
<artifactId>phoenix-core</artifactId>
<version>4.5.0-HBase-0.98
Hi there,
When I try to include the following in my pom.xml:
<dependency>
<groupId>org.apache.phoenix</groupId>
<artifactId>phoenix-core</artifactId>
<version>4.5.0-HBase-0.98</version>
<scope>provided</scope>
</dependency>
I get this error:
Failed to
a lot!
On 6 July 2015 at 23:55, Yiannis Gkoufas johngou...@gmail.com wrote:
Thanks James for your reply!
I will give it a shot!
On 6 July 2015 at 19:04, James Taylor jamestay...@apache.org wrote:
You can use a regular SQL query with comparison operators (=, <, <=, >,
>=, !=) against constants, and PreparedStatement.setBytes(...) for
bind variables that are arbitrary bytes for your key. The salting will
happen transparently, so you don't have to do anything special.
Thanks,
James
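A minimal JDBC sketch of what James describes; the table and column names are invented, and only the setBytes binding pattern is the point:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SaltedKeyQuery {
    // Bind variable for the raw key bytes; Phoenix prepends the salt byte
    // transparently, so the client binds only the logical key.
    static final String QUERY = "SELECT * FROM MY_TABLE WHERE PK = ?";

    static ResultSet queryByKey(Connection conn, byte[] keyBytes) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(QUERY);
        stmt.setBytes(1, keyBytes); // arbitrary bytes for the key column
        return stmt.executeQuery();
    }
}
```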
On Mon, Jul 6, 2015 at 9:06 AM, Yiannis Gkoufas johngou...@gmail.com
wrote:
Hi all,
I have
Hi there,
I have two tables I want to join.
TABLE_A: ( (A,B), C, D, E) where (A,B) is the composite key
TABLE_B: ( (A), C, D, E) where A is the key
I basically want to join TABLE_A and TABLE_B on A and update TABLE_A with
the values C, D, E coming from TABLE_B.
When I try to use UPSERT SELECT
Hi there,
I was just looking for some tips on implementing a UDF to be used in a
GROUP BY statement.
For instance, let's say I have the table:
( (A, B), C, D) with (A,B) being the composite key
My UDF targets the field C and I want to optimize the query:
SELECT A,MYFUNCTION(C),SUM(D) FROM
Hi Thomas,
please ignore my last email, I wasn't running the sqlline.py command from
within the bin directory.
Now it works just fine!
Thanks!
On 18 June 2015 at 10:39, Yiannis Gkoufas johngou...@gmail.com wrote:
Hi Thomas,
unfortunately just modifying the hbase-site in the current
)
at sqlline.SqlLine.main(SqlLine.java:292)
Thanks a lot for spending time on this
On 17 June 2015 at 22:18, Yiannis Gkoufas johngou...@gmail.com wrote:
Thanks a lot Thomas! That was very useful!
On 17 June 2015 at 20:04, Thomas D'Silva tdsi...@salesforce.com wrote:
If you are running a main class you can use
to get
picked up or else it will use the default timeout.
When using sqlline it sets the CLASSPATH to the HBASE_CONF_PATH
environment variable, which defaults to the current directory.
Try running sqlline directly from the bin directory.
-Thomas
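Concretely, that advice amounts to something like the following; the paths and ZooKeeper host are examples only:

```shell
# Put the client hbase-site.xml where sqlline will pick it up, then
# launch sqlline from the bin directory so the conf dir is on the CLASSPATH.
export HBASE_CONF_PATH=/etc/hbase/conf
cd /opt/phoenix/bin
./sqlline.py zookeeper-host:2181
```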
On Wed, Jun 17, 2015 at 3:30 AM, Yiannis Gkoufas
Hi there,
I have failed to understand from the documentation where exactly to set the
client configuration.
For the server, I think is clear that I have to modify hbase-site.xml of my
hbase cluster.
But what is the case for the client? It requires to have hbase-site.xml
somewhere in the
on a bigger set of data. I
will try to post the code when I polish it a bit. The partitions should be
sorted with a KeyValue sorter before bulk-saving them.
2015-06-16 15:10 GMT+02:00 Yiannis Gkoufas johngou...@gmail.com:
Hi,
didn't realize that I only sent to Dawid.
Resending to the entire list
in the performance.
Thanks a lot!
On 9 June 2015 at 10:26, Yiannis Gkoufas johngou...@gmail.com wrote:
Thanks a lot for your replies!
Will try the DATE field and change the order of the Composite Key.
On 8 June 2015 at 17:37, James Taylor jamestay...@apache.org wrote:
Both DATE and TIME have
Hi Dawid,
I am trying to do the same thing, but I hit a wall while writing the HFiles,
getting the following error:
java.io.IOException: Added a key not lexically larger than previous
key=\x00\x168675230967GMP\x00\x00\x00\x01=\xF4h)\xE0\x010GEN\x00\x00\x01M\xDE.\xB4T\x04,
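That error means the HFile writer received cells out of order. The required ordering can be sketched with plain java.util (unsigned lexicographic comparison of row keys; this is an illustration, not HBase's own comparator):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SortedKeys {
    // HFile writers require cells in strictly increasing key order, so
    // each partition must be sorted by unsigned byte order of the row
    // keys before bulk-saving.
    static List<byte[]> sortRowKeys(List<byte[]> keys) {
        keys.sort((a, b) -> Arrays.compareUnsigned(a, b));
        return keys;
    }

    public static void main(String[] args) {
        List<byte[]> keys = new ArrayList<>(List.of(
                new byte[]{0x01, 0x30},    // a key starting with \x01
                new byte[]{0x00, 0x16}));  // starts with \x00, must sort first
        sortRowKeys(keys);
        System.out.println(keys.get(0)[0]); // 0
    }
}
```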
Hi there,
I am investigating Phoenix as a potential data-store for time-series data
on sensors.
What I am really interested in, as a first milestone, is to have efficient
time range queries for a particular sensor. From those queries the results
would consist of 1 or 2 columns (so small rows).
I
for timestamp, try to fit it into a long or use a
stringified version in a format suitable for byte-by-byte comparison, e.g.
2015 06/08 05:23:25.345
-Vlad
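Vlad's suggestion works because a zero-padded, fixed-width pattern sorts lexicographically in the same order as chronologically. A small sketch (the pattern string mirrors his example; nothing Phoenix-specific here):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class StringifiedTimestamps {
    // Fixed-width, zero-padded pattern like "2015 06/08 05:23:25.345":
    // byte-by-byte comparison then matches chronological order.
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy MM/dd HH:mm:ss.SSS");

    static String stringify(LocalDateTime t) {
        return t.format(FMT);
    }

    public static void main(String[] args) {
        String earlier = stringify(LocalDateTime.of(2015, 6, 8, 5, 23, 25, 345_000_000));
        String later   = stringify(LocalDateTime.of(2015, 6, 8, 5, 23, 25, 346_000_000));
        System.out.println(earlier);                      // 2015 06/08 05:23:25.345
        System.out.println(earlier.compareTo(later) < 0); // true
    }
}
```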
On Mon, Jun 8, 2015 at 2:48 AM, Yiannis Gkoufas johngou...@gmail.com
wrote:
Hi there,
I am investigating Phoenix as a potential data-store for time