Yes, but the first release supporting CDH will be delayed until some point in
the next couple of months.
On Sun, 21 Oct 2018, 09:48 Bulvik, Noam,
<noam.bul...@teoco.com> wrote:
Hi
Do you plan to issue a Phoenix 5.x parcel based on CDH 6, like the Phoenix
4.x parcels based on CDH 5.x?
Hi
Do you plan to issue a Phoenix 5.x parcel based on CDH 6, like the Phoenix
4.x parcels based on CDH 5.x?
Regards,
Noam
Hi
I am trying to add a Phoenix data source to JBoss WildFly 10. I managed to add
the Oracle and Impala drivers and data sources, but when I add Phoenix I get
this error:
14:11:51,520 WARN [org.jboss.modules] (MSC service thread 1-6) Failed to
define class com.sun.jersey.server.impl.cdi.CDIExtension
Noam
One more note on this: when using the client from the regular release (4.13 for
HBase 1.3), it works fine on the same PC.
From: Bulvik, Noam
Sent: Thursday, November 30, 2017 11:22 AM
To: user@phoenix.apache.org
Subject: problem to run phoenix client 4.13.1 for CDH5.11.2 on windows
Hi
I am using JDBC UI
Hi,
When running sqlline-thin.py from the 4.13 parcel version for CDH 5.11.2 you
get a class-not-found error. This does not happen when you run it from the
regular Phoenix release (4.13 for HBase 1.3):
java.lang.NoClassDefFoundError:
org/apache/phoenix/shaded/org/apache/http/config/Lookup
indexes to become
corrupted. The documentation needs to be updated.
On Sun, Dec 3, 2017 at 7:07 AM Bulvik, Noam
<noam.bul...@teoco.com> wrote:
Hi,
I want to upsert historical data into a table with indexes; I have a TTL defined
there and I want to l
Hi,
I want to upsert historical data into a table with indexes. I have a TTL defined
there, and I want to load the data as if it had been loaded at the correct time,
so it will be cleaned automatically by the TTL mechanism. I implemented some
small Java code to load the data after setting "CurrentSCN" to
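A minimal sketch of that kind of loader, assuming an illustrative MY_TABLE(ID,
VAL) schema and a ZooKeeper quorum on localhost; CurrentSCN is the Phoenix
connection property that makes every write on the connection carry an explicit
timestamp:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.Properties;

    public class HistoricalLoad {
        public static void main(String[] args) throws Exception {
            // Write as if the rows had arrived 30 days ago, so the HBase TTL
            // clock starts from the historical timestamp, not from now.
            long historicalTs = System.currentTimeMillis() - 30L * 24 * 60 * 60 * 1000;
            Properties props = new Properties();
            props.setProperty("CurrentSCN", Long.toString(historicalTs));
            try (Connection conn =
                         DriverManager.getConnection("jdbc:phoenix:localhost", props);
                 PreparedStatement ps = conn.prepareStatement(
                         "UPSERT INTO MY_TABLE (ID, VAL) VALUES (?, ?)")) {
                ps.setInt(1, 1);
                ps.setString(2, "historical row");
                ps.executeUpdate();
                conn.commit();
            }
        }
    }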
Hi
Is it possible to set CurrentSCN in a way that it will be used when inserting
data using the bulk loader?
Assuming I have a TTL of 3 months and I am loading historical data from a month
ago using the bulk loader, is there a way to set CurrentSCN so that the
timestamp of the loaded data will
Hi
I am using a JDBC UI client on Windows. After I upgraded to the latest Phoenix
parcel I got the following error (it did not happen on older parcels, either
when I compiled them myself or when I used the ones supplied by Cloudera [4.7]):
SEVERE: Failed to locate the winutils binary in the hadoop binary
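For what it's worth, this warning usually just means Hadoop's Windows helper
binary is absent. A hedged sketch of the common workaround, with an illustrative
install path:

    REM Put a winutils.exe matching your Hadoop version under %HADOOP_HOME%\bin,
    REM then set the environment before launching the JDBC client:
    set HADOOP_HOME=C:\hadoop
    set PATH=%HADOOP_HOME%\bin;%PATH%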
Where can we get the parcel for CDH 5.11.x from?
From: Kumar Palaniappan [mailto:kpalaniap...@marinsoftware.com]
Sent: Monday, November 27, 2017 10:51 AM
To: user@phoenix.apache.org
Subject: Re: [ANNOUNCE] Apache Phoenix 4.13 released
You mean CDH 5.9 and 5.10? And also HBASE-17587?
On Mon,
row timestamp mapping
I filed PHOENIX-4265, but I'm not able to repro the issue. Let's continue the
discussion there. Please also read the description for the limitations on
indexes with a row_timestamp column in 4.12.
Thanks,
James
On Mon, Oct 2, 2017 at 12:38 AM, Bulvik, Noam
<noam.b
Yoav <yoav.sa...@teoco.com>
Subject: Re: error when using hint on global index where table is using row
timestamp mapping
Hi Noam,
Can you pass on the DDL statements for the table and index and the query you
are executing, please?
Thanks!
On Sun, Oct 1, 2017 at 2:01 AM, Bulvik, Noam
<n
Hi
I have created a table and used the row timestamp mapping functionality. The key
of the table is a + column. I also created a global index
on one of the columns of the table (XXX, not one of the key columns).
When I run explain on select * from my_table where xxx='' I see that the
index is not
Hi,
We have a table with multiple global indexes (5-10 indexes) on different
columns.
When we try to delete a large number of records (more than a couple of million)
based on an entry in one of the indexes, even after setting autocommit to true,
we get a timeout after 10 minutes.
We saw that in
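One setting worth checking here is the client-side query timeout,
phoenix.query.timeoutMs, whose 600000 ms default matches the 10-minute failure
described. A sketch of raising it in the client's hbase-site.xml; the one-hour
value is illustrative:

    <property>
      <name>phoenix.query.timeoutMs</name>
      <!-- default is 600000 (10 minutes); raise for long-running deletes -->
      <value>3600000</value>
    </property>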
Tue, Sep 19, 2017 at 4:21 AM Bulvik, Noam
<noam.bul...@teoco.com> wrote:
Hi,
We have a case where we have a table with a few indexes on different columns
(a, b, c, etc.). It works well if we do a select with an "and" condition (for
example select W
Hi,
We have a case where we have a table with a few indexes on different columns
(a, b, c, etc.). It works well if we do a select with an "and" condition (for
example select ... where a='xyz' and b='123'), but when we have an "or"
condition (for example select ... where a='xyz' or b='123') we get a full scan.
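A common workaround, sketched with the column names from the example (MY_TABLE
is a placeholder): rewrite the OR as a UNION ALL so each branch can be served by
its own index. Whether a branch really uses its index still depends on the index
covering the selected columns, or on an explicit /*+ INDEX */ hint:

    SELECT * FROM MY_TABLE WHERE a = 'xyz'
    UNION ALL
    SELECT * FROM MY_TABLE WHERE b = '123'
       AND a != 'xyz';  -- exclude rows already returned by the first branch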
Hi
I tried to use the 4.10 release with CDH 5.10 and some operations fail with
"method not found". I saw that it may be related to an incompatible HBase
version in CDH. Did anyone compile 4.10 against the CDH version of HBase and
make the changes?
Regards,
Noam
Hi
Is there a free ODBC driver for Phoenix?
Regards,
Noam
You set it up like any other WebLogic data source.
You need to:
· Set the WLS classpath to point to phoenix--client.jar
· Create a data source of type "other"
· Set the connection definition (driver class and JDBC URL)
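For the last step, the values look roughly like this (the quorum hosts and port
are placeholders for your cluster):

    Driver class: org.apache.phoenix.jdbc.PhoenixDriver
    JDBC URL:     jdbc:phoenix:zk-host1,zk-host2,zk-host3:2181:/hbase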
From: Sukant Jain [mailto:sukantj...@gmail.com]
size is over some limit
Hi Noam,
Can you quantify the query you run that shows this error? Also, when you change
the criteria to retrieve less data, do you mean that you're fetching fewer rows?
Bulvik, Noam wrote:
> I am using Phoenix 4.5.2 and in my table the data is in an Array.
>
> Whe
think the behavior of / is incorrect as is.
On Thu, Sep 22, 2016 at 4:45 AM, Heather, James (ELS-LON)
<james.heat...@elsevier.com> wrote:
On Thu, 2016-09-22 at 05:39 +0000, Bulvik, Noam wrote:
We have an app that lets users write their own
...@apache.org]
Sent: Wednesday, September 21, 2016 6:03 PM
To: user <user@phoenix.apache.org>
Subject: Re: can I prevent rounding of a/b when a and b are integers
Hi Noam,
Please file a JIRA. As a workaround, you can do SELECT 1.0/3.
Thanks,
James
On Wed, Sep 21, 2016 at 12:48 AM, Bulvik
Hi,
When I do something like select 1/3 from , the result will be an integer
value (0) and not a double or the like (0.33). Is there some configuration that
can force the result to be a double?
BTW, when executing the same query in Oracle (select 1/3 from dual) I get the
correct result; same in Impala.
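Besides the SELECT 1.0/3 workaround mentioned above, casting either operand
away from integer also works; a sketch, where T stands for any existing table:

    SELECT 1.0 / 3 FROM T;                -- decimal literal forces decimal arithmetic
    SELECT CAST(1 AS DECIMAL) / 3 FROM T; -- explicit cast, same effect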
You can also use http://www.sql-workbench.net/
From: Divya Gehlot [mailto:divya.htco...@gmail.com]
Sent: Tuesday, April 12, 2016 10:15 AM
To: user@phoenix.apache.org
Subject: SQL editor for Phoenix 4.4
Hi,
I would like to know: is there a SQL editor apart from SQuirreL?
Thanks,
Divya
4.4
From: Dor Ben Dov [mailto:dor.ben-...@amdocs.com]
Sent: Tuesday, March 1, 2016 12:34 PM
To: user@phoenix.apache.org
Subject: RE: Re: HBase Phoenix Integration
Does anyone here know which version of Phoenix is being used in the Hortonworks
bundle?
Dor
From: Amit Shah [mailto:amits...@gmail.com]
Sent:
Hi,
Does Phoenix support fast rename of a table and/or schema, without the need to
disable the table and clone the snapshot data as currently described in
https://hbase.apache.org/book.html#table.rename?
If not, are there plans to support it in the future?
Regards,
Noam
it still works fine.
Any idea what else to check?
From: Bulvik, Noam
Sent: Thursday, October 8, 2015 7:41 PM
To: 'user@phoenix.apache.org' <user@phoenix.apache.org>
Subject: array support issue
Hi all,
We are using CDH 5.4 and Phoenix 4.4. When we try to use the client jar (from
SQuirreL) to
Hi,
I am using the Phoenix parcel for Cloudera 5.4. It seems like there is a bug:
when a query has ORDER BY ... DESC, the query hangs for a long time and after a
while fails with the following error. Without DESC the ORDER BY works fine.
Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed
Hi,
I am using the Phoenix parcel for Cloudera (the latest parcel for CDH 5.4) and I
have a table with data in a VARCHAR ARRAY column.
When I call to_date(aaa[1]) or to_number(aaa[1]) [aaa is the column defined as a
VARCHAR ARRAY] I get the following error:
java.lang.NoSuchMethodException:
() round() and Ceil() )
Hi Noam,
I am working on this. Will keep you posted on its status.
Thanks
Ravi
On Sun, Jan 3, 2016 at 5:15 AM, Bulvik, Noam
<noam.bul...@teoco.com> wrote:
PHOENIX-2433
Noam Bulvik
R&D Manager
TEOCO CORPORATION
c: +972 54 550798
Hi,
In other SQL implementations (like Oracle and Impala), trunc() on a date also
supports date parts higher than day level (for example WEEK, MONTH, YEAR).
Any chance this can be supported in Phoenix as well?
Should I open a JIRA for it?
Regards,
Noam
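For contrast, a sketch of the day-level form Phoenix already accepts next to the
higher units being requested (MY_TABLE and TS are placeholders):

    SELECT TRUNC(TS, 'DAY') FROM MY_TABLE;    -- works today
    SELECT TRUNC(TS, 'MONTH') FROM MY_TABLE;  -- the Oracle/Impala behavior requested here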
more detail on your specific use case. And even better is a
patch that implements it :-)
- Gabriel
On Thu, Oct 29, 2015 at 3:22 PM, Bulvik, Noam
<noam.bul...@teoco.com> wrote:
> Hi,
>
>
>
> We have private logic to be executed when pa
feature to have in some
situations.
Could you log this request in jira? It would also be really good to have some
more detail on your specific use case. And even better is a patch that
implements it :-)
- Gabriel
On Thu, Oct 29, 2015 at 3:22 PM, Bulvik, Noam <noam.bul...@teoco.com>
on.ALL, FsAction.ALL));
}
right before the call to loader.doBulkLoad(outputPath, htable).
This unfortunately requires that you modify the source. I'd be interested in a
solution that doesn't require patching Phoenix.
-Matt
On Tue, Oct 27, 2015 at 1:06 PM, Bulvik, Noam
&l
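A hedged reconstruction of the patch being described, assuming the loader's
FileSystem handle, output path, and HTable are in scope as fs, outputPath, and
htable (the fragment above shows only the tail of the FsPermission call, and a
complete fix would likely need to recurse over the generated HFile directories):

    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.fs.permission.FsPermission;

    // Open up the generated HFiles so the HBase user can read them
    // before they are handed over to the region servers:
    fs.setPermission(outputPath,
            new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL));
    loader.doBulkLoad(outputPath, htable);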
Hi all,
We are using CDH 5.4 and Phoenix 4.4. When we try to use the client jar (from
SQuirreL) to query a table with an array column, we get the following error
(even when doing a simple thing like select ... from ...):
Error: org.apache.phoenix.schema.IllegalDataException: Unsupported sql type:
VARCHAR
Hi,
We are using CSV bulk loading (MR) to load our data. We have a table with
50 columns, and we did some testing to understand the factors affecting loading
performance.
We compared two cases:
A - each column in the data becomes a column in the HBase table
B - take all non-key columns and put
Structs which this could become if taken far enough.
Even without this, just having a set of built-in functions that work off of a
protobuf would be a useful first step and a great contribution.
Thanks,
James
On Mon, Mar 9, 2015 at 11:03 PM, Bulvik, Noam noam.bul...@teoco.com wrote:
Hi,
We
When using the CSV bulk loader with dates I am getting this error: Error:
org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter
Table DDL:
create table TS_TEST
( ID integer not null,
REC_TIMESTAMP timestamp not null,
Hi,
We are using WebLogic 10.3.x, which comes with Java 1.6. We used Phoenix 4.1
without any problem, but neither 4.2 nor 4.3 can be used because we are getting
java.lang.UnsupportedClassVersionError: org/apache/phoenix/jdbc/PhoenixDriver :
Unsupported major.minor version 51.0 (class-file major version 51 corresponds
to Java 7, so these jars need a Java 7 JVM).
Can you continue
We are using it via Oozie.
We had such an issue, and we solved it by setting default file permissions in
HDFS so all users have access to it.
I guess there is a more advanced solution, but for us this was enough.
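A sketch of the blunt version of that fix; the path is illustrative, and
something tighter than 777 is advisable outside a test cluster:

    hdfs dfs -chmod -R 777 /user/oozie/share/lib/phoenix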
-Original Message-
From: Ganesh R [rganes...@yahoo.co.in]
Received: Friday, 30
the existing encodings do for this (maybe
good enough?).
Please file a JIRA. Thanks,
James
On Mon, Jan 19, 2015 at 7:41 AM, Anil Gupta anilgupt...@gmail.com wrote:
You mean to have a support for aliases for columns?
If yes, then +1 for that.
Sent from my iPhone
On Jan 19, 2015, at 3:49 AM, Bulvik
Hi,
Do you plan to support assigning short names to columns as part of Phoenix's
features? I.e., when creating a table using Phoenix DDL there would be a
metadata table that converts the column names to short names (like a, b, c ...
aa, bb). Each time there is a query, the SQL that the user
bulky,
having this setting turned on can have quite an impact on throughput.
As far as the performance comparison with Impala goes, I assume you're
referring to Impala backed by text files or Parquet, correct?
- Gabriel
On Tue, Dec 23, 2014 at 9:43 AM, Bulvik, Noam noam.bul...@teoco.com
Hi,
Any idea why it is not valid to set NOT NULL on a column that is not part of
the primary key? Trying to set it generates the error ERROR 517 (42895):
Invalid not null constraint on non primary key column.
It makes sense that some columns that are not part of the primary key but still
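A minimal repro of the rule (table names are illustrative): NOT NULL is accepted
only on primary-key columns, so the second statement fails with ERROR 517:

    CREATE TABLE T_OK  (K VARCHAR NOT NULL PRIMARY KEY, V VARCHAR);
    CREATE TABLE T_BAD (K VARCHAR PRIMARY KEY, V VARCHAR NOT NULL);
    -- ERROR 517 (42895): Invalid not null constraint on non primary key column.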
What about TIMESTAMP(3)?
I am using the CSV bulk loader tool and it seems like the milliseconds are
being truncated when data is inserted into the DB.
From: deepak_gatt...@dell.com [mailto:deepak_gatt...@dell.com]
Sent: Saturday, December 13, 2014 1:53 AM
To: user@phoenix.apache.org;
For the CSV bulk loader we used -d $'\t' and it worked.
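Spelled out as a full invocation (the jar name, table, and input path are
placeholders; $'\t' is bash ANSI-C quoting that expands to a literal tab):

    hadoop jar phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
        -t MY_TABLE -i /data/input.tsv -d $'\t'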
-Original Message-
From: Perko, Ralph J [mailto:ralph.pe...@pnnl.gov]
Sent: Friday, December 12, 2014 11:12 PM
To: user@phoenix.apache.org
Subject: Re: Phoenix loading via psql.py - specifying tab separator
I have encountered the
Hi,
We are using a connection pool and there is a need for a generic SQL statement
to test whether a connection is still open.
In Oracle there is something like select * from dual;
in Impala you can use select 1 without a table name.
What can be used for Phoenix? Neither of these is valid.
Regards,
Noam
in it and just insert one row, and you can use it as a dual table all day long.
Thanks
Deepak Gattala
From: Bulvik, Noam [mailto:noam.bul...@teoco.com]
Sent: Thursday, November 27, 2014 1:05 PM
To: user@phoenix.apache.org
Subject: is there something like dual table in phoenix
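A sketch of that approach (the table and column names are illustrative; the
SYSTEM.CATALOG alternative assumes read access to the Phoenix system table):

    CREATE TABLE IF NOT EXISTS DUAL (X INTEGER PRIMARY KEY);
    UPSERT INTO DUAL VALUES (1);
    -- connection-validation query:
    SELECT X FROM DUAL;
    -- alternative that avoids creating a table:
    SELECT 1 FROM SYSTEM.CATALOG LIMIT 1;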
I created a table with a TIMESTAMP column and inserted a value into it from a
string. When I query the table I get the result with a timezone offset; is
there any way to avoid it?
My steps:
* I created a table DATE_TEST with a TS column
* For insert I use: upsert into DATE_TEST values
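A hedged sketch of those steps, with the DDL completed illustratively; in
versions that support it, the offset applied when the string is parsed and
displayed is governed by the client property phoenix.query.dateFormatTimeZone,
which defaults to GMT:

    CREATE TABLE DATE_TEST (ID INTEGER PRIMARY KEY, TS TIMESTAMP);
    UPSERT INTO DATE_TEST VALUES (1, TO_TIMESTAMP('2015-09-01 10:00:00'));
    SELECT TS FROM DATE_TEST;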
, it says it can't find a jar in the
directory, please check the jar is there.
On Tue, Oct 7, 2014 at 1:53 AM, Bulvik, Noam
noam.bul...@teoco.com wrote:
We log in to Oozie using the mapred user.
It looks like the Oozie task runs the script as user "nobody"; when I executed
about it, it will explain what Oozie expects.
On Oct 7, 2014 7:18 AM, Bulvik, Noam
noam.bul...@teoco.com wrote:
I checked that the file exists on all cluster machines with full permissions
(it is part of the CDH files).
From: Artem Ervits [mailto:artemerv
...@gmail.com]
Sent: Monday, October 6, 2014 9:39 AM
To: user@phoenix.apache.org
Cc: Bulvik, Noam
Subject: Re: bulk loading using OOZIE
Hi Noam,
Could you post the error message and/or stack trace you're getting when Oozie
says that a jar is missing or you don't have permission to read it?
- Gabriel
On Sun
running the Oozie WF is oozie, then you should upload the
jar and any property files to /user/oozie.
On Oct 5, 2014 2:41 AM, Bulvik, Noam
noam.bul...@teoco.com wrote:
Hi,
We are trying to do periodic bulk loading using Oozie as a scheduler. We
implemented a script task that should
We are using CDH 5.1 and trying to use the CSV bulk loader. We tried it with
Phoenix 4.0 and failed, and we thought that with 4.1 we would have more luck.
After compiling and running with 4.1 RC1 we still get the same error.
Can you help?
The command line we use is:
HADOOP_CLASSPATH=$(hbase
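For reference, the documented shape of that command line continues roughly as
follows; the jar name, table, and input path are placeholders, and $(hbase
classpath) is an assumed completion of the truncated line above:

    HADOOP_CLASSPATH=$(hbase classpath) hadoop jar phoenix-4.1.0-client.jar \
        org.apache.phoenix.mapreduce.CsvBulkLoadTool \
        --table MY_TABLE --input /data/input.csv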