RE: Phoenix ODBC Connection

2015-07-22 Thread Siva B
… 2015 10:12 AM, "Siva B" wrote: Hi, can anyone share the URL to download the Phoenix ODBC installer for Windows? I have to connect Phoenix with a legacy .NET application. Thanks.

Phoenix ODBC Connection

2015-07-22 Thread Siva B
Hi, can anyone share the URL to download the Phoenix ODBC installer for Windows? I have to connect Phoenix with a legacy .NET application. Thanks.

Error occurring after HBase bulk load (Expected single, aggregated KeyValue from coprocessor)

2015-06-04 Thread Siva
… at sqlline.SqlLine.main(SqlLine.java:424). When I drop and recreate the view, it works fine. Did anyone face a similar issue? Thanks, Siva.

Re: Joins Benchmark

2015-06-03 Thread Siva
I agree with you, Anil! On Tue, Jun 2, 2015 at 9:06 PM, anil gupta wrote: > Hi Siva/Jaime, > > In my opinion: HBase is meant for quick key/value lookups or short range-based scans, and > Hive is meant for analytical/data-warehouse kinds of workloads. Full table > scan in HBase is …

Re: Joins Benchmark

2015-06-02 Thread Siva
… would be good if Phoenix could publish performance results on join queries. Thanks, Siva. On Tue, Jun 2, 2015 at 1:47 PM, Jaime Solano wrote: > Hi guys, > > Are there benchmarks or numbers showing how Phoenix performs during the > join of two or more huge tables? I'm not famil…

Re: PhoenixIOException resolved only after compaction, is there a way to avoid it?

2015-06-02 Thread Siva
… 2 GB of heap space on the master. No activity was going on in the cluster while I was running the queries. Do you recommend any parameters to tune memory and GC for Phoenix and HBase? Thanks, Siva. On Mon, Jun 1, 2015 at 1:14 PM, Vladimir Rodionov wrote: > >> Is the IO exception b…

PhoenixIOException resolved only after compaction, is there a way to avoid it?

2015-06-01 Thread Siva
… | 13480 | … (1 row selected, 72.36 seconds). Did anyone face a similar issue? Is the IO exception because Phoenix was not able to read from multiple regions, given that the error was resolved after the compaction? Or any other thoughts? Thanks, Siva.

Re: TO_DATE is not working as expected

2015-05-04 Thread Siva
… particular column; maybe I'm just not aware of it. Thanks, Siva. On Mon, May 4, 2015 at 9:41 AM, James Taylor wrote: > FWIW, there's an option in sqlline that will cause it to display the > full date granularity, but I don't know what it is. Maybe someone else > does? > > Thanks,

Re: Error while phoenix from spark

2015-05-04 Thread Siva
Thanks a lot, Sun, that resolved the issue. Thanks, Siva. On Sun, May 3, 2015 at 7:20 PM, Fulin Sun wrote: > Hi, Siva > > Generally the problem is thrown because your Spark driver classpath did > not recognize the relative hbase-protocol.jar > > Under this condition, …

Re: TO_DATE is not working as expected

2015-05-02 Thread Siva
Any help on the TO_DATE function? Thanks. On Fri, May 1, 2015 at 2:49 AM, Siva wrote: > Hi, > > Phoenix's TO_DATE is truncating the time portion from the date while converting > it. Do I need to change the syntax? Per the documentation, the syntax > seems to be correct. > > 0:

Error while phoenix from spark

2015-05-02 Thread Siva
… at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745). Thanks in advance. Thanks, Siva.

Re: TO_DATE is not working as expected

2015-05-02 Thread Siva
Hi Gabriel, thanks for your response. You mean TO_DATE is actually holding the time portion, but sqlline is not able to display it? Please correct me if I'm wrong. The createdate column is defined as a string on the lead view. Thanks, Siva. On Fri, May 1, 2015 at 3:38 AM, Gabriel Reid
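The distinction in this thread (the stored value versus what sqlline displays) can be illustrated with a small Python sketch; the timestamp below is made up to echo the values in the thread, and this is an analogy for the display behavior, not Phoenix code:

```python
from datetime import datetime

# The stored value keeps full time granularity; only the display
# format drops it -- analogous to sqlline rendering a DATE without
# its time portion even though TO_DATE retained it.
value = datetime(2015, 5, 1, 8, 42, 31)

print(value.strftime("%Y-%m-%d"))           # 2015-05-01 (what is displayed)
print(value.strftime("%Y-%m-%d %H:%M:%S"))  # 2015-05-01 08:42:31 (what is stored)
print(value.hour, value.minute)             # 8 42 -- the time is still there
```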

TO_DATE is not working as expected

2015-05-01 Thread Siva
… 08:42:31.963 | 2014-04-29 | … (5 rows selected, 0.056 seconds). Thanks, Siva.

BigDataDayLA 2015

2015-04-22 Thread Siva
Hi everyone, we have BigDataDayLA coming up on 27/6/2015 in Los Angeles, CA. We are looking for speakers for the NoSQL track, so we thought of reaching out to the HBase and Phoenix user groups to see if anyone is interested. You can submit your talks here

Re: Phoenix bulk loading

2015-02-12 Thread Siva
Thanks for your inputs, Gabriel. I'm thinking of it like this: keep the data loader's consistency as is, and add a new parameter to extend the logic to populate null values for missing columns. Let's see how it goes :) On Thu, Feb 12, 2015 at 12:17 PM, Gabriel Reid wrote: > Hi Siva, > …
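A rough sketch of the option proposed above — pad short CSV rows with nulls for missing trailing columns rather than rejecting them. The function name, flag, and sample data are all hypothetical, not part of Phoenix's CsvBulkLoadTool:

```python
import csv
import io

def pad_row(row, num_columns, fill_missing=True):
    """Hypothetical sketch of the proposed loader option: when a CSV
    row has fewer fields than the target table has columns, pad the
    missing trailing fields with None (loaded as SQL NULL) instead of
    rejecting the record."""
    if len(row) >= num_columns or not fill_missing:
        return row
    return row + [None] * (num_columns - len(row))

# One short row (missing the third column) and one complete row.
raw = "r1,asdf\nr2,qwer,some-address\n"
rows = [pad_row(r, 3) for r in csv.reader(io.StringIO(raw))]
print(rows[0])  # ['r1', 'asdf', None]
print(rows[1])  # ['r2', 'qwer', 'some-address']
```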

Re: Line separator option in Bulk loader

2015-02-12 Thread Siva
Thanks, Nick. I will open JIRA requests for both Phoenix and HBase. Also, I will chip in and contribute whatever I can :) Thanks, Siva. On Thu, Feb 12, 2015 at 11:10 AM, Nick Dimiduk wrote: > A custom line separator is a reasonable request. Please open JIRAs for HBase > and/or Phoenix

Re: Line separator option in Bulk loader

2015-02-12 Thread Siva
… API. Thanks, Siva. On Wed, Feb 11, 2015 at 11:40 PM, Gabriel Reid wrote: > Hi Siva, > > The Bulk CSV Loader (i.e. the > MapReduce-based loader) definitely won't support records split over > multiple input lines. It could be that loading via PSQ…

Re: Phoenix bulk loading

2015-02-12 Thread Siva
… in the same way as the HBase loader. What do you say, any thoughts on this? Thanks, Siva. On Wed, Feb 11, 2015 at 11:34 PM, Gabriel Reid wrote: > Hi Siva, > > If I understand correctly, you want to explicitly supply null values > in a CSV file for some fields. In general, this sh…

Phoenix bulk loading

2015-02-11 Thread Siva
… org.apache.phoenix.mapreduce.CsvBulkLoadTool --table P_TEST_2_COLS --input /user/sbhavanari/p_h_test_2_cols_less.csv --import-columns NAME,LEADID,D --zookeeper 172.31.45.176:2181:/hbase Thanks, Siva.

Mapping HBase table in Phoenix

2015-02-10 Thread Siva
… varchar); I queried the table from Phoenix, and the data shows as NULL for the address column. 0: jdbc:phoenix:172.31.45.176:2181:/hbase> select * from "tab_2_cf";
+-----+------+---------+
| PK  | name | address |
+-----+------+---------+
| r1  | asdf | null    |
+-----+------+---------+
Any help on this? Thanks, Siva.
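The usual cause of this NULL (also discussed in the "Data loaded from Hbase is not visible in Phoenix" thread below) is identifier case: Phoenix folds unquoted SQL identifiers to upper case, while double-quoted identifiers keep their exact case, and HBase column qualifiers are case-sensitive bytes. A minimal Python sketch of that folding rule — the helper is illustrative, not Phoenix code:

```python
def normalize_identifier(name: str) -> str:
    """Mimic SQL identifier normalization as Phoenix applies it:
    unquoted identifiers are folded to upper case, while identifiers
    wrapped in double quotes keep their exact case."""
    if len(name) >= 2 and name.startswith('"') and name.endswith('"'):
        return name[1:-1]  # quoted: strip the quotes, preserve case
    return name.upper()    # unquoted: fold to upper case

# An HBase qualifier written as lower-case "address" only matches a
# Phoenix column declared in quotes; an unquoted address becomes
# ADDRESS, matches no cells, and therefore reads back as NULL.
print(normalize_identifier('address'))    # ADDRESS
print(normalize_identifier('"address"'))  # address
```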

Re: Data loaded from Hbase is not visible in Phoenix

2015-02-05 Thread Siva
Hi Chandu, the data is showing up after specifying all the qualifiers in uppercase as part of bulk loading from HBase. Thanks a lot for your help. Thanks, Siva. On Thu, Feb 5, 2015 at 1:39 AM, Chandu wrote: > You have to use double quotes to get the correct mapping from Phoenix to HBase > columns.

Line separator option in Bulk loader

2015-02-05 Thread Siva
We have a table that contains a NOTE column; this column holds lines of text separated by newlines. When I load the data from .csv through the bulk loader, Phoenix fails with an error, and HBase terminates the text when it encounters a newline, treating the rest of NOTE as a new record. Is there a way to …
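For context, standard CSV quoting can carry embedded newlines inside a field, but a loader that splits input line by line cannot see record boundaries that way. A small Python illustration (the sample data is made up):

```python
import csv
import io

# A CSV record whose NOTE field spans multiple lines. A CSV-aware
# parser honors the quoting and yields one record; a naive
# newline-based splitter (as a line-delimited bulk loader behaves)
# would see three separate records.
raw = 'id,note\n1,"first line\nsecond line\nthird line"\n'

rows = list(csv.reader(io.StringIO(raw)))
print(len(rows))    # 2: header plus one record
print(rows[1][1])   # the full multi-line note, newlines intact

naive = raw.strip().split('\n')
print(len(naive))   # 4: the record is broken apart when split naively
```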

Fwd: Data loaded from Hbase is not visible in Phoenix

2015-02-04 Thread Siva
… values. I understand that HBase stores the data in byte format; since I created the table in Phoenix and loaded it through HBase, how does Phoenix interpret the data types? Could someone throw some light on what's happening behind the scenes? Thanks, Siva.

Fwd: Bulk loading error

2015-02-04 Thread Siva
Hi everyone, I encountered the below error while bulk loading data. Can you let me know what the format for the date type is? at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: java.lang.RuntimeException: Error on record, java.text.ParseException: Unparseable date: "2…
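To the question above: Phoenix parses CSV date values with a Java SimpleDateFormat pattern, by default "yyyy-MM-dd HH:mm:ss" (configurable via the phoenix.query.dateFormat property, if I recall the setting correctly). A Python sketch of the equivalent check, using strptime's matching pattern; the sample values are made up:

```python
from datetime import datetime

# Java's "yyyy-MM-dd HH:mm:ss" expressed in strptime syntax.
PATTERN = "%Y-%m-%d %H:%M:%S"

def parses(value: str) -> bool:
    """Return True if the value matches the expected date pattern,
    False where a loader would raise an 'Unparseable date' error."""
    try:
        datetime.strptime(value, PATTERN)
        return True
    except ValueError:
        return False

print(parses("2015-02-04 10:30:00"))  # True: matches the pattern
print(parses("02/04/2015"))           # False: would be unparseable
```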