Re: Can I change a String column's size and preserve the data?

2016-10-07 Thread James Taylor
Hi Zack, Phoenix doesn't support an ALTER TABLE statement that allows this, but you can modify the SYSTEM.CATALOG directly with an UPSERT VALUES call if you're careful. Make sure to:
- Create a snapshot of the SYSTEM.CATALOG just in case something goes wrong
- Ensure that other modifications
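A minimal sketch of James's idea in Java via Phoenix JDBC is below. The SYSTEM.CATALOG column names used here (TENANT_ID, TABLE_SCHEM, TABLE_NAME, COLUMN_NAME, COLUMN_FAMILY, COLUMN_SIZE), the column-family value, and the schema/table/column names are all assumptions for illustration; verify them against your Phoenix version, and snapshot SYSTEM.CATALOG first as advised above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch only: widening a VARCHAR column by upserting a new COLUMN_SIZE
// into SYSTEM.CATALOG. Column names and semantics are assumptions --
// verify against your Phoenix version, and snapshot SYSTEM.CATALOG first.
public class WidenVarcharSketch {

    // Builds the UPSERT statement; kept separate so it can be inspected
    // before anything touches a real cluster.
    static String buildCatalogUpsert() {
        return "UPSERT INTO SYSTEM.CATALOG "
             + "(TENANT_ID, TABLE_SCHEM, TABLE_NAME, COLUMN_NAME, COLUMN_FAMILY, COLUMN_SIZE) "
             + "VALUES (?, ?, ?, ?, ?, ?)";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildCatalogUpsert());
        // Only touch a real cluster when a JDBC URL is passed explicitly,
        // e.g. jdbc:phoenix:zk-host:2181
        if (args.length == 1) {
            try (Connection conn = DriverManager.getConnection(args[0]);
                 PreparedStatement ps = conn.prepareStatement(buildCatalogUpsert())) {
                ps.setString(1, null);          // tenant id (global table)
                ps.setString(2, "MY_SCHEMA");   // hypothetical schema
                ps.setString(3, "MY_TABLE");    // hypothetical table
                ps.setString(4, "MY_COL");      // the VARCHAR column to widen
                ps.setString(5, "0");           // hypothetical column family
                ps.setInt(6, 100);              // new max length
                ps.executeUpdate();
                conn.commit();
            }
        }
    }
}
```

Note that Phoenix clients cache table metadata, so existing connections may not see a change made this way until their cache is refreshed.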

Issue w/ CsvBulkUploadTool when column data has "," character

2016-10-07 Thread Riesland, Zack
I am trying to use the CsvBulkUploadTool to get data from Hive to HBase. As I typically do, I created a Hive table with a copy of the data that I care about, and with the properties: "row format delimited fields terminated by '|' null defined as 'null' stored as textfile location 'my location'
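Since the Hive table above is pipe-delimited, the usual fix for commas appearing inside field values is to pass that same delimiter to the bulk load tool. A hedged sketch follows; the flag names (-t, -i, -d, -z) are from my recollection of CsvBulkLoadTool's usage, so check the tool's --help output for your Phoenix version, and the table/path/quorum values are placeholders.

```java
// Sketch: building arguments for Phoenix's CsvBulkLoadTool with a custom
// field delimiter so commas inside values are not treated as separators.
// Flag names are assumptions -- verify with the tool's --help output.
public class BulkLoadArgsSketch {

    static String[] buildArgs(String table, String inputPath,
                              String delimiter, String zkQuorum) {
        return new String[] {
            "-t", table,       // target Phoenix table
            "-i", inputPath,   // HDFS input path (the Hive table location)
            "-d", delimiter,   // field delimiter, e.g. "|"
            "-z", zkQuorum     // ZooKeeper quorum
        };
    }

    public static void main(String[] args) {
        String[] toolArgs = buildArgs("MY_TABLE", "/my/location", "|", "zk-host:2181");
        System.out.println(String.join(" ", toolArgs));
        // With Phoenix on the classpath this would then be run roughly as:
        // ToolRunner.run(HBaseConfiguration.create(), new CsvBulkLoadTool(), toolArgs);
    }
}
```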

Re: Accessing phoenix tables in Spark 2

2016-10-07 Thread Mich Talebzadeh
Thanks again, all. My primary objective was to write to HBase directly from Spark Streaming, and Phoenix was really the catalyst here. My point is: if I manage to write directly from Spark Streaming to HBase, would that be a better option? FYI, I can read from the Phoenix table on HBase

Need help in retrieving records of HBase through Phoenix

2016-10-07 Thread Mohan Nani
Can somebody tell me the Java code to retrieve all column values associated with a given row key? The row key should be the input. Let's say I have row_key1 col1:value1 col2:value1, row_key1 col1:value1 col2:value2. So when I pass row_key1 as the parameter, the code should return like
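A minimal sketch of the pattern being asked for, using Phoenix JDBC to fetch every column for one row key, is below. The table name, primary-key column name, and JDBC URL are placeholders, not something from the thread.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;

// Sketch: fetch every column for one row key through Phoenix JDBC.
// Table, column, and URL names are hypothetical placeholders.
public class FetchRowSketch {

    static String buildSelect(String table, String pkColumn) {
        return "SELECT * FROM " + table + " WHERE " + pkColumn + " = ?";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildSelect("MY_TABLE", "ROW_KEY"));
        if (args.length == 2) {   // args: <jdbc-url> <row-key>
            try (Connection conn = DriverManager.getConnection(args[0]);
                 PreparedStatement ps =
                     conn.prepareStatement(buildSelect("MY_TABLE", "ROW_KEY"))) {
                ps.setString(1, args[1]);
                try (ResultSet rs = ps.executeQuery()) {
                    ResultSetMetaData md = rs.getMetaData();
                    while (rs.next()) {
                        // Print every column name/value pair for the row
                        for (int i = 1; i <= md.getColumnCount(); i++) {
                            System.out.println(md.getColumnName(i) + " = " + rs.getString(i));
                        }
                    }
                }
            }
        }
    }
}
```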

Re: Bulk dataload and dynamic columns

2016-10-07 Thread anil gupta
I don't think that feature is supported yet in the bulk load tool. On Thu, Oct 6, 2016 at 9:55 PM, Sanooj Padmakumar wrote: > Hi All, > > Can we populate dynamic columns as well while bulk loading data ( > https://phoenix.apache.org/bulk_dataload.html) into HBase using Phoenix >

Re: Accessing phoenix tables in Spark 2

2016-10-07 Thread James Taylor
Hi Mich, I'd encourage you to use this mechanism mentioned by Josh: Another option is to use Phoenix-JDBC from within Spark Streaming. I've got a toy example of using Spark streaming with Phoenix DataFrames, but it could just as easily be a batched JDBC upsert. Trying to write directly to HBase
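The "batched JDBC upsert" James mentions can be sketched roughly as below. The table and column names are hypothetical; the point is the shape: prepare one UPSERT, execute it per record, and commit once per batch, since Phoenix buffers mutations client-side until commit.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

// Sketch of a batched Phoenix UPSERT, an alternative to writing HBase
// directly from Spark Streaming. Table and column names are hypothetical.
public class BatchedUpsertSketch {

    static String buildUpsert(String table, String... columns) {
        StringBuilder cols = new StringBuilder();
        StringBuilder marks = new StringBuilder();
        for (int i = 0; i < columns.length; i++) {
            if (i > 0) { cols.append(", "); marks.append(", "); }
            cols.append(columns[i]);
            marks.append('?');
        }
        return "UPSERT INTO " + table + " (" + cols + ") VALUES (" + marks + ")";
    }

    static void writeBatch(Connection conn, List<String[]> rows) throws Exception {
        // Phoenix buffers mutations client-side; disabling auto-commit and
        // committing once per batch keeps round trips down.
        conn.setAutoCommit(false);
        try (PreparedStatement ps =
                 conn.prepareStatement(buildUpsert("MY_TABLE", "ROW_KEY", "VAL"))) {
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.executeUpdate();
            }
        }
        conn.commit();
    }

    public static void main(String[] args) {
        System.out.println(buildUpsert("MY_TABLE", "ROW_KEY", "VAL"));
    }
}
```

In a Spark Streaming job, writeBatch would typically be called from foreachPartition so each partition opens its own connection.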

Re: Accessing phoenix tables in Spark 2

2016-10-07 Thread Mich Talebzadeh
Thanks Josh, I will try your code as well. I wrote this simple program based on some code that directly creates or populates an HBase table called "new" from Spark 2: import org.apache.spark._ import org.apache.spark.rdd.NewHadoopRDD import org.apache.hadoop.hbase.{HBaseConfiguration,

Re: Accessing phoenix tables in Spark 2

2016-10-07 Thread Josh Mahonin
Hi Mich, You're correct that the rowkey is the primary key, but if you're writing to HBase directly and bypassing Phoenix, you'll have to be careful about the construction of your row keys to adhere to the Phoenix data types and row format. I don't think it's very well documented, but you might
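To illustrate Josh's caution about matching Phoenix's type encodings when building row keys by hand: as I understand it, Phoenix stores an INTEGER key as 4 big-endian bytes with the sign bit flipped so that negative values sort before positive ones under HBase's unsigned byte ordering. This is my reading, not something from the thread; the authoritative implementation is Phoenix's PDataType classes (e.g. PInteger), so verify against those rather than trusting this sketch.

```java
// Illustration of why hand-built row keys must match Phoenix's type
// encodings. The sign-bit flip below reflects my understanding of
// Phoenix's INTEGER encoding; verify against org.apache.phoenix.schema
// .types.PInteger before relying on it.
public class PhoenixIntKeySketch {

    static byte[] encodeInt(int v) {
        return new byte[] {
            (byte) ((v >> 24) ^ 0x80), // flip sign bit for sort order
            (byte) (v >> 16),
            (byte) (v >> 8),
            (byte) v
        };
    }

    // Unsigned lexicographic compare, the order HBase uses for row keys
    static int compareBytes(byte[] a, byte[] b) {
        for (int i = 0; i < a.length && i < b.length; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        // With the sign bit flipped, -1 sorts before 1, as SQL expects;
        // plain big-endian two's complement would sort -1 after 1.
        System.out.println(compareBytes(encodeInt(-1), encodeInt(1)) < 0);
    }
}
```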

Re: Views and alter table

2016-10-07 Thread James Taylor
Hi Sanooj, What version of Phoenix? Would you mind filing a JIRA with steps to reproduce the issue? Thanks, James On Friday, October 7, 2016, Sanooj Padmakumar wrote: > Hi All > > We get a mutation state related error when we try altering a table to which > views are added.

Views and alter table

2016-10-07 Thread Sanooj Padmakumar
Hi All, We get a mutation-state-related error when we try altering a table to which views are added. We always have to drop the view before doing the alter. Is there a way we can avoid this? Thanks, Sanooj

Re: Creating view on a phoenix table throws Mismatched input error

2016-10-07 Thread James Taylor
FWIW, you can use an "alter view drop column" statement after you've created your view to remove columns, though. Thanks, James On Friday, October 7, 2016, Mich Talebzadeh wrote: > thanks Ankit

Re: Accessing phoenix tables in Spark 2

2016-10-07 Thread Mich Talebzadeh
Thank you all, very helpful. I have not tried the method Ciureanu suggested but will do so. Now I will be using Spark Streaming to populate an HBase table. I was hoping to do this through Phoenix but managed to write a script to write to the HBase table from Spark 2 itself. Having worked with HBase I

Re: Accessing phoenix tables in Spark 2

2016-10-07 Thread Ciureanu Constantin
In Spark 1.4 it worked via JDBC; I'm sure it would work in 1.6 / 2.0 without issues. Here's some sample code I used (it was getting data in parallel, 24 partitions): import org.apache.spark.SparkConf import org.apache.spark.SparkContext import org.apache.spark.rdd.JdbcRDD import java.sql.{Connection,

Re: Accessing phoenix tables in Spark 2

2016-10-07 Thread Ted Yu
JIRA on the HBase side: HBASE-16179, FYI. On Fri, Oct 7, 2016 at 6:07 AM, Josh Mahonin wrote: > Hi Mich, > > There's an open ticket about this issue here: > https://issues.apache.org/jira/browse/PHOENIX- > > Long story short, Spark changed their API (again), breaking the

Re: Accessing phoenix tables in Spark 2

2016-10-07 Thread Josh Mahonin
Hi Mich, There's an open ticket about this issue here: https://issues.apache.org/jira/browse/PHOENIX- Long story short, Spark changed their API (again), breaking the existing integration. I'm not sure of the level of effort needed to get it working with Spark 2.0, but based on examples from other

Accessing phoenix tables in Spark 2

2016-10-07 Thread Mich Talebzadeh
Hi, Has anyone managed to read a Phoenix table in Spark 2 by any chance please? Thanks Dr Mich Talebzadeh LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

Re: Creating view on a phoenix table throws Mismatched input error

2016-10-07 Thread Mich Talebzadeh
Thanks, Ankit. Dr Mich Talebzadeh LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw http://talebzadehmich.wordpress.com Disclaimer: Use it at your own risk. Any

Re: Creating view on a phoenix table throws Mismatched input error

2016-10-07 Thread Ankit Singhal
Currently, Phoenix doesn't support projecting selective columns of a table, or expressions, in a view. You need to project all the columns with SELECT *. Please see the section "Limitations" on this page or PHOENIX-1507. https://phoenix.apache.org/views.html On Thu, Oct 6, 2016 at 10:05 PM, Mich
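A small illustration of the limitation Ankit describes: the DDL builder below only ever emits SELECT *, because (per PHOENIX-1507, as of this thread) a view that projects a subset of columns is rejected at parse time. The view/table/predicate names are placeholders.

```java
// Illustration of the Phoenix view limitation: views must project all
// columns (SELECT *); projecting specific columns fails with a parse /
// mismatched-input style error. Names here are hypothetical.
public class ViewDdlSketch {

    static String buildViewDdl(String viewName, String table, String whereClause) {
        // e.g. "SELECT col1, col2 FROM ..." would not be accepted here
        String ddl = "CREATE VIEW " + viewName + " AS SELECT * FROM " + table;
        if (whereClause != null && !whereClause.isEmpty()) {
            ddl += " WHERE " + whereClause;
        }
        return ddl;
    }

    public static void main(String[] args) {
        System.out.println(buildViewDdl("MY_VIEW", "MY_TABLE", "KIND = 'x'"));
    }
}
```

As James notes elsewhere in this digest, columns can still be trimmed after creation with an ALTER VIEW ... DROP COLUMN statement.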

RE: Recover from "Cluster is being concurrently upgraded from 4.7.x to 4.8.x"

2016-10-07 Thread Patrick FICHE
Hi, No, this occurs even if only one client is connected. I restarted the HBase services to make sure that I had no connection and got the same error after restart with sqlline. Patrick From: Samarth Jain [mailto:samarth.j...@gmail.com] Sent: Thursday, October 6, 2016 6:51 PM To: