pport returning multiple rows for
each input row. Does anyone know if the UDF framework can support that?
On Tue, Jan 16, 2018 at 6:07 PM, Krishna wrote:
> I would like to convert a column of ARRAY data-type such that each element
> of the array is returned as a row. Hive supports it via La
I would like to convert a column of ARRAY data-type such that each element
of the array is returned as a row. Hive supports it via Lateral Views (
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LateralView
).
Does the UDF framework in Phoenix allow building such functions?
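For reference, a minimal sketch of the Hive LATERAL VIEW pattern being referred to (the events table and its tags ARRAY column are hypothetical):
-- Hive: each element of the array column becomes its own output row
SELECT e.id, t.tag
FROM events e
LATERAL VIEW explode(e.tags) t AS tag;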
You have two options:
- Modify your primary key to include metric_type & timestamp as leading
columns.
- Create an index on metric_type & timestamp
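A rough sketch of both options, assuming a metrics table with the columns mentioned in the thread (everything else here is hypothetical):
-- Option 1: make metric_type and the timestamp the leading primary-key columns
CREATE TABLE IF NOT EXISTS METRICS (
    METRIC_TYPE VARCHAR NOT NULL,
    TS          DATE NOT NULL,
    METRIC_ID   VARCHAR NOT NULL,
    VAL         DOUBLE
    CONSTRAINT PK PRIMARY KEY (METRIC_TYPE, TS, METRIC_ID));

-- Option 2: keep the existing key and add a secondary index on the filter columns
CREATE INDEX IF NOT EXISTS METRICS_TYPE_TS_IDX ON METRICS (METRIC_TYPE, TS);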
On Monday, October 3, 2016, Kanagha wrote:
> Sorry for the confusion.
>
> metric_type,
> timestamp,
> metricId is defined as the primary key via Ph
Hi,
Does Phoenix have an API for converting a rowkey (made up of multiple
columns, in ImmutableBytesRow format) back into its primary key column
values? I am performing a scan directly from HBase and would like to
convert the rowkey into column values. We used the standard Phoenix JDBC
API while writing to
e_Name" --input "HDFS
input file path" -d $'\034'
-d $'\034' --> the field separator in the file is the FS (0x1C) control
character, so we provided it explicitly.
Regards,
Radha Krishna G
>
> On Tue, Jul 5, 2016 at 6:21 PM, Vamsi Krishna
> wrote:
>
>> Team,
>>
>> I'm working on HDP 2.3.2 (Phoenix 4.4.0, HBase 1.1.2).
>> When I use the '-it' option of CsvBulkLoadTool, neither the Actual Table nor the Local
>> Index Table is loaded.
>>
Team,
In the Phoenix-Spark plugin, is the DataFrame save operation single-threaded?
df.write \
.format("org.apache.phoenix.spark") \
.mode("overwrite") \
.option("table", "TABLE1") \
.option("zkUrl", "localhost:2181") \
.save()
Thanks,
Vamsi Attluri
--
Vamsi Attluri
Team,
I'm working on HDP 2.3.2 (Phoenix 4.4.0, HBase 1.1.2).
When I use the '-it' option of CsvBulkLoadTool, neither the Actual Table nor the Local
Index Table is loaded.
*Command:*
*HADOOP_CLASSPATH=/usr/hdp/current/hbase-master/lib/hbase-protocol.jar:/etc/hbase/conf
yarn jar /usr/hdp/current/phoenix-client/p
Team,
I'm using HDP 2.3.2 (HBase 1.1.2, Phoenix 4.4.0)
I'm seeing an exception when I run the IndexTool MapReduce job to build
Local Index asynchronously.
Could someone please help me understand what I'm doing wrong?
*Create Table:*
CREATE TABLE IF NOT EXISTS VAMSI.TABLE_A (COL1 VARCHAR(36) , COL
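For context, a minimal sketch of the async index pattern being attempted here (the indexed column is hypothetical, and the ASYNC keyword requires a Phoenix build that supports it):
-- Create the local index without populating it; the IndexTool
-- MapReduce job is then run to build it.
CREATE LOCAL INDEX IF NOT EXISTS TABLE_A_IDX ON VAMSI.TABLE_A (COL2) ASYNC;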
s or not and then we can drop it.
>
> Now, as part of PHOENIX-1734, we have reimplemented local indexes and
> store them in the same data table.
>
> Thanks,
> Rajeshbabu.
>
> On Tue, Jun 28, 2016 at 4:45 PM, Vamsi Krishna
> wrote:
>
>> Team,
&
ring the query for particular index.
>
> Regards,
> Ankit Singhal
>
> Re
>
> On Tue, Jun 28, 2016 at 4:18 AM, Vamsi Krishna
> wrote:
>
>> Team,
>>
>> I'm using HDP 2.3.2 (HBase : 1.1.2, Phoenix : 4.4.0).
>> *Question:* For multiple local indexes on
e columns is a costly operation, so
> the optimizer chooses to scan the data table instead of using the index to serve the query.
>
> The query below should use the local index:
> explain select col2, any_covered_columns from vamsi.table_a where col2 =
> 'abc';
>
> For covered indexes , you can
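A minimal sketch of the covered-index idea the reply is describing (column names follow the thread's examples; the INCLUDE list is an assumption):
-- Covered local index: COL2 is indexed and COL3 is stored in the index,
-- so the query below can be served entirely from the index.
CREATE LOCAL INDEX TABLE_A_COL2_IDX ON VAMSI.TABLE_A (COL2) INCLUDE (COL3);
EXPLAIN SELECT COL2, COL3 FROM VAMSI.TABLE_A WHERE COL2 = 'abc';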
Team,
We are using HDP 2.3.2 (HBase 1.1.2, Phoenix 4.4.0).
We have two Phoenix tables 'TABLE_A', 'TABLE_B' and a Phoenix view
'TABLE_VIEW'.
The Phoenix view always points to one of the two tables above, which is
called the Active table; the other table is called the Standby table.
We have a ba
Team,
I'm using HDP 2.3.2 (HBase : 1.1.2, Phoenix : 4.4.0).
*Question:* The Phoenix explain plan does not show any difference after adding a
local index on the table column that is used in the query filter. Can someone
please explain why?
*Create table:*
CREATE TABLE IF NOT EXISTS VAMSI.TABLE_A (COL1 VARCH
Team,
I'm using HDP 2.3.2 (HBase : 1.1.2, Phoenix : 4.4.0).
*Question:* For multiple local indexes on a Phoenix table, only one local
index table is created in HBase. Is this expected behavior? Can
someone explain why?
Phoenix:
CREATE TABLE IF NOT EXISTS VAMSI.TABLE_B (COL1 VARCHAR(36) , COL2
V
Team,
I'm using HDP 2.3.2 (HBase : 1.1.2, Phoenix : 4.4.0).
*Question:* Dropping a Phoenix local index does not drop the local index
table in HBase. Can someone explain why?
Phoenix:
CREATE TABLE IF NOT EXISTS VAMSI.TABLE_B (COL1 VARCHAR(36) , COL2
VARCHAR(36) , COL3 VARCHAR(36) CONSTRAINT TABLE_
Caused by: java.lang.OutOfMemoryError: Java heap space
Thanks & Regards
Radha Krishna
On Wed, May 18, 2016 at 12:04 AM, Maryann Xue wrote:
> Hi Radha,
>
> Thanks for reporting this issue! Would you mind trying it with latest
> Phoenix version?
>
> Thanks,
> Maryann
>
> On Tue, May 17, 2
ol2=small.col2;
Hash Join
UPSERT INTO Target_Table SELECT big.col1,big.col2...(102 columns) FROM
BIG_TABLE as big JOIN SMALL_TABLE as small ON big.col1=small.col1 where
big.col2=small.col2;
Thanks & Regards
Radha krishna
result of a Spark Job. I would like to know: if I convert it to a DataFrame
> and save it, will it do a bulk load, or is it not an efficient way to write
> data to a Phoenix HBase table?
>
> --
> Thanks and Regards
> Mohan
>
--
Thanks & Regards
Radha krishna
hist_hist_df.registerTempTable("HIST_TABLE")
val matched_rc = input_incr_rdd_df.join(hist_hist_df,
input_incr_rdd_df("Col1") <=> hist_hist_df("col1")
&& input_incr_rdd_df("col2") <=> hist_hist_df("col2"))
matched_rc.show()
Thanks & Regards
Radha krishna
ment for the create and load scripts.
Thanks & Regards
Radha krishna
Phoenix create table with one column family and 19 salt buckets
===
CREATE TABLE IF NOT EXISTS MY_Table_Name(
"BASE_PROD_ID" VARCHAR,
"SRL_NR_ID&quo
hat need to be combined, but this should still be very minor in
> comparison to the total amount of work required to do aggregations, so
> it also shouldn't have a major effect either way.
>
> - Gabriel
>
> On Wed, Mar 16, 2016 at 7:15 PM, Vamsi Krishna
> wrote:
> >
hands those
>> HFiles over to HBase, so the memstore and WAL are never
>> touched/affected by this.
>>
>> - Gabriel
>>
>>
>> On Tue, Mar 15, 2016 at 1:41 PM, Vamsi Krishna
>> wrote:
>> > Team,
>> >
>> > Does phoenix CsvBulkLoa
Hi,
I'm using CsvBulkLoadTool to load a csv data file into Phoenix/HBase table.
HDP Version : 2.3.2 (Phoenix Version : 4.4.0, HBase Version: 1.1.2)
CSV file size: 97.6 GB
No. of records: 1,439,000,238
Cluster: 13 node
Phoenix table salt-buckets: 13
Phoenix table compression: snappy
HBase table si
are a small number of rows (< ~100K, and this depends on cluster size
> as well), you can go ahead with the phoenix-spark plug-in and increase the batch
> size to accommodate more rows; otherwise use the CsvBulkLoadTool.
>
> Thanks
> Pari
>
> On 16 March 2016 at 20:03, Vamsi Krishna wrote:
>
&g
Team,
Does the Phoenix CsvBulkLoadTool write to the HBase WAL/Memstore?
Phoenix-Spark plugin:
Does the saveToPhoenix method on RDD[Tuple] write to the HBase WAL/Memstore?
Thanks,
Vamsi Attluri
--
Vamsi Attluri
According to the Phoenix-Spark plugin docs, only SaveMode.Overwrite is supported
for saving DataFrames to a Phoenix table.
Are there any plans to support other save modes (append, ignore) anytime
soon? Having only the overwrite option makes it useful for a small number of
use-cases.
but basic tasks
> works fine.
>
> Krishna at "Fri, 15 Jan 2016 18:20:47 -0800" wrote:
> K> Thanks Andrew. Are binaries available for CDH5.5.x?
>
> K> On Tue, Nov 3, 2015 at 9:10 AM, Andrew Purtell
> wrote:
>
> K> Today I pushed a new branch
/phoenix/coprocessor/UngroupedAggregateRegionObserver.java:[550,57]
is not
abstract and does not override abstract method
nextRaw(java.util.List,org.apache.hadoop.hbase.regionserver.ScannerContext)
in org.apache.hadoop.hbase.regionserver.RegionScanner
On Fri, Jan 15, 2016 at 6:20 PM, Krishna wrote
Thanks Andrew. Are binaries available for CDH5.5.x?
On Tue, Nov 3, 2015 at 9:10 AM, Andrew Purtell wrote:
> Today I pushed a new branch '4.6-HBase-1.0-cdh5' and the tag
> 'v4.6.0-cdh5.4.5' (58fcfa6) to
> https://github.com/chiastic-security/phoenix-for-cloudera. This is the
> Phoenix 4.6.0 relea
Did you run compaction after bulk loading twice?
On Friday, January 15, 2016, sac...@outlook.com wrote:
> hi:
>
> When I bulk load the same data twice, the storage doubles.
> I did set versions to 1 when I created the table, but I cannot find it
> in the HBase table describe
The general recommendation is to choose the salt number as a small multiple of
the number of region servers. If you are aware of your key distribution, you can
also pre-split the table in Phoenix along specific split points, as sketched below.
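A minimal sketch of both approaches (table, columns, bucket count, and split points are all hypothetical):
-- Salted table: bucket count chosen as a small multiple of the region server count
CREATE TABLE IF NOT EXISTS T_SALTED (
    K VARCHAR PRIMARY KEY,
    V VARCHAR) SALT_BUCKETS = 16;

-- Pre-split on known key boundaries (an alternative to salting)
CREATE TABLE IF NOT EXISTS T_PRESPLIT (
    K VARCHAR PRIMARY KEY,
    V VARCHAR) SPLIT ON ('g', 'n', 't');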
On Monday, January 11, 2016, Ken Hampson wrote:
> I ran into this as well just today, and am ve
opy data to remote cluster?
> People give different opinions. Replication will not work for us as we’re
> using bulk loading.
>
> Can you advise what our options are to copy data to a remote cluster and
> keep it up to date?
> Thanks for your inputs.
>
> -Regards
> Krishna
>
>
02 AM, Josh Mahonin wrote:
> Hi Krishna,
>
> That's great to hear. You're right, the plugin itself should be backwards
> compatible to Spark 1.3.1 and should be for any version of Phoenix past
> 4.4.0, though I can't guarantee that to be the case forever. As well, I
&
Are you sure your HA configuration is working properly? I doubt this is
related to Phoenix.
Are these parameters set up correctly?
hbase-site.xml:
  hbase.rootdir = hdfs://nameservice/hbase
hdfs-site.xml:
  dfs.nameservices = nameservice
  dfs.ha.namenodes.nameservice = nn
6.x <--> Spark 1.5.0
On Tue, Dec 1, 2015 at 7:05 PM, Josh Mahonin wrote:
> Hi Krishna,
>
> I've not tried it in Java at all, but as of Spark 1.4+ the DataFrame API
> should be unified between Scala and Java, so the following may work for you:
>
> DataFrame df = sql
Hi,
Is there a working example for using spark plugin in Java? Specifically,
what's the java equivalent for creating a dataframe as shown here in scala:
val df = sqlContext.phoenixTableAsDataFrame("TABLE1", Array("ID",
"COL1"), conf = configuration)
Hi,
We tried to use HBase namespace feature with Phoenix and we see there is an
issue with creating LOCAL Indexes when we use HBase namespace.
We are planning on using Phoenix Schema feature in our application.
If someone has already tried it and seen any issues with 'schema' feature,
could you p
(by diff) what is required we can
> figure out if we can support compatibility in some way.
>
>
> On Sep 9, 2015, at 11:00 PM, Krishna <
>
> research...@gmail.com
> > wrote:
>
> I can volunteer to spend some time on this.
>
> CDH artifacts are available i
1400 mappers on 9 nodes is about 155 mappers per datanode which sounds high
to me. There are very few specifics in your mail. Are you using YARN? Can
you provide details like table structure, # of rows & columns, etc. Do you
have an error stack?
On Friday, September 11, 2015, Gaurav Kanade
wrote
Another option is to create HFiles using csv bulk loader on one cluster,
transfer them to the backup cluster and run LoadIncrementalHFiles(...).
On Tue, Sep 1, 2015 at 11:53 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi Gaurav,
>
> bulk load bypass the WAL, that's correct. It's t
recent thread points to a stack overflow answer with some clues.
> On 10 Sep 2015 7:00 am, "Krishna" wrote:
>
>> I can volunteer to spend some time on this.
>>
>> CDH artifacts are available in Maven repo but from reading other threads
>> on CDH-Phoenix comp
CDH compatible Phoenix code base?
2) Is having a CDH compatible branch even an option?
Krishna
On Friday, August 28, 2015, Andrew Purtell > wrote:
> Yes I am interested. Assuming CDH artifacts are publicly available in a
> Maven repo somewhere, which I believe is the case, perhap
You can map an HBase composite row key to a Phoenix primary key only if the
serialization used for HBase matches Phoenix's. Ex: a leading byte for the salt
bucket, a zero-byte character for separating variable-length columns, etc.
If you used a different mechanism to serialize the rowkey in HBase, you can
still map it to a Phoenix table but decla
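A minimal sketch of mapping such an existing HBase table when the rowkey bytes follow Phoenix's own encoding (all names, types, and the column family here are hypothetical):
-- View over an existing HBase table "t1"; the composite rowkey is two
-- VARCHAR parts separated by a zero byte, and "cf":"val" is a KeyValue column.
CREATE VIEW "t1" (
    PK1 VARCHAR NOT NULL,
    PK2 VARCHAR NOT NULL,
    "cf"."val" VARCHAR
    CONSTRAINT PK PRIMARY KEY (PK1, PK2));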
The owner of the directory containing the HFiles should be the 'hbase' user, and
ownership can be set using the 'chown' command.
On Mon, Jul 6, 2015 at 7:12 AM, Riesland, Zack
wrote:
> I’ve been running CsvBulkLoader as ‘hbase’ and that has worked well.
>
>
>
> But I now need to integrate with some scripts tha
;
> On Fri, Mar 13, 2015 at 1:06 AM Nick Dimiduk wrote:
>
>> This works fine for me:
>>
>> $ ./bin/sqlline.py localhost:2181:/hbase;tenantId=foo
>>
>> At least, it launches without complaint. I don't have any tables with
>> tenants enabled.
>>
>&g
ening in JDBCUtilTest.
>
> On Thu, Mar 12, 2015 at 3:24 PM, Vamsi Krishna
> wrote:
>
>> Hi,
>>
>> Can someone help me understand how to establish a tenant-specific
>> connection using Sqlline?
>>
>> I see the following documented on Phoenix w
Hi,
Can someone help me understand how to establish a tenant-specific
connection using Sqlline?
I see the following documented on Phoenix website, but i'm not sure how to
do that for Sqlline connection:
http://phoenix.apache.org/multi-tenancy.html
For example, a tenant-specific connection is es
The latest version available on EMR is 0.94, so you can upgrade Phoenix to
3.x; however, EMR's default bootstrap script doesn't do that.
Download and save the Phoenix binaries to your S3 bucket, modify the EMR
bootstrap script to install 3.x, save it to your S3 bucket, and add it as a
bootstrap action. I d
gt; it not to be salted at all.
>
> Thanks,
> James
>
> On Tue, Mar 3, 2015 at 1:39 PM, Nick Dimiduk wrote:
> > The first client to connect to Phoenix and notice the absence of
> > SYSTEM.SEQUENCE will create the table. That means the configuration
> > phoenix.sequence
er a variable length
> type.
>
> You can check code on PTableImpl#newKey
>
> On 3/3/15, 10:02 PM, "Krishna" wrote:
>
> >Hi,
> >
> >How does phoenix store composite primary keys in HBase?
> >For example, if the primary key is a composite of two colum
Hi,
How does phoenix store composite primary keys in HBase?
For example, if the primary key is a composite of two columns:
col1 short
col2 integer
Does phoenix concatenate 1 byte short with 4 byte integer to create a 5
byte array to make HBase rowkey?
Please point me to the code that I can refer
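For illustration, a minimal sketch of such a composite key declaration (names are hypothetical); the row key is formed by concatenating the serialized key columns in declaration order:
CREATE TABLE IF NOT EXISTS COMPOSITE_EXAMPLE (
    COL1 SMALLINT NOT NULL,
    COL2 INTEGER NOT NULL,
    VAL  VARCHAR
    CONSTRAINT PK PRIMARY KEY (COL1, COL2));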
ing the folder in HDFS). Everything else sounded
> fine, but make sure to bounce your cluster and restart your clients
> after doing this.
>
> Thanks,
> James
>
> On Thu, Feb 26, 2015 at 12:28 PM, Vamsi Krishna
> wrote:
> > Hi,
> >
> > From phoeni
Ravi, thanks.
If the target table is salted, do I need to compute the leading byte (as I
understand, it's a hash value) in the mapper?
On Sunday, March 1, 2015, Ravi Kiran wrote:
> Hi Krishna,
>
> I assume you have already taken a look at the example here
> http://phoeni
Could someone comment on the following questions regarding the usage of
PhoenixOutputFormat in a standalone MR job:
- Is there a need to compute hash byte in the MR job?
- Are keys and values stored in BytesWritable before doing a
"context.write(...)" in the mapper?
fifed will create that many regions.
>
>
>
> Depending upon what region size is optimal (dependent on the table data), you
> need to choose the salt bucket number.
>
>
>
>
>
> *From:* Vamsi Krishna [mailto:vamsi.attl...@gmail.com]
> *Sent:* Thursday, February 26
Hi,
From the Phoenix archives I see that we can drop the SYSTEM.SEQUENCE table and
set 'phoenix.sequence.saltBuckets'
property to '1' to see the SYSTEM.SEQUENCE table recreated with 1 salt
bucket on cluster restart.
Reference:
http://mail-archives.apache.org/mod_mbox/incubator-phoenix-user/201412.mbox/%3
mar Ojha <
puneet.ku...@pubmatic.com> wrote:
> For big tables keep the salt bucket high, generally around 60-90.
>
> Smaller or join tables should have a minimal number of salt buckets, maybe 1-4.
>
>
>
>
>
> Thanks
>
>
>
>
>
> -- Original message---
Are there any recommendations for estimating and optimizing salt buckets
at table creation time? What, if any, are the cons of having a high
number (200+) of salt buckets? Is it possible to update salt buckets after
the table is created?
Thanks
Using the 'record' & 'outputformat' commands of sqlline.py, you can save data to
a csv file.
After connecting to the cluster using sqlline.py, execute the following commands:
1. !outputformat csv
2. !record data.csv
3. select * from system.catalog limit 10;
4. !record
5. !quit
You should see data.csv file wi
Hi,
I'm trying to do a batch insert using MyBatis & Phoenix, and I'm ending up
with an exception (org.springframework.jdbc.BadSqlGrammarException).
-
Here is an example of what I'm d
r():
> >
> > if (OrderBy.REV_ROW_KEY_ORDER_BY.equals(orderBy)) {
> >
> > ScanUtil.setReversed(scan);
> >
> > Cheers
> >
> > On Mon, Dec 1, 2014 at 7:45 PM, Krishna wrote:
> >
> >> Hi,
> >>
> >> Does Ph
Hi,
Does Phoenix support reverse scan as explained in HBASE-4811 (
https://issues.apache.org/jira/browse/HBASE-4811)?
I think 0.94 is compiled against hadoop1, so try sqlline under the hadoop1
dir.
On Wednesday, November 19, 2014, Komal Thombare
wrote:
> Hi Krishna,
>
> working under hadoop2 dir.
>
> Thanks and Regards
> Komal Ravindra Thombare
>
> -Krishna > wrote: -
>
Are you executing sqlline under hadoop1 or hadoop2 dir?
On Wednesday, November 19, 2014, Komal Thombare
wrote:
> Hi,
>
> When trying to start sqlline it gives error:
>
> Exception in thread "main" java.lang.NoClassDefFoundError: sqlline/SqlLine
> Caused by: java.lang.ClassNotFoundException: sql
Hi,
I'm trying to use Phoenix JDBC driver 4.1.0 in my application.
I'm able to resolve the Ivy dependency to 'phoenix-4.1.0.jar'. Ivy dependency:
I see that ‘phoenix-4.1.0.jar’ contains the ‘PhoenixDriver’.
$unzip -l phoenix-4.1.0.jar | grep PhoenixDriver
1102 08-29-14 14:14 org/apache/phoe
Hi,
I'm trying to integrate MyBatis with Apache Phoenix in my project, and to
use MyBatis Generator to create the Mapper/Model Java and XML files.
Did anyone try this?
Could anyone please point me to any documentation on integrating MyBatis
with Apache Phoenix?
Thanks,
Vamsi Attluri.
Shu wrote:
> Currently local index can only be created in default namespace.
>
> Alicia
>
> On Tue, Nov 11, 2014 at 3:30 PM, Vamsi Krishna
> wrote:
>
>> Hi,
>>
>> I'm working with HDP 2.2.
>>
>> Hadoop: 2.6.0.2.2.0.0-1084
>>
>> HBase: 0.98
"test:table1" disable;
Error: ERROR 1012 (42M03): Table undefined. tableName=TEST:TABLE1INDX1
(state=42M03,code=1012)
*Note:* The same scenario works fine when I create the namespace, table name,
and index name in UPPERCASE.
Thanks,
Vamsi Krishna Attluri
Hi,
I'm working with HDP 2.2.
Hadoop: 2.6.0.2.2.0.0-1084
HBase: 0.98.4.2.2.0.0-1084-hadoop2
Phoenix: 4.2
I created namespace 'TEST' in HBase.
I created a table 'TABLE1' in Phoenix under namespace 'TEST' in HBase.
When I try to create a local index on table 'TABLE1', I'm seeing an error.
Please
How many nodes are in the cluster? How fast do you expect this table to
grow?
In the current state, 1.5 million rows is not a massive dataset, so having an
index for every possibility will result in more effort spent on index
maintenance without producing comparable query improvements.
If you can narrow
Hi Poonam, you should consider storing date fields in DATE/TIME data types
instead of CHAR(22).
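A rough sketch of the suggested change, keeping only the columns visible in the quoted schema (the primary key shown is an assumption):
-- Store the date in a native DATE type instead of CHAR(22)
CREATE TABLE IF NOT EXISTS TEST (
    ID    BIGINT NOT NULL,
    DATE1 DATE NOT NULL,
    STID  INTEGER
    CONSTRAINT PK PRIMARY KEY (ID, DATE1));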
On Friday, October 10, 2014, Poonam Ligade
wrote:
> Hi,
>
> I have phoenix table with below schema,
> CREATE TABLE IF NOT EXISTS TEST (
> ID BIGINT NOT NULL,
> Date1 char(22) NOT NULL,
> StID INTEGER
ervers. I am using
> guava-12.0.1.jar.
>
> On Thu, Oct 2, 2014 at 2:51 PM, Krishna wrote:
>
>> Hi,
>>
>> Aggregate queries seem to be working fine on smaller datasets but when
>> the data needs to be aggregated over millions of rows, query fails with
>> fol
Hi,
Aggregate queries seem to be working fine on smaller datasets, but when the
data needs to be aggregated over millions of rows, the query fails with the
following error stack. I'm running Phoenix 3.1 on HBase 0.94.18. Any help?
Query is something like this:
> select a.customer_id, a.product_id, count(
I filed this JIRA: https://issues.apache.org/jira/browse/PHOENIX-1306
Thanks
On Tue, Sep 30, 2014 at 8:53 AM, James Taylor
wrote:
> Thanks, Krishna. Please file a JIRA for the "option in CREATE TABLE &
> CREATE INDEX clauses to indicate that underlying table is already a
> ph
that, it is still beneficial to have some kind of an option in
CREATE TABLE & CREATE INDEX clauses to indicate that underlying table is
already a phoenix table.
Thanks,
Krishna
On Monday, September 29, 2014, lars hofhansl wrote:
> Not offhand.
>
> A few guesses/questions for Kri
Regards
Krishna
On Sunday, September 28, 2014, James Taylor wrote:
> Hi Krishna,
> Any reason why the SYSTEM.CATALOG hbase table isn't restored as well
> from backup? Yes, if you try to re-create the SYSTEM.CATALOG by
> re-issuing your DDL statement, Phoenix won't know that t
Hi,
When I restore HBase from a backup, sqlline gets stuck unless the
SYSTEM.CATALOG table is dropped. It is automatically re-created via
sqlline. However, the metadata of previously created Phoenix tables is lost.
So, to restore the metadata, when a 'CREATE TABLE' statement is re-issued,
Phoenix takes v
Hi James,
I'm using Phoenix 3.1 running on HBase 0.94.18.
Could you share how queueSize should be estimated?
Thanks
On Fri, Sep 26, 2014 at 8:58 PM, James Taylor
wrote:
> Hi Krishna,
> Which version of Phoenix and HBase are you running? This exception
> means that the thread pool on t
Hi,
I'm running into following error when running create index statement.
CREATE INDEX idx_name ON table_name (COL1, COL2) INCLUDE (val)
DEFAULT_COLUMN_FAMILY='cf', DATA_BLOCK_ENCODING='FAST_DIFF', VERSIONS=1,
COMPRESSION='GZ';
Error: org.apache.phoenix.execute.CommitException:
java.util.concurr
to talk about your use case a bit and explain why you'd
> need this to be higher?
> Thanks,
> James
>
>
> On Wednesday, September 24, 2014, Krishna wrote:
>
>> Thanks... any plans of raising number of bytes for salt value?
>>
>>
>> On Wed, Sep 24,
Thanks... any plans of raising the number of bytes for the salt value?
On Wed, Sep 24, 2014 at 10:22 AM, James Taylor
wrote:
> The salt byte is the first byte in your row key and that's the max
> value for a byte (i.e. it'll be 0-255).
>
> On Wed, Sep 24, 2014 at 10:12 AM,
the max value that SALT_BUCKETS can take? If yes, could someone
explain the reason for this upper bound?
Krishna
Thanks for clarifying Gabriel.
On Tue, Sep 16, 2014 at 11:45 PM, Gabriel Reid
wrote:
> Hi Krishna,
>
> > Does the bulk loader compress mapper output? I couldn't find anywhere in
> the
> > code where "mapreduce.map.output.compress" is set to true.
>
> Th
Hi,
Does the bulk loader compress mapper output? I couldn't find anywhere in
the code where "mapreduce.map.output.compress" is set to true.
Are HFiles compressed only if the Phoenix table (that data is being
imported to) is created with compression parameter (ex: COMPRESSION='GZ')?
Thanks for cl
;
>
> if we can get even a handful of folks willing to commit, I'd say it would
> be worth it!
>
>
> ---
> Jesse Yates
> @jesse_yates
> jyates.github.com
>
>
>
> On Mon, Sep 15, 2014 at 11:57 AM, Krishna wrote:
>
> Hi, Is anyone aware of Phoenix meetups coming up in the next couple of
> months in Bay Area?
>
> Thanks
>
Hi, Is anyone aware of Phoenix meetups coming up in the next couple of
months in Bay Area?
Thanks
ha
> wrote:
>
>> See Comments Inline
>>
>>
>>
>> Thanks
>>
>>
>>
>>
>>
>> -- Original message--
>>
>> *From: *Krishna
>>
>> *Date: *Tue, Sep 9, 2014 5:24 AM
>>
>> *To: *user@phoenix.apache.org;
&
This issue is resolved by running sqlline using the hadoop1 client; earlier I
was using the hadoop2 client. It's not clear why this is so; any clarification
from experts would be great.
On Tue, Sep 9, 2014 at 2:37 PM, Krishna wrote:
> Hi,
>
> I've installed Phoenix 3.1.0 on Amazon EMR b
I assume you are referring to the bulk loader. The "-a" option allows you to
pass the array delimiter.
On Thursday, September 11, 2014, Flavio Pompermaier
wrote:
> Any help about this..?
> What if I save a field as an array? how could I read it from a mapreduce
> job? Is there a separator char to use fo
Hi,
I'm running Phoenix 3.1.0 on AWS using Hadoop 2.2.0 and HBase 0.94.7. When
I run "bin/sqlline.py localhost:2181:/hbase", it errors out with
"java.io.IOException: Could not set up IO Streams" because of a
"NoSuchMethodError".
The following Phoenix jars are in the HBase lib:
phoenix-3.1.0-client-minimal.jar (m
I am having the same issue with "psql" too. It appears Phoenix is unable to
launch. What logs can I check to debug the issue? Are there any
Phoenix-specific logs created?
Thanks.
On Tuesday, September 9, 2014, Krishna wrote:
> Hi,
>
> I've installed Phoenix 3.1.0 on Am
Hi,
I've installed Phoenix 3.1.0 on Amazon EMR but the command "./sqlline.py
localhost" just hangs with following output. Any thoughts on what I'm
missing?
Here is related info:
*Phoenix 3.1.0*
*HBase 0.94.18*
*Amazon Hadoop 2.4.0*
*Phoenix core:*
/home/hadoop/.versions/hbase-0.94.18/lib/phoeni
>>
>> Thanks,
>> James
>>
>> On Monday, September 8, 2014, Puneet Kumar Ojha <
>> puneet.ku...@pubmatic.com
>> > wrote:
>>
>>> See Comments Inline
>>>
>>>
>>>
>>> Thanks
>>>
>>>
>>>
>
o the bulk loader, is the port # required or
optional? If I'm using Hadoop2, should the Resource Manager node be substituted
for the Job Tracker?
1. -hd HDFS NameNode IP:
2. -mr MapReduce Job Tracker IP:
3. -zk Zookeeper IP:
Thanks for your inputs.
Krishna