Re: HBase - stable versions

2013-09-10 Thread Kiru Pakkirisamy
BTW, can somebody explain the function/purpose of 0.95.2? Does the community expect 0.95.2 to be used in a prod env, or does it have to be 0.96.0 for that? Also, I have some development hiccups with it (like cannot find the jar on the maven repo etc.); if somebody can provide pointers that would be

Re: HBase - stable versions

2013-09-10 Thread Nicolas Liochon
That's linux terminology. 0.95 is a developer release; it should not go into production. When it's ready for production, it will be released as 0.96. 0.96 should be ready soon; tests (and fixes) are in progress. There is already a release candidate available: 0.96.RC0. There should be a new release

Command to delete based on column Family + rowkey

2013-09-10 Thread Ramasubramanian Narayanan
Dear All, The requirement is to delete all columns which belong to a column family for a particular rowkey. Have tried with the below command but the record is not getting deleted: hbase> deleteall 't1', 'r1', 'c1' Test result: 3) Scan the table 't' hbase(main):025:0> scan

Fastest way to get count of records in huge hbase table?

2013-09-10 Thread Ramasubramanian Narayanan
Dear All, Is there any fast way to get the count of records in a huge HBase table with billions of records? The normal count command runs for an hour with this huge volume of data. regards, Rams

Re: Command to delete based on column Family + rowkey

2013-09-10 Thread manish dunani
hey rama, Try this: deleteall 't','333' I hope it works for you!! On Tue, Sep 10, 2013 at 1:31 PM, Ramasubramanian Narayanan ramasubramanian.naraya...@gmail.com wrote: Dear All, Requirement is to delete all columns which belongs to a column family and for a

Re: Fastest way to get count of records in huge hbase table?

2013-09-10 Thread Ashwanth Kumar
Try the RowCounter MR job that comes with HBase. [1] http://hbase.apache.org/book/ops_mgt.html#rowcounter On Tue, Sep 10, 2013 at 1:37 PM, Ramasubramanian Narayanan ramasubramanian.naraya...@gmail.com wrote: Dear All, Is there any fastest way to get the count of records in a huge HBASE
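For reference, the RowCounter job from that section of the book is typically launched like this (the table name is a placeholder; the job runs one map task per region and reports the count in its job counters):

```shell
# Hedged sketch: run HBase's bundled RowCounter MapReduce job.
# 'mytable' is an example table name. On completion, the job's
# counters include a ROWS=<n> line with the total row count.
hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'mytable'
```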

Re: Command to delete based on column Family + rowkey

2013-09-10 Thread manish dunani
If you want to delete a rowkey for a particular column family then you need to mention it individually: delete 't','333','TWO:qualifier_name' This will definitely delete the records you are looking for. Please revert back if it does not work. On Tue, Sep 10, 2013 at 1:40 PM, manish dunani

Re: 0.95 Error in Connecting

2013-09-10 Thread Nicolas Liochon
(redirected user mailing list, dev mailing list in bcc) Various comments: - you should not need to add the hadoop jars in your client application pom; they will come with hbase. But this should not be the cause of your issue. - what does the server say in its logs? - I'm surprised by this: Client

Re: Command to delete based on column Family + rowkey

2013-09-10 Thread Ramasubramanian Narayanan
Manish, I need to delete all the columns for a particular column family of a given rowkey... I don't want to specify the column names (qualifier names) one by one to delete. Please let me know if there is any way to delete like that... regards, Rams On Tue, Sep 10, 2013 at 2:06 PM, manish dunani
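One way to do this without naming qualifiers is to go through the Java client API, whose Delete#deleteFamily removes every cell of one family for one row. Since the HBase shell is JRuby, that call can be driven from a script piped into the shell. This is a hedged sketch: 't1', 'r1', and 'cf1' are placeholder table, row, and family names.

```shell
# Hedged sketch (placeholders: t1, r1, cf1): use the JRuby HBase shell
# to call the Java client API's Delete#deleteFamily, which deletes all
# columns of one family for one row without listing qualifiers.
echo "
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.HTable
import org.apache.hadoop.hbase.client.Delete
conf = HBaseConfiguration.create
table = HTable.new(conf, 't1')
d = Delete.new('r1'.to_java_bytes)
d.deleteFamily('cf1'.to_java_bytes)
table.delete(d)
table.close
exit
" | hbase shell
```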

How to convert a text or csv file into HFile format and load into HBASE

2013-09-10 Thread Ramasubramanian Narayanan
Dear All, Can you please share sample code (Java) to convert a text/csv file into HFile format and load it using the HBase API into HBase. regards, Rams

Re: Command to delete based on column Family + rowkey

2013-09-10 Thread Jean-Marc Spaggiari
This? hbase(main):002:0 help alter Alter column family schema; pass table name and a dictionary specifying new column family schema. Dictionaries are described on the main help command output. Dictionary must include name of column family to alter. For example, To change or add the 'f1' column

Re: How to convert a text or csv file into HFile format and load into HBASE

2013-09-10 Thread Jean-Marc Spaggiari
Hi Rams, Just to make sure, have you looked at importtsv? http://hbase.apache.org/book/ops_mgt.html#importtsv It's tabs by default, but you can easily update that to take commas or anything else. Also, any specific reason you want to go from csv to HFiles to HBase? Or could you go straight from csv to HBase? JM
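The importtsv flow referenced above usually looks like the following (table name, column mapping, and paths are placeholders). `-Dimporttsv.separator=,` handles CSV input; writing HFiles first and then running the bulk-load step covers the HFile route asked about:

```shell
# Hedged sketch (placeholders: mytable, cf:col1, cf:col2, paths).
# 1) Parse the CSV and write HFiles instead of issuing Puts:
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.separator=, \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1,cf:col2 \
  -Dimporttsv.bulk.output=/tmp/myhfiles \
  mytable /user/me/data.csv

# 2) Move the generated HFiles into the running table:
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/myhfiles mytable
```

Dropping the `-Dimporttsv.bulk.output` flag makes step 1 write directly to the table via Puts, which is the simpler csv-to-HBase path JM mentions.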

Re: HBase - stable versions

2013-09-10 Thread Kiru Pakkirisamy
Nicolas, makes sense. Thanks for the explanation.   Regards, - kiru From: Nicolas Liochon nkey...@gmail.com To: user user@hbase.apache.org; Kiru Pakkirisamy kirupakkiris...@yahoo.com Cc: d...@hbase.apache.org d...@hbase.apache.org Sent: Tuesday, September 10,

Re: HBase - stable versions

2013-09-10 Thread Vimal Jain
We will also use 0.94 for the foreseeable future. On Tue, Sep 10, 2013 at 9:29 PM, Kiru Pakkirisamy kirupakkiris...@yahoo.com wrote: Nicolas, makes sense. Thanks for the explanation. Regards, - kiru From: Nicolas Liochon nkey...@gmail.com To: user

Re: Tables gets Major Compacted even if they haven't changed

2013-09-10 Thread Dave Latham
Major compactions can still be useful to improve locality - could we add a condition to check for that too? On Mon, Sep 9, 2013 at 10:41 PM, lars hofhansl la...@apache.org wrote: Interesting. I guess we could add a check to avoid major compactions if (1) no TTL is set or we can show that all

HBASE and Zookeeper in parallel

2013-09-10 Thread Sznajder ForMailingList
Hi, I am writing a program that makes use of a ZooKeeper server (I used the queue implementation of Curator). In addition, the program has access to the HBase database via Gora. HBase uses ZooKeeper. My question is: does HBase use the same zookeeper server that I am using for my queue

Re: Zookeeper state for failed region servers

2013-09-10 Thread Nicolas Liochon
You won't have this directly. /hbase/rs contains the regionservers that are online. When a regionserver dies, hbase (or zookeeper if it's a silent failure) will remove it from this list. (And obviously this is internal to hbase and could change at any time :-) ). But technically you can do
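To peek at that znode interactively, HBase ships a ZooKeeper CLI wrapper. A hedged sketch (/hbase/rs is the default parent path; it moves if zookeeper.znode.parent is changed):

```shell
# Hedged sketch: list the online region server znodes. Each child of
# /hbase/rs corresponds to a live region server; a ZooKeeper watch on
# this node's children fires when a server's ephemeral znode disappears.
hbase zkcli ls /hbase/rs
```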

Two concurrent programs using the same hbase

2013-09-10 Thread Sznajder ForMailingList
Hi, I installed hbase on a gpfs directory and launched it using bin/start-hbase.sh. Two servers on this gpfs filesystem run a similar program. This program accesses hbase via a GORA call: this.dataStore = DataStoreFactory.getDataStore(Long.class, Pageview.class, new

Re: Performance analysis in Hbase

2013-09-10 Thread Jean-Daniel Cryans
Yeah there isn't a whole lot of documentation about metrics. Could it be that you are still running on a default 1GB heap and you are pounding it with multiple clients? Try raising the heap size? FWIW I gave a presentation at HBaseCon with Kevin O'dell about HBase operations which could shed some
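The heap suggestion above is controlled in conf/hbase-env.sh. A hedged config fragment (the 4000 MB value is only an illustrative example, not a sizing recommendation):

```shell
# conf/hbase-env.sh -- raise the region server JVM heap from the 1 GB
# default. The value is in MB; 4000 here is only an example.
export HBASE_HEAPSIZE=4000
```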

Does hbase runs with hadoop 2.1.0 beta?

2013-09-10 Thread Marcos Sousa
Hi, I'm trying to run HBase with hadoop 2.1.0 beta. Which HBase version should I use? I made some tests with 0.95.2 compiled with the 2.0 profile but I faced protobuf issues. Thanks, -- Marcos Sousa

Re: Tables gets Major Compacted even if they haven't changed

2013-09-10 Thread Premal Shah
Thanx for the discussion guys. @Anil, we have turned off major compaction in the settings. This is a script which is run manually to make sure all tables get major compacted every so often to increase data locality. In our case, there is some collateral damage of compacting unchanged regions. I

Re: Does hbase runs with hadoop 2.1.0 beta?

2013-09-10 Thread Ted Yu
0.96.0 should work. See this thread: http://search-hadoop.com/m/7W1PfyzHy51 On Tue, Sep 10, 2013 at 1:25 PM, Marcos Sousa marcoscaixetaso...@gmail.comwrote: Hi, I'm trying to run Hbase with hadoop 2.1.0 beta. Witch hbase version should I use? I made some tests with 0.95.2 compiling with

Re: HBASE and Zookeeper in parallel

2013-09-10 Thread Ted Yu
Take a look at http://hbase.apache.org/book.html#zookeeper Cheers On Tue, Sep 10, 2013 at 12:11 PM, Sznajder ForMailingList bs4mailingl...@gmail.com wrote: Hi I am writing a program that makes use of a zookeeper server (I used the queue implementation of Curator) In addition, the

Strange behavior of blockCacheSize metric with LruBlockCache

2013-09-10 Thread Adrien Mogenet
When enabling direct memory allocation [HBASE-4027] I'm observing strange values for the blockCacheSize metric. With the usual cache, values grow slowly to around 1 GB. When I enable direct allocation (~4 GB off-heap), it looks like this: 1 GB, 4 GB, 1, 4, 1, 4... When looking at the

Re: Does hbase runs with hadoop 2.1.0 beta?

2013-09-10 Thread Elliott Clark
HBase 0.96 will work with Hadoop 2.1.0. Hadoop 2.1.0 changed protobuf versions. 0.95.X had the older version of protobuf that's incompatible with the one used in hadoop 2.1.0. On Tue, Sep 10, 2013 at 1:25 PM, Marcos Sousa marcoscaixetaso...@gmail.com wrote: Hi, I'm trying to run Hbase with

Re: Does hbase runs with hadoop 2.1.0 beta?

2013-09-10 Thread Marcos Sousa
Yes, I noticed that... Thanks, I'm compiling right now. On Tue, Sep 10, 2013 at 5:35 PM, Elliott Clark ecl...@apache.org wrote: HBase 0.96 will work with Hadoop 2.1.0. Hadoop 2.1.0 changed protobuf versions. 0.95.X had the older version of protobuf that's incompatible with the one used in
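For anyone else compiling against Hadoop 2 in that timeframe, the build is typically driven with the hadoop profile flag; a hedged sketch (profile name per the 0.95/0.96-era poms):

```shell
# Hedged sketch: build HBase against the Hadoop 2 profile, skipping tests.
mvn clean install -DskipTests -Dhadoop.profile=2.0
```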

Re: HBASE and Zookeeper in parallel

2013-09-10 Thread Ivan Kelly
Hi Benjamin, It depends on whether you set HBASE_MANAGES_ZK in your hbase-env.sh https://hbase.apache.org/book/zookeeper.html -Ivan On Tue, Sep 10, 2013 at 10:11:40PM +0300, Sznajder ForMailingList wrote: Hi I am writing a program that makes use of a zookeeper server (I used the queue
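The setting Ivan mentions lives in conf/hbase-env.sh; a hedged config fragment:

```shell
# conf/hbase-env.sh
# true  -> HBase starts and stops its own ZooKeeper ensemble
# false -> HBase connects to an externally managed ZooKeeper
export HBASE_MANAGES_ZK=false
```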

Please welcome our newest committer, Nick Dimiduk

2013-09-10 Thread Enis Söztutar
Hi, Please join me in welcoming Nick as our new addition to the list of committers. Nick is exceptionally good with user-facing issues, and has done major contributions in mapreduce related areas, hive support, as well as 0.96 issues and the new and shiny data types API. Nick, as tradition, feel

deploy salesforce phoenix coprocessor to hbase/lib??

2013-09-10 Thread Tianying Chang
Hi, Since this is not an hbase system-level jar but more like user code, should we deploy it under hbase/lib? It seems we can use alter to add the coprocessor for a particular user table, so I can put the jar file in any place that is accessible, e.g. hdfs:/myPath? My customer

Re: Please welcome our newest committer, Nick Dimiduk

2013-09-10 Thread Stack
On Tue, Sep 10, 2013 at 3:54 PM, Enis Söztutar e...@apache.org wrote: Hi, Please join me in welcoming Nick as our new addition to the list of committers. Nick is exceptionally good with user-facing issues, and has done major contributions in mapreduce related areas, hive support, as well as

Re: deploy salesforce phoenix coprocessor to hbase/lib??

2013-09-10 Thread James Taylor
When a table is created with Phoenix, its HBase table is configured with the Phoenix coprocessors. We do not specify a jar path, so the Phoenix jar that contains the coprocessor implementation classes must be on the classpath of the region server. In addition to coprocessors, Phoenix relies on
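For the general mechanism under discussion (attaching a coprocessor jar to one table rather than dropping it in hbase/lib), the shell form looks roughly like this. The jar path, class name, and priority are placeholders; note that for Phoenix specifically the jar must still be on the region server classpath, as James describes:

```shell
# Hedged sketch (placeholders: mytable, jar path, class name, priority).
# Attach a coprocessor to a single table as a table attribute, loading
# the jar from HDFS:
echo "disable 'mytable'
alter 'mytable', METHOD => 'table_att',
  'coprocessor' => 'hdfs:///myPath/my-coproc.jar|com.example.MyObserver|1001|'
enable 'mytable'
exit" | hbase shell
```

The value is pipe-delimited: jar path, implementation class, priority, and optional key=value arguments.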

Zookeeper state for failed region servers

2013-09-10 Thread Sudarshan Kadambi (BLOOMBERG/ 731 LEXIN)
Could someone tell me what Zookeeper node to watch to know if any region servers are down currently and what the affected region list is? Thank you! -sudarshan

Re: Please welcome our newest committer, Nick Dimiduk

2013-09-10 Thread Nick Dimiduk
Thank you everyone! On Tue, Sep 10, 2013 at 3:54 PM, Enis Söztutar e...@apache.org wrote: Hi, Please join me in welcoming Nick as our new addition to the list of committers. Nick is exceptionally good with user-facing issues, and has done major contributions in mapreduce related areas,

Re: Two concurrent programs using the same hbase

2013-09-10 Thread Renato Marroquín Mogrovejo
Hi Benjamin, Are you able to insert data through the hbase shell? How big is your zookeeper quorum? Are you running zookeeper as a separate process? Maybe you should just allow more connections to your zookeeper process through its configuration file. Renato M. 2013/9/10 Sznajder ForMailingList
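The per-client connection cap Renato alludes to is maxClientCnxns in a standalone ZooKeeper's zoo.cfg (or hbase.zookeeper.property.maxClientCnxns in hbase-site.xml when HBase manages ZooKeeper); a hedged config fragment:

```
# zoo.cfg -- raise the per-host client connection limit
# (300 is only an illustrative value)
maxClientCnxns=300
```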

Re: Please welcome our newest committer, Nick Dimiduk

2013-09-10 Thread lars hofhansl
Congrats Nick, great to have you on board! - Original Message - From: Enis Söztutar e...@apache.org To: d...@hbase.apache.org d...@hbase.apache.org; hbase-user user@hbase.apache.org Cc: Sent: Tuesday, September 10, 2013 3:54 PM Subject: Please welcome our newest committer, Nick

Re: Fastest way to get count of records in huge hbase table?

2013-09-10 Thread 冯宏华
There is no fast way to get the count of records of a table without scanning and counting, especially when you want an accurate count. By design the data/cells of the same record/row can be scattered across many different HFiles and the memstore, so even if we record the count of records of each HFile as meta

Re: Tables gets Major Compacted even if they haven't changed

2013-09-10 Thread lars hofhansl
That. And other parameters (like compression) might have been changed, too. Would need to check for that as well. From: Dave Latham lat...@davelink.net To: user@hbase.apache.org; lars hofhansl la...@apache.org Sent: Tuesday, September 10, 2013 11:11 AM

hbase table design

2013-09-10 Thread kun yan
Hi all, can anyone provide some HBase rowkey/table design examples or design-related information? I have referred to the official documentation and HBase: The Definitive Guide, but is there anything more specific and detailed? Thank you -- In the Hadoop world, I am just a novice, explore the entire Hadoop ecosystem,

Re: hbase table design

2013-09-10 Thread Ted Yu
Have you looked at http://hbase.apache.org/book.html#schema.casestudies ? On Tue, Sep 10, 2013 at 7:57 PM, kun yan yankunhad...@gmail.com wrote: Hi all, who can provide some RowKey HBase table design or design-related information, I made reference to the official documents and HBase The

Re: hbase table design

2013-09-10 Thread kun yan
Thank you. Before, I had only read the section 6 (HBase and Schema Design) parts and did not notice 6.11 (Schema Design Case Studies). I should read more carefully. 2013/9/11 Ted Yu yuzhih...@gmail.com Have you looked at http://hbase.apache.org/book.html#schema.casestudies ? On Tue, Sep 10, 2013 at 7:57

Import data from MySql to HBase using Sqoop2

2013-09-10 Thread Dhanasekaran Anbalagan
Hi Guys, How do I import a MySQL table into an HBase table? I am using Sqoop2; when I try to import a table it doesn't show HBase as a storage option. Schema name: sqoop:000> create job --xid 12 --type import . . . . Boundary query: Output configuration Storage type: 0 : HDFS Choose: Please guide me. How to do

RE: Please welcome our newest committer, Nick Dimiduk

2013-09-10 Thread rajeshbabu chintaguntla
Congratulations Nick. From: lars hofhansl [la...@apache.org] Sent: Wednesday, September 11, 2013 7:30 AM To: d...@hbase.apache.org; hbase-user Subject: Re: Please welcome our newest committer, Nick Dimiduk Congrats Nick, great to have you on board!

Re: Please welcome our newest committer, Nick Dimiduk

2013-09-10 Thread ramkrishna vasudevan
Congratulations Nick.!!! On Wed, Sep 11, 2013 at 9:15 AM, rajeshbabu chintaguntla rajeshbabu.chintagun...@huawei.com wrote: Congratulations Nick. From: lars hofhansl [la...@apache.org] Sent: Wednesday, September 11, 2013 7:30 AM To:

Re: Fastest way to get count of records in huge hbase table?

2013-09-10 Thread James Taylor
Use Phoenix (https://github.com/forcedotcom/phoenix) by doing the following: CREATE VIEW myHTableName (key VARBINARY NOT NULL PRIMARY KEY); SELECT COUNT(*) FROM myHTableName; As fenghong...@xiaomi.com said, you still need to scan the table, but Phoenix will do it in parallel and use a coprocessor

Re: Please welcome our newest committer, Nick Dimiduk

2013-09-10 Thread Marcos Luis Ortiz Valmaseda
Congratulations, Nick !!! Keep doing this great work 2013/9/10 ramkrishna vasudevan ramkrishna.s.vasude...@gmail.com Congratulations Nick.!!! On Wed, Sep 11, 2013 at 9:15 AM, rajeshbabu chintaguntla rajeshbabu.chintagun...@huawei.com wrote: Congratulations Nick.

HBase use how much hdfs storage space

2013-09-10 Thread kun yan
Hi all, How can I find out how much HDFS storage space a given HBase table is using? What is the command, or can I see it in the HBase web UI? (version 0.94 hbase) -- In the Hadoop world, I am just a novice, explore the entire Hadoop ecosystem, I hope one day I can contribute their own code YanBit
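One direct way to see a table's footprint, assuming the default layout where each 0.94-era table lives under /hbase/<tablename> (the table name below is a placeholder):

```shell
# Hedged sketch: sum the HDFS space used by one table's directory.
# On older Hadoop 1.x shells the summary form is `hadoop fs -dus`.
hadoop fs -du -s /hbase/mytable
```

Note this reports the pre-replication byte count; multiply by the HDFS replication factor for raw disk usage.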