Re: One Region Server fails - all M/R jobs crash.

2013-11-25 Thread David Koch
Hi Dhaval, Yes, rows can get very big, that's why we filter them. The filter lets KVs pass as long as the KV count is at most MAX_LIMIT and skips the row entirely once the count exceeds this limit. KV size is roughly constant. Alternatively, we could use batching, you are right. Also, with regard to the
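[Editor's note] A minimal sketch of the batching alternative mentioned above, using the 0.94-era client API; the table name and sizes are illustrative, not taken from the thread:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class BatchedScanSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");   // placeholder table name

        Scan scan = new Scan();
        scan.setBatch(1000);   // at most 1000 KVs per Result, so a very wide row is
                               // returned in several partial Results instead of all at once
        scan.setCaching(100);  // number of Results fetched per RPC

        ResultScanner scanner = table.getScanner(scan);
        for (Result result : scanner) {
          // process the (possibly partial) row; partial Results share the same row key
        }
        scanner.close();
        table.close();
      }
    }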

Re: example of hbase definitive guide

2013-11-25 Thread Silvio Di gregorio
Exception in thread main java.lang.NoClassDefFoundError: org.apache.hadoop.hbase.HBaseConfiguration at client.PutWriteBufferExample.main(PutWriteBufferExample.java:18) 2013/11/25 Haosong Huang haosd...@gmail.com I think the structure of your hbase directory may have some problem. Try

Re: example of hbase definitive guide

2013-11-25 Thread Silvio Di gregorio
I have also tried to execute the example in standalone mode: I downloaded hbase-0.94.13.tar.gz, and then launched java -cp hbase-trai.jar:hbase/hbase-0.94.13 client.PutWriteBufferExample. As a result I got: Exception in thread main java.lang.NoClassDefFoundError:

Re: example of hbase definitive guide

2013-11-25 Thread Silvio Di gregorio
As a last resort I tried a very dirty solution: a runnable JAR with all required libraries inside. Now the error is: Exception in thread main java.lang.reflect.InvocationTargetException at java.lang.reflect.Method.invoke(libgcj.so.10) at

Re: example of hbase definitive guide

2013-11-25 Thread Silvio Di gregorio
Sorry for the grammatical error: "i have did a very dirty solution" should have been "I tried a very dirty solution". 2013/11/25 Silvio Di gregorio silvio.digrego...@gmail.com As a last resort I tried a very dirty solution: a runnable JAR with all required libraries inside. Now the error is:

Re: example of hbase definitive guide

2013-11-25 Thread Silvio Di gregorio
Caused by: java.lang.RuntimeException: hbase-default.xml file seems to be for and old version of HBase (0.94.13), this version is Unknown at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:68) at

Re: example of hbase definitive guide

2013-11-25 Thread Ted Yu
This error implies there are multiple projects on the class path whose hbase-default.xml files are not compatible. Can you find the complete class path? Cheers On Nov 25, 2013, at 5:18 AM, Silvio Di gregorio silvio.digrego...@gmail.com wrote: Caused by: java.lang.RuntimeException:

Re: example of hbase definitive guide

2013-11-25 Thread Silvio Di gregorio
This finally works: java -cp `hbase classpath`:`hadoop classpath`:hbase-trai.jar client.PutWriteBufferExample. Basically my HBase wasn't configured correctly; I adjusted it following your advice and now it works. Thanks. 2013/11/25 Ted Yu yuzhih...@gmail.com This error implies there are multiple
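[Editor's note] The `hbase classpath` invocation matters because HBaseConfiguration and hbase-site.xml must be on the classpath. Below is a rough, hedged sketch of the kind of client-side write-buffer code such an example runs; the table, family and buffer size are placeholders, not the book's exact code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PutWriteBufferSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
        HTable table = new HTable(conf, "testtable");     // placeholder table name
        table.setAutoFlush(false);                        // buffer puts on the client side
        table.setWriteBufferSize(2 * 1024 * 1024);        // flush once ~2 MB of puts are buffered

        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("colfam1"), Bytes.toBytes("qual1"), Bytes.toBytes("val1"));
        table.put(put);                                   // stays in the write buffer for now

        table.flushCommits();                             // push the buffered puts to the cluster
        table.close();
      }
    }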

Hbase block size

2013-11-25 Thread Job Thomas
Hi all, please tell me how to set the HBase block size in hbase-site.xml. I am using HBase 0.94.12 with Hadoop 1.0.4. I have read that by setting the block size to a small value (say 8 KB), we can improve HBase random reads. Is that correct? How does that property affect the performance

How to get Metadata information in Hbase

2013-11-25 Thread ashishkshukladb
I want to get the metadata information in HBase. My basic purpose is to: 1. get information about the tables, like how many tables there are and their names; 2. get the column family names in each table; 3. get the column names and their data types in each column family. Is there any tool or command by

Re: How to get Metadata information in Hbase

2013-11-25 Thread Asaf Mesika
bin/hbase shell In there: type help and you'll find your way. On Monday, November 25, 2013, ashishkshukladb wrote: I want to get the metadata information in Hbase. My basic purpose is to - 1.get the information about tables like how many tables and name of tables. 2.get the columnfamilies name

Re: One Region Server fails - all M/R jobs crash.

2013-11-25 Thread Dhaval Shah
Hmm, ok. You are right. The principle of Hadoop/HBase is to do big data on commodity hardware, but that's not to say you can do it without enough hardware. Get 8 commodity disks and see your performance/throughput numbers improve substantially. Before jumping into buying anything though, I would

Re: Hbase block size

2013-11-25 Thread Ted Yu
Please take a look at http://hbase.apache.org/book.html#schema.cf.blocksize Note: with smaller block sizes, the number of StoreFile indexes increases. Cheers On Tue, Nov 26, 2013 at 12:15 AM, Job Thomas j...@suntecgroup.com wrote: Hi all , Please tell me how to set Hbase block size in
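[Editor's note] As the book section explains, block size is a per-column-family attribute set at table-creation time rather than a single hbase-site.xml property. A minimal sketch with the 0.94-era API; the table name, family name and 8 KB value are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class CreateTableWithBlocksize {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HTableDescriptor table = new HTableDescriptor("testtable");
        HColumnDescriptor family = new HColumnDescriptor("cf");
        family.setBlocksize(8 * 1024);   // 8 KB HFile blocks instead of the 64 KB default
        table.addFamily(family);

        admin.createTable(table);
        admin.close();
      }
    }

The equivalent shell form, create 'TESTBLOCK', {NAME => 'cf', BLOCKSIZE => '8192'}, appears later in this digest.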

Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.

2013-11-25 Thread Ted Yu
Here is the caller to createReorderingProxy(): ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf); where namenode is org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB : public class ClientNamenodeProtocolTranslatorPB implements ProtocolMetaInterface,

Re: How to get Metadata information in Hbase

2013-11-25 Thread Ted Yu
bq. 3.get the columns names and their data types in each columnfamily. Each columnfamily can have arbitrarily many columns. There is no tool which returns the above information. On Tue, Nov 26, 2013 at 12:26 AM, Asaf Mesika asaf.mes...@gmail.com wrote: bin/hbase shell In there: Type help

Re: How to get Metadata information in Hbase

2013-11-25 Thread Jean-Marc Spaggiari
1. get the information about tables, like how many tables and their names: you can use HBaseAdmin from the API, admin.getTableNames(). 2. get the column family names in each table: from the table you can get the descriptor to get the families, table.getTableDescriptor(). 3. get the column names
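[Editor's note] A minimal sketch pulling those pieces together with the 0.94-era client API; it assumes a reachable cluster and is not taken verbatim from the thread:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListSchemaSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        String[] tableNames = admin.getTableNames();             // 1. table names (and count)
        System.out.println("tables: " + tableNames.length);
        for (String name : tableNames) {
          HTableDescriptor desc = admin.getTableDescriptor(Bytes.toBytes(name));
          System.out.println(name);
          for (HColumnDescriptor family : desc.getFamilies()) {  // 2. column families
            System.out.println("  family: " + family.getNameAsString());
          }
          // 3. column qualifiers and their value types are not stored in the schema;
          //    discovering them requires scanning the data itself.
        }
        admin.close();
      }
    }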

Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.

2013-11-25 Thread Ted Yu
Henry: See HBASE-7635 Proxy created by HFileSystem#createReorderingProxy() should implement Closeable On Tue, Nov 26, 2013 at 1:56 AM, Ted Yu yuzhih...@gmail.com wrote: Here is the caller to createReorderingProxy(): ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf);

Re: How to get Metadata information in Hbase

2013-11-25 Thread James Taylor
One other tool option for you is to use Phoenix. You use SQL to create a table and define the columns through standard DDL. Your columns make up the allowed KeyValues for your table and the metadata is surfaced through the standard JDBC metadata APIs (with column family mapping to table catalog).
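[Editor's note] A hedged sketch of reading that Phoenix metadata through the standard JDBC metadata API; the connection URL and table name are placeholders, and the Phoenix client jar is assumed to be on the classpath:

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class PhoenixMetadataSketch {
      public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        DatabaseMetaData md = conn.getMetaData();

        ResultSet tables = md.getTables(null, null, null, null);
        while (tables.next()) {
          System.out.println("table: " + tables.getString("TABLE_NAME"));
        }

        ResultSet cols = md.getColumns(null, null, "MY_TABLE", null); // placeholder table
        while (cols.next()) {
          System.out.println("column: " + cols.getString("COLUMN_NAME")
              + ", type: " + cols.getString("TYPE_NAME")
              + ", column family (table catalog): " + cols.getString("TABLE_CAT"));
        }
        conn.close();
      }
    }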

Re: Bulk load fails to identify pre-split regions

2013-11-25 Thread Ted Yu
Amit: bq. One of the changes I made is a fix to what seems like a bug I don't see an attachment to HBASE-9682, so I cannot tell whether the change was related to what you described. Have you encountered the problem since last week? Cheers On Tue, Nov 19, 2013 at 9:40 PM, Amit Sela

RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.

2013-11-25 Thread Henry Hung
@Ted Yes, I use the hadoop-hdfs-2.2.0.jar. BTW, how can you be certain that the namenode class is ClientNamenodeProtocolTranslatorPB? From the NameNodeProxies, I can only assume that ClientNamenodeProtocolTranslatorPB is used only when connecting to a single Hadoop namenode. public static T

HFile block size

2013-11-25 Thread Job Thomas
Hi all, Out of these properties, which one is used to set the HFile block size in HBase 0.94.12? hbase.hregion.max.filesize=16384 hfile.min.blocksize.size=16384 hbase.mapreduce.hfileoutputformat.blocksize=16384 Best Regards, Job M Thomas

Re: HFile block size

2013-11-25 Thread Jean-Marc Spaggiari
From the code and the JIRAs: hbase.hregion.max.filesize is used to configure the size of a region (which can contain more than one HFile); hbase.mapreduce.hfileoutputformat.blocksize comes from HBASE-8949 (While writing hfiles from HFileOutputFormat forcing blocksize from table

Re: HFile block size

2013-11-25 Thread Ted Yu
Job: Please take a look at http://hbase.apache.org/book.html#schema.cf.blocksize On Tue, Nov 26, 2013 at 11:58 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: From the code and the JIRAs: hbase.hregion.max.filesize is used to configure the size of a region (which can contain more

RE: HFile block size

2013-11-25 Thread Job Thomas
Hi Jean, Thank you for the support. We can create a table like this: create 'xyz', {NAME => 'cf', BLOCKSIZE => '8192'} in order to set the block size. But I am using Phoenix to create and query the table. I want to declare this property globally so that all tables created take this

RE: HFile block size [Resolved]

2013-11-25 Thread Job Thomas
Hi Jean and Ted, Thank you. I have created the table manually in HBase with BLOCKSIZE='8192' (create 'TESTBLOCK', {NAME => 'cf', BLOCKSIZE => '8192'}), and then created the table with the same name via Phoenix. Best Regards, Job M Thomas From: Job Thomas

Re: HFile block size

2013-11-25 Thread Azuryy Yu
There is no way to declare a global property in Phoenix; you have to declare BLOCKSIZE in each CREATE statement, such as: CREATE TABLE IF NOT EXISTS STOCK_SYMBOL(id int, name string) BLOOMFILTER='ROW', VERSIONS='1', BLOCKSIZE = '8192' On Tue, Nov 26, 2013 at 12:36 PM, Job Thomas j...@suntecgroup.com
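[Editor's note] For illustration, a hedged sketch of issuing such a CREATE through JDBC; the connection URL, column definitions and exact property spelling are assumptions, not taken from the thread:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PhoenixBlocksizeSketch {
      public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        Statement stmt = conn.createStatement();
        // HBase column-family properties like VERSIONS and BLOCKSIZE are passed
        // through the Phoenix DDL after the column list.
        stmt.execute("CREATE TABLE IF NOT EXISTS STOCK_SYMBOL ("
            + " ID VARCHAR PRIMARY KEY,"
            + " NAME VARCHAR)"
            + " VERSIONS=1, BLOCKSIZE=8192");
        conn.close();
      }
    }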

Re: HFile block size

2013-11-25 Thread James Taylor
FYI, you can define BLOCKSIZE in your hbase-site.xml, just like with HBase, to make it global. Thanks, James On Mon, Nov 25, 2013 at 9:08 PM, Azuryy Yu azury...@gmail.com wrote: This is no way to declare global property in Phoneix, you have to declare BLOCKSIZE in each 'create' SQL. such

Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.

2013-11-25 Thread Ted Yu
Update: Henry tried my patch attached to HBASE-10029. From the master log, it seems my patch worked. I will get back to this thread after further testing / code review. Cheers On Nov 25, 2013, at 6:05 PM, Henry Hung ythu...@winbond.com wrote: @Ted: I created the JIRA, is the information

[ANNOUNCE] HBase 0.94.14 is available for download

2013-11-25 Thread lars hofhansl
The HBase Team is pleased to announce the immediate release of HBase 0.94.14. Download it from your favorite Apache mirror [1]. This release has also been pushed to Apache's maven repository. All previous 0.92.x and 0.94.x releases can be upgraded to 0.94.14 via a rolling upgrade without downtime,

RE: HFile block size

2013-11-25 Thread Job Thomas
Hi James, It is working, thank you for your help. The query latency on a table of 10 million rows has been reduced from 28 milliseconds to 15 milliseconds by reducing the block size of the table. With Thanks, Job M Thomas From: James Taylor