Hi Dhaval,
Yes, rows can get very big; that's why we filter them. The filter lets KVs
pass as long as the per-row KV count is at most MAX_LIMIT and skips the row
entirely once the count exceeds this limit. KV size is roughly constant.
Alternatively, we could use batching; you are right.
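For readers following along, the counting logic can be sketched in plain Java. The HBase Filter API is elided so the sketch runs standalone (in a real filter this would live in filterKeyValue/filterRow on a FilterBase subclass); MAX_LIMIT and the row layout are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch: a row passes only while its KV count stays within
// MAX_LIMIT; once the count exceeds the limit, the whole row is skipped.
public class KvCountLimitSketch {
    static final int MAX_LIMIT = 3; // illustrative limit

    static List<List<String>> filterRows(List<List<String>> rows) {
        List<List<String>> kept = new ArrayList<>();
        for (List<String> row : rows) {
            // Keep the row only if its KV count does not exceed MAX_LIMIT.
            if (row.size() <= MAX_LIMIT) {
                kept.add(row);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<List<String>> rows = new ArrayList<>();
        rows.add(List.of("kv1", "kv2"));                      // kept
        rows.add(List.of("kv1", "kv2", "kv3", "kv4", "kv5")); // skipped
        System.out.println(filterRows(rows).size()); // prints 1
    }
}
```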
Also, with regard to the
Exception in thread main java.lang.NoClassDefFoundError:
org.apache.hadoop.hbase.HBaseConfiguration
at client.PutWriteBufferExample.main(PutWriteBufferExample.java:18)
2013/11/25 Haosong Huang haosd...@gmail.com
I think the structure of your hbase directory may have some problems. Try
I have also tried to execute the example in standalone mode: I downloaded
hbase-0.94.13.tar.gz, and then launched
java -cp hbase-trai.jar:hbase/hbase-0.94.13 client.PutWriteBufferExample
As a result I got:
Exception in thread main java.lang.NoClassDefFoundError:
As a last resort (extrema ratio) I tried a very dirty solution: a runnable
JAR with all required libraries inside.
Now the error is:
Exception in thread main java.lang.reflect.InvocationTargetException
at java.lang.reflect.Method.invoke(libgcj.so.10)
at
2013/11/25 Silvio Di gregorio silvio.digrego...@gmail.com
As a last resort (extrema ratio) I tried a very dirty solution: a runnable
JAR with all required libraries inside.
Now the error is:
Caused by: java.lang.RuntimeException: hbase-default.xml file seems to be
for and old version of HBase (0.94.13), this version is Unknown
at
org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:68)
at
This error implies there are multiple projects on the class path whose
hbase-default.xml files are not compatible.
Can you post the complete class path?
Cheers
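A NoClassDefFoundError for HBaseConfiguration almost always means the HBase jars are missing from the runtime classpath. One quick way to see exactly what the JVM was started with is to print java.class.path; the class name below is made up for illustration:

```java
// Prints each classpath entry on its own line, which makes it easy to
// spot whether the HBase jars (and their hbase-default.xml) are present.
public class PrintClasspath {
    public static void main(String[] args) {
        String cp = System.getProperty("java.class.path");
        for (String entry : cp.split(System.getProperty("path.separator"))) {
            System.out.println(entry);
        }
    }
}
```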
On Nov 25, 2013, at 5:18 AM, Silvio Di gregorio silvio.digrego...@gmail.com
wrote:
Caused by: java.lang.RuntimeException:
This finally works:
java -cp `hbase classpath`:`hadoop classpath`:hbase-trai.jar
client.PutWriteBufferExample
Basically my HBase wasn't well configured; I adjusted it following your
advice and now it works.
Thanks
2013/11/25 Ted Yu yuzhih...@gmail.com
This error implies there are multiple
Hi all,
Please tell me how to set the HBase block size in hbase-site.xml.
I am using HBase 0.94.12 with Hadoop 1.0.4.
I have read that by setting the block size to a small value (say 8 KB) we can
improve HBase random reads.
Is that correct? How does that property affect performance?
I want to get the metadata information in HBase. My basic purpose is to:
1. Get the information about tables: how many there are and their names.
2. Get the column family names in each table.
3. Get the column names and their data types in each column family.
Is there any tool or command by
bin/hbase shell
In there:
Type help and you'll get along
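For the first two items, the shell covers this directly; 'mytable' below is a placeholder table name:

```
hbase(main):001:0> list               # names of all tables
hbase(main):002:0> describe 'mytable' # column families and their settings
hbase(main):003:0> help               # full command reference
```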
On Monday, November 25, 2013, ashishkshukladb wrote:
I want to get the metadata information in HBase. My basic purpose is to:
1. Get the information about tables: how many there are and their names.
2. Get the column family names
Hmm, OK, you are right. The principle of Hadoop/HBase is to do big data on
commodity hardware, but that's not to say you can do it without enough hardware.
Get 8 commodity disks and see your performance/throughput numbers improve
substantially. Before jumping into buying anything though, I would
Please take a look at http://hbase.apache.org/book.html#schema.cf.blocksize
Note: with smaller block sizes, the number of StoreFile indexes increases.
Cheers
On Tue, Nov 26, 2013 at 12:15 AM, Job Thomas j...@suntecgroup.com wrote:
Hi all ,
Please tell me how to set Hbase block size in
Here is the caller to createReorderingProxy():
ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf);
where namenode
is org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB :
public class ClientNamenodeProtocolTranslatorPB implements
ProtocolMetaInterface,
bq. 3. Get the column names and their data types in each column family.
Each columnfamily can have arbitrarily many columns. There is no tool which
returns the above information.
On Tue, Nov 26, 2013 at 12:26 AM, Asaf Mesika asaf.mes...@gmail.com wrote:
bin/hbase shell
In there:
Type help
1. Get the information about tables: how many there are and their names.
You can use HBaseAdmin from the API: admin.getTableNames().
2. Get the column family names in each table.
From the table you can get the descriptor, which holds the families:
table.getTableDescriptor();
3. Get the column names
Henry:
See HBASE-7635 Proxy created by HFileSystem#createReorderingProxy() should
implement Closeable
On Tue, Nov 26, 2013 at 1:56 AM, Ted Yu yuzhih...@gmail.com wrote:
Here is the caller to createReorderingProxy():
ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf);
One other tool option for you is to use Phoenix. You use SQL to create a
table and define the columns through standard DDL. Your columns make up the
allowed KeyValues for your table and the metadata is surfaced through the
standard JDBC metadata APIs (with column family mapping to table catalog).
Amit:
bq. One of the changes I made is a fix to what seems like a bug
I don't see an attachment to HBASE-9682 so cannot tell whether the change
was related to what you described.
Have you encountered the problem since last week ?
Cheers
On Tue, Nov 19, 2013 at 9:40 PM, Amit Sela
@Ted
Yes, I use the hadoop-hdfs-2.2.0.jar.
BTW, how are you certain that the namenode class is
ClientNamenodeProtocolTranslatorPB?
From NameNodeProxies, I can only assume that
ClientNamenodeProtocolTranslatorPB is used only when connecting to a single
Hadoop namenode.
public static T
Hi all,
Out of these properties, which one is used to set the HFile block size in
HBase 0.94.12?
hbase.hregion.max.filesize=16384
hfile.min.blocksize.size=16384
hbase.mapreduce.hfileoutputformat.blocksize=16384
Best Regards,
Job M Thomas
From the code and the JIRAs:
hbase.hregion.max.filesize is used to configure the size of a region (which
can contain more than one HFile)
hbase.mapreduce.hfileoutputformat.blocksize comes from HBASE-8949 (while
writing HFiles from HFileOutputFormat, forcing the blocksize from the table
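For reference, hbase.hregion.max.filesize is a site-level setting; a sketch of the hbase-site.xml entry (the 10 GB value is purely illustrative):

```xml
<!-- hbase-site.xml: a region is split once its store files grow past this size -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value> <!-- 10 GB, illustrative -->
</property>
```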
Job:
Please take a look at http://hbase.apache.org/book.html#schema.cf.blocksize
On Tue, Nov 26, 2013 at 11:58 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
From the code and the JIRAs:
hbase.hregion.max.filesize is used to configure the size of a region (which
can contain more
Hi Jean,
Thank you for the support.
We can create a table like this to set the block size:
create 'xyz', {NAME => 'cf', BLOCKSIZE => '8192'}
But I am using Phoenix to create and query the table. I want to declare this
property globally so that all tables created take this
Hi Jean, Ted,
Thank you,
I have created the table manually in HBase with BLOCKSIZE='8192' (create
'TESTBLOCK', {NAME => 'cf', BLOCKSIZE => '8192'}), and then created a table
with the same name via Phoenix.
Best Regards,
Job M Thomas
From: Job Thomas
There is no way to declare a global property in Phoenix; you have to declare
BLOCKSIZE in each CREATE TABLE statement, such as:
CREATE TABLE IF NOT EXISTS STOCK_SYMBOL (id INTEGER NOT NULL PRIMARY KEY,
name VARCHAR) BLOOMFILTER='ROW', VERSIONS='1', BLOCKSIZE='8192'
On Tue, Nov 26, 2013 at 12:36 PM, Job Thomas j...@suntecgroup.com
FYI, you can define BLOCKSIZE in your hbase-site.xml, just like with HBase,
to make it global.
Thanks,
James
On Mon, Nov 25, 2013 at 9:08 PM, Azuryy Yu azury...@gmail.com wrote:
There is no way to declare a global property in Phoenix; you have to declare
BLOCKSIZE in each CREATE TABLE statement, such
Update:
Henry tried my patch attached to HBASE-10029
From the master log, it seems my patch worked.
I will get back to this thread after further testing / code review.
Cheers
On Nov 25, 2013, at 6:05 PM, Henry Hung ythu...@winbond.com wrote:
@Ted:
I created the JIRA, is the information
The HBase Team is pleased to announce the immediate release of HBase 0.94.14.
Download it from your favorite Apache mirror [1]. This release has also been
pushed to Apache's maven repository.
All previous 0.92.x and 0.94.x releases can be upgraded to 0.94.14 via a
rolling upgrade without downtime,
Hi James,
It is working, thank you for your help. The query latency on a 10-million-row
table has been reduced from 28 ms to 15 ms by reducing the table's block size.
With Thanks,
Job M Thomas
From: James Taylor