but class was expected
Maybe there is a possibility to change only the required classes in the jars? I
tried to use hadoop-core-1.0.0, but then the versions don't match and
exceptions are thrown...
--
Konrad Tendera
cy.xml contains '*' everywhere as the value, so I don't think
that is the problem.
--
Konrad Tendera
at
org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:448)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:326)
at java.lang.Thread.run(Thread.java:662)
I can't find any info about it. I'm using HBase 0.92 with Hadoop 0.22.
--
Konrad Tendera
I'm wondering if there is any possibility to run HBase 0.92 on top of Hadoop
0.22? I can't find the necessary jars, such as hadoop-core...
--
Konrad Tendera
> family "info":
> "info:pg" - keeps page number
> "info:id" - sender ID
> "info:nm" - pdf name
> "info:prop_name" - column to hold property name
> "info:prop_value" - column to hold property value
> family "data":
> "data:blob" - bl
Hello,
I'm designing a schema for my use case and I'm considering which will
be better: rows or columns. Here's what I need - my schema currently
looks like this (it will be used for keeping smallish pdf files or
single pages of larger documents):
table files:
family "info":
"info:pg" - keeps page number
"info:id" - sender ID
"info:nm" - pdf name
"info:prop_name" - column to hold property name
"info:prop_value" - column to hold property value
family "data":
"data:blob" - bl
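If the row-per-page variant is chosen, one common trick is to build the row key from the pdf name plus a zero-padded page number, so all pages of one document sort together and come back in order from a prefix scan. A minimal sketch (the helper name and padding width are my own illustration, not from this thread):

```java
// Sketch of a row-per-page key design for the "files" table above.
// buildRowKey and the 5-digit padding are assumptions for illustration.
public class RowKeys {
    // Zero-pad the page number so rows for one pdf sort in page order
    // and can be retrieved with a simple prefix scan on the pdf name.
    static String buildRowKey(String pdfName, int page) {
        return String.format("%s/%05d", pdfName, page);
    }

    public static void main(String[] args) {
        System.out.println(buildRowKey("report.pdf", 7));   // report.pdf/00007
        System.out.println(buildRowKey("report.pdf", 12));  // report.pdf/00012
    }
}
```

The padding matters because HBase row keys sort lexicographically: without it, page 10 would sort before page 2.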
Hello everyone,
While I was reading some information about multi-Master environments I
started wondering how the "main" master is elected. As far as I know,
if e.g. I have 5 Masters, only one is the "real" Master and the rest are only
backups. The main problem is that I couldn't find any larger i
yonghu writes:
>
> Hello,
>...
try something like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileWriterV2;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
HFile.Writer hwriter = new HFileWriterV2(conf, new CacheConfig(conf), fs,
    new Path(fs.getWorkingDirectory() + "/foo"));
So, what should I use instead of HBase? I'm wondering about the following solution:
1) let's say our limit is 15MB - files up to this limit are worth keeping in HBase
2) files bigger than 15MB are stored in HDFS, and HBase keeps only some
information about where the file is placed
Is this an appropriate way to solve the
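The size-based split in 1)-2) above can be sketched as a small routing decision: blobs up to the limit go inline into an HBase cell, larger files go to HDFS with only a pointer stored in HBase. A plain-Java illustration (the class, enum, and method names are my own, not from this thread):

```java
// Sketch of the size-based routing described above: small blobs inline
// in HBase, large files in HDFS with a pointer row in HBase.
// BlobRouter and its names are assumptions for illustration.
public class BlobRouter {
    // 15 MB, the threshold proposed in the post above.
    static final long INLINE_LIMIT = 15L * 1024 * 1024;

    enum Store { HBASE_INLINE, HDFS_POINTER }

    // Decide where a blob of the given size should live.
    static Store chooseStore(long sizeBytes) {
        return sizeBytes <= INLINE_LIMIT ? Store.HBASE_INLINE : Store.HDFS_POINTER;
    }

    public static void main(String[] args) {
        System.out.println(chooseStore(4L * 1024 * 1024));   // HBASE_INLINE
        System.out.println(chooseStore(40L * 1024 * 1024));  // HDFS_POINTER
    }
}
```

In the HDFS_POINTER case the HBase row would hold only the HDFS path (and perhaps a checksum), so reads first hit HBase and then fetch the bytes from HDFS.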
tsuna writes:
> FWIW, all of StumbleUpon's static assets (millions of images,
> JavaScript and CSS files etc) are served out of HBase. Our average
> blob size is 20KB.
>
This is the answer I expected.
Hello,
I'm wondering whether it's worth storing my binary data in HBase.
I've read lots of articles and presentations which say that it is
highly not recommended, but maybe there have been some changes in the
newest versions? If possible, please describe how data size influences
performance (what size of fi