Hi,
Please find the log from the master node. I am using hbase-0.94.12 and
zookeeper-3.4.5:
2014-11-27 12:32:21,444 [myid:0] - INFO
[Thread-1:QuorumCnxManager$Listener@486] - My election bind port:
0.0.0.0/0.0.0.0:3888
2014-11-27 12:32:21,459 [myid:0] - INFO
Hi,
I am using Hadoop 2.5.1 and HBase 0.98.8-hadoop2 in standalone mode. When I
use the following client-side code
public static void main(final String[] args) {
    HTableInterface table = null;
    try {
        final HBaseManager tableManager = HBaseManager.getInstance();
The error was that in 'value' I have to specify the comparator (binary:,
binaryprefix:, etc.). I was confusing the comparator with the compare operator.
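For reference, the comparator prefix goes inside the filter's value string, while the compare operator stands on its own. A minimal HBase shell sketch (table, family, and qualifier names are made up):

```
hbase> scan 'mytable', {FILTER => "SingleColumnValueFilter('cf', 'col', =, 'binary:somevalue')"}
hbase> scan 'mytable', {FILTER => "SingleColumnValueFilter('cf', 'col', =, 'binaryprefix:some')"}
```

The same filter strings can be passed through the Thrift scan's filterString field.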
Regards,
Néstor
On Wed, Nov 26, 2014 at 6:42 AM, Néstor Boscán nesto...@gmail.com wrote:
Hi
I've tried to apply the filters using the Java Thrift
Hi,
The issue reported earlier is resolved; I now have a new issue.
When I execute the list command from the hbase shell, I get the
following error:
Can't get master address from ZooKeeper; znode data == null
Any help is greatly appreciated.
Thanks & Regards
Dhamodharan Ramalingam
bq. Cannot open channel to 1 at election address /172.10.195.299:3888
Can you check the zookeeper log on 172.10.195.299 ?
Cheers
On Thu, Nov 27, 2014 at 12:29 AM, dhamodharan.ramalin...@tcs.com wrote:
Hi,
Please find the log from the master node. I am using hbase-0.94.12 and
zookeeper-3.4.5
Restart your ZooKeeper.
Restart your HBase.
This might be a short-term fix.
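A quick way to confirm the znode is actually empty before restarting (default paths and ports assumed; adjust to your install):

```
# inspect the master znode
$ zkCli.sh -server localhost:2181 get /hbase/master

# restart ZooKeeper, then HBase, as suggested above
$ zkServer.sh restart
$ stop-hbase.sh && start-hbase.sh
```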
Thanks,
Krishna
On Thu, Nov 27, 2014 at 7:57 PM, dhamodharan.ramalin...@tcs.com wrote:
Hi,
The issue reported earlier is resolved; I now have a new issue.
When I execute the list command from the hbase shell, I am
Hi
Is there a way to use the HBase Thrift scanner to skip ahead a number of
rows instead of reading them one by one? This would be very useful for paging.
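The Thrift API has no skip-N call; the usual workaround is to page by row key: remember the last key returned on a page and reopen the scanner just past it. A rough Java sketch against the Thrift1 client (`concat` is a hypothetical helper that appends a 0x00 byte to form the next possible key; table and page-size variables are assumed):

```
// open a scanner starting just after the last row of the previous page
TScan scan = new TScan();
scan.setStartRow(concat(lastRowOfPreviousPage, new byte[] { 0x00 }));
int scannerId = client.scannerOpenWithScan(tableName, scan, null);
List<TRowResult> page = client.scannerGetList(scannerId, pageSize);
client.scannerClose(scannerId);
```

This pages forward from a known key; jumping to an absolute row number N without knowing a key is not possible on the server side.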
Regards,
Néstor
Dear All.
Hi Wilm ;-)
I have started this question on hadoop-user list.
https://mail-archives.apache.org/mod_mbox/hadoop-user/201411.mbox/%3c0dacebda87d76ce0b72f7c53f0246...@none.at%3E
I hope you can help me.
Since ~2012 we have collected a lot of binary data (JPGs).
The size per file is
Hi Aleks ;),
On 27.11.2014 at 22:27, Aleks Laz wrote:
Our application is an nginx/php-fpm/PostgreSQL setup.
The target design is nginx + proxy features / php-fpm / $DB / $Storage.
.) Can I mix HDFS/HBase for binary data storage and data analysis?
Yes. HBase is perfect for that. For storage
Hi Wilm.
On 27-11-2014 at 23:41, Wilm Schumacher wrote:
Hi Aleks ;),
On 27.11.2014 at 22:27, Aleks Laz wrote:
Our application is an nginx/php-fpm/PostgreSQL setup.
The target design is nginx + proxy features / php-fpm / $DB /
$Storage.
.) Can I mix HDFS /HBase for binary data storage and
For MOB, please take a look at HBASE-11339
Cheers
On Nov 27, 2014, at 3:32 PM, Aleks Laz al-userhb...@none.at wrote:
Hi Wilm.
On 27-11-2014 at 23:41, Wilm Schumacher wrote:
Hi Aleks ;),
On 27.11.2014 at 22:27, Aleks Laz wrote:
Our application is an nginx/php-fpm/PostgreSQL setup.
The
On 28.11.2014 at 00:32, Aleks Laz wrote:
What's the plan about the MOB-extension?
https://issues.apache.org/jira/browse/HBASE-11339
From a development point of view I can build HBase with the MOB extension,
but from a sysadmin point of view a 'package' (jar, zip, deb, rpm, ...) is
much
easier to
Dear Wilm and Ted,
Thanks for your input and ideas.
I will now step back and learn more about big data and big storage to
be able to talk further.
Cheers Aleks
On 28-11-2014 at 01:20, Wilm Schumacher wrote:
On 28.11.2014 at 00:32, Aleks Laz wrote:
What's the plan about the MOB-extension?
Hi,
There was a mention of Elasticsearch here that caught my attention.
We use both HBase and Elasticsearch at Sematext. SPM
http://sematext.com/spm/, which monitors things like Hadoop, Spark, etc.
etc. including HBase and ES, can actually use either HBase or Elasticsearch
as the data store. We
Hi,
I am importing a CSV file into HBase using the command bin/hbase
org.apache.hadoop.hbase.mapreduce.ImportTsv
When I execute this MapReduce program I am getting the following
error. I am using Hadoop 2.4.1 and HBase 0.98.8-hadoop2.
I have set export JAVA_OPTS=-Xms1024m -Xmx10240m in
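For reference, a complete ImportTsv invocation for a CSV file looks roughly like this (table name, column mapping, and input path are placeholders):

```
$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
    '-Dimporttsv.separator=,' \
    -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1,cf:col2 \
    mytable /path/to/input.csv
```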
I have an HBase cluster of version 0.98.5 with hadoop-1.2.1 (no MapReduce).
I want to copy all the tables to another cluster whose version is
0.98.1-cdh5.1.0 with hadoop 2.3.0-cdh5.1.0.
I also want to specify the HDFS replication factor of the files in the
new cluster. Is that possible?
Hi Li Li,
You can copy HBase tables remotely to another cluster with the
following commands:
# create new tableOrig on destination cluster
dstCluster$ echo "create 'tableOrig', 'cf1', 'cf2'" | hbase shell
# on source cluster run copy table with destination ZK quorum specified
using --peer.adr
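Putting both steps together, the CopyTable run on the source side looks roughly like this (ZooKeeper host, port, and znode are placeholders; the replication factor would have to be set on the destination HDFS, e.g. via dfs.replication, since CopyTable itself does not expose it):

```
srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
    --peer.adr=dstZkHost:2181:/hbase tableOrig
```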
Hi,
It seems your job is not getting as much memory as it needs. Can you try
adding these properties to your configuration and schedule the job once
again:
mapred.cluster.map.memory.mb=2048
mapred.cluster.reduce.memory.mb=2048
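These are plain cluster properties; assuming a classic MapReduce (MRv1) setup, they would go in mapred-site.xml, e.g.:

```xml
<!-- mapred-site.xml (sketch) -->
<property>
  <name>mapred.cluster.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapred.cluster.reduce.memory.mb</name>
  <value>2048</value>
</property>
```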
Hope that solves your problem.
Cheers!
On Fri, Nov 28, 2014 at