If we look at TSDB, Kiji, and AsyncHBase, it looks like extensions to HBase
already exist.
I haven't looked at Salesforce.com's SQL interface, but I suspect that they, too,
have some sort of framework where they have to enforce typing.
Sent from a remote device. Please excuse any typos...
Mike
Andrew,
I was aware of your employer, and I am pretty sure they have already
dealt with the issue of exporting encryption software, and probably hardware
too.
Neither of us is a lawyer, and from what I know of dealing with government
bureaucracies, it's not always as simple as just
I do thank you for the advice, and I will try it. Is there a quick two- or
three-sentence summary about why this is the proper order?
I would have thought that since -ROOT- and .META. are on the region servers, you'd
want to stop the master first, before stopping the RS. Perhaps I'm thinking of
services
If you look at bin/start-hbase.sh, you would see:
$bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} $commandToRun zookeeper
$bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} $commandToRun master
$bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} \
  --hosts ${HBASE_REGIONSERVERS} $commandToRun regionserver
ZooKeeper is up and running and the Java client connects to it; it probably only
fails because of the region server, whatever it is.
All of the configuration uses IPs and not names, so I don't think changing
the hosts file will do much.
Keep in mind that my configuration is all on one machine.
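For what it's worth, here is a minimal sketch (not from this thread; the quorum IP and port are placeholders for a single-machine setup, and it assumes the 0.94-era HBaseAdmin.checkHBaseAvailable API) that can help narrow down whether the client is failing at ZooKeeper or at the master/region server:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CheckHBase {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Placeholder quorum address for an all-on-one-machine setup.
    conf.set("hbase.zookeeper.quorum", "192.168.1.10");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    // Connects through ZooKeeper and then checks the master; the exception type
    // (MasterNotRunningException vs. ZooKeeperConnectionException) tells you
    // which piece of the deployment the client cannot reach.
    HBaseAdmin.checkHBaseAvailable(conf);
    System.out.println("HBase master is reachable");
  }
}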
The problem is that in our case the customer configures the NTP server, and it
could be invalid. We're trying to cover user-error cases, but on the other hand
we're trying to understand how big a time skew HBase can handle...
Thanks,
YuLing
-Original Message-
From: Kevin O'dell
Sorry I'm late to this thread, but I was the guy behind HBASE-7221, and the
algorithms specifically mentioned were MD5 and Murmur (not SHA-1). An
implementation of Murmur already exists in HBase, and the MD5
implementation was the one that ships with Java.
The intent was to include hashing
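To illustrate that point, here is a small sketch (not code from HBASE-7221; the row key is made up): MD5 comes with the JRE via MessageDigest, and HBase already bundles a Murmur implementation in org.apache.hadoop.hbase.util.MurmurHash.

import java.security.MessageDigest;
import org.apache.hadoop.hbase.util.Hash;
import org.apache.hadoop.hbase.util.MurmurHash;

public class HashSketch {
  public static void main(String[] args) throws Exception {
    byte[] rowKey = "row-0001".getBytes("UTF-8");

    // MD5 ships with the JDK, no extra dependency needed.
    byte[] md5 = MessageDigest.getInstance("MD5").digest(rowKey);

    // Murmur is already bundled with HBase.
    Hash murmur = MurmurHash.getInstance();
    int murmurHash = murmur.hash(rowKey);

    System.out.println(md5.length + " MD5 bytes, murmur hash = " + murmurHash);
  }
}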
The actual time isn't an issue. What matters is that all of the nodes in the cluster
have the same time...
Give or take a couple of ms.
Sent from a remote device. Please excuse any typos...
Mike Segel
On Mar 18, 2013, at 2:39 PM, yulin...@dell.com wrote:
The problem is that in our case, the customer
So, what about time changes? Actually, in our case we only have one node.
Let's say initially the customer does not install an NTP server and the clock
becomes slow. Then the customer installs a new NTP server, and the clock is
adjusted by the NTP server, ending up advanced by 5 minutes.
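For context on how big a skew HBase tolerates: the master compares a region server's clock to its own when the server reports in, and rejects it past a configurable limit. A tiny sketch that just prints the effective setting (hbase.master.maxclockskew is the real property name; the 30000 ms default shown is what 0.94-era code uses, but verify it against your version):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ShowMaxClockSkew {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Maximum allowed difference, in milliseconds, between a region server's
    // clock and the master's before the server is refused at check-in.
    long maxSkewMs = conf.getLong("hbase.master.maxclockskew", 30000L);
    System.out.println("hbase.master.maxclockskew = " + maxSkewMs + " ms");
  }
}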
Hello,
Is it possible to run an M/R job on cluster A over a table that resides on
cluster B, with output to a table on cluster A? If so, how?
I am interested in doing this for the purpose of copying part of a table
from B to A. Cluster B is a production environment, cluster A is a slow
test platform.
Check out how CopyTable does it:
https://github.com/apache/hbase/blob/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
J-D
On Mon, Mar 18, 2013 at 3:09 PM, David Koch ogd...@googlemail.com wrote:
Hello,
Is it possible to run a M/R on cluster A over a table that
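For illustration, a minimal sketch of the pattern CopyTable uses, adapted to the question above: the job's input configuration points at one cluster, and the output goes to a table on another cluster by passing that cluster's ZooKeeper quorum address to initTableReducerJob. Class, table, and host names are placeholders, this assumes 0.94-era APIs, and which MapReduce cluster actually runs the job is decided by the mapred configuration on the submitting machine.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.Import;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class CrossClusterCopy {
  public static void main(String[] args) throws Exception {
    // Input side: this configuration should point at the cluster being read (cluster B).
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "clusterB-zk");   // placeholder host

    Job job = new Job(conf, "copy-from-B-to-A");
    job.setJarByClass(CrossClusterCopy.class);

    // Scan the source table with Import's identity mapper, as CopyTable does.
    Scan scan = new Scan();
    scan.setCacheBlocks(false);
    TableMapReduceUtil.initTableMapperJob("source_table", scan,
        Import.Importer.class, null, null, job);

    // Output side: write to a table on the other cluster (cluster A) by passing
    // its quorum address ("host1,host2:2181:/hbase") as the peer address.
    TableMapReduceUtil.initTableReducerJob("dest_table", null, job, null,
        "clusterA-zk:2181:/hbase", null, null);
    job.setNumReduceTasks(0);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}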
Hi,
I am using hbase-0.94.1. There is a MapReduce job running on my server.
After some time, I found that my folder /tmp/hadoop-root/mapred/local/archive
has grown to 14 GB.
How can I configure this and limit the size? I do not want to waste my space on the
archive.
Thanks,
Xia
+1. I really don't want to add typing-specific information into HBase core
-- however, having building blocks, plugins, and extra metadata manage it
seems quite reasonable to me.
There are many, many games that can be played to encode data, and enforcing
typing at the HBase level as opposed to
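To make the "building blocks" idea concrete, here is a tiny sketch (table, family, and row-key names are made up, not from the thread) of how typing stays in the client layer today: HBase stores only byte arrays, and helpers like Bytes do the encode/decode.

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class TypedCellSketch {
  public static void main(String[] args) {
    // HBase itself only sees byte[]; the "long" type exists purely client-side.
    long price = 1299L;
    byte[] encoded = Bytes.toBytes(price);

    Put put = new Put(Bytes.toBytes("item#42"));   // placeholder row key
    put.add(Bytes.toBytes("d"), Bytes.toBytes("price"), encoded);

    // The reader has to know, out of band, that this cell holds a long.
    long decoded = Bytes.toLong(encoded);
    System.out.println(decoded);
  }
}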
Mind telling us what Hadoop version you are using?
Thanks
On Mon, Mar 18, 2013 at 4:14 PM, xia_y...@dell.com wrote:
Hi,
I am using hbase-0.94.1. There is some mapreduce job running on my server.
After some time, I found that my folder
/tmp/hadoop-root/mapred/local/archive has 14G size.
yup. Why break a good thing? ;-)
On Mar 18, 2013, at 6:54 PM, Jonathan Hsieh j...@cloudera.com wrote:
+1. I really don't want to add typing-specific information into HBase core
-- however, having building blocks, plugins, and extra metadata manage it
seems quite reasonable to me.
There are
Thanks for the clarification, Doug.
Back to my point, I was saying that MD5 and SHA-1 are already part of the Java
platform, so if you're running Java 1.6_xx or Java 1.7_xx you will have MD5
available. So it could be a good thing.
Murmur is released under MIT... Is there going to be a
Can you ask this question in the HDFS user group, please?
-Anoop-
From: bhushan.kandalkar [bhushan.kandal...@harbingergroup.com]
Sent: Monday, March 18, 2013 12:29 PM
To: user@hbase.apache.org
Subject: NameNode of Hadoop Crash?
Hi, following is the error log in
Hi, HBase users.
Maybe this is a dumb question for experienced HBase users.
If there are very many read requests and the data size is bigger than the heap size
(I mean the data being requested couldn't all be loaded into memory at once),
what situations can be expected inside HBase?
Flushing of memstores?