you got it!!!
On Mon, Dec 27, 2010 at 11:51 PM, Pete Haidinyak wrote:
Ah, so it's getRow() which returns a byte array that is the actual row
key.
Thanks
-Pete
On Mon, 27 Dec 2010 23:42:58 -0800, Ryan Rawson wrote:
During a scan, each iteration of Scanner.next() returns a Result
object which gives you the row key. Check the javadoc!
-ryan
On Mon, Dec 27, 2010 at 11:40 PM, Pete Haidinyak wrote:
Hi,
Is there a way to get the Row Key for the result from a Scanner? I've
seen examples where you get the bytes for the row and extract the row key
from there, but my keys are of random length.
Thanks
-Pete
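To make the answer above concrete, here is a minimal sketch of the scan loop. This assumes the 0.20/0.90-era client API; the table name "mytable" is a placeholder.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanRowKeys {
    public static void main(String[] args) throws Exception {
        // "mytable" is a placeholder table name.
        HTable table = new HTable(HBaseConfiguration.create(), "mytable");
        ResultScanner scanner = table.getScanner(new Scan());
        try {
            for (Result result : scanner) {
                // getRow() hands back the full row key as bytes,
                // so variable-length keys are not a problem.
                byte[] rowKey = result.getRow();
                System.out.println(Bytes.toString(rowKey));
            }
        } finally {
            scanner.close();
        }
    }
}
```

Because getRow() returns the raw key bytes for each Result, there is nothing to parse out of the row yourself, regardless of key length.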
Another query ..
Hadoop 0.21.0 has been released with the stable HDFS append feature.
Should we do anything in HBase (modify it or apply some patch) to use this
HDFS append feature, or does HBase take care of this out of the box?
From the web I've seen that this append feature (HDFS-265) is different from what
Well, the reportForDuty() thread slept for 1 sec (as configured), since
initially it could not find the ZK path /hbase/master, and on the second
retry it got it.
OK, I'll tell you the series of steps I did and my observations:
>I'm testing in pseudo-cluster mode. Everything from ZooKeeper (one
ZooKeeper), NN and DN, HMaster and an HRegionServer is on one machine.
>On Master start, the Avro classes were not found (ClassNotFoundExceptions were
coming), so I dow
TOF has an HBase client HTable in it. It's certainly easier using TOF.
Unless you have special needs, I'd stick with TOF.
Good luck,
St.Ack
On Mon, Dec 27, 2010 at 1:03 PM, Nanheng Wu wrote:
> Thanks for the answers. I will use these as my basis for
> investigation. I am using a mapper only job, i
Sounds right to me, if that's of any consolation, Marc.
St.Ack
On Mon, Dec 27, 2010 at 5:07 PM, Marc Limotte wrote:
> Lars, Todd,
>
> Thanks for the info. If I understand correctly, the importtsv command line
> tool will not compress by default and there is no command line switch for
> it, but I ca
Hey all,
I was just looking at the PoweredBy wiki page and noticed that it's fairly
out of date. I wanted to ping the list and encourage everyone to list their
company or update their information if they aren't already listed. Posting
your company or project on PoweredBy is a good way to get publi
Jeff, thanks very much for the response and for pushing forward with the
gateway.
On the Python front, I just wrote an asynchronous client (Tornado-based).
Commit:
https://github.com/mjrusso/pyhbase/commit/07ed39527bd6752158b1293e8d2df528d649a613
Let me know if you have any feedback on this. If
Lars, Todd,
Thanks for the info. If I understand correctly, the importtsv command line
tool will not compress by default and there is no command line switch for
it, but I can modify the source at
hbase-0.89.20100924+28/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java
to call FileOut
I am about to do a bunch of Puts with
int lastcolVal = //get count of columns somehow I think; (How do I get
the column count of a column family from a certain row?)
for(int j = 0; j < 10; j++) {
Put put = new Put("activities", lastcolVal, activityId[j]);
context.write(accountNo, p
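The pseudocode above can be sketched against the standard client API. This is a hedged sketch, not the poster's actual code: the three-argument Put constructor above is not the real signature (Put takes the row key, and columns are added with add(family, qualifier, value)), and the table, row, and qualifier names here are made up. Counting the columns of a family for one row can be done with Get plus Result.getFamilyMap():

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class CountThenPut {
    public static void main(String[] args) throws Exception {
        byte[] row = Bytes.toBytes("some-row");        // placeholder row key
        byte[] family = Bytes.toBytes("activities");
        HTable table = new HTable(HBaseConfiguration.create(), "mytable");

        // Count existing columns in the family for this row:
        // getFamilyMap() returns a qualifier -> value map for that family.
        Get get = new Get(row);
        get.addFamily(family);
        Result result = table.get(get);
        int colCount = result.isEmpty() ? 0 : result.getFamilyMap(family).size();

        // Append new columns, numbering qualifiers from the current count.
        Put put = new Put(row);
        for (int j = 0; j < 10; j++) {
            put.add(family, Bytes.toBytes("col-" + (colCount + j)),
                    Bytes.toBytes("activity-" + j));
        }
        table.put(put);
        table.close();
    }
}
```

In a mapper you would typically hand the Put to context.write() (with TableOutputFormat) instead of calling table.put() directly.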
Thanks for the answers. I will use these as my basis for
investigation. I am using a mapper only job, is it better to use the
HBase client to write to HBase or TableOutputFormat?
On Mon, Dec 27, 2010 at 8:38 AM, Stack wrote:
> On Mon, Dec 27, 2010 at 1:54 AM, Nanheng Wu wrote:
>> I am running so
Thank you very much for the detailed answers. Below is another round of too
many questions. We are swapping data stores late in the game and want to be
sure to start "deep" to avoid the same old problems we have seen in the
past. Thanks in advance for any advice you can provide.
All data is writte
On Fri, Dec 24, 2010 at 5:09 AM, Wayne wrote:
> We are in the process of evaluating hbase in an effort to switch from a
> different nosql solution. Performance is of course an important part of our
> evaluation. We are a python shop and we are very worried that we can not get
> any real performanc
Hey Otis:
Yeah, we're a bit crass when it comes to dealing with exceptions that
come up out of HDFS. We'll just abort the server rather than try
fancy footwork to get around the outage. HBASE-2183 is about doing a
better job of riding over HDFS outage.
St.Ack
On Sat, Dec 25, 2010 at 11:11 AM,
On Sat, Dec 25, 2010 at 4:19 PM, devush wrote:
> Hi,
>
> I am running a single-node configuration. hbase-site.xml is empty, but in
> the hdfs-site.xml (hadoop/conf) dfs.data.dir is configured to some
> directory in the user's home directory. The files survive the reboot. But
> HBase loses th
If the block is missing and all datanodes have checked in, then it's gone.
Grep for the block name in the namenode log to get a sense of the block's history.
Can you figure out what happened to it?
St.Ack
On Sun, Dec 26, 2010 at 2:47 AM, SingoWong wrote:
> but how to solve it? fsck only shows which paths are corrupt, bu
On Fri, Dec 24, 2010 at 6:15 AM, Mohit wrote:
> So I restarted the HBase cluster to clear the cache, and from the shell
> I performed a scan 'temp' operation, and strangely I could fetch
> the data I had deleted, and the data file was also present under the
> table directory.
On Mon, Dec 27, 2010 at 1:54 AM, Nanheng Wu wrote:
> I am running some tests to load data from HDFS into HBase in a MR job.
> I am pretty new to HBase and I have some questions regarding bulk load
> performance: I have a small cluster with 4 nodes, I set up one node to
> run Namenode/JobTracker/ZK
Hi,
I'm using HBase 0.20.5 and Hadoop 0.20.1. Some region servers are crashing,
saying that a file cannot be found and that a lease has expired (log detail
below). I tried searching this mailing list for the exact problem but was not
successful. These are the symptoms:
- Typically I see high
Is there a master running? If so, what does it say in the master logs?
St.Ack
On Mon, Dec 27, 2010 at 7:26 AM, Mohit wrote:
> Oh, I'm sorry, my mistake, I rectified it,
>
> but it seems Region Server(whether it is running as a thread(like in
> standalone) or as a local process or a distributed
Oh, I'm sorry, my mistake, I rectified it.
But it seems the Region Server (whether it is running as a thread (like in
standalone), as a local process, or as a distributed process) is not able to
communicate with the Master. It gets stuck; from the RS logs I see
"Telling master at 10.18.55.66:6000 that we a
Mozilla is using it for a test project that is building a data warehouse
on top of our Bugzilla installation. While it is still a bit young, it
is usable and very exciting, not only for the searching capabilities,
but also for the application-friendly extensions to HBase such as linked
fields,
hi guys,
HBase 0.20.6 is likely to run well on Hadoop 0.21. We have many patches
that help bolster durability on top of branch-20-append, and some may
also apply to Hadoop 0.21.
What you are possibly running into is using Hadoop 0.20 jars in HBase
0.90 on top of Hadoop 0.21. Try deleting the Hadoop 0.20
I am running some tests to load data from HDFS into HBase in a MR job.
I am pretty new to HBase and I have some questions regarding bulk load
performance: I have a small cluster with 4 nodes, I set up one node to
run Namenode/JobTracker/ZK, and the other three nodes all run
TaskTracker/DataNode/HRe
Hello Users/Sebastian,
I'm also not able to run HBase 0.20.6 with Hadoop 0.21; I'm getting an EOF
exception while creating the FileSystem object in HMaster.
Whereas everything works fine with Hadoop 0.20.2.
Are any fixes available, or any suggestions?
-Mohit