2010/3/3 Ted Yu yuzhih...@gmail.com
Hi,
I wrote a utility to add index to my table.
After running it, I couldn't see the rows in that table I saw before.
hbase(main):007:0> count 'ruletable'
NativeException: org.apache.hadoop.hbase.client.NoServerForRegionException:
No server address listed
Something like the following was logged in the region server logs for each region:
Filled indices for region: 'ruletable,,1267641828807' with entries in 00:05:99
Cheers,
Dan
2010/3/5 Ted Yu yuzhih...@gmail.com
2010/3/3 Ted Yu yuzhih...@gmail.com
Hi,
I wrote a utility to add
Previous attempt wasn't delivered.
On Wed, Mar 3, 2010 at 9:30 AM, Ted Yu yuzhih...@gmail.com wrote:
Hi,
I started hbase 0.20.3 successfully on my Linux VM. Master and regionserver
are on the same VM.
There're two empty tables.
Soon I saw the following in regionserver.log:
2010-03-03 09
If you disable writing, you can use org.apache.hadoop.hbase.mapreduce.Export
to export all your data, copy them to your new HDFS, then use
org.apache.hadoop.hbase.mapreduce.Import, finally switch your clients to the
new HBase cluster.
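A minimal sketch of driving that Export step from Java, assuming HBase 0.20's org.apache.hadoop.hbase.mapreduce.Export exposes the createSubmittableJob(Configuration, String[]) helper used by its command-line driver; the table name and output path are only placeholders:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.Export;
    import org.apache.hadoop.mapreduce.Job;

    public class ExportTable {
      public static void main(String[] args) throws Exception {
        HBaseConfiguration conf = new HBaseConfiguration();
        // Same arguments as the command-line driver: <tablename> <outputdir>
        Job job = Export.createSubmittableJob(conf,
            new String[] { "ruletable", "/backup/ruletable" });
        // Block until the export MapReduce job finishes.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Import can be driven the same way against the copied directory on the new cluster.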
On Wed, Mar 3, 2010 at 11:27 AM, Kevin Peterson
, Ted Yu yuzhih...@gmail.com wrote:
But querying zookeeper shows:
lsr /hbase
hbase
safe-mode
rs
1267640372165
root-region-server
master
shutdown
On Wed, Mar 3, 2010 at 10:59 AM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
Looks like
You can use Export class.
Please take a look at hbase-2225 as well.
On Tuesday, March 2, 2010, sallonch...@hotmail.com wrote:
Hi, everyone. Recently I encountered a data-loss problem with HBase, so it comes to the question of how to back up HBase data to recover table records if HBase
Hi,
We use hbase 0.20.1
On http://snv-it-lin-006:60010/master.jsp, I see two rows for the same
region server:
snv-it-lin-010.projectrialto.com:60030 1267038448430 requests=0, regions=25, usedHeap=1280, maxHeap=6127
snv-it-lin-010.projectrialto.com:60030 1267466540070 requests=0, regions=2,
Hi,
I saw this in our HBase 0.20.1 master log:
2010-03-01 12:38:42,451 INFO [HMaster] master.ProcessRegionOpen(80):
Updated row domaincrawltable,,1267475905927 in region .META.,,1 with
startcode=1267475746189, server=10.10.31.135:60020
2010-03-01 12:39:06,088 INFO [Thread-10]
Hi,
We use hadoop 0.20.1
I saw the following in our log:
2010-02-27 10:05:09,808 WARN
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext: Failed to create
/disk2/opt/kindsight/hadoop/data/mapred/local
[r...@snv-qa-lin-cg ~]# df
Filesystem 1K-blocks Used Available Use%
February 2010 04:05, Ted Yu yuzhih...@gmail.com wrote:
Hi,
I use org.apache.hadoop.hbase.
filter.PrefixFilter in my export utility.
I like the flexibility of RegExpRowFilter but it cannot be used in
Scan.setFilter(org.apache.hadoop.hbase.filter.Filter) call.
Is there Filter implementation
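For reference, a minimal sketch of the PrefixFilter usage described above with the 0.20.x client API; the table name and prefix are illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.PrefixFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PrefixScan {
      public static void main(String[] args) throws Exception {
        HTable table = new HTable(new HBaseConfiguration(), "crawltable");
        Scan scan = new Scan();
        // Only rows whose key starts with the given prefix are returned.
        scan.setFilter(new PrefixFilter(Bytes.toBytes("com.onsoft.")));
        ResultScanner scanner = table.getScanner(scan);
        try {
          for (Result result : scanner) {
            System.out.println(Bytes.toString(result.getRow()));
          }
        } finally {
          scanner.close();
        }
      }
    }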
My original post was to nutch mailing list.
But since the error was reported from hadoop class, I thought I may
get some advice here.
Thanks
On Saturday, February 27, 2010, Ted Yu yuzhih...@gmail.com wrote:
Please disregard my previous email - the command was launched from incorrect
I read http://wiki.apache.org/hadoop/Hbase/MultipleMasters
If someone uses Multiple Masters in production, please share your
experience.
Thanks
On Tue, Feb 23, 2010 at 10:31 PM, Stack st...@duboce.net wrote:
What version are you on? There is no hbase.master in hbase 0.20.x.
It's a vestige of
Hi,
We use Java wrapper from Tanuki Software Inc for our region server.
Here are wrapper parameters:
wrapper.startup.timeout=301
wrapper.ping.timeout=3000
wrapper.cpu.timeout=300
wrapper.shutdown.timeout=301
wrapper.jvm_exit.timeout=301
wrapper.restart.delay=301
This morning 2 hours after one of
Hi,
I am looking for counterpart to JobConf.setJobEndNotificationURI() in
org.apache.hadoop.mapreduce
Please advise.
Thanks
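One hedged possibility, based on the fact that the old-API setter just writes a configuration property (the property name below is taken from that implementation; I am assuming there is no dedicated setter in the new API):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class NotifyOnJobEnd {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // JobConf.setJobEndNotificationURI() sets this property; the framework
        // substitutes $jobId and $jobStatus when the notification is sent.
        conf.set("job.end.notification.url",
            "http://example.com/notify?jobid=$jobId&status=$jobStatus");
        Job job = new Job(conf, "example");
        // ... configure mapper, reducer, input and output as usual ...
      }
    }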
and started, and when it finished, it remained. The logs look normal; do you know what can lead to this? Thank you.
LvZheng
2010/2/20 Ted Yu yuzhih...@gmail.com
Did the number of child processes increase over time ?
On Friday, February 19, 2010, Zheng Lv lvzheng19800
You can notice that an absolute path to the jar is used in
http://hadoop.apache.org/common/docs/current/mapred_tutorial.html#Usage:
bin/hadoop jar /usr/joe/wordcount.jar
It's a CLASSPATH issue.
On Sat, Feb 20, 2010 at 12:29 AM, janani venkat janani.cs...@gmail.com wrote:
I tried executing the
In ASCII, '5' is ahead of 'a'.
So the rowkey is outside the region.
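A tiny illustration of that byte ordering, assuming the HBase Bytes utility:

    import org.apache.hadoop.hbase.util.Bytes;

    public class RowKeyOrder {
      public static void main(String[] args) {
        // '5' (0x35) sorts before 'a' (0x61), so a rowkey starting with a digit
        // falls before a region whose start key begins with a letter.
        System.out.println(
            Bytes.compareTo(Bytes.toBytes("5"), Bytes.toBytes("a")) < 0); // true
      }
    }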
On Wed, Feb 17, 2010 at 8:33 AM, Zhenyu Zhong zhongresea...@gmail.com wrote:
Hi,
I came across this problem recently.
I tried to query a table with rowkey '3ec1aa5a50307aed20a222af92a53ad1'.
The query hits on a region with
The number of map tasks is normally different from the number of reduce tasks.
Coding as you planned would limit the flexibility of hadoop.
On Saturday, February 13, 2010, ANKITBHATNAGAR abhatna...@vantage.com wrote:
Hi,
I was working on a scenario wherein I am generating a file in close()
Hi,
I see the following in our log:
org.apache.hadoop.fs.ChecksumException: Checksum error:
file:/domain_crawler/crawlworker/crawl/777_pv_10_4_20100128_122735_353.done.tsk/segments/20100131163704/parse_data/part-0/data
at 42990592
at
Hi,
If you have experience storing multi-byte contents in HBase, please share.
Thanks
Hi,
Suppose there are ongoing write operations to the HBase table I am exporting while the export runs; which snapshot does the export use ?
Is there special action I should take ?
Thanks
Hi,
I tried to access
http://www.slideshare.net/kevinweil/hadoop-pig-and-twitter-nosql-east-2009
but couldn't.
If anyone has a copy, can you share ?
Thanks
Hi,
We use hbase-0.20.1
I have seen this randomly fail with a ConcurrentModificationException:
HBaseAdmin admin = new HBaseAdmin(hbaseConf);
Has anybody else seen this behavior ?
Thanks
have multiple instances of HBaseAdmin floating around? If so, why?
Please paste full stack trace. That'd help.
Yours,
St.Ack
On Fri, Jan 15, 2010 at 10:22 AM, Ted Yu yuzhih...@gmail.com wrote:
Hi,
We use hbase-0.20.1
I have seen this randomly fail
, 2010 at 4:28 PM, Ted Yu yuzhih...@gmail.com wrote:
Package gcc-4.1.2-46.el5_4.1.x86_64 already installed and latest version
Nothing to do
[r...@tyu-linux batchclient]# yum install gcc-c++
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base
also - perhaps
this would be helpful.
-Todd
On Fri, Jan 15, 2010 at 1:14 PM, Ted Yu yuzhih...@gmail.com wrote:
Todd:
Thanks for the continued support.
I installed lzo-devel:
[r...@tyu-linux batchclient]# rpm -ivh
/opt/kindsight/lzo-devel-2.02-2.el5.1.x86_64.rpm
warning: /opt
-dssi --enable-plugin
--with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic
--host=x86_64-redhat-linux
Thread model: posix
gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)
Alex
On Fri, Jan 15, 2010 at 1:34 PM, Ted Yu yuzhih...@gmail.com wrote:
I tried to set
, Jan 15, 2010 at 2:16 PM, Ted Yu yuzhih...@gmail.com wrote:
[ria...@tyu-linux kevinweil-hadoop-lzo-916aeae]$ ./a.out
Hello World!
[ria...@tyu-linux kevinweil-hadoop-lzo-916aeae]$ gcc -v
Using built-in specs.
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir
Has anybody seen the above ?
Thanks
On Mon, Jan 11, 2010 at 3:34 PM, Todd Lipcon t...@cloudera.com wrote:
Hi Ted,
You need to install liblzo from EPEL:
http://fr.rpmfind.net/linux/RPM/Extras_Packages_for_Enterprise_Linux.html
-Todd
On Mon, Jan 11, 2010 at 3:21 PM, Ted Yu yuzhih
On Tue, Jan 12, 2010 at 10:15 AM, Ted Yu yuzhih...@gmail.com wrote:
I followed http://code.google.com/p/hadoop-gpl-compression/wiki/FAQ
Package gcc-c++-4.1.2-46.el5_4.1.x86_64 already installed and latest
version
Linux tyu-linux 2.6.18-128.2.1.el5 #1 SMP Tue Jul 14 06:36:37 EDT 2009
On Tue, Jan 12, 2010 at 10:57 AM, Ted Yu yuzhih...@gmail.com wrote:
I installed
ftp://fr.rpmfind.net/linux/EPEL/5/x86_64/lzo-2.02-2.el5.1.x86_64.rpm yesterday.
[r...@tyu-linux software]# rpm -e lzo
[r...@tyu-linux software]# rpm -ivh ~rialto/lzo-2.02-2.el5.1.x86_64.rpm
warning: /home
-compression/
http://wiki.apache.org/hadoop/UsingLzoCompression
On Fri, Jan 8, 2010 at 1:13 PM, Ted Yu yuzhih...@gmail.com wrote:
The input file is in .gz format
FYI
On Fri, Jan 8, 2010 at 11:08 AM, Ted Yu yuzhih...@gmail.com wrote:
My current project processes input file of size
The input file is in .gz format
FYI
On Fri, Jan 8, 2010 at 11:08 AM, Ted Yu yuzhih...@gmail.com wrote:
My current project processes input file of size 02161 bytes.
What I plan to do is split the file into equal-size pieces (on blank-line boundaries) to improve performance.
I found
According to:
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/TextInputFormat.html#isSplitable%28org.apache.hadoop.fs.FileSystem,%20org.apache.hadoop.fs.Path%29
isSplitable() is deprecated.
Which method should I use to replace it ?
Thanks
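One possible replacement, sketched under the assumption that you are moving to the new org.apache.hadoop.mapreduce API, where the hook is the protected isSplitable(JobContext, Path) method on FileInputFormat:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    // Subclass and override the hook instead of calling the deprecated
    // old-API isSplitable(FileSystem, Path).
    public class WholeFileTextInputFormat extends TextInputFormat {
      @Override
      protected boolean isSplitable(JobContext context, Path file) {
        return false; // one mapper per file
      }
    }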
Hi,
I saw the following from scan 'crawltable' command in hbase shell:
...
com.onsoft.www:http/ column=stt:, timestamp=1260405530801, value=\003
3 row(s) in 0.2490 seconds
How do I query the value for stt column ?
hbase(main):005:0> get 'crawltable', 'com.onsoft.www:http/', {column = 'stt:'}
i.e. '=>' rather than '='. Also, it's COLUMNS (uppercase I believe) rather than column.
Run 'help' in the shell for help and examples.
St.Ack
On Tue, Dec 15, 2009 at 11:53 AM, Ted Yu yuzhih...@gmail.com wrote:
Hi,
I saw the following from scan 'crawltable' command in hbase shell
Can someone fix the typo on http://wiki.apache.org/pig/PigSkewedJoinSpec in
the first bullet ?
tow-table inner join
Thanks
On Tue, Nov 17, 2009 at 1:54 PM, Pankil Doshi forpan...@gmail.com wrote:
With respect to imbalanced data, can anyone guide me on how sorting takes place in Hadoop after Map
I found this in fuse_dfs.c:
static struct fuse_operations dfs_oper = {
  .getattr = dfs_getattr,
  .access = dfs_access,
  .readdir = dfs_readdir,
  .destroy = dfs_destroy,
  .init = dfs_init,
I am wondering what syntax this is.
On Tue, Sep 15, 2009 at 9:51 PM, Shashank
.
On Wed, Sep 16, 2009 at 4:51 PM, Matt Massie m...@cloudera.com wrote:
This is C. This is a common way to set callbacks in a struct. Linux source code is full of syntax like this.
-Matt
On Wed, Sep 16, 2009 at 4:44 PM, Ted Yu yuzhih...@gmail.com wrote:
I found this in fuse_dfs.c:
static
I use the following commandline to build fuse:
ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1
My ant version is 1.7.1
I got the following error:
[exec] if gcc -DPACKAGE_NAME=\"fuse_dfs\" -DPACKAGE_TARNAME=\"fuse_dfs\" -DPACKAGE_VERSION=\"0.1.0\" -DPACKAGE_STRING=\"fuse_dfs 0.1.0\"
I found Chukwa to be an interesting project.
Can someone give a little detail on how freshly generated log files are
handled ?
I have downloaded the source code. So a few filenames would help me better
understand.
Thanks
at...
http://svn.apache.org/viewvc/hadoop/chukwa/
-Matt
On Mon, Sep 7, 2009 at 3:03 PM, Ted Yu yuzhih...@gmail.com wrote:
http://svn.apache.org/viewvc/hadoop/core/trunk/src/contrib/chukwa/ produced
an exception.
And Chukwa isn't in hadoop/src/contrib of 0.20.0
On Mon, Sep 7, 2009 at 2
Hi,
I am using Hadoop 0.20
How can I get past the exception below ?
[had...@vh20 hadoop]$ jmap -heap 3837
Attaching to process ID 3837, please wait...
sun.jvm.hotspot.debugger.NoSuchSymbolException: Could not find symbol
gHotSpotVMTypeEntryTypeNameOffset in any of the known library
We're using hadoop 0.20.0 to analyze large log files from web servers.
I am looking for better HDFS support so that I don't have to copy log files
from Linux File System over.
Please comment.
Thanks
AM, Ted Yu wrote:
We're using hadoop 0.20.0 to analyze large log files from web servers.
I am looking for better HDFS support so that I don't have to copy log files from Linux File System over.
Please comment.
Thanks
http://svn.apache.org/viewvc/hadoop/core/trunk/src/contrib/chukwa/ produced
an exception.
And Chukwa isn't in hadoop/src/contrib of 0.20.0
On Mon, Sep 7, 2009 at 2:44 PM, Ted Yu yuzhih...@gmail.com wrote:
I tried to compile fuse-dfs. libhdfs.so has been compiled.
Under hadoop/src/contrib/fuse
I see Repository moved temporarily to '/viewvc/hadoop/chukwa/'. Please
relocate
On Mon, Sep 7, 2009 at 4:21 PM, Matt Massie m...@cloudera.com wrote:
You can find chukwa at...
http://svn.apache.org/viewvc/hadoop/chukwa/
-Matt
On Mon, Sep 7, 2009 at 3:03 PM, Ted Yu yuzhih...@gmail.com