, and will likely revisit it
before long.
Thanks again
Thomas Downing
this exists somewhere? Or is there
some other way to skin this cat?
Thanks
Thomas Downing
parallelize that for you, one that takes your table as input and
outputs Delete objects.
J-D
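J-D's suggestion above (a MapReduce job that reads the table and outputs Delete objects) could be sketched roughly as below. This is only an illustration under assumptions, not code from the thread: the class names (`BulkDelete`, `DeleteMapper`) and the table name `"mytable"` are made up, and the API shown is the 0.20-era `org.apache.hadoop.hbase.mapreduce` package, whose signatures differ in later HBase versions.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;

// Hypothetical sketch: scan a table in parallel and emit one Delete per row.
public class BulkDelete {

  // Mapper: for every row the scan hands us, output a Delete keyed on that row.
  static class DeleteMapper extends TableMapper<ImmutableBytesWritable, Delete> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(row, new Delete(row.get()));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new HBaseConfiguration(); // HBaseConfiguration.create() in later versions
    Job job = new Job(conf, "bulk-delete");
    job.setJarByClass(BulkDelete.class);

    Scan scan = new Scan();      // optionally restrict rows/families here
    scan.setCaching(500);        // fewer RPCs per mapper
    scan.setCacheBlocks(false);  // don't pollute the block cache with a full scan

    // Read "mytable" with DeleteMapper; write the emitted Deletes back to it.
    TableMapReduceUtil.initTableMapperJob("mytable", scan,
        DeleteMapper.class, ImmutableBytesWritable.class, Delete.class, job);
    TableMapReduceUtil.initTableReducerJob("mytable", null, job);
    job.setNumReduceTasks(0);    // map-only: Deletes go straight to HBase

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Keeping the job map-only means each mapper's Deletes are applied directly, so the work parallelizes across regions without a shuffle.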
On Fri, Aug 6, 2010 at 5:50 AM, Thomas Downing
tdown...@proteus-technologies.com wrote:
Hi,
Continuing to test HBase's suitability in a high-ingest-rate
environment, I've come up
responded to my posts.
thomas downing
On 7/20/2010 1:06 PM, Stack wrote:
Hey Thomas:
You are using Hadoop 0.20.2 or something? And HBase 0.20.5 or so?
You might try
http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/.
In particular, it has HDFS-1118, a fix for a socket leak in
HDFS.
The FIN_WAIT2/TIME_WAIT happens more on large concurrent gets, not so
much for inserts.
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>0</value>
</property>
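For context, the property Ryan quotes is the DataNode's socket write timeout, and a value of 0 disables the timeout entirely. A minimal sketch of where it would sit, assuming it goes into hdfs-site.xml on each DataNode (the thread does not state the file placement):

```xml
<!-- hdfs-site.xml on each DataNode: disable the socket write timeout
     (0 = never time out), the workaround discussed in this thread. -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>0</value>
</property>
```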
-ryan
On Fri, Jul 16, 2010 at 9:33 AM, Thomas Downing
tdown...@proteus-technologies.com wrote:
Thanks for the response.
My
)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:95)
    at java.lang.Thread.run(Thread.java:619)
If there is any other info that might help, or any steps you would like
me to take, just let me know.
Thanks
Thomas Downing
risks or issues you may encounter
specifically with HBase while adjusting these settings.
Hope This Helps,
Travis Hegner
-----Original Message-----
From: Thomas Downing [mailto:tdown...@proteus-technologies.com]
Sent: Friday, July 16, 2010 10:33 AM
To: user@hbase.apache.org
Subject: High