I am using the Java client API to write 10,000 rows with about 6000 columns
each, via 8 threads making multiple calls to the HTable.put(List&lt;Put&gt;) method.
I start with an empty table with one column family and no regions pre-created.
With compression turned off, I am seeing very stable
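For readers unfamiliar with the batched write pattern being described, here is a minimal standalone sketch of the batching shape, in plain Java with no HBase dependency. The `partition` helper is illustrative only (it is not an HBase API); in the real client each sub-list would become the `List<Put>` handed to `HTable.put(List<Put>)` by one of the writer threads.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPartition {
    // Split a list of rows into fixed-size batches -- the shape of the
    // List<Put> batches that each writer thread would pass to
    // HTable.put(List<Put>). (Hypothetical helper, not an HBase API.)
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                items.subList(i, Math.min(i + batchSize, items.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) rows.add(i);
        List<List<Integer>> batches = partition(rows, 4);
        System.out.println(batches.size());   // 3
        System.out.println(batches.get(2));   // [8, 9]
    }
}
```

Note that with an empty table and no pre-created regions, all of those batches initially land on a single region server, which is worth keeping in mind when interpreting throughput numbers.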
we are running at 128000 ulimit -n. I am pretty sure the culprit is the
thrift server; it opens up 20k threads under load and crashes all the other
servers by taking away their RAM.
Do you guys disable TCP SYN cookies also? In regards to iptables, what is
the best way to disable it?
-Jack
Has anyone tried the zig-zag merge join algorithm that Google uses to do
something similar with their AppEngine data store (BigTable)? It's described
here starting on slide 29:
http://www.scribd.com/doc/16952419/Building-scalable-complex-apps-on-App-Engine
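For reference, the core of the zig-zag merge join from those slides can be sketched in a few lines of plain Java. This is an illustrative standalone version over sorted integer arrays, not HBase code: it keeps one cursor per sorted index, and whenever the cursors disagree it seeks the lagging cursor forward to the current maximum (here via binary search) instead of stepping one entry at a time.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ZigZagJoin {
    // Intersect several sorted key lists zig-zag style: if all cursors sit
    // on the same key, emit a match; otherwise seek each lagging cursor
    // forward to the current maximum rather than scanning entry by entry.
    static int[] intersect(int[][] lists) {
        int n = lists.length;
        int[] pos = new int[n];
        List<Integer> out = new ArrayList<>();
        outer:
        while (true) {
            int max = Integer.MIN_VALUE;
            for (int i = 0; i < n; i++) {
                if (pos[i] >= lists[i].length) break outer;  // one list exhausted
                max = Math.max(max, lists[i][pos[i]]);
            }
            boolean allEqual = true;
            for (int i = 0; i < n; i++) {
                if (lists[i][pos[i]] < max) {
                    // Seek: jump to the first element >= max.
                    int p = Arrays.binarySearch(lists[i], pos[i], lists[i].length, max);
                    pos[i] = (p >= 0) ? p : -p - 1;
                    allEqual = false;
                }
            }
            if (allEqual) {
                out.add(max);
                for (int i = 0; i < n; i++) pos[i]++;
            }
        }
        int[] res = new int[out.size()];
        for (int i = 0; i < res.length; i++) res[i] = out.get(i);
        return res;
    }

    public static void main(String[] args) {
        int[][] lists = {
            {1, 3, 5, 7, 9, 11},
            {2, 3, 7, 8, 11},
            {3, 4, 7, 11, 20}
        };
        System.out.println(Arrays.toString(intersect(lists)));  // [3, 7, 11]
    }
}
```

In the BigTable/AppEngine setting each sorted array would instead be a scan over an index table, with the seek implemented as reopening the scanner at the target key.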
On Sun, Mar 13, 2011 at 1:33 PM, Jack Levin magn...@gmail.com wrote:
we are running at 128000 ulimit -n. I am pretty sure the culprit is the
thrift server; it opens up 20k threads under load and crashes all the other
servers by taking away their RAM.
Do you guys disable tcp cookies also? In regards
Well, since you can start iterating from any point, you can just do a
map-reduce over the larger table. In each mapper, on the first call,
initialize a scanner into the smaller table to start with the key that you
get from the larger table. Each time you get a sequential key from the
master
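The approach above amounts to a forward-only sorted merge. Here is a standalone sketch of that shape in plain Java, with two sorted string arrays standing in for the two tables (the names `larger`/`smaller` are illustrative, not from the original post): the mapper walks the larger "table" in key order and keeps one scanner position into the smaller one, advancing it forward rather than reopening a scan for every key.

```java
import java.util.ArrayList;
import java.util.List;

public class MapSideMerge {
    // Forward-only merge of two sorted key sets: one pass over the larger
    // side, one never-rewinding "scanner" cursor into the smaller side.
    static List<String> join(String[] larger, String[] smaller) {
        List<String> matches = new ArrayList<>();
        int scan = 0;  // scanner position into the smaller table
        for (String key : larger) {
            // Seek the scanner forward until it reaches or passes this key.
            while (scan < smaller.length && smaller[scan].compareTo(key) < 0) {
                scan++;
            }
            if (scan < smaller.length && smaller[scan].equals(key)) {
                matches.add(key);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        String[] big   = {"a", "b", "c", "d", "f", "g"};
        String[] small = {"b", "d", "e", "g"};
        System.out.println(join(big, small));  // [b, d, g]
    }
}
```

Because both sides are consumed in key order, each mapper does a single sequential pass over its slice of the larger table plus one bounded scan of the smaller one.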
Hi,
We are experiencing the same issue. We have experimented with the memory
settings as well but still hit the same problem. We are inserting over
1,000,000 records. We find that it freezes as below, and also, after
running for some time, the entire connectivity dies.
Would be interested in any progress.
Thanks Michael,
Hmm, but I guess the secondary index feature was earlier part of the HBase
release, exposed via Java APIs, in the 0.20.x releases.
There was a bug filed to support that as well, which is fixed: HBASE-883.
Authors/Users/Michael,
Whereas for 0.90.0 onwards there is no direct support as