http://grepcode.com/file/repo1.maven.org/maven2/org.apache.solr/solr-core/4.7.1/org/apache/solr/update/processor/StatelessScriptUpdateProcessorFactory.java#StatelessScriptUpdateProcessorFactory.ScriptUpdateProcessor.invokeFunction%28java.lang.String%2Cjava.lang.Object%5B%5D%29
it looks better chec
In my traces, I did not see any spans from the client to either the master or
the regionserver. I have tried both deployments, pseudo-distributed on one
machine, and fully-distributed on three machines (one client, one as HMaster
and ZK, and one as regionserver). It only shows the following four spans in
my trace
The SendThread stack trace does not look correct. Do you have the client log?
(in case the ZK client code logs something there)
From the ZK code, it looks like ClientCnxn$SendThread.run should have caught
it (the Throwable) and done the cleanup work, e.g. notified the main thread, so
that it can wake up from ClientCnxn.submit
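The cleanup pattern described above (catch any Throwable in the send loop, then notify the waiting thread so it does not block forever in submit) can be sketched in plain Java. This is an illustrative stand-in, not the actual ZooKeeper code; all class and method names here are hypothetical:

```java
import java.util.concurrent.CountDownLatch;

// Illustrative sketch of the cleanup pattern: the send thread catches any
// Throwable, records it, and always wakes up the thread blocked in
// awaitCompletion() (which stands in for ClientCnxn.submit).
public class SendThreadSketch {
    private final CountDownLatch done = new CountDownLatch(1);
    private volatile Throwable failure;

    public void runSendLoop(Runnable work) {
        try {
            work.run();           // stands in for the network I/O loop
        } catch (Throwable t) {
            failure = t;          // record the cause for the caller
        } finally {
            done.countDown();     // cleanup: always wake up the waiter
        }
    }

    // Blocks until the send loop exits; returns the failure, if any.
    public Throwable awaitCompletion() throws InterruptedException {
        done.await();
        return failure;
    }

    public static void main(String[] args) throws InterruptedException {
        SendThreadSketch s = new SendThreadSketch();
        Thread t = new Thread(() -> s.runSendLoop(() -> {
            throw new RuntimeException("connection lost");
        }));
        t.start();
        // Without the finally block, this call would hang forever.
        Throwable cause = s.awaitCompletion();
        System.out.println("send loop ended with: " + cause.getMessage());
    }
}
```

The key point is the `finally` block: if the notification only happens on the normal exit path, an exception thrown "at the wrong moment" leaves the submitting thread stuck.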
No Ted, I did not see hbase-default.xml after running the command.
I'm building with Maven using this command (mvn clean install); I guess
everyone does it this way.
Anyway I'm attaching the jar and groovy script as well. My class is
com.search.ReadHbase.java.
-Vivek
On Wed, Aug 13, 2014 at 8:00 P
Hi Lars-
We are running ZK 3.3.4, Cloudera cdh3u3, HBase 0.94.16.
Thanks,
Ted
> On Aug 13, 2014, at 5:36 PM, "lars hofhansl" wrote:
>
> Hey Ted,
>
> so this is a problem with the ZK client, it seems to not clean itself up
> correctly upon receiving an exception at the wrong moment.
> Which version of ZK are you using?
Sorry, the real region server config is this:
hfile.block.cache.size = 0.25 (source: hbase-site.xml)
leiwang...@gmail.com
From: Esteban Gutierrez
Date: 2014-08-14 01:05
To: user@hbase.apache.org
Subject: Re: Re: Any fast way to random access hbase data?
Hi Lei,
Any chance for you to provide the value for hfile.block.cache.size from one
of the region servers?
@Ravi Do you mean using a key + timestamp as rowkey in HBase shell?
If so, you can `import java.text.SimpleDateFormat` to get the timestamp.
More detail at http://hbase.apache.org/book/shell_tricks.html.
On Wed, Aug 13, 2014 at 11:50 PM, Ted Yu wrote:
> rowkey gets involved when you insert / delete data.
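A minimal plain-Java sketch of the key + timestamp rowkey idea from the reply above, using `java.text.SimpleDateFormat` as mentioned. The underscore separator and the `yyyyMMddHHmmss` format are illustrative assumptions, not HBase requirements:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical sketch: compose a rowkey as <key>_<timestamp> so rows for
// the same key sort together in time order.
public class RowkeyTimestamp {
    static String makeRowkey(String key, Date when) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMddHHmmss");
        return key + "_" + fmt.format(when);
    }

    public static void main(String[] args) {
        // e.g. "user42_20140813110000" (depends on current time and timezone)
        System.out.println(makeRowkey("user42", new Date()));
    }
}
```

In the shell the same composite string can be passed directly as the rowkey of a `put`; HBase itself sees only the resulting bytes.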
Hey Ted,
so this is a problem with the ZK client, it seems to not clean itself up
correctly upon receiving an exception at the wrong moment.
Which version of ZK are you using?
-- Lars
- Original Message -
From: Ted Tuttle
To: "user@hbase.apache.org"
Cc: Development
Sent: Wednesday
Hi,
I am working on https://issues.apache.org/jira/browse/STORM-444. The task is
very similar to https://issues.apache.org/jira/browse/OOZIE-961. Basically, in
Storm secure mode we would like to fetch the topology/job submitter user's
credentials on their behalf on our master node and auto-populat
Hello-
We are running HBase v0.94.16 on an 8 node cluster.
We have a recurring problem w/ HBase clients hanging. In the latest occurrence, I
observed the following sequence of events:
0) client plays w/ HBase for a long time w/o issue
1) client runs out of memory during HBase operation:
> hfile.block.cache.size = 0.0
Yikes. Don't do that. :)
Even if your blocks are in the OS cache, upon each single Get HBase needs to
re-allocate a new 64k block on the heap (including the index blocks).
If you see no chance that a working set of the data fits into the aggregate
block cache
The latest stable version of HBase is 0.98.5.
The upgrade procedure for 0.94 -> 0.96 can be applied in the exact same
manner to 0.94 -> 0.98. There is no need to upgrade through 0.96 as an
intermediate step.
We discussed this recently and I expect we are going to stop supporting (as
a community
Apache HBase 0.98.5 is now available for download. Get it from an Apache
mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes [2]
or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev Team
Amit:
See http://www.us.apache.org/dist/hbase/stable/
0.98.5 was released this week.
On Wed, Aug 13, 2014 at 10:48 AM, Amit Sela wrote:
> Hi all,
>
> We're running with Hadoop 1.0.4 and HBase 0.94.12 and thinking of upgrading
> to Hadoop 2 but I'm not sure which is the latest HBase stable version
Hi all,
We're running with Hadoop 1.0.4 and HBase 0.94.12 and thinking of upgrading
to Hadoop 2, but I'm not sure which is the latest stable HBase version, 0.96
or 0.98?
Would you recommend upgrading straight to 0.98?
Thanks,
Amit.
Another resource is the Javadoc for the REST server package:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/rest/package-summary.html
On Wed, Aug 13, 2014 at 10:07 AM, Esteban Gutierrez
wrote:
> Hello Sean,
>
> Have you looked into the HBase wiki page for the REST server?
> http://wiki.apache.org/hadoop/Hbase/Stargate
Hello Sean,
Have you looked into the HBase wiki page for the REST server?
http://wiki.apache.org/hadoop/Hbase/Stargate
cheers,
esteban.
--
Cloudera, Inc.
On Wed, Aug 13, 2014 at 5:57 AM, Sean Kennedy wrote:
> Is there anyone who can provide guidance on creating a RESTful interface
> to connect a client app to an hbase datastore?
Hi Lei,
Any chance for you to provide the value for hfile.block.cache.size from one
of the region servers? The HBase master disables the block cache (that's why
it shows 'programatically' as the source of the config)
cheers,
esteban.
--
Cloudera, Inc.
On Wed, Aug 13, 2014 at 6:41 AM, Jean-Mar
Have you looked at the performance guidelines in our online book?
http://hbase.apache.org/book.html#performance
http://hbase.apache.org/book.html#casestudies.perftroub
On Wed, Aug 13, 2014 at 8:43 AM, Pradeep Gollakota
wrote:
> Can you post the client code you're using to read/write from HBase?
rowkey gets involved when you insert / delete data.
At time of table creation, you specify column family settings.
Cheers
On Wed, Aug 13, 2014 at 6:48 AM, Ravi Kanth
wrote:
> Team,
>
> I want to create a table with rowkey + timestamp in hbase shell. Is it
> possible?
>
> Regards,
> Ravi
>
Can you post the client code you're using to read/write from HBase?
On Wed, Aug 13, 2014 at 11:21 AM, kacperolszewski
wrote:
> Hello there, I'm running a read/write benchmark on huge data (Twitter
> posts) for my school project.
> The problem I'm dealing with is that the tests are going extremely slow.
Hello there, I'm running a read/write benchmark on huge data (Twitter posts)
for my school project.
The problem I'm dealing with is that the tests are going extremely slow.
I don't know how to optimize the process. HBase is using only about 10% of RAM,
and 40% of CPU.
I've been experimenting
Team,
I want to create a table with rowkey + timestamp in hbase shell. Is it
possible?
Regards,
Ravi
bq. im building it using maven
Maven may have included hbase-default.xml in your jar.
Can you pastebin the output of the following command ?
jar tvf | grep hbase
On Wed, Aug 13, 2014 at 7:21 AM, Vivekanand Ittigi
wrote:
> Im not seeing any hbase-default.xml since that jar is built using Mave
I'm not seeing any hbase-default.xml since that jar is built using Maven.
If I had exported the same package as a runnable jar using the Eclipse IDE, I'd
have seen the hbase-default.xml file on opening the .jar, but instead of
exporting I'm building it with Maven and placing the jar in the Solr lib.
Note: when I open
bq. .jar
Can you check the contents of the above jar to see if it contains
hbase-default.xml ?
Cheers
On Wed, Aug 13, 2014 at 5:49 AM, Vivekanand Ittigi
wrote:
> Hi Ted,
>
> echo $CLASSPATH
> /home/biginfolabs/BILSftwrs/hbase-0.94.10/conf
>
> under "/home/biginfolabs/BILSftwrs/hbase-0.94.10/c
Like what Esteban said.
Try to use more threads to query HBase. Start with 10 clients, each with 1K
gets per batch, and adjust those numbers to see the impact on
performance.
Any reason why your block cache is disabled? (hfile.block.cache.size = 0)
JM
2014-08-13 5:23 GMT-04:00 leiwang...@
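JM's suggestion above (10 clients, 1K gets per batch) can be sketched in plain Java. `fetchBatch` below is a hypothetical stand-in for a real batched HBase call such as `table.get(List<Get>)`; the batching and thread-pool logic is the point of the sketch:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

// Sketch: split row ids into fixed-size batches and fetch the batches from
// a pool of client threads, collecting the results in order.
public class ParallelBatchedGets {
    static <T> List<T> fetchAll(List<String> rowIds, int batchSize, int threads,
                                Function<List<String>, List<T>> fetchBatch)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<List<T>>> futures = new ArrayList<>();
        for (int i = 0; i < rowIds.size(); i += batchSize) {
            List<String> batch =
                rowIds.subList(i, Math.min(i + batchSize, rowIds.size()));
            futures.add(pool.submit(() -> fetchBatch.apply(batch)));
        }
        List<T> results = new ArrayList<>();
        for (Future<List<T>> f : futures) results.addAll(f.get());
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < 2500; i++) ids.add("row" + i);
        // Stub fetch: echoes ids back; a real client would issue batched Gets.
        List<String> rows = fetchAll(ids, 1000, 10, batch -> new ArrayList<>(batch));
        System.out.println(rows.size()); // 2500
    }
}
```

Tuning `batchSize` and `threads` against the cluster, as suggested, is usually an empirical exercise: too few threads underutilizes the region servers, too many just queues RPCs.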
Is there anyone who can provide guidance on creating a RESTful interface to
connect a client app to an hbase datastore?
Sorry to cast a wide net...
Sincerely,
Sean
Hi Ted,
echo $CLASSPATH
/home/biginfolabs/BILSftwrs/hbase-0.94.10/conf
under "/home/biginfolabs/BILSftwrs/hbase-0.94.10/conf", I've hbase-site.xml.
Actually, I've made one more folder called "custom-lib" under
solr-4.2.0/example/lib, and this path is pointed to in solrconfig.xml using the
following c
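The sentence above is cut off, but for reference, Solr loads extra jars via a `<lib>` directive in solrconfig.xml. The `dir` path below is an assumption based on the folders mentioned, not the poster's actual config:

```xml
<!-- solrconfig.xml: illustrative <lib> directive; the dir path is an
     assumption based on the custom-lib folder described above.
     Paths are resolved relative to the core's instanceDir. -->
<lib dir="../../example/lib/custom-lib" regex=".*\.jar" />
```

Each matching jar is added to the core's classloader when the core starts, which is how the HBase client jar would become visible to the update handler.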
Can you show us the contents of solr lib and the classpath ?
Thanks
On Aug 13, 2014, at 4:47 AM, Vivekanand Ittigi wrote:
> I'm trying to read specific HBase data and index into solr using groovy
> script in "/update" handler of solrconfig file but I'm getting the error
> mentioned below
>
> I
I'm trying to read specific HBase data and index into solr using groovy
script in "/update" handler of solrconfig file but I'm getting the error
mentioned below
I'm placing the same HBase jar that I'm running on in the Solr lib. Many
articles said
Workaround:
1. First I thought that the classpath has tw
Sorry, I found the reason. I forgot to restart the RegionServer...
> ----- Original Message -----
> From: "LEI Xiaofeng"
> Sent: Wednesday, August 13, 2014
> To: user@hbase.apache.org
> Cc:
> Subject: how to develop a custom splitpolicy for hbase table
>
> Hi,
> I want to develop a custom SplitPolicy for my hbase table. Bu
Hi,
I want to develop a custom SplitPolicy for my HBase table. But when I use my
policy to create a new table, I get this exception: "Unable to load configured
region split policy...".
I put the MyPolicy.jar in the lib directory of HBase and use the following code
to assign it to the table.
HTableD
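The code sample above is cut off at the HTableDescriptor line. As a related, hedged note: besides per-table assignment, a default split policy class can be named cluster-wide in hbase-site.xml via `hbase.regionserver.region.split.policy`; the class name below is a placeholder. Either way, the class must be loadable on the region servers (e.g. the jar in HBase's lib directory on every node), which is the usual cause of the "Unable to load configured region split policy" error:

```xml
<!-- hbase-site.xml: cluster-wide default split policy.
     com.example.MyPolicy is a placeholder for your policy class. -->
<property>
  <name>hbase.regionserver.region.split.policy</name>
  <value>com.example.MyPolicy</value>
</property>
```

Region servers need a restart to pick this up, and the jar must be deployed to every region server, not just the machine the client runs on.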
Haven't tried yet.
Only one thread.
10 region servers, 2555 regions total.
I am just new to HBase and not sure what exactly the block cache means; here's
the configuration I can see from the CDH HBase master UI:
hbase.rs.cacheblocksonwrite = false (source: hbase-default.xml)
hbase.offheapcache.percentage = 0 (source: hb
Hello Lei,
Have you tried a larger batch size? How many threads or tasks are you using
to fetch data? Could you please describe your HBase cluster a little bit more?
e.g. how many region servers, how many regions per RS? What's the
hit ratio of the block cache? Any chance for you to share the table
I have an HBase table with more than 2G rows.
Every hour 5M~10M row ids come in, and I must get all the row info from the
HBase table.
But even when I use the batch call (1000 row ids as a list) as described here:
http://stackoverflow.com/questions/13310434/hbase-api-get-data-rows-information-by-l