Thanks Ted, you're right
So the next question is: is it feasible/easy to configure HBase in pseudo-distributed
mode on a single computer (8 cores, 32 GB RAM)? Does it require
"installing" HDFS too, or can I rely on the local filesystem? I know that
performance results won't be relevant compared to producti
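For reference, here is a minimal hbase-site.xml sketch for pseudo-distributed mode on a single machine, assuming HBase is allowed to use the local filesystem rather than HDFS (the rootdir path is a placeholder):

```xml
<configuration>
  <!-- Run the master and region server as separate JVMs on one host -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- Local filesystem instead of HDFS; the path is a placeholder -->
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/user/hbase-data</value>
  </property>
</configuration>
```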
Hi,
I'm trying to proceed with performance tests (and to figure out what to tune
in my HBase configuration), and I found the "Enabling RPC-level logging"
section in the HBase documentation, which suggests activating RPC DEBUG
logging to better understand what's going on... It's worth saying that I use H
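For context, that section of the reference guide boils down to raising the log level of the IPC classes in log4j.properties; a hedged sketch (this is very verbose, so the guide suggests enabling it only temporarily and analyzing the logs offline):

```properties
# Enable DEBUG on the RPC/IPC layer (very verbose; disable after the test)
log4j.logger.org.apache.hadoop.ipc=DEBUG
```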
Hi,
Finally I figured it out: I had to download libhadoop.so and reference its
location with HBASE_LIBRARY_PATH.
Now it works fine!
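For anyone hitting the same problem, the fix can be sketched as an hbase-env.sh fragment (the directory below is a placeholder for wherever libhadoop.so was placed):

```shell
# Point HBase at the directory containing the native libhadoop.so
# (the path is a placeholder; adjust to your install)
export HBASE_LIBRARY_PATH=/opt/hadoop-native
```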
--
Sent from: http://apache-hbase.679495.n3.nabble.com/HBase-User-f4020416.html
Hi Ted,
Thanks for your help,
I deployed HBase 1.2.5, and in the lib folder I can see a bunch of Hadoop
jars, all of them from the 2.5.1 release:
hadoop-annotations-2.5.1.jar
hadoop-auth-2.5.1.jar
hadoop-client-2.5.1.jar
hadoop-common-2.5.1.jar
hadoop-hdfs-2.5.1.jar
hadoop-mapreduce-client-app-2.
Hi,
I'm experimenting with HBase on a brand new Linux VM (Ubuntu), as a
standalone installation (it's worth saying that I don't have any Hadoop
distribution on my VM). I would like to test compression options, but couldn't
figure out how to make it work:
I manually installed the Snappy packages (apt-g
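In case it helps others debugging the same thing: HBase ships a CompressionTest utility that checks whether a given codec is usable before you create tables with it; a sketch of how it is typically invoked (the file path is just a scratch location):

```shell
# Round-trips a test file through the Snappy codec
# (the file:// path is a placeholder)
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/compression-test snappy
```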
You were right, moving to 2.6.1 solved my problem, thanks a lot for your help!
Hi Biju,
In fact, the hbase-site.xml and core-site.xml files already contain the
properties you mention, with correct values, and the two additional properties
that I set in my code come from the AuthUtil javadoc, in order to make the
AuthChore work...
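For completeness, the two client-side properties the AuthUtil javadoc relies on look like this in hbase-site.xml (the keytab path and principal below are placeholders for your own realm):

```xml
<!-- Placeholders: adjust the keytab location and principal to your setup -->
<property>
  <name>hbase.client.keytab.file</name>
  <value>/etc/security/keytabs/myapp.keytab</value>
</property>
<property>
  <name>hbase.client.kerberos.principal</name>
  <value>myapp/_HOST@EXAMPLE.COM</value>
</property>
```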
Thanks for your help !
Thanks Robert, that seems a serious point to consider. I will give it a try with
hadoop-common 2.6.1 to check if it works better!
Hi Sean,
Unfortunately, I couldn't solve my issue...
Below is the code of my utility class in charge of logging in and creating
an HBase connection. I added the AuthUtil stuff as suggested in your answer,
but probably missed something :(
My web service basically invokes the GetHBaseConnection() metho
Hi Sean,
Thanks a lot for these invaluable pointers; I'm just wondering how I could
have missed this in the documentation!
Anyway, I will give this solution a try.
Thanks again
Hi,
I'm trying to set up web services that interact with my kerberized
Hadoop/HBase cluster.
My application is deployed in a Tomcat server, and I would like to avoid
recreating a new HBase connection each and every time I have to access
HBase.
Similarly, I want my application to be self-sufficient
OK, so this is not supported for now...
Additional questions related to region size:
If I set "versions" to a high value (several million, for instance), the size
of a row may exceed the region size... What will happen in such a case?
And if I consider another design, like saving versions in addit
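For background, the region size I'm referring to is governed by hbase.hregion.max.filesize; a sketch of how it can be raised in hbase-site.xml (the 20 GB value is only an example):

```xml
<!-- Example only: raise the region split threshold to ~20 GB -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>21474836480</value>
</property>
```

As far as I understand, HBase never splits a region in the middle of a row, so a single row larger than this threshold simply leaves you with one oversized, unsplittable region.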
Hi Ted,
Thanks for your help... A formatting issue, I guess: here is what I asked about
the hbase shell:
Is it possible to invoke something like:
create 'table', {NAME => 'cf', VERSIONS => -1 }
create 'table', {NAME => 'cf', VERSIONS => MAX_INT }
in order to support an unlimited number of versions for
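Since VERSIONS is stored as a Java int, the closest thing to "unlimited" I'm aware of is passing Integer.MAX_VALUE explicitly; in the hbase shell that would look like:

```ruby
# 2147483647 == java.lang.Integer::MAX_VALUE; effectively "unlimited" versions
create 'table', {NAME => 'cf', VERSIONS => 2147483647}
```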
Hi,
I just spent a couple of hours reading the documentation about versioning
management.
I understood that there is no theoretical limit to the number of versions
that HBase can store (even if I perfectly understood from this forum that
it is not a good design to keep thousands of versions for a
Hi Ted, thanks for your help !
It seems I was not clear in my explanation, let me try again:
In my input file, let's say I have 2000 parameters and, for each parameter,
5000 values recorded along a given timeframe.
When I read the file, I read it part by part, basically by using a time
sliding win
Hi,
I would like to know if there is a way to monitor HBase cluster activity, in
order to check that all region servers work evenly when I try to write bulk
data from my Java client application.
Is there a simple way to see that all region servers receive requests and
process data "evenly"? Is th
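A couple of built-in options I know of for eyeballing per-server load: the master web UI (port 16010 on HBase 1.x) shows per-region-server request counts, and the shell can print the same figures:

```shell
# Per-server request counts and region details from the HBase shell
echo "status 'detailed'" | hbase shell
```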
Hi, thanks for your answer.
About your question related to thread management: yes, I have several
threads (up to 4) that may call my persistence method.
When I wrote the post, I had not configured anything special about regions
for my table, so it basically used the default splitting policy, I guess.
Hi,
I am new to HBase and I'm facing performance issues...
Short story: I want to persist 1000 values in HBase, and it takes the same
time on a basic sandbox (an HDP Hadoop sandbox with a single region server
node) as it takes on our "production" cluster (which comprises 12 region
servers with higher c