Hello Alok,
You are always welcome :). Everybody starts new at some point. Go
ahead. Lots of good people here to help you out.
Regards,
Mohammad Tariq
On Tue, Nov 27, 2012 at 10:16 AM, Alok Singh Mahor alokma...@gmail.com wrote:
thanks a lot Mohammad for this very complete and so
Hi Shyam,
Are you sure your table is created? If you do a list in the shell,
can you see it? Can you see it in the HTML GUI?
JM
2012/11/27, shyam kumar lakshyam.sh...@gmail.com:
There is no exception or warnings in the log and the console prints the
following
12/11/27 11:03:42 INFO
Thanks guys,
Excuse my ignorance, but having sort of agreed that the configuration that
determines which server should be contacted for what is on the HBase
server, I am not sure how any of the practical suggestions made should
solve the issue, and enable connecting from a remote client.
Let me
Thanks again, seems helpful for (Ubuntu) quick starting.
On Mon, Nov 26, 2012 at 7:44 PM, stack [via Apache HBase] wrote:
On Sun, Nov 25, 2012 at 8:28 AM, matan wrote:
Nothing.
Hi there-
re: From what I have understood, these properties are not for HBase but
for the HBase client which we write. They tell the client where to look for
ZK.
Yep. That's how it works. Then the client looks up ROOT/META and then
the client talks directly to the RegionServers.
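A minimal sketch of what that client-side configuration typically looks like, assuming a three-node quorum; all host names below are hypothetical:

```xml
<!-- hbase-site.xml on the CLIENT classpath: only the ZK quorum is needed;
     the client discovers ROOT/META and the RegionServers from ZK -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```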
Thank you both for the comments :)
Regards,
Mohammad Tariq
On Tue, Nov 27, 2012 at 8:56 PM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
You are right Mohammad,
Regards
Ram
On Tue, Nov 27, 2012 at 8:53 PM, Doug Meil doug.m...@explorysmedical.com
wrote:
Hi
Hi,
Yes, I am able to see the table and the table description in the hbase shell
(list 'table_name' and describe 'table_name'),
but I am unable to perform scan 'table_name', as I said earlier.
Hi,
I currently use the following steps to start and stop a Hadoop/HBase cluster:
1) Without Kerberos security
(start zookeepers)
start the cluster from the Master:
${HADOOP_HOME}/bin/start-dfs.sh  # one command
start all servers
Hi Mohammad,
I'm losing track... I came to understand that ZK tells the client where
ROOT/META is, and from there the client gets the region server it
should contact. And yet I take it that you are saying that the
configuration for the location of the ROOT/META or region server should be
Can you paste the master logs and RS logs? I am sure there should have
been some errors in them; that is why it is not able to locate the META
Regards
Ram
On Tue, Nov 27, 2012 at 7:51 PM, shyam kumar lakshyam.sh...@gmail.com wrote:
Hi,
Yes, I am able to see the table and table description in
AC,
The scripts start-dfs.sh and start-mapred.sh are just wrappers for hadoop-daemon.sh
commands.
All the security settings are in the configuration files, so the same start
procedure should work for both secure and unsecured modes.
Just make sure you have correct configuration files.
Thank you!
Matan,
in short, your client should be able to resolve the names of the HBase Master,
all RegionServers, and all ZK nodes.
DNS or a local /etc/hosts file, it does not matter, but the names should resolve
correctly on the client machine.
Then it will be able to connect to ZK, get the HBase Master, and find ROOT/META
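For the /etc/hosts route, a sketch of what the client machine would need; every IP and host name below is made up:

```
# /etc/hosts on the client machine: every name the cluster advertises
# must resolve here (or in DNS) to an address the client can reach
192.168.1.10  hbmaster.example.com  hbmaster
192.168.1.11  rs1.example.com       rs1
192.168.1.12  rs2.example.com       rs2
192.168.1.20  zk1.example.com       zk1
```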
Ian Varley's excellent HBaseCon presentation is another great resource.
http://ianvarley.com/coding/HBaseSchema_HBaseCon2012.pdf
On Mon, Nov 26, 2012 at 5:43 AM, Doug Meil
doug.m...@explorysmedical.com wrote:
Hi there, somebody already wisely mentioned the link to the # of CF's
entry, but here
Lars,
thanks for the great post. However I am using HBase 0.90.6 :(
What is the best approach in my case?
My data is not very big: 100GB divided into 4 tables. I don't need daily backups,
weekly maybe. But I need to be able to fully restore the state (all data in a
consistent state) if my migration
Hi Michael,
so are you recommending 32GB per node?
What about the disks? Are SATA drives too slow?
JM
2012/11/26, Michael Segel michael_se...@hotmail.com:
Uhm, those specs are actually now out of date.
If you're running HBase, or want to also run R on top of Hadoop, you will
need to add
OK... I don't know why Cloudera is so hung up on 32GB. ;-) [It's an inside joke
...]
So here's the problem...
By default, the child processes in a map/reduce job get 512MB. The
majority of the time, this gets raised to 1GB.
8 cores (dual quad cores) show up as 16 virtual
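To make the arithmetic behind that concrete, a rough sizing sketch; the daemon-heap and OS-headroom numbers are assumptions, not from the thread:

```python
# Why a 16-slot node eats memory fast (illustrative numbers only).
task_slots = 16        # 8 physical cores presented as 16 virtual
child_heap_gb = 1      # default 512 MB per child, usually raised to 1 GB
mr_children_gb = task_slots * child_heap_gb  # 16 GB just for M/R children

daemon_heaps_gb = 8    # DataNode + TaskTracker + RegionServer heaps (assumed)
os_headroom_gb = 4     # OS, page cache, etc. (assumed)

total_gb = mr_children_gb + daemon_heaps_gb + os_headroom_gb
print(total_gb)  # 28 -- which is why ~32 GB per node is a common spec
```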
Please don't send the same question to three different mailing lists.
See below for answers.
On Tue, Nov 27, 2012 at 6:59 PM, 张莉苹 zlpmiche...@gmail.com wrote:
*Do you know what's the release time of apache hbase 0.96.0 and hbase
0.94.3?*
0.94.3 should be out in a week or two.
0.96.0 start of
Hi,
I have created a table in HBase with one column family and planned to release it for
development (in Pentaho).
Suppose later, after doing the data profiling in production, I feel that out
of 600 columns, 200 are not going to be used frequently; I am planning to group
those into another column
Hi,
Thanks!!
Can someone help by suggesting the best rowkey that we can use in this
scenario?
Regards,
Rams
On 27-Nov-2012, at 10:37 PM, Suraj Varma svarma...@gmail.com wrote:
Ian Varley's excellent HBaseCon presentation is another great resource.
As far as I see, altering the table with the new column family should be
easier:
- disable the table
- issue the modify-table command with the new column family
- run a compaction
Now after this, when you start doing your puts, they should be in alignment
with the new schema defined for the table. You
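In hbase shell terms, the steps above would look roughly like this; table and family names are placeholders, and the exact syntax may differ between shell versions:

```
hbase> disable 'mytable'
hbase> alter 'mytable', {NAME => 'cf2'}
hbase> enable 'mytable'
hbase> major_compact 'mytable'
```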
Hi Rams,
IMHO, you need to go through http://hbase.apache.org/book.html and the book
HBase: The Definitive Guide to get a deeper understanding of HBase. It
will help you in designing your system.
There is no magic trick to designing the most efficient/best rowkey without
knowing the detailed
Thanks Ram!!!
My question is like this...
Suppose I have created a table with 100 columns in a single column family
'cf1'.
Now in production there are billions of records in that table, and
there are multiple programs feeding into it (let us say
some 50 programs)...
According to http://hbase.apache.org/book.html#number.of.cfs - 6.3.2.1.
Column Families: Try to keep the ColumnFamily names as small as possible,
preferably one character (e.g. d for data/default).
From: Ramasubramanian Narayanan
I am afraid it has to be changed, because for your puts to go to the
specified column family, the column family name should appear in the Puts
created by the client.
Regards
Ram
On Wed, Nov 28, 2012 at 11:18 AM, Ramasubramanian Narayanan
ramasubramanian.naraya...@gmail.com wrote:
Thanks
I have a table whose keys are prefixed with a byte to help distribute the keys
so scans don't hotspot.
I also have a bunch of slave processes that work to scan the prefix partitions
in parallel. Currently each slave sets up its own HBase connection, scanner,
etc. Most of the slave
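A sketch (not from the thread) of the usual way such a one-byte prefix and the matching per-partition scan ranges are computed; the bucket count, the hash, and the key format are all assumptions:

```python
NUM_BUCKETS = 16  # number of prefix partitions (assumed)

def salted_key(key: bytes) -> bytes:
    """Prepend a deterministic one-byte salt so sequential keys spread
    over NUM_BUCKETS ranges instead of hotspotting one region."""
    salt = sum(key) % NUM_BUCKETS  # any stable hash of the key works
    return bytes([salt]) + key

def scan_ranges():
    """(start_row, stop_row) pairs, one per bucket, for the parallel slaves."""
    return [(bytes([b]), bytes([b + 1])) for b in range(NUM_BUCKETS)]

# Each slave scans one range; together the ranges cover the whole table,
# and stripping the first byte recovers the original key.
```

Each slave would open its own scanner over one (start, stop) pair; sharing a single connection among slaves in the same process is the usual way to cut down the per-slave setup cost.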
Also, what about the current data in the table? Right now it is all under the single CF.
Modifying the table with the addition of a new CF will not move data to the new
family!
Remember, HBase only deals with CFs at the table schema level. There are no
qualifiers in the schema as such. When data is