1 several times
> >> when we need to delete the draining_servers info of hs1, we will
> need
> >> to delete hs1 several times
> >>
> >>
> >>
> >> finally, what is the original motivation of this tool? Some scenario
> >> descriptions would be good.
Please take a look at:
bin/draining_servers.rb
On Mon, Apr 25, 2016 at 8:12 PM, WangYQ wrote:
> in hbase, I find there is a "drain regionServer" feature
>
>
> if a rs is added to drain regionServer in ZK, then regions will not be
> moved onto these regionServers
>
>
w.r.t. the pipeline, please see this description:
http://itm-vm.shidler.hawaii.edu/HDFS/ArchDocUseCases.html
On Mon, Apr 25, 2016 at 12:18 PM, Saad Mufti wrote:
> Hi,
>
> In our large HBase cluster based on CDH 5.5 in AWS, we're constantly seeing
> the following messages
Can you pastebin more of the master log ?
Which version of hadoop are you using ?
Log snippet from namenode w.r.t. state-0073.log may also
provide some more clue.
Thanks
On Mon, Apr 25, 2016 at 12:56 PM, donmai wrote:
> Hi all,
>
> I'm getting a strange
Please see
https://blog.art-of-coding.eu/executing-operating-system-commands-from-java/
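Along the lines of that post, launching the shell as an external process is one option; below is a stdlib-only sketch (the `hbase shell` arguments and script path in the comment are assumptions about your installation, not a tested invocation):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hypothetical helper: run an external command and capture its combined output.
public class ShellFromJava {
    public static String run(String... cmd) {
        try {
            ProcessBuilder pb = new ProcessBuilder(cmd);
            pb.redirectErrorStream(true); // fold stderr into stdout
            Process p = pb.start();
            StringBuilder out = new StringBuilder();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    out.append(line).append('\n');
                }
            }
            p.waitFor();
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // e.g. run("hbase", "shell", "/tmp/commands.rb") -- path is an assumption
        System.out.println(run("echo", "hello from the shell"));
    }
}
```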
> On Apr 23, 2016, at 10:18 PM, Saurabh Malviya (samalviy)
> wrote:
>
> Hi,
>
> Is there any way to run hbase shell script from Java. Also mentioned this
> question earlier in below
Jayesh:
Is it possible for you to share the JVM parameters ?
Thanks
On Fri, Apr 22, 2016 at 7:48 AM, Thakrar, Jayesh <
jthak...@conversantmedia.com> wrote:
> Karthik,
>
> Yes, tuning can help - but the biggest help is to give "sufficient" memory
> to the regionserver.
> And "sufficient" is
Are you using hbase 1.0 or 1.1 ?
I assume you have verified that hbase master is running normally on
master-sigma.
Are you able to use hbase shell on that node ?
If you check master log, you would see which node hosts hbase:meta
On that node, do you see anything interesting in region server log
ngQueue.java:389)
> at
> org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
> at
> org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
> at
> org.apache.zookeeper.server.quoru
.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
>
> at
> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
>
> at
> org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
> at org.
e, I want to increase the pool size for
> master_table_operation
>
>
> but not sure if there are any problems.
>
>
> thanks..
>
> At 2016-04-20 21:07:06, "Ted Yu" <yuzhih...@gmail.com> wrote:
> >Adding subject.
> >
> >Adding back user@hbase
table
>
> i think it is ok, right?
>
>
>
>
> On 2016-04-20 20:53, Ted Yu <yuzhih...@gmail.com> wrote:
>
>
> Have you seen the comment above that line ?
>
>// We depend on there being only one instance of this executor running
>// at a time.
Have you seen the comment above that line ?
// We depend on there being only one instance of this executor running
// at a time. To do concurrency, would need fencing of enable/disable of
// tables.
// Any time changing this maxThreads to > 1, pls see the comment at
//
Please use the following method of HBaseAdmin:
public CompactionState getCompactionStateForRegion(final byte[]
regionName)
Cheers
On Tue, Apr 19, 2016 at 12:56 PM, Saad Mufti wrote:
> Hi,
>
> We have a large HBase 1.x cluster in AWS and have disabled automatic major
>
rride some method or something is missing.
>
> Best,
> Iván.
>
>
> - Mensaje original -
> > De: "Ted Yu" <yuzhih...@gmail.com>
> > Para: user@hbase.apache.org
> > Enviados: Martes, 19 de Abril 2016 0:22:05
> > Asunto: Re: Processi
> public static void main(String[] args) throws Exception {
> int exitCode = ToolRunner.run(HBaseConfiguration.create(),
> new SimpleRowCounter(), args);
> System.exit(exitCode);
> }
> }
>
> Thanks so much,
> Iván.
>
>
>
>
> - Mensaje original -
> > De: "
The referenced link is from a specific vendor.
Mind posting on the vendor's mailing list ?
On Mon, Apr 18, 2016 at 12:45 PM, Colin Kincaid Williams
wrote:
> I would like to insert some data from Spark and or Spark streaming
> into Hbase, on v .98. I found this section of the
bout the replication performance, for instances
> > if the bandwidth is 100MB/s?
> >
> > Regards
> >
> >
> >
> >
> >
> >
> >-
> >
> >
> > On Sun, Apr 17, 2016 at 9:40 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
Have you taken a look at HBASE-11339 (HBase MOB) ?
Note: this feature does not handle 10GB objects well. Consider storing
GB-sized images directly on HDFS.
Cheers
On Sat, Apr 16, 2016 at 6:21 PM, Ascot Moss wrote:
> Hi,
>
> I have a project that needs to store large number of image and
5:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
> MAIL=/var/spool/mail/hadoop
>
> PATH=/usr/local/bin:/bin:/usr/
Have you seen my reply ?
http://search-hadoop.com/m/q3RTtJHewi1jOgc21
The actual value for zookeeper.znode.parent could be /hbase-secure (just an
example).
Make sure the correct hbase-site.xml is on the classpath for hbase shell.
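For reference, the override would look something like this in hbase-site.xml (the /hbase-secure value is just the example above, not a recommendation):

```xml
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-secure</value>
</property>
```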
On Sat, Apr 16, 2016 at 7:53 AM, Eric Gao
Please take a look at:
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
There're various tests under hbase-server/src/test which demonstrate the
usage.
FYI
On Thu, Apr 14, 2016 at 8:03 PM, Bin Wang wrote:
> Hi there,
>
> I have a HFile that I
There is currently no API for appending Visibility Labels.
checkAndPut() only allows you to compare value, not labels.
On Wed, Apr 13, 2016 at 8:12 AM,
wrote:
> We sell data. A product can be defined as a permission to access data (at
> a cell level).
m using TableInputFormat, not
> InputFormat)
>
> But just to confirm, with the getSplits() function, Are mappers processing
> rows in the same region executed in parallel? (assuming that there are
> empty
> processors/cores)
>
> Thanks,
> Ivan.
>
>
> - Mensaje original
From region server log:
2016-04-11 03:11:51,589 WARN org.apache.zookeeper.ClientCnxnSocket:
Connected to an old server; r-o mode will be unavailable
2016-04-11 03:11:51,589 INFO org.apache.zookeeper.ClientCnxn: Unable to
reconnect to ZooKeeper service, session 0x52ee1452fec5ac has expired,
It seems your code didn't go through.
Please take a look at ResultSerialization and related classes.
Cheers
On Mon, Apr 11, 2016 at 5:29 PM, 乔彦克 wrote:
> Hi, all
> recently we upgrade our HBase cluster from cdh-0.94 to cdh-1.0. In 0.94
> we use Result.java(implement
tion getConnection(){return connection;} I have
> > configured my connection in constructor.
> > Jacky
> >
> > -Original Message-
> > From: Ted Yu [mailto:yuzhih...@gmail.com]
> > Sent: Saturday, April 09, 2016 11:03 AM
> > To: user@hbase.apache.org
> > Su
bq. if they are located in the same split?
Probably you meant same region.
Can you show the getSplits() for the InputFormat of your MapReduce job ?
Thanks
On Mon, Apr 11, 2016 at 5:48 AM, Ivan Cores gonzalez
wrote:
> Hi all,
>
> I have a small question regarding the
Have you looked at :
http://hbase.apache.org/book.html#ttl
Please describe your use case.
Thanks
On Mon, Apr 11, 2016 at 2:11 AM, hsdcl...@163.com wrote:
> hi,
>
> I want to know the principle of HBase TTL; I would like to use the same
> principle to implement TTL for rowkeys,
Can you look at the master log during this period to see what procedure was
retried?
Turning on DEBUG logging if necessary and pastebin relevant portion of master
log.
Thanks
> On Apr 11, 2016, at 1:11 AM, Kevin Bowling wrote:
>
> Hi,
>
> I'm running HBase 1.2.0 on
base-spark/dependency-info.html
>
> do i need to build it from the trunk?
>
> please let me know
>
> Thanks,
> Yeshwanth
>
> -Yeshwanth
> Can you Imagine what I would do if I could do all I can - Art of War
>
> On Tue, Apr 5, 2016 at 5:30 PM, Ted Yu <yuzhih
Can you show the body of getConnection() ?
getTable() itself is not expensive - assuming the same underlying
Connection.
Cheers
On Sat, Apr 9, 2016 at 7:18 AM, Yi Jiang wrote:
> Hi, Guys
> I just have a question, I am trying to save the data into table in HBase
> I am
I guess you have read:
https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration#HBaseIntegration-HiveMAPtoHBaseColumnFamily
where example for column family mapping is given.
If you need to map column qualifier, probably poll hive mailing list on
syntax.
FYI
On Sat, Apr 9, 2016 at
The 'No StoreFiles for' is logged at DEBUG level.
Was 9151f75eaa7d00a81e5001f4744b8b6a among the regions which didn't finish
split ?
Can you pastebin more of the master log during this period ?
Any other symptom that you observed ?
Cheers
On Thu, Apr 7, 2016 at 12:59 AM, Heng Chen
into hbase it utilizes only a single region
> server, not other region servers and we have 3 to 4 rs.
>
> On Thu, Apr 7, 2016 at 3:27 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> Please take a look at:
>> http://hbase.apache.org/book.html#disable.splitting
>>
>>
Please take a look at:
http://hbase.apache.org/book.html#disable.splitting
especially the section titled:
Determine the Optimal Number of Pre-Split Regions
For writing data evenly across the cluster, can you tell us some more about
your use case(s) ?
Thanks
On Tue, Apr 5, 2016 at 11:48 PM,
There are some outstanding bug fixes, e.g. HBASE-15333, for hbase-spark
module.
FYI
On Tue, Apr 5, 2016 at 2:36 PM, Nkechi Achara
wrote:
> So Hbase-spark is a continuation of the spark on hbase project, but within
> the Hbase project.
> They are not any significant
Have you considered setting TTL ?
bq. HBase will not be easily removed
Can you clarify the above ?
Cheers
On Tue, Apr 5, 2016 at 12:03 AM, hsdcl...@163.com wrote:
>
> If I want to delete hbase data from the previous month, how should I do it?
> To avoid errors, when deleting data, we
How many regions does your table have ?
After sorting, is there a chance that the top N rows come from distinct
regions ?
On Mon, Apr 4, 2016 at 8:27 PM, Shushant Arora
wrote:
> Hi
>
> I have a requirement to scan a hbase table based on insertion timestamp.
> I need
bq. I have been informed
Can you disclose the source of such information ?
For hbase.hstore.compaction.kv.max , hbase-default.xml has:
The maximum number of KeyValues to read and then write in a batch when
flushing or
compacting. Set this lower if you have big KeyValues and problems
with
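For reference, the hbase-default.xml entry being quoted looks roughly like this (10 is the 1.x default; verify against your release):

```xml
<property>
  <name>hbase.hstore.compaction.kv.max</name>
  <value>10</value>
  <description>The maximum number of KeyValues to read and then write in a
    batch when flushing or compacting.</description>
</property>
```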
In refguide, I don't see -Dsnappy mentioned.
I didn't find snappy in pom.xml either.
Have you tried building without this -D ?
On Fri, Apr 1, 2016 at 12:40 AM, Micha wrote:
> Hi,
>
> this is my first maven build, thought this should just work :-)
>
> after calling:
>
>
empty block appending behavior make sense for HBase-only
> usage?
>
> If this is, in fact, unused storage how do we get it back?
>
> Currently df shows 75% filled while du shows 25%. The former is prompting
> us to consider more hardware. If in fact we are 25% we don't need to.
>
Have you seen this thread ?
http://search-hadoop.com/m/uOzYtatlmAcqgzM
On Thu, Mar 31, 2016 at 11:58 AM, Ted Tuttle wrote:
> Hello-
>
> We are running v0.94.9 cluster.
>
> I am seeing that 'fs -dus' reports 24TB used and 'fs -df' reports 74TB
> used.
>
> Does anyone
bq. COMPRESSION => 'LZ4',
The answer is given by above attribute :-)
On Thu, Mar 31, 2016 at 10:41 AM, marjana wrote:
> Sure, here's describe of one table:
>
> Table RAWHITS_AURORA-COM is ENABLED
> RAWHITS_AURORA-COM
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'f1',
> Although we have only 5 nodes in this cluster, we do perform HA on every
> levels of HBase service stack. So yes, there are multiple instances of
> every services as long as it's possible or necessay (e.g. we have 3
> HMaster, 2 name node, 3 journal node)
>
> Thanks,
bq. File does not exist: /hbase/data/default/vocabulary/
2639c4d082646bb4a4fa2d8119f9aaef/cnt/2dc367d0e1c24a3b848c68d3b171b06d
Can you search in namenode audit log to see which node initiated the delete
request of the above file ?
Then you can search in that node's region server log to get more
bq. data is distributed on node servers,
Data is on hdfs, i.e. the Data Nodes.
bq. it gets propagated to all data nodes,
If I understand correctly, the -du command queries namenode.
bq. Is this size compressed or uncompressed?
Can you show us the table description (output of describe command
The attachments you mentioned didn't go through.
For #2, please adjust:
hbase.client.scanner.timeout.period
hbase.rpc.timeout
Both have default value of 60 seconds.
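To raise them, something like the following on the client side's hbase-site.xml (120000 ms is just an example value, not a recommendation):

```xml
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>120000</value>
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value>
</property>
```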
If possible, please pastebin server log snippet before it crashed.
Thanks
On Thu, Mar 31, 2016 at 3:04 AM, Amit Shah
might not want to give a very
> > large size for the heap. You can make use of the off heap BucketCache.
> >
> > -Anoop-
> >
> > On Thu, Mar 31, 2016 at 4:35 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> > > For #1, please see the top two blogs @ https://b
bq. hbase version is 1.1.1.2.3
I don't think there was ever such a release - there should be only 3 dots.
bq. /hbase is the default storage location for tables in hdfs
the root dir is given by hbase.rootdir config parameter.
Here is sample listing:
http://pastebin.com/ekF4tsYn
Under data,
For #1, please see the top two blogs @ https://blogs.apache.org/hbase/
FYI
On Wed, Mar 30, 2016 at 7:59 AM, Amit Shah wrote:
> Hi,
>
> I am trying to configure my hbase (version 1.0) phoenix (version - 4.6)
> cluster to utilize as much memory as possible on the server
Please refer to http://hbase.apache.org/book.html#maven.release (especially
4. Build the binary tarball.)
Pass the following on command line:
-Dhadoop-two.version=2.7.2
FYI
On Wed, Mar 30, 2016 at 7:41 AM, Micha wrote:
> Hi,
>
>
> hbase ships with hadoop jars in the
You can also snapshot each of the 647 tables.
In case something goes unexpected, you can restore any of them.
FYI
On Wed, Mar 30, 2016 at 6:46 AM, Chathuri Wimalasena
wrote:
> Hi All,
>
> We have production system using hadoop 2.5.1 and HBase 0.94.23. We have
> nearly
ting another new attributes of a trade). All attributes are needed
> (the original and new)
>
> 2016-03-25 18:41 GMT+01:00 Ted Yu <yuzhih...@gmail.com>:
>
> > bq. During the processing the size of the data is doubled.
> >
> > This explains the frequent split :-
it is not a typical big data project where we can allow former analysis of
> the data:)
>
> 2016-03-25 17:38 GMT+01:00 Ted Yu <yuzhih...@gmail.com>:
>
> > What's the current region size you use ?
> >
> > bq. During the processing size of the data gets increase
What's the current region size you use ?
bq. During the processing size of the data gets increased
Can you give us some quantitative measure as to how much increase you
observed (w.r.t. region size) ?
bq. I was looking for some "global lock" in source code
Probably not a good idea using global
erver result in the big
> performance difference being seen here?
>
> I am using Hortonworks HBase 1.1.2
>
> On Thu, Mar 24, 2016 at 5:32 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > I assume the partitions' boundaries don't align with region boundaries,
> > rig
I assume the partitions' boundaries don't align with region boundaries,
right ?
Meaning some partitions would cross region boundaries.
Which hbase release do you use ?
Thanks
On Thu, Mar 24, 2016 at 4:45 PM, James Johansville <
james.johansvi...@gmail.com> wrote:
> Hello all,
>
> So, I wrote
24, 2016 at 12:39 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> Currently IncreasingToUpperBoundRegionSplitPolicy doesn't detect when the
> master initialization finishes.
>
> There is also some missing piece where region server notifies the
> completion of cluster initi
ntSizeRegionSplitPolicy* which in my use case
> is
> > what I want.
> >
> > Cheers
> > Pedro
> >
> > On Mon, Feb 15, 2016 at 5:22 PM, Ted Yu <yuzhihong@...> wrote:
> >
> > > Can you pastebin region server log snippet around the time when th
Please take a look at the attachment to HBASE-14457
Cheers
On Tue, Mar 22, 2016 at 1:56 PM, Parsian, Mahmoud
wrote:
> If anyone is using HBase with SSDs, please let us know the performance
> improvements.
>
> Thank you!
> Best regards,
> Mahmoud
>
bq. a small number will take 20 minutes or more
Were these mappers performing selective scan on big regions ?
Can you pastebin the stack trace of region server(s) which served such
regions during slow mapper operation ?
Pastebin of region server log would also give us more clue.
On Tue, Mar
explanation.
> >
> > I will revert to protobuf version 2.5
> >
> >
> >
> > -Yeshwanth
> > Can you Imagine what I would do if I could do all I can - Art of War
> >
> > On Fri, Mar 18, 2016 at 8:37 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> >
Have you posted this question on Phoenix mailing list ?
Looks like you may get better answer there since the exception is related
to Phoenix coprocessor.
Thanks
On Mon, Mar 21, 2016 at 3:51 AM, Pedro Gandola
wrote:
> Hi,
>
> I'm using *Phoenix4.6* and in my use case I
If you are running Linux / OSX on localhost, you can use 'ps' command to
search for the thrift server.
You can also check whether any process is listening on port 9090.
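If it is more convenient than ps/lsof, here is a stdlib-only Java sketch of probing the port (host, port and timeout are placeholders; 9090 is the Thrift default):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch: probe whether anything is accepting TCP connections on a port.
public class PortProbe {
    public static boolean isListening(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false; // refused or timed out -> nothing listening
        }
    }

    public static void main(String[] args) {
        System.out.println(isListening("localhost", 9090, 500)
                ? "something is listening on 9090"
                : "nothing on 9090");
    }
}
```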
FYI
On Sun, Mar 20, 2016 at 5:20 AM, ram kumar wrote:
> Hi,
>
> I can't connect to Hbase Thrift
>
>
See the following from hbase-default.xml
hbase.thrift.minWorkerThreads
16
...
hbase.thrift.maxWorkerThreads
1000
...
hbase.thrift.maxQueuedRequests
1000
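If you need different limits, override them in hbase-site.xml; the values below are illustrative only, not tuning advice:

```xml
<property>
  <name>hbase.thrift.maxWorkerThreads</name>
  <value>2000</value>
</property>
<property>
  <name>hbase.thrift.maxQueuedRequests</name>
  <value>2000</value>
</property>
```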
On Thu, Mar 17, 2016 at 1:05 AM, Daniel wrote:
> Hi, I find that the Thrift server will stop
maven, git (which I think you may have setup already)
winutils.exe is out of the picture.
On Wed, Mar 16, 2016 at 7:56 AM, Gaurav Agarwal <gaurav130...@gmail.com>
wrote:
> No Jenkins is running on Unix box then what need to be done
> On Mar 16, 2016 8:12 PM, "Ted Yu" <y
I think what observed (16 concurrent connections) was due to the following
parameter:
hbase.thrift.minWorkerThreads
whose default value is 16.
On Fri, Mar 18, 2016 at 11:06 PM, Daniel wrote:
> Hi, my Thrift API server seems to allow at most 16 concurrent connections.
> Is
From the code, looks like you were trying to authenticate as user test.
But from the attached log:
bq. getLoginUser :: root (auth:SIMPLE)
FYI
On Thu, Mar 17, 2016 at 8:33 PM, Saurabh Malviya (samalviy) <
samal...@cisco.com> wrote:
>
>
> Hi,
>
>
>
> I am trying to write java code snippet to
Frank:
Can you take a look at the following to see if it may help with your use
case(s) ?
HBASE-15181 A simple implementation of date based tiered compaction
Cheers
On Fri, Mar 18, 2016 at 9:58 AM, Frank Luo wrote:
> There are two reasons I am hesitating going that route.
bq. Do I need to configure winutils.exe
Assuming the Jenkins runs on Windows, winutils.exe would be needed.
Cheers
On Wed, Mar 16, 2016 at 5:19 AM, Gaurav Agarwal
wrote:
> Hello
>
> I used hbasetestingutility to run test cases on Windows and configure
> winutils.exe
> > Region Server. So, you can also consider increasing that.
> >
> > On Fri, Mar 18, 2016 at 10:22 AM, Frank Luo <j...@merkleinc.com> wrote:
> >
> > > Ted,
> > >
> > > Thanks for sharing. I learned something today.
> > >
> >
ed by default having only one region
> >
> > Why not pre-split table into more regions when create it?
> >
> > 2016-03-16 11:38 GMT+08:00 Ted Yu <yuzhih...@gmail.com>:
> >
> > > When one region is split into two, both daughter regions are opened on
> >
HBase is built with this version of protobuf:
2.5.0
On Fri, Mar 18, 2016 at 5:13 PM, yeshwanth kumar
wrote:
> i am using HBase 1.0.0-cdh5.5.1
>
> i am hitting this exception when trying to write to Hbase
>
> following is the stack trace
>
> Exception in thread
ass so that we can avoid an unnecessary byte array copy.
> Unfortunately the stuff we override is marked final in protobuf v3 which
> results in the class verification error.
>
> --
> Sean Busbey
> On Mar 18, 2016 19:38, "Ted Yu" <yuzhih...@gmail.com> wrote:
>
> &
When one region is split into two, both daughter regions are opened on the
same server where parent region was opened.
Can you provide a bit more information:
release of hbase
whether balancer was turned on - you can inspect master log to see if
balancer was on
Consider pastebinning portion of
Please take a look at:
private Path getBaseTestDir() {
String PathName = System.getProperty(
BASE_TEST_DIRECTORY_KEY, DEFAULT_BASE_TEST_DIRECTORY);
where:
public static final String BASE_TEST_DIRECTORY_KEY =
"test.build.data.basedirectory";
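So redirecting the base test directory is just a matter of setting that system property before the mini cluster starts; a minimal stdlib-only sketch (the /tmp path is an arbitrary example):

```java
// Sketch: set HBaseTestingUtility's base test directory via the system
// property named above, before the mini cluster is created.
public class TestDirConfig {
    public static String configureBaseDir(String dir) {
        System.setProperty("test.build.data.basedirectory", dir);
        return System.getProperty("test.build.data.basedirectory");
    }

    public static void main(String[] args) {
        System.out.println(configureBaseDir("/tmp/hbase-mini-cluster"));
    }
}
```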
FYI
On Mon, Mar 14, 2016 at
On top of letting unit test suite pass,
running IntegrationTestBigLinkedList (and integration test related to the
feature you modify) regularly is a good practice such that the changes do
not introduce regression.
FYI
On Mon, Mar 14, 2016 at 2:15 PM, Bryan Beaudreault
You can inspect the output from 'mvn dependency:tree' to see if any
incompatible hadoop dependency exists.
FYI
On Mon, Mar 14, 2016 at 10:26 AM, Parsian, Mahmoud
wrote:
> Hi Keech,
>
> Please post your sample test, its run log, version of Hbase , hadoop, …
> And make
For #1, single Hconnection should work.
For #2, can you clarify ? As long as the hbase-site.xml used to create
the Hconnection
is still valid, you can continue using the connection.
For #3, they're handled by the connection automatically.
For #4, the HTable ctor you cited doesn't exist in
)
at
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1774)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
Looks like your hdfs was transitioning.
FYI
On Wed, Mar 9, 2016 at 7:25 AM, Ted Yu <yuzhih...@gmail.com>
cky <medve...@pexe.so> wrote:
> You can check both logs at
>
> http://michal.medvecky.net/log-master.txt
> http://michal.medvecky.net/log-rs.txt
>
> First restart after upgrade happened at 10:29.
>
> I did not find anything useful.
>
> Michal
>
> On Wed, Mar 9,
Can you take a look at data-1 server log around this time frame to see what
happened ?
Thanks
> On Mar 9, 2016, at 3:44 AM, Michal Medvecky wrote:
>
> Hello,
>
> I upgraded my single hbase master and single hbase regionserver from 1.1.3
> to 1.2.0, by simply stopping both,
The procedure in this context is different from RDBMS stored procedure.
In flush(), you would see:
execProcedure("flush-table-proc", tableName.getNameAsString(),
new HashMap());
In MasterFlushTableProcedureManager.java :
public static final String
>>> I think we need at least one success story or one very interested user
>>> with a real project on the line to justify a backport. Otherwise it's a
>>> feature without any users - technically, abandoned.
>>>
>>>> On Fri, Feb 26, 201
Can you give us some more information ?
release of hbase
release of Hive
code snippet for registering hbase table
On Sun, Feb 28, 2016 at 7:21 PM, Divya Gehlot
wrote:
> Hi,
> I trying to register a hbase table with hive and getting following error :
>
> Error while
bq. do I have to manually go through each table
I think so.
bq. Is there maybe a GUI manager capable of such task?
No. Please use Admin API or shell command.
Cheers
On Fri, Feb 26, 2016 at 2:15 PM, ginsul wrote:
> Hi,
>
> Due to space issue we have decided to
> > > with
> > > sparkonhbase -- it is a new module and thus not as invasive.
> > >
> > > Jon.
> > >
> > > On Tue, Jan 19, 2016 at 4:55 PM, Andrew Purtell <
> > andrew.purt...@gmail.com>
> > > wrote:
> > >
> > >> P
3942aff2bba6c0b94ff98a904d1a={ad283942aff2bba6c0b94ff98a904d1a
> state=SPLITTING_NEW, ts=1456302275610,
> server=dx-common-regionserver1-online,60020,1456302268068},
> ab07d6fbcef39be032ba11ca6ba252ef={ab07d6fbcef39be032ba11ca6ba252ef
> state=SPLITTING_NEW...
>
>
>
>
>
bq. two regions were in transition
Can you pastebin related server logs w.r.t. these two regions so that we
can have more clue ?
For #2, please see http://hbase.apache.org/book.html#big.cluster.config
For #3, please see
Switching to user@ mailing list.
It is hard to read the image even after zooming.
Can you provide a clearer image ?
On Mon, Feb 22, 2016 at 10:34 AM, SaCvP125
wrote:
> Hi experts :)
>
> I'm trying to insert some date to hbase but I wanna to insert into a
> specific
Which hbase / hadoop release are you using ?
Can you give the complete stack trace ?
Cheers
On Mon, Feb 22, 2016 at 3:03 AM, Gaurav Agarwal
wrote:
> > I am trying to hbase testing utility to start minicluster on my local but
> getting some exception
> >
Thanks for sharing, Stephen.
bq. scan performance on the region servers needing to scan over all that
data you may not need
When number of versions is large, try to utilize Filters (where
appropriate) which implements:
public Cell getNextCellHint(Cell currentKV) {
See MultiRowRangeFilter for
Which version of hbase are you using ?
Have you taken a look at:
http://hbase.apache.org/book.html#security.client.thrift
Cheers
On Fri, Feb 19, 2016 at 3:11 AM, prabhu Mahendran
wrote:
> Hi,
>
> I have try to access Hbase Thrift authentication in Secured Cluster by
For #1, please take a look
at
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
e.g. the following methods:
public DFSInputStream open(String src) throws IOException {
public HdfsDataOutputStream append(final String src, final int buffersize,
sue .
> I am using hortonworks distribution HDP2.3.4.. How can I upgrade ?
> Could you provide me the steps ?
>
>
>
>
>
> On 18 February 2016 at 20:39, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> This was likely caused by version conflict of jackson dependenci
This was likely caused by version conflict of jackson dependencies between
Spark and Phoenix.
Phoenix uses 1.8.8 while Spark uses 1.9.13
One solution is to upgrade the jackson version in Phoenix.
See PHOENIX-2608
On Thu, Feb 18, 2016 at 12:31 AM, Divya Gehlot
wrote:
>
> Thanks!
>
> does hbase compress repeated values in keys and columns :location say
> (ASIA). will that be repeated with each key or hbase snappy compression
> will handle that.
>
> same applies for repeated values of a column?
>
> Thanks!
>
> On Wed, Feb 17, 2016 at 7:14
There is also Omid:
https://github.com/yahoo/omid/wiki
On Wed, Feb 17, 2016 at 7:07 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi Divya,
>
> HBase doesn't support transactions like rollback and commits. It has row
> level atomicity only.
>
> Some frameworks have been build on
> hbase.hregion.majorcompaction = 0 per table/column family
>
> 4.Does hbase put ,get and delete are blocked while major compaction and are
> working in minor compaction?
>
> No, they are not.
>
> -Vlad
>
> On Tue, Feb 16, 2016 at 4:51 PM, Ted Yu <y
For #2, see http://hbase.apache.org/book.html#managed.compactions
For #3, I don't think so.
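For #2 concretely, disabling time-based major compactions looks like this in hbase-site.xml (you then trigger them yourself, e.g. from a cron job):

```xml
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
</property>
```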
On Tue, Feb 16, 2016 at 4:46 PM, Shushant Arora
wrote:
> Hi
>
> 1.does major compaction in hbase runs per table basis.
> 2.By default every 24 hours?
> 3.Can I disable