Yes, I already did.
On 17 March 2017 at 22:40, "Ted Yu" <yuzhih...@gmail.com> wrote:
> Please also notify AsyncHBase mailing list, if you haven't done so.
>
> 2017-03-17 9:01 GMT-07:00 Kristoffer Sjögren <sto...@gmail.com>:
>
> > Thanks Ted, that was indeed the problem.
Thanks Ted, that was indeed the problem.
2017-03-17 4:29 GMT+01:00 Ted Yu <yuzhih...@gmail.com>:
> Have you considered the empty start row?
>
> 2017-03-16 10:51 GMT-07:00 Kristoffer Sjögren <sto...@gmail.com>:
>
>> Thanks Ted, I have posted the question to the AsyncHBase mailing list.
> Since you're using AsyncHBase, please consider posting on their mailing
> list.
>
> Thanks
>
> 2017-03-16 7:05 GMT-07:00 Kristoffer Sjögren <sto...@gmail.com>:
>
>> Hi
>>
>> I'm trying to scan a table using start and stop key ranges based on a
>> single byte.
I should mention that I get 126.222.622 of a total of 126.717.892, so I'm
missing 495.270 rows.
2017-03-16 15:05 GMT+01:00 Kristoffer Sjögren <sto...@gmail.com>:
> Hi
>
> I'm trying to scan a table using start and stop key ranges based on a
> single byte.
>
> I'm using AsyncHBase
Hi
I'm trying to scan a table using start and stop key ranges based on a
single byte.
I'm using AsyncHBase where scanners are start key inclusive and stop
key exclusive.
So for a single byte I generate 256 scanners with key ranges [1]. The
"last" key range use start 127 and an empty end key (in
Hi
I want to join a Spark RDD with an HBase table. I'm familiar with the
different connectors available but couldn't find this functionality.
The idea I have is to first sort the RDD according to a byte[] key [1]
and rdd.mapPartitions so that each partition contains a unique and
sequentially
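(A rough sketch of the per-partition idea, in case it helps later readers:
after partitioning/sorting by key, each partition opens one HBase table and
resolves its keys locally. sortedKeys is a hypothetical JavaRDD<byte[]>;
Connection/Table are the HBase 1.x client API and the Spark 2.x mapPartitions
signature returning an Iterator is assumed; error handling is omitted.)

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.spark.api.java.JavaRDD;
    import java.util.ArrayList;
    import java.util.List;

    JavaRDD<Result> joined = sortedKeys.mapPartitions(iter -> {
        // One connection per partition, not per record.
        Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
        Table table = conn.getTable(TableName.valueOf("mytable"));
        List<Result> out = new ArrayList<>();
        while (iter.hasNext()) {
            out.add(table.get(new Get(iter.next())));  // point lookup per key
        }
        table.close();
        conn.close();
        // Note: Result is not serializable; map it to a serializable type
        // before shuffling or collecting.
        return out.iterator();
    });

Since each partition holds a sequential key range, the Gets could also be
replaced by a single Scan from the partition's first to its last key, which is
closer to what the mail above describes.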
Hi
We are running OpenTSDB 2.2 with HBase 1.1.2 and are having problems
with RegionServers that are shutting down sporadically from alleged GC
pauses.
We run 2 OpenTSDB machines and 30 region servers. 8 GB heaps. The
region servers are collocated with data nodes and yarn jobs. Every
region
Hi
Try to find a docker image, possibly one that contains both HBase and
OpenTSDB. This will get you started in minutes.
Cheers,
-Kristoffer
On Thu, Aug 11, 2016 at 4:02 PM, Kulkarni, Suyog wrote:
> Hi,
>
> I am very new to both HBase and OpenTSDB and I just
Hi
I'm trying to install an HBase cluster with 1 master
(amb1.service.consul) and 1 region server (amb2.service.consul) using
Ambari on docker containers provided by sequenceiq [1] using a custom
blueprint [2].
Every component installs correctly except for HBase, which gets stuck
with regions in
Sorry, I should mention that this is HBase 1.1.2.
ZooKeeper only reports one region server.
$ ls /hbase-unsecure/rs
[amb2.service.consul,16020,1448353564099]
On Tue, Nov 24, 2015 at 9:55 AM, Kristoffer Sjögren <sto...@gmail.com> wrote:
> Hi
>
> I'm trying to install an HBase
> ...transition
> from PENDING_OPEN to OPEN state; if the hbase:meta table is unavailable the
> master cannot finish initialization.
>
> Regards
> Samir
>
> On Tue, Nov 24, 2015 at 10:11 AM, Kristoffer Sjögren <sto...@gmail.com>
> wrote:
>
>> Sorry, I should mention that this is HBase 1.1.2.
...bin.com/z93p8Mdu
On Tue, Nov 24, 2015 at 10:48 AM, Kristoffer Sjögren <sto...@gmail.com> wrote:
> I removed the node.dc1.consul from resolv.conf and restarted the
> cluster, but it still shows up on the master UI.
>
> amb2.node.dc1.consul,16020,1448353564099 Tue Nov 24
> ...correct server but server resolution produces wrong
> values. Do you have multiple network interfaces on the servers? What does
> ping $HOSTNAME return? What do you have in the /etc/hosts file? Do you have
> some local nameserver running on the servers?
>
> Regards
> Samir
> On Nov 24, 2015 11
> ...consul? Try changing the servers' hostnames to
> *.service.consul.
> Also try to disable resolution by DNS server: comment out all lines in
> /etc/resolv.conf.
>
> Regards
> Samir
>
> On Tue, Nov 24, 2015 at 12:29 PM, Kristoffer Sjögren <sto...@gmail.com>
> wrote:
>
>>
Hi
We are running 0.94.6-cdh4.4.0 with phoenix-2.2.3 and recently started
seeing connections being forcefully closed, ending with a
ClosedByInterruptException [1]. This problem occurs on both writes and
scans.
I did some searching and found people who had similar problems. Seems
that one answer
...the longest, but as long as the row is cached, the results are returned quickly.
If you’re trying to do a scan with a start/stop row set … your timing then
could vary between sub-second and minutes depending on the query.
On Apr 8, 2015, at 3:10 PM, Kristoffer Sjögren sto...@gmail.com wrote:
Is there any chance to run into this problem in the read path for data that
is written infrequently and never changed?
On Wed, Apr 8, 2015 at 9:30 AM, Kristoffer Sjögren sto...@gmail.com wrote:
A small set of qualifiers will be accessed frequently so keeping them in
block cache would be very
Is there a reason you're using a coprocessor instead of a
regular filter, or a simple qualified get/scan to access data from these
rows? The default stuff is already tuned to load data sparsely, as would
be desirable for your schema.
-n
On Tue, Apr 7, 2015 at 2:22 PM, Kristoffer Sjögren sto...@gmail.com
...of compaction… if the data is relatively static, you won't have
compactions because nothing changed.
But if your data is that static… why not put the data in sequence files
and use HBase as the index. Could be faster.
HTH
-Mike
On Apr 8, 2015, at 3:26 AM, Kristoffer Sjögren sto...@gmail.com
...this is that you're reading in large amounts of
data and it's more efficient to do this from HDFS than through HBase.
On Apr 8, 2015, at 8:41 AM, Kristoffer Sjögren sto...@gmail.com wrote:
Yes, I think you're right. Adding one or more dimensions to the rowkey
would indeed make the table
Hi
I have a row with around 100.000 qualifiers with mostly small values around
1-5KB and maybe 5 larger ones around 1-5 MB. A coprocessor does random
access of 1-10 qualifiers per row.
I would like to understand how HBase loads the data into memory. Will the
entire row be loaded or only the
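(Worth noting for this question: since only 1-10 of the ~100.000 qualifiers
are touched per access, a qualified Get avoids materializing the whole row;
HBase then only reads the HFile blocks that actually contain the requested
cells. A minimal sketch against the 0.94-era client API; table, family and
qualifier names are made up.)

    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    HTable table = new HTable(conf, "mytable");
    Get get = new Get(rowKey);
    get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q42"));  // only the cells needed,
    get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q7"));   // not the 100k-column row
    Result result = table.get(get);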
On Apr 7, 2015, at 11:13 AM, Kristoffer Sjögren sto...@gmail.com
wrote:
Hi
I have a row with around 100.000 qualifiers with mostly small values around
1-5KB and maybe 5 larger ones around 1-5 MB. A coprocessor does random
access of 1-10 qualifiers per row.
I would like
...All edits should be persisted to the
WAL regardless of what Ted said about flushing.
We are working on the problem, please see HBASE-13238
On Saturday, March 14, 2015, Kristoffer Sjögren sto...@gmail.com wrote:
I think I found the thread that is stuck.
...@gmail.com wrote:
Which release of HBase are you using ?
I wonder if your cluster was hit with HBASE-10499.
Cheers
On Sat, Mar 14, 2015 at 1:13 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
Hi
It seems one of our region servers has been stuck closing a region for
almost 22 hours. Puts
...which got stuck, there might be data loss if the server is restarted since
there would be some data unable to be flushed.
Cheers
On Sat, Mar 14, 2015 at 2:58 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
I think I found the thread that is stuck. Is restarting the server
harmless
...(maybe it should have). Please consider
upgrading.
Cheers
On Sat, Mar 14, 2015 at 1:30 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
Hi Ted
Sorry I forgot to mention, hbase-0.94.6 cdh 4.4.
Yeah, it was a pretty write-intensive scenario that I think triggered it
(importing a lot
What can I say? Awesome community! :-)
On Mon, Mar 2, 2015 at 11:17 PM, Gary Helmling ghelml...@gmail.com wrote:
Proving it to yourself is sometimes the hardest part!
On Mon, Mar 2, 2015 at 2:11 PM Nick Dimiduk ndimi...@gmail.com wrote:
Gary to the rescue! Does it still count as being
...to model time in your schema, it's best to promote it to an indexed
field -- i.e., make it a component of your row key.
-n
On Mon, Mar 2, 2015 at 12:42 AM, Kristoffer Sjögren sto...@gmail.com
wrote:
Thanks, great explanation!
Forgive my laziness, but do you happen to know what part(s
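(One common way to "promote time to an indexed field", sketched here purely as
an illustration rather than anything stated in this thread: build a composite
row key with the timestamp reversed, so the newest rows for an entity sort
first and a prefix scan returns them without a timerange filter. entityId and
timestampMillis are hypothetical names.)

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    // Leading component keeps one entity's rows contiguous; the reversed
    // timestamp makes the latest row sort first within that prefix.
    byte[] rowKey = Bytes.add(
        Bytes.toBytes(entityId),
        Bytes.toBytes(Long.MAX_VALUE - timestampMillis));

    // Newest-first prefix scan over the entity's rows.
    Scan scan = new Scan(Bytes.toBytes(entityId));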
...Kristoffer Sjögren sto...@gmail.com
wrote:
If Scan.setTimeRange is a full table scan then it runs surprisingly fast
on tables that host a few hundred million rows :-)
On Sat, Feb 28, 2015 at 8:05 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
Hi
If Scan.setTimeRange is a full table scan then it runs surprisingly fast on
tables that host a few hundred million rows :-)
On Sat, Feb 28, 2015 at 8:05 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
Hi Jean-Marc
I was thinking of Scan.setTimeRange to only get the x latest rows, but I
Hi
I want to understand the effectiveness of timerange scans without setting
start and stop keys. Will HBase do a full table scan, or will the scan be
optimized in any way?
Cheers,
-Kristoffer
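(For reference, the call under discussion. Without start/stop rows every
region is still visited, but HBase can skip whole store files whose recorded
time ranges fall outside the window, which would explain scans finishing
quickly on tables where most files are older than the range. A minimal sketch;
table is assumed to be an open HTable.)

    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    Scan scan = new Scan();
    scan.setTimeRange(startMillis, endMillis);  // [min, max), in milliseconds
    // No start/stop row: structurally still a full-table scan, but store
    // files entirely outside the time range are pruned before rows are read.
    ResultScanner rs = table.getScanner(scan);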
...:41 GMT-05:00 Kristoffer Sjögren sto...@gmail.com:
Hi
I want to understand the effectiveness of timerange scans without setting
start and stop keys. Will HBase do a full table scan, or will the scan be
optimized in any way?
Cheers,
-Kristoffer
...files of those bad tables,
like: bin/hadoop fs -rm /hbase/TABLENAME
-- Original Message --
From: Kristoffer Sjögren <sto...@gmail.com>
Sent: Tuesday, October 14, 2014, 3:27 PM
To: user <user@hbase.apache.org>
Subject: Force remove table
Hi
I accidentally created a few tables
On Tue, Oct 14, 2014 at 1:11 AM, Kristoffer Sjögren sto...@gmail.com
wrote:
I was thinking of doing that but I suspect that zookeeper keeps metadata
of tables also. Seems like the region servers are fine for now without the
master and I don't want to make the problem worse by taking chances
..., 2014 at 8:05 AM, Kristoffer Sjögren sto...@gmail.com
wrote:
It seems that the region servers are complaining about wrong Phoenix
classes for some reason. We are running 2.2.0, which is the version before
Phoenix was moved to Apache.
But looking at the regionserver logs, they are stuck
Hi
We are running hbase 0.94.6 cdh 4.4 and have a problem with one table not
being assigned to any region. This is the SYSTEM.TABLE in Phoenix, so all
tables are basically non-functional at the moment.
When running hbck repair we get the following...
ERROR: Region { meta =
...)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:175)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
On Thu, Aug 14, 2014 at 4:31 PM, Kristoffer Sjögren sto...@gmail.com
From: Kristoffer Sjögren sto...@gmail.com
To: user@hbase.apache.org
Sent: Wednesday, February 12, 2014 11:54 PM
Subject: Re: Standard vs Asynchbase client reconnect after HBase restart
@Ted We are using HBase 0.94.6 from CDH 4 to be exact.
@Lars Thanks a lot
Hi
I have some tests that check client behaviour during a controlled HBase
restart. Everything works as expected and the client is able to recover
after a while.
However, after doing the same tests with Asynchbase I noticed that this
client recovers almost instantly after HBase comes back up
0.94.x
On Wed, Feb 12, 2014 at 4:39 PM, Ted Yu yuzhih...@gmail.com wrote:
Are you using 0.94.x or 0.96.y ?
Cheers
On Wed, Feb 12, 2014 at 12:41 AM, Kristoffer Sjögren sto...@gmail.com
wrote:
Hi
I have some tests that check client behaviour during a controlled HBase
restart
...AM, Kristoffer Sjögren sto...@gmail.com wrote:
Hi
I have some tests that check client behaviour during a controlled HBase
restart. Everything works as expected and the client is able to recover
after a while.
However, after doing the same tests with Asynchbase I noticed
Hi
We are running HBase 0.94.6-cdh4.4.0 and wonder what the best way would be
to upgrade to 0.94.14 and still keep some compatibility with CDH?
As far as I know, there are no Cloudera apt packages for 0.94.14?
Cheers,
-Kristoffer
Hi
I have been performance tuning HBase 0.94.6 running Phoenix 2.2.0 the last
couple of days and need some help.
Background.
- 23-machine cluster, 32 cores, 4 GB heap per RS.
- Table t_24 has 24 online regions (24 salt buckets).
- Table t_96 has 96 online regions (96 salt buckets).
- 10.5
...administrator to see why pings to these
machines are slow. What are the pings like from a bad RS to another bad RS?
On Sat, Dec 21, 2013 at 7:17 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
Hi
I have been performance tuning HBase 0.94.6 running Phoenix 2.2.0 the last
couple of days
...this you wouldn't want to use HBase anyway.
(100k rows I could scan on my phone with a Perl script in less than 1s)
With ping you mean an actual network ping, or some operation on top of
HBase?
-- Lars
From: Kristoffer Sjögren sto...@gmail.com
Btw, I have tried different numbers of rows with similar symptoms on the bad
RS.
On Sat, Dec 21, 2013 at 10:28 PM, Kristoffer Sjögren sto...@gmail.com wrote:
@pradeep scanner caching should not be an issue since data transferred to
the client is tiny.
@lars Yes, the data might be small
...should really be sent across the network.
When you do the queries, can you check whether there is any network
traffic?
-- Lars
From: Kristoffer Sjögren sto...@gmail.com
To: user@hbase.apache.org; lars hofhansl la...@apache.org
Sent: Saturday, December 21
Scans on RS 19 and 23, which have 5 regions instead of 4, stand out more
than scans on RS 20, 21, 22. But scans on RS 7 and 18, that also have 5
regions are doing fine, not best, but still in the mid-range.
On Sat, Dec 21, 2013 at 11:51 PM, Kristoffer Sjögren sto...@gmail.com wrote:
Yeah, I'm
There are quite a lot of established and time-wait connections between the
RS on port 50010, but I don't know a good way of monitoring how much data is
going through each connection (if that's what you meant)?
On Sun, Dec 22, 2013 at 12:00 AM, Kristoffer Sjögren sto...@gmail.com wrote:
Scans
Hi
At the moment HFileWriterV2.close breaks at startup when using Guava 15.
This is not a client problem - it happens because we start a master node to
do integration tests.
A bit precarious, and I wonder if there are any plans to support Guava 15, or
if there is a clever way around this?
Cheers,
Thanks! But we can't really upgrade to HBase 0.96 right now, but we need to
go to Guava 15 :-(
I was thinking of overriding the classes fixed in the patch in our test
environment.
Could this work maybe?
On Mon, Dec 16, 2013 at 11:01 AM, Kristoffer Sjögren sto...@gmail.com wrote:
Hi
That means more or less backporting the patch to 0.94, no?
It should work imho.
On Mon, Dec 16, 2013 at 3:16 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
Thanks! But we can't really upgrade to HBase 0.96 right now, but we need to
go to Guava 15 :-(
I
...or do you mean no more RLL
for atomic writes?
On Aug 28, 2013, at 5:18 PM, Ted Yu yuzhih...@gmail.com wrote:
RowLock API has been removed in 0.96.
Can you tell us your use case ?
On Wed, Aug 28, 2013 at 3:14 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
Hi
Hi
About the internals of locking a row in HBase.
Do HBase row locks map one-to-one with locks in ZooKeeper, or are there
any optimizations based on the fact that a row only exists on a single
machine?
Cheers,
-Kristoffer
I want a distributed lock condition for doing certain operations that may
or may not be unrelated to HBase.
On Thu, Aug 29, 2013 at 12:18 AM, Ted Yu yuzhih...@gmail.com wrote:
RowLock API has been removed in 0.96.
Can you tell us your use case ?
On Wed, Aug 28, 2013 at 3:14 PM, Kristoffer
Hi
I'm writing a custom filter. It seems that filterKeyValue(KeyValue kv) gives
me every version, which is fine, but I'm only interested in the latest
version.
I have tried KeyValue.isLatestTimestamp() to filter out older versions but
this method always returns false?
Also tried setMaxVersions(1)
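(Two notes that may help here: KeyValue.isLatestTimestamp() only checks for
the HConstants.LATEST_TIMESTAMP placeholder used before a real timestamp is
assigned, so it is false for anything read back from a store; and
setMaxVersions belongs on the Scan, not the filter. A minimal sketch against
the 0.94-era API; MyCustomFilter is a hypothetical name for the filter above.)

    import org.apache.hadoop.hbase.client.Scan;

    Scan scan = new Scan();
    scan.setMaxVersions(1);               // only the newest version per cell
    scan.setFilter(new MyCustomFilter()); // the filter now sees one version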
...Kristoffer Sjögren sto...@gmail.com
wrote:
Hi
I'm writing a custom filter. It seems that filterKeyValue(KeyValue kv) gives
me every version, which is fine, but I'm only interested in the latest
version.
I have tried KeyValue.isLatestTimestamp() to filter out older versions
...that Lucene is schemaless and both Solr and Elasticsearch can
detect field types, so in a way they are schemaless, too.
Otis
--
Performance Monitoring -- http://sematext.com/spm
On Fri, Jun 28, 2013 at 2:53 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
@Otis
HBase is a natural fit
...at 4:39 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
Thanks for your help Mike. Much appreciated.
I don't store rows/columns in JSON format. The schema is exactly that of a
specific java class, where the rowkey is a unique object identifier with
the class type encoded into it. Columns
On Jun 27, 2013, at 12:59 PM, Kristoffer Sjögren sto...@gmail.com wrote:
Hi
Working with the standard filtering mechanism to scan rows that have
columns matching certain criteria.
There are columns of numeric (integer and decimal) and string types.
These
columns are single
...differ.
HTH
-Mike
On Jun 27, 2013, at 4:41 PM, Kristoffer Sjögren sto...@gmail.com wrote:
I realize standard comparators cannot solve this.
However I do know the type of each column so writing custom list
comparators for boolean, char, byte, short, int, long, float, double
seems
...be a design approach I would take. YMMV
Having said that, I expect someone to say it's a bad idea and that they
have a better solution.
HTH
-Mike
On Jun 27, 2013, at 5:13 PM, Kristoffer Sjögren sto...@gmail.com wrote:
I see your point. Everything is just bytes.
However, the schema
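(A minimal sketch of one such typed comparator, assuming the 0.94-era API
where custom comparators extend WritableByteArrayComparable and must be
deployed on the region server classpath; the sign convention should be checked
against BinaryComparator in your version.)

    import org.apache.hadoop.hbase.filter.WritableByteArrayComparable;
    import org.apache.hadoop.hbase.util.Bytes;

    public class LongComparator extends WritableByteArrayComparable {
        public LongComparator() {}                         // required by Writable
        public LongComparator(long v) { super(Bytes.toBytes(v)); }

        @Override
        public int compareTo(byte[] actual) {
            // Numeric order instead of lexicographic byte order.
            long expected = Bytes.toLong(getValue());
            long stored = Bytes.toLong(actual);
            return expected < stored ? -1 : (expected == stored ? 0 : 1);
        }
    }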
Hi
I'm doing an automated install of an HBase cluster on EC2. But when the install
tries to create tables (using the hbase shell) the master often has not yet had
time to start (since this operation is async), so these operations fail.
Is there a way to wait until the master is up and know when it's safe to
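(One approach, sketched against the 0.94-era client API: poll
HBaseAdmin.checkHBaseAvailable, which throws until the master answers, before
running the table-creation step.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    Configuration conf = HBaseConfiguration.create();
    while (true) {
        try {
            HBaseAdmin.checkHBaseAvailable(conf);  // throws while master is down
            break;                                 // safe to create tables now
        } catch (Exception e) {
            Thread.sleep(1000);                    // async startup not finished yet
        }
    }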
...java.util.Collection)} and Coprocessor endpoints.
Cheers
On Sat, Apr 20, 2013 at 10:11 AM, Kristoffer Sjögren sto...@gmail.com
wrote:
Just to absolutely be clear, is this also true for a batch that spans
multiple rows?
On Sat, Apr 20, 2013 at 2:42 PM, Ted Yu yuzhih...@gmail.com wrote
..., 2013 at 10:53 AM, Kristoffer Sjögren sto...@gmail.com wrote:
Hi
Is it possible to completely overwrite/replace a row in a single _atomic_
action? Already existing columns and qualifiers should be removed if they
do not exist in the data inserted
Cheers
On Apr 20, 2013, at 12:17 AM, Kristoffer Sjögren sto...@gmail.com wrote:
The schema is known beforehand so this is exactly what I need. Great!
One more question. What guarantees does the batch operation have? Are the
operations contained within each batch atomic? I.e. all
Hi
Is it possible to completely overwrite/replace a row in a single _atomic_
action? Already existing columns and qualifiers should be removed if they
do not exist in the data inserted into the row.
The only way to do this is to first delete the row then insert new data in
its place, correct? Or
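(The closest thing to an atomic replace on a single row is RowMutations, which
applies a Delete and a Put to one row atomically; it exists in the client API
from roughly the 0.94 line onward, but treat the exact availability as an
assumption. Also note the classic pitfall: a delete marker can mask a put
written in the same millisecond, so an explicit timestamp on the Delete may be
needed.)

    import org.apache.hadoop.hbase.client.*;

    RowMutations rm = new RowMutations(rowKey);
    Delete d = new Delete(rowKey);        // drops all existing cells in the row
    Put p = new Put(rowKey);
    p.add(family, qualifier, value);      // 0.94-era Put.add (addColumn later)
    rm.add(d);
    rm.add(p);
    table.mutateRow(rm);                  // delete + put applied atomically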
Hi all
I have a problem starting HBase in a fully distributed 3-machine setup (2
datanodes/regionservers + 1 master/namenode). For some reason ZooKeeper on the
master complains about not finding /hbase/backup-masters in
hbase-user-zookeeper-host.out.
java.io.IOException: Failed to process
...[] rowsToLock) throws IOException {
It allows you to combine Put's and Delete's for a single region,
atomically.
On Tue, May 22, 2012 at 1:22 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
Thanks, sounds like that should do it.
So I'm guessing it is correct to assume that _all_ KeyValues
Gotcha.
Columns are quite dynamic in my case, but since I need to fetch rows first
anyway, a KeyOnlyFilter to first find them and then overwrite values will
do just fine.
Cheers,
-Kristoffer
Hi
I'm trying to use Put operations to replace (set) already existing rows
by nullifying certain columns and qualifiers as part of a Put operation.
The reason I want to do this is to 1) keep the operation atomic/consistent
and 2) avoid latency from first doing a Delete then a Put.
Is there some way to do
...I don't think you can include a delete with a put and keep it atomic.
You could include a null version of the column with your put, though,
for a similar effect.
--Tom
On Tue, May 22, 2012 at 10:55 AM, Kristoffer Sjögren sto...@gmail.com
wrote:
Hi
I'm trying to use Put operations to replace
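(A minimal sketch of Tom's suggestion above: overwrite the stale columns with
an empty value inside the same Put, so no separate Delete is needed; readers
must then treat empty values as absent. Family and qualifier names are made
up.)

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    Put put = new Put(rowKey);
    put.add(family, Bytes.toBytes("keep"), newValue);
    put.add(family, Bytes.toBytes("stale"), HConstants.EMPTY_BYTE_ARRAY); // "nullified"
    table.put(put);  // one atomic operation on the row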