shows you them.
Not so many in released server code.
On 8/9/14 at 8:01, yun peng wrote: Hi,
I am using HTrace-0.3.4 with LocalFileSpanReceiver in
HBase-0.99-SNAPSHOT
(in branch1 as of now). I have enabled the tracing configuration (i.e. in
hbase-site.xml) at both client and server side. I am
On Fri, Aug 8, 2014 at 10:02 PM, Ted Yu yuzhih...@gmail.com wrote:
See http://hbase.apache.org/book.html#maven.release
mvn clean install -DskipTests assembly:single -Prelease
Cheers
On Fri, Aug 8, 2014 at 6:41 PM, yun peng pengyunm...@gmail.com wrote:
Hi, I want to build an HBase runtime from src
On Thu, Aug 7, 2014 at 7:11 PM, yun peng pengyunm...@gmail.com wrote:
Hi, I have searched online but did not find an answer, so I am asking
around here.
I would like to enable the tracing function in HBase. I heard that it is possible, as HBase is already instrumented and uses
-SNAPSHOT/lib/ruby/hbase/security.rb
-rw-r--r-- 0 tyustaff 23695 Jul 23 17:53
hbase-0.99.0-SNAPSHOT/lib/ruby/hbase/table.rb
-rw-r--r-- 0 tyustaff 4459 Jun 30 21:25
hbase-0.99.0-SNAPSHOT/lib/ruby/hbase/visibility_labels.rb
Cheers
On Sat, Aug 9, 2014 at 6:09 AM, yun peng pengyunm
Hi,
I am using HTrace-0.3.4 with LocalFileSpanReceiver in HBase-0.99-SNAPSHOT
(in branch1 as of now). I have enabled the tracing configuration (i.e. in
hbase-site.xml) at both client and server side. I am running HBase in a
pseudo-distributed mode, that is, 2 region servers, one hmaster and one
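For context, enabling the LocalFileSpanReceiver in that era was done through hbase-site.xml entries along these lines; the exact property names shifted between HBase/HTrace releases, so treat the names below as assumptions and confirm them against the tracing section of the HBase book for your version:

```xml
<!-- Sketch of the hbase-site.xml tracing wiring (names are assumptions). -->
<property>
  <name>hbase.trace.spanreceiver.classes</name>
  <value>org.htrace.impl.LocalFileSpanReceiver</value>
</property>
<property>
  <name>hbase.local-file-span-receiver.path</name>
  <value>/tmp/htrace.out</value>
</property>
```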
Hi, I want to build an hbase runtime from src file. When I used mvn
package, all the dependency jars are downloaded into the local .m2
repository.
I want to build an hbase runtime like what is downloaded from the Apache
mirror (e.g. hbase-0.98.4-hadoop2-bin.tar.gz
Hi, I have searched online but did not find an answer, so I am asking
around here.
I would like to enable the tracing function on HBase. I heard that it is
possible as HBase is already instrumented and uses HTrace. But I do not
know how to turn on the HTrace function on an HBase (e.g. what is the
Hi,
I need the HBase-0.99-SNAPSHOT, more specifically the following two jars,
because an open-source project depends on them.
org.apache.hbase:hbase-testing-util:jar:0.99.0-SNAPSHOT
org.apache.hbase:hbase-client:jar:0.99.0-SNAPSHOT
I have tried to download the latest source which is newer
In our use case memory/cache is small, and we want to improve read/load
(from-disk) performance by storing HFile blocks consecutively on disk...
The idea is that if we store blocks closer together on disk, then reading a data block from an HFile would require fewer random disk accesses.
In particular, to
Hi, All
I am asking about the different practices of major and minor compaction... My current understanding is that minor compaction, triggered automatically, usually runs alongside online query serving (but in the background), so it is important to make it as lightweight as possible... to minimise
right. The only thing is that HBase doesn't know when your off-peak is, so a major compaction can be triggered at any time if a minor one is promoted to a major one.
JM
2013/6/22 yun peng pengyunm...@gmail.com:
Hi, All
I am asking about the different practices of major and minor compaction... My
a major compaction which will do
the same thing as the minor one, but with the bonus of deleting the
required marked cells.
JM
2013/6/22 yun peng pengyunm...@gmail.com:
Thanks, JM
It seems like the sole difference between major and minor compaction is the
number of files (to be all or just
Hi, All...
We have a use case intended to run MapReduce in the background, while the server serves online operations. The MapReduce job may have lower priority compared to the online jobs..
I know this is a different use case of MapReduce compared to its
originally targeted scenario (where
asaf.mes...@gmail.com wrote:
On Thu, Jun 20, 2013 at 9:42 PM, yun peng pengyunm...@gmail.com wrote:
Thanks Asaf, I made the response inline.
On Thu, Jun 20, 2013 at 9:32 AM, Asaf Mesika asaf.mes...@gmail.com
wrote:
On Thu, Jun 20, 2013 at 12:59 AM, yun peng pengyunm...@gmail.com
wrote
-regionserver-hotspotting-despite-writing-records-with-sequential-keys/
https://github.com/sematext/HBaseWD
2013/6/20 谢良 xieli...@xiaomi.com
Or maybe you could try to revert your rowkey:)
From: yun peng [pengyunm...@gmail.com]
Sent: June 20, 2013, 5:59
To
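The "reverse your rowkey" suggestion above can be sketched as a toy: monotonically increasing timestamps all sort into the last region, but their reversed decimal strings differ in the first digit and spread across the key space. The decimal-string key layout here is an assumption for illustration, not code from HBase or HBaseWD:

```java
// Toy illustration of rowkey reversal to avoid regionserver hotspotting.
public class ReversedKey {
    // Reverse the decimal digits of a timestamp-based key.
    public static String reverse(long timestamp) {
        return new StringBuilder(Long.toString(timestamp)).reverse().toString();
    }

    public static void main(String[] args) {
        // Two consecutive timestamps now differ in their FIRST character,
        // so they land in different parts of the sorted key space.
        System.out.println(reverse(1371686400L)); // "0046861731"
        System.out.println(reverse(1371686401L)); // "1046861731"
    }
}
```

The trade-off, relevant to this thread, is that reversed keys destroy range-scan locality over time, so time-range queries become scatter reads.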
Thanks Asaf, I made the response inline.
On Thu, Jun 20, 2013 at 9:32 AM, Asaf Mesika asaf.mes...@gmail.com wrote:
On Thu, Jun 20, 2013 at 12:59 AM, yun peng pengyunm...@gmail.com wrote:
Thanks for the reply. The idea is interesting, but in practice, our client doesn't know in advance how
Hi, All,
Our use case requires persisting a stream into a system like HBase. The stream data is in the format (timestamp, value); in other words, the timestamp is used as the rowkey. We want to explore whether HBase is suitable for this kind of data.
The problem is that the domain of the row key (or timestamp)
servers.
On Wednesday, June 19, 2013, yun peng wrote:
Hi, All,
Our use case requires persisting a stream into a system like HBase. The stream data is in the format (timestamp, value); in other words, the timestamp is used as the rowkey. We want to explore whether HBase is suitable for this kind of data
well
On Thursday, June 20, 2013, yun peng wrote:
It is our requirement that one batch of data writes (say, of Memstore size) should be in one RS. And a salting prefix, while it evens the load, may not have this property.
Our problem is really how to manipulate/customise the mapping of the row key
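One way to reconcile even load with the per-batch locality asked for above is to derive the salt from a coarse time bucket rather than from a hash of the full key, so that one batch window shares a prefix (one region) while successive windows rotate across buckets. This is an illustrative sketch, not the poster's design; BUCKETS and WINDOW are assumed tuning knobs, not HBase settings:

```java
// Sketch: time-bucket salting that keeps one batch window on one region.
public class TimeBucketKey {
    public static final int BUCKETS = 8;     // e.g. roughly the number of region servers
    public static final long WINDOW = 3600;  // seconds of data covered by one "batch"

    // Keys inside one window share a prefix; successive windows rotate.
    public static String key(long ts) {
        long bucket = (ts / WINDOW) % BUCKETS;
        return bucket + "-" + ts;
    }

    public static void main(String[] args) {
        System.out.println(key(0));      // "0-0"
        System.out.println(key(3600));   // "1-3600"
        System.out.println(key(28800));  // "0-28800" — buckets wrap around
    }
}
```

A reader doing a time-range scan still has to know the window-to-bucket mapping, but unlike hash salting it only touches the buckets its range actually covers.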
Hi, All
HBase can block online write operations when there is too much data in the memstore (to be more efficient for the potential compaction incurred by this flush when there are many files on disk). This blocking effect is also observed by others (e.g.,
this by buffering in the memstore, but if
you sustain the write load something will have to slow down the writers.
Granted, this could be done a bit more gracefully.
-- Lars
From: yun peng pengyunm...@gmail.com
To: user@hbase.apache.org
Sent: Sunday, June 9
Hi, All,
Given a large-sized BlockCache, I am wondering how HBase searches for the block of a requested rowkey. Is there any index structure (in-memory) inside the BlockCache? Or does it search in a brute-force way (which seems unlikely though :-...)
I am aware of the reference guide online, but have not
, 2013 at 4:40 PM, yun peng pengyunm...@gmail.com wrote:
Hi, All,
Given a large-sized BlockCache, I am wondering how HBase searches for the block of a requested rowkey. Is there any index structure (in-memory) inside the BlockCache? Or does it search in a brute-force way (which seems unlikely
Hi, All,
I am wondering what exactly is stored in the BlockCache: Is it the same raw blocks as in the HFile? Or does HBase merge several raw blocks and store the merged block in the cache to serve future queries?
To be more specific, when a get operation entails loading of block b1 from
hfile f1, and of
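For intuition, a cache of raw blocks keyed by (HFile name, block offset) with LRU eviction can be modeled like this. It is a toy sketch only, not HBase's actual LruBlockCache (which keys blocks the same general way but adds priorities, size accounting, and concurrency):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy block cache: raw blocks cached unmerged, keyed by file + offset.
public class ToyBlockCache {
    private final LinkedHashMap<String, byte[]> cache;

    public ToyBlockCache(int capacity) {
        // accessOrder=true makes iteration order least-recently-used first,
        // so removeEldestEntry evicts the LRU block once we exceed capacity.
        this.cache = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > capacity;
            }
        };
    }

    private static String key(String hfile, long offset) { return hfile + ":" + offset; }

    public void put(String hfile, long offset, byte[] block) { cache.put(key(hfile, offset), block); }
    public byte[] get(String hfile, long offset) { return cache.get(key(hfile, offset)); }
    public int size() { return cache.size(); }
}
```

The point of the sketch: blocks from different HFiles never merge in the cache; a get that needs two blocks does two cache lookups.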
Hi, All,
I'd like to add a global timestamp oracle on ZooKeeper to assign a globally unique timestamp to each Put/Get issued from the HBase cluster. The reason I put it on ZooKeeper is that each Put/Get needs to go through it, and unique timestamps need some global centralised facility. But I am
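As an aside, a common way to mint globally unique, roughly time-ordered timestamps without a coordination round trip per operation is the Snowflake-style layout below. The bit widths (41/10/12) and the idea that each node is handed a unique id once at startup (a ZooKeeper sequential node could do that) are assumptions for illustration, not the poster's design:

```java
// Illustration: per-node unique timestamp = millis | nodeId | counter.
public class TimestampOracle {
    private final long nodeId;      // assumed: unique per node, assigned once
    private long lastMillis = -1;
    private long counter = 0;

    public TimestampOracle(long nodeId) { this.nodeId = nodeId; }

    // Unique and roughly time-ordered; the counter disambiguates
    // multiple calls within the same millisecond.
    public synchronized long next(long nowMillis) {
        if (nowMillis == lastMillis) {
            counter++;
        } else {
            lastMillis = nowMillis;
            counter = 0;
        }
        return (nowMillis << 22) | (nodeId << 12) | (counter & 0xFFF);
    }
}
```

This trades the single-point-of-contact property of a central oracle for throughput: ZooKeeper is touched once per node, not once per Put/Get.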
server. I'm not sure it's a
good idea.
JM
2013/4/16 yun peng pengyunm...@gmail.com
Hi, All,
I'd like to add a global timestamp oracle on ZooKeeper to assign a globally unique timestamp to each Put/Get issued from the HBase cluster. The reason I put it on ZooKeeper is that each Put/Get needs to go
Hi, All
I am asking about the source code for the file selection mechanism for minor compaction in HBase. I am aware that HBase minor compaction selects files based on size and generation time. But I cannot find the source file that implements this mechanism in HBase. I am interested in the algorithm
which is the default:
public class RatioBasedCompactionPolicy extends CompactionPolicy {
Cheers
On Thu, Apr 11, 2013 at 7:52 AM, yun peng pengyunm...@gmail.com wrote:
Hi, All
I am asking about the source code for the file selection mechanism for minor compaction in HBase. I am aware
that touched them.
Cheers
On Thu, Apr 11, 2013 at 8:04 AM, yun peng pengyunm...@gmail.com wrote:
Sorry for not mentioning the version. I am looking at version 0.94.2, in which the file RatioBasedCompactionPolicy does not seem to exist yet.
regards,
Yun
On Thu, Apr 11, 2013 at 11:00 AM, Ted Yu yuzhih
read the HRegionServer#put(…) method(s), which is the server end of it; there you'll see the server-side RPC handling.
On Thu, Dec 6, 2012 at 6:32 PM, yun peng pengyunm...@gmail.com wrote:
Hi, since on client side HBase can immediately send Put() by turning off
setAutoFlush(), I am wondering if Put
Hi, I have a question on how multiple Puts are executed when they are issued against the same region server.
For example, in the case of asynchronously executing Put() using setAutoFlush(false), there will be multiple Puts in the writeBuffer. Or one can use the HTable API put(List<Put> puts), which directly
to the region server in one RPC call if they are for the same
region server.
Thanks,
Jimmy
On Thu, Dec 6, 2012 at 10:34 AM, yun peng pengyunm...@gmail.com wrote:
Hi, I have a question on how multiple Puts are executed when they are
issued against the same region server.
For example
MBeanInfo mbeanInfo = mbsc.getMBeanInfo(rsStat);
for (MBeanAttributeInfo mbAttrInfo : mbeanInfo.getAttributes()) {
System.out.println(mbAttrInfo);
}
Regards,
Mikael.S
On Thu, Nov 8, 2012 at 8:31 PM, yun peng pengyunm...@gmail.com wrote:
Yes, JMX exposes compaction time. My cluster has JMX enabled, and I can
Hi, I want to profile the number of disk accesses (both random and sequential) issued from HBase (into HDFS). For disk reads, I have tried using blockCacheMissCount, which seems to work. But is it the correct way to count reads (I can't confirm it from the HBase documentation)?
For disk writes, I can't find any
compaction is completed.
But you can see that major compaction is in progress from the web UI.
Sorry if I am wrong here.
Regards
Ram
On Thu, Nov 8, 2012 at 11:38 AM, yun peng pengyunm...@gmail.com
wrote:
Hi, All,
I want to measure the duration of a major compaction in HBase. Since
Hi, I have fixed the problem. I found the
MBeanServerConnection.getAttribute(ObjectName name, String attribute) is
the one useful here. Thanks to all.
Yun
On Thu, Nov 8, 2012 at 3:58 PM, yun peng pengyunm...@gmail.com wrote:
Hi, Mikael and Jeremy, thanks for your detailed answers. I have tried
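The fix described in this thread, reading a single attribute with MBeanServerConnection.getAttribute(ObjectName, String) instead of dumping every MBeanAttributeInfo, looks roughly like this. The HBase region-server metrics ObjectName varies by version, so a standard JVM bean stands in here as an assumption:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Read one named attribute from an MBean rather than listing them all.
public class JmxRead {
    public static Object read(String name, String attribute) throws Exception {
        // MBeanServer extends MBeanServerConnection; a remote JMXConnector
        // connection (as in this thread) would be used the same way.
        MBeanServer mbsc = ManagementFactory.getPlatformMBeanServer();
        return mbsc.getAttribute(new ObjectName(name), attribute);
    }

    public static void main(String[] args) throws Exception {
        // An HBase metrics bean (e.g. the region-server statistics MBean
        // discussed here) would only change the ObjectName string.
        System.out.println(read("java.lang:type=Runtime", "Uptime"));
    }
}
```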
Hi, All,
I want to measure the duration of a major compaction in HBase. Since the
function call majorCompact is asynchronous, I may need to manually check
when the major compaction is done. Does HBase (as of version 0.92.4)
provide an interface to determine completion of major compaction? Thanks.
HBase
calls to see where time is being spent on the client side and then go from
there.
Do your reads read same data set as writes?
On Sat, Nov 3, 2012 at 7:09 AM, yun peng pengyunm...@gmail.com wrote:
Hi, the throughput for write-only workload is 450 ops/sec and for
read-only
900 ops/sec
, is there any way to verify this?
Thanks,
Yun
On Sat, Nov 3, 2012 at 1:04 AM, Mohit Anchlia mohitanch...@gmail.comwrote:
What load do you see on the system? I am wondering if bottleneck is on the
client side.
On Fri, Nov 2, 2012 at 9:07 PM, yun peng pengyunm...@gmail.com wrote:
Hi, All,
In my
Hi, All,
I have an HBase cluster running with the property hbase.zookeeper.quorum set to the hostname node3. When HBase starts, ZooKeeper cannot start properly, and it throws:
java.io.IOException: Could not find my address: pc270.xxx in the list of ZooKeeper quorum servers
I have node3 configured in
Thanks for the reply, yes, it works if I use IP address.
regards,
Yun
On Fri, Oct 26, 2012 at 11:37 PM, Stack st...@duboce.net wrote:
On Fri, Oct 26, 2012 at 2:44 PM, yun peng pengyunm...@gmail.com wrote:
Hi, All,
I have an HBase cluster running with the property hbase.zookeeper.quorum set
=
99, VERSIONS => 4}
COLUMN                 CELL
 cf:c1                 timestamp=99, value=v1
1 row(s) in 0.0150 seconds
Let me know how this works for you (generally). This is a new feature I added
to 0.94 to support true time-range queries.
-- Lars
- Original Message -
From: yun
Hi, All,
I am trying to understand how and when HBase sets the modification timestamp for HFiles. My original intention is to get a timestamp for when an HFile is generated (the time of the last write to an HFile during compaction). StoreFile.getModificationTime() looks like a good candidate, but after initial tests, it has
:54 PM, lohit lohit.vijayar...@gmail.com wrote:
The last 3 digits of the timestamp you got from fetching getModificationTime() are all zeros: 1350764448000. Do you get all zeros all the time? Maybe the time precision is not what you are looking for here?
2012/10/20 yun peng pengyunm...@gmail.com
, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Hi Yun Peng
You want to know the creation time? I can see the getModificationTime() API. Internally it is used to get the store file with the minimum timestamp. I have not tried it out. Let me know if it solves your purpose.
Just try
Hi, All,
I want to find the internal code in HBase where the physical deletion of a record occurs.
Some of my understanding:
Correct me if I am wrong. (It is largely based on my experience and even speculation.) Logically deleting a KeyValue in HBase is performed by writing a tombstone marker (by Delete() per
Please refer to StoreScanner.java and how ScanQueryMatcher.match() works. That is where we decide if any KV has to be skipped due to an already-deleted tombstone marker.
Forgot to tell you about this.
Regards
Ram
-Original Message-
From: yun peng [mailto:pengyunm...@gmail.com
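The tombstone-skip decision described above can be modeled with a toy filter: a delete marker hides every version of the same key whose timestamp is at or below the marker's. This is a simplification for intuition only; the real logic lives in ScanQueryMatcher.match() and also handles column/family/version delete types omitted here:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of tombstone filtering during a scan or compaction.
public class TombstoneFilter {
    public static class Cell {
        final String key; final long ts; final boolean delete;
        public Cell(String key, long ts, boolean delete) {
            this.key = key; this.ts = ts; this.delete = delete;
        }
    }

    public static List<Cell> visible(List<Cell> cells) {
        // Pass 1: collect the newest tombstone per key.
        Map<String, Long> tombstones = new HashMap<>();
        for (Cell c : cells)
            if (c.delete) tombstones.merge(c.key, c.ts, Math::max);
        // Pass 2: keep only data cells newer than their key's tombstone.
        List<Cell> out = new ArrayList<>();
        for (Cell c : cells)
            if (!c.delete && c.ts > tombstones.getOrDefault(c.key, Long.MIN_VALUE))
                out.add(c);
        return out;
    }
}
```

In HBase the physical removal then happens when a major compaction rewrites the files without the masked cells and drops the markers themselves.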
Thanks for the notes. I am running the project configuration for comparison (as the worst case for locality)...
On the other hand, even if I make them colocate, the problem persists, as the property hbase.zookeeper.quorum has to be a fully qualified name..
Does this problem have anything to do with DNS
Hi, All
Given that an ``hfile`` in ``hbase`` is immutable, I want to know the timestamp of when the ``hfile`` was generated. Does ``hbase`` have an API to allow user applications to know this? I need to know it in the postCompact() stage.
As a first attempt, I have tried using