+1
- downloaded the -hadoop2 and -src tarballs
- inspected both tarballs, the documentation, CHANGES.txt, etc
- installed in local and distributed mode (with Hadoop 2.2.0)
- in both local and distributed mode, inserted some rows, flushed, compacted,
scanned, etc
- spot checked the UI pages
my rowkey is strField,intField
I want to scan it in decreasing order of the int field. How can I make the scan reversed?
if the row key is Bytes.toBytes(intField) + Bytes.toBytes(strField),
then the order is increasing.
one solution is to replace intField with -intField, but that breaks if
intField == Integer.MIN_VALUE (negating it overflows back to MIN_VALUE).
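A common trick (a sketch, not from the thread; names are mine) is to make the int sort descending at the byte level: flip the sign bit so signed ints compare correctly as unsigned bytes, then complement the remaining bits to reverse the order. Both steps collapse into a single XOR with 0x7FFFFFFF, and it handles Integer.MIN_VALUE:

```java
import java.nio.ByteBuffer;

public class DescendingIntKey {
    // XOR with 0x7FFFFFFF == flip the sign bit (makes signed ints
    // unsigned-comparable) then invert all bits (reverses the order).
    // Well-defined for the full int range, including MIN_VALUE.
    static int encodeDescending(int v) {
        return v ^ 0x7FFFFFFF;
    }

    // Big-endian bytes, same layout as HBase's Bytes.toBytes(int).
    static byte[] toBytes(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }

    public static void main(String[] args) {
        int[] ascending = {Integer.MIN_VALUE, -1, 0, 1, Integer.MAX_VALUE};
        for (int i = 1; i < ascending.length; i++) {
            // Larger original value => smaller encoded key => scans first.
            if (Integer.compareUnsigned(encodeDescending(ascending[i]),
                                        encodeDescending(ascending[i - 1])) >= 0) {
                throw new AssertionError("order broken at index " + i);
            }
        }
        System.out.println("descending encoding ok");
    }
}
```

Prepend the encoded int to the strField bytes and a plain forward scan returns rows in decreasing intField order.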
How about Reverse Scan? https://issues.apache.org/jira/browse/HBASE-4811
On Thu, Mar 27, 2014 at 4:24 PM, Li Li fancye...@gmail.com wrote:
my rowkey is strField,intField
I want to scan it by decreasing order of the int field, how to make it
reversed?
if the row key is
great feature but I am using 0.94 now
On Thu, Mar 27, 2014 at 4:49 PM, haosdent haosd...@gmail.com wrote:
How about Reverse Scan? https://issues.apache.org/jira/browse/HBASE-4811
On Thu, Mar 27, 2014 at 4:24 PM, Li Li fancye...@gmail.com wrote:
my rowkey is strField,intField
I want to scan
Hey all,
I've put some data (~ 2.5 TB) into an HBase table on a small cluster (8
dn/rs + 1 master, max region size 10 GB, having ~ 350 regions) and
collected something around 1500 entries in my compaction queue (~ 200
per regionserver). Unfortunately, HBase is now slow when pushing new
data
0.94 also supports reverse scan.
https://issues.apache.org/jira/browse/HBASE-4811?focusedCommentId=13839323&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13839323
On Thu, Mar 27, 2014 at 4:58 PM, Li Li fancye...@gmail.com wrote:
great feature but I am using 0.94
FileSystem.append is unsupported. If you choose Google Cloud Storage as
your default file system
(https://developers.google.com/hadoop/setting-up-a-hadoop-cluster#choosingafilesystem),
Thanks for your answer, does HBase use this operation under the hood?
/David
Reverse scan is slower compared to forward scan.
Depending on access pattern, storing the int field in decreasing order may
be desirable.
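For reference, the client-side call looks roughly like the sketch below (HBase 0.98+, or 0.94 with the HBASE-4811 patch applied; `table` and the start key are illustrative, not from the thread):

```java
// Sketch only: assumes an open HTableInterface `table`.
Scan scan = new Scan();
// For a reversed scan, the start row is the *highest* key you want.
scan.setStartRow(Bytes.toBytes("someStrField" + '\uffff'));
scan.setReversed(true);
ResultScanner scanner = table.getScanner(scan);
for (Result r : scanner) {
    // rows arrive in decreasing rowkey order
}
scanner.close();
```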
Cheers
On Mar 27, 2014, at 4:54 AM, haosdent haosd...@gmail.com wrote:
0.94 also supports reverse scan.
Hi St. Ack,
Thanks for your reply. Yes I wanted to pull everything related to HBase. We
used 0.94.5 before and the dependency looks as follows:
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.5</version>
</dependency>
So, I expect that the HBase 0.96.0
From src/main/ruby/hbase/admin.rb :
def major_compact(table_or_region_name, family = nil)
if family == nil
@admin.majorCompact(table_or_region_name)
else
# We are major compacting a column family within a region or table.
Hey,
Thank you for your reply. Wasn't aware of the async behaviour
(output/help of hbase shell is a little bit misleading in this respect).
Is there a way to speed the process up or ensure work starts on it
immediately?
Regards
Sven
On 27.03.2014 15:52, Ted Yu wrote:
From
Hi All,
I am frustrated with Hbase API. I want to list all the region servers:
regions on the region server and store files in the region. What classes should
I use for that?
Libo
Sometimes when I do **OFFLINE** snapshot of one of my large tables, the
snapshot command takes longer than 1 minute (60,000 ms) and fails.
Is there any possibility to increase the timeout value of the snapshot
command?
Here is the error message:
Make sure DNS for the cluster resolves reverse lookups the same from the
master as from the region servers. Will fill in more details when I
get into the office.
Sent from my iPhone
On Mar 27, 2014, at 6:38 AM, Alex Simenduev shamil...@gmail.com wrote:
Sometimes when I do
hbase.snapshot.master.timeout.millis allows you to control the timeout I
believe.
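For example, in hbase-site.xml on the master (property name as given above; the value is in milliseconds, and 300000 is only an illustrative choice, not a recommendation):

```xml
<property>
  <name>hbase.snapshot.master.timeout.millis</name>
  <value>300000</value>
</property>
```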
On Thu, Mar 27, 2014 at 9:38 AM, Alex Simenduev shamil...@gmail.com wrote:
Sometimes when I do **OFFLINE** snapshot of one of my large tables, the
snapshot command takes longer than 1 minute (60,000 ms) and
On Thu, Mar 27, 2014 at 5:41 AM, David Koch ogd...@googlemail.com wrote:
FileSystem.append is unsupported. If you choose Google Cloud Storage as
your default file system
(https://developers.google.com/hadoop/setting-up-a-hadoop-cluster#choosingafilesystem),
Thanks for your answer,
+1
Unit test suite passes 100% 25 times out of 25 runs.
Cluster testing looks good with LoadTestTool, YCSB, ITI, and ITBLL.
An informal performance test on a small cluster comparing 0.98.0 and 0.98.1
indicates no serious perf regressions. See email to dev@ titled Comparison
between 0.98.0 and
HBaseAdmin#getClusterStatus can be used to list all the region servers.
For each regionserver, you can use HBaseAdmin#getOnlineRegions to list the
regions on it.
As for store files, they keep changing due to compactions and memstore flushes.
On Thu, Mar 27, 2014 at 8:41 AM, Libo Yu yu_l...@hotmail.com
If you really need to get the store files info, you can take a look at
ProtobufUtil#getStoreFiles if you use 0.96+.
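Putting the two replies together, a rough sketch against the 0.96-era client API (error handling omitted; the method names are the ones cited above):

```java
// Sketch: list each region server and its online regions.
HBaseAdmin admin = new HBaseAdmin(conf);
try {
    ClusterStatus status = admin.getClusterStatus();
    for (ServerName server : status.getServers()) {
        System.out.println(server.getServerName());
        for (HRegionInfo region : admin.getOnlineRegions(server)) {
            System.out.println("  " + region.getRegionNameAsString());
        }
    }
} finally {
    admin.close();
}
```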
On Thu, Mar 27, 2014 at 9:31 AM, Jimmy Xiang jxi...@cloudera.com wrote:
HBaseAdmin#getClusterStatus can be used to list all the region servers.
For each regionserver, you can
Another option is to use Apache Phoenix and let it do these things for you:
CREATE TABLE my_table(
intField INTEGER,
strField VARCHAR,
CONSTRAINT pk PRIMARY KEY (intField DESC, strField));
Thanks,
James
@JamesPlusPlus
http://phoenix.incubator.apache.org/
On Thu, Mar
On Thu, Mar 27, 2014 at 6:36 AM, rakesh rakshit ihavethepotent...@gmail.com
wrote:
Hi St. Ack,
Thanks for your reply. Yes I wanted to pull everything related to HBase. We
used 0.94.5 before and the dependency looks as follows:
<dependency>
  <groupId>org.apache.hbase</groupId>
Hi St. Ack,
I got your point now.
Thanks for mentioning that the module has been broken into multiple
modules. I will for sure try including those multiple modules.
Thanks and Regards,
Rakesh
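For anyone following along, the 0.96 modular equivalent of the old monolithic dependency is roughly the following (the version string is illustrative; pick the -hadoop1 or -hadoop2 build to match your cluster):

```xml
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>0.96.0-hadoop2</version>
</dependency>
```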
On Thu, Mar 27, 2014 at 11:07 PM, Stack st...@duboce.net wrote:
On Thu, Mar 27, 2014 at 6:36 AM,
I am trying to get the value of the column I just incremented
and I can't seem to find it.
Here is a snippet of what I am trying...
for(Cell cell:result.rawCells()) {
byte[] result_key =
CellUtil.cloneQualifier(cell);
byte[] value
In the for loop, have you tried using this method from KeyValueUtil?
public static KeyValue ensureKeyValue(final Cell cell) {
Which HBase version are you using?
Cheers
On Thu, Mar 27, 2014 at 12:01 PM, Todd Gruben tgru...@gmail.com wrote:
I am trying to get the value of the column I
Haven't tried that, I'll give it a go.
I'm running version 0.96.1.1-cdh5.0.0-beta-2.
cheers,
On Thu, Mar 27, 2014 at 2:23 PM, Ted Yu yuzhih...@gmail.com wrote:
In the for loop, have you tried using this method from KeyValueUtil ?
public static KeyValue ensureKeyValue(final Cell cell) {
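If the goal is just the new counter value, a sketch along these lines may be enough (CellUtil.cloneValue and Bytes.toLong are real API calls; the surrounding code is illustrative, and counters are stored as 8-byte longs):

```java
// Increment returns a Result holding the post-increment cell values.
Result result = table.increment(increment);
for (Cell cell : result.rawCells()) {
    byte[] qualifier = CellUtil.cloneQualifier(cell);
    long newValue = Bytes.toLong(CellUtil.cloneValue(cell));
    System.out.println(Bytes.toString(qualifier) + " = " + newValue);
}
```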
Hi,
Apologies if this isn't an appropriate topic for this mailing list or if the
topic has already been covered - I could not find any info in the archives for
the last 6 months or so.
I am trying to use the 0.14 YCSB benchmark for HBase version 0.94.6.1.3.3.0-58.
However, any attempt to run
I've been running the integration tests and have been having failures with
IntegrationTestBigLinkedList, IntegrationTestIngest. I've been digging
into ITBLL -- I likely have some sort of MR config problems.
ITLoadAndVerify, ImportTsv, ManyRegions, Bulkload, and Mttr are passing
consistently.
For the
YCSB 0.14, by default, compiles against HBase 0.92. Either update the
pom.xml to change the hbase version to 0.94.x and recompile or put the YCSB
jar (the fat one) at the end of the classpath.
On Thu, Mar 27, 2014 at 4:11 PM, Tapper, Gunnar gunnar.tap...@hp.com wrote:
Hi,
Apologies if this
Hi Ted,
If the region is already compacted and has no new data, does that mean
major compaction will never be triggered and data locality will never
be recovered? Thanks.
Doing major compaction on this region should restore data locality.
Cheers
On Wed, Mar 26, 2014 at 4:02 PM, Libo Yu
I responded to your post on the hadoop user list, but you will need to do
manual major compactions to recover locality. It may recover automatically
over time, but only if there is data actively coming in.
On Thu, Mar 27, 2014 at 8:44 PM, Libo Yu yu_l...@hotmail.com wrote:
Hi Ted,
If the
Hi all,
If I use FSUtils.computeHDFSBlocksDistribution, I can get
HDFSBlocksDistribution for a file. If
I call its getBlockLocalityIndex(String hostname), does it return the locality
index for the file or all files on the host?
If the return is for the file, how to get locality index for all
Hi all,
Does anybody know how table fragmentation is defined? It is returned by
FSUtils.getTotalTableFragmentation.
Thanks
Libo
bq. does it return the locality index for the file or all files on the host?
the locality index for all files of online regions on the host.
On Thu, Mar 27, 2014 at 8:03 PM, Libo Yu yu_l...@hotmail.com wrote:
Hi all,
If I use FSUtils.computeHDFSBlocksDistribution, I can get
See javadoc of this method:
public static Map<String, Integer> getTableFragmentation(
final FileSystem fs, final Path hbaseRootDir)
It scans HDFS for all tables' directories.
For each region and each family, it maintains a fragmentation count (cfFrag)
for families where there is more than one