Hi Talat,
That should work.
Another example would be something like below.
test = LOAD 'hbase://$TEST'
    USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
        'cf_data:name cf_data:age',
        '-loadKey true -maxTimestamp $test_date')
    AS (id, name, age);
On Wed, Jun 10, 2015 at 12:57 PM, Talat Uyarer
Yes. Let's say, from the HBase shell, I would like to filter
(SingleColumnValueFilter) rows on the basis of a cell value that is stored as an
int.
Let's assume the column name and value to be USER:AGE=5.
On Tue, Jun 9, 2015 at 9:26 PM, Ted Yu yuzhih...@gmail.com wrote:
bq. if we have to match against
Hi Ted Yu,
I guess Krishna was mentioning Pig's HBaseStorage class; I found this out
by searching for the class on Google. IMHO, I have found a solution for my
problem: I can use the Scan.setTimeRange [0] method. If I want to get
records with timestamps smaller than a given timestamp, minTimestamp is set to 0
and maxTimestamp is set to the
On Mon, Jun 8, 2015 at 10:27 PM, anil gupta anilgupt...@gmail.com wrote:
So, if we have to match against non-string data in the HBase shell, should we
always use double quotes?
Double quotes mean the shell (Ruby) will interpret and undo any escaping
-- e.g. showing as hex -- of binary
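To see why the quoting matters, look at what an int actually is inside HBase: `Bytes.toBytes(5)` stores the value as four big-endian bytes, so a shell filter has to match the escaped binary form (e.g. `\x00\x00\x00\x05`), not the string "5". A minimal stdlib-only sketch of that encoding, with `ByteBuffer` standing in for HBase's `Bytes` utility:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class IntEncoding {
    public static void main(String[] args) {
        // HBase's Bytes.toBytes(int) writes the value big-endian,
        // which is also ByteBuffer's default byte order.
        byte[] encoded = ByteBuffer.allocate(4).putInt(5).array();
        System.out.println(Arrays.toString(encoded)); // [0, 0, 0, 5]
        // This is why a shell filter has to compare against the escaped
        // bytes "\x00\x00\x00\x05" rather than the string "5".
    }
}
```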
Can you pastebin the complete stack trace for the region server ?
Thanks
On Jun 9, 2015, at 10:52 PM, mukund murrali mukundmurra...@gmail.com wrote:
We are using HBase 1.0.0. Just before the client stalled, there were a
few handler threads in the RS that were blocked on MVCC (thread stack
Kindly look at this for the full trace of the RS.
http://pastebin.com/VS17vVd8
Thanks
On Wed, Jun 10, 2015 at 11:35 AM, Ted Yu yuzhih...@gmail.com wrote:
Can you pastebin the complete stack trace for the region server ?
Thanks
On Jun 9, 2015, at 10:52 PM, mukund murrali
Hi Devaraj,
Thanks for your suggestion.
Yes, I coded it like this as per your suggestion.
public static void put_result(ResultScanner input) throws IOException {
    Iterator<Result> iterator = input.iterator();
    while (iterator.hasNext()) {
        Result next = iterator.next();
Hello,
I am an intern at Yahoo-Flickr. During one of my projects I wanted to scan an
HBase table first based on a timestamp range and then apply some other filter.
However, a timestamp-range filter is not available. Is there any way this can be
done, i.e. apply a timestamp-range filter together with another filter
Hi all,
We are using HBase 0.96 with Hadoop 2.2.0. Recently we found there are some
SCAN operations lasting for more than 1 hour, which leads to heavy network
traffic, because some data is not
stored at the local data node and the region is very big, about 100G-500G.
With heavy network traffic, the
You can utilize the following method of Scan:
public Scan setTimeRange(long minStamp, long maxStamp)
To apply Filter, use this:
public Scan setFilter(Filter filter)
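Putting the two together, a scan that first restricts by timestamp range and then applies another filter might look like the sketch below. This is only an illustration against the hbase-client API: the `table` handle and `cutoffTimestamp` variable are assumed to exist, and the filter simply reuses the USER:AGE=5 example from earlier in the thread.

```java
Scan scan = new Scan();
// Server-side timestamp restriction: only cells whose timestamp falls
// in [minStamp, maxStamp) are considered.
scan.setTimeRange(0L, cutoffTimestamp);
// Any additional filtering is layered on top with setFilter().
scan.setFilter(new SingleColumnValueFilter(
        Bytes.toBytes("USER"), Bytes.toBytes("AGE"),
        CompareFilter.CompareOp.EQUAL, Bytes.toBytes(5)));
ResultScanner scanner = table.getScanner(scan);
```

Because setTimeRange() is applied server-side, the other filter only sees cells that already passed the time-range check.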
FYI
On Wed, Jun 10, 2015 at 10:33 AM, Devpriya Dave
devpr...@yahoo-inc.com.invalid wrote:
Hello
I am an Intern at
I'm not aware of anything in version 0.96 that will limit the scan for
you - you may have to do it in your client yourself. If you're
willing to upgrade, do check out the throttling available in HBase
1.1:
https://blogs.apache.org/hbase/entry/the_hbase_request_throttling_feature
On Wed, Jun
Greetings HBase users and developers,
On the Apache HBase blog at https://blogs.apache.org/hbase we have just
published the first in a series of posts on Why We Use Apache HBase, in
which we let HBase users and developers borrow our blog so they can
showcase their successful HBase use cases, talk
When in doubt, printf() can be your friend.
Yeah, it's primitive (old school) but effective.
Then you will know what you’re adding to your list for sure.
On Jun 10, 2015, at 12:39 PM, beeshma r beeshm...@gmail.com wrote:
Hi Devaraj
Thanks for your suggestion.
Yes i coded like this as
Can you provide the full code for Conver() and Listclass?
Giving snippets of code is insufficient.
My suspicion is a bug in your code.
You might want to print out the output of Conver(next) before passing to
Listclass.add()
and print out the entire list of Listclass elements, during each
threads?
So, regardless of your Hadoop settings, if you want something faster, you
can use one thread for a timer and run the request in another. If you
hit your timeout before you get a response, you can stop your thread.
(YMMV depending on side effects… )
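That pattern can be sketched with plain java.util.concurrent, no HBase needed: run the request in one thread and let the caller give up after a deadline. The slow "request" here is just a stand-in for a scan.

```java
import java.util.concurrent.*;

public class ScanWithTimeout {
    // Runs the task in a worker thread and gives up after timeoutMillis.
    public static String callWithTimeout(Callable<String> task, long timeoutMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> request = pool.submit(task);
        try {
            // Wait at most timeoutMillis, then stop waiting.
            return request.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            request.cancel(true); // interrupt the worker thread
            return "timed out";
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A fast "request" finishes well inside the deadline...
        System.out.println(callWithTimeout(() -> "scan result", 2000));
        // ...while a slow one is cut off by the timer.
        System.out.println(callWithTimeout(() -> {
            Thread.sleep(5000); // simulated long scan
            return "late";
        }, 100));
    }
}
```

As the quoted message says, mileage varies with side effects: cancelling the future interrupts the worker, but it does not undo any work the request already did on the server.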
On Jun 10, 2015, at
Hi hbase users,
[Reposting this issue]
Hbase-client 0.94.x works fine in the Karaf environment. We are working on a
task to use the latest stable hbase-client 1.0.x version. Since hbase-client
requires hbase-common, with both referred to in our pom.xml, classes from
package
Hi, Vladimir Rodionov,
Thanks for the reply.
The problem we have encountered is that
we found there are some
SCAN operations (with a start key, an end key and a filter) lasting for more
than 1 hour, which leads to heavy network
traffic, because some data is not stored at the local data node and the
region is very
Hi Dave,
For now we will not upgrade the version, so is there something we can use to
monitor the long scans on 0.96?
2015-06-11 2:00 GMT+08:00 Dave Latham lat...@davelink.net:
I'm not aware of anything in version 0.96 that will limit the scan for
you - you may have to do it in your client
Louis,
What do you mean by "monitor the long scan"? If you need to throttle
network IO during a scan, you have to
do it on the client side. Take a look at
org.apache.hadoop.hbase.io.hadoopbackport.ThrottledInputStream
as an example; you will need to implement something similar on top of
ResultScanner -
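A client-side throttle of that shape boils down to simple arithmetic: after each Result (or batch of bytes), compare the achieved rate to the target and sleep off the difference. A stdlib-only sketch of just that calculation (ThrottledInputStream does essentially this internally; the wrapper around ResultScanner itself is left out, and the class name here is made up):

```java
public class ScanThrottle {
    private final long maxBytesPerSec;
    private final long startMillis;
    private long totalBytes;

    public ScanThrottle(long maxBytesPerSec, long startMillis) {
        this.maxBytesPerSec = maxBytesPerSec;
        this.startMillis = startMillis;
    }

    // Record bytes just read and return how long the caller should
    // sleep (in ms) to stay at or below maxBytesPerSec.
    public long delayAfter(long bytesRead, long nowMillis) {
        totalBytes += bytesRead;
        // Earliest time at which totalBytes is allowed at the target rate.
        long earliestAllowed = startMillis + totalBytes * 1000 / maxBytesPerSec;
        return Math.max(0, earliestAllowed - nowMillis);
    }

    public static void main(String[] args) {
        // Target 1000 bytes/sec, clock started at t=0.
        ScanThrottle t = new ScanThrottle(1000, 0);
        // 500 bytes arrive after 100 ms: they are "allowed" at t=500 ms,
        // so the caller should sleep 400 ms before the next batch.
        System.out.println(t.delayAfter(500, 100)); // 400
        // Another 500 bytes at t=1200 ms: allowed at t=1000 ms, no delay.
        System.out.println(t.delayAfter(500, 1200)); // 0
    }
}
```

The caller would invoke delayAfter() with the serialized size of each Result and Thread.sleep() for the returned duration.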