Re: [ANNOUNCE] New HBase committer Baiqiang Zhao

2021-07-10 Thread Manjeet Singh
Congratulations 

On Sun, Jul 11, 2021, 05:57 Bharath Vissapragada 
wrote:

> Congratulations Baiqiang.
>
> On Sat, Jul 10, 2021 at 12:18 PM Nick Dimiduk  wrote:
>
> > Hi everyone,
> >
> > On behalf of the Apache HBase PMC I am pleased to announce that Baiqiang
> > Zhao has accepted the PMC's invitation to become a committer on the
> > project!
> >
> > We appreciate all of the great contributions Baiqiang has made to
> > the community thus far and we look forward to his continued involvement.
> >
> > Allow me to be the first to congratulate Baiqiang on his new role!
> >
> > Thanks,
> > Nick
> >
>


Re: Request To Get Slack Invite

2020-07-15 Thread Manjeet Singh
Done

On Wed, Jul 15, 2020, 22:13 Diptam Ghosh  wrote:

> Hi Team,
>
> I have been working as a Java developer for the last two years and have
> recently worked on some of the HBase JIRAs. I eagerly want to contribute;
> for that I would like to be part of the Slack channel and its active
> discussions.
> Please send me the invite link.
>
>
> Thanks and Regards
> Diptam ghosh
>


Re: Please add me to HBase project

2019-05-08 Thread Manjeet Singh
Can you please do it for me? I requested it earlier but I am not able to
do it myself.

Manjeet

On Thu, 9 May 2019, 00:13 Sean Busbey,  wrote:

> you should be all set now.
>
> On Wed, May 8, 2019 at 1:04 PM Caroline Zhou
>  wrote:
> >
> > Hi,
> >
> > I'd like to contribute to opensource HBase. Could you please add me to
> the
> > HBase project so I can assign issues to myself and submit patches? My
> > username is caroliney14.
> >
> > Thank you,
> > Caroline Zhou
> >
> > --
> > Caroline Zhou
> > AMTS Software Engineer | Salesforce
> > 
>


Re: request invite to join slack channel

2019-04-11 Thread Manjeet Singh
Done, please check

On Thu, 11 Apr 2019, 23:11 Da Zhou,  wrote:

> Dear HBase Community,
>
> I saw the channel "https://apache-hbase.slack.com/" shared in the HBase
> Book. May I join the HBase Slack channel for easier communication?
>
> Thanks,
> Da
>


Re: Contributor Permission Request

2019-04-11 Thread Manjeet Singh
Hi

Requesting you to please add me as well as a contributor to Apache HBase.
My id is manjeet.chand...@gmail.com

Thanks
Manjeet singh

On Thu, 11 Apr 2019, 19:52 Guanghao Zhang,  wrote:

> Done. Add to HBase jira CONTRIBUTORS. Thanks.
>
> fei wen  于2019年4月11日周四 下午7:26写道:
>
> > Hi,
> >   I want to contribute to Apache HBase.
> > Would you please give me the contributor permission?
> > My JIRA ID is wen.feiyi.
> >
>


Re: Requesting access to hbase slack channel

2019-02-01 Thread Manjeet Singh
Done please check

On Wed, 30 Jan 2019, 11:39 车 珣 wrote:
>
> Could you please invite me: singleon...@gmail.com . Thank you~
> On 01/30/2019 13:25, William Shen <wills...@marinsoftware.com> wrote:
> Can you please invite me as well? Thank you.
>
> On Sat, Jan 19, 2019 at 7:50 AM Robert Yokota  wrote:
>
> Me too please :) rayok...@gmail.com
>
> On Fri, Jan 18, 2019 at 2:06 AM Peter Somogyi  wrote:
>
> Sent.
>
> On Fri, Jan 18, 2019 at 7:48 AM Abhishek Gupta 
> wrote:
>
> Hi,
>
> I would like an invitation too abhila...@gmail.com
>
> Thanks
>
> On Fri, Jan 18, 2019 at 11:38 AM Manjeet Singh <
> manjeet.chand...@gmail.com
>
> wrote:
>
> Done
>
> On Fri, 18 Jan 2019, 11:24 Buchi Reddy Busi Reddy <
> mailtobu...@gmail.com
> wrote:
>
> Can you also invite mailtobu...@gmail.com please?
>
> On Thu, Jan 17, 2019 at 8:34 PM Manjeet Singh <
> manjeet.chand...@gmail.com>
> wrote:
>
> Seems someone else already did it
>
> Manjeet
>
> On Wed, 16 Jan 2019, 17:33 Nihal Jain  wrote:
>
> Hi
>
> Could you please invite me: nihaljain...@gmail.com <
> nihaljain...@gmail.com
> ?
>
> Regards,
> Nihal
>
> On Wed, 16 Jan, 2019, 2:54 PM Peter Somogyi <
> psomo...@apache.org
> wrote:
>
> Sent invitation to madhurpan...@gmail.com.
>
> On Wed, Jan 16, 2019 at 7:27 AM Madhur Pant <
> madhurpan...@gmail.com>
> wrote:
>
> Hi Team,
>
> I was wondering if I could get access to the HBase user
> slack
> channel
>
>
> https://apache-hbase.slack.com
>
> It says here <https://issues.apache.org/jira/browse/HBASE-16413>
> that I
> should email you guys  :)
>
> Thanks,
> Madhur Pant
>
>
>
>
>
>
>
>
>
>


Re: Requesting access to hbase slack channel

2019-01-17 Thread Manjeet Singh
Done

On Fri, 18 Jan 2019, 11:24 Buchi Reddy Busi Reddy wrote:

> Can you also invite mailtobu...@gmail.com please?
>
> On Thu, Jan 17, 2019 at 8:34 PM Manjeet Singh 
> wrote:
>
> > Seems someone else already did it
> >
> > Manjeet
> >
> > On Wed, 16 Jan 2019, 17:33 Nihal Jain wrote:
> > > Hi
> > >
> > > Could you please invite me: nihaljain...@gmail.com <
> > nihaljain...@gmail.com
> > > >?
> > >
> > > Regards,
> > > Nihal
> > >
> > > On Wed, 16 Jan, 2019, 2:54 PM Peter Somogyi  wrote:
> > >
> > > > Sent invitation to madhurpan...@gmail.com.
> > > >
> > > > On Wed, Jan 16, 2019 at 7:27 AM Madhur Pant 
> > > > wrote:
> > > >
> > > > > Hi Team,
> > > > >
> > > > > I was wondering if I could get access to the HBase user slack
> channel
> > > > >
> > > > > https://apache-hbase.slack.com
> > > > >
> > > > > It says here <https://issues.apache.org/jira/browse/HBASE-16413>
> > > that I
> > > > > should email you guys  :)
> > > > >
> > > > > Thanks,
> > > > > Madhur Pant
> > > > >
> > > >
> > >
> >
>


Re: Requesting access to hbase slack channel

2019-01-17 Thread Manjeet Singh
Seems someone else already did it

Manjeet

On Wed, 16 Jan 2019, 17:33 Nihal Jain wrote:

> Hi
>
> Could you please invite me: nihaljain...@gmail.com  >?
>
> Regards,
> Nihal
>
> On Wed, 16 Jan, 2019, 2:54 PM Peter Somogyi 
> > Sent invitation to madhurpan...@gmail.com.
> >
> > On Wed, Jan 16, 2019 at 7:27 AM Madhur Pant 
> > wrote:
> >
> > > Hi Team,
> > >
> > > I was wondering if I could get access to the HBase user slack channel
> > >
> > > https://apache-hbase.slack.com
> > >
> > > It says here 
> that I
> > > should email you guys  :)
> > >
> > > Thanks,
> > > Madhur Pant
> > >
> >
>


Re: for help--join in the hbase slack

2018-10-05 Thread Manjeet Singh
Done please check

On Sat, 6 Oct 2018, 08:45 毛蛤丝,  wrote:

> Hello HBase guys: my Slack is maoling199210...@sina.com. Can someone help
> me to join the HBase Slack? Thanks a lot.
>
> 
>
> Best regards
> maoling
> Beijing,China
>
>


Re: HBase developer meetup

2018-10-01 Thread Manjeet Singh
It would be great if you could create a Google group and arrange a video
conference for the same.

Manjeet singh

On Mon, 1 Oct 2018, 22:38 la...@apache.org,  wrote:

>  9 people signed up so far. This is a good chance to make your voice heard,
> give input, and help point the project in the right direction going forward.
> -- Lars
> On Friday, September 28, 2018, 10:37:02 AM PDT, la...@apache.org <
> la...@apache.org> wrote:
>
>  Hi all,
> I'm planning to put together an HBase developer meetup at the Salesforce
> office (with video conference for those who cannot attend in person) in the
> next few weeks.
> If you're interested please put your name in this spreadsheet:
> https://docs.google.com/spreadsheets/d/13eIMItFbM35K_lfn9woGGObW-12tfyOVLqZi8X4p5PM/edit#gid=0
>
> This is a chance to get all those who contribute to HBase together. There
> will also be food. :) I will leave the spreadsheet up for one week - until
> Friday October 5th.
> Possible agenda:
> - Round-table
> - Status of branch-1, branch-2, and master
> - Current challenges (operations?, public cloud?, availability?,
> performance?, community?)
> - Future direction. Where do we want HBase to be in 1 year, 2 years, 5
> years?
> - more...
> Thanks.
> -- Lars
>


Re: Hbase slack invite

2018-08-10 Thread Manjeet Singh
Request you to please add me as well

Thanks
 Manjeet Singh

On Fri, 10 Aug 2018, 20:50 Reid Chan,  wrote:

> Sent, please check.
>
>
> R.C
>
> 
> From: James Moore 
> Sent: 10 August 2018 21:35:06
> To: dev@hbase.apache.org
> Subject: Hbase slack invite
>
> May I join the slack channel as well?
>


Re: Query for OldWals and use of WAl for Hbase indexer

2018-07-12 Thread Manjeet Singh
Hi

I have created HBASE-20877
<https://issues.apache.org/jira/browse/HBASE-20877> for the same; requesting
you to please move it into the active sprint.

Thanks
Manjeet Singh

On Thu, Jul 12, 2018 at 7:42 AM, Reid Chan  wrote:

> oldWALs are supposed to be cleaned by a master background chore; I also
> doubt they are needed.
>
> HBASE-20352 (for the 1.x versions) is to speed up cleaning oldWALs; it may
> address your concern that the oldWALs directory is quite huge.
>
>
> R.C
>
>
>
> ____
> From: Manjeet Singh 
> Sent: 12 July 2018 08:19:21
> To: u...@hbase.apache.org
> Subject: Re: Query for OldWals and use of WAl for Hbase indexer
>
> I have one more question
>
> If Solr has its own data, meaning it maintains data in its shards while
> HBase maintains data in its data folder, why are the oldWALs still needed?
>
> Thanks
> Manjeet singh
>
> On Wed, 11 Jul 2018, 23:19 Manjeet Singh, 
> wrote:
>
> > Thanks Sean for your reply
> >
> > I still have some questions unanswered:
> > Q1: How does HBase synchronize with the HBase indexer?
> > Q2: What optimizations can I apply?
> > Q3: As is clear from my stats, the data in oldWALs is quite huge and is
> > not being cleared by my HMaster. How can I improve my HDFS space issue
> > due to this?
> >
> > Thanks
> > Manjeet Singh
> >
> > On Wed, Jul 11, 2018 at 9:33 PM, Sean Busbey  wrote:
> >
> >> Presuming you're using the Lily indexer[1], yes it relies on hbase's
> >> built in cross-cluster replication.
> >>
> >> The replication system stores WALs until it can successfully send them
> >> for replication. If you look in ZK you should be able to see which
> >> regionserver(s) are waiting to send those WALs over. The easiest way
> >> to do this is probably to look at the "zk dump" web page on the
> >> Master's web ui[2].
> >>
> >> Once you have the particular region server(s), take a look at their
> >> logs for messages about difficulty sending edits to the replication
> >> peer you have set up for the destination solr collection.
> >>
> >> If you remove the WALs then the solr collection will have a hole in
> >> it. Depending on how far behind you are, it might be quicker to 1)
> >> remove the replication peer, 2) wait for old wals to clear, 3)
> >> reenable replication, 4) use a batch indexing tool to index data
> >> already in the table.
> >>
> >> [1]:
> >>
> >> http://ngdata.github.io/hbase-indexer/
> >>
> >> [2]:
> >>
> >> The specifics will vary depending on your installation, but the page
> >> is essentially at a URL like
> >> https://active-master-host.example.com:22002/zk.jsp
> >>
> >> the link is on the master UI landing page, near the bottom, in the
> >> description of the "ZooKeeper Quorum" row. it's the end of "Addresses
> >> of all registered ZK servers. For more, see zk dump."
> >>
> >> On Wed, Jul 11, 2018 at 10:16 AM, Manjeet Singh
> >>  wrote:
> >> > Hi All
> >> >
> >> > I have a query regarding Hbase replication and OldWals
> >> >
> >> > Hbase version 1.2.1
> >> >
> >> > To enable Hbase indexing we use below command on table
> >> >
> >> > alter '', {NAME => 'CF1', REPLICATION_SCOPE => 1}
> >> >
> >> > By doing this, replication actually gets enabled, as hbase-indexer
> >> > requires it; as per my understanding the indexer uses the HBase WAL
> >> > (please correct me if I am wrong).
> >> >
> >> > So the question is: how does HBase synchronize with the Solr indexer?
> >> > What is the role of replication? What optimizations can we apply in
> >> > order to reduce data size?
> >> >
> >> >
> >> > I can see that our oldWALs are getting filled. If the HMaster itself
> >> > is taking care of them, why has it reached 7.2 TB? What if I delete
> >> > it; does that impact Solr indexing?
> >> >
> >> > 7.2 K   21.5 K  /hbase/.hbase-snapshot
> >> > 0   0   /hbase/.tmp
> >> > 0   0   /hbase/MasterProcWALs
> >> > 18.3 G  60.2 G  /hbase/WALs
> >> > 28.7 G  86.1 G  /hbase/archive
> >> > 0   0   /hbase/corrupt
> >> > 1.7 T   5.2 T   /hbase/data
> >> > 42  126 /hbase/hbase.id
> >> > 7   21  /hbase/hbase.version
> >> > 7.2 T   21.6 T  /hbase/oldWALs
> >> >
> >> >
> >> >
> >> >
> >> > Thanks
> >> > Manjeet Singh
> >>
> >
> >
> >
> > --
> > luv all
> >
>



-- 
luv all


[jira] [Created] (HBASE-20877) Hbase-1.2.0 OldWals age getting filled and not purged by Hmaster

2018-07-12 Thread Manjeet Singh (JIRA)
Manjeet Singh created HBASE-20877:
-

 Summary: Hbase-1.2.0 OldWals age getting filled and not purged by 
Hmaster
 Key: HBASE-20877
 URL: https://issues.apache.org/jira/browse/HBASE-20877
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.2.0
Reporter: Manjeet Singh


Hbase version 1.2.0 OldWals are getting filled and showing as below

7.2 K 21.5 K /hbase/.hbase-snapshot
0 0 /hbase/.tmp
0 0 /hbase/MasterProcWALs
18.3 G 60.2 G /hbase/WALs
28.7 G 86.1 G /hbase/archive
0 0 /hbase/corrupt
1.7 T 5.2 T /hbase/data
42 126 /hbase/hbase.id
7 21 /hbase/hbase.version
7.2 T 21.6 T /hbase/oldWALs

 

It's not getting purged by HMaster. oldWALs are supposed to be cleaned by a 
master background chore, and HBASE-20352 (for the 1.x versions) was created to 
speed up cleaning oldWALs, but in our case it's not happening.

hbase.master.logcleaner.ttl is 1 minute



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-19071) Import from Hbase version 0.94.27 to higher version 1.2.1 not working

2017-10-23 Thread Manjeet Singh (JIRA)
Manjeet Singh created HBASE-19071:
-

 Summary: Import from Hbase version 0.94.27 to higher version 1.2.1 
not working 
 Key: HBASE-19071
 URL: https://issues.apache.org/jira/browse/HBASE-19071
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 1.2.1
Reporter: Manjeet Singh


Data migration from one cluster to another cluster on the same network, but 
with different versions of HBase, is not working: the source cluster runs 
HBase 0.94.27 and the destination cluster runs HBase 1.2.1.

I have used the below command to take a backup of the HBase table on the 
source cluster:
 ./hbase org.apache.hadoop.hbase.mapreduce.Export SPDBRebuild /data/backupData/
As a result, the below files were generated by the above command:
drwxr-xr-x 3 root root4096 Dec  9  2016 _logs
-rw-r--r-- 1 root root   788227695 Dec 16  2016 part-m-0
-rw-r--r-- 1 root root  1098757026 Dec 16  2016 part-m-1
-rw-r--r-- 1 root root   906973626 Dec 16  2016 part-m-2
-rw-r--r-- 1 root root  1981769314 Dec 16  2016 part-m-3
-rw-r--r-- 1 root root  2099785782 Dec 16  2016 part-m-4
-rw-r--r-- 1 root root  4118835540 Dec 16  2016 part-m-5
-rw-r--r-- 1 root root 14217981341 Dec 16  2016 part-m-6
-rw-r--r-- 1 root root   0 Dec 16  2016 _SUCCESS
I import these files into another HBase version (1.2.1).

In order to restore these files, I am assuming I have to move them to the 
destination cluster and run the below command:

hbase org.apache.hadoop.hbase.mapreduce.Import   


sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Import test_table  
hdfs://:8020/data/ExportedFiles

I am getting below error

17/10/23 16:13:50 INFO mapreduce.Job: Task Id : 
attempt_1505781444745_0070_m_03_0, Status : FAILED
Error: java.io.IOException: keyvalues=NONE read 2 bytes, should read 121347
at 
org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2306)
at 
org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:78)
at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at 
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at 
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Please congratulate our new PMC Chair Misty Stanley-Jones

2017-09-21 Thread Manjeet Singh
Congratulations Misty

On 22 Sep 2017 12:39 am, "Andrew Purtell"  wrote:

> At today's meeting of the Board, Special Resolution B changing the HBase
> project Chair to Misty Stanley-Jones was passed unanimously.
>
> Please join me in congratulating Misty on her new role!
>
> ​(If you need any help or advice please don't hesitate to ping me, Misty,
> but I suspect you'll do just fine and won't need it.)​
>
>
> --
> Best regards,
> Andrew
>


Re: Avro schema getting changed dynamically

2016-08-30 Thread Manjeet Singh
I want to add a few more points.

I am using the Java native API for HBase get/put.

Below is an example. Assume I have the below schema and I am inserting data
using this schema into HBase, but later new values come in and I need to fit
them into my schema; for this I need to create a new schema, as shown in the
example below. In example 2 I have a new field, timestamp.

example 1

{
  "type": "record",
  "name": "twitter_schema",
  "namespace": "com.miguno.avro",
  "fields": [
    { "name": "username", "type": "string",
      "doc": "Name of the user account on Twitter.com" },
    { "name": "tweet", "type": "string",
      "doc": "The content of the user's Twitter message" }
  ]
}


example 2

{
  "type": "record",
  "name": "twitter_schema",
  "namespace": "com.miguno.avro",
  "fields": [
    { "name": "username", "type": "string",
      "doc": "Name of the user account on Twitter.com" },
    { "name": "tweet", "type": "string",
      "doc": "The content of the user's Twitter message" },
    { "name": "timestamp", "type": "long",
      "doc": "Unix epoch time in seconds" }
  ]
}
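For what it's worth, the standard Avro answer to this problem is schema resolution: keep track of the schema each record was written with, and give every newly added field a default so that old records remain readable under the new schema. Below is a minimal pure-Python sketch of that resolution rule (no Avro library involved; `resolve` is a hypothetical helper, and only the field names come from the twitter_schema examples above):

```python
# Sketch of Avro-style schema resolution: records written under the old
# schema are read under the new schema by filling absent fields from their
# declared defaults. Field names follow the twitter_schema example.

OLD_FIELDS = ["username", "tweet"]

NEW_SCHEMA = [
    {"name": "username", "type": "string"},
    {"name": "tweet", "type": "string"},
    # A newly added field must carry a default so old records stay readable.
    {"name": "timestamp", "type": "long", "default": 0},
]

def resolve(record, reader_schema):
    """Return the record as seen through the reader's (new) schema."""
    out = {}
    for field in reader_schema:
        if field["name"] in record:
            out[field["name"]] = record[field["name"]]
        elif "default" in field:
            out[field["name"]] = field["default"]
        else:
            raise ValueError("no value and no default for " + field["name"])
    return out

old_record = {"username": "miguno", "tweet": "Hello HBase"}
print(resolve(old_record, NEW_SCHEMA))
# {'username': 'miguno', 'tweet': 'Hello HBase', 'timestamp': 0}
```

With the real Avro library the same effect comes from handing both the writer's and the reader's schema to the datum reader; the key practice is never adding a field without a default.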




Thanks
Manjeet





On Tue, Aug 30, 2016 at 11:47 AM, Manjeet Singh <manjeet.chand...@gmail.com>
wrote:

> Hi All,
>
> I have use case to put data in avro format in Hbase , I have frequent read
> write operations but its not a problem.
>
> Problem is what if my avro schema get changed how shuld I deal with it?
> This should in mind what about older data which already inserted in Hbase
> and now we have new schema.
>
> can anyone suggest me solution for the same
>
> Thanks
> Manjeet
>
> --
> luv all
>



-- 
luv all


Avro schema getting changed dynamically

2016-08-30 Thread Manjeet Singh
Hi All,

I have a use case to put data in Avro format into HBase. I have frequent
read/write operations, but that's not a problem.

The problem is: what if my Avro schema gets changed? How should I deal with
it? Keep in mind the older data that was already inserted into HBase, now
that we have a new schema.

can anyone suggest me solution for the same

Thanks
Manjeet

-- 
luv all


Re: [DISCUSS] 0.98 branch disposition

2016-08-26 Thread Manjeet Singh
I am extremely sorry; my intention was to send this as a personal message and
it was sent to the group... sorry for that.
On 26 Aug 2016 23:50, "Andrew Purtell" <apurt...@apache.org> wrote:

> Manjeet,
>
> Hijacking a discussion is poor form and not appreciated. Please start a
> brand new discussion thread, and mail u...@hbase.apache.org if you'd like
> a
> response.
>
> On Fri, Aug 26, 2016 at 11:16 AM, Manjeet Singh <
> manjeet.chand...@gmail.com>
> wrote:
>
> > Hi Andy
> >
> > Can you please help me on below issue?
> >
> >
> --
> Best regards,
>
>- Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)
>


Re: [DISCUSS] 0.98 branch disposition

2016-08-26 Thread Manjeet Singh
Hi Andy

Can you please help me on below issue?


I am using a wide-table approach, where I might have 100 to 1,00, column
qualifiers (you can understand this record as follows: assume the rowkey is a
mobile number or an IP, and the column name itself is a value, namely a URL
surfed by that subscriber).


I have a performance-related issue.

compression --> snappy

I am getting the problem below: a heap size problem when using scan from the
shell. As a solution I increased the Java heap size to 4 GB using Cloudera
Manager.

Second, I have the below native API code. It takes a very long time to
process; can anyone help me with the same?





// Fetches all column qualifiers of one row that start with the given prefix.
// (Uses org.apache.hadoop.hbase Cell/CellUtil plus the client and filter
// classes.) Note: HBase's ColumnPrefixFilter expresses this condition
// directly, without wrapping a QualifierFilter in a FilterList.
public static List<String> getColumnQualifiersByPrefixScan(String rowKey, String prefix) {
    List<String> list = new ArrayList<>();
    try {
        // Keep only columns whose qualifier bytes start with the prefix.
        Filter filter = new QualifierFilter(CompareFilter.CompareOp.EQUAL,
                new BinaryPrefixComparator(Bytes.toBytes(prefix)));

        Get get = new Get(Bytes.toBytes(rowKey));
        get.setFilter(filter);
        Result result = hTable.get(get);
        for (Cell cell : result.rawCells()) {
            list.add(Bytes.toString(CellUtil.cloneQualifier(cell)));
        }
    } catch (IOException e) {
        // Swallowing exceptions silently hides failures; at least log them.
        e.printStackTrace();
    }
    return list;
}

Thanks
Manjeet

On Fri, Aug 26, 2016 at 11:37 PM, Andrew Purtell 
wrote:

> Greetings,
>
> HBase 0.98.0 was released in February of 2014. We have had 21 releases in 2
> 1/2 years at a fairly regular cadence, a terrific run for any software
> product. However as 0.98 RM I think it's now time to discuss winding down
> 0.98. I want to give you notice of this as far in advance as possible (and
> have just come to a decision barely this week). We have several more recent
> releases at this point that are quite stable, a superset of 0.98
> functionality, and have been proven in deployments. It's wise not to take
> on unnecessary risk by upgrading from a particular version, but in the case
> of 0.98, it's getting to be that time.
>
> If you have not yet, I would encourage you to take a few moments to
> participate in our fully anonymous usage survey:
> https://www.surveymonkey.com/r/NJFKKGW . According to results received so
> far, the versions of HBase in production use break down as:
>
>- 0.94 - 19%
>- 0.96 - 2%
>- *0.98 - 23%*
>- 1.0 - 20%
>- 1.1 - 34%
>- 1.2 - 23%
>
> These figures add up to more than 100% because some respondents I expect
> run more than one version.
>
> For those 23% still on 0.98 (and the 2% on 0.96) it's time to start
> seriously thinking about an upgrade to 1.1 or later. The upgrade process
> can be done in a rolling manner. We consider 1.1 (and 1.2 for that matter)
> to be stable and ready for production.
>
> As 0.98 RM, my plan is to continue active maintenance at a roughly monthly
> release cadence through December of this year. However in January 2017 I
> plan to tender my resignation as 0.98 RM and, hopefully, take that active
> role forward to more recent code not so full of dust and cobwebs and more
> interesting to develop and maintain. Unless someone else steps up to take
> on that task this will end regular 0.98 releases. I do not expect anyone to
> take on that role, frankly. Of course we can still make occasional 0.98
> releases on demand. Any committer can wrangle the bits and the PMC can
> entertain a vote. (If you can conscript a committer to assist with
> releasing I don't think you even need to be a committer to function as RM
> for a release.) Anyway, concurrent with my resignation as 0.98 RM I expect
> the project to discuss and decide an official position on 0.98 support. It
> is quite possible we will announce that position to be an end of life.
>
>
> --
> Best regards,
>
>- Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)
>



-- 
luv all


Re: [ANNOUNCE] Apache Phoenix 4.8.0 released

2016-08-22 Thread Manjeet Singh
Hi Ankit

How can I achieve an HBase prefix filter on the column qualifier using
Phoenix? My column qualifiers are like D_com.google, D_com.mail.yahoo, etc.,
so I want to search for all column qualifiers that start with the D_ prefix.

Thanks,

 Manjeet Singh

On Fri, Aug 12, 2016 at 10:55 PM, Ankit Singhal <an...@apache.org> wrote:

> Apache Phoenix enables OLTP and operational analytics for Hadoop through
> SQL support and integration with other projects in the ecosystem such as
> Spark, HBase, Pig, Flume, MapReduce and Hive.
>
> We're pleased to announce our 4.8.0 release which includes:
> - Local Index improvements[1]
> - Integration with hive[2]
> - Namespace mapping support[3]
> - VIEW enhancements[4]
> - Offset support for paged queries[5]
> - 130+ Bugs resolved[6]
> - HBase v1.2 is also supported ( with continued support for v1.1, v1.0 &
> v0.98)
> - Many performance enhancements(related to StatsCache, distinct, Serial
> query with Stats etc)[6]
>
> The release is available in source or binary form here [7].
>
> Release artifacts are signed with the following key:
> *https://people.apache.org/keys/committer/ankit.asc
> <https://people.apache.org/keys/committer/ankit.asc>*
>
> Thanks,
> The Apache Phoenix Team
>
> [1] https://issues.apache.org/jira/browse/PHOENIX-1734
> [2] https://issues.apache.org/jira/browse/PHOENIX-2743
> [3] https://issues.apache.org/jira/browse/PHOENIX-1311
> [4] https://issues.apache.org/jira/browse/PHOENIX-1508
> [5] https://issues.apache.org/jira/browse/PHOENIX-2722
> [6] *https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> version=12334393=12315120
> <https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> version=12334393=12315120>*
> [7] https://phoenix.apache.org/download.html
>



-- 
luv all


Fwd: Hbase Row key lock

2016-08-16 Thread Manjeet Singh
Hi All

Can anyone help me with how, and in which version, HBase supports row key
locks? I have seen an article about row key locks, but it was about the 0.94
version; it said that if a row key does not exist and an update request comes
in for it, HBase holds the lock for 60 seconds.

currently I am using Hbase 1.2.2 version

Thanks
Manjeet
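For context on what the row lock buys you: in recent versions (including the 1.2.x line) the row lock is taken internally by the RegionServer for the duration of a mutation, and clients get atomicity through operations such as increment and checkAndPut rather than by taking locks explicitly. A conceptual, in-memory sketch of that per-row guarantee (plain Python, not HBase client code):

```python
import threading

# Conceptual sketch: HBase takes a per-row lock inside the RegionServer so
# each mutation to a row is atomic; clients never hold the lock themselves.
# This simulates that guarantee for an in-memory "table".

class RowLockedTable:
    def __init__(self):
        self._rows = {}
        self._locks = {}
        self._meta = threading.Lock()  # guards creation of per-row locks

    def _lock_for(self, row_key):
        with self._meta:
            return self._locks.setdefault(row_key, threading.Lock())

    def increment(self, row_key, column, amount):
        # Atomic read-modify-write under the row's lock, analogous to what
        # HBase's Increment operation guarantees server-side.
        with self._lock_for(row_key):
            row = self._rows.setdefault(row_key, {})
            row[column] = row.get(column, 0) + amount
            return row[column]

table = RowLockedTable()
threads = [threading.Thread(target=table.increment, args=("row1", "cf:q", 1))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(table._rows["row1"]["cf:q"])  # 100
```

Without the per-row lock, concurrent read-modify-write cycles could interleave and lose updates; with it, all 100 increments land.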





-- 
luv all


Blk Load via Collection

2016-08-06 Thread Manjeet Singh
Hi All

I am writing a Java native API client for HBase responsible for bulk
read/write operations. For this I used a List of Puts and a List of Gets,
comparing them and putting back an updated list.

I am facing two problems. First, at some point in time my system hangs; it
seems I am missing some configuration which might be responsible for some
cache (my understanding).

Second, I am using this to improve performance for mass updates, but it is
actually degrading the performance.

Can anyone suggest the correct approach or configurations?

Thanks
Manjeet
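One likely contributor to both symptoms is accumulating a single unbounded list of mutations on the client before flushing. A sketch of bounding memory by flushing in fixed-size chunks instead (plain Python; `send_batch` is a hypothetical stand-in for a call like `Table.put(List<Put>)`, and on the real client the write buffer setting `hbase.client.write.buffer` is also worth reviewing):

```python
# Sketch: flush mutations in fixed-size chunks instead of building one huge
# list, so client memory stays bounded and each RPC stays a sane size.
# send_batch stands in for the real "send these N puts to HBase" call.

def flush_in_chunks(mutations, send_batch, chunk_size=500):
    sent = 0
    for i in range(0, len(mutations), chunk_size):
        chunk = mutations[i:i + chunk_size]
        send_batch(chunk)  # one bounded unit of work per round trip
        sent += len(chunk)
    return sent

batches = []
total = flush_in_chunks(list(range(1234)), batches.append, chunk_size=500)
print(total, [len(b) for b in batches])  # 1234 [500, 500, 234]
```

The chunk size to use is workload-dependent; the point is that the client never holds more than one chunk of pending work beyond what the HBase write buffer itself retains.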

-- 
luv all