Re: [DISCUSS] Graduate Apache HAWQ (incubating) as a TLP

2018-06-20 Thread
Great job, Radar. +1 for Apache HAWQ TLP.

On Thu, Jun 21, 2018 at 9:38 AM, Lei Chang wrote:

> Thanks Radar for the effort!
>
> Looking forward to seeing the progress on graduation.
>
> Cheers
> Lei
>
>
>
>
> On Thu, Jun 21, 2018 at 3:04 AM, Lili Ma  wrote:
>
> > Thanks Radar for your great effort :)
> >
> > +1
> >
> > 2018-06-20 12:00 GMT-04:00 Ed Espino :
> >
> > > +1 on our march to Apache TLP status. With Lei Chang as our initial VP
> > and
> > > our wide global and diverse community supporting each other, I look
> > forward
> > > to the new and exciting chapter for the project.
> > >
> > > Radar - Thank you for the excellent leadership through this process.
> > >
> > > Regards,
> > > -=e
> > >
> > > On Tue, Jun 19, 2018 at 2:40 AM Radar Lei  wrote:
> > >
> > > > Hi All,
> > > >
> > > > With the 2.3.0.0-incubating release officially out, the Apache HAWQ
> > > > community and its mentors believe it is time to consider graduation to
> > > > the TLP:
> > > > https://lists.apache.org/thread.html/b4a0b5671ce377b3d51c9b7ab00496a1eebfcbf1696ce8b67e078c64@%3Cdev.hawq.apache.org%3E
> > > >
> > > > Apache HAWQ entered incubation in September 2015; since then, the HAWQ
> > > > community has learned a lot about how to do things the Apache way. We now
> > > > have a healthy and engaged community, ready to help with all questions
> > > > from the HAWQ community. We have delivered four releases, including two
> > > > binary releases, and we can now drive our own releases at a good cadence.
> > > > The PPMC has demonstrated a good understanding of growing the community
> > > > by electing 12 individuals as committers and PPMC members. The PPMC
> > > > addressed the maturity issues one by one, following the Apache Project
> > > > Maturity Model, and all license and IP issues are now resolved. This
> > > > demonstrates our understanding of the ASF's IP policies.
> > > >
> > > > All in all, I believe this project is qualified to be a true TLP, and we
> > > > should recognize this fact by formally awarding it that status. This
> > > > thread opens up the very same discussion that we had among the mentors
> > > > and the HAWQ community to the rest of the IPMC. It is a DISCUSS thread,
> > > > so feel free to ask questions.
> > > >
> > > > To get you all going, here are a few data points which may help:
> > > >
> > > > Project status:
> > > >  http://incubator.apache.org/projects/hawq.html
> > > >
> > > > Project website:
> > > >   http://hawq.incubator.apache.org/
> > > >
> > > > Project documentation:
> > > >    http://hawq.incubator.apache.org/docs/userguide/2.3.0.0-incubating/overview/HAWQOverview.html
> > > >    http://hawq.apache.org/#download
> > > >
> > > > Maturity assessment:
> > > > https://cwiki.apache.org/confluence/display/HAWQ/ASF+Maturity+Evaluation
> > > >
> > > > DRAFT of the board resolution is at the bottom of this email
> > > >
> > > > Proposed PMC size: 45 members
> > > >
> > > > Total number of committers: 45 members
> > > >
> > > > PMC affiliation (* indicates chair):
> > > >Pivotal (20)
> > > >  * Oushu (7)
> > > >Amazon (3)
> > > >Hashdata (2)
> > > >Autonomic (1)
> > > >Confluent (1)
> > > >Datometry (1)
> > > >Hortonworks (1)
> > > >Microsoft (1)
> > > >PETUUM (1)
> > > >Privacera (1)
> > > >Qubole (1)
> > > >Snowflake (1)
> > > >State Street (1)
> > > >Unifi (1)
> > > >Visa (1)
> > > >ZEDEDA (1)
> > > >
> > > > 1549 commits on develop
> > > > 1375 PRs on GitHub
> > > > 63 contributors across all branches
> > > >
> > > > 1624 issues created
> > > > 1350 issues resolved
> > > >
> > > > dev list averaged ~53 msgs/month over last 12 months
> > > > user list averaged ~6 msgs/month over last 12 months
> > > > 129 unique posters
> > > >
> > > >
> > > > committer affiliations:
> > > > active
> > > >   pivotal.io
> > > >   oushu.io
> > > >   hashdata.cn
> > > > occasional
> > > >   amazon.com
> > > >   autonomic.ai
> > > >   confluent.io
> > > >   datometry.com
> > > >   hortonworks.com
> > > >   microsoft.com
> > > >   petuum.com
> > > >   privacera.com
> > > >   qubole.com
> > > >   snowflake.net
> > > >   statestreet.com
> > > >   unifisoftware.com
> > > >   visa.com
> > > >   zededa.com
> > > >
> > > >
> > > > Thanks,
> > > > Radar
> > > >
> > > >
> > > >
> > > > ## Resolution to create a TLP from graduating Incubator podling
> > > >
> > > > X. Establish the Apache HAWQ Project
> > > >
> > > >WHEREAS, the Board of Directors deems it to be in the best
> > > >interests of the Foundation and consistent with the
> > > >Foundation's purpose to establish a Project Management
> > > >Committee charged with the creation and maintenance of
> > > >open-source software, for 

Re: Re: [DISCUSS] Apache HAWQ Graduation from Incubator

2018-05-31 Thread
+1 to Lei. He is worthy of the title of Apache HAWQ PMC Chairman.

2018-06-01 9:44 GMT+08:00 Yi JIN :

> +1 to Lei. I fully support Lei Chang as Apache HAWQ project PMC Chairman;
> he deserves this role, not only for his outstanding contributions to this
> project over a very long period up to today, but also for his solid
> leadership and vision.
>
> Best,
> Yi  Jin
>
> On Fri, Jun 1, 2018 at 3:17 AM, Ed Espino  wrote:
>
> > Ruilong,
> >
> > I also give my full support for Lei Chang as the Apache HAWQ project's
> > initial PMC Chairman. His leadership and vision have contributed
> immensely
> > to the project.
> >
> > Regards,
> > -=e
> >
> > On Thu, May 31, 2018 at 6:32 AM, Ruilong Huo  wrote:
> >
> > > Great progress towards Apache HAWQ graduation! Thanks Radar for pushing
> > > this forward!
> > >
> > > I would like to nominate Lei Chang as the PMC Chair. He initiated the
> > > HAWQ project several years ago, led the development, brought it to Apache
> > > incubation, and has always been active in the HAWQ community to make it a
> > > world-leading big data product as well as a successful Apache project.
> > > There is no doubt that he is perfect for the role, and I believe he will
> > > continue to share his insights and go even further with HAWQ after
> > > graduation.
> > >
> > >
> > > Best regards,
> > > Ruilong Huo
> > >
> > >
> > > At 2018-05-31 15:40:42, "Radar Lei"  wrote:
> > > >Just found some good material for nominating a chair in Roman's email,
> > > >thanks Roman.
> > > >
> > > >I think we can follow this to nominate a Chair in this thread too. Guys,
> > > >please help to nominate or self-nominate. Thanks a lot.
> > > >
> > > >See:
> > > >At the very minimum your resolution will contain: 1. A name of the project
> > > >2. A list of proposed PMC 3. A proposed PMC chair
> > > >
> > > >On #3 I typically recommend podlings I mentor to set up a rotating chair
> > > >policy. This is, in no way, an ASF requirement so feel free to ignore it,
> > > >but it has worked well before. The chair will be expected to be up for
> > > >rotation every year. It will be more than OK for the same person to
> > > >self-nominate once the year is up -- but at the same time it'll be up to
> > > >the same person to actually kick off a thread asking if anybody else is
> > > >interested in serving as chair for the next year. Of course, if there are
> > > >multiple candidates there will have to be a vote.
> > > >
> > > >
> > > >Regards,
> > > >Radar
> > > >
> > > >On Tue, May 29, 2018 at 10:05 PM, Radar Lei  wrote:
> > > >
> > > >> Thanks Roman.
> > > >>
> > > >> This makes sense, I will start to draft the resolution. BTW, we would
> > > >> need to nominate a chair; I guess it's the last piece we need to draft
> > > >> the resolution.
> > > >>
> > > >> Regards,
> > > >> Radar
> > > >>
> > > >> On Tue, May 29, 2018 at 11:24 AM, Roman Shaposhnik <
> > > ro...@shaposhnik.org>
> > > >> wrote:
> > > >>
> > > >>> On Sun, May 27, 2018 at 11:37 PM, Radar Lei 
> wrote:
> > > >>> > Hi Roman,
> > > >>> >
> > > >>> > We have confirmed with each HAWQ committer whether they want to
> > > >>> > remain with the HAWQ project. As a summary, 37 PPMC members (including
> > > >>> > two mentors) and 7 committers confirmed they want to remain with HAWQ.
> > > >>> > [1] The total committer count of 44 seems pretty close to the PPMC
> > > >>> > member count of 37; is it good enough to make PMC == committers in our
> > > >>> > graduation resolution?
> > > >>>
> > > >>> PMC == committers in this case makes perfect sense to me!
> > > >>>
> > > >>> > Should we update Whimsy and the project webpage now, or update them
> > > >>> > after graduation?
> > > >>>
> > > >>> It really doesn't matter much. Your next step is to draft a
> > resolution
> > > >>> similar to:
> > > >>> https://www.mail-archive.com/general@incubator.apache.org/msg56982.html
> > > >>>
> > > >>> and start a [DISCUSS] thread similar to the above.
> > > >>>
> > > >>> Makes sense?
> > > >>>
> > > >>> Thanks,
> > > >>> Roman.
> > > >>>
> > > >>
> > > >>
> > >
> >
>


Re: Remain with HAWQ project or not?

2018-05-08 Thread
Yes

2018-05-08 18:15 GMT+08:00 jiali yao :

> Yes, I want to remain with HAWQ.
>
> Thanks
> Jiali
>
> On Tue, May 8, 2018 at 1:40 PM, Zhanwei Wang  wrote:
>
> > Yes, I would like to remain a committer.
> >
> >
> > > On May 8, 2018, at 13:26, Hong  wrote:
> > >
> > > Y
> > >
> > > 2018-05-08 1:05 GMT-04:00 stanly sheng :
> > >
> > >> Yes, I want to remain with HAWQ
> > >>
> > >> 2018-05-08 12:16 GMT+08:00 Paul Guo :
> > >>
> > >>> Yes. Thanks Radar for driving HAWQ graduation.
> > >>>
> > >>> 2018-05-08 12:02 GMT+08:00 Lirong Jian :
> > >>>
> >  Yes, I would like to remain a committer.
> > 
> >  Lirong
> > 
> >  Lirong Jian
> >  HashData Inc.
> > 
> >  2018-05-08 10:04 GMT+08:00 Hubert Zhang :
> > 
> > > Yes.
> > >
> > > On Tue, May 8, 2018 at 9:30 AM, Lili Ma  wrote:
> > >
> > >> Yes, of course I want to remain as PMC member!
> > >>
> > >> Thanks Radar for the effort on HAWQ graduation:)
> > >>
> > >> Best Regards,
> > >> Lili
> > >>
> > >> 2018-05-07 20:07 GMT-04:00 Lisa Owen :
> > >>
> > >>> yes, i would like to remain a committer.
> > >>>
> > >>>
> > >>> -lisa owen
> > >>>
> > >>> On Mon, May 7, 2018 at 10:02 AM, Shubham Sharma <
> > >>> ssha...@pivotal.io>
> > >>> wrote:
> > >>>
> >  Yes. I am looking forward to contributing to Hawq.
> > 
> >  On Mon, May 7, 2018 at 12:53 PM, Lav Jain 
> >  wrote:
> > 
> > > Yes. I am very excited about HAWQ.
> > >
> > > Regards,
> > >
> > >
> > > *Lav Jain*
> > > *Pivotal Data*
> > >
> > > lj...@pivotal.io
> > >
> > > On Mon, May 7, 2018 at 6:51 AM, Alexander Denissov <
> > >>> adenis...@pivotal.io
> > >
> > > wrote:
> > >
> > >> Yes.
> > >>
> > >>> On May 7, 2018, at 6:03 AM, Wen Lin 
> > >>> wrote:
> > >>>
> > >>> Yes. I'd like to keep on contributing to HAWQ.
> > >>>
> >  On Mon, May 7, 2018 at 5:21 PM, Ivan Weng <
> > >>> iw...@pivotal.io
> > >
> > >>> wrote:
> > 
> >  Yes, I definitely would like to be with HAWQ.
> > 
> >  Regards,
> >  Ivan
> > 
> > > On Mon, May 7, 2018 at 5:12 PM, Hongxu Ma <
> > > inte...@outlook.com
> > >>>
> > > wrote:
> > >
> > > Yes, let's make HAWQ better.
> > >
> > > Thanks.
> > >
> > >> On 07/05/2018 16:11, Radar Lei wrote:
> > >> HAWQ committers,
> > >>
> > >> Per the discussion in "Apache HAWQ graduation from incubator?" [1], we
> > >> want to set up the PMC as part of the HAWQ graduation resolution.
> > >>
> > >> So we'd like to confirm whether you want to remain as a committer/PMC
> > >> member of the Apache HAWQ project.
> > >>
> > >> If you'd like to remain with the HAWQ project, you are welcome; please
> > >> *respond 'Yes'* in this thread, or *respond 'No'* if you are not
> > >> interested anymore. Thanks.
> > >>
> > >> This thread will be available for at least 72 hours; after that, we will
> > >> send individual confirmation emails.
> > >>
> > >> [1]
> > >> https://lists.apache.org/thread.html/b4a0b5671ce377b3d51c9b7ab00496a1eebfcbf1696ce8b67e078c64@%3Cdev.hawq.apache.org%3E
> > >>
> > >> Regards,
> > >> Radar
> > >>
> > >
> > > --
> > > Regards,
> > > Hongxu.
> > >
> > >
> > 
> > >>
> > >
> > 
> > 
> > 
> >  --
> >  Regards,
> >  Shubham Sharma
> >  Staff Customer Engineer
> >  Pivotal Global Support Services
> >  ssha...@pivotal.io
> >  Direct Tel: +1(510)-304-8201
> >  Office Hours: Mon-Fri 9:00 am to 5:00 pm PDT
> >  Out of Office Hours Contact +1 877-477-2269
> > 
> > >>>
> > >>
> > >
> > >
> > >
> > > --
> > > Thanks
> > >
> > > Hubert Zhang
> > >
> > 
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >> Best Regards,
> > >> Xiang Sheng
> > >>
> >
> >
>


Re: [ANNOUNCE] Apache HAWQ 2.3.0.0-incubating Release

2018-03-20 Thread
Congrats! Thanks Yi for this big release. Looking forward to Apache HAWQ's
graduation.

2018-03-21 10:56 GMT+08:00 Ivan Weng :

> Cool, thanks Yi for driving this great release.
>
> Regards,
> Ivan
>
> On Wed, Mar 21, 2018 at 10:43 AM, jiali yao 
> wrote:
>
> > Cool!!
> >
> > Thanks Yi and all the contributors for the release.
> >
> >
> >
> > On Wed, Mar 21, 2018 at 10:41 AM, stanly sheng 
> > wrote:
> >
> > > Great!!! Thanks Yi and all the contributors for the release.
> > >
> > > 2018-03-21 10:27 GMT+08:00 Yi JIN :
> > >
> > > > Apache HAWQ (incubating) Project Team is proud to announce Apache
> > > > HAWQ 2.3.0.0-incubating has been released.
> > > >
> > > > Apache HAWQ (incubating) combines exceptional MPP-based analytics
> > > > performance, robust ANSI SQL compliance, Hadoop ecosystem
> > > > integration and manageability, and flexible data-store format
> > > > support, all natively in Hadoop, no connectors required. Built
> > > > from a decade’s worth of massively parallel processing (MPP)
> > > > expertise developed through the creation of the Pivotal
> > > > Greenplum® enterprise database and open source PostgreSQL, HAWQ
> > > > enables you to swiftly and interactively query Hadoop data,
> > > > natively via HDFS.
> > > >
> > > > *Download Link*:
> > > > https://dist.apache.org/repos/dist/release/incubator/hawq/2.3.0.0-incubating/
> > > >
> > > > *About this release*
> > > > This is a release containing both source code and binaries.
> > > >
> > > > All changes:
> > > > https://cwiki.apache.org/confluence/display/HAWQ/Apache+HAWQ+2.3.0.0-incubating+Release
> > > >
> > > >
> > > > *HAWQ Resources:*
> > > >
> > > >- JIRA: https://issues.apache.org/jira/browse/HAWQ
> > > >- Wiki: https://cwiki.apache.org/confluence/display/HAWQ/
> > > > Apache+HAWQ+Home
> > > >- Mailing list(s): dev@hawq.incubator.apache.org
> > > >   u...@hawq.incubator.apache.org
> > > >
> > > > *Know more about HAWQ:*
> > > > http://hawq.apache.org
> > > >
> > > > - Apache HAWQ (incubating) Team
> > > >
> > > > =
> > > > *Disclaimer*
> > > >
> > > > Apache HAWQ (incubating) is an effort undergoing incubation at The
> > > > Apache Software Foundation (ASF), sponsored by the Apache Incubator
> > > > PMC. Incubation is required of all newly accepted
> > > > projects until a further review indicates that the
> > > > infrastructure, communications, and decision making process have
> > > > stabilized in a manner consistent with other successful ASF
> > > > projects. While incubation status is not necessarily a reflection
> > > > of the completeness or stability of the code, it does indicate
> > > > that the project has yet to be fully endorsed by the ASF.
> > > >
> > >
> > >
> > >
> > > --
> > > Best Regards,
> > > Xiang Sheng
> > >
> >
>


Re: Re: [VOTE]: Apache HAWQ 2.3.0.0-incubating Release (RC1)

2018-02-20 Thread
I built from the 2.3.0.0 branch source code, installed, initialized, and ran
basic feature tests; it looks good to me. +1

One typo found: hawq.release.version in hawq-ambari-plugin/build.properties
is still 2.2.0 and should also be changed to 2.3.0.
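
For reference, here is a minimal sketch of the verification I ran. The tarball
name, install prefix, clone URL, and gtest filter below are illustrative
guesses, not the authoritative procedure; the real steps are on the wiki pages
linked in the vote email quoted below.

# verify the signature against the published KEYS file
gpg --import KEYS
gpg --verify apache-hawq-src-2.3.0.0-incubating-rc1.tar.gz.asc  # file name illustrative

# build from the 2.3.0.0-incubating branch, install, and init a test cluster
git clone https://git-wip-us.apache.org/repos/asf/incubator-hawq.git
cd incubator-hawq && git checkout 2.3.0.0-incubating
./configure --prefix=/usr/local/hawq && make && make install
source /usr/local/hawq/greenplum_path.sh
hawq init cluster

# run the basic feature tests (filter value is just an example)
cd src/test/feature && make && ./feature-test --gtest_filter=TestCommonLib.*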

2018-02-20 17:15 GMT+08:00 Ruilong Huo :

> Hong, good to have your vote for the 2.3.0.0 release! It would be better to
> elaborate on what you have done while reviewing the artifacts and documents
> for the release.
>
> Best regards,
> Ruilong Huo
>
>
> At 2018-02-20 16:16:38, "Wen Lin"  wrote:
> >Compiled from source, installed, and ran feature tests.
> >+1.
> >
> >On Tue, Feb 20, 2018 at 11:33 AM, Hong  wrote:
> >
> >> +1
> >>
> >> 2018-02-19 21:42 GMT-05:00 Yi JIN :
> >>
> >> > Hi All,
> >> >
> >> > This is the vote for Apache HAWQ (incubating) 2.3.0.0-incubating Release
> >> > Candidate 1 (RC1). It is a source release for HAWQ core, PXF, and Ranger,
> >> > and a binary release for HAWQ core, PXF, and Ranger. An RPM package is
> >> > included in the binary release.
> >> >
> >> > The vote will run for at least 72 hours and will close on Saturday,
> >> > Feb 24, 2018. Thanks.
> >> >
> >> > 1. Wiki page of the release:
> >> > https://cwiki.apache.org/confluence/display/HAWQ/Apache+HAWQ+2.3.0.0-incubating+Release
> >> >
> >> >
> >> > 2. Release Notes (Apache Jira generated):
> >> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12340262&styleName=Html&projectId=12318826
> >> >
> >> >
> >> > 3. Release verification steps can be found at:
> >> > For source tarball: https://cwiki.apache.org/confluence/display/HAWQ/Release+Process%3A+Step+by+step+guide#ReleaseProcess:Stepbystepguide-ValidatetheReleaseCandidate
> >> > For rpm package: https://cwiki.apache.org/confluence/display/HAWQ/Build+Package+and+Install+with+RPM
> >> >
> >> >
> >> > 4. Git release branch:
> >> > https://git-wip-us.apache.org/repos/asf?p=incubator-hawq.git;a=shortlog;h=refs/heads/2.3.0.0-incubating
> >> >
> >> > 5. Source and binary release tarballs with signatures:
> >> > https://dist.apache.org/repos/dist/dev/incubator/hawq/2.3.0.0-incubating.RC1/
> >> >
> >> >
> >> > 6. Keys to verify the signature of the release artifact are available at:
> >> > https://dist.apache.org/repos/dist/dev/incubator/hawq/KEYS
> >> >
> >> >
> >> > 7. The artifact(s) has been signed with Key ID: CE60F90D1333092A
> >> >
> >> >
> >> > Please vote accordingly:
> >> > [ ] +1 approve
> >> > [ ] +0 no opinion
> >> > [ ] -1 disapprove (and reason why)
> >> >
> >> >
> >> > Best regards,
> >> > Yi (yjin)
> >> >
> >>
>


Re: I want to contribute code to Apache HAWQ

2017-11-14 Thread
Welcome to the Apache HAWQ family!

2017-11-13 11:12 GMT+08:00 Shubham Sharma :

> Welcome to the Apache HAWQ community Chiyang. Looking forward to your
> contributions.
>
> Following links will help you get started -
>
> - Contribution guidelines -
> https://cwiki.apache.org/confluence/display/HAWQ/Contributing+to+HAWQ
> - Development Environment setup -
> https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install
> - Frequently faced problems and their solutions -
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65144284
>
> Hope this helps.
>
>
> On Sun, Nov 12, 2017 at 7:00 PM, Lei Chang  wrote:
>
> > Cool.  Welcome to Apache HAWQ.
> >
> > And looking forward to seeing your updates on the feature.
> >
> > Cheers
> > Lei
> >
> >
> >
> >
> > On Mon, Nov 13, 2017 at 10:54 AM, Chiyang Wan 
> > wrote:
> >
> > > Hello, everyone. I want to contribute code to Apache HAWQ, especially for
> > > HAWQ-786 (Framework to support pluggable formats and file systems).
> > >
> >
>
>
>
> --
> Regards,
> Shubham Sharma
>


Re: [ANNOUNCE] Apache HAWQ 2.2.0.0-incubating Released

2017-07-13 Thread
+1 for Yi, the top committer of Apache HAWQ

2017-07-13 16:19 GMT+08:00 Roman Shaposhnik :

> Congrats indeed! Very nice to see the community mastering binary
> artifacts in addition to source tarballs!
>
> Thanks,
> Roman.
>
> On Wed, Jul 12, 2017 at 8:47 PM, Vineet Goel  wrote:
> > Great work indeed ! Congrats, everyone.
> >
> > Cheers,
> > Vineet
> >
> >
> > On Wed, Jul 12, 2017 at 2:54 AM Radar Lei  wrote:
> >
> >> Great achievement! Congratulations!
> >>
> >> Regards,
> >> Radar
> >>
> >> On Wed, Jul 12, 2017 at 5:38 PM, Hongxu Ma  wrote:
> >>
> >> > Congratulations!
> >> >
> >> > Wish HAWQ getting better in future!
> >> >
> >> > On 12/07/2017 15:27, Ruilong Huo wrote:
> >> > > Hi All,
> >> > >
> >> > > The Apache HAWQ (incubating) Project Team is proud to announce
> >> > > the release of Apache HAWQ 2.2.0.0-incubating.
> >> > >
> >> > > This is a source code and binary release.
> >> > >
> >> > > ABOUT HAWQ
> >> > > Apache HAWQ (incubating) combines exceptional MPP-based analytics
> >> > > performance, robust ANSI SQL compliance, Hadoop ecosystem
> integration
> >> > > and manageability, and flexible data-store format support, all
> >> > > natively in Hadoop, no connectors required.
> >> > >
> >> > > Built from a decade’s worth of massively parallel processing (MPP)
> >> > > expertise developed through the creation of open source Greenplum®
> >> > > Database and PostgreSQL, HAWQ enables you to
> >> > > swiftly and interactively query Hadoop data, natively via HDFS.
> >> > >
> >> > > FEATURES AND ENHANCEMENTS INCLUDED IN THIS RELEASE
> >> > > - CentOS 7.x support
> >> > > Apache HAWQ is improved to be compatible with CentOS 7.x along with
> >> 6.x.
> >> > >
> >> > > - Apache Ranger integration
> >> > > Integrate Apache HAWQ with Apache Ranger through HAWQ Ranger Plugin
> >> > Service
> >> > > which is a RESTful service. It enables users to use Apache Ranger to
> >> > authorize
> >> > > user access to Apache HAWQ resources. It also manages all Hadoop
> >> > components’
> >> > > authorization policies with the same user interface, policy store,
> and
> >> > auditing
> >> > > stores.
> >> > >
> >> > > - PXF ORC profile
> >> > > Fully supports PXF with Optimized Row Columnar (ORC) file format.
> >> > >
> >> > > - Fixes and enhancements on Apache HAWQ resource manager, query
> >> > execution, dispatcher,
> >> > > catalog, management utilities and more.
> >> > >
> >> > > JIRA GENERATED RELEASE NOTES
> >> > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12318826&version=12339641
> >> > >
> >> > > RELEASE ARTIFACTS ARE AVAILABLE AT
> >> > > http://apache.org/dyn/closer.cgi/incubator/hawq/2.2.0.0-incubating
> >> > >
> >> > > SHA256 & MD5 SIGNATURES (verify your downloads <
> >> > https://www.apache.org/dyn/closer.cgi#verify>):
> >> > > https://dist.apache.org/repos/dist/release/incubator/hawq/2.2.0.0-incubating
> >> > >
> >> > > PGP KEYS
> >> > > https://dist.apache.org/repos/dist/release/incubator/hawq/KEYS
> >> > >
> >> > > DOCUMENTATION
> >> > > http://hawq.incubator.apache.org/docs/userguide/2.2.0.0-incubating
> >> > >
> >> > > HAWQ RESOURCES
> >> > > - JIRA: https://issues.apache.org/jira/browse/HAWQ
> >> > > - Wiki: https://cwiki.apache.org/confluence/display/HAWQ/
> >> > Apache+HAWQ+Home
> >> > > - Mailing list:
> >> > > dev@hawq.incubator.apache.org
> >> > > u...@hawq.incubator.apache.org
> >> > >
> >> > > LEARN MORE ABOUT HAWQ
> >> > > http://hawq.apache.org
> >> > >
> >> > > Best regards,
> >> > > - Apache HAWQ (incubating) Team
> >> > >
> >> > > ==
> >> > > DISCLAIMER
> >> > >
> >> > > Apache HAWQ (incubating) is an effort undergoing incubation at the
> >> Apache
> >> > > Software Foundation (ASF), sponsored by the Apache Incubator PMC.
> >> > >
> >> > > Incubation is required of all newly accepted projects until a
> further
> >> > > review indicates that the infrastructure, communications, and
> decision
> >> > > making process have stabilized in a manner consistent with other
> >> > > successful ASF projects.
> >> > >
> >> > > While incubation status is not necessarily a reflection of the
> >> > > completeness or stability of the code, it does indicate that the
> >> > > project has yet to be fully endorsed by the ASF.
> >> > >
> >> > > Best regards,
> >> > > Ruilong Huo
> >> >
> >> > --
> >> > Regards,
> >> > Hongxu.
> >> >
> >> >
> >>
>


Re: [ANNOUNCE] Apache HAWQ 2.2.0.0-incubating Released

2017-07-12 Thread
Congrats!

2017-07-13 9:55 GMT+08:00 Yandong Yao :

> Great achievement, Congrats!
>
> On Thu, Jul 13, 2017 at 8:46 AM, Lei Chang  wrote:
>
> > Congrats!
> >
> > Cheers
> > Lei
> >
> >
> > On Wed, Jul 12, 2017 at 3:27 PM, Ruilong Huo  wrote:
> >
> > > Hi All,
> > >
> > > The Apache HAWQ (incubating) Project Team is proud to announce
> > > the release of Apache HAWQ 2.2.0.0-incubating.
> > >
> > > This is a source code and binary release.
> > >
> > > ABOUT HAWQ
> > > Apache HAWQ (incubating) combines exceptional MPP-based analytics
> > > performance, robust ANSI SQL compliance, Hadoop ecosystem integration
> > > and manageability, and flexible data-store format support, all
> > > natively in Hadoop, no connectors required.
> > >
> > > Built from a decade’s worth of massively parallel processing (MPP)
> > > expertise developed through the creation of open source Greenplum®
> > > Database and PostgreSQL, HAWQ enables you to
> > > swiftly and interactively query Hadoop data, natively via HDFS.
> > >
> > > FEATURES AND ENHANCEMENTS INCLUDED IN THIS RELEASE
> > > - CentOS 7.x support
> > > Apache HAWQ is improved to be compatible with CentOS 7.x along with
> 6.x.
> > >
> > > - Apache Ranger integration
> > > Integrate Apache HAWQ with Apache Ranger through HAWQ Ranger Plugin
> > Service
> > > which is a RESTful service. It enables users to use Apache Ranger to
> > > authorize
> > > user access to Apache HAWQ resources. It also manages all Hadoop
> > > components’
> > > authorization policies with the same user interface, policy store, and
> > > auditing
> > > stores.
> > >
> > > - PXF ORC profile
> > > Fully supports PXF with Optimized Row Columnar (ORC) file format.
> > >
> > > - Fixes and enhancements on Apache HAWQ resource manager, query
> > execution,
> > > dispatcher,
> > > catalog, management utilities and more.
> > >
> > > JIRA GENERATED RELEASE NOTES
> > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12318826&version=12339641
> > >
> > > RELEASE ARTIFACTS ARE AVAILABLE AT
> > > http://apache.org/dyn/closer.cgi/incubator/hawq/2.2.0.0-incubating
> > >
> > > SHA256 & MD5 SIGNATURES (verify your downloads <
> > > https://www.apache.org/dyn/closer.cgi#verify>):
> > > https://dist.apache.org/repos/dist/release/incubator/hawq/2.2.0.0-incubating
> > >
> > > PGP KEYS
> > > https://dist.apache.org/repos/dist/release/incubator/hawq/KEYS
> > >
> > > DOCUMENTATION
> > > http://hawq.incubator.apache.org/docs/userguide/2.2.0.0-incubating
> > >
> > > HAWQ RESOURCES
> > > - JIRA: https://issues.apache.org/jira/browse/HAWQ
> > > - Wiki: https://cwiki.apache.org/confluence/display/HAWQ/
> > Apache+HAWQ+Home
> > > - Mailing list:
> > > dev@hawq.incubator.apache.org
> > > u...@hawq.incubator.apache.org
> > >
> > > LEARN MORE ABOUT HAWQ
> > > http://hawq.apache.org
> > >
> > > Best regards,
> > > - Apache HAWQ (incubating) Team
> > >
> > > ==
> > > DISCLAIMER
> > >
> > > Apache HAWQ (incubating) is an effort undergoing incubation at the
> Apache
> > > Software Foundation (ASF), sponsored by the Apache Incubator PMC.
> > >
> > > Incubation is required of all newly accepted projects until a further
> > > review indicates that the infrastructure, communications, and decision
> > > making process have stabilized in a manner consistent with other
> > > successful ASF projects.
> > >
> > > While incubation status is not necessarily a reflection of the
> > > completeness or stability of the code, it does indicate that the
> > > project has yet to be fully endorsed by the ASF.
> > >
> > > Best regards,
> > > Ruilong Huo
> > >
> >
>
>
>
> --
> Best Regards,
> Yandong
>


Re: why these 2 queries had different explain

2017-05-22 Thread
Hi tony,

Yes, confirmed it's a bug. Filed it in the Apache HAWQ JIRA:
https://issues.apache.org/jira/browse/HAWQ-1470. We will fix it later.

Thanks,
Zhenglin

2017-05-22 10:31 GMT+08:00 tao tony <tonytao0...@outlook.com>:

> OK, this bug appears when using a gpfdist external table in the select list.
> I created some test data to reproduce this bug in the HAWQ docker environment.
>
> 1. Create an external table:
>
> CREATE EXTERNAL TABLE testext (
>  a int,
>  b character varying(255)
> ) LOCATION (
>  'gpfdist://172.19.0.2:8087/test.csv'
> ) FORMAT 'text' (delimiter E',' null E'' escape E'OFF');
>
> test.csv file contains:
>
> cat gpdata/test.csv
> 1,abc
> 2,bce
> 3,ced
>
> 2. Create an internal table:
>
> create table test1(c int);
>
> insert into test1 values(1);
>
> insert into test1 values(2);
>
> insert into test1 values(3);
>
> insert into test1 values(4);
>
> 3. Run the query; it could not get the testext.b values:
>
> select c,(select s.b from testext s where t.c=s.a) from test1 t;
>   c | ?column?
> ---+--
>   1 |
>   2 |
>   3 |
>   4 |
> (4 rows)
> 4. Create an internal table from testext and run the query again; it
> returns the correct result:
>
> create table test as select * from testext;
>
> select c,(select s.b from test s where t.c=s.a) from test1 t;
>   c | ?column?
> ---+--
>   1 | abc
>   2 | bce
>   3 | ced
>   4 |
> (4 rows)
>
> You can compare the two explains; in step 3, testext was not broadcast.
>
>
>
> On 05/22/2017 09:25 AM, 陶征霖 wrote:
> > Hi tony,
> >
> > Could you please provide simple reproduction steps so that we can easily
> > debug in our own environment?
> >
> > Thanks,
> > Zhenglin
> >
> > 2017-05-18 14:17 GMT+08:00 tao tony <tonytao0...@outlook.com>:
> >
> >> It seems to be a bug in querying an external table, because I found
> >> ucloud_pay_tenanid (ulist) was an external table using gpfdist.
> >>
> >> CREATE EXTERNAL TABLE ucloud_pay_tenanid (
> >>   customercode character varying(255),
> >>   customername character varying(255),
> >>   prefixflag character varying(255),
> >>   customertype character varying(255),
> >>   comments character varying(255)
> >> ) LOCATION (
> >> 'gpfdist://hdptest02.hddomain.cn:8087/ucloud_pay_tenanid_*.csv'
> >> ) FORMAT 'text' (delimiter E';' null E'' escape E'OFF')
> >> ENCODING 'UTF8';
> >>
> >> then I create an internal table using:
> >>
> >> create table ucloudtest as select * from ucloud_pay_tenanid;
> >>
> >> Run the explain; it looks like the right query plan:
> >>
> >> hdb=# explain select cp.tenantid,
> >> hdb-#  (select ulist.customertype  from ucloudtest ulist where
> >> ulist.customercode=cp.tenantid),cp.orderuuid from cptest cp;
> >>  QUERY PLAN
> >> 
> >> -
> >>Gather Motion 1:1  (slice2; segments: 1)  (cost=0.00..219.50 rows=100
> >> width=42)
> >>  ->  Append-only Scan on cptest cp  (cost=0.00..219.50 rows=100
> >> width=42)
> >>SubPlan 1
> >>  ->  Result  (cost=2.18..2.19 rows=1 width=7)
> >>Filter: ulist.customercode::text = $0::text
> >>->  Materialize  (cost=2.18..2.19 rows=1 width=7)
> >>  ->  Broadcast Motion 1:1  (slice1; segments: 1)
> >> (cost=0.00..2.17 rows=1 width=7)
> >>->  Append-only Scan on ucloudtest ulist
> >> (cost=0.00..2.17 rows=1 width=7)
> >>Settings:  default_hash_table_bucket_number=18; optimizer=off
> >>Optimizer status: legacy query optimizer
> >> (10 rows)
> >>
> >> Run the query and get the correct result:
> >>
> >> hdb=# select cp.tenantid,
> >>(select ulist.customertype  from ucloudtest ulist where
> >> ulist.customercode=cp.tenantid),cp.orderuuid from cptest cp;
> >>tenantid | ?column? |orderuuid
> >> --+--+--
> >>sxve7r6c | 便利 | e6d9b57a0c55484392448ea908c1fe49
> >>sxve7r6c | 便利 | 22a80697bfc74d63b7f28eee246c4368
> >>3e7rph46 | 专卖 | 420ad3e45762459e91860b975e9f2751
> >>3e7rph46 | 专卖 | 0634e7e3539a4116b9917a7493838f51
> >>7jvfka5m | 专卖 | a7b96194fe9f

Re: why these 2 queries had different explain

2017-05-21 Thread
Hi tony,

Could you please provide simple reproduction steps so that we can easily
debug in our own environment?

Thanks,
Zhenglin

2017-05-18 14:17 GMT+08:00 tao tony :

> It seems to be a bug in querying an external table, because I found
> ucloud_pay_tenanid (ulist) was an external table using gpfdist.
>
> CREATE EXTERNAL TABLE ucloud_pay_tenanid (
>  customercode character varying(255),
>  customername character varying(255),
>  prefixflag character varying(255),
>  customertype character varying(255),
>  comments character varying(255)
> ) LOCATION (
> 'gpfdist://hdptest02.hddomain.cn:8087/ucloud_pay_tenanid_*.csv'
> ) FORMAT 'text' (delimiter E';' null E'' escape E'OFF')
> ENCODING 'UTF8';
>
> then I create an internal table using:
>
> create table ucloudtest as select * from ucloud_pay_tenanid;
>
> Run the explain; it looks like the right query plan:
>
> hdb=# explain select cp.tenantid,
> hdb-#  (select ulist.customertype  from ucloudtest ulist where
> ulist.customercode=cp.tenantid),cp.orderuuid from cptest cp;
> QUERY PLAN
> 
> -
>   Gather Motion 1:1  (slice2; segments: 1)  (cost=0.00..219.50 rows=100
> width=42)
> ->  Append-only Scan on cptest cp  (cost=0.00..219.50 rows=100
> width=42)
>   SubPlan 1
> ->  Result  (cost=2.18..2.19 rows=1 width=7)
>   Filter: ulist.customercode::text = $0::text
>   ->  Materialize  (cost=2.18..2.19 rows=1 width=7)
> ->  Broadcast Motion 1:1  (slice1; segments: 1)
> (cost=0.00..2.17 rows=1 width=7)
>   ->  Append-only Scan on ucloudtest ulist
> (cost=0.00..2.17 rows=1 width=7)
>   Settings:  default_hash_table_bucket_number=18; optimizer=off
>   Optimizer status: legacy query optimizer
> (10 rows)
>
> Run the query and get the correct result:
>
> hdb=# select cp.tenantid,
>   (select ulist.customertype  from ucloudtest ulist where
> ulist.customercode=cp.tenantid),cp.orderuuid from cptest cp;
>   tenantid | ?column? |orderuuid
> --+--+--
>   sxve7r6c | 便利 | e6d9b57a0c55484392448ea908c1fe49
>   sxve7r6c | 便利 | 22a80697bfc74d63b7f28eee246c4368
>   3e7rph46 | 专卖 | 420ad3e45762459e91860b975e9f2751
>   3e7rph46 | 专卖 | 0634e7e3539a4116b9917a7493838f51
>   7jvfka5m | 专卖 | a7b96194fe9f48379e2711ac6000191b
>   6xydfh4y | 便利 | 7a55e97119784623a53f6e65ef9680c7
>   sxve7r6c | 便利 | 227f3d22aec14723bb51efc4e2a6f0b4
>   3e7rph46 | 专卖 | f3d02cc77a2348829be2f72ce24bf846
>   6xydfh4y | 便利 | bab722ac7d5748408d3ad2973d292ab5
>
>
> On 05/18/2017 11:11 AM, tao tony wrote:
> > hi guys,
> >
> > The different explains below have confused me these days. Could you
> > please help me understand why ulist.customertype is null?
> >
> > explain :
> >
> > hdb=# explain select cp.tenantid,
> >(select ulist.customertype  from ucloud_pay_tenanid ulist where
> > ulist.customercode=cp.tenantid),cp.orderuuid from cptest cp;
> >   QUERY PLAN
> > 
> --
> >Gather Motion 18:1  (slice1; segments: 18) (cost=0.00..1350002.00
> > rows=100 width=42)
> >  ->  Append-only Scan on cptest cp  (cost=0.00..1350002.00 rows=6
> > width=42)
> >SubPlan 1
> >  ->  External Scan on ucloud_pay_tenanid ulist
> > (cost=0.00..13500.00 rows=56 width=516)
> >Filter: customercode::text = $0::text
> >Settings:  default_hash_table_bucket_number=18; optimizer=off
> >Optimizer status: legacy query optimizer
> > (7 rows)
> >
> >
> > hdb=# explain select a,(select d from test1 s where t.b=s.e) from test2
> t;
> >  QUERY PLAN
> > 
> -
> >Gather Motion 1:1  (slice2; segments: 1)  (cost=0.00..10.20 rows=9
> > width=6)
> >  ->  Append-only Scan on test2 t  (cost=0.00..10.20 rows=9 width=6)
> >SubPlan 1
> >  ->  Result  (cost=1.01..1.02 rows=1 width=4)
> >Filter: $0::text = s.e::text
> >->  Materialize  (cost=1.01..1.02 rows=1 width=4)
> >  ->  Broadcast Motion 1:1  (slice1; segments: 1)
> > (cost=0.00..1.01 rows=1 width=4)
> >->  Append-only Scan on test1 s
> > (cost=0.00..1.01 rows=1 width=4)
> >Settings:  default_hash_table_bucket_number=18; optimizer=off
> >Optimizer status: legacy query optimizer
> > (10 rows)
> >
> > test data:
> >
> > query1:
> >
> > hdb-#  (select ulist.customertype  from ucloud_pay_tenanid ulist where
> > 

Re: New committer: Xiang Sheng

2017-05-17 Thread
Congratulations!

2017-05-17 11:33 GMT+08:00 Amy Bai :

> Congratulations, Xiang!
>
> Regards,
> Amy
>
> On Wed, May 17, 2017 at 11:31 AM, Hubert Zhang  wrote:
>
> > Congrats!
> >
> > On Wed, May 17, 2017 at 10:37 AM, Lili Ma  wrote:
> >
> > > Congratulate Xiang!
> > >
> > > Well deserved!
> > >
> > > Thanks
> > > Lili
> > >
> > > 2017-05-17 10:34 GMT+08:00 Ma Hongxu :
> > >
> > > > Congrats!
> > > > 
> > > >
> > > > > On 17/05/2017 09:53, Wen Lin wrote:
> > > > > Hi,
> > > > >
> > > > > The Project Management Committee (PMC) for Apache HAWQ (incubating)
> > has
> > > > > invited Xiang Sheng to become a committer and we are pleased to
> > > announce
> > > > > that he has accepted.
> > > > > Being a committer enables easier contribution to the project since
> > > there
> > > > is
> > > > > no need to go via the patch submission process. This should enable
> > > better
> > > > > productivity. Please join us in congratulating him and we are
> looking
> > > > > forward to collaborating with him in the open source community. His
> > > > > contribution includes (but not limited to):
> > > > > List contributions to code base, documentation, code review,
> > discussion
> > > > in
> > > > > mailing list, JIRA, etc.
> > > > >
> > > > > Regards!
> > > > > Wen
> > > > >
> > > >
> > > > --
> > > > Regards,
> > > > Hongxu.
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > Thanks
> >
> > Hubert Zhang
> >
>


Re: nested loop and merge join are disabled by default in legacy planner?

2016-12-22 Thread
Yes, in the old planner only hashjoin is enabled by default; mergejoin and
nestloop are both disabled. However, hashjoin can't handle all cases, such as
"select * from a,b where p>q" where column p and column q come from table a
and table b respectively. So in the first pass no plan is generated. Then
root->config->mpp_trying_fallback_plan is set to true, which enables all join
methods and tries them all. You can check the function add_paths_to_joinrel,
which is called for join path generation.
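
As a quick illustration (a, b, p, and q are hypothetical tables and columns,
mirroring the example above), you can compare the plans for such a
non-equijoin with and without the other join methods enabled:

psql <<'SQL'
-- a hash join cannot evaluate a non-equijoin predicate such as p > q, so with
-- only enable_hashjoin on, the first pass finds no path and the planner falls
-- back (mpp_trying_fallback_plan) to the normally disabled join methods
explain select * from a, b where a.p > b.q;

-- explicitly enabling the other methods lets you compare the chosen plan
set enable_mergejoin = on;
set enable_nestloop = on;
explain select * from a, b where a.p > b.q;
SQL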

2016-12-21 16:47 GMT+08:00 Paul Guo :

>   {
> {"enable_nestloop", PGC_USERSET, QUERY_TUNING_METHOD,
> gettext_noop("Enables the planner's use of nested-loop join
> plans."),
> NULL
> },
> &enable_nestloop,
> false, NULL, NULL
> },
> {
> {"enable_mergejoin", PGC_USERSET, QUERY_TUNING_METHOD,
> gettext_noop("Enables the planner's use of merge join plans."),
> NULL
> },
> &enable_mergejoin,
> false, NULL, NULL
> },
>
> I just checked greenplum/gpdb. The two GUCs are disabled by default there as
> well. Does anyone know the reason or history? Thanks.
>


Re: New Committer: Hong Wu

2016-11-04 Thread
Congratulations!

2016-11-04 14:24 GMT+08:00 Radar Da lei :

> Congratulations Hong!
>
> Regards,
> Radar
>
> On Fri, Nov 4, 2016 at 2:16 PM, Ruilong Huo  wrote:
>
> > Congratulations Hong!
> >
> > Best regards,
> > Ruilong Huo
> >
> > On Fri, Nov 4, 2016 at 1:39 PM, Zhanwei Wang  wrote:
> >
> > > Congratulations!
> > >
> > >
> > >
> > > Best Regards
> > >
> > > Zhanwei Wang
> > > wan...@apache.org
> > >
> > >
> > >
> > > > On Nov 4, 2016, at 1:25 PM, Wen Lin wrote:
> > > >
> > > > Congratulations!
> > > >
> > > > On Fri, Nov 4, 2016 at 1:21 PM, Yi Jin  wrote:
> > > >
> > > >> Congratulations! Hong!
> > > >>
> > > >> On Fri, Nov 4, 2016 at 3:47 PM, Lili Ma  wrote:
> > > >>
> > > >>> The Project Management Committee (PMC) for Apache HAWQ (incubating)
> > has
> > > >>> invited Hong Wu to become a committer and we are pleased to
> announce
> > > that
> > > >>> he has accepted.
> > > >>>
> > > >>> Being a committer enables easier contribution to the project since
> > > there
> > > >> is
> > > >>> no need to go via the patch submission process. This should enable
> > > better
> > > >>> productivity.
> > > >>>
> > > >>> Please join us in congratulating him and we are looking forward to
> > > >>> collaborating with him in the open source community.
> > > >>>
> > > >>> His contribution includes (but not limited to):
> > > >>>
> > > >>>   - *Direct contribution to code base:*
> > > >>>  - *79 commits in total with most of the major components in
> hawq
> > > >>>  involved. This shows that he has solid knowledge and skill of
> > > >> hawq.*
> > > >>>  - 66 closed PRs: https://github.com/apache/
> > > >>> incubator-hawq/pulls?q=is%
> > > >>>  3Apr+user%3Axunzhang+author%3Axunzhang+is%3Aclosed
> > > >>>   > > >>> 3Apr+user%3Axunzhang+author%3Axunzhang+is%3Aclosed>
> > > >>>  - *9 features and code refactor including hawq register, hawq
> > > >>>  extract, orc, libhdfs3, test infrastructure, etc.*
> > > >>> - HAWQ-991  >.
> > > >>> Write
> > > >>> hawqregister to support registering tables from yaml files
> > > >>> - HAWQ-1012  jira/browse/HAWQ-1012
> > >.
> > > >>> Check whether the input yaml file for hawq register is
> valid
> > > >>> - HAWQ-1011  jira/browse/HAWQ-1011
> > >.
> > > >>> Check whether the table to be registered is existed
> > > >>> - HAWQ-1033  jira/browse/HAWQ-1033
> > >.
> > > >>> Add
> > > >>> —force option for hawq register
> > > >>> - HAWQ-1050  jira/browse/HAWQ-1050
> > >.
> > > >>> Support help without dash for register
> > > >>> - HAWQ-1034  jira/browse/HAWQ-1034
> > >.
> > > >>> Implement —repair option for hawq register
> > > >>> - HAWQ-1024  jira/browse/HAWQ-1024
> > >.
> > > >>> Add
> > > >>> rollback system in hawq register
> > > >>> - HAWQ-1060  jira/browse/HAWQ-1060
> > >.
> > > >>> Refactor hawq register with better readability and quality
> > > >>> - HAWQ-1005  jira/browse/HAWQ-1005
> > >.
> > > >>> Add
> > > >>> schema info, distribution policy info with Parquet format
> in
> > > >>> hawqextact
> > > >>>- HAWQ-1025  > > jira/browse/HAWQ-1025
> > > >>> .
> > > >>> Add bucket number in the yaml file of hawq extract, modify
> > > >>> the actual elf
> > > >>> for usage1
> > > >>> - HAWQ-796  >.
> > > >>> Extend orc library to support reading files from HDFS
> > > >>> - HAWQ-618  >.
> > > >>> Import libhdfs3 for internal management
> > > >>> - HAWQ-707  >.
> > > >>> Remove google test dependency from libhdfs3 and libyarn
> > folder
> > > >>> - HAWQ-873  >.
> > > >>> Improve checking time for Travis CI
> > > >>> - HAWQ-735  >.
> > > >> Add
> > > >>> —with-thrift to control building thrift inside or not
> > > >>> - HAWQ-721  >.
> > > >> New
> > > >>> Feature Test Skeleton
> > > >>> - HAWQ-911  >.
> > > >>> Optimize and refactor makefile for feature test framework
> > > >>> - HAWQ-810  >.
> > > >> Add
> > > >>> stringFormat utility

Re: new committer: Paul Guo

2016-11-04 Thread
Congratulations!

2016-11-04 14:25 GMT+08:00 Radar Da lei :

> Congratulations!
>
> Regards,
> Radar
>
> On Fri, Nov 4, 2016 at 1:39 PM, Zhanwei Wang  wrote:
>
> > Congratulations!
> >
> >
> >
> > Best Regards
> >
> > Zhanwei Wang
> > wan...@apache.org
> >
> >
> >
> > > On Nov 4, 2016, at 1:25 PM, Ivan Weng wrote:
> > >
> > > Congratulations!
> > >
> > >
> > > Regards,
> > > Ivan
> > >
> > > On Fri, Nov 4, 2016 at 1:22 PM, Yi Jin  wrote:
> > >
> > >> Congratulations! Paul! :)
> > >>
> > >> On Fri, Nov 4, 2016 at 3:48 PM, Lili Ma  wrote:
> > >>
> > >>> Congratulations, Paul!
> > >>>
> > >>> On Fri, Nov 4, 2016 at 12:08 PM, Hong Wu 
> > wrote:
> > >>>
> >  Wow! Congrats Paul.
> > 
> > 
> > > On Nov 4, 2016, at 11:51 AM, Wen Lin wrote:
> > >
> > > Paul,
> > > Congratulations!
> > >
> > >> On Fri, Nov 4, 2016 at 11:34 AM, Ruilong Huo 
> > >> wrote:
> > >>
> > >> The Project Management Committee (PMC) for Apache HAWQ
> (incubating)
> > >>> has
> > >> invited Paul Guo to become a committer and we are pleased to
> > >> announce
> >  that
> > >> he has accepted.
> > >>
> > >> Being a committer enables easier contribution to the project since
> >  there is
> > >> no need to go via the patch submission process. This should enable
> >  better
> > >> productivity.
> > >>
> > >> Please join us in congratulating him and we are looking forward to
> > >> collaborating with him in the open source community.
> > >>
> > >> His contribution includes (but not limited to):
> > >>
> > >>  - *Direct contribution to code base*
> > >> - *56 commits in total which span most of the key components
> of
> > >> hawq. This demonstrate concrete knowledge and in depth
> > >> understanding of the
> > >> product*
> > >> - 56 closed PRs: https://github.com/apache
> > >> /incubator-hawq/pulls?q=is%3Apr+is%3Aclosed+author%
> 3Apaul-guo-
> > >>  > >> 3Apr+is%3Aclosed+author%3Apaul-guo->
> > >> - *9 features, enhancement and code refactor including storage
> > >>> and
> > >> compression, command line tool and management utility,
> > >>> procedural
> > >> language, configure and build system, test infrastructure and
> > >>> test,
> > >> etc*
> > >>- HAWQ-774  >
> > >>> Add
> > >>snappy compression support to row oriented storage
> > >>- HAWQ-984  >
> >  hawq
> > >>config is too slow
> > >>- HAWQ-775  >
> > >> Provide
> > >>a seperate PLR package
> > >>- HAWQ-751  >
> > >>> Add
> > >>plr, pgcrypto, gporca into Apache HAWQ
> > >>- HAWQ-744  >
> > >>> Add
> > >>plperl code
> > >>- HAWQ-1007  > >> jira/browse/HAWQ-1007>
> >  Add
> > >>the pgcrypto code into hawq
> > >>- HAWQ-394  >
> > >> Remove
> > >>pgcrypto from code base
> > >>- HAWQ-914  >
> > >> Improve
> > >>user experience of HAWQ's build infrastructure
> > >>- HAWQ-1081  > >> jira/browse/HAWQ-1081>
> > >> Check
> > >>missing perl modules (at least JSON) in configure
> > >>- HAWQ-867  >
> > >> Replace
> > >>the git-submobule mechanism with git-clone
> > >>- HAWQ-711  >
> > >> Integrate
> > >>libhdfs3 and libyarn makefile into hawq
> > >>- HAWQ-878  >
> > >>> Add
> > >>googletest cases for the ao snappy compression support
> > >>- HAWQ-876  >
> > >>> Add
> > >>the support for initFile option of gpdiff.pl in hawq
> > >>> googletest
> > >>framework
> > >>- HAWQ-917  >
> > >> Refactor
> > >>feature tests for data type check with new googletest
> > >>> framework
> > >> - *34 bug fixes in key components including storage,
> > >> compression,
> > >> dispatcher, command line tool, management utilty, configure
> and
> >  build
> > >> system, and test infrastructure, etc*
> > >>- 

Re: Rename "greenplum" to "hawq"

2016-07-12 Thread
Good idea, but it needs quite a lot of effort and may also affect customer
behavior. We should handle it carefully.

2016-07-13 9:54 GMT+08:00 Ivan Weng :

> Agree with this good idea. But as Paul said, there may already be many users
> who use greenplum_path.sh or something else in their environment. So we need
> to think about it.
>
>
> Regards,
> Ivan
>
> On Wed, Jul 13, 2016 at 9:31 AM, Paul Guo  wrote:
>
> > I've asked this before. Seems that affects some old users. I'm not sure
> > about the details.
> > I agree that we should change it to a better name in a release.
> >
> > 2016-07-13 9:25 GMT+08:00 Roman Shaposhnik :
> >
> > > On Tue, Jul 12, 2016 at 6:21 PM, Xiang Sheng 
> wrote:
> > > > Agreed, @xunzhang.
> > > > However, some greenplum strings can be easily replaced, but there are
> > > > too many in the code or comments. Changing all of them costs too much
> > > > effort.
> > > >
> > > > So changing the strings that users can see is enough.
> > >
> > > Huge +1 to this! Btw, is this something we may be able to tackle in our
> > > next Apache release?
> > >
> > > Thanks,
> > > Roman.
> > >
> >
>



-- 
Thanks,
Zhenglin


Re: sanity-check before running cases in feature-test

2016-07-12 Thread
Agree that the test case should handle it by itself. However, the check should
be implemented in a common library so that all test cases can reuse the code.
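
A minimal sketch of the kind of shared pre-flight check being proposed (the
helper logic and the plpythonu probe are illustrative; the real version would
live in the feature-test common library):

# fail fast if the environment is not sourced or required tools are missing
for tool in psql gpdiff.pl; do
  command -v "$tool" >/dev/null 2>&1 || {
    echo "missing $tool; source greenplum_path.sh first" >&2; exit 1;
  }
done

# cases that depend on an optional extension can probe for it before running
psql -tAc "select 1 from pg_language where lanname = 'plpythonu'" | grep -q 1 \
  || { echo "plpythonu not installed; skipping this case" >&2; exit 0; }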

2016-07-12 14:15 GMT+08:00 Ivan Weng :

> Agree with Hong. A test case should check the environment it needs. If the
> check fails, it should terminate the execution and report the error.
>
> On Tue, Jul 12, 2016 at 2:04 PM, Hong Wu  wrote:
>
> > It is the user/developer themselves who should take care of this. Say, if
> > you write a test case related to plpython, why don't you configure HAWQ
> > with the "--with-python" option? We should write a README for feature-test
> > that guides users on running these tests; for example, tell them to source
> > "greenplum.sh" before running tests.
> >
> > Consequently, I think adding such a sanity check is a little bit of
> > over-engineering that will bring extra problems and complexities.
> >
> > Best
> > xunzhang
> >
> > 2016-07-12 13:47 GMT+08:00 Paul Guo :
> >
> > > I have run into feature test failures more than once due to reported
> > > missing stuff.
> > >
> > > e.g.
> > >
> > > 1. I did not have pl/python installed in my hawq build, so
> > >    UDF/sql/function_set_returning.sql fails to "create language plpythonu".
> > >    This makes this case fail.
> > >
> > > 2. Sometimes I forgot to source a greenplum.sh, then all cases run
> > > with failures due to missing psql.
> > >
> > > We seem to be able to improve.
> > >
> > > 1) Sanity-check some file existence in common code, e.g.
> > > psql, gpdiff.pl,
> > >
> > > 2) Some cases could do sanity-check in their own test constructor
> > > functions,
> > > e.g. if the case uses the extension plpython, the test case should
> > > check it itself.
> > >
> > > More thoughts?
> > >
> >
>



-- 
Thanks,
Zhenglin


Re: About *.out files in test/feature

2016-07-11 Thread
Besides .out, there are also .diff files and binaries which should be ignored.

2016-07-12 11:32 GMT+08:00 Paul Guo :

> I'd mask the output files produced by running feature tests in .gitignore and
> clean them up when running "make clean" or "make distclean". Does anyone have
> any suggestions? Thanks.
>
> diff --git a/src/test/feature/.gitignore b/src/test/feature/.gitignore
> index a2e6bd4..c7332b2 100644
> --- a/src/test/feature/.gitignore
> +++ b/src/test/feature/.gitignore
> @@ -1 +1,2 @@
>  doc/
> +**/*.out
>
> diff --git a/src/test/feature/Makefile b/src/test/feature/Makefile
> index adc6acc..e0985d1 100644
> --- a/src/test/feature/Makefile
> +++ b/src/test/feature/Makefile
> @@ -35,6 +35,7 @@ doc:
> doxygen doxygen_template
>
>  clean distclean: sharelibclean
> +   find . -type f -name "*.out" |xargs rm -f
> $(RM) feature-test
> $(RM) feature-test.dSYM
>


Re: question on pg_hba.conf updates

2016-07-11 Thread
As far as I know, there are no HAWQ management tools that update
$MASTER_DATA_DIRECTORY/pg_hba.conf.
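
For now the edit is manual. A minimal sketch (paths are illustrative, and this
assumes the standard postgres-style reload applies to the HAWQ master):

# edit the master's pg_hba.conf by hand, then ask the master to re-read it
vi $MASTER_DATA_DIRECTORY/pg_hba.conf
pg_ctl reload -D $MASTER_DATA_DIRECTORY   # sends SIGHUP; no restart needed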

2016-07-12 10:08 GMT+08:00 Vineet Goel :

> Hi all,
>
> Question related to integration with Apache Ambari.
>
> It would be nice to make pg_hba.conf visible and editable in Ambari, so
> that Ambari allows one single interface for admins to update HAWQ and
> System configs such as hawq-site.xml, hawq-check.conf, sysctl.conf,
> limits.conf, hdfs-client.xml, yarn-client.xml etc. Rollback and
> version-history of config files is always a bonus in Ambari.
>
> Are there any backend HAWQ utilities (such as activate-standby or others)
> that update pg_hba.conf file in any way, ever? It would be nice to know so
> that config file change conflicts are managed appropriately.
>
> Thanks
> Vineet
>


Re: HAWQ + ORCA cause core dump

2016-06-29 Thread
If you have already enabled core dumps, you can inspect the core file to debug
the crash.
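
For example, a minimal sketch of pulling a backtrace out of the core file
(paths are illustrative; point gdb at the postgres binary from your HAWQ
install and at the core written by the crashing master process):

ulimit -c unlimited             # make sure core dumps are enabled before reproducing
gdb $GPHOME/bin/postgres /path/to/core
# at the (gdb) prompt:
#   bt            -- backtrace of the crashing process
#   info threads  -- list other threads if needed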

2016-06-30 11:17 GMT+08:00 Guo Gang :

> CC-ing gporca developer.
>
> 2016-06-16 10:20 GMT+08:00 Twelfth Man :
>
> > Hello HAWQ community,
> >
> > We have no problem building and deploying a local HAWQ instance for
> > experiments, but we have had no luck making ORCA work. I wonder if anyone
> > here has had a similar experience and would like to share.
> >
> > What we see:
> >
> > create a test table T;
> > insert some data into table T;
> >
> > select count(*) from T;
> >
> > it works with no any problem.
> >
> > set optimizer=on;
> > select count(*) from T;
> >
> > server closed the connection unexpectedly
> > This probably means the server terminated abnormally
> > before or while processing the request.
> >
> > on the server side, we found the following stack trace from master server
> > log under pg_log/
> >
> > 2016-06-03 02:37:36.993438 CST,,,p26361,th0,,,2016-06-03 02:37:36
> > CST,0,con19,cmd1,seg-1,"PANIC","XX000","Unexpected internal
> error:
> > Master process received signal SIGSEGV",,,0"10x95daf1
> postgres
> >  + 0x95daf1
> > 20x95dcfd postgres
> StandardHandlerForSigillSigsegvSigbus_OnMainThread +
> > 0x2b
> > 30x879c4f postgres CdbProgramErrorHandler + 0xf4
> > 40x7f1950045be0 libpthread.so.0  + 0x50045be0
> > 50x7f194bf3a9a0 libnaucrates.so.3 InitDXL + 0x40
> > 60x7f19502f64ed libdxltranslators.so
> _ZN9COptTasks7ExecuteEPFPvS0_ES0_
> > + 0x3d
> > 70x7f19502f7a56 libdxltranslators.so
> > _ZN9COptTasks15PplstmtOptimizeEP5QueryPb + 0x36
> > 80x7df4df postgres  + 0x7df4df
> > 90x7df8dc postgres planner + 0x3cb
> > 10   0x875958 postgres pg_plan_query + 0x51
> > 11   0x875a63 postgres pg_plan_queries + 0x8e
> > 12   0x876efb postgres  + 0x876efb
> >
> > The 12th Man
> >
>


Re: [VOTE] HAWQ 2.0.0-beta-incubating RC4

2016-01-25 Thread
+1
Downloaded, deployed and tested

2016-01-26 13:35 GMT+08:00 Yi Jin <y...@pivotal.io>:

> +1
>
> What I have done:
> 1) Downloaded, deployed and tested the project.
> 2) Reviewed LICENSE, COPYRIGHT, DISCLAIMER and NOTICE files.
>
> Best,
> Yi Jin
>
> On Tue, Jan 26, 2016 at 4:14 PM, Ruilong Huo <r...@pivotal.io> wrote:
>
> > Downloaded, deployed, tested and reviewed it. +1
> >
> > Best regards,
> > Ruilong Huo
> >
> > On Tue, Jan 26, 2016 at 12:41 PM, Lei Chang <lei_ch...@apache.org>
> wrote:
> >
> > > +1 (binding)
> > >
> > > Cheers
> > > Lei
> > >
> > >
> > > On Tue, Jan 26, 2016 at 6:33 AM, Roman Shaposhnik <
> ro...@shaposhnik.org>
> > > wrote:
> > >
> > > > Lei, can you please explicitly cast +1/-1 vote?
> > > >
> > > > Thanks,
> > > > Roman.
> > > >
> > > > On Sun, Jan 24, 2016 at 7:50 PM, Lei Chang <lei_ch...@apache.org>
> > wrote:
> > > > > Look good!
> > > > >
> > > > > Run mvn verify, get:
> > > > >
> > > > > *
> > > > >
> > > > > Summary
> > > > >
> > > > > ---
> > > > >
> > > > > Generated at: 2016-01-24T19:50:43-08:00
> > > > >
> > > > > Notes: 5
> > > > >
> > > > > Binaries: 0
> > > > >
> > > > > Archives: 0
> > > > >
> > > > > Standards: 2627
> > > > >
> > > > > Apache Licensed: 1796
> > > > >
> > > > > Generated Documents: 0
> > > > >
> > > > > JavaDocs are generated and so license header is optional
> > > > >
> > > > > Generated files do not required license headers
> > > > >
> > > > > 0 Unknown Licenses
> > > > >
> > > > > ***
> > > > >
> > > > > Unapproved licenses:
> > > > >
> > > > > ***
> > > > >
> > > > > Cheers
> > > > >
> > > > > Lei Chang
> > > > >
> > > > > On Sat, Jan 23, 2016 at 9:14 AM, Roman Shaposhnik <
> > > ro...@shaposhnik.org>
> > > > > wrote:
> > > > >
> > > > >> On Fri, Jan 22, 2016 at 11:20 AM, Caleb Welton <
> cwel...@pivotal.io>
> > > > wrote:
> > > > >> > First question for me is:
> > > > >> > - We have a couple existing jiras on IP clearance, including
> > > > >> >   - https://issues.apache.org/jira/browse/HAWQ-184
> > > > >> >   - https://issues.apache.org/jira/browse/HAWQ-207
> > > > >> >
> > > > >> > In particular if HAWQ-184 has *not* been resolved then how are
> we
> > > > clear
> > > > >> of
> > > > >> > our IP issues?  I see that there was a commit associated with
> the
> > > > issue,
> > > > >> so
> > > > >> > perhaps this is just a lack of jira hygiene?
> > > > >>
> > > > >> I wasn't too eager to resolve 184 until the vote passes,
> > > > >> as for 207 -- that ended up being fixed in a different way
> > > > >> and I resolved it as no longer applicable.
> > > > >>
> > > > >> Thanks,
> > > > >> Roman.
> > > > >>
> > > >
> > >
> >
>



-- 
陶征霖
Pivotal