Hi All,

I'm Hongxu Ma, and I joined Pivotal China two months ago. Before Pivotal, I 
worked at Baidu for many years, focusing on search and infrastructure.
I am an open source fan, and I am now working on libhdfs3 and the Ranger 
integration in the HAWQ team.

Feel free to discuss anything with me, and let's make HAWQ better!

Regards,
hongxu

On 22/12/2016 at 14:21, Yi Jin wrote:
Hi All,

My name is Yi Jin. I am currently working for Pivotal in Sydney, Australia as a 
committer on Apache HAWQ. I designed and developed the new resource manager, 
including the resource queue DDL interface, fault-tolerant cluster resource 
management, YARN integration, and on-demand query resource consumption 
management.

Before working on HAWQ at Pivotal, I worked for IBM as a developer on DB2 
LUW (Linux, Unix, Windows), mainly in the parser and runtime components for its 
data warehouse features. I then did and verified technical research work on 
automatically optimizing critical database replication for the core banking HA 
and DR (high availability and disaster recovery) solution that IBM delivered on 
DB2 z/OS at ICBC, the largest bank in China, known as its Active-Active banking 
HADR solution.

Building on my work in HAWQ on distributed resource management, I am now pushing 
myself into every component of Apache HAWQ to pursue more contributions to the 
success of Apache HAWQ. I am really happy to have this opportunity to get to 
know you all. Thank you!

Best,
Yi

On Wed, Dec 21, 2016 at 8:55 PM, Paul Guo <[email protected]> wrote:
Hi guys,

I started working on Apache HAWQ at Pivotal about eight months ago, and I'm 
currently a committer on HAWQ. Before that, I mainly worked on the Unix/Linux 
kernel along with various infrastructure software at Sun and later at a Silicon 
Valley startup. After that I worked for over a year at an Internet company, 
until I decided it would be more interesting to work on infrastructure software 
that could serve more users and serve them better. I'd be happy to discuss any 
system-software-related topics with anyone who is interested (besides HAWQ, of 
course :-)).

Thanks.


2016-12-21 17:11 GMT+08:00 Hubert Zhang <[email protected]>:
Hi, All,

My name is Hubert Zhang. I worked on the HAWQ resource negotiator and the data 
locality feature, and I am now working on integrating HAWQ with Apache Ranger.

For any details related to the above features, or any questions about HAWQ, 
please feel free to contact me.


On Wed, Dec 21, 2016 at 1:31 PM, Michael Schubert <[email protected]> wrote:
Hi David,

Instructions to unsubscribe are at the bottom of every email. It's self-service; 
we can't unsubscribe for you.

Cheers.



On Tue, Dec 20, 2016 at 5:09 AM -1000, "David Wierz" <[email protected]> wrote:

Good Morning –

Please unsubscribe me from this listserv

Thank you

David

David J Wierz
Senior Principal | Market Strategy & Finance
[email protected] | +1.484.432.8075
387 Hidden Farm Drive | Exton, PA  19341-1185

The information contained in this e-mail is confidential and may be privileged. 
It may be read, copied, and used only by the intended recipient. If you have 
received it in error, please contact the sender immediately and delete the 
material from any computer.

From: Hong [mailto:[email protected]]
Sent: Tuesday, 20 December, 2016 10:06
To: [email protected]
Subject: Re: [INTRODUCTIONS] Hi I'm Greg and I'm part of the Apache HAWQ 
community

Dear all,

My name is HongWu (xunzhang is my ID on the Internet). I began contributing to 
HAWQ this year and have now become a HAWQ committer. I have written nearly 100 
commits<https://github.com/apache/incubator-hawq/commits?author=xunzhang> to 
make HAWQ better! I am a believer in open source, since I have benefited a lot 
from open source projects and I know its true meaning. I think open source is 
more than just publishing code. It is delivering comprehensive documents and 
tutorials, collecting feedback from users and developers, establishing 
discussion and comparison, solving confusion and issues, reproducing good or bad 
results, making a clear roadmap, keeping up long-term development, and treating 
the project as our own child. Moreover, the Apache Software Foundation is an 
elite community in the open source world, with lots of high-quality projects, 
developers, and users.

I started my career in 2012 at Douban, a social network website connecting 
people over shared interests such as books, movies, music, photos, blogs, and so 
on. I worked there as an algorithm engineer, focused on developing large-scale 
machine learning platforms and optimizing the recommendation engine. I am the 
author of the Paracel<http://paracel.io/> open source project, a distributed 
computational framework designed for machine learning, graph algorithms, and 
scientific computing. Paracel aims to simplify the distribution and 
communication details for model developers and data scientists, letting them 
concentrate on model development. During this period, I also created Plato, a 
real-time recommendation system. This work got outstanding results on the 
douban.fm<http://douban.fm> product, and Plato changed the architecture of 
Douban's offline approach to machine learning.

It's been a very exciting journey with HAWQ! From my point of view, a SQL engine 
is a super critical piece of infrastructure in the big data ecosystem. I have 
tried to design a parallel programming language named Girl (for various reasons 
it's still at a very early stage), but no matter how simple it becomes, it 
requires developers to follow certain paradigms or programming idioms and 
patterns. SQL, on the other hand, is the only existing language that erases the 
distributed coding logic: we just write SQL.

I definitely believe HAWQ can become one of the best SQL engines in the Hadoop 
ecosystem with the collaborative effort of all of us, including HAWQ users, HAWQ 
developers, HDB customers, Pivotal FTEs, HAWQ competitors, Apache mentors, and 
so on. I look forward to hearing from more people with diverse backgrounds 
joining the HAWQ family.

Beers

2016-12-20 18:03 GMT+08:00 Wen Lin <[email protected]>:
Hi, All,

My name is LinWen. I joined the Pivotal HAWQ Beijing team in November 2014. 
Before that, I was an engineer at VMware.
I have implemented some features for HAWQ, like the new fault tolerance service, 
libyarn, etc.
I believe HAWQ can be the best SQL-on-Hadoop engine with our joint effort.

On Thu, Dec 15, 2016 at 11:03 PM, Lili Ma <[email protected]> wrote:
Hello everyone,

Glad to know everybody here:)

I'm Lili Ma, from the Pivotal HAWQ R&D team in Beijing. I have been focusing on 
HAWQ development and product management since I joined Pivotal in 2012. I have 
experienced and contributed along HAWQ's entire growth path, from birth through 
Alpha, 1.X, and 2.0...

My main areas in HAWQ cover three parts: 1) storage, such as internal table 
storage, HAWQ Input/OutputFormat, and hawq extract/register; 2) the dispatcher 
and interconnect; 3) security, including the Ranger integration, Kerberos, 
and LDAP.

Before Pivotal, I worked at IBM for more than two years, focused on providing 
data services inside our public cloud offering. The data services included RDS 
(relational data service), which could provision a distributed relational 
database based on DB2 Federation, and a NoSQL service based on HBase.

I believe HAWQ can become even more successful with our joint effort! Feel free 
to reach out to me or this mailing list about any HAWQ or other kinds of 
issues :)

Thanks
Lili


2016-12-15 4:45 GMT+08:00 Dan Baskette <[email protected]>:
I will add to the email flow…

I am Dan Baskette, Director of Tech Marketing for Pivotal, covering Pivotal 
HDB/Apache HAWQ, Pivotal Greenplum Database, and Apache MADlib. I started my 
career at Sun Microsystems, and have been working for EMC/Greenplum and now 
Pivotal since 2000... a LONG time, in quite a number of roles. I was part of the 
team that launched Greenplum's first Hadoop distribution and was around for the 
birth of HAWQ, or as we called it in its infancy, GOH or Greenplum on Hadoop. I 
have been running webcasts on various HAWQ how-to topics for Hortonworks, so you 
can check those out on their site.

Hoping this community really takes off in a big way!

Dan



On December 14, 2016 at 10:09:34 AM, Ruilong Huo ([email protected]) wrote:
Hi All,

It's great that Gregory started this thread so that people can get to know each 
other better, at least within the Apache HAWQ community!

I am Ruilong Huo, from the HDB/HAWQ engineering team at Pivotal. I came from 
Teradata before joining Pivotal. It's my honor to have been part of the HAWQ 
project at its early stage. I am a fan of RDBMS (especially MPP databases), big 
data, and cloud technology that changes the IT infrastructure of enterprises 
and helps drive information transformation to a very large extent.

I hope that with joint effort from the HAWQ community, it will become an even 
greater product in the big data area, especially in the SQL-on-Hadoop category.

Best regards,
Ruilong Huo

On Wed, Dec 14, 2016 at 2:27 PM, Bob Glithero <[email protected]> wrote:
Hello all,

I'm Bob, and I'm doing product marketing for HDB/HAWQ at Pivotal. I'm new-ish 
here, and I come less from a coding background than from networking. I'm from 
Cisco Systems, where I focused on analytics use cases in telecommunications, 
particularly for mobile network operators, covering service assurance, customer 
care, and customer profiling. (Also, as you're introducing yourselves, we'd love 
to hear what use cases you're involved with, too.)

About a year before I left, my group at Cisco acquired an MPP database of its 
own, ParStream, for its IoT and fog computing use cases, so it's interesting to 
come here and learn about the architecture and applications of HAWQ.

I hope to help make your experience with HAWQ a good one.  If I can help in any 
way, please reach out to me directly or on the list.

Cheers,
Bob



Bob Glithero | Product Marketing
Pivotal, Inc.
[email protected] | m: 415.341.5592


On Sun, Dec 11, 2016 at 6:56 PM, Roman Shaposhnik <[email protected]> wrote:
Greg, thanks for kicking off the roll call. Getting to know each other is super
useful (and can be fun! ;-)). I'll go next:

I am Roman (your friendly neighborhood mentor). I hang around a lot of ASF
big data projects (as a committer and a PMC member), but lately I've been
gravitating towards IoT as well (Apache Mynewt). I started my career at Sun
Microsystems back when Linux wasn't even 1.x, and I've been doing
enterprise software ever since. I was lucky enough to work on the original
Hadoop team at Yahoo! and fall in love with not one but two elephants (Hadoop
and Postgres). Recently I've assumed the position of VP of Technology at ODPi,
and I'm still hoping to MHGA! My secret weapon is Apache Bigtop (which I
co-founded), and I'm not afraid to use it!

I'm here to help as much as I can to make sure that this community evolves into
a vibrant, self-governed, exciting place worthy of being a top-level project
(TLP) at the ASF. If you have any questions or ideas that you want to bounce
off me, please don't hesitate to reach out directly or on the mailing list.

Thanks,
Roman.

On Fri, Dec 9, 2016 at 11:53 AM, Gregory Chase <[email protected]> wrote:
>
> Dear HAWQs,
>
> I thought it would be fun to get to know some of the other people in the 
> community.
>
> My name is Greg Chase and I run community development for Pivotal for big 
> data open source communities that Pivotal contributes to.
>
> Some of you may have seen my frequent emails about virtual events I help 
> organize for user and contributor education.
>
> Not so long ago, I was in charge of product marketing for an in-memory data 
> warehouse named after a Hawaiian town from a three-letter acronymed German 
> Company. We treated Hadoop as an external table, and returning results from 
> these queries was both slow and brittle due to the network transfer rates.
>
> So I have a special appreciation of the innovation that has gone into 
> creating Hadoop-native HAWQ out of PostgreSQL and Greenplum.
>
> These days I'm much more of a marketer than a coder, but I still love hearing 
> about the kinds of projects that HAWQ users are involved in.
>
> I know we'd all love to hear more about everyone else's projects, and how you 
> became a HAWQ user.  So please introduce yourselves!
>
> --
> Greg Chase
>
> Global Head, Big Data Communities
> http://www.pivotal.io/big-data
>
> Pivotal Software
> http://www.pivotal.io/
>
> 650-215-0477
> @GregChase
> Blog: http://geekmarketing.biz/
>

--
Thanks

Hubert Zhang


