Re: [VOTE] Graduation of Apache MetaModel from the Incubator
+1 (binding) Good luck, guys! On 14/11/14 06:39, Henry Saputra wrote: Hi All, The Apache MetaModel community has wrapped up the VOTE to propose graduation from the Apache Incubator. The VOTE passed with the result: 9 binding +1s, zero 0s, zero -1s (http://bit.ly/1u8n8eo). Apache MetaModel entered the ASF Incubator in 2013 and has since grown into a small but active community. We have made several good releases with different release managers, and have also added new PPMC members/committers [1]. The project also has good traffic on the dev mailing list [2]. We would like to propose graduation of Apache MetaModel from the ASF Incubator to a top-level project. [ ] +1 Graduate Apache MetaModel from the Incubator. [ ] +0 Don't care. [ ] -1 Don't graduate Apache MetaModel from the Incubator because... The VOTE will be open for 72 hours (until 11/17/2014). Here is the proposal for the board resolution for graduation: === Board Resolution === Establish the Apache MetaModel Project WHEREAS, the Board of Directors deems it to be in the best interests of the Foundation and consistent with the Foundation's purpose to establish a Project Management Committee charged with the creation and maintenance of open-source software, for distribution at no charge to the public, related to providing an implementation of a Platform-as-a-Service Framework. 
NOW, THEREFORE, BE IT RESOLVED, that a Project Management Committee (PMC), to be known as the Apache MetaModel Project, be and hereby is established pursuant to Bylaws of the Foundation; and be it further RESOLVED, that the Apache MetaModel Project be and hereby is responsible for the creation and maintenance of software related to providing an implementation of a Platform-as-a-Service Framework; and be it further RESOLVED, that the office of Vice President, MetaModel be and hereby is created, the person holding such office to serve at the direction of the Board of Directors as the chair of the Apache MetaModel Project, and to have primary responsibility for management of the projects within the scope of responsibility of the Apache MetaModel Project; and be it further RESOLVED, that the persons listed immediately below be and hereby are appointed to serve as the initial members of the Apache MetaModel Project: * Alberto Rodriguez ardlema at apache dot org * Ankit Kumar ankitkumar2711 at apache dot org * Arvind Prabhakar arvind at apache dot org * Henry Saputra hsaputra at apache dot org * Juan Jose van der Linden delostilos at apache dot org * Kasper Sørensen kaspersor at apache dot org * Matt Franklin mfanklin at apache dot org * Noah Slater nslater at apache dot org * Sameer Arora sarora at apache dot org * Tomasz Guzialek tomaszguzialek at apache dot org NOW, THEREFORE, BE IT FURTHER RESOLVED, that Kasper Sørensen be appointed to the office of Vice President, MetaModel, to serve in accordance with and subject to the direction of the Board of Directors and the Bylaws of the Foundation until death, resignation, retirement, removal or disqualification, or until a successor is appointed; and be it further RESOLVED, that the initial Apache MetaModel PMC be and hereby is tasked with the creation of a set of bylaws intended to encourage open development and increased participation in the Apache MetaModel Project; and be it further RESOLVED, that the Apache MetaModel 
Project be and hereby is tasked with the migration and rationalization of the Apache Incubator MetaModel podling; and be it further RESOLVED, that all responsibilities pertaining to the Apache Incubator MetaModel podling encumbered upon the Apache Incubator Project are hereafter discharged. Thanks, Henry On behalf of the Apache MetaModel incubating PPMC [1] http://incubator.apache.org/projects/metamodel.html [2] http://mail-archives.apache.org/mod_mbox/metamodel-dev -- Sergio Fernández Senior Researcher Knowledge and Media Technologies Salzburg Research Forschungsgesellschaft mbH Jakob-Haringer-Straße 5/3 | 5020 Salzburg, Austria T: +43 662 2288 318 | M: +43 660 2747 925 sergio.fernan...@salzburgresearch.at http://www.salzburgresearch.at - To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org For additional commands, e-mail: general-h...@incubator.apache.org
Re: [VOTE] Graduation of Apache MetaModel from the Incubator
+1 binding 2014-11-14 8:15 GMT+01:00 Ted Dunning ted.dunn...@gmail.com: +1 (binding) On Thu, Nov 13, 2014 at 9:39 PM, Henry Saputra henry.sapu...@gmail.com wrote: Hi All, The Apache MetaModel community has wrapped up the VOTE to propose graduation from the Apache Incubator. [original VOTE text and board resolution quoted in full; trimmed -- identical to the message above] -- Jean-Louis
Re: IP clearance clarification: copyright notices
Hi Alex, On Thu, Nov 13, 2014 at 7:32 PM, Alex Harui aha...@adobe.com wrote: ...Copyright law, AIUI, prevents the receiving project from moving copyrights without the copyright owner’s permission. Thus, if the donor has time to insert Apache headers and move copyrights to NOTICE, that is very helpful, but if the donor is short on time, he can give someone in the receiving project permission to do so. Very good point, so for my point 1) above (moving any existing non-Apache copyright notices to a NOTICE file) we could clarify that this needs to be done either by the donors before submitting their code, or by us with written permission from them. With people's comments here it looks like we do indeed need to clarify that clause; I'll wait a bit for other opinions before doing that. -Bertrand
Re: IP clearance clarification: copyright notices
Hi, On Thu, Nov 13, 2014 at 11:14 PM, Stian Soiland-Reyes soiland-re...@cs.manchester.ac.uk wrote: For our incubator project (Taverna), we saw the need to reorganize our current git repositories... We therefore have made a separate staging area on GitHub, and then basically we will move from github.com/taverna/* to github.com/taverna-incubator/* step by step. Ok, that resonates with the quarantine space idea; you did that externally, but we could also suggest an internal quarantine area (which might be just a folder with this name) when code that's not fully cleaned up is imported. -Bertrand
[Request] wiki access
Hi, May I ask for help in granting me access to edit the Incubator wiki, so that I can submit our project's proposal? My Username: lukehan Mail address: luke dot hq at gmail dot com Thank you very much. Best Regards! Luke Han
Re: [Request] wiki access
On Fri, 14 Nov 2014, Han, Luke wrote: May I ask help to grant me access to edit incubator wiki page to submit our project's proposal? Granted Nick
[RESULT][VOTE][IP CLEARANCE] Sling Sightly and XSS modules
On Tue, Nov 11, 2014 at 2:30 PM, Bertrand Delacretaz bdelacre...@apache.org wrote: ...See http://incubator.apache.org/ip-clearance/sling-sightly-xss.html for details. Please vote to approve this contribution... The vote passes with +1s from David Nalley, Jan Iversen and myself, thanks! I'll update the above page and import the code in the Sling svn repository. -Bertrand
[DISCUSS] OpenAZ as new Incubator project
Abstract OpenAz is a project to create tools and libraries to enable the development of Attribute-Based Access Control (ABAC) systems in a variety of languages. In general the work is at least consistent with, or actually conformant to, the OASIS XACML standard. Proposal Generally the work falls into two categories: ready-to-use tools which implement standardized or well-understood components of an ABAC system, and design proposals and proof-of-concept code relating to less well understood or experimental aspects of the problem. Much of the work to date has revolved around defining interfaces enabling a PEP to request an access control decision from a PDP. The XACML standard defines an abstract request format in XML and protocol wire formats in XML and JSON, but it does not specify programmatic interfaces in any language. The standard says that the use of XML (or JSON) is not required, only the equivalent semantics. The first interface, AzAPI, is modeled closely on the XACML-defined interface, expressed in Java. One of the goals was to support calls both to a PDP local to the same process and to a PDP in a remote server. AzAPI includes the interface, reference code to handle things like the many supported datatypes in XACML, and glue code to mate it to the open-source Sun XACML implementation. Because of the dependence on Sun XACML (which is XACML 2.0) the interface was missing some XACML 3.0 features. More recently this was corrected, and WSO2 has mated it to their XACML 3.0 PDP. Some work was done by the JPMC team to support calling a remote PDP. WSO2 is also pursuing this capability. A second, higher-level interface, PEPAPI, was also defined. PEPAPI is intended more for application developers with little knowledge of XACML. It allows Java objects which contain attribute information to be passed in. Conversion methods, called mappers, extract information from the objects and present it in the format expected by XACML. 
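The mapper mechanism described above can be sketched in plain Java. This is only an illustration of the idea, not the actual PEPAPI: the `Mapper` interface, the `Employee` class, and the attribute names below are all invented for this sketch.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the PEPAPI "mapper" idea: a converter extracts
// attribute name/value pairs from an ordinary Java object so a PEP can
// build a XACML-style request without the caller knowing XACML.
// All names here are invented for the sketch.
public class MapperSketch {

    // A plain domain object an application already has.
    static class Employee {
        final String id;
        final String department;
        Employee(String id, String department) {
            this.id = id;
            this.department = department;
        }
    }

    // A mapper turns a domain object into attribute name/value pairs.
    interface Mapper<T> {
        Map<String, String> toAttributes(T obj);
    }

    static final Mapper<Employee> EMPLOYEE_MAPPER = e -> {
        Map<String, String> attrs = new HashMap<>();
        attrs.put("subject-id", e.id);          // would map to a XACML subject attribute
        attrs.put("department", e.department);  // an extra attribute a policy might test
        return attrs;
    };

    public static void main(String[] args) {
        System.out.println(EMPLOYEE_MAPPER.toAttributes(new Employee("jdoe", "finance")));
    }
}
```

The PEP would then fold such pairs into the request it sends to the PDP; the real PEPAPI additionally handles attribute categories, XACML datatypes, and issuers on top of this basic conversion.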
Some implementers have chosen to implement PEPAPI directly against their PDP, omitting the use of AzAPI. Naomaru Itoi defined a C++ interface which closely matches the Java one. Examples of more speculative work include: proposals for registration and dispatch of Obligation and Advice handlers, a scheme called AMF to tell PIPs how to retrieve attributes and PIP code to implement it, discussion of PoC code to demonstrate the use of XACML policies to drive OAuth interactions, and a proposal to use XACML policies to express OAuth scope. ATT has recently contributed their extensive XACML framework to the project. The ATT framework represents the entire XACML 3.0 object set as a collection of Java interfaces and standard implementations of those interfaces. The ATT PDP engine is built on top of this framework and represents a complete implementation of a XACML 3.0 PDP, including all of the multi-decision profiles. In addition, the framework also contains an implementation of the OASIS XACML 3.0 RESTful API v1.0 and XACML JSON Profile v1.0 WD 14. The PEP API includes annotation functionality, allowing application developers to simply annotate a Java class to provide attributes for a request. The annotation support removes the need for application developers to learn much of the API. The ATT framework also includes interfaces and implementations to standardize development of PIP engines that are used by the ATT PDP implementation, and can be used by other implementations built on top of the ATT framework. The framework also includes interfaces and implementations for a PAP distributed cloud infrastructure of PDP nodes that includes support for policy distribution and PIP configurations. This PAP infrastructure includes a web application administrative console that contains a XACML 3.0 policy editor, attribute dictionary support, and management of PDP RESTful node instances. In addition, there are tools available for policy simulation. 
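The annotation approach described for the PEP API can be illustrated with a small self-contained sketch. The `@XacmlAttribute` annotation and the attribute URNs below are invented for this example; the contributed framework's actual annotation names and semantics will differ.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Sketch of annotation-driven attribute collection: an application class is
// annotated, and a PEP layer harvests request attributes reflectively.
// The annotation name and attribute ids are invented for this illustration.
public class AnnotationSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface XacmlAttribute {
        String id();
    }

    static class WithdrawalRequest {
        @XacmlAttribute(id = "urn:example:subject-id")
        String user = "jdoe";

        @XacmlAttribute(id = "urn:example:amount")
        String amount = "250";
    }

    // Collect annotated fields into "id=value" pairs, as a PEP layer might.
    static List<String> harvest(Object o) {
        List<String> out = new ArrayList<>();
        for (Field f : o.getClass().getDeclaredFields()) {
            XacmlAttribute a = f.getAnnotation(XacmlAttribute.class);
            if (a == null) continue;
            f.setAccessible(true);
            try {
                out.add(a.id() + "=" + f.get(o));
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(harvest(new WithdrawalRequest()));
    }
}
```

The point is only that the application developer annotates an existing class rather than learning a request-building API; the PEP layer does the rest.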
Background Access control is in some ways the most basic IT security service. It consists of making a decision about whether a particular request should be allowed, and enforcing that decision. Aside from schemes like permission bits and Access Control Lists (ACLs), the most common way access control is implemented is as code in a server or application, which typically intertwines access control logic with business logic, user interface, and other software. This makes it difficult to understand, modify, analyze or even locate the security policy. The primary challenge of access control is striking the right balance between powerful expression and intelligibility to human beings. The OASIS XACML standard exemplifies Attribute-Based Access Control (ABAC). In ABAC, the Policy Decision Point (PDP) is isolated from other components. The Policy Enforcement Point (PEP) must be located so as to be able to enforce the decision, typically near the resource. The PEP first asks the PDP if access should be allowed and
RE: [PROPOSAL] OpenAZ as new Incubator project
I was not questioning whether to initiate discussion. That was what I was trying to do. I was asking how to go about it. Thanks for the comments, they are noted. Hal -Original Message- From: John D. Ament [mailto:john.d.am...@gmail.com] Sent: Thursday, November 13, 2014 8:59 PM To: general@incubator.apache.org Subject: Re: [PROPOSAL] OpenAZ as new Incubator project I think so. There's a few things that you want to iron out first, before people start voting on this. - 3 is generally the minimum number of mentors. - I can't find a Paul Freemantle on the apache committers list. There's a Paul Fremantle, minor spelling difference. - You may want to review this section to get a better understanding of the goals: http://incubator.apache.org/guides/proposal.html#formulating the Discuss option just helps everyone look at your proposal a little bit better and determine if there's any gotchas. For example, I'm surprised to see a new incubator project using SVN. - Can you list out your issue tracking preference (should probably be JIRA unless you need something else) - Please also explicitly list the mailing lists you want. John On Thu, Nov 13, 2014 at 8:43 PM, Hal Lockhart hal.lockh...@oracle.com wrote: So you want me to repost the proposal with the Subject changed to start with [DISCUSS]? Or should I simply reference the wiki page? Hal -Original Message- From: John D. Ament [mailto:john.d.am...@gmail.com] Sent: Thursday, November 13, 2014 5:03 PM To: general@incubator.apache.org Subject: Re: [PROPOSAL] OpenAZ as new Incubator project Hal, Per customs, would you mind if we cancel this and start with a [DISCUSS] thread about OpenAZ? It's unclear if you meant this to be a vote or something. John On Thu, Nov 13, 2014 at 4:14 PM, Hal Lockhart hal.lockh...@oracle.com wrote: Abstract OpenAz is a project to create tools and libraries to enable the development of Attribute-based Access Control (ABAC) Systems in a variety of languages. 
[remainder of the proposal quoted in full; trimmed -- identical to the [DISCUSS] OpenAZ message above]
Re: [VOTE] Graduation of Apache MetaModel from the Incubator
+1 (binding) -Jake On Fri, Nov 14, 2014 at 12:39 AM, Henry Saputra henry.sapu...@gmail.com wrote: Hi All, The Apache MetaModel community has wrapped up the VOTE to propose graduation from the Apache Incubator. [original VOTE text and board resolution quoted in full; trimmed -- identical to the first message above]
Re: IP clearance clarification: copyright notices
Right, good idea. If you do the internal area, I would ensure it is not publicly visible to non-apache.org users then. In GitHub we can get away with it as it is a bit cowboy land (most small projects don't even state their license!!).. but you wouldn't want to end up with projects that live too long in Apache Incubator quarantine space! :) On 14 November 2014 09:15, Bertrand Delacretaz bdelacre...@apache.org wrote: [earlier message quoted in full; trimmed] -- Stian Soiland-Reyes, myGrid team School of Computer Science The University of Manchester http://soiland-reyes.com/stian/work/ http://orcid.org/-0001-9842-9718
Re: IP clearance clarification: copyright notices
Hi, On Fri, Nov 14, 2014 at 3:39 PM, Stian Soiland-Reyes soiland-re...@cs.manchester.ac.uk wrote: (about the quarantine folder) ...good idea If you do the internal area, I would ensure it is not publicly visible to non-apache.org users then... That's not needed IMO, as long as we don't release the code it's fine to have code in our svn/git repositories that's not fully ready in terms of license headers etc. I see putting those things under a quarantine folder only as a warning to the PMC that such code shouldn't be released as is. -Bertrand
[PROPOSAL] Kylin for Incubation
Hi all, We would like to propose Kylin as an Apache Incubator project. The complete proposal can be found at https://wiki.apache.org/incubator/KylinProposal, and the text of the proposal is posted below. Thanks. Luke Kylin Proposal == # Abstract Kylin is a distributed and scalable OLAP engine built on Hadoop to support extremely large datasets. # Proposal Kylin is an open source Distributed Analytics Engine that provides multi-dimensional analysis (MOLAP) on Hadoop. Kylin is designed to accelerate analytics on Hadoop by allowing the use of SQL-compatible tools. Kylin provides a SQL interface and multi-dimensional analysis (MOLAP) on Hadoop to support extremely large datasets, and it integrates tightly with the Hadoop ecosystem. ## Overview of Kylin The Kylin platform has two parts, data processing and interactive querying: First, Kylin will read data from the source (Hive) and run a set of tasks, including MapReduce jobs and shell scripts, to pre-calculate results for a specified data model, then save the resulting OLAP cube into storage such as HBase. Once these OLAP cubes are ready, a user can submit a request from any SQL-based tool or third-party application to Kylin’s REST server. The server calls the Query Engine to determine if the target dataset already exists. If so, the engine directly accesses the target data in the form of a predefined cube, and returns the result with sub-second latency. Otherwise, the engine is designed to route non-matching queries to whichever SQL-on-Hadoop tool is already available on the Hadoop cluster, such as Hive. The Kylin platform includes: - Metadata Manager: Kylin is a metadata-driven application. 
The Kylin Metadata Manager is the key component that manages all metadata stored in Kylin, including all cube metadata. All other components rely on the Metadata Manager. - Job Engine: This engine is designed to handle all of the offline jobs, including shell scripts, Java API calls, and MapReduce jobs. The Job Engine manages and coordinates all of the jobs in Kylin to make sure each job executes, and it handles failures. - Storage Engine: This engine manages the underlying storage – specifically, the cuboids, which are stored as key-value pairs. The Storage Engine uses HBase – the best solution from the Hadoop ecosystem for leveraging an existing K-V system. Kylin can also be extended to support other K-V systems, such as Redis. - Query Engine: Once the cube is ready, the Query Engine can receive and parse user queries. It then interacts with other components to return the results to the user. - REST Server: The REST Server is an entry point for applications to develop against Kylin. Applications can submit queries, get results, trigger cube build jobs, get metadata, get user privileges, and so on. - ODBC Driver: To support third-party tools and applications – such as Tableau – we have built and open-sourced an ODBC Driver. The goal is to make it easy for users to onboard. # Background The challenge we face at eBay is that our data volume is becoming bigger and bigger while our user base is becoming more diverse. For example, our business users and analysts consistently ask for minimal latency when visualizing data in Tableau and Excel. So, we worked closely with our internal analyst community and outlined the product requirements for Kylin: - Sub-second query latency on billions of rows - ANSI SQL availability for those using SQL-compatible tools - Full OLAP capability to offer advanced functionality - Support for high cardinality and very large dimensions - High concurrency for thousands of users - Distributed and scale-out architecture for analysis in the TB to PB size range Existing SQL-on-Hadoop solutions commonly need to perform partial or full table or file scans to compute the results of queries. The cost of these large data scans can make many queries very slow (more than a minute). 
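The cuboid idea above -- pre-aggregating along combinations of dimension values and storing the results as key-value pairs -- can be shown with a toy sketch. The composite-key layout and all names here are illustrative only, not Kylin's actual HBase encoding.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of a cuboid: each combination of dimension values becomes a
// key, and the pre-aggregated measure is the value -- roughly how a MOLAP
// cube maps onto a K-V store such as HBase. Layout is illustrative only.
public class CuboidSketch {

    // Pre-aggregate raw rows of (country, year, amount) into a cuboid map.
    static Map<String, Long> buildCuboid(List<String[]> rows) {
        Map<String, Long> cuboid = new HashMap<>();
        for (String[] r : rows) {
            String key = r[0] + "|" + r[1];  // composite key: country|year
            cuboid.merge(key, Long.parseLong(r[2]), Long::sum);
        }
        return cuboid;
    }

    public static void main(String[] args) {
        List<String[]> rows = Arrays.asList(
                new String[]{"US", "2014", "10"},
                new String[]{"US", "2014", "5"},
                new String[]{"DE", "2014", "7"});
        Map<String, Long> cuboid = buildCuboid(rows);
        // A query on (US, 2014) is now a single key lookup, not a table scan.
        System.out.println(cuboid.get("US|2014")); // prints 15
    }
}
```

This also shows why pre-computation trades flexibility for speed: only the dimension combinations that were materialized can be answered by lookup, and anything else must fall back to a scan-based engine such as Hive.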
The core idea of MOLAP (multi-dimensional OLAP) is to pre-compute data along dimensions of interest and store the resulting aggregates as a cube. MOLAP is much faster, but it is inflexible. We realized that no existing external product met our exact requirements – especially in the open-source Hadoop community. To meet our emerging business needs, we built a platform from scratch to support MOLAP for these business requirements, with plans to later support others, including ROLAP. With an excellent development team and several pilot customers, we have been able to bring the Kylin platform into production as well as open source it. # Rationale When data grows to petabyte scale, the pre-calculation of a query takes a long time and requires costly, powerful hardware. However, with the benefit of Hadoop’s distributed computing architecture, jobs can leverage hundreds or thousands of Hadoop data nodes. There still exists a big gap between the growing volume of data and interactive analytics: - Existing Business Intelligence (OLAP) platforms cannot scale out to support fast-growing data. - Existing SQL on Hadoop
RE: [PROPOSAL] Kylin for Incubation
Potential trademark clash: http://www.ubuntu.com/desktop/ubuntu-kylin

Sent from my Windows Phone

From: Luke Han <luke...@gmail.com>
Sent: 11/14/2014 7:38 AM
To: general@incubator.apache.org
Subject: [PROPOSAL] Kylin for Incubation

Hi all, We would like to propose Kylin as an Apache Incubator project. The complete proposal can be found at https://wiki.apache.org/incubator/KylinProposal; the text of the proposal is posted below. Thanks. Luke

== Kylin Proposal ==

# Abstract

Kylin is a distributed and scalable OLAP engine built on Hadoop to support extremely large datasets.

# Proposal

Kylin is an open source Distributed Analytics Engine that provides multi-dimensional analysis (MOLAP) on Hadoop. Kylin is designed to accelerate analytics on Hadoop by allowing the use of SQL-compatible tools. Kylin provides a SQL interface and multi-dimensional analysis (MOLAP) on Hadoop to support extremely large datasets, and it integrates tightly with the Hadoop ecosystem.

## Overview of Kylin

The Kylin platform has two parts: data processing and interactive querying. First, Kylin reads data from a source such as Hive and runs a set of tasks – including MapReduce jobs and shell scripts – to pre-calculate results for a specified data model, then saves the resulting OLAP cube into storage such as HBase. Once these OLAP cubes are ready, a user can submit a request from any SQL-based tool or third-party application to Kylin's REST server. The server calls the Query Engine to determine whether the target dataset already exists. If so, the engine directly accesses the target data in the form of a predefined cube and returns the result with sub-second latency. Otherwise, the engine is designed to route non-matching queries to whichever SQL-on-Hadoop tool is already available on the Hadoop cluster, such as Hive.

The Kylin platform includes:
- Metadata Manager: Kylin is a metadata-driven application.
The Kylin Metadata Manager is the key component that manages all metadata stored in Kylin, including all cube metadata. All other components rely on the Metadata Manager.
- Job Engine: This engine is designed to handle all of the offline jobs, including shell scripts, Java APIs, and MapReduce jobs. The Job Engine manages and coordinates all of the jobs in Kylin to make sure each job executes and handles failures.
- Storage Engine: This engine manages the underlying storage – specifically, the cuboids, which are stored as key-value pairs. The Storage Engine uses HBase – the best solution from the Hadoop ecosystem for leveraging an existing K-V system. Kylin can also be extended to support other K-V systems, such as Redis.
- Query Engine: Once the cube is ready, the Query Engine can receive and parse user queries. It then interacts with other components to return the results to the user.
- REST Server: The REST Server is an entry point for applications to develop against Kylin. Applications can submit queries, get results, trigger cube build jobs, get metadata, get user privileges, and so on.
- ODBC Driver: To support third-party tools and applications – such as Tableau – we have built and open-sourced an ODBC Driver. The goal is to make it easy for users to onboard.

# Background

The challenge we face at eBay is that our data volume is becoming bigger and bigger while our user base is becoming more diverse. For example, our business users and analysts consistently ask for minimal latency when visualizing data on Tableau and Excel.
So, we worked closely with our internal analyst community and outlined the product requirements for Kylin:
- Sub-second query latency on billions of rows
- ANSI SQL availability for those using SQL-compatible tools
- Full OLAP capability to offer advanced functionality
- Support for high cardinality and very large dimensions
- High concurrency for thousands of users
- Distributed and scale-out architecture for analysis in the TB to PB size range

Existing SQL-on-Hadoop solutions commonly need to perform partial or full table or file scans to compute the results of queries. The cost of these large data scans can make many queries very slow (more than a minute). The core idea of MOLAP (multi-dimensional OLAP) is to pre-compute data along dimensions of interest and store the resulting aggregates as a cube. MOLAP is much faster but is inflexible. We realized that no existing external product met our exact requirements – especially in the open source Hadoop community. To meet our emerging business needs, we built a platform from scratch to support MOLAP for these business requirements, and later to support other approaches, including ROLAP. With an excellent development team and several pilot customers, we have been able to bring the Kylin platform into production as well as open source it.

# Rationale

When data grows to petabyte scale, pre-calculation for a query takes a long time and requires costly, powerful hardware. However, with the benefit of Hadoop's
RE: [PROPOSAL] Kylin for Incubation
Please check with VP Trademarks here at Apache.

Sent from my Windows Phone

From: Luke Han <luke...@gmail.com>
Sent: 11/14/2014 8:00 AM
To: general@incubator.apache.org
Subject: Re: [PROPOSAL] Kylin for Incubation

We have noticed this from the beginning; below are the comments from our Legal team: "We've done a preliminary trademark search for Kylin in the US, and there weren't any directly conflicting brands. I think it should be ok to use :)" Thanks. Luke

2014-11-14 23:47 GMT+08:00 Ross Gardler (MS OPEN TECH) <ross.gard...@microsoft.com>: Potential trademark clash: http://www.ubuntu.com/desktop/ubuntu-kylin [...]
Re: [PROPOSAL] Kylin for Incubation
We have noticed this from the beginning; below are the comments from our Legal team: "We've done a preliminary trademark search for Kylin in the US, and there weren't any directly conflicting brands. I think it should be ok to use :)" Thanks. Luke

2014-11-14 23:47 GMT+08:00 Ross Gardler (MS OPEN TECH) <ross.gard...@microsoft.com>: Potential trademark clash: http://www.ubuntu.com/desktop/ubuntu-kylin

From: Luke Han <luke...@gmail.com>
Sent: 11/14/2014 7:38 AM
To: general@incubator.apache.org
Subject: [PROPOSAL] Kylin for Incubation

Hi all, We would like to propose Kylin as an Apache Incubator project. The complete proposal can be found at https://wiki.apache.org/incubator/KylinProposal [...]
Re: [PROPOSAL] OpenAZ as new Incubator project
It will be cool to see an XACML project at Apache, especially one that looks certain to be the main open source implementation. One minor correction: Colm MacCárthaigh – you have the wrong Apache Colm there :-) Colm (O hEigeartaigh)

On Fri, Nov 14, 2014 at 1:55 PM, Hal Lockhart <hal.lockh...@oracle.com> wrote: I was not questioning whether to initiate discussion. That was what I was trying to do. I was asking how to go about it. Thanks for the comments, they are noted. Hal

-----Original Message----- From: John D. Ament [mailto:john.d.am...@gmail.com] Sent: Thursday, November 13, 2014 8:59 PM To: general@incubator.apache.org Subject: Re: [PROPOSAL] OpenAZ as new Incubator project

I think so. There are a few things that you want to iron out first, before people start voting on this.
- 3 is generally the minimum number of mentors.
- I can't find a Paul Freemantle on the Apache committers list. There's a Paul Fremantle – minor spelling difference.
- You may want to review this section to get a better understanding of the goals: http://incubator.apache.org/guides/proposal.html#formulating – the Discuss option just helps everyone look at your proposal a little bit better and determine if there are any gotchas. For example, I'm surprised to see a new incubator project using SVN.
- Can you list out your issue-tracking preference (should probably be JIRA unless you need something else)?
- Please also explicitly list the mailing lists you want. John

On Thu, Nov 13, 2014 at 8:43 PM, Hal Lockhart <hal.lockh...@oracle.com> wrote: So you want me to repost the proposal with the Subject changed to start with [DISCUSS]? Or should I simply reference the wiki page? Hal

-----Original Message----- From: John D. Ament [mailto:john.d.am...@gmail.com] Sent: Thursday, November 13, 2014 5:03 PM To: general@incubator.apache.org Subject: Re: [PROPOSAL] OpenAZ as new Incubator project

Hal, Per customs, would you mind if we cancel this and start with a [DISCUSS] thread about OpenAZ?
It's unclear if you meant this to be a vote or something. John

On Thu, Nov 13, 2014 at 4:14 PM, Hal Lockhart <hal.lockh...@oracle.com> wrote:

Abstract

OpenAz is a project to create tools and libraries to enable the development of Attribute-based Access Control (ABAC) systems in a variety of languages. In general the work is at least consistent with, or actually conformant to, the OASIS XACML standard.

Proposal

Generally the work falls into two categories: ready-to-use tools which implement standardized or well-understood components of an ABAC system, and design proposals and proof-of-concept code relating to less well understood or experimental aspects of the problem. Much of the work to date has revolved around defining interfaces enabling a PEP to request an access control decision from a PDP. The XACML standard defines an abstract request format in XML and protocol wire formats in XML and JSON, but it does not specify programmatic interfaces in any language. The standard says that the use of XML (or JSON) is not required, only equivalent semantics. The first interface, AzAPI, is modeled closely on the XACML-defined interface, expressed in Java. One of the goals was to support calls both to a PDP local to the same process and to a PDP in a remote server. AzAPI includes the interface, reference code to handle things like the many supported datatypes in XACML, and glue code to mate it to the open source Sun XACML implementation. Because of the dependence on Sun XACML (which is XACML 2.0), the interface was missing some XACML 3.0 features. More recently this was corrected, and WSO2 has mated it to their XACML 3.0 PDP. Some work was done by the JPMC team to support calling a remote PDP. WSO2 is also pursuing this capability. A second, higher-level interface, PEPAPI, was also defined. PEPAPI is intended for application developers with little knowledge of XACML. It allows Java objects which contain attribute information to be passed in.
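The PEP-to-PDP decision flow described above can be sketched as follows. This is a hypothetical illustration of the pattern only, not the actual AzAPI or PEPAPI signatures; the function names, attribute ids, and toy policy are all invented:

```python
# Hypothetical sketch of the PEP/PDP pattern: the PEP turns application
# objects into an XACML-style attribute request; an in-process PDP
# evaluates a stand-in policy and returns a decision.

def pdp_decide(request):
    """Stand-in PDP: only admins may touch resources under /admin."""
    if request["resource"]["resource-id"].startswith("/admin"):
        return "Permit" if request["subject"]["role"] == "admin" else "Deny"
    return "Permit"

def pep_is_allowed(user, path):
    """Stand-in PEP: map application objects to attributes, ask the PDP."""
    request = {
        "subject": {"subject-id": user["name"], "role": user["role"]},
        "resource": {"resource-id": path},
    }
    return pdp_decide(request) == "Permit"

alice = {"name": "alice", "role": "admin"}
bob = {"name": "bob", "role": "user"}
print(pep_is_allowed(alice, "/admin/config"))  # True
print(pep_is_allowed(bob, "/admin/config"))    # False
print(pep_is_allowed(bob, "/public/report"))   # True
```

The point of the interfaces discussed in the proposal is exactly this separation: the application only calls something like `pep_is_allowed`, while the PDP behind it may be local or remote and may speak the XACML wire formats.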
Conversion methods, called mappers, extract information from the objects and present it in the format expected by XACML. Some implementers have chosen to implement PEPAPI directly against their PDP, omitting the use of AzAPI. Naomaru Itoi defined a C++ interface which closely matches the Java one. Examples of more speculative work include: proposals for registration and dispatch of Obligation and Advice handlers; a scheme called AMF to tell PIPs how to retrieve attributes, and PIP code to implement it; and discussion of PoC code to demonstrate the use of XACML policies to
Re: [PROPOSAL] Kylin for Incubation
Checking again with Apache trademarks is a safer way to continue using this name. We will contact them and do the check again. Thank you very much for pointing this out. Luke

2014-11-15 0:01 GMT+08:00 Ross Gardler (MS OPEN TECH) <ross.gard...@microsoft.com>: Please check with VP Trademarks here at Apache. [...]
Re: [VOTE] Graduation of Apache MetaModel from the Incubator
+1

On 14 November 2014 06:39, Henry Saputra <henry.sapu...@gmail.com> wrote: Hi All, The Apache MetaModel community has wrapped up the VOTE to propose for graduation from the Apache incubator. [...]

--
Noah Slater https://twitter.com/nslater
Re: [PROPOSAL] Kylin for Incubation
Thanks for the reminder, Ross. Hopefully we can go the same route as Apache Spark, Apache Storm, and Apache MetaModel, where the trademark is used as 'Apache Kylin'. - Henry

On Fri, Nov 14, 2014 at 7:47 AM, Ross Gardler (MS OPEN TECH) <ross.gard...@microsoft.com> wrote: Potential trademark clash: http://www.ubuntu.com/desktop/ubuntu-kylin [...]
Re: [VOTE] Release Apache Ranger 0.4.0 (incubating) - (formerly known as Apache Argus)
+1, checked LICENSE, NOTICE, and DISCLAIMER; checked signatures; built the code; checked for stray .class or .jar files. Alan. Selvamohan Neethiraj mailto:sneet...@apache.org November 13, 2014 at 0:33 The Apache Ranger community has voted on and approved a proposal to release Apache Ranger 0.4.0 (incubating). This will be our first release since the project entered incubation in July 2014 as Apache Argus, after which it was renamed to Apache Ranger. The ranger-0.4.0-rc3 release candidate is now available, with the following artifacts up for a vote: Git tag for the release: https://git-wip-us.apache.org/repos/asf?p=incubator-argus.git;a=shortlog;h=refs/tags/ranger-0.4.0-rc3 Source release: http://people.apache.org/~sneethir/ranger/ranger-0.4.0-rc3/ranger-0.4.0-rc3.tar.gz Source release verification: PGP Signature: http://people.apache.org/~sneethir/ranger/ranger-0.4.0-rc3/ranger-0.4.0-rc3.tar.gz.asc MD5/SHA Hash: http://people.apache.org/~sneethir/ranger/ranger-0.4.0-rc3/ranger-0.4.0-rc3.tar.gz.mds Keys to verify the signature of the release artifact are available at: https://people.apache.org/keys/group/argus.asc Build verification steps can be found at: http://argus.incubator.apache.org/quick_start_guide.html The vote will be open for at least 72 hours or until the necessary number of votes is reached. [ ] +1 approve [ ] +0 no opinion [ ] -1 disapprove (and reason why) Here is my +1 (non-binding) Thanks Selva- -- CONFIDENTIALITY NOTICE NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. 
If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.
[RESULT] [VOTE] Release Apache Parquet Format (incubating) 2.2.0 RC2
This vote has passed with 3 +1 votes from IPMC members. Here are the votes: +1 (binding): * Tom White (from podling vote) * Stack * Patrick Hunt +1 (non-binding): * Chris Aniszczyk * Julien Le Dem +0: (none) -1: (none) Thank you to everyone for verifying the release! rb On 11/10/2014 02:35 PM, Ryan Blue wrote: Hi everyone, I'd like to propose a vote to release parquet-format-2.2.0-rc2 as the official Parquet Format 2.2.0 release. This release candidate has passed a podling vote, which can be found here: https://mail-archives.apache.org/mod_mbox/incubator-parquet-dev/201411.mbox/%3C54613B48.6060602%40apache.org%3E The release candidate, signature, and checksums are available at: https://dist.apache.org/repos/dist/dev/incubator/parquet/2.2.0-rc2/ The branch used to create the release candidate is: https://git-wip-us.apache.org/repos/asf?p=incubator-parquet-format.git;hb=release-2.2.0-rc2 KEYS to validate the signature are available at: https://dist.apache.org/repos/dist/dev/incubator/parquet/KEYS Please download, verify, and test. [ ] +1: Release this tag as parquet-format-2.2.0 [ ] +0: [ ] -1: This release is not ready because . . . -- Ryan Blue Software Engineer Cloudera, Inc. - To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org For additional commands, e-mail: general-h...@incubator.apache.org
Re: IP clearance clarification: copyright notices
On Fri, Nov 14, 2014 at 1:13 AM, Bertrand Delacretaz bdelacre...@apache.org wrote: On Thu, Nov 13, 2014 at 7:32 PM, Alex Harui aha...@adobe.com wrote: ...Copyright law, AIUI, prevents the receiving project from moving copyrights without the copyright owner’s permission. Thus, if the donor has time to insert Apache headers and move copyrights to NOTICE, that is very helpful, but if the donor is short on time, he can give someone in the receiving project permission to do so. Very good point, so for my point 1) above (moving any existing non-Apache copyright notices to a NOTICE file) we could clarify that this needs to be done either by the donators before submitting their code, or by us with written permission from them. With people's comments here it looks like we need to clarify that clause indeed; I'll wait a bit for other opinions before doing that. The two legal imperatives are: 1. Any code we host must at all times be legal to distribute. 2. Only the copyright owner or their authorized agent may modify copyright notices. In addition, there is the policy imperative that individual ALv2 source files must eventually contain our ASF-specific header, and the related social/policy imperative that such files must not contain copyright notices. An important relaxation applies during the quarantine period: 1. While imported code must at all times be legal for us to distribute, it need not adhere to our policies. For example, it may contain GPL headers. So long as the task of modifying headers and copyright notices gets done correctly, technically it doesn't matter whether it happens prior to the first commit or immediately following. I have a mild preference for capturing such changes in version control, though, so that they may be reviewed more easily by the PMC and documented for posterity. Marvin Humphrey
Re: [VOTE] Retire HDT
Like Bob I would have liked to do more than just monitoring the list, but clearly I haven't, and that doesn't give me much legitimacy, I'm afraid. +0 from me as well. I've just come back to Hadoop after an absence of 18 months, and I see that a development environment for Hadoop-related computing and data warehousing tools (MapReduce, Spark [SQL|Streaming], Pig, HBase, Hive, etc.) and their libraries (Mahout, Giraph, MLlib and GraphX) is still missing, and that there *is* a real need for it. Any cloud provider worth their salt offers Hadoop as a platform service where you can deploy a cluster within minutes, but there's no way to get started developing jobs right away. And I see organisations struggle with this on private deployments as well. Stacks like Hue and IPython Notebook and simple stepping-stone nodes fill part of the gap, but I think we can do way better. I've just come back to this space, and this is a problem I'm addressing for my current client (a bank) as well. If somebody would like to discuss tackling this together then let's set up a short call. I do have to say that looking at the current Hadoop toolset, I'm no longer convinced that Eclipse is the way to go here. Best, Evert On Fri, Nov 14, 2014 at 1:07 AM, Mattmann, Chris A (3980) chris.a.mattm...@jpl.nasa.gov wrote: CC'ing folks from general@incubator.a.o as they can likely explain (am currently getting ready for a flight back from Italy to Los Angeles and won't have time for a bit, Bob). ++ Chris Mattmann, Ph.D. 
Chief Architect Instrument Software and Science Data Systems Section (398) NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA Office: 168-519, Mailstop: 168-527 Email: chris.a.mattm...@nasa.gov WWW: http://sunset.usc.edu/~mattmann/ ++ Adjunct Associate Professor, Computer Science Department University of Southern California, Los Angeles, CA 90089 USA ++ -Original Message- From: Bob Kerns r...@acm.org Reply-To: d...@hdt.incubator.apache.org d...@hdt.incubator.apache.org Date: Thursday, November 13, 2014 at 6:58 PM To: d...@hdt.incubator.apache.org d...@hdt.incubator.apache.org Subject: Re: [VOTE] Retire HDT As someone who has unfortunately been inactive, I'm going to abstain +0; otherwise I'd vote -1 and try to be one of those 3 active folks. I still have hopes of doing more stuff with Hadoop and HDT in the future, but job responsibilities keep shifting in unexpected directions, and health and family don't leave me enough time to do it if not actively involved at work. If retired, will this list remain active? Otherwise, it might be hard to gather the 3 active people... I think there are significant needs going unmet that we could be addressing, if people (myself included) had the time to devote to it. I've not stopped monitoring the list, and I do hope to contribute in the future. If it is retired, what will be the mechanics for contributing new code? Would it have to be brought out of retirement before that could happen via Apache? (Obviously, a fork on GitHub would be an option, but that might detract from a path back to active Apache involvement.) On Wed, Nov 12, 2014 at 7:45 AM, Roman Shaposhnik r...@apache.org wrote: On Mon, Nov 10, 2014 at 12:45 AM, Rahul Sharma rsha...@apache.org wrote: Hi all, Based on the discussion that happened on the mailing list [1], I'd like to call a VOTE to retire [2] Apache HDT from the Apache Incubator. It appears that the project has lost community interest, with almost no activity on the mailing lists. 
This VOTE will be open for at least 72 hours and passes on achieving a consensus. +1 [ ] Yes, I am in favor of retiring HDT from the Apache Incubator. +0 [ ] -1 [ ] No, I am not in favor of retiring HDT because... +1 (binding). Thanks for all your efforts Rahul! I've also appreciated Mirko's comment, but I must say that retirement is NOT a death sentence. The code will still be available, and if at least 3 active folks were to show up the project can easily be reinstated. Thanks, Roman.
Re: IP clearance clarification: copyright notices
On Thu, Nov 13, 2014 at 5:18 AM, Bertrand Delacretaz bdelacre...@apache.org wrote: Hi, In the vote thread about [1] a question came up about the following clause, from our IP clearance form: Check and make sure that the files that have been donated have been updated to reflect the new ASF copyright. I think this actually covers two distinct things: 1) Moving any existing non-Apache copyright notices to a NOTICE file, if the owner of the donated code wants that, or otherwise removing them or making them smaller to avoid bloating the code with multiple copyright notices, if possible. All done by whoever donates the code - as per the Should a project move non-ASF copyright notices from Apache source files to the NOTICE file? section in [2], we don't want to do that ourselves. 2) Adding Apache copyright/license headers where required IMO there's no need for 2) to happen before the donation, that just has to happen before the first release of that code. [2] http://www.apache.org/legal/src-headers.html I agree with this. The FAQ at the bottom of [2] says that it applies to all releases occurring after $date. Source without ASF headers is not an issue until you release. (And indeed, many things that we don't release do not contain such headers.) 1) should be taken care of before the IP Clearance. As Marvin notes below, the inclusion of a header does not change its legal status, it merely is present for clarity (and because policy demands it in released files)
Re: [VOTE] Retire HDT
For whoever is interested: I'm having a Skype call with Mirko this Monday 4pm CET. Let me know if you'd like to join! Evert On Fri Nov 14 2014 at 9:27:40 PM Evert Lammerts evert.lamme...@gmail.com wrote: Like Bob I would have liked to do more than just monitoring the list, but clearly I haven't, and that doesn't give me much legitimacy, I'm afraid. +0 from me as well. I've just come back to Hadoop after an absence of 18 months, and I see that a development environment for Hadoop-related computing and data warehousing tools (MapReduce, Spark [SQL|Streaming], Pig, HBase, Hive, etc.) and their libraries (Mahout, Giraph, MLlib and GraphX) is still missing, and that there *is* a real need for it. Any cloud provider worth their salt offers Hadoop as a platform service where you can deploy a cluster within minutes, but there's no way to get started developing jobs right away. And I see organisations struggle with this on private deployments as well. Stacks like Hue and IPython Notebook and simple stepping-stone nodes fill part of the gap, but I think we can do way better. I've just come back to this space, and this is a problem I'm addressing for my current client (a bank) as well. If somebody would like to discuss tackling this together then let's set up a short call. I do have to say that looking at the current Hadoop toolset, I'm no longer convinced that Eclipse is the way to go here. Best, Evert On Fri, Nov 14, 2014 at 1:07 AM, Mattmann, Chris A (3980) chris.a.mattm...@jpl.nasa.gov wrote: CC'ing folks from general@incubator.a.o as they can likely explain (am currently getting ready for a flight back from Italy to Los Angeles and won't have time for a bit, Bob). ++ Chris Mattmann, Ph.D. 
Chief Architect Instrument Software and Science Data Systems Section (398) NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA Office: 168-519, Mailstop: 168-527 Email: chris.a.mattm...@nasa.gov WWW: http://sunset.usc.edu/~mattmann/ ++ Adjunct Associate Professor, Computer Science Department University of Southern California, Los Angeles, CA 90089 USA ++ -Original Message- From: Bob Kerns r...@acm.org Reply-To: d...@hdt.incubator.apache.org d...@hdt.incubator.apache.org Date: Thursday, November 13, 2014 at 6:58 PM To: d...@hdt.incubator.apache.org d...@hdt.incubator.apache.org Subject: Re: [VOTE] Retire HDT As someone who has unfortunately been inactive, I'm going to abstain +0; otherwise I'd vote -1 and try to be one of those 3 active folks. I still have hopes of doing more stuff with Hadoop and HDT in the future, but job responsibilities keep shifting in unexpected directions, and health and family don't leave me enough time to do it if not actively involved at work. If retired, will this list remain active? Otherwise, it might be hard to gather the 3 active people... I think there are significant needs going unmet that we could be addressing, if people (myself included) had the time to devote to it. I've not stopped monitoring the list, and I do hope to contribute in the future. If it is retired, what will be the mechanics for contributing new code? Would it have to be brought out of retirement before that could happen via Apache? (Obviously, a fork on GitHub would be an option, but that might detract from a path back to active Apache involvement.) On Wed, Nov 12, 2014 at 7:45 AM, Roman Shaposhnik r...@apache.org wrote: On Mon, Nov 10, 2014 at 12:45 AM, Rahul Sharma rsha...@apache.org wrote: Hi all, Based on the discussion that happened on the mailing list [1], I'd like to call a VOTE to retire [2] Apache HDT from the Apache Incubator. It appears that the project has lost community interest, with almost no activity on the mailing lists. 
This VOTE will be open for at least 72 hours and passes on achieving a consensus. +1 [ ] Yes, I am in favor of retiring HDT from the Apache Incubator. +0 [ ] -1 [ ] No, I am not in favor of retiring HDT because... +1 (binding). Thanks for all your efforts Rahul! I've also appreciated Mirko's comment, but I must say that retirement is NOT a death sentence. The code will still be available, and if at least 3 active folks were to show up the project can easily be reinstated. Thanks, Roman.
[VOTE] Release Apache MetaModel 4.3.0-incubating
Hi All, Please vote on releasing the following candidate as Apache MetaModel version 4.3.0-incubating. This will be the fourth incubator release for MetaModel in Apache (and potentially the last, if the also-ongoing vote about graduation passes). The Git tag to be voted on is v4.3.0-incubating: https://git-wip-us.apache.org/repos/asf?p=incubator-metamodel.git;a=tag;h=refs/tags/MetaModel-4.3.0-incubating The source artifact to be voted on is: https://repository.apache.org/content/repositories/orgapachemetamodel-1002/org/apache/metamodel/MetaModel/4.3.0-incubating/MetaModel-4.3.0-incubating-source-release.zip Parent directory (including MD5, SHA1 hashes etc.) of the source is: https://repository.apache.org/content/repositories/orgapachemetamodel-1002/org/apache/metamodel/MetaModel/4.3.0-incubating Release artifacts are signed with the following key: https://people.apache.org/keys/committer/kaspersor.asc Release engineer public key id: 1FE1C2F5 Vote thread link from d...@metamodel.incubator.apache.org mailing list: http://markmail.org/message/27orgrjxpnpanwop Result thread link from d...@metamodel.incubator.apache.org mailing list: http://markmail.org/message/qjgol4br3tzckpp6 Please vote on releasing this package as Apache MetaModel 4.3.0-incubating. The vote is open for 72 hours, or until we get the needed number of votes (3 times +1). [ ] +1 Release this package as Apache MetaModel 4.3.0-incubating [ ] -1 Do not release this package because ... More information about the MetaModel project can be found at http://metamodel.incubator.apache.org/ Thank you in advance for participating. Regards, Kasper Sørensen
Wiki Access
Can I please get write permissions to the Wiki? Thanks, D.
Re: [VOTE] Release Apache MetaModel 4.3.0-incubating
+1 (binding) On Fri, Nov 14, 2014 at 2:06 PM, Kasper Sørensen kasper.soren...@humaninference.com wrote: Hi All, Please vote on releasing the following candidate as Apache MetaModel version 4.3.0-incubating. This will be the fourth incubator release for MetaModel in Apache (and potentially the last, if the also-ongoing vote about graduation passes). The Git tag to be voted on is v4.3.0-incubating: https://git-wip-us.apache.org/repos/asf?p=incubator-metamodel.git;a=tag;h=refs/tags/MetaModel-4.3.0-incubating The source artifact to be voted on is: https://repository.apache.org/content/repositories/orgapachemetamodel-1002/org/apache/metamodel/MetaModel/4.3.0-incubating/MetaModel-4.3.0-incubating-source-release.zip Parent directory (including MD5, SHA1 hashes etc.) of the source is: https://repository.apache.org/content/repositories/orgapachemetamodel-1002/org/apache/metamodel/MetaModel/4.3.0-incubating Release artifacts are signed with the following key: https://people.apache.org/keys/committer/kaspersor.asc Release engineer public key id: 1FE1C2F5 Vote thread link from d...@metamodel.incubator.apache.org mailing list: http://markmail.org/message/27orgrjxpnpanwop Result thread link from d...@metamodel.incubator.apache.org mailing list: http://markmail.org/message/qjgol4br3tzckpp6 Please vote on releasing this package as Apache MetaModel 4.3.0-incubating. The vote is open for 72 hours, or until we get the needed number of votes (3 times +1). [ ] +1 Release this package as Apache MetaModel 4.3.0-incubating [ ] -1 Do not release this package because ... More information about the MetaModel project can be found at http://metamodel.incubator.apache.org/ Thank you in advance for participating. Regards, Kasper Sørensen
Re: IP clearance clarification: copyright notices
On 14 November 2014 21:38, David Nalley da...@gnsa.us wrote: On Thu, Nov 13, 2014 at 5:18 AM, Bertrand Delacretaz bdelacre...@apache.org wrote: Hi, In the vote thread about [1] a question came up about the following clause, from our IP clearance form: Check and make sure that the files that have been donated have been updated to reflect the new ASF copyright. I think this actually covers two distinct things: 1) Moving any existing non-Apache copyright notices to a NOTICE file, if the owner of the donated code wants that, or otherwise removing them or making them smaller to avoid bloating the code with multiple copyright notices, if possible. All done by whoever donates the code - as per the Should a project move non-ASF copyright notices from Apache source files to the NOTICE file? section in [2], we don't want to do that ourselves. 2) Adding Apache copyright/license headers where required IMO there's no need for 2) to happen before the donation, that just has to happen before the first release of that code. [2] http://www.apache.org/legal/src-headers.html I agree with this. The FAQ at the bottom of [2] says that it applies to all releases occurring after $date. Source without ASF headers is not an issue until you release. (And indeed, many things that we don't release do not contain such headers.) I am sorry, but I really do not understand why the release is THE moment. If we wait to change the headers until a release, but make other changes to the file before that, those changes are made under the old license, and as a consequence the headers can only be changed with the written approval of the original donator as well as the people who have made changes later (independently of whether or not they have filed an ICLA). Why do we want that extra complication? Remember, the license in the file precedes any CLA the developer might or might not have filed. I agree that for files without changes a release is the latest moment. rgds jan I. 
1) should be taken care of before the IP Clearance. As Marvin notes below, the inclusion of a header does not change its legal status, it merely is present for clarity (and because policy demands it in released files)
Re: [VOTE] Release Apache Ranger 0.4.0 (incubating) - (formerly known as Apache Argus)
+1 (binding) Checked license signatures. Arun On Nov 13, 2014, at 12:33 AM, Selvamohan Neethiraj sneet...@apache.org wrote: The Apache Ranger community has voted on and approved a proposal to release Apache Ranger 0.4.0 (incubating). This will be our first release since the project entered incubation in July 2014 as Apache Argus, after which it was renamed to Apache Ranger. The ranger-0.4.0-rc3 release candidate is now available, with the following artifacts up for a vote: Git tag for the release: https://git-wip-us.apache.org/repos/asf?p=incubator-argus.git;a=shortlog;h=refs/tags/ranger-0.4.0-rc3 Source release: http://people.apache.org/~sneethir/ranger/ranger-0.4.0-rc3/ranger-0.4.0-rc3.tar.gz Source release verification: PGP Signature: http://people.apache.org/~sneethir/ranger/ranger-0.4.0-rc3/ranger-0.4.0-rc3.tar.gz.asc MD5/SHA Hash: http://people.apache.org/~sneethir/ranger/ranger-0.4.0-rc3/ranger-0.4.0-rc3.tar.gz.mds Keys to verify the signature of the release artifact are available at: https://people.apache.org/keys/group/argus.asc Build verification steps can be found at: http://argus.incubator.apache.org/quick_start_guide.html The vote will be open for at least 72 hours or until the necessary number of votes is reached. [ ] +1 approve [ ] +0 no opinion [ ] -1 disapprove (and reason why) Here is my +1 (non-binding) Thanks Selva-
[RESULT] [VOTE] Apache Tamaya for Incubation
Hi all This vote passes with 8 +1s and no 0 or -1 votes: +1 Gerhard Petracek (mentor) +1 John D. Ament (mentor) +1 Mark Struberg (mentor) +1 David Blevins (champion) +1 Daniel S. Haisch +1 Bertrand Delacretaz +1 Konstantin Boudnik +1 Romain Manni-Bucau Thanks everyone. We are happy to go on with the incubation work ;) Best, Anatole -- *Anatole Tresch* Java Engineer Architect, JSR Spec Lead Glärnischweg 10 CH - 8620 Wetzikon *Switzerland, Europe Zurich, GMT+1* *Twitter: @atsticks* *Blogs: http://javaremarkables.blogspot.ch/* *Google: atsticks* *Mobile: +41-76 344 62 79*
Re: Wiki Access
On Fri, Nov 14, 2014 at 2:05 PM, Dmitriy Setrakyan dsetrak...@gridgain.com wrote: Can I please get write permissions to the Wiki? Please let us know what your username is for wiki.apache.org/incubator. (It's not necessarily the same as your apache ID.) Marvin Humphrey
Re: [VOTE] Graduation of Apache MetaModel from the Incubator
+1 - binding On Nov 13, 2014, at 9:39 PM, Henry Saputra henry.sapu...@gmail.com wrote: Hi All, The Apache MetaModel community has wrapped up the VOTE to propose graduation from the Apache Incubator. The VOTE passed with result: 9 binding +1s, zero 0s, zero -1s (http://bit.ly/1u8n8eo) Apache MetaModel came into the ASF Incubator in 2013 and has since grown into a small but active community. We have made several good releases with different release managers, and also added new PPMC members/committers [1]. The project also has good traffic on the dev mailing list [2]. We would like to propose the graduation of Apache MetaModel from the ASF Incubator to a top-level project. [ ] +1 Graduate Apache MetaModel from the Incubator. [ ] +0 Don't care. [ ] -1 Don't graduate Apache MetaModel from the Incubator because... The VOTE will be open for 72 hours (until 11/17/2014) Here is the proposal for the board resolution for graduation: == Board Resolution == Establish the Apache MetaModel Project WHEREAS, the Board of Directors deems it to be in the best interests of the Foundation and consistent with the Foundation's purpose to establish a Project Management Committee charged with the creation and maintenance of open-source software, for distribution at no charge to the public, related to providing an implementation of a Platform-as-a-Service Framework. 
NOW, THEREFORE, BE IT RESOLVED, that a Project Management Committee (PMC), to be known as the Apache MetaModel Project, be and hereby is established pursuant to Bylaws of the Foundation; and be it further RESOLVED, that the Apache MetaModel Project be and hereby is responsible for the creation and maintenance of software related to providing an implementation of a Platform-as-a-Service Framework; and be it further RESOLVED, that the office of Vice President, MetaModel be and hereby is created, the person holding such office to serve at the direction of the Board of Directors as the chair of the Apache MetaModel Project, and to have primary responsibility for management of the projects within the scope of responsibility of the Apache MetaModel Project; and be it further RESOLVED, that the persons listed immediately below be and hereby are appointed to serve as the initial members of the Apache MetaModel Project: * Alberto Rodriguez ardlema at apache dot org * Ankit Kumar ankitkumar2711 at apache dot org * Arvind Prabhakar arvind at apache dot org * Henry Saputra hsaputra at apache dot org * Juan Jose van der Linden delostilos at apache dot org * Kasper Sørensen kaspersor at apache dot org * Matt Franklin mfanklin at apache dot org * Noah Slater nslater at apache dot org * Sameer Arora sarora at apache dot org * Tomasz Guzialek tomaszguzialek at apache dot org NOW, THEREFORE, BE IT FURTHER RESOLVED, that Kasper Sørensen be appointed to the office of Vice President, MetaModel, to serve in accordance with and subject to the direction of the Board of Directors and the Bylaws of the Foundation until death, resignation, retirement, removal or disqualification, or until a successor is appointed; and be it further RESOLVED, that the initial Apache MetaModel PMC be and hereby is tasked with the creation of a set of bylaws intended to encourage open development and increased participation in the Apache MetaModel Project; and be it further RESOLVED, that the Apache MetaModel 
Project be and hereby is tasked with the migration and rationalization of the Apache Incubator MetaModel podling; and be it further

RESOLVED, that all responsibilities pertaining to the Apache Incubator MetaModel podling encumbered upon the Apache Incubator Project are hereafter discharged.

Thanks,

Henry
On behalf of the Apache MetaModel incubating PPMC

[1] http://incubator.apache.org/projects/metamodel.html
[2] http://mail-archives.apache.org/mod_mbox/metamodel-dev

- To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org For additional commands, e-mail: general-h...@incubator.apache.org
Re: [VOTE] Apache Tamaya for Incubation
+1

Regards,
Alan

On Nov 10, 2014, at 4:19 PM, Anatole Tresch atsti...@gmail.com wrote:

Hi all,

Thanks for the feedback thus far on the Tamaya proposal. Based on prior discussion, I'd like to start the vote for Tamaya to be accepted as a new incubator project. The proposal can be found here https://wiki.apache.org/incubator/TamayaProposal as well as copied below.

Vote is open until at least Saturday, 15th November 2014, 23:59:00 UTC

[ ] +1 accept Tamaya in the Incubator
[ ] ±0
[ ] -1 because...

Thanks and Best Regards
Anatole

--
*Anatole Tresch*
Java Engineer, Architect, JSR Spec Lead
Glärnischweg 10
CH - 8620 Wetzikon
Switzerland, Europe (Zurich, GMT+1)
Twitter: @atsticks
Blog: http://javaremarkables.blogspot.ch/
Google: atsticks
Mobile: +41-76 344 62 79

= Apache Tamaya - Proposal =

== Abstract ==

Tamaya is a highly flexible configuration solution based on a modular, extensible and injectable key/value based design, which should provide a minimal but extensible, modern and functional API covering SE, ME and EE environments. ''Tamaya'' translates to ''in the middle'', which is exactly what configuration should be: in the middle between your code and your runtime.

'''NOTE:''' Alternative names could be ''Mahkah'' (earth), ''Dakota'' (friend) or ''Orenda'' (magic force).

== Proposal ==

Tamaya is a highly flexible configuration API based on a modular, extensible and injectable key/value based design. The basic building blocks are:

* ''property providers'' implementing a small and easily implementable subset of `Map<String,String>`
* support for configuration injection
* a type-safe configuration template mechanism
* serializable and remote configuration support
* a JMX/REST based management console
* configuration following the GoF composite pattern, supporting several combination strategies
* an extensible and adaptable environment model, so configuration can be provided depending on the environment currently active
* extension points and a powerful SPI to seamlessly add additional logic to the API, such as secured views, multi-valued validation schemes, en-/decryption etc.
* configuration (and property providers) designed and implemented as indirectly mutable types, providing thread-safe and performant access to configuration
* configuration changes observable by listening to `ConfigChange` events

The API's focus is on simplicity and ease of use. Developers should only have to know a minimal set of artifacts to work with the solution. The API is built on the latest Java 8 features and therefore fits well with the functional style of Java 8.

Additionally, Apache Tamaya will provide:

* a Java SE based implementation with minimal features and dependencies
* a Java EE extension module for integration with Java EE and Apache DeltaSpike
* once Java ME supports lambdas, default methods, method references and functional interfaces, an implementation targeting Java ME as well
* extension modules for different features
* adapter/inter-operation modules for other configuration solutions, including Apache Commons Configuration

== Background ==

There is a global initiative, running now for about a year and led by Anatole Tresch (Credit Suisse), with the goal of standardizing configuration in Java EE and SE. For several reasons it currently seems most sensible to start an OSS project on the topic, to join forces with those who actively want to contribute. It is highly probable that standardization will be restarted at a later point, once we have a widely used Apache standard. For further information you may look at http://javaeeconfig.blogspot.com .

== Rationale ==

Configuration is one of the most cross-cutting concerns, yet it still lacks a standard.
Java EE is currently (as of EE 7) in most areas strictly configurable only at build time of the deployed artifacts. In particular, dynamic provisioning of resources and runtime configuration are not supported and in many cases impossible to add without tweaking the underlying application server. On the other hand, running two separate configuration solutions for Java EE and Java SE makes little sense. So it is important that we have a unified configuration model at hand that is flexible enough that:

* it can be used in Java SE, EE and ME
* it can support contextual behaviour (as in Java EE and multi-tenancy/SaaS scenarios)
* it provides a uniform API, regardless of whether it is used in SE or EE scenarios
* it supports existing APIs, e.g. `System.getProperties`, `java.util.preferences` in SE and CDI, JNDI in Java EE
* it supports service-locator-style access as well as ''injection'' of configured values
* it is simple to use and easily extensible
* it supports integration with existing configuration solutions
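The building blocks described in the proposal (small `Map<String,String>`-like property providers, combined via the GoF composite pattern with a selectable combination strategy) can be sketched roughly as follows. This is an illustrative sketch only, not Tamaya's actual API; all type and method names here are invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

/** Hypothetical sketch of a key/value property provider (names are illustrative). */
interface PropertyProvider {
    Optional<String> get(String key);
}

/** A provider backed by a plain map, e.g. one built from System.getProperties(). */
final class MapPropertyProvider implements PropertyProvider {
    private final Map<String, String> values;
    MapPropertyProvider(Map<String, String> values) { this.values = values; }
    public Optional<String> get(String key) { return Optional.ofNullable(values.get(key)); }
}

/** GoF composite: the first delegate that defines a key wins (an "override" strategy). */
final class CompositePropertyProvider implements PropertyProvider {
    private final PropertyProvider[] delegates;
    CompositePropertyProvider(PropertyProvider... delegates) { this.delegates = delegates; }
    public Optional<String> get(String key) {
        for (PropertyProvider p : delegates) {
            Optional<String> v = p.get(key);
            if (v.isPresent()) return v;
        }
        return Optional.empty();
    }
}

public class ConfigSketch {
    public static void main(String[] args) {
        Map<String, String> defaults = new LinkedHashMap<>();
        defaults.put("app.name", "demo");
        defaults.put("app.port", "8080");
        Map<String, String> overrides = new LinkedHashMap<>();
        overrides.put("app.port", "9090");

        // Overrides are consulted first, defaults second.
        PropertyProvider config = new CompositePropertyProvider(
                new MapPropertyProvider(overrides),
                new MapPropertyProvider(defaults));
        System.out.println(config.get("app.port").orElse("?")); // 9090
        System.out.println(config.get("app.name").orElse("?")); // demo
    }
}
```

Other combination strategies (e.g. union with conflict detection, or aggregating multi-valued entries) would be alternative `CompositePropertyProvider` implementations behind the same interface.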
Re: [PROPOSAL] Kylin for Incubation
Also, a Chinese localized operating system is pretty clearly different from an OLAP engine. For comparison, see the recent non-issue regarding Amazon Aurora versus Apache Aurora.

Sent from my iPhone

On Nov 14, 2014, at 9:55, Henry Saputra henry.sapu...@gmail.com wrote:

Thanks for the reminder, Ross. Hopefully we can go the same route as Apache Spark, Apache Storm, and Apache MetaModel, where the trademark is used as 'Apache Kylin'.

- Henry

On Fri, Nov 14, 2014 at 7:47 AM, Ross Gardler (MS OPEN TECH) ross.gard...@microsoft.com wrote:

Potential trademark clash: http://www.ubuntu.com/desktop/ubuntu-kylin

Sent from my Windows Phone

From: Luke Han luke...@gmail.com
Sent: 11/14/2014 7:38 AM
To: general@incubator.apache.org
Subject: [PROPOSAL] Kylin for Incubation

Hi all,

We would like to propose Kylin as an Apache Incubator project. The complete proposal can be found at https://wiki.apache.org/incubator/KylinProposal; the text of the proposal is also posted below.

Thanks.
Luke

Kylin Proposal

# Abstract

Kylin is a distributed and scalable OLAP engine built on Hadoop to support extremely large datasets.

# Proposal

Kylin is an open source distributed analytics engine that provides multi-dimensional analysis (MOLAP) on Hadoop. Kylin is designed to accelerate analytics on Hadoop by allowing the use of SQL-compatible tools. It provides a SQL interface and multi-dimensional analysis on Hadoop to support extremely large datasets, and it integrates tightly with the Hadoop ecosystem.

## Overview of Kylin

The Kylin platform has two parts, data processing and interactive querying. First, Kylin reads data from a source such as Hive and runs a set of tasks (including MapReduce jobs and shell scripts) to pre-calculate results for a specified data model, then saves the resulting OLAP cube into storage such as HBase. Once these OLAP cubes are ready, a user can submit a request from any SQL-based tool or third-party application to Kylin's REST server.
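The pre-calculation step at the heart of this MOLAP design amounts to grouping the fact data by a chosen combination of dimensions (a cuboid) and storing each aggregate under a composite row key, roughly as it would be laid out in a key-value store like HBase. The following is an illustrative sketch only, not Kylin's actual code; the class names and the key encoding are invented for the example.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/** Illustrative sketch (not Kylin's code): pre-aggregate a fact table along one
 *  dimension combination and store the result as key/value pairs, the way a
 *  cuboid might be laid out in an HBase-like store. */
public class CuboidSketch {

    /** A fact row: dimension values plus one numeric measure. */
    record Row(Map<String, String> dims, long measure) {}

    /** Build one cuboid: group by the given dimensions, summing the measure. */
    static Map<String, Long> buildCuboid(List<Row> facts, List<String> groupBy) {
        Map<String, Long> cuboid = new TreeMap<>();
        for (Row r : facts) {
            // The row key concatenates the grouped dimension values in order,
            // with a separator byte (an assumption for this sketch).
            StringBuilder key = new StringBuilder();
            for (String d : groupBy) key.append(r.dims().get(d)).append('\u0001');
            cuboid.merge(key.toString(), r.measure(), Long::sum);
        }
        return cuboid;
    }

    public static void main(String[] args) {
        List<Row> facts = List.of(
                new Row(Map.of("country", "US", "category", "books"), 10),
                new Row(Map.of("country", "US", "category", "music"), 5),
                new Row(Map.of("country", "DE", "category", "books"), 7),
                new Row(Map.of("country", "US", "category", "books"), 3));
        // The (country, category) cuboid answers GROUP BY country, category
        // queries with a key lookup instead of a table scan.
        Map<String, Long> cube = buildCuboid(facts, List.of("country", "category"));
        System.out.println(cube.get("US\u0001books\u0001")); // 13
    }
}
```

Because the aggregates are computed once offline, a matching query becomes a key lookup (or a short range scan) rather than a scan over the raw fact data, which is where the sub-second latency comes from.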
The server calls the Query Engine to determine whether the target dataset already exists. If so, the engine directly accesses the target data in the form of a predefined cube and returns the result with sub-second latency. Otherwise, the engine is designed to route non-matching queries to whichever SQL-on-Hadoop tool is already available on the Hadoop cluster, such as Hive.

The Kylin platform includes:

- Metadata Manager: Kylin is a metadata-driven application. The Kylin Metadata Manager is the key component that manages all metadata stored in Kylin, including all cube metadata. All other components rely on the Metadata Manager.
- Job Engine: This engine is designed to handle all of the offline jobs, including shell scripts, Java API calls, and MapReduce jobs. The Job Engine manages and coordinates all of the jobs in Kylin to make sure each job executes and failures are handled.
- Storage Engine: This engine manages the underlying storage, specifically the cuboids, which are stored as key-value pairs. The Storage Engine uses HBase, the best fit in the Hadoop ecosystem for leveraging an existing key-value system. Kylin can also be extended to support other key-value systems, such as Redis.
- Query Engine: Once the cube is ready, the Query Engine can receive and parse user queries. It then interacts with other components to return the results to the user.
- REST Server: The REST Server is the entry point for applications to develop against Kylin. Applications can submit queries, get results, trigger cube build jobs, get metadata, get user privileges, and so on.
- ODBC Driver: To support third-party tools and applications, such as Tableau, we have built and open-sourced an ODBC Driver. The goal is to make it easy for users to get on board.

# Background

The challenge we face at eBay is that our data volume is becoming bigger and bigger while our user base is becoming more diverse. For example,
our business users and analysts consistently ask for minimal latency when visualizing data in Tableau and Excel. So we worked closely with our internal analyst community and outlined the product requirements for Kylin:

- Sub-second query latency on billions of rows
- ANSI SQL availability for those using SQL-compatible tools
- Full OLAP capability to offer advanced functionality
- Support for high cardinality and very large dimensions
- High concurrency for thousands of users
- Distributed and scale-out architecture for analysis in the TB to PB size range

Existing SQL-on-Hadoop solutions commonly need to perform partial or full table or file scans to compute the results of queries. The cost of these large data scans can make many queries very slow (more than a minute). The core idea of MOLAP (multi-dimensional OLAP) is to pre-compute data along dimensions of interest and store resulting aggregates as a