[CVE-2019-0224] Apache JSPWiki Cross-site scripting vulnerability

2019-03-26 Thread Juan Pablo Santos Rodríguez
Severity: Medium

Vendor: The Apache Software Foundation

Versions Affected: Apache JSPWiki up to 2.11.0.M2

A carefully crafted URL could execute JavaScript in a user's session. No
information could be saved on the server or in the JSPWiki database, nor
could an attacker execute JavaScript in another user's browser; only in
their own.

Apache JSPWiki users should upgrade to 2.11.0.M3 or later.

This issue was discovered by Muthukumar Marikani (
https://twitter.com/unkn0wn_p3rson), from ZOHO-CRM Security Team

[CVE-2019-0225] Apache JSPWiki Local File Inclusion (limited ROOT folder) vulnerability leads to user information disclosure

2019-03-26 Thread Juan Pablo Santos Rodríguez
Severity: High

Vendor: The Apache Software Foundation

Versions Affected: Apache JSPWiki up to 2.11.0.M2

A specially crafted URL could be used to access files under the ROOT
directory of the application on Apache JSPWiki, which could be used by an
attacker to obtain registered users' details.

Apache JSPWiki users should upgrade to 2.11.0.M3 or later.

This issue was discovered by Muthukumar Marikani (
https://twitter.com/unkn0wn_p3rson), from ZOHO-CRM Security Team

[ANNOUNCEMENT] Apache Commons BCEL 6.3.1

2019-03-26 Thread Gary Gregory
The Apache Commons BCEL team is pleased to announce the release of
Apache Commons BCEL 6.3.1!

The Byte Code Engineering Library (BCEL) is intended to give users a
way to analyze, create, and manipulate compiled .class files. Classes are
represented by objects containing all the symbolic information of the given
class: methods, fields and byte code instructions.
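As an illustration of the kind of symbolic information BCEL parses, the sketch below reads the header of a compiled .class file using only the JDK (no BCEL dependency; the class and method names here are ours, not part of the BCEL API):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClassFileHeader {
    // Every valid .class file begins with the four magic bytes 0xCAFEBABE,
    // followed by minor and major version numbers. BCEL parses this same
    // layout (plus the constant pool, fields, methods, and byte code
    // instructions) into rich JavaClass objects.
    public static int readMagic() throws IOException {
        try (InputStream in = Object.class.getResourceAsStream("Object.class");
             DataInputStream data = new DataInputStream(in)) {
            return data.readInt();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.printf("magic = 0x%08X%n", readMagic()); // 0xCAFEBABE
    }
}
```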

This is a bug fix release. Changes in this version include:

Fixed Bugs:


o BCEL-267: Race conditions on static fields in BranchHandle and
InstructionHandle. Thanks to Stephan Herrmann, Sebb, Gary Gregory, Torsten
o BCEL-297: Possible NPE in override implementation of Object.equals (#20)
Thanks to Mark Roberts, mingleizhang.
o BCEL-315: NullPointerException at
org.apache.bcel.classfile.FieldOrMethod.dump(). Thanks to Gary Gregory.


Changes:

o BCEL-298: Add some files to .gitignore (#19) Thanks to mingleizhang.

Download it from

Have fun!
-Apache Commons BCEL team


Open source works best when you give feedback:


Please direct all bug reports to JIRA:


Or subscribe to the commons-user mailing list

Gary Gregory, on behalf of the Apache Commons Team.

[ANNOUNCE] Apache Kafka 2.2.0

2019-03-26 Thread Matthias J. Sax
The Apache Kafka community is pleased to announce the release for Apache
Kafka 2.2.0

 - Added SSL support for custom principal name
 - Allow SASL connections to periodically re-authenticate
 - Command line tool bin/kafka-topics.sh adds AdminClient support
 - Improved consumer group management
   - default group.id is `null` instead of empty string
 - API improvement
   - Producer: introduce close(Duration)
   - AdminClient: introduce close(Duration)
   - Kafka Streams: new flatTransform() operator in Streams DSL
   - KafkaStreams (and other classes) now implement AutoCloseable to
support try-with-resources
   - New Serdes and default method implementations
 - Kafka Streams exposed internal client.id via ThreadMetadata
 - Metric improvements:  All `-min`, `-avg` and `-max` metrics will now
output `NaN` as default value
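The AutoCloseable change means client objects can be managed with try-with-resources. A minimal sketch of the pattern, using a hypothetical stand-in class rather than a Kafka dependency (`FakeStreams` is ours; the real `KafkaStreams` also gained a `close(Duration)` overload):

```java
import java.time.Duration;

public class CloseableDemo {
    // Hypothetical stand-in for KafkaStreams: implements AutoCloseable
    // and offers a timeout-bounded close, mirroring close(Duration).
    static class FakeStreams implements AutoCloseable {
        boolean closed = false;
        public void close(Duration timeout) { closed = true; }
        @Override public void close() { close(Duration.ofSeconds(30)); }
    }

    public static boolean demo() {
        FakeStreams tracked = new FakeStreams();
        // try-with-resources invokes close() automatically on exit,
        // even if the body throws.
        try (FakeStreams streams = tracked) {
            // ... process records ...
        }
        return tracked.closed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "true"
    }
}
```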

All of the changes in this release can be found in the release notes:

You can download the source and binary release (Scala 2.11 and 2.12)
from: https://kafka.apache.org/downloads#2.2.0


Apache Kafka is a distributed streaming platform with four core APIs:

** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an
output stream to one or more output topics, effectively transforming the
input streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might
capture every change to a table.

With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react
to the streams of data.

Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.

A big thank you to the following 98 contributors to this release!

Alex Diachenko, Andras Katona, Andrew Schofield, Anna Povzner, Arjun
Satish, Attila Sasvari, Benedict Jin, Bert Roos, Bibin Sebastian, Bill
Bejeck, Bob Barrett, Boyang Chen, Bridger Howell, cadonna, Chia-Ping
Tsai, Chris Egerton, Colin Hicks, Colin P. Mccabe, Colin Patrick McCabe,
cwildman, Cyrus Vafadari, David Arthur, Dhruvil Shah, Dong Lin, Edoardo
Comar, Flavien Raynaud, forficate, Gardner Vickers, Guozhang Wang, Gwen
(Chen) Shapira, hackerwin7, hejiefang, huxi, Ismael Juma, Jacek
Laskowski, Jakub Scholz, Jarek Rudzinski, Jason Gustafson, Jingguo Yao,
John Eismeier, John Roesler, Jonathan Santilli, jonathanskrzypek, Jun
Rao, Kamal Chandraprakash, Kan Li, Konstantine Karantasis, lambdaliu,
Lars Francke, layfe, Lee Dongjin, linyli001, lu.ke...@berkeley.edu,
Lucas Bradstreet, Magesh Nandakumar, Manikumar Reddy, Manikumar Reddy O,
Manohar Vanam, Mark Cho, Mathieu Chataigner, Matthias J. Sax, Matthias
Wessendorf, matus-cuper, Max Zheng, Mayuresh Gharat, Mickael Maison,
mingaliu, Nikolay, occho, Pasquale Vazzana, Radai Rosenblatt, Rajini
Sivaram, Randall Hauch, Renato Mefi, Richard Yu, Robert Yokota, Ron
Dagostino, ryannatesmith, Samuel Hawker, Satish Duggana, Sayat, seayoun,
Shawn Nguyen, slim, Srinivas Reddy, Stanislav Kozlovski, Stig Rohde
Døssing, Suman, Tom Bentley, u214578, Vahid Hashemian, Viktor Somogyi,
Viktor Somogyi-Vass, Xi Yang, Xiongqi Wu, ying-zheng, Yishun Guan,
Zhanxiang (Patrick) Huang

We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at

Thank you!



[ANNOUNCE] Apache Calcite 1.19.0 released

2019-03-26 Thread Kevin Risden
The Apache Calcite team is pleased to announce the release of
Apache Calcite 1.19.0.

Calcite is a dynamic data management framework. Its cost-based
optimizer converts queries, represented in relational algebra,
into executable plans. Calcite supports many front-end languages
and back-end data engines, and includes an SQL parser and the
Avatica JDBC driver.

This release comes three months after 1.18.0. It includes more
than 80 resolved issues, comprising a few new features as
well as general improvements and bug fixes. Among others,
there have been significant improvements in JSON query support.
For more details, see the release notes:


The release is available here:


We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at


Kevin Risden, on behalf of the Apache Calcite Team

[ASF at 20] Our Founders look back on 20 Years of the ASF!

2019-03-26 Thread Sally Khudairi
[this interview, along with photos and links, are available online at 
https://s.apache.org/ASF20th-Founders ]

We recently connected with eight of the original 21 Founders of The Apache 
Software Foundation to take a look back at 20 years of the ASF. Joining us are 
Sameer Parekh Brenn, Mark Cox, Lars Eilebrecht, Jim Jagielski, Aram Mirzadeh, 
Bill Stoddard, Randy Terbush, and Dirk-Willem van Gulik, who were generous 
enough to take a walk down memory lane with us.

Q: When did you first get involved with the Apache HTTP Server? What was your 

 Mark: during my PhD work in 1993 I was creating new features and bug fixes for 
the NCSA Web server; I'd also found and fixed a number of security issues and 
was invited by Brian Behlendorf to join the core development team of Apache in 
April 1995, a few weeks after it was formed.

 Randy: I first got involved through finding a few like-minded people who were 
working with the NCSA Web server. I began exchanging patches and ideas for how 
to make the NCSA server scale to some of the hosting challenges that we were 
all facing as commercial use of the Web began to grow. Late 1994 if I remember 

 Dirk: I got involved in the early NCSA Web server days. I was working for a 
research lab, and we needed specific functionality to allow us to make small 
geographic subsets of huge satellite images available as an 'image'. Something 
novel at that time, as the normal way to get such images was to fill out a 
form, fax it, and then wait a few months for a large box or container with 
tapes to arrive. It would then take weeks or months to load up those tapes and 
extract just the area you needed.

 Jim: in 1995, initially in providing portability patches to Apple's old UNIX 
operating system, A/UX, and then in adding features, fixing bugs, and working 
on the configuration and build system.

 Lars: around 1995 during my studies I developed an interest in Unix and 
Internet technologies, and in Web servers in particular. I actually set up the 
first official Web site for the University of Siegen in Germany. Well, we 
didn't use Apache in the very beginning, but we very quickly realized that the 
Apache HTTP Server was the way forward. I started helping other Apache users in 
various online forums, and about a year later I was asked by a German 
publishing company to write about the Apache HTTP Server, which was published 
in 1998.

 Sameer: I became involved when I perceived a need in the marketplace for an 
Open Source HTTP server that supported SSL. Ben Laurie had developed Apache-SSL 
but it was not possible to use it within the United States due to patent 
restrictions. My company developed a solution.

 Bill: it was 1997, and I had just become Chief Programmer for IBM's 
proprietary Lotus Domino Go Webserver. LDGW needed a lot of enhancements but 
the code base was fragile and HTTP servers, by this time, were no longer a 
source of revenue. Exploring alternatives to continuing development on LDGW, we 
found that the Apache HTTP Server had almost everything we needed in a rock 
solid implementation. I can't overstate how big a deal it was in IBM at the 
time to consider using Open Source software.

 Aram: late 1990s ...I migrated Apache HTTPD v1 to Linux and SCO Unix. I also 
had the first easy-to-follow Website dedicated to guiding users on setting up 
IP-virtual hosts/websites.

Q: How did you get involved with the original Apache Group?

 Dirk: satellite images were bulky, required complex user interaction to 
select an area on the map, and were somewhat sensitive from a security 
perspective; so we needed all sorts of functionality that was not yet common 
in the NCSA Server, or the more science-oriented data server of CERN.

 Randy: I got involved through what was standard operating procedure for me: 
hunting Usenet for other people that were trying to solve the same challenges I 

 Aram: I had been sending commits to NCSA and getting rejected when I heard 
about a bunch of guys leaving to go start a new Web server. I went along a bit 
after they had started, to see if I could get some recognition for Linux and 
SCO, which had been my responsibility at the company I was working for.

 Sameer: I got involved when I began work on our SSL solution.

 Lars: in 1997 I published the first German book about the Apache HTTP Server. 
When documenting and testing the various features of Apache I ran into some 
issues and bugs and ended up submitting a fairly large number of bug reports 
and some patches to the Apache Group. I guess after a while they got tired of 
all my bug reports and invited me to become a member of the Apache Group, 
thereby allowing me to apply the bug fixes myself.

 Bill: the Apache Group's home page indicated that they would welcome company 
participation in the project. That opened the door for James Barry, an IBM 
Product Manager, and Yin Ping Shan, an IBM STSM, to contact Brian Behlendorf 
about IBM's participation in the 

[ASF at 20] 20 Years of Open Source Innovation, The Apache Way

2019-03-26 Thread Sally Khudairi
[this post is available online at https://s.apache.org/CmA3  and 
https://opensource.com/article/19/3/apache-projects ]

by Jim Jagielski and Sally Khudairi

As the world’s largest and one of the most influential open source foundations, 
The Apache Software Foundation (ASF) is home to more than 350 community-led 
projects and initiatives. The ASF’s 731 individual Members and more than 7,000 
Committers are global, diverse, and often embody a collective humility. We’ve 
assembled a list of 20 ubiquitous and up-and-coming Apache 
projects to celebrate the ASF’s 20th Anniversary on 26 March 2019, applaud our 
all-volunteer community, and thank the billions of users who benefit from their 
Herculean efforts.

1. Apache HTTP Server
Web/Servers. http://httpd.apache.org/  

The most popular open source HTTP server on the planet shot to fame just 13 
months after its inception in 1995, and it remains popular today thanks to its 
secure, efficient, and extensible implementation of the latest HTTP standards. 
Serving modern operating systems including UNIX, Microsoft Windows, and Mac 
OS X, the Apache HTTP Server played a key role in the initial growth of the 
World Wide Web; its rapid adoption over all other Web servers combined was 
also instrumental to the wide proliferation of eCommerce sites and solutions. 
The Apache HTTP Server project was the ASF’s flagship project at its launch, 
and served as the model that future Apache projects emulated, with its open, 
community-driven, meritocratic development process known as “The Apache Way”.

2. Apache Incubator
Innovation. http://incubator.apache.org/ 

The Apache Incubator is the ASF’s nexus for innovation, serving as the entry 
path for projects and codebases wishing to officially become part of the 
efforts at The Apache Software Foundation. All code donations from external 
organizations and existing external projects go through the incubation process 
to ensure all donations are in accordance with the ASF legal standards, and 
develop diverse communities that adhere to the ASF’s guiding principles. 
Incubation is required of newly accepted projects until their infrastructure, 
communications, and decision making process have stabilized in a manner 
consistent with other successful ASF projects. Whilst incubation is not 
necessarily a reflection of the completeness or stability of the code, and 
indicates only that the project has yet to be fully endorsed by the ASF, its 
rigorous process of mentoring projects and their communities according to “The 
Apache Way” has led to the graduation of nearly 200 projects in the 
Incubator’s 16-year history. Today 51 “podlings” are undergoing development in 
the Apache Incubator 
across an array of categories, including annotation, artificial intelligence, 
Big Data, cryptography, data science/storage/visualization, development 
environments, Edge and IoT, email, JavaEE, libraries, machine learning, 
serverless computing, and more.

3. Apache Kafka
Big Data. https://kafka.apache.org/ 

The Apache footprint as the foundation of the Big Data ecosystem continues to 
grow, from Accumulo to Hadoop to ZooKeeper, with fifty active projects to date 
and two dozen more in the Apache Incubator. Apache Kafka’s highly performant, 
distributed, fault-tolerant, real-time publish-subscribe messaging platform 
powers Big Data solutions at Airbnb, LinkedIn, MailChimp, Netflix, The New York 
Times, Oracle, PayPal, Pinterest, Spotify, Twitter, Uber, Wikimedia Foundation, 
and countless other businesses.

4. Apache Maven
Build Management. http://maven.apache.org/

Spinning out of the Apache Turbine servlet framework project in 2004, Apache 
Maven has risen to the top as the hugely popular build automation tool that 
helps Java developers build and release software. Stable, flexible, and 
feature-rich, Maven streamlines continuous builds, integration, testing, and 
delivery processes with an impressive central repository and robust plug-in 
ecosystem, making it the go-to choice for developers who want to easily manage 
a project’s build, reporting, and documentation.

5. Apache CloudStack
Cloud. http://cloudstack.apache.org/

Super-quick to deploy, well-documented, and easy to run in production, one of 
the biggest draws to Apache CloudStack is that it “just works”. Powering some 
of the industry’s most visible Clouds – from global 
hosting providers to telcos to the Fortune 100 top 5% and more – the CloudStack 
community is cohesive, agile, and focused, leveraging 11 years of Cloud success 
to enable users to rapidly and affordably build fully featured clouds.

6. Apache cTAKES
Content. http://ctakes.apache.org/ 

Developed from real-world use at the Mayo Clinic in 2006, cTAKES was created by 
a team of physicians, computer scientists and software engineers seeking a 
natural language processing system for extraction of information from 
electronic medical record clinical 

The Apache® Software Foundation Celebrates 20 Years of Community-led Development "The Apache Way"

2019-03-26 Thread Sally Khudairi
[this announcement is available online at 
https://s.apache.org/ASF20thAnniversary ]

The Apache Software Foundation (ASF), the all-volunteer developers, stewards, 
and incubators of more than 350 Open Source projects and initiatives, announced 
today its 20th Anniversary, celebrating "The Apache Way" of community-driven 
development as the key to its success.

The world's largest Open Source foundation is home to dozens of 
freely-available (no cost), enterprise-grade Apache projects that serve as the 
backbone for some of the most visible and widely used applications. The 
ubiquity of Apache software is undeniable, with Apache projects managing 
exabytes of data, executing teraflops of operations, and storing billions of 
objects in virtually every industry. Apache software is an integral part of 
nearly every end user computing device, from laptops to tablets to phones.

"What started before the term 'Open Source' was coined has now grown to support 
hundreds of projects, thousands of contributors and millions of users," said 
Phil Steitz, Chairman of The Apache Software Foundation. "The Apache Way has 
shown itself to be incredibly resilient in the wake of the many changes in 
software and technology over the last twenty years. As the business and 
technology ecosystems around our projects have grown, our community-based open 
development model has evolved but remained true to the core principles 
established in the early days of the Foundation. We remain committed to the 
simple idea that open, community-led development produces great software and 
when you make that software freely available with no restrictions on how it can 
be used or integrated, the communities that develop it get stronger. The 
resulting virtuous cycle has been profoundly impactful on the software industry 
as a whole and on those of us who have had the good fortune of volunteering 
here. When we celebrate fifty years, I am sure that we will say the same thing."

["ASF at 20" promo https://s.apache.org/ASF20 ]

Software for the Public Good
In 1999, 21 founders, including original members of the Apache Group (creators 
of the Apache HTTP Server, the World's most popular Web server since 1996), 
formed The Apache Software Foundation to provide software for the public good. 
The ASF's flagship project, the Apache HTTP Server, continues development under 
the auspices of the ASF, and has grown to serve more than 80 million Websites 

"The most successful revolutions are those birthed by Passion and Necessity. 
What keeps them going are Communities," said ASF co-founder Jim Jagielski. 
"Congratulations to the ASF and to everyone who has had a hand, large and 
small, in making it into who and what we are today."

The Apache Way
The open, community-driven process behind the development of the Apache HTTP 
Server formed the model adopted by future Apache projects as well as emulated 
by other Open Source foundations. Dubbed "The Apache Way", the principles 
underlying the ASF embrace:

Earned Authority: all individuals are given the opportunity to participate, and 
their influence is based on publicly-earned merit – what they contribute to the 
community. Merit lies with the individual, does not expire, is not influenced 
by employment status or employer, and is non-transferable.

Community of Peers: participation at the ASF is done through individuals, not 
organizations. Its flat structure dictates that the Apache community is 
respectful of each other, roles are equal, votes hold equal weight, and 
contributors are doing so on a volunteer basis (even if paid to work on Apache 

Open Communications: as a virtual organization, the ASF requires all 
communications be made online, via email. Most Apache lists are archived and 
publicly accessible to ensure asynchronous collaboration, as required by a 
globally-distributed community.

Consensus Decision Making: Apache Projects are auto-governing with a heavy 
slant towards driving consensus to maintain momentum and productivity. Whilst 
total consensus is not possible to establish at all times, holding a vote or 
other coordination may be required to help remove any blocks with binding 

Responsible Oversight: the ASF governance model is based on trust and delegated 
oversight, with self-governing projects providing reports directly to the 
Board. Apache Committers help each other by making peer-reviewed commits, 
employing mandatory security measures, ensuring license compliance, and 
protecting the Apache brand and community at-large from abuse.
The ASF is strictly vendor neutral. No organization is able to gain special 
privileges or control a project's direction, irrespective of employing staff to 
work on Apache projects or sponsorship status.

The ASF Today
Behind the ASF is an all-volunteer community comprising 730 individual Members 
and 7,000 Committers stewarding 200M+ lines of code that benefit billions of 
users worldwide.

Lauded as one of the