I think the model I described can be more secure, as only the proxy needs
access to the intranet. But I clearly see the need for both approaches: as
you correctly stated, my approach is more complex.
I agree about the proposed changes in the Topology Manager and ZooKeeper
discovery. They should cover both approaches.
The change in the existing distribution provider would not be needed in
my approach. Instead we would need a separate distribution provider that
takes care of the proxying or of configuring an external proxy. I think we
could implement this using the Aries-specific DistributionProvider SPI
(https://github.com/apache/aries-rsa/blob/master/spi/src/main/java/org/apache/aries/rsa/spi/DistributionProvider.java).
This should keep the code pretty small compared to a full distribution
provider.
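To make this concrete, here is a rough sketch of what such a proxying
provider could look like. The SPI signatures are quoted from the linked
file as I remember them and may differ slightly; the "aries.proxy" config
type, the "zone" property and the createAlias/removeAlias helpers are
made-up names for illustration only.

import java.util.Map;

import org.apache.aries.rsa.spi.DistributionProvider;
import org.apache.aries.rsa.spi.Endpoint;
import org.osgi.framework.BundleContext;
import org.osgi.service.remoteserviceadmin.EndpointDescription;
import org.osgi.service.remoteserviceadmin.RemoteConstants;

public class ProxyDistributionProvider implements DistributionProvider {

    @Override
    public String[] getSupportedTypes() {
        return new String[] { "aries.proxy" }; // made-up config type
    }

    @Override
    public Endpoint exportService(Object serviceO, BundleContext serviceContext,
            Map<String, Object> effectiveProperties, Class[] exportedInterfaces) {
        // The service itself is already served over HTTP by the real REST
        // provider. We only ask the proxy server for a public alias of the
        // internal URL and publish the endpoint under that alias.
        String internalUrl = (String) effectiveProperties.get(RemoteConstants.ENDPOINT_ID);
        final String aliasUrl = createAlias(internalUrl);
        effectiveProperties.put(RemoteConstants.ENDPOINT_ID, aliasUrl);
        effectiveProperties.put("zone", "frontend"); // proposed partitioning property
        final EndpointDescription desc = new EndpointDescription(effectiveProperties);
        return new Endpoint() {
            @Override
            public EndpointDescription description() {
                return desc;
            }

            @Override
            public void close() {
                removeAlias(aliasUrl); // tear down the forwarding rule
            }
        };
    }

    @Override
    public Object importEndpoint(ClassLoader cl, BundleContext consumerContext,
            Class[] interfaces, EndpointDescription endpoint) {
        // Nothing special on import: clients talk to the alias URL through
        // whatever provider matches the endpoint's configuration type.
        return null;
    }

    private String createAlias(String internalUrl) {
        // Placeholder: a real implementation would call the proxy server's
        // configuration API here and return the public alias it assigned.
        return internalUrl.replace("http://backend:8181", "https://proxy.example.com");
    }

    private void removeAlias(String aliasUrl) {
        // Placeholder: remove the forwarding rule from the proxy server.
    }
}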
There is one thing I am missing in your description: how do you
configure the proxy server that provides the secure endpoint? I think
the typical production approach will be to use an existing HTTP proxy
server that has an API to configure the forwarding. Would that happen in
the REST provider? I am not sure that would be a good idea, as we will
likely have to support different proxy APIs, and I would rather avoid
putting all of that into the REST provider. Another point is that we might
need the same or very similar functionality for SOAP endpoints.
So I guess it might be better to provide an SPI in Aries RSA where
people could plug in support for the different proxy servers.
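Such an SPI could be as small as the following sketch. All names here are
hypothetical; this only illustrates the shape of the plug-in point, with
one implementation per proxy server (nginx, HAProxy, a cloud router, ...)
registered as an OSGi service and looked up by the proxying provider.

/**
 * Hypothetical plug-in point for configuring an external HTTP proxy.
 */
public interface ProxyServerConfigurer {

    /**
     * Create a forwarding rule for the given internal URL.
     * @return the externally visible URL of the endpoint
     */
    String addForwarding(String internalUrl);

    /** Remove the forwarding rule when the export is closed. */
    void removeForwarding(String externalUrl);
}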
Christian
On 22.09.2016 12:26, Timothy Ward wrote:
Hi Christian,
This is very nearly the approach that I suggested, but it adds an additional
zone into the mix, which I think is unnecessary, and I feel it hides the fact
that exactly the same security concerns exist with it.
In the layout that you’ve drawn the backend server publishes a service using
RSA (the fact that it’s REST/HTTP is immaterial). This is discovered by nodes
in the backend zone using a backend ZooKeeper. The “Proxy” node is also in this
Back End discovery zone, and imports services from it - this is the “DMZ” where
both networks intersect. The firewall has to have a rule configured to allow
the DMZ to talk to the backend ZooKeeper in this model, just as it does when
the DMZ is bigger.
As the proposal isn’t more secure I would avoid this “forwarding” approach, and
instead have each back-end server use a ManagedServiceFactory to configure
itself with two discovery zones (i.e. two EndpointEventListeners with different
configurations). These zones could use two ZooKeepers, or just two namespaces
within a single ZooKeeper. Each discovery configuration should also be able to
provide a String+ of filters that will be placed on the Discovery
EndpointEventListener (these would be ANDed with the core discovery filter). As
a result you get purely config-admin driven partitioning of the discovery space
based on EndpointDescription Properties.
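Wired up via Config Admin, that proposal could look roughly like the
following. Note the assumptions: the factory PID is borrowed from the
existing ZooKeeper discovery (today a singleton PID, not a factory) and
the "endpoint.filters" property does not exist yet, as the MSF and filter
support are exactly what is being proposed here.

import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class DiscoveryZonesExample {

    // Create two discovery zones as factory configurations.
    public void configure(ConfigurationAdmin configAdmin) throws Exception {
        createZone(configAdmin, "zk-backend.internal", "(zone=backend)");
        createZone(configAdmin, "zk-frontend.dmz", "(zone=frontend)");
    }

    private void createZone(ConfigurationAdmin configAdmin, String host, String filter)
            throws Exception {
        Configuration config = configAdmin.createFactoryConfiguration(
                "org.apache.aries.rsa.discovery.zookeeper", null);
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("zookeeper.host", host);
        props.put("zookeeper.port", "2181");
        // Proposed: ANDed with the core discovery filter on the
        // EndpointEventListener that this discovery instance registers.
        props.put("endpoint.filters", new String[] { filter });
        config.update(props);
    }
}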
The front end services would be configured with just the one discovery zone
which doesn’t need special filtering, or any other special handling from the
client Topology Manager.
The remaining work needs to be done in two places:
1. The “back-end” topology manager should have a configurable
EndpointEventListener filter (just like the discovery layer), which allows it
to be configured to ignore the “secure” endpoints. This is much simpler than a
Topology plugin model, and follows the RSA spec.
2. The distribution provider - this is the only part which is related to
the protocol/transport. In this case the REST Distribution provider simply
needs to be configurable with a secure proxy endpoint, and to add a service
intent which indicates that the transport is secure (e.g.
message.confidentiality). This allows exported services to *require* the intent
by setting the service.exported.intents property. Services which do specify
this intent would not be exported insecurely, something that is not possible in
the forwarding model.
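For illustration, an export that must only ever go over the secure proxy
could then look like this. The service.exported.* properties are standard
Remote Services properties; the GreetingService interface and the
"aries.rest" config type are placeholders.

import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleContext;

public class SecureExportExample {

    // Placeholder service interface for the example
    public interface GreetingService {
        String greet(String name);
    }

    public void export(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("service.exported.interfaces", "*");
        props.put("service.exported.configs", "aries.rest"); // placeholder config type
        // Standard RSA behaviour: only a provider that can satisfy this
        // intent may export the service, so it is never published on the
        // insecure direct URL.
        props.put("service.exported.intents", "message.confidentiality");
        context.registerService(GreetingService.class,
                name -> "Hello " + name, props);
    }
}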
Effectively I believe there is only a small amount of work to do here, all of
which is within the scope of the existing RSA spec:
Topology:
* String+ of filters to apply to the Discovery EndpointEventListener
Discovery:
* MSF for allowing multiple simultaneous configurations
* String+ of filters to apply to the Discovery EndpointEventListener
Distribution:
* Support for configurable additional base urls which map to one or more
service intents
* Ensure that Aries RSA is spec compliant when it comes to intent matching
The advantages of this model are that:
* Services can be deliberately exported as *only* secure endpoints to be
used by the front end
* Services can be deliberately exported as *only* insecure endpoints for
use at the back end
* The secure endpoint information correctly identifies the source
framework, service id and endpoint id
* There is no “republishing” component to build and install, and the
overall system is simpler as a result.
I hope this is clear enough to outline what I’m thinking.
Regards,
Tim
On 22 Sep 2016, at 09:23, Christian Schneider
<ch...@die-schneider.net> wrote:
So let's think about a practical example. We want to expose a REST service
that is visible on the inside via its direct URL on the server that exposes
it, and also via a proxy server, where the service will have a different
URL.
So the server-side topology manager would detect that the service is to be
published. It would call the distribution provider to export the service.
The provider would then return two ExportRegistrations, one for direct
access and one for access over the proxy. Each could carry a special
property like "zone" to distinguish them (e.g. "backend" and "frontend").
The TopologyManager on the front-end client side could then be configured
to only consider Endpoints that have (zone=frontend) for imports.
On the client side I like this approach. It would also need only minimal
additions to the TopologyManager code. We would either need a config
setting for an import filter or an SPI where the user can supply custom
filtering logic.
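The config-based variant could be as small as one property; both the PID
and the property name below are hypothetical:

# org.apache.aries.rsa.topologymanager.cfg
import.filters = (zone=frontend)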
On the server side I am not so sure. The distribution provider would need to
know about the proxy and it might even need to communicate with it to get the
alias address for the service it exports. Another problem is that in such a
scenario the server administrator would be able to poke new holes into the
firewall.
On the positive side, I agree that the DistributionProvider knows more
about the details of the protocol and is therefore better qualified than
the TopologyManager to create a suitable address.
Generally I think I would prefer a more central approach where you have a
system that is part of the firewall that manages which services to expose to
the outside world. Maybe this can also be done with Remote Service Admin.
How about this:
See http://liquid-reality.de/display/liquid/Zones+for+Aries+RSA
We use a system with two instances of the ZooKeeper discovery: one that
communicates with the backend ZooKeeper and one that communicates with the
frontend ZooKeeper.
The backend discovery would report all internal services to the
TopologyManager. The TopologyManager would select the backend services for
import using a special proxy DistributionProvider.
The proxy DistributionProvider would create an OSGi service with the
necessary properties to be exported as a proxy for the frontend. The
TopologyManager would detect this service and export it using the same
proxy DistributionProvider, which would create an ExportRegistration with
the URL of the proxy Endpoint. It would also either implement an HTTP
proxy for the service itself or configure an external proxy server.
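A sketch of that re-publishing step could look like the code below. All
names are hypothetical; the point is only the trick of registering the
imported service with service.exported.* properties so that the
TopologyManager picks it up and exports it again.

import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleContext;

public class ProxyRepublisher {

    // Hypothetical: called by the proxy DistributionProvider after it has
    // imported a backend endpoint and built a client proxy for it.
    public void republish(BundleContext context, String interfaceName, Object proxy) {
        Dictionary<String, Object> props = new Hashtable<>();
        // Mark the proxy for re-export so the TopologyManager detects it
        // and exports it again, this time through the proxy provider.
        props.put("service.exported.interfaces", interfaceName);
        props.put("service.exported.configs", "aries.proxy"); // made-up config type
        props.put("zone", "frontend"); // proposed partitioning property
        context.registerService(interfaceName, proxy, props);
    }
}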
This approach might even be able to cover both cases: the one where we
have a single ZooKeeper with all addresses and the one where we have
separate ZooKeepers for frontend and backend.
If I understand correctly then the Endpoint information for the proxy service
would always be sent to all discovery providers, or can the TopologyManager
select where to export it? If it cannot, then I think we would need a filtering
config for the zookeeper discovery so the frontend zookeeper will only contain
the frontend services and the backend zookeeper would only contain the backend
services.
What do you think?
Christian
On 21.09.2016 11:49, Timothy Ward wrote:
Hi Christian,
From an RSA perspective this is a Topology Management issue, not a discovery
issue. Services should be exposed by the Distribution Provider with multiple
ExportRegistrations (and hence multiple EndpointDescriptions), one for the
“internal” URL and one for the “proxied” URL. The Topology Manager on the client
side should then import the Endpoint Description that gives the behaviour that
it desires.
Using multiple discovery services to do this sort of scoping is possible, but
much more complicated than controlling it using the Topology Manager. Adding a
simple filter to the client side’s Topology Manager’s EndpointEventListener
would probably be sufficient, and could be done easily using Config Admin.
Regards,
Tim
On 19 Sep 2016, at 13:52, Christian Schneider
<ch...@die-schneider.net> wrote:
I just had a discussion with Panu Hämäläinen about the DiscoveryPlugin
mechanism.
See https://issues.apache.org/jira/browse/ARIES-1613 and
https://issues.apache.org/jira/browse/ARIES-1614 .
What he needs is to have two zones of services, a backend zone and a frontend
zone.
The (or some) services in the backend will be published with HTTP. Inside
the backend zone the services should be available using this HTTP URL.
In the frontend zone these services should also be visible, but their URL
should point to a proxy server that offers an HTTPS connection and
potentially some additional security mechanisms.
So we cannot simply have one (ZooKeeper or other) discovery view. Instead we
need a different discovery view for backend and frontend and some mechanism to
make some services from one zone available in the other while also applying
some changes like pointing to a proxy.
Do you think it makes sense to support this case in Aries RSA in some way?
I thought we might also be able to interact with one of the existing proxy
servers to automatically register the proxy for each service. Such a
mechanism is very typical for cloud-enabled architectures, so it might
also bring us closer to good cloud support.
I would be happy about any ideas and feedback.
Christian
--
Christian Schneider
http://www.liquid-reality.de
Open Source Architect
http://www.talend.com