Re: [openstack-dev] [manila] [security] [tc] Add the vulnerability:managed tag to Manila

2016-08-31 Thread John Spray
On Tue, Aug 30, 2016 at 6:07 PM, Jeremy Stanley  wrote:
> Ben has proposed[1] adding manila, manila-ui and python-manilaclient
> to the list of deliverables whose vulnerability reports and
> advisories are overseen by the OpenStack Vulnerability Management
> Team. This proposal is an assertion that the requirements[2] for the
> vulnerability:managed governance tag are met by these deliverables.
> As such, I wanted to initiate a discussion evaluating each of the
> listed requirements to see how far along those deliverables are in
> actually fulfilling these criteria.
>
> 1. All repos for a covered deliverable must meet the criteria or
> else none do. Easy enough, each deliverable has only one repo so
> this isn't really a concern.
>
> 2. We need a dedicated point of contact for security issues. Our
> typical point of contact would be a manila-coresec team in
> Launchpad, but that doesn't exist[3] (yet). Since you have a fairly
> large core review team[4], you should pick a reasonable subset of
> those who are willing to act as the next line of triage after the
> VMT hands off a suspected vulnerability report under embargo. You
> should have at least a couple of active volunteers for this task so
> there's good coverage, but more than 5 or so is probably pushing the
> bounds of information safety. Not all of them need to be core
> reviewers, but enough of them should be so that patches proposed as
> attachments to private bugs can effectively be "pre-approved" in an
> effort to avoid delays merging at time of publication.
>
> 3. The PTL needs to agree to act as a point of escalation or
> delegate this responsibility to a specific liaison. This is Ben by
> default, but if he's not going to have time to serve in that role
> then he should record a dedicated Vulnerability Management Liaison
> in the CPLs list[5].
>
> 4. Configure sharing[6][7][8] on the defect trackers for these
> deliverables so that OpenStack Vulnerability Management team
> (openstack-vuln-mgmt) has "Private Security: All". Once the
> vulnerability:managed tag is approved for them, also remove the
> "Private Security: All" sharing from any other teams (so that the
> VMT can redirect incorrectly reported vulnerabilities without
> prematurely disclosing them to manila reviewers).
>
> 5. Independent security review, audit, or threat analysis... this is
> almost certainly the hardest to meet. After some protracted
> discussion on Kolla's application for this tag, it was determined
> that projects should start supplying threat analyses to a central
> security-analysis[9] repo where they can be openly reviewed and
> ultimately published. No projects have actually completed this yet,
> but there is some process being finalized by the Security Team which
> projects will hopefully be able to follow. You may want to check
> with them on the possibility of being an early adopter for that
> process.

Given that all the drivers live in the Manila repo, is this
requirement for security audits going to apply to them as well?  Given
the variety of technologies and network protocols involved in talking
to external storage systems, this strikes me as probably the hardest
part.

John

> 6. Covered deliverables need tests we can rely on to be able to
> evaluate whether privately proposed security patches will break the
> software. A cursory look shows many jobs[10] running in our upstream
> CI for changes to these repos, so that requirement is probably
> addressed (I did not yet check whether those
> unit/functional/integration tests are particularly extensive).
>
> So in summary, it looks like there are still some outstanding
> requirements not yet met for the vulnerability:managed tag but I
> don't see any insurmountable challenges there. Please let me know if
> any of the above is significantly off-track.
>
> [1] https://review.openstack.org/350597
> [2] 
> https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
> [3] https://launchpad.net/~manila-coresec
> [4] https://review.openstack.org/#/admin/groups/213,members
> [5] 
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management
> [6] https://launchpad.net/manila/+sharing
> [7] https://launchpad.net/manila-ui/+sharing
> [8] https://launchpad.net/pythonmanilaclient/+sharing
> [9] 
> https://git.openstack.org/cgit/openstack/security-analysis/tree/doc/source/templates/
> [10] 
> https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml
>
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [Manila] Service VMs, CI, etc

2016-07-29 Thread John Spray
Hi folks,

We're starting to look at providing NFS on top of CephFS, using NFS
daemons running in Nova instances.  Looking ahead, we can see that
this is likely to run into similar issues in the openstack CI that the
generic driver did.

I got the impression that the main issue with testing the generic
driver was that bleeding edge master versions of Nova/Neutron/Cinder
were in use when running in CI, and other stuff had a habit of
breaking.  Is that roughly correct?

Assuming versions are the main issue, we're going to need to look at
solutions to that, which could mean either doing some careful pinning
of the versions of Nova/Neutron used by Manila CI in general, or
creating a separate CI setup for CephFS that had that version pinning.
My preference would be to see this done Manila wide, so that the
generic driver could benefit as well.

Thoughts?

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] nfs-ganesha export modification issue

2016-07-01 Thread John Spray
On Thu, Jun 30, 2016 at 1:37 PM, Alexey Ovchinnikov wrote:
> Hello everyone,
>
> here I will briefly summarize an export update problem one will encounter
> when using nfs-ganesha.
>
> While working on a driver that relies on nfs-ganesha I have discovered
> that it is apparently impossible to provide interruption-free export
> updates. As of version 2.3, which I am working with, it is possible to add
> or remove an export without restarting the daemon, but it is not possible
> to modify an existing export. In other words, if you create an export you
> should define all clients before you actually export and use it; otherwise
> it will be impossible to change the rules on the fly. One can come up with
> at least two ways to work around this issue: either by removing, updating
> and re-adding an export, or by creating multiple exports (one per client)
> for an exported resource. Both ways have associated problems: the first
> one interrupts clients already working with an export, which might be a
> big problem if a client is doing heavy I/O; the second one creates
> multiple exports associated with a single resource, which can easily lead
> to confusion. The second approach is used in current manila's ganesha
> helper[1].
> This issue has been raised now and then with the nfs-ganesha team, most
> recently in [2], but apparently it will not be addressed in the near
> future.

This is certainly an important limitation for people to be aware of.
My reading of [2] wasn't that anyone was saying it would necessarily
not be addressed; it just needs someone to do it.  Frank's mail on that
thread pretty much laid out the steps needed.
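
To make the two workarounds concrete, here is a rough Python sketch of the
bookkeeping each one implies.  It is illustrative only: apply_export and
remove_export are local placeholders for whatever runtime mechanism is used
to add and remove exports, not real nfs-ganesha or Manila APIs.

# Illustrative sketch only: apply_export/remove_export are placeholders for
# whatever runtime mechanism a deployment uses to add and remove ganesha
# exports; they are not real nfs-ganesha or Manila APIs.

def apply_export(block):
    print("adding export %(Export_Id)s for clients %(Clients)s" % block)

def remove_export(export_id):
    print("removing export %s" % export_id)

def make_export_block(export_id, share_path, clients):
    """Render the data for a single EXPORT block with an explicit client list."""
    return {
        "Export_Id": export_id,
        "Path": share_path,
        "Pseudo": "/share-%d" % export_id,
        "Clients": list(clients),
        "Access_Type": "RW",
    }

# Workaround A: remove the export and re-add it with an updated client list.
# Every client already mounted on the export is interrupted while this runs.
def update_clients_by_readd(export_id, share_path, clients):
    remove_export(export_id)
    apply_export(make_export_block(export_id, share_path, clients))

# Workaround B (the approach in manila's ganesha helper): one export per
# client, so allowing or denying a client only touches that client's export.
def allow_client(next_export_id, share_path, client_ip):
    apply_export(make_export_block(next_export_id, share_path, [client_ip]))

def deny_client(export_id_for_client):
    remove_export(export_id_for_client)

The per-client approach avoids interrupting other clients, at the cost of
many exports per share, which is the confusion mentioned above.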

John

> With kind regards,
> Alexey.
>
> [1]:
> https://github.com/openstack/manila/blob/master/manila/share/drivers/ganesha/__init__.py
> [2]: https://sourceforge.net/p/nfs-ganesha/mailman/message/35173839
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Any updates on share groups?

2016-05-04 Thread John Spray
Hi all,

Back in the office with only minor jetlag... unfortunately I had to
skip the discussion last Friday. I was wondering whether there was much
more discussion about how share groups are going to work, especially
from a driver POV?  The etherpad notes are mainly a recap.

I'd like to get ahead of this for the cephfs driver, because we have
CG support in the existing code.  I'm hoping we'll be able to just
invisibly map what we used to call CGs into new share groups that
happen to have the snapshottable group type.

Related: IIRC the group was in favour of adopting a spec process; did
we agree to do that for the Newton features?

Cheers,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Concurrent execution of drivers

2016-03-04 Thread John Spray
On Fri, Mar 4, 2016 at 1:34 PM, Valeriy Ponomaryov
<vponomar...@mirantis.com> wrote:
> John,
>
> each instance of the manila-share service will perform the "ensure_share"
> operation for each "share instance" that is located at
> "hostname@driver_config_group_name".
> So only one driver is expected to run "ensure_share" for any given "share
> instance", because each instance of a driver will have its own unique value
> of "hostname@driver_config_group_name".

Thanks - so if I understand you correctly, each share instance is
uniquely associated with a single instance of the driver at one time,
right?  So while I might have two concurrent calls to ensure_share,
they are guaranteed to be for different shares?

Is this true for the whole driver interface?  Two instances of the
driver will never both be asked to do operations on the same share at
the same time?

John



> Valeriy
>
> On Fri, Mar 4, 2016 at 3:15 PM, John Spray <jsp...@redhat.com> wrote:
>>
>> On Fri, Mar 4, 2016 at 12:11 PM, Shinobu Kinjo <shinobu...@gmail.com>
>> wrote:
>> > What are you facing?
>>
>> In this particular instance, I'm dealing with a case where we may add
>> some metadata in ceph that will get updated by the driver, and I need
>> to know how I'm going to be called.  I need to know whether e.g. I can
>> expect that ensure_share will only be called once at a time per share,
>> or whether it might be called multiple times in parallel, resulting in
>> a need for me to do more synchronisation at a lower level.
>>
>> This is more complicated than locking, because where we update more
>> than one thing at a time we also have to deal with recovery (e.g.
>> manila crashed halfway through updating something in ceph and now I'm
>> recovering it), especially whether the places we do recovery will be
>> called concurrently or not.
>>
>> My very favourite answer here would be a pointer to some
>> documentation, but I'm guessing much of this stuff is still at a "word of
>> mouth" stage.
>>
>> John
>>
>> > On Fri, Mar 4, 2016 at 9:06 PM, John Spray <jsp...@redhat.com> wrote:
>> >> Hi,
>> >>
>> >> What expectations should driver authors have about multiple instances
>> >> of the driver being instantiated within different instances of
>> >> manila-share?
>> >>
>> >> For example, should I assume that when one instance of a driver is
>> >> having ensure_share called during startup, another instance of the
>> >> driver might be going through the same process on the same share at
>> >> the same time?  Are there any rules at all?
>> >>
>> >> Thanks,
>> >> John
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > --
>> > Email:
>> > shin...@linux.com
>> > GitHub:
>> > shinobu-x
>> > Blog:
>> > Life with Distributed Computational System based on OpenSource
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com
> vponomar...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Concurrent execution of drivers

2016-03-04 Thread John Spray
On Fri, Mar 4, 2016 at 12:11 PM, Shinobu Kinjo <shinobu...@gmail.com> wrote:
> What are you facing?

In this particular instance, I'm dealing with a case where we may add
some metadata in ceph that will get updated by the driver, and I need
to know how I'm going to be called.  I need to know whether e.g. I can
expect that ensure_share will only be called once at a time per share,
or whether it might be called multiple times in parallel, resulting in
a need for me to do more synchronisation at a lower level.

This is more complicated than locking, because where we update more
than one thing at a time we also have to deal with recovery (e.g.
manila crashed halfway through updating something in ceph and now I'm
recovering it), especially whether the places we do recovery will be
called concurrently or not.

My very favourite answer here would be a pointer to some
documentation, but I'm guessing much of this stuff is still at a "word of
mouth" stage.

John

> On Fri, Mar 4, 2016 at 9:06 PM, John Spray <jsp...@redhat.com> wrote:
>> Hi,
>>
>> What expectations should driver authors have about multiple instances
>> of the driver being instantiated within different instances of
>> manila-share?
>>
>> For example, should I assume that when one instance of a driver is
>> having ensure_share called during startup, another instance of the
>> driver might be going through the same process on the same share at
>> the same time?  Are there any rules at all?
>>
>> Thanks,
>> John
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Email:
> shin...@linux.com
> GitHub:
> shinobu-x
> Blog:
> Life with Distributed Computational System based on OpenSource
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Concurrent execution of drivers

2016-03-04 Thread John Spray
Hi,

What expectations should driver authors have about multiple instances
of the driver being instantiated within different instances of
manila-share?

For example, should I assume that when one instance of a driver is
having ensure_share called during startup, another instance of the
driver might be going through the same process on the same share at
the same time?  Are there any rules at all?

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] ALLOWED_EXTRA_MISSING in cover.sh

2016-02-11 Thread John Spray
On Wed, Feb 10, 2016 at 6:39 PM, Valeriy Ponomaryov
<vponomar...@mirantis.com> wrote:
> Hello, John
>
> Note that the number "4" counts "python code blocks", not "python
> code lines". So you could have an uncovered log message that consists of
> 100 lines, but it would be counted as just 1.

Ah, good to know.

> Where do "we" have a requirement that new drivers have 90% unit test coverage?

http://docs.openstack.org/developer/manila/devref/driver_requirements.html#unit-tests

> Also, the Manila CI coverage job is non-voting, so you are not blocked by it.
>
> On Wed, Feb 10, 2016 at 8:30 PM, Knight, Clinton <clinton.kni...@netapp.com>
> wrote:
>>
>> Hi, John.  This is but one reason the coverage job doesn't vote; it has
>> other known issues.  It is primarily a convenience tool that lets core
>> reviewers know if they should look more deeply into unit test coverage.
>> For a new driver such as yours, I typically pull the code and check
>> coverage for each new file in PyCharm rather than relying on the coverage
>> job.  Feel free to propose enhancements to the job, though.
>>
>> Clinton
>>
>>
>> On 2/10/16, 1:02 PM, "John Spray" <jsp...@redhat.com> wrote:
>>
>> >Hi,
>> >
>> >I noticed that the coverage script is enforcing a hard limit of 4 on
>> >the number of extra missing lines introduced.  We have a requirement
>> >that new drivers have 90% unit test coverage, which the ceph driver
>> >meets[1], but it's tripping up on that absolute 4 line limit.
>> >
>> >What do folks think about tweaking the script to do a different
>> >calculation, like identifying new files and permitting 10% of the line
>> >count of the new files to be missed?  Otherwise I think the 90% target
>> >is going to continually conflict with the manila-coverage CI task.
>> >
>> >Cheers,
>> >John
>> >
>> >1. http://logs.openstack.org/11/270211/19/check/manila-coverage/47b79d2/cover/manila_share_drivers_cephfs_py.html
>> >2. http://logs.openstack.org/11/270211/19/check/manila-coverage/47b79d2/console.html
>> >
>> >__
>> >OpenStack Development Mailing List (not for usage questions)
>> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com
> vponomar...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] ALLOWED_EXTRA_MISSING in cover.sh

2016-02-10 Thread John Spray
Hi,

I noticed that the coverage script is enforcing a hard limit of 4 on
the number of extra missing lines introduced.  We have a requirement
that new drivers have 90% unit test coverage, which the ceph driver
meets[1], but it's tripping up on that absolute 4 line limit.

What do folks think about tweaking the script to do a different
calculation, like identifying new files and permitting 10% of the line
count of the new files to be missed?  Otherwise I think the 90% target
is going to continually conflict with the manila-coverage CI task.
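
To make the suggestion concrete, the check could look something like the
sketch below.  This is a rough illustration, not a patch to cover.sh: it
assumes the job already produces a Cobertura-style coverage.xml (as
"coverage xml" does), that the paths in that file match the paths git
reports, and that newly added files can be listed with git.

#!/usr/bin/env python
# Rough sketch of the proposed check, not a patch to cover.sh: require each
# newly added .py file in a change to have at least 90% line coverage,
# instead of a fixed ALLOWED_EXTRA_MISSING count.
import subprocess
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 0.90

def new_files(base="origin/master"):
    """List files added by this change, according to git."""
    out = subprocess.check_output(
        ["git", "diff", "--name-only", "--diff-filter=A", base + "...HEAD"])
    return set(l.strip() for l in out.decode().splitlines()
               if l.strip().endswith(".py"))

def per_file_line_rates(xml_path="coverage.xml"):
    """Map source file path -> line-rate from Cobertura-style XML."""
    tree = ET.parse(xml_path)
    return {cls.get("filename"): float(cls.get("line-rate"))
            for cls in tree.iter("class")}

def main():
    rates = per_file_line_rates()
    failures = [(path, rates[path]) for path in sorted(new_files())
                if path in rates and rates[path] < THRESHOLD]
    for path, rate in failures:
        print("New file %s has %.0f%% coverage (< %.0f%%)"
              % (path, rate * 100, THRESHOLD * 100))
    sys.exit(1 if failures else 0)

if __name__ == "__main__":
    main()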

Cheers,
John

1. 
http://logs.openstack.org/11/270211/19/check/manila-coverage/47b79d2/cover/manila_share_drivers_cephfs_py.html
2. 
http://logs.openstack.org/11/270211/19/check/manila-coverage/47b79d2/console.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Tempest scenario tests vs. gate condition

2015-12-07 Thread John Spray
On Mon, Dec 7, 2015 at 6:14 PM, Ben Swartzlander <b...@swartzlander.org> wrote:
> On 12/03/2015 06:38 AM, John Spray wrote:
>>
>> Hi,
>>
>> We're working towards getting the devstack/CI parts ready to test the
>> forthcoming ceph native driver, and have a question: will a driver be
>> accepted into the tree if it has CI for running the api/ tempest
>> tests, but not the scenario/ tempest tests?
>>
>> The context is that because the scenario tests require a client to
>> mount the shares, that's a bit more work for a new protocol such as
>> cephfs.  Naturally we intend to get that done, but would like to
>> know if it will be a blocker in getting the driver in tree.
>
>
> This is not currently a requirement for any of the existing 3rd party
> drivers so it wouldn't be fair to enforce it on cephfs.
>
> It *is* something we would like to require at some point, because just
> running the API tests doesn't really ensure that the driver isn't broken, but
> I'm trying to be sensitive to vendors' limited resources and to add CI
> requirements gradually. The fact that the current generic driver is unstable
> in the gate is a much more serious issue than the fact that some drivers
> don't pass scenario tests.

Understood, thanks to you and Valeriy for the clarification.

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Tempest scenario tests vs. gate condition

2015-12-03 Thread John Spray
Hi,

We're working towards getting the devstack/CI parts ready to test the
forthcoming ceph native driver, and have a question: will a driver be
accepted into the tree if it has CI for running the api/ tempest
tests, but not the scenario/ tempest tests?

The context is that because the scenario tests require a client to
mount the shares, that's a bit more work for a new protocol such as
cephfs.  Naturally we intend to get that done, but would like to
know if it will be a blocker in getting the driver in tree.

Many thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Attach/detach semantics

2015-11-18 Thread John Spray
On Wed, Nov 18, 2015 at 4:33 AM, Ben Swartzlander <b...@swartzlander.org> wrote:
> On 11/17/2015 10:02 AM, John Spray wrote:
>>
>> Hi all,
>>
>> As you may know, there is ongoing work on a spec for Nova to define an
>> "attach/detach" API for tighter integration with Manila.
>>
>> The concept here is that this mechanism will be needed to implement
>> hypervisor mediated FS access using vsock, but that the mechanism
>> should also be applicable more generally to an "attach" concept for
>> filesystems accessed over IP networks (like existing NFS filers).
>>
>> In the hypervisor-mediated case, attach would involve the hypervisor
>> host connecting as a filesystem client and then re-exporting to the
>> guest via a local address.  We think this would apply to
>> driver_handles_share_servers type drivers that support share networks,
>> by mapping the attach/detach share API to attaching/detaching the
>> share network from the guest VM.
>>
>> Does that make sense to people maintaining this type of driver?  For
>> example, for the netapp and generic drivers, is it reasonable to
>> expose nova attach/detach APIs that attach and detach the associated
>> share network?
>
>
> I'm not sure this proposal makes sense. I would like the share attach/detach
> semantics to be the same for all types of shares, regardless of the driver
> type.
>
> The main challenge with attaching to shares on share servers (with share
> networks) is that there may not exist a network route from the hypervisor to
> the share server, because share servers are only required to be accessible
> from the share network from which they are created. This has been a known
> problem since Liberty because this behaviour prevents migration from
> working, therefore we're proposing a mechanism for share-server drivers to
> provide admin-network-facing interfaces for all share servers. This same
> mechanism should be usable by Nova when doing share attach/detach. Nova
> would just need to list the export locations using an admin-context to see
> the admin-facing export location that it should use.

For these drivers, we're not proposing connecting to them from the
hypervisor -- we would still be connecting directly from the guest via
the share network.

The change would be from the existing workflow:
 * Create share
 * Attach share network to guest VM (need to look up network info,
talk to neutron API)
 * Add IP access permission for the guest to access the share (need to
know IP of the guest)
 * Mount from guest VM

To a new workflow:
 * Create share
 * Attach share to guest (knowing only share ID and guest instance ID)
 * Mount from guest VM

The idea is to abstract the networking part away, so that the user
just has to say "I want to be able to mount share X from guest Y",
without knowing about the networking stuff going on under the hood.
While this is partly because it's slicker, it's mainly so that
applications can use IP-networking shares interchangeably with future
hypervisor-mediated shares: they call "attach" and don't have to worry
about whether that's a share-network operation or a
hypervisor-twiddling operation under the hood.
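
Purely to illustrate the difference from the user's point of view, here is a
hypothetical client-side sketch of the two workflows.  None of the calls for
the new workflow exist today (attach_share in particular is just a stand-in
for whatever API the Nova spec ends up defining), and the client objects are
toy stubs rather than python-novaclient/python-manilaclient.

# Hypothetical sketch only: the client objects below are toy stand-ins, not
# python-manilaclient/python-novaclient APIs.  In particular attach_share()
# does not exist today; it represents whatever the Nova spec ends up defining.

class StubClient(object):
    """Records calls so the sketch runs without any real services."""
    def __getattr__(self, name):
        def call(*args, **kwargs):
            print("%s args=%s kwargs=%s" % (name, args, kwargs))
            return {"id": "fake-id"}
        return call

manila, neutron, nova = StubClient(), StubClient(), StubClient()

def existing_workflow(share_network_id, guest_id, guest_ip):
    # The caller has to understand the networking: create the share on a
    # share network, wire the guest onto a network that can reach the share
    # server, then authorize the guest's IP before mounting in the guest.
    share = manila.create_share(proto="NFS", size=10,
                                share_network=share_network_id)
    port = neutron.create_port(network_id="net-reachable-from-share-server")
    nova.interface_attach(guest_id, port_id=port["id"])
    manila.access_allow(share["id"], access_type="ip", access_to=guest_ip)
    return share

def proposed_workflow(guest_id):
    # The caller only names the share and the instance; the share-network
    # (or vsock) plumbing happens under the hood.
    share = manila.create_share(proto="NFS", size=10)
    nova.attach_share(guest_id, share["id"])
    return share

if __name__ == "__main__":
    existing_workflow("share-net-1", "guest-1", "192.0.2.10")
    proposed_workflow("guest-1")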

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Attach/detach semantics

2015-11-17 Thread John Spray
Hi all,

As you may know, there is ongoing work on a spec for Nova to define an
"attach/detach" API for tighter integration with Manila.

The concept here is that this mechanism will be needed to implement
hypervisor mediated FS access using vsock, but that the mechanism
should also be applicable more generally to an "attach" concept for
filesystems accessed over IP networks (like existing NFS filers).

In the hypervisor-mediated case, attach would involve the hypervisor
host connecting as a filesystem client and then re-exporting to the
guest via a local address.  We think this would apply to
driver_handles_share_servers type drivers that support share networks,
by mapping the attach/detach share API to attaching/detaching the
share network from the guest VM.

Does that make sense to people maintaining this type of driver?  For
example, for the netapp and generic drivers, is it reasonable to
expose nova attach/detach APIs that attach and detach the associated
share network?

I've CC'd Luis who is working on the Nova spec.

Regards,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Share allow/deny by shared secret

2015-10-27 Thread John Spray
On Tue, Oct 27, 2015 at 6:39 AM, Ben Swartzlander  wrote:
>> The NFS-style process that Manila expects is:
>> Caller> I know a credential (IP address, x509 certificate) and I want
>> you to authorize it
>> Driver> OK, I have stored that credential and you can now use it to
>> access the share.
>
>
> This is accurate. Manila presumes the existence of an external
> authentication mechanism which servers can use to identify clients, so that
> Manila's role can be limited to telling the server which clients should have
> access.
>
>> The Ceph native process is more like:
>> Caller> I want to access this share
>> Driver> OK, I have generated a credential for you, here it is, you can
>> now use it to access the share
>>
>> The important distinction is where the credential comes from.  Manila
>> expects it to come from the caller, Ceph expects to generate it for
>> the caller.
>
>
> The problem with the above statement is that you don't define who "I" am.

I think my "I" concept here is the all-powerful (within his own
tenancy) manila API consumer who is taking on the responsibility to
know about the external entities (like guests, groups of guests).

> The Manila API client is all-powerful when it comes to modifying access
> rules, insofar as a tenant has the power to add/remove any rule from any
> share that that tenant owns. Obviously if you have access to modify the
> access rules then you have de-facto access to all the shares. The purpose of
> the access-allow/deny APIs is to delegate access to shares to identities
> that exist outside of Manila, such as to IP addresses, to users, or to x509
> principals. These things need to be named somehow so that the file system
> server, the client, and manila can all talk about the same set of
> identities.

The thing making delegation of access control awkward here is that
Ceph is both the provider of the storage and also the provider of the
authentication: there is no external entity (an IP network, an x509
CA) that we defer to.  If the authentication was to an external
system, it would make sense to say that clients should first configure
their identity externally, and then call into Manila to authorize that
identity.  When the system of identities is built into the storage
system, having the API consumer call out to create their identity
would leave them talking directly to Ceph, in addition to talking to
Ceph via Manila.

I should point out that it would also be possible to extend the Ceph
interface to allow users to pass in their own generated key.  In this
case we could have API consumers generate their own secrets, and pass
them into ceph.  So your access id would be something like
"alice/a7df6d76d57f".  The Ceph driver would then have to create the
alice identity in ceph if it didn't already exist, and throw an error
if the existing alice identity wasn't already using that particular
shared secret.  This isn't ideal from a security POV, because it
relies on Manila API consumers to have a suitable source of
randomness, and it's less than ideal for usability because they have to
know the expected size/format of the secret.

The usability part is a big concern for me: there's a huge leap
between "pick an ID and pass it into authorize" and "pick an ID, and
generate a cryptographically secure random number N bytes long,
concatenate it with the ID like so and pass it into authorize".
Either we rely on users to get it right (a bit optimistic) or we have
to give them a library for generating it (which isn't very RESTful).
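
Just to illustrate that leap, the "bring your own secret" variant pushes
something like the following onto every API consumer.  The 6-byte length and
hex encoding below simply mimic the example id above and are assumptions; a
real cephx secret has its own specific format, which is exactly the problem.

# Sketch of what "bring your own secret" would push onto every Manila API
# consumer.  The length and hex encoding mimic the "alice/a7df6d76d57f"
# example above and are assumptions, not the real cephx secret format.
import binascii
import os

def make_access_id(user_id, nbytes=6):
    secret = binascii.hexlify(os.urandom(nbytes)).decode("ascii")
    # The caller would pass "alice/<secret>" to access-allow and separately
    # deliver the secret to the guest that is going to mount the share.
    return "%s/%s" % (user_id, secret)

print(make_access_id("alice"))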

>> To enable us to expose ceph native auth, I propose:
>>   * Add a "key" column to the ShareAccessMapping model
>>   * Enable drivers to optionally populate this from allow() methods
>>   * Expose this to API consumers: right to see a share mapping is the
>> right to see the key.
>>
>> The security model is that the key is a secret, which Manila API users
>> (i.e. administrative functions) are allowed to see, and it is up to
>> them to selectively share the secret with guests.  The reason for
>> giving them allow/deny rather than just having a key per share is so
>> that the administrator can selectively revoke keys.
>
>
> I don't see why the driver should be the place where secrets are generated.
> It seems equally valid for the caller of the Manila API to generate the
> secret himself, and to ask Manila to grant access to a share to anyone
> knowing that secret. This would fit the existing model, and more
> importantly, it would allow granting of shares to multiple users with
> different secrets. I don't see in the above proposal how to grant access to
> a share to both Alice and Bob without telling the same secret to both Alice
> and Bob. The problem that creates is that I can't revoke access to the share
> from Alice without also revoking access from Bob. Maybe I'm misreading what
> you wrote above about key revocation, but it sounds like you have 1 key per
> share, and you can revoke access to each share individually, but 

[openstack-dev] [Manila] Share allow/deny by shared secret

2015-10-21 Thread John Spray
Hi,

(I wanted to put this in an email ahead of Tokyo, where I hope we'll
find time to discuss it.  This is a follow up to
http://osdir.com/ml/openstack-dev/2015-10/msg00381.html)

With the current code, there doesn't appear to be a proper way to
expose Ceph's native authentication system via Manila.  This is
because Ceph generates the shared secret needed to access a share, and
Manila doesn't give us a path to expose such a driver-originated
secret as part of a ShareInstanceMapping.

The NFS-style process that Manila expects is:
Caller> I know a credential (IP address, x509 certificate) and I want
you to authorize it
Driver> OK, I have stored that credential and you can now use it to
access the share.

The Ceph native process is more like:
Caller> I want to access this share
Driver> OK, I have generated a credential for you, here it is, you can
now use it to access the share

The important distinction is where the credential comes from.  Manila
expects it to come from the caller, Ceph expects to generate it for
the caller.

To enable us to expose ceph native auth, I propose:
 * Add a "key" column to the ShareAccessMapping model
 * Enable drivers to optionally populate this from allow() methods
 * Expose this to API consumers: right to see a share mapping is the
right to see the key.

The security model is that the key is a secret, which Manila API users
(i.e. administrative functions) are allowed to see, and it is up to
them to selectively share the secret with guests.  The reason for
giving them allow/deny rather than just having a key per share is so
that the administrator can selectively revoke keys.

The "key" column should be pretty short (255 chars is plenty) -- this
isn't meant for storing big things like PKI certificates, it's
specifically for shared secrets.

I don't know of any other drivers that would use this, but it is a
pretty generic concept in itself: "grant access by a shared key that
the storage system generates".
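
In terms of code shape, the proposal amounts to roughly the sketch below.
The names are illustrative rather than the actual Manila models or driver
interface; the point is only to show where the key would live and who
populates it.

# Hypothetical sketch of the shape of the change, not actual Manila code:
# model, table and method names here are illustrative only.
from sqlalchemy import Column, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class ShareAccessMappingSketch(Base):
    __tablename__ = "share_access_map_sketch"
    id = Column(String(36), primary_key=True)
    share_id = Column(String(36))
    access_type = Column(String(255))   # e.g. "cephx"
    access_to = Column(String(255))     # e.g. "alice"
    # New: short driver-generated secret.  Visible to API users who are
    # allowed to see the mapping; left empty by drivers that do not
    # generate credentials themselves.
    access_key = Column(String(255), nullable=True)

class CephNativeDriverSketch(object):
    def allow_access(self, share, access):
        # A driver like the cephfs one would create the ceph identity here
        # and hand the generated secret back to be stored on the mapping.
        key = self._create_ceph_identity(share, access["access_to"])
        return {"access_key": key}

    def _create_ceph_identity(self, share, entity):
        return "generated-secret-for-%s" % entity  # placeholder only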

Cheers,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-10-07 Thread John Spray
On Tue, Oct 6, 2015 at 11:59 AM, Deepak Shetty  wrote:
>>
>> Currently, as you say, a share is accessible to anyone who knows the
>> auth key (created at the time the share is created).
>>
>> For adding the allow/deny path, I'd simply create and remove new ceph
>> keys for each entity being allowed/denied.
>
>
> Ok, but how does that map to the existing Manila access types (IP, User,
> Cert) ?

None of the above :-)

Compared with certs, the difference with Ceph is that ceph is issuing
credentials, rather than authorizing existing credentials[1]. So
rather than the tenant saying "Here's a certificate that Alice has
generated and will use to access the filesystem, please authorize it",
the tenant would say "Please authorize someone called Bob to access
the share, and let me know the key he should use to prove he is Bob".

As far as I can tell, we can't currently expose that in Manila: the
missing piece is a way to tag that generated key onto a
ShareInstanceAccessMapping, so that somebody with the right to read
from the Manila API can go read Bob's key, and give it to Bob so that
he can mount the filesystem.

That's why the first-cut compromise is to create a single auth
identity for accessing the share, and expose the key as part of the
share's export location.  It's then the user application's job to
share out that key to whatever hosts need to access it.  The lack of
Manila-mediated 'allow' is annoying but not intrinsically insecure.
The security problem with this approach is that we're not providing a
way to revoke/rotate the key without destroying the share.

So anyway.  This might be a good topic for a conversation at the
summit (or catch me up on the list if it's already been discussed in
depth) -- should drivers be allowed to publish generated
authentication tokens as part of the API for allowing access to a
share?

John


1. Aside: We *could* do a certificate-like model if it was assumed
that the Manila API consumer knew how to go and talk to Ceph out of
band to generate their auth identity.  That way, they could go and
create their auth identity in Ceph, and then ask Manila to grant that
identity access to the share.  However, it would be pointless, because
in ceph, anyone who can create an identity can also set the
capabilities of it (i.e. if they can talk directly to ceph, they don't
need Manila's permission to access the share).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-10-01 Thread John Spray
On Thu, Oct 1, 2015 at 8:36 AM, Deepak Shetty <dpkshe...@gmail.com> wrote:
>
>
> On Thu, Sep 24, 2015 at 7:19 PM, John Spray <jsp...@redhat.com> wrote:
>>
>> Hi all,
>>
>> I've recently started work on a CephFS driver for Manila.  The (early)
>> code is here:
>> https://github.com/openstack/manila/compare/master...jcsp:ceph
>>
>
> 1) README says driver_handles_share_servers=True, but code says
>
> + if share_server is not None:
> + log.warning("You specified a share server, but this driver doesn't use
> that")

The warning is just for my benefit, so that I could see which bits of
the API were pushing a share server in.  This driver doesn't care
about the concept of a share server, so I'm really just ignoring it
for the moment.

> 2) Would it good to make the data_isolated option controllable from
> manila.conf config param ?

That's the intention.

> 3) CephFSVolumeClient - it sounds more like CephFSShareClient; any reason
> you chose the word 'Volume' instead of 'Share'? Volumes remind me of RBD
> volumes, hence the Q

The terminology here is not standard across the industry, so there's
not really any right term.  For example, in docker, a
container-exposed filesystem is a "volume".  I generally use volume to
refer to a piece of storage that we're carving out, and share to refer
to the act of making that visible to someone else.  If I had been
writing Manila originally I wouldn't have called shares shares :-)

The naming in CephFSVolumeClient will not be the same as Manila's,
because it is not intended to be Manila-only code, though that's the
first use for it.

> 4) IIUC there is no need to do access_allow/deny in the cephfs use case? It
> looks like after create_share you just put the cephx keyring on the client
> and it can access the share, as long as the client has network access to
> the ceph cluster. The doc says you don't use the IP-address-based access
> method, so which method is used when going through the access_allow flow?

Currently, as you say, a share is accessible to anyone who knows the
auth key (created at the time the share is created).

For adding the allow/deny path, I'd simply create and remove new ceph
keys for each entity being allowed/denied.
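
As a rough illustration of what allow/deny would do under the hood, see the
sketch below.  It shells out to the ceph CLI; the path-restricted MDS cap
assumes a Ceph build that has the path restriction feature discussed
elsewhere in this thread, and "cephfs_data" is just an example data pool
name.

# Rough sketch of per-entity allow/deny for a cephfs share by creating and
# deleting a cephx identity per allowed entity.  The "path=" MDS cap assumes
# a Ceph build with the path-restriction feature discussed in this thread,
# and "cephfs_data" is just an example data pool name.
import subprocess

def allow(share_path, entity):
    name = "client.manila.%s" % entity
    out = subprocess.check_output([
        "ceph", "auth", "get-or-create", name,
        "mon", "allow r",
        "mds", "allow rw path=%s" % share_path,
        "osd", "allow rw pool=cephfs_data",
    ])
    return out  # keyring output contains the generated key for the caller

def deny(entity):
    subprocess.check_call(["ceph", "auth", "del", "client.manila.%s" % entity])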

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-10-01 Thread John Spray
On Thu, Oct 1, 2015 at 12:58 AM, Shinobu Kinjo <ski...@redhat.com> wrote:
> Is there any plan to merge those branches to master?
> Or is there anything that still needs to be done?

As I said in the original email, this is unfinished code, and my
message was just to let people know this was underway so that the
patch didn't come as a complete surprise.

John

>
> Shinobu
>
> - Original Message -
> From: "Ben Swartzlander" <b...@swartzlander.org>
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Sent: Saturday, September 26, 2015 9:27:58 AM
> Subject: Re: [openstack-dev] [Manila] CephFS native driver
>
> On 09/24/2015 09:49 AM, John Spray wrote:
>> Hi all,
>>
>> I've recently started work on a CephFS driver for Manila.  The (early)
>> code is here:
>> https://github.com/openstack/manila/compare/master...jcsp:ceph
>
> Awesome! This is something that's been talked about for quite some time
> and I'm pleased to see progress on making it a reality.
>
>> It requires a special branch of ceph which is here:
>> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>>
>> This isn't done yet (hence this email rather than a gerrit review),
>> but I wanted to give everyone a heads up that this work is going on,
>> and a brief status update.
>>
>> This is the 'native' driver in the sense that clients use the CephFS
>> client to access the share, rather than re-exporting it over NFS.  The
>> idea is that this driver will be useful for anyone who has such
>> clients, as well as acting as the basis for a later NFS-enabled
>> driver.
>
> This makes sense, but have you given thought to the optimal way to
> provide NFS semantics for those who prefer that? Obviously you can pair
> the existing Manila Generic driver with Cinder running on ceph, but I
> wonder how that would compare to some kind of ganesha bridge that
> translates between NFS and cephfs. Is that something you've looked into?
>
>> The export location returned by the driver gives the client the Ceph
>> mon IP addresses, the share path, and an authentication token.  This
>> authentication token is what permits the clients access (Ceph does not
>> do access control based on IP addresses).
>>
>> It's just capable of the minimal functionality of creating and
>> deleting shares so far, but I will shortly be looking into hooking up
>> snapshots/consistency groups, albeit for read-only snapshots only
>> (cephfs does not have writeable snapshots).  Currently deletion is
>> just a move into a 'trash' directory, the idea is to add something
>> later that cleans this up in the background: the downside to the
>> "shares are just directories" approach is that clearing them up has a
>> "rm -rf" cost!
>
> All snapshots are read-only... The question is whether you can take a
> snapshot and clone it into something that's writable. We're looking at
> allowing for different kinds of snapshot semantics in Manila for Mitaka.
> Even if there's no create-share-from-snapshot functionality a readable
> snapshot is still useful and something we'd like to enable.
>
> The deletion issue sounds like a common one, although if you don't have
> the thing that cleans them up in the background yet I hope someone is
> working on that.
>
>> A note on the implementation: cephfs recently got the ability (not yet
>> in master) to restrict client metadata access based on path, so this
>> driver is simply creating shares by creating directories within a
>> cluster-wide filesystem, and issuing credentials to clients that
>> restrict them to their own directory.  They then mount that subpath,
>> so that from the client's point of view it's like having their own
>> filesystem.  We also have a quota mechanism that I'll hook in later to
>> enforce the share size.
>
> So quotas aren't enforced yet? That seems like a serious issue for any
> operator except those that want to support "infinite" size shares. I
> hope that gets fixed soon as well.
>
>> Currently the security here requires clients (i.e. the ceph-fuse code
>> on client hosts, not the userspace applications) to be trusted, as
>> quotas are enforced on the client side.  The OSD access control
>> operates on a per-pool basis, and creating a separate pool for each
>> share is inefficient.  In the future it is expected that CephFS will
>> be extended to support file layouts that use RADOS namespaces, which
>> are cheap, such that we can issue a new namespace to each share and
>> enforce the separation between shares on the OSD

Re: [openstack-dev] [Manila] CephFS native driver

2015-10-01 Thread John Spray
On Thu, Oct 1, 2015 at 8:26 AM, Deepak Shetty  wrote:
>> > I think it will be important to document all of these limitations. I
>> > wouldn't let them stop you from getting the driver done, but if I was a
>> > deployer I'd want to know about these details.
>>
>> Yes, definitely.  I'm also adding an optional flag when creating
>> volumes to give them their own RADOS pool for data, which would make
>> the level of isolation much stronger, at the cost of using more
>> resources per volume.  Creating separate pools has a substantial
>> overhead, but in sites with a relatively small number of shared
>> filesystems it could be desirable.  We may also want to look into
>> making this a layered thing with a pool per tenant, and then
>> less-isolated shares within that pool.  (pool in this paragraph means
>> the ceph concept, not the manila concept).
>>
>> At some stage I would like to add the ability to have physically
>> separate filesystems within ceph (i.e. filesystems don't share the
>> same MDSs), which would add a second optional level of isolation for
>> metadata as well as data
>>
>> Overall though, there's going to be sort of a race here between the
>> native ceph multitenancy capability, and the use of NFS to provide
>> similar levels of isolation.
>
>
> Thanks for the explanation, this helps understand things nicely, tho' I have
> a small doubt. When you say separate filesystems within ceph cluster, you
> meant the same as mapping them to different RADOS namespaces, and each
> namespace will have its own MDS, thus providing additional isolation on top of
> having 1 pool per tenant ?

Physically separate filesystems would be using separate MDSs, and
separate RADOS pools.  For ultra isolation, the RADOS pools would also
be configured to map to different OSDs.

Separate RADOS namespaces do not provide physical separation (multiple
namespaces exist within one pool, hence on the same OSDs), but they
would provide server-side security for preventing clients seeing into
one another's data pools.  The terminology is confusing because a RADOS
namespace is a ceph-specific concept distinct from filesystem
namespaces.

CephFS doesn't currently have either the "separate MDSs" isolation, or
the support for using RADOS namespaces in layouts.  They're both
pretty well understood and not massively complex to implement though,
so it's pretty much just a matter of time.

This is all very ceph-implementation-specific stuff, so apologies if
it's not crystal clear at this stage.


>>
>>
>> >> However, for many people the ultimate access control solution will be
>> >> to use a NFS gateway in front of their CephFS filesystem: it is
>> >> expected that an NFS-enabled cephfs driver will follow this native
>> >> driver in the not-too-distant future.
>> >
>> >
>> > Okay this answers part of my above question, but how to you expect the
>> > NFS
>> > gateway to work? Ganesha has been used successfully in the past.
>>
>> Ganesha is the preferred server right now.  There is probably going to
>> need to be some level of experimentation to confirm that it's
>> working and performing sufficiently well compared with knfs on top of
>> the cephfs kernel client.  Personally though, I have a strong
>> preference for userspace solutions where they work well enough.
>>
>> The broader question is exactly where in the system the NFS gateways
>> run, and how they get configured -- that's the very next conversation
>> to have after the guts of this driver are done.  We are interested in
>> approaches that bring the CephFS protocol as close to the guests as
>> possible before bridging it to NFS, possibly even running ganesha
>> instances locally on the hypervisors, but I don't think we're ready to
>> draw a clear picture of that just yet, and I suspect we will end up
>> wanting to enable multiple methods, including the lowest common
>> denominator "run a VM with a ceph client and ganesha" case.
>
>
> By the lowest denominator case, you mean the manila concept
> of running the share server inside a service VM or something else ?

Yes, that's exactly what I mean.  To be clear, by "lowest common
denominator" I don't mean least good, I mean most generic.

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-09-26 Thread John Spray
On Sat, Sep 26, 2015 at 12:02 PM, John Spray <jsp...@redhat.com> wrote:
> On Sat, Sep 26, 2015 at 1:27 AM, Ben Swartzlander <b...@swartzlander.org> 
> wrote:
>> All snapshots are read-only... The question is whether you can take a
>> snapshot and clone it into something that's writable. We're looking at
>> allowing for different kinds of snapshot semantics in Manila for Mitaka.
>> Even if there's no create-share-from-snapshot functionality a readable
>> snapshot is still useful and something we'd like to enable.
>
> Enabling creation of snapshots is pretty trivial, the slightly more
> interesting part will be accessing them.  CephFS doesn't provide a
> rollback mechanism, so

Oops, missed a bit.

Looking again at the level of support for snapshots in Manila's
current API, it seems like we may not be in such bad shape anyway.
Yes, the cloning case is what I'm thinking about when I talk about
writable snapshots: currently clone from snapshot is probably going to
look like a "cp -r", unfortunately.  However, if someone could ask for
a read-only clone, then we would be able to give them direct access to
the snapshot itself.  Haven't fully looked into the snapshot handling
in Manila so let me know if any of this doesn't make sense.

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-09-26 Thread John Spray
On Sat, Sep 26, 2015 at 1:27 AM, Ben Swartzlander <b...@swartzlander.org> wrote:
> On 09/24/2015 09:49 AM, John Spray wrote:
>>
>> Hi all,
>>
>> I've recently started work on a CephFS driver for Manila.  The (early)
>> code is here:
>> https://github.com/openstack/manila/compare/master...jcsp:ceph
>
>
> Awesome! This is something that's been talked about for quite some time and
> I'm pleased to see progress on making it a reality.
>
>> It requires a special branch of ceph which is here:
>> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>>
>> This isn't done yet (hence this email rather than a gerrit review),
>> but I wanted to give everyone a heads up that this work is going on,
>> and a brief status update.
>>
>> This is the 'native' driver in the sense that clients use the CephFS
>> client to access the share, rather than re-exporting it over NFS.  The
>> idea is that this driver will be useful for anyone who has such
>> clients, as well as acting as the basis for a later NFS-enabled
>> driver.
>
>
> This makes sense, but have you given thought to the optimal way to provide
> NFS semantics for those who prefer that? Obviously you can pair the existing
> Manila Generic driver with Cinder running on ceph, but I wonder how that
> would compare to some kind of ganesha bridge that translates between NFS and
> cephfs. Is that something you've looked into?

The Ceph FSAL in ganesha already exists; some work is going on at the
moment to get it more regularly built and tested.  There's some
separate design work to be done to decide exactly how that part of
things is going to work, including discussing with all the right
people, but I didn't want to let that hold up getting the initial
native driver out there.

>> The export location returned by the driver gives the client the Ceph
>> mon IP addresses, the share path, and an authentication token.  This
>> authentication token is what permits the clients access (Ceph does not
>> do access control based on IP addresses).
>>
>> It's just capable of the minimal functionality of creating and
>> deleting shares so far, but I will shortly be looking into hooking up
>> snapshots/consistency groups, albeit for read-only snapshots only
>> (cephfs does not have writeable snapshots).  Currently deletion is
>> just a move into a 'trash' directory, the idea is to add something
>> later that cleans this up in the background: the downside to the
>> "shares are just directories" approach is that clearing them up has a
>> "rm -rf" cost!
>
>
> All snapshots are read-only... The question is whether you can take a
> snapshot and clone it into something that's writable. We're looking at
> allowing for different kinds of snapshot semantics in Manila for Mitaka.
> Even if there's no create-share-from-snapshot functionality a readable
> snapshot is still useful and something we'd like to enable.

Enabling creation of snapshots is pretty trivial, the slightly more
interesting part will be accessing them.  CephFS doesn't provide a
rollback mechanism, so

> The deletion issue sounds like a common one, although if you don't have the
> thing that cleans them up in the background yet I hope someone is working on
> that.

Yeah, that would be me -- the most important sentence in my original
email was probably "this isn't done yet" :-)

>> A note on the implementation: cephfs recently got the ability (not yet
>> in master) to restrict client metadata access based on path, so this
>> driver is simply creating shares by creating directories within a
>> cluster-wide filesystem, and issuing credentials to clients that
>> restrict them to their own directory.  They then mount that subpath,
>> so that from the client's point of view it's like having their own
>> filesystem.  We also have a quota mechanism that I'll hook in later to
>> enforce the share size.
>
>
> So quotas aren't enforced yet? That seems like a serious issue for any
> operator except those that want to support "infinite" size shares. I hope
> that gets fixed soon as well.

Same again, just not done yet.  Well, actually since I wrote the
original email I added quota support to my branch, so never mind!

>> Currently the security here requires clients (i.e. the ceph-fuse code
>> on client hosts, not the userspace applications) to be trusted, as
>> quotas are enforced on the client side.  The OSD access control
>> operates on a per-pool basis, and creating a separate pool for each
>> share is inefficient.  In the future it is expected that CephFS will
>> be extended to support file layouts that use RADOS namespaces, which

Re: [openstack-dev] [Manila] CephFS native driver

2015-09-25 Thread John Spray
On Fri, Sep 25, 2015 at 8:04 AM, Shinobu Kinjo <ski...@redhat.com> wrote:
> So here are questions from my side.
> Just question.
>
>
>  1.What is the biggest advantage compared to others such as RBD?
>   We should be able to implement what you are going to do in an
>   existing module, shouldn't we?

I guess you mean compared to using a local filesystem on top of RBD,
and exporting it over NFS?  The main distinction here is that for
native CephFS clients, they get a shared filesystem where all the
clients can talk to all the Ceph OSDs directly, and avoid the
potential bottleneck of an NFS->local fs->RBD server.

Workloads requiring a local filesystem would probably continue to map
a cinder block device and use that.  The Manila driver is intended for
use cases that require a shared filesystem.

>  2.What are you going to focus on with a new implementation?
>   It seems to be to use NFS in front of that implementation
>   more transparently.

The goal here is to make cephfs accessible to people by making it easy
to provision it for their applications, just like Manila in general.
The motivation for putting an NFS layer in front of CephFS is to make
it easier for people to adopt, because they won't need to install any
ceph-specific code in their guests.  It will also be easier to
support, because any ceph client bugfixes would not need to be
installed within guests (if we assume existing nfs clients are bug
free :-))

>  3.What are you thinking of for integration with OpenStack using
>   a new implementation?
>   Since it's going to be a new kind of thing, there should be a
>   different architecture.

Not sure I understand this question?

>  4.Is this implementation intended for OneStack integration
>   mainly?

Nope (I had not heard of onestack before).

> Since the velocity of OpenStack feature expansion is much greater than
> it used to be, it's much more important to think about performance.

> Is a new implementation also going to improve Ceph integration
> with OpenStack system?

This piece of work is specifically about Manila; general improvements
in Ceph integration would be a different topic.

Thanks,
John

>
> Thank you so much for your explanation in advance.
>
> Shinobu
>
> - Original Message -
> From: "John Spray" <jsp...@redhat.com>
> To: openstack-dev@lists.openstack.org, "Ceph Development" 
> <ceph-de...@vger.kernel.org>
> Sent: Thursday, September 24, 2015 10:49:17 PM
> Subject: [openstack-dev] [Manila] CephFS native driver
>
> Hi all,
>
> I've recently started work on a CephFS driver for Manila.  The (early)
> code is here:
> https://github.com/openstack/manila/compare/master...jcsp:ceph
>
> It requires a special branch of ceph which is here:
> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>
> This isn't done yet (hence this email rather than a gerrit review),
> but I wanted to give everyone a heads up that this work is going on,
> and a brief status update.
>
> This is the 'native' driver in the sense that clients use the CephFS
> client to access the share, rather than re-exporting it over NFS.  The
> idea is that this driver will be useful for anyone who has such
> clients, as well as acting as the basis for a later NFS-enabled
> driver.
>
> The export location returned by the driver gives the client the Ceph
> mon IP addresses, the share path, and an authentication token.  This
> authentication token is what permits the clients access (Ceph does not
> do access control based on IP addresses).
>
> It's just capable of the minimal functionality of creating and
> deleting shares so far, but I will shortly be looking into hooking up
> snapshots/consistency groups, albeit for read-only snapshots only
> (cephfs does not have writeable snapshots).  Currently deletion is
> just a move into a 'trash' directory, the idea is to add something
> later that cleans this up in the background: the downside to the
> "shares are just directories" approach is that clearing them up has a
> "rm -rf" cost!
>
> A note on the implementation: cephfs recently got the ability (not yet
> in master) to restrict client metadata access based on path, so this
> driver is simply creating shares by creating directories within a
> cluster-wide filesystem, and issuing credentials to clients that
> restrict them to their own directory.  They then mount that subpath,
> so that from the client's point of view it's like having their own
> filesystem.  We also have a quota mechanism that I'll hook in later to
> enforce the share size.
>
> Currently the security here requires clients (i.e. the ceph-fuse code
> on client hosts, not the userspace applications) to be trusted, as
> quotas are enforced on the client side.  The O

Re: [openstack-dev] [Manila] CephFS native driver

2015-09-25 Thread John Spray
On Fri, Sep 25, 2015 at 10:16 AM, Shinobu Kinjo <ski...@redhat.com> wrote:
> Thank you for your reply.
>
>> The main distinction here is that for
>> native CephFS clients, they get a shared filesystem where all the
>> clients can talk to all the Ceph OSDs directly, and avoid the
>> potential bottleneck of an NFS->local fs->RBD server.
>
> As you know, each path from clients to RADOS is:
>
>  1) CephFS
>   [Apps] -> [VFS] -> [Kernel Driver] -> [Ceph-Kernel Client]
>    -> [MON], [MDS], [OSD]
>
>  2) RBD
>   [Apps] -> [VFS] -> [librbd] -> [librados] -> [MON], [OSD]
>
> Considering the above, there could be more of a bottleneck in 1) than
> in 2), I think.
>
> What do you think?

The bottleneck I'm talking about is when you share the filesystem
between many guests.  In the RBD image case, you would have a single
NFS server, through which all the data and metadata would have to
flow: that becomes a limiting factor.  In the CephFS case, the clients
can talk to the MDS and OSD daemons individually, without having to
flow through one NFS server.

The preference depends on the use case: the benefits of a shared
filesystem like CephFS don't become apparent until you have lots of
guests using the same shared filesystem.  I'd expect people to keep
using Cinder+RBD for cases where a filesystem is just exposed to one
guest at a time.

>>  3.What are you thinking of for integration with OpenStack using
>>   a new implementation?
>>   Since it's going to be a new kind of thing, there should be a
>>   different architecture.
>
> Sorry, it's just too ambiguous. Frankly, how are you going to
> implement such a new feature, was my question.
>
> Make sense?

Right now this is just about building Manila drivers to enable use of
Ceph, rather than re-architecting anything.  A user would create a
conventional Ceph cluster and a conventional OpenStack cluster, this
is just about enabling the use of the two together via Manila (i.e. to
do for CephFS/Manila what is already done for RBD/Cinder).

I expect there will be more discussion later about exactly what the
NFS layer will look like, though we can start with the simple case of
creating a guest VM that acts as a gateway.

>>  4.Is this implementation intended for OneStack integration
>>   mainly?
>
> Yes, that's just my typo -;
>
>  OneStack -> OpenStack

Naturally the Manila part is just for openstack.  However, some of the
utility parts (e.g. the "VolumeClient" class) might get re-used in
other systems that require a similar concept (like containers, other
clouds).

John

>
>
>> This piece of work is specifically about Manila; general improvements
>> in Ceph integration would be a different topic.
>
> That's interesting to me.
>
> Shinobu
>
> - Original Message -
> From: "John Spray" <jsp...@redhat.com>
> To: "Shinobu Kinjo" <ski...@redhat.com>
> Cc: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Sent: Friday, September 25, 2015 5:51:36 PM
> Subject: Re: [openstack-dev] [Manila] CephFS native driver
>
> On Fri, Sep 25, 2015 at 8:04 AM, Shinobu Kinjo <ski...@redhat.com> wrote:
>> So here are questions from my side.
>> Just question.
>>
>>
>>  1.What is the biggest advantage compared to others such as RBD?
>>   We should be able to implement what you are going to do in an
>>   existing module, shouldn't we?
>
> I guess you mean compared to using a local filesystem on top of RBD,
> and exporting it over NFS?  The main distinction here is that for
> native CephFS clients, they get a shared filesystem where all the
> clients can talk to all the Ceph OSDs directly, and avoid the
> potential bottleneck of an NFS->local fs->RBD server.
>
> Workloads requiring a local filesystem would probably continue to map
> a cinder block device and use that.  The Manila driver is intended for
> use cases that require a shared filesystem.
>
>>  2.What are you going to focus on with a new implementation?
>>   It seems to be about using NFS in front of that implementation
>>   more transparently.
>
> The goal here is to make cephfs accessible to people by making it easy
> to provision it for their applications, just like Manila in general.
> The motivation for putting an NFS layer in front of CephFS is to make
> it easier for people to adopt, because they won't need to install any
> ceph-specific code in their guests.  It will also be easier to
> support, because any ceph client bugfixes would not need to be
> installed within guests (if we assume existing nfs clients are bug
> free :-))
>
>>  3.What are you thinking of integration with

[openstack-dev] [Manila] CephFS native driver

2015-09-24 Thread John Spray
Hi all,

I've recently started work on a CephFS driver for Manila.  The (early)
code is here:
https://github.com/openstack/manila/compare/master...jcsp:ceph

It requires a special branch of ceph which is here:
https://github.com/ceph/ceph/compare/master...jcsp:wip-manila

This isn't done yet (hence this email rather than a gerrit review),
but I wanted to give everyone a heads up that this work is going on,
and a brief status update.

This is the 'native' driver in the sense that clients use the CephFS
client to access the share, rather than re-exporting it over NFS.  The
idea is that this driver will be useful for anyone who has such
clients, as well as acting as the basis for a later NFS-enabled
driver.

The export location returned by the driver gives the client the Ceph
mon IP addresses, the share path, and an authentication token.  This
authentication token is what permits the clients access (Ceph does not
do access control based on IP addresses).
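
Purely as an illustration of the three pieces of information listed
above (the real driver's export location format may well differ):

    # Hypothetical illustration only: the information an export location
    # carries in this design, not the driver's actual output format.
    export_location = {
        "mon_addresses": ["192.0.2.10:6789", "192.0.2.11:6789"],  # Ceph monitors
        "share_path": "/shares/share-0001",                       # path within CephFS
        "auth_token": "<cephx secret for this share's client>",   # what grants access
    }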

It's just capable of the minimal functionality of creating and
deleting shares so far, but I will shortly be looking into hooking up
snapshots/consistency groups, albeit for read-only snapshots only
(cephfs does not have writeable snapshots).  Currently deletion is
just a move into a 'trash' directory, the idea is to add something
later that cleans this up in the background: the downside to the
"shares are just directories" approach is that clearing them up has a
"rm -rf" cost!

A note on the implementation: cephfs recently got the ability (not yet
in master) to restrict client metadata access based on path, so this
driver is simply creating shares by creating directories within a
cluster-wide filesystem, and issuing credentials to clients that
restrict them to their own directory.  They then mount that subpath,
so that from the client's point of view it's like having their own
filesystem.  We also have a quota mechanism that I'll hook in later to
enforce the share size.
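
To make the credential part concrete, the restriction amounts to issuing
each client cephx capabilities that limit metadata access to the share's
own path; the cap strings and pool name below are illustrative only:

    # Sketch only: the sort of cephx capabilities that confine a client to
    # one share's directory, relying on the path-based MDS restriction
    # mentioned above.  Strings and pool name are illustrative.
    share_id = "share-0001"
    client_caps = {
        "mon": "allow r",
        "mds": "allow rw path=/shares/%s" % share_id,  # path-restricted metadata access
        "osd": "allow rw pool=cephfs_data",            # per-pool data access, not per-share
    }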

Currently the security here requires clients (i.e. the ceph-fuse code
on client hosts, not the userspace applications) to be trusted, as
quotas are enforced on the client side.  The OSD access control
operates on a per-pool basis, and creating a separate pool for each
share is inefficient.  In the future it is expected that CephFS will
be extended to support file layouts that use RADOS namespaces, which
are cheap, such that we can issue a new namespace to each share and
enforce the separation between shares on the OSD side.
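
As a sketch of where that is heading: once file layouts can carry a
RADOS namespace, per-share OSD-side isolation could look roughly like
the following.  The vxattr and cap syntax shown are how later Ceph
releases exposed this; they did not exist at the time of this mail, so
treat this as illustrative rather than as the driver's implementation.

    import os

    def isolate_share(share_path, share_id):
        # Give the share's directory its own RADOS namespace in the data
        # pool, so OSD caps can be limited to that namespace instead of a
        # whole pool.
        os.setxattr(share_path, "ceph.dir.layout.pool_namespace",
                    share_id.encode())
        return {"osd": "allow rw pool=cephfs_data namespace=%s" % share_id}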

However, for many people the ultimate access control solution will be
to use a NFS gateway in front of their CephFS filesystem: it is
expected that an NFS-enabled cephfs driver will follow this native
driver in the not-too-distant future.

This will be my first openstack contribution, so please bear with me
while I come up to speed with the submission process.  I'll also be in
Tokyo for the summit next month, so I hope to meet other interested
parties there.

All the best,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev