[openstack-dev] [manila] write integrity with NFS-Ganesha over CephFS

2018-03-08 Thread Ramana Raja
Hi Jeff,

Currently, there is no open source backend in manila that provides
scalable and highly-available NFS servers for dynamic cloud workloads.
Manila's CephFS driver could integrate with your ongoing work
on active-active NFS over CephFS (with Kubernetes managing the
lifecycle of containerized user-space NFS-Ganesha servers) [1]
to fill this gap.

During the manila project team gathering, we discussed this plan
under the topic of high availability of share servers [2]. One of the
questions was about write integrity issues when an NFS-Ganesha server
container goes down and another container comes up to replace it.
Would there be any such write integrity issues when NFS clients do
asynchronous writes to files, with write caching in both the NFS client
and the NFS server (NFS-Ganesha server-side caching or libcephfs client
caching), and the NFS-Ganesha server goes down? I guess this is a general
NFS protocol question, or maybe things get complicated with NFS-Ganesha over
CephFS?

I looked up the NFSv4 protocol documentation, specifically the
implementation notes for the COMMIT operation [3]. If an NFS client issues
an async write followed by a COMMIT operation that succeeds, then the NFS
server is expected to have flushed the cached data and metadata onto stable
storage, here CephFS. And if the NFS server crashes, losing cached data and
metadata, then the write verifier cookie returned by the WRITE or COMMIT
operation indicates to the client that the server crashed. It is then up to
the NFS client to re-transmit the uncommitted data and metadata.
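The retransmission logic can be sketched as a toy simulation (hypothetical classes, not real NFS client code): the server returns a write verifier with each unstable WRITE and with COMMIT, and the client replays its dirty data whenever the verifier changes between WRITE and COMMIT.

```python
import os


class NfsServer:
    """Toy NFS server: buffers unstable writes, flushes them on COMMIT."""

    def __init__(self):
        self.verifier = os.urandom(8)  # changes on every server restart
        self.buffered = {}             # offset -> data, lost on crash
        self.stable = {}               # stands in for stable storage (CephFS)

    def write_unstable(self, offset, data):
        self.buffered[offset] = data
        return self.verifier

    def commit(self):
        self.stable.update(self.buffered)
        self.buffered.clear()
        return self.verifier

    def crash_and_restart(self):
        self.buffered.clear()          # cached data and metadata are lost
        self.verifier = os.urandom(8)  # new boot verifier


class NfsClient:
    """Toy client: keeps dirty pages until COMMIT confirms stability."""

    def __init__(self, server):
        self.server = server
        self.dirty = {}
        self.last_verifier = None

    def write(self, offset, data):
        self.dirty[offset] = data
        self.last_verifier = self.server.write_unstable(offset, data)

    def commit(self):
        verifier = self.server.commit()
        if verifier != self.last_verifier:
            # Server rebooted between WRITE and COMMIT: replay dirty data.
            for off, data in self.dirty.items():
                self.last_verifier = self.server.write_unstable(off, data)
            verifier = self.server.commit()
        self.dirty.clear()
        return verifier


server = NfsServer()
client = NfsClient(server)
client.write(0, b"hello")
server.crash_and_restart()   # the unstable write is lost
client.commit()              # verifier mismatch -> client retransmits
assert server.stable[0] == b"hello"
```

The open question in a container-replacement scenario is whether the replacement NFS-Ganesha server presents a new verifier, so that clients know to replay, which this sketch simply assumes.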

Thanks,
Ramana

[1] https://jtlayton.wordpress.com/2017/11/07/active-active-nfs-over-cephfs/

[2] line 186 in https://etherpad.openstack.org/p/manila-rocky-ptg
the actual spec https://review.openstack.org/#/c/504987/

[3] https://tools.ietf.org/html/rfc7530#section-16.3.5

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila]: Fwd: [Nfs-ganesha-devel] Xenial PPA packages for FSALs?

2016-12-02 Thread Ramana Raja
- Forwarded Message -
> On Friday, December 2, 2016 at 6:08 PM, Kaleb S. KEITHLEY
>  wrote:
> > Hi,
> > 
> > fsal-vfs is in the nfs-ganesha-fsal .deb along with all the other FSALs.
> 
> Ah! I missed this. I see it now [2].
> 
> > 
> > I'm not aware of any compatible builds of Ceph in Launchpad PPAs that
> > could be used to build fsal-ceph. Same goes for fsal-rgw.
> 
> OK.
> Thanks, Kaleb!
> 
> -Ramana
> 
> [2] $ dpkg -L nfs-ganesha-fsal
> /.
> /usr
> /usr/lib
> /usr/lib/x86_64-linux-gnu
> /usr/lib/x86_64-linux-gnu/ganesha
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalproxy.so.4.2.0
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalnull.so.4.2.0
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgpfs.so.4.2.0
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalvfs.so.4.2.0
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalxfs.so.4.2.0
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgluster.so.4.2.0
> /usr/share
> /usr/share/doc
> /usr/share/doc/nfs-ganesha-fsal
> /usr/share/doc/nfs-ganesha-fsal/copyright
> /usr/share/doc/nfs-ganesha-fsal/changelog.Debian.gz
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalproxy.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalxfs.so
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalnull.so
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalxfs.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgpfs.so
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalproxy.so
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgpfs.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalvfs.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgluster.so
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalgluster.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalnull.so.4
> /usr/lib/x86_64-linux-gnu/ganesha/libfsalvfs.so
> 
> > 
> > 
> > On 12/02/2016 06:39 AM, Ramana Raja wrote:
> > > Hi,
> > >
> > > It'd be useful to have nfs-ganesha-vfs and nfs-ganesha-ceph
> > > packages for Xenial like those available for Fedora 24. Has
> > > anybody already built, or is planning to build, Xenial PPA
> > > packages for FSAL_CEPH and FSAL_VFS? I only see the nfs-ganesha
> > > Xenial package [1] here,
> > > https://launchpad.net/~gluster/+archive/ubuntu/nfs-ganesha
> > > which doesn't install the FSAL shared libraries I'm interested
> > > in.
> > >
> > > I'm especially interested in FSAL_CEPH and FSAL_VFS, as they
> > > would soon be used in OpenStack Manila, the File Systems as a
> > > Service project, to export NFS shares to OpenStack clients.
> > > To test such use cases/setups in OpenStack's upstream CI, the
> > > OpenStack services + Ganesha + storage backend would all be
> > > installed and run in a Xenial VM with ~8G RAM. Scripting
> > > the CI's installation phase would be much simpler if the FSAL
> > > packages for CephFS and VFS were available.
> > >
> > > Thanks,
> > > Ramana
> > >
> > > [1] Files installed with nfs-ganesha Xenial PPA,
> > > $ dpkg-query -L  nfs-ganesha
> > > /.
> > > /lib
> > > /lib/systemd
> > > /lib/systemd/system
> > > /lib/systemd/system/nfs-ganesha-config.service
> > > /lib/systemd/system/nfs-ganesha-lock.service
> > > /lib/systemd/system/nfs-ganesha-config.service-in.cmake
> > > /lib/systemd/system/nfs-ganesha.service
> > > /etc
> > > /etc/defaults
> > > /etc/defaults/nfs-ganesha
> > > /etc/logrotate.d
> > > /etc/logrotate.d/nfs-ganesha
> > > /etc/ganesha
> > > /etc/ganesha/ganesha.conf
> > > /etc/dbus-1
> > > /etc/dbus-1/system.d
> > > /etc/dbus-1/system.d/nfs-ganesha-dbus.conf
> > > /usr
> > > /usr/include
> > > /usr/sbin
> > > /usr/lib
> > > /usr/lib/pkgconfig
> > > /usr/share
> > > /usr/share/doc
> > > /usr/share/doc/nfs-ganesha
> > > /usr/share/doc/nfs-ganesha/copyright
> > > /usr/share/doc/nfs-ganesha/changelog.Debian.gz
> > > /usr/bin
> > > /usr/bin/ganesha.nfsd
> > >
> > > --
> > > Check out the vibrant tech community on one of the world's most
> > > engaging tech sites, SlashDot.org! http://sdm.link/slashdot
> > > ___
> > > Nfs-ganesha-devel mailing list
> > > nfs-ganesha-de...@lists.sourceforge.net
> > > https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
> > >
> > 
> > 
> 



Re: [openstack-dev] [manila] nfs-ganesha export modification issue

2016-08-16 Thread Ramana Raja
On Thursday, June 30, 2016 6:07 PM, Alexey Ovchinnikov 
 wrote:
> 
> Hello everyone,
> 
> here I will briefly summarize the export update problem one encounters
> when using nfs-ganesha.
> 
> While working on a driver that relies on nfs-ganesha, I have discovered
> that it is apparently impossible to provide interruption-free export
> updates. As of version 2.3, which I am working with, it is possible to
> add or remove an export without restarting the daemon, but it is not
> possible to modify an existing export. In other words, if you create an
> export, you should define all clients before you actually export and use
> it; otherwise it will be impossible to change the rules on the fly. One
> can come up with at least two ways to work around this issue: either
> remove, update, and re-add the export, or create multiple exports (one
> per client) for an exported resource. Both ways have associated problems:
> the first interrupts clients already working with the export, which might
> be a big problem if a client is doing heavy I/O; the second creates
> multiple exports associated with a single resource, which can easily lead
> to confusion. The second approach is used in manila's current ganesha
> helper [1]. This issue has been raised now and then with the nfs-ganesha
> team, most recently in [2], but apparently it will not be addressed in
> the near future.

Frank Filz has added support to Ganesha (upstream "next" branch) that
allows one to dynamically update exports via D-Bus, available since
https://github.com/nfs-ganesha/nfs-ganesha/commits/2f47e8a761f3700

It'd be nice if we could test this feature and provide feedback.
Also, ML [2] was updated with more implementation details.
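The tradeoff Alexey describes, and what dynamic export update removes, can be sketched with a toy export table (hypothetical names; this is not Ganesha's actual API, just a model of its 2.3-era add/remove-only behavior):

```python
class GaneshaExportTable:
    """Toy model of a Ganesha 2.3-era export table: exports can be
    added or removed at runtime, but not modified in place."""

    def __init__(self):
        self.exports = {}   # export_id -> set of allowed clients
        self.next_id = 1

    def add_export(self, clients):
        export_id = self.next_id
        self.next_id += 1
        self.exports[export_id] = set(clients)
        return export_id

    def remove_export(self, export_id):
        del self.exports[export_id]


def allow_client_single_export(table, export_id, new_client):
    """Workaround 1: remove, update, re-add. Clients of the old export
    are interrupted while the export is gone."""
    clients = table.exports[export_id] | {new_client}
    table.remove_export(export_id)          # <-- I/O interruption window
    return table.add_export(clients)


def allow_client_per_client_exports(table, new_client):
    """Workaround 2 (used by manila's ganesha helper): one export per
    client. No interruption, but one export per access rule."""
    return table.add_export({new_client})


table = GaneshaExportTable()
eid = table.add_export({"10.0.0.1"})
eid = allow_client_single_export(table, eid, "10.0.0.2")
eid2 = allow_client_per_client_exports(table, "10.0.0.3")
print(sorted(table.exports[eid]))   # ['10.0.0.1', '10.0.0.2']
```

With in-place update support, the single-export path would mutate `table.exports[eid]` directly, with no removal window and no per-client export sprawl.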

Thanks,
Ramana

> 
> With kind regards,
> Alexey.
> 
> [1]:
> https://github.com/openstack/manila/blob/master/manila/share/drivers/ganesha/__init__.py
> [2]: https://sourceforge.net/p/nfs-ganesha/mailman/message/35173839
> 
> 



Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-04 Thread Ramana Raja
+1. Tom's reviews and guidance are helpful
and spot-on.

-Ramana

On Thursday, August 4, 2016 7:52 AM, Zhongjun (A)  
wrote:
> Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer
> team
> 
> +1 Tom will be a great addition to the core team.
> 
> From: Dustin Schoenbrun [mailto:dscho...@redhat.com]
> Sent: August 4, 2016, 4:55
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team
> 
> +1
> 
> Tom will be a marvelous resource for us to learn from!
> 
> Dustin Schoenbrun
> OpenStack Quality Engineer
> Red Hat, Inc.
> dscho...@redhat.com
> 
> On Wed, Aug 3, 2016 at 4:19 PM, Knight, Clinton < clinton.kni...@netapp.com >
> wrote:
> 
> +1
> 
> Tom will be a great asset for Manila.
> 
> Clinton
> 
> _
> From: Ravi, Goutham < goutham.r...@netapp.com >
> Sent: Wednesday, August 3, 2016 3:01 PM
> Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer
> team
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org >
> 
> (Not a core member, so plus 0.02)
> 
> I've learned a ton of things from Tom and continue to do so!
> 
> From: Rodrigo Barbieri < rodrigo.barbieri2...@gmail.com >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org >
> Date: Wednesday, August 3, 2016 at 2:48 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org >
> Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer
> team
> 
> +1
> 
> Tom contributes a lot to the Manila project.
> 
> --
> Rodrigo Barbieri
> Computer Scientist
> OpenStack Manila Core Contributor
> Federal University of São Carlos
> 
> On Aug 3, 2016 15:42, "Ben Swartzlander" < b...@swartzlander.org > wrote:
> 
> Tom (tbarron on IRC) has been working on OpenStack (both cinder and manila)
> for more than 2 years and has spent a great deal of time on Manila reviews
> in the last release. Tom brings another package/distro point of view to the
> community as well as former storage vendor experience.
> 
> -Ben Swartzlander
> Manila PTL



Re: [openstack-dev] [manila]: questions on update-access() changes

2016-06-20 Thread Ramana Raja
On June 18, 2016 1:47:10 AM Ben Swartzlander  wrote:

> Ramana, I think your questions got answered in a channel discussion last
> week, but I just wanted to double check that you weren't still expecting
> any answers here. If you were, please reply and we'll keep this thread going.

Thanks, Ben. Are you referring to this discussion,
http://eavesdrop.openstack.org/irclogs/%23openstack-manila/%23openstack-manila.2016-06-06.log.html#t2016-06-06T17:15:43,
among John, Tom, and you? Yes, I followed it, and it concluded that I can
proceed with my work and need not worry about the approaches of
changes a) and c).

-Ramana

> 
> 
> On June 2, 2016 9:30:39 AM Ramana Raja  wrote:
> 
> > Hi,
> >
> > There are a few changes that seem to be lined up for Newton to make
> > manila's
> > share access control, update_access(), workflow better [1] --
> > reduce races in DB updates, avoid non-atomic state transitions, and
> > possibly enable the workflow fit in a HA active-active manila
> > configuration (if not already possible).
> >
> > The proposed changes ...
> >
> > a) Switch back to per rule access state (from per share access state) to
> >avoid non-atomic state transition.
> >
> >Understood problem, but no spec or BP yet.
> >
> >
> > b) Use Tooz [2] (with Zookeeper?) for distributed lock management [3]
> >in the access control workflow.
> >
> >Still under investigation and for now fits the share replication
> >workflow [4].
> >
> >
> > c) Allow drivers to update DB models in a restricted manner (only certain
> >fields can be updated by a driver API).
> >
> >This topic is being actively discussed in the community, and there
> >should be
> >a consensus soon on figuring out the right approach, following which
> >there
> >might be a BP/spec targeted for Newton.
> >
> >
> > Besides these changes, there's a update_access() change that I'd like to
> > revive
> > (started in Mitaka), storing access keys (auth secrets) generated by a
> > storage
> > backend when providing share access, i.e.  during update_access(), in the
> > ``share_access_map`` table [5]. This change as you might have figured is a
> > smaller and a simpler change than the rest, but seems to depend on the
> > approaches
> > that might be adopted by a) and c).
> >
> > For now, I'm thinking of allowing a driver's update access()  to return a
> > dictionary of {access_id: access_key, ...} to (ShareManager)access_helper's
> > update_access(), which would then update the DB iteratively with access_key
> > per access_id. Would this approach be valid with changes a) and c) in
> > Newton? change a) would make the driver report access status per rule via
> > the access_helper, during which an 'access_key' can also be returned,
> > change c) might allow the driver to directly update the `access_key` in the
> > DB.
> >
> > For now, should I proceed with implementing the approach currently outlined
> > in my spec [5], have the driver's update_access() return a dictionary of
> > {access_id: access_key, ...} or wait for approaches for changes a) and c)
> > to be outlined better?
> >
> > Thanks,
> > Ramana
> >
> > [1] https://etherpad.openstack.org/p/newton-manila-update-access
> >
> > [2]
> > https://blueprints.launchpad.net/openstack/?searchtext=distributed-locking-with-tooz
> >
> > [3]
> > https://review.openstack.org/#/c/209661/38/specs/chronicles-of-a-dlm.rst
> >
> > [4] https://review.openstack.org/#/c/318336/
> >
> > [5] https://review.openstack.org/#/c/322971/
> > 
> > http://lists.openstack.org/pipermail/openstack-dev/2015-October/077602.html
> >
> 
> 
> 



[openstack-dev] [manila]: questions on update-access() changes

2016-06-02 Thread Ramana Raja
Hi,

There are a few changes that seem to be lined up for Newton to improve
manila's share access control workflow, update_access() [1]: reduce races
in DB updates, avoid non-atomic state transitions, and possibly enable the
workflow to fit in an HA active-active manila configuration (if not
already possible).

The proposed changes ...

a) Switch back to per-rule access state (from per-share access state) to
   avoid non-atomic state transitions.

   Understood problem, but no spec or BP yet.


b) Use Tooz [2] (with Zookeeper?) for distributed lock management [3]
   in the access control workflow.

   Still under investigation; for now it fits the share replication
   workflow [4].


c) Allow drivers to update DB models in a restricted manner (only certain
   fields can be updated by a driver API).

   This topic is being actively discussed in the community, and there
   should soon be a consensus on the right approach, following which there
   might be a BP/spec targeted for Newton.
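Item b) above would serialize concurrent update_access() calls on the same share. A minimal sketch of the pattern, with threading.Lock standing in for a distributed lock (a real deployment would use the Tooz coordination API with, e.g., a ZooKeeper backend, so the lock is visible across all share-manager processes):

```python
import threading
from collections import defaultdict


class ShareLockManager:
    """Sketch of per-share lock management for update_access().
    threading.Lock is a stand-in for a distributed lock; the Tooz API
    and semantics differ, this only illustrates the serialization."""

    def __init__(self):
        self._guard = threading.Lock()
        self._locks = defaultdict(threading.Lock)

    def share_lock(self, share_id):
        with self._guard:              # protect the lock table itself
            return self._locks[share_id]


manager = ShareLockManager()
applied_rules = []


def update_access(share_id, rule):
    # Only one updater per share at a time: no racing DB state updates.
    with manager.share_lock(share_id):
        applied_rules.append((share_id, rule))


threads = [
    threading.Thread(target=update_access, args=("share-1", f"10.0.0.{i}"))
    for i in range(5)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(applied_rules))   # 5
```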


Besides these changes, there's an update_access() change that I'd like to
revive (started in Mitaka): storing the access keys (auth secrets)
generated by a storage backend when providing share access, i.e. during
update_access(), in the ``share_access_map`` table [5]. This change, as you
might have figured, is smaller and simpler than the rest, but seems to
depend on the approaches that might be adopted by a) and c).

For now, I'm thinking of allowing a driver's update_access() to return a
dictionary of {access_id: access_key, ...} to the (ShareManager)
access_helper's update_access(), which would then update the DB iteratively
with an access_key per access_id. Would this approach be valid with changes
a) and c) in Newton? Change a) would make the driver report access status
per rule via the access_helper, during which an 'access_key' can also be
returned; change c) might allow the driver to directly update the
`access_key` in the DB.
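A minimal sketch of that flow, with names modeled loosely on the proposal (the real manila driver and helper interfaces differ):

```python
class FakeDriver:
    """Toy driver whose update_access() returns per-rule access keys,
    as proposed: {access_id: access_key, ...}."""

    def update_access(self, share, access_rules):
        # Pretend the backend mints an auth secret per access rule.
        return {rule["access_id"]: f"secret-for-{rule['access_to']}"
                for rule in access_rules}


class FakeAccessHelper:
    """Toy access_helper: applies rules via the driver, then stores each
    returned access_key in the share_access_map table, row by row."""

    def __init__(self, driver, db):
        self.driver = driver
        self.db = db   # access_id -> row dict, stands in for the DB table

    def update_access(self, share, access_rules):
        keys = self.driver.update_access(share, access_rules)
        for access_id, access_key in keys.items():
            self.db[access_id]["access_key"] = access_key


db = {
    "rule-1": {"access_to": "alice", "access_key": None},
    "rule-2": {"access_to": "bob", "access_key": None},
}
rules = [{"access_id": k, "access_to": v["access_to"]} for k, v in db.items()]
helper = FakeAccessHelper(FakeDriver(), db)
helper.update_access("share-1", rules)
print(db["rule-1"]["access_key"])   # secret-for-alice
```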

For now, should I proceed with implementing the approach currently outlined
in my spec [5], i.e. have the driver's update_access() return a dictionary
of {access_id: access_key, ...}, or wait for the approaches to changes a)
and c) to be outlined better?

Thanks,
Ramana

[1] https://etherpad.openstack.org/p/newton-manila-update-access

[2] 
https://blueprints.launchpad.net/openstack/?searchtext=distributed-locking-with-tooz

[3] https://review.openstack.org/#/c/209661/38/specs/chronicles-of-a-dlm.rst

[4] https://review.openstack.org/#/c/318336/

[5] https://review.openstack.org/#/c/322971/
http://lists.openstack.org/pipermail/openstack-dev/2015-October/077602.html



[openstack-dev] [manila] quotas per tenant/user per share type?

2016-03-14 Thread Ramana Raja
Hi,

Can manila enforce quotas on resources for a tenant/user per share type?
An example use case: as a cloud operator, I want to restrict tenant A to
consuming no more than 'x' GB of storage space for the share type
'premium'.

If such a quota mechanism does not exist, is there a plan to implement it
in Newton or the O release? Or can the use case cited above be
accomplished with some configuration wizardry?
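For concreteness, the check being asked for is easy to state as a sketch (this is not manila's actual quota engine, just an illustration of the use case):

```python
from collections import defaultdict


class ShareTypeQuota:
    """Sketch of per-(tenant, share type) gigabyte quotas: the use case
    above would set ('tenant-A', 'premium') -> x GB."""

    def __init__(self):
        self.limits = {}                 # (tenant, share_type) -> GB limit
        self.usage = defaultdict(int)    # (tenant, share_type) -> GB used

    def set_limit(self, tenant, share_type, gigabytes):
        self.limits[(tenant, share_type)] = gigabytes

    def reserve(self, tenant, share_type, gigabytes):
        key = (tenant, share_type)
        limit = self.limits.get(key)
        if limit is not None and self.usage[key] + gigabytes > limit:
            raise ValueError(f"quota exceeded for {key}: limit {limit} GB")
        self.usage[key] += gigabytes


quota = ShareTypeQuota()
quota.set_limit("tenant-A", "premium", 100)
quota.reserve("tenant-A", "premium", 80)      # fits within the limit
try:
    quota.reserve("tenant-A", "premium", 30)  # would exceed 100 GB
except ValueError as exc:
    print(exc)
```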

Thanks,
Ramana






Re: [openstack-dev] [Manila] HDFS CI broken

2016-02-11 Thread Ramana Raja


- Original Message -
> From: "Ben Swartzlander" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, February 10, 2016 9:00:06 PM
> Subject: [openstack-dev] [Manila] HDFS CI broken
> 
> The gate-manila-tempest-dsvm-hdfs jenkins job has been failing for a
> long time now. It appears to be a config issue that's probably not hard
> to fix, but nobody is actively maintaining this code.
> 
> Since it's a waste of resources to continue running this broken job, I
> plan to disable it, and if nobody wants to volunteer to get it working
> again, we will need to take the HDFS driver out of the tree in Mitaka,
> since we can't ensure its quality without the CI job.

With a set of patches [1] for HDFS's devstack plugin (the
devstack-plugin-hdfs project), I'm able to get the CI job,
gate-manila-tempest-dsvm-hdfs, to pass. All but one of the patches have
been up for a long time, but the maintainer of the HDFS plugin has not
reviewed them.

The fixes were not hard. I am also concerned about the lack of maintenance
of the plugin and the driver; for example, is someone going to implement
the new 'update_access' API for the HDFS driver, which will soon be a
requirement for all drivers?

Thanks,
Ramana

[1] https://review.openstack.org/#/c/278504/
    (still needs an unrelated bashate error fixed)
https://review.openstack.org/#/c/264277/
https://review.openstack.org/#/c/261030/
https://review.openstack.org/#/c/261028/
> 
> I really don't like removing drivers, especially fully open-source
> drivers, but we have too many other priorities this release to be
> distracted by fixing this kind of thing. If this driver is something
> people actively use and find valuable, then it should not be hard to
> find a volunteer to fix it.
> 
> -Ben Swartzlander
> 
> 



[openstack-dev] [ceph] DevStack plugin for Ceph required for Mitaka-1 and beyond?

2015-11-24 Thread Ramana Raja
Hi,

I was trying to figure out the state of the DevStack plugin for Ceph, but
couldn't find its source code, and ran into the following doubt: at
Mitaka-1, i.e. next week, wouldn't the Ceph-related Jenkins gates (e.g.
Cinder's gate-tempest-dsvm-full-ceph) that still use an extras.d hook
script instead of a plugin stop working? For reference,
https://github.com/openstack-dev/devstack/commit/1de9e330de9fd509fcdbe04c4722951b3acf199c
[Deepak, thanks for reminding me about the deprecation of extras.d.]

The patch that seeks to integrate the Ceph DevStack plugin with the
Jenkins gates is under review:
https://review.openstack.org/#/c/188768/
It's outdated, as the devstack-ceph-plugin it seeks to integrate seems to
be in the now-obsolete 'stackforge/' namespace and hasn't seen activity
for quite some time.

Even if I'm mistaken about all of this, can someone please point me to
the Ceph DevStack plugin's source code? I'm interested to know whether the
plugin would be identical to the current Ceph hook script,
extras.d/60-ceph.sh.
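For reference, the plugin model that the deprecation commit points to is enabled from DevStack's local.conf rather than from an extras.d hook; a stanza like the following (the repo location is an assumption, shown under the post-stackforge openstack/ namespace) would pull such a plugin in:

```
[[local|localrc]]
# Hypothetical: enable a Ceph plugin repo instead of extras.d/60-ceph.sh
enable_plugin devstack-plugin-ceph https://git.openstack.org/openstack/devstack-plugin-ceph
```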

Thanks,
Ramana



 






Re: [openstack-dev] [Manila] Two nominations for Manila Core Reviewer Team

2015-04-23 Thread Ramana Raja
+1 to both.

- Original Message -
From: "Ben Swartzlander" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, April 22, 2015 11:53:15 PM
Subject: [openstack-dev] [Manila] Two nominations for Manila Core Reviewer  
Team

I would like to nominate Thomas Bechtold to join the Manila core reviewer team. 
Thomas has been contributing to Manila for close to 6 months and has provided a 
good number of quality code reviews in addition to a substantial amount of 
contributions. Thomas brings both Oslo experience as well as a packager/distro 
perspective which is especially helpful as Manila starts to get used in more 
production scenarios. 

I would also like to nominate Mark Sturdevant. He has also been active in the 
community for about 6 months and has a similar history of code reviews. Mark is 
the maintainer of the HP driver and would add vendor diversity to the core 
team. 

-Ben Swartzlander 
Manila PTL 




[openstack-dev] [Manila] Modularity of generic driver (network mediated)

2014-02-05 Thread Ramana Raja
Hi,

The first prototype of the multi-tenant-capable GlusterFS driver would
piggyback on the generic driver, which implements the network plumbing
model [1]. We'd have an NFS-Ganesha server running on the service VM. The
Ganesha server would mediate access to the GlusterFS backend (or any other
Ganesha-compatible clustered file system backend, such as CephFS or GPFS),
while tenant network isolation would be handled by the service VM
networking [2][3]. To implement this idea, we'd have to reuse much of the
generic driver code, especially the parts related to service VM networking.

So we were wondering: can the current generic driver be made more modular?
The service VM could be used not just to expose a formatted Cinder volume,
but also as an instrument to convert the existing single-tenant drivers
(with slight modifications), LVM and GlusterFS, into multi-tenant-ready
drivers. Do you see any issues with this thought, a generic, modular
multi-tenant driver that implements the network plumbing model? And is the
idea feasible?
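For illustration, the Ganesha export mediating access to a GlusterFS backend on the service VM might look roughly like this (the hostname and volume values are hypothetical, and option spellings vary across Ganesha versions):

```
EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/share-01";
    Access_Type = RW;
    FSAL {
        Name = GLUSTER;
        # Hypothetical backend location
        Hostname = "gluster-server.example.net";
        Volume = "manila-share-vol";
    }
}
```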


[1] https://wiki.openstack.org/wiki/Manila_Networking
[2] 
https://docs.google.com/document/d/1WBjOq0GiejCcM1XKo7EmRBkOdfe4f5IU_Hw1ImPmDRU/edit
[3] 
https://docs.google.com/a/mirantis.com/drawings/d/1Fw9RPUxUCh42VNk0smQiyCW2HGOGwxeWtdVHBB5J1Rw/edit

Thanks,

Ram
