[openstack-dev] [Manila] PTL Candidacy

2014-09-24 Thread Swartzlander, Ben
Hello! I have been the de facto PTL for Manila from the conception of
the project up to now. Since Manila is now an officially incubated OpenStack
program, I have the opportunity to run for election and hopefully become
the officially elected PTL for the Manila project.

I'm running because I feel that the vision of the Manila project has
not yet been achieved, even though we've made tremendous strides in the
last year, and I want to see the project mature and become part of
core OpenStack.

Some of you may remember the roots of the Manila project, when we
proposed shared file system management as an extension to the
then-nascent Cinder project during the Folsom release. It's taken a lot
of attempts and failures to arrive at the current Manila project, and
it's been an exciting and humbling journey, where along the way I've
had the opportunity to work with many great individuals.

My vision for the future of Manila includes:
* Getting more integrated with the rest of OpenStack. We have Devstack,
  Tempest, and Horizon integration, and I'd like to get that code into
  the right places where it can be maintained. We also need to add Heat
  integration, and more complete documentation.
* Working with distributions on issues related to packaging and
  installation to make Manila as easy to use as possible. This includes
  work with Chef and Puppet.
* Making Manila usable in more environments. Manila's design center has
  been large-scale public clouds, but we haven't spent enough time on
  small/medium-scale environments -- the kind that developers typically
  have and the kind that users typically start out with.
* Taking good ideas from the rest of OpenStack. We're a small team and
  we can't do everything ourselves. The OpenStack ecosystem is full of
  excellent technology and I want to make sure we take the best ideas
  and apply them to Manila. In particular, there are some features I'd
  like to copy from the Cinder project.
* A focus on quality. I want to make sure we keep test coverage high
  as we add new features, and increase test coverage on existing
  features. I also want to try to start vendor CI similar to what
  Cinder has.
* Lastly, I expect to work with vendors to get more drivers contributed
  to expand Manila's hardware support. I am very interested in
  smoothing out some of the networking complexities that make it
  difficult to write drivers today.

I hope you will support my candidacy so I can continue to lead Manila
towards eventual integration with OpenStack and realize my dream of
shared file system management in the cloud.

Thank you,
Ben Swartzlander
Manila PTL, NetApp Architect

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Docs plan for Juno

2014-09-05 Thread Swartzlander, Ben
Now that the project is incubated, we should be moving our docs from the
OpenStack wiki to the openstack-manuals project. Rushil Chugh has volunteered
to lead this effort, so please coordinate any updates to documentation with him
(and me). Our goal is to have the updates to openstack-manuals upstream by
September 22. It will go faster if we can split up the work and do it in parallel.
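
For anyone picking up a piece of this, here is a rough sketch of the usual
Gerrit workflow I'd expect us to follow (the repo URL and topic name are just
illustrative -- check the infra documentation if anything looks off):

    # Clone the manuals repo and set up git-review for Gerrit
    git clone https://git.openstack.org/openstack/openstack-manuals
    cd openstack-manuals
    git review -s

    # Keep each change small: one chapter/section per review
    git checkout -b manila-docs-shares
    # ... edit the relevant manual sources ...
    git commit -a -m "Add Manila shared file systems documentation"

    # Push the change to Gerrit for review
    git review -t manila-docs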

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Incubation request

2014-08-11 Thread Swartzlander, Ben
I just saw the agenda for tomorrow's TC meeting and we're on it. I plan to be 
there.

https://wiki.openstack.org/wiki/Meetings#Technical_Committee_meeting

-Ben


From: Swartzlander, Ben [mailto:ben.swartzlan...@netapp.com]
Sent: Monday, July 28, 2014 9:53 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Manila] Incubation request

Manila has come a long way since we proposed it for incubation last autumn. 
Below are the formal requests.

https://wiki.openstack.org/wiki/Manila/Incubation_Application
https://wiki.openstack.org/wiki/Manila/Program_Application

Anyone have anything to add before I forward these to the TC?

-Ben Swartzlander

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] File-storage for Manila service image

2014-08-05 Thread Swartzlander, Ben
On Tue, 2014-08-05 at 23:50 +0300, Valeriy Ponomaryov wrote:
> Github has file size limit in 100 Mb, see
> https://help.github.com/articles/what-is-my-disk-quota
> 
> 
> Our current image is about 300 Mb.

Do you think we could upload the file to launchpad somehow? I've seen LP
hosting various downloadable files. If that fails maybe the
openstack-infra team has a place for blobs.

Worst case, we will just host it on S3 and pay for it out of our pockets.
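
If we do end up on GitHub, one way around the 100 MB per-file limit would be to
check the image in as chunks and reassemble it on download -- a rough sketch
(the part-file naming is made up):

    # Split the ~300 MB image into chunks under GitHub's 100 MB limit
    split -b 90M ubuntu_1204_nfs_cifs.qcow2 ubuntu_1204_nfs_cifs.qcow2.part-
    sha256sum ubuntu_1204_nfs_cifs.qcow2 > ubuntu_1204_nfs_cifs.qcow2.sha256

    # Consumers reassemble the chunks and verify the checksum
    cat ubuntu_1204_nfs_cifs.qcow2.part-* > ubuntu_1204_nfs_cifs.qcow2
    sha256sum -c ubuntu_1204_nfs_cifs.qcow2.sha256

Not elegant, but it would keep the image somewhere with no traffic caps.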


> On Tue, Aug 5, 2014 at 11:43 PM, Swartzlander, Ben
>  wrote:
> On Tue, 2014-08-05 at 23:13 +0300, Valeriy Ponomaryov wrote:
> > Hello everyone,
> >
> >
> > Currently used image for Manila is located in
> > dropbox: ubuntu_1204_nfs_cifs.qcow2 and dropbox has limit for traffic,
> > see https://www.dropbox.com/help/4204
> >
> >
> > Due to generation of excessive traffic, public links were banned and
> > image could not be downloaded with error code 509, now it is unbanned,
> > until another excess reached.
> >
> >
> > Traffic limit should not threat possibility to use project, so we need
> > find stable file storage with permanent public links and without
> > traffic limit.
> >
> >
> > Does anyone have any suggestions for more suitable file storage to
> > use?
>
>
> Let's try creating a github repo and sharing it there. For hopefully
> obvious reasons, let's NOT put this into the manila repos directly --
> let's keep it separate.
> 
> 
> > --
> > Kind Regards
> > Valeriy Ponomaryov
> >
> 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Kind Regards
> Valeriy Ponomaryov
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] File-storage for Manila service image

2014-08-05 Thread Swartzlander, Ben
On Tue, 2014-08-05 at 23:13 +0300, Valeriy Ponomaryov wrote:
> Hello everyone,
> 
> 
> Currently used image for Manila is located in
> dropbox: ubuntu_1204_nfs_cifs.qcow2 and dropbox has limit for traffic,
> see https://www.dropbox.com/help/4204
> 
> 
> Due to generation of excessive traffic, public links were banned and
> image could not be downloaded with error code 509, now it is unbanned,
> until another excess reached.
> 
> 
> Traffic limit should not threat possibility to use project, so we need
> find stable file storage with permanent public links and without
> traffic limit.
> 
> 
> Does anyone have any suggestions for more suitable file storage to
> use?

Let's try creating a github repo and sharing it there. For hopefully
obvious reasons, let's NOT put this into the manila repos directly --
let's keep it separate.


> -- 
> Kind Regards
> Valeriy Ponomaryov
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Incubation request

2014-07-29 Thread Swartzlander, Ben
On Tue, 2014-07-29 at 13:38 +0200, Thierry Carrez wrote:
> Swartzlander, Ben a écrit :
> > Manila has come a long way since we proposed it for incubation last autumn. 
> > Below are the formal requests.
> > 
> > https://wiki.openstack.org/wiki/Manila/Incubation_Application
> > https://wiki.openstack.org/wiki/Manila/Program_Application
> > 
> > Anyone have anything to add before I forward these to the TC?
> 
> When ready, propose a governance change a bit like this one:
> 
> https://github.com/openstack/governance/commit/52d9b4cf2f3ba9d0b757e16dc040a1c174e1d27e

Thierry, does the governance change process replace the process of
sending an email to the openstack-tc ML?

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Incubation request

2014-07-28 Thread Swartzlander, Ben
Manila has come a long way since we proposed it for incubation last autumn. 
Below are the formal requests.

https://wiki.openstack.org/wiki/Manila/Incubation_Application
https://wiki.openstack.org/wiki/Manila/Program_Application

Anyone have anything to add before I forward these to the TC?

-Ben Swartzlander

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Welcome Xing Yang to the Manila core team!

2014-06-17 Thread Swartzlander, Ben
The Manila core team welcomes Xing Yang! She has been a very active
reviewer and has been consistently involved with the project.

Xing, thank you for all your effort and keep up the great work!

-Ben Swartzlander

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] GenericDriver cinder volume error during manila create

2014-06-17 Thread Swartzlander, Ben
On Mon, 2014-06-16 at 23:06 +0530, Deepak Shetty wrote:
> I am trying devstack on F20 setup with Manila sources.

> When i am trying to do
> manila create --name cinder_vol_share_using_nfs2 --share-network-id
> 36ec5a17-cef6-44a8-a518-457a6f36faa0 NFS 2 

> I see the below error in c-vol due to which even tho' my service VM is
> started, manila create errors out as cinder volume is not getting
> exported as iSCSI

> 2014-06-16 16:39:36.151 INFO cinder.volume.flows.manager.create_volume
> [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
> b65a066f32df4aca80fa9a6d5c795095] Volume 8bfd424d-9877-4c20-a9d1-058c06b9bdda:
> being created as raw with specification: {'status': u'creating', 'volume_size': 2,
> 'volume_name': u'volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda'}
> 2014-06-16 16:39:36.151 DEBUG cinder.openstack.common.processutils
> [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
> b65a066f32df4aca80fa9a6d5c795095] Running cmd (subprocess): sudo
> cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -n
> volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda stack-volumes -L 2g
> from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
> 2014-06-16 16:39:36.828 INFO cinder.volume.flows.manager.create_volume
> [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
> b65a066f32df4aca80fa9a6d5c795095] Volume volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
> (8bfd424d-9877-4c20-a9d1-058c06b9bdda): created successfully
> 2014-06-16 16:39:38.404 WARNING cinder.context [-] Arguments dropped
> when creating context: {'user': u'd9bb59a6a2394483902b382a991ffea2',
> 'tenant': u'b65a066f32df4aca80fa9a6d5c795095',
> 'user_identity': u'd9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095 - - -'}
> 2014-06-16 16:39:38.426 DEBUG cinder.volume.manager
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095] Volume 8bfd424d-9877-4c20-a9d1-058c06b9bdda:
> creating export from (pid=4623) initialize_connection /opt/stack/cinder/cinder/volume/manager.py:781
> 2014-06-16 16:39:38.428 INFO cinder.brick.iscsi.iscsi
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095] Creating iscsi_target for:
> volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
> 2014-06-16 16:39:38.440 DEBUG cinder.brick.iscsi.iscsi
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095] Created volume path
> /opt/stack/data/cinder/volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda,
> content:
> <target iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda>
> backing-store /dev/stack-volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
> lld iscsi
> IncomingUser kZQ6rqqT7W6KGQvMZ7Lr k4qcE3G9g5z7mDWh2woe
> </target>
> from (pid=4623) create_iscsi_target /opt/stack/cinder/cinder/brick/iscsi/iscsi.py:183
> 2014-06-16 16:39:38.440 DEBUG cinder.openstack.common.processutils
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095] Running cmd (subprocess): sudo
> cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update
> iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
> from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
> 2014-06-16 16:39:38.981 DEBUG cinder.openstack.common.processutils
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095] Result was 107
> from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:167
> 2014-06-16 16:39:38.981 WARNING cinder.brick.iscsi.iscsi
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095] Failed to create iscsi target for volume
> id:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda: Unexpected error while
> running command.
> Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update
> iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
> Exit code: 107
> Stdout: 'Command:\n\ttgtadm -C 0 --lld iscsi --op new --mode target --tid 1 -T
> iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
> \nexited with code: 107.\n'
> Stderr: 'tgtadm: failed to send request hdr to tgt daemon, Transport
> endpoint is not connected\ntgtadm: failed to send request hdr to tgt
> daemon, Transport endpoint is not connected\ntgtadm: failed to send
> request hdr to tgt daemon, Transport endpoint is not connected
> \ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint
> is not connected\n'
> 2014-06-16 16:39:38.982 ERROR oslo.messaging.rpc.dispatcher
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095]
> Exception during message handling: Failed to create iscsi targe

Re: [openstack-dev] [Cinder][Manila]

2014-04-26 Thread Swartzlander, Ben
> -Original Message-
> From: Alun Champion [mailto:p...@achampion.net] 
> Sent: Saturday, April 26, 2014 7:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Cinder][Manila]
>
> I'm sure this has been discussed I just couldn't find any reference to it, 
> perhaps someone can point me to the discussion/rationale.
> Is there any reason why there needs to be another service to present a 
> control-plane to storage? Obviously object storage is
> different as that is presenting a data-plane API but from a control-plane I'm 
> confused why there needs to be another service,
> surely control-planes are pretty similar and the underlying networking issues 
> for iSCSI would be similar for NFS/CIFS.
> Trove is looking to be a general purpose data container
> (control-plane) service for traditional RDBMS, NoSQL, KeyValue, etc., why is 
> the Cinder API not suitable for providing a general
> purpose storage container (control-plane) service?
>
> Creating separate services will complicate other services, e.g. Trove.
>
> Thoughts?

There are good arguments on both sides of this question. There is substantial
overlap between Cinder and Manila in their API constructs and backends (they
both deal with storage, after all). In the long run it's entirely possible that
the two projects could be merged.

However, there are also some very important differences. In particular, Cinder
knows almost nothing about networking, but Manila needs to know a great deal
about individual tenant networks in order to deliver NAS storage to tenants.
Cinder can rely on hypervisors to do some of the hard work of translating block
protocols and managing attach/detach, whereas Manila routes around the
hypervisor entirely and connects guest VMs to storage directly. The most
important reason Manila ended up as a separate project from Cinder was that
the Cinder team didn't want the distraction of dealing with some of the very
hard technical problems that needed solving for Manila to be successful.

After working on Manila for the past year and struggling with a lot of hard
technical decisions, I think splitting the projects was the right call. If
Manila had remained a subproject of Cinder, it either wouldn't have received
nearly the attention it needed or it would have sucked attention away from a
lot of important issues that the Cinder team is dealing with.

If there's a future where Manila and Cinder merge back together, I'm pretty
sure it's quite far away. The best thing we can do is strive to make both
projects successful and keep asking these hard questions.
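
To make the contrast concrete, here is a rough sketch of the two workflows
from a tenant's point of view (commands are approximate; exact flags vary by
client version, and the angle-bracket values are placeholders):

    # Cinder: block storage, attached to an instance through the hypervisor
    cinder create --display-name my-vol 10
    nova volume-attach my-server <volume-id>
    # The guest then sees a new block device (e.g. /dev/vdb) to format and mount

    # Manila: NAS storage, served to the guest directly over the tenant network
    manila create --name my-share --share-network-id <share-network-id> NFS 10
    manila access-allow <share-id> ip <guest-ip>
    # Inside the guest: mount -t nfs <export-location> /mnt/my-share

The Cinder path never has to care which network the guest is on; the Manila
path depends on it completely, which is exactly the class of problem the
Cinder team didn't want to take on.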

-Ben Swartzlander (Manila PTL)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] create server from a volume snapshot, 180 retries is sufficient?

2014-04-08 Thread Swartzlander, Ben
Options may be bad, but hardcoded values chosen arbitrarily are worse. Unless
someone can justify why the value needs to be 180 and not 179 or 181, it
should be configurable. That's my opinion at any rate.

-Ben


From: Lingxian Kong [mailto:anlin.k...@gmail.com]
Sent: Tuesday, April 08, 2014 11:59 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [nova][cinder] create server from a volume snapshot, 
180 retries is sufficient?

hi there:

According to the patch https://review.openstack.org/#/c/80619/, Nova
will wait 180s for volume creation; the config option was rejected by
Russell and Nikola. The reason I raise it is that we found server
creation failing due to timeout in our deployment, with LVM as the
Cinder backend.

So, I wonder: is 180s really suitable here? Is there any guidance about
when we should add an option? At the very least, we should not avoid an
option just because of the existing overwhelming number of them, right?

Thoughts?



--
---
Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; 
anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Modularity of generic driver (network mediated)

2014-02-06 Thread Swartzlander, Ben
Raja, this is one of a few workable approaches that I've thought about. I'm not 
convinced it's the best approach, but it does look to be less effort so we 
should examine it carefully. One thing to consider is that if we go down the 
route of using service VMs for the mediated drivers (such as gluster) then we 
don't need to be tied to Ganesha-NFS -- we could use nfs-kernel-server instead. 
Perhaps Ganesha-NFS is still the better choice but I'd like to compare the two 
in this context. One downside is that service VMs with full virtualization are 
a relatively heavyweight way to deliver file share services to tenants. If 
there were approaches that could use container-based virtualization or no 
virtualization at all, then those would probably be more efficient (although 
also possibly more work).

-Ben


-Original Message-
From: Ramana Raja [mailto:rr...@redhat.com] 
Sent: Wednesday, February 05, 2014 11:42 AM
To: openstack-dev@lists.openstack.org
Cc: vponomar...@mirantis.com; aostape...@mirantis.com; yportn...@mirantis.com; 
Csaba Henk; Vijay Bellur; Swartzlander, Ben
Subject: [Manila] Modularity of generic driver (network mediated)

Hi,

The first prototype of the multi-tenant-capable GlusterFS driver would
piggyback on the generic driver, which implements the network plumbing model
[1]. We'd have an NFS-Ganesha server running on the service VM. The Ganesha
server would mediate access to the GlusterFS backend (or any other
Ganesha-compatible clustered file system backend, such as CephFS or GPFS),
while the tenant network isolation would be handled by the service VM
networking [2][3]. To implement this idea, we'd have to reuse much of the
generic driver code, especially the parts related to the service VM networking.

So we were wondering whether the current generic driver could be made more
modular. The service VM would then not just be used to expose a formatted
Cinder volume, but could instead serve as an instrument for converting the
existing single-tenant drivers -- LVM, GlusterFS -- into multi-tenant-ready
drivers with only slight modification. Do you see any issues with this idea --
a generic, modular multi-tenant driver that implements the network plumbing
model? And is it feasible?


[1] https://wiki.openstack.org/wiki/Manila_Networking
[2] 
https://docs.google.com/document/d/1WBjOq0GiejCcM1XKo7EmRBkOdfe4f5IU_Hw1ImPmDRU/edit
[3] 
https://docs.google.com/a/mirantis.com/drawings/d/1Fw9RPUxUCh42VNk0smQiyCW2HGOGwxeWtdVHBB5J1Rw/edit

Thanks,

Ram
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Incubation request for Manila

2013-10-10 Thread Swartzlander, Ben
Please consider our formal request for incubation status of the Manila project:
https://wiki.openstack.org/wiki/Manila_Overview

thanks!
-Ben Swartzlander

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev