Re: [openstack-dev] [Manila] Midcycle meetup

2016-01-13 Thread Luis Pabon
Is there a link to the topics or schedule?

- Luis

- Original Message -
From: "Ben Swartzlander" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, December 9, 2015 8:25:55 PM
Subject: Re: [openstack-dev] [Manila] Midcycle meetup

On 12/04/2015 04:42 PM, Ben Swartzlander wrote:
> On 11/19/2015 01:00 PM, Ben Swartzlander wrote:
>> If you are planning to attend the midcycle in any capacity, please vote your
>> preferences here:
>>
>> https://www.surveymonkey.com/r/BXPLDXT
>
> The results of the survey were clear. Most people prefer the week of Jan
> 12-14.
>
> There was an offer from HP to host in Roseville, CA (thanks, HP), but at the
> meeting yesterday most people still preferred the RTP site, so we plan to
> host the meeting in RTP that week, unless someone absolutely can't make
> that week.
>
> What remains to be decided is whether we do Tuesday+Wednesday or
> Wednesday+Thursday. We've tried both, and the two-day length has worked
> out very well. I personally lean towards Wednesday+Thursday, but please
> reply to me or the list if you have a different preference.
>
> We need to finalize the dates so people can make travel arrangements.
> I'll set the deadline to decide by Tuesday, Dec 8 so people will have five
> weeks to make travel plans.

Okay it's final -- we will hold the midcycle meetup on Jan 13-14 at 
NetApp's RTP office.

-Ben


>> -Ben




Re: [openstack-dev] [Manila] Question to driver maintainers

2015-05-20 Thread Luis Pabon
Hi guys, I am a little confused and would like to clear some things up.
GlusterFS (the storage system) does support resizing volumes.  I will talk to
Csaba to see what he means, and we will get back to you soon.
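
For what it's worth, a rough sketch of how a non-disruptive extend might look
with the directory-mapped layout (a Manila share backed by a directory with a
GlusterFS quota); the volume and share names below are made up:

    # enable quotas on the backing volume, then grow the share's quota in place
    gluster volume quota manila-vol enable
    gluster volume quota manila-vol limit-usage /share-foo 20GB

Clients keep their existing mounts; only the quota limit changes.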

- Original Message -
From: "Ben Swartzlander" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Tuesday, May 19, 2015 12:41:31 PM
Subject: Re: [openstack-dev] [Manila] Question to driver maintainers

On 05/19/2015 10:42 AM, Csaba Henk wrote:
> Hi Igor,
>
>> From: "Igor Malinovskiy" 
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Sent: Monday, May 18, 2015 10:15:25 AM
>> Subject: [openstack-dev] [Manila] Question to driver maintainers
> ...
>> So I want to ask driver maintainers here:
>> Will your driver be able to do share extending without loss of connectivity?
> Currently:
>
> - glusterfs driver can
> - glusterfs-native won't support share extension (*)
>
> In the Liberty timeframe we plan to unify the glusterfs* drivers' backend
> management logic, so that both the glusterfs-driver-style and the
> glusterfs-native-driver-style backend management will be available for both
> drivers (the actual choice is made in configuration). Once this is in place,
> the answer changes as follows:
>
> - glusterfs and glusterfs-native will either support non-disruptive
>    share extension, or won't support share resize at all (*), depending
>    on configuration

Csaba, this is a truly interesting set of limitations! I'm trying to 
understand what's going on down in the storage system to prevent the 
extension. Is it a case of not having enough free space? Or can you 
support creating new (larger) shares on the same backend while 
simultaneously not being able to resize an existing share? Is there some 
mapping to physical resources that's immutable once configured? What is 
your recommendation to customers who run out of space in a glusterfs 
share today (independent of Manila)?

If your system can't support this case then I'm worried others may have 
similar problems and we could end up having to choose between making 
extend an optional operation (a choice I don't like) or making the 
glusterfs-native driver and possibly other drivers unsupported (also an 
option I don't like).

-Ben

> (*) There are efforts to remove this limitation in GlusterFS, but they are
> too vague at this point to make a statement about.
>
> Csaba




Re: [openstack-dev] [Manila] Mount automation using Zeroconf

2015-04-27 Thread Luis Pabon
Hi Clinton,
  I think there are two main parts needed to automatically mount Manila
shares: the share discovery model, and enabling the virtual machine to mount
the share.  I think the main benefit of using zeroconf would be as a standard
way to broadcast the availability of a network share regardless of protocol.
Manila could advertise a share using a name like _manila_nfs, _manila_cifs,
_manila_gluster, etc.  However, even with zeroconf, the virtual machine would
still need an agent to attach the share for use.  The real appeal of zeroconf
is its simplicity.
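
As a rough sketch of the broadcast side -- assuming something like the
python-zeroconf library; the service type, share name, address and export path
below are all made up:

    import socket
    from zeroconf import ServiceInfo, Zeroconf

    # Hypothetical: advertise one NFS share under a Manila-specific service type.
    info = ServiceInfo(
        "_manila-nfs._tcp.local.",
        "share-foo._manila-nfs._tcp.local.",
        addresses=[socket.inet_aton("192.0.2.10")],
        port=2049,
        properties={"export": "/shares/share-foo"},
    )

    zc = Zeroconf()
    zc.register_service(info)   # guests browsing _manila-nfs._tcp.local. would see it

A guest-side agent would still have to browse for that service type and run
the actual mount, which is the part zeroconf does not solve.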

There could still be other methods to investigate.  For example (don't kill
me for this ;-)), what about a Manila NIS (YP) service for NFS shares?

- Luis



 
- Original Message -
From: "Clinton Knight" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, April 22, 2015 3:29:50 PM
Subject: [openstack-dev] [Manila] Mount automation using Zeroconf

Hello, Manila-philes. 

Back in Paris we started talking about Manila mount automation, whereby file 
shares could be automatically mounted on clients, and this will likely be a 
topic in Vancouver. So in order to have an informed discussion at the summit, 
I'd like to explore a few things beforehand. 

Besides brute force approaches like SSH or PsExec, one of the community 
suggestions was to use Zeroconf (aka Bonjour)[1]. Zeroconf sounds attractive on 
the surface, but it seems to have a number of limitations: 

* No standard way to specify local mount point 
* Additional setup required to work beyond the 'local' domain 
* Custom software needed on clients to mount advertised shares 
* Same issues with network connectivity as any other mount automation solution 

Does anyone have a clearer idea how Zeroconf might satisfy the need for Manila 
mount automation? 

Thanks, 
Clinton Knight 
Manila core team 

[1] http://en.wikipedia.org/wiki/Zero-configuration_networking 




[openstack-dev] [Manila] Question on documentation reviews

2015-04-07 Thread Luis Pabon
Hi guys,
  I have been reviewing https://review.openstack.org/#/c/171166/, but I am 
concerned that I provided more of a hindrance than assistance. Instead I would 
like to propose the method used by Swift for document reviews, where reviewers 
provide a patch to the author as in https://review.openstack.org/#/c/169990 .

What do you think?

- Luis



Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-02 Thread Luis Pabon
What is the status of virtfs?  I am not sure whether it is still being
maintained.  Does anyone know?

- Luis

- Original Message -
From: "Danny Al-Gaaf" 
To: "OpenStack Development Mailing List (not for usage questions)" 
, ceph-de...@vger.kernel.org
Sent: Sunday, March 1, 2015 9:07:36 AM
Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila

Am 27.02.2015 um 01:04 schrieb Sage Weil:
> [sorry for ceph-devel double-post, forgot to include
> openstack-dev]
> 
> Hi everyone,
> 
> The online Ceph Developer Summit is next week[1] and among other
> things we'll be talking about how to support CephFS in Manila.  At
> a high level, there are basically two paths:

We also discussed the CephFS/Manila topic at the last Manila midcycle
meetup (Kilo) [1][2].

> 2) Native CephFS driver
> 
> As I currently understand it,
> 
> - The driver will set up CephFS auth credentials so that the guest VM can
>   mount CephFS directly.
> - The guest VM will need access to the Ceph network.  That makes this
>   mainly interesting for private clouds and trusted environments.
> - The guest is responsible for running 'mount -t ceph ...'.
> - I'm not sure how we provide the auth credential to the user/guest...

The auth credentials would currently need to be handled by an application
orchestration solution, I guess; at the moment I see no solution at the
Manila layer.

If Ceph provided OpenStack Keystone authentication for rados/cephfs instead
of CephX, this could be handled easily via application orchestration.
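
To make that concrete, here is a rough sketch of what a per-tenant credential
and the guest-side mount could look like, assuming the subtree restriction
Sage describes below is actually enforced -- pool, path and host names are
made up:

    # cephx key limited to one tenant's subtree and pool (hypothetical names)
    ceph auth get-or-create client.tenant-foo \
        mon 'allow r' \
        mds 'allow rw path=/tenants/foo' \
        osd 'allow rw pool=tenant-foo'

    # inside the guest, once the key has been delivered somehow:
    mount -t ceph mon1.example.com:6789:/tenants/foo /mnt/share \
        -o name=tenant-foo,secretfile=/etc/ceph/tenant-foo.secret

The open question remains exactly how that key gets into the guest in the
first place.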

> This would perform better than an NFS gateway, but there are
> several gaps on the security side that make this unusable currently
> in an untrusted environment:
> 
> - The CephFS MDS auth credentials currently are _very_ basic.  As
> in, binary: can this host mount or it cannot.  We have the auth cap
> string parsing in place to restrict to a subdirectory (e.g., this
> tenant can only mount /tenants/foo), but the MDS does not enforce
> this yet.  [medium project to add that]
> 
> - The same credential could be used directly via librados to access
> the data pool directly, regardless of what the MDS has to say about
> the namespace.  There are two ways around this:
> 
> 1- Give each tenant a separate rados pool.  This works today.
> You'd set a directory policy that puts all files created in that
> subdirectory in that tenant's pool, then only let the client access
> those rados pools.
> 
> 1a- We currently lack an MDS auth capability that restricts which 
> clients get to change that policy.  [small project]
> 
> 2- Extend the MDS file layouts to use the rados namespaces so that
>  users can be separated within the same rados pool.  [Medium
> project]
> 
> 3- Something fancy with MDS-generated capabilities specifying which
>  rados objects clients get to read.  This probably falls in the
> category of research, although there are some papers we've seen
> that look promising. [big project]
> 
> Anyway, this leads to a few questions:
> 
> - Who is interested in using Manila to attach CephFS to guest VMs?
> - What use cases are you interested in?
> - How important is security in your environment?

As you know, we (Deutsche Telekom) are very interested in providing shared
filesystems to VMs via CephFS instead of e.g. NFS. We can present/discuss
use cases at CDS.

For us security is critical, and so is performance. The first solution, via
Ganesha, is not what we prefer (using CephFS via p9 and NFS would not perform
that well, I guess). The second solution, exposing CephFS directly to the VM,
would be bad from the security point of view, since we can't expose the Ceph
public network directly to the VMs without running into all the security
issues we already discussed.

During the midcycle we discussed a third option:

Mount CephFS directly on the host system and provide the filesystem to the
VMs via p9/virtfs. This needs Nova integration (I will work on a POC patch
for this) to set up the libvirt config correctly for virtfs. It solves the
security issue and the auth key distribution for the VMs, but it may
introduce performance issues due to the use of virtfs. We have to check what
the specific performance impact will be. Currently this is the preferred
solution for our use cases.
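
For reference, a rough sketch of the libvirt guest XML such a Nova change
might have to generate, assuming CephFS is already mounted on the host -- the
paths and the mount tag below are made up:

    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/cephfs/tenants/foo'/>   <!-- CephFS mounted on the host -->
      <target dir='share-foo'/>                 <!-- 9p mount tag seen by the guest -->
    </filesystem>

    # and inside the guest:
    mount -t 9p -o trans=virtio share-foo /mnt/share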

What's still missing in this solution is user/tenant/subtree separation, as
in the second option. But that is needed for CephFS in general anyway.

Danny

[1] https://etherpad.openstack.org/p/manila-kilo-midcycle-meetup
[2] https://etherpad.openstack.org/p/manila-meetup-winter-2015

