Re: [Openstack] anyone has a sample quantum.conf file that configures a valid quantum.log file?

2013-04-30 Thread Maru Newby
On Apr 29, 2013, at 6:03 PM, yulin...@dell.com wrote:

  
  
 Hi,
  
 I’m new to Quantum and trying to set up an openstack quantum environment.
  
 I installed a single node environment using DevStack (on a VM) successfully. I 
 noticed that by default there are no log files for quantum. I tried to enable 
 quantum logging using quantum.conf. However, I haven't been lucky enough 
 to get it to work yet. Just wondering if anyone can share a working sample of 
 quantum.conf that enables logging to a log file?

The default devstack configuration is to have the processes log to a screen 
session.  If file output is desired, add the following to your localrc:

SCREEN_LOGDIR=$DEST/logs/screen

This ensures that log output is also captured to disk - a file per service in 
the specified output directory.
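
If you want quantum to write its own log file outside of devstack's screen 
handling, the stock logging options in quantum.conf should also do it.  A 
minimal sketch, assuming the standard openstack-common logging options and a 
directory the service user can write to:

[DEFAULT]
# verbose/debug just control how chatty the output is
verbose = True
debug = False
# either point at a single file ...
log_file = /var/log/quantum/quantum.log
# ... or give a directory and let each service pick its own file name
# log_dir = /var/log/quantum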

Thanks,


Maru


  
 Thanks,
  
 YuLing
  


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] anyone has a sample quantum.conf file that configures a valid quantum.log file?

2013-04-30 Thread Maru Newby

On Apr 30, 2013, at 11:21 AM, yulin...@dell.com wrote:

 Thanks very much Maru.
 
 Another question...I'm playing around with Openstack Dashboard. What I can 
 see is that after I launch a VM, a few ports will be created. Port details 
 would also show the MAC address of the port(something like 
 fa:16:3e:97:1f:b7). Is this MAC address the physical MAC address of the port 
 on the NIC card? If not, what MAC address is it?

It is likely the MAC of a virtual NIC (a quantum port), though the specifics 
would depend on which Quantum plugin is configured.



 Thanks,
 
 YuLing
 
 -Original Message-
 From: Maru Newby [mailto:ma...@redhat.com] 
 Sent: Tuesday, April 30, 2013 2:03 AM
 To: C, Yuling
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] anyone has a sample quantum.conf file that 
 configures a valid quantum.log file?
 
 On Apr 29, 2013, at 6:03 PM, yulin...@dell.com wrote:
 
 
 
 Hi,
 
 I'm new to Quantum and trying to set up an openstack quantum environment.
 
 I installed a single node environment using DevStack(on a VM) successfully. 
 I noticed that by default there is no log files for quantum. I tried to 
 enable the quantum logging using quantum.conf. However, I haven't been lucky 
 enough to get it work yet. Just wondering if anyone can share a working 
 sample of quantum.conf that enables logging to a log file?
 
 The default devstack configuration is to have the processes log to a screen 
 session.  If file output is desired, add the following to your localrc
 
 SCREEN_LOGDIR=$DEST/logs/screen
 
 This ensures that log output is also captured to disk - a file per service in 
 the specified output directory.
 
 Thanks,
 
 
 Maru
 
 
 
 Thanks,
 
 YuLing
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] anyone has a sample quantum.conf file that configures a valid quantum.log file?

2013-04-30 Thread Maru Newby
It is not possible to retrieve the MAC of the physical port, as you desire, via 
Quantum's API.  The mapping of physical NIC to virtual NIC is plugin-specific 
and not exposed, though it can obviously be discovered manually.
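
For what it's worth, the manual route with the OVS plugin looks something like 
the following on the compute node (the interface name here is only an example, 
check your own configuration):

sudo ovs-vsctl show    # lists br-int and the per-VM tap/qvo ports
ip link show eth1      # shows the MAC of the physical NIC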

What use-case are you thinking of that requires discovering the MAC of the 
physical NIC that is transiting traffic for a given VM?

On Apr 30, 2013, at 1:10 PM, yulin...@dell.com wrote:

 I guess that's not physical either... since ifconfig -a on the Ubuntu 
 OS (where my VM resides) gave me a different HWaddr (something like 
 5a:86:eb:95:1a:49).  So, another question is whether I can get the physical 
 NIC port MAC from Openstack Quantum? The plugin configured in my environment 
 is the OVS plugin.
 
 Thanks,
 
 YuLing
 
 -Original Message-
 From: Maru Newby [mailto:ma...@redhat.com] 
 Sent: Tuesday, April 30, 2013 11:32 AM
 To: C, Yuling
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] anyone has a sample quantum.conf file that 
 configures a valid quantum.log file?
 
 
 On Apr 30, 2013, at 11:21 AM, yulin...@dell.com wrote:
 
 Thanks very much Maru.
 
 Another question...I'm playing around with Openstack Dashboard. What I can 
 see is that after I launch a VM, a few ports will be created. Port details 
 would also show the MAC address of the port(something like 
 fa:16:3e:97:1f:b7). Is this MAC address the physical MAC address of the port 
 on the NIC card? If not, what MAC address is it?
 
 It is likely the mac of a virtual nic (quantum port), though the specifics 
 would depend on which Quantum plugin is configured.
 
 
 
 Thanks,
 
 YuLing
 
 -Original Message-
 From: Maru Newby [mailto:ma...@redhat.com] 
 Sent: Tuesday, April 30, 2013 2:03 AM
 To: C, Yuling
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] anyone has a sample quantum.conf file that 
 configures a valid quantum.log file?
 
 On Apr 29, 2013, at 6:03 PM, yulin...@dell.com wrote:
 
 
 
 Hi,
 
 I'm new to Quantum and trying to set up an openstack quantum environment.
 
 I installed a single node environment using DevStack(on a VM) successfully. 
 I noticed that by default there is no log files for quantum. I tried to 
 enable the quantum logging using quantum.conf. However, I haven't been 
 lucky enough to get it work yet. Just wondering if anyone can share a 
 working sample of quantum.conf that enables logging to a log file?
 
 The default devstack configuration is to have the processes log to a screen 
 session.  If file output is desired, add the following to your localrc
 
 SCREEN_LOGDIR=$DEST/logs/screen
 
 This ensures that log output is also captured to disk - a file per service 
 in the specified output directory.
 
 Thanks,
 
 
 Maru
 
 
 
 Thanks,
 
 YuLing
 
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum Help

2013-01-09 Thread Maru Newby
Hi Mark,

Where are you attempting to execute ping against the internal ip from?  Based 
on that guide, the internal ip would only be routable on the compute or network 
nodes, and it should be possible to ping the VM from either.  

If you are unable to ping the internal ip from one of those nodes, what error 
do you see?  If it is something like 'permission denied', then namespaces are 
enabled and the ping command will need to be executed via 'sudo ip netns exec 
[namespace id] ping [ip]'.  The namespace id can be determined by executing 'ip 
netns' and finding the namespace id whose prefix matches the router interface 
associated with the tenant network (e.g. qrouter-).
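
For example (substitute the router uuid and the VM's internal ip from your own 
deployment):

ip netns                                        # list the namespaces on the node
sudo ip netns exec qrouter-<router-uuid> ping 50.50.1.3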

Regardless, if you can ping the floating ip, and are able to ssh into the VM 
via that floating ip, internal network connectivity is working correctly.  
Floating ip traffic uses DNAT at the network node, so the internal ip is used 
once a packet transits the network node bound for the VM.

Thanks,


Maru



On 2013-01-09, at 7:19 AM, Mark Langanki mark.langa...@spanlink.com wrote:

 All,
 
 I have an installation of 3 servers based on the Folsom guide : 
 https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/VLAN/2NICs/OpenStack_Folsom_Install_Guide_WebVersion.rst
 
 Following all the way through, I have quantum all configured per the bottom 
 of the doc : Your first VM.
 
 I have the VM up in a VNC window and can access it.
 
 Network is setup so that I have an internal net of 50.50.1.0/24 (exactly as 
 the doc), and an external net of 172.17.21.x/24 which is the public side.
 
 I have floating IP's in the 172.17.21.x/24 net and I can ping the associated 
 floatingip of the VM instance, I can ping the 50.50.1.1 router, but i can not 
 ping, or access the 50.50.1.3 VM instance.
 
 Where should I be troubleshooting?
 
 How can I know which interface I should be looking at on which node (i.e. is 
 it a vnet, or qwe031qwr type of interface, and is it on the network or 
 compute node?)..
 
 Kind of lost on where to go from here.
 
 Thanks,
 Mark Langanki
 EVP Operations and Engineering
 605 Hwy. 169 N., Suite. 900
 Minneapolis, MN 55441
 T: 763.971.2185 | M: 612.865.2928
 mark.langa...@spanlink.com | www.spanlink.com
 Spanlink Communications
 We make it work.
 
 
 
 
 
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Quantum] Running plugin tests with tox

2012-08-24 Thread Maru Newby
Hi Salvatore,

I see you're working on getting plugins testable with tox:

https://review.openstack.org/#/c/11922/

What about keeping the plugins isolated for testing purposes?  I have been 
unable to work on it yet, but I was thinking it might be a good idea to move 
the plugins out of the main tree (but still in the same repo) for ease of 
maintenance, testing and deployment.  The thought was:

- relocate all plugins outside of main quantum tree (plugins/ dir in the repo 
root)
- give each plugin 
  - its own python root-level package (e.g. quantum_ovs)
  - its own tox.ini
  - its own tools/*-requires

So the layout would be something like:

plugins/quantum_ovs/tox.ini
plugins/quantum_ovs/quantum_ovs/__init__.py
plugins/quantum_ovs/tests/__init__.py
plugins/quantum_ovs/tools/pip-requires

plugins/quantum_linuxbridge/tox.ini
...

The tests for each plugin could then be executed via an independent tox run.
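
To make that concrete, each plugin's tox.ini could be as small as the following 
sketch (the env list and test runner are illustrative, not a firm proposal):

[tox]
envlist = py26,py27,pep8

[testenv]
deps = -r{toxinidir}/tools/pip-requires
       -r{toxinidir}/tools/test-requires
commands = nosetests {posargs}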

Is there any merit to this, now or in the future?

Thanks,


Maru

On 2012-08-24, at 2:56 PM, Salvatore Orlando (Code Review) wrote:

 Salvatore Orlando has uploaded a new change for review.
 
 Change subject: Enable tox to run OVS plugin unit tests
 ..
 
 Enable tox to run OVS plugin unit tests
 
 Fix bug 1029142
 
 Unit tests have been moved into /quantum/tests/unit
 
 Change-Id: I5d0fa84826f62a86e4ab04c3e1958869f24a1fcf
 ---
 D quantum/plugins/openvswitch/run_tests.py
 D quantum/plugins/openvswitch/tests/__init__.py
 D quantum/plugins/openvswitch/tests/unit/__init__.py
 R quantum/tests/unit/test_ovs_db.py
 R quantum/tests/unit/test_ovs_defaults.py
 R quantum/tests/unit/test_ovs_rpcapi.py
 R quantum/tests/unit/test_ovs_tunnel.py
 7 files changed, 0 insertions(+), 72 deletions(-)
 
 
  git pull ssh://review.openstack.org:29418/openstack/quantum 
 refs/changes/22/11922/1
 --
 To view, visit https://review.openstack.org/11922
 To unsubscribe, visit https://review.openstack.org/settings
 
 Gerrit-MessageType: newchange
 Gerrit-Change-Id: I5d0fa84826f62a86e4ab04c3e1958869f24a1fcf
 Gerrit-PatchSet: 1
 Gerrit-Project: openstack/quantum
 Gerrit-Branch: master
 Gerrit-Owner: Salvatore Orlando salv.orla...@gmail.com


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone: 'PKI Signed Tokens' lack support for revocation

2012-08-09 Thread Maru Newby
Hi Adam,

The blueprint as revised to address Joe's comments looks good to me - nice 
work.  I especially like how the middleware is intended to cache the revocation 
list for a configurable amount of time - it mirrors how token caching already 
works.

Cheers,


Maru

On 2012-08-07, at 10:09 AM, Adam Young wrote:

 On 08/01/2012 09:19 PM, Maru Newby wrote:
 
 I see that support for PKI Signed Tokens has been added to Keystone without 
 support for token revocation.  I tried to raise this issue on the bug report:
 
 https://bugs.launchpad.net/keystone/+bug/1003962/comments/4
 
 And the review:
 
 https://review.openstack.org/#/c/7754/
 
 I'm curious as to whether anybody shares my concern and if there is a 
 specific reason why nobody responded to my question as to why revocation is 
 not required for this new token scheme.   Anybody?
 
 I have written up a blueprint for PKI token revocation.  Please provide 
 feedback.
 
 
 https://blueprints.launchpad.net/keystone/+spec/pki-revoke
 
 
 Thanks,
 
 
 Maru
 
 
 
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone: 'PKI Signed Tokens' lack support for revocation

2012-08-02 Thread Maru Newby
Hi Adam,

I was thinking along the same lines - the revocation list could be accessed via 
a simple url.  It wouldn't even have to be hosted by Keystone, necessarily.  
For larger clusters where performance might become an issue, what about 
generating to a static file as needed that is made available via any of the 
usual web server suspects?
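
Purely as an illustration (neither of these URLs exists today, they are 
hypothetical), consumers would then need nothing fancier than:

# poll keystone directly for the current revocation list ...
curl -H "X-Auth-Token: $ADMIN_TOKEN" http://keystone.example.com:35357/v2.0/tokens/revoked
# ... or fetch a periodically regenerated static file from any web server
curl http://static.example.com/keystone/revoked-tokens.json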

As to what should happen when the keystone server cannot be reached, that could 
be configurable.  Some deployments might prefer permissive failure, others 
restrictive failure.  I can see the case for both options.

+1, also, to the set of Keystone servers being a configurable list, with 
differential policies for revocation checking.

As to a justification for revocation, my use-case is more Swift (and integrated 
CDN) than Nova.  A rogue user being able to manipulate VMs is one thing, but 
being able to expose potentially private data to a really wide audience is 
another.  I would rate the damage potential of an object storage compromise as 
easily as great as application-level compromise.

I would be happy to participate in creating and implementing these ideas.  How 
can I help?

Thanks,


Maru 

On 2012-08-02, at 10:24 PM, Adam Young wrote:

 On 08/01/2012 11:05 PM, Maru Newby wrote:
 
 Hi Adam,
 
 I apologize if my questions were answered before.  I wasn't aware that what 
 I perceive as a very serious security concern was openly discussed.  The 
 arguments against revocation support, as you've described them, seem to be:
 
  - it's complicated/messy/expensive to implement and/or execute
  - Kerberos doesn't need it, so why would we?
 
 I'm not sure why either of these arguments would justify the potential 
 security hole that a lack of revocation represents, but I suppose a 'short 
 enough' token lifespan could minimize that hole.  But how short a span are 
 you suggesting as being acceptable?
 
 The delay between when a user's access permissions change (whether roles, 
 password or even account deactivation) and when the ticket reflects that 
 change is my concern.  The default in Keystone has been 24h, which is 
 clearly too long.  Something on the order of 5 minutes would be ideal, but 
 then ticket issuance could become the bottleneck.  Validity that's much 
 longer could be a real problem, though.  Maybe not at the cloud 
 administration level, but for a given project I can imagine someone being 
 fired and their access being revoked.  How long is an acceptable period for 
 that ticket to still be valid?  How much damage could be done by someone who 
 should no longer have access to an account if their access cannot be 
 revoked, by anyone, at all?
 
 
 I realize that I had been thinking about the revocation list as something 
 that needs to be broadcast.  This is certainly not the case.
 
 A much better approach would be for the Keystone server to have a list of 
 revoked tokens exposed in an URL.  Then, as service like Glance or Nova can 
 query the Revocation list on a simple schedule.  The time out would be 
 configurable, of course.
 
 There is a question about what to do if the keystone server cannot be reached 
 during that interval.  Since the current behavior is for authentication to 
 fail,  I suppose we would continue doing that,  but also wait a random amount 
 of time and then requery the Keystone server.
 
 In the future, I would like to make the set of Keystone servers a 
 configurable list, and the policy for revocation checking should be able to 
 vary per server:  some Keystone servers in a federated approach might not be 
 accessible.  In those cases,  it might be necessary for one Keystone server 
 to proxy the revocation list for another server.
 
 Let me know if this scheme makes sense to you.  If so, we can write it up as 
 an additional blueprint.  It should not be that hard to implement.
 
 
 
 I'm hearing that you, as the implementer of this feature, don't consider the 
 lack of revocation to be an issue.  What am I missing?  Is support for 
 revocation so repugnant that the potential security hole is preferable?  I 
 can see that from a developer's perspective, but I don't understand why 
 someone deploying Keystone wouldn't avoid PKI tokens until revocation 
 support became available.
 
 Thanks,
 
 
 Maru 
  
 
 
 On 2012-08-01, at 9:47 PM, Adam Young wrote:
 
 On 08/01/2012 09:19 PM, Maru Newby wrote:
 
 I see that support for PKI Signed Tokens has been added to Keystone 
 without support for token revocation.  I tried to raise this issue on the 
 bug report:
 
 https://bugs.launchpad.net/keystone/+bug/1003962/comments/4
 
 And the review:
 
 https://review.openstack.org/#/c/7754/
 
 I'm curious as to whether anybody shares my concern and if there is a 
 specific reason why nobody responded to my question as to why revocation 
 is not required for this new token scheme.   Anybody?
 
 It was discussed back when I wrote the Blueprint.  While it is possible to 
 do revocations with PKI,  it is expensive and requires a lot of extra 
 checking

Re: [Openstack] Keystone: 'PKI Signed Tokens' lack support for revocation

2012-08-02 Thread Maru Newby
Hi Adam,

I apologize if I came across as disrespectful.  I was becoming frustrated that 
what I perceived as a valid concern was seemingly being ignored, but I 
recognize that there is no excuse for addressing you in a manner that I would 
not myself wish to be treated.  I will do better going forward.

Thanks,


Maru

ps: Thank you for the reminder, Joe!

On 2012-08-02, at 1:56 AM, Joseph Heck wrote:

 Hey Maru,
 
 I think you're putting too many words in Adam's mouth here. First, Adam didn't 
 assert it wasn't valuable, useful, or necessary - simply that it wasn't in the 
 first cut and not in the list that we agreed was critically essential to an 
 initial implementation. As you noted, it's a complex and somewhat tricky issue 
 to get right.
 
 There's always room for more participation to correct the flaws you see in 
 the existing system - the beauty of open source. I would love to see 
 continued work on the signing and revocation work to drive in these features 
 that mean so much to you.  I'd be happy to open a blueprint if you can stand 
 behind it, define what you think is required, and commit to the work to 
 implement that revocation mechanism.
 
 Implying negative emotions on Adam's part when he's been one driving the 
 implementation and doing the work is simply inappropriate. Please consider 
 the blueprint route, definition of a viable solution, and work to make it 
 happen instead of name calling and asserting how the developers doing the 
 work are screwing up.
 
 - joe
 
 On Aug 1, 2012, at 8:05 PM, Maru Newby mne...@internap.com wrote:
 Hi Adam,
 
 I apologize if my questions were answered before.  I wasn't aware that what 
 I perceive as a very serious security concern was openly discussed.  The 
 arguments against revocation support, as you've described them, seem to be:
 
  - it's complicated/messy/expensive to implement and/or execute
  - Kerberos doesn't need it, so why would we?
 
 I'm not sure why either of these arguments would justify the potential 
 security hole that a lack of revocation represents, but I suppose a 'short 
 enough' token lifespan could minimize that hole.  But how short a span are 
 you suggesting as being acceptable?
 
 The delay between when a user's access permissions change (whether roles, 
 password or even account deactivation) and when the ticket reflects that 
 change is my concern.  The default in Keystone has been 24h, which is 
 clearly too long.  Something on the order of 5 minutes would be ideal, but 
 then ticket issuance could become the bottleneck.  Validity that's much 
 longer could be a real problem, though.  Maybe not at the cloud 
 administration level, but for a given project I can imagine someone being 
 fired and their access being revoked.  How long is an acceptable period for 
 that ticket to still be valid?  How much damage could be done by someone who 
 should no longer have access to an account if their access cannot be 
 revoked, by anyone, at all?
 
 I'm hearing that you, as the implementer of this feature, don't consider the 
 lack of revocation to be an issue.  What am I missing?  Is support for 
 revocation so repugnant that the potential security hole is preferable?  I 
 can see that from a developer's perspective, but I don't understand why 
 someone deploying Keystone wouldn't avoid PKI tokens until revocation 
 support became available.
 
 Thanks,
 
 
 Maru 
  
 
 
 On 2012-08-01, at 9:47 PM, Adam Young wrote:
 
 On 08/01/2012 09:19 PM, Maru Newby wrote:
 
 I see that support for PKI Signed Tokens has been added to Keystone 
 without support for token revocation.  I tried to raise this issue on the 
 bug report:
 
 https://bugs.launchpad.net/keystone/+bug/1003962/comments/4
 
 And the review:
 
 https://review.openstack.org/#/c/7754/
 
 I'm curious as to whether anybody shares my concern and if there is a 
 specific reason why nobody responded to my question as to why revocation 
 is not required for this new token scheme.   Anybody?
 
 It was discussed back when I wrote the Blueprint.  While it is possible to 
 do revocations with PKI,  it is expensive and requires a lot of extra 
 checking.  Revocation is a policy decision, and the assumption is that 
 people that are going to use PKI tokens are comfortable without 
 revocation.  Kerberos service tickets have the same limitation, and 
 Kerberos has been in deployment that way for close to 25 years.
 
 Assuming that PKI ticket lifespan is short enough,  revocation should not 
 be required.  What will be tricky is to balance the needs of long lived 
 tokens (delayed operations, long running operations) against the needs for 
 reasonable token timeout.
 
 PKI Token revocation would look like CRLs in the Certificate world.  While 
 they are used, they are clunky.  Each time a token gets revoked, a blast 
 message would have to go out to all registered parties informing them of 
 the revocation.  Keystone does not yet have a message queue interface, so 
 doing

[Openstack] Keystone: 'PKI Signed Tokens' lack support for revocation

2012-08-01 Thread Maru Newby
I see that support for PKI Signed Tokens has been added to Keystone without 
support for token revocation.  I tried to raise this issue on the bug report:

https://bugs.launchpad.net/keystone/+bug/1003962/comments/4

And the review:

https://review.openstack.org/#/c/7754/

I'm curious as to whether anybody shares my concern and if there is a specific 
reason why nobody responded to my question as to why revocation is not required 
for this new token scheme.   Anybody?

Thanks,


Maru


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone: 'PKI Signed Tokens' lack support for revocation

2012-08-01 Thread Maru Newby
Hi Adam,

I apologize if my questions were answered before.  I wasn't aware that what I 
perceive as a very serious security concern was openly discussed.  The 
arguments against revocation support, as you've described them, seem to be:

 - it's complicated/messy/expensive to implement and/or execute
 - Kerberos doesn't need it, so why would we?

I'm not sure why either of these arguments would justify the potential security 
hole that a lack of revocation represents, but I suppose a 'short enough' token 
lifespan could minimize that hole.  But how short a span are you suggesting as 
being acceptable?

The delay between when a user's access permissions change (whether roles, 
password or even account deactivation) and when the ticket reflects that change 
is my concern.  The default in Keystone has been 24h, which is clearly too 
long.  Something on the order of 5 minutes would be ideal, but then ticket 
issuance could become the bottleneck.  Validity that's much longer could be a 
real problem, though.  Maybe not at the cloud administration level, but for a 
given project I can imagine someone being fired and their access being revoked. 
 How long is an acceptable period for that ticket to still be valid?  How much 
damage could be done by someone who should no longer have access to an account 
if their access cannot be revoked, by anyone, at all?

I'm hearing that you, as the implementer of this feature, don't consider the 
lack of revocation to be an issue.  What am I missing?  Is support for 
revocation so repugnant that the potential security hole is preferable?  I can 
see that from a developer's perspective, but I don't understand why someone 
deploying Keystone wouldn't avoid PKI tokens until revocation support became 
available.

Thanks,


Maru 
 


On 2012-08-01, at 9:47 PM, Adam Young wrote:

 On 08/01/2012 09:19 PM, Maru Newby wrote:
 
 I see that support for PKI Signed Tokens has been added to Keystone without 
 support for token revocation.  I tried to raise this issue on the bug report:
 
 https://bugs.launchpad.net/keystone/+bug/1003962/comments/4
 
 And the review:
 
 https://review.openstack.org/#/c/7754/
 
 I'm curious as to whether anybody shares my concern and if there is a 
 specific reason why nobody responded to my question as to why revocation is 
 not required for this new token scheme.   Anybody?
 
 It was discussed back when I wrote the Blueprint.  While it is possible to do 
 revocations with PKI,  it is expensive and requires a lot of extra checking.  
 Revocation is a policy decision, and the assumption is that people that are 
 going to use PKI tokens are comfortable without revocation.  Kerberos 
 service tickets have the same limitation, and Kerberos has been in deployment 
 that way for close to 25 years.
 
 Assuming that PKI ticket lifespan is short enough,  revocation should not be 
 required.  What will be tricky is to balance the needs of long lived tokens 
 (delayed operations, long running operations) against the needs for 
 reasonable token timeout.
 
 PKI Token revocation would look like CRLs in the Certificate world.  While 
 they are used, they are clunky.  Each time a token gets revoked, a blast 
 message would have to go out to all registered parties informing them of the 
 revocation.  Keystone does not yet have a message queue interface, so doing 
 that is prohibitive in the first implementation.
 
 Note that users can get disabled, and token chaining will no longer work:  
 you won't be able to use a token to get a new token from Keystone.
 
 
 
 Thanks,
 
 
 Maru
 
 
 
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift Probetests

2012-06-26 Thread Maru Newby
Have I missed a response in the past week?


On 2012-06-19, at 12:14 PM, Jay Pipes wrote:

 On 06/19/2012 11:10 AM, Maru Newby wrote:
 The swift probetests are broken:
 
 https://bugs.launchpad.net/swift/+bug/1014931
 
 Does the swift team intend to maintain probetests going forward?  Given how 
 broken they are at present (bad imports, failures even when imports are 
 fixed), it would appear that probetests are not gating commits.  That should 
 probably change if the tests are to be maintainable.
 
 Hi Maru, cc'ing Jose from the Swift QA team at Rackspace...
 
 I don't know what the status is on these probetests or whether they are being 
 maintained. Jose or John, any ideas? If they are useful, we could bring them 
 into the module initialization of the Tempest Swift tests.
 
 Best,
 jay


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Swift Probetests

2012-06-19 Thread Maru Newby
The swift probetests are broken:

https://bugs.launchpad.net/swift/+bug/1014931

Does the swift team intend to maintain probetests going forward?  Given how 
broken they are at present (bad imports, failures even when imports are fixed), 
it would appear that probetests are not gating commits.  That should probably 
change if the tests are to be maintainable.

Thanks,


Maru
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-03 Thread Maru Newby
The REST API is the default interface, and the client tools target that 
interface.  Since the clients are CLIs more than Python APIs, they can be used 
from any language that can use a shell.  What exactly does reimplementing the 
clients for the sake of testing accomplish?  Double the maintenance effort for 
the same result, imho.
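
For instance, a trivial smoke pass driven entirely from the shell using the 
stock clients (the image id is a placeholder):

nova boot --image $IMAGE_ID --flavor 1 smoke-vm
nova list
nova delete smoke-vm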

Cheers,


Maru
 
On 2012-05-03, at 12:54 PM, Daryl Walleck wrote:

 So my first question is around this. So is the claim is that the client tools 
 are the default interface for the applications? While that works for coders 
 in python, what about people using other languages? Even then, there's no 
 guarantee that the clients in different languages are implemented in the same 
 way. Tempest was designed originally because while it does use an abstraction 
 between the API and the tests, there is nothing to assist the user by 
 retrying and the like. While I think there's a place for writing tests using 
 the command line clients, to me that would be a smoke test of a client and 
 not as much a smoke test of the API.
 
 Daryl
 
 On May 3, 2012, at 12:01 PM, Jay Pipes wrote:
 
 However, before this can happen, a number of improvements need to be made to 
 Tempest. The issue with the smoke tests in Tempest is that they aren't 
 really smoke tests. They do not use the default client tools (like 
 novaclient, keystoneclient, etc) and are not annotated consistently.
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Where does Keystone middleware for Swift belong?

2012-04-12 Thread Maru Newby
Hi John,

On 2012-04-11, at 8:03 PM, John Dickinson wrote:

 I do not think that these pieces of middleware belong in the core swift repo.

Understood.  

 1) Including them in swift would require swift to depend on keystone for full 
 testing.

As I mentioned in my initial email, the modules in question depend on Keystone 
only at the protocol level, and the unit tests run without any dependence on 
Keystone.  I'm presuming that by 'full testing' you mean the currently 
non-existent functional tests?

 2) (When the middleware was created) Keystone's API was in a state of 
 constant flux. Keystone has changed quite a bit since then, so this may not 
 be the case anymore.

As of the essex release, the API and protocol used to communicate identity via 
the environment can be considered more or less stable. 

 3) Swift does not require Keystone to run (and in fact many production 
 environments don't use keystone at all). [1]

Definitely a good point.

 4) We have previously removed auth systems from swift's core code in order to 
 simplify the codebase and allow separate dev cycles. All that is included now 
 is the most basic auth system required for dev work, stand-alone tests, and 
 POC deployments.
 Our thought thus far has been that auth systems providing auth for swift have 
 the responsibility to maintain their integration code. It's not swift's 
 responsibility to manage and maintain code for every auth system that wants 
 to provide an auth mechanism for swift.

I understand - swift core doesn't want to take on responsibility for 
integrating with different auth systems.

 That being said, there has been quite a bit of recent conversation about the 
 concept of *-contrib areas for each project that include optional 
 add-ons/extensions/plugins/etc or alternative APIs for each project. I expect 
 these conversations to continue next week in person at the summit. One option 
 is for the swift keystone middleware to be a part of such a contrib area.
 
 It sounds like there are good reasons for both projects to not want to 
 include these pieces of middleware in their respective core repos. Perhaps a 
 contrib area can be a compromise, but that does not answer who is responsible 
 for it (ie who maintains the code?). I think that's a separate, but related, 
 question.

+1 for allowing non-core functionality to be included via a 'contrib'-rooted 
package rather than having to reside outside the repo it targets.

Regardless of whether it is included as 'contrib' or not, would you have any 
interest in my factoring out common functionality from tempauth/swauth and 
making it available for reuse?

From my understanding, a swift auth mechanism has 3 parts - authentication 
(authn), token authentication (token_authn), and authorization (authz).  Both 
of the main implementations (tempauth and swauth) implement all 3 parts 
independently, though the only thing that differs between them is the authn 
backend.  The swift_auth middleware from Keystone similarly duplicates the 
authz functionality of tempauth and swauth.  If the common stuff could be 
extracted and made available as library modules or base classes, the cost of 
maintaining any given auth mechanism would be greatly reduced, and creating a 
new mechanism would be that much easier.

Please let me know your thoughts.

Cheers,


Maru
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Where does Keystone middleware for Swift belong?

2012-04-11 Thread Maru Newby
The Keystone repo currently contains the following Swift-specific wsgi 
middleware modules:

https://github.com/openstack/keystone/blob/master/keystone/middleware/s3_token.py
https://github.com/openstack/keystone/blob/master/keystone/middleware/swift_auth.py

Neither module depends directly on Keystone.  s3_token calls Keystone through 
HTTP, and swift_auth retrieves Keystone identity data from the wsgi 
environment.  Both modules, however, depend directly on Swift, and this forces 
the Keystone test suite to have to install Swift to run successfully.

Separate from the dependency issue, both middleware modules need to ensure that 
Swift-specific authorization requirements are met.  It doesn't make sense for 
the Keystone project to be responsible for this, since the Swift team is the 
final arbiter of how Swift authorization should behave.

All signs point to the Swift repo being the best place for these modules to be 
maintained.  Does this seem reasonable, or is there a better alternative?  
Please chime in, especially if you are Swift core.

Cheers,


Maru
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Where does Keystone middleware for Swift belong?

2012-04-11 Thread Maru Newby
Agreed that s3_token belongs in Swift, and as per Joshua, ec2_token probably 
belongs in Nova.

Cheers,


Maru

On 2012-04-11, at 3:07 PM, Chmouel Boudjnah wrote:

 On Thu, Apr 12, 2012 at 12:01 AM, Joshua Harlow harlo...@yahoo-inc.com 
 wrote:
 This also seems to make sense for other items in that directory /middleware?
 Should the EC2/S3 stuff be in nova? 
 
 I'd say the s3 middle-ware if it has to be moved would probably be a better 
 fit in swift than nova since it has been designed to be used with swift's 
 swift3 middleware.
 
 Chnmouel.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Being pedantic about pedanticism: HACKING styleguide

2012-03-23 Thread Maru Newby
Hi Andy,

On 2012-03-22, at 10:00 PM, Andy Smith wrote:

 The rule is there because it makes it obvious where you are using objects 
 from (and they aren't in the current namespace), prevents that where is this 
 defined -scan around the file- oh, it is being imported from this other 
 thing, let's go look there pattern, instead you see very obviously that it 
 isn't defined in the file and usually has a unique enough name already that 
 you know exactly where to look.
 
 It is something pulled from the google style guide.


I won't argue with the positive result of the rule - the avoidance of circular 
imports - but I would not want to endure an increase in code verbosity solely 
to make it clear where a symbol was defined.  Modern editors expose powerful 
regex search and jump-to-definition functions that make it easy to determine 
where a symbol comes from.

Cheers,


Maru
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Being pedantic about pedanticism: HACKING styleguide

2012-03-22 Thread Maru Newby
Hi Jay,

On 2012-03-22, at 2:13 PM, Jay Pipes wrote:
 
 Object Imports
 ==
 
 In addition, the following DOES NOT appear in Glance's section on imports:
 
 - Do not import objects, only modules
 
 Nowhere in PEP8 does it mention anything about not importing objects. In 
 fact, PEP8 says this:

I'm pretty sure the reason for this rule has nothing to do with PEP8, and is 
instead intended to avoid circular imports.  I don't see the utility in 
applying the rule to 3rd party modules, since circular imports wouldn't be an 
issue, but I do consider it a best practice for project code.

Cheers,


Maru___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone's Swift Integration

2012-03-20 Thread Maru Newby
Hi Chmouel,

Skipping for now is pragmatic, but I'd definitely want to implement stubs after 
your change lands to ensure that unit tests always run.

I vote for implementing support for unauthenticated access asap.  Anonymous 
access to Swift is a very important use case, and not having it means that  
Keystone's swift middleware is not usable as-is.  Deployers will have to 
implement and maintain that functionality themselves until this is resolved.  
What will it take to have it go in for this release?

Thanks,


Maru   

On 2012-03-20, at 2:43 AM, Chmouel Boudjnah wrote:

 Hi Maru,
 
 Sorry I have been taking long to come to you on this, I have revived
 review  4529[1] which add the swift tests. I was talking to termie
 about it sometime ago and the way we decided to do is to skip the
 tests if Swift is not installed[2]. Feel free to add stubs as this is
 not ideal.
 
 I was working as well on container-sync and anonymous requests but was
 not sure if this should go in for Folsom or for this release.
 
 Cheers,
 Chmouel.
 
 [1] https://review.openstack.org/#change,4529
 [2] Ideally I would love to have swift.common.*/swiftclient go to
 another package but that's probably a discussion for Folsom summit.
 
 On Tue, Mar 20, 2012 at 3:33 AM, Maru Newby mne...@internap.com wrote:
 I'd like to write unit tests for keystone.middleware.swift_auth in advance 
 of some functional changes (adding support for unauthenticated container 
 sync and referrer access).  It appears that swift_auth lacks unit tests, 
 though.  Is this due to its dependency on swift, or is there another reason?
 
 Given that untested code is difficult to maintain, what would the best 
 option be to add tests for swift_auth?  Ideally the module would just move 
 to the swift repo, but if for some reason that's not an option, I'm prepared 
 to use stubs.
 
 Thanks,
 
 
 Maru
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Keystone's Swift Integration

2012-03-19 Thread Maru Newby
I'd like to write unit tests for keystone.middleware.swift_auth in advance of 
some functional changes (adding support for unauthenticated container sync and 
referrer access).  It appears that swift_auth lacks unit tests, though.  Is 
this due to its dependency on swift, or is there another reason?

Given that untested code is difficult to maintain, what would the best option 
be to add tests for swift_auth?  Ideally the module would just move to the 
swift repo, but if for some reason that's not an option, I'm prepared to use 
stubs.

Thanks,


Maru

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Enabling data deduplication on Swift

2012-03-10 Thread Maru Newby
Hi Joe,

There's one huge difference between page deduplication and object 
deduplication:  Page size is small and predictable, whereas object size is not. 
 Given this, full compares would not be a good way to implement performant 
object deduplication in swift.

Thanks,


Maru

On 2012-03-10, at 9:57 AM, Joe Gordon wrote:

 Paulo, Caitlin, 
 
 
 Can SHA-1 collisions be generated?  If so can you point me to the article? 
 
 Also why compare hashes in the first place?  Linux 'Kernel Samepage Merging', 
 which does page deduplication for KVM, does a full compare to be safe [1].  
 Even if collisions can't be generated, what are the odds of a collision (for 
 SHA-1 and SHA-256) happening by chance when using Swift at scale?  
 
 
 best,
 Joe Gordon
 
 
 
 [1] http://www.linux-kvm.com/sites/default/files/KvmForum2008_KSM.pdf
 
 
 On Fri, Mar 9, 2012 at 4:44 PM, Caitlin Bestler caitlin.best...@nexenta.com 
 wrote:
 Paulo,
 
  
 
 I believe you’ll find that we’re thinking along the same lines. Please review 
 my proposal at http://etherpad.openstack.org/P9MMYSWE6U
 
  
 
 One quick observation is that SHA-1 is totally inadequate for fingerprinting 
 objects in a public object store. An attacker could easily
 
 predict the fingerprint of content likely to be posted, generate alternate 
 content that had the same SHA-1 fingerprint and pre-empt
 
 the signature. For example: an ISO of an open source OS distribution. If I 
 get my false content with the same fingerprint into the
 
 repository first then everyone who downloads that ISO will get my altered 
 copy.
 
 
  
 
 SHA-256 is really needed to make this type of attack infeasible.
 
  
 
 I also think that distributed deduplication works very well with object 
 versioning. Your comments on the proposal cited above
 
 would be great to hear.
 
  
 
 From: openstack-bounces+caitlin.bestler=nexenta@lists.launchpad.net 
 [mailto:openstack-bounces+caitlin.bestler=nexenta@lists.launchpad.net] On 
 Behalf Of Paulo Ricardo Motta Gomes
 Sent: Thursday, March 08, 2012 1:19 PM
 To: openstack@lists.launchpad.net
 
 
 Subject: [Openstack] Enabling data deduplication on Swift
 
  
 
 Hello everyone,
 
  
 
 I'm a student of the European Master in Distributed Computing (EMDC) 
 currently working on my master thesis on distributed content-addressable 
 storage/deduplication.
 
  
 
 I'm happy to announce I will be contributing the outcome of my thesis work to 
 OpenStack by enabling both object-level and block-level deduplication 
 functionality on Swift (https://answers.launchpad.net/swift/+question/156862).
 
  
 
 I have written a detailed blog post where I describe the initial architecture 
 of my solution: 
 http://paulormg.com/2012/03/05/enabling-deduplication-in-a-distributed-object-storage/
 
  
 
 Feedback from the OpenStack/Swift community would be very appreciated.
 
  
 
 Cheers,
 
  
 
 Paulo
 
  
 
 -- 
 European Master in Distributed Computing - www.kth.se/emdc
 Royal Institute of Technology - KTH
 
 Instituto Superior Técnico - IST
 
 http://paulormg.com
 
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Enabling data deduplication on Swift

2012-03-10 Thread Maru Newby
Ah, right.  I'm sorry to say I hadn't read TFA yet and wasn't thinking of 
block-level deduplication.  But though block-level compares might be possible, 
the distributed nature of swift would likely make this quite complicated.  It 
might make sense for KVM to compare memory pages, but I imagine that locality 
of data (all on the same host) figured heavily in that implementation decision.

I am, however, confused by your assertion that rsync 'relies on comparison to 
ensure that the match is real, and not just a hash collision'.  

From my reading, rsync computes an MD5 hash and a weaker (and easier to 
compute) 'rolling checksum' on the recipient data, and then the sender also 
computes the computationally cheap 'checksum' and will only bother computing 
the MD5 hash when the checksums match.  So rsync is optimizing hash 
generation, but is still relying on hash collision rather than 'comparison'.  
Or am I missing something?

Regardless, I enjoyed reading Paulo's post and learning about the role 
deduplication might play in balancing storage and retrieval costs in swift.

Cheers,


Maru


On 2012-03-10, at 1:13 PM, andi abes wrote:

 Maybe a happy path exists, between efficiency and correctness ;) I
 think rsync is probably a good comparison to the use case at hand
 (it identifies identical blocks between the source and target, and
 only sends deltas over the wire).
 It combines a quick hash to identify candidates that might be
 duplicates, but relies on comparison to ensure that the match is real,
 and not just a hash collision.
 
 See the source of all knowledge:
 http://en.wikipedia.org/wiki/Rsync#Algorithm
 
 
 
 
 
 
 On Sat, Mar 10, 2012 at 1:15 PM, Maru Newby mne...@internap.com wrote:
 Hi Joe,
 
 There's one huge difference between page deduplication and object
 deduplication:  Page size is small and predictable, whereas object size is
 not.  Given this, full compares would not be a good way to implement
 performant object deduplication in swift.
 
 Thanks,
 
 
 Maru
 
 
 On 2012-03-10, at 9:57 AM, Joe Gordon wrote:
 
 Paulo, Caitlin,
 
 
 Can SHA-1 collisions be generated?  If so can you point me to the article?
 
 Also why compare hashes in the first place?  Linux 'Kenel Samepage Merging',
 which does page deduplication for KVM, does a full compare to be safe [1].
  Even if collisions can't be generated, what are the odds of a collision
 (for SHA-1 and SHA-256) happening by chance when using Swift at scale?
 
 
 best,
 Joe Gordon
 
 
 
 
 [1] http://www.linux-kvm.com/sites/default/files/KvmForum2008_KSM.pdf
 
 
 On Fri, Mar 9, 2012 at 4:44 PM, Caitlin Bestler
 caitlin.best...@nexenta.com wrote:
 
 Paulo,
 
 
 
 I believe you’ll find that we’re thinking along the same lines. Please
 review my proposal at http://etherpad.openstack.org/P9MMYSWE6U
 
 
 
 One quick observation is that SHA-1 is totally inadequate for
 fingerprinting objects in a public object store. An attacker could easily
 
 predict the fingerprint of content likely to be posted, generate alternate
 content that had the same SHA-1 fingerprint and pre-empt
 
 the signature. For example: an ISO of an open source OS distribution. If I
 get my false content with the same fingerprint into the
 
 repository first then everyone who downloads that ISO will get my altered
 copy.
 
 
 
 SHA-256 is really needed to make this type of attack infeasible.
 
 
 
 I also think that distributed deduplication works very well with object
 versioning. Your comments on the proposal cited above
 
 would be great to hear.
 
 
 
 From: openstack-bounces+caitlin.bestler=nexenta@lists.launchpad.net
 [mailto:openstack-bounces+caitlin.bestler=nexenta@lists.launchpad.net]
 On Behalf Of Paulo Ricardo Motta Gomes
 Sent: Thursday, March 08, 2012 1:19 PM
 To: openstack@lists.launchpad.net
 
 
 Subject: [Openstack] Enabling data deduplication on Swift
 
 
 
 Hello everyone,
 
 
 
 I'm a student of the European Master in Distributed Computing (EMDC)
 currently working on my master thesis on distributed content-addressable
 storage/deduplication.
 
 
 
 I'm happy to announce I will be contributing the outcome of my thesis work
 to OpenStack by enabling both object-level and block-level deduplication
 functionality on Swift
 (https://answers.launchpad.net/swift/+question/156862).
 
 
 
 I have written a detailed blog post where I describe the initial
 architecture of my
 solution: 
 http://paulormg.com/2012/03/05/enabling-deduplication-in-a-distributed-object-storage/
 
 
 
 Feedback from the OpenStack/Swift community would be very appreciated.
 
 
 
 Cheers,
 
 
 
 Paulo
 
 
 
 --
 European Master in Distributed Computing - www.kth.se/emdc
 Royal Institute of Technology - KTH
 
 Instituto Superior Técnico - IST
 
 http://paulormg.com
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https

[Openstack] WebOb + DeprecationWarning

2012-03-07 Thread Maru Newby
I'm using a devstack-configured box with all the latest code and am running 
into DeprecationWarning wherever webob.Request.str_[GET,PUT,cookies,params] are 
accessed (they are being replaced by unicode equivalents).  Since Python < 2.7 
does not ignore DeprecationWarning, and I am running on Python 2.6, the 
warnings are being thrown as exceptions.

This has already been addressed in quantum:

https://bugs.launchpad.net/quantum/+bug/925372

I've submitted a patch for glance:

https://bugs.launchpad.net/glance/+bug/949677

I then realized that the nova api is similarly afflicted, and thought that some 
discussion might be warranted since so many projects were affected:

1. Should DeprecationWarning be ignored by OpenStack projects when using Python 
< 2.7?
2. If no to #1, should OpenStack projects be proactively surveyed for use of 
deprecated webob.Request properties, with an eye towards replacing such use 
immediately?  Note that the string properties will not be removed until WebOb 
1.2 and all projects are currently on 1.0.8.
3. As a follow-on to #2, is there going to be any fallout from switching from 
string to unicode webob.Request properties?  Web apps generally code 
defensively against non-ascii input, but being new to OpenStack I'm not sure 
how well this best-practice has been adhered to.

Thanks,


Maru
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Test Dependencies

2012-03-01 Thread Maru Newby
Is there any interest in adding unittest2 to the test dependencies for 
openstack projects?  I have found its enhanced assertions and 'with 
self.assertRaises' very useful in writing tests.  I see there have been past 
bugs that mentioned unittest2, and am wondering if the reasons for not adopting 
it still stand.

Separately, is the use of mox open to discussion?  mock was recently added as a 
dependency to quantum to perform library patching, which isn't supported by mox 
as far as I know.  The ability to do non-replay mocking is another useful 
feature of mock.  I'm not suggesting that mox be replaced, but am wondering if 
mock could be an additional dependency and used when the functionality provided 
by mox falls short.

Thanks,


Maru
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [DEVSTACK] officialize it!

2012-02-06 Thread Maru Newby
-1 on multi-distribution devstack.  Being cross-platform is arguably a place 
where chef/puppet/cfengine automation comes into play, and that's not where 
devstack's self-declared mission lies.

+1 to continuing to have Ubuntu be the reference devstack target.  Maintaining 
support for an apt-based distribution is much easier than the alternatives from 
a developer perspective.

Mind you, I don't think anybody would complain if Redhat et al wanted to 
maintain their own targeted version of devstack.

Thanks,


Maru

On 2012-02-06, at 5:22 PM, Joshua Harlow wrote:

 + There needs to be a way to install on multiple distributions (without 
 saying go figure out the deps yourself).
 
 I know everyone is ubuntu, ubuntu, ubuntu, but this really needs to be fixed 
 (process wise as well).
 
 :-/
 
 On 2/6/12 5:12 PM, Jay Pipes jaypi...@gmail.com wrote:
 
 cc'ing Matt Ray from OpsCode, since he and I discussed related topics
 this past Thursday during the bug squash day...
 
 On 02/06/2012 06:35 PM, Monty Taylor wrote:
  I think the thing you are discussing already exists.
 
  devstack is currently part of and managed by all of the normal OpenStack
  development infrastructure. The canonical repository for it is
  https://review.openstack.org/p/openstack-dev/devstack which is mirrored
  to https://github.com/openstack-dev/devstack. Every change to OpenStack
  is not only gated on devstack properly functioning, every change to
  devstack is gated on OpenStack properly functioning.
 
  Additionally, branches match up, so there is a stable/diablo that works
  with stable/diablo of all of the OpenStack branches and is a part of
  their trunk gating.
 
 This is a critical piece of the puzzle. If I want a Diablo install for
 testing, all I need to do is:
 
 cd $devstack_dir
 git checkout stable/diablo
 rm -rf /opt/stack
 ./stack.sh
 
 And I get a Diablo installation of OpenStack. Likewise, if I want a
 development (Essex currently) version of OpenStack, I just do:
 
 cd $devstack_dir
 git checkout master
 rm -rf /opt/stack
 ./stack.sh
 
 And I get a development installation of OpenStack.
 
 Now, I'm not entirely sure I even need to do the rm -rf /opt/stack part,
 but I do that for good measure, even if it does mean it takes a little
 longer... ;)
 
 This is not something I can do currently with the other deployment methods.
 
  In that sense, it's actually the first install OpenStack method that
  _is_ fully a part of OpenStack - even though there are also chef recipes
  and puppet modules in OpenStack's gerrit as well. (although at some
  point I wouldn't mind getting some installation testing and gating on
  them as well)
 
 Yes, and getting those projects aligned with the core projects' branch
 layout would be good, too. Followup email on the Chef stuff coming
 shortly, as Matt ray and I discussed this last Thursday at length and I
 think there's a lot we can do to improve things.
 
 -jay
 
  So it's pretty official already.
 
  However, as to becoming an official project - it's a developer tool,
  same as git-review or gerrit or the openstack nose-plugin. It's
  something that's useful for developers for developing and testing
  OpenStack. It is not, nor is it meant to be, part of the software we
  ship -- which is the current definition of what it means to be a
  core project. i.e. - If I'm a deployer and I want to install
  OpenStack - is this one of the things I install? With devstack - the
  answer is no.
 
  Is is MASSIVELY helpful and a part of everyday life for all of us?
  ABSOLUTELY (this is why we have to be careful with changes to it and run
  them through the same process everything else gets)
 
  All of that to say - I agree with you, and it's already done. :)
 
  Monty
 
  On 02/06/2012 01:43 PM, Joshua Harlow wrote:
  So the part that worries me about what u just said is the part about “it
  is already some kind of official project”.
  When you have to question whether a project is official or not, that
  seems to pretty much make the whole point for making it official ;)
 
  Overall though I think what u are saying is correct, but the overhead I
  don’t see as being a bad thing.
 
  In my view, release management is good since it allows developers to be
  able to setup a development environment for a given openstack release
  (good for when you need to fix bugs against a given release as well as
  good for providing a stable point for other distributions to know what
  goes in a release and what configs need to be adjusted to make that
  release work for all the different components). So I don’t see that as a
  drawback (even though yes it does add work/overhead in, but I don’t see
  that as a valid point, in any case).
 
  Downstream distribution, I am not exactly sure what you mean here?
 
  A technical lead I think is something good to have, as this
  script/code/documentation is not as simple as you might think (and most
  likely won’t get any simpler).
 
  Maybe the correct wording isn’t that this is 

Re: [Openstack] [CHEF] Aligning Cookbook Efforts

2012-02-06 Thread Maru Newby
I've submitted a Swift AIO cookbook for review:

https://review.openstack.org/#change,3613

It follows the latest single-node AIO instructions pretty much to the letter, 
so the resulting environment is well-documented.  We use this cookbook as the 
basis for building Swift development environments here at Internap.

Thanks,


Maru

On 2012-02-06, at 6:07 PM, Jay Pipes wrote:

 Hi Stackers,
 
 tl;dr
 -
 
 There are myriad Chef cookbooks out there in the ecosystem and locked up 
 behind various company firewalls. It would be awesome if we could agree to:
 
 * Align to a single origin repository for OpenStack cookbooks
 * Consolidate OpenStack Chef-based deployment experience into a single 
 knowledge base
 * Have branches on the origin OpenStack cookbooks repository that align with 
 core OpenStack projects
 * Automate the validation and testing of these cookbooks on multiple 
 supported versions of the OpenStack code base
 
 Details
 ---
 
 Current State of Forks
 ==
 
 Matt Ray and I tried to outline the current state of the various OpenStack 
 Chef cookbooks this past Thursday, and we came up with the following state of 
 affairs:
 
 ** The official OpenStack Chef cookbooks **
 
 https://github.com/openstack/openstack-chef
 
 These chef cookbooks are the ones maintained mostly by Dan Prince and Brian 
 Lamar and these are the cookbooks used by the SmokeStack project. The 
 cookbooks contained in the above repo can install all the core OpenStack 
 projects with the exception of Swift and Horizon.
 
 This repo is controlled by the Gerrit instance at review.openstack.org just 
 like other core OpenStack projects.
 
 However, these cookbooks DO NOT currently have a stable/diablo branch -- they 
 are updated when the development trunk of any OpenStack project merges a 
 commit that requires deployment or configuration-related changes to their 
 associated cookbook.
 
 Important note: it's easy for Dan and Brian to know when updates to these 
 cookbooks are necessary -- SmokeStack will bomb out if a deployment-affecting 
 configuration change hits a core project trunk :)
 
 These cookbooks are the ONLY cookbooks that contain stuff for deploying with 
 XenServer, AFAICT.
 
 ** NTT PF Lab Diablo Chef cookbooks **
 
 https://github.com/ntt-pf-lab/openstack-chef/
 
 So, NTT PF Lab forked the upstream Chef cookbooks back on Nov 11, 2011, 
 because they needed a set of Chef cookbooks for OpenStack that functioned for 
 the Diablo code base.
 
 While Nov 11, 2011, is not the *exact* date of the Diablo release, these 
 cookbooks do in fact work for a Diablo install -- Nati Ueno is using them for 
 the FreeCloud deployment so we know they work...
 
 ** OpsCode OpenStack Chef Cookbooks **
 
 Matt Ray from OpsCode created a set of cookbooks for OpenStack for the Cactus 
 release of OpenStack:
 
 https://github.com/mattray/openstack-cookbooks
 http://wiki.opscode.com/display/chef/Deploying+OpenStack+with+Chef
 
 These cookbooks were forked from the Anso Labs' original OpenStack cookbooks 
 from the Bexar release and were the basis for the Chef work that Dell did for 
 Crowbar. Crowbar was originally based on Cactus, and according to Matt, the 
 repositories of OpenStack cookbooks that OpsCode houses internally and uses 
 most often are Cactus-based cookbooks. (Matt, please correct me if I am wrong 
 here...)
 
 ** Rackspace CloudBuilders OpenStack Chef Cookbooks **
 
 The RCB team also has a repository of OpenStack Chef cookbooks:
 
 https://github.com/cloudbuilders/openstack-cookbooks
 
 Now, GitHub *says* that these cookbooks were forked from the official 
 upstream cookbooks, but I do not think that is correct. Looking at this repo, 
 I believe that this repo was *actually* forked from the Anso Labs OpenStack 
 Chef Cookbooks, as the list of cookbooks is virtually identical.
 
 ** Anso Labs OpenStack Chef Cookbooks **
 
 These older cookbooks are in this repo:
 
 https://github.com/ansolabs/openstack-cookbooks/tree/master/cookbooks
 
 Interestingly, this repo DOES contain a cookbook for Swift.
 
 Current State of Documentation
 ==
 
 Documentation for best practices on using Chef for your OpenStack deployments 
 is, well, a bit scattered. Matt Ray has some good information on the README 
 on his cookbook repo and the OpsCode wiki:
 
 https://github.com/mattray/openstack-cookbooks/blob/cactus/README.md
 http://wiki.opscode.com/display/chef/Deploying+OpenStack+with+Chef
 
 But it is unfortunately not going to help people looking to deploy Diablo and 
 later versions of OpenStack.
 
 Most of the other repos contain virtually no documentation on using the 
 cookbooks or how they are written.
 
 I have a suspicion that one of the reasons that there has been such a 
 proliferation of cookbooks has been the lack of documentation pointing people 
 to an appropriate repo, how to use the cookbooks properly, and what the best 
 practices for deployment are. That, and the 

[Openstack] Swift-core PPA appears broken

2011-12-16 Thread Maru Newby
While following the saio instructions 
(http://swift.openstack.org/development_saio.html) I ran into a problem with 
the swift-core ppa.  I get the following on apt-get update after adding the ppa 
repo:

W: Failed to fetch 
http://ppa.launchpad.net/swift-core/ppa/ubuntu/dists/lucid/main/binary-amd64/Packages.gz
  404  Not Found

It appears that the lucid dist was removed from the ppa, probably on Dec 11, 
as the directory /swift-core/ppa/ubuntu/dists is shown as modified on that 
date and no longer contains the lucid subdirectory.

Who has responsibility for maintaining the ppa repo, and is there anything I 
can do to help get it working again?  I'm also interested in finding out how 
the dependencies in the swift-core ppa are maintained, as I would like to be 
able to isolate python dependencies in a virtualenv but the swift source repo 
doesn't appear to list the dependencies and their required versions. 

Thanks!


Maru
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift-core PPA appears broken

2011-12-16 Thread Maru Newby
Please disregard.  I've added a bug in launchpad and submitted a fix to gerrit. 
 Apologies for the newb email.

Thanks!


Maru


On 2011-12-16, at 7:24 PM, Maru Newby wrote:

 While following the saio instructions 
 (http://swift.openstack.org/development_saio.html) I ran into a problem with 
 the swift-core ppa.  I get the following on apt-get update after adding the 
 ppa repo:
 
 W: Failed to fetch 
 http://ppa.launchpad.net/swift-core/ppa/ubuntu/dists/lucid/main/binary-amd64/Packages.gz
   404  Not Found
 
 It appears that the lucid dist was removed from the ppa, probably on 
 Dec 11, as the directory /swift-core/ppa/ubuntu/dists is shown as modified on 
 that date and no longer contains the lucid subdirectory.
 
 Who has responsibility for maintaining the ppa repo, and is there anything I 
 can do to help get it working again?  I'm also interested in finding out how 
 the dependencies in the swift-core ppa are maintained, as I would like to be 
 able to isolate python dependencies in a virtualenv but the swift source repo 
 doesn't appear to list the dependencies and their required versions. 
 
 Thanks!
 
 
 Maru


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp