Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-31 Thread Sullivan, Jon Paul
 -Original Message-
 From: Clint Byrum [mailto:cl...@fewbar.com]
 Sent: 28 October 2014 18:34
 To: openstack-dev
 Subject: Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on
 Devstack
 
 Excerpts from Ben Nemec's message of 2014-10-28 11:13:22 -0700:
  On 10/28/2014 06:18 AM, Steven Hardy wrote:
   On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
   On 28 October 2014 22:51, Steven Hardy sha...@redhat.com wrote:
   On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
   So this should work and I think it's generally good.
  
   But - I'm curious, you only need a single image for devtest to
   experiment with tuskar - the seed - which should be about the
   same speed (or faster, if you have hot caches) than devstack, and
   you'll get Ironic and nodes registered so that the panels have
 stuff to show.
  
   TBH it's not so much about speed (although, for me, devstack is
   faster as I've not yet mirrored all-the-things locally, I only
   have a squid cache), it's about establishing a productive
 test/debug/hack/re-test workflow.
  
   mm, squid-cache should still give pretty good results. If it's not,
   bug time :). That said...
  
   I've been configuring devstack to create Ironic nodes FWIW, so
   that works OK too.
  
   Cool.
  
   It's entirely possible I'm missing some key information on how to
   compose my images to be debug friendly, but here's my devtest
 frustration:
  
   1. Run devtest to create seed + overcloud
  
   If you're in a dev-of-a-component cycle, I wouldn't do that. I'd run
   devtest_seed.sh only. The seed has everything on it, so the rest is
   waste (unless you need all the overcloud bits - in which case I'd
   still tune things - e.g. I'd degrade to single node, and I'd
   iterate on devtest_overcloud.sh, *not* on the full plumbing each
 time).
  
   Yup, I went round a few iterations of those, e.g. running
   devtest_overcloud with -c so I could more quickly re-deploy, until I
   realized I could drive heat directly, so I started doing that :)
  
   Most of my investigations atm are around Heat issues,
   or testing new tripleo-heat-templates stuff, so I do need to spin up
   the overcloud (and update it, which is where the fun really began -
   ref bugs #1383709 and #1384750 ...)
  
   2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
   3. Log onto seed VM to debug the issue.  Discover there are no
 logs.
  
   We should fix that - is there a bug open? That's a fairly serious
   issue for debugging a deployment.
  
   I've not yet raised one, as I wasn't sure whether it was by
   design, or whether I was missing some crucial element from my DiB config.
  
   If you consider it a bug, I'll raise one and look into a fix.
  
   4. Restart the heat-engine logging somewhere 5. Realize
   heat-engine isn't quite latest master 6. Git pull heat, discover
   networking won't allow it
  
   Ugh. That's horrid. Is it a Fedora thing? My seed here can git pull
   totally fine - I've depended heavily on that to debug various
   things over time.
  
   Not yet dug into it in a lot of detail tbh, my other VMs can access
   the internet fine so it may be something simple, I'll look into it.
 
  Are you sure this is a networking thing?  When I try a git pull I get
 this:
 
  [root@localhost heat]# git pull
  fatal:
  '/home/bnemec/.cache/image-create/source-
 repositories/heat_dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84'
  does not appear to be a git repository
  fatal: Could not read from remote repository.
 
  That's actually because the git repo on the seed would have come from
  the local cache during the image build.  We should probably reset the
  remote to a sane value once we're done with the cache one.
 
  Networking-wise, my Fedora seed can pull from git.o.o just fine
 though.
 
 
 I think we should actually just rip the git repos out of the images in
 production installs. What good does it do sending many MB of copies of
 the git repos around? Perhaps just record HEAD somewhere in a manifest
 and rm -r the source repos during cleanup.d.

The manifests already capture this.  For example,
/etc/dib-manifests/dib-manifest-git-seed on the seed.  The format of that file
follows the source-repositories file format, for reuse in builds.  This means it
has the on-disk location of the repo, the remote used, and the sha1 pulled for
the build.
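
[Editor's note: for illustration, an entry in that manifest might look like the
following - a hypothetical line in the source-repositories format of name,
type, destination, remote location and pulled ref; the sha1 is borrowed from
the error message quoted above, the paths are assumptions:

    heat git /opt/stack/heat https://git.openstack.org/openstack/heat dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84
]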

 
 But, for supporting dev/test, we could definitely leave them there and
 change the remotes back to their canonical (as far as diskimage-builder
 knows) sources.
 

Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud


Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Steven Hardy
On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
 So this should work and I think it's generally good.
 
 But - I'm curious, you only need a single image for devtest to
 experiment with tuskar - the seed - which should be about the same
 speed (or faster, if you have hot caches) than devstack, and you'll
 get Ironic and nodes registered so that the panels have stuff to show.

TBH it's not so much about speed (although, for me, devstack is faster as
I've not yet mirrored all-the-things locally, I only have a squid cache),
it's about establishing a productive test/debug/hack/re-test workflow.

I've been configuring devstack to create Ironic nodes FWIW, so that works
OK too.

It's entirely possible I'm missing some key information on how to compose
my images to be debug friendly, but here's my devtest frustration:

1. Run devtest to create seed + overcloud
2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
3. Log onto seed VM to debug the issue.  Discover there are no logs.
4. Restart the heat-engine logging somewhere
5. Realize heat-engine isn't quite latest master
6. Git pull heat, discover networking won't allow it
7. scp latest master from my laptop-VM
8. setup.py install, discover the dependencies aren't all there
9. Give up and try to recreate issue on devstack

I'm aware there are probably solutions to all of these problems, but my
point is basically that devstack on my laptop already solves all of them,
so... maybe I can just use that?  That's my thinking, anyway.

E.g. here's my tried, tested and comfortable workflow:

1. Run stack.sh on my laptop
2. Do a heat stack-create
3. Hit a problem, look at screen logs
4. Fix problem, restart heat, re-test, git-review, done!

I realize I'm swimming against the tide a bit here, so feel free to educate
me if there's an easier way to reduce the developer friction that exists
with devtest :)

Anyway, that's how I got here: frustration debugging Heat turned into
integrating tuskar with devstack, because I wanted to avoid the same
experience while hacking on tuskar, basically.

Thanks!

Steve



Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Robert Collins
On 28 October 2014 22:51, Steven Hardy sha...@redhat.com wrote:
 On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
 So this should work and I think it's generally good.

 But - I'm curious, you only need a single image for devtest to
 experiment with tuskar - the seed - which should be about the same
 speed (or faster, if you have hot caches) than devstack, and you'll
 get Ironic and nodes registered so that the panels have stuff to show.

 TBH it's not so much about speed (although, for me, devstack is faster as
 I've not yet mirrored all-the-things locally, I only have a squid cache),
 it's about establishing a productive test/debug/hack/re-test workflow.

mm, squid-cache should still give pretty good results. If it's not, bug
time :). That said...

 I've been configuring devstack to create Ironic nodes FWIW, so that works
 OK too.

Cool.

 It's entirely possible I'm missing some key information on how to compose
 my images to be debug friendly, but here's my devtest frustration:

 1. Run devtest to create seed + overcloud

If you're in a dev-of-a-component cycle, I wouldn't do that. I'd run
devtest_seed.sh only. The seed has everything on it, so the rest is
waste (unless you need all the overcloud bits - in which case I'd
still tune things - e.g. I'd degrade to single node, and I'd iterate
on devtest_overcloud.sh, *not* on the full plumbing each time).
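
[Editor's note: for concreteness, that reduced loop is just the following - a
sketch; the script names are the ones mentioned in this thread, assuming a
tripleo-incubator checkout with the usual devtest environment already sourced:

    # Build and boot only the seed VM - it runs every service:
    ./scripts/devtest_seed.sh
    # If overcloud bits are needed, iterate on this step alone
    # (-c reuses existing images, per the -c usage mentioned below):
    ./scripts/devtest_overcloud.sh -c
]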

 2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
 3. Log onto seed VM to debug the issue.  Discover there are no logs.

We should fix that - is there a bug open? That's a fairly serious issue
for debugging a deployment.

 4. Restart the heat-engine logging somewhere
 5. Realize heat-engine isn't quite latest master
 6. Git pull heat, discover networking won't allow it

Ugh. That's horrid. Is it a Fedora thing? My seed here can git pull
totally fine - I've depended heavily on that to debug various things
over time.

 7. scp latest master from my laptop-VM
 8. setup.py install, discover the dependencies aren't all there

This one might be docs: heat is installed in a venv -
/opt/stack/venvs/heat, so the deps should be in that, not in the
global site-packages.
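
[Editor's note: in that case, a quick way to check and update what the venv
actually has is something like this - a hedged sketch; the venv path is from
above, the source path is an assumption:

    # Inspect the heat the venv really contains:
    /opt/stack/venvs/heat/bin/pip freeze | grep heat
    # Install an updated tree into the venv, not global site-packages:
    sudo /opt/stack/venvs/heat/bin/pip install -U /opt/stack/heat
]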

 9. Give up and try to recreate issue on devstack

:)

 I'm aware there are probably solutions to all of these problems, but my
 point is basically that devstack on my laptop already solves all of them,
 so... maybe I can just use that?  That's my thinking, anyway.

Sure - it's fine to use devstack. In fact, we don't *want* devtest to
supplant devstack, they're solving different problems.

 E.g. here's my tried, tested and comfortable workflow:

 1. Run stack.sh on my laptop
 2. Do a heat stack-create
 3. Hit a problem, look at screen logs
 4. Fix problem, restart heat, re-test, git-review, done!

 I realize I'm swimming against the tide a bit here, so feel free to educate
 me if there's an easier way to reduce the developer friction that exists
 with devtest :)

Quite possibly there isn't. Some of your issues are ones we should not
have at all, and I'd like to see those removed. But they are different
tools for different scenarios, so I'd expect some impedance mismatch
doing single-code-base-dev in a prod-deploy-context, and I only asked
about the specifics to get a better understanding of what's up - I
think it's totally appropriate to be doing your main dev with devstack.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Steven Hardy
On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
 On 28 October 2014 22:51, Steven Hardy sha...@redhat.com wrote:
  On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
  So this should work and I think it's generally good.
 
  But - I'm curious, you only need a single image for devtest to
  experiment with tuskar - the seed - which should be about the same
  speed (or faster, if you have hot caches) than devstack, and you'll
  get Ironic and nodes registered so that the panels have stuff to show.
 
  TBH it's not so much about speed (although, for me, devstack is faster as
  I've not yet mirrored all-the-things locally, I only have a squid cache),
  it's about establishing a productive test/debug/hack/re-test workflow.
 
 mm, squid-cache should still give pretty good results. If it's not, bug
 time :). That said...
 
  I've been configuring devstack to create Ironic nodes FWIW, so that works
  OK too.
 
 Cool.
 
  It's entirely possible I'm missing some key information on how to compose
  my images to be debug friendly, but here's my devtest frustration:
 
  1. Run devtest to create seed + overcloud
 
 If you're in a dev-of-a-component cycle, I wouldn't do that. I'd run
 devtest_seed.sh only. The seed has everything on it, so the rest is
 waste (unless you need all the overcloud bits - in which case I'd
 still tune things - e.g. I'd degrade to single node, and I'd iterate
 on devtest_overcloud.sh, *not* on the full plumbing each time).

Yup, I went round a few iterations of those, e.g. running devtest_overcloud
with -c so I could more quickly re-deploy, until I realized I could drive
heat directly, so I started doing that :)
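
[Editor's note: driving heat directly amounts to something like this - a
sketch using the 2014-era heatclient CLI; the template and environment file
names are illustrative:

    # Initial overcloud deploy:
    heat stack-create overcloud -f overcloud.yaml -e overcloud-env.yaml
    # Then iterate on template changes with updates only:
    heat stack-update overcloud -f overcloud.yaml -e overcloud-env.yaml
]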

Most of my investigations atm are around Heat issues, or
testing new tripleo-heat-templates stuff, so I do need to spin up the
overcloud (and update it, which is where the fun really began - ref bugs
#1383709 and #1384750 ...)

  2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
  3. Log onto seed VM to debug the issue.  Discover there are no logs.
 
 We should fix that - is there a bug open? That's a fairly serious issue
 for debugging a deployment.

I've not yet raised one, as I wasn't sure whether it was by design, or
whether I was missing some crucial element from my DiB config.

If you consider it a bug, I'll raise one and look into a fix.

  4. Restart the heat-engine logging somewhere
  5. Realize heat-engine isn't quite latest master
  6. Git pull heat, discover networking won't allow it
 
 Ugh. That's horrid. Is it a Fedora thing? My seed here can git pull
 totally fine - I've depended heavily on that to debug various things
 over time.

Not yet dug into it in a lot of detail tbh, my other VMs can access the
internet fine so it may be something simple, I'll look into it.

  7. scp latest master from my laptop-VM
  8. setup.py install, discover the dependencies aren't all there
 
 This one might be docs: heat is installed in a venv -
 /opt/stack/venvs/heat, so the deps should be in that, not in the
 global site-packages.

Aha, I did think that may be the case, but I'd already skipped to step (9)
by that point :D

  9. Give up and try to recreate issue on devstack
 
 :)
 
  I'm aware there are probably solutions to all of these problems, but my
  point is basically that devstack on my laptop already solves all of them,
  so... maybe I can just use that?  That's my thinking, anyway.
 
 Sure - it's fine to use devstack. In fact, we don't *want* devtest to
 supplant devstack, they're solving different problems.
 
  E.g. here's my tried, tested and comfortable workflow:
 
  1. Run stack.sh on my laptop
  2. Do a heat stack-create
  3. Hit a problem, look at screen logs
  4. Fix problem, restart heat, re-test, git-review, done!
 
  I realize I'm swimming against the tide a bit here, so feel free to educate
  me if there's an easier way to reduce the developer friction that exists
  with devtest :)
 
 Quite possibly there isn't. Some of your issues are ones we should not
 have at all, and I'd like to see those removed. But they are different
 tools for different scenarios, so I'd expect some impedance mismatch
 doing single-code-base-dev in a prod-deploy-context, and I only asked
 about the specifics to get a better understanding of what's up - I
 think it's totally appropriate to be doing your main dev with devstack.

Ok, thanks for the confirmation - I'll report back if/when I get the full
overcloud working on devstack, given that it doesn't sound like a totally crazy
thing to spend a bit of time on :)

Steve



Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Steven Hardy
On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
 On 28 October 2014 22:51, Steven Hardy sha...@redhat.com wrote:
  On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
[snip]
  3. Log onto seed VM to debug the issue.  Discover there are no logs.
 
 We should fix that - is there a bug open? That's a fairly serious issue
 for debugging a deployment.

heh, turns out there's already a long-standing bug (raised by you :D):

https://bugs.launchpad.net/tripleo/+bug/1290759

After some further experimentation and IRC discussion, it turns out that,
in theory, devtest_seed.sh --debug-logging should do what I want - only atm
it doesn't work.

https://review.openstack.org/#/c/130369 looks like it may solve that in due
course.

The other (now obvious) thing I was missing was that, despite none of the
services being configured to log to any file, the console log ends up
in /var/log/messages, so that was just a misunderstanding on my part.  I
was confused by the fact that the service configs (including use_syslog)
are all false/unset.
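
[Editor's note: so, until the review above lands, something like this is the
quickest route to the heat logs on the seed - a sketch; the systemd unit name
is an assumption, adjust to whatever systemctl lists:

    # Rebuild the seed with verbose service logging (once the flag works):
    ./scripts/devtest_seed.sh --debug-logging
    # Meanwhile, console output already lands in syslog:
    sudo grep heat-engine /var/log/messages | tail -50
    # Or follow it live via the journal:
    sudo journalctl -u heat-engine -f
]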

Thanks,

Steve



Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Jay Dobies

5. API: You can't create or modify roles via the API, or even view the
content of the role after creating it


None of that is in place yet, mostly due to time. The tuskar-load-roles
script was a short-term solution for getting a base set of roles in.
Conceptually you're on target with what I want to see in the coming releases.




Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Ben Nemec
On 10/28/2014 06:18 AM, Steven Hardy wrote:
 On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
 On 28 October 2014 22:51, Steven Hardy sha...@redhat.com wrote:
 On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
 So this should work and I think it's generally good.

 But - I'm curious, you only need a single image for devtest to
 experiment with tuskar - the seed - which should be about the same
 speed (or faster, if you have hot caches) than devstack, and you'll
 get Ironic and nodes registered so that the panels have stuff to show.

 TBH it's not so much about speed (although, for me, devstack is faster as
 I've not yet mirrored all-the-things locally, I only have a squid cache),
 it's about establishing a productive test/debug/hack/re-test workflow.

 mm, squid-cache should still give pretty good results. If it's not, bug
 time :). That said...

 I've been configuring devstack to create Ironic nodes FWIW, so that works
 OK too.

 Cool.

 It's entirely possible I'm missing some key information on how to compose
 my images to be debug friendly, but here's my devtest frustration:

 1. Run devtest to create seed + overcloud

 If you're in a dev-of-a-component cycle, I wouldn't do that. I'd run
 devtest_seed.sh only. The seed has everything on it, so the rest is
 waste (unless you need all the overcloud bits - in which case I'd
 still tune things - e.g. I'd degrade to single node, and I'd iterate
 on devtest_overcloud.sh, *not* on the full plumbing each time).
 
 Yup, I went round a few iterations of those, e.g. running devtest_overcloud
 with -c so I could more quickly re-deploy, until I realized I could drive
 heat directly, so I started doing that :)
 
 Most of my investigations atm are around Heat issues, or
 testing new tripleo-heat-templates stuff, so I do need to spin up the
 overcloud (and update it, which is where the fun really began - ref bugs
 #1383709 and #1384750 ...)
 
 2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
 3. Log onto seed VM to debug the issue.  Discover there are no logs.

 We should fix that - is there a bug open? That's a fairly serious issue
 for debugging a deployment.
 
 I've not yet raised one, as I wasn't sure whether it was by design, or
 whether I was missing some crucial element from my DiB config.
 
 If you consider it a bug, I'll raise one and look into a fix.
 
 4. Restart the heat-engine logging somewhere
 5. Realize heat-engine isn't quite latest master
 6. Git pull heat, discover networking won't allow it

 Ugh. That's horrid. Is it a Fedora thing? My seed here can git pull
 totally fine - I've depended heavily on that to debug various things
 over time.
 
 Not yet dug into it in a lot of detail tbh, my other VMs can access the
 internet fine so it may be something simple, I'll look into it.

Are you sure this is a networking thing?  When I try a git pull I get this:

[root@localhost heat]# git pull
fatal:
'/home/bnemec/.cache/image-create/source-repositories/heat_dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84'
does not appear to be a git repository
fatal: Could not read from remote repository.

That's actually because the git repo on the seed would have come from
the local cache during the image build.  We should probably reset the
remote to a sane value once we're done with the cache one.

Networking-wise, my Fedora seed can pull from git.o.o just fine though.
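
[Editor's note: until that reset is done automatically, the workaround on the
seed is a quick one-liner - a sketch; substitute the project and its canonical
URL as appropriate:

    # Point the repo back at its canonical remote instead of the build cache:
    git remote set-url origin https://git.openstack.org/openstack/heat
    git pull
]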

 
 7. scp latest master from my laptop-VM
 8. setup.py install, discover the dependencies aren't all there

 This one might be docs: heat is installed in a venv -
 /opt/stack/venvs/heat, so the deps should be in that, not in the
 global site-packages.
 
 Aha, I did think that may be the case, but I'd already skipped to step (9)
 by that point :D
 
 9. Give up and try to recreate issue on devstack

 :)

 I'm aware there are probably solutions to all of these problems, but my
 point is basically that devstack on my laptop already solves all of them,
 so... maybe I can just use that?  That's my thinking, anyway.

 Sure - it's fine to use devstack. In fact, we don't *want* devtest to
 supplant devstack, they're solving different problems.

 E.g. here's my tried, tested and comfortable workflow:

 1. Run stack.sh on my laptop
 2. Do a heat stack-create
 3. Hit a problem, look at screen logs
 4. Fix problem, restart heat, re-test, git-review, done!

 I realize I'm swimming against the tide a bit here, so feel free to educate
 me if there's an easier way to reduce the developer friction that exists
 with devtest :)

 Quite possibly there isn't. Some of your issues are ones we should not
 have at all, and I'd like to see those removed. But they are different
 tools for different scenarios, so I'd expect some impedance mismatch
 doing single-code-base-dev in a prod-deploy-context, and I only asked
 about the specifics to get a better understanding of what's up - I
 think it's totally appropriate to be doing your main dev with devstack.
 
 Ok, thanks for the confirmation - I'll report back if/when I get the full
 overcloud working on devstack, given that it doesn't sound like a totally
 crazy thing to spend a bit of time on :)

Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2014-10-28 11:13:22 -0700:
 On 10/28/2014 06:18 AM, Steven Hardy wrote:
  On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
  On 28 October 2014 22:51, Steven Hardy sha...@redhat.com wrote:
  On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
  So this should work and I think it's generally good.
 
  But - I'm curious, you only need a single image for devtest to
  experiment with tuskar - the seed - which should be about the same
  speed (or faster, if you have hot caches) than devstack, and you'll
  get Ironic and nodes registered so that the panels have stuff to show.
 
  TBH it's not so much about speed (although, for me, devstack is faster as
  I've not yet mirrored all-the-things locally, I only have a squid cache),
  it's about establishing a productive test/debug/hack/re-test workflow.
 
  mm, squid-cache should still give pretty good results. If it's not, bug
  time :). That said...
 
  I've been configuring devstack to create Ironic nodes FWIW, so that works
  OK too.
 
  Cool.
 
  It's entirely possible I'm missing some key information on how to compose
  my images to be debug friendly, but here's my devtest frustration:
 
  1. Run devtest to create seed + overcloud
 
  If you're in a dev-of-a-component cycle, I wouldn't do that. I'd run
  devtest_seed.sh only. The seed has everything on it, so the rest is
  waste (unless you need all the overcloud bits - in which case I'd
  still tune things - e.g. I'd degrade to single node, and I'd iterate
  on devtest_overcloud.sh, *not* on the full plumbing each time).
  
  Yup, I went round a few iterations of those, e.g. running devtest_overcloud
  with -c so I could more quickly re-deploy, until I realized I could drive
  heat directly, so I started doing that :)
  
  Most of my investigations atm are around Heat issues, or
  testing new tripleo-heat-templates stuff, so I do need to spin up the
  overcloud (and update it, which is where the fun really began - ref bugs
  #1383709 and #1384750 ...)
  
  2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
  3. Log onto seed VM to debug the issue.  Discover there are no logs.
 
  We should fix that - is there a bug open? That's a fairly serious issue
  for debugging a deployment.
  
  I've not yet raised one, as I wasn't sure whether it was by design, or
  whether I was missing some crucial element from my DiB config.
  
  If you consider it a bug, I'll raise one and look into a fix.
  
  4. Restart the heat-engine logging somewhere
  5. Realize heat-engine isn't quite latest master
  6. Git pull heat, discover networking won't allow it
 
  Ugh. That's horrid. Is it a Fedora thing? My seed here can git pull
  totally fine - I've depended heavily on that to debug various things
  over time.
  
  Not yet dug into it in a lot of detail tbh, my other VMs can access the
  internet fine so it may be something simple, I'll look into it.
 
 Are you sure this is a networking thing?  When I try a git pull I get this:
 
 [root@localhost heat]# git pull
 fatal:
 '/home/bnemec/.cache/image-create/source-repositories/heat_dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84'
 does not appear to be a git repository
 fatal: Could not read from remote repository.
 
 That's actually because the git repo on the seed would have come from
 the local cache during the image build.  We should probably reset the
 remote to a sane value once we're done with the cache one.
 
 Networking-wise, my Fedora seed can pull from git.o.o just fine though.
 

I think we should actually just rip the git repos out of the images in
production installs. What good does it do sending many MB of copies of
the git repos around? Perhaps just record HEAD somewhere in a manifest
and rm -r the source repos during cleanup.d.

But, for supporting dev/test, we could definitely leave them there and
change the remotes back to their canonical (as far as diskimage-builder
knows) sources.
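
[Editor's note: a minimal sketch of such a hook, with loud caveats - the phase
mechanics, the $TMP_MOUNT_PATH variable, and the /opt/stack layout are all
assumptions here, not verified diskimage-builder behaviour:

    #!/bin/bash
    # Hypothetical cleanup.d hook: strip git history from the built image.
    # The sha1 of each checkout is already recorded in /etc/dib-manifests,
    # so the .git directories are dead weight in a production image.
    set -eu
    for gitdir in $TMP_MOUNT_PATH/opt/stack/*/.git; do
        if [ -d "$gitdir" ]; then
            rm -rf "$gitdir"
        fi
    done
]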



Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Fox, Kevin M


From: Clint Byrum [cl...@fewbar.com]
Sent: Tuesday, October 28, 2014 11:34 AM
To: openstack-dev
Subject: Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

*SNIP*

 I think we should actually just rip the git repos out of the images in
 production installs. What good does it do sending many MB of copies of
 the git repos around? Perhaps just record HEAD somewhere in a manifest
 and rm -r the source repos during cleanup.d.

 But, for supporting dev/test, we could definitely leave them there and
 change the remotes back to their canonical (as far as diskimage-builder
 knows) sources.

You could also set git to pull only the latest revision to save a bunch of 
space but still allow updating easily.
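
[Editor's note: concretely, that would look something like this - a sketch;
shallow clones save the history space while still allowing updates, though
pushing and reviewing from one has caveats:

    # Clone only the tip commit instead of the full history:
    git clone --depth 1 https://git.openstack.org/openstack/heat
    # Later updates can stay shallow too:
    cd heat && git fetch --depth 1 origin master
]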



Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Ben Nemec
On 10/28/2014 01:34 PM, Clint Byrum wrote:
 Excerpts from Ben Nemec's message of 2014-10-28 11:13:22 -0700:
 On 10/28/2014 06:18 AM, Steven Hardy wrote:
 On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
 On 28 October 2014 22:51, Steven Hardy sha...@redhat.com wrote:
 On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
 So this should work and I think it's generally good.

 But - I'm curious, you only need a single image for devtest to
 experiment with tuskar - the seed - which should be about the same
 speed (or faster, if you have hot caches) than devstack, and you'll
 get Ironic and nodes registered so that the panels have stuff to show.

 TBH it's not so much about speed (although, for me, devstack is faster as
 I've not yet mirrored all-the-things locally, I only have a squid cache),
 it's about establishing a productive test/debug/hack/re-test workflow.

 mm, squid-cache should still give pretty good results. If it's not, bug
 time :). That said...

 I've been configuring devstack to create Ironic nodes FWIW, so that works
 OK too.

 Cool.

 It's entirely possible I'm missing some key information on how to compose
 my images to be debug friendly, but here's my devtest frustration:

 1. Run devtest to create seed + overcloud

 If you're in a dev-of-a-component cycle, I wouldn't do that. I'd run
 devtest_seed.sh only. The seed has everything on it, so the rest is
 waste (unless you need all the overcloud bits - in which case I'd
 still tune things - e.g. I'd degrade to single node, and I'd iterate
 on devtest_overcloud.sh, *not* on the full plumbing each time).

 Yup, I went round a few iterations of those, e.g. running devtest_overcloud
 with -c so I could more quickly re-deploy, until I realized I could drive
 heat directly, so I started doing that :)

 Most of my investigations atm are around Heat issues, or
 testing new tripleo-heat-templates stuff, so I do need to spin up the
 overcloud (and update it, which is where the fun really began - ref bugs
 #1383709 and #1384750 ...)

 2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
 3. Log onto seed VM to debug the issue.  Discover there are no logs.

 We should fix that - is there a bug open? That's a fairly serious issue
 for debugging a deployment.

 I've not yet raised one, as I wasn't sure whether it was by design, or
 whether I was missing some crucial element from my DiB config.

 If you consider it a bug, I'll raise one and look into a fix.

 4. Restart the heat-engine logging somewhere
 5. Realize heat-engine isn't quite latest master
 6. Git pull heat, discover networking won't allow it

 Ugh. That's horrid. Is it a Fedora thing? My seed here can git pull
 totally fine - I've depended heavily on that to debug various things
 over time.

 Not yet dug into it in a lot of detail tbh, my other VMs can access the
 internet fine so it may be something simple, I'll look into it.

 Are you sure this is a networking thing?  When I try a git pull I get this:

 [root@localhost heat]# git pull
 fatal:
 '/home/bnemec/.cache/image-create/source-repositories/heat_dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84'
 does not appear to be a git repository
 fatal: Could not read from remote repository.

 That's actually because the git repo on the seed would have come from
 the local cache during the image build.  We should probably reset the
 remote to a sane value once we're done with the cache one.

 Networking-wise, my Fedora seed can pull from git.o.o just fine though.

 
 I think we should actually just rip the git repos out of the images in
 production installs. What good does it do sending many MB of copies of
 the git repos around? Perhaps just record HEAD somewhere in a manifest
 and rm -r the source repos during cleanup.d.

I actually thought we were removing git repos, but evidently not.

 
 But, for supporting dev/test, we could definitely leave them there and
 change the remotes back to their canonical (as far as diskimage-builder
 knows) sources.

I wonder if it would make sense to pip install -e.  Then the copy of the
application in the venvs is simply a pointer to the actual git repo.
This would also make it easier to make changes to the running code -
instead of having to make a change, reinstall, and restart services, you
could just make the change and restart, like in Devstack.

I guess I don't know if that has any negative impacts for production use
though.
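
[Editor's note: a sketch of what the editable install would look like inside
the image's venv - paths assumed from earlier in the thread; whether this is
acceptable in production images is exactly the open question above:

    # Editable install: the venv gets a pointer to the source tree,
    # so edits there take effect on the next service restart:
    sudo /opt/stack/venvs/heat/bin/pip install -e /opt/stack/heat
    sudo systemctl restart heat-engine   # unit name varies by image
]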

 


Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-27 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2014-10-27 15:16:59 -0700:
 Hi all,
 
 Lately I've been spending a lot more time digging into TripleO and Tuskar,
 and started looking for a way to spin up simple tests (and in particular,
 play with Tuskar UI/API) without necessarily having the overhead of setting
 up a full devtest environment every time.
 
 So I decided to hack on a patch which automates starting tuskar-api via
 devstack; here's a quick HOWTO if you want to try it:
 
 1. Pull devstack patch
 https://review.openstack.org/#/c/131218/
 
 2. Add t-api to localrc
 enable_service t-api
 Here's my example (Ironic enabled) localrc:
 https://gist.github.com/hardys/2cfd2892ce0e63fa8155
 
 3. Add tuskar roles
 git clone git://github.com/openstack/tripleo-heat-templates.git
 cd tripleo-heat-templates
 tuskar-load-roles --config-file /etc/tuskar/tuskar.conf -r compute.yaml 
 -r controller.yaml
 
 4. Clone + install tuskar-ui
 git clone git://github.com/openstack/tuskar-ui.git
 cd tuskar-ui
 python setup.py install
 
 5. Copy tuskar-ui horizon config
 cp ~/tuskar-ui/_50_tuskar.py.example
 /opt/stack/horizon/openstack_dashboard/local/enabled/_50_tuskar.py
 sudo systemctl restart httpd.service
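
[Editor's note: for reference, the localrc pieces that matter here are roughly
the following - a hedged sketch, not the gist's contents; the Ironic service
names follow 2014-era devstack conventions and are an assumption:

    # Tuskar API from the devstack patch above:
    enable_service t-api
    # Ironic, so the nodes panels have something to show:
    enable_service ir-api ir-cond
    VIRT_DRIVER=ironic
]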
 
 This provides a basically functional tuskar API and UI, which is enough for
 basic testing of tuskar, tuskarclient and (to some extent) the UI.
 
 I hit some issues, please let me know if new bugs are needed for these, or
 if you can suggest solutions:
 
 1. UI Infrastructure-Overview page always says "No controller/compute node",
even though both roles are loaded
 
 2. UI Service configuration has no content at all
 
 3. UI Deployment Roles page says "Metering service is not enabled", but
ceilometer is installed and active
 
 4. UI: If Ironic isn't available for any reason, you get a big error from the
Nodes page of the UI
 
 5. API: You can't create or modify roles via the API, or even view the
 content of the role after creating it
 
 6. After running tuskar-load-roles, the overcloud_roles table is always
 empty (related to 1?)
 
 I'd be interested in people's thoughts about this general approach - ideally
 I'd like to end up at a point where you could launch an overcloud template
 directly via heat on devstack (with ironic enabled and the appropriate
 controller/compute images in glance obviously) - has anyone else tried
 that?
 

This is pretty awesome, Steve - thanks for working on it. I think until
we have QuintupleO and can run things on a cloud instead of a single
machine, devtest's insistence on doing things in a production-esque way
will make it too heavy for most developers.



Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-27 Thread Robert Collins
So this should work and I think it's generally good.

But - I'm curious, you only need a single image for devtest to
experiment with tuskar - the seed - which should be about the same
speed (or faster, if you have hot caches) than devstack, and you'll
get Ironic and nodes registered so that the panels have stuff to show.

-Rob

On 28 October 2014 11:16, Steven Hardy sha...@redhat.com wrote:
 Hi all,

 Lately I've been spending a lot more time digging into TripleO and Tuskar,
 and started looking for a way to spin up simple tests (and in particular,
 play with Tuskar UI/API) without necessarily having the overhead of setting
 up a full devtest environment every time.

 So I decided to hack on a patch which automates starting tuskar-api via
 devstack; here's a quick HOWTO if you want to try it:

 1. Pull devstack patch
 https://review.openstack.org/#/c/131218/

 2. Add t-api to localrc
 enable_service t-api
 Here's my example (Ironic enabled) localrc:
 https://gist.github.com/hardys/2cfd2892ce0e63fa8155

 3. Add tuskar roles
 git clone git://github.com/openstack/tripleo-heat-templates.git
 cd tripleo-heat-templates
 tuskar-load-roles --config-file /etc/tuskar/tuskar.conf -r compute.yaml 
 -r controller.yaml

 4. Clone + install tuskar-ui
 git clone git://github.com/openstack/tuskar-ui.git
 cd tuskar-ui
 python setup.py install

 5. Copy tuskar-ui horizon config
 cp ~/tuskar-ui/_50_tuskar.py.example
 /opt/stack/horizon/openstack_dashboard/local/enabled/_50_tuskar.py
 sudo systemctl restart httpd.service

 This provides a basically functional tuskar API and UI, which is enough for
 basic testing of tuskar, tuskarclient and (to some extent) the UI.

 I hit some issues, please let me know if new bugs are needed for these, or
 if you can suggest solutions:

 1. UI Infrastructure-Overview page always says "No controller/compute node",
even though both roles are loaded

 2. UI Service configuration has no content at all

 3. UI Deployment Roles page says "Metering service is not enabled", but
ceilometer is installed and active

 4. UI: If Ironic isn't available for any reason, you get a big error from the
Nodes page of the UI

 5. API: You can't create or modify roles via the API, or even view the
 content of the role after creating it

 6. After running tuskar-load-roles, the overcloud_roles table is always
 empty (related to 1?)

 I'd be interested in people's thoughts about this general approach - ideally
 I'd like to end up at a point where you could launch an overcloud template
 directly via heat on devstack (with ironic enabled and the appropriate
 controller/compute images in glance obviously) - has anyone else tried
 that?

 Steve




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud
