> -----Original Message-----
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: 28 October 2014 18:34
> To: openstack-dev
> Subject: Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on
> Devstack
> 
> Excerpts from Ben Nemec's message of 2014-10-28 11:13:22 -0700:
> > On 10/28/2014 06:18 AM, Steven Hardy wrote:
> > > On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
> > >> On 28 October 2014 22:51, Steven Hardy <sha...@redhat.com> wrote:
> > >>> On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
> > >>>> So this should work and I think it's generally good.
> > >>>>
> > >>>> But - I'm curious, you only need a single image for devtest to
> > >>>> experiment with tuskar - the seed - which should be about the
> > >> same speed as devstack (or faster, if you have hot caches), and
> > >>>> you'll get Ironic and nodes registered so that the panels have
> > >> stuff to show.
> > >>>
> > >>> TBH it's not so much about speed (although, for me, devstack is
> > >>> faster as I've not yet mirrored all-the-things locally, I only
> > >>> have a squid cache), it's about establishing a productive
> > >>> test/debug/hack/re-test workflow.
> > >>
> > >> mm, squid-cache should still give pretty good results. If it's not,
> > >> bug time :). That said..
> > >>
> > >>> I've been configuring devstack to create Ironic nodes FWIW, so
> > >>> that works OK too.
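For reference, the Ironic-on-devstack setup Steven mentions is only a few
lines of localrc; this is a hedged sketch using Juno-era option names,
worth checking against your devstack tree:

    enable_service ironic ir-api ir-cond
    VIRT_DRIVER=ironic
    IRONIC_VM_COUNT=3
    IRONIC_BAREMETAL_BASIC_OPS=True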
> > >>
> > >> Cool.
> > >>
> > >>> It's entirely possible I'm missing some key information on how to
> > >>> compose my images to be debug friendly, but here's my devtest
> > >>> frustration:
> > >>>
> > >>> 1. Run devtest to create seed + overcloud
> > >>
> > >> If you're in a dev-of-a-component cycle, I wouldn't do that. I'd run
> > >> devtest_seed.sh only. The seed has everything on it, so the rest is
> > >> waste (unless you need all the overcloud bits - in which case I'd
> > >> still tune things - e.g. I'd degrade to single node, and I'd
> > >> iterate on devtest_overcloud.sh, *not* on the full plumbing each
> > >> time).
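A rough sketch of the loop Rob describes, using the tripleo-incubator
script names (the -c flag is the faster re-deploy shortcut mentioned
below; exact flags may differ by branch):

    ./scripts/devtest_seed.sh           # seed only, skip the full devtest.sh
    # ...hack on the component under test, then iterate with just:
    ./scripts/devtest_overcloud.sh -c   # re-deploy the overcloud alone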
> > >
> > > Yup, I went round a few iterations of those, e.g. running
> > > devtest_overcloud with -c so I could more quickly re-deploy, until I
> > > realized I could drive heat directly, so I started doing that :)
> > >
> > > Most of my investigations atm are around Heat issues,
> > > or testing new tripleo-heat-templates stuff, so I do need to spin up
> > > the overcloud (and update it, which is where the fun really began;
> > > ref bugs #1383709 and #1384750 ...)
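Driving heat directly for the update step could look roughly like this
(the template and environment paths are illustrative, not the exact
files devtest generates):

    heat stack-update overcloud \
      -f tripleo-heat-templates/overcloud.yaml \
      -e overcloud-env.json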
> > >
> > >>> 2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
> > >>> 3. Log onto seed VM to debug the issue.  Discover there are no
> > >>> logs.
> > >>
> > >> We should fix that - is there a bug open? That's a fairly serious
> > >> issue for debugging a deployment.
> > >
> > > I've not yet raised one, as I wasn't sure whether it was by
> > > design, or if I was missing some crucial element from my DiB config.
> > >
> > > If you consider it a bug, I'll raise one and look into a fix.
> > >
> > >>> 4. Restart the heat-engine logging somewhere
> > >>> 5. Realize heat-engine isn't quite latest master
> > >>> 6. Git pull heat, discover networking won't allow it
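On a systemd-managed seed, steps 4 and 5 amount to something like the
following; the unit name and repo path are assumptions and differ
between images:

    sudo journalctl -u heat-engine | tail -n 50   # stdout may land in journald
    sudo systemctl restart heat-engine
    cd /opt/stack/heat && git log -1              # confirm which sha1 is deployed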
> > >>
> > >> Ugh. That's horrid. Is it a Fedora thing? My seed here can git pull
> > >> totally fine - I've depended heavily on that to debug various
> > >> things over time.
> > >
> > > Not yet dug into it in a lot of detail tbh; my other VMs can access
> > > the internet fine, so it may be something simple. I'll look into it.
> >
> > Are you sure this is a networking thing?  When I try a git pull I get
> > this:
> >
> > [root@localhost heat]# git pull
> > fatal:
> > '/home/bnemec/.cache/image-create/source-repositories/heat_dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84'
> > does not appear to be a git repository
> > fatal: Could not read from remote repository.
> >
> > That's actually because the git repo on the seed would have come from
> > the local cache during the image build.  We should probably reset the
> > remote to a sane value once we're done with the cache one.
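Until that reset is automated, a hand-fix on the seed is simple enough
(the repo path and canonical URL here are assumptions):

    cd /opt/stack/heat
    git remote set-url origin https://git.openstack.org/openstack/heat
    git pull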
> >
> > Networking-wise, my Fedora seed can pull from git.o.o just fine
> > though.
> >
> 
> I think we should actually just rip the git repos out of the images in
> production installs. What good does it do sending many MB of copies of
> the git repos around? Perhaps just record HEAD somewhere in a manifest
> and rm -r the source repos during cleanup.d.
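A hypothetical cleanup.d hook for that could be as small as the following
sketch ($TMP_MOUNT_PATH per diskimage-builder's cleanup phase; the path
under it is assumed):

    #!/bin/bash
    set -eu
    # Strip git history from the built image; the manifest already
    # records the sha1 each repo was built from.
    find "$TMP_MOUNT_PATH/opt/stack" -name .git -type d -exec rm -rf {} +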

The manifests already capture this.  For example, see
/etc/dib-manifests/dib-manifest-git-seed on the seed.  That file follows the
source-repositories file format so it can be reused in builds, which means it
records the on-disk location of the repo, the remote used, and the sha1
pulled for the build.
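Illustratively, the heat entry in such a manifest would be a single
source-repositories-style line (the values here are made up):

    heat git /opt/stack/heat https://git.openstack.org/openstack/heat dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84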

> 
> But, for supporting dev/test, we could definitely leave them there and
> change the remotes back to their canonical (as far as diskimage-builder
> knows) sources.
> 

Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud
