Re: [foreman-users] Re: [Katello] Katello 3.1 to 3.2 upgrade error

2017-02-03 Thread Edson Manners
Similar Puppet 4 path issue when upgrading the Capsule?

[root@katello3 ~]# capsule-certs-generate --capsule-fqdn "dns1.xxx.x.xxx" --certs-tar "~/capsule.dns1.xxx.x.-certs.tar"
Installing Done [100%]
[...]
  Success!
/usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:188:in `initialize': No such file or directory - /opt/puppetlabs/puppet/cache/foreman_cache_data/oauth_consumer_key (Errno::ENOENT)
from /usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:188:in `open'
from /usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:188:in `unsafe_load_file'
from /usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:153:in `load_file_with_options'
from /usr/share/katello-installer-base/hooks/boot/01-helpers.rb:30:in `read_cache_data'
from /usr/share/katello-installer-base/hooks/post/10-post_install.rb:48:in `block (4 levels) in load'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:34:in `instance_eval'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:34:in `block (4 levels) in load'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hook_context.rb:13:in `instance_exec'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hook_context.rb:13:in `execute'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:51:in `block in execute'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:49:in `each'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:49:in `execute'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:454:in `block in run_installation'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/exit_handler.rb:27:in `call'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/exit_handler.rb:27:in `exit'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:160:in `exit'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:453:in `run_installation'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:147:in `execute'
from /usr/share/gems/gems/clamp-1.0.0/lib/clamp/command.rb:68:in `run'
from /usr/share/gems/gems/clamp-1.0.0/lib/clamp/command.rb:133:in `run'
from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:154:in `run'
from /sbin/capsule-certs-generate:75:in `'
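
A hedged note for anyone hitting this: elsewhere in this thread the same class
of failure (a cache file missing from the Puppet 4 path) was worked around by
symlinking the old Puppet 3 copy into place. Untested for this particular file,
and assuming your Puppet 3 cache still lives under /var/lib/puppet:

# Sketch only: point the missing Puppet 4 cache path at the Puppet 3 copy.
mkdir -p /opt/puppetlabs/puppet/cache/foreman_cache_data
ln -s /var/lib/puppet/foreman_cache_data/oauth_consumer_key \
      /opt/puppetlabs/puppet/cache/foreman_cache_data/oauth_consumer_key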

On Friday, February 3, 2017 at 9:24:12 PM UTC-5, Edson Manners wrote:
>
> Thanks for replying, Stephen. Here's what I found:
>
> [root@katello3 puppet]# rpm -q --whatprovides /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/storeconfigs.rb
> puppetdb-terminus-2.3.8-1.el7.noarch
>
> So maybe PuppetDB was the culprit.
>
> And yes, even though I read somewhere in a Katello 3.2 changelog that that 
> grub2 bug was fixed, it seems it still fell through the cracks.
>
> I'm still trying to get Katello back online. I'm upgrading the 
> external proxy to see if that'll get rid of the last errors, but at this 
> point the server is pretty much unusable. I don't want to bad-mouth 
> Katello, because I love it, but I'd like to warn others about my experience, 
> since I spent weeks preparing for this upgrade and still got bitten by 
> unexpected errors/bugs.
>
> I'll report back any progress for completeness.
>
>  Feb 03 21:20:04 katello3.rcc.fsu.edu puppet-master[49870]: Report 
> processor failed: Could not send report to Foreman at 
> https://katello3.xxx.x.xxx/api/config_reports: Net::ReadTimeout
> Feb 03 21:20:04 katello3.xxx.x.xxx puppet-master[49870]: 
> ["/usr/share/ruby/net/protocol.rb:158:in `rescue in rbuf_fill'", 
> "/usr/share/ruby/net/protocol.rb:152:in `rbuf_fill'", 
> "/usr/share/ruby/net/protocol.rb:134:in `readuntil'", 
> "/usr/share/ruby/net/protocol.rb:144:in `readline'", 
> "/usr/share/ruby/net/http/response.rb:39:in `read_status_line'", 
> "/usr/share/ruby/net/http/response.rb:28:in `read_new'", 
> "/usr/share/ruby/net/http.rb:1412:in `block in transport_request'", 
> "/usr/share/ruby/net/http.rb:1409:in `catch'", 
> "/usr/share/ruby/net/http.rb:1409:in `transport_request'", 
> "/usr/share/ruby/net/http.rb:1382:in `request'", 
> "/usr/share/ruby/net/http.rb:1375:in `block in request'", 
> "/usr/share/ruby/net/http.rb:852:in `start'", 
> "/usr/share/ruby/net/http.rb:1373:in `request'", 
> "/usr/share/ruby/vendor_ruby/puppet/reports/foreman.rb:65:in `process'", 
> "/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:37:in 
> `block in process'", 
> "/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:53:in 
> `block in processors'", 
> "/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:51:in 
> `each'", 
> "/usr/share/ruby/vendor_ruby/puppet/indirector/report/processo

Re: [foreman-users] Re: [Katello] Katello 3.1 to 3.2 upgrade error

2017-02-03 Thread Edson Manners
Thanks for replying, Stephen. Here's what I found:

[root@katello3 puppet]# rpm -q --whatprovides /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/storeconfigs.rb
puppetdb-terminus-2.3.8-1.el7.noarch

So maybe PuppetDB was the culprit.
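
A quick way to double-check that (hedged; adapt to your system) is to list
Puppet-related packages and see what actually exists under /opt/puppetlabs:

# If puppetdb-terminus is the only package populating the Puppet 4 tree,
# removing or upgrading it may avoid the wrong-path detection.
rpm -qa | grep -i puppet
find /opt/puppetlabs -maxdepth 3 2>/dev/null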

And yes, even though I read somewhere in a Katello 3.2 changelog that that 
grub2 bug was fixed, it seems it still fell through the cracks.

I'm still trying to get Katello back online. I'm upgrading the external 
proxy to see if that'll get rid of the last errors, but at this point the 
server is pretty much unusable. I don't want to bad-mouth Katello, because 
I love it, but I'd like to warn others about my experience, since I spent 
weeks preparing for this upgrade and still got bitten by unexpected errors/bugs.

I'll report back any progress for completeness.

 Feb 03 21:20:04 katello3.rcc.fsu.edu puppet-master[49870]: Report 
processor failed: Could not send report to Foreman at 
https://katello3.xxx.x.xxx/api/config_reports: Net::ReadTimeout
Feb 03 21:20:04 katello3.xxx.x.xxx puppet-master[49870]: 
["/usr/share/ruby/net/protocol.rb:158:in `rescue in rbuf_fill'", 
"/usr/share/ruby/net/protocol.rb:152:in `rbuf_fill'", 
"/usr/share/ruby/net/protocol.rb:134:in `readuntil'", 
"/usr/share/ruby/net/protocol.rb:144:in `readline'", 
"/usr/share/ruby/net/http/response.rb:39:in `read_status_line'", 
"/usr/share/ruby/net/http/response.rb:28:in `read_new'", 
"/usr/share/ruby/net/http.rb:1412:in `block in transport_request'", 
"/usr/share/ruby/net/http.rb:1409:in `catch'", 
"/usr/share/ruby/net/http.rb:1409:in `transport_request'", 
"/usr/share/ruby/net/http.rb:1382:in `request'", 
"/usr/share/ruby/net/http.rb:1375:in `block in request'", 
"/usr/share/ruby/net/http.rb:852:in `start'", 
"/usr/share/ruby/net/http.rb:1373:in `request'", 
"/usr/share/ruby/vendor_ruby/puppet/reports/foreman.rb:65:in `process'", 
"/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:37:in 
`block in process'", 
"/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:53:in 
`block in processors'", 
"/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:51:in 
`each'", 
"/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:51:in 
`processors'", 
"/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:30:in 
`process'", 
"/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:14:in 
`save'", 
"/usr/share/ruby/vendor_ruby/puppet/indirector/indirection.rb:283:in 
`save'", "/usr/share/ruby/vendor_ruby/puppet/network/http/api/v1.rb:160:in 
`do_save'", 
"/usr/share/ruby/vendor_ruby/puppet/network/http/api/v1.rb:50:in `block in 
call'", "/usr/share/ruby/vendor_ruby/puppet/context.rb:64:in `override'", 
"/usr/share/ruby/vendor_ruby/puppet.rb:246:in `override'", 
"/usr/share/ruby/vendor_ruby/puppet/network/http/api/v1.rb:49:in `call'", 
"/usr/share/ruby/vendor_ruby/puppet/network/http/route.rb:82:in `block in 
process'", "/usr/share/ruby/vendor_ruby/puppet/network/http/route.rb:81:in 
`each'", "/usr/share/ruby/vendor_ruby/puppet/network/http/route.rb:81:in 
`process'", 
"/usr/share/ruby/vendor_ruby/puppet/network/http/handler.rb:63:in `block in 
process'", 
"/usr/share/ruby/vendor_ruby/puppet/util/profiler/around_profiler.rb:58:in 
`profile'", "/usr/share/ruby/vendor_ruby/puppet/util/profiler.rb:51:in 
`profile'", 
"/usr/share/ruby/vendor_ruby/puppet/network/http/handler.rb:61:in 
`process'", "/usr/share/ruby/vendor_ruby/puppet/network/http/rack.rb:21:in 
`call'", 
"/usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/rack/thread_handler_extension.rb:77:in
 
`process_request'", 
"/usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/request_handler/thread_handler.rb:140:in
 
`accept_and_process_next_request'", 
"/usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/request_handler/thread_handler.rb:108:in
 
`main_loop'", 
"/usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/request_handler.rb:441:in
 
`block (3 levels) in start_threads'"]

On Friday, February 3, 2017 at 4:54:05 PM UTC-5, stephen wrote:
>
> On Fri, Feb 3, 2017 at 2:36 PM, Edson Manners wrote: 
> > Just an update. It looks like the candlepin migration was looking for 
> > Puppet 4 and not Puppet 3. I can't seem to find any 'foreman-installer' 
> > arguments that indicate that you'd like to stick with Puppet 3. So if 
> > Katello 3.1 uses Puppet 3 and Katello 3.2 uses Puppet 4, how does one 
> > upgrade then? 
>
> Katello 3.2 can use either version of Puppet. 
>
> The only reason we'd be looking in the Puppet directory is if it 
> existed. We figure out 
> the directory like this: 
>
> https://github.com/Katello/katello-installer/blob/master/hooks/boot/01-helpers.rb#L4
>  
>
> It's maybe a little simplistic, although I don't think you should have 
> any /opt/puppetlabs 
> directory unless you installed some Puppet 4 package.  Was that the case? 
>
> As far as the error in your other message, looks like 
> http://projects.thefo

Re: [foreman-users] Katello 3.1 Monthly Sync Plans

2017-02-03 Thread John Mitsch
I only see hourly, daily, and weekly upstream as well... looks like that is
a documentation error. I filed a bug here:
http://projects.theforeman.org/issues/18394

If you want to request a monthly sync plan, file a feature request here:
http://projects.theforeman.org/projects/katello/issues/new

To create a monthly sync plan now, you could use a cron job with hammer as a
hacky workaround :)
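
Something along these lines in /etc/cron.d would approximate a monthly plan
(untested sketch; the org/product/repo names are placeholders, and hammer
needs credentials configured, e.g. in ~/.hammer/cli_config.yml):

# Sync one repository at 02:00 on the 1st of every month.
0 2 1 * * root hammer repository synchronize --organization "ACME" --product "EPEL" --name "EPEL 7 x86_64"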

-John

John Mitsch
Red Hat Engineering
(860)-967-7285
irc: jomitsch

On Thu, Feb 2, 2017 at 9:28 AM, Louis Bohm  wrote:

> I just installed Katello with the following:
>
>1. CentOS 7.3
>2. Foreman 1.12.4
>3. Katello 3.1.0
>4. Puppet 3.8.7
>5. Pulp server 2.8.7
>
> The Katello 3.1 docs say there is a Sync Plan interval of monthly but I do
> not see it in the GUI.  I looked around and found the hammer command to add
> a sync plan and tried to add a monthly plan that way.  Instead I got an
> error that there was only hourly, daily and weekly.
>
> Since this is a new install I am open to upgrading to 3.2 if that has a
> fix for this.  However, I did not see anything in the release notes.
>
> Thanks,
> Louis
>



Re: [foreman-users] Puppet not installed after kickstart

2017-02-03 Thread John Mitsch
Lars,

Is this on a RHEL system? What subscriptions are available to the client?
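
(On the client, something like this shows what the temporary subscription
actually entitles — standard subscription-manager usage, offered here only
as a diagnostic suggestion:)

# Check consumed subscriptions and enabled repos on the provisioned host.
subscription-manager list --consumed
subscription-manager repos --list-enabled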

-John

John Mitsch
Red Hat Engineering
(860)-967-7285
irc: jomitsch

On Wed, Feb 1, 2017 at 8:25 AM, Lars  wrote:

> Hi,
>
> When deploying via PXE and running kickstart, I notice puppet and
> katello-agent are not installed.
> In the install log I see this:
>
> Installing Katello Agent
> Loaded plugins: product-id, search-disabled-repos, subscription-manager
> No package katello-agent available.
> Error: Nothing to do
>
> (no repositories available)
>
> Also I see this:
>
> Unable to find available subscriptions for all your installed products.
> (I'm working with VDC subscriptions, so my host gets a temp subscription
> until it is discovered by virt-who.)
>
> I suppose these two things are related.
> So, is there a way to work around this?
> Because right now I can't install katello-agent or puppet, and without
> puppet I cannot complete my configuration, of course.
>
>
> Kind regards,
>



Re: [foreman-users] Re: [Katello] Katello 3.1 to 3.2 upgrade error

2017-02-03 Thread Stephen Benjamin
On Fri, Feb 3, 2017 at 2:36 PM, Edson Manners wrote:
> Just an update. It looks like the candlepin migration was looking for Puppet
> 4 and not Puppet 3. I can't seem to find any 'foreman-installer' arguments
> that indicate that you'd like to stick with Puppet 3. So if Katello 3.1
> uses Puppet 3 and Katello 3.2 uses Puppet 4, how does one upgrade then?

Katello 3.2 can use either version of Puppet.

The only reason we'd be looking in the Puppet directory is if it
existed. We figure out
the directory like this:
   
https://github.com/Katello/katello-installer/blob/master/hooks/boot/01-helpers.rb#L4

It's maybe a little simplistic, although I don't think you should have
any /opt/puppetlabs
directory unless you installed some Puppet 4 package.  Was that the case?

As for the error in your other message, it looks like
http://projects.theforeman.org/issues/17639
should've been backported to 3.2; `mkdir /var/lib/tftpboot/grub2` might fix it.


> On Friday, February 3, 2017 at 10:50:11 AM UTC-5, Edson Manners wrote:
>>
>> I followed the following instructions to upgrade Katello
>> https://theforeman.org/plugins/katello/3.2/upgrade/index.html.
>>
>> Everything went smoothly until I ran the foreman upgrade command. I got
>> the error below. For some reason it's trying to use what looks like the
>> Puppet PE path instead of the OS Puppet path.
>> I don't see any bug reports or anyone else with a similar issue so I'm
>> wondering if there's a path argument or something that I missed. Any help is
>> appreciated.
>>
>> HW/SW Spec
>> CentOS 7.3
>>
>>
>> [root@katello3 puppet]# foreman-installer --scenario katello --upgrade
>> Upgrading...
>> Upgrade Step: stop_services...
>> Redirecting to /bin/systemctl stop  foreman-tasks.service
>>
>> Redirecting to /bin/systemctl stop  httpd.service
>>
>> Redirecting to /bin/systemctl stop  pulp_workers.service
>>
>> Redirecting to /bin/systemctl stop  foreman-proxy.service
>>
>> Redirecting to /bin/systemctl stop  pulp_streamer.service
>>
>> Redirecting to /bin/systemctl stop  pulp_resource_manager.service
>>
>> Redirecting to /bin/systemctl stop  pulp_celerybeat.service
>>
>> Redirecting to /bin/systemctl stop  tomcat.service
>>
>> Redirecting to /bin/systemctl stop  squid.service
>>
>> Redirecting to /bin/systemctl stop  qdrouterd.service
>>
>> Redirecting to /bin/systemctl stop  qpidd.service
>>
>> Success!
>>
>> Upgrade Step: start_databases...
>> Redirecting to /bin/systemctl start  mongod.service
>>
>> Redirecting to /bin/systemctl start  postgresql.service
>>
>> Success!
>>
>> Upgrade Step: update_http_conf...
>>
>> Upgrade Step: migrate_pulp...
>>
>>
>> 27216
>>
>> Attempting to connect to localhost:27017
>> Attempting to connect to localhost:27017
>> Write concern for Mongo connection: {}
>> Loading content types.
>> Loading type descriptors []
>> Parsing type descriptors
>> Validating type descriptor syntactic integrity
>> Validating type descriptor semantic integrity
>> Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
>> Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
>> Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
>> Loading unit model: docker_image = pulp_docker.plugins.models:Image
>> Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
>> Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
>> Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
>> Loading unit model: srpm = pulp_rpm.plugins.db.models:SRPM
>> Loading unit model: package_group =
>> pulp_rpm.plugins.db.models:PackageGroup
>> Loading unit model: package_category =
>> pulp_rpm.plugins.db.models:PackageCategory
>> Loading unit model: iso = pulp_rpm.plugins.db.models:ISO
>> Loading unit model: package_environment =
>> pulp_rpm.plugins.db.models:PackageEnvironment
>> Loading unit model: drpm = pulp_rpm.plugins.db.models:DRPM
>> Loading unit model: package_langpacks =
>> pulp_rpm.plugins.db.models:PackageLangpacks
>> Loading unit model: rpm = pulp_rpm.plugins.db.models:RPM
>> Loading unit model: yum_repo_metadata_file =
>> pulp_rpm.plugins.db.models:YumMetadataFile
>> Updating the database with types []
>> Found the following type definitions that were not present in the update
>> collection [puppet_module, docker_tag, docker_manifest, docker_blob,
>> erratum, distribution, yum_repo_metadata_file, package_group,
>> package_category, iso, package_environment, drpm, package_langpacks, rpm,
>> srpm, docker_image]
>> Updating the database with types [puppet_module, drpm, package_langpacks,
>> erratum, docker_blob, docker_manifest, yum_repo_metadata_file,
>> package_group, package_category, iso, package_environment, docker_tag,
>> distribution, rpm, srpm, docker_image]
>> Content types loaded.
>> Ensuring the admin role and user are in place.
>> Admin role and user are in place.
>> Beginning database migrations.
>> Migration package pulp.server.db.migrations is up to date at version 24
>> Migration

Re: [foreman-users] Excessive looping while loading host edit page

2017-02-03 Thread Stefan Lasiewski
Chris,

I don't know about you, but this command was showing about 1500 Docker 
interfaces named 'vethNNN':

$ hammer --output=csv host interface list --host docker0N.example.org | wc -l

That's not right: `ip addr show` only shows about 25 interfaces on each 
host. I suspect that Foreman was cataloging every Docker ephemeral 
interface that has existed on these hosts since I brought them up a 
month ago.

To delete these by hand, I'm doing the following:

hammer --output=csv host interface list --host docker0N.example.org > /tmp/docker0N.int.list

grep veth /tmp/docker0N.int.list | cut -f1 -d, | while read NID; do
  echo "### $NID"
  hammer --verbose host interface delete --id "$NID" --host docker0N.example.org
done
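
To keep them from coming back on the next fact import, you can extend the
ignored_interface_identifiers setting mentioned elsewhere in this thread.
Hedged example — the default list varies by Foreman version, so read your
current value first and append to it rather than copying mine:

hammer settings list | grep ignored_interface_identifiers
hammer settings set --name ignored_interface_identifiers \
  --value '[lo, usb*, vnet*, macvtap*, veth*]'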

-= Stefan

On Friday, February 3, 2017 at 6:32:14 AM UTC-8, Chris Baldwin wrote:
>
> Nice, I hadn't noticed that. If that's the case, this should be a 
> non-issue for us in the long term... once we upgrade :)
>
> On Thursday, February 2, 2017 at 6:29:37 PM UTC-5, Stefan Lasiewski wrote:
>>
>> How funny. I was just looking this up also. Also running Puppet 3.8 & 
>> Foreman 1.12.x, and a dozen Docker hosts. Turns out that Foreman doesn't 
>> like 12 hosts with dozens of interfaces on each!
>>
>> Looks like this has also been fixed in Foreman 1.14. See 
>> http://projects.theforeman.org/issues/16834, which adds 'veth*' to 
>> ignored_interface_identifiers.
>>
>> -= Stefan
>>
>> On Thursday, February 2, 2017 at 12:35:49 PM UTC-8, Chris Baldwin wrote:
>>>
>>> I think the other way would be to avoid managing the host directly. 
>>> Since we only use Foreman as an ENC, all class management could (should) be 
>>> moved to a hostgroup, therefore never having to load the NICs.
>>>
>>> On Thursday, February 2, 2017 at 3:31:08 PM UTC-5, Tomer Brisker wrote:

 A possible workaround, if you don't need to manage all of those 
 interfaces in foreman, is to ignore some of them during fact import using 
 the ignored_interface_identifiers setting. 
 You may need to delete the host and re-run puppet for the ignored 
 interfaces to be removed.

 On Thu, Feb 2, 2017 at 10:22 PM, Chris Baldwin  
 wrote:

> Huh, that's interesting. The affected hosts do have a 
> larger-than-average (10+) number of interfaces as they're docker servers, 
> which is a commonality I hadn't noticed.
>
> Do you guys need/want any other logs to help w/ the issue? Is there 
> any kind of workaround that you've found?
>
> On Thursday, February 2, 2017 at 3:12:12 PM UTC-5, Tomer Brisker wrote:
>>
>> Hi Chris,
>>
>> Thank you for reporting this.
>> This looks like you are hitting 
>> http://projects.theforeman.org/issues/7829 which has to do with a 
>> large number of interfaces on the host, leading to the interface partial 
>> being rendered for each interface.
>>
>> On Thu, Feb 2, 2017 at 9:50 PM, Chris Baldwin  
>> wrote:
>>
>>> Hi,
>>>
>>> My setup:
>>> * Multiple Foreman servers, all on 1.12.1
>>> * memcached shared between them
>>> * shared backend DB (psql, 9.4.5)
>>> * Foreman is a puppet 3.8 ENC only
>>>
>>> I have a reasonably large Foreman install. For some reason, some 
>>> hosts take forever to load when clicking on 'edit'. The only thing I 
>>> see in 
>>> the logs is some obscene amount of rendering messages, to the tune of 
>>> 445+ 
>>> seconds of 
>>>
>>> 2017-02-02 11:36:43 [app] [I]   Rendered nic/_base_form.html.erb 
>>> (27.1ms)
>>> 2017-02-02 11:36:43 [app] [I]   Rendered nic/_virtual_form.html.erb 
>>> (1.2ms)
>>> 2017-02-02 11:36:43 [app] [I]   Rendered nic/_
>>> provider_specific_form.html.erb (0.1ms)
>>> 2017-02-02 11:36:43 [app] [I]   Rendered 
>>> nic/manageds/_managed.html.erb (29.9ms)
>>>
>>> over and over. 
>>>
>>> I have a few questions about this:
>>> * I got this info from debug. What else can I look at to get more 
>>> information?
>>> * Why is it rendering the same four items over and over? 
>>> * I actually deleted the host from Foreman and re-ran puppet, that 
>>> seemed to fix the issue temporarily. However, I don't understand *why* 
>>> that 
>>> made a difference. Can someone shed some light on this?
>>>
>>> -Chris (oogs/oogs_/oogs_werk on IRC)
>>>
>>> This log is for a good host. In a bad host, add about 100 times the 
>>> stanzas I listed above.
>>>
>>> 2017-02-02 11:36:42 [app] [I] Started GET "/hosts/
>>> testhost.domain.com/edit" for 127.0.0.101 at 2017-02-02 11:36:42 
>>> -0800
>>> 2017-02-02 11:36:42 [app] [I] Processing by HostsController#edit as 
>>> HTML
>>> 2017-02-02 11:36:42 [app] [I]   Parameters: {"id"=>"
>>> testhost.domain.com"}
>>> 2017-02-02 11:36:42 [app] [D] Cache read: 
>>> _session_id:1234567890asdfghjkl
>>> 2017-02-0

[foreman-users] Re: [Katello] Katello 3.1 to 3.2 upgrade error

2017-02-03 Thread Edson Manners
One last update. I created a symlink from 
/opt/puppetlabs/puppet/cache/foreman_cache_data/candlepin_db_password to 
/var/lib/puppet/foreman_cache_data/candlepin_db_password.
That made the candlepin migration complete successfully, but I got a bunch of 
errors in xxx that broke Katello. 
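
For anyone else stuck here, a generalized version of that workaround — a
sketch only, assuming Puppet 3's cache lives under /var/lib/puppet as it did
on this box — links the whole cache directory so the installer hooks find
every cached value (oauth keys, passwords), not just candlepin_db_password:

# If an empty foreman_cache_data directory already exists under cache/,
# remove it first so the symlink lands at the right name.
mkdir -p /opt/puppetlabs/puppet/cache
ln -s /var/lib/puppet/foreman_cache_data \
      /opt/puppetlabs/puppet/cache/foreman_cache_data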

This is what I got:
http://projects.theforeman.org/issues/17356

I'm re-installing. Like others have said in the forum: avoid Katello 3.2 if 
you can.


On Friday, February 3, 2017 at 2:36:41 PM UTC-5, Edson Manners wrote:
>
> Just an update. It looks like the candlepin migration was looking for 
> Puppet 4 and not Puppet 3. I can't seem to find any 'foreman-installer' 
> arguments that indicate that you'd like to stick with Puppet 3. So if 
> Katello 3.1 uses Puppet 3 and Katello 3.2 uses Puppet 4, how does one 
> upgrade then?
>
> On Friday, February 3, 2017 at 10:50:11 AM UTC-5, Edson Manners wrote:
>>
>> I followed the following instructions to upgrade Katello 
>> https://theforeman.org/plugins/katello/3.2/upgrade/index.html.
>>
>> Everything went smoothly until I ran the foreman upgrade command. I got 
>> the error below. For some reason it's trying to use what looks like the 
>> Puppet PE path instead of the OS Puppet path.
>> I don't see any bug reports or anyone else with a similar issue so I'm 
>> wondering if there's a path argument or something that I missed. Any help 
>> is appreciated.
>>
>> HW/SW Spec
>> CentOS 7.3
>>
>>
>> [root@katello3 puppet]# foreman-installer --scenario katello --upgrade
>> Upgrading...
>> Upgrade Step: stop_services...
>> Redirecting to /bin/systemctl stop  foreman-tasks.service
>>
>> Redirecting to /bin/systemctl stop  httpd.service
>>
>> Redirecting to /bin/systemctl stop  pulp_workers.service
>>
>> Redirecting to /bin/systemctl stop  foreman-proxy.service
>>
>> Redirecting to /bin/systemctl stop  pulp_streamer.service
>>
>> Redirecting to /bin/systemctl stop  pulp_resource_manager.service
>>
>> Redirecting to /bin/systemctl stop  pulp_celerybeat.service
>>
>> Redirecting to /bin/systemctl stop  tomcat.service
>>
>> Redirecting to /bin/systemctl stop  squid.service
>>
>> Redirecting to /bin/systemctl stop  qdrouterd.service
>>
>> Redirecting to /bin/systemctl stop  qpidd.service
>>
>> Success!
>>
>> Upgrade Step: start_databases...
>> Redirecting to /bin/systemctl start  mongod.service
>>
>> Redirecting to /bin/systemctl start  postgresql.service
>>
>> Success!
>>
>> Upgrade Step: update_http_conf...
>>
>> Upgrade Step: migrate_pulp...
>>
>>
>> 27216
>>
>> Attempting to connect to localhost:27017
>> Attempting to connect to localhost:27017
>> Write concern for Mongo connection: {}
>> Loading content types.
>> Loading type descriptors []
>> Parsing type descriptors
>> Validating type descriptor syntactic integrity
>> Validating type descriptor semantic integrity
>> Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
>> Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
>> Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
>> Loading unit model: docker_image = pulp_docker.plugins.models:Image
>> Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
>> Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
>> Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
>> Loading unit model: srpm = pulp_rpm.plugins.db.models:SRPM
>> Loading unit model: package_group = 
>> pulp_rpm.plugins.db.models:PackageGroup
>> Loading unit model: package_category = 
>> pulp_rpm.plugins.db.models:PackageCategory
>> Loading unit model: iso = pulp_rpm.plugins.db.models:ISO
>> Loading unit model: package_environment = 
>> pulp_rpm.plugins.db.models:PackageEnvironment
>> Loading unit model: drpm = pulp_rpm.plugins.db.models:DRPM
>> Loading unit model: package_langpacks = 
>> pulp_rpm.plugins.db.models:PackageLangpacks
>> Loading unit model: rpm = pulp_rpm.plugins.db.models:RPM
>> Loading unit model: yum_repo_metadata_file = 
>> pulp_rpm.plugins.db.models:YumMetadataFile
>> Updating the database with types []
>> Found the following type definitions that were not present in the update 
>> collection [puppet_module, docker_tag, docker_manifest, docker_blob, 
>> erratum, distribution, yum_repo_metadata_file, package_group, 
>> package_category, iso, package_environment, drpm, package_langpacks, rpm, 
>> srpm, docker_image]
>> Updating the database with types [puppet_module, drpm, package_langpacks, 
>> erratum, docker_blob, docker_manifest, yum_repo_metadata_file, 
>> package_group, package_category, iso, package_environment, docker_tag, 
>> distribution, rpm, srpm, docker_image]
>> Content types loaded.
>> Ensuring the admin role and user are in place.
>> Admin role and user are in place.
>> Beginning database migrations.
>> Migration package pulp.server.db.migrations is up to date at version 24
>> Migration package pulp_docker.plugins.migrations is up to date at version 
>> 2
>> Migration package pulp_puppet.plugins.migra

[foreman-users] Re: [Katello] Katello 3.1 to 3.2 upgrade error

2017-02-03 Thread Edson Manners
Just an update. It looks like the candlepin migration was looking for 
Puppet 4 and not Puppet 3. I can't seem to find any 'foreman-installer' 
arguments that indicate that you'd like to stick with Puppet 3. So if 
Katello 3.1 uses Puppet 3 and Katello 3.2 uses Puppet 4, how does one 
upgrade then?

On Friday, February 3, 2017 at 10:50:11 AM UTC-5, Edson Manners wrote:
>
> I followed the following instructions to upgrade Katello 
> https://theforeman.org/plugins/katello/3.2/upgrade/index.html.
>
> Everything went smoothly until I ran the foreman upgrade command. I got 
> the error below. For some reason it's trying to use what looks like the 
> Puppet PE path instead of the OS Puppet path.
> I don't see any bug reports or anyone else with a similar issue so I'm 
> wondering if there's a path argument or something that I missed. Any help 
> is appreciated.
>
> HW/SW Spec
> CentOS 7.3
>
>
> [root@katello3 puppet]# foreman-installer --scenario katello --upgrade
> Upgrading...
> Upgrade Step: stop_services...
> Redirecting to /bin/systemctl stop  foreman-tasks.service
>
> Redirecting to /bin/systemctl stop  httpd.service
>
> Redirecting to /bin/systemctl stop  pulp_workers.service
>
> Redirecting to /bin/systemctl stop  foreman-proxy.service
>
> Redirecting to /bin/systemctl stop  pulp_streamer.service
>
> Redirecting to /bin/systemctl stop  pulp_resource_manager.service
>
> Redirecting to /bin/systemctl stop  pulp_celerybeat.service
>
> Redirecting to /bin/systemctl stop  tomcat.service
>
> Redirecting to /bin/systemctl stop  squid.service
>
> Redirecting to /bin/systemctl stop  qdrouterd.service
>
> Redirecting to /bin/systemctl stop  qpidd.service
>
> Success!
>
> Upgrade Step: start_databases...
> Redirecting to /bin/systemctl start  mongod.service
>
> Redirecting to /bin/systemctl start  postgresql.service
>
> Success!
>
> Upgrade Step: update_http_conf...
>
> Upgrade Step: migrate_pulp...
>
>
> 27216
>
> Attempting to connect to localhost:27017
> Attempting to connect to localhost:27017
> Write concern for Mongo connection: {}
> Loading content types.
> Loading type descriptors []
> Parsing type descriptors
> Validating type descriptor syntactic integrity
> Validating type descriptor semantic integrity
> Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
> Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
> Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
> Loading unit model: docker_image = pulp_docker.plugins.models:Image
> Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
> Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
> Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
> Loading unit model: srpm = pulp_rpm.plugins.db.models:SRPM
> Loading unit model: package_group = pulp_rpm.plugins.db.models:PackageGroup
> Loading unit model: package_category = 
> pulp_rpm.plugins.db.models:PackageCategory
> Loading unit model: iso = pulp_rpm.plugins.db.models:ISO
> Loading unit model: package_environment = 
> pulp_rpm.plugins.db.models:PackageEnvironment
> Loading unit model: drpm = pulp_rpm.plugins.db.models:DRPM
> Loading unit model: package_langpacks = 
> pulp_rpm.plugins.db.models:PackageLangpacks
> Loading unit model: rpm = pulp_rpm.plugins.db.models:RPM
> Loading unit model: yum_repo_metadata_file = 
> pulp_rpm.plugins.db.models:YumMetadataFile
> Updating the database with types []
> Found the following type definitions that were not present in the update 
> collection [puppet_module, docker_tag, docker_manifest, docker_blob, 
> erratum, distribution, yum_repo_metadata_file, package_group, 
> package_category, iso, package_environment, drpm, package_langpacks, rpm, 
> srpm, docker_image]
> Updating the database with types [puppet_module, drpm, package_langpacks, 
> erratum, docker_blob, docker_manifest, yum_repo_metadata_file, 
> package_group, package_category, iso, package_environment, docker_tag, 
> distribution, rpm, srpm, docker_image]
> Content types loaded.
> Ensuring the admin role and user are in place.
> Admin role and user are in place.
> Beginning database migrations.
> Migration package pulp.server.db.migrations is up to date at version 24
> Migration package pulp_docker.plugins.migrations is up to date at version 2
> Migration package pulp_puppet.plugins.migrations is up to date at version 5
> Migration package pulp_rpm.plugins.migrations is up to date at version 35
> Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
> Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
> Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
> Loading unit model: docker_image = pulp_docker.plugins.models:Image
> Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
> Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
> Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
> Loading unit model: srp

[foreman-users] DNS records for hosts without DHCP

2017-02-03 Thread Louis Hather
Is DHCP the only way to make Foreman automatically update DNS records?

I see that Foreman gets the IP address regardless of how it was assigned. 

If Foreman does not have this capability, could someone please direct me to 
the line of code that calls the DNS update module?

As a side (unrelated) note: does the foreman-salt integration work with the 
latest release of salt? 

Thanks



[foreman-users] [Katello] Katello 3.1 to 3.2 upgrade error

2017-02-03 Thread Edson Manners
I followed these instructions to upgrade Katello: 
https://theforeman.org/plugins/katello/3.2/upgrade/index.html

Everything went smoothly until I ran the foreman upgrade command. I got the 
error below. For some reason it's trying to use what looks like the Puppet 
PE path instead of the OS Puppet path.
I don't see any bug reports or anyone else with a similar issue, so I'm 
wondering if there's a path argument or something that I missed. Any help 
is appreciated.

HW/SW Spec
CentOS 7.3


[root@katello3 puppet]# foreman-installer --scenario katello --upgrade
Upgrading...
Upgrade Step: stop_services...
Redirecting to /bin/systemctl stop  foreman-tasks.service

Redirecting to /bin/systemctl stop  httpd.service

Redirecting to /bin/systemctl stop  pulp_workers.service

Redirecting to /bin/systemctl stop  foreman-proxy.service

Redirecting to /bin/systemctl stop  pulp_streamer.service

Redirecting to /bin/systemctl stop  pulp_resource_manager.service

Redirecting to /bin/systemctl stop  pulp_celerybeat.service

Redirecting to /bin/systemctl stop  tomcat.service

Redirecting to /bin/systemctl stop  squid.service

Redirecting to /bin/systemctl stop  qdrouterd.service

Redirecting to /bin/systemctl stop  qpidd.service

Success!

Upgrade Step: start_databases...
Redirecting to /bin/systemctl start  mongod.service

Redirecting to /bin/systemctl start  postgresql.service

Success!

Upgrade Step: update_http_conf...

Upgrade Step: migrate_pulp...


27216

Attempting to connect to localhost:27017
Attempting to connect to localhost:27017
Write concern for Mongo connection: {}
Loading content types.
Loading type descriptors []
Parsing type descriptors
Validating type descriptor syntactic integrity
Validating type descriptor semantic integrity
Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
Loading unit model: docker_image = pulp_docker.plugins.models:Image
Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
Loading unit model: srpm = pulp_rpm.plugins.db.models:SRPM
Loading unit model: package_group = pulp_rpm.plugins.db.models:PackageGroup
Loading unit model: package_category = 
pulp_rpm.plugins.db.models:PackageCategory
Loading unit model: iso = pulp_rpm.plugins.db.models:ISO
Loading unit model: package_environment = 
pulp_rpm.plugins.db.models:PackageEnvironment
Loading unit model: drpm = pulp_rpm.plugins.db.models:DRPM
Loading unit model: package_langpacks = 
pulp_rpm.plugins.db.models:PackageLangpacks
Loading unit model: rpm = pulp_rpm.plugins.db.models:RPM
Loading unit model: yum_repo_metadata_file = 
pulp_rpm.plugins.db.models:YumMetadataFile
Updating the database with types []
Found the following type definitions that were not present in the update 
collection [puppet_module, docker_tag, docker_manifest, docker_blob, 
erratum, distribution, yum_repo_metadata_file, package_group, 
package_category, iso, package_environment, drpm, package_langpacks, rpm, 
srpm, docker_image]
Updating the database with types [puppet_module, drpm, package_langpacks, 
erratum, docker_blob, docker_manifest, yum_repo_metadata_file, 
package_group, package_category, iso, package_environment, docker_tag, 
distribution, rpm, srpm, docker_image]
Content types loaded.
Ensuring the admin role and user are in place.
Admin role and user are in place.
Beginning database migrations.
Migration package pulp.server.db.migrations is up to date at version 24
Migration package pulp_docker.plugins.migrations is up to date at version 2
Migration package pulp_puppet.plugins.migrations is up to date at version 5
Migration package pulp_rpm.plugins.migrations is up to date at version 35
Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
Loading unit model: docker_image = pulp_docker.plugins.models:Image
Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
Loading unit model: srpm = pulp_rpm.plugins.db.models:SRPM
Loading unit model: package_group = pulp_rpm.plugins.db.models:PackageGroup
Loading unit model: package_category = 
pulp_rpm.plugins.db.models:PackageCategory
Loading unit model: iso = pulp_rpm.plugins.db.models:ISO
Loading unit model: package_environment = 
pulp_rpm.plugins.db.models:PackageEnvironment
Loading unit model: drpm = pulp_rpm.plugins.db.models:DRPM
Loading unit model: package_langpacks = 
pulp_rpm.plugins.db.models:PackageLangpacks
Loading unit model: rpm = pulp_rpm.plugins.db.models:RPM
Loading unit model: yum_r

Re: [foreman-users] Excessive looping while loading host edit page

2017-02-03 Thread Chris Baldwin
Nice, I hadn't noticed that. If that's the case, this should be a non-issue 
for us in the long term... once we upgrade :)

On Thursday, February 2, 2017 at 6:29:37 PM UTC-5, Stefan Lasiewski wrote:
>
> How funny. I was just looking this up also. Also running Puppet 3.8 & 
> Foreman 1.12.x, and a dozen Docker hosts. Turns out that Foreman doesn't 
> like 12 hosts with dozens of interfaces on each!
>
> Looks like this has also been fixed in Foreman 1.14. See 
> http://projects.theforeman.org/issues/16834, which adds 'veth*' to 
> ignored_interface_identifiers.
>
> -= Stefan
>
> On Thursday, February 2, 2017 at 12:35:49 PM UTC-8, Chris Baldwin wrote:
>>
>> I think the other way would be to avoid managing the host directly. Since 
>> we only use Foreman as an ENC, all class management could (should) be moved 
>> to a hostgroup, therefore never having to load the NICs.
>>
>> On Thursday, February 2, 2017 at 3:31:08 PM UTC-5, Tomer Brisker wrote:
>>>
>>> A possible workaround, if you don't need to manage all of those 
>>> interfaces in foreman, is to ignore some of them during fact import using 
>>> the ignored_interface_identifiers setting. 
>>> You may need to delete the host and re-run puppet for the ignored 
>>> interfaces to be removed.
>>>
>>> On Thu, Feb 2, 2017 at 10:22 PM, Chris Baldwin  
>>> wrote:
>>>
 Huh, that's interesting. The affected hosts do have a 
 larger-than-average (10+) number of interfaces as they're docker servers, 
 which is a commonality I hadn't noticed.

 Do you guys need/want any other logs to help w/ the issue? Is there any 
 kind of workaround that you've found?

 On Thursday, February 2, 2017 at 3:12:12 PM UTC-5, Tomer Brisker wrote:
>
> Hi Chris,
>
> Thank you for reporting this.
> This looks like you are hitting 
> http://projects.theforeman.org/issues/7829 which has to do with a 
> large number of interfaces on the host, leading to the interface partial 
> being rendered for each interface.
>
> On Thu, Feb 2, 2017 at 9:50 PM, Chris Baldwin  
> wrote:
>
>> Hi,
>>
>> My setup:
>> * Multiple Foreman servers, all on 1.12.1
>> * memcached shared between them
>> * shared backend DB (psql, 9.4.5)
>> * Foreman is a puppet 3.8 ENC only
>>
>> I have a reasonably large Foreman install. For some reason, some 
>> hosts take forever to load when clicking on 'edit'. The only thing I see 
>> in 
>> the logs is some obscene amount of rendering messages, to the tune of 
>> 445+ 
>> seconds of 
>>
>> 2017-02-02 11:36:43 [app] [I]   Rendered nic/_base_form.html.erb 
>> (27.1ms)
>> 2017-02-02 11:36:43 [app] [I]   Rendered nic/_virtual_form.html.erb 
>> (1.2ms)
>> 2017-02-02 11:36:43 [app] [I]   Rendered nic/_
>> provider_specific_form.html.erb (0.1ms)
>> 2017-02-02 11:36:43 [app] [I]   Rendered 
>> nic/manageds/_managed.html.erb (29.9ms)
>>
>> over and over. 
>>
>> I have a few questions about this:
>> * I got this info from debug. What else can I look at to get more 
>> information?
>> * Why is it rendering the same four items over and over? 
>> * I actually deleted the host from Foreman and re-ran puppet, that 
>> seemed to fix the issue temporarily. However, I don't understand *why* 
>> that 
>> made a difference. Can someone shed some light on this?
>>
>> -Chris (oogs/oogs_/oogs_werk on IRC)
>>
>> This log is for a good host. In a bad host, add about 100 times the 
>> stanzas I listed above.
>>
>> 2017-02-02 11:36:42 [app] [I] Started GET "/hosts/
>> testhost.domain.com/edit" for 127.0.0.101 at 2017-02-02 11:36:42 
>> -0800
>> 2017-02-02 11:36:42 [app] [I] Processing by HostsController#edit as 
>> HTML
>> 2017-02-02 11:36:42 [app] [I]   Parameters: {"id"=>"
>> testhost.domain.com"}
>> 2017-02-02 11:36:42 [app] [D] Cache read: 
>> _session_id:1234567890asdfghjkl
>> 2017-02-02 11:36:42 [app] [D] Setting current user thread-local 
>> variable to oogs
>> 2017-02-02 11:36:42 [app] [D] Cache read: authorize_login_delegation
>> 2017-02-02 11:36:42 [app] [D] Cache read: authorize_login_delegation
>> 2017-02-02 11:36:42 [app] [D] Cache read: idle_timeout
>> 2017-02-02 11:36:42 [app] [D] Setting current organization 
>> thread-local variable to none
>> 2017-02-02 11:36:42 [app] [D] Setting current location thread-local 
>> variable to none
>> 2017-02-02 11:36:42 [app] [I]   Rendered hosts/_progress.html.erb 
>> (0.2ms)
>> 2017-02-02 11:36:42 [app] [D] Setting current organization 
>> thread-local variable to MyOrg
>> 2017-02-02 11:36:42 [app] [D] Setting current location thread-local 
>> variable to MyLoc
>> 2017-02-02 11:36:42 [app] [I]   Rendered config_groups/_
>> config_group.html.erb (41.7ms)
>> 2017-02-02 11:36:42 [app] [I]   Rendered config_groups/_