FWIW, I hit errors while upgrading from 3.1 to 3.2 on a self-hosted system. Each time I just re-ran the upgrade command and it got a little further. Three errors were thrown in total, and on the last run it "just worked". My reading of what was actually going wrong is below, for anyone hitting the same thing.
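Going by the 01-helpers.rb link Stephen posts further down, the installer's cache helper appears to pick its Puppet directory simply by checking whether /opt/puppetlabs exists. A rough sketch, paraphrased from that link; the method names here are my approximations, not the real source:

    require 'yaml'

    # If /opt/puppetlabs exists -- even when it was only created by a stray
    # AIO-style package such as puppetdb-terminus -- the installer assumes the
    # Puppet 4 cache layout; otherwise it falls back to the Puppet 3 one.
    def puppet_cache_dir
      if File.directory?('/opt/puppetlabs')
        '/opt/puppetlabs/puppet/cache'  # Puppet 4 (AIO) layout
      else
        '/var/lib/puppet'               # Puppet 3 (OS packages) layout
      end
    end

    # read_cache_data (the frame at 01-helpers.rb:30 in the traces below) then
    # loads an answer saved by a previous installer run, e.g. oauth_consumer_key
    # or candlepin_db_password. If the answers were written under the Puppet 3
    # path but are now looked up under the Puppet 4 path, you get exactly the
    # Errno::ENOENT shown in the traces.
    def read_cache_data(name)
      YAML.load_file(File.join(puppet_cache_dir, 'foreman_cache_data', name))
    end

So the question Stephen asks below (did you install some Puppet 4 package?) is the crux: a single package dropping files under /opt/puppetlabs flips the lookup path even though the cached answers still live under /var/lib/puppet.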
Now happily running 3.2. Cheers, L.

------
The most dangerous phrase in the language is, "We've always done it this way." - Grace Hopper

On 4 February 2017 at 13:52, Edson Manners <[email protected]> wrote:
> Similar Puppet 4 path issue when upgrading the Capsule??
>
> [root@katello3 ~]# capsule-certs-generate --capsule-fqdn "dns1.xxx.xxxxx.xxx" --certs-tar "~/capsule.dns1.xxx.xxxxx.xxxx-certs.tar"
> Installing    Done    [100%] [...................................................................................................................]
> Success!
> /usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:188:in `initialize': No such file or directory - /opt/puppetlabs/puppet/cache/foreman_cache_data/oauth_consumer_key (Errno::ENOENT)
>     from /usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:188:in `open'
>     from /usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:188:in `unsafe_load_file'
>     from /usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:153:in `load_file_with_options'
>     from /usr/share/katello-installer-base/hooks/boot/01-helpers.rb:30:in `read_cache_data'
>     from /usr/share/katello-installer-base/hooks/post/10-post_install.rb:48:in `block (4 levels) in load'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:34:in `instance_eval'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:34:in `block (4 levels) in load'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hook_context.rb:13:in `instance_exec'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hook_context.rb:13:in `execute'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:51:in `block in execute'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:49:in `each'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:49:in `execute'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:454:in `block in run_installation'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/exit_handler.rb:27:in `call'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/exit_handler.rb:27:in `exit'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:160:in `exit'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:453:in `run_installation'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:147:in `execute'
>     from /usr/share/gems/gems/clamp-1.0.0/lib/clamp/command.rb:68:in `run'
>     from /usr/share/gems/gems/clamp-1.0.0/lib/clamp/command.rb:133:in `run'
>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:154:in `run'
>     from /sbin/capsule-certs-generate:75:in `<main>'
>
> On Friday, February 3, 2017 at 9:24:12 PM UTC-5, Edson Manners wrote:
>>
>> Thanks for replying, Stephen. Here's what I found:
>>
>> [root@katello3 puppet]# rpm -q --whatprovides /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/storeconfigs.rb
>> puppetdb-terminus-2.3.8-1.el7.noarch
>>
>> So maybe PuppetDB was the culprit.
>>
>> And yes, even though I read in a Katello 3.2 changelog somewhere that that grub2 bug was fixed, it seems it still fell through the cracks.
>>
>> I'm still trying to get Katello back online. I'm upgrading the external proxy to see if that will get rid of the last errors, but at this point the server is pretty much unusable.
>> I don't want to badmouth Katello, because I love it, but I'd like to warn others about my experience, since I spent weeks preparing for this upgrade and still got bitten by unexpected errors/bugs.
>>
>> I'll report back any progress for completeness.
>>
>> Feb 03 21:20:04 katello3.rcc.fsu.edu puppet-master[49870]: Report processor failed: Could not send report to Foreman at https://katello3.xxx.xxxxx.xxx/api/config_reports: Net::ReadTimeout
>> Feb 03 21:20:04 katello3.xxx.xxxxx.xxx puppet-master[49870]:
>> ["/usr/share/ruby/net/protocol.rb:158:in `rescue in rbuf_fill'",
>> "/usr/share/ruby/net/protocol.rb:152:in `rbuf_fill'",
>> "/usr/share/ruby/net/protocol.rb:134:in `readuntil'",
>> "/usr/share/ruby/net/protocol.rb:144:in `readline'",
>> "/usr/share/ruby/net/http/response.rb:39:in `read_status_line'",
>> "/usr/share/ruby/net/http/response.rb:28:in `read_new'",
>> "/usr/share/ruby/net/http.rb:1412:in `block in transport_request'",
>> "/usr/share/ruby/net/http.rb:1409:in `catch'",
>> "/usr/share/ruby/net/http.rb:1409:in `transport_request'",
>> "/usr/share/ruby/net/http.rb:1382:in `request'",
>> "/usr/share/ruby/net/http.rb:1375:in `block in request'",
>> "/usr/share/ruby/net/http.rb:852:in `start'",
>> "/usr/share/ruby/net/http.rb:1373:in `request'",
>> "/usr/share/ruby/vendor_ruby/puppet/reports/foreman.rb:65:in `process'",
>> "/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:37:in `block in process'",
>> "/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:53:in `block in processors'",
>> "/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:51:in `each'",
>> "/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:51:in `processors'",
>> "/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:30:in `process'",
>> "/usr/share/ruby/vendor_ruby/puppet/indirector/report/processor.rb:14:in `save'",
>> "/usr/share/ruby/vendor_ruby/puppet/indirector/indirection.rb:283:in `save'",
>> "/usr/share/ruby/vendor_ruby/puppet/network/http/api/v1.rb:160:in `do_save'",
>> "/usr/share/ruby/vendor_ruby/puppet/network/http/api/v1.rb:50:in `block in call'",
>> "/usr/share/ruby/vendor_ruby/puppet/context.rb:64:in `override'",
>> "/usr/share/ruby/vendor_ruby/puppet.rb:246:in `override'",
>> "/usr/share/ruby/vendor_ruby/puppet/network/http/api/v1.rb:49:in `call'",
>> "/usr/share/ruby/vendor_ruby/puppet/network/http/route.rb:82:in `block in process'",
>> "/usr/share/ruby/vendor_ruby/puppet/network/http/route.rb:81:in `each'",
>> "/usr/share/ruby/vendor_ruby/puppet/network/http/route.rb:81:in `process'",
>> "/usr/share/ruby/vendor_ruby/puppet/network/http/handler.rb:63:in `block in process'",
>> "/usr/share/ruby/vendor_ruby/puppet/util/profiler/around_profiler.rb:58:in `profile'",
>> "/usr/share/ruby/vendor_ruby/puppet/util/profiler.rb:51:in `profile'",
>> "/usr/share/ruby/vendor_ruby/puppet/network/http/handler.rb:61:in `process'",
>> "/usr/share/ruby/vendor_ruby/puppet/network/http/rack.rb:21:in `call'",
>> "/usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/rack/thread_handler_extension.rb:77:in `process_request'",
>> "/usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/request_handler/thread_handler.rb:140:in `accept_and_process_next_request'",
>> "/usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/request_handler/thread_handler.rb:108:in `main_loop'",
>> "/usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/request_handler.rb:441:in `block (3 levels) in start_threads'"]
>>
>> On Friday, February 3, 2017 at 4:54:05 PM UTC-5, stephen wrote:
>>>
>>> On Fri, Feb 3, 2017 at 2:36 PM, Edson Manners <[email protected]> wrote:
>>> > Just an update. It looks like the candlepin migration was looking for Puppet 4 and not Puppet 3. I can't seem to find any 'foreman-installer' arguments that indicate you'd like to stick with Puppet 3. So if Katello 3.1 uses Puppet 3 and Katello 3.2 uses Puppet 4, how does one upgrade then?
>>>
>>> Katello 3.2 can use either version of Puppet.
>>>
>>> The only reason we'd be looking in the Puppet 4 directory is if it existed. We figure out the directory like this:
>>> https://github.com/Katello/katello-installer/blob/master/hooks/boot/01-helpers.rb#L4
>>>
>>> It's maybe a little simplistic, although I don't think you should have any /opt/puppetlabs directory unless you installed some Puppet 4 package. Was that the case?
>>>
>>> As far as the error in your other message goes, it looks like http://projects.theforeman.org/issues/17639 should've been backported to 3.2; `mkdir /var/lib/tftpboot/grub2` might fix it.
>>>
>>> > On Friday, February 3, 2017 at 10:50:11 AM UTC-5, Edson Manners wrote:
>>> >>
>>> >> I followed these instructions to upgrade Katello: https://theforeman.org/plugins/katello/3.2/upgrade/index.html
>>> >>
>>> >> Everything went smoothly until I ran the foreman upgrade command. I got the error below. For some reason it's trying to use what looks like the Puppet PE path instead of the OS Puppet path. I don't see any bug reports or anyone else with a similar issue, so I'm wondering if there's a path argument or something that I missed. Any help is appreciated.
>>> >>
>>> >> HW/SW spec:
>>> >> CentOS 7.3
>>> >>
>>> >> [root@katello3 puppet]# foreman-installer --scenario katello --upgrade
>>> >> Upgrading...
>>> >> Upgrade Step: stop_services...
>>> >> Redirecting to /bin/systemctl stop foreman-tasks.service
>>> >> Redirecting to /bin/systemctl stop httpd.service
>>> >> Redirecting to /bin/systemctl stop pulp_workers.service
>>> >> Redirecting to /bin/systemctl stop foreman-proxy.service
>>> >> Redirecting to /bin/systemctl stop pulp_streamer.service
>>> >> Redirecting to /bin/systemctl stop pulp_resource_manager.service
>>> >> Redirecting to /bin/systemctl stop pulp_celerybeat.service
>>> >> Redirecting to /bin/systemctl stop tomcat.service
>>> >> Redirecting to /bin/systemctl stop squid.service
>>> >> Redirecting to /bin/systemctl stop qdrouterd.service
>>> >> Redirecting to /bin/systemctl stop qpidd.service
>>> >> Success!
>>> >>
>>> >> Upgrade Step: start_databases...
>>> >> Redirecting to /bin/systemctl start mongod.service
>>> >> Redirecting to /bin/systemctl start postgresql.service
>>> >> Success!
>>> >>
>>> >> Upgrade Step: update_http_conf...
>>> >>
>>> >> Upgrade Step: migrate_pulp...
>>> >>
>>> >> 27216
>>> >>
>>> >> Attempting to connect to localhost:27017
>>> >> Attempting to connect to localhost:27017
>>> >> Write concern for Mongo connection: {}
>>> >> Loading content types.
>>> >> Loading type descriptors []
>>> >> Parsing type descriptors
>>> >> Validating type descriptor syntactic integrity
>>> >> Validating type descriptor semantic integrity
>>> >> Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
>>> >> Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
>>> >> Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
>>> >> Loading unit model: docker_image = pulp_docker.plugins.models:Image
>>> >> Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
>>> >> Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
>>> >> Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
>>> >> Loading unit model: srpm = pulp_rpm.plugins.db.models:SRPM
>>> >> Loading unit model: package_group = pulp_rpm.plugins.db.models:PackageGroup
>>> >> Loading unit model: package_category = pulp_rpm.plugins.db.models:PackageCategory
>>> >> Loading unit model: iso = pulp_rpm.plugins.db.models:ISO
>>> >> Loading unit model: package_environment = pulp_rpm.plugins.db.models:PackageEnvironment
>>> >> Loading unit model: drpm = pulp_rpm.plugins.db.models:DRPM
>>> >> Loading unit model: package_langpacks = pulp_rpm.plugins.db.models:PackageLangpacks
>>> >> Loading unit model: rpm = pulp_rpm.plugins.db.models:RPM
>>> >> Loading unit model: yum_repo_metadata_file = pulp_rpm.plugins.db.models:YumMetadataFile
>>> >> Updating the database with types []
>>> >> Found the following type definitions that were not present in the update collection [puppet_module, docker_tag, docker_manifest, docker_blob, erratum, distribution, yum_repo_metadata_file, package_group, package_category, iso, package_environment, drpm, package_langpacks, rpm, srpm, docker_image]
>>> >> Updating the database with types [puppet_module, drpm, package_langpacks, erratum, docker_blob, docker_manifest, yum_repo_metadata_file, package_group, package_category, iso, package_environment, docker_tag, distribution, rpm, srpm, docker_image]
>>> >> Content types loaded.
>>> >> Ensuring the admin role and user are in place.
>>> >> Admin role and user are in place.
>>> >> Beginning database migrations.
>>> >> Migration package pulp.server.db.migrations is up to date at version 24
>>> >> Migration package pulp_docker.plugins.migrations is up to date at version 2
>>> >> Migration package pulp_puppet.plugins.migrations is up to date at version 5
>>> >> Migration package pulp_rpm.plugins.migrations is up to date at version 35
>>> >> Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
>>> >> Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
>>> >> Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
>>> >> Loading unit model: docker_image = pulp_docker.plugins.models:Image
>>> >> Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
>>> >> Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
>>> >> Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
>>> >> Loading unit model: srpm = pulp_rpm.plugins.db.models:SRPM
>>> >> Loading unit model: package_group = pulp_rpm.plugins.db.models:PackageGroup
>>> >> Loading unit model: package_category = pulp_rpm.plugins.db.models:PackageCategory
>>> >> Loading unit model: iso = pulp_rpm.plugins.db.models:ISO
>>> >> Loading unit model: package_environment = pulp_rpm.plugins.db.models:PackageEnvironment
>>> >> Loading unit model: drpm = pulp_rpm.plugins.db.models:DRPM
>>> >> Loading unit model: package_langpacks = pulp_rpm.plugins.db.models:PackageLangpacks
>>> >> Loading unit model: rpm = pulp_rpm.plugins.db.models:RPM
>>> >> Loading unit model: yum_repo_metadata_file = pulp_rpm.plugins.db.models:YumMetadataFile
>>> >> Database migrations complete.
>>> >>
>>> >> Upgrade Step: start_httpd...
>>> >> Redirecting to /bin/systemctl start httpd.service
>>> >> Success!
>>> >>
>>> >> Upgrade Step: start_qpidd...
>>> >> Redirecting to /bin/systemctl start qpidd.service
>>> >> Redirecting to /bin/systemctl start qdrouterd.service
>>> >> Success!
>>> >>
>>> >> Upgrade Step: start_pulp...
>>> >> Redirecting to /bin/systemctl start pulp_celerybeat.service
>>> >> Redirecting to /bin/systemctl start pulp_resource_manager.service
>>> >> Redirecting to /bin/systemctl start pulp_workers.service
>>> >> Success!
>>> >>
>>> >> Upgrade Step: migrate_candlepin...
>>> >>
>>> >> /usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:188:in `initialize': No such file or directory - /opt/puppetlabs/puppet/cache/foreman_cache_data/candlepin_db_password (Errno::ENOENT)
>>> >>     from /usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:188:in `open'
>>> >>     from /usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:188:in `unsafe_load_file'
>>> >>     from /usr/share/ruby/vendor_ruby/puppet/vendor/safe_yaml/lib/safe_yaml.rb:153:in `load_file_with_options'
>>> >>     from /usr/share/katello-installer-base/hooks/boot/01-helpers.rb:30:in `read_cache_data'
>>> >>     from /usr/share/katello-installer-base/hooks/pre/30-upgrade.rb:28:in `migrate_candlepin'
>>> >>     from /usr/share/katello-installer-base/hooks/pre/30-upgrade.rb:89:in `upgrade_step'
>>> >>     from /usr/share/katello-installer-base/hooks/pre/30-upgrade.rb:125:in `block (4 levels) in load'
>>> >>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:34:in `instance_eval'
>>> >>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:34:in `block (4 levels) in load'
>>> >>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hook_context.rb:13:in `instance_exec'
>>> >>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hook_context.rb:13:in `execute'
>>> >>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:51:in `block in execute'
>>> >>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:49:in `each'
>>> >>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/hooking.rb:49:in `execute'
>>> >>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:408:in `run_installation'
>>> >>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:147:in `execute'
>>> >>     from /usr/share/gems/gems/clamp-1.0.0/lib/clamp/command.rb:68:in `run'
>>> >>     from /usr/share/gems/gems/clamp-1.0.0/lib/clamp/command.rb:133:in `run'
>>> >>     from /usr/share/gems/gems/kafo-0.9.8/lib/kafo/kafo_configure.rb:154:in `run'
>>> >>     from /sbin/foreman-installer:8:in `<main>'
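P.S. One untested thought, since simply re-running eventually worked for me: if the cached answers exist under the old Puppet 3 path while the helper is now looking under the Puppet 4 path, carrying them across (or removing whatever package created /opt/puppetlabs) might avoid the repeated failures. A sketch only; the paths are the ones from the traces above, and the approach is my guess, not a documented fix:

    require 'fileutils'

    old_cache = '/var/lib/puppet/foreman_cache_data'               # Puppet 3 location
    new_cache = '/opt/puppetlabs/puppet/cache/foreman_cache_data'  # Puppet 4 location

    # Copy the saved installer answers across only if they are missing from
    # the location the helper is now consulting.
    if File.directory?(old_cache) && !File.directory?(new_cache)
      FileUtils.mkdir_p(File.dirname(new_cache))
      FileUtils.cp_r(old_cache, new_cache)
    end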
