Red Hat Satellite Server – Fatal error in Python code occurred [[6]]

I have embraced Red Hat Satellite server in a big way over the past year and try to use it wherever possible though not for everything.

One of the features I started using to simplify life, whilst I look at other configuration management systems, was Configuration Channels.  These allow you to provide a central repository of files and binaries which can be deployed to a server during the initial kickstart deployment process.

Some changes had been made a month or so ago, to ensure that a specific configuration channel would be included in future deployments by way of updating the Activation Key for that deployment type in Satellite server.  Seems innocent enough at this point.  It is worth noting that there were other configuration channels associated with this activation key.

At the same time I had also added a couple of packages to the software package list which were also required at time of deployment.

Now, I rely on scripts which have been deployed to a server to complete some post server build tasks.  The first thing I noticed after a test deployment, was a complete lack of any scripts where I expected them to be.  The configuration channels had created the required folder structure but had stopped completely and had gone no further.  The error the Satellite server reported back to me was… well not massively helpful;

Fatal error in Python code occurred [[6]]

Nothing more, nothing less.

At this point I started trying to remember what I had added (thankfully not too hard as I document things quite heavily 🙂 ).  Here, roughly, are the steps I took to confirm where the issue resided;

  • Remove the additional packages I had specified for this particular build – made no difference
  • Remove what was the most recently added configuration channel – made no difference
  • Tested another Red Hat Enterprise Linux 7 build (not using this particular kickstart profile) – success, so the issue would appear to be limited to this one profile
  • Remove the other configuration channels that were added some time before the last one was added – failed, still the configuration channels would not deploy. But wait, there was light at the end of the tunnel!

But, following this last step, the error message changed, from something not very helpful to something quite helpful indeed!  The message stated that permissions could not be applied as per those stipulated against specific files in the configuration channel.

So it transpires that it was a permissions resolution issue. Well, more a group resolution issue really.  There were a couple of files which were set to be deployed with a specific group.  The group in question is served from an LDAP server, and the newly built machine wasn’t configured at that point to talk to the LDAP server; for this particular deployment we didn’t want auto-registration with the LDAP services.
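If you run into something similar, a quick getent query on the freshly built box will tell you whether the group in question can actually be resolved. This is just a sketch; the group name below is a placeholder for whichever LDAP-served group your configuration channel references.

getent group ldap-only-group    # placeholder group name
# No output means the group cannot be resolved on this host, so any file in the
# configuration channel that references it will fail to have its permissions applied.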

So the lesson here is make small changes, test frequently and make sure you document what you have done.  Or use a configuration management system which is version controlled, so you can easily roll back.

Just so we are clear, I was running Red Hat Satellite Server 5.7 (fully patched) on RHEL 6.8 and trying to deploy RHEL 7.3.  My adventure to upgrade Satellite server to version 6.2 will be the subject of a blog post soon.

So, it would appear this story comes with a lesson attached (free of charge) that all should take note of – “Always make one change at a time and test or as near to one as you can”.

Featured image credit: Charly W Karl posted e.Deorbit closing on target satellite on Flickr.  Thanks very much.

A Step-by-Step Guide to Installing Spacewalk on CentOS 7

It would appear that during an upgrade of my blog at some point over the past year, I have managed to wipe out the original how to guide to installing Spacewalk on CentOS 7, so here we go again.

A step-by-step guide to installing Spacewalk on CentOS 7.  Just in case you weren’t aware, Spacewalk is the upstream project for Red Hat Satellite Server.

Assumptions

  • You know the basic idea behind Spacewalk; if not, see here
  • You have a vanilla VM with CentOS 7.2 installed which was deployed as a “minimal” installation
  • You have subsequently run an update to make sure you have the latest patches
  • You have root access or equivalent via sudo
  • You have vim installed (if not, the following command should fix that)
    yum install vim -y
  • The machine you intend to install Spacewalk onto has access to the internet
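If you want to double-check those assumptions before diving in, something along these lines should cover it (the curl check is just a rough test that the Spacewalk repository host is reachable):

cat /etc/centos-release              # confirm we are on CentOS 7.x
sudo yum update -y                   # make sure the latest patches are applied
sudo yum install vim -y              # the editor used throughout this guide
curl -sI http://yum.spacewalkproject.org/ | head -n 1   # rough internet reachability check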

Preparation

Firstly, we need to install and/or create the necessary YUM repo files that will be used to install Spacewalk directly from the official Spacewalk yum repository, along with all its associated dependencies.

  1. Run the following command as root on your spacewalk VM
    rpm -Uvh http://yum.spacewalkproject.org/2.5/RHEL/7/x86_64/spacewalk-repo-2.5-3.el7.noarch.rpm
  2. You then need to manually configure another yum repository for JPackage which is a dependency for Spacewalk, by running the following (you will need to be the root user to do this);
    sudo -i
    cat > /etc/yum.repos.d/jpackage-generic.repo << EOF
    [jpackage-generic]
    name=JPackage generic
    baseurl=ftp://ftp.rediris.es/mirror/jpackage/5.0/generic/free/
    enabled=1
    gpgcheck=1
    gpgkey=http://www.jpackage.org/jpackage.asc
    EOF
  3. And then we also need to install the EPEL yum repository configuration for CentOS 7;
    rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
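Before moving on, it’s worth a quick check that yum can actually see all three repositories we just added; the spacewalk, jpackage-generic and epel repos should all appear in the output of:

yum repolist enabled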

Installation: Embedded Database

Spacewalk utilises a database back end to store the required information about your environment.  The two options are PostgreSQL and Oracle.  Neither would be my preference but I always opt for the lesser of two evils – PostgreSQL.

The installation is a piece of cake, and can be performed by issuing the following command at the command line;

yum install spacewalk-setup-postgresql -y

During the process you should be prompted to accept the Spacewalk GPG key. You will need to enter “y” to accept!

Installation: Spacewalk

Now, things have been made pretty easy for you so far.  And we won’t stop now.  To install all of the required packages for Spacewalk just run the following;

yum install spacewalk-postgresql

And let it download everything you need.  In all (at the time of writing) there were 379 packages totalling 563M.

Again you will likely be prompted to import the Fedora EPEL (7) GPG key.  This is necessary so just type “y” and give that Enter key a gentle tap.

And.. you will also be prompted to import the JPackage Project GPG key.  Same process as above – “y” followed by Enter.

During the installation you will see a lot of text scrolling up the screen.  This will be a mix of general package installation output from yum and some commands that the RPM package will initiate to set and define such things as SELinux contexts.

The key thing is you should see right at the end “Complete!”.  You know you are in a good place at this point.
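If you want a little more reassurance than a single “Complete!”, a quick package query (just a sanity check, nothing official) should list the Spacewalk components that were pulled in:

rpm -qa | grep -i spacewalk | sort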

Security: Setting up the firewall rules

CentOS 7 and (for that matter) Red Hat Enterprise Linux 7 ship with firewalld as standard.  Now I’m not completely sure about firewalld, but I’m sticking with it; should you decide you want to use iptables instead (and you have taken steps to make sure it is enabled), I have provided the firewall rules required for both;

firewalld

firewall-cmd --zone=public --add-service=http
firewall-cmd --zone=public --add-service=http --permanent
firewall-cmd --zone=public --add-service=https
firewall-cmd --zone=public --add-service=https --permanent

Note.  Make sure you have double dashes/hyphens if you copy and paste as I have seen the pasted text only using a single hyphen.
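To confirm the rules actually took, a quick listing of the public zone should now show both http and https:

firewall-cmd --zone=public --list-services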

Skip to section after iptables if you have applied the above configuration!

iptables

Now as iptables can be configured in all manner of ways, I’m just going to provide the basics; if your set-up is more customised than the default, then you probably don’t need me telling you how to set up iptables.

I will just make one assumption though.  That the default INPUT policy is set to DROP and that you do not have any DROP or REJECT lines at the end of your INPUT chain.

iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

And don’t forget to save your firewall rules;

# service iptables save
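Note.  On a stock CentOS 7 install the iptables service (and the save command above) is provided by the iptables-services package, so if you have ditched firewalld you may first need something along these lines:

yum install iptables-services -y
systemctl enable iptables
systemctl start iptables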

Configuring Spacewalk

Right then, still with me?  Awesome, so let’s continue with getting Spacewalk up and running.  At this point there is one fundamental thing you need…

You must have a resolvable Fully Qualified Domain Name (FQDN).  For my installation I have fudged it and added the FQDN to the hosts file, as I intend to build the rest of my new lab environment using Spacewalk.
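For reference, the hosts file fudge is nothing more exotic than a single line; the IP address and names below are placeholders for your own:

# /etc/hosts
192.168.1.50   spacewalk.lab.example.com   spacewalk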

So assuming you have followed everything above we can now simply run the following;

spacewalk-setup

Note.  The above assumes you have the embedded PostgreSQL database and not a remote DB, or the Oracle DB option.  Just saying.

So you should see something like the following (it may take quite some time for many of the tasks to be completed, so bear with it);

[root@spacewalk ~]# spacewalk-setup
* Setting up SELinux..
** Database: Setting up database connection for PostgreSQL backend.
** Database: Installing the database:
** Database: This is a long process that is logged in:
** Database:   /var/log/rhn/install_db.log
*** Progress: ###
** Database: Installation complete.
** Database: Populating database.
*** Progress: ###########################
* Configuring tomcat.
* Setting up users and groups.
** GPG: Initializing GPG and importing key.
** GPG: Creating /root/.gnupg directory
You must enter an email address.
Admin Email Address? toby@lab.tobyhewood.com
* Performing initial configuration.
* Configuring apache SSL virtual host.
Should setup configure apache's default ssl server for you (saves original ssl.conf) [Y]? 
** /etc/httpd/conf.d/ssl.conf has been backed up to ssl.conf-swsave
* Configuring jabberd.
* Creating SSL certificates.
CA certificate password? 
Re-enter CA certificate password? 
Organization? Toby Heywood
Organization Unit [spacewalk]? 
Email Address [toby@lab.tobyhewood.com]? 
City? London
State? London
Country code (Examples: "US", "JP", "IN", or type "?" to see a list)? GB
** SSL: Generating CA certificate.
** SSL: Deploying CA certificate.
** SSL: Generating server certificate.
** SSL: Storing SSL certificates.
* Deploying configuration files.
* Update configuration in database.
* Setting up Cobbler..
Cobbler requires tftp and xinetd services be turned on for PXE provisioning functionality. Enable these services [Y]? y
* Restarting services.
Installation complete.
Visit https://spacewalk to create the Spacewalk administrator account.

Now at this point you are almost ready to break open a beer and give yourself a pat on the back.  But let’s finalise the installation first.

Creating your Organisation
(that’s Organization for the Americans)

Setting up your organisation requires only a few simple things to be provided.

  • Click the Create Organization button and you should finally see a screen similar to the following;
    [Screenshot: Set up your Spacewalk organization]
  • The last thing to do, now you have your shiny new installation of Spacewalk, is to perform a few sanity checks;
    [Screenshot: Successful installation of Spacewalk]
  • Navigate to Admin > Task Engine Status and confirm that everything looks healthy and that the Scheduling Service is showing as “ON”
  • You can also take a look at my earlier blog post – spacewalk sanity checking – about some steps I previously took to make sure everything was running.

And there we go, you have installed Spacewalk.

Spacewalk – Post install sanity check

After having installed Spacewalk, got it working to a certain point and then found that there may have been issues with the installation, I thought it would be easier to simply re-install spacewalk onto a new virtual machine.

So following on from my how to article, I wanted to make sure that post installation, I had performed sufficient checks to confirm that there were no issues with the scheduler service or cobbler, as these were two things I had great difficulty trying to get working.

I guess it is also worth mentioning that the VM I am running spacewalk on has a single vCPU and 4GB of memory.  For storage I have given it 40G which will do me fine.  And as for the OS it is running CentOS 7 (1511).

So what should we check?

Good question.  The following is a rough list of all the services I confirmed were enabled and running, with no horrible errors in their log files (a quick loop to check them all in one go follows the list).

Services

  • cobblerd
  • postgresql
  • xinetd (tftp)
  • httpd
  • tomcat
  • taskomatic (a.k.a. the scheduler)
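Rather than poking at each one individually, a quick shell loop covers the systemd-managed services in one go (taskomatic is handled separately further down, as it still uses a sysvinit script). This is just a sketch and assumes the service names listed above:

# Print the active/inactive state of each systemd-managed service
for svc in cobblerd postgresql xinetd httpd tomcat; do
    printf '%-12s %s\n' "$svc" "$(systemctl is-active $svc)"
done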

cobblerd

[toby@manager ~]$ sudo systemctl status cobblerd
● cobblerd.service - LSB: daemon for libvirt virtualization API
   Loaded: loaded (/etc/rc.d/init.d/cobblerd)
   Active: active (running) since Fri 2016-04-15 23:08:37 BST; 24min ago
     Docs: man:systemd-sysv-generator(8)
   CGroup: /system.slice/cobblerd.service
           └─13257 /usr/bin/python -s /bin/cobblerd --daemonize

Apr 15 23:08:35 manager systemd[1]: Starting LSB: daemon for libvirt virtualization API...
Apr 15 23:08:37 manager cobblerd[13247]: Starting cobbler daemon: [  OK  ]
Apr 15 23:08:37 manager systemd[1]: Started LSB: daemon for libvirt virtualization API.
Apr 15 23:31:35 manager systemd[1]: [/run/systemd/generator.late/cobblerd.service:8] Failed to add dependency on network,.service, ignoring: Invalid argument
Apr 15 23:31:35 manager systemd[1]: [/run/systemd/generator.late/cobblerd.service:8] Failed to add dependency on xinetd,.service, ignoring: Invalid argument

The last two lines can be ignored.  I believe this is purely due to some references to the sysvinit scripts which are no longer used, and as you will see later, things appear to be running fine (this time around).

PostgreSQL

[toby@manager ~]$ sudo systemctl status postgresql
● postgresql.service - PostgreSQL database server
   Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-04-15 23:05:39 BST; 35min ago
 Main PID: 12556 (postgres)
   CGroup: /system.slice/postgresql.service
           ├─12556 /usr/bin/postgres -D /var/lib/pgsql/data -p 5432
           ├─12557 postgres: logger process   
           ├─12559 postgres: checkpointer process   
           ├─12560 postgres: writer process   
           ├─12561 postgres: wal writer process   
           ├─12562 postgres: autovacuum launcher process   
           ├─12563 postgres: stats collector process   
           ├─13191 postgres: rhnuser rhnschema [local] idle in transaction
           ├─13343 postgres: rhnuser rhnschema 127.0.0.1(55225) idle
           ├─13344 postgres: rhnuser rhnschema 127.0.0.1(55226) idle
           ├─13345 postgres: rhnuser rhnschema 127.0.0.1(55227) idle
           ├─13346 postgres: rhnuser rhnschema 127.0.0.1(55228) idle
           ├─13347 postgres: rhnuser rhnschema 127.0.0.1(55229) idle
           ├─13348 postgres: rhnuser rhnschema 127.0.0.1(55230) idle
           ├─13350 postgres: rhnuser rhnschema 127.0.0.1(55231) idle
           ├─13351 postgres: rhnuser rhnschema 127.0.0.1(55232) idle
           ├─13352 postgres: rhnuser rhnschema 127.0.0.1(55233) idle
           ├─13354 postgres: rhnuser rhnschema 127.0.0.1(55234) idle
           ├─13355 postgres: rhnuser rhnschema 127.0.0.1(55235) idle
           ├─13356 postgres: rhnuser rhnschema 127.0.0.1(55236) idle
           ├─13357 postgres: rhnuser rhnschema 127.0.0.1(55237) idle
           ├─13358 postgres: rhnuser rhnschema 127.0.0.1(55238) idle
           ├─13361 postgres: rhnuser rhnschema 127.0.0.1(55240) idle
           ├─13391 postgres: rhnuser rhnschema 127.0.0.1(55244) idle
           ├─13442 postgres: rhnuser rhnschema 127.0.0.1(55246) idle
           ├─13444 postgres: rhnuser rhnschema 127.0.0.1(55248) idle
           ├─13451 postgres: rhnuser rhnschema 127.0.0.1(55250) idle
           ├─13651 postgres: rhnuser rhnschema 127.0.0.1(55266) idle
           ├─28774 postgres: rhnuser rhnschema 127.0.0.1(55272) idle
           ├─28842 postgres: rhnuser rhnschema 127.0.0.1(55275) idle
           ├─28843 postgres: rhnuser rhnschema 127.0.0.1(55276) idle
           ├─28844 postgres: rhnuser rhnschema 127.0.0.1(55277) idle
           ├─28847 postgres: rhnuser rhnschema 127.0.0.1(55278) idle
           ├─28902 postgres: rhnuser rhnschema 127.0.0.1(55279) idle
           └─28903 postgres: rhnuser rhnschema 127.0.0.1(55280) idle

Apr 15 23:05:38 manager systemd[1]: Starting PostgreSQL database server...
Apr 15 23:05:39 manager systemd[1]: Started PostgreSQL database server.

tftp (by way of xinetd)

[toby@manager ~]$ sudo systemctl enable tftp
[toby@manager ~]$ sudo systemctl start tftp
[toby@manager ~]$ sudo systemctl status tftp
● tftp.service - Tftp Server
   Loaded: loaded (/usr/lib/systemd/system/tftp.service; indirect; vendor preset: disabled)
   Active: active (running) since Fri 2016-04-15 23:46:30 BST; 2s ago
     Docs: man:in.tftpd
 Main PID: 29012 (in.tftpd)
   CGroup: /system.slice/tftp.service
           └─29012 /usr/sbin/in.tftpd -s /var/lib/tftpboot

httpd

[toby@manager ~]$ sudo systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-04-15 23:08:34 BST; 39min ago
     Docs: man:httpd(8)
           man:apachectl(8)
 Main PID: 13168 (httpd)
   Status: "Total requests: 1; Current requests/sec: 0; Current traffic:   0 B/sec"
   CGroup: /system.slice/httpd.service
           ├─13168 /usr/sbin/httpd -DFOREGROUND
           ├─13169 /usr/sbin/httpd -DFOREGROUND
           ├─13170 /usr/sbin/httpd -DFOREGROUND
           ├─13171 /usr/sbin/httpd -DFOREGROUND
           ├─13172 /usr/sbin/httpd -DFOREGROUND
           ├─13173 /usr/sbin/httpd -DFOREGROUND
           ├─13174 /usr/sbin/httpd -DFOREGROUND
           ├─13175 /usr/sbin/httpd -DFOREGROUND
           └─13176 /usr/sbin/httpd -DFOREGROUND

Apr 15 23:08:34 manager systemd[1]: Starting The Apache HTTP Server...
Apr 15 23:08:34 manager httpd[13168]: AH00557: httpd: apr_sockaddr_info_get() failed for manager
Apr 15 23:08:34 manager httpd[13168]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
Apr 15 23:08:34 manager systemd[1]: Started The Apache HTTP Server.

Feel free to ignore the warning messages with regards to the FQDN.
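If the warning bothers you, setting the ServerName directive globally (as the message itself suggests) is enough to silence it; substitute your own FQDN here, mine is just an example:

echo "ServerName spacewalk.lab.example.com" >> /etc/httpd/conf/httpd.conf
systemctl restart httpd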

tomcat

Once the install had completed there was an error message;

Tomcat failed to start properly or the installer ran out of tries. Please check /var/log/tomcat*/catalina.out for errors.

I checked the logs and saw some errors, but as you can see from the following, simply making sure it was enabled and started appears to have cleared up whatever the issue may have been.

[toby@manager ~]$ sudo systemctl enable tomcat
[toby@manager ~]$ sudo systemctl start tomcat
[toby@manager ~]$ sudo systemctl status tomcat
● tomcat.service - Apache Tomcat Web Application Container
   Loaded: loaded (/usr/lib/systemd/system/tomcat.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-04-15 23:05:39 BST; 43min ago
 Main PID: 12589 (java)
   CGroup: /system.slice/tomcat.service
           └─12589 /usr/lib/jvm/jre/bin/java -ea -Xms256m -Xmx256m -Djava.awt.headless=true -Dorg.xml.sax.driver=org.apache.xerces.parsers.SAXParser -Dorg.apache.tomcat.util.http.Parameters.MAX_COUNT=1024 -XX...

Apr 15 23:08:33 manager server[12589]: INFO: Deployment of configuration descriptor /etc/tomcat/Catalina/localhost/rhn.xml has finished in 172,857 ms
Apr 15 23:08:33 manager server[12589]: Apr 15, 2016 11:08:33 PM org.apache.coyote.AbstractProtocol start
Apr 15 23:08:33 manager server[12589]: INFO: Starting ProtocolHandler ["http-bio-127.0.0.1-8080"]
Apr 15 23:08:34 manager server[12589]: Apr 15, 2016 11:08:34 PM org.apache.coyote.AbstractProtocol start
Apr 15 23:08:34 manager server[12589]: INFO: Starting ProtocolHandler ["ajp-bio-127.0.0.1-8009"]
Apr 15 23:08:34 manager server[12589]: Apr 15, 2016 11:08:34 PM org.apache.coyote.AbstractProtocol start
Apr 15 23:08:34 manager server[12589]: INFO: Starting ProtocolHandler ["ajp-bio-0:0:0:0:0:0:0:1-8009"]
Apr 15 23:08:34 manager server[12589]: Apr 15, 2016 11:08:34 PM org.apache.catalina.startup.Catalina start
Apr 15 23:08:34 manager server[12589]: INFO: Server startup in 173112 ms
Apr 15 23:31:41 manager systemd[1]: Started Apache Tomcat Web Application Container.

taskomatic

Now, this one doesn’t appear to have been moved over to the new systemd environment and therefore we resort back to the good old sysvinit scripts and the service command to confirm this one is working;

[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is running (13296).

Looking good so far

But can it withstand a reboot?  Now that is the question.  So I repeated the above steps again, just to confirm.  I won’t bore you with all the details;

  • cobblerd.service – active (running)
  • httpd.service – active (running)
  • tftp.service – inactive (dead)
  • postgresql.service – active (running)
  • tomcat.service – active (running)
  • taskomatic – RHN Taskomatic is not running.
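The taskomatic fix is covered below.  As for tftp, the unit was listed as “indirect” earlier, which suggests it is socket activated; my assumption is that enabling the socket (and xinetd, which the installer asked to have turned on) is what keeps it available across reboots:

# Assumption: tftp.socket (from tftp-server) and xinetd handle tftp after a reboot
systemctl enable tftp.socket xinetd
systemctl start tftp.socket xinetd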

taskomatic revisited

[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is not running.
[toby@manager ~]$ sudo chkconfig taskomatic on
[toby@manager ~]$ sudo service taskomatic start
Starting RHN Taskomatic...
[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is running (10870).

And for good measure I gave the machine another reboot, just to confirm the taskomatic service did start.

[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is running (1278).

Oh Yeah!  Now I’m a happy camper.  And it’s time to re-visit the initial configuration part, which I shall post about shortly.

Image credit; Thanks to Mark Walsh for making the featured image called “Russell Street Court Cells – Padded Cell” available on Flickr.com.