Improving your email reputation

In the past I have looked at Sender Policy Framework (SPF), which checks the bounce address of an email; SenderID, which checks the FROM address in the email header; and DomainKeys Identified Mail (DKIM), which signs your emails to confirm that they were indeed sent from an authorised email server (one which has published a public key via DNS).

Now the final piece in the email puzzle is DMARC.

What is DMARC?

Very briefly, it stands for Domain-based Message Authentication, Reporting and Conformance.

What does it do?

Two things;

  • Via DNS you (the sender) publish under your domain an instruction that states how a receiving email server should treat your SPF, SenderID and DKIM records (all three are also published in your domain’s DNS zone).  You can say whether receivers should reject or quarantine emails that purport to come from your email server but don’t, or do nothing.
  • You receive reports, in XML format, from receiving email servers describing how your domain reputation is faring, as and when your users send emails to third parties.

What does the DNS record look like?

It looks like the following;

_dmarc IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@lab.tobyheywood.com"

OK, so if we break this down a little we have the following components;

  • “_dmarc” – This is the name of the TXT record, which recipient email servers will try to retrieve from your DNS when configured to use DMARC.
  • “IN” – Standard part of a BIND DNS record.  Means Internet, nothing more, nothing less.
  • “TXT” – This is the record type.  DMARC utilises the bog standard TXT record type as this was seen as the quickest method to adoption, rather than treading the lengthy path towards a new DNS record type.
  • Now we have the actual payload (note they are separated by semicolons);
    • “v=DMARC1” – this is the version of DMARC
    • “p=none” – We are effectively telling receivers to take no action based on DMARC; emails that fail the SPF, SenderID or DKIM checks are still delivered normally
    • “rua=mailto:postmaster@lab.tobyheywood.com” – Doesn’t need to be included, but if you do include it you can expect to receive reports from the 3rd party email servers that have received email(s) from your domain, confirming what they thought of them
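Once the record is published, you can sanity check what the world actually sees with a quick DNS lookup. A sketch using dig (the domain is the example from above; substitute your own):

```shell
# Look up the DMARC policy TXT record for a domain (example domain from above)
dig +short TXT _dmarc.lab.tobyheywood.com
```

A correctly published record comes back as the quoted TXT payload, e.g. "v=DMARC1; p=none; …"; an empty response means the record hasn’t propagated yet (or the name is wrong).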

Are there other options?

Yes, though I am only going to focus on a couple here;

The “p=” option has three values that you can use;

  • “none” – Effectively do nothing.  This should be set initially whilst you get things set up; once you have confirmed things look good, you can start to be a bit more forceful about what you would like other email providers to do with messages which do not come from your email server.
  • “quarantine” – This is where receivers would potentially pass the email on for further screening, or simply decide to put it into the spam/junk folder.
  • “reject” – This is you saying that if a 3rd party receives an email supposedly from you, but it wasn’t sent from one of your approved email servers (per SPF or SenderID) or it doesn’t pass the DKIM test, then it should be rejected and not even delivered.
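Putting that together, a more assertive record might look like the following. This is a sketch only; the optional pct tag (which applies the policy to only a percentage of failing messages, handy while ramping up) is an addition on top of the record shown earlier:

```
_dmarc IN TXT "v=DMARC1; p=quarantine; pct=25; rua=mailto:postmaster@lab.tobyheywood.com"
```

Once you are happy nothing legitimate is being caught, step pct up towards 100 and eventually move to p=reject.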

You set your _dmarc record, now what?

We will assume that your DNS zone has replicated to all of your DNS servers and that you have correctly configured your email server to sign your outbound emails with your DKIM private key.

At this point I would highly advise going to https://www.mail-tester.com and sending a test email (with a realistic subject and a paragraph or two of readable text) to the email address they provide.

Once mail-tester.com has received your test email, they will process the email headers to confirm whether or not SPF, SenderID, DKIM and DMARC are all correctly configured and working.

It is possible that if your DNS servers are not completely aligned and up-to-date, mail-tester.com may be unable to provide an accurate report.  If that happens, give it 12 hours and repeat the test.

 

Credit: Thanks to Mr Darkroom for making the featured image called Checkpoint available on Flickr.

iSCSI and Jumbo Frames

I’ve recently been working on a project to deploy a couple of Pure Storage FlashArray //M10s, and rather than using Fibre Channel we opted for 10Gb Ethernet (admittedly for reasons of cost), using iSCSI as the transport mechanism.

Whenever you read up on iSCSI (and NFS for that matter) there inevitably ends up being a discussion around the MTU size.  My thinking here is that if your network has sufficient capacity to handle Jumbo Frames and large MTU sizes, then it should be done.

Now I’m not going to ramble on about enabling Jumbo Frames exactly, but I am going to focus on the MTU size.

What is MTU?

MTU stands for Maximum Transmission Unit.  It defines the maximum size of a network frame that you can send in a single data transmission across the network.  The default MTU size is 1500 bytes.  Whether it be Red Hat Enterprise Linux, Fedora, Slackware, Ubuntu, Microsoft Windows (pick a version), Cisco IOS or Juniper’s JunOS, it has in my experience always been 1500 (though that’s not to say that some specialist providers may not change this default value for black box solutions).

So what is a Jumbo Frame?

The internet is pretty much unified on the idea that any packet or frame which is above the 1500 byte default can be considered a jumbo frame.  Typically you would want to enable this for specific needs such as NFS and iSCSI, where the bandwidth is at least 1Gbps, or better still 10Gbps.

MTU sizing

A lot of what I had read in the early days about this topic suggests that you should set the MTU to 9000 bytes, so what should you be mindful of when doing so?

Well, let’s take an example: you have a requirement to enable jumbo frames and you have set an MTU size of 9000 across your entire environment;

  • virtual machine interfaces
  • physical network interfaces
  • fabric interconnects
  • and core switches

So you enable an MTU of 9000 everywhere, and you then test your shiny new jumbo frame enabled network by way of a large ping;

Linux

$ ping -s 9000 -M do 192.168.1.1

Windows

> ping -l 9000 -f -t 192.168.1.1

Both of the above perform the same job.  They will attempt to send an ICMP ping;

  • To our chosen destination – 192.168.1.1
  • With a payload size of 9000 bytes (option -l 9000 on Windows or -s 9000 on Linux); remember the default MTU is 1500, so this is definitely a Jumbo packet
  • Where the request is not fragmented, thus ensuring that a packet of such a size can actually reach the intended destination without being reduced

The key to the above examples is the “-f” (Windows) and “-M do” (Linux) options.  These enforce the requirement that the packet can be sent from your server/workstation to its intended destination without the size of the packet being messed with, aka fragmented (as that would negate the whole point of using jumbo frames).

If you do not receive a normal ping response back reporting the full size, then something is not configured correctly.

The error might look like the following;

ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500

The above error is highlighting the fact that we are attempting to send a packet which is bigger than the local NIC is configured to handle.  It is telling us the MTU is set at 1500 bytes.  In this instance we would need to reconfigure our network card to handle the jumbo sized packets.
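On Linux, reconfiguring the NIC can be as simple as the following (a sketch; eth0 is an assumed interface name, and the change does not persist across reboots unless you also set it in your distribution’s network configuration):

```shell
# Raise the MTU on an interface to accept jumbo frames (run as root)
ip link set dev eth0 mtu 9000

# Confirm the new value took effect
ip link show dev eth0
```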

Now let’s take a look at what happens to the ICMP ping request and its size.  As a test I have pinged the localhost interface on my machine and I get the following;

[toby@testbox ~]$ ping -s 9000 -M do localhost
PING localhost(localhost (::1)) 9000 data bytes
9008 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.142 ms
9008 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.148 ms
9008 bytes from localhost (::1): icmp_seq=3 ttl=64 time=0.145 ms
^C
--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2085ms
rtt min/avg/max/mdev = 0.142/0.145/0.148/0.002 ms

Firstly, notice the size of each reply.  The initial request may have specified 9000 bytes, however that doesn’t take into account the headers which need to be added to the packet so that it can be correctly sent over your network or the Internet.  Secondly, notice that the packet was received without any fragmentation (note I used the “-M do” option to ensure fragmentation couldn’t take place).  In this instance the loopback interface is configured with a massive MTU of 65536 bytes, and so all worked swimmingly.

Note that the reported packet size is actually 9008 bytes.  The packet grew by 8 bytes due to the addition of the ICMP header mentioned above, making the total 9008 bytes; on the wire an IP header is added on top of that.
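There is also an IP header which ping’s output doesn’t show. A quick shell sketch (assuming IPv4’s 20 byte header) makes the arithmetic explicit:

```shell
# Sizes involved when running "ping -s 9000 -M do <host>" over IPv4
payload=9000      # ICMP data bytes requested with -s
icmp_header=8     # ICMP header, included in the size ping reports
ip_header=20      # IPv4 header, added on the wire but not reported by ping

echo "reported by ping: $((payload + icmp_header)) bytes"
echo "actual IP packet: $((payload + icmp_header + ip_header)) bytes"
```

So even with an MTU of 9000 end to end, a 9000 byte ICMP payload cannot travel unfragmented.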

My example above stated that the MTU had been set to 9000 on ALL devices.  In that case the packets will never get to their intended destination without being fragmented, as 9008 bytes is bigger than 9000 bytes (stating the obvious, I know).

The resolution

The intermediary devices (routers, bridges, switches and firewalls) will need an MTU size that is bigger than 9000, sized sufficiently to accept the desired packet size.  A standard Ethernet frame (according to Cisco) would require an additional 18 bytes on top of the 9000 byte payload, and it would be wise to specify a bit higher still.  An MTU size of 9216 bytes would be better, as it allows enough headroom for everything to pass through nicely.

Focusing on the available options in a Windows world

And here is the real reason for this post.  Microsoft, in all their wisdom, provide you with a drop-down box to select a predefined MTU size for your NICs.  With Windows 2012 R2 (and possibly slightly earlier versions too), the nearest size you can set via the network card configuration GUI is 9014.  This would result in the packet being fragmented, or in the case of iSCSI potentially very poor performance; an MTU of 9014 isn’t going to work if the rest of the network or the destination device is set at 9000.

The lesson here is to make sure that both source and destination machines have an MTU of equal size, and that anything in between must be able to support an MTU size higher than 9000.  And given that Microsoft have hardcoded the GUI with a specific set of options, you will probably want to configure your environment to handle this slightly higher size.

Note.  1Gbps Ethernet only supported a maximum MTU size of 9000, so although Jumbo Frames can be enabled you would need to reduce the MTU size slightly on the source and destination servers, with everything in between set at 9000.

Featured image credit; TaylorHerring.  As bike frames go, the Penny Farthing could well be considered to have a jumbo frame.

A Step-by-Step Guide to Installing Spacewalk on CentOS 7

It would appear that during an upgrade of my blog at some point over the past year, I have managed to wipe out the original how to guide to installing Spacewalk on CentOS 7, so here we go again.

A step-by-step guide to installing Spacewalk on CentOS 7.  Just in case you weren’t aware Spacewalk is the upstream project for Red Hat Satellite Server.

Assumptions

  • You know the basic idea behind Spacewalk, if not see here
  • You have a vanilla VM with CentOS 7.2 installed which was deployed as a “minimal” installation
  • You have subsequently run an update to make sure you have the latest patches
  • You have root access or equivalent via sudo
  • You have got vim installed (if not, running the following command should fix that)
    yum install vim -y
  • The machine you intend to install Spacewalk onto has access to the internet

Preparation

Firstly, we need to install and/or create the necessary YUM repo files that will be used to install Spacewalk, and all its associated dependencies, directly from the official Spacewalk yum repository.

  1. Run the following command as root on your spacewalk VM
    rpm -Uvh http://yum.spacewalkproject.org/2.5/RHEL/7/x86_64/spacewalk-repo-2.5-3.el7.noarch.rpm
  2. You then need to manually configure another yum repository for JPackage which is a dependency for Spacewalk, by running the following (you will need to be the root user to do this);
    sudo -i
    cat > /etc/yum.repos.d/jpackage-generic.repo << EOF
    [jpackage-generic]
    name=JPackage generic
    baseurl=ftp://ftp.rediris.es/mirror/jpackage/5.0/generic/free/
    enabled=1
    gpgcheck=1
    gpgkey=http://www.jpackage.org/jpackage.asc
    EOF
  3. And then we also need to install the EPEL yum repository configuration for CentOS 7;
    rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
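Before moving on, it’s worth confirming yum can actually see the new repositories. The repo names you should look for are what I’d expect the packages above to create, so treat them as illustrative:

```shell
# List all configured repositories, enabled or not
yum repolist all
```

You should see entries for the Spacewalk, jpackage-generic and EPEL repositories; if any are missing, re-check the files in /etc/yum.repos.d/.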

Installation: Embedded Database

Spacewalk utilises a database back end to store the required information about your environment.  The two options are PostgreSQL and Oracle.  Neither would be my preference but I always opt for the lesser of two evils – PostgreSQL.

The installation is a piece of cake, and can be performed by issuing the following command at the command line;

yum install spacewalk-setup-postgresql -y

During the process you should be prompted to accept the Spacewalk GPG key. You will need to enter “y” to accept!

Installation: Spacewalk

Now, things have been made pretty easy for you so far, and we won’t stop now.  To install all of the required packages for Spacewalk, just run the following;

yum install spacewalk-postgresql

And let it download everything you need.  In all (at the time of writing) there were 379 packages totalling 563M.

Again you will likely be prompted to import the Fedora EPEL (7) GPG key.  This is necessary so just type “y” and give that Enter key a gentle tap.

And.. you will also be prompted to import the JPackage Project GPG key.  Same process as above – “y” followed by Enter.

During the installation you will see a lot of text scrolling up the screen.  This will be a mix of general package installation output from yum and some commands that the RPM package will initiate to set and define such things as SELinux contexts.

The key thing is that right at the end you should see “Complete!”.  You know you are in a good place at this point.

Security: Setting up the firewall rules

CentOS 7 (and, for that matter, Red Hat Enterprise Linux 7) ships with firewalld as standard.  Now, I’m not completely sure about firewalld, but I’m sticking with it.  Should you decide you want to use iptables instead (and you have taken steps to make sure it is enabled), I have provided the firewall rules required for both;

firewalld

firewall-cmd --zone=public --add-service=http
firewall-cmd --zone=public --add-service=http --permanent
firewall-cmd --zone=public --add-service=https
firewall-cmd --zone=public --add-service=https --permanent
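To confirm both the running and permanent configurations picked up the services, you can list them back out:

```shell
# Services allowed in the public zone right now
firewall-cmd --zone=public --list-services

# Services allowed in the permanent (reboot-surviving) configuration
firewall-cmd --zone=public --list-services --permanent
```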

Note.  Make sure you have double dashes/hyphens if you copy and paste as I have seen the pasted text only using a single hyphen.

Skip to section after iptables if you have applied the above configuration!

iptables

Now, as iptables can be configured in all manner of ways, I’m just going to provide the basics; if your set-up is more customised than the default, then you probably don’t need me telling you how to set up iptables.

I will just make one assumption though: that the default INPUT policy is set to DROP, and that you do not have any DROP or REJECT lines at the end of your INPUT chain.

iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

And don’t forget to save your firewall rules;

# service iptables save

Configuring Spacewalk

Right then, still with me?  Awesome, so let’s continue with getting Spacewalk up and running.  At this point there is one fundamental thing you need…

You must have a resolvable Fully Qualified Domain Name (FQDN).  For my installation I have fudged it and added the FQDN to the hosts file, as I intend to build the rest of my new lab environment using Spacewalk.

So assuming you have followed everything above we can now simply run the following;

spacewalk-setup

Note.  The above assumes you have the embedded PostgreSQL database and not a remote DB, or the Oracle DB option.  Just saying.

So you should see something like the following (it may take quite some time for many of the tasks to be completed, so bear with it);

[root@spacewalk ~]# spacewalk-setup
* Setting up SELinux..
** Database: Setting up database connection for PostgreSQL backend.
** Database: Installing the database:
** Database: This is a long process that is logged in:
** Database:   /var/log/rhn/install_db.log
*** Progress: ###
** Database: Installation complete.
** Database: Populating database.
*** Progress: ###########################
* Configuring tomcat.
* Setting up users and groups.
** GPG: Initializing GPG and importing key.
** GPG: Creating /root/.gnupg directory
You must enter an email address.
Admin Email Address? toby@lab.tobyhewood.com
* Performing initial configuration.
* Configuring apache SSL virtual host.
Should setup configure apache's default ssl server for you (saves original ssl.conf) [Y]? 
** /etc/httpd/conf.d/ssl.conf has been backed up to ssl.conf-swsave
* Configuring jabberd.
* Creating SSL certificates.
CA certificate password? 
Re-enter CA certificate password? 
Organization? Toby Heywood
Organization Unit [spacewalk]? 
Email Address [toby@lab.tobyhewood.com]? 
City? London
State? London
Country code (Examples: "US", "JP", "IN", or type "?" to see a list)? GB
** SSL: Generating CA certificate.
** SSL: Deploying CA certificate.
** SSL: Generating server certificate.
** SSL: Storing SSL certificates.
* Deploying configuration files.
* Update configuration in database.
* Setting up Cobbler..
Cobbler requires tftp and xinetd services be turned on for PXE provisioning functionality. Enable these services [Y]? y
* Restarting services.
Installation complete.
Visit https://spacewalk to create the Spacewalk administrator account.

Now at this point you are almost ready to break open a beer and give yourself a pat on the back.  But let’s finalise the installation first.

Creating your Organisation
(that’s Organization for the Americans)

Setting up your organisation requires only a few simple things to be provided.

  • Click the Create Organization button and you should finally see a similar screen to the following;
    Set up your Spacewalk organization.
  • The last thing to do now you have your shiny new installation of Spacewalk is to perform a few sanity checks;
    Successful installation of Spacewalk.
  • Navigate to Admin > Task Engine Status and confirm that everything looks healthy and that the Scheduling Service is showing as “ON”
  • You can also take a look at my earlier blog post – spacewalk sanity checking – about some steps I previously took to make sure everything was running.

And there we go, you have installed Spacewalk.

Security Broken by Design

Admit it. You, just like me, use Google every day to answer those tough questions that we face daily.

Sometimes we will ask it how to get us home from somewhere we have never been before – “OK Google, take me home” – other times we might be close to starvation (relatively speaking) – “show me interesting recipes” or “OK Google, give me directions to the nearest drive through McDonalds” – but where I use it most is at work, where I search for such mundane things as “rsyslog remote server configuration”. Yes, I know, I could just look at the man page for rsyslog.conf, but Google seems to have worked its way into my head so much that it is often the first place I look.

Right… back to the topic at hand – Security Broken by Design.

So whilst Googling how to set up a remote syslog server, I read through one person’s blog post and an alarm bell started to ring!

This particular post had correctly suggested the configuration for rsyslog on both the client and server, but then went on (in a very generic way) to instruct readers to open up firewall ports on the clients.

This highlighted a fundamental lack of understanding on the part of the individual whose blog I was reading. You only need to open up port 514/tcp or 514/udp to enable rsyslog to function on the server side.  The connection is initiated from the client, NOT the server.  Granted, in a completely hardened installation it is likely that outbound ports will need to be enabled.  BUT, where security is concerned, I feel that things should not be taken for granted or, worse, assumed!

This generic discussion about security seems completely idiotic! The likes of Red Hat, Ubuntu and almost all other distributions now enable firewalls by default.  And the normal fashion for such a thing is to allow “related” and “established” traffic to flow out of your network card to the LAN and potentially beyond, but (and more importantly) to block non-essential traffic inbound to your machine.

If you are working in a hardened environment then one of the two options below would be better suited for your server;

So in short.

Please think before you make potentially unnecessary changes to your workstations and servers!

Thanks to Sarah Joy for posting the featured image Leader Lock on Flickr.

Excess memory consumption – clamscan and ownCloud

Yesterday I had an interesting issue, where one of the colo’d servers I manage became almost unresponsive. After what felt like an age I was finally logged on to the failing box in question.  It quickly became apparent that the ClamAV application clamscan was being spawned multiple times, consuming all available memory and causing the server to swap until it ran out of swap (in fairness, the server is lacking in the memory department).

I initially thought it might be related to the AV scanning of emails, but the server wasn’t handling a lot of email at the time.  I then started to dig deeper and as the clamscan processes were owned by apache I started to question whether or not spamassassin was doing some weird and wonderful scanning by way of a clamav plugin.  Not the case. In the end it turns out that it was being triggered by the ownCloud 9.x installation on the same server.  At some point, Antivirus integration was enabled within ownCloud (I believe prior to ClamAV being installed).  It also happened that there was about 1.6GB of data being uploaded to that server which was triggering all of the clamscan processes.

Clamscan is (by ownCloud’s own admission) not the best option on grounds of performance and reliability, which does beg the question: if it’s not the most suitable way to use ClamAV in an ownCloud installation, why default to that setting?

For me, I uninstalled ClamAV, which in turn prevented the clamscan app from being executed, and I was then able to access ownCloud again to make the necessary changes to the settings so that clamscan was not used.

Changing the method from Executable to one of the Daemon methods is advisable as a minimum.  After changing to the daemon (socket) method the AV scanning was much more controlled, and I haven’t so far seen any ill effects.
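For what it’s worth, the same change can probably be made from the command line with ownCloud’s occ tool rather than the admin page. This is a sketch only; the app id (files_antivirus), the key (av_mode) and the value are assumptions based on my 9.x installation, so verify against your own admin settings page first:

```shell
# Sketch: point the antivirus app at the clamd socket instead of the clamscan binary
# (run from the ownCloud directory as the web server user; names are assumptions)
sudo -u apache php occ config:app:set files_antivirus av_mode --value="socket"
```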

Spacewalk – Initial configuration and registering your first client (on CentOS 7)

When it comes to setting up Spacewalk to meet your organisation’s package management and provisioning needs, there is more to it than simply installing Spacewalk and then clicking Provision!  There is a list of hoops to jump through before you can get up and running.  This post aims to tackle the common setup tasks through to your first client registration, specifically with respect to installing it on a system running CentOS 7.  But let’s not get ahead of ourselves.  There is a lot to do, so let’s get cracking!

I am assuming here that you have managed to install Spacewalk and are now looking for the next steps, starting with creating the administrative user.  If not, may I suggest taking a peek here, as I have provided a very rough guide to doing this.  I am also basing a lot of these steps on the HowTo published on the CentOS wiki, though that was for release 5 of CentOS, not 7, so I will try to fill in gaps where required.

Initial configuration of Spacewalk

OK, you have at this point hopefully got Spacewalk/Satellite server installed.  The first thing to do is to log in to the web GUI.  Well, when I say log in, I mean create the admin account.  Best to do this right away, before someone has the opportunity to take over your nice new Spacewalk/Red Hat Satellite server.  You access the GUI by typing in the FQDN of the Spacewalk server, and it will redirect you to the Create Spacewalk Administrator page.  You should see a screen much like the following;

Screenshot of first login screen post installation of Spacewalk on CentOS 7

Just enter a few essential details and away you go.  OK you won’t get too far yet, but keep reading!

Upon clicking the “Create Login” button, you should see the normal dashboard screen that is displayed when logging into Spacewalk (and Red Hat Satellite) for the first time.  With one exception.  You should also have a banner across the top with the following wording;

You have created your first user for the Spacewalk Service. Additional configuration should be finalized by Click here

Make sure you click the “Click here” link and make sure you complete the rest of the steps.

General Configuration

I would advise you check and double check the General configuration tab, specifically the Spacewalk hostname; this should ideally match the FQDN of your Satellite server.  If you haven’t specified a name which is resolvable via DNS, you will likely find that things don’t run exactly as they should.

The Certificate tab will be of interest if you are minting your own SSL certificates or wish to use a commercially generated cert. The Bootstrap script tab is where you define settings relating to how clients connect and the associated security around those connections.

The Organizations tab (which in my opinion should read Organisations, because that’s how you spell it) is where you can define how your organisation looks; you can define multiple activation keys for different parts of your organisation, and manage subscriptions and users, to name but a few of the things you can do.

The Restart tab, er, do I really need to suggest what this does?  And finally the Cobbler tab.  From here you can kick off a synchronisation between Spacewalk and cobblerd.  I recommend clicking it now to make sure the integration between the two applications is working.  I would also suggest you double check the Cobbler log file, located at /var/log/cobbler/cobbler.log, for any signs of problems.  Here’s a sample output;

Mon Apr 18 23:49:10 2016 - INFO | authenticate; ['toby', True]
Mon Apr 18 23:49:10 2016 - INFO | REMOTE sync; user(toby)
Mon Apr 18 23:49:10 2016 - DEBUG | authorize; ['toby', 'sync', None, None, True]
Mon Apr 18 23:49:10 2016 - DEBUG | REMOTE toby authorization result: True; user(?)
Mon Apr 18 23:49:10 2016 - INFO | sync
Mon Apr 18 23:49:10 2016 - INFO | running pre-sync triggers
Mon Apr 18 23:49:10 2016 - INFO | cleaning trees
Mon Apr 18 23:49:10 2016 - INFO | mkdir: /var/lib/tftpboot/pxelinux.cfg
Mon Apr 18 23:49:10 2016 - INFO | mkdir: /var/lib/tftpboot/grub
Mon Apr 18 23:49:10 2016 - INFO | mkdir: /var/lib/tftpboot/images
Mon Apr 18 23:49:10 2016 - INFO | mkdir: /var/lib/tftpboot/s390x
Mon Apr 18 23:49:10 2016 - INFO | mkdir: /var/lib/tftpboot/ppc
Mon Apr 18 23:49:10 2016 - INFO | mkdir: /var/lib/tftpboot/etc
Mon Apr 18 23:49:10 2016 - INFO | removing: /var/lib/tftpboot/grub/images
Mon Apr 18 23:49:10 2016 - INFO | copying bootloaders
Mon Apr 18 23:49:10 2016 - INFO | copying: /usr/share/syslinux/pxelinux.0 -> /var/lib/tftpboot/pxelinux.0
Mon Apr 18 23:49:10 2016 - INFO | copying: /usr/share/syslinux/menu.c32 -> /var/lib/tftpboot/menu.c32
Mon Apr 18 23:49:10 2016 - INFO | copying: /usr/share/syslinux/memdisk -> /var/lib/tftpboot/memdisk
Mon Apr 18 23:49:10 2016 - INFO | copying distros
Mon Apr 18 23:49:10 2016 - INFO | copying images
Mon Apr 18 23:49:10 2016 - INFO | generating PXE configuration files
Mon Apr 18 23:49:10 2016 - INFO | cleaning link caches
Mon Apr 18 23:49:10 2016 - INFO | generating PXE menu structure
Mon Apr 18 23:49:10 2016 - INFO | running post-sync triggers
Mon Apr 18 23:49:10 2016 - DEBUG | running python triggers from /var/lib/cobbler/triggers/sync/post/*
Mon Apr 18 23:49:10 2016 - DEBUG | running python trigger cobbler.modules.sync_post_restart_services
Mon Apr 18 23:49:10 2016 - DEBUG | running shell triggers from /var/lib/cobbler/triggers/sync/post/*
Mon Apr 18 23:49:10 2016 - DEBUG | running python triggers from /var/lib/cobbler/triggers/change/*
Mon Apr 18 23:49:10 2016 - DEBUG | running python trigger cobbler.modules.scm_track
Mon Apr 18 23:49:10 2016 - DEBUG | running shell triggers from /var/lib/cobbler/triggers/change/*

Generate a default activation key

In order to register systems against your newly installed Spacewalk server, you must have an activation key defined.  This is not done automatically, and therefore we shall tackle it now.  Navigate to Systems > Activation Keys.

Initially you should see a message stating;

You do not currently have a universal default activation key set. To set a key as the universal default, please visit the details page of that key and check off the 'Universal Default?' checkbox.

Click + Create Key in the top right hand corner of the Activation Keys screen.  You will need to add the following details;

  • Description
  • A Key (I would advise putting something meaningful in here, rather than allowing a key to be auto-generated)
  • Leave the Usage field blank
  • Leave the Base Channels as default (Spacewalk Default)
  • Add-on Entitlements, I have selected only Provisioning (it can be changed later)
  • I also ticked the Universal Default as I do not want to restrict its use

After the default key has been created, the screen looks like this;

Image contains example screenshot of Spacewalk activation key.

Creating your first package repository (and channel)

I will be focusing on CentOS 7 here, but Satellite is capable of providing a centralised repository for other RPM based distributions.

CentOS 7 Base Repository

We will assume you may at some point want to build further servers using the base OS RPMs.  The first thing you need to do is find a local mirror site which you can base your repository on. CentOS provide a lovely page on their web site – https://www.centos.org/download/mirrors/, which details, (by country) where you can download the packages from.  In my case I searched the page for United Kingdom and picked one from the list.

Lets get on with the job at hand, and create a repository.  Click Channels > Manage Software Channels > Manage Repositories.  And then click Create Repository.  You will then see a screen, not too dissimilar to the one below;

spacewalk_create_repository

Define the repository label and URL (this is the source URL which Spacewalk will use to obtain the packages).  I have also specified the SSL cert that was generated during installation.

You will now need to create the channel that will be associated with this repository.  Click Manage Software Channels from the left hand menu and then click Create Channel.  Once the page has loaded, you will be given a few ground rules regarding naming conventions, and then the opportunity to create your new channel.  The long and short of it is this;

  • Channel Name and Channel Label are required (hence the red asterisk)
  • Channel Name;
    • must be between 6 and 256 characters in length
    • must begin with a letter
    • may contain spaces, parentheses () and forward slashes /.
  • Channel Label must;
    • be no longer than 128 characters
    • start with a letter or digit
    • be lowercase (no exceptions)
  • The label may also contain hyphens, periods, underscores and numerals

spacewalk_Create_channel

Some other options on the screen also include controlling access to the repository (i.e. is it private and only accessible to your Spacewalk organisation, or is it public), also you can define GPG security settings for signed packages.

Next, marry the repository and channel together.  This is achieved by going to the Repositories tab and selecting the repository from the list of available repositories.  In my case there is just one.

The last step is to kick off a synchronisation of the repository.  There are two ways to do this; 1) click the Sync tab, tick the “Create kickstartable tree” option and then click Sync Now, or 2) run the following command from the CLI;

/usr/bin/spacewalk-repo-sync --channel centos7base --type yum --latest --sync-kickstart

Now sit back and watch/wait.  For the current 7.2 repo, the base number of packages is just over 9,000, so depending on your connection to the internet (and the mirror you have selected), this could be a quick process or quite slow.  Another option, which I haven’t tried but believe would work, is to use a copy of the installation media.  If you try that option, let me know how you get on 🙂
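If you want the channel to stay current after this initial sync, one option is to schedule the same command via cron.  The following is just a sketch – the channel label matches the example above, but the schedule and file location are my own assumptions;

```
# /etc/cron.d/spacewalk-repo-sync (hypothetical file - adjust to taste)
# Re-sync the centos7base channel every night at 02:00
0 2 * * * root /usr/bin/spacewalk-repo-sync --channel centos7base --type yum --latest
```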

Registering your first client

First, you will need to make sure you have the required packages on the client to be registered.  In my case I had used a minimal install, and as such was missing the required packages.  Easily rectified;

[root@rhc-client ~]# yum install rhn-setup
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package rhn-setup.noarch 0:2.0.2-6.el7 will be installed
--> Processing Dependency: rhn-client-tools = 2.0.2-6.el7 for package: rhn-setup-2.0.2-6.el7.noarch
--> Processing Dependency: rhnsd for package: rhn-setup-2.0.2-6.el7.noarch
--> Running transaction check
---> Package rhn-client-tools.noarch 0:2.0.2-6.el7 will be installed
--> Processing Dependency: rhnlib >= 2.5.57 for package: rhn-client-tools-2.0.2-6.el7.noarch
--> Processing Dependency: python-hwdata for package: rhn-client-tools-2.0.2-6.el7.noarch
--> Processing Dependency: python-gudev for package: rhn-client-tools-2.0.2-6.el7.noarch
--> Processing Dependency: python-dmidecode for package: rhn-client-tools-2.0.2-6.el7.noarch
---> Package rhnsd.x86_64 0:5.0.13-5.el7 will be installed
--> Processing Dependency: rhn-check >= 0.0.8 for package: rhnsd-5.0.13-5.el7.x86_64
--> Running transaction check
---> Package python-dmidecode.x86_64 0:3.10.13-11.el7 will be installed
---> Package python-gudev.x86_64 0:147.2-7.el7 will be installed
---> Package python-hwdata.noarch 0:1.7.3-4.el7 will be installed
---> Package rhn-check.noarch 0:2.0.2-6.el7 will be installed
--> Processing Dependency: yum-rhn-plugin >= 1.6.4-1 for package: rhn-check-2.0.2-6.el7.noarch
---> Package rhnlib.noarch 0:2.5.65-2.el7 will be installed
--> Running transaction check
---> Package yum-rhn-plugin.noarch 0:2.0.1-5.el7 will be installed
--> Processing Dependency: m2crypto >= 0.16-6 for package: yum-rhn-plugin-2.0.1-5.el7.noarch
--> Running transaction check
---> Package m2crypto.x86_64 0:0.21.1-17.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package               Arch        Version             Repository          Size
================================================================================
Installing:
 rhn-setup             noarch      2.0.2-6.el7         th_lab_server       87 k
Installing for dependencies:
 m2crypto              x86_64      0.21.1-17.el7       th_lab_server      429 k
 python-dmidecode      x86_64      3.10.13-11.el7      th_lab_server       82 k
 python-gudev          x86_64      147.2-7.el7         th_lab_server       18 k
 python-hwdata         noarch      1.7.3-4.el7         th_lab_server       32 k
 rhn-check             noarch      2.0.2-6.el7         th_lab_server       52 k
 rhn-client-tools      noarch      2.0.2-6.el7         th_lab_server      379 k
 rhnlib                noarch      2.5.65-2.el7        th_lab_server       65 k
 rhnsd                 x86_64      5.0.13-5.el7        th_lab_server       48 k
 yum-rhn-plugin        noarch      2.0.1-5.el7         th_lab_server       80 k

Transaction Summary
================================================================================
Install  1 Package (+9 Dependent packages)

Total size: 1.2 M
Total download size: 1.2 M
Installed size: 4.8 M
Is this ok [y/d/N]: y
Downloading packages:
(1/9): python-gudev-147.2-7.el7.x86_64.rpm                 |  18 kB   00:00     
(2/9): m2crypto-0.21.1-17.el7.x86_64.rpm                   | 429 kB   00:00     
(3/9): python-hwdata-1.7.3-4.el7.noarch.rpm                |  32 kB   00:00     
(4/9): rhn-check-2.0.2-6.el7.noarch.rpm                    |  52 kB   00:00     
(5/9): rhn-client-tools-2.0.2-6.el7.noarch.rpm             | 379 kB   00:00     
(6/9): rhn-setup-2.0.2-6.el7.noarch.rpm                    |  87 kB   00:00     
(7/9): rhnlib-2.5.65-2.el7.noarch.rpm                      |  65 kB   00:00     
(8/9): rhnsd-5.0.13-5.el7.x86_64.rpm                       |  48 kB   00:00     
(9/9): yum-rhn-plugin-2.0.1-5.el7.noarch.rpm               |  80 kB   00:00     
--------------------------------------------------------------------------------
Total                                              1.3 MB/s | 1.2 MB  00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python-gudev-147.2-7.el7.x86_64                             1/10 
  Installing : rhnlib-2.5.65-2.el7.noarch                                  2/10 
  Installing : python-hwdata-1.7.3-4.el7.noarch                            3/10 
  Installing : python-dmidecode-3.10.13-11.el7.x86_64                      4/10 
  Installing : rhn-client-tools-2.0.2-6.el7.noarch                         5/10 
  Installing : m2crypto-0.21.1-17.el7.x86_64                               6/10 
  Installing : rhnsd-5.0.13-5.el7.x86_64                                   7/10 
  Installing : rhn-setup-2.0.2-6.el7.noarch                                8/10 
  Installing : yum-rhn-plugin-2.0.1-5.el7.noarch                           9/10 
  Installing : rhn-check-2.0.2-6.el7.noarch                               10/10 
  Verifying  : rhn-setup-2.0.2-6.el7.noarch                                1/10 
  Verifying  : m2crypto-0.21.1-17.el7.x86_64                               2/10 
  Verifying  : rhn-check-2.0.2-6.el7.noarch                                3/10 
  Verifying  : python-dmidecode-3.10.13-11.el7.x86_64                      4/10 
  Verifying  : rhnsd-5.0.13-5.el7.x86_64                                   5/10 
  Verifying  : rhn-client-tools-2.0.2-6.el7.noarch                         6/10 
  Verifying  : python-hwdata-1.7.3-4.el7.noarch                            7/10 
  Verifying  : yum-rhn-plugin-2.0.1-5.el7.noarch                           8/10 
  Verifying  : rhnlib-2.5.65-2.el7.noarch                                  9/10 
  Verifying  : python-gudev-147.2-7.el7.x86_64                            10/10 

Installed:
  rhn-setup.noarch 0:2.0.2-6.el7                                                

Dependency Installed:
  m2crypto.x86_64 0:0.21.1-17.el7      python-dmidecode.x86_64 0:3.10.13-11.el7 
  python-gudev.x86_64 0:147.2-7.el7    python-hwdata.noarch 0:1.7.3-4.el7       
  rhn-check.noarch 0:2.0.2-6.el7       rhn-client-tools.noarch 0:2.0.2-6.el7    
  rhnlib.noarch 0:2.5.65-2.el7         rhnsd.x86_64 0:5.0.13-5.el7              
  yum-rhn-plugin.noarch 0:2.0.1-5.el7 

Complete!

The next step is to install your Spacewalk server’s SSL certificate on the client.  This is a security measure which enables the client to verify that the server it is talking to really is the server it SHOULD be talking to.

[toby@devops ~]$ sudo rpm -Uvh http://manager/pub/rhn-org-trusted-ssl-cert-1.0-1.noarch.rpm
Retrieving http://manager/pub/rhn-org-trusted-ssl-cert-1.0-1.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:rhn-org-trusted-ssl-cert-1.0-1   ################################# [100%]

The final step in the process is to actually register the client against the Spacewalk/Satellite server.

[toby@devops ~]$ sudo rhnreg_ks --serverUrl=https://manager.lab.tobyheywood.com/XMLRPC --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=1-lab.tobyheywood.com
This system is not subscribed to any channels.
RHN channel support will be disabled.

At this point, we float over to the Spacewalk server UI, and we should now see our client in the list of Systems;

Screenshot showing the newly register client listed under the Systems list in Spacewalk

Now, those of you with a keen eye for detail will have noticed, both in the screenshot above and in the command line output at the time of registration, that the system isn’t currently subscribed to any channels.  This is very easily remedied;

  1. Click on the client in the system list
  2. On the initial Overview screen, you will see a box – Subscribed Channels
  3. Click Alter Channel Subscriptions
  4. Now select from the list under Base Software Channel – in my case “centos_7_base”
  5. Click confirm
  6. You should now see the channel listed under the heading “Software Channel Subscriptions”.
  7. In addition you may have child channels created beneath your base channel.

And there we have it.  Time to have a play and see what you can do by having a click around the tabs related to the system and the wider Spacewalk UI.

Reference material

https://fedorahosted.org/spacewalk/wiki/RegisteringClients

Featured image credit:  Thanks to NASA for making the image of Tim Peake on his spacewalk free to use!

Warning /dev/root does not exist – The Devil is in the Detail

Following on from an earlier post, it would seem that the “Warning /dev/root does not exist” issue is not confined to non-kickstart PXE booted installations as I had first thought.

I was working on a RHEL 7 installation using Red Hat Satellite 5.7 (an upgrade to 6.x is in the pipeline, but there are bigger fish to fry right now), where we were re-using a lot of the RHEL 6 pxelinux kernel parameters.

Now, as you may or may not know (if you have read my other posts on the topic), there are numerous Anaconda and dracut parameters that can be passed to the kernel in the pxelinux.cfg/default (or the MAC specific) config file.  The problem we found was the existence of a ksdevice= parameter which pointed to eth0.  In RHEL/CentOS 7 the ethernet device naming standard changed from ethX to ensX, which works out as follows;

  • en = Ethernet
  • sX = Slot X (where X is the physical or virtual slot number where the nic resides)

By default the first interface is used by anaconda/dracut/pxelinux IF no option is specified.  If however you specifically tell it to use something which fundamentally doesn’t exist, it WILL still try to use it… and fail!  Miserably!  And give you an error which ultimately seems rather unrelated.
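To illustrate, a minimal EL7 entry in pxelinux.cfg/default might look something like the following.  This is only a sketch – the kernel/initrd paths and the kickstart URL are placeholders, not taken from the environment described above;

```
# pxelinux.cfg/default - EL7 sketch; paths and the ks URL are placeholders
label rhel7
  kernel images/rhel7/vmlinuz
  append initrd=images/rhel7/initrd.img inst.ks=http://satellite.example.com/ks.cfg ip=dhcp
  # Note: no ksdevice=eth0 here.  On EL7 the NIC will be named ensX, so either
  # omit the device entirely (the first interface is used) or name the real device.
```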

You have been warned!

As with many things in life, this served as a reminder that things change and that you can’t always take the “old” and reuse with the “new” without issue.

Featured image credit:  Thanks to (Elba) Dave Shewmaker for taking the rather weird picture which resembles The Devil in the Detail and posting it on Flickr.

CentOS 7 – Watch out for nmtui

It would appear that I have been caught out twice now by the way that nmtui (Network Manager Text User Interface) works.  I have been messing around with various internal sandboxed networks in my VM environment and (I can only assume in my haste) I have entered the IP address of a second NIC without full regard for the on screen prompts.

In nmtui, there is one field missing which is quite common in many other tools.  Take a look at the following and tell me what’s missing;

nmtui_sandbox_config

So what field do you think is missing?

Now, although the information is all on screen in the screenshot above, there is one thing that may not be obvious.  In the Addresses field you specify not only the IP address but also the subnet mask, in CIDR (Classless Inter-Domain Routing) notation.

IF you happen to enter an IP address without thinking about it and don’t specify the netmask or CIDR prefix, nmtui assumes that you mean a /32, a.k.a. a netmask of 255.255.255.255, which for the uninitiated means just that single IP.  It assumes that there is nothing beyond that IP address.  Its world is only itself.
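If you ever want to sanity check what a given CIDR prefix means as a dotted netmask, a few lines of bash will do it (prefix_to_mask is just a throwaway helper I am sketching here, not a standard tool);

```shell
#!/bin/bash
# Convert a CIDR prefix length to a dotted-quad netmask.
prefix_to_mask() {
  local mask=$(( 0xffffffff ^ ((1 << (32 - $1)) - 1) ))
  printf '%d.%d.%d.%d\n' $(( mask >> 24 & 255 )) $(( mask >> 16 & 255 )) \
                         $(( mask >> 8 & 255 ))  $(( mask & 255 ))
}

prefix_to_mask 32   # 255.255.255.255 - what nmtui assumes if you omit the prefix
prefix_to_mask 24   # 255.255.255.0   - what you probably meant
```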

In the good old days when I used to configure the IP address via ifcfg-eth* files, I also remembered to enter the NETMASK= line, and therefore never had this issue.
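For comparison, the old school approach forces you to be explicit.  A sketch of such an ifcfg file (the device name and addresses are made up for illustration);

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative values only)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.50.10
NETMASK=255.255.255.0   # or PREFIX=24 - either way, the netmask is stated explicitly
```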

Anyway, rant over.  Hopefully twice is enough, because if nothing else, if I see name resolution errors in my logs again I will be making sure my netmask is set correctly before thinking that Tomcat is having issues.

Grrrrrr

Featured image credit:  Thanks to versageek for making the Network Spagetti image available on Flickr.com.

Spacewalk – Post install sanity check

After having installed Spacewalk, got it working to a certain point and then found that there may have been issues with the installation, I thought it would be easier to simply re-install Spacewalk onto a new virtual machine.

So following on from my how to article, I wanted to make sure that post installation, I had performed sufficient checks to confirm that there were no issues with the scheduler service or cobbler, as these were two things I had great difficulty trying to get working.

I guess it is also worth mentioning that the VM I am running Spacewalk on has a single vCPU and 4GB of memory.  For storage I have given it 40GB, which will do me fine.  As for the OS, it is running CentOS 7 (1511).

So what should we check?

Good question.  The following is a rough list of all the services I confirmed as enabled and running, also checking that there were no horrible errors in the log files;

Services

  • cobblerd
  • postgresql
  • xinetd (tftp)
  • httpd
  • tomcat
  • taskomatic (a.k.a. the scheduler)

cobblerd

[toby@manager ~]$ sudo systemctl status cobblerd
● cobblerd.service - LSB: daemon for libvirt virtualization API
   Loaded: loaded (/etc/rc.d/init.d/cobblerd)
   Active: active (running) since Fri 2016-04-15 23:08:37 BST; 24min ago
     Docs: man:systemd-sysv-generator(8)
   CGroup: /system.slice/cobblerd.service
           └─13257 /usr/bin/python -s /bin/cobblerd --daemonize

Apr 15 23:08:35 manager systemd[1]: Starting LSB: daemon for libvirt virtualization API...
Apr 15 23:08:37 manager cobblerd[13247]: Starting cobbler daemon: [  OK  ]
Apr 15 23:08:37 manager systemd[1]: Started LSB: daemon for libvirt virtualization API.
Apr 15 23:31:35 manager systemd[1]: [/run/systemd/generator.late/cobblerd.service:8] Failed to add dependency on network,.service, ignoring: Invalid argument
Apr 15 23:31:35 manager systemd[1]: [/run/systemd/generator.late/cobblerd.service:8] Failed to add dependency on xinetd,.service, ignoring: Invalid argument

The last two lines can be ignored.  I believe this is purely due to some references to the sysvinit scripts which are no longer used, and as you will see later, things appear to be running fine (this time around).

PostgreSQL

[toby@manager ~]$ sudo systemctl status postgresql
● postgresql.service - PostgreSQL database server
   Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-04-15 23:05:39 BST; 35min ago
 Main PID: 12556 (postgres)
   CGroup: /system.slice/postgresql.service
           ├─12556 /usr/bin/postgres -D /var/lib/pgsql/data -p 5432
           ├─12557 postgres: logger process   
           ├─12559 postgres: checkpointer process   
           ├─12560 postgres: writer process   
           ├─12561 postgres: wal writer process   
           ├─12562 postgres: autovacuum launcher process   
           ├─12563 postgres: stats collector process   
           ├─13191 postgres: rhnuser rhnschema [local] idle in transaction
           ├─13343 postgres: rhnuser rhnschema 127.0.0.1(55225) idle
           ├─13344 postgres: rhnuser rhnschema 127.0.0.1(55226) idle
           ├─13345 postgres: rhnuser rhnschema 127.0.0.1(55227) idle
           ├─13346 postgres: rhnuser rhnschema 127.0.0.1(55228) idle
           ├─13347 postgres: rhnuser rhnschema 127.0.0.1(55229) idle
           ├─13348 postgres: rhnuser rhnschema 127.0.0.1(55230) idle
           ├─13350 postgres: rhnuser rhnschema 127.0.0.1(55231) idle
           ├─13351 postgres: rhnuser rhnschema 127.0.0.1(55232) idle
           ├─13352 postgres: rhnuser rhnschema 127.0.0.1(55233) idle
           ├─13354 postgres: rhnuser rhnschema 127.0.0.1(55234) idle
           ├─13355 postgres: rhnuser rhnschema 127.0.0.1(55235) idle
           ├─13356 postgres: rhnuser rhnschema 127.0.0.1(55236) idle
           ├─13357 postgres: rhnuser rhnschema 127.0.0.1(55237) idle
           ├─13358 postgres: rhnuser rhnschema 127.0.0.1(55238) idle
           ├─13361 postgres: rhnuser rhnschema 127.0.0.1(55240) idle
           ├─13391 postgres: rhnuser rhnschema 127.0.0.1(55244) idle
           ├─13442 postgres: rhnuser rhnschema 127.0.0.1(55246) idle
           ├─13444 postgres: rhnuser rhnschema 127.0.0.1(55248) idle
           ├─13451 postgres: rhnuser rhnschema 127.0.0.1(55250) idle
           ├─13651 postgres: rhnuser rhnschema 127.0.0.1(55266) idle
           ├─28774 postgres: rhnuser rhnschema 127.0.0.1(55272) idle
           ├─28842 postgres: rhnuser rhnschema 127.0.0.1(55275) idle
           ├─28843 postgres: rhnuser rhnschema 127.0.0.1(55276) idle
           ├─28844 postgres: rhnuser rhnschema 127.0.0.1(55277) idle
           ├─28847 postgres: rhnuser rhnschema 127.0.0.1(55278) idle
           ├─28902 postgres: rhnuser rhnschema 127.0.0.1(55279) idle
           └─28903 postgres: rhnuser rhnschema 127.0.0.1(55280) idle

Apr 15 23:05:38 manager systemd[1]: Starting PostgreSQL database server...
Apr 15 23:05:39 manager systemd[1]: Started PostgreSQL database server.

tftp (by way of xinetd)

[toby@manager ~]$ sudo systemctl enable tftp
[toby@manager ~]$ sudo systemctl start tftp
[toby@manager ~]$ sudo systemctl status tftp
● tftp.service - Tftp Server
   Loaded: loaded (/usr/lib/systemd/system/tftp.service; indirect; vendor preset: disabled)
   Active: active (running) since Fri 2016-04-15 23:46:30 BST; 2s ago
     Docs: man:in.tftpd
 Main PID: 29012 (in.tftpd)
   CGroup: /system.slice/tftp.service
           └─29012 /usr/sbin/in.tftpd -s /var/lib/tftpboot

httpd

[toby@manager ~]$ sudo systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-04-15 23:08:34 BST; 39min ago
     Docs: man:httpd(8)
           man:apachectl(8)
 Main PID: 13168 (httpd)
   Status: "Total requests: 1; Current requests/sec: 0; Current traffic:   0 B/sec"
   CGroup: /system.slice/httpd.service
           ├─13168 /usr/sbin/httpd -DFOREGROUND
           ├─13169 /usr/sbin/httpd -DFOREGROUND
           ├─13170 /usr/sbin/httpd -DFOREGROUND
           ├─13171 /usr/sbin/httpd -DFOREGROUND
           ├─13172 /usr/sbin/httpd -DFOREGROUND
           ├─13173 /usr/sbin/httpd -DFOREGROUND
           ├─13174 /usr/sbin/httpd -DFOREGROUND
           ├─13175 /usr/sbin/httpd -DFOREGROUND
           └─13176 /usr/sbin/httpd -DFOREGROUND

Apr 15 23:08:34 manager systemd[1]: Starting The Apache HTTP Server...
Apr 15 23:08:34 manager httpd[13168]: AH00557: httpd: apr_sockaddr_info_get() failed for manager
Apr 15 23:08:34 manager httpd[13168]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
Apr 15 23:08:34 manager systemd[1]: Started The Apache HTTP Server.

Feel free to ignore the warning messages with regards to the FQDN.

tomcat

Once the install had completed there was an error message;

Tomcat failed to start properly or the installer ran out of tries. Please check /var/log/tomcat*/catalina.out for errors.

I checked the logs and saw some errors but, as you can see from the following, simply making sure it was enabled and started appears to have cleared up whatever the issue may have been.

[toby@manager ~]$ sudo systemctl enable tomcat
[toby@manager ~]$ sudo systemctl start tomcat
[toby@manager ~]$ sudo systemctl status tomcat
● tomcat.service - Apache Tomcat Web Application Container
   Loaded: loaded (/usr/lib/systemd/system/tomcat.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-04-15 23:05:39 BST; 43min ago
 Main PID: 12589 (java)
   CGroup: /system.slice/tomcat.service
           └─12589 /usr/lib/jvm/jre/bin/java -ea -Xms256m -Xmx256m -Djava.awt.headless=true -Dorg.xml.sax.driver=org.apache.xerces.parsers.SAXParser -Dorg.apache.tomcat.util.http.Parameters.MAX_COUNT=1024 -XX...

Apr 15 23:08:33 manager server[12589]: INFO: Deployment of configuration descriptor /etc/tomcat/Catalina/localhost/rhn.xml has finished in 172,857 ms
Apr 15 23:08:33 manager server[12589]: Apr 15, 2016 11:08:33 PM org.apache.coyote.AbstractProtocol start
Apr 15 23:08:33 manager server[12589]: INFO: Starting ProtocolHandler ["http-bio-127.0.0.1-8080"]
Apr 15 23:08:34 manager server[12589]: Apr 15, 2016 11:08:34 PM org.apache.coyote.AbstractProtocol start
Apr 15 23:08:34 manager server[12589]: INFO: Starting ProtocolHandler ["ajp-bio-127.0.0.1-8009"]
Apr 15 23:08:34 manager server[12589]: Apr 15, 2016 11:08:34 PM org.apache.coyote.AbstractProtocol start
Apr 15 23:08:34 manager server[12589]: INFO: Starting ProtocolHandler ["ajp-bio-0:0:0:0:0:0:0:1-8009"]
Apr 15 23:08:34 manager server[12589]: Apr 15, 2016 11:08:34 PM org.apache.catalina.startup.Catalina start
Apr 15 23:08:34 manager server[12589]: INFO: Server startup in 173112 ms
Apr 15 23:31:41 manager systemd[1]: Started Apache Tomcat Web Application Container.

taskomatic

Now, this one doesn’t appear to have been moved over to the new systemd environment, and therefore we resort to the good old sysvinit scripts and the service command to confirm this one is working;

[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is running (13296).

Looking good so far

But can it withstand a reboot?  Now that is the question.  So I repeated the above steps again, just to confirm.  I won’t bore you with all the details;

  • cobblerd.service – active (running)
  • httpd.service – active (running)
  • tftp.service – inactive (dead)
  • postgresql.service – active (running)
  • tomcat.service – active (running)
  • taskomatic – RHN Taskomatic is not running.

taskomatic revisited

[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is not running.
[toby@manager ~]$ sudo chkconfig taskomatic on
[toby@manager ~]$ sudo service taskomatic start
Starting RHN Taskomatic...
[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is running (10870).

And for good measure I gave the machine another reboot, just to confirm the taskomatic service did start.

[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is running (1278).

Oh Yeah!  Now I’m a happy camper.  And it’s time to re-visit the initial configuration part, which I shall post about shortly.

Image credit; Thanks to Mark Walsh for making the featured image called “Russell Street Court Cells – Padded Cell” available on Flickr.com.

DKIM + Courier-MTA on CentOS7

Email.  One of those things which is critical to business, and yet depending on what you read and who you speak to, you may be led to believe that it is dying out in favour of instant messaging technologies.  Well, as of right now, I can’t see it dying out any time soon, though it does come with a really annoying, frustrating, excruciating problem.  Junk mail!  Otherwise known as spam!  It has once again become a bit of a problem for me, especially when the likes of Yahoo begin to impede the flow of emails, leaving one of the servers I have worked on recently unable to send emails to recipients with Yahoo accounts.

Now, don’t get me wrong.  I don’t hold a grudge against any of the large email service providers for blocking spam which has sadly managed to be sent via a mail server I have some involvement with, but I do wish the world wasn’t such a bad place that we have to continuously fight on our email frontiers to protect our virtual/real name and brand.

So anyway.  On with the show

The server

  • CentOS 7
  • Patched to the nines
  • Courier MTA
  • Blocking of known spammers was enabled
  • Spamassassin was used as a secondary measure should the hard blocks fail
  • SPF is configured for all hosted domains
  • DKIM was not configured
  • DMARC was not configured
  • It *was not* an open relay
  • The time was not in sync, which didn’t help later down the line

The problem

The server’s “reputation” had been tarnished as someone had managed to start sending spam via the server through a vulnerability which was found in one of the websites hosted on the server.

After reviewing the advice from Yahoo, I started my investigation on how to approach this topic.

A bit of googling led me to this web site; https://www.strangeworld.com/blog/archives/128.  Now for me, there were a few things missing in the instructions, though they were complete where it mattered most.  So I am recreating some of the good work done on the strangeworld.com website and providing my own twist on things here;

DKIM & ZDKIMFILTER Pre-requisites

In order to install zdkimfilter you need to manually compile the code and install it (a .spec file will be included in future releases, which should mean you can just use rpmbuild to create the RPM package for you).  The machine I needed to install this onto did not have any compilers installed, for reasons of security, and so my list of pre-requisites was rather long and included more than I would have liked.  But needs must, and the compilers were removed afterwards.  Anyway, here is the list of commands issued to get everything where it needed to be;

sudo rpm -ivh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
sudo yum install gcc
sudo yum install openssl-libs openssl-devel
sudo yum install libopendkim opendkim
sudo yum install libtool libidn2 libidn2-devel
sudo yum install opendbx-devel opendbx-mysql opendbx-utils opendbx

Note.  The addition of the elrepo.org rpm is required as there are a couple of dependencies, which don’t appear to be available in the main CentOS yum repository.

Download, compile and install

Now, the original instructions I followed (see reference material at the bottom of this page) took me through the steps of manually compiling and installing opendkim.  After I had installed it, I found that opendkim is available in the CentOS base yum repo, so I could have avoided that step.  You will be pleased to hear that I have not included those steps here (if you do need them, see this blog post – strangeworld.com).

Next up, we need to obtain the tar ball containing the source code for zdkimfilter, which can be downloaded from http://www.tana.it/sw/zdkimfilter/.  At the time of writing the current version was v1.5.

Now we have that, let’s work through the process of compiling and installing it.

wget http://www.tana.it/sw/zdkimfilter/zdkimfilter-1.5.tar.gz
tar zxvf zdkimfilter-1.5.tar.gz 
cd zdkimfilter-1.5
./configure --prefix=/usr/local
make -j4
sudo make install

Configuration

Last on the list of installation steps is configuring zdkimfilter and making sure we have the correct directory structure, etc;

cp /etc/courier/filters/zdkimfilter.conf.dist /etc/courier/filters/zdkimfilter.conf
mkdir /etc/courier/filters/keys
chmod u=rwx,g=rx,o-rwx /etc/courier/filters/keys

Nothing more was needed at this time.

Generate some keys and update your DNS zones

At this point, we should have everything installed and configured that we need, and can now proceed with the steps for generating the necessary private keys which will be used to sign our emails as they leave your courier email server and also the public key that will be published in DNS.

A word of warning.  I am not sure how important this is but, when generating the keys using opendkim-genkey, I would advise that you choose your selector name carefully.  Initially I set this to be the same as the domain.  I then went to a couple of DKIM testing websites and found that one of them didn’t like the “.” (dot) in the selector name.  At this point I changed my plan slightly (in case this was a more widespread issue) and made sure there were no periods in the selector name.

cd /etc/courier/filters/keys/
opendkim-genkey -b 2048 -d tobyheywood.com -D /etc/courier/filters/keys -s tobyheywood -r -nosubdomains -v
ln -s tobyheywood tobyheywood.com
# Just making sure ownership is correct
chown root.daemon tobyheywood.*
# Change the permissions on tobyheywood.private (which contains the private
# key) and tobyheywood.txt (which contains the public key)
chmod u=rw,g=r,o-rwx tobyheywood.*

It’s also worth making sure the directory /etc/courier/filters/keys has the right permissions;

chmod u=rwx,g=rx,o-rwx /etc/courier/filters/keys

And also if you are running SELinux, then it is worth double checking the SELinux security context associated with these newly created files.  In my case, I didn’t have to do anything, but best to check.

Now you need to update your DNS zone file(s) with everything in the .txt file up to the last “)”.  You could include the rest if you wish, but it’s purely a comment.  Reload your zone and test.
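For reference, the record you paste into the zone ends up looking roughly like this.  The selector matches the example above, but the key material shown here is truncated/made up, and the exact flags in the record will depend on the options you passed to opendkim-genkey;

```
; DKIM public key record - sketch only, p= value truncated
tobyheywood._domainkey IN TXT ( "v=DKIM1; k=rsa; "
        "p=MIIBIjANBgkqhkiG9w0BAQEFAAOC..." )
```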

Testing, testing, 1, 2, 3

I performed two types of test.  The first was to make sure the record was returned correctly when queried by third parties, and that there were no issues with the DNS side of things.  The website I preferred was https://www.mail-tester.com/spf-dkim-check, as it was a nice and simple site and worked better than the unnamed first site I tried.

The second part of my test was to send an email to a Yahoo or Gmail account to confirm that it was accepted, and so that I could review the headers.  This turned out to be a very good move!  The headers of my first test emails showed there was no DKIM signature present, which I thought was odd, but that was down to the umask; the keys directory didn’t have the execute bit set for group members.  I have corrected this in the above [rough] instructions.
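To see the umask problem in isolation, here is a throwaway demo (it only touches a temporary directory) showing how a restrictive umask leaves a freshly created keys directory without group execute, and how a chmod like the one in the instructions above fixes it;

```shell
#!/bin/bash
# With umask 077, a new directory gets mode 700 - group members (e.g. the
# daemon group the mail filter runs as) cannot even enter it to read the key.
(
  umask 077
  d="$(mktemp -d)"
  mkdir "$d/keys"
  stat -c '%a' "$d/keys"   # prints 700 - no group access at all
  chmod g+rx "$d/keys"     # the fix: give the group read and execute
  stat -c '%a' "$d/keys"   # prints 750
  rm -rf "$d"
)
```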

Another attempt at sending an email showed there was a valid key, but that the time on this server was ahead by some 6 minutes.  I then found that NTP hadn’t been enabled, and so had to enable it in order to get the time aligned with other servers on the Internet.

My last test email, was a complete success.  The headers showed that everything was as it should be.  The DKIM signature had been accepted.

The next thing I need to read up on and implement for the domain in question is DMARC.  But more on that later.

Reference material

Courier MTA and DKIM

Image credit:  Thanks to Jake Rush for uploading the featured image GotCredit to flickr.com.