I have to admit this isn’t a question that crops up all that often, but given what I have been through over recent months, I thought I’d share with the world.
Let’s start from the beginning…
What are raw devices?
Raw devices are used when there is a need to present underlying storage devices as character devices, which will typically be consumed by an RDBMS.
Some examples of database technologies where “raws” might be used would be;
SAP Replication Server
The list is not limited to the above, but they are the ones I know about.
Why use raw devices?
Well, there is a belief (and probably some, maybe lots, of evidence) that file systems introduce an extra overhead when processing I/O. Though I would question, in this day and age of Enterprise SSD and NVMe-based storage solutions, whether it is anywhere near as relevant as it used to be.
So the idea is simple. Present a raw character device (or multiples of) to the database technology being used and allow it to handle the file system type tasks and thereby remove the (perceived??) overhead of a file system.
For me, I can only think this would really be relevant in particularly low-latency environments, and even then I would be keen to see some baseline figures to confirm whether the extra admin overhead raw devices bring is worth it.
Hang on! You enticed me in with how many raw devices can you create? So tell me!!
OK, the greatest number of raw devices you can create on a Red Hat Enterprise Linux/CentOS 7 system is… 8192.
That’s 0 all the way up to 8191.
Would I really ever need that many raw devices?
Well, maybe. I doubt you would ever really create 8192 raw devices, as this would require you to effectively provision the same number of storage LUNs.
So how did I stumble across this (potentially unimportant) fact?
Well, whilst working on the requirements from my local DBA team, I was also attempting to introduce a level of standardisation, so it was easy to see what the raw devices were used for.
Raw devices are numbered and it doesn’t look like you can use alphabetical characters in the name. So a number standard had to be created. For example;
raw devices numbered 1100-1150 might be the raws used to store the actual data, 1300-1320 might contain the logs and 1500-1510 might be for temp tables.
So if you also include a bit of future proofing and have a large enough number of storage LUNs to provision for use directly by the RDBMS then you could quickly find yourself constrained if you don’t plan ahead.
Anyway, the above was found out because I started to get strange problems when trying to create udev rules which would create the raw devices, so I had a fun hour of trying to work out the magic number.
Which, for raw devices (if not much else), is 8191, the maximum raw device number you can use.
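For reference, the kind of udev rule involved looks something like this; the WWID and raw device number below are purely illustrative, but the general shape is the approach documented for RHEL 7 (where the old /etc/sysconfig/rawdevices method is gone);

```
# /etc/udev/rules.d/60-raw.rules
# Bind a block device (matched by its serial/WWID) to a numbered raw device.
# Both the ID_SERIAL value and raw1100 below are made-up examples.
ACTION=="add", KERNEL=="sd*", ENV{ID_SERIAL}=="36000aaaabbbbccccddddeeeeffff0000", RUN+="/usr/bin/raw /dev/raw/raw1100 %N"
```

One rule per LUN is where the numbering standard above starts to matter.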
In the past I have looked at adding Sender Policy Framework (SPF), which checks the bounce address of an email; SenderID, which checks the FROM address in the email header; and DomainKeys Identified Mail (DKIM), which signs your emails to confirm that they were indeed sent from an authorised email server (one which has published its public key via DNS).
Now the final piece in the email puzzle is DMARC.
What is DMARC?
Very briefly, it stands for Domain-based Message Authentication, Reporting and Conformance.
What does it do?
Via DNS you (the sender) publish under your domain an instruction that states whether or not a receiving email server should trust your SPF, SenderID and DKIM records, all of which are also published under your domain’s DNS zone. You can say whether they should reject or quarantine emails that purport to come from your email server but don’t (or do nothing).
It also allows you to receive information, in XML format, about how your domain’s reputation is faring at a given receiving email server, as and when your users send emails to third parties.
What does the DNS record look like?
It looks like the following;
_dmarc IN TXT "v=DMARC1; p=none; rua=mailto:firstname.lastname@example.org"
OK, so if we break this down a little we have the following components;
“_dmarc” – This is the name of the TXT record, which recipient email servers will try to retrieve from your DNS when configured to use DMARC.
“IN” – Standard part of a BIND DNS record. Means Internet, nothing more, nothing less.
“TXT” – This is the record type. DMARC utilises the bog standard TXT record type as this was seen as the quickest method to adoption, rather than treading the lengthy path towards a new DNS record type.
Now we have the actual payload (note they are separated by semicolons);
“v=DMARC1” this is the version of DMARC
“p=none” – We are effectively saying do nothing and we don’t want to confirm or deny that the emails are from us or that the SPF, SenderID or DKIM information should be trusted
“rua=mailto:firstname.lastname@example.org” – Doesn’t need to be included, but if you do include this part then you can expect to receive emails from the 3rd party email servers which have received email(s) from your domain, confirming what they thought of them
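As an aside, the payload is nothing more than a list of semicolon-separated tag=value pairs, which a quick bit of shell makes obvious (using the example record from above);

```shell
# Split the example DMARC payload into its individual tags
record='v=DMARC1; p=none; rua=mailto:firstname.lastname@example.org'
echo "$record" | tr ';' '\n' | sed 's/^ *//'
```
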
Are there other options?
Yes, though I am only going to focus on a couple here;
The “p=” option has three values that you can use;
“none” – Effectively do nothing. This should be set initially whilst you get things set up. Once you have confirmed things look good, then you can start to be a bit more forceful in what you would like other email providers to do with messages which do not come from your email server.
“quarantine” – This is where they would potentially pass the email on for further screening or simply decide to put it into the spam/junk folder.
“reject” – This is you saying that if a 3rd party receives an email, supposedly from you, but that wasn’t sent from one of the list of approved email servers (SPF or SenderID) or if it doesn’t pass the DKIM test then it should be rejected and not even delivered.
You’ve set your _dmarc record, now what?
We will assume that your DNS zone has replicated to all of your DNS servers and that you have correctly configured your email server to sign your outbound emails with your DKIM private key.
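A quick way to confirm the record is actually visible to the outside world is a DNS lookup; substitute your own domain for example.org here;

```shell
# Query the published DMARC record (example.org is a placeholder domain)
dig +short TXT _dmarc.example.org
```

If your name servers are in sync, you should get back the record you published.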
At this point I would highly advise going to https://www.mail-tester.com and sending a test email (with a realistic subject and a paragraph or two of readable text) to the email address they provide.
Once mail-tester.com has received your test email, they will process the email headers to confirm whether or not SPF, SenderID, DKIM and DMARC are all correctly configured and working.
It is possible that if your DNS servers are not completely aligned and up-to-date, mail-tester.com may be unable to provide an accurate report. If that happens give it 12 hours and repeat the test again.
Credit: Thanks to Mr Darkroom for making the featured image called Checkpoint available on Flickr.
I have embraced Red Hat Satellite server in a big way over the past year and try to use it wherever possible though not for everything.
One of the features I started using to simplify life, whilst I look at other configuration management systems, was Configuration Channels. These allow you to provide a central repository of files and binaries which can be deployed to a server during the initial kickstart deployment process.
Some changes had been made a month or so ago, to ensure that a specific configuration channel would be included in future deployments by way of updating the Activation Key for that deployment type in Satellite server. Seems innocent enough at this point. It is worth noting that there were other configuration channels associated with this activation key.
At the same time I had also added a couple of packages to the software package list which were also required at time of deployment.
Now, I rely on scripts which have been deployed to a server to complete some post server build tasks. The first thing I noticed after a test deployment, was a complete lack of any scripts where I expected them to be. The configuration channels had created the required folder structure but had stopped completely and had gone no further. The error the Satellite server reported back to me was… well not massively helpful;
Fatal error in Python code occurred []
Nothing more, nothing less.
At this point I started trying to remember what I had added (thankfully not too hard, as I document things quite heavily 🙂 ). Here, roughly, are the steps I took to work out where the issue resided;
Removed the additional packages I had specified for this particular build – made no difference
Removed the most recently added configuration channel – made no difference
Tested another Red Hat Enterprise Linux 7 build (not using this particular kickstart profile) – success, so the issue would appear to be limited to this one profile
Removed the other configuration channels that had been added some time before the last one – failed, the configuration channels still would not deploy. But wait, there was light at the end of the tunnel!
But, following this last step, the error message changed, from something not very helpful to something quite helpful indeed! The message stated that permissions could not be applied as per those stipulated against specific files in the configuration channel.
So it transpires that it was a permissions resolution issue. Well, more a group resolution issue really. There were a couple of files which were set to be deployed with a specific group. The group in question is served from an LDAP server, and the newly built machine wasn’t configured at that point to talk to the LDAP server; for this particular deployment we didn’t want auto-registration with the LDAP services.
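One quick check worth adding to any post-build validation, then, is whether the host can actually resolve the groups you are about to assign to files. The group name below is hypothetical;

```shell
# dba_scripts stands in for whatever LDAP-served group your files are owned by
if getent group dba_scripts >/dev/null; then
    echo "group resolves - safe to deploy files owned by it"
else
    echo "group does not resolve - deployments setting this group will fail"
fi
```
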
So the lesson here is make small changes, test frequently and make sure you document what you have done. Or use a configuration management system which is version controlled, so you can easily roll back.
Just so we are clear, I was running Red Hat Satellite Server 5.7 (fully patched) on RHEL 6.8 and trying to deploy RHEL 7.3. My adventure to upgrade Satellite server to version 6.2 will be coming to a blog post soon.
So, it would appear this story comes with a lesson attached (free of charge) that all should take note of: “Always make one change at a time and test, or as near to one change as you can”.
I’ve recently been working on a project to deploy a couple of Pure Storage FlashArray //M10s, and rather than using Fibre Channel we opted for 10Gb Ethernet (admittedly for reasons of cost), using iSCSI as the transport mechanism.
Whenever you read up on iSCSI (and NFS for that matter) there inevitably ends up being a discussion around the MTU size. My thinking here is that if your network has sufficient bandwidth to handle Jumbo Frames and large MTU sizes, then it should be done.
Now I’m not going to ramble on about enabling Jumbo Frames exactly, but I am going to focus on the MTU size.
What is MTU?
MTU stands for Maximum Transmission Unit. It defines the maximum size of a network frame that you can send in a single data transmission across the network. The default MTU size is 1500. Whether that be Red Hat Enterprise Linux, Fedora, Slackware, Ubuntu, Microsoft Windows (pick a version), Cisco IOS or Juniper’s JunOS, it has in my experience always been 1500 (though that’s not to say that some specialist providers may not change this default value for black box solutions).
So what is a Jumbo Frame?
The internet is pretty much unified on the idea that any packet or frame which is above the 1500 byte default can be considered a jumbo frame. Typically you would want to enable this for specific needs such as NFS and iSCSI, where the bandwidth is at least 1Gbps or, better, 10Gbps.
A lot of what I had read in the early days about this topic suggests that you should set the MTU to 9000 bytes, so what should you be mindful of when doing so?
Well, let’s take an example. You have a requirement where you need to enable jumbo frames and you have set an MTU size of 9000 across your entire environment;
virtual machine interfaces
physical network interfaces
and core switches
So you enable an MTU of 9000 everywhere, and you then test your shiny new jumbo frame enabled network by way of a large ping;
$ ping -s 9000 -M do 192.168.1.1
> ping -l 9000 -f -t 192.168.1.1
Both of the above perform the same job. They will attempt to send an ICMP ping;
To our chosen destination – 192.168.1.1
With a packet size of 9000 bytes (option -l 9000 or -s 9000); remember the default MTU is 1500 so this is definitely a Jumbo packet
Where the request is not fragmented, thus ensuring that a packet of such a size can actually reach the intended destination without being reduced
The key to the above examples is the “-f” (Windows) and “-M do” (Linux). This will enforce the requirement that the packet can be sent from your server/workstation to its intended destination without the size of the packet being messed with aka fragmented (as that would negate the whole point of using jumbo frames).
If you do not receive a normal ping response back which states its size as being 9000 then something is not configured correctly.
The error might look like the following;
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500
The above error is highlighting the fact that we are attempting to send a packet which is bigger than the local NIC is configured to handle. It is telling us the MTU is set at 1500 bytes. In this instance we would need to reconfigure our network card to handle the jumbo sized packets.
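On Linux you can check what an interface is currently set to straight from sysfs, and raise it with ip; lo is used below because it always exists, and eth0 is an assumed NIC name;

```shell
# Read the current MTU of the loopback interface
cat /sys/class/net/lo/mtu
# Raising the MTU on a real NIC would look like this (needs root; eth0 is an assumption):
# ip link set dev eth0 mtu 9000
```
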
Now let’s take a look at what happens with the ICMP ping request and its size. As a test I have pinged the localhost interface on my machine and I get the following;
[toby@testbox ~]$ ping -s 9000 -M do localhost
PING localhost(localhost (::1)) 9000 data bytes
9008 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.142 ms
9008 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.148 ms
9008 bytes from localhost (::1): icmp_seq=3 ttl=64 time=0.145 ms
--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2085ms
rtt min/avg/max/mdev = 0.142/0.145/0.148/0.002 ms
Firstly, notice the size of each request. The initial request may have been 9000 bytes, however that doesn’t take into account the header which needs to be added to the packet so that it can be correctly sent over your network or the Internet. Secondly, notice that the packet was received without any fragmentation (note I used the “-M do” option to ensure fragmentation couldn’t take place). In this instance the loopback interface is configured with a massive MTU of 65536 bytes and so all worked swimmingly.
Note that the final packet size is actually 9008 bytes. The packet size increased by 8 bytes due to the addition of the ICMP header mentioned above.
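The arithmetic behind those 9008-byte replies is simply the payload plus the 8-byte ICMP header;

```shell
payload=9000
icmp_header=8
echo $((payload + icmp_header))   # prints 9008, matching the ping output above
```
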
My example above stated that the MTU had been set to 9000 on ALL devices. In this instance the packets will never get to their intended destination without being fragmented as 9008 bytes is bigger than 9000 bytes (stating the obvious I know).
The intermediary devices (routers, bridges, switches and firewalls) will need an MTU size that is bigger than 9000, sized sufficiently to accept the desired packet size. A standard Ethernet frame (according to Cisco) would require an additional 18 bytes on top of the 9000 for the payload, and it would be wise to actually specify a bit higher. So, an MTU size of 9216 bytes would be better, as it would allow enough headroom for everything to pass through nicely.
Focusing on the available options in a Windows world
And here is the real reason for this post. Microsoft, in all their wisdom, provide you with a drop-down box to select a predefined MTU size for your NICs. With Windows 2012 R2 (possibly slightly earlier versions too), the nearest size you can set via the network card configuration GUI is 9014. This would result in the packet being fragmented, or in the case of iSCSI it would potentially result in very poor performance. An MTU of 9014 isn’t going to work if the rest of the network or the destination device are set at 9000.
The lesson here is make sure that both source and destination machines have an MTU of equal size and that anything in between must be able to support a higher MTU size than 9000. And given that Microsoft have hardcoded the GUI with a specific number of options, you will probably want to configure your environment to handle this slightly higher size.
Note. 1Gbps Ethernet only supports a maximum MTU size of 9000, so although Jumbo Frames can be enabled you would need to reduce the MTU size slightly on the source and destination servers, with everything in between set at 9000.
Featured image credit; TaylorHerring. As bike frames go, the Penny Farthing could well be considered to have a jumbo frame.
Spacewalk utilises a database back end to store the required information about your environment. The two options are PostgreSQL and Oracle. Neither would be my preference but I always opt for the lesser of two evils – PostgreSQL.
The installation is a piece of cake, and can be performed by issuing the following command at the command line;
yum install spacewalk-setup-postgresql -y
During the process you should be prompted to accept the Spacewalk GPG key. You will need to enter “y” to accept!
Now things have been made pretty easy for you so far, and we won’t stop now. To install all of the required packages for Spacewalk just run the following;
yum install spacewalk-postgresql
And let it download everything you need. In all (at the time of writing) there were 379 packages totalling 563M.
Again you will likely be prompted to import the Fedora EPEL (7) GPG key. This is necessary so just type “y” and give that Enter key a gentle tap.
And.. you will also be prompted to import the JPackage Project GPG key. Same process as above – “y” followed by Enter.
During the installation you will see a lot of text scrolling up the screen. This will be a mix of general package installation output from yum and some commands that the RPM package will initiate to set and define such things as SELinux contexts.
The key thing is you should see right at the end “Complete!”. You know you are in a good place at this point.
Security: Setting up the firewall rules
CentOS 7 and (for that matter) Red Hat Enterprise Linux 7 ship with firewalld as standard. Now I’m not completely sure of firewalld but I’m sticking with it; should you decide you want to use iptables instead (and you have taken steps to make sure it is enabled), I have provided the firewall rules required for both;
Note. Make sure you have double dashes/hyphens if you copy and paste as I have seen the pasted text only using a single hyphen.
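For firewalld, the equivalent of the iptables rules below would be along these lines; a sketch assuming the default zone is the one in use;

```shell
# Permanently allow HTTP and HTTPS in the default zone, then reload
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload
```
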
Skip to section after iptables if you have applied the above configuration!
Now as iptables can be configured in all manner of ways, I’m just going to provide the basics. If your set-up is more customised than the default, then you probably don’t need me telling you how to set up iptables.
I will just make one assumption though. That the default INPUT policy is set to DROP and that you do not have any DROP or REJECT lines at the end of your INPUT chain.
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
And don’t forget to save your firewall rules;
# service iptables save
Right then, still with me? Awesome, so let’s continue with getting Spacewalk up and running. At this point there is one fundamental thing you need…
You must have a resolvable Fully Qualified Domain Name (FQDN). For my installation I have fudged it and added the FQDN to the host file, as I intend to build the rest of my new lab environment using Spacewalk.
So assuming you have followed everything above we can now simply run the spacewalk-setup command;
Note. This assumes you have the embedded PostgreSQL database and not a remote DB, or the Oracle DB option. Just saying.
So you should see something like the following (it may take quite some time for many of the tasks to be completed, so bear with it);
[root@spacewalk ~]# spacewalk-setup
* Setting up SELinux..
** Database: Setting up database connection for PostgreSQL backend.
** Database: Installing the database:
** Database: This is a long process that is logged in:
** Database: /var/log/rhn/install_db.log
*** Progress: ###
** Database: Installation complete.
** Database: Populating database.
*** Progress: ###########################
* Configuring tomcat.
* Setting up users and groups.
** GPG: Initializing GPG and importing key.
** GPG: Creating /root/.gnupg directory
You must enter an email address.
Admin Email Address? firstname.lastname@example.org
* Performing initial configuration.
* Configuring apache SSL virtual host.
Should setup configure apache's default ssl server for you (saves original ssl.conf) [Y]?
** /etc/httpd/conf.d/ssl.conf has been backed up to ssl.conf-swsave
* Configuring jabberd.
* Creating SSL certificates.
CA certificate password?
Re-enter CA certificate password?
Organization? Toby Heywood
Organization Unit [spacewalk]?
Email Address [email@example.com]?
Country code (Examples: "US", "JP", "IN", or type "?" to see a list)? GB
** SSL: Generating CA certificate.
** SSL: Deploying CA certificate.
** SSL: Generating server certificate.
** SSL: Storing SSL certificates.
* Deploying configuration files.
* Update configuration in database.
* Setting up Cobbler..
Cobbler requires tftp and xinetd services be turned on for PXE provisioning functionality. Enable these services [Y]? y
* Restarting services.
Visit https://spacewalk to create the Spacewalk administrator account.
Now at this point you are almost ready to break open a beer and give yourself a pat on the back. But let’s finalise the installation first.
Creating your Organisation
(that’s Organization for the Americans)
Setting up your organisation requires only a few simple things to be provided.
Click the Create Organization button and you should finally see a similar screen to the following;
The last thing to do now you have your shiny new installation of Spacewalk is to perform a few sanity checks;
Navigate to Admin > Task Engine Status and confirm that everything looks healthy and that the Scheduling Service is showing as “ON”
You can also take a look at my earlier blog post – spacewalk sanity checking – about some steps I previously took to make sure everything was running.
Admit it. You, just like me, use Google every day to answer those tough questions that we face daily.
Sometimes we will ask it how to get us home from somewhere we have never been before – “OK Google, take me home” – other times we might be close to starvation (relatively speaking) – “show me interesting recipes” or “OK Google, give me directions to the nearest drive through McDonalds” – but where I use it most is at work, where I search for such mundane things as “rsyslog remote server configuration”. Yes, I know, I could just look at the man page for rsyslog.conf, but Google seems to have worked its way into my head so much that it is often the first place I look.
Right… back to the topic at hand – Security Broken by Design.
So whilst Googling how to set up a remote syslog server I read through one person’s blog post and an alarm bell started to ring!
This particular post had correctly suggested the configuration for rsyslog on both the client and server, but then went on (in a very generic way) to instruct readers to open up firewall ports on the clients.
This highlighted a fundamental lack of understanding on the part of the individual whose blog I was reading. You only need to open up ports 514/tcp or 514/udp to enable rsyslog to function on the server-side. The connection is initiated from the client NOT the server. Granted, in a completely hardened installation it is likely that outbound ports will need to be enabled. BUT, where security is concerned, I feel that things should not be taken for granted or worse, assumed!
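For clarity, the only place that needs both a listener and an open port is the server; a minimal receiving configuration in rsyslog’s newer syntax looks something like this;

```
# /etc/rsyslog.conf on the receiving server only - no client-side changes needed
module(load="imudp")            # UDP syslog reception
input(type="imudp" port="514")
module(load="imtcp")            # TCP syslog reception
input(type="imtcp" port="514")
```

The client simply points its output at the server and initiates the connection itself.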
This generic discussion of security seems completely idiotic! The likes of Red Hat, Ubuntu and almost all other distributions now enable firewalls by default. And the normal fashion for such a thing is to allow “related” and “established” traffic to flow out of your network card to the LAN and potentially beyond, but (and more importantly) to block non-essential traffic inbound to your machine.
If you are working in a hardened environment then one of the two options below would be better suited for your server;
So in short.
Please think before you make potentially unnecessary changes to your workstations and servers!
Recently, I’ve been working on deploying a clustered Instant Messaging (IM) chat service in my lab, and after setting up the clustering by way of the Hazelcast plugin, I found that I was getting some rather strange errors written into the log files which suggested that the server to server connectivity was not being successfully initiated.
Here is a snippet from the log file;
2016.09.14 17:51:13 WARN [Server SR - 16593225]: org.jivesoftware.openfire.net.SocketReader - Closing session due to incorrect hostname in stream header. Host: of1.lab.tobyheywood.com. Connection: org.jivesoftware.openfire.net.SocketConnection@c53ac0 socket: Socket[addr=/192.168.1.11,port=44042,localport=5269] session: null
2016.09.14 17:51:13 WARN [Server SR - 3158473]: org.jivesoftware.openfire.net.SocketReader - Closing session due to incorrect hostname in stream header. Host: of1.lab.tobyheywood.com. Connection: org.jivesoftware.openfire.net.SocketConnection@1f8e35b socket: Socket[addr=/192.168.1.11,port=44043,localport=5269] session: null
2016.09.14 17:51:13 WARN [pool-10-thread-3]: org.jivesoftware.openfire.server.ServerDialback[Acting as Originating Server: Create Outgoing Session from: openfire.lab.tobyheywood.com to RS at: of1.lab.tobyheywood.com (port: 5269)] - Unable to create a new outgoing session
2016.09.14 17:51:13 WARN [pool-10-thread-3]: org.jivesoftware.openfire.session.LocalOutgoingServerSession[Create outgoing session for: openfire.lab.tobyheywood.com to of1.lab.tobyheywood.com] - Unable to create a new session: Dialback (as a fallback) failed.
2016.09.14 17:51:13 WARN [pool-10-thread-3]: org.jivesoftware.openfire.session.LocalOutgoingServerSession[Authenticate local domain: 'openfire.lab.tobyheywood.com' to remote domain: 'of1.lab.tobyheywood.com'] - Unable to authenticate: Fail to create new session.
Now as part of my investigation into the issue I noticed that the servers were not listening on the server to server port (TCP port 5269). Which in all honesty confused me even more.
A bit of Googling later (admit it, we all do it sometimes), and I had found the solution.
From the Openfire Web UI, Navigate to the following location and set the STARTTLS Policy to “Required“.
Server > Server Settings > Server to Server > STARTTLS Policy = Required.
Do this for both (or all) nodes and restart the services. You should find that things are looking happier.
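Once the services are back up you can confirm the server-to-server listener is actually bound; 5269 is the standard XMPP s2s port;

```shell
# Is anything listening on the XMPP server-to-server port?
ss -tln | grep ':5269' || echo "nothing listening on 5269 yet"
```
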
When it comes to setting up Spacewalk to meet your organisation’s package management and provisioning needs, there is more to it than simply installing Spacewalk and then clicking Provision! There is a list of hoops to jump through before you can get up and running. This post aims to tackle the common setup tasks through to your first client registration, specifically with respect to installing it on a system running CentOS 7. But let’s not get ahead of ourselves. There is a lot to do, so let’s get cracking!
I am assuming here that you have managed to install Spacewalk and are now looking for the next steps, starting with creating the administrative user. If not, may I suggest taking a peek here, as I have provided a very rough guide to doing this. I am also basing a lot of these steps on the HowTo published on the CentOS wiki, though that was for release 5 of CentOS, not 7. So I will try to fill in gaps where required.
Initial configuration of Spacewalk
OK, at this point you have hopefully got Spacewalk/Satellite server installed. The first thing to do is to log in to the web GUI. Well, when I say log in, I mean create the admin account. Best to do this right away, before someone has the opportunity to take over your nice new Spacewalk/Red Hat Satellite server. You access the GUI by typing in the FQDN of the Spacewalk server and it will redirect you to the Create Spacewalk Administrator page. You should see a screen much like the following;
Just enter a few essential details and away you go. OK you won’t get too far yet, but keep reading!
Upon clicking the “Create Login” button, you should see the normal dashboard screen that is displayed when logging into Spacewalk (and Red Hat Satellite) for the first time. With one exception. You should also have a banner across the top with the following wording;
You have created your first user for the Spacewalk Service. Additional configuration should be finalized by Click here
Make sure you click the “Click here” link and complete the rest of the steps.
I would advise you check and double check the General configuration tab, specifically the Spacewalk hostname, this should ideally match the FQDN of your satellite server. And if you haven’t specified a name which is resolvable via DNS you will likely find that things don’t run exactly as they should.
The Certificate tab will be of interest if you are minting your own SSL certificates or wish to use a commercially generated cert. The Bootstrap script tab is where you define settings relating to how clients connect and the associated security around those connections.
The Organizations tab (which in my opinion should read Organisations, because that’s how you spell it) is where you can define how your organisation looks; you can define multiple activation keys for different parts of your organisation, manage subscriptions and users, to name but a few of the things you can do.
The Restart tab, er, do I really need to suggest what this does? And finally the Cobbler tab. From here you can kick off a synchronisation between Spacewalk and cobblerd. I recommend clicking it now to make sure the integration between the two applications is working. I would also suggest you double check the cobbler log file, located at /var/log/cobbler/cobbler.log, for any signs of problems. Here’s a sample output;
In order to register systems against your newly installed Spacewalk server, you must have an activation key defined. This is not done automatically, and therefore we shall tackle it now. Navigate to Systems > Activation Keys.
Initially you should see a message stating;
You do not currently have a universal default activation key set. To set a key as the universal default, please visit the details page of that key and check off the 'Universal Default?' checkbox.
Click + Create Key in the top right hand corner of the Activation Keys screen. You will need to add the following details;
A Key (I would advise putting something meaningful in here, rather than allowing a key to be auto-generated)
Leave the Usage field blank
Leave the Base Channels as default (Spacewalk Default)
Add-on Entitlements, I have selected only Provisioning (it can be changed later)
I also ticked the Universal Default as I do not want to restrict its use
After the default key has been created, the screen looks like this;
Creating your first package repository (and channel)
I will be focusing on CentOS 7 here, but Satellite is capable of providing a centralised repository for other RPM based distributions.
CentOS 7 Base Repository
We will assume you may at some point want to build further servers using the base OS RPMs. The first thing you need to do is find a local mirror site which you can base your repository on. CentOS provide a lovely page on their web site – https://www.centos.org/download/mirrors/, which details, (by country) where you can download the packages from. In my case I searched the page for United Kingdom and picked one from the list.
Lets get on with the job at hand, and create a repository. Click Channels > Manage Software Channels > Manage Repositories. And then click Create Repository. You will then see a screen, not too dissimilar to the one below;
Define the repository label and URL (this is the source URL from which Spacewalk will obtain the packages). I have also specified the SSL certificate that was generated during installation.
You will now need to create the Channel that will be associated with this repository. Click Manage Software Channels from the left hand menu and then click on Create Channel. You will (once the page has loaded) be given a few ground rules regarding naming conventions and then the opportunity to create your new channel. The long and short of it is this;
Channel Name and Channel Label are both required (hence the red asterisks).
Channel Name must;
be between 6 and 256 characters in length
begin with a letter
may contain spaces, parentheses () and forward slashes /
Channel Label must;
be no longer than 128 characters
start with a letter or digit
be lowercase (no exceptions)
may contain hyphens, periods, underscores and numerals
Some other options on the screen include controlling access to the repository (i.e. is it private and only accessible to your Spacewalk organisation, or is it public), and you can also define GPG security settings for signed packages.
The last step is to marry the repository and channel together. This is achieved by going to the Repositories tab, and selecting the repository from the list of available repositories. In my case it is just one.
The final step is to kick off a synchronisation of the repository. Now there are two ways to do this; 1) click the Sync tab, tick the “Create kickstartable tree” option and then click Sync Now. Or 2) run the equivalent command from the CLI.
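For the CLI route, the tool is spacewalk-repo-sync. A minimal sketch, assuming the channel label “centos_7_base” that I use in this lab; substitute your own:

```shell
CHANNEL="centos_7_base"   # my lab's channel label - substitute your own
# Run this on the Spacewalk server itself; harmless no-op elsewhere.
command -v spacewalk-repo-sync >/dev/null \
  && spacewalk-repo-sync --channel "$CHANNEL" \
  || echo "spacewalk-repo-sync not found - run this on the server"
```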
Now sit back and watch/wait. For the current 7.2 repo, the base number of packages is just over 9000, so depending on your connection to the internet, you could find this to be a quick process or quite a slow one (it is also very dependent upon the mirror you have selected). Another option, which I haven’t tried but believe would work, is to use a copy of the installation media. If you try that option, let me know how you get on 🙂
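For what it’s worth, here is roughly how I imagine the installation-media route would look. This is untested, the paths are examples, and it assumes spacewalk-repo-sync’s --url option behaves as documented:

```shell
ISO="/tmp/CentOS-7-x86_64-DVD.iso"   # example path to a downloaded DVD image
MNT="/mnt/centos7"
mkdir -p "$MNT" 2>/dev/null || true
mount -o loop,ro "$ISO" "$MNT" || echo "Could not mount $ISO - check the path"
# Point the sync at the mounted DVD instead of an internet mirror.
spacewalk-repo-sync --channel centos_7_base --url "file://$MNT" \
  || echo "Sync failed - are you running this on the Spacewalk server?"
```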
Registering your first client
First, you will need to make sure you have the required packages on the client to be registered. In my case, I had used a minimal install and as such I was missing the required packages. Easily rectified;
[root@rhc-client ~]# yum install rhn-setup
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
--> Running transaction check
---> Package rhn-setup.noarch 0:2.0.2-6.el7 will be installed
--> Processing Dependency: rhn-client-tools = 2.0.2-6.el7 for package: rhn-setup-2.0.2-6.el7.noarch
--> Processing Dependency: rhnsd for package: rhn-setup-2.0.2-6.el7.noarch
--> Running transaction check
---> Package rhn-client-tools.noarch 0:2.0.2-6.el7 will be installed
--> Processing Dependency: rhnlib >= 2.5.57 for package: rhn-client-tools-2.0.2-6.el7.noarch
--> Processing Dependency: python-hwdata for package: rhn-client-tools-2.0.2-6.el7.noarch
--> Processing Dependency: python-gudev for package: rhn-client-tools-2.0.2-6.el7.noarch
--> Processing Dependency: python-dmidecode for package: rhn-client-tools-2.0.2-6.el7.noarch
---> Package rhnsd.x86_64 0:5.0.13-5.el7 will be installed
--> Processing Dependency: rhn-check >= 0.0.8 for package: rhnsd-5.0.13-5.el7.x86_64
--> Running transaction check
---> Package python-dmidecode.x86_64 0:3.10.13-11.el7 will be installed
---> Package python-gudev.x86_64 0:147.2-7.el7 will be installed
---> Package python-hwdata.noarch 0:1.7.3-4.el7 will be installed
---> Package rhn-check.noarch 0:2.0.2-6.el7 will be installed
--> Processing Dependency: yum-rhn-plugin >= 1.6.4-1 for package: rhn-check-2.0.2-6.el7.noarch
---> Package rhnlib.noarch 0:2.5.65-2.el7 will be installed
--> Running transaction check
---> Package yum-rhn-plugin.noarch 0:2.0.1-5.el7 will be installed
--> Processing Dependency: m2crypto >= 0.16-6 for package: yum-rhn-plugin-2.0.1-5.el7.noarch
--> Running transaction check
---> Package m2crypto.x86_64 0:0.21.1-17.el7 will be installed
--> Finished Dependency Resolution
Package Arch Version Repository Size
rhn-setup noarch 2.0.2-6.el7 th_lab_server 87 k
Installing for dependencies:
m2crypto x86_64 0.21.1-17.el7 th_lab_server 429 k
python-dmidecode x86_64 3.10.13-11.el7 th_lab_server 82 k
python-gudev x86_64 147.2-7.el7 th_lab_server 18 k
python-hwdata noarch 1.7.3-4.el7 th_lab_server 32 k
rhn-check noarch 2.0.2-6.el7 th_lab_server 52 k
rhn-client-tools noarch 2.0.2-6.el7 th_lab_server 379 k
rhnlib noarch 2.5.65-2.el7 th_lab_server 65 k
rhnsd x86_64 5.0.13-5.el7 th_lab_server 48 k
yum-rhn-plugin noarch 2.0.1-5.el7 th_lab_server 80 k
Install 1 Package (+9 Dependent packages)
Total size: 1.2 M
Total download size: 1.2 M
Installed size: 4.8 M
Is this ok [y/d/N]: y
(1/9): python-gudev-147.2-7.el7.x86_64.rpm | 18 kB 00:00
(2/9): m2crypto-0.21.1-17.el7.x86_64.rpm | 429 kB 00:00
(3/9): python-hwdata-1.7.3-4.el7.noarch.rpm | 32 kB 00:00
(4/9): rhn-check-2.0.2-6.el7.noarch.rpm | 52 kB 00:00
(5/9): rhn-client-tools-2.0.2-6.el7.noarch.rpm | 379 kB 00:00
(6/9): rhn-setup-2.0.2-6.el7.noarch.rpm | 87 kB 00:00
(7/9): rhnlib-2.5.65-2.el7.noarch.rpm | 65 kB 00:00
(8/9): rhnsd-5.0.13-5.el7.x86_64.rpm | 48 kB 00:00
(9/9): yum-rhn-plugin-2.0.1-5.el7.noarch.rpm | 80 kB 00:00
Total 1.3 MB/s | 1.2 MB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Installing : python-gudev-147.2-7.el7.x86_64 1/10
Installing : rhnlib-2.5.65-2.el7.noarch 2/10
Installing : python-hwdata-1.7.3-4.el7.noarch 3/10
Installing : python-dmidecode-3.10.13-11.el7.x86_64 4/10
Installing : rhn-client-tools-2.0.2-6.el7.noarch 5/10
Installing : m2crypto-0.21.1-17.el7.x86_64 6/10
Installing : rhnsd-5.0.13-5.el7.x86_64 7/10
Installing : rhn-setup-2.0.2-6.el7.noarch 8/10
Installing : yum-rhn-plugin-2.0.1-5.el7.noarch 9/10
Installing : rhn-check-2.0.2-6.el7.noarch 10/10
Verifying : rhn-setup-2.0.2-6.el7.noarch 1/10
Verifying : m2crypto-0.21.1-17.el7.x86_64 2/10
Verifying : rhn-check-2.0.2-6.el7.noarch 3/10
Verifying : python-dmidecode-3.10.13-11.el7.x86_64 4/10
Verifying : rhnsd-5.0.13-5.el7.x86_64 5/10
Verifying : rhn-client-tools-2.0.2-6.el7.noarch 6/10
Verifying : python-hwdata-1.7.3-4.el7.noarch 7/10
Verifying : yum-rhn-plugin-2.0.1-5.el7.noarch 8/10
Verifying : rhnlib-2.5.65-2.el7.noarch 9/10
Verifying : python-gudev-147.2-7.el7.x86_64 10/10
m2crypto.x86_64 0:0.21.1-17.el7 python-dmidecode.x86_64 0:3.10.13-11.el7
python-gudev.x86_64 0:147.2-7.el7 python-hwdata.noarch 0:1.7.3-4.el7
rhn-check.noarch 0:2.0.2-6.el7 rhn-client-tools.noarch 0:2.0.2-6.el7
rhnlib.noarch 0:2.5.65-2.el7 rhnsd.x86_64 0:5.0.13-5.el7
Next step is to install your Spacewalk server’s SSL certificate on the client. This is a security measure which enables the client to verify that the server it is talking to really is the server it SHOULD be talking to.
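A sketch of fetching the certificate onto the client; the hostname and destination path match the rhnreg_ks command below, and Spacewalk publishes the CA certificate under /pub/ on the server (worth verifying on your own install):

```shell
SPACEWALK="manager.lab.tobyheywood.com"           # my lab server - use yours
CERT="/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT"    # where rhnreg_ks expects it
# Pull the CA certificate from the server's public directory.
curl -Ssf -m 10 "http://$SPACEWALK/pub/RHN-ORG-TRUSTED-SSL-CERT" -o "$CERT" \
  || echo "Could not fetch the certificate - check the hostname"
```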
The final step in the process is to actually register the client against the Spacewalk/Satellite server.
[toby@devops ~]$ sudo rhnreg_ks --serverUrl=https://manager.lab.tobyheywood.com/XMLRPC --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=1-lab.tobyheywood.com
This system is not subscribed to any channels.
RHN channel support will be disabled.
At this point, we float over to the Spacewalk server UI, and we should now see our client in the list of Systems;
Now, for those of you with a keen eye for detail, you will have noticed in the screenshot above, and in the snippet of command line output at the time of registration, that the system isn’t currently subscribed to any channels. This is very easily remedied;
Click on the client in the system list
On the initial Overview screen, you will see a box – Subscribed Channels
Click Alter Channel Subscriptions
Now select a channel from the list under Base Software Channel – in my case “centos_7_base”
You should now see the channel listed under the heading “Software Channel Subscriptions”.
In addition you may have child channels created beneath your base channel.
And there we have it. Time to have a play and see what you can do by having a click around the tabs related to the system and the wider Spacewalk UI.
Following on from an earlier post, it would seem that the “Warning /dev/root does not exist” issue is not confined to “none” kickstart PXE booted installations as I had first thought.
I was working on a RHEL 7 installation using Red Hat Satellite 5.7 (an upgrade to 6.x is in the pipeline, but there are bigger fish to fry right now), where we were re-using a lot of the RHEL 6 pxelinux kernel parameters.
Now, as you may or may not know (if you have read my other posts on the topic), there are numerous Anaconda and dracut parameters that can be passed to the kernel in the pxelinux.cfg/default (or the MAC-specific) config file. The problem we had found was the existence of a ksdevice= parameter which pointed to eth0. In RHEL/CentOS 7, the ethernet device naming standard changed from ethX to ensX, which works out as follows;
en = Ethernet
sX = Slot X (where X is the physical or virtual slot number where the nic resides)
By default, the first interface is used by anaconda/dracut/pxelinux IF no option is specified. If, however, you specifically tell it to use something which fundamentally doesn’t exist, it WILL still try to use it… and fail! Miserably! And give you an error which ultimately seems kind of unrelated.
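To make that concrete, here is a hedged sketch of the sort of pxelinux.cfg/default entry that bites you; the label, paths and kickstart URL are all made up for illustration:

```
# pxelinux.cfg/default - illustrative entry, all paths/URLs are examples
label rhel7
  kernel images/rhel7/vmlinuz
  # BAD (carried over from RHEL 6): appending ksdevice=eth0 - that device
  # name no longer exists on a RHEL/CentOS 7 system.
  # GOOD: omit ksdevice entirely and let anaconda pick the first interface,
  #       or name the real device (e.g. ens3) if you must pin it.
  append initrd=images/rhel7/initrd.img inst.ks=http://satellite.example.com/ks.cfg ip=dhcp
```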
You have been warned!
As with many things in life, this served as a reminder that things change and that you can’t always take the “old” and reuse with the “new” without issue.
It would appear that I have been caught out twice now due to the way that nmtui (NetworkManager Text User Interface) works. I have been messing around with various internal sandboxed networks in my VM environment and (I can only assume in my haste) I have entered the IP address of a second NIC without full regard for the on-screen prompts.
In nmtui, there is one field missing which is quite common in many other tools. Take a look at the following and tell me what’s missing;
So what field do you think is missing?
Now, although the information is all on screen in the screenshot above, there is one thing that may not be obvious. In the Addresses field you specify not only the IP address but also the subnet mask, in CIDR (Classless Inter-Domain Routing) notation.
IF you happen to enter an IP address without thinking about it and don’t specify the netmask or CIDR prefix, nmtui assumes that you are only referring to a /32, a.k.a. a netmask of 255.255.255.255, which for the uninitiated means just that one IP. It assumes that there is nothing beyond that IP address. Its world is only itself.
In the good old days where I used to configure the IP address via ifcfg-eth* files, I also remembered to enter the NETMASK= line, and therefore never had this issue.
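To make the difference concrete, here is a tiny shell illustration of what nmtui infers; the addresses are made up:

```shell
# What I typed in haste vs what nmtui actually needed.
typed="192.168.56.10"        # no prefix given
intended="192.168.56.10/24"  # explicit /24, i.e. NETMASK=255.255.255.0

# Mimic nmtui's assumption: no explicit prefix means /32 (just that one IP).
prefix_of() {
  case "$1" in
    */*) echo "${1#*/}" ;;   # explicit prefix supplied
    *)   echo "32" ;;        # none supplied - host-only
  esac
}

echo "nmtui treats $typed as a /$(prefix_of "$typed")"
echo "nmtui treats $intended as a /$(prefix_of "$intended")"
```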
Anyway, rant over. Hopefully twice is enough because, if nothing else, the next time I see name resolution errors in my logs I will be making sure my netmask is set correctly before thinking that tomcat is having issues.
Featured image credit: Thanks to versageek for making the Network Spagetti image available on Flickr.com.