I have to admit this isn’t a question that crops up all that often, but given what I have been through over recent months, I thought I’d share with the world.
Let's start from the beginning…
What are raw devices?
Raw devices are used when there is a need to present underlying storage devices as character devices, which will typically be consumed by an RDBMS.
Some examples of database technologies where “raws” might be used would be;
SAP ASE
SAP IQ
SAP Replication Server
MySQL
The list is not limited to the above, but they are the ones I know about.
Why use raw devices?
Well, there is a belief (and probably some, maybe lots, of evidence) that file systems introduce an extra overhead when processing I/O. Though I would question, in this day and age of enterprise SSD and NVMe based storage solutions, whether it is anywhere near as relevant as it used to be.
So the idea is simple. Present a raw character device (or several of them) to the database technology being used and allow it to handle the file system type tasks, thereby removing the (perceived??) overhead of a file system.
For me, I can only think this would be really relevant in particularly demanding, low-latency environments, and even then I would be keen to see some baseline figures to confirm whether the extra admin overhead raw devices bring is worth it.
Hang on! You enticed me in with how many raw devices can you create? So tell me!!
OK, the greatest number of raw devices you can create on a Red Hat Enterprise Linux/CentOS 7 system is… 8192.
That’s 0 all the way up to 8191.
Would I really ever need that many raw devices?
Well, maybe. I doubt you would ever really create 8192 raw devices, as this would require you to provision effectively the same number of storage LUNs.
So how did I stumble across this (potentially unimportant) fact?
Well, whilst working on the requirements from my local DBA team, I was also attempting to introduce a level of standardisation, so it was easy to see what the raw devices were used for.
Raw devices are numbered and it doesn't look like you can use alphabetical characters in the name, so a numbering standard had to be created. For example;
raw devices numbered 1100-1150 might be the raws used to store the actual data, 1300-1320 might contain the logs and 1500-1510 might be for temp tables.
So if you also include a bit of future proofing and have a large enough number of storage LUNs to provision for use directly by the RDBMS then you could quickly find yourself constrained if you don’t plan ahead.
Anyway, the above was found out because I started to get strange problems when trying to create the udev rules which would create the raw devices, so I had a fun hour of trying to work out the magic number; for raw devices (if not much else) that number is 8191, the maximum raw device number you can use.
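For illustration, binding a raw device with udev on RHEL/CentOS 7 looks roughly like the following. The rules would normally live in something like /etc/udev/rules.d/60-raw.rules; the block device name, raw device number and ownership below are purely examples, and the path to the raw binary may differ on your system.
# Bind block device /dev/sdb1 to raw device 1100 (example names only)
ACTION=="add", KERNEL=="sdb1", RUN+="/usr/bin/raw /dev/raw/raw1100 %N"
# Give the database user access to the resulting character device (example user/group)
ACTION=="add", KERNEL=="raw1100", OWNER="sybase", GROUP="sybase", MODE="0660"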
Normally, I perform my OS upgrades by way of a clean install. This time round though, I thought I’d give the upgrade process a try, given Fedora are pushing it quite a lot this time round.
The actual process took about 30-35 minutes on my machine, and that's including the time required to download the software updates in the first place.
The upgrade process was started from the Software GUI. Clicking the install button results in the PC rebooting and then running in “no man's land” for a while whilst the updates are applied. During the process, all you really have to watch is a small bit of text in the upper left corner of your screen.
Once the upgrade has completed, the PC reboots.
The first thing you notice is that grub now has a new kernel version to boot from. Admittedly not overly noteworthy for me this time around, as I'm just upgrading my day-to-day machine and don't really need to consider what new features there are in the kernel on this occasion. And if it breaks something, then I will enhance my knowledge whilst fixing whatever goes wrong.
Next up I have the usual prompt for my disk encryption password and then shortly after that the login prompt.
Upon entering my password, the screen flickered and went grey, and the mouse pointer was relocated right into the centre of my screen. At this point my PC locked up. Awesome! Just what I wanted.
A brief bit of googling didn't really show anything specific for Fedora 26, but it did yield a link to the Common Fedora 25 Bugs page. The more interesting part, though, described my exact problem: a frozen grey screen after upgrade.
So, it looks like it is my fault, well sort of. I happen to have installed the EasyScreenCast GNOME extension and this seems to upset things. I left that installed and enabled, but removed (as advised in the F25 bugs page) the package “clutter-gst2”.
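For reference, removing that package on Fedora is a one-liner (assuming dnf);
sudo dnf remove clutter-gst2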
A quick reboot and my issue was resolved! Yay google and the Fedora wiki to the rescue. And now to have a look at what has changed.
I’ve recently been working on a project to deploy a couple of Pure Storage FlashArray //M10s, and rather than using Fibre Channel we opted for 10Gb Ethernet (admittedly for reasons of cost), using iSCSI as the transport mechanism.
Whenever you read up on iSCSI (and NFS for that matter) there inevitably ends up being a discussion around the MTU size. My thinking here is that if your network has sufficient bandwidth to handle jumbo frames and large MTU sizes, then it should be done.
Now I’m not going to ramble on about enabling Jumbo Frames exactly, but I am going to focus on the MTU size.
What is MTU?
MTU stands for Maximum Transmission Unit. It defines the maximum size of a network frame that you can send in a single data transmission across the network. The default MTU size is 1500 bytes. Whether that be Red Hat Enterprise Linux, Fedora, Slackware, Ubuntu, Microsoft Windows (pick a version), Cisco IOS or Juniper's JunOS, in my experience it has always been 1500 (though that's not to say that some specialist providers may not change this default value for black box solutions).
So what is a Jumbo Frame?
The internet is pretty much unified on the idea that any packet or frame above the 1500 byte default can be considered a jumbo frame. Typically you would want to enable this for specific needs such as NFS and iSCSI, where the bandwidth is at least 1Gbps, or better still 10Gbps.
MTU sizing
A lot of what I had read in the early days about this topic suggests that you should set the MTU to 9000 bytes, so what should you be mindful of when doing so?
Well, let's take an example: you have a requirement to enable jumbo frames and you have set an MTU size of 9000 across your entire environment;
virtual machine interfaces
physical network interfaces
fabric interconnects
and core switches
So you enable an MTU of 9000 everywhere, and you then test your shiny new jumbo frame enabled network by way of a large ping;
Linux
$ ping -s 9000 -M do 192.168.1.1
Windows
> ping -l 9000 -f -t 192.168.1.1
Both of the above perform the same job. They will attempt to send an ICMP ping;
To our chosen destination – 192.168.1.1
With a packet size of 9000 bytes (option -l 9000 on Windows or -s 9000 on Linux); remember the default MTU is 1500, so this is definitely a jumbo packet
Where the request is not fragmented, thus ensuring that a packet of such a size can actually reach the intended destination without being reduced
The key to the above examples is the “-f” (Windows) and “-M do” (Linux) options. These enforce the requirement that the packet can be sent from your server/workstation to its intended destination without the size of the packet being messed with, aka fragmented (as that would negate the whole point of using jumbo frames).
If you do not receive a normal ping response back which states its size as being 9000 then something is not configured correctly.
The error might look like the following;
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500
The above error is highlighting the fact that we are attempting to send a packet which is bigger than the local NIC is configured to handle. It is telling us the MTU is set at 1500 bytes. In this instance we would need to reconfigure our network card to handle the jumbo sized packets.
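On a Linux host that reconfiguration is quick to test and then persist; the interface name below is just an example.
# Change the MTU immediately (does not survive a reboot)
sudo ip link set dev eth0 mtu 9000
# To persist on RHEL/CentOS, add MTU=9000 to the interface's ifcfg file,
# e.g. /etc/sysconfig/network-scripts/ifcfg-eth0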
Now let's take a look at what happens with the ICMP ping request and its size. As a test I have pinged the localhost interface on my machine and I get the following;
[toby@testbox ~]$ ping -s 9000 -M do localhost
PING localhost(localhost (::1)) 9000 data bytes
9008 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.142 ms
9008 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.148 ms
9008 bytes from localhost (::1): icmp_seq=3 ttl=64 time=0.145 ms
^C
--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2085ms
rtt min/avg/max/mdev = 0.142/0.145/0.148/0.002 ms
Firstly, notice the size of each request. The initial request may have been 9000 bytes, however that doesn't take into account the headers that need to be added to the packet so that it can be correctly sent over your network or the Internet. Secondly, notice that the packet was received without any fragmentation (note I used the “-M do” option to ensure fragmentation couldn't take place). In this instance the loopback interface is configured with a massive MTU of 65536 bytes and so everything worked swimmingly.
Note that the final packet size is actually 9008 bytes; it increased by 8 bytes due to the addition of the ICMP header mentioned above.
My example above stated that the MTU had been set to 9000 on ALL devices. In this instance the packets will never reach their intended destination without being fragmented, as 9008 bytes (more still once the IP header is added) is bigger than 9000 bytes (stating the obvious, I know).
The resolution
The intermediary devices (routers, bridges, switches and firewalls) will need an MTU size that is bigger than 9000, sized sufficiently to accept the desired packet size. A standard Ethernet frame (according to Cisco) would require an additional 18 bytes on top of the 9000 for the payload, and it would be wise to specify a bit higher still. So, an MTU size of 9216 bytes would be better, as it would allow enough headroom for everything to pass through nicely.
Focusing on the available options in a Windows world
And here is the real reason for this post. Microsoft, in all their wisdom, provide you with a drop-down box to select the required predefined MTU size for your NICs. With Windows 2012 R2 (possibly slightly earlier versions too), the nearest size you can set via the network card configuration GUI is 9014. This would result in the packet being fragmented or, in the case of iSCSI, potentially very poor performance. An MTU of 9014 isn't going to work if the rest of the network or the destination device is set at 9000.
The lesson here is to make sure that both source and destination machines have an MTU of equal size and that anything in between can support an MTU size higher than 9000. And given that Microsoft have hardcoded the GUI with a specific set of options, you will probably want to configure your environment to handle this slightly higher size.
Note. 1Gbps Ethernet only supported a maximum MTU size of 9000, so although Jumbo Frames can be enabled you would need to reduce the MTU size slightly on the source and destination servers, with everything in between set at 9000.
Featured image credit; TaylorHerring. As bike frames go, the Penny Farthing could well be considered to have a jumbo frame.
Please note. This is for an outdated version of Spacewalk.
It would appear that during an upgrade of my blog at some point over the past year, I have managed to wipe out the original how-to guide to installing Spacewalk on CentOS 7, so here we go again.
A step-by-step guide to installing Spacewalk on CentOS 7. Just in case you weren't aware, Spacewalk is the upstream project for Red Hat Satellite Server.
Assumptions
You know the basic idea behind Spacewalk, if not see here
You have a vanilla VM with CentOS 7.2 installed which was deployed as a “minimal” installation
You have subsequently run an update to make sure you have the latest patches
You have root access or equivalent via sudo
You have got vim installed (if not, running the following command should fix that)
yum install vim -y
The machine you intend to install Spacewalk onto has access to the internet
Preparation
Firstly, we need to install and/or create the necessary YUM repo files that will be used to install Spacewalk directly from the official Spacewalk yum repository, along with all its associated dependencies.
Run the following command as root on your spacewalk VM
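I can't find the original command any more; it essentially installs the Spacewalk repo RPM (plus EPEL) and would be something along these lines, though the exact URL depends on the Spacewalk release you are after, so treat it as illustrative only;
# Illustrative URL only – substitute the repo RPM for your chosen Spacewalk release
rpm -Uvh http://yum.spacewalkproject.org/2.6/RHEL/7/x86_64/spacewalk-repo-2.6-0.el7.noarch.rpm
yum install epel-release -y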
You then need to manually configure another yum repository for JPackage which is a dependency for Spacewalk, by running the following (you will need to be the root user to do this);
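Again the original snippet is missing; at the time the JPackage repo file looked roughly like the following (treat the mirrorlist and gpgkey URLs as illustrative);
cat > /etc/yum.repos.d/jpackage-generic.repo << EOF
[jpackage-generic]
name=JPackage generic
mirrorlist=http://www.jpackage.org/mirrorlist.php?dist=generic&type=free&release=5.0
enabled=1
gpgcheck=1
gpgkey=http://www.jpackage.org/jpackage.asc
EOF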
Spacewalk utilises a database back end to store the required information about your environment. The two options are PostgreSQL and Oracle. Neither would be my preference but I always opt for the lesser of two evils – PostgreSQL.
The installation is a piece of cake, and can be performed by issuing the following command at the command line;
yum install spacewalk-setup-postgresql -y
During the process you should be prompted to accept the Spacewalk GPG key. You will need to enter “y” to accept!
Installation: Spacewalk
Now, things have been made pretty easy for you so far, and we won't stop now. To install all of the required packages for Spacewalk just run the following;
yum install spacewalk-postgresql
And let it download everything you need. In all (at the time of writing) there were 379 packages totalling 563M.
Again you will likely be prompted to import the Fedora EPEL (7) GPG key. This is necessary so just type “y” and give that Enter key a gentle tap.
And.. you will also be prompted to import the JPackage Project GPG key. Same process as above – “y” followed by Enter.
During the installation you will see a lot of text scrolling up the screen. This will be a mix of general package installation output from yum and some commands that the RPM package will initiate to set and define such things as SELinux contexts.
The key thing is you should see right at the end “Complete!”. You know you are in a good place at this point.
Security: Setting up the firewall rules
CentOS 7 and (for that matter) Red Hat Enterprise Linux 7 ship with firewalld as standard. Now, I'm not completely sold on firewalld but I'm sticking with it; should you decide you want to use iptables instead (and you have taken steps to make sure it is enabled), I have provided the firewall rules required for both;
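firewalld
The firewalld rules I used were along these lines (assuming the default zone and that you only need the web interface ports open);
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload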
Note. Make sure you have double dashes/hyphens if you copy and paste, as I have seen pasted text end up with only a single hyphen.
Skip to the section after iptables if you have applied the above configuration!
iptables
Now, as iptables can be configured in all manner of ways, I'm just going to provide the basics; if your set-up is more customised than the default, then you probably don't need me telling you how to set up iptables.
I will just make one assumption though: that the default INPUT policy is set to DROP and that you do not have any DROP or REJECT lines at the end of your INPUT chain.
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
And don’t forget to save your firewall rules;
# service iptables save
Configuring Spacewalk
Right then, still with me? Awesome, so let's continue with getting Spacewalk up and running. At this point there is one fundamental thing you need…
You must have a resolvable Fully Qualified Domain Name (FQDN). For my installation I have fudged it and added the FQDN to the hosts file, as I intend to build the rest of my new lab environment using Spacewalk.
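For reference, the hosts file fudge is just one extra line; the IP address and names below are examples only;
echo "192.168.122.50 spacewalk.lab.example.com spacewalk" | sudo tee -a /etc/hosts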
So assuming you have followed everything above we can now simply run the following;
spacewalk-setup
Note. The above assumes you have the embedded PostgreSQL database and not a remote DB, or the Oracle DB option. Just saying.
So you should see something like the following (it may take quite some time for many of the tasks to be completed, so bear with it);
[root@spacewalk ~]# spacewalk-setup
* Setting up SELinux..
** Database: Setting up database connection for PostgreSQL backend.
** Database: Installing the database:
** Database: This is a long process that is logged in:
** Database: /var/log/rhn/install_db.log
*** Progress: ###
** Database: Installation complete.
** Database: Populating database.
*** Progress: ###########################
* Configuring tomcat.
* Setting up users and groups.
** GPG: Initializing GPG and importing key.
** GPG: Creating /root/.gnupg directory
You must enter an email address.
Admin Email Address? toby@lab.tobyhewood.com
* Performing initial configuration.
* Configuring apache SSL virtual host.
Should setup configure apache's default ssl server for you (saves original ssl.conf) [Y]?
** /etc/httpd/conf.d/ssl.conf has been backed up to ssl.conf-swsave
* Configuring jabberd.
* Creating SSL certificates.
CA certificate password?
Re-enter CA certificate password?
Organization? Toby Heywood
Organization Unit [spacewalk]?
Email Address [toby@lab.tobyhewood.com]?
City? London
State? London
Country code (Examples: "US", "JP", "IN", or type "?" to see a list)? GB
** SSL: Generating CA certificate.
** SSL: Deploying CA certificate.
** SSL: Generating server certificate.
** SSL: Storing SSL certificates.
* Deploying configuration files.
* Update configuration in database.
* Setting up Cobbler..
Cobbler requires tftp and xinetd services be turned on for PXE provisioning functionality. Enable these services [Y]? y
* Restarting services.
Installation complete.
Visit https://spacewalk to create the Spacewalk administrator account.
Now at this point you are almost ready to break open a beer and give yourself a pat on the back. But let's finalise the installation first.
Creating your Organisation
(that’s Organization for the Americans)
Setting up your organisation requires only a few simple things to be provided.
Click the Create Organization button and you should finally see a similar screen to the following;
The last thing to do now you have your shiny new installation of Spacewalk is to perform a few sanity checks;
Navigate to Admin > Task Engine Status and confirm that everything looks healthy and that the Scheduling Service is showing as “ON”
You can also take a look at my earlier blog post – spacewalk sanity checking – about some steps I previously took to make sure everything was running.
Admit it. You, just like me, use Google every day to answer those tough questions that we face daily.
Sometimes we will ask it how to get us home from somewhere we have never been before – “OK Google, take me home” – other times we might be close to starvation (relatively speaking) – “show me interesting recipes” or “OK Google, give me directions to the nearest drive through McDonalds” – but where I use it most is at work, where I search for such mundane things as “rsyslog remote server configuration”. Yes, I know, I could just look at the man page for rsyslog.conf, but Google seems to have worked its way into my head so much that it is often the first place I look.
Right… back to the topic at hand – Security Broken by Design.
So whilst Googling how to set up a remote syslog server, I read through one person's blog post and an alarm bell started to ring!
This particular post had correctly suggested the configuration for rsyslog on both the client and the server, but then went on (in a very generic way) to instruct readers to open up firewall ports on the clients.
This highlighted a fundamental lack of understanding on the part of the individual whose blog I was reading. You only need to open up ports 514/tcp or 514/udp to enable rsyslog to function on the server-side. The connection is initiated from the client NOT the server. Granted, in a completely hardened installation it is likely that outbound ports will need to be enabled. BUT, where security is concerned, I feel that things should not be taken for granted or worse, assumed!
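To make that concrete, the server-side change is typically just the following (assuming firewalld and rsyslog listening on 514/tcp); the client needs no inbound rule at all;
# On the syslog SERVER only – allow inbound syslog
sudo firewall-cmd --permanent --add-port=514/tcp
sudo firewall-cmd --reload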
This generic discussion of security seems completely idiotic! The likes of Red Hat, Ubuntu and almost all other distributions now enable firewalls by default. And the normal fashion for such a thing is to allow “related” and “established” traffic to flow out of your network card to the LAN and potentially beyond, but (and more importantly) to block non-essential traffic inbound to your machine.
If you are working in a hardened environment then one of the two options below would be better suited for your server;
So in short.
Please think before you make potentially unnecessary changes to your workstations and servers!
After having installed Spacewalk, got it working to a certain point and then found that there may have been issues with the installation, I thought it would be easier simply to re-install Spacewalk onto a new virtual machine.
So following on from my how to article, I wanted to make sure that post installation, I had performed sufficient checks to confirm that there were no issues with the scheduler service or cobbler, as these were two things I had great difficulty trying to get working.
I guess it is also worth mentioning that the VM I am running Spacewalk on has a single vCPU and 4GB of memory. For storage I have given it 40GB, which will do me fine. And as for the OS, it is running CentOS 7 (1511).
So what should we check
Good question. The following is a rough list of all the services I confirmed as enabled and running, also checking that there were no horrible errors in the log files;
Services
cobblerd
postgresql
xinetd (tftp)
httpd
tomcat
taskomatic (a.k.a. the scheduler)
cobblerd
[toby@manager ~]$ sudo systemctl status cobblerd
● cobblerd.service - LSB: daemon for libvirt virtualization API
Loaded: loaded (/etc/rc.d/init.d/cobblerd)
Active: active (running) since Fri 2016-04-15 23:08:37 BST; 24min ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/cobblerd.service
└─13257 /usr/bin/python -s /bin/cobblerd --daemonize
Apr 15 23:08:35 manager systemd[1]: Starting LSB: daemon for libvirt virtualization API...
Apr 15 23:08:37 manager cobblerd[13247]: Starting cobbler daemon: [ OK ]
Apr 15 23:08:37 manager systemd[1]: Started LSB: daemon for libvirt virtualization API.
Apr 15 23:31:35 manager systemd[1]: [/run/systemd/generator.late/cobblerd.service:8] Failed to add dependency on network,.service, ignoring: Invalid argument
Apr 15 23:31:35 manager systemd[1]: [/run/systemd/generator.late/cobblerd.service:8] Failed to add dependency on xinetd,.service, ignoring: Invalid argument
The last two lines can be ignored. I believe this is purely down to some references to the sysvinit scripts which are no longer used, and as you will see later things appear to be running fine (this time around).
xinetd (tftp)
[toby@manager ~]$ sudo systemctl enable tftp
[toby@manager ~]$ sudo systemctl start tftp
[toby@manager ~]$ sudo systemctl status tftp
● tftp.service - Tftp Server
Loaded: loaded (/usr/lib/systemd/system/tftp.service; indirect; vendor preset: disabled)
Active: active (running) since Fri 2016-04-15 23:46:30 BST; 2s ago
Docs: man:in.tftpd
Main PID: 29012 (in.tftpd)
CGroup: /system.slice/tftp.service
└─29012 /usr/sbin/in.tftpd -s /var/lib/tftpboot
httpd
[toby@manager ~]$ sudo systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2016-04-15 23:08:34 BST; 39min ago
Docs: man:httpd(8)
man:apachectl(8)
Main PID: 13168 (httpd)
Status: "Total requests: 1; Current requests/sec: 0; Current traffic: 0 B/sec"
CGroup: /system.slice/httpd.service
├─13168 /usr/sbin/httpd -DFOREGROUND
├─13169 /usr/sbin/httpd -DFOREGROUND
├─13170 /usr/sbin/httpd -DFOREGROUND
├─13171 /usr/sbin/httpd -DFOREGROUND
├─13172 /usr/sbin/httpd -DFOREGROUND
├─13173 /usr/sbin/httpd -DFOREGROUND
├─13174 /usr/sbin/httpd -DFOREGROUND
├─13175 /usr/sbin/httpd -DFOREGROUND
└─13176 /usr/sbin/httpd -DFOREGROUND
Apr 15 23:08:34 manager systemd[1]: Starting The Apache HTTP Server...
Apr 15 23:08:34 manager httpd[13168]: AH00557: httpd: apr_sockaddr_info_get() failed for manager
Apr 15 23:08:34 manager httpd[13168]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
Apr 15 23:08:34 manager systemd[1]: Started The Apache HTTP Server.
Feel free to ignore the warning messages with regards to the FQDN.
tomcat
Once the install had completed there was an error message;
Tomcat failed to start properly or the installer ran out of tries. Please check /var/log/tomcat*/catalina.out for errors.
I checked the logs and saw some errors, but as you can see from the following, simply making sure it was enabled and started appears to have cleared up whatever the issue may have been.
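The exact output has been lost, but the commands were along these lines;
sudo systemctl enable tomcat
sudo systemctl start tomcat
sudo systemctl status tomcat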
taskomatic
Now, this one doesn't appear to have been moved over to the new systemd environment, and therefore we fall back to the good old sysvinit scripts and the service command to confirm this one is working;
[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is running (13296).
Looking good so far
But can it withstand a reboot? Now that is the question. So I repeated the above steps again, just to confirm. I won’t bore you with all the details;
cobblerd.service – active (running)
httpd.service – active (running)
tftp.service – inactive (dead)
postgresql.service – active (running)
tomcat.service – active (running)
taskomatic – RHN Taskomatic is not running.
taskomatic revisited
[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is not running.
[toby@manager ~]$ sudo chkconfig taskomatic on
[toby@manager ~]$ sudo service taskomatic start
Starting RHN Taskomatic...
[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is running (10870).
And for good measure I gave the machine another reboot, just to confirm the taskomatic service did start.
[toby@manager ~]$ sudo service taskomatic status
RHN Taskomatic is running (1278).
Oh Yeah! Now I’m a happy camper. And it’s time to re-visit the initial configuration part, which I shall post about shortly.
Image credit; Thanks to Mark Walsh for making the featured image called “Russell Street Court Cells – Padded Cell” available on Flickr.com.
Right then. So far (if you have been following along) we have done the following;
Created a local yum repository based on the installation media on the initial server in our sand boxed lab
Installed and setup DNS with Forward and Reverse zones
Installed and configured dhcpd for the lab network
Tested the DNS and dhcp services using a client machine
And network enabled our yum repository by exposing the directory using Apache
The final steps to enable an ISO free installation of CentOS 7 into a KVM virtual machine are;
Installing, configuring and testing Trivial File Transfer Protocol (TFTP), adding additional configuration to DHCPd to enable PXE booting, and testing that setup (this post)
The final piece will be to create our kickstart file, which will define the standard installation of CentOS 7 (maybe splitting out into server and client)
So, let's not waste any time; let's get our hands dirty and flex our fingers with a bit of typing…
Trivial File Transfer Protocol (tftp)
First things first, let's log on to the server and get the packages installed. Thankfully they are part of the installation media and therefore part of the yum repository that was set up in the last post.
[toby@rhc-server ~]$ sudo yum install tftp*
Loaded plugins: fastestmirror
baselocal | 3.6 kB 00:00:00
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package tftp.x86_64 0:5.2-11.el7 will be installed
---> Package tftp-server.x86_64 0:5.2-11.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================
Installing:
tftp x86_64 5.2-11.el7 baselocal 35 k
tftp-server x86_64 5.2-11.el7 baselocal 44 k
Transaction Summary
============================================================================================================================================
Install 2 Packages
Total download size: 79 k
Installed size: 112 k
Is this ok [y/d/N]: y
Downloading packages:
--------------------------------------------------------------------------------------------------------------------------------------------
Total 654 kB/s | 79 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : tftp-server-5.2-11.el7.x86_64 1/2
Installing : tftp-5.2-11.el7.x86_64 2/2
Verifying : tftp-5.2-11.el7.x86_64 1/2
Verifying : tftp-server-5.2-11.el7.x86_64 2/2
Installed:
tftp.x86_64 0:5.2-11.el7 tftp-server.x86_64 0:5.2-11.el7
Complete!
Note, I’ve installed both client and server RPMs on the server. For my tests from a client I will only install the client tftp package.
The next step is to prepare the folder structure where the files will be served from. Don't forget to make sure the SELinux contexts, etc. are defined correctly, otherwise things will not work as expected.
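The original command has gone missing, but a minimal sketch, assuming the new location is /tftpboot and that semanage (from policycoreutils-python) is available, would be;
sudo mkdir /tftpboot
# Label /tftpboot the same way as the default /var/lib/tftpboot location
sudo semanage fcontext -a -e /var/lib/tftpboot /tftpboot
sudo restorecon -Rv /tftpboot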
I am slightly cheating with the above command as I am using the standard tftp directory SELinux security types and contexts as a reference. Why make things more difficult than they need to be?
And now let's configure the tftp daemon to use this new location by default.
So let's just check where we are;
tftp server and client installed? – check!
folder structure created, permissions set and SELinux attributes defined correctly – check!
tftp server configured? – Nope
Best to fix the TFTP configuration side of things before going further;
[toby@rhc-server lib]$ cat /etc/xinetd.d/tftp
# default: off
# description: The tftp server serves files using the trivial file transfer \
# protocol. The tftp protocol is often used to boot diskless \
# workstations, download configuration files to network-aware printers, \
# and to start the installation process for some operating systems.
service tftp
{
socket_type = dgram
protocol = udp
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -s /tftpboot
disable = no
per_source = 11
cps = 100 2
flags = IPv4
}
OK, so the remaining task at this point, before we start the service, is to make sure the firewall is configured to allow connectivity to the tftp service via its LAN interface, and also to make sure the hosts.allow file has an entry for the tftp service. hosts.allow is used by the xinetd processes and is required in addition to the firewall changes.
Let's get the hosts.allow file out of the way first;
[root@rhc-server ~]# cat /etc/hosts.allow
#
# hosts.allow This file contains access rules which are used to
# allow or deny connections to network services that
# either use the tcp_wrappers library or that have been
# started through a tcp_wrappers-enabled xinetd.
#
# See 'man 5 hosts_options' and 'man 5 hosts_access'
# for information on rule syntax.
# See 'man tcpd' for information on tcp_wrappers
#
in.tftp: ALL
And now for the firewall;
[root@rhc-server ~]# firewall-cmd --zone public --add-service tftp
[root@rhc-server ~]# firewall-cmd --permanent --zone public --add-service tftp
success
It is worth mentioning that if you only use the “--permanent” parameter on the command line, the rule will not be applied immediately. Now we should be good to start the service and do some tests. We will create a test file to try to copy via tftp before performing the tests.
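Starting the service went something like this (tftp is managed by xinetd here, so that is the unit to enable and check in the journal);
sudo systemctl enable xinetd
sudo systemctl start xinetd
sudo journalctl -u xinetd --no-pager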
Based on the above, the service has started successfully and nothing appears to be out of the ordinary in the journal, so let's proceed with the testing. Here is my test file;
[root@rhc-server ~]# echo "Hello? Is it me you're looking for?" > /tftpboot/this_is_a_test
[root@rhc-server ~]# ll -Z /tftpboot/
-rw-r--r--. root root unconfined_u:object_r:tftpdir_rw_t:s0 this_is_a_test
Call me paranoid, but I wanted to make sure the file had inherited the SELinux type of “tftpdir_rw_t”. Which it did 🙂
Testing locally on server
[toby@rhc-server ~]$ tftp -4 localhost
tftp> get this_is_a_test
tftp> quit
[toby@rhc-server ~]$ ls
this_is_a_test
[toby@rhc-server ~]$ cat this_is_a_test
Hello? Is it me you're looking for?
Looking good so far! 🙂
Testing remotely from client
Before we can test, let's determine if the tftp package is installed; in my case it wasn't, so I installed it;
[toby@rhc-client ~]$ yum search tftp
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
============================================================ N/S matched: tftp =============================================================
syslinux-tftpboot.x86_64 : SYSLINUX modules in /tftpboot, available for network booting
tftp.x86_64 : The client for the Trivial File Transfer Protocol (TFTP)
tftp-server.x86_64 : The server for the Trivial File Transfer Protocol (TFTP)
Name and summary matches only, use "search all" for everything.
[toby@rhc-client ~]$ yum list tftp
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Available Packages
tftp.x86_64 5.2-11.el7 th_lab_server
[toby@rhc-client ~]$ sudo yum install tftp
[sudo] password for toby:
Loaded plugins: fastestmirror, langpacks
th_lab_server | 3.6 kB 00:00:00
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package tftp.x86_64 0:5.2-11.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================
Installing:
tftp x86_64 5.2-11.el7 th_lab_server 35 k
Transaction Summary
============================================================================================================================================
Install 1 Package
Total download size: 35 k
Installed size: 48 k
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/th_lab_server/packages/tftp-5.2-11.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for tftp-5.2-11.el7.x86_64.rpm is not installed
tftp-5.2-11.el7.x86_64.rpm | 35 kB 00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
Userid : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
Package : centos-release-7-0.1406.el7.centos.2.3.x86_64 (@anaconda)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : tftp-5.2-11.el7.x86_64 1/1
Verifying : tftp-5.2-11.el7.x86_64 1/1
Installed:
tftp.x86_64 0:5.2-11.el7
Complete!
And now the test from the client.
[toby@rhc-client ~]$ sudo firewall-cmd --zone public --add-service tftp
[sudo] password for toby:
success
[toby@rhc-client ~]$ tftp rhc-server
tftp> get this_is_a_test
tftp> quit
[toby@rhc-client ~]$ ls
Desktop Documents Downloads Music Pictures Public Templates test this_is_a_test Videos
[toby@rhc-client ~]$ cat this_is_a_test
Hello? Is it me you're looking for?
[toby@rhc-client ~]$ sudo firewall-cmd --zone public --remove-service tftp
success
Note. If you don't temporarily enable the tftp service in the client's firewall, the test will fail. I got the following error via tcpdump which highlighted that the firewall was blocking the request;
23:25:40.388650 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 54)
192.168.20.50.36588 > 192.168.20.1.69: [udp sum ok] 26 RRQ "this_is_a_test" netascii
23:25:40.390312 IP (tos 0x0, ttl 64, id 31747, offset 0, flags [none], proto UDP (17), length 70)
192.168.20.1.43788 > 192.168.20.50.36588: [bad udp cksum 0xa9c7 -> 0xddd1!] UDP, length 42
23:25:40.390721 IP (tos 0xc0, ttl 64, id 3881, offset 0, flags [none], proto ICMP (1), length 98)
192.168.20.50 > 192.168.20.1: ICMP host 192.168.20.50 unreachable - admin prohibited, length 78
IP (tos 0x0, ttl 64, id 31747, offset 0, flags [none], proto UDP (17), length 70)
192.168.20.1.43788 > 192.168.20.50.36588: [udp sum ok] UDP, length 42
At this point we have now completed the necessary steps to ensure we have a working tftp service.
Setting up PXE
The second part of today’s post covers the steps needed to enable booting from the network to facilitate building machines without the hassle of creating USB bootable images or burning ISO images to CD or DVD.
Our task list for this section is;
Add the additional config to the DHCP scope
Ensure we have the pxelinux/syslinux installed and copied to the required location
Create a basic menu to provide end users with installation options
Test
Configuring DHCP
So, let's start by reminding ourselves what the current dhcpd.conf file looks like;
[toby@rhc-server ~]$ sudo cat /etc/dhcp/dhcpd.conf
[sudo] password for toby:
#
# lab.tobyheywood.com dhcp daemon configuration file
#
# 2016-02-22 - Initial creation
#
# Define which IP to listen on. NOTE. daemon can only listen to one
# IP at a time if defined.
local-address 192.168.20.1;
# option definitions common to all supported networks...
option domain-name "lab.tobyheywood.com";
option domain-name-servers ns.lab.tobyheywood.com;
default-lease-time 600;
max-lease-time 7200;
# Use this to enble / disable dynamic dns updates globally.
#ddns-update-style interim;
# This is the authoritative DHCP server.
authoritative;
# Use this to send dhcp log messages to a different log file (you also
# have to hack syslog.conf to complete the redirection).
log-facility local7;
# Interface which can be accessed from outside the sandbox
# **** NOT IN USE ****
subnet 192.168.122.0 netmask 255.255.255.0 {
}
# The lab network
subnet 192.168.20.0 netmask 255.255.255.128 {
range 192.168.20.50 192.168.20.99;
option routers rtr.lab.tobyheywood.com;
}
So, in order to get PXE (Preboot eXecution Environment) to function, we will need to add a few more lines to the configuration, as shown at the end of the file below.
[toby@rhc-server ~]$ sudo cat /etc/dhcp/dhcpd.conf
[sudo] password for toby:
#
# lab.tobyheywood.com dhcp daemon configuration file
#
# 2016-02-22 - Initial creation
#
# Define which IP to listen on. NOTE. daemon can only listen to one
# IP at a time if defined.
local-address 192.168.20.1;
# option definitions common to all supported networks...
option domain-name "lab.tobyheywood.com";
option domain-name-servers ns.lab.tobyheywood.com;
default-lease-time 600;
max-lease-time 7200;
# Use this to enble / disable dynamic dns updates globally.
#ddns-update-style interim;
# This is the authoritative DHCP server.
authoritative;
# Use this to send dhcp log messages to a different log file (you also
# have to hack syslog.conf to complete the redirection).
log-facility local7;
# Interface which can be accessed from outside the sandbox
# **** NOT IN USE ****
subnet 192.168.122.0 netmask 255.255.255.0 {
}
# The lab network
subnet 192.168.20.0 netmask 255.255.255.128 {
range 192.168.20.50 192.168.20.99;
option routers rtr.lab.tobyheywood.com;
}
# Additional configuration for PXE booting
allow booting;
allow bootp;
option option-128 code 128 = string;
option option-129 code 129 = text;
next-server 192.168.20.1;
filename "/pxelinux.0";
Setting up the required syslinux files in /tftpboot
In the section above, when testing the tftp service on the client, I saw that there might be a shortcut to getting things up and running. Everything I have read says you need to manually copy the syslinux files into the /tftpboot directory; however, if you search the rpm database with yum for anything tftp related, you can see that there is potentially an easier way.
[toby@rhc-server ~]$ yum list tftp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Installed Packages
tftp.x86_64 5.2-11.el7 @baselocal
[toby@rhc-server ~]$ yum search tftp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
=========================================== N/S matched: tftp ===========================================
syslinux-tftpboot.x86_64 : SYSLINUX modules in /tftpboot, available for network booting
tftp.x86_64 : The client for the Trivial File Transfer Protocol (TFTP)
tftp-server.x86_64 : The server for the Trivial File Transfer Protocol (TFTP)
As you can see there appears to be a syslinux-tftpboot rpm. So lets see what it gives us;
[toby@rhc-server ~]$ sudo yum install syslinux-tftpboot
[sudo] password for toby:
Loaded plugins: fastestmirror
baselocal | 3.6 kB 00:00:00
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package syslinux-tftpboot.x86_64 0:4.05-8.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=========================================================================================================
Package Arch Version Repository Size
=========================================================================================================
Installing:
syslinux-tftpboot x86_64 4.05-8.el7 baselocal 425 k
Transaction Summary
=========================================================================================================
Install 1 Package
Total download size: 425 k
Installed size: 1.3 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : syslinux-tftpboot-4.05-8.el7.x86_64 1/1
Verifying : syslinux-tftpboot-4.05-8.el7.x86_64 1/1
Installed:
syslinux-tftpboot.x86_64 0:4.05-8.el7
Complete!
[toby@rhc-server ~]$ ls /tftpboot/
cat.c32 dmitest.c32 host.c32 ls.c32 pcitest.c32 rosh.c32 vesamenu.c32
chain.c32 elf.c32 ifcpu64.c32 lua.c32 pmload.c32 sanboot.c32 vpdtest.c32
cmd.c32 ethersel.c32 ifcpu.c32 mboot.c32 poweroff.com sdi.c32 whichsys.c32
config.c32 gfxboot.c32 ifplop.c32 memdisk pwd.c32 sysdump.c32 zzjson.c32
cpuid.c32 gpxecmd.c32 int18.com memdump.com pxechain.com this_is_a_test
cpuidtest.c32 gpxelinux.0 kbdmap.c32 meminfo.c32 pxelinux.0 ver.com
disk.c32 hdt.c32 linux.c32 menu.c32 reboot.c32 vesainfo.c32
It looks like we have a winner! Now we just need to make sure that the Preboot environment has a kernel or two to play with;
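That step isn't shown here, but roughly speaking it is just a case of copying the kernel and initrd from the CentOS 7 media into the tftp root. The /software/centos7 path below assumes the media is still available where it was copied for the local repository;
sudo mkdir /tftpboot/centos7
sudo cp /software/centos7/images/pxeboot/vmlinuz /tftpboot/centos7/
sudo cp /software/centos7/images/pxeboot/initrd.img /tftpboot/centos7/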
OK, so let's now create the directory which will hold the PXE menus;
[toby@rhc-server ~]$ sudo mkdir /tftpboot/pxelinux.cfg
[sudo] password for toby:
[toby@rhc-server ~]$ ls -Zd /tftpboot/pxelinux.cfg
drwxr-xr-x. root root unconfined_u:object_r:tftpdir_t:s0 /tftpboot/pxelinux.cfg
And now let's look at what is required to create the menus from which we will be able to select installation options.
[root@rhc-server centos7]# cat /tftpboot/pxelinux.cfg/default
DEFAULT menu.c32
PROMPT 0
TIMEOUT 300
ONTIMEOUT localdisk
MENU TITLE PXE Network Boot
LABEL localdisk
MENU LABEL ^Local Hard Drive
MENU DEFAULT
LOCALBOOT 0
LABEL Install_CentOS_7_2
MENU LABEL CentOS 7.2
KERNEL centos7/vmlinuz
APPEND initrd=http://rhc-server.lab.tobyheywood.com/centos7/isolinux/initrd.img inst.repo=http://rhc-server.lab.tobyheywood.com/centos7
At this point, all I wanted to do was prove that the PXE settings in DHCP were correct and that maybe, just maybe, I could build a VM across the network first time. No such luck! 🙁 Well, I guess 50% of the way there.
You can skip the next part and jump to here if you don't want to understand the pain I went through to get this operational.
I thought things were going well. I created a blank VM, made sure to select that I was going to boot from the network to install the OS in Virtual Machine Manager, and turned the VM on.
So far so good!
I'm now feeling pretty happy with myself. I select CentOS 7.0 and hit the enter key… and am then presented with the expected Welcome to CentOS 7 screen, from where I can kick off a manual installation.
So, all in all, things look good. However there is more work to be done, as the next step is to create a kickstart file to define how the base install should look.
Featured image taken by Ed Robinson who kindly uploaded it to Flickr.com. I have made no changes to this image and left it in its original form for your viewing pleasure. And to give the page a bit of colour 😉
As with most of my posts recently, I have been looking at what is required to set up an isolated lab environment, and what started out as a simple idea has slightly snowballed, due to one or more issues along the way.
The most recent is… during the installation I found that, after selecting my installation options via the GUI, I was confronted with the status message “Waiting for 1 threads to finish”. It sat there for a very long time!
The fix
Adding the “inst.geoloc=0” kernel parameter. So my CentOS 7.2 code block in the pxelinux.cfg/default now looks like this;
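The block itself no longer appears in this post, but based on the menu shown earlier it presumably ended up looking something like this, with inst.geoloc=0 tagged on the end of the APPEND line;
LABEL Install_CentOS_7_2
  MENU LABEL CentOS 7.2
  KERNEL centos7/vmlinuz
  APPEND initrd=http://rhc-server.lab.tobyheywood.com/centos7/isolinux/initrd.img inst.repo=http://rhc-server.lab.tobyheywood.com/centos7 inst.geoloc=0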
Well, it turns out the RHEL/CentOS 7 installation process (and Fedora's, for that matter) tries to be clever and determine your geographical location. Now, in a sandboxed world that is going to be pretty challenging for the installer to accomplish, so it sits there, doing, to all intents and purposes, not a lot.
Adding the above mentioned parameter helps by simply disabling that functionality.
As with all things in life reading the manual is very useful. 🙂
In an isolated network, access to installation media can be essential; DNS and DHCP are pretty much standard in all environments (there are some exceptions) and all are pretty much mandatory in order to get your network up and running.
The ultimate aim of this series is to end up with a server which can be used to build more servers and/or clients into the lab network that I am setting up.
Before we can reach this goal, there are a few outstanding things to tackle;
Making our local repository readable from within our network (this article)
Setting up a TFTP server and confirm it works
Enable PXE booting functionality via DHCPd
Customising our installs using Kickstart
The above will then provide a basic but functional method of deploying more servers and clients into the lab environment across the network; it removes the need for monkeying around with ISO images and USB sticks (if you were to do similar in a real network) and, once tested, removes the human errors that can be introduced when manually installing an OS multiple times.
So what do we need
Apache (a.k.a. httpd)
Installing Apache
Given that I set up a local yum repository based on the installation media, it couldn’t be simpler.
[toby@rhc-server ~]$ sudo yum install httpd
Loaded plugins: fastestmirror
baselocal | 3.6 kB 00:00:00
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-17.el7.centos.1 will be installed
--> Processing Dependency: httpd-tools = 2.4.6-17.el7.centos.1 for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Running transaction check
---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
---> Package httpd-tools.x86_64 0:2.4.6-17.el7.centos.1 will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================
Installing:
httpd x86_64 2.4.6-17.el7.centos.1 baselocal 2.7 M
Installing for dependencies:
apr x86_64 1.4.8-3.el7 baselocal 103 k
apr-util x86_64 1.5.2-6.el7 baselocal 92 k
httpd-tools x86_64 2.4.6-17.el7.centos.1 baselocal 77 k
mailcap noarch 2.1.41-2.el7 baselocal 31 k
Transaction Summary
============================================================================================================================================
Install 1 Package (+4 Dependent packages)
Total download size: 3.0 M
Installed size: 10 M
Is this ok [y/d/N]: y
Downloading packages:
--------------------------------------------------------------------------------------------------------------------------------------------
Total 12 MB/s | 3.0 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : apr-1.4.8-3.el7.x86_64 1/5
Installing : apr-util-1.5.2-6.el7.x86_64 2/5
Installing : httpd-tools-2.4.6-17.el7.centos.1.x86_64 3/5
Installing : mailcap-2.1.41-2.el7.noarch 4/5
Installing : httpd-2.4.6-17.el7.centos.1.x86_64 5/5
Verifying : mailcap-2.1.41-2.el7.noarch 1/5
Verifying : httpd-2.4.6-17.el7.centos.1.x86_64 2/5
Verifying : apr-util-1.5.2-6.el7.x86_64 3/5
Verifying : apr-1.4.8-3.el7.x86_64 4/5
Verifying : httpd-tools-2.4.6-17.el7.centos.1.x86_64 5/5
Installed:
httpd.x86_64 0:2.4.6-17.el7.centos.1
Dependency Installed:
apr.x86_64 0:1.4.8-3.el7 apr-util.x86_64 0:1.5.2-6.el7 httpd-tools.x86_64 0:2.4.6-17.el7.centos.1 mailcap.noarch 0:2.1.41-2.el7
Complete!
Before you run for the hills screaming, just take a deep breath and embrace something that by default will make your server more secure, yes it does have a bit of a learning curve but doesn’t everything?
I’m a believer in using the tools available to ensure I end up with a secure and stable environment. SELinux is one of those things which for many years I avoided like the plague but to be honest, that was due to me not having had the time to properly understand what it does and how it does it.
After spending some time tinkering with it, it didn’t seem half as scary. For sure, it complicates things a little when you come to troubleshoot permission issues, but then everything is more contained.
Let's make sure that the directory containing the installation media, which is in a non-standard location (as far as Apache is concerned), has the correct SELinux permissions assigned to the folder structure. The easiest way is to copy the existing SELinux contexts from /var/www/html.
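The command itself has been lost along the way, but it was along these lines (assuming semanage is available and the media lives under /software/centos7);
sudo semanage fcontext -a -e /var/www/html /software/centos7
sudo restorecon -Rv /software/centos7
sudo ls -Zd /software/centos7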
As you can see, we now have the right context associated with the centos7 directory; now we need to make sure the httpd configuration is updated to present the centos7 directory and its contents to the outside world.
Rather than modifying the httpd.conf file itself, it is recommended that you create your own .conf files in /etc/httpd/conf.d/, and these will be loaded after the initial httpd.conf file. I created a single test file as follows;
[toby@rhc-server ~]$ cat /etc/httpd/conf.d/software.conf
Alias "/centos7" "/software/centos7"
<Directory /software/centos7>
Options +Indexes
Order allow,deny
Allow from all
Require all granted
</Directory>
Screenshot showing successful directory listing from client machine of centos7 media
The Alias allows you to point to a directory which is outside of Apache's DocumentRoot (typically set to /var/www/html). The Directory block contains two things of note. First, for testing I have added “Options +Indexes” so that when I try to connect from a web browser on my client machine, I can confirm that I can see the contents of the repository directory. The second chunk of config, starting “Order allow,deny…”, is there so that Apache will allow connections to this non-standard location.
One thing I did have to do, that I haven’t stated above is allow HTTP connections through the firewall.
This was accomplished by way of a simple one liner;
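Presumably something like the following (the original line has been lost; see the note below for making it permanent);
sudo firewall-cmd --zone public --add-service http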
Note. To make this new firewall rule permanent you need to use the “--permanent” firewall-cmd option on the command line. I added this afterwards, once I was happy that everything was working.
Configuring a yum .repo file to access the centralised software repository
This is very similar to the steps taken when I set up the local yum repository. The only difference this time will be that I'll give it a more meaningful name and the file location will be an http:// address rather than a file:///.
And then, as the saying goes, the proof is in the pudding;
[root@rhc-client yum.repos.d]# yum repolist
Loaded plugins: fastestmirror, langpacks
Repository 'th_lab_server' is missing name in configuration, using id
th_lab_server | 3.6 kB 00:00:00
(1/2): th_lab_server/group_gz | 157 kB 00:00:00
(2/2): th_lab_server/primary_db | 4.9 MB 00:00:00
Loading mirror speeds from cached hostfile
repo id repo name status
th_lab_server th_lab_server 8,465
repolist: 8,465
Oops, now it would appear I haven’t added a name parameter in the dot repo file. Let me correct that…
[root@rhc-client yum.repos.d]# cat CentOS-lab-Media.repo
[th_lab_server]
name="CentOS7 Media on rhc-server.lab.tobyheywood.com"
baseurl=http://rhc-server.lab.tobyheywood.com/centos7/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
And now, if I run the command “yum repolist” again, it should return the list of enabled repositories without complaining; oh, and the repo name column will also show my desired name for my network enabled repository (a shorter name may be better);
[root@rhc-client yum.repos.d]# yum repolist
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
repo id repo name status
th_lab_server "CentOS7 Media on rhc-server.lab.tobyheywood.com" 8,465
repolist: 8,465
And there we have it, a working centralised repository, if you don't have access to Red Hat Satellite server or if you don't want to install the open source version, Spacewalk.
I guess the final test, would be to install a couple of packages;
[root@rhc-client yum.repos.d]# yum install iostat
Loaded plugins: fastestmirror, langpacks
th_lab_server | 3.6 kB 00:00:00
Loading mirror speeds from cached hostfile
No package iostat available.
Error: Nothing to do
[root@rhc-client yum.repos.d]# yum whatprovides iostat
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
th_lab_server/filelists_db | 5.8 MB 00:00:00
sysstat-10.1.5-4.el7.x86_64 : Collection of performance monitoring tools for Linux
Repo : th_lab_server
Matched from:
Filename : /usr/bin/iostat
sysstat-10.1.5-4.el7.x86_64 : Collection of performance monitoring tools for Linux
Repo : @anaconda
Matched from:
Filename : /usr/bin/iostat
[root@rhc-client yum.repos.d]# yum install sysstat -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Package sysstat-10.1.5-4.el7.x86_64 already installed and latest version
Nothing to do
Ah, I guess a demo is never meant to go really smoothly, but this is probably better as it demonstrates some awesome functionality that yum has.
The output above shows that yum found my networked repo but that there is no package called iostat available in it; “yum whatprovides” then points me at the right package for the command I want to run, confirming across the network that the repo contains a suitable package called sysstat. And when I finally try to install sysstat, I'm told it's already installed.
The really keen-eyed among you may have also spotted the @anaconda repo; this should have rung an alarm bell in my head to say, hey! What are you doing? It's already installed!
Every now and then we find ourselves in a bit of a predicament. In this instance, whilst performing an upgrade, things just weren't going well and it appeared we had some corruption in the RPM database on one of our servers.
We were seeing segmentation faults when trying to use “rpm”.
The following page on the rpm.org website proved very useful in getting things back up and running swiftly;
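I won't reproduce the page here, but the widely documented recovery sequence for a corrupt Berkeley DB backed RPM database is roughly the following; back the files up first.
mkdir /root/rpmdb-backup
cp -a /var/lib/rpm/__db* /root/rpmdb-backup/
rm -f /var/lib/rpm/__db*
rpm --rebuilddb
rpm -qa | head   # quick sanity check that queries work again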