Back to basics – Setting up a local (internal) DNS server

Today’s “back to basics” post is all about setting up a local internal DNS server. Though there are a number of applications out there which will ease the process of setting up DNS and/or DHCP services, I am sticking with what would be used in the enterprise environment and also focusing on doing things manually.

Why manually? Well, it has been a while since I’ve had to configure named from scratch, so it’s a good way to remind myself how it all fits together. Plus, by knowing how the internals work (as far as the configuration is concerned), if I run into problems then I’m better placed to fix the issue sooner and with less searching.

So for DNS that would be (IMHO) BIND.  BIND has been around for a very long time and for those of you with an interest in its history, take a look at BIND – Wikipedia.  In CentOS and RHEL you have two options: the base BIND packages, or an additional package which configures BIND to run in a chroot jail.

Because I am only setting this up for the purposes of a lab, I will not be making use of the chroot version (though in all honesty it doesn’t require much additional effort) and will stick instead with the base package.  HOWEVER, if you are going to use BIND on a server which has a public-facing interface onto the big, bad Internet, then without doubt, MAKE SURE YOU USE THE CHROOT package too.  There is no reason not to, in my opinion!

Back to the lab installation


[root@rhc-server ~]# yum install bind -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package bind.x86_64 32:9.9.4-14.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

Package                     Arch                          Version                                   Repository                        Size
bind                        x86_64                        32:9.9.4-14.el7                           baselocal                        1.8 M

Transaction Summary
Install  1 Package

Total download size: 1.8 M
Installed size: 4.3 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 32:bind-9.9.4-14.el7.x86_64                                                                                              1/1
Verifying  : 32:bind-9.9.4-14.el7.x86_64                                                                                              1/1

bind.x86_64 32:9.9.4-14.el7



OK, so now we have it installed we need to create a zone file which will contain all of our required internal DNS zone information.  Zone files are stored in the /var/named/data directory (or, if using the chroot version of BIND, /var/named/chroot/var/named/data).  As you can see, currently there are no zone files in this location;


[root@rhc-server data]# pwd
[root@rhc-server data]# ls
[root@rhc-server data]#


For my internal DNS I’m setting up a zone called “” and the zone file looks more or less as follows;


[root@rhc-server data]# cat
$TTL 1d
@        IN    SOA (
2016022100    ; Serial
1h        ; Refresh
15m        ; Retry
10d        ; Expire (10 days should be enough in the lab)
1h )        ; Negative Cache
; Name Servers
IN    NS
; MX Records (Mail eXchange)
; CNAME (Canonical Name a.k.a. Aliases)
provision    IN    CNAME
; A Records (IPv4 addresses)
ns        IN    A
rhc-server    IN    A

; AAAA Records (IPv6 Records)
; – Not used yet but at a later stage in the lab


As you can probably spot, I am using the IPv4 address as the primary IP address for my server.
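A note on the serial in the SOA record: 2016022100 follows the common YYYYMMDDNN convention, the date plus a two-digit revision counter. The post doesn’t spell this out, so here is a quick sketch of generating one (BIND itself only requires that the serial increases):

```shell
# Build a zone serial from today's date plus a two-digit revision counter.
serial="$(date +%Y%m%d)00"
echo "$serial"
# Bumping the zone again later the same day just increments the counter:
echo "$((serial + 1))"
```

Whatever scheme you use, remember to bump the serial on every edit, otherwise secondaries will never pick up the change.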

The next step is to update the /etc/named.conf file so that it is listening on the right IP address and also to define the zone I have just created.

// named.conf
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
// See /usr/share/doc/bind*/sample/ for example named configuration files.

options {
listen-on port 53 {; };
//listen-on-v6 port 53 { ::1; };
directory     "/var/named";
dump-file     "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query     { any; };

dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;

/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";

managed-keys-directory "/var/named/dynamic";

pid-file "/run/named/";
session-keyfile "/run/named/session.key";
};

logging {
channel default_debug {
file "data/";
severity dynamic;
};
};

zone "." IN {
type hint;
file "";
};

zone "" IN {
type master;
file "data/";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";


To maintain a level of sanity, let’s just do a couple of checks to make sure everything is as it should be.


[root@rhc-server etc]# named-checkzone /var/named/data/
zone loaded serial 2016022100
[root@rhc-server etc]# named-checkconf
[root@rhc-server etc]# echo $?
[root@rhc-server etc]#


And now it’s time to enable the service, start it and then test that I have got everything right.  As part of this step I have also installed bind-utils so that I can confirm the zone is active by querying the name server;


[root@rhc-server etc]# systemctl enable named
ln -s '/usr/lib/systemd/system/named.service' '/etc/systemd/system/'
[root@rhc-server etc]# systemctl start named.service
[root@rhc-server etc]# yum install bind-utils -y
[root@rhc-server etc]# ping -c 5
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.021 ms
64 bytes from icmp_seq=2 ttl=64 time=0.093 ms
64 bytes from icmp_seq=3 ttl=64 time=0.091 ms
64 bytes from icmp_seq=4 ttl=64 time=0.053 ms
64 bytes from icmp_seq=5 ttl=64 time=0.093 ms

--- ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.021/0.070/0.093/0.029 ms
[root@rhc-server etc]# dig

; <<>> DiG 9.9.4-RedHat-9.9.4-14.el7 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56872
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 4096
;        IN    A

;; ANSWER SECTION:    86400    IN    A

;; AUTHORITY SECTION:    86400    IN    NS

;; Query time: 0 msec
;; WHEN: Sun Feb 21 18:50:48 GMT 2016
;; MSG SIZE  rcvd: 81


The only thing that I had done ahead of time was to make sure that my /etc/resolv.conf file was updated to reflect the correct search and nameserver parameters.  So all in all looking good.
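For completeness, the resolv.conf in question only needs those two directives. A hypothetical example (the search domain and nameserver address are placeholders, not the post’s elided values), written to a scratch file here rather than the real /etc/resolv.conf:

```shell
# Write an illustrative resolv.conf to a scratch location; the search
# domain and nameserver address below are placeholders only.
tmpfile=$(mktemp)
cat <<'EOF' > "$tmpfile"
search lab.example.com
nameserver 192.0.2.53
EOF
# There should be exactly one nameserver entry in this example:
grep -c '^nameserver' "$tmpfile"
```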

Till next time.


Err, well.  That’s embarrassing!  Although the above post works, some things will not.  More specifically, if you try to perform a reverse lookup it will fail miserably.  So to complete the picture, you can see what I did with regard to the reverse zone here.
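As a taster of what the reverse side looks like, a reverse zone maps addresses back to names with PTR records under in-addr.arpa. A minimal hypothetical sketch, using documentation-range placeholders since this post’s real domain and network were elided:

```shell
# Hypothetical reverse zone file for a 192.0.2.0/24 lab network
# (would be the zone 2.0.192.in-addr.arpa); all names are placeholders.
cat <<'EOF'
$TTL 1d
@    IN  SOA  ns.lab.example.com. admin.lab.example.com. (
     2016022100  ; Serial
     1h          ; Refresh
     15m         ; Retry
     10d         ; Expire
     1h )        ; Negative Cache
     IN  NS   ns.lab.example.com.
53   IN  PTR  ns.lab.example.com.
EOF
```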

Network bonding vs teaming in Linux

The terms bonding and teaming are quite often used interchangeably, especially when wearing both Linux and Microsoft hats.  And that term is further confused once you start wearing your network admin hat as well.

However, in the world of Linux (in my case Fedora/CentOS/RHEL) there are distinct differences between the terms.  One is the future of network interface collaboration, whilst the other, though still very much capable, lacks some of the features people may find useful going forward.

And to quote the Red Hat Enterprise Linux 7 Networking Guide;

“The combining or aggregating together of network links in order to provide a logical link with higher throughput, or to provide redundancy, is known by many names such as channel bonding, Ethernet bonding, port trunking, channel teaming, NIC teaming, link aggregation, and so on. This concept as originally implemented in the Linux kernel is widely referred to as bonding. The term Network Teaming has been chosen to refer to this new implementation of the concept. The existing bonding driver is unaffected, Network Teaming is offered as an alternative and does not replace bonding in Red Hat Enterprise Linux 7. ”

So why should you consider adopting teaming rather than the bonding method?

If you are looking for a complete comparison from the horse’s mouth, then may I suggest the Red Hat Enterprise Linux Networking Guide: Comparison of Network Teaming to Bonding.

The key reasons why you might want to use teaming rather than bonding are;

  • Teaming has a small kernel module which implements fast handling of packets flowing through your teamed interfaces
  • Support for IPv6 (NS/NA) link monitoring
  • Capable of working with D-Bus and Unix Domain Sockets (the default)
  • It provides an extensible and scalable solution for your teaming requirements
  • Load balancing with LACP support
  • It makes use of NetworkManager and its associated tools (the modern way) to manage your network connections
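To illustrate the teamd side of the points above: teamd takes its runner configuration as JSON, which nmcli passes straight through. A hypothetical activebackup config (the runner and link_watch names are standard teamd options, but this exact config and the interface names are illustrative, not from this post) can be sanity-checked before applying it:

```shell
# Hypothetical teamd config; validate the JSON before handing it to nmcli.
config='{"runner": {"name": "activebackup"}, "link_watch": {"name": "ethtool"}}'
echo "$config" | python3 -m json.tool > /dev/null && echo "valid JSON"
# With NetworkManager it would then be applied along these lines (not run here):
#   nmcli con add type team con-name team0 ifname team0 config "$config"
```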

For me, I see teamd as ultimately the replacement for bonding in the coming years.  Especially given that you can make use of tools such as nmcli, which means that automation and repeatable configuration become much simpler, as the CLI tools remove a lot of the checking and verification steps you would otherwise have to build into your own scripts to do the same job in a more manual fashion.

Obviously, NetworkManager has been around for a while so standard configurations could be applied using the likes of nmcli, but for anything more involved (see above reasons why you might consider teamd) then it becomes a no-brainer!

Anyway, I would highly recommend reading the Red Hat Enterprise Linux 7 documentation, there are some amazing nuggets of information in there.

Enabling GNS3 to talk to its host and beyond

I’m currently working my way through a CCNA text book and reached a point where I need to be able to perform some tasks which rely on connecting the virtual network environment inside of GNS3 to the host machine, for the purpose of connecting to a tftp service (just in case you were curious).

After a little googling it became apparent that this is indeed possible, though most of the guides focused on using GNS3 on a Windows machine.  Whereas I am very much a Linux guy.

So as a reminder to myself, but also as a helpful reference for others, here is what I had to do on my Fedora 22 machine.

The first way was using standard tools in Linux, the second I made sure I was able to create the same setup using Network Manager (again to make sure that I am utilising the latest tools for the job).

Standard method (from the command line)

$ sudo dnf install tunctl
$ sudo tunctl -t tap0 -u toby
$ sudo ifconfig tap0 netmask up
$ sudo firewall-cmd --zone=FedoraWorkstation --add-interface=tap0 --permanent


Using Network Manager (from the command line)

$ sudo ip tuntap add dev tap1 mode tap user toby
$ sudo ip addr add dev tap1
$ sudo ip link set tap1 up
$ sudo firewall-cmd --zone=FedoraWorkstation --add-interface=tap1
$ sudo ip addr show tap1
11: tap1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 500
link/ether 26:2b:e4:a0:54:ba brd ff:ff:ff:ff:ff:ff
inet scope global tap1
valid_lft forever preferred_lft forever
inet6 fe80::242b:e4ff:fea0:54ba/64 scope link
valid_lft forever preferred_lft forever

Configuring the interface on the Cisco router inside of GNS3

router1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
router1(config)#int f0/0
router1(config-if)#ip address
router1(config-if)#no shut
router1(config-if)#end
router1#write mem

The bit of config inside GNS3

I nearly forgot to write this section.  Doh!  Anyway, it’s lucky for everyone that I remembered, so without any further padding… seriously, no more padding… the config in GNS3…

Select Cloud from the side panel in GNS3.

Next we need to configure the cloud (hint: right-click the cloud and select Configure).

Select TAP and then type in the name of the tap device created, in my case tap1.

The final step is to draw the virtual connection between the cloud and the router (making sure to map it to the correct interface).

At this point we should be in a happy place.

Proof that it works

# ping -c 5
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.103 ms
64 bytes from icmp_seq=2 ttl=64 time=0.096 ms
64 bytes from icmp_seq=3 ttl=64 time=0.050 ms
64 bytes from icmp_seq=4 ttl=64 time=0.058 ms
64 bytes from icmp_seq=5 ttl=64 time=0.103 ms

--- ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.050/0.082/0.103/0.023 ms
# ping -c 5
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=255 time=9.16 ms
64 bytes from icmp_seq=2 ttl=255 time=5.64 ms
64 bytes from icmp_seq=3 ttl=255 time=11.2 ms
64 bytes from icmp_seq=4 ttl=255 time=7.29 ms
64 bytes from icmp_seq=5 ttl=255 time=2.98 ms

--- ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 2.980/7.266/11.253/2.847 ms


Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:
Success rate is 80 percent (4/5), round-trip min/avg/max = 4/9/12 ms

I do believe that about covers it.

Firewalld: firewall-cmd example to drop packets from specific ip

Today I spotted some attempts to perform a zone transfer from one of the DNS servers I manage.  Given this is on CentOS 7, and therefore using Firewalld by default, I had a quick read of the documentation regarding how best to drop these attempts.

Here we go;

firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="x.x.x.x" service name="dns" drop'

And that was all that was required.  Note that single quotes are used to contain the entire string.
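Since the source address above is masked, here is the same rule built up in a shell variable with a hypothetical address (192.0.2.10 is a documentation-range placeholder), which also keeps the inner double quotes intact per the note above:

```shell
# Hypothetical offending source address (documentation-range placeholder).
bad_ip="192.0.2.10"
# Build the rich rule as a string so the inner double quotes survive.
rule="rule family=\"ipv4\" source address=\"${bad_ip}\" service name=\"dns\" drop"
echo "$rule"
# Applying it would then be (requires firewalld, not run here):
#   firewall-cmd --zone=public --add-rich-rule="$rule"
```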

Should you need some bed time reading, then I would highly recommend reading the following;

Chroot your DNS

I am in the process of setting up a new DNS service and it had been such a long time since I previously set up DNS with chroot that I needed some assistance.

Credit, where it is due…

Also, when setting up a DNS service make sure you add an appropriate rule to your firewall.

# firewall-cmd --zone=public --add-port=53/tcp
# firewall-cmd --zone=public --add-port=53/udp

Job done

Getting started with firewalld

Change is one of those things that, without realising, can begin to get the better of you.  I realised this after I finally decided to take a look at firewalld.  The thought of having to learn a new way to do things when I was already happy with iptables meant that I kept putting it off.

Once you master the basic command options, you are away and it is for the most part self-explanatory.

So here are a few links to get you started;

Online LUN expansion (Step-by-Step)

As with many things in life, it is easy to outgrow the environment you find yourself in.  When looking at LUNs and using LVM we can easily accommodate resizing of the back-end storage and carrying this through to the volume presented to your RHEL server.

Note. The following details presenting a brand new LUN to the server rather than trying to expand the existing underlying LUN, as I feel this is a safer option.

The following provides a rough guide to the steps required;

  • Create new LUN and export to server
  • Configure multipathing
  • Create partition and set it to use LVM
    • parted /dev/mapper/new_lun
    • parted> mklabel gpt
    • parted> mkpart new_name ext4 0% 100%
    • parted> set 1 lvm on
    • parted> q
  • Run pvcreate on raw device file /dev/mapper/whateverp1
  • Run vgextend vol_group pv_dev
  • Run pvmove old_pv_dev new_pv_dev (this step will take a long time if the LUN is huge)
  • Run vgreduce vg_name old_pv_dev
  • pvremove old_pv_dev
  • Run lvextend -l +100%FREE /dev/vol_group/logical_vol
  • Run resize2fs /dev/vol_group/logical_vol

If you now run `df -h` you should see the file system has grown to the size of the new LUN.
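The steps above can be sketched as a script. This is a dry-run outline with placeholder device, volume group and logical volume names (they are illustrative, not real paths); it echoes each command rather than running it, since the real thing requires root and the actual LUNs:

```shell
#!/bin/sh
# Dry-run sketch of the LUN migration steps; all device/VG/LV names are
# placeholders.  Clear DRYRUN to execute for real (root + LVM required).
DRYRUN="echo"
NEW_PV=/dev/mapper/new_lunp1   # LVM partition created with parted above
OLD_PV=/dev/mapper/old_lunp1
VG=vol_group
LV=logical_vol

$DRYRUN pvcreate "$NEW_PV"
$DRYRUN vgextend "$VG" "$NEW_PV"
$DRYRUN pvmove "$OLD_PV" "$NEW_PV"          # slow on large LUNs
$DRYRUN vgreduce "$VG" "$OLD_PV"
$DRYRUN pvremove "$OLD_PV"
$DRYRUN lvextend -l +100%FREE "/dev/$VG/$LV"
$DRYRUN resize2fs "/dev/$VG/$LV"
```

With DRYRUN set to echo, the script simply prints the commands it would run, which is a handy way to eyeball the sequence before committing to it.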