iSCSI and Jumbo Frames

Posted in: Arista, CentOS 7, Cisco, Fedora, GNS3, iSCSI, Juniper, Linux, Microsoft, Networking, Networks, NFS, Pure Storage, RHEL 6, RHEL 7, Storage, System Administration, Windows 2008 R2

I’ve recently been working on a project to deploy a couple of Pure Storage FlashArray //M10s, and rather than using Fibre Channel we opted for 10Gb Ethernet (admittedly for reasons of cost), with iSCSI as the transport mechanism.

Whenever you read up on iSCSI (and NFS for that matter) there inevitably ends up being a discussion around the MTU size.  My thinking here is that if your network has sufficient bandwidth to handle jumbo frames and large MTU sizes, then it should be done.

Now I’m not going to ramble on about enabling Jumbo Frames exactly, but I am going to focus on the MTU size.

What is MTU?

MTU stands for Maximum Transmission Unit.  It defines the maximum size of a network frame that you can send in a single data transmission across the network.  The default MTU size is 1500 bytes.  Whether that be Red Hat Enterprise Linux, Fedora, Slackware, Ubuntu, Microsoft Windows (pick a version), Cisco IOS or Juniper’s JunOS, it has in my experience always been 1500 (though that’s not to say that some specialist providers may not change this default value for black box solutions).
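You can check what MTU an interface is currently using before changing anything. A quick sketch on Linux (the loopback device lo is used here so it works anywhere; substitute your real NIC name, e.g. eth0, when checking a production box):

```shell
# Every Linux network interface exposes its current MTU in sysfs;
# "lo" (loopback) is used so the example runs on any machine.
cat /sys/class/net/lo/mtu

# The equivalent check for a real NIC would be, for example:
#   cat /sys/class/net/eth0/mtu
#   ip link show eth0    # the MTU appears in the first line of output
```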

So what is a Jumbo Frame?

The internet is pretty much unified on the idea that any packet or frame above the 1500 byte default can be considered a jumbo frame.  Typically you would enable this for specific needs such as NFS and iSCSI, where the available bandwidth is at least 1Gbps, or better still 10Gbps.

MTU sizing

A lot of what I read in the early days about this topic suggests that you should set the MTU to 9000 bytes, so what should you be mindful of when doing so?

Well, let’s take an example: you have a requirement to enable jumbo frames and you have set an MTU size of 9000 across your entire environment;

  • virtual machine interfaces
  • physical network interfaces
  • fabric interconnects
  • and core switches

So you enable an MTU of 9000 everywhere, and you then test your shiny new jumbo-frame-enabled network by way of a large ping;

Linux

$ ping -s 9000 -M do 192.168.1.1

Windows

> ping -l 9000 -f -t 192.168.1.1

Both of the above perform the same job.  They will attempt to send an ICMP ping;

  • To our chosen destination – 192.168.1.1
  • With a packet size of 9000 bytes (option -l 9000 on Windows or -s 9000 on Linux); remember the default MTU is 1500, so this is definitely a jumbo packet
  • Where the request is not fragmented, thus ensuring that a packet of such a size can actually reach the intended destination without being reduced

The key to the above examples is the “-f” (Windows) and “-M do” (Linux) options.  These enforce the requirement that the packet is sent from your server/workstation to its intended destination without being fragmented along the way (which would negate the whole point of using jumbo frames).

If you do not receive a normal ping response back reporting the jumbo size you sent, then something is not configured correctly.

The error might look like the following;

ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500

The above error highlights the fact that we are attempting to send a packet bigger than the local NIC is configured to handle; it tells us the local MTU is set at 1500 bytes.  In this instance we would need to reconfigure our network card to handle jumbo sized packets.
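As a sketch of that reconfiguration on Linux (the interface name eth0 is a placeholder; note the change is lost at reboot unless you also persist it in your distribution’s network configuration, e.g. MTU=9000 in the RHEL ifcfg file for the interface):

```shell
# Raise the NIC MTU so it can carry jumbo frames
# ("eth0" is a placeholder interface name -- use your own):
sudo ip link set dev eth0 mtu 9000

# Confirm the new value took effect:
cat /sys/class/net/eth0/mtu
```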

Now let’s take a look at what happens with the ICMP ping request and its size.  As a test I have pinged the localhost interface on my machine and I get the following;

[toby@testbox ~]$ ping -s 9000 -M do localhost
PING localhost(localhost (::1)) 9000 data bytes
9008 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.142 ms
9008 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.148 ms
9008 bytes from localhost (::1): icmp_seq=3 ttl=64 time=0.145 ms
^C
--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2085ms
rtt min/avg/max/mdev = 0.142/0.145/0.148/0.002 ms

Firstly, notice the size of each request.  The initial request may have been 9000 bytes, however that doesn’t take into account the headers which are added to the packet so that it can be correctly sent over your network or the Internet.  Secondly, notice that the packet was received without any fragmentation (note I used the “-M do” option to ensure fragmentation couldn’t take place).  In this instance the loopback interface is configured with a massive MTU of 65536 bytes and so it all worked swimmingly.

Note that the reported packet size is actually 9008 bytes.  The size increased by 8 bytes due to the addition of the ICMP header mentioned above, making the total 9008 bytes; on the wire, the IPv4 header adds a further 20 bytes on top of that.

My example above stated that the MTU had been set to 9000 on ALL devices.  In this instance the packets will never get to their intended destination without being fragmented, as 9008 bytes (9028 once the IP header is included) is bigger than 9000 bytes (stating the obvious, I know).
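To put numbers on it: for the ping itself to fit through a 9000-byte MTU unfragmented, the payload has to leave room for the 20-byte IPv4 header and the 8-byte ICMP header. A quick back-of-the-envelope check:

```shell
# Largest ICMP payload that fits a 9000-byte MTU without fragmenting:
mtu=9000
ip_header=20     # IPv4 header (no options)
icmp_header=8    # ICMP echo header
payload=$((mtu - ip_header - icmp_header))
echo "$payload"  # prints 8972
```

So `ping -s 8972 -M do <host>` is the largest payload that should pass cleanly along a 9000-byte path, while `-s 9000` is guaranteed to need fragmenting.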

The resolution

The intermediary devices (routers, bridges, switches and firewalls) will need an MTU bigger than 9000, sized sufficiently to accept the desired packet size.  A standard Ethernet frame (according to Cisco) requires an additional 18 bytes on top of the 9000 for the payload, and it would be wise to specify a bit higher still.  An MTU size of 9216 bytes would be better, as it allows enough headroom for everything to pass through nicely.
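The same arithmetic shows why 9216 is comfortable: even with the 18 bytes of Ethernet framing Cisco quotes on top of a 9000-byte IP packet, you are well inside the limit. A quick sanity check:

```shell
# A 9000-byte IP packet plus standard Ethernet framing still fits
# easily inside a 9216-byte switch MTU:
ip_packet=9000
eth_overhead=18   # Ethernet header + FCS, per Cisco
frame=$((ip_packet + eth_overhead))
echo "$frame bytes on the wire"                 # 9018 bytes
echo "$((9216 - frame)) bytes of headroom left" # 198 bytes
```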

Focusing on the available options in a Windows world

And here is the real reason for this post.  Microsoft, in all their wisdom, provide you with a drop-down box to select a predefined MTU size for your NICs.  With Windows 2012 R2 (possibly slightly earlier versions too), the nearest size you can set via the network card configuration GUI is 9014.  This would result in the packet being fragmented, or in the case of iSCSI it could result in very poor performance.  An MTU of 9014 isn’t going to work if the rest of the network or the destination device is set at 9000.

The lesson here is to make sure that both source and destination machines have an MTU of equal size, and that anything in between must be able to support a higher MTU than 9000.  And given that Microsoft have hardcoded the GUI with a specific set of options, you will probably want to configure your environment to handle this slightly higher size.

Note: 1Gbps Ethernet hardware often only supports a maximum MTU of 9000, so although jumbo frames can be enabled you may need to reduce the MTU slightly on the source and destination servers, with everything in between set at 9000.

Featured image credit: TaylorHerring.  As bike frames go, the Penny Farthing could well be considered to have a jumbo frame.

Enabling GNS3 to talk to its host and beyond

Posted in: Fedora, GNS3, Linux, Networking, Networks, RHEL 7, System Administration

I’m currently working my way through a CCNA text book and have reached a point where I need to be able to perform some tasks which rely on connecting the virtual network environment inside GNS3 to the host machine, for the purpose of connecting to a TFTP service (just in case you were curious).

After a little googling it became apparent that this is indeed possible, though most of the guides focused on using GNS3 on a Windows machine.  Whereas I am very much a Linux guy.

So, as a reminder to myself but also as a helpful reference for others, here is what I had to do on my Fedora 22 machine.

The first way uses the standard tools in Linux; for the second I made sure I could create the same setup using the newer iproute2 tooling (again, to make sure that I am utilising the latest tools for the job).

Standard method (from the command line)

$ sudo dnf install tunctl
$ sudo tunctl -t tap0 -u toby
$ sudo ifconfig tap0 10.0.1.10 netmask 255.255.255.0 up
$ sudo firewall-cmd --zone=FedoraWorkstation --add-interface=tap0 --permanent
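Before moving on it’s worth sanity-checking the device, and knowing how to undo it. A quick sketch (this assumes the tap0 device created above):

```shell
# Confirm tap0 exists, is up, and carries the expected address:
ip addr show tap0

# When you no longer need it, tear the device down again:
sudo tunctl -d tap0
```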


Using iproute2 (from the command line)

$ sudo ip tuntap add dev tap1 mode tap user toby
$ sudo ip addr add 10.0.0.10/24 dev tap1
$ sudo ip link set tap1 up
$ sudo firewall-cmd --zone=FedoraWorkstation --add-interface=tap1
$ sudo ip addr show tap1
11: tap1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 500
    link/ether 26:2b:e4:a0:54:ba brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.10/24 scope global tap1
       valid_lft forever preferred_lft forever
    inet6 fe80::242b:e4ff:fea0:54ba/64 scope link
       valid_lft forever preferred_lft forever
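If you would rather have NetworkManager itself own the interface (so it is recreated across reboots), nmcli can build the same tap device. This is a sketch; the connection name gns3-tap1 is just an example, and the user and address should be adjusted to match your setup:

```shell
# Create a persistent tap device via NetworkManager (nmcli).
# "gns3-tap1" is an arbitrary connection name (an example, not required).
nmcli connection add type tun ifname tap1 con-name gns3-tap1 \
    mode tap owner "$(id -u toby)" ip4 10.0.0.10/24
nmcli connection up gns3-tap1
```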

Configuring the interface on the Cisco router inside of GNS3

router1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
router1(config)#int f0/0
router1(config-if)#ip address 10.0.0.1 255.255.255.0
router1(config-if)#no shut
router1(config-if)#end
router1#write mem

The bit of config inside GNS3

I nearly forgot to write this section.  Doh!  Anyway, it’s lucky for everyone that I remembered, so without any further padding… seriously, no more padding… the config in GNS3…

GNS3_Side_Panel
Select Cloud from the side panel in GNS3.

Next we need to configure the cloud… (hint: right-click the cloud and select Configure).

GNS3_tap1_configuration_screen
Select TAP and then type in the name of the tap device created, in my case tap1.

The final step is to draw the virtual connection between the cloud and the router (making sure to map it to the correct interface).

At this point we should be in a happy place.

Proof that it works

# ping -c 5 10.0.0.10
PING 10.0.0.10 (10.0.0.10) 56(84) bytes of data.
64 bytes from 10.0.0.10: icmp_seq=1 ttl=64 time=0.103 ms
64 bytes from 10.0.0.10: icmp_seq=2 ttl=64 time=0.096 ms
64 bytes from 10.0.0.10: icmp_seq=3 ttl=64 time=0.050 ms
64 bytes from 10.0.0.10: icmp_seq=4 ttl=64 time=0.058 ms
64 bytes from 10.0.0.10: icmp_seq=5 ttl=64 time=0.103 ms

--- 10.0.0.10 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.050/0.082/0.103/0.023 ms

# ping -c 5 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=255 time=9.16 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=255 time=5.64 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=255 time=11.2 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=255 time=7.29 ms
64 bytes from 10.0.0.1: icmp_seq=5 ttl=255 time=2.98 ms

--- 10.0.0.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 2.980/7.266/11.253/2.847 ms

router1#ping 10.0.0.10

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.10, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 4/9/12 ms

I do believe that about covers it.

GNS3 1.3.8 – The Complete installation guide for Fedora 22

Posted in: GNS3, Networks

Following on from previous posts about getting GNS3 up and running, here is the next installment.  In the future I plan to release a shell script which will automate all of the steps below, and therefore simplify things greatly for my fellow Fedora users.

Get the required packages installed ahead of time.

sudo dnf install python3-setuptools python3-devel python3-sip.i686 python3-sip.x86_64 python3-PyQt4.i686 python3-PyQt4.x86_64 python3-PyQt4-devel.i686 python3-net* gcc gcc-c++ elfutils-libelf-devel libuuid-devel cmake flex bison glibc-devel iniparser-devel

Let’s start in alphabetical order;

Install dynamips – 0.2.14

The process to build remains the same as in my previous posts;

$ cd .../dynamips_extracted_folder
$ mkdir build
$ cd build
$ cmake ..
$ make
$ sudo make install

You should find that you get similar output to what is displayed below;

Install the project...
-- Install configuration: ""
-- Up-to-date: /usr/local/share/doc/dynamips/ChangeLog
-- Up-to-date: /usr/local/share/doc/dynamips/COPYING
-- Up-to-date: /usr/local/share/doc/dynamips/MAINTAINERS
-- Up-to-date: /usr/local/share/doc/dynamips/README
-- Up-to-date: /usr/local/share/doc/dynamips/README.hypervisor
-- Up-to-date: /usr/local/share/doc/dynamips/RELEASE-NOTES
-- Up-to-date: /usr/local/share/doc/dynamips/TODO
-- Up-to-date: /usr/local/share/man/man1/dynamips.1
-- Up-to-date: /usr/local/share/man/man1/nvram_export.1
-- Up-to-date: /usr/local/share/man/man7/hypervisor_mode.7
-- Installing: /usr/local/bin/nvram_export
-- Installing: /usr/local/bin/dynamips

Install gns3-gui – 1.3.9

The GUI is one of the simpler packages to build and install, requiring only a single command once you are in the right directory.

$ cd ../../gns3-gui-1.3.9/
$ sudo python3 setup.py install

At the end of the installation process you should see something similar to;

Finished processing dependencies for gns3-gui==1.3.9

Install gns3-server – 1.3.9

Next step, let’s get the server binaries built so that we can actually start to use the GNS3 GUI.

$ cd ../gns3-server-1.3.9/
$ sudo python3 setup.py install

Once the install has completed you should see a line similar to;

Finished processing dependencies for gns3-server==1.3.9

Install iouyap – 0.95

Now, for this particular package I struggled to get things working (see my other post, GNS3 – Problems Compiling IOUYAP, if you want to experience the same pain that I went through; and if you want to tell me what I could have done to rectify things, that would be awesome too).

Anyway, what I ended up doing was checking out a copy of the current code from the Git repository.

[toby@thebay GNS3-1.3.9]$ git clone https://github.com/GNS3/iouyap.git
Cloning into 'iouyap'...
remote: Counting objects: 78, done.
remote: Total 78 (delta 0), reused 0 (delta 0), pack-reused 78
Unpacking objects: 100% (78/78), done.
Checking connectivity... done.
[toby@thebay GNS3-1.3.9]$ ls
dynamips-0.2.14      gns3-gui-1.3.9      gns3-server-1.3.9      iouyap       iouyap-0.95.zip  ubridge-0.9.0.zip  vpcs-0.6.1.zip
dynamips-0.2.14.zip  gns3-gui-1.3.9.zip  gns3-server-1.3.9.zip  iouyap-0.95  ubridge-0.9.0    vpcs-0.6.1
[toby@thebay GNS3-1.3.9]$ cd iouyap
[toby@thebay iouyap]$ ls
config.c  dictionary.h  iouyap.c  iouyap.ini  Makefile  netmap.c  netmap_parse.y  README.rst
config.h  iniparser.h   iouyap.h  LICENSE     NETMAP    netmap.h  netmap_scan.l
[toby@thebay iouyap]$ make
gcc  -g -DDEBUG -Wall   -c -o iouyap.o iouyap.c
bison -y -d netmap_parse.y
mv -f y.tab.c netmap_parse.c
gcc  -g -DDEBUG -Wall   -c -o netmap_parse.o netmap_parse.c
flex  -t netmap_scan.l > netmap_scan.c
gcc  -g -DDEBUG -Wall   -c -o netmap_scan.o netmap_scan.c
gcc  -g -DDEBUG -Wall   -c -o netmap.o netmap.c
gcc  -g -DDEBUG -Wall   -c -o config.o config.c
gcc    iouyap.o netmap_parse.o netmap_scan.o netmap.o config.o  -liniparser -lpthread -o iouyap
rm netmap_scan.c netmap_parse.c
[toby@thebay iouyap]$ sudo make install
[sudo] password for toby:
chmod +x iouyap
sudo cp iouyap /usr/local/bin
sudo setcap cap_net_admin,cap_net_raw=ep iouyap

As you can see from the above output this worked for me so hopefully it will also work for you.

Installing VPCS

This also proved to be a pain.  I’m guessing the issue is that the source code has not been updated since 2014 and the OS has moved on since then, such that the shared library files it was trying to link against no longer exist.

It turns out the simple solution for this one is to download the VPCS ready-made binary from the website; http://sourceforge.net/projects/vpcs/.

Save the downloaded executable and make sure to run chmod +x against the file, otherwise you will have problems launching it.

[toby@thebay Downloads]$ chmod +x vpcs_0.6_Linux64
[toby@thebay Downloads]$ ./vpcs_0.6_Linux64

Welcome to Virtual PC Simulator, version 0.6
Dedicated to Daling.
Build time: Nov 21 2014 08:28:12
Copyright (c) 2007-2014, Paul Meng (mirnshi@gmail.com)
All rights reserved.

VPCS is free software, distributed under the terms of the "BSD" licence.
Source code and license can be found at vpcs.sf.net.
For more information, please visit wiki.freecode.com.cn.

Press '?' to get help.

VPCS[1]

Last on the list is ubridge

For the time being, as it is very much under construction and its primary use relates to interacting with VMware, I am going to skip this part.  Sorry, but everything above, short of the required Cisco images, should get you up and running.

I based this decision on this post; https://vanity-gns3.jiveon.com/thread/8540