Never Ending Security

It starts all here

Proxmox as a home virtualization solution

For many years now I’ve been using VirtualBox. In fact, I’ve been using it for so long that it was still a Sun Microsystems product when I started. It is incredibly easy to get started with: you can have a working virtualization environment on top of Ubuntu Linux in minutes. As a platform for experimentation and development, it is very difficult to beat. It is nominally open source, but most of the features that make it a modern virtualization platform are closed source. As far as I am concerned it is a closed-source platform that happens to be free for my personal use.

I’ve never really been happy with VirtualBox as a host for virtual machines that are in some way critical to infrastructure. I do a lot of tinkering with things, but once I am satisfied with a solution I’d prefer to never touch it again. The ease of use that comes with the graphical user interface contrasts starkly with the command-line VBoxManage tool, through which literally everything is available. My usual usage pattern involves creating a linked clone of an existing machine, changing the NIC’s MAC address and then customizing the machine for the purpose at hand. I can do all this with the GUI and then use VBoxManage startvm 'somevm' --type headless to start it from an SSH session. The actual GUI is perfectly usable through X11 forwarded via SSH.

The real thing that pushed me away from VirtualBox as a production environment is that on several occasions I’ve had multiple virtual machines simply abort with no explanation. There are no logs of any kind to indicate a problem. The worst part is that when I restarted them they just worked, with no appearance of ever having been broken. So I have been searching for a replacement for a while. My requirements are straightforward.

1. Installable without large amounts of effort on my part
2. Installable on a single physical piece of consumer-grade hardware
3. Have a GUI interface that is usable remotely
4. Have a command line interface that is usable remotely
5. Support guest templates
6. Allow for redundant storage of at least the virtual machine images
7. Zero-cost

There are many ways these problems could be solved. I could probably come up with some scripts usable on any Linux KVM host to do what I need, but I am actively trying to avoid reinventing the wheel. There are tons of great solutions for open-source virtualization out there. The biggest problem is that most of them aim to solve the problem of virtualizing hundreds of servers over tens of pieces of physical hardware. For my own personal usage I really don’t need or want a full rack of equipment to act as a virtualization host. I played around with OpenNebula for a while. It is possible to get it running on a single piece of hardware, but the setup is quite involved. The other thing I really need is the ability to use software RAID of some kind. High-quality RAID controllers are prohibitively expensive, and cheap RAID controllers are usually worse than Linux’s native mdadm support. I’ve been using mdadm in mirrored mode for years and never once had it cause me a problem. This is actually an unusual requirement: most enterprise virtualization products just assume you are going to spend money on something like a SAN.

Proxmox is an attractive solution because it is a Linux distribution designed for virtualization that is still just a basic Debian machine underneath. If it is easy enough to get running, I should be able to customize it to fit my needs. I downloaded Proxmox VE 3.2.

Installation

Installation of Proxmox is done through a Linux live CD. By default you’ll get a system using the ext3 filesystem, but if you type linux ext4 at the first prompt the installed system uses ext4 instead. After that you’ll have to accept the license agreement. In the next few screens you configure the root user, the time zone, and country. The installer gets an address from the local DHCP server if available and then prompts you to accept it. This is a little strange because it actually statically configures the network interface to use this IP address, which could cause problems in some environments. Just make sure you put an IP address in the configuration screen that is outside of your DHCP pool. If you have multiple hard drives Proxmox asks you to select a single one for installation. After that installation is automatic.

The Web Interface

After installation you can jump directly into the web interface. The web interface for Proxmox runs by default on port 8006 serving HTTPS. I’m not really sure how this decision was made. The process is called pveproxy and there is no immediately obvious way to reconfigure it. You can access it directly using the IP address of the box and specifying the HTTPS protocol, such as https://192.168.1.1:8006/. However, most browsers are not thrilled with HTTPS running on non-standard ports. Chrome on Ubuntu 14.04 was not suitable for using this interface: the console of each VM is accessed using a Java-based VNC client, which Chrome did not like. It works very well with Firefox however.

You’ll be prompted for a username and password. Use root and the password you entered during installation. There is a nag screen reminding you that you aren’t subscribed each time you log in.

HTTPS support using nginx

It is much simpler to just install nginx to handle the HTTPS duties. This is strictly optional. The web interface uses WebSockets to support VNC, and the version of nginx in the standard repositories is too old to support them. A newer version is available from the Debian wheezy backports.

To enable the backports add the following line to /etc/apt/sources.list

deb http://ftp.debian.org/debian wheezy-backports main contrib non-free


Adding the repository just makes the packages available. To prefer them for installation you’ll need to pin them. Create the file /etc/apt/preferences.d/nginx-backports and give it the following content.

Package: nginx*
Pin: release n=wheezy-backports
Pin-Priority: 900


Now you can install nginx with aptitude install nginx. You should get a 1.6.x version from the backports repository. Check this by doing the following.

# nginx -v
nginx version: nginx/1.6.2


Once nginx is installed you’ll need to configure it to act as a proxy to the pveproxy process running on the machine. I created the file /etc/nginx/sites-available/proxmox.

upstream proxmox {
    #proxy to the locally running instance of pveproxy
    server 127.0.0.1:8006;
    keepalive 1;
}

server {
    listen 80;
    server_name proxmox.your.domain;
    #Do not redirect to something like $host$1 here because it can
    #send clients using the IP address to something like https://192.168.1.1
    rewrite ^(.*) https://proxmox.your.domain permanent;
}

server {
    listen 443;
    server_name proxmox.your.domain;
    ssl on;
    #The server certificate and any intermediaries concatenated in order
    ssl_certificate /etc/nginx/proxmox.your.domain.crt;
    #The private key to the server certificate
    ssl_certificate_key /etc/nginx/proxmox.your.domain.key;

    #Only use TLS 1.2
    #comment this out if you have very old devices
    ssl_protocols TLSv1.2;

    #Forward everything SSL to the pveproxy process
    proxy_redirect off;
    location ~ ^.+websocket$ {
        proxy_pass https://proxmox;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    location / {
        proxy_pass https://proxmox;
        proxy_http_version 1.1;
    }
}


This file should be easy to understand. If it is not, I suggest looking at the nginx documentation for the upstream, server, and location blocks.

I have a certificate authority that I use locally to sign the certificate for my machine. If you don’t have your own CA set up, I highly recommend using easy-rsa 3. You’ll need to generate your own certificate and key.

You enable this new proxy definition by creating a symbolic link in /etc/nginx/sites-enabled.

ln -v -s /etc/nginx/sites-available/proxmox /etc/nginx/sites-enabled/proxmox


I disabled the default site by deleting the symbolic link for it

rm -v /etc/nginx/sites-enabled/default


Then do service nginx restart. After that you can access the machine like any other HTTPS enabled site.

Creating a CentOS 7 VM

To create your first VM pick the “Create VM” in the upper right. This starts a wizard that takes you through the initial configuration as a series of tabs. The “VM ID” is automatically assigned but you should give the VM a meaningful name.

In the OS tab you’ll need to select the operating system type you are installing. I selected “Linux 3.X/2.6 Kernel (l26)”.

The first problem you’ll run into is that you have no ISO images to use as a boot medium. You can rsync ISO images to /var/lib/vz/template/iso and they’ll become available momentarily. I downloaded and copied over CentOS-7.0-1406-x86_64-DVD.iso. The netinstall version of CentOS 7.0.1406 is problematic in that it does not know what software repositories to use.

For the hard drive I created a 24 gigabyte image using the “SATA” Bus type. I used the default qcow2 image type. These appear to be dynamically sized and grow on disk as needed. I also checked “No backup”. ( 1/11/15 – You should use the hard disk type “VIRTIO” here, it has the best KVM performance)

If you want to make more processing power available to the guest operating system add more cores. Adding more sockets could make the kernel think it is running in a NUMA environment of some sort. For memory I chose 1024 megabytes. The CPU and memory can both easily be changed later on.

For networking select the default of “Bridged mode” and use the bridge vmbr0. This is the default bridge that is created automatically on installation. I have not explored the use of “NAT mode”.

After that the machine can be booted by selecting it from the list on the left hand side and clicking the “Start” button near the upper right. It will begin the boot sequence. In order to install CentOS 7, you can connect to the terminal by clicking on the “Console” button that is nearby. The VNC terminal worked fine for me in Firefox. It is Java based, and I barely noticed that I was using a web based piece of software. I’m not going to go through the steps I performed to install CentOS 7 here because there is plenty of literature on that topic already.

Create a VM template

You can create a template by converting an existing virtual machine to a template. This process is one-way: a template cannot be converted back into a virtual machine. To make CentOS 7 into a template I did the following.

1. Install CentOS 7 from the DVD ISO image
2. Only set the root password during install
3. Delete the SSH host keys in /etc/ssh on boot
4. Run sys-unconfig

It really is that easy. Running the last step halts the machine, but I had to stop it using the web interface of Proxmox. After that right click on the machine and select “Convert To Template”. Templates are then cloned into virtual machines by right clicking on them and selecting “Clone”.
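Step 3, clearing the host keys so that each clone generates its own, boils down to a single glob. Here is a demonstration against a scratch directory; on the real guest the target directory is /etc/ssh:

```shell
# Stand-in for /etc/ssh on the guest; the glob is the same either way
mkdir -p /tmp/guest-etc-ssh
touch /tmp/guest-etc-ssh/ssh_host_rsa_key /tmp/guest-etc-ssh/ssh_host_rsa_key.pub
rm -v /tmp/guest-etc-ssh/ssh_host_*
ls -A /tmp/guest-etc-ssh | wc -l   # prints 0: all host keys gone
```

On CentOS 7 the sshd service regenerates any missing host keys on its next start, so clones come up with fresh, unique keys.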

The Debian Within

The system that gets installed is just Debian. You can SSH into the machine as root with the password you gave the installer.

Customization

Since the installed system is just a Debian machine you can customize it to do just about anything. I installed the sudo package, created a user for myself and added the user to the sudo group. I then edited /etc/ssh/sshd_config to include the line PermitRootLogin no. I consider this mandatory, even on machines not exposed to the internet. I also configured apt to use the instance of apt-cacher-ng running on my local network.

Network configuration

In my case I am using a Realtek integrated NIC that identifies as “Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller”. I’ve used this motherboard under linux exclusively since I purchased it so I did not anticipate any problems. The default network configuration entered during installation is reflected in /etc/network/interfaces.

# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
    gateway 192.168.12.2
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0


As you can see, a bridge is configured instead of using eth0 directly. This bridge is used as the NIC for the virtual machines, effectively making them appear like they are plugged into your network.

Setting up a second bridge

My goal is to have all of my virtual machines on a different subnet than the other devices on my network. I also need to avoid manual configuration of IP addresses on the virtual machines. On my DHCP server I added an additional pool for the 192.168.14.0/24 subnet. I use dhcpd, so I added the following to /etc/dhcp/dhcpd.conf

subnet 192.168.14.0 netmask 255.255.255.0
{
    #30 minutes
    default-lease-time 1800;
    #the proxmox host
    option routers 192.168.14.12;
    option domain-name-servers 192.168.14.95;
    option domain-name "home.hydrogen18.com";

    pool
    {
        range 192.168.14.129 192.168.14.254;
        allow unknown-clients;
    }
}


My DHCP server is authoritative for the domain home.hydrogen18.com. If you add an interface with an IP address matching one of the pools, dhcpd automatically starts providing DHCP on that interface. Since I have plenty of physical bandwidth on my home network I wanted to use VLANs to keep the VMs separate from other devices. On the machine acting as my DHCP server I added the following to /etc/network/interfaces.

auto eth0.14
iface eth0.14 inet static


The syntax eth0.X indicates that the interface should use VLAN X. This works, but requires that the 802.1Q kernel module is loaded. You can do that with the following.

# modprobe 8021q
# echo '8021q' >> /etc/modules


Now any device on my network using a VLAN of 14 will get an IP address in the 192.168.14.0/24 range. But I still needed a way to place all of the virtual machines on VLAN 14. To do this I added a bridge for VLAN 14 on the proxmox host.

auto vmbr14
iface vmbr14 inet static
    bridge_ports eth0.14
    bridge_stp off
    bridge_fd 0


The same syntax used above for declaring the VLAN is used in the bridge_ports option of the bridge declaration. In order to get the hosts on the 192.168.14.0/24 subnet to communicate with my existing hosts, I needed a device to act as an IP router. The logical machine for this is the Proxmox host itself. This is done by turning on IP forwarding in the networking stack of the Linux kernel. It turns out this is already enabled.

# cat /proc/sys/net/ipv4/ip_forward
1


No further action was necessary. Now whenever I create virtual machines I have the option of vmbr0 or vmbr14. Selecting vmbr14 causes them to receive a DHCP-assigned address in the 192.168.14.0/24 subnet.

Storage & Filesystem

The installer created 3 partitions on the drive:

# lsblk /dev/sdb
NAME                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb                     8:16   0   1.8T  0 disk
├─sdb1                  8:17   0     1M  0 part
├─sdb2                  8:18   0   510M  0 part /boot
└─sdb3                  8:19   0   1.8T  0 part
  ├─pve-root (dm-0)   253:0    0    96G  0 lvm  /
  ├─pve-swap (dm-1)   253:1    0     7G  0 lvm  [SWAP]
  └─pve-data (dm-2)   253:2    0   1.7T  0 lvm  /var/lib/vz


The /boot filesystem is placed directly on the physical disk. My suspicion is that /boot was placed on its own partition to support some older systems that needed /boot to be near the beginning of the disk. Almost any modern Linux system can boot off a /boot partition that is placed anywhere. Furthermore, you can place /boot in LVM so that it can be mirrored and relocated. The 1 megabyte partition is marked as bios_grub. The third partition is used as a single physical volume for LVM.

  --- Physical volume ---
PV Name               /dev/sdb3
VG Name               pve
PV Size               1.82 TiB / not usable 0
Allocatable           yes
PE Size               4.00 MiB
Total PE              476804
Free PE               4095
Allocated PE          472709
PV UUID               zqLFMd-gsud-dmDD-xyNV-hduA-Lnu2-B1ZF6v


In my case this is on a 2 terabyte hard drive I have in the machine. This physical volume is added to a single volume group, and three logical volumes are created:

  --- Logical volume ---
LV Path                /dev/pve/swap
LV Name                swap
VG Name                pve
LV UUID                df3swz-RUho-dOzK-XQcm-YjDF-gVXa-fLXo7d
LV Creation host, time proxmox, 2015-01-01 10:55:19 -0600
LV Status              available
# open                 1
LV Size                7.00 GiB
Current LE             1792
Segments               1
Allocation             inherit
- currently set to     256
Block device           253:1

--- Logical volume ---
LV Path                /dev/pve/root
LV Name                root
VG Name                pve
LV UUID                GdPhWd-Dydo-2QY5-UJFd-qp5G-jnMe-A5gMbC
LV Creation host, time proxmox, 2015-01-01 10:55:19 -0600
LV Status              available
# open                 1
LV Size                96.00 GiB
Current LE             24576
Segments               1
Allocation             inherit
- currently set to     256
Block device           253:0

--- Logical volume ---
LV Path                /dev/pve/data
LV Name                data
VG Name                pve
LV UUID                3tulMK-XLKM-JcCp-DIBW-1jT5-RBt2-JFHDUL
LV Creation host, time proxmox, 2015-01-01 10:55:19 -0600
LV Status              available
# open                 1
LV Size                1.70 TiB
Current LE             446341
Segments               1
Allocation             inherit
- currently set to     256
Block device           253:2


I really have no idea how the installer decided on a 7 gigabyte swap given that I have 8 gigabytes of memory in the machine. In any case, if you have a virtualization host that is aggressively swapping, the experience is going to be miserable. The logical volume /dev/pve/data is mounted as /var/lib/vz. This is where everything for the virtual machines is stored. The installer gave the majority of the available space to this volume, which is a good decision. However, I don’t want to use all of my available space as a filesystem. I want to use logical volumes directly for some virtual machines.

Migrating to mirrored LVM

There are a few things I need to change about the base installation

1. All the filesystems should be on logical volumes.
2. The logical volumes in LVM should be mirrored.
3. I should be able to use logical volumes directly for virtual machines

There are a number of ways I could go about achieving this. I decided to choose the path of least resistance since LVM is already set up on the base install. The easiest way to make these changes is to boot into a live CD environment. Since the Proxmox media doesn’t offer one, I grabbed the Debian Wheezy 64-bit live CD.

Once in the Debian LiveCD environment you can switch from the default user named user to root with sudo su. After that you’ll need to get LVM started since the LiveCD does not by default.

aptitude install lvm2 #Install the LVM modules
service lvm2 restart #Restart LVM
service udev restart #Restart udev
vgmknodes #Map devices for any existing logical volumes


With LVM up and running I added my second disk directly to LVM. You can partition it if you’d like, but there is generally no reason to.

pvcreate /dev/sdx #Change sdx to your second hard drive
vgextend pve /dev/sdx #Extend the existing volume group


The first thing to do is to convert the swap volume to be mirrored.

lvconvert --mirrors 1 --mirrorlog mirrored --alloc anywhere /dev/pve/swap


This warrants additional explanation. I found a great reference explaining why the defaults of LVM do not work for a two-disk setup. Here is an explanation of the options above:

1. --mirrors 1 Keep one copy of the data
2. --mirrorlog mirrored Mirror the log of the logical volume
3. --alloc anywhere Place the log of the logical volume anywhere

These options are needed because by default LVM wants to store the mirror log, the metadata tracking the state of the mirror, on a separate device, which would require a third disk. By using --mirrorlog mirrored, two copies of this metadata are stored on disk alongside the mirrored data.

Now let’s reduce the size of the data filesystem. In my case I am going to reduce it to 256 gigabytes. Even with several virtual machine templates I wound up with 243 gigabytes of free space after doing this. The ext4 filesystem already on the logical volume uses 4096-byte blocks. This means I need to reduce the size to 67108864 blocks. You can check the current number of blocks and the block size with dumpe2fs.

#Show block size information
dumpe2fs -h /dev/pve/data | grep Block
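The 67108864 figure quoted above can be double-checked with shell arithmetic, using the values from this article (a 256 gigabyte target and 4096-byte blocks):

```shell
TARGET_BYTES=$((256 * 1024 * 1024 * 1024))   # 256 gigabytes
BLOCK_SIZE=4096                              # ext4 block size from dumpe2fs
echo $((TARGET_BYTES / BLOCK_SIZE))          # prints 67108864
```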


The filesystem must be checked with e2fsck and then resized with resize2fs

#Check the existing filesystem
e2fsck -f /dev/pve/data
resize2fs -p /dev/pve/data N #Replace 'N' with the number of blocks for the filesystem


On a new filesystem this step should complete quickly since few blocks are in use. After resize2fs is complete the size of the filesystem has been shrunk, but the logical volume has not. The LVM volume group created by the installer uses 4 megabyte extents. In order to determine how many extents the logical volume needs, some calculation must be done. If this is done wrong, the filesystem is destroyed.

S = (B × N) / E

The above variables are

• B – The block size of the filesystem
• N – The length of the filesystem in blocks
• E – The size of the extents used by LVM
• S – The number of extents needed by the logical volume

Once S is calculated you will likely wind up with a number that has a fractional remainder. This number must be rounded up to the next integer value. You can call this number T:

T = ⌈S⌉
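Plugging in this article’s numbers, the division happens to come out even, so no rounding is needed and T equals S:

```shell
B=4096                   # filesystem block size in bytes
N=67108864               # filesystem length in blocks (the 256 GB target)
E=$((4 * 1024 * 1024))   # LVM extent size: 4 MiB
echo $(( (B * N) / E ))  # prints 65536, with no fractional part to round
```

With that number in hand, the lvresize call becomes lvresize --extents 65536 /dev/pve/data.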

The logical volume can now be resized to free up the physical extents in the volume group.

lvresize --extents T /dev/pve/data


This step should complete almost instantly. Next we can create a mirrored logical volume for /boot. We can’t convert the existing /boot since it is a partition directly on the physical disk.

lvcreate --extents 128 --mirrors 1 --mirrorlog mirrored --nosync --alloc anywhere --name 'boot' pve
mkfs.ext4 /dev/pve/boot #Format the block device as ext4


The syntax of lvcreate is similar to the syntax used for lvconvert above. The only thing new is --nosync. This tells LVM to create the logical volume as mirrored but not to synchronize the copies. Since the next step is to create a filesystem on the logical volume, this is not an issue: the newly created filesystem is empty anyway. To get the contents of /boot we need to mount both the old and new filesystems and copy everything over.

#mount the old boot filesystem
mkdir /mnt/oldboot
mount -t ext4 /dev/sdx /mnt/oldboot #replace sdx with old boot partition
#mount the new boot filesystem
mkdir /mnt/newboot
mount -t ext4 /dev/pve/boot /mnt/newboot
#copy oldboot to newboot
cp -a -P -v -R /mnt/oldboot/* /mnt/newboot/

#unmount the filesystems
umount /mnt/oldboot
umount /mnt/newboot

#wipe the old '/boot' FS
dd bs=512 count=4 if=/dev/zero of=/dev/sdx #replace sdx with the old boot partition


Now that the contents of the old /boot filesystem have been copied over, we need to instruct grub to boot using the new one. The file /etc/fstab must be updated to reference the new /boot as well. This filesystem is mounted by UUID, so use dumpe2fs to determine the UUID of the new filesystem.

#show just the UUID of the filesystem
dumpe2fs -h /dev/pve/boot | grep -i uuid


To change /etc/fstab and grub, a chroot environment is used. The / filesystem of the installation needs to be mounted. You can’t mount it to / however, because the live CD environment already mounts a filesystem there. This is why the chroot is needed. You also need to mount /boot. This still isn’t quite enough: the mount command is used with --bind to expose the /sys, /proc, and /dev filesystems of the live CD environment to the chroot.

#mount the root filesystem
mkdir /mnt/root
mount -t ext4 /dev/pve/root /mnt/root
#mount newboot in root
mount -t ext4 /dev/pve/boot /mnt/root/boot
#bind filesystems into /mnt/root
mount --bind /dev /mnt/root/dev
mount --bind /sys /mnt/root/sys
mount --bind /proc /mnt/root/proc
chroot /mnt/root


Now that we’re in the chroot environment we can edit /etc/fstab. You should be able to find a line that looks like this.

#Find the line for '/boot/' looks like
UUID=1949701c-da21-4aa4-ac9b-9023d11db7c5 /boot ext4 defaults 0 1


The UUID will not be the same. Replace UUID=1949701c... with UUID=xxx where xxx is the UUID of the /boot filesystem we found using dumpe2fs above.
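The substitution can also be scripted with sed. Below is a demonstration on the sample line, with a made-up replacement UUID standing in for the one dumpe2fs reported:

```shell
# The original line from /etc/fstab; NEWUUID is hypothetical, for illustration only
line='UUID=1949701c-da21-4aa4-ac9b-9023d11db7c5 /boot ext4 defaults 0 1'
NEWUUID='aaaabbbb-cccc-dddd-eeee-ffff00001111'
echo "$line" | sed "s|^UUID=[^ ]*|UUID=$NEWUUID|"
# prints: UUID=aaaabbbb-cccc-dddd-eeee-ffff00001111 /boot ext4 defaults 0 1
```

The same pattern works on /etc/fstab directly with sed -i, but editing the file by hand is just as quick.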

Grub can be reinstalled and updated automatically. There is a good explanation of this process here.

#install grub to the disk
grub-install /dev/sdx #device you selected during proxmox install
#update the grub configuration
update-grub


I got the error "error: physical volume pv0 not found." about 30 times when I did this. It doesn’t seem to matter. To verify that everything has been updated we can check /boot/grub/grub.cfg.

#verify the UUID set in /boot is now in the configuration
grep -m 1 xxx /boot/grub/grub.cfg


Again, xxx is the UUID of the /boot filesystem. At least one line should match.

Now just type exit to leave the chroot. At this point the data and root logical volumes are still unmirrored. LVM can be manipulated while filesystems are in use, so there isn’t much point in staying in the live CD environment. Reboot the machine with shutdown -r now and remove the CD when prompted.

Once Proxmox boots back up, SSH in as root. You’ll want to start a screen session before converting the logical volumes to mirrored because it can be very time consuming.

#upgrade data logical volume to mirrored
lvconvert --mirrors 1 --mirrorlog mirrored --alloc anywhere /dev/pve/data
lvconvert --mirrors 1 --mirrorlog mirrored --alloc anywhere /dev/pve/root


Enable LVM in the Web Interface

To use LVM volumes from the web interface you must enable LVM as a storage option. This is done by selecting “Server View” and then the “Storage” tab. Click the “Add” button and a drop down appears; select the LVM option. You’ll need to select the volume group you want to use, in my case “pve”.

After adding the volume group you’ll have the option of using logical volumes as the storage for virtual machines. You can add a logical volume to an existing virtual machine by clicking it in the left hand pane, clicking the “Hardware” tab and clicking “Add”. From the drop down menu select “Hard Disk”. The “Storage” option in the modal dialog has the LVM volume group as an option.

The created logical volume has a predictable name, but it is not mirrored:

  --- Logical volume ---
LV Path                /dev/pve/vm-103-disk-1
LV Name                vm-103-disk-1
VG Name                pve
LV UUID                ib3q66-BY38-bagH-k1Z2-FDsV-kTMt-OKjlMH
LV Creation host, time basov, 2015-01-08 19:47:54 -0600
LV Status              available
# open                 0
LV Size                32.00 GiB
Current LE             8192
Segments               1
Allocation             inherit
- currently set to     256
Block device           253:36


The logical volume can be made mirrored by using the same lvconvert commands as used to make /dev/pve/root mirrored.

How to virtualize pfSense firewall including using VirtIO drivers on Proxmox VE

This tutorial will cover how to install the pfSense firewall as a virtual machine. Is it safe to virtualize a firewall? I will leave it up to you to do your own research to find your answer; there are numerous online discussions which go over this topic. These are just two which I have stumbled upon, from Server Fault and Security Week. Personally I am more in the camp of folks who agree it is safe to virtualize a firewall. You can read about pfSense here.

The requirements of this tutorial are the following:

1. A functioning Proxmox Hypervisor with version 3.3-5/bfebec03 or newer.
2. You have already created the necessary network bridges. I have gone over this in my other tutorial on how to virtualize IPCop on Proxmox.
3. Administrative rights on the Proxmox server.
4. (Might be optional) I have a Proxmox Community subscription plan; for pricing you can check it here. The subscription plan provides access to the Enterprise repository. The cost is very reasonable when compared to other commercial virtualization platforms. I paid 99.80 euros, which at the time of conversion was $115.41 per year.
5. Comfortable using Linux.
6. Some knowledge using vi

Creating a Linux Bridge

This is done on the Proxmox host.

This is the part where I miss the VMware ESX control panel for assigning virtual switches and NICs. The Proxmox web interface has the ability to create Linux bridges and OVS switches for virtual machines to use, but the configuration I am going to use can’t be done through the Proxmox web interface. It has to be done through the command line.

I prefer to use vi when editing files, so I had to install it.

apt-get install vim

Connect to Proxmox host using SSH.

ssh -l root proxmox-server-ip

What the following bridge settings mean.

bridge_stp off # disable Spanning Tree Protocol

bridge_fd 0 # no forwarding delay

bridge_ports eth0 # which nic card to attach

Move to the network directory.

cd /etc/network

Edit the interface file.

vi interfaces

Copy and paste the below after any configuration already in there. On my Proxmox host, a physical server, I have 5 physical network cards installed. I therefore created 4 network bridges.

Below is the process of creating one network bridge. Each time you add another network bridge just rename each network bridge as vmbr1, vmbr2, vmbr3, etc.

## this is for pfSense WAN nic

auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0

Save and exit.

:wq

Each time a network bridge is created a reboot is needed to apply the new settings, so it is better to add all of the bridge configuration at one time.

reboot

Below is what my network bridge configuration file looks like. Yours may look different depending on how many bridges you have.

I purposely left network bridge vmbr0 unassigned for virtual machine use. This is the network I use solely when connecting to the Proxmox web GUI. Proxmox scheduled backups also go through this network.

Note: vmbr0 is the only network bridge which should have a gateway IP assigned!

The reason we don’t put a gateway IP address on the network bridges we create is because we add the gateway IP on the virtual machine’s NIC instead. Example: the image below shows my Windows 7 computer has a gateway IP address of 172.16.2.6, which is the IP address of my pfSense LAN NIC.

After Proxmox reboots your network settings should look similar to mine. The IP address for vmbr0 and the gateway settings have been erased for security reasons. The vmbr1 settings for Ports/Slaves, IP address, subnet mask and gateway are intentionally left blank. This is to make sure any network traffic coming through vmbr1/eth1 will pass through the pfSense WAN virtual NIC.

When you have met all of the requirements let us begin.

Check to make sure the pfSense ISO has not been altered.  On my Mac I open a terminal and use md5 to check the checksum against the md5 checksum posted on the pfSense website.
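The same check works on Linux with md5sum. Here is a demonstration with a stand-in file; with the real ISO, substitute its path and compare the digest to the one published on the pfSense download page:

```shell
# Stand-in for the real ISO; this filename is made up for illustration
printf 'demo' > /tmp/pfSense-demo.iso
md5sum /tmp/pfSense-demo.iso
```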

Logging in to the Proxmox web GUI

Log in to the Proxmox web GUI; for me this is https://172.16.1.10:8006. The Proxmox hypervisor uses a self-signed certificate, so accept the warning in your browser of choice. I will be using Firefox.

Upload the ISO to the Proxmox Hypervisor

On the left menu click on local, then choose the Content tab, then Upload. Navigate to where your pfSense ISO is, then click Upload.

Create a Virtual Machine

After you login click on the menu Create VM which is located on the top right.

Give your VM an ID and name.  Click next.

Choose the Other OS type, since pfSense is built on FreeBSD. Click next.

For the ISO, click on the drop down to choose your uploaded pfSense ISO file. Click next.

Choose IDE for Bus/Device for now; we will later replace this with a VirtIO driver. I chose the raw disk format for my block format; according to the Proxmox developers this is the most performant. Click next.

Allocate your CPUs. My Super Micro box has two sockets, hence the settings below. Leave the CPU type at kvm64. Click next.

Allocate memory.  It will depend on how much your physical server has to spare and your intended use for your pfSense firewall.  Click next.

Add a NIC and assign it to a network bridge.  Mine uses vmbr1 with the Intel E1000 driver.  Click next, then Finish.

Then go back into the Hardware tab and add another NIC using the Intel E1000 driver.  Click Add.

Be sure to attach this second NIC to a different network bridge.  Mine is set up as vmbr3.

Then go back into the Hardware tab and add a third NIC using the Realtek driver.  Attach it to yet another bridge; mine will be vmbr4. Click Add.

This third nic card will be assigned for our DMZ.
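As an aside, the same three NICs can be added from the Proxmox host's shell with the qm tool instead of the web GUI; this is only a sketch, assuming this article's example VM ID (198) and bridge layout:

```shell
# Attach three NICs to VM 198: E1000 on vmbr1 (WAN), E1000 on
# vmbr3 (LAN), RTL8139 on vmbr4 (DMZ).  Substitute your own VM ID
# and bridge names.
qm set 198 --net0 e1000,bridge=vmbr1
qm set 198 --net1 e1000,bridge=vmbr3
qm set 198 --net2 rtl8139,bridge=vmbr4
```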

Yours will look similar to my hardware summary here, except maybe for the CPU count.  If you’re curious about what sort of resources you need for your environment, consult this guide.

Launch the VM

Click on the newly created pfSense VM, then on the top right menu click Start.  When it starts, immediately click on Console (the two menus are right next to each other). Choose noVNC.

Pay close attention to the screen; the prompt flies past quickly. When you see the install option menu, enter i.  You will know you were successful when you see the image below.  Use the settings shown.  Enter.

Choose Quick/Easy Install. Enter.  OK. Enter.

Click OK to proceed with installation.

Installation proceeds.

Install standard kernel. Enter.

Reboot.

Note down the names of your three identified NICs.

Choose n (No) when asked to set up VLANs.  Enter.

Type in em0 for the WAN interface (the 0 at the end is the numeral zero). Enter.

For the LAN NIC, enter em1.

For the DMZ NIC, enter re0.

You will be asked for an Optional 2 interface; just hit enter for none.

Confirm the network settings: y, then enter.

Enabling VirtIO

In this part we will load the modules necessary to use the VirtIO drivers.  We will be editing the file /boot/loader.conf.local.  Choose option 8 (Shell). Enter.

I will be using vi to edit the configuration file.  The entries need to go into this file so they become permanent; otherwise they will be gone each time our pfSense virtual firewall reboots.

vi /boot/loader.conf.local

Add the following entries one on each line.

virtio_load="YES"
virtio_blk_load="YES"

When done, the file will look like this.

Save the file.

:wq
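If you would rather not use vi, the same two entries can be appended from the pfSense shell (option 8) in one step; a sketch, assuming the /boot/loader.conf.local path used above:

```shell
# Append the VirtIO loader entries; the quoted heredoc ('EOF')
# writes the lines verbatim, with no shell expansion.
cat >> /boot/loader.conf.local <<'EOF'
virtio_load="YES"
virtio_blk_load="YES"
EOF
```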

Type exit, then enter, to close out the shell console.

Now we will shut down our pfSense VM.  Choose option 6.  Enter.  Type y, then enter.

Your VM icon will turn from white to black, indicating the VM has been shut down.  Click your pfSense VM in the left menu of the Proxmox web GUI, then go to the Hardware tab.  Click CD/DVD, choose Remove, then click Yes.

Now start the VM back up by clicking start from the top right menu.  Access the console again.

When the options menu comes up choose option 2. Enter.

You will again be asked if you want to set up VLANs.  Choose n.  If you want to set up VLANs, you can read the pfSense online docs.

You’re shown available interfaces to configure.

Enter the number of the interface you want to configure.  I will be adding a static IP for the LAN interface.

Enter 2

Enter the LAN IP.  I am putting in IP address 172.16.2.6.  Enter.

I am using the subnet mask 255.255.255.0, therefore I will put in 24 for bit count.  Enter.

When you get to this part just enter for none.  Enter.

For LAN IPv6 enter for none.  Enter.

Do you want to enable DHCP on the LAN interface?  I will enable DHCP for mine. Enter y.

Enter the beginning IP for your DHCP client range.  This is what I have.  Enter

Enter the end of the IP range.  This is what I have. Enter.

Answer n when asked to revert the webConfigurator protocol to HTTP; we want to access our pfSense web GUI through SSL.

Now it indicates we will be able to access our pfSense firewall using IP 172.16.2.6 from a web browser.  Enter to take console back to menus.

Connecting to pfSense web gui

From another computer we will now connect to our pfSense Web GUI using the IP address you have used for your LAN nic.

Type in the URL in your browser

Note: Your browser will warn you since you’re connecting to a site with a self-signed certificate. Just accept it.
https://172.16.2.6  (Replace with your own LAN IP)

The pfSense wizard will assist you in setting up your newly installed pfSense firewall.  Click next.

You can sign up for the pfSense Gold Subscription.  I will skip this for now. Click next.

Provide your pfSense hostname and domain.  Add your DNS name servers or have DHCP provide those for you.  I am using Google’s name servers. Click next.

Set your timezone. Use the default time server.  Click next

Set your WAN settings here.  Yours could be DHCP or PPPoE; I will set mine as a static IP.  The static IP address, subnet mask and gateway will be provided to you by your Internet Service Provider.  Click next.

After you set your WAN IP as static, go to the General Setup menu.  Look at the DNS settings; if there is an option to use a gateway, set it to the default gateway provided by your ISP.

Note: I had an issue where I was unable to update my pfSense firewall even though I could ping an external host from the pfSense console.  I could even do an nslookup successfully, but each time I tried to update pfSense an error came back saying it was unable to contact the pfSense update server.  After putting in this gateway information for my DNS, the update worked.

We have already set our LAN IP through the console so just click next.

Congratulations!  You have just setup your pfSense router.

pfSense Dashboard.

Let us check if our pfSense has any updates.  From the System menu > Firmware > Auto Update tab.

As I was checking for updates it turned out pfSense version 2.2 had just been released!  With a click of a button I was able to upgrade my pfSense 2.1.5 to 2.2 easily.  After installation of the upgrade the firewall automatically reboots.

Since there are significant changes introduced by 2.2, I did a simple test to make sure my VirtIO-enabled NICs still work, using the ping option (7) from the pfSense console.  The test looked good.

From my Linux workstation I am also able to ping an external address.  The workstation is using the LAN IP of the pfSense firewall as its default gateway.

You now have a functioning pfSense firewall, but if you want to use the VirtIO device drivers, continue with the instructions below.

Change the block and nic device driver to use VirtIO on pfSense

Why would you want to do this?  Here is the answer from the libvirt.org website.

“Virtio is a virtualization standard for network and disk device drivers where just the guest’s device driver “knows” it is running in a virtual environment, and cooperates with the hypervisor. This enables guests to get high performance network and disk operations, and gives most of the performance benefits of paravirtualization.”

From the pfSense console choose option 8 for shell. Enter.

Type in

vi /etc/fstab

Change the following two lines:

/dev/ad0s1a       /        ufs     rw    1    1
/dev/ad0s1b       none     swap    rw    0    0

to read:

/dev/vtbd0s1a     /        ufs     rw    1    1
/dev/vtbd0s1b     none     swap    sw    0    0

:wq
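Alternatively, instead of the vi edit, the same substitution can be done non-interactively with sed; a sketch, assuming your device names match the ones shown above:

```shell
# Rewrite the IDE device names (ad0) to their VirtIO equivalents
# (vtbd0), and change the swap option from rw to sw on the swap
# line; -i.bak keeps a backup copy of the original fstab.
sed -i.bak -e 's|/dev/ad0|/dev/vtbd0|g' \
           -e '/swap/s/rw/sw/' /etc/fstab
```

Afterwards, grep vtbd0 /etc/fstab should show both rewritten lines.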

Then exit out of the console. Type in exit.

Shutdown your pfSense server from the console.  Choose option 6. Enter.

The configuration we need to change is found on the Proxmox hypervisor itself.  Log back into your Proxmox web GUI, then on the left menu click on your Proxmox host.  Mine is called proxmox-supermicro.

Then from the top right menu click Console, then choose noVNC.

Then move to the directory where the configuration file is located.  It contains the configuration files of all your KVM-based virtual machines, which is what we’re using for our pfSense firewall. My pfSense virtual machine has the VM ID 198.

cd /etc/pve/qemu-server/

Before you alter the original file it is wise to make a copy first.

cp 198.conf 198.conf.orig

After making the copy, edit the file. We need to change this line:

vi 198.conf

ide0: local:198/vm-198-disk-1.raw,format=raw,size=10G

to read as follows (the 0 is the numeral zero, indicating this is the first block device):

virtio0: local:198/vm-198-disk-1.raw,format=raw,size=10G

Also change the bootdisk line to:

bootdisk: virtio0

:wq
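The same config edit can be scripted instead; a sketch run on the Proxmox host, using this article's example VM ID (198) — substitute your own, and keep the .orig backup since /etc/pve is a special filesystem:

```shell
# Switch the VM's disk from IDE to VirtIO in its Proxmox config.
VMID=198                                   # the article's example ID
CONF="/etc/pve/qemu-server/$VMID.conf"
cp "$CONF" "$CONF.orig"                    # keep a backup first
sed -i -e 's/^ide0:/virtio0:/' \
       -e 's/^bootdisk:.*/bootdisk: virtio0/' "$CONF"
```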

Start up your pfSense virtual machine.  Good job!  Now your block device is running on the VirtIO driver.  If you look at your hardware summary you will find your hard disk listed as (virtio0).

Set VirtIO nic drivers for pfSense

Note: Very important! Before changing anything else, this needs to be done in the pfSense GUI: go to System, then Advanced, then Networking. Disable hardware checksum offload. Click Save.

Shutdown your pfSense firewall from the console or web gui.

Click on your VM ID, then the Hardware tab, then click the NIC whose driver you want to change, then click Edit. I am going to change all NICs to use VirtIO.

Start pfSense back up.  You will once again be asked to configure your network interfaces. Enter n when asked to set up VLANs.  Pay attention to the naming convention, which has changed: the network cards now all start with vtnet, with 0, 1, 2 appended for each card.

Let’s assign each one.

Enter for WAN using vtnet0

Enter for LAN using vtnet1

Enter for DMZ using vtnet2

Enter for none.

Confirm with y to apply the new settings.

From the pfSense console choose option 7.  This will test if our new network card drivers are working.  Ping an external host IP.

References:

https://doc.pfsense.org/index.php/VirtIO_Driver_Support

Installing IPCop as a Virtual Machine on Proxmox VE

How I virtualized my IPCop installation on the Proxmox VE hypervisor.  This how-to assumes you already have a running Proxmox VE host.  If you want to try Proxmox VE, click here.  Another requirement: there need to be two physical network cards installed on the Proxmox host, or three if you intend to set up a DMZ.

From the Proxmox web panel click on local (the name of your Proxmox host).  Then click the Content tab, then Upload, which brings up the upload window.  Browse to the location of the downloaded IPCop ISO, then click Upload.

Creating a Linux Bridge

This is the part where I miss the VMware ESX control panel for assigning virtual switches and NICs.  The Proxmox web interface can create Linux bridges and OVS switches for virtual machines to use, but the configuration I am going to use can’t be done through the web interface; it has to be done on the command line.

Note: I found it easier to keep the other physical network cards unplugged, except for the one NIC used by the Proxmox web control panel.  I plugged in each associated NIC only as I created its virtual bridge.  This made it easier to identify which physical NIC to assign to each bridge.

The image below shows the starting point with one plugged-in NIC.

Connect to the Proxmox host using SSH.

ssh -l root proxmox-server-ip

I prefer to use vi when editing files, so I had to install it first.

apt-get install vim

Here is what the following bridge settings mean:

bridge_stp off # disable Spanning Tree Protocol

bridge_fd 0 # no forwarding delay

bridge_ports eth0 # which nic card to attach

Move to the network directory.

cd /etc/network

Edit the interface file.

vi interfaces

Copy and paste the following after any configuration already in there.

## this is for IPCop WAN nic

auto vmbr1
iface vmbr1 inet manual
bridge_ports eth1
bridge_stp off
bridge_fd 0

Save and exit.

:wq

Each time a network bridge is created a reboot is needed to apply new settings.

# reboot

After Proxmox reboots, your network settings should look similar to mine.  The IP address and gateway settings for vmbr0 have been erased for security reasons.  The vmbr1 settings for Ports/Slaves, IP address, Subnet mask and Gateway are intentionally left blank; this ensures any network traffic coming through vmbr1/eth1 passes through the IPCop WAN virtual NIC.

My IPCop topology created using this free online drawing tool.

Create IPCop virtual machine

From the top right corner of the web interface, click on Create VM.  Name the virtual machine.  Click next.

Choose the newer Linux versions option. Click next.

Use the default storage called local; this is where my virtual machine images are stored.  From the drop-down choose the IPCop ISO we uploaded earlier. Click next.

Hard disk settings.  Bus/Device is set to use IDE.  When I tried to use VirtIO, IPCop was unable to find the hard disk during installation. I picked raw format for speed.  Click next.

For CPU type I am using kvm32.  For why I went with kvm32, click here.

Allocate memory.  Click next.

Add a NIC for LAN (GREEN) use.  I am using the Intel E1000 model to make it easier to identify which NIC to assign for GREEN use. Click next, then Finish.

Now add the WAN (RED) NIC.  Click on the IPCop VM, then the Hardware tab.  For the bridge, use the vmbr1 we created earlier.  For the NIC model use Realtek RTL8139.  Click Add.

This is what my hardware looks like.  MAC addresses erased for security reasons.

Click Start from the top right menu to start the IPCop VM.  The status should show OK in the task panel below; the status panel also shows resource usage.  To complete setup we need to connect to the VM using the console.  Click on Console, which brings up the IPCop boot screen.  Click inside the console window, then press the Enter key.

Note: if the console window shows only a blank white screen, just click reload.

Choose language.

Click ok to begin installation.

Choose keyboard setting.

Choose timezone and set correct time.

Accept the hard drive to install on.  When asked if you are sure you want to continue, choose OK.

This will be a Hard Disk install.

Installation begins.

We’re not restoring from a backup; press Tab to skip.

Install done.  Press Enter.

Choose a name.

Enter domain name.

Choose static.  This depends, of course, on how your WAN is set up; mine is a static IP.

Network Card Assignment

This is why I wanted to use two different NIC models: so I could easily identify which NIC to assign.  I already know bridge vmbr0 uses eth0 on the Proxmox host; this is also where the Proxmox web interface listens.

The Realtek virtual network device will be assigned to WAN (RED).  Choose Select, then RED. Tab to assign.

Do the same for the Intel Card but this time assign it to GREEN for internal LAN use.

When all cards have been assigned tab to Done.

Assign WAN IP for RED interface.

Assign DNS name servers to use and WAN gateway.

Skip enabling DHCP unless you need it activated for your LAN.

Create passwords on the next three screens, one for each IPCop user account.

Installation is finally done!

After IPCop reboots, log in on the console to test whether you can ping an internal IP and a WAN IP.  Log in as root.

You should be able to ping out to an external IP.  I am pinging Google’s nameserver below.

I am also able to ping an internal IP.

I now have a functioning IPCop firewall.  But what if I wanted to add another nic card so I can place some hosts in DMZ?

Here is one of the reasons it is good to use a DMZ network.  NY Times Article.

To make this work I had to add another physical network card on my Proxmox server.  I then had to add another bridge for DMZ use.

Again we have to edit the file.

vi /etc/network/interfaces

Add this right below the vmbr1 stanza we created earlier.

## this is for IPCop DMZ nic
auto vmbr2
iface vmbr2 inet manual
bridge_ports eth2
bridge_stp off
bridge_fd 0

Save the file.

:wq

Reboot Proxmox host.

Checking the network configuration on our Proxmox host, you will find a new bridge called vmbr2, with the associated physical NIC eth2 showing as active.  We can now assign this to our virtual IPCop firewall.
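The same check can be made from the Proxmox shell; a quick sketch, following this article's vmbr2/eth2 naming (brctl comes from the bridge-utils package):

```shell
# Show the vmbr2 stanza from the interfaces file, then list the
# bridge with its attached port (eth2 should appear).
grep -A4 'iface vmbr2' /etc/network/interfaces
brctl show vmbr2
```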

Go ahead and shut down the IPCop VM; we will then add a virtual NIC from the Hardware tab.  I am adding another Intel E1000 model for this virtual NIC, which attaches to the physical NIC eth2 through bridge vmbr2.

Go ahead and start the IPCop VM to set up our new virtual NIC. Log in as root on the console, then type setup and press Enter.

Scroll down to Networking.  Tab to select.

Scroll down to Drivers and card assignments.  Tab to select.

There is the unassigned Intel card.  Tab to select.

Scroll down to Orange.  Orange, in IPCop speak, is the color assigned to DMZ zones; Blue, as you guessed, is assigned to WiFi hotspots. Tab to assign.

All 3 virtual nics should be assigned.  Tab to done.

Now we need to add an IP for the Orange NIC.  This IP will be used as a gateway for any computers or devices connected to the Orange switch or hub.

Scroll down to Address settings.  Tab to select.

Select which interface to configure. Tab to select.

Put in an IP from any of the private address ranges. Tab to OK, then tab Go Back > Go Back, then exit setup.

You should be able to ping the IP in the Orange zone.

Connecting to IPCop web interface

With our networking setup done, it is time to connect to IPCop from a web browser. IPCop uses port 8443. Point your browser to your IPCop’s GREEN IP address.

https://192.168.1.1:8443  (your browser will prompt you to accept a self-signed certificate; go ahead and accept the IPCop certificate).

If you need to change the IPCop default GUI port to something other than 8443, you can do so on the command line.  The command below changes the port to 5445.

/usr/local/bin/setreservedports.pl --gui 5445

Log in using the credentials you created earlier; for managing IPCop this would be admin.

The first thing I like to do after logging in is check for IPCop updates: from the System menu > Updates.  Here it shows I have three updates, applied by clicking the green down arrow beside each update, then clicking Apply.

After applying all updates I want to check whether any ports are open through IPCop into my LAN.  First I change the gateway setting on my Mac to use the IP address of the GREEN zone, which was 192.168.1.1.

Using this website I can scan my IPCop WAN IP; in this example I was using IP 123.123.123.123.  Below are my results; if a port were open, a green indicator would show next to the port number.

Checking my IPCop firewall logs, the DROP entries from the scan show up.

Looking at my IPCop virtual machine’s status in the Proxmox control panel, I can see very low resource usage; I even reduced my original memory allocation from 2.5 GB to 1 GB.

There is also a nice real-time view of CPU, memory, network and disk I/O usage, available for each virtual machine.

This is the part I really like about the Proxmox hypervisor: I am able to back up a running virtual machine without shutting it down.  It remains accessible while the backup snapshot is in progress.  Yes, this feature comes free with Proxmox, unlike the free versions of ESX.  There was a time I had to use a commercial tool from Trilead to back up my virtual machines on free ESX. Not anymore!

Here is what it looked like when I did a backup to my NFS storage.

It took only 21 seconds to complete a backup of my IPCop vm.

Looking at the real space being used by my IPCop VM tells me I could have allocated less hard drive space when I created the virtual machine.  If I were using qcow2, I could resize the virtual disk from the web control panel.  Why did I decide on the raw format?  Based on what I have read on the Proxmox support forum, if you want performance, use the raw format.

I hope this encourages you to virtualize IPCop using the rock-solid, reliable, open source bare-metal hypervisor Proxmox VE.

This concludes the tutorial Installing IPCop as a Virtual Machine on Proxmox VE.

References:

https://wiki.debian.org/BridgeNetworkConnections