The Next Level of Remote Access

In a previous post I outlined the stages of remote access I’ve had configured in my lab so far, starting with simple port forwarding and RDP, leading to an L2TP VPN to my router. This is all great, and gives me complete access to my lab network from anywhere in the world…as long as I have my laptop (or am comfortable adding my VPN credentials to the computer I’m using). So the question is, what happens when I don’t have VPN access?

Enter Nginx and reverse proxying! Nginx is a very popular web server, similar to Apache, and many people use it for its ease in configuring reverse proxies. Whereas a standard (forward) proxy sits between local users and the internet, relaying their requests out to internet servers, a reverse proxy sits between the internet and local servers, relaying requests from internet users in to a local server (by definition this is still technically a proxy, but given the origin and flow of the traffic, it is logically the reverse of what most people picture a proxy as). What this means is that an instance of Nginx can be configured as a single point of service for another server (or servers) within the LAN. This is beneficial for several reasons:

  • Only 1 or 2 ports need to be forwarded, depending on whether you want HTTP, HTTPS, or both
  • A single web server can handle requests for pages from multiple services, across multiple machines
  • Separate hosts and webpages can be given separate locations on the domain
  • SSL only needs to be configured on a single server to be effective for all traffic on that domain

Setting up a reverse proxy through Nginx is surprisingly simple. After installing Nginx, you’ll want to create a conf file in /etc/nginx/conf.d to contain all your changes. I name mine after the domains they serve to make them easy to tell apart; you can create as many as you want, and Nginx will read from all of them. It’s generally a good idea to keep them separated to make it easier to change options when necessary. In your conf file, put the following lines to create a server block, which contains all the configuration for a web server.

## Start ##
server {

    listen 192.168.1.10:80; # placeholder address; change to your server's external interface, or remove this line to listen on all interfaces

    server_name example.com; # placeholder domain; this block handles requests for this name

    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;
}
## End ##

This is a basic server block for a single domain (the address and domain above are placeholders; substitute your own). Change the listen line to match your web server’s external interface, or remove it to listen on all interfaces. The server_name line defines which requests the server block will handle, in this case all requests to that domain. This isn’t important if you only have 1 domain, but you can create additional conf files or server blocks to serve other domains or subdomains from the same Nginx instance. The access_log and error_log lines define where to store logs, which is important for troubleshooting.
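If you do serve multiple domains or subdomains, each one gets its own server block, typically in its own conf file. As a sketch, using a hypothetical subdomain media.example.com:

```nginx
# /etc/nginx/conf.d/media.example.com.conf (hypothetical file and names)
server {
    listen 80;
    server_name media.example.com; # only requests whose Host header matches this name use this block

    access_log /var/log/nginx/media.example.com.access.log;
    error_log  /var/log/nginx/media.example.com.error.log;
}
```

Nginx compares the Host header of each incoming request against the server_name of every server block and routes the request to the one that matches.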

In order to configure the reverse proxy, you’ll need to add a location block inside the server block. Here’s an example location block for a service:

location /myapp/ {
    proxy_pass http://192.168.1.50:8080/; # placeholder internal address of the proxied service
}

It’s pretty straightforward: the location line defines the path to serve, and proxy_pass defines the address of the server to be proxied. In this instance, the service running at the proxy_pass address would be served by Nginx under /myapp/ on your domain. This is the simplest configuration for a reverse proxy setup. Most services will require some additional configuration to support the reverse proxy, so refer to your service’s documentation to find out what’s needed.
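Much of that extra configuration boils down to telling the backend service who the original client was, since from its perspective every request now comes from Nginx. A few proxy_set_header directives cover the common cases; a minimal sketch (the variables are standard Nginx ones, but check your service’s docs for exactly what it expects):

```nginx
location /myapp/ {
    proxy_pass http://192.168.1.50:8080/;           # placeholder internal address

    proxy_set_header Host $host;                     # pass through the original Host header
    proxy_set_header X-Real-IP $remote_addr;         # client's real IP, not Nginx's
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;      # http or https on the client side
}
```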

Stay tuned!


Lab Migrations

In a previous post, I highlighted the addition of a new server to my lab, intended to be a VM host for all my services. That server is now officially the host for all the services in my lab, except for network storage! Not only that, but all the services previously provided by various VMs and native apps on Windows Server 2012 now operate in new, logically segregated VMs. At this time I have 4 Linux VMs running that support all of the services I need in my lab (again, except for network storage, which is handled completely by Windows Server 2012 on my R510).

Previously I had an ugly mix of services that led to a number of problems:

  1. Not all services would automatically start after a reboot
  2. If a certain service required a restart, it was likely that the entire host needed to be rebooted, unnecessarily affecting other services
  3. If a certain service encountered issues that required a rollback through snapshots or updates, other services would be unnecessarily affected
  4. Increased vulnerability as a result of external access required by some services

Each of my 4 VMs now serves a specific purpose, and services run on specific boxes so that an issue with one service won’t affect other services unnecessarily. They are as follows:

Roadhog – Plex Media Server box (Ubuntu)

  • Plex Media Server
  • PlexPy

Running PlexPy on the same box as PMS means that if the whole box goes down, PlexPy can’t notify me; but it also means that as long as PMS is running, PlexPy is too, tracking watch status. I keep a close eye on the lab, so if the machine goes down I usually notice quickly. As a bonus, network issues can’t interfere with PlexPy’s connection to PMS.

Sombra – Download box (Ubuntu)

  • Sonarr
  • Radarr
  • Jackett
  • Deluge
  • Ombi

Having all of these services on a single box provides some benefits:

  1. Simple communication between the services (Sonarr > Deluge, Ombi > Sonarr, etc), since they can all use localhost
  2. Many of these run on Mono, which means Mono only needs to be installed on a single box to support all of them
  3. Each of these services is related in nature, and can easily be moved all together if necessary

Pharah – External access box (Ubuntu)

  • SSH server
  • nginx reverse proxy server

Pharah is the newest box, created to replace an older CentOS box that had become problematic: it had previously supported several services but was only being used as an SSH server. It was replaced with a leaner Ubuntu box that runs only the services that are needed.

Pihole – Pi-Hole box (Debian)

Only runs Pi-Hole. Since Pi-Hole is a network service separate from the rest, it runs on a completely separate VM so that the service can maintain as much uptime as possible.

In the future I expect to create another VM with MineOS to run a Minecraft server. For now though, all the services I run in my lab are handled by these VMs or dedicated devices such as my router.


Edgerouter-X and VPNs

Being a homelabber on the go, I need to be able to access my lab remotely to maintain it and resolve issues. I also like to upgrade and configure things remotely, and I need a reliable way to do that. There are several ways to do all of this with varying degrees of security and simplicity. The simplest is port forwarding, which I use: I have several ports forwarded from inside my network so I can access the things I need externally.

The next option, a little more complicated but also more secure, is SSH tunneling. What’s great about SSH tunneling is that all the traffic is encrypted by default, since it’s an SSH connection, and I can have full access into my network without having to open ports for everything. This is a little tedious, however, since I have to close the SSH connection every time I want to access a different service, and I either have to have all my tunnels saved or open them each manually every time I want to connect. It also means I have to maintain an SSH server on my network (which I do regardless), and if that server is unreachable or needs to be restarted, I lose that access.
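For reference, a tunnel like the ones described above is a single ssh invocation, and a dynamic (SOCKS) forward sidesteps the one-tunnel-per-service tedium. A sketch, with hypothetical host names:

```shell
# Forward local port 8080 to a web UI on an internal box (hypothetical hosts)
ssh -L 8080:internal-box:80 user@home-ssh-server

# Or open a SOCKS proxy on local port 1080 that can reach any internal service
ssh -D 1080 user@home-ssh-server
```

Pointing a browser at localhost:8080 (or at the SOCKS proxy) then reaches the internal service through the encrypted SSH connection.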

Enter client VPN on the Ubiquiti EdgeRouter X (ER-X). The ER-X is an excellent device that I would recommend to anybody with solid networking experience. I have the EdgeRouter X SFP, an improvement on the standard ER-X: it provides 5 passive PoE ports (vs. 1 on the ER-X), offers a gigabit SFP port, and has a few additional software features not present on the ER-X. The ER-X offers a client VPN option using PPTP, L2TP, or OpenVPN (if you’re not afraid to venture into the command line). Since the VPN lives on the router, it’s the first device behind the modem, independent of all my servers and computers, with full access across the network. By connecting to a VPN on this device, I have full remote access to my entire network, the same as if I were sitting at home, and it’s all encrypted. Setting up an L2TP VPN server on the ER-X is fairly simple if you’re willing to dabble in the CLI a little bit (guide here). This is what I have set up on my ER-X.

Microsoft makes everything annoying though. Since the Windows Creators Update, all VPN settings have been moved to the Settings app, which lacks some critical options, like the ability to change the authentication for L2TP to MSCHAPv2 instead of EAP, which doesn’t work on the ER-X. So to fix this, I had to venture into PowerShell. Thankfully the PowerShell VpnClient module still exposes all the options, so I was able to change the authentication method quite easily and get it working on my laptop.
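The cmdlet in the VpnClient module that does the trick is Set-VpnConnection. A sketch, assuming the VPN connection was already created in Settings and is named "HomeLab" (a hypothetical name):

```powershell
# Run in an elevated PowerShell session
Set-VpnConnection -Name "HomeLab" `
    -TunnelType L2tp `
    -AuthenticationMethod MSChapv2

# Verify the change took effect
Get-VpnConnection -Name "HomeLab"
```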

Another annoying issue I’m encountering: when the LAN I’m on remotely experiences any loss of connectivity, the VPN errors out, and once connectivity is restored I can’t reconnect to the VPN until several hours later. I suspect that one end of the tunnel isn’t being properly closed when the connection is lost, so a new one can’t be opened when connectivity is reestablished. To verify this, once I lose connection and the VPN is broken, I can log into my ER-X and issue show vpn ipsec status, and it will show 1 open tunnel even though the VPN connection is down. Once I issue restart vpn, the tunnel is cleared and I can reconnect. My next step is to update EdgeOS to the latest version to see if it includes a fix for this bug.

Stay tuned!

It’s up!

If you’ve read the previous post, you’ll know that I have 2 physical servers: a Dell R510 and a Dell R710. Until now, everything has been running on my R510 using a mix of natively running programs and VMs. This includes Plex, FTP, SSH, SMB, Minecraft, and a bunch of other stuff. At the end of last year I acquired a new server, the R710, with the intention of separating and restructuring my homelab setup into 2 servers: a VM host and a storage host. The R510 has 8 3.5″ hard drive bays connected to a SAS backplane, to which I can connect my choice of RAID controllers, so it’s the logical choice for the storage host, leaving the R710 as my VM host.

The new host has dual Xeon X5660s, a Dell PERC 6/i RAID controller (not really necessary; it’s simply there to interface with the single drive), and a 256GB Samsung 840 Evo SSD. Since the R510 will be the mass storage host, I can store all of my VMs on the single SSD without much fear of running out of space, even with a large Plex metadata folder (they can get quite large). Since I’m using Hyper-V in Server 2012 R2 on my current server, I decided to continue using Hyper-V and simply upgrade the host OS to Server 2016. As I’m currently studying technology at university, I have access to Microsoft Dreamspark, which gives me pretty much any Windows OS for free.

So it’s finally up and running! Using the iDRAC 6 Enterprise that came with my R710 (and wow, it is an amazing feature), I was able to install Server 2016 Datacenter without even being at home. The iDRAC has a pretty awesome feature called Virtual Console, which lets you remotely view whatever is being displayed on the screen at any time, including during boot. (I’m sure this isn’t exclusive to Dell, but it’s the only such system I have experience with so far.) Along with this, I can attach virtual media, meaning that after spending about 5 minutes at the server to configure an IP address for the iDRAC, I never have to sit in front of it again for software maintenance.


So my next steps are to migrate everything from my old server to the new one in the form of VMs. At the time of this writing, I’ve already migrated the 2 VMs that were running in Hyper-V, since those were easy enough. My next task is to migrate my Plex server to a new VM before I conduct a P2V on my old server. After that, I’ll perform the P2V migration, and I’ll be free to install FreeNAS on the R510 and complete the transition.

Stay tuned!

Introduction to the lab

Welcome to my lab!

What you see here is a conglomeration of my lab and my roommate’s lab. I’ve lived in this house with my roommate for almost a year, and during that time my lab has more than doubled. The rack itself is my roommate’s, although I’ve recently acquired a similar 24U rack to house my lab when I leave. Starting from the top, there is:

  • 1U APC KVM terminal (roommate’s; not working)
  • Wiremold PDU (roommate’s)
  • Dell R510 (my primary server)
    • Windows Server 2012 R2
    • 2x Xeon X5650
    • 16GB RAM
    • Dell PERC S300 RAID controller
    • 80GB HDD (boot drive)
    • 2x 1TB WD Red in RAID1 (backup and data storage)
    • 2TB HDD (media storage)
  • Dell R710 (my new virtualization host)
    • Windows Server 2016
    • 2x Xeon X5660
    • 32GB RAM
    • Dell PERC 6/i RAID controller
    • 256GB Samsung 840 Evo (boot drive and VM storage)
  • HP DL380 G7 (roommate’s server)
  • Netapp storage array (roommate’s)
  • 2x 1500VA Liebert rack-mount UPS w/ network card (one mine, one roommate’s)

And on the back:

  • 48 port ethernet patch panel (roommate’s)
    • Each room of the house is wired with 2 Cat5e jacks to this panel
  • 24 port Cisco Catalyst 2960S gigabit switch (mine)

On top of the rack is a Comcast cable modem and my Ubiquiti EdgeRouter X SFP. All the cabling in the rack and through the house is Cat5e and has been tested to support gigabit speeds. On the ground level there is a Linksys EA6400 router in bridge mode acting as a switch in the home theater and a wireless AP for the whole house.

That’s about it for hardware. Right now my R510 is handling all my services and storage. It’s currently running Server 2012 R2 with Hyper-V. I have Plex, Sonarr, Radarr, and Jackett running on the host. In a CentOS VM I have Deluge and PlexPy running, and in a Debian VM I have Pi-Hole running. I’ve just installed Server 2016 on the R710 and am planning to migrate my 2 VMs, plus a P2V conversion of the 2012 host, to the R710; then I’ll install FreeNAS 10 to a flash drive and use the R510 as a storage server.

That’s about it for an introduction to the lab. I’ll make more posts as things change and grow, so stay tuned!