New Host for my Lab!

I recently picked up a new host to add to my home lab. For years, I’ve been running VMware ESXi/vSphere on a single node. Recently, RAM utilization had crept up to the point where I couldn’t squeeze any more VMs onto it. So I set out on a month-long deal-hunt to find something that would work for me.

My requirements were:

  • Cheap (like real cheap!) <$300 preferred
  • Relatively modern CPU architecture (at least 4 cores, 8 logical)
  • Power-bill friendly (good energy efficiency for the performance)
  • Preferably rack mountable
  • At least 32GB of RAM
  • Quiet, if possible!
  • Some type of remote management
  • Support for remote monitoring (SNMP/iDRAC/etc.)
  • Dual gigabit NICs, with room to add a NIC card

I ended up scoring a used Dell R210 II. It ticked off nearly everything on the list. Here are the specs!

  • Intel Xeon E3-1240 v2 quad-core @ 3.4GHz
  • 32GB DDR3 1600MHz (maxed out)
  • 240GB SSD
  • Dell iDRAC 6
  • Dual gigabit NICs + dedicated management port

All in, shipped, for $285.

Read more… Pictures included!

The easy way to pull UPS statistics using CyberPower Panel

Since setting up my Grafana panel, one thing that had been bugging me is that I had to modify the “vanilla” CyberPower Panel software with pwrstat to pull the statistics I wanted. Not only was I concerned about conflicts with the actual shutdown software, but I didn’t want to introduce a forced post-processing step after every future panel upgrade. A redditor recommended that I check out init_status.js on the CyberPower Panel, to see if I could pull data from there instead of from pwrstat, as my previous script had been doing.

http://<your CPP IP>:3052/agent/ppbe.js/init_status.js

You get output like this:

var ppbeJsObj={"status":{"communicationAvaiable":true,"onlyPhaseArch":false,"utility":{"state":"Normal","stateWarning":false,"voltage":"122.0","frequency":null,"voltages":null,"currents":null,"frequencies":null,"powerFactors":null},"bypass":{"state":"Normal","stateWarning":false,"voltage":null,"current":null,"frequency":null,"voltages":null,"currents":null,"frequencies":null,"powerFactors":null},"output":{"state":"Normal","stateWarning":false,"voltage":"122.0","frequency":null,"load":10,"watt":90,"current":null,"outputLoadWarning":false,"outlet1":null,"outlet2":null,"activePower":null,"apparentPower":null,"reactivePower":null,"voltages":null,"currents":null,"frequencies":null,"powerFactors":null,"loads":null,"activePowers":null,"apparentPowers":null,"reactivePowers":null,"emergencyOff":null,"batteryExhausted":null},"battery":{"state":"Normal, Fully Charged","stateWarning":false,"voltage":"24.0","capacity":100,"runtimeFormat":0,"runtimeFormatWarning":false,"runtimeHour":1,"runtimeMinute":1,"chargetimeFormat":null,"chargetimeHour":null,"chargetimeMinute":null,"temperatureCelsius":null,"highVoltage":null,"lowVoltage":null,"highCurrent":null,"lowCurrent":null},"upsSystem":{"state":"Normal","stateWarning":false,"temperatureCelsius":null,"temperatureFahrenheit":null,"maintenanceBreak":null,"systemFaultDueBypass":null,"systemFaultDueBypassFan":null},"modules":null,"deviceId":1}};

Well, look at that! There’s pretty much everything I ever wanted, in a completely open format that doesn’t require ANY login. Guess it’s time to dust off vi and get to bashing. One COULD go the route of installing a JSON parser, but if you look closely, the output isn’t exactly valid JSON (it’s a JavaScript variable assignment). Either way, why incur the overhead when you can simply grep out what you want? So without further ado, here’s how I did just that.

First, you want to pull the status page into a variable:

cpp_json_data=$(curl -s "http://<your CPP IP>:3052/agent/ppbe.js/init_status.js")

Then you’ll want to parse out what you want with grep, like this:

# Input voltage
cpp_involts=$(echo "$cpp_json_data" |\
grep -oP '(?<="voltage":")[^."]*' | head -1)
# Battery voltage
cpp_battvolts=$(echo "$cpp_json_data" |\
grep -oP '(?<="voltage":")[^."]*' | tail -1)
# Load in watts
cpp_loadwatt=$(echo "$cpp_json_data" |\
grep -oP '(?<="watt":)[^,]*' | head -1)
# Capacity %
cpp_capacity=$(echo "$cpp_json_data" |\
grep -oP '(?<="capacity":)[^,]*' | head -1)
# Estimated runtime, hours and minutes
runtimeHour=$(echo "$cpp_json_data" |\
grep -oP '(?<="runtimeHour":)[^,]*' | head -1)
runtimeMinute=$(echo "$cpp_json_data" |\
grep -oP '(?<="runtimeMinute":)[^,]*' | head -1)
# Load %
cpp_loadpercent=$(echo "$cpp_json_data" |\
grep -oP '(?<="load":)[^,]*' | head -1)

What you see here is that we request the CPP page with the “JSON”-like data; the rest of the commands simply parse that data to pull out the numbers I wanted to plug into my InfluxDB. This turned out to be infinitely easier than the old method, and easier to maintain going forward. I’ve since updated my script to incorporate this approach, which completely eliminates the need for the script I previously had running on the CPP instance.
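For reference, here’s a rough sketch of how values like these can be pushed into InfluxDB over its 1.x HTTP write API. The measurement name cyberpower and database name ups below are placeholders I made up for illustration, not names from my actual setup:

```shell
# Sample values standing in for the variables parsed by the grep commands above.
cpp_involts=122
cpp_battvolts=24
cpp_loadwatt=90
cpp_capacity=100

# InfluxDB line protocol: measurement field1=value1,field2=value2
line="cyberpower involts=${cpp_involts},battvolts=${cpp_battvolts},watts=${cpp_loadwatt},capacity=${cpp_capacity}"
echo "$line"

# Then POST it to InfluxDB (1.x write endpoint; adjust host and database to taste):
# curl -s -XPOST 'http://<your influxdb host>:8086/write?db=ups' --data-binary "$line"
```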

Note: As with my other scripts, you’ll need a version of grep that supports Perl-compatible regular expressions (the -P flag). The grep that ships with Ubuntu does; the stock macOS grep does not, but a simple brew install will get you the GNU grep you’re looking for :). Alternatively, you could use awk, or perl if you are fancy!

There you have it: a completely transparent modification.


Here is the new script
Here are the changes made to

Let me know what you think, or if you have any suggestions for improvement! Happy labbin.

Setup a wicked Grafana Dashboard to monitor practically anything

I recently made a post on Reddit showcasing my Grafana dashboard. I wasn’t expecting it to really get any clicks, but as a labor of love I thought I’d spark the interest of a few people. I got a significant amount of feedback requesting that I make a blog post to show how I setup the individual parts to make my Grafana dashboard sparkle.

Let’s start with the basics. What the heck is Grafana? Well, this image should give you an idea of what you can build, or make better, with Grafana.


I use this single page to graph all the statistics I care about glancing at in a moment’s notice. It lets me see a quick overview of how my server is doing without having to access five or six different hosts to see where things are at. Furthermore, it graphs these over time, so you can quickly see how your resources are handling the workload on the server at any given point. So if you’re sold, let’s get started! There is a lot to cover, so I’ll begin by laying out the basics to help new users understand how it all ties together.

Let’s start with terminology and applications that will be used in this tutorial.

  • Grafana – The front end used to graph data from a database. What you see in the image above, and by far the most fun part of the whole setup.
  • Graphite – A backend database supported by Grafana. It has a lot of neat custom features that make it an attractive option for handling all of the incoming data.
  • InfluxDB – Another backend database supported by Grafana. I prefer this database for its speed to implement, my own prior knowledge, and as a byproduct of a few tutorials I dug up online. This tutorial will show you how to set up services using InfluxDB; however, I’m sure Graphite would work equally well if you want to color outside the box.
  • SNMP – Simple Network Management Protocol. I use this protocol as a standard query tool that most network devices natively support, or can have support added. SNMP uses OIDs to query data, but don’t worry, you don’t have to have any special addons if you don’t want them. I recommend you look up the specific SNMP datasheet for your device, as some devices have custom OIDs that give you very interesting graphable information! I’ll explain this more later.
  • IPMI – Intelligent Platform Management Interface. This is used to pull CPU temperatures and fan speeds from my Supermicro motherboard. Most server-grade motherboards have a management port with IPMI support. Look it up; you’ll be surprised by the information you can get!
  • Telegraf – During the course of this article you’ll see that I use a lot of custom scripts to get SNMP/IPMI data. Another option would be to use Telegraf. I eventually will move most of my polling to Telegraf, but for right now I’m using Telegraf purely for docker statistics. I’ll explain how to set it up here.
  • Collectd – Collectd is a popular old favorite. It’s an agent that runs on the bare-metal server or in a VM and automatically writes data into your InfluxDB database. Very cool, but I don’t use it, because I prefer not to install extra tools on every server just to monitor them.
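As a preview of the Telegraf piece: pointing it at Docker is a small config stanza. This is just a sketch of the general shape (the socket path, output URL, and database name will depend on your install):

```toml
# Collect container stats from the local Docker daemon
[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"

# Write everything into InfluxDB
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
```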

I’ll walk you through how I setup the following monitoring applications:

  • ESXi CPU and RAM Monitoring via SNMP and a custom script for RAM
  • Supermicro IPMI for temperature and fan speed monitoring
  • Sophos SNMP for traffic load monitoring
  • UPS/Load monitoring with custom scripts and SNMP through a Synology NAS and CyberPower Panel
  • Docker Statistics for CPU and RAM monitoring via Telegraf
  • Synology Temperature and Storage information using SNMP
  • Plex Current Transcodes using a simple PHP script

Read More…

Setting up a Dockerized GitLab at Home

There are few things I love more than git. It’s part of my daily workflow, and I’m not even a developer by profession (anymore). I’ll frequently git init folders just to have history, and to transfer things between servers. One thing I do often is create git repositories in my configuration folders on my servers so I can see what I changed, and roll back in case I royally mucked something up.

This isn’t a git primer; instead I want to share how I set up an instance of GitLab on my DMZ Docker host, which runs a few external services for me. Compared to the vanilla installation guide, this is MILES easier to stand up in Docker. What does it give you? Future upgrades are easy; the whole database, configuration, and history live in a convenient, easy-to-back-up folder structure; and you gain the ability to move this server around as needed.

To get started, this tutorial assumes a few things.

  1. You have an Ubuntu Linux server with Docker installed.
  2. You’re already familiar with the basics of Docker (this isn’t a tutorial for that either).
  3. You have a basic understanding of Linux operations, moving files around, and what these commands mean.
  4. Your docker server/VM has 2 CPU cores and 2GB of RAM available.

So let’s get started!
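To give a taste of how simple the Docker route is, the launch command has roughly the shape below. This is a sketch, not the exact command from my setup: the hostname, published ports, and host paths are placeholders to adjust for your environment.

```shell
# Sketch: launching GitLab CE in Docker.
# The three --volume mounts keep config, logs, and data on the host,
# which is what makes backups and future moves a simple folder copy.
GITLAB_HOME=/srv/gitlab

gitlab_run() {
  docker run --detach \
    --hostname gitlab.example.com \
    --publish 443:443 --publish 80:80 --publish 2222:22 \
    --name gitlab \
    --restart always \
    --volume "$GITLAB_HOME/config":/etc/gitlab \
    --volume "$GITLAB_HOME/logs":/var/log/gitlab \
    --volume "$GITLAB_HOME/data":/var/opt/gitlab \
    gitlab/gitlab-ce:latest
}
# gitlab_run   # uncomment on the Docker host
```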

Read More…


Docker @ Home? Why yes!

For years I’ve run a personal home server. Well, scratch that; maybe I should call it a lab. I’ve been a long-term user of vSphere at home, and over the years I’ve slowly but surely expanded my environment to be more akin to a small-business setup. It’s a hobby, and I enjoy it. Why not?

Recently, while visiting r/homelab, I ran across a post about a guy who set up a Linux host running all of his home media applications in Docker. Docker? What the heck is Docker? Why do I need that when I can just spin up VMs? Or how about a better question: why do I need that when I can just install all of these apps in a single VM?

In short – here are the advantages:

  • Less resource intensive than separate VMs. (Duh!)
  • Complete environment isolation, meaning no more Mono or Java libs cluttering up your host server.
  • Speaking of libs: having the RIGHT environment for the app you are running. What? I need an OLD version of PHP to run this? No big deal!
  • Separation of configuration data from application data.
  • EASY upgrades: docker pull / docker rm / docker run. Or my personal favorite… just script it!
  • Quick and easy deployment of new tools/toys to play with, without causing harm to the rest of your system.
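The “just script it” upgrade from the list above can be sketched as a tiny shell function. The image and container names in the example call are placeholders, and a real run command would carry your actual --volume and --publish flags:

```shell
# Sketch of the pull/rm/run upgrade dance, wrapped in a function.
upgrade_container() {
  image="$1"   # e.g. linuxserver/plexmediaserver (placeholder)
  name="$2"    # e.g. plex (placeholder)

  docker pull "$image"    # grab the newest image
  docker stop "$name"     # stop the old container
  docker rm "$name"       # remove it (state lives in mounted volumes)
  docker run -d --name "$name" "$image"   # recreate on the new image
}
# upgrade_container linuxserver/plexmediaserver plex
```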

There are MANY other reasons to use Docker, especially when it comes to development (both Linux and web development). I won’t get into that here; mostly I’m logging this setup for myself. I’ll share after the break how I set up a brand new Ubuntu 16.04 VM with Docker, and migrated my entire home media server to it in one night.

Before continuing – I always give credit where it’s due. Much of my inspiration came from this post []. Check it out for more (likely better written) posts similar to this one!

Read More…

Finally a place for things…

I finally got around to setting up a blog for my tech hobbies.

What can you expect here? Mostly just things I discover as I’m playing around with my home lab: product reviews (rare), programming, and open-source tools. I’m primarily going to use this to document neat things I’ve set up for myself.

Stay Tuned!