Snakefifty – A Paintball Marketplace

Last year, after about 7 years in the Database Administration world, I decided to transition to a Software Engineer role. I needed a project to dive into to help get up to speed and ready for that transition. I decided to tackle a pain point in one of my hobbies.

I’ve been involved with paintball on and off since I was in my teens. One thing has always been consistent – a strong used equipment market where players buy, sell, and trade gear. Back when I first got into the sport most of that activity took place on forums. Over the years that community has mostly moved over to Facebook groups. Both have some strengths, but I felt both were lacking pretty significantly.

Today’s options


Honestly, I still prefer the forums to Facebook. The most popular forum has a feedback system that, for the most part, helped users confidently do business with each other. I view that as the most valuable feature of that platform and it is what always keeps me coming back.

Unfortunately the forums haven’t changed much since the early 2000s. Sharing images requires third-party hosting and BBCode. Chat functionality is pretty primitive. Facebook wins by a landslide in the usability category, and that has resulted in a shrinking user base on the forums.


Facebook’s biggest flaw is that it doesn’t even allow paintball markers to be sold on its platform, as it categorizes them as firearms (eyeroll). Because of that, users don’t actually use the marketplace features of Facebook. Instead they meet in groups and arrange deals privately over Messenger. Groups and posts get flagged/removed all the time, and it’s hard to keep up with the current relevant groups and their rules. This also circumvents the feedback system, which makes scamming even easier.


eBay isn’t much of a contender here. Their fee structure scares most users away, and a lot of the gear you find is overpriced to compensate. There’s also no real way to trade. I’ll use it from time to time, but it’s always my last option.

Introducing Snakefifty

I set out to build a platform that combines the strengths of all the above while keeping fees minimal: a modern platform with a simple UI that drives a transaction workflow capable of facilitating all types of deals. Building workflows around the various transaction types removes all doubt about what is expected of a user and eliminates the need to figure it out deal by deal.

High level feature overview:
  • Offer/counter offer system
  • Ability to handle standard purchases, purchases from cash offers, straight trades, trades adding cash, and trades including multiple items
  • Feedback system
  • Modern chat functionality
  • Ability for sellers to offer free, fixed, or estimated shipping (USPS/FedEx/UPS)
  • Admin panel to simplify moderation
  • Closely integrated with PayPal to mirror current user experience

Hosting and maintaining this wouldn’t be free, so I’d need to monetize it somehow. I decided a minimal flat fee is how I would tackle that. While it’s possible that users could reject the idea of an additional fee, I believe the value add is there.

I already mentioned that Snakefifty uses PayPal, but I did consider other options. I would have loved to use Stripe Connect (their marketplace offering), but the pricing was just unrealistic for this niche. Fees would have ended up being similar to eBay’s, and that would be a non-starter. I ultimately stuck with PayPal as that is what the vast majority of the community uses and is already familiar with. I simply leveraged PayPal’s parallel payments to collect a fee.

Current Status

I am pretty proud of the platform and I believe it has the potential to provide significant benefit for the paintball community. That said, I’ve decided to stop work on it for now to focus on other projects/interests.

Taking a quick pulse of the community, it seems that while many acknowledge the pain points in the current channels, most are tolerant of them. Acquiring enough users to make the platform viable is a significant challenge and one that I am not entirely confident I would succeed in. Even though I aim to keep fees minimal, I can’t get around the fact that I am adding a fee that doesn’t exist on Facebook and the forums.

There have also been some legal/tax changes that complicate running a platform like this. In 2018 the Supreme Court ruled that states could charge sales tax on out of state purchases even if the seller doesn’t have a physical presence in the state. That in itself wasn’t news to me. What I didn’t realize, however, is that over the last year or so most states have shifted the burden of collecting/remitting taxes to “marketplace facilitators.” The benefit is that an individual seller, for the most part, no longer has to worry about paying taxes to every single state their customers reside in. States also save on resources as they don’t have to try to collect from individual sellers.

Putting that burden on the marketplace seems reasonable in the cases of Amazon and eBay, but it makes it significantly more difficult to run a small marketplace like this. While most states do carve out exceptions for marketplaces with limited volume, if I ever cross those thresholds it will turn into a full time job to keep up. Keeping fees minimal is a hard requirement for this platform to be viable and even with the volume to cross those thresholds it wouldn’t be worth the effort required. Even if I could find a way to keep up, users would likely perceive the tax as just another fee that they don’t have to deal with when they deal privately in other channels.

Ultimately I went into this project knowing that it would be a pretty big challenge for it to succeed as a business. That was a secondary goal to building something that let me get up to speed in my new role. This project was absolutely a success on that front.

Here’s a quick tour of the project in its current state:

Homelab – 2019 Edition

I made some fairly significant upgrades to my home lab a few months back and an update is overdue. Without wasting any time, here is the 2019 edition in all its glory:

My previous setup did what I needed it to, but the storage situation was a mess: lots of standalone drives with no real redundancy. I initially looked into a few options for addressing this, including buying a NAS and building a NAS. Then I stumbled on Unraid. I was vaguely familiar with Unraid, but I never gave it a real look until recently. Unraid is super flexible with storage options, has a built-in hypervisor, and makes standing up containers super easy. Ultimately the relatively low cost of used hardware and the flexibility of Unraid made for the best combo for my needs. My plan was to build out a new server, shuck all the external drives, and migrate everything off of my ESX host (all VMs) to the new Unraid box.

After stalking eBay for a while, I ended up with the following:

  • 2x Xeon 2650 v2 (16 physical cores)
  • 64 GB ECC DDR3 RAM
  • GeForce GTX 1060 6GB SSC
  • 46 TB spinning, 1 TB SSD (already owned the storage)
  • Fractal Design Define R5 case

The migration itself wasn’t trivial and it made me question why I created so much work for myself outside of work. I managed to get everything completely migrated to new containers/VMs on the Unraid server, with the majority of that going to containers. This also included migrating the physical Windows server that was running on the TS140. That newly freed up TS140 was repurposed into my development server. I am currently running Proxmox and a handful of VMs on there. I had a UPS die in the middle of all this (found out during a power outage, as seems typical) but luckily with the reduced power draw compared to the old i7 I can handle running everything off of one UPS.

You may have noticed I threw a pretty nice graphics card into a server. One of my favorite things to come out of this is the capability to do remote gaming. I have the GPU passed through to a VM on my Unraid server and I use NVIDIA GameStream and Moonlight to game on my Thinkpad. It has been pretty much flawless to this point. I am not a huge gamer and the GPU helps with some other interests (editing and such) but it has given me the ability to play modern games on high settings without a dedicated gaming computer.

No changes on the networking side of things other than that I now run all my home DNS through Pi-hole.

Documenting My Home Network/Lab

I’m a big believer in learning by doing. Over the last couple years I’ve built up a nice little home network/lab where I host various applications and test new tech. I will hardly be the first person to make this recommendation, but if you are able to get your hands on some hardware to set up a home lab I would highly recommend it. Any old computer you can turn into an ESXi host is enough to get started. I have learned an enormous amount about various technologies that I wouldn’t have had much opportunity to learn elsewhere. First, the obligatory picture:

The breakdown:

  • Ubiquiti Edgerouter X
  • Ubiquiti UniFi AP AC LR (not pictured as it’s mounted in a central spot in the house)
  • ARRIS SURFboard DOCSIS 3.0 Cable Modem (SB6183)
  • Cisco SG300-28
  • Cisco SF302-08MPP
  • TRENDnet 24-Port Cat6 Patch Panel (22 runs in the house, minimum for 2 per room)
  • Startech 12U 19-Inch Desktop Open Frame 2 Post Rack
  • Raspberry Pi 3 Model B
  • Lenovo TS140
    • Xeon E3-1226 v3 3.3 GHz
    • 16 GB RAM
    • 44 TB (raw) of various spinning storage, mostly WD Reds (yeah, yeah, I know. I’ll get a NAS eventually)
  • Old computer serving as my ESXi host
    • Intel Core i7-2600K Sandy Bridge Quad-Core 3.4 GHz
    • 16 GB RAM
    • Samsung 840 EVO 250GB SSD
    • 5 TB (raw) of WD Reds
  • 2x CyberPower CP1500AVRLCD UPS
    • One is for my “core” equipment (network stuff/TS140), the other is on the ESXi host

I am a huge fan of the Ubiquiti hardware. This combo was one of the first purchases, and it’s pretty fantastic to not have to regularly “reset the router” like seems to be necessary for just about every consumer router/AP. I picked up both switches off of eBay and have been pleased with them. Some Cisco purists dislike the small business line, but they’ve been great for my purposes. The SG300-28 (gigabit) is my main switch and the SF302-08MPP (Fast Ethernet, PoE) runs my cameras. I didn’t need to have separate switches, but based on what was available on eBay this was perfect for my goals. If we’re being honest, need went out the window a long time ago anyway. I run a handful of VLANs and 2 separate physical LANs to segment off the different portions of my network. Necessary? Probably not, but where’s the fun in that? This has allowed me to tinker with VLANs and set up firewall rules. Fun stuff!

The TS140 runs AD, DNS, Blue Iris (cameras), and acts as a file server. My storage situation could certainly be improved, but all important data is backed up locally and remotely and any data that I consider to be less important is at least mirrored to separate local storage. My Raspberry Pi runs PiVPN which is a super simple way to run OpenVPN and is much more manageable than my previous setup where I ran it on my router. The ESXi host runs the following VMs and applications:

  • Plex and related apps
  • Borg Backup
  • Crashplan
  • Nextcloud
  • Kali box for security tinkering
  • Windows VM for when I need to run Windows specific apps
  • LibreNMS
  • Whatever I feel like playing with at the moment

Each of my Linux boxes is running Ubuntu 16.04. All VMs are regularly backed up using ghettoVCB.

That’s pretty much it for now. I will update when I inevitably add more!

SQL Saturday #357 Recap

In January I posted about a couple of upcoming speaking engagements and, after some nagging for a follow up from @JayDue216, here it is. First up was the Ohio North SQL Server User Group’s January meeting. This wasn’t my first time speaking in a professional setting, but it was by far the most SQL Server-experienced audience I had been in front of. All in all, I thought it went alright. I didn’t feel too great about it immediately afterward, but I received some good feedback from those in attendance, which is always great to hear. Nerves got to me a bit and I caught myself tripping over my own words at a few points. I had a great deal of confidence in the content I developed, and I can’t express enough how valuable that is when your delivery isn’t going as smoothly as planned. Having that confidence allows you to slow down, gather yourself, move to the next slide, and keep powering through the session. At the end of the day it felt good to get the first one out of the way and identify areas for improvement.

After getting my feet wet next up was SQL Saturday #357 in Cleveland. I was initially scheduled to speak early in the day but an out of town speaker was hoping to catch an early flight home and I agreed to move to the last time slot of the day. I caught a few sessions early in the day, but I ended up spending a great deal of the time in the speaker room going through my presentation to make sure I was ready to go.

I only did it for the sweet SQL Saturday pullover

Unlike the user group, I felt very good about the presentation after I had finished. I didn’t feel myself tripping over my own words this time and my demos went smoothly. Despite being the last session of the day I had a full classroom and the audience seemed pretty engaged throughout the entire presentation.

Meet the New and Improved MCSA Certification

When I was working through my MCSA certification the least enjoyable part, for me, was the 463 exam (Implementing a Data Warehouse with Microsoft SQL Server 2012). I wasn’t really a fan for a number of reasons. It was the least relevant to my role at the time and it felt like portions of the exam were less about practical knowledge and more about promoting those technologies. Still, being a nerd I enjoyed the challenge of learning something new.

On Monday, Microsoft announced that there will be changes to the current exam structure. The 463 exam is no longer a requirement to receive the certification. The 461 (Querying Microsoft SQL Server 2012) and 462 (Administering Microsoft SQL Server 2012 Databases) exams are still required, but now you have four choices for your third exam. Those choices include the 463 exam, 411 (Administering Windows Server 2012), 412 (Configuring Advanced Windows Server 2012 Services), and 483 (Programming in C#). This gives more flexibility to those like myself whose interests aren’t really on the BI side of things. I would have loved spending more time learning Windows Server or resurrecting my C# skills.

I think this change will allow the certification to appeal to a wider range of SQL Server professionals which is a win for everyone.


WAIT_AT_LOW_PRIORITY in SQL Server 2014

My first post on this blog detailed a scenario where a read uncommitted select statement could ultimately block an insert/update/delete. In that scenario, a long-running read uncommitted select was executed, requiring a schema stability lock. That lock prevented an online rebuild from acquiring the schema modification lock it needed, and an update statement got queued up behind the rebuild.

SQL Server 2014 introduced an option that will allow more control over how online rebuilds behave in a scenario such as the one I described. The WAIT_AT_LOW_PRIORITY option gives you a few different choices in dealing with blocking scenarios. The syntax, from Books Online, is below.
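Paraphrasing the documented form rather than copying it verbatim, the option looks roughly like this:

```sql
ALTER INDEX { index_name | ALL } ON <object>
REBUILD WITH
(
    ONLINE = ON
    (
        WAIT_AT_LOW_PRIORITY
        (
            MAX_DURATION = <time> [ MINUTES ],
            ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS }
        )
    )
);
```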

MAX_DURATION is the time, in minutes, the rebuild will wait to acquire the necessary locks before taking action. ABORT_AFTER_WAIT tells it what to do after that time period has passed. Setting it to NONE means SQL Server will just continue to wait, while setting it to SELF will cause it to give up on the rebuild. If the rebuild absolutely must finish, this could be set to BLOCKERS, which will kill the process or processes preventing it from completing. To see this in action, let’s first recreate our old scenario without the WAIT_AT_LOW_PRIORITY option.
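A sketch of the three sessions involved, using hypothetical table and index names:

```sql
-- Session 1: a long-running read uncommitted select.
-- NOLOCK still takes a schema stability (Sch-S) lock that is
-- held for the duration of the query.
SELECT COUNT(*)
FROM dbo.BigTable AS b1 WITH (NOLOCK)
CROSS JOIN dbo.BigTable AS b2;  -- deliberately slow

-- Session 2: the online rebuild needs a schema modification (SCH-M)
-- lock and queues behind session 1's Sch-S lock.
ALTER INDEX IX_BigTable_SomeCol ON dbo.BigTable
REBUILD WITH (ONLINE = ON);

-- Session 3: this update now queues behind the rebuild's pending
-- SCH-M request, effectively blocked by a "harmless" dirty read.
UPDATE dbo.BigTable
SET SomeCol = SomeCol
WHERE Id = 1;
```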


Now let’s try the same thing except we will use the WAIT_AT_LOW_PRIORITY option with a MAX_DURATION of 1 and ABORT_AFTER_WAIT set to SELF. This means after a minute of waiting the rebuild will give up.
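With the same hypothetical names, the rebuild statement becomes:

```sql
-- Give up (SELF) after waiting one minute for the SCH-M lock
ALTER INDEX IX_BigTable_SomeCol ON dbo.BigTable
REBUILD WITH
(
    ONLINE = ON
    (
        WAIT_AT_LOW_PRIORITY
        (MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF)
    )
);
```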


As we can see, after a minute of waiting the rebuild gave up and we would not have been able to recreate our previous scenario.

Automatic Page Repair in Action

One of the cool features of Mirroring and Always On Availability Groups is Automatic Page Repair. With Mirroring or Availability Groups, when SQL Server encounters certain types of corruption it will look to the Mirror/Replicas to see if they have a clean copy of the page. If they do, it will repair the page in the Principal/Primary and make your life easy. Obviously any occurrence of corruption requires further investigation, but this is a handy feature where SQL Server can bail you out of certain scenarios. In this post I will step through an example of what happens when you corrupt a page in a normal database and then what happens when you corrupt the same page in a database that is part of an Availability Group.

For this example I have two SQL Server 2012 servers (V-SQL1, V-SQL2) that live on a Windows Server Failover Cluster. They are both part of an Availability Group (AG1) that contains the BaseballData database. These replicas are in synchronous commit mode. V-SQL1 also has a copy of the BaseballData database called BaseballData2.


We will start by corrupting a page in BaseballData2 to see how SQL Server normally behaves when it encounters corruption, and then we will do the same thing with the AG database (BaseballData). Let’s begin by picking a random page to corrupt.
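One way to find a candidate page is the classic (undocumented) DBCC IND/DBCC PAGE combo. The table name here is an assumption; any user table works:

```sql
-- List the pages belonging to the table (undocumented command)
DBCC IND ('BaseballData2', 'dbo.Teams', 1);

-- Route DBCC output to the client, then inspect a specific page:
-- file 1, page 7071, print option 3 (full per-row detail)
DBCC TRACEON (3604);
DBCC PAGE ('BaseballData2', 1, 7071, 3);
```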



Here we have picked page 7071 slot 33 of file 1 which contains data about the 1942 Yankees.


Next we are going to use a hex editor to corrupt the page we previously identified by writing some zeroes to the data file. We will first need to take the database offline.
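Taking the database offline so the data file can be opened in the hex editor is a one-liner:

```sql
-- Kick everyone out and take the database offline
ALTER DATABASE BaseballData2 SET OFFLINE WITH ROLLBACK IMMEDIATE;
```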


Now that it is corrupt, let’s bring the database back online and try to retrieve some information about the 1942 Yankees.
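Something along these lines, with the table and column names assumed from a typical baseball stats schema:

```sql
ALTER DATABASE BaseballData2 SET ONLINE;

-- Hypothetical query touching the corrupted page
SELECT *
FROM dbo.Teams
WHERE yearID = 1942 AND teamID = 'NYA';
```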


As you can see, we encountered an invalid checksum. In this situation, our best bet is to go to a backup to get that data back. Let’s take a look at msdb..suspect_pages and sys.dm_hadr_auto_page_repair.
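Both are plain system objects you can query directly:

```sql
-- Pages SQL Server has flagged as suspect after checksum/IO errors
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;

-- Pages automatic page repair has attempted to fix (AG databases only)
SELECT database_id, file_id, page_id, error_type, page_status, modification_time
FROM sys.dm_hadr_auto_page_repair;
```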


We can see our corrupt page in suspect_pages and we see nothing in dm_hadr_auto_page_repair. As you can probably guess, we will see something in dm_hadr_auto_page_repair later in this example. Let’s go ahead and step through the same process on the BaseballData database. To do this, I will stop the V-SQL1 instance (causing failover) and edit the data file the same way as we did for BaseballData2. I will then start the instance back up.




Now that our instance is running again, let’s fail back to V-SQL1 and then try to run our select statement.


It threw the same error. Great example, right? Let’s try again and see what happens.


Success! It looks like Automatic Page Repair had our back. We can verify this by querying suspect_pages and dm_hadr_auto_page_repair.


Now we see our page listed in dm_hadr_auto_page_repair, because SQL Server repaired that page with Automatic Page Repair. Running CHECKDB returns no errors as our corruption has successfully been repaired.


Sort order without an ORDER BY

If you’ve worked with SQL Server for any length of time, you know that sort order is never guaranteed without an ORDER BY. I was recently asked why a query (without an ORDER BY) brings back results in alphabetical order in production while in test it returns them in the order they were inserted to the table. To explain this, I showed a quick example that I thought I would share.

First, let’s create a table and populate it with some data. We will be creating a clustered index on an identity column and a nonclustered index on the lname column. I then generated 100 dummy rows to insert into the table.
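A minimal version of that setup might look like this; the table and column names are assumptions based on the description:

```sql
-- Clustered index on the identity column, nonclustered on lname
CREATE TABLE dbo.Person
(
    id    INT IDENTITY(1, 1) NOT NULL,
    fname VARCHAR(50) NOT NULL,
    lname VARCHAR(50) NOT NULL,
    CONSTRAINT PK_Person PRIMARY KEY CLUSTERED (id)
);

CREATE NONCLUSTERED INDEX IX_Person_lname ON dbo.Person (lname);

-- The 100 dummy rows go in with ordinary inserts, e.g.:
INSERT INTO dbo.Person (fname, lname) VALUES ('Jane', 'Lambert');
```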

Now let’s go ahead and select all rows from the table where the last name begins with the letter L and grab the execution plan.
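Assuming a hypothetical dbo.Person table with id/fname/lname columns, the query is simply:

```sql
SELECT id, fname, lname
FROM dbo.Person
WHERE lname LIKE 'L%';
```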


Looking at the identity column, we can see it brought back the results in the order they were inserted into the table. With the table being so small, it was most efficient to scan the clustered index, and since we have a clustered index on the identity column, the rows are stored on the page in that order. Now let’s hint the same query to use the nonclustered index on lname.
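Again assuming hypothetical table and index names, the hinted version looks like:

```sql
-- Force the nonclustered index; rows now come back in lname order,
-- the order of the index's leaf level
SELECT id, fname, lname
FROM dbo.Person WITH (INDEX (IX_Person_lname))
WHERE lname LIKE 'L%';
```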



Here we can see that it returned the results ordered by lname, which is the order the lname column and clustering key are stored in at the leaf level of our nonclustered index. This turned out to be exactly what was happening in the environment I was asked about: the table in their production system was much larger, and the optimizer was correctly choosing to use a nonclustered index. This is just a simple example of how sort order is never guaranteed without an ORDER BY.

All About Collation

Collation is one of the easiest settings to overlook when configuring SQL Server but it’s one that can come back to bite you if it isn’t properly set. In this post I intend to cover the basics of collations, common issues resulting from collation misconfiguration, and basic resolution of those common issues.

So what is a collation? One way to think of a collation is as a dialect of a language. For example, in the U.S. we use the word color. Other English speaking countries use the word colour instead. Who is right? Well, that would depend entirely on where you are from. If you are in the U.S., color is probably going to be considered correct. If you had never even seen the word colour, would you know it means the same thing as color?

Collations can be thought of in a similar way. There are many variations of a language, each with different rules associated. These rules are set by the collation and govern things like sorting, case sensitivity, and accent sensitivity. In one collation, ‘ABC’ may be the same thing as ‘abc’ while in others that may not be the case. The “language” associated with a collation is going to be a code page.

Single byte (8 bit) character types (char, varchar, text) can represent a total of 256 different values with their bits flipped on and off. Since there are far more than 256 characters in use around the world, there is no way to represent every possible character in a single byte. This is where code pages come in. Code pages exist at the Windows level and keep track of which characters those 256 combinations represent.

For example, English speaking countries are going to use the Latin code page (1252). The 256 characters on that code page are going to be characters used in the English language. We wouldn’t be able to fit every Greek character in that same code page, so Greek has its own code page as well (1253). All collations, their descriptions, and their code pages can be found with the query below.
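A query along these lines does the trick:

```sql
SELECT name,
       description,
       COLLATIONPROPERTY(name, 'CodePage') AS code_page
FROM sys.fn_helpcollations()
ORDER BY name;
```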

Unicode data types (nchar, nvarchar, ntext) are stored as two bytes per character, so they can actually store multiple languages without having to reference a code page. Unicode data types still play by the sorting and comparison rules of the collation, so you don’t get to ignore collation just because you use them.

Collation is set at the instance level, the database level, and the column level. When a new database is created it will use the instance collation unless otherwise specified. Your system databases will also match that instance collation. Any character columns created in your user databases will use the database collation when they are created unless a collation is defined in the create statement.

When problems with collation occur it’s typically because something was overlooked in the planning stages. At some point somebody realizes the wrong collation was used and they decide to change it at the database level. Problem solved right? Unfortunately that is not the case as the database collation is simply what SQL Server looks to when new objects are created. Changing this setting will have no impact on existing objects. So in this scenario, the change is made and time goes on and new tables are added to the database using this new collation. What happens at this point is that you have two collations actively used in your database and if there is ever a need to compare columns in different collations you will encounter errors.

The other issues commonly seen are TempDB collation mismatches. We know that TempDB’s default collation is going to match the instance collation so any objects (such as temp tables) created in TempDB will use that collation by default. So what happens if your user database uses a different collation and you have to compare it to data in a temp table created with TempDB’s default collation? You guessed it, you get collation mismatch errors.

The proper fix for these issues really depends on the scenario. With collation mismatches within a user database you can simply alter the columns if there are only a few tables impacted. Unfortunately these issues, in my experience, are rarely caught that early. I have worked with databases that have had multiple collations throughout the database and manually altering them was more work than it was worth. In this scenario I would just migrate all the data to a new database that was created in the correct collation.

Your other option if there are only a few tables is to collate the columns in question within the query. This is done by adding a COLLATE clause to the column. Unless this is a report or some other ad-hoc query, this probably isn’t going to be a very viable option.

With TempDB collation mismatches the fix is somewhat similar. You can take the sledgehammer approach and either change the collation of the database and all existing objects to match the instance (and therefore TempDB) or change the instance collation. Neither is an easy task and can introduce even more issues if done incorrectly. Alternatively, you could modify the code like in the other scenario. This could be accomplished by specifying a collation when creating the temporary object, or modifying the queries to collate the columns on the fly.

Not scared yet? Well the above are best case scenarios assuming that the collations in question share the same code page. When they are using different code pages a potential for data loss exists when changing between collations, and you probably won’t know when it happens. Your best bet is to avoid these issues altogether by making sure the proper collation is used in the first place. As a general rule, your instance and all databases on that instance should use the same collation. This is the best way to be sure that you will never encounter issues within your user databases or in TempDB.

Let’s create a test database and step through a few examples.
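A minimal sketch of the setup; the database name and the lname column are assumptions:

```sql
CREATE DATABASE CollationTest;
GO
USE CollationTest;
GO
-- person1 inherits the database (and here, instance) collation
CREATE TABLE dbo.person1 (lname VARCHAR(50));
INSERT INTO dbo.person1 VALUES ('Lambert'), ('Lopez');
```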

Now let’s take a look at the collation at the instance, object, and table levels.


We can see that our instance collation is SQL_Latin1_General_CP1_CI_AS. As we learned, our database collation will match that, along with our columns, since we did not specify otherwise. Now let’s change our database’s collation and create a new table with that collation. If we select from the two tables, we will see that the data looks identical.
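Sticking with hypothetical names, the collation change and the new table look like this:

```sql
-- Existing columns keep their original collation; only the
-- database default changes
ALTER DATABASE CollationTest COLLATE Latin1_General_100_CI_AI;
GO
-- person2 inherits the new database collation
CREATE TABLE dbo.person2 (lname VARCHAR(50));
INSERT INTO dbo.person2 VALUES ('Lambert'), ('Lopez');

SELECT lname FROM dbo.person1;
SELECT lname FROM dbo.person2;
```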


Both sets of data look identical, but watch what happens when we try to join them together.
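Assuming person1/person2 tables with a hypothetical lname column in each, the failing join would be:

```sql
SELECT p1.lname
FROM dbo.person1 AS p1
JOIN dbo.person2 AS p2
    ON p1.lname = p2.lname;
-- Fails with Msg 468: cannot resolve the collation conflict
```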


And now we have the dreaded “cannot resolve the collation conflict” error. This is why collation mismatches can be a silent killer: you generally don’t know it’s an issue until it jumps out at you. As we discussed earlier, there are two ways to fix this: alter the collation of the columns themselves, or modify the query. First we will alter the columns, with success.
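With a hypothetical lname column, the column-level fix is:

```sql
-- Bring person2 back in line with person1's collation
ALTER TABLE dbo.person2
ALTER COLUMN lname VARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AS;

-- The join now succeeds
SELECT p1.lname
FROM dbo.person1 AS p1
JOIN dbo.person2 AS p2
    ON p1.lname = p2.lname;
```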


The other option is to use the COLLATE clause within our query. I have set our columns back to the wrong collation for this example.
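Instead of touching the table, one side of the comparison can be collated on the fly (again with a hypothetical lname column):

```sql
SELECT p1.lname
FROM dbo.person1 AS p1
JOIN dbo.person2 AS p2
    ON p1.lname = p2.lname COLLATE SQL_Latin1_General_CP1_CI_AS;
```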


Let’s now look at TempDB collation mismatches. As it stands currently, our person1 table is using the SQL_Latin1_General_CP1_CI_AS collation and person2 is using Latin1_General_100_CI_AI. Here we will create and populate a temp table and then try to join to it.
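Sketching that out with a hypothetical lname column:

```sql
-- #person picks up TempDB's default (instance) collation
CREATE TABLE #person (lname VARCHAR(50));
INSERT INTO #person VALUES ('Lambert');

-- Fails: person2 (Latin1_General_100_CI_AI) vs. the temp table
SELECT p2.lname
FROM dbo.person2 AS p2
JOIN #person AS t
    ON p2.lname = t.lname;
```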


Once again, our join fails because our temp table was created using the default collation of TempDB. If we were to include a collation in our create table statement, this would work without issue. Alternatively we could modify our select query to use the COLLATE clause like in the previous example.
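Both fixes, sketched with the same hypothetical names. COLLATE DATABASE_DEFAULT is handy here because it resolves to the current database’s collation rather than a hard-coded name:

```sql
-- Option 1: create the temp table in the right collation up front
CREATE TABLE #person_fixed
(
    lname VARCHAR(50) COLLATE DATABASE_DEFAULT
);

-- Option 2: collate within the join itself
SELECT p2.lname
FROM dbo.person2 AS p2
JOIN #person AS t
    ON p2.lname = t.lname COLLATE DATABASE_DEFAULT;
```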


Unfortunately, most real world examples I have encountered aren’t always this simple to resolve. If thousands of objects are using multiple collations the solution becomes a little less clear. It becomes even more difficult when different code pages are in play as data loss is a possibility. With proper planning, however, these situations can be avoided entirely.