Today is a big day for me: after many months (or years, depending on how you look at it), I’ve finally launched the first product for my business, BrandVantage. This post is the story of how I started with one idea and ended up launching with a different one.
The Original Idea: Let’s build a digital brand expert!
For a number of years I worked as a web developer at a local agency, and in that time I learnt a lot about how a variety of different businesses operated online.
There were a few key “problems” I found in common across many of those businesses:
- Under-utilising analytics
- Misunderstanding analytics
- Not keeping on top of industry information
- Lack of competitor analysis/understanding
- Difficulty with Search Engine Optimization (SEO)
In moderate-to-large companies with marketing departments, most of this can be covered by one or more staff dedicated to these things. In smaller companies, these tasks normally fall on the business owner, who is already wearing many different hats. It felt like there was something here: if I could automate some of these tasks in different ways, I could help business owners and earn myself some money along the way.
Automating tasks, especially in the analytics or SEO spaces, isn’t a new idea. In fact, I’ve seen many businesses in a similar space launch on Product Hunt in the years since I started, but that didn’t deter me. I was building a better mousetrap to launch at a lower price, not something truly innovative, so it was always going to be an uphill battle. This area, helping small businesses online be as efficient as bigger businesses can be, is something I felt passionate about, so I proceeded anyway.
Attempt One: Very Hacky (in PHP)
Way back in 2015/16/17, while still at my full-time job, I spent nights and weekends building and tinkering on solutions to the problems business owners face. It was a hacky PHP solution pulling real-time information from sources like Twitter, Google Analytics and Facebook. A hacky approach seemed like a good idea because that seemed to be the way people launched things: do the quickest, hackiest thing you can to get it out the door.
While working on it, I had a few interested parties, though what I built could barely be considered a prototype. The thing was a mess. I could do some basic queries, but it wasn’t what I considered sellable and it definitely wasn’t user-friendly, something I considered key to the product. I was also running into technical problems with scale: every sufficiently complex query was performed in real time, and that was only getting more complicated. Real-time processing had to go; I needed to pre-compute results and store them in a database.
I wanted to take this more seriously and I didn’t feel like a “quick and hacky” approach to building a product was right for me. With this in mind, it seemed like a good opportunity to change the tech stack to something that would be better long term.
Attempt Two: Slightly Less Hacky (in .NET)
Moving to .NET felt like the smart move for me: at my job I had spent a lot more time working in .NET than PHP, and I vastly preferred the tooling in .NET over PHP’s. That said, the .NET code I had worked on to date would definitely be considered “legacy code”.
My first version in .NET (specifically .NET Framework), predating my use of version control, tried to keep costs low by using MySQL through Entity Framework. After a lot of pain and suffering with that, I had a short stint with MSSQL before settling on MongoDB.
MongoDB might seem like a weird choice; some people have very strong opinions about which type of database you should use. Honestly, it came down to a gut feel after messing around with it: it seemed more compatible with the way I was approaching problems than a relational database would be. I liked Entity Framework’s code-first approach so much, though, that I recreated the “feel” of Entity Framework for MongoDB with some custom code. This later became an open source project of mine called MongoFramework.
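To give an idea of that “feel”, here is a minimal sketch along the lines of MongoFramework’s README (the entity and context names here are just examples, not anything from my actual codebase):

```csharp
using System.Linq;
using MongoFramework;

public class MyEntity
{
    public string Id { get; set; }
    public string Name { get; set; }
}

// Just like Entity Framework: a context class exposing sets of entities
public class MyContext : MongoDbContext
{
    public MyContext(IMongoDbConnection connection) : base(connection) { }
    public MongoDbSet<MyEntity> MyEntities { get; set; }
}

public static class Example
{
    public static void Run()
    {
        var connection = MongoDbConnection.FromConnectionString("mongodb://localhost/MyDatabase");
        using (var context = new MyContext(connection))
        {
            // LINQ queries and change tracking, the Entity Framework way
            var entity = context.MyEntities.FirstOrDefault(e => e.Name == "James");
            if (entity != null)
            {
                entity.Name = "James Turner";
                context.SaveChanges();
            }
        }
    }
}
```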
I’m not going to lie, progress was… slow. While I was putting quite a lot of time into it, it was still an extremely ambitious project. I have strong feelings about building “MVPs”: some people focus too much on the “minimum” without enough focus on the “viable”. At the end of the day, people buy products that meet their needs, and cutting too much out would meet no-one’s needs. If someone was going to use this, in a market with many competitors of varying quality, it had to do its job well. There didn’t seem to be much I could reasonably cut to make it any more minimal if I wanted people to buy it.
I kept working at it every night, building pieces to extract and store data from a variety of sources. I was pulling in data from Google Analytics, Google Webmaster Tools (now called Google Search Console), Twitter, Facebook, IP geolocation, DNS records and news articles. The idea was that once I had the different data sources together, I could write custom rules to infer insights from individual or combined data sets. These insights would form the basis of the “digital brand expert”. After all, that was the goal of the idea: something that could help out small business owners.
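I never ended up building that rules engine (more on that soon), but conceptually each rule would look at some slice of the combined data and produce an insight when it matched. A purely hypothetical sketch of the concept; none of these types ever existed:

```csharp
public class Insight
{
    public string Message { get; set; }
}

// A snapshot combining values pulled from the various data sources
public class BrandDataSnapshot
{
    public int SessionsThisWeek { get; set; }
    public int SessionsLastWeek { get; set; }
}

public interface IInsightRule
{
    // Returns an insight when the rule matches the data, otherwise null
    Insight Evaluate(BrandDataSnapshot data);
}

public class TrafficDropRule : IInsightRule
{
    public Insight Evaluate(BrandDataSnapshot data)
    {
        // Example: sessions dropped more than 30% compared to last week
        if (data.SessionsLastWeek > 0 && data.SessionsThisWeek < data.SessionsLastWeek * 0.7)
        {
            return new Insight { Message = "Website traffic dropped by more than 30% this week." };
        }
        return null;
    }
}
```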
After 2 years of working on this in my spare time, it felt like the right time to leave my job and go into it full time. I felt like I was so close to launching and I needed something more than the same day-to-day work. So I did it: I left my job after 7 years.
Going Full-time into the Idea
Right out of the gate, I moved from .NET Framework to .NET Core, worked on UI/UX improvements for the application and launched the website for it. I worked with an accountant and a lawyer to set up the business, bought a trademark for the product name, and felt good, like I was only a few months away from launching. This feeling didn’t last though…
Over time, it felt like I was taking two steps forward and one step back, with some setbacks technical and others business related. Sure, that is still progress, but having new issues crop up every day or so can really crush your motivation.
My best/happiest/most productive days were days when I ignored or avoided the issues I had. If I had a problem with the login system, I would focus on how the UX of the menus worked. If I had a problem with data gathering, I would add more tests to the codebase. While I didn’t entirely ignore a problem, I would wait a week or two before looking at it again, somewhat hoping it would solve itself; unfortunately, that isn’t how things work.
In time though, I got to a stage where it felt like I could launch, and I was hyping myself up until reality struck: I hadn’t actually built what I set out to build.
The UI/UX was good, and I had strategies for deployment and plans for next steps, but it wasn’t a “digital brand expert”. It was instead a glorified data store for information that people could better access through existing tools. That’s kind of a big problem!
When I realised this, I poured time into fixing that huge lapse in judgement, but I couldn’t do it. No matter how I tried, I just couldn’t figure out how to build the rules engine. It was like my entire thought process was clouded; I couldn’t see the solution to the problem the way I can for most other things.
This was depressing, and I ended up taking a hiatus of a month or so from working on it. In the past, when I’ve had stints of not feeling like (or not being able to do) programming, I’ve tried to spur myself on again by watching a show or movie with a strong relation to technology (fictional or not). My go-to is usually something like Iron Man, but this time I was rewatching Halt and Catch Fire, where I found some inspiration.
The Pivot: An API to the Internet
Later in the series, a lot of the focus is on the Web, and it was in these episodes that my thoughts about the Internet and the data on it changed. There is a quote from one of the main characters at the end of Season 3 that resonates with me:
“The moment we decide what the Web is, we’ve lost. The moment we try to tell people what to do with it, we’ve lost. All we have to do is build a door and let them inside.”
- Joe MacMillan (Season 3, Episode 10)
The Internet is a treasure trove of information: searchable, but generally unstructured. People have managed to create all sorts of different pages in HTML, but in the process of making a website, everything is designed for a human user. It is this way for obvious reasons: we are the consumers of web pages, after all… aren’t we?
Behind these user-friendly web pages there are usually other specific bits of markup, providing some level of structured data for specific situations. Sometimes it is a description metatag for search engines; other times it might be Open Graph metatags for social media links. We build these things to help computers process our web pages.
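For example, a typical page head might carry something like this (the values are made up for illustration):

```html
<!-- Description metatag, primarily for search engines -->
<meta name="description" content="Hand-made leather wallets, shipped worldwide." />

<!-- Open Graph metatags, used when the page is shared on social media -->
<meta property="og:title" content="Classic Leather Wallet" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://example.com/products/classic-wallet" />
<meta property="og:image" content="https://example.com/images/classic-wallet.jpg" />
```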
In 2011, Schema.org was created: a collaborative effort between Google, Bing and Yahoo (joined later that year by Yandex) with the mission to “create, maintain, and promote schemas for structured data on the Internet, on web pages, in email messages, and beyond”. Through three different encodings (Microdata, RDFa and JSON-LD), websites can express detailed structured data.
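As an illustration, here is what the JSON-LD encoding might look like for a simple product page (again, values made up):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Classic Leather Wallet",
  "description": "Hand-made leather wallet, shipped worldwide.",
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD"
  }
}
</script>
```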
There is another quote from Halt and Catch Fire which I like:
“Computers aren’t the thing. They’re the thing that gets us to the thing.”
- Joe MacMillan (Season 1, Episode 1)
As much as I like computers and programming, they are used to help us achieve other goals. From my attempts at building a “digital brand expert”, I knew that data is fundamental to building more advanced systems and gaining new insights. Easier access to other forms of data from web pages around the world might allow new and different tools to be built.
So I decided that rather than keep trying to solve a problem I was clouded by, I would pivot my product. It wouldn’t be the “digital brand expert” (yet); instead, it would be a data provider in its own right. The scope of functionality was smaller and the path seemed clearer: I would provide structured data from web pages.
Being a data provider in this manner can also help me achieve my original goal at a later point in time: I’ll have a different and unique dataset that my competitors either don’t have or can only get at a higher cost. For example, when I was integrating news articles into my “digital brand expert”, the service I was using had a high monthly cost and was still relatively limited on queries. As my own data provider, I could get access to something like news articles at no additional cost.
Thinking this way, I’m basically letting my “digital brand expert” concept take a hiatus while I earn money providing data for others to build tools with or integrate into their own workflows. So, in late 2019, I pivoted to building a tool to get structured data out of web pages.
Actually Building the Thing
The goal was standardization and interoperability, so I needed to support the major existing types of structured data but also derive structured data where it was missing. For interoperability reasons, I didn’t want to create a new standard for structured data. Instead, I decided the Schema.org vocabulary would be a good fit for my use case.
There are a lot of types in Schema.org and I didn’t want to write them all myself, so I found a library called Schema.NET. Because I care about open source, and I would be taking a large dependency on the library, I contributed a variety of patches and performance improvements. I’m now a joint collaborator on the project alongside its creator, Muhammad Rehan Saeed.
I built an initial prototype to see if it was doable, and it was: I was able to extract the common known data formats into a single format. Over the next few weeks, I continued to refine it and expand it with some basic logic to derive new structured data from pages without any.
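To give a flavour of what Schema.NET makes possible, here is a rough sketch of both directions: building structured data and parsing it back into strongly-typed objects. The surrounding HTML extraction plumbing is omitted, and the error handling here is illustrative rather than what my prototype actually does:

```csharp
using System;
using Schema.NET;

public static class StructuredDataSketch
{
    // Building structured data: Schema.NET types serialize to JSON-LD
    public static string BuildExampleJsonLd()
    {
        var article = new Article
        {
            Name = "Example article",
            Headline = "Example headline"
        };
        return article.ToString(); // emits a JSON-LD string
    }

    // Parsing structured data: a JSON-LD block known to describe an Article
    public static Article TryParseArticle(string jsonLd)
    {
        try
        {
            return SchemaSerializer.DeserializeObject<Article>(jsonLd);
        }
        catch (Exception)
        {
            // Real-world markup is often malformed; treat it as "no data"
            return null;
        }
    }
}
```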
Now that I had achieved the goal I set out for, I needed to put it into something sellable.
I had all my code to date for my “digital brand expert” and a lot of it was actually usable, so I ripped out what I didn’t need and started porting my prototype into it. It needed a bit of work to tie everything together, but overall this part went pretty smoothly.
Everything seemed good until I actually integrated subscriptions and payment into the application. What I had built for the “digital brand expert” idea was flawed in a few ways, and I had also recently discovered the fun world of international sales tax.
I tried a few different solutions and liaised with my accountant about what would work, though it ended up taking way longer than I wanted. Each of these business and integration issues hurt my productivity just like the issues with my original idea did; it has been hard.
I did hit different productivity slumps (and one breakdown), though in the end I slowly and steadily made progress and finally reached the point of launching something.
Behind the Technical Curtain
I like knowing how things work and I’m sure other people out there are similar, so here is a technical breakdown of some aspects:
The core is an ASP.NET Core 3.1 application running as an Azure App Service on Linux. The database is powered by MongoDB (on MongoDB Atlas) using my open source “Entity Framework”-like library called MongoFramework. The various pages in the application are Razor Pages, and the API itself uses MVC.
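For the curious, wiring Razor Pages and MVC controllers together in ASP.NET Core 3.1 looks roughly like this (a pared-down sketch, not my actual Startup class):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();   // the application's pages
        services.AddControllers();  // the API itself
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapRazorPages();
            endpoints.MapControllers();
        });
    }
}
```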
Internally, the API uses Schema.NET for converting to and from the Schema.org vocabulary. The API also honours robots.txt files when converting pages, so I built a robust open source solution for parsing them.
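To show the idea (this is not my actual library, which handles user-agent groups, Allow rules, wildcards and more), a deliberately naive robots.txt check might look like this:

```csharp
using System;
using System.Linq;

public static class RobotsTxtSketch
{
    // Extremely simplified: the path is allowed as long as no "Disallow"
    // rule in the file prefixes it. A real parser must respect which
    // user-agent each rule group applies to.
    public static bool IsPathAllowed(string robotsTxt, string path)
    {
        var disallowedPrefixes = robotsTxt
            .Split('\n')
            .Select(line => line.Trim())
            .Where(line => line.StartsWith("Disallow:", StringComparison.OrdinalIgnoreCase))
            .Select(line => line.Substring("Disallow:".Length).Trim())
            .Where(rule => rule.Length > 0);

        return !disallowedPrefixes.Any(rule => path.StartsWith(rule, StringComparison.Ordinal));
    }
}
```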
Additionally, different parts of the system use caching internally, for which I use Cache Tower, my own caching library that supports multi-layer caching.
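The multi-layer idea itself is straightforward; here is a hypothetical illustration of the concept (not Cache Tower’s actual API): check the fastest layer first, fall back through slower layers, and backfill on the way out.

```csharp
using System;
using System.Threading.Tasks;

public interface ICacheLayer
{
    Task<object> GetAsync(string key);
    Task SetAsync(string key, object value, TimeSpan timeToLive);
}

public class LayeredCache
{
    private readonly ICacheLayer[] layers; // e.g. in-memory first, Redis second

    public LayeredCache(params ICacheLayer[] layers) => this.layers = layers;

    public async Task<object> GetOrSetAsync(string key, Func<Task<object>> valueFactory, TimeSpan timeToLive)
    {
        // Check each layer from fastest to slowest
        foreach (var layer in layers)
        {
            var cached = await layer.GetAsync(key);
            if (cached != null)
            {
                return cached;
            }
        }

        // Cache miss everywhere: compute the value and backfill all layers
        var value = await valueFactory();
        foreach (var layer in layers)
        {
            await layer.SetAsync(key, value, timeToLive);
        }
        return value;
    }
}
```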
For handling background tasks, like removing old data from the database, I use Hangfire. For error logging, I use Sentry, with a custom layer I’ve written to hook Hangfire exceptions into it. For performance monitoring, I use MiniProfiler, to which I’ve added support for MongoFramework so I can see how long my queries take.
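As an example of the background task side, scheduling a recurring cleanup with Hangfire looks something like this (the job class and method here are hypothetical, while RecurringJob.AddOrUpdate and Cron.Daily are Hangfire’s real APIs):

```csharp
using System;
using Hangfire;

public class DataCleanupJobs
{
    // Register a recurring job that Hangfire runs once a day
    public static void Schedule()
    {
        RecurringJob.AddOrUpdate<DataCleanupJobs>(
            "remove-old-data",
            jobs => jobs.RemoveOldData(),
            Cron.Daily());
    }

    public void RemoveOldData()
    {
        // Delete records older than the retention window (details omitted)
        Console.WriteLine("Removing old data...");
    }
}
```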
GitHub manages the code itself, with Azure DevOps managing the building, testing and deploying of the application. I run my own Azure DevOps build agent locally, which helps quite a bit with build and release performance.
Conclusion
There’s something cathartic about writing this post, as I am closing one chapter of my life and opening another. Launching BrandVantage is a big step for me; I’m both excited and nervous about it, though optimistic about the future of the business.
I have a lot of big plans for BrandVantage including:
- News API: News articles from around the web as Article Schema objects
- Product API: Product pages restructured into Product Schema objects
- Plus a few others…
I do plan to revisit the “digital brand expert” idea. I still think there is something good there, but next time I think I’ll be a little more prepared.
Originally published at https://turnerj.com.