4-Month Check-In: I've Completely Dropped the Ball

Welp, over the last month or so I went down a lot of rabbit holes. I was spinning up local servers, digging into advanced network security, dissecting Euclidean distance, and tweaking complex configuration files that didn't really move the needle. Some great learning, but not great progress. So, here's where I'm at:
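For the curious, the Euclidean-distance rabbit hole boils down to a few lines. This is a generic sketch (not code from my setup) of the metric vector databases can use to find the "closest" stored embeddings:

```python
import math

def euclidean_distance(a, b):
    """Straight-line distance between two equal-length vectors.

    Vector databases rank stored embeddings by a metric like this
    (or cosine similarity) to find the closest matches to a query.
    """
    if len(a) != len(b):
        raise ValueError("vectors must have the same dimension")
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Classic 3-4-5 triangle: distance from (0, 0) to (3, 4) is 5.
print(euclidean_distance([0, 0], [3, 4]))  # -> 5.0
```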

Knowledge gained so far

  1. Model Evaluation & Integration - Testing, benchmarking, and integrating state-of-the-art models from a variety of sources
  2. Core concepts - Model strengths, prompting, quantization, vector DBs, tensors, agents, MCP, ACP/A2A, and other fundamentals
  3. Toolchain fluency - Docker, Ollama, Hugging Face, Open WebUI, n8n, Supabase, various APIs

What I've built

  1. Goals App - an agent that intelligently updates a database based on natural language input (think "just read for 20 minutes"; in a business context, the same pattern updates CRM notes/fields)
  2. News Aggregator - a workflow that pulls and summarizes targeted data sources (competitor research, market trends, sentiment analysis, etc.)
  3. Personal Website - coded entirely via local LLM prompts (although I wouldn't recommend it). Proof that AI can carry a non-coder from zero to live site.

I'll break down each project in its own post.
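To give a flavor of the Goals App before its full post: the core pattern is that the model's only job is to turn free-form text into a small JSON update, and ordinary code validates and applies it. Here's a toy sketch of that idea, not my actual implementation; the model call is stubbed out and the field names are made up for illustration:

```python
import json

# Hypothetical schema; in a real app these would be database columns.
ALLOWED_FIELDS = {"reading_minutes", "crm_notes"}

def apply_update(record, update_json):
    """Validate a model-produced JSON update and apply it to a record."""
    update = json.loads(update_json)
    field, value = update["field"], update["value"]
    if field not in ALLOWED_FIELDS:
        raise ValueError(f"model proposed an unknown field: {field}")
    record[field] = value
    return record

# A local model might map "just read for 20 minutes" to this JSON:
record = apply_update({}, '{"field": "reading_minutes", "value": 20}')
print(record)  # -> {'reading_minutes': 20}
```

Keeping the model's output confined to a small, validated JSON shape is what makes the agent safe to point at real data.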

Learning From My Mistakes

Going local-only taught me foundations, but trying to self-host everything slowed real progress. I stayed on that track for too long. Also, I spent too many hours researching concepts and frameworks that I will most likely never use.

No flashy new app or benchmark today. I just wanted to give an honest update and to say that I will be more active with some cool, real projects very soon.

I finally built something - brandonsbowers.com

This week, I wanted to push my local AI setup beyond experiments to actually building something tangible. After a lot of testing, breaking, and learning, I feel confident taking this thing to the next step. Building.

Now I have a website!

The entire site was originally coded by my local AI models. I have no formal software development background, but with the help of these tools, I now have a fully customizable, built-from-scratch website in a matter of days.

It's a small project on the surface, but each step forward opens the door to building bigger things.

Much more to come soon...


Sometimes it doesn't go as planned

I was testing a new local model with a simple prompt and got an endless loop of the same sentence, topped off with a flood of emojis (not the response I was looking for). Pretty sure this one's on me: I still need to learn how to configure and fine-tune some of these models so they behave the way I expect.
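What I've since learned is that generation options can curb this kind of looping. Ollama accepts an options object in its API requests; repeat_penalty and temperature are real knobs, though the values below are illustrative starting points, not a tuned recipe:

```python
# Request body for Ollama's local /api/generate endpoint. A repeat_penalty
# above 1 discourages the model from re-emitting recent tokens; a lower
# temperature reduces randomness. Values here are guesses, not gospel.
payload = {
    "model": "some-local-model",  # placeholder name
    "prompt": "Summarize this post in one sentence.",
    "stream": False,
    "options": {
        "temperature": 0.7,
        "repeat_penalty": 1.2,
    },
}
# This dict would be POSTed as JSON to http://localhost:11434/api/generate.
```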

After some travel, I'm back at the keyboard and lining up several new projects. There are also a few new open-source models that I am very excited about. Another update is coming soon, but for now, here's a quick look at the learning curve that comes with running models locally.

More soon.

Weeks 3 & 4: My Local Model is Smarter Than ChatGPT? (Kind of)

One of my biggest goals was to get a powerful AI model running completely offline on my PC. I've been testing Phi-4, an open-source model from Microsoft. According to the Artificial Analysis Intelligence Index (https://artificialanalysis.ai/), it can hold its own against, and sometimes outperform, certain ChatGPT versions!

Of course, model comparisons are nuanced, and each model has different strengths, so it's rarely apples-to-apples, but it's still wild to see how far we've come in just a year. The models that blew everyone's minds back then aren't even close to what we're using today, and even open-source solutions are really impressive now.

Again, this is running 100% locally. There's no data leaving my machine, no subscriptions or APIs, and it's surprisingly fast! (I'll also share a snippet of my ollama --verbose stats for any fellow nerds in the comments.)

What's Next?

  • Maybe I'll set up RAG so this can interact with files
  • I might set up API integrations so it can search the web
  • There could be a big project coming soon for my little model

Loving the progress so far. On to the next milestone!

A bar chart comparing AI models, with the Phi-4 model highlighted, showing a score of 40.

Weeks 1 & 2: A Lot of Breaking & A Big Milestone

This is Phi-4 (Microsoft's open-source model) running completely offline on my PC — no API calls, no subscriptions, no cloud dependencies. Just a local LLM stored and running straight from my machine.

And now, it has memory! If you look closely, my second prompt doesn't mention a LinkedIn post at all, yet the model remembers. That's because I've now integrated memory into my setup.
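Under the hood, this kind of "memory" is less magical than it sounds: nothing is stored inside the model itself; the UI simply resends the whole conversation with every request, so earlier turns ride along as context. A minimal sketch with the model call stubbed out:

```python
def fake_model(messages):
    # Stand-in for a real chat completion; reports how much context it saw.
    return f"(reply after seeing {len(messages)} messages)"

history = []

def chat(user_text):
    # Append the new turn, send the ENTIRE history, record the reply.
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Draft a LinkedIn post about local AI.")
second = chat("Make it shorter.")  # never mentions LinkedIn, yet context is there
```

Persistent memory across sessions just means saving that history list somewhere (a database, a file) and reloading it.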

Getting here wasn't easy. I broke a lot of .yaml and .env files along the way. Some nights, I made zero progress—just trial, error, and deleting broken configurations.

For those unfamiliar, YAML and env files are like the instruction manual for AI frameworks. They tell the UI (LibreChat in this case) which models to use, how to connect them, where to store memory, and more.
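For a concrete picture, a custom-endpoint entry in LibreChat's librechat.yaml looks roughly like this. I'm writing the field names from memory and the model name is just an example, so check the LibreChat docs before copying:

```yaml
version: 1.0.5
endpoints:
  custom:
    - name: "Ollama"          # label shown in the UI
      apiKey: "ollama"        # the local server doesn't check this
      baseURL: "http://localhost:11434/v1"
      models:
        default: ["phi4"]     # example model name
        fetch: true           # ask the server for its model list
```

One misplaced indent in a file like this and the whole UI refuses to load the endpoint, which explains a lot of those zero-progress nights.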

This was my biggest milestone yet, but I'm just getting started. Next up:

  • Retrieval-Augmented Generation (RAG) – So the AI can search and pull from my own data.
  • Meilisearch & pgvector – Enhancing memory and search capabilities.
  • Speech-to-Text (STT) – Voice interactions down the line.
  • Web Search – Eventually, I want my AI to pull in real-time information.
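The retrieval half of RAG, stripped to its bones: score your stored chunks against the question, then hand the best match to the model alongside the prompt. Real setups score with embedding vectors (that's where pgvector comes in); plain word overlap stands in here so the sketch runs anywhere:

```python
docs = [
    "Phi-4 is an open-source model from Microsoft.",
    "Meilisearch is a fast full-text search engine.",
]

def overlap(query, doc):
    # Stand-in for embedding similarity: count shared lowercase words.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query):
    # Return the single best-scoring chunk for this query.
    return max(docs, key=lambda doc: overlap(query, doc))

question = "Which model is from Microsoft?"
context = retrieve(question)
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

Swap the overlap function for vector distances over an embeddings table and you have the usual production shape.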

Plenty to build, plenty to break... but that's the fun part.

Week 0: Trial by Fire 🔥

Week 0 is in the books, and I already have a local LLM running offline on my PC!

It's extremely basic right now, but I wanted to dive right in and see what I could build before fully understanding everything. That meant a lot of trial and error — YouTube tutorials, articles, Reddit threads, and some light troubleshooting.

What I Set Up:

  • Installed Git, Python, Docker, and other core tools
  • Downloaded Ollama and ran Mistral 7B locally
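For anyone following along: once `ollama run mistral` works in the terminal, the same model is also reachable from Python through Ollama's local HTTP API. A stdlib-only sketch, assuming the Ollama server is running on its default port (nothing here is specific to my setup):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(model, prompt):
    """Send one prompt to a locally served Ollama model and return its reply."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        OLLAMA_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the server to be up):
# print(ask("mistral", "Explain quantization in one sentence."))
```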

Getting it running was just the first step. Now, I need to build a real framework for what comes next—stay tuned!

On to Week 1!

My 6-Month Goal is to Fail at Building AI Solutions.

I think we all feel the pressure on LinkedIn to always have the next big insight, to act like we've mastered every trend, and to position ourselves as experts.

But I don't know much about AI.

OK, I understand how LLMs work at a high level, I can craft decent prompts, and I've played around with some integrations. But I don't really know what it takes to build useful AI-powered solutions.

So I built a PC specifically for one reason: to fail.

Over the next six months, I'm diving headfirst into running local AI models, experimenting with automation, and figuring out how AI can actually be useful in real business scenarios—without any coding experience.

This will be messy. Things will break. I'll probably get stuck (a lot). But instead of pretending I have all the answers, I'm inviting failure, because that's the only way to truly learn.

If you're also figuring out AI—whether you're technical or not—let's talk. I could use all the help I can get!