Revenge Coding at 2AM: Building a Chrome Extension to Filter Twitter Noise

August 23, 2025
chrome extension·javascript·ollama·local AI·privacy·twitter·social media·hackathon·claude code

Or: How I channeled my hackathon frustration into something actually useful

The Spark: A Hackathon Gone Wrong

Last month, my roommate pulled me into a "vibe coding" hackathon. Not your typical hackathon with pizza and energy drinks - this one had pushups, jumping jacks, and people doing shots. Everyone was building the next meme game.

I spent two hours pretending to code a stupid game idea from my random teammate. The game kept breaking, burning through Replit credits like crazy. Then I tried pivoting to another game since apparently one wasn't enough. But here's the thing - building something cool or useful wasn't even the point of this vibe coding competition. It was all about the memes and the vibes.

What made it worse was people I knew coming up to me, betting I'd win. But I had no control over what I was building and zero inspiration. I was just going through the motions.

So I went home at 11 PM, annoyed and restless.

That's when the revenge coding began.

The Real Problem

I realized why I was so frustrated. Earlier that day, I'd spent probably 2 hours scrolling through Twitter, reading absolute trash. Engagement bait. Hot takes about hot takes. Threads explaining why some random thing is "actually problematic." Videos of people reacting to videos of people reacting to things.

This is number one bullshit. [insert Khabib meme]

The irony wasn't lost on me - I was annoyed about wasting time at a hackathon while I regularly waste hours consuming digital garbage.

What if I could just... not see the trash? What if Twitter could only show me stuff I actually care about?

The 2AM Solution

Between midnight and 5 AM, fueled by spite and instant coffee, I built Signal/Noise Ratio for X - a Chrome extension that uses local AI to analyze every tweet and visually mark the trash.

The concept is dead simple:

  • 🟢 Green badge = Actually interesting content
  • 🟡 Yellow badge = Meh, could go either way
  • 🔴 Red badge = Why does this tweet exist?

Plus I added this little ECG waveform dashboard in the corner because if I'm going to build something at 2 AM, it might as well look cool.
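Under the hood, the badge is just a threshold on a 0-100 quality score the model returns. A minimal sketch (the function name is mine, but the cutoffs follow the scoring bands I gave the model):

```javascript
// Map the model's 0-100 quality score to one of the three badges.
function badgeFor(score) {
  if (score >= 80) return 'green';  // actually interesting
  if (score >= 40) return 'yellow'; // meh, could go either way
  return 'red';                     // why does this tweet exist?
}
```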

How I Built It (Spoiler: It Was Easy)

Here's the thing - building Chrome extensions in 2025 is actually stupidly easy, especially with Claude Code doing the heavy lifting. The whole architecture is straightforward:

Twitter Page → Chrome Extension → Local Server → Ollama (Local AI) → Analysis

The Tech Stack

I wanted to experiment with running AI locally (privacy and all that), so I used:

  • Ollama - Runs LLMs on your machine
  • Chrome Extension - Manifest V3, pure JavaScript
  • Express server - Bridge between extension and Ollama
  • MutationObserver - Watches for new tweets

The coolest part? Everything runs locally. Your Twitter data never leaves your machine. Not because I'm some privacy crusader, but because I was too lazy to set up API keys and deal with rate limits.
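The Express server mostly just reformats the extension's request for Ollama. A rough sketch of the payload it builds, assuming Ollama's standard /api/generate endpoint and the llama3.2:3b model from the setup steps (the helper name is my illustration, not the project's actual code):

```javascript
// Build the JSON body the bridge server POSTs to Ollama's /api/generate.
function buildOllamaRequest(prompt) {
  return {
    model: 'llama3.2:3b',         // the model pulled during setup
    prompt: prompt,
    stream: false,                // one complete response, not token chunks
    options: { temperature: 0 },  // keep scoring as deterministic as possible
  };
}
```

The server forwards the model's `response` field back to the extension, which turns it into a badge.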

Key Implementation Bits

Detecting new tweets as they load:

// Watch the timeline for newly added tweets
const observer = new MutationObserver((mutations) => {
  mutations.forEach((mutation) => {
    mutation.addedNodes.forEach((node) => {
      // Skip text nodes - they don't support querySelectorAll
      if (node.nodeType !== Node.ELEMENT_NODE) return;
      const tweets = node.querySelectorAll('article[data-testid="tweet"]');
      tweets.forEach(tweet => analyzeTweet(tweet));
    });
  });
});

// Twitter renders tweets into the page as you scroll, so watch the whole subtree
observer.observe(document.body, { childList: true, subtree: true });

Asking Ollama to judge tweets:

const prompt = `Rate this tweet's quality (0-100):
  80-100: Actually interesting/useful
  40-79: Decent, personal insights
  0-39: Clickbait, rage bait, trash
  
  Tweet: "${tweetText}"`;
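Small models don't always answer with a bare number, so the one bit of real glue is pulling a score out of whatever the model says. A hedged sketch of that parsing step (helper name is mine):

```javascript
// Extract the first integer in 0-100 from the model's reply, or null if none.
function parseScore(reply) {
  const match = reply.match(/\d{1,3}/);
  if (!match) return null;
  const n = parseInt(match[0], 10);
  return n >= 0 && n <= 100 ? n : null;
}
```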

That's basically it. Claude Code wrote most of the boilerplate, I just had to wire things together and make it look decent.

The "Challenges" (There Weren't Many)

Honestly? This was easier than expected. The only real issue was:

Determining what makes "high signal" - What's trash to one person might be gold to another. Claude initially suggested using heuristics, local models, AND API calls. But I decided to keep it dead simple - strip out all the heuristics and API calls, and test it with 100% local models only.

I'd been playing with API calls for months, but never really trusted local models enough to build something serious with them. This side project was my experiment to see if these locally-run models were actually any good. Spoiler: they're better than I expected!

I ended up with a simple prompt: educational content, breaking news, and genuine insights = signal. Engagement bait, hot takes, and reaction videos = noise. The local AI figured out the nuances surprisingly well.

The whole thing took about 5 hours to build, and another 2 hours today to polish and fix edge cases.

What I Learned

Local AI is Actually Viable

Running AI on your laptop in 2025 is surprisingly good. The 3B-parameter Llama 3.2 model runs fast enough to analyze tweets in real time without melting my MacBook.

Claude Code is a Cheat Code

I'm not going to pretend I hand-coded every line. Claude Code basically built 70% of this. I just had to:

  • Describe what I wanted
  • Fix a few quirks
  • Add the styling and animations

We're living in the future where you can revenge-code a working product before sunrise.

Visual Feedback Changes Everything

Just adding colored badges completely changed how I browse Twitter. I find myself automatically skipping red-badged tweets and focusing on the green ones. It's like having a trash filter for your brain.

The Results

Been using it for a day now. Some observations:

  • My feed is roughly 30% signal, 70% noise (yikes)
  • Tech Twitter has surprisingly low signal (too many hot takes)
  • Certain accounts are consistently green (adding them to a list)
  • I'm reading maybe 50% fewer tweets but getting more value

The best part? I built this out of spite at 2 AM and it's already more useful than whatever game we were building at the hackathon.

Want to Try It?

The extension is open source: github.com/phuaky/signal_noise_ratio

Quick Setup

  1. Install Ollama and pull a model:
brew install ollama
ollama pull llama3.2:3b
ollama serve
  2. Start the local server:
cd server && npm install && npm start
  3. Load the extension in Chrome:
    • Go to chrome://extensions/
    • Enable Developer Mode
    • Load unpacked → select the folder
  4. Open Twitter and watch the magic happen

Future Ideas (If I Get Bored Again)

  • Add support for LinkedIn (so much corporate noise)
  • Custom training based on what you like/skip
  • Export stats on your consumption patterns
  • Maybe actually add the game sprites as easter eggs (jk)

The Takeaway

Sometimes the best projects come from frustration. I went to a hackathon, got annoyed about building useless stuff, then went home and built something I actually use every day.

The moral? Next time you're stuck building something you don't care about, just leave and build what you actually want. Revenge coding at 2 AM is surprisingly productive.

Also, we're living in an era where you can build a functional AI-powered Chrome extension in one angry night. What a time to be alive.


P.S. - To my teammate: Sorry for leaving early. But hey, at least I built something I didn't immediately delete.

P.P.S. - If you're tired of reading trash on Twitter, give this a try. Or don't. I built it for me anyway.