Signal Noise Ratio for X
A privacy-first Chrome extension that analyzes Twitter/X feed quality using AI running entirely on your machine. No cloud services, no data collection - just local processing.
GitHub Repository: github.com/phuaky/signal_noise_ratio
Features
- 100% Local AI Analysis - Uses Ollama to run LLMs on your machine
- Visual Quality Indicators - Green/Yellow/Red badges for instant recognition
- Real-time Processing - Analyzes tweets as they appear in your feed
- ECG Waveform Dashboard - Live visualization of feed quality ratio
- Privacy by Design - Your data never leaves your computer
- Model Flexibility - Supports multiple LLMs (Llama, Qwen) with different speed/quality trade-offs
- Debug Panel - Press Ctrl+Shift+D for detailed analysis logs
- WebSocket Connection - Reliable real-time communication with local server
Technical Stack
- Chrome Extension - Manifest V3 with content scripts
- JavaScript - Pure JS, no build tools required
- Node.js/Express - Local server bridging extension and Ollama
- Ollama - Local LLM runtime (supports Llama 3.2, Qwen)
- MutationObserver API - Efficient DOM monitoring for new tweets
- WebSockets - Real-time bidirectional communication
- Chrome Storage API - Settings persistence
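The extension and local server talk over a WebSocket. The actual wire format is not documented here, but a hypothetical JSON framing for that channel might look like this (message `type` names and the `scores` shape are assumptions):

```javascript
// Hypothetical message framing for the extension <-> local server
// WebSocket channel. Field names are illustrative only.
function encodeAnalyzeRequest(tweets) {
  // tweets: [{ id, text }]
  return JSON.stringify({ type: 'analyze', tweets });
}

function decodeMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.type !== 'result') throw new Error(`unexpected type: ${msg.type}`);
  return msg.scores; // e.g. [{ id, score }]
}
```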
Architecture
Twitter/X DOM
↓
Content Scripts (MutationObserver)
↓
Local Express Server (Port 3001)
↓
Ollama API (Port 11434)
↓
Local LLM (3B/7B models)
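At the bottom of this pipeline, the local server calls Ollama's `/api/generate` endpoint on port 11434. A sketch of how the request payload could be built and the score parsed back out; the prompt wording is an assumption, not the extension's real prompt:

```javascript
// Build a non-streaming request body for Ollama's /api/generate
// endpoint (POST http://localhost:11434/api/generate). The prompt
// text here is illustrative.
function buildOllamaRequest(tweetText, model = 'llama3.2:3b') {
  return {
    model,
    prompt: `Rate this tweet's quality from 0 to 100. Reply with only the number.\n\nTweet: ${tweetText}`,
    stream: false, // one JSON response instead of a token stream
  };
}

// Pull a 0-100 score out of the model's free-text reply, clamping
// out-of-range values and defaulting to 0 if no number is found.
function parseScore(responseText) {
  const match = responseText.match(/\d+/);
  if (!match) return 0;
  return Math.min(100, Math.max(0, parseInt(match[0], 10)));
}
```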
Impact & Results
- Feed Analysis: Discovered my feed is ~30% signal, 70% noise
- Time Saved: Reading 50% fewer tweets while getting more value
- Privacy Preserved: Zero data sent to external services
- Performance: 3B model analyzes tweets in <2 seconds on MacBook
- Reliability: 5+ hours of continuous operation without issues
Key Implementation Details
The extension uses a MutationObserver to detect new tweets as Twitter's virtual scrolling adds them to the DOM. Each tweet is queued and sent in batches to the local server, which forwards them to Ollama for analysis. The AI scores each tweet from 0-100 based on content quality, with results cached for 1 hour to avoid re-analysis.
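The queue-and-cache flow above can be sketched as follows. The one-hour TTL comes from the text; the batch size and function names are assumptions (the MutationObserver wiring itself is browser-only and omitted here):

```javascript
// Minimal sketch of the batching and caching logic described above.
const CACHE_TTL_MS = 60 * 60 * 1000; // results cached for 1 hour
const cache = new Map();             // tweetId -> { score, storedAt }

function getCached(tweetId, now = Date.now()) {
  const entry = cache.get(tweetId);
  if (!entry || now - entry.storedAt > CACHE_TTL_MS) return null;
  return entry.score;
}

function setCached(tweetId, score, now = Date.now()) {
  cache.set(tweetId, { score, storedAt: now });
}

// Group queued tweets into fixed-size batches, one server round-trip
// per batch. Batch size of 10 is an illustrative assumption.
function makeBatches(tweets, batchSize = 10) {
  const batches = [];
  for (let i = 0; i < tweets.length; i += batchSize) {
    batches.push(tweets.slice(i, i + batchSize));
  }
  return batches;
}
```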
The visual feedback system uses colored badges and borders for instant recognition, while the ECG-style waveform provides an engaging real-time visualization of overall feed quality.
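The ratio driving that waveform reduces to one number: the fraction of analyzed tweets scoring above a "signal" threshold. A sketch, again assuming a threshold of 70:

```javascript
// Fraction of analyzed tweets counted as "signal". The 70 threshold
// is an assumption; a ~0.3 result would match the "30% signal"
// finding reported below.
function signalRatio(scores, threshold = 70) {
  if (scores.length === 0) return 0;
  const signal = scores.filter((s) => s >= threshold).length;
  return signal / scores.length;
}
```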
Installation
- Install Ollama and pull a model:
ollama pull llama3.2:3b
ollama serve
- Start the local server:
cd server && npm install && npm start
- Load extension in Chrome:
- Navigate to chrome://extensions/
- Enable Developer Mode
- Load unpacked → select extension folder
- Navigate to Twitter/X and browse your feed
Lessons Learned
This project validated that local AI is genuinely viable for real-time applications in 2025. The 3B parameter models are fast enough for interactive use while maintaining good accuracy. Building with Claude Code accelerated development significantly - what could have taken days was completed in one night of "revenge coding" after a frustrating hackathon.
The privacy-first approach, while initially chosen out of laziness (avoiding API keys), proved to be a major feature. Users can analyze sensitive content without privacy concerns, making the tool more trustworthy than cloud-based alternatives.
Future Enhancements
- Multi-platform support (YouTube, Reddit, Hacker News)
- Custom training based on user preferences
- Export analytics and consumption patterns
- Thread-level analysis instead of individual tweets
- GPU acceleration for larger models