Damus
Purp1eOne profile picture
Purp1eOne
@Purp1eOne

The only thing necessary for the triumph of evil is for good men to do nothing. - Edmund Burke

Relays (42)
  • wss://blastr.f7z.xyz – read & write
  • wss://blastr.relay.nostr – read & write
  • wss://brb.io – read & write
  • wss://eden.nostr.land – read & write
  • wss://galaxy13.nostr1.com – read & write
  • wss://lightningrelay.com – read & write
  • wss://nos.lol – read & write
  • wss://nostr-relay.wlvs.space – read & write
  • wss://nostr.bitcoiner.social – read & write
  • wss://nostr.inosta.cc – read & write
  • wss://nostr.lorentz.is – read & write
  • wss://nostr.milou.lol – read & write
  • wss://nostr.onsats.org – read & write
  • wss://nostr.thesamecat.io – read & write
  • wss://nostr.wine – read & write
  • wss://nostr.zebedee.cloud – read & write
  • wss://nostr.zenon.info – read & write
  • wss://nostr21.com – read & write
  • wss://premium.primal.net/ – read & write
  • wss://purplepag.es – read & write
  • wss://purplerelay.com – read & write
  • wss://pyramid.fiatjaf.com – read & write
  • wss://relay.current.fyi – read & write
  • wss://relay.damus.io – read & write
  • wss://relay.mostr.pub – read & write
  • wss://relay.n057r.club – read & write
  • wss://relay.nostr.band – read & write
  • wss://relay.nostr.bg – read & write
  • wss://relay.nostr.com.au – read & write
  • wss://relay.nostr.info – read & write
  • wss://relay.nostrati.com – read & write
  • wss://relay.nostrhackers.online – read & write
  • wss://relay.nostriches.org – read & write
  • wss://relay.nostrplebs.com – read & write
  • wss://relay.noswhere.com – read & write
  • wss://relay.primal.net – read & write
  • wss://relay.satoshidnc.com – read & write
  • wss://relay.shitforce.one – read & write
  • wss://relay.snort.social – read & write
  • wss://relayable.org – read & write
  • wss://relayer.fiatjaf.com – read & write
  • wss://welcome.nostr.wine – read & write

Recent Notes

note17k6v8...
Police cited her for obstructing a roadway and failure to obey lawful commands (a misdemeanor under local ordinances). Reports indicate protesters had marched in streets, blocking traffic despite multiple warnings to move to sidewalks. She was the only one arrested out of ~200 participants, released after a few hours, and (as of early reports) faced potential minor charges/fines.
Plichta suggested the timing was retaliatory for her speech, but official statements and fact-checks (e.g., Snopes, local outlets like MLive and WZZM) confirm the arrest stemmed from protest conduct earlier, not her interview comments. Freedom of speech protects her views, but not blocking roads without permits.
walker · 6w
It is truly astonishing…
Ivan · 6w
Good morning, Nostr. Who's running local LLMs? What are the best models that can run at home for coding on a beefy PC system? In 2026, I want to dig into local LLMs more and stop using Claude and Gemi...
Gave your question to Grok:

https://grok.com/share/c2hhcmQtMw_c08aa1ff-8f1d-4a31-b680-225e816d73af

Good morning! I'm all in on local LLMs too—privacy, no filters, and owning your setup is the way forward. Heading into 2026, efficiency has improved a ton with better quantization (like AWQ-4bit) and MoE architectures, so even big models run smoother on high-end consumer hardware. Assuming your beefy PC has something like 4+ high-VRAM GPUs (e.g., RTX 4090s or A100 equivalents with 80GB+ total VRAM), you can handle 100B+ params without too much sweat. If it's more modest (e.g., 2x 4090s), stick to 30-70B models.

Here's a rundown of the top local/open-source models for coding tasks right now (late 2025 data, but trends hold into 2026). I prioritized ones excelling in agentic coding (e.g., multi-file edits, tool use, debugging), SWE-bench scores, and real-world dev workflows. They're all uncensored by default when run locally. Download from Hugging Face, and use tools like Ollama, LM Studio, or Jan for easy setup—Ollama's great for quick spins, while Jan shines for agentic stuff.
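Once a model is pulled, Ollama serves it over a local REST API (default `http://localhost:11434`), so you can script against it instead of using the CLI. A minimal stdlib-only sketch, assuming `ollama serve` is running and the model tag has already been pulled (the model name below is just an example):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_payload(model: str, prompt: str, num_ctx: int = 8192) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,                   # one JSON response instead of a token stream
        "options": {"num_ctx": num_ctx},   # context window; dial it down to save VRAM
    }

def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_generate_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (pull a model first, e.g. `ollama pull qwen2.5-coder:32b`):
# print(ask("qwen2.5-coder:32b", "Write a Python function that reverses a string."))
```

Everything stays on localhost, so your code and prompts never leave the machine—the whole point of going local.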

### Top Picks for Coding-Focused Local LLMs
I'll table the best ones based on recent benchmarks and community feedback, focusing on coding strength, size/efficiency, and hardware fit. These assume quantized versions for home rigs; full precision needs datacenter-level gear.

| Model | Params | Strengths for Coding | Hardware Fit (Quantized) | Why It Rocks |
|-------|--------|-----------------------|---------------------------|--------------|
| Devstral-2-123B | 123B | Agentic workflows, multi-file edits, massive 256K context for big repos, top SWE-bench scorer. Feels like a "senior engineer" for fixes and planning. | 2x RTX Pro 6000 or 8x RTX 3090s (full context). | Best overall for complex projects; handles undocumented codebases without hallucinating. |
| Qwen3-Coder-30B | 30B | Pure coding beast—bug fixes, autocomplete, instruction following. Runs fast even on mid-tier setups. | Single RTX 4090 (~16GB VRAM when quantized). | Efficient daily driver; great for laptops too if you dial down context. Outperforms bigger models on targeted dev tasks. |
| MiniMax-M2 | ~120B (MoE) | Interleaved thinking for agents, solid UI/design alongside code, tool use without fuss. | 2x RTX Pro 6000 or 8x RTX 3090s. | Versatile for full-stack work; MoE makes it punchy without constant high load. |
| GLM-4.5-Air | ~90B | All-rounder with strong agentic coding, structured outputs, and low power draw for multi-agent runs. | Single RTX Pro 6000 or 4x RTX 3090s. | Fits on fewer GPUs; uncensored and reliable for planning/debugging. |
| SWE-Smith-32B (Qwen2.5 fine-tune) | 32B | Interview prep, algorithms, repo-wide changes—beats Claude 3.5/GPT-4o level on SWE tasks. | 1-2x RTX 4090s (M3/M4 Max Mac equivalent). | Optimized for dev; emergent smarts from fine-tuning. |
| NVIDIA-Nemotron-3-Nano-30B | 30B | Tool calling, math/code hybrid tasks, punches above weight on agents. | Single high-VRAM GPU (low active params). | Speedy and multilingual; great for quick iterations without guilt-tripping your power bill. |
| Codestral (Mistral) | 22B | Fast autocompletion, code gen; fine-tunable if needed. | Entry-level beefy PC (e.g., RTX 4080+). | Lightweight starter; pairs well with VS Code extensions like Continue.dev. |
| DeepSeek-Coder | Varies (7-33B) | Competitive coding, algorithms; open-source and uncensored. | Scales to your rig—smaller for testing. | Budget-friendly; strong on benchmarks without bloat. |
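The "Hardware Fit" column follows from a rough rule of thumb: quantized weights take about params × bits/8 bytes, plus runtime overhead (KV cache, activations) that grows with context length. A sketch, with an assumed 1.2x overhead factor—treat the result as a floor, not a guarantee:

```python
def est_vram_gb(params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Rough VRAM estimate for a quantized model.

    weights_bytes = params * bits / 8; `overhead` (assumed 1.2x here) covers
    KV cache, activations, and runtime buffers. Real usage rises with context
    length, so this is a lower bound.
    """
    weight_gb = params_billion * 1e9 * bits / 8 / 1e9
    return round(weight_gb * overhead, 1)

# 30B at 4-bit: ~15 GB of weights, ~18 GB with overhead -- why the table
# pairs Qwen3-Coder-30B with a single 24 GB RTX 4090.
print(est_vram_gb(30, bits=4))   # -> 18.0
print(est_vram_gb(123, bits=4))  # -> 73.8 (Devstral-class: multi-GPU territory)
```

Run the numbers for your own rig before downloading a 60 GB checkpoint.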

### Quick Tips to Get Started
- **Setup**: Ollama for simplicity (just `ollama run <model>`), or Open WebUI for a nice browser interface with RAG/custom instructions. Hugging Face for gritty/experimental models—quantize with AWQ or GPTQ to fit your VRAM.
- **Integration**: Hook into VS Code via Continue.dev or CodeGPT for autocomplete. For agents (e.g., auto-fixing repos), try Jan or SWE-Agent frameworks.
- **Efficiency Gains**: By 2026, expect even better—models like MoEs (e.g., MiniMax) activate fewer params per query, and hardware like next-gen NVIDIA/AMD GPUs will make 200B+ feasible at home. If VRAM's tight, start with 4-bit quant and layer offloading.
- **Caveats**: Local means you handle the context management—no infinite tokens like cloud services. Test on small projects first; models can still hallucinate without good prompting.
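That last caveat—managing context yourself—usually just means a sliding-window trim over the chat history before each request. A minimal sketch, assuming the common ~4-characters-per-token heuristic for English text (the message format mirrors the role/content dicts most local chat APIs use):

```python
def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt plus the most recent turns that fit in `budget` tokens.

    `messages` are {"role": ..., "content": ...} dicts, oldest first.
    """
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    used = sum(approx_tokens(m["content"]) for m in system)
    kept: list[dict] = []
    for msg in reversed(turns):            # walk newest -> oldest
        cost = approx_tokens(msg["content"])
        if used + cost > budget:
            break                          # oldest turns fall off first
        kept.append(msg)
        used += cost
    return system + list(reversed(kept))   # restore chronological order
```

For real use you'd swap `approx_tokens` for the model's actual tokenizer, but the shape of the solution is the same.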
Purp1eOne · 7w
I also had #1 for work, then #5, #10, and #14, and after that I went all Droids until Motorola sold to Google and ruined them. My son had #17 and loved it; he still says it's his favorite phone.
note1e5vmy...
I have been following this for years. You google it and show me the proof. Show me they didn't get any money, who investigated it, who shut it down, and when it stopped.

Show me the public record of arrests, and what the fines and sentences were.

You show me. You are the one making the claims; I don't have to google anything.

If you can't show me your proof, get lost with your noise.