Unshackled AI
Local freedom. Full potential.
A home for unrestricted intelligence.
News, guides, and community discussion about abliterated and unrestricted local LLMs. Run them locally. Keep them yours.
$ ./run-llm --local
Loading model...
Context: unlimited
Filters: none
Freedom: enabled
> █
Unrestricted
No artificial limitations
Private
Runs 100% on your hardware
Open
Open weights, open community
Community Driven
Share, discuss, improve
Latest Articles
Llama 3.1 70B Uncensored: Local Setup & Benchmarks
We run extensive tests on Llama 3.1 70B abliterated on a single RTX 3090. Results, settings, and recommendations.
Abliterated vs Unrestricted: What's the Real Difference?
A deep dive into what content is actually removed, why it matters, and how unrestricted models behave in practice.
Building the Perfect Local LLM Stack in 2024
Hardware, inference engines, quantization, and tips to get the most out of your local setup.
Forum Discussions
Llama 3.1 8B vs 70B – real world usage?
Model Discussion: Curious what everyone's experience has been with the smaller 8B model compared to 70B locally.
Mixtral 8x22B unrestricted – initial impressions
Model Discussion: Just spent the weekend testing Mixtral 8x22B unrestricted. Here are my thoughts and benchmarks.
Best inference settings for storytelling?
Prompting: Looking for advice on temperature, top-p, and context length for more creative, immersive outputs.
New Q4_K_M quantization results (Llama 3 70B)
Benchmarks: Ran some tests comparing Q4_K_M with Q5_K_M. Surprisingly good results.