We've recently seen a huge influx of new users from other platforms, and we’re beyond excited to welcome so many new roleplayers to our community! To ensure we continue providing a high-quality roleplay experience for both our veterans and newcomers, we're actively scaling up our front-end and back-end systems to accommodate our growth.
🔧 Scheduled Downtime:
Starting February 25, 2025, at 10:00 AM, WyvernChat will be down for approximately 6–8 hours while we perform essential upgrades. These improvements will help create a smoother, faster, and more stable experience for everyone.
💙 Thank You for Being Here!
If you've been with us for a while, we truly appreciate your support and patience as we make these improvements.
If you’re just beginning your journey here, welcome! Please pardon our dust as we make room to properly welcome you all.
We’ll keep you updated on our progress and let you know as soon as we’re back online. Thank you for being part of this amazing community! 💙
— The WyvernChat Team
You know why you're getting the sudden influx; please don't make the same mistakes.
The other sites have come to regret it, and there's always someone else around the corner willing to embrace the leavers.
Any numbers on the influx? I'd have to assume most of the people migrating aren't paying users, and the costs must be high.
Have you considered using WebLLM with a Gemma 3 ~4B merge to let free users run models locally, with no setup?
Gemma 3 4B runs at a few tokens/s on my phone with decent output quality (via llama.cpp with Unsloth's GGUF, not WebLLM).
I'd assume it could work well on a laptop.
Code: https://github.com/mlc-ai/web-llm
Demo: https://chat.webllm.ai/?model=SmolLM2-135M-Instruct-q0f16-MLC
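For anyone curious, the client-side integration is pretty small. A minimal sketch of what that could look like — the model id and the persona/prompt wiring are my assumptions, not anything WyvernChat actually uses; check WebLLM's `prebuiltAppConfig` for the models it currently ships:

```javascript
// Assumed model id from WebLLM's prebuilt list; swap in whatever
// Gemma build the prebuilt config actually offers.
const MODEL_ID = "gemma-2-2b-it-q4f16_1-MLC";

// Pure helper: turn a character persona + user turn into an
// OpenAI-style messages array (the format WebLLM's chat API accepts).
function buildMessages(persona, userText) {
  return [
    { role: "system", content: persona },
    { role: "user", content: userText },
  ];
}

// Runs entirely in the browser via WebGPU; nothing is sent to a server.
// Dynamic import so the page only loads WebLLM when local mode is chosen.
async function chatLocally(persona, userText) {
  const { CreateMLCEngine } = await import("@mlc-ai/web-llm");
  const engine = await CreateMLCEngine(MODEL_ID, {
    // First call downloads + caches the weights; report progress to the UI.
    initProgressCallback: (p) => console.log(p.text),
  });
  const reply = await engine.chat.completions.create({
    messages: buildMessages(persona, userText),
  });
  return reply.choices[0].message.content;
}
```

The catch is the one-time multi-GB weight download and the WebGPU requirement, so it's realistic on laptops/desktops but not something every free user's device can handle.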