Damus
m34t704f · 1d
I'm curious, as a fellow Shakespeare.diy experimenter, which AI model and provider do you use? Do you also check out/download the codebase locally to work on it? I agree keeping at it is really importa...
Jared Logan
I use Sonnet 4.6 and Opus 4.6 directly through the Shakespeare AI tokens for the most part. These seem to do the best work on larger tasks. I've been experimenting with xAI models (which are very cost-effective and work well for UI and lighter tasks) as well as @PayPerQ as a provider. Neither performs quite as well as the built-in Sonnet or Opus (Haiku is also great for lighter work). So far I've only used the Nostr git and Shakespeare deploy methods, which seem to work flawlessly.

I prefer to chunk the work into separate chats: once a feature or a related piece of work is finished, I start a New Chat. Before doing so, I make sure the repo's README.md is updated and accurate, reflecting any feature or functional changes. It also helps to keep a spec file in the repo that I feed to the model as a reference when starting new chats and feature work. (I often draft versions of it separately in Claude Desktop.) This preserves the work already done while keeping the project aligned with the long-term concept.
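For illustration, a spec file like this might look something like the sketch below. The filename and headings are hypothetical, not a Shakespeare.diy convention; the point is just to give each new chat a stable reference for scope and current state:

```markdown
<!-- SPEC.md — hypothetical example, kept in the repo root -->
# Project Spec

## Vision
One or two sentences on what the app is and who it's for.

## Current State
- Features already built and working (keep in sync with README.md)

## Next Up
- The single feature this chat should focus on

## Constraints
- Stack, deploy target (e.g. Shakespeare deploy), anything the model
  should not change
```

Starting a new chat with "Read SPEC.md and README.md first" then tends to keep the model aligned without re-explaining the whole project each time.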

If you hit console errors, have it review them. Often they're lingering from a previous build; if that's the case, a hard refresh of the page should clear them while preserving your chat.