

I'm wondering how this is supposed to work. Is the whole conversation sent as context, not just my last input to openClaw, or does that change once the sub-agents get smaller tasks? My test prompt about 5+5 was actually the most expensive thing I've asked an LLM in a long time. Isn't the idea that all this 15-dimensional chess would conclude that the final prompt doesn't need the prior conversation context, so a cheap model can handle it?
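The routing idea being described could be sketched roughly like this: decide whether the final prompt actually depends on prior turns, and if not, send it alone to a cheaper model. This is a minimal illustrative sketch, not openClaw's actual internals; the model names and the `needs_context` heuristic are assumptions.

```python
import re

CHEAP_MODEL = "cheap-small-model"        # hypothetical name
EXPENSIVE_MODEL = "expensive-big-model"  # hypothetical name

def needs_context(prompt: str) -> bool:
    # Naive heuristic: words that refer back to earlier turns
    # suggest the prompt cannot stand alone.
    return bool(re.search(r"\b(above|earlier|previous|that|it)\b", prompt, re.I))

def route(history: list[str], prompt: str) -> tuple[str, list[str]]:
    """Return (model, messages); drop the history when the prompt stands alone."""
    if needs_context(prompt):
        return EXPENSIVE_MODEL, history + [prompt]
    return CHEAP_MODEL, [prompt]  # context-free prompt: cheap model, no history

model, messages = route(["...long prior conversation..."], "What is 5+5?")
```

With a router like this, the 5+5 example would go to the cheap model with an empty history, so the token cost of the prior conversation would never be paid.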