Cache AI responses in one of two ways: with language model middleware (`wrapGenerate`/`wrapStream`, replaying cached chunks via `simulateReadableStream`), or with an `onFinish` callback that persists the completed response to KV storage.
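
Below is a minimal sketch of the middleware approach, assuming AI SDK 4.x exports (`wrapLanguageModel`, `LanguageModelV1Middleware`, `simulateReadableStream`) and the `@ai-sdk/openai` provider. It uses an in-memory `Map` for brevity; a real deployment would swap in a KV store such as Redis, in which case serialized `Date` fields (e.g. `response.timestamp`) would need to be revived on read.

```ts
import {
  wrapLanguageModel,
  simulateReadableStream,
  type LanguageModelV1Middleware,
  type LanguageModelV1StreamPart,
} from 'ai';
import { openai } from '@ai-sdk/openai';

// In-memory cache keyed by the serialized call parameters.
// (Illustrative only; use a KV store like Redis in production.)
const cache = new Map<string, any>();

const cacheMiddleware: LanguageModelV1Middleware = {
  // Non-streaming calls: return the stored result on a hit,
  // otherwise call the model and store its output.
  wrapGenerate: async ({ doGenerate, params }) => {
    const key = JSON.stringify(params);
    if (cache.has(key)) return cache.get(key);
    const result = await doGenerate();
    cache.set(key, result);
    return result;
  },

  // Streaming calls: on a hit, replay the recorded chunks through
  // simulateReadableStream; on a miss, record the live stream into
  // the cache while passing it through unchanged.
  wrapStream: async ({ doStream, params }) => {
    const key = JSON.stringify(params);
    const cachedChunks = cache.get(key) as
      | LanguageModelV1StreamPart[]
      | undefined;

    if (cachedChunks) {
      return {
        stream: simulateReadableStream({
          initialDelayInMs: 0,
          chunkDelayInMs: 10,
          chunks: cachedChunks,
        }),
        rawCall: { rawPrompt: null, rawSettings: {} },
      };
    }

    const { stream, ...rest } = await doStream();
    const recorded: LanguageModelV1StreamPart[] = [];
    const recorder = new TransformStream<
      LanguageModelV1StreamPart,
      LanguageModelV1StreamPart
    >({
      transform(chunk, controller) {
        recorded.push(chunk); // capture each chunk as it flows through
        controller.enqueue(chunk);
      },
      flush() {
        cache.set(key, recorded); // persist once the stream finishes
      },
    });

    return { stream: stream.pipeThrough(recorder), ...rest };
  },
};

// Wrap any model with the caching middleware.
export const cachedModel = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: cacheMiddleware,
});
```

The `onFinish` approach works at the call site instead of the model layer. A hedged sketch of a route handler using `streamText` with `@vercel/kv` follows; the cache key derivation and the plain-text response for cache hits are simplifying assumptions (a hit here returns text in one piece rather than re-streaming it).

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { kv } from '@vercel/kv';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const key = JSON.stringify(messages); // hypothetical cache key

  // Serve a previously stored completion if one exists.
  const cached = await kv.get<string>(key);
  if (cached != null) {
    return new Response(cached);
  }

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    // Persist the finished text to KV once streaming completes.
    onFinish: async ({ text }) => {
      await kv.set(key, text, { ex: 60 * 60 }); // 1-hour TTL
    },
  });

  return result.toTextStreamResponse();
}
```

The middleware approach preserves full streaming fidelity on cache hits (chunks replay with their original structure), while the `onFinish` approach is simpler but only recovers the final text.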