Cache AI responses with language model middleware (intercepting calls via wrapGenerate/wrapStream) or with onFinish callbacks; replay cached streams using simulateReadableStream.
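The middleware approach can be sketched as a plain object exposing a `wrapGenerate` hook that checks an in-memory cache before calling the underlying model. This is a minimal, self-contained sketch: the `GenerateFn` type, the `Map` cache, and the key scheme are illustrative assumptions, and in a real app you would type the object as the SDK's middleware interface and pass it to `wrapLanguageModel` (a `wrapStream` hook would cache and replay stream chunks analogously, e.g. via `simulateReadableStream`).

```typescript
// Hypothetical stand-in for the SDK's generate call signature.
type GenerateFn = () => Promise<{ text: string }>;

// In-memory cache keyed by the serialized call parameters.
const cache = new Map<string, { text: string }>();

// Middleware-style wrapper: returns the cached result when the
// serialized params match a previous call, otherwise calls the model.
const cacheMiddleware = {
  wrapGenerate: async ({
    doGenerate,
    params,
  }: {
    doGenerate: GenerateFn;
    params: unknown;
  }): Promise<{ text: string }> => {
    const key = JSON.stringify(params); // assumption: params are JSON-serializable
    const hit = cache.get(key);
    if (hit) return hit; // cache hit: replay the stored response
    const result = await doGenerate(); // cache miss: call the underlying model
    cache.set(key, result);
    return result;
  },
};

// Assumed wiring with the real SDK (not executed here):
// const model = wrapLanguageModel({ model: yourModel, middleware: cacheMiddleware });
```

With this wiring, repeated calls that serialize to the same parameters hit the cache and skip the model entirely; for persistence across processes you would swap the `Map` for a store such as Redis.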