Streaming displays parts of an LLM response incrementally instead of waiting for the full generation to finish; use `streamText` and iterate its text stream with `for await` to receive text chunks as they arrive.
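
A minimal sketch of this pattern, assuming the Vercel AI SDK (`ai` package) with the `@ai-sdk/openai` provider; the model id and prompt are placeholders:

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function main() {
  // Start the generation; streamText returns a result whose textStream
  // yields text chunks as the model produces them.
  const result = streamText({
    model: openai('gpt-4o-mini'), // assumed model id for illustration
    prompt: 'Explain streaming responses in one short paragraph.',
  });

  // Consume chunks incrementally instead of waiting for the full response.
  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
}

main();
```

The key point is that `result.textStream` is an async iterable, so each chunk can be rendered (printed, appended to the UI) the moment it arrives rather than after the whole completion is available.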