Efficiently condense long text using LLMs and the Spring Ecosystem.
Spring AI: An abstraction layer over LLM providers such as OpenAI and Google Gemini.
Key Component: Inject the `ChatClient` (via `ChatClient.Builder`) for seamless model interaction.
Goal: Create a service that accepts long text and returns a brief summary.
Prompt Template: Define the instruction for the LLM (e.g., "Summarize the text concisely: {input}").
Execution: Use the fluent API on the injected client, e.g. `chatClient.prompt().user(template).call().content()`, to trigger the AI call and extract the response text.
Result: The service method returns the summarized string.
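The service described above can be sketched as follows. This is a minimal sketch against the Spring AI `ChatClient` fluent API; the class name `SummarizationService` and method name `summarize` are illustrative choices, not mandated by the source (it requires the `spring-ai-starter-model-*` dependency and a configured API key to run).

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

@Service
public class SummarizationService {

    private final ChatClient chatClient;

    // Spring injects a pre-configured ChatClient.Builder; build() yields the client.
    public SummarizationService(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    // Accepts long text, returns a brief summary produced by the LLM.
    public String summarize(String text) {
        return chatClient.prompt()
                .user(u -> u.text("Summarize the text concisely: {input}")
                            .param("input", text))
                .call()
                .content();
    }
}
```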
Controller: Implement a `@RestController` to expose the functionality.
Endpoint: Use `@PostMapping` (e.g., `/summarize/normal`).
Data Flow: Controller accepts request body (`@RequestBody String text`) and delegates processing to the `SummarizationService`.
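A matching controller might look like the sketch below; the `/summarize/normal` path comes from the source, while the controller class name and the injected `SummarizationService` with a `summarize` method are assumptions carried over from the service sketch.

```java
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/summarize")
public class SummarizationController {

    private final SummarizationService service;

    public SummarizationController(SummarizationService service) {
        this.service = service;
    }

    // Accepts the raw request body and delegates to the service.
    @PostMapping("/normal")
    public String summarize(@RequestBody String text) {
        return service.summarize(text);
    }
}
```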
Long Inputs: Use **Chunking** to split large text before summarization.
Recursive Summary: Summarize chunks, combine results, and then summarize the combined text for the final output.
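The chunk-then-recombine flow can be illustrated in plain Java, with the LLM call abstracted as a `UnaryOperator<String>` so the splitting and combining logic stands on its own. The character-based chunk size and the helper names are assumptions for illustration; a production version would typically split on sentence or token boundaries.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class RecursiveSummarizer {

    // Split text into chunks of at most maxChars characters.
    static List<String> chunk(String text, int maxChars) {
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < text.length(); i += maxChars) {
            chunks.add(text.substring(i, Math.min(text.length(), i + maxChars)));
        }
        return chunks;
    }

    // Summarize each chunk, combine the partial summaries,
    // then summarize the combined text for the final output.
    static String summarizeLong(String text, int maxChars, UnaryOperator<String> summarize) {
        if (text.length() <= maxChars) {
            return summarize.apply(text);
        }
        StringBuilder combined = new StringBuilder();
        for (String piece : chunk(text, maxChars)) {
            combined.append(summarize.apply(piece)).append('\n');
        }
        return summarize.apply(combined.toString());
    }
}
```

In the real service, `summarize` would wrap the `ChatClient` call from earlier; here it is injected so the control flow is testable without a model.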
Control: Adjust the prompt to control output style (e.g., "in five bullet points") or length ("under 100 words").
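One way to keep these style and length controls out of scattered string literals is a small prompt builder; the helper below is a hypothetical convenience, not part of Spring AI.

```java
public class PromptStyles {

    // Compose the instruction from a style and a length constraint;
    // {input} is left as a template placeholder for Spring AI to fill.
    static String buildPrompt(String style, String lengthLimit) {
        return "Summarize the text " + style + ", " + lengthLimit + ": {input}";
    }
}
```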