This is an automated email from the ASF dual-hosted git repository.
wenjin272 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-agents.git
The following commit(s) were added to refs/heads/main by this push:
new df897987 [Doc] Introduce Mem0 based Long-Term Memory (#637)
df897987 is described below
commit df89798776f1087184d38274ebd821fa0bd09dce
Author: Howie Wang <[email protected]>
AuthorDate: Mon May 11 10:15:24 2026 +0800
[Doc] Introduce Mem0 based Long-Term Memory (#637)
* [Doc] Introduce Mem0 based Long-Term Memory
---
.../docs/development/memory/long_term_memory.md | 591 ++++++++++++---------
1 file changed, 330 insertions(+), 261 deletions(-)
diff --git a/docs/content/docs/development/memory/long_term_memory.md
b/docs/content/docs/development/memory/long_term_memory.md
index 8c018689..b2475fc0 100644
--- a/docs/content/docs/development/memory/long_term_memory.md
+++ b/docs/content/docs/development/memory/long_term_memory.md
@@ -24,447 +24,516 @@ under the License.
## Overview
-Long-Term Memory is a persistent storage mechanism in Flink Agents designed
for storing large amounts of data across multiple agent runs with semantic
search capabilities. It provides efficient storage, retrieval, and automatic
compaction to manage memory capacity.
+Long-Term Memory is a persistent storage mechanism in Flink Agents for storing
information across multiple agent runs with semantic search capabilities. It
provides automatic memory extraction, consolidation, and retrieval.
-{{< hint info >}}
-Long-Term Memory is built on vector stores, enabling semantic search to find
relevant information based on meaning rather than exact matches.
-{{< /hint >}}
+Long-Term Memory currently supports the [Mem0](https://github.com/mem0ai/mem0)
backend. Mem0 is an intelligent memory layer that automatically extracts facts
from conversations, consolidates related memories, and provides semantic
retrieval — eliminating the need for manual memory management.
-## When to Use Long-Term Memory
+## Prerequisites
-Long-Term Memory is ideal for:
+Declare the following resources in your agent plan:
+- A [ChatModel]({{< ref "docs/development/chat_models" >}}) for memory extraction and management
+- An [EmbeddingModel]({{< ref "docs/development/embedding_models" >}}) for vector generation
+- A [VectorStore]({{< ref "docs/development/vector_stores" >}}) for persistent storage
-- **Large Document Collections**: Storing and searching through large amounts
of text.
-- **Conversation History**: Maintaining long conversation histories with
semantic search.
-- **Context Retrieval**: Finding relevant context from past interactions.
+## Configuration
-{{< hint warning >}}
-Long-Term Memory is designed for retrieve concise and highly related context.
For complete original data retrieval, consider using [Short-Term Memory]({{<
ref "docs/development/memory/sensory_and_short_term_memory" >}}) instead.
-{{< /hint >}}
+Mem0 Long-Term Memory is enabled by setting three configuration options:
+
+| Key | Type | Description |
+|-----|------|-------------|
+| `long-term-memory.mem0.chat-model-setup` | String | Resource name of the chat model |
+| `long-term-memory.mem0.embedding-model-setup` | String | Resource name of the embedding model |
+| `long-term-memory.mem0.vector-store` | String | Resource name of the vector store |
+
+When all three options are configured, the framework automatically creates a
Mem0-based Long-Term Memory instance and attaches it to the `RunnerContext`.
+
+### Configuration Example
+
+{{< tabs "LTM Configuration" >}}
+
+{{< tab "Python" >}}
+
+```python
+from flink_agents.api.execution_environment import AgentsExecutionEnvironment
+from flink_agents.api.core_options import AgentConfigOptions
+from flink_agents.api.memory.long_term_memory import LongTermMemoryOptions
+
+env = AgentsExecutionEnvironment.get_execution_environment()
+agents_config = env.get_config()
+
+# Set job identifier (maps to Mem0 user_id)
+agents_config.set(AgentConfigOptions.JOB_IDENTIFIER, "my_job")
+
+# Configure Mem0 Long-Term Memory
+agents_config.set(
+ LongTermMemoryOptions.Mem0.CHAT_MODEL_SETUP,
+ "my_chat_model"
+)
+agents_config.set(
+ LongTermMemoryOptions.Mem0.EMBEDDING_MODEL_SETUP,
+ "my_embedding_model"
+)
+agents_config.set(
+ LongTermMemoryOptions.Mem0.VECTOR_STORE,
+ "my_vector_store"
+)
+```
+
+{{< /tab >}}
+
+{{< tab "Java" >}}
+
+```java
+AgentsExecutionEnvironment agentsEnv =
+ AgentsExecutionEnvironment.getExecutionEnvironment(env);
+Configuration agentsConfig = agentsEnv.getConfig();
-## Data Structure
+// Set job identifier (maps to Mem0 user_id)
+agentsConfig.set(AgentConfigOptions.JOB_IDENTIFIER, "my_job");
+
+// Configure Mem0 Long-Term Memory
+agentsConfig.set(
+ LongTermMemoryOptions.Mem0.CHAT_MODEL_SETUP,
+ "my_chat_model"
+);
+agentsConfig.set(
+ LongTermMemoryOptions.Mem0.EMBEDDING_MODEL_SETUP,
+ "my_embedding_model"
+);
+agentsConfig.set(
+ LongTermMemoryOptions.Mem0.VECTOR_STORE,
+ "my_vector_store"
+);
+```
-### Memory Item
+{{< /tab >}}
-`MemorySetItem` is the abstraction for representing an item stored in
long-term memory. The item can be a piece of text, a chat message, a java or
python object, semi-structured document, image, audio and video.
+{{< /tabs >}}
{{< hint info >}}
-Currently, item can only be string and `ChatMessage`.
+If `JOB_IDENTIFIER` is not configured, the Flink job ID will be used by default.
{{< /hint >}}
-`MemorySetItem` has the following properties:
-* **memory_set_name**: The name of the memory set this item belong to.
-* **id**: The unique identifier of the memory item.
-* **value**: The value of the memory item.
-* **compacted**: Whether this item has been compacted.
-* **created_time**: Timestamp or timestamp range for when this memory item was
created.
-* **last_accessed_time**: Timestamp for the last time this memory item was
accessed.
+## Data Model
+
+### MemorySetItem
-### Memory Set
+Represents a single memory item stored in Long-Term Memory:
-`MemorySet` is a set of memory items, which can be maintained and searched
separately.
+| Field | Type | Description |
+|-------|------|-------------|
+| `memory_set_name` | String | Name of the memory set this item belongs to |
+| `id` | String | Unique identifier of the item |
+| `value` | String | The memory content (extracted by Mem0) |
+| `created_at` | Optional[DateTime] | When the item was created |
+| `updated_at` | Optional[DateTime] | When the item was last updated |
+| `additional_metadata` | Optional[Map] | Additional metadata associated with the item |
-`MemorySet` has the following properties:
-- **Name**: Unique identifier for the memory set
-- **Item Type**: Type of items stored
-- **Capacity**: Maximum number of items before compaction is triggered
-- **Compaction Config**: Configuration for compaction
+### MemorySet
+
+A named collection of memory items. Memory sets provide logical grouping and
isolation of memories. See [Context Isolation](#context-isolation) for details
on how memories are scoped and isolated.
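To make the field layout above concrete, here is an illustrative Python sketch of the item shape. This is *not* the actual flink-agents class, just a stand-in dataclass whose fields mirror the table:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative only: the real MemorySetItem is provided by flink-agents;
# this sketch simply mirrors the documented fields.
@dataclass
class MemorySetItemSketch:
    memory_set_name: str                   # memory set this item belongs to
    id: str                                # unique identifier of the item
    value: str                             # memory content extracted by Mem0
    created_at: Optional[datetime] = None  # when the item was created
    updated_at: Optional[datetime] = None  # when the item was last updated
    additional_metadata: Optional[dict] = None  # extra metadata, if any

item = MemorySetItemSketch(
    memory_set_name="conversations",
    id="mem_123abc",
    value="The user prefers Python over Java.",
)
```

Note that `value` holds the fact Mem0 extracted from a conversation, not necessarily the raw input text.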
## Operations
-### Creating and Getting Memory Set
+### Getting a Memory Set
-{{< tabs "Memory Set Management" >}}
+{{< tabs "Get Memory Set" >}}
{{< tab "Python" >}}
```python
-ltm = ctx.long_term_memory
-# Get or create a memory set
-memory_set: MemorySet = ltm.get_or_create_memory_set(
- name="my_memory_set",
- item_type=str, # or ChatMessage
- capacity=50,
- compaction_config=CompactionConfig(
- model="my_chat_model",
- limit=1 # Number of summaries to generate
- )
-)
+from flink_agents.api.decorators import action
+from flink_agents.api.events.event import InputEvent, Event
+from flink_agents.api.runner_context import RunnerContext
-# Get an existing memory set
-memory_set: MemorySet = ltm.get_memory_set(name="my_memory_set")
-
-# Delete a memory set
-deleted: bool = ltm.delete_memory_set(name="my_memory_set")
+@action(InputEvent.EVENT_TYPE)
+@staticmethod
+def process_event(event: Event, ctx: RunnerContext) -> None:
+ ltm = ctx.long_term_memory
+
+ # Get (or create) a memory set
+ memory_set = ltm.get_memory_set(name="conversations")
```
+
{{< /tab >}}
{{< tab "Java" >}}
+
```java
-BaseLongTermMemory ltm = ctx.getLongTermMemory();
-// Get or create a memory set
-MemorySet memorySet =
- ltm.getOrCreateMemorySet(
- "my_memory_set",
- String.class,
- 50,
- new CompactionConfig("my_chat_model", 1));
-
-// Get an existing memory set
-memorySet = ltm.getMemorySet("my_memory_set");
-
-// Delete a memory set
-boolean deleted = ltm.deleteMemorySet("my_memory_set");
+@Action(listenEventTypes = {InputEvent.EVENT_TYPE})
+public static void processEvent(Event event, RunnerContext ctx) throws Exception {
+ InputEvent inputEvent = InputEvent.fromEvent(event);
+ BaseLongTermMemory ltm = ctx.getLongTermMemory();
+
+ // Get (or create) a memory set
+ MemorySet memorySet = ltm.getMemorySet("conversations");
+}
```
+
{{< /tab >}}
{{< /tabs >}}
### Adding Items
-Add items to a memory set. When capacity is reached, compaction is
automatically triggered:
-
{{< tabs "Adding Items" >}}
{{< tab "Python" >}}
+
```python
# Add a single item
-item_id: List[str] = memory_set.add("This is a conversation message")
+ids = memory_set.add(items="The user prefers Python over Java.")
# Add multiple items
-item_ids: List[str] = memory_set.add([
- "First message",
- "Second message",
- "Third message"
+ids = memory_set.add(items=[
+ "User likes coffee in the morning.",
+ "User works from home on Fridays.",
])
-# Add with custom IDs
-item_ids = memory_set.add(
- items=["Message 1", "Message 2"],
- ids=["msg_1", "msg_2"]
+# Add with metadata
+ids = memory_set.add(
+ items="Important meeting tomorrow.",
+ metadatas={"category": "work"}
)
```
+
{{< /tab >}}
{{< tab "Java" >}}
+
```java
// Add a single item
-String itemId = memorySet.add(List.of("This is a conversation message"), null,
null).get(0);
+List<String> ids = memorySet.add(
+ List.of("The user prefers Python over Java."), null);
// Add multiple items
-List<String> itemIds = memorySet.add(List.of(
- "First message",
- "Second message",
- "Third message"
-), null, null);
-
-// Add with custom IDs
-itemIds = memorySet.add(
- List.of("Message 1", "Message 2"),
- List.of("msg_1", "msg_2"),
- null
+ids = memorySet.add(List.of(
+ "User likes coffee in the morning.",
+ "User works from home on Fridays."
+), null);
+
+// Add with metadata
+ids = memorySet.add(
+ List.of("Important meeting tomorrow."),
+ List.of(Map.of("category", "work"))
);
```
+
{{< /tab >}}
{{< /tabs >}}
-{{< hint info >}}
-If no custom ids are provided, random id will be generated for each item.
-{{< /hint >}}
### Retrieving Items
-Retrieve items by ID or get all items:
-
{{< tabs "Retrieving Items" >}}
{{< tab "Python" >}}
+
```python
-# Get a single item by ID
-item: MemorySetItem = memory_set.get(ids="item_id_1")
+# Get a specific item by ID
+items = memory_set.get(ids="mem_123abc")
# Get multiple items by IDs
-items: List[MemorySetItem] = memory_set.get(ids=["item_id_1", "item_id_2"])
+items = memory_set.get(ids=["mem_123abc", "mem_456def"])
+
+# Get all items
+all_items = memory_set.get()
-# Get all items if no IDs provided
-all_items: List[MemorySetItem] = memory_set.get()
+# Get with metadata filter
+work_items = memory_set.get(filters={"category": "work"})
# Access item properties
for item in items:
print(f"ID: {item.id}")
print(f"Value: {item.value}")
- print(f"Compacted: {item.compacted}")
- print(f"Created: {item.created_time}")
- print(f"Last Accessed: {item.last_accessed_time}")
+ print(f"Created: {item.created_at}")
+ print(f"Updated: {item.updated_at}")
+ print(f"Metadata: {item.additional_metadata}")
```
+
{{< /tab >}}
{{< tab "Java" >}}
+
```java
-// Get a single item by ID
-MemorySetItem item = memorySet.get(List.of("item_id_1")).get(0);
+// Get a specific item by ID
+List<MemorySetItem> items = memorySet.get(List.of("item_id_1"), null, null);
// Get multiple items by IDs
-List<MemorySetItem> items = memorySet.get(List.of("item_id_1", "item_id_2"));
+items = memorySet.get(List.of("item_id_1", "item_id_2"), null, null);
-// Get all items if no IDs provided
-List<MemorySetItem> allItems = memorySet.get(null);
+// Get all items
+List<MemorySetItem> allItems = memorySet.get(null, null, null);
+
+// Get with metadata filter
+List<MemorySetItem> workItems = memorySet.get(null, Map.of("category", "work"), null);
// Access item properties
-for (MemorySetItem myItem : items) {
+for (MemorySetItem item : items) {
System.out.println("ID: " + item.getId());
System.out.println("Value: " + item.getValue());
- System.out.println("Compacted: " + item.isCompacted());
- System.out.println("Created: " + item.getCreatedTime());
- System.out.println("Last Accessed: " + item.getLastAccessedTime());
+ System.out.println("Created: " + item.getCreatedAt());
+ System.out.println("Updated: " + item.getUpdatedAt());
+ System.out.println("Metadata: " + item.getAdditionalMetadata());
}
```
+
{{< /tab >}}
{{< /tabs >}}
### Semantic Search
-Search for relevant items using natural language queries:
-
{{< tabs "Semantic Search" >}}
{{< tab "Python" >}}
+
```python
-# Search for relevant items
-results: List[MemorySetItem] = memory_set.search(
- query="What did the user ask about?",
- limit=5
+# Basic search
+results = memory_set.search(
+ query="What does the user like?",
+ limit=5,
+)
+
+# Search with metadata filter
+results = memory_set.search(
+ query="programming languages",
+ limit=5,
+ filters={"topic": "programming"},
)
```
+
{{< /tab >}}
{{< tab "Java" >}}
+
```java
-// Search for relevant items
+// Basic search
List<MemorySetItem> results = memorySet.search(
- "What did the user ask about?",
- 5, // limit
- null // additional kwargs passed to vector store query
+ "What does the user like?",
+ 5,
+ null,
+ Map.of()
+);
+
+// Search with metadata filter
+results = memorySet.search(
+ "programming languages",
+ 5,
+ Map.of("topic", "programming"),
+ Map.of()
);
```
+
{{< /tab >}}
{{< /tabs >}}
-### Count Size
-
-Check the current size of a memory set:
+### Deleting Items
-{{< tabs "Checking Size" >}}
+{{< tabs "Deleting Items" >}}
{{< tab "Python" >}}
+
```python
-# Get the current size
-current_size = memory_set.size
+# Delete specific items by ID
+memory_set.delete(ids="mem_123abc")
-# Check if capacity is reached
-if memory_set.size >= memory_set.capacity:
- print("Capacity reached, compaction will be triggered on next add")
+# Delete multiple items
+memory_set.delete(ids=["mem_123abc", "mem_456def"])
+
+# Delete all items in the memory set
+memory_set.delete()
```
+
{{< /tab >}}
{{< tab "Java" >}}
+
```java
-// Get the current size
-int currentSize = memorySet.size();
+// Delete specific items by ID
+memorySet.delete(List.of("item_id_1"));
-// Check if capacity is reached
-if (currentSize >= memorySet.getCapacity()) {
- System.out.println("Capacity reached, compaction will be triggered on next
add");
-}
+// Delete multiple items
+memorySet.delete(List.of("item_id_1", "item_id_2"));
+
+// Delete all items in the memory set
+memorySet.delete(null);
```
+
{{< /tab >}}
{{< /tabs >}}
-## Usage in Agent
+### Deleting a Memory Set
-### Prerequisites
+{{< tabs "Delete Memory Set" >}}
-To use Long-Term Memory, you need:
+{{< tab "Python" >}}
-1. **Vector Store**: A configured vector store (e.g., ChromaDB) - see [Vector
Stores]({{< ref "docs/development/vector_stores" >}})
-2. **Embedding Model**: An embedding model for converting text to vectors -
see [Embedding Models]({{< ref "docs/development/embedding_models" >}})
-3. **Chat Model** : Used for summarizing and combining related items.
+```python
+ltm = ctx.long_term_memory
+deleted = ltm.delete_memory_set(name="conversations")
+```
+
+{{< /tab >}}
+
+{{< tab "Java" >}}
+
+```java
+BaseLongTermMemory ltm = ctx.getLongTermMemory();
+boolean deleted = ltm.deleteMemorySet("conversations");
+```
-### Configuration
+{{< /tab >}}
-Before using Long-Term Memory, you need to configure it in your agent
execution environment.
+{{< /tabs >}}
+### Metadata Filtering
-| Key | Default | Type
| Description
|
-|--------------------------------------------------|---------|-----------------------|------------------------------------------------------------------------------------------------|
-| AgentConfigOptions.JOB_IDENTIFIER | job id | String
| The unique identifier of the agent job, remaining consistent after
restoring from a savepoint. |
-| LongTermMemoryOptions.BACKEND | none |
LongTermMemoryBackend | The backend of the long-term memory.
|
-| LongTermMemoryOptions.EXTERNAL_VECTOR_STORE_NAME | none | String
| The name of the vector store used as backend.
|
-| LongTermMemoryOptions.ASYNC_COMPACTION | true | boolean
| Execute compaction asynchronously.
|
+Add metadata when storing memories and use filters during retrieval and search:
-{{< tabs "Long-Term Memory Configuration" >}}
+{{< tabs "Metadata Filtering" >}}
{{< tab "Python" >}}
-```python
-agents_env = AgentsExecutionEnvironment.get_execution_environment(env=env)
-agents_config = agents_env.get_config()
-
-# Set job identifier
-agents_config.set(AgentConfigOptions.JOB_IDENTIFIER, "my_job")
-# Configure long-term memory backend
-agents_config.set(
- LongTermMemoryOptions.BACKEND,
- LongTermMemoryBackend.EXTERNAL_VECTOR_STORE
+```python
+# Store with metadata
+memory_set.add(
+ items="User prefers functional programming.",
+ metadatas={"topic": "programming", "confidence": "high"}
)
-# Specify the vector store to use
-agents_config.set(
- LongTermMemoryOptions.EXTERNAL_VECTOR_STORE_NAME,
- "my_vector_store"
-)
+# Retrieve with filter
+results = memory_set.get(filters={"topic": "programming"})
-# Enable async compaction
-agents_config.set(LongTermMemoryOptions.ASYNC_COMPACTION, True)
+# Search with filter
+results = memory_set.search(
+ query="what programming language",
+ limit=5,
+ filters={"confidence": "high"}
+)
```
+
{{< /tab >}}
{{< tab "Java" >}}
-```java
-AgentsExecutionEnvironment agentsEnv =
- AgentsExecutionEnvironment.getExecutionEnvironment(env);
-Configuration agentsConfig = agentsEnv.getConfig();
-
-// Set job identifier
-agentsConfig.set(AgentConfigOptions.JOB_IDENTIFIER, "my_job");
-// Configure long-term memory backend
-agentsConfig.set(
- LongTermMemoryOptions.BACKEND,
- LongTermMemoryBackend.EXTERNAL_VECTOR_STORE
+```java
+// Store with metadata
+memorySet.add(
+ List.of("User prefers functional programming."),
+ List.of(Map.of("topic", "programming", "confidence", "high"))
);
-// Specify the vector store to use
-agentsConfig.set(
- LongTermMemoryOptions.EXTERNAL_VECTOR_STORE_NAME,
- "my_vector_store"
-);
+// Retrieve with filter
+List<MemorySetItem> results = memorySet.get(null, Map.of("topic", "programming"), null);
-// Enable async compaction
-agentsConfig.set(LongTermMemoryOptions.ASYNC_COMPACTION, true);
+// Search with filter
+results = memorySet.search(
+ "what programming language",
+ 5,
+ Map.of("confidence", "high"),
+ Map.of()
+);
```
+
{{< /tab >}}
{{< /tabs >}}
-### Accessing Long-Term Memory
+## Usage in Agent
-Long-Term Memory is accessed through the `RunnerContext` object:
+### Complete Example
-{{< tabs "Accessing Long-Term Memory" >}}
+{{< tabs "Complete Example" >}}
{{< tab "Python" >}}
```python
-@action(InputEvent.EVENT_TYPE)
-def process_event(event: Event, ctx: RunnerContext) -> None:
- # Access long-term memory
- ltm = ctx.long_term_memory
+from flink_agents.api.decorators import action
+from flink_agents.api.execution_environment import AgentsExecutionEnvironment
+from flink_agents.api.core_options import AgentConfigOptions
+from flink_agents.api.events.event import InputEvent, OutputEvent, Event
+from flink_agents.api.memory.long_term_memory import LongTermMemoryOptions
+from flink_agents.api.runner_context import RunnerContext
+
+class PersonalizedAssistant:
- # Get or create a memory set
- memory_set = ltm.get_or_create_memory_set(
- name="conversations",
- item_type=str,
- capacity=100,
- compaction_config=CompactionConfig(model="my_chat_model")
- )
+ @action(InputEvent.EVENT_TYPE)
+ @staticmethod
+ def process_event(event: Event, ctx: RunnerContext) -> None:
+ """Respond to user using long-term memory."""
+ ltm = ctx.long_term_memory
+ user_query = InputEvent.from_event(event).input
+
+ # Get memory set
+ memory_set = ltm.get_memory_set(name="assistant_memories")
+
+ # Search for relevant context from past interactions
+ relevant = memory_set.search(query=user_query, limit=5)
+ memory_context = "\n".join([f"- {m.value}" for m in relevant])
+
+ # Generate response using your Agent logic
+ prompt = f"Known context:\n{memory_context}\n\nUser: {user_query}"
+ response = f"Response to: {user_query}"
+
+ # Store the interaction
+ memory_set.add(items=f"User asked about: {user_query}")
+
+ ctx.send_event(OutputEvent(output=response))
+
+# Setup
+env = AgentsExecutionEnvironment.get_execution_environment()
+agents_config = env.get_config()
+agents_config.set(AgentConfigOptions.JOB_IDENTIFIER, "personalized_assistant")
+agents_config.set(LongTermMemoryOptions.Mem0.CHAT_MODEL_SETUP, "my_chat_model")
+agents_config.set(LongTermMemoryOptions.Mem0.EMBEDDING_MODEL_SETUP, "my_embedding_model")
+agents_config.set(LongTermMemoryOptions.Mem0.VECTOR_STORE, "my_vector_store")
```
+
{{< /tab >}}
{{< tab "Java" >}}
+
```java
@Action(listenEventTypes = {InputEvent.EVENT_TYPE})
public static void processEvent(Event event, RunnerContext ctx) throws Exception {
- // Access long-term memory
+ InputEvent inputEvent = InputEvent.fromEvent(event);
BaseLongTermMemory ltm = ctx.getLongTermMemory();
+ String userQuery = inputEvent.getInput();
- // Get or create a memory set
- MemorySet memorySet = ltm.getOrCreateMemorySet(
- "conversations",
- String.class,
- 100,
- new CompactionConfig("my_chat_model")
- );
-}
-```
-{{< /tab >}}
-
-{{< /tabs >}}
+ // Get memory set
+ MemorySet memorySet = ltm.getMemorySet("assistant_memories");
-## Compaction
+ // Search for relevant context from past interactions
+    List<MemorySetItem> relevant = memorySet.search(userQuery, 5, null, Map.of());
+ StringBuilder memoryContext = new StringBuilder();
+ for (MemorySetItem item : relevant) {
+ memoryContext.append("- ").append(item.getValue()).append("\n");
+ }
-When capacity is reached, long-term memory will use LLM to summarize and
combine related items.
+ // Generate response using your Agent logic
+ String response = "Response to: " + userQuery;
-User can configure the compaction config when create the `MemorySet`.
-{{< tabs "Compaction Config" >}}
+ // Store the interaction
+ memorySet.add(List.of("User asked about: " + userQuery), null);
-{{< tab "Python" >}}
-
-```python
-# Create memory set with compaction configuration.
-memory_set = ltm.get_or_create_memory_set(
- name="conversations",
- item_type=str,
- capacity=10, # The framework will automatically trigger compactions and
try to maintain the
- # size of the memory set not exceeding the given capacity
with best efforts
- compaction_config=CompactionConfig(model="my_chat_model", limit=1)
-)
+ ctx.sendEvent(new OutputEvent(response));
+}
```
-{{< /tab >}}
-{{< tab "Java" >}}
-```java
-// Create memory set with compaction configuration.
-MemorySet memorySet = ltm.getOrCreateMemorySet(
- "conversations",
- String.class,
- 10, // The framework will automatically trigger compactions and try to
maintain the
- // size of the memory set not exceeding the given capacity with best
efforts
- new CompactionConfig("my_chat_model", 1)
-);
-```
{{< /tab >}}
{{< /tabs >}}
-### Async Compaction
-
-Compactions are by default asynchronously performed, to avoid blocking the
agent execution. You can also explicitly disable this, so that the agent
execution will be paused during the compaction.
-{{< tabs "Async Compaction" >}}
-
-{{< tab "Python" >}}
-```python
-# Explicitly disable async compaction in configuration
-agents_config.set(LongTermMemoryOptions.ASYNC_COMPACTION, False)
-```
-{{< /tab >}}
-
-{{< tab "Java" >}}
-```java
-// Explicitly disable async compaction in configuration
-agentsConfig.set(LongTermMemoryOptions.ASYNC_COMPACTION, false);
-```
-{{< /tab >}}
+## Context Isolation
-{{< /tabs >}}
+Long-Term Memory automatically provides context isolation through Flink's
keyed partition model. Each keyed partition maintains its own isolated set of
memories, ensuring that memories from one user or session do not leak into
another.
-{{< hint info >}}
-When async compaction is enabled, compaction runs in a background thread. If
compaction fails, errors are logged but don't cause the Flink job to fail.
-{{< /hint >}}
+The isolation hierarchy works as follows:
+- **Job-level** (`JOB_IDENTIFIER`): Separates memories between different Flink jobs
+- **Partition-level** (keyed partition key): Separates memories between different keys within the same job
+- **Set-level** (memory set name): Separates memories between different logical categories within the same partition
-{{< hint info >}}
-When async compaction is enabled, compaction won't block user adding items to
the memory set. The size of the memory set may exceed capacity temporarily.
-{{< /hint >}}
\ No newline at end of file
+This means you can safely use the same memory set name across different
partitions — each partition will only access its own memories.
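The three-level hierarchy can be pictured as a composite namespace key. The sketch below is purely illustrative of the scoping rule (the actual key encoding is internal to flink-agents), but it shows why identical memory set names never collide across partitions:

```python
def memory_namespace(job_id: str, partition_key: str, set_name: str) -> str:
    # Illustrative only: memories are scoped first by job, then by keyed
    # partition, then by memory set name. flink-agents' real encoding
    # differs, but the isolation semantics are the same.
    return f"{job_id}/{partition_key}/{set_name}"

# The same set name under different partition keys resolves to different
# scopes, so user_a's memories are invisible to user_b.
scope_a = memory_namespace("my_job", "user_a", "conversations")
scope_b = memory_namespace("my_job", "user_b", "conversations")
assert scope_a != scope_b
```

Restoring the same `JOB_IDENTIFIER` after a savepoint therefore preserves access to previously stored memories, while changing it effectively starts from an empty memory space.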