AI Filter Block
Apply AI-powered filtering to arrays using various models and providers
Overview
The AI Filter Block applies a filter operation over an array of values, where the filter condition is evaluated by one of the 250+ AI models that LawMe connects to. Instead of writing explicit filter logic, you describe the filtering criteria in natural language and the model decides, element by element, which values to keep.
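A minimal sketch of the idea, assuming a hypothetical `callModel` helper standing in for whatever model and provider the block is configured with (this is an illustration, not LawMe's actual implementation):

```typescript
// Conceptual sketch only, not LawMe's implementation. `callModel` is a
// hypothetical stand-in for the configured chat model/provider.
async function callModel(systemPrompt: string | undefined, userPrompt: string): Promise<string> {
  // ...send the prompts to the model and return its raw text reply
  return "TRUE";
}

async function aiFilter<T>(
  items: T[],
  condition: string,
  systemPrompt?: string,
): Promise<{ indices: number[]; output: T[] }> {
  // Every element is judged independently, so the checks can run in parallel.
  const verdicts = await Promise.all(
    items.map(async (item) => {
      const reply = await callModel(
        systemPrompt,
        `${condition}\n\nElement:\n${JSON.stringify(item)}\n\nAnswer TRUE or FALSE.`,
      );
      const answer = reply.trim().toUpperCase();
      if (answer !== "TRUE" && answer !== "FALSE") {
        // Anything other than TRUE/FALSE is treated as an error (see Error Handling).
        throw new Error(`Unexpected model response: ${reply}`);
      }
      return answer === "TRUE";
    }),
  );

  const indices = verdicts.flatMap((keep, i) => (keep ? [i] : []));
  return { indices, output: indices.map((i) => items[i]) };
}
```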
Inputs
- System Prompt: The system prompt to send to the model. Optional. Used to provide high-level guidance to the AI model.
- Condition: The condition prompt used to filter the array. Required. This prompt describes the criteria that each array element is evaluated against.
- Input: The array to filter. Required. Can contain elements of any data type.
Outputs
- Indices: The index of each element that passed the filter condition.
- Output: The filtered array, containing only the elements that passed the condition.
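For example (values are hypothetical), filtering a mixed list with the condition "Keep only fruits" might produce:

```typescript
// Hypothetical illustration of the two outputs.
const input = ["apple", "office chair", "banana"];
// Condition: "Keep only fruits"
// Indices output: [0, 2]
// Filtered output: ["apple", "banana"]
```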
Editor Settings
- Model: The AI model used to evaluate the filter condition. Available models are populated dynamically from the LLM provider configuration.
- Temperature: The sampling temperature, from 0 to 2. Higher values (e.g., 0.8) make output more random; lower values (e.g., 0.2) make it more focused and deterministic.
- Top P: An alternative to temperature sampling. Only tokens comprising the top P probability mass are considered; for example, 0.1 means only tokens in the top 10% of probability mass are considered.
- Use Top P: Toggles between top P sampling and temperature sampling.
- Max Tokens: The maximum number of tokens to generate in each chat completion.
- Stop: A sequence at which the API will stop generating further tokens.
- Presence Penalty: A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they already appear in the text so far, increasing the model’s likelihood to talk about new topics.
- Frequency Penalty: A number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
- Seed: If specified, the model attempts to return deterministic results for repeated requests with the same seed and parameters.
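LawMe builds the underlying request for you; the snippet below only illustrates how these settings typically map onto an OpenAI-style chat-completion request. The field names follow the OpenAI API and the values are examples, not LawMe internals.

```typescript
// Illustrative mapping of the editor settings onto an OpenAI-style request.
// Field names follow the OpenAI chat-completions API; values are examples.
const request = {
  model: "gpt-4o",          // Model (example value)
  temperature: 0.2,         // used when top P sampling is toggled off
  top_p: 0.1,               // used when top P sampling is toggled on
  max_tokens: 1024,         // Max Tokens
  stop: ["\n\n"],           // Stop sequence
  presence_penalty: 0.0,    // -2.0 to 2.0
  frequency_penalty: 0.0,   // -2.0 to 2.0
  seed: 42,                 // best-effort determinism across repeated requests
  messages: [
    { role: "system", content: "You are a strict classifier. Answer only TRUE or FALSE." },
    { role: "user", content: 'Keep only fruits.\n\nElement:\n"apple"\n\nAnswer TRUE or FALSE.' },
  ],
};
```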
Advanced Settings
- Max Tokens Override: Overrides the maximum number of tokens the model is assumed to support. Leave blank to use the preconfigured token limit for the selected model.
- Cache Responses: If enabled, requests with the same parameters and messages are cached, so identical requests return immediately without an API call (a sketch of the idea follows this list).
- Subgraph Partial Output: If enabled, streaming responses from this node are shown in Subgraph nodes that call this graph.
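A minimal sketch of the caching idea, keyed on the request parameters and messages, assuming a hypothetical `callModel` provider call (LawMe's actual cache may differ):

```typescript
// Hypothetical sketch of caching keyed on parameters + messages.
const cache = new Map<string, string>();

async function callModel(params: object, messages: object[]): Promise<string> {
  return "TRUE"; // stand-in for the real provider call
}

async function cachedCompletion(params: object, messages: object[]): Promise<string> {
  const key = JSON.stringify({ params, messages }); // identical requests share a key
  const hit = cache.get(key);
  if (hit !== undefined) return hit;                // immediate response, no API call

  const response = await callModel(params, messages);
  cache.set(key, response);
  return response;
}
```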
Error Handling
- The block retries failed attempts up to 3 times with exponential backoff (a sketch of this pattern follows this list)
- Token limits are automatically enforced based on the selected model
- Invalid responses from the model (anything other than TRUE or FALSE) trigger an error
- The block includes built-in timeout handling and request cancellation support
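The retry behavior follows the common backoff pattern sketched below; this is illustrative only, and the actual delays used by the block may differ.

```typescript
// Sketch of the retry pattern: up to 3 retries with exponential backoff.
async function withRetries<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```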
Example Usage
- Connect an array of items to the Input port
- Add a condition prompt like “Keep only items that are environmentally sustainable products”
- Optionally add a system prompt for additional context
- Configure the model and parameters in the settings
- Connect the Output port to use the filtered results
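Putting the steps together, a run over a small example array might look like this; the values and results are hypothetical and depend on the configured model:

```typescript
// Hypothetical end-to-end example; actual results depend on the model.
const input = ["bamboo toothbrush", "single-use plastic cup", "solar charger"];
const condition = "Keep only items that are environmentally sustainable products";

// A typical run would emit roughly:
//   Indices: [0, 2]
//   Filtered output: ["bamboo toothbrush", "solar charger"]
```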
Notes
- The block automatically handles token counting and cost tracking
- Responses are strictly evaluated as TRUE/FALSE
- The block supports parallel processing of array elements
- Built-in caching can improve performance for repeated operations