Overview

The AI Filter Block filters an array of values, with the filter condition evaluated by one of the 250+ AI models LawMe connects to. This lets you express complex filtering criteria in natural language rather than code.
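
For intuition, here is a minimal TypeScript sketch of the per-element flow. The `askModel` helper and the exact prompt wording are illustrative assumptions, not the block's actual internals:

```typescript
// Conceptual sketch (illustrative names, not the block's internals):
// each element is shown to the model together with the condition prompt,
// and the element is kept only if the model answers TRUE.
declare function askModel(prompt: string): Promise<string>; // stand-in for the API call

async function aiFilter(items: unknown[], condition: string): Promise<unknown[]> {
  const kept: unknown[] = [];
  for (const item of items) {
    const verdict = await askModel(
      `${condition}\n\nValue: ${JSON.stringify(item)}\nAnswer TRUE or FALSE.`,
    );
    if (verdict.trim().toUpperCase() === "TRUE") kept.push(item);
  }
  return kept;
}
```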

Inputs

System Prompt
string

The system prompt to send to the model. Optional. Use it to give the model high-level guidance for evaluating the condition.

Condition Prompt
string
required

The condition prompt used to filter the array. It describes the criteria that each array element is evaluated against.

Input
any[]
required

The input array to filter. Elements can be of any data type.

Outputs

Indices
number[]

The indices of the elements that passed the filter condition.

Output
any[]

The filtered input array containing only the elements that passed the condition.
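
To illustrate how the two ports relate, here is a made-up run (the values are invented, and zero-based indices are an assumption of this sketch):

```typescript
const input = ["bamboo toothbrush", "plastic straw", "solar charger"];
// Condition: "Keep only items that are environmentally sustainable products"
// Suppose the model answers TRUE, FALSE, TRUE for the three elements:
const indices = [0, 2];                                // Indices port
const output = ["bamboo toothbrush", "solar charger"]; // Output port
```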

Editor Settings

AI Model
string
default:"gpt-3.5-turbo"

The AI model to use for evaluating filter conditions. Available models are dynamically populated based on the LLM provider configuration.

Temperature
number
default:0

The sampling temperature to use, between 0 and 2. Higher values (e.g., 0.8) make output more random; lower values (e.g., 0.2) make it more focused and deterministic.

Top P
number
default:1

An alternative to temperature sampling: only the tokens comprising the top P probability mass are considered. For example, 0.1 restricts sampling to the tokens in the top 10% of probability mass.

Use Top P
boolean
default:false

Toggles between top P sampling and temperature sampling (see the request sketch at the end of this section).

Max Tokens
number
default:2048

The maximum number of tokens to generate in each chat completion.

Stop
string

A sequence at which the API will stop generating further tokens.

Presence Penalty
number

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.

Frequency Penalty
number

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.

Seed
number

If specified, the model will attempt to generate deterministic results for repeated requests with the same seed and parameters.
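
For illustration only, the sampling settings above roughly correspond to an OpenAI-style chat-completion payload; the field names and the exact wire format the block uses are assumptions here, not confirmed details:

```typescript
// Hypothetical mapping from the editor settings to a request payload;
// note how "Use Top P" selects which sampling knob is sent.
function buildRequestSettings(useTopP: boolean) {
  return {
    model: "gpt-3.5-turbo",   // AI Model
    max_tokens: 2048,         // Max Tokens
    presence_penalty: 0,      // -2.0 to 2.0
    frequency_penalty: 0,     // -2.0 to 2.0
    ...(useTopP ? { top_p: 1 } : { temperature: 0 }),
  };
}
```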

Advanced Settings

Custom Max Tokens
number

Overrides the maximum number of tokens the model is assumed to support. Leave blank to use the preconfigured limit for the selected model.

Cache Responses
boolean
default:false

If enabled, requests with the same parameters and messages are served from a cache for immediate responses without an API call (see the sketch after this section).

Use for subgraph partial output
boolean
default:false

If enabled, streaming responses from this node will be shown in Subgraph nodes that call this graph.
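
A plausible shape for the response cache, assuming it is keyed by a hash of the model, parameters, and messages; the real implementation may differ:

```typescript
import { createHash } from "node:crypto";

// Same model + parameters + messages -> same key -> cached reply reused.
const cache = new Map<string, string>();

function cacheKey(model: string, params: object, messages: object[]): string {
  return createHash("sha256")
    .update(JSON.stringify({ model, params, messages }))
    .digest("hex");
}

// Usage: const key = cacheKey(model, params, messages);
//        if (cache.has(key)) return cache.get(key)!;
```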

Error Handling

  • The block will retry failed attempts up to 3 times with exponential backoff (sketched below)
  • Token limits are automatically enforced based on the selected model
  • Responses other than TRUE/FALSE from the model will trigger an error
  • The block includes built-in timeout handling and request cancellation support
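
The retry and validation rules above can be pictured roughly as follows. `evaluateElement` is a hypothetical stand-in for the underlying API call, and whether invalid responses are themselves retried is an assumption of this sketch:

```typescript
declare function evaluateElement(element: unknown): Promise<string>;

// Up to 3 attempts with exponential backoff; any reply other than
// TRUE/FALSE is treated as an error.
async function evaluateWithRetry(element: unknown, attempts = 3): Promise<boolean> {
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      const reply = (await evaluateElement(element)).trim().toUpperCase();
      if (reply === "TRUE") return true;
      if (reply === "FALSE") return false;
      throw new Error(`Invalid model response: ${reply}`);
    } catch (err) {
      if (attempt === attempts - 1) throw err;
      await new Promise((r) => setTimeout(r, 2 ** attempt * 1000)); // 1s, 2s, ...
    }
  }
  throw new Error("unreachable");
}
```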

Example Usage

  1. Connect an array of items to the Input port
  2. Add a condition prompt like “Keep only items that are environmentally sustainable products”
  3. Optionally add a system prompt for additional context
  4. Configure the model and parameters in the settings
  5. Connect the Output port to use the filtered results (an example using the Indices port follows)
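
The Indices port is handy when a second array runs parallel to the input. The values below are illustrative only:

```typescript
const prices = [4.99, 0.1, 29.99];   // parallel to the input array
const indices = [0, 2];              // from the Indices port
const keptPrices = indices.map((i) => prices[i]); // [4.99, 29.99]
```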

Notes

  • The block automatically handles token counting and cost tracking
  • Responses are strictly evaluated as TRUE/FALSE
  • The block supports parallel processing of array elements (see the sketch after this list)
  • Built-in caching can improve performance for repeated operations
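
Parallel processing can be pictured as dispatching all condition checks at once and awaiting them together. `checkCondition` is a hypothetical helper returning the parsed TRUE/FALSE verdict, not the block's real API:

```typescript
declare function checkCondition(item: unknown): Promise<boolean>;

// All elements are evaluated concurrently and awaited together.
async function filterInParallel(items: unknown[]): Promise<unknown[]> {
  const verdicts = await Promise.all(items.map(checkCondition));
  return items.filter((_, i) => verdicts[i]);
}
```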