Harness and provider profiles are public beta APIs and may be updated in future releases.
Harness profiles are applied at the `create_deep_agent` call site. Use `HarnessProfile` when building profiles in Python; use `HarnessProfileConfig` when loading or saving YAML/JSON files. Deep Agents ships built-in harness profiles for OpenAI and Anthropic (Claude) models.
Provider profiles are a narrower companion API for model-construction kwargs, which don’t affect the harness. Most callers don’t need them; reach for one when you want `init_chat_model` defaults, credential checks, or runtime-derived kwargs applied as defaults for your provider choice (for example, when packaging a provider integration).
Harness profiles
A `HarnessProfile` describes prompt-assembly, tool-visibility, middleware, and default-subagent adjustments that `create_deep_agent` applies after the chat model has been constructed:
- `base_system_prompt` — replace the base Deep Agents system prompt
- `system_prompt_suffix` — append text to the base system prompt
- `tool_description_overrides` — override individual tool descriptions
- `excluded_tools` — remove specific harness-level tools from the tool set
- `excluded_middleware` — strip specific middleware classes from the stack
- `extra_middleware` — append middleware to every stack this profile applies to
- `general_purpose_subagent` — disable, rename, or re-prompt the general-purpose subagent
`excluded_middleware` accepts two forms:
- A middleware class (matched by exact type), or a plain string that matches `AgentMiddleware.name`. Use plain strings for built-ins and public aliases such as `"SummarizationMiddleware"`.
- A `module:Class` import ref (for example, `"my_pkg.middleware:TelemetryMiddleware"`) to target an exact middleware class from a config file. Import refs resolve lazily, so use them only for trusted local configuration — loading one imports Python code.
Lookup order for preconfigured model instances
When you pass a preconfigured chat model instance instead of a `provider:model` string, the harness synthesizes the canonical `provider:identifier` key from the instance and looks it up in this order:
- Exact `provider:identifier` match
- Identifier-only (only when the identifier already contains `:`)
- Provider-only fallback
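A minimal sketch of that precedence, using a plain dict as the registry (illustrative only; the real lookup lives inside the harness):

```python
def lookup(registry: dict, provider: str, identifier: str):
    """Resolve a profile for a preconfigured model instance, following the
    documented precedence: exact key, identifier-only, provider fallback."""
    key = f"{provider}:{identifier}"
    if key in registry:                               # 1. exact provider:identifier
        return registry[key]
    if ":" in identifier and identifier in registry:  # 2. identifier-only
        return registry[identifier]
    return registry.get(provider)                     # 3. provider-only fallback

registry = {"openai": "provider-wide", "openai:gpt-5.4": "model-specific"}
print(lookup(registry, "openai", "gpt-5.4"))    # → model-specific
print(lookup(registry, "openai", "gpt-other"))  # → provider-wide
```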
Registration keys
Both profile types use the same key format:
- Provider-level — a bare provider name like `"openai"` applies to every model from that provider.
- Model-level — a fully qualified `provider:model` key like `"openai:gpt-5.4"` applies only to that specific model.
Merge semantics
| Field | Merge behavior |
|---|---|
| `base_system_prompt`, `system_prompt_suffix` | New value wins when set; otherwise inherits |
| `tool_description_overrides` | Mappings merge per key; new value wins on a shared key |
| `excluded_tools`, `excluded_middleware` | Sets union |
| `extra_middleware` | Merged by concrete class: a new instance replaces the existing one at its position; novel classes append |
| `general_purpose_subagent` | Merged field-wise (unset fields inherit) |
| `init_kwargs` (provider) | Dicts merge key-wise; new value wins on a shared key |
| `pre_init` (provider) | Callables chain: existing runs first, then the new one |
| `init_kwargs_factory` (provider) | Factories chain, with their outputs merged on every `resolve_model` call |
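The dict and set rules in the table can be illustrated with a small self-contained sketch (not the library's implementation, just the stated semantics for the scalar, mapping, and set fields):

```python
def merge_profiles(base: dict, new: dict) -> dict:
    """Sketch of the merge rules above for prompt, mapping, and set fields."""
    merged = dict(base)
    # Scalar prompt fields: new value wins when set; otherwise inherit.
    for f in ("base_system_prompt", "system_prompt_suffix"):
        if new.get(f) is not None:
            merged[f] = new[f]
    # Mappings merge per key; new value wins on shared keys.
    merged["tool_description_overrides"] = {
        **base.get("tool_description_overrides", {}),
        **new.get("tool_description_overrides", {}),
    }
    # Exclusion sets union.
    for f in ("excluded_tools", "excluded_middleware"):
        merged[f] = base.get(f, set()) | new.get(f, set())
    return merged

base = {"system_prompt_suffix": "Be terse.", "excluded_tools": {"a"}}
new = {"excluded_tools": {"b"}, "tool_description_overrides": {"ls": "List files"}}
out = merge_profiles(base, new)
print(out["system_prompt_suffix"], sorted(out["excluded_tools"]))  # → Be terse. ['a', 'b']
```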
Provider profiles
A `ProviderProfile` declares how Deep Agents should construct a chat model (via `init_chat_model`) for a given provider or specific model spec. It applies only when you pass a `provider:model` string when creating the deep agent, not when you pass a preconfigured model instance:
- `init_kwargs` — static initialization arguments forwarded to `init_chat_model`
- `pre_init` — side effects to run before construction (for example, credential validation)
- `init_kwargs_factory` — kwargs derived from runtime state (for example, headers pulled from environment variables)
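A rough sketch of how those three hooks could combine at construction time (a stand-in only; the real resolution lives inside the library, and the assumption here that factory output wins over static kwargs is ours, not documented):

```python
import os

def build_init_kwargs(profile: dict) -> dict:
    """Sketch: run pre_init side effects, then combine static init_kwargs
    with kwargs produced at resolve time by init_kwargs_factory."""
    for hook in profile.get("pre_init", []):   # e.g. credential validation
        hook()
    kwargs = dict(profile.get("init_kwargs", {}))
    factory = profile.get("init_kwargs_factory")
    if factory is not None:
        kwargs.update(factory())               # assumed: runtime-derived values win
    return kwargs

profile = {
    "init_kwargs": {"temperature": 0.2},
    "pre_init": [lambda: None],  # stand-in for an API-key check
    "init_kwargs_factory": lambda: {
        "default_headers": {"x-env": os.environ.get("ENV", "dev")},
    },
}
print(build_init_kwargs(profile))
```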
Load profiles from config files
For YAML/JSON-backed workflows, use `HarnessProfileConfig`. It mirrors the declarative subset of `HarnessProfile` (prompt text, tool-description overrides, excluded tools and middleware, general-purpose subagent edits) and owns `to_dict` / `from_dict`. Runtime-only state — middleware instances, factories, and class-form `excluded_middleware` entries — stays on `HarnessProfile`.
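A config file for this declarative subset might look like the following (the key names are assumed to mirror the field names above; check the shipped schema before relying on them):

```yaml
# Hypothetical harness-profile config; key names assumed from the field list.
system_prompt_suffix: "Prefer short, sourced answers."
tool_description_overrides:
  web_search: "Search the public web. Cite every result you use."
excluded_tools:
  - shell
excluded_middleware:
  - SummarizationMiddleware                # public alias string
  - my_pkg.middleware:TelemetryMiddleware  # module:Class import ref
```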
`register_harness_profile` accepts either type, so config-backed callers don’t need a manual conversion step.
`HarnessProfileConfig.from_harness_profile(...)` exports a runtime profile back to the declarative shape when it only uses serializable features:
- Class-form `excluded_middleware` entries serialize as a public alias (when the class exposes one via `serialized_name: ClassVar[str]`) or as a `module:Class` import ref.
- Non-empty `extra_middleware`, and middleware classes declared in `__main__` or inside a function scope, cannot be serialized — export raises `ValueError`.
Ship a profile as a plugin
Distributable profiles can register themselves via `importlib.metadata` entry points instead of requiring callers to run `register_*_profile` by hand. Load order is built-ins first, then entry-point plugins, then any direct `register_*_profile` calls in user code; all three paths funnel through the same additive registration, so later registrations layer on top of earlier ones under the same key.
Declare an entry point in the distribution’s own `pyproject.toml` under the appropriate group.
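For example, assuming the entry-point group is `deepagents.profiles` (the group name is inferred from these docs; the target module and function here are hypothetical):

```toml
# pyproject.toml of the plugin distribution (group name assumed;
# the module:function target is hypothetical).
[project.entry-points."deepagents.profiles"]
my_provider = "my_pkg.profiles:register"
```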
Entry points load when `deepagents.profiles` is imported.
Related
- Harness — overview of harness capabilities
- Models — configure model providers and parameters
- Customization — full `create_deep_agent` configuration surface

