Releases: pinecone-io/python-sdk
Release v9.0.0
v9 is a total rewrite of the Pinecone Python SDK. Rewrites are always ambitious undertakings, and we were motivated by three outcomes that had become difficult to achieve incrementally on the v8 codebase:
- A substantially simpler installation — one `pip install` covering all transports, with a much smaller dependency tree.
- Meaningful end-to-end performance improvements — see the Performance section for initial measurements on queries and batch upserts.
- An architecture that supports faster iteration on new product features. v8's request/response layer was generated from the OpenAPI spec, which made it expensive to introduce anything that didn't fit the generated mold. v9's hand-written internals let new product surfaces land directly in the SDK without going through a cumbersome codegen process. The `pc.preview` namespace introduced in this release is a concrete example — it would not have been feasible for us to ship in the v8 client. This benefit is harder to quantify than installation or latency, but it changes the cost of every future feature, which adds up over time.
We made an effort to preserve much of the public surface of the SDK. Most v8 code is expected to continue to run unchanged — but the internals are entirely new. If you are upgrading from v8, start with the [migration guide][migration].
At a glance
- Installation simplifies to `pip install pinecone`. The `[grpc]` and `[asyncio]` extras are no longer needed; both transports ship in the base package.
- Vector operations are significantly faster. We observed in initial end-to-end benchmark testing that sequential batch upserts completed 3.3× faster than the v8 SDK and scaled with concurrency up to ~17× faster at the throughput limit. Large queries deserialize 3.4× faster, and serverless cold start dropped from ~210 ms to ~45 ms.
- The full public surface is type-checked. `mypy --strict` is clean; IDE autocomplete and downstream type-checked codebases see complete annotations.
- Only three runtime dependencies — `httpx`, `msgspec`, `orjson`. v8.1.2 declared 7 in its base install, plus up to 7 more across the `[grpc]` and `[asyncio]` extras (14 in a fully-enabled install). A smaller dependency tree means fewer version conflicts in your environment and a smaller third-party security-advisory surface to track.
- The control plane is resource-oriented — groupings like `pc.indexes`, `pc.collections`, `pc.backups`, `pc.inference`, `pc.assistant`, and `pc.preview` mirror the resources they act on. The flat v8 method names are preserved as aliases.
- Assistant is built in. The `pinecone-plugin-assistant` package and the plugin discovery system are retired; `pc.assistant` is part of the core client.
- `pc.preview` introduces a namespace for public preview features, beginning with full-text search over documents.
- Most v8 code paths continue to work. Where signatures changed, deprecated aliases are in place. The migration guide enumerates the cases that need code changes.
Installation
```
pip install pinecone
```

This is the entire install for sync REST, asyncio REST, and gRPC. The gRPC transport is now a Rust extension built into the wheel, so there is no `grpcio` to install and no version pinning or conflicts to manage with other dependencies of your app.
```python
from pinecone import Pinecone

pc = Pinecone(api_key="...")
index = pc.index("my-index")                  # sync REST
grpc_index = pc.index("my-index", grpc=True)  # gRPC, no grpcio dependency
```

Python 3.10+ is required. See Migrating from v8 for why Python 3.9 was dropped.
Performance
The improvements come from three changes to the internals:
- The OpenAPI-generated request/response layer was removed. Every call in v8 routed through several layers of generated boilerplate; v9's HTTP path is hand-written and direct.
- Serialization moved to `msgspec` and `orjson`. Response objects are typed structs decoded at native speed, not dicts populated by Python-level loops (see the sketch after this list).
- An automated optimization loop swept the hot paths and surfaced a number of incremental improvements that would have been hard to find by inspection without an experimental harness.
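To make the second point concrete, here is a minimal illustration of the struct-decoding pattern with `msgspec`. The struct and its fields are invented for the example — they are not v9's actual response types:

```python
import msgspec

class Match(msgspec.Struct):
    # Hypothetical fields for illustration only, not a real v9 response type.
    id: str
    score: float

# Decoding goes straight from bytes to typed structs in native code.
decoder = msgspec.json.Decoder(list[Match])
matches = decoder.decode(b'[{"id": "vec-1", "score": 0.87}]')
print(matches[0].id, matches[0].score)  # typed attribute access, no Python-level dict loops
```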
The numbers below are from end-to-end benchmarks against a real Pinecone serverless index (1536-dim, AWS us-east-1, p50 latencies). Ratios are v8 / v9. Your mileage will vary with index configuration, region, client hardware, and network—treat these figures as directional, not a guarantee.
| Scenario | v8 | v9 | Ratio |
|---|---|---|---|
| Upsert b=100 (sync REST) | 1.00 s | 306 ms | 3.3× |
| Upsert b=100 (async REST) | 1.01 s | 308 ms | 3.3× |
| Throughput 10k vectors @ concurrency=20 (async) | 64.6 s | 4.45 s | 14.5× |
| Throughput 10k vectors @ concurrency=100 (sync) | 75.1 s | 4.42 s | 17.0× |
| Query top_k=100, +values +metadata | 786 ms | 287 ms | 2.7× |
| Query top_k=1000, +values | 7.01 s | 2.05 s | 3.4× |
| Cold import (`python -c "import pinecone"`) | 196 ms | 17.6 ms | 11.2× |
| Net cold start (import + construct) | ~210 ms | ~45 ms | ~5× |
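For reference, p50 here means the median over repeated calls. A measurement in that spirit can be taken with a small harness like this — a sketch, not our exact benchmark code:

```python
import statistics
import time

def p50_ms(fn, runs: int = 50) -> float:
    """Median wall-clock latency of fn() over a number of runs, in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

# e.g. p50_ms(lambda: index.query(vector=q, top_k=100, include_values=True))
```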
The largest practical change is in bulk upsert under concurrency. v8's REST path saturates client CPU on serialization and stops scaling past a single connection — adding workers does not increase throughput. v9 sustains roughly 16× the v8 sync throughput at concurrency=20 and lands within 2× of gRPC. If you adopted gRPC primarily for upsert throughput on REST, the REST path in v9 may be fast enough to remove that complexity from your stack.
gRPC per-call latency is roughly at parity with v8's Python grpcio channel, which is expected — the wire format dominates. The Rust channel scales further under high concurrency without GIL contention.
You can read more about this in the Batching Large Upserts guide.
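As a rough illustration of the concurrency pattern behind the upsert numbers above, here is a minimal sketch using the sync REST client with a thread pool. The chunking helper and vector data are illustrative, and it assumes `index.upsert` can safely be called concurrently from worker threads:

```python
from concurrent.futures import ThreadPoolExecutor

from pinecone import Pinecone

def chunks(seq, batch_size=100):
    """Yield fixed-size batches from a list of vectors."""
    for i in range(0, len(seq), batch_size):
        yield seq[i:i + batch_size]

pc = Pinecone(api_key="...")
index = pc.index("my-index")  # sync REST handle, as shown above

vectors = [(f"vec-{i}", [0.1] * 1536) for i in range(10_000)]  # placeholder data

# Fan batches out across 20 workers, mirroring the concurrency=20 scenario.
with ThreadPoolExecutor(max_workers=20) as pool:
    futures = [pool.submit(index.upsert, vectors=batch) for batch in chunks(vectors)]
    for f in futures:
        f.result()  # surface any errors
```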
Type safety
mypy --strict runs clean across the codebase. The public surface — every class, method, parameter, and return value — is fully annotated. A handful of Any types remain at the JSON boundary, where they are the appropriate choice; the rest of the surface gives complete IDE autocomplete and works cleanly in downstream type-checked codebases.
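As a small downstream illustration, here is a hedged sketch of a fully annotated helper that `mypy --strict` could check end to end. It assumes the v9 query response keeps the v8-style `matches` list with `id` attributes:

```python
from pinecone import Pinecone

def best_match_id(pc: Pinecone, index_name: str, vector: list[float]) -> str | None:
    """Return the id of the nearest neighbor, or None if there are no matches."""
    index = pc.index(index_name)  # sync REST handle, as shown above
    response = index.query(vector=vector, top_k=1)
    # Attribute names (.matches, .id) are assumed to match the v8 response shape.
    return response.matches[0].id if response.matches else None
```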
Fewer dependencies
v9 ships with three runtime dependencies: httpx[http2], msgspec, and orjson.
For comparison, v8.1.2 declared:
- 7 dependencies in its base install
- 5 more under the `[grpc]` extra
- 2 more under the `[asyncio]` extra
— a maximum of 14 declared runtime dependencies for a fully-enabled install. v9 includes both gRPC and async support in the base package and declares only three. The practical benefits:
- Fewer version conflicts. Each dependency is a constraint your environment has to satisfy. With three constraints instead of fourteen, the SDK is much less likely to pin you out of an upgrade in another part of your stack.
- Smaller security-advisory surface. Every transitive dependency is a CVE channel you have to monitor. Fewer direct deps means fewer transitive deps and a smaller monitoring surface.
- Simpler dependency tree to reason about. When something goes wrong at install time, there's less to read.
Resource-oriented client organization
v9’s control plane is resource-oriented: methods are grouped under the resource they target—pc.indexes, pc.collections, pc.backups, pc.inference, and so on—instead of a long flat list on the root client. That lines up with the API’s resource model and will make it easier to maintain and navigate as we continue adding new capabilities.
```python
pc.indexes.create(name="my-index", dimension=1536, spec=...)
pc.indexes.list()
pc.collections.create(...)
pc.backups.list()
pc.inference.embed(...)
```

This is mostly an organizational change, but it matters as the surface grows. Piling every method onto the root client scales poorly; giving each resource its own subtree keeps discovery sane and gives new areas — `pc.assistant`, `pc.preview`, and whatever comes next — a stable place to land without fighting for method names on `pc`.
Existing v8 code does not need to change. The flat v8 method names are preserved as aliases on the client:
```python
# Both forms work in v9
pc.create_index(name="my-index", dimension=1536, spec=...)
pc.indexes.create(name="my-index", dimension=1536, spec=...)

pc.list_indexes()
pc.indexes.list()
```

The resource-oriented (namespaced) form is recommended for new code.
Assistant is part of the core client
The Pinecone Assistant API previously shipped as a separate plugin (`pinecone-plugin-assistant`) installed alongside `pinecone`. In v9, it is part of the main package. Code that imported from `pinecone_plugins.assistant.*` should switch to `pinecone.models.assistant`:
```python
from pinecone import Pinecone
from pinecone.models.assistant import Message

pc = Pinecone(api_key="...")
assistant = pc.assistant.create_assistant("my-assistant")
assistant.upload_file(file_path="report.pdf")
response = assistant.chat(messages=[Message(role="user", content="...")])
```

The runtime methods on `pc.assistant` (`create_assistant`, `list_assistants`, `describe_assistant`, `chat`, `upload_file`, …) are unchanged. Only the import paths moved.
The plugin discovery system itself is retired. Going forward, Assistant improvements ship in the main SDK release stream rather than as separately-versioned packages, which removes a coordination step and shortens the path from feature work to general availability.
The full v8-to-v9 import-path mapping is in [§8 of the migration guide][migration-s8].
pc.preview: a namespace for Early Access and Public Preview features
v9 introduces `pc.preview`, a dedicated namespace for Early Access and Public Preview features, beginning with full-text search over documents.
Release v8.1.2
Release v8.1.1
Bug fixes
- Fix crash when `delete()` receives an empty response body — The asyncio `delete()` and `delete_namespace()` methods could crash with an `AttributeError` when the server returned an empty response body. These methods now return `None` gracefully instead of crashing (see the sketch below). (#623, fixes #564)
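A hedged sketch of the fixed behavior, assuming the v8 asyncio index handle is obtained via `pc.IndexAsyncio` as in earlier releases:

```python
import asyncio

from pinecone import Pinecone

async def main() -> None:
    pc = Pinecone(api_key="YOUR_API_KEY")
    async with pc.IndexAsyncio(host="your-index-host") as index:
        # Previously this could raise AttributeError when the server sent an
        # empty response body; it now returns None.
        result = await index.delete(ids=["vec-1"], namespace="my-namespace")
        assert result is None

asyncio.run(main())
```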
Security & dependency updates
- Bump orjson minimum to 3.11.6 (CVE-2025-67221) (#625)
- Bump aiohttp to 3.13.5 in lockfile (CVE-2026-22815) (#630)
- Bump pygments to 2.20.0 in lockfile (ReDoS fix) (#628)
- Add explicit GITHUB_TOKEN permissions to workflow files (#629)
- Bump minimatch to 3.1.5 in bump-version action (#618)
- Bump picomatch to 2.3.2 in bump-version action (#627)
Full Changelog: v8.1.0...v8.1.1
v8.1.0
This release adds support for creating and configuring index read_capacity for BYOC indexes:
```python
import pinecone
from pinecone import ByocSpec
pc = pinecone.Pinecone(api_key="YOUR_API_KEY")
# Create a BYOC index with OnDemand read capacity
pc.create_index(
name="my-byoc-index",
dimension=1536,
spec=ByocSpec(
environment="my-byoc-env",
read_capacity={"mode": "OnDemand"},
)
)
# Create a BYOC index with Dedicated read capacity
pc.create_index(
name="my-byoc-index",
dimension=1536,
spec=ByocSpec(
environment="my-byoc-env",
read_capacity={
"mode": "Dedicated",
"dedicated": {
"node_type": "b1",
"scaling": "Manual",
"manual": {"replicas": 2},
},
},
)
)
```

The following user-facing types have been added or updated to support this:
- `ByocSpec` — now accepts optional `read_capacity` and `schema` fields
- `ReadCapacityDict` — union alias for the two read capacity modes below
- `ReadCapacityOnDemandDict` — `{"mode": "OnDemand"}`
- `ReadCapacityDedicatedDict` — `{"mode": "Dedicated", "dedicated": ReadCapacityDedicatedConfigDict}`
- `ReadCapacityDedicatedConfigDict` — `{"node_type": str, "scaling": str, "manual": ScalingConfigManualDict}`
- `ScalingConfigManualDict` — `{"shards": int, "replicas": int}`
- `MetadataSchemaFieldConfig` — `{"filterable": bool}`, used with the `schema` field on `ByocSpec`
All of the above are exported from the top-level pinecone module.
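Because these are exported at the top level, a type checker can validate a read-capacity payload before it is sent. A small sketch using the shapes listed above:

```python
from pinecone import ReadCapacityDict

# Annotating the payload lets a type checker flag misspelled keys or modes.
read_capacity: ReadCapacityDict = {
    "mode": "Dedicated",
    "dedicated": {
        "node_type": "b1",
        "scaling": "Manual",
        "manual": {"shards": 1, "replicas": 2},
    },
}
```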
Support for scan_factor and max_candidates has been added to Index.query() and Index.query_namespaces():
```python
# scan_factor widens the IVF scan to trade latency for higher recall
# max_candidates controls how many candidates are reranked with exact distances
results = index.query(
vector=[...],
top_k=10,
scan_factor=2.0,
max_candidates=500,
)
```

Both parameters are optional and only take effect on dedicated read node (DRN) dense indexes. `scan_factor` adjusts how much of the IVF index is scanned when gathering vector candidates, and `max_candidates` caps the number of candidates that are reranked with exact distances to improve recall.
What's Changed
- Regenerate code from `2025-10`, implement `schema`/`read_capacity` in `BYOCSpec` by @austin-denoble in #614
- Implement `scan_factor` and `max_candidates` for `query` by @austin-denoble in #617
Full Changelog: v8.0.1...v8.1.0
v8.0.1
Security
🔒 Fixed Protobuf Denial-of-Service Vulnerability (CVE-2025-4565)
Updated protobuf dependency to address a denial-of-service vulnerability when parsing deeply nested recursive structures in a Pure-Python backend.
Affected users: Only users of the `[grpc]` extra (`pip install pinecone[grpc]`) and the `PineconeGRPC` client are affected by this change. Users of the default REST client (`Pinecone`) are not affected.
Changes:
- Upgraded `protobuf` from `5.x` to `6.33.0+`
- Upgraded `googleapis-common-protos` from `1.66.0` to `1.72.0+` for compatibility
- Regenerated gRPC code with protobuf v33.0
Impact:
- Breaking Change: Minimum protobuf version is now `6.33.0` (was `5.29.5`)
- Users with pinned protobuf versions `<6.33.0` will need to upgrade (see the command sketch after this list)
- No API or functionality changes for end users
- All existing code continues to work with the new protobuf version
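If you pin protobuf in your own requirements, something along these lines should bring both packages in line — a sketch; adapt it to your dependency manager:

```
pip install --upgrade "pinecone[grpc]" "protobuf>=6.33.0"
```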
Release v8.0.0
Upgrading from 7.x to 8.x
The v8 release of the Pinecone Python SDK has been published as pinecone to PyPI.
With a few exceptions noted below, nearly all changes are additive and non-breaking. The major version bump primarily reflects the step up to API version 2025-10 and the addition of a new dependency on orjson for fast JSON parsing.
Breaking Changes
`namespace` parameter in gRPC methods. When `namespace=None`, the parameter is omitted from requests, allowing the API to handle namespace defaults appropriately. This change affects `upsert_from_dataframe` methods in gRPC clients. The API is moving toward `"__default__"` as the default namespace value, and this change ensures the SDK doesn't override API defaults.
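A short sketch of what this means in practice (`batch` is a placeholder for your vectors):

```python
from pinecone.grpc import PineconeGRPC

pc = PineconeGRPC(api_key="YOUR_API_KEY")
index = pc.Index(host="your-index-host")

batch = [("vec-1", [0.1] * 1536)]  # placeholder data

# With no namespace argument, v8 omits the parameter entirely, so the API's
# own default ("__default__") applies rather than an SDK-supplied value.
index.upsert(vectors=batch)

# Pass a namespace explicitly to keep targeting a specific one.
index.upsert(vectors=batch, namespace="my-namespace")
```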
Note: The official SDK package was renamed last year from pinecone-client to pinecone beginning in version 5.1.0. Please remove pinecone-client from your project dependencies and add pinecone instead to get the latest updates if upgrading from earlier versions.
What's new in 8.x
Dedicated Read Capacity for Serverless Indexes
You can now configure dedicated read nodes for your serverless indexes, giving you more control over query performance and capacity planning. By default, serverless indexes use OnDemand read capacity, which automatically scales based on demand. With dedicated read capacity, you can allocate specific read nodes with manual scaling control.
Create an index with dedicated read capacity:
```python
from pinecone import (
Pinecone,
ServerlessSpec,
CloudProvider,
AwsRegion,
Metric
)
pc = Pinecone()
pc.create_index(
name='my-index',
dimension=1536,
metric=Metric.COSINE,
spec=ServerlessSpec(
cloud=CloudProvider.AWS,
region=AwsRegion.US_EAST_1,
read_capacity={
"mode": "Dedicated",
"dedicated": {
"node_type": "t1",
"scaling": "Manual",
"manual": {
"shards": 2,
"replicas": 2
}
}
}
)
)
```

Configure read capacity on an existing index:
You can switch between OnDemand and Dedicated modes, or adjust the number of shards and replicas for dedicated read capacity:
```python
from pinecone import Pinecone
pc = Pinecone()
# Switch to OnDemand read capacity
pc.configure_index(
name='my-index',
read_capacity={"mode": "OnDemand"}
)
# Switch to Dedicated read capacity with manual scaling
pc.configure_index(
name='my-index',
read_capacity={
"mode": "Dedicated",
"dedicated": {
"node_type": "t1",
"scaling": "Manual",
"manual": {
"shards": 3,
"replicas": 2
}
}
}
)
# Scale up by increasing shards and replicas
pc.configure_index(
name='my-index',
read_capacity={
"mode": "Dedicated",
"dedicated": {
"node_type": "t1",
"scaling": "Manual",
"manual": {
"shards": 4,
"replicas": 3
}
}
}
)
```

When you change the read capacity configuration, the index will transition to the new configuration. You can use `describe_index` to check the status of the transition.
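One way to watch the transition is to poll `describe_index`. The attribute path on the status object is an assumption here, so check the model returned by your version:

```python
import time

# Poll until the index reports ready after a read-capacity change.
desc = pc.describe_index(name='my-index')
while not desc.status.ready:  # attribute path assumed for illustration
    time.sleep(5)
    desc = pc.describe_index(name='my-index')
print("Transition complete")
```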
See PR #528 for details.
Fetch and Update Vectors by Metadata
Fetch vectors by metadata filter
You can now fetch vectors using metadata filters instead of vector IDs. This is especially useful when you need to retrieve vectors based on their metadata properties.
```python
from pinecone import Pinecone
pc = Pinecone()
index = pc.Index(host="your-index-host")
# Fetch vectors matching a complex filter
response = index.fetch_by_metadata(
filter={'genre': {'$in': ['comedy', 'drama']}, 'year': {'$eq': 2019}},
namespace='my_namespace',
limit=50
)
print(f"Found {len(response.vectors)} vectors")
# Iterate through fetched vectors
for vec_id, vector in response.vectors.items():
print(f"ID: {vec_id}, Metadata: {vector.metadata}")Pagination support:
When fetching large numbers of vectors, you can use pagination tokens to retrieve results in batches:
```python
# First page
response = index.fetch_by_metadata(
filter={'status': 'active'},
limit=100
)
# Continue with next page if available
if response.pagination and response.pagination.next:
next_response = index.fetch_by_metadata(
filter={'status': 'active'},
pagination_token=response.pagination.next,
limit=100
)
```

Update vectors by metadata filter
The update method used to require a vector id to be passed, but now you have the option to pass a metadata filter instead. This is useful for bulk metadata updates across many vectors.
There is also a dry_run option that allows you to preview the number of vectors that would be changed by the update before performing the operation.
```python
from pinecone import Pinecone
pc = Pinecone()
index = pc.Index(host="your-index-host")
# Preview how many vectors would be updated (dry run)
response = index.update(
set_metadata={'status': 'active'},
filter={'genre': {'$eq': 'drama'}},
dry_run=True
)
print(f"Would update {response.matched_records} vectors")
# Apply the update by repeating the command without dry_run
response = index.update(
set_metadata={'status': 'active'},
filter={'genre': {'$eq': 'drama'}}
)
```

FilterBuilder for fluent filter construction
A new FilterBuilder utility class provides a type-safe, fluent interface for constructing metadata filters. While perhaps a bit verbose, it can help prevent common errors like misspelled operator names and provides better IDE support.
When you chain `.build()` onto the `FilterBuilder`, it emits a Python dictionary representing the filter. Methods that take metadata filters as arguments continue to accept plain dictionaries as before.
```python
from pinecone import Pinecone, FilterBuilder
pc = Pinecone()
index = pc.Index(host="your-index-host")
# Simple equality filter
filter1 = FilterBuilder().eq("genre", "drama").build()
# Returns: {"genre": "drama"}
# Multiple conditions with AND using & operator
filter2 = (FilterBuilder().eq("genre", "drama") &
FilterBuilder().gt("year", 2020)).build()
# Returns: {"$and": [{"genre": "drama"}, {"year": {"$gt": 2020}}]}
# Multiple conditions with OR using | operator
filter3 = (FilterBuilder().eq("genre", "comedy") |
FilterBuilder().eq("genre", "drama")).build()
# Returns: {"$or": [{"genre": "comedy"}, {"genre": "drama"}]}
# Complex nested conditions
filter4 = ((FilterBuilder().eq("genre", "drama") &
FilterBuilder().gte("year", 2020)) |
(FilterBuilder().eq("genre", "comedy") &
FilterBuilder().lt("year", 2000))).build()
# Use with fetch_by_metadata
response = index.fetch_by_metadata(filter=filter2, limit=50)
# Use with update
index.update(
set_metadata={'status': 'archived'},
filter=filter3
)
```

The `FilterBuilder` supports all Pinecone filter operators: `eq`, `ne`, `gt`, `gte`, `lt`, `lte`, `in_`, `nin`, and `exists`. Compound expressions are built with `&` for AND and `|` for OR.
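For example, a membership filter with `in_` (named with a trailing underscore to avoid Python's `in` keyword) would presumably look like this; the emitted dictionary is an assumption extrapolated from the patterns above:

```python
# Membership filter; expected output extrapolated from the examples above.
filter5 = FilterBuilder().in_("genre", ["comedy", "drama"]).build()
# Expected: {"genre": {"$in": ["comedy", "drama"]}}
```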
See PR #529 for fetch_by_metadata, PR #544 for update() with filter, and PR #531 for FilterBuilder.
Other New Features
Create namespaces programmatically
You can now create namespaces in serverless indexes directly from the SDK:
```python
from pinecone import Pinecone
pc = Pinecone()
index = pc.Index(host="your-index-host")
# Create a namespace with just a name
namespace = index.create_namespace(name="my-namespace")
print(f"Created namespace: {namespace.name}, Vector count: {namespace.vector_count}")
# Create a namespace with schema configuration
namespace = index.create_namespace(
name="my-namespace",
schema={
"fields": {
"genre": {"filterable": True},
"year": {"filterable": True}
}
}
)
```

Note: This operation is not supported for pod-based indexes.
See PR #532 for details.
Match terms in search operations
For sparse indexes with integrated embedding configured to use the pinecone-sparse-english-v0 model, you can now specify which terms must be present in search results:
```python
from pinecone import Pinecone, SearchQuery
pc = Pinecone()
index = pc.Index(host="your-index-host")
response = index.search(
namespace="my-namespace",
query=SearchQuery(
inputs={"text": "Apple corporation"},
top_k=10,
match_terms={
"strategy": "all",
"terms": ["apple", "corporation"]
}
)
)
```

The `match_terms` parameter ensures that all specified terms are present in the text of each search hit. Terms are normalized and tokenized before matching, and order does not matter.
See PR #530 for details.
Admin API enhancements
**Update API keys, proj...
v7.3.0
This minor release includes the ability to interact with the Admin API and adds support for working with index namespaces via gRPC. Previously, namespace support was available only through REST.
Admin API
This release introduces an Admin class that provides support for performing CRUD operations on projects and API keys using REST.
Projects
```python
from pinecone import Admin
# Use service account credentials
admin = Admin(client_id='foo', client_secret='bar')
# Example: Create a project
project = admin.project.create(
name="example-project",
max_pods=5
)
print(f"Project {project.id} was created")
# Example: Rename a project
project = admin.project.get(name='example-project')
admin.project.update(
project_id=project.id,
name='my-awesome-project'
)
# Example: Enable CMEK on all projects
project_list = admin.projects.list()
for proj in project_list.data:
admin.projects.update(
project_id=proj.id,
force_encryption_with_cmek=True
)
# Example: Set pod quota to 0 for all projects
project_list = admin.projects.list()
for proj in project_list.data:
admin.projects.update(project_id=proj.id, max_pods=0)
# Delete the project
admin.project.delete(project_id=project.id)
```

API Keys
```python
from pinecone import Admin
# Use service account credentials
admin = Admin(client_id='foo', client_secret='bar')
project = admin.project.get(name='my-project')
# Create an API key
api_key_response = admin.api_keys.create(
project_id=project.id,
name="ci-key",
roles=["ProjectEditor"]
)
key = api_key_response.value # 'pcsk_....'
# Look up info on a key by id
key_info = admin.api_keys.get(
api_key_id=api_key_response.key.id
)
# Delete a key
admin.api_keys.delete(
api_key_id=api_key_response.key.id
)
```

Working with namespaces with gRPC
The gRPC Index class now exposes methods for calling describe_namespace, delete_namespace, list_namespaces, and list_namespaces_paginated.
```python
from pinecone.grpc import PineconeGRPC as Pinecone
pc = Pinecone(api_key='YOUR_API_KEY')
index = pc.Index(host='your-index-host')
# list namespaces
results = index.list_namespaces_paginated(limit=10)
next_results = index.list_namespaces_paginated(limit=10, pagination_token=results.pagination.next)
# describe namespace (the paginated response exposes its items on .namespaces)
namespace = index.describe_namespace(results.namespaces[0].name)

# delete a namespace (NOTE: this deletes all data within the namespace)
index.delete_namespace(results.namespaces[0].name)
```

What's Changed
- Implement Admin API by @jhamon in #512
- Add support for list, describe, and delete namespaces in grpc by @rohanshah18 in #517
Full Changelog: v7.2.0...v7.3.0
Release v7.2.0
This minor release includes new methods for working with index namespaces via REST, and the ability to configure an index with the embed configuration, which was not previously exposed.
Working with namespaces
The Index and IndexAsyncio classes now expose methods for calling describe_namespace, delete_namespace, list_namespaces, and list_namespaces_paginated. There is also a NamespaceResource which can be used to perform these operations. Namespaces themselves are still created implicitly when upserting data to a specific namespace.
```python
from pinecone import Pinecone
pc = Pinecone(api_key='YOUR_API_KEY')
index = pc.Index(host='your-index-host')
# list namespaces
results = index.list_namespaces_paginated(limit=10)
next_results = index.list_namespaces_paginated(limit=10, pagination_token=results.pagination.next)
# describe namespace (the paginated response exposes its items on .namespaces)
namespace = index.describe_namespace(results.namespaces[0].name)

# delete a namespace (NOTE: this deletes all data within the namespace)
index.delete_namespace(results.namespaces[0].name)
```

Configuring integrated embedding for an index
Previously, the configure_index methods did not support providing an embed argument when configuring an existing index. These methods now support embed in the shape of ConfigureIndexEmbed. You can convert an existing index to an integrated index by specifying the embedding model and field_map. The index vector type and dimension must match the model vector type and dimension, and the index similarity metric must be supported by the model. You can use list_models and get_model on the Inference class to get specific details about models.
You can later change the embedding configuration to update the field map, read parameters, or write parameters. Once set, the model cannot be changed.
```python
from pinecone import Pinecone
pc = Pinecone(api_key='YOUR_API_KEY')
# convert an existing index to use the integrated embedding model multilingual-e5-large
pc.configure_index(
name="my-existing-index",
embed={"model": "multilingual-e5-large", "field_map": {"text": "chunk_text"}},
)
```

What's Changed
- Add describe, delete, and list namespaces (REST) by @rohanshah18 in #507
- Fix release workflow by @rohanshah18 in #516
- Add `embed` to `Index` `configure` calls by @austin-denoble in #515
Full Changelog: v7.1.0...v7.2.0
Release v7.1.0
This release fixes an issue where gRPC methods using `async_req=True` ignored user-provided timeout values, defaulting instead to a hardcoded 5-second timeout imposed by `PineconeGrpcFuture`. To verify this fix, we added a new test file, `test_timeouts.py`, which uses a mock gRPC server to simulate client timeout behavior under delayed response conditions.
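A hedged sketch of the corrected behavior (the call shape follows the usual gRPC `async_req` pattern; adapt to your code):

```python
from pinecone.grpc import PineconeGRPC

pc = PineconeGRPC(api_key="YOUR_API_KEY")
index = pc.Index(host="your-index-host")

# The timeout (in seconds) is now honored instead of the hardcoded 5s default.
future = index.upsert(
    vectors=[("vec-1", [0.1] * 1536)],  # placeholder data
    async_req=True,
    timeout=30,
)
result = future.result()
```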
Release v7.0.2
This small bugfix release includes the following fixes:
- Windows users should now be able to install without seeing the `readline` error reported in #502. See #503 for details on the root cause and fix.
- We have added a new multi-platform installation testing workflow to catch future issues like the above Windows problem.
- While initially running these new tests, we discovered a dependency was not being included correctly for the Assistant functionality: `pinecone-plugin-assistant`. The assistant plugin had been inadvertently added as a dev dependency rather than a runtime dependency, which meant our integration tests for that functionality could pass while the published artifact did not include it. We have corrected this problem, so assistant functions now work without installing anything additional.