The following information applies only to the Unstructured Ingest CLI and the Unstructured Ingest Python library.
The Unstructured SDKs for Python and JavaScript/TypeScript, and the Unstructured open-source library, do not support this functionality.
You can use the Unstructured Ingest CLI or the Unstructured Ingest Python library to generate embeddings after the partitioning and chunking steps in an ingest pipeline. The chunking step is particularly important, as it ensures that the text pieces (also known as documents or elements) fit within the input limits of an embedding model.
You generate embeddings by specifying an embedding model that is provided or used by an embedding provider. An embedding model creates arrays of numbers, known as vectors, that represent the text extracted by Unstructured. These vectors are stored, or embedded, alongside the data itself.

These vector embeddings allow vector databases to more quickly and efficiently analyze and process the inherent properties of, and relationships between, pieces of data. For example, you can save the extracted text along with its embeddings in a vector store. When a user queries a retrieval-augmented generation (RAG) application, the application can use a vector database to perform a similarity search in that vector store and then return the documents whose embeddings are the closest to that user’s query.
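As a toy illustration of that similarity search (the chunks, vectors, and query below are made up for illustration; real embedding vectors have hundreds or thousands of dimensions), a vector database essentially finds the stored vectors closest to the query’s vector:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: closer to 1.0 means more similar in direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical store of text chunks paired with their embedding vectors.
store = [
    ("Chunk about invoices", np.array([0.9, 0.1, 0.0])),
    ("Chunk about shipping", np.array([0.1, 0.8, 0.2])),
]

# Hypothetical embedding of the user's query.
query_vector = np.array([0.85, 0.15, 0.05])

# Return the stored chunk whose embedding is closest to the query's.
best_text, _ = max(store, key=lambda pair: cosine_similarity(pair[1], query_vector))
print(best_text)  # -> "Chunk about invoices"
```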
Learn more about chunking and embedding.
To use the Ingest CLI or Ingest Python library to generate embeddings, do the following:
1. Choose an embedding provider that you want to use from among the following allowed providers, and note the provider’s ID:

The following list assumes that you are calling the embedding provider directly. If you are calling Unstructured’s software-as-a-service (SaaS) for processing instead (for example, by specifying an Unstructured API key and an Unstructured SaaS URL), you are limited to the provider and model names that are supported by the Unstructured API. See the list of supported provider names.

- `bedrock` for Amazon Bedrock. Learn more.
- `huggingface` for Hugging Face. Learn more.
- `mixedbread-ai` for Mixedbread. Learn more.
- `octoai` for Octo AI. Learn more.
- `openai` for OpenAI. Learn more.
- `togetherai` for Together.ai. Learn more.
- `vertexai` for Google Vertex AI PaLM. Learn more.
- `voyageai` for Voyage AI. Learn more.

2. Run the following command to install the required Python package for the embedding provider:
- For `bedrock`, run `pip install "unstructured-ingest[bedrock]"`.
- For `huggingface`, run `pip install "unstructured-ingest[embed-huggingface]"`.
- For `mixedbread-ai`, run `pip install "unstructured-ingest[embed-mixedbreadai]"`.
- For `octoai`, run `pip install "unstructured-ingest[embed-octoai]"`.
- For `openai`, run `pip install "unstructured-ingest[openai]"`.
- For `togetherai`, run `pip install "unstructured-ingest[togetherai]"`.
- For `vertexai`, run `pip install "unstructured-ingest[embed-vertexai]"`.
- For `voyageai`, run `pip install "unstructured-ingest[embed-voyageai]"`.

3. For the following embedding providers, you can choose the model that you want to use. If you do choose a model, note the model’s name:

The following list assumes that you are calling the embedding provider directly. If you are calling Unstructured’s software-as-a-service (SaaS) for processing instead (for example, by specifying an Unstructured API key and an Unstructured SaaS URL), you are limited to the model names that are supported by the Unstructured API. See the list of supported model names.
- `bedrock`: Choose a model. No default model is provided. Learn more about the supported models.
- `huggingface`: Choose a model, or use the default model `sentence-transformers/all-MiniLM-L6-v2`.
- `mixedbread-ai`: Choose a model, or use the default model `mixedbread-ai/mxbai-embed-large-v1`.
- `octoai`: Choose a model, or use the default model `thenlper/gte-large`.
- `openai`: Choose a model, or use the default model `text-embedding-ada-002`.
- `togetherai`: Choose a model, or use the default model `togethercomputer/m2-bert-80M-32k-retrieval`.
- `vertexai`: Choose a model, or use the default model `text-embedding-05`.
- `voyageai`: Choose a model. No default model is provided.

4. Note the special settings to connect to the provider:

The following special settings assume that you are calling the embedding provider directly. If you are calling Unstructured’s software-as-a-service (SaaS) for processing instead (for example, by specifying an Unstructured API key and an Unstructured SaaS URL), do not include any of these special settings. Unstructured uses its own internal special settings when using the specified provider to generate the embeddings.
- For `bedrock`, you’ll need an AWS access key value, the corresponding AWS secret access key value, and the corresponding AWS Region identifier. Get an AWS access key and secret access key.
- For `huggingface`, if you use a gated model (a model with special conditions that you must accept before you can use it, or a privately published model), you’ll need an HF inference API key value, beginning with `hf_`. Get an HF inference API key. To learn whether your model requires an HF inference API key, see your model provider’s documentation.
- For `mixedbread-ai`, you’ll need a Mixedbread API key value. Get a Mixedbread API key.
- For `octoai`, you’ll need an Octo AI API token value. Get an Octo AI API token.
- For `openai`, you’ll need an OpenAI API key value. Get an OpenAI API key.
- For `togetherai`, you’ll need a together.ai API key value. Get a together.ai API key.
- For `vertexai`, you’ll need the path to a Google Cloud credentials JSON file. Learn more here and here.
- For `voyageai`, you’ll need a Voyage AI API key value. Get a Voyage AI API key.
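However you obtain these values, avoid hardcoding them in commands or code. One common pattern (an illustration, not a requirement; the variable names here are hypothetical, and later examples in this section reference them) is to export them as environment variables:

```bash
# Hypothetical variable names, shown for illustration only.
export HF_INFERENCE_API_KEY="hf_..."   # Hugging Face inference API key
export AWS_ACCESS_KEY_ID="..."         # Amazon Bedrock: AWS access key
export AWS_SECRET_ACCESS_KEY="..."     # Amazon Bedrock: AWS secret access key
export AWS_REGION="us-east-1"          # Amazon Bedrock: AWS Region identifier
```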
5. Now, apply all of this information as follows, and then run your command or code:

Ingest CLI
The following options assume that you are calling the embedding provider directly. If you are calling Unstructured’s software-as-a-service (SaaS) for processing instead (for example, by specifying an Unstructured API key and an Unstructured SaaS URL), do not include any of the following options:

- `--embedding-api-key`
- `--embedding-aws-access-key-id`
- `--embedding-aws-secret-access-key`
- `--embedding-aws-region`

Unstructured uses its own internal settings for these options when using the specified provider to generate the embeddings.
For the source connector command (an example sketch follows this list):

- Set the command’s `--embedding-provider` to the provider’s ID, for example `huggingface`.
- Set `--embedding-model-name` to the model name, as applicable, for example `sentence-transformers/sentence-t5-xl`. Or omit this to use the default model, as applicable.
- Set `--embedding-api-key` to the provider’s required API key value or credentials JSON file path, as appropriate.
- For `bedrock`, also set:
  - `--embedding-aws-access-key-id` to the AWS access key value.
  - `--embedding-aws-secret-access-key` to the corresponding AWS secret access key value.
  - `--embedding-aws-region` to the corresponding AWS Region identifier.
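For example, here is a minimal sketch of a command that partitions files from a local directory, chunks them, and generates Hugging Face embeddings. The `local` source connector, the paths, the chunking flag, and the environment variable are illustrative assumptions; substitute your own connector and options, and run `unstructured-ingest <connector> --help` to confirm the flags that your version supports:

```bash
unstructured-ingest \
  local \
  --input-path example-docs \
  --output-dir output \
  --chunking-strategy by_title \
  --embedding-provider huggingface \
  --embedding-model-name sentence-transformers/sentence-t5-xl \
  --embedding-api-key "$HF_INFERENCE_API_KEY"
```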
Ingest Python library

The following parameters assume that you are calling the embedding provider directly. If you are calling Unstructured’s software-as-a-service (SaaS) for processing instead (for example, by specifying an Unstructured API key and an Unstructured SaaS URL), do not include any of the following parameters:

- `embedding_api_key`
- `embedding_aws_access_key_id`
- `embedding_aws_secret_access_key`
- `embedding_aws_region`

Unstructured uses its own internal settings for these parameters when using the specified provider to generate the embeddings.
For the source connector’s `EmbedderConfig` object (an example sketch follows this list):

- Set the `embedding_provider` parameter to the provider’s ID, for example `huggingface`.
- Set `embedding_model_name` to the model name, as applicable, for example `sentence-transformers/sentence-t5-xl`. Or omit this to use the default model, as applicable.
- Set `embedding_api_key` to the provider’s required API key value or credentials JSON file path, as appropriate.
- For `bedrock`, also set:
  - `embedding_aws_access_key_id` to the AWS access key value.
  - `embedding_aws_secret_access_key` to the corresponding AWS secret access key value.
  - `embedding_aws_region` to the corresponding AWS Region identifier.
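For example, here is a minimal sketch of a pipeline that partitions files from a local directory, chunks them, and generates Hugging Face embeddings. The local source and destination connectors, the paths, and the environment variable are illustrative assumptions, and the module paths follow the v2 layout of `unstructured-ingest` and may differ across versions; swap in your own connector configurations:

```python
import os

from unstructured_ingest.v2.pipeline.pipeline import Pipeline
from unstructured_ingest.v2.interfaces import ProcessorConfig
from unstructured_ingest.v2.processes.connectors.local import (
    LocalIndexerConfig,
    LocalDownloaderConfig,
    LocalConnectionConfig,
    LocalUploaderConfig,
)
from unstructured_ingest.v2.processes.partitioner import PartitionerConfig
from unstructured_ingest.v2.processes.chunker import ChunkerConfig
from unstructured_ingest.v2.processes.embedder import EmbedderConfig

if __name__ == "__main__":
    Pipeline.from_configs(
        context=ProcessorConfig(),
        # Read input files from a local directory (illustrative connector choice).
        indexer_config=LocalIndexerConfig(input_path="example-docs"),
        downloader_config=LocalDownloaderConfig(),
        source_connection_config=LocalConnectionConfig(),
        partitioner_config=PartitionerConfig(),
        # Chunk first so each piece of text fits the embedding model's input limits.
        chunker_config=ChunkerConfig(chunking_strategy="by_title"),
        # The embedding settings described in the list above.
        embedder_config=EmbedderConfig(
            embedding_provider="huggingface",
            embedding_model_name="sentence-transformers/sentence-t5-xl",
            embedding_api_key=os.getenv("HF_INFERENCE_API_KEY"),
        ),
        # Write the embedded output to a local directory.
        uploader_config=LocalUploaderConfig(output_dir="output"),
    ).run()
```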