This article covers connecting Unstructured to Delta Tables in Amazon S3. For information about connecting Unstructured to Delta Tables in Databricks instead, see Delta Tables in Databricks.
If you’re new to Unstructured, read this note first.
Before you can create a destination connector, you must first sign in to your Unstructured account:
After you sign in, the Unstructured user interface (UI) appears, which you use to get your Unstructured API key. To learn how, watch this 40-second how-to video.
After you create the destination connector, add it along with a source connector to a workflow. Then run the workflow as a job. To learn how, try out the hands-on Workflow Endpoint quickstart, go directly to the quickstart notebook, or watch the two 4-minute video tutorials for the Unstructured Python SDK.
You can also create destination connectors with the Unstructured user interface (UI). Learn how.
If you need help, reach out to the community on Slack, or contact us directly.
You are now ready to start creating a destination connector! Keep reading to learn how.
Send processed data from Unstructured to a Delta Table, stored in Amazon S3.
The requirements are as follows.
The following video shows how to fulfill the minimum set of Amazon S3 requirements to store Delta Tables:
The preceding video does not show how to create an AWS account.
For more information about requirements, see the following:
An AWS account. Create an AWS account.
An S3 bucket. Create an S3 bucket. Additional approaches are in the following video and in the how-to sections at the end of this page.
For authenticated bucket read access, the authenticated AWS IAM user must have at minimum the permissions of s3:ListBucket and s3:GetObject for that bucket. Learn how.
For bucket write access, authenticated access to the bucket must be enabled (anonymous access must not be enabled), and the authenticated AWS IAM user must have at minimum the permission of s3:PutObject for that bucket. Learn how.
For authenticated access, an AWS access key and secret access key for the authenticated AWS IAM user in the account. Create an AWS access key and secret access key. (For an AWS CLI sketch, see after this list.)
If the target files are in the root of the bucket, the path to the bucket, formatted as protocol://bucket/ (for example, s3://my-bucket/).
If the target files are in a folder, the path to the target folder in the S3 bucket, formatted as protocol://bucket/path/to/folder/ (for example, s3://my-bucket/my-folder/).
If the target files are in a folder, make sure the authenticated AWS IAM user has authenticated access to the folder as well. Enable authenticated folder access.
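For the access key requirement, a minimal sketch using the AWS CLI follows. The IAM user name <my-iam-user> is a hypothetical placeholder:

```bash
# Hypothetical IAM user name. The command prints the new AccessKeyId
# and SecretAccessKey; store the secret securely, as AWS does not
# show it again.
aws iam create-access-key --user-name <my-iam-user>
```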
To use the Amazon S3 console to add an access policy that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to an existing S3 bucket, do the following.
Sign in to the AWS Management Console.
Open the Amazon S3 Console.
Browse to the existing bucket and open it.
Click the Permissions tab.
In the Bucket policy area, click Edit.
In the Policy text area, copy the following JSON-formatted policy.
To restrict the following policy to a specific user in the AWS account, change root to that specific username.
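A minimal sketch of such a policy, assuming a single statement that grants list, read, and write access to all authenticated principals in the account, follows:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAuthenticatedReadWrite",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<my-account-id>:root"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::<my-bucket-name>",
        "arn:aws:s3:::<my-bucket-name>/*"
      ]
    }
  ]
}
```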
In this policy, replace the following:
<my-account-id> with your AWS account ID.
<my-bucket-name> in two places with the name of your bucket.
Click Save changes.
To use the AWS CloudFormation console to create an Amazon S3 bucket that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to the bucket, do the following.
Save the following YAML to a file on your local machine, for example create-s3-bucket.yaml. To restrict the bucket policy in the template to a specific user in the AWS account, change root to that specific username.
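A minimal sketch of such a template, assuming a BucketName parameter and a bucket policy equivalent to the one shown earlier, follows:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: S3 bucket readable and writable by all authenticated IAM users in this account.

Parameters:
  BucketName:
    Type: String
    Description: A unique name for the new S3 bucket.

Resources:
  # The bucket itself.
  Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName

  # Grants list, read, and write access to all authenticated
  # principals (root) in this AWS account.
  BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref Bucket
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowAuthenticatedReadWrite
            Effect: Allow
            Principal:
              AWS: !Sub 'arn:aws:iam::${AWS::AccountId}:root'
            Action:
              - s3:ListBucket
              - s3:GetObject
              - s3:PutObject
            Resource:
              - !Sub 'arn:aws:s3:::${Bucket}'
              - !Sub 'arn:aws:s3:::${Bucket}/*'
```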
Sign in to the AWS Management Console.
Open the AWS CloudFormation Console.
Click Create stack > With new resources (standard).
On the Create stack page, with Choose an existing template already selected, select Upload a template file.
Click Choose file, and browse to and select the YAML file from your local machine.
Click Next.
Enter a unique Stack name and BucketName.
Click Next two times.
Click Submit.
Wait until the Status changes to CREATE_COMPLETE.
After the bucket is created, you can delete the YAML file, if you want.
To use the AWS CLI to create an Amazon S3 bucket that allows all authenticated AWS IAM users in the corresponding AWS account to read and write to the bucket, do the following.
Copy the following script to a file on your local machine, for example a file named create-s3-bucket.sh. To restrict the bucket policy in the script to a specific user in the AWS account, change root to that specific username.
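A minimal sketch of such a script, assuming the AWS CLI is installed and configured, follows:

```bash
#!/bin/bash
# Create the bucket. If your Region is us-east-1 itself, omit the
# --create-bucket-configuration option.
aws s3api create-bucket \
  --bucket <my-unique-bucket-name> \
  --region <us-east-1> \
  --create-bucket-configuration LocationConstraint=<us-east-1>

# Attach a bucket policy that lets all authenticated IAM users
# (the root principal) in the account list the bucket and read
# and write its objects.
aws s3api put-bucket-policy \
  --bucket <my-unique-bucket-name> \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::<my-account-id>:root" },
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
        "Resource": [
          "arn:aws:s3:::<my-unique-bucket-name>",
          "arn:aws:s3:::<my-unique-bucket-name>/*"
        ]
      }
    ]
  }'
```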
In this script, replace the following:
<my-account-id> with your AWS account ID.
<my-unique-bucket-name> with the name of your bucket.
<us-east-1> with your AWS Region.
Run the script, for example:
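```bash
# Assuming you saved the script as create-s3-bucket.sh:
chmod +x create-s3-bucket.sh
./create-s3-bucket.sh
```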
After the bucket is created, you can delete the script file, if you want.
A Delta table consists of Parquet files that contain data and a transaction log that stores metadata about the transactions. Learn more.
The Delta Tables in Amazon S3 destination connector generates the following output within the specified path to the S3 bucket (or the specified folder within the bucket):
One Parquet (.parquet) file per file in the source location. For example, for a file in the source location named my-file.pdf, an associated file with the extension .parquet is generated. Various kinds of file transactions can result in additional Parquet files being generated. These Parquet filenames are automatically generated by the Delta Lake engine and are not meant to be manually modified.
A folder named _delta_log that contains metadata and change history about the .parquet files. As Parquet files are added to, changed, or removed from the specified bucket or folder path, the _delta_log folder is updated with any related metadata and change history details.
Together, this set of Parquet files and their associated _delta_log folder (and its contents) describe a single, versioned Delta table. Because of this, Unstructured recommends the following usage best practices:
Do not manually add, change, or delete any of the Parquet files or the _delta_log folder within a Delta table’s directory. This can lead to data loss or table corruption.
If you copy or move a Delta table, copy or move all of its Parquet files and its _delta_log folder (and its contents) together as a unit. Note that the copied or moved Delta table will no longer be controlled by the original Delta Tables in S3 destination connector.
To create a Delta Tables in Amazon S3 destination connector, see the following examples.
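For instance, a minimal sketch of creating the connector through the Unstructured Workflow Endpoint with curl follows. The endpoint URL, the delta_table type string, and the config field names here are assumptions inferred from the placeholder names below; check the Unstructured API reference for the exact request shape:

```bash
# A sketch only: endpoint path, "type" value, and config key names
# are assumptions, not confirmed against the current API.
curl --request POST \
  --url "https://platform.unstructuredapp.io/api/v1/destinations/" \
  --header "accept: application/json" \
  --header "content-type: application/json" \
  --header "unstructured-api-key: $UNSTRUCTURED_API_KEY" \
  --data '{
    "name": "<name>",
    "type": "delta_table",
    "config": {
      "aws_region": "<aws-region>",
      "table_uri": "<table-uri>",
      "aws_access_key_id": "<aws-access-key-id>",
      "aws_secret_access_key": "<aws-secret-access-key>"
    }
  }'
```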
Replace the preceding placeholders as follows:
<name> (required) - A unique name for this connector.
<aws-region> (required) - The AWS Region identifier (for example, us-east-1) for the Amazon S3 bucket you want to store the Delta Table in.
<table-uri> (required) - The URI of the Amazon S3 bucket you want to store the Delta Table in. This typically takes the format s3://my-bucket/my-folder.
<aws-access-key-id> (required) - The AWS access key ID for the AWS IAM principal (such as an IAM user) that has the appropriate access to the S3 bucket.
<aws-secret-access-key> (required) - The AWS secret access key for the corresponding AWS access key ID.