AWS Resources
Rime provisions and manages the AWS resources required to move data from extraction into Snowflake. This includes S3 buckets for staging extracted data, SNS topics for event notifications, and IAM roles that grant Snowflake secure access to your S3 buckets. You configure these through the UI and Rime handles provisioning via Terraform internally.
All AWS resource changes go through the plan/apply workflow. Nothing is created or modified until you review and approve the change preview.
How AWS resources fit the pipeline
The extraction-to-ingestion flow uses AWS as an intermediary between your data sources and Snowflake:
- A connector extracts data and writes Parquet files to an S3 bucket
- S3 sends an event notification to an SNS topic when new files land
- Snowpipe subscribes to the SNS topic and loads the new data into Snowflake raw tables
Rime configures all three layers — the bucket, the notification, and the IAM permissions — so that they work together without manual wiring.
Connecting your AWS account
Before creating AWS resources, provide Rime with AWS credentials. Navigate to Project Settings > AWS Connection and configure:
- Region — the AWS region where resources will be created. Choose a region close to your Snowflake account for lower latency. For New Zealand customers, ap-southeast-2 (Sydney) is the most common choice
- Access method — either an IAM access key pair or an IAM role ARN that Rime assumes. Role assumption is recommended for production environments because it avoids long-lived credentials
Rime encrypts all AWS credentials at rest using AES-256-GCM.
S3 buckets
S3 buckets store the Parquet files produced by Rime’s extraction connectors. To create a bucket, navigate to Infrastructure > Resources, select Add Resource, and choose S3 Bucket.
Configuration options
| Option | Description | Default |
|---|---|---|
| Name | Globally unique bucket name. Rime prepends your project identifier to help avoid collisions | Required |
| Region | AWS region for the bucket. Should match your project’s AWS region | Project region |
| Versioning | If enabled, S3 retains previous versions of every object. Useful for audit trails but increases storage cost | Disabled |
| Lifecycle rules | Automatically transition or delete objects after a specified number of days | None |
Lifecycle rules
Lifecycle rules help control storage costs. When adding a rule, configure:
- Prefix — which objects the rule applies to (e.g., raw/ for all raw extraction data). Leave blank to apply to the entire bucket
- Transition days — number of days before objects move to a cheaper storage class (e.g., S3 Infrequent Access after 30 days)
- Expiration days — number of days before objects are permanently deleted
For most data pipelines, extracted Parquet files only need to persist long enough for Snowpipe to ingest them. A typical configuration:
- Transition to Infrequent Access after 30 days
- Delete after 90 days
If your organisation requires longer retention for compliance, adjust accordingly.
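The 30/90-day defaults above translate to a standard S3 lifecycle configuration. The sketch below builds that payload in the shape S3's PutBucketLifecycleConfiguration API expects; the rule ID scheme and the raw/ prefix are illustrative, not Rime's actual internal values.

```python
# Build one S3 lifecycle rule: transition to Infrequent Access, then delete.
# The rule ID naming scheme here is hypothetical.

def lifecycle_rule(prefix: str, transition_days: int, expiration_days: int) -> dict:
    """One lifecycle rule scoped to a prefix (empty prefix = whole bucket)."""
    return {
        "ID": f"retention-{prefix.rstrip('/') or 'bucket'}",
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},
        "Transitions": [
            # STANDARD_IA is S3 Infrequent Access
            {"Days": transition_days, "StorageClass": "STANDARD_IA"},
        ],
        "Expiration": {"Days": expiration_days},
    }

# The typical pipeline configuration described above: IA at 30 days, delete at 90
config = {"Rules": [lifecycle_rule("raw/", transition_days=30, expiration_days=90)]}
```

For longer compliance retention, only the two day counts change; the rule shape stays the same.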
Recommended bucket layout
Most projects use a single bucket with prefix-based organisation:
```
rime-{project}-data/
├── raw/                 # Connector extraction output
│   ├── postgres-main/   # One prefix per connector
│   ├── salesforce/
│   └── csv-uploads/
└── staging/             # Temporary processing files
```

For larger deployments with strict access control requirements, you can create separate buckets per data source. This allows finer-grained IAM policies and simplifies compliance auditing.
SNS topics
SNS topics relay S3 event notifications to Snowpipe. When a Parquet file lands in S3, an event notification triggers the SNS topic, which Snowpipe subscribes to for automatic ingestion.
To create an SNS topic, navigate to Infrastructure > Resources, select Add Resource, and choose SNS Topic.
Configuration options
| Option | Description | Default |
|---|---|---|
| Name | Topic identifier. Rime suggests a name based on the associated S3 bucket | Required |
| S3 bucket | The bucket this topic receives events from | Required |
| Event types | Which S3 events trigger notifications. Typically s3:ObjectCreated:* | Object created |
| Filter prefix | Only send notifications for objects matching this prefix (e.g., raw/) | None |
| Filter suffix | Only send notifications for objects matching this suffix (e.g., .parquet) | None |
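The prefix and suffix filters in the table apply together: an object key must match both for a notification to fire. A minimal sketch of that matching logic:

```python
def notification_matches(key: str, prefix: str = "", suffix: str = "") -> bool:
    """S3 notification filtering: both the prefix and suffix rules must match.
    An empty prefix or suffix matches every key."""
    return key.startswith(prefix) and key.endswith(suffix)

# A connector's Parquet output under raw/ triggers a notification...
hit = notification_matches("raw/postgres-main/part-0001.parquet", "raw/", ".parquet")
# ...but a temporary file under staging/ does not
miss = notification_matches("staging/tmp-0001.parquet", "raw/", ".parquet")
```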
How SNS connects S3 to Snowpipe
When Rime creates an SNS topic, it also configures:
- S3 event notification on the specified bucket, filtered by the prefix and suffix you set
- SNS topic policy that allows the S3 bucket to publish to the topic
- Snowpipe subscription that allows your Snowflake account to receive messages from the topic
This three-way configuration is handled automatically. If you modify the bucket or topic later, Rime updates all related resources to keep them consistent.
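The topic policy in step two follows the standard AWS pattern for letting a bucket publish to SNS: the s3.amazonaws.com service principal is allowed SNS:Publish, with source-ARN and source-account conditions to stop other buckets or accounts from publishing. A hedged sketch of that policy document (the account ID, bucket, and topic names are placeholders):

```python
def s3_publish_policy(topic_arn: str, bucket_arn: str, account_id: str) -> dict:
    """Topic policy allowing one specific S3 bucket to publish notifications."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowS3Publish",
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "SNS:Publish",
            "Resource": topic_arn,
            "Condition": {
                # Only events from this bucket, in this account
                "ArnLike": {"aws:SourceArn": bucket_arn},
                "StringEquals": {"aws:SourceAccount": account_id},
            },
        }],
    }

policy = s3_publish_policy(
    "arn:aws:sns:ap-southeast-2:123456789012:rime-demo-events",
    "arn:aws:s3:::rime-demo-data",
    "123456789012",
)
```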
IAM roles and policies
Rime creates IAM roles that allow Snowflake to access your S3 buckets securely. This uses Snowflake’s external stage authentication model, where Snowflake assumes an IAM role in your AWS account.
Automatic IAM configuration
When you create an S3 bucket through Rime, the platform automatically provisions:
- An IAM role with a trust policy that allows your Snowflake account to assume it
- An IAM policy attached to that role, granting read access to the specified bucket and prefix
- External ID configuration to prevent confused deputy attacks
You do not need to manually create IAM roles or copy ARNs between consoles. Rime handles the bidirectional configuration: it creates the IAM role in AWS and configures the Snowflake storage integration to reference it.
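The trust policy and external ID described above combine in a single policy document: Snowflake's IAM user may assume the role only when it presents the agreed external ID. A sketch of that document, with a placeholder Snowflake user ARN and external ID (the real values come from the Snowflake storage integration):

```python
def snowflake_trust_policy(snowflake_user_arn: str, external_id: str) -> dict:
    """Role trust policy: Snowflake may assume the role, but only with
    the external ID, which prevents confused deputy attacks."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": snowflake_user_arn},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
        }],
    }

# Placeholder values for illustration only
trust = snowflake_trust_policy(
    "arn:aws:iam::999999999999:user/snowflake-stage-user",
    "EXAMPLE_EXTERNAL_ID",
)
```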
Policy scope
Each IAM policy is scoped to the minimum permissions required:
- s3:GetObject and s3:GetObjectVersion on the bucket contents
- s3:ListBucket on the bucket itself
- No write permissions (Snowflake only reads; Rime’s connectors write using separate credentials)
If you need to adjust the policy scope (for example, to restrict access to a specific prefix), edit the IAM resource in the Rime UI. The change goes through the normal plan/apply workflow.
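Put together, the scoped permissions amount to a two-statement policy: object reads on the prefix, and a prefix-conditioned ListBucket on the bucket. A sketch, assuming a hypothetical bucket name and the raw/ prefix restriction mentioned above:

```python
def read_only_policy(bucket: str, prefix: str = "") -> dict:
    """Minimum read-only S3 policy for one bucket, optionally scoped to a prefix."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Object reads, limited to keys under the prefix
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:GetObjectVersion"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}*",
            },
            {
                # Listing, limited to the same prefix via a condition
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}*"]}},
            },
        ],
    }

scoped = read_only_policy("rime-demo-data", prefix="raw/")
```

Note there is no s3:PutObject anywhere, matching the read-only scope above.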
Resource dependencies
AWS resources have dependencies on each other and on Snowflake resources. Rime tracks these dependencies and applies changes in the correct order:
- S3 bucket is created first (no dependencies)
- IAM role and policy are created next (depends on the bucket ARN)
- SNS topic is created with the bucket event notification (depends on the bucket)
- Snowflake storage integration is configured with the IAM role ARN (depends on the IAM role)
- Snowflake pipe subscribes to the SNS topic (depends on the topic and storage integration)
When you remove a resource, Rime checks for dependencies and warns you if other resources rely on it. Removing an S3 bucket, for example, will flag the associated SNS topic, IAM role, and Snowflake pipe as affected resources.
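The ordering above is a topological sort of the dependency graph. A sketch of that graph using Python's standard-library graphlib (the resource names are illustrative, not Rime's internal identifiers):

```python
from graphlib import TopologicalSorter

# Each resource maps to the set of resources it depends on,
# mirroring the five-step ordering described above.
deps = {
    "s3_bucket": set(),
    "iam_role": {"s3_bucket"},
    "sns_topic": {"s3_bucket"},
    "storage_integration": {"iam_role"},
    "snowpipe": {"sns_topic", "storage_integration"},
}

# static_order() yields each resource only after all its dependencies
order = list(TopologicalSorter(deps).static_order())
```

Removal works against the same graph in reverse: anything downstream of the removed resource is flagged as affected.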
Common patterns
Single-region pipeline
The simplest setup for a New Zealand-based organisation:
- One S3 bucket in ap-southeast-2
- One SNS topic per bucket
- One IAM role per Snowflake storage integration
- Snowflake account in ap-southeast-2
This keeps latency low and simplifies IAM policy management.
Multi-source with shared bucket
When extracting from many sources into a single Snowflake database:
- One S3 bucket with per-connector prefixes
- One SNS topic with a raw/ prefix filter
- Snowpipe loads all connector output into the RAW database, separated by schema
This is the default layout Rime creates when you add connectors to a project.
Isolated buckets per source
For organisations with strict compliance requirements:
- Separate S3 bucket per data source
- Separate SNS topic per bucket
- Separate IAM role per bucket
- Snowpipe configured per bucket-schema pair
This provides stronger isolation but increases the number of resources to manage. Rime handles the additional complexity through its dependency tracking and change preview.
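The per-source pattern multiplies resources predictably: each source gets its own bucket, topic, and role. The naming scheme below is purely hypothetical, to show the shape of what gets created, not Rime's actual convention:

```python
def isolated_resources(project: str, sources: list[str]) -> list[dict]:
    """One bucket, SNS topic, and IAM role per data source.
    Hypothetical naming scheme for illustration only."""
    return [
        {
            "source": s,
            "bucket": f"rime-{project}-{s}-data",
            "topic": f"rime-{project}-{s}-events",
            "role": f"rime-{project}-{s}-snowflake-read",
        }
        for s in sources
    ]

resources = isolated_resources("demo", ["postgres-main", "salesforce"])
```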
Next steps
- Configure Snowflake Resources for databases, schemas, warehouses, and roles
- Understand the Change Management workflow for reviewing and applying infrastructure changes
- Set up Drift Detection to catch manual AWS console changes