Quickstart
This guide walks you through the essential steps to get data flowing through Rime: creating a project, connecting Snowflake, setting up a connector, and running your first extraction.
Prerequisites
- A Snowflake account (any edition)
- Credentials for at least one data source (a database, SaaS application, or CSV/JSON files)
- A Rime account (sign up at the Rime sign-up page)
Step 1: Create a project
After signing in, you land on the Dashboard. Projects are the top-level organisational unit in Rime — they group connectors, infrastructure, transformations, and pipelines together.
- Click New Project on the Dashboard or in the sidebar
- Enter a project name (e.g., “Production Data” or “Analytics”)
- Click Create
You are redirected to the project overview page.
Step 2: Connect Snowflake
Rime needs access to your Snowflake account to manage databases, schemas, and load data.
- Navigate to Settings > Snowflake within your project
- Enter your Snowflake account identifier (e.g., xy12345.ap-southeast-2)
- Provide authentication credentials:
- Password authentication: enter your Snowflake username and password
- Key pair authentication: upload or paste your private key
- Click Test Connection to verify access
- Click Save
Rime encrypts your credentials at rest using AES-256-GCM. They are never stored in plain text.
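To illustrate what AES-256-GCM authenticated encryption involves, here is a minimal sketch using Python's `cryptography` package. This is not Rime's internal code; the key and nonce handling below are purely for demonstration, and a real system would manage keys in a KMS rather than in memory.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustration only: AES-256-GCM encrypts and authenticates in one pass.
key = AESGCM.generate_key(bit_length=256)   # 32-byte key (demo; real keys live in a KMS)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce, the GCM standard size

secret = b"snowflake-password"
ciphertext = aesgcm.encrypt(nonce, secret, None)   # ciphertext includes the auth tag

assert ciphertext != secret                         # nothing stored in plain text
assert aesgcm.decrypt(nonce, ciphertext, None) == secret
```

Because GCM is authenticated, any tampering with the stored ciphertext causes decryption to fail outright rather than return corrupted credentials.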
Step 3: Provision infrastructure
Before extracting data, Rime needs a destination in Snowflake and an S3 bucket for staging.
- Go to Infrastructure in the project sidebar
- Click Add Resource and select Snowflake Database
- Name it (e.g., RAW_DATA)
- Add a schema (e.g., PUBLIC)
- Click Add Resource again and select S3 Bucket
- Rime generates a unique bucket name
- An IAM role is created automatically for Snowflake access
- Click Plan Changes to see what Rime will create
- Review the change preview and click Apply
Rime provisions the database, schema, S3 bucket, IAM role, and Snowpipe configuration. This takes 1-2 minutes.
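The plan-then-apply flow above can be sketched as a diff between declared resources and what already exists. This is a conceptual illustration, not Rime's provisioning engine; the resource names are hypothetical.

```python
# Conceptual sketch: a "plan" is the set difference between the resources
# you declared and the resources that currently exist.
def plan_changes(current: set, desired: set) -> dict:
    return {
        "create": desired - current,    # resources to provision
        "destroy": current - desired,   # resources to tear down
    }

# Hypothetical state: the S3 bucket already exists; everything else is new.
current = {"s3_bucket:rime-stage-abc123"}
desired = {
    "s3_bucket:rime-stage-abc123",
    "snowflake_database:RAW_DATA",
    "snowflake_schema:RAW_DATA.PUBLIC",
    "iam_role:rime-snowpipe-access",
}

plan = plan_changes(current, desired)
```

Reviewing the plan before applying it means no resource is created or destroyed without an explicit confirmation step.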
Step 4: Create a connector
Connectors pull data from your source systems.
- Go to Connectors in the project sidebar
- Click New Connector
- Select your source type (e.g., PostgreSQL)
- Enter connection details:
- Host, port, database name
- Username and password (encrypted at rest)
- Click Test Connection to verify
- Rime discovers available tables and columns
- Select the tables you want to sync
- Set a sync schedule (e.g., every 6 hours) or leave it as manual
- Click Create Connector
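A fixed-interval schedule like "every 6 hours" simply resolves to a next run time relative to the last sync. The sketch below is illustrative only and does not reflect Rime's scheduler internals.

```python
from datetime import datetime, timedelta, timezone

# Illustration: resolving a fixed-interval sync schedule to the next run time.
def next_sync(last_sync: datetime, interval_hours: int = 6) -> datetime:
    return last_sync + timedelta(hours=interval_hours)

last = datetime(2024, 1, 1, 6, 0, tzinfo=timezone.utc)
upcoming = next_sync(last)
assert upcoming == datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
```

A manual schedule skips this calculation entirely: syncs run only when you click Sync Now.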
Step 5: Run your first sync
- On the connector detail page, click Sync Now
- Watch the progress in real time:
- Tables are extracted in parallel
- Row counts update as data flows
- Any errors appear immediately
- When the sync completes, review the run summary, then query the target table in Snowflake to confirm the data landed
The extraction pipeline is: source database -> Apache Arrow -> Parquet file -> S3 -> Snowpipe -> Snowflake raw table.
What’s next
You now have data flowing from a source system into Snowflake through Rime. From here:
- Transform your data — set up Kimball or Data Vault models to shape raw data into analytics-ready tables
- Build a pipeline — create a DAG pipeline that chains extraction, transformation, and validation steps
- Set up monitoring — configure alert rules to catch failures and anomalies
- Enable governance — review masked-by-default settings and classify sensitive columns