Import dbt Projects
If you already have dbt models running against your Snowflake warehouse, you can import them into Rime rather than rebuilding your transformation layer from scratch. Rime connects to your Git repository, parses the dbt project structure, extracts the lineage graph, and maps models into its transformation management UI.
Connecting your Git repository
Navigate to Project > Migration > Import dbt Project and provide the repository details:
| Field | Description |
|---|---|
| Repository URL | The Git clone URL (HTTPS or SSH) for the repository containing your dbt project |
| Branch | The branch to import from (defaults to main) |
| Path | If the dbt project is not at the repository root, specify the subdirectory (e.g., dbt/ or analytics/) |
| Authentication | HTTPS credentials (username + token) or SSH key |
Rime clones the repository and locates the dbt_project.yml file to identify the project structure. If the repository contains multiple dbt projects, you will be prompted to select which one to import.
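For reference, a minimal dbt_project.yml of the kind Rime looks for might resemble the following. The project name, profile name, and paths here are illustrative, not required values:

```yaml
name: analytics              # project name shown if multiple projects are found
version: "1.0.0"
profile: snowflake_prod      # illustrative profile name
model-paths: ["models"]

models:
  analytics:
    staging:
      +materialized: view
    marts:
      +materialized: table
```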
What is parsed
Rime reads the following from your dbt project:
Models
All .sql files in the models/ directory are parsed. For each model, Rime extracts:
- Model name and file path
- Materialization type (table, view, incremental, ephemeral)
- Schema and database target (from schema.yml or config blocks)
- Description and column documentation (if present in YAML schema files)
- Config options (tags, grants, pre/post hooks)
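As an illustration, a model file like the hypothetical models/marts/fct_orders.sql below carries most of this metadata in its config block and SQL:

```sql
-- models/marts/fct_orders.sql (all names are illustrative)
{{ config(
    materialized='incremental',
    tags=['finance'],
    post_hook="grant select on {{ this }} to role reporter"
) }}

select
    order_id,
    customer_id,
    amount,
    updated_at
from {{ ref('stg_orders') }}
{% if is_incremental() %}
where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```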
Sources
Source definitions from schema.yml files are parsed to identify the raw data that feeds into your transformation layer. Each source maps to a Snowflake database and schema.
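For example, a source block like the following (the database, schema, and table names are invented) tells Rime which Snowflake objects feed the staging layer:

```yaml
# models/staging/sources.yml (illustrative names)
version: 2
sources:
  - name: raw_shop
    database: RAW        # Snowflake database
    schema: SHOP         # Snowflake schema
    tables:
      - name: orders
      - name: customers
```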
Tests
Both schema tests (defined in YAML) and custom data tests (SQL files in tests/) are imported. Rime tracks:
- Test type (not_null, unique, accepted_values, relationships, or custom)
- Which model or column the test applies to
- Test severity (warn or error)
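A typical schema test block covering these attributes might look like this (model and column names are invented):

```yaml
version: 2
models:
  - name: stg_orders
    columns:
      - name: order_id
        tests:
          - not_null
          - unique
      - name: status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'returned']
              config:
                severity: warn
```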
Macros
Custom macros in the macros/ directory are catalogued. Rime records macro names, arguments, and which models reference them.
Lineage from manifest.json
If a target/manifest.json file exists in the repository (generated by a previous dbt compile or dbt run), Rime extracts the full dependency graph. This provides the most accurate lineage, including dependencies on sources, cross-project references, and ephemeral model chains.
If no manifest is available, Rime builds an approximate lineage graph by parsing ref() and source() calls in the model SQL. This is less accurate for complex projects with dynamic references or Jinja logic, but covers the majority of cases.
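The fallback parse can be sketched in a few lines. This simplified version is not Rime's actual parser: it handles the common single-argument ref() and two-argument source() forms, and would miss dynamic or cross-project references:

```python
import re

# Matches {{ ref('model_name') }} (single-argument form only).
REF_RE = re.compile(r"\{\{\s*ref\(\s*['\"]([^'\"]+)['\"]\s*\)\s*\}\}")
# Matches {{ source('source_name', 'table_name') }}.
SRC_RE = re.compile(
    r"\{\{\s*source\(\s*['\"]([^'\"]+)['\"]\s*,\s*['\"]([^'\"]+)['\"]\s*\)\s*\}\}"
)

def approximate_deps(model_sql: str) -> dict:
    """Extract upstream models and sources from a model's raw SQL."""
    return {
        "refs": REF_RE.findall(model_sql),
        "sources": [f"{src}.{table}" for src, table in SRC_RE.findall(model_sql)],
    }

sql = """
select o.*, c.region
from {{ ref('stg_orders') }} o
join {{ source('raw_shop', 'customers') }} c using (customer_id)
"""
print(approximate_deps(sql))
```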
Model mapping
Imported dbt models are mapped into Rime’s transformation projects according to their layer:
| dbt convention | Rime layer | Description |
|---|---|---|
| staging/ or stg_ prefix | Staging | Source-conformed cleaning and renaming |
| intermediate/ or int_ prefix | Intermediate | Cross-source joins and business logic |
| marts/ or dim_/fct_ prefix | Marts | Business-facing dimension and fact tables |
| raw_vault/ or hub_/link_/sat_ prefix | Raw vault (Data Vault) | Hub, link, and satellite tables |
| business_vault/ | Business vault (Data Vault) | Business rules applied to raw vault |
| Other | Uncategorized | Models that do not match a known convention |
Uncategorized models are imported but flagged for manual review. You can assign them to the correct layer in the transformation UI after import.
If your project uses the Kimball methodology, models are organized into staging and marts. If it uses Data Vault, models are organized into raw vault, business vault, and marts. Mixed projects are supported — Rime assigns each model to the layer that best matches its naming and directory structure.
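The mapping rules in the table above can be sketched as follows. This is an illustrative approximation, not Rime's actual matcher:

```python
LAYER_RULES = [
    # (directory fragments, filename prefixes, Rime layer)
    (("staging/",), ("stg_",), "Staging"),
    (("intermediate/",), ("int_",), "Intermediate"),
    (("raw_vault/",), ("hub_", "link_", "sat_"), "Raw vault"),
    (("business_vault/",), (), "Business vault"),
    (("marts/",), ("dim_", "fct_"), "Marts"),
]

def assign_layer(path: str, name: str) -> str:
    """Assign a model to a layer by directory or name prefix, else Uncategorized."""
    for dirs, prefixes, layer in LAYER_RULES:
        if any(d in path for d in dirs) or any(name.startswith(p) for p in prefixes):
            return layer
    return "Uncategorized"

print(assign_layer("models/staging/stg_orders.sql", "stg_orders"))
print(assign_layer("models/misc/helper.sql", "helper"))
```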
Lineage extraction
The imported lineage graph is displayed in the transformation UI using the same visual DAG editor used for Rime-native models. You can:
- See the full upstream and downstream dependencies for any model
- Identify which sources feed into which marts
- Spot orphaned models (models with no downstream consumers)
- Trace data flow from source to mart
The lineage graph updates automatically when you make changes to imported models through Rime.
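Orphan detection, for instance, reduces to finding models that never appear on the upstream side of an edge. A minimal sketch with invented model names:

```python
def find_orphans(edges, models):
    """Return models that no other model consumes (no outgoing edge)."""
    has_downstream = {upstream for upstream, _ in edges}
    return sorted(m for m in models if m not in has_downstream)

# Toy graph: (upstream, downstream) pairs.
edges = [
    ("stg_orders", "fct_orders"),
    ("stg_customers", "dim_customers"),
]
models = ["stg_orders", "stg_customers", "fct_orders", "dim_customers", "stg_unused"]

# Note: marts are terminal by design, so a real check would likely exclude
# the Marts layer; here stg_unused is the interesting orphan.
print(find_orphans(edges, models))
```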
Limitations
Importing a dbt project does not reproduce every dbt feature with perfect fidelity:
- Custom macros that use advanced Jinja (environment variables, run-context variables, or adapter-specific calls) may not translate into Rime’s internal template system. These macros are catalogued but flagged for manual review.
- Packages (packages.yml dependencies such as dbt-utils and dbt-expectations) are recorded but not automatically installed. If your models depend on package macros, you may need to recreate equivalent logic in Rime.
- Hooks and operations (pre-hook, post-hook, on-run-start, on-run-end) are imported as metadata but require manual configuration in Rime’s pipeline steps.
- Snapshots are not imported in the current release. Snapshot definitions are logged but skipped.
- Exposures and metrics (dbt Semantic Layer) are catalogued but not yet integrated into Rime’s monitoring.
Post-import workflow
After the import completes:
- Review the model map. Check that models are assigned to the correct layers. Reassign any uncategorized models.
- Review flagged items. Models with custom macros, package dependencies, or other limitations are flagged in the import summary. Address these before running transformations through Rime.
- Connect to Snowflake. Ensure your Rime project’s Snowflake connection has the credentials and permissions needed to run the imported models (the same permissions your existing dbt runs use).
- Run a validation. Execute a dbt compile through Rime to verify that the imported models compile successfully. This does not write any data; it only checks that the SQL is valid.
- Switch execution. Once validated, you can run transformations through Rime instead of your existing dbt orchestration. We recommend running both in parallel for a transition period before decommissioning the old pipeline.
Models imported from Git are now managed through Rime’s transformation UI. Changes are made in the UI, and Rime generates the underlying dbt SQL internally.
Next steps
- Import your Snowflake objects if you have not already
- Import your cloud resources (S3, IAM)
- Review the transformation overview for how to manage models through Rime