Connectors Overview

Connectors are configured connections to your data sources. Each connector knows how to authenticate with a source system, discover its schema, and extract data into Rime’s ingestion pipeline. Once configured, a connector runs on a schedule (or on demand) to keep your Snowflake warehouse current with source data.

Supported source types

Rime includes built-in connectors for five categories of data sources:

Category             Sources
Databases            PostgreSQL, MySQL, Microsoft SQL Server, MongoDB, Oracle
SaaS applications    Salesforce, Xero, Shopify, Google Sheets
DevOps tools         Jira, GitHub, GitLab, Azure DevOps
REST APIs            Any HTTP API with JSON responses
Files                CSV and JSON uploads

Each category has its own authentication patterns, sync behaviour, and configuration options. See the dedicated pages for details.

The 5-step configuration flow

Every connector follows the same setup process in the Rime UI:

Step 1: Select connector type

Choose a source type from the connector catalogue. Each type has a description of what it connects to and what authentication it requires.

Step 2: Enter connection configuration

Provide the connection details for your source system. The fields vary by connector type — a PostgreSQL connector needs a host, port, database name, and credentials, while a Salesforce connector needs OAuth authorization. All connectors require a display name so you can identify them later.
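As an illustration, a PostgreSQL connector configuration might look like the following. The field names here are hypothetical — the actual Rime configuration schema may differ:

```python
# Illustrative shape of a PostgreSQL connector configuration.
# Field names are assumptions, not the actual Rime schema.
connector_config = {
    "display_name": "prod-orders-db",   # required for every connector type
    "type": "postgresql",
    "connection": {
        "host": "db.internal.example.com",
        "port": 5432,
        "database": "orders",
        "username": "rime_reader",
        # The password is submitted once and stored encrypted
        # (see "Credential encryption" below); it is never echoed back.
        "password": "<secret>",
    },
}
```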

Step 3: Test connection

Rime attempts to connect to the source system using the credentials you provided. The test verifies network reachability, authentication, and basic read permissions. If the test fails, the UI displays the specific error (authentication rejected, host unreachable, SSL handshake failed, etc.) so you can correct the configuration before proceeding.

You cannot proceed past this step until the connection test succeeds.
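The reachability portion of such a test can be sketched with the standard library alone. This is a minimal illustration, not Rime's implementation — the full test also exercises authentication and read permissions, which require the source's own driver:

```python
import socket

def check_reachable(host: str, port: int, timeout: float = 5.0):
    """Return (True, None) if a TCP connection succeeds, else (False, reason)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, None
    except OSError as exc:  # covers refusal, timeout, and DNS failure
        return False, f"host unreachable: {exc}"

# Authentication and read-permission checks would follow, using the
# source's driver (e.g. a trivial test query such as "SELECT 1").
```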

Step 4: Discover and select tables

Rime queries the source system’s metadata to discover available tables, columns, and data types. You then select which tables to extract. For each table, you can include or exclude individual columns.

This step also shows the inferred Snowflake data types for each column. See Schema Discovery for details on type mapping and how schema changes are handled.
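Conceptually, type inference is a lookup from source types to Snowflake types. The pairs below are illustrative assumptions only; the Schema Discovery page is the authoritative reference:

```python
# Hypothetical PostgreSQL -> Snowflake type mapping, for illustration only.
TYPE_MAP = {
    "integer":     "NUMBER(38,0)",
    "bigint":      "NUMBER(38,0)",
    "text":        "VARCHAR",
    "timestamptz": "TIMESTAMP_TZ",
    "boolean":     "BOOLEAN",
}

def to_snowflake_type(source_type: str) -> str:
    # Fall back to VARIANT for types without a direct equivalent.
    return TYPE_MAP.get(source_type.lower(), "VARIANT")
```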

Step 5: Set schedule

Configure when the connector should run. You can set a cron schedule with timezone selection, or leave the connector unscheduled and trigger syncs manually. See Connector Scheduling for cron syntax, timezone handling, and common schedule patterns.
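A basic sanity check for a five-field cron expression ("minute hour day month weekday") can be sketched as below. This is a deliberately light check — it does not handle ranges or step values, and Rime's own parser may accept a richer syntax:

```python
# Valid value ranges for the five cron fields, in order.
FIELD_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 6)]

def looks_like_cron(expr: str) -> bool:
    fields = expr.split()
    if len(fields) != len(FIELD_RANGES):
        return False
    for field, (lo, hi) in zip(fields, FIELD_RANGES):
        if field == "*":
            continue
        for part in field.split(","):  # comma lists only; no ranges or steps
            if not part.isdigit() or not lo <= int(part) <= hi:
                return False
    return True

# "0 6 * * *" -> daily at 06:00 in the connector's configured timezone
```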

Connector lifecycle

Each connector has a status that reflects its current state:

  • Created — the connector has been saved but has not run yet. This is the initial state after completing the setup wizard.
  • Active — the connector has run successfully at least once and is operating normally. Scheduled syncs will continue to run.
  • Paused — you have manually paused the connector. No scheduled syncs will run until you resume it. You can still trigger manual syncs while paused.
  • Errored — the most recent sync failed. The connector remains in this state until a subsequent sync succeeds. Scheduled syncs continue to run (the error may be transient), but you should investigate the failure. See Monitoring Extraction Runs for error details.

You can pause and resume connectors from the connector detail page. Pausing a connector does not delete its configuration, credentials, or sync history.
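The four statuses form a small state machine. The transitions below are inferred from the lifecycle description above and are a sketch, not a definitive specification of Rime's rules:

```python
from enum import Enum

class Status(Enum):
    CREATED = "created"
    ACTIVE = "active"
    PAUSED = "paused"
    ERRORED = "errored"

# Transitions inferred from the lifecycle description; exact rules may differ.
TRANSITIONS = {
    (Status.CREATED, "sync_ok"):   Status.ACTIVE,
    (Status.CREATED, "sync_fail"): Status.ERRORED,
    (Status.ACTIVE,  "sync_fail"): Status.ERRORED,
    (Status.ACTIVE,  "pause"):     Status.PAUSED,
    (Status.ERRORED, "sync_ok"):   Status.ACTIVE,
    (Status.ERRORED, "pause"):     Status.PAUSED,
    (Status.PAUSED,  "resume"):    Status.ACTIVE,
}

def next_status(current: Status, event: str) -> Status:
    # Events with no entry leave the status unchanged
    # (e.g. another successful sync while already ACTIVE).
    return TRANSITIONS.get((current, event), current)
```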

Credential encryption

All connector credentials (passwords, API keys, tokens, OAuth secrets) are encrypted at rest using AES-256-GCM. Rime encrypts credentials before writing them to the database and decrypts them only at the moment a connector runs. Encryption keys are managed by Rime’s control plane and are never exposed through the UI or API.

Credentials are stored as encrypted binary data alongside the connector configuration. The connector’s non-sensitive configuration (host, port, selected tables, schedule) is stored as structured JSON.
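The general pattern — AES-256-GCM with a fresh nonce per credential, nonce stored alongside the ciphertext — can be sketched with the third-party `cryptography` package. This is an illustration of the scheme, not Rime's code; in Rime, key management happens in the control plane rather than locally:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_credential(key: bytes, plaintext: bytes) -> bytes:
    # AES-256-GCM with a fresh 96-bit nonce per credential; the nonce
    # need not be secret, only unique, so it is stored with the ciphertext.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_credential(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)  # in Rime, managed by the control plane
blob = encrypt_credential(key, b"hunter2")
```

GCM authenticates as well as encrypts, so a tampered blob or wrong key fails decryption outright instead of yielding garbage plaintext.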

Connector limits by tier

Your licensing tier determines how many connectors you can create per project:

Tier                 Connector limit
Free / Trial         2
Small Business       10
Business             50
Business Critical    Unlimited

These limits apply per project. If you have multiple projects, each project has its own connector quota. Attempting to create a connector beyond your limit will display an error in the UI with a prompt to upgrade your tier.
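The quota check amounts to a per-project lookup against the table above; a minimal sketch (tier identifiers here are hypothetical):

```python
# Per-project connector quotas by tier (None = unlimited), from the table above.
TIER_LIMITS = {
    "free_trial": 2,
    "small_business": 10,
    "business": 50,
    "business_critical": None,
}

def can_create_connector(tier: str, existing_connectors: int) -> bool:
    limit = TIER_LIMITS[tier]
    return limit is None or existing_connectors < limit
```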

What happens when a connector runs

When a connector sync starts (either on schedule or manually triggered), Rime:

  1. Decrypts the stored credentials
  2. Spawns an isolated connector runner process
  3. Connects to the source system and extracts each selected table as Apache Arrow record batches
  4. Writes Arrow data to Parquet files with Snappy compression
  5. Uploads Parquet files to the project’s S3 staging bucket
  6. Triggers Snowpipe to load the data into Snowflake raw tables

Each table is extracted independently. If one table fails (due to a permission change, a timeout, or a schema mismatch), the remaining tables continue to extract. The run is marked as partially failed, and the per-table error details are available in the run history.
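The per-table isolation described above can be sketched as follows. `extract_table` is a hypothetical stand-in for the real extraction step:

```python
# Sketch of per-table isolation: one table's failure does not abort the run.
def run_sync(tables, extract_table):
    results = {}
    for table in tables:
        try:
            extract_table(table)
            results[table] = ("ok", None)
        except Exception as exc:  # permission change, timeout, schema mismatch, ...
            results[table] = ("failed", str(exc))
    statuses = {status for status, _ in results.values()}
    if statuses == {"ok"}:
        run_status = "succeeded"
    elif statuses == {"failed"}:
        run_status = "failed"
    else:
        run_status = "partially_failed"
    # Per-table errors are preserved for the run history.
    return run_status, results
```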

For a deeper look at the extraction pipeline, see How Extraction Works.

Next steps

  • Configure your first connector by following the setup wizard in Project > Connectors > Add Connector
  • Review Connector Scheduling to set up automated syncs
  • Monitor your extraction runs from the run history page