
Monitoring Extraction Runs

Every time a connector syncs — whether triggered by a schedule, a manual trigger, or a pipeline step — Rime records the run with full details. The run history gives you visibility into what was extracted, how long it took, whether anything failed, and how data volumes are changing over time.

Accessing run history

Run history is available in two places:

  • Connector detail page: click on any connector in Project > Connectors, then select the History tab. This shows runs for that specific connector.
  • Project-level extraction page: go to Project > Extraction to see runs across all connectors in the project, sorted by most recent.

Run status

Each run has one of the following statuses:

  • Running: the extraction is currently in progress. Per-table status updates in near-real time.
  • Completed: all selected tables were extracted successfully.
  • Partially failed: some tables were extracted successfully, but one or more tables failed. The successful tables were loaded into Snowflake normally.
  • Failed: all tables failed to extract, or the connector runner process itself crashed before extracting any data.
  • Skipped: the scheduled run was skipped because the previous run was still in progress. See overlapping syncs.
  • Cancelled: the run was manually cancelled by a user before it completed.
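When scripting against run records, the statuses can be modeled as a small enumeration. A minimal sketch — the enum and helper names are illustrative, not part of Rime's API:

```python
from enum import Enum

class RunStatus(Enum):
    RUNNING = "running"
    COMPLETED = "completed"
    PARTIALLY_FAILED = "partially failed"
    FAILED = "failed"
    SKIPPED = "skipped"
    CANCELLED = "cancelled"

# Every status except Running is terminal: the run stops producing updates.
TERMINAL = {
    RunStatus.COMPLETED, RunStatus.PARTIALLY_FAILED,
    RunStatus.FAILED, RunStatus.SKIPPED, RunStatus.CANCELLED,
}

def is_terminal(status: RunStatus) -> bool:
    """True once a run can no longer change state."""
    return status in TERMINAL
```

A polling loop, for example, would stop as soon as `is_terminal` returns true.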

Run details

Clicking on a run in the history list opens the run detail view, which shows:

Summary

  • Trigger: how the run was started (scheduled, manual, or pipeline)
  • Start time: when the extraction began
  • End time: when the extraction finished (or failed)
  • Duration: total elapsed time
  • Total rows: sum of rows extracted across all tables
  • Tables: count of tables attempted, succeeded, and failed
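The summary fields are straightforward aggregates of the per-table results. A hypothetical sketch of how they relate (field names are illustrative, not Rime's schema):

```python
from datetime import datetime, timedelta

def summarize(tables: list[dict], start: datetime, end: datetime) -> dict:
    """Roll per-table results up into the run-level summary fields."""
    return {
        "duration": end - start,                       # total elapsed time
        "total_rows": sum(t["rows"] for t in tables),  # rows across all tables
        "attempted": len(tables),
        "succeeded": sum(t["status"] == "completed" for t in tables),
        "failed": sum(t["status"] == "failed" for t in tables),
    }

tables = [
    {"name": "orders",    "status": "completed", "rows": 1200},
    {"name": "customers", "status": "completed", "rows": 300},
    {"name": "events",    "status": "failed",    "rows": 0},
]
summary = summarize(tables, datetime(2024, 5, 1, 2, 0), datetime(2024, 5, 1, 2, 12))
# 1500 total rows; 3 attempted, 2 succeeded, 1 failed; 12-minute duration
```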

Per-table status

Each table within a run has its own row in the detail view:

  • Table name: name of the source table or endpoint
  • Status: completed, failed, or running
  • Rows extracted: number of rows read from the source
  • Duration: time spent extracting this table
  • Error: error message (if the table failed)

Tables are listed in the order they were extracted. Failed tables display an expandable error section with the full error message and stack trace.

Error details

When a table or run fails, the error details help you diagnose the issue. Common error categories:

Connection failures

  • Connection refused: the source system is unreachable. Check that the host is correct, the service is running, and firewall rules allow access from Rime’s network.
  • Authentication failed: credentials are invalid or expired. For OAuth-based connectors, the token may need to be re-authorized.
  • SSL handshake failed: the SSL/TLS configuration does not match between Rime and the source. Review the SSL settings in the connector configuration.

Timeout errors

  • Connection timeout: Rime could not establish a connection within the timeout window (default: 30 seconds). This usually indicates network issues or an unresponsive source.
  • Query timeout: a database query took too long to return results. This can happen with very large tables on undersized database servers. Consider extracting the table during off-peak hours or adding an index on the source.

Data type mismatches

  • Type conversion error: a value in the source could not be converted to the expected Arrow type. For example, a column typed as INTEGER in the schema contains a non-numeric value. This usually indicates a source schema change. Refresh the schema to detect the change.
  • Null in non-nullable column: the source returned a null value for a column marked as non-nullable in the discovered schema. This is logged as a warning and the null is passed through.
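Note the asymmetry: a type mismatch fails the table, while an unexpected null only produces a warning. A simplified sketch of that validation logic — not Rime's actual implementation, just the rule as described above:

```python
import logging

logger = logging.getLogger("extractor")

def validate_value(value, expected_type: type, nullable: bool, column: str):
    """Check one value against the discovered schema: raise on a type
    mismatch, but only warn (and pass through) on an unexpected null."""
    if value is None:
        if not nullable:
            # Logged as a warning; the null still flows through to the load.
            logger.warning("null in non-nullable column %s", column)
        return value
    if not isinstance(value, expected_type):
        # Hard failure: usually indicates a source schema change.
        raise TypeError(
            f"column {column}: {value!r} is not {expected_type.__name__}"
        )
    return value
```

So `validate_value(None, int, nullable=False, column="order_id")` returns `None` with a warning, while a non-numeric value in an integer column raises.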

Rate limit errors

  • 429 Too Many Requests: the source API rejected requests due to rate limiting. Rime retries with backoff, but if rate limits are very restrictive, the extraction may fail after exhausting retries. Reduce the rate limit configuration or schedule syncs during off-peak hours for the source API.
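Retry with backoff on 429s follows a standard pattern; a generic sketch where the retry count and delays are illustrative, not Rime's actual values:

```python
import time

class RateLimited(Exception):
    """Raised when the source API returns 429 Too Many Requests."""

def fetch_with_backoff(fetch, max_retries: int = 5, base_delay: float = 1.0):
    """Call fetch(), retrying 429s with exponential backoff.

    Re-raises once retries are exhausted, mirroring how an extraction
    can still fail under very restrictive rate limits.
    """
    for attempt in range(max_retries + 1):
        try:
            return fetch()
        except RateLimited:
            if attempt == max_retries:
                raise  # retries exhausted; the table fails
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```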

Infrastructure errors

  • S3 upload failed: Rime could not upload the Parquet file to S3. Check the S3 bucket configuration and IAM permissions in your infrastructure settings.
  • Snowpipe error: the Parquet file was uploaded to S3 but Snowpipe failed to load it. This can indicate a schema mismatch between the Parquet file and the Snowflake table definition. Check the Snowpipe copy history in the Snowflake console for details.
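If you route alerts or build dashboards on top of run history, it can help to bucket error messages into the categories above. A naive keyword-based sketch — the categories mirror this page, but the matching rules are purely illustrative:

```python
# Keyword heuristics for the error categories described above.
CATEGORY_KEYWORDS = {
    "connection":     ["connection refused", "authentication failed", "ssl handshake"],
    "timeout":        ["connection timeout", "query timeout"],
    "type_mismatch":  ["type conversion", "non-nullable"],
    "rate_limit":     ["429", "too many requests"],
    "infrastructure": ["s3 upload", "snowpipe"],
}

def categorize_error(message: str) -> str:
    """Return the first category whose keywords appear in the message."""
    text = message.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return "unknown"
```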

Row count tracking

The run history tracks row counts at multiple levels:

  • Per-table per-run: the number of rows extracted from each table in each run
  • Per-run total: the sum of rows across all tables in a run
  • Trend: the connector detail page shows a row count chart over time, so you can spot patterns and anomalies

Volume anomaly detection

If your project has monitoring configured with volume anomaly alerts, Rime compares each run’s row count against the rolling 30-day average. If the count deviates by more than 50% (configurable), an alert is raised. This catches issues like:

  • A source table was truncated or dropped
  • An upstream process stopped writing data
  • A filter in the source query is excluding more rows than expected
  • A data load produced duplicates, inflating the count
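The check itself reduces to a comparison against the rolling average. A sketch of the rule as described above, where the 50% threshold is the configurable default:

```python
def is_volume_anomaly(current_rows: int, history: list[int],
                      threshold: float = 0.5) -> bool:
    """Flag a run whose row count deviates from the rolling average by
    more than the threshold (default 50%).

    `history` holds row counts from the last 30 days of runs.
    """
    if not history:
        return False  # no baseline yet, nothing to compare against
    average = sum(history) / len(history)
    if average == 0:
        return current_rows > 0  # any rows after a flat zero baseline
    return abs(current_rows - average) / average > threshold
```

For example, a run extracting 10 rows against a 100-row average deviates by 90% and is flagged; 120 rows deviates by only 20% and is not.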

Filtering and searching run history

The run history table supports:

  • Status: show only runs with a specific status (completed, failed, partially failed, etc.)
  • Date range: filter runs by start time within a date range
  • Trigger type: filter by how the run was triggered (scheduled, manual, pipeline)
  • Connector: on the project-level extraction page, filter by a specific connector
  • Search: search by table name to find runs that extracted a specific table

Filters can be combined. For example, you can view all failed runs for a specific connector in the last 7 days.
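Combined filters are just intersections. A toy sketch of that example over a list of run records (the field names are illustrative, not an exported schema):

```python
from datetime import datetime, timedelta

def filter_runs(runs, status=None, connector=None, since=None):
    """Apply optional filters; None means 'no filter on this field'."""
    return [
        r for r in runs
        if (status is None or r["status"] == status)
        and (connector is None or r["connector"] == connector)
        and (since is None or r["start_time"] >= since)
    ]

# All failed runs for one connector in the last 7 days:
now = datetime(2024, 5, 8)
runs = [
    {"status": "failed",    "connector": "postgres_prod", "start_time": now - timedelta(days=2)},
    {"status": "completed", "connector": "postgres_prod", "start_time": now - timedelta(days=1)},
    {"status": "failed",    "connector": "postgres_prod", "start_time": now - timedelta(days=30)},
]
recent_failures = filter_runs(runs, status="failed", connector="postgres_prod",
                              since=now - timedelta(days=7))
# only the 2-day-old failure matches
```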

Run retention

Run history records are retained based on your licensing tier:

  • Free / Trial: 7 days
  • Small Business: 30 days
  • Business: 90 days
  • Business Critical: 1 year

After the retention period, run records are deleted. Row count aggregates (daily totals) are retained longer for trend analysis.
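If you archive run records externally, the deletion cutoff is simple date arithmetic. A sketch using the tiers listed above (tier keys are illustrative):

```python
from datetime import datetime, timedelta

# Run-history retention per licensing tier, as listed above.
RETENTION = {
    "free_trial": timedelta(days=7),
    "small_business": timedelta(days=30),
    "business": timedelta(days=90),
    "business_critical": timedelta(days=365),
}

def retention_cutoff(tier: str, now: datetime) -> datetime:
    """Runs that started before this instant are eligible for deletion."""
    return now - RETENTION[tier]
```

Archiving anything older than `retention_cutoff(tier, now)` before it ages out preserves the full history beyond the tier's window.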

Reacting to failures

When a run fails or partially fails:

  1. Check the error details on the run detail page to understand the cause.
  2. If the issue is with the source system (credentials expired, table dropped, network issue), fix the source-side problem first.
  3. If the issue is with the connector configuration (wrong SSL mode, incorrect schema), update the connector settings.
  4. Trigger a manual sync to verify the fix. See on-demand sync.
  5. If the connector is scheduled, it will continue to retry on its regular schedule. A successful run clears the error state.
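Steps 2 and 3 can be turned into a first-pass triage hint, since each error type usually points at either the source system or the connector configuration. A toy sketch — the mapping is illustrative, not a Rime feature:

```python
# Where the fix typically lives, following steps 2 and 3 above.
FIX_LOCATION = {
    "authentication failed": "source",    # re-authorize expired credentials
    "connection refused":    "source",    # host down or firewall rules
    "ssl handshake failed":  "connector", # wrong SSL mode in the config
    "type conversion error": "connector", # refresh the discovered schema
}

def triage(error: str) -> str:
    """Suggest whether to start with the source or the connector config."""
    return FIX_LOCATION.get(error, "investigate")
```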

For persistent failures, check the connector lifecycle documentation. A connector in the “errored” state continues running on schedule — the error may resolve itself if the underlying issue is transient.

Next steps