Administration & Management — User Guide

Audience: Workspace admins and users who need to monitor job history, track AI usage costs, configure runtime environments, and manage team settings.


Overview

A workspace is the primary boundary for metadata: connections, repositories, jobs, Agent Intelligence, Knowledge Base, billing hooks, and API keys. APIs scope all data by workspace, so always confirm the active workspace in the selector before running pipelines or sharing secrets. Invited members share the same workspace subject to RBAC; see Team Settings below.

Dagen provides five administration views:

| View | Path | Access |
| --- | --- | --- |
| Job History | /job-history | All users |
| Usage Analytics | /chargeback | All users (admin for full data) |
| Runtime Environments | /runtime-environments | All users |
| Team Settings | /team-settings | Admins only |
| Model Settings | /model-settings | Admins only |

Workspace sharing and isolation

  • A workspace is the tenant boundary for metadata: database connections, repos, jobs, Agent Intelligence, Knowledge Base, billing hooks, and API keys.
  • APIs enforce workspace_id scoping—users cannot query another workspace by ID.
  • Share workspace (owners): invite by email, assign role, optional message; invitees use Accept Invite. Members share assets subject to RBAC.
  • Always select the correct workspace in the workspace switcher before connecting secrets or starting long jobs.
  • Do not upload secrets into the Knowledge Base—treat uploads as retrievable by agents in that workspace.

Team administration lives under Team Settings (/team-settings) for invites and membership.
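The workspace_id scoping rule can be pictured as a filter the server applies to every query; the record shape and function below are illustrative only, not Dagen's actual API or schema:

```python
from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    workspace_id: str
    status: str

def list_jobs(all_jobs: list[Job], active_workspace_id: str) -> list[Job]:
    """Illustrative server-side scoping: results never cross the workspace
    boundary, even if a caller knows another workspace's ID."""
    return [j for j in all_jobs if j.workspace_id == active_workspace_id]

jobs = [
    Job("j-1", "ws-a", "success"),
    Job("j-2", "ws-b", "failed"),
    Job("j-3", "ws-a", "running"),
]
print([j.job_id for j in list_jobs(jobs, "ws-a")])  # ['j-1', 'j-3']
```

Querying with another workspace's ID simply returns nothing, which is why picking the right workspace in the switcher matters before you start work.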


Where runs surface (cross-feature)

| Area | Route / action | Logs & recovery |
| --- | --- | --- |
| dbt / Dataform / Data Model | /pipelines (Run menu, schedules, execution log) | Fix with Agent on failure |
| Spark | /spark-pipelines (Run Job) | Platform job UI + Dagen messages |
| Data Ingestion | /airbyte-ingestion | Sync stats on cards; AI troubleshooting |
| Workflows | /workflow-orchestrator (View Runs, Dashboard) | Channel notifications (e.g. Slack) |

Runtime engine: many failures are environment issues, such as a wrong Runtime Environments configuration (see Part 3 below), expired credentials, blocked network egress, or insufficient cluster capacity. Re-run Test on the runtime card; confirm the default ingestion runtime in the ingestion header; for Spark, validate the cluster, region, and namespace fields.
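A first-pass triage of a failed runtime test often comes down to keyword matching on the error text. The sketch below shows that idea; the keywords and categories are examples, not Dagen's actual diagnostics:

```python
def classify_runtime_failure(error_message: str) -> str:
    """Illustrative triage of common runtime-test failures by keyword.
    These heuristics are examples only; always read the full error."""
    msg = error_message.lower()
    if "credential" in msg or "unauthorized" in msg or "expired" in msg:
        return "credentials: re-enter or rotate the runtime's credentials"
    if "timeout" in msg or "unreachable" in msg or "connection refused" in msg:
        return "network: check endpoint, firewall, and egress from Dagen"
    if "quota" in msg or "capacity" in msg:
        return "capacity: check cluster quota and autoscaling limits"
    return "unknown: re-run Test and attach the full error in AI Chat"

print(classify_runtime_failure("401 Unauthorized: token expired"))
```

When the category is unclear, pasting the full error into AI Chat (next paragraph) is usually faster than guessing.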

Agent-assisted debugging: in AI Chat, attach the relevant pipeline, database, or workflow context and paste error excerpts; prefer Guided or Semi execution mode until the fix is validated.


Part 1 — Job History (Activity)

The Job History page shows a log of all agent interactions in your workspace.

Browsing Jobs

The main table has these columns:

| Column | Description |
| --- | --- |
| Job ID | Unique identifier |
| Agent Name | Which agent handled the job |
| Start Time | When the job began |
| End Time | When the job completed |
| Status | Current state (success, failed, running, etc.) |
| Actions | View details button |

Use pagination controls at the bottom to navigate through results.
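The pagination arithmetic is the usual ceiling division; the page size here is illustrative, use whatever the table is configured with:

```python
import math

def page_count(total_rows: int, page_size: int) -> int:
    """Number of pages shown by the pagination controls, e.g.
    101 rows at 25 per page means 5 pages (the last holds 1 row)."""
    return max(1, math.ceil(total_rows / page_size))

print(page_count(101, 25))  # 5
```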

Viewing Job Details

  1. Click View Details on any row.
  2. The detail dialog shows:
    • Status chip.
    • Start Time and End Time.
    • Final Summary of the job outcome.
  3. Below the summary, the Interaction History section lists every step:
| Step Type | What It Shows |
| --- | --- |
| User Message | The user's input |
| Assistant Response | The agent's reply |
| Tool Call | Tool name, status, and parameters |
| Tool Result | Tool name, status, and returned result |
| System Information | Internal system events |

Each tool step also shows its duration in milliseconds.

If no steps were recorded, you see: "No detailed steps recorded for this job."


Part 2 — Usage Analytics (Chargeback)

The Usage Analytics page tracks AI model usage, costs, and performance across your workspace.

Filtering

Use the filter bar at the top:

| Filter | Options |
| --- | --- |
| Workspace | All Workspaces, or select a specific one |
| Time Period | Last 7 Days, Last 30 Days, Last 90 Days, Custom Range |
| Agent | All Agents, or select a specific one |

When Custom Range is selected, date pickers for Start Date and End Date appear.
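Presumably the preset periods resolve to a rolling date window ending today; the sketch below assumes an inclusive range, which may differ from the actual implementation:

```python
from datetime import date, timedelta

def resolve_period(period: str, today: date) -> tuple[date, date]:
    """Illustrative mapping of preset Time Period filters to an inclusive
    (start, end) date range; Custom Range comes from the pickers instead."""
    days = {"Last 7 Days": 7, "Last 30 Days": 30, "Last 90 Days": 90}[period]
    return today - timedelta(days=days - 1), today

start, end = resolve_period("Last 7 Days", date(2024, 5, 10))
print(start, end)  # 2024-05-04 2024-05-10
```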

Summary Cards

Four cards at the top provide high-level metrics:

| Card | Primary Value | Secondary Value |
| --- | --- | --- |
| Total Cost | Dollar amount | Number of API calls |
| Tokens Used | Total token count | Input + output breakdown |
| Avg Response Time | Milliseconds | Success rate percentage |
| Cost per Token | Dollar amount | "Average efficiency" |
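The Cost per Token card is presumably just total spend divided by total tokens; this sketch assumes that definition:

```python
def cost_per_token(total_cost_usd: float, total_tokens: int) -> float:
    """Assumed formula for the Cost per Token card: total spend over
    total tokens. Returns 0.0 for a workspace with no usage yet."""
    return total_cost_usd / total_tokens if total_tokens else 0.0

# e.g. $12.50 across 5 million tokens is $0.0000025 per token
print(cost_per_token(12.50, 5_000_000))
```

A lower value means the workspace is getting more tokens per dollar, which is why the card labels it "Average efficiency".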

Charts

Usage Timeline

A line chart showing cost and token usage over time. Switch the interval with the selector: Hourly, Daily, Weekly, or Monthly.
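Conceptually, each interval setting buckets individual calls by a time key and sums within the bucket. A Daily sketch (data shape is illustrative; Hourly/Weekly/Monthly just change the key):

```python
from collections import defaultdict
from datetime import datetime

def bucket_daily(calls: list[tuple[str, float]]) -> dict[str, float]:
    """Illustrative Daily aggregation for the Usage Timeline:
    sum cost per calendar day from (ISO timestamp, cost) pairs."""
    totals: dict[str, float] = defaultdict(float)
    for iso_ts, cost in calls:
        day = datetime.fromisoformat(iso_ts).date().isoformat()
        totals[day] += cost
    return dict(totals)

print(bucket_daily([("2024-05-10T09:00:00", 0.02),
                    ("2024-05-10T17:30:00", 0.03),
                    ("2024-05-11T08:00:00", 0.01)]))
```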

Usage by Model

A doughnut chart showing cost distribution across AI models.

Model Usage Statistics Table

| Column | Description |
| --- | --- |
| Provider | AI provider name |
| Model | Model identifier |
| Calls | Number of API calls |
| Tokens | Total tokens consumed |
| Cost | Total cost |
| Avg Response Time | Average latency |

Recent Usage Details Table

A paginated table of individual API calls:

| Column | Description |
| --- | --- |
| Timestamp | When the call was made |
| Agent | Which agent made the call |
| Model | Provider and model ID |
| Tokens | Total (with input + output breakdown) |
| Cost | Cost of the call |
| Time | Response time in milliseconds |
| Status | Success or Error |

Navigate pages with the Previous / Next buttons ("Page X of Y").

Budget Alerts

The Budget Alerts section at the bottom shows any configured cost thresholds that have been triggered, with alert type, message, and timestamp.
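A budget alert is, at heart, a threshold comparison against current spend. The sketch below assumes that semantics; the alert names and data shape are invented for illustration:

```python
def triggered_alerts(total_cost: float, thresholds: dict[str, float]) -> list[str]:
    """Illustrative check: an alert fires once spend meets or exceeds
    its configured limit. Names and dict shape are examples only."""
    return [name for name, limit in sorted(thresholds.items())
            if total_cost >= limit]

print(triggered_alerts(85.0, {"warning-50": 50.0, "critical-100": 100.0}))
# ['warning-50']
```

This also explains the "alert fires unexpectedly" row in Troubleshooting: a limit set below normal spend will trigger on every period.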


Part 3 — Runtime Environments

The Runtime Environments page (/runtime-environments) configures where your compute workloads run.

Runtime Categories

The page organizes runtimes into three categories, each with its own tab:

| Category | Purpose | Providers |
| --- | --- | --- |
| Data Ingestion | Kubernetes clusters for ingestion jobs | GCP/GKE, AWS/EKS, Azure/AKS, On-Premise K8s |
| Spark / Processing | Environments for Spark notebooks and jobs | Databricks, Databricks Serverless, Dataproc, Dataproc Serverless, EMR Serverless, Synapse Serverless, Snowflake, Vertex AI |
| Python Execution | Environments for Python scripts | Docker, Kubernetes, Cloud Run, Lambda, Fargate |

Switch between All Runtimes and category-specific tabs. Each tab shows a count chip.

Adding a Runtime

  1. Click Add Runtime in the header.
  2. Select the runtime type from the menu:
    • Data Ingestion Runtime — for ingestion jobs (CDC, batch sync).
    • Spark / Data Processing — for Databricks, Snowflake, Dataproc, etc.
    • Python Execution — for Docker, Kubernetes, Cloud Run, Lambda.
  3. Fill in the form:
    • Runtime Name (required).
    • Provider-specific configuration fields (host, cluster, credentials, etc.).
    • Optionally check Set as default runtime for this workspace.
  4. Click Create Runtime.

Testing a Runtime

Click Test on any runtime card. A connection test runs and the result appears in a snackbar (success or error with details).

Setting the Default

Click Set Default on a runtime card to make it the workspace default. The default runtime is marked with a Default chip.

Editing and Deleting

  • Click Edit to modify a runtime's configuration.
  • Click the Delete icon and confirm: "Are you sure you want to delete {name}? This action cannot be undone."

Setup Guide

Click Setup Guide in the header for provider-specific setup instructions. Tabs cover Data Ingestion, Serverless Spark, Databricks, Snowflake, and Google Cloud.


Part 4 — Team Settings (Admin Only)

The Team Settings page (/team-settings) is restricted to workspace admins. Non-admin users see: "Access Restricted — You do not have admin privileges to manage team settings, users, or billing configuration."

Sections

| Section | Component | Purpose |
| --- | --- | --- |
| Onboarding | Invite User Form | Invite new users to the workspace |
| Team Members | User Search Manager | Browse, search, and manage workspace members |
| Billing & Credits | User Billing Manager | View and manage billing information and credits |
| Platform Configuration | Billing Configuration | Configure billing rules and thresholds |
| Sample Data | Sample BigQuery Settings | Configure sample datasets for onboarding |

Inviting Users

Use the onboarding section to invite new team members by email. They receive an invitation to join your workspace.

Managing Members

The team members section lets you search for users, view their roles, and update permissions.


Part 5 — Model Settings (Admin Only)

For detailed model configuration, see the dedicated Model Settings page.


Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| Job History table is empty | No agent interactions have occurred yet | Use the AI Chat to interact with an agent |
| Usage Analytics shows "Loading usage analytics..." | Data is being fetched | Wait a moment; for large workspaces this may take a few seconds |
| Cost shows $0.00 | Model pricing not configured | Go to Model Settings and set input/output pricing for your models |
| "Access Restricted" on Team Settings | You are not a workspace admin | Ask an admin to grant you admin privileges |
| Runtime test fails | Incorrect credentials or unreachable endpoint | Verify the cluster endpoint, namespace, and credentials; ensure network access from Dagen |
| "No Ingestion Runtimes Configured" | No ingestion runtime added | Click Add Ingestion Runtime and configure a Kubernetes cluster |
| Model shows "API Key: Not Configured" | API key was not provided | Edit the model and enter the API key |
| Budget alert fires unexpectedly | Threshold set too low for current usage | Review and adjust budget thresholds in Usage Analytics settings |
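On the "Cost shows $0.00" symptom: per-call cost is typically derived from the per-model input and output token prices set in Model Settings. The per-1K-token convention and prices below are assumptions for illustration, not Dagen's actual fields:

```python
def call_cost(input_tokens: int, output_tokens: int,
              input_price_per_1k: float, output_price_per_1k: float) -> float:
    """Assumed cost formula: tokens priced per 1,000 for input and output.
    If both prices are left at 0, every call reports $0.00, which is
    exactly the symptom in the table above."""
    return (input_tokens / 1000 * input_price_per_1k
            + output_tokens / 1000 * output_price_per_1k)

# e.g. 1,200 input + 400 output tokens at $0.01 / $0.03 per 1K
print(round(call_cost(1200, 400, 0.01, 0.03), 6))  # 0.024
```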