[dagster-deltalake,dagster-deltalake-polars] BREAKING CHANGE: dagster-deltalake and dagster-deltalake-polars now require deltalake>=1.0.0; deltalake<1.0.0 is no longer supported. End-user APIs remain the same for both libraries.
[dagster-databricks] Spark Python and Python Wheel tasks are now supported in PipesDatabricksServerlessClient.
[dagster-dbt] dagster-dbt project prepare-and-package --components . will no longer attempt to load components outside of DbtProjectComponent, preventing errors when attempting to run this command in environments that do not have the necessary env vars set for other components.
[dg] Added the dg api secret list and dg api secret get commands.
Fixed a bug in the backfill daemon where an asset backfill with CANCELING or FAILING status could become permanently stuck in CANCELING or FAILING if the partitions definitions of the assets changed.
Fixed an issue introduced in the 1.11.12 release where auto-complete in the Launchpad for nested fields stopped working.
Fixed an issue where backfills would fail if a TimeWindowPartitionsDefinition's start date was changed in the middle of the backfill, even if it did not remove any of the targeted partitions.
[ui] Fixed the link to "View asset lineage" on runs that don't specify an asset selection.
[ui] Allow searching across code locations with * wildcard in selection inputs for jobs and automations.
The anthropic, mcp, and claude-code-sdk dependencies of dagster-dg-cli are now under a separate ai extra, allowing dagster-dg-cli to be installed without them.
Added AutomationCondition.all_new_updates_have_run_tags and AutomationCondition.any_new_update_has_run_tags, which allow automation conditions to be filtered to partitions that have been materialized since the last tick by runs with certain tags. These conditions can be used to require or prevent certain run tags from triggering downstream declarative automation conditions. They are similar to AutomationCondition.executed_with_tags, but consider all new runs since the most recent tick instead of only the latest run.
Fixed a bug which would cause steps downstream of an asset with skippable=True and a blocking asset check to execute as long as the asset check output was produced, even if the asset output was skipped.
When a backfill fails, it will now cancel all of its in-progress runs before terminating.
Fixed an issue that would cause trailing whitespace to be added to env vars using dot notation ({{ env.FOO }}) when listing the env vars used by a component. (Thanks, @edgarrmondragon!)
Fixed an issue that would cause errors when using multi-to-single partition mappings with DbIOManagers.
[ui] Fixed an issue with the "Report materialization" dialog for non-partitioned assets.
[ui] Typing large YAML documents in the launchpad when default config is present is now more performant.
[ui] Fixed an issue where setting a FloatMetadataValue to float('inf') or float('-inf') would cause an error when loading that metadata over graphql.
[ui] The "Clear" button in the dimension partition text input for multi-partitioned assets now clears invalid selections as expected.
[dagster-dbt] Fixed an issue with the DbtCloudWorkspaceClient that would cause errors when calling trigger_job_run with no steps_override parameter.
Added an inline-component command to the publicly available scaffold commands in the Dagster CLI.
Added a new require_upstream_step_success config param to all executors. If {"step_dependency_config": {"require_upstream_step_success": False}} is set, this will allow downstream steps to execute immediately after all required upstream outputs have finished, even if the upstream step has not completed in its entirety yet. This can be useful particularly in cases where there are large multi-assets with downstream assets that depend on only a subset of the assets in the upstream step.
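The opt-out described above can be sketched as run config. This is a minimal illustration assuming the config block lives under the executor's execution config, using exactly the keys named in the release note:

```python
# Hedged sketch: run config that lets downstream steps start as soon as all
# required upstream outputs exist, even if the upstream step is still running.
# The "execution"/"config" placement is an assumption; the
# "step_dependency_config"/"require_upstream_step_success" keys come from the
# release note.
run_config = {
    "execution": {
        "config": {
            "step_dependency_config": {
                "require_upstream_step_success": False,
            }
        }
    }
}
```

This is most useful for large multi-assets where a downstream asset depends on only a subset of the upstream step's outputs.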
The logsForRun resolvers and eventConnection resolvers in the Dagster GraphQL API will now apply a default limit of 1000 to the number of logs returned from a single graphql query. The cursor field in the response can be used to continue iterating through the logs for a given run.
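Pagination with the new default limit can be sketched as a GraphQL query. The logsForRun field and cursor behavior come from the release note; the argument names (afterCursor, limit) and the EventConnection fields shown are assumptions about the schema:

```graphql
# Hedged sketch: page through run logs 1000 at a time, passing the cursor
# from each response back in as $cursor until hasMore is false.
query PagedLogs($runId: ID!, $cursor: String) {
  logsForRun(runId: $runId, afterCursor: $cursor, limit: 1000) {
    ... on EventConnection {
      events {
        __typename
      }
      cursor
      hasMore
    }
  }
}
```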
[dagster-airbyte] @airbyte_assets and AirbyteWorkspaceComponent (previously AirbyteCloudWorkspaceComponent) now support Airbyte OSS and Enterprise.
Fixed an issue where the dagster_dg_cli package failed to import when using Python 3.9.
Fixed an issue with AutomationCondition.eager() that could cause runs for materializable assets to be launched at the same time as an upstream observable source asset that had an automation condition, even if the upstream observation would not result in a new data version.
Fixed an issue which could, in some circumstances, cause errors during Declarative Automation evaluation after a dynamic partition was deleted.
Fixed an issue that could cause confusing errors when attempting to supply attributes configuration to Component subclasses that did not inherit from Resolvable.
[ui] Fixed an issue where the "Report materialization events" dialog for partitioned assets only worked if the partition was failed or missing.
[ui] Fixed a browser crash which could occur in the global asset graph.
[ui] Fixed a bug with the sensor preview behavior that would cause run requests containing run_keys that had already been submitted to show up in the preview result.
[dagster-dbt] Fixed an issue that would cause the DbtCloudWorkspace to error before yielding asset events if the associated dbt Cloud run failed. Now, it will raise the error after all relevant asset events have been produced.
[dagster-dbt] Added the dbt-core dependency back to dagster-dbt as it is still required for the dbt Cloud integration. If both dbt-core and dbt Fusion are installed, dagster-dbt will still prefer using dbt Fusion by default.
Launching a backfill of a non-subsettable multi-asset without including every asset will now raise a clear error at backfill submission time, instead of failing with a confusing error after the backfill has started.
Fixed an issue where passing in an empty list to the assetKeys argument of the assetsOrError field in the GraphQL API would return every asset instead of an empty list of assets.
An exclusions parameter was added to time window partition definitions to support custom calendars.
The dagster library now supports protobuf==6.x.
[dg] dg scaffold defs --help now shows descriptions for subcommands.
[dg] A new dg check toml command has been added to validate your TOML configuration files.
[dagster-databricks] The DatabricksAssetBundleComponent has been added in preview. Databricks tasks can now be represented as assets and submitted via Dagster.
[dagster-dbt] The DbtProjectComponent now takes an optional cli_args configuration to allow customizing the command that is run when your assets are executed.
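A hedged sketch of a component YAML using the new option; the cli_args name comes from the release note, while the component type path, the project attribute, and the template variable are assumptions for illustration:

```yaml
# defs.yaml (illustrative)
type: dagster_dbt.DbtProjectComponent
attributes:
  project: '{{ project_root }}/my_dbt_project'
  # cli_args customizes the dbt command run when these assets execute
  cli_args: ["build", "--fail-fast"]
```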
[dagster-dbt] The polling interval and timeout used for runs triggered with the DbtCloudWorkspace resource can now be customized with the DAGSTER_DBT_CLOUD_POLL_INTERVAL and DAGSTER_DBT_CLOUD_POLL_TIMEOUT environment variables.
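For example, the variables can be set before Dagster loads; the variable names come from the release note, and the values and their interpretation as seconds are assumptions:

```python
import os

# Hedged sketch: tune dbt Cloud run polling via environment variables.
os.environ["DAGSTER_DBT_CLOUD_POLL_INTERVAL"] = "10"    # assumed: seconds between polls
os.environ["DAGSTER_DBT_CLOUD_POLL_TIMEOUT"] = "3600"   # assumed: give up after this many seconds
```

In deployed environments these would typically be set in the container or process environment rather than in code.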
[ui] Added the ability to filter to failed/missing partitions in the asset report events dialog.
[ui] A tree view has been added in the Global Asset Lineage.
[telemetry] Telemetry disclaimer now prints to stderr.
Fixed an issue that would require config provided to backfills to contain config for all assets in the code location rather than just the selected ones.
dg will now report multiple detected errors in a configuration file instead of failing on the first detected error.
It is now possible to supply run config when launching an asset backfill.
Updated the root URL to display the Overview/Timeline view for locations with schedules/automations, but no jobs (thanks @dschafer!)
Added tzdata as a dependency to dagster, to ensure that declaring timezones like US/Central work in all environments.
[dagster-dg-cli] Updated scaffolded file names to handle consecutive upper case letters (ACMEDatabricksJobComponent → acme_databricks_job_component.py not a_c_m_e_databricks_job_component.py)
[dagster-dg-cli] Validating requirements.env is now opt-in for dg check yaml.