---
title: "Release highlights: 1.18"
description: Highlights of dlt 1.18 - named destinations resolved from configuration, Databricks Iceberg and partitioning support, graceful pipeline shutdowns, custom resource metrics, and HTTP-based filesystem sources.
keywords: [dlt, data-pipelines, etl, databricks, iceberg, release-notes, data-engineering]
---

# Release highlights: 1.18

## New: Named destinations and the `@dlt.destination` factory

You can now reference destinations **by name** directly from your workspace configuration. This makes it easy to switch between development, staging, and production environments without changing pipeline code.

If your `.dlt/config.toml` contains:

```toml
[destination.custom_name]
destination_type = "duckdb"
```

You can activate it like this:

```py
import dlt

@dlt.resource
def my_data():
    yield {"id": 1}

pipeline = dlt.pipeline(destination="custom_name")

pipeline.run(my_data)
```

The destination type and all related settings are resolved automatically from the configuration.
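For example, a hypothetical setup might define one named destination per environment (the names and destination types below are purely illustrative):

```toml
[destination.dev_wh]
destination_type = "duckdb"

[destination.prod_wh]
destination_type = "databricks"
```

Switching `dlt.pipeline(destination="dev_wh")` to `destination="prod_wh"` then retargets the pipeline without touching its code.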

---

## Databricks destination with Iceberg and partitioning support

The Databricks destination now supports the **Iceberg table format**, along with full support for **partitioning** and **clustering** hints.

Previously, these hints were silently ignored. With dlt 1.18, they are now **enforced**, and invalid combinations (such as mixing partitioning with clustering) are rejected early to prevent incorrect table layouts.

Example:

```py
import dlt
from dlt.destinations.adapters import databricks_adapter

pipeline = dlt.pipeline(pipeline_name="to_databricks", destination="databricks")

@dlt.resource
def my_data():
    for i in range(10):
        yield {
            "event_id": i,
            "year": 2024,
            "month": (i % 3) + 1,
            "customer_id": i % 5,
        }

# Option A: enable clustering
# (cannot be combined with partitioning -- see note above)
# databricks_adapter(my_data, cluster="AUTO")

# Option B: enable partitioning
databricks_adapter(my_data, partition=["year", "month"])

# Use the Iceberg table format
databricks_adapter(my_data, table_format="ICEBERG")

pipeline.run(my_data)
```

[Read more →](../dlt-ecosystem/destinations/databricks#advanced-examples)

---

## Graceful shutdowns for pipelines

Signal handling in pipelines has been fully reworked. Pipelines now **shut down gracefully** when interrupted, instead of raising exceptions.

- The first `Ctrl+C` triggers a clean shutdown
- A second interrupt forces an immediate exit

This applies to both **process-based** and **threaded** loaders and makes running long or production workloads much safer.
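The two-stage interrupt behavior can be sketched in plain Python. This is a generic pattern, not dlt's actual implementation; all names are illustrative:

```python
import signal


class GracefulShutdown:
    """Two-stage SIGINT handling: the first interrupt requests a clean stop,
    a second one raises KeyboardInterrupt for an immediate exit."""

    def __init__(self):
        self.stop_requested = False

    def install(self):
        signal.signal(signal.SIGINT, self._handle)

    def _handle(self, signum, frame):
        if self.stop_requested:
            raise KeyboardInterrupt  # second Ctrl+C: exit immediately
        self.stop_requested = True   # first Ctrl+C: finish current work


def process(batches, shutdown):
    """Process batches, stopping at a safe boundary once a stop is requested."""
    completed = []
    for batch in batches:
        if shutdown.stop_requested:
            break  # stop between batches, never mid-batch
        completed.append([x * 2 for x in batch])
    return completed
```

dlt applies this idea across its worker pools, so a single interrupt lets in-flight load jobs finish before the process exits.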

[Read more →](../running-in-production/running#allow-a-graceful-shutdown)

---

## Collect custom metrics with `add_metrics()`

You can now attach **custom metric collectors** directly to resources. This allows you to track things like batch counts, API pages, or any other extraction-level signal without modifying the data itself.

Example: counting how many batches a resource produces:

```py
import dlt

def batch_counter(items, meta, metrics):
    metrics["batch_count"] = metrics.get("batch_count", 0) + 1

@dlt.resource
def purchases():
    for i in range(3):
        yield [{"id": i}]

purchases = purchases.add_metrics(batch_counter)

pipeline = dlt.pipeline("metrics_demo", destination="duckdb")
load_info = pipeline.run(purchases)

trace = pipeline.last_trace
load_id = load_info.loads_ids[0]
resource_metrics = trace.last_extract_info.metrics[load_id][0]["resource_metrics"]["purchases"]

print("Custom metrics:", resource_metrics.custom_metrics)
```

Output:

```text
Custom metrics: {'batch_count': 3}
```

Custom metrics are stored alongside performance and transform statistics under `resource_metrics` and are available in traces and dashboards.

---

## HTTP-based filesystem resources

The `filesystem` source can now read files directly from **HTTP URLs**, in addition to local paths and object storage such as S3.

This makes it easy to load publicly hosted datasets without setting up extra infrastructure.

Example:

```py
import dlt
from dlt.sources.filesystem import filesystem, read_csv

pipeline = dlt.pipeline("http_files_demo", destination="duckdb")
fs = filesystem(bucket_url="https://example.com/data/", file_glob="*.csv")
pipeline.run(fs | read_csv())
```

[Read more →](../dlt-ecosystem/verified-sources/filesystem/basic#quick-example)

---

## Shout-out to new contributors

Big thanks to our newest contributors:

* [@and2reak](https://github.com/and2reak) — [#3164](https://github.com/dlt-hub/dlt/pull/3164)
* [@ivasio](https://github.com/ivasio) — [#3185](https://github.com/dlt-hub/dlt/pull/3185)
* [@Magicbeanbuyer](https://github.com/Magicbeanbuyer) — [#3217](https://github.com/dlt-hub/dlt/pull/3217)
* [@TheLazzziest](https://github.com/TheLazzziest) — [#3029](https://github.com/dlt-hub/dlt/pull/3029)
* [@adrian-173](https://github.com/adrian-173) — [#3239](https://github.com/dlt-hub/dlt/pull/3239)

---

**Full release notes**

[View the complete list of changes →](https://github.com/dlt-hub/dlt/releases)
