
Last updated: July 22, 2025

Table availability data quality checks, SQL examples

A table-level check that verifies a simple query can be executed on a table without a server error, which also confirms that the table exists and is accessible (queryable). The actual value (the result of the check) is the number of failures: 0.0 when the table is accessible and the query succeeds, and 1.0 when the query fails. Any value greater than 1.0 is stored only in the check results table and represents the number of consecutive failures on the following days.
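The actual-value semantics described above can be sketched in a few lines of Python. This is an illustrative model only, not the DQOps implementation; the function name and signature are assumptions.

```python
def table_availability_actual_value(query_succeeded: bool,
                                    prior_consecutive_failures: int) -> float:
    """Illustrative model of the sensor result described above:
    0.0 when the probe query succeeds, otherwise the updated count
    of consecutive daily failures (1.0 for the first failure)."""
    if query_succeeded:
        return 0.0
    return float(prior_consecutive_failures + 1)
```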


The table availability data quality check has the following variants for each type of data quality check supported by DQOps.

profile table availability

Check description

Verifies availability of a table in a monitored database using a simple query.

| Data quality check name | Friendly name | Category | Check type | Time scale | Quality dimension | Sensor definition | Quality rule | Standard |
|---|---|---|---|---|---|---|---|---|
| profile_table_availability | Table availability | availability | profiling | | Availability | table_availability | max_failures | |

Command-line examples

The examples below show the DQOps command-line commands to run or activate the profile table availability data quality check.

Managing the profile table availability check from the DQOps shell

Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This activates the warning rule with its default parameters.

dqo> check activate -c=connection_name -t=schema_name.table_name  -ch=profile_table_availability --enable-warning

You can also use patterns to activate the check on all matching tables and columns.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=profile_table_availability --enable-warning

Additional parameters for the warning rule are passed using the -Wrule_parameter_name=value syntax.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=profile_table_availability --enable-warning
                    -Wmax_failures=value

Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This activates the error rule with its default parameters.

dqo> check activate -c=connection_name -t=schema_name.table_name  -ch=profile_table_availability --enable-error

You can also use patterns to activate the check on all matching tables and columns.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=profile_table_availability --enable-error

Additional parameters for the error rule are passed using the -Erule_parameter_name=value syntax.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=profile_table_availability --enable-error
                    -Emax_failures=value

Run this data quality check using the check run CLI command by providing the check name and all other targeting filters. The following example shows how to run the profile_table_availability check on all tables on a single data source.

dqo> check run -c=data_source_name -ch=profile_table_availability

It is also possible to run this check on a specific connection and table. To do so, provide the connection name and the full table name.

dqo> check run -c=connection_name -t=schema_name.table_name -ch=profile_table_availability

You can also run this check on all tables where the profile_table_availability check is enabled, using patterns to match the tables.

dqo> check run -c=connection_name -t=schema_prefix*.fact_*  -ch=profile_table_availability

YAML configuration

The sample schema_name.table_name.dqotable.yaml file with the check configured is shown below.

# yaml-language-server: $schema=https://cloud.dqops.com/dqo-yaml-schema/TableYaml-schema.json
apiVersion: dqo/v1
kind: table
spec:
  profiling_checks:
    availability:
      profile_table_availability:
        warning:
          max_failures: 0
        error:
          max_failures: 1
        fatal:
          max_failures: 5
  columns: {}
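The warning, error, and fatal thresholds above all use the max_failures rule. The sketch below is one plausible reading of how such thresholds map an actual value to a severity; the function is hypothetical, not a DQOps API.

```python
def evaluate_max_failures(actual_value: float,
                          warning: int = 0,
                          error: int = 1,
                          fatal: int = 5) -> str:
    """Map the actual value (consecutive failure count) to the most
    severe rule whose max_failures threshold is exceeded; defaults
    mirror the sample YAML configuration above."""
    if actual_value > fatal:
        return "fatal"
    if actual_value > error:
        return "error"
    if actual_value > warning:
        return "warning"
    return "valid"
```

With the sample thresholds, a single failure raises a warning, a second consecutive failure raises an error, and a sixth raises a fatal issue.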
Samples of generated SQL queries for each data source type

The sections below show, for each supported database engine, the Jinja2 template for the table_availability data quality sensor, followed by an example of the SQL query it renders.

BigQuery
{% import '/dialects/bigquery.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `your-google-project-id`.`<target_schema>`.`<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
ClickHouse
{% import '/dialects/clickhouse.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Databricks
{% import '/dialects/databricks.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_schema>`.`<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
DB2
{% import '/dialects/db2.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
DuckDB
{% import '/dialects/duckdb.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM  AS analyzed_table

        LIMIT 1
    ) AS tab_scan
HANA
{% import '/dialects/hana.sql.jinja2' as lib with context -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
MariaDB
{% import '/dialects/mariadb.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
MySQL
{% import '/dialects/mysql.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Oracle
{% import '/dialects/oracle.sql.jinja2' as lib with context -%}
SELECT
    CASE
       WHEN COUNT(*) > 0 THEN COUNT(*)
       ELSE 1.0
    END AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CASE
       WHEN COUNT(*) > 0 THEN COUNT(*)
       ELSE 1.0
    END AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
PostgreSQL
{% import '/dialects/postgresql.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_postgresql_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Presto
{% import '/dialects/presto.sql.jinja2' as lib with context -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_trino_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
QuestDB
{% import '/dialects/questdb.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Redshift
{% import '/dialects/redshift.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_redshift_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Snowflake
{% import '/dialects/snowflake.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_snowflake_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Spark
{% import '/dialects/spark.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_schema>`.`<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
SQL Server
{% import '/dialects/sqlserver.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT TOP 1
            *
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
    ) AS tab_scan
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT TOP 1
            *
        FROM [your_sql_server_database].[<target_schema>].[<target_table>] AS analyzed_table

    ) AS tab_scan
Teradata
{% import '/dialects/teradata.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        QUALIFY ROW_NUMBER() OVER (ORDER BY 1) = 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        QUALIFY ROW_NUMBER() OVER (ORDER BY 1) = 1
    ) AS tab_scan
Trino
{% import '/dialects/trino.sql.jinja2' as lib with context -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_trino_catalog"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan

daily table availability

Check description

Verifies availability of a table in a monitored database using a simple query. Stores the most recent table availability status for each day when the data quality check was evaluated.
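The "most recent status for each day" behavior can be sketched as follows. This is an illustrative model of how stored daily results could be de-duplicated, not DQOps code.

```python
from datetime import datetime

def latest_status_per_day(results):
    """Keep the most recent reading for each calendar day;
    results is an iterable of (datetime, actual_value) pairs."""
    latest = {}
    for ts, value in results:
        day = ts.date()
        if day not in latest or ts > latest[day][0]:
            latest[day] = (ts, value)
    return {day: value for day, (ts, value) in latest.items()}
```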

| Data quality check name | Friendly name | Category | Check type | Time scale | Quality dimension | Sensor definition | Quality rule | Standard |
|---|---|---|---|---|---|---|---|---|
| daily_table_availability | Table availability | availability | monitoring | daily | Availability | table_availability | max_failures | |

Command-line examples

The examples below show the DQOps command-line commands to run or activate the daily table availability data quality check.

Managing the daily table availability check from the DQOps shell

Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This activates the warning rule with its default parameters.

dqo> check activate -c=connection_name -t=schema_name.table_name  -ch=daily_table_availability --enable-warning

You can also use patterns to activate the check on all matching tables and columns.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=daily_table_availability --enable-warning

Additional parameters for the warning rule are passed using the -Wrule_parameter_name=value syntax.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=daily_table_availability --enable-warning
                    -Wmax_failures=value

Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This activates the error rule with its default parameters.

dqo> check activate -c=connection_name -t=schema_name.table_name  -ch=daily_table_availability --enable-error

You can also use patterns to activate the check on all matching tables and columns.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=daily_table_availability --enable-error

Additional parameters for the error rule are passed using the -Erule_parameter_name=value syntax.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=daily_table_availability --enable-error
                    -Emax_failures=value

Run this data quality check using the check run CLI command by providing the check name and all other targeting filters. The following example shows how to run the daily_table_availability check on all tables on a single data source.

dqo> check run -c=data_source_name -ch=daily_table_availability

It is also possible to run this check on a specific connection and table. To do so, provide the connection name and the full table name.

dqo> check run -c=connection_name -t=schema_name.table_name -ch=daily_table_availability

You can also run this check on all tables where the daily_table_availability check is enabled, using patterns to match the tables.

dqo> check run -c=connection_name -t=schema_prefix*.fact_*  -ch=daily_table_availability

YAML configuration

The sample schema_name.table_name.dqotable.yaml file with the check configured is shown below.

# yaml-language-server: $schema=https://cloud.dqops.com/dqo-yaml-schema/TableYaml-schema.json
apiVersion: dqo/v1
kind: table
spec:
  monitoring_checks:
    daily:
      availability:
        daily_table_availability:
          warning:
            max_failures: 0
          error:
            max_failures: 1
          fatal:
            max_failures: 5
  columns: {}
Samples of generated SQL queries for each data source type

The sections below show, for each supported database engine, the Jinja2 template for the table_availability data quality sensor, followed by an example of the SQL query it renders.

BigQuery
{% import '/dialects/bigquery.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `your-google-project-id`.`<target_schema>`.`<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
ClickHouse
{% import '/dialects/clickhouse.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Databricks
{% import '/dialects/databricks.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_schema>`.`<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
DB2
{% import '/dialects/db2.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
DuckDB
{% import '/dialects/duckdb.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM  AS analyzed_table

        LIMIT 1
    ) AS tab_scan
HANA
{% import '/dialects/hana.sql.jinja2' as lib with context -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
MariaDB
{% import '/dialects/mariadb.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
MySQL
{% import '/dialects/mysql.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Oracle
{% import '/dialects/oracle.sql.jinja2' as lib with context -%}
SELECT
    CASE
       WHEN COUNT(*) > 0 THEN COUNT(*)
       ELSE 1.0
    END AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CASE
       WHEN COUNT(*) > 0 THEN COUNT(*)
       ELSE 1.0
    END AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
PostgreSQL
{% import '/dialects/postgresql.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_postgresql_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Presto
{% import '/dialects/presto.sql.jinja2' as lib with context -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_trino_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
QuestDB
{% import '/dialects/questdb.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Redshift
{% import '/dialects/redshift.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_redshift_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Snowflake
{% import '/dialects/snowflake.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_snowflake_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Spark
{% import '/dialects/spark.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_schema>`.`<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
SQL Server
{% import '/dialects/sqlserver.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT TOP 1
            *
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
    ) AS tab_scan
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT TOP 1
            *
        FROM [your_sql_server_database].[<target_schema>].[<target_table>] AS analyzed_table

    ) AS tab_scan
Teradata
{% import '/dialects/teradata.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        QUALIFY ROW_NUMBER() OVER (ORDER BY 1) = 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        QUALIFY ROW_NUMBER() OVER (ORDER BY 1) = 1
    ) AS tab_scan
Trino
{% import '/dialects/trino.sql.jinja2' as lib with context -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_trino_catalog"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan

monthly table availability

Check description

Verifies availability of a table in a monitored database using a simple query. Stores the most recent table availability status for each month when the data quality check was evaluated.

| Data quality check name | Friendly name | Category | Check type | Time scale | Quality dimension | Sensor definition | Quality rule | Standard |
|---|---|---|---|---|---|---|---|---|
| monthly_table_availability | Table availability | availability | monitoring | monthly | Availability | table_availability | max_failures | |

Command-line examples

Please expand the section below to see the DQOps command-line examples to run or activate the monthly table availability data quality check.

Managing monthly table availability check from DQOps shell

Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This command activates the warning rule with its default parameters.

dqo> check activate -c=connection_name -t=schema_name.table_name  -ch=monthly_table_availability --enable-warning

You can also use patterns to activate the check on all matching tables and columns.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=monthly_table_availability --enable-warning

Additional rule parameters are passed using the -Wrule_parameter_name=value.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=monthly_table_availability --enable-warning
                    -Wmax_failures=value

Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This command activates the error rule with its default parameters.

dqo> check activate -c=connection_name -t=schema_name.table_name  -ch=monthly_table_availability --enable-error

You can also use patterns to activate the check on all matching tables and columns.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=monthly_table_availability --enable-error

Additional rule parameters are passed using the -Erule_parameter_name=value.

dqo> check activate -c=connection_name -t=schema_prefix*.fact_*  -ch=monthly_table_availability --enable-error
                    -Emax_failures=value

Run this data quality check using the check run CLI command by providing the check name and all other targeting filters. The following example shows how to run the monthly_table_availability check on all tables on a single data source.

dqo> check run -c=data_source_name -ch=monthly_table_availability

It is also possible to run this check on a specific connection and table. To do this, provide the connection name and the full table name.

dqo> check run -c=connection_name -t=schema_name.table_name -ch=monthly_table_availability

You can also use table name patterns to run this check on all tables on which the monthly_table_availability check is enabled.

dqo> check run -c=connection_name -t=schema_prefix*.fact_*  -ch=monthly_table_availability

YAML configuration

The sample schema_name.table_name.dqotable.yaml file with the check configured is shown below.

# yaml-language-server: $schema=https://cloud.dqops.com/dqo-yaml-schema/TableYaml-schema.json
apiVersion: dqo/v1
kind: table
spec:
  monitoring_checks:
    monthly:
      availability:
        monthly_table_availability:
          warning:
            max_failures: 0
          error:
            max_failures: 1
          fatal:
            max_failures: 5
  columns: {}
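The warning, error, and fatal sections above each configure the max_failures rule at a different threshold. The sensor's actual value is the number of consecutive availability failures, and the rule passes while that count stays at or below the threshold. A minimal sketch of this evaluation logic (an illustration of the documented semantics, not the actual DQOps rule implementation):

```python
def max_failures_passed(actual_value: float, max_failures: int) -> bool:
    # The sensor's actual value is the number of consecutive availability
    # failures; the rule passes while it stays at or below the threshold.
    return actual_value <= max_failures


def severity(actual_value: float, warning: int = 0, error: int = 1, fatal: int = 5) -> str:
    # Check the thresholds from most to least severe and report the
    # highest severity whose rule failed. The defaults mirror the
    # warning/error/fatal thresholds in the YAML sample above.
    if not max_failures_passed(actual_value, fatal):
        return "fatal"
    if not max_failures_passed(actual_value, error):
        return "error"
    if not max_failures_passed(actual_value, warning):
        return "warning"
    return "valid"
```

With these thresholds, a first failed availability check (actual value 1.0) raises a warning, a second consecutive failure raises an error, and the sixth consecutive failure raises a fatal alert.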
Samples of generated SQL queries for each data source type

Please expand the database engine name section to see the SQL query rendered by a Jinja2 template for the table_availability data quality sensor.
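In each pair below, the Jinja2 sensor template appears first, followed by one rendered query. The FROM clause is produced by the lib.render_target_table() macro, which joins the connection's database, schema, and table names using the dialect's identifier quoting. A simplified, hypothetical stand-in (not the actual DQOps macro) showing the idea:

```python
def render_target_table(parts, quote='"'):
    # Quote each non-empty name part and join them with dots - a
    # simplified illustration of dialect-specific identifier quoting,
    # not the real DQOps 'lib' macro.
    return ".".join(f"{quote}{part}{quote}" for part in parts if part)


# Snowflake-style three-part name with double-quoted identifiers:
snowflake_ref = render_target_table(
    ["your_snowflake_database", "<target_schema>", "<target_table>"])

# The MySQL samples use backticks and only a table name:
mysql_ref = render_target_table([None, None, "<target_table>"], quote="`")
```

This is why the rendered samples differ only in quoting style and in how many name parts the dialect uses, while the surrounding query shape stays the same.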

BigQuery
{% import '/dialects/bigquery.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `your-google-project-id`.`<target_schema>`.`<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
ClickHouse
{% import '/dialects/clickhouse.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Databricks
{% import '/dialects/databricks.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_schema>`.`<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
DB2
{% import '/dialects/db2.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
DuckDB
{% import '/dialects/duckdb.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM  AS analyzed_table

        LIMIT 1
    ) AS tab_scan
HANA
{% import '/dialects/hana.sql.jinja2' as lib with context -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
MariaDB
{% import '/dialects/mariadb.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
MySQL
{% import '/dialects/mysql.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Oracle
{% import '/dialects/oracle.sql.jinja2' as lib with context -%}
SELECT
    CASE
       WHEN COUNT(*) > 0 THEN COUNT(*)
       ELSE 1.0
    END AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CASE
       WHEN COUNT(*) > 0 THEN COUNT(*)
       ELSE 1.0
    END AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
PostgreSQL
{% import '/dialects/postgresql.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_postgresql_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Presto
{% import '/dialects/presto.sql.jinja2' as lib with context -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_presto_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
QuestDB
{% import '/dialects/questdb.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Redshift
{% import '/dialects/redshift.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_redshift_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Snowflake
{% import '/dialects/snowflake.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_snowflake_database"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan
Spark
{% import '/dialects/spark.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM `<target_schema>`.`<target_table>` AS analyzed_table

        LIMIT 1
    ) AS tab_scan
SQL Server
{% import '/dialects/sqlserver.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT TOP 1
            *
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
    ) AS tab_scan
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT TOP 1
            *
        FROM [your_sql_server_database].[<target_schema>].[<target_table>] AS analyzed_table

    ) AS tab_scan
Teradata
{% import '/dialects/teradata.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        QUALIFY ROW_NUMBER() OVER (ORDER BY 1) = 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    0.0 AS actual_value
FROM
    (
        SELECT
            *
        FROM "<target_schema>"."<target_table>" AS analyzed_table

        QUALIFY ROW_NUMBER() OVER (ORDER BY 1) = 1
    ) AS tab_scan
Trino
{% import '/dialects/trino.sql.jinja2' as lib with context -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
        LIMIT 1
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SELECT
    CAST(0.0 AS DOUBLE) AS actual_value
FROM
    (
        SELECT
            *
        FROM "your_trino_catalog"."<target_schema>"."<target_table>" AS analyzed_table

        LIMIT 1
    ) AS tab_scan

What's next