Last updated: July 22, 2025
Table availability data quality checks, SQL examples
A table-level check that ensures a simple query can be executed on a table without server errors. It also verifies that the table exists and is accessible (queryable). The actual value (the check result) is the number of failures: if the table is accessible and a simple query can be executed without errors, the result is 0.0. An actual value of 1.0 indicates a failure. Values greater than 1.0 are stored only in the check results table and represent the number of consecutive failures on the following days.
The table availability data quality check has the following variants for each type of data quality check supported by DQOps.
profile table availability
Check description
Verifies availability of a table in a monitored database using a simple query.
Data quality check name | Friendly name | Category | Check type | Time scale | Quality dimension | Sensor definition | Quality rule | Standard |
---|---|---|---|---|---|---|---|---|
profile_table_availability | Table availability | availability | profiling | | Availability | table_availability | max_failures | |
Command-line examples
Please expand the section below to see the DQOps command-line examples to run or activate the profile table availability data quality check.
Managing profile table availability check from DQOps shell
Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This activates the warning rule with its default parameters.
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=profile_table_availability --enable-warning
You can also use patterns to activate the check on all matching tables and columns.
dqo> check activate -c=connection_name -t=schema_prefix*.fact_* -ch=profile_table_availability --enable-warning
Additional rule parameters are passed using the -Wrule_parameter_name=value.
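For example, a stricter warning threshold could be passed as follows (the max_failures value shown is only illustrative):
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=profile_table_availability --enable-warning -Wmax_failures=3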
Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This activates the error rule with its default parameters.
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=profile_table_availability --enable-error
You can also use patterns to activate the check on all matching tables and columns.
dqo> check activate -c=connection_name -t=schema_prefix*.fact_* -ch=profile_table_availability --enable-error
Additional rule parameters are passed using the -Erule_parameter_name=value.
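For example, the error threshold could be overridden as follows (illustrative value):
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=profile_table_availability --enable-error -Emax_failures=2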
Run this data quality check using the check run CLI command by providing the check name and all other targeting filters. The following example shows how to run the profile_table_availability check on all tables on a single data source.
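A likely invocation, assuming the check run command accepts the same -c and -ch targeting flags as check activate:
dqo> check run -c=connection_name -ch=profile_table_availability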
It is also possible to run this check on a specific connection and table. In order to do this, use the connection name and the full table name parameters.
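For example, under the same assumption about the targeting flags:
dqo> check run -c=connection_name -t=schema_name.table_name -ch=profile_table_availability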
You can also run this check on all tables on which the profile_table_availability check is enabled using patterns to find tables.
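For example, reusing the wildcard pattern from the activation examples above:
dqo> check run -c=connection_name -t=schema_prefix*.fact_* -ch=profile_table_availability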
YAML configuration
The sample schema_name.table_name.dqotable.yaml file with the check configured is shown below.
# yaml-language-server: $schema=https://cloud.dqops.com/dqo-yaml-schema/TableYaml-schema.json
apiVersion: dqo/v1
kind: table
spec:
  profiling_checks:
    availability:
      profile_table_availability:
        warning:
          max_failures: 0
        error:
          max_failures: 1
        fatal:
          max_failures: 5
  columns: {}
Samples of generated SQL queries for each data source type
Please expand the database engine name section to see the Jinja2 SQL template used by the table_availability data quality sensor.
BigQuery
{% import '/dialects/bigquery.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
ClickHouse
{% import '/dialects/clickhouse.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Databricks
{% import '/dialects/databricks.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
DB2
{% import '/dialects/db2.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
DuckDB
{% import '/dialects/duckdb.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
HANA
{% import '/dialects/hana.sql.jinja2' as lib with context -%}
SELECT
CAST(0.0 AS DOUBLE) AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
MariaDB
{% import '/dialects/mariadb.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
MySQL
{% import '/dialects/mysql.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Oracle
{% import '/dialects/oracle.sql.jinja2' as lib with context -%}
SELECT
CASE
WHEN COUNT(*) > 0 THEN COUNT(*)
ELSE 1.0
END AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
PostgreSQL
{% import '/dialects/postgresql.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Presto
{% import '/dialects/presto.sql.jinja2' as lib with context -%}
SELECT
CAST(0.0 AS DOUBLE) AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
QuestDB
{% import '/dialects/questdb.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Redshift
{% import '/dialects/redshift.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Snowflake
{% import '/dialects/snowflake.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Spark
{% import '/dialects/spark.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SQL Server
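The SQL Server template is missing from this page. Judging by the other dialects, it most likely follows the same structure, with SELECT TOP 1 in place of LIMIT 1; the sketch below is a reconstruction under that assumption, not a verbatim copy of the DQOps template.
{% import '/dialects/sqlserver.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT TOP 1
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}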
Teradata
{% import '/dialects/teradata.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
QUALIFY ROW_NUMBER() OVER (ORDER BY 1) = 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Trino
{% import '/dialects/trino.sql.jinja2' as lib with context -%}
SELECT
CAST(0.0 AS DOUBLE) AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
daily table availability
Check description
Verifies availability of a table in a monitored database using a simple query. Stores the most recent table availability status for each day when the data quality check was evaluated.
Data quality check name | Friendly name | Category | Check type | Time scale | Quality dimension | Sensor definition | Quality rule | Standard |
---|---|---|---|---|---|---|---|---|
daily_table_availability | Table availability | availability | monitoring | daily | Availability | table_availability | max_failures | |
Command-line examples
Please expand the section below to see the DQOps command-line examples to run or activate the daily table availability data quality check.
Managing daily table availability check from DQOps shell
Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This activates the warning rule with its default parameters.
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=daily_table_availability --enable-warning
You can also use patterns to activate the check on all matching tables and columns.
dqo> check activate -c=connection_name -t=schema_prefix*.fact_* -ch=daily_table_availability --enable-warning
Additional rule parameters are passed using the -Wrule_parameter_name=value.
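For example, a stricter warning threshold could be passed as follows (the max_failures value shown is only illustrative):
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=daily_table_availability --enable-warning -Wmax_failures=3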
Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This activates the error rule with its default parameters.
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=daily_table_availability --enable-error
You can also use patterns to activate the check on all matching tables and columns.
dqo> check activate -c=connection_name -t=schema_prefix*.fact_* -ch=daily_table_availability --enable-error
Additional rule parameters are passed using the -Erule_parameter_name=value.
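For example, the error threshold could be overridden as follows (illustrative value):
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=daily_table_availability --enable-error -Emax_failures=2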
Run this data quality check using the check run CLI command by providing the check name and all other targeting filters. The following example shows how to run the daily_table_availability check on all tables on a single data source.
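A likely invocation, assuming the check run command accepts the same -c and -ch targeting flags as check activate:
dqo> check run -c=connection_name -ch=daily_table_availability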
It is also possible to run this check on a specific connection and table. In order to do this, use the connection name and the full table name parameters.
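For example, under the same assumption about the targeting flags:
dqo> check run -c=connection_name -t=schema_name.table_name -ch=daily_table_availability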
You can also run this check on all tables on which the daily_table_availability check is enabled using patterns to find tables.
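For example, reusing the wildcard pattern from the activation examples above:
dqo> check run -c=connection_name -t=schema_prefix*.fact_* -ch=daily_table_availability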
YAML configuration
The sample schema_name.table_name.dqotable.yaml file with the check configured is shown below.
# yaml-language-server: $schema=https://cloud.dqops.com/dqo-yaml-schema/TableYaml-schema.json
apiVersion: dqo/v1
kind: table
spec:
  monitoring_checks:
    daily:
      availability:
        daily_table_availability:
          warning:
            max_failures: 0
          error:
            max_failures: 1
          fatal:
            max_failures: 5
  columns: {}
Samples of generated SQL queries for each data source type
Please expand the database engine name section to see the Jinja2 SQL template used by the table_availability data quality sensor.
BigQuery
{% import '/dialects/bigquery.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
ClickHouse
{% import '/dialects/clickhouse.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Databricks
{% import '/dialects/databricks.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
DB2
{% import '/dialects/db2.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
DuckDB
{% import '/dialects/duckdb.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
HANA
{% import '/dialects/hana.sql.jinja2' as lib with context -%}
SELECT
CAST(0.0 AS DOUBLE) AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
MariaDB
{% import '/dialects/mariadb.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
MySQL
{% import '/dialects/mysql.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Oracle
{% import '/dialects/oracle.sql.jinja2' as lib with context -%}
SELECT
CASE
WHEN COUNT(*) > 0 THEN COUNT(*)
ELSE 1.0
END AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
PostgreSQL
{% import '/dialects/postgresql.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Presto
{% import '/dialects/presto.sql.jinja2' as lib with context -%}
SELECT
CAST(0.0 AS DOUBLE) AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
QuestDB
{% import '/dialects/questdb.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Redshift
{% import '/dialects/redshift.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Snowflake
{% import '/dialects/snowflake.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Spark
{% import '/dialects/spark.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SQL Server
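The SQL Server template is missing from this page. Judging by the other dialects, it most likely follows the same structure, with SELECT TOP 1 in place of LIMIT 1; the sketch below is a reconstruction under that assumption, not a verbatim copy of the DQOps template.
{% import '/dialects/sqlserver.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT TOP 1
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}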
Teradata
{% import '/dialects/teradata.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
QUALIFY ROW_NUMBER() OVER (ORDER BY 1) = 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Trino
{% import '/dialects/trino.sql.jinja2' as lib with context -%}
SELECT
CAST(0.0 AS DOUBLE) AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
monthly table availability
Check description
Verifies availability of a table in a monitored database using a simple query. Stores the most recent table availability status for each month when the data quality check was evaluated.
Data quality check name | Friendly name | Category | Check type | Time scale | Quality dimension | Sensor definition | Quality rule | Standard |
---|---|---|---|---|---|---|---|---|
monthly_table_availability | Table availability | availability | monitoring | monthly | Availability | table_availability | max_failures | |
Command-line examples
Please expand the section below to see the DQOps command-line examples to run or activate the monthly table availability data quality check.
Managing monthly table availability check from DQOps shell
Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This activates the warning rule with its default parameters.
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=monthly_table_availability --enable-warning
You can also use patterns to activate the check on all matching tables and columns.
dqo> check activate -c=connection_name -t=schema_prefix*.fact_* -ch=monthly_table_availability --enable-warning
Additional rule parameters are passed using the -Wrule_parameter_name=value.
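For example, a stricter warning threshold could be passed as follows (the max_failures value shown is only illustrative):
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=monthly_table_availability --enable-warning -Wmax_failures=3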
Activate this data quality check using the check activate CLI command, providing the connection name, table name, check name, and all other filters. This activates the error rule with its default parameters.
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=monthly_table_availability --enable-error
You can also use patterns to activate the check on all matching tables and columns.
dqo> check activate -c=connection_name -t=schema_prefix*.fact_* -ch=monthly_table_availability --enable-error
Additional rule parameters are passed using the -Erule_parameter_name=value.
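For example, the error threshold could be overridden as follows (illustrative value):
dqo> check activate -c=connection_name -t=schema_name.table_name -ch=monthly_table_availability --enable-error -Emax_failures=2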
Run this data quality check using the check run CLI command by providing the check name and all other targeting filters. The following example shows how to run the monthly_table_availability check on all tables on a single data source.
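A likely invocation, assuming the check run command accepts the same -c and -ch targeting flags as check activate:
dqo> check run -c=connection_name -ch=monthly_table_availability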
It is also possible to run this check on a specific connection and table. In order to do this, use the connection name and the full table name parameters.
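For example, under the same assumption about the targeting flags:
dqo> check run -c=connection_name -t=schema_name.table_name -ch=monthly_table_availability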
You can also run this check on all tables on which the monthly_table_availability check is enabled using patterns to find tables.
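For example, reusing the wildcard pattern from the activation examples above:
dqo> check run -c=connection_name -t=schema_prefix*.fact_* -ch=monthly_table_availability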
YAML configuration
The sample schema_name.table_name.dqotable.yaml file with the check configured is shown below.
# yaml-language-server: $schema=https://cloud.dqops.com/dqo-yaml-schema/TableYaml-schema.json
apiVersion: dqo/v1
kind: table
spec:
  monitoring_checks:
    monthly:
      availability:
        monthly_table_availability:
          warning:
            max_failures: 0
          error:
            max_failures: 1
          fatal:
            max_failures: 5
  columns: {}
Samples of generated SQL queries for each data source type
Please expand the database engine name section to see the Jinja2 SQL template used by the table_availability data quality sensor.
BigQuery
{% import '/dialects/bigquery.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
ClickHouse
{% import '/dialects/clickhouse.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Databricks
{% import '/dialects/databricks.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
DB2
{% import '/dialects/db2.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
DuckDB
{% import '/dialects/duckdb.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
HANA
{% import '/dialects/hana.sql.jinja2' as lib with context -%}
SELECT
CAST(0.0 AS DOUBLE) AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
MariaDB
{% import '/dialects/mariadb.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
MySQL
{% import '/dialects/mysql.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Oracle
{% import '/dialects/oracle.sql.jinja2' as lib with context -%}
SELECT
CASE
WHEN COUNT(*) > 0 THEN COUNT(*)
ELSE 1.0
END AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
PostgreSQL
{% import '/dialects/postgresql.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Presto
{% import '/dialects/presto.sql.jinja2' as lib with context -%}
SELECT
CAST(0.0 AS DOUBLE) AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
QuestDB
{% import '/dialects/questdb.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Redshift
{% import '/dialects/redshift.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Snowflake
{% import '/dialects/snowflake.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Spark
{% import '/dialects/spark.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
SQL Server
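The SQL Server template is missing from this page. Judging by the other dialects, it most likely follows the same structure, with SELECT TOP 1 in place of LIMIT 1; the sketch below is a reconstruction under that assumption, not a verbatim copy of the DQOps template.
{% import '/dialects/sqlserver.sql.jinja2' as lib with context -%}
SELECT
    0.0 AS actual_value
    {{- lib.render_time_dimension_projection('tab_scan') }}
FROM
    (
        SELECT TOP 1
            *
            {{- lib.render_time_dimension_projection('analyzed_table') }}
        FROM {{ lib.render_target_table() }} AS analyzed_table
        {{ lib.render_where_clause() }}
    ) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}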
Teradata
{% import '/dialects/teradata.sql.jinja2' as lib with context -%}
SELECT
0.0 AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
QUALIFY ROW_NUMBER() OVER (ORDER BY 1) = 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
Trino
{% import '/dialects/trino.sql.jinja2' as lib with context -%}
SELECT
CAST(0.0 AS DOUBLE) AS actual_value
{{- lib.render_time_dimension_projection('tab_scan') }}
FROM
(
SELECT
*
{{- lib.render_time_dimension_projection('analyzed_table') }}
FROM {{ lib.render_target_table() }} AS analyzed_table
{{ lib.render_where_clause() }}
LIMIT 1
) AS tab_scan
{% if lib.time_series is not none -%}
GROUP BY time_period
ORDER BY time_period
{%- endif -%}
What's next
- Learn how to configure data quality checks in DQOps
- Look at the examples of running data quality checks, targeting tables and columns