
Last updated: July 22, 2025

DQOps REST API jobs models reference

The reference for all objects used by the jobs REST API operations is listed below.

TimeWindowFilterParameters

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| daily_partitioning_recent_days | The number of recent days to analyze incrementally by daily partitioned data quality checks. | integer |
| daily_partitioning_include_today | Analyze also today and later days when running daily partitioned checks. By default, daily partitioned checks do not analyze today and future dates. Setting this to true disables filtering the end dates. | boolean |
| monthly_partitioning_recent_months | The number of recent months to analyze incrementally by monthly partitioned data quality checks. | integer |
| monthly_partitioning_include_current_month | Analyze also the current month and later months when running monthly partitioned checks. By default, monthly partitioned checks do not analyze the current month and future months. Setting this to true disables filtering the end dates. | boolean |
| from_date | Analyze the data since the given date (inclusive). The date should be an ISO 8601 date (yyyy-MM-dd). The analyzed table must have the timestamp column properly configured; it is the column used for filtering the date and time ranges. Setting the beginning date overrides the recent days and recent months. | date |
| from_date_time | Analyze the data since the given date and time (inclusive). The date and time should be an ISO 8601 local date and time without the time zone (yyyy-MM-dd HH:mm:ss). The analyzed table must have the timestamp column properly configured; it is the column used for filtering the date and time ranges. Setting the beginning date and time overrides the recent days and recent months. | datetime |
| to_date | Analyze the data until the given date (exclusive; the given date and the following dates are not analyzed). The date should be an ISO 8601 date (yyyy-MM-dd). The analyzed table must have the timestamp column properly configured; it is the column used for filtering the date and time ranges. Setting the end date overrides the parameters that disable analyzing today or the current month. | date |
| to_date_time | Analyze the data until the given date and time (exclusive). The date should be an ISO 8601 date (yyyy-MM-dd). The analyzed table must have the timestamp column properly configured; it is the column used for filtering the date and time ranges. Setting the end date and time overrides the parameters that disable analyzing today or the current month. | datetime |
| where_filter | An additional filter that must be a valid SQL predicate (an SQL expression that returns 'true' or 'false'), added to the WHERE clause of the SQL query that DQOps runs on the data source. The purpose of a custom filter is to analyze only a subset of data, for example, when a new batch of records is loaded and the data quality checks are evaluated as a data contract. All the records in that batch must be tagged with the same value, and the predicate passed to find records from that batch would use a filter in the form: "{alias}.batch_id = 1". The filter can use the replacement token {alias} to reference the analyzed table. | string |
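
The payload below is a minimal sketch of a TimeWindowFilterParameters object expressed as a Python dictionary, using only the properties documented above; the batch value in the predicate is illustrative.

```python
# A time window filter for daily partitioned checks. Setting from_date or
# from_date_time would override the recent-days/months parameters, so this
# sketch uses only incremental windowing plus an optional SQL predicate.
time_window_filter = {
    "daily_partitioning_recent_days": 7,        # analyze the last 7 daily partitions
    "daily_partitioning_include_today": False,  # keep the default end-date filtering
    "where_filter": "{alias}.batch_id = 1",     # {alias} is replaced with the analyzed table's alias
}
```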

ErrorSamplesDataScope

Enumeration of possible error samples collection scopes. "table" - a whole table is analyzed for error samples, "data_group" - error samples are collected for each data grouping.

The structure of this object is described below

| Data type | Enum values |
|-----------|-------------|
| string | table<br/>data_group |

ErrorSamplerResult

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| executed_error_samplers | The total count of all executed error samplers. This count only includes data quality checks that have an error sampling template defined. | integer |
| columns_analyzed | The count of columns for which DQOps executed an error sampler and tried to collect error samples. | integer |
| columns_successfully_analyzed | The count of columns for which DQOps managed to obtain error samples. | integer |
| total_error_samplers_failed | The count of error samplers that failed to execute. | integer |
| total_error_samples_collected | The total number of error samples (values) that were collected. | integer |

CollectErrorSamplesParameters

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| check_search_filters | Check search filters that identify the checks for which the error samples should be collected. | CheckSearchFilters |
| time_window_filter | Optional time window filter; configures the time range for partitioned tables that is analyzed to find error samples. | TimeWindowFilterParameters |
| data_scope | The target scope of collecting error samples. Error samples can be collected for the entire table or for each data grouping separately. | ErrorSamplesDataScope |
| dummy_sensor_execution | Boolean flag that enables a dummy error sample collection (sensors are executed, but the error sample results are not written to the parquet files). | boolean |
| error_sampler_result | The summary of the error sampling collection job after it finished. Returns the number of error samplers executed, columns analyzed, and error samples (values) captured. | ErrorSamplerResult |

DqoJobStatus

Job status of a job on the queue.

The structure of this object is described below

| Data type | Enum values |
|-----------|-------------|
| string | queued<br/>running<br/>waiting<br/>finished<br/>failed<br/>cancel_requested<br/>cancelled |
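
Because queued, running, waiting, and cancel_requested are transient states, a client that starts a job without waiting typically polls until one of the terminal statuses is reached. The sketch below assumes a plain HTTP client; the job-status endpoint path is illustrative and not taken from this reference.

```python
import time

import requests  # assumption: direct HTTP access to a DQOps instance

# DqoJobStatus values after which the job will not change anymore.
TERMINAL_STATUSES = {"finished", "failed", "cancelled"}

def wait_for_job(base_url: str, job_id: str) -> str:
    """Polls a hypothetical job-status endpoint until a terminal status."""
    while True:
        payload = requests.get(f"{base_url}/api/jobs/{job_id}").json()
        status = payload.get("status")
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(2.0)  # queued/running/waiting/cancel_requested are transient
```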

CollectErrorSamplesResult

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| job_id | Job id that identifies a job that was started on the DQOps job queue. | DqoQueueJobId |
| result | Optional result object that is returned only when the wait parameter was true and the "collect error samples" job has finished. Contains the summary result of collecting error samples, including the number of error samplers that were executed, and the number of error samples collected. | ErrorSamplerResult |
| status | Job status. | DqoJobStatus |

CollectStatisticsResult

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| executed_statistics_collectors | The total count of all executed statistics collectors. | integer |
| columns_analyzed | The count of columns for which DQOps executed a collector and tried to read the statistics. | integer |
| columns_successfully_analyzed | The count of columns for which DQOps managed to obtain statistics. | integer |
| total_collectors_failed | The count of statistics collectors that failed to execute. | integer |
| total_collected_results | The total number of results that were collected. | integer |

CollectStatisticsQueueJobResult

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| job_id | Job id that identifies a job that was started on the DQOps job queue. | DqoQueueJobId |
| result | Optional result object that is returned only when the wait parameter was true and the "collect statistics" job has finished. Contains the summary result of collecting basic statistics, including the number of statistics collectors (queries) that managed to capture metrics about the table(s). | CollectStatisticsResult |
| status | Job status. | DqoJobStatus |

DeleteStoredDataQueueJobParameters

Parameters for the "delete stored data queue job that deletes data from parquet files stored in DQOps user home's .data* directory.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| connection | The connection name. | string |
| full_table_name | The schema and table name, provided in the form schema.table, for example public.fact_sales. This filter does not support patterns. | string |
| date_start | The start date (inclusive) to delete the data, based on the time_period column in Parquet files. | date |
| date_end | The end date (inclusive) to delete the data, based on the time_period column in Parquet files. | date |
| delete_errors | Delete the data from the errors table. Because the default value is false, this parameter must be set to true to delete the errors. | boolean |
| delete_statistics | Delete the data from the statistics table. Because the default value is false, this parameter must be set to true to delete the statistics. | boolean |
| delete_check_results | Delete the data from the check_results table. Because the default value is false, this parameter must be set to true to delete the check results. | boolean |
| delete_sensor_readouts | Delete the data from the sensor_readouts table. Because the default value is false, this parameter must be set to true to delete the sensor readouts. | boolean |
| delete_error_samples | Delete the data from the error_samples table. Because the default value is false, this parameter must be set to true to delete the error samples. | boolean |
| delete_incidents | Delete the data from the incidents table. Because the default value is false, this parameter must be set to true to delete the incidents. | boolean |
| delete_checks_configuration | Delete the configured data quality checks from the table; they are detached from the configuration. Because the default value is false, this parameter must be set to true to delete the checks configuration. | boolean |
| column_names | The list of column names, to delete the data of column-level results or errors only for the selected columns. | List[string] |
| check_category | The check category name, for example volume or anomaly. | string |
| table_comparison_name | The name of a table comparison configuration. Deletes only table comparison results (and errors) for a given comparison. | string |
| check_name | The name of a data quality check. Uses the short check name, for example daily_row_count. | string |
| check_type | The type of checks whose results and errors should be deleted. For example, use monitoring to delete only monitoring checks data. | string |
| sensor_name | The full sensor name whose results, checks based on the sensor, statistics, and errors generated by the sensor should be deleted. Uses a full sensor name, for example: table/volume/row_count. | string |
| data_group_tag | The names of data groups in any of the grouping_level_1...grouping_level_9 columns in the Parquet tables. Enables deleting data tagged for one data source or a subset of results when the group level is captured from a column in a monitored table. | string |
| quality_dimension | The data quality dimension name, for example Timeliness or Completeness. | string |
| time_gradient | The time gradient (time scale) of the sensor and check results that are captured. | string |
| collector_category | The statistics collector category when statistics should be deleted. A statistics category is a group of statistics, for example sampling for the column value samples. | string |
| collector_name | The statistics collector name when only statistics are deleted for a selected collector, for example sample_values. | string |
| collector_target | The type of the target object for which the basic statistics are deleted. Supported values are table and column. | string |
| incident_status_name | The incident status name when only incidents are deleted, for example muted. | string |
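
As a worked example, the sketch below builds a DeleteStoredDataQueueJobParameters payload as a Python dictionary. All delete_* flags default to false, so every data type to remove must be enabled explicitly; the connection and table names are hypothetical.

```python
# Delete January 2025 check results and sensor readouts for one table.
delete_parameters = {
    "connection": "sales_dwh",               # hypothetical connection name
    "full_table_name": "public.fact_sales",  # exact name; this filter does not support patterns
    "date_start": "2025-01-01",              # inclusive, matched against the time_period column
    "date_end": "2025-01-31",                # inclusive
    "delete_check_results": True,            # must be set explicitly, the default is false
    "delete_sensor_readouts": True,
}
```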

DqoRoot

DQOps root folders in the DQOps user home that may be replicated to a remote file system (uploaded to DQOps Cloud or any other cloud). It is also used as a lock scope.

The structure of this object is described below

| Data type | Enum values |
|-----------|-------------|
| string | data_sensor_readouts<br/>data_check_results<br/>data_statistics<br/>data_errors<br/>data_error_samples<br/>data_incidents<br/>sources<br/>sensors<br/>rules<br/>checks<br/>settings<br/>credentials<br/>dictionaries<br/>patterns<br/>_indexes<br/>_indexes_sources<br/>_local_settings |

ParquetPartitionId

Identifies a single partition of hive-partitioned tables stored as Parquet files.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| data_domain | Data domain name. | string |
| table_type | Table type. | DqoRoot |
| connection_name | Connection name. | string |
| table_name | Table name (schema.table). | PhysicalTableName |
| month | The date of the first day of the month that identifies a monthly partition. | date |

DataDeleteResultPartition

Results of the "data delete" job for the monthly partition.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| rows_affected_count | The number of rows that were deleted from the partition. | integer |
| partition_deleted | True if a whole partition (a parquet file) was deleted instead of removing only selected rows. | boolean |

DeleteStoredDataResult

Compiled results of the "data delete".

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| partition_results | Dictionary of partitions that were deleted or updated when the rows were deleted. | Dict[ParquetPartitionId, DataDeleteResultPartition] |
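
A short sketch of consuming this model: partition_results maps a ParquetPartitionId to a DataDeleteResultPartition. How the dictionary key is serialized in JSON is an assumption here (a string key is assumed).

```python
def print_delete_summary(delete_result: dict) -> None:
    # Assumption: in the JSON payload the ParquetPartitionId key is a string.
    for partition_id, partition in delete_result.get("partition_results", {}).items():
        print(
            partition_id,
            partition["rows_affected_count"],  # rows deleted from this monthly partition
            partition["partition_deleted"],    # True when the whole parquet file was removed
        )
```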

DeleteStoredDataQueueJobResult

Object returned from the operation that queues a "delete stored data" job. The result contains the job id that was started and optionally can also contain a dictionary of partitions that were cleared or deleted if the operation was started with wait=true parameter to wait for the "delete stored data" job to finish.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| job_id | Job id that identifies a job that was started on the DQOps job queue. | DqoQueueJobId |
| result | Optional result object that is returned only when the wait parameter was true and the "delete stored data" job has finished. Contains a list of partitions that were deleted or updated. | DeleteStoredDataResult |
| status | Job status. | DqoJobStatus |

DqoJobType

Job type that identifies a job by type.

The structure of this object is described below

| Data type | Enum values |
|-----------|-------------|
| string | run_checks<br/>run_checks_on_table<br/>collect_statistics<br/>collect_scheduled_statistics<br/>collect_statistics_on_table<br/>collect_error_samples<br/>collect_error_samples_on_table<br/>queue_thread_shutdown<br/>synchronize_folder<br/>synchronize_multiple_folders<br/>run_scheduled_checks_cron<br/>import_schema<br/>import_tables<br/>auto_import_tables<br/>delete_stored_data<br/>repair_stored_data |

FileSynchronizationDirection

Data synchronization direction between a local DQOps Home and DQOps Cloud data quality data warehouse.

The structure of this object is described below

| Data type | Enum values |
|-----------|-------------|
| string | full<br/>download<br/>upload |

SynchronizeRootFolderParameters

Parameter object for starting a file synchronization job. Identifies the folder and direction that should be synchronized.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| folder |  | DqoRoot |
| direction |  | FileSynchronizationDirection |
| force_refresh_native_table |  | boolean |

SynchronizeRootFolderDqoQueueJobParameters

Parameters object for a job that synchronizes one folder with DQOps Cloud.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| synchronization_parameter |  | SynchronizeRootFolderParameters |

SynchronizeMultipleFoldersDqoQueueJobParameters

Simple object for starting multiple folder synchronization jobs with the same configuration.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| direction | File synchronization direction; the default is full synchronization (push local changes and pull other changes from DQOps Cloud). | FileSynchronizationDirection |
| force_refresh_native_tables | Force a full refresh of native tables in the data quality data warehouse. The default synchronization mode is to refresh only modified data. | boolean |
| detect_cron_schedules | Scans the YAML files (with the configuration for connections and tables) and detects new cron schedules. Detected cron schedules are registered in the cron (Quartz) job scheduler. | boolean |
| sources | Synchronize the "sources" folder. | boolean |
| sensors | Synchronize the "sensors" folder. | boolean |
| rules | Synchronize the "rules" folder. | boolean |
| checks | Synchronize the "checks" folder. | boolean |
| settings | Synchronize the "settings" folder. | boolean |
| credentials | Synchronize the ".credentials" folder. | boolean |
| dictionaries | Synchronize the "dictionaries" folder. | boolean |
| patterns | Synchronize the "patterns" folder. | boolean |
| data_sensor_readouts | Synchronize the ".data/sensor_readouts" folder. | boolean |
| data_check_results | Synchronize the ".data/check_results" folder. | boolean |
| data_statistics | Synchronize the ".data/statistics" folder. | boolean |
| data_errors | Synchronize the ".data/errors" folder. | boolean |
| data_incidents | Synchronize the ".data/incidents" folder. | boolean |
| synchronize_folder_with_local_changes | Synchronize all folders that have local changes. When this field is set to true, there is no need to enable synchronization of single folders because DQOps will decide which folders need synchronization (to be pushed to the cloud). | boolean |
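
The sketch below shows one common configuration of this model: letting DQOps decide which folders to push by setting synchronize_folder_with_local_changes, so none of the per-folder flags need to be enabled.

```python
# A SynchronizeMultipleFoldersDqoQueueJobParameters payload as a Python dict.
synchronize_parameters = {
    "direction": "full",                            # FileSynchronizationDirection: push and pull
    "force_refresh_native_tables": False,           # refresh only modified data in the warehouse
    "detect_cron_schedules": True,                  # re-scan YAML files for new cron schedules
    "synchronize_folder_with_local_changes": True,  # DQOps picks the folders that changed locally
}
```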

RunChecksTarget

The structure of this object is described below

| Data type | Enum values |
|-----------|-------------|
| string | sensors_and_rules<br/>only_sensors<br/>only_rules |

RunChecksResult

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| highest_severity | The highest check severity for the data quality checks executed in this batch. | RuleSeverityLevel |
| executed_checks | The total count of all executed checks. | integer |
| valid_results | The total count of all checks that finished successfully (with no data quality issues). | integer |
| warnings | The total count of all invalid data quality checks that finished raising a warning. | integer |
| errors | The total count of all invalid data quality checks that finished raising an error. | integer |
| fatals | The total count of all invalid data quality checks that finished raising a fatal error. | integer |
| execution_errors | The total number of checks that failed to execute due to some execution errors. | integer |

RunChecksParameters

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| check_search_filters | Target data quality checks filter. | CheckSearchFilters |
| time_window_filter | Optional time window filter; configures the time range that is analyzed or the number of recent days/months to analyze for day- or month-partitioned data. | TimeWindowFilterParameters |
| collect_error_samples | Set the value to true to collect error samples for failed data quality checks. Set the value to false to disable error sampling collection despite any other settings on the table or check level. | boolean |
| execution_target | Set the data quality check execution mode. The default mode sensors_and_rules will both collect metrics and validate them with rules. Alternatively, it is possible to run only sensors, or validate existing data with rules. | RunChecksTarget |
| dummy_execution | Set the value to true when the data quality checks should be executed in a dummy mode (without running checks on the target systems and storing the results). Only the Jinja2 sensors will be rendered. | boolean |
| run_checks_result | The result of running the checks, updated when the "run checks" job finishes. Contains the count of executed checks. | RunChecksResult |
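
A minimal sketch of a RunChecksParameters payload follows. CheckSearchFilters is documented in a separate models reference; an empty filter object is used here on the assumption that it targets all enabled checks.

```python
run_checks_parameters = {
    "check_search_filters": {},               # assumption: empty filters target all enabled checks
    "time_window_filter": {
        "daily_partitioning_recent_days": 7,  # see TimeWindowFilterParameters above
    },
    "collect_error_samples": True,            # capture error samples for failed checks
    "execution_target": "sensors_and_rules",  # the default RunChecksTarget mode
    "dummy_execution": False,                 # actually run checks instead of only rendering sensors
}
```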

RunChecksOnTableParameters

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| connection | The name of the target connection. | string |
| max_jobs_per_connection | The maximum number of concurrent 'run checks on table' jobs that can be run on this connection. Limits the number of concurrent jobs. | integer |
| table | The full physical name (schema.table) of the target table. | PhysicalTableName |
| check_search_filters | Target data quality checks filter. | CheckSearchFilters |
| time_window_filter | Optional time window filter; configures the time range that is analyzed or the number of recent days/months to analyze for day- or month-partitioned data. | TimeWindowFilterParameters |
| dummy_execution | Set the value to true when the data quality checks should be executed in a dummy mode (without running checks on the target systems and storing the results). Only the Jinja2 sensors will be rendered. | boolean |
| collect_error_samples | Set the value to true to collect error samples for failed data quality checks. | boolean |
| execution_target | Set the data quality check execution mode. The default mode sensors_and_rules will both collect metrics and validate them with rules. Alternatively, it is possible to run only sensors, or validate existing data with rules. | RunChecksTarget |
| run_checks_result | The result of running the checks, updated when the "run checks" job finishes. Contains the count of executed checks. | RunChecksResult |

StatisticsCollectorTarget

The structure of this object is described below

| Data type | Enum values |
|-----------|-------------|
| string | table<br/>column |

StatisticsCollectorSearchFilters

Hierarchy node search filters for finding enabled statistics collectors (basic profilers) to be started.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| column_names | The list of column names or column name patterns. This field accepts search patterns in the format: 'fk_*', '*_id', 'prefix*suffix'. | List[string] |
| collector_name | The target statistics collector name, to capture only selected statistics. Uses the short collector name. This field supports search patterns such as: 'prefix*', '*suffix', 'prefix_*_suffix'. In order to collect only the top 10 most common column samples, use 'column_samples'. | string |
| sensor_name | The target sensor name, to run only data quality checks that use this sensor. Uses the full sensor name, which is the full folder path within the sensors folder. This field supports search patterns such as: 'table/volume/row_*', '*count', 'table/volume/prefix*_suffix'. | string |
| collector_category | The target statistics collector category, for example: nulls, volume, sampling. | string |
| target | The target type of object to collect statistics from. Supported values are: table, to collect only table-level statistics, or column, to collect only column-level statistics. | StatisticsCollectorTarget |
| enabled_cron_schedule_expression | Expected CRON profiling schedule. | string |
| connection | The connection (data source) name. Supports search patterns in the format: 'source*', '*_prod', 'prefix*suffix'. | string |
| full_table_name | The schema and table name, provided in the form schema.table, for example public.fact_sales. The schema and table name accept patterns both in the schema name and table name parts. Sample patterns are: 'schema_name.tab_prefix_*', 'schema_name.*', '*.*', 'schema_name.*customer', 'schema_name.tab*_suffix'. | string |
| enabled | A boolean flag to target enabled tables, columns or checks. When the value of this field is not set, the default value of this field is true, targeting only tables, columns and checks that are not implicitly disabled. | boolean |
| max_results | Optional limit for the maximum number of results to return. | integer |

StatisticsDataScope

Enumeration of possible statistics scopes. "table" - a whole table was profiled, "data_group" - groups of rows were profiled separately for each data grouping.

The structure of this object is described below

| Data type | Enum values |
|-----------|-------------|
| string | table<br/>data_group |

CollectStatisticsQueueJobParameters

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| statistics_collector_search_filters | Statistics collector search filters that identify the type of statistics collector to run. | StatisticsCollectorSearchFilters |
| data_scope | The target scope of collecting statistics. Statistics can be collected for the entire table or for each data grouping separately. | StatisticsDataScope |
| configure_table | Turns on a special mode of collecting statistics that will configure the timestamp and ID columns. It should be used only during the first statistics collection. | boolean |
| samples_limit | The default limit of column samples that are collected. | integer |
| dummy_sensor_execution | Boolean flag that enables a dummy statistics collection (sensors are executed, but the statistics results are not written to the parquet files). | boolean |
| collect_statistics_result | The summary of the statistics collection job after it finished. Returns the number of collectors analyzed, columns analyzed, and statistics results captured. | CollectStatisticsResult |
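
The sketch below combines StatisticsCollectorSearchFilters, StatisticsDataScope, and the remaining parameters into one CollectStatisticsQueueJobParameters payload; the connection name and patterns are illustrative.

```python
collect_statistics_parameters = {
    "statistics_collector_search_filters": {
        "connection": "sales_dwh",           # hypothetical connection name
        "full_table_name": "public.fact_*",  # table name pattern, as documented above
        "column_names": ["*_id"],            # column name pattern
        "target": "column",                  # StatisticsCollectorTarget: column-level statistics only
    },
    "data_scope": "table",                   # StatisticsDataScope: profile whole tables
    "samples_limit": 100,                    # limit of collected column samples
    "configure_table": False,                # set to true only during the first statistics collection
}
```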

CollectStatisticsOnTableQueueJobParameters

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| connection | The name of the target connection. | string |
| max_jobs_per_connection | The maximum number of concurrent 'collect statistics on table' jobs that can be run on this connection. Limits the number of concurrent jobs. | integer |
| table | The full physical name (schema.table) of the target table. | PhysicalTableName |
| statistics_collector_search_filters | Statistics collector search filters that identify the type of statistics collector to run. | StatisticsCollectorSearchFilters |
| data_scope | The target scope of collecting statistics. Statistics can be collected for the entire table or for each data grouping separately. | StatisticsDataScope |
| samples_limit | The default limit of column samples that are collected. | integer |
| configure_table | Turns on a special mode of collecting statistics that will configure the timestamp and ID columns. It should be used only during the first statistics collection. | boolean |
| dummy_sensor_execution | Boolean flag that enables a dummy statistics collection (sensors are executed, but the statistics results are not written to the parquet files). | boolean |
| collect_statistics_result | The summary of the statistics collection job after it finished. Returns the number of collectors analyzed, columns analyzed, and statistics results captured. | CollectStatisticsResult |

CollectErrorSamplesOnTableParameters

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| connection | The name of the target connection. | string |
| max_jobs_per_connection | The maximum number of concurrent 'collect error samples on table' jobs that can be run on this connection. Limits the number of concurrent jobs. | integer |
| table | The full physical name (schema.table) of the target table. | PhysicalTableName |
| check_search_filters | Check search filters that identify the data quality checks for which the error samples are collected. | CheckSearchFilters |
| time_window_filter | Optional time window filter; configures the time range for partitioned tables that is analyzed to find error samples. | TimeWindowFilterParameters |
| data_scope | The target scope of collecting error samples. Error samples can be collected for the entire table or for each data grouping separately. | ErrorSamplesDataScope |
| dummy_sensor_execution | Boolean flag that enables a dummy error samples collection (sensors are executed, but the error sample results are not written to the parquet files). | boolean |
| error_sampler_result | The summary of the error sampling collection job after it finished. Returns the number of error samplers that collected samples, columns analyzed, and error samples (values) captured. | ErrorSamplerResult |

ImportSchemaQueueJobParameters

Parameters for the ImportSchemaQueueJob job that imports tables from a database.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| connection_name | Connection name where the tables are imported. | string |
| schema_name | Source schema name from which the tables are imported. | string |
| table_name_contains | Optional filter for the names of tables to import: a substring that must be present inside the table names. This filter is case-sensitive. | string |

ImportTablesQueueJobParameters

Parameters for the ImportTablesQueueJob job that imports selected tables from the source database.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| connection_name | Connection name. | string |
| schema_name | Schema name. | string |
| table_name_contains | Optional filter for the table names to import. The table names that are imported must contain a substring matching this parameter. This filter is case-sensitive. | string |
| table_names | Optional list of table names inside the schema. When the list of tables is empty, all tables are imported. | List[string] |
| tables_import_limit | Optional parameter to configure the limit of tables that are imported from a single schema. Leave this parameter blank to use the default limit (300 tables). | integer |
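
For example, the parameters below import two named tables from one schema; leaving table_names empty would import every table in the schema, up to tables_import_limit (300 by default). The connection name is hypothetical.

```python
import_tables_parameters = {
    "connection_name": "sales_dwh",
    "schema_name": "public",
    "table_names": ["fact_sales", "dim_customer"],  # an empty list imports all tables
}
```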

RepairStoredDataQueueJobParameters

Parameters for the RepairStoredDataQueueJob job that repairs data stored in the user's ".data" directory.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| connection_name |  | string |
| schema_table_name |  | string |
| repair_errors |  | boolean |
| repair_statistics |  | boolean |
| repair_check_results |  | boolean |
| repair_sensor_readouts |  | boolean |

DqoJobEntryParametersModel

Model object returned to UI that has typed fields for each supported job parameter type.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| synchronize_root_folder_parameters | The job parameters for the "synchronize folder" queue job. | SynchronizeRootFolderDqoQueueJobParameters |
| synchronize_multiple_folders_parameters | The job parameters for the "synchronize multiple folders" queue job. | SynchronizeMultipleFoldersDqoQueueJobParameters |
| run_scheduled_checks_parameters | The job parameters for the "run scheduled checks" cron queue job. | CronScheduleSpec |
| collect_scheduled_statistics_parameters | The job parameters for the "collect scheduled statistics" cron queue job. | CronScheduleSpec |
| auto_import_tables_parameters | The job parameters for the "auto import tables" cron queue job. | CronScheduleSpec |
| run_checks_parameters | The job parameters for the "run checks" queue job. | RunChecksParameters |
| run_checks_on_table_parameters | The job parameters for the "run checks on table" queue job. | RunChecksOnTableParameters |
| collect_statistics_parameters | The job parameters for the "collect statistics" queue job. | CollectStatisticsQueueJobParameters |
| collect_statistics_on_table_parameters | The job parameters for the "collect statistics on table" queue job. | CollectStatisticsOnTableQueueJobParameters |
| collect_error_samples_parameters | The job parameters for the "collect error samples" queue job. | CollectErrorSamplesParameters |
| collect_error_samples_on_table_parameters | The job parameters for the "collect error samples on table" queue job. | CollectErrorSamplesOnTableParameters |
| import_schema_parameters | The job parameters for the "import schema" queue job. | ImportSchemaQueueJobParameters |
| import_table_parameters | The job parameters for the "import tables" queue job. | ImportTablesQueueJobParameters |
| delete_stored_data_parameters | The job parameters for the "delete stored data" queue job. | DeleteStoredDataQueueJobParameters |
| repair_stored_data_parameters | The job parameters for the "repair stored data" queue job. | RepairStoredDataQueueJobParameters |

DqoJobHistoryEntryModel

Model of a single job that was scheduled or has finished. It is stored in the job monitoring service on the history list.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| job_id |  | DqoQueueJobId |
| job_type |  | DqoJobType |
| parameters |  | DqoJobEntryParametersModel |
| status |  | DqoJobStatus |
| error_message |  | string |
| data_domain |  | string |

DqoJobChangeModel

Describes a change to the job status or the job queue (such as a new job was added).

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| status |  | DqoJobStatus |
| job_id |  | DqoQueueJobId |
| change_sequence |  | long |
| updated_model |  | DqoJobHistoryEntryModel |
| domain_name |  | string |

FolderSynchronizationStatus

Enumeration of statuses identifying the synchronization status for each folder that can be synchronized with DQOps Cloud.

The structure of this object is described below

| Data type | Enum values |
|-----------|-------------|
| string | unchanged<br/>changed<br/>synchronizing |

CloudSynchronizationFoldersStatusModel

Model that describes the current synchronization status for each folder.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| sources | The synchronization status of the "sources" folder. | FolderSynchronizationStatus |
| sensors | The synchronization status of the "sensors" folder. | FolderSynchronizationStatus |
| rules | The synchronization status of the "rules" folder. | FolderSynchronizationStatus |
| checks | The synchronization status of the "checks" folder. | FolderSynchronizationStatus |
| settings | The synchronization status of the "settings" folder. | FolderSynchronizationStatus |
| credentials | The synchronization status of the ".credentials" folder. | FolderSynchronizationStatus |
| dictionaries | The synchronization status of the "dictionaries" folder. | FolderSynchronizationStatus |
| patterns | The synchronization status of the "patterns" folder. | FolderSynchronizationStatus |
| data_sensor_readouts | The synchronization status of the ".data/sensor_readouts" folder. | FolderSynchronizationStatus |
| data_check_results | The synchronization status of the ".data/check_results" folder. | FolderSynchronizationStatus |
| data_statistics | The synchronization status of the ".data/statistics" folder. | FolderSynchronizationStatus |
| data_errors | The synchronization status of the ".data/errors" folder. | FolderSynchronizationStatus |
| data_incidents | The synchronization status of the ".data/incidents" folder. | FolderSynchronizationStatus |

DqoJobQueueIncrementalSnapshotModel

Job history snapshot model that returns only changes after a given change sequence.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| job_changes |  | List[DqoJobChangeModel] |
| folder_synchronization_status |  | CloudSynchronizationFoldersStatusModel |
| last_sequence_number |  | long |

DqoJobQueueInitialSnapshotModel

Returns the current snapshot of running jobs.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| jobs |  | List[DqoJobHistoryEntryModel] |
| folder_synchronization_status |  | CloudSynchronizationFoldersStatusModel |
| last_sequence_number |  | long |

ImportTablesResult

Result object from the ImportTablesQueueJob table import job that returns a list of tables that have been imported.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| source_table_specs | Table schemas (including column schemas) of imported tables. | List[TableSpec] |

ImportTablesQueueJobResult

Object returned from the operation that queues a "import tables" job. The result contains the job id that was started and optionally can also contain the result of importing tables if the operation was started with wait=true parameter to wait for the "import tables" job to finish.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| job_id | Job id that identifies a job that was started on the DQOps job queue. | DqoQueueJobId |
| result | Optional result object that is returned only when the wait parameter was true and the "import tables" job has finished. Contains the summary result of importing tables, including table and column schemas of imported tables. | ImportTablesResult |
| status | Job status. | DqoJobStatus |

RunChecksQueueJobResult

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| job_id | Job id that identifies a job that was started on the DQOps job queue. | DqoQueueJobId |
| result | Optional result object that is returned only when the wait parameter was true and the "run checks" job has finished. Contains the summary result of the data quality checks executed, including the severity of the most severe issue detected. The calling code (the data pipeline) can decide if further processing should be continued. | RunChecksResult |
| status | Job status. | DqoJobStatus |
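
When the job is started with wait=true, the calling data pipeline can gate further processing on the returned summary, as the result description suggests. The sketch below assumes that RuleSeverityLevel serializes the severities as "warning", "error", and "fatal".

```python
def should_continue_pipeline(run_checks_job_result: dict) -> bool:
    """Decides whether a data pipeline may proceed after a 'run checks' job."""
    result = run_checks_job_result.get("result")
    if result is None:                         # job not finished (started without wait=true)
        return False
    if result.get("execution_errors", 0) > 0:  # checks that failed to execute at all
        return False
    # Assumption: RuleSeverityLevel values are serialized as lowercase names.
    return result.get("highest_severity") not in ("error", "fatal")
```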

SpringErrorPayload

Object mapped to the default Spring error payload (key/value pairs).

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| timestamp | Error timestamp as an epoch timestamp. | long |
| status | Optional status code. | integer |
| error | Error name. | string |
| exception | Optional exception. | string |
| message | Exception's message. | string |
| path | Exception's stack trace (optional). | string |
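
A short sketch of surfacing this payload on a failed REST call; the field names follow the table above, while the call site and exception type are illustrative.

```python
import requests  # assumption: direct HTTP access to the REST API

def raise_on_error(response: requests.Response) -> None:
    """Raises when the API responded with a SpringErrorPayload."""
    if response.status_code >= 400:
        payload = response.json()
        raise RuntimeError(
            f"DQOps call to {payload.get('path')} failed with "
            f"{payload.get('status')} {payload.get('error')}: {payload.get('message')}"
        )
```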

SynchronizeMultipleFoldersQueueJobResult

Object returned from the operation that queues a "synchronize multiple folders" job. The result contains the job id that was started and optionally can also contain the job finish status if the operation was started with wait=true parameter to wait for the "synchronize multiple folders" job to finish.

The structure of this object is described below

| Property name | Description | Data type |
|---------------|-------------|-----------|
| job_id | Job id that identifies a job that was started on the DQOps job queue. | DqoQueueJobId |
| status | Job status. | DqoJobStatus |