Last updated: July 21, 2025
dqo connection command-line command
Reference documentation for the connection command in DQOps. Use it to add, list, modify, and remove connections.
dqo connection list
List connections that match a given condition
Description
Lists all connections created for the logged-in user that match the conditions specified in the options. It allows the user to filter connections by name, dimension, and label.
Command-line synopsis
$ dqo [dqo options...] connection list [-h] [-fw] [-hl] [-n=<name>] [-of=<outputFormat>]
[-d=<dimensions>]... [-l=<labels>]...
DQOps shell synopsis
dqo> connection list [-h] [-fw] [-hl] [-n=<name>] [-of=<outputFormat>]
[-d=<dimensions>]... [-l=<labels>]...
Command options
All parameters supported by the command are listed below.
Command argument | Description | Required | Accepted values |
---|---|---|---|
-d --dimension |
Dimension filter | ||
-fw --file-write |
Write command response to a file | ||
--headless -hl |
Starts DQOps in a headless mode. When DQOps runs in a headless mode and the application cannot start because the DQOps Cloud API key is missing or the DQOps user home folder is not configured, DQOps will stop silently instead of asking the user to approve the setup of the DQOps user home folder structure and/or log into DQOps Cloud. | ||
-h --help |
Show the help for the command and parameters | ||
-l --label |
Label filter | ||
-n --name |
Connection name filter | ||
-of --output-format |
Output format for tabular responses | TABLE CSV JSON |
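For example, assuming a connection named sales_dwh already exists (the name is only a placeholder), the following calls list all connections and then only the matching connection, returned as JSON:
dqo> connection list
dqo> connection list -n=sales_dwh -of=JSON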
dqo connection add
Add a connection with specified details
Description
Creates a new connection to a database with the specified details, such as the connection name, database type, hostname, username, and password. It allows the user to connect to the database from the application and perform various operations on it.
Command-line synopsis
$ dqo [dqo options...] connection add [-h] [--duckdb-enable-optimizer] [-fw] [-hl]
[--sqlserver-disable-encryption]
[--athena-aws-authentication-mode=<awsAuthenticationMode>]
[--athena-output-location=<athenaOutputLocation>]
[--athena-region=<athenaRegion>]
[--athena-work-group=<athenaWorkGroup>]
[--bigquery-authentication-mode=<authenticationMode>]
[--bigquery-billing-project-id=<billingProjectId>]
[--bigquery-jobs-create-project=<jobsCreateProject>]
[--bigquery-json-key-content=<jsonKeyContent>]
[--bigquery-json-key-path=<jsonKeyPath>]
[--bigquery-quota-project-id=<quotaProjectId>]
[--bigquery-source-project-id=<sourceProjectId>]
[--clickhouse-database=<database>] [--clickhouse-host=<host>]
[--clickhouse-password=<password>] [--clickhouse-port=<port>]
[--clickhouse-user=<user>]
[--databricks-access-token=<accessToken>]
[--databricks-catalog=<catalog>] [--databricks-host=<host>]
[--databricks-http-path=<httpPath>]
[--databricks-initialization-sql=<initializationSql>]
[--databricks-password=<password>] [--databricks-port=<port>]
[--databricks-user=<user>] [--db2-database=<database>]
[--db2-host=<host>] [--db2-password=<password>]
[--db2-platform=<db2PlatformType>] [--db2-port=<port>]
[--db2-user=<user>]
[--duckdb-aws-authentication-mode=<awsAuthenticationMode>]
[--duckdb-aws-default-authentication-chain=<awsDefaultAuthentica
tionChain>] [--duckdb-azure-account-name=<accountName>]
[--duckdb-azure-authentication-mode=<azureAuthenticationMode>]
[--duckdb-azure-client-id=<clientId>]
[--duckdb-azure-client-secret=<clientSecret>]
[--duckdb-azure-tenant-id=<tenantId>]
[--duckdb-database=<database>]
[--duckdb-directories=<directoriesString>]
[--duckdb-files-format-type=<filesFormatType>]
[--duckdb-password=<password>] [--duckdb-profile=<profile>]
[--duckdb-read-mode=<readMode>] [--duckdb-region=<region>]
[--duckdb-storage-type=<storageType>] [--duckdb-user=<user>]
[--hana-host=<host>] [--hana-instance-number=<instanceNumber>]
[--hana-password=<password>] [--hana-port=<port>]
[--hana-user=<user>] [--mariadb-database=<database>]
[--mariadb-host=<host>] [--mariadb-password=<password>]
[--mariadb-port=<port>] [--mariadb-user=<user>]
[--mysql-database=<database>]
[--mysql-engine=<mysqlEngineType>] [--mysql-host=<host>]
[--mysql-password=<password>] [--mysql-port=<port>]
[--mysql-sslmode=<sslmode>] [--mysql-user=<user>] [-n=<name>]
[-of=<outputFormat>] [--oracle-database=<database>]
[--oracle-host=<host>]
[--oracle-initialization-sql=<initializationSql>]
[--oracle-password=<password>] [--oracle-port=<port>]
[--oracle-user=<user>] [--postgresql-database=<database>]
[--postgresql-engine=<postgresqlEngineType>]
[--postgresql-host=<host>] [--postgresql-options=<options>]
[--postgresql-password=<password>] [--postgresql-port=<port>]
[--postgresql-sslmode=<sslmode>] [--postgresql-user=<user>]
[--presto-database=<database>] [--presto-host=<host>]
[--presto-password=<password>] [--presto-port=<port>]
[--presto-user=<user>] [--questdb-database=<database>]
[--questdb-host=<host>] [--questdb-password=<password>]
[--questdb-port=<port>] [--questdb-user=<user>]
[--redshift-authentication-mode=<redshiftAuthenticationMode>]
[--redshift-database=<database>] [--redshift-host=<host>]
[--redshift-password=<password>] [--redshift-port=<port>]
[--redshift-user=<user>]
[--single-store-parameters-spec=<singleStoreDbParametersSpec>]
[--snowflake-account=<account>]
[--snowflake-database=<database>]
[--snowflake-password=<password>] [--snowflake-role=<role>]
[--snowflake-user=<user>] [--snowflake-warehouse=<warehouse>]
[--spark-host=<host>] [--spark-password=<password>]
[--spark-port=<port>] [--spark-user=<user>]
[--sqlserver-authentication-mode=<authenticationMode>]
[--sqlserver-database=<database>] [--sqlserver-host=<host>]
[--sqlserver-password=<password>] [--sqlserver-port=<port>]
[--sqlserver-user=<user>] [-t=<providerType>]
[--teradata-host=<host>] [--teradata-password=<password>]
[--teradata-port=<port>] [--teradata-user=<user>]
[--trino-catalog=<catalog>] [--trino-engine=<trinoEngineType>]
[--trino-host=<host>] [--trino-password=<password>]
[--trino-port=<port>] [--trino-user=<user>]
[-C=<String=String>]... [-D=<String=String>]...
[-DB2=<String=String>]... [-Duck=<String=String>]...
[-E=<String=String>]... [-F=<String=String>]...
[-H=<String=String>]... [-K=<String=String>]...
[-M=<String=String>]... [-MA=<String=String>]...
[-O=<String=String>]... [-P=<String=String>]...
[-Q=<String=String>]... [-R=<String=String>]...
[-S=<String=String>]... [-T=<String=String>]...
[-TE=<String=String>]...
DQOps shell synopsis
dqo> connection add [-h] [--duckdb-enable-optimizer] [-fw] [-hl]
[--sqlserver-disable-encryption]
[--athena-aws-authentication-mode=<awsAuthenticationMode>]
[--athena-output-location=<athenaOutputLocation>]
[--athena-region=<athenaRegion>]
[--athena-work-group=<athenaWorkGroup>]
[--bigquery-authentication-mode=<authenticationMode>]
[--bigquery-billing-project-id=<billingProjectId>]
[--bigquery-jobs-create-project=<jobsCreateProject>]
[--bigquery-json-key-content=<jsonKeyContent>]
[--bigquery-json-key-path=<jsonKeyPath>]
[--bigquery-quota-project-id=<quotaProjectId>]
[--bigquery-source-project-id=<sourceProjectId>]
[--clickhouse-database=<database>] [--clickhouse-host=<host>]
[--clickhouse-password=<password>] [--clickhouse-port=<port>]
[--clickhouse-user=<user>]
[--databricks-access-token=<accessToken>]
[--databricks-catalog=<catalog>] [--databricks-host=<host>]
[--databricks-http-path=<httpPath>]
[--databricks-initialization-sql=<initializationSql>]
[--databricks-password=<password>] [--databricks-port=<port>]
[--databricks-user=<user>] [--db2-database=<database>]
[--db2-host=<host>] [--db2-password=<password>]
[--db2-platform=<db2PlatformType>] [--db2-port=<port>]
[--db2-user=<user>]
[--duckdb-aws-authentication-mode=<awsAuthenticationMode>]
[--duckdb-aws-default-authentication-chain=<awsDefaultAuthentica
tionChain>] [--duckdb-azure-account-name=<accountName>]
[--duckdb-azure-authentication-mode=<azureAuthenticationMode>]
[--duckdb-azure-client-id=<clientId>]
[--duckdb-azure-client-secret=<clientSecret>]
[--duckdb-azure-tenant-id=<tenantId>]
[--duckdb-database=<database>]
[--duckdb-directories=<directoriesString>]
[--duckdb-files-format-type=<filesFormatType>]
[--duckdb-password=<password>] [--duckdb-profile=<profile>]
[--duckdb-read-mode=<readMode>] [--duckdb-region=<region>]
[--duckdb-storage-type=<storageType>] [--duckdb-user=<user>]
[--hana-host=<host>] [--hana-instance-number=<instanceNumber>]
[--hana-password=<password>] [--hana-port=<port>]
[--hana-user=<user>] [--mariadb-database=<database>]
[--mariadb-host=<host>] [--mariadb-password=<password>]
[--mariadb-port=<port>] [--mariadb-user=<user>]
[--mysql-database=<database>]
[--mysql-engine=<mysqlEngineType>] [--mysql-host=<host>]
[--mysql-password=<password>] [--mysql-port=<port>]
[--mysql-sslmode=<sslmode>] [--mysql-user=<user>] [-n=<name>]
[-of=<outputFormat>] [--oracle-database=<database>]
[--oracle-host=<host>]
[--oracle-initialization-sql=<initializationSql>]
[--oracle-password=<password>] [--oracle-port=<port>]
[--oracle-user=<user>] [--postgresql-database=<database>]
[--postgresql-engine=<postgresqlEngineType>]
[--postgresql-host=<host>] [--postgresql-options=<options>]
[--postgresql-password=<password>] [--postgresql-port=<port>]
[--postgresql-sslmode=<sslmode>] [--postgresql-user=<user>]
[--presto-database=<database>] [--presto-host=<host>]
[--presto-password=<password>] [--presto-port=<port>]
[--presto-user=<user>] [--questdb-database=<database>]
[--questdb-host=<host>] [--questdb-password=<password>]
[--questdb-port=<port>] [--questdb-user=<user>]
[--redshift-authentication-mode=<redshiftAuthenticationMode>]
[--redshift-database=<database>] [--redshift-host=<host>]
[--redshift-password=<password>] [--redshift-port=<port>]
[--redshift-user=<user>]
[--single-store-parameters-spec=<singleStoreDbParametersSpec>]
[--snowflake-account=<account>]
[--snowflake-database=<database>]
[--snowflake-password=<password>] [--snowflake-role=<role>]
[--snowflake-user=<user>] [--snowflake-warehouse=<warehouse>]
[--spark-host=<host>] [--spark-password=<password>]
[--spark-port=<port>] [--spark-user=<user>]
[--sqlserver-authentication-mode=<authenticationMode>]
[--sqlserver-database=<database>] [--sqlserver-host=<host>]
[--sqlserver-password=<password>] [--sqlserver-port=<port>]
[--sqlserver-user=<user>] [-t=<providerType>]
[--teradata-host=<host>] [--teradata-password=<password>]
[--teradata-port=<port>] [--teradata-user=<user>]
[--trino-catalog=<catalog>] [--trino-engine=<trinoEngineType>]
[--trino-host=<host>] [--trino-password=<password>]
[--trino-port=<port>] [--trino-user=<user>]
[-C=<String=String>]... [-D=<String=String>]...
[-DB2=<String=String>]... [-Duck=<String=String>]...
[-E=<String=String>]... [-F=<String=String>]...
[-H=<String=String>]... [-K=<String=String>]...
[-M=<String=String>]... [-MA=<String=String>]...
[-O=<String=String>]... [-P=<String=String>]...
[-Q=<String=String>]... [-R=<String=String>]...
[-S=<String=String>]... [-T=<String=String>]...
[-TE=<String=String>]...
Command options
All parameters supported by the command are listed below.
Command argument | Description | Required | Accepted values |
---|---|---|---|
--athena-aws-authentication-mode |
The authentication mode for AWS Athena. Supports also a null configuration with a custom environment variable. | iam default_credentials |
|
--athena-output-location |
The location in Amazon S3 where query results will be stored. Supports also a null configuration with a custom environment variable. | ||
--athena-region |
The AWS Athena Region where queries will be run. Supports also a null configuration with a custom environment variable. | ||
--athena-work-group |
The Athena WorkGroup in which queries will run. Supports also a null configuration with a custom environment variable. | ||
--bigquery-authentication-mode |
Bigquery authentication mode. The default value uses the current GCP application default credentials. The default GCP credentials are the Service Account of a VM in the GCP cloud, a GCP JSON key file whose path is set in the GOOGLE_APPLICATION_CREDENTIALS environment variable, or the default GCP credentials obtained on a user's computer by running 'gcloud auth application-default login' from the command line. | google_application_credentials json_key_content json_key_path |
|
--bigquery-billing-project-id |
Bigquery billing GCP project id. This is the project used as the default GCP project. The calling user must have a bigquery.jobs.create permission in this project. | ||
--bigquery-jobs-create-project |
Configures how to select the project that will be used to start BigQuery jobs and will be used for billing. The user/service identified by the credentials must have the bigquery.jobs.create permission in that project. | create_jobs_in_source_project create_jobs_in_default_project_from_credentials create_jobs_in_selected_billing_project_id |
|
--bigquery-json-key-content |
Bigquery service account key content as JSON. | ||
--bigquery-json-key-path |
Path to a GCP service account key JSON file used to authenticate to Bigquery. | ||
--bigquery-quota-project-id |
Bigquery quota GCP project id. | ||
--bigquery-source-project-id |
Bigquery source GCP project id. This is the project that has datasets that will be imported. | ||
--clickhouse-database |
ClickHouse database name | ||
--clickhouse-host |
ClickHouse host name | ||
--clickhouse-password |
ClickHouse database password. The value can be in the null format to use dynamic substitution. | ||
--clickhouse-port |
ClickHouse port number | ||
--clickhouse-user |
ClickHouse user name. The value can be in the null format to use dynamic substitution. | ||
--databricks-access-token |
Databricks access token for the warehouse. | ||
--databricks-catalog |
Databricks catalog name. | ||
--databricks-host |
Databricks host name | ||
--databricks-http-path |
Databricks http path to the warehouse. For example: /sql/1.0/warehouses/ |
||
--databricks-initialization-sql |
Custom SQL that is executed after connecting to Databricks. | ||
--databricks-password |
(Obsolete) Databricks database password. | ||
--databricks-port |
Databricks port number | ||
--databricks-user |
(Obsolete) Databricks user name. | ||
--db2-database |
DB2 database name | ||
--db2-host |
DB2 host name | ||
--db2-password |
DB2 database password. The value can be in the null format to use dynamic substitution. | ||
--db2-platform |
DB2 platform type. | luw zos |
|
--db2-port |
DB2 port number | ||
--db2-user |
DB2 user name. The value can be in the null format to use dynamic substitution. | ||
--duckdb-aws-authentication-mode |
The authentication mode for AWS. Supports also a null configuration with a custom environment variable. | iam default_credentials |
|
--duckdb-aws-default-authentication-chain |
The default authentication chain for AWS. For example: 'env;config;sts;sso;instance;process'. Supports also a null configuration with a custom environment variable. | ||
--duckdb-azure-account-name |
Azure Storage Account Name used by DuckDB Secret Manager. The value can be in the null format to use dynamic substitution. | ||
--duckdb-azure-authentication-mode |
The authentication mode for Azure. Supports also a null configuration with a custom environment variable. | connection_string credential_chain service_principal default_credentials |
|
--duckdb-azure-client-id |
Azure Client ID used by DuckDB Secret Manager. The value can be in the null format to use dynamic substitution. | ||
--duckdb-azure-client-secret |
Azure Client Secret used by DuckDB Secret Manager. The value can be in the null format to use dynamic substitution. | ||
--duckdb-azure-tenant-id |
Azure Tenant ID used by DuckDB Secret Manager. The value can be in the null format to use dynamic substitution. | ||
--duckdb-database |
DuckDB database name for in-memory read mode. The value can be in the null format to use dynamic substitution. | ||
--duckdb-directories |
Virtual schema name to directory mappings. The path must be an absolute path. | ||
--duckdb-enable-optimizer |
Enables a query optimizer that uses statistics. By default, the optimizer is disabled so that Parquet files with invalid or outdated statistics can still be analyzed. | ||
--duckdb-files-format-type |
Type of source files format for DuckDB. | csv json parquet avro iceberg delta_lake |
|
--duckdb-password |
DuckDB password for a remote storage type. The value can be in the null format to use dynamic substitution. | ||
--duckdb-profile |
The AWS profile used for the default authentication. The value can be in the null format to use dynamic substitution. | ||
--duckdb-read-mode |
DuckDB read mode. | in_memory files |
|
--duckdb-region |
The region for the storage credentials. The value can be in the null format to use dynamic substitution. | ||
--duckdb-storage-type |
The storage type. | local s3 azure gcs |
|
--duckdb-user |
DuckDB user name for a remote storage type. The value can be in the null format to use dynamic substitution. | ||
-fw --file-write |
Write command response to a file | ||
--hana-host |
Hana host name | ||
--hana-instance-number |
Hana instance number | ||
--hana-password |
Hana database password. The value can be in the null format to use dynamic substitution. | ||
--hana-port |
Hana port number | ||
--hana-user |
Hana user name. The value can be in the null format to use dynamic substitution. | ||
--headless -hl |
Starts DQOps in a headless mode. When DQOps runs in a headless mode and the application cannot start because the DQOps Cloud API key is missing or the DQOps user home folder is not configured, DQOps will stop silently instead of asking the user to approve the setup of the DQOps user home folder structure and/or log into DQOps Cloud. | ||
-h --help |
Show the help for the command and parameters | ||
--mariadb-database |
MariaDB database name. The value can be in the null format to use dynamic substitution. | ||
--mariadb-host |
MariaDB host name | ||
--mariadb-password |
MariaDB database password. The value can be in the null format to use dynamic substitution. | ||
--mariadb-port |
MariaDB port number | ||
--mariadb-user |
MariaDB user name. The value can be in the null format to use dynamic substitution. | ||
--mysql-database |
MySQL database name. The value can be in the null format to use dynamic substitution. | ||
--mysql-engine |
MySQL engine type. Supports also a null configuration with a custom environment variable. | mysql singlestoredb |
|
--mysql-host |
MySQL host name | ||
--mysql-password |
MySQL database password. The value can be in the null format to use dynamic substitution. | ||
--mysql-port |
MySQL port number | ||
--mysql-sslmode |
SslMode MySQL connection parameter | DISABLED PREFERRED REQUIRED VERIFY_CA VERIFY_IDENTITY |
|
--mysql-user |
MySQL user name. The value can be in the null format to use dynamic substitution. | ||
-n --name |
Connection name | ||
--oracle-database |
Oracle database name. The value can be in the null format to use dynamic substitution. | ||
--oracle-host |
Oracle host name | ||
--oracle-initialization-sql |
Custom SQL that is executed after connecting to Oracle. This SQL script can configure the default language, for example: alter session set NLS_DATE_FORMAT='YYYY-DD-MM HH24:MI:SS' | ||
--oracle-password |
Oracle database password. The value can be in the null format to use dynamic substitution. | ||
--oracle-port |
Oracle port number | ||
--oracle-user |
Oracle user name. The value can be in the null format to use dynamic substitution. | ||
-of --output-format |
Output format for tabular responses | TABLE CSV JSON |
|
--postgresql-database |
PostgreSQL database name. The value can be in the null format to use dynamic substitution. | ||
--postgresql-engine |
Postgresql engine type. Supports also a null configuration with a custom environment variable. | postgresql timescale |
|
--postgresql-host |
PostgreSQL host name | ||
--postgresql-options |
PostgreSQL connection 'options' initialization parameter. For example, setting this to -c statement_timeout=5min sets the statement timeout for this session to 5 minutes. | ||
--postgresql-password |
PostgreSQL database password. The value can be in the null format to use dynamic substitution. | ||
--postgresql-port |
PostgreSQL port number | ||
--postgresql-sslmode |
Connect to PostgreSQL using sslmode connection parameter | disable allow prefer require verify_ca verify_full |
|
--postgresql-user |
PostgreSQL user name. The value can be in the null format to use dynamic substitution. | ||
--presto-database |
Presto database name. The value can be in the null format to use dynamic substitution. | ||
--presto-host |
Presto host name | ||
--presto-password |
Presto database password. The value can be in the null format to use dynamic substitution. | ||
--presto-port |
Presto port number | ||
--presto-user |
Presto user name. The value can be in the null format to use dynamic substitution. | ||
-t --provider |
Connection provider type | bigquery clickhouse databricks db2 duckdb hana mariadb mysql oracle postgresql presto questdb redshift snowflake spark sqlserver teradata trino |
|
--questdb-database |
QuestDB database name. The value can be in the null format to use dynamic substitution. | ||
--questdb-host |
QuestDB host name | ||
--questdb-password |
QuestDB database password. The value can be in the null format to use dynamic substitution. | ||
--questdb-port |
QuestDB port number | ||
--questdb-user |
QuestDB user name. The value can be in the null format to use dynamic substitution. | ||
--redshift-authentication-mode |
The authentication mode for AWS. Supports also a null configuration with a custom environment variable. | iam default_credentials user_password |
|
--redshift-database |
Redshift database name. The value can be in the null format to use dynamic substitution. | ||
--redshift-host |
Redshift host name | ||
--redshift-password |
Redshift database password. The value can be in the null format to use dynamic substitution. | ||
--redshift-port |
Redshift port number | ||
--redshift-user |
Redshift user name. The value can be in the null format to use dynamic substitution. | ||
--single-store-parameters-spec |
Single Store DB parameters spec. | ||
--snowflake-account |
Snowflake account name, e.g. |
||
--snowflake-database |
Snowflake database name. The value can be in the null format to use dynamic substitution. | ||
--snowflake-password |
Snowflake database password. The value can be in the null format to use dynamic substitution. | ||
--snowflake-role |
Snowflake role name. | ||
--snowflake-user |
Snowflake user name. The value can be in the null format to use dynamic substitution. | ||
--snowflake-warehouse |
Snowflake warehouse name. | ||
--spark-host |
Spark host name | ||
--spark-password |
Spark database password. The value can be in the null format to use dynamic substitution. | ||
--spark-port |
Spark port number | ||
--spark-user |
Spark user name. The value can be in the null format to use dynamic substitution. | ||
--sqlserver-authentication-mode |
Authentication mode for SQL Server. The value can be in the null format to use dynamic substitution. | sql_password active_directory_password active_directory_service_principal active_directory_default |
|
--sqlserver-database |
SQL Server database name. The value can be in the null format to use dynamic substitution. | ||
--sqlserver-disable-encryption |
Disable SSL encryption parameter. The default value is false. You may need to disable encryption when SQL Server is started in Docker. | ||
--sqlserver-host |
SQL Server host name | ||
--sqlserver-password |
SQL Server database password. The value can be in the null format to use dynamic substitution. | ||
--sqlserver-port |
SQL Server port number | ||
--sqlserver-user |
SQL Server user name. The value can be in the null format to use dynamic substitution. | ||
--teradata-host |
Teradata host name | ||
--teradata-password |
Teradata database password. The value can be in the null format to use dynamic substitution. | ||
--teradata-port |
Teradata port number | ||
--teradata-user |
Teradata user name. The value can be in the null format to use dynamic substitution. | ||
--trino-catalog |
The Trino catalog that contains the databases and the tables that will be accessed with the driver. Supports also a null configuration with a custom environment variable. | ||
--trino-engine |
Trino engine type. | trino athena |
|
--trino-host |
Trino host name. | ||
--trino-password |
Trino database password. The value can be in the null format to use dynamic substitution. | ||
--trino-port |
Trino port number. | ||
--trino-user |
Trino user name. The value can be in the null format to use dynamic substitution. | ||
-C |
ClickHouse additional properties that are added to the JDBC connection string | ||
-D |
Databricks additional properties that are added to the JDBC connection string | ||
-DB2 |
DB2 additional properties that are added to the JDBC connection string | ||
-Duck |
DuckDB additional properties that are added to the JDBC connection string | ||
-E |
Presto additional properties that are added to the JDBC connection string. | ||
-F |
Snowflake additional properties that are added to the JDBC connection string | ||
-H |
Hana additional properties that are added to the JDBC connection string | ||
-K |
Spark additional properties that are added to the JDBC connection string | ||
-M |
MySQL additional properties that are added to the JDBC connection string | ||
-MA |
MariaDB additional properties that are added to the JDBC connection string | ||
-O |
Oracle additional properties that are added to the JDBC connection string | ||
-P |
PostgreSQL additional properties that are added to the JDBC connection string | ||
-Q |
QuestDB additional properties that are added to the JDBC connection string | ||
-R |
Redshift additional properties that are added to the JDBC connection string | ||
-S |
SQL Server additional properties that are added to the JDBC connection string | ||
-T |
Trino additional properties that are added to the JDBC connection string | ||
-TE |
Teradata additional properties that are added to the JDBC connection string. |
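For example, a hypothetical call that registers a PostgreSQL data source could look as follows; the connection name, host, database, and credentials are placeholders:
dqo> connection add -n=my_postgresql -t=postgresql --postgresql-host=localhost --postgresql-port=5432 --postgresql-database=analytics --postgresql-user=dqo_user --postgresql-password=secret
A file-based DuckDB source reading CSV files could be added in a similar way, again with placeholder values:
dqo> connection add -n=local_csv -t=duckdb --duckdb-read-mode=files --duckdb-storage-type=local --duckdb-files-format-type=csv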
dqo connection remove
Remove the connection(s) that match a given condition
Description
Removes the connection or connections that match the conditions specified in the options. It allows the user to remove any unwanted connections that are no longer needed.
Command-line synopsis
$ dqo [dqo options...] connection remove [-h] [-fw] [-hl] [-n=<name>] [-of=<outputFormat>]
DQOps shell synopsis
dqo> connection remove [-h] [-fw] [-hl] [-n=<name>] [-of=<outputFormat>]
Command options
All parameters supported by the command are listed below.
Command argument | Description | Required | Accepted values |
---|---|---|---|
-fw --file-write |
Write command response to a file | ||
--headless -hl |
Starts DQOps in a headless mode. When DQOps runs in a headless mode and the application cannot start because the DQOps Cloud API key is missing or the DQOps user home folder is not configured, DQOps will stop silently instead of asking the user to approve the setup of the DQOps user home folder structure and/or log into DQOps Cloud. | ||
-h --help |
Show the help for the command and parameters | ||
-n --name |
Connection name | ||
-of --output-format |
Output format for tabular responses | TABLE CSV JSON |
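For example, a connection that is no longer needed (the name is a placeholder) could be removed with:
dqo> connection remove -n=old_staging_db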
dqo connection update
Update the connection(s) that match a given condition
Description
Updates the connection or connections that match the conditions specified in the options with new details. It allows the user to modify existing connections in the application.
Command-line synopsis
$ dqo [dqo options...] connection update [-h] [--duckdb-enable-optimizer] [-fw] [-hl]
[--sqlserver-disable-encryption]
[--athena-aws-authentication-mode=<awsAuthenticationMode>]
[--athena-output-location=<athenaOutputLocation>]
[--athena-region=<athenaRegion>]
[--athena-work-group=<athenaWorkGroup>]
[--bigquery-authentication-mode=<authenticationMode>]
[--bigquery-billing-project-id=<billingProjectId>]
[--bigquery-jobs-create-project=<jobsCreateProject>]
[--bigquery-json-key-content=<jsonKeyContent>]
[--bigquery-json-key-path=<jsonKeyPath>]
[--bigquery-quota-project-id=<quotaProjectId>]
[--bigquery-source-project-id=<sourceProjectId>]
[--clickhouse-database=<database>]
[--clickhouse-host=<host>]
[--clickhouse-password=<password>]
[--clickhouse-port=<port>] [--clickhouse-user=<user>]
[--databricks-access-token=<accessToken>]
[--databricks-catalog=<catalog>] [--databricks-host=<host>]
[--databricks-http-path=<httpPath>]
[--databricks-initialization-sql=<initializationSql>]
[--databricks-password=<password>]
[--databricks-port=<port>] [--databricks-user=<user>]
[--db2-database=<database>] [--db2-host=<host>]
[--db2-password=<password>]
[--db2-platform=<db2PlatformType>] [--db2-port=<port>]
[--db2-user=<user>]
[--duckdb-aws-authentication-mode=<awsAuthenticationMode>]
[--duckdb-aws-default-authentication-chain=<awsDefaultAuthent
icationChain>] [--duckdb-azure-account-name=<accountName>]
[--duckdb-azure-authentication-mode=<azureAuthenticationMode>
] [--duckdb-azure-client-id=<clientId>]
[--duckdb-azure-client-secret=<clientSecret>]
[--duckdb-azure-tenant-id=<tenantId>]
[--duckdb-database=<database>]
[--duckdb-directories=<directoriesString>]
[--duckdb-files-format-type=<filesFormatType>]
[--duckdb-password=<password>] [--duckdb-profile=<profile>]
[--duckdb-read-mode=<readMode>] [--duckdb-region=<region>]
[--duckdb-storage-type=<storageType>] [--duckdb-user=<user>]
[--hana-host=<host>]
[--hana-instance-number=<instanceNumber>]
[--hana-password=<password>] [--hana-port=<port>]
[--hana-user=<user>] [--mariadb-database=<database>]
[--mariadb-host=<host>] [--mariadb-password=<password>]
[--mariadb-port=<port>] [--mariadb-user=<user>]
[--mysql-database=<database>]
[--mysql-engine=<mysqlEngineType>] [--mysql-host=<host>]
[--mysql-password=<password>] [--mysql-port=<port>]
[--mysql-sslmode=<sslmode>] [--mysql-user=<user>]
[-n=<name>] [-of=<outputFormat>]
[--oracle-database=<database>] [--oracle-host=<host>]
[--oracle-initialization-sql=<initializationSql>]
[--oracle-password=<password>] [--oracle-port=<port>]
[--oracle-user=<user>] [--postgresql-database=<database>]
[--postgresql-engine=<postgresqlEngineType>]
[--postgresql-host=<host>] [--postgresql-options=<options>]
[--postgresql-password=<password>]
[--postgresql-port=<port>] [--postgresql-sslmode=<sslmode>]
[--postgresql-user=<user>] [--presto-database=<database>]
[--presto-host=<host>] [--presto-password=<password>]
[--presto-port=<port>] [--presto-user=<user>]
[--questdb-database=<database>] [--questdb-host=<host>]
[--questdb-password=<password>] [--questdb-port=<port>]
[--questdb-user=<user>]
[--redshift-authentication-mode=<redshiftAuthenticationMode>]
[--redshift-database=<database>] [--redshift-host=<host>]
[--redshift-password=<password>] [--redshift-port=<port>]
[--redshift-user=<user>]
[--single-store-parameters-spec=<singleStoreDbParametersSpec>
] [--snowflake-account=<account>]
[--snowflake-database=<database>]
[--snowflake-password=<password>] [--snowflake-role=<role>]
[--snowflake-user=<user>]
[--snowflake-warehouse=<warehouse>] [--spark-host=<host>]
[--spark-password=<password>] [--spark-port=<port>]
[--spark-user=<user>]
[--sqlserver-authentication-mode=<authenticationMode>]
[--sqlserver-database=<database>] [--sqlserver-host=<host>]
[--sqlserver-password=<password>] [--sqlserver-port=<port>]
[--sqlserver-user=<user>] [--teradata-host=<host>]
[--teradata-password=<password>] [--teradata-port=<port>]
[--teradata-user=<user>] [--trino-catalog=<catalog>]
[--trino-engine=<trinoEngineType>] [--trino-host=<host>]
[--trino-password=<password>] [--trino-port=<port>]
[--trino-user=<user>] [-C=<String=String>]...
[-D=<String=String>]... [-DB2=<String=String>]...
[-Duck=<String=String>]... [-E=<String=String>]...
[-F=<String=String>]... [-H=<String=String>]...
[-K=<String=String>]... [-M=<String=String>]...
[-MA=<String=String>]... [-O=<String=String>]...
[-P=<String=String>]... [-Q=<String=String>]...
[-R=<String=String>]... [-S=<String=String>]...
[-T=<String=String>]... [-TE=<String=String>]...
DQOps shell synopsis
dqo> connection update [-h] [--duckdb-enable-optimizer] [-fw] [-hl]
[--sqlserver-disable-encryption]
[--athena-aws-authentication-mode=<awsAuthenticationMode>]
[--athena-output-location=<athenaOutputLocation>]
[--athena-region=<athenaRegion>]
[--athena-work-group=<athenaWorkGroup>]
[--bigquery-authentication-mode=<authenticationMode>]
[--bigquery-billing-project-id=<billingProjectId>]
[--bigquery-jobs-create-project=<jobsCreateProject>]
[--bigquery-json-key-content=<jsonKeyContent>]
[--bigquery-json-key-path=<jsonKeyPath>]
[--bigquery-quota-project-id=<quotaProjectId>]
[--bigquery-source-project-id=<sourceProjectId>]
[--clickhouse-database=<database>]
[--clickhouse-host=<host>]
[--clickhouse-password=<password>]
[--clickhouse-port=<port>] [--clickhouse-user=<user>]
[--databricks-access-token=<accessToken>]
[--databricks-catalog=<catalog>] [--databricks-host=<host>]
[--databricks-http-path=<httpPath>]
[--databricks-initialization-sql=<initializationSql>]
[--databricks-password=<password>]
[--databricks-port=<port>] [--databricks-user=<user>]
[--db2-database=<database>] [--db2-host=<host>]
[--db2-password=<password>]
[--db2-platform=<db2PlatformType>] [--db2-port=<port>]
[--db2-user=<user>]
[--duckdb-aws-authentication-mode=<awsAuthenticationMode>]
[--duckdb-aws-default-authentication-chain=<awsDefaultAuthent
icationChain>] [--duckdb-azure-account-name=<accountName>]
[--duckdb-azure-authentication-mode=<azureAuthenticationMode>
] [--duckdb-azure-client-id=<clientId>]
[--duckdb-azure-client-secret=<clientSecret>]
[--duckdb-azure-tenant-id=<tenantId>]
[--duckdb-database=<database>]
[--duckdb-directories=<directoriesString>]
[--duckdb-files-format-type=<filesFormatType>]
[--duckdb-password=<password>] [--duckdb-profile=<profile>]
[--duckdb-read-mode=<readMode>] [--duckdb-region=<region>]
[--duckdb-storage-type=<storageType>] [--duckdb-user=<user>]
[--hana-host=<host>]
[--hana-instance-number=<instanceNumber>]
[--hana-password=<password>] [--hana-port=<port>]
[--hana-user=<user>] [--mariadb-database=<database>]
[--mariadb-host=<host>] [--mariadb-password=<password>]
[--mariadb-port=<port>] [--mariadb-user=<user>]
[--mysql-database=<database>]
[--mysql-engine=<mysqlEngineType>] [--mysql-host=<host>]
[--mysql-password=<password>] [--mysql-port=<port>]
[--mysql-sslmode=<sslmode>] [--mysql-user=<user>]
[-n=<name>] [-of=<outputFormat>]
[--oracle-database=<database>] [--oracle-host=<host>]
[--oracle-initialization-sql=<initializationSql>]
[--oracle-password=<password>] [--oracle-port=<port>]
[--oracle-user=<user>] [--postgresql-database=<database>]
[--postgresql-engine=<postgresqlEngineType>]
[--postgresql-host=<host>] [--postgresql-options=<options>]
[--postgresql-password=<password>]
[--postgresql-port=<port>] [--postgresql-sslmode=<sslmode>]
[--postgresql-user=<user>] [--presto-database=<database>]
[--presto-host=<host>] [--presto-password=<password>]
[--presto-port=<port>] [--presto-user=<user>]
[--questdb-database=<database>] [--questdb-host=<host>]
[--questdb-password=<password>] [--questdb-port=<port>]
[--questdb-user=<user>]
[--redshift-authentication-mode=<redshiftAuthenticationMode>]
[--redshift-database=<database>] [--redshift-host=<host>]
[--redshift-password=<password>] [--redshift-port=<port>]
[--redshift-user=<user>]
[--single-store-parameters-spec=<singleStoreDbParametersSpec>
] [--snowflake-account=<account>]
[--snowflake-database=<database>]
[--snowflake-password=<password>] [--snowflake-role=<role>]
[--snowflake-user=<user>]
[--snowflake-warehouse=<warehouse>] [--spark-host=<host>]
[--spark-password=<password>] [--spark-port=<port>]
[--spark-user=<user>]
[--sqlserver-authentication-mode=<authenticationMode>]
[--sqlserver-database=<database>] [--sqlserver-host=<host>]
[--sqlserver-password=<password>] [--sqlserver-port=<port>]
[--sqlserver-user=<user>] [--teradata-host=<host>]
[--teradata-password=<password>] [--teradata-port=<port>]
[--teradata-user=<user>] [--trino-catalog=<catalog>]
[--trino-engine=<trinoEngineType>] [--trino-host=<host>]
[--trino-password=<password>] [--trino-port=<port>]
[--trino-user=<user>] [-C=<String=String>]...
[-D=<String=String>]... [-DB2=<String=String>]...
[-Duck=<String=String>]... [-E=<String=String>]...
[-F=<String=String>]... [-H=<String=String>]...
[-K=<String=String>]... [-M=<String=String>]...
[-MA=<String=String>]... [-O=<String=String>]...
[-P=<String=String>]... [-Q=<String=String>]...
[-R=<String=String>]... [-S=<String=String>]...
[-T=<String=String>]... [-TE=<String=String>]...
Command options
All parameters supported by the command are listed below.
Command argument | Description | Required | Accepted values |
---|---|---|---|
--athena-aws-authentication-mode |
The authentication mode for AWS Athena. Supports also a null configuration with a custom environment variable. | iam default_credentials |
|
--athena-output-location |
The location in Amazon S3 where query results will be stored. Supports also a null configuration with a custom environment variable. | ||
--athena-region |
The AWS Athena Region where queries will be run. Supports also a null configuration with a custom environment variable. | ||
--athena-work-group |
The Athena WorkGroup in which queries will run. Supports also a null configuration with a custom environment variable. | ||
--bigquery-authentication-mode |
Bigquery authentication mode. The default value uses the current GCP application default credentials. The default GCP credentials are the Service Account of a VM in the GCP cloud, a GCP JSON key file whose path is set in the GOOGLE_APPLICATION_CREDENTIALS environment variable, or the default GCP credentials obtained on a user's computer by running 'gcloud auth application-default login' from the command line. | google_application_credentials json_key_content json_key_path |
|
--bigquery-billing-project-id |
Bigquery billing GCP project id. This is the project used as the default GCP project. The calling user must have a bigquery.jobs.create permission in this project. | ||
--bigquery-jobs-create-project |
Configures how to select the project that will be used to start BigQuery jobs and will be used for billing. The user/service identified by the credentials must have the bigquery.jobs.create permission in that project. | create_jobs_in_source_project create_jobs_in_default_project_from_credentials create_jobs_in_selected_billing_project_id |
|
--bigquery-json-key-content |
Bigquery service account key content as JSON. | ||
--bigquery-json-key-path |
Path to a GCP service account key JSON file used to authenticate to Bigquery. | ||
--bigquery-quota-project-id |
Bigquery quota GCP project id. | ||
--bigquery-source-project-id |
Bigquery source GCP project id. This is the project that has datasets that will be imported. | ||
--clickhouse-database |
ClickHouse database name | ||
--clickhouse-host |
ClickHouse host name | ||
--clickhouse-password |
ClickHouse database password. The value can be in the null format to use dynamic substitution. | ||
--clickhouse-port |
ClickHouse port number | ||
--clickhouse-user |
ClickHouse user name. The value can be in the null format to use dynamic substitution. | ||
--databricks-access-token |
Databricks access token for the warehouse. | ||
--databricks-catalog |
Databricks catalog name. | ||
--databricks-host |
Databricks host name | ||
--databricks-http-path |
Databricks http path to the warehouse. For example: /sql/1.0/warehouses/ |
||
--databricks-initialization-sql |
Custom SQL that is executed after connecting to Databricks. | ||
--databricks-password |
(Obsolete) Databricks database password. | ||
--databricks-port |
Databricks port number | ||
--databricks-user |
(Obsolete) Databricks user name. | ||
--db2-database |
DB2 database name | ||
--db2-host |
DB2 host name | ||
--db2-password |
DB2 database password. The value can be in the null format to use dynamic substitution. | ||
--db2-platform |
DB2 platform type. | luw zos |
|
--db2-port |
DB2 port number | ||
--db2-user |
DB2 user name. The value can be in the null format to use dynamic substitution. | ||
--duckdb-aws-authentication-mode |
The authentication mode for AWS. Supports also a null configuration with a custom environment variable. | iam default_credentials |
|
--duckdb-aws-default-authentication-chain |
The default authentication chain for AWS. For example: 'env;config;sts;sso;instance;process'. Supports also a null configuration with a custom environment variable. | ||
--duckdb-azure-account-name |
Azure Storage Account Name used by DuckDB Secret Manager. The value can be in the null format to use dynamic substitution. | ||
--duckdb-azure-authentication-mode |
The authentication mode for Azure. Supports also a null configuration with a custom environment variable. | connection_string credential_chain service_principal default_credentials |
|
--duckdb-azure-client-id |
Azure Client ID used by DuckDB Secret Manager. The value can be in the null format to use dynamic substitution. | ||
--duckdb-azure-client-secret |
Azure Client Secret used by DuckDB Secret Manager. The value can be in the null format to use dynamic substitution. | ||
--duckdb-azure-tenant-id |
Azure Tenant ID used by DuckDB Secret Manager. The value can be in the null format to use dynamic substitution. | ||
--duckdb-database |
DuckDB database name for in-memory read mode. The value can be in the null format to use dynamic substitution. | ||
--duckdb-directories |
Virtual schema name to directory mappings. The path must be an absolute path. | ||
--duckdb-enable-optimizer |
Enables a query optimizer that uses statistics. By default, the optimizer is disabled so that Parquet files with invalid or outdated statistics can still be analyzed. | ||
--duckdb-files-format-type |
Type of source files format for DuckDB. | csv json parquet avro iceberg delta_lake |
|
--duckdb-password |
DuckDB password for a remote storage type. The value can be in the null format to use dynamic substitution. | ||
--duckdb-profile |
The AWS profile used for the default authentication. The value can be in the null format to use dynamic substitution. | ||
--duckdb-read-mode |
DuckDB read mode. | in_memory files |
|
--duckdb-region |
The region for the storage credentials. The value can be in the null format to use dynamic substitution. | ||
--duckdb-storage-type |
The storage type. | local s3 azure gcs |
|
--duckdb-user |
DuckDB user name for a remote storage type. The value can be in the null format to use dynamic substitution. | ||
-fw --file-write |
Write command response to a file | ||
--hana-host |
Hana host name | ||
--hana-instance-number |
Hana instance number | ||
--hana-password |
Hana database password. The value can be in the null format to use dynamic substitution. | ||
--hana-port |
Hana port number | ||
--hana-user |
Hana user name. The value can be in the null format to use dynamic substitution. | ||
--headless -hl |
Starts DQOps in a headless mode. When DQOps runs in a headless mode and the application cannot start because the DQOps Cloud API key is missing or the DQOps user home folder is not configured, DQOps will stop silently instead of asking the user to approve the setup of the DQOps user home folder structure and/or log into DQOps Cloud. | ||
-h --help |
Show the help for the command and parameters | ||
--mariadb-database |
MariaDB database name. The value can be in the null format to use dynamic substitution. | ||
--mariadb-host |
MariaDB host name | ||
--mariadb-password |
MariaDB database password. The value can be in the null format to use dynamic substitution. | ||
--mariadb-port |
MariaDB port number | ||
--mariadb-user |
MariaDB user name. The value can be in the null format to use dynamic substitution. | ||
--mysql-database |
MySQL database name. The value can be in the null format to use dynamic substitution. | ||
--mysql-engine |
MySQL engine type. Supports also a null configuration with a custom environment variable. | mysql singlestoredb |
|
--mysql-host |
MySQL host name | ||
--mysql-password |
MySQL database password. The value can be in the null format to use dynamic substitution. | ||
--mysql-port |
MySQL port number | ||
--mysql-sslmode |
SslMode MySQL connection parameter | DISABLED PREFERRED REQUIRED VERIFY_CA VERIFY_IDENTITY |
|
--mysql-user |
MySQL user name. The value can be in the null format to use dynamic substitution. | ||
-n --name |
Connection name. Supports wildcards for changing multiple connections at once, e.g. "conn*" | ||
--oracle-database |
Oracle database name. The value can be in the null format to use dynamic substitution. | ||
--oracle-host |
Oracle host name | ||
--oracle-initialization-sql |
Custom SQL that is executed after connecting to Oracle. This SQL script can configure the default language, for example: alter session set NLS_DATE_FORMAT='YYYY-DD-MM HH24:MI:SS' | ||
--oracle-password |
Oracle database password. The value can be in the null format to use dynamic substitution. | ||
--oracle-port |
Oracle port number | ||
--oracle-user |
Oracle user name. The value can be in the null format to use dynamic substitution. | ||
-of --output-format |
Output format for tabular responses | TABLE CSV JSON |
|
--postgresql-database |
PostgreSQL database name. The value can be in the null format to use dynamic substitution. | ||
--postgresql-engine |
Postgresql engine type. Supports also a null configuration with a custom environment variable. | postgresql timescale |
|
--postgresql-host |
PostgreSQL host name | ||
--postgresql-options |
PostgreSQL connection 'options' initialization parameter. For example, setting this to -c statement_timeout=5min sets the statement timeout for this session to 5 minutes. | ||
--postgresql-password |
PostgreSQL database password. The value can be in the null format to use dynamic substitution. | ||
--postgresql-port |
PostgreSQL port number | ||
--postgresql-sslmode |
Connect to PostgreSQL using sslmode connection parameter | disable allow prefer require verify_ca verify_full |
|
--postgresql-user |
PostgreSQL user name. The value can be in the null format to use dynamic substitution. | ||
--presto-database |
Presto database name. The value can be in the null format to use dynamic substitution. | ||
--presto-host |
Presto host name | ||
--presto-password |
Presto database password. The value can be in the null format to use dynamic substitution. | ||
--presto-port |
Presto port number | ||
--presto-user |
Presto user name. The value can be in the null format to use dynamic substitution. | ||
--questdb-database |
QuestDB database name. The value can be in the null format to use dynamic substitution. | ||
--questdb-host |
QuestDB host name | ||
--questdb-password |
QuestDB database password. The value can be in the null format to use dynamic substitution. | ||
--questdb-port |
QuestDB port number | ||
--questdb-user |
QuestDB user name. The value can be in the null format to use dynamic substitution. | ||
--redshift-authentication-mode |
The authentication mode for AWS. Supports also a null configuration with a custom environment variable. | iam default_credentials user_password |
|
--redshift-database |
Redshift database name. The value can be in the null format to use dynamic substitution. | ||
--redshift-host |
Redshift host name | ||
--redshift-password |
Redshift database password. The value can be in the null format to use dynamic substitution. | ||
--redshift-port |
Redshift port number | ||
--redshift-user |
Redshift user name. The value can be in the null format to use dynamic substitution. | ||
--single-store-parameters-spec |
Single Store DB parameters spec. | ||
--snowflake-account |
Snowflake account name, e.g. |
||
--snowflake-database |
Snowflake database name. The value can be in the null format to use dynamic substitution. | ||
--snowflake-password |
Snowflake database password. The value can be in the null format to use dynamic substitution. | ||
--snowflake-role |
Snowflake role name. | ||
--snowflake-user |
Snowflake user name. The value can be in the null format to use dynamic substitution. | ||
--snowflake-warehouse |
Snowflake warehouse name. | ||
--spark-host |
Spark host name | ||
--spark-password |
Spark database password. The value can be in the null format to use dynamic substitution. | ||
--spark-port |
Spark port number | ||
--spark-user |
Spark user name. The value can be in the null format to use dynamic substitution. | ||
--sqlserver-authentication-mode |
Authentication mode for SQL Server. The value can be in the null format to use dynamic substitution. | sql_password active_directory_password active_directory_service_principal active_directory_default |
|
--sqlserver-database |
SQL Server database name. The value can be in the null format to use dynamic substitution. | ||
--sqlserver-disable-encryption |
Disable SSL encryption parameter. The default value is false. You may need to disable encryption when SQL Server is started in Docker. | ||
--sqlserver-host |
SQL Server host name | ||
--sqlserver-password |
SQL Server database password. The value can be in the null format to use dynamic substitution. | ||
--sqlserver-port |
SQL Server port number | ||
--sqlserver-user |
SQL Server user name. The value can be in the null format to use dynamic substitution. | ||
--teradata-host |
Teradata host name | ||
--teradata-password |
Teradata database password. The value can be in the null format to use dynamic substitution. | ||
--teradata-port |
Teradata port number | ||
--teradata-user |
Teradata user name. The value can be in the null format to use dynamic substitution. | ||
--trino-catalog |
The Trino catalog that contains the databases and the tables that will be accessed with the driver. Supports also a null configuration with a custom environment variable. | ||
--trino-engine |
Trino engine type. | trino athena |
|
--trino-host |
Trino host name. | ||
--trino-password |
Trino database password. The value can be in the null format to use dynamic substitution. | ||
--trino-port |
Trino port number. | ||
--trino-user |
Trino user name. The value can be in the null format to use dynamic substitution. | ||
-C |
ClickHouse additional properties that are added to the JDBC connection string | ||
-D |
Databricks additional properties that are added to the JDBC connection string | ||
-DB2 |
DB2 additional properties that are added to the JDBC connection string | ||
-Duck |
DuckDB additional properties that are added to the JDBC connection string | ||
-E |
Presto additional properties that are added to the JDBC connection string. | ||
-F |
Snowflake additional properties that are added to the JDBC connection string | ||
-H |
Hana additional properties that are added to the JDBC connection string | ||
-K |
Spark additional properties that are added to the JDBC connection string | ||
-M |
MySQL additional properties that are added to the JDBC connection string | ||
-MA |
MariaDB additional properties that are added to the JDBC connection string | ||
-O |
Oracle additional properties that are added to the JDBC connection string | ||
-P |
PostgreSQL additional properties that are added to the JDBC connection string | ||
-Q |
QuestDB additional properties that are added to the JDBC connection string | ||
-R |
Redshift additional properties that are added to the JDBC connection string | ||
-S |
SQL Server additional properties that are added to the JDBC connection string | ||
-T |
Trino additional properties that are added to the JDBC connection string | ||
-TE |
Teradata additional properties that are added to the JDBC connection string. |
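For example, to change the password stored for an existing PostgreSQL connection (the connection name and password are placeholders), a hypothetical call could be:
dqo> connection update -n=my_postgresql --postgresql-password=new_secret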
dqo connection schema list
List schemas in the specified connection
Description
Lists the schemas in the specified connection. It allows the user to view a summary of all schemas in the selected connection.
Command-line synopsis
$ dqo [dqo options...] connection schema list [-h] [-fw] [-hl] [-n=<name>] [-of=<outputFormat>]
[-d=<dimensions>]... [-l=<labels>]...
DQOps shell synopsis
dqo> connection schema list [-h] [-fw] [-hl] [-n=<name>] [-of=<outputFormat>]
[-d=<dimensions>]... [-l=<labels>]...
Command options
All parameters supported by the command are listed below.
Command argument | Description | Required | Accepted values |
---|---|---|---|
-d --dimension |
Dimension filter | ||
-fw --file-write |
Write command response to a file | ||
--headless -hl |
Starts DQOps in a headless mode. When DQOps runs in a headless mode and the application cannot start because the DQOps Cloud API key is missing or the DQOps user home folder is not configured, DQOps will stop silently instead of asking the user to approve the setup of the DQOps user home folder structure and/or log into DQOps Cloud. | ||
-h --help |
Show the help for the command and parameters | ||
-l --label |
Label filter | ||
-n --name |
Connection name filter | ||
-of --output-format |
Output format for tabular responses | TABLE CSV JSON |
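For example, to list the schemas of a connection named my_postgresql (a placeholder name):
dqo> connection schema list -n=my_postgresql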
dqo connection table list
List tables for the specified connection and schema name.
Description
Lists all the tables available in the database for the specified connection and schema. It allows the user to view every table in the selected schema.
Command-line synopsis
$ dqo [dqo options...] connection table list [-h] [-fw] [-hl] [-c=<connection>] [-of=<outputFormat>]
[-s=<schema>] [-t=<tableNameContains>]
[-d=<dimensions>]... [-l=<labels>]...
DQOps shell synopsis
dqo> connection table list [-h] [-fw] [-hl] [-c=<connection>] [-of=<outputFormat>]
[-s=<schema>] [-t=<tableNameContains>]
[-d=<dimensions>]... [-l=<labels>]...
Command options
All parameters supported by the command are listed below.
Command argument | Description | Required | Accepted values |
---|---|---|---|
-c --connection |
Connection name | ||
-d --dimension |
Dimension filter | ||
-fw --file-write |
Write command response to a file | ||
--headless -hl |
Starts DQOps in a headless mode. When DQOps runs in a headless mode and the application cannot start because the DQOps Cloud API key is missing or the DQOps user home folder is not configured, DQOps will stop silently instead of asking the user to approve the setup of the DQOps user home folder structure and/or log into DQOps Cloud. | ||
-h --help |
Show the help for the command and parameters | ||
-l --label |
Label filter | ||
-of --output-format |
Output format for tabular responses | TABLE CSV JSON |
|
-s --schema |
Schema name | ||
-t --table |
Table name or a fragment of the table name |
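For example, to list all tables in the public schema of a placeholder connection, and then only those whose names contain 'fact':
dqo> connection table list -c=my_postgresql -s=public
dqo> connection table list -c=my_postgresql -s=public -t=fact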
dqo connection table show
Show table for connection
Description
Shows the details of the specified table in the database for the specified connection. It allows the user to view the details of a single table in the database.
Command-line synopsis
$ dqo [dqo options...] connection table show [-h] [-fw] [-hl] [-c=<connection>] [-of=<outputFormat>]
[-t=<table>]
DQOps shell synopsis
dqo> connection table show [-h] [-fw] [-hl] [-c=<connection>] [-of=<outputFormat>]
              [-t=<table>]
Command options
All parameters supported by the command are listed below.
Command argument | Description | Required | Accepted values |
---|---|---|---|
-c --connection |
Connection name | ||
-fw --file-write |
Write command response to a file | ||
-t --table --full-table-name |
Full table name (schema.table), supports wildcard patterns 'sch.tab' | ||
--headless -hl |
Starts DQOps in a headless mode. When DQOps runs in a headless mode and the application cannot start because the DQOps Cloud API key is missing or the DQOps user home folder is not configured, DQOps will stop silently instead of asking the user to approve the setup of the DQOps user home folder structure and/or log into DQOps Cloud. | ||
-h --help |
Show the help for the command and parameters | ||
-of --output-format |
Output format for tabular responses | TABLE CSV JSON |
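For example, to show the details of a single table identified by its full schema.table name (placeholder values):
dqo> connection table show -c=my_postgresql -t=public.fact_sales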
dqo connection edit
Edit connection that matches a given condition
Description
Edits the connection or connections that match the filter conditions specified in the options. It allows the user to modify the details of an existing connection in the application.
Command-line synopsis
$ dqo [dqo options...] connection edit [-h] [-fw] [-hl] [-c=<connection>] [-of=<outputFormat>]
DQOps shell synopsis
dqo> connection edit [-h] [-fw] [-hl] [-c=<connection>] [-of=<outputFormat>]
Command options
All parameters supported by the command are listed below.
Command argument | Description | Required | Accepted values |
---|---|---|---|
-c --connection |
Connection Name | ||
-fw --file-write |
Write command response to a file | ||
--headless -hl |
Starts DQOps in a headless mode. When DQOps runs in a headless mode and the application cannot start because the DQOps Cloud API key is missing or the DQOps user home folder is not configured, DQOps will stop silently instead of asking the user to approve the setup of the DQOps user home folder structure and/or log into DQOps Cloud. | ||
-h --help |
Show the help for the command and parameters | ||
-of --output-format |
Output format for tabular responses | TABLE CSV JSON |
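For example, to edit a placeholder connection named my_postgresql:
dqo> connection edit -c=my_postgresql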