Queries
Available queries in the Management API
application
Returns the Application specified by the given ID.
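For example, a minimal query might look like the following sketch. The ID is a placeholder, and the selected field names are assumed from the field descriptions below.
query {
  application(id: "APPXXXXX") {
    # "APPXXXXX" is a placeholder Application ID
    id
    uniqueName
    description
    propeller  # assumed field name for "The Application's Propeller"
    scopes     # assumed field name for "The Application's OAuth 2.0 scopes"
  }
}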
Arguments
Returns
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Account.
The Account object.
The Account’s unique identifier.
The Application’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s Propeller.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Application’s OAuth 2.0 scopes.
The API operations an Application is authorized to perform.
ADMIN: Grant read/write access to Data Sources, Data Pools, Metrics and Policies.
APPLICATION_ADMIN: Grant read/write access to Applications.
DATA_POOL_QUERY: Grant read access to query Data Pools.
DATA_POOL_READ: Grant read access to read Data Pools.
DATA_POOL_STATS: Grant read access to fetch column statistics from Data Pools.
ENVIRONMENT_ADMIN: Grant read/write access to Environments.
METRIC_QUERY: Grant read access to query Metrics.
METRIC_STATS: Grant read access to fetch Dimension statistics from Metrics.
METRIC_READ: Grant read access to Metrics. This does not allow querying Metrics. For that, see METRIC_QUERY.
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
A paginated list of Policies associated with the Application.
deprecated: Use Data Pool Access Policies instead.
Arguments
See PolicyConnection
applicationByName
Returns the Application with the given unique name.
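As a sketch, assuming the argument is named uniqueName and the Application is called "my-dashboard", a query could look like this:
query {
  applicationByName(uniqueName: "my-dashboard") {
    # "my-dashboard" is a hypothetical Application name
    id
    uniqueName
    description
  }
}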
Arguments
Returns
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Account.
The Account object.
The Account’s unique identifier.
The Application’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s Propeller.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Application’s OAuth 2.0 scopes.
The API operations an Application is authorized to perform.
ADMIN: Grant read/write access to Data Sources, Data Pools, Metrics and Policies.
APPLICATION_ADMIN: Grant read/write access to Applications.
DATA_POOL_QUERY: Grant read access to query Data Pools.
DATA_POOL_READ: Grant read access to read Data Pools.
DATA_POOL_STATS: Grant read access to fetch column statistics from Data Pools.
ENVIRONMENT_ADMIN: Grant read/write access to Environments.
METRIC_QUERY: Grant read access to query Metrics.
METRIC_STATS: Grant read access to fetch Dimension statistics from Metrics.
METRIC_READ: Grant read access to Metrics. This does not allow querying Metrics. For that, see METRIC_QUERY.
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
A paginated list of Policies associated with the Application.
deprecated: Use Data Pool Access Policies instead.
Arguments
See PolicyConnection
applications
Returns the Applications within the Environment.
The applications query uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
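For illustration, a forward-paginated request might look like the following sketch; the cursor value is a placeholder, and the connection field names (edges, node, pageInfo, endCursor, hasNextPage) are assumed from the connection and page info descriptions below.
query {
  applications(first: 10, after: "someCursorValue") {
    edges {
      cursor
      node {
        id
        uniqueName
      }
    }
    pageInfo {
      endCursor    # pass this as the next request's "after" argument
      hasNextPage  # indicates whether another page exists
    }
  }
}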
Arguments
Returns
The Application connection object.
Learn more about pagination in GraphQL.
The Application connection’s edges.
The Application edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
See Application
The Application connection’s nodes.
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Account.
See Account
The Application’s Environment.
See Environment
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s Propeller.
See Propeller
The Application’s OAuth 2.0 scopes.
See ApplicationScope
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
A paginated list of Policies associated with the Application.
deprecated: Use Data Pool Access Policies instead.
Arguments
See PolicyConnection
The Application connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
dataSource
Returns the Data Source specified by the given ID.
query {
  dataSource(id: "DSOXXXXX") {
    id
    uniqueName
    type
    tables(first: 100) {
      nodes {
        id
        name
        columns(first: 100) {
          nodes {
            name
            type
            isNullable
            supportedDataPoolColumnTypes
          }
        }
      }
    }
  }
}
Arguments
Returns
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
The Account object.
The Account’s unique identifier.
The Data Source’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
The types of Data Sources.
WEBHOOK: Indicates a Webhook Data Source.
S3: Indicates an Amazon S3 Data Source.
Redshift: Indicates a Redshift Data Source.
POSTGRESQL: Indicates a PostgreSQL Data Source.
KAFKA: Indicates a Kafka Data Source.
Http: Indicates an Http Data Source.
CLICKHOUSE: Indicates a ClickHouse Data Source.
AMAZON_DATA_FIREHOSE: Indicates an Amazon Data Firehose Data Source.
Snowflake: Indicates a Snowflake Data Source.
INTERNAL: Indicates an internal Data Source.
The Data Source’s status.
The status of a Data Source.
CREATED: The Data Source has been created, but it is not connected yet.
CONNECTING: Propel is attempting to connect the Data Source.
CONNECTED: The Data Source is connected.
BROKEN: The Data Source failed to connect.
DELETING: Propel is deleting the Data Source.
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
The Data Source Check object.
Data Source Checks are executed when setting up your Data Source. They check that Propel will be able to receive data and set up Data Pools.
The exact Checks to perform vary by Data Source. For example, Snowflake-backed Data Sources will have their own specific Checks.
The name of the Data Source Check to be performed.
A description of the Data Source Check to be performed.
The status of the Data Source Check (all checks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Source Check failed, this field includes a descriptive error message.
See Error
The time at which the Data Source Check was performed.
Listing Data Pools via the dataPools field on a Data Source returns the Data Pools that belong to that Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
deprecated: Use checks instead.
The error object.
The error code.
The error message.
dataSourceByName
Returns the Data Source specified by the given unique name.
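As a sketch, assuming the argument is named uniqueName, a query could look like this:
query {
  dataSourceByName(uniqueName: "my-snowflake") {
    # "my-snowflake" is a hypothetical Data Source name
    id
    uniqueName
    type
    status  # assumed field name for "The Data Source's status"
  }
}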
Arguments
Returns
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
The Account object.
The Account’s unique identifier.
The Data Source’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
The types of Data Sources.
WEBHOOK: Indicates a Webhook Data Source.
S3: Indicates an Amazon S3 Data Source.
Redshift: Indicates a Redshift Data Source.
POSTGRESQL: Indicates a PostgreSQL Data Source.
KAFKA: Indicates a Kafka Data Source.
Http: Indicates an Http Data Source.
CLICKHOUSE: Indicates a ClickHouse Data Source.
AMAZON_DATA_FIREHOSE: Indicates an Amazon Data Firehose Data Source.
Snowflake: Indicates a Snowflake Data Source.
INTERNAL: Indicates an internal Data Source.
The Data Source’s status.
The status of a Data Source.
CREATED: The Data Source has been created, but it is not connected yet.
CONNECTING: Propel is attempting to connect the Data Source.
CONNECTED: The Data Source is connected.
BROKEN: The Data Source failed to connect.
DELETING: Propel is deleting the Data Source.
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
The Data Source Check object.
Data Source Checks are executed when setting up your Data Source. They check that Propel will be able to receive data and set up Data Pools.
The exact Checks to perform vary by Data Source. For example, Snowflake-backed Data Sources will have their own specific Checks.
The name of the Data Source Check to be performed.
A description of the Data Source Check to be performed.
The status of the Data Source Check (all checks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Source Check failed, this field includes a descriptive error message.
See Error
The time at which the Data Source Check was performed.
Listing Data Pools via the dataPools field on a Data Source returns the Data Pools that belong to that Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
deprecated: Use checks instead.
The error object.
The error code.
The error message.
dataSources
Returns the Data Sources within the Environment.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source. Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads.
The dataSources query uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
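For illustration, a minimal forward-paginated request might look like the following sketch; the nodes and pageInfo field names are assumed from the connection descriptions below.
query {
  dataSources(first: 20) {
    nodes {
      id
      uniqueName
      type
    }
    pageInfo {
      endCursor    # pass this as the next request's "after" argument
      hasNextPage
    }
  }
}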
Arguments
Returns
The Data Source connection object.
Learn more about pagination in GraphQL.
The Data Source connection’s edges.
The Data Source edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
See DataSource
The Data Source connection’s nodes.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
See Account
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
Listing Data Pools via the dataPools field on a Data Source returns the Data Pools that belong to that Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
deprecated: Use checks instead.
See Error
The Data Source connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
dataPool
Returns the Data Pool specified by the given ID.
A Data Pool is a cached table hydrated from your data warehouse optimized for high-concurrency and low-latency queries.
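For example, a minimal query might look like the following sketch; the ID is a placeholder, and the field names are assumed from the field descriptions below.
query {
  dataPool(id: "DPOXXXXX") {
    # "DPOXXXXX" is a placeholder Data Pool ID
    id
    uniqueName
    status     # assumed field name for "The Data Pool's status"
    tableName  # assumed field name for "The name of the Data Pool's table"
  }
}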
Arguments
Returns
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The Data Pool column connection’s page info.
See PageInfo
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The Data Pool column connection’s page info.
See PageInfo
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to a Snowflake-backed Data Source will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The list of Metrics powered by the Data Pool.
Arguments
See MetricConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason for why the expression is not valid in case it isn’t, null otherwise.
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
deprecated: Use setupTasks instead.
The error object.
The error code.
The error message.
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
A Data Pool’s tenant ID column. The tenant ID column is used to control access to your data with access policies.
The name of the column that represents the tenant ID.
The tenant ID column’s type.
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
dataPoolByName
Returns the Data Pool specified by the given unique name.
A Data Pool is a cached table hydrated from your data warehouse optimized for high-concurrency and low-latency queries.
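As a sketch, assuming the argument is named uniqueName, a query could look like this:
query {
  dataPoolByName(uniqueName: "taco_orders") {
    # "taco_orders" is a hypothetical Data Pool name
    id
    uniqueName
    status  # assumed field name for "The Data Pool's status"
  }
}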
Arguments
Returns
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The Data Pool column connection’s page info.
See PageInfo
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The Data Pool column connection’s page info.
See PageInfo
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to a Snowflake-backed Data Source will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The list of Metrics powered by the Data Pool.
Arguments
See MetricConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason for why the expression is not valid in case it isn’t, null otherwise.
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
deprecated: Use setupTasks instead.
The error object.
The error code.
The error message.
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
A Data Pool’s tenant ID column. The tenant ID column is used to control access to your data with access policies.
The name of the column that represents the tenant ID.
The tenant ID column’s type.
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
dataPools
Returns the Data Pools within the Environment.
A Data Pool is a cached table hydrated from your data warehouse optimized for high-concurrency and low-latency queries. Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads.
The dataPools query uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
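For illustration, a minimal forward-paginated request might look like this sketch; the nodes and pageInfo field names are assumed from the connection descriptions below.
query {
  dataPools(first: 10) {
    nodes {
      id
      uniqueName
      status
    }
    pageInfo {
      endCursor    # pass this as the next request's "after" argument
      hasNextPage
    }
  }
}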
Arguments
Returns
The Data Pool connection object.
Learn more about pagination in GraphQL.
The Data Pool connection’s edges. See DataPoolEdge
The Data Pool connection’s nodes. See DataPool
The Data Pool connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
environment
Returns the Environment specified by the given ID.
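For example, a minimal query might look like the following sketch; the ID is a placeholder, and createdAt is an assumed field name for the creation timestamp described below.
query {
  environment(id: "ENVXXXXX") {
    # "ENVXXXXX" is a placeholder Environment ID
    id
    uniqueName
    description
    createdAt  # assumed field name for "The Environment's creation date and time in UTC"
  }
}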
Arguments
Returns
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
The Account object.
The Account’s unique identifier.
materializedViews
Returns the Materialized Views within the Environment.
The materializedViews query uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
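For illustration, a minimal forward-paginated request might look like this sketch; the nodes and pageInfo field names are assumed from the connection descriptions below, and sql is an assumed field name for the Materialized View's SQL.
query {
  materializedViews(first: 10) {
    nodes {
      id
      uniqueName
      sql  # assumed field name for "The SQL that the Materialized View executes"
    }
    pageInfo {
      endCursor
      hasNextPage
    }
  }
}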
Arguments
Returns
The Materialized View connection object.
Learn more about pagination in GraphQL.
The Materialized View connection’s edges.
The Materialized View edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
See MaterializedView
The Materialized View connection’s nodes.
The Materialized View’s unique identifier.
The Materialized View’s unique name.
The Materialized View’s description.
The Materialized View’s Account.
See Account
The Materialized View’s Environment.
See Environment
The Materialized View’s creation date and time in UTC.
The Materialized View’s last modification date and time in UTC.
The Materialized View’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Materialized View’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The SQL that the Materialized View executes.
The Materialized View’s destination (AKA “target”) Data Pool.
See DataPool
The Materialized View’s source Data Pool.
See DataPool
Other Data Pools queried by the Materialized View.
See DataPool
The Materialized View connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
dataPoolAccessPolicy
Returns the Data Pool Access Policy specified by the given ID.
A Data Pool Access Policy limits the data that Applications can access within a Data Pool.
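For example, a minimal query might look like the following sketch; the ID is a placeholder, and columns is an assumed field name for the columns the policy exposes (filtersSql is referenced by the deprecation note below).
query {
  dataPoolAccessPolicy(id: "POLXXXXX") {
    # "POLXXXXX" is a placeholder Data Pool Access Policy ID
    id
    uniqueName
    columns     # assumed field name for the columns the policy makes queryable
    filtersSql  # row-level filters, in the form of SQL
  }
}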
Arguments
Returns
The ID of the Data Pool Access Policy.
The Data Pool Access Policy’s unique name.
The Data Pool Access Policy’s description.
The Data Pool Access Policy’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool Access Policy’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Data Pool Access Policy’s creation date and time in UTC.
The Data Pool Access Policy’s last modification date and time in UTC.
The Data Pool Access Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool Access Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool to which the Access Policy belongs. See DataPool
Columns that the Access Policy makes available for querying.
Row-level filters that the Access Policy applies before executing queries, in the form of SQL.
Applications that are assigned to this Data Pool Access Policy.
Arguments
Row-level filters that the Access Policy applies before executing queries.
deprecated: Use filtersSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
metric
Returns the Metric specified by the given ID.
A Metric is a business indicator measured over time.
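For example, a minimal query might look like the following sketch; the ID is a placeholder, and the dimensions selection uses columnName as an assumed field name for the Dimension's column.
query {
  metric(id: "METXXXXX") {
    # "METXXXXX" is a placeholder Metric ID
    id
    uniqueName
    type
    dimensions {
      columnName  # assumed field name for "The column name it represents"
    }
  }
}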
Arguments
Returns
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Account.
The Account object.
The Account’s unique identifier.
The Metric’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool that powers this Metric. See DataPool
The Metric’s Dimensions. These Dimensions are available to Query Filters.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
List the Boosters associated to the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
The available Metric types.
COUNT: Counts the number of records that match the Metric Filters. For time series, it will count the values for each time granularity.
SUM: Sums the values of the specified column for every record that matches the Metric Filters. For time series, it will sum the values for each time granularity.
COUNT_DISTINCT: Counts the number of distinct values in the specified column for every record that matches the Metric Filters. For time series, it will count the distinct values for each time granularity.
AVERAGE: Averages the values of the specified column for every record that matches the Metric Filters. For time series, it will average the values for each time granularity.
MIN: Selects the minimum value of the specified column for every record that matches the Metric Filters. For time series, it will select the minimum value for each time granularity.
MAX: Selects the maximum value of the specified column for every record that matches the Metric Filters. For time series, it will select the maximum value for each time granularity.
CUSTOM: Aggregates values based on the provided custom expression.
The settings for the Metric. The settings are specific to the Metric’s type.
A Metric’s settings, depending on its type.
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead.
Arguments
The fields for querying a Metric in counter format.
A Metric’s counter query returns a single value over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the counter.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The Query Filters to apply before retrieving the counter data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead.
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead.
The Query Filters to apply before retrieving the counter data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead.
See FilterInput
The counter response object. It contains a single Metric value for the given time range and Query Filters.
The value of the counter.
The Query statistics and metadata.
See QueryInfo
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead.
Arguments
The fields for querying a Metric in time series format.
A Metric’s time series query returns the values over a given time range aggregated by a given time granularity; day, month, or year, for example.
The Metric to Query. It can be a pre-created one or it can be inlined here.
See MetricInput
The time range for calculating the time series.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The time granularity (hour, day, month, etc.) to aggregate the Metric values by.
The Query Filters to apply before retrieving the time series data, in the form of SQL. If no Query Filters are provided, all data is included.
Columns to group by.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead.
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead.
The Query Filters to apply before retrieving the time series data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead.
See FilterInput
The time series response object. It contains an array of time series labels and an array of Metric values for the given time range and Query Filters.
The time series labels.
The time series values.
The time series values for each group in groupBy, if specified.
The Query statistics and metadata.
See QueryInfo
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead.
Arguments
The fields for querying a Metric in leaderboard format.
A Metric’s leaderboard query returns an ordered table of Dimension and Metric values over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the leaderboard.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
One or many Dimensions to group the Metric values by. Typically, Dimensions in a leaderboard are what you want to compare and rank.
See DimensionInput
The sort order of the rows. It can be ascending (ASC) or descending (DESC) order. Defaults to descending (DESC) order when not provided.
See Sort
The number of rows to be returned. It can be a number between 1 and 1,000.
The Query Filters to apply before retrieving the leaderboard data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead.
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead.
The Query Filters to apply before retrieving the leaderboard data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead.
See FilterInput
The leaderboard response object. It contains an array of headers and a table (array of rows) with the selected Dimensions and corresponding Metric values for the given time range and Query Filters.
The table headers. It contains the Dimension and Metric names.
An ordered array of rows. Each row contains the Dimension values and the corresponding Metric value. A Dimension value can be empty. A Metric value will never be empty.
The Query statistics and metadata.
See QueryInfo
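For orientation, a hedged sketch of a top-level leaderboard request follows. The input wrapper and the MetricInput, DimensionInput, and rowLimit names are assumptions based on the descriptions above; "revenue" and "country" are hypothetical.

query LeaderboardSketch {
  leaderboard(
    input: {
      metric: { name: "revenue" }               # assumed MetricInput shape
      dimensions: [{ columnName: "country" }]   # assumed DimensionInput shape
      sort: DESC                                # descending order (the default)
      rowLimit: 10                              # assumed name for the "number of rows to be returned" argument
    }
  ) {
    headers   # Dimension and Metric names
    rows      # ordered rows of Dimension values plus the Metric value
  }
}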
List the Policies associated with the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
metricByName
Returns the Metric specified by the given unique name.
A Metric is a business indicator measured over time.
Arguments
Returns
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Account.
The Account object.
The Account’s unique identifier.
The Metric’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool that powers this Metric. See DataPool
The Metric’s Dimensions. These Dimensions are available to Query Filters.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
List the Boosters associated with the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
The available Metric types.
COUNT: Counts the number of records that match the Metric Filters. For time series, it will count the values for each time granularity.
SUM: Sums the values of the specified column for every record that matches the Metric Filters. For time series, it will sum the values for each time granularity.
COUNT_DISTINCT: Counts the number of distinct values in the specified column for every record that matches the Metric Filters. For time series, it will count the distinct values for each time granularity.
AVERAGE: Averages the values of the specified column for every record that matches the Metric Filters. For time series, it will average the values for each time granularity.
MIN: Selects the minimum value of the specified column for every record that matches the Metric Filters. For time series, it will select the minimum value for each time granularity.
MAX: Selects the maximum value of the specified column for every record that matches the Metric Filters. For time series, it will select the maximum value for each time granularity.
CUSTOM: Aggregates values based on the provided custom expression.
The settings for the Metric. The settings are specific to the Metric’s type.
A Metric’s settings, depending on its type.
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
The fields for querying a Metric in counter format.
A Metric’s counter query returns a single value over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the counter.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The Query Filters to apply before retrieving the counter data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric
The Query Filters to apply before retrieving the counter data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
The counter response object. It contains a single Metric value for the given time range and Query Filters.
The value of the counter.
The Query statistics and metadata.
See QueryInfo
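A hedged sketch of a top-level counter request follows. The input wrapper, the MetricInput shape, and the value selection are assumptions based on the descriptions above; the Metric name and the filterSql expression are hypothetical.

query CounterSketch {
  counter(
    input: {
      metric: { name: "revenue" }         # assumed MetricInput shape
      filterSql: "status = 'confirmed'"   # SQL-form Query Filter over a hypothetical column
    }
  ) {
    value   # the single Metric value for the given time range and filters
  }
}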
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
The fields for querying a Metric in time series format.
A Metric’s time series query returns the values over a given time range aggregated by a given time granularity; day, month, or year, for example.
The Metric to query. It can be a pre-created Metric or one defined inline here.
See MetricInput
The time range for calculating the time series.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The time granularity (hour, day, month, etc.) to aggregate the Metric values by.
The Query Filters to apply before retrieving the time series data, in the form of SQL. If no Query Filters are provided, all data is included.
Columns to group by.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric
The Query Filters to apply before retrieving the time series data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
The time series response object. It contains an array of time series labels and an array of Metric values for the given time range and Query Filters.
The time series labels.
The time series values.
The time series values for each group in groupBy, if specified.
The Query statistics and metadata.
See QueryInfo
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
The fields for querying a Metric in leaderboard format.
A Metric’s leaderboard query returns an ordered table of Dimension and Metric values over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the leaderboard.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
One or many Dimensions to group the Metric values by. Typically, Dimensions in a leaderboard are what you want to compare and rank.
See DimensionInput
The sort order of the rows. It can be ascending (ASC) or descending (DESC) order. Defaults to descending (DESC) order when not provided.
See Sort
The number of rows to be returned. It can be a number between 1 and 1,000.
The Query Filters to apply before retrieving the leaderboard data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric
The Query Filters to apply before retrieving the leaderboard data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
The leaderboard response object. It contains an array of headers and a table (array of rows) with the selected Dimensions and corresponding Metric values for the given time range and Query Filters.
The table headers. It contains the Dimension and Metric names.
An ordered array of rows. Each row contains the Dimension values and the corresponding Metric value. A Dimension value can be empty. A Metric value will never be empty.
The Query statistics and metadata.
See QueryInfo
List the Policies associated with the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
metrics
Returns the Metrics within the Environment.
A Metric is a business indicator measured over time. Each Metric is associated with one Data Pool, which is a cached table hydrated from your data warehouse optimized for high-concurrency and low-latency queries. Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads.
The metrics query uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Returns
The Metric connection object.
Learn more about pagination in GraphQL.
The Metric connection’s edges. See MetricEdge
The Metric connection’s nodes. See Metric
The Metric connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
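A hedged sketch of forward pagination through metrics follows, assuming the connection exposes nodes and a Relay-style pageInfo with endCursor and hasNextPage as described above; uniqueName is an assumed field name for the Metric’s unique name.

query MetricsPageSketch {
  metrics(first: 10, after: "CURSOR_FROM_PREVIOUS_PAGE") {
    nodes {
      id
      uniqueName   # assumed field name
    }
    pageInfo {
      endCursor     # pass this as `after` to fetch the next page
      hasNextPage   # whether a next page of results exists
    }
  }
}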
booster
Returns the Booster specified by the given ID.
A Booster significantly improves the query performance for a Metric.
Arguments
Returns
Boosters allow you to optimize Metric Queries for a subset of commonly used Dimensions. A Metric can have one or many Boosters to optimize for the different Query patterns.
Boosters can be understood as an aggregating index. The index is formed from left to right as follows:
- The Data Pool’s Tenant ID column (if present)
- Metric Filter columns (if present)
- Query Filter Dimensions (see dimensions)
- The Data Pool’s timestamp column
The Booster’s unique identifier.
The Booster’s Account.
The Account object.
The Account’s unique identifier.
The Booster’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Booster’s creation date and time in UTC.
The Booster’s last modification date and time in UTC.
The Booster’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Booster’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric this Booster is associated with. See Metric
The status of the Booster (once LIVE it will be available for speeding up Metric queries).
The Booster status.
CREATED: The Booster has been created. Propel will start optimizing the Data Pool soon.
OPTIMIZING: Propel is setting up the Booster and optimizing the Data Pool.
LIVE: The Booster is now live and available to speed up Metric queries.
FAILED: Propel failed to set up the Booster. Please write to support. Alternatively, you can delete the Booster and try again.
DELETING: Propel is deleting the Booster and all of its associated data.
If the Booster fails during the optimization process, this field includes a descriptive error message.
The error object.
The error code.
The error message.
When the Booster is OPTIMIZING, this represents its progress as a number from 0 to 1. In all other states, progress is null.
Dimensions included in the Booster.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
The number of records in the Booster.
The amount of storage in terabytes used by the Booster.
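A hedged sketch of fetching a Booster by ID to monitor its optimization follows. The selected field names (status, progress, and an error object with code and message) are assumptions based on the descriptions above; the ID is a placeholder.

query BoosterSketch {
  booster(id: "BOOSTER_ID") {
    status     # CREATED, OPTIMIZING, LIVE, FAILED, or DELETING
    progress   # 0 to 1 while OPTIMIZING; null in other states
    error {    # assumed shape of the error object
      code
      message
    }
  }
}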
policy
Returns a Policy by ID.
Arguments
Returns
The Policy type. It governs an Application’s access to a Metric’s data.
The Policy’s unique identifier.
The Policy’s Account.
The Account object.
The Account’s unique identifier.
The Policy’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Policy’s creation date and time in UTC.
The Policy’s last modification date and time in UTC.
The Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The type of Policy.
The types of Policies that can be applied to a Metric.
ALL_ACCESS: Grants access to all Metric data.
TENANT_ACCESS: Grants access to a specified tenant’s Metric data.
The Application that is granted access. See Application
The Metric that the Application is granted access to. See Metric
sync
Returns a Sync by ID.
Arguments
Returns
The Sync object.
This represents the process of syncing data from your Data Source (for example, a Snowflake data warehouse) to your Data Pool.
The Sync’s unique identifier.
The Sync’s Account.
The Account object.
The Account’s unique identifier.
The Sync’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
See Account
The Sync’s creation date and time in UTC.
The Sync’s last modification date and time in UTC.
The Sync’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Sync’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Sync’s Data Pool. See DataPool
The Sync’s Data Pool’s Data Source.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
See Account
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
deprecated: Use checks instead
See Error
The number of new, updated, and deleted records contained within the Sync, if known. This excludes filtered records.
The (compressed) size of the Sync, in bytes, if known.
The status of the Sync (all Syncs begin as SYNCING before transitioning to SUCCEEDED or FAILED).
The status of a Sync.
SYNCING: Propel is actively syncing records contained within the Sync.
SUCCEEDED: The Sync succeeded. Propel successfully synced all records contained within the Sync.
FAILED: The Sync failed. Propel failed to sync some or all records contained within the Sync.
DELETING: Propel is deleting the Sync.
The time at which the Sync started.
The time at which the Sync succeeded.
The time at which the Sync failed.
If the Sync failed, this represents the reason the Sync failed.
The error object.
The error code.
The error message.
The number of new records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of updated records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of deleted records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of filtered records contained within the Sync, due to issues such as a missing timestamp Dimension, if any are known to be invalid.
deprecated: All records are considered to be processed; see processedRecords instead
table
Returns a table by ID.
Arguments
Returns
The table object.
Once a table introspection succeeds, it creates a new table object for every table it introspected.
The table’s ID.
The table’s name.
The Data Source to which the table belongs. See DataSource
The number of rows contained within the table at the time of introspection. Check the table’s cachedAt time, since this info can become out of date.
The size of the table (in bytes) at the time of introspection. Check the table’s cachedAt time, since this info can become out of date.
The time at which the table was cached (i.e., the time at which it was introspected).
The time at which the table was created. This is the same as its cachedAt time.
The table’s creator. This corresponds to the initiator of the table Introspection. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The table’s columns.
Arguments
The column connection object.
Learn more about pagination in GraphQL.
The time at which the columns were cached (i.e., the time at which they were introspected).
The column connection’s edges.
See ColumnEdge
The column connection’s nodes.
See Column
The column connection’s page info.
See PageInfo
The table’s columns which can be used as a timestamp for a Data Pool.
Arguments
The column connection object.
Learn more about pagination in GraphQL.
The time at which the columns were cached (i.e., the time at which they were introspected).
The column connection’s edges.
See ColumnEdge
The column connection’s nodes.
See Column
The column connection’s page info.
See PageInfo
The table’s columns which can be used as a measure for a Metric.
Arguments
The column connection object.
Learn more about pagination in GraphQL.
The time at which the columns were cached (i.e., the time at which they were introspected).
The column connection’s edges.
See ColumnEdge
The column connection’s nodes.
See Column
The column connection’s page info.
See PageInfo
Information about the table obtained from Snowflake. This description applies to several Snowflake-specific fields on the table object.
deprecated: This is Snowflake-specific, and will be removed
metricReport
Build a report, or table, consisting of multiple Metrics broken down by one or more dimensions.
The first few columns of the report are the dimensions you choose to break down by. The subsequent columns are the Metrics you choose to query. By default, the report sorts on the first Metric in descending order, but you can configure this with the orderByMetric and sort inputs.
Finally, reports use cursor-based pagination. You can control page size with the first and last inputs.
Arguments
The fields for querying a Metric Report.
A Metric Report is a table whose columns include dimensions and Metric values, calculated over a given time range.
The time range for calculating the Metric Report.
The fields required to specify the time range for a time series, counter, or leaderboard Metric query.
If no relative or absolute time ranges are provided, Propel defaults to an absolute time range beginning with the earliest record in the Metric’s Data Pool and ending with the latest record.
If both relative and absolute time ranges are provided, the relative time range will take precedence.
If a LAST_N relative time period is selected, an n ≥ 1 must be provided. If no n is provided or n < 1, a BAD_REQUEST error will be returned.
The timestamp field to use when querying. Defaults to the timestamp configured on the Data Pool or Metric, if any. Set this to filter on an alternative timestamp field.
The relative time period.
The number of time units for the LAST_N relative periods.
The optional start timestamp (inclusive). Defaults to the timestamp of the earliest record in the Data Pool.
The optional end timestamp (exclusive). Defaults to the timestamp of the latest record in the Data Pool.
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
One or many dimensions to group the Metric values by. Typically, dimensions in a report are what you want to compare and rank.
The fields for specifying a dimension to include in a Metric Report.
The column name of the dimension to include in a Metric Report. This must match the name of a Data Pool column.
The name to display in the headers array when displaying the report. This defaults to the column name if unspecified.
The sort order for the dimension. It can be ascending (ASC) or descending (DESC) order. Defaults to ascending (ASC) order when not provided.
See Sort
One or more Metrics to include in the Metric Report. These will be broken down by dimensions.
The fields for specifying a Metric to include in a Metric Report.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The name to display in the headers array when displaying the report. This defaults to the Metric’s unique name if unspecified.
The Query Filters to apply when calculating the Metric, in the form of SQL.
The sort order for the Metric. It can be ascending (ASC) or descending (DESC) order. Defaults to descending (DESC) order when not provided.
See Sort
The Metric’s unique name. If not specified, Propel will look up the Metric by ID.
deprecated: Use metric
The Metric’s ID. If not specified, Propel will look up the Metric by unique name.
deprecated: Use metric
The Query Filters to apply when calculating the Metric.
deprecated: Use filterSql instead
See FilterInput
The Query Filters to apply when building the Metric Report, in the form of SQL. These can be used to filter out rows.
The index of the column to order the Metric Report by. The index is 1-based and defaults to the first Metric column. In other words, by default, reports are ordered by the first Metric; however, you can order by the second Metric, third Metric, etc., by overriding the orderByColumn input. You can also order by dimensions this way.
The number of rows to be returned when paging forward. It can be a number between 1 and 1,000.
The cursor to use when paging forward.
The number of rows to be returned when paging backward. It can be a number between 1 and 1,000.
The cursor to use when paging backward.
The Query Filters to apply when building the Metric Report. These can be used to filter out rows.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
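The same condition is usually shorter to express with the filterSql argument described above. A hedged sketch follows, assuming filterSql accepts a SQL boolean expression over Data Pool columns; it is shown on a counter query with a hypothetical Metric name and an assumed MetricInput shape.

query FilterSqlSketch {
  counter(
    input: {
      metric: { name: "revenue" }   # assumed MetricInput shape; hypothetical Metric
      filterSql: "(value > 0 AND value <= 100) OR status = 'confirmed'"
    }
  ) {
    value
  }
}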
Returns
The Metric Report connection object.
It includes headers and rows for a single page of a report. It also allows paging forward and backward to other pages of the report.
The report connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
The report connection’s edges.
The Metric Report edge object.
The edge’s node.
See MetricReportNode
The edge’s cursor.
The report connection’s nodes.
The Metric Report node object.
This type represents a single row of a report.
An ordered array of display names for your dimensions and Metrics, as defined in the report input. Use this to display your table’s header.
An ordered array of columns. Each column contains the dimension and Metric values for a single row, as defined in the report input. Use this to display a single row within your table.
An ordered array of display names for your dimensions and Metrics, as defined in the report input. Use this to display your table’s header.
An ordered array of rows. Each row contains dimension and Metric values, as defined in the report input. Use these to display the rows of your table.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
If the Query was boosted, the Booster that was used. See Booster
The Propeller used for this query.
See Propeller
The Query status.
See QueryStatus
The Query type.
See QueryType
The Query subtype.
See QuerySubtype
The SQL the query executed.
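A hedged sketch of a metricReport request that breaks one hypothetical Metric down by one dimension follows. The input wrapper and the nested dimension and Metric input shapes are assumptions based on the descriptions above; only dimensions, metrics, first, headers, rows, and pageInfo mirror names used in this reference.

query MetricReportSketch {
  metricReport(
    input: {
      dimensions: [{ columnName: "country" }]      # assumed DimensionInput shape
      metrics: [{ metric: { name: "revenue" } }]   # assumed Metric input shape
      first: 25                                    # page size when paging forward
    }
  ) {
    headers   # display names for dimensions and Metrics
    rows      # one entry per report row
    pageInfo {
      endCursor
      hasNextPage
    }
  }
}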
sqlV1
Query Data Pools using SQL.
Arguments
Input to the SqlV1 API.
The SQL query.
The SQL dialect to use. If not provided, the query is parsed on a best-effort basis.
The SQL dialect to use when parsing queries.
POSTGRESQL: Parse as PostgreSQL-compatible SQL.
CLICKHOUSE: Parse as ClickHouse-compatible SQL.
Returns
Response from the SQL API.
The column names in the same order as present in the data field.
The name of the returned column.
The returned column’s type.
See ColumnType
Whether the column is nullable, meaning whether it accepts a null value.
The data gathered by the SQL query. The data is returned in an N x M matrix format, where the first dimension are the rows retrieved, and the second dimension are the columns. Each cell can be either a string or null, and the string can represent a number, text, date or boolean value.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
If the Query was boosted, the Booster that was used. See Booster
The Propeller used for this query.
See Propeller
The Query status.
See QueryStatus
The Query type.
See QueryType
The Query subtype.
See QuerySubtype
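A hedged sketch of a sqlV1 request follows, assuming the input fields are named query and dialect as described above. The Data Pool name in the SQL string and the response column field names (columnName, type, isNullable) are hypothetical placeholders, not confirmed schema names.

query SqlV1Sketch {
  sqlV1(
    input: {
      query: "SELECT status, COUNT(*) FROM my_data_pool GROUP BY status"   # hypothetical Data Pool name
      dialect: POSTGRESQL                                                  # parse as PostgreSQL-compatible SQL
    }
  ) {
    columns {
      columnName   # assumed field names for the returned column metadata
      type
      isNullable
    }
    data   # N x M matrix; each cell is a string or null
  }
}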