Unions
Union types in the Management API
ApplicationOrFailureResponse
The result of a mutation which creates or modifies an Application.
If successful, an ApplicationResponse will be returned; otherwise, a FailureResponse will be returned.
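Because this is a GraphQL union, a client selects fields per member using inline fragments. A minimal sketch follows; the mutation name createApplication and its input are hypothetical placeholders, and the field names inside the fragments follow the descriptions below, but the fragment-based handling itself is standard GraphQL:
mutation {
  # hypothetical mutation returning ApplicationOrFailureResponse
  createApplication(input: { uniqueName: "my-app" }) {
    __typename
    ... on ApplicationResponse {
      application {
        id
        uniqueName
      }
    }
    ... on FailureResponse {
      error {
        code
        message
      }
    }
  }
}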
The result of a mutation which creates or modifies an Application.
The Application which was created or modified.
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Account.
See Account
The Application’s Environment.
See Environment
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s Propeller.
See Propeller
The Application’s OAuth 2.0 scopes.
See ApplicationScope
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
A paginated list of Policies associated with the Application.
deprecated: Use Data Pool Access Policies instead.
Arguments
See PolicyConnection
The failure response object.
The error that caused the failure.
The error object.
The error code.
The error message.
ConnectionSettings
The Snowflake Data Source connection settings.
The Snowflake account. This is the portion of your Snowflake URL that comes before “.snowflakecomputing.com”.
The Snowflake database name.
The Snowflake warehouse name. It should be “PROPELLING” if you used the default name in the setup script.
The Snowflake schema.
The Snowflake username. It should be “PROPEL” if you used the default name in the setup script.
The Snowflake role. It should be “PROPELLER” if you used the default name in the setup script.
The Amazon Data Firehose Data Source’s connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
HTTP basic access authentication credentials. You must configure these same credentials to be included in the X-Amz-Firehose-Access-Key header when Amazon Data Firehose issues requests to its custom HTTP endpoint.
The HTTP Basic authentication settings.
Username for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Password for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Additional columns for the Amazon Data Firehose’s table.
A column in an Amazon Data Firehose Data Source’s table.
The column name.
The JSON property that the column will be derived from. For example, if you send a JSON event like this:
{ "greeting": { "message": "hello, world" } }
Then you can use the JSON property “greeting.message” to extract “hello, world” into a column.
The column type.
See ColumnType
Whether the column’s type is nullable or not.
Copy this value into the URL field when configuring your Amazon Data Firehose to deliver to a custom HTTP endpoint.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp value, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.; a sketch follows the TableSettings fields below.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The primary timestamp column, if any.
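As a rough illustration, overriding these settings in a Data Pool creation input might look like the sketch below. The field names mirror the clauses described above, but the exact input shapes are assumptions, not confirmed signatures:
tableSettings: {
  # hypothetical input shape mirroring the TableSettings fields above
  engine: { type: MERGE_TREE }
  partitionBy: "toYYYYMM(timestamp)"
  orderBy: "timestamp, account_id"
  ttl: "timestamp + INTERVAL 90 DAY"
}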
Copy this value into the X-Amz-Firehose-Access-Key header when configuring your Amazon Data Firehose to deliver to a custom HTTP endpoint.
The ClickHouse Data Source connection settings.
The database to connect to.
The password for the provided user.
Whether the user has read-only permissions for querying ClickHouse.
The URL where the ClickHouse host listens for HTTP[S] connections.
The user for authenticating against the ClickHouse host.
The HTTP Data Source connection settings.
The HTTP Basic authentication settings for uploading new data.
If this parameter is not provided, anyone with the URL to your tables will be able to upload data. While it’s OK to test without HTTP Basic authentication, we recommend enabling it.
The HTTP Basic authentication settings.
Username for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Password for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
The HTTP Data Source’s tables.
An HTTP Data Source’s table.
The ID of the table.
The name of the table.
All the columns present in the table.
The Kafka Data Source connection settings.
The type of authentication to use. Can be SCRAM-SHA-256, SCRAM-SHA-512, PLAIN, or NONE.
The bootstrap server(s) to connect to.
The password for the provided user.
Whether the connection to the Kafka servers is encrypted.
The user for authenticating against the Kafka servers.
The PostgreSQL Data Source connection settings.
The database to connect to.
The host where PostgreSQL is listening.
The port where PostgreSQL is listening (usually 5432).
The schema to use.
The user for authenticating against PostgreSQL.
The connection settings for an Amazon S3 Data Source. These include the Amazon S3 bucket name, the AWS access key ID, and the tables (along with their paths). We do not allow fetching the AWS secret access key after it has been set.
The AWS access key ID for an IAM user with sufficient access to the Amazon S3 bucket.
The name of the Amazon S3 bucket.
The Amazon S3 Data Source’s tables.
An Amazon S3 Data Source’s table.
The ID of the table.
The name of the table.
The path to the table’s files in Amazon S3.
All the columns present in the table.
The Webhook Data Source connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
The HTTP basic authentication settings for the Webhook Data Source URL. If this parameter is not provided, anyone with the webhook URL will be able to send events. While it’s OK to test without HTTP Basic authentication, we recommend enabling it.
The HTTP Basic authentication settings.
Username for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Password for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
The additional columns for the Webhook Data Source table.
A column in the Webhook Data Source’s table.
The column name.
The JSON property that the column will be derived from. For example, if you POST a JSON event like this:
{ "greeting": { "message": "hello, world" } }
Then you can use the JSON property “greeting.message” to extract “hello, world” into a column.
The column type.
See ColumnType
Whether the column’s type is nullable or not.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp and uniqueId values, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The primary timestamp column, if any.
The Webhook URL for posting JSON events.
The tenant ID column, if any.
deprecated: Will be removed; use Data Pool Access Policies instead.
The unique ID column, if any. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated.
deprecated: Will be removed; use Table Settings to define the primary key.
DataSourceOrFailureResponse
The result of a mutation which creates or modifies a Data Source.
If successful, a DataSourceResponse will be returned; otherwise, a FailureResponse will be returned.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
See Account
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
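For example, paging forward through a Data Source’s Data Pools might look like the following sketch. The first and after argument names come from this reference; the lookup field and the nested selections (nodes, pageInfo) are assumptions for illustration, not confirmed fields:
query {
  dataSource(id: "<your Data Source ID>") {   # hypothetical lookup field
    dataPools(first: 10, after: "<endCursor from the previous page>") {
      nodes {
        id
        uniqueName
      }
      pageInfo {
        endCursor      # pass this as "after" on the next request
        hasNextPage
      }
    }
  }
}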
Arguments
deprecated: Use checks instead.
See Error
The failure response object.
The error that caused the failure.
The error object.
The error code.
The error message.
TableEngine
A Data Pool’s table engine.
Parameters for the MergeTree table engine.
The type is always MERGE_TREE.
ClickHouse table engine types.
MERGE_TREE: The MergeTree table engine.
REPLACING_MERGE_TREE: The ReplacingMergeTree table engine.
SUMMING_MERGE_TREE: The SummingMergeTree table engine.
AGGREGATING_MERGE_TREE: The AggregatingMergeTree table engine.
POSTGRESQL: The PostgreSQL table engine.
Parameters for the ReplacingMergeTree table engine.
The type is always REPLACING_MERGE_TREE.
ClickHouse table engine types.
MERGE_TREE: The MergeTree table engine.
REPLACING_MERGE_TREE: The ReplacingMergeTree table engine.
SUMMING_MERGE_TREE: The SummingMergeTree table engine.
AGGREGATING_MERGE_TREE: The AggregatingMergeTree table engine.
POSTGRESQL: The PostgreSQL table engine.
The ver parameter to the ReplacingMergeTree engine.
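For instance, a hedged sketch of selecting this engine with a version column; the input shape is an assumption, while type and ver are the parameters documented here:
engine: {
  # hypothetical input shape
  type: REPLACING_MERGE_TREE
  ver: "updated_at"   # illustrative column name
}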
Parameters for the SummingMergeTree table engine.
The type is always SUMMING_MERGE_TREE.
ClickHouse table engine types.
MERGE_TREE: The MergeTree table engine.
REPLACING_MERGE_TREE: The ReplacingMergeTree table engine.
SUMMING_MERGE_TREE: The SummingMergeTree table engine.
AGGREGATING_MERGE_TREE: The AggregatingMergeTree table engine.
POSTGRESQL: The PostgreSQL table engine.
The columns argument for the SummingMergeTree table engine.
Parameters for the AggregatingMergeTree table engine.
The type is always AGGREGATING_MERGE_TREE.
ClickHouse table engine types.
MERGE_TREE: The MergeTree table engine.
REPLACING_MERGE_TREE: The ReplacingMergeTree table engine.
SUMMING_MERGE_TREE: The SummingMergeTree table engine.
AGGREGATING_MERGE_TREE: The AggregatingMergeTree table engine.
POSTGRESQL: The PostgreSQL table engine.
Parameters for the PostgreSQL table engine.
The type is always POSTGRESQL.
ClickHouse table engine types.
MERGE_TREE: The MergeTree table engine.
REPLACING_MERGE_TREE: The ReplacingMergeTree table engine.
SUMMING_MERGE_TREE: The SummingMergeTree table engine.
AGGREGATING_MERGE_TREE: The AggregatingMergeTree table engine.
POSTGRESQL: The PostgreSQL table engine.
DataPoolOrFailureResponse
The result of a mutation which creates or modifies a Data Pool.
If successful, a DataPoolResponse will be returned; otherwise, a FailureResponse will be returned.
The result of a mutation which creates or modifies a Data Pool.
The Data Pool which was created or modified.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
See Account
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
See Timestamp
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The list of Syncs of the Data Pool.
Arguments
See SyncsFilter
See SyncConnection
The list of Metrics powered by the Data Pool.
Arguments
See MetricConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
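A hedged sketch of such a validation call follows; the field name validateExpression and the valid result field are assumptions, while reason is the explanation mentioned above:
query {
  dataPool(id: "<your Data Pool ID>") {
    validateExpression(expression: "SUM(price)") {   # hypothetical names
      valid    # assumed boolean result field
      reason   # populated when the expression is invalid
    }
  }
}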
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
deprecated: Use setupTasks instead.
See Error
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
See Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
See UniqueId
The failure response object.
The error that caused the failure.
The error object.
The error code.
The error message.
MetricSettings
A Metric’s settings, depending on its type.
Settings for Count Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
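For example, a SQL filter limiting a Count Metric to confirmed records might look like this sketch; the countSettings wrapper and column names are illustrative assumptions, while filterSql is the field described here:
countSettings: {
  # hypothetical wrapper; filterSql holds a SQL boolean expression
  filterSql: "status = 'confirmed' AND value > 0"
}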
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
Settings for Sum Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The Dimension to be summed.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
Settings for Count Distinct Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The Dimension on which the count distinct operation is performed.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
Settings for Average Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The Dimension to be averaged.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
Settings for Min Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The Dimension to select the minimum from.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
Settings for Max Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The Dimension to select the maximum from.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
Settings for Custom Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The expression that defines the aggregation function for this Metric.
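For example (an illustrative sketch with hypothetical column names), a Custom Metric’s expression can combine columns inside an aggregation:
customSettings: {
  # hypothetical wrapper; expression is the field described above
  expression: "SUM(price * quantity)"
}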
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.