
Changelog

The latest features, product improvements and bug fixes from the Propel team.


· One min read

  • The Console now displays a descriptive message when trying to delete a Data Pool that has Metrics attached.
  • The Console now displays a descriptive message when trying to delete a Metric that has an access policy attached.
  • The password reset flow now works correctly.
  • The Console now returns to the last environment the user was in, instead of defaulting to the prod environment.
  • You can now re-order dimensions on Boosters to sort the most commonly used dimensions first.
  • Added suggestedDataPoolColumnType and supportedDataPoolColumnTypes to the Column object in the GraphQL schema.
  • Average, Minimum, and Maximum Metrics will now return null for “no data”, rather than zero. This is the mathematically correct answer. It applies to counters, time series, leaderboards, reports, and dimension stats.
  • Signup and other emails sent from Propel will now arrive from a “mail.propeldata.com” MAIL FROM address.
  • Fixed an issue with pending Data Pools that caused mismatches between Data Source columns and Data Pool columns.
  • We are no longer exposing stack traces in GraphQL error responses.
  • Fixed handling of TIMESTAMP_TZ and TIMESTAMP_LTZ columns when syncing Snowflake Data Pools. Previously, this issue prevented any Syncs from being created for these Data Pools.
· One min read

Today we are thrilled to announce Propel's AWS S3 Data Source connector. The AWS S3 Data Source enables you to power your customer-facing analytics from Parquet files in your AWS S3 bucket. Whether you have a Data Lake in AWS S3, are landing Parquet files in AWS S3 as part of your data pipeline or event-driven architecture, or are extracting data using services like Airbyte or Fivetran, you can now define Metrics and query their data blazingly fast via Propel's GraphQL API.

Read the blog post: Introducing the AWS S3 Data Source: Power customer-facing analytics from Parquet files in your S3 bucket.

· One min read

Today, we are thrilled to introduce Propellers, an easy way for product development teams to select the optimal cost and query speed for their customer-facing analytics use cases.

Propellers are the unit of compute in Propel. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.

Read the blog post: Introducing Propellers: Easily select the optimal cost and query speed for each use case

· One min read

Application scopes allow your client- or server-side app to access Propel resources. We’re now offering you greater control in restricting what an Application can or cannot do on your app’s behalf with OAuth 2.0 scopes.

Your app can request the following scopes:

  • admin — The Application has read/write access to Data Sources, Data Pools, and Metrics within its Environment.
  • metric:query — The Application can query Metrics within its Environment.
  • metric:stats — The Application can query Metrics’ Dimension Statistics within its Environment.

When generating an access token for your app, you can choose which of these scopes to include. The example below uses curl to generate an access token with only the “metric:query” and “metric:stats” scopes. This ensures the generated access token can only query Metrics and Dimension Statistics, perfect for securing customer-facing apps.

curl https://auth.us-east-2.propeldata.com/oauth2/token \
  -d grant_type=client_credentials \
  -d client_id=$APPLICATION_ID \
  -d client_secret=$APPLICATION_SECRET \
  -d 'scope=metric:query metric:stats'

Applications can request any combination of the available scopes.

· One min read

Business Metrics are based on aggregating data. In some cases you want to sum revenue; in others you want to count the number of requests or count unique visitors over a given time range. In addition to the Sum, Count, and Count Distinct Metric types, you can now define Min, Max, and Average Metric types.

  • Min - Selects the minimum value of the specified column for every record that matches the Metric Filters. For time series, it will select the minimum value for each time granularity.
  • Max - Selects the maximum value of the specified column for every record that matches the Metric Filters. For time series, it will select the maximum value for each time granularity.
  • Average - Averages the values of the specified column for every record that matches the Metric Filters. For time series, it will average the values for each time granularity.
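The null-for-no-data behavior of these aggregations can be sketched in Python. This is a hypothetical illustration of the semantics, not Propel's implementation:

```python
# Hypothetical sketch of Min, Max, and Average Metric semantics.
# When no records match the Metric Filters, the result is null
# (None), not zero.

def min_metric(values):
    # Minimum of the matching records' column values.
    return min(values) if values else None

def max_metric(values):
    # Maximum of the matching records' column values.
    return max(values) if values else None

def average_metric(values):
    # Average of the matching records' column values.
    return sum(values) / len(values) if values else None

matching = [4.0, 7.5, 1.5]
print(min_metric(matching))   # 1.5
print(average_metric([]))     # None ("no data"), not 0
```

For a time series, the same aggregation would be applied independently to the records falling in each time-granularity bucket.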
· One min read

  • You can now reconnect a Data Source if a connection failed.
  • You can now introspect tables in a Data Source to get the latest tables and schemas.
  • You can now see the query activity on the Metric detail page.
  • The Dashboard now shows top queries by Applications and Metrics.
  • You can now see the unique values for a Metric Dimension.
· 2 min read

Once a data set gets to a certain size, as engineers we often wonder, “What values do we actually have in there?” Answering this question can help us understand the correctness of our data, but it can also help us improve the product experience.

For example, if you want to filter on a numerical Dimension, wouldn’t it be great to build a slider with a min and max value? If you want to filter on a Dimension like “country,” wouldn’t it be great to build a dropdown with all the available countries in your Dimension?

Now, you can, with Dimension Statistics! When querying a Metric’s Dimensions, you can ask for stats and get the Dimension’s min, max, average, and uniqueValues:

query {
  metricByName(uniqueName: "My Metric") {
    dimensions {
      columnName
      stats {
        min
        max
        average
        uniqueValues
      }
    }
  }
}
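The stats the query returns behave like these aggregations over a Dimension's column values. This is a local sketch of the semantics (Propel computes them server-side):

```python
# Sketch of Dimension Statistics over a numeric Dimension's values.
# Hypothetical illustration; Propel computes these server-side.

def dimension_stats(values):
    """Compute the min, max, average, and uniqueValues a
    Dimension stats query can return."""
    return {
        "min": min(values),
        "max": max(values),
        "average": sum(values) / len(values),
        "uniqueValues": sorted(set(values)),
    }

prices = [10, 25, 10, 40]
stats = dimension_stats(prices)
# stats["min"] == 10, stats["max"] == 40,
# stats["average"] == 21.25, stats["uniqueValues"] == [10, 25, 40]
```

The min and max are exactly what you would feed into a range slider, and uniqueValues is what you would feed into a dropdown.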

In fact, we’re using this feature internally in the Console to show you unique values for all of your Dimensions here:

An animated screen capture of the Propel console, showing the “View unique values” feature for a Dimension named “AREA”, powered by Dimension Statistics. A scrollable modal window appears, showing all the values “AREA” can take on.

· One min read

When syncing a data warehouse table to a Data Pool, you can now see the detailed Sync activity giving you complete operational visibility if something fails. For every Sync, you can see its status, whether it succeeded or failed, when it started, how many records were added, if there were any invalid records, and how long it took.
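The per-Sync details listed above could be modeled as a record like this. The field names are illustrative, not Propel's API:

```python
from dataclasses import dataclass

@dataclass
class SyncActivity:
    # Hypothetical model of the per-Sync details shown in the
    # Console; field names are illustrative, not Propel's API.
    status: str              # e.g. "SUCCEEDED" or "FAILED"
    started_at: str          # when the Sync started
    records_added: int       # how many records were added
    invalid_records: int     # records that failed validation
    duration_seconds: float  # how long the Sync took

sync = SyncActivity("SUCCEEDED", "2022-08-01T12:00:00Z", 1000, 3, 42.5)
```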

· 2 min read

In addition to counters and time series, we now support leaderboard queries. Leaderboards are great for visualizing the “top N” of something, such as “the top 10 salespeople of the month” or “the top 100 events last year.” You can query a leaderboard with a timeRange, a set of dimensions to group on, a sort order, filters, and a rowLimit. For example:

query {
  metricByName(uniqueName: "sales") {
    leaderboard(input: {
      timeRange: { relative: "THIS_MONTH" },
      dimensions: [{ columnName: "SALES_PERSON" }],
      rowLimit: 10
    }) {
      headers
      rows
    }
  }
}

The result you get back is an array of headers and an array of rows:

{
  "headers": ["SALES_PERSON", "SALES"],
  "rows": [
    ["Alice", "100"],
    ["Bob", "99"],
    ["Carol", "80"],
    ["Dave", "76"],
    ["Erin", "75"],
    ["Frank", "75"],
    ["Grace", "66"],
    ["Heidi", "63"],
    ["Ivan", "34"],
    ["Judy", "33"]
  ]
}
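The headers/rows shape is easy to reshape for charting. A sketch using a subset of the sample response above:

```python
# Reshape a leaderboard response's headers/rows into a list of
# dicts, using a subset of the sample response above.
response = {
    "headers": ["SALES_PERSON", "SALES"],
    "rows": [["Alice", "100"], ["Bob", "99"], ["Carol", "80"]],
}

records = [dict(zip(response["headers"], row)) for row in response["rows"]]
print(records[0])  # {'SALES_PERSON': 'Alice', 'SALES': '100'}

# Values arrive as strings; cast the metric column before charting:
sales = [int(r["SALES"]) for r in records]
```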

Perfect for piping into your favorite graph visualization library! For example, here we use ECharts to visualize a leaderboard from the state of California:

A screenshot of a leaderboard visualization. Rows are labeled with areas from the state of California and are sorted in descending order.

· One min read

Sometimes you need to define Metrics with a subset of the data you have. For example, if you have a Metric like revenue, you may want to exclude all sales records where the type is “PROMOTION” or “TRIAL”.

You can now define Metrics with a subset of records of a Data Pool. When defining a Metric via the Console or API, you can create Metric Filters to include or exclude records from the Metric values. See below for an example where we define a Metric to sum up records where “AREA” equals “California”.
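Conceptually, a Metric Filter restricts which records contribute to the Metric value. A minimal sketch of that behavior, with hypothetical sales values (not Propel's implementation):

```python
# Hypothetical sketch of Metric Filter semantics: only records
# matching every filter contribute to the Metric value.
records = [
    {"AREA": "California", "SALES": 100},
    {"AREA": "Nevada", "SALES": 50},
    {"AREA": "California", "SALES": 25},
]

def sum_metric(records, column, filters):
    # Keep only records matching all filters, then aggregate.
    matching = [r for r in records
                if all(r[f["column"]] == f["value"] for f in filters)]
    return sum(r[column] for r in matching)

total = sum_metric(records, "SALES",
                   [{"column": "AREA", "value": "California"}])
print(total)  # 125
```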

An animated screen capture of the Propel console, showing how to use Metric Filters to select a subset of records from a Data Pool.

· One min read

Different products need to expose different Metrics to their end-users. For example, e-commerce products expose Sum Metrics like “Total Sales”, Count Metrics like “Number of orders”, and Count Distinct Metrics like “Unique visitors”.

When building in-product analytics, you can now define Sum, Count, or Count Distinct Metrics for your product in a single place. Front-end engineers can access the Metric data with time series or counter queries using the Metrics API.
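The three Metric types behave like the following aggregations over your records. This is a local sketch with made-up order data (Propel evaluates Metrics server-side):

```python
# Sketch of Sum, Count, and Count Distinct semantics over
# hypothetical order records.
orders = [
    {"amount": 30.0, "visitor": "v1"},
    {"amount": 20.0, "visitor": "v2"},
    {"amount": 50.0, "visitor": "v1"},
]

total_sales = sum(o["amount"] for o in orders)         # Sum: "Total Sales"
order_count = len(orders)                              # Count: "Number of orders"
unique_visitors = len({o["visitor"] for o in orders})  # Count Distinct: "Unique visitors"
```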

An animated screen capture of Propel’s GraphQL Explorer, showing how to query a Metric using the GraphQL API with various time granularities and filters.

· One min read

You can now connect Propel to your Snowflake account. This connection lets you use your Snowflake data in your customer-facing web and mobile applications with Propel's GraphQL API. Propel manages all the caching, optimization, authorization, and API infrastructure so that your teams can focus on the product experience. The Snowflake Data Source is now available to all customers.

An animated screen capture of the Propel console, showing how to create a Data Source and the numerous checks that confirm the connection is working.

Read the blog posts: