
Changelog

The latest features, product improvements and bug fixes from the Propel team.

· 3 min read

New features

  • ❄️ Propel can now sync updates and deletes from your Snowflake data, unlocking a host of new use cases! Learn more.
  • 🍽️ Propel can now synchronize data from Snowflake Dynamic Tables. Learn more.
  • ⏰ Queries now support setting time zones; a query sketch follows this list. Learn more.
  • 💰 We updated pricing for the P1_X_SMALL Propeller. See pricing.
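
For illustration, here is a minimal sketch of a Time Series query that sets a time zone. Only the timeZone input field is the new feature; the metric name, time range, and the labels/values selection are assumptions made for this example.

query {
  timeSeries (input: {
    # the new timeZone input added in this release
    metricName: "Revenue"
    timeRange: { relative: TODAY }
    granularity: DAY
    timeZone: "America/New_York"
  }) {
    labels
    values
  }
}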

Fixes

  • Time Series queries were sometimes returning unexpected numbers of granules (e.g., when passing timeZone). For example, asking for TODAY with granularity DAY should always return a single granule; asking for THIS_WEEK with granularity DAY should always return seven granules. This is now fixed.
  • Multiple S3-based files with the same name in different directories are now handled correctly.
  • Fixed a bug with incorrect validations in the Leaderboard query builder in the Console that prevented queries from executing.
  • Fixed Metric overview documentation links in the Console.
  • Fixed error state for sparklines in the Console.
  • Fixed a bug with tooltips showing for sparklines in the Console.
  • Fixed an issue with time series charts in the Console not starting at 0.
  • Fixed an issue with Propeller time-out errors showing twice in the Console.
  • Fixed a bug where an erroneous 0 would show up in filter lists in the Console.

Improvements

  • Syncs have a new processedRecords property. It is the sum of the existing newRecords, updatedRecords, and failedRecords properties. This supports the new Data Pool update functionality.
  • Better, more detailed error messages for Data Pool creation and sync errors in the Console and API.
  • Graphs in the Console now render using the browser timezone.
  • Customers can now more easily paste or type into all autocomplete components in the Console.
  • Setting a filter value in the Metric Playground now provides autocomplete when there are fewer than 1,000 values, and free-text entry in all cases.
  • New, improved design for the Applications listing page in the Console, displaying ID, scopes, and Propeller.
  • The UI Kit now uses Luxon under the hood for better date handling.
  • On the docs, the Quickstart now introduces how to set up a Propel Application and use it in the Next.js starter app.
  • On the Leaderboards, Counter, Time Series, and Metric Report overview docs, we now use TacoSoft (our Quickstart data set) in all example queries.
  • On the Leaderboards, Counter, Time Series, and Metric Report docs, we updated query examples to use the up-to-date top-level query structure (metricName in query input instead of the deprecated metricByName or metricById).
  • We made visual improvements in the navbar on the docs site with higher contrast.
· 4 min read

New features

  • 🌮 Customers are now able to create sample data with the new TacoSoft Data Source.
  • 🟣 New Console look and feel.
  • 💦 The UI Kit supports a new prop, refetchInterval, which can be used to specify how frequently a component should re-fetch new data.
  • 📦 We have simplified the UI Kit and re-packaged it as a single, tree-shakeable NPM library, @propeldata/ui-kit.

Fixes

  • In the Metric Playground, we fixed an issue with the filter drop-downs not showing values. Customers can now select a unique value from the drop-down when using filters in the Playground.
  • Fixed an issue with unique name-checking. Customers will now see validation for every unique name in the Console.
  • Fixed a bug in the Metric Report API where users could select dimensions not declared in any of the report's Metrics. Now developers are only able to select dimensions specified in the Metrics.
  • Fixed a Counter Metric performance regression.
  • Fixed a corner case where new Data Pools were getting stuck in the "CREATED" state.
  • Fixed a bug where a Metric's measure or dimension column of type JSON was not included among the available columns to select.
  • Fixed an issue with the calculation of query timeouts. Some queries were incorrectly timing out at 3 seconds when they should time out at 10 seconds.
  • Fixed a password reset bug.
  • Fixed an issue where we were missing time granules for certain relative time ranges.
  • The Amazon S3 Data Source now supports syncing larger S3 buckets, and will sync up to 1,000 files at a time.
  • Previously, S3 Data Sources could be created with invalid S3 bucket names, resulting in them getting stuck in a "CONNECTING" state. Now, when attempting to create or modify an S3 Data Source, setting bucket to an invalid S3 bucket name will result in synchronous failure with a BAD_REQUEST error message: "Invalid S3 bucket name; ensure you pass only the S3 bucket name and not its ARN or URL".
  • Customers can now switch tables and see the updated schema when creating a Data Pool. Previously, the schema was not updated when switching tables.

Improvements

  • New homepage with a handy video!
  • In the Data Pool section of the Console, we have improved the Sync error messages for Snowflake and S3 Data Sources. Customers will now see a helpful message with the error details for failed Syncs in the Data Pool overview syncs table.
  • In the Metric definition, Metric settings, and Playground sections of the Console, the filter operators IS_NULL and IS_NOT_NULL are now available.
  • In the Data Pool section of the Console, the "Preview Data" table now loads faster and adjusts to the screen height. Additionally, the text in the cells of the table will not wrap, and an ellipsis will be displayed when the text is too long.
  • In the Data Pool section in Console, we updated Data Pool documentation links.
  • We have improved API error messages for our customers. Authentication and authorization-related errors and identifier parsing errors will no longer be returned as internal errors. Instead, we catch these errors and provide more informative error messages.
  • We have improved the handling of query errors by introducing a new error that specifically indicates when the Propeller is too small. If a Metric query exceeds the maximum execution time, customers will now receive a more informative error message. The message will indicate that the user needs a bigger Propeller instead of a generic error message such as “The query failed for an unknown reason.”
  • Data Pool Syncs are now created before attempting to connect to the underlying database, resulting in earlier visibility. Previously, if we failed to connect to a database, we did not create any Sync and kept retrying until successful. With the new process, we create failed Syncs that represent unsuccessful attempts, improving visibility for customers.
  • The Console now remembers the last environment a user accessed when switching between accounts.
  • The Console now remembers the last account and environment a user accessed when logging back in.
  • If a user has never logged in, the Console defaults to their most recently created account and its development environment.
· 2 min read

New features

  • 😍 Support for querying JSON data and the JSON data type. Snowflake users can now sync their VARIANT, OBJECT, and ARRAY columns as JSON to Propel. Read the blog post.
  • 🛠️ New data synchronization controls are now available for Snowflake. Read the blog post.
  • 🌮 You can now provision demo data for TacoSoft, our imaginary B2B SaaS taco-selling application. The demo data will make it easier to experience the full power of Propel.
  • 🤓 We introduced a new METRIC_READ scope, which enables developers to list metrics without requiring full ADMIN scope. This new scope allows Propel Applications to retrieve and list Metric resources within the Environment, without being able to query their data.
  • ⛔️ We added new IS_NULL and IS_NOT_NULL filter operators for Metric and query filters. The value field in FilterInput is now nullable. If the specified filter operator is IS_NULL or IS_NOT_NULL, then the value field is not required. Otherwise, the value field is required, and the request will be rejected if it is not present.
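
For example, a filter using the new operators might look like the sketch below. The counter query shape and the column name are assumptions for illustration; the IS_NOT_NULL operator and the omitted value field reflect the behavior described above.

query {
  counter (input: {
    metricName: "Revenue"
    timeRange: { relative: THIS_WEEK }
    # value can be omitted when the operator is IS_NULL or IS_NOT_NULL
    filters: [{ column: "coupon_code", operator: IS_NOT_NULL }]
  }) {
    value
  }
}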

Fixes

  • Fix to allow modifying AND and OR Filters on Metrics.
  • Previously, a bug was triggered when customers provided an invalid Snowflake account, causing their Snowflake Data Source to become stuck in the "CONNECTING" state. This issue has now been fixed.
  • When creating a Data Pool, customers will be redirected to the correct link for tenant ID documentation.
  • When creating an Amazon S3-powered Data Pool, we now display the first empty sync instead of the "hang tight" graphic.

Improvements

  • The documentation site has new styles ✨.
  • In the Console, the input to create a new Amazon S3 Data Source table is now “Unique Name”, instead of “Name”. It will show an error when the name is not unique within the Data Source context.
  • In the Console, during Data Pool creation, customers will no longer see cached data after leaving the creation flow and returning; data is only retained within the same Data Pool creation session.
  • The Terraform provider now supports creating MIN, MAX and AVERAGE Metrics. It supports setting the cursor and sync interval for Data Pools, creating and updating Policies, and it is smarter about when to replace versus update a changed resource.
· 2 min read

Improvements

  • Metric Report: We increased the number of supported dimensions from 2 to 10.
  • Metric Report: We enabled report-level filtering. Filters can now be passed to remove rows from the report.
  • Logins and signups now use the Secure Remote Password (SRP) protocol.
  • Data Pools now have a new Preview Data tab. This tab shows the most recent records synchronized to the Data Pool.
  • Added support for the Parquet data types Map, List, and Struct, which are mapped to Propel's JSON column data type.
  • Added support for group structures within Parquet files.

Fixes

  • Fixed an issue with ingesting timestamps with a value of 0 and enhanced error handling for negative epoch timestamps.
  • Fixed an issue with WEEK granularity starting on a Sunday. The WEEK granularity now starts on Monday, consistent with LAST_N_WEEKS and the week-based relative time ranges.
  • Fixed a bug with changing between Relative and Absolute time in the Metric Playground.
  • Fixed a bug with the GraphQL variables when changing between relative types in the Metric Playground.
  • Fixed a bug where the username string in the top right of the web Console would show the ID instead of the username.
  • Fixed an issue displaying the setup checklist for the S3 Data Source.
  • Fixed a non-clickable save button in account settings.
· One min read

  • Launched our new website!
  • Launched Metric Report.
  • Queries now support OR filtering.
  • Launched a preview of our React Components library, Propel UI Kit, on GitHub.
  • Performance optimizations for asynchronous sync operations.
  • Unique names can now be up to 192 bytes.
  • Allow customers to create Applications with the APPLICATION_ADMIN scope.
  • Allow APPLICATION_ADMIN-scoped Applications to create other Applications with lesser scopes (e.g., ADMIN, METRIC_QUERY, etc.).
  • Support for DOUBLE and FLOAT column types for Tenant.
· One min read

  • Launched Terraform provider.
  • Launched Grafana plugin.
  • Fixed an issue with Snowflake number type support with scale greater than 9.
  • Added support for the "data_pool:query" and "data_pool:stats" scopes in the OAuth 2.0 API for requesting an Application access token.
  • Opened up signups for Snowflake customers.
  • Fixed a bug in tenant filtering for the metricReport API.
  • Fixed a bug that added one extra time unit at the end of time series queries with relative time range filters.
  • Fixed a bug affecting support for DOUBLE and FLOAT column types for Tenant.
  • Pagination fixes in the web Console.
  • Fix to disallow changing an HTTP Data Source's table name after creation.
  • Fixed a bug in the Playground visualization card height.
· One min read

  • The Console now displays a descriptive message when trying to delete a Data Pool that has Metrics attached.
  • The Console now displays a descriptive message when trying to delete a Metric that has an access policy attached.
  • Password reset flow now works.
  • The Console now returns to the last environment the user was in vs. defaulting to the prod environment.
  • You can now re-order dimensions on Boosters to sort the most commonly used dimensions first.
  • Added suggestedDataPoolColumnType and supportedDataPoolColumnTypes to the Column object in the GraphQL schema (a sketch follows this list).
  • Average, Minimum, and Maximum Metrics will now return null for “no data”, rather than zero. This is the mathematically correct answer. This applies to counters, time series, leaderboards, reports, and dimension stats.
  • Emails sent from Propel in response to signups, etc., will now arrive from a “mail.propeldata.com” MAIL FROM address.
  • Fixed an issue with pending Data Pools that caused mismatches between Data Source columns and Data Pool columns.
  • We are no longer exposing stack traces in GraphQL error responses.
  • Fix to correctly handle TIMESTAMP_TZ and TIMESTAMP_LTZ columns when syncing Snowflake Data Pools. This issue led to no Syncs being created for these Data Pools.
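
As a rough sketch, the two new Column fields could be requested along these lines. The surrounding query shape (dataSourceByName and the tables/columns connections) and the name field are assumptions for illustration; only suggestedDataPoolColumnType and supportedDataPoolColumnTypes come from this change.

query {
  # the query path shown here is an assumed shape, not the documented schema
  dataSourceByName (uniqueName: "My Data Source") {
    tables (first: 1) {
      nodes {
        columns (first: 100) {
          nodes {
            name
            suggestedDataPoolColumnType
            supportedDataPoolColumnTypes
          }
        }
      }
    }
  }
}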
· One min read

Today we are thrilled to announce Propel's Amazon S3 Data Source connector. The Amazon S3 Data Source enables you to power your customer-facing analytics from Parquet files in your Amazon S3 bucket. Whether you have a Data Lake in Amazon S3, are landing Parquet files in Amazon S3 as part of your data pipeline or event-driven architecture, or are extracting data using services like Airbyte or Fivetran, you can now define Metrics and query their data blazingly fast via Propel's GraphQL API.

Read the blog post: Introducing the Amazon S3 Data Source: Power customer-facing analytics from Parquet files in your S3 bucket.

· One min read

Today, we are thrilled to introduce Propellers, an easy way for product development teams to select the optimal cost and query speed for their customer-facing analytics use cases.

Propellers are the unit of compute in Propel. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.

Read the blog post: Introducing Propellers: Easily select the optimal cost and query speed for each use case

· One min read

Application scopes allow your client- or server-side app to access Propel resources. We’re now offering you greater control in restricting what an Application can or cannot do on your app’s behalf with OAuth 2.0 scopes.

Your app can request the following scopes:

  • admin — The Application has read/write access to Data Sources, Data Pools, and Metrics within its Environment.
  • metric:query — The Application can query Metrics within its Environment.
  • metric:stats — The Application can query Metrics’ Dimension Statistics within its Environment.

When generating an access token for your app, you can choose which of these scopes to include. The example below uses curl to generate an access token with only the “metric:query” and “metric:stats” scopes. This ensures the generated access token can only query Metrics and Dimension Statistics, perfect for securing customer-facing apps.

curl https://auth.us-east-2.propeldata.com/oauth2/token \
-d grant_type=client_credentials \
-d client_id=$APPLICATION_ID \
-d client_secret=$APPLICATION_SECRET \
-d 'scope=metric:query metric:stats'

Applications can use any of the available scopes.

· One min read

Business Metrics are based on aggregate data analysis. In some cases, you want to sum revenue, for example. In other cases, you want to count the number of requests or count unique visitors for a given time range. In addition to the Sum, Count, and Count Distinct Metric types, you can now define Min, Max, and Average Metric types.

  • Min - Selects the minimum value of the specified column for every record that matches the Metric Filters. For time series, it will select the minimum value for each time granularity.
  • Max - Selects the maximum value of the specified column for every record that matches the Metric Filters. For time series, it will select the maximum value for each time granularity.
  • Average - Averages the values of the specified column for every record that matches the Metric Filters. For time series, it will average the values for each time granularity.
· One min read

  • You can now reconnect a Data Source if a connection failed.
  • You can now introspect tables in a Data Source to get the latest tables and schemas.
  • You can now see the query activity on the Metric detail page.
  • The Dashboard now shows top queries by Applications and Metrics.
  • You can now see the unique values for a Metric Dimension.
· 2 min read

Once a data set gets to a certain size, as engineers we often wonder, “What values do we actually have in there?” Answering this question can help us understand the correctness of our data, but it can also help us improve the product experience.

For example, if you want to filter on a numerical Dimension, wouldn’t it be great to build a slider with a min and max value? If you want to filter on a Dimension like “country,” wouldn’t it be great to build a dropdown with all the available countries in your Dimension?

Now, you can, with Dimension Statistics! When querying a Metric’s Dimensions, you can ask for stats and get the Dimension’s min, max, average, and uniqueValues:

query {
  metricByName (uniqueName: "My Metric") {
    dimensions {
      columnName
      stats {
        min
        max
        average
        uniqueValues
      }
    }
  }
}

In fact, we’re using this feature internally in the Console to show you unique values for all of your Dimensions here:

An animated screen capture of the Propel console, showing the “View unique values” feature for a Dimension named “AREA”, powered by Dimension Statistics. A scroll-able, modal window appears, showing all the values “AREA” can take on.

· One min read

When syncing a data warehouse table to a Data Pool, you can now see the detailed Sync activity giving you complete operational visibility if something fails. For every Sync, you can see its status, whether it succeeded or failed, when it started, how many records were added, if there were any invalid records, and how long it took.