9 posts tagged with "General"

· 2 min read


  • Metric Report: We increased the number of supported dimensions from 2 to 10.
  • Metric Report: We enabled report-level filtering. Filters can now be passed which can remove rows from the report.
  • Logins and signups now use the Secure Remote Password (SRP) protocol.
  • Data Pools now have a new Preview Data tab. This tab shows the most recent records synchronized to the data pools.
  • Added support for the Parquet Map, List, and Struct data types, which map to Propel's JSON column type.
  • Added support for group structures within Parquet files.
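
Report-level filtering, as described above, removes rows from the report before it is returned. As an illustrative sketch only (the filter shape here, a list of `(column, predicate)` pairs, is hypothetical and not Propel's actual API), the semantics look like this:

```python
def apply_filters(rows, filters):
    # Keep a row only if it satisfies every filter (AND semantics).
    # Each filter is a hypothetical (column, predicate) pair.
    return [
        row for row in rows
        if all(predicate(row[column]) for column, predicate in filters)
    ]

rows = [{"country": "US", "n": 5}, {"country": "CA", "n": 1}]
filtered = apply_filters(rows, [("n", lambda v: v > 2)])
```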


  • Fixed an issue with ingesting timestamps with a value of 0 and enhanced error handling for negative epoch timestamps.
  • Fixed an issue with WEEK granularity starting on a Sunday. The WEEK granularity now starts on Monday, consistent with LAST_N_WEEKS and the week-based relative time ranges.
  • Fixed a bug with changing between Relative and Absolute time in the metric playground.
  • Fixed a bug with the GraphQL variables when changing between relative types in the metric playground.
  • Fixed a bug where the username string in the top right of the web console would show the id instead of the username.
  • Fixed an issue displaying the setup checklist for the S3 Data Source.
  • Fixed a non-clickable save button in account settings.
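
The WEEK granularity fix above means weeks now start on Monday, matching LAST_N_WEEKS. A minimal sketch of truncating a date to the Monday that begins its ISO week (illustrative only; this is not Propel's implementation):

```python
from datetime import date, timedelta

def week_start(d):
    # date.weekday() is 0 for Monday, so subtracting it
    # truncates any date to the Monday starting its ISO week.
    return d - timedelta(days=d.weekday())

# Wednesday 2023-01-04 falls in the week starting Monday 2023-01-02.
monday = week_start(date(2023, 1, 4))
```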
· One min read

  • Launched our new website!
  • Launched Metric Report.
  • Queries now support OR filtering.
  • Launched a preview of our React components library, Propel UI Kit, on GitHub.
  • Performance optimizations for asynchronous sync operations.
  • Unique names can now be up to 192 bytes.
  • Allow customers to create Applications with the APPLICATION_ADMIN scope.
  • Allow APPLICATION_ADMIN-scoped Applications to create other Applications with lesser scopes (e.g., ADMIN, METRIC_QUERY).
  • Support DOUBLE and FLOAT column types for Tenant.
· One min read

  • Launched Terraform provider.
  • Launched Grafana plugin.
  • Fixed an issue with Snowflake number types with scale greater than 9.
  • Added support for the "data_pool:query" and "data_pool:stats" scopes in the OAuth 2.0 API for requesting an Application access token.
  • Opened up signups for Snowflake customers.
  • Fixed a bug in tenant filtering for metricReport API.
  • Fixed a bug that added one extra time unit at the end of time series queries with relative time ranges filters.
  • Fixed a bug with support for DOUBLE and FLOAT column types for Tenant.
  • Pagination fixes in the web console.
  • Fixed to disallow changing an HTTP Data Source's table name after creation.
  • Fixed a bug in playground visualization card height.
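
The time series fix above concerns an off-by-one when generating buckets for a relative time range. A sketch of the correct behavior (illustrative only, not Propel's implementation): bucket starts should span the half-open interval [start, stop).

```python
from datetime import datetime, timedelta

def time_buckets(start, stop, step):
    # Correct: iterate while t < stop, yielding buckets in [start, stop).
    # Using "t <= stop" instead would emit one extra bucket at the end,
    # the kind of off-by-one described in the fix above.
    buckets = []
    t = start
    while t < stop:
        buckets.append(t)
        t += step
    return buckets

# One day at hourly granularity should yield exactly 24 buckets.
day = time_buckets(datetime(2023, 1, 1), datetime(2023, 1, 2), timedelta(hours=1))
```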
· One min read

  • The Console now displays a descriptive message when trying to delete a Data Pool that has Metrics attached.
  • The Console now displays a descriptive message when trying to delete a Metric that has an access policy attached.
  • Password reset flow now works.
  • The Console now returns to the last environment the user was in vs. defaulting to the prod environment.
  • You can now re-order dimensions on Boosters to sort the most commonly used dimensions first.
  • Added suggestedDataPoolColumnType and supportedDataPoolColumnTypes to the Column object in the GraphQL schema.
  • Average, Minimum, and Maximum Metrics will now return null for “no data”, rather than zero. This is the mathematically correct answer. This applies to counters, time series, leaderboards, reports, and dimension stats.
  • Emails sent from Propel in response to signups, etc., will now arrive from a "" MAIL FROM address.
  • Fixed an issue with pending Data Pools that caused mismatches between Data Source columns and Data Pool columns.
  • We are no longer exposing stack traces in GraphQL error responses.
  • Fix to correctly handle TIMESTAMP_TZ and TIMESTAMP_LTZ columns when syncing Snowflake Data Pools. This issue led to no Syncs being created for these Data Pools.
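
The null-for-“no data” change above reflects that the average of an empty set is undefined. A minimal sketch of the distinction (illustrative only, not Propel's implementation):

```python
def average(values):
    # The average of an empty set is undefined, so return None ("null")
    # rather than 0: a zero result would be indistinguishable from a
    # real average that happens to equal zero.
    if not values:
        return None
    return sum(values) / len(values)
```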
· One min read

Today we are thrilled to announce Propel's AWS S3 Data Source connector. The AWS S3 Data Source enables you to power your customer-facing analytics from Parquet files in your AWS S3 bucket. Whether you have a Data Lake in AWS S3, are landing Parquet files in AWS S3 as part of your data pipeline or event-driven architecture, or are extracting data using services like Airbyte or Fivetran, you can now define Metrics and query their data blazingly fast via Propel's GraphQL API.

Read the blog post: Introducing the AWS S3 Data Source: Power customer-facing analytics from Parquet files in your S3 bucket.

· One min read

Today, we are thrilled to introduce Propellers, an easy way for product development teams to select the optimal cost and query speed for their customer-facing analytics use cases.

Propellers are the unit of compute in Propel. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.

Read the blog post: Introducing Propellers: Easily select the optimal cost and query speed for each use case

· One min read

Application scopes allow your client- or server-side app to access Propel resources. We’re now offering you greater control in restricting what an Application can or cannot do on your app’s behalf with OAuth 2.0 scopes.

Your app can request the following scopes:

  • admin — The Application has read/write access to Data Sources, Data Pools, and Metrics within its Environment.
  • metric:query — The Application can query Metrics within its Environment.
  • metric:stats — The Application can query Metrics’ Dimension Statistics within its Environment.

When generating an access token for your app, you can choose which of these scopes to include. The example below uses curl to generate an access token with only the “metric:query” and “metric:stats” scopes. This ensures the generated access token can only query Metrics and Dimension Statistics, perfect for securing customer-facing apps. (Here, $TOKEN_URL stands in for your OAuth 2.0 token endpoint.)

curl "$TOKEN_URL" \
  -d grant_type=client_credentials \
  -d client_id=$APPLICATION_ID \
  -d client_secret=$APPLICATION_SECRET \
  -d 'scope=metric:query metric:stats'

Applications can use any of the available scopes.
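
On the server side, enforcing scopes comes down to checking the space-delimited scope string granted to the token. A minimal sketch of that check (illustrative only; function name and shapes are assumptions, not Propel's implementation):

```python
def has_scopes(granted, required):
    # OAuth 2.0 scopes are a space-delimited string; a token authorizes
    # an operation only if every required scope was granted to it.
    granted_set = set(granted.split())
    return all(scope in granted_set for scope in required)

# A token restricted to querying cannot perform admin operations.
allowed = has_scopes("metric:query metric:stats", ["metric:query"])
```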

· One min read

  • You can now reconnect a Data Source if a connection failed.
  • You can now introspect tables in a Data Source to get the latest tables and schemas.
  • You can now see the query activity on the Metric detail page.
  • The Dashboard now shows top queries by Applications and Metrics.
  • You can now see the unique values for a Metric Dimension.