
Improvements

Ingestion

  • Improved Webhook Data Pool performance: Increased requests-per-second (RPS) throughput and concurrent request capacity by 4x, significantly reducing 429 responses.
  • The Webhook Data Pool now accepts payloads up to 1 MiB.

Console

  • Enhanced account security: Users can now set up Multi-Factor Authentication (MFA) for their accounts.
  • Improved user experience: The last selected Propeller for the Data Pool’s Preview Data will be remembered and displayed for future sessions.
  • Updated Console for deprecated API scopes: Applications without prior access to the DATA_POOL_READ, DATA_POOL_STATS, METRIC_READ, and METRIC_STATS scopes can no longer set them, as these scopes are now deprecated.

API

  • Deprecated DATA_POOL_READ, DATA_POOL_STATS, METRIC_READ, and METRIC_STATS API scopes.
  • Expanded the DATA_POOL_QUERY and METRIC_QUERY API scopes to allow viewing the schema of the Data Pools and columns they have access to.
  • Improved the resiliency of copy jobs for large tables, reducing errors and preventing jobs from getting stuck. Enhanced error messages to provide more informative feedback when jobs fail.
  • Propellers no longer have a limit on the maximum bytes they can read, making them more powerful.
  • Materialized View creation now tests the SQL query prior to creating the Materialized View. This protects developers from creating Materialized Views with incorrect SQL, and it gracefully handles queries that take a long time on large datasets.

SQL

  • Enhanced JOIN clause functionality:
    • Added support for multiple column expressions using AND and OR operators
    • Expanded operator support beyond = to include >, <, LIKE, and IN among others.
  • Increased the query result size limit from 500 KB to 10 MB, allowing for larger data retrieval.
  • New ClickHouse functions for enhanced string manipulation and searching:
    • Position and search functions: position, locate, positionCaseInsensitive, positionUTF8, positionCaseInsensitiveUTF8, plus multiSearch functions for various use cases (e.g., AllPositions, FirstPosition, FirstIndex)
    • Pattern matching: match, REGEXP, multiMatchAny, multiMatchAnyIndex, multiMatchAllIndices, plus fuzzy matching with multiFuzzyMatchAny, multiFuzzyMatchAnyIndex, multiFuzzyMatchAllIndices
    • Extraction and comparison: extract, extractAll, extractAllGroupsHorizontal, extractAllGroupsVertical, plus the like, notLike, ilike, notILike functions
    • N-gram and substring operations: ngramDistance, ngramSearch (with case-sensitive and UTF8 variants), plus countSubstrings and countMatches (with case-insensitive options)
    • Specialized string functions: regexpExtract, hasSubsequence, hasToken (with various options for case sensitivity and UTF8)
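To illustrate the expanded JOIN support above, here is a hypothetical query (the orders and price_tiers Data Pools and all column names are invented for this sketch) that combines multiple join conditions with AND/OR and non-equality operators:

```sql
-- Hypothetical Data Pools; illustrates multi-condition joins
-- using a range condition plus an OR of equality and LIKE.
SELECT o.order_id, t.tier_name
FROM orders o
JOIN price_tiers t
  ON o.amount >= t.min_amount
 AND (o.region = t.region OR t.region LIKE 'GLOBAL%')
```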

Embeddable UI (0.11.1)

  • New layout components: <Container>, <Flex>, <Grid>, and <Card>.
  • New typography components: <Text> and <Heading>.
  • New <Tabs> component for tab-based layouts with support for Card components as tabs.
  • Added importable theme colors: Users can now import specific colors (e.g., gray) from "@propeldata/ui-kit/colors" for consistent styling across the application.
  • <TimeRangePicker> now integrates seamlessly with <FilterProvider> for improved data filtering across components.
  • Added a new <TimeGrainPicker> component that also integrates seamlessly with the <FilterProvider>.
  • Improved React compatibility: All components are now compatible with React Server Components (RSC) and exported as client-side components, enhancing performance and flexibility.
  • Enhanced debugging capabilities: Components now log prop mismatch errors, facilitating easier troubleshooting and development.
  • Added groupBy functionality to the <TimeSeries> component, enabling data grouping and more flexible visualizations.
  • Enhanced the <SimpleFilter> component with clearable functionality. Developers can use the disableClearable prop to turn off this feature if needed.
  • Improved color customization: Replaced accentColor with accentColors to provide more versatile theming options.

Terraform (v1.3.4)

  • Enhanced flexibility in Data Pool configuration: Added support for explicitly setting empty values for partition_by, order_by, and primary_key fields. Users can now use the syntax [""] to define these fields as empty when needed. This improvement allows for more precise control over Data Pool settings.

Fixes

Console

  • Resolved issues with Google sign-in for accepting invitations.
  • Fixed a bug that prevented sign-ups for users who had previously registered but not verified their email.
  • Fixed a bug that caused the SQL Console to send a Propeller for Applications, resulting in an error.

API

  • The API now returns a NOT_FOUND error when the requested resource doesn't exist. This applies to Materialized Views, Copy Jobs, and Data Pool access policies, improving error handling and user experience.

Embeddable UI

  • Fixed inconsistent border radius across components for improved visual coherence.
  • Aligned <SimpleFilter> component styles with select-based components like the <TimeRangePicker> for a more uniform user interface.

Terraform (v1.3.4)

  • Fixed an issue where order_by, partition_by, and primary_key columns were being applied in an incorrect order when defining Table Settings during Data Pool creation. We replaced Set with List to ensure order is preserved in fields where it's critical.

Improvements

Console

  • Developers can now sign up and sign in using their Google account.
  • Developers can now sign up and sign in using their GitHub account.

SQL

  • Enhanced support for ClickHouse and PostgreSQL array functions.
  • Introduced new ClickHouse SQL functions:
    • arrayJoin: Allows for the expansion of arrays into separate rows.
    • JSONExtractArrayRaw: Extracts an array from a JSON string.
    • JSONExtractKeys: Retrieves keys from a JSON object.
    • JSONArrayLength: Determines the length of a JSON array.
  • Added ClickHouse tuple manipulation functions:
    • tupleNames: Returns the names of tuple elements.
    • tupleElement: Extracts a specific element from a tuple.
  • Added ClickHouse geospatial functions for advanced geographical data processing.
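A quick sketch of how the new JSON and array functions compose, using literal JSON so the snippet is self-contained:

```sql
-- JSONExtractKeys returns the top-level keys of a JSON object:
SELECT JSONExtractKeys('{"a": 1, "b": 2}');   -- ['a', 'b']

-- JSONExtractArrayRaw pulls out an array, and arrayJoin
-- expands it into one row per element:
SELECT arrayJoin(JSONExtractArrayRaw('{"items": [1, 2, 3]}', 'items'));

-- JSONArrayLength measures a JSON array:
SELECT JSONArrayLength('[1, 2, 3]');          -- 3
```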

New Features

Up to 47% price reduction for your queries

Our most powerful Propellers, the P1_LARGE and P1_X_LARGE, which can read 250 and 500 million rows per second respectively, now have a significantly lower price.

| Propeller | Old price | New price | % Price drop |
| --- | --- | --- | --- |
| P1_MEDIUM | $0.10 per GB read | $0.06 per GB read | 40% |
| P1_LARGE | $0.12 per GB read | $0.07 per GB read | 42% |
| P1_X_LARGE | $0.15 per GB read | $0.08 per GB read | 47% |

As we gain scale, we are committed to passing those savings to our customers.

See new pricing.

User management in Console

Customers can now invite their team members to their Propel account. This feature enhances collaboration by allowing multiple users to access and work on the same account. Team members can share resources, manage Data Pools, and streamline their workflows within a single, unified account.

Log in and invite your teammates.

Fixes

Console

  • The SQL Playground no longer shows an error when a query is missing a FROM statement.
  • We fixed an issue where the Snowflake Data Pool creation flow was incorrectly sending the ver param for MergeTree tables, causing creation to fail.
  • Customers will no longer see cached tables after running an introspection on any Data Source.

Ingestion

  • We fixed an issue where new Snowflake Data Pools on ReplacingMergeTree did not support the Re-sync functionality: the FINAL clause was not being correctly added to the underlying ClickHouse queries, and _propel_is_deleted filtering wasn’t working in some cases.

API

  • We fixed an issue when creating and modifying Applications that disallowed nullable values for the unique name and description fields. These fields are now optional, as defined in the GraphQL schema.

Improvements

Console

  • The SQL Console and Materialized View creation flow now supports formatting the current query.
  • When creating a Materialized View with an existing Data Pool, developers will only see Data Pools with a compatible schema.
  • Customers will be able to enable/disable access control on Snowflake Data Pool creation.
  • You can now copy Metric name and ID from the Metric list page.

Terraform

  • Developers can now manage (create, delete, update) Applications via Terraform.

Materialized Views

We're introducing Materialized Views in Propel’s Serverless ClickHouse as a powerful tool for data transformation. Developers can leverage Materialized Views to reshape, filter, or enrich data with SQL. Materialized Views are persistent query results that update dynamically as the original data changes.

The key benefit? Data is transformed in real time. No scheduling. No full-refreshes.

Materialized Views in ClickHouse

Learn more about Materialized Views
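As a sketch, a Materialized View’s defining query is just SQL over an existing Data Pool. For example, flattening raw JSON events into typed columns (the raw_events Data Pool and its payload column are hypothetical):

```sql
-- Hypothetical transformation query for a Materialized View:
-- reshape raw JSON events into typed, filtered columns.
SELECT
  toDateTime(JSONExtractString(payload, 'timestamp')) AS event_time,
  JSONExtractString(payload, 'customer_id')           AS customer_id,
  JSONExtractFloat(payload, 'amount')                 AS amount
FROM raw_events
WHERE JSONExtractString(payload, 'type') = 'purchase'
```

As new rows land in the source Data Pool, the Materialized View updates automatically, with no scheduling or full refresh.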

Rockset Migration Service

OpenAI has announced the acquisition of Rockset, and as a result, the Rockset service will cease to operate. For those unfamiliar with Rockset, it was a cloud-hosted real-time analytics database that enabled millisecond-latency queries for aggregations and joins, similar to Propel.

We are pleased to announce the immediate availability of the Rockset Migration Service. This service is designed to offer a seamless transition for companies from Rockset.

To get started with the migration process, please schedule a kick-off call with our team here.

Propel's Rockset Migration Service

Customizable Table Engine and Sorting Key for all Data Pools

We are thrilled to announce that Propel now supports customizable table engines and sorting keys for all Data Pools. What does this mean? Better query performance, more cost-efficient reads, and support for real-time updates and deletes on any Data Pool type.

Table engines in Propel’s Serverless ClickHouse determine how tables store, process, read, and update their data.

The sorting key is a set of one or more columns that Propel uses to organize the rows within a table. It determines the order of the rows in the table and significantly impacts the query performance. If the rows are sorted well, Propel can efficiently skip over unneeded rows and thus optimize query performance.

This enhancement provides users with more flexibility and control over their data, allowing them to optimize their Data Pools for their specific use cases.

Customizable Table Engine and Sorting Key for all Data Pools

Learn more about the table engine and sorting key

Fixes

Console

  • The “Operations” tab is now available for all Data Pool types.
  • All Data Pool types can now have a customizable Propeller in the “Preview Data” section.

Improvements

Terraform

  • The resource for Kafka Data Pools is now available.
  • The resource for ClickHouse Data Pools is now available.
  • The resource for Materialized Views is now available.
  • The Data Pool resource now supports table settings.
  • The Unique ID field is now deprecated in the Data Pool resource.
  • The Tenant ID field is now deprecated in the Data Pool resource.
  • The timestamp field is now optional in the Data Pool Terraform resource.

API

  • The API ADMIN scope is sufficient for Applications to get and list other Applications, but it does not allow them to fetch other Application secrets. This simplifies Terraforming of Data Pool Access Policies, which previously failed when the Terraform Application had only ADMIN scope.
  • The ClickHouse Data Source API is now public.
  • You can now modify the timestamp field in the Data Pool API.
  • The uniqueId and tenantId fields are now deprecated in the Webhook Connection Settings object.

New Features

Expanded SQL function support

We have significantly expanded our SQL function support, extending it to a broad range of functions for PostgreSQL and ClickHouse SQL dialects, as well as unique Propel functions. This improvement offers developers greater flexibility and control when querying, transforming, and managing data.

Learn more about Propel SQL function support.

A screenshot of Propel's SQL function documentation

New Console Navigation

We’ve rolled out an updated Console navigation. The new menu structure and design organizes the Console into two primary sections: “Data” and “API”. The “Data” section houses all Serverless ClickHouse-related functionalities, and “API” contains all API-related functionalities.

Log in to the Console to see the new navigation.

A screenshot of Propel's new Console navigation

GraphQL Schema Explorer

We introduced a GraphQL Schema Explorer in the Console. Developers can now actively search through the Propel API GraphQL schema, access API endpoints with ease, and directly download the schema from a provided URL.

Check out the new GraphQL Schema Explorer in the Console.

A screenshot of Propel's GraphQL Schema Explorer

Fixes

Ingestion

  • We resolved an issue with the S3 Data Pool that caused a persistent “CONNECTING” state when customers used an “s3://” or “https://” prefix in the URL for their table path, rather than a relative path within the bucket. Now, if the provided “s3://” or “https://” URL points to the named bucket, the URL’s path is used as the relative path into the bucket. Invalid table paths provided by the customer will now be synchronously rejected.

Console

  • We fixed the SQL Console to respect the Propeller selection.
  • We fixed the GraphQL Playground to respect the Propeller selection.
  • When creating a Data Pool, we fixed the case when a timestamp is not selected.
  • In the API Playground, we fixed leaderboard dimensions clearing for ClickHouse Data Pools.
  • In the API Playground, we fixed the leaderboard table view width and height.
  • In the API Playground, we fixed the time dimension not pre-populating.
  • In the SQL Console, we fixed result cell wrapping.
  • Data Pools created by Materialized Views now see the "Operations" tab.

Improvements

SQL

  • The SQL interface now properly handles identifier quoting.
  • We now support CASE statements in SQL.

Console

  • The API Playground will now populate filter values using the Top Values API.
  • The API Playground now includes an “All time” option for the relative time range.
  • The API Playground has a new layout for TimeSeries, Counter, and Leaderboard APIs.
  • The API Playground now supports setting a “Time Dimension” that is different from the Data Pool’s default timestamp.
  • The API Playground will now show the Data Pool selection first.
  • The API Playground will now only show the “Existing Metrics” for the selected Data Pool.
  • The SQL Console has a new improved layout.
  • The SQL Console has new improved SQL syntax validation.
  • The SQL Console will now show long result values in a tooltip when they are trimmed.
  • The Data Pool list page will not break if the sync activity query fails.
  • When creating and updating Amazon S3 credentials, the path and bucket will be trimmed to prevent white spaces from being accidentally entered.

Have any questions or feedback?

Don't hesitate to ask them on our Reddit community. We value your input and are here to help.


New features

ClickHouse Data Pool beta

The ClickHouse Data Pool enables you to read through to your self-hosted ClickHouse or ClickHouse Cloud rather than syncing data to Propel. This allows you to utilize the data in your analytic dashboards, reports, and workflows directly from your ClickHouse instance through the Propel APIs and UI components.

Learn more about the ClickHouse Data Pool.

Propel's Webhook latency demo.

GraphQL Playground

The GraphQL Playground enables you to run GraphQL queries directly from the Console, offering a simple way to interact with your data when building applications.

Key Features:

  • GraphQL schema autocompletion
  • Code examples
  • Access token generator

Log in to the Console and click "Playground", then select "API: GraphQL".

Propel's GraphQL Playground

Fixes

  • Fixed an AND/OR logic bug in SQL.
  • Fixed a typo in the default Webhook Data Pool name.
  • The free trial plan no longer expires incorrectly.
  • Fixed case-insensitivity in Materialized Views.
  • Fixed automatic syncing of HTTP Data Sources that contain multiple HTTP Data Pools.
  • Fixed the _propel_synced_at column that was incorrectly set for some Webhook Data Pools, resulting in out-of-range values.
  • Fixed column length validation for Kafka on Data Pool creation.
  • Fixed a race condition while re-assigning Access Policies.

Improvements

  • You can now create Data Pools with ClickHouse types via the API.
  • Implemented improvements to our access token API that reduced/eliminated the HTTP 500 errors that some customers experienced.
  • Added support for AggregatingMergeTree table engine for Data Pools via the API.
  • The Webhook Data Pool now returns HTTP 413 Content Too Large error if the payload exceeds 1,048,320 bytes or has more than 500 events.
  • Added support for a read-only filterSql field with a SQL representation of the filters.
  • Raised the GraphQL aliases limit to 250.
  • Improved the case insensitivity matching for identifiers in SQL.
  • Added support for lists of strings and lists of numbers in Postgres SQL.
  • Added support for the NOT IN and AT TIME ZONE operators in SQL.
  • Added support for column auto-aliasing in SQL.
  • Added support for unary expressions in SQL.
  • Added support for CURRENT_DATE in SQL.
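For example, several of the additions above can appear in a single query (the orders Data Pool and its columns are hypothetical):

```sql
-- Hypothetical example combining NOT IN, AT TIME ZONE, and CURRENT_DATE.
SELECT order_id,
       created_at AT TIME ZONE 'America/New_York' AS created_local
FROM orders
WHERE status NOT IN ('cancelled', 'refunded')
  AND created_at >= CURRENT_DATE
```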

New features

⚡️ 10x faster ingestion for the Webhook Data Pool

The Webhook Data Pool now ingests events 10x faster. We have optimized ingestion so that data is available within single-digit seconds.

Learn more about the Webhook Data Pool.

Propel's Webhook latency demo.

🛝 SQL Playground

With Propel’s new SQL Playground, you can now execute SQL queries directly from the Console. It provides you with an easy way to explore your data when building applications.

Key Features:

  • Code examples – easily integrate queries into your code. Get code examples for querying data using SQL via the API, cURL, and JavaScript.
  • "Query as" selector – simulate querying as your app. Different apps have unique data query permissions set through Access Policies. The Playground allows you to test queries as a specific app.
  • "Propeller" selector – experiment with query speeds. Test different query speeds to optimize performance and cost for your data application.

Log in to the Console and click "Playground", then select "API: SQL".

Propel's SQL Playground

✈️ Airbyte destination

The Airbyte destination lets you synchronize data from more than 350 sources to Propel's Serverless ClickHouse infrastructure. It provides an easy way to power your customer-facing analytics and data applications with data from any SaaS application, database, or platform supported by Airbyte.

Learn more about the Airbyte destination.

Propel's Airbyte destination

Fixes

  • Customers can now change the URL in their ClickHouse credentials during the creation flow in the Console.
  • Customers will no longer see stale data when modifying credentials for ClickHouse and Kafka in the Console.
  • Signed-in customers are now redirected from /login to the dashboard.
  • In the GraphQL Playground, customers will be able to clear the variables input and run queries with no variables.
  • Fixed a bug when parsing table names that contain aliases in SQL.
  • Fixed a bug where table aliases were being lowercased in SQL.

Improvements

  • By setting the disable_partial_success=true query parameter, you can ensure that, if any individual event in a batch of events fails validation, the entire request will fail. For example: https://webhooks.us-east-2.propeldata.com/v1/WHK00000000000000000000000000?disable_partial_success=true
  • Added SUBSTRING function to SQL.
  • Added support for extracting parts from a timestamp in SQL.
  • Raised SQL response size limit to 2 MB.
  • The PostgreSQL interface now supports extended queries.

New features

Kafka Data Pool

The new Kafka Data Pool lets you ingest real-time streaming data into Propel. It provides an easy way to power real-time dashboards, streaming analytics, and workflows with a low-latency data API on top of your Kafka topics.

The architectural overview when connecting Kafka to Propel.

Learn more about the Kafka Data Pool.

🆓 New Generous Free Tier

We are introducing a new, generous free tier! It includes up to $15 of usage per month, and the best part is, it does not expire.

Propel Free Tier

Sign up and get started today.

䷰ Schema evolution: Add a column to Data Pool operation

We are introducing Schema Evolution for Data Pools: you can now add new columns to your Data Pools, allowing you to evolve your data schema as your needs grow and change.

Propel Schema Evolution

Learn more about the Add column to Data Pool operation.

🚚 Batch delete and update operation

The new batch delete operation helps you stay GDPR compliant by providing a straightforward way to permanently delete data from a Data Pool. Meanwhile, the batch update operation helps maintain data integrity and facilitates data backfilling in the event of schema changes. Both operations can be done via the Console or API.

Propel batch delete

Learn more about batch updates and deletes.

🪵 React UI Kit logging controls

The Propel UI Kit now features logging capabilities for faster development and clean logging in production. By default, all errors are logged to the browser's console. This behavior can be customized using the LogProvider component. The LogProvider uses React's context mechanism to propagate log settings to nested components, allowing for specific component logging. Available log levels include "error", "warn", "info", or "debug".

Propel UI Kit Logging

Learn more about the React UI Kit’s logging controls.

Fivetran preview

The Fivetran destination lets you synchronize data from over 400 sources to Propel's Serverless ClickHouse infrastructure.

Propel Fivetran

Learn more about our Fivetran destination.

Bring your own ClickHouse preview

The ClickHouse Data Pool reads through to your self-hosted ClickHouse or ClickHouse Cloud rather than syncing data to Propel.

ClickHouse with Propel Architecture

Learn more about the ClickHouse “read-through” Data Pool.

Fixes

  • Fixed the timezone argument on toStartOfWeek, toStartOfMonth, and toStartOfYear SQL functions.
  • Fixed the login loop for accounts with Okta integration.
  • Fixed the environments dropdown on the new Data Pool page in the Console.
  • Fixed the Preview Data section of the Console so that customers can now change the time range and page size when the query times out or errors.

Improvements

  • Added support for timestamps without timezones.
  • Support NOW() and CURRENT_DATE functions in SQL.
  • Support INTERVAL in SQL.
  • An alternative timestamp can be supplied to TimeRangeInput when querying.
  • Customers can view TableSettings (engine, partitionBy, primaryKey, and orderBy) for their Data Pools via the API.
  • Allow creating Data Pools (including Webhook Data Pools) without a timestamp via the API.
  • Allow setting TableSettings (engine, partitionBy, primaryKey, and orderBy) when creating a Data Pool via the API.
  • In the Console, customers can have a different environment in multiple tabs without losing the last selected state.
  • In the Console, customers will see processedRecords instead of newRecords in the Processed Records column for the Syncs table.
  • Customers can now change the sort and timestamp column in the Console in the Preview Data tab.

New features

🔎 SQL API

You can now query any Data Pool using SQL over the GraphQL API. Need to join, group by, or perform complex queries? No problem. Propel's SQL supports PostgreSQL syntax, including joins, unions, and common table expressions for more complex queries. The SQL API allows you to query your data however you'd like, and Propel's multi-tenant access policies ensure that customers can only query their own data.

Propel's SQL API

Learn more about the SQL API.
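A sketch of the kind of PostgreSQL-syntax query the SQL API accepts, combining a common table expression with a join (the orders and customers Data Pools and their columns are hypothetical):

```sql
-- Hypothetical example: a CTE plus a join, PostgreSQL syntax.
WITH recent_orders AS (
  SELECT customer_id, SUM(amount) AS total
  FROM orders
  WHERE created_at >= CURRENT_DATE - INTERVAL '7 days'
  GROUP BY customer_id
)
SELECT c.name, r.total
FROM recent_orders r
JOIN customers c ON c.id = r.customer_id
ORDER BY r.total DESC
```

With multi-tenant Access Policies in place, the same query is automatically restricted to the querying customer’s own rows.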

🐘 PostgreSQL-compatible SQL interface

You can now connect any BI tool or PostgreSQL client to Propel. Essentially, Propel mimics a PostgreSQL instance, providing a seamless connection to a variety of tools or client applications.

For SaaS applications, this simplifies the process of providing a customer-facing SQL interface for custom reporting and data sharing.

Propel's SQL interface

Learn more about the SQL interface.

🗄️ Data Grid API

The new Data Grid API efficiently retrieves individual records from a Data Pool, with the added convenience of built-in pagination, filtering, and sorting. It's perfect for displaying data in a table format, making it ideal for data tables with individual events, orders, requests, or log messages.

Propel's Data Grid API use cases

Learn more about the Data Grid API

📌 Records by ID API

The new Records by ID API is optimized for quick, unique ID lookups. It returns the records corresponding to the given IDs. This API can present detailed record information in a data table or record detail page.

Learn more about the Records by ID API.

🧮 Top Values API

The new Top Values API returns the most common values in a specified column ordered by frequency. The Top Values API can populate UI filters, prompt available values to AI agents, or showcase trending values within a column.

Propel's Top Values API use cases

💚💙💜 UI Kit themes

You can now control the look and feel of all your UI components in one theme. The theme of the UI Kit determines all essential visual elements, including the colors of components, the depth of shadows, and the overall light or dark appearance of the interface. We provide light and dark themes out of the box and the ability to customize your own theme.

Propel's UI Kit Themes

Learn more about themes in the UI Kit.

🔓 UI Kit Access Token Provider

You can now easily fetch and refresh API access tokens from the frontend. The new AccessTokenProvider component allows you to provide a function that fetches an access token from your backend. Using this function, the provider will serve the fetched access token to all its child components and automatically refresh the token when it expires.

Code example of Propel's UI Kit Access Token provider

Learn more about the Access Token Provider.

⏳ UI Kit Filter component

The new Filter component simplifies the process of adding filters to your dashboards. It uses Propel's Top Values API to fill the dropdown list with unique values from a specific column, arranged by their frequency.

Example of Propel's UI Kit Filter component

Learn more about the filter component.

🍰 UI Kit Pie Chart component

The PieChart component is designed to create pie or doughnut charts using the Leaderboard API.

Example of Propel's UI Kit Pie Chart component

Learn more about the Pie Chart component.

🪝 UI Kit Query Hooks

Propel's UI Kit provides prebuilt React hooks for querying data from Propel's GraphQL API. These hooks can be used to query data for custom visualizations or to build with third-party libraries such as D3.js, Recharts, Nivo, or Chart.js.

Code example of Propel's UI Kit Query Hooks

Learn more about the Query Hooks.

Fixes

  • Fixed the timezone argument on toStartOfWeek, toStartOfMonth, and toStartOfYear SQL functions.
  • Fixed login loop for accounts with Okta integration.

Improvements

  • Support LIKE and NOT LIKE filter operators in SQL and the GraphQL API.
  • Support TO_TIMESTAMP function in SQL.
  • Support CAST function in SQL.
  • Marked the tenant ID field in the Data Pool as deprecated; it is no longer needed with the new Access Policies.
  • Support WITH statements in the SQL API.
  • Support UNION statements in the SQL API.
  • New data_pool:read scope to list Data Pools and their schemas.
  • We made timeRange optional in GraphQL API.
  • The dimensions stats API has now been deprecated and replaced with the Top Values API.

New features

🧠 OpenAI integration

Propel’s OpenAI integration lets you easily collect OpenAI ChatGPT API usage events from your application. Once the events are in the Data Pool, you can use them to power usage metering, customer-facing dashboards, reports, and data-driven workflows.

Learn more about the OpenAI integration.

OpenAI Propel integration diagram

🔓 New, more powerful Access Policies

Access Policies now allow you to control column- and row-level access to a Data Pool’s data. They provide a powerful way to govern how your applications, whether internal or customer-facing, access the data. You assign Access Policies to Propel Applications, giving each set of API credentials specific access to the data.

Learn more about the new Access Policies.


🔏 Dynamic Access Policies for multi-tenant applications

Multi-tenant SaaS or consumer applications have more specific data access control requirements. Each tenant should only access their own data, and the application must support potentially millions of unique tenants. Dynamic Access Policies allow you to pass policy values via a custom claim in the API access token. The policy values are cryptographically signed to the access token and used to evaluate the policy. This securely controls access to tenant data without the need to create a policy for each tenant, which could be cumbersome.

Learn more about the multi-tenant access controls.


Real-time updates for Webhook Data Pools

The Webhook Data Pool supports real-time updates. It unlocks advanced analytics use cases where you have to deal with late-arriving data that needs to be updated in the original record. Real-time updates have the additional benefit that you can safely retry requests without worrying about creating duplicates.

Read the real-time update docs to learn more.

🚦 New conditional aggregate functions for Custom Metrics: COUNT_IF, SUM_IF, and AVG_IF

These new functions enable you to define Metrics by aggregating records based on certain conditions.

Let's say you want to calculate the Net Promoter Score (NPS), a common metric for customer satisfaction. NPS is calculated based on responses to a single question: "On a scale of 0-10, how likely are you to recommend our company/product/service to a friend or colleague?" Responses are classified as follows:

  • Promoters (score 9-10)
  • Passives (score 7-8)
  • Detractors (score 0-6)

NPS is then calculated by subtracting the percentage of customers who are Detractors from the percentage of customers who are Promoters.

Here's how you can use COUNT_IF to calculate NPS:

(COUNT_IF(response >= 9) - COUNT_IF(response <= 6)) / COUNT() * 100

This will calculate the percentage of Promoters, subtract the percentage of Detractors, and multiply by 100 to give you the NPS score.

Learn more about defining Custom Metrics.

Fixes

  • We fixed a bug in the custom expression validation that allowed unknown columns to be present in comparison expressions.
  • We fixed a bug in the Console’s Playground where customers could not select a Metric type when they had no Metrics created.
  • We fixed a bug in the Console where the query count by Application and by Metric was not shown correctly.

Improvements

  • New Applications will have the DATA_POOL_QUERY and DATA_POOL_STATS scopes by default.
  • In the Console, customers can now see the basic authentication information for the Webhook Data Pool URLs if authentication is enabled.
  • In the Console, customers can view failed events for Webhook Data Pools in the new Error log.
  • In the Console, customers can create custom queries in the Playground by Metric type and Data Pool.
  • New customers will see the first-time user experience cards until the first Data Pool is created.
  • We improved the error messages that are shown when querying Data Pools and Metrics without the expected scope.
  • We now return a Bad Request Error if clients provide invalid time zones.
  • We added comparisons to Custom Metrics, and now expressions like SUM(foo > 1) or SUM(foo IS NOT NULL) are supported.
  • We added IS and IS NOT comparison operators to the custom expressions.
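To illustrate the comparison support above: in ClickHouse-style SQL a comparison evaluates to 0 or 1, so SUM over a comparison counts the rows where the condition holds. A hedged Python analogue of `SUM(foo > 1)` and `SUM(foo IS NOT NULL)` (the column name `foo` is from the examples above; `None` stands in for SQL NULL):

```python
# Python analogue of summing boolean comparisons: each comparison
# contributes 1 when true and 0 otherwise, so SUM acts as a conditional count.
foo = [0, 1, 2, 3, None]

sum_foo_gt_1 = sum(1 for x in foo if x is not None and x > 1)  # SUM(foo > 1)
sum_foo_not_null = sum(1 for x in foo if x is not None)        # SUM(foo IS NOT NULL)

assert sum_foo_gt_1 == 2
assert sum_foo_not_null == 4
```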
· 3 min read

New features

  • 🎉  Self-serve sign-up is open! You can now sign up for Propel and get started without filling in a form or contacting us.

  • 🕸️  New Webhook Data Pool. This new Data Pool type allows for easy ingestion of JSON events into Propel. Webhook setup and management are available both in the Console and via our GraphQL API. The Console has a rich UX for easily building and testing your JSON schema to match the event structure. Check out the documentation.

  • 🌮 We have a new and improved Quickstart to get you going with sample data as fast as possible.

  • 💥 New Console navigation. We’ve streamlined the navigation and introduced the concept of Credentials for Data Pools. (For existing customers: Credentials replace the concept of Data Sources within the Console. No changes have been made to the public APIs.)

  • 🛝 New top-level API Playground! The API Playground used to be available inside each metric definition in the Console. We’ve moved it out into the main navigation, making it easier than ever to query your metrics.

  • { } New code samples in the API Playground. In addition to grabbing sample GraphQL code for your queries, you can now copy full cURL and JavaScript samples for making queries - right from inside the playground.

  • 👩🏽‍💻 UI Kit Code Examples. The UI Kit’s documentation in Storybook now has code examples for each component.

  • 📊 Added support for custom label formatting in the Leaderboard component in UI Kit. See the pull request.
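Sending a JSON event to a Webhook Data Pool is a plain HTTP POST. Here is a hedged sketch using only the Python standard library; the endpoint URL, credentials, and event fields are placeholders, so substitute the values from your own Webhook Data Pool (the final `urlopen` call is shown commented out):

```python
import base64
import json
import urllib.request

# Placeholder endpoint and basic-auth credentials; use the URL and
# credentials from your own Webhook Data Pool instead.
url = "https://webhooks.example.com/v1/my-data-pool"
username, password = "user", "secret"

# An example JSON event; the fields should match your configured schema.
event = {"event_type": "order_created", "order_id": "ord_1", "amount": 42}

token = base64.b64encode(f"{username}:{password}".encode()).decode()
request = urllib.request.Request(
    url,
    data=json.dumps(event).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {token}",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send the event.
```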

Fixes

  • We fixed an issue with the MAX, MIN, FIRST, LAST, and ANY aggregate functions when they were used inside an arithmetic expression with a JSON column as their first argument. Expressions like MAX(foo.bar)/60 now work correctly.
  • We fixed a bug in the LAST aggregation function when the LAST function call was not a top-level operation in the expression.
  • We fixed issues with DATE formatting in some instances in Parquet files used in ingestion.
  • We fixed a navigation issue in the Console for users with smaller screens.
  • We fixed an issue with the navigation collapse arrow for the menu.
  • We fixed an issue with displaying a selectable timezone in the playground.
  • We fixed an issue with displaying Custom Metrics filters in the “Settings” tab of a Metric.

Improvements

  • Our documentation now features a new navigation system that categorizes data sources into events, data warehouses, and Gen AI sources.
  • In the Console, we've introduced a time range selector to the "Preview Data" tab for Data Pools.
  • On supported Data Pools, customers can now trigger manual syncs even when syncing is paused.
· 2 min read

New features

  • ➗ ✖️ ➕ ➖ New Custom Metrics type. Custom Metrics enable you to define custom expressions to aggregate data from your Data Pool. This provides a more flexible approach to defining Metrics that capture more complex business logic.
  • 🥇 LAST and FIRST aggregation functions for Custom Metric expressions. Read the docs.
  • % PERCENTILE aggregation function for Custom Metric expressions. Read the docs.
  • ❄️ Propel can now synchronize data from Snowflake views as well as standard tables and dynamic tables. Read the blog post.
  • 💰 We launched self-service billing, usage reports, and our trial plans. Log in to the Console and go to the new Billing section.
  • 📒 The React UI Components have new documentation in Storybook.

Fixes

  • We fixed a bug where Metrics could be created for a given Data Pool outside the context of an environment.
  • Fixed an issue on signup with special character handling.
  • Fixed an issue where queries were executing during Console sign-out.
  • Fixed an issue in the UI kit with time series label granularity displaying incorrectly.

Improvements

  • We’ve made several improvements to the underlying performance of Data Pools that connect to Snowflake and have updated records.
  • Customers will be able to select a timezone for queries in the Playground.
  • We improved the text descriptions of Data Pool creation fields in the Console.
  • Customers will now see top-level GraphQL queries in the Playground instead of the Metric query.
  • Various stability and performance enhancements to file handling with Amazon S3.
  • We added a search functionality on our documentation site.
  • We've added new guides to our docs.
· 3 min read

New features

  • ❄️ Propel can now sync updates and deletes from your Snowflake data, unlocking a host of new use cases! Learn more.
  • 🍽️ Propel can now synchronize data from Snowflake Dynamic Tables. Learn more.
  • ⏰ Queries now support setting time zones. Learn more.
  • 💰 We updated pricing for the P1_X_SMALL propeller. See pricing.

Fixes

  • Time Series queries were sometimes returning unexpected numbers of granules (e.g., when passing timeZone). For example, asking for TODAY with granularity DAY should always return a single granule; asking for THIS_WEEK with granularity DAY should always return seven granules. This is now fixed.
  • Multiple S3-based files with the same name in different directories are now handled correctly.
  • Fixed a bug with wrong validations in the Leaderboard query builder in the Console, not allowing the query to execute.
  • Fixed Metric overview documentation links in the Console.
  • Fixed error state for sparklines in the Console.
  • Fixed a bug with tooltips showing for sparklines in the Console.
  • Fixed an issue with time series charts in the Console not starting at 0.
  • Fixed an issue with Propeller time-out errors showing twice in the Console.
  • Fixed a bug where an erroneous 0 would show up in filter lists in the Console.
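The Time Series granule fix above can be sketched as follows: day granules for a relative range are computed in the requested time zone, so THIS_WEEK with granularity DAY always yields exactly seven granules. This is an illustrative re-implementation using the standard library, not Propel's actual query engine (the helper name is hypothetical):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Illustrative sketch: compute the seven DAY granules of THIS_WEEK,
# anchored to midnight in the requested time zone (weeks start on Monday).
def this_week_day_granules(now: datetime) -> list[datetime]:
    start_of_day = now.replace(hour=0, minute=0, second=0, microsecond=0)
    start_of_week = start_of_day - timedelta(days=now.weekday())  # back to Monday
    return [start_of_week + timedelta(days=i) for i in range(7)]

now = datetime(2024, 1, 10, 15, 30, tzinfo=ZoneInfo("America/New_York"))
granules = this_week_day_granules(now)
assert len(granules) == 7          # THIS_WEEK / DAY -> seven granules
assert granules[0].weekday() == 0  # the first granule starts on Monday
```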

Improvements

  • Syncs have a new processedRecords property. It is the sum of the existing properties newRecords, updatedRecords, and failedRecords. This supports the updating Data Pool functionality.
  • Better and more detailed errors for different Data Pool errors on creation and sync in the console and API.
  • Graphs in the Console now render using the browser timezone.
  • Customers can now more easily paste or type into all autocomplete components in the Console.
  • Setting a filter value in the Metric Playground will now provide autocomplete when there are fewer than 1,000 values, and free-text entry in all cases.
  • New, improved design for the Applications listing page in the Console, displaying ID, scopes, and Propeller.
  • The UI Kit now uses Luxon under the hood for better date handling.
  • On the docs, the Quickstart now introduces how to set up a Propel Application and use it in the Next.js starter app.
  • On the Leaderboards, Counter, Time Series, and Metric Report overview docs, we now use TacoSoft (our Quickstart data set) in all example queries.
  • On the Leaderboards, Counter, Time Series, and Metric Report docs, we updated query examples to use the up-to-date top-level query structure (metricName in query input instead of the deprecated metricByName or metricById).
  • We made visual improvements in the navbar on the docs site with higher contrast.
· 4 min read

New features

  • 🌮 Customers are now able to create sample data with the new TacoSoft Data Source.
  • 🟣 New Console look and feel.
  • 💦 The UI Kit supports a new prop, refetchInterval, which can be used to specify how frequently a component should re-fetch new data.
  • 📦 We have simplified UI Kit and re-packaged it as a single, tree-shakeable NPM library, @propeldata/ui-kit.

Fixes

  • In the Metric Playground, we fixed an issue with the filter drop-downs not showing values. Customers can now select a unique value from the drop-down when using filters in the Playground.
  • Fixed an issue with unique name-checking. Customers will now see validation for every unique name in the console.
  • Fixed a bug in the Metric Report API where users could select dimensions not declared in any of the report's Metrics. Now developers are only able to select dimensions specified in the Metrics.
  • Fixed a Counter Metric performance regression.
  • Fixed a corner case where new Data Pools were getting stuck in the "CREATED" state.
  • Fixed a bug where if a measure or dimension column in a Metric is JSON, we were not taking it into account as part of the available columns to be selected.
  • Fixed an issue with the calculation of query timeouts. Some queries were incorrectly timing out at 3 seconds when they should time out at 10 seconds.
  • Fixed a password reset bug.
  • Fixed an issue where we were missing time granules for certain relative time ranges.
  • The Amazon S3 Data Source now supports syncing larger S3 buckets, and will sync up to 1,000 files at a time.
  • Previously, S3 Data Sources could be created with invalid S3 bucket names, resulting in them getting stuck in a "CONNECTING" state. Now, when attempting to create or modify an S3 Data Source, setting bucket to an invalid S3 bucket name will result in synchronous failure with a BAD_REQUEST error message: "Invalid S3 bucket name; ensure you pass only the S3 bucket name and not its ARN or URL".
  • Customers can now switch tables and see the updated schema when creating a Data Pool. Previously, the schema was not updated when switching tables.

Improvements

  • New homepage with a handy video!
  • In the Data Pool section of the Console, we have improved the Sync error messages for Snowflake and S3 Data Sources. Customers will now see a helpful message with the error details for failed Syncs in the Data Pool overview syncs table.
  • In the Metric definition, Metric settings, and Playground sections of the Console, the filter operators IS_NULL and IS_NOT_NULL are now available.
  • In the Data Pool section of the Console, the "Preview Data" table now loads faster and adjusts to the screen height. Additionally, the text in the cells of the table will not wrap, and an ellipsis will be displayed when the text is too long.
  • In the Data Pool section in Console, we updated Data Pool documentation links.
  • We have improved API error messages for our customers. Authentication and authorization-related errors and identifier parsing errors will no longer be returned as internal errors. Instead, we catch these errors and provide more informative error messages.
  • We have improved the handling of query errors by introducing a new error that specifically indicates when the Propeller is too small. If a Metric query exceeds the maximum execution time, customers will now receive a more informative error message. The message will indicate that the user needs a bigger Propeller instead of a generic error message such as "The query failed for an unknown reason."
  • Data Pool Syncs are now created before attempting to connect to the underlying database, resulting in earlier visibility. Previously, if we failed to connect to a database, we did not create any Sync and kept retrying until successful. With the new process, we create failed Syncs that represent unsuccessful attempts, improving visibility for customers.
  • The Console now remembers the last environment a user accessed when switching between accounts.
  • The Console now remembers the last account and environment a user accessed when logging back in.
  • If a user has never logged in, the Console defaults to their most recently created account and its development environment.
· 2 min read

New features

  • 😍 Support for querying JSON data and the JSON data type. Snowflake users can now sync their VARIANT, OBJECT, and ARRAY columns as JSON to Propel. Read the blog post.
  • 🛠️ New data synchronization controls are now available for Snowflake. Read the blog post.
  • 🌮 You can now provision demo data for TacoSoft, our imaginary B2B SaaS taco-selling application. The demo data will make it easier to experience the full power of Propel.
  • 🤓 We introduced a new METRIC_READ scope, which enables developers to list metrics without requiring full ADMIN scope. This new scope allows Propel Applications to retrieve and list Metric resources within the Environment, without being able to query their data.
  • ⛔️ We added new IS_NULL and IS_NOT_NULL filter operators for Metric and query filters. The value field in FilterInput is now nullable. If the specified filter operator is IS_NULL or IS_NOT_NULL, then the value field is not required. Otherwise, the value field is required, and the request will be rejected if it is not present.
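The FilterInput rule above can be expressed as a small validation sketch. This is an illustrative re-implementation, not Propel's server-side code, and the `EQUALS` operator and column names are used only as examples:

```python
# Illustrative validation of the FilterInput rule: value may be omitted
# only when the operator is IS_NULL or IS_NOT_NULL.
NULLARY_OPERATORS = {"IS_NULL", "IS_NOT_NULL"}

def validate_filter(filter_input: dict) -> None:
    operator = filter_input["operator"]
    if operator not in NULLARY_OPERATORS and filter_input.get("value") is None:
        raise ValueError(f"value is required for operator {operator}")

validate_filter({"column": "status", "operator": "IS_NULL"})                # OK: no value needed
validate_filter({"column": "amount", "operator": "EQUALS", "value": "42"})  # OK: value present
try:
    validate_filter({"column": "amount", "operator": "EQUALS"})  # rejected: value missing
except ValueError as error:
    print(error)
```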

Fixes

  • Fixed a bug that prevented modifying AND and OR Filters on Metrics.
  • Previously, a bug was triggered when customers provided an invalid Snowflake account, causing their Snowflake Data Source to become stuck in the "CONNECTING" state. This issue has now been fixed.
  • When creating a Data Pool, customers will be redirected to the correct link for tenant ID documentation.
  • When creating an Amazon S3-powered Data Pool, we now display the first empty sync instead of the "hang tight" graphic.

Improvements

  • The documentation site has new styles ✨.
  • In the Console, the input to create a new Amazon S3 Data Source table is now “Unique Name” instead of “Name”. It will show an error when the name is not unique within the Data Source context.
  • In the Console, during Data Pool creation, customers will no longer see cached data after leaving the creation flow and returning; data is kept only within the Data Pool creation session.
  • The Terraform provider now supports creating MIN, MAX, and AVERAGE Metrics. It supports setting the cursor and sync interval for Data Pools, creating and updating Policies, and it is smarter about when to replace versus update a changed resource.