Easily ingest data into ClickHouse.
Ingest JSON events via HTTP (see the sketch after this list).
Ingest messages from Kafka topics.
Ingest events from Amazon Data Firehose.
Ingest events from Segment.
Don’t see a data source you need, or want early access to a preview? Let us know.
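For HTTP ingestion, sending an event is a single JSON POST. Below is a minimal TypeScript sketch; the endpoint URL and bearer token are placeholders, since the real values come from your Data Pool's setup:

```typescript
// Minimal sketch: POST a JSON event to a Webhook Data Pool over HTTP.
// The endpoint URL and token below are placeholders, not real values.
const WEBHOOK_URL = "https://example.com/v1/<your-data-pool>"; // hypothetical
const AUTH_TOKEN = "<your-token>"; // hypothetical

async function sendEvent(event: Record<string, unknown>): Promise<void> {
  const response = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${AUTH_TOKEN}`,
    },
    body: JSON.stringify(event),
  });
  if (!response.ok) {
    throw new Error(`Event rejected: ${response.status} ${response.statusText}`);
  }
}

// Usage: any JSON object works; the whole payload lands in _propel_payload.
await sendEvent({ eventName: "page_view", userId: "u_123", path: "/pricing" });
```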
Data Pools are ClickHouse tables with an ingestion pipeline from a data source.
Event-based data sources like the Webhook Data Pool collect and write events into Data Pools. These Data Pools have a very simple schema:
| Column | Type | Description |
| --- | --- | --- |
| `_propel_received_at` | TIMESTAMP | The timestamp when the event was collected, in UTC. |
| `_propel_payload` | JSON | The JSON payload of the event. |
During the setup of a Webhook Data Pool, you can optionally unpack top-level or nested keys from the incoming JSON event into specific columns. See the Webhook Data Pool for more details.
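To make the schema concrete, here is a sketch of how one incoming event could map to a stored row. The `user_id` column is a hypothetical example of a key unpacked during setup:

```typescript
// How one incoming event maps to a Webhook Data Pool row (illustrative only).
interface WebhookRow {
  _propel_received_at: string; // TIMESTAMP: when the event was collected, in UTC
  _propel_payload: Record<string, unknown>; // JSON: the full event payload
  user_id?: string; // hypothetical unpacked column, configured at setup time
}

// Incoming event:
const event = { user_id: "u_123", action: "signup", plan: "pro" };

// Resulting row, assuming `user_id` was unpacked into its own column:
const row: WebhookRow = {
  _propel_received_at: new Date().toISOString(),
  _propel_payload: event,
  user_id: event.user_id,
};
```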
Data warehouse- and data lake-based Data Pools, such as Snowflake or Amazon S3 Parquet, synchronize records from the source table at a given interval and write them into Data Pools. You can create multiple Data Pools, one for each table.
Data warehouse- and data lake-based Data Pools also offer additional properties that let you control their synchronization behavior, such as the sync interval.
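For intuition, the sketch below illustrates the general interval-sync pattern. It is a conceptual illustration only, not Propel's sync engine; both helper functions are hypothetical:

```typescript
// Conceptual sketch of interval-based sync: poll the source table on a
// schedule and append new records to the Data Pool.
type SourceRecord = Record<string, unknown>;

async function fetchNewRecords(since: Date): Promise<SourceRecord[]> {
  // Hypothetical: query the source table for rows created after `since`.
  return [];
}

async function writeToDataPool(records: SourceRecord[]): Promise<void> {
  // Hypothetical: append the records to the Data Pool's ClickHouse table.
}

async function syncLoop(intervalMs: number): Promise<void> {
  let lastSync = new Date(0); // start from the beginning of time
  while (true) {
    const records = await fetchNewRecords(lastSync);
    if (records.length > 0) {
      await writeToDataPool(records);
    }
    lastSync = new Date();
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```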
How long does it take for my data to be synced into Propel? Is Propel real-time?
Once data reaches Propel, whether via syncs or events, it is available through the API within a couple of seconds.
In what region is the data stored?
The data is stored in the AWS US East 2 (us-east-2) region. We are working on expanding our region coverage. If you are interested in using Propel in a different region, please contact us.
How much data can I bring into Propel?
As much as you need. Propel does not limit how much data you can bring. Think of the data in Propel as the data you need to serve to your applications.
How long does Propel keep the data?
You can keep data in Propel for as long as you need. For instance, if your application requires data for only 90 days, you can use the Delete API to remove data after 90 days.
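As an illustration of that retention pattern, here is a sketch of a daily cleanup job. The `deleteRecords` helper stands in for a Delete API call; its name and signature are hypothetical:

```typescript
// Sketch of a 90-day retention job. The deleteRecords helper stands in for
// a call to the Delete API; its name and signature are hypothetical.
const RETENTION_DAYS = 90;

async function deleteRecords(before: Date): Promise<void> {
  // Hypothetical: issue a Delete API request for rows received before
  // `before`, e.g. filtering on the _propel_received_at column.
}

async function enforceRetention(): Promise<void> {
  const cutoff = new Date(Date.now() - RETENTION_DAYS * 24 * 60 * 60 * 1000);
  await deleteRecords(cutoff);
}

// Run once a day (in practice, use a scheduler such as cron).
setInterval(enforceRetention, 24 * 60 * 60 * 1000);
```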