How to ingest data from Kafka into Propel
This guide explains how to connect Kafka with Propel to create customer-facing analytics dashboards.
It covers how to:
- Create a user in your Kafka cluster
- Make sure your Kafka cluster is accessible from Propel IPs
- Create a Kafka Data Pool
Requirements
- A Propel account.
- A Kafka cluster with the topics to ingest.
- Access to create users and grant permissions in your Kafka cluster.
1. Create a user in your Kafka cluster
In this section, you'll set up a user for Propel. This user will connect to your Kafka cluster and have the necessary access to the topics.
The steps depend on your Kafka provider. The sections below cover each in turn:
- Self-hosted Kafka
- Confluent Cloud
- AWS MSK
- Redpanda
Self-hosted Kafka
1.1: Create the user "propel"
First, you'll need to create the user. Kafka doesn't manage users directly; it relies on the underlying authentication system. So if you're using SASL/PLAIN for authentication, you would add the user to the JAAS configuration file.
- Open the JAAS configuration file (e.g., kafka_server_jaas.conf) in a text editor.
- Add the following entry to create the user "propel" with its password:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="propel"
password="YOUR_SUPER_SECURE_PASSWORD"
user_propel="YOUR_SUPER_SECURE_PASSWORD";
};
Replace YOUR_SUPER_SECURE_PASSWORD with a secure password.
- Save the file.
1.2: Configure the Kafka Server to Use the JAAS Config
- Set the KAFKA_OPTS environment variable to point to your JAAS config file:
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/your/kafka_server_jaas.conf"
- Restart the Kafka server for the changes to take effect.
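The JAAS file defines credentials only; depending on your setup, the broker must also expose a SASL listener with the PLAIN mechanism enabled in server.properties. A minimal sketch, assuming a single SASL_PLAINTEXT listener on port 9092 and a hypothetical config path (restart the broker again after changing it):
# Append SASL listener settings to the broker config (path is hypothetical).
cat >> /path/to/server.properties <<'EOF'
listeners=SASL_PLAINTEXT://0.0.0.0:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
EOF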
1.3: Grant Describe, Read, and Delete Access to the "propel" User
Now, you'll use Kafka's Access Control Lists (ACLs) to grant permissions to the "propel" user.
Use the kafka-acls CLI to add ACLs for the "propel" user so that it can operate on the "propel-*" consumer groups. Make sure to replace localhost:2181 with your ZooKeeper server.
bin/kafka-acls.sh \
--authorizer-properties zookeeper.connect=localhost:2181 \
--add \
--allow-principal 'User:propel' \
--operation Describe \
--operation Read \
--operation Delete \
--group 'propel-' \
--resource-pattern-type prefixed
For each topic you need to ingest into Propel, run the following command:
bin/kafka-acls.sh \
--authorizer-properties zookeeper.connect=localhost:2181 \
--add \
--allow-principal 'User:propel' \
--operation Describe \
--operation Read \
--topic 'YOUR_TOPIC'
This command grants Describe and Read access to the topic for the user "propel".
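If you have several topics, you can grant the topic ACLs in a loop; a sketch, assuming the hypothetical topic names orders and page_views (replace them with your own):
# Grant Describe and Read on each topic to ingest.
for topic in orders page_views; do
  bin/kafka-acls.sh \
    --authorizer-properties zookeeper.connect=localhost:2181 \
    --add \
    --allow-principal 'User:propel' \
    --operation Describe \
    --operation Read \
    --topic "$topic"
done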
1.4: Verify the ACLs
Verify that the ACLs have been correctly set by listing the ACLs for the topics you authorized.
bin/kafka-acls.sh \
--authorizer-properties zookeeper.connect=localhost:2181 \
--list \
--topic YOUR_TOPIC
You should see the ACLs you added for the user "propel".
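You can also test the credentials end to end with Kafka's console consumer. A minimal sketch, assuming a SASL_PLAINTEXT listener on localhost:9092 (use SASL_SSL if your listener has TLS); the file name and group ID are hypothetical, but the group ID must match the "propel-*" prefix so the group ACLs apply:
# Write a client config for the "propel" user.
cat > propel-client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="propel" \
  password="YOUR_SUPER_SECURE_PASSWORD";
group.id=propel-test
EOF
# Read a few messages to confirm Describe and Read work.
bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic YOUR_TOPIC \
  --consumer.config propel-client.properties \
  --from-beginning --max-messages 5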
Confluent Cloud
These instructions set up an API Key and secret with READ and DESCRIBE permissions in Confluent Cloud.
1.1: Create an API Key in the Confluent Cloud console
In Confluent Cloud, you generally use API keys for authentication rather than user/password combinations.
- Log into your Confluent Cloud account.
- Go to the "Environments" section in the sidebar and click on your environment.
- Click on the "Clusters" tab and select your cluster.
- Click on "API Keys" and then on "Create key".
- Choose "Service account", then "Create new one", and name it "Propel".
- In the "Add ACLs to service account" section, assign:
DESCRIBE
andREAD
operations to the topic you need to ingest to Propel.DESCRIBE
,READ
andDELETE
operations to the consumer group "propel-*"
- Click "Next" and get your Key and Secret that you will use as a user and password to connect to your Kafka cluster.
1.2: Verify the Permissions
After setting the permissions, you can verify them by clicking on the API key in the "API Keys" section and reviewing the roles and resources it has access to.
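Because Confluent Cloud speaks the standard Kafka protocol, you can also verify the key with Kafka's console consumer. A minimal sketch; the bootstrap address and file name are hypothetical, and Confluent Cloud expects SASL_SSL with the PLAIN mechanism, using the API key as the username and the secret as the password:
# Write a client config that authenticates with the API key/secret.
cat > propel-ccloud.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="YOUR_API_KEY" \
  password="YOUR_API_SECRET";
group.id=propel-test
EOF
# Read a few messages to confirm DESCRIBE and READ work.
bin/kafka-console-consumer.sh \
  --bootstrap-server pkc-xxxxx.us-east-2.aws.confluent.cloud:9092 \
  --topic YOUR_TOPIC \
  --consumer.config propel-ccloud.properties \
  --from-beginning --max-messages 5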
AWS MSK
These instructions set up an IAM user with READ and DESCRIBE permissions in AWS MSK.
1.1: Create an IAM User with an IAM policy that grants the necessary permissions
- Sign in to the AWS IAM Management Console.
- In the navigation pane, choose "Users" and then choose "Create User".
- For "Username", enter "propel" and click "Next".
- Select "Attach policies directly", search for and select the AmazonMSKReadOnlyAccess policy, and click "Next".
Then, create a custom policy that grants describe, read, and delete permissions on MSK consumer groups prefixed with "propel-":
- Click "Create policy", then choose the JSON tab.
- Paste the following JSON policy into the editor:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kafka-cluster:DescribeGroup",
"kafka-cluster:DeleteGroup",
"kafka-cluster:ReadGroup"
],
"Resource": "arn:aws:kafka:<region>:<account-id>:group/<cluster-name>/propel-*"
}
]
}
- Replace <region>, <account-id>, and <cluster-name> with your actual AWS region, account ID, and MSK cluster name.
- Click "Review policy", give your policy a name (e.g., "PropelMSKPolicy"), and click "Create policy".
1.2: Create the security credentials for the user
- Click on the user "propel" you just created.
- Click on the "Security credentials" tab and click on "Create access key".
- Select "Other" and click "Next".
- Optionally, add tags (key-value metadata) to the user, then click "Create access key".
- You now have the access key and secret to use as the username and password to connect to your Kafka cluster. Save these credentials securely, as you will not have access to the secret access key again after this step.
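If you prefer the AWS CLI, a minimal sketch that attaches both policies and creates the access key (the custom policy ARN is hypothetical; use the one from step 1.1):
# Attach the managed read-only policy and your custom policy.
aws iam attach-user-policy --user-name propel \
  --policy-arn arn:aws:iam::aws:policy/AmazonMSKReadOnlyAccess
aws iam attach-user-policy --user-name propel \
  --policy-arn "arn:aws:iam::<account-id>:policy/PropelMSKPolicy"
# Create the access key; the secret is shown only once.
aws iam create-access-key --user-name propel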
Redpanda
1.1: Enable SASL/SCRAM Authentication in Redpanda
Follow these steps to enable SASL/SCRAM for authentication with Redpanda.
- Edit the Redpanda configuration file (usually located at /etc/redpanda/redpanda.yaml).
- Enable SASL/SCRAM by adding or updating the following configuration:
redpanda:
  kafka_api:
    - address: 0.0.0.0
      port: 9092
      name: sasl_listen
      authentication_method: sasl  # enforce SASL on this listener
See Redpanda docs for enabling SASL/SCRAM.
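SASL is also gated by a cluster-level property in Redpanda; if authentication is still disabled after the listener change, enable it (a sketch using Redpanda's enable_sasl cluster property):
rpk cluster config set enable_sasl true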
- Enable TLS encryption. SASL provides authentication, but not encryption. To enable SASL authentication with TLS encryption for the Kafka API, in redpanda.yaml, enter:
redpanda:
  kafka_api:
    - address: 0.0.0.0
      port: 9092
      name: sasl_tls_listener
      authentication_method: sasl
  kafka_api_tls:
    - name: sasl_tls_listener
      key_file: broker.key
      cert_file: broker.crt
      truststore_file: ca.crt
      enabled: true
      require_client_auth: false
See Redpanda docs for enabling TLS encryption.
- Confirm the SCRAM mechanism is enabled by running the following command:
rpk cluster config get sasl_mechanisms
You should see SCRAM in the output.
See Redpanda docs for checking SASL/SCRAM.
- Restart your Redpanda server to apply the changes.
1.2: Create the User "propel"
Redpanda uses rpk, a command-line tool, to manage users and ACLs.
Create the user "propel":
rpk acl user create propel --password '<YOUR_SUPER_SECURE_PASSWORD>' --mechanism SCRAM-SHA-256
Replace <YOUR_SUPER_SECURE_PASSWORD> with a secure password.
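You can confirm the user was created by listing SASL users:
rpk acl user list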
1.3: Grant DESCRIBE, READ, and DELETE access to the "propel" user
Now, grant the necessary ACLs to the user "propel" on the topics you need to ingest.
Use the rpk acl command to add ACLs for the "propel" user so that it can operate on the "propel-*" consumer groups.
rpk acl create --allow-principal 'User:propel' --operation describe --operation read --operation delete --resource-pattern-type prefixed --group 'propel-'
For each topic you need to ingest into Propel, run the following command:
rpk acl create --allow-principal 'User:propel' --operation describe --operation read --topic YOUR_TOPIC
This command grants DESCRIBE and READ access to the topic "YOUR_TOPIC" for the user "propel".
1.4: Verify the ACLs
Verify that the ACLs have been correctly set by listing the ACLs for the topic "YOUR_TOPIC":
rpk acl list --topic YOUR_TOPIC
You should see the ACLs you added for the user "propel".
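You can also test the credentials end to end by consuming a few records as the "propel" user. A sketch; flag names vary slightly across rpk versions, so check rpk topic consume --help:
rpk topic consume YOUR_TOPIC \
  --brokers localhost:9092 \
  --user propel \
  --password '<YOUR_SUPER_SECURE_PASSWORD>' \
  --sasl-mechanism SCRAM-SHA-256 \
  -n 5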
2. Make sure your Kafka cluster is accessible from Propel IPs
To ensure that Propel can connect to your Kafka cluster, you need to authorize access from the following IP addresses:
- 3.17.239.162
- 3.15.73.135
- 18.219.73.236
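How you authorize these IPs depends on where your cluster runs. For a cluster behind an AWS security group, a sketch (the security group ID and port are hypothetical; adjust to your broker port):
# Allow each Propel IP on the Kafka port.
for ip in 3.17.239.162 3.15.73.135 18.219.73.236; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 9092 --cidr "$ip/32"
done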
3. Create a Kafka Data Pool
To create a Kafka Data Pool, go to the Data Pools section in the Console, click Create Data Pool, and then click the Kafka tile.
If this is your first Kafka Data Pool, you must create Kafka credentials so that Propel can connect to your Kafka servers.
3.1 Create your Kafka credentials
To create your Kafka credentials, you will need the following details:
- Bootstrap servers: The list of addresses for your Kafka cluster's brokers.
- Authentication type: The authentication protocol used by your Kafka cluster: SCRAM-SHA-256, SCRAM-SHA-512, PLAIN, or NONE.
- TLS: Whether your Kafka cluster uses TLS for secure communication.
- Username: The username for the user you created in your Kafka cluster.
- Password: The password for the user you created in your Kafka cluster.
3.2 Test your credentials
After entering your Kafka credentials, click Create and test credentials to ensure Propel can successfully connect to your Kafka cluster. If the connection is successful, you will see a confirmation message. If not, check your entered credentials and try again.
3.3 Introspect your Kafka topics
Here, you will see a list of topics available to ingest. If you don't see the topic you want to ingest, make sure your user has the right permissions to access the topic.
3.4 Select the topic to ingest and the timestamp
Select the topic you want to ingest into this Data Pool. Once you select a topic, you will see the schema of the Data Pool.
Next, you need to select the timestamp column. This is the column that will be used to order the data in the Data Pool. By default, Propel selects the _timestamp generated by Kafka.
3.5 Name your Data Pool and start ingesting
After you've selected the topic, provide a name for your Data Pool. This name will be used to identify the Data Pool in Propel. Once you've named your Data Pool, click Create Data Pool. Propel will then start ingesting data from the selected Kafka topics into your Data Pool.
3.6 Look at the data in your Data Pool
Once you've started ingesting data, you can view the data in your Data Pool. Go to the Data Pools section in the Console, click on your Kafka Data Pool, and click on the Preview Data tab. Here, you can see the data that has been ingested from your Kafka topic.
What's next?
Now that you've ingested data from Kafka into Propel, you can begin using the APIs with the data in your Data Pool. Here are some suggestions for your next steps:
- Explore the APIs in the API Playground.
- Learn about Transforming your data.
- Learn about the APIs to Query Your Data.
- Learn about Defining Metrics.