Prerequisites
To follow the steps on this page:
- Create a target Tiger Cloud service with the Real-time analytics capability enabled. You need your connection details. This procedure also works for self-hosted TimescaleDB.
- Install Java 8 or higher to run Apache Kafka. You can check your installed version as shown below.
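For example, confirm the Java version on your path:

```shell
java -version
```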
Install and configure Apache Kafka
To install and configure Apache Kafka:
- Extract the Kafka binaries to a local folder.
  From now on, the folder where you extracted the Kafka binaries is called <KAFKA_HOME>.
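A download-and-extract sketch; the Kafka version and mirror shown here are assumptions, so substitute the release you want to run:

```shell
# Download and unpack the Kafka binaries.
wget https://downloads.apache.org/kafka/3.9.0/kafka_2.13-3.9.0.tgz
tar -xzf kafka_2.13-3.9.0.tgz
cd kafka_2.13-3.9.0   # this folder is referred to as <KAFKA_HOME>
```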
- Configure and run Apache Kafka.
  Use the -daemon flag to run this process in the background.
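A minimal single-node sketch; the scripts and paths match the Apache Kafka 3.x KRaft quickstart, so verify them against your version:

```shell
# Generate a cluster ID and format the storage directory (KRaft mode, no ZooKeeper).
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
bin/kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c config/kraft/server.properties

# Start the broker in the background.
bin/kafka-server-start.sh -daemon config/kraft/server.properties
```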
- Create Kafka topics.
  In another Terminal window, navigate to <KAFKA_HOME>, then call kafka-topics.sh to create the following topics:
  - accounts: publishes JSON messages that are consumed by the timescale-sink connector and inserted into your Tiger Cloud service.
  - deadletter: stores messages that cause errors and that Kafka Connect workers cannot process.
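For example, leaving partition and replication counts at their single-node defaults:

```shell
bin/kafka-topics.sh --create --topic accounts --bootstrap-server localhost:9092
bin/kafka-topics.sh --create --topic deadletter --bootstrap-server localhost:9092
```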
- Test that your topics are working correctly.
  - Run kafka-console-producer to send messages to the accounts topic, then send some events by typing them into the producer.
  - In another Terminal window, navigate to <KAFKA_HOME>, then run kafka-console-consumer to consume the events you just sent. You see the events you typed echoed back. Both commands are sketched below.
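For example, using the console tools shipped with Kafka:

```shell
# Terminal 1: type events, one per line; each line is sent as a message.
bin/kafka-console-producer.sh --topic accounts --bootstrap-server localhost:9092

# Terminal 2: consume everything on the topic; the events you typed are printed back.
bin/kafka-console-consumer.sh --topic accounts --from-beginning --bootstrap-server localhost:9092
```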
Install the sink connector to communicate with Tiger Cloud
To set up the Kafka Connect server, plugins, drivers, and connectors:
- Install the connector.
  In another Terminal window, navigate to <KAFKA_HOME>, then download and configure the sink and driver.
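This guide does not name the exact connector artifact, so the following is a sketch only: replace <CONNECTOR_DOWNLOAD_URL> with the release of the sink connector and driver you are using, and note that the jar name is hypothetical. The plugin.path property itself is standard Kafka Connect configuration:

```shell
# Create a plugins folder and download the sink connector and driver into it.
mkdir -p plugins
curl -L "<CONNECTOR_DOWNLOAD_URL>" -o plugins/timescale-sink.jar

# Tell Kafka Connect where to find the plugins.
echo "plugin.path=<KAFKA_HOME>/plugins" >> config/connect-standalone.properties
```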
- Start Kafka Connect.
  Use the -daemon flag to run this process in the background.
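For example (older Kafka versions may require at least one connector properties file on the command line):

```shell
bin/connect-standalone.sh -daemon config/connect-standalone.properties
```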
- Verify Kafka Connect is running.
  In yet another Terminal window, run the following command. You see something like the response shown in the comment:
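Assuming the default Kafka Connect REST port of 8083; the version echoed back depends on your Kafka release:

```shell
curl http://localhost:8083/
# {"version":"3.9.0","commit":"...","kafka_cluster_id":"..."}
```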
Create a table in your Tiger Cloud service to ingest Kafka events
To prepare your Tiger Cloud service for Kafka integration:
- Connect to your service.
- Create a hypertable to ingest Kafka events.
  When you create a hypertable using CREATE TABLE … WITH …, the default partitioning column is automatically the first column with a timestamp data type. This also creates a columnstore policy that automatically converts your data to the columnstore after an interval equal to the value of the chunk_interval, defined through compress_after in the policy. This columnar format enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the conversion, chunks are compressed by up to 98%, and organized for efficient, large-scale queries. You can customize this policy later using alter_job. However, to change after or created_before, the compression settings, or the hypertable the policy is acting on, you must remove the columnstore policy and add a new one. You can also manually convert chunks in a hypertable to the columnstore.
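A minimal sketch of such a table, with columns inferred from the sample query output at the end of this page, assuming a service where the CREATE TABLE … WITH (tsdb.hypertable) syntax is available:

```sql
CREATE TABLE accounts (
    created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    name       TEXT,
    city       TEXT
) WITH (
    tsdb.hypertable  -- created_at, the first timestamp column, becomes the partitioning column
);
```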
Create the Tiger Cloud sink
To create a sink in Apache Kafka:
- Create the connection configuration.
  - In the terminal running Kafka Connect, stop the process by pressing Ctrl+C.
  - Write the following configuration to <KAFKA_HOME>/config/timescale-standalone-sink.properties, then update the properties with your connection details. A sketch follows this list.
  - Restart Kafka Connect with the new configuration:
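The properties below are a sketch, not the connector's documented configuration: connector.class is a placeholder for whichever sink you installed, and the errors.* keys are standard Kafka Connect settings used here to route unprocessable messages to the deadletter topic created earlier. Only the topic name and the connection placeholders come from this guide.

```properties
name=timescale-standalone-sink
# Placeholder: set this to the class of the sink connector you installed.
connector.class=<SINK_CONNECTOR_CLASS>
topics=accounts

# Your Tiger Cloud connection details.
connection.url=jdbc:postgresql://<HOST>:<PORT>/<DBNAME>
connection.user=<USER>
connection.password=<PASSWORD>

# Standard Kafka Connect error handling: tolerate bad records and park them in deadletter.
errors.tolerance=all
errors.deadletterqueue.topic.name=deadletter
errors.deadletterqueue.topic.replication.factor=1
```

Then restart Kafka Connect, passing the worker and sink configurations together:

```shell
bin/connect-standalone.sh -daemon config/connect-standalone.properties config/timescale-standalone-sink.properties
```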
- Test the connection.
  To see your sink, query the /connectors route in a GET request:
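For example, assuming the default REST port:

```shell
curl http://localhost:8083/connectors
# ["timescale-standalone-sink"]
```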
Test the integration with Tiger Cloud
To test this integration, send some messages onto the accounts topic. You can do this using the kafkacat or kcat utility.
- In the terminal running kafka-console-producer.sh, enter the following JSON strings. Look in your terminal running kafka-console-consumer to see the messages being processed.
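For example, the following events match the sample output at the end of this page; the field names mirror the columns of the accounts table:

```json
{"name": "Lola", "city": "Copacabana"}
{"name": "Holly", "city": "Miami"}
{"name": "Jolene", "city": "Tennessee"}
{"name": "Barbara Ann", "city": "California"}
```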
- Query your service for all rows in the accounts table:
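```sql
SELECT * FROM accounts;
```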
  You see something like:

  created_at                    | name        | city
  ------------------------------+-------------+------------
  2025-02-18 13:55:05.147261+00 | Lola        | Copacabana
  2025-02-18 13:55:05.216673+00 | Holly       | Miami
  2025-02-18 13:55:05.283549+00 | Jolene      | Tennessee
  2025-02-18 13:55:05.35226+00  | Barbara Ann | California