Prerequisites
To follow the steps on this page:

- Create a target Tiger Cloud service with the Real-time analytics capability enabled. You need your connection details. This procedure also works for self-hosted TimescaleDB.
- Create a Supabase project
Set up your Tiger Cloud service
To set up a Tiger Cloud service optimized for analytics to receive data from Supabase:
Optimize time-series data in hypertables
Time-series data represents how a system, process, or behavior changes over time. Hypertables are tables that help you improve insert and query performance by automatically partitioning your data by time.
Connect to your Tiger Cloud service and create a table to receive data from a Supabase database:
When you create a hypertable using CREATE TABLE … WITH …, the default partitioning column is automatically the first column with a timestamp data type. Also, Tiger Cloud creates a columnstore policy that automatically converts your data to the columnstore after an interval equal to the value of the chunk_interval, defined through compress_after in the policy. This columnar format enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the conversion, chunks are compressed by up to 98%, and organized for efficient, large-scale queries. You can customize this policy later using alter_job. However, to change after or created_before, the compression settings, or the hypertable the policy is acting on, you must remove the columnstore policy and add a new one. You can also manually convert chunks in a hypertable to the columnstore.
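As a minimal sketch of the step above, the following creates a hypertable using the CREATE TABLE … WITH … form described here. The table name signs and its columns (time, origin_time, name) are taken from the later steps on this page; the exact WITH options for your TimescaleDB version may differ:

```sql
-- Sketch: a table created as a hypertable. By default it is partitioned
-- by the first timestamptz column ("time").
CREATE TABLE signs (
  time        TIMESTAMPTZ NOT NULL DEFAULT now(),
  origin_time TIMESTAMPTZ NOT NULL,
  name        TEXT
) WITH (
  tsdb.hypertable
);
```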
Create optimized analytical queries
Continuous aggregates are designed to make queries on very large datasets run
faster. Continuous aggregates in Tiger Cloud use materialized views to
continuously and incrementally refresh a query in the background, so that when you run the query,
only the data that has changed needs to be computed, not the entire dataset.
- Create a continuous aggregate pointing to the Supabase database.
- Set up a delay stats comparing origin_time to time:
  - Set up a view to receive the data from Supabase.
  - Create a continuous aggregate pointing to the Supabase database.
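The first step above can be sketched as follows. This is an illustrative example, not the page's exact query: it assumes a hypertable named signs with a time column (as created earlier) and counts rows per minute to back the signs_per_minute view referenced later on this page:

```sql
-- Hypothetical per-minute rollup over the "signs" hypertable.
CREATE MATERIALIZED VIEW signs_per_minute
WITH (timescaledb.continuous) AS
SELECT
  time_bucket('1 minute', time) AS bucket,
  count(*)                      AS total
FROM signs
GROUP BY bucket;
```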
Add refresh policies for your analytical queries
You use start_offset and end_offset to define the time range that the continuous aggregate covers. Assuming that the data is being inserted without any delay, set start_offset to 5 minutes and end_offset to 1 minute. This means that the continuous aggregate is refreshed every minute, and each refresh covers the last 5 minutes. You set schedule_interval to INTERVAL '1 minute' so the continuous aggregate refreshes on your Tiger Cloud service every minute. The data is accessed from Supabase, and the continuous aggregate is refreshed every minute on the other side.

Do the same thing for data inserted with a delay:
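Applying the offsets described above, the refresh policy looks like this (signs_per_minute is the continuous aggregate referenced elsewhere on this page):

```sql
-- Refresh the last 5 minutes of data every minute.
SELECT add_continuous_aggregate_policy('signs_per_minute',
  start_offset      => INTERVAL '5 minutes',
  end_offset        => INTERVAL '1 minute',
  schedule_interval => INTERVAL '1 minute');
```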
Set up a Supabase database
To set up a Supabase database that injects data into your Tiger Cloud service:
Connect a foreign server in Supabase to your Tiger Cloud service
- Connect to your Supabase project using the Supabase dashboard or psql.
- Enable the postgres_fdw extension.
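Enabling the extension is a single statement:

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
```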
- Create a foreign server that points to your Tiger Cloud service.
Update the following command with your connection details, then run it
in the Supabase database:
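A sketch of the command, assuming illustrative names (the server name tiger_cloud is an arbitrary label used in the examples on this page; host, port, and dbname are placeholders for your connection details):

```sql
-- Placeholders: replace host, port, and dbname with your connection details.
CREATE SERVER tiger_cloud
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host '<your-service-host>', port '<your-service-port>', dbname 'tsdb');
```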
- Create the user mapping for the foreign server.
Update the following command with your connection details, then run it
in the Supabase database:
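A sketch of the user mapping, again assuming the tiger_cloud server name from the previous step; the user and password values are placeholders for your credentials:

```sql
-- Placeholders: replace user and password with your credentials.
CREATE USER MAPPING FOR CURRENT_USER
  SERVER tiger_cloud
  OPTIONS (user 'tsdbadmin', password '<your-password>');
```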
- Create a foreign table that points to a table in your Tiger Cloud service.
This query introduces the following columns:

- time: has a default value of now(), because the time column is used by Tiger Cloud to optimize data in the columnstore.
- origin_time: stores the original timestamp of the data.

Keeping both columns makes it possible to measure the delay between the original timestamp (origin_time) and the time the data is inserted into your Tiger Cloud service (time).
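A sketch of the foreign table, assuming the signs table and tiger_cloud server names used in the other examples on this page:

```sql
-- Sketch: a foreign table in Supabase mirroring the "signs" table.
-- The DEFAULT applies when inserting through this foreign table.
CREATE FOREIGN TABLE signs (
  time        TIMESTAMPTZ NOT NULL DEFAULT now(),
  origin_time TIMESTAMPTZ NOT NULL,
  name        TEXT
) SERVER tiger_cloud
  OPTIONS (schema_name 'public', table_name 'signs');
```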
Create a foreign table in Supabase
- Create a foreign table that matches the signs_per_minute view in your Tiger Cloud service. It represents a top-level view of the data.
- Create a foreign table that matches the signs_per_minute_delay view in your Tiger Cloud service.
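As a sketch of the first of these, assuming the tiger_cloud server name from earlier and that signs_per_minute exposes a per-minute bucket and count (the column names and types here are assumptions, not taken from this page):

```sql
-- Sketch: a foreign table matching the signs_per_minute view.
CREATE FOREIGN TABLE signs_per_minute (
  bucket TIMESTAMPTZ,
  total  BIGINT
) SERVER tiger_cloud
  OPTIONS (schema_name 'public', table_name 'signs_per_minute');
```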
Test the integration
To inject data into your Tiger Cloud service from a Supabase database using a foreign table:

- Insert data into your Supabase database

Connect to Supabase and run the following query:
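For example, an insert through the signs foreign table (the name value 'test' matches the sample output below; origin_time is set explicitly while time takes its default):

```sql
INSERT INTO signs (origin_time, name) VALUES (now(), 'test');
```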
- Check the data in your Tiger Cloud service

Connect to your Tiger Cloud service and run the following query:
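For example, assuming the signs table created earlier:

```sql
SELECT origin_time, time, name
FROM signs
ORDER BY time DESC
LIMIT 10;
```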
You see something like:

origin_time                   | time                          | name
------------------------------+-------------------------------+------
2025-02-27 16:30:04.682391+00 | 2025-02-27 16:30:04.682391+00 | test