This page shows how to run real-time analytical queries against a Tiger Cloud service through Supabase, using a foreign data wrapper (FDW) to bring aggregated data from your service into Supabase.

Prerequisites

To follow the steps on this page:
  • Create a target Tiger Cloud service with the Real-time analytics capability enabled.

    You need your connection details. This procedure also works for self-hosted TimescaleDB.

Set up your Tiger Cloud service

To set up a Tiger Cloud service optimized for analytics to receive data from Supabase:
  1. Optimize time-series data in hypertables. Time-series data represents how a system, process, or behavior changes over time. Hypertables are Postgres tables that help you improve insert and query performance by automatically partitioning your data by time. Connect to your Tiger Cloud service and create a table that Supabase will point to:
    CREATE TABLE signs (
        time timestamptz NOT NULL DEFAULT now(),
        origin_time timestamptz NOT NULL,
        name TEXT
    ) WITH (
      tsdb.hypertable
    );
    
    When you create a hypertable using CREATE TABLE … WITH …, the default partitioning column is automatically the first column with a timestamp data type. Tiger Cloud also creates a columnstore policy that automatically converts your data to the columnstore after an interval equal to the value of the chunk_interval, defined through compress_after in the policy. This columnar format enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the conversion, chunks are compressed by up to 98% and organized for efficient, large-scale queries. You can customize this policy later using alter_job. However, to change after or created_before, the compression settings, or the hypertable the policy is acting on, you must remove the columnstore policy and add a new one. You can also manually convert chunks in a hypertable to the columnstore.
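    The policy replacement described above can be sketched as follows. This is illustrative: the '1 day' interval is an assumption, and remove_columnstore_policy/add_columnstore_policy are the calls provided by recent TimescaleDB versions:

    ```sql
    -- Drop the automatically created columnstore policy on the signs hypertable,
    -- then add a new one that converts chunks older than one day (illustrative interval).
    CALL remove_columnstore_policy('signs');
    CALL add_columnstore_policy('signs', after => INTERVAL '1 day');
    ```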
  2. Create optimized analytical queries. Continuous aggregates are designed to make queries on very large datasets run faster. Continuous aggregates in Tiger Cloud use materialized views to continuously and incrementally refresh a query in the background, so that when you run the query, only the data that has changed needs to be computed, not the entire dataset.
    1. Create a continuous aggregate pointing to the Supabase database.
      CREATE MATERIALIZED VIEW IF NOT EXISTS signs_per_minute
      WITH (timescaledb.continuous)
      AS
      SELECT time_bucket('1 minute', time) as ts,
       name,
       count(*) as total
      FROM signs
      GROUP BY 1, 2
      WITH NO DATA;
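      Because the view is created WITH NO DATA, it stays empty until a refresh runs. As a sketch, you can populate it once by hand (NULL bounds refresh the whole time range):

      ```sql
      -- One-off refresh of the continuous aggregate over all existing data.
      CALL refresh_continuous_aggregate('signs_per_minute', NULL, NULL);
      ```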
      
    2. Set up delay statistics comparing origin_time to time.
      CREATE MATERIALIZED VIEW IF NOT EXISTS _signs_per_minute_delay
      WITH (timescaledb.continuous)
      AS
      SELECT time_bucket('1 minute', time) as ts,
        stats_agg(extract(epoch from origin_time - time)::float8) as delay_agg,
        candlestick_agg(time, extract(epoch from origin_time - time)::float8, 1) as delay_candlestick
      FROM signs GROUP BY 1
      WITH NO DATA;
      
    3. Set up a view to receive the data from Supabase.
      CREATE VIEW signs_per_minute_delay
      AS
        SELECT ts,
        average(delay_agg) as avg_delay,
        stddev(delay_agg) as stddev_delay,
        open(delay_candlestick) as open,
        high(delay_candlestick) as high,
        low(delay_candlestick) as low,
        close(delay_candlestick) as close
      FROM _signs_per_minute_delay;
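      Once data is flowing, a query like the following (illustrative) reads the per-minute delay statistics from this view:

      ```sql
      -- Average and spread of the ingestion delay, newest buckets first.
      SELECT ts, avg_delay, stddev_delay, open, close
      FROM signs_per_minute_delay
      ORDER BY ts DESC
      LIMIT 10;
      ```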
      
  3. Add refresh policies for your analytical queries. You use start_offset and end_offset to define the time range that the continuous aggregate covers. Assuming that the data is being inserted without any delay, set start_offset to 5 minutes and end_offset to 1 minute. This means that the continuous aggregate is refreshed every minute, and the refresh covers the last 5 minutes. Set schedule_interval to INTERVAL '1 minute' so the continuous aggregate refreshes on your Tiger Cloud service every minute. The data is inserted from Supabase, and the continuous aggregate is refreshed every minute on the Tiger Cloud side.
    SELECT add_continuous_aggregate_policy('signs_per_minute',
     start_offset => INTERVAL '5 minutes',
     end_offset => INTERVAL '1 minute',
     schedule_interval => INTERVAL '1 minute');
    
    Do the same thing for data inserted with a delay:
    SELECT add_continuous_aggregate_policy('_signs_per_minute_delay',
     start_offset => INTERVAL '5 minutes',
     end_offset => INTERVAL '1 minute',
     schedule_interval => INTERVAL '1 minute');
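    To confirm that both policies are scheduled, you can inspect the TimescaleDB job catalog (a sketch; column availability may vary by version):

    ```sql
    -- List the refresh jobs created by add_continuous_aggregate_policy.
    SELECT job_id, application_name, schedule_interval
    FROM timescaledb_information.jobs
    WHERE proc_name = 'policy_refresh_continuous_aggregate';
    ```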
    

Set up a Supabase database

To set up a Supabase database that injects data into your Tiger Cloud service:
  1. Connect a foreign server in Supabase to your Tiger Cloud service.
    1. Connect to your Supabase project using Supabase dashboard or psql.
    2. Enable the postgres_fdw extension.
      CREATE EXTENSION postgres_fdw;
      
    3. Create a foreign server that points to your Tiger Cloud service. Update the following command with your connection details, then run it in the Supabase database:
      CREATE SERVER timescale
      FOREIGN DATA WRAPPER postgres_fdw
      OPTIONS (
          host '<value of host>',
          port '<value of port>',
          dbname '<value of dbname>',
          sslmode 'require',
          extensions 'timescaledb'
      );
      
  2. Create the user mapping for the foreign server. Update the following command with your connection details, then run it in the Supabase database:
    CREATE USER MAPPING FOR CURRENT_USER
    SERVER timescale
    OPTIONS (
       user '<value of user>',
       password '<value of password>'
    );
    
  3. Create a foreign table that points to a table in your Tiger Cloud service. This query introduces the following columns:
    • time: has a default value of now(). This is because the time column is used by Tiger Cloud to optimize data in the columnstore.
    • origin_time: stores the original timestamp of the data.
    Using both columns, you can measure the delay between when the data originates in Supabase (origin_time) and when it is inserted into your Tiger Cloud service (time).
    CREATE FOREIGN TABLE signs (
      time timestamptz NOT NULL DEFAULT now(),
      origin_time timestamptz NOT NULL,
      name TEXT)
    SERVER timescale OPTIONS (
      schema_name 'public',
      table_name 'signs'
    );
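    As a quick sanity check, you can query the foreign table from Supabase. It returns no rows until data is inserted, but it fails fast if the server or user mapping is misconfigured:

    ```sql
    -- Verify the FDW connection from Supabase to the remote signs table.
    SELECT * FROM signs LIMIT 1;
    ```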
    
  4. Create a foreign table in Supabase
    1. Create a foreign table that matches the signs_per_minute view in your Tiger Cloud service. It represents a top-level view of the data.
      CREATE FOREIGN TABLE signs_per_minute (
       ts timestamptz,
       name text,
       total int
      )
      SERVER timescale OPTIONS (schema_name 'public', table_name 'signs_per_minute');
      
    2. Create a foreign table that matches the signs_per_minute_delay view in your Tiger Cloud service.
      CREATE FOREIGN TABLE signs_per_minute_delay (
         ts timestamptz,
         avg_delay float8,
         stddev_delay float8,
         open float8,
         high float8,
         low float8,
         close float8
      ) SERVER timescale OPTIONS (schema_name 'public', table_name 'signs_per_minute_delay');
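    With these foreign tables in place, Supabase can read the aggregated results directly. For example (illustrative query):

    ```sql
    -- From Supabase: per-minute totals for the last 10 minutes, fetched over the FDW.
    SELECT ts, name, total
    FROM signs_per_minute
    WHERE ts > now() - INTERVAL '10 minutes'
    ORDER BY ts DESC;
    ```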
      

Test the integration

To inject data into your Tiger Cloud service from a Supabase database using a foreign table:
  1. Insert data into your Supabase database. Connect to Supabase and run the following query:
    INSERT INTO signs (origin_time, name) VALUES (now(), 'test');
    
  2. Check the data in your Tiger Cloud service. Connect to your service and run the following query:
    SELECT * from signs;
    
    You see something like:
    origin_time                   | time                          | name
    ------------------------------+-------------------------------+------
    2025-02-27 16:30:04.682391+00 | 2025-02-27 16:30:04.682391+00 | test
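    After the refresh policies have run, you can also check the aggregated views; the output depends on timing:

    ```sql
    -- Per-minute counts and ingestion-delay statistics, newest first.
    SELECT * FROM signs_per_minute ORDER BY ts DESC LIMIT 5;
    SELECT * FROM signs_per_minute_delay ORDER BY ts DESC LIMIT 5;
    ```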
You have successfully integrated Supabase with your Tiger Cloud service.