Restoring a normal table into a hypertable: data lost after running timescaledb_post_restore()

We have a 1 TB database backup (it's a Zabbix DB) that we are going to restore to PostgreSQL 17.

And at the same time we want to convert some tables to TimescaleDB.

One way is to restore the whole DB with normal tables and then convert them to hypertables. But this process takes a lot of time and needs more than 1 TB of additional space to convert the big tables.
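For reference, that conventional path would be a plain restore followed by an in-place conversion. A minimal sketch, using the same history_uint table and clock column as below:

-- after pg_restore has loaded history_uint as a normal table
SELECT create_hypertable('history_uint', 'clock',
                         chunk_time_interval => 86400,
                         migrate_data => true);  -- copies the existing rows into chunks

Since migrate_data => true copies the existing rows into newly created chunks, the original data and the chunks coexist during the migration, which is presumably where the extra-space requirement comes from.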

So I decided to try a trick to avoid the double conversion: convert the tables to hypertables first and then restore the data into them.

The path is as follows (the commands themselves are sketched after the list):

  1. Create the DB schema
  2. Install the TimescaleDB extension in the empty DB.
  3. Run timescaledb_pre_restore()
  4. Convert the history tables to hypertables: SELECT create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
  5. Run pg_restore
  6. Run timescaledb_post_restore()
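Spelled out, steps 3-6 look roughly like this (a sketch; the pg_restore invocation in step 5 runs from the shell, the database name and dump path are placeholders):

-- step 3: prepare the database for a restore
SELECT timescaledb_pre_restore();

-- step 4: create the hypertable on the still-empty table
SELECT create_hypertable('history_uint', 'clock',
                         chunk_time_interval => 86400,
                         migrate_data => true);

-- step 5: load the data from the shell, e.g.
--   pg_restore --dbname=zabbix <dump>

-- step 6: return the database to normal operation
SELECT timescaledb_post_restore();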

This works well until step 6.

After running timescaledb_post_restore() the history tables become empty:

After step 5 we have:

select count(*) from history_uint;

   count
------------
 5913520368
(1 row)

After timescaledb_post_restore() we have:

select count(*) from history_uint;

 count
-------
     0
(1 row)

Is it possible to run the restore this way (a normal table dump into a hypertable), or is this not supported? What if I don't run the pre/post restore steps and just restore the data into the hypertable?

Strange note: it seems that there's no data in the tables, but the space is not reclaimed even after running VACUUM.
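If it helps with the diagnosis, the space can be inspected per chunk, since with a hypertable the rows live in chunk tables rather than in history_uint itself. A sketch using TimescaleDB's size helpers (assuming the 2.x API names):

-- total size of the hypertable, including all chunks and indexes
SELECT hypertable_size('history_uint');

-- per-chunk breakdown
SELECT * FROM chunks_detailed_size('history_uint') ORDER BY total_bytes DESC;

-- list the chunks themselves
SELECT show_chunks('history_uint');

If these still add up to roughly the restored data size, the rows are presumably still sitting in the chunk tables even though queries through history_uint return nothing.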

Hmm, I think those steps are not suitable for your goal. If you want to import data from another source, you must not run timescaledb_pre_restore/timescaledb_post_restore; these are only needed if you want to restore a dump taken from an already existing TimescaleDB instance.
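In other words, for importing from a plain-PostgreSQL dump the flow is simply: create the hypertable first, then load the data straight into it, with no pre/post restore calls. A rough sketch (the pg_restore options are only an example, adjust them to your dump format):

-- schema already created, extension installed, tables still empty
SELECT create_hypertable('history_uint', 'clock',
                         chunk_time_interval => 86400);
-- migrate_data is not needed here: the table is still empty

-- then load the rows from the shell, for example:
--   pg_restore --dbname=zabbix --data-only <dump>
-- TimescaleDB routes the inserted rows into the proper chunks automatically.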

Thank you!

I tried to restore the DB without running timescaledb_pre/post_restore(), and the data seems to be there now. However, when I try to execute a select query, the Postgres server crashes:

select itemid, clock, ns, count(clock) as cnt
from history
group by itemid, clock, ns
having count(clock) > 1;

2025-11-19 17:28:16.529 UTC [31356] LOG: database system is ready to accept connections
2025-11-19 17:28:16.529 UTC [31483] LOG: TimescaleDB background worker launcher connected to shared catalogs

2025-11-19 17:29:54.295 UTC [31356] LOG: received fast shutdown request
2025-11-19 17:29:54.300 UTC [31356] LOG: aborting any active transactions
2025-11-19 17:29:54.303 UTC [31483] FATAL: terminating background worker "TimescaleDB Background Worker Launcher" due to administrator command
2025-11-19 17:29:54.303 UTC [31486] FATAL: terminating background worker "TimescaleDB Background Worker Scheduler" due to administrator command
2025-11-19 17:29:54.303 UTC [31485] FATAL: terminating background worker "TimescaleDB Background Worker Scheduler" due to administrator command
2025-11-19 17:29:54.360 UTC [31356] LOG: background worker "TimescaleDB Background Worker Launcher" (PID 31483) exited with exit code 1
2025-11-19 17:29:54.360 UTC [31356] LOG: background worker "logical replication launcher" (PID 31484) exited with exit code 1
2025-11-19 17:29:54.363 UTC [31356] LOG: background worker "TimescaleDB Background Worker Scheduler" (PID 31485) exited with exit code 1
2025-11-19 17:29:54.363 UTC [31356] LOG: background worker "TimescaleDB Background Worker Scheduler" (PID 31486) exited with exit code 1
2025-11-19 17:29:54.410 UTC [31356] LOG: server process (PID 31508) was terminated by signal 9: Killed
2025-11-19 17:29:54.410 UTC [31356] DETAIL: Failed process was running: select itemid,clock,ns, count(clock) as cnt from history group by itemid,clock,ns having count(clock) >1;
2025-11-19 17:29:54.410 UTC [31356] LOG: terminating any other active server processes
2025-11-19 17:29:54.420 UTC [31514] postgres@zabbix FATAL: the database system is shutting down
2025-11-19 17:29:54.505 UTC [31356] LOG: abnormal database system shutdown
2025-11-19 17:29:54.811 UTC [31356] LOG: database system is shut down

There were no other connections to the DB or the server.

According to the logs, the process was killed externally (signal 9). That could be the Linux OOM killer. If it was killed by the OOM killer, you would find evidence of that in the operating system logs.

Thanks. That's exactly the issue:

2025-11-19T17:29:53.964916+00:00 ip-172-31-28-242 kernel: Out of memory: Killed process 31508 (postgres) total-vm:15669360kB, anon-rss:11389432kB, file-rss:2820kB, shmem-rss:59676kB, UID:111 pgtables:22628kB oom_score_adj:0
2025-11-19T17:29:54.001744+00:00 ip-172-31-28-242 systemd[1]: postgresql@17-main.service: A process of this unit has been killed by the OOM killer. 

However, this is a fresh, clean instance with 16 GB of memory. Only PostgreSQL 17 is installed, and it was automatically tuned by timescaledb-tune:

max_connections = 100                   # (change requires restart)
shared_buffers = 3926MB                 # min 128kB
work_mem = 128MB                                # min 64kB
maintenance_work_mem = 256MB            # min 64kB
dynamic_shared_memory_type = posix      # the default is usually the first option
effective_io_concurrency = 256          # 1-1000; 0 disables prefetching
max_worker_processes = 23               # (change requires restart)
max_parallel_workers_per_gather = 2     # limited by max_parallel_workers
max_parallel_workers = 4                # number of max_worker_processes that
wal_buffers = 16MB                      # min 32kB, -1 sets based on shared_buffers
checkpoint_completion_target = 0.9      # checkpoint target duration, 0.0 - 1.0
max_wal_size = 20GB
min_wal_size = 512MB
random_page_cost = 1.1                  # same scale as above
effective_cache_size = 11779MB
default_statistics_target = 100 # range 1-10000
jit = off                               # allow JIT compilation
cluster_name = '17/main'                        # added to process titles if nonempty
autovacuum_max_workers = 10             # max number of autovacuum subprocesses
autovacuum_naptime = 10         # time between autovacuum runs
default_toast_compression = lz4 # 'pglz' or 'lz4'
shared_preload_libraries = 'timescaledb'                # (change requires restart)
max_locks_per_transaction = 256         # min 10
include_dir = 'conf.d'                  # include files ending in '.conf' from
timescaledb.max_background_workers = 16
timescaledb.last_tuned = '2025-11-10T00:35:17Z'
timescaledb.last_tuned_version = '0.18.1'

Is it somehow related to the timescaledb-tune settings?
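For reference, the settings above that bound per-query memory are work_mem (128 MB, applied per sort or hash operation and per parallel worker) and max_parallel_workers_per_gather (2); hash_mem_multiplier is left at its PostgreSQL default. Purely as an illustration of session-level overrides, not a diagnosis of this particular OOM, the duplicate check could be retried with a tighter memory budget:

-- sketch: tighten the per-session memory budget before the heavy aggregate
SET work_mem = '64MB';
SET hash_mem_multiplier = 1.0;           -- PostgreSQL 13+: limits hash tables to work_mem
SET max_parallel_workers_per_gather = 0; -- no extra work_mem per parallel worker

select itemid, clock, ns, count(clock) as cnt
from history
group by itemid, clock, ns
having count(clock) > 1;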