Slow create_hypertable()

Hi community!
I am using Postgresql v17.6 for zabbix database. I have installed timescaleDB v2.21.3 and tried to convert several tables to hypertables with data migration, and facing slow converting process. For example, history_uint table with size 238Gb is still converting more then 12 hours (besides table contains 4645075615 records for last 30 day). Server hardware consist of nvme disk in raid1, 256Gb DDR4 RAM and 64 Core 2.1Ghz CPU. I don’t see the bottleneck in server resources: disk I/O is’t under pressure, memory usage is 10%, load average is 2. Replications is off.
timescaledb_information.chunks and timescaledb_information.hypertables are empty. The only indication that the conversion is doing anything is the strace output, where I see active reads and writes.
I didn't find any way to forecast the time required for create_hypertable() with data migration, nor any way to properly monitor the progress of creating a hypertable with data migration. I use this command for hypertable creation:
PERFORM create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true, if_not_exists => true);

So, I am looking for advice on:

  • how to forecast time required for hypertable creation with data migration?
  • how to check the status of hypertable creation process?
  • how to speed up the process of hypertable creation with data migration?
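On the monitoring question, a rough sketch of what I would try (assuming the migration backend is visible in pg_stat_activity, and that chunks are created as _hyper_N_M_chunk tables in the _timescaledb_internal schema, which is TimescaleDB's usual naming; the views may look empty because create_hypertable runs in a single transaction whose catalog changes other sessions cannot see until commit):

```sql
-- Find the backend running the migration and how long it has been active
SELECT pid, state, now() - query_start AS runtime, left(query, 60) AS query
FROM pg_stat_activity
WHERE query ILIKE '%create_hypertable%';

-- Rough progress estimate: total on-disk size of chunk tables written so far
SELECT count(*) AS chunks,
       pg_size_pretty(sum(pg_total_relation_size(c.oid))) AS migrated_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = '_timescaledb_internal'
  AND c.relname LIKE '\_hyper%chunk';
```

Comparing migrated_size against the 238 GB source table gives at least a crude percentage, from which you can extrapolate the remaining time.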

Thanks.

I think it's advisable to rename the old table, create an initially empty hypertable, and do the data migration as a separate task after the hypertable has been created. This way new data can already be inserted into the hypertable, and you can migrate the old data at your own pace in the background.
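A minimal sketch of that approach, using the table and column names from the question (the epoch bounds in the batch copy are purely illustrative; pick a batch size that keeps each transaction short):

```sql
-- 1. Swap in an empty hypertable so new writes go to the right place
BEGIN;
ALTER TABLE history_uint RENAME TO history_uint_old;
CREATE TABLE history_uint (LIKE history_uint_old INCLUDING ALL);
SELECT create_hypertable('history_uint', 'clock', chunk_time_interval => 86400);
COMMIT;

-- 2. Copy old rows in time-bounded batches, e.g. one day per statement
INSERT INTO history_uint
SELECT * FROM history_uint_old
WHERE clock >= 1727740800 AND clock < 1727827200;  -- example bounds

-- 3. After all ranges are copied and verified:
-- DROP TABLE history_uint_old;
```

Because each batch is its own transaction, you can monitor progress directly (count the remaining rows in history_uint_old), pause and resume at will, and the timescaledb_information views reflect the new chunks as soon as each batch commits.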
