Choosing the right `chunk_time_interval` value for TimescaleDB Hypertables

Hello @jayeetamukherjee, it sounds like every chunk would hold only 7 records if you're tracking a single stock. Try the rule of thumb @carlota mentioned in the last comment: size chunks so that the most recent one fits in about 25% of your memory. Depending on the cardinality — that is, how many assets you're observing — that could mean one chunk per year or even per several years. If you later start loading more granular real-time data, you can also change the chunk interval, and the new setting will apply to data going forward.
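For example, changing the interval later only affects chunks created from that point on; existing chunks keep their original boundaries. A quick sketch, assuming a hypothetical hypertable named `stock_prices`:

```sql
-- Only newly created chunks will use the new interval
SELECT set_chunk_time_interval('stock_prices', INTERVAL '1 year');
```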

If you need more insight into what's going on, run EXPLAIN ANALYZE on your query to see why it's slow. You may also need to restrict the time range with a WHERE clause so the planner can exclude chunks instead of scanning all of them.
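Something along these lines, again assuming a hypothetical `stock_prices` hypertable with `time` and `symbol` columns:

```sql
EXPLAIN ANALYZE
SELECT * FROM stock_prices
WHERE time > now() - INTERVAL '7 days'
  AND symbol = 'AAPL';
```

With the time predicate in place, the plan should show only the chunks overlapping that window being scanned, rather than all of them.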

For querying, I'd start the investigation by:

  1. Checking the chunk size,
  2. Checking how many chunks you have,
  3. Checking how many records are in each chunk,
  4. Using EXPLAIN ANALYZE to dig into why reads are slow.
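The first three checks can be run against the TimescaleDB information views (the hypertable name and the internal chunk name below are just illustrative):

```sql
-- Chunk boundaries, which also gives you the chunk count
SELECT chunk_name, range_start, range_end
FROM timescaledb_information.chunks
WHERE hypertable_name = 'stock_prices';

-- On-disk size per chunk
SELECT chunk_name, pg_size_pretty(total_bytes)
FROM chunks_detailed_size('stock_prices');

-- Rows in a specific chunk (chunk tables can be queried directly)
SELECT count(*) FROM _timescaledb_internal._hyper_1_1_chunk;
```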

For inserts, be mindful of your network latency to the server and of how you're doing the inserts: prefer inserting in batches over row-by-row statements.
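A single multi-row INSERT amortizes the round trip across many rows. A minimal sketch, with made-up column names for illustration:

```sql
INSERT INTO stock_prices (time, symbol, price) VALUES
  ('2024-01-02 09:30:00+00', 'AAPL', 185.10),
  ('2024-01-02 09:30:00+00', 'MSFT', 371.25),
  ('2024-01-02 09:30:00+00', 'GOOG', 139.60);
```

`COPY` or your client driver's batch API pushes further in the same direction.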

Also, take a look at this post, which is all about insert performance.