Comparing Performance - Standard Table and a Hypertable - Academic Project

Hello everyone,

I apologize if I’m posting in the wrong section, but I’ve been looking for a solution for almost a week now regarding my academic project on time series data.

The goal of my project is to demonstrate the effectiveness of hypertables compared to standard PostgreSQL tables when handling time series data.

The idea is to take real data (for example, call history) and create two identical tables, except that one will be a regular table and the other will be a hypertable. After inserting around 30,000 rows of data into both tables, I'll test performance using pgbench with INSERT and SELECT queries.
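For context, this is roughly what the setup looks like (table and column names are just placeholders for my real schema):

```sql
-- Two structurally identical tables for call history
CREATE TABLE calls_plain (
    time        TIMESTAMPTZ NOT NULL,
    caller_id   INTEGER     NOT NULL,
    duration_s  INTEGER
);

CREATE TABLE calls_hyper (
    time        TIMESTAMPTZ NOT NULL,
    caller_id   INTEGER     NOT NULL,
    duration_s  INTEGER
);

-- Only the second table is converted into a hypertable, partitioned on "time"
SELECT create_hypertable('calls_hyper', 'time');
```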

The problem is that I’m not getting good results. Despite multiple tests, the standard table seems to outperform the hypertable.

I'm using the following pgbench parameters for my tests: -c 2, -j 2, -R 1200, -T 60 (my computer is not very powerful).
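For reference, the INSERT benchmark is roughly like the custom script below (illustrative only; my real columns differ), run with something like `pgbench -c 2 -j 2 -R 1200 -T 60 -f insert_hyper.sql mydb`:

```sql
-- insert_hyper.sql: hypothetical pgbench custom script against the hypertable
\set caller random(1, 1000)
INSERT INTO calls_hyper (time, caller_id, duration_s)
VALUES (now() - random() * interval '30 days', :caller, (random() * 600)::int);
```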

Could anyone help me understand why the standard table is performing better in this case? Or maybe offer some advice on how to proceed with this project as I’m a bit lost.

The goal is to compare two identical tables (one standard and one hypertable) by running the same SQL queries after inserting the same data. I analyze the performance in terms of TPS, latency, etc.

Thanks a lot for your help!

Hello,

a few thoughts:

  1. A Timescale hypertable performs well if you choose a sensible chunk interval. You might want to reduce the chunk interval to a lower value to gain a performance advantage (see the sketch after this list).
  2. The chunks should be neither too big nor too small.
  3. 30,000 rows is probably far too little data for a performance test. I would rather recommend inserting 300,000 or 3 million rows.
  4. Make sure that your test data is spread over time (see also 1).
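For example, a rough sketch of points 1, 3 and 4, assuming a hypertable called calls_hyper with a time column named time (adjust to your actual schema):

```sql
-- Set a smaller chunk interval (only affects chunks created afterwards,
-- so run this before loading the test data); 7 days is just an assumption
SELECT set_chunk_time_interval('calls_hyper', INTERVAL '7 days');

-- Bulk-insert 3 million rows spread evenly over roughly one year
INSERT INTO calls_hyper (time, caller_id, duration_s)
SELECT now() - n * interval '10 seconds',
       (random() * 1000)::int,
       (random() * 600)::int
FROM generate_series(1, 3000000) AS n;
```

With data spread over about a year and 7-day chunks, you end up with around 50 chunks, which is enough for chunk exclusion to actually matter in your SELECT tests.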

For more help, please share details about your table structure and your queries. Good luck!
