Schema cache speed went down after refactor #3882
I think we should try to fix this or otherwise revert f31848f. Reasons:
Note that this is only about the root endpoint / OpenAPI schema, not about any regular requests. And it's also not about the schema cache, imho. What we need here is better reporting for the loadtest results, so that we can interpret them better.
This is 100% not related to the SQL queries, but to the Haskell processing of the result set.
Hm, so on f31848f, changing the queries resulted in more Haskell processing? (It's not apparent by looking at the code change.)
I always understood the schema cache as SQL queries + Haskell processing; this is why we now even log the latter time too: #3779. So in that way, it is about the schema cache.
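To make that split concrete, here is a minimal, hypothetical sketch of timing the two phases separately, in the spirit of #3779. This is not the actual PostgREST code; `runSchemaQueries` and `buildRelationships` are placeholder names used only for illustration.

```haskell
import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)
import Data.Time.Clock (diffUTCTime, getCurrentTime)

-- Run an IO action, force its result (so laziness does not defer the work
-- past the measurement), and print how long it took.
timed :: NFData a => String -> IO a -> IO a
timed label action = do
  start  <- getCurrentTime
  result <- action >>= evaluate . force
  end    <- getCurrentTime
  putStrLn (label <> ": " <> show (diffUTCTime end start))
  pure result

-- A schema cache load wrapped this way would show which phase dominates:
--
--   loadSchemaCache = do
--     rows  <- timed "schema cache SQL"     runSchemaQueries
--     cache <- timed "schema cache Haskell" (pure (buildRelationships rows))
--     pure cache

-- Stand-alone usage example with a pure computation instead of a query.
main :: IO ()
main = do
  _ <- timed "example" (pure (sum [1 .. 1000000 :: Int]))
  pure ()
```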
No, but the OpenAPI request on the root endpoint also runs SQL queries. Those are slower now, which is why we lose performance here. But when we migrate away from OpenAPI-in-core, that will change entirely anyway. The SQL queries for the schema cache are also slower, yes. But the SQL queries make up only a tiny percentage of the Schema Cache load time, I think. So it will not slow down the overall Schema Cache load time.
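To put that "tiny percentage" argument in concrete (entirely made-up) numbers: if the SQL phase were 5% of a 10 s schema cache load and became 50% slower, the total would go from 10 s to 0.95 × 10 s + 1.5 × 0.05 × 10 s = 10.25 s, i.e. only about 2.5% slower overall.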
#3046 shows spikes in CPU usage for PostgREST, not PostgreSQL. If the query was the problem, PostgreSQL resource usage would be spiking, not PostgREST. While debugging #3733 it was very clear that the Schema Cache queries are almost always fast and it's the processing in PostgREST that takes time, most likely for relationships.
Found the loadtest in which `head` and `main` started to differ: https://github.com/PostgREST/postgrest/actions/runs/9846138600/job/27183597190

After some local tests, I think the commit f31848f is where the throughput started to go down (the previous one returned similar values to `v12.2.3`):

Originally posted in #3640 (comment)