We're seeing decreasing counters in our deployed servers. We're using Kubernetes pods with Puma set to 2 workers. It appears when scraping the metrics endpoint:

user@user2-l:../~$ curl -sL http://user2-l.dhcp.company.com:38080/app/metrics

Notice that the counter resets to 9. If we watch this for a while, we see another request bounce up to 13, then down to 10. The total number of requests is always higher than either of these counters. This seems to demonstrate that there are two registry instances (each worker process has its own singleton). I tried to move the registry creation outside of the process by using Puma's […]. I'm not sure about what I've found, so it could be for other reasons.

Are there any guides people can recommend for the right way to set up the Prometheus client in the context of multiple processes? (We haven't started using multithreaded Rails yet, but if someone has encountered this there, any ideas are welcome.) AFAIK there is no way to scrape specific workers in Puma, but if there were, we could fix this problem by scraping all the workers separately instead of just the pods.
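The sawtooth described above is consistent with each worker holding its own in-memory registry while the load balancer alternates scrapes between workers. A minimal pure-Ruby sketch of that failure mode (the traffic split and counter values are hypothetical; no Prometheus library involved):

```ruby
# Two Puma workers, each with an independent in-memory counter.
workers = { worker_0: 0, worker_1: 0 }

# 23 requests land unevenly across the workers.
23.times { |i| workers[i % 3 == 0 ? :worker_0 : :worker_1] += 1 }

# Scrapes are load-balanced, so consecutive scrapes can hit different
# workers, and the exposed counter appears to "decrease".
scrape_a = workers[:worker_1]  # 15
scrape_b = workers[:worker_0]  # 8 -- looks like the counter went down
total    = workers.values.sum  # 23 -- the true total exceeds either scrape
```

Neither scrape ever shows the true total, which matches the observation that the real request count is always higher than either exposed value.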
Are you using the DirectFileStore to store your metrics? Read more here. Make sure to read the caveats!
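For reference, switching to DirectFileStore is a small configuration change that must run before any metric is registered — in Rails, typically an initializer. A sketch, assuming the prometheus-client gem; the directory path is an example:

```ruby
# config/initializers/prometheus.rb -- must run before any metric is registered.
require 'prometheus/client'
require 'prometheus/client/data_stores/direct_file_store'

Prometheus::Client.config.data_store =
  Prometheus::Client::DataStores::DirectFileStore.new(dir: '/tmp/prometheus_direct_file_store')

# Each process writes its samples to files under `dir`; at scrape time the
# exporting process reads all the files and aggregates across processes,
# which is what keeps the counters monotonic under multiple workers.
```

This is a configuration fragment; the caveats mentioned above (file accumulation, pre-fork behaviour) still apply.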
We avoided DirectFileStore because one of our apps exploded into thousands of files. Indeed, I have read this part before:
So I assume that this, combined with the other open issue about prefork servers, means that the Prometheus client doesn't support multiple processes/multithreading in Ruby without DirectFileStore. Good to know; I'll take a deeper look at DirectFileStore. In pods there shouldn't be much of a difference, AFAIK — the filesystem is a memory filesystem, I think?
The typical solution for Puma is to identify the workers by their worker ID number instead of the OS PID. That way the number of files is limited to the number of Puma workers, and the files are re-used across worker process restarts.
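A sketch of that pattern in a Puma config file, assuming the prometheus-client gem and a Puma version whose `on_worker_boot` hook yields the worker index (the directory path is an example):

```ruby
# config/puma.rb -- sketch: key metric files by stable worker index, not PID.
workers 2

on_worker_boot do |index|
  require 'prometheus/client'
  require 'prometheus/client/data_stores/direct_file_store'

  Prometheus::Client.config.data_store =
    Prometheus::Client::DataStores::DirectFileStore.new(dir: '/tmp/prometheus_metrics')

  # pid_provider labels each process's metric files. Using the worker index
  # means a restarted worker re-uses its predecessor's files instead of
  # creating a fresh set per PID, capping file count at the worker count.
  Prometheus::Client.config.pid_provider = -> { "puma_worker_#{index}" }
end
```

The trade-off is that a restarted worker inherits the old worker's counter values, which is exactly what you want for monotonic counters.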
Right. The only reason DirectFileStore exists is precisely to support multi-process setups. Multithreading you can totally do with the default store, but not multiprocess.
Yeah, this, together with the performance of exports, is the #1 thing we want to fix. We're struggling a bit to find time to dedicate to it, but it's at the top of our list.