ts-delta resque queue is processed extremely slow #26
I'm seeing extremely slow processing since upgrading to the latest versions of ts-resque-delta and Thinking Sphinx. Is there a specific number of jobs it is supposed to process at a time? Or are the jobs in Resque not being removed anymore?

Comments
@jbusam which version did you upgrade from and to? Could it have anything to do with the performance implications discussed here: #25 (comment)
My 2 cents: the locking and consolidation of work into fewer jobs was absolutely essential for high volume on the resque+sphinx combination. It was only a band-aid for us in the end, as we outgrew even the ts-resque-delta 1.x solution and have moved on to sidekiq+elasticsearch.
I upgraded to TS 3.1.1 from 2.1.0 and ts-resque-delta from 1.2.4 to the latest 2.0.0. @agibralter I'm not sure what you are referencing in that ticket; do you think I should try the resque-loner approach? I assumed that had been fixed, since the ticket was closed. The older versions could easily keep up with the workload, but looking at my Resque queue now it just seems to steadily increase, and I can't see anything useful in the log files I'm afraid. Any hint/idea would be very helpful.
@jbusam Basically what I'm referencing in the comment is that the jump to 2.0.0 made performance sacrifices in favor of simplicity in design. @pat had good reason to go down that route, and I think @ryansch's comment above is quite relevant. What kind of load are you dealing with? I.e. search-index updates per minute?
That number varies a lot (stupid answer... sorry); my best guess would be that the peak average is around 200 changes per minute.
Ah, yeah... so my old app was on that order of magnitude and worked on 1.2.4 because of the optimizations that @ryansch and I made... I imagine that removing those optimizations (such as our lock that prevented many duplicate indexing jobs from entering, and ultimately, overwhelming the queue) could cause trouble for you. I'd recommend downgrading to 1.2.4. I'm sorry that I'm not actively working on this project anymore; if I were I'd probably try to port those optimizations to 2.0.0, but as @ryansch said, those only go so far too.
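For anyone wanting to try the downgrade route, a minimal Gemfile sketch using the versions mentioned in this thread; the thinking-sphinx constraint is an assumption, so adjust it to whatever your app ran before the upgrade:

```ruby
# Gemfile -- pinning back to the 1.x line discussed above.
# Versions are taken from this thread; the thinking-sphinx constraint
# is an assumption, since ts-resque-delta 1.2.4 predates TS 3.
gem 'thinking-sphinx', '~> 2.1'
gem 'ts-resque-delta', '1.2.4'
```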
OK, thanks... so I assume it's time to move on... I'll give loner a try, just because this app will be replaced in 6 months' time and I currently don't want to rewrite everything to work with Sidekiq and Elasticsearch... thanks again.
I would definitely recommend adding resque-loner and seeing if that helps. The main roadblock that kept me from making it a dependency was that the tests I had weren't performing reliably when resque-loner was loaded. I think this was an issue with the tests, not resque-loner, but it did push me towards the simpler approach. Are all of these updates coming from separate HTTP requests? Or a rake task? Or something else?
They are split between one worker consuming jobs and external HTTP requests. resque-loner seems to be working on dev and staging, but so does the app without it. I can't really test whether it will work in production unless I deploy... will definitely try it over the weekend.
@jbusam just curious, where did you add …
Yeah, I just tried what @jrust mentioned in #25 (comment).
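For reference, a rough sketch of that kind of fix: a Rails initializer that mixes resque-loner's unique-job plugin into the delta job so duplicate enqueues are dropped. The DeltaJob class name is an assumption on my part, so check #25 and the job classes in your installed ts-resque-delta version before copying this.

```ruby
# config/initializers/resque_loner.rb
# Sketch only: mix resque-loner's unique-job behaviour into the delta job so
# a burst of model saves leaves at most one pending job per payload.
# The DeltaJob constant is an assumption -- verify the job class names in
# the ts-resque-delta version you have installed.
require 'resque-loner'

ThinkingSphinx::Deltas::ResqueDelta::DeltaJob.send(:include, Resque::Plugins::UniqueJob)
```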
The resque-loner/@jrust fix doesn't seem to work for new records. They do not show up in search results until a full index rebuild.
Hi Tim - are you seeing any logs for Sphinx-related Resque jobs when the new records are being created?
@tjoneseng I'm seeing the same issue in production with resque-loner. It seemed to work at first, but now the FlagAsDeleted jobs get queued while the IndexJobs don't get queued anymore, so the delta index never gets rebuilt. For now, I'm running a cron job every few minutes that runs the indexer for the delta index, and just letting resque-delta handle the FlagAsDeleted jobs. I haven't figured out why the index job isn't being enqueued. I can't run without resque-loner because the queue grows enormous and unworkable without it.
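For anyone copying that stopgap, here is a sketch of what such a task might look like; the rake task, the "article_delta" index name, and the config path are all hypothetical, and only illustrate calling Sphinx's indexer against a delta index on a schedule:

```ruby
# lib/tasks/rebuild_delta.rake -- hypothetical stopgap task, run from cron
# every few minutes, that rotates only the delta index while ts-resque-delta
# keeps handling the FlagAsDeleted jobs. Replace "article_delta" and the
# config path with your own index name and Sphinx configuration file.
namespace :ts do
  desc 'Rotate the delta index directly via the Sphinx indexer'
  task :rebuild_delta do
    config = "config/#{ENV.fetch('RAILS_ENV', 'production')}.sphinx.conf"
    system('indexer', '--config', config, '--rotate', 'article_delta') ||
      abort('indexer failed')
  end
end
```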
Hm, just noticed this issue on resque-loner, and I did delete the ts-delta queue at one point because of a job pile-up; maybe that's the issue: resque/resque-loner#41
Maybe a way forward to confirm it's the same issue is to fork ts-resque-delta, use the workaround noted in that issue, and see if that helps keep things on track for you?
I used the workaround in that issue and everything is working fine now. Thanks. It was an issue with resque-loner combined with doing a redis.del('ts-delta'), which kept the resque-loner locks around. All good now.
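For anyone else hitting this, the workaround from resque/resque-loner#41 roughly amounts to clearing the stale lock keys whenever the queue itself is wiped. The "loners:queue:..." key pattern below is an assumption based on resque-loner's defaults, so inspect your Redis instance to confirm the actual key names before deleting anything:

```ruby
# Sketch of the cleanup: deleting the ts-delta queue outright leaves
# resque-loner's lock keys behind, which then block re-enqueues, so clear
# the locks alongside the queue. The key pattern is an assumption.
Resque.remove_queue('ts-delta')
Resque.redis.keys('loners:queue:ts-delta:*').each { |key| Resque.redis.del(key) }
```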
Great stuff :)
Yep, just wanted to log it here in case anyone else ran into the issue :) Thanks!