I noticed that computing metrics for a large amount of data takes a long time, and that only one CPU is heavily loaded (even when async_mode is set to True). Here is a test based on the original example from the README:
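(The original snippet did not survive here, so below is a reconstruction of the test, assuming the MetricBuilder API shown in the README; the box values are illustrative rather than the README's literal numbers.)

```python
import time

import numpy as np
from mean_average_precision import MetricBuilder

# Ground truth boxes: [xmin, ymin, xmax, ymax, class_id, difficult, crowd]
gt = np.array([
    [439, 157, 556, 241, 0, 0, 0],
    [437, 246, 518, 351, 0, 0, 0],
    [515, 306, 595, 375, 0, 0, 0],
    [407, 386, 531, 476, 0, 0, 0],
    [544, 419, 621, 476, 0, 0, 0],
    [609, 297, 636, 392, 0, 0, 0],
])

# Predictions: [xmin, ymin, xmax, ymax, class_id, confidence]
preds = np.array([
    [429, 219, 528, 247, 0, 0.46],
    [433, 260, 506, 336, 0, 0.27],
    [518, 314, 603, 369, 0, 0.46],
    [592, 310, 634, 388, 0, 0.30],
    [403, 384, 517, 461, 0, 0.38],
    [405, 429, 519, 470, 0, 0.37],
    [433, 272, 499, 341, 0, 0.27],
    [413, 390, 515, 459, 0, 0.62],
])

metric_fn = MetricBuilder.build_evaluation_metric(
    "map_2d", async_mode=True, num_classes=1
)

start = time.perf_counter()
for _ in range(10):  # bump to 1000 / 10000 to reproduce the slowdown
    metric_fn.add(preds, gt)
print(metric_fn.value(iou_thresholds=0.5)["mAP"])
print(f"elapsed: {time.perf_counter() - start:.3f}s")
```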
On my machine, this code takes around 300 ms. When I increase the number of metric_fn.add calls from 10 to 1,000, it takes 10 seconds; from 1,000 to 10,000, 2 minutes. That is a drastic slowdown for what amounts to only about 10000 * 8 = 80000 boxes. I ran into the same behaviour while training a detection model, where computing metrics on the validation set took around 10 minutes. Moreover, in my case htop shows a single core loaded at ~100% while the others stay at the same level as before the metric calculation.
Is such a long computation time expected for a large number of bounding boxes? Are there any workarounds to make the computation faster?
@dinarkino Regarding "async mode": this mode lets the network predict bounding boxes while the metric is computed in a parallel process; it does not parallelize the metric computation itself.
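Schematically, this is what async_mode buys you (a sketch, assuming the MetricBuilder API; the data generator here is a stand-in for a real model plus validation loader):

```python
import numpy as np
from mean_average_precision import MetricBuilder

def fake_batches(n):
    # Stand-in for a real model + validation loader (illustrative only).
    rng = np.random.default_rng(0)
    for _ in range(n):
        xy = np.sort(rng.uniform(0, 512, size=(8, 2, 2)), axis=1).reshape(8, 4)
        gt = np.hstack([xy, np.zeros((8, 3))])                         # class, difficult, crowd
        preds = np.hstack([xy, np.zeros((8, 1)), rng.random((8, 1))])  # class, confidence
        yield preds, gt

metric_fn = MetricBuilder.build_evaluation_metric("map_2d", async_mode=True, num_classes=1)
for preds, gt in fake_batches(100):
    # With async_mode=True this call is dispatched to a worker process,
    # so it overlaps with producing the next batch of predictions...
    metric_fn.add(preds, gt)
# ...but the aggregation inside .value() still runs in a single process.
print(metric_fn.value(iou_thresholds=0.5)["mAP"])
```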
And I think it is expected behaviour that more data requires more computation. Some parts of our implementation are vectorized and run in parallel inside NumPy; other parts are sequential. It could be optimized further, but we have not done that.
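If computation time really matters, one possible workaround (just a sketch, not something the library provides: it assumes the column layouts above, relies on per-class APs being independent, and pays the cost of pickling the full arrays to every worker) is to evaluate each class in its own process and average the per-class APs:

```python
import numpy as np
from multiprocessing import Pool
from mean_average_precision import MetricBuilder

def single_class_ap(args):
    # AP for one class depends only on that class's boxes, so each class
    # can be evaluated independently in its own process.
    cls, preds_list, gt_list = args
    fn = MetricBuilder.build_evaluation_metric("map_2d", async_mode=False, num_classes=1)
    for preds, gt in zip(preds_list, gt_list):
        p = preds[preds[:, 4] == cls].copy()  # class_id is column 4 in both layouts
        g = gt[gt[:, 4] == cls].copy()
        p[:, 4] = 0  # remap to class 0 for the single-class metric
        g[:, 4] = 0
        if len(p) or len(g):
            fn.add(p, g)
    return fn.value(iou_thresholds=0.5)["mAP"]

def parallel_map_metric(preds_list, gt_list, num_classes, workers=4):
    # Note: preds_list/gt_list are copied to every worker, which costs memory.
    jobs = [(c, preds_list, gt_list) for c in range(num_classes)]
    with Pool(workers) as pool:
        aps = pool.map(single_class_ap, jobs)
    return float(np.mean(aps))
```

For this to work, single_class_ap must live at module level and parallel_map_metric should be called under an `if __name__ == "__main__":` guard so the worker processes can be spawned safely.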