Waiting for runner's bench_func and bench_command functions to complete instead of receiving outputs individually? #141
Comments
Maybe write:
|
I have tried that, but the output remains the same. Just to make sure, this is what that code looks like:

```python
#!/usr/bin/env python3
import pyperf
import time

def func():
    time.sleep(0.001)

if __name__ == "__main__":
    runner = pyperf.Runner()
    results = runner.bench_func('sleep', func)
    print(results)
```

My workaround for this weird bug is to know how many processes are being used (either set by the user or the default of 20) and check:

```python
#!/usr/bin/env python3
import pyperf
import time

def func():
    time.sleep(0.001)

if __name__ == "__main__":
    procs = 20
    runner = pyperf.Runner(processes=procs)
    results = runner.bench_func('sleep', func)
    # only the aggregated result from the main process has all of the runs
    if len(results.get_runs()) == procs:
        print(f"Mean: {results.mean()}")
        print(f"Median: {results.median()}")
```

But from my understanding of the overall code,
|
An update regarding this: the issue also makes it mildly difficult to output all the results into a single output file. Here's an example:

```python
#!/usr/bin/env python3
import pyperf
import time
import csv
from statistics import mean, median

def func():
    time.sleep(0.001)

if __name__ == "__main__":
    procs = 20
    runner = pyperf.Runner(processes=procs)
    my_results = {}
    for i in range(1, 11):
        result = runner.bench_func('sleep', func)
        if len(result.get_runs()) == procs:
            my_results[i] = list(result.get_values())
    with open("output.csv", "w", newline="") as my_file:
        headers = ["Loop", "Mean", "Median"]
        writer = csv.DictWriter(my_file, fieldnames=headers)
        writer.writeheader()
        for k in my_results.keys():
            writer.writerow(
                {
                    "Loop": k,
                    "Mean": mean(my_results[k]),
                    "Median": median(my_results[k]),
                }
            )
```

Even though I have the

```python
#!/usr/bin/env python3
import pyperf
import time
import csv
from pathlib import Path
from statistics import mean, median

def func():
    time.sleep(0.001)

if __name__ == "__main__":
    procs = 20
    runner = pyperf.Runner(processes=procs)
    my_results = {}
    for i in range(1, 11):
        result = runner.bench_func('sleep', func)
        if result is None:
            pass
        elif len(result.get_runs()) == procs:
            my_results[i] = list(result.get_values())
    open_mode = "w"
    if Path("output.csv").exists():
        open_mode = "a"
    with open("output.csv", open_mode, newline="") as csv_file:
        headers = ["Loop", "Mean", "Median"]
        writer = csv.DictWriter(csv_file, fieldnames=headers)
        # write the header only when creating a new file
        if open_mode == "w":
            writer.writeheader()
        for k in my_results.keys():
            writer.writerow(
                {
                    "Loop": k,
                    "Mean": mean(my_results[k]),
                    "Median": median(my_results[k]),
                }
            )
```

This is kind of cumbersome. I'm still trying to understand the underlying code for pyperf to see why this is occurring, but I'd want to look into creating a
|
That's a surprising way to use pyperf. Why not write the results to a JSON file and then load the JSON to process it? https://pyperf.readthedocs.io/en/latest/api.html#BenchmarkSuite.load
|
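For concreteness, a minimal sketch of that suggested workflow (not from this thread; the script and file names are illustrative). The benchmark script is run once with its results written to JSON, and a separate script loads the file with pyperf.BenchmarkSuite.load and computes whatever statistics are needed:

```python
# Sketch of the suggested JSON workflow (file names are illustrative).
# First run the benchmark script and store its results:
#     python3 bench_sleep.py -o bench.json
# Then post-process the stored suite in a separate script:
import pyperf

suite = pyperf.BenchmarkSuite.load("bench.json")
for bench in suite.get_benchmarks():
    print(bench.get_name(), bench.mean(), bench.median())
```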
The issue is that dumping the benchmark into a JSON file for the many benchmarks I want to run, which are almost entirely identical except for different combinations of arguments, would mean I would end up with tons of JSON files saved. On top of that, this looping issue would actually write the JSON file for each process that
|
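One way to avoid a pile of JSON files, sketched here under the assumption that several named bench_func calls in one script are all written to a single BenchmarkSuite file via the -o option (the script name, benchmark names, and delay values below are illustrative):

```python
#!/usr/bin/env python3
# Sketch: one benchmark per parameter combination, all in one suite file.
# Run as:  python3 bench_pools.py -o results.json
import time

import pyperf


def func(delay):
    time.sleep(delay)


if __name__ == "__main__":
    runner = pyperf.Runner()
    for delay in (0.001, 0.002):
        # extra positional arguments after the callable are forwarded to it
        runner.bench_func(f"sleep-{delay}", func, delay)
```

results.json would then hold one Benchmark per name and could be loaded once with BenchmarkSuite.load as sketched above.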
I'm trying to utilize pyperf to benchmark some functions and save their results to a CSV file of my own formatting.
I'm using this example code from the documentation:
I want to benchmark the same function with multiprocessing.pool.Pool / multiprocessing.pool.ThreadPool / concurrent.futures.ProcessPoolExecutor / concurrent.futures.ThreadPoolExecutor and varying values for things like the number of CPU cores and the chunksize for the map functions.
The issue is that assigning a variable to store the output of runner.bench_func and printing that variable leads to output like this:
Whereas I want to suppress this output and wait for all runs to complete before moving forward with the program.
Is there some other way of storing the results of a benchmark that I can't seem to find in the documentation? Or is there a way to force the runner to wait for a benchmark to complete?
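For reference, a sketch of how the pieces above could fit together for the CSV goal, assuming only documented pyperf features (the -o and --quiet command-line options and BenchmarkSuite.load); the script and file names are illustrative:

```python
# Sketch: run the benchmarks quietly, store them as JSON, then format a CSV.
#     python3 bench_pools.py --quiet -o results.json
#     python3 to_csv.py
import csv

import pyperf

suite = pyperf.BenchmarkSuite.load("results.json")
with open("output.csv", "w", newline="") as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=["Name", "Mean", "Median"])
    writer.writeheader()
    for bench in suite.get_benchmarks():
        writer.writerow(
            {
                "Name": bench.get_name(),
                "Mean": bench.mean(),
                "Median": bench.median(),
            }
        )
```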