
#34 progressing with baseline
JJ committed Sep 13, 2024
1 parent 02206d7 commit 759fe8d
Showing 1 changed file with 8 additions and 1 deletion.
9 changes: 8 additions & 1 deletion paper/energy-sac-2025.Rnw
@@ -141,7 +141,14 @@ These applications need to have some way of {\em reading} the sensors that the c

In order to take actual measurements, there are two main options: either link our programs to a library that taps the RAPL API, or use a command-line tool that runs our scripts and takes measurements when the process exits. Since we are working with different languages, not all of which have published libraries that work with RAPL\footnote{C++ certainly has, but {\sf zig}, being a younger language, does not for the time being}, we will opt for the latter.

And once again, we are faced with different options. Linux includes a command-line tool called {\sf perf} \cite{de2010new} that is concerned with all kinds of performance measurements, including energy consumption. It is an excellent tool as long as the only systems you are going to measure run that operating system. In the past, however, we have used another tool called {\sf pinpoint} \cite{pinpoint}, which is available for Linux as well as macOS and, besides, offers a single interface for different power consumption APIs. The most important thing for the purposes of this paper, however, is that we have used it for the measurements in previous papers. Using the same tool again allows us to compare our results with those published in these papers, since the methodology used to estimate consumption from RAPL register readings will be exactly the same. {\sf pinpoint} is free software released under the MIT license and can be downloaded from its repository at \url{https://github.com/osmhpi/pinpoint}. As we have done in other papers \cite{wivace23-anon,lion24-anon}, a Perl script launches the program a fixed number of times, set to 30 in this case, and works on the averages.
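
As an illustration of this measurement loop, a minimal sketch of such a wrapper could look as follows. This is not the actual script used for the experiments; the {\sf pinpoint} invocation and, in particular, the parsing of its output are assumptions that depend on the installed version and the selected counters.

\begin{verbatim}
#!/usr/bin/env perl
# Sketch of a wrapper that runs a program under pinpoint a fixed number
# of times and averages a numeric reading scraped from its output.
use strict;
use warnings;

my $runs    = 30;                  # repetitions, as in the paper
my $program = './operator-bench';  # hypothetical binary under measurement
my @joules;

for my $i (1 .. $runs) {
    my $output = `pinpoint -- $program 2>&1`;
    die "pinpoint failed on run $i\n" if $? != 0;
    # Placeholder: grab the first number that looks like an energy reading;
    # the real output format depends on the pinpoint version and counters.
    push @joules, $1 if $output =~ /([0-9]+(?:\.[0-9]+)?)/;
}

die "No readings parsed\n" unless @joules;
my $sum = 0;
$sum += $_ for @joules;
printf "Average over %d runs: %.3f\n", scalar @joules, $sum / @joules;
\end{verbatim}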


\subsection{Baseline measurements}

As indicated in the introduction, no energy profiling tool is able to disaggregate the energy spent by a specific process, and {\sf pinpoint} is no exception; it measures the energy spent by the devices under measurement for the duration of the process. If we really want to know what specific functions are spending, we have to make baseline measurements of a certain kind, and then run another application that includes the functions we are interested in, subtracting the averages obtained in the first measurement.
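
Stated explicitly (the notation here is ours, introduced only for clarity), if $\bar{E}_{\mathrm{base}}$ is the average energy measured over the baseline runs and $\bar{E}_{\mathrm{full}}$ the average over the runs that also apply the functions under study, the energy attributed to those functions is estimated as
\[
  \bar{E}_{\mathrm{functions}} \approx \bar{E}_{\mathrm{full}} - \bar{E}_{\mathrm{base}},
\]
with both averages taken over the same number of runs and on the same machine, so that the costs common to both programs cancel out.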

There is no single way of establishing this baseline. In \cite{icsoft23-anon}, for instance, we simply took the average time spent by our programs and ran a {\sf sleep} command for that average amount of time, measuring the energy consumed by what is essentially an empty program. This can certainly work in the general case, but in evolutionary algorithms there are two essential steps in applying any genetic operator or fitness function: the chromosomes need to be generated first, and then the operators are applied to them. Generating chromosomes is a non-trivial operation, and how long it takes is related to the data structures used to store them. This can take a certain amount of time, which can and should be separated from the application of the functions themselves. This is why, from \cite{wivace23-anon} on, we have used a different approach from the one described in the previous paragraph, using as a baseline a program that generates chromosomes using the data structure that we will use later.
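
To make the idea concrete, the sketch below contrasts the baseline with the measured program: the baseline only generates the chromosomes, while the measured program generates them and then applies the operator under study. This is a minimal illustration in Perl for brevity; the actual baselines are written in each language under test and use its native data structures, and the population size, chromosome length and bit-flip ``operator'' shown here are illustrative assumptions.

\begin{verbatim}
#!/usr/bin/env perl
# Baseline vs. measured program: both generate the same population of
# random bit-string chromosomes; only the measured program (invoked with
# --apply-operator) goes on to apply an operator, here a single bit flip.
use strict;
use warnings;

my $population_size   = 1024;
my $chromosome_length = 64;

# Chromosome generation: common to the baseline and the measured program.
my @population =
    map { join '', map { int rand 2 } 1 .. $chromosome_length }
        1 .. $population_size;

# Only the measured program applies the operator under study.
if ( grep { $_ eq '--apply-operator' } @ARGV ) {
    for my $chromosome (@population) {
        my $position = int rand length $chromosome;
        my $bit      = substr $chromosome, $position, 1;
        substr( $chromosome, $position, 1 ) = $bit eq '0' ? '1' : '0';
    }
}
\end{verbatim}

Measuring both programs with the same wrapper and subtracting the averages then isolates, approximately, the cost of the operator itself.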

\section{Results}\label{sec:results}

