Benchmark
There's More Than One Way To Do It – that's our motto. However, some
ways are always going to be
faster than others. How can you tell, though? You could analyze each of
the statements for efficiency, or
you could simply roll your sleeves up and try it out.
Our next module is for testing and timing code. Benchmark exports two subroutines: timethis and timethese, the first of which, timethis, is quite easy to use:
#!/usr/bin/perl
# benchtest.plx
use warnings;
use strict;
use Benchmark;
my $howmany = 10000;
my $what = q/my $j = 1; for (1..100) { $j *= $_ }/;
timethis($howmany, $what);
So, we give it some code and a set number of times to run it. Make sure
the code is in single quotes so
that Perl doesn't attempt to interpolate it. You should, after a little
while, see some numbers. These
will, of course, vary depending on the speed of your CPU and how busy
your computer is, but mine
says this:
> perl benchtest.plx
timethis 10000: 3 wallclock secs ( 2.58 usr + 0.00 sys = 2.58 CPU) @ 3871.47/s (n=10000)
>
This tells us that we ran the code 10,000 times, and that it took 3 seconds of real time. Of those, 2.58 seconds were spent actually computing ('usr' time) and none at all in system calls, such as talking to the disk ('sys' time). It also tells us that we got through 3871.47 iterations of the test code per second of CPU time.
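Incidentally, if quoting the code up as a string feels fragile, timethis will also accept a reference to a subroutine instead, and that gets syntax-checked when your program is compiled rather than when the test runs. Here's the same test again in that style (a sketch of ours; the file name isn't from the module):
#!/usr/bin/perl
# benchtestref.plx
use warnings;
use strict;
use Benchmark;

my $howmany = 10000;
# A code reference works just as well as a string of code:
my $what = sub { my $j = 1; for (1..100) { $j *= $_ } };
timethis($howmany, $what);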
To test several things and weigh them up against each other, we can use timethese. Instead of taking a string representing the code to be run, it takes a reference to a hash, usually written as an anonymous hash. The keys are the names we give to each chunk of code, and the values are the corresponding subroutine references, which we also usually create anonymously.
To find the fastest way to read a file in from the disk, we could do this:
#!/usr/bin/perl
# benchtest2.plx
use warnings;
use strict;
use Benchmark;
my $howmany = 100;
timethese($howmany, {
    line => sub {
        my $file;
        open TEST, "words" or die $!;
        while (<TEST>) { $file .= $_ }
        close TEST;
    },
    slurp => sub {
        my $file;
        local $/;    # undefine $/ so <TEST> returns the whole file at once
        open TEST, "words" or die $!;
        $file = <TEST>;
        close TEST;
    },
    join => sub {
        my $file;
        open TEST, "words" or die $!;
        $file = join "", <TEST>;
        close TEST;
    }
});
One way reads the file in a line at a time, one slurps the whole file in
at once, and one joins the lines
together. As you might expect, the slurp method is quite considerably
faster:
Benchmark: timing 100 iterations of join, line, slurp...
  join: 42 wallclock secs (35.64 usr + 3.78 sys = 39.43 CPU) @  2.54/s (n=100)
  line: 37 wallclock secs (29.77 usr + 3.17 sys = 32.94 CPU) @  3.04/s (n=100)
 slurp:  6 wallclock secs ( 2.87 usr + 2.65 sys =  5.53 CPU) @ 18.09/s (n=100)
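If you would rather have Benchmark do the comparing as well, it also provides a cmpthese subroutine, which you have to ask for by name when loading the module. It runs the same sort of tests and then prints a chart of relative speeds. Here's a small sketch of ours, pitting repeated concatenation against join on some made-up data:
#!/usr/bin/perl
# benchcmp.plx
use warnings;
use strict;
use Benchmark qw(cmpthese);   # cmpthese isn't exported by default

# 200 lines of 50 characters each, just to have something to join.
my @lines = ('x' x 50) x 200;

cmpthese(10000, {
    concat => sub { my $string = ''; $string .= $_ for @lines; },
    join   => sub { my $string = join '', @lines; },
});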
Also bear in mind that a benchmark will not only time differently from one machine to the next, but often from one run to the next – so don't base your life around benchmark tests.
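If you do want steadier numbers, though, there's a trick worth knowing: give timethis or timethese a negative count, and Benchmark treats it as a minimum number of CPU seconds to run each piece of code for, rather than a fixed number of iterations. A sketch, reusing our slurp test (the script name is ours):
#!/usr/bin/perl
# benchtest3.plx
use warnings;
use strict;
use Benchmark;

# Run the test for at least 10 CPU seconds; longer runs tend to
# give more repeatable figures than a fixed iteration count.
timethese(-10, {
    slurp => sub {
        my $file;
        local $/;    # read the whole file in one go
        open TEST, "words" or die $!;
        $file = <TEST>;
        close TEST;
    },
});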
In any case, if a pretty way of doing something is a thousandth of a second slower than an ugly way, choose the pretty one. And if speed is really that important to you, you should probably be programming in something other than Perl.