Based on An Introduction to the UNIX Make Utility
On Unix platforms the standard make utility is a build tool that uses a Bourne-shell-style syntax for compiling and linking source code. The basic idea is simple: recompile only those dependencies whose source files have timestamps later than the timestamps of the corresponding object files and of the "main" executable.
Make must choose the correct sequence of compilations based on the interdependencies between the files.
A typical makefile provides several standard targets or "modes", including the standard pseudo-targets make clean and make install (see below).
There are various implementations of make, including one written in Perl:
Makepp Home Page: a compatible but more reliable and improved replacement for make. Makepp, a build program which has a number of features that allow for reliable builds and simpler build files, is a drop-in replacement for GNU make. It supports almost all of the syntax that GNU make supports, and can be used with makefiles produced by utilities such as automake. It is called makepp (or make++) because it was designed with special support for C++, which has since been extended to other languages like Swig or embedded SQL. Its relationship to make is analogous to C++'s relationship to C: it is almost 100% backward compatible but adds a number of new features and much better ways to write makefiles.
Some features that makepp adds to make are:
- greatly improved handling of builds that involve multiple makefiles (recursive make is no longer necessary; read Miller's "Recursive Make Considered Harmful" for why that's cool)
- automatic scanning for include files
- rebuild triggered if the build command changes
- checksum-based signature methods for reliable builds, smart enough to ignore whitespace or comment changes
- extensibility through Perl programming (within your makefile)
- repositories (automatically importing files from another tree)
- build caches (not recompiling identically what a user of the same cache already did)
For a more complete feature list, see the manual.
Makepp 2.0 currently runs on probably all Linux/Unix variants and Windows, as well as the EBCDIC platforms BS2000 and z/OS. In principle it should run anywhere you have Perl 5.6.0 or higher. Perl 5.6.0 is known to have bugs on the x86 platform that cause failures in large builds. Perl 5.6.1 has some bugs on some platforms that cause weird failures, but on other platforms it works fine. I generally work with Perl 5.8.0 or newer.
You can post to the sourceforge makepp forums for help or open discussion in English, Esperanto, German or French.
The dominant version of make is GNU make, sometimes called gmake. There is also a different, slightly better implementation from Bell Labs, nmake.
Windows compilers' "project files" are generally equivalent to makefiles. Most commercial C compiler IDEs contain something like a built-in make driven by such "project files". (If conducting a port to a Unix platform you might want to disentangle yourself from the non-portability and awkward licensing issues involved with these "project files", though.)
Makefile creation has a long tradition in the Unix environment. A makefile usually has at least three preconfigured pseudo-targets:
make            # use the default makefile (makefile or Makefile) and build the first target in it
make clean      # remove object files and other generated build products
make install    # install the package into the target directories
clean and install are called phony targets and are discussed later in the section on dependency rules.
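As a rough sketch of such a layout (the file names, object list, and install path below are purely illustrative, not taken from any particular project), a makefile supporting all three invocations might look like this:

CC     = gcc
PREFIX = /usr/local
OBJS   = main.o util.o

prog1 : $(OBJS)
        $(CC) -o prog1 $(OBJS)

clean :
        rm -f prog1 $(OBJS)

install : prog1
        cp prog1 $(PREFIX)/bin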
Of course, a make utility can be written in a scripting language. In such cases the syntax of the input makefile is also somewhat "arbitrary", meaning that it need not follow the command language syntax of the native language interpreter (lex & yacc are helpful here but need not play a role). In fact, Nick Ing-Simmons has written a make-like utility entirely in Perl. It is available from CPAN.
perl Makefile.PL
Again, the make utility is a tool originally created for compiling computer programs, but it can be used for various other tasks such as installing packages.
Make is controlled by the makefile, which is written in a special declarative mini-language consisting of rules.
By default the makefile is a file in the current directory named Makefile or makefile. A rule in the makefile tells make how to execute a series of commands in order to build a target file from source files. It also specifies a list of dependencies of the target file. This list should include all files (whether source files or other targets) which are used as inputs to the commands in the rule. A simple rule has the following syntax:
target : dependencies ...
        commands ...
Note that commands are arbitrary shell commands. When you run make, you can specify particular targets to update; otherwise, make selects the first target listed in the makefile. Of course, any other target files needed as input for generating these targets must be updated first.
At the heart of make is the mechanism of resolving dependencies based on timestamps of the objects.
Make uses the makefile to figure out which target files ought to be brought up to date, and then determines which of them actually need to be updated. If a target file is newer than all of its dependencies, then it is already up to date, and it does not need to be regenerated.
The other target files do need to be updated, but in the right order: each target file must be regenerated before it is used in regenerating other targets which depend on it.
Make goes through a makefile starting with the target it is going to create and checks each of the target's dependencies to see if they are also listed as targets.
It follows the chain of dependencies until it reaches the end of the chain and then begins backing out executing the commands found in each target's rule.
Not every file in the chain necessarily needs to be recompiled. Make looks at the timestamp of each file in the chain and compiles from the point required to bring every file in the chain up to date. If any file is missing, it is rebuilt if possible.
Make builds object files from the source files and then links the object files to create the executable. If a source file is changed only its object file needs to be compiled and then linked into the executable instead of recompiling all the source files.
This is an example makefile to build an executable file called prog1. It requires the source files file1.cc, file2.cc, and file3.cc. An include file, mydefs.h, is required by file1.cc and file2.cc. If you wanted to compile this program from the command line with a C++ compiler, the command would be
% CC -o prog1 file1.cc file2.cc file3.cc
A makefile can do the same job better and more efficiently. Once the makefile is written, instead of the command above we can simply type:
% make prog1
or if prog1 is the first target defined in the makefile
% make
Our first example makefile is much longer than necessary but is useful for describing what is going on.
prog1 : file1.o file2.o file3.o
        gcc -o prog1 file1.o file2.o file3.o
file1.o : file1.c mydefs.h
        gcc -c file1.c
file2.o : file2.c mydefs.h
        gcc -c file2.c
file3.o : file3.c
        gcc -c file3.c
clean :
        rm file1.o file2.o file3.o
Let's go through the example to see what make does when we execute make prog1, assuming the program has never been compiled.
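Once the makefile above is in place, the timestamp logic can be observed with a short experiment such as this sketch (the file names are the ones from the example):

% make prog1        # first build: every .o file is compiled, then prog1 is linked
% make prog1        # nothing happens: all targets are newer than their dependencies
% touch file2.c     # pretend we just edited one source file
% make prog1        # only file2.o is recompiled, then prog1 is relinked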
This example can be simplified somewhat by defining macros. Macros are useful for replacing duplicate entries. The object files in this example were listed three times, so creating a macro saves a little typing. More importantly, if the list of object files changes, the makefile can be updated by changing just the macro definition.
OBJS = file1.o file2.o file3.o

prog1 : $(OBJS)
        gcc -o prog1 $(OBJS)
file1.o : file1.c mydefs.h
        gcc -c file1.c
file2.o : file2.c mydefs.h
        gcc -c file2.c
file3.o : file3.c
        gcc -c file3.c
clean :
        rm $(OBJS)
This makefile is still longer than necessary and can be shortened by letting make use its internal macros, special macros, and suffix rules.
OBJS = file1.o file2.o file3.o

prog1 : ${OBJS}
        ${CC} -o $@ ${OBJS}
file1.o file2.o : mydefs.h
clean :
        rm ${OBJS}
Make is invoked from a command line with the following format
make [-f makefile] [-bBdeiknpqrsSt] [macro name=value] [names]
However, of this vast array of possible options, only -f makefile and the target names are used frequently. The table below shows the results of executing make with these options.
Command | Result
make | use the default makefile; build the first target in the file
make myprog | use the default makefile; build the target myprog
make -f mymakefile | use the file mymakefile as the makefile; build the first target in the file
make -f mymakefile myprog | use the file mymakefile as the makefile; build the target myprog
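For instance, if the rules live in a file with a non-default name (build.mk here is only an illustration), the -f option selects it:

% make -f build.mk            # build the first target defined in build.mk
% make -f build.mk clean      # build the clean target defined in build.mk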
To operate, make needs to know the relationship between your program's component files and the commands to update each file. This information is contained in a makefile you must write, called Makefile or makefile. By default, when invoked without parameters, make searches the current working directory for one of these two files and uses the first one it finds.
Hint:
Comments can be entered in the makefile following a pound sign ( # ) and the remainder of the line will be ignored by make. If multiple lines are needed each line must begin with the pound sign.
# This is a comment line
A rule consists of three parts: one or more targets, zero or more dependencies, and zero or more commands, in the following form:
target1 [target2 ...] :[:] [dependency1 ...] [; commands] [<tab> command]
Note: each command line must begin with a tab as the first character on the line and only command lines may begin with a tab.
A target is usually the name of the file that make creates, often an object file or executable program.
A phony target is one that isn't really the name of a file. It will only have a list of commands and no prerequisites.
One common use of phony targets is for removing files that are no longer needed after a program has been made. The following example simply removes all object files found in the directory containing the makefile.
clean :
        rm *.o
A dependency identifies a file that is used to create another file. For example a .cc file is used to create a .o, which is used to create an executable file.
Each command in a rule is interpreted by a shell to be executed. By default make uses the /bin/sh shell. The default can be overridden by using the macro SHELL = /bin/sh or equivalent to use the shell of your preference. This macro should be included in every makefile to make sure the same shell is used each time the makefile is executed.
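For example, near the top of a makefile (a sketch; the list-sources target is hypothetical, and remember that command lines must start with a tab):

SHELL = /bin/sh

list-sources :
        for f in *.c; do echo "source file: $$f"; done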
Macros allow you to define constants. By using macros you can avoid repeating text entries and make makefiles easier to modify. Macro definitions have the form
NAME1 = text string
NAME2 = another string
Macros are referred to by placing the name in either parentheses or curly braces and preceding it with a dollar sign ( $ ). The previous definitions could be referenced as
$(NAME1) ${NAME2}
which are interpreted as
text string
another string
Some valid macro definitions are
LIBS = -lm
OBJS = file1.o file2.o $(MORE_OBJS)
MORE_OBJS = file3.o
CC = gcc
DEBUG_FLAG =    # assign -g for debugging
which could be used in a makefile entry like this
prog1 : ${OBJS}
        ${CC} $(DEBUG_FLAG) -o prog1 ${OBJS} ${LIBS}
Macro names can use any combination of uppercase and lowercase letters, digits, and underscores. By convention macro names are in uppercase. The text string can also be null, as in the DEBUG_FLAG example, which also shows that comments can follow a definition.
You should note from the previous example that the OBJS macro contains another macro, $(MORE_OBJS). The order in which the macros are defined does not matter, but if a macro name is defined twice, only the last definition is used. Macros cannot be undefined and then redefined as something else.
Make can receive macros from four sources: macros may be defined in the makefile (as we've already seen), internally defined within make, defined on the command line, or inherited from shell environment variables.
Internally defined macros are ones that are predefined in make. You can invoke make with the -p option to display a listing of all the macros, suffix rules and targets in effect for the current build. Here is a partial listing with the default macros from MTSU's mainframe frank.
CC = gcc
CCFLAGS = -O
GFLAGS =
CFLAGS = -O
LDFLAGS =
LD = ld
LFLAGS =
MAKE = make
MAKEFLAGS = b
There are a few special internal macros that make defines for each dependency line. Most are beyond the scope of this document but one is especially useful in a makefile and you are likely to see it even in simple makefiles.
The macro $@ evaluates to the name of the current target. In the following example the target name is prog1, which is also needed in the command line to name the executable file. In this example -o $@ evaluates to -o prog1.
prog1 : ${OBJS}
        ${CC} -o $@ ${OBJS}
Macros can be defined on the command line. From the previous example the debug flag, which was null, could be set from the command line with the command
% make prog1 DEBUG_FLAG=-g
Definitions comprised of several words must be enclosed in single or double quotes so that the shell will pass them as a single argument. For example
% make prog1 "LIBS= -lm -lX11"
could be used to link an executable using the math and X Windows libraries.
Shell variables that have been defined as part of the environment are available to make as macros within a makefile. C shell users can see the environment variables they have defined from the command line with the command
% env
These variables can be set within the .login file or from the command line with a command like:
% setenv DIR /usr/bin
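A sketch of how such an environment variable can then be used inside a makefile (the install target and the use of DIR as the destination are hypothetical):

# DIR comes from the environment, e.g. after: setenv DIR /usr/bin
install : prog1
        cp prog1 $(DIR)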
With four sources for macros there is always the possibility of conflicts. There are two orders of priority available to make. The default priority order, from least to greatest, is:
1. internal (default) definitions
2. shell environment variables
3. macros defined in the makefile
4. macros defined on the command line
If make is invoked with the -e option, the priority order from least to greatest is:
1. internal (default) definitions
2. macros defined in the makefile
3. shell environment variables
4. macros defined on the command line
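A small experiment makes the difference visible. Assume a makefile containing this fragment (the cflags-demo target is hypothetical):

CFLAGS = -O
cflags-demo :
        @echo "CFLAGS is $(CFLAGS)"

Then the priority rules can be observed like this:

% env CFLAGS=-g make cflags-demo        # prints -O: the makefile beats the environment
% env CFLAGS=-g make -e cflags-demo     # prints -g: -e lets the environment beat the makefile
% make cflags-demo CFLAGS=-O2           # prints -O2: the command line beats everything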
Make has a set of default rules called suffix or implicit rules. These are generalized rules that make can use to build a program. For example in building a C++ program these rules tell make that .o object files are made from .cc source files. The suffix rule that make uses for a C++ program is
.cc.o :
        $(CXX) $(CXXFLAGS) -c $<
where $< is a special macro which in this case stands for a .cc file that is used to produce a particular target .o file.
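Putting it together, a minimal makefile for the earlier prog1 example could rely entirely on that suffix rule. This is only a sketch; the .SUFFIXES line ensures .cc is in make's suffix list:

CXX      = CC
CXXFLAGS = -O
OBJS     = file1.o file2.o file3.o

prog1 : $(OBJS)
        $(CXX) -o $@ $(OBJS)

.SUFFIXES : .cc .o
.cc.o :
        $(CXX) $(CXXFLAGS) -c $<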
Jun 17, 2020 | opensource.com
Knowing how Linux uses libraries, including the difference between static and dynamic linking, can help you fix dependency problems.
Linux, in a way, is a series of static and dynamic libraries that depend on each other. For new users of Linux-based systems, the whole handling of libraries can be a mystery. But with experience, the massive amount of shared code built into the operating system can be an advantage when writing new applications.
To help you get acquainted with this topic, I prepared a small example application that shows the most common methods that work on common Linux distributions (these have not been tested on other systems). To follow along with this hands-on tutorial using the example application, open a command prompt and type:
$ git clone https://github.com/hANSIc99/library_sample
$ cd library_sample/
$ make
cc -c main.c -Wall -Werror
cc -c libmy_static_a.c -o libmy_static_a.o -Wall -Werror
cc -c libmy_static_b.c -o libmy_static_b.o -Wall -Werror
ar -rsv libmy_static.a libmy_static_a.o libmy_static_b.o
ar: creating libmy_static.a
a - libmy_static_a.o
a - libmy_static_b.o
cc -c -fPIC libmy_shared.c -o libmy_shared.o
cc -shared -o libmy_shared.so libmy_shared.o
$ make clean
rm *.o

After executing these commands, these files should be added to the directory (run ls to see them):

my_app
libmy_static.a
libmy_shared.so

About static linking

When your application links against a static library, the library's code becomes part of the resulting executable. This is performed only once at linking time, and these static libraries usually end with a .a extension.

A static library is an archive (ar) of object files. The object files are usually in the ELF format. ELF is short for Executable and Linkable Format, which is compatible with many operating systems.
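The library_sample repository ships its own makefile; purely as an illustration (this is a sketch, not the repository's actual file), a makefile that reproduces the commands shown in the transcript above might look like this:

CC = cc

libmy_static.a : libmy_static_a.o libmy_static_b.o
        ar -rsv libmy_static.a libmy_static_a.o libmy_static_b.o

libmy_static_a.o : libmy_static_a.c
        $(CC) -c libmy_static_a.c -o libmy_static_a.o -Wall -Werror

libmy_static_b.o : libmy_static_b.c
        $(CC) -c libmy_static_b.c -o libmy_static_b.o -Wall -Werror

libmy_shared.so : libmy_shared.c
        $(CC) -c -fPIC libmy_shared.c -o libmy_shared.o
        $(CC) -shared -o libmy_shared.so libmy_shared.o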
The output of the file command tells you that the static library libmy_static.a is the ar archive type:

$ file libmy_static.a
libmy_static.a: current ar archive

With ar -t, you can look into this archive; it shows two object files:

$ ar -t libmy_static.a
libmy_static_a.o
libmy_static_b.o

You can extract the archive's files with ar -x <archive-file>. The extracted files are object files in ELF format:

$ ar -x libmy_static.a
$ file libmy_static_a.o
libmy_static_a.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped

About dynamic linking

Dynamic linking means the use of shared libraries. Shared libraries usually end with .so (short for "shared object").
Shared libraries are the most common way to manage dependencies on Linux systems. These shared resources are loaded into memory before the application starts, and when several processes require the same library, it will be loaded only once on the system. This feature saves on memory usage by the application.
Another thing to note is that when a bug is fixed in a shared library, every application that references this library will profit from it. This also means that if the bug remains undetected, each referencing application will suffer from it (if the application uses the affected parts).
It can be very hard for beginners when an application requires a specific version of the library, but the linker only knows the location of an incompatible version. In this case, you must help the linker find the path to the correct version.
Although this is not an everyday issue, understanding dynamic linking will surely help you in fixing such problems.
Fortunately, the mechanics for this are quite straightforward.
To detect which libraries are required for an application to start, you can use ldd, which will print out the shared libraries used by a given file:

$ ldd my_app
        linux-vdso.so.1 (0x00007ffd1299c000)
        libmy_shared.so => not found
        libc.so.6 => /lib64/libc.so.6 (0x00007f56b869b000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f56b8881000)

Note that the library libmy_shared.so is part of the repository but is not found. This is because the dynamic linker, which is responsible for loading all dependencies into memory before executing the application, cannot find this library in the standard locations it searches.

Errors associated with linkers finding incompatible versions of common libraries (like bzip2, for example) can be quite confusing for a new user. One way around this is to add the repository folder to the environment variable LD_LIBRARY_PATH to tell the linker where to look for the correct version. In this case, the right version is in this folder, so you can export it:

$ LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH
$ export LD_LIBRARY_PATH

Now the dynamic linker knows where to find the library, and the application can be executed. You can rerun ldd to invoke the dynamic linker, which inspects the application's dependencies and loads them into memory. The memory address is shown after the object path:

$ ldd my_app
        linux-vdso.so.1 (0x00007ffd385f7000)
        libmy_shared.so => /home/stephan/library_sample/libmy_shared.so (0x00007f3fad401000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f3fad21d000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f3fad408000)
To find out which linker is invoked, you can use file:

$ file my_app
my_app: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=26c677b771122b4c99f0fd9ee001e6c743550fa6, for GNU/Linux 3.2.0, not stripped

The linker /lib64/ld-linux-x86-64.so.2 is a symbolic link to ld-2.31.so, which is the default linker for my Linux distribution:

$ file /lib64/ld-linux-x86-64.so.2
/lib64/ld-linux-x86-64.so.2: symbolic link to ld-2.31.so

Looking back at the output of ldd, you can also see (next to libmy_shared.so) that each dependency ends with a number (e.g., /lib64/libc.so.6). The usual naming scheme of shared objects is:

libXYZ.so.<MAJOR>.<MINOR>

On my system, libc.so.6 is also a symbolic link to the shared object libc-2.31.so in the same folder:

$ file /lib64/libc.so.6
/lib64/libc.so.6: symbolic link to libc-2.31.so

If you are facing the issue that an application will not start because the loaded library has the wrong version, it is very likely that you can fix this issue by inspecting and rearranging the symbolic links or specifying the correct search path (see "The dynamic loader: ld.so" below).
For more information, look at the ldd man page.

Dynamic loading

Dynamic loading means that a library (e.g., a .so file) is loaded during a program's runtime. This is done using a certain programming scheme.

Dynamic loading is applied when an application uses plugins that can be modified during runtime.

See the dlopen man page for more information.

The dynamic loader: ld.so

On Linux, you mostly are dealing with shared objects, so there must be a mechanism that detects an application's dependencies and loads them into memory.
ld.so looks for shared objects in these places in the following order:

- The relative or absolute path in the application (hardcoded with the -rpath compiler option on GCC)
- In the environment variable LD_LIBRARY_PATH
- In the file /etc/ld.so.cache
Keep in mind that adding a library to the system's library archive /usr/lib64 requires administrator privileges. You could copy libmy_shared.so manually to the library archive and make the application work without setting LD_LIBRARY_PATH:

unset LD_LIBRARY_PATH
sudo cp libmy_shared.so /usr/lib64/

When you run ldd, you can see the path to the library archive shows up now:

$ ldd my_app
        linux-vdso.so.1 (0x00007ffe82fab000)
        libmy_shared.so => /lib64/libmy_shared.so (0x00007f0a963e0000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f0a96216000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f0a96401000)

Customize the shared library at compile time

If you want your application to use your shared libraries, you can specify an absolute or relative path during compile time.
Modify the makefile (line 10) and recompile the program by invoking make -B. Then, the output of ldd shows libmy_shared.so listed with its absolute path.

Change this:
CFLAGS =-Wall -Werror -Wl,-rpath,$(shell pwd)

To this (be sure to edit the username):

CFLAGS =-Wall -Werror -Wl,-rpath,/home/stephan/library_sample

Then recompile:
$ make

Confirm it is using the absolute path you set, which you can see on line 2 of the output:
$ ldd my_app
        linux-vdso.so.1 (0x00007ffe143ed000)
        libmy_shared.so => /lib64/libmy_shared.so (0x00007fe50926d000)
        /home/stephan/library_sample/libmy_shared.so (0x00007fe509268000)
        libc.so.6 => /lib64/libc.so.6 (0x00007fe50909e000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fe50928e000)

This is a good example, but how would this work if you were making a library for others to use? New library locations can be registered by writing them to /etc/ld.so.conf or creating a <library-name>.conf file containing the location under /etc/ld.so.conf.d/. Afterward, ldconfig must be executed to rewrite the ld.so.cache file. This step is sometimes necessary after you install a program that brings some special shared libraries with it.

See the ld.so man page for more information.

How to handle multiple architectures

Usually, there are different libraries for the 32-bit and 64-bit versions of applications. The following list shows their standard locations for different Linux distributions:
Red Hat family
- 32 bit: /usr/lib
- 64 bit: /usr/lib64

Debian family
- 32 bit: /usr/lib/i386-linux-gnu
- 64 bit: /usr/lib/x86_64-linux-gnu

Arch Linux family
- 32 bit: /usr/lib32
- 64 bit: /usr/lib64

FreeBSD (technically not a Linux distribution)
- 32 bit: /usr/lib32
- 64 bit: /usr/lib
Knowing where to look for these key libraries can make broken library links a problem of the past.
While it may be confusing at first, understanding dependency management in Linux libraries is a way to feel in control of the operating system. Run through these steps with other applications to become familiar with common libraries, and continue to learn how to fix any library challenges that could come up along your way.
- Do I need a makefile?
- A simple makefile
- Phony targets
- Working with several directories
- Using Wildcards
- Functions and Advanced Variable Usage
- Debugging Makefiles
A makefile is the set of instructions that you use to tell makepp how to build your program. Makepp can accept most makefiles written for the standard unix make, but if you're starting from scratch, it is often much simpler to use some of makepp's advanced features. This is an introduction for writing makefiles that are specific to makepp.
If you already know a lot about writing makefiles, you might want to at least peruse the later sections of this file because they show the preferred way to do things with makepp, which is often different from the traditional way to do it with make. Another source of examples and advice on writing makefiles for makepp is makepp_cookbook.
Building a program from its source files can be a complicated and time-consuming operation. The commands are too long to be typed in manually every time. However, a straightforward shell script is seldom used for compiling a program, because it's too time-consuming to recompile all modules when only one of them has changed.
However, it's too error-prone to allow a human to tell the computer which files need to be recompiled. Forgetting to recompile a file can mean hours of frustrating debugging. A reliable automatic tool is necessary for determining exactly which modules need recompilation.
Makepp (short for Make-plus-plus, or make++) is a tool for solving exactly this problem. It is an improvement on the make program, a standard tool that has been around for many years. It relies either on its own builtin knowledge (in very simple cases), or on a file called a makefile that contains a detailed recipe for building the program.
Usually, the input files are program source code, and the output files are executables, but makepp doesn't care what they are. You can use a makefile to control any kind of procedure where you need to selectively execute certain commands depending on which files have changed. You could, for example, use makepp to do data analysis, where your input files are raw data and analysis programs, and your output files are processed data or graphs or whatever. Makepp will figure out which of the processed data files need to be updated whenever some of the data files or analysis programs change. The examples in this introduction will assume you are building an executable program from source code, but you can do a lot more with makepp than just that if you use your imagination.
If your program consists of a single module, you probably don't need makepp, because you know that any change that you make requires recompiling that module. However, if your program consists of even just two modules, then you will definitely want to use a program like makepp.
If your program is relatively simple and doesn't require anything particularly special, makepp may already know how to build it without your explicitly giving instructions. For example, suppose you have a program in a single source file, called test.c. You can just type makepp test and your program will build like this:

% makepp test
makepp: Entering directory `/somewhere/or/other'
gcc -g -Wall -c test.c -o test.o
gcc -g -Wall test.o -o test
Warning: on unix, to run a program called 'test', you must type
./test rather than just 'test'.

These are the basic commands needed to compile a program on unix. If these commands don't make any sense to you, see makepp_tutorial_compilation.
Makepp contains builtin rules for C, C++, and Fortran.
Makepp can sometimes figure out how to compile programs that are contained in more than one source file, or programs that must be linked with various system libraries. It does this by guessing which source files and libraries you need based on the files that you include. The actual algorithm is too complicated to discuss here in a tutorial (but see makepp_builtin); you can try it, and if it doesn't work automatically for you, you need to write your own makefile.
By default, for C and C++, makepp compiles the program with debug information and without optimization. If you want to turn on optimization so that your program runs faster, change the command line to:
makepp CFLAGS=-O2 test

If you're compiling C++ instead of C, use CXXFLAGS=-O2 instead of CFLAGS=-O2. For a complete list of other options you can configure without writing a makefile, see makepp_builtin.

Makepp's builtin rules are somewhat more powerful than the standard unix make, but if you write programs of any complexity, it's likely that you'll need a makefile eventually to tell makepp what to do.
If you are not familiar with unix compilation commands, it may be helpful at this point to read makepp_tutorial_compilation for a description of what these cryptic unix compilation commands do.
Suppose you are writing a C++ program which has two source modules, processing.cxx and gui.cxx, along with numerous include files. If you were to build your program from scratch, you would need to execute something like these commands:

c++ -c processing.cxx -o processing.o
c++ -c gui.cxx -o gui.o
c++ processing.o gui.o -o my_program

The first two commands are compilation commands, and the third invokes the linker to combine the two object files into a single executable. If you make changes to gui.cxx but not to processing.cxx, then you don't need to reexecute the first command, but you do need to execute the last two commands. Makepp can figure this out for you automatically.

(If you've never worked with make before, you may be thinking that you could combine the above three commands into a single command, like this:
c++ processing.cxx gui.cxx -o my_program

When you omit the -c option to the compiler, it combines the compilation and linking step. This is often quite convenient when you are not writing a makefile. However, it's not a good idea to do this in a makefile, because it always recompiles both modules even if one of them hasn't changed, and this can take a significant amount of extra time.)

In order to use makepp to control the build process, you'll need to write a makefile. The makefile is a text file that contains the recipe for building your program. It usually resides in the same directory as the sources, and it is usually called Makefile.

Each one of these commands should be a separate rule in a makefile. A rule is an instruction for building one or more output files from one or more input files. Makepp determines which rules need to be reexecuted by determining whether any of the input files for a rule have changed since the last time the rule was executed.
A rule has a syntax like this:
output_filenames : input_filenames
        actions

The first line of the rule contains a space-separated list of output files, followed by a colon, followed by a space-separated list of input files. The output files are also called targets, and the input files are also called dependencies; we say that the target file depends on the dependencies, because if any of the dependencies change, the target must be rebuilt.
The remaining lines of the rule (the actions) are shell commands to be executed. Each action must be indented with at least one space (traditional make requires a tab character). Usually, there's just one action line, but there can be as many as you want; each line is executed sequentially, and if any one of them fails, the remainder are not executed. The rule ends at the first line which is not indented.
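For instance, a rule with two actions might look like this sketch (the FILES variable is hypothetical); the commands run in order, and if tar fails, gzip is never executed:

archive.tar.gz: $(FILES)
    tar cf archive.tar $(FILES)
    gzip -9 archive.tar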
You can place the rules in any order in the makefile, but it is traditional to write the rule that links the program first, followed by the compilation rules. One reason for this is that if you simply type "makepp", then makepp attempts to build the first target in the file, which means that it will build your whole program and not just a piece of it. (If you want to build something other than the first target, you have to specify the name of the target on the command line, e.g., "makepp processing.o".)

The above compilation commands should be written as three separate rules. A makefile for building this program could look like this:
# Link command:
my_program: processing.o gui.o
        c++ processing.o gui.o -o my_program

# Compilation commands:
processing.o: processing.cxx
        c++ -c processing.cxx -o processing.o

gui.o: gui.cxx
        c++ -c gui.cxx -o gui.o

(Characters on a line following a # are ignored; they are just comments. You do not need the "# Link command:" comment in the makefile at all.)

To use this makefile, simply cd to the directory and type "makepp". Makepp will attempt to build the first target in the makefile, which is my_program. (If you don't want it to build the first target, then you have to supply the name of the target you actually want to build on the command line.)

When makepp attempts to build my_program, it realizes that it first must build processing.o and gui.o before it can execute the link command. So it looks at the other rules in the makefile to determine how to build these.

In order to build processing.o, makepp uses the second rule. Since processing.o depends on processing.cxx, makepp will also try to make processing.cxx. There is no rule to make processing.cxx; it must already exist.

Makepp checks whether processing.cxx has changed since the last time processing.o was built. By default, it determines this by looking at the dates on the file. Makepp remembers what the date of processing.cxx was the last time processing.o was made by storing it in a separate file (in a subdirectory called .makepp). Makepp will execute the actions to build the target if any of the following is true:
- The target does not exist.
- The target exists, but makepp does not have any information about the last build.
- The date on any input file has changed since the last build.
- The date on any target has changed since the last build.
- The actions have changed since the last build.
- The last build occurred on a different architecture (different CPU type or operating system type).
It might seem a little funny that makepp executes the action if either the output file or the input files have changed since the last build. Makepp is designed to guarantee that your build is correct, according to the commands in the makefile. If you go and modify the file yourself, then makepp can't guarantee that the modified file is actually correct, so it insists on rebuilding. (For more information on how makepp decides whether to rebuild, and how you can control this, see makepp_signatures.)
Now processing.o might not depend only on processing.cxx; if processing.cxx includes any .h files, then it needs to be recompiled if any of those .h files has changed, even if processing.cxx itself has not changed. You could modify the rule like this:
# Unnecessary listing of .h files
processing.o: processing.cxx processing.h simple_vector.h list.h
        c++ -c processing.cxx -o processing.o

However, it is a real nuisance to modify the makefile every time you change the list of files that are included, and it is also extremely error prone. You would not only have to list the files that processing.cxx includes, but also all the files that those files include, etc. You don't have to do this. Makepp is smart enough to check for include files automatically. Any time it sees a command that looks like a C or C++ compilation (by looking at the first word of the action), it reads in the source files looking for #include directives. It knows where to look for include files by scanning for -I options on your compiler command line. Any files which are included are automatically added to the dependency list, and any files which those include. If any of them has changed, the file will be recompiled.

Once makepp knows that processing.o is up to date, it then determines whether gui.o needs to be rebuilt by applying the same procedure to the third rule. When both processing.o and gui.o are known to be built correctly, then makepp applies the same procedure to see if the link command needs to be reexecuted.

The above makefile will work, but even for this simple problem, an experienced user is not likely to write his makefile this way. Several improvements are discussed in the next sections.
So far, our makefile for compiling our program of two modules looks like this:

# Link command:
my_program: processing.o gui.o
        c++ processing.o gui.o -o my_program

# Compilation commands:
processing.o: processing.cxx
        c++ -c processing.cxx -o processing.o

gui.o: gui.cxx
        c++ -c gui.cxx -o gui.o

This works wonderfully, but suppose now we want to change some compilation options. Or maybe we want to use a different compiler. We'd have to change all three compilation lines.
Similarly, suppose we want to change the list of modules to compile. We'd have to change it in two places.
Duplication of information like this is a recipe for disaster. If you go and change your makefile, it's pretty much guaranteed that at some point, you or someone else will forget to change one of the places. Depending on what the change is (especially if it affects preprocessor definitions), this can lead to subtle and hard-to-debug problems in your program.
The way to avoid duplication of information is to specify the information only once and store it in a variable, which can be accessed each time the information is needed.
# Define the symbols we might want to change:
CXX := c++
CXXFLAGS := -g
OBJECTS := processing.o gui.o

my_program: $(OBJECTS)
        $(CXX) $(OBJECTS) -o my_program

processing.o: processing.cxx
        $(CXX) $(INCLUDES) $(CXXFLAGS) -c processing.cxx -o processing.o

gui.o: gui.cxx
        $(CXX) $(CXXFLAGS) -c gui.cxx -o gui.o
Here $(CXX) expands to be the value of the variable CXX, and similarly for $(CXXFLAGS) and $(OBJECTS). Now we can just change one line in the makefile, and all relevant compilation commands are affected.

In fact, we don't even need to change the makefile to change compilation options. Assignments specified on the command line override assignments in the makefile. For example, we could type this to the shell:
makepp CXXFLAGS="-g -O2"
which overrides the setting of CXXFLAGS in the makefile. It is as if the makefile contained the line

CXXFLAGS := -g -O2

instead of the definition it does contain.
It might not seem all that useful to be able to override these things for your own development, but if you distribute your sources to other people, they might appreciate it.
Variable names are case sensitive (e.g., OBJECTS is different from objects). Usually people write most variables in upper case only, but you don't have to.

If you need to put a literal dollar sign into a rule action, write it with a double dollar sign, like this:
test:
        for testfile in *.test; do run_test $$testfile; done

Conventionally, there are a few variables which you might want to set. These are just conventions, but you will see them in a lot of makefiles.
CC := cc        # The C compiler.
CFLAGS := -g    # C compilation options which relate to
                # optimization or debugging (usually
                # just -g or -O).  Usually this wouldn't
                # include -I options to specify the
                # include directories, because then you
                # couldn't override it on the command line
                # easily as in the above example.
CXX := c++      # The C++ compiler.  (Sometimes "CPP" instead
                # of CXX.)
CXXFLAGS := -g  # C++ compilation options related to
                # optimization or debugging (-O or -g).
F77 := f77      # The fortran compiler.
FFLAGS :=       # Optimization flags for fortran.
Makepp will guess appropriate values for some of these variables if you don't specify them (see makepp_builtin), but it is usually best to set them explicitly--it makes it easier on anyone reading your makefile.
There are a lot more extremely powerful things you can do with variables, but first we need to explain some more things about makefiles.
Having one rule for each compilation command is fine when there are only a few files, but what if your program consists of dozens of source files? Most of them have to be compiled with very similar commands. It is tedious to type in a separate rule for each source file, and then if you decide to change the rules, you have to change the makefile in a dozen places. A better solution to this problem is to use a pattern rule.
A pattern rule is a concise way of specifying a rule for many files at once. The rule will depend on the file names, but usually it depends on them in a simple way. You specify a pattern by using the % wildcard. When present in the dependency list, % matches any string of any length; when present in the list of targets, % stands for the string that % in the dependency list matched.

The following pattern rule will take any .c file and compile it into a .o file:

%.o: %.c
        $(CC) $(CFLAGS) $(INCLUDES) -c $(input) -o $(output)

(This assumes that you have the variables CC, CFLAGS, and INCLUDES defined to be something suitable. Makepp will guess a value for CC and CFLAGS.)

The first line of the rule says that it applies to every possible input file that matches the pattern %.c. These .c files can be transformed into the corresponding .o file using the specified actions.

The action of the rule is quite similar to the other actions we've seen previously, except that it uses automatic variables. An automatic variable is a variable whose value is automatically set by makepp depending on the rule that it appears in. The most useful automatic variables are:
$(input)
- The name of the first input file. In this rule, this would be the file that matches the %.c pattern. $(dependency) is a synonym for $(input). In older makefiles, you will also see the cryptic symbol $< used as well.

$(output)
- The name of the first output file. In this rule, this would be the file that matches the %.o pattern. $(target) and $@ are synonyms.

$(inputs)
- The name of all explicitly listed input files. In this case, since there is only one, $(inputs) is equivalent to $(input). $(dependencies) and $^ are synonyms.

$(outputs)
- The name of all explicitly listed targets. In this case, since there is only one, $(outputs) is equivalent to $(output). $(targets) is a synonym for $(outputs).

Note that these variables are lower case.
You can use these automatic variables even for non-pattern rules. This avoids repeating target filenames.
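For example, the link rule from earlier can be written without repeating any file names; this sketch uses the same automatic variables in an ordinary, non-pattern rule:

my_program: processing.o gui.o
    $(CXX) $(inputs) -o $(output)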
You can actually do considerably more complicated things with pattern rules. For example,
# Put the object files into a separate directory:
objects/%.o: %.cpp
        $(CXX) $(CXXFLAGS) -c $(input) -o $(output)

# Run a preprocessor to make source files:
moc_%.cxx: %.h
        $(MOC) $(input) -o $(output)

Using pattern rules and automatic variables, we'd probably rewrite our makefile for our simple program like this:
CXX := c++
CXXFLAGS := -g
INCLUDES := -I.     # This would contain any -I options to the
                    # compiler, if there are any.
LIBS := -L/usr/X11R6/lib -lX11    # Contains libraries we need to link in.
OBJECTS := processing.o gui.o

my_program: $(OBJECTS)
        $(CXX) $(inputs) -o $(output) $(LIBS)

%.o: %.cxx
        $(CXX) $(INCLUDES) $(CXXFLAGS) -c $(input) -o $(output)

Now we don't have to have an explicit rule for each object file we need to produce. If we want to add another module to our program, we only have to change the one line that defines the OBJECTS variable. Note that this makefile is now much more concise than our original makefile. Each piece of information occurs only once so there is no possibility of making a mistake by changing information in one place and forgetting to change it in others.

When you use pattern rules, it's not uncommon for there to be two different rules that can produce the same file. If both rules are pattern rules, then the one that occurs later in the makefile is actually used. If one rule is a pattern rule, and the other is an explicit rule (one that actually names the target file explicitly), then the explicit rule is used. This is often helpful if you want to compile most modules with the same command, but there is one module that needs slightly different compilation options, as shown in this makefile fragment:
CXXFLAGS := -g -O2
FAST_CXXFLAGS := -DNO_DEBUG -O6 -malign-double -funroll-all-loops

%.o: %.cpp
        $(CXX) $(CXXFLAGS) -c $(input) -o $(output)

time_critical_subs.o: time_critical_subs.cpp
        $(CXX) $(FAST_CXXFLAGS) -c $(input) -o $(output)

There is also another syntax that can be more convenient for affecting compilation options for just one or a few targets. It is possible to tell makepp that a variable should have a different value for certain specific targets. In this example, it would look like this:
CXXFLAGS := -g -O2
FAST_CXXFLAGS := -DNO_DEBUG -O6 -malign-double -funroll-all-loops

%.o: %.cpp
        $(CXX) $(CXXFLAGS) -c $(input) -o $(output)

time_critical_subs.o: CXXFLAGS := $(FAST_CXXFLAGS)

In general, if you specify a variable name after a list of targets, then it takes a different value when the build command for those targets is being determined.
If you find yourself wanting to do something with patterns that isn't expressed easily using the % wildcard, makepp has another syntax which is somewhat harder to read, but considerably more powerful. See the :foreach clause for more details.

Makepp actually has builtin rules for compiling C or C++ or Fortran code, which are available if you don't override them with your own rules. The builtin rules are almost identical to the examples above. Most makefiles contain pattern rules for compilation, but you can depend on the builtin rules if you want.
Often it is convenient to put commands into the makefile that don't actually build a file, but are somehow logically associated with the build process. For example, a very common procedure in makefiles is something like this:
prefix=/usr/local

install: our_program
        install -m 0755 our_program $(prefix)/bin
        install -m 0644 *.png $(prefix)/share/our_program/icons

.PHONY: install

When someone types makepp install, then makepp first builds our_program, then runs the commands associated with the install target. The install command simply copies its arguments to the specified directory, and sets the file's protection to the indicated value. So it copies our_program into /usr/local/bin, and some associated data files into /usr/local/share/our_program/icons. But this doesn't create a file called install in the current directory.

The install target here is called a phony target because makepp treats it as if it were a real file, but it is not actually a file, it's just a trick for forcing makepp to build its dependencies and then run some commands.

That's what the line

.PHONY: install

is for. It tells makepp that it really shouldn't expect the file ./install to exist after the commands have executed. If you forget the phony declaration, then makepp will expect the file install to exist after executing the commands, and it will complain loudly if it does not.

You can also write the phony declaration like this:

$(phony install): our_program
        ...

and then omit the .PHONY: install line. This means that you can declare the target as phony on the same line as you define it, which may make your makefiles more readable.

Phony targets are extremely common in makefiles. In almost all makefiles, the first target is the phony target all, like this:

$(phony all): program1 program2 program3

If no target is specified on the command line, makepp attempts to build the first target in the file. If your makefile makes more than just one program, you most likely want to build all of the programs by default. In this example, if the programmer just types makepp without any arguments, makepp attempts to build all, which forces it to build all three programs from this directory.

Here is a sample makefile fragment that illustrates some commonly used phony targets:

PROGRAMS := combobulator discombobulator

$(phony all): $(PROGRAMS)   # All is the first target, so it's the default.

combobulator: $(COMBOBULATOR_OBJS)
        $(CXX) $(inputs) -o $(output)

discombobulator: $(DISCOMBOBULATOR_OBJS)
        $(CXX) $(inputs) -o $(output)

#
# This target makes sure everything is compiled, and then puts the
# programs into a place where everyone can access them.  We make the
# directories if they don't exist yet.
#
prefix := /usr/local

$(phony install): all
        test -d $(prefix) || mkdir $(prefix)
        test -d $(prefix)/bin || mkdir $(prefix)/bin
        for prog in $(PROGRAMS); do \
          install -m 0755 $$prog $(prefix)/bin; \
        done
        test -d $(prefix)/share || mkdir $(prefix)/share
        test -d $(prefix)/share/combobulate || mkdir -p $(prefix)/share/combobulate
        for icon in *.xbm; do \
          install -m 0644 $$icon $(prefix)/share/combobulate; \
        done
# Note the use of the double dollar sign to pass a single dollar sign to
# the shell.  Note also the backslashes at the end of a line to indicate
# that a shell command continues to the next line.

#
# This target gets rid of all the junk that gets built during compiles.
# (Note that this could be done more thoroughly with the only_targets
# function.)
#
$(phony clean):
        rm -f $(PROGRAMS) *.o

#
# This target makes a source distribution for shipping out to someone.
#
VERSION := 3.14

$(phony distribution):
        rm -rf combobulate-$(VERSION)   # Get rid of previous junk, if any.
        mkdir combobulate-$(VERSION)
        cp *.c *.h Makefile README INSTALL combobulate-$(VERSION)
        tar cf - combobulate-$(VERSION) | gzip -9c > combobulate-$(VERSION).tar.gz
        rm -rf combobulate-$(VERSION)

#
# This target runs regression tests to make sure the program(s) are
# doing what they are supposed to do.
#
$(phony test): $(PROGRAMS)
        noecho for testfile in *.test; do \
          ./combobulate $$testfile | ./discombobulate - > junk_output; \
          if cmp -s junk_output $$testfile; then \
            echo passed $$testfile; \
          else \
            echo failed $$testfile; \
          fi; \
        done
#
# If "noecho" is the first word of the action, the action itself is not
# printed before it is executed.  In this case, printing the action
# would merely clutter up the screen so it is very common to suppress
# printing for such long commands.
#

Working with several directories
If your program grows to a substantial size, or if it uses libraries that need to be built but should be kept separate, it is quite likely that you have split up your sources into several directories. One of the main motivations for writing makepp was to make dealing with several directories much easier than with the standard make utility. If you're familiar with the standard unix make, you'll notice that with makepp, you don't have to mess around with ugly complexities like recursive invocations of make.
With makepp, you simply put a separate makefile in each directory that builds the relevant files in that directory. When a makefile refers to files whose build commands are in different makefiles, makepp automatically finds the appropriate build rules in the other makefiles. All actions in each makefile are executed with the current directory set to be the directory containing the makefile, so each makefile can be written independently of all the others. No makefile has to know anything about the other makefiles; it does not even have to tell makepp to load the rules from those other makefiles.
When you've written your makefiles, cd to the directory that contains your main program, and type makepp just like you usually would. Makepp will load in the makefile from that directory. It will notice that this makefile refers to files in other directories, and it will examine those other directories to see if there is a makefile in them. In this way, all relevant makefiles will be loaded.

As a simple example, suppose your top level directory contains the following makefile:
# Top level makefile:
CXX := c++
CXXFLAGS := -O2

my_program: main.o goodies/libgoodies.so
        $(CXX) $(inputs) -o $(output)

%.o: %.cxx
        $(CXX) $(CXXFLAGS) -c $(input) -o $(output)

You would need to write a makefile in the directory goodies which builds libgoodies.so, like this:

# goodies/Makefile
CXX := c++
CXXFLAGS := -O2
MODULES = candy.o chips.o licorice.o cookies.o popcorn.o spinach.o

libgoodies.so: $(MODULES)
        $(CXX) -shared $(inputs) -o $(output)
# Note that the command is written assuming that
# the current directory is the subdirectory
# "goodies", not the top level subdirectory.
# Makepp cds into this directory before executing
# any commands from this makefile.

%.o: %.cxx
        $(CXX) $(CXXFLAGS) -fpic -c $(input) -o $(output)

And that's all you need to do.
Any variables which you specify on the command line override the definition of the variable in all makefiles. Thus, for example, if you type makepp CXXFLAGS="-g", all modules will be recompiled for debug because the definition of CXXFLAGS in both makefiles is overridden.

The directories containing other sources need not be subdirectories of the top-level directory (as they are in this example). They can be anywhere in the file system; makepp will automatically load a makefile from any directory that contains a file which is a dependency of some target it is trying to build. It will also load a makefile from any directory that is scanned by a wildcard.
Automatic loading works if files built by your makefile all reside in the same directory as the makefile itself. If you write your makefile so that its rules produce files in a different directory than the makefile itself, then you might have to tell makepp where to look for the makefiles, since it doesn't have any way of guessing. You can do this using the load_makefile statement in your makefile. For more information about this and other issues related to multi-directory builds, see makepp_cookbook/Tips for multiple directories.

One caveat: if you reference the variable $(MAKE) in your makefile, makepp automatically goes into backward compatibility mode and turns off automatic loading.

Makepp has several other features which make life slightly easier for programmers who have to maintain a program spanning several directories. In the above examples, you'll notice that the definitions of the variables CXX and CXXFLAGS have to be repeated in each makefile. It can be a nuisance to reenter the same information into every makefile, and it could be a problem if you ever decide to change it--you may have to modify dozens of different makefiles.
standard_defs.mk
. Then each makefile simply needs to contain a statement like this:include standard_defs.mkWhen makepp sees this statement, it inserts the contents of the file into the makefile at that point. The
include
statement first looks for the file in the current directory, then in the parent of the current directory, and so on up to the top level of the file system, so you don't actually need to specify../standard_defs.mk
or../../../../standard_defs.mk
.So we could rewrite the above makefiles to look like this.
standard_defs.mk
would exist in the top level directory, and it might contain the following definitions:# standard_defs.mk CXX := c++ CXXFLAGS := -O2 # # We've also included a pattern rule that might be useful in one or more # subdirectories. This pattern rule is for C compilation for putting # things into a shared library (that's what the -fpic option is for). # %.o: %.cxx $(CXX) $(CXXFLAGS) -fpic -c $(input) -o $(output)Note that since the included file is actually inserted into each makefile, rules in the included file are applied with the default directory set to the directory containing the makefile that included the file, not the directory containing the include file.
The top level Makefile might look like this:

    # Top level makefile
    include standard_defs.mk

    my_program: main.o goodies/libgoodies.so
        $(CXX) $(inputs) -o $(output)

    #
    # Note that this pattern rule overrides the one found in standard_defs.mk,
    # because makepp sees it later. This pattern rule is for compilation for
    # a module that doesn't belong in a shared library.
    #
    %.o: %.cxx
        $(CXX) $(CXXFLAGS) -c $(input) -o $(output)

And the subdirectory's makefile might look like this:
    # goodies/Makefile
    include standard_defs.mk

    MODULES = candy.o chips.o licorice.o cookies.o popcorn.o spinach.o

    libgoodies.so: $(MODULES)
        $(CXX) -shared $(inputs) -o $(output)

    # We don't need the pattern rule for compilation of .cxx to .o files,
    # because it's contained in standard_defs.mk.

If you run makepp from within an editor such as emacs, and you are editing sources from several different directories, you may find that the default directory for makepp differs depending on which file you were most recently editing. As a result, makepp may not load the correct makefile.
What you can do to ensure that makepp always loads the correct makefile(s), no matter what directory happens to be your current directory, is to use the -F command line option, like this:

    makepp -F ~/src/my_program

Makepp will first cd to the directory ~/src/my_program before it attempts to load a makefile.

Up until this point, we've had to explicitly list all of the modules that go into a program or a library. The previous makefile, for example, contained this line:
    MODULES = candy.o chips.o licorice.o cookies.o popcorn.o spinach.o

    libgoodies.so: $(MODULES)
        $(CXX) -shared $(inputs) -o $(output)

In this case, listing all of the modules that go into libgoodies.so is not such a big deal since there aren't very many of them. But sometimes it can be a real nuisance to list all of the object files, especially if this list is changing rapidly during development. Frequently, you want every single module in the whole directory to be compiled into your program or library. It would be a lot easier if you could just tell makepp to do that without listing them all.

Well, you can. The above lines could be rewritten as:

    libgoodies.so: *.o
        $(CXX) -shared $(inputs) -o $(output)

The *.o wildcard matches any existing .o files, or any .o files which do not yet exist but can be made by any of the rules that makepp knows about from any makefiles that it has read. So the wildcard will return the same list of files, no matter whether you haven't compiled anything yet, or whether all the modules have been compiled before.

Of course, if you contaminate your directories with extra files that shouldn't be compiled directly into your library (e.g., if you write little test programs and leave them in the same directory as the library source files), then these modules will be incorrectly included in your library. If you choose to use wildcards, it's up to you to keep the directory clean enough.
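If a stray module really must live in the same directory, one hedged possibility is to filter it out by name. This is only a sketch: the file name test_main.o is made up, and it relies on makepp's wildcard function matching files that can be built as well as files that already exist:

    libgoodies.so: $(filter-out test_main.o, $(wildcard *.o))
        $(CXX) -shared $(inputs) -o $(output)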
Makepp supports the usual unix wildcards and one additional one:

- * -- Matches any string of 0 or more characters. It will not match the / character. For example, a*c matches ac, abc, and aaaaabc, but not aa/bc.
- ? -- Matches exactly one character (not including /). For example, ???.o matches all filenames that have 3 characters before the .o extension.
- [...] -- Matches any of a list of characters at that position. For example, [abc].o matches a.o, b.o, c.o, but not abc.o or d.o. You can also specify a range of characters, e.g., data_[0-9] will match data_0, data_1, etc.
- ** -- This is a special wildcard, found only in makepp (and the zsh shell, from which I stole the idea). It matches any number of intervening directories. For example, **/*.o matches xyz.o, test_programs/abc.o, and a/deeply/nested/subdirectory/def.o.

If your sources are contained in several subdirectories, and you want to link all the object modules together, you could write it like this:

    liboodles.so: **/*.o
        $(CXX) -shared $(inputs) -o $(output)

Functions and Advanced Variable Usage
Makepp has a number of extremely powerful ways of manipulating text. This tutorial shows a few of the more useful ways, but you might want to glance through makepp_variables and makepp_functions for a more complete list.
A common problem in makefiles is the maintenance of two lists of files which correspond. Consider the following two variables:

    SOURCES := a.cpp bc.cpp def.cpp
    OBJS := a.o bc.o def.o

We might want to have a list of sources if the makefile can build source distributions, and we might need a list of objects for the link command. It's tedious to change both lines whenever a new module is added, and it's not unlikely that a programmer will change one line and forget to change the other. Here we will show four different ways to avoid the duplication.
- The patsubst function
- The first is to use makepp's functions to convert one list into another. A function invocation looks a little like a variable, except that a function can take arguments:

    $(function arg1 arg2 arg3 ...)

Makepp supplies many powerful functions, but probably the most useful of them is the patsubst function. You could write the above lines like this:

    SOURCES = a.cpp bc.cpp def.cpp
    OBJS = $(patsubst %.cpp, %.o, $(SOURCES))

The patsubst function applies a pattern to every word in a list of words, and performs a simple textual substitution. Any words in the list that match the pattern in the first argument are put into the output after making the substitution indicated by the second argument. The % wildcard matches any string of 0 or more characters. In this example, the pattern %.cpp is applied to every word in $(SOURCES). The first word, a.cpp, matches the pattern, and the % wildcard matches the string a. The % in the second argument is then replaced by a, and the result is a.o. For the second word, % matches bc, so the result is bc.o.

Makepp's functions can strip directory names, remove extensions, filter out matching words, return the output from shell commands, and perform other useful tricks (a few of these are sketched just after this list). In addition, you can also write your own functions in perl that can be called from other parts of the makefile. See makepp_extending for details.
- Substitution references
- Since the patsubst function is so common, there is an abbreviated syntax for it called a substitution reference. We could have written the above lines like this:

    SOURCES = a.cpp bc.cpp def.cpp
    OBJS = $(SOURCES:%.cpp=%.o)

- rc-style substitution
- Sometimes invocations of patsubst or the equivalent substitution references can be somewhat cryptic. Makepp provides another option which is sometimes more convenient: rc-style substitution (so called because it was pioneered by the rc shell).

    MODULES := a bc def
    SOURCES := $(MODULES).cpp
    OBJS := $(MODULES).o

What happened here is that when it evaluated $(MODULES).cpp, makepp appended .cpp to every word in $(MODULES), and similarly for $(MODULES).o. In general, any characters preceding the $(variable) (up to a word delimiter) are placed before each word in $(variable), and any characters following $(variable) are placed after each word in $(variable). Thus the result of evaluating x$(MODULES)y would be xay xbcy xdefy.

- Inline perl code
- If you know perl, you can insert perl code to perform arbitrary manipulations on variables into your makefile. This is best illustrated by an example:
    SOURCES := a.cpp bc.cpp def.cpp

    perl_begin
    ($OBJS = $SOURCES) =~ s/\.cpp/.o/g;
    perl_end

Any text between the perl_begin statement and the perl_end statement is passed off to the perl interpreter. All variables in the makefile (except automatic variables) are accessible as perl scalars. Any variables you set with perl code will be accessible in the makefile.

So what the above example does is to copy the text from $SOURCES to $OBJS, then substitute each occurrence of .cpp with .o.

In this example, using inline perl code is probably unnecessary since there are easier and clearer ways of doing the same manipulation. But the full power of the perl interpreter is available if you need it.
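As a small, hedged illustration of a few of the text functions mentioned above: dir, notdir, filter-out and shell are GNU-make-style functions that makepp also understands, while the file names below are made up for the example:

    SOURCES := src/a.cpp src/bc.cpp test/scratch.cpp

    SRCDIRS := $(dir $(SOURCES))                 # "src/ src/ test/"
    NAMES   := $(notdir $(SOURCES))              # "a.cpp bc.cpp scratch.cpp"
    LIBSRC  := $(filter-out test/%, $(SOURCES))  # drops the test file
    TODAY   := $(shell date +%Y-%m-%d)           # output of a shell command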
Source/Object Separation and Variant Builds
Up to this point all of the makefiles we have seen put the object files in the same directory as the source files. This is usually the way makefiles are written, and it's certainly the simplest way to do things. However, suppose you have to compile your program on both a linux machine and a Solaris machine. The binaries from the two machines are incompatible, of course. Unlike the traditional make, makepp is smart enough to know that if the last compilation was on linux, and the current compilation is on Solaris, a recompilation of everything is necessary.
But this still leaves a problem: when you recompile on Solaris, you wipe out your linux binaries. Then when you switch back to linux, you have to recompile everything again, even the source files that haven't changed.
A related problem is if you build your program with several different options. Suppose for example that you usually compile your program with optimization:
    CFLAGS := -O2

    %.o: %.c
        $(CC) $(CFLAGS) -c $(input) -o $(output)

    my_program: *.o
        $(CC) $(inputs) -o $(output)

However, you discover a bug, and you want to enable debugging on all files, so you change CFLAGS:

    CFLAGS := -g -DMALLOC_DEBUG

Makepp realizes that the build commands have changed, and it needs to recompile everything. But again, recompiling with debugging enabled wipes out your old binaries, so if you want to turn optimization back on, everything must be recompiled again, even the files that haven't changed.
The obvious solution to these problems is to put the architecture-dependent or build-variant-dependent files in a separate subdirectory. There are two basic techniques for doing this: explicitly specifying an alternate directory, or using repositories.
Explicit specifications of alternate directories
You could rewrite the rules in your makefile to dump the objects into a different directory, like this:
    ARCH := $(shell uname -m)   # ARCH becomes the output from the uname -m command.
    CFLAGS := -O2
    OBJDIR := $(ARCH)-optim

    $(OBJDIR)/%.o: %.c
        $(CC) $(CFLAGS) -c $(input) -o $(output)

    $(OBJDIR)/my_program: $(OBJDIR)/*.o
        $(CC) $(inputs) -o $(output)

Now when you run makepp, ARCH is automatically set to something different for each architecture, and all of the objects are placed in a different directory for each architecture, so they don't overwrite each other. If you want to recompile with debugging turned on, then you would have to change both CFLAGS and OBJDIR.

One problem with this approach is that implicit loading will no longer work. The only place that makepp knows to look for a makefile when it needs to build something is in the directory of the file it's trying to build. If this is a problem for you, then you can explicitly tell makepp where to look using the load_makefile statement.

Repositories
Repositories are a magical way of using a makefile that is written to put objects in the same directory, but having makepp automatically put the objects in a different directory. Suppose we start with the original makefile above (before we modified it to put the objects in a different directory), and we've been working on linux so our source directory is filled with linux binaries. When we want to recompile our code on solaris instead of linux, we use the following commands instead of just typing makepp:

    % mkdir solaris
    % cd solaris
    % makepp -R ..

What the -R option to makepp does in this case is to declare the directory .. (which is the original source directory) as a repository. A repository is just a way of getting makepp to trick all of the actions into believing that all files in one directory tree are actually located in a different directory tree in the file system. In the above example, makepp pretends that all the files in .. (and all subdirectories of ..) are actually in the current directory (and corresponding subdirectories).

More precisely, a repository is a place where makepp looks if it needs to find a file that doesn't exist in the current directory tree. If the file exists in the current directory tree, it is used; if it doesn't exist, but a file exists in the repository, makepp makes a temporary symbolic link from the file in the repository to the current directory. (A symbolic link is an alias for the original file. It's like a copy, except that trying to access the link actually accesses the original file.) The rule actions then act on the file in the current directory, but actually reference the files in the repository.
In this example, initially we start off with a blank new directory solaris. (It doesn't have to be blank, of course, and it won't be the second time you run makepp.) Makepp is run in this directory, and it sees that there is no makefile there. However, there is a makefile in the repository, so it links in the one from the repository and reads it. The pattern rule in the makefile that converts .c files into .o files causes makepp to link all the .c files that it needs from the repository, and run the compilation command from the solaris subdirectory. Therefore the .o files are now placed into the solaris subdirectory, not in the top level directory. When the build command is finished, any files linked from the repository are deleted, so the solaris subdirectory will contain only the binary files for Solaris. Any .o files that exist in the repository are unmodified, so when you go back to your linux machine and rerun makepp, most of your program will not have to be recompiled.

Sometimes it might be more convenient to use a different form of the repository command. The above three shell commands could be entirely replaced by the following one command:
    % makepp -R solaris=. -F solaris

What this does is to say that the files in the current directory are to be linked into the solaris subdirectory as necessary. (The solaris subdirectory will be created automatically if it does not exist.) Then the -F option causes makepp to cd to the solaris directory and execute the makefile there (which will be linked from the repository).

Using a repository does not have the same drawbacks as explicitly specifying an object directory; makefiles will be implicitly loaded as expected, since as far as makepp is concerned, the makefile actually is in the same directory as the target files. However, if your build involves not just one but several directory trees, using repositories can become quite complicated.
Repositories are just a way of pretending that things located at one place in the file system are actually in a different place for the duration of the build. This is a very powerful technique that can be used for more than just separating your sources and binaries. For more details, see makepp_repositories.
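As one hedged illustration of that flexibility, the same mechanism can separate build variants rather than machine architectures. Assuming the makefile from the earlier example (the one driven by CFLAGS), something along these lines should keep a debug tree and an optimized tree side by side; the directory names are made up, and makepp creates them automatically:

    makepp -R debug=. -F debug CFLAGS="-g -DMALLOC_DEBUG"
    makepp -R optimized=. -F optimized CFLAGS="-O2"

Each invocation links the sources (and the makefile) from the current directory into its own subdirectory and leaves the corresponding objects there.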
If you have a complicated build procedure, you may find that makepp is rebuilding things more often than you think they need to be rebuilt. Or you may find that it is not rebuilding things when it should. You don't have to keep staring at your makefiles until you see the problem. On every build, makepp produces a log file that explains which rule it thought it was supposed to use to build each target, what files it thought each target depended on, and (if it did decide to rebuild) why it thought a rebuild was necessary. This file is called .makepp_log and it is placed in the directory you actually ran makepp from. (Of course, the filename has a leading period, which means that you won't see it there unless you specifically look for it. It's designed to be unobtrusive.)

The log file's format is more or less self-explanatory. Indentation in the log file conveys depth in makepp's inference tree. Suppose the target is all, and all depends on my_program, and my_program depends on *.o, which depend on the corresponding .c files. Log messages related to all will not be indented, log messages related to building the target my_program will be indented two spaces, log messages related to building any of the object files will be indented four spaces, and log messages related to building any of the source files will be indented six spaces.

If you're doing a parallel make (using the -j command line option), the order of the messages in the log file will not make nearly as much sense, since messages from different targets will be interspersed. You might try debugging a serial make first.
- Not specifying all dependencies
- Makepp is designed to be extremely clever about finding dependencies, and if you just use a standard unix C or C++ compiler command, it is actually somewhat difficult to get makepp to miss something. (Please send me examples if you find that it missed something, so I can make makepp smarter.) However, if you are running commands other than compilation, or dealing with languages other than C or C++, it is much easier to run into problems.
If you don't tell makepp all of the dependencies of a file, and it can't infer them by looking at the command line or scanning the files for includes, then it may not rebuild a file when it should. You can make this kind of error less likely by using only automatic variables in your actions, rather than repeating the dependency lists. For example,
    combined_file: a b c
        do_something a b c d > combined_file

has an error because d is mentioned in the action but not in the dependency list. If the command had been written using automatic variables, like this:

    combined_file: a b c d
        do_something $(inputs) > combined_file

then it would have been impossible to make this mistake.
Another way that a missing dependency can occur is if a program actually uses a file but doesn't take the file's name on the command line. For example, if you're compiling Fortran code, makepp at the moment doesn't know how to scan for included files. Thus you must explicitly list any files that are included.
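For example, here is a minimal sketch of spelling out such a hidden dependency by hand. The file names and the FC variable are made up; the point is simply that the INCLUDEd file appears in the dependency list even though it never appears on the command line:

    # main.f90 contains an INCLUDE 'params.inc' statement that makepp
    # cannot discover by scanning, so params.inc is listed explicitly.
    main.o: main.f90 params.inc
        $(FC) -c main.f90 -o main.o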
One thing that is sometimes helpful for testing is to start with a completely clean directory -- just the bare minimum you think should be necessary -- and rebuild absolutely everything from scratch. This can be most conveniently done by using repositories, like this:

    rm -rf test-build-dir
    makepp -R test-build-dir=. -F test-build-dir

If the build fails because some file is not present, it means that makepp didn't realize that file was a dependency, because it only links files from the repository that it thought were needed. Performing this test occasionally may save hours of debugging later. I have worked on projects where this was never done for months because recompilation took so long. As a result, many little problems crept in. There were some object files that didn't have source files any more, some source files that were never properly rebuilt by a preprocessing command, etc.
Of course, this won't catch all missing dependencies, but it will catch some of them.
- Not specifying all targets
- You must specify all files that a given command modifies as targets, or else makepp may not realize that they have changed. You can specify more than one target. For example:

    y.tab.h y.tab.c: parse.y
        yacc -d parse.y

If you had forgotten to specify y.tab.h as a target, then makepp would not know to rebuild y.tab.h using this command, and files that depend on y.tab.h might not be recompiled after the yacc command is run.
Please suggest things that you have found confusing or dangerous, and I'll either note them or try to fix makepp so they aren't a danger any more.
March 16, 2011 | blogs.perl.org
Following up Stupid Unix Tricks: Workflow Control with GNU Make -- this trick works on any platform with a make(1) program, including Windows, QNX, VMS, and z/OS.
It also serves to de-couple dependency checking and the workflow execution engine from the rest of your program (with the caveat that your program may need to interpret the output from make(1).)
About: remake is a patched and modernized version of the GNU make utility that adds improved error reporting, the ability to trace execution in a comprehensible way, and a debugger. The debugger lets you set breakpoints on targets, show and set variables in expanded or unexpanded form, inspect target descriptions, see the target call stack, and even execute arbitrary GNU make fragments (e.g. add a dependency to an existing target).
Changes: Changes have been made to bring this up to GNU Make release 3.81. This is alpha code.
About: Yruba provides a rule system similar to make or ant for the shell (bash). It provides a clear separation between a list of dependencies that must be up-to-date before the current task can be performed, an explicit test that checks whether the target is really out-of-date, and a command that finally makes the target. Everything is plain bash syntax, so there is no new command language to learn.
"Most UNIX and Linux programs are built by running make. The make utility reads a file (generally named either 'makefile' or 'Makefile,' but hereafter merely referred to as 'a makefile') that contains instructions and performs various actions to build a program. In many build processes, the makefile is itself generated entirely by other software; for instance, the autoconf/automake programs are used to develop build routines. Other programs may ask you to directly edit a makefile, and of course, new development may require you to write one.
"The phrase 'the make utility' is misleading..."
See also http://makeashorterlink.com/?S12961445
Abstract: In a programming project, it is easy to lose track of which files need to be reprocessed or recompiled after a change is made in some part of the source. Make provides a simple mechanism for maintaining up-to-date versions of programs that result from many operations on a number of files. It is possible to tell Make the sequence of commands that create certain files, and the list of files that require other files to be current before the operations can be done. Whenever a change is made in any... (Update)
... <number-of-targets> * <number-of-platforms> to just <number-of-targets> + <number-of-platforms>.

The language of GNU make is indeed functional, complete with combinators (map and filter), applications and anonymous abstractions. GNU make does support lambda-abstractions. The following is one example from the Makefile in question: it is a rule to build a test target for the SCM Scheme system. The list of source code files and the name of the target/root-test-file are passed as two arguments of the rule:
    make-scmi= scm -b -l $(LIBDIR)/myenv-scm.scm \
        $(foreach file,$(1),-l $(LIBDIR)/$(file)) \
        -l $(2).scm

The rule returns an OS command to interpret or compile the target. The rule can be invoked as $(call make-scmi,util.scm catch-error.scm,vmyenv).

As in TeX, the arguments of a function are numbered (it is possible to assign them meaningful symbolic names, too). Makefile's foreach corresponds to Scheme's map. The comparison with the corresponding Scheme code is striking:

    (define make-scmi
      (lambda (arg1 arg2)
        `(scm -b -l ,(mks LIBDIR '/ 'myenv-scm.scm)
           ,@(map (lambda (file) `(-l ,(mks LIBDIR '/ file))) arg1)
           -l ,(mks arg2 '.scm))))
- Version
- The current version is 4.6, Oct 30, 2003.
- References
- SSAX updates; Makefile as a functional program [plain text file]
A message describing the motivation for the functional Makefile to automate regression testing, and a few examples. The message was posted on the SSAX-SXML mailing list on Mon, 18 Nov 2002 12:48:41 -0800
...A Make rule is composed of:
    target: prerequisites
        commands
A target is considered "up to date" if it exists and is newer than its prerequisites.
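The example makefile that this walkthrough refers to is not reproduced in this excerpt. A minimal makefile consistent with the description below (and with the later examples on this page) might look roughly like this; the exact original may differ:

    sample: main.o example.o
        cc -o sample main.o example.o

    main.o: main.c main.h
        cc -c main.c

    example.o: example.c defs.h
        cc -c example.c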
Make works backwards, starting with the target of the first rule in the file. In our example, that's sample. Make checks the prerequisites for sample -- main.o and example.o -- to see if they have rules. If they do, it recursively checks their rules.

Make walks down the recursion chain until it finds a target that has no prerequisites, or whose prerequisites have no rules. Once it hits one of those, it walks back up its recursion chain and runs commands as necessary. It creates a recursion chain for every prerequisite it encounters that has a rule.
Once all of the prerequisite rules have been run, it eventually returns to sample's rule. If the file doesn't exist, or is older than its prerequisites now are (after their rules have been recursively tested), it runs the commands to generate sample.

In the example makefile, Make:
- Runs the first rule it sees -- sample.
- Checks to see whether sample's prerequisites have rules. They do.
- Runs the rule for the first prerequisite -- main.o.
- Checks to see whether main.o's prerequisites have rules. They don't.
- Checks whether main.o is up to date. If not, it runs the commands for main.o.
- Runs the rule for the second prerequisite -- example.o.
- Checks to see whether example.o's prerequisites have rules. They don't.
- Checks whether or not example.o is up to date. If not, it runs the commands for example.o.
- Returns to sample's rule.
- Checks whether or not sample is up to date. If not, it runs the commands to update it.

Make can run the prerequisites in any order. The important part of this sequence is that it runs recursively backwards from the first target (or the target named in the command parameters), and tests only the rules that it encounters in the prerequisites chain.
Make aborts compilation if it receives an error. This is usually useful behavior -- it lets you correct compiler-detected problems during a compile-and-test cycle. The option -i tells Make to ignore errors.

In software development, it's very convenient to create a script to remove old compiled code so that the next build recompiles everything. It's also convenient to have a script for installing the code. Make allows scripts like this to be included in the makefile, as phony targets. Phony targets may have prerequisites, and may themselves be prerequisites.
The special rule .PHONY tells Make which targets are not files. This avoids conflict with files of the same name, and improves performance.

If a phony target is included as a prerequisite for another target, it will be run every time that other target is required. Phony targets are never up-to-date.
To run a phony target from the command line, call Make with the name of the phony target, e.g.: make clean.

    # Naming our phony targets
    .PHONY: clean install

    # Removing the executable and the object files
    clean:
        rm sample main.o example.o
        echo clean: make complete

    # Installing the final product
    install:
        cp sample /usr/local
        echo install: make complete
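Related to the -i option mentioned earlier: a leading minus sign on an individual command tells Make to ignore a failure of just that command. That can be handy in a clean rule, where rm fails if the files have already been removed. A small sketch:

    clean:
        -rm sample main.o example.o
        echo clean: make complete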
Makefile Variables
As a project gets larger, more files are usually added. If you repeat a list of files, you can accidentally leave files out of the list. It's simpler to make use of a variable that expands into the list.
The syntax for declaring and setting a makefile variable is varname = variable contents. To call the variable, use $(varname).

    # Defining the object files
    objects = main.o example.o

    # Linking object files
    sample: $(objects)
        cc -o sample $(objects)
        echo sample: make complete

    # Compiling source files
    main.o: main.c main.h
        cc -c main.c

    example.o: example.c defs.h
        cc -c example.c

    # Removing the executable and the object files
    clean:
        rm sample $(objects)
        echo clean: make complete
Final Touches
There are a few touches which make the difference between a usable and a professional makefile. The next example adds those extra touches.
    # 1
    # Defining the compiler:
    CC=gcc

    # Defining the object files:
    objects = main.o example.o

    # 2
    # The default rule - compiling our main program:
    all: sample
        echo all: make complete

    # 3
    sample: $(objects)
    # If we get here, all the dependencies are now built.
    # Link it:
        $(CC) -o $@ $+

    # 4
    # Tell make how to build .o files from .c files:
    %.o:%.c
        $(CC) -c $+

    # 5
    # Now make sure that make rebuilds files if included headers change:
    main.o: main.h defs.h
    example.o: example.h defs.h
- Use a variable for the compiler, in case you want to use the same makefile with a different compiler.
- When called without a target on the command line, Make runs the first rule it encounters. It is more human-readable to explicitly state your first rule. all is a common name for a first rule.
- The automatic variable $@ means "the name of the target." The automatic variable $+ means "all prerequisites, space-separated." Automatic variables are pre-defined in Make.
- A pattern rule tells make how to convert the prerequisite to the target. The % in the pattern means "one or more characters," and refers to the same string in the prerequisite and the target. This particular pattern tells make how to convert a *.c file to a *.o file of the same name. The automatic variable $+ means "all prerequisites, space-separated."
- These rules are relying on "implicit rules." Make has a built-in pattern for building a *.o file from the corresponding *.c file; these dependency-only lines are included to declare the header prerequisites for the relevant *.o files.
themestream.com: Inside the make command
(Sep 2, 2000, 18:39 UTC) (Posted by john)
The make program is a very flexible and powerful tool. Most of us have some familiarity with it from using it in the compilation process of the Linux kernel and other source code that, for whatever reason, we choose to compile rather than install as a binary. But what is the make command, and what else can it do besides compile programs? The canonical documentation (authoritative reference) for the make command is the GNU documentation. If you do decide to read through all of that, you will know a lot about make and probably be able to imagine various situations in which it could be used productively. Here are the ideas that I had for using make after reading the docs.
Rebuilding PostScript files from TeX/LaTeX files
I don't know what your specific needs are, but when I create a TeX or LaTeX file, I don't want to have to issue a bunch of commands to get printable output; yet that is exactly what you have to do if you are working from the command line. Let's take a look at a simple example designed to relieve me of this duty:
    letter.ps : letter.tex
        latex letter.tex
        dvips -o letter.ps letter.dvi

    print : letter.ps
        /usr/bin/gs -q -dSAFER -r360x180 -sDEVICE=epson -dNOPAUSE -sOutputFile=\|lpr letter.ps

    view : letter.ps
        gv letter.ps
So what does this do? Well, let's say I've created a latex file. What I want to do is print it. I don't want to have to do anything else except print it. Of course, it is a good idea to preview things first, so that is why I included the "view" rule. The above file would be called "Makefile", and to run it we would need to make sure the letter.tex file is in the current directory, along with Makefile. Let's run it first with the view option:
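Assuming letter.tex and the Makefile above are both in the current directory, the session might look roughly like this (make echoes each command before running it, so the exact output also depends on what latex, dvips and gv themselves print):

    $ make view
    latex letter.tex
    dvips -o letter.ps letter.dvi
    gv letter.ps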
it is the AT&T Bell Labs next-generation make that is way better than standard UNIX make: parallel builds, include file scanning, coshell (instead of fork/exec for every shell command), distributed build support, compiled makefiles, state tracking from one build to the next, etc., etc., etc. When originally developed, it helped cut build times for the AT&T 5ESS switch from 3 days down to
In the first of these articles I showed a technique for printing the value of any Makefile macro by defining a special rule called print-%. Now I'm going to show how to trace where a macro is used in a Makefile.

Consider this simple Makefile:
    X=$(YS) hate $(ZS)
    Y=dog
    YS=$(Y)$(S)
    Z=cat
    ZS=$(Z)$(S)
    S=s

    all: $(YS) $(ZS)
        @echo $(X)

    $(YS):
        @echo $(Y) $(Y)

    $(ZS):
        @echo $(Z) $(Z)

When run it prints:

    dog dog
    cat cat
    dogs hate cats
Tracing Macro Use
Now try to trace through and see where the macro $(Y) is used. It's actually used on lines 8, 9, 11, and 12 (twice). It's amazing how often macros get used! That's because Make defaults to only getting the value of a macro when needed, and macros are frequently deeply nested.

Tracing such use for any real Makefile would be an impossible task, but it's possible to get Make to do the work for you. Take a look at the code which should be added to the start of the Makefile to be traced (it'll only get used when explicitly called).
    ifdef TRACE
    .PHONY: _trace _value
    _trace: ; @$(MAKE) --no-print-directory TRACE= $(TRACE)='$$(warning TRACE $(TRACE))$(shell $(MAKE) TRACE=$(TRACE) _value)'
    _value: ; @echo '$(value $(TRACE))'
    endif
Before diving into understanding how it works, here's an example of using it to trace the value of $(Y) in our example Makefile. To use the tracer you tell GNU Make to run the trace target by setting the TRACE macro to the name of the macro you wanted tracked. In this example we want to watch use of the macro Y:
    % gmake TRACE=Y
    Makefile:8: TRACE Y
    Makefile:11: TRACE Y
    Makefile:12: TRACE Y
    Makefile:12: TRACE Y
    dog dog
    cat cat
    Makefile:9: TRACE Y
    dogs hate cats
From the lines containing the word TRACE you can see Y being used first on line 8 (the definition of the all target references Y via $(YS)), then on line 11 (the definition of the dogs target uses $(YS), which uses Y), then twice on line 12 (the two references to $(Y) itself as we execute the rule), and finally on line 9 ($(X) references $(YS), which references $(Y)).
With the power of the tracer we can try another task: finding out where $(S) is used:
    % gmake TRACE=S
    Makefile:8: TRACE S
    Makefile:8: TRACE S
    Makefile:11: TRACE S
    Makefile:14: TRACE S
    dog dog
    cat cat
    Makefile:9: TRACE S
    Makefile:9: TRACE S
    dogs hate cats
Make - Wikipedia, the free encyclopedia
Introduction to the make Utility
GNU Make - GNU Project - Free Software Foundation (FSF)
GNU make Table of Contents - GNU Project - Free Software Foundation (FSF)
oreilly.com Managing Projects with GNU Make, Third Edition
Alternatives:
A Case For Make - http://citeseer.ist.psu.edu/fowler90case.html
Fowler 1990 - Explains many old-make limitations and new-make (Nmake) features including procedure rules, accuracy mechanisms, viewpathing, and semaphores for blocking unwanted parallelism.
An Automatic Make Facility - http://citeseer.ist.psu.edu/holyer00automatic.html
Holyer and Pehlivan 2000 - Program uses no makefile. It records manually-issued compilation commands the first time round, then rebuilds programs using recorded command traces.
Compare and Contrast Lucent Nmake and GNU Make - http://www.bell-labs.com/project/nmake/faq/gmake.html
Lucent FAQ - Summarizes the function and typical syntax of many make features, using a convenient table format.
Recursive Make Considered Harmful by Peter Miller, 1997
For large UNIX projects, the traditional method of building the project is to use recursive make. On some projects, this results in build times which are unacceptably large, when all you want to do is change one file. In examining the source of the overly long build times, it became evident that a number of apparently unrelated problems combine to produce the delay, but on analysis all have the same root cause.

This paper explores a number of problems regarding the use of recursive make, and shows that they are all symptoms of the same problem. Symptoms that the UNIX community have long accepted as a fact of life, but which need not be endured any longer. These problems include recursive makes which take ``forever'' to work out that they need to do nothing, recursive makes which do too much, or too little, recursive makes which are overly sensitive to changes in the source code and require constant Makefile intervention to keep them working.
The resolution of these problems can be found by looking at what make does, from first principles, and then analyzing the effects of introducing recursive make to this activity. The analysis shows that the problem stems from the artificial partitioning of the build into separate subsets. This, in turn, leads to the symptoms described. To avoid the symptoms, it is only necessary to avoid the separation; to use a single Makefile for the whole project.
This conclusion runs counter to much accumulated folk wisdom in building large projects on UNIX. Some of the main objections raised by this folk wisdom are examined and shown to be unfounded. The results of actual use are far more encouraging, with routine development performance improvements significantly faster than intuition may indicate. The use of a single project Makefile is not as difficult to put into practice as it may first appear.
Make reads its instructions from text files. An initialization file is read first, followed by the makefile. The initialization file holds instructions for all "makes" and is used to customize the operation of Make. Make automatically reads the initialization file whenever it starts up. Typically the initialization file is named make.ini and it resides in the directory of make.exe and mkmf.exe. The name and location of the initialization file is discussed in detail on Page .
The makefile has instructions for a specific project. The default name of the makefile is literally makefile, but the name can be specified with a command-line option.
With a few exceptions, the initialization file holds the same kind of information as does a makefile. Both the initialization file and the makefile are composed of the following components: comments, dependency lines, directives, macros, response files, rules and shell lines.
C Programming Tutorial Make and Makefiles
Creating Makefiles A Mini Tutorial LG #83
oreilly.com Managing Projects with GNU Make, Third Edition
FreeBSD.org/Writing and Debugging a Makefile
The make utility executes a list of shell commands associated with each target, typically to create or update a file of the same name. makefile contains entries that describe how to bring a target up to date with respect to those on which it depends, which are called dependencies.
SYNTAX
/usr/ccs/bin/make [ -d ] [ -dd ] [ -D ] [ -DD ] [ -e ] [ -i ] [ -k ] [ -n ] [ -p ] [ -P ] [ -q ] [ -r ] [ -s ] [ -S ] [ -t ] [ -V ] [ -f makefile ] ... [ -K statefile ] ... [ target ... ] [ macro = value ... ]
/usr/xpg4/bin/make [ -d ] [ -dd ] [ -D ] [ -DD ] [ -e ] [ -i ] [ -k ] [ -n ] [ -p ] [ -P ] [ -q ] [ -r ] [ -s ] [ -S ] [ -t ] [ -V ] [ -f makefile ] ... [ target ... ] [ macro = value ... ]
-d | Displays the reasons why make chooses to rebuild a target; make displays any and all dependencies that are newer. In addition, make displays options read in from the MAKEFLAGS environment variable. |
-dd | Displays the dependency check and processing in vast detail. |
-D | Displays the text of the makefiles read in. |
-DD | Displays the text of the makefiles, make.rules file, the state file, and all hidden-dependency reports. |
-e | Environment variables override assignments within makefiles. |
-i | Ignores error codes returned by commands. Equivalent to the special-function target `.IGNORE:'. |
-k | When a nonzero error status is returned by a rule, or when make cannot find a rule, abandons work on the current target, but continues with other dependency branches that do not depend on it. |
-n | No execution mode. Prints commands, but does not execute them. Even lines beginning with an @ are printed. However, if a command line contains a reference to the $(MAKE) macro, that line is always executed (see the discussion of MAKEFLAGS in ). When in POSIX mode, lines beginning with a "+" are executed. |
-p | Prints out the complete set of macro definitions and target descriptions. |
-P | Merely reports dependencies, rather than building them. |
-q | Question mode. make returns a zero or nonzero status code depending on whether or not the target file is up to date. When in POSIX mode, lines beginning with a "+" are executed. |
-r | Does not read in the default makefile /usr/share/lib/make/make.rules. |
-s | Silent mode. Does not print command lines before executing them. Equivalent to the special-function target .SILENT:. |
-S | Undoes the effect of the -k option. Stops processing when a non-zero exit status is returned by a command. |
-t | Touches the target files (bringing them up to date) rather than performing their rules. This can be dangerous when files are maintained by more than one person. When the .KEEP_STATE: target appears in the makefile, this option updates the state file just as if the rules had been performed. When in POSIX mode, lines beginning with a "+" are executed. |
-V | Puts make into SysV mode. Refer to sysV-make for respective details. |
-f makefile | Uses the description file makefile. A `-' as the makefile argument denotes the standard input. The contents of makefile, when present, override the standard set of implicit rules and predefined macros. When more than one `-f makefile' argument pair appears, make uses the concatenation of those files, in order of appearance. When no makefile is specified, /usr/ccs/bin/make tries the following in sequence, except when in POSIX mode (see the .POSIX in the section below): ... When no makefile is specified, /usr/ccs/bin/make in POSIX mode and /usr/xpg4/bin/make try the following files in sequence: ... |
-K statefile | Uses the state file statefile. A `-' as the statefile argument denotes the standard input. The contents of statefile, when present, override the standard set of implicit rules and predefined macros. When more than one `-K statefile' argument pair appears, make uses the concatenation of those files, in order of appearance. (See also .KEEP_STATE and .KEEP_STATE_FILE in the section). |
target | Target names, as defined in . |
macro = value | Macro definition. This definition overrides any regular definition for the specified macro within the makefile itself, or in the environment. However, this definition can still be overridden by conditional macro assignments. |