On Unix platforms the standard make utility is a build tool that uses Bourne-shell syntax in its commands for compiling and linking source code. While the basic idea is simple (recompile a file when its source timestamp is newer than the timestamp on the corresponding object or executable file), the details are complex because there can be interdependencies between the programs. Make also provides several conventional pseudo-targets, including make clean and make install (see below).
The dominant version of make is GNU make, sometimes called gmake. There is also a different, somewhat improved implementation from Bell Labs called nmake.
|
Windows compilers' "project files" are generally equivalent to Makefiles. In fact, most commercial C compiler IDEs contain something like a built-in make driven by such "project files". (If porting to a Unix platform, you might want to disentangle yourself from the non-portability and awkward licensing issues involved with these "project files", though.)
Makefile creation has long traditions in the Unix environment. A makefile usually provides several preconfigured targets:
make          # use the default makefile (makefile or Makefile) and build the first target in it
make clean    # remove object files and other generated files
make install  # install the package into the target directories
clean and install are called phony targets and are discussed later in the section on dependency rules.
Of course, a make utility can simply be written in a scripting language. In such cases the syntax of the input Makefile is also somewhat "arbitrary", meaning that it need not follow the command syntax of the host language interpreter (lex and yacc are helpful here but need not play a role). In fact, Nick Ing-Simmons has written a make-like utility entirely in Perl; it is available from CPAN.
perl Makefile.PL
Again, the make utility is a tool originally created for compiling computer programs, but it can be used for various other tasks, such as installing packages.
Make is controlled by the makefile, which is written in a special mini-language that consists of rules. By default the makefile is a file in the current directory named Makefile or makefile. A rule in the makefile tells make how to execute a series of commands in order to build a target file from source files. It also specifies a list of dependencies of the target file. This list should include all files (whether source files or other targets) which are used as inputs to the commands in the rule. A simple rule has the following syntax:
target : dependencies ...
	commands ...
Note that commands are arbitrary shell commands. When you run make, you can specify particular targets to update; otherwise, make updates the first target listed in the makefile. Of course, any other target files needed as input for generating these targets must be updated first.
Make uses the makefile to figure out which target files ought to be brought up to date, and then determines which of them actually need to be updated. If a target file is newer than all of its dependencies, then it is already up to date, and it does not need to be regenerated. The other target files do need to be updated, but in the right order: each target file must be regenerated before it is used in regenerating other targets.
Make goes through a makefile starting with the target it is going to create. make looks at each of the target's dependencies to see if they are also listed as targets. It follows the chain of dependencies until it reaches the end of the chain and then begins backing out executing the commands found in each target's rule. Actually every file in the chain may not need to be compiled. Make looks at the time stamp for each file in the chain and compiles from the point that is required to bring every file in the chain up to date. If any file is missing it is updated if possible.
Make builds object files from the source files and then links the object files to create the executable. If a source file is changed only its object file needs to be compiled and then linked into the executable instead of recompiling all the source files.
This is an example makefile to build an executable file called prog1. It requires the source files file1.cc, file2.cc, and file3.cc. An include file, mydefs.h, is required by files file1.cc and file2.cc. If you wanted to compile this program from the command line using C++, the command would be
% CC -o prog1 file1.cc file2.cc file3.cc
This command line is rather long to be entered many times as a program is developed, and it is prone to typing errors. A makefile can run the same command with the much simpler
% make prog1
or if prog1 is the first target defined in the makefile
% make
This first example makefile is much longer than necessary but is useful for describing what is going on.
prog1 : file1.o file2.o file3.o
	CC -o prog1 file1.o file2.o file3.o
file1.o : file1.cc mydefs.h
	CC -c file1.cc
file2.o : file2.cc mydefs.h
	CC -c file2.cc
file3.o : file3.cc
	CC -c file3.cc
clean :
	rm file1.o file2.o file3.o
Let's go through the example to see what make does by executing with the command make prog1 and assuming the program has never been compiled.
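The walk-through above can be reproduced with a tiny self-contained sketch. Note this is not the C++ example itself: the file names are made up, and cp/cat stand in for the compiler and linker so it runs anywhere make and a POSIX shell are available.

```shell
# Sketch of make's timestamp logic, with cp/cat in place of a compiler.
set -e
dir=$(mktemp -d) && cd "$dir"

printf 'hello'  > file1.src
printf ' world' > file2.src

# Each command line below starts with a literal tab, as make requires.
cat > Makefile <<'EOF'
prog1 : file1.o file2.o
	cat file1.o file2.o > prog1
file1.o : file1.src
	cp file1.src file1.o
file2.o : file2.src
	cp file2.src file2.o
EOF

make prog1          # first run: builds file1.o, file2.o, then prog1
make prog1          # second run: everything is up to date, nothing is rebuilt
touch file1.src     # pretend we edited one source file
make prog1          # rebuilds only file1.o, then relinks prog1
cat prog1           # prints: hello world
```

Running it shows the chain being walked backward: only the out-of-date files are regenerated on the third invocation.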
This example can be simplified somewhat by defining macros. Macros are useful for replacing duplicate entries. The object files in this example were used three times, so creating a macro saves a little typing. More importantly, if the objects change, the makefile can be updated by just changing the object definition.
OBJS = file1.o file2.o file3.o
prog1 : $(OBJS)
	CC -o prog1 $(OBJS)
file1.o : file1.cc mydefs.h
	CC -c file1.cc
file2.o : file2.cc mydefs.h
	CC -c file2.cc
file3.o : file3.cc
	CC -c file3.cc
clean :
	rm $(OBJS)
This makefile is still longer than necessary and can be shortened by letting make use its internal macros, special macros, and suffix rules.
OBJS = file1.o file2.o file3.o
prog1 : ${OBJS}
	${CXX} -o $@ ${OBJS}
file1.o file2.o : mydefs.h
clean :
	rm ${OBJS}
Make is invoked from a command line with the following format
make [-f makefile] [-bBdeiknpqrsSt] [macro name=value] [names]
However, of this vast array of possible options, only -f makefile and names are used frequently. The table below shows the results of executing make with these options.
Command | Result |
make | use the default makefile, build the first target in the file |
make myprog | use the default makefile, build the target myprog |
make -f mymakefile | use the file mymakefile as the makefile, build the first target in the file |
make -f mymakefile myprog | use the file mymakefile as the makefile, build the target myprog |
To operate, make needs to know the relationship between your program's component files and the commands to update each file. This information is contained in a makefile that you must write, called Makefile or makefile. By default, when invoked without parameters, make searches the current working directory for one of these two files (makefile first, then Makefile) and uses the first one found.
Hint:
Comments can be entered in the makefile following a pound sign ( # ) and the remainder of the line will be ignored by make. If multiple lines are needed each line must begin with the pound sign.
# This is a comment line
A rule consists of three parts: one or more targets, zero or more dependencies, and zero or more commands, in the following form:
target1 [target2 ...] :[:] [dependency1 ...] [; commands] [<tab> command]
Note: each command line must begin with a tab as the first character on the line and only command lines may begin with a tab.
A target is usually the name of the file that make creates, often an object file or executable program.
A phony target is one that isn't really the name of a file. It will only have a list of commands and no prerequisites.
One common use of phony targets is for removing files that are no longer needed after a program has been made. The following example simply removes all object files found in the directory containing the makefile.
clean :
	rm *.o
A dependency identifies a file that is used to create another file. For example a .cc file is used to create a .o, which is used to create an executable file.
Each command in a rule is interpreted by a shell. By default make uses the /bin/sh shell. The default can be overridden by using the macro SHELL = /bin/sh (or equivalent) to use the shell of your preference. This macro should be included in every makefile to make sure the same shell is used each time the makefile is executed.
Macros allow you to define constants. By using macros you can avoid repeating text entries and make makefiles easier to modify. Macro definitions have the form
NAME1 = text string
NAME2 = another string
Macros are referred to by placing the name in either parentheses or curly braces and preceding it with a dollar sign ($). The previous definitions could be referenced as
$(NAME1) ${NAME2}
which are interpreted as
text string
another string
Some valid macro definitions are
LIBS = -lm
OBJS = file1.o file2.o $(MORE_OBJS)
MORE_OBJS = file3.o
CXX = CC
DEBUG_FLAG =    # assign -g for debugging
which could be used in a makefile entry like this
prog1 : ${OBJS}
	${CXX} $(DEBUG_FLAG) -o prog1 ${OBJS} ${LIBS}
Macro names can use any combination of upper- and lowercase letters, digits, and underscores. By convention macro names are in uppercase. The text string can also be null, as in the DEBUG_FLAG example, which also shows that comments can follow a definition.
You should note from the previous example that the OBJS macro contains another macro, $(MORE_OBJS). The order in which macros are defined does not matter, but if a macro name is defined twice only the last definition is used. Macros cannot be undefined and then redefined as something else.
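The last-definition-wins behavior is easy to verify. Here is a minimal sketch; the macro name NAME and the show target are made up for illustration:

```shell
# Sketch: when a macro is defined twice, make uses only the last value.
set -e
dir=$(mktemp -d) && cd "$dir"
cat > Makefile <<'EOF'
NAME = first
NAME = second
show :
	@echo $(NAME)
EOF
make -s show    # prints: second
```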
Make can receive macros from four sources: macros may be defined in the makefile as we've already seen, internally defined within make, defined on the command line, or inherited from shell environment variables.
Internally defined macros are ones that are predefined in make. You can invoke make with the -p option to display a listing of all the macros, suffix rules and targets in effect for the current build. Here is a partial listing with the default macros from MTSU's mainframe frank.
CXX = CC
CXXFLAGS = -O
GFLAGS =
CFLAGS = -O
CC = cc
LDFLAGS =
LD = ld
LFLAGS =
MAKE = make
MAKEFLAGS = b
There are a few special internal macros that make defines for each dependency line. Most are beyond the scope of this document but one is especially useful in a makefile and you are likely to see it even in simple makefiles.
The macro $@ evaluates to the name of the current target. In the following example the target name is prog1, which is also needed in the command line to name the executable file. In this example -o $@ evaluates to -o prog1.
prog1 : ${OBJS}
	${CXX} -o $@ ${OBJS}
Macros can be defined on the command line. From the previous example the debug flag, which was null, could be set from the command line with the command
% make prog1 DEBUG_FLAG=-g
Definitions comprised of several words must be enclosed in single or double quotes so that the shell will pass them as a single argument. For example
% make prog1 "LIBS= -lm -lX11"
could be used to link an executable using the math and X Windows libraries.
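Command-line macro assignments, including the quoting of multi-word values, can be tried out with a throwaway makefile. This is a sketch; the show target and the echoed format are made up for illustration:

```shell
# Sketch: macros set on the command line, including a multi-word value
# that must be quoted so the shell passes it as one argument.
set -e
dir=$(mktemp -d) && cd "$dir"
cat > Makefile <<'EOF'
DEBUG_FLAG =
LIBS =
show :
	@echo "flags=$(DEBUG_FLAG) libs=$(LIBS)"
EOF
make -s show                                   # prints: flags= libs=
make -s show DEBUG_FLAG=-g "LIBS=-lm -lX11"    # prints: flags=-g libs=-lm -lX11
```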
Shell variables that have been defined as part of the environment are available to make as macros within a makefile. C shell users can see the environment variables they have defined from the command line with the command
% env
These variables can be set within the .login file or from the command line with a command like:
% setenv DIR /usr/bin
With four sources for macros there is always the possibility of conflicts. There are two orders of priority available for make. The default priority order, from least to greatest, is: (1) internal (default) definitions, (2) shell environment variables, (3) makefile definitions, (4) command-line definitions.
If make is invoked with the -e option, the priority order from least to greatest is: (1) internal (default) definitions, (2) makefile definitions, (3) shell environment variables, (4) command-line definitions.
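These two priority orders can be observed directly. A sketch (the macro WHO and the show target are invented for the demonstration; behavior as in GNU make):

```shell
# Sketch: macro priority with and without -e.
set -e
dir=$(mktemp -d) && cd "$dir"
cat > Makefile <<'EOF'
WHO = makefile
show :
	@echo $(WHO)
EOF
WHO=environment make -s show               # prints: makefile    (makefile beats environment)
WHO=environment make -s -e show            # prints: environment (-e reverses that)
WHO=environment make -s show WHO=cmdline   # prints: cmdline     (command line beats both)
```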
Make has a set of default rules called suffix or implicit rules. These are generalized rules that make can use to build a program. For example in building a C++ program these rules tell make that .o object files are made from .cc source files. The suffix rule that make uses for a C++ program is
.cc.o:
	$(CXX) $(CXXFLAGS) -c $<
where $< is a special macro which in this case stands for a .cc file that is used to produce a particular target .o file.
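The mechanics of a suffix rule can be seen with made-up suffixes and no compiler at all. In this sketch, .src and .out are invented suffixes and cp stands in for $(CXX); $< expands to the prerequisite and $@ to the target:

```shell
# Sketch of a suffix rule firing: .src -> .out, with cp as the "compiler".
set -e
dir=$(mktemp -d) && cd "$dir"
printf 'payload' > thing.src
cat > Makefile <<'EOF'
.SUFFIXES: .src .out
.src.out :
	cp $< $@
EOF
make thing.out      # the suffix rule builds thing.out from thing.src
cat thing.out       # prints: payload
```

No explicit rule for thing.out exists; make derives it from the suffix rule, exactly as it derives .o files from .cc files.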
Jun 17, 2020 | opensource.com
Knowing how Linux uses libraries, including the difference between static and dynamic linking, can help you fix dependency problems.
Linux, in a way, is a series of static and dynamic libraries that depend on each other. For new users of Linux-based systems, the whole handling of libraries can be a mystery. But with experience, the massive amount of shared code built into the operating system can be an advantage when writing new applications.
To help you get familiar with this topic, I prepared a small example application that shows the most common methods that work on common Linux distributions (these have not been tested on other systems). To follow along with this hands-on tutorial using the example application, open a command prompt and type:
$ git clone https://github.com/hANSIc99/library_sample
$ cd library_sample/
$ make
cc -c main.c -Wall -Werror
cc -c libmy_static_a.c -o libmy_static_a.o -Wall -Werror
cc -c libmy_static_b.c -o libmy_static_b.o -Wall -Werror
ar -rsv libmy_static.a libmy_static_a.o libmy_static_b.o
ar: creating libmy_static.a
a - libmy_static_a.o
a - libmy_static_b.o
cc -c -fPIC libmy_shared.c -o libmy_shared.o
cc -shared -o libmy_shared.so libmy_shared.o
$ make clean
rm *.o
After executing these commands, these files should be added to the directory (run ls to see them):
my_app
libmy_static.a
libmy_shared.so

About static linking

When your application links against a static library, the library's code becomes part of the resulting executable. This is performed only once at linking time, and these static libraries usually end with a .a extension.
A static library is an archive (ar) of object files. The object files are usually in the ELF format. ELF is short for Executable and Linkable Format, which is compatible with many operating systems.
The output of the file command tells you that the static library libmy_static.a is an ar archive:
$ file libmy_static.a
libmy_static.a: current ar archive
With ar -t, you can look into this archive; it shows two object files:
$ ar -t libmy_static.a
libmy_static_a.o
libmy_static_b.o
You can extract the archive's files with ar -x <archive-file>. The extracted files are object files in ELF format:
$ ar -x libmy_static.a
$ file libmy_static_a.o
libmy_static_a.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped

About dynamic linking

Dynamic linking means the use of shared libraries. Shared libraries usually end with .so (short for "shared object"). Shared libraries are the most common way to manage dependencies on Linux systems. These shared resources are loaded into memory before the application starts, and when several processes require the same library, it is loaded only once on the system. This feature saves on the application's memory usage.
Another thing to note is that when a bug is fixed in a shared library, every application that references this library will profit from it. This also means that if the bug remains undetected, each referencing application will suffer from it (if the application uses the affected parts).
It can be very hard for beginners when an application requires a specific version of the library, but the linker only knows the location of an incompatible version. In this case, you must help the linker find the path to the correct version.
Although this is not an everyday issue, understanding dynamic linking will surely help you in fixing such problems.
Fortunately, the mechanics for this are quite straightforward.
To detect which libraries are required for an application to start, you can use ldd, which prints out the shared libraries used by a given file:
$ ldd my_app
	linux-vdso.so.1 (0x00007ffd1299c000)
	libmy_shared.so => not found
	libc.so.6 => /lib64/libc.so.6 (0x00007f56b869b000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f56b8881000)
Note that the library libmy_shared.so is part of the repository but is not found. This is because the dynamic linker, which is responsible for loading all dependencies into memory before executing the application, cannot find this library in the standard locations it searches.
Errors associated with the loader finding incompatible versions of common libraries (like bzip2, for example) can be quite confusing for a new user. One way around this is to add the repository folder to the environment variable LD_LIBRARY_PATH to tell the loader where to look for the correct version. In this case, the right version is in this folder, so you can export it:
$ LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH
$ export LD_LIBRARY_PATH
Now the dynamic linker knows where to find the library, and the application can be executed. You can rerun ldd to invoke the dynamic linker, which inspects the application's dependencies and loads them into memory. The memory address is shown after the object path:
$ ldd my_app
	linux-vdso.so.1 (0x00007ffd385f7000)
	libmy_shared.so => /home/stephan/library_sample/libmy_shared.so (0x00007f3fad401000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f3fad21d000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f3fad408000)
To find out which dynamic linker is invoked, you can use file:
$ file my_app
my_app: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=26c677b771122b4c99f0fd9ee001e6c743550fa6, for GNU/Linux 3.2.0, not stripped
The linker /lib64/ld-linux-x86-64.so.2 is a symbolic link to ld-2.31.so, which is the default linker for my Linux distribution:
$ file /lib64/ld-linux-x86-64.so.2
/lib64/ld-linux-x86-64.so.2: symbolic link to ld-2.31.so
Looking back at the output of ldd, you can also see (next to libmy_shared.so) that each dependency ends with a number (e.g., /lib64/libc.so.6). The usual naming scheme of shared objects is:
libXYZ.so.<MAJOR>.<MINOR>
On my system, libc.so.6 is also a symbolic link to the shared object libc-2.31.so in the same folder:
$ file /lib64/libc.so.6
/lib64/libc.so.6: symbolic link to libc-2.31.so
If you are facing the issue that an application will not start because the loaded library has the wrong version, it is very likely that you can fix this issue by inspecting and rearranging the symbolic links or specifying the correct search path (see "The dynamic loader: ld.so" below).
For more information, see the ldd man page.

Dynamic loading

Dynamic loading means that a library (e.g., a .so file) is loaded during a program's runtime. This is done using a certain programming scheme. Dynamic loading is applied when an application uses plugins that can be modified during runtime. See the dlopen man page for more information.

The dynamic loader: ld.so

On Linux, you are mostly dealing with shared objects, so there must be a mechanism that detects an application's dependencies and loads them into memory. ld.so looks for shared objects in these places, in the following order:
- The relative or absolute path hardcoded in the application (with the -rpath compiler option on GCC)
- The environment variable LD_LIBRARY_PATH
- The file /etc/ld.so.cache
Keep in mind that adding a library to the system's library archive /usr/lib64 requires administrator privileges. You could copy libmy_shared.so manually to the library archive and make the application work without setting LD_LIBRARY_PATH:
$ unset LD_LIBRARY_PATH
$ sudo cp libmy_shared.so /usr/lib64/
When you rerun ldd, you can see that the path to the library archive shows up now:
$ ldd my_app
	linux-vdso.so.1 (0x00007ffe82fab000)
	libmy_shared.so => /lib64/libmy_shared.so (0x00007f0a963e0000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f0a96216000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f0a96401000)

Customize the shared library at compile time

If you want your application to use your shared libraries, you can specify an absolute or relative path during compile time.
Modify the makefile (line 10) and recompile the program by invoking make -B. Then the output of ldd shows libmy_shared.so listed with its absolute path.
Change this:
CFLAGS =-Wall -Werror -Wl,-rpath,$(shell pwd)
To this (be sure to edit the username):
CFLAGS =-Wall -Werror -Wl,-rpath,/home/stephan/library_sample/
Then recompile:
$ make
Confirm it is using the absolute path you set, which you can see on line 2 of the output:
$ ldd my_app
	linux-vdso.so.1 (0x00007ffe143ed000)
	libmy_shared.so => /lib64/libmy_shared.so (0x00007fe50926d000)
	/home/stephan/library_sample/libmy_shared.so (0x00007fe509268000)
	libc.so.6 => /lib64/libc.so.6 (0x00007fe50909e000)
	/lib64/ld-linux-x86-64.so.2 (0x00007fe50928e000)
This is a good example, but how would this work if you were making a library for others to use? New library locations can be registered by writing them to /etc/ld.so.conf or by creating a <library-name>.conf file containing the location under /etc/ld.so.conf.d/. Afterward, ldconfig must be executed to rewrite the ld.so.cache file. This step is sometimes necessary after you install a program that brings some special shared libraries with it.
See the ld.so man page for more information.

How to handle multiple architectures

Usually, there are different libraries for the 32-bit and 64-bit versions of applications. The following list shows their standard locations for different Linux distributions:
Red Hat family
- 32-bit: /usr/lib
- 64-bit: /usr/lib64
Debian family
- 32-bit: /usr/lib/i386-linux-gnu
- 64-bit: /usr/lib/x86_64-linux-gnu
Arch Linux family
- 32-bit: /usr/lib32
- 64-bit: /usr/lib64
FreeBSD (technically not a Linux distribution)
- 32-bit: /usr/lib32
- 64-bit: /usr/lib
Knowing where to look for these key libraries can make broken library links a problem of the past.
While it may be confusing at first, understanding dependency management in Linux libraries is a way to feel in control of the operating system. Run through these steps with other applications to become familiar with common libraries, and continue to learn how to fix any library challenges that could come up along your way.
- Paperback: 168 pages
- Publisher: O'Reilly & Associates; 2nd edition (February 1, 1993)
- Language: English
- ISBN-10: 0937175900
- ISBN-13: 978-0937175903
- Product Dimensions: 6 x 0.4 x 9 inches
Good Book
By vijaya dantuluri on March 25, 2001
Format: Paperback
I recently had to work on our project's makefile. The first look at it made me nervous. Fortunately I found this book. This book is a great introduction to Unix's power tool 'make'. The authors clearly had enough experience to tell us the whats, hows, and whys. The first chapter generates excitement to continue on to the next ones. Chapters two and three must be read with lots of patience.
Remember, 'make' is a complex tool used for complex projects. It's not an easy go. The troubleshooting section lists some common problems, which, by the way, are really helpful.
The project management coverage is good too. The only complaint I have with this book is that it is a little pricey. For thirty bucks, I expect more bang. The authors could have updated the book with the new breed of make tools like Apache's 'ant'. An example of building a project could have really helped. The man pages for 'make' on my Unix system didn't take me far enough to grasp this tool. I highly recommend it to beginners.
Paperback: 256 pages
Publisher: No Starch Press; 1 edition (April 16, 2015)
Language: English
ISBN-10: 1593276494
ISBN-13: 978-1593276492
Product Dimensions: 7 x 0.5 x 9.2 inches
Shipping Weight: 15.2 ounces
M. Helmke on April 27, 2015
I think this book is fantastic. It does have one weakness.
The GNU Make Book is intended for people who already have an understanding of GNU Make, what it is, and the basics of how and why someone would use it. The reader is assumed to know enough about programming and source code, about compiling and creating software executables to not need an introduction. The book begins by talking about setting environment variables in your makefile. If you know what this means, you will likely benefit from the book. If you don't, you aren't ready for this book.
I think this book is fantastic. It does have one weakness that, once addressed, would be likely to broaden its appeal and earn the review 5 stars instead of just 4. Many people who want or need to learn to use GNU make more effectively do not yet have the foundational knowledge necessary for reading or benefiting from this book. That could be remedied in a 15-20 page introductory chapter covering topics like "what is make?" and "how is make typically used?" The descriptions could be short, but would set the context for the rest of the book and ease the nervous reader in. Perhaps starting with something like, "GNU make is a tool that enables you to automate the generation of program executables from program source code" would be useful and could be followed by, "This is typically accomplished by writing a Makefile, which includes a list of instructions for make to use as it does its work."
Michael Kim on May 1, 2015
The definitive GNU Make book
GNU Make is an automation tool for software builds. With that said, this book is intended for readers who have experience working in a Linux or Mac OS X environment, experience with programming, know what GNU Make is, and how they can use it to their advantage.
If you are new to GNU Make, I recommend that you read up on GNU Make and work with it a little first before reading this book to better grasp the concept.
The author does well in explaining and elaborating the content. The code is easy to read and to follow along. The pages are structured well so that you can easily distinguish what is code and what is text. Here is a list of topics discussed in the book:
- A thorough rundown of the basics of variables, rules, targets, and makefiles.
- Fix wastefully long build times and other common problems.
- Gain insight into more advanced capabilities.
- Master user-defined functions, variables, and path handling.
- Weigh the pitfalls and advantages of GNU make parallelization.
- Handle automatic dependency generation, rebuilding, and non-recursive make.
- Modify the GNU Make source and take advantage of the GNU Make Standard Library.
- Create makefile assertions and debug makefiles.
Overall, this is a great GNU Make book that has a lot of useful content from a highly credible author.
Mick Charles Beaver on July 27, 2015
Hot Pizza
A coworker once told me that "GNU Make isn't the build system you need, it's the build system you deserve." I have to agree with that. GNU Make is a real-world tool for real-world problems and is arguably rough around the edges because of this.
Appropriately, "The GNU Make Book" is a well-written walk through on how to debug and understand GNU Make and its quirks.
It is a book written for programmers with more than a few scars on their fingertips. While you should look elsewhere for a tutorial or a complete reference, this one-of-a-kind book will at least give you an umbrella as you weather the storm of tears that often comes with inheriting someone else's Makefile.
Nowhere else will you find as many high-quality and in-depth examples for flexing the mainstream features of a Makefile, profiling a slow build, and debugging the various things that can go wrong.
It's relatively short and relatively cheap. Would buy again. A+++.
Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
Last modified: July, 02, 2020