I think of a programming language as a tool to convert a programmer's mental images into precise operations that a machine can perform. The main idea is to match the user's intuition as well as possible. There are many kinds of users, and many kinds of application areas, so we need many kinds of languages.
Ordinarily technology changes fast. But programming languages are
different: programming languages are not just technology, but what
programmers think in.
They're half technology and
half religion. And so the median language, meaning whatever
language the median programmer uses, moves as slow as an iceberg.
A fruitful way to think about language development is to consider it to be a
special type of theory building. Peter Naur suggested that programming in general
is a theory-building activity in his 1985 paper "Programming as Theory Building".
But the idea is especially applicable to compilers and interpreters. What Peter Naur
failed to understand was that the design of programming languages has religious overtones
and sometimes represents an activity pretty close to the process of creating
a new, obscure cult ;-). Clueless academics publishing junk papers at obscure conferences
are the high priests of the church of programming languages. Some, like Niklaus Wirth
and Edsger W. Dijkstra, (temporarily) reached a status close to that of (false)
prophets :-).
On a deep conceptual level, building a new language is a human way of solving
complex problems. That means that compiler construction is probably the most underappreciated
paradigm for programming large systems, much more so than the greatly oversold object-oriented
programming. OO benefits are greatly overstated. For users, programming languages
distinctly have religious aspects, so decisions about what language to use are often
far from rational and are mainly cultural. Indoctrination at the university
plays a very important role. Recently universities were instrumental in making Java the new
Cobol.
The second important observation about programming languages is that the language
per se is just a tiny part of what can be called the language programming environment.
The latter includes libraries, the IDE, books, the level of adoption at universities,
popular and important applications written in the language, the level of support, and the key
players that support the language on major platforms such as Windows and Linux, and
other similar things. A mediocre language with a good programming environment
can give a run for the money to languages superior in design that come
naked. This is the story behind the success of Java. A critical application
is also very important, and this is the story of the success of PHP, which is nothing but
a bastardized derivative of Perl (with most of the interesting Perl features removed
;-) adapted to the creation of dynamic web sites using the so-called LAMP stack.
Progress in programming languages has been very uneven and contains several setbacks.
Currently this progress is mainly limited to the development of so-called
scripting languages. The field of traditional
high-level languages has been stagnant for decades.
At the same time there are some mysterious, unanswered questions about the factors
that help a language to succeed or fail. Among them:
Why do new programming languages repeat old mistakes? Is this
because the complexity of languages is already too high, or because language designers
are unable to learn from the "old masters"?
Why, starting from approximately 1990, has progress in language design
been almost absent, while the most popular languages created after 1990, such
as Java and PHP, are at best mediocre and constitute a (huge) step back from
the state of the art of language design?
Why does fashion rule: fashionable (OO-based) languages gain momentum and
support despite their (obvious) flaws.
Why is the "worse is better" approach so successful? Why do less powerful and
less elegant
languages make it to the mainstream and stay there?
How does the complexity of a language inhibit its wide usage? The story of PHP (a language
inferior to almost any other scripting language developed after 1990) eliminating
Perl as the CGI scripting language is a pretty fascinating one.
The success of Pascal (a bastardized version of Algol) is similar, but is
related to the fact that it was used at universities as the first programming
language. Now the same situation is repeating with Java.
Those are difficult questions to answer without some way of classifying languages
into different categories. Several such classifications exist. First of all, as
with natural languages, the number of people who speak a given language is a tremendous
force that can overcome any real or perceived deficiencies of the language.
In programming languages, as in natural languages, nothing succeeds like success.
The history of programming languages raises interesting general questions about the
limits of complexity of programming languages. There is strong historical evidence
that a language with a simpler, or even simplistic, core (Basic, Pascal) has better
chances of acquiring a high level of popularity. The underlying fact here is probably
that most programmers are at best mediocre, and such programmers intuitively tend
to avoid more complex, richer languages and prefer, say, Pascal to PL/1
and PHP to Perl. Or at least they avoid them at a particular phase of language development
(C++ is not a simpler language than PL/1, but it was widely adopted because of the progress
of hardware, the availability of compilers and, not least, because it was associated
with OO exactly at the time OO became a mainstream fashion). Complex,
non-orthogonal languages can succeed only as a result of a long period of language
development from a smaller core (which usually adds complexity -- just compare Fortran IV with Fortran
90, or PHP 3 with PHP 5). Riding the banner of some fashionable new
trend by extending an existing popular language to the new "paradigm" is also a possibility
(OO programming in the case of C++, which is a superset of C).
Historically, few complex languages were successful (PL/1, Ada, Perl, C++), and
even when they were successful, their success typically was temporary rather than
permanent (PL/1, Ada, Perl). As Professor Wilkes noted (iee90):
Things move slowly in the computer language field but, over a sufficiently
long period of time, it is possible to discern trends. In the 1970s, there was
a vogue among system programmers for BCPL, a typeless language. This has now
run its course, and system programmers appreciate some typing support.
At the same time, they like a language with low
level features that enable them to do things their way, rather than the compiler’s
way, when they want to.
They continue to have a strong preference for
a lean language. At present they tend to favor C in its various
versions. For applications in which flexibility is important, Lisp may be said
to have gained strength as a popular programming language.
Further progress is necessary in the direction of achieving modularity. No
language has so far emerged which exploits objects in a fully satisfactory manner,
although C++ goes a long way. ADA was progressive in this respect, but
unfortunately it is in the process of collapsing
under its own great weight.
ADA is an example of what can happen when an official attempt is made to
orchestrate technical advances. After the experience
with PL/1 and ALGOL 68, it should have been clear that the future did not lie
with massively large languages.
I would direct the reader’s attention to Modula-3, a modest attempt to build
on the appeal and success of Pascal and Modula-2 [12].
The complexity of the compiler/interpreter also matters, as it affects portability:
this is one thing that probably doomed PL/1 (and
later Ada), although these days a new language typically comes with an open source compiler
(or, in the case of scripting languages, an interpreter), so this is less of a problem.
Here is an interesting take on language design from the preface to the book The D Programming
Language:
Programming language design seeks power in simplicity and, when successful,
begets beauty.
Choosing the trade-offs among contradictory requirements is a difficult task
that requires good taste from the language designer as much as mastery of theoretical
principles and of practical implementation matters. Programming language design
is software-engineering-complete.
D is a language that attempts to consistently do the right thing within the
constraints it chose: system-level access to computing resources, high performance,
and syntactic similarity with C-derived languages. In trying to do the right
thing, D sometimes stays with tradition and does what other languages do, and
other times it breaks tradition with a fresh, innovative solution. On occasion
that meant revisiting the very constraints that D ostensibly embraced. For example,
large program fragments or indeed entire programs can be written in a well-defined
memory-safe subset of D, which entails giving away a small amount of system-level
access for a large gain in program debuggability.
You may be interested in D if the following values are important to you:
Performance. D is a systems programming language. It has a memory
model that, although highly structured, is compatible with C’s and can call
into and be called from C functions without any intervening translation.
Expressiveness. D is not a small, minimalistic language, but
it does have a high power-to-weight ratio. You can define eloquent, self-explanatory
designs in D that model intricate realities accurately.
“Torque.” Any backyard hot-rodder would tell you that power isn’t
everything; its availability is. Some languages are most powerful for small
programs, whereas other languages justify their syntactic overhead only
past a certain size. D helps you get work done in short scripts and large
programs alike, and it isn’t unusual for a large program to grow organically
from a simple single-file script.
Concurrency. D’s approach to concurrency is a definite departure
from the languages it resembles, mirroring the departure of modern hardware
designs from the architectures of yesteryear. D breaks away from the curse
of implicit memory sharing (though it allows statically checked explicit
sharing) and fosters mostly independent threads that communicate with one
another via messages.
Generic code. Generic code that manipulates other code has been
pioneered by the powerful Lisp macros and continued by C++ templates, Java
generics, and similar features in various other languages. D offers extremely
powerful generic and generational mechanisms.
Eclecticism. D recognizes that different programming paradigms
are advantageous for different design challenges and fosters a highly integrated
federation of styles instead of One True Approach.
“These are my principles. If you don’t like them, I’ve got others.”
D tries to observe solid principles of language design. At times, these
run into considerations of implementation difficulty, usability difficulties,
and above all human nature that doesn’t always find blind consistency sensible
and intuitive. In such cases, all languages must make judgment calls that
are ultimately subjective and are about balance, flexibility, and good taste
more than anything else. In my opinion, at least, D compares very favorably
with other languages that inevitably have had to make similar decisions.
At the initial, most difficult stage of language development, the language
should solve an important problem that is inadequately solved by currently popular
languages. But at the same time the language has few chances to succeed unless
it fits perfectly into the current software fashion. This "fashion factor" is probably
as important as several other factors combined, with the exception of the "language sponsor"
factor.
As in women's dress, fashion rules in language design, and with
time this trend has become more and more pronounced. A new language should
represent the current fashionable trend. For example, OO programming was the
calling card for entry into the world of "big, successful languages" since probably the early 90s
(C++, Java, Python). Before that, "structured programming" and "verification"
(Pascal, Modula) played a similar role.
PL/1, Java, C#, and Ada are languages that had powerful sponsors. Pascal,
Basic, and Forth are examples of languages that had no such sponsor during the initial
period of their development. C and C++ are somewhere in between.
But any language now needs a "programming environment" which consists
of a set of libraries, a debugger and other tools (make tool, linker, pretty-printer, etc.). The set of standard libraries
and the debugger are probably the two most important elements. They cost a lot of time
(or money) to develop, and here the role of a powerful sponsor is difficult to overestimate.
While this is not a necessary condition for becoming popular, it really helps: other things being equal,
the weight of the sponsor of the language does matter. For example Java, being a weak, inconsistent
language (C-- with garbage collection and OO), was rammed down programmers' throats on the strength of
marketing and the huge amount of money spent on creating the Java programming environment. The same
was partially true for
C# and Python. That's why Python, despite its "non-Unix" origin, is a more viable scripting
language now than, say, Perl (which is better integrated with Unix and has, for a scripting
language, quite innovative support for pointers and regular expressions),
or Ruby (which has supported coroutines from day one, not as a "bolted on" feature
as in Python). As in political campaigns, negative advertising also matters. For example Perl
suffered greatly from smears comparing programs written in it to "white noise", and then
from the withdrawal of O'Reilly from the role of sponsor of the language (although it continues to milk
its Perl book publishing franchise ;-).
People proved to be pretty gullible, and in this sense language marketing is not that different
from the marketing of women's clothing :-)
One very important classification of programming languages is based on the so-called
level of the language. Essentially, once there is at least one
language that is successful on a given level, the success of other languages on
the same level becomes more problematic. Higher chances of success belong to languages
that have an even slightly higher level than their successful predecessors.
The level of a language can informally be described as the number of statements
(or, more correctly, the number of lexical units (tokens)) needed to write
a solution to a particular problem in one language versus another. This way we can
distinguish several levels of programming languages:
Lowest levels. This level is occupied by assemblers and
languages designed for specific instruction sets, like PL360.
Low level, with access to low-level architectural features (C,
BCPL). These are also called system programming languages and are, in essence,
high-level assemblers. In those languages you need to specify details related
to the machine organization (the computer instruction set); memory is allocated
explicitly.
High level, without automatic memory allocation for variables or
garbage collection (Fortran and Algol-style languages such as Modula, Pascal,
PL/1, C++, VB). Most languages in this category are compiled.
High level, with automatic memory allocation for variables
and garbage collection. Languages of this category (Java, C#) are typically
compiled not to the native instruction set of the computer they run
on, but to some abstract instruction set called a virtual machine.
Very high level languages (scripting languages, as well as Icon,
SETL, and awk). Most are impossible to compile to native code, as dynamic features prevent
generation of code at compile time. They also typically use a virtual machine and garbage collection.
OS shells. These are often called "glue" languages, as they
provide integration of existing OS utilities. They currently represent the highest level of
languages available. This category is mainly represented by Unix shells such
as bash and ksh93, but Windows PowerShell belongs to the same category. Like scripting languages, they
typically use a virtual machine and intermediate code. They
presuppose a specific OS as a programming environment and as such are less portable
than the other categories.
Some people distinguish between "nanny languages" and "sharp razor" languages.
The latter do not attempt to protect the user from his errors, while the former usually
go too far... The right compromise is extremely difficult to find.
For example, I consider the explicit availability of pointers an important
feature of a language that greatly increases its expressive power and far
outweighs
the risk of errors in the hands of unskilled practitioners. In other words,
attempts to make a language "safer" often misfire.
Another useful typology is based on the expressive style of the language:
Procedural. The programming style you're probably used to, procedural
languages execute a sequence of statements that lead to a result. In essence,
a procedural language expresses the procedure to be followed to solve a problem.
Procedural languages typically use many variables and have heavy use of loops
and other elements of "state", which distinguishes them from functional programming
languages. Functions in procedural languages may modify variables or have other
side effects (e.g., printing out information) other than the value that the
function returns.
Functional. Employing a programming style often contrasted with procedural
programming, functional programs typically make little use of stored state,
often eschewing loops in favor of recursive functions. The most popular and
most successful functional notation (most functional languages are failures,
despite the interesting features they contain) is probably regular expression
notation. Another very successful non-procedural notation is the Unix
pipe. All in all, functional languages have a lot of problems, and none
of them has managed to get into the mainstream. All the talk about the superiority of Lisp
remained just talk, as Lisp limits the expressive power of the programmer by overloading
the board on one side.
Object-oriented. This is a popular subclass of procedural languages
with better handling of namespaces (a hierarchical structuring of the namespace
reminiscent of the Unix file system) and a couple of other conveniences for defining
multiple-entry functions (class methods in OO-speak). Classes, strictly speaking,
are an evolution of the records introduced by Simula. The main difference from Cobol-
and PL/1-style records is that classes have executable components (pointers
to functions) and are hierarchically organized, with subclasses being lower-level
sub-records that are still accessible from the namespace of the higher-level class.
Purely hierarchically organized structures were introduced in Cobol. Later
PL/1 extended and refined them, introducing namespace copying (the LIKE attribute),
pointer bases (BASED records), etc. C, being mostly a subset of PL/1, also used
some of those refinements, but in a very limited way. In a way a PL/1 record is a non-inherited class
without any methods. Some languages like Perl 5 take a "nuts and bolts" approach
to the introduction of OO constructs, exposing the kitchen. As such, those implementations
are highly educational for students, as they can see how the "object-oriented" kitchen
operates. For example, the class of an object in Perl 5 is passed as an implicit
first parameter with each method call, "behind the scenes".
Scripting languages are typically procedural but may contain non-procedural
elements (regular expressions) as well as elements of object-oriented languages
(Python, Ruby). Some of them support coroutines. They fall into their own category
because they are higher-level languages than compiled languages or languages
with an abstract machine and garbage collection (Java). Scripting languages
usually implement automatic garbage collection. Variable types in scripting
languages are typically dynamic, declarations of variables are not strictly needed
(but can be used), and they usually do not have compile-time checking of
the type compatibility of operands in basic operations. Some, like Perl, try to
convert the variable into the type required by a particular operation (for example
a string into a numeric value if the "+" operation is used). Possible errors are
swept under the carpet. Uninitialized variables are typically handled as having
the value zero in numeric operations and the null string in string operations. In
case an operation can't be performed, it returns zero, nil or some other special
value. Some scripting languages have a special UNDEF value, which makes it
possible to determine whether a particular variable was assigned any value
before it was used in an expression.
Logic. Logic programming languages allow programmers to make declarative
statements (possibly in first-order logic: "grass implies green", for example).
The most successful was probably Prolog.
In a way this is another type of functional language, and Prolog is a kind of
regular expressions on steroids. The success of this type of language was, and is,
very limited.
These categories are not pure and somewhat overlap. For example, it's possible to
program in an object-oriented style in C, or even in assembler. Some scripting languages
like Perl have built-in regular expression engines that are part of the language,
so they have a functional component despite being procedural. Some relatively low-level
(Algol-style) languages implement garbage collection; a good example
is Java. There are scripting languages that compile into a common language framework
which was designed for high-level languages. For example, IronPython compiles into
.NET.
The popularity of programming languages is not strongly connected to their quality.
Some languages that look like a collection of language designer blunders (PHP, Java)
became quite popular. Java in particular became the new Cobol, and PHP dominates dynamic
Web site construction. The dominant technology for such Web sites is often called
LAMP, which stands for Linux, Apache, MySQL, PHP. Being a highly simplified but badly
constructed subset of Perl, a kind of new Basic for dynamic Web site construction,
PHP provides the most depressing experience. I was unpleasantly surprised when I
learned that the Wikipedia engine was rewritten from Perl to PHP some time ago, but
this illustrates the trend quite well.
So language design quality has little to do with a language's success in the
marketplace. Simpler languages have wider appeal, as the success of PHP (which at
the beginning came at the expense of Perl) suggests. In addition, much depends on whether
the language has a powerful sponsor, as was the case with Java (Sun and IBM) as well
as Python (Google).
Progress in programming languages has been very uneven and contains several setbacks,
like Java. Currently this progress is usually associated with
scripting languages. The history of programming
languages raises interesting general questions about the "laws" of programming language
design. First let's reproduce several notable quotes:
Knuth's law of optimization: "Premature optimization is the root of
all evil (or at least most of it) in programming." - Donald Knuth
"Greenspun's Tenth Rule of Programming: any sufficiently complicated
C or Fortran program contains an ad hoc informally-specified bug-ridden slow
implementation of half of Common Lisp." - Phil Greenspun
"The key to performance is elegance, not battalions of special cases."- Jon Bentley and Doug McIlroy
"Some may say Ruby is a bad rip-off of Lisp or Smalltalk, and I admit that.
But it is nicer to ordinary people." - Matz, LL2
Most papers in computer science describe how their author learned what someone
else already knew. - Peter Landin
"The only way to learn a new programming language is by writing programs
in it." - Kernighan and Ritchie
"If I had a nickel for every time I've written "for (i = 0; i < N; i++)"
in C, I'd be a millionaire." - Mike Vanier
"Language designers are not intellectuals. They're not as interested in
thinking as you might hope. They just want to get a language done and start
using it." - Dave Moon
"Don't worry about what anybody else is going to do. The best way to predict
the future is to invent it." - Alan Kay
"Programs must be written for people to read, and only incidentally for
machines to execute." - Abelson & Sussman, SICP, preface to the first edition
Please note that it is one thing to read a language manual and appreciate how good
the concepts are, and another to bet your project on a new, unproven language without
good debuggers, manuals and, very importantly, libraries. The debugger is very
important, but standard libraries are crucial: they represent the factor that makes
or breaks new languages.
In this sense languages are much like cars. For many people a car is the thing
they use to get to work and to the shopping mall, and they are not very interested in whether
the engine is inline or V-type, or whether fuzzy logic is used in the transmission. What they
care about is safety, reliability, mileage, insurance and the size of the trunk. In this sense
"worse is better" is very true. I already mentioned the importance of the debugger.
The other important criterion is the quality and availability of libraries. Actually,
libraries account for perhaps 80% of the usability of a language; in a sense,
libraries are more important than the language itself...
The popular belief that scripting is an
"unsafe", "second rate" or "prototype-only" solution is completely wrong. If a project
dies, it does not matter what the implementation language was; so for any
successful project with a tough schedule a scripting language (especially in a dual
scripting-language-plus-C combination, for example Tcl+C) is an optimal blend
for a large class of tasks. Such an approach helps to separate architectural decisions
from implementation details much better than any OO model does.
Moreover, even for tasks that involve a fair amount of computation and data (computationally
intensive tasks), languages such as Python and Perl are often (but not always!)
competitive with C++, C# and, especially, Java.
1946
Konrad Zuse, a German engineer working alone while hiding out in
the Bavarian Alps, develops Plankalkul. He applies the language to, among other
things, chess.
1949
Short Code , the first computer language actually used on an electronic
computing device, appears. It is, however, a "hand-compiled" language.
Fifties
1951
Grace Hopper
, working for Remington Rand, begins design work on the first widely
known compiler, named A-0. When the language is released by Rand in 1957, it
is called MATH-MATIC.
1952
Alick E. Glennie , in his spare time at the University of Manchester,
devises a programming system called AUTOCODE, a rudimentary compiler.
1957
FORTRAN --mathematical FORmula TRANslating system--appears. Heading
the team is John Backus, who goes on to contribute to the development of ALGOL
and the well-known syntax-specification system known as BNF.
1958
FORTRAN II appears, able to handle subroutines and links to assembly
language.
LISP. John McCarthy at M.I.T. begins work on LISP--LISt Processing.
Algol-58. The original specification for ALGOL appears. The specification does
not describe how data will be input or output; that is left to the individual
implementations.
1959
LISP 1.5 appears.
COBOL is created by the Conference on Data Systems and Languages (CODASYL).
Sixties
1960
ALGOL 60, the specification for Algol 60, the first block-structured
language, appears. This is the root of the family tree that will ultimately
produce the likes of Pascal. ALGOL goes on to become the most popular language
in Europe in the mid- to late-1960s. Compilers for the language were quite
difficult to write, and that hampered its widespread use. FORTRAN managed
to hold its own in the area of numeric computations, and Cobol in data processing.
Only PL/1 (which was released in 1964) managed to bring the ideas of Algol
60 to a reasonably wide audience.
APL. Sometime in the early 1960s, Kenneth Iverson begins work
on the language that will become APL--A Programming Language. It uses a
specialized character set that, for proper use, requires APL-compatible
I/O devices.
Discovery of the context-free language formalism. The 1960s
also saw the rise of automata theory and the theory of formal languages.
Noam Chomsky introduced
the notion of context-free languages and later became well known for his theory that language is "hard-wired" in human brains,
and for his criticism of American foreign policy.
1962
Snobol was designed in 1962 at Bell Labs by R. E. Griswold and
I. Polonsky. Work begins on the sure-fire winner of the "clever acronym"
award, SNOBOL--StriNg-Oriented symBOlic Language. It will spawn other clever
acronyms: FASBOL, a SNOBOL compiler (in 1971), and SPITBOL--SPeedy ImplemenTation
of snoBOL--also in 1971.
APL is documented in Iverson's book, A Programming Language
.
FORTRAN IV appears.
1963
ALGOL 60 is revised.
PL/1. Work begins on PL/1.
1964
System/360 is announced in April of 1964.
PL/1 is released with a high-quality compiler (the F compiler), which
beats most compilers of the time in the quality of both compile-time and
run-time diagnostics. Later two brilliantly written and in
some respects unsurpassed compilers, the
debugging and optimizing PL/1 compilers, were added. Both represented the state of the art of compiler
writing. Cornell University implemented a subset of PL/1 for teaching called PL/C,
with a compiler that had probably the most advanced error detection and correction capabilities
of any batch compiler of all time.
PL/1 was also adopted as the system implementation language for Multics.
APL\360 is implemented.
BASIC. At Dartmouth University , professors John G. Kemeny and Thomas
E. Kurtz invent BASIC. The first implementation was on a timesharing
system. The first
BASIC program runs at about 4:00 a.m. on May 1, 1964.
1965
SNOBOL3 appears.
1966
FORTRAN 66 appears.
LISP 2 appears.
Work begins on LOGO at Bolt, Beranek, & Newman. The team is headed
by Wally Fuerzeig and includes Seymour Papert. LOGO is best known for its "turtle
graphics."
1967
SNOBOL4 , a much-enhanced SNOBOL, appears.
The first volume of The Art of Computer Programming was published in 1968 and instantly became a classic.
Donald Knuth (b. 1938) later published two additional volumes
of his world-famous three-volume treatise.
The structured programming movement starts: the first religious cult in programming language design.
It was created by Edsger Dijkstra, who published his infamous "Go To Statement Considered
Harmful" (CACM 11(3), March 1968, pp. 147-148). While misguided, this cult somewhat
contributed to the design of control structures in programming languages, serving
as a kind of stimulus for the creation of a richer set of control structures in
new programming languages (with PL/1 and its derivative C as probably
the two popular programming languages which incorporated these new
tendencies). Later it degenerated
into a completely fundamentalist and mostly counter-productive verification cult.
ALGOL 68, the successor of ALGOL 60, appears. It was the first extensible
language that got some traction, but generally it was a flop. Some members of the
specification committee -- including C.A.R. Hoare and Niklaus Wirth -- protested
its approval on the basis of its overcomplexity. They proved to be partially
right: ALGOL 68 compilers proved difficult to implement, and that doomed
the language. Dissatisfied with the complexity of Algol 68, Niklaus Wirth begins his work on a simple teaching language which later becomes Pascal.
ALTRAN , a FORTRAN variant, appears.
COBOL is officially defined by ANSI.
Niklaus Wirth begins work on the design of Pascal (in part
as a reaction to the overcomplexity of Algol 68). Like Basic before it, Pascal was
specifically designed for teaching programming at universities and, as such, was
designed to allow a one-pass recursive descent
compiler. But the language had multiple grave deficiencies. While a talented
language designer, Wirth went overboard in simplifying the language (for
example, in the initial version of the language loops were allowed to have only an increment of one, arrays were only
static, etc.). It was also used to promote bizarre ideas about correctness proofs of programs
inspired by the verification movement with its high priest Edsger Dijkstra -- the
first (or maybe the second, after structured programming) mass religious cult in programming language history, which destroyed the careers
of several talented computer scientists who joined it, such as David Gries.
Some of the blunders in Pascal's design were later corrected in Modula and Modula-2.
1969
500 people attend an APL conference at IBM's headquarters in Armonk,
New York. The demands for APL's distribution are so great that the event is
later referred to as "The March on Armonk."
Seventies
1970
Forth. Sometime in the early 1970s, Charles Moore writes the
first significant programs in his new language, Forth.
Prolog. Work on Prolog begins about this time. For some time
Prolog became fashionable due to Japan's Fifth Generation initiative. Later it returned to
relative obscurity, although it did not completely disappear from the
language map.
Also sometime in the early 1970s , work on Smalltalk begins at Xerox
PARC, led by Alan Kay. Early versions will include Smalltalk-72, Smalltalk-74,
and Smalltalk-76.
An implementation of Pascal appears on a CDC 6000-series computer.
Icon , a descendant of SNOBOL4, appears.
1972
The manuscript for Konrad Zuse's Plankalkul (see 1946) is finally
published.
Dennis Ritchie produces C. The definitive reference manual for it
will not appear until 1974.
PL/M. In 1972 Gary Kildall implemented a subset of PL/1, called
"PL/M" for microprocessors. PL/M was used to write the CP/M operating system -
and much application software running on CP/M and MP/M. Digital Research also sold a PL/I compiler for the PC
written in PL/M. PL/M was used to write much other software at Intel for the 8080, 8085, and Z-80 processors during the
1970s.
The first implementation of Prolog appears, by Alain Colmerauer and Philippe
Roussel.
1974
Donald E. Knuth publishes the article that deals a decisive blow to the "structured
programming fundamentalists" led by Edsger Dijkstra: Structured Programming with
go to Statements,
ACM Comput. Surv. 6(4): 261-301 (1974).
Another ANSI specification for COBOL appears.
1975
Paul Abrahams (Courant Institute of Mathematical Sciences) destroys the credibility
of the "structured programming" cult in his article "'Structured programming' considered
harmful" (SIGPLAN Notices, April 1975, pp. 13-24).
Tiny BASIC by Bob Albrecht and Dennis Allison (implementation by Dick
Whipple and John Arnold) runs on a microcomputer in 2 KB of RAM. It is
usable on a 4 KB machine, which leaves 2 KB available for the program.
Microsoft is formed on April 4, 1975 to develop and sell
BASIC interpreters
for the Altair 8800. Bill Gates and Paul Allen write a version of BASIC that they sell
to MITS (Micro Instrumentation and Telemetry Systems) on a per-copy royalty
basis. MITS is producing the Altair, one of the earliest 8080-based
microcomputers, which came with an interpreter for a programming language.
Scheme , a LISP dialect by G.L. Steele and G.J. Sussman, appears.
Pascal User Manual and Report, by Jensen and Wirth, is published.
Still considered by many to be the definitive reference on Pascal. This was
a kind of attempt to replicate the success of Basic by relying on the growing "structured
programming" fundamentalism movement started by Edsger Dijkstra. Pascal acquired
a large following in universities as the compiler was made freely available. It was
adequate for teaching, had a fast compiler and was superior to Basic.
B.W. Kernighan describes RATFOR--RATional FORTRAN. It is a preprocessor
that allows C-like control structures in FORTRAN. RATFOR is used in Kernighan
and Plauger's "Software Tools," which appears in 1976.
1976
A backlash against the Dijkstra correctness-proofs pseudo-religious cult starts:
Andrew Tanenbaum (Vrije Universiteit, Amsterdam) publishes the
paper In Defense of Program Testing, or Correctness Proofs Considered
Harmful (SIGPLAN Notices, May 1976, pp. 64-68). It made a crucial contribution to the "structured
programming without GOTO" debate, which was a decisive blow to the
structured programming fundamentalists led by
E. Dijkstra.
Maurice Wilkes, the famous computer scientist and the first president of
the British Computer Society (1957-1960), attacks the "verification cult"
in his article Software Engineering and Structured Programming, published
in IEEE Transactions on Software Engineering (SE-2, No. 4, December 1976,
pp. 274-276). The paper was also presented as a keynote address at the Second
International Conference on Software Engineering, San Francisco, CA, October
1976.
Design System Language , considered to be a forerunner of PostScript,
appears.
1977
AWK was probably the second string-processing language (after Snobol)
to make extensive use of regular expressions. The first version was created at Bell Labs
by Alfred V. Aho, Peter J. Weinberger, and Brian W. Kernighan in 1977. This
was also one of the first widely used languages with built-in garbage collection.
The ANSI standard for MUMPS -- Massachusetts General Hospital Utility
Multi-Programming System -- appears. Used originally to handle medical records,
MUMPS recognizes only a string data-type. Later renamed M.
The design competition that will produce Ada begins. Honeywell Bull's
team, led by Jean Ichbiah, will win the competition. Ada never lived up to its promises
and became an expensive flop.
Kim Harris and others set up FIG, the FORTH interest group. They develop
FIG-FORTH, which they sell for around $20.
UCSD Pascal. In the late 1970s , Kenneth Bowles produces
UCSD Pascal, which makes Pascal available on PDP-11 and Z80-based computers.
Niklaus Wirth begins work on Modula, forerunner of Modula-2 and successor
to Pascal. It was one of the first widely used languages to incorporate the concept
of coroutines.
1978
AWK -- a text-processing language named after the designers, Aho,
Weinberger, and Kernighan -- appears.
FORTRAN 77: The ANSI standard for FORTRAN 77 appears.
1979
Bourne shell. The Bourne shell was included in Unix Version 7.
It was inferior to the C shell, which was developed in parallel, but gained tremendous
popularity on the strength of AT&T's ownership of Unix.
C shell. The Second Berkeley Software Distribution (2BSD)
was released in May 1979. It included updated versions of the 1BSD
software as well as two new programs by Bill Joy that persist on Unix systems
to this day: the vi text editor (a visual version of ex) and the
C shell.
REXX was designed and first implemented between 1979 and mid-1982 by Mike Cowlishaw
of IBM.
Bjarne Stroustrup develops a set of languages -- collectively referred
to as "C With Classes" -- that serve as the breeding ground for C++.
1981
The C shell was extended into tcsh.
Effort begins on a common dialect of LISP, referred to as Common LISP.
Japan begins the Fifth Generation Computer System project. The primary
language is Prolog.
1982
ISO Pascal appears.
In 1982 REXX, one of the first scripting languages, was released by IBM as a product, four years after AWK.
Over the years IBM included REXX in almost all of its operating systems (VM/CMS, VM/GCS, MVS TSO/E, AS/400, VSE/ESA, AIX,
CICS/ESA, PC DOS, and OS/2), and has made versions available for Novell NetWare, Windows, Java, and Linux.
PostScript appears. It revolutionized printing on dot matrix and laser printers.
1983
REXX is included in the third release of IBM's VM/CMS, shipped in 1983.
Smalltalk-80: The Language and Its Implementation, by Goldberg
et al., is published -- an influential early book that promoted the ideas of
OO programming.
Ada appears. Its name comes from Lady Augusta Ada Byron, Countess
of Lovelace and daughter of the English poet Byron. She has been called
the first computer programmer because of her work on Charles Babbage's analytical
engine. In 1983, the Department of Defense directs that all new "mission-critical"
applications be written in Ada.
In late 1983 and early 1984, Microsoft and Digital Research both
release the first C compilers for microcomputers.
In July , the first implementation of C++ appears. The name
was
coined by Rick Mascitti.
In November , Borland's Turbo Pascal hits the scene like a nuclear
blast, thanks to an advertisement in BYTE magazine.
1984
GCC development starts. In 1984 Stallman started
his work on an open source C compiler
that became widely known as gcc. The same year Steven Levy's book
"Hackers"
is published, with a chapter devoted to RMS that presented him in an extremely favorable light.
Icon. R.E. Griswold designs the Icon programming language (see
overview). Like Perl, Icon is a high-level programming language with
a large repertoire of features for processing data structures and character
strings. Icon is an imperative, procedural language with a syntax reminiscent
of C and Pascal, but with semantics at a much higher level (see Griswold,
Ralph E. and Madge T. Griswold, The Icon Programming Language, Second Edition,
Prentice-Hall, Englewood Cliffs, New Jersey, 1990, ISBN 0-13-447889-4).
APL2. A reference manual for APL2 appears. APL2 is an extension
of APL that permits nested arrays.
1985
REXX. The first PC implementation of REXX was released.
Forth controls the submersible sled that locates the wreck of
the Titanic.
Vanilla SNOBOL4 for microcomputers is released.
Methods, a line-oriented Smalltalk for PCs, is introduced.
The first version of GCC able to compile itself appeared in late 1985. The same year the GNU Manifesto was published.
1986
Smalltalk/V appears--the first widely available version of Smalltalk
for microcomputers.
Apple releases Object Pascal for the Mac.
Borland releases Turbo Prolog.
Charles Duff releases Actor, an object-oriented language for developing
Microsoft Windows applications.
Eiffel , another object-oriented language, appears.
C++ appears.
1987
PERL. The first version of Perl, Perl 1.000, was released
by Larry Wall in 1987. See an excellent Perl Timeline
for more information.
Turbo Pascal version 4.0 is released.
1988
The specification for CLOS -- Common LISP Object System -- is published.
Oberon. Niklaus Wirth finishes Oberon, his follow-up to Modula-2.
The language was stillborn, but some of its ideas found their way into Python.
PERL 2 was released.
TCL is created. The Tcl scripting language grew
out of John Ousterhout's work on design tools for integrated circuits at the University of California at
Berkeley in the early 1980s. In the fall of 1987, while on sabbatical at DEC's
Western Research Laboratory, he decided to build an embeddable command language.
He started work on Tcl in early 1988, and began using the first version of Tcl
in a graphical text editor in the spring of 1988. The idea of Tcl is different
from, and to a certain extent more interesting than, the idea of Perl -- Tcl was designed
as an embeddable macro language for applications. In this sense Tcl is closer
to REXX (which was probably one of the first languages used both
as a shell language and as a macro language). Important products that use Tcl
are the Tk toolkit and Expect.
1989
The ANSI C specification is published.
C++ 2.0 arrives in the form of a draft reference manual. The 2.0 version
adds features such as multiple inheritance and pointers to members.
Perl 3.0, released in 1989, was distributed under the GNU General Public License -- one
of the first major open source projects distributed under the GPL, and
probably the first outside the FSF.
zsh. Paul Falstad wrote zsh, a superset of ksh88 which also had many
csh features.
1990
C++ 2.1, detailed in the Annotated C++ Reference Manual by B.
Stroustrup et al., is published. This adds templates and exception-handling features.
FORTRAN 90 includes such new elements as case statements and derived
types.
Kenneth Iverson and Roger Hui present J at the APL90 conference.
1991
Visual Basic wins BYTE's Best of Show award at Spring COMDEX.
PERL 4 is released. In January 1991 the first edition of Programming Perl, a.k.a. the Pink Camel,
by Larry Wall and Randal Schwartz is published by O'Reilly and Associates. It
described the new 4.0 version of Perl. Perl 4.0 itself was released in
March of the same year. The final version of Perl 4 was released in 1993.
Larry Wall is awarded the Dr. Dobbs Journal Excellence in Programming Award.
(March)
1992
Dylan -- named for Dylan Thomas -- an object-oriented language resembling
Scheme, is released by Apple.
1993
ksh93 was released by David Korn. It was the last of
the line of AT&T-developed shells.
ANSI releases the X3J4.1 technical report -- the first-draft proposal
for (gulp) object-oriented COBOL. The standard is expected to be finalized in
1997.
PERL 4. Version 4 was the first widely used version of Perl.
The timing was simply perfect: it was already widely available before the Web
explosion of 1994.
1994
PERL 5. Version 5 was released at the end of 1994.
Microsoft incorporates Visual Basic for Applications into Excel.
1995
In February , ISO accepts the 1995 revision of the Ada language. Called
Ada 95, it includes OOP features and support for real-time systems.
RUBY. December: first release, 0.95.
1996
The first ANSI C++ standard appears.
Ruby 1.0 is released. It did not gain much popularity until later.
1997
Java. In 1997 Java was released. Sun launches a tremendous and widely
successful campaign to replace Cobol with Java as the standard language for writing
commercial applications for the industry.
2011
Dennis Ritchie, the creator of C, dies. He was only 70 at the time.
There are several interesting "language-induced" errors -- errors that a particular programming
language facilitates rather than helps to avoid. They are most studied for C-style languages. Funny,
but PL/1 (from which C was derived) was a better designed language than the much simpler C in several of
those categories.
Avoiding the C-style language design blunder of easily mistyping "=" instead of "=="
One of the most famous C design blunders was the too-small lexical difference between assignment and
comparison (remember that Algol used := for assignment), caused by the design decision
to make the language more compact (terminals at that time were not very reliable, and the number of
symbols typed mattered greatly). In C, assignment is allowed in an if statement, but no attempt was made
to make the language more failsafe by avoiding the possibility of mixing up "=" and
"==". In C syntax the statement
if (alpha = beta) ...
assigns the contents of the variable beta to the variable alpha and executes
the code in the then-branch if beta is non-zero.
It is easy to mix things up and write if (alpha = beta) instead of if (alpha == beta),
which is a pretty nasty, and remarkably common, C-induced bug. In case
you are comparing a constant to a variable, you can often reverse the operands and put the constant first, as in
if ( 1==i ) ...
since
if ( 1=i ) ...
does not make any sense. With this style such a blunder is detected at the syntax level.
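A minimal compilable sketch in C of both the bug and the constant-first defense (the variable names are made up for illustration); gcc and clang flag the suspicious assignment-in-condition form when warnings such as -Wall are enabled:

#include <stdio.h>

int main(void)
{
    int alpha = 0;
    int beta  = 5;

    /* Intended as a comparison but mistyped as an assignment: alpha
       becomes 5 and the branch is taken. Compilers warn about this
       form when warnings are enabled (e.g. gcc/clang with -Wall). */
    if (alpha = beta)
        printf("taken: alpha is now %d\n", alpha);

    /* Constant-first ("Yoda") style: the same typo becomes a hard
       syntax error, because a constant cannot be assigned to. */
    if (5 == beta)          /* mistyping "5 = beta" would not compile */
        printf("beta equals 5\n");

    return 0;
}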
Dealing with the unbalanced "{" and "}" problem in C-style languages
Another nasty problem with C, C++, Java, Perl and other C-style languages is that missing curly brackets
are pretty difficult to find. They can also be inserted incorrectly, ending up with an even nastier
logical error. One effective solution, first implemented in PL/1, was
based on showing the level of nesting in the compiler listing and on the ability to close multiple blocks with a single END statement
(PL/1 did not use the brackets {}; they were introduced in C).
In C one can use pseudo-comments that mark points where the nesting level should be zero, and check those points with
a special program or an editor macro (see the sketch below).
Many editors can also jump to the closing bracket for any given opening bracket and vice versa.
This is useful too, but a less efficient way to solve the problem.
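As an illustration of the pseudo-comment idea, here is a deliberately naive sketch of such a checker in C. The marker //#0 is an invented convention, and the sketch ignores braces inside strings, character constants and comments, so it is only a rough aid, not a parser:

#include <stdio.h>
#include <string.h>

/* Naive brace-depth checker: reads C source from stdin, tracks the {}
   nesting depth, flags any line carrying the (invented) marker "//#0"
   at which the depth is not zero, and reports the depth left over at
   end of file. Braces inside strings and comments are NOT handled. */
int main(void)
{
    char line[4096];
    int depth = 0;
    int lineno = 0;

    while (fgets(line, sizeof line, stdin) != NULL) {
        lineno++;
        for (const char *p = line; *p != '\0'; p++) {
            if (*p == '{') depth++;
            else if (*p == '}') depth--;
        }
        if (depth < 0)
            printf("line %d: more '}' than '{' so far (depth %d)\n", lineno, depth);
        if (strstr(line, "//#0") != NULL && depth != 0)
            printf("line %d: marker //#0 but depth is %d\n", lineno, depth);
    }
    if (depth != 0)
        printf("end of file: unbalanced braces, depth %d\n", depth);
    return 0;
}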
The problem of unclosed literals
Specifying a maximum length for literals is an effective way of catching a missing quote. This idea was
first implemented in the debugging PL/1 compilers. You can also have an option that limits a literal to a single line. In general, multi-line literals should use distinct lexical markers (like the "here document" construct in the shell).
Some languages like Perl provide the opportunity to use the concatenation operator for splitting literals
into multiple lines, which are "merged" at compile time. But if there is no limit on the number of lines
a string literal can occupy, a bug can still slip in where an unmatched quote is closed by another unmatched
quote in a nearby literal, "commenting out" part of the code; so this alone does not help much.
A limit on the length of literals can be communicated via a pragma statement at compile time for a particular fragment of
the text. This is an effective way to avoid the problem, since usually only a few places in a program use
multi-line literals, if any.
Editors that use syntax coloring help to detect the unclosed-literal problem, but there are cases where they
are useless.
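For comparison, C effectively enforces the single-line rule: an ordinary string literal must end on the line where it starts, so an unterminated string is reported at the offending line, and long strings are split with adjacent-literal concatenation. A small sketch:

#include <stdio.h>

int main(void)
{
    /* Adjacent string literals are concatenated at compile time, so a
       long message can be split across lines without any multi-line
       literal syntax. */
    const char *msg = "a long message that is split "
                      "across several source lines "
                      "and glued together by the compiler";

    /* An unterminated literal such as
           const char *bad = "oops;
       is rejected immediately on that line, because a plain C string
       literal cannot run past the end of the line. */
    printf("%s\n", msg);
    return 0;
}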
Commenting out blocks of code
This is best done not with comments but with the preprocessor, if the language has one (PL/1,
C, etc.), as in the sketch below.
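In C the usual idiom is a preprocessor conditional; unlike a /* ... */ comment, an #if 0 block can safely surround code that itself contains comments. A minimal sketch:

#include <stdio.h>

int main(void)
{
    printf("active code\n");

#if 0   /* everything down to the matching #endif is compiled out */
    printf("temporarily disabled\n");   /* nested comments are fine here */
    printf("more disabled code\n");
#endif

    return 0;
}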
The "dangling else" problem
Having both an if-else and an if statement leads to some possibilities of confusion
when one of the clauses of a selection statement is itself a selection statement. For example, the
C++ code
if (level >= good)
    if (level == excellent)
        cout << "excellent" << endl;
else
    cout << "bad" << endl;
is intended to process a three-state situation in which something can be bad, good
or (as a special case of good) excellent; it is supposed to print an appropriate
description for the excellent and bad cases, and print nothing for the good
case. The indentation of the code reflects these expectations. Unfortunately, the code does not do
this. Instead, it prints excellent for the excellent case, bad for the good
case, and nothing for the bad case.
The problem is deciding which if matches the else in this expression. The basic
rule is
an else matches the nearest previous unmatched if
There are two ways to avoid the dangling else problem:
reverse the logic of the outer branch, so that the else is nested inside another
else instead of an unmatched if:
if (level < good)
    cout << "bad" << endl;
else
    if (level == excellent)
        cout << "excellent" << endl;
use brackets around the if clause so that the inner if is terminated by the
end of the enclosing bracket:
if (level >= good) {
    if (level == excellent)
        cout << "excellent" << endl;
}
else
    cout << "bad" << endl;
In fact, you can avoid the dangling else problem completely by always using brackets around the
clauses of an if or if-else statement, even if they only enclose a single statement.
So a good strategy for writing if-else statements is:
Always use { brace brackets } around the clauses of an if-else or
if statement.
(This strategy also helps if you need to cut-and-paste more code into one of the clauses: if
a clause consists of only one statement, without enclosing brace brackets, and you add another
statement to it, then you also need to add the brace brackets. Having the brace brackets there
already makes the job easier.)
Development was easier in the days of classical CICS, where all the logic was managed by a
single mainframe computer and 3270 clients were responsible for nothing except displaying
output and responding to keystrokes. But that's no longer adequate when smart phones and PCs
are more powerful than mainframes of old, and our task is to develop systems that can integrate
large shared databases with local processing to provide the modern systems that we need. This
needs web services, but development of distributed systems with COBOL, Java, C#, and similar
technology is difficult.
Since 2015 MANASYS Jazz has been able to develop CICS web services, but it remained
difficult to develop client programs to work with them. Build 16.1 (December 2020) was a
major breakthrough, offering integrated development of COBOL CICS web services for the
mainframe, and C# client interfaces that make client development as easy as discovering
properties and methods with Intellisense.
Build
16.2 (January 2021) supported services returning several records. We'd found that each
request/response took a second or two, whether it was returning 1 or many records, but the
interface could page forward and back instantly within the list of returned records. Build 16.2
also offered easy addition of related-table data, and interfaces for VSAM as well as DB2 web
services. Build
16.3 (June 2021) takes a further step, adding services and interfaces for parent-child
record collections, for example a Department record with the list of Employees who work
there.
Our video
"Bridging Two Worlds" has been updated to demonstrate these features. See how easy it is to
create a web service and related client logic that will display and update one or many records
at a time. See how MANASYS controls updating with CICS-style pseudo-locking, preventing invalid
updates automatically. See how easily MANASYS handles data from many records at a time,
resulting in clean and efficient service architecture.
The working assumption should be "Nobody, including myself, will ever reuse this code". It is a very realistic assumption, as programmers
are notoriously reluctant to reuse code from somebody else. And as your programming skills evolve, your old code will look pretty
foreign to you.
"In the one and only true way. The object-oriented version of 'Spaghetti code' is, of course, 'Lasagna code'. (Too many layers)."
- Roberto Waltman
This week on our show we discuss this quote. Does OOP encourage too many layers in code?
I first saw this phenomenon when doing Java programming. It wasn't a fault of the language itself, but of excessive levels of
abstraction. I wrote about this before in
the false abstraction antipattern
So what is your story of there being too many layers in the code? Or do you disagree with the quote, or us?
Bertil Muth •
Dec 9 '18
I once worked on a project whose codebase had over a hundred classes for quite a simple job. The programmer was no
longer available and had used almost every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality.
Maybe the unnecessarily thick lasagne is a symptom of devs looking for a one-size-fits-all solution.
Nested Software •
Dec 9 '18 • Edited on Dec 16
I think there's a very pervasive mentality of "I must use these tools, design patterns, etc." instead of "I need
to solve a problem", and then only using the tools that are really necessary. I'm not sure where it comes from, but there's a kind of
brainwashing that people have where they're not happy unless they're applying complicated techniques to accomplish a task. It's a
fundamental problem in software development...
Nested Software •
Dec 9 '18
I tend to think of layers of inheritance when it comes to OO. I've seen a lot of cases where the developers just build
up long chains of inheritance. Nowadays I tend to think that such a static way of sharing code is usually bad. Having a base class
with one level of subclasses can be okay, but anything more than that is not a great idea in my book. Composition is almost always
a better fit for re-using code.
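To make the contrast concrete, here is a minimal Java sketch (all class names are invented for illustration) of the composition approach the comment recommends: behaviour is injected into the class rather than inherited through a chain of base classes.

// Instead of CsvReport extends FormattedReport extends Report (a chain that
// couples every subclass to every ancestor), the behaviour is passed in.
interface Formatter {
    String format(String body);
}

class UpperCaseFormatter implements Formatter {
    public String format(String body) {
        return body.toUpperCase();
    }
}

class Report {
    private final Formatter formatter;   // composed, not inherited

    Report(Formatter formatter) {
        this.formatter = formatter;
    }

    String render(String body) {
        return formatter.format(body);
    }
}

class CompositionDemo {
    public static void main(String[] args) {
        Report report = new Report(new UpperCaseFormatter());
        System.out.println(report.render("quarterly numbers"));   // QUARTERLY NUMBERS
    }
}

Swapping in a different Formatter changes the behaviour without touching Report, which is the reuse that long inheritance chains make difficult.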
"... Lasagna Code is layer upon layer of abstractions, objects and other meaningless misdirections that result in bloated, hard to maintain code all in the name of "clarity". ..."
"... Turbo Pascal v3 was less than 40k. That's right, 40 thousand bytes. Try to get anything useful today in that small a footprint. Most people can't even compile "Hello World" in less than a few megabytes courtesy of our object-oriented obsessed programming styles which seem to demand "lines of code" over clarity and "abstractions and objects" over simplicity and elegance. ..."
Anyone who claims to be even remotely versed in computer science knows what "spaghetti code" is. That type of code still sadly
exists. But today we also have, for lack of a better term (and sticking to the pasta metaphor), "lasagna code".
Lasagna Code is layer upon layer of abstractions, objects and other meaningless misdirections that result in bloated, hard to
maintain code all in the name of "clarity". It drives me nuts to see how bad some code today is. And then you come across
how small Turbo Pascal v3 was, and after comprehending it was a
full-blown Pascal compiler, one wonders why applications and compilers today are all so massive.
Turbo Pascal v3 was less than 40k. That's right, 40 thousand bytes. Try to get anything useful today in that small a footprint.
Most people can't even compile "Hello World" in less than a few megabytes courtesy of our object-oriented obsessed programming styles
which seem to demand "lines of code" over clarity and "abstractions and objects" over simplicity and elegance.
Back when I was starting out in computer science I thought by today we'd be writing a few lines of code to accomplish much. Instead,
we write hundreds of thousands of lines of code to accomplish little. It's so sad it's enough to make one cry, or just throw your
hands in the air in disgust and walk away.
There are bright spots. There are people out there that code small and beautifully. But they're becoming rarer, especially when
someone who seemed to have thrived on writing elegant, small, beautiful code recently passed away. Dennis Ritchie understood you
could write small programs that did a lot. He comprehended that the algorithm is at the core of what you're trying to accomplish.
Create something beautiful and well thought out and people will examine it forever, such as
Thompson's version of Regular Expressions !
I've seen many infrastructures in my day. I work for a company with a very complicated infrastructure now. They've got a dev/stage/prod
environment for every product (and they've got many of them). Trust is not a word spoken lightly here. There is no 'trust' for even
sysadmins (I've been working here for 7 months now and still don't have production sudo access). Developers constantly complain about
not having the access that they need to do their jobs and there are multiple failures a week that can only be fixed by a small handful
of people that know the (very complex) systems in place. Not only that, but in order to save work, they've used every cutting-edge
piece of software that they can get their hands on (mainly to learn it so they can put it on their resume, I assume), but this causes
more complexity that only a handful of people can manage. As a result of this the site uptime is (on a good month) 3 nines at best.
In my last position (pronto.com) I put together an infrastructure that any idiot could maintain. I used unmanaged switches behind
a load-balancer/firewall and a few VPNs around to the different sites. It was simple. It had very little complexity, and a new sysadmin
could take over in a very short time if I were to be hit by a bus. A single person could run the network and servers and if the documentation
was lost, a new sysadmin could figure it out without much trouble.
Over time, I handed off my ownership of many of the Infrastructure components to other people in the operations group and of course,
complexity took over. We ended up with a multi-tier network with bunches of VLANs and complexity that could only be understood with
charts, documentation and a CCNA. Now the team is 4+ people and if something happens, people run around like chickens with their
heads cut off not knowing what to do or who to contact when something goes wrong.
Complexity kills productivity. Security is inversely proportionate to usability. Keep it simple, stupid. These are all rules to
live by in my book.
Downtimes:
Beatport: not unlikely to have 1-2 hours of downtime for the main site per month.
Pronto: several 10-15 minute outages a year.
Pronto (under my supervision): a few seconds a month (mostly human error though, no mechanical failure).
John Waclawsky (from Cisco's mobile solutions group) coined the term S4 for "Systems
Standards Stockholm Syndrome" - like hostages becoming attached to their captors, systems
standard participants become wedded to the process of setting standards for the sake of
standards.
"... The "Stockholm Syndrome" describes the behavior of some hostages. The "System Standards Stockholm Syndrome" (S4) describes the behavior of system standards participants who, over time, become addicted to technology complexity and hostages of group thinking. ..."
"... What causes S4? Captives identify with their captors initially as a defensive mechanism, out of fear of intellectual challenges. Small acts of kindness by the captors, such as granting a secretarial role (often called a "chair") to a captive in a working group are magnified, since finding perspective in a systems standards meeting, just like a hostage situation, is by definition impossible. Rescue attempts are problematic, since the captive could become mentally incapacitated by suddenly being removed from a codependent environment. ..."
This was sent to me by a colleague. From "S4 -- The System Standards Stockholm
Syndrome" by John G. Waclawsky, Ph.D.:
The "Stockholm Syndrome" describes the behavior of some hostages. The "System
Standards Stockholm Syndrome" (S4) describes the behavior of system standards
participants who, over time, become addicted to technology complexity and hostages of
group thinking.
12:45 PM -- While we flood you with IMS-related content this week, perhaps it's sensible to
share some airtime with a clever warning about being held "captive" to the hype.
This warning comes from John G. Waclawsky, PhD, senior technical staff, Wireless Group,
Cisco Systems Inc. (Nasdaq: CSCO).
Waclawsky, writing in the July issue of Business Communications Review , compares the fervor over
IMS to the " Stockholm Syndrome ," a term that
comes from a 1973 hostage event in which hostages became sympathetic to their captors.
Waclawsky says a form of the Stockholm Syndrome has taken root in technical standards
groups, which he calls "System Standards Stockholm Syndrome," or S4.
Here's a snippet from Waclawsky's column:
What causes S4? Captives identify with their captors initially as a defensive
mechanism, out of fear of intellectual challenges. Small acts of kindness by the captors,
such as granting a secretarial role (often called a "chair") to a captive in a working
group are magnified, since finding perspective in a systems standards meeting, just like
a hostage situation, is by definition impossible. Rescue attempts are problematic, since
the captive could become mentally incapacitated by suddenly being removed from a
codependent environment.
The full article can be found here -- R. Scott Raynovich, US
Editor, Light Reading
Sunday, August 07, 2005
S4 - The Systems Standards Stockholm Syndrome
John Waclawsky, part of the Mobile Wireless Group at Cisco Systems, features an
interesting article in the July 2005 issue of the Business Communications Review on The Systems Standards Stockholm Syndrome.
Since his responsibilities include standards activities (WiMAX, IETF, OMA, 3GPP and TISPAN),
identification of product requirements and the definition of mobile wireless and broadband
architectures, he seems to know very well what he is talking about, namely the IP Multimedia
Subsystem (IMS). See also his article in the June 2005 issue on IMS 101 - What You Need To Know Now.
See also the Wikedpedia glossary from Martin
below:
IMS. Internet Monetisation System . A minor adjustment to Internet Protocol to add a
"price" field to packet headers. Earlier versions referred to Innovation Minimisation
System . This usage is now deprecated. (Expected release Q2 2012, not available in all
markets, check with your service provider in case of sudden loss of unmediated
connectivity.)
It is so true that I have to cite it completely (bold emphasis added):
The "Stockholm Syndrome" describes the behavior of some hostages. The "System Standards
Stockholm Syndrome" (S 4 ) describes the behavior of system standards participants
who, over time, become addicted to technology complexity and hostages of group thinking.
Although the original name derives from a 1973 hostage incident in Stockholm, Sweden, the
expanded name and its acronym, S 4 , applies specifically to systems standards
participants who suffer repeated exposure to cult dogma contained in working group documents
and plenary presentations. By the end of a week in captivity, Stockholm Syndrome victims may
resist rescue attempts, and afterwards refuse to testify against their captors. In system
standards settings, S4 victims have been known to resist innovation and even refuse to
compete against their competitors.
Recent incidents involving too much system standards attendance have resulted in people
being captured by radical ITU-like factions known as the 3GPP or 3GPP2.
I have to add of course ETSI TISPAN and it seems that the syndrome is also spreading into
IETF, especially to SIP and SIPPING.
The victims evolve to unwitting accomplices of the group as they become immune to the
frustration of slow plodding progress, thrive on complexity and slowly turn a blind eye to
innovative ideas. When released, they continue to support their captors in filtering out
disruptive innovation, and have been known to even assist in the creation and perpetuation of
bureaucracy.
Years after intervention and detoxification, they often regret their system standards
involvement. Today, I am afraid that S4 cases occur regularly at system standards
organizations.
What causes S4? Captives identify with their captors initially as a defensive
mechanism, out of fear of intellectual challenges. Small acts of kindness by the captors,
such as granting a secretarial role (often called a "chair") to a captive in a working group
are magnified, since finding perspective in a systems standards meeting, just like a hostage
situation, is by definition impossible. Rescue attempts are problematic, since the captive
could become mentally incapacitated by suddenly being removed from a codependent
environment.
It's important to note that these symptoms occur under tremendous emotional and/or
physical duress due to lack of sleep and abusive travel schedules. Victims of S4
often report the application of other classic "cult programming" techniques, including:
The encouraged ingestion of mind-altering substances. Under the influence of alcohol,
complex systems standards can seem simpler and almost rational.
"Love-fests" in which victims are surrounded by cultists who feign an interest in them
and their ideas. For example, "We'd love you to tell us how the Internet would solve this
problem!"
Peer pressure. Professional, well-dressed individuals with standing in the systems
standards bureaucracy often become more attractive to the captive than the casual sorts
commonly seen at IETF meetings.
Back in their home environments, S4 victims may justify continuing their
bureaucratic behavior, often rationalizing and defending their system standard tormentors,
even to the extent of projecting undesirable system standard attributes onto component
standards bodies. For example, some have been heard murmuring, " The IETF is no picnic and
even more bureaucratic than 3GPP or the ITU, " or, "The IEEE is hugely political." (For more
serious discussion of component and system standards models, see " Closed Architectures, Closed Systems And
Closed Minds ," BCR, October 2004.)
On a serious note, the ITU's IMS (IP Multimedia Subsystem) shows every sign of becoming
the latest example of systems standards groupthink. Its concepts are more than seven years
old and still not deployed, while its release train lengthens with functional expansions and
change requests. Even a cursory inspection of the IMS architecture reveals the complexity
that results from:
decomposing every device into its most granular functions and linkages; and
tracking and controlling every user's behavior and related billing.
The proliferation of boxes and protocols, and the state management required for data
tracking and control, lead to cognitive overload but little end user value.
It is remarkable that engineers who attend system standards bodies and use modern
Internet- and Ethernet-based tools don't apply to their work some of the simplicity learned
from years of Internet and Ethernet success: to build only what is good enough, and as simply
as possible.
Now here I have to break in: I think the syndrome is also spreading to
the IETF, because the IETF is starting to leave these principles behind - especially in SIP
and SIPPING, not to mention Session Border Confuser (SBC).
The lengthy and detailed effort that characterizes systems standards sometimes produces a
bit of success, as the 18 years of GSM development (1980 to 1998) demonstrate. Yet such
successes are highly optimized, very complex and thus difficult to upgrade, modify and
extend.
Email is a great example. More than 15 years of popular email usage have passed, and today
email on wireless is just beginning to approach significant usage by ordinary people.
The IMS is being hyped as a way to reduce the difficulty of integrating new services, when
in fact it may do just the opposite. IMS could well inhibit new services integration due to
its complexity and related impacts on cost, scalability, reliability, OAM, etc.
Not to mention the sad S4 effects on all those engineers participating in
IMS-related standards efforts.
Make each program do one thing well. To do a new job, build afresh rather than
complicate old programs by adding new features.
By now, and to be frank in the last 30 years too, this is complete and utter bollocks.
Feature creep is everywhere; typical shell tools are chock-full of spurious additions, from
formatting to "side" features, all half-assed and barely, if at all, consistent.
By now, and to be frank in the last 30 years too, this is complete and utter
bollocks.
There is not one single other idea in computing that is as unbastardised as the unix
philosophy - given that it's been around fifty years. Heck, Microsoft only just developed
PowerShell - and if that's not Microsoft's take on the Unix philosophy, I don't know what
is.
In that same time, we've vacillated between thick and thin computing (mainframes, thin
clients, PCs, cloud). We've rebelled against at least four major schools of program design
thought (structured, procedural, symbolic, dynamic). We've had three different database
revolutions (RDBMS, NoSQL, NewSQL). We've gone from grassroots movements to corporate
dominance on countless occasions (notably - the internet, IBM PCs/Wintel, Linux/FOSS, video
gaming). In public perception, we've run the gamut from clerks ('60s-'70s) to boffins
('80s) to hackers ('90s) to professionals ('00s post-dotcom) to entrepreneurs/hipsters/bros
('10s "startup culture").
It's a small miracle that iproute2 only has formatting options and
grep only has --color. If they feature-crept anywhere near the same
pace as the rest of the computing world, they would probably be a RESTful SaaS microservice
with ML-powered autosuggestions.
This is because adding a new feature is actually easier than trying to figure out how
to do it the Unix way - often you already have the data structures in memory and the
functions to manipulate them at hand, so adding a --frob parameter that does
something special with that feels trivial.
GNU and its stance of ignoring the Unix philosophy (AFAIK Stallman said at some point he
didn't care about it), while becoming the most available set of tools for Unix systems,
didn't help either.
No, it certainly isn't. There are tons of well-designed, single-purpose tools
available for all sorts of purposes. If you live in the world of heavy, bloated GUI apps,
well, that's your prerogative, and I don't begrudge you it, but just because you're not
aware of alternatives doesn't mean they don't exist.
typical shell tools are chock-full of spurious additions,
What does "feature creep" even mean with respect to shell tools? If they have lots of
features, but each function is well-defined and invoked separately, and still conforms to
conventional syntax, uses stdio in the expected way, etc., does that make it un-Unixy? Is
BusyBox bloatware because it has lots of discrete shell tools bundled into a single
binary?
nirreskeya
3 years ago
I have succumbed to the temptation you offered in your preface: I do write you off
as envious malcontents and romantic keepers of memories. The systems you remember so
fondly (TOPS-20, ITS, Multics, Lisp Machine, Cedar/Mesa, the Dorado) are not just out
to pasture, they are fertilizing it from below.
Your judgments are not keen, they are intoxicated by metaphor. In the Preface you
suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag.
In Chapter 1 you are in turn infected by a virus, racked by drug addiction, and
addled by puffiness of the genome.
Yet your prison without coherent design continues to imprison you. How can this
be, if it has no strong places? The rational prisoner exploits the weak places,
creates order from chaos: instead, collectives like the FSF vindicate their jailers
by building cells almost compatible with the existing ones, albeit with more
features. The journalist with three undergraduate degrees from MIT, the researcher at
Microsoft, and the senior scientist at Apple might volunteer a few words about the
regulations of the prisons to which they have been transferred.
Your sense of the possible is in no sense pure: sometimes you want the same thing
you have, but wish you had done it yourselves; other times you want something
different, but can't seem to get people to use it; sometimes one wonders why you just
don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice,
just a future whose intellectual tone and interaction style is set by Sonic the
Hedgehog. You claim to seek progress, but you succeed mainly in whining.
Here is my metaphor: your book is a pudding stuffed with apposite observations,
many well-conceived. Like excrement, it contains enough undigested nuggets of
nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of
contempt and of envy.
"... There's still value in understanding the traditional UNIX "do one thing and do it well" model where many workflows can be done as a pipeline of simple tools each adding their own value, but let's face it, it's not how complex systems really work, and it's not how major applications have been working or been designed for a long time. It's a useful simplification, and it's still true at /some/ level, but I think it's also clear that it doesn't really describe most of reality. ..."
There's still value in understanding the traditional UNIX "do one thing and do it
well" model where many workflows can be done as a pipeline of simple tools each adding their
own value, but let's face it, it's not how complex systems really work, and it's not how
major applications have been working or been designed for a long time. It's a useful
simplification, and it's still true at /some/ level, but I think it's also clear that it
doesn't really describe most of reality.
http://www.itwire.com/business-it-news/open-source/65402-torvalds-says-he-has-no-strong-opinions-on-systemd
Almost nothing on the Desktop works as the original Unix inventors prescribed as the "Unix
way", and even editors like "Vim" are questionable since it has integrated syntax
highlighting and spell checker. According to dogmatic Unix Philosophy you should use "ed, the
standard editor" to compose the text and then pipe your text into "spell". Nobody really
wants to work that way.
But while "Unix Philosophy" in many ways have utterly failed as a way people actually work
with computers and software, it is still very good to understand, and in many respects still
very useful for certain things. Personally I love those standard Linux text tools like
"sort", "grep" "tee", "sed" "wc" etc, and they have occasionally been very useful even
outside Linux system administration.
One of the recurring themes of any technology discussion is programming language. It doesn't
take much effort to find blog posts with dramatic headlines (and even more dramatic comments)
about how shipping a new project with Haskell or Clojure or Elm improved someone's job,
marriage, and life. These success stories are posted by raving fans that have nothing but the
best to say about their language of choice. A common thread running through these posts is that
they are typically tied to building out new, greenfield projects. I can't help but wonder.
After the honeymoon of building a new project with a new programming language, what happens
next? Is it all bubble gum and roses?
Sadly, it doesn't matter how suited to the job a language is, how much fun it is to program
in, or how much you learn along the way. What matters the most is if the company you work for
can support it. Engineers move on, there are -- believe it or not -- lean times, and after a
few years there is no one left who can support the new esoteric system. Once the application is
in production, how do you support it? Who is going to be on call?
Can't you solve this problem by hiring? Not really. First you need to either find someone
with the appropriate skill set or train someone in the skills. Both of these cost time and
money. Second, what is the new hire going to do? They will be tasked with a part-time
responsibility of maintaining a legacy system written in an esoteric language, while everyone
else in the organization is working in something else. And they will be the 24/7 on-call
support person. You can help with the on-call situation by hiring an additional two or three
people to help support the service. Assuming you need three engineers to maintain a healthy
on-call schedule, at roughly $200K per engineer, you are going to be spending $600K a year on
this service. Does this new programming language save you that much money every year over using
the language everyone else in the organization knows? As a manager or technology lead, what do
you do? Keep trying to hire? Keep spending $600K a year on a programming language with no
discernible business impact? No. You design it out of the system.
A case study. We had an internal development team working on a new documentation portal and
they chose to use Elm for the frontend when everyone else was using Dart and JavaScript. The
team was able to get the basics up and running quickly, and man they were having fun. But the
service became increasingly difficult to manage as product requirements expanded beyond the
strengths of the Elm ecosystem's core competencies. Shortly after, the core development team
left to pursue other opportunities.
At first we tried to train existing engineers on Elm. That doesn't work. Not because Elm is
bad, but because people don't want to change the direction of their career just to support
someone else's legacy project. A second option is to hire at least two, preferably three Elm
engineers. This is harder than it sounds. Not all engineers are excited about learning esoteric
languages, and these new engineers you hire will be quickly wondering about their career
development prospects if they are tied to maintaining the only legacy system written in Elm
while everyone else is working on something else.
In the end, instead of trying to maintain the system it was scrapped and rewritten in the
common frontend language the rest of the organization uses. Rewrites are a difficult decision
for well-known reasons, but ultimately it was the correct choice.
The thesis of this post is that you need to choose a programming language that your
organization can support. A natural corollary is that if you want to introduce a new
programming language to a company, it is your responsibility to convince the business of the
benefit of the language. You need to generate organizational support for the language
before you go ahead and start using it. This can be difficult, it can be uncomfortable,
and you could be told no. But without that organizational support, your new service is dead in
the water.
The most important criterion for choosing a programming language is choosing something your
organization can support.
"... I once worked for a project, the codebase had over a hundred classes for quite a simple job to be done. The programmer was no longer available and had almost used every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality. Maybe the unnecessary thick lasagne is a symptom of devs looking for a one-size-fits-all solution. ..."
Shrek: Object-oriented programs are like onions. Donkey: They stink? Shrek: Yes. No. Donkey: Oh, they make you cry. Shrek: No. Donkey: Oh, you leave em out in the sun, they get all brown, start sproutin’ little white hairs. Shrek: No. Layers. Onions have layers. Object-oriented programs have layers. Onions have layers. You get it? They both have
layers. Donkey: Oh, they both have layers. Oh. You know, not everybody like onions.
Unrelated, but I love both spaghetti and lasagna 😋
I once worked for a project, the codebase had over a hundred classes for quite a simple job to be done. The programmer was no
longer available and had almost used every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality.
Maybe the unnecessary thick lasagne is a symptom of devs looking for a one-size-fits-all solution.
I think there's a very pervasive mentality of "I must use these tools, design patterns, etc." instead of "I need to
solve a problem" and then only use the tools that are really necessary. I'm not sure where it comes from, but there's a kind of brainwashing
that people have where they're not happy unless they're applying complicated techniques to accomplish a task. It's a fundamental
problem in software development...
I tend to think of layers of inheritance when it comes to OO. I've seen a lot of cases where the developers just build
up long chains of inheritance. Nowadays I tend to think that such a static way of sharing code is usually bad. Having a base class
with one level of subclasses can be okay, but anything more than that is not a great idea in my book. Composition is almost always
a better fit for re-using code.
Inheritance is my preferred option for things that model type hierarchies. For example, widgets in a UI, or literal types in a
compiler.
One reason inheritance is over-used is because languages don't offer enough options to do composition correctly. It ends up becoming
a lot of boilerplate code. Proper support for mixins would go a long way to reducing bad inheritance.
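Java has no true mixins, but since Java 8 interface default methods can approximate them. The sketch below (all names invented for illustration) mixes two reusable behaviours into a class without any base-class chain.

// Default methods let a class pick up shared behaviour from several
// interfaces instead of inheriting it from a single base class.
interface Auditable {
    default String auditTag() {
        return "[audit:" + getClass().getSimpleName() + "]";
    }
}

interface Printable {
    String body();

    default void print() {
        System.out.println(body());
    }
}

class Invoice implements Auditable, Printable {
    public String body() {
        return "Invoice #42 " + auditTag();
    }
}

class MixinDemo {
    public static void main(String[] args) {
        new Invoice().print();   // prints: Invoice #42 [audit:Invoice]
    }
}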
It is always up to the task. For small programs of course you don't need so many layers, interfaces and so on. For a bigger,
more complex one you need them to avoid a lot of issues: code duplication, unreadable code, constant merge conflicts, etc.
I'm building a personal project as a means to get something from zero to production for learning purposes, and I am struggling with
wiring the front-end to the back-end. Either I dump all the code in the fetch callback or I use DTOs, with two sets of interfaces to describe
the API data structure and the internal data structure... It's a mess really, but I haven't found a good level of compromise.
It's interesting, because a project that gets burned by spaghetti can drift into lasagna code to overcompensate. Still bad, but lasagna
code is somewhat more manageable (just a huge headache to reason about).
But having an ungodly combination of those two... I dare not think about it. shudder
Sidenote before I finish listening: I appreciate that I can minimize the browser on mobile and have this keep playing, unlike
with other apps (looking at you, YouTube).
The pasta theory is a theory of programming. It is a common analogy for application development describing different programming
structures as popular pasta dishes. Pasta theory highlights the shortcomings of the code. These analogies include spaghetti, lasagna
and ravioli code.
Code smells or anti-patterns are a common classification of source code quality. There is also classification based on food which
you can find on Wikipedia.
Spaghetti code is a pejorative term for source code that has a complex and tangled control structure, especially one using many
GOTOs, exceptions, threads, or other "unstructured" branching constructs. It is named such because program flow tends to look
like a bowl of spaghetti, i.e. twisted and tangled. Spaghetti code can be caused by several factors, including inexperienced programmers
and a complex program which has been continuously modified over a long life cycle. Structured programming greatly decreased the incidence
of spaghetti code.
Ravioli code
Ravioli code is a type of computer program structure, characterized by a number of small and (ideally) loosely-coupled software
components. The term is in comparison with spaghetti code, comparing program structure to pasta; with ravioli (small pasta pouches
containing cheese, meat, or vegetables) being analogous to objects (which ideally are encapsulated modules consisting of both code
and data).
Lasagna code
Lasagna code is a type of program structure, characterized by several well-defined and separable layers, where each layer of code
accesses services in the layers below through well-defined interfaces. The term is in comparison with spaghetti code, comparing program
structure to pasta.
Spaghetti with meatballs
The term "spaghetti with meatballs" is a pejorative term used in computer science to describe loosely constructed object-oriented
programming (OOP) that remains dependent on procedural code. It may be the result of a system whose development has transitioned
over a long life-cycle, language constraints, micro-optimization theatre, or a lack of coherent coding standards.
Do you know about other interesting source code classification?
When introducing a new tool, programming language, or dependency into your environment, what
steps do you take to evaluate it? In this article, I will walk through a six-question framework
I use to make these determinations.
What problem am I trying to solve?
We all get caught up in the minutiae of the immediate problem at hand. An honest, critical
assessment helps divulge broader root causes and prevents micro-optimizations.
Let's say you are experiencing issues with your configuration management system. Day-to-day
operational tasks are taking longer than they should, and working with the language is
difficult. A new configuration management system might alleviate these concerns, but make sure
to take a broader look at this system's context. Maybe switching from virtual machines to
immutable containers eases these issues and more across your environment while being an
equivalent amount of work. At this point, you should explore the feasibility of more
comprehensive solutions as well. You may decide that this is not a feasible project for the
organization at this time due to a lack of organizational knowledge around containers, but
conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next
quarter.
This intellectual exercise helps you drill down to the root causes and solve core issues,
not the symptoms of larger problems. This is not always going to be possible, but be
intentional about making this decision.
Now that we have identified the problem, it is time for critical evaluation of both
ourselves and the selected tool.
A particular technology might seem appealing because it is new, because you read a cool blog
post about it, or because you want to be the one giving a conference talk. Bells and whistles can be
nice, but the tool must resolve the core issues you identified in the first
question.
What am I giving up?
The tool will, in fact, solve the problem, and we know we're solving the right
problem, but what are the tradeoffs?
These considerations can be purely technical. Will the lack of observability tooling prevent
efficient debugging in production? Does the closed-source nature of this tool make it more
difficult to track down subtle bugs? Is managing yet another dependency worth the operational
benefits of using this tool?
Additionally, include the larger organizational, business, and legal contexts that you
operate under.
Are you giving up control of a critical business workflow to a third-party vendor? If that
vendor doubles their API cost, is that something that your organization can afford and is
willing to accept? Are you comfortable with closed-source tooling handling a sensitive bit of
proprietary information? Does the software licensing make this difficult to use
commercially?
While not simple questions to answer, taking the time to evaluate this upfront will save you
a lot of pain later on.
Is the project or vendor healthy?
This question comes with the addendum "for the balance of your requirements." If you only
need a tool to get your team over a four to six-month hump until Project X is
complete, this question becomes less important. If this is a multi-year commitment and the tool
drives a critical business workflow, this is a concern.
When going through this step, make use of all available resources. If the solution is open
source, look through the commit history, mailing lists, and forum discussions about that
software. Does the community seem to communicate effectively and work well together, or are
there obvious rifts between community members? If part of what you are purchasing is a support
contract, use that support during the proof-of-concept phase. Does it live up to your
expectations? Is the quality of support worth the cost?
Make sure you take a step beyond GitHub stars and forks when evaluating open source tools as
well. Something might hit the front page of a news aggregator and receive attention for a few
days, but a deeper look might reveal that only a couple of core developers are actually working
on a project, and they've had difficulty finding outside contributions. Maybe a tool is open
source, but a corporate-funded team drives core development, and support will likely cease if
that organization abandons the project. Perhaps the API has changed every six months, causing a
lot of pain for folks who have adopted earlier versions.
What are the risks?
As a technologist, you understand that nothing ever goes as planned. Networks go down,
drives fail, servers reboot, rows in the data center lose power, entire AWS regions become
inaccessible, or BGP hijacks re-route hundreds of terabytes of Internet traffic.
Ask yourself how this tooling could fail and what the impact would be. If you are adding a
security vendor product to your CI/CD pipeline, what happens if the vendor goes
down?
This brings up both technical and business considerations. Do the CI/CD pipelines simply
time out because they can't reach the vendor, or do you have it "fail open" and allow the
pipeline to complete with a warning? This is a technical problem but ultimately a business
decision. Are you willing to go to production with a change that has bypassed the security
scanning in this scenario?
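As a hedged illustration only (the scanner interface here is hypothetical, not any particular vendor's API), a "fail open" pipeline step might look roughly like this in Java: the vendor call is attempted, and if it fails, the build proceeds with a recorded warning instead of timing out.

import java.time.Duration;

// Sketch of a "fail open" policy for a CI/CD security-scan step.
class ScanResult {
    final boolean passed;
    final String note;
    ScanResult(boolean passed, String note) { this.passed = passed; this.note = note; }
}

interface SecurityScanner {
    ScanResult scan(String artifact, Duration timeout) throws Exception;
}

class FailOpenScanStep {
    private final SecurityScanner scanner;
    FailOpenScanStep(SecurityScanner scanner) { this.scanner = scanner; }

    ScanResult run(String artifact) {
        try {
            return scanner.scan(artifact, Duration.ofSeconds(30));
        } catch (Exception e) {
            // The business decision encoded in code: proceed with a warning
            // rather than block the release when the vendor is unreachable.
            return new ScanResult(true, "WARNING: scan skipped, vendor unreachable: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        SecurityScanner down = (artifact, timeout) -> { throw new Exception("connection refused"); };
        System.out.println(new FailOpenScanStep(down).run("build-123.jar").note);
    }
}

Whether failing open is acceptable is exactly the business question raised above; the code only makes the chosen policy explicit and testable.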
Obviously, this task becomes more difficult as we increase the complexity of the system.
Thankfully, sites like k8s.af consolidate example
outage scenarios. These public postmortems are very helpful for understanding how a piece of
software can fail and how to plan for that scenario.
What are the costs?
The primary considerations here are employee time and, if applicable, vendor cost. Is that
SaaS app cheaper than more headcount? If you save each developer on the team two hours a day
with that new CI/CD tool, does it pay for itself over the next fiscal year?
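As a back-of-the-envelope worked example (all of the numbers below are assumptions for illustration, not figures from this article), the payback question reduces to a one-line comparison:

// Rough payback check: value of hours saved vs. what the tool costs per year.
class PaybackSketch {
    public static void main(String[] args) {
        int developers = 5;                   // assumed team size
        double hoursSavedPerDay = 2.0;        // assumed saving per developer
        int workingDaysPerYear = 230;         // assumed working days
        double loadedHourlyCost = 100.0;      // assumed fully loaded $/hour
        double toolCostPerYear = 60_000.0;    // assumed tool/licensing cost

        double annualSavings = developers * hoursSavedPerDay * workingDaysPerYear * loadedHourlyCost;
        System.out.printf("Savings $%.0f vs cost $%.0f -> pays for itself: %b%n",
                annualSavings, toolCostPerYear, annualSavings > toolCostPerYear);
    }
}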
Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral
if you save the dev team a couple of hours a day, but you're removing a huge blocker in their
daily workflow, and they would be much happier for it. That happiness is likely worth the
financial cost. Onboarding new developers is costly, so don't underestimate the value of
increased retention when making these calculations.
I hope you've found this framework insightful, and I encourage you to incorporate it into
your own decision-making processes. There is no one-size-fits-all framework that works for
every decision. Don't forget that, sometimes, you might need to go with your gut and make a
judgment call. However, having a standardized process like this will help differentiate between
those times when you can critically analyze a decision and when you need to make that leap.
They have computers, and they may have other weapons of mass destruction. (Janet Reno)
I think computer viruses should count as life. I think it says something about human
nature that the only form of life we have created so far is purely destructive. We've
created life in our own image. (Stephen Hawking)
If it keeps up, man will atrophy all his limbs but the push-button finger. (Frank Lloyd
Wright)
If software were as unreliable as economic theory, there wouldn't be a plane made of
anything other than paper that could get off the ground. (Jim Fawcette)
Computers are like bikinis. They save people a lot of guesswork. (Sam Ewing)
If the automobile had followed the same development cycle as the computer, a Rolls-Royce
would today cost $100, get a million miles per gallon, and explode once a year, killing
everyone inside. ("Robert X. Cringely", Computerworld)
To err is human, but to really foul things up you need a computer. (Paul Ehrlich)
All parts should go together without forcing. You must remember that the parts you are
reassembling were disassembled by you. Therefore, if you can't get them together again, there
must be a reason. By all means, do not use a hammer. (1925 IBM Maintenance Manual)
Considering the current sad state of our computer programs, software development is clearly
still a black art, and cannot yet be called an engineering discipline. (Bill Clinton)
Man is still the most extraordinary computer of all. (John F Kennedy)
At this time I do not have a personal relationship with a computer. (Janet Reno)
For a long time it puzzled me how something so expensive, so leading edge, could be so
useless, and then it occurred to me that a computer is a stupid machine with the ability to do
incredibly smart things, while computer programmers are smart people with the ability to do
incredibly stupid things. They are, in short, a perfect match. (Bill Bryson)
Just remember: you're not a "dummy," no matter what those computer books claim. The real
dummies are the people who, though technically expert, couldn't design hardware and software
that's usable by normal consumers if their lives depended upon it. (Walter Mossberg)
You have to ask yourself how many IT organizations, how many CIOs have on their goal sheet,
or their mission statement, "Encouraging creativity and innovation in the corporation?" That's
not why the IT organization was created. (Tom Austin)
The real problem is not whether machines think but whether men do. (B. F. Skinner)
The global village is not created by the motor car or even by the airplane. It's created by
instant electronic information movement. (Marshall Mcluhan)
Replicating assemblers and thinking machines pose basic threats to people and to life on
Earth. Among the cognoscenti of nanotechnology, this threat has become known as the gray
goo problem. (Eric Drexler)
Computers are merely ingenious devices to fulfill unimportant functions. The computer
revolution is an explosion of nonsense. (Neil Postman)
Who cares how it works, just as long as it gives the right answer? (Jeff Scholnik)
There's an old story about the person who wished his computer were as easy to use as his
telephone. That wish has come true, since I no longer know how to use my telephone. (Bjarne
Stroustrup)
I think and think for months and years. Ninety-nine times, the conclusion is false. The
hundredth time I am right. (Albert Einstein)
The first rule of any technology used in a business is that automation applied to an
efficient operation will magnify the efficiency. The second is that automation applied to an
inefficient operation will magnify the inefficiency. (Bill Gates)
See, no matter how clever your automation systems might be, it all falls apart if your human
wetware isn't up to the job. (Andrew Orlowski)
That's the thing about people who think they hate computers. What they really hate is lousy
programmers. (Larry Niven)
On the Internet, nobody knows you're a dog. (Peter Steiner)
We are a bit of stellar matter gone wrong. We are physical machinery - puppets that strut
and talk and laugh and die as the hand of time pulls the strings beneath. But there is one
elementary inescapable answer. We are that which asks the question.(Sir Arthur Eddington)
The nice thing about standards is that there are so many of them to choose from. (Andrew
Tanenbaum)
Standards are always out of date. That's what makes them standards. (Alan Bennett)
Computer Science : 1. A study akin to numerology and astrology, but lacking the
precision of the former and the success of the latter. 2. The boring art of coping with a large
number of trivialities. (Stan Kelly-Bootle)
Once there was a time when the bringing-forth of the true into the beautiful was called
technology. And art was simply called techne. (Martin Heidegger)
The computer actually may have aggravated management's degenerative tendency to focus
inward on costs. (Peter Drucker)
The buyer needs a hundred eyes, the vendor not one. (George Herbert)
Anyone who puts a small gloss on a fundamental technology, calls it proprietary, and then
tries to keep others from building on it, is a thief. (Tim O'Reilly)
What a satire, by the way, is that machine [Babbage's Engine], on the mere mathematician! A
Frankenstein-monster, a thing without brains and without heart, too stupid to make a blunder;
that turns out results like a corn-sheller, and never grows any wiser or better, though it
grind a thousand bushels of them! (Oliver Wendell Holmes)
No, no, you're not thinking, you're just being logical. (Niels Bohr)
Never trust a computer you can't throw out a window. (Steve Wozniak)
If you put tomfoolery into a computer, nothing comes out but tomfoolery. But this
tomfoolery, having passed through a very expensive machine, is somehow ennobled and no one
dares criticize it. (Pierre Gallois)
A computer is essentially a trained squirrel: acting on reflex, thoughtlessly running back
and forth and storing away nuts until some other stimulus makes it do something else. (Ted
Nelson)
Software people would never drive to the office if building engineers and automotive
engineers were as cavalier about buildings and autos as the software "engineer" is about his
software. (Henry Baker)
Since the invention of the microprocessor, the cost of moving a byte of information around
has fallen on the order of 10-million-fold. Never before in the human history has any product
or service gotten 10 million times cheaper-much less in the course of a couple decades. That's
as if a 747 plane, once at $150 million a piece, could now be bought for about the price of a
large pizza. (Michael Rothschild)
Physics is the universe's operating system. (Steven R Garman)
If patterns of ones and zeros were like patterns of human lives and death, if
everything about an individual could be represented in a computer record by a long string of
ones and zeros, then what kind of creature would be represented by a long string of lives and
deaths? (Thomas Pynchon)
Man is the best computer we can put aboard a spacecraft...and the only one that can be mass
produced with unskilled labor. (Wernher von Braun)
I've noticed lately that the paranoid fear of computers becoming intelligent and taking over
the world has almost entirely disappeared from the common culture. Near as I can tell, this
coincides with the release of MS-DOS. (Larry DeLuca)
A friend of the Feline reports that Big Blue marketing and sales personnel have been
strictly forbidden to use the word "mainframe." Instead, in an attempt to distance themselves
from the dinosaur, they're to use the more PC-friendly phrase "large enterprise server." If
that's the case, the Katt retorted, they should also refer to "dumb terminals" as
"intelligence-challenged workstations." (Spencer Katt)
The computer is no better than its program. (Elting Elmore Morison)
There is no doubt that human survival will continue to depend more and more on human
intellect and technology. It is idle to argue whether this is good or bad. The point of no
return was passed long ago, before anyone knew it was happening. (Theodosius Dobzansky)
Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be
mass-produced by unskilled labor. (NASA in 1965)
A computer lets you make more mistakes faster than any invention in human history - with the
possible exceptions of handguns and tequila. (Mitch Radcliffe)
COBOL is a very bad language, but all the others (for business data processing) are so
much worse. (Robert Glass)
FORTRAN's DO statement is far scarier than GOTO ever was. Nothing can match the sheer
gibbering horror of the 'come from' loop if the programmer didn't document it well. (Mark
Hughes)
The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a
criminal offence. (Edsger Dijkstra)
As long as there are ill-defined goals, bizarre bugs, and unrealistic schedules, there
will be Real Programmers willing to jump in and Solve The Problem, saving the
documentation for later. Long live FORTRAN! (Ed Post)
I knew I'd hate COBOL the moment I saw they'd used perform instead of
do. (Larry Wall)
Consistently separating words by spaces became a general custom about the tenth
century A.D., and lasted until about 1957, when FORTRAN abandoned the practice. (Sun
FORTRAN Reference Manual)
Eiffel: The Programming Language is certainly by far the most expensive piece
of fiction on my bookshelf. Excellent, entertaining fiction, but it remains fiction
nevertheless. (Lasse Petersen)
Cobol has almost no fervent enthusiasts. As a programming tool, it has roughly the sex
appeal of a wrench. (Charles Petzold)
When FORTRAN has been called an infantile disorder, PL/I, with its growth
characteristics of a dangerous tumor, could turn out to be a fatal disease. (Edsger
Dijkstra)
A system composed of 100,000 lines of C++ is not to be sneezed at, but we don't have that
much trouble developing 100,000 lines of COBOL today. The real test of OOP will come when
systems of 1 to 10 million lines of code are developed. (Ed Yourdon)
A computer without COBOL and FORTRAN is like a piece of chocolate cake without ketchup
or mustard. (John Krueger)
You can tell how far we have to go, when FORTRAN is the language of supercomputers.
(Steven Feiner)
The tree large enough that a stake capable of killing COBOL could be fashioned from
its trunk has not yet grown anywhere upon the face of this verdant planet. (Dan
Martinez)
Eiffel has perhaps the image of a cruel professor giving students tough assignments
and not accepting excuses. C/C++, on the other hand, has an almost sports-car image.
(John Nagle)
From a practical viewpoint, it's easy to see that C will always be with us, taking a
place beside Fortran and Cobol as the right tool for certain jobs. (Larry O'Brien)
C++ is the only current language making COBOL look good. (Bertrand Meyer)
You can't prove anything about a program written in C or FORTRAN. It's really just
Peek and Poke with some syntactic sugar. (Bill Joy)
COBOL is for morons. (Edsger Dijkstra)
FORTRAN was the language of choice for the same reason that three-legged races are
popular. (Ken Thompson)
With respect to COBOL you can really do only one of two things: fight the disease or
pretend that it does not exist. (Edsger Dijkstra)
The very architecture of almost every computer today is designed to optimize the
performance of Fortran programs and its operating-system-level sister, C. (Peter
Gabriel)
The Windows API has done more to retard skill development than anything since COBOL
maintenance. (Larry O'Brien)
COBOL: (Synonymous with evil.) A weak, verbose, and flabby language used by card
wallopers to do boring mindless things on dinosaur mainframes. (Jargon File)
Any sufficiently complicated C or Fortran program contains an ad hoc
informally-specified bug-ridden slow implementation of half of Common Lisp. (Philip
Greenspun)
COBOL programmers are destined to code COBOL for the rest of their lives, and
thereafter. (Bertrand Meyer)
I think conventional languages are for the birds. They're just extensions of the von
Neumann computer, and they keep our noses in the dirt of dealing with individual words
and computing addresses, and doing all kinds of silly things like that, things that we've
picked up from programming for computers; we've built them into programming languages;
we've built them into Fortran; we've built them in PL/1; we've built them into almost
every language. (John Backus)
If you can't do it in Fortran, do it in assembly language. If you can't do it in
assembly language, it isn't worth doing. (Ed Post)
Historically, languages designed for other people to use have been bad: Cobol, PL/I,
Pascal, Ada, C++. The good languages have been those that were designed for their own
creators: C, Perl, Smalltalk, Lisp. (Paul Graham)
Anyone could learn Lisp in one day, except that if they already knew Fortran, it would
take three days. (Marvin Minsky)
I had a running compiler and nobody would touch it. They told me computers could only
do arithmetic. (Rear Admiral Grace Hopper)
Please don't fall into the trap of believing that I am terribly dogmatical about [the
goto statement]. I have the uncomfortable feeling that others are making a religion out
of it, as if the conceptual problems of programming could be solved by a single trick, by
a simple form of coding discipline! (Edsger Dijkstra)
Our IBM Salesmen (to the tune of Jingle Bells)
IBM, Happy men, smiling all the way.
Oh, what fun it is to sell our products night and day.
IBM, Watson men, partners of TJ.
In his service to mankind -- that's why we are so gay.
Lisp has all the visual appeal of oatmeal with fingernail clippings mixed in. (((Larry
Wall)))
If Java had true garbage collection, most programs would delete themselves upon execution.
(Robert Sewell)
I fear that the new object-oriented systems may suffer the fate of LISP, in that they can do
many things, but the complexity of the class hierarchies may cause them to collapse under their
own weight. (Bill Joy)
Using Java for serious jobs is like trying to take the skin off a rice pudding wearing
boxing gloves. (Tel Hudson)
Anybody who thinks a little 9,000-line program [ Java ] that's distributed free and
can be cloned by anyone is going to affect anything we do at Microsoft has his head screwed on
wrong. (Bill Gates)
Take a cup of coffee and add three drops of poison and what have you got? Microsoft J++.
(Scott McNealy)
Of all the great programmers I can think of, I know of only one who would voluntarily
program in Java. And of all the great programmers I can think of who don't work for Sun, on
Java, I know of zero. (Paul Graham)
Using PL/I must be like flying a plane with 7,000 buttons, switches, and handles to
manipulate in the cockpit. (Edsger Dijkstra)
Thirty years from now nobody will remember Java and everyone will remember Microsoft.
(Charles Simonyi)
If you want to shoot yourself in the foot, Perl will give you ten bullets and a laser scope,
then stand by and cheer you on. (Teodor Zlatanov)
Java is the most distressing thing to happen to computing since MS-DOS. (Alan Kay)
Your development cycle is much faster because Java is interpreted. The
compile-link-load-test-crash-debug cycle is obsolete. (James Gosling)
Actually, I'm trying to make Ruby natural, not simple. (Yukihiro "Matz" Matsumoto)
Historically, languages designed for other people to use have been bad: Cobol, PL/I, Pascal,
Ada, C++. The good languages have been those that were designed for their own creators: C,
Perl, Smalltalk, Lisp. (Paul Graham)
When FORTRAN has been called an infantile disorder, PL/I, with its growth characteristics of
a dangerous tumor, could turn out to be a fatal disease. (Edsger Dijkstra)
The three characteristics of Perl programmers: mundaneness, sloppiness, and fatuousness.
(Xah Lee)
PL/I, "the fatal disease", belongs more to the problem set than to the solution set. (Edsger
Dijkstra)
C treats you like a consenting adult. Pascal treats you like a naughty child. Ada treats you
like a criminal. (Bruce Powel Douglass)
Java is, in many ways, C++--. (Michael Feldman)
Perl has grown from being a very good scripting language into something like a cross between
a universal solvent and an open-ended Mandarin where new ideograms are invented hourly.
(Jeffrey Davis)
LISP is like a ball of mud. You can add any amount of mud to it and it still looks like a
ball of mud. (Joel Moses)
Perl is like vise grips. You can do anything with it but it is the wrong tool for every job.
(Bruce Eckel)
I view the JVM as just another architecture that Perl ought to be ported to. (That, and the
Underwood typewriter...) (Larry Wall)
I have found that humans often use Smalltalk during awkward moments. ("Data")
Perl: The only language that looks the same before and after RSA encryption. (Keith
Bostic)
PL/I and Ada started out with all the bloat, were very daunting languages, and got bad
reputations (deservedly). C++ has shown that if you slowly bloat up a language over a period of
years, people don't seem to mind as much. (James Hague)
C++ is history repeated as tragedy. Java is history repeated as farce. (Scott McKay)
A Lisp programmer knows the value of everything, but the cost of nothing. (Alan Perlis)
Claiming Java is easier than C++ is like saying that K2 is shorter than Everest. (Larry
O'Brien)
In the best possible scenario Java will end up mostly like Eiffel but with extra warts
because of insufficiently thoughtful early design. (Matthew B Kennel)
Java, the best argument for Smalltalk since C++. (Frank Winkler)
[Perl] is the sanctuary of dunces. The godsend for brainless coders. The means and banner of
sysadmins. The lingua franca of trial-and-error hackers. The song and dance of stultified
engineers. (Xah Lee)
Java is the SUV of programming tools. (Philip Greenspun)
Going from programming in Pascal to programming in C, is like learning to write in Morse
code. (J P Candusso)
Arguing that Java is better than C++ is like arguing that grasshoppers taste better than
tree bark. (Thant Tessman)
I think conventional languages are for the birds. They're just extensions of the von Neumann
computer, and they keep our noses in the dirt of dealing with individual words and computing
addresses, and doing all kinds of silly things like that, things that we've picked up from
programming for computers; we've built them into programming languages; we've built them into
Fortran; we've built them in PL/1; we've built them into almost every language. (John
Backus)
C++: Simula in wolf's clothing. (Bjarne Stroustrup)
Perl is a car with an autopilot designed by insane aliens. (Jeff Smith)
Like the creators of sitcoms or junk food or package tours, Java's designers were
consciously designing a product for people not as smart as them. (Paul Graham)
High thoughts must have a high language. (Aristophanes)
There are undoubtedly a lot of very intelligent people writing Java, better programmers than
I will ever be. I just wish I knew why. (Steve Holden)
The more of an IT flavor the job descriptions had, the less dangerous was the company. The
safest kind were the ones that wanted Oracle experience. You never had to worry about those.
You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or
Python programmers, that would be a bit frightening. If I had ever seen a job posting looking
for Lisp hackers, I would have been really worried. (Paul Graham)
If you learn to program in Java, you'll never be without a job! (Patricia Seybold in
1998)
Anyone could learn Lisp in one day, except that if they already knew Fortran, it would take
three days. (Marvin Minsky)
Knowing the syntax of Java does not make someone a software engineer. (John Knight)
Javascript is the duct tape of the Internet. (Charlie Campbell)
To Our IBM Home Office Staff (to the tune of Polly Wolly Doodle)
In Old New York, at 270 Broadway,
They're working night and day.
Our IBM fine girls and men --
All tasks to them, mere play.
Our President Watson's loyal band,
Well-serving our Four Lines.
All faithful workers, heart and hand,
Two hundred brilliant minds.
A colleague of mine today committed a class called ThreadLocalFormat, which basically moved instances of Java Format
classes into a thread local, since they are not thread safe and "relatively expensive" to create. I wrote a quick test, calculated
that I could create 200,000 instances a second, and asked him whether he was creating that many, to which he answered "nowhere near that many".
He's a great programmer, and everyone on the team is highly skilled, so we have no problem understanding the resulting code, but it
was clearly a case of optimizing where there is no real need. He backed the code out at my request. What do you think? Is this a
case of "premature optimization", and how bad is it really?
(Craig Day)
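As a rough illustration of the pattern the question describes (caching a non-thread-safe, "relatively expensive" object once per thread instead of sharing it), here is a minimal Python sketch; the formatter here is a made-up stand-in, not the Java Format classes from the question.

```python
# Hypothetical Python analogue of the ThreadLocalFormat idea: keep one
# per-thread instance of an object that is not safe to share across threads.
import threading
from datetime import datetime

_local = threading.local()

def get_formatter():
    # Each thread lazily creates and then reuses its own formatter object.
    if not hasattr(_local, "fmt"):
        _local.fmt = lambda d: d.strftime("%Y-%m-%d %H:%M:%S")
    return _local.fmt

def format_now():
    return get_formatter()(datetime.now())

print(format_now())
```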
I think you need to distinguish between premature optimization, and unnecessary optimization. Premature to me suggests 'too early
in the life cycle' whereas unnecessary suggests 'does not add significant value'. IMO, requirement for late optimization implies
shoddy design. – Shane MacLaughlin
Oct 17 '08 at 8:53
It's important to keep in mind the full quote:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet
we should not pass up our opportunities in that critical 3%.
What this means is that, in the absence of measured performance issues, you shouldn't optimize just because you think you will get
a performance gain. There are obvious optimizations (like not doing string concatenation inside a tight loop), but anything that
isn't a trivially clear optimization should be avoided until it can be measured.
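As an aside, the string-concatenation case mentioned above is easy to show; here is a minimal Python sketch of the difference (the Java equivalent would reach for StringBuilder):

```python
# Sketch of the "obvious optimization" mentioned above: repeated string
# concatenation in a loop can copy the growing string over and over,
# while str.join builds the result in a single pass.
def join_slow(items):
    s = ""
    for item in items:          # can degrade to O(n^2) total copying
        s += str(item)
    return s

def join_fast(items):
    return "".join(str(item) for item in items)   # single pass

assert join_slow(range(5)) == join_fast(range(5)) == "01234"
```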
Being from Donald Knuth, I wouldn't be surprised if he had some evidence to back it up. BTW, the source is "Structured Programming with
go to Statements", ACM Computing Surveys, Vol. 6, No. 4, Dec. 1974, p. 268.
citeseerx.ist.psu.edu/viewdoc/
– mctylr Mar 1 '10 at 17:57
Premature micro-optimizations are the root of all evil, because micro-optimizations leave out context. They almost never behave
the way they are expected to.
Some good early optimizations, in order of importance:
Architectural optimizations (application structure, the way it is componentized and layered)
Data flow optimizations (inside and outside of application)
Some mid-development-cycle optimizations:
Data structures, introduce new data structures that have better performance or lower overhead if necessary
Algorithms (now it's a good time to start deciding between quicksort3 and heapsort ;-) )
Some end-of-development-cycle optimizations:
Finding code hotspots (tight loops that should be optimized)
Profiling-based optimizations of the computational parts of the code (see the profiling sketch after this list)
Micro-optimizations can be done now, as they are done in the context of the application and their impact can be measured
correctly.
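The "profiling-based optimizations" item above is where a profiler earns its keep. Here is a minimal Python sketch (the workload functions are made up) of measuring first and only then looking at what actually dominates the runtime:

```python
# Minimal sketch of profiling-based optimization: measure first, then look
# only at the functions that actually dominate the runtime.
import cProfile
import pstats

def hot(n):
    return sum(i * i for i in range(n))

def cold(n):
    return [i for i in range(n)]

def main():
    for _ in range(50):
        hot(10_000)
    cold(10_000)

profiler = cProfile.Profile()
profiler.runcall(main)
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(5)   # show the top 5 entries
```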
Not all early optimizations are evil. Micro-optimizations are evil if done at the wrong time in the development life cycle,
as they can negatively affect architecture, can negatively affect initial productivity, can be irrelevant performance-wise, or
can even have a detrimental effect at the end of development due to different environment conditions.
If performance is of concern (and it always should be), always think big. Performance is a bigger picture and not about things
like: should I use int or long? Go for top-down when working with performance instead of bottom-up.
(Pop Catalin)
Hear, hear! Unconsidered optimization makes code unmaintainable and is often the cause of performance problems. E.g., you multi-thread
a program because you imagine it might help performance, but the real solution would have been multiple processes, which are now
too complex to implement. –
James Anderson May 2 '12 at 5:01
John Mulder , 2008-10-17 08:42:58
Optimization is "evil" if it causes:
less clear code
significantly more code
less secure code
wasted programmer time
In your case, it seems like a little programmer time was already spent, the code was not too complex (a guess from your comment
that everyone on the team would be able to understand it), and the code is a bit more future-proof (being thread-safe now, if I understood
your description). Sounds like only a little evil. :)
Only if the cost, in terms of your bullet points, is greater than the amortized value delivered. Often complexity introduces value,
and in these cases one can encapsulate it such that it passes your criteria. It also gets reused and continues to provide more
value. – Shane MacLaughlin
Oct 17 '08 at 10:36
Michael Shaw , 2020-06-16 10:01:49
I'm surprised that this question is 5 years old, and yet nobody has posted more of what Knuth had to say than a couple of sentences.
The couple of paragraphs surrounding the famous quote explain it quite well. The paper being quoted is called "Structured
Programming with go to Statements". Although the paper is nearly 40 years old, is about a controversy and a software movement
that no longer exist, and uses examples in programming languages that many people have never heard of, a surprisingly large amount of what it says still
applies.
Here's a larger quote (from page 8 of the pdf, page 268 in the original):
The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant.
The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe
this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't
debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is
never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother
making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict
myself to tools that deny me such efficiencies.
There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about,
or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong
negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about
97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by
such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is
often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience
of programmers who have been using measurement tools has been that their intuitive guesses fail.
Another good bit from the previous page:
My own programming style has of course changed during the last decade, according to the trends of the times (e.g., I'm not
quite so tricky anymore, and I use fewer go to's), but the major change in my style has been due to this inner loop phenomenon.
I now look with an extremely jaundiced eye at every operation in a critical inner loop, seeking to modify my program and data
structure (as in the change from Example 1 to Example 2) so that some of the operations can be eliminated. The reasons for
this approach are that: a) it doesn't take long, since the inner loop is short; b) the payoff is real; and c) I can then afford
to be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged.
I've often seen this quote used to justify obviously bad code or code that, while its performance has not been measured, could
probably be made faster quite easily, without increasing code size or compromising its readability.
In general, I do think early micro-optimizations may be a bad idea. However, macro-optimizations (things like choosing an O(log
N) algorithm instead of an O(N^2) one) are often worthwhile and should be done early, since it may be wasteful to write an O(N^2) algorithm
and then throw it away completely in favor of an O(log N) approach.
Note the words "may be": if the O(N^2) algorithm is simple and easy to write, you can throw it away later without much guilt
if it turns out to be too slow. But if both algorithms are similarly complex, or if the expected workload is so large that you
already know you'll need the faster one, then optimizing early is a sound engineering decision that will reduce your total workload
in the long run.
Thus, in general, I think the right approach is to find out what your options are before you start writing code, and consciously
choose the best algorithm for your situation. Most importantly, the phrase "premature optimization is the root of all evil" is
no excuse for ignorance. Career developers should have a general idea of how much common operations cost; they should know, for
example,
that strings cost more than numbers
that dynamic languages are much slower than statically-typed languages
the advantages of array/vector lists over linked lists, and vice versa
when to use a hashtable, when to use a sorted map, and when to use a heap
that (if they work with mobile devices) "double" and "int" have similar performance on desktops (FP may even be faster)
but "double" may be a hundred times slower on low-end mobile devices without FPUs;
that transferring data over the internet is slower than HDD access, HDDs are vastly slower than RAM, RAM is much slower
than L1 cache and registers, and internet operations may block indefinitely (and fail at any time).
And developers should be familiar with a toolbox of data structures and algorithms so that they can easily use the right tools
for the job.
Having plenty of knowledge and a personal toolbox enables you to optimize almost effortlessly. Putting a lot of effort into
an optimization that might be unnecessary is evil (and I admit to falling into that trap more than once). But when optimization
is as easy as picking a set/hashtable instead of an array, or storing a list of numbers in double[] instead of string[], then
why not? I might be disagreeing with Knuth here, I'm not sure, but I think he was talking about low-level optimization whereas
I am talking about high-level optimization.
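As a concrete illustration of the "easy" optimization described above, here is a small Python sketch (the sizes and lookup values are arbitrary) comparing membership tests against a list and a set:

```python
# Sketch of the easy win described above: membership tests against a set
# are roughly O(1), while tests against a list are roughly O(n).
import timeit

data = list(range(100_000))
as_list = data
as_set = set(data)

lookups = [99_999] * 100   # worst case for the list: element at the end

list_time = timeit.timeit(lambda: [x in as_list for x in lookups], number=10)
set_time = timeit.timeit(lambda: [x in as_set for x in lookups], number=10)

print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
```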
Remember, that quote is originally from 1974. In 1974 computers were slow and computing power was expensive, which gave some
developers a tendency to overoptimize, line-by-line. I think that's what Knuth was pushing against. He wasn't saying "don't worry
about performance at all", because in 1974 that would just be crazy talk. Knuth was explaining how to optimize; in short, one
should focus only on the bottlenecks, and before you do that you must perform measurements to find the bottlenecks.
Note that you can't find the bottlenecks until you have written a program to measure, which means that some performance decisions
must be made before anything exists to measure. Sometimes these decisions are difficult to change if you get them wrong. For this
reason, it's good to have a general idea of what things cost so you can make reasonable decisions when no hard data is available.
How early to optimize, and how much to worry about performance depend on the job. When writing scripts that you'll only run
a few times, worrying about performance at all is usually a complete waste of time. But if you work for Microsoft or Oracle and
you're working on a library that thousands of other developers are going to use in thousands of different ways, it may pay to
optimize the hell out of it, so that you can cover all the diverse use cases efficiently. Even so, the need for performance must
always be balanced against the need for readability, maintainability, elegance, extensibility, and so on.
Awesome video, I loved watching it. In my experience, there are many situations where,
like you pointed out, a procedural style makes things easier and prevents you from overthinking
and overgeneralizing the problem you are trying to tackle. However, in some cases,
object-oriented programming removes unnecessary conditions and switches that make your code
harder to read, especially in complex game engines where you deal with a bunch of objects
which interact in diverse ways with the environment, other objects and the physics engine. In a
procedural style, a program like this would become an unmanageable clutter of flags,
variables and switch statements. Therefore, the statement "Object-Oriented Programming is
Garbage" is an unnecessary generalization. Object-oriented programming is a tool programmers
can use - and just like you would not use pliers to get a nail into a wall, you should not
force yourself to use object-oriented programming to solve every problem at hand. Instead,
you use it when it is appropriate and necessary. Nevertheless, I would like to hear how you
would realize such a complex program. Maybe I'm wrong and procedural programming is the best
solution in any case - but right now, I think you need to differentiate situations which
require a procedural style from those that require an object-oriented style.
I have been brainwashed with C++ for 20 years. I have recently switched to ANSI C and my
mind is now free. Not only do I feel free to create designs that are more efficient and elegant,
but I also feel in control of what I do.
You make a lot of very solid points. In your refactoring of the Mapper interface to a
type-switch though: what is the point of still using a declared interface here? If you are
disregarding extensibility (which would require adding to the internal type switch, rather
than conforming a possible new struct to an interface) anyway, why not just make Mapper of
type interface{} and add a (failing) default case to your switch?
I recommend installing the GoSublime extension, so your code gets formatted on save and
you can use autocompletion. But it looks good enough. I do disagree with large functions, though. Small
ones are just easier to understand and test.
Being the lead designer of a larger app (2M lines of code as of 3 years ago), I like to
say we use C+, because C++ breaks down in the real world. I'm happy to use encapsulation when
it fits well, but developers that use OO just for OO-ness' sake get their hands slapped. So in
our app small classes like PhoneNumber and SIN make sense. Large classes like UserInterface
also work nicely (we talk to specialty hardware like forklifts and such). So, it may be all
coded in C++, but basic C developers wouldn't have too much of an issue with most of it. I
don't think OO is garbage. It's just that a lot of people use it in inappropriate ways. When all you
have is a hammer, everything looks like a nail. So if you use OO on everything then you
sometimes end up with garbage.
Loving the series. The hardest part of actually becoming an efficient programmer is
unlearning all the OOP brainwashing. It can be useful for high-level structuring so I've been
starting with C++ then reducing everything into procedural functions and tightly-packed data
structs. Just by doing that I reduced static memory use and compiled program size at least
10-15%+ (which is a lot when you only have 32kb.) And holy damn, nearly 20 years of C and I
never knew you could nest a function within a function, I had to try that right away.
I have a design for a networked audio platform that goes into large buildings (over 11
stories) and can have 250 networked nodes (it uses an E1 style robbed bit networking system)
and 65K addressable points (we implemented 1024 of them for individual control by grouping
them). This system ties to a fire panel at one end with a microphone and speakers at the
other end. You can manually select any combination of points to page to, or the fire panel
can select zones to send alarm messages to. It works in real time with 50 ms built-in delays
and has access to 12 audio channels. What really puts the frosting on this cake is that the CPU
is an i8051 running at 18 MHz and the code is a bit over 200K bytes that took close to 800K
lines of code. In assembler. And it took less than a year from concept to first installation.
By one designer/coder. The only OOP in this code was when an infinite loop happened or a bug
crept in - "OOPs!"
There's a way of declaring subfunctions in C++ (I don't know if it works in C). I saw it done by a
friend. The general idea is to declare a struct inside which a function can be declared. Since
you can declare structs inside functions, you can safely use it as a wrapper for your
function-inside-function declaration. This has been done in MSVC, but I believe it will
compile in gcc too.
"Is pixel an object or a group of objects? Is there a container? Do I have to ask a
factory to get me a color?" I literally died there... that's literally the best description
of my programming for the last 5 years.
It's really sad that we are only taught OOP and no other paradigms in our college. When I
discovered programming I had no idea about OOP and it was really easy to build programs, but
then I came across OOP: "how to deconstruct a problem statement into nouns for objects and
verbs for methods", and it really messed up my thinking. I have been struggling for a long
time with how to organize my code on the conceptual level; only recently I realized that OOP is
the reason for this struggle. Handmade Hero helped a lot to bring me back to the roots of how
programming is done. Remember, never push OOP into areas where it is not needed; you don't have
to model your program as real-world entities because it's not going to run in the real world, it's
going to run on a CPU!
I lost an entire decade to OOP, and agree with everything Casey said here. The code I
wrote in my first year as a programmer (before OOP) was better than the code I wrote in my
15th year (OOP expert). It's a shame that students are still indoctrinated into this
regressive model.
Unfortunately, when I first started programming, I encountered nothing but tutorials that
jumped right into OOP like it was the only way to program. And of course I didn't know any
better! So much friction has been removed from my process since I've broken free from that
state of mind. It's easier to judge when objects are appropriate when you don't think they're
always appropriate!
"It's not that OOP is bad or even flawed. It's that object-oriented programming isn't the
fundamental particle of computing that some people want it to be. When blindly applied to
problems below an arbitrary complexity threshold, OOP can be verbose and contrived, yet
there's often an aesthetic insistence on objects for everything all the way down. That's too
bad, because it makes it harder to identify the cases where an object-oriented style truly
results in an overall simplicity and ease of understanding." -
https://prog21.dadgum.com/156.html
The first language I was taught was Java, so I was taught OOP from the get-go. Removing
the OOP mindset was actually really easy, but what was left stuck in my head is the practice
of having small functions and making your code look artificially "clean". So I am in a constant
struggle of refactoring and not refactoring, knowing that over-refactoring will unnecessarily
complicate my codebase if it gets big. Even after removing my OOP mindset, my emphasis is
still on the code itself, and that is much harder to cure in comparison.
"I want to emphasize that the problem with object-oriented programming is not the concept
that there could be an object. The problem with it is the fact that you're orienting your
program, the thinking, around the object, not the function. So it's the orientation that's
bad about it, NOT whether you end up with an object. And it's a really important distinction
to understand."
Nicely stated, HH. On YouTube, MPJ, Brian Will, and Jonathan Blow also address this
matter. OOP sucks and can be largely avoided. Even "reuse" is overdone. Straight-line code probably
results in faster execution but slightly greater memory use. But memory is cheap and the
resultant code is much easier to follow. Learn a little assembly language. x86 is fascinating
and you'll know what the computer is actually doing.
I think schools should teach at least 3 languages / paradigms, C for Procedural, Java for
OOP, and Scheme (or any Lisp-style languages) for Functional paradigms.
It sounds to me like you're describing JavaScript framework programming that people learn
to start from. It hasn't seemed to me like object-oriented programmers who aren't doing web
stuff have any problem directly describing an algorithm and then translating it into
imperative or functional or just direct instructions for a computer. it's quite possible to
use object-oriented languages or languages that support object-oriented stuff to directly
command a computer.
I dunno, man. Object-oriented programming can (sometimes badly) solve real problems -
notably polymorphism. For example, suppose you have a Dog and a Cat sprite and they both have a
move method. The "non-OO" way Casey does this is using tagged unions - and that was not an
obvious solution when I first saw it. Quite glad I watched that episode though, it's very
interesting! Also see this tweet thread from Casey -
https://twitter.com/cmuratori/status/1187262806313160704
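For what it's worth, here is a hypothetical Python sketch of the two styles this comment contrasts: polymorphic move methods versus a plain record carrying a kind tag plus a switch. It only illustrates the idea; it is not Casey's actual code.

```python
# Hypothetical sketch contrasting the two styles mentioned in the comment.
from dataclasses import dataclass

# OO style: each sprite class carries its own polymorphic move() method.
class Dog:
    def move(self, dt): return f"dog runs for {dt}s"

class Cat:
    def move(self, dt): return f"cat slinks for {dt}s"

# "Tagged union" style: one plain data record plus a switch on its kind.
@dataclass
class Sprite:
    kind: str           # "dog" or "cat"
    x: float = 0.0

def move_sprite(s: Sprite, dt: float) -> str:
    if s.kind == "dog":
        return f"dog runs for {dt}s"
    elif s.kind == "cat":
        return f"cat slinks for {dt}s"
    raise ValueError(s.kind)

print(Dog().move(1.0), "|", move_sprite(Sprite("cat"), 1.0))
```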
My deepest feeling after going through so many discussions and books about this is a sincere
YES.
Without entering into any technical details, because even after some years I
don't find myself qualified to talk about this (is there anyone who really understands
it completely?), I would argue that the main problem is that every time I read something
about OOP it is trying to justify why it is "so good".
Then a huge number of examples are shown, many arguments are made, and many expectations are
created.
It is not stated simply, like this: "oh, this is another programming paradigm."
It is usually stated that: "This is a fantastic paradigm, it is better, it is simpler,
it permits so many interesting things, … it is this, it is that…" and so on.
What happens is that, based on the "good" arguments, it creates the
expectation that things produced with OOP should be very good. But no one really knows if
they are doing it right. They say: the problem is not the paradigm, it is you who are not
experienced yet. When will I be experienced enough?
Are you following me? My feeling is that this commonplace of saying how good it is, while at the
same time you never know how good you are actually being, makes all of us very frustrated and
confused.
Yes, it is a great paradigm, as long as you see it just as another paradigm and drop all the
expectations and excessive claims that it is so good.
It seems to me that the great problem is the huge propaganda around it, not the paradigm
itself. Again, if it made a more humble claim about its advantages and how difficult it is to
achieve them, people would be much less frustrated.
Sourav Datta, a programmer trying to find the ultimate source code of life. Answered August 6, 2015.
In recent years, OOP has indeed come to be regarded as an overrated paradigm by many. If we look
at the most recent famous languages like Go and Rust, they do not have the traditional OO
approaches in language design. Instead, they choose to pack data into something akin to
structs in C and provide ways to specify "protocols" (similar to interfaces/abstract methods) which can work on that packed
data...
The last decade has seen object-oriented programming (OOP) dominate the programming world.
While there is no doubt that there are benefits to OOP, some programmers question whether OOP
has been overrated and ponder whether alternate styles of coding are worth pursuing. To even
suggest that OOP has in some way failed to produce the quality software we all desire could in
some instances cost a programmer his job, so why even ask the question?
Quality software is the goal.
Likely all programmers can agree that we all want to produce quality software. We would like
to be able to produce software faster, make it more reliable and improve its performance. So
with such goals in mind, shouldn't we be willing to at least consider all possibilities? Also,
it is reasonable to conclude that no single tool can match all situations. For example, while
few programmers today would even consider using assembler, there are times when low-level
coding such as assembler could be warranted. The old adage applies: "the right tool for the
job". So it is fair to pose the question, "Has OOP been overused to the point of trying to
make it some kind of universal tool, even when it may not fit a job very well?"
Others are asking the same question.
I won't go into detail about what others have said about object-oriented programming, but I
will simply post some links to some interesting comments by others about OOP.
I have watched a number of videos online and read a number of articles by programmers about
different concepts in programming. When OOP is discussed they talk about things like modeling
the real world, abstractions, etc. But two things are often missing in such discussions, even
though they greatly affect programming, and I will discuss them here.
First, what is programming really? Programming is a method of using some kind of human-readable
language to generate machine code (or scripts eventually read by machine code) so one
can make a computer do a task. Looking back at all the years I have been programming, the most
profound thing I have ever learned about programming was machine language. Seeing what a CPU is
actually doing with our programs provides a great deal of insight. It helps one understand why
integer arithmetic is so much faster than floating point. It helps one understand what graphics
is really all about (simply moving around a lot of pixels, or blocks of four bytes). It
helps one understand what a procedure really must do to have parameters passed. It helps one
understand why a string is simply a block of bytes (or double bytes for Unicode). It helps one
understand why we use bytes so much, and what bit flags are, and what pointers are.
When one looks at OOP from the perspective of machine code and all the work a compiler must
do to convert things like classes and objects into something the machine can work with, then
one very quickly begins to see that OOP adds significant overhead to an application. Also if a
programmer comes from a background of working with assembler, where keeping things simple is
critical to writing maintainable code, one may wonder if OOP is improving coding or making it
more complicated.
Second is the often-cited rule of "keep it simple". This applies to programming. Consider
classic Visual Basic. One of the reasons it was so popular was that it was so simple compared
to other languages, say C for example. I know what is involved in writing a pure old fashioned
WIN32 application using the Windows API and it is not simple, nor is it intuitive. Visual Basic
took much of that complexity and made it simple. Now Visual Basic was sort of OOP based, but
actually mostly in the GUI command set. One could actually write all the rest of the code using
purely procedural style code and likely many did just that. I would venture to say that when
Visual Basic went the way of .NET, it left behind many programmers who simply wanted to keep
it simple. Not that they were poor programmers who didn't want to learn something new, but that
they knew the value of simple and taking that away took away a core aspect of their programming
mindset.
Another aspect of simple is also seen in the syntax of some programming languages. For
example, BASIC has stood the test of time and continues to be the language of choice for many
hobby programmers. If you don't think that BASIC is still alive and well, take a look at this
extensive list of different BASIC programming languages.
While some of these BASICs are object oriented, many of them are also procedural in nature.
But the key here is simplicity. Natural readable code.
Simple and low level can work together.
Now consider this. What happens when you combine a simple language with the power of machine
language? You get something very powerful. For example, I write some very complex code using
purely procedural style coding, using BASIC, but you may be surprised that my appreciation for
machine language (or assembler) also comes to the fore. For example, I use BASIC's
GOTO and GOSUB. How some would cringe to hear this! But these constructs are native to machine
language and very useful, so when used properly they are powerful even in a high-level
language. Another example is that I like to use pointers a lot. Oh, how powerful pointers are.
In BASIC I can create variable-length strings (which are simply a block of bytes) and I can
embed complex structures into those strings by using pointers. In BASIC I use the DIM AT
command, which allows me to dimension an array of any fixed data type or structure within a
block of memory, which in this case happens to be a string.
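For readers who don't know BASIC's DIM AT, here is a rough Python analogue of the idea of overlaying fixed-size records on a block of bytes, using the struct module (the record layout is invented for illustration):

```python
# Hypothetical sketch of the idea described above: treat a block of bytes
# as an array of fixed-size records, reading and writing them in place.
import struct

RECORD = struct.Struct("<i8s")        # one record: a 32-bit int + 8 bytes
buffer = bytearray(RECORD.size * 4)   # room for 4 records in one block

# Write record 2 in place, then read it back from the same block.
RECORD.pack_into(buffer, RECORD.size * 2, 42, b"hello")
value, name = RECORD.unpack_from(buffer, RECORD.size * 2)
print(value, name.rstrip(b"\x00"))    # 42 b'hello'
```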
Appreciating machine code also affects my view of performance. Every CPU cycle counts. This
is one reason I use BASIC's GOSUB command. It allows me to write some reusable code within a
procedure, without the need to call an external routine and pass parameters. The performance
improvement is significant. Performance also affects how I tackle a problem. While I want code
to be simple, I also want it to run as fast as possible, so amazingly some of the best
performance tips have to do with keeping code simple, with minimal overhead, and with
understanding what the machine code must do with what I have written in a higher-level
language. For example, in BASIC I have a number of options for the SELECT CASE structure.
One option can optimize the code using jump tables (compiler handles this), one option can
optimize if the values are only Integers or DWords. But even then the compiler can only do so
much. What happens if a large SELECT CASE has to compare dozens and dozens of string constants
to a variable-length string being tested? If this code is part of a parser, then it really can
slow things down. I had this problem in a scripting language I created for an OpenGL-based 3D
custom control. The 3D scripting language is text based and has to be interpreted to generate
3D OpenGL calls internally. I didn't want the scripting language to bog things down. So what
would I do?
The solution was simple: appreciating how the compiled machine code would have to compare
so many bytes in so many string constants, one quickly realizes that the compiler alone could
not solve this. I had to think like an assembler programmer, but still use a high-level
language. The solution was so simple, it was surprising. I could use a pointer to read the
first byte of the string being parsed. Since the first character would always be a letter in
the scripting language, this meant there were 26 possible outcomes. The SELECT CASE simply
tested the first character's value (converted to a number), which would execute fast. Then for
each letter (A, B, C, …) I would only compare the parsed word to the scripting language keywords
which started with that letter. This in essence improved speed 26-fold (or better).
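Here is a rough Python translation of the dispatch trick described above (the keyword list is invented); the same first-character bucketing applies in any language:

```python
# Hypothetical sketch of the trick described above: bucket the keywords by
# first letter so each lookup only compares against a short list.
KEYWORDS = ["AMBIENT", "BEGIN", "BOX", "CAMERA", "COLOR", "LIGHT", "ROTATE"]

BY_FIRST_LETTER = {}
for kw in KEYWORDS:
    BY_FIRST_LETTER.setdefault(kw[0], []).append(kw)

def is_keyword(word: str) -> bool:
    if not word:
        return False
    word = word.upper()
    # One cheap check on the first character narrows the search; only the
    # handful of keywords in that bucket are then compared in full.
    return word in BY_FIRST_LETTER.get(word[0], [])

print(is_keyword("camera"), is_keyword("sphere"))   # True False
```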
The fastest solutions are often very simple to code. No complex classes needed here. Just a
simple procedure to read through a text string using the simplest logic I could find. The
procedure is a little more complex than what I describe, but this is the core logic of the
routine.
From experience, I have found that a purely procedural style of coding, using a language
which is natural and simple (BASIC), while using constructs of the language which are closer to
pure machine (or assembler) in the language produces smaller and faster applications which are
also easier to maintain.
Now I am not saying that all OOP is bad. Nor am I saying that OOP never has a place in
programming. What I am saying, though, is that it is worth considering the possibility that OOP is
not always the best solution and that there are other choices.
Here are some of my other blog articles which may interest you if this one interested
you:
Classic Visual Basic's end marked a key change in software development.
Yes it is. For application code at least, I'm pretty sure.
Not claiming any originality here, people smarter than me already noticed this fact ages
ago.
Also, don't misunderstand me, I'm not saying that OOP is bad. It probably is the best
variant of procedural programming.
Maybe the term OOP is overused to describe anything that ends up in OO systems.
Things like VMs, garbage collection, type safety, modules, generics or declarative queries
(LINQ) are a given, but they are not inherently object-oriented.
I think these things (and others) are more relevant than the classic three principles.
Inheritance
Current advice is usually "prefer composition over inheritance". I totally agree.
Polymorphism
This is very, very important. Polymorphism cannot be ignored, but you don't write lots of
polymorphic methods in application code. You implement the occasional interface, but not every
day.
Mostly you just use them, because polymorphism is what you need to write reusable components,
much less to use them.
Encapsulation
Encapsulation is tricky. Again, if you ship reusable components, then method-level access
modifiers make a lot of sense. But if you work on application code, such fine-grained
encapsulation can be overkill. You don't want to struggle over the choice between internal and
public for that fantastic method that will only ever be called once, except in test code maybe.
Hiding all implementation details in private members while retaining nice simple tests can be
very difficult and not worth the trouble (InternalsVisibleTo being the least trouble, abstruse
mock objects bigger trouble, and Reflection-in-tests Armageddon).
Nice, simple unit tests are just more important than encapsulation for application code, so
hello public!
So, my point is, if most programmers work on applications, and application code is not very
OO, why do we always talk about inheritance at the job interview? 🙂
PS
If you think about it, C# hasn't been pure object oriented since the beginning (think
delegates) and its evolution is a trajectory from OOP to something else, something
multiparadigm.
If you want to refer to a global variable in a function, you can use the global keyword to declare which variables are
global. You don't have to use it in all cases (as someone here incorrectly claims) - if a name referenced in an expression cannot
be found in the local scope, or in the scopes of the functions in which this function is defined, it is looked up among global variables.
However, if you assign to a variable not declared global in the function, it is implicitly declared as local, and it can
shadow any existing global variable with the same name.
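A minimal sketch of these scoping rules (the variable and function names are arbitrary):

```python
# Minimal sketch of the scoping rules described above.
counter = 0          # a module-level (global) variable

def read_only():
    return counter + 1          # no assignment: the global is found by lookup

def shadowing():
    counter = 100               # assignment creates a new local variable
    return counter              # the global `counter` is untouched

def rebinding():
    global counter              # declare that assignment targets the global
    counter += 1
    return counter

print(read_only(), shadowing(), rebinding(), counter)   # 1 100 1 1
```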
Also, global variables are useful, contrary to some OOP zealots who claim otherwise - especially for smaller scripts, where OOP
is overkill.
Absolutely re. zealots. Most Python users use it for scripting and create little functions to separate out small bits of code.
– Paul Uszak Sep 22 at 22:57
For example, approximately one-fourth of all original syntax errors in the Pascal sample were
missing semicolons or use of a comma in place of a semicolon. Related studies of semicolon errors include:
"Error log analysis in C programming language courses" [PDF].
J. J. Horning, Programming Languages (1979): over 14% of the faults occurring in TOPPS programs during
the second half of the experiment were still semicolon faults (compared to 1% for TOPPSII).
S. O. Anderson, R. C. Backhouse and E. H. Bugge, "An assessment of locally least-cost error recovery",
The Computer Journal (1983): one recovery scheme anticipates the possibility of a missing semicolon
but not of a missing comma; both conditional statements accept semicolons.
J. Segal, K. Ahmad and M. Rogers, "The role of systematic errors in developmental studies of programming
language learners", Journal of Educational … (1992): errors were classified by their surface
characteristics into single-token categories; students were expected to experience considerable
difficulties with semicolons, and with the specific rule of ALGOL 68 syntax concerning the role of the
semicolon. Cited by 9.
C. Stirling, "Follow set error recovery", Software: Practice and Experience (1985): some recovery
schemes make non-systematic changes to recursive descent parsers, anticipating a missing semicolon
but not a missing comma.
M. C. Jadud, "A first look at novice compilation behaviour using BlueJ", Computer Science Education
(2005): on changing programmer behaviour, perhaps encouraging programmers to make fewer
"missing semicolon" errors, or highlighting places where semicolons should be when they are missing.
A. Repenning, "Making programming more conversational", IEEE Symposium on Visual Languages (2011):
miss one semicolon in a C program and the program may no longer work at all; like code auto-completion,
visual programming environments prevent syntactic mistakes such as missing semicolons and typos.
The OOP paradigm has been criticised for a number of reasons, including not meeting its
stated goals of reusability and modularity, [36][37]
and for overemphasizing one aspect of software design and modeling (data/objects) at the
expense of other important aspects (computation/algorithms). [38][39]
Luca Cardelli has
claimed that OOP code is "intrinsically less efficient" than procedural code, that OOP can take
longer to compile, and that OOP languages have "extremely poor modularity properties with
respect to class extension and modification", and tend to be extremely complex. [36]
The latter point is reiterated by Joe Armstrong , the principal
inventor of Erlang , who is quoted as
saying: [37]
The problem with object-oriented languages is they've got all this implicit environment
that they carry around with them. You wanted a banana but what you got was a gorilla holding
the banana and the entire jungle.
A study by Potok et al. has shown no significant difference in productivity between OOP and
procedural approaches. [40]
Christopher J.
Date stated that critical comparison of OOP to other technologies, relational in
particular, is difficult because of lack of an agreed-upon and rigorous definition of OOP;
[41]
however, Date and Darwen have proposed a theoretical foundation on OOP that uses OOP as a kind
of customizable type
system to support RDBMS .
[42]
In an article Lawrence Krubner claimed that compared to other languages (LISP dialects,
functional languages, etc.) OOP languages have no unique strengths, and inflict a heavy burden
of unneeded complexity. [43]
I find OOP technically unsound. It attempts to decompose the world in terms of interfaces
that vary on a single type. To deal with the real problems you need multisorted algebras --
families of interfaces that span multiple types. I find OOP philosophically unsound. It
claims that everything is an object. Even if it is true it is not very interesting -- saying
that everything is an object is saying nothing at all.
Paul Graham has suggested
that OOP's popularity within large companies is due to "large (and frequently changing) groups
of mediocre programmers". According to Graham, the discipline imposed by OOP prevents any one
programmer from "doing too much damage". [44]
Leo Brodie has suggested a connection between the standalone nature of objects and a
tendency to duplicate
code[45] in
violation of the don't repeat yourself principle
[46] of
software development.
Object Oriented Programming puts the Nouns first and foremost. Why would you go to such
lengths to put one part of speech on a pedestal? Why should one kind of concept take
precedence over another? It's not as if OOP has suddenly made verbs less important in the way
we actually think. It's a strangely skewed perspective.
Rich Hickey ,
creator of Clojure ,
described object systems as overly simplistic models of the real world. He emphasized the
inability of OOP to model time properly, which is getting increasingly problematic as software
systems become more concurrent. [39]
Eric S. Raymond
, a Unix programmer and
open-source
software advocate, has been critical of claims that present object-oriented programming as
the "One True Solution", and has written that object-oriented programming languages tend to
encourage thickly layered programs that destroy transparency. [48]
Raymond compares this unfavourably to the approach taken with Unix and the C programming language .
[48]
Rob Pike , a programmer
involved in the creation of UTF-8 and Go , has called object-oriented
programming "the Roman
numerals of computing" [49] and has
said that OOP languages frequently shift the focus from data structures and algorithms to types . [50]
Furthermore, he cites an instance of a Java professor whose
"idiomatic" solution to a problem was to create six new classes, rather than to simply use a
lookup table .
[51]
For efficiency's sake, Objects are passed to functions NOT by their value but by
reference.
What that means is that functions will not pass the Object, but instead pass a
reference or pointer to the Object.
If an Object is passed by reference to an Object Constructor, the constructor can put that
Object reference in a private variable which is protected by Encapsulation.
But the passed Object is NOT safe!
Why not? Because some other piece of code has a pointer to the Object, viz. the code that
called the Constructor. It MUST have a reference to the Object, otherwise it couldn't have passed it to
the Constructor.
The Reference Solution
The Constructor will have to Clone the passed in Object. And not a shallow clone but a deep
clone, i.e. every object that is contained in the passed in Object and every object in those
objects and so on and so on.
So much for efficiency.
And here's the kicker. Not all objects can be Cloned. Some have Operating System resources
associated with them making cloning useless at best or at worst impossible.
And EVERY single mainstream OO language has this problem.
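A small Python sketch of the aliasing problem described above, and the deep-copy workaround (the class and field names are made up for illustration):

```python
# Sketch of the aliasing problem described above and the deep-copy escape
# hatch. The caller keeps its own reference to the argument, so storing it
# directly would let outside code mutate the object's private state.
import copy

class Account:
    def __init__(self, tags):
        self._tags = copy.deepcopy(tags)   # defensive deep clone

    def tags(self):
        return list(self._tags)

shared = ["vip", ["eu", "west"]]
acct = Account(shared)
shared[1].append("mutated")           # caller mutates its own nested list
print(acct.tags())                    # [['vip', ['eu', 'west']] stays intact
```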
Beg, borrow, steal, buy, fabricate or otherwise obtain a rubber duck (bathtub
variety).
Place rubber duck on desk and inform it you are just going to go over some code with it,
if that's all right.
Explain to the duck what your code is supposed to do, and then go into detail and explain
your code line by line.
At some point you will tell the duck what you are doing next and then realise that that
is not in fact what you are actually doing. The duck will sit there serenely, happy in the
knowledge that it has helped you on your way.
Note : In a pinch a coworker might be able to substitute for the duck, however, it is often
preferred to confide mistakes to the duck instead of your coworker.
Original Credit : ~Andy from lists.ethernal.org
FAQs
If ducks are so smart, why don't we just let the ducks do all the work? It would be
wonderful if this were true, but the fact is that most ducks prefer to take a mentoring
role. There are a few ducks however that do choose to code, but these are the ducks that
nobody hears about because they are selected for secret government projects that are highly
classified in nature.
Where can I hire my own duck? Great question!
Amazon.com hosts a wide selection of affordable ducks that have graduated with a
technical degree from some of the world's leading universities.
Why does this site exist? As a young intern in 2008 I repeatedly pestered a mentor of
mine similar to Kevin's Rubber Duck Story and eventually
my mentor pointed me at the 2002 lists.ethernal.org post
by Andy , which paraphrased a story from the 1999 book The Pragmatic Programmer .
That night I ordered a rubber duck from Amazon and purchased this domain name as a way of
owning up to my behavior.
Tiobe Software's latest programming language popularity index shows the statistical
programming language R making a comeback, rising to eighth place after falling out of the top
20 in May for the first time in three years. Tiobe's Paul Jansen believes demand in
universities and from global efforts to find a vaccine for Covid-19 has given a boost to R and
Python. Said Jansen, "Lots of statistics and data mining needs to be done to find a vaccine for
the Covid-19 virus. As a consequence, statistical programming languages that are easy to learn
and use gain popularity now." Tiobe's rankings are based on search engine results related to
programming language queries. The C programming language topped the latest index, followed in
descending order by Java, Python, C++, C#, Visual Basic, JavaScript, R, PHP, and Swift.
Recently I read
Sapiens: A Brief History of Humankind
by Yuval Harari. The basic thesis of the book is that humans require 'collective fictions' so that we can collaborate in larger numbers
than the 150 or so our brains are big enough to cope with by default. Collective fictions are things that don't describe solid objects
in the real world we can see and touch. Things like religions, nationalism, liberal democracy, or Popperian falsifiability in science.
Things that don't exist, but when we act like they do, we easily forget that they don't.
Collective Fictions in IT – Waterfall
This got me thinking about some of the things that bother me today about the world of software engineering. When I started in
software 20 years ago, God was waterfall. I joined a consultancy (ca. 400 people) that wrote very long specs which were honed to
within an inch of their life, down to the individual Java classes and attributes. These specs were submitted to the customer (God
knows what they made of it), who signed it off. This was then built, delivered, and monies were received soon after. Life was simpler
then and everyone was happy.
Except there were gaps in the story – customers complained that the spec didn't match the delivery, and often the product delivered
would not match the spec, as 'things' changed while the project went on. In other words, the waterfall process was a 'collective
fiction' that gave us enough stability and coherence to collaborate, get something out of the door, and get paid.
This consultancy went out of business soon after I joined. No conclusions can be drawn from this.
Collective Fictions in IT – Startups ca. 2000
I got a job at another software development company that had a niche with lots of work in the pipe. I was employee #39. There
was no waterfall. In fact, there was nothing in the way of methodology I could see at all. Specs were agreed with a phone call. Design,
prototype and build were indistinguishable. In fact it felt like total chaos; it was against all of the precepts of my training.
There was more work than we could handle, and we got on with it.
The fact was, we were small enough not to need a collective fiction we had to name. Relationships and facts could be kept in our
heads, and if you needed help, you literally called out to the room. The tone was like this, basically:
Of course there were collective fictions, we just didn't name them:
We will never have a mission statement
We don't need HR or corporate communications, we have the pub (tough luck if you have a family)
We only hire the best
We got slightly bigger, and customers started asking us what our software methodology was. We guessed it wasn't acceptable to
say 'we just write the code' (legend had it our C-based application server – still in use and blazingly fast – was written before
my time in a fit of pique with a stash of amphetamines over a weekend.)
Turns out there was this thing called 'Rapid Application Development' that emphasized prototyping. We told customers we did RAD,
and they seemed happy, as it was A Thing. It sounded to me like 'hacking', but to be honest I'm not sure anyone among us really properly
understood it or read up on it.
As a collective fiction it worked, because it kept customers off our backs while we wrote the software.
Soon we doubled in size, moved out of our cramped little office into a much bigger one with bigger desks, and multiple floors.
You couldn't shout out your question to the room anymore. Teams got bigger, and these things called 'project managers' started appearing
everywhere talking about 'specs' and 'requirements gathering'. We tried and failed to rewrite our entire platform from scratch.
Yes, we were back to waterfall again, but this time the working cycles were faster and smaller, and the same problems of changing
requirements and disputes with customers as before. So was it waterfall? We didn't really know.
Collective Fictions in IT – Agile
I started hearing the word 'Agile' about 2003. Again, I don't think I properly read up on it ever, actually. I got snippets here
and there from various websites I visited and occasionally from customers or evangelists that talked about it. When I quizzed people
who claimed to know about it their explanations almost invariably lost coherence quickly. The few that really had read up on it seemed
incapable of actually dealing with the very real pressures we faced when delivering software to non-sprint-friendly customers, timescales,
and blockers. So we carried on delivering software with our specs, and some sprinkling of agile terminology. Meetings were called
'scrums' now, but otherwise it felt very similar to what went on before.
As a collective fiction it worked, because it kept customers and project managers off our backs while we wrote the software.
Since then I've worked in a company that grew to 700 people, and now work in a corporation of 100K+ employees, but the pattern
is essentially the same: which incantation of the liturgy will satisfy this congregation before me?
Don't You Believe?
I'm not going to beat up on any of these paradigms, because what's the point? If software methodologies didn't exist we'd have
to invent them, because how else would we work together effectively? You need these fictions in order to function at scale. It's
no coincidence that the Agile paradigm has such a quasi-religious hold over a workforce that is immensely fluid and mobile. (If you
want to know what I really think about software development methodologies, read
this because it lays
it out much better than I ever could.)
One of many interesting arguments in Sapiens is that because these collective fictions can't adequately explain the world, and
often conflict with each other, the interesting parts of a culture are those where these tensions are felt. Often, humour derives
from these tensions.
'The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the
ability to function.' F. Scott Fitzgerald
I don't know about you, but I often feel this tension when discussion of Agile goes beyond a small team. When I'm told in a motivational
poster written by someone I've never met and who knows nothing about my job that I should 'obliterate my blockers', and those blockers
are both external and non-negotiable, what else can I do but laugh at it?
How can you be agile when there are blockers outside your control at every turn? Infrastructure, audit, security, financial planning,
financial structures all militate against the ability to quickly deliver meaningful iterations of products. And who is the customer
here, anyway? We're talking about the square of despair:
When I see diagrams like this representing Agile I can only respond with black humour shared with my colleagues, like kids giggling
at the back of a church.
When within a smaller and well-functioning team, the totems of Agile often fly out of the window and what you're left
with (when it's good) is a team that trusts each other, is open about its trials, and has a clear structure (formal or informal)
in which agreement and solutions can be found and co-operation is productive. Google recently articulated this (reported briefly
here , and more in-depth
here ).
So Why Not Tell It Like It Is?
You might think the answer is to come up with a new methodology that's better. It's not like we haven't tried:
It's just not that easy, like the book says:
'Telling effective stories is not easy. The difficulty lies not in telling the story, but in convincing everyone else to believe
it. Much of history revolves around this question: how does one convince millions of people to believe particular stories about gods,
or nations, or limited liability companies? Yet when it succeeds, it gives Sapiens immense power, because it enables millions of
strangers to cooperate and work towards common goals. Just try to imagine how difficult it would have been to create states, or churches,
or legal systems if we could speak only about things that really exist, such as rivers, trees and lions.'
Let's rephrase that:
'Coming up with useful software methodologies is not easy. The difficulty lies not in defining them, but in convincing others
to follow them. Much of the history of software development revolves around this question: how does one convince engineers to believe
particular stories about the effectiveness of requirements gathering, story points, burndown charts or backlog grooming? Yet when
adopted, it gives organisations immense power, because it enables distributed teams to cooperate and work towards delivery. Just
try to imagine how difficult it would have been to create Microsoft, Google, or IBM if we could only speak about specific technical
challenges.'
Anyway, does the world need more methodologies? It's not like some very smart people haven't already thought about this.
Acceptance
So I'm cool with it. Lean, Agile, Waterfall, whatever, the fact is we need some kind of common ideology to co-operate in large
numbers. None of them are evil, so it's not like you're picking racism over socialism or something. Whichever one you pick is not
going to reflect the reality, but if you expect perfection you will be disappointed. And watch yourself for unspoken or unarticulated
collective fictions. Your life is full of them. Like that your opinion is important. I can't resist quoting this passage from Sapiens
about our relationship with wheat:
'The body of Homo sapiens had not evolved for [farming wheat]. It was adapted to climbing apple trees and running after gazelles,
not to clearing rocks and carrying water buckets. Human spines, knees, necks and arches paid the price. Studies of ancient skeletons
indicate that the transition to agriculture brought about a plethora of ailments, such as slipped discs, arthritis and hernias. Moreover,
the new agricultural tasks demanded so much time that people were forced to settle permanently next to their wheat fields. This completely
changed their way of life. We did not domesticate wheat. It domesticated us. The word 'domesticate' comes from the Latin domus, which
means 'house'. Who's the one living in a house? Not the wheat. It's the Sapiens.'
Maybe we're not here to direct the code, but the code is directing us. Who's the one compromising reason and logic to grow code?
Not the code. It's the Sapiens.
"And watch yourself for unspoken or unarticulated collective fictions. Your life is full of them."
Agree completely.
As for software development methodologies, I personally think that with a few tweaks the waterfall methodology could work quite
well. The key changes I'd suggest are to introduce developer guidance at the planning stage, including timeboxed explorations
of the feasibility of the proposals, and to aim for specs that outline business requirements rather than dictating how they
should be implemented.
Reply
A very entertaining article! I have a similar experience and outlook. I've not tried Lean. I once heard a senior developer
say that methodologies were just a stick with which to beat developers. This was largely in the case of clients who agree to engage
in whatever process when amongst business people and are then absent at grooming, demos, releases, feedback meetings and so on.
When the software is delivered at progressively short notice, it's always the developer who has to carry the burden of ensuring
quality, feeling keenly responsible for the work they do (the conscientious ones anyway). Then non-technical management hides behind
the process, and the failure to have the client fully engaged is quickly forgotten.
It reminds me (I'm rambling now, sorry) of factory workers in the 80s complaining about working conditions and the management
nodding and smiling while doing nothing to rectify the situation and doomed to repeat the same error. Except now the workers are
intelligent and will walk, taking their business knowledge and skill set with them.
Reply
Very enjoyable. I had a stab at the small sub-trail of 'syntonicity' here:
http://www.scidata.ca/?p=895
Syntonicity is Stuart Watt's term which he probably got from Seymour Papert.
Of course, this may all become moot soon as our robot overlords take their place at the keyboard.
Reply
A great article! I was very much inspired by Yuval's book myself. So much that I wrote a post about DevOps being a collective
fiction : http://otomato.link/devops-is-a-myth/
Basically same ideas as yours but from a different angle.
Reply
I think part of the "need" for methodology is the desire for a common terminology. However, if everyone has their own view
of what these terms mean, then it all starts to go horribly wrong. The focus quickly becomes adhering to the methodology rather
than getting the work done.
Reply
A very well-written article. I retired from corporate development in 2014 but am still developing my own projects. I have written
on this very subject and these pieces have been published as well.
The idea that the Waterfall technique for development was the only one in use as we go back towards the earlier years is a
myth that has been built up by the folks who have been promoting the Agile technique, which for seniors like me has been just
another word for what we used to call "guerrilla programming". In fact, if one were to review the standards of design in software
engineering, there are 13 types of design techniques, all of which have been used at one time or another by many different companies
successfully. Waterfall was just one of them and was only recommended for very large projects.
The author is correct to conclude by implication that the best technique for design and implementation is the RAD technique
promoted by Steve McConnell of Construx and a team that can work well with others. His book, still in its first edition since
1996, is considered the Bible for software development and describes every aspect of software engineering one could require.
However, his book is only offered as a guide from which engineers can pick what they really need for the development of their
projects, not as hard standards. Nonetheless, McConnell stresses the need for good specifications and risk management; neglecting
the latter almost always causes a project to fail or to produce less than satisfactory results. His work is supported by over
35 years of research.
Reply
Hilarious and oh so true. Remember the first time you were being taught Agile and they told you that the stakeholders would
take responsibility for their role and decisions. What a hoot! Seriously, I guess they did use to write detailed specs, but in
my twenty-some years, I've just been thrilled if I had a business analyst who knew what they wanted.
Reply
OK, here's a collective fiction for you. "Methodologies don't work. They don't reflect reality. They are just something we
tell customers because they are appalled when we admit that our software is developed in a chaotic and unprofessional manner."
This fiction serves those people who already don't like process, and gives them excuses.
We do things the same way over and over for a reason. We have traffic lights because it reduces congestion and reduces traffic
fatalities. We make cakes using a recipe because we like it when the result is consistently pleasing. So too with software methodologies.
Like cake recipes, not all software methodologies are equally good at producing a consistently good result. This fact alone should
tell you that there is something of value in the best ones. While there may be a very few software chefs who can whip up a perfect
result every time, the vast bulk of developers need a recipe to follow or the results are predictably bad.
Your diatribe against process does the community a disservice.
Reply
I have arrived at the conclusion that any and all methodologies would work – IF (and it's a big one), everyone managed to arrive
at a place where they considered the benefit of others before themselves. And, perhaps, they all used the same approach.
For me, it comes down to character rather than anything else. I can learn the skills or trade a chore with someone else.
Software developers; the ones who create "new stuff", by definition, have no roadmap. They have experience, good judgment,
the ability to 'survive in the wild', are always wanting to "see what is over there" and trust, as was noted is key. And there
are varying levels of developer. Some want to build the roads; others use the roads built for them and some want to survey for
the road yet to be built. None of these are wrong – or right.
The various methodology fights are like arguing over what side of the road to drive on, how to spell colour and color. Just
pick one, get over yourself and help your partner(s) become successful.
Ah, right. Where do the various methodologies resolve greed, envy, distrust, selfishness, stepping on others for personal gain,
and all of the other REAL killers of success, again?
I have seen great teams succeed and far too many fail. Those that have failed more often than not did so for character-related
issues rather than technical ones.
Reply
Before there exists any success, a methodology must freeze a definition for roles, as well as process. Unless there exist sufficient
numbers and specifications of roles, and appropriate numbers of sapiens to hold those roles, then the one on the end becomes overburdened
and triggers systemic failure.
There has never been a sufficiently-complex methodology that could encompass every field, duty, and responsibility in a software
development task. (This is one of the reasons "chaos" is successful. At least it accepts the natural order of things, and works
within the interstitial spaces of a thousand objects moving at once.)
We even lie to ourselves when we name what we're doing: Methodology. It sounds so official, so logical, so orderly. That's
a myth. It's just a way of pushing the responsibility down from the most powerful to the least powerful -- every time.
For every "methodology," who is the caboose on the end of this authority train? The "coder."
The tighter the role definitions become in any methodology, the more actual responsibilities cascade down to the "coder." If
the specs conflict, who raises his hand and asks the question? If a deadline is unreasonable, who complains? If a technique is
unusable in a situation, who brings that up?
The person is obviously the "coder." And what happens when the coder asks this question?
In one methodology the "coder" is told to stop production and raise the issue with the manager who will talk to the analyst
who will talk to the client who will complain that his instructions were clear and it all falls back to the "coder" who, obviously,
was too dim to understand the 1,200 pages of specifications the analyst handed him.
In another, the "coder" is told, "you just work it out." And the concomitant chaos renders the project unstable.
In another, the "coder" is told "just do what you're told." And the result is incompatible with the rest of the project.
I've stopped "coding" for these reasons and because everybody is happy with the myth of programming process because they aren't
the caboose.
Reply
I was going to make fun of this post for being whiney and defeatist. But the more I thought about it, the more I realized
it contained a big nugget of truth. A lot of methodologies, as practiced, have the purpose of putting off risk onto the developers,
of fixing responsibility on developers so the managers aren't responsible for any of the things that can go wrong with projects.
Reply
Great article! I have experienced the same regarding software methodologies. And at a greater level, thank you for introducing
me to the concept of collective fictions; it makes so much sense. I will be reading Sapiens.
Reply
Actually, come to think of it, there are two types of Software Engineers who take process very seriously. One is acutely
aware of software entropy and wants to proactively fight against it, because they want to engineer to a high standard and don't
like working the weekend; so they want things organised. Then there's another type who can come across as being a bit dogmatic.
Maybe your links with collective delusions help explain some of the human psychology here.
Reply
First of all this is a great article, very well written. A couple of remarks. Early in waterfall, the large business requirements
documents didn't work for two reasons. First, there was no new business process; it was the same business process that had to be
applied within a new technology (from mainframes to open Unix systems, from ASCII to RAD tools and 4GL languages). Second, many
consultancy companies (mostly the Big 4) were using "copy & paste" methods to fill in these documents, submit the time-and-materials
forms for the consultants, increase the revenue and move on. Things have changed with the adoption of smartphones, etc.
To reflect the author's idea, in my humble opinion the collective fiction here is the assumption that quality of work is embedded
into the whole development life cycle.
Thanks
Kostas
Reply
Sorry, did you forget to finish the article? I don't see the conclusion providing the one true programming methodology that
works in all occasions. What is the magic procedure? Thanks in advance.
Reply
Knowing how Linux uses libraries, including the difference between static and dynamic linking, can help you fix dependency problems.
Linux, in a way, is a series of static and dynamic libraries that depend on each other. For
new users of Linux-based systems, the whole handling of libraries can be a mystery. But with
experience, the massive amount of shared code built into the operating system can be an
advantage when writing new applications.
To help you get familiar with this topic, I prepared a small example application that shows the most common
methods that work on common Linux distributions (these have not been tested on other systems).
To follow along with this hands-on tutorial using the example application, open a command
prompt and type:
$ git clone https://github.com/hANSIc99/library_sample
$ cd library_sample/
$ make
cc -c main.c -Wall -Werror
cc -c libmy_static_a.c -o libmy_static_a.o -Wall -Werror
cc -c libmy_static_b.c -o libmy_static_b.o -Wall -Werror
ar -rsv libmy_static.a libmy_static_a.o libmy_static_b.o
ar: creating libmy_static.a
a - libmy_static_a.o
a - libmy_static_b.o
cc -c -fPIC libmy_shared.c -o libmy_shared.o
cc -shared -o libmy_shared.so libmy_shared.o
$ make clean
rm *.o
After executing these commands, these files should be added to the directory (run
ls to see them):
my_app
libmy_static.a
libmy_shared.so
About static linking
When your application links against a static library, the library's code becomes part of the
resulting executable. This is performed only once at linking time, and these static libraries
usually end with a .a extension.
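As a minimal sketch of the mechanism (not the repository's actual makefile rule, and assuming a main.c that only needs
the functions in the static library), linking against the archive looks like this:
$ cc main.c libmy_static.a -o statically_linked_app
The object code from libmy_static.a is copied into statically_linked_app at link time, so the archive is no longer needed
when the program runs.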
A static library is an archive (ar) of object files. The object files are
usually in the ELF format. ELF is short for Executable and Linkable Format,
which is compatible with many operating systems.
The output of the file command tells you that the static library
libmy_static.a is the ar archive type:
$ file libmy_static.a
libmy_static.a: current ar archive
With ar -t, you can look into this archive; it shows two object files:
$ ar -t libmy_static.a
libmy_static_a.o
libmy_static_b.o
You can extract the archive's files with ar -x <archive-file>. The
extracted files are object files in ELF format:
$ ar -x libmy_static.a
$ file libmy_static_a.o
libmy_static_a.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
About dynamic linking
Dynamic linking means the use of shared libraries. Shared libraries usually end with
.so (short for "shared object").
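As a sketch of the mechanism (again, the repository's makefile may do this differently, and assuming main.c only uses the
shared library), an application is linked against such a shared object with the -L and -l flags, and the library is then
resolved at run time instead of being copied into the binary:
$ cc main.c -L$(pwd) -lmy_shared -o my_app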
Shared libraries are the most common way to manage dependencies on Linux systems. These
shared resources are loaded into memory before the application starts, and when several
processes require the same library, it will be loaded only once on the system. This feature
saves on memory usage by the application.
Another thing to note is that when a bug is fixed in a shared library, every application
that references this library will profit from it. This also means that if the bug remains
undetected, each referencing application will suffer from it (if the application uses the
affected parts).
It can be very hard for beginners when an application requires a specific version of the
library, but the linker only knows the location of an incompatible version. In this case, you
must help the linker find the path to the correct version.
Although this is not an everyday issue, understanding dynamic linking will surely help you
in fixing such problems.
Fortunately, the mechanics for this are quite straightforward.
To detect which libraries are required for an application to start, you can use
ldd, which will print out the shared libraries used by a given file:
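A run against the example binary produces output roughly like the following (the paths and load addresses are illustrative
and will differ between systems):
$ ldd my_app
        linux-vdso.so.1 (0x...)
        libmy_shared.so => not found
        libc.so.6 => /lib64/libc.so.6 (0x...)
        /lib64/ld-linux-x86-64.so.2 (0x...)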
Note that the library libmy_shared.so is part of the repository but is not
found. This is because the dynamic linker, which is responsible for loading all dependencies
into memory before executing the application, cannot find this library in the standard
locations it searches.
Errors associated with linkers finding incompatible versions of common libraries (like
bzip2, for example) can be quite confusing for a new user. One way around this is
to add the repository folder to the environment variable LD_LIBRARY_PATH to tell
the linker where to look for the correct version. In this case, the right version is in this
folder, so you can export it:
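Assuming the library sits in the current working directory, the export looks roughly like this:
$ export LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH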
Now the dynamic linker knows where to find the library, and the application can be executed.
You can rerun ldd to invoke the dynamic linker, which inspects the application's
dependencies and loads them into memory. The memory address is shown after the object
path:
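With LD_LIBRARY_PATH set, the relevant line of the ldd output should now resolve, roughly like this (the path and address
are illustrative):
$ ldd my_app | grep libmy_shared
        libmy_shared.so => /home/user/library_sample/libmy_shared.so (0x...)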
To find out which linker is invoked, you can use file:
$ file my_app
my_app: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked,
interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=26c677b771122b4c99f0fd9ee001e6c743550fa6, for GNU/Linux 3.2.0, not stripped
The linker /lib64/ld-linux-x86-64.so.2 is a symbolic link to
ld-2.31.so, which is the default linker for my Linux distribution:
$ file /lib64/ld-linux-x86-64.so.2
/lib64/ld-linux-x86-64.so.2: symbolic link to ld-2.31.so
Looking back at the output of ldd, you can also see (next to
libmy_shared.so) that each dependency ends with a number (e.g.,
/lib64/libc.so.6). The usual naming scheme of shared objects is:
libXYZ.so.<MAJOR>.<MINOR>
On my system, libc.so.6 is also a symbolic link to the shared object
libc-2.31.so in the same folder:
$ file /lib64/libc.so.6
/lib64/libc.so.6: symbolic link to libc-2.31.so
If you are facing the issue that an application will not start because the loaded library
has the wrong version, it is very likely that you can fix this issue by inspecting and
rearranging the symbolic links or specifying the correct search path (see "The dynamic loader:
ld.so" below).
The dynamic loader: ld.so
On Linux, you are mostly dealing with shared objects, so there must be a mechanism that
detects an application's dependencies and loads them into memory.
ld.so looks for shared objects in these places, in the following order:
1. The relative or absolute path in the application (hardcoded with the -rpath compiler option on GCC)
2. The environment variable LD_LIBRARY_PATH
3. The file /etc/ld.so.cache
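One handy related check (beyond what the walkthrough itself uses) is to ask ldconfig to print what is currently registered
in the loader cache and filter for a library name:
$ ldconfig -p | grep libc.so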
Keep in mind that adding a library to the system's library archive /usr/lib64
requires administrator privileges. You could copy libmy_shared.so manually to the
library archive and make the application work without setting LD_LIBRARY_PATH:
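A sketch of that manual approach (root privileges required; ldconfig refreshes the loader cache afterwards):
$ sudo cp libmy_shared.so /usr/lib64/
$ sudo ldconfig
$ unset LD_LIBRARY_PATH
$ ldd my_app | grep libmy_shared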
If you want your application to use your shared libraries, you can specify an absolute or
relative path during compile time.
Modify the makefile (line 10) and recompile the program by invoking make -B .
Then, the output of ldd shows libmy_shared.so is listed with its
absolute path.
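The makefile line itself is not reproduced here, but the mechanism behind it is GCC's rpath support; a hypothetical link
line that embeds the repository folder as a run-time search path would look like this:
$ cc main.c -L$(pwd) -lmy_shared -Wl,-rpath,$(pwd) -o my_app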
This is a good example, but how would this work if you were making a library for others to
use? New library locations can be registered by writing them to /etc/ld.so.conf or
creating a <library-name>.conf file containing the location under
/etc/ld.so.conf.d/ . Afterward, ldconfig must be executed to rewrite
the ld.so.cache file. This step is sometimes necessary after you install a program
that brings some special shared libraries with it.
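For example, to register a hypothetical directory /opt/myproject/lib system-wide:
$ echo "/opt/myproject/lib" | sudo tee /etc/ld.so.conf.d/myproject.conf
$ sudo ldconfig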
Usually, there are different libraries for the 32-bit and 64-bit versions of applications.
The following list shows their standard locations for different Linux distributions:
[table omitted]
Knowing where to look for these key libraries can make broken library links a problem of the
past.
While it may be confusing at first, understanding dependency management in Linux libraries
is a way to feel in control of the operating system. Run through these steps with other
applications to become familiar with common libraries, and continue to learn how to fix any
library challenges that could come up along your way.
Next, Stack Overflow breaks down the languages that developers most want to use but haven't
started yet. This list is radically different from the most-loved and most-dreaded lists, which
are composed of languages that developers already utilize in the course of software
development. As you can see, Python tops this particular list, followed by JavaScript and Go:
[ image omitted]
What conclusions can we draw from this data? Python and JavaScript are widely used languages -- it
wouldn't be a stretch to call them ubiquitous -- and their presence at the top of the "most
wanted" list suggests that technologists realize knowing these languages can unlock all kinds
of opportunities. In a similar fashion, the presence of up-and-comers such as Go and Kotlin
hints that developers suspect these languages could become big in coming years, and they want
to learn them now.
And if you want to learn Rust (and find out why it topped the "most loved" languages),
rust-lang.org offers lots of
handy documentation. There are also some handy (free) tutorials available via Medium .
Java is a pretty stupid language, in some areas inferior even to plain C (but it found its
niche as a new Cobol). The idea, though, was to make the language portable, not to create a "decent"
language.
Rust feels like the place
to be: it's well-structured, it's expressive, it helps you do the right thing. I recently
started learning
Rust after many years of Java development. The five points that keep coming to mind are:
Rust feels familiar
References make sense
Ownership will make sense
Cargo is helpful
The compiler is amazing
I absolutely stand by all of these, but I've got a little more to say because I now feel
like a Rustacean 1 in that:
I don't feel like programming in anything else ever again.
I've moved away from simple incantations.
What do I mean by these two statements? Well, the first is pretty simple: Rust feels like
the place to be. It's well-structured, it's expressive, it helps you do the right thing,
2 it's
got great documentation and tools, and there's a fantastic community. And, of course, it's all
open source, which is something that I care about deeply.
Here is an example of what it is like to use Rust:
// Where checkhashes is a pre-defined vector of hashes to verify
let algorithms = vec![String::from("SHA-256"); checkhashes.len()];
This creates a new vector called "algorithms," of the same length as the vector
"checkhashes," and fills it with the String "SHA-256." And the second thing? Well, I decided
that in order to learn Rust properly, I should take a project that I had originally written in
Java and reimplement it in hopefully fairly idiomatic Rust. Before long, I started fixing
mistakes -- and making mistakes -- around implementation rather than around syntax. And I
wasn't just copying text from tutorials or making minor, seemingly random changes to my code
based on the compiler output. In other words, I was getting things to compile, understanding
why they compiled, and then just making programming mistakes. 3
Here's another example, which should feel quite familiar:
fn usage() {
    println!("Usage: findfromserial KEY_LENGTH INITIAL_SALT CHECK_HASH1 [CHECK_HASH2, ...]");
    std::process::exit(1);
}
This is a big step forward. When you start learning a language, it's easy just to copy and
paste text that you've seen elsewhere, or fiddle with unfamiliar constructs until they -- sort
of -- work. Using code or producing code that you don't really understand but seems to work is
sometimes referred to as "using incantations" (from the idea that most magicians in fiction,
film, and gaming recite collections of magic words that "just work" without really
understanding what they're doing or what the combination of words actually means). Some
languages 4 are particularly prone to
this sort of approach, but many -- most? -- people learning a new language are prone to doing
this when they start out just because they want things to work.
Recently, I was up until 1am implementing a new feature -- accepting command-line input --
that I couldn't really get my head 'round. I'd spent quite a lot of time on it (including
looking for -- and failing to find -- some appropriate incantations), and then asked for some
help on an internal rust-lang channel. (You might want to sign up to the general Slack Rust channel inhabited by some
people I know.) A number of people had made some suggestions about what had been going wrong,
and one person was enormously helpful in picking apart some of the suggestions, so I understood
them better. He explained quite a lot, but finished with, "I don't know the return type of the
hash function you're calling -- I think this is a good spot for you to figure this piece out on
your own."
It may seem weird until you get your head 'round it, but it actually works as you might
expect: I wanted to take input from the command line, skip the first three inputs, iterate over
the rest, casting each to a vector of u8's and creating a vector of those. The _
at the end of the "collect" call vacuums up any errors or problems and basically throws them
away.
This was just what I needed, and what any learner of anything, including programming
languages, needs. So when I had to go downstairs at midnight to let the dog out, I decided to
stay down and see if I could work things out for myself. And I did. I took the suggestions that
people had made, understood what they were doing, tried to divine what they should be
doing, worked out how they should be doing it, and then found the right way of making it
happen.
I've still got lots to learn, and I'll make lots of mistakes still, but I now feel that I'm
in a place to find my way through those mistakes (with a little help along the way, probably --
thanks to everyone who's already pointed me in the right direction). But I do feel that I'm now
actually programming in Rust. And I like it.
This is what Rust programmers call themselves.
It's almost impossible to stop people doing the wrong thing entirely, but encouraging
people to do the right thing is great. In fact, Rust goes further and actually makes it
difficult to do the wrong thing in many situations. You really have to try quite hard
to do bad things in Rust.
I found a particularly egregious off-by-one error in my code, for instance, which had
nothing to do with Rust, and everything to do with my not paying enough attention to the
program flow.
Microsoft's EEE tactics, which can be redefined as "steal; add complexity and bloat; trash the original," can be
used on open source too and, as the success of systemd has shown, can be a pretty successful strategy.
Notable quotes:
"... Free software acts like proprietary software when it treats the existence of alternatives as a problem to be solved. I personally never trust a project with developers as arrogant as that. ..."
...it was developed along lines that are not entirely different from
Microsoft's EEE tactics -- which today I will offer a new acronym and description for:
1. Steal
2. Add Bloat
3. Original Trashed
It's difficult conceptually to "steal" Free software, because it (sort of, effectively)
belongs to everyone. It's not always Public Domain -- copyleft is meant to prevent that. The
only way you can "steal" free software is by taking it from everyone and restricting it again.
That's like "stealing" the ocean or the sky, and putting it somewhere that people can't get to
it. But this is what non-free software does. (You could also simply go against the license
terms, but I doubt Stallman would go for the word "stealing" or "theft" as a first choice to
describe non-compliance).
... ... ...
Again and again, Microsoft "Steals" or "Steers" the development process itself so it
can gain control (pronounced: "ownership") of the software. It is a gradual process, where
Microsoft has more and more influence until they dominate the project and with it, the user.
This is similar to the process where cults (or drug addiction) take over people's lives, and
similar to the process where narcissists interfere in the lives of others -- by staking a claim
and gradually dominating the person or project.
Then they Add Bloat -- more features. GitHub is friendly to use, you don't have to care
about how Git works to use it (this is true of many GitHub clones as well, as even I do not
really care how Git works very much. It took a long time for someone to even drag me towards
GitHub for code hosting, until they were acquired and I stopped using it) and due to its GLOBAL
size, nobody can or ought to reproduce its network effects.
I understand the draw of network effects. That's why larger federated instances of code
hosts are going to be more popular than smaller instances. We really need a mix -- smaller
instances to be easy to host and autonomous, larger instances to draw people away from even
more gigantic code silos. We can't get away from network effects (just like the War on Drugs
will never work) but we can make them easier and less troublesome (or safer) to deal with.
Finally, the Original is trashed, and the SABOTage is complete. This has happened with
Python against Python 2, despite protests from seasoned and professional developers, it was
deliberately attempted with Systemd against not just sysvinit but ALL alternatives -- Free
software acts like proprietary software when it treats the existence of alternatives as a
problem to be solved. I personally never trust a project with developers as arrogant as
that.
... ... ...
There's a meme about creepy vans with "FREE CANDY" painted on the side, which I took one of
the photos from and edited it so that it said "FEATURES" instead. This is more or less how I
feel about new features in general, given my experience with their abuse in development,
marketing and the takeover of formerly good software projects.
People then accuse me of being against features, of course. As with the Dijkstra article,
the real problem isn't Basic itself. The problem isn't features per se (though they do play a
very key role in this problem) and I'm not really against features -- or candy, for that
matter.
I'm against these things being used as bait, to entrap people in an unpleasant situation
that makes escape difficult. You know, "lock-in". Don't get in the van -- don't even go NEAR
the van.
Candy is nice, and some features are nice too. But we would all be better off if we could
get the candy safely, and delete the creepy horrible van that comes with it. That's true
whether the creepy van is GitHub, or surveillance by GIAFAM, or a Leviathan "init" system, or
just breaking decades of perfectly good Python code, to try to force people to develop
differently because Google or Microsoft (who both have had heavy influence over newer Python
development) want to try to force you to -- all while using "free" software.
If all that makes free software "free" is the license -- (yes, it's the primary and key
part, it's a necessary ingredient) then putting "free" software on GitHub shouldn't be a
problem, right? Not if you're running LibreJS, at least.
In practice, "Free in license only" ignores the fact that if software is effectively free,
the user is also effectively free. If free software development gets dragged into doing the
bidding of non-free software companies and starts creating lock-in for the user, even if it's
external or peripheral, then they simply found an effective way around the true goal of the
license. They did it with Tivoisation, so we know that it's possible. They've done this in a
number of ways, and they're doing it now.
If people are trying to make the user less free, and they're effectively making the user
less free, maybe the license isn't an effective monolithic solution. The cost of freedom is
eternal vigilance. They never said "The cost of freedom is slapping a free license on things",
as far as I know. (Of course it helps). This really isn't a straw man, so much as a rebuttal to
the extremely glib take on software freedom in general that permeates development communities
these days.
But the benefits of Free software, free candy and new features are all meaningless, if the
user isn't in control.
Don't get in the van.
"The freedom to NOT run the software, to be free to avoid vendor lock-in through
appropriate modularization/encapsulation and minimized dependencies; meaning any free software
can be replaced with a user's preferred alternatives (freedom 4)." – Peter
Boughton
The world is filled with conformism and groupthink. Most people do not wish to think for
themselves. Thinking for oneself is dangerous, requires effort and often leads to rejection by
the herd of one's peers.
The profession of arms, the intelligence business, the civil service bureaucracy, the
wondrous world of groups like the League of Women Voters, Rotary Club as well as the empire of
the thinktanks are all rotten with this sickness, an illness which leads inevitably to
stereotyped and unrealistic thinking, thinking that does not reflect reality.
The worst locus of this mentally crippling phenomenon is the world of the academics. I have
served on a number of boards that awarded Ph.D and post doctoral grants. I was on the Fulbright
Fellowship federal board. I was on the HF Guggenheim program and executive boards for a long
time. Those are two examples of my exposure to the individual and collective academic
minds.
As a class of people I find them unimpressive. The credentialing exercise in acquiring a
doctorate is basically a nepotistic process of sucking up to elders and a crutch for ego
support as well as an entrance ticket for various hierarchies, among them the world of the
academy. The process of degree acquisition itself requires sponsorship by esteemed academics
who recommend candidates who do not stray very far from the corpus of known work in whichever
narrow field is involved. The endorsements from RESPECTED academics are often decisive in the
award of grants.
This process is continued throughout a career in academic research. PEER REVIEW is the
sine qua non for acceptance of a "paper," invitation to career making conferences, or
to the Holy of Holies, TENURE.
This life experience forms and creates CONFORMISTS, people who instinctively boot-lick their
fellows in a search for the "Good Doggy" moments that make up their lives. These people are for
sale. Their price may not be money, but they are still for sale. They want to be accepted as
members of their group. Dissent leads to expulsion or effective rejection from the group.
This mentality renders doubtful any assertion that a large group of academics supports any
stated conclusion. As a species academics will say or do anything to be included in their
caste.
This makes them inherently dangerous. They will support any party or parties, of any
political inclination if that group has the money, and the potential or actual power to
maintain the academics as a tribe. pl
That is the nature of tribes and humans are very tribal. At least most of them.
Fortunately, there are outliers. I was recently reading "Political Tribes", written
by a couple who are both law professors, which examines this.
Take global warming (aka the rebranded climate change). Good luck getting grants to do any
skeptical research. This highly complex subject which posits human impact is a perfect
example of tribal bias.
My success in the private sector comes from consistently questioning what I wanted to be
true, to prevent suboptimal design decisions.
I also instinctively dislike groups that have some idealized view of "What is to be
done?"
As Groucho said: "I refuse to join any club that would have me as a member"
The 'isms' had it, be it Nazism, Fascism, Communism, Totalitarianism, Elitism: all demand
conformity and adherence to groupthink. If one does not kowtow to whichever 'ism' is at
play, those outside their groupthink are persecuted, ostracized, jailed, and executed, all
because they defy their conformity demands and defy allegiance to them.
One world, one religion, one government, one Borg. all lead down the same road to --
Orwell's 1984.
David Halberstam: The Best and the Brightest. (Reminder how the heck we got into Vietnam,
when the best and the brightest were serving as presidential advisors.)
Also good Halberstam re-read: The Powers that Be - when the conservative media controlled
the levers of power; not the uber-liberal one we experience today.
"... In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modeling follows naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best paradigm. ..."
"... In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules ..."
"... I get tired of the purists who think that OO is the only possible answer. The world is not a nail. ..."
OOP has been a golden hammer ever since Java, but we've noticed the downsides quite a while ago. Ruby on Rails was the
convention-over-configuration darling child of the last decade and stopped a large piece of the circular abstraction craze that
Java was/is. Every half-assed PHP toy project is kicking Java's ass on the web, and that's because WordPress gets the job done, fast,
despite having a DB model that was built by non-programmers on crack.
Most critical processes are procedural, even today.
There are a lot of mediocre programmers who follow the principle "if you have a hammer, everything looks like a nail". They
know OOP, so they think that every problem must be solved in an OOP way.
In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modeling follows
naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best
paradigm.
In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules.
For example, I am working on a natural language system that is supposed to generate textual answers to user inquiries. What
"object" am I supposed to create to do this task? An "Answer" object that generates itself? Yes, that would work, but an imperative,
static "generate answer" method makes at least as much sense.
There are different ways of thinking, different ways of modelling a problem. I get tired of the purists who think that OO
is the only possible answer. The world is not a nail.
Object-oriented programming generates a lot of what looks like work. Back in the days of
fanfold, there was a type of programmer who would only put five or ten lines of code on a
page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming
is like crack for these people: it lets you incorporate all this scaffolding right into your
source code. Something that a Lisp hacker might handle by pushing a symbol onto a list
becomes a whole file of classes and methods. So it is a good tool if you want to convince
yourself, or someone else, that you are doing a lot of work.
Eric Lippert observed a similar occupational hazard among developers. It's something he
calls object happiness .
What I sometimes see when I interview people and review code is symptoms of a disease I call
Object Happiness. Object Happy people feel the need to apply principles of OO design to
small, trivial, throwaway projects. They invest lots of unnecessary time making pure virtual
abstract base classes -- writing programs where IFoos talk to IBars but there is only one
implementation of each interface! I suspect that early exposure to OO design principles
divorced from any practical context that motivates those principles leads to object
happiness. People come away as OO True Believers rather than OO pragmatists.
I've seen so many problems caused by excessive, slavish adherence to OOP in production
applications. Not that object oriented programming is inherently bad, mind you, but a little
OOP goes a very long way . Adding objects to your code is like adding salt to a dish: use a
little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's
better to err on the side of simplicity, and I tend to favor the approach that results in
less code, not more .
Given my ambivalence about all things OO, I was amused when Jon Galloway forwarded me a link to Patrick Smacchia's web page . Patrick
is a French software developer. Evidently the acronym for object oriented programming is
spelled a little differently in French than it is in English: POO.
That's exactly what I've imagined when I had to work on code that abused objects.
But POO code can have another, more constructive, meaning. This blog author argues that OOP
pales in importance to POO. Programming fOr Others , that
is.
The problem is that programmers are taught all about how to write OO code, and how doing so
will improve the maintainability of their code. And by "taught", I don't just mean "taken a
class or two". I mean: have it pounded into your head in school, spend years as a professional being
mentored by senior OO "architects", and only then finally kind of understand how to use it
properly, some of the time. Most engineers wouldn't consider using a non-OO language, even if
it had amazing features. The hype is that major.
So what, then, about all that code programmers write before their 10 years OO
apprenticeship is complete? Is it just doomed to suck? Of course not, as long as they apply
other techniques than OO. These techniques are out there but aren't as widely discussed.
The improvement [I propose] has little to do with any specific programming technique. It's
more a matter of empathy; in this case, empathy for the programmer who might have to use your
code. The author of this code actually thought through what kinds of mistakes another
programmer might make, and strove to make the computer tell the programmer what they did
wrong.
In my experience the best code, like the best user interfaces, seems to magically
anticipate what you want or need to do next. Yet it's discussed infrequently relative to OO.
Maybe what's missing is a buzzword. So let's make one up, Programming fOr Others, or POO for
short.
The principles of object oriented programming are far more important than mindlessly,
robotically instantiating objects everywhere:
Stop worrying so much about the objects. Concentrate on satisfying the principles of
object orientation rather than object-izing everything. And most of all, consider the poor
sap who will have to read and support this code after you're done with it . That's why POO
trumps OOP: programming as if people mattered will always be a more effective strategy than
satisfying the architecture astronauts .
Daniel Korenblum, works at Bayes Impact. Updated May 25, 2015.
There are many reasons why non-OOP languages and paradigms/practices
are on the rise, contributing to the relative decline of OOP.
First off, there are a few things about OOP that many people don't like, which makes them
interested in learning and using other approaches. Below are some references from the OOP wiki
article:
One of the comments therein linked a few other good wikipedia articles which also provide
relevant discussion on increasingly-popular alternatives to OOP:
Modularity and design-by-contract are better implemented by module systems (Standard ML).
Personally, I sometimes think that OOP is a bit like an antique car. Sure, it has a bigger
engine and fins and lots of chrome etc., it's fun to drive around, and it does look pretty. It
is good for some applications, all kidding aside. The real question is not whether it's useful
or not, but for how many projects?
When I'm done building an OOP application, it's like a large and elaborate structure.
Changing the way objects are connected and organized can be hard, and the design choices of the
past tend to become "frozen" or locked in place for all future times. Is this the best choice
for every application? Probably not.
If you want to drive 500-5000 miles a week in a car that you can fix yourself without
special ordering any parts, it's probably better to go with a Honda or something more easily
adaptable than an antique vehicle-with-fins.
Finally, the best example is the growth of JavaScript as a language (officially called
EcmaScript now?). Although JavaScript/EcmaScript (JS/ES) is not a pure functional programming
language, it is much more "functional" than "OOP" in its design. JS/ES was the first mainstream
language to promote the use of functional programming concepts such as higher-order functions,
currying, and monads.
The recent growth of the JS/ES open-source community has not only been impressive in its
extent but also unexpected from the standpoint of many established programmers. This is partly
evidenced by the overwhelming number of active repositories on Github using
JavaScript/EcmaScript:
Because JS/ES treats both functions and objects as structs/hashes, it encourages us to blur
the line dividing them in our minds. This is a division that many other languages impose -
"there are functions and there are objects/variables, and they are different".
This seemingly minor (and often confusing) design choice enables a lot of flexibility and
power. In part this seemingly tiny detail has enabled JS/ES to achieve its meteoric growth
between 2005-2015.
This partially explains the rise of JS/ES and the corresponding relative decline of OOP. OOP
had become a "standard" or "fixed" way of doing things for a while, and there will probably
always be a time and place for OOP. But as programmers we should avoid getting too stuck in one
way of thinking / doing things, because different applications may require different
approaches.
Above and beyond the OOP-vs-non-OOP debate, one of our main goals as engineers should be
custom-tailoring our designs by skillfully choosing the most appropriate programming
paradigm(s) for each distinct type of application, in order to maximize the "bang for the buck"
that our software provides.
Although this is something most engineers can agree on, we still have a long way to go until
we reach some sort of consensus about how best to teach and hone these skills. This is not only
a challenge for us as programmers today, but also a huge opportunity for the next generation of
educators to create better guidelines and best practices than the current OOP-centric
pedagogical system.
Here are a couple of good books that elaborates on these ideas and techniques in more
detail. They are free-to-read online:
Mike MacHenry, software engineer, improv comedian, maker. Answered Feb 14, 2015.
Because the phrase itself was overhyped to an extraordinary degree. Then, as is common with overhyped things, many
other things took on that phrase as a name. Then people got confused and stopped calling what
they do OOP.
Yes, I think OOP (the phrase) is on the decline because people are becoming more educated
about the topic.
It's like artificial intelligence, now that I think about it. There aren't many people
these days who say they do AI to anyone but laymen. They would say they do machine
learning or natural language processing or something else. These are fields that the vastly
overhyped and really nebulous term AI used to describe, but then AI (the term) experienced a
sharp decline while these very concrete fields continued to flourish.
There is nothing inherently wrong with some of the functionality it offers; it's the way
OOP is abused as a substitute for basic good programming practices.
I was helping interns - students from a local CC - deal with idiotic assignments like
making a random number generator USING CLASSES, or displaying text to a screen USING CLASSES.
Seriously, WTF?
A room full of career programmers could not even figure out how you were supposed to do
that, much less why.
What was worse was a lack of understanding of basic programming skill or even the use of
variables, as the kids were being taught that EVERY program was to be assembled solely by
sticking together bits of libraries.
There was no coding, just hunting for snippets of preexisting code to glue together. Zero
idea they could add their own, much less how to do it. OOP isn't the problem, its the idea
that it replaces basic programming skills and best practice.
That and the obsession with absofrackinglutely EVERYTHING just having to be a formally
declared object, including the whole program being an object with a run() method.
Some things actually cry out to be objects, some not so much. Generally, I find that my
most readable and maintainable code turns out to be a procedural program that manipulates
objects.
Even there, some things just naturally want to be a struct or just an array of values.
The same is true of most ingenious ideas in programming. It's one thing if code is
demonstrating a particular idea, but production code is supposed to be there to do work, not
grind an academic ax.
For example, slavish adherence to "patterns". They're quite useful for thinking about code
and talking about code, but they shouldn't be the end of the discussion. They work better as
a starting point. Some programs seem to want patterns to be mixed and matched.
In reality those problems are just cargo cult programming one level higher.
I suspect a lot of that is because too many developers barely grasp programming and never
learned to go beyond the patterns they were explicitly taught.
When all you have is a hammer, the whole world looks like a nail.
Inheritance, while not "inherently" bad, is often the wrong solution. See: Why
extends is evil [javaworld.com]
Composition is frequently a more appropriate choice. Aaron Hillegass wrote this funny
little anecdote in Cocoa
Programming for Mac OS X [google.com]:
"Once upon a time, there was a company called Taligent. Taligent was created by IBM and
Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached the
peak of its mindshare, I met one of its engineers at a trade show.
I asked him to create a simple application for me: A window would appear with a button,
and when the button was clicked, the words 'Hello, World!' would appear in a text field. The
engineer created a project and started subclassing madly: subclassing the window and the
button and the event handler.
Then he started generating code: dozens of lines to get the button and the text field onto
the window. After 45 minutes, I had to leave. The app still did not work. That day, I knew
that the company was doomed. A couple of years later, Taligent quietly closed its doors
forever."
Almost every programming methodology can be abused by people who really don't know how to
program well, or who don't want to. They'll happily create frameworks, implement new
development processes, and chart tons of metrics, all while avoiding the work of getting the
job done. In some cases the person who writes the most code is the same one who gets the
least amount of useful work done.
So, OOP can be misused the same way. Never mind that OOP essentially began very early and
has been reimplemented over and over, even before Alan Kay. Ie, files in Unix are essentially
an object oriented system. It's just data encapsulation and separating work into manageable
modules. That's how it was before anyone ever came up with the dumb name "full-stack
developer".
Posted by EditorDavid on Slashdot, Monday July 22, 2019, from the OOPs dept. (medium.com)
Senior full-stack engineer Ilya Suzdalnitski recently published a lively 6,000-word essay calling
object-oriented programming "a trillion dollar disaster."
'Precious time and brainpower are being spent thinking about "abstractions" and "design patterns" instead of solving
real-world problems... Object-Oriented Programming (OOP) has been created with one goal in mind -- to manage the complexity
of procedural codebases. In other words, it was supposed to improve code organization. There's no objective and open evidence
that OOP is better than plain procedural programming... Instead of reducing complexity, it encourages promiscuous sharing of
mutable state and introduces additional complexity with its numerous design patterns. OOP makes common development practices,
like refactoring and testing, needlessly hard...'
As a developer who started in the days of FORTRAN (when it was all-caps), I've watched the
rise of OOP with some curiosity. I think there's a general consensus that abstraction and
re-usability are good things - they're the reason subroutines exist - the issue is whether
they are ends in themselves.
I struggle with the whole concept of "design patterns". There are clearly common themes in
software, but there seems to be a great deal of pressure these days to make your
implementation fit some pre-defined template rather than thinking about the application's
specific needs for state and concurrency. I have seen some rather eccentric consequences of
"patternism".
Correctly written, OOP code allows you to encapsulate just the logic you need for a
specific task and to make that specific task available in a wide variety of contexts by
judicious use of templating and virtual functions that obviate the need for
"refactoring".
Badly written, OOP code can have as many dangerous side effects and as much opacity as any
other kind of code. However, I think the key factor is not the choice of programming
paradigm, but the design process.
You need to think first about what your code is intended to do and in what circumstances
it might be reused. In the context of a larger project, it means identifying commonalities
and deciding how best to implement them once. You need to document that design and review it
with other interested parties. You need to document the code with clear information about its
valid and invalid use. If you've done that, testing should not be a problem.
Some people seem to believe that OOP removes the need for some of that design and
documentation. It doesn't and indeed code that you intend to be reused needs *more* design
and documentation than the glue that binds it together in any one specific use case. I'm
still a firm believer that coding begins with a pencil, not with a keyboard. That's
particularly true if you intend to design abstract interfaces that will serve many purposes.
In other words, it's more work to do OOP properly, so only do it if the benefits outweigh the
costs - and that usually means you not only know your code will be genuinely reusable but
will also genuinely be reused.
I struggle with the whole concept of "design patterns".
Because design patterns are stupid.
A reasonable programmer can understand reasonable code so long as the data is documented
even when the code isn't documented, but will struggle immensely if it were the other way
around.
Bad programmers create objects for objects' sake, and because of that they have to follow
so-called "design patterns", because no amount of code commenting makes the code easily
understandable when it's a spaghetti web of interacting "objects". The "design patterns" don't
make the code easier to read, just easier to write.
Those OOP fanatics, if they do "document" their code, add comments like "// increment the
index" which is useless shit.
The big win of OOP is only in the encapsulation of the data with the code, and great code
treats objects like data structures with attached subroutines, not as "objects", and document
the fuck out of the contained data, while more or less letting the code document itself.
680,303 lines of Java code in the main project in my system.
Probably would've been more like 100,000 lines if you had used a language whose ecosystem
doesn't goad people into writing so many superfluous layers of indirection, abstraction and
boilerplate.
Posted on 2017-12-18 by esr
In recent discussion on this blog of the GCC repository transition and reposurgeon, I observed "If I'd been restricted to C,
forget it – reposurgeon wouldn't have happened at all."
I should be more specific about this, since I think the underlying problem is general to a
great deal more that the implementation of reposurgeon. It ties back to a lot of recent
discussion here of C, Python, Go, and the transition to a post-C world that I think I see
happening in systems programming.
I shall start by urging that you must take me seriously when I speak of C's limitations.
I've been programming in C for 35 years. Some of my oldest C code is still in wide
production use. Speaking from that experience, I say there are some things only a damn fool
tries to do in C, or in any other language without automatic memory management (AMM, for the
rest of this article).
This is another angle on Greenspun's Law: "Any sufficiently complicated C or Fortran program
contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common
Lisp." Anyone who's been in the trenches long enough gets that Greenspun's real point is not
about C or Fortran or Common Lisp. His maxim could be generalized in a
Henry-Spencer-does-Santayana style as this:
"At any sufficient scale, those who do not have automatic memory management in their
language are condemned to reinvent it, poorly."
In other words, there's a complexity threshold above which lack of AMM becomes intolerable.
Lack of it either makes expressive programming in your application domain impossible or sends
your defect rate skyrocketing, or both. Usually both.
When you hit that point in a language like C (or C++), your way out is usually to write an
ad-hoc layer or a bunch of semi-disconnected little facilities that implement parts of an AMM
layer, poorly. Hello, Greenspun's Law!
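For readers who have not had to write one, the ad-hoc layer being described often looks roughly like the following hand-rolled reference counter, sketched here in C-style C++ (the Node type is hypothetical, not from any project named in this post). Every caller must pair retains and releases by hand, which is exactly where the leaks and double-frees come from.

```cpp
#include <cstdlib>
#include <cstring>

// Hand-rolled reference counting for one node type: a small fragment of an
// AMM layer, reimplemented "ad hoc, informally specified" and poorly.
struct Node {
    int refcount;
    char *payload;
    Node *next;
};

Node *node_new(const char *text) {
    Node *n = (Node *)std::malloc(sizeof(Node));
    n->refcount = 1;
    size_t len = std::strlen(text) + 1;
    n->payload = (char *)std::malloc(len);
    std::memcpy(n->payload, text, len);
    n->next = nullptr;
    return n;
}

void node_retain(Node *n) { ++n->refcount; }

void node_release(Node *n) {
    // Callers must pair every retain with exactly one release;
    // forget one and you leak, add one too many and you double-free.
    if (--n->refcount == 0) {
        std::free(n->payload);
        std::free(n);
    }
}

int main() {
    Node *n = node_new("hello");
    node_retain(n);
    node_release(n);
    node_release(n); // the last release frees the node
    return 0;
}
```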
It's not particularly the line count of your source code driving this, but rather the
complexity of the data structures it uses internally; I'll call this its "greenspunity". Large
programs that process data in simple, linear, straight-through ways may evade needing an ad-hoc
AMM layer. Smaller ones with gnarlier data management (higher greenspunity) won't. Anything
that has to do – for example – graph theory is doomed to need one (why, hello,
there, reposurgeon!)
There's a trap waiting here. As the greenspunity rises, you are likely to find that more and
more of your effort and defect chasing is related to the AMM layer, and proportionally less
goes to the application logic. Redoubling your effort, you increasingly miss your aim.
Even when you're merely at the edge of this trap, your defect rates will be dominated by
issues like double-free errors and malloc leaks. This is commonly the case in C/C++ programs of
even low greenspunity.
Sometimes you really have no alternative but to be stuck with an ad-hoc AMM layer. Usually
you get pinned to this situation because real AMM would impose latency costs you can't afford.
The major case of this is operating-system kernels. I could say a lot more about the costs and
contortions this forces you to assume, and perhaps I will in a future post, but it's out of
scope for this one.
On the other hand, reposurgeon is representative of a very large class of "systems" programs
that don't have these tight latency constraints. Before I get back to the implications of
not being latency constrained, one last thing – the most important thing – about
escalating AMM-layer complexity.
At high enough levels of greenspunity, the effort required to build and maintain your ad-hoc
AMM layer becomes a black hole. You can't actually make any progress on the application domain
at all – when you try it's like being nibbled to death by ducks.
Now consider this prospectively, from the point of view of someone like me who has architect
skill. A lot of that skill is being pretty good at visualizing the data flows and structures
– and thus estimating the greenspunity – implied by a problem domain. Before you've
written any code, that is.
If you see the world that way, possible projects will be divided into "Yes, can be done in a
language without AMM." versus "Nope. Nope. Nope. Not a damn fool, it's a black hole, ain't
nohow going there without AMM."
This is why I said that if I were restricted to C, reposurgeon would never have happened at
all. I wasn't being hyperbolic – that evaluation comes from a cool and exact sense of how
far reposurgeon's problem domain floats above the greenspunity level where an ad-hoc AMM layer
becomes a black hole. I shudder just thinking about it.
Of course, where that black-hole level of ad-hoc AMM complexity is varies by programmer.
But, though software is sometimes written by people who are exceptionally good at managing that
kind of hair, it then generally has to be maintained by people who are less so.
The really smart people in my audience have already figured out that this is why Ken
Thompson, the co-designer of C, put AMM in Go, in spite of the latency issues.
Ken understands something large and simple. Software expands, not just in line count but in
greenspunity, to meet hardware capacity and user demand. In languages like C and C++ we are
approaching a point of singularity at which typical – not just worst-case –
greenspunity is so high that the ad-hoc AMM becomes a black hole, or at best a trap
nigh-indistinguishable from one.
Thus, Go. It didn't have to be Go; I'm not actually being a partisan for that language here.
It could have been (say) Ocaml, or any of half a dozen other languages I can think of. The
point is the combination of AMM with compiled-code speed is ceasing to be a luxury option;
increasingly it will be baseline for getting most kinds of systems work done at all.
Sociologically, this implies an interesting split. Historically the boundary between systems
work under hard latency constraints and systems work without it has been blurry and permeable.
People on both sides of it coded in C and skillsets were similar. People like me who mostly do
out-of-kernel systems work but have code in several different kernels were, if not common, at
least not odd outliers.
Increasingly, I think, this will cease being true. Out-of-kernel work will move to Go, or
languages in its class. C – or non-AMM languages intended as C successors, like Rust
– will keep kernels and real-time firmware, at least for the foreseeable future.
Skillsets will diverge.
It'll be a more fragmented systems-programming world. Oh well; one does what one must, and
the tide of rising software complexity is not about to be turned.
This entry was posted in General, Software by esr. 144 thoughts on "C, Python, Go, and the Generalized Greenspun Law"
David Collier-Brown on
2017-12-18 at
17:38:05 said: Andrew Forber quasi-accidentally created a similar truth: any
sufficiently complex program using overlays will eventually contain an implementation of
virtual memory. Reply ↓
esr on
2017-12-18 at
17:40:45 said: >Andrew Forber quasi-accidentally created a similar truth: any
sufficiently complex program using overlays will eventually contain an implementation
of virtual memory.
Oh, neat. I think that's a closer approximation to the most general statement than
Greenspun's, actually. Reply ↓
Alex K. on 2017-12-20 at 09:50:37 said:
For today, maybe -- but the first time I had Greenspun's Tenth quoted at me was in
the late '90s. [I know this was around/just before the first C++ standard, maybe
contrasting it to this new upstart Java thing?] This was definitely during the era
where big computers still did your serious work, and pretty much all of it was in
either C, COBOL, or FORTRAN. [Yeah, yeah, I know– COBOL is all caps for being
an acronym, while Fortran ain't–but since I'm talking about an earlier epoch
of computing, I'm going to use the conventions of that era.]
Now the Object-Oriented paradigm has really mitigated this to an enormous
degree, but I seem to recall at that time the argument was that multimethod
dispatch (a benefit so great you happily accept the flaw of memory
management) was the Killer Feature of LISP.
Given the way the other advantage I would have given Lisp over the past two
decades–anonymous functions [lambdas] and treating them as first-class
values–are creeping into a more mainstream usage, I think automated memory
management is the last visible "Lispy" feature people will associate with
Greenspun. [What, are you now visualizing lisp macros? Perish the
thought–anytime I see a foot cannon that big, I stop calling it a feature ]
Reply
↓
Mycroft Jones on 2017-12-18 at 17:41:04 said: After
looking at the Linear Lisp paper, I think that is where Lutz Mueller got One Reference Only
memory management from. For automatic memory management, I'm a big fan of ORO. Not sure how
to apply it to a statically typed language though. Wish it was available for Go. ORO is
extremely predictable and repeatable, not stuttery. Reply ↓
lliamander on 2017-12-18 at 19:28:04 said: >
Not sure how to apply it to a statically typed language though.
Jeff Read on 2017-12-19 at 00:38:57 said: If
Lutz was inspired by Linear Lisp, he didn't cite it. Actually ORO is more like
region-based memory allocation with a single region: values which leave the current
scope are copied which can be slow if you're passing large lists or vectors
around.
Linear Lisp is something quite a bit different, and allows for arbitrary data
structures with arbitrarily deep linking within, so long as there are no cycles in the
data structures. You can even pass references into and out of functions if you like;
what you can't do is alias them. As for statically typed programming languages, well, there
are linear type systems, which as lliamander mentioned are implemented in Clean.
Newlisp in general is smack in the middle between Rust and Urbit in terms of
cultishness of its community, and that scares me right off it. That and it doesn't
really bring anything to the table that couldn't be had by "old" lisps (and Lutz
frequently doubles down on mistakes in the design that had been discovered and
corrected decades ago by "old" Lisp implementers). Reply ↓
Gary E. Miller on 2017-12-18 at 18:02:10 said: For a
long time I've been holding out hope for a 'standard' garbage collector library for C. But
not gonna hold my breath. One probable reason Ken Thompson had to invent Go is to go around
the tremendous difficulty in getting new stuff into C. Reply ↓
esr on
2017-12-18 at
18:40:53 said: >For a long time I've been holding out hope for a 'standard'
garbage collector library for C. But not gonna hold my breath.
Yeah, good idea not to. People as smart/skilled as you and me have been poking at
this problem since the 1980s and it's pretty easy to show that you can't do better than
Boehm–Demers–Weiser, which has limitations that make it impractical. Sigh.
Reply ↓
John
Cowan on 2018-04-15 at 00:11:56 said:
What's impractical about it? I replaced the native GC in the standard
implementation of the Joy interpreter with BDW, and it worked very well. Reply
↓
esr on 2018-04-15 at 08:30:12
said: >What's impractical about it? I replaced the native GC in the standard
implementation of the Joy interpreter with BDW, and it worked very well.
GCing data on the stack is a crapshoot. Pointers can get mistaken for data
and vice-versa. Reply
↓
Konstantin Khomoutov on 2017-12-20 at 06:30:05 said: I
think it's not about C. Let me cite a little bit from "The Go Programming Language"
(A. Donovan, B. Kernighan) --
in the section about Go influences, it states:
"Rob Pike and others began to experiment with CSP implementations as actual
languages. The first was called Squeak which provided a language with statically
created channels. This was followed by Newsqueak, which offered C-like statement and
expression syntax and Pascal-like type notation. It was a purely functional language
with garbage collection, again aimed at managing keyboard, mouse, and window events.
Channels became first-class values, dynamically created and storable in variables.
The Plan 9 operating system carried these ideas forward in a language called Alef.
Alef tried to make Newsqueak a viable system programming language, but its omission of
garbage collection made concurrency too painful."
So my takeaway was that AMM was key to get proper concurrency.
Before Go, I dabbled with Erlang (which I enjoy, too), and I'd say there the AMM is
also a key to have concurrency made easy.
(Update: the ellipses I put into the citation were eaten by the engine and won't
appear when I tried to re-edit my comment; sorry.) Reply ↓
tz on 2017-12-18 at 18:29:20 said: I think
this is the key insight.
There are programs with zero MM.
There are programs with orderly MM, e.g. unzip does mallocs and frees in a stack-like
formation: malloc a, b, c; free c, b, a (as of 1.1.4). This is laminar, not chaotic flow.
Then there is the complex, nonlinear, turbulent flow, chaos. You can't do that in basic
C, you need AMM. But it is easier in a language that includes it (and does it well).
Virtual Memory is related to AMM – too often the memory leaks were hidden (think
of your O(n**2) for small values of n) – small leaks that weren't visible under
ordinary circumstances.
Still, you aren't going to get AMM on the current Arduino variants. At least not
easily.
That is where the line is, how much resources. Because you require a medium to large OS,
or the equivalent resources to do AMM.
Yet this is similar to using FPGAs, or GPUs for blockchain coin mining instead of the
CPU. Sometimes you have to go big. Your Mini Cooper might be great most of the time, but
sometimes you need a big diesel pickup. I think a Mini would fit in the bed of my F250.
As tasks get bigger they need bigger machines. Reply ↓
Zygo on 2017-12-18 at 18:31:34 said: > Of
course, where that black-hole level of ad-hoc AMM complexity is varies by programmer.
I was about to say something about writing an AMM layer before breakfast on the way to
writing backtracking parallel graph-searchers at lunchtime, but I guess you covered that.
Reply ↓
esr on
2017-12-18 at
18:34:59 said: >I was about to say something about writing an AMM layer before
breakfast on the way to writing backtracking parallel graph-searchers at lunchtime, but
I guess you covered that.
Well, yeah. I have days like that occasionally, but it would be unwise to plan a
project based on the assumption that I will. And deeply foolish to assume that
J. Random Programmer will. Reply ↓
tz on 2017-12-18 at 18:32:37 said: C
displaced assembler because it had the speed and flexibility while being portable.
Go, or something like it will displace C where they can get just the right features into
the standard library including AMM/GC.
Maybe we need Garbage Collecting C. GCC?
One problem is you can't do the pointer aliasing if you have a GC (unless you also do
some auxiliary bits which would be hard to maintain). void x = y; might be decodable but
there are deeper and more complex things a compiler can't detect. If the compiler gets it
wrong, you get a memory leak, or have to constrain the language to prevent things which
manipulate pointers when that is required or clearer. Reply ↓
Zygo on 2017-12-18 at 20:52:40 said: C++11
shared_ptr does handle the aliasing case. Each pointer object has two fields, one for
the thing being pointed to, and one for the thing's containing object (or its
associated GC metadata). A pointer alias assignment alters the former during the
assignment and copies the latter verbatim. The syntax is (as far as a C programmer
knows, after a few typedefs) identical to C.
The trouble with applying that idea to C is that the standard pointers don't have
space or time for the second field, and heap management isn't standardized at all
(free() is provided, but programs are not required to use it or any other function
exclusively for this purpose). Change either of those two things and the resulting
language becomes very different from C. Reply ↓
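What Zygo describes matches std::shared_ptr's aliasing constructor: the stored pointer can refer to a sub-object while ownership still tracks the containing object. A minimal illustration (Record is a hypothetical type):

```cpp
#include <iostream>
#include <memory>
#include <string>

struct Record {
    std::string key;
    int value;
};

int main() {
    auto rec = std::make_shared<Record>(Record{"answer", 42});

    // Aliasing constructor: 'field' points at rec->value, but shares
    // ownership of the whole Record. The Record stays alive as long as
    // 'field' does.
    std::shared_ptr<int> field(rec, &rec->value);

    rec.reset();                 // drop the original handle...
    std::cout << *field << "\n"; // ...the Record is still alive: prints 42
    return 0;                    // last owner goes away here; Record is freed
}
```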
IGnatius T
Foobar on 2017-12-18 at 18:39:28 said: Eric, I
love you, you're a pepper, but you have a bad habit of painting a portrait of J. Random
Hacker that is actually a portrait of Eric S. Raymond. The world is getting along with C
just fine. 95% of the use cases you describe for needing garbage collection are eliminated
with the simple addition of a string class which nearly everyone has in their toolkit.
Reply ↓
esr on
2017-12-18 at
18:55:46 said: >The world is getting along with C just fine. 95% of the use
cases you describe for needing garbage collection are eliminated with the simple
addition of a string class which nearly everyone has in their toolkit.
Even if you're right, the escalation of complexity means that what I'm facing now,
J. Random Hacker will face in a couple of years. Yes, not everybody writes reposurgeon
but a string class won't suffice for much longer even if it does today. Reply
↓
I don't solve complex problems.
I simplify complex problems and solve them.
Complexity does escalate, at least in the sense that we could cross oceans a
few centuries ago, and can go to the planets and beyond today.
We shouldn't use a rocket ship to get groceries from the local market.
J Random H-1B will face some easily decomposed apparently complex problem and
write a pile of spaghetti.
The true nature of a hacker is not so much in being able to handle the most deep
and complex situations, but in being able to recognize which situations are truly
complex, and working hard to simplify and reduce complexity in preference to writing
something to handle the complexity. Dealing with a slain
dragon's corpse is easier than one that is live, annoyed, and immolating anything
within a few hundred yards. Some are capable of handling the latter. The wise
knight prefers to reduce the problem to the former. Reply
↓
William O. B'Livion on 2017-12-20 at 02:02:40
said: > J Random H-1B will face some easily decomposed
> apparently complex problem and write a pile of spaghetti.
J Random H-1B will do it with Informatica and Java. Reply
↓
One of the epic fails of C++ is it being sold as C but where anyone could program
because of all the safeties. Instead it created bloatware and the very memory leaks because
the lesser programmers didn't KNOW (grok, understand) what they were doing. It was all
"automatic".
This is the opportunity and danger of AMM/GC. It is a tool, and one with hot areas and
sharp edges. Wendy (formerly Walter) Carlos had a law that said "Whatever parameter you can
control, you must control". Having a really good AMM/GC requires you to respect what it can
and cannot do. OK, form a huge – into VM – linked list. Won't it just handle
everything? NO! You have to think reference counts, at least in the back of your mind. It
simplifies the problem but doesn't eliminate it. It turns the black hole into a pulsar, but
you still can be hit.
Many will gloss over and either superficially learn (but can't apply) or ignore the "how
to use automatic memory management" in their CS course. Like they didn't bother with
pointers, recursion, or multithreading subtleties. Reply ↓
lliamander on 2017-12-18 at 19:36:35 said: I would
say that there is a parallel between concurrency models and memory management approaches.
Beyond a certain level of complexity, it's simply infeasible for J. Random Hacker to
implement a locks-based solution just as it is infeasible for Mr. Hacker to write a
solution with manual memory management.
My worry is that by allowing the unsafe sharing of mutable state between goroutines, Go
will never be able to achieve the per-process (i.e. language-level process, not OS-level)
GC that would allow for really low latencies necessary for a AMM language to move closer
into the kernel space. But certainly insofar as many "systems" level applications don't
require extremely low latencies, Go will probably be a viable solution going forward. Reply
↓
Jeff Read on 2017-12-18 at 20:14:18 said: Putting
aside the hard deadlines found in real-time systems programming, it has been empirically
determined that a GC'd program requires five times as much memory as the
equivalent program with explicit memory management. Applications which are both CPU- and
RAM-intensive, where you need to have your performance cake and eat it in as little memory
as possible, are thus severely constrained in terms of viable languages they could be
implemented in. And by "severely constrained" I mean you get your choice of C++ or Rust.
(C, Pascal, and Ada are on the table, but none offer quite the same metaprogramming
flexibility as those two.)
I think your problems with reposurgeon stem from the fact that you're just running up
against the hard upper bound on the vector sum of CPU and RAM efficiency that a dynamic
language like Python (even sped up with PyPy) can feasibly deliver on a hardware
configuration you can order from Amazon. For applications like that, you need to forgo GC
entirely and rely on smart pointers, automatic reference counting, value semantics, and
RAII. Reply ↓
esr on
2017-12-18 at
20:27:20 said: > For applications like that, you need to forgo GC entirely and
rely on smart pointers, automatic reference counting, value semantics, and RAII.
How many times do I have to repeat "reposurgeon would never have been written
under that constraint" before somebody who claims LISP experience gets it? Reply
↓
Jeff Read on 2017-12-18 at 20:48:24 said:
You mentioned that reposurgeon wouldn't have been written under the constraints of
C. But C++ is not C, and has an entirely different set of constraints. In practice,
it's not that far off from Lisp, especially if you avail yourself of those
wonderful features in C++1x. C++ programmers talk about "zero-cost abstractions"
for a reason .
Semantically, programming in a GC'd language and programming in a language that
uses smart pointers and RAII are very similar: you create the objects you need, and
they are automatically disposed of when no longer needed. But instead of delegating
to a GC which cleans them up whenever, both you and the compiler have compile-time
knowledge of when those cleanups will take place, allowing you finer-grained
control over how memory -- or any other resource -- is used.
Oh, that's another thing: GC only has something to say about memory --
not file handles, sockets, or any other resource. In C++, with appropriate types
value semantics can be made to apply to those too and they will immediately be
destructed after their last use. There is no special "with" construct in
C++; you simply construct the objects you need and they're destructed when they go
out of scope.
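A small sketch of that point about non-memory resources: std::ofstream releases its file handle in its destructor, so plain scope exit does the job that Python's with statement does (the function and file name below are hypothetical).

```cpp
#include <fstream>
#include <stdexcept>
#include <string>

// Write a line to a file; the ofstream's destructor closes the handle on
// every exit path, including the exception path, with no explicit close().
void write_marker(const std::string &path) {
    std::ofstream out(path);
    if (!out) throw std::runtime_error("cannot open " + path);
    out << "marker\n";
}   // <-- file closed here, on success or failure

int main() {
    write_marker("raii-demo.txt");
    return 0;
}
```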
This is how the big boys do systems programming. Again, Go has barely
displaced C++ at all inside Google despite being intended for just that
purpose. Their entire critical path in search is still C++ code. And it always will
be until Rust gains traction.
As for my Lisp experience, I know enough to know that Lisp has utterly
failed and this is one of the major reasons why. It's not even a decent AI
language, because the scruffies won, AI is basically large-scale statistics, and
most practitioners these days use C++. Reply
↓
esr on 2017-12-18 at 20:54:08
said: >C++ is not C, and has an entirely different set of constraints. In
practice, it's not that far off from Lisp,
Oh, bullshit. I think you're just trolling, now.
I've been a C++ programmer and know better than this.
But don't argue with me. Argue with Ken Thompson, who designed Go because
he knows better than this. Reply
↓
Anthony Williams on
2017-12-19 at 06:02:03
said: Modern C++ is a long way from C++ when it was first standardized in
1998. You should *never* be manually managing memory in modern C++. You
want a dynamically sized array? Use std::vector. You want an ad-hoc graph?
Use std::shared_ptr and std::weak_ptr.
Any code I see which uses new or delete, malloc or free will fail code
review.
Destructors and the RAII idiom mean that this covers *any* resource, not
just memory.
See the C++ Core Guidelines on resource and memory management: http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-resource
Reply ↓
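A minimal sketch of the guideline in that comment (the Blob type is hypothetical): the dynamically sized array is a std::vector, ownership is expressed with std::unique_ptr, and there is no new or delete anywhere for a code review to flag.

```cpp
#include <iostream>
#include <memory>
#include <vector>

struct Blob {
    std::vector<unsigned char> bytes; // dynamically sized, freed automatically
};

int main() {
    // Owning container of heap-allocated Blobs; no new/delete in sight.
    std::vector<std::unique_ptr<Blob>> blobs;
    for (int i = 0; i < 3; ++i) {
        auto b = std::make_unique<Blob>();
        b->bytes.resize(1024 * (i + 1));
        blobs.push_back(std::move(b));
    }
    std::cout << blobs.size() << " blobs\n";
    return 0; // the vector and its unique_ptrs release everything here
}
```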
esr on 2017-12-19 at
07:53:58 said: >Modern C++ is a long way from C++ when it was
first standardized in 1998.
That's correct. Modern C++ is a disaster area of compounded
complexity and fragile kludges piled on in a failed attempt to fix
leaky abstractions. 1998 C++ had the leaky-abstractions problem, but at
least it was drastically simpler. Clue: complexification when you don't
even fix the problems is bad .
My experience dates from 2009 and included Boost – I was a
senior dev on Battle For Wesnoth. Don't try to tell me I don't know
what "modern C++" is like. Reply
↓
Anthony Williams on
2017-12-19 at
08:17:58 said: > My experience dates from 2009 and included
Boost – I was a senior dev on Battle For Wesnoth. Don't try
to tell me I don't know what "modern C++" is like.
C++ in 2009 with boost was C++ from 1998 with a few extra
libraries. I mean that quite literally -- the standard was
unchanged apart from minor fixes in 2003.
C++ has changed a lot since then. There have been 3 standards
issued, in 2011, 2014, and just now in 2017. Between them, there is
a huge list of changes to the language and the standard library,
and these are readily available -- both clang and gcc have kept
up-to-date with the changes, and even MSVC isn't far behind. Even
more changes are coming with C++20.
So, with all due respect, C++ from 2009 is not "modern C++",
though there certainly were parts of boost that were leaning that
way.
esr on 2017-12-19 at
08:37:11 said: >So, with all due respect, C++ from 2009
is not "modern C++", though there certainly were parts of boost
that were leaning that way.
But the foundational abstractions are still leaky. So when
you tell me "it's all better now", I don't believe you. I just
plain do not.
I've been hearing this soothing song ever since around 1989.
"Trust us, it's all fixed." Then I look at the "fixes" and
they're horrifying monstrosities like templates – all the
dangers of preprocessor macros and a whole new class of
Turing-complete nightmares, too! In thirty years I'm certain
I'll be hearing that C++2047 solves all the problems this
time for sure , and I won't believe a word of it then,
either.
Reply ↓
If you would elaborate on this, I would be grateful.
What are the problematic leaky abstractions you are
concerned about?
Reply ↓
esr on 2017-12-19
at 09:26:24 said: >If you would elaborate on
this, I would be grateful. What are the problematic
leaky abstractions you are concerned about?
Are array accesses bounds-checked? Don't yammer
about iterators; what happens if I say foo[3] and foo
is dimension 2? Never mind, I know the answer.
Are bare, untyped pointers still in the language?
Never mind, I know the answer.
Can I get a core dump from code that the compiler
has statically checked and contains no casts? Never
mind, I know the answer.
Yes, C has these problems too. But it doesn't
pretend not to, and in C I'm never afflicted by
masochistic cultists denying that they're
problems.
> Are array accesses bounds-checked? Don't yammer
about iterators; what happens if I say foo[3] and foo
is dimension 2? Never mind, I know the answer.
You are right, bare arrays are not bounds-checked,
but std::array provides an at() member function, so
arr.at(3) will throw if the array is too small.
Also, ranged-for loops can avoid the need for
explicit indexing lots of the time anyway.
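A minimal illustration of both points, checked access via at() and a ranged-for loop removing the index entirely:

```cpp
#include <array>
#include <iostream>
#include <stdexcept>

int main() {
    std::array<int, 2> foo{10, 20};

    // Ranged-for: no index, so no out-of-bounds index is possible.
    for (int v : foo) std::cout << v << "\n";

    // Checked access: foo.at(3) on a 2-element array throws instead of
    // silently reading past the end the way foo[3] would.
    try {
        std::cout << foo.at(3) << "\n";
    } catch (const std::out_of_range &e) {
        std::cout << "caught: " << e.what() << "\n";
    }
    return 0;
}
```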
> Are bare, untyped pointers still in the
language? Never mind, I know the answer.
Yes, void* is still in the language. You need to
cast it to use it, which is something that is easy to
spot in a code review.
> Can I get a core dump from code that the
compiler has statically checked and contains no casts?
Never mind, I know the answer.
Probably. Is it possible to write code in any
language that dies horribly in an unintended
fashion?
> Yes, C has these problems too. But it doesn't
pretend not to, and in C I'm never afflicted by
masochistic cultists denying that they're problems.
Did I say C++ was perfect? This blog post was about
the problems inherent in the lack of automatic memory
management in C and C++, and thus why you wouldn't have
written reposurgeon if that's all you had. My point is
that it is easy to write C++ in a way that doesn't
suffer from those problems.
esr on 2017-12-19
at 10:10:11 said: > My point is that it is easy
to write C++ in a way that doesn't suffer from those
problems.
No, it is not. The error statistics of large C++
programs refute you.
My personal experience on Battle for Wesnoth refutes
you.
The persistent self-deception I hear from C++
advocates on this score does nothing to endear the
language to me.
Ian Bruene on 2017-12-19
at 11:05:22 said: So what I am hearing in this is:
"Use these new standards built on top of the language,
and make sure every single one of your dependencies
holds to them just as religiously are you are. And if
anyone fails at any point in the chain you are
doomed.".
Cool.
Casey Barker on 2017-12-19
at 11:12:16 said: Using Go has been a revelation, so I
mostly agree with Eric here. My only objection is to
equating C++03/Boost with "modern" C++. I used both
heavily, and given a green field, I would consider C++14
for some of these thorny designs that I'd never have used
C++03/Boost for. It's a qualitatively different experience.
Just browse a copy of Scott Meyers' _Effective Modern C++_
for a few minutes, and I think you'll at least understand
why C++14 users object to the comparison. Modern C++
enables better designs.
Alas, C++ is a multi-layered tool chest. If you stick to
the top two shelves, you can build large-scale, complex
designs with pretty good safety and nigh unmatched
performance. Everything below the third shelf has rusty
tools with exposed wires and no blade guards, and on
large-scale projects, it's impossible to keep J. Random
Programmer from reaching for those tools.
So no, if they keep adding features, C++ 2047 won't
materially improve this situation. But there is a
contingent (including Meyers) pushing for the *removal* of
features. I think that's the only way C++ will stay
relevant in the long-term.
http://scottmeyers.blogspot.com/2015/11/breaking-all-eggs-in-c.html
Reply ↓
Zygo on 2017-12-19
at 11:52:17 said: My personal experience is that C++11
code (in particular, code that uses closures, deleted
methods, auto (a feature you yourself recommended for C
with different syntax), and the automatic memory and
resource management classes) has fewer defects per
developer-year than the equivalent C++03-and-earlier code.
This is especially so if you turn on compiler flags that
disable the legacy features (e.g. -Werror=old-style-cast),
and treat any legacy C or C++03 code like foreign language
code that needs to be buried under a FFI to make it safe to
use.
Qualitatively, the defects that do occur are easier to
debug in C++11 vs C++03. There are fewer opportunities for
the compiler to interpolate in surprising ways because the
automatic rules are tighter, the library has better utility
classes that make overloads and premature optimization less
necessary, the core language has features that make
templates less necessary, and it's now possible to
explicitly select or rule out invalid candidates for
automatic code generation.
I can design in Lisp, but write C++11 without much
effort of mental translation. Contrast with C++03, where
people usually just write all the Lispy bits in some
completely separate language (or create shambling horrors
like Boost to try to bandaid over the missing limbs:
boost::lambda, anyone? Oh, look, since C++11 they've doubled
down on something called boost::phoenix).
Does C++11 solve all the problems? Absolutely not, that
would break compatibility. But C++11 is noticeably better
than its predecessors. I would say the defect rates are now
comparable to Perl with a bunch of custom C modules (i.e.
exact defect rate depends on how much you wrote in each
language).
Reply ↓
NHO on 2017-12-19
at 11:55:11 said: C++ happily turned into a complexity
metatarpit with "everything that could be implemented in the STL
with templates should be, instead of in the core language". And not
deprecating/removing features, instead leaving them there.
Reply ↓
Michael on 2017-12-19 at
08:59:41 said: For the curious, can you point to a C++
tutorial/intro that shows how to do it the right way ?
Reply ↓
Michael on 2017-12-19
at 12:09:45 said: Thank you. Not sure this is what
I was looking for.
Was thinking more along the lines of "Learning
Python" equivalent.
Anthony Williams on
2017-12-19 at
08:26:13 said: > That's correct. Modern C++ is a disaster
area of compounded complexity and fragile kludges piled on in a
failed attempt to fix leaky abstractions. 1998 C++ had the
leaky-abstractions problem, but at least it was drastically
simpler. Clue: complexification when you don't even fix the
problems is bad.
I agree that there is a lot of complexity in C++. That doesn't
mean you have to use all of it. Yes, it makes maintaining legacy
code harder, because the older code might use dangerous or complex
parts, but for new code we can avoid the danger, and just stick to
the simple, safe parts.
The complexity isn't all bad, though. Part of the complexity
arises by providing the ability to express more complex things in
the language. This can then be used to provide something simple to
the user.
Take std::variant as an example. This is a new facility from
C++17 that provides a type-safe discriminated variant. If you have
a variant that could hold an int or a string and you store an int
in it, then attempting to access it as a string will cause an
exception rather than a silent error. The code that *implements*
std::variant is complex. The code that uses it is simple.
Reply
↓
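A short version of that std::variant example (C++17):

```cpp
#include <iostream>
#include <string>
#include <variant>

int main() {
    std::variant<int, std::string> v = 42;  // currently holds an int

    std::cout << std::get<int>(v) << "\n";  // fine: prints 42

    try {
        // Wrong alternative: throws std::bad_variant_access rather than
        // silently reinterpreting the bytes.
        std::cout << std::get<std::string>(v) << "\n";
    } catch (const std::bad_variant_access &e) {
        std::cout << "caught: " << e.what() << "\n";
    }
    return 0;
}
```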
Jeff Read on 2017-12-20 at 09:07:06
said: I won't argue with you. C++ is error-prone (albeit less so than C)
and horrid to work in. But for certain classes of algorithmically complex,
CPU- and RAM-intensive problems it is literally the only viable
choice. And it looks like performing surgery on GCC-scale repos falls into
that class of problem.
I'm not even saying it was a bad idea to initially write reposurgeon in
Python. Python and even Ruby are great languages to write prototypes or
even small-scale production versions of things because of how rapidly they
may be changed while you're hammering out the details. But scale comes
around to bite you in the ass sooner than most people think and when it
does, your choice of language hobbles you in a way that can't be
compensated for by throwing more silicon at the problem. And it's in that
niche where C++ and Rust dominate, absolutely uncontested. Reply
↓
jim on 2017-12-22 at 06:41:27
said: If you found rust hard going, you are not a C++ programmer who knows
better than this.
Anthony Williams on 2017-12-19 at
06:15:12 said: > How many times do I have to repeat "reposurgeon would never
have been
> written under that constraint" before somebody who claims LISP
> experience gets it?
That speaks to your lack of experience with modern C++, rather than an inherent
limitation. *You* might not have written reposurgeon under that constraint, because
*you* don't feel comfortable that you wouldn't have ended up with a black-hole of
AMM. That does not mean that others wouldn't have or couldn't have, or that their
code would necessarily be an unmaintainable black hole.
In well-written modern C++, memory management errors are a solved problem. You
can just write code, and know that the compiler and library will take care of
cleaning up for you, just like with a GC-based system, but with the added benefit
that it's deterministic, and can handle non-memory resources such as file handles
and sockets too. Reply
↓
esr on 2017-12-19 at 07:59:30
said: >In well-written modern C++, memory management errors are a solved
problem
In well-written assembler memory management errors are a solved
problem. I hate this idiotic cant repetition about how if you're just good
enough for the language it won't hurt you – it sweeps the actual
problem under the rug while pretending to virtue. Reply
↓
Anthony Williams on
2017-12-19 at 08:08:53
said: > I hate this idiotic repetition about how if you're just good
enough for the language it won't hurt you – it sweeps the actual
problem under the rug while pretending to virtue.
It's not about being "just good enough". It's about *not* using the
dangerous parts. If you never use manual memory management, then you can't
forget to free, for example, and automatic memory management is *easy* to
use. std::string is a darn sight easier to use than the C string functions,
for example, and std::vector is a darn sight easier to use than dynamic
arrays with new. In both cases, the runtime manages the memory, and it is
*easier* to use than the dangerous version.
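A tiny side-by-side in the spirit of that comment: the C-style version has to get the buffer size and the free() exactly right, while the std::string version leaves all of that to the runtime.

```cpp
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <string>

int main() {
    // C style: manual sizing, manual concatenation, manual free.
    const char *a = "hello, ";
    const char *b = "world";
    char *joined = (char *)std::malloc(std::strlen(a) + std::strlen(b) + 1);
    std::strcpy(joined, a);
    std::strcat(joined, b);
    std::cout << joined << "\n";
    std::free(joined);            // forget this and you leak

    // C++ style: the runtime manages the buffer.
    std::string s = std::string("hello, ") + "world";
    std::cout << s << "\n";
    return 0;                     // s cleans up after itself
}
```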
Every language has "dangerous" features that allow you to cause
problems. Well-written programs in a given language don't use the dangerous
features when there are equivalent ones without the problems. The same is
true with C++.
The fact that historically there are areas where C++ didn't provide a
good solution, and thus there are programs that don't use the modern
solution and experience the consequent problems, is not an inherent
problem with the language, but it does make it harder to educate people.
Reply
↓
John D. Bell on 2017-12-19 at
10:48:09 said: > It's about *not* using the dangerous parts.
Every language has "dangerous" features that allow you to cause
problems. Well-written programs in a given language don't use the
dangerous features when there are equivalent ones without the problems.
Why not use a language that doesn't have "'dangerous'
features"?
NOTES: [1] I am not saying that Go is necessarily that language
– I am not even saying that any existing language is
necessarily that language.
[2] /me is being someplace between naive and trolling here. Reply
↓
esr on 2017-12-19 at
11:10:15 said: >Why not use a language that doesn't have
"'dangerous' features"?
Historically, it was because hardware was weak and expensive
– you couldn't afford the overhead imposed by those
languages. Now it's because the culture of software engineering has
bad habits formed in those days and reflexively flinches from using
higher-overhead safe languages, though it should not. Reply
↓
Paul R on 2017-12-19 at
12:30:42 said: Runtime efficiency still matters. That and the
ability to innovate are the reasons I think C++ is in such wide
use.
To be provocative, I think there are two types of programmer,
the ones who watch Eric Niebler on Ranges https://www.youtube.com/watch?v=mFUXNMfaciE&t=4230s
and think 'Wow, I want to find out more!' and the rest. The rest
can have Go and Rust
D of course is the baby elephant in the room, worth much more
attention than it gets. Reply
↓
Michael on 2017-12-19 at
12:53:33 said: Runtime efficiency still matters. That
and the ability to innovate are the reasons I think C++ is in
such wide use.
Because you can't get runtime efficiency in any other
language?
Because you can't innovate in any other language?
Reply ↓
Our three main advantages, runtime efficiency,
innovation opportunity, building on a base of millions of
lines of code that run the internet and an international
standard.
Our four main advantages
More seriously, C++ enabled the STL, the STL transforms
the approach of its users, with much increased reliability
and readability, but no loss of performance. And at the
same time your old code still runs. Now that is old stuff,
and STL2 is on the way. Evolution.
Zygo on 2017-12-19
at 14:14:42 said: > Because you can't innovate in
any other language?
That claim sounded odd to me too. C++ looks like the
place that well-proven features of younger languages go to
die and become fossilized. The standardization process
would seem to require it.
Reply ↓
My thought was the language is flexible enough to
enable new stuff, and has sufficient weight behind it
to get that new stuff actually used.
Generic programming being a prime example.
Michael on 2017-12-20
at 08:19:41 said: My thought was the language is
flexible enough to enable new stuff, and has sufficient
weight behind it to get that new stuff actually
used.
Are you sure it's that, or is it more the fact that
the standards committee has forever had a me-too
kitchen-sink no-feature-left-behind obsession?
(Makes me wonder if it doesn't share some DNA with
the featuritis that has been Microsoft's calling card
for so long. – they grew up together.)
Paul R on 2017-12-20
at 11:13:20 said: No, because people come to the
standards committee with ideas, and you cannot have too
many libraries. You don't pay for what you don't use.
Prime directive C++.
Michael on 2017-12-20
at 11:35:06 said: and you cannot have too many
libraries. You don't pay for what you don't use.
And this, I suspect, is the primary weakness in your
perspective.
Is the defect rate of C++ code better or worse
because of that?
Paul R on 2017-12-20
at 15:49:29 said: The rate is obviously lower
because I've written less code and library code only
survives if it is sound. Are you suggesting that
reusing code is a bad idea? Or that an indeterminate
number of reimplementations of the same functionality
is a good thing?
You're not on the most productive path to effective
criticism of C++ here.
Michael on 2017-12-20
at 17:40:45 said: The rate is obviously lower
because I've written less code
Please reconsider that statement in light of how
defect rates are measured.
Are you suggesting..
Arguing strawmen and words you put in someone's
mouth is not the most productive path to effective
defense of C++.
But thank you for the discussion.
Paul R on 2017-12-20
at 18:46:53 said: This column is too narrow to have
a decent discussion. WordPress should rewrite in C++ or
I should dig out my Latin dictionary.
Seriously, extending the reach of libraries that
become standardised is hard to criticise, extending the
reach of the core language is.
It used to be a thing that C didn't have built in
functionality for I/O (for example) rather it was
supplied by libraries written in C interfacing to a
lower level system interface. This principle seems to
have been thrown out of the window for Go and the
others. I'm not sure that's a long term win. YMMV.
But use what you like or what your cannot talk your
employer out of using, or what you can get a job using.
As long as it's not Rust.
Zygo on 2017-12-19 at
12:24:25 said: > Well-written programs in a given language don't
use the dangerous features
Some languages have dangerous features that are disabled by default
and must be explicitly enabled prior to use. C++ should become one of
those languages.
I am very fond of the 'override' keyword in C++11, which allows me
to say "I think this virtual method overrides something, and don't
compile the code if I'm wrong about that." Making that assertion
incorrectly was a huge source of C++ errors for me back in the days
when I still used C++ virtual methods instead of lambdas. C++11 solved
that problem two completely different ways: one informs me when I make
a mistake, and the other makes it impossible to be wrong.
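A minimal example of the check being described (Codec and Doubler are hypothetical names):

```cpp
#include <iostream>

struct Codec {
    virtual int encode(int value) const { return value; }
    virtual ~Codec() = default;
};

struct Doubler : Codec {
    // 'override' makes the compiler verify this really overrides a virtual
    // in the base class. Change the signature to encode(long) and this line
    // becomes a compile error instead of a silent new overload.
    int encode(int value) const override { return value * 2; }
};

int main() {
    Doubler d;
    const Codec &c = d;
    std::cout << c.encode(21) << "\n"; // dispatches to Doubler: prints 42
    return 0;
}
```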
Arguably, one should be able to annotate any C++ block and say
"there shall be no manipulation of bare pointers here" or "all array
access shall be bounds-checked here" or even " and that's the default
for the entire compilation unit." GCC can already emit warnings for
these without human help in some cases. Reply
↓
1. Circular references. C++ has smart pointer classes that work when your data
structures are acyclic, but it doesn't have a good solution for circular
references. I'm guessing that reposurgeon's graphs are almost never DAGs.
2. Subversion of AMM. Bare news and deletes are still available, so some later
maintenance programmer could still introduce memory leaks. You could forbid the use
of bare new and delete in your project, and write a check-in hook to look for
violations of the policy, but that's one more complication to worry about and it
would be difficult or impossible to implement reliably due to macros and the
general difficulty of parsing C++.
3. Memory corruption. It's too easy to overrun the end of arrays, treat a
pointer to a single object as an array pointer, or otherwise corrupt memory.
Reply
↓
esr on 2017-12-20 at 15:51:55
said: >Is this a good summary of your objections to C++ smart pointers as a
solution to AMM?
That is at least a large subset of my objections, and probably the most
important ones. Reply
↓
jim on 2017-12-22 at 07:15:20
said: It is uncommon to find a cyclic graph that cannot be rendered acyclic
by weak pointers.
C++17 cheerfully breaks backward compatibility by removing some
dangerous idioms, refusing to compile code that should never have been
written. Reply
↓
guest on 2017-12-20 at 19:12:01
said: > Circular references. C++ has smart pointer classes that work when
your data structures are acyclic, but it doesn't have a good solution for
circular references. I'm guessing that reposurgeon's graphs are almost never
DAGs.
General graphs with possibly-cyclical references are precisely the workload
GC was created to deal with optimally, so ESR is right in a sense that
reposurgeon _requires_ a GC-capable language to work. In most other programs,
you'd still want to make sure that the extent of the resources that are under
GC control is properly contained (which a Rust-like language would help a lot
with), but it's possible that even this is not quite worthwhile for
reposurgeon. Still, I'd want to make sure that my program is optimized in
_other_ possible ways, especially wrt. using memory bandwidth efficiently
– and Go looks like it doesn't really allow that. Reply
↓
esr on 2017-12-20 at 20:12:49
said: >Still, I'd want to make sure that my program is optimized in
_other_ possible ways, especially wrt. using memory bandwidth efficiently
– and Go looks like it doesn't really allow that.
Er, there's any language that does allow it? Reply
↓
Jeff Read on 2017-12-27 at
20:58:43 said: Yes -- ahem -- C++. That's why it's pretty much the
only language taken seriously by game developers. Reply
↓
Zygo on 2017-12-21 at 12:56:20
said: > I'm guessing that reposurgeon's graphs are almost never DAGs
Why would reposurgeon's graphs not be DAGs? Some exotic case that comes up
with e.g. CVS imports that never arises in a SVN->Git conversion (admittedly
the only case I've really looked deeply at)?
Git repos, at least, are cannot-be-cyclic-without-astronomical-effort graphs
(assuming no significant advances in SHA1 cracking and no grafts–and even
then, all you have to do is detect the cycle and error out). I don't know how a
generic revision history data structure could contain a cycle anywhere even if
I wanted to force one in somehow. Reply
↓
The repo graph is, but a lot of the structures have reference loops for
fast lookup. For example, a blob instance has a pointer back to the
containing repo, as well as being part of the repo through a pointer chain
that goes from the repo object to a list of commits to a blob.
Without those loops, navigation in the repo structure would get very
expensive. Reply
↓
guest on 2017-12-21 at
15:22:32 said: Aren't these inherently "weak" pointers though? In
that they don't imply ownership/live data, whereas the "true" DAG
references do? In that case, and assuming you can be sufficiently sure
that only DAGs will happen, refcounting (ideally using something like
Rust) would very likely be the most efficient choice. No need for a
fully-general GC here. Reply
↓
esr on 2017-12-21 at
15:34:40 said: >Aren't these inherently "weak" pointers
though? In that they don't imply ownership/live data
I think they do. Unless you're using "ownership" in some sense I
don't understand. Reply
↓
jim on 2017-12-22 at
07:31:39 said: A weak pointer does not own the object it
points to. A shared pointer does.
When there are are zero shared pointers pointing to an
object, it gets freed, regardless of how many weak pointers are
pointing to it.
Shared pointers and unique pointers own, weak pointers do
not own.
Reply ↓
jim on 2017-12-22 at
07:23:35 said: In C++11, one would implement a pointer back to the
owning object as a weak pointer. Reply
↓
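A sketch of that pattern, using hypothetical Repo and Blob types that echo the reposurgeon discussion above: the owning pointers go one way as shared_ptr, and the back-pointer to the owner is a weak_ptr, so there is no ownership cycle to leak.

```cpp
#include <iostream>
#include <memory>
#include <vector>

struct Repo; // forward declaration

struct Blob {
    std::weak_ptr<Repo> repo; // back-pointer: non-owning, breaks the cycle
    int size = 0;
};

struct Repo {
    std::vector<std::shared_ptr<Blob>> blobs; // owning forward pointers
};

int main() {
    auto repo = std::make_shared<Repo>();
    auto blob = std::make_shared<Blob>();
    blob->repo = repo;          // weak: does not keep the Repo alive
    repo->blobs.push_back(blob);

    // Fast navigation back to the container, as in the example above.
    if (auto owner = blob->repo.lock())
        std::cout << "repo has " << owner->blobs.size() << " blob(s)\n";

    return 0; // repo's refcount drops to zero; everything is freed, no cycle
}
```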
> How many times do I have to repeat "reposurgeon would never have been
written under that constraint" before somebody who claims LISP experience gets
it?
Maybe it is true, but since you do not understand, or particularly wish to
understand, Rust scoping, ownership, and zero-cost abstractions, or C++ weak
pointers, we hear you say that you would never have written reposurgeon
under that constraint.
Which, since no one else is writing reposurgeon, is an argument, but not an
argument that those who do get weak pointers and rust scopes find all that
convincing.
I am inclined to think that those who write C++98 (which is the gcc default)
could not write reposurgeon under that constraint, but those who write C++11 could
write reposurgeon under that constraint, and except for some rather unintelligible,
complicated, and twisted class constructors invoking and enforcing the C++11
automatic memory management system, it would look very similar to your existing
python code. Reply
↓
esr on 2017-12-23 at 02:49:13
said: >since you do not understand, or particularly wish to understand, Rust
scoping, ownership, and zero cost abstractions, or C++ weak pointers
Thank you, I understand those concepts quite well. I simply prefer to apply
them in languages not made of barbed wire and landmines. Reply
↓
guest on 2017-12-23 at 07:11:48
said: I'm sure that you understand the _gist_ of all of these notions quite
accurately, and this alone is of course quite impressive for any developer
– but this is not quite the same as being comprehensively aware of
their subtler implications. For instance, both James and I have suggested
to you that backpointers implemented as an optimization of an overall DAG
structure should be considered "weak" pointers, which can work well
alongside reference counting.
For that matter, I'm sure that Rustlang developers share your aversion
to "barbed wire and landmines" in a programming language. You've criticized
Rust before (not without some justification!) for having half-baked
async-IO facilities, but I would think that reposurgeon does not depend
significantly on async-IO. Reply
↓
esr on 2017-12-23 at
08:14:25 said: >For instance, both James and I have suggested to
you that backpointers implemented as an optimization of an overall DAG
structure should be considered "weak" pointers, which can work well
alongside reference counting.
Yes, I got that between the time I wrote my first reply and JAD
brought it up. I've used Python weakrefs in similar situations. I would
have seemed less dense if I'd had more sleep at the time.
>For that matter, I'm sure that Rustlang developers share your
aversion to "barbed wire and landmines" in a programming language.
That acidulousness was mainly aimed at C++. Rust, if it implements
its theory correctly (a point on which I am willing to be optimistic)
doesn't have C++'s fatal structural flaws. It has problems of its own
which I won't rehash as I've already anatomized them in detail.
Reply
↓
Garrett on 2017-12-21 at 11:16:25 said:
There's also development cost. I suspect that using e.g. Python drastically reduces the
cost for developing the code. And since most repositories are small enough that Eric
hasn't noticed accidental O(n**2) or O(n**3) algorithms until recently, it's pretty
obvious that execution time just plainly doesn't matter. Migration is going to involve
a temporary interruption to service and is going to be performed roughly once per repo.
The amount of time involved in just stopping the e.g. SVN service and bringing up the
e.g. Git hosting service is likely to be longer than the conversion time for the median
conversion operation.
So in these cases, most users don't care about the run-time, and outside of a
handful of examples, wouldn't brush up against the CPU or memory limitations of a
whitebox PC.
This is in contrast to some other cases in which I've worked such as file-serving
(where latency is measured in microseconds and is actually counted), or large data
processing (where wasting resources reduces the total amount of stuff everybody can
do). Reply ↓
David Collier-Brown on
2017-12-18 at
20:20:59 said: Hmmn, I wonder if the virtual memory of Linux (and Unix, and Multics) is
really the OS equivalent of the automatic memory management of application programs? One
works in pages, admittedly, not bytes or groups of bytes, but one could argue that the
sub-page stuff is just expensive anti-internal-fragmentation plumbing
–dave
[In polite Canajan, "I wonder" is the equivalent of saying "Hey everybody, look at this" in
the US. And yes, that's also the redneck's famous last words.] Reply ↓
John Moore on 2017-12-18 at 22:20:21 said: In my
experience, with most of my C systems programming in protocol stacks and transaction
processing infrastructure, the MM problem has been one of code, not data structure
complexity. The memory is often allocated by code which first encounters the need, and it
is then passed on through layers and at some point, encounters code which determines the
memory is no longer needed. All of this creates an implicit contract that he who is handed
a pointer to something (say, a buffer) becomes responsible for disposing of it. But, there
may be many places where that is needed – most of them in exception handling.
That creates many, many opportunities for some to simply forget to release it. Also,
when the code is handed off to someone unfamiliar, they may not even know about the
contract. Crises (or bad habits) lead to failures to document this stuff (or create
variable names or clear conventions that suggest one should look for the contract).
I've also done a bunch of stuff in Java, both applications level (such as a very complex
Android app with concurrency) and some infrastructural stuff that wasn't as performance
constrained. Of course, none of this was hard real-time although it usually at least needed
to provide response within human limits, which GC sometimes caused trouble with. But, the
GC was worth it, as it substantially reduced bugs which showed up only at runtime, and it
simplified things.
On the side, I write hard real time stuff on tiny, RAM constrained embedded systems
– PIC18F series stuff (with the most horrible machine model imaginable for such a
simple little beast). In that world, there is no malloc used, and shouldn't be. It's
compile time created buffers and structures for the most part. Fortunately, the
applications don't require advanced dynamic structures (like symbol tables) where you need
memory allocation. In that world, AMM isn't an issue. Reply ↓
Michael on 2017-12-18 at 22:47:26 said:
PIC18F series stuff (with the most horrible machine model imaginable for such a simple
little beast)
LOL. Glad I'm not the only one who thought that. Most of my work was on the 16F –
after I found out what it took to do a simple table lookup, I was ready for a stiff
drink. Reply ↓
esr on
2017-12-18 at
23:45:03 said: >In my experience, with most of my C systems programming in
protocol stacks and transaction processing infrastructure, the MM problem has been one
of code, not data structure complexity.
I believe you. I think I gravitate to problems with data-structure complexity
because, well, that's just the way my brain works.
But it's also true that I have never forgotten one of the earliest lessons I learned
from Lisp. When you can turn code complexity into data structure complexity, that's
usually a win. Or to put it slightly differently, dumb code munching smart data beats
smart code munching dumb data. It's easier to debug and reason about. Reply
↓
Jeremy on 2017-12-19 at 01:36:47 said:
Perhaps its because my coding experience has mostly been short python scripts of
varying degrees of quick-and-dirtiness, but I'm having trouble grokking the
difference between smart code/dumb data vs dumb code/smart data. How does one tell
the difference?
Now, as I type this, my intuition says it's more than just the scary mess of
nested if statements being in the class definition for your data types, as opposed
to the function definitions which munch on those data types; a scary mess of nested
if statements is probably the former. The latter, though, I'm coming up blank on.
Perhaps a better question than my one above: what codebases would you recommend
for study which would be good examples of the latter (besides reposurgeon)?
Reply
↓
jsn on 2017-12-19 at 02:35:48
said: I've always expressed it as "smart data + dumb logic = win".
You almost said my favorite canned example: a big conditional block vs. a
lookup table. The LUT can replace all the conditional logic with structured
data and shorter (simpler, less bug-prone, faster, easier to read)
unconditional logic that merely does the lookup. Concretely in Python, imagine
a long list of "if this, assign that" replaced by a lookup into a dictionary.
It's still all "code", but the amount of program logic is reduced.
So I would answer your first question by saying look for places where data
structures are used. Then guesstimate how complex some logic would have to be
to replace that data. If that complexity would outstrip that of the data
itself, then you have a "smart data" situation. Reply
↓
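The comment gives the canned example in Python; here is the same smart-data/dumb-logic move sketched in C++ to stay consistent with the other snippets in this thread (the status table is hypothetical): a chain of if/else replaced by a single lookup.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

// "Smart data, dumb logic": the mapping lives in a table, and the code that
// uses it is one unconditional lookup instead of a long if/else chain.
static const std::unordered_map<std::string, int> kHttpStatus = {
    {"ok", 200}, {"created", 201}, {"not_found", 404}, {"server_error", 500},
};

int status_for(const std::string &name) {
    auto it = kHttpStatus.find(name);
    return it != kHttpStatus.end() ? it->second : 400; // default for unknown names
}

int main() {
    std::cout << status_for("not_found") << "\n"; // 404
    std::cout << status_for("bogus") << "\n";     // 400
    return 0;
}
```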
Emanuel Rylke on 2017-12-19 at 04:07:58
said: To expand on this, it can even be worth to use complex code to generate
that dumb lookup table. This is so because the code generating the lookup
table runs before, and therefore separately, from the code using the LUT.
This means that both can be considered in isolation more often; bringing the
combined complexity closer to m+n than m*n. Reply
↓
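One classic small example of that pattern, sketched in Python: the 'smart' half builds a byte popcount table once up front, and the code that consumes it stays dumb:
# the table-building code runs once, separately from the code that uses it
POPCOUNT = [bin(i).count("1") for i in range(256)]

def popcount32(x):
    # the consumer stays dumb: table lookups and adds, no branching
    return (POPCOUNT[x & 0xFF] + POPCOUNT[(x >> 8) & 0xFF] +
            POPCOUNT[(x >> 16) & 0xFF] + POPCOUNT[(x >> 24) & 0xFF])
Each half can be read and tested in isolation, which is the m+n rather than m*n point.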
TheDividualist on 2017-12-19 at
05:39:39 said: Admittedly I have an SQL hammer and think everything is
a nail, but why wouldn't *every* program include a database, like the
SQLite that even comes bundled with Python distros, no sweat, and put that
lookup table into it, not in a dictionary inside the code?
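For what it's worth, the same sort of lookup pushed into SQLite from Python might look like this (a sketch using the bundled sqlite3 module, with made-up table and column names):
import sqlite3

db = sqlite3.connect(":memory:")   # or a file on disk
db.execute("CREATE TABLE messages (status TEXT PRIMARY KEY, message TEXT)")
db.executemany("INSERT INTO messages VALUES (?, ?)",
               [("ok", "request completed"), ("fatal", "unrecoverable error")])

def describe(status):
    row = db.execute("SELECT message FROM messages WHERE status = ?",
                     (status,)).fetchone()
    return row[0] if row else "unknown status"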
Of course the more you go in this direction the more problems you will
have with unit testing, in case you want to do such a thing. Generally we
SQL-hammer guys don't do that much, because in theory any function can read
any part of the database, making the whole database the potential "inputs"
for every function.
That is pretty lousy design, but I think good design patterns for
separation of concerns and unit testability are not yet really known for
database driven software, I mean, for example, model-view-controller claims
to be one, but actually fails as these can and should call each other. So
you have in the "customer" model or controller a function to check if the
customer has unpaid invoices, and decide to call it from the "sales order"
controller or model to ensure such customers get no new orders registered.
In the same "sales order" controller you also check the "product" model or
controller if it is not a discontinued product and check the "user" model
or controller if they have the proper rights for this operation and the
"state" controller if you are even offering this product in that state and
so on a gazillion other things, so if you wanted to automatically unit test
that "register a new sales order" function you have a potential "input"
space of half the database. And all that with good separation of concerns
MVC patterns. So I think no one really figured this out yet? Reply
↓
guest on 2017-12-20 at 19:21:13
said: There's a reason not to do this if you can help it – dispatching
through a non-constant LUT is way slower than running easily-predicted
conditionals. Like, an order of magnitude slower, or even worse. Reply
↓
esr on 2017-12-19 at 07:45:38
said: >Perhaps a better question than my one above: what codebases would you
recommend for study which would be good examples of the latter (besides
reposurgeon)?
I do not have an instant answer, sorry. I'll hand that question to my
backbrain and hope an answer pops up. Reply
↓
Jon Brase on 2017-12-20 at 00:54:15 said:
When you can turn code complexity into data structure complexity, that's usually
a win. Or to put it slightly differently, dumb code munching smart data beats smart
code munching dumb data. It's easier to debug and reason about.
Doesn't "dumb code munching smart data" really reduce to "dumb code implementing
a virtual machine that runs a different sort of dumb code to munch dumb data"?
Reply
↓
A domain specific language is easier to reason about within its proper
domain, because it lowers the difference between the problem and the
representation of the problem. Reply
↓
wisd0me on 2017-12-19 at 02:35:10 said: I wonder
why you talked about inventing an AMM-layer so much, but said nothing about the GC that is
available for the C language. Why do you need to invent some AMM-layer in the first place,
instead of just using the GC?
For example, Bigloo Scheme and The GNU Objective C runtime successfully used it, among many
others. Reply ↓
Jeremy Bowers on
2017-12-19 at
10:40:24 said: Rust seems like a good fit for the cases where you need the low latency
(and other speed considerations) and can't afford the automation. Firefox finally got to
benefit from that in the Quantum release, and there's more coming. I wouldn't dream of
writing a browser engine in Go, let alone a highly-concurrent one. When you're willing to
spend on that sort of quality, Rust is a good tool to get there.
But the very characteristics necessary to be good in that space will prevent it from
becoming the "default language" the way C was for so long. As much fun as it would be to
fantasize about teaching Rust as a first language, I think that's crazy talk for anything
but maybe MIT. (And I'm not claiming it's a good idea even then; just saying that's the
level of student it would take for it to even be possible.) Dunno if Go will become
that "default language" but it's taking a decent run at it; most of the other contenders I
can think of at the moment have the short-term-strength-yet-long-term-weakness of being
tied to a strong platform already. (I keep hearing about how Swift is going to be usable
off of Apple platforms real soon now just around the corner just a bit longer.) Reply
↓
esr on
2017-12-19 at
17:30:07 said: >Dunno if Go will become that "default language" but it's taking
a decent run at it; most of the other contenders I can think of at the moment have the
short-term-strength-yet-long-term-weakness of being tied to a strong platform already.
I really think the significance of Go being an easy step up from C cannot be
overestimated – see my previous blogging about the role of inward transition
costs.
Ken Thompson is insidiously clever. I like channels and goroutines and := but the
really consequential hack in Go's design is the way it is almost perfectly designed to
co-opt people like me – that is, experienced C programmers who have figured out
that ad-hoc AMM is a disaster area. Reply ↓
Jeff Read on 2017-12-20 at 08:58:23 said:
Go probably owes as much to Rob Pike and Phil Winterbottom for its design as it
does to Thompson -- because it's basically Alef with the feature whose lack,
according to Pike, basically killed Alef: garbage collection.
I don't know that it's "insidiously clever" to add concurrency primitives and GC
to a C-like language, as concurrency and memory management were the two obvious
banes of every C programmer's existence back in the 90s -- so if Go is "insidiously
clever", so is Java. IMHO it's just smart, savvy design which is no small thing;
languages are really hard to get right. And in the space Go thrives in, Go gets a
lot right. Reply
↓
John G on 2017-12-19 at 14:01:09 said: Eric,
have you looked into D *lately*? These days:
First, `pure` functions and transitive `const`, which make code so much
easier to reason about
Second, almost the entire language is available at compile time. That, combined
with templates, enables crazy (in a good way) stuff, like building an optimized
state machine for a regex at compile time. Given that the regex pattern is known at
compile time, of course. But that's pretty common.
Can't find it now, but there were benchmarks which show it's faster than any
run-time-built regex engine out there. Still, the source code is pretty
straightforward – one doesn't have to be Einstein to write code like that
[1].
There is a talk by Andrei Alexandrescu called "Fastware" where he shows how
various metaprogramming facilities enable useful optimizations [2].
And a more recent talk, "Design By Introspection" [3], where he shows how these
facilities enable much more compact designs and implementations.
Not sure. I've only recently begun learning D, and I don't know Go. [The D
overview]( https://dlang.org/overview.html ) may include
enough for you to surmise the differences though. Reply
↓
As the greenspunity rises, you are likely to find that more and more of your effort
and defect chasing is related to the AMM layer, and proportionally less goes to the
application logic. Redoubling your effort, you increasingly miss your aim.
Even when you're merely at the edge of this trap, your defect rates will be dominated
by issues like double-free errors and malloc leaks. This is commonly the case in C/C++
programs of even low greenspunity.
Interesting. This certainly fits my experience.
Has anybody looked for common patterns in whatever parasitic distractions plague you
when you start to reach the limits of a language with AMM? Reply ↓
result, err = whatever()
if (err) dosomethingtofixit();
abstraction.
I went through a phase earlier this year where I tried to eliminate the concept of
an errno entirely (and failed, in the end reinventing lisp, badly), but sometimes I
still think – to the tune of the Ride of the Valkyries – "Kill the errno,
kill the errno, kill the ERRno, kill the err!" Reply ↓
jim on 2017-12-23 at
23:37:46 said: I have on several occasions been part of big projects using
languages with AMM, many programmers, much code, and they hit scaling problems and
died, but it is not altogether easy to explain what the problem was.
But it was very clear that the fact that I could get a short program, or a quick fix
up and running with an AMM much faster than in C or C++ was failing to translate into
getting a very large program containing far too many quick fixes up and running.
Reply ↓
AMM is not the only thing that Lisp brings on the table when it comes to dealing with
Greenspunity. Actually, the whole point of Lisp is that there is not _one_ conceptual
barrier to development, or a few, or even a lot, but that there are _arbitrarily_many_, and
that is why you need to be able to extend your language through _syntactic_abstraction_ to
build DSLs, so that every abstraction layer can be written in a language that is fit for
that layer. [Actually, traditional Lisp is missing the fact that DSL tooling depends on
_restriction_ as well as _extension_; but Haskell types and Racket languages show the way
forward in this respect.]
That is why all languages without macros, even with AMM, remain "blub" to those who grok
Lisp. Even in Go, they reinvent macros, just very badly, with various preprocessors to cope
with the otherwise very low abstraction ceiling.
(Incidentally, I wouldn't say that Rust has no AMM; instead it has static AMM. It also
has some support for macros.) Reply ↓
jim on
2017-12-23
at 22:02:18 said: Static AMM means that the compiler analyzes your code at
compile time, and generates the appropriate frees.
Static AMM means that the compiler automatically does what you manually do in C,
and semi-automatically in C++11. Reply
↓
Patrick Maupin on 2017-12-24 at 13:36:35
said: To the extent that the compiler's insertion of calls to free() can be
easily deduced from the code without special syntax, the insertion is merely an
optimization of the sort of standard AMM semantics that, for example, a PyPy
compiler could do.
To the extent that the compiler's ability to insert calls to free() requires
the sort of special syntax about borrowing that means that the programmer has
explicitly described a non-stack-based scope for the variable, the memory
management isn't automatic.
Perhaps this is why a google search for "static AMM" doesn't return much.
Reply
↓
Jeff Read on 2017-12-27 at 03:01:19
said: I think you fundamentally misunderstand how borrowing works in Rust.
In Rust, as in C++ or even C, references have value semantics. That is
to say any copies of a given reference are considered to be "the same". You
don't have to "explicitly describe a non-stack-based scope for the
variable", but the hitch is that there can be one, and only one, copy of
the original reference to a variable in use at any time. In Rust this is
called ownership, and only the owner of an object may mutate it.
Where borrowing comes in is that functions called by the owner of an
object may borrow a reference to it. Borrowed references are
read-only, and may not outlast the scope of the function that does the
borrowing. So everything is still scope-based. This provides a convenient
way to write functions in such a way that they don't have to worry about
where the values they operate on come from or unwrap any special types,
etc.
If you want the scope of a reference to outlast the function that
created it, the way to do that is to use a std::Rc , which
provides a regular, reference-counted pointer to a heap-allocated object,
the same as Python.
The borrow checker checks all of these invariants for you and will flag
an error if they are violated. Since worrying about object lifetimes is
work you have to do anyway lest you pay a steep price in performance
degradation or resource leakage, you win because the borrow checker makes
this job much easier.
Rust does have explicit object lifetimes, but where these are most
useful is to solve the problem of how to have structures, functions, and
methods that contain/return values of limited lifetime. For example
declaring a struct Foo { x: &'a i32 } means that any
instance of struct Foo is valid only as long as the borrowed
reference inside it is valid. The borrow checker will complain if you
attempt to use such a struct outside the lifetime of the internal
reference. Reply
↓
Doctor Locketopus on 2017-12-27 at 00:16:54 said:
Good Lord (not to be confused with Audre Lorde). If I weren't already convinced
that Rust is a cult, that would do it.
However, I must confess to some amusement about Karl Marx and Michel Foucault
getting purged (presumably because Dead White Male). Reply
↓
Jeff Read on 2017-12-27 at 02:06:40 said:
This is just a cost of doing business. Hacker culture has, for decades, tried to
claim it was inclusive and nonjudgemental and yada yada -- "it doesn't
matter if you're a brain in a jar or a superintelligent dolphin as long as your
code is good" -- but when it comes to actually putting its money where its mouth
is, hacker culture has fallen far short. Now that's changing, and one of the side
effects of that is how we use language and communicate internally, and to the wider
community, has to change.
But none of this has to do with automatic memory management. In Rust, management
of memory is not only fully automatic, it's "have your cake and eat it too": you
have to worry about neither releasing memory at the appropriate time, nor the
severe performance costs and lack of determinism inherent in tracing GCs. You do
have to be more careful in how you access the objects you've created, but the
compiler will assist you with that. Think of the borrow checker as your friend, not
an adversary. Reply
↓
John on 2017-12-20 at 05:03:22 said: Present-day
C++ is far from the C++ that was first standardized in 1998. You should *never* be
manually managing memory in modern C++. You need a dynamically sized array?
Use std::vector. You need an ad-hoc graph? Use std::shared_ptr and std::weak_ptr.
Any code I see which uses new or delete, malloc or free, fails code review. Reply ↓
Garrett on 2017-12-21 at 11:24:41 said: What
makes you refer to this as a systems programming project? It seems to me to be a standard
data-processing problem. Data in, data out. Sure, it's hella complicated and you're
brushing up against several different constraints.
In contrast to what I think of as systems programming, you have automatic memory
management. You aren't working in kernel-space. You aren't modifying the core libraries or
doing significant programmatic interface design.
I'm missing something in your semantic usage and my understanding of the solution
implementation. Reply ↓
Never user-facing. Often scripted. Development-support tool. Used by systems
programmers.
I realize we're in an area where the "systems" vs. "application" distinction gets a
little tricky to make. I hang out in that border zone a lot and have thought about
this. Are GPSD and ntpd "applications"? Is giflib? Sure, they're out-of-kernel, but no
end-user will ever touch them. Is GCC an application? Is apache or named?
Inside the kernel is clearly systems. Outside it, I think the "systems" vs.
"application" distinction is more about the skillset being applied and who your expected
users are than anything else.
I would not be upset at anyone who argued for a different distinction. I think
you'll find the definitional questions start to get awfully slippery when you poke at
them. Reply ↓
What makes you refer to this as a systems programming project? It seems to me to
be a standard data-processing problem. Data in, data out. Sure, it's hella
complicated and you're brushing up against several different constraints.
When you're talking about Unix, there is often considerable overlap between
"systems" and "application" programming because the architecture of Unix, with pipes,
input and output redirection, etc., allowed for essential OS components to be turned
into simple, data-in-data-out user-space tools. The functionality of ls ,
cp , rm , or cat , for instance, would have been
built into the shell of a pre-Unix OS (or many post-Unix ones). One of the great
innovations of Unix is to turn these units of functionality into standalone programs,
and then make spawning processes cheap enough to where using them interactively from
the shell is easy and natural. This makes extending the system, as accessed through the
shell, easy: just write a new, small program and add it to your PATH .
So yeah, when you're working in an environment like Unix, there's no bright-line
distinction between "systems" and "application" code, just like there's no bright-line
distinction between "user" and "developer". Unix is a tool for facilitating humans
working with computers. It cannot afford to discriminate, lest it lose its Unix-nature.
(This is why Linux on the desktop will never be a thing, not without considerable decay
in the facets of Linux that made it so great to begin with.) Reply ↓
At the upper end you can; the Yun has 64 MB, as do the Dragino variants. You can run
OpenWRT on them and use its Python (although the latest OpenWRT release, Chaos Calmer,
significantly increased its storage footprint from older firmware versions), which runs
fine in that memory footprint, at least for the kinds of things you're likely to do on this
type of device. Reply ↓
I'd be comfortable in that environment, but if we're talking AMM languages Go would
probably be a better match for it. Reply ↓
Peter
Donis on 2017-12-21 at 23:16:33 said:
Go is not available as a standard package on OpenWRT, but it probably won't be too
much longer before it is. Reply ↓
Jeff Read on 2017-12-22 at 14:07:21
said: Go binaries are statically linked, so the best approach is probably to
install Go on your big PC, cross compile, and push the binary out to the
device. Cross-compiling is a doddle; simply set GOOS and GOARCH. Reply
↓
jim on 2017-12-22 at 06:37:36 said:
C++11 has an excellent automatic memory management layer. Its only defect is that it is
optional, for backwards compatibility with C and C++98 (though it really is not all that
compatible with C++98)
And, being optional, you are apt to take the short cut of not using it, which will bite
you.
Rust is, more or less, C++17 with the automatic memory management layer being almost
mandatory. Reply ↓
> you are likely to find that more and more of your effort and defect chasing is
related to the AMM layer
But the AMM layer for C++ has already been written and debugged, and standards and
idioms exist for integrating it into your classes and type definitions.
Once built into your classes, you are then free to write code as if in a fully garbage
collected language in which all types act like ints.
C++14, used correctly, is a metalanguage for writing domain specific languages.
Now sometimes building your classes in C++ is weird, nonobvious, and apt to break for
reasons that are difficult to explain, but done correctly all the weird stuff is done once
in a small number of places, not spread all over your code. Reply ↓
Dave taht on 2017-12-22 at 22:31:40 said: Linux is
the best C library ever created. And it's often terrifying. Things like RCU are nearly
impossible for mortals to understand. Reply ↓
Alex Beamish on 2017-12-23 at 11:18:48 said:
Interesting thesis .. it was the 'extra layer of goodness' surrounding file operations, and
not memory management, that persuaded me to move from C to Perl about twenty years ago.
Once I'd moved, I also appreciated the memory management in the shape of 'any size you
want' arrays, hashes (where had they been all my life?) and autovivification -- on the spot
creation of array or hash elements, at any depth.
While C is a low-level language that masquerades as a high-level language, the original
intent of the language was to make writing assembler easier and faster. It can still be
used for that, when necessary, leaving the more complicated matters to higher level
languages. Reply ↓
esr on
2017-12-23 at
14:36:26 said: >Interesting thesis .. it was the 'extra layer of goodness'
surrounding file operations, and not memory management, that persuaded me to move from
C to Perl about twenty years ago.
Pretty much all that goodness depends on AMM and could not be implemented without
it. Reply ↓
jim on 2017-12-23 at
22:17:39 said: Autovivification saves you much effort, thought, and coding, because
most of the time the perl interpreter correctly divines your intention, and does a pile
of stuff for you, without you needing to think about it.
And then it turns around and bites you because it does things for you that you did
not intend or expect.
The larger the program, and the longer you are keeping the program around, the more
it is a problem. If you are writing a quick one off script to solve some specific
problem, you are the only person who is going to use the script, and are then going to
throw the script away, fine. If you are writing a big program that will be used by lots
of people for a long time, autovivification is going to turn around and bite you hard,
as are lots of similar perl features where perl makes life easy for you by doing stuff
automagically.
With the result that there are in practice very few big perl programs used by lots
of people for a long time, while there are an immense number of very big C and C++
programs used by lots of people for a very long time.
On esr's argument, we should never be writing big programs in C any more, and yet,
we are.
I have been part of big projects with many engineers using languages with automatic
memory management. I noticed I could get something up and running in a fraction of the
time that it took in C or C++.
And yet, somehow, strangely, the projects as a whole never got successfully
completed. We found ourselves fighting weird shit done by the vast pile of run time
software that was invisibly under the hood automatically doing stuff for us. We would
be fighting mysterious and arcane installation and integration issues.
This, my personal experience, is the exact opposite of the outcome claimed by
esr.
Well, that was perl, Microsoft Visual Basic, and PHP. Maybe Java scales better.
But perl, Microsoft visual basic, and PHP did not scale. Reply ↓
Oh, dear Goddess, no wonder. All three of those languages are notorious
sinkholes – they're where "maintainability" goes to die a horrible and
lingering death.
Now I understand your fondness for C++ better. It's bad, but those are way worse
at any large scale. AMM isn't enough to keep you out of trouble if the rest of the
language is a tar-pit. Those three are full of the bones of drowned devops
victims.
Yes, Java scales better. CPython would too from a pure maintainability
standpoint, but it's too slow for the kind of deployment you're implying – on
the other hand, PyPy might not be, I'm finding the JIT compilation works extremely
well and I get runtimes I think are within 2x or 3x of C. Go would probably be da
bomb. Reply
↓
Oh, dear Goddess, no wonder. All three of those languages are notorious
sinkholes – they're where "maintainability" goes to die a horrible and
lingering death.
Can confirm -- Visual Basic (6 and VBA) is a toilet. An absolute cesspool.
It's full of little gotchas -- such as non-short-circuiting AND and OR
operators (there are no differentiated bitwise/logical operators) and the
cryptic Dir() function that exactly mimics the broken semantics of MS-DOS's
directory-walking system call -- that betray its origins as an extended version
of Microsoft's 8-bit BASIC interpreter (the same one used to write toy programs
on TRS-80s and Commodores from a bygone era), and prevent you from
writing programs in a way that feels natural and correct if you've been exposed
to nearly anything else.
VB is a language optimized to a particular workflow -- and like many
languages so optimized as long as you color within the lines provided by the
vendor you're fine, but it's a minefield when you need to step outside those
lines (which happens sooner than you may think). And that's the case with just
about every all-in-one silver-bullet "solution" I've seen -- Rails and PHP
belong in this category too.
It's no wonder the cuddly new Microsoft under Nadella is considering making
Python a first-class extension language for Excel (and perhaps other
Office apps as well).
Visual Basic .NET is something quite different -- a sort of
Microsoft-flavored Object Pascal, really. But I don't know of too many shops
actually using it; if you're targeting the .NET runtime it makes just as much
sense to just use C#.
As for Perl, it's possible to write large, readable, maintainable
code bases in object-oriented Perl. I've seen it done. BUT -- you have to be
careful. You have to establish coding standards, and if you come across the
stereotype of "typical, looks-like-line-noise Perl code" then you have to flunk
it at code review and never let it touch prod. (Do modern developers even know
what line noise is, or where it comes from?) You also have to choose your
libraries carefully, ensuring they follow a sane semantics that doesn't require
weirdness in your code. I'd much rather just do it in Python. Reply
↓
TheDividualist on 2017-12-27 at
11:24:59 said: VB.NET is unused in the kind of circles *you know*
because these are competitive and status-conscious circles and anything
with BASIC in the name is so obviously low-status and just looks so bad on
the resume that it makes sense to add that 10-20% more effort and learn C#.
C# sounds a whole lot more high-status, as it has C in the name, so it obviously
looks like being a Real Programmer on the resume.
What you don't know is what happens outside the circles where
professional programmers compete for status and jobs.
I can report that there are many "IT guys" who are not in these circles,
they don't have the intra-programmer social life hence no status concerns,
nor do they ever intend apply for Real Programmer jobs. They are just rural
or not first worlder guys who grew up liking computers, and took a generic
"IT guy" job at some business in a small town and there they taught
themselves Excel VBscript when the need arised to automate some reports,
and then VB.NET when it was time to try to build some actual application
for in-house use. They like it because it looks less intimidating –
it sends out those "not only meant for Real Programmers" vibes.
I wish we lived in a world where Python would fill that non-intimidating
amateur-friendly niche, as it could do that job very well, but we are
already on a hell of a path dependence. Seriously, Bill Gates and Joel
Spolsky got it seriously right when they made Excel scriptable. The trick
is how to provide a smooth transition between non-programming and
programming.
One classic way is that you are a sysadmin, you use the shell, then you
automate tasks with shell scripts, then you graduate to Perl.
One, relatively new way is that you are a web designer, write HTML and
CSS, and then slowly you get dragged, kicking and screaming into JavaScript
and PHP.
The genius was that they realized that a spreadsheet is basically modern
paper. It is the most basic and universal tool of the office drone. I print
all my automatically generated reports into xlsx files, simply because for
me it is the "paper" of 2017, you can view it on any Android phone, and
unlike PDF and like paper you can interact and work with the figures, like
add other numbers to them.
So it was automating the spreadsheet, the VBScript Excel macro that led
the way from not-programming to programming for an immense number of office
drones, who are far more numerous than sysadmins and web designers.
Aaand I think it was precisely because of those microcomputers, like the
Commodore. Out of every 100 office drones in 1991 or so, 1 or 2 had
entertained themselves in 1987 typing in some BASIC programs published in
computer mags. So when they were told Excel is programmable with a form of
BASIC they were not too intimidated.
This created such a giant path dependency that still if you want to sell
a language to millions and millions of not-Real Programmers you have to at
least make it look somewhat like Basic.
I think from this angle it was a masterwork of creating and exploiting
path dependency. Put BASIC on microcomputers. Have a lot of hobbyists learn
it for fun. Create the most universal office tool. Let it be programmable
in a form of BASIC – you can just work on the screen, let it generate
a macro and then you just have to modify it. Mostly copy-pasting, not real
programming. But you slowly pick up some programming idioms. Then the path
curves up to VB and then VB.NET.
To challenge it all, one needs to find an application area as important
as number crunching and reporting in an office: Excel is basically
electronic paper from this angle and it is hard to come up with something
like this. All our nearly computer illiterate salespeople use it. (90% of
the use beyond just typing data in a grid is using the auto sum function.)
And they don't use much else than that and Word and Outlook and chat
apps.
Anyway suppose such a purpose can be found, then you can make it
scriptable in Python and it is also important to be able to record a macro
so that people can learn from the generated code. Then maybe that dominance
can be challenged. Reply
↓
Jeff Read on 2018-01-18 at
12:00:29 said: TIOBE says that while VB.NET saw an uptick in
popularity in 2011, it's on its way down now and usage was moribund
before then.
In your attempt to reframe my statements in your usual reference
frame of Academic Programmer Bourgeoisie vs. Office Drone Proletariat,
you missed my point entirely: VB.NET struggled to get a foothold during
the time when VB6 was fresh in developers' minds. It was too different
(and too C#-like) to win over VB6 devs, and didn't offer enough
value-add beyond C# to win over the people who would've just used C# or
Java. Reply
↓
I have been part of big projects with many engineers using languages with
automatic memory management. I noticed I could get something up and running in a
fraction of the time that it took in C or C++.
And yet, somehow, strangely, the projects as a whole never got successfully
completed. We found ourselves fighting weird shit done by the vast pile of run
time software that was invisibly under the hood automatically doing stuff for us.
We would be fighting mysterious and arcane installation and integration
issues.
Sounds just like every Ruby on Fails deployment I've ever seen. It's great when
you're slapping together Version 0.1 of a product or so I've heard. But I've never
joined a Fails team on version 0.1. The ones I saw were already well-established,
and between the PFM in Rails itself, and the amount of monkeypatching done to
system classes, it's very, very hard to reason about the code you're looking at.
From a management level, you're asking for enormous pain trying to onboard new
developers into that sort of environment, or even expand the scope of your product
with an existing team, without them tripping all over each other.
There's a reason why Twitter switched from Rails to Scala. Reply
↓
> Hacker culture has, for decades, tried to claim it was inclusive and
nonjudgemental and yada yada ... hacker culture has fallen far short. Now that's changing,
... has to change.
Observe that "has to change" in practice means that the social justice warriors take
charge.
Observe that in practice, when the social justice warriors take charge, old bugs don't
get fixed, new bugs appear, and projects turn into aimless garbage, if any development
occurs at all.
"has to change" is a power grab, and the people grabbing power are not competent to
code, and do not care about code.
Reflect on the attempted
suicide of "Coraline". It is not people like me who keep using the correct pronouns that
caused "her" to attempt suicide. It is the people who used "her" to grab power. Reply
↓
esr on
2017-12-27 at
14:30:33 said: >"has to change" is a power grab, and the people grabbing power
are not competent to code, and do not care about code.
It's never happened before, and may very well never happen again but this once I
completely agree with JAD. The "change" the SJWs actually want – as opposed to
what they claim to want – would ruin us. Reply ↓
cppcoreguidelines-* and modernize-* will catch most of the issues that esr complains
about, in practice usually all of them, though I suppose that as the project gets bigger,
some will slip through.
Remember that gcc and g++ are C++98 by default, because of the vast base of old-fashioned
C++ code which is subtly incompatible with C++11, C++11 onwards being the version of C++
that optionally supports memory safety, hence necessarily subtly incompatible.
To turn on C++11, place the following in your CMakeLists.txt:
cmake_minimum_required(VERSION 3.5)
# set standard required to ensure that you get
# the same version of C++ on every platform
# as some environments default to older dialects
# of C++ and some do not.
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
Originally I made this system because I wanted to test programming a microkernel OS,
with protected mode, PCI bus, USB, ACPI etc., and I didn't want to get close to the 'event
horizon' of memory management in C.
But I didn't wait for the Greenspun law to kick in, so I first developed a safe memory
system as a runtime, and replaced the standard C runtime and memory management with
it.
I wanted zero segfaults or memory errors possible anywhere in the C code, because
debugging a bare-metal exception, without a debugger, with complex data structures made in C
looks very close to the black hole.
I didn't want to use C++ because C++ compilers have a very unpredictable binary format and
function name decoration, which makes it much harder to interface with at kernel level.
I also wanted a system as efficient as possible to manage lockless shared access
between threads to the whole memory as much as possible, to avoid the 'exclusive borrow'
syndrome of Rust, with global variables shared between threads and lockless algorithms to
access them.
I took inspiration from the algorithms on this site http://www.1024cores.net/ to develop the basic system, with
strong references as the norm, and direct 'bare pointers' only as weak references for fast
access to memory in C.
What I ended up doing is basically a 'strongly typed hashmap DAG' to store the object
reference hierarchy, which can be manipulated using 'lambda expressions', such that
applications can manipulate objects in an indirect manner only through the DAG abstraction,
without having to manipulate bare pointers at all.
This also makes a mark-and-sweep garbage collector easier to do, especially with an
'event based' system: the main loop can call the garbage collector between two executions
of event/message handlers, which has the advantage that it can be run at a point where
there is no application data on the stack to mark, so it avoids mistaking application data
on the stack for a pointer. All references that are only in stack variables can get
automatically garbage collected when the function exits, much like in C++ actually.
The garbage collector can still be called by the allocator when there is an OOM error; it
will attempt a garbage collection before failing the allocation, but all references on the
stack should be garbage collected when the function returns to the main loop and the garbage
collector is run.
As the whole reference hierarchy is expressed explicitly in the DAG, there shouldn't be
any pointers stored in the heap, outside of the module's data section, which corresponds to C
global variables that are used as the 'root elements' of the object hierarchy, which can be
traversed to find all the active references to heap data that the code can potentially
use. A quick system could be made so that the compiler can automatically generate a list
of the 'root references' in the global variables, to avoid memory leaks if some global data
can look like a reference.
As each thread has its own heap, it also avoids the 'stop the world' syndrome: all
threads can garbage collect their own heap, and there is already some system of lockless
synchronisation to access references based on expressions in the DAG, to avoid having to
rely only on 'bare pointers' to manipulate the object hierarchy, which allows dynamic
relocation and makes it easier to track active references.
It's also very useful for tracking memory leaks; as the allocator can keep the time of each
memory allocation, it's easy to see all the allocations that happened between two points
of the program, and dump all their hierarchy and properties from just the 'bare
reference'.
Each thread contains two heaps, one which is manually managed, mostly used for temporary
strings or IO buffers, and the other heap which can be managed either with atomic
reference counting, or mark and sweep.
With this system, C programs rarely have to use malloc/free directly, or manipulate
pointers to allocated memory directly, other than for temporary buffer allocation, like a
dynamic stack, for IO buffers or temporary strings which can easily be managed manually. And
all the memory manipulation can be made via a runtime which internally keeps track of
pointer address and size, data type, and eventually a 'finalizer' function that will be
called when the pointer is freed.
Since I started to use this system to make C programs, along with my own ABI which
can dynamically link binaries compiled with Visual Studio and gcc together, I have tested it for
many different use cases. I could make a mini multi-threaded window manager/UI, with async
IRQ-driven HID driver events, and a system of distributed applications based on blockchain
data, which includes a multi-threaded HTTP server that can handle parallel JSON-RPC calls, with
an abstraction of the application stack via custom data type definitions / scripts stored on
the blockchain, and I have very little problem with memory, albeit it's 100% in C,
multithreaded, and deals with heavily dynamic data.
With the mark-and-sweep mode, it can become quite easy to develop multi-threaded
applications with a good level of concurrency, even to do a simple database system, driven by a
script over async HTTP/JSON-RPC, without having to care about complex memory
management.
Even with the reference-count mode, the manipulation of references is explicit, and it
should not be too hard to detect leaks with simple parsers. I already did a test with the antlr C
parser, with a visitor class to parse the grammar and detect potential errors; as all
memory referencing happens through specific types instead of bare pointers, it's not too hard
to detect potential memory leak problems with a simple parser. Reply ↓
Arron Grier on 2018-06-07 at 17:37:17 said: Since
you've been talking a lot about Go lately, should you not mention it on your Document: How
To Become A Hacker?
esr on
2018-06-08 at
05:48:37 said: >Since you've been talking a lot about Go lately, should you not
mention it on your Document: How To Become A Hacker?
Too soon. Go is very interesting but it's not an essential tool yet.
Yankes on 2018-12-18 at 19:20:46 said: I have
one question: do you even need global AMM? Take one element of the graph: when will/should
it be released in your reposurgeon? Overall I think the answer is never, because it usually links with
others from this graph. Also, do you check how many objects are created and released
during operations? I do not mean temporary strings but the objects representing the main working
set.
Depending on the answer: if you load some graph element and it will stay
indefinitely in memory, then this could easily be converted to C/C++ by simply never using
`free` for graph elements (and all problems with memory management go out of the
window).
If they should be released early, then when should that happen? Do you have some code in
reposurgeon that purges objects when they are not needed any more? Mere reachability
of an object does not mean it is needed; many times it is quite the opposite.
I am now working on a C# application that had a similar bungle, and the previous
developers' "solution" was to restart the whole application instead of fixing lifetime
problems. The correct solution was C++-like code: I create an object, do the work and purge it
explicitly. With this, none of the components have memory issues now. Of course the problem there lay
with not knowing the tools they used, not with the complexity of the domain; but did you do an analysis of
what is needed and what is not, and for how long? AMM does not solve this.
btw I'm a big fan of the lisp that is in C++11 aka templates, great pure functional language :D
Reply ↓
Oh hell yes. Consider, for example, the demands of loading in and operating on
multiple repositories. Reply ↓
Yankes on 2018-12-19 at 08:56:36 said:
If I understood this correctly, the situation looks like this:
I have a process that has loaded repos A, B and C and is actively working on each one.
Now because of some demand we need to load repo D.
After we are done we go back to A, B and C.
Now the question is: should D's data be purged?
If there are memory connections from the previous repos then it will stay in memory; if
not, then AMM will remove all its data from memory.
Assume this is a complex graph where, when you have access to any element, you can crawl to
any other element of the graph (this is a simplification but probably a safe
assumption).
The first case (there is a connection) is equivalent to not using `free` in C. Of
course if not all of the graph is reachable then there will be a partial purge of its memory
(let's say that 10% will stay), but what happens when you need to load repo
D again? The data still available is hidden deep in other graphs and most of the data has been lost to
AMM. You need to load everything again, and now repo D's size is 110%.
In case there is no connection between repos A, B, C and repo D, then we can
free it entirely.
This is easily done in C++ (some kind of smart pointer that knows whether it points into the same
repo or another).
Is my reasoning correct? Or am I missing something?
btw there is a BIG difference between C and C++: I can implement things in C++ that I
will NEVER be able to implement in C. An example of this is my strongly typed simple
script language: https://github.com/Yankes/OpenXcom/blob/master/src/Engine/Script.cpp
I would need to drop functionality/protections to be able to convert this to C (or
even C++03).
Another example of this is https://github.com/fmtlib/fmt from C++ and
`printf` from C.
Both do exactly the same thing but the C++ one is many times better and safer.
This means if we add your statement on impossibility and mine then we have:
C <<< C++ <<< Go/Python
but for me personally it is more:
C <<< C++ < Go/Python
than yours:
C/C++ <<< Go/Python Reply
↓
Not much. The bigger issue is that it is fucking insane to try
anything like this in a language where the core abstractions are leaky. That
disqualifies C++. Reply
↓
Yankes on 2018-12-19 at 10:24:47
said: I only disagree with the word `insane`. C++ has a lot of problems, like UB, a
lot of corner cases, leaking abstractions, the whole crap from C (and my
favorite: 1000-line errors from templates), but it is not insane to work with
memory problems in it.
You can easily create tools that make all these problems bearable, and this
is the biggest flaw in C++: many problems are solvable, but not out of the box. C++
is good at creating abstractions: https://www.youtube.com/watch?v=sPhpelUfu8Q
If the abstraction fits your domain then it will not leak much, because it fits
the underlying problem.
And you can enforce lots of things that allow you to reason locally about
the behavior of the program.
If creating this new abstraction is indeed insane, then I think
you have problems in Go too, because the only problem that AMM solves is
reachability of memory and how long you need it.
btw the best thing that shows the difference between C++03 and C++11 is
`std::vector<std::vector<T>>`: in C++03 this is insanely stupid and in
C++11 it is insanely clever, because it has the performance characteristics of
`std::vector` (thanks to `std::move`) and no problems with memory
management (keep the index stable and use `v.at(i).at(j).x = 5;` or wrap it in a
helper class and use `v[i][j].x` that will throw on a wrong index).
Reply
↓
Of the omitted language features, the designers explicitly argue against assertions and
pointer arithmetic, while defending the choice to omit type inheritance as giving a more useful
language, encouraging instead the use of interfaces to
achieve dynamic dispatch [h] and
composition to reuse code.
Composition and delegation are in fact largely
automated by struct embedding; according to researchers Schmager et al. , this feature
"has many of the drawbacks of inheritance: it affects the public interface of objects, it is
not fine-grained (i.e., no method-level control over embedding), methods of embedded objects
cannot be hidden, and it is static", making it "not obvious" whether programmers will overuse
it to the extent that programmers in other languages are reputed to overuse inheritance.
[61]
The designers express an openness to generic programming and note that built-in functions
are in fact type-generic, but these are treated as special cases; Pike calls this a
weakness that may at some point be changed. [53]
The Google team built at least one compiler for an experimental Go dialect with generics, but
did not release it. [96] They are
also open to standardizing ways to apply code generation. [97]
Initially omitted, the exception-like panic/recover
mechanism was eventually added, which the Go authors advise using for unrecoverable errors such
as those that should halt an entire program or server request, or as a shortcut to propagate
errors up the stack within a package (but not across package boundaries; there, error returns
are the standard API). [98]
Maintaining and adding new features to legacy systems developed using C/C++ is a daunting task. There are several facets to the problem -- understanding the existing class hierarchy
and global variables, the different user-defined types, and function call graph analysis, to name a few. This article discusses several
features of doxygen, with examples in the context of projects using C/C++ .
However, doxygen is flexible enough to be used for software projects developed using the Python, Java, PHP, and other languages,
as well. The primary motivation of this article is to help extract information from C/C++ sources, but it also briefly
describes how to document code using doxygen-defined tags.
Installing doxygen
You have two choices for acquiring doxygen. You can download it as a pre-compiled executable file, or you can check out sources
from the SVN repository and build it.
Listing 1 shows the latter process.
Listing 1. Install and build doxygen sources
bash-2.05$ svn co https://doxygen.svn.sourceforge.net/svnroot/doxygen/trunk doxygen-svn
bash-2.05$ cd doxygen-svn
bash-2.05$ ./configure --prefix=/home/user1/bin
bash-2.05$ make
bash-2.05$ make install
Note that the configure script is tailored to dump the compiled sources in /home/user1/bin (add this directory to the PATH
variable after the build), as not every UNIX® user has permission to write to the /usr folder. Also, you need the svn utility to check out sources.
Generating documentation using doxygen
To use doxygen to generate documentation of the sources, you perform three steps.
Generate the configuration file
At a shell prompt, type the command doxygen -g . This command generates a text-editable configuration file called
Doxyfile in the current directory. You can choose to override this file name, in which case the invocation should be
doxygen -g <user-specified file name> , as shown in
Listing 2 .
Listing 2. Generate the default configuration file
bash-2.05b$ doxygen -g
Configuration file 'Doxyfile' created.
Now edit the configuration file and enter
doxygen Doxyfile
to generate the documentation for your project
bash-2.05b$ ls Doxyfile
Doxyfile
Edit the configuration file
The configuration file is structured as <TAGNAME> = <VALUE> , similar to the Make file format. Here are the most important tags:
<OUTPUT_DIRECTORY> : You must provide a directory name here -- for example, /home/user1/documentation -- for
the directory in which the generated documentation files will reside. If you provide a nonexistent directory name, doxygen creates
the directory subject to proper user permissions.
<INPUT> : This tag creates a space-separated list of all the directories in which the C/C++ source
and header files reside whose documentation is to be generated. For example, consider the following snippet:
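(For instance, using the kernel sub-directories mentioned below; the exact directories are whatever your project actually uses.)
INPUT = /home/user1/project/kernel/vmm \
        /home/user1/project/kernel/asm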
In this case, doxygen would read in the C/C++ sources from these two
directories. If your project has a single source root directory with multiple sub-directories, specify that folder and make the
<RECURSIVE> tag Yes .
<FILE_PATTERNS> : By default, doxygen searches for files with typical C/C++ extensions such as
.c, .cc, .cpp, .h, and .hpp. This happens when the <FILE_PATTERNS> tag has no value associated with
it. If the sources use different naming conventions, update this tag accordingly. For example, if a project convention is to use
.c86 as a C file extension, add this to the <FILE_PATTERNS> tag.
<RECURSIVE> : Set this tag to Yes if the source hierarchy is nested and you need to generate documentation for
C/C++ files at all hierarchy levels. For example, consider the root-level source hierarchy /home/user1/project/kernel,
which has multiple sub-directories such as /home/user1/project/kernel/vmm and /home/user1/project/kernel/asm. If this tag is set
to Yes , doxygen recursively traverses the hierarchy, extracting information.
<EXTRACT_ALL> : This tag is an indicator to doxygen to extract documentation even when the individual classes
or functions are undocumented. You must set this tag to Yes .
<EXTRACT_PRIVATE> : Set this tag to Yes . Otherwise, private data members of a class would not be included in
the documentation.
<EXTRACT_STATIC> : Set this tag to Yes . Otherwise, static members of a file (both functions and variables) would
not be included in the documentation.
Listing 3. Sample doxyfile with user-provided tag values
OUTPUT_DIRECTORY = /home/user1/docs
EXTRACT_ALL = yes
EXTRACT_PRIVATE = yes
EXTRACT_STATIC = yes
INPUT = /home/user1/project/kernel
#Do not add anything here unless you need to. Doxygen already covers all
#common formats like .c/.cc/.cxx/.c++/.cpp/.inl/.h/.hpp
FILE_PATTERNS =
RECURSIVE = yes
Run doxygen
Run doxygen in the shell prompt as doxygen Doxyfile
(or with whatever file name you've chosen for the configuration file). Doxygen issues several messages before it finally produces
the documentation in Hypertext Markup Language (HTML) and Latex formats (the default). In the folder that the <OUTPUT_DIRECTORY>
tag specifies, two sub-folders named html and latex are created as part of the documentation-generation process.
Listing 4 shows a sample doxygen run log.
Listing 4. Sample log output from doxygen
Searching for include files...
Searching for example files...
Searching for images...
Searching for dot files...
Searching for files to exclude
Reading input files...
Reading and parsing tag files
Preprocessing /home/user1/project/kernel/kernel.h
Read 12489207 bytes
Parsing input...
Parsing file /project/user1/project/kernel/epico.cxx
Freeing input...
Building group list...
..
Generating docs for compound MemoryManager::ProcessSpec
Generating docs for namespace std
Generating group index...
Generating example index...
Generating file member index...
Generating namespace member index...
Generating page index...
Generating graph info page...
Generating search index...
Generating style sheet...
Documentation output formats
Doxygen can generate documentation in several output formats other than HTML. You can configure doxygen to produce documentation in the following formats:
UNIX man pages: Set the <GENERATE_MAN> tag to Yes . By default, a sub-folder named man is created within
the directory provided using <OUTPUT_DIRECTORY> , and the documentation is generated inside the folder. You must
add this folder to the MANPATH environment variable.
Rich Text Format (RTF): Set the <GENERATE_RTF> tag to Yes . Set the <RTF_OUTPUT> to wherever you
want the .rtf files to be generated -- by default, the documentation is within a sub-folder named rtf within the OUTPUT_DIRECTORY.
For browsing across documents, set the <RTF_HYPERLINKS> tag to Yes . If set, the generated .rtf files contain links
for cross-browsing.
Latex: By default, doxygen generates documentation in Latex and HTML formats. The <GENERATE_LATEX> tag is set
to Yes in the default Doxyfile. Also, the <LATEX_OUTPUT> tag is set to Latex, which implies that a folder named
latex would be generated inside OUTPUT_DIRECTORY, where the Latex files would reside.
Microsoft® Compiled HTML Help (CHM) format: Set the <GENERATE_HTMLHELP> tag to Yes . Because this format is not
supported on UNIX platforms, doxygen would only generate a file named index.hhp in the same folder in which it keeps the
HTML files. You must feed this file to the HTML help compiler for actual generation of the .chm file.
Extensible Markup Language (XML) format: Set the <GENERATE_XML> tag to Yes . (Note that the XML output is still
a work in progress for the doxygen team.)
Listing 5 provides an example of a Doxyfile
that generates documentation in all the formats discussed.
Listing 5. Doxyfile with tags for generating documentation in several formats
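(A representative set of values, pulling together the tags described above:)
OUTPUT_DIRECTORY  = /home/user1/docs
GENERATE_HTML     = YES
GENERATE_LATEX    = YES
LATEX_OUTPUT      = latex
GENERATE_MAN      = YES
GENERATE_RTF      = YES
RTF_OUTPUT        = rtf
RTF_HYPERLINKS    = YES
GENERATE_HTMLHELP = YES
GENERATE_XML      = YES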
Special tags in doxygen
Doxygen contains a couple of special tags.
Preprocessing C/C++ code
First, doxygen must preprocess C/C++ code to extract information.
By default, however, it does only partial preprocessing -- conditional compilation statements ( #if #endif ) are evaluated,
but macro expansions are not performed. Consider the code in
Listing 6 .
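(The code in question is along these lines: a conditional macro definition plus the variable that uses it.)
#define USE_ROPE

#ifdef USE_ROPE
#define STRING std::rope
#else
#define STRING std::string
#endif

static STRING name;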
With <USE_ROPE> defined in sources, generated documentation from doxygen looks like this:
Defines
#define USE_ROPE
#define STRING std::rope
Variables
static STRING name
Here, you see that doxygen has performed a conditional compilation but has not done a macro expansion
of STRING . The <ENABLE_PREPROCESSING> tag in the Doxyfile is set by
default to Yes . To allow for macro expansions, also set the <MACRO_EXPANSION> tag to Yes . Doing so produces
this output from doxygen:
Defines
#define USE_ROPE
#define STRING std::rope
Variables
static std::rope name
If you set the <ENABLE_PREPROCESSING> tag to No , the
output from doxygen for the earlier sources looks like this:
Variables
static STRING name
Note that the documentation now has no definitions, and it is not possible to deduce the type
of STRING . It thus makes sense always to set the <ENABLE_PREPROCESSING> tag to Yes . As part of the documentation,
it might be desirable to expand only specific macros. For such purposes, along setting As part of the documentation, it might be
desirable to expand only specific macros. For such purposes, along setting As part of the documentation, it might be desirable to
expand only specific macros. For such purposes, along setting <ENABLE_PREPROCESSING> and <MACRO_EXPANSION>
to Yes , you must set the <EXPAND_ONLY_PREDEF> tag to Yes (this tag is set to No by default) and provide the
macro details as part of the <PREDEFINED> or <EXPAND_AS_DEFINED> tag. Consider the code in
Listing 7 , where only the macro
CONTAINER would be expanded.
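A sketch of such code and the corresponding Doxyfile fragment might look like this (the particular macro definitions, types, and variable names are assumptions for illustration):

#include <list>
#include <string>

#define STRING    std::string   // should stay unexpanded in the documentation
#define CONTAINER std::list     // the only macro we want expanded

static STRING name;
static CONTAINER<int> gList;    // documented as std::list<int> once CONTAINER is expanded

# Doxyfile fragment: expand only the CONTAINER macro
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION      = YES
EXPAND_ONLY_PREDEF   = YES
EXPAND_AS_DEFINED    = CONTAINER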
In the resulting doxygen output, notice that only the CONTAINER macro has been expanded. Subject to <MACRO_EXPANSION> and <EXPAND_ONLY_PREDEF> both being set to Yes, the <EXPAND_AS_DEFINED> tag selectively expands only those macros listed on the right-hand side of the equality operator.
The final preprocessing tag to note is <PREDEFINED>. Much like the -D switch you use to pass preprocessor definitions to the g++ compiler, you use this tag to define macros. Consider the Doxyfile in Listing 9.
Listing 9. Doxyfile with macro expansion tags defined
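A sketch of such a Doxyfile fragment, using <PREDEFINED> much like compiler -D definitions (the macro names and values here are assumptions for illustration):

# Doxyfile fragment: define macros for doxygen's preprocessor, as g++ -D would
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION      = YES
EXPAND_ONLY_PREDEF   = YES
PREDEFINED           = USE_ROPE \
                       ARRAY_SIZE=16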
When used with the <PREDEFINED> tag, macros should be defined as <macro name>=<value>. If no value is provided -- as in the case of a simple #define -- just using <macro name>=<spaces> suffices. Separate multiple macro definitions by spaces or a backslash ( \ ).
Excluding specific files or directories from the documentation process
In the <EXCLUDE> tag in the Doxyfile, add the names of the files and directories for which documentation should not be generated, separated by spaces. This comes in handy when the root of the source hierarchy is provided and some sub-directories
must be skipped. For example, if the root of the hierarchy is src_root and you want to skip the examples/ and test/memoryleaks folders
from the documentation process, the Doxyfile should look like
Listing 10 .
Listing 10. Using the EXCLUDE tag as part of the Doxyfile
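A sketch of the relevant Doxyfile lines for this example (the INPUT line and the exact paths are assumptions based on the directory names mentioned above):

# Doxyfile fragment: document everything under src_root except two sub-directories
INPUT   = src_root
EXCLUDE = src_root/examples \
          src_root/test/memoryleaks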
Generating graphs and diagrams
By default, the Doxyfile has the <CLASS_DIAGRAMS> tag set to Yes. This tag is used for the generation of class hierarchy diagrams. The following tags in the Doxyfile deal with generating diagrams (a minimal configuration sketch follows the list):
<CLASS_DIAGRAMS> : This tag is set to Yes in the default Doxyfile. If the tag is set to No , diagrams for the inheritance
hierarchy are not generated.
<HAVE_DOT> : If this tag is set to Yes , doxygen uses the dot tool to generate more powerful graphs, such as
collaboration diagrams that help you understand individual class members and their data structures. Note that if this tag is set
to Yes , the effect of the <CLASS_DIAGRAMS> tag is nullified.
<CLASS_GRAPH> : If the <HAVE_DOT> tag is set to Yes along with this tag, the inheritance hierarchy
diagrams are generated using the dot tool and have a richer look and feel than what you'd get by using only
<CLASS_DIAGRAMS> .
<COLLABORATION_GRAPH> : If the <HAVE_DOT> tag is set to Yes along with this tag, doxygen generates
a collaboration diagram (apart from an inheritance diagram) that shows the individual class members (that is, containment) and
their inheritance hierarchy.
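Here is a minimal configuration sketch for the tags above (per the text, <HAVE_DOT> , <CLASS_GRAPH> , and <COLLABORATION_GRAPH> are all set to Yes for the example in Listing 11):

# Doxyfile fragment: richer diagrams via the dot tool (requires Graphviz)
HAVE_DOT            = YES
CLASS_DIAGRAMS      = YES   # ignored once HAVE_DOT is YES
CLASS_GRAPH         = YES
COLLABORATION_GRAPH = YES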
Listing 11 provides an example using
a few data structures. Note that the <HAVE_DOT> , <CLASS_GRAPH> , and <COLLABORATION_GRAPH>
tags are all set to Yes in the configuration file.
Listing 11. Interacting C classes and structures
struct D {
int d;
};
class A {
int a;
};
class B : public A {
int b;
};
class C : public B {
int c;
D d;
};
Figure 1. The Class inheritance graph and collaboration graph generated using the dot tool
Code documentation style
So far, you've used doxygen to extract information from code that is otherwise undocumented. However, doxygen also advocates a documentation style and syntax, which helps it generate more detailed documentation. This section discusses some of the more common tags doxygen advocates using as part of C/C++ code. For further details, see the related resources.
Every code item has two kinds of descriptions: one brief and one detailed. Brief descriptions are typically single lines. Functions and class methods have a third kind of description, known as the in-body description, which is a concatenation of all comment blocks found within the function body. Some of the more common doxygen tags and styles of commenting are:
Brief description: Use a single-line C++ comment, or use the <\brief> tag.
Detailed description: Use JavaDoc-style commenting /** text */ (note the two asterisks [ * ] in
the beginning) or the Qt-style /*! text */ .
In-body description: Individual C++ elements like classes, structures, unions, and namespaces have their own
tags, such as <\class> , <\struct> , <\union> , and <\namespace> .
To document global functions, variables, and enum types, the corresponding file must first be documented using the <\file> tag.
Listing 12 provides an example that discusses
item 4 with a function tag ( <\fn> ), a function argument tag ( <\param> ), a variable name tag (
<\var> ), a tag for #define ( <\def> ), and a tag to indicate some specific issues related to a
code snippet ( <\warning> ).
Listing 12. Typical doxygen tags and their use
/*! \file globaldecls.h
    \brief Place to look for global variables, enums, functions
           and macro definitions
*/

#include <cstdio>   /* for FILE */

/** \var const int fileSize
    \brief Default size of the file on disk
*/
const int fileSize = 1048576;

/** \def SHIFT(value, length)
    \brief Left shift value by length in bits
*/
#define SHIFT(value, length) ((value) << (length))

/** \fn bool check_for_io_errors(FILE* fp)
    \brief Checks if a file is corrupted or not
    \param fp Pointer to an already opened file
    \warning Not thread safe!
*/
bool check_for_io_errors(FILE* fp);
Here's how the generated documentation looks:
Defines
#define SHIFT(value, length) ((value) << (length))
Left shift value by length in bits.
Functions
bool check_for_io_errors (FILE *fp)
Checks if a file is corrupted or not.
Variables
const int fileSize = 1048576;
Function Documentation
bool check_for_io_errors (FILE* fp)
Checks if a file is corrupted or not.
Parameters
fp: Pointer to an already opened file
Warning
Not thread safe!
Conclusion
This article discusses how doxygen can extract a lot of relevant information from legacy C/C++ code. If the code
is documented using doxygen tags, doxygen generates output in an easy-to-read format. Put to good use, doxygen is a strong candidate
for any developer's arsenal of tools for maintaining and managing legacy systems.
Computer programmer Steve Relles has the poop on what to do when your job is outsourced to
India. Relles has spent the past year making his living scooping up dog droppings as the
"Delmar Dog Butler." "My parents paid for me to get a (degree) in math and now I am a pooper
scooper," he says. "I can clean four to five yards in an hour if they are close together." Relles, who
lost his computer programming job about three years ago ... has over 100 clients who pay $10
each for a once-a-week cleaning of their yard.
Relles competes for business with another local company called "Scoopy Do." Similar
outfits have sprung up across America, including Petbutler.net, which operates in Ohio.
Relles says his business is growing by word of mouth and that most of his clients are women
who either don't have the time or desire to pick up the droppings. "St. Bernard (dogs) are my
favorite customers since they poop in large piles which are easy to find," Relles said. "It
sure beats computer programming because it's flexible, and I get to be outside."
Eugene Miya, a friend/colleague. Sometimes driver. Other shared experiences.
Updated Mar 22, 2017
He mostly writes in C today.
I can assure you he at least knows about Python. Guido's office at Dropbox is 1 -- 2 blocks
by a backdoor gate from Don's house.
I would tend to doubt that he would use R (I've used S before as one of my stat packages).
Don would probably write something for himself.
Don is not big on functional languages, so I would doubt either Haskell (sorry Paul) or LISP
(but McCarthy lived just around the corner from Don; I used to drive him to meetings; actually,
I've driven all 3 of us to meetings, and he got his wife an electric version of my car based on
riding in my car (score one for friend's choices)). He does use emacs and he does write MLISP
macros, but he believes in being closer to the hardware which is why he sticks with MMIX (and
MIX) in his books.
Don't discount him learning the machine language of a given architecture.
I'm having dinner with Don and Jill and a dozen other mutual friends in 3 weeks or so (our
quarterly dinner). I can ask him then, if I remember (either a calendar entry or at job). I try
not to bother him with things like this. Don is well connected to the hacker community.
Don's name was brought up at an undergrad architecture seminar today, but Don was not in the
audience (an amazing audience; I took a photo for the collection of architects and other
computer scientists in the audience (Hennessy and Patterson were talking)). I came close to
biking by his house on my way back home.
We do have a mutual friend (actually, I introduced Don to my biology friend at Don's
request) who arrives next week, and Don is my wine drinking proxy. So there is a chance I may
see him sooner.
Don Knuth would want to use something that’s low level, because details matter. So no Haskell; LISP is borderline. Perhaps if the Lisp machine had ever become a thing.
He’d want something with well-defined and simple semantics, so definitely no R. Python
also contains quite a few strange ad hoc rules, especially in its OO and lambda features. Yes
Python is easy to learn and it looks pretty, but Don doesn’t care about superficialities
like that. He’d want a language whose version number is converging to a mathematical
constant, which is also not in favor of R or Python.
What remains is C. Out of the five languages listed, my guess is Don would pick that one.
But actually, his own old choice of Pascal suits him even better. I don’t think any
languages have been invented since TeX was written that score higher on the Knuthometer than Knuth’s own original pick.
And yes, I feel that this is actually a conclusion that bears some thinking about.
Dan Allen, I've been programming for 34 years now. Still not finished.
Answered Mar 9, 2017
In The Art of Computer Programming I think he'd do exactly what he did. He'd invent his own
architecture and implement programs in an assembly language targeting that theoretical
machine.
He did that for a reason: he wanted to reveal the details of algorithms at the lowest
level of detail, which is the machine level.
He didn't use any of the languages available at the time, and I don't see why that would suit his purpose now. All the languages
above are too high-level for his purposes.
"... The lawsuit also aggressively contests Boeing's spin that competent pilots could have prevented the Lion Air and Ethiopian Air crashes: ..."
"... When asked why Boeing did not alert pilots to the existence of the MCAS, Boeing responded that the company decided against disclosing more details due to concerns about "inundate[ing] average pilots with too much information -- and significantly more technical data -- than [they] needed or could realistically digest." ..."
"... The filing has a detailed explanation of why the addition of heavier, bigger LEAP1-B engines to the 737 airframe made the plane less stable, changed how it handled, and increased the risk of catastrophic stall. It also describes at length how Boeing ignored warning signs during the design and development process, and misrepresented the 737 Max as essentially the same as older 737s to the FAA, potential buyers, and pilots. It also has juicy bits presented in earlier media accounts but bear repeating, like: ..."
"... Then, on November 7, 2018, the FAA issued an "Emergency Airworthiness Directive (AD) 2018-23-51," warning that an unsafe condition likely could exist or develop on 737 MAX aircraft. ..."
"... Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and relied upon in earlier generations of 737 aircraft. ..."
"... And making the point that to turn off MCAS all you had to do was flip two switches behind everything else on the center condole. Not exactly true, normally those switches were there to shut off power to electrically assisted trim. Ah, it one thing to shut off MCAS it's a whole other thing to shut off power to the planes trim, especially in high speed ✓ and the plane noise up ✓, and not much altitude ✓. ..."
"... Classic addiction behavior. Boeing has a major behavioral problem, the repetitive need for and irrational insistence on profit above safety all else , that is glaringly obvious to everyone except Boeing. ..."
"... In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots " ..."
"... This "MCAS" was always hidden from pilots? The military implemented checks on MCAS to maintain a level of pilot control. The commercial airlines did not. Commercial airlines were in thrall of every little feature that they felt would eliminate the need for pilots at all. Fell right into the automation crapification of everything. ..."
At first blush, the suit filed in Dallas by the Southwest Airlines Pilots Association (SwAPA) against Boeing may seem like a family
feud. SWAPA is seeking an estimated $115 million for lost pilots' pay as a result of the grounding of the 34 Boeing 737 Max planes
that Southwest owns and the additional 20 that Southwest had planned to add to its fleet by year end 2019. Recall that Southwest
was the largest buyer of the 737 Max, followed by American Airlines. However, the damning accusations made by the pilots' union,
meaning, erm, pilots, are likely to cause Boeing not just more public relations headaches, but will also give grist to suits by crash
victims.
However, one reason that the Max is a sore point with the union was that it was a key leverage point in 2016 contract negotiations:
And Boeing's assurances that the 737 Max was for all practical purposes just a newer 737 factored into the pilots' bargaining
stance. Accordingly, one of the causes of action is tortious interference, that Boeing interfered in the contract negotiations to
the benefit of Southwest. The filing describes at length how Boeing and Southwest were highly motivated not to have the contract
dispute drag on and set back the launch of the 737 Max at Southwest, its showcase buyer. The big point that the suit makes is the
plane was unsafe and the pilots never would have agreed to fly it had they known what they know now.
We've embedded the complaint at the end of the post. It's colorful and does a fine job of recapping the sorry history of the development
of the airplane. It has damning passages like:
Boeing concealed the fact that the 737 MAX aircraft was not airworthy because, inter alia, it incorporated a single-point failure
condition -- a software/flight control logic called the Maneuvering Characteristics Augmentation System ("MCAS") -- that, if fed
erroneous data from a single angle-of-attack sensor, would command the aircraft nose-down and into an unrecoverable dive without
pilot input or knowledge.
The lawsuit also aggressively contests Boeing's spin that competent pilots could have prevented the Lion Air and Ethiopian Air
crashes:
Had SWAPA known the truth about the 737 MAX aircraft in 2016, it never would have approved the inclusion of the 737 MAX aircraft
as a term in its CBA [collective bargaining agreement], and agreed to operate the aircraft for Southwest. Worse still, had SWAPA
known the truth about the 737 MAX aircraft, it would have demanded that Boeing rectify the aircraft's fatal flaws before agreeing
to include the aircraft in its CBA, and to provide its pilots, and all pilots, with the necessary information and training needed
to respond to the circumstances that the Lion Air Flight 610 and Ethiopian Airlines Flight 302 pilots encountered nearly three
years later.
And (boldface original):
Boeing Set SWAPA Pilots Up to Fail
As SWAPA President Jon Weaks, publicly stated, SWAPA pilots "were kept in the dark" by Boeing.
Boeing did not tell SWAPA pilots that MCAS existed and there was no description or mention of MCAS in the Boeing Flight Crew
Operations Manual.
There was therefore no way for commercial airline pilots, including SWAPA pilots, to know that MCAS would work in the background
to override pilot inputs.
There was no way for them to know that MCAS drew on only one of two angle of attack sensors on the aircraft.
And there was no way for them to know of the terrifying consequences that would follow from a malfunction.
When asked why Boeing did not alert pilots to the existence of the MCAS, Boeing responded that the company decided against
disclosing more details due to concerns about "inundate[ing] average pilots with too much information -- and significantly more
technical data -- than [they] needed or could realistically digest."
SWAPA's pilots, like their counterparts all over the world, were set up for failure
The filing has a detailed explanation of why the addition of heavier, bigger LEAP1-B engines to the 737 airframe made the plane
less stable, changed how it handled, and increased the risk of catastrophic stall. It also describes at length how Boeing ignored
warning signs during the design and development process, and misrepresented the 737 Max as essentially the same as older 737s to
the FAA, potential buyers, and pilots. It also has juicy bits presented in earlier media accounts but bear repeating, like:
By March 2016, Boeing settled on a revision of the MCAS flight control logic.
However, Boeing chose to omit key safeguards that had previously been included in earlier iterations of MCAS used on the Boeing
KC-46A Pegasus, a military tanker derivative of the Boeing 767 aircraft.
The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with
limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or
cause a pilot to lose control. Those familiar with the tanker's design explained that these checks were incorporated because "[y]ou
don't want the solution to be worse than the initial problem."
The 737 MAX version of MCAS abandoned the safeguards previously relied upon. As discussed below, the 737 MAX MCAS had greater
control authority than its predecessor, activated repeatedly upon activation, and relied on input from just one of the plane's
two sensors that measure the angle of the plane's nose.
In other words, Boeing can't credibly say that it didn't know better.
Here is one of the sections describing Boeing's cover-ups:
Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions
to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS
itself.
In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so
as to further hide its existence from the public and pilots.
We urge you to read the complaint in full, since it contains juicy insider details, like the significance of Southwest being Boeing's
737 Max "launch partner" and what that entailed in practice, plus recounting dates and names of Boeing personnel who met with SWAPA
pilots and made misrepresentations about the aircraft.
Even though Southwest Airlines is negotiating a settlement with Boeing over losses resulting from the grounding of the 737 Max
and the airline has promised to compensate the pilots, the pilots' union at a minimum apparently feels the need to put the heat on
Boeing directly. After all, the union could withdraw the complaint if Southwest were to offer satisfactory compensation for the pilots'
lost income. And pilots have incentives not to raise safety concerns about the planes they fly. Don't want to spook the horses, after
all.
But Southwest pilots are not only the ones most harmed by Boeing's debacle but they are arguably less exposed to the downside
of bad press about the 737 Max. It's business fliers who are most sensitive to the risks of the 737 Max, due to seeing the story
regularly covered in the business press plus due to often being road warriors. Even though corporate customers account for only 12%
of airline customers, they represent an estimated 75% of profits.
Southwest customers don't pay up for front of the bus seats. And many of them presumably value the combination of cheap travel,
point to point routes between cities underserved by the majors, and close-in airports, which cut travel times. In other words, that
combination of features will make it hard for business travelers who use Southwest regularly to give the airline up, even if the
737 Max gives them the willies. By contrast, premium seat passengers on American or United might find it not all that costly, in
terms of convenience and ticket cost (if they are budget sensitive), to fly 737-Max-free Delta until those passengers regain confidence
in the grounded plane.
Note that American Airlines' pilot union, when asked about the Southwest claim, said that it also believes its pilots deserve
to be compensated for lost flying time, but they plan to obtain it through American Airlines.
If Boeing were smart, it would settle this suit quickly, but so far, Boeing has relied on bluster and denial. So your guess is
as good as mine as to how long the legal arm-wrestling goes on.
Update 5:30 AM EDT : One important point that I neglected to include is that the filing also recounts, in gory detail, how Boeing
went into "Blame the pilots" mode after the Lion Air crash, insisting the cause was pilot error and would therefore not happen again.
Boeing made that claim on a call to all operators, including SWAPA, and then three days later in a meeting with SWAPA.
However, Boeing's actions were inconsistent with this claim. From the filing:
Then, on November 7, 2018, the FAA issued an "Emergency Airworthiness Directive (AD) 2018-23-51," warning that an unsafe condition
likely could exist or develop on 737 MAX aircraft.
Relying on Boeing's description of the problem, the AD directed that in the event of un-commanded nose-down stabilizer trim
such as what happened during the Lion Air crash, the flight crew should comply with the Runaway Stabilizer procedure in the Operating
Procedures of the 737 MAX manual.
But the AD did not provide a complete description of MCAS or the problem in 737 MAX aircraft that led to the Lion Air crash,
and would lead to another crash and the 737 MAX's grounding just months later.
An MCAS failure is not like a runaway stabilizer. A runaway stabilizer has continuous un-commanded movement of the tail, whereas
MCAS is not continuous and pilots (theoretically) can counter the nose-down movement, after which MCAS would move the aircraft
tail down again.
Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and
relied upon in earlier generations of 737 aircraft.
Even after the Lion Air crash, Boeing's description of MCAS was still insufficient to correct its lack of disclosure, as
demonstrated by a second MCAS-caused crash.
We hoisted this detail because insiders were spouting in our comments section, presumably based on Boeing's patter, that the Lion
Air pilots were clearly incompetent, had they only executed the well-known "runaway stabilizer," all would have been fine. Needless
to say, this assertion has been shown to be incorrect.
Excellent, by any standard. Which does remind me of the NYT magazine story (William Langewiesche,
published Sept. 18, 2019) making the claim that basically the pilots who crashed their planes weren't real "Airmen".
And making the point that to turn off MCAS all you had to do was flip two switches behind everything else on the center console.
Not exactly true, normally those switches were there to shut off power to electrically assisted trim. Ah, it's one thing to shut off MCAS;
it's a whole other thing to shut off power to the plane's trim, especially in high speed ✓ and the plane nose up ✓, and not much altitude
✓.
And especially if you as a pilot didn't know MCAS was there in the first place. This sort of engineering by Boeing is criminal.
And the lying. To everyone. Oh, lest we all forget, the processing power of the in-flight computer is that of an Intel 286. There
are times I just want to be beamed back to the home planet. Where we care for each other.
One should also point out that Langewiesche said that Boeing made disastrous mistakes with the MCAS and that the very future
of the Max is cloudy. His article was useful both for greater detail about what happened and for offering some pushback to the
idea that the pilots had nothing to do with the accidents.
As for the above, it was obvious from the first Seattle Times stories that these two events and the grounding were going to
be a lawsuit magnet. But some of us think Boeing deserves at least a little bit of a defense because their side has been totally
silent -- either for legal reasons or CYA reasons on the part of their board and bad management.
Classic addiction behavior. Boeing has a major behavioral problem, the repetitive need for and irrational insistence on profit
above safety all else , that is glaringly obvious to everyone except Boeing.
"The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with
limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or
cause a pilot to lose control "
"Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions
to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS
itself.
In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual
so as to further hide its existence from the public and pilots "
This "MCAS" was always hidden from pilots? The military implemented checks on MCAS to maintain a level of pilot control. The commercial airlines did not. Commercial
airlines were in thrall of every little feature that they felt would eliminate the need for pilots at all. Fell right into the
automation crapification of everything.
"... Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to offer me? ..."
"... So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn, but it gets so overwhelming trying to find information because everything I find just assumes you're a software developer with all this prerequisite knowledge. Additionally, how the hell do you find the time to learn all of this? It seems like new DevOps software or platforms or whatever you call them spin up every single month. I'm already in the middle of trying to learn JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in addition to networking concepts in general), and AV design stuff (like Crestron programming). ..."
What the hell is DevOps? Every couple months I find myself trying to look into it as all I
ever hear and see about is DevOps being the way forward. But each time I research it I can only
find things talking about streamlining software updates and quality assurance and yada yada
yada. It seems like DevOps only applies to companies that make software as a product. How does
that affect me as a sysadmin for higher education? My "company's" product isn't software.
Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to
offer me? Again, when I try to research them a majority of what I find just links back to
software development.
To give a rough idea of what I deal with, below is a list of my three main
responsibilities.
macOS/iOS Systems Administration (I'm the only sysadmin that does this for around 150+
machines)
Network Administration (I just started with this a couple months ago and I'm slowly
learning about our infrastructure and network administration in general from our IT
director. We have several buildings spread across our entire campus with a mixture of
Juniper, Dell, and Brocade equipment.)
AV Systems Design and Programming (I'm the only person who does anything related to
video conferencing, meeting room equipment, presentation systems, digital signage, etc. for
7 buildings.)
So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn,
but it gets so overwhelming trying to find information because everything I find just assumes
you're a software developer with all this prerequisite knowledge. Additionally, how the hell do
you find the time to learn all of this? It seems like new DevOps software or platforms or
whatever you call them spin up every single month. I'm already in the middle of trying to learn
JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in
addition to networking concepts in general), and AV design stuff (like Crestron programming).
I've been working at the same job for 5 years and I feel like I'm being left in the dust by the
entire rest of the industry. I'm being pulled in so many different directions that I feel like
it's impossible for me to ever get another job. At the same time, I can't specialize in
anything because I have so many different unrelated areas I'm supposed to be doing work in.
And this is what I go through/ask myself every few months I try to research and learn
DevOps. This is mainly a rant, but I am more than open to any and all advice anyone is willing
to offer. Thanks in advance.
There are a lot of tools used on a daily basis for DevOps that can make your life much easier, but apparently that's not the case for you. When you manage infra as code, you're using DevOps.
There's a lot of space for operations guys like you (and me), so look to DevOps as an alternative source of knowledge, just to stay tuned to the trends of the industry and improve your skills.
For higher education, this is useful for managing large projects and looking for improvement during the development of the product/service itself. But again, that's not the case for you. If you intend to switch to another position, you may try to search for a certification program that suits your needs.
"... In the programming world, the term silver bullet refers to a technology or methodology that is touted as the ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more impressive, it will do all of this without any effort on your part! ..."
"... Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and architecture. ..."
"... OO will insure the success of your project: An object-oriented approach to software development does not guarantee the automatic success of a project. A developer cannot ignore the importance of sound design and architecture. Only careful analysis and a complete understanding of the problem will make the project succeed. A successful project will utilize sound techniques, competent programmers, sound processes and solid project management. ..."
"... OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger and slower than programs written using other techniques. ..."
"... OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in protecting you from making a mistake. ..."
"Hooked on Objects" is dedicated to providing readers with insight into object-oriented technologies. In our first
few articles, we introduced the three tenets of object-oriented programming: encapsulation, inheritance and
polymorphism. We then covered software process and design patterns. We even got our hands dirty and dissected the
Java class.
Each of our previous articles had a common thread. We have written about the strengths and benefits of
the object paradigm and highlighted the advantages the object approach brings to the development effort. However, we
do not want to give anyone a false sense that object-oriented techniques are always the perfect answer.
Object-oriented techniques are not the magic "silver bullets" of programming.
In the programming world, the term silver bullet refers to a technology or methodology that is touted as the
ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically
make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more
impressive, it will do all of this without any effort on your part!
Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never
will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and
architecture.
If anything, using OO makes design and architecture more important because without a clear, well-planned design,
OO will fail almost every time. Spaghetti code (that which is written without a coherent structure) spells trouble
for procedural programming, and weak architecture and design can mean the death of an OO project. A poorly planned
system will fail to achieve the promises of OO: increased productivity, reusability, scalability and easier
maintenance.
Some critics claim OO has not lived up to its advance billing, while others claim its techniques are flawed. OO
isn't flawed, but some of the hype has given OO developers and managers a false sense of security.
Successful OO requires careful analysis and design. Our previous articles have stressed the positive attributes of
OO. This time we'll explore some of the common fallacies of this promising technology and some of the potential
pitfalls.
Fallacies of OO
It is important to have realistic expectations before choosing to use object-oriented technologies. Do not allow
these common fallacies to mislead you.
OO will insure the success of your project: An object-oriented approach to software development does not guarantee
the automatic success of a project. A developer cannot ignore the importance of sound design and architecture. Only
careful analysis and a complete understanding of the problem will make the project succeed. A successful project will
utilize sound techniques, competent programmers, sound processes and solid project management.
OO makes you a better programmer: OO does not make a programmer better. Only experience can do that. A coder might
know all of the OO lingo and syntactical tricks, but if he or she doesn't know when and where to employ these
features, the resulting code will be error-prone and difficult for others to maintain and reuse.
OO-derived software is superior to other forms of software: OO techniques do not make good software; features make
good software. You can use every OO trick in the book, but if the application lacks the features and functionality
users need, no one will use it.
OO techniques mean you don't need to worry about business plans: Before jumping onto the object bandwagon, be
certain to conduct a full business analysis. Don't go in without careful consideration or on the faith of marketing
hype. It is critical to understand the costs as well as the benefits of object-oriented development. If you plan for
only one or two internal development projects, you will see few of the benefits of reuse. You might be able to use
preexisting object-oriented technologies, but rolling your own will not be cost effective.
OO will cure your corporate ills: OO will not solve morale and other corporate problems. If your company suffers
from motivational or morale problems, fix those with other solutions. An OO Band-Aid will only worsen an already
unfortunate situation.
OO Pitfalls
Life is full of compromise and nothing comes without cost. OO is no exception. Before choosing to employ object
technologies it is imperative to understand this. When used properly, OO has many benefits; when used improperly,
however, the results can be disastrous.
OO technologies take time to learn: Don't expect to become an OO expert overnight. Good OO takes time and effort
to learn. Like all technologies, change is the only constant. If you do not continue to enhance and strengthen your
skills, you will fall behind.
OO benefits might not pay off in the short term: Because of the long learning curve and initial extra development
costs, the benefits of increased productivity and reuse might take time to materialize. Don't forget this or you
might be disappointed in your initial OO results.
OO technologies might not fit your corporate culture: The successful application of OO requires that your
development team feels involved. If developers are frequently shifted, they will struggle to deliver reusable
objects. There's less incentive to deliver truly robust, reusable code if you are not required to live with your work
or if you'll never reap the benefits of it.
OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger
and slower than programs written using other techniques. This isn't as much of a problem today. Memory prices are
dropping every day. CPUs continue to provide better performance and compilers and virtual machines continue to
improve. The small efficiency that you trade for increased productivity and reuse should be well worth it. However,
if you're developing an application that tracks millions of data points in real time, OO might not be the answer for
you.
OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every
situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to
design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO
technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in
protecting you from making a mistake.
What do you need to do to avoid these pitfalls and fallacies? The answer is to keep expectations realistic. Beware
of the hype. Use an OO approach only when appropriate.
Programmers should not feel compelled to use every OO trick that the implementation language offers. It is wise to
use only the ones that make sense. When used without forethought, object-oriented techniques could cause more harm
than good.
Of course, there is one other thing that you should always do to improve your OO: Don't miss a single installment of
"Hooked on Objects."
David Hoag is vice president-development and chief object guru for ObjectWave, a Chicago-based
object-oriented software engineering firm. Anthony Sintes is a Sun Certified Java Developer and team member
specializing in telecommunications consulting for ObjectWave. Contact them at [email protected] or visit their Web
site at www.objectwave.com.
This isn't a general discussion of OO pitfalls and conceptual weaknesses, but a discussion of how conventional 'textbook' OO
design approaches can lead to inefficient use of cache & RAM, especially on consoles or other hardware-constrained environments.
But it's still good.
Props to the
artist who actually found a way to visualize most of this meaningless corporate lingo. I'm sure it wasn't easy to come up
with everything.
He missed "sea
change" and "vertical integration". Otherwise, that was pretty much all of the useless corporate meetings I've ever attended
distilled down to 4.5 minutes. Oh, and you're getting laid off and/or no raises this year.
For those too
young to get the joke, this is a style parody of Crosby, Stills & Nash, a folk-pop super-group from the 60's. They were
hippies who spoke out against corporate interests, war, and politics. Al took their sound (flawlessly), and wrote a song in
corporate jargon (the exact opposite of everything CSN was about). It's really brilliant, to those who get the joke.
"The company has
undergone organization optimization due to our strategy modification, which includes empowering the support to the
operation in various global markets" - Red 5 on why they laid off 40 people suddenly. Weird Al would be proud.
In his big long
career this has to be one of the best songs Weird Al's ever done. Very ambitious rendering of one of the most
ambitious songs in pop music history.
This should be
played before corporate meetings to shame anyone who's about to get up and do the usual corporate presentation. Genius
as usual, Mr. Yankovic!
There's a quote
it goes something like: A politician is someone who speaks for hours while saying nothing at all. And this is exactly
it and it's brilliant.
From the current Gamestop earnings call: "address the challenges that have impacted our results, and execute both deliberately and with
urgency. We believe we will transform the business and shape the strategy for the GameStop of the future. This will be
driven by our go-forward leadership team that is now in place, a multi-year transformation effort underway, a commitment
to focusing on the core elements of our business that are meaningful to our future, and a disciplined approach to
capital allocation." Yeah, Weird Al totally nailed it.
Excuse me, but
"proactive" and "paradigm"? Aren't these just buzzwords that dumb people use to sound important? Not that I'm accusing you
of anything like that. [pause] I'm fired, aren't I?~George Meyer
I watch this at
least once a day to take the edge of my job search whenever I have to decipher fifteen daily want-ads claiming to
seek "Hospitality Ambassadors", "Customer Satisfaction Specialists", "Brand Representatives" and "Team Commitment
Associates" eventually to discover they want someone to run a cash register and sweep up.
The irony is a song about Corporate Speak in the style of tie-dyed, hippie-dippy CSN (+/- Y) four-part harmony. Suite Judy Blue Eyes
via Almost Cut My Hair filtered through Carry On. "Fantastic" middle finger to Wall Street, The City, and the monstrous
excesses of Unbridled Capitalism.
If you
understand who and what he's taking a jab at, this is one of the greatest songs and videos of all time. So spot on.
This and Frank's 2000 inch tv are my favorite songs of yours. Thanks Al!
hahaha,
"Client-Centric Solutions...!" (or in my case at the time, 'Customer-Centric' solutions) now THAT's a term i haven't
heard/read/seen in years, since last being an office drone. =D
When I interact
with this musical visual medium I am motivated to conceptualize how the English language can be better compartmentalized
to synergize with the client-centric requirements of the microcosmic community focussed social entities that I
administrate on social media while interfacing energetically about the inherent shortcomings of the current
socio-economic and geo-political order in which we co-habitate. Now does this tedium flow in an effortless stream of
coherent verbalisations capable of comprehension?
When I bought
"Mandatory Fun", put it in my car, and first heard this song, I busted a gut, laughing so hard I nearly crashed.
All the corporate buzzwords!
(except "pivot", apparently).
"I am resolute in my ability to elevate this collaborative, forward-thinking team into the
revenue powerhouse that I believe it can be. We will transition into a DevOps team specialising
in migrating our existing infrastructure entirely to code and go completely serverless!" - CFO
that outsources IT
"We will utilize Artificial Intelligence, machine learning, Cloud technologies, python, data
science and blockchain to achieve business value"
They say, No more IT or system or server admins needed very soon...
Sick and tired of listening to these so-called architects and full-stack developers who watch a bunch of videos on YouTube and Pluralsight
and find articles online. They go around the workplace throwing around words like containers, devops, NoOps, Azure, infrastructure as code, serverless,
etc.; they don't understand half of the stuff. I do some of the devops tasks in our company, and I understand what it takes to implement
and manage these technologies. Every meeting is infested with these A-holes.
Your best defense against these is to come up with non-sarcastic and quality questions to ask these people during the meeting,
and watch them not have a clue how to answer them.
For example, a friend of mine worked at a smallish company where some manager really wanted to move more of their stuff into Azure,
including the AD and Exchange environments. But they had common problems with their internet connection due to limited bandwidth and
their not wanting to spend more. So during a meeting my friend asked a question something like this:
"You said on this slide that moving the AD environment and Exchange environment to Azure will save us money. Did you take into
account that we will need to increase our internet speed by a factor of at least 4 in order to accommodate the increase in traffic
going out to the Azure cloud? "
Of course, they hadn't. So the CEO asked my friend if he had the numbers. He had already done his homework, and it was
a significant increase in cost every month; taking into account the cost of Azure plus the increase in bandwidth wiped away
the manager's savings.
I know this won't work for everyone. Sometimes there are real savings in moving things to the cloud. But oftentimes there really
aren't. Calling the uneducated people out on what they see as facts can be rewarding.
My previous boss was that kind of a guy. He waited till other people were done throwing their weight around in a meeting and
then calmly and politely dismantled them with facts.
No amount of corporate pressuring or bitching could ever stand up to that.
I've been trying to do this. The problem is that everyone keeps talking all the way to the end of the meeting, leaving no room for
rational facts.
This is my approach. I don't yell or raise my voice, I just wait. Then I start asking questions that they generally cannot
answer and slowly take them apart. I don't have to be loud to get my point across.
This tactic is called "the box game". Just continuously ask them logical questions that can't be answered with their stupidity.
(Box them in), let them be their own argument against themselves.
Most DevOps people I've met are devs trying to bypass the sysadmins. This, and the Cloud fad, are
burning serious amounts of money at companies managed by stupid people who get easily impressed by PR stunts and shiny conferences.
Then, when everything goes to shit, they call the infrastructure team to fix it...
In 1976, after eight years in the Soviet education system, I graduated the equivalent of
middle school. Afterwards, I could choose to go for two more years, which would earn me a high
school diploma, and then do three years of college, which would get me a diploma in "higher
education."
Or, I could go for the equivalent of a blend of an associate and bachelor's degree, with an
emphasis on vocational skills. This option took four years.
I went with the second option, mainly because it was common knowledge in the Soviet Union at
the time that there was a restrictive quota for Jews applying to the five-year college program,
which almost certainly meant that I, as a Jew, wouldn't get in. I didn't want to risk it.
My best friend at the time proposed that we take the entrance exams to attend Nizhniy Novgorod Industrial
and Economic College. (At that time, it was known as Gorky Industrial and Economic College -
the city, originally named for famous poet Maxim Gorky, was renamed in the 1990s after the fall
of the Soviet Union.)
They had a program called "Programming for high-speed computing machines." Since I got good
grades in math and geometry, this looked like I'd be able to get in. It also didn't hurt that
my aunt, a very good seamstress and dressmaker, sewed several dresses specifically for the
school's chief accountant, who was involved in enrollment decisions. So I got in.
What's interesting is that from the almost sixty students accepted into the program that
year, all of them were female. It was the same for the class before us, and for the class after
us. Later, after I started working in the Soviet Union, and even in the United States in the early
1990s, I understood that this was a trend. I'd say that 70% of the programmers I encountered in
the IT industry were female. The males were mostly in middle and upper management.
My mom's code notebook, with her name and "Macroassembler" on it.
We started what would be considered our major concentration courses during the second year.
Along with programming, there were a lot of related classes: "Computing Appliances and Their
Organization", "Electro Technology", "Algorithms of Numerical Methods," and a lot of math that
included integral and differential calculations. But programming was the main course, and we
spent the most hours on it.
Notes on programming - Heading is "Directives (Commands) for job control implementation",
covering the ABRT command
In the programming classes, we studied programming the "dry" way: using paper, pencil and
eraser. In fact, this method was so important that students who forgot their pencils were sent
to the main office to ask for one. It was extremely embarrassing, and we learned quickly not to
forget them.
Paper and pencil code for opening a file in Macroassembler
Every semester we would take a new programming language to learn. We learned Algol,
Fortran, and PL/1. We would learn from the simplest commands to loop organization, function and
sub-function programming, multi-dimensional array processing, and more.
After mastering the basics, we would take exams, which were logical computing tasks to code
in this specific language.
At some point midway through the program, our school bought the very first physical computer
I ever saw: the Nairi. The programming language was AP, which was one of the few computer
languages with Russian keywords.
Then, we started taking labs. It was a terrifying experience. You had to type your program into an
input device, which was basically a typewriter connected to a huge computer. The programs
looked like step-by-step instructions, and if you made even one mistake you had to start all
over again. To code a solution for a linear algebraic equation would usually take 10 - 12
steps.
Program output in Macroassembler ("I was so creative with my program names," jokes my
mom.)
Our teacher used to go for one week of "practice work and curriculum development" to a
serious IT shop with more advanced machines every once in a while. At that time, the heavy
computing power was in the ES Series, produced by Soviet bloc countries.
These machines were clones of the IBM 360. They worked with punch cards and punch tapes. She
would bring back tons of papers with printed code and debugging comments for us to learn in
classroom.
After two and a half years of rigorous study using pencil and paper, we had six months of
practice. Most of the time it was at one of several scientific research institutes that existed in
Nizhny Novgorod. I went to an institute that was oriented towards the auto industry.
I graduated with the title "Programmer-Technician". Most of the girls from my class took computer
operator jobs, but I did not want to settle. I continued my education at Lobachevsky State University, named after Lobachevsky, the famous Russian
mathematician. Since I was taking evening classes, it took me six years to graduate.
I wrote a lot about my first college because now looking back I realize that this is where I
really learned to code and developed my programming skills. At the State University, we took a
huge amount of unnecessary courses. The only useful one was professional English. After this
course I could read technical documentation in English without issues.
My final university degree was equivalent to a US master's in Computer Science. The actual
major was called "Computational Mathematics and Cybernetics".
In total I worked for about seven years in the USSR as a computer programmer, from 1982 to
1989. Technology changed rapidly, even there. I started out writing programs on special blanks
for punch card machines using a Russian version of Assembler. To maximize performance, we would
leave stacks of our punch cards for nightly processing.
After a couple of years, we got terminals with keyboards. At first they were installed in the same
room where the main computer was. Initially, there were not enough terminals and "machine time" was
evenly divided between all of the programmers during the day.
Then, the terminals started to appear in the same room where the programmers were. The displays
were small, with a black background and a green font. We were now working in the terminal.
The languages were also changing. I switched to C and had to get hands-on training. I did
not know it then, but I had picked a profession where things are constantly moving. The most I've ever
worked with the same software was for about three years.
In 1991, we emigrated to the States. I had to quit my job two years before to avoid any
issues with the Soviet government. Every programmer I knew had to sign a special form
commanding them to keep state secrets. Such a signature could prevent us from getting exit
visas.
When I arrived in the US, I worried I had fallen behind. To refresh my skills and to become
more marketable, I had to take a programming course for six months. It was the then-popular mix
of COBOL, DB2, JCL, etc.
The main difference between the USA and the USSR was the level at which computers were
incorporated into everyday life. In the USSR, they were still a novelty. There was not a lot of
practical usage. Some of the reasons were the planned organization of the economy and a politicized approach
to science. Cybernetics was considered a "capitalist" discovery and was in exile in the 1950s. In the
United States, computers were already widely in use, and even in consumer settings.
The other difference is the gender composition of this profession. In the United States, it is more
male-dominated. In Russia, as I was starting my professional life, it was considered more of a
female occupation. In both programs I studied, girls represented 100% of the class. Guys would
go for something that was considered more masculine. These choices included majors like
construction engineering and mechanical engineering.
Now, things have changed in Russia. The average salary for a software developer in Moscow is
around $21K annually, versus a $10K average salary for Russia as a whole. Like in the United
States, it has become a male-dominated field.
In conclusion, I have to say I picked a good profession to be in. Although I constantly
have to learn new things, I've never had to worry about being employed. When I did go through a
layoff, I was able to find a job very quickly. It is also a good paying job. I was very lucky
compared to other immigrants, who had to study programming from scratch.
Perl is a uniquely complex, non-orthogonal language, and due to this it has a unique level of
expressiveness.
Also, the complexity of Perl to a large extent reflects the complexity of the Perl environment
(which was the Unix environment at the beginning, but now also includes the Windows environment
with its quirks).
Notable quotes:
"... On a syntactic level, in the particular case of Perl, I placed variable names in a separate namespace from reserved words. That's one of the reasons there are funny characters on the front of variable names -- dollar signs and so forth. That allowed me to add new reserved words without breaking old programs. ..."
"... A script is something that is easy to tweak, and a program is something that is locked in. There are all sorts of metaphorical tie-ins that tend to make programs static and scripts dynamic, but of course, it's a continuum. You can write Perl programs, and you can write C scripts. People do talk more about Perl programs than C scripts. Maybe that just means Perl is more versatile. ..."
"... A good language actually gives you a range, a wide dynamic range, of your level of discipline. We're starting to move in that direction with Perl. The initial Perl was lackadaisical about requiring things to be defined or declared or what have you. Perl 5 has some declarations that you can use if you want to increase your level of discipline. But it's optional. So you can say "use strict," or you can turn on warnings, or you can do various sorts of declarations. ..."
"... But Perl was an experiment in trying to come up with not a large language -- not as large as English -- but a medium-sized language, and to try to see if, by adding certain kinds of complexity from natural language, the expressiveness of the language grew faster than the pain of using it. And, by and large, I think that experiment has been successful. ..."
"... If you used the regular expression in a list context, it will pass back a list of the various subexpressions that it matched. A different computer language may add regular expressions, even have a module that's called Perl 5 regular expressions, but it won't be integrated into the language. You'll have to jump through an extra hoop, take that right angle turn, in order to say, "Okay, well here, now apply the regular expression, now let's pull the things out of the regular expression," rather than being able to use the thing in a particular context and have it do something meaningful. ..."
"... A language is not a set of syntax rules. It is not just a set of semantics. It's the entire culture surrounding the language itself. So part of the cultural context in which you analyze a language includes all the personalities and people involved -- how everybody sees the language, how they propagate the language to other people, how it gets taught, the attitudes of people who are helping each other learn the language -- all of this goes into the pot of context. ..."
"... In the beginning, I just tried to help everybody. Particularly being on USENET. You know, there are even some sneaky things in there -- like looking for people's Perl questions in many different newsgroups. For a long time, I resisted creating a newsgroup for Perl, specifically because I did not want it to be ghettoized. You know, if someone can say, "Oh, this is a discussion about Perl, take it over to the Perl newsgroup," then they shut off the discussion in the shell newsgroup. If there are only the shell newsgroups, and someone says, "Oh, by the way, in Perl, you can solve it like this," that's free advertising. So, it's fuzzy. We had proposed Perl as a newsgroup probably a year or two before we actually created it. It eventually came to the point where the time was right for it, and we did that. ..."
"... For most web applications, Perl is severely underutilized. Your typical CGI script says print, print, print, print, print, print, print. But in a sense, it's the dynamic range of Perl that allows for that. You don't have to say a whole lot to write a simple Perl script, whereas your minimal Java program is, you know, eight or ten lines long anyway. Many of the features that made it competitive in the UNIX space will make it competitive in other spaces. ..."
"... Over the years, much of the work of making Perl work for people has been in designing ways for people to come to Perl. I actually delayed the first version of Perl for a couple of months until I had a sed-to-Perl and an awk-to-Perl translator. One of the benefits of borrowing features from various other languages is that those subsets of Perl that use those features are familiar to people coming from that other culture. What would be best, in my book, is if someone had a way of saying, "Well, I've got this thing in Visual Basic. Now, can I just rewrite some of these things in Perl?" ..."
The creator of Perl talks about language design and Perl. By Eugene Eric
Kim
DDJ : Is Perl 5.005 what you envisioned Perl to be when you set out to do
it?
LW: That assumes that I'm smart enough to envision something as complicated as Perl.
I knew that Perl would be good at some things, and would be good at more things as time went
on. So, in a sense, I'm sort of blessed with natural stupidity -- as opposed to artificial
intelligence -- in the sense that I know what my intellectual limits are.
I'm not one of these people who can sit down and design an entire system from scratch and
figure out how everything relates to everything else, so I knew from the start that I had to
take the bear-of-very-little-brain approach, and design the thing to evolve. But that fit in
with my background in linguistics, because natural languages evolve over time.
You can apply biological metaphors to languages. They move into niches, and as new needs
arise, languages change over time. It's actually a practical way to design a computer language.
Not all computer programs can be designed that way, but I think more can be designed that way
than have been. A lot of the majestic failures that have occurred in computer science have been
because people thought they could design the whole thing in advance.
DDJ : How do you design a language to evolve?
LW: There are several aspects to that, depending on whether you are talking about
syntax or semantics. On a syntactic level, in the particular case of Perl, I placed
variable names in a separate namespace from reserved words. That's one of the reasons there are
funny characters on the front of variable names -- dollar signs and so forth. That allowed me
to add new reserved words without breaking old programs.
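A minimal illustration of this point -- my example, not from the interview -- showing how the sigil keeps a variable name from colliding with a builtin (or with any reserved word added later):

    my $length = 42;                 # the sigil puts this name in the variable namespace
    print length("hello"), "\n";     # the bareword 'length' can only be the builtin: prints 5
    print $length, "\n";             # and $length can only be the variable: prints 42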
DDJ : What is a scripting language? Does Perl fall into the category of a
scripting language?
LW: Well, being a linguist, I tend to go back to the etymological meanings of
"script" and "program," though, of course, that's fallacious in terms of what they mean
nowadays. A script is what you hand to the actors, and a program is what you hand to the
audience. Now hopefully, the program is already locked in by the time you hand that out,
whereas the script is something you can tinker with. I think of phrases like "following the
script," or "breaking from the script." The notion that you can evolve your script ties into
the notion of rapid prototyping.
A script is something that is easy to tweak, and a program is something that is locked
in. There are all sorts of metaphorical tie-ins that tend to make programs static and scripts
dynamic, but of course, it's a continuum. You can write Perl programs, and you can write C
scripts. People do talk more about Perl programs than C scripts. Maybe that just means Perl is
more versatile.
... ... ...
DDJ : Would that be a better distinction than interpreted versus compiled --
run-time versus compile-time binding?
LW: It's a more useful distinction in many ways because, with late-binding languages
like Perl or Java, you cannot make up your mind about what the real meaning of it is until the
last moment. But there are different definitions of what the last moment is. Computer
scientists would say there are really different "latenesses" of binding.
A good language actually gives you a range, a wide dynamic range, of your level of
discipline. We're starting to move in that direction with Perl. The initial Perl was
lackadaisical about requiring things to be defined or declared or what have you. Perl 5 has
some declarations that you can use if you want to increase your level of discipline. But it's
optional. So you can say "use strict," or you can turn on warnings, or you can do various sorts
of declarations.
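As a small sketch of that optional discipline (my own example, not Wall's): the same few lines run as a throwaway script without the first two statements, but with them Perl 5 demands explicit variable declarations and reports dubious constructs.

    use strict;        # optional: require declared variables, forbid symbolic references
    use warnings;      # optional: report suspicious constructs at compile time and run time
    my $total = 0;     # under 'use strict' the 'my' declaration is mandatory
    $total += $_ for 1 .. 10;
    print "$total\n";  # prints 55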
DDJ : Would it be accurate to say that Perl doesn't enforce good design?
LW: No, it does not. It tries to give you some tools to help if you want to do that, but I'm
a firm believer that a language -- whether it's a natural language or a computer language --
ought to be an amoral artistic medium.
You can write pretty poems or you can write ugly poems, but that doesn't say whether English
is pretty or ugly. So, while I kind of like to see beautiful computer programs, I don't think
the chief virtue of a language is beauty. That's like asking an artist whether they use
beautiful paints and a beautiful canvas and a beautiful palette. A language should be a medium
of expression, which does not restrict your feeling unless you ask it to.
DDJ : Where does the beauty of a program lie? In the underlying algorithms, in the
syntax of the description?
LW: Well, there are many different definitions of artistic beauty. It can be argued
that it's symmetry, which in a computer language might be considered orthogonality. It's also
been argued that broken symmetry is what is considered most beautiful and most artistic and
diverse. Symmetry breaking is the root of our whole universe according to physicists, so if God
is an artist, then maybe that's his definition of what beauty is.
This actually ties back in with the built-to-evolve concept on the semantic level. A lot of
computer languages were defined to be naturally orthogonal, or at least the computer scientists
who designed them were giving lip service to orthogonality. And that's all very well if you're
trying to define a position in a space. But that's not how people think. It's not how natural
languages work. Natural languages are not orthogonal, they're diagonal. They give you
hypotenuses.
Suppose you're flying from California to Quebec. You don't fly due east, and take a left
turn over Nashville, and then go due north. You fly straight, more or less, from here to there.
And it's a network. And it's actually sort of a fractal network, where your big link is
straight, and you have little "fractally" things at the end for your taxi and bicycle and
whatever the mode of transport you use. Languages work the same way. And they're designed to
get you most of the way here, and then have ways of refining the additional shades of
meaning.
When they first built the University of California at Irvine campus, they just put the
buildings in. They did not put any sidewalks, they just planted grass. The next year, they came
back and built the sidewalks where the trails were in the grass. Perl is that kind of a
language. It is not designed from first principles. Perl is those sidewalks in the grass. Those
trails that were there before were the previous computer languages that Perl has borrowed ideas
from. And Perl has unashamedly borrowed ideas from many, many different languages. Those paths
can go diagonally. We want shortcuts. Sometimes we want to be able to do the orthogonal thing,
so Perl generally allows the orthogonal approach also. But it also allows a certain number of
shortcuts, and being able to insert those shortcuts is part of that evolutionary thing.
I don't want to claim that this is the only way to design a computer language, or that
everyone is going to actually enjoy a computer language that is designed in this way.
Obviously, some people speak other languages. But Perl was an experiment in trying to come
up with not a large language -- not as large as English -- but a medium-sized language, and to
try to see if, by adding certain kinds of complexity from natural language, the expressiveness
of the language grew faster than the pain of using it. And, by and large, I think that
experiment has been successful.
DDJ : Give an example of one of the things you think is expressive about Perl that
you wouldn't find in other languages.
LW: The fact that regular-expression parsing and the use of regular expressions is
built right into the language. If you used the regular expression in a list context, it
will pass back a list of the various subexpressions that it matched. A different computer
language may add regular expressions, even have a module that's called Perl 5 regular
expressions, but it won't be integrated into the language. You'll have to jump through an extra
hoop, take that right angle turn, in order to say, "Okay, well here, now apply the regular
expression, now let's pull the things out of the regular expression," rather than being able to
use the thing in a particular context and have it do something meaningful.
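A small illustrative example (mine, not from the interview) of a match in list context handing back its captured subexpressions directly, with no separate extraction step:

    my $stamp = "1998-07-22";
    # in list context the match returns the parenthesized captures themselves
    my ($year, $month, $day) = $stamp =~ /(\d{4})-(\d{2})-(\d{2})/;
    print "$year $month $day\n";    # prints: 1998 07 22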
The school of linguistics I happened to come up through is called tagmemics, and it makes a
big deal about context. In a real language -- this is a tagmemic idea -- you can distinguish
between what the conventional meaning of the "thing" is and how it's being used. You think of
"dog" primarily as a noun, but you can use it as a verb. That's the prototypical example, but
the "thing" applies at many different levels. You think of a sentence as a sentence.
Transformational grammar was built on the notion of analyzing a sentence. And they had all
their cute rules, and they eventually ended up throwing most of them back out again.
But in the tagmemic view, you can take a sentence as a unit and use it differently. You can
say a sentence like, "I don't like your I-can-use-anything-like-a-sentence attitude." There,
I've used the sentence as an adjective. The sentence isn't an adjective if you analyze it, any
way you want to analyze it. But this is the way people think. If there's a way to make sense of
something in a particular context, they'll do so. And Perl is just trying to make those things
make sense. There's the basic distinction in Perl between singular and plural context -- call
it list context and scalar context, if you will. But you can use a particular construct in a
singular context that has one meaning that sort of makes sense using the list context, and it
may have a different meaning that makes sense in the plural context.
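For example (again my illustration, not Wall's), the same array expression yields an element count in singular (scalar) context and the elements themselves in plural (list) context:

    my @words = ("camel", "llama", "alpaca");
    my $count = @words;      # scalar context: the number of elements, 3
    my @copy  = @words;      # list context: the elements themselves
    print "$count\n";        # prints 3
    print "@copy\n";         # prints: camel llama alpaca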
That is where the expressiveness comes from. In English, you read essays by people who say,
"Well, how does this metaphor thing work?" Owen Barfield talks about this. You say one thing
and mean another. That's how metaphors arise. Or you take two things and jam them together. I
think it was Owen Barfield, or maybe it was C.S. Lewis, who talked about "a piercing
sweetness." And we know what "piercing" is, and we know what "sweetness" is, but you put those
two together, and you've created a new meaning. And that's how languages ought to work.
DDJ : Is a more expressive language more difficult to learn?
LW: Yes. It was a conscious tradeoff at the beginning of Perl that it would be more
difficult to master the whole language. However, taking another clue from a natural language,
we do not require 5-year olds to speak with the same diction as 50-year olds. It is okay for
you to use the subset of a language that you are comfortable with, and to learn as you go. This
is not true of so many computer-science languages. If you program C++ in a subset that
corresponds to C, you get laughed out of the office.
There's a whole subject that we haven't touched here. A language is not a set of syntax
rules. It is not just a set of semantics. It's the entire culture surrounding the language
itself. So part of the cultural context in which you analyze a language includes all the
personalities and people involved -- how everybody sees the language, how they propagate the
language to other people, how it gets taught, the attitudes of people who are helping each
other learn the language -- all of this goes into the pot of context.
Because I had already put out other freeware projects (rn and patch), I realized before I
ever wrote Perl that a great deal of the value of those things was from collaboration. Many of
the really good ideas in rn and Perl came from other people.
I think that Perl is in its adolescence right now. There are places where it is grown up,
and places where it's still throwing tantrums. I have a couple of teenagers, and the thing you
notice about teenagers is that they're always plus or minus ten years from their real age. So
if you've got a 15-year old, they're either acting 25 or they're acting 5. Sometimes
simultaneously! And Perl is a little that way, but that's okay.
DDJ : What part of Perl isn't quite grown up?
LW: Well, I think that the part of Perl, which has not been realistic up until now
has been on the order of how you enable people in certain business situations to actually use
it properly. There are a lot of people who cannot use freeware because it is, you know,
schlocky. Their bosses won't let them, their government won't let them, or they think their
government won't let them. There are a lot of people who, unknown to their bosses or their
government, are using Perl.
DDJ : So these aren't technical issues.
LW: I suppose it depends on how you define technology. Some of it is perceptions,
some of it is business models, and things like that. I'm trying to generate a new symbiosis
between the commercial and the freeware interests. I think there's an artificial dividing line
between those groups and that they could be more collaborative.
As a linguist, the generation of a linguistic culture is a technical issue. So, these
adjustments we might make in people's attitudes toward commercial operations or in how Perl is
being supported, distributed, advertised, and marketed -- not in terms of trying to make bucks,
but just how we propagate the culture -- these are technical ideas in the psychological and the
linguistic sense. They are, of course, not technical in the computer-science sense. But I think
that's where Perl has really excelled -- its growth has not been driven solely by technical
merits.
DDJ : What are the things that you do when you set out to create a culture around
the software that you write?
LW: In the beginning, I just tried to help everybody. Particularly being on
USENET. You know, there are even some sneaky things in there -- like looking for people's Perl
questions in many different newsgroups. For a long time, I resisted creating a newsgroup for
Perl, specifically because I did not want it to be ghettoized. You know, if someone can say,
"Oh, this is a discussion about Perl, take it over to the Perl newsgroup," then they shut off
the discussion in the shell newsgroup. If there are only the shell newsgroups, and someone
says, "Oh, by the way, in Perl, you can solve it like this," that's free advertising. So, it's
fuzzy. We had proposed Perl as a newsgroup probably a year or two before we actually created
it. It eventually came to the point where the time was right for it, and we did that.
DDJ : Perl has really been pigeonholed as a language of the Web. One result is
that people mistakenly try to compare Perl to Java. Why do you think people make the comparison
in the first place? Is there anything to compare?
LW: Well, people always compare everything.
DDJ : Do you agree that Perl has been pigeonholed?
LW: Yes, but I'm not sure that it bothers me. Before it was pigeonholed as a web
language, it was pigeonholed as a system-administration language, and I think that -- this goes
counter to what I was saying earlier about marketing Perl -- if the abilities are there to do a
particular job, there will be somebody there to apply it, generally speaking. So I'm not too
worried about Perl moving into new ecological niches, as long as it has the capability of
surviving in there.
Perl is actually a scrappy language for surviving in a particular ecological niche. (Can you
tell I like biological metaphors?) You've got to understand that it first went up against C and
against shell, both of which were much loved in the UNIX community, and it succeeded against
them. So that early competition actually makes it quite a fit competitor in many other realms,
too.
For most web applications, Perl is severely underutilized. Your typical CGI script says
print, print, print, print, print, print, print. But in a sense, it's the dynamic range of Perl
that allows for that. You don't have to say a whole lot to write a simple Perl script, whereas
your minimal Java program is, you know, eight or ten lines long anyway. Many of the features
that made it competitive in the UNIX space will make it competitive in other spaces.
Now, there are things that Perl can't do. One of the things that you can't do with Perl
right now is compile it down to Java bytecode. And if that, in the long run, becomes a large
ecological niche (and this is not yet a sure thing), then that is a capability I want to be
certain that Perl has.
DDJ : There's been a movement to merge the two development paths between the
ActiveWare Perl for Windows and the main distribution of Perl. You were talking about
ecological niches earlier, and how Perl started off as a text-processing language. The
scripting languages that are dominant on the Microsoft platforms -- like VB -- tend to be more
visual than textual. Given Perl's UNIX origins -- awk, sed, and C, for that matter -- do you
think that Perl, as it currently stands, has the tools to fit into a Windows niche?
LW: Yes and no. It depends on your problem domain and who's trying to solve the
problem. There are problems that only need a textual solution or don't need a visual solution.
Automation things of certain sorts don't need to interact with the desktop, so for those sorts
of things -- and for the programmers who aren't really all that interested in visual
programming -- it's already good for that. And people are already using it for that. Certainly,
there is a group of people who would be enabled to use Perl if it had more of a visual
interface, and one of the things we're talking about doing for the O'Reilly NT Perl Resource
Kit is some sort of a visual interface.
A lot of what Windows is designed to do is to get mere mortals from 0 to 60, and there are
some people who want to get from 60 to 100. We are not really interested in being in
Microsoft's crosshairs. We're not actually interested in competing head-to-head with Visual
Basic, and to the extent that we do compete with them, it's going to be kind of subtle. There
has to be some way to get people from the slow lane to the fast lane. It's one thing to give
them a way to get from 60 to 100, but if they have to spin out to get from the slow lane to the
fast lane, then that's not going to work either.
Over the years, much of the work of making Perl work for people has been in designing
ways for people to come to Perl. I actually delayed the first version of Perl for a couple of
months until I had a sed-to-Perl and an awk-to-Perl translator. One of the benefits of
borrowing features from various other languages is that those subsets of Perl that use those
features are familiar to people coming from that other culture. What would be best, in my book,
is if someone had a way of saying, "Well, I've got this thing in Visual Basic. Now, can I just
rewrite some of these things in Perl?"
We're already doing this with Java. On our UNIX Perl Resource Kit, I've got a hybrid
language called "jpl" -- that's partly a pun on my old alma mater, Jet Propulsion Laboratory,
and partly for Java, Perl...Lingo, there we go! That's good. "Java Perl Lingo." You've heard it
first here! jpl lets you take a Java program and magically turn one of the methods into a chunk
of Perl right there inline. It turns Perl code into a native method, and automates the linkage
so that when you pull in the Java code, it also pulls in the Perl code, and the interpreter,
and everything else. It's actually calling out from Java's Virtual Machine into Perl's virtual
machine. And we can call in the other direction, too. You can embed Java in Perl, except that
there's a bug in JDK having to do with threads that prevents us from doing any I/O. But that's
Java's problem.
It's a way of letting somebody evolve from a purely Java solution into, at least partly, a
Perl solution. It's important not only to make Perl evolve, but to make it so that people can
evolve their own programs. It's how I program, and I think a lot of people program that way.
Most of us are too stupid to know what we want at the beginning.
DDJ : Is there hope down the line to present Perl to a standardization
body?
LW: Well, I have said in jest that people will be free to standardize Perl when I'm
dead. There may come a time when that is the right thing to do, but it doesn't seem appropriate
yet.
DDJ : When would that time be?
LW: Oh, maybe when the federal government declares that we can't export Perl unless
it's standardized or something.
DDJ : Only when you're forced to, basically.
LW: Yeah. To me, once things get to a standards body, it's not very interesting
anymore. The most efficient form of government is a benevolent dictatorship. I remember walking
into some BOF that USENIX held six or seven years ago, and John Quarterman was running it, and
he saw me sneak in, sit in the back corner, and he said, "Oh, here comes Larry Wall! He's a
standards committee all of his own!"
A great deal of the success of Perl so far has been based on some of my own idiosyncrasies.
And I recognize that they are idiosyncrasies, and I try to let people argue me out of them
whenever appropriate. But there are still ways of looking at things that I seem to do
differently than anybody else. It may well be that perl5-porters will one day degenerate into a
standards committee. So far, I have not abused my authority to the point that people have
written me off, and so I am still allowed to exercise a certain amount of absolute power over
the Perl core.
I just think headless standards committees tend to reduce everything to mush. There is a
conservatism that committees have that individuals don't, and there are times when you want to
have that conservatism and times you don't. I try to exercise my authority where we don't want
that conservatism. And I try not to exercise it at other times.
DDJ : How did you get involved in computer science? You're a linguist by
background?
LW: Because I talk to computer scientists more than I talk to linguists, I wear the
linguistics mantle more than I wear the computer-science mantle, but they actually came along
in parallel, and I'm probably a 50/50 hybrid. You know, basically, I'm no good at either
linguistics or computer science.
DDJ : So you took computer-science courses in college?
LW: In college, yeah. In college, I had various majors, but what I eventually
graduated in -- I'm one of those people that packed four years into eight -- what I eventually
graduated in was a self-constructed major, and it was Natural and Artificial Languages, which
seems positively prescient considering where I ended up.
DDJ : When did you join O'Reilly as a salaried employee? And how did that come
about?
LW: A year-and-a-half ago. It was partly because my previous job was kind of winding
down.
DDJ : What was your previous job?
LW: I was working for Seagate Software. They were shutting down that branch of
operations there. So, I was just starting to look around a little bit, and Tim noticed me
looking around and said, "Well, you know, I've wanted to hire you for a long time," so we
talked. And Gina Blaber (O'Reilly's software director) and I met. So, they more or less offered
to pay me to mess around with Perl.
So it's sort of my dream job. I get to work from home, and if I feel like taking a nap in
the afternoon, I can take a nap in the afternoon and work all night.
DDJ : Do you have any final comments, or tips for aspiring programmers? Or
aspiring Perl programmers?
LW: Assume that your first idea is wrong, and try to think through the various
options. I think that the biggest mistake people make is latching onto the first idea that
comes to them and trying to do that. It really comes to a thing that my folks taught me about
money. Don't buy something unless you've wanted it three times. Similarly, don't throw in a
feature when you first think of it. Think if there's a way to generalize it, think if it should
be generalized. Sometimes you can generalize things too much. I think like the things in Scheme
were generalized too much. There is a level of abstraction beyond which people don't want to
go. Take a good look at what you want to do, and try to come up with the long-term lazy way,
not the short-term lazy way.
I present here a small bibliography of papers on programming languages from the 1970s. I
have personally found these papers interesting in my research on the syntax of programming
languages. I give here short annotations and comments (adapted to modern-day notions) on some
of these papers.
Boeing screwed up by designing and installing a faulty system that was unsafe. It did not
even tell the pilots that MCAS existed. It still insists that the system's failure does not need to be
covered in simulator training. Boeing's failure and the FAA's negligence, not the
pilots, caused two major accidents.
Nearly a year after the first incident Boeing has still not presented a solution that the
FAA would accept. Meanwhile more safety critical issues on the 737 MAX were found for which
Boeing has still not provided any acceptable solution.
But to Langewiesche all of this is irrelevant anyway. He closes his piece with more "blame the
pilots" whitewash of "poor Boeing":
The 737 Max remains grounded under impossibly close scrutiny, and any suggestion that this
might be an overreaction, or that ulterior motives might be at play, or that the Indonesian
and Ethiopian investigations might be inadequate, is dismissed summarily. To top it off,
while the technical fixes to the MCAS have been accomplished, other barely related
imperfections have been discovered and added to the airplane's woes. All signs are that the
reintroduction of the 737 Max will be exceedingly difficult because of political and
bureaucratic obstacles that are formidable and widespread. Who in a position of authority
will say to the public that the airplane is safe?
I would if I were in such a position. What we had in the two downed airplanes was a
textbook failure of airmanship. In broad daylight, these pilots couldn't decipher a variant
of a simple runaway trim, and they ended up flying too fast at low altitude, neglecting to
throttle back and leading their passengers over an aerodynamic edge into oblivion. They were
the deciding factor here -- not the MCAS, not the Max.
One wonders how much Boeing paid the author to assemble his screed.
14,000 Words Of "Blame The Pilots" That Whitewash Boeing Of 737 MAX Failure
The New York Times
No doubt, this WAS intended as a whitewash of Boeing, but having read the 14,000 words, I
don't think it qualifies as more than a somewhat greywash. It is true he blames the pilots
for mishandling a situation that could, perhaps, have been better handled, but Boeing still
comes out of it pretty badly and so does the NTSB. The other thing I took away from the
article is that Airbus planes are, in principle, & by design, more
failsafe/idiot-proof.
Key words: New York Times Magazine. I think when your body is for sale you are called a
whore. Trump's almost hysterical bashing of the NYT is enough to make anyone like the paper,
but at its core it is a mouthpiece for the military industrial complex. Cf. Judith Miller.
The New York Times Magazine just published a 14,000-word piece
An ill-disguised attempt to prepare the ground for premature approval for the 737max. It
won't succeed - impossible. Opposition will come from too many directions. The blowback from
this article will make Boeing regret it very soon, I am quite sure.
Come to think of it: (apart from the MCAS) what sort of crap design is it, if an
absolutely vital control, which the elevator is, can become impossibly stiff under just those
conditions where you absolutely have to be able to move it quickly?
It will only highlight the hubris of "my sh1t doesn't stink" mentality of the American
elite and increase the resolve of other civil aviation authorities with a backbone (or in
ascendancy) to put Boeing through the wringer.
For the longest time the FAA was the gold standard, and years of "Air Crash Investigation" TV
shows solidified its place, but that has come to be taken for granted. Until now, if it was good
enough for the FAA, it was good enough for all.
That reputation has now been irreparably damaged over this sh1tshow. I can't help but
think this NYT article is only meant for domestic sheeple or stock brokers' consumption, as
anyone who is going to have anything technical to do with this investigation is going to see
right through this load of literal diarrhoea.
I wouldn't be surprised if some insider wants to offload some stock and planted this story
ahead of some 737MAX return-to-service timetable announcement to get an uplift. Someone needs
to track the SEC forms 3 4 and 5. But there are also many ways to skirt insider reporting
requirements. As usual, rules are only meant for the rest of us.
Thanks for the ongoing reporting of this debacle b.... you are saving people's lives
@ A.L who wrote
" I wouldn't be surprised if some insider wants to offload some stock and planted this story
ahead of some 737MAX return-to-service timetable announcement to get an uplift. Someone needs
to track the SEC forms 3 4 and 5. But there are also many ways to skirt insider reporting
requirements. As usual, rules are only meant for the rest of us. "
I agree but would pluralize your "insider" to "insiders". This SOP gut and run
financialization strategy is just like we are seeing with Purdue Pharma that just filed
bankruptcy because their opioids have killed so many....the owners will never see jail time
and their profits are protected by the God of Mammon legal system.
Hopefully the WWIII we are engaged in about public/private finance will put an end to this
perfidy by the God of Mammon/private finance cult of the Western form of social
organization.
Peter Lemme, the satcom guru, was
once an engineer at Boeing. He testified about technical MAX issues before Congress and wrote a
lot of technical details about it. He retweeted the NYT Mag piece with this comment:
Peter Lemme @Satcom_Guru
Blame the pilots.
Blame the training.
Blame the airline standards.
Imply rampant corruption at all levels.
Claim Airbus flight envelope protection is superior to Boeing.
Fumble the technical details.
Stack the quotes with lots of hearsay to drive the theme.
Ignore everything else
perhaps, just like proponents of AI and self-driving cars. They just love the technology, and are so
financially and emotionally invested in it that they can't see the forest for the
trees.
I like technology, I studied engineering. But the myopic drive to profitability and
naivety to unintended consequences are pushing these tech out into the world before they are
ready.
Engineering used to be a discipline with ethics and responsibilities... But now anybody
who can write two lines of code can call themselves a software engineer....
This impostor definitely demonstrated programming abilities, although at the time there was
no such term :-)
Notable quotes:
"... "We wrote it down. ..."
"... The next phrase was: ..."
"... " ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' " ..."
"... " ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' " ..."
"... "Tong what ?" rasped the colonel. ..."
"... "Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched his eyebrows, as though inviting another question. There was one. The adjutant asked, "What's that gizmo on the end?" ..."
"... Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist was the last of his status symbols, and he clung to it desperately. Looking back, I think his improvisations on the Morton fantail must have been one of the most heroic achievements in the history of confidence men -- which, as you may have gathered by now, was Whitey's true profession. Toward the end of our tour of duty on the 'Canal he was totally discredited with us and transferred at his own request to the 81-millimeter platoon, where our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a bunch of eight balls anyway. Yet even then, even after we had become completely disillusioned with him, he remained a figure of wonder among us. We could scarcely believe that an impostor could be clever enough actually to invent a language -- phonics, calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering every variation, every subtlety, every syntactic construction. ..."
a fabulous tale of the South Pacific by William Manchester
The Man Who Could Speak Japanese
"We wrote it down.
The next phrase was:
" ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' "
Then:
" ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' "
"Tong what ?" rasped the colonel.
"Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched
his eyebrows, as though inviting another question. There was one. The adjutant asked,
"What's that gizmo on the end?"
Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist
was the last of his status symbols, and he clung to it desperately. Looking back, I think
his improvisations on the Morton fantail must have been one of the most heroic achievements
in the history of confidence men -- which, as you may have gathered by now, was Whitey's
true profession. Toward the end of our tour of duty on the 'Canal he was totally
discredited with us and transferred at his own request to the 81-millimeter platoon, where
our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a
bunch of eight balls anyway. Yet even then, even after we had become completely
disillusioned with him, he remained a figure of wonder among us. We could scarcely believe
that an impostor could be clever enough actually to invent a language -- phonics,
calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his
seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering
every variation, every subtlety, every syntactic construction.
I wish I had read Thinking Forth by Leo Brodie (ISBN-10: 0976458705, ISBN-13: 978-0976458708)
much earlier. It is an amazing book that really shows you a different way to approach
programming problems. It is available online these days.
Programming skills are somewhat similar to the skills of people who play the violin or piano. As
soon as you stop playing, they start to evaporate. First slowly, then quicker. In
two years you will probably lose 80%.
Notable quotes:
"... I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. ..."
Dijkstra said he was proud to be a programmer. Unfortunately he changed his attitude
completely, and I think he wrote his last computer program in the 1980s. At this conference I
went to in 1967 about simulation language, Chris Strachey was going around asking everybody at
the conference what was the last computer program you wrote. This was 1967. Some of the people
said, "I've never written a computer program." Others would say, "Oh yeah, here's what I did
last week." I asked Edsger this question when I visited him in Texas in the 90s and he said,
"Don, I write programs now with pencil and paper, and I execute them in my head." He finds that
a good enough discipline.
I think he was mistaken on that. He taught me a lot of things, but I really think that if he
had continued... One of Dijkstra's greatest strengths was that he felt a strong sense of
aesthetics, and he didn't want to compromise his notions of beauty. They were so intense that
when he visited me in the 1960s, I had just come to Stanford. I remember the conversation we
had. It was in the first apartment, our little rented house, before we had electricity in the
house.
We were sitting there in the dark, and he was telling me how he had just learned about the
specifications of the IBM System/360, and it made him so ill that his heart was actually
starting to flutter.
He intensely disliked things that he didn't consider clean to work with. So I can see that
he would have distaste for the languages that he had to work with on real computers. My
reaction to that was to design my own language, and then make Pascal so that it would work well
for me in those days. But his response was to do everything only intellectually.
So, programming.
I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs
in February. These are small programs, but I have a compulsion. I love to write programs and
put things into it. I think of a question that I want to answer, or I have part of my book
where I want to present something. But I can't just present it by reading about it in a book.
As I code it, it all becomes clear in my head. It's just the discipline. The fact that I have
to translate my knowledge of this method into something that the machine is going to understand
just forces me to make that crystal-clear in my head. Then I can explain it to somebody else
infinitely better. The exposition is always better if I've implemented it, even though it's
going to take me more time.
So I had a programming hat when I was outside of Cal Tech, and at Cal Tech I am a
mathematician taking my grad studies. A startup company, called Green Tree Corporation because
green is the color of money, came to me and said, "Don, name your price. Write compilers for us
and we will take care of finding computers for you to debug them on, and assistance for you to
do your work. Name your price." I said, "Oh, okay. $100,000," assuming that this was an impossible
number. In that era this was not quite at Bill Gates's level today, but it was sort of out there.
The guy didn't blink. He said, "Okay." I didn't really blink either. I said, "Well, I'm not
going to do it. I just thought this was an impossible number."
At that point I made the decision in my life that I wasn't going to optimize my income; I
was really going to do what I thought I could do for... well, I don't know. If you ask me what
makes me most happy, number one would be somebody saying "I learned something from you". Number
two would be somebody saying "I used your software". But number infinity would be... Well, no.
Number infinity minus one would be "I bought your book". It's not as good as "I read your
book", you know. Then there is "I bought your software"; that was not in my own personal value.
So that decision came up. I kept up with the literature about compilers. The Communications of
the ACM was where the action was. I also worked with people on trying to debug the ALGOL
language, which had problems with it. I published a few papers, like "The Remaining Trouble
Spots in ALGOL 60" was one of the papers that I worked on. I chaired a committee called
"Smallgol" which was to find a subset of ALGOL that would work on small computers. I was active
in programming languages.
Frana: You have made the comment several times that maybe 1 in 50 people have the "computer scientist's mind."
Knuth: Yes.
Frana: I am wondering if a large number of those people are trained professional librarians? [laughter] There is some strangeness there. But can
you pinpoint what it is about the mind of the computer scientist that is....
Knuth: That is different?
Frana: What are the characteristics?
Knuth: Two things: one is the ability to deal with non-uniform structure, where you
have case one, case two, case three, case four. Or that you have a model of something where the
first component is integer, the next component is a Boolean, and the next component is a real
number, or something like that, you know, non-uniform structure. To deal fluently with those
kinds of entities, which is not typical in other branches of mathematics, is critical. And the
other characteristic ability is to shift levels quickly, from looking at something in the large
to looking at something in the small, and many levels in between, jumping from one level of
abstraction to another. You know that, when you are adding one to some number, that you are
actually getting closer to some overarching goal. These skills, being able to deal with
nonuniform objects and to see through things from the top level to the bottom level, these are
very essential to computer programming, it seems to me. But maybe I am fooling myself because I
am too close to it.
Frana: It is the hardest thing to really understand that which you are existing
within.
Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a
more coherent structure so that it is easier to put things together. I can see that I could be
viewed as a scholar that does his best to check out sources of material, so that people get
credit where it is due. And to check facts over, not just to look at the abstract of something,
but to see what the methods were that did it and to fill in holes if necessary. I look at my
role as being able to understand the motivations and terminology of one group of specialists
and boil it down to a certain extent so that people in other parts of the field can use it. I
try to listen to the theoreticians and select what they have done that is important to the
programmer on the street; to remove technical jargon when possible.
But I have never been good at any kind of a role that would be making policy, or advising
people on strategies, or what to do. I have always been best at refining things that are there
and bringing order out of chaos. I sometimes raise new ideas that might stimulate people, but
not really in a way that would be in any way controlling the flow. The only time I have ever
advocated something strongly was with literate programming; but I do this always with the
caveat that it works for me, not knowing if it would work for anybody else.
When I work with a system that I have created myself, I can always change it if I don't like
it. But everybody who works with my system has to work with what I give them. So I am not able
to judge my own stuff impartially. So anyway, I have always felt bad about if anyone says,
'Don, please forecast the future,'...
"... When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?" ..."
"... When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this." ..."
...I showed the second version of this design to two of my graduate students, and I said,
"Okay, implement this, please, this summer. That's your summer job." I thought I had specified
a language. I had to go away. I spent several weeks in China during the summer of 1977, and I
had various other obligations. I assumed that when I got back from my summer trips, I would be
able to play around with TeX and refine it a little bit. To my amazement, the students, who
were outstanding students, had not completed it. They had a system that was able to do about
three lines of TeX. I thought, "My goodness, what's going on? I thought these were good
students." Well afterwards I changed my attitude to saying, "Boy, they accomplished a
miracle."
Because going from my specification, which I thought was complete, they really had an
impossible task, and they had succeeded wonderfully with it. These students, by the way, [were]
Michael Plass, who has gone on to be the brains behind almost all of Xerox's Docutech software
and all kind of things that are inside of typesetting devices now, and Frank Liang, one of the
key people for Microsoft Word.
He did important mathematical things as well as his hyphenation methods which are quite used
in all languages now. These guys were actually doing great work, but I was amazed that they
couldn't do what I thought was just sort of a routine task. Then I became a programmer in
earnest, where I had to do it. The reason is when you're doing programming, you have to explain
something to a computer, which is dumb.
When you're writing a document for a human being to understand, the human being will
look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of
ambiguities and vagueness that you don't realize until you try to put it into a computer. Then
all of a sudden, almost every five minutes as you're writing the code, a question comes up that
wasn't addressed in the specification. "What if this combination occurs?"
It just didn't occur to the person writing the design specification. When you're faced
with implementation, a person who has been delegated this job of working from a design would
have to say, "Well hmm, I don't know what the designer meant by this."
If I hadn't been in China they would've scheduled an appointment with me and stopped their
programming for a day. Then they would come in at the designated hour and we would talk. They
would take 15 minutes to present to me what the problem was, and then I would think about it
for a while, and then I'd say, "Oh yeah, do this. " Then they would go home and they would
write code for another five minutes and they'd have to schedule another appointment.
I'm probably exaggerating, but this is why I think Bob Floyd's Chiron compiler never got
going. Bob worked many years on a beautiful idea for a programming language, where he designed
a language called Chiron, but he never touched the programming himself. I think this was
actually the reason that he had trouble with that project, because it's so hard to do the
design unless you're faced with the low-level aspects of it, explaining it to a machine instead
of to another person.
Forsythe, I think it was, who said, "People have said traditionally that you don't
understand something until you've taught it in a class. The truth is you don't really
understand something until you've taught it to a computer, until you've been able to program
it." At this level, programming was absolutely important
Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming
keeps getting harder because more stuff is discovered. I can cope with learning about one new
technique per day, but I can't take ten in a day all at once. So conferences are depressing; it
means I have so much more work to do. If I hide myself from the truth I am much happier.
"... Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a secondyear gradate student -- this was a thrill. ..."
"... But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly. ..."
Knuth: This is, of course, really the story of my life, because I hope to live long enough
to finish it. But I may not, because it's turned out to be such a huge project. I got married
in the summer of 1961, after my first year of graduate school. My wife finished college, and I
could use the money I had made -- the $5000 on the compiler -- to finance a trip to Europe for
our honeymoon.
We had four months of wedded bliss in Southern California, and then a man from
Addison-Wesley came to visit me and said "Don, we would like you to write a book about how to
write compilers."
The more I thought about it, I decided "Oh yes, I've got this book inside of me."
I sketched out that day -- I still have the sheet of tablet paper on which I wrote -- I
sketched out 12 chapters that I thought ought to be in such a book. I told Jill, my wife, "I
think I'm going to write a book."
As I say, we had four months of bliss, because the rest of our marriage has all been devoted
to this book. Well, we still have had happiness. But really, I wake up every morning and I
still haven't finished the book. So I try to -- I have to -- organize the rest of my life
around this, as one main unifying theme. The book was supposed to be about how to write a
compiler. They had heard about me from one of their editorial advisors, that I knew something
about how to do this. The idea appealed to me for two main reasons. One is that I did enjoy
writing. In high school I had been editor of the weekly paper. In college I was editor of the
science magazine, and I worked on the campus paper as copy editor. And, as I told you, I wrote
the manual for that compiler that we wrote. I enjoyed writing, number one.
Also, Addison-Wesley was the people who were asking me to do this book; my favorite
textbooks had been published by Addison Wesley. They had done the books that I loved the most
as a student. For them to come to me and say, "Would you write a book for us?", and here I am
just a second-year graduate student -- this was a thrill.
Another very important reason at the time was that I knew that there was a great need for a
book about compilers, because there were a lot of people who even in 1962 -- this was January
of 1962 -- were starting to rediscover the wheel. The knowledge was out there, but it hadn't
been explained. The people who had discovered it, though, were scattered all over the world and
they didn't know of each other's work either, very much. I had been following it. Everybody I
could think of who could write a book about compilers, as far as I could see, they would only
give a piece of the fabric. They would slant it to their own view of it. There might be four
people who could write about it, but they would write four different books. I could present all
four of their viewpoints in what I would think was a balanced way, without any axe to grind,
without slanting it towards something that I thought would be misleading to the compiler writer
for the future. I considered myself as a journalist, essentially. I could be the expositor, the
tech writer, that could do the job that was needed in order to take the work of these brilliant
people and make it accessible to the world. That was my motivation. Now, I didn't have much
time to spend on it then, I just had this page of paper with 12 chapter headings on it. That's
all I could do while I'm a consultant at Burroughs and doing my graduate work. I signed a
contract, but they said "We know it'll take you a while." I didn't really begin to have much
time to work on it until 1963, my third year of graduate school, as I'm already finishing up on
my thesis. In the summer of '62, I guess I should mention, I wrote another compiler. This was
for Univac; it was a FORTRAN compiler. I spent the summer, I sold my soul to the devil, I guess
you say, for three months in the summer of 1962 to write a FORTRAN compiler. I believe that the
salary for that was $15,000, which was much more than an assistant professor. I think assistant
professors were getting eight or nine thousand in those days.
Feigenbaum: Well, when I started in 1960 at [University of California] Berkeley, I was
getting $7,600 for the nine-month year.
Knuth: Yeah, so you see it. I got $15,000 for a summer job in 1962 writing a
FORTRAN compiler. One day during that summer I was writing the part of the compiler that looks
up identifiers in a hash table. The method that we used is called linear probing. Basically you
take the variable name that you want to look up, you scramble it, like you square it or
something like this, and that gives you a number between one and, well in those days it would
have been between 1 and 1000, and then you look there. If you find it, good; if you don't find
it, go to the next place and keep on going until you either get to an empty place, or you find
the number you're looking for. It's called linear probing. There was a rumor that one of
Professor Feller's students at Princeton had tried to figure out how fast linear probing works
and was unable to succeed. This was a new thing for me. It was a case where I was doing
programming, but I also had a mathematical problem that would go into my other [job]. My winter
job was being a math student, my summer job was writing compilers. There was no mix. These
worlds did not intersect at all in my life at that point. So I spent one day during the summer
while writing the compiler looking at the mathematics of how fast does linear probing work. I
got lucky, and I solved the problem. I figured out some math, and I kept two or three sheets of
paper with me and I typed it up. ["Notes on 'Open' Addressing", 7/22/63] I guess that's on the
internet now, because this became really the genesis of my main research work, which developed
not to be working on compilers, but to be working on what they call analysis of algorithms,
which is, have a computer method and find out how good is it quantitatively. I can say, if I
got so many things to look up in the table, how long is linear probing going to take. It dawned
on me that this was just one of many algorithms that would be important, and each one would
lead to a fascinating mathematical problem. This was easily a good lifetime source of rich
problems to work on. Here I am then, in the middle of 1962, writing this FORTRAN compiler, and
I had one day to do the research and mathematics that changed my life for my future research
trends. But now I've gotten off the topic of what your original question was.
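[A minimal sketch, in Java, of the linear-probing lookup Knuth describes above: hash ("scramble")
the identifier into a table index, check that slot, and step forward on collisions until either
the key or an empty slot is found. This is only an illustration added for clarity, not the code
of the 1962 FORTRAN compiler; the table size of 1000 is taken from the anecdote.]

    /** Minimal sketch of hash-table lookup with linear probing (illustrative only). */
    public class LinearProbingDemo {
        private final String[] slots = new String[1000];    // table of 1000 entries, as in the anecdote

        private int hash(String key) {
            return Math.abs(key.hashCode() % slots.length); // "scramble" the name into 0..999
        }

        public void insert(String key) {
            int i = hash(key);
            while (slots[i] != null && !slots[i].equals(key)) {
                i = (i + 1) % slots.length;                 // probe the next slot (table assumed not full)
            }
            slots[i] = key;
        }

        public boolean contains(String key) {
            int i = hash(key);
            while (slots[i] != null) {
                if (slots[i].equals(key)) return true;      // found it
                i = (i + 1) % slots.length;                 // keep probing until an empty slot
            }
            return false;                                   // hit an empty slot: not present
        }

        public static void main(String[] args) {
            LinearProbingDemo table = new LinearProbingDemo();
            table.insert("alpha");
            table.insert("beta");
            System.out.println(table.contains("alpha"));    // true
            System.out.println(table.contains("gamma"));    // false
        }
    }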
Feigenbaum: We were talking about sort of the.. You talked about the embryo of The Art of
Computing. The compiler book morphed into The Art of Computer Programming, which became a
seven-volume plan.
Knuth: Exactly. Anyway, I'm working on a compiler and I'm thinking about this. But now I'm
starting, after I finish this summer job, then I began to do things that were going to be
relating to the book. One of the things I knew I had to have in the book was an artificial
machine, because I'm writing a compiler book but machines are changing faster than I can write
books. I have to have a machine that I'm totally in control of. I invented this machine called
MIX, which was typical of the computers of 1962.
In 1963 I wrote a simulator for MIX so that I could write sample programs for it, and I
taught a class at Caltech on how to write programs in assembly language for this hypothetical
computer. Then I started writing the parts that dealt with sorting problems and searching
problems, like the linear probing idea. I began to write those parts, which are part of a
compiler, of the book. I had several hundred pages of notes gathering for those chapters for
The Art of Computer Programming. Before I graduated, I've already done quite a bit of writing
on The Art of Computer Programming.
I met George Forsythe about this time. George was the man who inspired both of us [Knuth and
Feigenbaum] to come to Stanford during the '60s. George came down to Southern California for a
talk, and he said, "Come up to Stanford. How about joining our faculty?" I said "Oh no, I can't
do that. I just got married, and I've got to finish this book first." I said, "I think I'll
finish the book next year, and then I can come up [and] start thinking about the rest of my
life, but I want to get my book done before my son is born." Well, John is now 40-some years
old and I'm not done with the book. Part of my lack of expertise is any good estimation
procedure as to how long projects are going to take. I way underestimated how much needed to be
written about in this book. Anyway, I started writing the manuscript, and I went merrily along
writing pages of things that I thought really needed to be said. Of course, it didn't take long
before I had started to discover a few things of my own that weren't in any of the existing
literature. I did have an axe to grind. The message that I was presenting was in fact not going
to be unbiased at all. It was going to be based on my own particular slant on stuff, and that
original reason for why I should write the book became impossible to sustain. But the fact that
I had worked on linear probing and solved the problem gave me a new unifying theme for the
book. I was going to base it around this idea of analyzing algorithms, and have some
quantitative ideas about how good methods were. Not just that they worked, but that they worked
well: this method worked 3 times better than this method, or 3.1 times better than this method.
Also, at this time I was learning mathematical techniques that I had never been taught in
school. I found they were out there, but they just hadn't been emphasized openly, about how to
solve problems of this kind.
So my book would also present a different kind of mathematics than was common in the
curriculum at the time, that was very relevant to the analysis of algorithms. I went to the
publishers, I went to Addison Wesley, and said "How about changing the title of the book from
'The Art of Computer Programming' to 'The Analysis of Algorithms'." They said that will never
sell; their focus group couldn't buy that one. I'm glad they stuck to the original title,
although I'm also glad to see that several books have now come out called "The Analysis of
Algorithms", 20 years down the line.
But in those days, The Art of Computer Programming was very important because I'm
thinking of the aesthetical: the whole question of writing programs as something that has
artistic aspects in all senses of the word. The one idea is "art" which means artificial, and
the other "art" means fine art. All these are long stories, but I've got to cover it fairly
quickly.
I've got The Art of Computer Programming started out, and I'm working on my 12 chapters. I
finish a rough draft of all 12 chapters by, I think it was like 1965. I've got 3,000 pages of
notes, including a very good example of what you mentioned about seeing holes in the fabric.
One of the most important chapters in the book is parsing: going from somebody's algebraic
formula and figuring out the structure of the formula. Just the way I had done in seventh grade
finding the structure of English sentences, I had to do this with mathematical sentences.
Chapter ten is all about parsing of context-free languages, [which] is what we called it at
the time. I covered what people had published about context-free languages and parsing. I got
to the end of the chapter and I said, well, you can combine these ideas and these ideas, and
all of a sudden you get a unifying thing which goes all the way to the limit. These other ideas
had sort of gone partway there. They would say "Oh, if a grammar satisfies this condition, I
can do it efficiently." "If a grammar satisfies this condition, I can do it efficiently." But
now, all of a sudden, I saw there was a way to say I can find the most general condition that
can be done efficiently without looking ahead to the end of the sentence. That you could make a
decision on the fly, reading from left to right, about the structure of the thing. That was
just a natural outgrowth of seeing the different pieces of the fabric that other people had put
together, and writing it into a chapter for the first time. But I felt that this general
concept, well, I didn't feel that I had surrounded the concept. I knew that I had it, and I
could prove it, and I could check it, but I couldn't really intuit it all in my head. I knew it
was right, but it was too hard for me, really, to explain it well.
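[To make the "decide on the fly, reading from left to right" idea concrete, here is a toy
shift-reduce evaluator in Java for single-digit expressions built from + and *. At each token it
either shifts or reduces using one symbol of lookahead, and it never scans ahead to the end of
the sentence. It is only a small, hypothetical illustration of the general idea, not Knuth's
LR(k) construction.]

    import java.util.ArrayDeque;
    import java.util.Deque;

    /** Toy left-to-right shift-reduce evaluator (single-digit operands, '+' and '*' only). */
    public class ShiftReduceDemo {
        private static int prec(char op) { return (op == '+') ? 1 : 2; }   // '*' binds tighter

        private static void reduce(Deque<Integer> vals, Deque<Character> ops) {
            int b = vals.pop(), a = vals.pop();
            char op = ops.pop();
            vals.push(op == '+' ? a + b : a * b);
        }

        public static int eval(String expr) {
            Deque<Integer> vals = new ArrayDeque<>();
            Deque<Character> ops = new ArrayDeque<>();
            for (char c : expr.toCharArray()) {
                if (Character.isDigit(c)) {
                    vals.push(c - '0');                     // shift an operand
                } else {                                    // operator: reduce while the stack top binds at least as tightly
                    while (!ops.isEmpty() && prec(ops.peek()) >= prec(c)) reduce(vals, ops);
                    ops.push(c);                            // then shift the operator
                }
            }
            while (!ops.isEmpty()) reduce(vals, ops);       // reduce whatever remains
            return vals.pop();
        }

        public static void main(String[] args) {
            System.out.println(eval("2+3*4"));  // 14
            System.out.println(eval("2*3+4"));  // 10
        }
    }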
So I didn't put it in The Art of Computer Programming. I thought it was beyond the scope of my
book. Textbooks don't have to cover everything when you get to the harder things; then you have
to go to the literature. My idea at that time [is] I'm writing this book and I'm thinking it's
going to be published very soon, so any little things I discover and put in the book I didn't
bother to write a paper and publish in the journal because I figure it'll be in my book pretty
soon anyway. Computer science is changing so fast, my book is bound to be obsolete.
It takes a year for it to go through editing, and people drawing the illustrations, and then
they have to print it and bind it and so on. I have to be a little bit ahead of the
state-of-the-art if my book isn't going to be obsolete when it comes out. So I kept most of the
stuff to myself that I had, these little ideas I had been coming up with. But when I got to
this idea of left-to-right parsing, I said "Well here's something I don't really understand
very well. I'll publish this, let other people figure out what it is, and then they can tell me
what I should have said." I published that paper I believe in 1965, at the end of finishing my
draft of the chapter, which didn't get as far as that story, LR(k). Well now, textbooks of
computer science start with LR(k) and take off from there. But I want to give you an idea
of
"... Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping). ..."
Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by
confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy
of concepts (aka subtyping).
The only widely used OO language (for sufficiently narrow values of wide and wide values
of OO) to get that right used to be Objective Caml, and recently its stepchildren F# and Scala. So it is actually FP
that helps you with the classification.
This is a very interesting point and should be highlighted. You said implementation artifacts (especially in reference
to reducing code duplication), and for clarity, I think you are referring to the definition of operators on data (class
methods, friend methods, and so on).
I agree with you that subclassing (for the purpose of reusing behavior), traits
(for adding behavior), and the like can be confused with classification to such an extent that modern designs tend to
depart from the type system and use subclassing for mere code organization.
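A hedged sketch, in Java, of the confusion being discussed, using the classic (hypothetical)
Rectangle/Square example: Square reuses Rectangle's implementation via subclassing, and the type
system happily treats it as a subtype, yet it breaks the behavioral contract that a caller relies
on when classifying by concept.

    /** Illustration only: implementation reuse (subclassing) is not valid classification (subtyping). */
    class Rectangle {
        protected int width, height;
        public void setWidth(int w)  { this.width = w; }
        public void setHeight(int h) { this.height = h; }
        public int area()            { return width * height; }
    }

    class Square extends Rectangle {
        @Override public void setWidth(int w)  { this.width = w; this.height = w; }  // keeps the square square...
        @Override public void setHeight(int h) { this.width = h; this.height = h; }  // ...but changes Rectangle's behavior
    }

    public class SubtypingDemo {
        // Written against the Rectangle "concept"; its assumption fails for Square.
        static int expectTwentyFour(Rectangle r) {
            r.setWidth(4);
            r.setHeight(6);
            return r.area();   // a caller classifying by concept expects 4 * 6 = 24
        }

        public static void main(String[] args) {
            System.out.println(expectTwentyFour(new Rectangle())); // 24
            System.out.println(expectTwentyFour(new Square()));    // 36: reuse != classification
        }
    }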
"was there really a point to the illusion of wrapping the entrypoint main() function in a class (I am looking at you,
Java)?"
Far be it from me to defend Java (I hate the damn thing), but: main is just a function in a class. The class is the
entry point, as specified on the command line; main is just what the JVM launcher looks for, by convention. You could have a "main"
in each class, but only the one in the specified class will be the entry point.
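A minimal sketch of that point, with hypothetical class names: main() is an ordinary static
method, several classes can each have one, and the class named on the command line is what
selects the entry point.

    // Compile both classes, then: "java EntryPoint" or "java Helper" picks the entry point.
    class Helper {
        public static void main(String[] args) {
            System.out.println("Helper.main: runs only via 'java Helper'");
        }
    }

    public class EntryPoint {
        public static void main(String[] args) {
            // 'java EntryPoint' starts here; Helper.main is ignored unless named explicitly.
            System.out.println("EntryPoint.main: runs via 'java EntryPoint'");
        }
    }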
The way of the theorist is to tell any non-theorist that the non-theorist is wrong, then leave without any explanation.
Or, simply hand-wave the explanation away, claiming it is "too complex" to fully understand without years of rigorous
training. Of course I jest. :)
The 80286 Intel processors: The Intel 80286[3] (also marketed as the iAPX 286[4] and often called Intel 286) is a 16-bit
microprocessor that was introduced on February 1, 1982. The 80286 was employed for the IBM PC/AT, introduced in 1984, and then
widely used in most PC/AT compatible computers until the early 1990s.
Notable quotes:
"... The fate of Boeing's civil aircraft business hangs on the re-certification of the 737 MAX. The regulators convened an international meeting to get their questions answered and Boeing arrogantly showed up without having done its homework. The regulators saw that as an insult. Boeing was sent back to do what it was supposed to do in the first place: provide details and analysis that prove the safety of its planes. ..."
"... In recent weeks, Boeing and the FAA identified another potential flight-control computer risk requiring additional software changes and testing, according to two of the government and pilot officials. ..."
"... Any additional software changes will make the issue even more complicated. The 80286 Intel processors the FCC software is running on is limited in its capacity. All the extras procedures Boeing now will add to them may well exceed the system's capabilities. ..."
"... The old architecture was possible because the plane could still be flown without any computer. It was expected that the pilots would detect a computer error and would be able to intervene. The FAA did not require a high design assurance level (DAL) for the system. The MCAS accidents showed that a software or hardware problem can now indeed crash a 737 MAX plane. That changes the level of scrutiny the system will have to undergo. ..."
"... Flight safety regulators know of these complexities. That is why they need to take a deep look into such systems. That Boeing's management was not prepared to answer their questions shows that the company has not learned from its failure. Its culture is still one of finance orientated arrogance. ..."
"... I also want to add that Boeing's focus on profit over safety is not restricted to the 737 Max but undoubtedly permeates the manufacture of spare parts for the rest of the their plane line and all else they make.....I have no intention of ever flying in another Boeing airplane, given the attitude shown by Boeing leadership. ..."
"... So again, Boeing mgmt. mirrors its Neoliberal government officials when it comes to arrogance and impudence. ..."
"... Arrogance? When the money keeps flowing in anyway, it comes naturally. ..."
"... In the neoliberal world order governments, regulators and the public are secondary to corporate profits. ..."
"... I am surprised that none of the coverage has mentioned the fact that, if China's CAAC does not sign off on the mods, it will cripple, if not doom the MAX. ..."
"... I am equally surprised that we continue to sabotage China's export leader, as the WSJ reports today: "China's Huawei Technologies Co. accused the U.S. of "using every tool at its disposal" to disrupt its business, including launching cyberattacks on its networks and instructing law enforcement to "menace" its employees. ..."
"... Boeing is backstopped by the Murkan MIC, which is to say the US taxpayer. ..."
"... Military Industrial Complex welfare programs, including wars in Syria and Yemen, are slowly winding down. We are about to get a massive bill from the financiers who already own everything in this sector, because what they have left now is completely unsustainable, with or without a Third World War. ..."
"... In my mind, the fact that Boeing transferred its head office from Seattle (where the main manufacturing and presumable the main design and engineering functions are based) to Chicago (centre of the neoliberal economic universe with the University of Chicago being its central shrine of worship, not to mention supply of future managers and administrators) in 1997 says much about the change in corporate culture and values from a culture that emphasised technical and design excellence, deliberate redundancies in essential functions (in case of emergencies or failures of core functions), consistently high standards and care for the people who adhered to these principles, to a predatory culture in which profits prevail over people and performance. ..."
"... For many amerikans, a good "offensive" is far preferable than a good defense even if that only involves an apology. Remember what ALL US presidents say.. We will never apologize.. ..."
"... Actually can you show me a single place in the US where ethics are considered a bastion of governorship? ..."
"... You got to be daft or bribed to use intel cpu's in embedded systems. Going from a motorolla cpu, the intel chips were dinosaurs in every way. ..."
"... Initially I thought it was just the new over-sized engines they retro-fitted. A situation that would surely have been easier to get around by just going back to the original engines -- any inefficiencies being less $costly than the time the planes have been grounded. But this post makes the whole rabbit warren 10 miles deeper. ..."
"... That is because the price is propped up by $9 billion share buyback per year . Share buyback is an effective scheme to airlift all the cash out of a company towards the major shareholders. I mean, who wants to develop reliable airplanes if you can funnel the cash into your pockets? ..."
"... If Boeing had invested some of this money that it blew on share buybacks to design a new modern plane from ground up to replace the ancient 737 airframe, these tragedies could have been prevented, and Boeing wouldn't have this nightmare on its hands. But the corporate cost-cutters and financial engineers, rather than real engineers, had the final word. ..."
"... Markets don't care about any of this. They don't care about real engineers either. They love corporate cost-cutters and financial engineers. They want share buybacks, and if something bad happens, they'll overlook the $5 billion to pay for the fallout because it's just a "one-time item." ..."
"... Overall, Boeing buy-backs exceeded 40 billion dollars, one could guess that half or quarter of that would suffice to build a plane that logically combines the latest technologies. E.g. the entire frame design to fit together with engines, processors proper for the information processing load, hydraulics for steering that satisfy force requirements in almost all circumstances etc. New technologies also fail because they are not completely understood, but when the overall design is logical with margins of safety, the faults can be eliminated. ..."
"... Once the buyback ends the dive begins and just before it hits ground zero, they buy the company for pennies on the dollar, possibly with government bailout as a bonus. Then the company flies towards the next climb and subsequent dive. MCAS economics. ..."
"... The problem is not new, and it is well understood. What computer modelling is is cheap, and easy to fudge, and that is why it is popular with people who care about money a lot. Much of what is called "AI" is very similar in its limitations, a complicated way to fudge up the results you want, or something close enough for casual examination. ..."
United Airlines and American Airlines further prolonged the
grounding of their Boeing 737 MAX airplanes. They now schedule the plane's return to the flight
line in December. But it is likely that the grounding will continue well into the next
year.
After Boeing's
shabby design and lack of safety analysis of its Maneuvering Characteristics Augmentation
System (MCAS) led to the death of 347 people, the grounding of the type and billions in losses,
one would expect the company to show some decency and humility. Unfortunately Boeing's behavior
demonstrates none.
There is still little detailed information on how Boeing will fix MCAS. Nothing was said by
Boeing about the manual trim system of the 737 MAX that
does not work when it is needed. The unprotected rudder cables of the plane
do not meet safety guidelines but were still certified. The plane's flight control computers
can be
overwhelmed by bad data and a fix will be difficult to implement. Boeing continues to say
nothing about these issues.
International flight safety regulators no longer trust the Federal Aviation Administration
(FAA) which failed to uncover those problems when it originally certified the new type. The FAA
was also the last regulator to ground the plane after two 737 MAX had crashed. The European
Aviation Safety Agency (EASA) asked Boeing to explain and correct
five major issues it identified. Other regulators asked additional questions.
Boeing needs to regain the trust of the airlines, pilots and passengers to be able to again
sell those planes. Only full and detailed information can achieve that. But the company does
not provide any.
As Boeing sells some 80% of its airplanes abroad it needs the good will of the international
regulators to get the 737 MAX back into the air. This makes the arrogance
it displayed in a meeting with those regulators inexplicable:
Friction between Boeing Co. and international air-safety authorities threatens a new delay in
bringing the grounded 737 MAX fleet back into service, according to government and pilot
union officials briefed on the matter.
The latest complication in the long-running saga, these officials said, stems from a
Boeing briefing in August that was cut short by regulators from the U.S., Europe, Brazil and
elsewhere, who complained that the plane maker had failed to provide technical details and
answer specific questions about modifications in the operation of MAX flight-control
computers.
The fate of Boeing's civil aircraft business hangs on the re-certification of the 737 MAX.
The regulators convened an international meeting to get their questions answered and Boeing
arrogantly showed up without having done its homework. The regulators saw that as an insult.
Boeing was sent back to do what it was supposed to do in the first place: provide details and
analysis that prove the safety of its planes.
What did the Boeing managers think those regulatory agencies are? Hapless lapdogs like the
FAA managers who
signed off on Boeing 'features' even after their engineers told them that these were not
safe?
Buried in the Wall Street Journal
piece quoted above is another little shocker:
In recent weeks, Boeing and the FAA identified another potential flight-control computer risk
requiring additional software changes and testing, according to two of the government and
pilot officials.
The new issue must be going beyond the flight control computer (FCC) issues the FAA
identified in June.
Boeing's original plan to fix the uncontrolled activation of MCAS was to have both FCCs
active at the same time and to switch MCAS off when the two computers disagree. That was
already a huge change in the general architecture which so far consisted of one active and one
passive FCC system that could be switched over when a failure occurred.
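A purely hypothetical sketch (not Boeing's implementation; the function names and the tolerance
value are invented for illustration) of what such a cross-check between two active channels could
look like: compare the two computers' MCAS trim commands and inhibit MCAS when they disagree.

    /** Hypothetical two-channel cross-check, for illustration of the architecture described above. */
    public class DualChannelDemo {
        static final double TOLERANCE = 0.1;      // invented disagreement threshold, in trim units

        /** Returns the trim command to apply, or 0.0 (MCAS inhibited) on disagreement. */
        static double crossCheckedMcasCommand(double channelA, double channelB) {
            if (Math.abs(channelA - channelB) > TOLERANCE) {
                return 0.0;                        // channels disagree: switch MCAS off
            }
            return (channelA + channelB) / 2.0;    // channels agree: use the averaged command
        }

        public static void main(String[] args) {
            System.out.println(crossCheckedMcasCommand(1.00, 1.05)); // agree: ~1.025
            System.out.println(crossCheckedMcasCommand(1.00, 2.00)); // disagree: 0.0 (inhibited)
        }
    }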
Any additional software changes will make the issue even more complicated. The 80286 Intel
processors the FCC software is running on are limited in their capacity. All the extra procedures
Boeing will now add to them may well exceed the system's capabilities.
Changing software in a delicate environment like a flight control computer is extremely
difficult. There will always be surprising side effects or regressions where already corrected
errors unexpectedly reappear.
The old architecture was possible because the plane could still be flown without any
computer. It was expected that the pilots would detect a computer error and would be able to
intervene. The FAA did not require a high design assurance level (DAL) for the system. The MCAS
accidents showed that a software or hardware problem can now indeed crash a 737 MAX plane. That
changes the level of scrutiny the system will have to undergo.
All procedures and functions of the software will have to be tested in all thinkable
combinations to ensure that they will not block or otherwise influence each other. This will
take months and there is a high chance that new issues will appear during these tests. They
will require more software changes and more testing.
Flight safety regulators know of these complexities. That is why they need to take a deep
look into such systems. That Boeing's management was not prepared to answer their questions
shows that the company has not learned from its failure. Its culture is still one of finance
orientated arrogance.
Building safe airplanes requires engineers who know that they may make mistakes and who have
the humility to allow others to check and correct their work. It requires open communication
about such issues. Boeing's say-nothing strategy will prolong the grounding of its planes. It
will increase the damage to Boeing's financial situation and reputation.
---
"The 80286 Intel processors the FCC software is running on is limited in its capacity."
You must be joking, right?
If this is the case, the problem is unfixable: you can't find two competent software
engineers who can program these dinosaur 16-bit processors.
You must be joking, right?
If this is the case, the problem is unfixable: you can't find two competent software
engineers who can program these dinosaur 16-bit processors.
One of the two is writing this.
Half-joking aside: the 737 MAX FCC runs on 80286 processors. There are tens of thousands of
programmers available who can program them, though not all are qualified to write real-time
systems. That resource is not a problem. The processors' inherent limits are one.
Thanks b for the fine 737 MAX update. Other news sources seem to have dropped coverage. It
is a very big deal that this grounding has lasted this long. Things are going to get real bad
for Boeing if this bird does not get back in the air soon. In any case their credibility is
tarnished if not downright trashed.
Whatever language these are programmed in (my guess is C), the compilers still
exist for it and do the translation from the human-readable code to the machine code for you.
Of course the code could be assembler, but writing assembly code for a 286 is far easier than
writing it for, say, an i9 because the CPU is so much simpler and has a far smaller set of
instructions to work with.
@b:
It was hyperbole.
I might be another one, but I left them behind as fast as I could. The last time I had to deal
with one was an embedded system in 1998-ish. But I am also retiring, and so are thousands of
others. The problems with supporting a legacy system are legendary.
I commented when you first started writing about this that it would take Boeing down and I
still believe that to be true. The extent to which Boeing is stonewalling the international
safety regulators says to me that upper management and big stockholders are being given time
to minimize their exposure before the axe falls.
I also want to add that Boeing's focus on profit over safety is not restricted to the 737
Max but undoubtedly permeates the manufacture of spare parts for the rest of their plane
line and all else they make..... I have no intention of ever flying in another Boeing
airplane, given the attitude shown by Boeing leadership.
This is how private financialization works in the Western world. Their bottom line is
profit, not service to the flying public. It is in line with the recent public statement by
the CEO's from the Business Roundtable that said that they were going to focus more on
customer satisfaction over profit but their actions continue to say profit is their primary
motive.
The God of Mammon private finance religion can not end soon enough for humanity's sake. It
is not like we all have to become China but their core public finance example is well worth
following.
So again, Boeing mgmt. mirrors its Neoliberal government officials when it comes to arrogance
and impudence. IMO, Boeing shareholders' hair ought to be on fire given their BoD's behavior
and getting ready to litigate.
As b notes, Boeing's international credibility's hanging by a
very thin thread. A year from now, Boeing could very well see its share price deeply dive
into the Penny Stock category--its current P/E is 41.5:1 which is massively overpriced.
Boeing Bombs might come to mean something vastly different from its initial meaning.
Such seemingly archaic processors are the norm in aerospace. If the plane's flight
characteristics had been properly engineered from the start, the processor wouldn't be an
issue. You can't just spray perfume on a garbage pile and call it a rose.
In the neoliberal world order governments, regulators and the public are secondary to
corporate profits. This is the same belief system that is suspending the British Parliament
to guarantee the chaos of a no-deal Brexit. The irony is that globalist Joe Biden's restart of
the Cold War and nationalist Donald Trump's trade wars both assure that foreign regulators
will closely scrutinize the safety of the 737 Max. Even if ignored by corporate media and
cleared by the FAA to fly in the USA, Boeing and Wall Street's Dow Jones average are cooked
geese with only 20% of the market. Taking the risk of flying the 737 Max on their family
vacation or to their next business trip might even get the credentialed class to realize that
their subservient service to corrupt Plutocrats is deadly in the long term.
"The latest complication in the long-running saga, these officials said, stems from a Boeing
briefing in August that was cut short by regulators from the U.S., Europe, Brazil
and elsewhere, who complained that the plane maker had failed to provide technical details
and answer specific questions about modifications in the operation of MAX flight-control
computers."
It seems to me that Boeing had no intention to insult anybody, but it has an impossible
task. After decades of applying duct tape and baling wire with much success, they finally
designed an unfixable plane, and they can either abandon this line of business (narrow bodied
airliners) or start working on a new design grounded in 21st century technologies.
Boeing's military sales are so much more significant and important to them, they are just
ignoring/down-playing their commercial problem with the 737 MAX. Follow the real money.
That is unbelievable: the flight control computer is based on an 80286!
A control system needs real-time operation, at least some pre-emptive task operation, in
terms of milliseconds or microseconds.
Whatever way you program an 80286, you cannot achieve RT operation on it.
I do not think that is the case.
Maybe the 80286 is doing some peripheral work, other than control.
It is quite likely (IMHO) that they are no longer able to provide the requested
information, but of course they cannot say that.
I once wrote a keyboard driver for an 80286, part of an editor, in assembler, on my first
PC type computer, I still have it around here somewhere I think, the keyboard driver, but I
would be rusty like the Titanic when it comes to writing code. I wrote some things in DEC
assembler too, on VAXen.
arata @16: 80286 does interrupts just fine, but you have to grok asynchronous operation, and
most coders don't really, I see that every day in Linux and my browser. I wish I could get
that box back, it had DOS, you could program on the bare wires, but God it was slow.
Boeing recently lost a $6+Billion weapons contract thanks to its similar Q&A in that
realm of its business. Its annual earnings are due out in October. Plan to short-sell
soon!
I am surprised that none of the coverage has mentioned the fact that, if China's CAAC does
not sign off on the mods, it will cripple, if not doom the MAX.
I am equally surprised that we continue to sabotage China's export leader, as the WSJ
reports today: "China's Huawei Technologies Co. accused the U.S. of "using every tool at its
disposal" to disrupt its business, including launching cyberattacks on its networks and
instructing law enforcement to "menace" its employees.
The telecommunications giant also said
law enforcement in the U.S. have searched, detained and arrested Huawei employees and its
business partners, and have sent FBI agents to the homes of its workers to pressure them to
collect information on behalf of the U.S."
I wonder how much blind trust in Boeing is intertwined into the fabric of civil aviation all
around the world.
I mean something like this: Boeing publishes some research into failure statistics, solid
materials aging or something similar - research that is really hard and expensive to carry out.
Everyone takes the results for granted without trying to independently reproduce and verify
them, because it is The Boeing!
Some later "derived" research is then built upon the foundation of prior works *including*
that old Boeing research. Then the FAA and similar institutions around the world issue official
regulations and guidelines deriving from research which was in part derived from the original
Boeing work. Then insurance companies calculate their tariffs and rate plans, basing their
estimates upon those "government standards", and when governments determine taxation levels
they use that data too. Then airline companies and airliner leasing companies make their
business plans, take huge loans from the banks (and the banks make their own plans expecting
those loans to finally be paid back), and so on and so forth, building the house of cards,
layer after layer.
And among the very many cornerstones there would be dust-covered and god-forgotten research
made by Boeing 10 or maybe 20 years ago, when no one even in a drunken delirium could ever
imagine questioning Boeing's verdicts on engineering and scientific matters.
Now that long-standing trust is slowly unraveling. The so universally trusted 737NG
generation turned out to be inherently unsafe, and while only pilots knew it before - and even
among them only the most curious and pedantic - today it is becoming public knowledge that the
737NG is tainted.
Now, when did this corruption start? What should be the cutoff date in the past, such that
every piece of technical data coming from Boeing since that day should be considered unreliable
unless it passes full-fledged independent verification? Should that day be somewhere in the
2000s? The 1990s? Maybe even the 1970s?
And ALL THE BODY of civil aviation industry knowledge that was accumulated since that date
can NO LONGER BE TRUSTED and should be almost scrapped and re-researched anew! ALL THE tacit
INPUT that can be traced back to Boeing and ALL THE DERIVED KNOWLEDGE now has to be verified
in its entirety.
Boeing is backstopped by the Murkan MIC, which is to say the US taxpayer. Until the lawsuits
become too enormous.
I wonder how much that will cost. And speaking of rigged markets - why do ya suppose that
Trumpilator et al have been
so keen to make huge sales to the Saudis, etc. etc. ? Ya don't suppose they had an inkling of
trouble in the wind do ya? Speaking of insiders, how many million billions do ya suppose is
being made in the Wall Street "trade war" roller coaster by peeps, munchkins not muppets, who
have access to the Tweeter-in-Chief?
I commented when you first started writing about this that it would take Boeing down and I
still believe that to be true. The extent to which Boeing is stonewalling the international
safety regulators says to me that upper management and big stockholders are being given
time to minimize their exposure before the axe falls.
Have you considered the costs, to the owners specifically, of restructuring versus breaking
Boeing apart and selling it off in little pieces?
The MIC is restructuring itself - by first creating the political conditions to make the
transformation highly profitable. It can only be made highly profitable by forcing the public
to pay the associated costs of Rape and Pillage Incorporated.
Military Industrial Complex welfare programs, including wars in Syria and Yemen, are
slowly winding down. We are about to get a massive bill from the financiers who already own
everything in this sector, because what they have left now is completely unsustainable, with
or without a Third World War.
It is fine that you won't fly Boeing but that is not the point. You may not ever fly again
since air transit is subsidized at every level and the US dollar will no longer be available
to fund the world's air travel infrastructure.
You will instead be paying for the replacement of Boeing, and seeing what Google is
planning, it may not be for the renewal of the airline business but rather for dedicated
ground transportation, self-driving cars and perhaps 'aerospace' defense forces - thank you,
Russia, for setting the trend.
As readers may remember, I made a case study of Boeing for a fairly recent PhD. The examiners
insisted that this case study be taken out because it was "speculative." I had forecast
serious problems with the 787 and the 737 MAX back in 2012. I still believe the 787 is
seriously flawed and will go the way of the MAX. I came to admire this once brilliant company
whose work culminated in the superb 777.
America really did make some excellent products in the 20th century - with the exception
of cars. Big money piled into GM from the early 1920s, especially the ultra greedy, quasi
fascist Du Pont brothers, with the result that GM failed to innovate. It produced beautiful
cars but technically they were almost identical to previous models.
The only real innovation
over 40 years was automatic transmission. Does this sound reminiscent of the 737 MAX? What
glued together GM for more than thirty years was the brilliance of CEO Alfred Sloan who
managed to keep the Du Ponts (and J P Morgan) more or less happy while delegating total
responsibility for production to divisional managers responsible for the different GM brands.
When Sloan went the company started falling apart and the memoirs of bad boy John DeLorean
testify to the complete dysfunction of senior management.
At Ford the situation was perhaps even worse in the 1960s and 1970s. Management was at war
with the workers, and faulty transmissions were knowingly installed. All this is documented by
ex-Ford supervisor Robert Dewar in his excellent book "A Savage Factory."
Well, the first thing that came to mind upon reading about Boeing's apparent arrogance
overseas - silly, I know - was that Boeing may be counting on some weird Trump sanctions
for anyone not cooperating with the big important USian corporation! The U.S. has influence
on European and many other countries, but it can only be stretched so far, and I would guess
messing with Euro/international airline regulators, especially in view of the very real fatal
accidents with the 737MAX, would be too far.
Please read the following article to get further info about how the 5 big Funds that hold 67%
of Boeing stocks are working hard with the big banks to keep the stock high. Meanwhile Boeing
is also trying its best to blackmail US taxpayers through the Pentagon, for example by
pretending to walk away from a competitive bidding contract because it wants the Air Force to
provide a better cost formula.
So basically, Boeing is being kept afloat by US taxpayers because it is "too big to fail"
and an important component of the Dow. Please tell: who are the biggest suckers here?
re Piotr Berman | Sep 3 2019 21:11 utc
[I have a tiny bit of standing in this matter based on experience with an amazingly similar
situation that has not heretofore been mentioned. More at end. Thus I offer my opinion.]
Indeed, an impossible task to design a workable answer and still maintain the fiction that the
737MAX is a high-profit-margin upgrade requiring minimal training of already-trained 737-series
pilots, either male or female.
Turning off the autopilot to bypass a runaway stabilizer necessitates:
[1]
the earlier 737-series "rollercoaster" procedure to overcome too-high aerodynamic forces
must be taught and demonstrated as a memory item to all pilots.
The procedure was designed
for the early 737-series models, not the 737MAX, which has a uniquely different center of gravity
and a pitch-up problem requiring MCAS to auto-correct, especially on take-off.
[2] but the "rollercoaster" procedure does not work at all altitudes.
It causes the aircraft to
lose some altitude and, therefore, requires at least [about] 7,000 feet of above-ground
clearance to avoid ground contact. [This altitude loss consumed by the procedure is based on
alleged reports of simulator demonstrations. There seems to be no known agreement on the
actual amount of loss].
[3] The physical requirements to perform the "rollercoaster" procedure were established at a
time when female pilots were rare.
Any 737MAX pilots, male or female, will have to pass new
physical requirements demonstrating actual conditions on newly-designed flight simulators
that mimic the higher load requirements of the 737MAX. Such new standards will also have to
compensate for left- vs. right-handed pilots because the manual-trim wheel is located between
the pilot/copilot seats.
================
Now where/when has a similar situation occurred? I.e., one wherein a Federal regulatory agency
[FAA] allowed a vendor [Boeing] to claim that a modified product did not need full
inspection/review to get agency certification of performance [airworthiness].
As you may know, two working nuclear power plants were forced to shut down and be
decommissioned when, in 2011, two newly-installed critical components in each plant were
discovered to be defective, beyond repair and not replaceable. These power plants were each
producing over 1,000 megawatts of power for over 20 years. In short, the failed components
were modifications of the original, successful design that claimed to need only a low-level
of Federal Nuclear Regulatory Commission oversight and approval. The mods were, in fact, new
and untried and yet only tested by computer modeling and theoretical estimations based on
experience with smaller/different designs.
<<< The NRC had not given full inspection/oversight to the new units because of
manufacturer/operator claims that the changes were not significant. The NRC did not verify
the veracity of those claims. >>>
All 4 components [2 required in each plant] were essentially heat-exchangers weighing 640
tons each, having 10,000 tubes carrying radioactive water surrounded by [transferring their
heat to] a separate flow of "clean" water. The tubes were progressively damaged and began
leaking. The new design failed. It can not be fixed. Thus, both plants of the San Onofre
Nuclear Generating Station are now a complete loss and await dismantling [as the courts will
decide who pays for the fiasco].
In my mind, the fact that Boeing transferred its head office from Seattle (where the main
manufacturing and presumably the main design and engineering functions are based) to Chicago
(centre of the neoliberal economic universe with the University of Chicago being its central
shrine of worship, not to mention supply of future managers and administrators) in 1997 says
much about the change in corporate culture and values from a culture that emphasised
technical and design excellence, deliberate redundancies in essential functions (in case of
emergencies or failures of core functions), consistently high standards and care for the
people who adhered to these principles, to a predatory culture in which profits prevail over
people and performance.
Good article. Boeing is, or used to be, America's biggest manufacturing export. So you are
right it cannot be allowed to fail. Boeing is also a manufacturer of military aircraft. The
fact that it is now in such a pitiful state is symptomatic of America's decline and decadence
and its takeover by financial predators.
They did the same with Nortel, whose share value exceeded 300 billion not long before it was
scrapped. Insiders took everything while pension funds were wiped out of existence.
It is so very helpful to understand everything you read is corporate/intel propaganda, and
you are always being set up to pay for the next great scam. The murder of 300+ people by
Boeing was yet another tragedy our sadistic elites could not let go to waste.
For many amerikans, a good "offensive" is far preferable to a good defense even if that
only involves an apology. Remember what ALL US presidents say.. We will never apologize.. For
the extermination of natives, for shooting down civilian airliners, for blowing up mosques
full of worshipers, for bombing hospitals.. for reducing many countries to the stone age and
using biological
and chemical and nuclear weapons against the planet.. For supporting terrorists who plague
the planet now. For basically being able to be unaccountable to anyone including themselves
as a peculiar race of feces. So it is not the least surprising that amerikan corporations
also follow the same bad manners as those they put into and pre-elect to rule them.
People talk about Seattle as if it's a bastion of integrity.. It's the same place Microsoft
screwed up countless companies to become the largest OS maker? The same place where Amazon
fashions how to screw its own employees to work longer and cheaper? There are enough examples
that Seattle is not Toronto.. and will never be a bastion of ethics..
Actually can you show
me a single place in the US where ethics are considered a bastion of governorship? Other than
the libraries of content written about ethics, rarely do amerikans ever follow it. Yet expect
others to do so.. This is getting so perverse that other cultures are now beginning to
emulate it. Because it's everywhere..
Remember Dallas? I watched people who saw in fascination
how business could function like that. Well, they can't in the long run, but throw enough money
and resources at it and it works wonders in the short term because it destroys the competition. But
yeah, around 1998 when they got rid of the laws on making money by magic, most everything has
gone to hell.. because now there are no constraints but making money.. any which way.. That's
all that matters..
You got to be daft or bribed to use Intel CPUs in embedded systems. Coming from a Motorola
CPU, the Intel chips were dinosaurs in every way, requiring the CPU to be almost twice as
fast to get the same thing done.. Also, their interrupt control was not up to par. A simple
example was how the Commodore Amiga could read from the disk and not stutter or slow down
anything else you were doing. I never saw this fixed.. In fact going from 8 MHz to 4 GHz seems
to have fixed it by brute force. Yes, the 8 MHz Motorola CPU worked wonders when you had
music, video and I/O all going at the same time. It's not just the CPU but the support chips which
don't lock up the bus. Why would anyone use Intel when there are so many specific embedded
controllers designed for such specific things?
Initially I thought it was just the new over-sized engines they retro-fitted. A situation
that would surely have been easier to get around by just going back to the original engines
-- any inefficiencies being less $costly than the time the planes have been grounded. But
this post makes the whole rabbit warren 10 miles deeper.
I do not travel much these days and
find the cattle-class seating on these planes a major disincentive. Becoming aware of all
these added technical issues I will now positively select for alternatives to 737 and bear
the cost.
I'm surprised Boeing stock still hasn't taken a nosedive
Posted by: Bob burger | Sep 3 2019 19:27 utc | 9
That is because the price is propped up by
$9 billion in share buybacks per year. Share buybacks are an effective scheme to airlift all
the cash out of a company towards the major shareholders. I mean, who wants to develop
reliable airplanes if you can funnel the cash into your pockets?
Once the buyback ends the dive begins and just before it hits ground zero, they buy the
company for pennies on the dollar, possibly with government bailout as a bonus. Then the
company flies towards the next climb and subsequent dive. MCAS economics.
Hi, I am new here in writing but not in reading..
About the 80286: where is the coprocessor, the 80287?
How can the 80286 do IEEE math calculations?
So how can it fly a controlled flight when it cannot calculate with accuracy......
How is it possible that this system is certified?
It should have at least an 80386 DX, not SX!!!!
moved to Chicago in 1997 says much about the change in corporate culture and values from a
culture that emphasised technical and design excellence, deliberate redundancies in essential
functions (in case of emergencies or failures of core functions), consistently high standards
and care for the people who adhered to these principles, to a predatory culture in which
profits prevail over people and performance.
Jen @ 35
< ==
yes, the morality of the companies and their exclusive hold on a complicit or
controlled government always defaults the government to support, enforce and encourage the
principles of economic Zionism.
But it is more than just the corporate culture => the corporate fat cats
1. They use the rule-making powers of the government to make law for them. Such laws create
high-valued assets from the pockets of the masses. The most well known of those corporate uses of
government involves the intangible property laws (copyright, patent, and government
franchise). The government-generated copyright, franchise and patent laws are monopolies. So
when the government subsidizes a successful R&D project, its findings are packaged up
into a set of monopolies [copyrights, patents, privatized government franchises], which means that
instead of 50 or more companies competing for the next increment in technology, only one gains
the full advantage of that government research and only one can use or abuse it. And the patented
and copyrighted technology is used to extract untold billions, in small increments, from the
pockets of the public.
2. They use the judicial power of governments and their courts, in both domestic and
international settings, to police the use of, and to impose fake values on, intangible property
monopolies. Privately owned monopoly rights made by government rule (intangible property rights),
generated from the pockets of the masses, do two things: they exclude, deny and prevent would-be
competition, and they make their value in a hidden revenue tax that passes to the privately held
monopolist with each sale of a copyrighted, government-franchised, or patented service or
product.
Please note the one-two nature of the "use of government law-making powers to generate
intangible private monopoly property rights".
There is no doubt Boeing has committed crimes on the 737MAX; its arrogance & greed
should be severely punished by the international community as an example to other global
corporations. It represents the worst of corporate America, which places profits in
front of lives.
How is the U.S. keeping Russia out of the international market?
Iran and other sanctioned countries are a potential captive market and they have growth
opportunities in what we sometimes call the non-aligned, emerging markets countries (Turkey,
Africa, SE Asia, India, ...).
One thing I have learned is that the U.S. always games the system; we never play fair. So
what did we do? Do their manufacturers use 1% U.S.-made parts, and do they need that for
international certification?
Ultimately all of the issues in the news these days are one and the
same issue - as the US gets closer and closer to the brink of catastrophic collapse they get
ever more desperate. As they get more and more desperate they descend into what comes most
naturally to the US - throughout its entire history - frenzied violence, total absence of
morality, war, murder, genocide, and everything else that the US is so well known for (by
those who are not blinded by exceptionalist propaganda).
The Hong Kong violence is a perfect example - it is impossible that a self-respecting
nation state could allow itself to be seen to degenerate into such idiotic degeneracy, and so
grossly flout the most basic human decency. Ergo, the US is not a self-respecting
nation state. It is a failed state.
I am certain the arrogance of Boeing reflects two things: (a) an assurance from the US
government that the government will back them to the hilt, come what may, to make sure that
the 737Max flies again; and (b) a threat that if Boeing fails to get the 737Max in the air
despite that support, the entire top level management and board of directors will be jailed.
Boeing know very well they cannot deliver. But just as the US government is desperate
to avoid the inevitable collapse of the US, the Boeing top management are desperate to avoid
jail. It is a charade.
It is time for international regulators to withdraw certification totally - after the
problems are all fixed (I don't believe they ever will be), the plane needs complete new
certification of every detail from the bottom up, at Boeing's expense, and with total
openness from Boeing. The current Boeing management are not going to cooperate with that,
therefore the international regulators need to demand a complete replacement of the
management and board of directors as a condition for working with them.
If Boeing had invested some of this money that it blew on share buybacks to design a new
modern plane from ground up to replace the ancient 737 airframe, these tragedies could have
been prevented, and Boeing wouldn't have this nightmare on its hands. But the corporate
cost-cutters and financial engineers, rather than real engineers, had the final word.
Markets don't care about any of this. They don't care about real engineers either. They
love corporate cost-cutters and financial engineers. They want share buybacks, and if
something bad happens, they'll overlook the $5 billion to pay for the fallout because it's
just a "one-time item."
And now Boeing still has this plane, instead of a modern plane, and the history of this
plane is now tainted, as is its brand, and by extension, that of Boeing. But markets blow
that off too. Nothing matters.
Companies are each getting away with their own thing. There are companies that are losing
a ton of money and are burning tons of cash, with no indications that they will ever make
money. And market valuations are just ludicrous.
======
Thus the Boeing issue is part of a much larger picture. Something systemic had to make
"markets" less rational. And who is this "market"? In large part, fund managers racking
their brains over how to create a "decent return" while the cost of borrowing and returns on lending
are super low. What remains are forms of real estate and stocks.
Overall, Boeing buy-backs exceeded 40 billion dollars, one could guess that half or
quarter of that would suffice to build a plane that logically combines the latest
technologies. E.g. the entire frame design to fit together with engines, processors proper
for the information processing load, hydraulics for steering that satisfy force requirements
in almost all circumstances etc. New technologies also fail because they are not completely
understood, but when the overall design is logical with margins of safety, the faults can be
eliminated.
Instead, the 737 was slowly modified toward failure, eliminating safety margins one by
one.
Boeing has apparently either never heard of, or ignores a procedure that is mandatory in
satellite design and design reviews. This is FMEA or Failure Modes and Effects Analysis. This
requires design engineers to document the impact of every potential failure and combination
of failures, thereby highlighting everything from catastrophic effects to mere annoyances.
Clearly Boeing has done none of this, and their troubles are a direct result. It can be
assumed that their arrogant and incompetent management has not yet understood just how
serious their behavior is to the future of the company.
Computer modelling is what they are talking about in the cliche "Garbage in, garbage out".
The problem is not new, and it is well understood. What computer modelling is is cheap,
and easy to fudge, and that is why it is popular with people who care about money a lot. Much
of what is called "AI" is very similar in its limitations, a complicated way to fudge up the
results you want, or something close enough for casual examination.
In particular cases where you have a well-defined and well-mathematized theory, you
can get some useful results with models - as in physics or chemistry.
And they can be useful for "realistic" training situations, like aircraft simulators. The
old story about wargame failures against Iran is another such situation. A lot of video games
are big simulations in essence. But that is not reality, it's fake reality.
@ SteveK9 71 "By the way, the problem was caused by Mitsubishi, who designed the heat
exchangers."
Ahh. The furriners...
I once made the "mistake" of pointing out (in a comment under an article in Salon) that the
reactors that exploded at Fukushima were made by GE and that GE people were still in charge of
those reactors of American quality when they exploded. (The Americans got out on one of the
first planes out of the country.)
I have never seen so many angry replies to one of my comments. I even got e-mails for several weeks from angry Americans.
@Henkie #53
You need floating point for scientific calculations, but I really doubt the 737 is doing any
scientific research.
Also, a regular CPU can do mathematical calculations. It just isn't as fast, nor does it have
the same capacity, as a dedicated FPU.
Another common use for FPUs is in live-action shooter games -- the physics portions
use scientific-style calculations to create lifelike motion. I sold computer systems in
the 1990s while in school -- Doom was a significant driver for newer systems (as well as hedge
fund types).
Again, I don't see why an airplane needs this.
Interesting. Sorry you had that experience. I'm not sure what you mean by a "multi-line
text widget". I can tell you that early versions of OpenMOTIF were very, very buggy in my
experience. You probably know this, but after OpenMOTIF was completed and revved a few times,
the original MOTIF code was released as open source. Many of the bugs I'd been seeing (and
some just strange visual artifacts) disappeared. I know a lot of people love Qt and it's
produced real apps and real results -- I won't poo-poo it.
Even if you don't like C++ much, The Design and
Evolution of C++ [amazon.com] is a great book for understanding why pretty much any
language ends up the way it does, seeing the tradeoffs and how a language comes to grow and
expand from simple roots. It's way more interesting to read than you might expect (not very
dry, and more about human interaction than you would expect).
Other than that, reading through back posts of a lot of coding blogs that have been around
a long time is probably a really good idea.
You young whippersnappers don't 'preciate how good you have it!
Back in my day, the only book about programming was the 1401 assembly language manual!
But seriously, folks, it's pretty clear we still don't know shite about how to program
properly. We have some fairly clear success criteria for improving the hardware, but the
criteria for good software are clear as mud, and the criteria for ways to produce good
software are much muddier than that.
Having said that, I will now peruse the thread rather carefully.
Couldn't find any mention of Guy Steele, so I'll throw in The New Hacker's
Dictionary , which I once owned in dead tree form. Not sure if Version 4.4.7 http://catb.org/jargon/html/ [catb.org] is
the latest online... Also remember a couple of his language manuals. Probably used the Common
Lisp one the most...
Didn't find any mention of a lot of books that I consider highly relevant, but that may
reflect my personal bias towards history. Not really relevant for most programmers.
I've been programming for the past ~40 years and I'll try to summarize what I believe are
the most important bits about programming (pardon the pun.) Think of this as a META: " HOWTO:
Be A Great Programmer " summary. (I'll get to the books section in a bit.)
1. All code can be summarized as a trinity of fundamental concepts (a short C sketch follows this list):
* Linear ; that is, sequence: A, B, C
* Cyclic ; that is, unconditional jumps: A-B-C-goto B
* Choice ; that is, conditional jumps: if A then B
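As a concrete (if trivial) illustration of the trinity -- my own sketch, not the original poster's -- here are all three concepts in a few lines of C: the statements execute in sequence, the for loop cycles, and the if makes a choice:

#include <stdio.h>

int main(void)
{
    int total = 0;                  /* Linear: statements run one after another */

    for (int i = 1; i <= 10; i++) { /* Cyclic: the loop body repeats             */
        if (i % 2 == 0)             /* Choice: a conditional jump                */
            total += i;             /* only even numbers are added               */
    }

    printf("sum of even numbers 1..10 = %d\n", total);  /* prints 30 */
    return 0;
}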
2. ~80% of programming is NOT about code; it is about Effective Communication. Whether
that be:
* with your compiler / interpreter / REPL
* with other code (levels of abstraction, level of coupling, separation of concerns,
etc.)
* with your boss(es) / manager(s)
* with your colleagues
* with your legal team
* with your QA dept
* with your customer(s)
* with the general public
The other ~20% is effective time management and design. A good programmer knows how to
budget their time. Programming is about balancing the three conflicting goals of the
Project Management Triangle [wikipedia.org]: you can have it on time, on budget, or on quality.
Pick two.
3. Stages of a Programmer
There are two old jokes:
In Lisp all code is data. In Haskell all data is code.
And:
Progression of a (Lisp) Programmer:
* The newbie realizes that the difference between code and data is trivial.
* The expert realizes that all code is data.
* The true master realizes that all data is code.
(Attributed to Aristotle Pagaltzis)
The point of these jokes is that as you work with systems you start to realize that a
data-driven process can often greatly simplify things.
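A small sketch of what "data-driven" means in practice (mine, in C, with invented names): the command table below is plain data, and adding a new command means adding a row rather than another branch of control flow:

#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;
    void (*handler)(void);
} Command;

static void do_help(void)    { puts("commands: help, version"); }
static void do_version(void) { puts("demo 1.0"); }

/* The behaviour of the program lives in this table of data. */
static const Command commands[] = {
    { "help",    do_help    },
    { "version", do_version },
};

int main(int argc, char **argv)
{
    if (argc < 2) { do_help(); return 1; }

    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++) {
        if (strcmp(argv[1], commands[i].name) == 0) {
            commands[i].handler();
            return 0;
        }
    }
    printf("unknown command: %s\n", argv[1]);
    return 1;
}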
4. Know Thy Data
Fred Brooks once wrote:
"Show me your flowcharts (source code), and conceal your tables (domain model), and I shall
continue to be mystified; show me your tables (domain model) and I won't usually need your
flowcharts (source code): they'll be obvious."
A more modern version would read like this:
Show me your code and I'll have to see your data,
Show me your data and I won't have to see your code.
The importance of data can't be overstated:
* Optimization STARTS with understanding HOW the data is being generated and used, NOT with the
code, as has traditionally been taught.
* Post-2000, "Big Data" has been called the new oil. We are generating upwards of millions of
GB of data every second. Analyzing that data is important for spotting trends and potential
problems.
5. There are three levels of optimizations. From slowest to fastest run-time:
a) Bit-twiddling hacks
[stanford.edu]
b) Algorithmic -- Algorithmic complexity or Analysis of algorithms
[wikipedia.org] (such as Big-O notation)
c) Data-Oriented
Design [dataorienteddesign.com] -- Understanding how hardware caches, such as instruction
and data caches, matter. Optimize for the common case, NOT the single case that OOP tends to
favor.
Optimizing is about understanding bang-for-the-buck: 80% of execution time is spent in 20% of
the code. Speeding up hot spots with bit twiddling won't be as effective as using a more
efficient algorithm, which, in turn, won't be as effective as understanding HOW the data is
manipulated in the first place.
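A quick illustration of that ranking (my own example, not the poster's): no amount of micro-tuning the loop below can beat simply choosing a better algorithm, and in memory-bound code the data layout matters more still.

#include <stdio.h>

/* Micro-optimized or not, this is still O(n). */
static unsigned long long sum_loop(unsigned long long n)
{
    unsigned long long total = 0;
    for (unsigned long long i = 1; i <= n; i++)
        total += i;
    return total;
}

/* A better algorithm: Gauss's closed form, O(1). */
static unsigned long long sum_formula(unsigned long long n)
{
    return n * (n + 1) / 2;
}

int main(void)
{
    unsigned long long n = 100000000ULL;       /* 100 million terms */
    printf("loop:    %llu\n", sum_loop(n));    /* slow              */
    printf("formula: %llu\n", sum_formula(n)); /* instant           */
    return 0;
}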
6. Fundamental Reading
Since the OP specifically asked about books -- there are lots of great ones. The ones that
have impressed me that I would mark as "required" reading:
* The Mythical Man-Month
* Godel, Escher, Bach
* Knuth: The Art of Computer Programming
* The Pragmatic Programmer
* Zero Bugs and Program Faster
* Writing Solid Code by Steve Maguire / Code Complete by Steve McConnell
* Game Programming
Patterns [gameprogra...tterns.com] (*)
* Game Engine Design
* Thinking in Java by Bruce Eckel
* Puzzles for Hackers by Ivan Sklyarov
(*) I did NOT list Design Patterns: Elements of Reusable Object-Oriented Software,
as that leads to typical, bloated, over-engineered crap. The main problem with "Design
Patterns" is that a programmer will often get locked into a mindset of seeing
everything as a pattern -- even when a simple few lines of code would solve the
problem. For example, here is 1,100+ lines of Crap++ code, Boost's over-engineered
CRC code
[boost.org], when a mere ~25 lines of SIMPLE C code would have done the trick. When was the
last time you ACTUALLY needed to _modify_ a CRC function? The BIG picture is that you are
probably looking for a BETTER HASHING function with fewer collisions. You would probably be
better off using a DIFFERENT algorithm such as SHA-2, etc.
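For reference, the kind of short CRC routine being alluded to looks roughly like this -- a generic bitwise CRC-32 over the common reflected polynomial 0xEDB88320, written here as an illustration rather than taken from the post or from Boost:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32 (the usual IEEE polynomial, reflected form).
 * Slower than a table-driven version, but short and easy to audit. */
uint32_t crc32(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
    }
    return ~crc;
}

int main(void)
{
    const char *msg = "123456789";
    printf("%08X\n", crc32(msg, strlen(msg))); /* expected check value: CBF43926 */
    return 0;
}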
7. Do NOT copy-pasta
Roughly 80% of bugs creep in because someone blindly copy-pasted without thinking. Type
out ALL code so you actually THINK about what you are writing.
8. K.I.S.S.
Over-engineering, aka technical debt, will be your Achilles' heel. Keep It Simple,
Silly.
9. Use DESCRIPTIVE variable names
You spend ~80% of your time READING code, and only ~20% writing it. Use good, descriptive
variable names. Far too many programmers write useless comments and don't understand the
difference between code and comments:
Code says HOW, comments say WHY
A crap comment will say something like: // increment i
No shit, Sherlock! Don't comment the obvious!
A good comment will say something like: // BUGFIX: 1234:
Work-around issues caused by A, B, and C.
10. Ignoring Memory Management doesn't make it go away -- now you have two problems. (With
apologies to JWZ.)
11. If you don't understand both the pros and cons of these programming paradigms
...
* Procedural
* Object-Oriented
* Functional, and
* Data-Oriented Design
... then you will never really understand programming, nor abstraction,
at a deep level, along with how and when each should and shouldn't be used.
12. Multi-disciplinary POV
ALL non-trivial code has bugs. If you aren't using static code analysis
[wikipedia.org] then you are not catching as many bugs as the people who are.
Also, a good programmer looks at his code from many different angles. As a programmer you
must put on many different hats to find them:
* Architect -- design the code
* Engineer / Construction Worker -- implement the code
* Tester -- test the code
* Consumer -- doesn't see the code, only sees the results. Does it even work?? Did you VERIFY
it did BEFORE you checked your code into version control?
13. Learn multiple Programming Languages
Each language was designed to solve certain problems. Learning different languages, even
ones you hate, will expose you to different concepts. E.g., if you don't know how to read
assembly language AND your high-level language, then you will never be as good as the
programmer who does both.
14. Respect your Colleagues' and Consumers' Time, Space, and Money.
Mobile games are the WORST at respecting people's time, space and money, turning "players
into payers." They treat customers as whales. Don't do this. A practical example: if you are
in a Slack channel with 50+ people, do NOT use @here. YOUR fire is not their emergency!
15. Be Passionate
If you aren't passionate about programming, that is, you are only doing it for the money,
it will show. Take some pride in doing a GOOD job.
16. Perfect Practice Makes Perfect.
If you aren't programming every day you will never be as good as someone who is.
Programming is about solving interesting problems. Practice solving puzzles to develop your
intuition and lateral thinking. The more you practice the better you get.
"Sorry" for the book but I felt it was important to summarize the "essentials" of
programming.
--
Hey Slashdot. Fix your shitty filter so long lists can be posted.: "Your comment has too few
characters per line (currently 37.0)."
You crammed a lot of good ideas into a short post.
I'm sending my team at work a link to your post.
You mentioned that code can be data. Linus Torvalds had this to say:
"I'm a huge proponent of designing your code around the data, rather than the other way
around, and I think it's one of the reasons git has been fairly successful [...] I
will, in fact, claim that the difference between a bad programmer and a good one is whether
he considers his code or his data structures more important."
"Bad programmers worry about the code. Good programmers worry about data structures and
their relationships."
I'm inclined to agree. Once the data structure is right, the code often almost writes
itself. It'll be easy to write and easy to read, because it's obvious how one would handle
data structured in that elegant way.
Writing the code necessary to transform the data from the input format into the right
structure can be non-obvious, but it's normally worth it.
I learnt to program at school from a Ph.D. computer scientist. We never even had computers
in the class. We learnt to break the problem down into sections using flowcharts or
pseudo-code, and then we would translate that program into whatever coding language we were
using. I still do this, usually in my notebook, where I figure out all the things I need to do,
then write the skeleton of the code as a series of comments describing what each section of
my program does, and then fill in the code for each section. It is a combination of top-down and
bottom-up programming, writing routines that can be independently tested and validated.
While the OO critique is good (although most points are far from new) and to the point, the proposed solution is not.
There is no universal opener for creating elegant, reliable programs.
Notable quotes:
"... Object-Oriented Programming has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization. There's no objective and open evidence that OOP is better than plain procedural programming. ..."
"... The bitter truth is that OOP fails at the only task it was intended to address. It looks good on paper -- we have clean hierarchies of animals, dogs, humans, etc. However, it falls flat once the complexity of the application starts increasing. Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns . OOP makes common development practices, like refactoring and testing, needlessly hard. ..."
"... C++ is a horrible [object-oriented] language And limiting your project to C means that people don't screw things up with any idiotic "object model" c&@p. -- Linus Torvalds, the creator of Linux ..."
"... Many dislike speed limits on the roads, but they're essential to help prevent people from crashing to death. Similarly, a good programming framework should provide mechanisms that prevent us from doing stupid things. ..."
"... Unfortunately, OOP provides developers too many tools and choices, without imposing the right kinds of limitations. Even though OOP promises to address modularity and improve reusability, it fails to deliver on its promises (more on this later). OOP code encourages the use of shared mutable state, which has been proven to be unsafe time and time again. OOP typically requires a lot of boilerplate code (low signal-to-noise ratio). ..."
The ultimate goal of every software developer should be to write reliable code. Nothing else matters if the code is buggy
and unreliable. And what is the best way to write code that is reliable? Simplicity . Simplicity is the opposite of complexity
. Therefore our first and foremost responsibility as software developers should be to reduce code complexity.
Disclaimer
I'll be honest, I'm not a raving fan of object-orientation. Of course, this article is going to be biased. However, I have good
reasons to dislike OOP.
I also understand that criticism of OOP is a very sensitive topic -- I will probably offend many readers. However, I'm doing what
I think is right. My goal is not to offend, but to raise awareness of the issues that OOP introduces.
I'm not criticizing Alan Kay's OOP -- he is a genius. I wish OOP was implemented the way he designed it. I'm criticizing the modern
Java/C# approach to OOP.
I will also admit that I'm angry. Very angry. I think that it is plain wrong that OOP is considered the de-facto standard for
code organization by many people, including those in very senior technical positions. It is also wrong that many mainstream languages
don't offer any other alternatives to code organization other than OOP.
Hell, I used to struggle a lot myself while working on OOP projects. And I had no single clue why I was struggling this much.
Maybe I wasn't good enough? I had to learn a couple more design patterns (I thought)! Eventually, I got completely burned out.
This post sums up my first-hand decade-long journey from Object-Oriented to Functional programming. I've seen it all. Unfortunately,
no matter how hard I try, I can no longer find use cases for OOP. I have personally seen OOP projects fail because they become too
complex to maintain.
TLDR
Object oriented programs are offered as alternatives to correct ones --
Edsger
W. Dijkstra , pioneer of computer science
Object-Oriented Programming has been created with one goal in mind -- to manage the complexity of procedural codebases. In
other words, it was supposed to improve code organization.
There's no objective and open evidence that OOP is better than plain procedural programming.
The bitter truth is that OOP fails at the only task it was intended to address. It looks good on paper -- we have clean hierarchies
of animals, dogs, humans, etc. However, it falls flat once the complexity of the application starts increasing. Instead of reducing
complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns
. OOP makes common development practices, like refactoring and testing, needlessly hard.
Some might disagree with me, but the truth is that modern OOP has never been properly designed. It never came out of a proper
research institution (in contrast with Haskell/FP). I do not consider Xerox or another enterprise to be a "proper research institution".
OOP doesn't have decades of rigorous scientific research to back it up. Lambda calculus offers a complete theoretical foundation
for Functional Programming. OOP has nothing to match that. OOP mainly "just happened".
Using OOP is seemingly innocent in the short-term, especially on greenfield projects. But what are the long-term consequences
of using OOP? OOP is a time bomb, set to explode sometime in the future when the codebase gets big enough.
Projects get delayed, deadlines get missed, developers get burned-out, adding in new features becomes next to impossible.
The organization labels the codebase as the "legacy codebase" , and the development team plans a rewrite .
OOP is not natural for the human brain, our thought process is centered around "doing" things -- go for a walk, talk to a friend,
eat pizza. Our brains have evolved to do things, not to organize the world into complex hierarchies of abstract objects.
OOP code is non-deterministic -- unlike with functional programming, we're not guaranteed to get the same output given the same
inputs. This makes reasoning about the program very hard. As an oversimplified example, the output of 2+2 or calculator.Add(2,
2) mostly is equal to four, but sometimes it might become equal to three, five, and maybe even 1004. The dependencies of the
Calculator object might change the result of the computation in subtle, but profound ways.
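To make the calculator example concrete, here is a contrived C sketch (mine; the names are invented): the "object" version carries hidden mutable state, so the same call can stop returning four, while the plain function cannot:

#include <stdio.h>

typedef struct {
    int rounding_bias;   /* hidden dependency that other code may tweak */
} Calculator;

int calc_add(Calculator *c, int a, int b)
{
    return a + b + c->rounding_bias;   /* same inputs, different outputs */
}

int add(int a, int b)                  /* pure: depends only on its inputs */
{
    return a + b;
}

int main(void)
{
    Calculator c = { 0 };
    printf("%d\n", calc_add(&c, 2, 2)); /* 4 */
    c.rounding_bias = 1;                /* someone, somewhere, mutates state */
    printf("%d\n", calc_add(&c, 2, 2)); /* 5 -- "2 + 2" changed its mind */
    printf("%d\n", add(2, 2));          /* always 4 */
    return 0;
}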
The Need for a Resilient Framework
I know, this may sound weird, but as programmers, we shouldn't trust ourselves to write reliable code. Personally, I am unable
to write good code without a strong framework to base my work on. Yes, there are frameworks that concern themselves with some very
particular problems (e.g. Angular or ASP.Net).
I'm not talking about the software frameworks. I'm talking about the more abstract dictionary definition of a framework:
"an essential supporting structure " -- frameworks that concern themselves with the more abstract things like code organization
and tackling code complexity. Even though Object-Oriented and Functional Programming are both programming paradigms, they're also
both very high-level frameworks.
Limiting our choices
C++ is a horrible [object-oriented] language. And limiting your project to C means that people don't screw things up with any
idiotic "object model" c&@p.
-- Linus Torvalds, the creator of Linux
Linus Torvalds is widely known for his open criticism of C++ and OOP. One thing he was 100% right about is limiting programmers
in the choices they can make. In fact, the fewer choices programmers have, the more resilient their code becomes. In the quote above,
Linus Torvalds highly recommends having a good framework to base our code upon.
Many dislike speed limits on the roads, but they're essential to help prevent people from crashing to death. Similarly, a good
programming framework should provide mechanisms that prevent us from doing stupid things.
A good programming framework helps us to write reliable code. First and foremost, it should help reduce complexity by providing
the following things:
Modularity and reusability
Proper state isolation
High signal-to-noise ratio
Unfortunately, OOP provides developers too many tools and choices, without imposing the right kinds of limitations. Even though
OOP promises to address modularity and improve reusability, it fails to deliver on its promises (more on this later). OOP code encourages
the use of shared mutable state, which has been proven to be unsafe time and time again. OOP typically requires a lot of boilerplate
code (low signal-to-noise ratio).
... ... ...
Messaging
Alan Kay coined the term "Object Oriented Programming" in the 1960s. He had a background in biology and was attempting to make
computer programs communicate the same way living cells do.
Alan Kay's big idea was to have independent programs (cells) communicate by sending messages to each other. The state of
the independent programs would never be shared with the outside world (encapsulation).
That's it. OOP was never intended to have things like inheritance, polymorphism, the "new" keyword, and the myriad of design
patterns.
OOP in its purest form
Erlang is OOP in its purest form. Unlike more mainstream languages, it focuses on the core idea of OOP -- messaging. In
Erlang, objects communicate by passing immutable messages between objects.
Is there proof that immutable messages are a superior approach compared to method calls?
Hell yes! Erlang is probably the most reliable language in the world. It powers most of the world's telecom (and
hence the internet) infrastructure. Some of the systems written in Erlang have reliability of 99.9999999% (you read that right --
nine nines).
Code Complexity
With OOP-inflected programming languages, computer software becomes more verbose, less readable, less descriptive, and harder
to modify and maintain.
The most important aspect of software development is keeping the code complexity down. Period. None of the fancy features
matter if the codebase becomes impossible to maintain. Even 100% test coverage is worth nothing if the codebase becomes too complex
and unmaintainable .
What makes the codebase complex? There are many things to consider, but in my opinion, the top offenders are: shared mutable state,
erroneous abstractions, and low signal-to-noise ratio (often caused by boilerplate code). All of them are prevalent in OOP.
The Problems of State
What is state? Simply put, state is any temporary data stored in memory. Think variables or fields/properties in OOP. Imperative
programming (including OOP) describes computation in terms of the program state and changes to that state. Declarative (functional)
programming describes the desired results instead, and doesn't specify changes to the state explicitly.
... ... ...
To make the code more efficient, objects are passed not by their value, but by their reference . This is where "dependency
injection" falls flat.
Let me explain. Whenever we create an object in OOP, we pass references to its dependencies to the constructor .
Those dependencies also have their own internal state. The newly created object happily stores references to those dependencies
in its internal state and is then happy to modify them in any way it pleases. And it also passes those references down to anything
else it might end up using.
This creates a complex graph of promiscuously shared objects that all end up changing each other's state. This, in turn, causes
huge problems since it becomes almost impossible to see what caused the program state to change. Days might be wasted trying
to debug such state changes. And you're lucky if you don't have to deal with concurrency (more on this later).
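The problem can be shown even without classes; here is a minimal C sketch (my own, with made-up names) of two components that were "injected" with a reference to the same dependency:

#include <stdio.h>

typedef struct { double discount; } PricingPolicy;    /* shared mutable state   */

typedef struct { PricingPolicy *policy; } Checkout;   /* both components hold a */
typedef struct { PricingPolicy *policy; } AdminPanel; /* reference to it        */

double checkout_total(const Checkout *c, double base)
{
    return base * (1.0 - c->policy->discount);
}

void admin_set_discount(AdminPanel *a, double d)
{
    a->policy->discount = d;   /* silently changes Checkout's behaviour too */
}

int main(void)
{
    PricingPolicy shared = { 0.10 };
    Checkout   checkout = { &shared };
    AdminPanel admin    = { &shared };

    printf("total: %.2f\n", checkout_total(&checkout, 100.0)); /* 90.00      */
    admin_set_discount(&admin, 0.50);                          /* far away...*/
    printf("total: %.2f\n", checkout_total(&checkout, 100.0)); /* now 50.00  */
    return 0;
}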
Methods/Properties
The methods or properties that provide access to particular fields are no better than changing the value of a field directly.
It doesn't matter whether you mutate an object's state by using a fancy property or method -- the result is the same: mutated
state.
Some people say that OOP tries to model the real world. This is simply not true -- OOP has nothing to relate to in the real world.
Trying to model programs as objects probably is one of the biggest OOP mistakes.
The real world is not hierarchical
OOP attempts to model everything as a hierarchy of objects. Unfortunately, that is not how things work in the real world. Objects
in the real world interact with each other using messages, but they mostly are independent of each other.
Inheritance in the real world
OOP inheritance is not modeled after the real world. The parent object in the real world is unable to change the behavior of child
objects at run-time. Even though you inherit your DNA from your parents, they're unable to make changes to your DNA as they please.
You do not inherit "behaviors" from your parents, you develop your own behaviors. And you're unable to "override" your parents' behaviors.
The real world has no methods
Does the piece of paper you're writing on have a "write" method ? No! You take an empty piece of paper, pick up a pen,
and write some text. You, as a person, don't have a "write" method either -- you make the decision to write some text based on outside
events or your internal thoughts.
The Kingdom of Nouns
Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions
and data structures belong in totally different worlds.
Objects (or nouns) are at the very core of OOP. A fundamental limitation of OOP is that it forces everything into nouns. And not
everything should be modeled as nouns. Operations (functions) should not be modeled as objects. Why are we forced to create a
Multiplier class when all we need is a function that multiplies two numbers? Simply have a Multiply function,
let data be data and let functions be functions!
In non-OOP languages, doing trivial things like saving data to a file is straightforward -- very similar to how you would describe
an action in plain English.
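For instance (a sketch of mine, not the author's code), in C both of these are just functions, with no Multiplier class or writer factory in sight:

#include <stdio.h>

/* Just a function -- no Multiplier class required. */
int multiply(int a, int b) { return a * b; }

/* Saving data to a file reads like the action it describes: open, write, close. */
int save_text(const char *path, const char *text)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fputs(text, f);
    return fclose(f);
}

int main(void)
{
    printf("6 * 7 = %d\n", multiply(6, 7));
    if (save_text("result.txt", "6 * 7 = 42\n") != 0)
        perror("save_text");
    return 0;
}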
Real-world example, please!
Sure, going back to the painter example, the painter owns a PaintingFactory . He has hired a dedicated BrushManager
, ColorManager , a CanvasManager and a MonaLisaProvider . His good friend zombie makes
use of a BrainConsumingStrategy . Those objects, in turn, define the following methods: CreatePainting
, FindBrush , PickColor , CallMonaLisa , and ConsumeBrainz .
Of course, this is plain stupidity, and could never have happened in the real world. How much unnecessary complexity has been
created for the simple act of drawing a painting?
There's no need to invent strange concepts to hold your functions when they're allowed to exist separately from the objects.
Unit Testing
Automated testing is an important part of the development process and helps tremendously in preventing regressions (i.e. bugs
being introduced into existing code). Unit Testing plays a huge role in the process of automated testing.
Some might disagree, but OOP code is notoriously difficult to unit test. Unit Testing assumes testing things in isolation, and
to make a method unit-testable:
Its dependencies have to be extracted into a separate class.
Create an interface for the newly created class.
Declare fields to hold the instance of the newly created class.
Make use of a mocking framework to mock the dependencies.
Make use of a dependency-injection framework to inject the dependencies.
How much more complexity has to be created just to make a piece of code testable? How much time was wasted just to make some code
testable?
> PS we'd also have to instantiate the entire class in order to test a single method. This will also bring in the code from
all of its parent classes.
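By contrast, a free function with no hidden dependencies needs none of that ceremony; a minimal sketch (mine, with an invented clamp function as the unit under test):

#include <assert.h>
#include <stdio.h>

/* The unit under test: pure, with nothing to mock or inject. */
static int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

int main(void)
{
    /* No interfaces, no mocking framework, no DI container -- just calls. */
    assert(clamp(5, 0, 10) == 5);
    assert(clamp(-3, 0, 10) == 0);
    assert(clamp(42, 0, 10) == 10);
    puts("all clamp tests passed");
    return 0;
}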
With OOP, writing tests for legacy code is even harder -- almost impossible. Entire companies have been created (
TypeMock ) around the issue of
testing legacy OOP code.
Boilerplate code
Boilerplate code is probably the biggest offender when it comes to the signal-to-noise ratio. Boilerplate code is "noise" that
is required to get the program to compile. Boilerplate code takes time to write and makes the codebase less readable because of the
added noise.
While "program to an interface, not to an implementation" is the recommended approach in OOP, not everything should become an
interface. We'd have to resort to using interfaces in the entire codebase, for the sole purpose of testability. We'd also probably
have to make use of dependency injection, which further introduced unnecessary complexity.
Testing private methods
Some people say that private methods shouldn't be tested I tend to disagree, unit testing is called "unit" for a reason -- test
small units of code in isolation. Yet testing of private methods in OOP is nearly impossible. We shouldn't be making private methods
internal just for the sake of testability.
In order to achieve testability of private methods, they usually have to be extracted into a separate object. This, in turn, introduces
unnecessary complexity and boilerplate code.
Refactoring
Refactoring is an important part of a developer's day-to-day job. Ironically, OOP code is notoriously hard to refactor. Refactoring
is supposed to make the code less complex, and more maintainable. On the contrary, refactored OOP code becomes significantly more
complex -- to make the code testable, we'd have to make use of dependency injection, and create an interface for the refactored class.
Even then, refactoring OOP code is really hard without dedicated tools like Resharper.
In the simple example above, the line count has more than doubled just to extract a single method. Why does refactoring create
even more complexity, when the code is being refactored in order to decrease complexity in the first place?
Contrast this to a similar refactor of non-OOP code in JavaScript:
The code has literally stayed the same -- we simply moved the isValidInput function to a different file and added
a single line to import that function. We've also added _isValidInput to the function signature for the sake of testability.
This is a simple example, but in practice the complexity grows exponentially as the codebase gets bigger.
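The refactor being described has roughly this shape (a C analogue of my own devising rather than the author's JavaScript): the helper moves to its own file, and the call site only gains an include:

/* validate.h -- the extracted helper now lives in its own translation unit */
#ifndef VALIDATE_H
#define VALIDATE_H
int is_valid_input(const char *s);
#endif

/* validate.c */
#include <string.h>
#include "validate.h"
int is_valid_input(const char *s) { return s != NULL && strlen(s) > 0; }

/* main.c -- the calling code is otherwise unchanged */
#include <stdio.h>
#include "validate.h"
int main(void)
{
    const char *input = "hello";
    printf("%s\n", is_valid_input(input) ? "ok" : "invalid");
    return 0;
}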
And that's not all. Refactoring OOP code is extremely risky. Complex dependency graphs and state scattered all over the OOP
codebase make it impossible for the human brain to consider all of the potential issues.
The Band-aids
What do we do when something is not working? It is simple, we only have two options -- throw it away or try fixing it. OOP is
something that can't be thrown away easily, millions of developers are trained in OOP. And millions of organizations worldwide are
using OOP.
You probably see now that OOP doesn't really work , it makes our code complex and unreliable. And you're not alone! People
have been thinking hard for decades trying to address the issues prevalent in OOP code. They've come up with a myriad of
design patterns.
Design patterns
OOP provides a set of guidelines that should theoretically allow developers to incrementally build larger and larger systems:
SOLID principle, dependency injection, design patterns, and others.
Unfortunately, the design patterns are nothing other than band-aids. They exist solely to address the shortcomings of OOP.
A myriad of books has even been written on the topic. They wouldn't have been so bad, had they not been responsible for the introduction
of enormous complexity to our codebases.
The problem factory
In fact, it is impossible to write good and maintainable Object-Oriented code.
On one side of the spectrum we have an OOP codebase that is inconsistent and doesn't seem to adhere to any standards. On the other
side of the spectrum, we have a tower of over-engineered code, a bunch of erroneous abstractions built one on top of one another.
Design patterns are very helpful in building such towers of abstractions.
Soon, adding in new functionality, and even making sense of all the complexity, gets harder and harder. The codebase will be full
of things like SimpleBeanFactoryAwareAspectInstanceFactory , AbstractInterceptorDrivenBeanDefinitionDecorator
, TransactionAwarePersistenceManagerFactoryProxy or RequestProcessorFactoryFactory .
Precious brainpower has to be wasted trying to understand the tower of abstractions that the developers themselves have created.
The absence of structure is in many cases better than having bad structure (if you ask me).
Is Object-Oriented Programming a Trillion Dollar Disaster?
(medium.com) Posted by EditorDavid on Monday July 22, 2019 @01:04AM from the OOPs dept.
Senior full-stack engineer Ilya Suzdalnitski recently published a lively 6,000-word essay
calling object-oriented programming "a trillion dollar disaster." Precious time and
brainpower are being spent thinking about "abstractions" and "design patterns" instead of
solving real-world problems... Object-Oriented Programming (OOP) has been created with one goal
in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to
improve code organization . There's
no objective and open evidence that OOP is better than plain procedural programming ...
Instead of reducing complexity, it encourages promiscuous sharing of mutable state and
introduces additional complexity with its numerous design patterns . OOP makes common
development practices, like refactoring and testing, needlessly hard...
Using OOP is seemingly innocent in the short-term, especially on greenfield projects. But
what are the long-term consequences of using OOP? OOP is a time bomb, set to explode
sometime in the future when the codebase gets big enough. Projects get delayed, deadlines get
missed, developers get burned-out, adding in new features becomes next to impossible .
The organization labels the codebase as the " legacy codebase ", and the development
team plans a rewrite .... OOP provides developers too many tools and choices, without
imposing the right kinds of limitations. Even though OOP promises to address modularity and
improve reusability, it fails to deliver on its promises...
I'm not criticizing Alan Kay's OOP -- he is a genius. I wish OOP was implemented the way he
designed it. I'm criticizing the modern Java/C# approach to OOP... I think that it is plain
wrong that OOP is considered the de-facto standard for code organization by many people,
including those in very senior technical positions. It is also wrong that many mainstream
languages don't offer any other alternatives to code organization other than OOP.
The essay ultimately blames Java for the popularity of OOP, citing Alan Kay's comment that
Java "is the most distressing thing to happen to computing since MS-DOS." It also quotes Linus
Torvalds's observation that "limiting your project to C means that people don't screw things up
with any idiotic 'object model'."
And it ultimately suggests Functional Programming as a superior alternative, making the
following assertions about OOP:
"OOP code encourages the use of shared mutable state, which
has been proven to be unsafe time and time again... [E]ncapsulation, in fact, is glorified
global state." "OOP typically requires a lot of boilerplate code (low signal-to-noise ratio)."
"Some might disagree, but OOP code is notoriously difficult to unit test... [R]efactoring OOP
code is really hard without dedicated tools like Resharper." "It is impossible to write good
and maintainable Object-Oriented code."
There's no objective and open evidence that OOP is better than plain procedural
programming...
...which is followed by the author's subjective opinions about why procedural programming
is better than OOP. There's no objective comparison of the pros and cons of OOP vs procedural,
just a rant about some of OOP's problems.
We start from the point-of-view that OOP has to prove itself. Has it? Has any project or
programming exercise ever taken less time because it is object-oriented?
Precious time and brainpower are being spent thinking about "abstractions" and "design
patterns" instead of solving real-world problems...
...says the person who took the time to write a 6,000 word rant on "why I hate
OOP".
Sadly, that was something you hallucinated. He doesn't say that anywhere.
Inheritance, while not "inherently" bad, is often the wrong solution. See: Why
extends is evil [javaworld.com]
Composition is frequently a more appropriate choice. Aaron Hillegass wrote this funny
little anecdote in Cocoa
Programming for Mac OS X [google.com]:
"Once upon a time, there was a company called Taligent. Taligent was created by IBM and
Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached the
peak of its mindshare, I met one of its engineers at a trade show. I asked him to create a
simple application for me: A window would appear with a button, and when the button was
clicked, the words 'Hello, World!' would appear in a text field. The engineer created a
project and started subclassing madly: subclassing the window and the button and the event
handler. Then he started generating code: dozens of lines to get the button and the text
field onto the window. After 45 minutes, I had to leave. The app still did not work. That
day, I knew that the company was doomed. A couple of years later, Taligent quietly closed its
doors forever."
Almost every programming methodology can be abused by people who really don't know how to
program well, or who don't want to. They'll happily create frameworks, implement new
development processes, and chart tons of metrics, all while avoiding the work of getting the
job done. In some cases the person who writes the most code is the same one who gets the
least amount of useful work done.
So, OOP can be misused the same way. Never mind that OOP essentially began very early and
has been reimplemented over and over, even before Alan Kay. Ie, files in Unix are essentially
an object oriented system. It's just data encapsulation and separating work into manageable
modules. That's how it was before anyone ever came up with the dumb name "full-stack
developer".
As a developer who started in the days of FORTRAN (when it was all-caps), I've watched the
rise of OOP with some curiosity. I think there's a general consensus that abstraction and
re-usability are good things - they're the reason subroutines exist - the issue is whether
they are ends in themselves.
I struggle with the whole concept of "design patterns". There are clearly common themes in
software, but there seems to be a great deal of pressure these days to make your
implementation fit some pre-defined template rather than thinking about the application's
specific needs for state and concurrency. I have seen some rather eccentric consequences of
"patternism".
Correctly written, OOP code allows you to encapsulate just the logic you need for a
specific task and to make that specific task available in a wide variety of contexts by
judicious use of templating and virtual functions that obviate the need for "refactoring".
Badly written, OOP code can have as many dangerous side effects and as much opacity as any
other kind of code. However, I think the key factor is not the choice of programming
paradigm, but the design process. You need to think first about what your code is intended to
do and in what circumstances it might be reused. In the context of a larger project, it means
identifying commonalities and deciding how best to implement them once. You need to document
that design and review it with other interested parties. You need to document the code with
clear information about its valid and invalid use. If you've done that, testing should not be
a problem.
Some people seem to believe that OOP removes the need for some of that design and
documentation. It doesn't and indeed code that you intend to be reused needs *more* design
and documentation than the glue that binds it together in any one specific use case. I'm
still a firm believer that coding begins with a pencil, not with a keyboard. That's
particularly true if you intend to design abstract interfaces that will serve many purposes.
In other words, it's more work to do OOP properly, so only do it if the benefits outweigh the
costs - and that usually means you not only know your code will be genuinely reusable but
will also genuinely be reused.
[...] I'm still a firm believer that coding begins with a pencil, not with a keyboard.
[...]
This!
In fact, even more: I'm a firm believer that coding begins with a pencil designing the data
model that you want to implement.
Everything else is just code that operates on that data model. Though I agree with most of
what you say, I believe the classical "MVC" design-pattern is still valid. And, you know
what, there is a reason why it is called "M-V-C": Start with the Model, continue with the
View and finalize with the Controller. MVC not only stood for Model-View-Controller but also
for the order of the implementation of each.
And preferably, as you stated correctly, "... start with pencil & paper
..."
I struggle with the whole concept of "design patterns".
Because design patterns are stupid.
A reasonable programmer can understand reasonable code so long as the data is documented,
even when the code isn't documented, but will struggle immensely if it were the other way
around. Bad programmers create objects for objects' sake, and because of that they have to
follow so-called "design patterns", because no amount of code commenting makes the code easily
understandable when it's a spaghetti web of interacting "objects". The "design patterns" don't
make the code easier to read, just easier to write.
Those OOP fanatics, if they do "document" their code, add comments like "// increment the
index" which is useless shit.
The big win of OOP is only in the encapsulation of the data with the code, and great
code treats objects like data structures with attached subroutines, not as "objects";
it documents the hell out of the contained data while more or less letting the code document
itself, and keeps the OO elements to a minimum. As it turns out, OOP is just much more effort than
procedural and it rarely pays off to invest that effort, at least for me.
The problem isn't the object orientation paradigm itself, it's how it's applied.
The big problem in any project is that you have to understand how to break down the final
solution into modules that can be developed independently of each other to a large extent, and
identify the items that are shared. But even when you have items that appear
identical, that doesn't mean they will stay that way in the long run, so shared code may even be
dangerous, because future developers don't know that by fixing problem A they create problems
B, C, D and E.
Any time you make something easier, you lower the bar as well and now have a pack of
idiots that never could have been hired if it weren't for a programming language that
stripped out a lot of complexity for them.
Exactly. There are quite a few aspects of writing code that are difficult regardless of
language and there the difference in skill and insight really matters.
I have about 35+ years of software development experience, including with procedural, OOP
and functional programming languages.
My experience is: The question "is procedural better than OOP or functional?" (or
vice-versa) has a single answer: "it depends".
Like in your cases above, I would exactly do the same: use some procedural language that
solves my problem quickly and easily.
In large-scale applications, I mostly used OOP (having learned OOP with Smalltalk &
Objective-C). I don't like C++ or Java - but that's a matter of personal preference.
I use Python for large-scale scripts or machine learning/AI tasks.
I use Perl for short scripts that need to do a quick task.
Procedural is in fact easier to grasp for beginners as OOP and functional require a
different way of thinking. If you start developing software, after a while (when the project
gets complex enough) you will probably switch to OOP or functional.
Again, in my opinion neither is better than the other (procedural, OOP or functional). It
just depends on the task at hand (and of course on the experience of the software
developer).
There is nothing inherently wrong with some of the functionality it offers; it's the way
OOP is abused as a substitute for basic good programming practices. I was helping interns --
students from a local CC -- deal with idiotic assignments like making a random number
generator USING CLASSES, or displaying text to a screen USING CLASSES. Seriously, WTF? A room
full of career programmers could not even figure out how you were supposed to do that, much
less why. What was worse was the lack of understanding of basic programming skill, or even the
use of variables, as the kids were being taught that EVERY program was to be assembled solely
by sticking together bits of libraries. There was no coding, just hunting for snippets of
preexisting code to glue together. Zero idea that they could add their own, much less how to do
it. OOP isn't the problem; it's the idea that it replaces basic programming skills and best
practice.
That, and the obsession with absofrackinglutely EVERYTHING just having to be a formally
declared object, including the whole program being an object with a run() method.
Some things actually cry out to be objects, some not so much. Generally, I find that my
most readable and maintainable code turns out to be a procedural program that manipulates
objects.
Even there, some things just naturally want to be a struct or just an array of values.
The same is true of most ingenious ideas in programming. It's one thing if code is
demonstrating a particular idea, but production code is supposed to be there to do work, not
grind an academic ax.
For example, slavish adherence to "patterns". They're quite useful for thinking about code
and talking about code, but they shouldn't be the end of the discussion. They work better as
a starting point. Some programs seem to want patterns to be mixed and matched.
In reality those problems are just cargo cult programming one level higher.
I suspect a lot of that is because too many developers barely grasp programming and never
learned to go beyond the patterns they were explicitly taught.
When all you have is a hammer, the whole world looks like a nail.
There are a lot of mediocre programmers who follow the principle "if you have a hammer,
everything looks like a nail". They know OOP, so they think that every problem must be solved
in an OOP way. In fact, OOP works well when your program needs to deal with relatively
simple, real-world objects: the modelling follows naturally. If you are dealing with abstract
concepts, or with highly complex real-world objects, then OOP may not be the best
paradigm.
In Java, for example, you can program imperatively, by using static methods. The problem
is knowing when to break the rules. For example, I am working on a natural language system
that is supposed to generate textual answers to user inquiries. What "object" am I supposed
to create to do this task? An "Answer" object that generates itself? Yes, that would work,
but an imperative, static "generate answer" method makes at least as much sense.
There are different ways of thinking, different ways of modelling a problem. I get tired
of the purists who think that OO is the only possible answer. The world is not a nail.
I'm approaching 60, and I've been coding in COBOL, VB, FORTRAN, REXX, SQL for almost 40
years. I remember seeing Object Oriented Programming being introduced in the 80s, and I went
on a course once (paid for by work). I remember not understanding the concept of "classes", and
my impression was that the software we were buying was just trying to invent stupid new words
for old familiar constructs (e.g. files, records, rows, tables, etc.). So I never transitioned
away from my reliable mainframe programming platform. I thought the phrase OOP had died out
long ago, along with "client server" (whatever that meant). I'm retiring in a few years, and
the mainframe will outlive me. Everything else is buggy.
"limiting your project to C means that people don't screw things up with any idiotic
'object model'."
GTK .... hold my beer... it is not a good argument against OOP
languages. But first, let's see how OOP came into being. OOP was designed to provide
encapsulation, like components, and to support reuse and code sharing. It was the next step coming
from modules and units, which were better than libraries, as functions and procedures got
namespaces, which helped structure code. OOP is a great idea when writing UI toolkits or
similar stuff, as you can as
Like all things OO is fine in moderation but it's easy to go completely overboard,
decomposing, normalizing, producing enormous inheritance trees. Yes your enormous UML diagram
looks impressive, and yes it will be incomprehensible, fragile and horrible to maintain.
That said, it's completely fine in moderation. The same goes for functional programming.
Most programmers can wrap their heads around things like functions, closures / lambdas,
streams and so on. But if you mean true functional programming then forget it.
As for the kernel's choice to use C, that really boils down to the fact that a kernel
needs to be lower level than a typical user-land application. It has to do its own memory
allocation and other things that were beyond C++ at the time. The STL wouldn't have been usable,
nor would new / delete, or exceptions & unwinding. And at that point why even bother? That
doesn't mean C is wonderful or doesn't inflict its own pain and bugs on development. But at
the time, it was the only sane choice.
"... Being powerless within calcifies totalitarian corporate culture ..."
"... ultimately promoted wide spread culture of obscurantism and opportunism what amounts to extreme office politics of covering their own butts often knowing that entire development strategy is flawed, as long as they are not personally blamed or if they in fact benefit by collapse of the project. ..."
"... As I worked side by side and later as project manager with Indian developers I can attest to that culture which while widely spread also among American developers reaches extremes among Indian corporations which infect often are engaged in fraud to be blamed on developers. ..."
The programmers in India are perfectly capable of writing good software. The difficulty lies
in communicating the design requirements for the software. If they do not know in detail
how airplanes are engineered, they will implement the design to the letter but not to
its intent.
I worked twenty years in commercial software development, including in aviation for UA,
and while Indian software developers are capable, their corporate culture is completely
different, as it is based on feudal workplace relations between subordinates and management.
These result in extreme cronyism, far exceeding that in the US, since such relations are not only
based on extreme exploitation (few jobs, hundreds of qualified candidates) but on personal,
almost paternal relations that preclude the required independence of judgment and
practically eliminate any major critical discussion about the efficacy of technological
solutions and their risks.
Being powerless within a calcified totalitarian corporate culture, and facing the
alternative of hurting family-like relations with bosses and their feelings -- bosses who
had committed themselves, emotionally and in financial terms, to certain often wrong solutions
dictated more by margins than by technological imperatives -- ultimately promoted a widespread
culture of obscurantism and opportunism that amounts to extreme office politics of
covering their own butts, often knowing that the entire development strategy is flawed, as long
as they are not personally blamed or if they in fact benefit from the collapse of the
project.
As I worked side by side with, and later as a project manager of, Indian developers, I can
attest to that culture, which, while widespread among American developers as well, reaches
extremes in Indian corporations, which in fact are often engaged in fraud to be blamed on
developers.
In fact, it is a shocking contrast with German culture, which practically prevents anyone
from engaging in any project unless it is, almost always and in its entirety, discussed, analyzed,
understood and fully supported by every member of the team; otherwise they often simply
refused to work on the project, citing professional ethics. A high-quality social welfare state
and handsome unemployment benefits definitely supported such an ethical stand back then.
While what I describe happened over twenty years ago, I believe it is still
applicable.
"... Honestly, since 2015 feels like Apple wants to abandon it's PC business but just doesn't know how so ..."
"... The new line seems like a valid refresh, but the prices are higher than ever, and remember young people are earning less than ever, so I still think they are looking for a way out of the PC trade, maybe this refresh is to just buy time for an other five years before they close up. ..."
"... I wonder how much those tooling engineers in the US make compared to their Chinese competitors? It seems like a neoliberal virtuous circle: loot/guts education, then find skilled labor from places that still support education, by moving abroad or importing workers, reducing wages and further undermining the local skill base. ..."
"... I sympathize with y'all. It's not uncommon for good products to become less useful and more trouble as the original designers, etc., get arrogant from their success and start to believe that every idea they have is a useful improvement. Not even close. Too much of fixing things that aren't broken and gilding lilies. ..."
The iPod, the iPhone, the MacBook Air, the physical Apple Store, even the iconic packaging
of Apple products -- these products changed how we view and use their categories, or created
new categories, and will be with us a long time.
Ironically, both Jobs and Ive were inspired by Dieter Rams – whom iFixit calls "the
legendary industrial designer renowned for functional and simple consumer products." And unlike
Apple, Rams believed that good design didn't have to come at the expense of either durability
or the environment:
Rams loves durable products that are environmentally friendly. That's one of his
10
principles for good design : "Design makes an important contribution to the preservation
of the environment." But Ive has never publicly discussed the dissonance between his
inspiration and Apple's disposable, glued-together products. For years, Apple has openly combated green standards that would
make products easier to repair and recycle, stating that they need "complete design
flexibility" no matter the impact on the environment.
In fact, that complete design flexibility – at least as practiced by Ive – has
resulted in crapified products that are an environmental disaster. Their lack of durability
means they must be repaired to be functional, and the lack of repairability means many of these
products end up being tossed prematurely – no doubt not a bug, but a feature. As
Vice
recounts :
But history will not be kind to Ive, to Apple, or to their design choices. While the
company popularized the smartphone and minimalistic, sleek, gadget design, it also did things
like create brand new screws designed to keep consumers from repairing their iPhones.
Under Ive, Apple began gluing down batteries inside laptops and smartphones (rather than
screwing them down) to shave off a fraction of a millimeter at the expense of repairability
and sustainability.
It redesigned MacBook Pro keyboards with mechanisms that are, again, a fraction of a
millimeter thinner, but that are
easily defeated by dust and crumbs (the computer I am typing on right now -- which is six
months old -- has a busted spacebar and 'r' key). These keyboards are not easily repairable,
even by Apple, and many MacBook Pros have to be completely replaced due to a single key
breaking. The iPhone 6 Plus
had a design flaw that led to its touch screen spontaneously breaking -- it then told
consumers there was no problem for months
before ultimately creating a repair program. Meanwhile, Apple's own internal tests showed those flaws. He designed AirPods, which feature an unreplaceable battery that must be physically destroyed in order to open.
Vice also notes that in addition to Apple's products becoming "less modular, less consumer
friendly, less upgradable, less repairable, and, at times, less functional than earlier
models", Apple's design decisions have not been confined to Apple. Instead, "Ive's influence is
obvious in products released by Samsung, HTC, Huawei, and others, which have similarly traded
modularity for sleekness."
Right to Repair
As I've written before, Apple is a leading opponent of giving consumers a right to repair.
Nonetheless, there's been some global progress on this issue (see Global Gains on Right to Repair). And we've also seen a widening of support in the US for such a right. The issue has arisen in the current presidential campaign, with Elizabeth Warren throwing down the gauntlet by endorsing a right to repair for farm tractors. The New York Times has also taken up the cause more generally (see Right to Repair Initiatives Gain Support in US). More than twenty states are considering
enacting right to repair statutes.
I've been using Apple since 1990. I concur with the article about the h/w, and I'd add that from Snow Leopard to Sierra, OS X was as buggy as anything from the Windows world, if not more so. It got better with High Sierra but is still not up to the hype. I haven't lived with Mojave. I use Apple out of habit; I haven't felt the love from them since Snow Leopard, exactly when they became a cell phone company. People think Apple is Mercedes and PCs are Fords, but for a long time now in practical use, leaving aside the snazzy aesthetics, under the hood it's GM vs Ford. I'm not rich enough to buy a $1500 non-upgradable, non-repairable product, so the new T2-protected computers can't be for me.
The new Dell XPSes are tempting; they've got the right idea. If you go to their service page you can download complete service instructions, diagrams, and blow-ups. They don't seem at all worried about my hurting myself.
In the last few years PCs have offered what I could previously get only from Apple: a good screen, a backlit keyboard, long battery life, and a trim size.
Honestly, since 2015 it feels like Apple wants to abandon its PC business but just doesn't know how, so it's trying to drive off all the old legacy power users, the creative people that actually work hard for their money, exchanging them for rich dilettantes, hedge fund managers, and status seekers – an easier crowd to finally close up shop on.
The new line seems like a valid refresh, but the prices are higher than ever, and remember young people are earning less than ever, so I still think they are looking for a way out of the PC trade; maybe this refresh is just to buy time for another five years before they close up.
When you start thinking like this about a company you've been loyal to for 30 years something
is definitely wrong.
The reason that Apple moved the last of its production to China is, quite simply, that
China now has basically the entire industrial infrastructure that we used to have. We have
been hollowed out, and are now essentially third-world when it comes to industry. The entire
integrated supply chain that defines an industrial power, is now gone.
The part about China no longer being a low-wage country is correct. China's wages have
been higher than Mexico's for some time. But the part about the skilled workers is a slap in
the face.
How can US workers be skilled at manufacturing, when there are no longer any jobs
here where they can learn or use those skills?
I wonder how much those tooling engineers in the US make compared to their Chinese
competitors? It seems like a neoliberal virtuous circle: loot/guts education, then find
skilled labor from places that still support education, by moving abroad or importing
workers, reducing wages and further undermining the local skill base.
They lost me when they made the iMac so thin it couldn't play a CD – and had the
nerve to charge $85 for an Apple player. Bought another brand for $25. I don't care that it's
not as pretty. I do care that I had to buy it at all.
I need a new cellphone. You can bet it won't be an iPhone.
Although I have never used an Apple product, I sympathize with y'all. It's not uncommon
for good products to become less useful and more trouble as the original designers, etc., get
arrogant from their success and start to believe that every idea they have is a useful
improvement. Not even close. Too much of fixing things that aren't broken and gilding
lilies.
Worst computer I've ever owned: Apple Macbook Pro, c. 2011 or so.
Died within 2 years, and also more expensive than the desktops I've built since that
absolutely crush it in every possible performance metric (and last longer).
Meanwhile, I also still use a $300 Best Buy Toshiba craptop that has now lasted for 8
straight years.
"Beautiful objects" – aye, there's the rub. In point of fact, the goal of industrial
design is not to create beautiful objects. It is the goal of the fine arts to create
beautiful objects. The goal of design is to create useful things that are easy to use and are
effective at their tasks. Some -- including me -- would add to those most basic goals, the
additional goals of being safe to use, durable, and easy to repair; perhaps even easy to
adapt or suitable for recycling, or conservative of precious materials. The principles of
good product design are laid out admirably in the classic book by Donald A. Norman, The
Design of Everyday Things (1988). So this book was available to Jony Ive (born 1967) during
his entire career (which overlapped almost exactly the wonder years of Postmodernism –
and therein lies a clue). It would indeed be astonishing to learn that Ive took no notice of
it. Yet Norman's book can be used to show that Ive's Apple violated so many of the principles
of good design, so habitually, as to raise the suspicion that the company was not engaged in
"product design" at all. The output Apple in the Ive era, I'd say, belongs instead to the
realm of so-called "commodity aesthetics," which aims to give manufactured items a
sufficiently seductive appearance to induce their purchase – nothing more. Aethetics
appears as Dieter Rams's principle 3, as just one (and the only purely commercial) function
in his 10; so in a theoretical context that remains ensconced within a genuine, Modernist
functionalism. But in the Apple dispensation that single (aesthetic) principle seems to have
subsumed the entire design enterprise – precisely as one would expect from "the
cultural logic of late capitalism" (hat tip to Mr Jameson). Ive and his staff of formalists
were not designing industrial products, or what Norman calls "everyday things," let alone
devices; they were aestheticizing products in ways that first, foremost, and almost only
enhanced their performance as expressions of a brand. Their eyes turned away from the prosaic
prize of functionality to focus instead on the more profitable prize of sales -- to repeat
customers, aka the devotees of 'iconic' fetishism. Thus did they serve not the masses but
Mammon, and they did so as minions of minimalism. Nor was theirs the minimalism of the
Frankfurt kitchen, with its deep roots in ethics and ergonomics. It was only superficially
Miesian. Bauhaus-inspired? Oh, please. Only the more careless readers of Tom Wolfe and
Wikipedia could believe anything so preposterous. Surely Steve Jobs, he of the featureless
black turtleneck by Issey Miyake, knew better. Anyone who has so much as walked by an Apple
Store, ever, should know better. And I guess I should know how to write shorter.
The software at the heart of the Boeing 737 MAX crisis was developed at a time when the company was laying off experienced engineers
and replacing them with temporary workers making as little as $9 per hour, according to
Bloomberg.
In an effort to cut costs, Boeing was relying on subcontractors making paltry wages to develop and test its software. Oftentimes,
these subcontractors would be from countries lacking a deep background in aerospace, like India.
Boeing had recent college graduates working for Indian software developer HCL Technologies Ltd. in a building across from Seattle's
Boeing Field, in flight test groups supporting the MAX. The coders from HCL designed to specifications set by Boeing but, according
to Mark Rabin, a former Boeing software engineer, "it was controversial because it was far less efficient than Boeing engineers just
writing the code."
Rabin said: "...it took many rounds going back and forth because the code was not done correctly."
In addition to cutting costs, the hiring of Indian companies may have landed Boeing orders for the Indian military and commercial
aircraft, like a $22 billion order received in January 2017. That order included 100 737 MAX 8 jets and was Boeing's largest order
ever from an Indian airline. India traditionally orders from Airbus.
HCL engineers helped develop and test the 737 MAX's flight display software while employees from another Indian company, Cyient
Ltd, handled the software for flight test equipment. In 2011, Boeing named Cyient, then known as Infotech, to a list of its "suppliers
of the year".
One HCL employee posted online: "Provided quick workaround to resolve production issue which resulted in not delaying flight test
of 737-Max (delay in each flight test will cost very big amount for Boeing)."
But Boeing says the company didn't rely on engineers from HCL for the Maneuvering Characteristics Augmentation System, which was
linked to both last October's crash and March's crash. The company also says it didn't rely on Indian companies for the cockpit warning
light issue that was disclosed after the crashes.
A Boeing spokesperson said: "Boeing has many decades of experience working with supplier/partners around the world. Our primary
focus is on always ensuring that our products and services are safe, of the highest quality and comply with all applicable regulations."
HCL, on the other hand, said: "HCL has a strong and long-standing business relationship with The Boeing Company, and we take pride
in the work we do for all our customers. However, HCL does not comment on specific work we do for our customers. HCL is not associated
with any ongoing issues with 737 Max."
Recent simulator tests run by the FAA indicate that software issues on the 737 MAX run deeper than first thought. Engineers who
worked on the plane, which Boeing started developing eight years ago, complained of pressure from managers to limit changes that
might introduce extra time or cost.
Rick Ludtke, a former Boeing flight controls engineer laid off in 2017, said: "Boeing was doing all kinds of things, everything
you can imagine, to reduce cost , including moving work from Puget Sound, because we'd become very expensive here. All that's very
understandable if you think of it from a business perspective. Slowly over time it appears that's eroded the ability for Puget Sound
designers to design."
Rabin even recalled an incident where senior software engineers were told they weren't needed because Boeing's productions were
mature. Rabin said: "I was shocked that in a room full of a couple hundred mostly senior engineers we were being told that we weren't
needed."
Any given jetliner is made up of millions of parts and millions of lines of code. Boeing has often turned over large portions
of the work to suppliers and subcontractors that follow its blueprints. But beginning in 2004 with the 787 Dreamliner, Boeing sought
to increase profits by providing high-level specs and then asking suppliers to design more parts themselves.
Boeing also promised to invest $1.7 billion in Indian companies as a result of an $11 billion order in 2005 from Air India. This
investment helped HCL and other software developers.
For the 787, HCL offered a price to Boeing that they couldn't refuse, either: free. HCL "took no up-front payments on the 787
and only started collecting payments based on sales years later".
Rockwell Collins won the MAX contract for cockpit displays and relied in part on HCL engineers and contract engineers from Cyient
to test flight test equipment.
Charles LoveJoy, a former flight-test instrumentation design engineer at the company, said: "We did have our challenges with the
India team. They met the requirements, per se, but you could do it better."
I love it. A company which fell in love so much with their extraordinary profits that they sabotaged their design and will now suffer enormous financial consequences. They're lucky to have all their defense/military contracts.
Oftentimes, it's the cut-and-paste code that's the problem. If you don't have a good appreciation for what every line does,
you're never going to know what the sub or entire program does.
By 2002 I could not sit down with any developers without hearing at least one story about how they had been in a code review
meeting and seen absolute garbage turned out by H-1B workers.
Lots of people have known about this problem for many years now.
May the gods damn all financial managers! One of the two professions, along with bankers, which have absolutely no social value
whatsoever. There should be open hunting season on both!
Shifting to high-level specs puts more power in the hands of management/accounting types, since it doesn't require engineering
knowledge to track a deadline. Indeed, this whole story is the wet dream of business school, the idea of being able to accomplish
technical tasks purely by demand. A lot of public schools teach kids that science is magic, so when they grow up, they think they can just give directions and technology appears.
In this country, one must have a license from the FAA to work on commercial aircraft. That means training and certification
that usually results in higher pay for those qualified to perform the repairs to the aircraft your family will fly on.
In case you're not aware, much of the heavy stuff like D checks (overhauls) has been outsourced by the airlines to foreign
countries where the FAA has nothing to say about it. Those contractors can hire whoever they wish for whatever they'll accept.
I have worked with some of those "mechanics" who cannot even read.
Keep that in mind next time the TSA perv is fondling your junk. That might be your last sexual encounter.
Or this online shop designed back in 1997. It was supposed to take over all the internet shopping that didn't really exist back then yet. And they used Indian doctors to code. Well, sure, they ended up with a site... but one so heavy with pictures it took 30 minutes to open one page and another 20 minutes to even click on a product to read its text. This was with good university internet.
Unsurprisingly, I don't think they ever managed to sell anything. But they gave out free movie tickets to every registered customer... so a friend and I each registered some 80 accounts and went to free movies for a good bit over a year.
The mailman must have had fun delivering 160 letters to random names in the same student apartment :D
"... Like many of its Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a move up the technology curve). ..."
"... The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according to the theory, precluded the ability to compete globally. ..."
"... "Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. ..."
"... If the choice is between putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly assuming that they can make better use of that money than the company itself can. ..."
"... It is an absurd conceit to believe that a dilettante portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy embedded in the myth of shareholder capitalism ..."
"... When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. ..."
The fall of the Berlin Wall and the corresponding end of the Soviet Empire gave the fullest impetus imaginable to the forces of
globalized capitalism, and correspondingly unfettered access to the world's cheapest labor. What was not to like about that? It afforded
multinational corporations vastly expanded opportunities to fatten their profit margins and increase the bottom line with seemingly
no risk posed to their business model.
Or so it appeared. In 2000, aerospace engineer L.J. Hart-Smith's remarkable paper, sardonically titled
"Out-Sourced Profits – The Cornerstone of Successful Subcontracting," laid
out the case against several business practices of Hart-Smith's previous employer, McDonnell Douglas, which had incautiously ridden
the wave of outsourcing when it merged with the author's new employer, Boeing. Hart-Smith's intention in telling his story was a
cautionary one for the newly combined Boeing, lest it follow its then recent acquisition down the same disastrous path.
Of the manifold points and issues identified by Hart-Smith, there is one that stands out as the most compelling in terms of understanding
the current crisis enveloping Boeing: The embrace of the metric "Return on Net Assets" (RONA). When combined with the relentless
pursuit of cost reduction (via offshoring), RONA taken to the extreme can undermine overall safety standards.
Related to this problem is the intentional and unnecessary use of complexity as an instrument of propaganda. Like many of its
Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious
and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a
move up the technology curve).
All of these pernicious concepts are branches of the same poisoned tree: "shareholder capitalism":
[A] notion best epitomized by Milton Friedman that the only social
responsibility of a corporation is to increase its profits, laying the groundwork for the idea that shareholders, being the owners
and the main risk-bearing participants, ought therefore to receive the biggest rewards. Profits therefore should be generated
first and foremost with a view toward maximizing the interests of shareholders, not the executives or managers who (according
to the theory) were spending too much of their time, and the shareholders' money, worrying about employees, customers, and the
community at large. The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised
solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive
stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction
of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced
by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according
to the theory, precluded the ability to compete globally.
"Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. In essence, it means maximizing the returns
of those dollars deployed in the operation of the business. Applied to a corporation, it comes down to this: If the choice is between
putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the
optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly
assuming that they can make better use of that money than the company itself can.
It is an absurd conceit to believe that a dilettante
portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate
productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy
embedded in the myth of shareholder capitalism.
Engineering reality, however, is far more complicated than what is outlined in university MBA textbooks. For corporations like
McDonnell Douglas, for example, RONA was used not as a way to prioritize new investment in the corporation but rather to justify
disinvestment in the corporation. This disinvestment ultimately degraded the company's underlying profitability and the quality
of its planes (which is one of the reasons the Pentagon helped to broker the merger with Boeing; in another perverse echo of the
2008 financial disaster, it was a politically engineered bailout).
RONA in Practice
When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized
workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. Productivity
is diminished, even as labor-saving technologies are introduced. Precision machinery is sold off and replaced by inferior, but cheaper,
machines. Engineering quality deteriorates. And the upshot is that a reliable plane like Boeing's 737, which had been a tried and
true money-spinner with an impressive safety record since 1967, becomes a high-tech death trap.
The drive toward efficiency is translated into a drive to do more with less. Get more out of workers while paying them less. Make
more parts with fewer machines. Outsourcing is viewed as a way to release capital by transferring investment from skilled domestic
human capital to offshore entities not imbued with the same talents, corporate culture and dedication to quality. The benefits to
the bottom line are temporary; the long-term pathologies become embedded as the company's market share begins to shrink, as the airlines
search for less shoddy alternatives.
You must do one more thing if you are a Boeing director: you must erect barriers to bad news, because there is nothing that bursts
a magic bubble faster than reality, particularly if it's bad reality.
The illusion that Boeing sought to perpetuate was that it continued to produce the same thing it had produced for decades: namely,
a safe, reliable, quality airplane. But it was doing so with a production apparatus that was stripped, for cost reasons, of many
of the means necessary to make good aircraft. So while the wine still came in a bottle signifying Premier Cru quality, and still
carried the same price, someone had poured out the contents and replaced them with cheap plonk.
And that has become remarkably easy to do in aviation. Because Boeing is no longer subject to proper independent regulatory scrutiny.
This is what happens when you're allowed to "self-certify" your own airplane, as the Washington Post described: "One Boeing engineer would conduct a test of a particular
system on the Max 8, while another Boeing engineer would act as the FAA's representative, signing on behalf of the U.S. government
that the technology complied with federal safety regulations."
This is a recipe for disaster. Boeing relentlessly cut costs; it outsourced across the globe to workforces that knew nothing about aviation or aviation's safety culture. It sent things everywhere on one criterion and one criterion only: lower the denominator. Make
it the same, but cheaper. And then self-certify the plane, so that nobody, including the FAA, was ever the wiser.
Boeing also greased the wheels in Washington to ensure the continuation of this convenient state of regulatory affairs for the
company. According to OpenSecrets.org ,
Boeing and its affiliates spent $15,120,000 in lobbying expenses in 2018, after spending $16,740,000 in 2017 (along with a further
$4,551,078 in 2018 political contributions, which placed the company 82nd out of a total of 19,087 contributors). Looking back at
these figures over the past four elections (congressional and presidential) since 2012, these numbers represent fairly typical spending
sums for the company.
But clever financial engineering, extensive political lobbying and self-certification can't perpetually hold back the effects
of shoddy engineering. One of the sad byproducts of the FAA's acquiescence to "self-certification" is how many things fall through
the cracks so easily.
Shared libraries that are dynamically linked make more efficient use of disk space than
those that are statically linked, and more importantly allow you to perform security updates in
a more efficient manner, but executables compiled against a particular version of a dynamic
library expect that version of the shared library to be available on the machine they run on.
If you are running machines with both Fedora 9 and openSUSE 11, the versions of some shared
libraries are likely to be slightly different, and if you copy an executable between the
machines, the file might fail to execute because of these version differences.
With ELF Statifier you can create a statically
linked version of an executable, so the executable includes the shared libraries instead of
seeking them at run time. A statically linked executable is much more likely to run on a
different Linux distribution or a different version of the same distribution.
Of course, to do this you sacrifice some disk space, because the statically linked
executable includes a copy of the shared libraries that it needs, but in these days of terabyte
disks the space consideration is less important than the security one. Consider what happens if
your executables are dynamically linked to a shared library, say libfoo, and there is a
security update to libfoo. When your applications are dynamically linked you can just update
the shared copy of libfoo and your applications will no longer be vulnerable to the security
issue in the older libfoo. If on the other hand you have a statically linked executable, it
will still include and use its own private copy of the old libfoo. You'll have to recreate the
statically linked executable to get the newer libfoo and security update.
Still, there are times when you want to take a daemon you compiled on a Fedora machine and
run it on your openSUSE machine without having to recompile it and all its dependencies.
Sometimes you just want it to execute now and can rebuild it later if desired. Of
course, the machine you copy the executable from and the one on which you want to run it must
have the same architecture.
ELF Statifier is packaged as a 1-Click
install for openSUSE 10.3 but not for Ubuntu Hardy or Fedora. I'll use version 1.6.14 of ELF
Statifier and build it from source on a Fedora 9 x86 machine. ELF Statifier does not use
autotools, so you compile by simply invoking make. Compilation and installation are shown below.
$ tar xzvf statifier-1.6.14.tar.gz
$ cd ./statifier-*
$ make
$ sudo make install
As an example of how to use the utility, I'll create a statically linked version of the
ls binary in the commands shown below. First I create a personal copy of the
dynamically linked executable and inspect it to see what it dynamically links to. You run
statifier with the path to the dynamically linked executable as the first argument and the path
where you want to create the statically linked executable as the second argument. Notice that
the ldd command reports that no dynamically linked libraries are required by
ls-static. The next command shows that the binary size has grown significantly for the static
version of ls.
$ mkdir test
$ cd ./test
$ cp -a /bin/ls ls-dynamic
$ ls -lh
-rwxr-xr-x 1 ben ben 112K 2008-08-01 04:05 ls-dynamic
$ ldd ls-dynamic
linux-gate.so.1 => (0x00110000)
librt.so.1 => /lib/librt.so.1 (0x00a3a000)
libselinux.so.1 => /lib/libselinux.so.1 (0x00a06000)
libacl.so.1 => /lib/libacl.so.1 (0x00d8a000)
libc.so.6 => /lib/libc.so.6 (0x0084e000)
libpthread.so.0 => /lib/libpthread.so.0 (0x009eb000)
/lib/ld-linux.so.2 (0x0082e000)
libdl.so.2 => /lib/libdl.so.2 (0x009e4000)
libattr.so.1 => /lib/libattr.so.1 (0x0606d000)
$ statifier ls-dynamic ls-static
$ ldd ls-static
not a dynamic executable
$ ls -lh ls-static
-rwxr-x--- 1 ben ben 2.0M 2008-10-03 12:05 ls-static
In my testing, the statified ls crashed when I ran it with the -l option. If you get segmentation faults when running your statified executables, you should
disable stack randomization and
recreate the statified executable. The stack and address space randomization feature of the
Linux kernel makes the locations used for the stack and other important parts of an executable
change every time it is executed. Randomizing things each time you run a binary hinders attacks
such as the return-to-libc attack because the
location of libc functions changes all the time.
You are giving away some security by changing the randomize_va_space parameter as shown
below. The change to randomize_va_space affects not only attacks on the executables themselves
but also exploit attempts that rely on buffer overflows to compromise the system. Without
randomization, both attacks become more straightforward. If you set randomize_va_space to zero
as shown below and recreate the ls-static binary, things should work as expected. You'll have
to leave the stack randomization feature disabled in order to execute the statified
executable.
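Disabling the randomization is done through the kernel's randomize_va_space parameter. A minimal sketch, assuming root access on a stock Linux kernel and the working directory used in the transcript above:
# check the current setting (2 usually means full randomization)
$ cat /proc/sys/kernel/randomize_va_space
# turn address space randomization off system-wide (as root); remember this weakens security
$ sudo sysctl -w kernel.randomize_va_space=0
# recreate the statified binary and try again
$ statifier ls-dynamic ls-static
$ ./ls-static -l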
There are a few other tricks up statifier's sleeve: you can set or unset environment
variables for the statified executable, and include additional libraries (LD_PRELOAD libraries)
into the static executable. Being able to set additional environment variables for a static
executable is useful when the binary you are statifying relies on finding additional resources
like configuration files. If the binary allows you to tell it where to find its resources
through environment variables, you can include these settings directly into the statified
executable.
The ability to include preloaded shared libraries into the statified binary (LD_PRELOADing)
is probably a less commonly used feature. One use is including additional functionality such as
making the statically linked executable "trashcan friendly" by default, perhaps using
delsafe, but
without needing to install any additional software on the machine that is running the
statically linked executable.
Security measures that randomize the address
space of binaries might interfere with ELF Statifier and cause it not to work. But when you
just want to move the execution of an application to another Linux machine, ELF Statifier might
get you up and running without the hassle of a recompile.
Although he is certainly a giant, Knuth will never be able to complete this monograph - the technology developed too quickly. Three volumes came out between 1968 and 1973, and then there was a lull. On January 10 he will turn 81. At this age it is difficult to work in the field of mathematics and system programming. So we will probably never see the complete fourth volume.
This inability to finish the work to which he devoted a large part of his life is definitely a tragedy. The key problem here is that it is now simply impossible for one person to cover the whole area of system programming and related algorithms. But the first three volumes certainly played a tremendously positive role.
He was also distracted for several years by the creation of TeX. He needed to create a non-profit and complete this work by attracting the best minds from the outside. But he is by nature a loner, as many great scientists are, and prefers to work this way.
His other mistake was that MIX, his hypothetical machine, was too far from the IBM S/360, which became the de facto standard in the mid-1960s. He later realized that this was a blunder and replaced MIX with the more modern MMIX, but it was "too little, too late," and the switch took time and effort. So the first three volumes and fragments of the fourth are all that we have now, and probably forever.
Not all volumes fared equally well with time. The third volume suffered most, IMHO, and as of 2019 is partially obsolete. It was also written in some haste, and some parts of it are far from clearly written (it was based on earlier lectures of Floyd), so it was oriented toward single-CPU computers only. Now that multiprocessor machines, huge amounts of RAM, and SSD drives are the norm, the situation is very different from the late 1960s. It requires different sorting algorithms (the importance of mergesort has increased, the importance of quicksort has decreased). He also got too carried away with sorting random numbers and establishing upper bounds and average run times. Real data is almost never random and typically contains sorted fragments. For example, he overestimated the importance of quicksort and thus pushed the discipline in the wrong direction.
Notable quotes:
"... These days, it is 'coding', which is more like 'code-spraying'. Throw code at a problem until it kind of works, then fix the bugs in the post-release, or the next update. ..."
"... AI is a joke. None of the current 'AI' actually is. It is just another new buzz-word to throw around to people that do not understand it at all. ..."
"... One good teacher makes all the difference in life. More than one is a rare blessing. ..."
With more than one million copies in print, "The Art of Computer Programming " is the Bible
of its field. "Like an actual bible, it is long and comprehensive; no other book is as
comprehensive," said Peter Norvig, a director of research at Google. After 652 pages, volume
one closes with a blurb on the back cover from Bill Gates: "You should definitely send me a
résumé if you can read the whole thing."
The volume opens with an excerpt from " McCall's Cookbook ":
Here is your book, the one your thousands of letters have asked us to publish. It has
taken us years to do, checking and rechecking countless recipes to bring you only the best,
only the interesting, only the perfect.
Inside are algorithms, the recipes that feed the digital age -- although, as Dr. Knuth likes
to point out, algorithms can also be found on Babylonian tablets from 3,800 years ago. He is an
esteemed algorithmist; his name is attached to some of the field's most important specimens,
such as the Knuth-Morris-Pratt string-searching algorithm. Devised in 1970, it finds all
occurrences of a given word or pattern of letters in a text -- for instance, when you hit
Command+F to search for a keyword in a document.
... ... ...
During summer vacations, Dr. Knuth made more money than professors earned in a year by
writing compilers. A compiler is like a translator, converting a high-level programming
language (resembling algebra) to a lower-level one (sometimes arcane binary) and, ideally,
improving it in the process. In computer science, "optimization" is truly an art, and this is
articulated in another Knuthian proverb: "Premature optimization is the root of all evil."
Eventually Dr. Knuth became a compiler himself, inadvertently founding a new field that he
came to call the "analysis of algorithms." A publisher hired him to write a book about
compilers, but it evolved into a book collecting everything he knew about how to write for
computers -- a book about algorithms.
... ... ...
When Dr. Knuth started out, he intended to write a single work. Soon after, computer science
underwent its Big Bang, so he reimagined and recast the project in seven volumes. Now he metes
out sub-volumes, called fascicles. The next installation, "Volume 4, Fascicle 5," covering,
among other things, "backtracking" and "dancing links," was meant to be published in time for
Christmas. It is delayed until next April because he keeps finding more and more irresistible
problems that he wants to present.
In order to optimize his chances of getting to the end, Dr. Knuth has long guarded his time.
He retired at 55, restricted his public engagements and quit email (officially, at least).
Andrei Broder recalled that time management was his professor's defining characteristic even in
the early 1980s.
Dr. Knuth typically held student appointments on Friday mornings, until he started spending
his nights in the lab of John McCarthy, a founder of artificial intelligence, to get access to
the computers when they were free. Horrified by what his beloved book looked like on the page
with the advent of digital publishing, Dr. Knuth had gone on a mission to create the TeX
computer typesetting system, which remains the gold standard for all forms of scientific
communication and publication. Some consider it Dr. Knuth's greatest contribution to the world,
and the greatest contribution to typography since Gutenberg.
This decade-long detour took place back in the age when computers were shared among users
and ran faster at night while most humans slept. So Dr. Knuth switched day into night, shifted
his schedule by 12 hours and mapped his student appointments to Fridays from 8 p.m. to
midnight. Dr. Broder recalled, "When I told my girlfriend that we can't do anything Friday
night because Friday night at 10 I have to meet with my adviser, she thought, 'This is
something that is so stupid it must be true.'"
... ... ...
Lucky, then, Dr. Knuth keeps at it. He figures it will take another 25 years to finish "The
Art of Computer Programming," although that time frame has been a constant since about 1980.
Might the algorithm-writing algorithms get their own chapter, or maybe a page in the epilogue?
"Definitely not," said Dr. Knuth.
"I am worried that algorithms are getting too prominent in the world," he added. "It started
out that computer scientists were worried nobody was listening to us. Now I'm worried that too
many people are listening."
Thanks Siobhan for your vivid portrait of my friend and mentor. When I came to Stanford as
an undergrad in 1973 I asked who in the math dept was interested in puzzles. They pointed me
to the computer science dept, where I met Knuth and we hit it off immediately. Not only a
great thinker and writer, but as you so well described, always present and warm in person. He
was also one of the best teachers I've ever had -- clear, funny, and interested in every
student (his elegant policy was each student can only speak twice in class during a period,
to give everyone a chance to participate, and he made a point of remembering everyone's
names). Some thoughts from Knuth I carry with me: finding the right name for a project is
half the work (not literally true, but he labored hard on finding the right names for TeX,
Metafont, etc.), always do your best work, half of why the field of computer science exists
is because it is a way for mathematically minded people who like to build things to meet
each other, and the observation that when the computer science dept began at Stanford one of
the standard interview questions was "what instrument do you play" -- there was a deep
connection between music and computer science, and indeed the dept had multiple string
quartets. But in recent decades that has changed entirely. If you do a book on Knuth (he
deserves it), please be in touch.
I remember when programming was art. I remember when programming was programming. These
days, it is 'coding', which is more like 'code-spraying'. Throw code at a problem until it
kind of works, then fix the bugs in the post-release, or the next update.
AI is a joke. None
of the current 'AI' actually is. It is just another new buzz-word to throw around to people
that do not understand it at all. We should be in a golden age of computing. Instead, we are
cutting all corners to get something out as fast as possible. The technology exists to do far
more. It is the human element that fails us.
My particular field of interest has always been compiler writing, and I have long been awaiting Knuth's volume on that subject. I would just like to point out that among Knuth's many accomplishments is the invention of LR parsers, which are widely used for writing programming language compilers.
Yes, \TeX, and its derivative, \LaTeX{} contributed greatly to being able to create
elegant documents. It is also available for the web in the form of MathJax, and it's about time
the New York Times supported MathJax. Many times I want one of my New York Times comments to
include math, but there's no way to do so! It comes up equivalent to:
$e^{i\pi}+1$.
I read it at the time, because what I really wanted to read was volume 7, Compilers. As I
understood it at the time, Professor Knuth wrote it in order to make enough money to build an
organ. That apparently happened by 3:Knuth, Searching and Sorting. The most impressive part is the mathematics in Semi-numerical (2:Knuth). A lot of those problems are research
projects over the literature of the last 400 years of mathematics.
I own the three volume "Art of Computer Programming", the hardbound boxed set. Luxurious.
I don't look at it very often thanks to time constraints, given my workload. But your article
motivated me to at least pick it up and carry it from my reserve library to a spot closer to
my main desk so I can at least grab Volume 1 and try to read some of it when the mood
strikes. I had forgotten just how heavy it is, intellectual content aside. It must weigh more
than 25 pounds.
I too used my copies of The Art of Computer Programming to guide me in several projects in
my career, across a variety of topic areas. Now that I'm living in Silicon Valley, I enjoy
seeing Knuth at events at the Computer History Museum (where he was a 1998 Fellow Award
winner), and at Stanford. Another facet of his teaching is the annual Christmas Lecture, in
which he presents something of recent (or not-so-recent) interest. The 2018 lecture is
available online - https://www.youtube.com/watch?v=_cR9zDlvP88
One of the most special treats for first year Ph.D. students in the Stanford University
Computer Science Department was to take the Computer Problem-Solving class with Don Knuth. It
was small and intimate, and we sat around a table for our meetings. Knuth started the
semester by giving us an extremely challenging, previously unsolved problem. We then formed
teams of 2 or 3. Each week, each team would report progress (or lack thereof), and Knuth, in
the most supportive way, would assess our problem-solving approach and make suggestions for
how to improve it. To have a master thinker giving one feedback on how to think better was a
rare and extraordinary experience, from which I am still benefiting! Knuth ended the semester
(after we had all solved the problem) by having us over to his house for food, drink, and
tales from his life. . . And for those like me with a musical interest, he let us play the
magnificent pipe organ that was at the center of his music room. Thank you Professor Knuth,
for giving me one of the most profound educational experiences I've ever had, with such
encouragement and humor!
I learned about Dr. Knuth as a graduate student in the early 70s from one of my professors
and made the financial sacrifice (graduate student assistantships were not lucrative) to buy
the first and then the second volume of the Art of Computer Programming. Later, at Bell Labs,
when I was a bit richer, I bought the third volume. I have those books still and have used
them for reference for years. Thank you, Dr. Knuth. Art, indeed!
@Trerra In the good old days, before Computer Science, anyone could take the Programming
Aptitude Test. Pass it and companies would train you. Although there were many mathematicians
and scientists, some of the best programmers turned out to be music majors. English, Social
Sciences, and History majors were represented as well as scientists and mathematicians. It
was a wonderful atmosphere to work in. When I started to look for a job as a programmer, I
took Prudential Life Insurance's version of the Aptitude Test. After the test, the
interviewer was all bent out of shape because my verbal score was higher than my math score;
I was a physics major. Luckily they didn't hire me and I got a job with IBM.
In summary, "May the force be with you" means: Did you read Donald Knuth's "The Art of
Computer Programming"? Excellent, we loved this article. We will share it with many young
developers we know.
Dr. Knuth is a great Computer Scientist. Around 25 years ago, I met Dr. Knuth in a small
gathering a day before he was awarded an honorary doctorate at a university. This is my
approximate recollection of a conversation. I said-- " Dr. Knuth, you have dedicated your
book to a computer (one with which he had spent a lot of time, perhaps a predecessor to
PDP-11). Isn't it unusual?". He said-- "Well, I love my wife as much as anyone." He then
turned to his wife and said --"Don't you think so?". It would be nice if scientists with the
gift of such great minds tried to address some problems of ordinary people, e.g. a model of
economy where everyone can get a job and health insurance, say, like Dr. Paul Krugman.
I was in a training program for women in computer systems at CUNY graduate center, and
they used his obtuse book. It was one of the reasons I dropped out. He used a fantasy
language to describe his algorithms in his book that one could not test on computers. I
already had work experience as a programmer with algorithms and I know how valuable real
languages are. I might as well have read Animal Farm. It might have been different if he was
the instructor.
Don Knuth's work has been a curious thread weaving in and out of my life. I was first
introduced to Knuth and his The Art of Computer Programming back in 1973, when I was tasked
with understanding a section of the then-only-two-volume Book well enough to give a lecture
explaining it to my college algorithms class. But when I first met him in 1981 at Stanford,
he was all-in on thinking about typography and this new-fangled system of his called TeX.
Skip a quarter century. One day in 2009, I foolishly decided kind of on a whim to rewrite TeX
from scratch (in my copious spare time), as a simple C library, so that its typesetting
algorithms could be put to use in other software such as electronic eBooks with high-quality
math typesetting and interactive pictures. I asked Knuth for advice. He warned me, prepare
yourself, it's going to consume five years of your life. I didn't believe him, so I set off
and tried anyway. As usual, he was right.
I have a signed copy of "Fundamental Algorithms" in my library, which I treasure. Knuth
was a fine teacher, and is truly a brilliant and inspiring individual. He taught during the
same period as Vint Cerf, another wonderful teacher with a great sense of humor who is truly
a "father of the internet". One good teacher makes all the difference in life. More than
one is a rare blessing.
I am a biologist, specifically a geneticist. I became interested in LaTeX typesetting
early in my career and have been either called pompous or vilified by people at all levels
for wanting to use it. One of my PhD advisors famously told me to forget LaTeX because it was a
thing of the past. I have now forgotten him completely. I still use LaTeX almost every day in
my work even though I don't generally typeset with equations or algorithms. My students
always get trained in using proper typesetting. Unfortunately, the publishing industry has
all but largely given up on TeX. Very few journals in my field accept TeX manuscripts, and
most of them convert to word before feeding text to their publishing software. Whatever
people might argue against TeX, the beauty and elegance of a properly typeset document is unparalleled. Long live LaTeX.
A few years ago Severo Ornstein (who, incidentally, did the hardware design for the first
router, in 1969), and his wife Laura, hosted a concert in their home in the hills above Palo
Alto. During a break a friend and I were chatting when a man came over and *asked* if he
could chat with us (a high honor, indeed). His name was Don. After a few minutes I grew
suspicious and asked "What's your last name?" Friendly, modest, brilliant; a nice addition to
our little chat.
When I was a physics undergraduate (at Trinity in Hartford), I was hired to re-write
professor's papers into TeX. Seeing the beauty of TeX, I wrote a program that re-wrote my lab
reports (including graphs!) into TeX. My lab instructors were amazed! How did I do it? I
never told them. But I just recognized that Knuth was a genius and rode his coat-tails, as I
have continued to do for the last 30 years!
A famous quote from Knuth: "Beware of bugs in the above code; I have only proved it
correct, not tried it." Anyone who has ever programmed a computer will feel the truth of this
in their bones.
My grandfather, in the early 60's, could board a 707 in New York and arrive in LA in far
less time than I can today. And no, I am not counting 4 hour layovers with the long waits to
be "screened", the jets were 50-70 knots faster, back then your time was worth more, today
less.
Not counting longer hours AT WORK, we spend far more time commuting, making for much longer work days. Back then your time was worth more, today less!
Software "upgrades" require workers to constantly relearn the same task because some young
"genius" observed that a carefully thought out interface "looked tired" and glitzed it up.
Think about the almost perfect Google Maps driver interface being redesigned by people who
take private buses to work. Way back in the '90's your time was worth more than today!
Life is all the "time" YOU will ever have and if we let the elite do so, they will suck
every bit of it out of you.
The Indeed jobs website determined, by counting the most requested computer languages in
technology job postings, that Java and Python were most desired by employers across the U.S.
The site compared posts from employers in San Francisco, San Jose, and the wider U.S. between
October 2017 and October 2018. Analysis found most of the non-Python or non-Java languages to
be a reflection of the digital economy's demands, with HTML, CSS, and JavaScript buttressing
the everyday Web. Meanwhile, SQL and PHP drive back-end functions such as data retrieval and
dynamic content display. Although languages from tech giants such as Microsoft's C# and Apple's
Swift for iOS and macOS applications were not among the top 10, both were cited as among the
language skills most wanted by developers. Meanwhile, Amazon Web Services, which has proved
vital to cloud computing, did crack the top 10.
Revisiting the Unix philosophy in 2018
The old strategy of building small, focused applications is new again in the modern microservices environment.
In 1984, Rob Pike and Brian W. Kernighan published "Program Design in the Unix Environment" in the AT&T Bell Laboratories Technical Journal, in which they argued the Unix philosophy, using
the example of BSD's cat -v implementation. In a nutshell that philosophy is: Build small,
focused programs -- in whatever language -- that do only one thing but do this thing well,
communicate via stdin/stdout, and are connected through pipes.
Sound familiar?
Yeah, I thought so. That's pretty much the definition of microservices offered
by James Lewis and Martin Fowler:
In short, the microservice architectural style is an approach to developing a single
application as a suite of small services, each running in its own process and communicating
with lightweight mechanisms, often an HTTP resource API.
While one *nix program or one microservice may be very limited or not even very interesting
on its own, it's the combination of such independently working units that reveals their true
benefit and, therefore, their power.
*nix vs. microservices
The following comparison sets programs (such as cat or lsof) in a *nix environment against programs in a microservices environment, category by category.
The unit of execution in *nix (such as Linux) is an executable file (binary or interpreted
script) that, ideally, reads input from stdin and writes output to stdout. A microservices
setup deals with a service that exposes one or more communication interfaces, such as HTTP or
gRPC APIs. In both cases, you'll find stateless examples (essentially a purely functional
behavior) and stateful examples, where, in addition to the input, some internal (persisted)
state decides what happens.
Data flow
Traditionally, *nix programs could communicate via pipes. In other words, thanks to
Doug McIlroy, you don't need to create temporary files to pass around, and you can process virtually endless streams of data between processes. To my knowledge, there is nothing comparable to a pipe
standardized in microservices, besides my little
Apache Kafka-based experiment from 2017.
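As a rough illustration of the parallel (the hostnames and endpoints below are invented purely for the example), compare a classic pipeline with a chain of HTTP services glued together with curl:
# classic *nix: each program reads stdin and writes stdout; the kernel provides the pipe
$ cat access.log | grep " 500 " | wc -l
# microservices analogue: each (hypothetical) service exposes an HTTP API,
# and curl ships the data from one stage to the next
$ curl -s http://logsource.internal/access \
    | curl -s --data-binary @- http://filter.internal/status/500 \
    | curl -s --data-binary @- http://counter.internal/lines
The shape of the data flow is the same; what changes is that the "pipe" is now explicit network plumbing, with its own latency and failure modes.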
Configuration and parameterization
How do you configure a program or service -- either on a permanent or a by-call basis? Well,
with *nix programs you essentially have three options: command-line arguments, environment
variables, or full-blown config files. In microservices, you typically deal with YAML (or even
worse, JSON) documents, defining the layout and configuration of a single microservice as well
as dependencies and communication, storage, and runtime settings. Examples include Kubernetes resource definitions, Nomad job specifications, or Docker Compose files. These may or may not be
parameterized; that is, either you have some templating language, such as Helm in Kubernetes, or you find yourself doing an awful lot of sed
-i commands.
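For the *nix side, the three options look roughly like this (the particular flags and variables are just familiar examples, not anything prescribed by the article):
# 1. command-line arguments
$ grep --color=auto -n "error" app.log
# 2. environment variables
$ GREP_COLORS='ms=01;31' grep "error" app.log
# 3. a full-blown config file (here, git reading ~/.gitconfig)
$ git config --get user.name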
Discovery
How do you know what programs or services are available and how they are supposed to be
used? Well, in *nix, you typically have a package manager as well as good old man; between
them, they should be able to answer all the questions you might have. In a microservices setup,
there's a bit more automation in finding a service. In addition to bespoke approaches like
Airbnb's SmartStack
or Netflix's Eureka, there
usually are environment variable-based or DNS-based approaches
that allow you to discover services dynamically. Equally important, OpenAPI provides a de-facto standard for HTTP API documentation
and design, and gRPC does the same for more
tightly coupled high-performance cases. Last but not least, take developer experience (DX) into
account, starting with writing good Makefiles and ending with writing your
docs with (or in?) style.
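On the microservices side, here is a small sketch of what environment-variable-based and DNS-based discovery typically look like inside a Kubernetes cluster (the payments service and the default namespace are hypothetical):
# environment variables injected by Kubernetes for a service named "payments"
$ echo "$PAYMENTS_SERVICE_HOST:$PAYMENTS_SERVICE_PORT"
# DNS-based discovery via the cluster DNS
$ getent hosts payments.default.svc.cluster.local
# the rough equivalent of the package manager plus man: ask the API server what exists
$ kubectl get services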
Pros and cons
Both *nix and microservices offer a number of challenges and opportunities.
Composability
It's hard to design something that has a clear, sharp focus and can also play well with
others. It's even harder to get it right across different versions and to introduce respective
error case handling capabilities. In microservices, this could mean retry logic and timeouts --
maybe it's a better option to outsource these features into a service mesh? It's hard, but if
you get it right, its reusability can be enormous.
Observability
In a monolith (in 2018) or a big program that tries to do it all (in 1984), it's rather
straightforward to find the culprit when things go south. But, in a
yes | tr \\n x | head -c 450m | grep n
or a request path in a microservices setup that involves, say, 20 services, how do you even
start to figure out which one is behaving badly? Luckily we have standards, notably
OpenCensus and OpenTracing . Observability still might be the biggest single
blocker if you are looking to move to microservices.
Global state
While it may not be such a big issue for *nix programs, in microservices, global state
remains something of a discussion. Namely, how to make sure the local (persistent) state is
managed effectively and how to make the global state consistent with as little effort as
possible.
Wrapping up
In the end, the question remains: Are you using the right tool for a given task? That is, in
the same way a specialized *nix program implementing a range of functions might be the better
choice for certain use cases or phases, it might be that a monolith is the best
option for your organization or workload. Regardless, I hope this article helps you see the
many strong parallels between the Unix philosophy and microservices -- maybe we can learn
something from the former to benefit the latter.
Michael Hausenblas is a Developer Advocate for Kubernetes
and OpenShift at Red Hat where he helps appops to build and operate apps. His background is in
large-scale data processing and container orchestration and he's experienced in advocacy and
standardization at W3C and IETF. Before Red Hat, Michael worked at Mesosphere, MapR, and two
research institutions in Ireland and Austria. He contributes to open source software, including
Kubernetes, speaks at conferences and user groups, and shares good practices...
Elegance is one of those things that can be difficult to define. I know it when I see it,
but putting what I see into a terse definition is a challenge. Using the Linux dict
command, WordNet provides one definition of elegance as "a quality of neatness and ingenious
simplicity in the solution of a problem (especially in science or mathematics); 'the simplicity
and elegance of his invention.'"
In the context of this book, I think that elegance is a state of beauty and simplicity in
the design and working of both hardware and software. When a design is elegant,
software and hardware work better and are more efficient. The user is aided by simple,
efficient, and understandable tools.
Creating elegance in a technological environment is hard. It is also necessary. Elegant
solutions produce elegant results and are easy to maintain and fix. Elegance does not happen by
accident; you must work for it.
The quality of simplicity is a large part of technical elegance. So large, in fact, that it
deserves a chapter of its own, Chapter 18, "Find the Simplicity," but we do not ignore it here.
This chapter discusses what it means for hardware and software to be elegant.
Yes, hardware can be elegant -- even beautiful, pleasing to the eye. Hardware that is well
designed is more reliable as well. Elegant hardware solutions improve reliability.
"Even Microsoft, the biggest software company in the world, recently screwed up..."
Isn't it rather logical that the larger a company is, the more screw-ups it can make?
After all, Microsoft has armies of programmers to make those bugs.
Once I created a joke that the best way to disable missile defense would be to have a
rocket that can stop in mid-air, thus provoking the software to divide by zero and crash. One
day I told that joke to a military officer, who told me that something like that had actually
happened, but it was in the Navy and it involved a test with a torpedo. Not only did the program
for "torpedo defense" go down, but the system crashed too, and the ship's engine stopped
working as well. I also recall explanations that a new complex software system typically has
all major bugs removed only after being used for a year. The occasion was the Internal Revenue
Service changing hardware and software, which led to widely reported problems.
One issue with Microsoft (and not just Microsoft) is that their business model (not the
benefit of the users) requires frequent changes in the systems, so bugs are introduced at a
steady clip. Of course, they do not make money on bugs per se, but on new features that in
time make it impossible to use older versions of the software and hardware.
Nikita Prokopov, a software programmer and author of Fira Code, a popular programming font,
AnyBar, a universal status indicator, and some open-source Clojure libraries, writes:
Remember times when an
OS, apps and all your data fit on a floppy? Your desktop todo app is probably written in
Electron and thus has a userland driver for an Xbox 360 controller in it, can render 3D graphics and
play audio and take photos with your web camera. A simple text chat is notorious for its load
speed and memory consumption. Yes, you really have to count Slack as a resource-heavy
application. I mean, a chatroom and a barebones text editor -- those are supposed to be two of the
least demanding apps in the whole world. Welcome to 2018.
At least it works, you might say. Well, bigger doesn't imply better. Bigger means someone
has lost control. Bigger means we don't know what's going on. Bigger means complexity tax,
performance tax, reliability tax. This is not the norm and should not become the norm.
Overweight apps should be a red flag. They should mean run away scared. A 16GB Android phone
was perfectly fine 3 years ago. Today, with Android 8.1, it's barely usable because each app has
become at least twice as big for no apparent reason. There are no additional functions. They
are not faster or more optimized. They don't look different. They just...grow?
The iPhone 4S was released with iOS 5, but it can barely run iOS 9. And it's not because iOS 9
is that much superior -- it's basically the same. But their new hardware is faster, so they
made the software slower. Don't worry -- you got exciting new capabilities like...running the same
apps at the same speed! I dunno. [...] Nobody understands anything at this point. Nor do
they want to. We just throw barely baked shit out there, hope for the best and call it "startup
wisdom." Web pages ask you to refresh if anything goes wrong. Who has time to figure out what
happened? Any web app produces a constant stream of "random" JS errors in the wild, even on
compatible browsers.
[...] It just seems that nobody is interested in building quality, fast, efficient,
lasting, foundational stuff anymore. Even when efficient solutions have been known for ages, we
still struggle with the same problems: package management, build systems, compilers, language
design, IDEs. Build systems are inherently unreliable and periodically require full clean, even
though all info for invalidation is there. Nothing stops us from making build process reliable,
predictable and 100% reproducible. It's just that nobody thinks it's important. NPM has stayed in
a "sometimes works" state for years.
Less resource use to accomplish the required tasks? Both in manufacturing (more chips from
the same amount of manufacturing input) and in operation (less power used)?
Ehm... so, for example, using smaller cars with better mileage to commute isn't more
environmentally friendly either, according to you?
https://slashdot.org/comments.pl?sid=12644750&cid=57354556#
The iPhone 4S used to be the best and could run all the applications.
Today, the same power is not sufficient because of software bloat. So you could say that
all the iPhones since the iPhone 4S are devices that were created and then dumped for no
reason.
It doesn't matter, since we can't change the past, and it doesn't matter much, since
improvements are slowing down, so people are changing their phones less often.
Can you really not see the connection between inefficient software and environmental harm?
All those computers running code that uses four times as much data, and four times the number
crunching, as is reasonable? That excess RAM and storage has to be built as well as powered
along with the CPU. Those material and electrical resources have to come from somewhere.
But the calculus changes completely when the software manufacturer hosts the software (or
pays for the hosting) for their customers. Our projected AWS bill motivated our management to
let me write the sort of efficient code I've been trained to write. After two years of
maintaining some pretty horrible legacy code, it is a welcome change.
The big players care a great deal about efficiency when they can't outsource inefficiency
to the user's computing resources.
We've been trained to be a consuming society of disposable goods. The latest and
greatest feature will always be more important than something that is reliable and durable
for the long haul.
It's not just consumer stuff.
The network team I'm a part of has been dealing with more and more frequent outages, 90%
of which are due to bugs in software running our devices. These aren't fly-by-night vendors
either, they're the "no one ever got fired for buying X" ones like Cisco, F5, Palo Alto, EMC,
etc.
10 years ago, outages were 10% bugs, and 90% human error, now it seems to be the other way
around. Everyone's chasing features, because that's what sells, so there's no time for
efficiency/stability/security any more.
Poor software engineering means that very capable computers are no longer capable of
running modern, unnecessarily bloated software. This, in turn, leads to people having to
replace computers that are otherwise working well, solely to keep up with
software that requires more and more system resources for no tangible benefit. In a nutshell
-- sloppy, lazy programming leads to more technology waste. That impacts the environment. I
have a unique perspective on this topic. I do web development for a company that does
electronics recycling. I have suffered the continued bloat in software in the tools I use
(most egregiously, Adobe), and I see the impact of technological waste in the increasing
amount of electronics recycling that is occurring. Ironically, I'm working at home today
because my computer at the office kept stalling every time I had Photoshop and Illustrator
open at the same time. A few years ago that wasn't a problem.
There is one place where people still produce stuff like the OP wants, and that's
embedded. Not IoT wank, but real embedded, running on CPUs clocked at tens of MHz with RAM in
two-digit kilobyte (not megabyte or gigabyte) quantities. And a lot of that stuff is written
to very exacting standards, particularly where something like realtime control and/or safety
is involved.
The one problem in this area is the endless battle with standards morons who begin each
standard with an implicit "assume an infinitely
> Poor software engineering means that very capable computers are no longer capable of
running modern, unnecessarily bloated software.
Not just computers.
You can add Smart TVs, settop internet boxes, Kindles, tablets, et cetera that must be
thrown-away when they become too old (say 5 years) to run the latest bloatware. Software
non-engineering is causing a lot of working hardware to be landfilled, and for no good
reason.
When the speed of your processor doubles every two years, along with a concurrent doubling
of RAM and disk space, then you can get away with bloatware.
Since Moore's law appears to have stalled at least five years ago, it will be
interesting to see whether we start to see algorithm research or code optimization techniques
coming to the fore again.
"... It's a bit of chicken-and-egg problem, though. Russia, throughout 20th century, had problem with developing small, effective hardware, so their programmers learned how to code to take maximum advantage of what they had, with their technological deficiency in one field giving rise to superiority in another. ..."
"... Russian tech ppl should always be viewed with certain amount of awe and respect...although they are hardly good on everything. ..."
"... Soviet university training in "cybernetics" as it was called in the late 1980s involved two years of programming on blackboards before the students even touched an actual computer. ..."
"... I recall flowcharting entirely on paper before committing a program to punched cards. ..."
Much has been made, including in this post, of the excellent organization of Russian
forces and Russian military technology.
I have been re-investigating an open-source relational database system known (variously) as PostgreSQL,
and I remember finding perhaps a decade ago a very useful full-text search
feature of this system which I vaguely remember was written by a Russian and, for that
reason, was mildly distrusted by me.
Come to find out that the principal developers and maintainers of PostgreSQL are Russian.
OMG. Double OMG, because the reason I chose it in the first place is that it is the best
non-proprietary RDBMS out there, and today it is supported on Google Cloud, AWS, etc.
The US has met an equal, or conceivably a superior; case closed. Trump's thoroughly odd
behavior with Putin is just one, but a very obvious, example of this.
Of course, Trump's
nationalistic blather is creating a "base" of people who believe in the godliness of the US.
They are in for a very serious disappointment.
After the iron curtain fell, there was a big demand for Russian-trained programmers
because they could program in a very efficient and "light" manner that didn't demand too much
of the hardware, if I remember correctly.
It's a bit of chicken-and-egg problem, though.
Russia, throughout 20th century, had problem with developing small, effective hardware, so
their programmers learned how to code to take maximum advantage of what they had, with their
technological deficiency in one field giving rise to superiority in another.
Russia has plenty of very skilled, very well-trained folks, and their science and math
education is, in a way, more fundamentally and soundly grounded in the foundational stuff
than in the US (based on my personal interactions, anyway).
Russian tech ppl should always be viewed
with certain amount of awe and respect...although they are hardly good on everything.
Well said. Soviet university training in "cybernetics" as it was called in the late 1980s
involved two years of programming on blackboards before the students even touched an actual
computer.
It gave the students an understanding of how computers work down to the bit
flipping level. Imagine trying to fuzz code in your head.
I recall flowcharting entirely on paper before committing a program to punched cards. I
used to do hex and octal math in my head as part of debugging core dumps. Ah, the glory days.
Honeywell once made a military computer that was 10 bit. That stumped me for a while, as
everything was 8 or 16 bit back then.
That used to be fairly common in the civilian sector (in the US) too: computing time was
expensive, so you had to make sure that the stuff worked flawlessly before it was committed.
There was no opportunity to see things go wrong and do things over, like much of how things happen
nowadays. Russians, with their hardware limitations and shortages, I imagine must have been much
more thorough than US programmers were back in the old days, and you could only get there by
being very thoroughly grounded in the basics.
If we take consulting, services, and support off the table as an option for high-growth
revenue generation (the only thing VCs care about), we are left with open core [with some
subset of features behind a paywall] , software as a service, or some blurring of the
two... Everyone wants infrastructure software to be free and continuously developed by highly
skilled professional developers (who in turn expect to make substantial salaries), but no one
wants to pay for it. The economics of
this situation are unsustainable and broken ...
[W]e now come to what I have recently called "loose" open core and SaaS. In the future, I
believe the most successful OSS projects will be primarily monetized via this method. What is
it? The idea behind "loose" open core and SaaS is that a popular OSS project can be developed
as a completely community driven project (this avoids the conflicts of interest inherent in
"pure" open core), while value added proprietary services and software can be sold in an
ecosystem that forms around the OSS...
Unfortunately, there is an inflection point at which in some sense an OSS project becomes
too popular for its own good, and outgrows its ability to generate enough revenue via either
"pure" open core or services and support... [B]uilding a vibrant community and then enabling an
ecosystem of "loose" open core and SaaS businesses on top appears to me to be the only viable
path forward for modern VC-backed OSS startups.
Klein also suggests OSS foundations start providing fellowships to key maintainers, who
currently "operate under an almost feudal system of patronage, hopping from company to company,
trying to earn a living, keep the community vibrant, and all the while stay impartial..."
"[A]s an industry, we are going to have to come to terms with the economic reality: nothing
is free, including OSS. If we want vibrant OSS projects maintained by engineers that are well
compensated and not conflicted, we are going to have to decide that this is something worth
paying for. In my opinion, fellowships provided by OSS foundations and funded by companies
generating revenue off of the OSS is a great way to start down this path."
"... Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley ..."
"... Older generations called this kind of fraud "fake it 'til you make it." ..."
"... Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring ..."
"... It's not a "kids these days" sort of issue, it's *always* been the case that shameless, baseless self-promotion wins out over sincere skill without the self-promotion, because the people who control the money generally understand boasting more than they understand the technology. ..."
"... In the bad old days we had a hell of a lot of ridiculous restriction We must somehow made our programs to run successfully inside a RAM that was 48KB in size (yes, 48KB, not 48MB or 48GB), on a CPU with a clock speed of 1.023 MHz ..."
"... So what are the uses for that? I am curious what things people have put these to use for. ..."
"... Also, Oracle, SAP, IBM... I would never buy from them, nor use their products. I have used plenty of IBM products and they suck big time. They make software development 100 times harder than it could be. ..."
"... I have a theory that 10% of people are good at what they do. It doesn't really matter what they do, they will still be good at it, because of their nature. These are the people who invent new things, who fix things that others didn't even see as broken and who automate routine tasks or simply question and erase tasks that are not necessary. ..."
"... 10% are just causing damage. I'm not talking about terrorists and criminals. ..."
"... Programming is statistically a dead-end job. Why should anyone hone a dead-end skill that you won't be able to use for long? For whatever reason, the industry doesn't want old programmers. ..."
The author shares what he realized at a job recruitment fair seeking Java Legends, Python Badasses, Hadoop Heroes, "and other
gratingly childish classifications describing various programming specialities."
" I wasn't the only one bluffing my way through the tech scene. Everyone was doing it, even the much-sought-after engineering
talent.
I was struck by how many developers were, like myself, not really programmers, but rather this, that and the other. A great
number of tech ninjas were not exactly black belts when it came to the actual onerous work of computer programming. So many of
the complex, discrete tasks involved in the creation of a website or an app had been automated that it was no longer necessary
to possess knowledge of software mechanics. The coder's work was rarely a craft. The apps ran on an assembly line, built with
"open-source", off-the-shelf components. The most important computer commands for the ninja to master were copy and paste...
[M]any programmers who had "made it" in Silicon Valley were scrambling to promote themselves from coder to "founder". There
wasn't necessarily more money to be had running a startup, and the increase in status was marginal unless one's startup attracted
major investment and the right kind of press coverage. It's because the programmers knew that their own ladder to prosperity was
on fire and disintegrating fast. They knew that well-paid programming jobs would also soon turn to smoke and ash, as the proliferation
of learn-to-code courses around the world lowered the market value of their skills, and as advances in artificial intelligence
allowed for computers to take over more of the mundane work of producing software. The programmers also knew that the fastest
way to win that promotion to founder was to find some new domain that hadn't yet been automated. Every tech industry campaign
designed to spur investment in the Next Big Thing -- at that time, it was the "sharing economy" -- concealed a larger programme
for the transformation of society, always in a direction that favoured the investor and executive classes.
"I wasn't just changing careers and jumping on the 'learn to code' bandwagon," he writes at one point. "I was being steadily
indoctrinated in a specious ideology."
> The people who can do both are smart enough to build their own company and compete with you.
Been there, done that. Learned a few lessons. Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other
"smart person" put in 75 hours a week dealing with hiring, managing people, corporate strategy, staying up on the competition,
figuring out tax changes each year and getting taxes filed six times each year, the various state and local requirements, legal
changes, contract hassles, etc., while hoping the company makes money this month so they can take a paycheck and pay their rent.
I learned that I'm good at creating software systems and I enjoy it. I don't enjoy all-nighters, partners being dickheads trying
to pull out of a contract, or any of a thousand other things related to running a start-up business. I really enjoy a consistent,
six-figure compensation package too.
I pay monthly gross receipts tax (12), quarterly withholdings (4), and corporate (1) and individual (1) returns. The gross
receipts can vary based on the state, so I can see how six times a year would be the minimum.
Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week
dealing with hiring
There's nothing wrong with not wanting to run your own business; it's not for most people, and even if it were, the numbers don't
add up. But putting the scare quotes in like that makes it sound like you have a huge chip on your shoulder. Those things are just
as essential to the business as your work, and without them you wouldn't have the steady 9:30-4:30 with a good paycheck.
Of course they are important. I wouldn't have done those things if they weren't important!
I frequently have friends say things like "I love baking. I can't get enough of baking. I'm going to open a bakery." I ask
them, "Do you love dealing with taxes, every month? Do you love contract law? Employment law? Marketing? Accounting?" If you LOVE
baking, the smart thing to do is to spend your time baking. Running a start-up business, you're not going to do much baking.
I can tell you a few things that have worked for me. I'll go in chronological order rather than priority order.
Make friends in the industry you want to be in. Referrals are a major way people get jobs.
Look at the job listings for jobs you'd like to have and see which skills a lot of companies want, but you're missing. For
me that's Java. A lot of companies list Java skills and I'm not particularly good with Java. Then consider learning the skills you
lack, the ones a lot of job postings are looking for.
You don't understand the point of an ORM, do you? I'd suggest reading up on why they exist.
They exist because programmers value code design more than data design. ORMs are the poster-child for square-peg-round-hole
solutions, which is why all ORMs choose one of three different ways of squashing hierarchical data into a relational form, all
of which are crappy.
If the devs of the system (the ones choosing to use an ORM) had any competence at all, they'd design their database first, because
in any application that uses a database, the database is the most important bit, not the OO-ness or Functional-ness of the design.
Over the last few decades I've seen programs in a system come and go; a component here gets rewritten, a component there gets
rewritten, but you know what? They all have to work with the same damn data.
You can more easily switch out your code for new code with a new design in a new language than you can change the database
structure. So explain to me why it is that you think the database should be mangled to fit your OO code rather than mangling your
OO code to fit the database?
Stick to the one thing for 10-15 years. Often all this new shit doesn't do jack different from the old shit; it's not faster, it's
not better. Every dick wants to be famous, so he makes another damn library/tool with his own fancy name and features, instead
of enhancing an existing product.
Or kids who can't hack the main stuff suddenly discover the cool new thing, and then they can pretend they're "learning" it, and
when the going gets tough (as it always does) they can declare the tech to be pants and move on to another.
Hence we had so many people on the bandwagon for functional programming, then dumped it for Ruby on Rails, then dumped that
for Node.js; not sure what they're on currently, probably back to asp.net.
How much code do you have to reuse before you're not really programming anymore? When I started in this business, it was reasonably
possible that you could end up on a project that didn't particularly have much (or any) of an operating system. They taught you
assembly language and the process by which the system boots up, but I think if I were to ask most of the programmers where I work,
they wouldn't be able to explain how all that works...
It really feels like if you know what you're doing it should be possible to build a team of actually good programmers and
put everyone else out of business by actually meeting your deliverables, but no one has yet. I wonder why that is.
You mean Amazon, Google, Facebook and the like? People may not always like what they do, but they manage to get things done
and make plenty of money in the process. The problem for a lot of other businesses is not having a way to identify and promote
actually good programmers. In your example, you could've spent 10 minutes fixing their query and saved them days of headache,
but how much recognition will you actually get? Where is your motivation to help them?
It's not a "kids these days" sort of issue, it's *always* been the case that shameless, baseless self-promotion wins out
over sincere skill without the self-promotion, because the people who control the money generally understand boasting more than
they understand the technology. Yes it can happen that baseless boasts can be called out over time by a large enough mass
of feedback from competent peers, but it takes a *lot* to overcome the tendency for them to have faith in the boasts.
And all these modern coders forget old lessons and make shit stuff; just look at the Instagram Windows app, what a load of garbage
shit, that us old fuckers could code in 2-3 weeks.
Instagram - your app sucks, cookie-cutter coders suck, no refinement, no coolness. Just cheap-ass shit, with limited usefulness.
Just like most commercial software that's new - quick shit.
Oh, and it's obvious if you're an Indian faking it: you haven't worked in 100 companies at the age of 29.
Here's another problem: if faced with a skilled team that says "this will take 6 months to do right" and a more naive team
that says "oh, we can slap that together in a month", management goes with the latter. Then the security compromises occur, then
the application fails due to pulling in an unvetted dependency update live into production. When the project grows to handling
thousands instead of dozens of users and it starts mysteriously folding over and the dev team is at a loss, well the choice has
be
These restrictions are a large part of what makes Arduino programming "fun". If you don't plan out your memory usage, you're
gonna run out of it. I cringe when I see 8MB web pages of bloated "throw in everything including the kitchen sink and the neighbor's
car". Unfortunately, the careful and cautious way is dying in favor of throwing 3rd-party code at it until it does something.
Of course, I don't have time to review it, but I'm sure everybody else has peer-reviewed it for flaws and exploits line by line.
Unfortunately, the careful and cautious way is dying in favor of throwing 3rd-party code at it until it does something.
Of course. What is the business case for making it efficient? Those massive frameworks are cached by the browser and run on
the client's system, so they cost you nothing and save you time to market. Efficiency costs money with no real benefit to the business.
If we want to fix this, we need to make bloat have an associated cost somehow.
My company is dealing with the result of this mentality right now. We released the web app to the customer without performance
testing and doing several majorly inefficient things to meet deadlines. Once real load was put on the application by users with
non-ideal hardware and browsers, the app was infuriatingly slow. Suddenly our standard sub-40 hour workweek became a 50+ hour
workweek for months while we fixed all the inefficient code and design issues.
So, while you're right that getting to market and opt
In the bad old days we had a hell of a lot of ridiculous restrictions. We somehow had to make our programs run successfully
inside RAM that was 48KB in size (yes, 48KB, not 48MB or 48GB), on a CPU with a clock speed of 1.023 MHz.
We still have them. In fact, some of the systems I've programmed have been more resource-limited than the gloriously spacious
32KiB memory of the BBC Model B. Take the PIC12F or 10F series: a glorious 64 bytes of RAM, a max clock speed of 16MHz, but it's not
unusual to run it at 32kHz.
So what are the uses for that? I am curious what things people have put these to use for.
It's hard to determine because people don't advertise use of them at all. However, I know that my electric toothbrush uses
an Epson 4-bit MCU of some description. It's got a status LED, a basic NiMH battery charger and a PWM controller for an H-bridge.
Braun sell a *lot* of electric toothbrushes. Any gadget that's smarter than a simple switch will probably have some sort of basic
MCU in it. Alarm system components, sensor
b) No computer ever ran at 1.023 MHz. It was either a nice multiple of 1MHz or maybe a multiple of 3.579545MHz (i.e. using the
TV output circuit's color clock crystal to drive the CPU).
Well, it could be used to drive the TV output circuit, OR it was used because it's a stupidly cheap high-speed crystal. You
have to remember that, except for a few frequencies, most crystals would have to be specially cut for the desired frequency. This occurs
even today, where most oscillators are either 32.768kHz (real-time clock
Yeah, nice talk. You could have stopped after the first sentence. The other AC is referring to the
Commodore C64 [wikipedia.org]. The frequency has nothing
to do with crystal availability but with the simple fact that everything in the C64 is synced to the TV. One clock cycle equals
8 pixels. The graphics chip and the CPU take turns accessing the RAM. The different frequencies dictated by the TV standards are
the reason why the CPU in the NTSC version of the C64 runs at 1.023MHz and the PAL version at 0.985MHz.
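For what it's worth, the arithmetic checks out if the commonly cited crystal frequencies and dividers are right (these figures are from memory, not from the parent post): 14.31818 MHz / 14 for NTSC and 17.734475 MHz / 18 for PAL.

    #!/usr/bin/env python3
    # Back-of-the-envelope check of the C64 clock figures quoted above.
    NTSC_CRYSTAL_MHZ = 14.31818      # 4 x the 3.579545 MHz NTSC colorburst
    PAL_CRYSTAL_MHZ = 17.734475

    ntsc_cpu = NTSC_CRYSTAL_MHZ / 14
    pal_cpu = PAL_CRYSTAL_MHZ / 18

    print(f"NTSC CPU clock: {ntsc_cpu:.6f} MHz")   # ~1.022727 MHz, i.e. the 1.023 figure
    print(f"PAL  CPU clock: {pal_cpu:.6f} MHz")    # ~0.985249 MHz
    print(f"NTSC dot clock: {ntsc_cpu * 8:.3f} MHz (8 pixels per CPU cycle)")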
Commodore 64 for the win. I worked for a company that made detection devices for the railroad, things like monitoring axle
temperatures and reading the rail car ID tags. The original devices were made using Commodore 64 boards, running software written by
an employee at the one railroad company working with them.
The company then hired some electrical engineers to design custom boards using the 68000 chips and I was hired as the only
programmer. Had to rewrite all of the code which was fine...
Many of these languages have an interactive interpreter. I know for a fact that Python does.
So, since job-fairs are an all day thing, and setup is already a thing for them -- set up a booth with like 4 computers at
it, and an admin station. The 4 terminals have an interactive session with the interpreter of choice. Every 20min or so, have
a challenge for "Solve this problem" (needs to be easy and already solved in general. Programmers hate being pimped without pay.
They don't mind tests of skill, but hate being pimped. Something like "sort this array, while picking out all the prime numbers"
or something.) and see who steps up. The ones that step up have confidence they can solve the problem, and you can quickly see
who can do the work and who can't.
The ones that solve it, and solve it to your satisfaction, you offer a nice gig to.
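For the sake of argument, a passing answer to the booth challenge above might look something like the sketch below (the exact spec and the function names are of course invented):

    #!/usr/bin/env python3
    # One possible answer to "sort this array, while picking out all the prime numbers".
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        i = 2
        while i * i <= n:                 # trial division up to sqrt(n) is plenty here
            if n % i == 0:
                return False
            i += 1
        return True

    def sort_and_pick_primes(values):
        ordered = sorted(values)
        primes = [v for v in ordered if is_prime(v)]
        return ordered, primes

    if __name__ == "__main__":
        print(sort_and_pick_primes([10, 3, 7, 8, 2, 9, 11]))
        # -> ([2, 3, 7, 8, 9, 10, 11], [2, 3, 7, 11])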
Then you get someone good at sorting arrays while picking out prime numbers, but potentially not much else.
The point of the test is not to identify the perfect candidate, but to filter out the clearly incompetent. If you can't sort
an array and write a function to identify a prime number, I certainly would not hire you. Passing the test doesn't get you a job,
but it may get you an interview ... where there will be other tests.
(I am not even a professional programmer, but I can totally perform such a trivially easy task. The example tests basic understanding
of loop construction, function construction, variable use, efficient sorting, and error correction -- especially with mixed-type
arrays. All of these are things any programmer SHOULD know how to do, without being overly complicated, or clearly a disguised
occupational problem trying to get a free solution. Like I said, programmers hate being pimped, and will be turned off
Again, the quality applicant and the code monkey both have something the fakers do not-- Actual comprehension of what a program
is, and how to create one.
As Bill points out, this is not the final exam. This is the "Oh, I see you do actually know how to program-- show me more"
portion of the process. This is the part that HR drones are not capable of performing, due to Dunning-Kruger. Those that are
actually, REALLY competent will do more than just satisfy the requirements of the challenge, they will provide actually working
solutions to the challenge that properly validate their input, and return proper error states if the input is invalid, etc-- You
can learn a LOT about a potential hire by observing their work. *THAT* is what this is really about. The triviality of the problem
is a necessity, because you ***DON'T*** try to get free solutions out of people.
I realize that may be difficult for you to comprehend, but you *DON'T* do that. The job fair is to let people know that you
have a position available, and try to curry interest in people to apply. A successful pre-screening is confidence building, and
helps the potential hire to feel that your company is actually interested in actually hiring somebody, and not just fucking off
in the booth, to cover for "failing to find somebody" and then "Getting yet another H1B". It gives them a chance to show you what
they can do. That is what it is for, and what it does. It also excludes the fakers that this article is about-- The ones that
can talk a good talk, but could not program a simple boolean check condition if their life depended on it.
If it were not for the time constraints of a job fair (usually only 2 days, and in that time you need to try and pre-screen
as many as possible), I would suggest a tiered challenge, with progressively harder challenges, where you hand out resumes to
the ones that make it to the top 3 brackets, but that is not the way the world works.
This, in my opinion, is really a waste of time. Challenges like this have to be so simple that they can be done walking up to a
booth, and they are not likely to filter the "all talk" types any better than a few interview questions could (asked in person, so the candidate can't
just google them).
Tougher, more involved stuff isn't good either: it gives a huge advantage to the full-time job hunter. The guy or gal who
already has a 9-5 and a family that wants to see them has not got time for games. We have been struggling with hiring where
I work (I do a lot of the interviews) and these are the conclusions we have reached.
You would be surprised at the number of people with impeccable-looking resumes failing at something as simple as the
FizzBuzz test [codinghorror.com]
The only thing fizzbuzz tests is "have you done fizzbuzz before?" It's a short question filled with every petty trick the author
could think to throw in there. If you haven't seen the tricks, they trip you up for no reason related to your actual coding skills.
Once you have seen them, they're trivial and again unrelated to real work. Fizzbuzz is best passed by someone aiming to game the
interview system. It passes people gaming it and trips up people who spent their time doing real on-the-job work.
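For readers who haven't met it: FizzBuzz itself is tiny, roughly the loop below in its canonical 1-to-100 form (interviewers vary the details), which is exactly why opinions differ on what passing it actually proves.

    #!/usr/bin/env python3
    # The canonical FizzBuzz loop, for reference: multiples of 3 print "Fizz",
    # multiples of 5 print "Buzz", multiples of both print "FizzBuzz".
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)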
A good programmer first and foremost has a clean mind. Experience suggests puzzle geeks, who excel at contrived tests, are
usually sloppy thinkers.
No. Good programmers can trivially knock out any of these so-called lame monkey tests. It's lame code monkeys who can't do
it. And I've seen their work. Many night shifts and weekends I've burned trying to fix their shit because they couldn't actually
do any of the things behind what you call "lame monkey tests", like:
- pulling expensive invariant calculations out of loops
- using for loops to scan a fucking table to pull rows or calculate an aggregate when they could let the database do what it does best with a simple SQL statement
- systems crashing under actual load because their shitty code was never stress tested (but it worked on my dev box!)
- again with databases, having to redo their schemas because they were fattened up so much with columns like VALUE1, VALUE2, ... VALUE20 (normalize, you assholes!)
- chatty remote APIs -- because these code monkeys cannot think about the need for bulk operations in increasingly distributed systems
- storing dates in unsortable strings because the idiots do not know most modern programming languages have a date data type.
Oh and the most important, off-by-one looping errors. I see this all the time, the type of thing a good programmer can spot
on quickly because he or she can do the so-called "lame monkey tests" that involve arrays and sorting.
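To make the database point in the list above concrete, here is a small sketch (in-memory SQLite with an invented table) contrasting the row-by-row scan with letting the database compute the aggregate:

    #!/usr/bin/env python3
    # "Let the database do what it does best": loop-scan vs. a single SQL aggregate.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                     [(x * 1.5,) for x in range(1000)])

    # The code-monkey way: drag every row into the application and loop over it.
    total_loop = 0.0
    for (amount,) in conn.execute("SELECT amount FROM orders"):
        total_loop += amount

    # The SQL way: one statement, one row back.
    (total_sql,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()

    print(total_loop, total_sql)   # same number, very different amount of data shipped around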
I've seen the type: "I don't need to do this shit because I have business knowledge and I code for business and IT not google",
and then they go and code and fuck it up... and then the rest of us have to go clean up their shit at 1AM or on weekends.
If you work as an hourly paid contractor cleaning that crap, it can be quite lucrative. But sooner or later it truly sucks
the energy out of your soul.
So yeah, we need more lame monkey tests ... to filter the lame code monkeys.
Someone could Google the problem with the phone then step up and solve the challenge.
If given a spec, someone can consistently cobble together working code by Googling, then I would love to hire them. That is
the most productive way to get things done.
There is nothing wrong with using external references. When I am coding, I have three windows open: an editor, a testing window,
and a browser with a Stackoverflow tab open.
Yeah, when we do tech interviews, we ask questions that we are certain they won't be able to answer, but we want to see how they
would think about the problem, what questions they ask to get more data, and that they don't just fold up and say "well, that's
not the sort of problem I'd be thinking of." The examples aren't made up or anything; they are generally a selection of real problems,
incredibly difficult ones that our company had faced before, that one may not think at first glance such a position would
than spending weeks interviewing "good" candidates for an opening, selecting a couple and hiring them as contractors, then
finding out they are less than unqualified to do the job they were hired for.
I've seen it a few times, Java "experts", Microsoft "experts" with years of experience on their resumes, but completely useless
in coding, deployment or anything other than buying stuff from the break room vending machines.
That being said, I've also seen projects costing hundreds of thousands of dollars, with y
I agree with this. I consider myself to be a good programmer and I would never go into the contractor game. I also wonder: how
does it take you weeks to interview someone and you still can't figure out whether the person can code? I could probably see that
in 15 minutes in a pair coding session.
Also, Oracle, SAP, IBM... I would never buy from them, nor use their products. I have used plenty of IBM products and they
suck big time. They make software development 100 times harder than it could be. Their technical supp
That being said, I've also seen projects costing hundreds of thousands of dollars, with years of delays from companies like
Oracle, Sun, SAP, and many other "vendors"
Software development is a hard thing to do well, despite the general thinking of technology becoming cheaper over time, and
like health care the quality of the goods and services received can sometimes be difficult to ascertain. However, people who don't
respect developers and the problems we solve are very often the same ones who continually frustrate themselves by trying to cheap
out, hiring outsourced contractors, and then tearing their hair out when sub par results are delivered, if anything is even del
As part of your interview process, don't you have candidates code a solution to a problem on a whiteboard? I've interviewed
lots of "good" candidates (on paper) too, but they crashed and burned when challenged with a coding exercise. As a result, we
didn't make them job offers.
I'm not a great coder but good enough to get done what clients want done. If I'm not sure or don't think I can do it, I tell
them. I think they appreciate the honesty. I don't work in a tech-hub, startups or anything like that so I'm not under the same
expectations and pressures that others may be.
OK, so yes, I know plenty of programmers who do fake it. But stitching together components isn't "fake" programming.
Back in the day, we had to write our own code to loop through an XML file, looking for nuggets. Now, we just use an XML serializer.
Back then, we had to write our own routines to send TCP/IP messages back and forth. Now we just use a library.
I love it! I hated having to make my own bricks before I could build a house. Now, I can get down to the business of writing
the functionality I want, ins
But, I suspect you could write the component if you had to. That makes you a very different user of that component than someone
who just knows it as a magic black box.
Because of this, you understand the component better and have real knowledge of its strengths and limitations. People blindly
using components with only a cursory idea of their internal operation often cause major performance problems. They rarely recognize
when it is time to write their own to overcome a limitation (or even that it is possibl
You're right on all counts. A person who knows how the innards work, is better than someone who doesn't, all else being equal.
Still, today's world is so specialized that no one can possibly learn it all. I've never built a processor, as you have, but I
still have been able to build a DNA matching algorithm for a major DNA lab.
I would argue that anyone who can skillfully use off-the-shelf components can also learn how to build components, if they are
required to.
1. 'Back in the Day' there was no XML; XML was not very long ago.
2. It's a parser; a serialiser is pretty much the opposite (unless this week's fashion has redefined that... anything is possible).
3. 'Back then' we didn't have TCP stacks...
But, actually, I agree with you. I can only assume the author thinks there are lots of fake plumbers because they don't cast
their own toilet bowls from raw clay, and use pre-built fittings and pipes! That car mechanics start from raw steel scrap and
a file... And that you need
Yes, I agree with you on the "middle ground." My reaction was to the author's point that "not knowing how to build the components"
was the same as being a "fake programmer."
If I'm a plumber, and I don't know anything about the engineering behind the construction of PVC pipe, I can still be a good
plumber. If I'm an electrician, and I don't understand the role of a blast furnace in the making of the metal components, I can
still be a good electrician.
The analogy fits. If I'm a programmer, and I don't know how to make an LZW compression library, I can still be a good programmer.
It's a matter of layers. These days, we specialize. You've got your low-level programmers that make the components, the high level
programmers that put together the components, the graphics guys who do HTML/CSS, and the SQL programmers that just know about
databases. Every person has their specialty. It's no longer necessary to be a low-level programmer, or jack-of-all-trades, to
be "good."
If I don't know the layout of the IP header, I can still write quality networking software, and if I know XSLT, I can still
do cool stuff with XML, even if I don't know how to write a good parser.
LOL yeah I know it's all JSON now. I've been around long enough to see these fads come and go. Frankly, I don't see a whole
lot of advantage of JSON over XML. It's not even that much more compact, about 10% or so. But the point is that the author laments
the "bad old days" when you had to create all your own building blocks, and you didn't have a team of specialists. I for one don't
want to go back to those days!
The main advantage of JSON is that it is consistent. XML has attributes and embedded optional stuff within tags. That was
derived from the original SGML ancestor, where it was thought to be a convenience for the human authors who were supposed to be
making the mark-up manually. Programmatically it is a PITA.
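A tiny illustration of that consistency point: the same record can legitimately be written (at least) two ways in XML, as attributes or as child elements, and consuming code has to cope with both, whereas the JSON form is essentially forced (the record itself is made up):

    #!/usr/bin/env python3
    # The same record in two equally valid XML shapes versus one obvious JSON shape.
    import json
    import xml.etree.ElementTree as ET

    xml_attr = '<user name="ada" id="1"/>'
    xml_elem = '<user><name>ada</name><id>1</id></user>'

    def user_from_xml(text: str) -> dict:
        root = ET.fromstring(text)
        # Consuming code has to check both conventions: attributes and child elements.
        name = root.get("name") or root.findtext("name")
        uid = root.get("id") or root.findtext("id")
        return {"name": name, "id": int(uid)}

    print(user_from_xml(xml_attr), user_from_xml(xml_elem))
    print(json.loads('{"name": "ada", "id": 1}'))   # only one natural shape in JSON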
I got shit for decrying XML back when it was the trendy thing. I've had people apologise to me months later because they've
realized I was right, even though at the time they did their best to fuck over my career because XML was the new big thing and
I wasn't fully on board.
XML has its strengths and its place, but fuck me it taught me how little some people really fucking understand shit.
And a rather small part at that, albeit a very visible and vocal one full of the proverbial prima donnas. However, much of the
rest of the tech business, or at least the people working in it, are not like that. It's small groups of developers working in
other industries that would not typically be considered technology. There are software developers working for insurance companies,
banks, hedge funds, oil and gas exploration or extraction firms, national defense and many hundreds and thousands of other small
They knew that well-paid programming jobs would also soon turn to smoke and ash, as the proliferation of learn-to-code courses
around the world lowered the market value of their skills, and as advances in artificial intelligence allowed for computers
to take over more of the mundane work of producing software.
Kind of hard to take this article seriously after it says gibberish like this. I would say most good programmers know that neither
learn-to-code courses nor AI are going to make a dent in their income any time soon.
There is a huge shortage of decent programmers. I have personally witnessed more than one phone "interview" that went like
"have you done this? what about this? do you know what this is? um, can you start Monday?" (120K-ish salary range)
Partly because there are way more people who got their stupid ideas funded than good coders willing to stain their resume with
that. Partly because, if you are funded and cannot do all the required coding solo, here's your conundrum:
Top-level hackers can afford to be really picky, so on one hand it's hard to get them interested, and if you could get
that, they often want some ownership of the project. The plus side is that they are happy to work for lots of equity if they
have faith in the idea, but that can be a huge "if".
"Good but not exceptional" senior engineers aren't usually going to be super happy, as they often have spouses and children
and mortgages, so they'd favor job security over exciting ideas and the startup lottery.
That leaves you with fresh-out-of-college folks, who are really a mixed bunch. Some are actually already at a senior
level of understanding without the experience, some are absolutely useless, with varying degrees in between, and there's no
easy way to tell which is which early on.
So the not-so-scrupulous folks realized what's going on and launched multiple coding boot camp programmes, essentially to
trick both the students into believing they can become a coder in a month or two, and the prospective employers into believing that said
students are useful. So far it's been working, to a degree, in part because in such companies the coding skill evaluation process
is broken. But one can only hide their lack of value-add for so long, even if they do manage to bluff their way into a job.
All one has to do is look at the lousy state of software and web sites today to see this is true. It's quite obvious little
to no thought is given to how to make something work such that one doesn't have to jump through hoops.
I have many times said the most perfect word processing program ever developed was WordPerfect 5.1 for DOS. One's productivity
was astonishing. It just worked.
Now we have the bloated behemoth Word which does its utmost to get in the way of you doing your work. The only way to get it
to function is to turn large portions of its "features" off, and even then it still insists on doing something other than what
you told it to do.
Then we have the abomination of Windows 10, which is nothing but Clippy on 10X steroids. It is patently obvious the people
who program this steaming pile have never heard of simplicity. Who in their right mind would think having to "search" for something
is more efficient than going directly to it? I would ask whether these people wander around stores "searching" for what
they're looking for, but then I realize that's how their entire life is run. They search for everything online rather than going
directly to the source. It's no wonder they complain about not having time to do things. They're always searching.
Web sites are another area where these people have no clue what they're doing. Anything that might be useful is hidden behind
dropdown menus, flyouts, popup bubbles and intricately designed mazes of clicks needed to get to where you want to go. When someone
clicks on a line of products, they shouldn't be harassed about what part of the product line they want to look at. Give them the
information and let the user go where they want.
This rant could go on, but this article explains clearly why we have regressed when it comes to software and web design. Instead
of making things simple and easy to use, using the one or two brain cells they have, programmers and web designers let the software
do what it wants without considering, should it be done like this?
The tech industry has a ton of churn -- there's some technological advancement, but there's an awful lot of new products turned
out simply to keep customers buying new licenses and paying for upgrades.
This relentless and mostly phony newness means a lot of people have little experience with current products. People fake because
they have no choice. The good ones understand the general technologies and problems they're meant to solve and can generally get
up to speed quickly, while the bad ones are good at faking it but don't really know what they're doing. Telling the difference
from the outside is impossible.
Sales people make it worse, promoting people as "experts" in specific products or implementations because the people have experience
with a related product and "they're all the same". This burns out the people with good adaptation skills.
From the summary, it sounds like a lot of programmers and software engineers are trying to develop the next big thing so that
they can literally beg for money from the elite class and one day, hopefully, become a member of the aforementioned. It's sad
how the middle class has been utterly decimated in the United States that some of us are willing to beg for scraps from the wealthy.
I used to work in IT but I've aged out and am now back in school to learn automotive technology so that I can do something other
than being a security guard. Currently, the only work I have been able to find has been in the unglamorous security field.
I am learning some really good new skills in the automotive program that I am in but I hate this one class called "Professionalism
in the Shop." I can summarize the entire class in one succinct phrase, "Learn how to appeal to, and communicate with, Mr. Doctor,
Mr. Lawyer, or Mr. Wealthy-man." Basically, the class says that we are supposed to kiss their ass so they keep coming back to
the Audi, BMW, Mercedes, Volvo, or Cadillac dealership. It feels a lot like begging for money on behalf of my employer (of which
very little of it I will see) and nothing like professionalism. Professionalism is doing the job right the first time, not jerking
the customer off. Professionalism is not begging for a 5 star review for a few measly extra bucks but doing absolute top quality
work. I guess the upshot is that this class will be the easiest 4.0 that I've ever seen.
There is something fundamentally wrong when the wealthy elite have basically demanded that we beg them for every little scrap.
I can understand the importance of polite and professional interaction but this prevalent expectation that we bend over backwards
for them crosses a line with me. I still suck it up because I have to but it chafes my ass to basically validate the wealthy man.
In the 70's I worked with two people who had a natural talent for computer science algorithms vs. coding syntax. In the 90's, while at Columbia, I worked with only a couple of true computer scientists out of 30 students.
I've met one genius who programmed, spoke 13 languages, was ex-CIA, wrote SWIFT and spoke fluent assembly, complete with animated characters.
According to the Bluff Book, everyone else without natural talent fakes it. In the undiluted definition of computer science,
genetics roulette and intellectual d
Ah yes, the good old 80:20 rule, except it's recursive for programmers.
80% are shit, so you fire them. Soon you realize that 80% of the remaining 20% are also shit, so you fire them too. Eventually
you realize that 80% of the 4% remaining after sacking the 80% of the 20% are also shit, so you fire them!
...
The cycle repeats until there's just one programmer left: the person telling the joke.
---
tl;dr: All programmers suck. Just ask them to review their own code from more than 3 years ago: they'll tell you that
Who gives a fuck about lines? If someone gave me JavaScript, and someone gave me minified JavaScript, which one would I
want to maintain?
I don't care about your line savings; less isn't always better.
Because the world of programming is not centered on JavaScript, and reduction of lines is not the same as minification.
If the first thing that came to your mind was minified JavaScript when you saw this conversation, you are certainly not
the type of programmer I would want to inherit code from.
See, there's a lot of shit out there that is overtly redundant and unnecessarily complex. This is especially true when copy-n-paste
code monkeys are left to their own devices, for whom code formatting seems
I have a theory that 10% of people are good at what they do. It doesn't really matter what they do, they will still be
good at it, because of their nature. These are the people who invent new things, who fix things that others didn't even see as
broken and who automate routine tasks or simply question and erase tasks that are not necessary. If you have a software team
that contains 5 of these, you can easily beat a team of 100 average people, not only in cost but also in schedule, quality and
features. In theory they are worth 20 times more than average employees, but in practice they are usually paid the same amount
of money, with few exceptions.
80% of people are the average. They can follow instructions and they can get the work done, but they don't see that something
is broken and needs fixing if it works the way it has always worked. While it might seem so, these people are not worthless. There
are a lot of tasks that these people are happily doing which the 10% don't want to do. E.g. simple maintenance work, implementing
simple features, automating test cases, etc. But if you let the top 10% lead the project, you most likely won't need that
many of these people. Most of the work done by these people is caused by themselves, by writing bad software due to the lack of a good leader.
10% are just causing damage. I'm not talking about terrorists and criminals. I have seen software developers who have tried
(their best?), but still end up causing just damage to the code that someone else needs to fix, costing much more than their own
wasted time. You really must use code reviews if you don't know your team members, to find these people early.
I have a lot of weaknesses. My people skills suck, I'm scrawny, I'm arrogant. I'm also generally known as a really good programmer
and people ask me how/why I'm so much better at my job than everyone else in the room. (There are a lot of things I'm not good
at, but I'm good at my job, so say everyone I've worked with.)
I think one major difference is that I'm always studying, intentionally working to improve, every day. I've been doing that
for twenty years.
I've worked with people who have "20 years of experience"; they've done the same job, in the same way, for 20 years. Their
first month on the job they read the first half of "Databases for Dummies" and that's what they've been doing for 20 years. They
never read the second half, and use Oracle database 18.0 exactly the same way they used Oracle Database 2.0 - and it was wrong
20 years ago too. So it's not just experience, it's 20 years of learning, getting better, every day. That's 7,305 days of improvement.
If you take this attitude towards other people, people will not ask you for help. At the same time, you won't be able
to ask for their help either.
You're not interviewing your peers. They are already in your team. You should be working together.
I've seen superstar programmers suck the life out of project by over-complicating things and not working together with others.
10% are just causing damage. I'm not talking about terrorists and criminals.
Terrorists and criminals have nothing on those guys. I know a guy who is one of those. Worse, he's both motivated and enthusiastic.
He also likes to offer help and advice to other people who don't know the systems well.
"I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics
are combined. Some are clever and diligent -- their place is the General Staff. The next lot are stupid and lazy -- they make
up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest
leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must
beware of anyone who is stupid and diligent -- he must not be entrusted with any responsibility because he will always cause only
mischief."
It's called the Pareto Distribution [wikipedia.org].
The number of competent people (people doing most of the work) in any given organization goes like the square root of the number
of employees.
Matches my observations. 10-15% are smart, can think independently, can verify claims by others and can identify and use rules
in whatever they do. They are not fooled by things "everybody knows" and see standard-approaches as first approximations that,
of course, need to be verified to work. They do not trust anything blindly, but can identify whether something actually works well
and build up a toolbox of such things.
The problem is that in coding, you do not have a "(mass) production step", and that is the
In basic concept I agree with your theory, it fits my own anecdotal experience well, but I find that your numbers are off.
The top bracket is actually closer to 20%. The reason it seems so low is that a large portion of the highly competent people are
running one-programmer shows, so they have no co-workers to appreciate their knowledge and skill. The places they work do a very
good job of keeping them well paid and happy (assuming they don't own the company outright), so they rarely if ever switch jobs.
at least 70, probably 80, maybe even 90 percent of professional programmers should just fuck off and do something else as they
are useless at programming.
Programming is statistically a dead-end job. Why should anyone hone a dead-end skill that you won't be able to use for
long? For whatever reason, the industry doesn't want old programmers.
Otherwise, I'd suggest longer training and education before they enter the industry. But that just narrows an already narrow
window of use.
Well, it does rather depend on which industry you work in - I've managed to find interesting programming jobs for 25 years,
and there's no end in sight for interesting projects and new avenues to explore. However, this isn't for everyone, and if you
have good personal skills then moving from programming into some technical management role is a very worthwhile route, and I know
plenty of people who have found very interesting work in that direction.
I think that is a misinterpretation of the facts. Older coders who are incompetent are just much more obvious, and they are usually
also limited to technologies that have gotten old as well. Hence the 90% of old coders who cannot actually hack it (and never
really could) get sacked at some point and cannot find a new job with their limited and outdated skills. The 10% that are good at
it do not need to worry, though. Those who do worry are their employers, as these people approach retirement age.
My experience as an IT Security Consultant (I also do some coding, but only at full rates) confirms that. Most are basically
helpless and many have negative productivity, because people with a clue need to clean up after them. "Learn to code"? We have
far too many coders already.
You can't bluff your way through writing software, but many, many people have bluffed their way into a job and then tried to
learn it from the people who are already there. In a marginally functional organization those incompetents are let go pretty quickly,
but sometimes they stick around for months or years.
Apparently the author of this book is one of those, probably hired and fired several times before deciding to go back to his
liberal arts roots and write a book.
I think you can and this is by far not the first piece describing that. Here is a classic:
https://blog.codinghorror.com/... [codinghorror.com]
Yet these people somehow manage to actually have "experience" because they worked in a role they are completely unqualified to
fill.
Fiddling with JavaScript libraries to get a fancy dancy interface that makes PHBs happy is a sought-after skill, for good
or bad. Now that we rely more on half-ass libraries, much of "programming" is fiddling with dark-grey boxes until they work
well enough.
This drives me crazy, but I'm consoled somewhat by the fact that it will all be thrown out in five years anyway.
(zdnet.com) JavaScript remains the most popular programming language,
but two offerings from Microsoft are steadily gaining ground, according to developer-focused
analyst firm RedMonk's first quarter 2018 ranking. RedMonk's rankings are based on pull
requests in GitHub, as well as an approximate count of how many times a language is tagged on
developer knowledge-sharing site Stack Overflow. Based on these figures, RedMonk analyst
Stephen O'Grady reckons JavaScript is the most popular language today as it was last year. In
fact, nothing has changed in RedMonk's top 10 list with the exception of Apple's Swift rising
to join its predecessor, Objective C, in 10th place. The top 10 programming languages in
descending order are JavaScript, Java, Python, C#, C++, CSS, Ruby, and C, with Swift and
Objective-C in tenth.
TIOBE's top programming language index for March consists of many of the same top 10
languages though in a different order, with Java in top spot, followed by C, C++, Python, C#,
Visual Basic .NET, PHP, JavaScript, Ruby, and SQL. These and other popularity rankings are
meant to help developers see which skills they should be developing. Outside the RedMonk top
10, O'Grady highlights a few notable changes, including an apparent flattening-out in the rapid
ascent of Google's back-end system language, Go.
Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job
done. But a great GUI is one that teaches a new user to eventually graduate to using CLI.
Notable quotes:
"... Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI. ..."
"... What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection, of course) to other computers. ..."
"... AIX's SMIT did this, or rather it wrote the commands that it executed to achieve what you asked it to do. This meant that you could learn: look at what it did and find out about which CLI commands to run. You could also take them, build them into a script, copy elsewhere, ... I liked SMIT. ..."
"... Cisco's GUI stuff doesn't really generate any scripts, but the commands it creates are the same things you'd type into a CLI. And the resulting configuration is just as human-readable (barring any weird naming conventions) as one built using the CLI. I've actually learned an awful lot about the Cisco CLI by using their GUI. ..."
"... Microsoft's more recent tools are also doing this. Exchange 2007 and newer, for example, are really completely driven by the PowerShell CLI. The GUI generates commands and just feeds them into PowerShell for you. So you can again issue your commands through the GUI, and learn how you could have done it in PowerShell instead. ..."
"... Moreover, the GUI authors seem to have a penchant to find new names for existing CLI concepts. Even worse, those names are usually inappropriate vagueries quickly cobbled together in an off-the-cuff afterthought, and do not actually tell you where the doodad resides in the menu system. With a CLI, the name of the command or feature set is its location. ..."
"... I have a cheap router with only a web gui. I wrote a two line bash script that simply POSTs the right requests to URL. Simply put, HTTP interfaces, especially if they implement the right response codes, are actually very nice to script. ..."
Deep End's Paul Venezia speaks out against the
overemphasis on GUIs in today's admin tools,
saying that GUIs are fine and necessary in many cases, but only after a complete CLI is in place, and that they cannot interfere
with the use of the CLI, only complement it. Otherwise, the GUI simply makes easy things easy and hard things much harder. He writes,
'If you have to make significant, identical changes to a bunch of Linux servers, is it easier to log into them one-by-one and run
through a GUI or text-menu tool, or write a quick shell script that hits each box and either makes the changes or simply pulls down
a few new config files and restarts some services? And it's not just about conservation of effort - it's also about accuracy. If
you write a script, you're certain that the changes made will be identical on each box. If you're doing them all by hand, you aren't.'"
Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get
the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI.
A bad GUI with no CLI is the worst of both worlds; the author of the article got that right. The 80/20 rule applies: 80% of
the work is common to everyone and should be offered with a GUI. And for the 20% that is custom to each sysadmin, well, use the CLI.
maxwell demon:
What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn
about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection,
of course) to other computers.
0123456 (636235) writes:
What would be nice is if the GUI could automatically create a shell script doing the change.
While it's not quite the same thing, our GUI-based home router has an option to download the config as a text file so you can
automatically reconfigure it from that file if it has to be reset to defaults. You could presumably use sed to change IP addresses,
etc, and copy it to a different router. Of course it runs Linux.
Alain Williams:
AIX's SMIT did this, or rather it wrote the commands that it executed to achieve what you asked it to do. This meant that
you could learn: look at what it did and find out about which CLI commands to run. You could also take them, build them into a
script, copy elsewhere, ... I liked SMIT.
Ephemeriis:
What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn
about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection,
of course) to other computers.
Cisco's GUI stuff doesn't really generate any scripts, but the commands it creates are the same things you'd type into
a CLI. And the resulting configuration is just as human-readable (barring any weird naming conventions) as one built using the
CLI. I've actually learned an awful lot about the Cisco CLI by using their GUI.
We've just started working with Aruba hardware. Installed a mobility controller last week. They've got a GUI that does something
similar. It's all a pretty web-based front-end, but it again generates CLI commands and a human-readable configuration. I'm still
very new to the platform, but I'm already learning about their CLI through the GUI. And getting work done that I wouldn't be able
to if I had to look up the CLI commands for everything.
Microsoft's more recent tools are also doing this. Exchange 2007 and newer, for example, are really completely driven by
the PowerShell CLI. The GUI generates commands and just feeds them into PowerShell for you. So you can again issue your commands
through the GUI, and learn how you could have done it in PowerShell instead.
Anpheus:
Just about every Microsoft tool newer than 2007 does this. Virtual machine manager, SQL Server has done it for ages, I think
almost all the system center tools do, etc.
It's a huge improvement.
PoV:
All good admins document their work (don't they? DON'T THEY?). With a CLI or a script that's easy: it comes down to "log in
as user X, change to directory Y, run script Z with arguments A B and C - the output should look like D". Try that when all you
have is a GLUI (like a GUI, but you get stuck): open this window, select that option, drag a slider, check these boxes, click
Yes, three times. The output might look a little like this blurry screen shot and the only record of a successful execution is
a window that disappears as soon as the application ends.
I suppose the Linux community should be grateful that Windows made the fundamental systems design error of making everything
graphical. Without that basic failure, Linux might never have even got the toe-hold it has now.
skids:
I think this is a stronger point than the OP: GUIs do not lead to good documentation. In fact, GUIs pretty much are limited
to procedural documentation like the example you gave.
The best they can do as far as actual documentation, where the precise effect of all the widgets is explained, is a screenshot
with little quote bubbles pointing to each doodad. That's a ridiculous way to document.
This is as opposed to a command reference which can organize, usually in a pretty sensible fashion, exact descriptions of what
each command does.
Moreover, the GUI authors seem to have a penchant to find new names for existing CLI concepts. Even worse, those names
are usually inappropriate vagueries quickly cobbled together in an off-the-cuff afterthought, and do not actually tell you where
the doodad resides in the menu system. With a CLI, the name of the command or feature set is its location.
Not that even good command references are mandatory by today's pathetic standards. Even the big boys like Cisco have shown
major degradation in the quality of their documentation during the last decade.
pedantic bore:
I think the author might not fully understand who most admins are. They're people who couldn't write a shell script if their
lives depended on it, because they've never had to. GUI-dependent users become GUI-dependent admins.
As a percentage of computer users, people who can actually navigate a CLI are an ever-diminishing group.
/etc/init.d/NetworkManager stop
chkconfig NetworkManager off
chkconfig network on
vi /etc/sysconfig/network
vi /etc/sysconfig/network-scripts/eth0
At least they named it NetworkManager, so experienced admins could recognize it as a culprit. Anything named in CamelCase is
almost invariably written by new school programmers who don't grok the Unix toolbox concept and write applications instead of
tools, and the bloated drivel is usually best avoided.
Darkness404 (1287218) writes: on Monday October 04, @07:21PM (#33789446)
There are more and more small businesses (5, 10 or so employees) realizing that they can get things done easier if they had
a server. Because the business can't really afford to hire a sysadmin or a full-time tech person, it's generally the employee who
"knows computers" (you know, the person who has to help the boss check his e-mail every day, etc.) and since they don't have the
knowledge of a skilled *Nix admin, a GUI makes their administration a lot easier.
So with the increasing use of servers among non-admins, it only makes sense for a growth in GUI-based solutions.
Svartalf (2997) writes: Ah... But the thing is... You don't NEED the GUI with recent Linux systems- you do with Windows.
oatworm (969674) writes: on Monday October 04, @07:38PM (#33789624) Homepage
Bingo. Realistically, if you're a company with fewer than 100 employees (read: most companies), you're only going to have
a handful of servers in house and they're each going to be dedicated to particular roles. You're not going to have 100 clustered
fileservers - instead, you're going to have one or maybe two. You're not going to have a dozen e-mail servers - instead, you're
going to have one or two. Consequently, the office admin's focus isn't going to be scalability; it just won't matter to the admin
if they can script, say, creating a mailbox for 100 new users instead of just one. Instead, said office admin is going to be more
focused on finding ways to do semi-unusual things (e.g. "create a VPN between this office and our new branch office", "promote
this new server as a domain controller", "install SQL", etc.) that they might do, oh, once a year.
The trouble with Linux, and I'm speaking as someone who's used YaST in precisely this context, is that you have to make a choice
- do you let the GUI manage it or do you CLI it? If you try to do both, there will be inconsistencies because the grammar of the
config files is too ambiguous; consequently, the GUI config file parser will probably just overwrite whatever manual changes it
thinks are "invalid", whether they really are or not. If you let the GUI manage it, you had better hope the GUI has the flexibility necessary
to meet your needs. If, for example, YaST doesn't understand named Apache virtual hosts, well, good luck figuring out where it's
hiding all of the various config files that it was sensibly spreading out in multiple locations for you, and don't you dare use
YaST to manage Apache again or it'll delete your Apache-legal but YaST-"invalid" directive.
The only solution I really see is for manual config file support with optional XML (or some other machine-friendly but still
human-readable format) linkages. For example, if you want to hand-edit your resolv.conf, that's fine, but if the GUI is going
to take over, it'll toss a directive on line 1 that says "#import resolv.conf.xml" and immediately overrides (but does not overwrite)
everything following that. Then, if you still want to use the GUI but need to hand-edit something, you can edit the XML file using
the appropriate syntax and know that your change will be reflected on the GUI.
That's my take. Your mileage, of course, may vary.
icebraining (1313345) writes: on Monday October 04, @07:24PM (#33789494) Homepage
I have a cheap router with only a web gui. I wrote a two line bash script that simply POSTs the right requests to URL.
Simply put, HTTP interfaces, especially if they implement the right response codes, are actually very nice to script.
devent (1627873) writes:
Why Windows servers have a GUI is beyond me anyway. The servers are running 99.99% of the time without a monitor, and normally
you just log in per SSH to a console if you need to administer them. But they are consuming the extra RAM, the extra CPU cycles
and the extra security threats. I don't know, but can you de-install the GUI from a Windows server? Or better, do you have an option
for a no-GUI installation? Just saw the minimum hardware requirements: 512 MB RAM and 32 GB or greater disk space. My server runs
sirsnork (530512) writes: on Monday October 04, @07:43PM (#33789672)
it's called a "core" install in Server 2008 and up, and if you do that, there is no going back, you can't ever add the GUI
back.
What this means is you can run a small subset of MS services that don't need GUI interaction. With R2 that subset grew somewhat
as they added the ability to install .NET too, which meant you could run IIS in a useful manner (arguably the strongest reason
to want to do this in the first place).
Still, it's a one-way trip, and you had better be damn sure which services need to run on that box for the lifetime of that box or
you're looking at a reinstall. Most Windows admins will still tell you the risk isn't worth it.
Simple things like network configuration without a GUI in Windows are tedious, and, at least last time I looked, you lost the
ability to trunk network ports because the NIC manufacturers all assumed you had a GUI to configure your NICs.
prichardson (603676) writes: on Monday October 04, @07:27PM (#33789520) Journal
This is also a problem with Mac OS X Server. Apple builds their services from open source products and adds a GUI for configuration
to make it all clickable and easy to set up. However, many options that can be set on the command line can't be set in the GUI.
Even worse, making CLI changes to services can break the GUI entirely.
The hardware and software are both super stable and run really smoothly, so once everything gets set up, it's awesome. Still,
it's hard for a guy who would rather make changes on the CLI to get used to.
MrEricSir (398214) writes:
Just because you're used to a CLI doesn't make it better. Why would I want to read a bunch of documentation, mess with command
line options, then read a whole block of text to see what it did? I'd much rather sit back in my chair, click something, and then
see if it worked. Don't make me read a bunch of man pages just to do a simple task. In essence, the question here is whether it's
okay for the user to be lazy and use a GUI, or whether the programmer should be too lazy to develop a GUI.
ak_hepcat (468765) writes: <[email protected] minus author> on Monday October 04, @07:38PM (#33789626) Homepage Journal
Probably because it's also about the ease of troubleshooting issues.
How do you troubleshoot something with a GUI after you've misconfigured? How do you troubleshoot a programming error (bug)
in the GUI -> device communication? How do you scale to tens, hundreds, or thousands of devices with a GUI?
CLI makes all this easier and more manageable.
arth1 (260657) writes:
Why would I want to read a bunch of documentation, mess with command line options, then read a whole block of text to see what
it did? I'd much rather sit back in my chair, click something, and then see if it worked. Don't make me read a bunch of man pages
just to do a simple task. Because then you'll be stuck at doing simple tasks, and will never be able to do more advanced tasks.
Without hiring a team to write an app for you instead of doing it yourself in two minutes, that is. The time you spend reading
man
fandingo (1541045) writes: on Monday October 04, @07:54PM (#33789778)
I don't think you really understand systems administration. 'Users,' or in this case admins, don't typically do stuff once.
Furthermore, they need to know what they did and how to do it again (i.e. on a new server or whatever), or at least remember what they did.
One-off stuff isn't common and is a sign of poor administration (i.e. tracking changes and following processes).
What I'm trying to get at is that admins shouldn't do anything without reading the manual. As a Windows/Linux admin, I tend
to find Linux easier to properly administer because I either already know how to perform an operation or I have to read the manual
(manpage) and learn a decent amount about the operation (i.e. more than click here/use this flag).
Don't get me wrong, GUIs can make unknown operations significantly easier, but they often lead to poor process management.
To document processes, screenshots are typically needed. They can be done well, but I find that GUI documentation (created by
admins, not vendor docs) tend to be of very low quality. They are also vulnerable to 'upgrades' where vendors change the interface
design. CLI programs typically have more stable interfaces, but maybe that's just because they have been around longer...
maotx (765127) writes: <[email protected]> on Monday October 04, @07:42PM (#33789666)
That's one thing Microsoft did right with Exchange 2007. They built it entirely around their new powershell CLI and then built
a GUI for it. The GUI is limited compared to what you can do with the CLI, but you can get most things done. The CLI becomes
extremely handy for batch jobs and exporting statistics to csv files. I'd say it's really up there with BASH in terms of scripting,
data manipulation, and integration (not just Exchange but WMI, SQL, etc.)
They tried to do similar with Windows 2008 and their Core [petri.co.il] feature, but they still have to load a GUI to present
a prompt...
Charles Dodgeson (248492) writes: <[email protected]> on Monday October 04, @08:51PM (#33790206) Homepage Journal
Probably Debian would have been OK, but I was finding admin of most Linux distros a pain for exactly these reasons.
I couldn't find a layer where I could do everything that I needed to do without worrying about one thing stepping on another.
No doubt there are ways that I could manage a Linux system without running into different layers of management tools stepping
on each other, but it was a struggle.
There were other reasons as well (although there is a lot that I miss about Linux), but I think that this was one of the leading
reasons.
(NB: I realize that this is flamebait (I've got karma to burn), but that isn't my intention here.)
"... The conventional Simula 67-like pattern of class and instance will get you {1,3,7,9}, and I think many people take this as a definition of OO. ..."
"... Because OO is a moving target, OO zealots will choose some subset of this menu by whim and then use it to try to convince you that you are a loser. ..."
"... In such a pack-programming world, the language is a constitution or set of by-laws, and the interpreter/compiler/QA dept. acts in part as a rule checker/enforcer/police force. Co-programmers want to know: If I work with your code, will this help me or hurt me? Correctness is undecidable (and generally unenforceable), so managers go with whatever rule set (static type system, language restrictions, "lint" program, etc.) shows up at the door when the project starts. ..."
Here is an a la carte menu of features or properties that are related to these terms; I have heard OO defined to be many different
subsets of this list.
1. Encapsulation - the ability to syntactically hide the implementation of a type. E.g. in C or Pascal you always know
whether something is a struct or an array, but in CLU and Java you can hide the difference.
2. Protection - the inability of the client of a type to detect its implementation. This guarantees that a behavior-preserving
change to an implementation will not break its clients, and also makes sure that things like passwords don't leak out.
3. Ad hoc polymorphism - functions and data structures with parameters that can take on values of many different types.
4. Parametric polymorphism - functions and data structures that parameterize over arbitrary values (e.g. list of anything).
ML and Lisp both have this. Java doesn't quite because of its non-Object types.
5. Everything is an object - all values are objects. True in Smalltalk (?) but not in Java (because of int and friends).
6. All you can do is send a message (AYCDISAM) = Actors model - there is no direct manipulation of objects, only communication
with (or invocation of) them. The presence of fields in Java violates this.
7. Specification inheritance = subtyping - there are distinct types known to the language with the property that a value of
one type is as good as a value of another for the purposes of type correctness. (E.g. Java interface inheritance.)
8. Implementation inheritance/reuse - having written one pile of code, a similar pile (e.g. a superset) can be generated in
a controlled manner, i.e. the code doesn't have to be copied and edited. A limited and peculiar kind of abstraction. (E.g.
Java class inheritance.)
9. Sum-of-product-of-function pattern - objects are (in effect) restricted to be functions that take as first argument a distinguished
method key argument that is drawn from a finite set of simple names.
So OO is not a well defined concept. Some people (eg. Abelson and Sussman?) say Lisp is OO, by which they mean {3,4,5,7} (with
the proviso that all types are in the programmers' heads). Java is supposed to be OO because of {1,2,3,7,8,9}. E is supposed to be
more OO than Java because it has {1,2,3,4,5,7,9} and almost has 6; 8 (subclassing) is seen as antagonistic to E's goals and not necessary
for OO.
The conventional Simula 67-like pattern of class and instance will get you {1,3,7,9}, and I think many people take this as
a definition of OO.
Because OO is a moving target, OO zealots will choose some subset of this menu by whim and then use it to try to convince
you that you are a loser.
Perhaps part of the confusion - and you say this in a different way in your little
memo - is that the C/C++ folks see OO as a liberation from a world
that has nothing resembling first-class functions, while Lisp folks see OO as a prison since it limits their use of functions/objects
to the style of (9.). In that case, the only way OO can be defended is in the same manner as any other game or discipline -- by arguing
that by giving something up (e.g. the freedom to throw eggs at your neighbor's house) you gain something that you want (assurance
that your neighbor won't put you in jail).
This is related to Lisp being oriented to the solitary hacker and discipline-imposing languages being oriented to social packs,
another point you mention. In a pack you want to restrict everyone else's freedom as much as possible to reduce their ability to
interfere with and take advantage of you, and the only way to do that is by either becoming chief (dangerous and unlikely) or by
submitting to the same rules that they do. If you submit to rules, you then want the rules to be liberal so that you have a chance
of doing most of what you want to do, but not so liberal that others nail you.
In such a pack-programming world, the language is a constitution or set of by-laws, and the interpreter/compiler/QA dept.
acts in part as a rule checker/enforcer/police force. Co-programmers want to know: If I work with your code, will this help me or
hurt me? Correctness is undecidable (and generally unenforceable), so managers go with whatever rule set (static type system, language
restrictions, "lint" program, etc.) shows up at the door when the project starts.
I recently contributed to a discussion of anti-OO on the e-lang list. My main anti-OO message (actually it only attacks points
5/6) was http://www.eros-os.org/pipermail/e-lang/2001-October/005852.html
. The followups are interesting but I don't think they're all threaded properly.
(Here are the pet definitions of terms used above:
Value = something that can be passed to some function (abstraction). (I exclude exotic compile-time things like parameters
to macros and to parameterized types and modules.)
Object = a value that has function-like behavior, i.e. you can invoke a method on it or call it or send it a message
or something like that. Some people define object more strictly along the lines of 9. above, while others (e.g. CLTL) are more
liberal. This is what makes "everything is an object" a vacuous statement in the absence of clear definitions.
In some languages the "call" is curried and the key-to-method mapping can sometimes be done at compile time. This technicality
can cloud discussions of OO in C++ and related languages.
Function = something that can be combined with particular parameter(s) to produce some result. Might or might not be
the same as object depending on the language.
Type = a description of the space of values over which a function is meaningfully parameterized. I include both types
known to the language and types that exist in the programmer's mind or in documentation.)
As I write this column, I'm in the middle of two summer projects; with luck, they'll both be finished by the time you read it.
One involves a forensic analysis of over 100,000 lines of old C and assembly code from about 1990, and I have to work on Windows
XP.
The other is a hack to translate code written in weird language L1 into weird language L2 with a program written in scripting
language L3, where none of the L's even existed in 1990; this one uses Linux. Thus it's perhaps a bit surprising that I find myself
relying on much the same toolset for these very different tasks.
... ... ...
There has surely been much progress in tools over the 25 years that IEEE Software has been around, and I wouldn't want to go back
in time.
But the tools I use today are mostly the same old ones-grep, diff, sort, awk, and friends. This might well mean that I'm a dinosaur
stuck in the past.
On the other hand, when it comes to doing simple things quickly, I can often have the job done while experts are still waiting
for their IDE to start up. Sometimes the old ways are best, and they're certainly worth knowing well
"... If there's something I've noticed in my career that is that there are always some guys that desperately want to look "smart" and they reflect that in their code. ..."
If there's something I've noticed in my career that is that there are always some guys that desperately want to look "smart"
and they reflect that in their code.
If there's something else that I've noticed in my career, it's that their code is the hardest to maintain and for some reason
they want the rest of the team to depend on them since they are the only "enough smart" to understand that code and change it.
No need to say that these guys are not part of my team. Your code should be direct, simple and readable. End of story.
"... If there's something I've noticed in my career that is that there are always some guys that desperately want to look "smart"
and they reflect that in their code. ..."
If there's something I've noticed in my career that is that there are always some guys that desperately want to look "smart"
and they reflect that in their code.
If there's something else that I've noticed in my career, it's that their code is the hardest to maintain and for some reason
they want the rest of the team to depend on them since they are the only "enough smart" to understand that code and change it.
No need to say that these guys are not part of my team. Your code should be direct, simple and readable. End of story.
If you don't have
root access on a particular GNU/Linux system that you use, or if you don't want to install
anything to the system directories and potentially interfere with others' work on the machine,
one option is to build your favourite tools in your $HOME directory.
This can be useful if there's some particular piece of software that you really need for
whatever reason, particularly on legacy systems that you share with other users or developers.
The process can include not just applications, but libraries as well; you can link against a
mix of your own libraries and the system's libraries as you need.
Preparation
In most cases this is actually quite a straightforward process, as long as you're allowed to
use the system's compiler and any relevant build tools such as autoconf . If the
./configure script for your application allows a --prefix option,
this is generally a good sign; you can normally test this with --help :
$ mkdir src
$ cd src
$ wget -q http://fooapp.example.com/fooapp-1.2.3.tar.gz
$ tar -xf fooapp-1.2.3.tar.gz
$ cd fooapp-1.2.3
$ pwd
/home/tom/src/fooapp-1.2.3
$ ./configure --help | grep -- --prefix
--prefix=PREFIX install architecture-independent files in PREFIX
Don't do this if the security policy on your shared machine explicitly disallows compiling
programs! However, it's generally quite safe as you never need root privileges at any stage of
the process.
Naturally, this is not a one-size-fits-all process; the build process will vary for
different applications, but it's a workable general approach to the task.
Installing
Configure the application or library with the usual call to ./configure , but
use your home directory for the prefix:
$ ./configure --prefix=$HOME
If you want to include headers or link against libraries in your home directory, it may be
appropriate to add definitions for CFLAGS and LDFLAGS to refer to
those directories:
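A minimal sketch, assuming you keep headers in $HOME/include and libraries in $HOME/lib (adjust the paths to your own layout):
$ export CFLAGS="-I$HOME/include"
$ export LDFLAGS="-L$HOME/lib"
$ ./configure --prefix=$HOME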
You should then be able to install the application with the usual make and
make install , needing root privileges for neither:
$ make
$ make install
If successful, this process will insert files into directories like $HOME/bin
and $HOME/lib . You can then try to call the application by its full path:
$ $HOME/bin/fooapp -v
fooapp v1.2.3
Environment setup
To make this work smoothly, it's best to add to a couple of environment variables, probably
in your .bashrc file, so that you can use the home-built application
transparently.
First of all, if you linked the application against libraries also in your home directory,
it will be necessary to add the library directory to LD_LIBRARY_PATH , so that the
correct libraries are found and loaded at runtime:
$ /home/tom/bin/fooapp -v
/home/tom/bin/fooapp: error while loading shared libraries: libfoo.so: cannot open shared...
Could not load library foolib
$ export LD_LIBRARY_PATH=$HOME/lib
$ /home/tom/bin/fooapp -v
fooapp v1.2.3
An obvious one is adding the $HOME/bin directory to your $PATH so
that you can call the application without typing its path:
$ fooapp -v
-bash: fooapp: command not found
$ export PATH="$HOME/bin:$PATH"
$ fooapp -v
fooapp v1.2.3
Similarly, defining MANPATH so that calls to man will read the
manual for your build of the application first is worthwhile. You may find that
$MANPATH is empty by default, so you will need to append other manual locations to
it. An easy way to do this is by appending the output of the manpath utility:
$ man -k fooapp
$ manpath
/usr/local/man:/usr/local/share/man:/usr/share/man
$ export MANPATH="$HOME/share/man:$(manpath)"
$ man -k fooapp
fooapp (1) - Fooapp, the programmer's foo apper
This done, you should be able to use your private build of the software comfortably, all without ever needing to reach for root.
Caveats
This tends to work best for userspace tools like editors or other interactive command-line
apps; it even works for shells. However this is not a typical use case for most applications
which expect to be packaged or compiled into /usr/local , so there are no
guarantees it will work exactly as expected. I have found that Vim and Tmux work very well like
this, even with Tmux linked against a home-compiled instance of libevent , on
which it depends.
In particular, if any part of the install process requires root privileges,
such as making a setuid binary, then things are likely not to work as
expected.
When unexpected behaviour is noticed in a program, GNU/Linux provides a wide variety of command-line
tools for diagnosing problems. The use of gdb , the GNU debugger, and related tools
like the lesser-known Perl debugger, will be familiar to those using IDEs to set breakpoints in their
code and to examine program state as it runs. Other tools of interest are available however to observe
in more detail how a program is interacting with a system and using its resources.
Debugging with gdb
You can use gdb in a very similar fashion to the built-in debuggers in modern IDEs
like Eclipse and Visual Studio.
If you are debugging a program that you've just compiled, it makes sense to compile it with its
debugging symbols added to the binary, which you can do with a gcc call containing
the -g option. If you're having problems with some code, it helps to also use
-Wall to show any errors you may have otherwise missed:
$ gcc -g -Wall example.c -o example
The classic way to use gdb is as the shell for a running program compiled in C or
C++, to allow you to inspect the program's state as it proceeds towards its crash.
$ gdb example
...
Reading symbols from /home/tom/example...done.
(gdb)
At the (gdb) prompt, you can type run to start the program, and it may
provide you with more detailed information about the causes of errors such as segmentation faults,
including the source file and line number at which the problem occurred. If you're able to compile
the code with debugging symbols as above and inspect its running state like this, it makes figuring
out the cause of a particular bug a lot easier.
(gdb) run
Starting program: /home/tom/gdb/example
Program received signal SIGSEGV, Segmentation fault.
0x000000000040072e in main () at example.c:43
43 printf("%d\n", *segfault);
After an error terminates the program within the (gdb) shell, you can type
backtrace to see what the calling function was, which can include the specific parameters
passed that may have something to do with what caused the crash.
(gdb) backtrace
#0 0x000000000040072e in main () at example.c:43
You can set breakpoints for gdb using the break command to halt the program's
run if it reaches a matching line number or function call:
(gdb) break 42
Breakpoint 1 at 0x400722: file example.c, line 42.
(gdb) break malloc
Breakpoint 1 at 0x4004c0
(gdb) run
Starting program: /home/tom/gdb/example
Breakpoint 1, 0x00007ffff7df2310 in malloc () from /lib64/ld-linux-x86-64.so.2
Thereafter it's helpful to step through successive lines of code using step
. You can repeat this, like any gdb command, by pressing Enter repeatedly to step through
lines one at a time:
(gdb) step
Single stepping until exit from function _start,
which has no line number information.
0x00007ffff7a74db0 in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6
You can even attach gdb to a process that is already running, by finding the process
ID and passing it to gdb :
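For example, assuming the running binary is the same example program used above (the process ID shown is of course illustrative):
$ pgrep example
1524
$ gdb -p 1524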
The much newer valgrind can be used as a debugging
tool in a similar way. There are many different checks and debugging methods this program can run,
but one of the most useful is its Memcheck tool, which can be used to detect common memory errors
like buffer overflow:
$ valgrind --leak-check=yes ./example
==29557== Memcheck, a memory error detector
==29557== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==29557== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==29557== Command: ./example
==29557==
==29557== Invalid read of size 1
==29557== at 0x40072E: main (example.c:43)
==29557== Address 0x0 is not stack'd, malloc'd or (recently) free'd
==29557==
...
The gdb and valgrind tools
can be used together for a very thorough survey of a program's run. Zed Shaw's
Learn C the Hard Way includes
a really good introduction for elementary use of valgrind with a deliberately broken
program.
Tracing system and library calls with ltrace
The strace and ltrace tools are designed to allow watching system calls
and library calls respectively for running programs, and logging them to the screen or, more usefully,
to files.
You can run ltrace and have it run the program you want to monitor in this way for
you by simply providing it as the sole parameter. It will then give you a listing of all the system
and library calls it makes until it exits.
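For example, with the example binary used earlier:
$ ltrace ./example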
You can also attach it to a process that's already running:
$ pgrep example
5138
$ ltrace -p 5138
Generally, there's quite a bit more than a couple of screenfuls of text generated by this, so
it's helpful to use the -o option to specify an output file to which to log the calls:
$ ltrace -o example.ltrace ./example
You can then view this trace in a text editor like Vim, which includes syntax highlighting for
ltrace output:
Vim session with ltrace output
I've found ltrace very useful for debugging problems where I suspect improper linking
may be at fault, or the absence of some needed resource in a chroot environment, since
among its output it shows you its search for libraries at dynamic linking time and opening configuration
files in /etc , and the use of devices like /dev/random or /dev/zero
.
Tracking open files with lsof
If you want to view what devices, files, or streams a running process has open, you can do that
with lsof :
$ pgrep example
5051
$ lsof -p 5051
For example, the first few lines of the apache2 process running on my home server
are:
# lsof -p 30779
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
apache2 30779 root cwd DIR 8,1 4096 2 /
apache2 30779 root rtd DIR 8,1 4096 2 /
apache2 30779 root txt REG 8,1 485384 990111 /usr/lib/apache2/mpm-prefork/apache2
apache2 30779 root DEL REG 8,1 1087891 /lib/x86_64-linux-gnu/libgcc_s.so.1
apache2 30779 root mem REG 8,1 35216 1079715 /usr/lib/php5/20090626/pdo_mysql.so
...
Interestingly, another way to list the open files for a process is to check the corresponding
entry for the process in the dynamic /proc directory:
# ls -l /proc/30779/fd
This can be very useful in confusing situations with file locks, or identifying whether a process
is holding open files that it needn't.
Viewing memory allocation with pmap
As a final debugging tip, you can view the memory allocations for a particular process with
pmap :
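For example, again using an illustrative process ID found with pgrep:
$ pgrep example
3461
$ pmap 3461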
This will show you what libraries a running process is using, including those in shared memory.
The total given at the bottom is a little misleading as for loaded shared libraries, the running
process is not necessarily the only one using the memory;
determining "actual" memory usage for a given process is a little more in-depth than it might
seem with shared libraries added to the picture.
Unix as IDE: Building
Posted on February 13, 2012 by Tom Ryder
Because compiling projects can be such a complicated and repetitive process, a good IDE provides
a means to abstract, simplify, and even automate software builds. Unix and its descendants accomplish
this process with a Makefile , a prescribed recipe in a standard format for generating
executable files from source and object files, taking account of changes to only rebuild what's necessary
to prevent costly recompilation.
One interesting thing to note about make is that while it's generally used for compiled
software build automation and has many shortcuts to that effect, it can actually effectively be used
for any situation in which it's required to generate one set of files from another. One possible
use is to generate web-friendly optimised graphics from source files for deployment for a website;
another use is for generating static HTML pages from code, rather than generating pages on the fly.
It's on the basis of this more flexible understanding of software "building" that modern takes on
the tool like Ruby's rake have
become popular, automating the general tasks for producing and installing code and files of all kinds.
Anatomy of a Makefile
The general pattern of a Makefile is a list of variables and a list of targets
, and the sources and/or objects used to provide them. Targets may not necessarily be linked binaries;
they could also constitute actions to perform using the generated files, such as install
to instate built files into the system, and clean to remove built files from the source
tree.
It's this flexibility of targets that enables make to automate any sort of task relevant
to assembling a production build of software; not just the typical parsing, preprocessing, compiling
proper and linking steps performed by the compiler, but also running tests ( make test
), compiling documentation source files into one or more appropriate formats, or automating deployment
of code into production systems, for example, uploading to a website via a git push
or similar content-tracking method.
An example Makefile for a simple software project might look something like the below:
all: example

example: main.o example.o library.o
        gcc main.o example.o library.o -o example

main.o: main.c
        gcc -c main.c -o main.o

example.o: example.c
        gcc -c example.c -o example.o

library.o: library.c
        gcc -c library.c -o library.o

clean:
        rm *.o example

install: example
        cp example /usr/bin
The above isn't the most optimal Makefile possible for this project, but it provides
a means to build and install a linked binary simply by typing make . Each target
definition contains a list of the dependencies required for the command that follows; this
means that the definitions can appear in any order, and the call to make will call the
relevant commands in the appropriate order.
Much of the above is needlessly verbose or repetitive; for example, if an object file is built
directly from a single C file of the same name, then we don't need to include the target at all,
and make will sort things out for us. Similarly, it would make sense to put some of
the more repeated calls into variables so that we would not have to change them individually if our
choice of compiler or flags changed. A more concise version might look like the following:
CC = gcc
OBJECTS = main.o example.o library.o
BINARY = example

all: example

example: $(OBJECTS)
        $(CC) $(OBJECTS) -o $(BINARY)

clean:
        rm -f $(BINARY) $(OBJECTS)

install: example
        cp $(BINARY) /usr/bin
More general uses of make
In the interests of automation, however, it's instructive to think of this a bit more generally
than just code compilation and linking. An example could be for a simple web project involving deploying
PHP to a live webserver. This is not normally a task people associate with the use of make
, but the principles are the same; with the source in place and ready to go, we have certain targets
to meet for the build.
PHP files don't require compilation, of course, but web assets often do. An example that will
be familiar to web developers is the generation of scaled and optimised raster images from vector
source files, for deployment to the web. You keep and version your original source file, and when
it comes time to deploy, you generate a web-friendly version of it.
Let's assume for this particular project that there's a set of four icons used throughout the
site, sized to 64 by 64 pixels. We have the source files to hand in SVG vector format, safely tucked
away in version control, and now need to generate the smaller bitmaps for the site, ready
for deployment. We could therefore define a target icons , set the dependencies, and
type out the commands to perform. This is where command line tools in Unix really begin to shine
in use with Makefile syntax:
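A sketch of such a target might look like the following; the icons/ directory layout and the exact convert and pngcrush invocations are assumptions to adapt to the project at hand:

# Convert each SVG source in icons/ to a 64x64 PNG and optimise it in place
.PHONY: icons
icons: $(wildcard icons/*.svg)
        for file in $^; do \
                convert -background none -resize 64x64 "$$file" "$${file%.svg}.png"; \
                pngcrush -q "$${file%.svg}.png" "$${file%.svg}.crushed.png"; \
                mv "$${file%.svg}.crushed.png" "$${file%.svg}.png"; \
        done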
With the above done, typing make icons will go through each of the source icons files
in a Bash loop, convert them from SVG to PNG using ImageMagick's convert , and optimise
them with pngcrush , to produce images ready for upload.
A similar approach can be used for generating help files in various forms, for example, generating
HTML files from Markdown source:
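One possible form, assuming a markdown(1) converter is installed and the sources live in a docs/ directory:

# Render each Markdown file in docs/ to an HTML file alongside it
.PHONY: docs
docs: $(wildcard docs/*.md)
        for file in $^; do \
                markdown "$$file" > "$${file%.md}.html"; \
        done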
And perhaps finally deploying a website with git push web , but only after
the icons are rasterized and the documents converted:
deploy: icons docs
git push web
For a more compact and abstract formula for turning a file of one suffix into another, you can
use the .SUFFIXES pragma to define these using special symbols. The code for converting
icons could look like this; in this case, $< refers to the source file, $*
to the filename with no extension, and $@ to the target.
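A sketch using the same hypothetical SVG-to-PNG conversion might look like this:

# In the rule below, $< is the .svg prerequisite and $@ the .png target; $*.png would name the same file
.SUFFIXES: .svg .png

.svg.png:
        convert -background none -resize 64x64 $< $@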
A variety of tools exist in the GNU Autotools toolchain for the construction of configure
scripts and make files for larger software projects at a higher level, in particular
autoconf and
automake . The use
of these tools allows generating configure scripts and make files covering
very large source bases, reducing the necessity of building otherwise extensive makefiles manually,
and automating steps taken to ensure the source remains compatible and compilable on a variety of
operating systems.
Covering this complex process would be a series of posts in its own right, and is out of scope
of this survey.
Thanks to user samwyse for the .SUFFIXES suggestion in the comments.
Unix as IDE: Compiling
Posted on February 12, 2012 by Tom Ryder
There are a lot of tools available for compiling and interpreting code on the Unix platform, and
they tend to be used in different ways. However, conceptually many of the steps are the same. Here
I'll discuss compiling C code with gcc from the GNU Compiler Collection, and briefly
the use of perl as an example of an interpreter. GCC
GCC is a very mature GPL-licensed collection
of compilers, perhaps best-known for working with C and C++ programs. Its free software license and
near ubiquity on free Unix-like systems like GNU/Linux and BSD has made it enduringly popular for
these purposes, though more modern alternatives are available in compilers using the
LLVM infrastructure, such as
Clang .
The frontend binaries for GNU Compiler Collection are best thought of less as a set of complete
compilers in their own right, and more as drivers for a set of discrete programming tools,
performing parsing, compiling, and linking, among other steps. This means that while you can use
GCC with a relatively simple command line to compile straight from C sources to a working binary,
you can also inspect in more detail the steps it takes along the way and tweak it accordingly.
I won't be discussing the use of make files here, though you'll almost certainly
be wanting them for any C project of more than one file; that will be discussed in the next article
on build automation tools.
Compiling and assembling object code
You can compile object code from a C source file like so:
$ gcc -c example.c -o example.o
Assuming it's a valid C program, this will generate an unlinked binary object file called
example.o in the current directory, or tell you the reasons it can't. You can inspect
its assembler contents with the objdump tool:
$ objdump -D example.o
Alternatively, you can get gcc to output the appropriate assembly code for the object
directly with the -S parameter:
$ gcc -c -S example.c -o example.s
This kind of assembly output can be particularly instructive, or at least interesting, when printed
inline with the source code itself, which you can do with:
$ gcc -c -g -Wa,-a,-ad example.c > example.lst
Preprocessor
The C preprocessor cpp is generally used to include header files and define macros,
among other things. It's a normal part of gcc compilation, but you can view the C code
it generates by invoking cpp directly:
$ cpp example.c
This will print out the complete code as it would be compiled, with includes and relevant macros
applied.
Linking objects
One or more objects can be linked into appropriate binaries like so:
$ gcc example.o -o example
In this example, GCC is not doing much more than abstracting a call to ld , the GNU
linker. The command produces an executable binary called example .
Compiling, assembling, and linking
All of the above can be done in one step with:
$ gcc example.c -o example
This is a little simpler, but compiling objects independently turns out to have some practical
performance benefits in not recompiling code unnecessarily, which I'll discuss in the next article.
Including and linking
Directories containing C header files can be added to the include search path with the -I
parameter:
$ gcc -I/usr/include/somelib example.c -o example
Similarly, if the code needs to be dynamically linked against a compiled system library available
in common locations like /lib or /usr/lib , such as ncurses
, that can be included with the -l parameter:
$ gcc -lncurses example.c -o example
If you have a lot of necessary inclusions and links in your compilation process, it makes sense
to put this into environment variables:
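A sketch of how that might look; the variable names CFLAGS and LDLIBS follow common convention, and the values here are only illustrative:
$ export CFLAGS='-I/usr/local/include/somelib'
$ export LDLIBS='-lncurses'
$ gcc $CFLAGS example.c $LDLIBS -o example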
This very common step is another thing that a Makefile is designed to abstract away
for you.
Compilation plan
To inspect in more detail what gcc is doing with any call, you can add the
-v switch to prompt it to print its compilation plan on the standard error stream:
$ gcc -v -c example.c -o example.o
If you don't want it to actually generate object files or linked binaries, it's sometimes tidier
to use -### instead:
$ gcc -### -c example.c -o example.o
This is mostly instructive to see what steps the gcc binary is abstracting away for
you, but in specific cases it can be useful to identify steps the compiler is taking that you may
not necessarily want it to.
More verbose error checking
You can add the -Wall and/or -pedantic options to the gcc
call to prompt it to warn you about things that may not necessarily be errors, but could be:
$ gcc -Wall -pedantic -c example.c -o example.o
This is good for including in your Makefile or in your
makeprg
definition in Vim, as it works well with the quickfix window discussed in the previous article and
will enable you to write more readable, compatible, and less error-prone code as it warns you more
extensively about errors.
Profiling compilation time
You can pass the flag -time to gcc to generate output showing how long
each step is taking:
$ gcc -time -c example.c -o example.o
Optimisation
You can pass generic optimisation options to gcc to make it attempt to build more
efficient object files and linked binaries, at the expense of compilation time. I find -O2
is usually a happy medium for code going into production:
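For example, reusing the earlier object compilation step:
$ gcc -O2 -c example.c -o example.o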
The approach to interpreted code on Unix-like systems is very different. In these examples I'll
use Perl, but most of these principles will be applicable to interpreted Python or Ruby code, for
example.
Inline
You can run a string of Perl code directly into the interpreter in any one of the following ways,
in this case printing the single line "Hello, world." to the screen, with a linebreak following.
The first one is perhaps the tidiest and most standard way to work with Perl; the second uses a
heredoc string, and the
third a classic Unix shell pipe.
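A sketch of those three invocations in the order just described; all of them print the same greeting:
$ perl -e 'print "Hello, world.\n";'
$ perl <<'EOF'
print "Hello, world.\n";
EOF
$ echo 'print "Hello, world.\n";' | perl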
Of course, it's more typical to keep the code in a file, which can be run directly:
$ perl hello.pl
In either case, you can check the syntax of the code without actually running it with the
-c switch:
$ perl -c hello.pl
But to use the script as a logical binary , so you can invoke it directly without knowing
or caring what the script is, you can add a special first line to the file called the "shebang" that
does some magic to specify the interpreter through which the file should be run.
#!/usr/bin/env perl
print "Hello, world.\n";
The script then needs to be made executable with a chmod call. It's also good practice
to rename it to remove the extension, since it is now taking the shape of a logical binary:
$ mv hello{.pl,}
$ chmod +x hello
And can thereafter be invoked directly, as if it were a compiled binary:
$ ./hello
This works so transparently that many of the common utilities on modern GNU/Linux systems, such
as the adduser frontend to useradd , are actually Perl or even Python scripts.
In the next post, I'll describe the use of make for defining and automating building
projects in a manner comparable to IDEs, with a nod to newer takes on the same idea with Ruby's
rake .
Unix as IDE: Editing
Posted on February
11, 2012 by Tom Ryder
The text editor is the core tool for any programmer, which is why choice of editor evokes such tongue-in-cheek
zealotry in debate among programmers. Unix is the operating system most strongly linked with two
enduring favourites, Emacs and Vi, and their modern versions in GNU Emacs and Vim, two editors with
very different editing philosophies but comparable power.
Being a Vim heretic myself, here I'll discuss the indispensable features of Vim for programming,
and in particular the use of shell tools called from within Vim to complement the editor's
built-in functionality. Some of the principles discussed here will be applicable to those using Emacs
as well, but probably not for underpowered editors like Nano.
This will be a very general survey, as Vim's toolset for programmers is enormous , and
it'll still end up being quite long. I'll focus on the essentials and the things I feel are most
helpful, and try to provide links to articles with a more comprehensive treatment of the topic. Don't
forget that Vim's :help has surprised many people new to the editor with its high quality
and usefulness.
Filetype detection
Vim has built-in settings to adjust its behaviour, in particular its syntax highlighting, based
on the filetype being loaded, which it detects automatically and generally gets right.
In particular, this allows you to set an indenting style conformant with the way a particular language
is usually written. This should be one of the first things in your .vimrc file.
if has("autocmd")
filetype on
filetype indent on
filetype plugin on
endif
Syntax highlighting
Even if you're just working with a 16-color terminal, just include the following in your
.vimrc if you're not already:
syntax on
The colorschemes with a default 16-color terminal are not pretty largely by necessity, but they
do the job, and for most languages syntax definition files are available that work very well. There's
a tremendous array of colorschemes
available, and it's not hard to tweak them to suit or even to write your own. Using a
256-color terminal or gVim
will give you more options. Good syntax highlighting files will show you definite syntax errors with
a glaring red background.
Line numbering
To turn line numbers on if you use them a lot in your traditional IDE:
set number
You might like to try this as well, if you have at least Vim 7.3 and are keen to try numbering
lines relative to the current line rather than absolutely:
set relativenumber
Tags files
Vim works very well with the output
from the ctags utility. This allows you to search quickly for all uses of a particular
identifier throughout the project, or to navigate straight to the declaration of a variable from
one of its uses, regardless of whether it's in the same file. For large C projects in multiple files
this can save huge amounts of otherwise wasted time, and is probably Vim's best answer to similar
features in mainstream IDEs.
You can run :!ctags -R on the root directory of projects in many popular languages
to generate a tags file filled with definitions and locations for identifiers throughout
your project. Once a tags file for your project is available, you can search for uses
of an appropriate tag throughout the project like so:
:tag someClass
The commands :tn and :tp will allow you to iterate through successive
uses of the tag elsewhere in the project. The built-in tags functionality for this already covers
most of the bases you'll probably need, but for features such as a tag list window, you could try
installing the very popular Taglist
plugin . Tim Pope's Unimpaired
plugin also contains a couple of useful relevant mappings.
Calling external programs
Until 2017, there were three major methods of calling external programs during a Vim session:
:!<command> -- Useful for issuing commands from within a Vim context particularly
in cases where you intend to record output in a buffer.
:shell -- Drop to a shell as a subprocess of Vim. Good for interactive commands.
Ctrl-Z -- Suspend Vim and issue commands from the shell that called it.
Since 2017, Vim 8.x now includes a :terminal command to bring up a terminal emulator
buffer in a window. This seems to work better than previous plugin-based attempts at doing this,
such as Conque . For the moment I
still strongly recommend using one of the older methods, all of which also work in other vi
-type editors.
Lint programs and syntax checkers
Checking syntax or compiling with an external program call (e.g. perl -c ,
gcc ) is one of the calls that's good to make from within the editor using :!
commands. If you were editing a Perl file, you could run this like so:
:!perl -c %
/home/tom/project/test.pl syntax OK
Press Enter or type command to continue
The % symbol is shorthand for the file loaded in the current buffer. Running this
prints the output of the command, if any, below the command line. If you wanted to call this check
often, you could perhaps map it as a command, or even a key combination in your .vimrc
file. In this case, we define a command :PerlLint which can be called from normal mode
with \l :
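A minimal .vimrc sketch of such a definition; the command name and the \l mapping are just the ones mentioned above, and <leader> defaults to backslash:
command! PerlLint !perl -c %
nnoremap <leader>l :PerlLint<CR>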
For a lot of languages there's an even better way to do this, though, which allows us to capitalise
on Vim's built-in quickfix window. We can do this by setting an appropriate makeprg
for the filetype, in this case including a module that provides us with output that Vim can use for
its quicklist, and a definition for its two formats:
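For Perl, using the Vi::QuickFix module mentioned below, the settings might look like this (a sketch; note the escaped spaces required by :set):
:set makeprg=perl\ -c\ -MVi::QuickFix\ %
:set errorformat+=%m\ at\ %f\ line\ %l\.
:set errorformat+=%m\ at\ %f\ line\ %l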
You may need to install this module first via CPAN, or the Debian package libvi-quickfix-perl
. This done, you can type :make after saving the file to check its syntax, and if errors
are found, you can open the quicklist window with :copen to inspect the errors, and
:cn and :cp to jump to them within the buffer.
Vim quickfix working on a Perl file
This also works for output from
gcc
, and pretty much any other compiler or syntax checker that you might want to use that includes filenames,
line numbers, and error strings in its error output. It's even possible to do this with
web-focused languages like PHP , and for tools like
JSLint for JavaScript . There's
also an excellent plugin named
Syntastic that
does something similar.
Reading output from other commands
You can use :r! to call commands and paste their output directly into the buffer
with which you're working. For example, to pull a quick directory listing for the current folder
into the buffer, you could type:
:r!ls
This doesn't just work for commands, of course; you can simply read in other files this way with
just :r , like public keys or your own custom boilerplate:
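For example (the paths here are purely hypothetical):
:r ~/.ssh/id_rsa.pub
:r ~/templates/perl/boilerplate.pl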
You can extend this to actually filter text in the buffer through external commands, perhaps selected
by a range or visual mode, and replace it with the command's output. While Vim's visual block mode
is great for working with columnar data, it's very often helpful to bust out tools like column
, cut , sort , or awk .
For example, you could sort the entire file in reverse by the second column by typing:
:%!sort -k2,2r
You could print only the third column of some selected text where the line matches the pattern
/vim/ with:
:'<,'>!awk '/vim/ {print $3}'
You could arrange keywords from lines 1 to 10 in nicely formatted columns like:
:1,10!column -t
Really any kind of text filter or command can be manipulated like this in Vim, a simple
interoperability feature that expands what the editor can do by an order of magnitude. It effectively
makes the Vim buffer into a text stream, which is a language that all of these classic tools speak.
There is a lot more detail on this in my
"Shell from Vi" post.
Built-in alternatives
It's worth noting that for really common operations like sorting and searching, Vim has built-in
methods in :sort and :grep , which can be helpful if you're stuck using
Vim on Windows, but don't have nearly the adaptability of shell calls.
Diffing
Vim has a diffing mode, vimdiff , which allows you to not only view the
differences between different versions of a file, but also to resolve conflicts via a three-way merge
and to replace differences to and fro with commands like :diffput and :diffget
for ranges of text. You can call vimdiff from the command line directly with at least
two files to compare like so:
$ vimdiff file-v1.c file-v2.c
Vim diffing a .vimrc file
Version control
You can call version control methods directly from within Vim, which is probably all you need
most of the time. It's useful to remember here that % is always a shortcut for the buffer's
current file:
:!svn status
:!svn add %
:!git commit -a
Recently a clear winner for Git functionality with Vim has come up with Tim Pope's
Fugitive , which I highly recommend
to anyone doing Git development with Vim. There'll be a more comprehensive treatment of version control's
basis and history in Unix in Part 7 of this series.
The difference
Part of the reason Vim is thought of as a toy or relic by a lot of programmers used to GUI-based
IDEs is its being seen as just a tool for editing files on servers, rather than a very capable editing
component for the shell in its own right. Its own built-in features being so composable with external
tools on Unix-friendly systems makes it into a text editing powerhouse that sometimes surprises even
experienced users.
In programming, a library is an assortment of pre-compiled pieces of code that can be
reused in a program. Libraries simplify life for programmers, in that they provide reusable
functions, routines, classes, data structures and so on (written by another programmer),
which they can use in their programs.
For instance, if you are building an application that needs to perform math operations, you
don't have to create a new math function for that, you can simply use existing functions in
libraries for that programming language.
Examples of libraries in Linux include libc (the standard C library) or glibc (GNU version
of the standard C library), libcurl (multiprotocol file transfer library), libcrypt (library
used for encryption, hashing, and encoding in C) and many more.
Linux supports two classes of libraries, namely:
Static libraries – are bound to a program statically at compile time.
Dynamic or shared libraries – are loaded into memory when a program is launched, and
binding occurs at run time.
Dynamic or shared libraries can further be categorized into:
Dynamically linked libraries – here a program is linked with the shared library and
the kernel loads the library (in case it's not already in memory) upon execution.
Dynamically loaded libraries – the program takes full control, loading the library itself
and calling functions within it.
Shared Library Naming Conventions
Shared libraries are named in two ways: the library name (a.k.a. soname ) and a "filename"
(the absolute path to the file which stores the library code).
For example, the soname for libc is libc.so.6 : where lib is the prefix, c is a descriptive
name, so means shared object, and 6 is the version. Its filename is /lib64/libc.so.6 . Note
that the soname is actually implemented as a symbolic link to the filename.
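You can see this for yourself with ls; on a typical 64-bit system the output looks something like the following, though the exact glibc version will differ:
# ls -l /lib64/libc.so.6
lrwxrwxrwx. 1 root root 12 ... /lib64/libc.so.6 -> libc-2.17.so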
Locating Shared Libraries in
Linux
Shared libraries are loaded by the ld.so (or ld.so.x ) and ld-linux.so (or ld-linux.so.x )
programs, where x is the version. In Linux, /lib/ld-linux.so.x searches for and loads all shared
libraries used by a program.
A program can call a library using its library name or filename, and a library path stores
directories where libraries can be found in the filesystem. By default, libraries are located
in /usr/local/lib , /usr/local/lib64 , /usr/lib and /usr/lib64 ; system startup libraries are in
/lib and /lib64 . Programmers can, however, install libraries in custom locations.
The library path can be defined in the /etc/ld.so.conf file, which you can edit with a command
line editor.
# vi /etc/ld.so.conf
The line(s) in this file instruct the loader to also read the .conf files in /etc/ld.so.conf.d . This way,
package maintainers or programmers can add their custom library directories to the search
list.
If you look into the /etc/ld.so.conf.d directory, you'll see .conf files for some common
packages (kernel, mysql and postgresql in this case):
# ls /etc/ld.so.conf.d
kernel-2.6.32-358.18.1.el6.x86_64.conf kernel-2.6.32-696.1.1.el6.x86_64.conf mariadb-x86_64.conf
kernel-2.6.32-642.6.2.el6.x86_64.conf kernel-2.6.32-696.6.3.el6.x86_64.conf postgresql-pgdg-libs.conf
If you take a look at mariadb-x86_64.conf, you will see an absolute path to the package's
libraries.
# cat mariadb-x86_64.conf
/usr/lib64/mysql
The method above sets the library path permanently. To set it temporarily, use the
LD_LIBRARY_PATH environment variable on the command line. If you want to keep the changes
permanent, then add this line in the shell initialization file /etc/profile (global) or
~/.profile (user specific).
# export LD_LIBRARY_PATH=/path/to/library/file
Managing Shared Libraries in Linux
Let us now look at how to deal with shared libraries. To get a list of all shared library
dependencies for a binary file, you can use the ldd utility . The output of ldd is in the
form:
library name => filename (some hexadecimal value)
OR
filename (some hexadecimal value) #this is shown when library name can't be read
This command shows all shared library dependencies for the ls command .
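For example, run against the ls binary (output abbreviated, and the exact libraries and load addresses will differ between systems):
# ldd /bin/ls
	libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f...)
	libc.so.6 => /lib64/libc.so.6 (0x00007f...)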
Because shared libraries can exist in many different directories, searching through all of
these directories when a program is launched would be highly inefficient, which is one of
the potential disadvantages of dynamic libraries. Therefore a caching mechanism is employed,
performed by the program ldconfig .
By default, ldconfig reads the content of /etc/ld.so.conf , creates the appropriate
symbolic links in the dynamic link directories, and then writes a cache to /etc/ld.so.cache
which is then easily used by other programs.
This is very important especially when you have just installed new shared libraries or
created your own, or created new library directories. You need to run ldconfig command to
effect the changes.
# ldconfig
OR
# ldconfig -v #shows the files and directories it works with
After creating your shared library, you need to install it. You can either move it into
any of the standard directories mentioned above and run the ldconfig command.
Alternatively, run the following command to create symbolic links from the soname to the
filename:
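A sketch, assuming the library lives in a custom directory such as /opt/mylibs (a hypothetical path):
# ldconfig -n /opt/mylibs
The -n switch tells ldconfig to process only the directories given on the command line, creating the soname symbolic links there without rebuilding the system-wide cache.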
That's all for now! In this article, we gave you an introduction to libraries, explained
shared libraries and how to manage them in Linux. If you have any queries or additional ideas
to share, use the comment form below.
Most larger software projects will contain several components, some of which you may find
use for later on in some other project, or that you just want to separate out for
organizational purposes. When you have a reusable or logically distinct set of functions,
it is helpful to build a library from it so that you don’t have to
copy the source code into your current project and recompile it all the
time, and so you can keep different modules of your program disjoint and
change one without affecting others. Once it’s been written and
tested, you can safely reuse it over and over again, saving the time and hassle of
building it into your project every time.
Building static libraries is fairly simple, and since we rarely get questions on them,
I won’t cover them. I’ll stick with shared
libraries, which seem to be more confusing for most people.
Before we get started, it might help to get a quick rundown of everything that happens
from source code to running program:
C Preprocessor: This stage processes all the preprocessor directives
. Basically, any line that starts with a #, such as #define and #include.
Compilation Proper: Once the source file has been preprocessed, the result is then
compiled. Since many people refer to the entire build process as
compilation, this stage is often referred to as "compilation
proper." This stage turns a .c file into an .o (object) file.
Linking: Here is where all of the object files and any libraries are linked
together to make your final program. Note that for static libraries, the actual library
is placed in your final program, while for shared libraries, only a reference to the
library is placed inside. Now you have a complete program that is ready to run. You
launch it from the shell, and the program is handed off to the loader.
Loading: This stage happens when your program starts up. Your program is scanned
for references to shared libraries. Any references found are resolved and the libraries
are mapped into your program.
Steps 3 and 4 are where the magic (and confusion) happens with shared libraries.
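To make steps 3 and 4 concrete, here is a minimal sketch of building and using a shared library with gcc; the file names are hypothetical:
$ gcc -fPIC -c hello.c -o hello.o      # compile position-independent object code
$ gcc -shared -o libhello.so hello.o   # link the object into a shared library
$ gcc main.c -L. -lhello -o main       # link a program against libhello.so
$ LD_LIBRARY_PATH=. ./main             # tell the loader where to find it at run time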
To fairly review this book, one must distinguish between the methodology it presents and
the actual presentation. As to the presentation, the author attempts to win the reader over
with emotional persuasion and pep talk rather than with facts and hard evidence. Stories of
childhood and comradeship don't classify as convincing facts to me. A single case study, the
C3 project, is often referred to, but with no specific information (do note that the project
was cancelled by the client after staying in development for far too long).
As to the method itself, it basically boils down to four core practices:
1. Always have a customer available on site.
2. Unit test before you code.
3. Program in pairs.
4. Forfeit detailed design in favor of incremental, daily releases and refactoring.
If you do the above, and you have excellent staff on your hands, then the book promises that
you'll reap the benefits of faster development, less overtime, and happier customers. Of
course, the book fails to point out that if your staff is all highly qualified people, then
the project is likely to succeed no matter what methodology you use. I'm sure that anyone who
has worked in the software industry for some time has noticed the sad state that most computer
professionals are in nowadays.
However, assuming that you have all the topnotch developers that you desire, the outlined
methodology is almost impossible to apply in real world scenarios. Having a customer always
available on site would mean that the customer in question is probably a small, expendable
fish in his organization and is unlikely to have any useful knowledge of its business
practices. Unit testing code before it is written means that one would have to have a mental
picture of what one is going to write before writing it, which is difficult without upfront
design. And maintaining such tests as the code changes would be a nightmare. Programming in
pairs all the time would assume that your topnotch developers are also sociable creatures,
which is rarely the case, and even if they were, no one would be able to justify the practice
in terms of productivity. I won't discuss why I think that abandoning upfront design is a bad
practice; the whole idea is too ridiculous to debate.
Both book and methodology will attract fledgling developers with their promise of hacking as an
acceptable software practice and a development universe revolving around the programmer. It's
a cult, not a methodology, where the followers shall find salvation and 40-hour working weeks.
Experience is a great teacher, but only a fool would learn from it alone. Listen to what the
opponents have to say before embracing change, and don't forget to take the proverbial grain
of salt.
Two stars out of five for the presentation for being courageous and attempting to defy the
standard practices of the industry. Two stars for the methodology itself, because it
underlines several common sense practices that are very useful once practiced without the
extremity.
Maybe it's an interesting idea, but it's just not ready for prime time.
Parts of Kent's recommended practice - including aggressive testing and short integration
cycle - make a lot of sense. I've shared the same beliefs for years, but it was good to see
them clarified and codified. I really have changed some of my practice after reading this and
books like this.
I have two broad kinds of problem with this dogma, though. First is the near-abolition of
documentation. I can't defend 2000 page specs for typical kinds of development. On the other
hand, declaring that the test suite is the spec doesn't do it for me either. The test suite
is code, written for machine interpretation. Much too often, it is not written for human
interpretation. Based on the way I see most code written, it would be a nightmare to reverse
engineer the human meaning out of any non-trivial test code. Some systematic way of ensuring
human intelligibility in the code, traceable to specific "stories" (because "requirements"
are part of the bad old way), would give me a lot more confidence in the approach.
The second is the dictatorial social engineering that eXtremity mandates. I've actually tried
the pair programming - what a disaster. The less said the better, except that my experience
did not actually destroy any professional relationships. I've also worked with people who
felt that their slightest whim was adequate reason to interfere with my work. That's what
Beck institutionalizes by saying that any request made of me by anyone on the team must be
granted. It puts me completely at the mercy of anyone walking by. The requisite bullpen
physical environment doesn't work for me either. I find that the visual and auditory
distractions make intense concentration impossible.
I find revival tent spirit of the eXtremists very off-putting. If something works, it works
for reasons, not as a matter of faith. I find much too much eXhortation to believe, to go
ahead and leap in, so that I will eXperience the wonderfulness for myself. Isn't that what
the evangelist on the subway platform keeps saying? Beck does acknowledge unbelievers like
me, but requires their exile in order to maintain the group-think of the X-cult.
Beck's last chapters note a number of exceptions and special cases where eXtremism may not
work - actually, most of the projects I've ever encountered.
There certainly is good in the eXtreme practice. I look to future authors to tease that good
out from the positively destructive threads that I see interwoven.
By A customer on May 2, 2004
A work of fiction
The book presents extreme programming. It is divided into three parts:
(1) The problem
(2) The solution
(3) Implementing XP.
The problem, as presented by the author, is that requirements change but current
methodologies are not agile enough to cope with this. This results in customer being unhappy.
The solution is to embrace change and to allow the requirements to be changed. This is done
by choosing the simplest solution, releasing frequently, refactoring with the security of
unit tests.
The basic assumption which underlies the approach is that the cost of change is not
exponential but reaches a flat asymptote. If this is not the case, allowing change late in
the project would be disastrous. The author does not provide data to back his point of view.
On the other hand there is a lot of data against a constant cost of change (see for example
discussion of cost in Code Complete). The lack of reasonable argumentation is an irremediable
flaw in the book. Without some supportive data it is impossible to believe the basic
assumption, nor the rest of the book. This is all the more important since the only project
that the author refers to was cancelled before full completion.
Many other parts of the book are unconvincing. The author presents several XP practices. Some
of them are very useful. For example unit tests are a good practice. They are however better
treated elsewhere (e.g., Code Complete chapter on unit test). On the other hand some
practices seem overkill. Pair programming is one of them. I have tried it and found it useful
to generate ideas while prototyping. For writing production code, I find that a quiet
environment is by far the best (see Peopleware for supportive data). Again the author does
not provide any data to support his point.
This book suggests an approach aiming at changing software engineering practices. However the
lack of supportive data makes it a work of fiction.
I would suggest reading Code Complete for code level advice or Rapid Development for
management level advice.
By A customer on November 14, 2002
Not Software Engineering.
Any Engineering discipline is based on solid reasoning and logic not on blind faith.
Unfortunately, most of this book attempts to convince you that Extreme programming is better
based on the author's experiences. A lot of the principles are counter-intuitive and the
author exhorts you to just try it out and get enlightened. I'm sorry, but these kinds of things
belong in infomercials, not in software engineering.
The part about "code is the documentation" is the scariest part. It's true that keeping the
documentation up to date is tough on any software project, but to do away with documentation
is the most ridiculous thing I have heard. It's like telling people to cut off their noses to
avoid colds.
Yes we are always in search of a better software process. Let me tell you that this book
won't lead you there.
The "gossip magazine diet plans" style of programming.
This book reminds me of the "gossip magazine diet plans", you know, the vinegar and honey
diet, or the fat-burner 2000 pill diet etc. Occasionally, people actually lose weight on
those diets, but, only because they've managed to eat less or exercise more. The diet plans
themselves are worthless. XP is the same - it may sometimes help people program better, but
only because they are (unintentionally) doing something different. People look at things like
XP because, like dieters, they see a need for change. Overall, the book is a decently written
"fad diet", with ideas that are just as worthless.
By A customer on August 11, 2003
Hackers! Salvation is nigh!!
It's interesting to see the phenomenon of Extreme Programming happening in the dawn of the
21st century. I suppose historians can explain such a reaction as a truly conservative
movement. Of course, serious software engineering practice is hard. Heck, documentation is a
pain in the neck. And what programmer wouldn't love to have divine inspiration just before
starting to write the latest web application and so enlightened by the Almighty, write the
whole thing in one go, as if by magic? No design, no documentation, you and me as a pair, and
the customer too. Sounds like a hacker's dream with "Imagine" as the soundtrack (sorry,
John).
The Software Engineering struggle is over 50 years old and it's only logical to expect some
resistance, from time to time. In the XP case, the resistance comes in one of its worst
forms: evangelism. A fundamentalist cult, with very little substance, no proof of any kind,
but then again if you don't have faith you won't be granted the gift of the mystic
revelation. It's Gnosticism for Geeks.
Take it with a pinch of salt.. well, maybe a sack of salt. If you can see through the B.S.
that sells millions of dollars in books, consultancy fees, lectures, etc, you will recognise
some common-sense ideas that are better explained, explored and detailed elsewhere.
Kent is an excellent writer. He does an excellent job of presenting an approach to
software development that is misguided for anything but user interface code. The argument
that user interface code must be gotten into the hands of users to get feedback is used to
suggest that complex system code should not be "designed up front". This is simply wrong. For
example, if you are going to deploy an application in the Amazon Cloud that you want to scale,
you had better have some idea of how this is going to happen. Simply waiting until your
application falls over and fails is not an acceptable approach.
One of the things I despise the most about the software development culture is the
mindless adoption of fads. Extreme programming has been adopted by some organizations like a
religious dogma.
Engineering large software systems is one of the most difficult things that humans do.
There are no silver bullets and there are no dogmatic solutions that will make the difficult
simple.
Maybe I'm too cynical because I never got to work for the successful, whiz-kid companies;
Maybe this book wasn't written for me!
This book reminds me of Jacobsen's "Use Cases" book of the 1990s. 'Use Cases' was all the
rage but after several years, we slowly learned the truth: Use Cases does not deal with the
architecture - a necessary and good foundation for any piece of software.
Similarly, this book seems to be spotlighting Testing and taking it to extremes.
'the test plan is the design doc'
Not True. The design doc encapsulates wisdom and insight
a picture that accurately describes the interactions of the lower level software
components is worth a thousand lines of code-reading.
Also present is an evangelistic fervor that reminds me of the rah-rah eighties'
bestseller, "In Search Of Excellence" by Peters and Waterman. (Many people have since noted
that most of the spotlighted companies of that book are bankrupt twenty five years
later).
- in a room full of people with a bully supervisor (as I experienced in my last job at a
major telco) innovation or good work is largely absent.
- deploy daily - are you kidding?
to run through the hundreds of test cases in a large application takes several hours if
not days. Not all testing can be automated.
- I have found the principle of "baby steps", one of the principles in the book, most
useful in my career - it is the basis for prototyping iteratively. However I heard it
described in 1997 at a pep talk at MCI that the VP of our department gave to us. So I don't
know who stole it from whom!
Lastly, I noted that the term 'XP' was used throughout the book, and the back cover has a
blurb from an M$ architect. Was it simply coincidence that Windows shares the same name for
its XP release? I wondered if M$ had sponsored part of the book as good advertising for
Windows XP! :)
"... The scl ("Software Collections") tool is provided to make use of the tool versions from the Developer Toolset easy while minimizing the potential for confusion with the regular RHEL tools. ..."
"... Red Hat provides support to Red Hat Developer Tool Set for all Red Hat customers with an active Red Hat Enterprise Linux Developer subscription. ..."
"... You will need an active Red Hat Enterprise Linux Developer subscription to gain access to Red Hat Developer Tool set. ..."
Red Hat provides another option via the Red Hat Developer Toolset.
With the developer toolset, developers can choose to take advantage of the latest
versions of the GNU developer tool chain, packaged for easy installation on Red Hat
Enterprise Linux. This version of the GNU development tool chain is an alternative to the
toolchain offered as part of each Red Hat Enterprise Linux release. Of course, developers
can continue to use the version of the toolchain provided in Red Hat Enterprise Linux.
The developer toolset gives software developers the ability to develop and compile an
application once to run on multiple versions of Red Hat Enterprise Linux (such as Red Hat
Enterprise Linux 5 and 6). Compatible with all supported versions of Red Hat Enterprise
Linux, the developer toolset is available for users who develop applications for Red Hat
Enterprise Linux 5 and 6. Please see the release notes for support of specific minor
releases.
Unlike the compatibility and preview gcc packages provided with RHEL itself, the developer
toolset packages put their content under a /opt/rh path. The
scl ("Software Collections") tool is provided to make use of the tool versions
from the Developer Toolset easy while minimizing the potential for confusion with the regular
RHEL tools.
Red Hat provides support to Red Hat Developer Tool Set for all Red Hat customers with
an active Red Hat Enterprise Linux Developer subscription.
You will need an active Red Hat Enterprise Linux Developer subscription to gain access
to Red Hat Developer Tool set.
For further information on Red Hat Developer Toolset, refer to the relevant release
documentation:
Now comes the tedious part - any package which has a version higher than that provided by yum
for your distro needs to be downloaded from koji , and the process repeated recursively
until all dependency requirements are met.
I cheat, btw.
I usually repackage the rpm to contain a correct build tree using the gnu facility to use
correctly placed and named requirements, so gmp/mpc/mpfr/isl (cloog is no longer required)
are downloaded and untarred into the correct path, and the new (bloated) tar is rebuilt into a
new src rpm (with minor changes to the spec file) with no dependency on their packaged (rpm)
versions. Since I know of no one using ADA, I simply remove the portions pertaining to gnat
from the specfile, further simplifying the build process, leaving me with just binutils to
worry about.
Gcc can actually build with older binutils, so if you're in a hurry, further edit the
specfile to require the binutils version already present on your system. This will result in
a slightly crippled gcc, but mostly it will perform well enough.
This works quite well mostly.
UPDATE 1
The simplest method for opening a src rpm is probably to yum install the rpm and access
everything under ~/rpmbuild, but I prefer:
mkdir gcc-5.3.1-4.fc23
cd gcc-5.3.1-4.fc23
rpm2cpio ../gcc-5.3.1-4.fc23.src.rpm | cpio -id
tar xf gcc-5.3.1-20160212.tar.bz2
cd gcc-5.3.1-20160212
contrib/download_prerequisites
cd ..
tar caf gcc-5.3.1-20160212.tar.bz2 gcc-5.3.1-20160212
rm -rf gcc-5.3.1-20160212
# remove gnat
sed -i '/%global build_ada 1/ s/1/0/' gcc.spec
sed -i '/%if !%{build_ada}/,/%endif/ s/^/#/' gcc.spec
# remove gmp/mpfr/mpc dependencies
sed -i '/BuildRequires: gmp-devel >= 4.1.2-8, mpfr-devel >= 2.2.1, libmpc-devel >= 0.8.1/ s/.*//' gcc.spec
# remove isl dependency
sed -i '/BuildRequires: isl = %{isl_version}/,/Requires: isl-devel = %{isl_version}/ s/^/#/' gcc.spec
# Either build binutils as I do, or lower requirements
sed -i '/Requires: binutils/ s/2.24/2.20/' gcc.spec
# Make sure you don't break on gcc-java
sed -i '/gcc-java/ s/^/#/' gcc.spec
You also have the choice to set prefix so this rpm will install side-by-side without
breaking distro rpm (but requires changing name, and some modifications to internal package
names). I usually add an environment-module so I can load and unload this gcc as required
(similar to how collections work) as part of the rpm (so I add a new dependency).
Finally create the rpmbuild tree, place the files where they should go, and build:
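A sketch of that final step, assuming the standard ~/rpmbuild layout (rpmdev-setuptree comes from the rpmdevtools package; the file names are those from the example above, plus any patches shipped in the src rpm):
$ rpmdev-setuptree
$ cp gcc-5.3.1-20160212.tar.bz2 *.patch ~/rpmbuild/SOURCES/
$ cp gcc.spec ~/rpmbuild/SPECS/
$ rpmbuild -ba ~/rpmbuild/SPECS/gcc.spec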
Normally one should not use a "server" os for development - that's why you have fedora
which already comes with latest gcc. I have some particular requirements, but you should
really consider using the right tool for the task - rhel/centos to run production apps,
fedora to develop those apps etc.
The official way to have gcc 4.8.2 on RHEL 6 is to install the Red Hat Developer Toolset (yum
install devtoolset-2), and in order to have it you need one of the following
subscriptions:
Red Hat Enterprise Linux Developer Support, Professional
Red Hat Enterprise Linux Developer Support, Enterprise
Red Hat Enterprise Linux Developer Suite
Red Hat Enterprise Linux Developer Workstation, Professional
Red Hat Enterprise Linux Developer Workstation, Enterprise
30 day Self-Supported Red Hat Enterprise Linux Developer Workstation Evaluation
60 day Supported Red Hat Enterprise Linux Developer Workstation Evaluation
90 day Supported Red Hat Enterprise Linux Developer Workstation Evaluation
1-year Unsupported Partner Evaluation Red Hat Enterprise Linux
1-year Unsupported Red Hat Advanced Partner Subscription
You can check whether you have any of these subscriptions by running:
subscription-manager list --available
and
subscription-manager list --consumed .
If you don't have any of these subscriptions, you won't succeed in "yum install
devtoolset-2". However, luckily CERN provides a "back door" for their SLC6 which can also be
used on RHEL 6. Run the three lines below as root, and you should be able to have it:
Once it's done completely, you should have the new development package in
/opt/rh/devtoolset-2/root/.
answered Oct 29 '14 at 21:53
For some reason the mpc/mpfr/gmp packages aren't being downloaded. Just look in your gcc
source directory, it should have created symlinks to those packages:
gcc/4.9.1/install$ ls -ad gmp mpc mpfr
gmp mpc mpfr
Review by many eyes does not always prevent buggy code
There is a view that because open source software is subject to review by many eyes, all the bugs will be ironed out of it. This is a myth.
06 Oct 2017 Mike Bursell (Red Hat)
Writing code is hard.
Writing secure code is harder -- much harder. And before you get there, you need to think about
design and architecture. When you're writing code to implement security functionality, it's
often based on architectures and designs that have been pored over and examined in detail. They
may even reflect standards that have gone through worldwide review processes and are generally
considered perfect and unbreakable. *
However good those designs and architectures are, though, there's something about putting
things into actual software that's, well, special. With the exception of software proven to be
mathematically correct, ** being able to write software
that accurately implements the functionality you're trying to realize is somewhere between a
science and an art. This is no surprise to anyone who's actually written any software, tried to
debug software, or divine software's correctness by stepping through it; however, it's not the
key point of this article.
Nobody *** actually believes that the
software that comes out of this process is going to be perfect, but everybody agrees that
software should be made as close to perfect and bug-free as possible. This is why code review
is a core principle of software development. And luckily -- in my view, at least -- much of the
code that we use in our day-to-day lives is open source, which means that anybody can look at
it, and it's available for tens or hundreds of thousands of eyes to review.
And herein lies the problem: There is a view that because open source software is subject to
review by many eyes, all the bugs will be ironed out of it. This is a myth. A dangerous myth.
The problems with this view are at least twofold. The first is the "if you build it, they will
come" fallacy. I remember when there was a list of all the websites in the world, and if you
added your website to that list, people would visit it. **** In the same way, the
number of open source projects was (maybe) once so small that there was a good chance that
people might look at and review your code. Those days are past -- long past. Second, for many
areas of security functionality -- crypto primitives implementation is a good example -- the
number of suitably qualified eyes is low.
Don't think that I am in any way suggesting that the problem is any less in proprietary
code: quite the opposite. Not only are the designs and architectures in proprietary software
often hidden from review, but you have fewer eyes available to look at the code, and the
dangers of hierarchical pressure and groupthink are dramatically increased. "Proprietary code
is more secure" is less myth, more fake news. I completely understand why companies like to
keep their security software secret, and I'm afraid that the "it's to protect our intellectual
property" line is too often a platitude they tell themselves when really, it's just unsafe to
release it. So for me, it's open source all the way when we're looking at security
software.
So, what can we do? Well, companies and other organizations that care about security
functionality can -- and have, I believe, a responsibility to -- expend resources on checking
and reviewing the code that implements that functionality. Alongside that, the open source
community can -- and is -- finding ways to support critical projects and improve the amount of
review that goes into that code. ***** And we should encourage
academic organizations to train students in the black art of security software writing and
review, not to mention highlighting the importance of open source software.
We can do better -- and we are doing better. Because what we need to realize is that the
reason the "many eyes hypothesis" is a myth is not that many eyes won't improve code -- they
will -- but that we don't have enough expert eyes looking. Yet.
* Yeah, really: "perfect and unbreakable." Let's just pretend that's true
for the purposes of this discussion.
** and that still relies on the design and architecture to actually do what
you want -- or think you want -- of course, so good luck.
*** Nobody who's actually written more than about five lines of code (or
more than six characters of Perl).
**** I added one. They came. It was like some sort of magic.
"... That's Silicon Valley's dirty secret. Most tech workers in Palo Alto make about as much as the high school teachers who teach their kids. And these are the top coders in the country! ..."
"... I don't see why more Americans would want to be coders. These companies want to drive down wages for workers here and then also ship jobs offshore... ..."
"... Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model. ..."
"... There are quite a few highly qualified American software engineers who lose their jobs to foreign engineers who will work for much lower salaries and benefits. This is a major ingredient of the libertarian virus that has engulfed and contaminating the Valley, going hand to hand with assembling products in China by slave labor ..."
"... If you want a high tech executive to suffer a stroke, mention the words "labor unions". ..."
"... India isn't being hired for the quality, they're being hired for cheap labor. ..."
"... Enough people have had their hands burnt by now with shit companies like TCS (Tata) that they are starting to look closer to home again... ..."
"... Globalisation is the reason, and trying to force wages up in one country simply moves the jobs elsewhere. The only way I can think of to limit this happening is to keep the company and coders working at the cutting edge of technology. ..."
"... I'd be much more impressed if I saw that the hordes of young male engineers here in SF expressing a semblance of basic common sense, basic self awareness and basic life skills. I'd say 91.3% are oblivious, idiotic children. ..."
"... Not maybe. Too late. American corporations objective is to low ball wages here in US. In India they spoon feed these pupils with affordable cutting edge IT training for next to nothing ruppees. These pupils then exaggerate their CVs and ship them out en mass to the western world to dominate the IT industry. I've seen it with my own eyes in action. Those in charge will anything/everything to maintain their grip on power. No brag. Just fact. ..."
That's Silicon Valley's dirty secret. Most tech workers in Palo Alto make about as much as
the high school teachers who teach their kids. And these are the top coders in the country!
Automated coding just pushes the level of coding further up the development food chain, rather
than getting rid of it. It is the wrong approach for current tech. AI that is smart enough to model
new problems and create its own descriptive and runnable language is hopefully coming after my
lifetime, but it is coming sometime.
What coding does not teach is how to improve our non-code infrastructure and how to keep it running
(that's the stuff which actually moves things). Code can optimize stuff, but it needs actual actuators
to affect reality.
Sometimes these actuators are actual people walking on top of a roof while fixing it.
Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper
labor near the top of their goals and as a business model.
There are quite a few highly qualified American software engineers who lose their jobs
to foreign engineers who will work for much lower salaries and benefits. This is a major ingredient
of the libertarian virus that has engulfed and is contaminating the Valley, going hand in hand with
assembling products in China by slave labor .
If you want a high tech executive to suffer a stroke, mention the words "labor unions".
Nope. Married to a highly-technical skillset, you can still make big bucks. I say this as someone
involved in this kind of thing academically and our Masters grads have to beat the banks and fintech
companies away with dog shits on sticks. You're right that you can teach anyone to potter around
and throw up a webpage but at the prohibitively difficult maths-y end of the scale, someone suitably
qualified will never want for a job.
In a similar vein, if you accept the argument that it does drive down wages, wouldn't the culprit
actually be the multitudes of online and offline courses and tutorials available to an existing
workforce?
Funny you should pick medicine, law, engineering... 3 fields that are *not* taught in high school.
The writer is simply adding "coding" to your list. So it seems you agree with his "garbage" argument
after all.
Key word is "good". Teaching everyone is just going to increase the pool of programmers whose code I
need to fix. India isn't being hired for the quality, they're being hired for cheap labor.
As for women, sure, I wouldn't mind more women around, but why does no one say there needs to be
more equality in garbage collection or plumbing? (And yes, plumbers are highly paid professionals.)
In the end I don't care what the person is, I just want to hire and work with the best and
not someone whose work I have to correct because they were hired by quota. If women only graduate
at 15% why should IT contain more than that? And let's be a bit honest with the facts, of those
15% how many spend their high school years staying up all night hacking? Very few. Now the few
that did are some of the better developers I work with but that pool isn't going to increase by
forcing every child to program... just like sports aren't better by making everyone take gym class.
I ran a development team for 10 years and I never had any trouble hiring programmers - we just
had to pay them enough. Every job would have at least 10 good applicants.
Two years ago I decided to scale back a bit and go into programming (I can code real-time low
latency financial apps in 4 languages) and I had four interviews in six months with stupidly low
salaries. I'm lucky in that I can bounce between tech and the business side so I got a decent
job out of tech.
My entirely anecdotal conclusion is that there is no shortage of good programmers just a shortage
of companies willing to pay them.
I've worn many hats so far. I started out as a sysadmin, then I moved on to web
development, then back end and now I'm doing test automation because I am on almost the same money
for half the effort.
But the concepts won't. Good programming requires the ability to break down a task, organise
the steps in performing it, identify parts of the process that are common or repetitive so they
can be bundled together, handed-off or delegated, etc.
These concepts can be applied to any programming language, and indeed to many non-software
activities.
Well to his point sort of... either everything will go php or all those entry level php developers
will be on the street. A good Java or C developer is hard to come by. And to the others, being
a developer, especially a good one, is nothing like reading and writing. The industry
is already saturated with poor coders just doing it for a paycheck.
Pretty much the entire history of the software industry since FORAST was developed for the
ORDVAC has been about desperately trying to make software development in some way possible without
driving everyone bonkers.
The gulf between FORAST and today's IDE-written, type-inferring high level languages, compilers,
abstracted run-time environments, hypervisors, multi-computer architectures and general tech-world
flavour-of-2017-ness is truly immense[1].
And yet software is still fucking hard to write. There's no sign it's getting easier despite
all that work.
Automated coding was promised as the solution in the 1980s as well. In fact, somewhere in my
archives, I've got paper journals which include adverts for automated systems that would make programmers
completely redundant by writing all your database code for you. These days, we'd think of those
tools as automated ORM generators and they don't fix the problem; they just make a new one --
ORM impedance mismatch -- which needs more engineering on top to fix...
The tools don't change the need for the humans, they just change what's possible for the humans
to do.
[1] FORAST executed in about 20,000 bytes of memory without even an OS. The compile artifacts
for the map-reduce system I built today are an astonishing hundred million bytes... and don't
include the necessary mapreduce environment, management interface, node operating system and distributed
filesystem...
"There are already top quality coders in China and India"
AHAHAHAHAHAHAHAHAHAHAHA *rolls on the floor laughing* Yes........ 1%... and 99% of incredibly
bad, incompetent, untalented ones that cost 50% of a good developer but produce only 5%
in comparison. And I'm talking with a LOT of practical experience through more than a dozen corporations
all over the world which have been outsourcing to India... all have been disasters for the companies
(but good for the execs who pocketed big bonuses and left the company before the disaster blows
up in their face)
Tech executives have pursued [the goal of suppressing workers' compensation] in a variety
of ways. One is collusion – companies conspiring to prevent their employees from earning more
by switching jobs. The prevalence of this practice in Silicon Valley triggered a justice department
antitrust complaint in 2010, along with a class action suit that culminated in a $415m settlement.
Folks interested in the story of the Techtopus (less drily presented than in the links in this
article) should check out Mark Ames' reporting, esp
this overview article and
this focus on the egregious Steve Jobs (whose canonization by the US corporate-funded media
is just one more impeachment of their moral bankruptcy).
Another, more sophisticated method is importing large numbers of skilled guest workers from
other countries through the H1-B visa program. These workers earn less than their American
counterparts, and possess little bargaining power because they must remain employed to keep
their status.
I have watched as schools run by trade unions have done the opposite for 5 decades. By limiting
the number of graduates, they were able to help maintain living wages and benefits. This has been
stopped in my area due to the pressure of owner-run "trade associations".
During that same time period I have witnessed trade associations controlled by company owners,
while publicising their support of the average employee, invest enormous amounts of membership
fees in creating alliances with public institutions. Their goal has been that of flooding the
labor market and thus keeping wages low. A double hit for the average worker because membership
fees were paid by employees as well as those in control.
Coding jobs are just as susceptible to being moved to lower cost areas of the world as hardware
jobs already have. It's already happening. There are already top quality coders in China and India.
There is a much larger pool to choose from and they are just as good as their western counterparts
and work harder for much less money.
Globalisation is the reason, and trying to force wages up in one country simply moves the
jobs elsewhere. The only way I can think of to limit this happening is to keep the company and
coders working at the cutting edge of technology.
I'd be much more impressed if I saw the hordes of young male engineers here in SF
expressing a semblance of basic common sense, basic self awareness and basic life skills. I'd
say 91.3% are oblivious, idiotic children.
They would definitely not survive the zombie apocalypse.
P.S. not every kid wants or needs to have their soul sucked out of them sitting in front of
a screen full of code for some idiotic service that some other douchbro thinks is the next iteration
of sliced bread.
The demonization of Silicon Valley is clearly the next place to put all blame. Look what
"they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get
a rope!
I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San
Jose transform into a concrete jungle. There used to be quite a bit of semiconductor
equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings
have the same name: AVAILABLE. Most equipment and device manufacturing has moved to
Asia.
Programming started with binary, then machine code (hexadecimal or octal) and moved to
assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC,
PL-1, COBOL, PASCAL, C (and all its "+'s") followed, making programming easier for the less
talented. Now the script-based languages (HTML, Java, etc.) are even higher level and
accessible to nearly all. Programming has become a commodity and will be priced like milk,
wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a
career.
Hi: As I have said many times before, there is no shortage of people who fully understand the
problem and can see all the connections.
However, they all fall on their faces when it comes to the solution.
To cut to the chase, Concentrated Wealth needs to go, permanently.
Of course the challenge is how to best accomplish this.....
Damn engineers and their black and white world view, if they weren't so inept they would've
unionized instead of being trampled again and again in the name of capitalism.
Not maybe. Too late. American corporations' objective is to low-ball wages here in the US. In
India they spoon-feed these pupils with affordable cutting-edge IT training for next to
nothing in rupees. These pupils then exaggerate their CVs and ship them out en masse to the
western world to dominate the IT industry. I've seen it with my own eyes in action. Those in
charge will do anything/everything to maintain their grip on power. No brag. Just fact.
Wrong again, that approach has been tried since the 80s and will keep failing only because
software development is still more akin to a technical craft than an engineering discipline.
The number of elements required to assemble a working non trivial system is way beyond
scriptable.
> That's some crystal ball you have there. English teachers will need to know how to
code? Same with plumbers? Same with janitors, CEOs, and anyone working in the service
industry?
You don't believe there will be robots to do plumbing and cleaning? The cleaner's job will
be to program robots to do what they need.
CEOs? Absolutely.
English teachers? Both of my kids have school laptops and everything is being done on the
computers. The teachers use software and create websites and what not. Yes, even English
teachers.
Not knowing / understanding how to code will be the same as not knowing how to use Word/
Excel. I am assuming there are people who don't, but I don't know any above the age of 6.
We've had 'automated coding scripts' for years for small tasks. However, anyone who says
they're going to obviate programmers, analysts and designers doesn't understand the software
development process.
Even if expert systems (an 80's concept, BTW) could code, we'd still have a huge need for
managers. The hard part of software isn't even the coding. It's determining the requirements
and working with clients. It will require general intelligence to do 90% of what we do right
now. The 10% we could automate right now, mostly gets in the way. I agree it will change, but
it's going to take another 20-30 years to really happen.
Wrong, software companies are already developing automated coding scripts. You'll get a bunch
of door-to-door knife salespeople once the dust settles, that's what you'll get.
Thw user "imipak" views are pretty common misconceptions. They are all wrong.
Notable quotes:
"... I was about to take offence on behalf of programmers, but then I realized that would be snobbish and insulting to carpenters too. Many people can code, but only a few can code well, and fewer still become the masters of the profession. Many people can learn carpentry, but few become joiners, and fewer still become cabinetmakers. ..."
"... Many people can write, but few become journalists, and fewer still become real authors. ..."
Coding has little or nothing to do with Silicon Valley. They may or may not have ulterior
motives, but ultimately they are nothing in the scheme of things.
I disagree with teaching coding as a discrete subject. I think it should be combined with
home economics and woodworking because 90% of these subjects consist of transferable skills
that exist in all of them. Only a tiny residual is actually topic-specific.
In the case of coding, the residual consists of drawing skills and typing skills.
Programming language skills? Irrelevant. You should choose the tools to fit the problem.
Neither of these needs a computer. You should only ever approach the computer at the very
end, after you've designed and written the program.
Is cooking so very different? Do you decide on the ingredients before or after you start?
Do you go shopping half-way through cooking an omelette?
With woodwork, do you measure first or cut first? Do you have a plan or do you randomly
assemble bits until it does something useful?
Real coding, taught correctly, is barely taught at all. You teach the transferable skills.
ONCE. You then apply those skills in each area in which they apply.
What other transferable skills apply? Top-down design, bottom-up implementation. The
correct methodology in all forms of engineering. Proper testing strategies, also common
across all forms of engineering. However, since these tests are against logic, they're a test
of reasoning. A good thing to have in the sciences and philosophy.
Technical writing is the art of explaining things to idiots. Whether you're designing a
board game, explaining what you like about a house, writing a travelogue or just seeing if
your wild ideas hold water, you need to be able to put those ideas down on paper in a way
that exposes all the inconsistencies and errors. It doesn't take much to clean it up to be
readable by humans. But once it is cleaned up, it'll remain free of errors.
So I would teach a foundation course that teaches top-down reasoning, bottom-up design,
flowcharts, critical path analysis and symbolic logic. Probably aimed at age 7. But I'd not
do so wholly in the abstract. I'd have it thoroughly mixed in with one field, probably
cooking as most kids do that and it lacks stigma at that age.
I'd then build courses on various crafts and engineering subjects on top of that, building
further hierarchies where possible. Eliminate duplication and severely reduce the fictions we
call disciplines.
I used to employ 200 computer scientists in my business and now teach children so I'm
apparently as guilty as hell. To be compared with a carpenter is, however, a true compliment, if you mean those that
create elegant, aesthetically-pleasing, functional, adaptable and long-lasting bespoke
furniture, because our crafts of problem-solving using limited resources in confined
environments to create working, life-improving artifacts both exemplify great human ingenuity
in action. Capitalism or no.
"But coding is not magic. It is a technical skill, akin to carpentry."
But some people do it much better than others. Just like journalism. This article is
complete nonsense, as I discuss in another comment. The author might want to consider a
career in carpentry.
"But coding is not magic. It is a technical skill, akin to carpentry."
I was about to take offence on behalf of programmers, but then I realized that would be
snobbish and insulting to carpenters too. Many people can code, but only a few can code well,
and fewer still become the masters of the profession. Many people can learn carpentry, but
few become joiners, and fewer still become cabinetmakers.
Many people can write, but few become journalists, and fewer still become real authors.
IT is probably one of the most "neoliberalized" industries (even in comparison with
finance). So atomization of labor and a "plantation economy" are the norm in IT. This occurs at a
rather high level of wages, but with the influx of foreign programmers and IT specialists (in the past)
and mass outsourcing (now), this is changing. Competition for good job positions is fierce. Dog-eat-dog
competition, the dream of neoliberals. Entry-level jobs are already paying $15 an hour, if not
less.
Programming is a relatively rare talent, much like the ability to play the violin. Even the amateur level is
challenging. At a high level (developing large complex programs in a team while still preserving your individuality
and productivity) it is extremely rare. Most "commercial" programmers are able to produce only
mediocre code (which might be adequate). Only a few programmers can excel at complex software projects,
sometimes even performing solo. There is also a pathological breed of "programmer junkie"
(graphomania happens in programming
too) who are sometimes able to destroy large projects single-handedly. That often happens
with open source projects after the main developer has lost interest and abandoned the project.
It's good to allow children the chance to try their hand at coding when they otherwise may not have had
that opportunity. But in no way does that mean that all of them can become professional programmers. No
way. Again, the top level of programming requires a unique talent, much like the talent of a top musical
performer.
Also, to get a decent entry-level position you either need to be extremely talented or to graduate from an Ivy
League university. When applicants are abundant, resumes from less prestigious universities are not even
considered: it is simply easier for HR to filter applications this way.
Also, under neoliberalism cheap labor imported via H1B visas floods the market and depresses wages. Many Silicon
Valley companies were, so to say, "Russian-speaking" in the late 1990s after the collapse of the USSR. Now offshoring
is the dominant way to offload development to cheaper labor.
Notable quotes:
"... As software mediates more of our lives, and the power of Silicon Valley grows, it's tempting to imagine that demand for developers is soaring. The media contributes to this impression by spotlighting the genuinely inspiring stories of those who have ascended the class ladder through code. You may have heard of Bit Source, a company in eastern Kentucky that retrains coalminers as coders. They've been featured by Wired , Forbes , FastCompany , The Guardian , NPR and NBC News , among others. ..."
"... A former coalminer who becomes a successful developer deserves our respect and admiration. But the data suggests that relatively few will be able to follow their example. Our educational system has long been producing more programmers than the labor market can absorb. ..."
"... More tellingly, wage levels in the tech industry have remained flat since the late 1990s. Adjusting for inflation, the average programmer earns about as much today as in 1998. If demand were soaring, you'd expect wages to rise sharply in response. Instead, salaries have stagnated. ..."
"... Tech executives have pursued this goal in a variety of ways. One is collusion – companies conspiring to prevent their employees from earning more by switching jobs. The prevalence of this practice in Silicon Valley triggered a justice department antitrust complaint in 2010, along with a class action suit that culminated in a $415m settlement . Another, more sophisticated method is importing large numbers of skilled guest workers from other countries through the H1-B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status. ..."
"... Guest workers and wage-fixing are useful tools for restraining labor costs. But nothing would make programming cheaper than making millions more programmers. ..."
"... Silicon Valley has been unusually successful in persuading our political class and much of the general public that its interests coincide with the interests of humanity as a whole. But tech is an industry like any other. It prioritizes its bottom line, and invests heavily in making public policy serve it. The five largest tech firms now spend twice as much as Wall Street on lobbying Washington – nearly $50m in 2016. The biggest spender, Google, also goes to considerable lengths to cultivate policy wonks favorable to its interests – and to discipline the ones who aren't. ..."
"... Silicon Valley is not a uniquely benevolent force, nor a uniquely malevolent one. Rather, it's something more ordinary: a collection of capitalist firms committed to the pursuit of profit. And as every capitalist knows, markets are figments of politics. They are not naturally occurring phenomena, but elaborately crafted contraptions, sustained and structured by the state – which is why shaping public policy is so important. If tech works tirelessly to tilt markets in its favor, it's hardly alone. What distinguishes it is the amount of money it has at its disposal to do so. ..."
"... The problem isn't training. The problem is there aren't enough good jobs to be trained for ..."
"... Everyone should have the opportunity to learn how to code. Coding can be a rewarding, even pleasurable, experience, and it's useful for performing all sorts of tasks. More broadly, an understanding of how code works is critical for basic digital literacy – something that is swiftly becoming a requirement for informed citizenship in an increasingly technologized world. ..."
"... But coding is not magic. It is a technical skill, akin to carpentry. Learning to build software does not make you any more immune to the forces of American capitalism than learning to build a house. Whether a coder or a carpenter, capital will do what it can to lower your wages, and enlist public institutions towards that end. ..."
"... Exposing large portions of the school population to coding is not going to magically turn them into coders. It may increase their basic understanding but that is a long way from being a software engineer. ..."
"... All schools teach drama and most kids don't end up becoming actors. You need to give all kids access to coding in order for some can go on to make a career out of it. ..."
"... it's ridiculous because even out of a pool of computer science B.Sc. or M.Sc. grads - companies are only interested in the top 10%. Even the most mundane company with crappy IT jobs swears that they only hire "the best and the brightest." ..."
"... It's basically a con-job by the big Silicon Valley companies offshoring as many US jobs as they can, or "inshoring" via exploitation of the H1B visa ..."
"... Masters is the new Bachelors. ..."
"... I taught CS. Out of around 100 graduates I'd say maybe 5 were reasonable software engineers. The rest would be fine in tech support or other associated trades, but not writing software. Its not just a set of trainable skills, its a set of attitudes and ways of perceiving and understanding that just aren't that common. ..."
"... Yup, rings true. I've been in hi tech for over 40 years and seen the changes. I was in Silicon Valley for 10 years on a startup. India is taking over, my current US company now has a majority Indian executive and is moving work to India. US politicians push coding to drive down wages to Indian levels. ..."
This month, millions of children returned to school. This year, an unprecedented number of them
will learn to code.
Computer science courses for children have proliferated rapidly in the past few years. A 2016
Gallup
report found that 40% of American schools now offer coding classes – up from only 25% a few years
ago. New York, with the largest public school system in the country, has
pledged to offer computer science to all 1.1 million students by 2025. Los Angeles, with the
second largest,
plans to do the same by 2020. And Chicago, the fourth largest, has gone further,
promising to make computer science a high school graduation requirement by 2018.
The rationale for this rapid curricular renovation is economic. Teaching kids how to code will
help them land good jobs, the argument goes. In an era of flat and falling incomes, programming provides
a new path to the middle class – a skill so widely demanded that anyone who acquires it can command
a livable, even lucrative, wage.
This narrative pervades policymaking at every level, from school boards to the government. Yet
it rests on a fundamentally flawed premise. Contrary to public perception, the economy doesn't actually
need that many more programmers. As a result, teaching millions of kids to code won't make them all
middle-class. Rather, it will proletarianize the profession by flooding the market and forcing wages
down – and that's precisely the point.
At its root, the campaign for code education isn't about giving the next generation a shot at
earning the salary of a Facebook engineer. It's about ensuring those salaries no longer exist, by
creating a source of cheap labor for the tech industry.
As software mediates more of our lives, and the power of Silicon Valley grows, it's tempting to
imagine that demand for developers is soaring. The media contributes to this impression by spotlighting
the genuinely inspiring stories of those who have ascended the class ladder through code. You may
have heard of Bit Source, a company in eastern Kentucky that retrains coalminers as coders. They've
been featured by Wired, Forbes, FastCompany, The Guardian, NPR and NBC News, among others.
A former coalminer who becomes a successful developer deserves our respect and admiration. But
the data suggests that relatively few will be able to follow their example. Our educational system
has long been producing more programmers than the labor market can absorb. A
study by the Economic Policy Institute found that the supply of American college graduates with
computer science degrees is 50% greater than the number hired into the tech industry each year. For
all the talk of a tech worker shortage, many qualified graduates simply can't find jobs.
More tellingly, wage levels in the tech industry have remained flat since the late 1990s. Adjusting
for inflation, the average programmer earns about as much today as in 1998. If demand were soaring,
you'd expect wages to rise sharply in response. Instead, salaries have stagnated.
Still, those salaries are stagnating at a fairly high level. The Department of Labor estimates
that the median annual wage for computer and information technology occupations is $82,860 – more
than twice the national average. And from the perspective of the people who own the tech industry,
this presents a problem. High wages threaten profits. To maximize profitability, one must always
be finding ways to pay workers less.
Tech executives have pursued this goal in a variety of ways. One is collusion – companies conspiring
to prevent their employees from earning more by switching jobs. The prevalence of this practice in
Silicon Valley triggered a justice department
antitrust complaint in 2010, along with a class action suit that culminated in a $415m
settlement. Another, more sophisticated method is importing
large numbers of skilled guest workers from other countries through the H1-B visa program. These
workers earn less than their
American counterparts, and possess little bargaining power because they must remain employed to keep
their status.
Guest workers and wage-fixing are useful tools for restraining labor costs. But nothing would
make programming cheaper than making millions more programmers. And where better to develop
this workforce than America's schools? It's no coincidence, then, that the campaign for code education
is being orchestrated by the tech industry itself. Its primary instrument is Code.org, a nonprofit
funded by Facebook, Microsoft, Google and
others. In 2016, the organization spent
nearly $20m on training teachers, developing curricula, and lobbying policymakers.
Silicon Valley has been unusually successful in persuading our political class and much of the
general public that its interests coincide with the interests of humanity as a whole. But tech is
an industry like any other. It prioritizes its bottom line, and invests heavily in making public
policy serve it. The five largest tech firms now
spend twice as much as Wall Street on lobbying Washington – nearly $50m in 2016. The biggest
spender, Google, also goes to considerable lengths to
cultivate policy wonks favorable to its interests – and to
discipline the ones who aren't.
Silicon Valley
is not a uniquely benevolent force, nor a uniquely malevolent one. Rather, it's something more ordinary:
a collection of capitalist firms committed to the pursuit of profit. And as every capitalist knows,
markets are figments of politics. They are not naturally occurring phenomena, but elaborately crafted
contraptions, sustained and structured by the state – which is why shaping public policy is so important.
If tech works tirelessly to tilt markets in its favor, it's hardly alone. What distinguishes it is
the amount of money it has at its disposal to do so.
Money isn't Silicon Valley's only advantage in its
crusade to remake American education, however. It also enjoys a favorable ideological climate.
Its basic message – that schools alone can fix big social problems – is one that politicians of both
parties have been repeating for years. The far-fetched premise of neoliberal school reform is that
education can mend our disintegrating social fabric. That if we teach students the right skills,
we can solve poverty, inequality and stagnation. The school becomes an engine of economic transformation,
catapulting young people from challenging circumstances into dignified, comfortable lives.
This argument is immensely pleasing to the technocratic mind. It suggests that our core economic
malfunction is technical – a simple asymmetry. You have workers on one side and good jobs
on the other, and all it takes is training to match them up. Indeed, every president since Bill Clinton
has talked about training American workers to fill the "skills gap". But gradually, one mainstream
economist after another has come to realize what most workers have known for years: the gap doesn't
exist. Even Larry Summers has
concluded it's a myth.
The problem isn't training. The problem is there aren't enough good jobs to be trained for
. The solution is to make bad jobs better, by raising the minimum wage and making it easier for workers
to form a union, and to create more good jobs by investing for growth. This involves forcing business
to put money into things that actually grow the productive economy rather than
shoveling profits out to shareholders. It also means increasing public investment, so that people
can make a decent living doing socially necessary work like decarbonizing our energy system and restoring
our decaying infrastructure.
Everyone should have the opportunity to learn how to code. Coding can be a rewarding, even pleasurable,
experience, and it's useful for performing all sorts of tasks. More broadly, an understanding of
how code works is critical for basic digital literacy – something that is swiftly becoming a requirement
for informed citizenship in an increasingly technologized world.
But coding is not magic. It is a technical skill, akin to carpentry. Learning to build software
does not make you any more immune to the forces of American capitalism than learning to build a house.
Whether a coder or a carpenter, capital will do what it can to lower your wages, and enlist public
institutions towards that end.
Silicon Valley has been extraordinarily adept at converting previously uncommodified portions
of our common life into sources of profit. Our schools may prove an easy conquest by comparison.
"Everyone should have the opportunity to learn how to code. " OK, and that's what's being done.
And that's what the article is bemoaning. What would be better: teach them how to change tires
or groom pets? Or pick fruit? Amazingly condescending article.
However, training lots of people to be coders won't automatically result in lots of people
who can actually write good code. Nor will it give managers/recruiters the necessary skills
to recognize which programmers are any good.
A valid rebuttal but could I offer another observation? Exposing large portions of the school
population to coding is not going to magically turn them into coders. It may increase their basic
understanding but that is a long way from being a software engineer.
Just as children join art, drama or biology classes so they do not automatically become artists,
actors or doctors. I would agree entirely that just being able to code is not going to guarantee
the sort of income that might be aspired to. As with all things, it takes commitment, perseverance
and dogged determination. I suppose ultimately it becomes the Gattaca argument.
Fair enough, but, his central argument, that an overabundance of coders will drive wages in that
sector down, is generally true, so in the future if you want your kids to go into a profession
that will earn them 80k+ then being a "coder" is not the route to take. When coding is - like
reading, writing, and arithmetic - just a basic skill, there's no guarantee having it will automatically
translate into getting a "good" job.
This article lumps everyone in computing into the 'coder' bin, without actually defining what
'coding' is. Yes there is a glut of people who can knock together a bit of HTML and
JavaScript, but that is not really programming as such.
There are huge shortages of skilled
developers however; people who can apply computer science and engineering in terms of
analysis and design of software. These are the real skills for which relatively few people
have a true aptitude.
The lack of really good skills is starting to show in some terrible
software implementation decisions, such as Slack for example; written as a web app running in
Electron (so that JavaScript code monkeys could knock it out quickly), but resulting in awful
performance. We will see more of this in the coming years...
My brother is a programmer, and in his experience these coding exams don't test anything but
whether or not you took (and remember) a very narrow range of problems introduced in the first
years of a computer science degree. The entire hiring process seems premised on a range of
ill-founded ideas about what skills are necessary for the job and how to assess them in
people. They haven't yet grasped that those kinds of exams mostly test test-taking ability,
rather than intelligence, creativity, diligence, communication ability, or anything else that
a job requires besides coughing up the right answer in a stressful, timed environment without
outside resources.
I'm an embedded software/firmware engineer. Every similar engineer I've ever met has had the same
background - starting in electronics and drifting into embedded software writing in C and assembler.
It's virtually impossible to do such software without an understanding of electronics. When it
goes wrong you may need to get the test equipment out to scope the hardware to see if it's a hardware
or software problem. Coming from a pure computing background just isn't going to get you a job
in this type of work.
All schools teach drama and most kids don't end up becoming actors. You need to give all kids
access to coding so that some can go on to make a career out of it.
Coding salaries will inevitably fall over time, but such skills give workers the option, once
they discover that their income is no longer sustainable in the UK, of moving somewhere more affordable
and working remotely.
Completely agree. Coding is a necessary life skill for the 21st century but there are levels to every
skill. From basic needs for an office job to advanced and specialised.
Lots of people can code but very few of us ever get to the point of creating something new that
has a loyal and enthusiastic user-base. Everyone should be able to code because it is or will
be the basis of being able to create almost anything in the future. If you want to make a game
in Unity, knowing how to code is really useful. If you want to work with large data-sets, you
can't rely on Excel and so you need to be able to code (in R?). The use of code is becoming so
pervasive that it is going to be like reading and writing.
All the science and engineering graduates I know can code but none of them have ever sold a piece
of stand-alone software. The argument made above is like saying that teaching everyone to write will
drive down the wages of writers. Writing is useful for anyone and everyone but only a tiny fraction
of people who can write, actually write novels or even newspaper columns.
From the point of view of any company, including tech companies, immigrants have always had a big advantage over locals:
the government makes sure that they will stay in their place and never complain about low salaries
or bad working conditions because, you know what? If the company sacks you, an immigrant may be
forced to leave the country where they live because their visa expires, which is never going to
happen with a local. Companies always have more leverage over immigrants. Given a choice between
more and less exploitable workers, companies will choose the most exploitable ones.
Which is something that Marx figured out more than a century ago, and why he insisted that socialism
had to be international, which led to the founding of the First International. If workers'
fights didn't go across country boundaries, companies would just play people from one country
against the other. Unfortunately, at some point in time socialists forgot this very important
fact.
So what's wrong with having lots of people able to code? The only argument you seem to have is
that it'll lower wages. So do you think that we should stop teaching writing skills so that journalists
can be paid more? And no one is going to "force" kids into high-level abstract coding practices
in kindergarten, fgs. But there is ample empirical proof that young children can learn basic principles.
In fact the younger that children are exposed to anything, the better they can enhance their skills
and knowledge of it later in life, and computing concepts are no different.
You're completely missing the point. Kids are forced into the programming field (or even STEM as
a more general term) before they develop their abstract reasoning. For that matter, you're not
producing highly skilled people, but functional imbeciles and a labor pool that will eventually
lower the wages.
Conspiracy theory? So Google, FB and others paying hundreds of millions of dollars for forming
a cartel to lower wages is not true? It sounds to me that you're more of a 1969 denier
than the Guardian is. Tech companies are not financing those incentives because they have a good soul.
Their primary drive has always been money, otherwise they wouldn't sell your personal data to
earn money.
But hey, you can always sleep peacefully when your kid becomes a coder. When he is 50, everyone
will want to have a Cobol or Ada programmer with 25 years of experience when they can get a 16-year-old
kid from high school for 1/10 of the price. Go back to sleep...
it's ridiculous because even out of a pool of computer science B.Sc. or M.Sc. grads - companies
are only interested in the top 10%. Even the most mundane company with crappy IT jobs swears that
they only hire "the best and the brightest."
It's basically a con-job by the big Silicon Valley companies offshoring as many US jobs as
they can, or "inshoring" via exploitation of the H1B visa - so they can say "see, we don't
have 'qualified' people in the US - maybe when these kids learn to program in a generation." As
if American students haven't been coding for decades -- and saw their salaries plummet as the
H1B visa and Indian offshore firms exploded......
Dude, stow the attitude. I've tested code from various entities, and seen every kind of crap peddled
as gold.
But I've also seen a little 5-foot giggly lady with two kids, grumble a bit and save a $100,000
product by rewriting another coder's man-month of work in a few days, without any flaws or cracks.
Almost nobody will ever know she did that. She's so far beyond my level it hurts.
And yes, the author knows nothing. He's genuinely crying wolf while knee-deep in amused wolves.
The last time I was in San Jose, years ago, the room was already full of people with Indian
surnames. If the problem was REALLY serious, a programmer from POLAND was called in.
If you think fighting for a violinist spot is hard, try fighting for it with every spare violinist
in the world. I am training my Indian replacement to do my job right now.
At least the public can appreciate a good violin. Can you appreciate
Duff's device (sketched below)?
So by all means, don't teach local kids how to think in a straight line, just in case they
make a dent in the price of wages IN INDIA.... *sheesh*
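For readers who have never met the Duff's device mentioned above, here is a minimal sketch in C. It is an illustrative version, not anyone's production code: the classic manually unrolled copy loop that uses switch fall-through to handle a count that is not a multiple of eight. (Tom Duff's original wrote every byte to one fixed memory-mapped output register; this variant increments both pointers so it behaves like an ordinary copy.)

    #include <stddef.h>

    /* Duff's device: an 8-way unrolled copy loop that jumps into the middle of
       the unrolled body via switch fall-through to absorb count % 8 leftovers. */
    void duff_copy(char *to, const char *from, size_t count)
    {
        if (count == 0)
            return;
        size_t passes = (count + 7) / 8;   /* number of trips through the loop body */
        switch (count % 8) {
        case 0: do { *to++ = *from++;      /* deliberately falls through each case */
        case 7:      *to++ = *from++;
        case 6:      *to++ = *from++;
        case 5:      *to++ = *from++;
        case 4:      *to++ = *from++;
        case 3:      *to++ = *from++;
        case 2:      *to++ = *from++;
        case 1:      *to++ = *from++;
                } while (--passes > 0);
        }
    }

With a modern optimising compiler a plain loop (or memcpy) is usually at least as fast, which is rather the commenter's point: appreciating why this contortion was ever worth writing is the kind of judgment that separates the violinist from someone who merely owns a violin.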
That's the best possible summarisation of this extremely dumb article. Bravo.
For those who don't know how to think of coding, like the article author, here's a few analogies
:
A computer is a box that replays frozen thoughts, quickly. That is all.
Coding is just the art of explaining. Anyone who can explain something patiently and clearly,
can code. Anyone who can't, can't.
Making hardware is very much like growing produce while blind. Making software is very much
like cooking that produce while blind.
Imagine looking after a room full of young eager obedient children who only do exactly, *exactly*,
what you told them to do, but move around at the speed of light. Imagine having to try to keep
them from smashing into each other or decapitating themselves on the corners of tables, tripping
over toys and crashing into walls, etc, while you get them all to play games together.
The difference between a good coder and a bad coder is almost life and death. Imagine a broth
prepared with ingredients from a dozen co-ordinating geniuses and one idiot, that you'll mass
produce. The soup is always far worse for the idiot's additions. The more cooks you involve, the
more chance your mass produced broth will taste bad.
People who hire coders, typically can't tell a good coder from a bad coder.
No you do it in your own time. If you're not prepared to put in long days IT is not for you in
any case. It was ever thus, but more so now due to offshoring - rather than the rather obscure
forces you seem to believe are important.
Sorry, offworldguy, but you're losing this one really badly. I'm a professional software engineer
in my 60's and I know lots of non-professionals in my age range who write little programs,
scripts and apps for fun. I know this because they often contact me for help or advice.
So you've now been told by several people in this thread that ordinary people do code for fun
or recreation. The fact that you don't know any probably says more about your network of friends
and acquaintances than about the general population.
This is one of the daftest articles I've come across in a long while.
If it's possible that so many kids can be taught to code well enough so that wages come down,
then that proves that the only reason we've been paying so much for development costs is the scarcity
of people able to do it, not that it's intrinsically so hard that only a select few could anyway.
In which case, there is no ethical argument for keeping the pools of skilled workers to some select
group. Anyone able to do it should have an equal opportunity to do it.
What is the argument for not teaching coding (other than to artificially keep wages high)? Why
not stop teaching the three R's, in order to boost white-collar wages in general?
Computing is an ever-increasingly intrinsic part of life, and people need to understand it at
all levels. It is not just unfair, but tantamount to neglect, to fail to teach children all the
skills they may require to cope as adults.
Having said that, I suspect that in another generation or two a good many lower-level coding jobs
will be redundant anyway, with such code being automatically generated, and "coders" at this level
will be little more than technicians setting various parameters. Even so, understanding the basics
behind computing is a part of understanding the world they live in, and every child needs that.
Suggesting that teaching coding is some kind of conspiracy to force wages down is, well, it makes
the moon-landing conspiracy look sensible by comparison.
I think it is important to demystify advanced technology; that has importance in its own
right. Plus, schools should expose kids to things which may spark their interest. Not everyone
who does a science project goes on years later to get a PhD, but you'd think that it makes it
more likely. Same as giving a kid some music lessons. There is a big difference between serious
coding and the basic steps needed to automate a customer service team or a marketing program,
but the people who have some mastery over automation will have an advantage in many jobs. Advanced
machines are clearly going to be a huge part of our future. What should we do about it, if not
teach kids how to understand these tools?
This is like arguing that teaching kids to write is nothing more than a plot to flood the market
for journalists. Teaching first aid and CPR does not make everyone a doctor.
Coding is an essential skill for many jobs already: 50 years ago, who would have thought you needed
coders to make movies? Being a software engineer, a serious coder, is hard. In fact, it takes
more than technical coding to be a software engineer: you can learn to code in a week. Software
Engineering is a four year degree, and even then you've just started a career. But depriving kids
of some basic insights may mean they won't have the basic skills needed in the future, even for
controlling their car and house. By all means, send you kids to a school that doesn't teach coding.
I won't.
Did you learn SNOBOL, or is Snowball a language I'm not familiar with? (Entirely possible; as
an American I never would have known Extended Mercury Autocode existed were it not for a random
book acquisition at my home town library when I was a kid.)
The tide that is transforming technology jobs from "white collar professional" into "blue collar
industrial" is part of a larger global economic cycle.
Successful "growth" assets inevitably transmogrify into "value" and "income" assets as they
progress through the economic cycle. The nature of their work transforms also. No longer focused
on innovation; on disrupting old markets or forging new ones; their fundamental nature changes
as they mature into optimising, cost reducing, process oriented and most importantly of all --
dividend paying -- organisations.
First, the market invests. And then, .... it squeezes.
Immature companies must invest in their team; must inspire them to be innovative so that they
can take the creative risks required to create new things. This translates into high skills, high
wages and "white collar" social status.
Mature, optimising companies on the other hand must necessarily avoid risks and seek variance-minimising
predictability. They seek to control their human resources; to eliminate creativity; to make
the work procedural, impersonal and soulless. This translates into low skills, low wages and "blue
collar" social status.
This is a fundamental part of the economic cycle, but it has been playing out on the global
stage, which has had the effect of hiding some of its effects.
Over the past decades, technology knowledge and skills have flooded away from "high cost" countries
and towards "best cost" countries at a historically significant rate. Possibly at the maximum
rate that global infrastructure and regional skills pools can support. Much of this necessarily
inhumane and brutal cost cutting and deskilling has therefore been hidden by the tide of outsourcing
and offshoring. It is hard to see the nature of the jobs change when the jobs themselves are changing
hands at the same time.
The ever-tighter ratchet of dehumanising industrialisation, productivity and efficiency continues
apace, however, and as our global system matures and evens out, we see the seeds of what we have
sown sail home from over the sea.
Technology jobs in developed nations have been skewed towards "growth" activities since for
the past several decades most "value" and "income" activities have been carried out in developing
nations. Now, we may be seeing the early preparations for the diffusion of that skewed, uneven
and unsustainable imbalance.
The good news is that "Growth" activities are not going to disappear from the world. They just
may not be so geographically concentrated as they are today. Also, there is a significant and
attention-worthy argument that the re-balancing of skills will result in a more flexible and performant
global economy as organisations will better be able to shift a wider variety of work around the
world to regions where local conditions (regulation, subsidy, union activity etc...) are supportive.
For the individuals concerned it isn't going to be pretty. And of course it is just another
example of the race to the bottom that pits states and public sector purse-holders against one
another to win the grace and favour of globally mobile employers.
As a power play move it has a sort of inhumanly psychotic inevitability to it which is quite
awesome to observe.
I also find it ironic that the only way to tame the leviathan that is the global free-market
industrial system might actually be effective global governance and international cooperation
within a rules-based system.
Both "globalist" but not even slightly both the same thing.
Not just coders; it puts even IT Ops guys into this bin. Basically the good old "so you are working
with computers" line I used to hear a lot 10-15 years ago.
You can teach everyone how to code but it doesn't necessarily mean everyone will be able to work
as one. We all learn math but that doesn't mean we're all mathematicians. We all know how to write
but we're not all professional writers.
I have a graduate degree in CS and have been to a coding bootcamp. Not everyone's brain is wired
to become a successful coder. There is a particular way in which coders think. The quality of a product
will stand out based on these differences.
It is very hyperbolic to assume that the profit in those companies is made by decreasing wages. In
my company the profit is driven by the ability to deliver products to the market. And that is limited
by the number of top people (not just any coder) you can have.
You realise that the arts are massively oversupplied and that most artists earn very little, if
anything? Which is sort of like the situation the author is warning about. But hey, he knows nothing.
Congratulations, though, on writing one of the most pretentious posts I've ever read on CIF.
So you know kids, college age people and software developers who enjoy doing it in their leisure
time? Do you know any middle aged mothers, fathers, grandparents who enjoy it and are not
software developers?
Sorry, I don't see coding as a leisure pursuit that is going to take off
beyond a very narrow demographic and if it becomes apparent (as I believe it will) that there
is not going to be a huge increase in coding job opportunities then it will likely wither in schools
too, perhaps replaced by music lessons.
No, because software developers probably fail more often than they succeed. Building anything worthwhile
is an iterative process. And it's not just the compiler but the other devs, your designer, your
PM, all looking at your work.
It's not shallow or lazy. I also work at a tech company and it's pretty common to do that across
job fields. Even in HR marketing jobs, we hire students who can't point to an internship or other
kind of experience in college, not simply grades.
A lot of people do find it fun. I know many kids - high school and young college age - who code
in their leisure time because they find it pleasurable to make small apps and video games. I myself
enjoy it too. Your argument is like saying since you don't like to read books in your leisure
time, nobody else must.
The point is your analogy isn't a good one - people who learn to code can not only enjoy it
in their spare time just like music, but they can also use it to accomplish all kinds of basic
things. I have a friend who's a software developer who has used code to program his Roomba to
vacuum in a specific pattern and to play Candy Land with his daughter when they lost the spinner.
Creativity could be added to your list. Anyone can push a button but only a few can invent a new
one.
One company in the US (after it was taken over by a new owner) decided it was more profitable
to import button pushers from off-shore, they lost 7 million customers (gamers) and had to employ
more of the original American developers to maintain their high standard and profits.
So similar to 500k people a year going to university (UK) now when it used to be 60k people a
year (1980). There were never enough graduate jobs in 1980, so I can't see where the sudden increase
in need for graduates has come from.
They aren't really crucial pieces of technology except for their popularity
It's early in the day for me, but this is the most ridiculous thing I've read so far, and I
suspect it will be high up on the list by the end of the day.
There's no technology that is "crucial" unless it's involved in food, shelter or warmth. The
rest has its "crucialness" decided by how widespread its use is, and in the case of those 3 languages,
the answer is "very".
You (or I) might not like that very much, but that's how it is.
My benchmark would be if the average new graduate in the discipline earns more or less than one
of the "professions", Law, medicine, Economics etc. The short answer is that they don't. Indeed,
in my experience of professions, many good senior SW developers, say in finance, are paid markedly
less than the marketing manager, CTO etc. who are often non-technical.
My benchmark is not "has a car, house etc." but what does 10, 15 20 years of experience in
the area generate as a relative income to another profession, like being a GP or a corporate solicitor
or a civil servant (which is usually the benchmark academics use for pay scaling). It is not to
denigrate, just to say that markets don't always clear to a point where the most skilled are the
highest paid.
I was also suggesting that even if you are not intending to work in the SW area, being able
to translate your imagination into a program that reflects your ideas is a nice life skill.
Your assumption has no basis in reality. In my experience, as soon as Clinton ramped up H1Bs,
my employer would invite 6 candidates with the same college/degree/curriculum in for interviews
(5 citizens, 1 foreign student) and make a default offer to the foreign student without asking the
interviewers a single question about the interview. Eventually, they skipped the farce of interviewing
citizens altogether.
That was in 1997, and it's only gotten worse. Wall St's been pretty blunt lately: it openly admits
replacing US workers with imported labor, as it's the "easiest" way to "grow" the economy, even though
they know they are ousting citizens from their jobs to do so.
"People who get Masters and PhD's in computer science" Feed western universities money, for degree
programs that would otherwise not exist, due to lack of market demand. "someone has a Bachelor's
in CS" As citizens, having the same college/same curriculum/same grades, as foreign grad. But
as citizens, they have job market mobility, and therefore are shunned. "you can make something
real and significant on your own" If someone else is paying your rent, food and student loans
while you do so.
While true, it's not the coders' fault. The managers and execs above them have intentionally created
an environment where these things are secondary. What's primary is getting the stupid piece of
garbage out the door for the quarterly profit outlook. Ship it and patch it.
Do most people find it fun? I can code. I don't find it 'fun'. Thirty years ago as a young graduate
I might have found it slightly fun but the 'fun' wears off pretty quick.
In my estimation PHP is an utter abomination. Python is just a little better but still very bad.
Ruby is a little better but still not at all good.
Languages like PHP, Python and JS are popular for banging out prototypes and disposable junk,
but you greatly overestimate their importance. They aren't really crucial pieces of technology
except for their popularity and while they won't disappear they won't age well at all. Basically
they are big long-lived fads. Java is now over 20 years old and while Java 8 is not crucial, the
JVM itself actually is crucial. It might last another 20 years or more. Look for more projects
like Ceylon, Scala and Kotlin. We haven't found the next step forward yet, but it's getting more
interesting, especially around type systems.
A strong developer will be able to code well in a half dozen languages and have fairly decent
knowledge of a dozen others. For me it's been many years of: Z80, x86, C, C++, Java. Also know
some Perl, LISP, ANTLR, Scala, JS, SQL, Pascal, others...
This makes people like me with 35 years of experience shipping products on deadlines up and down
every stack (from device drivers and operating systems to programming languages, platforms and
frameworks to web, distributed computing, clusters, big data and ML) so much more valuable. Been
there, done that.
It's just not true. In SV there's this giant vacuum created by Apple, Google, FB, etc. Other good
companies struggle to fill positions. I know from being on the hiring side at times.
Plenty of people? I don't know of a single person outside of my work, which is teeming with programmers.
Not a single friend, not my neighbours, not my wife or her extended family, not my parents. Plenty
of people might do it but most people don't.
Agreed: by gifted I did not mean innate. It's more of a mix of having the interest, the persistence,
the time, the opportunity and actually enjoying that kind of challenge.
While some of those
things are to a large extent innate personality traits, others are not, and you don't need the maximum
of all of them; you just need enough to drive you to explore that domain.
That said, somebody who goes into coding purely for the money and does it for the money alone
is extremely unlikely to become an exceptional coder.
I'm as senior as they get and have interviewed quite a lot of programmers for several positions,
including for Technical Lead (in fact, to replace me) and so far my experience leads me to believe
that people who don't have a knack for coding are much less likely to expose themselves to many
different languages and techniques, and also are less experimentalist, thus being far less likely
to have those moments of transcending merely being aware of the visible and obvious to discover
the concerns and concepts behind what one does. Without those moments that open the door to the
next Universe of concerns and implications, one cannot do state transitions such as Coder to Technical
Designer or Technical Designer to Technical Architect.
Sure, you can get the title and do the things from the books, but you will not get WHY are
those things supposed to work (and when they will not work) and thus cannot adjust to new conditions
effectively and will be like a sailor that can't sail away from sight of the coast since he can't
navigate.
All this gets reflected in many things that enhance productivity, from the early ability to
quickly piece together solutions for a new problem out of past solutions for different problems
to, later, conceiving software architecture designs fitted to the typical usage pattern in the
industry for which the software is going to be made.
From the way our IT department is going, needing millions of coders is not the future. It'll be
a minority of developers at the top, and an army of low wage monkeys at the bottom who can troubleshoot
from a script - until AI comes along that can code faster and more accurately.
Interesting piece that's fundamentally flawed. I'm a software engineer myself. There is a reason
a University education of a minimum of three years is the base line for a junior developer or
'coder'.
Software engineering isn't just writing code. I would say 80% of my time is spent designing
and structuring software before I even touch the code.
Explaining software engineering as a discipline at a high level to people who don't understand
it is simple.
Most of us who learn to drive learn a few basics about the mechanics of a car. We know that
brake pads need to be replaced, we know that fuel is pumped into an engine when we press the gas
pedal. Most of us know how to change a bulb if it blows.
The vast majority of us wouldn't be able to replace a head gasket or clutch though. Just knowing
the basics isn't enough to make you a mechanic.
Studying in school isn't enough to produce software engineers. Software engineering isn't just
writing code, it's cross discipline. We also need to understand the science behind the computer,
we need to understand logic, data structures, timings, how to manage memory, security, how databases
work etc.
A few years of learning at school isn't nearly enough, a degree isn't enough on its own due
to the dynamic and ever evolving nature of software engineering. Schools teach technology that
is out of date and typically don't explain the science very well.
This is why most companies don't want new developers, they want people with experience and
multiple skills.
Programming is becoming cool and people think that because of that it's easy to become a skilled
developer. It isn't. It takes time and effort and most kids give up.
French was on the national curriculum when I was at school. Most people including me can't
hold a conversation in French though.
Ultimately there is a SKILL shortage. And that's because skill takes a long time, successes
and failures to acquire. Most people just give up.
This article is akin to saying 'schools are teaching basic health to reduce the wages of Doctors'.
It didn't happen.
There is a difference. When you teach people music you teach a skill that can be used for a lifetimes
enjoyment. One might sit at a piano in later years and play. One is hardly likely to 'do a bit
of coding' in one's leisure time.
The other thing is how good are people going to get at coding and how long will they retain
the skill if not used? I tend to think maths is similar to coding and most adults have pretty
terrible maths skills not venturing far beyond arithmetic. Not many remember how to solve a quadratic
equation or even how to rearrange some algebra.
One more thing is we know that if we teach people music they will find a use for it, if only
in their leisure time. We don't know that coding will be in any way useful because we don't know
if there will be coding jobs in the future. AI might take over coding but we know that AI won't
take over playing piano for pleasure.
If we want to teach logical thinking then I think maths has always done this and we should
make sure people are better at maths.
Am I missing something here? Being able to code is a skill that is a useful addition to the skill
armoury of a youngster entering the work place. Much like reading, writing, maths... Not only
is it directly applicable and pervasive in our modern world, it is built upon logic.
The important point is that American schools are not ONLY teaching youngsters to code, and
producing one dimensional robots... instead coding makes up one part of their overall skill set.
Those who wish to develop their coding skills further certainly can choose to do so. Those who
specialise elsewhere are more than likely to have found the skills they learnt whilst coding useful
anyway.
I struggle to see how there is a hidden capitalist agenda here. I would argue learning the
basics of coding is simply becoming seen as an integral part of the school curriculum.
The word "coding" is shorthand for "computer programming" or "software development" and it masks
the depth and range of skills that might be required, depending on the application.
This subtlety is lost, I think, on politicians and perhaps the general public. Asserting that
teaching lots of people to code is a sneaky way to commoditise an industry might have some truth
to it, but remember that commoditisation (or "sharing and re-use" as developers might call it)
is nothing new. The creation of freely available and re-usable software components and APIs has
driven innovation, and has put much power in the hands of developers who would not otherwise have
the skill or time to tackle such projects.
There's nothing to fear from teaching more people to "code", just as there's nothing to fear
from teaching more people to "play music". These skills simply represent points on a continuum.
There's room for everyone, from the kid on a kazoo all the way to Coltrane at the Village Vanguard.
I taught CS. Out of around 100 graduates I'd say maybe 5 were reasonable software engineers.
The rest would be fine in tech support or other associated trades, but not writing software. It's
not just a set of trainable skills; it's a set of attitudes and ways of perceiving and understanding
that just aren't that common.
I can't understand the rush to teach coding in schools. First of all, I don't think we are going
to be a country of millions of coders, and secondly, if most people have the skills then coding
is hardly going to be a well-paid job. Thirdly, you can learn coding from scratch after school,
as people of my generation did. You could argue that it is part of a well-rounded education,
but then it is as important for your career as learning Shakespeare, knowing what an oxbow lake
is, or being able to do calculus: most jobs just won't need you to know.
While you roll on the floor laughing, these countries will slowly but surely get their act together.
That is how they work. There are top-quality coders over there, and they will soon be promoted into
positions where they can organise the others.
You are probably too young to remember when people laughed at electronic products when they
were made in Japan and then Taiwan. History will repeat itself.
Yes it's ironic and no different here in the UK. Traditionally Labour was the party focused on
dividing the economic pie more fairly, Tories on growing it for the benefit of all. It's now completely
upside down with Tories paying lip service to the idea of pay rises but in reality supporting
this deflationary race to the bottom, hammering down salaries and so shrinking discretionary spending
power which forces price reductions to match and so more pressure on employers to cut costs ...
ad infinitum.
Labour now favour policies which would cause an expansion across the entire economy through pay
rises and dramatically increased investment with perhaps more tolerance of inflation to achieve
it.
Not surprising if they're working for a company that is cold-calling people - which should be
banned in my opinion. Call centres providing customer support are probably less abuse-heavy since
the customer is trying to get something done.
I taught myself to code in 1974. Fortran and COBOL were first. Over the years, as an aerospace engineer,
I coded in numerous languages ranging from PL/M, SNOBOL, and Basic to more assembly languages than
I can recall, not to mention going deep down into machine code on more architectures than most know even
existed. Bottom line is that coding is easy. It doesn't take a genius to code, just another way
of thinking. Consider all the bugs in the software available now. These "coders", not sufficiently
trained, need adult supervision by engineers who know what they are doing for computer systems
that are important, such as the electrical grid, nuclear weapons, and safety-critical systems.
If you want to program toy apps then code away; if you want to do something important, learn engineering
AND coding.
Laughable. It takes only an above-average IQ to code. Today's coders are akin to the auto mechanics
of the 1950s where practically every high school had auto shop instruction . . . nothing but a
source of cheap labor for doing routine implementations of software systems using powerful code
libraries built by REAL software engineers.
I disagree. Technology firms are just like other firms. Why then the collusion not to pay more
to workers coming from other companies? To believe that they are anything else is naive. The author
is correct. We need policies that actually grow the economy and not leaders who cave to what the
CEOs want like Bill Clinton did. He brought NAFTA at the behest of CEOs and all it ended up doing
was ripping apart the rust belt and ushering in Trump.
So the media always needs some bad guys to write about, and this month they seem to have it in
for the tech industry. The article is BS. I interview a lot of people to join a large tech company,
and I can guarantee you that we aren't trying to find cheaper labor, we're looking for the best
talent.
I know that lots of different jobs have been outsourced to low cost areas, but these days the
top companies are instead looking for the top talent globally.
I see this article as a hit piece against Silicon Valley, and it flies in the face of
the evidence.
This has got to be the most cynical and idiotic social interest piece I have ever read in the
Guardian. Once upon a time it was very helpful to learn carpentry and machining, but now, even
if you are learning those, you will get a big and indispensable headstart if you have some logic
and programming skills. The fact is, almost no matter what you do, you can apply logic and programming
skills to give you an edge. Even journalists.
Yup, rings true. I've been in hi tech for over 40 years and seen the changes. I was in Silicon
Valley for 10 years on a startup. India is taking over, my current US company now has a majority
Indian executive and is moving work to India. US politicians push coding to drive down wages to
Indian levels.
On the bright side, I am old enough and established enough to quit tomorrow;
it's someone else's problem. But I still despise those who have sold us out, like the Clintons,
the Bushes, the Googoids, the Zuckerboids.
Sure markets existed before governments, but capitalism didn't, can't in fact. It needs the organs
of state, the banking system, an education system, and an infrastructure.
Then teach them other things, but not coding! Here in Australia every child of school age has to
learn coding. Now tell me that every one of them will need it? Look beyond computers, as coding
will soon be automated just like every other job.
If you have never coded then you will not appreciate how labour intensive it is. Coders effectively
use line editors to type in, line by line, the instructions. And syntax is critical; add a comma
when you meant a semicolon and the code doesn't work properly. Yeah, we use frameworks and libraries
of already written subroutines, but, in the end, it is all about manually typing in the code.
Which is an expensive way of doing things (hence the attractions of 'off-shoring' the coding
task to low cost economies in Asia).
And this is why teaching kids to code is a waste of time.
Already, AI based systems are addressing the task of interpreting high level design models
and simply generating the required application.
One of the first uses templates and a smart chatbot to enable non-tech business people to build
their websites. By describing in non-coding terms what they want, the chatbot is able to assemble
the necessary components and make the requisite template amendments to build a working website.
Much cheaper than hiring expensive coders to type it all in manually.
It's early days yet, but coding may well be one of the big losers to AI automation along with
all those back office clerical jobs.
Teaching kids how to think about design rather than how to code would be much more valuable.
Thick-skinned? Just because you might get a few error messages from the compiler? Call centre
workers have to put up with people telling them to fuck off eight hours a day.
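As an illustration of the syntax-sensitivity point above (a comma where a semicolon belongs and the code doesn't work) and of the compiler error messages just mentioned, here is a minimal Java sketch. The class and the numbers are invented for the example; it is not taken from any commenter's code.

    // One character in the wrong place is enough to stop a build cold.
    public class SyntaxSlip {
        static int add(int a, int b) { return a + b; }

        public static void main(String[] args) {
            System.out.println(add(2, 3));    // compiles and prints 5
            // System.out.println(add(2; 3)); // ';' instead of ',': compile error, nothing runs
        }
    }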
Spot on. Society will never need more than 1% of its people to code. We will need far more garbage
men. There are only so many (relatively) good jobs to go around, and it's about competing to get
them.
I'm a professor (not of computer science) and yet, I try to give my students a basic understanding
of algorithms and logic, to spark an interest and encourage them towards programming. I have no
skin in the game, except that I've seen unemployment first-hand, and want them to avoid it. The
best chance most of them have is to learn to code.
Educating youth does not drive wages down. It drives our economy up. China, India, and other
countries are training youth in programming skills. Educating our youth means that they will
be able to compete globally.
This is the standard GOP stand that we don't need to educate our youth, but instead fantasize
about high-paying manufacturing jobs miraculously coming back.
Many jobs, including new manufacturing jobs have an element of coding because they are
automated. Other industries require coding skills to maintain web sites and keep computer
systems running. Learning coding skills opens these doors.
Coding teaches logic, an essential thought process. Learning to code, like learning anything,
increases the brain's ability to adapt to new environments, which is essential to our survival
as a species.
We must invest in educating our youth.
"Contrary to public perception, the economy doesn't actually need that many more
programmers." This really looks like a straw man introducing a red herring. A skill can be
extremely valuable for those who do not pursue it as a full time profession.
The economy doesn't actually need that many more typists, pianists, mathematicians,
athletes, dietitians. So, clearly, teaching typing, the piano, mathematics, physical
education, and nutrition is a nefarious plot to drive down salaries in those professions.
None of those skills could possibly enrich the lives or enhance the productivity of builders,
lawyers, public officials, teachers, parents, or store managers.
A study by the Economic Policy Institute found that the supply of American college
graduates with computer science degrees is 50% greater than the number hired into the tech
industry each year.
You're assuming that all those people are qualified to work in software because they have
a piece of paper that says so, but that's not a valid assumption. The quality of computer
science degree courses is generally poor, and most people aren't willing or able to teach
themselves. Universities are motivated to award degrees anyway because if they only awarded
degrees to students who are actually qualified then that would reflect very poorly on their
quality of teaching.
A skills shortage doesn't mean that everyone who claims to have a skill gets hired and
there are still some jobs left over that aren't being done. It means that employers are
forced to hire people who are incompetent in order to fill all their positions. Many people
who get jobs in programming can't really do it and do nothing but create work for everyone
else. That's why most of the software you use every day doesn't work properly. That's why
competent programmers' salaries are still high in spite of the apparently large number of
"qualified" people who aren't employed as programmers.
"... You can learn to code, but that doesn't mean you'll be good at it. There will be a few who excel but most will not. This isn't a reflection on them but rather the reality of the situation. In any given area some will do poorly, more will do fairly, and a few will excel. The same applies in any field. ..."
"... Oh no, there's loads of people who say they're coders, who have on their CV that they're coders, that have been paid to be coders. Loads of them. Amazingly, about 9 out of 10 of them, experienced coders all, spent ages doing it, not a problem to do it, definitely a coder, not a problem being "hands on"... can't actually write working code when we actually ask them to. ..."
"... I feel for your brother, and I've experienced the exact same BS "test" that you're describing. However, when I said "rudimentary coding exam", I wasn't talking about classic fiz-buz questions, Fibonacci problems, whiteboard tests, or anything of the sort. We simply ask people to write a small amount of code that will solve a simple real world problem. Something that they would be asked to do if they got hired. We let them take a long time to do it. We let them use Google to look things up if they need. You would be shocked how many "qualified applicants" can't do it. ..."
"... "...coding is not magic. It is a technical skill, akin to carpentry. " I think that is a severe underestimation of the level of expertise required to conceptualise and deliver robust and maintainable code. The complexity of integrating software is more equivalent to constructing an entire building with components of different materials. If you think teaching coding is enough to enable software design and delivery then good luck. ..."
"... Being able to write code and being able to program are two very different skills. In language terms its the difference between being able to read and write (say) English and being able to write literature; obviously you need a grasp of the language to write literature but just knowing the language is not the same as being able to assemble and marshal thought into a coherent pattern prior to setting it down. ..."
"... What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra. ..."
"... Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it. ..."
"... A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job. Secondarily, while I agree that one day our field might be replaced by automation, there's a level of creativity involved with good software engineering that makes your carpenter comparison a bit flawed. ..."
Agreed; to many people, 'coding' consists of copying other people's JavaScript snippets from
StackOverflow... I tire of the many frauds in the business...
You can learn to code, but that doesn't mean you'll be good at it. There will be a few who
excel but most will not. This isn't a reflection on them but rather the reality of the
situation. In any given area some will do poorly, more will do fairly, and a few will excel.
The same applies in any field.
Oh, rubbish. I'm in the process of retiring from my job as an Android software designer so
I'm tasked with hiring a replacement for my organisation. It pays extremely well, the work is
interesting, and the company is successful and serves an important worldwide industry.
Still, finding highly-qualified people is hard and they get snatched up in mid-interview
because the demand is high. Not only that but at these pay scales, we can pretty much expect
the Guardian will do yet another article about the unconscionable gap between what rich,
privileged techies like software engineers make and everyone else.
Really, we're damned if we do and damned if we don't. If tech workers are well-paid we're
castigated for gentrifying neighbourhoods and living large, and yet anything that threatens
to lower what we're paid produces conspiracy-theory articles like this one.
I learned to cook in school. Was there a shortage of cooks? No. Did I become a professional
cook? No. But I sure as hell would not have missed the skills I learned for the world, and I
use them every day.
Oh no, there's loads of people who say they're coders, who have on their CV that they're
coders, that have been paid to be coders. Loads of them.
Amazingly, about 9 out of 10 of them, experienced coders all, spent ages doing it, not a
problem to do it, definitely a coder, not a problem being "hands on"... can't actually
write working code when we actually ask them to.
I feel for your brother, and I've experienced the exact same BS "test" that you're
describing. However, when I said "rudimentary coding exam", I wasn't talking about classic
FizzBuzz questions, Fibonacci problems, whiteboard tests, or anything of the sort. We simply
ask people to write a small amount of code that will solve a simple real world problem.
Something that they would be asked to do if they got hired. We let them take a long time to
do it. We let them use Google to look things up if they need. You would be shocked how many
"qualified applicants" can't do it.
The demonization of Silicon Valley is clearly the next place to put all blame. Look what
"they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get
a rope!
I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San
Jose transform into a concrete jungle. There used to be quite a bit of semiconductor
equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings
have the same name : AVAILABLE. Most equipment and device manufacturing has moved to
Asia.
Programming started with binary, then machine code (hexadecimal or octal), and moved to
assembler as an assembled and linked structure. Compiled languages like FORTRAN, BASIC,
PL/I, COBOL, PASCAL, and C (and all its "+"s) followed, making programming easier for the less
talented.
Now the scripting and web languages (HTML, JavaScript, etc.) are even higher level and
accessible to nearly all. Programming has become a commodity and will be priced like milk,
wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a
career.
"intelligence, creativity, diligence, communication ability, or anything else that a job"
None of those are any use if, when asked to turn your intelligent, creative, diligent,
communicated idea into some software, you perform as well as most candidates do at simple
coding assessments... and write stuff that doesn't work.
At its root, the campaign for code education isn't about giving the next generation a
shot at earning the salary of a Facebook engineer. It's about ensuring those salaries no
longer exist, by creating a source of cheap labor for the tech industry.
Of course the writer does not offer the slightest shred of evidence to support the idea
that this is the actual goal of these programs. So it appears that the tinfoil-hat
conspiracy brigade on the Guardian is operating not only below the line, but above it,
too.
The fact is that few of these students will ever become software engineers (which,
incidentally, is my profession) but programming skills are essential in many professions for
writing little scripts to automate various tasks, or to just understand 21st century
technology.
Sadly this is another article by a partial journalist who knows nothing about the software
industry, but hopes to subvert what he has read somewhere to support a position he had
already assumed. As others have said, understanding coding has already become akin to being able to use a
pencil. It is a basic requirement of many higher-level roles.
But knowing which end of a pencil to put on the paper (the equivalent of the level of
coding taught in schools) isn't the same as being an artist. Moreover, anyone who knows the field recognises that top coders are gifted; they embody
genius. There are coding Caravaggios out there, but few have the experience to know that. No
amount of teaching will produce high-level coders from average humans; there is an intangible
something needed, as there is in music and art, to elevate the merely good to genius.
All to say, however many are taught the basics, it won't push down the value of the most
talented coders, and so won't reduce the costs of the technology industry in any meaningful
way as it is an industry, like art, that relies on the few not the many.
Not all of those children will want to become programmers, but at least the barrier to
entry - for more to at least experience it - will be lower.
Teaching music only to the children whose parents can afford music tuition means that
society misses out on a greater potential for some incredibly gifted musicians to shine
through.
Moreover, learning to code really means learning how to wrangle with the practical
application of abstract concepts, algorithms, numerical skills, logic, reasoning, etc., which
are all transferable skills, some of which are not in the scope of other classes, certainly
not practically.
Like music, sport, literature etc., programming a computer, a website, a device, or a smartphone
is an endeavour that can be truly rewarding as merely a pastime, and similarly is limited
only by one's imagination.
"...coding is not magic. It is a technical skill, akin to carpentry. " I think that is a
severe underestimation of the level of expertise required to conceptualise and deliver robust
and maintainable code. The complexity of integrating software is more equivalent to
constructing an entire building with components of different materials. If you think teaching
coding is enough to enable software design and delivery then good luck.
Yeah, but mania over coding skills inevitably pushes other skills out of the curriculum (or
deemphasizes them). Education is zero-sum in that there's only so much time and energy to
devote to it. Hence, you need more than vague appeals to "enhancement," especially given the
risks pointed out by the author.
"Talented coders will start new tech businesses and create more jobs."
That could be argued for any skill set, including those found in the humanities and social
sciences likely to be pushed out by the mania over coding ability. Education is zero-sum: time
spent on one subject is time that invariably can't be spent learning something else.
"If they can't literally fix everything let's just get rid of them, right?"
That's a strawman. His point is rooted in the recognition that we only have so much time,
energy, and money to invest in solutions. Ones that feel good but may not do anything
distract us from the deeper structural issues in our economy. The problem with thinking
"education" will fix everything is that it leaves the status quo unquestioned.
Being able to write code and being able to program are two very different skills. In language
terms it's the difference between being able to read and write (say) English and being able to
write literature; obviously you need a grasp of the language to write literature, but just
knowing the language is not the same as being able to assemble and marshal thought into a
coherent pattern prior to setting it down.
To confuse things further there's various levels of skill that all look the same to the
untutored eye. Suppose you wished to bridge a waterway. If that waterway was a narrow ditch
then you could just throw a plank across. As the distance to be spanned got larger and larger
eventually you'd have to abandon intuition for engineering and experience. Exactly the same
issues happen with software but they're less tangible; anyone can build a small program but a
complex system requires a lot of other knowledge (in my field, that's engineering knowledge
-- coding is almost an afterthought).
It's a good idea to teach young people to code, but I wouldn't raise their expectations of
huge salaries too much. For children, educating them in wider, more general fields and
abstract activities such as music will pay huge dividends, far more than just teaching
them whatever the fashionable language du jour is. (...which should be Logo, but it's too
subtle and abstract; it doesn't look "real world" enough!)
I don't see this as an issue. Sure, there could be ulterior motives there, but anyone who
wants to still be employed in 20 years has to know how to code. It is not that everyone will
be a coder, but their jobs will either include part-time coding or will require understanding
of software and what it can and cannot do. AI is going to be everywhere.
What a dumpster argument. I am not a programmer or even close, but a basic understanding of
coding has been important to my professional life. Coding isn't just about writing software.
Understanding how algorithms work, even simple ones, is a general skill on par with algebra.
But it isn't just about coding for Tarnoff. He seems to hold education in contempt
generally. "The far-fetched premise of neoliberal school reform is that education can mend
our disintegrating social fabric." If they can't literally fix everything let's just get rid
of them, right?
Never mind that a good education is clearly one of the most important things
you can do for a person to improve their quality of life wherever they live in the world.
It's "neoliberal," so we better hate it.
I'm not going to argue that the goal of mass education isn't to drive down wages, but the
idea that the skills gap is a myth doesn't hold water in my experience. I'm a software
engineer and manager at a company that pays well over the national average, with great
benefits, and it is downright difficult to find a qualified applicant who can pass a
rudimentary coding exam.
A lot of resumes come across my desk that look qualified on paper,
but that's not the same thing as being able to do the job. Secondarily, while I agree that
one day our field might be replaced by automation, there's a level of creativity involved
with good software engineering that makes your carpenter comparison a bit flawed.
"... I do think it's peculiar that Silicon Valley requires so many H1B visas... 'we can't find the
talent here' is the main excuse ..."
"... This is interesting. Indeed, I do think there is excess supply of software programmers. ..."
"... Well, it is either that or the kids themselves who have to pay for it and they are even less
prepared to do so. Ideally, college education should be tax payer paid but this is not the case in the
US. And the employer ideally should pay for the job related training, but again, it is not the case
in the US. ..."
"... Plenty of people care about the arts but people can't survive on what the arts pay. That was
pretty much the case all through human history. ..."
"... I was laid off at your age in the depths of the recent recession and I got a job. ..."
"... The great thing about software , as opposed to many other jobs, is that it can be done at home
which you're laid off. Write mobile (IOS or Android) apps or work on open source projects and get stuff
up on github. I've been to many job interviews with my apps loaded on mobile devices so I could show
them what I've done. ..."
"... Schools really can't win. Don't teach coding, and you're raising a generation of button-pushers.
Teach it, and you're pandering to employers looking for cheap labour. Unions in London objected to children
being taught carpentry in the twenties and thirties, so it had to be renamed "manual instruction" to
get round it. Denying children useful skills is indefensible. ..."
I do think it's peculiar that Silicon Valley requires so many H1B visas... 'we can't find
the talent here' is the main excuse, though many 'older' (read: over 40) native-born tech
workers will tell you there's plenty of talent here already; but even with the immigration hassles,
H1B workers will be cheaper overall...
This is interesting. Indeed, I do think there is excess supply of software programmers.
There is only a modest number of decent jobs, say as an algorithms developer in finance,
general architecture of complex systems or to some extent in systems security. However, these
jobs are usually occupied and the incumbents are not likely to move on quickly. Road blocks are
also put up by creating sub networks of engineers who ensure that some knowledge is not ubiquitous.
Most very high paying jobs in the technology sector are in the same standard upper management
roles as in every other industry.
Still, the ability to write a computer program is an enabler; knowing how it works means you
have an ability to imagine something and make it real. To me it is a bit like language: some people
can use language to make more money than others, but it is still important to be able to have
a basic level of understanding.
And yet I know a lot of people that has happened to. Better to replace a $125K a year programmer
with one who will do the same, or even less, job for $50K.
This could backfire if the programmers don't find the work or pay to match their expectations...
Programmers, after all, tend to make very good hackers if their minds are turned to it.
While I like your idea of what designing a computer program involves, in my nearly 40
years experience as a programmer I have rarely seen this done.
How else can you do it?
Java is popular because it's a very versatile language - on this list it's the most popular
general-purpose programming language. (Above it, JavaScript is just a scripting language and HTML/CSS
aren't even programming languages.)
https://fossbytes.com/most-used-popular-programming-languages/
... and below it you have to go down to C# at 20% to come to another general-purpose language,
and even that's a Microsoft house language.
The "correct" choice of programming language is also based on how many people in the
shop know it, so they can maintain code that's written in it by someone else.
> job-specific training is completely different. What a joke to persuade public school districts
to pick up the tab on job training.
Well, it is either that or the kids themselves who have to pay for it and they are even
less prepared to do so. Ideally, college education should be tax payer paid but this is not the
case in the US. And the employer ideally should pay for the job related training, but again, it
is not the case in the US.
> The bigger problem is that nobody cares about the arts, and as expensive as education
is, nobody wants to carry around a debt on a skill that won't bring in the bucks
Plenty of people care about the arts but people can't survive on what the arts pay. That
was pretty much the case all through human history.
Since newspapers are consolidating and cutting jobs, we'd better clamp down on colleges offering BA degrees,
particularly in English Literature and journalism.
This article focuses on the US schools, but I can imagine it's the same in the UK. I don't think
these courses are going to be about creating great programmers capable of new innovations as much
as having a work force that can be their own IT Help Desk.
They'll learn just enough in these classes to do that.
Then most companies will be hiring for other jobs, but will want to make sure you have the IT skills
to serve as your own "help desk" (although you will get no extra salary for that IT work).
I find that quite remarkable - 40 years ago you must have been using assembler and with hardly
any memory to work with. If you blitzed through that without applying the thought processes described,
well...I'm surprised.
I was laid off at your age in the depths of the recent recession and I got a job. As
I said in another posting, it usually comes down to fresh skills and good personal references
who will vouch for your work-habits and how well you get on with other members of your team.
The great thing about software, as opposed to many other jobs, is that it can be done
at home while you're laid off. Write mobile (IOS or Android) apps or work on open source projects
and get stuff up on github. I've been to many job interviews with my apps loaded on mobile devices
so I could show them what I've done.
The situation has a direct comparison to today. It has nothing to do with land. There was a certain
amount of profit making work and not enough labour to satisfy demand. There is currently a certain
amount of profit making work and in many situations (especially unskilled low paid work) too much
labour.
So, is teaching people English or arithmetic all about reducing wages for the literate and numerate?
Or is this the most obtuse argument yet for avoiding what everyone in tech knows - even more
blatantly than in many other industries, wages are curtailed by offshoring; and in the US, by
having offshoring centres on US soil.
Well, speaking as someone who spends a lot of time trying to find really good programmers... frankly
there aren't that many about. We take most of ours from Eastern Europe and SE Asia, which is quite
expensive, given the relocation costs to the UK. But worth it.
So, yes, if more British kids learnt about coding, it might help a bit. But not much; the real
problem is that few kids want to study IT in the first place, and that the tuition standards in
most UK universities are quite low, even if they get there.
Robots, or AI, are already making us more productive. I can write programs today in an afternoon
that would have taken me a week a decade or two ago.
I can create a class and the IDE will take care of all the accessors and dependencies, enforce
our style-guide compliance, stub in the documentation, even most test cases, etc., and all I have
to write is the very specific stuff required by my application - the other 90% is generated for me.
Same with UI/UX - stubs in relevant event handlers, bindings, dependencies, etc.
Programmers are a zillion times more productive than in the past, yet the demand keeps growing
because so much more stuff in our lives has processors and code. Your car has dozens of processors
running lots of software; your TV, your home appliances, your watch, etc.
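To illustrate the kind of generated code described a few comments above: a minimal, hypothetical Java class (the names and values are invented for the example, not taken from any commenter's project) in which everything below the two field declarations is the sort of boilerplate a modern IDE can produce with a "generate code" action rather than being typed by hand.

    public class Customer {
        private final String name;
        private final String email;

        // Constructor, accessors and toString are typical IDE-generated boilerplate.
        public Customer(String name, String email) {
            this.name = name;
            this.email = email;
        }

        public String getName()  { return name; }
        public String getEmail() { return email; }

        @Override
        public String toString() {
            return "Customer{name='" + name + "', email='" + email + "'}";
        }

        public static void main(String[] args) {
            System.out.println(new Customer("Ada", "ada@example.com"));
        }
    }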
Schools really can't win. Don't teach coding, and you're raising a generation of button-pushers.
Teach it, and you're pandering to employers looking for cheap labour. Unions in London objected
to children being taught carpentry in the twenties and thirties, so it had to be renamed "manual
instruction" to get round it. Denying children useful skills is indefensible.
Getting children to learn how to write code, as part of core education, will be the first step
to the long overdue revolution. The rest of us will still have to stick to burning buildings down
and stringing up the aristocracy.
Did you misread? It seemed like he was emphasizing that learning to code, like learning art (and
sports and languages), will help them develop skills that benefit them in whatever profession
they choose.
While I like your idea of what designing a computer program involves, in my nearly 40 years' experience
as a programmer I have rarely seen this done. And, FWIW, IMHO choosing the tool (programming language)
might reasonably be expected to follow designing a solution; in practice this rarely happens.
No, these days it's Java all the way, from day one.
I'd advise parents that the classes they need to make sure their kids excel in are acting/drama.
There is no better way to get that promotion or increase your pay than being a skilled actor
in the job market. It's a fake-it-till-you-make-it deal.
This really has to be one of the silliest articles I read here in a very long time.
People, let your children learn to code. Even more, educate yourselves and start to code just
for the fun of it - look at it like a game.
The more people know how to code, the more likely they are to understand how stuff works. If you
were ever frustrated by how impossible it seems to shop on certain websites, learn to code and
you will be frustrated no more. You will understand the intent behind the process.
Even more, you will understand the inherent limitations and what the meaning of safety is. You
will be able to better protect yourself in a real-time connected world.
Learning to code won't turn your kid into a programmer, just like ballet or piano classes won't
mean they'll ever choose art as their livelihood. So let the children learn to code and learn
along with them.
Tipping power to employers in any profession by oversupply of labour is not a good thing. Bit
of a macabre example here, but... after the Black Death in the Middle Ages there was a huge
undersupply of labour. It produced a consistent rise in wages and conditions and economic development
for hundreds of years after this. I'm not suggesting a massive depopulation, but you can achieve the
same effects by altering the power balance. With decades of neoliberalism, the employers' side
of the power see-saw is sitting firmly in the mud and is producing very undesired results for
the vast majority of people.
I am 59, and it is not just the age aspect, it is the money aspect. They know you have experience
and expectations, and yet they believe that hiring two people at half the age and half the price
will replace your knowledge. I have been contracting in IT for 30 years, and now it is obvious
it is over. Experience at some point no longer mitigates age. I think I am at that point now.
Dear, dear, I know, I know, young people today . . . just not as good as we were. Everything is
just going down the loo . . . Just have a nice cuppa camomile (or chamomile if you're a Yank)
and try to relax ... " hey you kids, get offa my lawn !"
There are good reasons to teach coding. Too many of today's computer users are amazingly unaware
of the technology that allows them to send and receive emails, use their smart phones, and use
websites. Few understand the basic issues involved in computer security, especially as it relates
to their personal privacy. Hopefully some introductory computer classes could begin to remedy
this, and the younger the students the better.
Security problems are not strictly a matter of coding.
Security issues persist in tech. Clearly that is not a function of the size of the workforce.
I propose that it is a function of poor management and design skills. These are not taught in
any programming class I ever took. I learned these on the job and in an MBA program, and because
I was determined.
Don't confuse basic workforce training with an effective application of tech to authentic needs.
How can the "disruption" so prized in today's Big Tech do anything but aggravate our social
problems? Tech's disruption begins with a blatant ignorance of and disregard for causes, and believes
to its bones that a high tech app will truly solve a problem it cannot even describe.
Indeed, that idea has been around as long as COBOL and in practice has just made things worse.
The fact that many people outside of software engineering don't seem to realise is that the coding
itself is a relatively small part of the job.
So how many female and older software engineers are there who are unable to get a job? I'm one of
them: at 55 I'm finding it impossible to get a job, and unlike many 'developers' I know what I'm doing.
Training more people for an occupation will result in more people becoming qualified to perform
that occupation, regardless of the fact that many will perform poorly at it. A CS degree is
no guarantee of competency, but it is one of the best indicators of general qualification we have
at the moment. If you can provide a better metric for analyzing the underlying qualifications
of the labor force, I'd love to hear it.
Regarding your anecdote, while interesting, it is poor evidence when compared to the aggregate
statistical data analyzed in the EPI study.
Good grief. It's not job-specific training. You sound like someone who knows nothing about
computer programming.
Designing a computer program requires analysing the task; breaking it down into its components,
prioritising them and identifying interdependencies, and figuring out which parts of it can be
broken out and done separately. Expressing all this in some programming language like Java, C,
or C++ is quite secondary.
So once you learn to organise a task properly you can apply it to anything - remodeling a house,
planning a vacation, repairing a car, starting a business, or administering a (non-software) project
at work.
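As a small, made-up illustration of the decomposition described above (the task, the numbers and the method names are invented for this sketch and are not from the comment): the job "estimate a trip budget" broken into independent steps, each a small Java method, with the ordering and interdependencies made explicit in main().

    public class TripBudget {
        // Each step can be written, tested and revised separately.
        static double transportCost(double km, double costPerKm) { return km * costPerKm; }
        static double lodgingCost(int nights, double perNight)   { return nights * perNight; }
        static double foodCost(int days, double perDay)          { return days * perDay; }

        public static void main(String[] args) {
            // The overall task is just the composition of the independent parts.
            double total = transportCost(600, 0.20)
                         + lodgingCost(3, 90.0)
                         + foodCost(4, 25.0);
            System.out.printf("Estimated total: %.2f%n", total);
        }
    }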
"... Thank you. The kids that spend high school researching independently and spend their nights hacking just for the love of it and getting a job without college are some of the most competent I've ever worked with. Passionless college grads that just want a paycheck are some of the worst. ..."
"... how about how new labor tried to sign away IT access in England to India in exchange for banking access there, how about the huge loopholes in bringing in cheap IT workers from elsewhere in the world, not conspiracies, but facts ..."
"... And I've never recommended hiring anyone right out of school who could not point me to a project they did on their own, i.e., not just grades and test scores. I'd like to see an IOS or Android app, or a open-source component, or utility or program of theirs on GitHub, or something like that. ..."
"... most of what software designers do is not coding. It requires domain knowledge and that's where the "smart" IDEs and AI coding wizards fall down. It will be a long time before we get where you describe. ..."
Instant feedback is one of the things I really like about programming, but it's also the
thing that some people can't handle. As I'm developing a program, all day long the compiler is
telling me about build errors or warnings, or when I go to execute it, it crashes or produces
unexpected output, etc. Software engineers are bombarded all day with negative feedback and
little failures. You have to be thick-skinned for this work.
How is it shallow and lazy? I'm hiring for the real world so I want to see some real world
accomplishments. If the candidate is fresh out of university they can't point to work
projects in industry because they don't have any. But they CAN point to stuff they've done on
their own. That shows both motivation and the ability to finish something. Why do you object
to it?
Thank you. The kids that spend high school researching independently and spend their nights
hacking just for the love of it and getting a job without college are some of the most
competent I've ever worked with. Passionless college grads that just want a paycheck are some
of the worst.
There is a big difference between "coding" and programming. Coding for a smart phone app is a
matter of calling functions that are built into the device. For example, there are functions
for the GPS or for creating buttons or for simulating motion in a game. These are what we
used to call subroutines. The difference is that whereas we had to write our own subroutines,
now they are just preprogrammed functions. How those functions are written is of little or no
importance to today's coders.
Nor are they able to program on that level. Real programming
requires not only a knowledge of programming languages, but also a knowledge of the underlying
algorithms that make up actual programs. I suspect that "coding" classes operate on a quite
superficial level.
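A small sketch of the distinction drawn in the comment above, using the standard Java library rather than any particular phone API (the GPS and button functions mentioned are not reproduced here): sorting a list by calling a prewritten routine versus writing the equivalent subroutine yourself. Only the second requires knowing the underlying algorithm; both produce the same result.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class SortTwoWays {
        // "Coding": delegate to a function somebody else already wrote.
        static void libSort(List<Integer> xs) {
            Collections.sort(xs);
        }

        // "Programming": implement the algorithm (insertion sort) yourself.
        static void ownSort(List<Integer> xs) {
            for (int i = 1; i < xs.size(); i++) {
                int key = xs.get(i);
                int j = i - 1;
                while (j >= 0 && xs.get(j) > key) {
                    xs.set(j + 1, xs.get(j));
                    j--;
                }
                xs.set(j + 1, key);
            }
        }

        public static void main(String[] args) {
            List<Integer> a = new ArrayList<>(List.of(3, 1, 2));
            List<Integer> b = new ArrayList<>(List.of(3, 1, 2));
            libSort(a);
            ownSort(b);
            System.out.println(a + " " + b);  // [1, 2, 3] [1, 2, 3]
        }
    }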
It's not about the amount of work or the amount of labor. It's about the comparative
availability of both and how that affects the balance of power, and that in turn affects the
overall quality of life for the 'majority' of people.
Most of this is not true. Peter Nelson gets it right by talking about breaking steps down and
thinking rationally. The reason you can't just teach the theory, however, is that humans
learn much better with feedback. Think about trying to learn how to build a fast car, but you
never get in and test its speed. That would be silly. Programming languages take the system
of logic that has been developed for centuries and give instant feedback on the results.
It's a language of rationality.
This article is about the US. The tech industry in the EU is entirely different, and
basically moribund. Where is the EU's Microsoft, Apple, Google, Amazon, Oracle, Intel,
Facebook, etc, etc? The opportunities for exciting interesting work, plus the time and
schedule pressures that force companies to overlook stuff like age because they need a
particular skill Right Now, don't exist in the EU. I've done very well as a software engineer
in my 60's in the US; I cannot imagine that would be the case in the EU.
Sorry, but that's just not true. I doubt you are really still programming; more likely you're a
quasi-programmer, really a manager who likes to keep their hand in. You certainly aren't busy,
as you've been posting all over this CiF. Also, why would you try to hire someone with such disparate
skillsets? It makes no sense at all.
Oh, and you'd be correct that I do have workplace issues, i.e. I have a disability and I also
suffer from depression, but that shouldn't bar me from employment. And regarding my
skills going stale, that again contradicts your statement above that it's about
planning/analysis/algorithms etc. (which to some extent I agree with).
Not at all, it's really egalitarian. If I want to hire someone to paint my portrait, the best
way to know if they're any good is to see their previous work. If they've never painted a
portrait before then I may want to go with the girl who has.
Right? It's ridiculous. "Hey, there's this industry you can train for that is super valuable
to society and pays really well!"
Then Ben Tarnoff, "Don't do it! If you do you'll drive down wages for everyone else in the
industry. Build your fire starting and rock breaking skills instead."
How about how New Labour tried to sign away IT access in England to India in exchange for
banking access there, how about the huge loopholes in bringing in cheap IT workers from
elsewhere in the world - not conspiracies, but facts.
I think the difference between gifted and not is motivation. But I agree it's not innate. The
kid who stayed up all night in high school hacking into the school server to fake his coding
class grade is probably more gifted than the one who spent 4 years in college getting a BS in
CS because someone told him he could get a job when he got out.
I've done some hiring in my life and I always ask them to tell me about stuff they did on
their own.
As several people have pointed out, writing a computer program requires analyzing and
breaking down a task into steps, identifying interdependencies, prioritizing the order, and
figuring out what parts can be organized into separate tasks that can be done separately, etc.
These are completely independent of the language - I've been programming for 40 years in
everything from FORTRAN to APL to C to C# to Java and it's all the same. Not only that but
they transcend programming - they apply to planning a vacation, remodeling a house, or fixing
a car.
Neither coding nor having a bachelor's degree in computer science makes you a suitable job
candidate. I've done a lot of recruiting and interviews in my life, and right now I'm trying
to hire someone. And I've never recommended hiring anyone right out of school who
could not point me to a project they did on their own, i.e., not just grades and test scores.
I'd like to see an IOS or Android app, or an open-source component, or a utility or program of
theirs on GitHub, or something like that.
That's the thing that distinguishes software from many other fields - you can do something
real and significant on your own. If you haven't managed to do so in 4 years of college
you're not a good candidate.
Within the next year coding will be old news and you will simply be able to describe
things in your native language in such a way that the machine will be able to execute any set
of instructions you give it.
In a sense that's already true, as i noted elsewhere. 90% of the code in my projects (Java
and C# in their respective IDEs) is machine generated. I do relatively little "coding". But
the flaw in your idea is this: most of what software designers do is not coding. It requires
domain knowledge and that's where the "smart" IDEs and AI coding wizards fall down. It will
be a long time before we get where you describe.
Completely agree. At the highest levels there is more work that goes into managing complexity and making
sure nothing is missed than in making the wheels turn and the beepers beep.
I've actually interviewed people for very senior technical positions in Investment Banks who
had all the fancy talk in the world and yet failed at some very basic "write me a piece of
code that does X" tests.
Next hurdle on is people who have learned how to deal with certain situations and yet
don't really understand how it works so are unable to figure it out if you change the problem
parameters.
That said, the average coder is only slightly beyond this point. The ones who can take into
account maintainability and flexibility for future enhancements when developing are already a
minority, and those who can understand the why of software development process steps, design
software system architectures or do a proper Technical Analysis are very rare.
Hubris.
It's easy to mistake efficiency born of experience as innate talent. The difference
between a 'gifted coder' and a 'non gifted junior coder' is much more likely to be 10 or 15
years sitting at a computer, less if there are good managers and mentors involved.
Politicians love the idea of teaching children to 'code', because it sounds so modern, and
nobody could possibly object... could they? Unfortunately it simply shows up their utter
ignorance of technical matters, because there isn't a language called 'coding'. Computer
programming languages have changed enormously over the years, and continue to evolve. If you
learn the wrong language you'll be about as welcome in the IT industry as a lamp-lighter or a
comptometer operator.
The pace of change in technology can render skills and qualifications obsolete in a matter
of a few years, and only the very best IT employers will bother to retrain their staff - it's
much cheaper to dump them. (Most IT posts are outsourced through agencies anyway - those that
haven't been off-shored. )
And this isn't even a good conspiracy theory; it's a bad one. He offers no evidence
that there's an actual plan or conspiracy to do this. I'm looking for an account of where the
advocates of coding education met to plot this in some castle in Europe or maybe a secret
document like "The Protocols of the Elders of Google", or some such.
Tool Users Vs Tool Makers.
The really good coders actually get why certain things work as they do and can adjust them
for different conditions. The mass produced coders are basically code copiers and code gluing
specialists.
People who get Masters and PhD's in computer science are not usually "coders" or software
engineers - they're usually involved in obscure, esoteric research for which there really is
very little demand. So it doesn't surprise me that they're unemployed. But if someone has a
Bachelor's in CS and they're unemployed I would have to wonder what they spent
their time at university doing.
The thing about software that distinguishes it from lots of other fields is that you can
make something real and significant on your own . I would expect any recent CS
major I hire to be able to show me an app or an open-source component or something similar
that they made themselves, and not just test scores and grades. If they could not then I
wouldn't even think about hiring them.
Fortunately for those of us who are actually good at coding, the difference in productivity
between a gifted coder and a non-gifted junior developer is something like 100-fold.
Knowing how to code and actually being efficient at creating software programs and systems
are about as far apart as knowing how to write and actually being able to write a bestselling
exciting Crime trilogy.
I do think there is excess supply of software programmers. There is only a modest number
of decent jobs, say as an algorithms developer in finance, general architecture of complex
systems or to some extent in systems security.
This article is about coding; most of those jobs require very little of that.
Most very high paying jobs in the technology sector are in the same standard upper
management roles as in every other industry.
How do you define "high paying". Everyone I know (and I know a lot because I've been a sw
engineer for 40 years) who is working fulltime as a software engineer is making a
high-middle-class salary, and can easily afford a home, travel on holiday, investments,
etc.
> Already there. I take it you skipped right past the employment prospects for US STEM
grads - 50% chance of finding STEM work.
That just means 50% of them are no good and need to develop their skills further or try
something else.
Not everyone with a STEM degree from some 3rd-rate college is capable of doing complex IT or
STEM work.
So, is teaching people English or arithmetic all about reducing wages for the literate
and numerate?
Yes. Haven't you noticed how wage growth has flattened? That's because some do-gooders
thought it would be a fine idea to educate the peasants. There was a time when only the
well-to-do knew how to read and write, and that's why the well-to-do were well-to-do.
Education is evil. Stop educating people and then those of us who know how to read and write
can charge them for reading and writing letters and email. Better yet, we can have Chinese
and Indians do it for us and we just charge a transaction fee.
Masses of the public use cars; it doesn't mean millions need schooling in auto
mechanics. Same for software coding. We aren't even using those who have Bachelors, Masters
and PhDs in CS.
"..importing large numbers of skilled guest workers from other countries through the H1-B
visa program..."
"skilled" is good. H1B has long ( appx 17 years) been abused and turned into trafficking
scheme. One can buy H1B in India. Powerful ethnic networks wheeling & dealing in US &
EU selling IT jobs to essentially migrants.
The real IT wages haven't been stagnant but steadily falling from the 90s. It's easy to
see why. $82K/year IT wage was about average in the 90s. Comparing the prices of housing
(& pretty much everything else) between now gives you the idea.
> not every kid wants or needs to have their soul sucked out of them sitting in front of a
screen full of code for some idiotic service that some other douchbro thinks is the next
iteration of sliced bread
Taking a couple of years of programming is not enough to do this as a job, don't
worry.
But learning to code is like learning maths - it helps to develop logical thinking, which
will benefit you in every area of your life.
"... A lot of basic entry level jobs require a good level of Excel skills. ..."
"... Programming is a cultural skill; master it, or even understand it on a simple level, and you understand how the 21st century works, on the machinery level. To bereave the children of this crucial insight is to close off a door to their future. ..."
"... What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra. ..."
"... Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it. ..."
"... We've seen this kind of tactic for some time now. Silicon Valley is turning into a series of micromanaged sweatshops (that's what "agile" is truly all about) with little room for genuine creativity, or even understanding of what that actually means. I've seen how impossible it is to explain to upper level management how crappy cheap developers actually diminish productivity and value. All they see is that the requisition is filled for less money. ..."
"... Libertarianism posits that everyone should be free to sell their labour or negotiate their own arrangements without the state interfering. So if cheaper foreign labour really was undercutting American labout the Libertarians would be thrilled. ..."
"... Not producing enough to fill vacancies or not producing enough to keep wages at Google's preferred rate? Seeing as research shows there is no lack of qualified developers, the latter option seems more likely. ..."
"... We're already using Asia as a source of cheap labor for the tech industry. Why do we need to create cheap labor in the US? ..."
There are very few professional scribes nowadays; a good level of reading & writing is
simply a default even for the lowest paid jobs. A lot of basic entry level jobs require a
good level of Excel skills. Several years from now basic coding will be necessary to
manipulate basic tools for entry level jobs, especially as increasingly a lot of real code
will be generated by expert systems supervised by a tiny number of supervisors. Coding jobs
will go the same way that trucking jobs will go when driverless vehicles are perfected.
Offer the class but not mandatory. Just like I could never succeed playing football others
will not succeed at coding. The last thing the industry needs is more bad developers showing
up for a paycheck.
Programming is a cultural skill; master it, or even understand it on a simple level, and you
understand how the 21st century works, on the machinery level. To bereave the children of
this crucial insight is to close off a door to their future.
What's next, keep them off Math, because, you know...
That's some crystal ball you have there. English teachers will need to know how to code? Same
with plumbers? Same with janitors, CEOs, and anyone working in the service industry?
The economy isn't a zero-sum game. Developing a more skilled workforce that can create more
value will lead to economic growth and improvement in the general standard of living.
Talented coders will start new tech businesses and create more jobs.
What a dumpster argument. I am not a programmer or even close, but a basic understanding of
coding has been important to my professional life. Coding isn't just about writing software.
Understanding how algorithms work, even simple ones, is a general skill on par with algebra.
But it isn't just about coding for Tarnoff. He seems to hold education in contempt
generally. "The far-fetched premise of neoliberal school reform is that education can mend
our disintegrating social fabric." If they can't literally fix everything let's just get rid
of them, right?
Never mind that a good education is clearly one of the most important things
you can do for a person to improve their quality of life wherever they live in the world.
It's "neoliberal," so we better hate it.
I agree with the basic point. We've seen this kind of tactic for some time now. Silicon
Valley is turning into a series of micromanaged sweatshops (that's what "agile" is truly all
about) with little room for genuine creativity, or even understanding of what that actually
means. I've seen how impossible it is to explain to upper level management how crappy cheap
developers actually diminish productivity and value. All they see is that the requisition is
filled for less money.
The bigger problem is that nobody cares about the arts, and as expensive as education is,
nobody wants to carry around a debt on a skill that won't bring in the bucks. And
smartphone-obsessed millennials have too short an attention span to fathom how empty their
lives are, devoid of the aesthetic depth as they are.
I can't draw a definite link, but I think algorithm fails, which are based on fanatical
reliance on programmed routines as the solution to everything, are rooted in the shortage of
education and cultivation in the arts.
Economics is a social science, and all this is merely a reflection of shared cultural
values. The problem is, people think it's math (it's not) and therefore set in stone.
Libertarianism posits that everyone should be free to sell their labour or negotiate their
own arrangements without the state interfering. So if cheaper foreign labour really was
undercutting American labour the Libertarians would be thrilled.
But it's not. I'm in my 60's and retiring but I've been a software engineer all my life.
I've worked for many different companies, and in different industries and I've never had any
trouble competing with cheap imported workers. The people I've seen fall behind were ones who
did not keep their skills fresh. When I was laid off in 2009 in my mid-50's I made sure my
mobile-app skills were bleeding edge (in those days ANYTHING having to do with mobile was
bleeding edge) and I used to go to job interviews with mobile devices to showcase what I
could do. That way they could see for themselves and not have to rely on just a CV.
The older guys who fell behind did so because their skills and toolsets had become
obsolete.
Now I'm trying to hire a replacement to write Android code for use in industrial
production and struggling to find someone with enough experience. So where is this oversupply
I keep hearing about?
Not producing enough to fill vacancies or not producing enough to keep wages at Google's
preferred rate? Seeing as research shows there is no lack of qualified developers, the latter
option seems more likely.
It's about ensuring those salaries no longer exist, by creating a source of cheap labor
for the tech industry.
We're already using Asia as a source of cheap labor for the tech industry. Why do we need
to create cheap labor in the US? That just seems inefficient.
There was never any need to give our jobs to foreigners. That is, if you are comparing the
production of domestic vs. foreign workers. The sole need was, and is, to increase profits.
Schools MAY be able to fix big social problems, but only if they teach a well-rounded
curriculum that includes classical history and the humanities. Job-specific training is
completely different. What a joke to persuade public school districts to pick up the tab on
job training. The existing social problems were not caused by a lack of programmers, and
cannot be solved by Big Tech.
I agree with the author that computer programming skills are not that limited in
availability. Big Tech solved the problem of the well-paid professional some years ago by
letting them go (these were mostly workers in their 50s) and replacing them with H1-B
visa-holders from India, who work for a fraction of what their experienced American
counterparts earn.
It is all about profits. Big Tech is no different than any other "industry."
Supply of apples does not affect the demand for oranges. Teaching coding in high school does
not necessarily alter the supply of software engineers. I studied Chinese History and geology
at University but my doing so has had no effect on the job prospects of people doing those
things for a living.
You would be surprised just how much a little coding knowledge has transformed my ability to
do my job (a job that is not directly related to IT at all).
Because teaching coding does not affect the supply of actual engineers. I've been a
professional software engineer for 40 years and coding is only a small fraction of what I do.
You and the linked article don't know what you're talking about. A CS degree does not equate
to a productive engineer.
A few years ago I was on the recruiting and interviewing committee to try to hire some
software engineers for a scientific instrument my company was making. The entire team had
about 60 people (hw, sw, mech engineers) but we needed 2 or 3 sw engineers with math and
signal-processing expertise. The project was held up for SIX months because we could
not find the people we needed. It would have taken a lot longer than that to train someone up
to our needs. Eventually we brought in some Chinese engineers which cost us MORE than what we
would have paid for an American engineer when you factor in the agency and visa
paperwork.
Modern software engineers are not just generic interchangeable parts - 21st century
technology often requires specialised scientific, mathematical, production or business
domain-specific knowledge and those people are hard to find.
Visa jobs are part of trade agreements. To be very specific, the US government (and the EU) trade Western
jobs for market access in the East. http://www.marketwatch.com/story/in-india-british-leader-theresa-may-preaches-free-trade-2016-11-07 There is no shortage. This is selling off the West's middle class. Take a look at remittances on Wikipedia and you'll get a good idea just how much this costs the
US and EU economies, for the sake of record profits to Western industry.
I see advantages in teaching kids to code, and for kids to make arduino and other CPU powered
things. I don't see a lot of interest in science and tech coming from kids in school. There
are too many distractions from social media and game platforms, and not much interest in
developing tools for future tech and science.
Although coding per se is a technical skill it isn't designing or integrating systems. It is
only a small, although essential, part of the whole software engineering process. Learning to
code just gets you up the first steps of a high ladder that you need to climb a fair way if
you intend to use your skills to earn a decent living.
Friend of mine in the SV tech industry reports that they are about 100,000 programmers
short in just the internet security field.
Y'all are trying to create a problem where there isn't one. Maybe we shouldn't teach them
how to read either. They might want to work somewhere besides the grill at McDonalds.
Within the next year coding will be old news and you will simply be able to describe things
in your native language in such a way that the machine will be able to execute any set of
instructions you give it. Coding is going to change from its purely abstract form that is not
utilized at peak; if you can describe what you envision in an effective, concise manner you
could become a very good coder very quickly, and competence will be determined entirely by
imagination, while the barriers to entry will all but disappear.
Total... utter... no other way... huge... will only get worse... everyone... (not a very
nuanced commentary is it).
I'm glad pieces like this are mounting, it is relevant that we counter the mix of
messianism and opportunism of Silicon Valley propaganda with convincing arguments.
They aren't immigrants. They're visa indentured foreign workers. Why does that matter? It's
part of the cheap+indentured hiring criteria. If it were only cheap, they'd be lowballing
offers to citizens and US new grads.
Correct premises:
- proletarianize programmers
- many qualified graduates simply can't find jobs.
Invalid conclusion:
- The problem is there aren't enough good jobs to be trained for.
That conclusion only makes sense if you skip right past ... "importing large numbers of skilled guest workers from other countries through the
H1-B visa program. These workers earn less than their American counterparts, and
possess little bargaining power because they must remain employed to keep their status"
Hiring Americans doesn't "hurt" their record profits. It's incessant greed and collusion
with our corrupt congress.
This column was really annoying. I taught my students how to program when I was given a free
hand to create the computer studies curriculum for a new school I joined. (Not in the UK
thank Dog). 7th graders began with studying the history and uses of computers and
communications tech. My 8th graders learned about computer logic (AND, OR, NOT, etc) and moved
on with QuickBASIC in the second part of the year. My 9th graders learned about databases and
SQL and how to use HTML to make their own Web sites. Last year I received a phone call from
the father of one student thanking me for creating the course, his son had just received a
job offer and now works in San Francisco for Google. I am so glad I taught them "coding" (UGH) as the writer puts it, rather than arty-farty
subjects not worth a damn in the jobs market.
I live and work in Silicon Valley and you have no idea what you are talking about. There's no
shortage of coders at all. Terrific coders are let go because of their age and the
availability of much cheaper foreign coders (no, I am not opposed to immigration).
Looks like you pissed off a ton of people who can't write code and are none too happy with you
pointing out the reason they're slinging insurance for Geico.
I think you're quite right that coding skills will eventually enter the mainstream and
slowly bring down the cost of hiring programmers.
The fact is that even if you don't get paid to be a programmer you can absolutely benefit
from having some coding skills.
There may however be some kind of major coding revolution with the advent of quantum
computing. The way code is written now could become obsolete.
A well-argued article that hits the nail on the head. Amongst any group of coders, very few
are truly productive, and they are self starters; training is really needed to do the admin.
There is not a huge skills shortage. That is why the author linked this EPI report analyzing
the data to prove exactly that. This may not be what people want to believe, but it is
certainly what the numbers indicate. There is no skills gap.
Yes. China and India are indeed training youth in coding skills. In order that they take jobs
in the USA and UK! It's been going on for 20 years and has resulted in many experienced IT
staff struggling to get work at all and, even if they can, to suffer stagnating wages.
Has anyone's job been at risk from a 16-year-old who can cobble together a couple of lines of
JavaScript since the dot-com bubble?
Good luck trying to teach a big enough pool of US school kids regular expressions let
alone the kind of test driven continuous delivery that is the norm in the industry now.
> A lot of resumes come across my desk that look qualified on paper, but that's not the
same thing as being able to do the job
I have exactly the same experience. There is undeniably a skill gap.
It takes about a year for a skilled professional to adjust and learn enough to become
productive, it takes about 3-5 years for a college grad.
It is nothing new. But the issue is that as the college grad gets trained, another company
steals him or her. And also keep in mind that all this time you are doing your own job and training the new
employee as time permits. Many companies in the US cut the non-profit departments (such as IT)
to the bone; we cannot afford to lose a person and then train another replacement for 3-5
years.
The solution? Hire a skilled person. But that means nobody is training college grads, and
in 10-20 years we are looking at a skill shortage to the point where the only option is
bringing in foreign labor.
American cut-throat companies that care only about the bottom line cannibalized
themselves.
Heh. You are not a coder, I take it. :) Going to be a few decades before even the
easiest coding jobs vanish.
Given how shit most coders of my acquaintance have been - especially in matters
of work ethic, logic, matching s/w to user requirements and willingness to test and correct
their gormless output - most future coding work will probably be in the area of disaster
recovery. Sorry, since the poor snowflakes can't face the sad facts, we have to call it
"business continuation" these days, don't we?
The demonization of Silicon Valley is clearly the next place to put all blame. Look what
"they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get
a rope!
I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San
Jose transform into a concrete jungle. There used to be quite a bit of semiconductor
equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings
have the same name : AVAILABLE. Most equipment and device manufacturing has moved to
Asia.
Programming started with binary, then machine code (hexadecimal or octal) and moved to
assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC,
PL-1, COBOL, PASCAL, C (and all its "+'s") followed making programming easier for the less
talented. Now the script based languages (HTML, JAVA, etc.) are even higher level and
accessible to nearly all. Programming has become a commodity and will be priced like milk,
wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a
career.
(acolyer.org) Posted by EditorDavid on Saturday September 23, 2017 @05:19PM from the
dynamic-discussions dept. "Static vs dynamic typing is always one of those topics that attracts
passionately held positions," writes the Morning Paper -- reporting on an "encouraging" study that attempted
to empirically evaluate the efficacy of statically-typed systems on mature, real-world code
bases. The study was conducted by Christian Bird at Microsoft's "Research in Software
Engineering" group with two researchers from University College London. Long-time Slashdot
reader phantomfive writes:
This study looked at bugs found in open source JavaScript code. Looking through the commit
history, they enumerated the bugs that would have been caught if a more strongly typed language (like
TypeScript) had been used. They found that a strongly typed language would have reduced bugs by
15%. Does this make you want to avoid Python?
Portable Batch System (PBS) is software used in cluster computing to schedule jobs
on multiple nodes. PBS was started as a contract project by NASA. PBS is available in three different
versions: 1) Torque: Terascale Open-source Resource and QUEue Manager (Torque) is developed
from OpenPBS. It is developed and maintained by Adaptive Computing Enterprises. It is used as a distributed
resource manager and performs well when integrated with the Maui cluster scheduler.
2) PBS Professional (PBS Pro): the commercial version of PBS, offered by Altair Engineering. 3)
OpenPBS: the open source version released in 1998, developed by NASA. It is not actively developed.
In this article we concentrate on a tutorial of PBS Pro, which is similar to some extent
to Torque.
PBS consists of three basic units: the server, MoM (execution host), and the scheduler.
Server: It is the heart of PBS, with an executable named "pbs_server". It uses the IP network
to communicate with the MoMs. The PBS server creates a batch job and modifies jobs requested from different
MoMs. It keeps track of all resources available and assigned in the PBS complex across the different MoMs.
It also monitors the PBS licenses for jobs; if your license expires it will throw an error.
Scheduler: The PBS scheduler uses various algorithms to decide when a job should be executed,
and on which node or vnode, using the details of available resources from the server. Its executable is "pbs_sched".
MoM: MoM is the mother of all execution jobs, with the executable "pbs_mom". When a MoM gets a
job from the server it actually executes that job on the host. Each node must have a MoM running to
participate in execution.
Installation and setting up of the environment (cluster with multiple nodes)
Extract the compressed PBS Pro software and go to the path of the extracted folder; it contains an "INSTALL"
file. Make that file executable, for example with "chmod +x ./INSTALL", and then run it. It will ask for the "execution directory" where you want to store the
executables (such as qsub, pbsnodes, qdel etc.) used for different PBS operations, and the "home directory"
which contains the different configuration files. Keep both as default for simplicity. There are three
kinds of installation available:
1) Server node: PBS server, scheduler, MoM and the commands are installed on this node. The PBS server
will keep track of all execution MoMs present in the cluster and will schedule jobs on these execution
nodes. As the MoM and the commands are also installed on the server node, it can be used to submit and execute
jobs. 2) Execution node: this type installs the MoM and the commands. These nodes are added as available
nodes for execution in a cluster. They are also allowed to submit jobs to the server, with specific
permission from the server, as we are going to see below. They are not involved in scheduling. This kind
of installation asks for the PBS server which is used to submit jobs, get the status of jobs etc. 3) Client
node: these are nodes which are only allowed to submit a PBS job to the server, with specific permission
from the server, and to see the status of jobs. They are not involved in execution or scheduling.
Creating vnodes in PBS Pro:
We can create multiple vnodes in a single node, each containing some part of the node's resources.
We can execute jobs on these vnodes with the specified allocated resources. We can create vnodes using the qmgr command, which is the command line interface to the PBS server. Commands like the ones given below
can be used to create vnodes with qmgr.
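A plausible sketch of such commands (the exact qmgr syntax varies between PBS Pro releases; on some releases the sharing attribute can only be set through a vnode definition file like the one shown just below, and ngpus is assumed here to be defined as a custom resource):
qmgr -c "create node Vnode1"
qmgr -c "set node Vnode1 resources_available.ncpus=8"
qmgr -c "set node Vnode1 resources_available.mem=10gb"
qmgr -c "set node Vnode1 resources_available.ngpus=1"
qmgr -c "set node Vnode1 sharing=default_excl"
qmgr -c "create node Vnode2"
qmgr -c "set node Vnode2 resources_available.ncpus=8"
qmgr -c "set node Vnode2 resources_available.mem=10gb"
qmgr -c "set node Vnode2 resources_available.ngpus=1"
qmgr -c "set node Vnode2 sharing=default_excl"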
The commands above will create two vnodes named Vnode1 and Vnode2, each with 8 CPU cores, 10gb of memory
and 1 GPU, with the sharing mode default_excl, which means such a vnode will execute exclusively
only one job at a time, independent of the number of resources free. The sharing mode can instead be default_shared,
which means any number of jobs can run on that vnode until all resources are busy. All the attributes
which can be used for vnode creation are documented in the PBS Pro reference guide.
You can also create a file in the "/var/spool/PBS/mom_priv/config.d/" folder with any
name you want; I prefer hostname-vnode. A sample is given below. PBS will pick up all files in this folder, even
temporary files (ending with ~), and will apply the configuration for the same vnode again, so delete unnecessary files to
get a proper configuration of the vnodes.
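A minimal sketch of such a file, assuming the host is named wolverine (as in the pbsnodes example later) and using the hostname-vnode naming scheme mentioned above; the layout follows the version 2 MoM configuration format described in the PBS Pro guides:
$configversion 2
wolverine: resources_available.ncpus = 0
wolverine: resources_available.mem = 0
wolverine-vnode1: resources_available.ncpus = 8
wolverine-vnode1: resources_available.mem = 10gb
wolverine-vnode1: resources_available.ngpus = 1
wolverine-vnode1: sharing = default_excl
wolverine-vnode2: resources_available.ncpus = 8
wolverine-vnode2: resources_available.mem = 10gb
wolverine-vnode2: resources_available.ngpus = 1
wolverine-vnode2: sharing = default_excl
After adding or changing such a file, restart pbs_mom on that host so that the new vnode definitions are picked up.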
Here in this example we set the resources available on the default (natural) node to 0, because
by default PBS will detect and allocate all available resources to the default node, with the sharing attribute
set to default_shared. That causes a problem: all jobs will by default get scheduled on
that default vnode, because its sharing type is default_shared. If you want to schedule jobs
on your customized vnodes, you should set the resources available to 0 on the default vnode.
(This configuration is read again every time you restart the PBS server.)
PBS get status:
Get status of jobs:
qstat will give details about jobs, their states, etc.
Useful options:
To print details about all jobs which are running or in the hold state: qstat -a
To print details about subjobs of a job array which are running or in the hold state: qstat -ta
Get status of PBS nodes and vnodes:
The "pbsnodes -a" command will provide a list of all nodes present in the PBS complex, with
their resources available, resources assigned, status, etc.
To get details of all the nodes and vnodes you created, use the "pbsnodes -av" command.
You can also specify a node or vnode name to get detailed information about that specific node or vnode,
e.g.
pbsnodes wolverine (here wolverine is the hostname of a node in the PBS complex which is mapped
to an IP address in the /etc/hosts file)
Job submission (qsub):
Jobs are submitted to the PBS server with the qsub command, from any node where the PBS commands
are installed. The server maintains queues of jobs; by default all jobs
are submitted to the default queue named "workq". You may create multiple queues by using the "qmgr" command,
which is the administrator interface mainly used to create, delete and modify queues and vnodes. The PBS server
decides which job is to be scheduled on which node or vnode, based on the scheduling policy and the privileges
set by the user. To schedule jobs the server continuously pings all MoMs in the PBS complex to get
details of the resources available and assigned. PBS assigns a unique job identifier, called the JobID, to each and every
job. For job submission PBS uses the "qsub" command, with the syntax shown below:
qsub script
Here the script may be a shell (sh, csh, tcsh, ksh, bash) script. PBS by default uses /bin/sh. You
may refer to the simple script given below:
#!/bin/sh
echo "This is PBS job"
When PBS completes execution of a job it will store any errors in a file named JobName.e{JobID},
e.g. Job1.e1492,
and the output in a file named
JobName.o{JobID}, e.g. Job1.o1492.
By default it will store these files in the current working directory (which can be seen with the pwd
command). You can change this location by giving a path with the -o option.
You may specify the job name with the -N option while submitting the job:
qsub -N firstJob ./test.sh
If you don't specify a job name, the files are named after the script instead. E.g.
qsub ./test.sh will store the results in the files test.sh.e1493 and
test.sh.o1493 in the current working directory.
OR
qsub -N firstJob -o /home/user1/ ./test.sh will store the result files (now named after the job,
e.g. firstJob.o1493) under the /home/user1/ directory.
If a submitted job terminates abnormally (errors in the job itself are not abnormal; those errors get stored
in the JobName.e{JobID} file), its error and output files are stored in the "/var/spool/PBS/undelivered/" folder.
In some cases you may need a job that should run only after the successful or unsuccessful completion
of some specified jobs. For that, PBS provides options such as the one sketched below.
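For example, a dependency on the successful completion of an earlier job can be expressed with the -W depend option (316.megamind is the job ID used in this example, and test.sh is the script from above):
qsub -W depend=afterok:316.megamind ./test.sh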
A job submitted this way will start only after successful completion of the job with job ID "316.megamind".
Like afterok, PBS has other dependency options such as beforeok,
beforenotok and afternotok. You may find all the details in the man page of qsub.
Submit a job with priority:
There are two ways in which we can set the priority of jobs which are going to execute.
1) Using a single queue, with different jobs having different priorities:
To change the sequence of jobs queued in an execution queue, open the "$PBS_HOME/sched_priv/sched_config"
file; normally $PBS_HOME is the "/var/spool/PBS/" folder. Open this file
and uncomment the line below if present, otherwise add it:
job_sort_key : "job_priority HIGH"
After saving this file you will need to restart the pbs_sched daemon on the head node; you may use
the command below:
service pbs restart
After completing this task you have to submit the job with the -p option to specify the priority
of the job within the queue. This value may range from -1024 to 1023, where -1024 is the lowest priority
and 1023 is the highest priority in the queue.
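For example (reusing the test.sh script from above):
qsub -p 1023 ./test.sh
qsub -p -1024 ./test.sh
The first job will be placed ahead of the second within the same queue.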
In this case PBS will execute the jobs in order of their priority values.
2) Using different queues with specified priorities: we discuss this in the PBS queue
section below.
With that approach, all jobs in queue 2 will complete first, then queue 3, then queue 1, when the priority
of queue 2 > queue 3 > queue 1, and the job execution flow follows the queue priorities.
PBS Pro can manage multiple queues as per the users' requirements. By default every job is queued in
"workq" for execution. There are two types of queues: execution and routing queues. Jobs
in an execution queue are taken by the PBS server for execution. Jobs in a routing queue cannot be executed;
they can only be redirected to an execution queue or to another routing queue by using the qmove
command. By default the queue "workq" is an execution queue. The sequence of jobs in a queue may be changed
by using the priority defined at job submission, as described above in the job submission section.
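For example, to move a job from a routing queue into the execution queue workq (the job ID is reused from the dependency example above):
qmove workq 316.megamind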
Useful qmgr commands:
First type qmgr, which is the manager interface of PBS Pro.
To create queue:
Qmgr:
create queue test2
To set type of queue you created:
Qmgr:
set queue test2 queue_type=execution
OR
Qmgr:
set queue test2 queue_type=route
To enable queue:
Qmgr:
set queue test2 enabled=True
To set priority of queue:
Qmgr:
set queue test2 priority=50
Jobs in the queue with the higher priority get first preference. Only after completion of all jobs in
the higher-priority queue are jobs in the lower-priority queue scheduled, so there is a high probability
of job starvation in the queue with the lower priority.
To start queue:
Qmgr:
set queue test2 started = True
To make all queues (at the default server) active:
Qmgr:
active queue @default
To restrict a queue to specified users: you need to set the acl_user_enable attribute to true, which
tells PBS to only allow users present in the acl_users list to submit jobs.
Qmgr:
set queue test2 acl_user_enable=True
To set the users permitted to submit jobs to a queue:
(in place of ".." you have to specify the hostname of a node in the PBS complex. A user name without a
hostname will allow users with the same name to submit jobs from all nodes (permitted to submit jobs)
in the PBS complex.)
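A hypothetical example (the user names and the node1 hostname are made up; the ".." in the note above stands for such a hostname):
Qmgr:
set queue test2 acl_users="user1@node1,user2@node1"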
To delete queues we created:
Qmgr:
delete queue test2
To see the status of all queues:
qstat -Q
You may specify a specific queue name: qstat -Q test2
To see the full details of all queues: qstat -Q -f
You may specify a specific queue name: qstat -Q -f test2
Betteridge's law of headlines
From Wikipedia, the free encyclopedia
Betteridge's law of headlines is one name for an adage that states: "Any headline that ends in a
question mark can be answered by the word no." It is named after Ian Betteridge, a British
technology journalist,[1][2] although the principle is much older. As with similar "laws" (e.g.,
Murphy's law), it is intended as a humorous adage rather than always being literally true.[3][4]
The maxim has been cited by other names since as early as 1991, when a published compilation of
Murphy's Law variants called it "Davis's law",[5] a name that also crops up online, without any
explanation of who Davis was.[6][7] It has also been called just the "journalistic principle",[8]
and in 2007 was referred to in commentary as "an old truism among journalists".[9]
Ian Betteridge's name became associated with the concept after he discussed it in a February 2009
article, which examined a previous TechCrunch article that carried the headline "Did Last.fm Just
Hand Over User Listening Data To the RIAA?":[10]
This story is a great demonstration of my maxim that any headline which ends in a question mark
can be answered by the word "no." The reason why journalists use that style of headline is that
they know the story is probably bullshit, and don't actually have the sources and facts to back it
up, but still want to run it.[1]
A similar observation was made by British newspaper editor Andrew Marr in his 2004 book My Trade,
among Marr's suggestions for how a reader should interpret newspaper articles:
If the headline asks a question, try answering 'no'. Is This the True Face of Britain's Young?
(Sensible reader: No.) Have We Found the Cure for AIDS? (No; or you wouldn't have put the question
mark in.) Does This Map Provide the Key for Peace? (Probably not.) A headline with a question mark
at the end means, in the vast majority of cases, that the story is tendentious or over-sold. It is
often a scare story, or an attempt to elevate some run-of-the-mill piece of reporting into a
national controversy and, preferably, a national panic. To a busy journalist hunting for real
information a question mark means 'don't bother reading this bit'.[11]
Outside journalism
In the field of particle physics, the concept is known as Hinchliffe's Rule,[12][13] after
physicist Ian Hinchliffe,[14] who stated that if a research paper's title is in the form of a
yes–no question, the answer to that question will be "no".[14] The adage was humorously led into a
Liar's paradox by a pseudonymous 1988 paper which bore the title "Is Hinchliffe's Rule
True?"[13][14] However, at least one article found that the "law" does not apply in research
literature.[15]
YouTube.com: At 31 min there is an interesting slide that gives some information about the
scale of systems at DOE. The current system has 18,700 nodes. The new system will have 50K to 500K
nodes, 32 cores per node (power consumption is ~15 MW, equal to a small city's power
consumption). The cost is around $200M.
James Maguire's
article raises some interesting questions as to why teaching Java to first
year CS / IT students is a bad idea. The article mentions both Ada and Pascal
– neither of which really "took off" outside of the States, with the former
being used mainly by contractors of the US Dept. of Defense.
This is my own, personal, extension to the article – which I agree with –
and why first year students should be taught C in first year. I'm biased though,
I learned C as my first language and extensively use C or C++ in
projects.
Java is a very high level language that has interesting features that make
it easier for programmers. The two main points, that I like about Java, are
libraries (although libraries exist for C / C++ ) and memory management.
Libraries
Libraries are fantastic. They offer an API and abstract a metric fuck tonne
of work that a programmer doesn't care about. I don't care how
the library works inside, just that I have a way of putting in input and getting
expected output (see my post on
abstraction).
I've extensively used libraries, even this week, for audio codec decoding. Libraries
mean not reinventing the wheel and reusing code (something students are discouraged
from doing, as it's plagiarism, yet in the real world you are rewarded). Again,
starting with C means that you appreciate the libraries more.
Memory Management
Managing your program's memory manually is a pain in the hole. We all know
this after spending countless hours finding memory leaks in our programs. Java's
inbuilt memory management tool is great – it saves me from having to do it.
However, if I had learned Java first, I would assume (for a short amount
of time) that all languages managed memory for you, or that
all languages were shite compared to Java because they don't
manage memory for you. Going from a "lesser" language like C to Java makes you
appreciate the memory manager.
What's so great about C?
In the context of a first language to teach students, C is perfect. C is
Relatively simple
Procedural
Lacks OOP features, which confuse freshers
Low level
Fast
Imperative
Weakly typed
Easy to get bugs
Java is a complex language that will spoil a first year student. However,
as noted, CS / IT courses need to keep student retention rates high. As an example,
my first year class was about 60 people, final year was 8. There are ways to
keep students, possibly with other, easier, languages in the second semester
of first year – so that students don't hate the subject when choosing the next
years subject post exams.
Conversely, I could say that you should teach Java in first
year and expand on more difficult languages like C or assembler (which should
be taught side by side, in my mind) later down the line – keeping retention
high in the initial years, and drilling down with each successive semester to
more systems level programming.
There's a time and place for Java, which I believe is third year or final
year. This will keep Java fresh in the students' minds while they are going job
hunting after leaving the bosom of academia. This will give them a good head
start, as most companies are Java houses in Ireland.
In
computer science, abstraction is the process by which
data and
programs are defined with a
representation similar to its meaning (semantics),
while hiding away the
implementation details. Abstraction tries to reduce and factor out details
so that the
programmer
can focus on a few concepts at a time. A system can have several abstraction
layers whereby different meanings and amounts of detail are exposed
to the programmer. For example,
low-level abstraction layers expose details of the
hardware
where the program is
run, while high-level layers deal with the
business
logic of the program.
That might be a bit too wordy for some people, and not at all clear. Here's
my analogy of abstraction.
Abstraction is like a car
A car has a few features that makes it unique.
A steering wheel
Accelerator
Brake
Clutch
Transmission (Automatic or Manual)
If someone can drive a Manual transmission car, they can drive any Manual
transmission car. Automatic drivers, sadly, cannot drive a Manual transmission
car without "relearning" how to drive. That is an aside; we'll assume that all
cars are Manual transmission cars – as is the case in Ireland for most cars.
Since I can drive my car, which is a Mitsubishi Pajero, that means that I
can drive your car – a Honda Civic, Toyota Yaris, Volkswagen Passat.
All I need to know, in order to drive a car – any car – is how to use the
brakes, accelerator, steering wheel, clutch and transmission. Since I already
know this in my car, I can abstract away your car and its
controls.
I do not need to know the inner workings of your car in
order to drive it, just the controls. I don't need to know how exactly the brakes
work in your car, only that they work. I don't need to know, that your car has
a turbo charger, only that when I push the accelerator, the car moves. I also
don't need to know the exact revs that I should gear up or gear down (although
that would be better on the engine!)
Virtually all controls are the same. Standardization means that the clutch,
brake and accelerator are all in the same place, regardless of the car. This
means that I do not need to relearn how a car works. To me, a car is
just a car, and is interchangeable with any other car.
Abstraction means not caring
As a programmer, or someone using a third party API (for example), abstraction
means not caring how the inner workings of some function works
– Linked list data structure, variable names inside the function, the sorting
algorithm used, etc – just that I have a standard (preferably unchanging) interface
to do whatever I need to do.
Abstraction can be thought of as a black box. For input, you get output. That
shouldn't be the case, but often is. We need abstraction so that, as a programmer,
we can concentrate on other aspects of the program – this is the corner-stone
for large scale, multi developer, software projects.
"I tell them you learn to write the same way you learn to play golf," he once
said. "You do it, and keep doing it until you get it right. A lot of people
think something mystical happens to you, that maybe the muse kisses you on the
ear. But writing isn't divinely inspired - it's hard work."
An anonymous reader writes "Derek Sivers, creator of online indie music
store CD Baby, has a post about why he thinks basic programming is a useful
skill for everybody.
He says, 'The most common thing I hear from aspiring entrepreneurs is, "I
have this idea for an app or site. But I'm not technical, so I need to find
someone who can make it for me." I point them to my advice about how to hire
a programmer, but as most of the good ones are already booked solid, it's a
pretty helpless position to be in. If you heard someone say, "I have this
idea for a song. But I'm not musical, so I need to find someone who will
write, perform, and record it for me." - you'd probably advise them to just
take some time to sit down with a guitar or piano and learn enough to turn
their ideas into reality.
And so comes my advice: Yes, learn some programming basics. Just some
HTML, CSS, and JavaScript should be enough to start. ... You don't need to
become an expert, just know the basics, so you're not helpless.'"
BrokenHalo (565198):
Well, no reason why it should. Just about anyone should be able to
write some form of pseudocode, however incomplete, for whatever task
they want to accomplish with or without the assistance of a computer.
That said, when I first started working with computers back in the '70s,
programmers mostly didn't have access to the actual computer hardware,
so if the chunk of code was large, we simply wrote out our FORTRAN,
Assembly or COBOL programs on a cellulose-fibre "paper" substance called
a Coding Sheet with a graphite-filled wooden stick known as a pencil.
These were then transcribed on to mag tape by a platoon of very pretty
but otherwise non-human keypunch ops who were universally capable of
typing at a rate of 6.02 x 10^23 words per minute. (If the program or
patch happened to be small or trivial, we used one of those metal
card-punch contraptions with an 029 keypad, thus allowing the office
door to slam with nothing to restrain it.)
This leisurely approach led to a very different and IMHO more creative
attitude to coding, and it was probably no coincidence that many
programmers back then were pipe-smokers.
Anonymous Coward:
"I have an idea for an app" is exactly what riles up programmers.
Ideas are a dime a dozen. If you, the "nontechnical person", do your job
right, then you'll find a competent and cooperative programmer.
If, on the other hand, and this is much too common, you expect the
programmer to do your work (requirements engineering, reading your mind
for what you want, correcting your conceptual mistakes, graphics design,
business planning to get the scale right, etc.) on top of the actual
programming in return for a one-time payment while you expect to sell
"your" startup for millions, then you'll get asshole programmers - and
you deserve them.
Anonymous Coward:
A programmer's job is to implement a specification. People who
"have an idea for an app" only want to pay a programmer (I'm being
generous here, often they don't even want to pay a programmer, see the
article), but expect to get a business analyst, graphics artist,
software architect, marketer, programmer and system administrator rolled
into one, so that they don't have to give away too much of the money
they expect to earn with their creative idea.
Someone who thinks you can learn a little programming to avoid being
at the mercy of programmers isn't looking for a partner, isn't willing
to share with a partner and doesn't deserve the input from a partner.
aaarrrgggh (9205):
I'm an engineer. I want to remodel my home. I come up with ideas,
document them, and give them to an architect to build into a complete
design that conveys scope to the general contractor and trades. Me being
educated about the process helps me to manage scope and hopefully get
the product I want in the most efficient manner possible, while also
taking advantage of the expertise of others. A prima donna architect
that only wants to create something they find to be beautiful might not
solve my problems.
Programming is no different. If I convey something in pseudo code or
user interface, I would expect a skilled programmer to be able to
provide a critical evaluation of my idea and guide me into the best
direction. I might not be able to break down the functions for security
the right way, but I would at least be highlighting the need for
security as an example.
Moraelin (679338)
I'm not sure that learning some superficial idea of a language
is going to help. And I'll give you a couple of reasons why:
Dunning-Kruger. The people with the least knowledge on the
domain are those who overrate their knowledge the most.
Now I really wish to believe that some management or marketing guy
is willing to sink 10,000 hours into becoming good at programming,
and have a good idea of exactly what he's asking for. I really do.
But we both know that even if he does a decent amount overtime,
that's about 3 years of doing NOTHING BUT programming, i.e., he'd
have to not do his real job at all any more. Or more like 15 years
if he does some two-hours a day of hobby-style programming in the
afternoon. And he probably won't even do that.
What is actually going to happen, if at all, is that he'll plod
through it up to the first peak of his own sense of how much he knows,
i.e., the Dunning-Kruger sweet spot. The point where he thinks he
knows it all, except, you know, maybe some minor esoteric stuff that
doesn't matter anyway. But is actually the point where he doesn't
know jack.
And from my experience, those are the worst problem bosses. The
kind which is an illustration of Russell's, "The trouble with the
world is that the stupid are cocksure and the intelligent are full
of doubt." The kind who is cock-sure that he probably is better at
programming than you anyway, he just, you know, doesn't have the
time to actually do it. (Read: to actually get experience.)
That's the kind who's just moved from just a paranoid suspicion that
you're making a fuss about the 32414'th change request is taking
advantage of him, to the kind who "knows" that you're just an
unreasonable asshole. After all, he has no problem making changes to
the 1000 line JSP or PHP page he did for practice (half of which
being just HTML mixed in with the business code.) If he wants to add
a button to that one, hey, his editor even lets him drag and drop it
in 5 seconds. Why, he can even change it from displaying a fictive
list of widgets to a fictive list of employees. So your wanting to
redo a part of the design to accommodate his request to change the
whole functionality of a 1,000,000 line program (which is actually
quite small) must be some kind of trying to shaft him.
It's the kind who thinks that if he did a simple example program in
Visual Fox Pro, a single-user "database", placed the database files
on a file server, and then accessed them from another workstation,
that makes him qualified to decide he doesn't need MySQL or Oracle
for his enterprise system, he can just demand to have it done in
Visual Fox Pro. In fact, he "knows" it can be done that way. No,
really, this is an actual example that happened to me. Verbatim. I'm
not making it up.
Well, it doesn't work on other domains either, so I don't see
why programming would be any different. People can have a
superficial understanding of how a map editor for Skyrim works, and
it won't prevent them from coming up with some unreasonable idea like
that someone should make him every outfit from [insert Anime series]
and not just do it for free, but credit him, because, hey, he had
the idea. No, seriously, just about every other idiot thinks that
the reason someone hasn't done a total conversion from Skyrim to
Star Wars is that they didn't have the precious idea.
Basically it's Dunning-Kruger all over again.
I think more than understanding programming, what people need is
understanding that ideas are a dime a dozen. What matters is the
execution.
What they need to understand is that, no, you're probably not the
next Edison or Ford or Steve Jobs or whatever. There are probably a
thousand other guys who had the same idea, some may have even tried it,
and there might actually be a reason why you never heard of it being
actually finished. And even those are remembered for actually having the
management skills to make those ideas work, not just for having an idea.
Ford didn't just make it for having the idea of making a cheap car, nor
for being a mechanic himself. Why it worked was managing to sort things
out like managing to hire and hold onto some good subordinates, reduce
the turnover that previously had some departments literally hire 300
people a year to fill 100 positions, etc.
It's the execution that mattered, not just having an idea.
Once they get disabused of the idea all that matters is that their
brain farted a vague idea, I think it will go a longer way towards less
frustration both for them and their employees.
RabidReindeer (2625839):
short version: "A little knowledge is a dangerous thing."
People who think they know what the job entails start out saying "It's
Easy! All You Have To Do Is..." and the whole thing swiftly descends
into Hell.
Dennis M. Ritchie, who helped shape the modern digital era by creating software
tools that power things as diverse as search engines like Google and smartphones,
was found dead on Wednesday at his home in Berkeley Heights, N.J. He was 70.
Mr. Ritchie, who lived alone, was in frail health in recent years after treatment
for prostate cancer and heart disease, said his brother Bill.
In the late 1960s and early '70s, working at Bell Labs, Mr. Ritchie made
a pair of lasting contributions to computer science. He was the principal designer
of the C programming language and co-developer of the Unix operating system,
working closely with Ken Thompson, his longtime Bell Labs collaborator.
The C programming language, a shorthand of words, numbers and punctuation,
is still widely used today, and successors like C++ and Java build on the ideas,
rules and grammar that Mr. Ritchie designed. The Unix operating system has similarly
had a rich and enduring impact. Its free, open-source variant, Linux, powers
many of the world's data centers, like those at Google and Amazon, and its technology
serves as the foundation of operating systems, like Apple's iOS, in consumer
computing devices.
"The tools that Dennis built - and their direct descendants - run pretty
much everything today," said Brian Kernighan, a computer scientist at Princeton
University who worked with Mr. Ritchie at Bell Labs.
Those tools were more than inventive bundles of computer code. The C language
and Unix reflected a point of view, a different philosophy of computing than
what had come before. In the late '60s and early '70s, minicomputers were moving
into companies and universities - smaller and at a fraction of the price of
hulking mainframes.
Minicomputers represented a step in the democratization of computing, and
Unix and C were designed to open up computing to more people and collaborative
working styles. Mr. Ritchie, Mr. Thompson and their Bell Labs colleagues were
making not merely software but, as Mr. Ritchie once put it, "a system around
which fellowship can form."
C was designed for systems programmers who wanted to get the fastest performance
from operating systems, compilers and other programs. "C is not a big language
- it's clean, simple, elegant," Mr. Kernighan said. "It lets you get close to
the machine, without getting tied up in the machine."
Such higher-level languages had earlier been intended mainly to let people
without a lot of programming skill write programs that could run on mainframes.
Fortran was for scientists and engineers, while Cobol was for business managers.
C, like Unix, was designed mainly to let the growing ranks of professional
programmers work more productively. And it steadily gained popularity. With
Mr. Kernighan, Mr. Ritchie wrote a classic text, "The C Programming Language,"
also known as "K. & R." after the authors' initials, whose two editions, in
1978 and 1988, have sold millions of copies and been translated into 25 languages.
Dennis MacAlistair Ritchie was born on Sept. 9, 1941, in Bronxville, N.Y.
His father, Alistair, was an engineer at Bell Labs, and his mother, Jean McGee
Ritchie, was a homemaker. When he was a child, the family moved to Summit, N.J.,
where Mr. Ritchie grew up and attended high school.
He then went to Harvard, where he majored in applied mathematics.
While a graduate student at Harvard, Mr. Ritchie worked at the computer center
at the Massachusetts Institute of Technology, and became more interested in
computing than math. He was recruited by the Sandia National Laboratories, which
conducted weapons research and testing. "But it was nearly 1968," Mr. Ritchie
recalled in an interview in 2001, "and somehow making A-bombs for the government
didn't seem in tune with the times."
Mr. Ritchie joined Bell Labs in 1967, and soon
began his fruitful collaboration with Mr. Thompson on both Unix and the C programming
language. The pair represented the two different strands of the nascent discipline
of computer science. Mr. Ritchie came to computing from math, while Mr. Thompson
came from electrical engineering.
"We were very complementary," said Mr. Thompson, who is now an engineer at
Google. "Sometimes personalities clash, and sometimes they meld. It was just
good with Dennis."
Besides his brother Bill, of Alexandria, Va., Mr. Ritchie is survived by
another brother, John, of Newton, Mass., and a sister, Lynn Ritchie of Hexham,
England.
Mr. Ritchie traveled widely and read voraciously, but friends and family
members say his main passion was his work. He remained
at Bell Labs, working on various research projects, until he retired in 2007.
Colleagues who worked with Mr. Ritchie were struck by his code - meticulous,
clean and concise. His writing, according to Mr. Kernighan, was similar. "There
was a remarkable precision to his writing," Mr. Kernighan said, "no extra words,
elegant and spare, much like his code."
Disagree that a person can become a competent computer programmer
in under a year. Well, maybe the exceptional genius…
For most people, it takes a minimum of 3
years to master the skills required to be a decent coder.
It's not just about learning Java (which I do agree is a good computer
language to start with), there are certain prerequisites. Fortunately,
not a lot of math is required, high-school algebra is sufficient, plus
a grasp of "functions" (because programmers usually have to write a
lot of functions). On the other hand, boolean logic is absolutely required,
and that's more than just knowing the difference between logical AND
and logical OR (or XOR). Also, if one gets into databases (my specialty,
actually), then one also needs to master the mathematics of set theory.
And a real programmer also needs to be able to write (and understand)
a recursion algorithm. For example, every time I have interviewed a
potential coder, I have asked them, "Are you familiar with the 'Towers
of Hanoi' algorithim?" If they don't know what that is, they still have
a chance to impress me if they can describe a B-tree navigation algorithm.
That's first- or second-year computer science stuff. If they can't recurse
a directory tree (using whatever programming language of their choice),
then they aren't a real programmer. God knows there are plenty of fakes
in the business. Sorry for the rant. Having to deal with "pretend programmers"
(rookies who think they're programmers because they know how to update
their Facebook page) is one of my pet peeves… Grrrrrrrr!
The computer, known as EDSAC (Electronic Delay Storage Automatic
Calculator) was a huge contraption that took up a room in what was
the University's old Mathematical Library. It contained 3,000 vacuum
valves arranged on 12 racks and used tubes filled with mercury for
memory. Despite its impressive size, it could only carry out 650
operations per second.
Before the development of EDSAC, digital computers, such as the
American Moore School's ENIAC (Electronic Numeral Integrator and
Computer), were only capable of dealing with one particular type of
problem. To solve a different kind of problem, thousands of switches
had to be reset and miles of cable re-routed. Reprogramming took
days.
In 1946, a paper by the Hungarian-born scientist John von Neumann
and others suggested that the future lay in developing computers
with memory which could not only store data, but also sets of
instructions, or programs. Users would then be able to change
programs, written in binary number format, without rewiring the
whole machine. The challenge was taken up by three groups of
scientists - one at the University of Manchester, an American team
led by JW Mauchly and JP Eckert, and the Cambridge team led by
Wilkes.
Eckert and Mauchly had been working on developing a
stored-program computer for two years before Wilkes became involved
at Cambridge. While the University of Manchester machine, known as
"Baby", was the first to store data and program, it was Wilkes who
became the first to build an operational machine based on von
Neumann's ideas (which form the basis for modern computers) to
deliver a service.
Wilkes chose to adopt mercury delay lines suggested by Eckert to
serve as an internal memory store. In such a delay line, an
electrical signal is converted into a sound wave travelling through
a long tube of mercury at a speed of 1,450 metres per second. The
signal can be transmitted back and forth along the tube, several of
which were combined to form the machine's memory. This memory meant
the computer could store both data and program. The main program was
loaded by paper tape, but once loaded this was executed from memory,
making the machine the first of its kind.
After two years of development, on May 6 1949 Wilkes's EDSAC
"rather suddenly" burst into life, computing a table of square
numbers. From early 1950 it offered a regular computing service to
the members of Cambridge University, the first of its kind in the
world, with Wilkes and his group developing programs and compiling a
program library. The world's first scientific paper to be published
using computer calculations - a paper on genetics by RA Fisher – was
completed with the help of EDSAC.
Wilkes was probably the first computer programmer to spot the
coming significance of program testing: "In 1949 as soon as we
started programming", he recalled in his memoirs, "we found to our
surprise that it wasn't as easy to get programs right as we had
thought. Debugging had to be discovered. I can remember the exact
instant when I realised that a large part of my life from then on
was going to be spent in finding mistakes in my own programs."
In 1951 Wilkes (with David J Wheeler and Stanley Gill) published
the world's first textbook on computer programming, Preparation of
Programs for an Electronic Digital Computer. Two years later he
established the world's first course in Computer Science at
Cambridge.
EDSAC remained in operation until 1958, but the future lay not in
delay lines but in magnetic storage and, when it came to the end of
its life, the machine was cannibalised and scrapped, its old program
tapes used as streamers at Cambridge children's parties.
Wilkes, though, remained at the forefront of computing technology
and made several other breakthroughs. In 1958 he built EDSAC's
replacement, EDSAC II, which not only incorporated magnetic storage
but was the first computer in the world to have a micro-programmed
control unit. In 1965 he published the first paper on cache
memories, followed later by a book on time-sharing.
In 1974 he developed the "Cambridge Ring", a digital
communication system linking computers together. The network was
originally designed to avoid the expense of having a printer at
every computer, but the technology was soon developed commercially
by others.
When EDSAC was built, Wilkes sought to allay public fears by
describing the stored-program computer as "a calculating machine
operated by a moron who cannot think, but can be trusted to do what
he is told". In 1964, however, predicting the world in "1984", he
drew a more Orwellian picture: "How would you feel," he wrote, "if
you had exceeded the speed limit on a deserted road in the dead of
night, and a few days later received a demand for a fine that had
been automatically printed by a computer coupled to a radar system
and vehicle identification device? It might not be a demand at all,
but simply a statement that your bank account had been debited
automatically."
Maurice Vincent Wilkes was born at Dudley, Worcestershire, on
June 26 1913. His father was a switchboard operator for the Earl of
Dudley whose extensive estate in south Staffordshire had its own
private telephone network; he encouraged his son's interest in
electronics and at King Edward VI's Grammar School, Stourbridge,
Maurice built his own radio transmitter and was allowed to operate
it from home.
Encouraged by his headmaster, a Cambridge-educated mathematician,
Wilkes went up to St John's College, Cambridge to read Mathematics,
but he studied electronics in his spare time in the University
Library and attended lectures at the Engineering Department. After
obtaining an amateur radio licence he constructed radio equipment in
his vacations with which to make contact, via the ionosphere, with
radio "hams" around the world.
Wilkes took a First in Mathematics and stayed on at Cambridge to
do a PhD on the propagation of radio waves in the ionosphere. This
led to an interest in tidal motion in the atmosphere and to the
publication of his first book Oscillations of the Earth's Atmosphere
(1949). In 1937 he was appointed university demonstrator at the new
Mathematical Laboratory (later renamed the Computer Laboratory)
housed in part of the old Anatomy School.
When war broke out, Wilkes left Cambridge to work with R
Watson-Watt and JD Cockcroft on the development of radar. Later he
became involved in designing aircraft, missile and U-boat radio
tracking systems.
In 1945 Wilkes was released from war work to take up the
directorship of the Cambridge Mathematical Laboratory and given the
task of constructing a computer service for the University.
The following year he attended a course on "Theory and Techniques
for Design of Electronic Digital Computers" at the Moore School of
Electrical Engineering at the University of Pennsylvania, the home
of the ENIAC. The visit inspired Wilkes to try to build a
stored-program computer and on his return to Cambridge, he
immediately began work on EDSAC.
Wilkes was appointed Professor of Computing Technology in 1965, a
post he held until his retirement in 1980. Under his guidance the
Cambridge University Computer Laboratory became one of the country's
leading research centres. He also played an important role as an
adviser to British computer companies and was instrumental in
founding the British Computer Society, serving as its first
president from 1957 to 1960.
After his retirement, Wilkes spent six years as a consultant to
Digital Equipment in Massachusetts, and was Adjunct Professor of
Electrical Engineering and Computer Science at the Massachusetts
Institute of Technology from 1981 to 1985. Later he returned to
Cambridge as a consultant researcher with a research laboratory
funded variously by Olivetti, Oracle and AT&T, continuing to work
until well into his 90s.
Maurice Wilkes was elected a fellow of the Royal Society in 1956,
a Foreign Honorary Member of the American Academy of Arts and
Sciences in 1974, a Fellow of the Royal Academy of Engineering in
1976 and a Foreign Associate of the American National Academy of
Engineering in 1977. He was knighted in 2000.
Among other prizes he received the ACM Turing Award in 1967; the
Faraday Medal of the Institute of Electrical Engineers in 1981; and
the Harry Goode Memorial Award of the American Federation for
Information Processing Societies in 1968.
In 1985 he provided a lively account of his work in Memoirs of a
Computer Pioneer.
Maurice Wilkes married, in 1947, Nina Twyman. They had a son and
two daughters.
Andrew Binstock and Donald Knuth converse on the success of open
source, the problem with multicore architecture, the disappointing lack
of interest in literate programming, the menace of reusable code, and
that urban legend about winning a programming contest with a single
compilation.
Andrew Binstock: You are one of the fathers of the open-source
revolution, even if you aren't widely heralded as such. You previously
have stated that you released
TeX as open source because of the problem of proprietary
implementations at the time, and to invite corrections to the code-both
of which are key drivers for open-source projects today. Have you been
surprised by the success of open source since that time?
Donald Knuth: The success of open source code is perhaps the only
thing in the computer field that hasn't surprised me during
the past several decades. But it still hasn't reached its full potential;
I believe that open-source programs will begin to be completely dominant
as the economy moves more and more from products towards services, and
as more and more volunteers arise to improve the code.
For example, open-source code can produce
thousands of binaries, tuned perfectly to the configurations of individual
users, whereas commercial software usually will exist in only a few
versions. A generic binary executable file must include
things like inefficient "sync" instructions that are totally inappropriate
for many installations; such wastage goes away when the source code
is highly configurable. This should be a huge win for open source.
Yet I think that a few programs, such as Adobe Photoshop, will always
be superior to competitors like the Gimp-for some reason, I really don't
know why! I'm quite willing to pay good
money for really good software,
if I believe that it has been produced by the best programmers.
Remember, though, that my opinion on economic questions is highly
suspect, since I'm just an educator and scientist. I understand almost
nothing about the marketplace.
Andrew: A story states that you once entered a programming
contest at Stanford (I believe) and you submitted the winning entry,
which worked correctly after a single compilation. Is this
story true? In that vein, today's developers frequently build programs
writing small code increments followed by immediate compilation and
the creation and running of unit tests. What are your thoughts on this
approach to software development?
Donald: The story you heard is typical of legends that are based
on only a small kernel of truth. Here's what actually happened:
John McCarthy decided in 1971 to have a Memorial Day Programming
Race. All of the contestants except me worked at his AI Lab up in the
hills above Stanford, using the WAITS time-sharing system; I was down
on the main campus, where the only computer available to me was a mainframe
for which I had to punch cards and submit them for processing in batch
mode. I used Wirth's
ALGOL W system (the predecessor of Pascal). My program didn't
work the first time, but fortunately I could use Ed Satterthwaite's
excellent offline debugging system for ALGOL W, so I needed only two
runs. Meanwhile, the folks using WAITS couldn't get enough machine cycles
because their machine was so overloaded. (I think that the second-place
finisher, using that "modern" approach, came in about an hour after
I had submitted the winning entry with old-fangled methods.) It wasn't
a fair contest.
As to your real question, the idea of immediate compilation and "unit
tests" appeals to me only rarely, when I'm feeling my way in a totally
unknown environment and need feedback about what works and what doesn't.
Otherwise, lots of time is wasted on activities that I simply never
need to perform or even think about. Nothing needs to be "mocked up."
Andrew: One of the emerging problems for developers, especially
client-side developers, is changing their thinking to write programs
in terms of threads. This concern, driven by the advent of inexpensive
multicore PCs, surely will require that many algorithms be recast for
multithreading, or at least to be thread-safe. So far, much of the work
you've published for Volume 4 of
The Art of Computer Programming (TAOCP) doesn't
seem to touch on this dimension. Do you expect to enter into problems
of concurrency and parallel programming in upcoming work, especially
since it would seem to be a natural fit with the combinatorial topics
you're currently working on?
Donald: The field of combinatorial algorithms is so vast that I'll
be lucky to pack its sequential aspects into three or four
physical volumes, and I don't think the sequential methods are ever
going to be unimportant. Conversely, the half-life of parallel techniques
is very short, because hardware changes rapidly and each new machine
needs a somewhat different approach. So I decided long ago to stick
to what I know best. Other people understand parallel machines much
better than I do; programmers should listen to them, not me, for guidance
on how to deal with simultaneity.
Andrew: Vendors of multicore processors have expressed frustration
at the difficulty of moving developers to this model. As a former professor,
what thoughts do you have on this transition and how to make it happen?
Is it a question of proper tools, such as better native support for
concurrency in languages, or of execution frameworks? Or are there other
solutions?
Donald: I don't want to duck your question entirely.
I might as well flame a bit about my personal
unhappiness with the current trend toward multicore architecture.
To me, it looks more or less like the hardware
designers have run out of ideas, and that they're trying to pass the
blame for the future demise of Moore's Law to the software writers by
giving us machines that work faster only on a few key benchmarks!
I won't be surprised at all if the whole multithreading idea turns out
to be a flop, worse than the "Itanium"
approach that was supposed to be so terrific-until it turned out that
the wished-for compilers were basically impossible to write.
Let me put it this way: During the past
50 years, I've written well over a thousand programs, many of which
have substantial size. I can't think of even five of
those programs that would have been enhanced noticeably by parallelism
or multithreading. Surely, for example, multiple processors
are no help to TeX.[1]
How many programmers do you know who are enthusiastic about these
promised machines of the future? I hear almost nothing but grief from
software people, although the hardware folks in our department assure
me that I'm wrong.
I know that important applications for parallelism exist-rendering
graphics, breaking codes, scanning images, simulating physical and biological
processes, etc. But all these applications require dedicated code and
special-purpose techniques, which will need to be changed substantially
every few years.
Even if I knew enough about such methods to write about them in
TAOCP, my time would be largely wasted, because soon there
would be little reason for anybody to read those parts. (Similarly,
when I prepare the third edition of
Volume 3 I plan to rip out much of the material about how to sort
on magnetic tapes. That stuff was once one of the hottest topics in
the whole software field, but now it largely wastes paper when the book
is printed.)
The machine I use today has dual processors.
I get to use them both only when I'm running two independent jobs at
the same time; that's nice, but it happens only a few minutes every
week. If I had four processors, or eight, or more, I
still wouldn't be any better off, considering the kind of work I do-even
though I'm using my computer almost every day during most of the day.
So why should I be so happy about the future that hardware vendors promise?
They think a magic bullet will come along to make multicores speed up
my kind of work; I think it's a pipe dream. (No-that's the wrong metaphor!
"Pipelines" actually work for me, but threads don't. Maybe the word
I want is "bubble.")
From the opposite point of view, I do
grant that web browsing probably will get better with multicores.
I've been talking about my technical work, however, not recreation.
I also admit that I haven't got many bright ideas about what I wish
hardware designers would provide instead of multicores, now that they've
begun to hit a wall with respect to sequential computation. (But my
MMIX
design contains several ideas that would substantially improve the current
performance of the kinds of programs that concern me most-at the cost
of incompatibility with legacy x86 programs.)
Andrew: One of the few projects of yours that hasn't been
embraced by a widespread community is literate programming.
What are your thoughts about why literate programming didn't catch on?
And is there anything you'd have done differently in retrospect regarding
literate programming?
Donald: Literate programming is a very personal thing. I think it's
terrific, but that might well be because I'm a very strange person.
It has tens of thousands of fans, but not millions.
In my experience, software created with literate programming has
turned out to be significantly better than software developed in more
traditional ways. Yet ordinary software is usually okay-I'd give it
a grade of C (or maybe C++), but not F; hence, the traditional methods
stay with us. Since they're understood by a vast community of programmers,
most people have no big incentive to change, just as I'm not motivated
to learn Esperanto even though it might be preferable to English and
German and French and Russian (if everybody switched).
Jon Bentley
probably hit the nail on the head when he once was asked why literate
programming hasn't taken the whole world by storm. He observed that
a small percentage of the world's population is good at programming,
and a small percentage is good at writing; apparently I am asking everybody
to be in both subsets.
Yet to me, literate programming is certainly the most important thing
that came out of the TeX project. Not only has it enabled me to write
and maintain programs faster and more reliably than ever before, and
been one of my greatest sources of joy since the 1980s-it has actually
been indispensable at times. Some of my major programs, such
as the MMIX meta-simulator, could not have been written with any other
methodology that I've ever heard of. The complexity was simply too daunting
for my limited brain to handle; without literate programming, the whole
enterprise would have flopped miserably.
If people do discover nice ways to use the newfangled multithreaded
machines, I would expect the discovery to come from people who routinely
use literate programming. Literate programming is what you need to rise
above the ordinary level of achievement. But I don't believe in forcing
ideas on anybody. If literate programming isn't your style, please forget
it and do what you like. If nobody likes it but me, let it die.
On a positive note, I've been pleased to discover that the conventions
of CWEB are already standard equipment within preinstalled software
such as Makefiles, when I get off-the-shelf Linux these days.
Andrew: In
Fascicle 1 of Volume 1, you reintroduced the MMIX computer,
which is the 64-bit upgrade to the venerable MIX machine comp-sci students
have come to know over many years. You previously described MMIX in
great detail in MMIXware.
I've read portions of both books, but can't tell whether the Fascicle
updates or changes anything that appeared in MMIXware, or whether it's
a pure synopsis. Could you clarify?
Donald: Volume 1 Fascicle 1 is a programmer's introduction, which
includes instructive exercises and such things. The MMIXware book is
a detailed reference manual, somewhat terse and dry, plus a bunch of
literate programs that describe prototype software for people to build
upon. Both books define the same computer (once the errata to MMIXware
are incorporated from my website). For most readers of TAOCP,
the first fascicle contains everything about MMIX that they'll ever
need or want to know.
I should point out, however, that MMIX isn't a single machine; it's
an architecture with almost unlimited varieties of implementations,
depending on different choices of functional units, different pipeline
configurations, different approaches to multiple-instruction-issue,
different ways to do branch prediction, different cache sizes, different
strategies for cache replacement, different bus speeds, etc. Some instructions
and/or registers can be emulated with software on "cheaper" versions
of the hardware. And so on. It's a test bed, all simulatable with my
meta-simulator, even though advanced versions would be impossible to
build effectively until another five years go by (and then we could
ask for even further advances just by advancing the meta-simulator specs
another notch).
Suppose you want to know if five separate multiplier units and/or
three-way instruction issuing would speed up a given MMIX program. Or
maybe the instruction and/or data cache could be made larger or smaller
or more associative. Just fire up the meta-simulator and see what happens.
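The workflow Knuth describes, enumerating machine configurations and comparing the simulated results, can be sketched as follows. The run_mmix() wrapper and its configuration keys are hypothetical stand-ins (the real meta-simulator has its own configuration format), with a toy cost model so the sketch runs end to end; only the sweep structure is the point.

    # Hypothetical design-space sweep over simulated machine parameters.
    from itertools import product

    def run_mmix(program, config):
        """Stand-in for invoking a simulator; returns a made-up cycle count."""
        base = 1_000_000
        return (base // (config["multipliers"] * config["issue_width"])
                + 50_000 // config["cache_kb"])

    PROGRAM = "example.mmo"        # hypothetical compiled MMIX program

    search_space = {
        "multipliers": [1, 3, 5],  # how many multiplier units
        "issue_width": [1, 2, 3],  # instructions issued per cycle
        "cache_kb":    [8, 16, 64],
    }

    for combo in product(*search_space.values()):
        config = dict(zip(search_space, combo))
        print(config, "->", run_mmix(PROGRAM, config), "simulated cycles")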
Andrew: As I suspect you don't use unit testing with MMIXAL,
could you step me through how you go about making sure that your code
works correctly under a wide variety of conditions and inputs? If you
have a specific work routine around verification, could you describe
it?
Donald: Most examples of machine language code in TAOCP
appear in Volumes 1-3; by the time we get to Volume 4, such low-level
detail is largely unnecessary and we can work safely at a higher level
of abstraction. Thus, I've needed to write only a dozen or so MMIX programs
while preparing the opening parts of Volume 4, and they're all pretty
much toy programs-nothing substantial. For little things like that,
I just use informal verification methods, based on the theory that I've
written up for the book, together with the MMIXAL assembler and MMIX
simulator that are readily available on the Net (and described in full
detail in the MMIXware book).
That simulator includes debugging features like the ones I found
so useful in Ed Satterthwaite's system for ALGOL W, mentioned earlier.
I always feel quite confident after checking a program with those tools.
Andrew: Despite its formulation many years ago, TeX is still
thriving, primarily as the foundation for LaTeX. While
TeX has been effectively frozen at your request, are there features
that you would want to change or add to it, if you had the time and
bandwidth? If so, what are the major items you would add or change?
Donald: I believe changes to TeX would cause much more harm than
good. Other people who want other features are creating their own systems,
and I've always encouraged further development-except that nobody should
give their program the same name as mine. I want to take permanent responsibility
for TeX and Metafont,
and for all the nitty-gritty things that affect existing documents that
rely on my work, such as the precise dimensions of characters in the
Computer Modern fonts.
Andrew: One of the little-discussed aspects of software development
is how to do design work on software in a completely new domain. You
were faced with this issue when you undertook TeX: No prior art was
available to you as source code, and it was a domain in which you weren't
an expert. How did you approach the design, and how long did it take
before you were comfortable entering into the coding portion?
Donald: That's another good question! I've discussed the answer in
great detail in Chapter 10 of my book
Literate Programming, together with Chapters 1 and 2 of my book
Digital Typography. I think that anybody who is really interested
in this topic will enjoy reading those chapters. (See also Digital
Typography Chapters 24 and 25 for the complete first and second
drafts of my initial design of TeX in 1977.)
Andrew: The books on TeX and the program itself show a clear
concern for limiting memory usage-an important problem for systems of
that era. Today, the concern for memory usage in programs has more to
do with cache sizes. As someone who has designed a processor in software,
the issues of cache-aware and cache-oblivious
algorithms surely must have crossed your radar screen. Is
the role of processor caches on algorithm design something that you
expect to cover, even if indirectly, in your upcoming work?
Donald: I mentioned earlier that MMIX provides a test bed for many
varieties of cache. And it's a software-implemented machine, so we can
perform experiments that will be repeatable even a hundred years from
now. Certainly the next editions of Volumes 1-3 will discuss the behavior
of various basic algorithms with respect to different cache parameters.
In Volume 4 so far, I count about a dozen references to cache memory
and cache-friendly approaches (not to mention a "memo cache," which
is a different but related idea in software).
Andrew: What set of tools do you use today for writing
TAOCP? Do you use TeX? LaTeX? CWEB? Word processor? And what
do you use for the coding?
Donald: My general working style is to write everything first with
pencil and paper, sitting beside a big wastebasket. Then I use Emacs
to enter the text into my machine, using the conventions of TeX. I use
tex, dvips, and gv to see the results, which appear on my screen almost
instantaneously these days. I check my math with Mathematica.
I program every algorithm that's discussed (so that I can thoroughly
understand it) using CWEB, which works splendidly with the GDB debugger.
I make the illustrations with
MetaPost (or, in
rare cases, on a Mac with Adobe Photoshop or Illustrator). I have some
homemade tools, like my own spell-checker for TeX and CWEB within Emacs.
I designed my own bitmap font for use with Emacs, because I hate the
way the ASCII apostrophe and the left open quote have morphed into independent
symbols that no longer match each other visually. I have special Emacs
modes to help me classify all the tens of thousands of papers and notes
in my files, and special Emacs keyboard shortcuts that make bookwriting
a little bit like playing an organ. I prefer
rxvt to xterm for terminal
input. Since last December, I've been using a file backup system called
backupfs, which
meets my need beautifully to archive the daily state of every file.
According to the current directories on my machine, I've written
68 different CWEB programs so far this year. There were about 100 in
2007, 90 in 2006, 100 in 2005, 90 in 2004, etc. Furthermore, CWEB has
an extremely convenient "change file" mechanism, with which I can rapidly
create multiple versions and variations on a theme; so far in 2008 I've
made 73 variations on those 68 themes. (Some of the variations are quite
short, only a few bytes; others are 5KB or more. Some of the CWEB programs
are quite substantial, like the 55-page BDD package that I completed
in January.) Thus, you can see how important literate programming is
in my life.
I currently use Ubuntu Linux,
on a standalone laptop-it has no Internet connection. I occasionally
carry flash memory drives between this machine and the Macs that I use
for network surfing and graphics; but I trust my family jewels only
to Linux. Incidentally, with Linux I much prefer the keyboard focus
that I can get with classic
FVWM to the GNOME and
KDE environments that other people seem to like better. To each his
own.
Andrew: You state in the preface of
Fascicle 0 of Volume 4 of TAOCP that Volume 4 surely
will comprise three volumes and possibly more. It's clear from the text
that you're really enjoying writing on this topic. Given that, what
is your confidence in the note posted on the TAOCP website
that Volume 5 will see the light of day by 2015?
Donald: If you check the Wayback Machine for previous incarnations
of that web page, you will see that the number 2015 has not been constant.
You're certainly correct that I'm having a ball writing up this material,
because I keep running into fascinating facts that simply can't be left
out-even though more than half of my notes don't make the final cut.
Precise time estimates are impossible, because I can't tell until
getting deep into each section how much of the stuff in my files is
going to be really fundamental and how much of it is going to be irrelevant
to my book or too advanced. A lot of the recent literature is academic
one-upmanship of limited interest to me; authors these days often introduce
arcane methods that outperform the simpler techniques only when the
problem size exceeds the number of protons in the universe. Such algorithms
could never be important in a real computer application. I read hundreds
of such papers to see if they might contain nuggets for programmers,
but most of them wind up getting short shrift.
From a scheduling standpoint, all I know at present is that I must
someday digest a huge amount of material that I've been collecting and
filing for 45 years. I gain important time by working in batch mode:
I don't read a paper in depth until I can deal with dozens of others
on the same topic during the same week. When I finally am ready to read
what has been collected about a topic, I might find out that I can zoom
ahead because most of it is eminently forgettable for my purposes. On
the other hand, I might discover that it's fundamental and deserves
weeks of study; then I'd have to edit my website and push that number
2015 closer to infinity.
Andrew: In late 2006, you were diagnosed with prostate cancer.
How is your health today?
Donald: Naturally, the cancer will be a serious concern. I have superb
doctors. At the moment I feel as healthy as ever, modulo being 70 years
old. Words flow freely as I write TAOCP and as I write the
literate programs that precede drafts of TAOCP. I wake up in
the morning with ideas that please me, and some of those ideas actually
please me also later in the day when I've entered them into my computer.
On the other hand, I willingly put myself in God's hands with respect
to how much more I'll be able to do before cancer or heart disease or
senility or whatever strikes. If I should unexpectedly die tomorrow,
I'll have no reason to complain, because my life has been incredibly
blessed. Conversely, as long as I'm able to write about computer science,
I intend to do my best to organize and expound upon the tens of thousands
of technical papers that I've collected and made notes on since 1962.
Andrew: On your website, you mention that the Peoples Archive
recently made a series of videos in which you reflect on your past life.
In segment 93, "Advice to Young People," you advise that
people shouldn't do something simply because it's
trendy. As we know all too well, software development is
as subject to fads as any other discipline. Can you give some examples
that are currently in vogue, which developers shouldn't adopt simply
because they're currently popular or because that's the way they're
currently done? Would you care to identify important examples of this
outside of software development?
Donald: Hmm. That question is almost contradictory, because I'm basically
advising young people to listen to themselves rather than to others,
and I'm one of the others. Almost every
biography of every person whom you would like to emulate will say that
he or she did many things against the "conventional wisdom" of the day.
Still, I hate to duck your questions even though I also hate to offend
other people's sensibilities-given that software methodology has always
been akin to religion. With the caveat that there's no reason anybody
should care about the opinions of a computer scientist/mathematician
like me regarding software development, let me just say that almost
everything I've ever heard associated with the term "extreme
programming" sounds like exactly the wrong way to go...with one
exception. The exception is the idea of working in teams and reading
each other's code. That idea is crucial, and it might even mask out
all the terrible aspects of extreme programming that alarm me.
I also must confess to a strong bias against the fashion for reusable
code. To me, "re-editable code" is much, much better than an untouchable
black box or toolkit. I could go on and on about this. If you're totally
convinced that reusable code is wonderful, I probably won't be able
to sway you anyway, but you'll never convince me that reusable code
isn't mostly a menace.
Here's a question that you may well have meant to ask: Why is the
new book called Volume 4 Fascicle 0, instead of Volume 4 Fascicle 1?
The answer is that computer programmers will understand that I wasn't
ready to begin writing Volume 4 of TAOCP at its true beginning
point, because we know that the initialization of a program can't be
written until the program itself takes shape. So I started in 2005 with
Volume 4 Fascicle 2, after which came Fascicles 3 and 4. (Think of
Star Wars, which began with Episode 4.)
Finally I was psyched up to write the early parts, but I soon realized
that the introductory sections needed to include much more stuff than
would fit into a single fascicle. Therefore, remembering
Dijkstra's
dictum that counting should begin at 0, I decided to launch Volume 4
with Fascicle 0. Look for Volume 4 Fascicle 1 later this year.
References
[1] My colleague Kunle
Olukotun points out that, if the usage of TeX became a major bottleneck
so that people had a dozen processors and really needed to speed up
their typesetting terrifically, a super-parallel version of TeX could
be developed that uses "speculation" to typeset a dozen chapters at
once: Each chapter could be typeset under the assumption that the previous
chapters don't do anything strange to mess up the default logic. If
that assumption fails, we can fall back on the normal method of doing
a chapter at a time; but in the majority of cases, when only normal
typesetting was being invoked, the processing would indeed go 12 times
faster. Users who cared about speed could adapt their behavior and use
TeX in a disciplined way.
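The speculation scheme in this footnote can be sketched in a few lines of Python. The typeset() and initial_state() helpers below are toy stand-ins (this is not a parallel TeX); the pattern is the point: typeset every chapter in parallel under the assumption that its predecessors left the formatting state untouched, then redo sequentially only those chapters whose assumption turns out to be false.

    # Speculative parallel processing with a sequential fallback.
    from concurrent.futures import ProcessPoolExecutor

    def initial_state():
        """Toy stand-in for the default formatting state."""
        return "normal"

    def typeset(chapter, state):
        """Toy stand-in: returns (pages, state_after) for one chapter."""
        pages = f"[{chapter} typeset with {state} settings]"
        state_after = "wide-margins" if "appendix" in chapter else state
        return pages, state_after

    def typeset_book(chapters):
        default = initial_state()
        # Speculative pass: every chapter assumes it starts from the default state.
        with ProcessPoolExecutor() as pool:
            speculative = list(pool.map(typeset, chapters, [default] * len(chapters)))

        results, state = [], default
        for chapter, (pages, state_after) in zip(chapters, speculative):
            if state == default:           # assumption held: keep the parallel result
                results.append(pages)
                state = state_after
            else:                          # assumption failed: redo this chapter serially
                pages, state = typeset(chapter, state)
                results.append(pages)
        return results

    if __name__ == "__main__":
        print(typeset_book(["chapter 1", "chapter 2", "appendix", "chapter 4"]))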
BareBones is an interpreter for the "Bare Bones" programming language
defined in Chapter 11 of "Computer Science: An Overview", 9th Edition,
by J. Glenn Brookshear.
Release focus: Minor feature enhancements
Changes:
Identifiers were made case-insensitive. A summary of the language was
added to the README file.
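Bare Bones is small enough that a complete interpreter fits on a page. The sketch below (in Python, and unrelated to the BareBones release announced above) assumes the language as Brookshear defines it: three primitive statements, clear X; incr X; decr X;, plus the loop while X not 0 do; ... end;, with every statement terminated by a semicolon.

    # A minimal Bare Bones interpreter sketch. Variables are non-negative
    # integers that default to zero; 'decr' never goes below zero.
    def parse(source):
        """Split the program into lowercase statements, one per semicolon."""
        return [s.strip().lower() for s in source.split(';') if s.strip()]

    def matching_end(stmts, start):
        """Index of the 'end' that closes the 'while' at stmts[start]."""
        depth = 0
        for j in range(start + 1, len(stmts)):
            if stmts[j].startswith('while'):
                depth += 1
            elif stmts[j] == 'end':
                if depth == 0:
                    return j
                depth -= 1
        raise SyntaxError("'while' without matching 'end'")

    def run(stmts, env=None, pos=0, stop=None):
        """Execute stmts[pos:stop] in environment env and return env."""
        env = {} if env is None else env
        stop = len(stmts) if stop is None else stop
        i = pos
        while i < stop:
            words = stmts[i].split()
            if words[0] == 'clear':
                env[words[1]] = 0
            elif words[0] == 'incr':
                env[words[1]] = env.get(words[1], 0) + 1
            elif words[0] == 'decr':
                env[words[1]] = max(0, env.get(words[1], 0) - 1)
            elif words[0] == 'while':            # while X not 0 do;
                var, end = words[1], matching_end(stmts, i)
                while env.get(var, 0) != 0:
                    run(stmts, env, i + 1, end)
                i = end
            i += 1
        return env

    # Move the value of x into y:
    program = "incr x; incr x; clear y; while x not 0 do; decr x; incr y; end;"
    print(run(parse(program)))                   # {'x': 0, 'y': 2}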
You can't prove anything about a program written in C or FORTRAN.
It's really just Peek and Poke with some syntactic sugar.
There are a couple of people in the world who can really program
in C or FORTRAN. They write more code in less time than it takes for
other programmers. Most programmers aren't that good.
The problem is that those few programmers
who crank out code aren't interested in maintaining it.
The buzzwords of the 1980s are mips and megaflops. The buzzwords
of the 1990s will be verification, reliability, and understandability.
Xerox PARC was a great environment because they had great people,
enough money to build real systems, and management that protected them
from management.
The best way to do research is to make a radical assumption and
then assume it's true. For me, I use the assumption that object oriented
programming is the way to go.
At Sun, we don't believe in the Soviet model of economic planning.
You can drive a car by looking in the rear view mirror as long as
nothing is ahead of you. Not enough software professionals are engaged
in forward thinking.
The standard definition of AI is that which we don't understand.
Questioner [whose initials are EF]: You mentioned in your talk about
a catastrophic event taking place ten years from now that will depress
you. Looking back, doesn't UNIX depress you? It's now 1989 and UNIX
is 1968.
BJ: Standards for UNIX are coming. This will halt progress. That
will provide the opportunity for something better to enter the marketplace.
You see, as long as UNIX keeps slipping and makes some progress, nothing
better will come along. The UNIX standards committees are therefore
doing us a great service by slowing down and eventually halting the
progress of UNIX.
Generic software has absolutely no value.
The GNU approach to software is one extreme. Of course, it violates
my axiom of the top programmers demanding lots of money for their products.
Computer Science Education: Where Are the Software Engineers of Tomorrow?
Dr. Robert B.K. Dewar, AdaCore Inc.
Dr. Edmond Schonberg, AdaCore Inc.
It is our view that Computer Science (CS) education is neglecting
basic skills, in particular in the areas of programming and formal methods.
We consider that the general adoption of Java as a first programming
language is in part responsible for this decline. We examine briefly
the set of programming skills that should be part of every software
professional's repertoire.
It is all about programming! Over the last few years we have noticed
worrisome trends in CS education. The following represents a summary
of those trends:
Mathematics requirements in CS programs are shrinking.
The development of programming skills in several languages is
giving way to cookbook approaches using large libraries and special-purpose
packages.
The resulting set of skills is insufficient for today's software
industry (in particular for safety and security purposes) and, unfortunately,
matches well what the outsourcing industry can offer. We are training
easily replaceable professionals.
These trends are visible in the latest curriculum recommendations from
the Association for Computing Machinery (ACM). Curriculum 2005 does
not mention mathematical prerequisites at all, and it mentions only
one course in the theory of programming languages [1].
We have seen these developments from both sides: As faculty members
at New York University for decades, we have regretted the introduction
of Java as a first language of instruction for most computer science
majors. We have seen how it has weakened the formation of our students,
as reflected in their performance in systems and architecture courses.
As founders of a company that specializes in Ada programming tools for
mission-critical systems, we find it harder to recruit qualified applicants
who have the right foundational skills. We want to advocate a more rigorous
formation, in which formal methods are introduced early on, and programming
languages play a central role in CS education.
Formal Methods and Software Construction
Formal techniques for proving the correctness of programs were an extremely
active subject of research 20 years ago. However, the methods (and the
hardware) of the time prevented these techniques from becoming widespread,
and as a result they are more or less ignored by most CS programs. This
is unfortunate because the techniques have evolved to the point that
they can be used in large-scale systems and can contribute substantially
to the reliability of these systems. A case in point is the use of SPARK
in the re-engineering of the ground-based air traffic control system
in the United Kingdom (see a description of iFACTS – Interim Future
Area Control Tools Support, at <www.nats.co.uk/article/90>). SPARK is
a subset of Ada augmented with assertions that allow the designer to
prove important properties of a program: termination, absence of run-time
exceptions, finite memory usage, etc. [2]. It is obvious that this kind
of design and analysis methodology (dubbed Correctness by Construction)
will add substantially to the reliability of a system whose design has
involved SPARK from the beginning. However, PRAXIS, the company that
developed SPARK and which is designing iFACTS, finds it hard to recruit
people with the required mathematical competence (and this is true
even in the United Kingdom, where formal methods are more widely taught
and used than in the United States).
Another formal approach to which CS students need exposure is model
checking and linear temporal logic for the design of concurrent systems.
For a modern discussion of the topic, which is central to mission-critical
software, see [3].
Another area of computer science which we find neglected is the study
of floating-point computations. At New York University, a course in
numerical methods and floating-point computing used to be required,
but this requirement was dropped many years ago, and now very few students
take this course. The topic is vital to all scientific and engineering
software and is semantically delicate. One would imagine that it would
be a required part of all courses in scientific computing, but these
often take MatLab to be the universal programming tool and ignore the
topic altogether.
The Pitfalls of Java as a First Programming Language
Because of its popularity in the context of Web applications and the
ease with which beginners can produce graphical programs, Java has become
the most widely used language in introductory programming courses. We
consider this to be a misguided attempt to make programming more fun,
perhaps in reaction to the drop in CS enrollments that followed the
dot-com bust. What we observed at New York University is that the Java
programming courses did not prepare our students for the first course
in systems, much less for more advanced ones. Students found it hard
to write programs that did not have a graphic interface, had no feeling
for the relationship between the source program and what the hardware
would actually do, and (most damaging) did not understand the semantics
of pointers at all, which made the use of C in systems programming very
challenging.
Let us propose the following principle: The irresistible beauty of programming
consists in the reduction of complex formal processes to a very small
set of primitive operations. Java, instead of exposing this beauty,
encourages the programmer to approach problem-solving like a plumber
in a hardware store: by rummaging through a multitude of drawers (i.e.
packages) we will end up finding some gadget (i.e. class) that does
roughly what we want. How it does it is not interesting! The result
is a student who knows how to put a simple program together, but does
not know how to program. A further pitfall of the early use of Java
libraries and frameworks is that it is impossible for the student to
develop a sense of the run-time cost of what is written because it is
extremely hard to know what any method call will eventually execute.
A lucid analysis of the problem is presented in [4].
We are seeing some backlash to this approach. For example, Bjarne Stroustrup
reports from Texas A & M University that the industry is showing increasing
unhappiness with the results of this approach. Specifically, he notes
the following:
I have had a lot of complaints about that [the use of Java as a
first programming language] from industry, specifically from AT&T,
IBM, Intel, Bloomberg, NI, Microsoft, Lockheed-Martin, and more.
[5]
In a private discussion on this topic, he reported the following:
It [Texas A&M] did [teach Java as the first language]. Then I started
teaching C++ to the electrical engineers and when the EE students
started to out-program the CS students, the CS department switched
to C++. [5]
It will be interesting to see how many departments follow this trend.
At AdaCore, we are certainly aware of many universities that have adopted
Ada as a first language because of similar concerns.
A Real Programmer Can Write in Any Language (C, Java, Lisp, Ada)
Software professionals of a certain age will remember the slogan of
old-timers from two generations ago when structured programming became
the rage: Real programmers can write Fortran in any language. The slogan
is a reminder of how thinking habits of programmers are influenced by
the first language they learn and how hard it is to shake these habits
if you do all your programming in a single language. Conversely, we
want to say that a competent programmer is comfortable with a number
of different languages and that the programmer must be able to use the
mental tools favored by one of them, even when programming in another.
For example, the user of an imperative language such as Ada or C++ must
be able to write in a functional style, acquired through practice with
Lisp and ML [see Note], when manipulating recursive structures. This
is one indication of the importance of learning in-depth a number of
different programming languages. What follows summarizes what we think
are the critical contributions that well-established languages make
to the mental tool-set of real programmers. For example, a real programmer
should be able to program inheritance and dynamic dispatching in C,
information hiding in Lisp, tree manipulation libraries in Ada, and
garbage collection in anything but Java. The study of a wide variety
of languages is, thus, indispensable to the well-rounded programmer.
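As an illustration of the functional style for recursive structures mentioned above, here is a small sketch (in Python, used for all examples in this compilation; the tuple encoding of expression trees is my own choice): the tree is transformed by pure recursion, building new nodes rather than mutating old ones.

    # Constant-fold ('+', a, b) / ('*', a, b) expression trees without mutation.
    def simplify(expr):
        if not isinstance(expr, tuple):          # a leaf: number or variable name
            return expr
        op, left, right = expr
        l, r = simplify(left), simplify(right)   # recurse first (bottom-up)
        if isinstance(l, (int, float)) and isinstance(r, (int, float)):
            return l + r if op == '+' else l * r
        return (op, l, r)                        # build a new node, leave inputs alone

    tree = ('+', ('*', 2, 3), ('*', 'x', ('+', 1, 1)))
    print(simplify(tree))                        # ('+', 6, ('*', 'x', 2))

A programmer who has internalized this habit from Lisp or ML can carry the same side-effect-free traversal style into Ada or C++.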
Why C Matters
C is the low-level language that everyone must know. It can be seen
as a portable assembly language, and as such it exposes the underlying
machine and forces the student to understand clearly the relationship
between software and hardware. Performance analysis is more straightforward,
because the cost of every software statement is clear. Finally, compilers
(GCC for example) make it easy to examine the generated assembly code,
which is an excellent tool for understanding machine language and architecture.
Why C++ Matters
C++ brings to C the fundamental concepts of modern software engineering:
encapsulation with classes and namespaces, information hiding through
protected and private data and operations, programming by extension
through virtual methods and derived classes, etc. C++ also pushes storage
management as far as it can go without full-blown garbage collection,
with constructors and destructors.
Why Lisp Matters
Every programmer must be comfortable with functional programming and
with the important notion of referential transparency. Even though most
programmers find imperative programming more intuitive, they must recognize
that in many contexts a functional, stateless style is clear, natural,
easy to understand, and efficient to boot.
An additional benefit of the practice of Lisp is that the program is
written in what amounts to abstract syntax, namely the internal representation
that most compilers use between parsing and code generation. Knowing
Lisp is thus an excellent preparation for any software work that involves
language processing.
Finally, Lisp (at least in its lean Scheme incarnation) is amenable
to a very compact self-definition. Seeing a complete Lisp interpreter
written in Lisp is an intellectual revelation that all computer scientists
should experience.
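To make the compact-self-definition point concrete without reproducing a full metacircular evaluator, here is a sketch of an evaluator for a tiny Lisp-like subset, written in Python for consistency with the other examples in this compilation (the real exercise, of course, is to write it in Lisp itself).

    # A tiny evaluator for a Lisp-like subset: numbers, symbols, if, lambda,
    # and application of built-in procedures.
    import math, operator as op

    def tokenize(src):
        return src.replace('(', ' ( ').replace(')', ' ) ').split()

    def parse(tokens):
        tok = tokens.pop(0)
        if tok == '(':
            lst = []
            while tokens[0] != ')':
                lst.append(parse(tokens))
            tokens.pop(0)                       # discard ')'
            return lst
        try:
            return int(tok)
        except ValueError:
            return tok                          # a symbol

    GLOBAL = {'+': op.add, '-': op.sub, '*': op.mul, '<': op.lt, 'sqrt': math.sqrt}

    def evaluate(x, env=GLOBAL):
        if isinstance(x, str):                  # symbol lookup
            return env[x]
        if not isinstance(x, list):             # literal number
            return x
        head = x[0]
        if head == 'if':                        # (if test then else)
            _, test, then, alt = x
            return evaluate(then if evaluate(test, env) else alt, env)
        if head == 'lambda':                    # (lambda (args...) body)
            _, params, body = x
            return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
        proc = evaluate(head, env)              # application
        return proc(*[evaluate(arg, env) for arg in x[1:]])

    # ((lambda (n) (* n n)) 7)  =>  49
    print(evaluate(parse(tokenize("((lambda (n) (* n n)) 7)"))))

Note that parse() also makes the abstract-syntax point visible: the nested lists it returns are, in effect, the Lisp program itself.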
Why Java Matters
Despite our comments on Java as a first or only language, we think that
Java has an important role to play in CS instruction. We will mention
only two aspects of the language that must be part of the real programmer's
skill set:
An understanding of concurrent programming (for which threads
provide a basic low-level model).
Reflection, namely the understanding that a program can be instrumented
to examine its own state and to determine its own behavior in a
dynamically changing environment.
Why Ada Matters
Ada is the language of software engineering par excellence. Even when
it is not the language of instruction in programming courses, it is
the language chosen to teach courses in software engineering. This is
because the notions of strong typing, encapsulation, information hiding,
concurrency, generic programming, inheritance, and so on, are embodied
in specific features of the language. From our experience and that of
our customers, we can say that a real programmer writes Ada in any language.
For example, an Ada programmer accustomed to Ada's package model, which
strongly separates specification from implementation, will tend to write
C in a style where well-commented header files act in somewhat the same
way as package specs in Ada. The programmer will include bounds checking
and consistency checks when passing mutable structures between subprograms
to mimic the strong-typing checks that Ada mandates [6]. She will organize
concurrent programs into tasks and protected objects, with well-defined
synchronization and communication mechanisms.
The concurrency features of Ada are particularly important in our age
of multi-core architectures. We find it surprising that these architectures
should be presented as a novel challenge to software design when Ada
had well-designed mechanisms for writing safe, concurrent software 30
years ago.
Programming Languages Are Not the Whole Story
A well-rounded CS curriculum will include an advanced course in programming
languages that covers a wide variety of languages, chosen to broaden
the understanding of the programming process, rather than to build a
résumé in perceived hot languages. We are somewhat dismayed to see the
popularity of scripting languages in introductory programming courses.
Such languages (Javascript, PHP, Atlas) are indeed popular tools of
today for Web applications. Such languages have all the pedagogical
defects that we ascribe to Java and provide no opportunity to learn
algorithms and performance analysis. Their absence of strong typing
leads to a trial-and-error programming style and prevents students from
acquiring the discipline of separating design of interfaces from specifications.
However, teaching the right languages alone is not enough. Students
need to be exposed to the tools to construct large-scale reliable programs,
as we discussed at the start of this article. Topics of relevance are
studying formal specification methods and formal proof methodologies,
as well as gaining an understanding of how high-reliability code is
certified in the real world. When you step into a plane, you are putting
your life in the hands of software which had better be totally reliable.
As a computer scientist, you should have some knowledge of how this
level of reliability is achieved. In this day and age, the fear of terrorist
cyber attacks has given a new urgency to the building of software that
is not only bug free, but is also immune from malicious attack. Such
high-security software relies even more extensively on formal methodologies,
and our students need to be prepared for this new world.
References
[1] Joint Taskforce for Computing Curricula. "Computing Curricula
2005: The Overview Report." ACM/AIS/IEEE, 2005.
<www.acm.org/education/curric_vols/CC2005-March06Final.pdf>
[2] Barnes, John. High Integrity Ada: The SPARK Approach.
Addison-Wesley, 2003.
[3] Ben-Ari, M. Principles of Concurrent and Distributed Programming.
2nd ed. Addison-Wesley, 2006.
[4] Mitchell, Nick, Gary Sevitsky, and Harini Srinivasan. "The Diary
of a Datum: An Approach to Analyzing Runtime Complexity in Framework-Based
Applications." Workshop on Library-Centric Software Design, Object-Oriented
Programming, Systems, Languages, and Applications, San Diego, CA, 2005.
[5] Stroustrup, Bjarne. Private communication. Aug. 2007.
[6] Holzmann, Gerard J. "The Power of Ten – Rules for Developing
Safety Critical Code." IEEE Computer, June 2006: 93-95.
Note
Several programming language and system names have evolved from
acronyms whose formal spellings are no longer considered applicable
to the current names for which they are readily known. ML, Lisp,
GCC, PHP, and SPARK fall under this category.
One of the article's main points (one that was misunderstood, Dewar
tells me) is that the adoption of Java as
a first programming language in college courses has led to this decline.
Not exactly. Yes, Dewar believes that Java's graphic libraries allow
students to cobble together software without understanding the underlying
source code.
But the problem with CS programs goes far beyond their focus on Java,
he says.
"A lot of it is, 'Let's make this all more fun.' You know, 'Math
is not fun, let's reduce math requirements. Algorithms are not fun,
let's get rid of them. Ewww – graphic libraries, they're fun.
Let's have people mess with libraries. And [forget] all this business
about 'command line' – we'll have people use nice visual interfaces
where they can point and click and do fancy graphic stuff and have
fun."
Dewar says his email in-box is crammed full of positive responses
to his article, from students as well as employers. Many readers have
thanked him for speaking up about a situation they believe needs addressing,
he says.
One email was from an IT staffer who is working with a junior programmer.
The older worker suggested that the young engineer check the call stack
to see about a problem, but unfortunately, "he'd never heard of a call
stack."
Mama, Don't Let Your Babies Grow Up to be Cowboys (or Computer Programmers)
At fault, in Dewar's view, are universities that are desperate to
make up for lower enrollment in CS programs – even if that means gutting
the programs.
It's widely acknowledged that enrollments in computer science programs
have declined. The chief causes: the dotcom crash made a CS career seem
scary, and the never-ending headlines about outsourcing make it seem
even scarier. Once seen as a reliable meal ticket, CS is now viewed by
some concerned parents with an anxiety usually reserved for Sociology
or Philosophy degrees. Why waste your time?
College administrators are understandably alarmed by smaller student
head counts. "Universities tend to be in the raw numbers mode," Dewar
says. "'Oh my God, the number of computer science majors has dropped
by a factor of two, how are we going to reverse that?'"
They've responded, he claims, by dumbing
down programs, hoping to make them more accessible and popular. Aspects
of curriculum that are too demanding, or perceived as tedious, are downplayed
in favor of simplified material that attracts a larger enrollment. This
effort is counterproductive, Dewar says.
"To me, raw numbers are not necessarily the first concern. The first
concern is that people get a good education."
These students who have been spoon-fed easy material aren't prepared
to compete globally. Dewar, who also co-owns a software company and
so deals with clients and programmers internationally, says, "We see
French engineers much better trained than American engineers," coming
out of school.
Microsoft has unveiled a new Web site offering lessons to new programmers
on building applications using the tools in Visual Studio 2005.
[Sep 30, 2006]
Dreamsongs essay: Triggers & Practice: How Extremes in Writing Relate
to Creativity and Learning [pdf]
I presented this keynote at XP/Agile Universe 2002 in Chicago, Illinois.
The thrust of the talk is that it is possible to teach creative activities
through an MFA process and to get better by practicing, but computer
science and software engineering education on one hand and software
practices on the other do not begin to match up to the discipline the
arts demonstrate. Get to work.
Knuth's view holds; Stallman's view does not make any sense other than
in the context of his cult :-). See also the Slashdot discussion:
Slashdot: Is Programming Art?
Art and hand-waving are two things that a lot of people consider
to go very well together. Art and computer programming, less so. Donald
Knuth put them together when he named his wonderful multivolume set
on algorithms The Art of Computer Programming, but
Knuth chose a craft-oriented definition of art (PDF) in order to
do so.
... ... ...
Someone I didn't attempt to contact but whose words live on is Albert
Einstein. Here are a couple of relevant quotes:
[W]e do science when we reconstruct in the language
of logic what we have seen and experienced. We do art when we communicate
through forms whose connections are not accessible to the conscious
mind yet we intuitively recognise them as something meaningful.
Also:
After a certain level of technological skill is
achieved, science and art tend to coalesce in aesthetic plasticity
and form. The greater scientists are artists as well.[1]
This is a lofty place to start. Here's Fred Brooks with a more direct
look at the subject:
The programmer, like the poet, works only slightly
removed from pure thought-stuff. He builds his castles in the air,
from air, creating by exertion of the imagination. Few media of
creation are so flexible, so easy to polish and rework, so readily
capable of realizing grand conceptual structures.[2]
He doesn't say it's art, but it sure sounds a lot like it.
In that vein, Andy Hunt from the Pragmatic Programmers says:
It is absolutely an art. No question about it.
Check out this quote from the Marines:
An even greater part of the conduct of war
falls under the realm of art, which is the employment of creative
or intuitive skills. Art includes the creative, situational
application of scientific knowledge through judgment and experience,
and so the art of war subsumes the science of war. The art of
war requires the intuitive ability to grasp the essence of a
unique military situation and the creative ability to devise
a practical solution.
Sounds like a similar situation to software development
to me.
There are other similarities between programming
and the arts; see my essay
Art In Programming (PDF).
I could go on for hours about the topic...
Guido van Rossum, the creator of Python, has stronger alliances to
Knuth's definition:
I'm with Knuth's definition (or use) of the word art.
To me, it relates strongly to creativity, which is very
important for my line of work.
If there was no art in it, it wouldn't be any fun, and then I
wouldn't still be doing it after 30 years.
Bjarne Stroustrup, the creator of C++, is also more like Knuth in
refining his definition of art:
When done right, art and craft blends seamlessly. That's the
view of several schools of design, though of course not the view
of people into "art as provocation".
Define "craft"; define "art". The crafts and arts that I appreciate
blend seamlessly into each other so that there is no dilemma.
So far, these views are very top-down. What happens when you change
the viewpoint? Paul Graham, programmer and author of Hackers and
Painters, responded that he'd written quite a bit on the subject
and to feel free to grab something. This was my choice:
I've found that the best sources of ideas are
not the other fields that have the word "computer" in their names,
but the other fields inhabited by makers. Painting has been a much
richer source of ideas than the theory of computation.
For example, I was taught in college that one
ought to figure out a program completely on paper before even going
near a computer. I found that I did not program this way. I found
that I liked to program sitting in front of a computer, not a piece
of paper. Worse still, instead of patiently writing out a complete
program and assuring myself it was correct, I tended to just spew
out code that was hopelessly broken, and gradually beat it into
shape. Debugging, I was taught, was a kind of final pass where you
caught typos and oversights. The way I worked, it seemed like programming
consisted of debugging.
For a long time I felt bad about this, just as
I once felt bad that I didn't hold my pencil the way they taught
me to in elementary school. If I had only looked over at the other
makers, the painters or the architects, I would have realized that
there was a name for what I was doing: sketching. As far as I can
tell, the way they taught me to program in college was all wrong.
You should figure out programs as you're writing them, just as writers
and painters and architects do.[3]
Paul goes on to talk about the implications for software design and
the joys of dynamic typing, which allows you to stay looser later.
Now, we're right down to the code. This is what Richard Stallman,
founder of the GNU Project and the Free Software Foundation, has to
say (throwing in a geek joke for good measure):
I would describe programming as a craft, which
is a kind of art, but not a fine art. Craft means making useful
objects with perhaps decorative touches. Fine art means making things
purely for their beauty.
Programming in general is not fine art, but some
entries in the obfuscated C contest may qualify. I saw one that
could be read as a story in English or as a C program. For the English
reading one had to ignore punctuation--for instance, the name Charlotte
might appear as char *lotte.
(Once I was eating in Legal Sea Food and ordered
arctic char. When it arrived, I looked for a signature, saw none,
and complained to my friends, "This is an unsigned char. I wanted
a signed char!" I would have complained to the waiter if I had thought
he'd get the joke.)
... ... ...
Constraints and Art
The existence of so many constraints in the actual practice of code
writing makes it tempting to dismiss programming as art, but when you
think about it, people who create recognized art have constraints too.
Writers, painters, and so on all have their code--writers must be comprehensible
in some sort of way in their chosen language. Musicians have tools of
expression in scales, harmonies, and timbres. Painters might seem to
be free of this, but cultural rules exist, as they do for the other
categories. An artist can break rules in an inspired way and receive
the highest praise for it--but sometimes only after they've been dead
for a long time.
Program syntax and logic might seem to be more restrictive than these
rules, which is why it is more inspiring to think as Fred Brooks did--in
the heart of the machine.
Perhaps it's more useful to look at the process. If there are ways
in which the concept of art could be useful, then maybe we'll find them
there.
If we broadly take the process as consisting of idea, design, and
implementation, it's clear that even if we don't accept that implementation
is art, there is plenty of scope in the first two stages, and there's
certainly scope in the combination. Thinking about it a little more
also highlights the reductio ad absurdum of looking at any art in this
way, where sculpture becomes the mere act of chiseling stone or painting
is the application of paint to a surface.
Looking at the process immediately focuses on the different situations
of the lone hacker or small team as opposed to large corporate teams,
who in some cases send specification documents to people they don't
even know in other countries. The latter groups hope that they've specified
things in such detail that they need to know nothing about the code
writers other than the fact that they can deliver.
The process for the lone hacker or small team might be almost unrecognizable
as a process to an outsider--a process like that described by Paul Graham,
where writing the code itself alters and shapes an idea and its design.
The design stage is implicit and ongoing. If there is art in idea and
design, then this is kneaded through the dough of the project like a
special magic ingredient--the seamless combination that Bjarne Stroustrup
mentioned. In less mystical terms, the process from beginning to end
has strong degrees of integrity.
The situation with larger project groups is more difficult. More
people means more time constraints on communication, just because the
sums are bigger. There is an immediate tendency for the existence of
more rules and a concomitant tendency for thinking inside the box. You
can't actually order people to be creative and brilliant. You can only
make the environment where it's more likely and hope for the best. Xerox
PARC and Bell Labs are two good examples of that.
The real question is how to be inspired for the small team, and additionally,
how not to stop inspiration for the larger team. This is a question
of personal development. Creative thinking requires knowledge outside
of the usual and ordinary, and the freedom and imagination to roam.
Why It Matters
What's the prize? What's the point? At the micro level, it's an idea
(which might not be a Wow idea) with a brilliant execution. At the macro
level, it's a Wow idea (getting away from analogues, getting away from
clones--something entirely new) brilliantly executed.
I realize now that I should have also asked my responders, if they
were sympathetic to the idea of programming as art, to nominate some
examples. I'll do that myself. Maybe you'd like to nominate some more?
I think of the early computer game Elite, made by a team of two, which
extended the whole idea of games both graphically and in game play.
There are the first spreadsheets VisiCalc and Lotus 1-2-3 for the elegance
of the first concept even if you didn't want to use one. Even though
I don't use it anymore, the C language is artistic for the elegance
of its basic building blocks, which can be assembled to do almost anything.
Anyway, go make some art. Why not?!
References
[1] Alice Calaprice, The New Quotable Einstein, Princeton University Press.
[2] Frederick P. Brooks, Jr., The Mythical Man-Month: Essays on Software Engineering, Addison-Wesley, Reading, MA, anniversary edition, 1995.
The article and many of the comments seem to be trying to decide whether
programming is an art by examining coding. IMHO, that is no more
valid than examining typing (or handwriting) to determine if a novelist
is an artist, or examining hammers and chisels to decide if sculptors
are artists.
The code is a tool; the way that information is
accessed, manipulated, and presented is the art that a programmer produces.
Combining existing ideas in new ways (and creating completely new ideas)
to make some chunk of information more useful (or whatever aesthetic
pleases you) is what makes programming art.
Well of course it's art 2005-07-06 05:33:56 kbw333
In recent years it has been said that art is something produced by an
artist. This goes on to imply that if you're not an artist, your products
cannot be seriously considered art. It justifies an artist's messy room
as art, and everyone else's as a messy room. This point of view is also
used to justify the publicity awarded to particular popular artists,
while keeping the lid on others. Occasionally, as if by osmosis, a
new artist will be discovered and he/she'll join the ranks of established
artists.
When I was younger, there was talk of Arts and Crafts. These days it's
the arts that get focus and there's little talk of craft. Craft seems
to be implied in art. For example, a clever photograph will often require
skill with a camera and until recently, with film processing. This is
an example of arts and crafts being bound together. These huge metal
sculptures I keep bumping into are clearly art, but require knowledge
of metal working to achieve them; again arts and crafts. I don't think
you can have one without the other; it's a matter of emphasis.
It is said that the most complex structures built by mankind are software
systems. This is not generally appreciated because most people cannot
see them. Maybe that's a good thing because if we saw them as buildings,
we'd deem many of them unsafe. But this obscurity means that the beauty
of some software generally goes unrecognized.
It is clear that software construction is a craft. But you just need
to try it to realise that it's art too. The whole idea of design patterns
was an attempt to elevate the art in novices. There are many ways to
construct software, but it's artistic input that makes it manageable,
beautiful, and reliable.
A good overview. 2005-07-06 01:05:15
aixguru1
Constraints are found in any type of art. For instance with painters,
your canvas is a set size and your brushes are the ones (random things
included) that you have nearby. You have a limited set of tools and
constraints, but still the world is open to you.
The same is true with programming. You have constraints, but as Brooks
say, "air from air". You can imagine, create, and manipulate your programs
to do whatever you want them to do.
One thing to note from another comment is the comparison of craft
and true art based on skill levels. As with artists, programmers come
with various degrees of skill. Some art is primitive and unskilled,
but practice makes perfect and a programmer that works on his skillset
can improve as well.
One argument is that programming is structured and follows after others.
The key is that art is the same. Look up figure drawing for example
and you will find the techniques most artists use for basic figure drawing
to create what later become works of art. Art also imitates nature
and other artists in many cases. Sometimes imitation is the highest
form of flattery. With programming, imitation is a key to success on
various levels. For instance, I personally referenced and learned many
things about coding from the Richard Stevens books and his examples.
Many others have gained knowledge on "best practices" in coding as a
basis to help them create. Much like the elliptical shapes, ovals, circles,
and other shapes that make the lightly drawn poses in the start of a
figure drawing, programming has those key APIs, bits of code, libraries,
classes and "shapes" if you will that help you create the final "picture"
that makes up what we call our art.
The art of a programmer.
art vs. craft 2005-07-05 23:06:02
unwesen
being a programmer myself, and working closely with artists, i have
found that programming and writing/painting/composing music are rather
similar activities. one of my artist friends in particular was interested
in what programming actually is, so we struggled to find a definition
for it.
as it turns out, we quickly accepted that programming must be craft.
we agreed that in order to be a reasonable artist, one has to be a good
artisan. art that isn't executed well is a stroke of luck, might be
beautiful, but is essentially meaningless. an artisan who puts thought
and experience into the piece he creates, however, creates a manifestation of
his thoughts, and thereby makes them accessible to others. craft, in
other words, is a carrier medium for culture. we judge long-dead cultures
by the 'things' they have made. it is no accident that we call these
things 'artifacts', from the latin words 'ars' (art) and 'facere' (to
make, to create).
if the products of craft are carriers of culture, what, then, is art?
it's something you might call _inspired_ craft. again, if you look at
the latin roots for 'inspiration', 'spirare' means to 'breathe'; receiving
an 'inspiration' therefore is receiving the breath of life, the spirit
that god reputedly breathed into us in order to make us alive. creating
life is universally seen as creating something new - in the simplest
sense, children are 'new' human beings.
art, therefore, must be craft that has an element of newness to it.
how do you achieve something new? by breaking the boundaries of the
system within which everything 'old' exists. if craft is a carrier for
our culture, art by definition must break with that culture. now there
are two possibilities: either your art is rejected by the majority of
people, or it is accepted as beautiful. in the first case, it might
be anything - meaningless, ahead of its time, etc. in the second case,
however, it will quickly become part of the culture it broke with -
culture expands to embrace those slight deviations from its norm.
art, therefore, is craft that advances our culture. in this i differ
strongly from stallman's opinion that art is 'merely beautiful' - it
must have an impact on our culture in order to be considered art. that
might sound rather elitist, i'm afraid... yet consider that every culture
contains subcultures, and the impact i'm speaking of does not have to
be earth-shattering. a street musician known in one part of a smaller
city for his inspired music is an artist, even if his art reaches a
few hundred people at most. as long as it's not a mere reproduction
of our culture, it's art.
reading through all this again, i still agree that programming is mainly
craft. if you are an inspired programmer, however, you might well create
art. as with conventional artists, whether your creation is art or craft
may sometimes not be recognized until long after the act of creation.
i could go on about this. i know there are some aspects still not covered,
but this text is too long already. in closing i would like to use this
text as an example of art vs. craft. i certainly know how to write,
and to some extent how to phrase my thoughts in order to achieve certain
effects. in that sense, i'm an artisan (although, admittedly, not a
very good one). whether this text can be considered art depends very
much on the readership: either i have restated the obvious, in which case it's
merely poor craft, or i have managed to blow fresh thoughts into enough
of your minds, in which case it might be considered a small work of art.
art vs. craft 2005-07-06 05:41:15
evanh
I find it quite easy to merge your two definitions together by simply
reducing, like your "street musician", the physical border of "our
culture" to "my culture".
Indeed it's art 2005-07-05 20:41:12
tonywilliams
I, too, program like Paul Graham. Most of the great programmers from
whom I have learnt program exactly the same way - they just produce
better code on the first pass than I do (g).
A nomination for art I've seen? Unix design was an example of art. The
power of "everything is a file" and the concept of a pipe are pure art.
I have recently been reading the O'Reilly title "Classic Shell Scripting"
and it has examples of combining those two principles to produce amazing
software - such as a spell checker in a single pipe.
Rael Dornfest's original version of Blosxom was art. Blog software in
a very small number of lines of Perl that used simplicity, the power
of Perl and the facilities of the underlying OS. Since then the refinements
and improvements have been like the final polish of a sculpture.
# Tony
I program like Paul Graham! 2005-07-05 16:32:53
makeme
TFA quotes Paul Graham as saying:
"For example, I was taught in college that one ought to figure out a
program completely on paper before even going near a computer. I found
that I did not program this way. I found that I liked to program sitting
in front of a computer, not a piece of paper. Worse still, instead of
patiently writing out a complete program and assuring myself it was
correct, I tended to just spew out code that was hopelessly broken,
and gradually beat it into shape. Debugging, I was taught, was a kind
of final pass where you caught typos and oversights. The way I worked,
it seemed like programming consisted of debugging."
This is how I program! Maybe I'm not as crappy a programmer as I originally
thought!
art and programming 2005-07-02 06:27:44 neilhorne
For a further development of this idea see:
dotAtelier
The entire second page of the article talks about scripting languages,
specifically Javascript (in browsers) and Groovy.
1. Kudos to the
Groovy
[codehaus.org] authors. They've even garnered James Gosling's attention.
If you write Java code and consider yourself even a little bit of a
forward thinker, look up Groovy. It's a very important JSR (JSR-241
specifically).
2. He talks about Javascript solely from the point of view of the
browser. Yes, I agree that Javascript is predominantly implemented in
a browser, but its reach can be felt everywhere. Javascript == ActionScript
(Flash scripting language). Javascript == CFScript (ColdFusion scripting
language). Javascript object notation == Python object notation.
But what about Javascript and
Rhino's
[mozilla.org] inclusion in
Java 6 [sun.com]? I've been using Rhino as a server side language
for a while now because Struts is way too verbose for my taste. I just
want a thin glue layer between the web interface and my java components.
I'm sick and tired of endless xml configuration (that means you, too,
EJB!). A Rhino script on the server (with embedded Request, Response,
Application, and Session objects) is the perfect glue that does not
need xml configuration. (See also Groovy's Groovlets for a thin glue
layer).
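To make that "thin glue layer" idea a bit more concrete, here is a minimal sketch of evaluating a server-side JavaScript snippet from Java through the javax.script API that ships with Java 6 (backed by Rhino). The request binding and the routing string are purely illustrative assumptions, not part of the commenter's actual setup.

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

// Minimal sketch: hand a Java object to a JavaScript "glue" script and read
// back its result. In a real servlet the binding would be the Request object
// the commenter mentions; here it is just a path string for illustration.
public class RhinoGlueSketch {
    public static void main(String[] args) throws ScriptException {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
        js.put("request", "/orders/42");   // hypothetical stand-in for a real Request
        Object routed = js.eval(
                "var path = request;" +
                "'dispatching ' + path + ' to the order handler';");
        System.out.println(routed);        // dispatching /orders/42 to the order handler
    }
}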
3. Javascript has been called Lisp in C's clothing. Javascript (via
Rhino) will be included in Java 6. I also read that Java 6 will allow
access to the parse trees created by the javac compiler (same link as
Java 6 above).
Java is now Lisp? Paul Graham writes about
9 features [paulgraham.com] that made Lisp unique when it debuted
in the 50s. Access to the parse trees is one of the most advanced features
of Lisp. He argues that when a language has all 9 features (and Java
today is at about #5), you've not created a new language but a dialect
of Lisp.
I am a Very Big Fan of dynamic languages that can flex like a pretzel
to fit my problem domain. Is Java evolving to be that pretzel?
Sadly Delphi/Kylix (Object Pascal) is often overlooked. Perl, Ruby,
etc. are all fine for scripts, but in most cases a compiled program
is the better way to go. Delphi lets you program procedurally like C,
or with Objects like C++, only the union is much more natural. It prevents
you from making many stupid mistakes, while allowing you 99.9% of the
power C has. It borrows some syntax from perhaps better languages (Oberon,
Modula, etc.), but has a much bigger and more useful standard library.
(Unofficially, anyway...)
VFP is great. It has its own easy-to-deploy runtime. You can compile
to .exe. Its IDE is excellent. It is complete with the front-end user
interface, middle-ware code and its own multi-user-safe, high-performance
database engine (desktop). BUT: M$ (aka the Borg) assimilated back in
the early 90's what was then a cross-platform development tool. Now
M$'s vision of cross-platform for VFP is multiple versions of Windows.
Plus M$ cannot make a lot of end-user money on a product whose runtime
is free.
2003-05-15 03:05:28 anonymous
I've found that C# grows on me faster than any other language
I've used. At first I was very disappointed, saying it was just
9% better than Java. I was dismissive of the funny ways they
use the new and override keywords until I understood they had
addressed an important set of problems.
Having used it a while, I'd say it's very nice. Perhaps the
best single advantage that C# has over Java, however, is that
when it burst onto the public scene, it was much more complete
than Java was for the first several years. Including libraries
and documentation. It is of course completely unfair that the
C# designers had years to use and study Delphi and Java and
C++ before committing to a design for C#. So what!
The single best thing about C# may be that it works just
as the documentation says it does. This alone is worth the
price of admission (which is steep).
I liked some parts of AREXX -- on the Amiga -- mostly the idea
of the standard interprocess communication scripting. However,
I always had problems with the syntax -- figuring out what was
actually being passed, or being processed. It was weird. (I
think in C.)
I eventually did figure out how to do useful things -- my
favorite is a script that controls 3D image rendering in Lightwave,
uses an external image processing program to apply motion blur
and watermarks, then loads the results into the Toaster frame
buffer, and talks to a comm program that controls a SVHS single
frame editing deck to write the frame out.
All possible, because these programs that didn't know anything
about each other all supported an Arexx port.
I wish the same thing existed on Linux. Perl scripts and
system() calls are not the same thing as interprocess communication.
And don't get me started about that fu-"scripting" that gimp
has.
These are my preferences, based on the kind of work I've done and
continue to do, the order in which I learned the languages, and just
plain personal taste. In the spirit of generating new ideas, learning
new techniques, and maybe understanding why things are done
the way they're done, it's worth considering the different ways to do
them.
The
Pragmatic Programmers suggest learning a new language every year.
This has already paid off for me. The more different languages I learn,
the more I understand about programming in general. It's a lot easier
to solve problems if you have a toolbox full of good tools.
... ... ...
Every language is sacred in the eyes of its zealots, but there's
bound to be someone out there for whom the language just doesn't feel
right. In the open source world, we're fortunate to be able to pick
and choose from several high-quality and free and open languages to
find what fits our minds the best.
...was this article really about programming in general, or a hyping
of open source software? open source programmers (i'm thinking of Python,
Ruby, etc.) are really no better than, say for example, C++ programmers
or JAVA programmers.
just because they use open source software solutions and technologies,
does not mean they have any more a grasp on programming concepts and
the tricks of the trade than those using proprietary solutions.
i consider myself to be more a teacher of programming (i am just
better at that), but i don't think that someone who has been programming
for years or uses open source solutions is any more qualified a programmer
than i am.
Though many free software programmers exhibit high quality in their
work, I'll hesitate before concluding that a good way to nurture good
coders is to throw them into the midst of the free community. It may
well be that many people go into free software because they are
already competent enough and want to contribute.
That said, I'm not sure either what's the best way to groom people
into truly professional coders.
<off-topic>
An excellent (IMO) book which introduces assembly languages to complete
beginners is "Peter Norton's Assembly Language Book for the IBM PC",
by Peter Norton and John Socha.
</off-topic>
It happens that I have a very long
resume.
The reason I make it so long is that I depend on potential clients finding
it via the search engines for a large portion of my business. If I just
wanted to help someone understand my employability it could be considerably
shorter. But in an effort to make my resume show up in a lot of searches
for skills, I mention every skill keyword that I can legitimately claim
to have experience in somewhere in the resume, sometimes several times.
The resume is designed to appeal to buzzword hunters.
But it annoys me, I shouldn't have to do that. So my resume has an
editorial statement in it, aimed squarely at the HR managers you complain
about:
I strive to achieve quality, correctness, performance and maintainability
in the products I write.
I believe a sound understanding and application of software engineering
principles is more valuable than knowledge of APIs or toolsets.
In particular, this makes one flexible enough to handle any sort
of programming task.
It helps if you don't deal with headhunters or contract brokers.
They're much worse than most HR managers for only attempting to place
people that match a buzzword search in a database rather than understanding
someone's real talent. Read
my policy on recruiters and contract agencies.
It's generally easier to get smaller companies to take real depth
seriously than the larger companies. One reason for this is that they
are too small to employ HR managers, so the person you're talking to
is likely to be another engineer. My first jobs writing retail Macintosh
software, Smalltalk, and Java were gotten at small companies where the
person I contacted first at the company was an engineer.
If you're looking for permanent employment, many companies post their
openings on their own web pages. I give some tips on locating these
job postings via search engines
on this
page.
I've been consulting full-time for over four years, and I've only
taken one contract through a broker. I've actually bent my own rules
and tried to find other work through the body shops, but they have been
useless to me. I've had far better luck finding work on my own, through
the web, and through referrals from friends and former coworkers.
elj.com - A Web Site
dedicated to exposing an eclectic mix of elegant programming technologies
The first incarnation of this page was started by John W.F. McClain
at MIT. He took it with him when he moved to Loral, but was unable to
update and maintain it there, so I offered to take it over.
In John's original page, he said:
Computer programmers create new languages all the time (often
without even realizing it.) My hope is this collection of critiques
will help raise the general quality of computer language design.
Predicting the future is easier said than done, and yet, we persist
in trying to do it. As futile as it may seem to forecast the future
of programming, if we're going to try, it's helpful to recognize certain
fundamental characteristics of programming and programmers. We know,
for example, that programming is hard. We know that the industry is
driven by the desire to make programming easier. And we know, as Perl
creator Larry Wall has often observed, that programmers are lazy, impatient,
and excessively proud.
This first condition formed the basis of Frederick Brooks's classic
text on software engineering, The Mythical Man Month (Addison-Wesley,
1995; ISBN 0201835959) first published in 1975, where he wrote:
As we look to the horizon of a decade hence, we see no silver
bullet. There is no single development, in either technology or
management technique, which by itself promises even one order of
magnitude improvement in productivity, in reliability, in simplicity.
Brooks's prediction was dire and, unfortunately, accurate. There
was no silver bullet, and as far as we can tell, there never will be.
However, programming is undoubtedly easier today than it was in the
past, and the latter two principles of programming and programmers explain
why. Programming became easier because the software industry was motivated
to make it so, and because lazy and impatient programmers wouldn't accept
anything less. And there is no reason to believe that this will change
in the future.
The following simple program, which
uses many typical programming concepts, is based on "The
early development of Programming Languages" by Donald E. Knuth and Luis
Trabb Pardo, published in "A History of Computing in the Twentieth Century",
edited by N. Metropolis, J. Howlett and Gian-Carlo Rota, Academic Press,
New York, 1980, pp. 197-273. They gave an example in Algol 60 and translated it
into some very old languages such as Zuse's Plankalkül, Goldstine's
Flow diagrams, Mauchly's Short Code, Burks' Intermediate PL, Rutishauser's
Klammerausdrücke, Bohm's Formules, Hopper's A-2, Laning and Zierler's
Algebraic interpreter, Backus' FORTRAN
0 and Brooker's
AUTOCODE.
Klammerausdrücke is a German expression;
we keep the German term in both the Russian and English versions.
A direct English translation is "bracket expression".
FORTRAN 0 was not really
called FORTRAN 0;
it is just the very first version of Fortran.
The program is given here in Pascal, C and five variants of Fortran.
The purpose of this is to show how Fortran has developed from a
cryptic, almost machine-dependent language into a modern, structured,
high-level programming language.
The final example shows the program in the new programming language
F.
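For readers who do not have the full page at hand, here is a rough sketch of that Knuth/Trabb Pardo example (the so-called TPK algorithm: read eleven values, apply f(t) = sqrt(|t|) + 5*t^3 to them in reverse order, and report "TOO LARGE" for results over 400), rendered in Java purely for illustration; the excerpt above gives it in Pascal, C and Fortran, and this reconstruction is mine, not the page's own listing.

import java.util.Scanner;

// Rough reconstruction of the TPK example: read 11 numbers, then process
// them in reverse order with f(t) = sqrt(|t|) + 5*t^3, flagging overflows.
public class TPKSketch {
    static double f(double t) {
        return Math.sqrt(Math.abs(t)) + 5.0 * Math.pow(t, 3);
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        double[] a = new double[11];
        for (int i = 0; i <= 10; i++) {
            a[i] = in.nextDouble();              // read the eleven input values
        }
        for (int i = 10; i >= 0; i--) {          // report in reverse order
            double y = f(a[i]);
            if (y > 400.0) {
                System.out.println(i + " TOO LARGE");
            } else {
                System.out.println(i + " " + y);
            }
        }
    }
}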
xpccx writes in with a bit from NewsBytes,
"NASA turned 43 this month and marked the occasion by
releasing
more than 200 of its scientific and engineering applications for public
use. The modular Fortran programs can be modified, compiled and run on most
Linux platforms." The software can be found at
OpenChannelSoftware.com.
At long last I am ready to prepare my own space mission. I wonder if a whiskey
barrel is gonna be air tight after I launch it/me into space with a trebuchet.
(It's this sort of unconventional thinking that should get me my job at
NASA. Or at least get me put to sleep).
In a recent study [1],
Prechelt compared the relative performance of Java and C++ in terms
of execution time and memory utilization. Unlike many benchmark studies,
Prechelt compared multiple implementations of the same task by multiple
programmers in order to control for the effects of differences in programmer
skill. Prechelt concluded that, "as of JDK 1.2, Java programs are typically
much slower than programs written in C or C++. They also consume much
more memory."
We have repeated Prechelt's study using
Lisp as the implementation language. Our results show that Lisp's performance
is comparable to or better than C++ in terms of execution speed, with
significantly lower variability which translates into reduced project
risk. Furthermore, development time is significantly lower and less
variable than either C++ or Java. Memory consumption is comparable to
Java. Lisp thus presents a viable alternative to Java for dynamic applications
where performance is important.
Conclusions
Lisp is often considered an esoteric
AI language. Our results suggest that it might be worthwhile to revisit
this view. Lisp provides nearly all of the advantages that make Java
attractive, including automatic memory management, dynamic object-oriented
programming, and portability. Our results suggest that Lisp is superior
to Java and comparable to C++ in terms of runtime, and superior to both
in terms of programming effort, and variability of results. This last
item is particularly significant as it translates directly into reduced
risk for software development.
There is more data available for other languages... (Score:4, Interesting) by crealf on Saturday September 08, @07:53AM
The article about Lisp is a follow-up of an article by Lutz Prechelt
in CACM99 (a
draft [ira.uka.de] is available on his page along with other articles).
If you look, from the developer point of view, Python and Perl work
times are similar to those of Lisp, along with program sizes.
Of course, from the speed point of view, in the test, none of the scripting
language could compete with Lisp.
Anyway, some articles by
Prechelt [ira.uka.de] are interesting too (as are many other research
papers; found via
citeseer
[nec.com], for instance)
In my opinion Smalltalk makes a much better alternative to Java.
Smalltalk has all the trappings--a very rich set of base classes, byte-coded,
garbage collected, etc.
There are many Smalltalks out there...Smalltalk/X is quite good, and
even has a Smalltalk-to-C compiler to boot. It's not totally free, but
pretty cheap (and I believe for non-commercial use everything works
but the S-to-C compiler).
Squeak is an even better place to start...it is highly portable (more so
than Java), very extensible (thanks to VM plugins) and has a very active
community that includes Alan Kay, the man who INVENTED the term "object-oriented
programming". Squeak has a just-in-time compiler (JITTER), support for
multiple front-ends, and can be tied to any kind of external libraries
and DLL's. It's not GPL'd, but it is free under an old Apple license
(I believe the only issue is with the fonts..they are still Apple fonts).
It's already been ported to every platform I've ever seen, including
the iPaq (both WinCE and Linux). It runs even on STOCK iPaqs (ie 32m)
without any expansion...Java, from what I understand, still has big
problems just running on the iPaq, not to mention unexpanded iPaqs.
And of course, we can't forget about old GNU Smalltalk, which is still
seeing development.
Smalltalk is quite easy to learn--you can just pick up the old "Smalltalk-80:
The Language" (Goldberg) and work right from there. Squeak already has
two really good books that have just come into print (go to Amazon and
search for Mark Guzdial).
(this is not meant as a language flame...I'm just throwing this out
on the table, since we're discussing alternatives to Java. Scheme/LISP
is a cool idea as well, but I think Smalltalk deserves some mention.)
I've written 2 Lisp and 4 Java books (Score:3, Informative) by MarkWatson on Saturday September 08, @09:56AM
First, great topic!
I have written 2 Lisp books for Springer-Verlag and 4 Java books,
so you bet that I have an opinion on my two favorite languages.
First, given free choice, I would use Common LISP for most of my
development work. Common LISP has a huge library and is a very stable
language. Although I prefer Xanalys LispWorks, there are also good
free Common LISP systems.
Java is also a great language, mainly because of the awesome class
libraries and the J2EE framework (I am biased here because I am just
finishing up writing a J2EE book).
Peter Norvig once made a great comment on Java and Lisp (roughly
quoting him): Java is only half as good as Lisp for AI but that is
good enough.
Anyway, I find that both Java and Common LISP are very efficient
environments to code in. I only use Java for my work because that is
what my customers want.
BTW, I have a new free web book on Java and AI on my web site - help
yourself!
Best regards,
Mark
-- www.markwatson.com -- Open Source and Content
Why Java succeeded, LISP can't make headway now (Score:5, Informative) by joneshenry on Saturday September 08, @10:44AM
Java was never marketed as the ultimate fast language to do searching
or to manipulate large data structures. What Java was marketed as was
a language that was good enough for programming paradigms popular at
the time such as object orientation and automatic garbage collection
while providing the most comprehensive APIs under the control of one
entity who would continue to push the extension of those APIs.
In this
LinuxWorld interview [linuxworld.com] look what Stroustrup is hoping
to someday have in the C++ standard for libraries. It's a joke, almost
all of those features are already in Java. As Stroustrup says, a standard
GUI framework is not "politically feasible".
Now go listen to what
Linus Torvalds is saying [ddj.com] about what he finds to be the
most exciting thing to happen to Linux the past year. Hint, it's not
the completion of the kernel 2.4.x, it's KDE. The foundation of KDE's
success is the triumph of Qt as the de facto standard that a large community
has embraced to build an entire reimplementation of end user applications.
To fill the void of a standard GUI framework for C++, Microsoft has
dictated a set of de facto standards for Windows, and Trolltech has
successfully pushed Qt as the de facto standard for Linux.
I claim that as a whole the programming community doesn't care whether
a standard is de jure or de facto, but they do care that SOME standard
exists. When it comes to talking people into making the investment of
time and money to learn a platform on which to base their careers, a
multitude of incompatible choices is NOT the way to market.
I find talking about LISP as one language compared to Java to be
a complete joke. Whose LISP? Scheme? Whose version of Scheme, GNU's
Guile? Is the Elisp in Emacs the most widely distributed implementation
of LISP? Can Emacs be rewritten using Guile? What is the GUI framework
for all of LISP? Anyone come up with a set of LISP APIs that are the
equivalent of J2EE or Jini?
I find it extremely disheartening that the same people who can grasp
the argument that the value of networks lies in the communication people
can do are incapable of applying the same reasoning to programming languages.
Is it that hard to read
Odlyzko [umn.edu] and not see that people just want to do the same
thing with programming languages--talk among themselves. The modern
paradigm for software where the money is being made is getting things
to work with each other. Dinosaur languages that wait around for decades
while slow bureaucratic committees create nonsolutions are going to
get stomped by faster moving mammals such as Java pushed by single-decision
vendors. And so are fragmented languages with a multitude of incompatible
and incomplete implementations such as LISP.
First off, one of the best spokespersons for Lisp is Paul Graham,
author of "On Lisp" and "ANSI Common Lisp". His web site is
Here [paulgraham.com].
Reading through his
articles
[paulgraham.com] will give you a better sense of what lisp is about.
One that I'd like to see people comment on is:
java's
cover [paulgraham.com] ... It resonates with my experience as well.
Also
This response [paulgraham.com] to his java's cover article succinctly
makes a good point that covers most of the bickering found here...
I personally think that the argument that Lisp is not widely known,
and therefore not enough programmers exist to support corporate projects
is bogus. The fact that you can hire someone who claims to know C++
does NOT in any way shape or form mean that you can hire someone who
will solve your C++ programming problem! See
my own web site [endpointcomputing.com] for more on that.
I personally believe that if you have a large C++ program you're
working on and need to hire a new person or a replacement who already
claims to know C++, the start up cost for that person is the same as
if you have a Lisp program doing the same thing, and need to hire someone
AND train them to use Lisp. Why? the training more than pays for itself
because it gives the new person a formal introduction to your project,
and Lisp is a more productive system than C++ for most tasks. Furthermore,
it's quite likely that the person who claims to know C++ doesn't know
it as well as you would like, and therefore the fact that you haven't
formally trained them on your project is a cost you aren't considering.
One of the points that the original article by the fellow at NASA
makes is that Lisp turned out to have a very low standard deviation
of run-time and development time. What this basically says is that the
lisp programs were more consistent. This is a very good thing as anyone
who has ever had deadlines knows.
Yes, the JVM version used in this study is old, but let's face it:
that would affect the average, but wouldn't affect the standard deviation
much. Java programs are more likely to be slow, as are C++ programs!
The point about lisp being a memory hog that a few people have made
here is invalid as well. The NASA article states:
Memory consumption for Lisp was significantly higher than for
C/C++ and roughly comparable to Java. However, this result is somewhat
misleading for two reasons. First, Lisp and Java both do internal memory
management using garbage collection, so it is often the case that the
Lisp and Java runtimes will allocate memory from the operating system
that is not actually being used by the application program.
People here have interpreted this to mean that the system is a memory
hog anyway. In fact many lisp systems reserve a large chunk of their
address space, which makes it look like a large amount of memory is
in use. However the operating system has really just reserved it, not
allocated it. When you touch one of the pages it does get allocated.
So it LOOKS like you're using a LOT of memory, but in fact because of
the VM system, you are NOT using very much memory at all.
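The same reserved-versus-actually-used distinction can be observed on the JVM side with the standard Runtime counters; the little sketch below is an illustration I am adding, not part of the study, and simply prints the heap ceiling, the address space currently claimed from the OS, and the bytes the program really occupies.

// Illustration of reserved vs. used memory on the JVM: totalMemory() is what
// has been claimed from the OS, which can be far more than what is in use.
public class MemoryView {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max     = rt.maxMemory();                  // ceiling the heap may grow to
        long claimed = rt.totalMemory();                // address space claimed so far
        long used    = claimed - rt.freeMemory();       // bytes actually occupied
        System.out.printf("max=%d MB, claimed=%d MB, used=%d MB%n",
                max >> 20, claimed >> 20, used >> 20);
    }
}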
The biggest reasons people don't use Lisp are they either don't understand
Lisp, or have been forced by clients or supervisors to use something
else.
It's interesting to see the results of a short study, even though
the author admits to the flaw in his methodology (primarily, the subjects
were self-chosen). Still, I don't think that's a fatal flaw, and I think
his results do have some validity.
However, I think the author misses a more important issue: development
involving a single programmer for a relatively small task isn't the
point for most organizations. Maintainability and a large pool of potential
developers (for example) are a significant factor in deciding what language
to use. LISP is a fabulous language, but try to find 10 programmers
at a reasonable price in the next 2 weeks. Good luck.
Also, while initial development time is important, typically testing/debug
cycles are the costly part of implementation, so that's what should
weigh on your mind as the area that the most gains can be made. Further,
large projects are collaborative efforts, so the objects and libraries
available for a particular language play a role in how quickly you
can produce quality code.
As an aside, it would've been interesting to see the same development
done by an experienced Visual Basic programmer. My guess is he/she would
have the shortest development cycle, and yet it wouldn't be my first choice
for a large scale development project (although, at the risk of being
flamed, it's not a bad language for just banging out a quick set of tools
for my own use).
Some of the things I believe are more important when thinking about
a programming language:
1) Amenable to use by team of programmers
2) Viability over a period of time (5-10 years).
3) Large developer base
4) Cross platform - not because I think cross-platform is a good thing
by itself; rather, I think it's important to avoid being locked in to
a single hardware or operating system vendor.
5) Mature IDE, debugging tools, and compilers.
6) Wide applicability
Computer languages tend to develop in response to specific needs, and
most programmers will probably end up learning 5-10 languages over the
course of their career. It would be helpful to have a discussion of
the appropriate roles for certain computer languages, since I'm not
sure any computer language is better than any other.
Perhaps not quite as illuminating as it appears (Score:1) by ascholl on Saturday September 08, @07:53AM
The study does show an advantage of lisp over java/c/c++ -- but
only for small problems which depend heavily on the types of
tasks lisp was designed for. The author recognizes the second problem
("It might be because the benchmark task involved search and managing
a complex linked data structure, two jobs for which Lisp happens to
be specifically designed and particularly well suited.") but doesn't
even mention the first.
While I haven't seen the example programs, I suspect that the reason
the java versions performed poorly time-wise was probably directly related
to object instantiation. Instantiating an object is a pretty expensive
task in java; typical 'by the book' methods would involve instantiating
new numbers for every collection of digits, word, digit/character set
representation, etc. The performance cut due to instantiation can be
minimized dramatically by re-using program wide collections of commonly
used objects, but the effect would only be seen on large inputs. Since
the example input was much smaller than the actual test case, it seems
likely that the programmers may have neglected to include this functionality.
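As a purely hypothetical illustration of the object-reuse technique the commenter has in mind, the sketch below keeps a program-wide cache of word tokens so that repeated occurrences share one instance instead of triggering a fresh allocation each time; the Token class and intern method are invented names for this example, not code from the study.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of program-wide object reuse: allocate a Token only the
// first time a word is seen, and hand back the cached instance afterwards.
public class TokenCache {
    static final class Token {
        final String text;
        Token(String text) { this.text = text; }
    }

    private static final Map<String, Token> CACHE = new HashMap<String, Token>();

    static Token intern(String word) {
        Token t = CACHE.get(word);
        if (t == null) {
            t = new Token(word);         // pay the instantiation cost once per word
            CACHE.put(word, t);
        }
        return t;                        // later occurrences reuse the same object
    }

    public static void main(String[] args) {
        // The second lookup returns the very same instance, so it allocates nothing.
        System.out.println(intern("phone") == intern("phone"));   // prints: true
    }
}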
Hypothesizing about implementation aside, the larger question is one of
problem scope. If you're going to claim that language A is better than
language B, you probably aren't concerned about tiny (albeit non-trivial)
problems like the example. Now, I don't know whether this is true, but
it seems possible that a large project implemented in java
or c/c++ might be built quicker, be easier to maintain, and be less
fragile than its equivalent in lisp. It may even perform better. It's
not fair to assume blindly that the advantages of lisp seen in this
study will scale up. I'm not claiming that they don't ... but still.
If we're choosing a language for a task, this should be a primary consideration.
Here is another relevant view, which explains that advocacy of a particular
language may have little in common with the desire to innovate. Most people
simply hate to be wrong after they have made their (important and time-consuming)
choice ;-)
I think one of the biggest reasons for language
advocacy (/OS advocacy/DB advocacy/etc.) is that we have a vested interest
in "our" language succeeding. Each of us has worked hard to learn the
subtleties and intricacies of [language X], and if something else comes
along that's better, we're suddenly newbies again. That hard-won expertise
doesn't carry much weight if [language Y] makes it easy for "any idiot"
to accomplish and/or understand what took you a week to figure out.
We start trying to come up with reasons why it's not really better:
It doesn't give you enough control; it's not as efficient; it has fewer
options...
PC vs. Mac. BSD vs. Mac. Mainframe vs. client-server. Command line
vs. GUI. How many people were a little saddened to see MS-DOS fading
into the mist, not because it was a great tool, but because they knew
how to use it?
A language advocate needs [language X] to succeed, to be dominant,
to be the best, because he has more status and more useful knowledge
that way.
Languages are very interesting things. They can either tie you up, or
set you free. But no programming language can be everything to everyone,
despite the fact that sometimes it looks like one does.
What is it that you like about programming languages? What is it that
you hate? What did you start on? What do you find yourself coding with most
often today? Has your choice of programming languages affected other choices
in software? (I.e. Lisp hackers tend to gravitate toward emacs, whereas
others go to vi)
It is quite interesting to me the amount of influence
that programming languages have on the way programmers think about how
they do things. One example from one perspective is this: if you didn't
know that most UNIXen were implemented in C, would you be able to tell?
If so, why or why not? What are the different properties that UNIX has
that makes it pretty obvious that it wasn't written by somebody programming
in a functional language, or in an object-oriented language (or style)?
... ... ...
One of the responses:
My favorite language is Chez Scheme for two reasons:
syntactic abstraction and control abstraction.
Syntactic abstraction is macros. As opposed to
other implementations of Scheme, Chez Scheme in my opinion has the
best story on macros, and its macro system is among the most powerful
I have seen.
Control abstraction is the power to add new control
operations to your language. For example, backtracking and coroutines.
More esoterically, monads in direct-style code. Control abstraction
boils down to first-class continuations (call/cc). With the single
exception of SML/NJ, no other language I know of has call/cc.
I know I will be using Scheme for years to come,
and my company will also continue to use it in its systems. We code
a lot in C++ and Delphi, but the Real Hard Stuff(tm) is done in
Scheme because macros and continuations are big hammers. Despite
Scheme being over 20 years old and despite demonstrated, efficient
implementations of these "advanced" language concepts, I don't see
new language designs adopting these features from Scheme. I hope
this changes.
Turbo Vision provides a very nice user interface (comparable with the
very well known GUIs), but only for console applications. This UNIX port is based
on Borland's version 2.0 with fixes and was made to create RHIDE (a nice
IDE for gcc and other GNU compilers). The library supports /dev/vcsa devices
for speed, and ncurses to run from telnet and xterm. This port, in contrast
to Sigala's port, doesn't have "100% compatibility with the original
library" as a goal; instead, we modified a lot of code in favor of security
(especially against buffer overflows). The port is also available for the original
platform (DOS).
I and just about every designer of Common Lisp and CLOS has had extreme
exposure to the MIT/Stanford style of design. The essence of this style
can be captured by the phrase ``the right thing.'' To such a designer
it is important to get all of the following characteristics right:
Simplicity -- the design must be simple, both in implementation
and interface. It is more important for the interface to be simple
than the implementation.
Correctness -- the design must be correct in all observable
aspects. Incorrectness is simply not allowed.
Consistency -- the design must not be inconsistent. A design
is allowed to be slightly less simple and less complete to avoid
inconsistency. Consistency is as important as correctness.
Completeness -- the design must cover as many important situations
as is practical. All reasonably expected cases must be covered.
Simplicity is not allowed to overly reduce completeness.
I believe most people would agree that these are good characteristics.
I will call the use of this philosophy of design the ``MIT approach.''
Common Lisp (with CLOS) and Scheme represent the MIT approach to design
and implementation.
The worse-is-better philosophy is only slightly different:
Simplicity -- the design must be simple, both in implementation
and interface. It is more important for the implementation to be
simple than the interface. Simplicity is the most important consideration
in a design.
Correctness -- the design must be correct in all observable
aspects. It is slightly better to be simple than correct.
Consistency -- the design must not be overly inconsistent.
Consistency can be sacrificed for simplicity in some cases, but
it is better to drop those parts of the design that deal with less
common circumstances than to introduce either implementational complexity
or inconsistency.
Completeness -- the design must cover as many important
situations as is practical. All reasonably expected cases should
be covered. Completeness can be sacrificed in favor of any
other quality. In fact, completeness must be sacrificed
whenever implementation simplicity is jeopardized. Consistency can
be sacrificed to achieve completeness if simplicity is retained;
especially worthless is consistency of interface.
Early Unix and C are examples of the use of this school of design,
and I will call the use of this design strategy the ``New Jersey approach.''
I have intentionally caricatured the worse-is-better philosophy to convince
you that it is obviously a bad philosophy and that the New Jersey approach
is a bad approach.
However, I believe that worse-is-better, even in its strawman form,
has better survival characteristics than the-right-thing, and that the
New Jersey approach when used for software is a better approach than
the MIT approach.
The concept known as "worse is better" holds that in software making
(and perhaps in other arenas as well) it is better to start with
a minimal creation and grow it as needed. Christopher Alexander
might call this "piecemeal growth." This is the story of the evolution
of that concept.
From 1984 until 1994 I had a Lisp company called "Lucid, Inc." In
1989 it was clear that the Lisp business was not going well, partly
because the AI companies were floundering and partly because those AI
companies were starting to blame Lisp and its implementations for the
failures of AI. One day in Spring 1989, I was sitting out on the Lucid
porch with some of the hackers, and someone asked me why I thought people
believed C and Unix were better than Lisp. I jokingly answered, "because,
well, worse is better." We laughed over it for a while as I tried to
make up an argument for why something clearly lousy could be good.
A few months later, in Summer 1989, a small Lisp conference called
EuroPAL (European Conference on the Practical Applications of Lisp)
invited me to give a keynote, probably since Lucid was the premier Lisp
company. I agreed, and while casting about for what to talk about, I
gravitated toward a detailed explanation of the worse-is-better ideas
we joked about as applied to Lisp. At Lucid we knew a lot about how
we would do Lisp over to survive business realities as we saw them,
and so the result was called "Lisp: Good News, Bad News, How to Win
Big." [html]
(slightly abridged version) [pdf]
(has more details about the Treeshaker and delivery of Lisp applications).
I gave the talk in March, 1990 at Cambridge University. I had never
been to Cambridge (nor to Oxford), and I was quite nervous about speaking
at Newton's school. There were about 500-800 people in the auditorium,
and before my talk they played the Notting Hillbillies over the sound
system - I had never heard the group before, and indeed, the album was
not yet released in the US. The music seemed appropriate because I had
decided to use a very colloquial American-style of writing in the talk,
and the Notting Hillbillies played a style of music heavily influenced
by traditional American music, though they were a British band. I gave
my talk with some fear since the room was standing room only, and at
the end, there was a long silence. The first person to speak up was
Gerry Sussman, who largely ridiculed the talk, followed by Carl Hewitt
who was similarly none too kind. I spent 30 minutes trying to justify
my speech to a crowd in no way inclined to have heard such criticism
- perhaps they were hoping for a cheerleader-type speech.
I survived, of course, and made my way home to California. Back then,
the Internet was just starting up, so it was reasonable to expect not
too many people would hear about the talk and its disastrous reception.
However, the press was at the talk and wrote about it extensively in
the UK. Headlines in computer rags proclaimed "Lisp Dead, Gabriel States."
In one, there was a picture of Bruce Springsteen with the caption, "New
Jersey Style," referring to the humorous name I gave to the worse-is-better
approach to design. Nevertheless, I hid the talk away and soon was convinced
nothing would come of it.
About a year later we hired a young kid from Pittsburgh named Jamie
Zawinski. He was not much more than 20 years old and came highly recommended
by Scott Fahlman. We called him "The Kid." He was a lot of fun to have
around: not a bad hacker and definitely in a demographic we didn't have
much of at Lucid. He wanted to find out about the people at the company,
particularly me since I had been the one to take a risk on him, including
moving him to the West Coast. His way of finding out was to look through
my computer directories - none of them were protected. He found the
EuroPAL paper, and found the part about worse is better. He connected
these ideas to those of Richard Stallman, whom I knew fairly well since
I had been a spokesman for the League for Programming Freedom for a
number of years. JWZ excerpted the worse-is-better sections and sent
them to his friends at CMU, who sent them to their friends at Bell Labs,
who sent them to their friends everywhere.
Soon I was receiving 10 or so e-mails a day requesting the paper.
Departments from several large companies requested permission to use
the piece as part of their thought processes for their software strategies
for the 1990s. The companies I remember were DEC, HP, and IBM. In June
1991, AI Expert magazine republished the piece to gain a larger readership
in the US.
However, despite the apparent enthusiasm by the rest of the world,
I was uneasy about the concept of worse is better, and especially with
my association with it. In the early 1990s, I was writing a lot of essays
and columns for magazines and journals, so much so that I was using
a pseudonym for some of that work:
Nickieben
Bourbaki. The original idea for the name was that my staff at Lucid
would help with the writing, and the single pseudonym would represent
the collective, much as the French mathematicians in the 1930s used
"Nicolas Bourbaki" as their collective name while rewriting the foundations
of mathematics in their image. However, no one but I wrote anything
under that name.
In the Winter of 1991-1992 I wrote an essay called "Worse
Is Better Is Worse" under the name "Nickieben Bourbaki." This piece
attacked worse is better. In it, the fiction was created that Nickieben
was a childhood friend and colleague of Richard P. Gabriel, and as a
friend and for Richard's own good, Nickieben was correcting Richard's
beliefs.
In the Autumn of 1992, the Journal of Object-Oriented Programming
(JOOP) published a "rebuttal" editorial I wrote to "Worse Is Better
Is Worse" called "Is
Worse Really Better?" The folks at Lucid were starting to get a
little worried because I would bring them review drafts of papers arguing
(as me) for worse is better, and later I would bring them rebuttals
(as Nickieben) against myself. One fellow was seriously nervous that
I might have a mental disease.
In the middle of the 1990s I was working as a management consultant
(more or less), and I became interested in why worse is better really
could work, so I was reading books on economics and biology to understand
how evolution happened in economic systems. Most of what I learned was
captured in a presentation I would give back then, typically as a keynote,
called "Models
of Software Acceptance: How Winners Win," and in a chapter called
"Money
Through Innovation Reconsidered," in my book of essays, "Patterns
of Software: Tales from the Software Community."
You might think that by the year 2000 I would have settled what I
think of worse is better - after over a decade of thinking and speaking
about it, through periods of clarity and periods of muck, and through
periods of multi-mindedness on the issues. But, at OOPSLA 2000, I was
scheduled to be on a panel entitled "Back to the Future: Is Worse (Still)
Better?" And in preparation for this panel, the organizer, Martine Devos,
asked me to write a position paper, which I did, called "Back
to the Future: Is Worse (Still) Better?" In this short paper, I
came out against worse is better. But a month or so later, I wrote a
second one, called "Back
to the Future: Worse (Still) is Better!" which was in favor of it.
I still can't decide. Martine combined the two papers into the single
position paper for the panel, and during the panel itself, run as a
fishbowl, participants routinely shifted from the pro-worse-is-better
side of the table to the anti-side. I sat in the audience, having lost
my voice giving my Mob Software talk that morning, during which I said,
"risk-taking and a willingness to open one's eyes to new possibilities
and a rejection of worse-is-better make an environment where excellence
is possible. Xenia invites the duende, which is battled daily because
there is the possibility of failure in an aesthetic rather than merely
a technical sense."
... In going over the list above, I find that the only things I did that I regret, or feel had no value, are learning Fortran, corporate politics, and Prolog. So given the relatively little wasted effort, I feel compelled to recommend that newbies learn along a path similar to mine. However, what is not shown in this list is that I have a strong mathematical background and that I thoroughly enjoy programming, which I view as much a creative process as a mechanical one.
However, the average industry programmer is not necessarily like me, nor should they necessarily desire to be. The way computers have consumed me is not something I've seen take hold of many other people. To many, if not most, people, computer programming is not at all a creative process; it is merely a means of getting a paycheck. In the short term, anyone bright enough to pick up a university/college degree need not do much more than that to find themselves a job in the computer industry these days.
For those people, I suggest you enter a university/college that can prepare you for a career in programming, but also for a career in something else, should you find a passion for something other than staring into a phosphor screen 9 to 5. If you've already done that, then go see your family, career counselor, therapist, whatever. You're grown up; you can figure out what kind of a job you want, can't you?
But for those who want to get into programming, and I mean seriously into programming, for those who feel drawn toward computer programming, I can make some recommendations. First, this industry is a young one, moving and changing very quickly. Picking up one particular language that seems popular right now may not make any sense in 5-10 years. Starting the way I started is, I feel, the best recommendation I can make.
BASIC -- the first language to learn
This is a very contentious issue on the USENET, but I do
strongly suggest beginners pick up the BASIC programming language
first. There are many reasons why other people advocate
learning other languages, but I feel that they are generally misguided.
It is impossible to re-instill into a seasoned programmer the idea that learning to program from scratch is not a trivial thing. The deeper concepts in most other languages are totally beyond anything a beginner has ever experienced, or could have any hope of assimilating, given that they have no idea of the motivation behind them.
Concepts such as scope, data types, pointers, modularity,
and dynamic memory allocation have no meaning to someone who isn't
already familiar with some fundamental programming issues.
These are all, in a sense, meta programming issues which
the beginner could not possibly truly appreciate when they first
pick up a language. What if it turns out the potential student is
unsure of themselves and needs to decide if they can or cannot hack
programming? Snowing the beginner under these concepts will only
serve to encourage them to give it up.
Programming is not about following rules, structure, or design; that's the job of your compiler or syntax checker (like lint). Programming is about giving instructions to your computer and making it follow them. It's about being as creative as you can possibly be with your
computer. So many programmers so easily forget this as they expound
on wonderful things like object oriented programming, garbage collection,
portability and all sorts of other nonsense. Trying to feed this
to a beginner is going to warp how they think a computer works.
And knowing how a computer works is very important, far more important
than how Simonyi, Stroustrup, Ritchie, or Kernighan think you should
write code.
BASIC provides a simple syntax with some simple rules.
If you can't master BASIC in a very short amount of time, you can
be pretty sure that programming is not for you. But just because
someone can't grok templates, classes, linked lists or whatever
on their first outing with programming, that doesn't mean they couldn't
handle those concepts with proper pre-motivation. That pre-motivation
can only exist if the beginner has a good idea how to program his/her
computer in the first place. Teaching them C, C++, etc. first turns it into a chicken-and-egg problem: they might know a solution but have no idea why things are the way they are, and consequently they are unable to re-apply that motivated thinking to future programming problems for which they won't have a book to refer to.
BASIC also gives you a lot of room to play. Shunned as it is in this industry, you can actually do some nifty things in it. Another important point is that most BASICs have machine-specific extensions for doing rudimentary graphics and sound. The positive feedback people get from having, in some sense, absolute control over the fundamental interactive features of their computer (display and audio output) is completely absent in languages such as C, Pascal, COBOL, FORTRAN, or other candidate beginner languages. Learning graphics in those languages is considered an "advanced concept" because it takes you away from the fundamentals of languages whose base syntax is oriented toward managing databases, spreadsheets, or doing complex mathematical calculations.
Finally, as one last attempt to convince you: BASIC itself has no role other than to be a programmer's first language. It is not powerful enough to be a real programmer's tool. It lets you get your feet wet and strikes a reasonable balance between high-level and low-level programming concepts. From BASIC, the beginner is meant to springboard in another direction, and should be able to do so no matter what second language they choose.
Once a beginner is convinced to start programming in BASIC, the learning process, for most, almost takes care of itself. Show someone some simple concepts of BASIC programming and, if they have any aptitude for it at all, they should be able to run with it on their own for a reasonable amount of time. Take me, for example: I started in the middle ages of computing, when there weren't any instructors to teach you how to program. I had a manual, some magazines, and a little bit of a push from the local guru (as well as a very inspirational TV program called Bits and Bytes). After that, I was on my own. But I was in bliss, because I had what I needed to tame the computer. I could make it do what I wanted, and it was just up to my own ingenuity to make whatever I wanted the computer to do a reality.
Of course, having convinced you of this, the question then becomes what the second language to learn should be. This is a hard question. To get yourself up to a respectable level of programming expertise, I claim that you need the minimalist concepts derived from assembly language as well as the high-level data structures and meta-programming techniques you can learn from Pascal, C, C++, or Ada. I don't think I can wholeheartedly recommend learning one before the other, so I would instead suggest that you learn assembly and C as your second and third languages, though not necessarily in that order. The benefits of learning high-level programming structures in a language like C are not worth debating, as they are so self-evident. But it might not be completely obvious why I consider assembly language so important. I justify my position in the next section.
Paul Nettle (a programmer at Terminal Reality) recently pointed out some things written (by Microsofties, no less) about the use of goto and the debate that so commonly ensues about it. Here's an excerpt of what he posted to the rec.games.programmer newsgroup:
Here's what "Writing Solid Code" (p.
xxii) has to say on the subject of the goto statement:
---[excerpt begins]--- That's not to say that you should blindly follow the
guidelines in this book. They aren't rules. Too many programmers
have taken the guideline "Don't use goto statements" as
a commandment from God that should never be broken. When
asked why they're so strongly against gotos, they say that
using goto statements results in unmaintainable spaghetti
code. Experienced programmers often add that goto statements
can upset the compiler's code optimizer. Both points are
valid. Yet there are times when the judicious use of a goto
can greatly improve the clarity and efficiency of the code.
In such cases, clinging to the guideline "Don't use goto
statements" would result in worse code, not better.
---[excerpt ends]---
And here's what "Code Complete" (p. 349)
has to say on the subject:
---[excerpt begins]--- The Phony goto Debate
A primary feature of most goto discussions
is a shallow approach to the question. The arguer on the
"gotos are evil" side usually presents a trivial code fragment
that uses gotos and then shows how easy it is to rewrite
the fragment without gotos. This proves mainly that it's
easy to write trivial code without gotos.
---[excerpt ends]---
Indeed, it always surprises me how quickly people are willing to regurgitate the age-old argument against the use of goto. In my early days as a programmer, I started out writing BASIC spaghetti code that overused gotos as a matter of course. Then I went to university, where I was told never to use goto (or suffer the wrath of the TAs' grading penalties). I trained myself to stop using gotos, and indeed I've never come across an algorithm that I couldn't somehow implement without them.
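Neither of the quoted books reproduces its example here, so the following is only a minimal C sketch of the familiar error-cleanup pattern, one case where a judicious goto arguably reads better than deeply nested ifs or duplicated cleanup code. The function copy_prefix and the file names in main are purely illustrative, not taken from the excerpts or from Nettle's post.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical example: copy the first n bytes of one file to another.
       Each step can fail, and "goto cleanup" funnels every failure through
       a single exit path that releases whatever was acquired so far. */
    int copy_prefix(const char *src, const char *dst, size_t n)
    {
        int rc = -1;               /* pessimistic default: failure */
        FILE *in  = NULL;
        FILE *out = NULL;
        char *buf = NULL;

        in = fopen(src, "rb");
        if (in == NULL)
            goto cleanup;

        out = fopen(dst, "wb");
        if (out == NULL)
            goto cleanup;

        buf = malloc(n);
        if (buf == NULL)
            goto cleanup;

        {
            size_t got = fread(buf, 1, n, in);
            if (fwrite(buf, 1, got, out) != got)
                goto cleanup;
        }

        rc = 0;                    /* success */

    cleanup:                       /* single exit, reached on success and failure alike */
        free(buf);                 /* free(NULL) is a no-op */
        if (out) fclose(out);
        if (in)  fclose(in);
        return rc;
    }

    int main(void)
    {
        /* "input.dat" and "output.dat" are made-up names for illustration. */
        if (copy_prefix("input.dat", "output.dat", 4096) != 0)
            fprintf(stderr, "copy failed\n");
        return 0;
    }

Without the goto, every failure point would need either another level of nesting or its own copy of the fclose/free calls, which is exactly the kind of non-trivial case that, as the Code Complete excerpt notes, the "gotos are evil" side tends to leave out of its examples.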
As explained in Sec. 5.9, an experienced
programmer CANNOT get a job using a new skill by taking a course
in that skill; employers demand actual work experience. So, how
can one deal with this Catch-22 situation?
The answer is, sad to say, that you should engage in frequent
job-hopping. Note that the timing is very delicate, with the
windows of opportunity usually being very narrow, as seen below.
Suppose you are currently using programming language X, but you
see that X is beginning to go out of fashion, and a new language
(or OS or platform, etc.) Y is just beginning to come on the scene.
The term ``just beginning'' is crucial here; it means that Y is
so new that almost no one has work experience in it yet. At
that point you should ask your current employer to assign you to
a project which uses Y, and let you learn Y on the job. If your
employer is not willing to do this, or does not have a project using
Y, then find another employer who uses both X and Y, and thus who
will be willing to hire you on the basis of your experience with
X alone, since very few people have experience with Y yet.
Clearly, if you wait too long to make such a move, so that there
are people with work experience in the skill, the move will be nearly
impossible. As one analyst, Jay Whitehead, humorously told ZD-TV Radio: if your skill shows up as a book in the Dummies series, that skill is no longer marketable.
What if you do not manage to time this process quite correctly?
You will then likely be in a very tough situation if you need to
find a new programming job, say if you get laid off. The best strategy
is to utilize your social network, including former coworkers whom
you might know only slightly - anyone who knows the quality of your
work. Call them and say, ``You know
that I'm a good programmer, someone who really gets the job done.
I can learn any skill quickly. Please pass my résumé to a hiring
manager.''