cargo cult programming: n. A style of (incompetent) programming dominated by ritual inclusion of code or program structures
that serve no real purpose. A cargo cult programmer will usually explain the extra code as a way of working around some bug encountered
in the past, but usually neither the bug nor the reason the code apparently avoided the bug was ever fully understood (compare
shotgun debugging,
voodoo programming).
The term `cargo cult' is a reference to aboriginal religions that grew up in the South Pacific after World War II. The practices
of these cults center on building elaborate mockups of airplanes and military style landing strips in the hope of bringing the
return of the god-like airplanes that brought such marvelous cargo during the war. Hackish usage probably derives from Richard
Feynman's characterization of certain practices as "cargo cult science" in his book "Surely You're Joking, Mr. Feynman!" (W. W.
Norton & Co, New York 1985, ISBN 0-393-01921-7).
A data structure is a structure, not an object. Only if you exclusively use methods to manipulate the structure
(via function pointers if you're using C), and each method is implemented as a coroutine, do you have an object. Such an approach
is typically overkill. OO zealots make the mistake typical of other zealots by insisting that it must be used everywhere and
rejecting other useful approaches. This is religious zealotry. And please remember that the Dark Ages lasted several hundred years.
Is OO a topic artificially promoted for mercantile gain by
"a horde of practically illiterate and corrupt researchers
publishing crap papers in junk conferences"?
Object-oriented design is the roman numerals of computing.
The object-oriented model makes it easy to build up programs by accretion.
What this often means, in practice, is that it provides a structured way to write spaghetti code.
— Paul Graham
“design patterns” are concepts used by people who can’t learn by any method except memorization,
so in place of actual programming ability, they memorize “patterns” and throw each one
in sequence at a problem until it works
~ Dark_Shikari
I often wonder why object-oriented programming (OO) is so popular, despite being a failure as a programming paradigm. It is still
advertised as the best thing since sliced bread, despite the fact that it is rarely used in Web programming, which is the most dynamically
developing application area (and many of those programs are based on the LAMP stack with PHP as the "P" in it, which means no OO at all).
Is it becoming something that is talked about a lot, but rarely practiced? Just a topic artificially promoted for mercantile gain by
"a horde of practically illiterate and corrupt researchers publishing crap papers in junk conferences"? Or is it a
"for profit" cult?
To me it looks more like a more dangerous development -- a variant of computer science
Lysenkoism (and I can attest that the current level of degradation of computer science definitely reminds me of the level of
degradation of the social sciences under Stalin); it's now more about fashion than research, with cloud computing as the latest hot fashion.
OOP style directly and deliberately maps well onto the typical hierarchical organization of large corporations. Conway's Law suggests
exactly that: "organizations which design systems are constrained to produce designs which are copies of the communication
structures of these organizations." It should be no surprise that OOP style is popular in corporations; OOP is the
codification of bureaucratic organizational communication patterns. This fact might
be the one that explains the success of this paradigm in the commercial programming world.
Why do we hate working in OOP? For the same reasons we hate working for inflexible, siloed, process-heavy, policy-heavy,
bureaucratic, mutually-distrusting teams in corporations. Why is it a success? For the same reasons that those corporations are
successful: organizations with those characteristics have historically been strongly correlated with creation of value that
increases the wealth of the billionaire class, and therefore billionaires pay for the creation of those systems.
For many (but not all) projects OO is just overkill: a programmer who uses it in such cases looks like the proverbial general
who sent a tank division to capture a village of unarmed natives. Many projects do not support the idea of many instances of the same
object, which is key to the concept of classes (which came from simulation programming languages such as Simula). A class with
a single instance is not a class; it is a perversion ;-). As Joe Armstrong, the creator of Erlang, aptly observed:
The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them.
You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
OO inevitably leads to the "lasagna style" of programming, a disease which has become epidemic in OO and
results in considerable bloat of the codebase: excessive zeal in
making classes reusable and generic inevitably leads to the creation of way too many layers of classes, and to code
that can be hard to understand and very hard to maintain, defeating the original goal of introducing OO. Even in simple,
utility-style examples, OO programs are often three to five times more bulky. In one Python utility example that I recently (2021)
studied, 1200 lines of code could be reduced to just 200 using procedural programming. You literally need to junk the code in order
to maintain it -- it does not make any sense to try to understand that many completely useless layers. In larger programs this
effect is stronger and has a more disastrous effect on maintenance.
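As a minimal, hypothetical sketch of the kind of layering described above (the names and the task are invented for illustration, not taken from the utility mentioned), compare an over-layered "OO" version of a trivial word-count task with its procedural equivalent:

from collections import Counter


class FileReader:                      # layer 1: wraps open()
    def __init__(self, path):
        self.path = path

    def read(self):
        with open(self.path) as handle:
            return handle.read()


class Tokenizer:                       # layer 2: wraps str.split()
    def tokenize(self, text):
        return text.lower().split()


class WordCounter:                     # layer 3: wraps Counter
    def __init__(self, reader, tokenizer):
        self.reader = reader
        self.tokenizer = tokenizer

    def count(self):
        return Counter(self.tokenizer.tokenize(self.reader.read()))


def word_count(path):                  # the procedural equivalent of all three layers
    with open(path) as handle:
        return Counter(handle.read().lower().split())

# word_count("some_file.txt") and
# WordCounter(FileReader("some_file.txt"), Tokenizer()).count() produce the same result.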
If a parent and child could arbitrarily switch places, then the problem clearly is not suitable for "objectification".
The Unix file system can serve as a good model of an object hierarchy. But it is often easier to work with tags than with paths. For
example, a document tagged as #Company and #Procedures can be put in the folder Documents/Staff and still be found.
Tags have no order or hierarchy. It is a flat namespace.
If you read books considered to be "OO classics", the distinct impression one gets is that the king is naked. And many
OO propagandists are weak demagogues, semi-illiterate in computer science (or worse, snake oil salesmen), promoting a particular
dogma for material gain (conferences, books, higher positions within computer science departments, etc.). BTW the level of corruption
of academic researchers in the XX and XXI centuries is unprecedented, and there are a lot of examples of academic departments in major universities
promoting junk science. Just look at economics departments and neoclassical economics.
But if we assume that OO is a variant of Lysenkoism in computer science, then the absurdity
of the dogma does not matter and does not diminish the number of adherents. As the universities are captured, it has huge staying power
despite this. It is essentially the same trick that was played, on a much larger scale, in US universities with neoclassical economics.
In the end, the productivity of the programmer and the conciseness and quality of the resulting code are the only true merits of any programming
methodology. Conciseness (the ability to express algorithms in fewer lines of code) and quality (the number of undetected bugs in the
production version of the program) are two critically important metrics that programming methodologies need to be judged on.
As Paul Graham noted, the length of the program can serve as a useful (although far from perfect) metric for how much work it is to write
it and how many bugs it will contain after debugging is finished. Not the length in characters, of course, but the length in distinct
syntactic elements (tokens) -- basically, the number of lexical elements or, if you wish, the number of leaves in the parse tree.
It may not be quite true that the shortest program requires the least effort to write, but in general the length of a program in lexical
tokens correlates well with its complexity and the effort. The OO approach fails dismally by this metric in almost all important domains
of programming (with the notable exception of simulation tasks and GUI interfaces): in a language which permits
structuring the program in both non-OO (procedural) and OO fashion (C++, Perl, etc.), a program structured
in OO fashion is typically longer.
Being extremely verbose (Java might be the king of verbosity among widely adopted languages; really the scion of Cobol ;-) is only
one problem that negatively affects both the creation and, especially, the maintenance of large programs. But it is an important problem.
Java is definitely so verbose that this factor alone pushes it one level below, say, PL/1. And it's
sad that PL/1, which was created in the early 60s, is still competitive with, and superior to, a language created more than 30 years later.
Attempts to raise Java's level using elaborate "frameworks" with complex sets of classes introduced other problems: bad performance,
inability to comprehend the program because the black-box approach to important classes ("you have the API, enjoy") often does not work, and the resulting
difficulties in debugging.
In general we can state that OO is a fuzzy concept that has both good and bad sides. First of all, it is often implemented in an
incomplete, crippled way (C++, Java), which undermines its usefulness. To enjoy the advantages of OO programming, the language should provide
allocation of all variables on the heap, the availability of coroutines, a correct implementation of
exception handling, and garbage collection. In a sense you can (and should) view objects
as separate entities that communicate with each other strictly by messages. That's an elegant idea, but in the real world it does not work
well for most applications -- the execution penalty is way too high. As such it is an expensive proposition (execution-wise) and leads to
huge penalties in execution times. That's why "crippled" approaches "when one wants to preserve virginity and have children"
prevail.
Still, we should clearly state that the OO model does incorporate several good ideas:
The idea of equivalence between procedures and data structures. Some "OO" languages do not even include a separate
notion of data structures, relying on one general mechanism. This is probably a mistake, but you get the idea. This idea
of equivalence is very elegant, and it was a very innovative concept at the time; it initially appeared in simulation languages
and came into general-purpose programming languages via the influence of Simula 67.
Hierarchical partitioning of the variable namespace into trees, with "inheritance" as a pretty neat method of accessing lower-level
data from higher-level abstractions. Somehow it reminds me of Unix filesystem structuring. Partitioning the variable
namespace in a hierarchical manner makes the creation of large, complex systems simpler. Actually it was invented and implemented
independently of OO, in Modula. Without namespaces, large program development requires an "iron variable naming discipline"
that in the past only IBM was capable of (during the development of OS/360). But this is not the only way to implement
namespaces. Non-hierarchical namespaces are also very useful (common blocks in Fortran, external variables and structures in PL/1).
Usage of the heap instead of the stack for the allocation of variables. In a general-purpose language this concept was first implemented
in PL/1 in the early 60s. So this is a 50-year-old technique. And before PL/1 it was implemented, in a more consistent manner, in LISP.
Still, the dominance of C, which was a system programming language (and should be viewed as a severely cut-down dialect of PL/1 with the addition
of B-style pointers), badly affected the acceptance of this idea, essentially slowing its wide adoption by approximately two decades
(Java was created in 1995, while C was created around 1973). Java was the first widely used language with heap allocation of variables.
Limitations of early hardware also played an important role, and those limitations stopped mattering only around 2000.
C was created mainly out of experience with PL/1 as the system programming language used in
Multics, as both Thompson and Ritchie were part of the Multics development team for several years. So it was designed as a system
programming language. But later, due to the availability of open source compilers, its domain of applicability was extended to application
development despite the absence of automatic storage allocation and subscript checks, which led to chronic buffer overflow vulnerabilities.
The latter became a huge security problem from which we are still suffering to this day. Better and safer languages for
application development were possible since probably 1990 due to tremendous progress in hardware, and C++ was a deeply flawed
attempt to rectify some flaws of C as an application development language by borrowing some ideas from Simula 67.
Later Java introduced the allocation of variables on the heap and garbage collection into the mainstream. Which probably explains
why Java, despite being such a badly designed language, managed to overtake Cobol. Later, scripting languages made this idea the
de facto standard.
Naturalness of grouping related procedures operating on the same data under "one roof". This was also pioneered
by PL/1 with multi-entry procedures. This idea is also related to the general equivalence of procedures and data structures which
we mentioned before; it is just another side of the same coin. The idea of a constructor is essentially the idea of initializing
such a dynamically allocated data structure on which a set of procedures operates. As simple as that. All this fluff about
classes as abstractions of objects with a set of operations on them is just a smoke screen which hides the simplicity of this idea.
Again, PL/1 multi-entry procedures captured 80% of the usefulness of classes with almost zero percent of the OO fluff (see the sketch after this list).
Extension of the idea of type to structures, and providing a mechanism for creating a new instance of a structure with
automatic population of the necessary elements via a special initializing procedure called a constructor. Simula-67 was the inspiration
here, but even before Simula-67, PL/1 invented the concept of creating a similar substructure in a new record from an existing record
structure with the like attribute. Of course in OO this idea got a new stance and new terminology, but from a compiler writer's
point of view it is the same idea. Still, it seems to me that the JavaScript prototype-based approach is superior, and in this sense only
JavaScript and its derivatives deserve the name of true object-oriented languages. See
Prototype Based Object-Oriented Programming
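As a minimal sketch (not from the original text; the names are invented) of the "procedures and data under one roof" idea from the list above: a plain constructor function initializes a data structure and hands back the procedures that operate on it -- roughly what PL/1 multi-entry procedures provided, without any class syntax.

def make_counter(start=0):
    state = {"value": start}        # the "object": just a data structure

    def increment(step=1):          # a "method": a procedure bound to that structure
        state["value"] += step
        return state["value"]

    def current():
        return state["value"]

    return increment, current       # the "entry points"

increment, current = make_counter(10)   # this call plays the role of a constructor
increment()
increment(5)
print(current())                         # prints 16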
"The great thing about Object Oriented code is that it can make small,
simple problems look like large, complex ones." Top 50 Funny Computer Quotes
The Unix conceptual model -- "everything is a string" and "everything is a file" -- is incompatible with the OO model, which wants to segregate
different objects as much as possible. OO is essentially a return, on a new level, to the OS/360 mentality, where there were dozens of different
types of files supported by the OS. If this is the future, I do not want to be a part of it :-)
Often, encoding a complex data structure as a string with delimiters leads to clearer and more modifiable code than using a multitude of
methods accessing some binary structure (an object is a data structure that consists of data fields and references to methods that
operate on those data fields). Binary structures are the IBM way to design programs, and the fiasco of OS/360 showed that this is a road to
nowhere. Right now public relations are at a completely different stage, and what was a clear failure in 1960 can be sold as a
complete success in 2000.
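A hedged illustration of this point (the record format and values are invented): a record kept as a delimited string, in the spirit of /etc/passwd, can be parsed, printed, and modified with generic string tools, with no accessor methods required.

record = "alice:1001:/bin/bash"          # hypothetical sample record

name, uid, shell = record.split(":")     # "parsing" is one line
print(name, uid, shell)

# Modification is equally direct: rebuild the string from its fields.
fields = record.split(":")
fields[2] = "/bin/zsh"
record = ":".join(fields)
print(record)                            # prints alice:1001:/bin/zsh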
It is funny that one of the most influential books on software engineering, The
Mythical Man-Month, is actually about a dismal failure of IBM with the design of the operating system for the revolutionary hardware
of the System/360 line of computers. And while the term did not exist at the time, the way IBM designed the filesystem can be called object
oriented. They introduced many types of files.
Another objection along the same lines is the level of overcomplexity of the language that OO creates in comparison with simpler
approaches to structuring the program namespace "at large", like modules (modules in the Modula language were actually a really
brilliant invention). If you look at software written in popular object-oriented languages like Python, it is clear that the generation
of new types is a process that has run amok. It makes software very complex and brittle, saturated with thousands of types and millions
of methods that no human can ever master; and conversion between types is a nightmare.
In a way the OO type system acts like a bureaucratic organization: each department within it wants to increase its size and influence
at the expense of others, and those turf wars consume much of the energy of the organization and often pervert the original purpose for
which the organization was created.
Derived types have their uses. But they are far from a universal way of structuring the problem, and often they are not the best way.
But as soon as user-defined types were introduced, they became a kind of object of worship, a superstition that supposedly can cure all
ills of this world (or at least all ills of software engineering).
Like any cult, it instantly attracted a special category of ruthless and power-hungry people who became the priests of this new cult.
That's probably the most dismal side effect of the OO cult. Everything is now viewed through the prism of the "class struggle,"
so to speak.
That situation logically leads to the phenomenon which can be called the "OO trap": a tremendous amount of time wasted on unnecessary
redesigning and polishing of OO classes.
The creation of a set of classes in OO projects often degenerates into "art for art's sake": all efforts are spent on polishing
and generalizing the set of classes while the main project task remains neglected. This becomes a compulsive idea, and programmers
often become hooked on it. Wasting time on never-ending polishing and redesigning of OO classes is a real epidemic among OO programmers,
and it often pushes the project in which they participate into jeopardy. The unstoppable desire to create a masterpiece out
of the set of classes is so prevalent that it needs to be confronted with management action, if as a project manager you care about
the project schedule.
For some reason, "good enough" solutions are improved, improved and improved again, at the expense of the key objectives of the particular
programming project. The desire to make the set of classes more universal and more flexible than is warranted for the particular goal has proved
irresistible for the majority of programmers. Most of those super-polished classes are never reused.
But changes to them due to polishing and generalization of functionality require minor changes in the programs which now need to use those
new and improved (more elegant, more universal and more powerful) classes, and this cycle never ends. Also, with time the
class hierarchy gradually becomes deeper and less and less efficient. We already mentioned this effect as the "lasagna code" problem.
As Roberto Waltman quipped, "The object-oriented version of spaghetti code is, of course, 'lasagna code'. Too many layers."
"The true faith compels us to believe there is one holy Catholic Apostolic Church
and this we firmly believe and plainly confess. And outside of her there is no salvation or remission from sins."
- Boniface VIII, Pope (1294-1303)
“The quality of ideas seems to play a minor role in mass movement leadership. What counts is the arrogant gesture, the complete
disregard of the opinion of others, the singlehanded defiance of the world.”
― Eric Hoffer, The True Believer: Thoughts on the Nature of Mass Movements
Often the analogy with objects pushes programmers way too far and they lose contact with reality: they stop viewing OO as just
a sometimes useful, albeit questionable, abstraction/paradigm of software development and a good way to structure the program namespace, and
become what Eric Hoffer called "true believers" who hate all alternative approaches
to structuring computer programs and namespaces. Hatred is the most accessible and comprehensive of all the unifying agents.
What true believers do not admit is that in many cases OO programs are longer than their non-OO counterparts, and that alone makes them
more error-prone. Moreover, enforcing the object-oriented model on problems that do not lend themselves to this approach is an invitation
to disaster. I saw (and translated from one programming language to another) many Unix utilities whose authors
attempted to write them in OO style. They were mostly junk. Only a few were written in a really creative style where, when you delve
into the source code, you exclaim: this guy really is a master who thought deeply.
No language guarantees the creation of good, reliable programs. Especially in such a complex environment as modern Unix or Windows,
which are excessively complex operating systems (with Windows being bizarrely complex, far beyond redemption). Programming talent
is essentially "language independent", and gifted programmers can adapt to and overcome the shortcomings of any language, while mediocre programmers
are able to create a mess in any language.
Claims of OO about better reliability of programs are exaggerated at best; at worst they are fake and should
be viewed as a marketing trick. OO makes programs more verbose, and the number of bugs depends roughly linearly on the number of
lines of code in the program.
Also, as programming after, say, 2000 is mainly driven by fashion, the warts of any particular approach become understood only a decade
or more after the "boom" in the particular technology has ended and experience has accumulated. Only then can we see OO technology without
rose-colored glasses.
Writing this article in, say, 1996, would be impossible (and impractical).
Object-oriented programming (OOP) is often treated like a new Christianity, and the religious zeal of converts often borders on
stupidity. And it definitely attracts numerous charlatans who propose magic cures like various object methodologies and, what is worse,
write books about them ;-).
All in all, my impression is that in almost 30 years of its existence OO has failed to improve programming productivity in comparison
with alternative approaches, such as scripting languages (with some of them incorporating OO "just in case", to ride the fashion,
as you should never upset religious zealots ;-).
If so, this is more of a religious dogma, much like "structured programming" and the verification bonanza were previously (with
Edsger Dijkstra as the first high
priest of a modern computer science techno-cult, aka the "church of computer scientology" ;-). And it is true that there
are more corrupted academic fields than computer science, such as economics.
But still this is really terrifying in the sense that it replicates the Lysenkoism mentality on a new level, with its sycophants and
self-reproducing cult mentality. And the level of groupthink, this cult mentality, is a real problem. As
Prince Kropotkin used to say about the prison guards in Alexandrov
Central (one of the strictest-regime prisons in Tsarist Russia) where he served his prison term, "People are better than institutions".
As in any cult, the high priests do not believe one bit in the nonsense they spread. For them it is just a way of getting prestige
and money. Just ask yourself a question: where were there more sincere communists in, say, the 1970s: in the Politburo of the CPSU
of the USSR or in any small Montmartre cafe? As in all such cases,
failure does not discourage rank-and-file members of the cult. Paradoxically, it just increases the cult's cohesion and zeal.
And as of 2014 OO adepts are still brainwashing CS students despite the failure of OO to provide the advertised benefits for
the last 25 years (Release 2.0 of C++ came out in 1989). And they will continue, just because this is very profitable economically. They
do not care about negative externalities (an economic term that
is fully applicable in this case) connected with such behavior. Just give me a Mercedes (or a tenured position) now and f*ck the future of
computer science.
So far all this bloat and inefficiency has been covered by Moore's
law and the dramatically improving parameters of hardware (please remember that initial mainframes often had 64K (not
megabytes but kilobytes) of memory, and PL/1 (which in many areas is a higher-level language than C++ and Java) worked well in those
circumstances, which is a simply incredible, earth-shattering achievement if you ask me). If you look at the IBM optimizing and debugging
compilers, then gcc and other C++ compilers clearly look like second-rate implementations.
In other words, you can always claim any software development methodology to be highly successful, even if it is not. Bloat and inefficiencies will be covered well by the tremendous
growth of the power of computers, which is still continuing unabated, although it has slowed down a bit. In ten years people will start discovering
the truth, but by that point you will be rich and successfully retired somewhere in Hawaii. Meanwhile you can fleece lemmings by running
expensive courses and publishing books that teach this new and shiny software development technology.
OO is a set of certain ideas which are not a panacea and, as such, never were and never will be a universally applicable programming
paradigm. Object orientation has limited applicability and should be used when it brings distinct advantages, but it should not be pushed for
everything the way naive or crooked (mostly crooked and greedy) authors of "Object Oriented Books" (TM) push it.
Here is the number of books by authors who wanted to milk the cow and included the words "object oriented" in the title.
Stats are for each year since 2000 (data extracted from the Library of Congress):
Year:   2000  2001  2002  2003  2004  2005  2006  2007  2008  2009  2010  2011  2012
Books:    92    83   104    76    69    70    76    68    57    60    61    46    27
So there are a lot of authors who try to sell the latest fad to an unsuspecting audience, much like the snake oil salesmen of the past.
I have a strong personal hatred for authors who wrote "Object Oriented Algorithms and Data Structures" books, and especially for authors who
converted previous procedure-oriented algorithms books into object-oriented ones in an attempt to earn a fast buck; corruption is a real problem
in academia, you should know that ;-).
In a way, the term "object oriented cult" has a deeper meaning -- as in most cults, the high priests of the cult (including most
"object oriented" book authors) really love only money and power. That is the real (but hidden) object of many OO evangelists.
And they do not believe in anything they preach...
Many common applications can be better developed under different paradigms, such as multi-pass processing, a compiler-like structure,
the abstract machine paradigm, functional languages, and so on. Just imagine that somebody tries to solve in an object-oriented way a typical
"parsing of a text string" problem that can be solved with regular expressions or via lexical and syntax analysis algorithms.
Of course any string is a derived object of an alphabet of 26 letters, but how far will we get with such an "OO approach"? Just look at the
poverty of the books that sell the object-oriented approach to students who want to study algorithms and data structures. The snake oil salesmen
who wrote such books, using OO as a marketing trick to make a quick buck, do not deserve the title of computer scientists, and
their degrees probably should be revoked ;-) Lord Tebbit once said "You can judge a man by his enemies." Judging from the composition
of the pro-OO camp in computer science, any alternative paradigm/methodology promoter, or even a skeptic like me, looks good by definition ;-)
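A small sketch of the "parsing a text string" point above: the whole job is one regular expression, with no classes in sight (the pattern and the sample line are invented for illustration).

import re

line = "2021-07-14 ERROR disk /dev/sda1 is 93% full"   # hypothetical log line

match = re.match(r"(\d{4}-\d{2}-\d{2}) (\w+) (.*)", line)
if match:
    date, level, message = match.groups()
    print(date, level, message)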
When I think about OO I see two distinct trends:
Natural (slow) progress in the refinement of existing concepts in a way more suitable for modern hardware. This trend is positive
and is connected with the refinement of traditional programming constructs (generic procedures, templates, hierarchical namespaces
and controlled visibility of variables, etc.). The hierarchical structuring of the namespace and, to a certain extent, limited hierarchical
namespace inheritance is a very good idea, and here OO languages added to the language design arsenal. In essence OO blends the
concepts of procedure and data structure, providing each data structure with procedural components (pointers to functions) and
a way to initialize instances allocated on the heap using a special procedure called a constructor.
Junk science, or Lysenkoism, if you wish. The second trend is the hugely negative attempt to position OO as a new universal software
engineering paradigm. OO is often oversold as a panacea and even as a cult (Bertrand
Meyer). Ideally objects should be independent communicating entities, which means that programming is accomplished by building
structures and exchanging messages rather than in the traditional procedural way. For many (most?) problems, forcing an OO representation
leads to a mismatch between the language used and the problem domain.
To understand where OO languages stand here requires delving into a little bit of history. First of all, programming projects
can be categorized based on the size of the codebase. While the limits are somewhat arbitrary, the general tendency holds well. For example,
if we exclude comments and lines devoted to help screens and the processing of options, we can distinguish the following
six categories:
Tiny: up to 100 lines. The productivity of a programmer can be 1000 lines of debugged and tested code per day or more.
Small: up to 1K lines. The productivity of a programmer can be 100 lines of debugged and tested code per day or more.
Medium: up to 10K lines. The productivity of a programmer can be 10 lines of debugged and tested code per day
or less. Starting from approximately 1K lines you need to use multiple namespaces, so only programmers who understand and can ingeniously apply
this concept can successfully work at the upper limit of this range. That excludes probably 50% of the people who call themselves
programmers.
Large: up to 100K lines. The productivity of a programmer is typically less than 10 lines of debugged and tested code
per day. If manpower is limited, such a project takes years to complete (two or more years). Usually these are complex systems.
For a project that contains 100K lines and has only one programmer, a reasonable estimate is a couple of years (the initial versions of
the Linux kernel and of the Python and Perl interpreters belong to this category). Only a few people are able to successfully design and implement
such programs. Of course complexity matters too. For example, compilers and interpreters are more complex and require specialized
knowledge to write. This is probably the upper limit for an individual programmer.
Huge: above 100K lines. The project has a high chance of failing or of finishing with large cost and time overruns.
Such projects require a team of programmers and specialized tools.
Monstrous: above 1 million lines. The project has a very high chance of failing. Large cost and time overruns
are a given. OS/360 was one such project. It was described in Fred Brooks' famous book The Mythical Man-Month. Please
note that while the hardware was a real breakthrough, the OS was second-rate despite huge cost and time overruns. Only the
compilers for OS/360 were first class. But that was the old IBM, later destroyed by greedy and evil neoliberal bean counters
Louis V. Gerstner Jr. and
Samuel J. Palmisano.
What is interesting is that a significant share of the most important software projects that created the systems we use today were completed
before or shortly after 1990, when OO became fashionable (Unix, C, C++, DOS, Windows 95, Windows NT, the Microsoft C and C++ compilers, Microsoft Office,
the GNU toolchain, Oracle, DB/2, Apache, Delphi, Perl, etc.).
As experience with those projects has shown, the structuring of the programming namespace and developer discipline are the necessary preconditions
for success. This is the major issue. For medium-size projects and above, skillful structuring of the namespace is one of the key
determinants of the success of the project. While the initial tools for structuring the namespace were based on the Algol-60 concept of scope and the Fortran
concepts of common blocks and separate compilation, Modula-2, developed by Niklaus Wirth between 1977 and 1985, and not Simula-67,
was the real breakthrough in this area.
The capability to structure the namespace differs between major programming languages. Algol-60 introduced the great idea of the semi-transparent
mirror: nested subroutines can view variables in the encompassing subroutines, but the encompassing subroutines can't view variables declared inside
the nested subroutines. This abstraction proved to be a very powerful way of structuring the namespace. It also introduced the idea of the
scope of a variable (local vs. global are just the two extremes; there can be shades of grey in the hierarchical system that Algol-60 invented),
and the related concept of visibility. Again, this was a tremendous breakthrough, a real revolution in programming
language design. Simula 67, the first OO language in existence, was an extension of Algol-60.
Fortran, which is an older language than Algol 60, approached the problem differently -- because of separate compilation it defined
so-called common blocks, which at the compiler level are separate namespaces where each variable is prefixed by the name of the common block.
You can think of each separately compiled subroutine as "exporting" variables into the common block.
PL/1 managed to combine the Algol 60 ideas with the Fortran idea on a new level and generalized it by introducing the concept of external
variables in a namespace structure that fully resembles Algol-60. In PL/1 each separately compiled program needs to declare the variables it
wants to export to other programs as external, and those variables do not obey the Algol-60 visibility rules. They also need to be
allocated statically, and the run-time mapping of addresses is performed by the linker. There is also the more complex idea of areas, more
similar to Fortran common blocks but dynamic in nature. What is important is that PL/1 introduced the idea of "multi-entry" subroutines,
which were essentially "poor man's classes."
With the growth of CPU speed and the size of memory, the ability to allocate variables on the heap with garbage collection became more and
more important (the only early language with built-in garbage collection was LISP, originally specified in 1958). PL/1 also introduced an
early version of classes in the form of multiple entries to subroutines, the notion of generic calling interfaces, and multiple inheritance
via the like attribute for structures (which copied the definition of the initial structure to the new structure -- the derivative
of the old). A call to a subroutine with multiple entries in PL/1 was in essence a call to a "poor man's" constructor, which can initialize
the various structures present in it, and each entry point was a "poor man's" method of this "class". As PL/1 does not require a return statement
at the point where a new entry point is defined, they could overlap, but that's non-essential. Entry points can be "generic", so that different
entry points are selected by the compiler based on the combination of input parameters (for example, one for int and another for float).
Unless special care was taken and all variables were allocated on the heap, such a "poor man's class" had only one instance. And
while nothing prevented programmers from coding it in such a way that it supported multiple instances via pointers to them, in many
cases one instance is enough :-) Multiple instances were needed less commonly, and they could be imitated by using an array of structures
and an index into this array pointing to the particular instance. So in rudimentary form we can see that the key elements of OO have existed since
April 7, 1964, the date of release of OS/360 with its compiler suite, or more than 50 years ago. Think about it -- half a century ago.
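Here is a minimal sketch, in Python for illustration (the example is invented), of the "array of structures plus an index" technique just mentioned: instances are simply rows in a list, and an integer index plays the role of the object reference.

accounts = []                                # the "array of structures"

def new_account(owner, balance=0):           # the "constructor" returns an index
    accounts.append({"owner": owner, "balance": balance})
    return len(accounts) - 1

def deposit(handle, amount):                 # "methods" take the index explicitly
    accounts[handle]["balance"] += amount

def balance(handle):
    return accounts[handle]["balance"]

a = new_account("alice")
b = new_account("bob", 100)
deposit(a, 50)
print(balance(a), balance(b))                # prints 50 100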
The first language that merged the previously existing elements into a well-integrated framework which later became known as OO, including
the idea of the "constructor", was Simula-67. While a derivative of Algol 60, it stems from the so-called simulation languages with their
emphasis on the imitation of real-world objects, and it shows. It never achieved mainstream usage but was influential as a revolutionary
prototype. Later, in "bastardized" form (without coroutines), the ideas of Simula-67 were rehashed in C++, with its first commercial release
in 1985 and a usable implementation around 1989 (V.2.0). Neither of those languages addressed the key problem of regulating the visibility
of variables, which is the critical problem for large programming projects. Simula still lived in the Algol-60 world of semi-transparent
windows installed on the boundaries of subroutines. C++ separated the namespace via separately compiled programs. It also
can import identifiers via the inheritance mechanism.
I would like to stress again that the breakthrough in the handling of namespaces in programming languages was achieved around
1985 by Niklaus Wirth (one of the contributors to the Algol-60 successor efforts), before OO became fashionable. Modula (developed from 1977 to 1985)
introduced the concept of modules: a programming unit with its own namespace that can export variables into other modules or the main
program. It was the only early mainstream language that supported coroutines. The revolutionary concept of exporting and importing
variables from module namespaces, each defined by a physical file, allowed "regulated" visibility of variables that really can
scale. In a way this was a superior, more powerful and flexible way to deal with this problem than the inheritance mechanism introduced
by Simula 67. All those talks about multiple inheritance, in view of the existence of this mechanism, are slightly suspect ;-)
But nothing can stop complexity junkies from flogging the dead horse :-).
Here is an example of the source code for the "Hello world" program in Modula-2:
MODULE Hello;
FROM STextIO IMPORT WriteString;
BEGIN
WriteString("Hello World!");
END Hello.
The Modula-2 module may be used to encapsulate a set of related subprograms and data structures, and restrict their visibility
from other portions of the program. The module design implemented the data abstraction feature of Modula-2 in a very clean way. Modula-2
programs are composed of modules, each of which is made up of two parts: a definition module, the interface portion, which
contains only those parts of the subsystem that are exported (visible to other modules), and an implementation module,
which contains the working code that is internal to the module.
The language has strict scope control. In particular the scope of a module can be considered as an impenetrable wall: Except for
standard identifiers no object from the outer world is visible inside a module unless explicitly imported; no internal module object
is visible from the outside unless explicitly exported.
Suppose module M1 exports objects a, b, c, and P by enumerating its identifiers in an explicit export list
DEFINITION MODULE M1;
EXPORT QUALIFIED a, b, c, P;
...
Then the objects a, b, c, and P from module M1 now become known outside module M1 as M1.a, M1.b, M1.c, and M1.P. They are exported
in a qualified manner to the universe (assuming module M1 is global). The exporting module's name, i.e. M1, is used as a qualifier
followed by the object's name.
Suppose module M2 contains the following IMPORT declaration
MODULE M2;
IMPORT M1;
...
Then this means that the objects exported by module M1 to the universe of its enclosing program can now be used inside module
M2. They are referenced in a qualified manner like this: M1.a, M1.b, M1.c, and M1.P. Example:
...
M1.a := 0;
M1.c := M1.P(M1.a + M1.b);
...
Qualified export avoids name clashes: For instance, if another module M3 would also export an object called P, then we can still
distinguish the two objects, since M1.P differs from M3.P. Thanks to the qualified export it does not matter that both objects are
called P inside their exporting modules M1 and M3.
There is an alternative technique available, which is in wide use by Modula-2 programmers. Suppose module M4 is formulated like
this:
MODULE M4;
FROM M1 IMPORT a, b, c, P;
Then this means that objects exported by module M1 to the universe can again be used inside module M4, but now by mere references
to the exported identifiers in an "unqualified" manner like this: a, b, c, and P. Example:
...
a := 0;
c := P(a + b);
...
This technique of unqualifying import allows use of variables and other objects outside their exporting module in exactly the
same simple, i.e. unqualified, manner as inside the exporting module. The walls surrounding all modules have now become irrelevant
for all those objects for which this has been explicitly allowed. Of course unqualifying import is only usable if there are no name
clashes.
These export and import rules may seem unnecessarily restrictive and verbose. But they do not only safeguard objects against unwanted
access, but also have the pleasant side-effect of providing automatic cross-referencing of the definition of every identifier in
a program: if the identifier is qualified by a module name, then the definition comes from that module. Otherwise if it occurs unqualified,
simply search backwards, and you will either encounter a declaration of that identifier, or its occurrence in an IMPORT statement
which names the module it comes from. This property becomes very useful when trying to understand large programs containing many
modules.
The language provides for (limited) single-processor concurrency (monitors,
coroutines and explicit transfer of control) and for hardware
access (absolute addresses, bit manipulation, and interrupts).
It uses a nominal type system.
The next advance in the set of concepts which are marketed as OO was, as strange as it sounds, the pipe and socket concepts in Unix. Because
the other important element of the set of ideas that constitute the OO toolbox is the idea that each instance of an object should
have "state": it can suspend execution at some point, return control to the caller and, on the next invocation, resume execution from
exactly this point. We are talking about the coroutine concept here, which was introduced by Melvin Conway around 1963 and was present
in only two early programming languages: Simula 67 and Modula. This concept got a new life and entered the mainstream in Unix via the
revolutionary concept of pipelines, one of the crown jewels of the Unix architecture. Members of a pipeline can be viewed as objects
that exchange information via messages. Exchange of information via messages lies at the core of "real" OO.
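A minimal sketch of this pipeline-as-objects idea, using Python generators as coroutines (the stages are invented for illustration): each stage keeps its own state between the "messages" (items) pulled through the pipeline, much like the members of a Unix pipeline.

def numbers(limit):                 # producer stage
    for n in range(limit):
        yield n

def squares(source):                # filter stage
    for n in source:
        yield n * n

def running_total(source):          # consumer stage that keeps its own state
    total = 0
    for n in source:
        total += n
        yield total

pipeline = running_total(squares(numbers(5)))
print(list(pipeline))               # prints [0, 1, 5, 14, 30]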
Perl was the first scripting language that created a special programming construct corresponding to a namespace, called a "package",
and a notation which allows arbitrary access to variables from any namespace in any other namespace, as long as you know the name of the
namespace and provide it explicitly as a prefix of the variable. For example, $main::line in Perl is the variable line in the
namespace main::. Python was another language which essentially replicated the Modula ideas.
Export of a variable is like the creation of a symbolic link in the Unix filesystem: it allows the use of this variable in a different
namespace without qualifying it with the namespace explicitly.
That, in short, is the overview of the key ideas. OO added nothing to them. It is just a marketing term that denotes the combination of several
ideas into a more or less usable package. And the idea of inheritance is in no way central to it; it is just a variation of the idea of namespaces
introduced in the early 1970s in Modula.
All OO languages mix and match various flavors of the same key pre-existing ideas, which in some form existed on or before the early
1970s and, as such, are 50 years old. Some OO languages integrate those ideas with better conceptual integrity than others.
That's it.
Again, the idea of a class constructor (as the way to initialize structures in a subroutine) and of a set of methods was
present implicitly in the concept of multi-entry subroutines in PL/1. Message passing as a programming paradigm was invented by
Melvin Conway in 1962 as a method of structuring a COBOL compiler (the idea of coroutines) and later, in a brilliant, innovative
form, found its way into Unix pipes and sockets.
I can add to this that System/360 in 1964 introduced the idea of the 8-bit byte and byte-addressable memory, along with a really sophisticated
instruction set (for example, the tr function, present in Unix as a command and as a built-in function in most programming languages today,
is a direct descendant of this instruction set). That was a milestone in the development of hardware that later made it possible
for us to use interpreted languages like Perl, Python and Ruby, and not care about the tenfold or greater speed penalty that such an
implementation entails.
All in all, Algol-60, System/360 (with its 8-bit byte, byte-addressable memory, and high quality compilers, including PL/1), and Simula-67
in the 1960s, and Modula and Unix in the 1970s, were the trailblazers. In comparison with them, the ideas that constitute OO were a side show,
with the possible exception of the Modula-2 breakthrough.
So when some OO "true believer" claims that Python is an OO language, the real question is "to what extent", and what benefits vs. costs
the Python OO implementation provides (for example, treating strings as immutable objects does entail significant costs, even on modern
computers ;-). Truth be told, only Stackless Python fully implements the coroutine concept, where modules do have an internal state and
can exchange messages. And any language that does not fully support coroutines and "message passing" interfaces can't
be viewed as a "true" OO language.
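A rough sketch of the string-immutability cost mentioned above (timings vary by machine, and CPython sometimes optimizes the first variant): repeated concatenation may copy the whole string on every step, while building the pieces and joining once does not.

import timeit

def concat(n):
    s = ""
    for _ in range(n):
        s += "x"          # each += may build a brand-new string object
    return s

def join(n):
    return "".join("x" for _ in range(n))

print(timeit.timeit(lambda: concat(100_000), number=10))
print(timeit.timeit(lambda: join(100_000), number=10))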
More often than not, the label of "OO language" serves as a marketing gimmick ;-) For example, to what extent "classic"
C++ is an OO language is an interesting question.
Someone who thinks that they should stick to the OO paradigm and treats the other paradigms as being on the order of the goto statement in levels
of immorality is really missing out by not looking beyond this paradigm. The metaprogramming capacity of templates is also quite
impressive... The largest pitfall is the belief that OOP is a silver bullet, or the "one perfect paradigm."
While those good ideas shine in certain classes of programming tasks, such as GUI interfaces, real-world system modeling, etc., there
are programming tasks (for example, computational ones) where they are useless or even harmful. And generally the OO approach does not fare well
in string/text processing tasks either.
But when I see classic computer algorithms books polluted by OO nonsense, this is just a vivid reminder to me that Lysenkoism in
computer science is still alive and thriving. Those "professors" generally should be fired on the spot.
Also harmful is the use of OO in the writing of utilities. I saw a lot of overly complex junk written by this method (to the level of
being unmaintainable) and can attest that OO fails in this area. It is just not a suitable paradigm, and thinking about the construction of
a utility in OO fashion usually leads to poor results and a bloated codebase. Often you can cut the size of the codebase by 50% or
more by rewriting it in a procedural fashion or using coroutines.
I think for this domain the paradigm of "defensive programming" and the associated set of ideas
are more useful.
The OO fashion (or "for profit" cult, if you wish) movement serves as an indirect proof of the pretty sad observation that in modern science at least one third
of scientists are money-seeking charlatans, while another third are intellectual prostitutes (among professors of economics this proportion is
even higher). Conversion of a previously decent, honest researcher into one of those two despicable categories is not only possible, but
happens quite often. A lot of modern science is actually pseudoscience. Life in academia these
days is very tough, and often survival, not seeking the truth, becomes the highest priority. As somebody said, "Out of the crooked
timber of humanity no straight thing was ever made".
Again, the proliferation of OO books devoted to the description of algorithms is completely absurd. It is a really intellectually bankrupt
way to explain algorithms to students. And what is really terrifying is the fact that good, in-depth books such as
Donald Knuth's masterpiece The
Art of Computer Programming have existed since 1968. OO does not replace, but complements, procedural programming, as not everything
should be seen as an object, contrary to what some hot-headed OO enthusiasts (priests of this new techno-cult) suggest. Sometimes
we should call a spade a spade, not an object ;-).
Good design is not about following the latest hot fad, but about finding a unique set of tools and methods that make programming
the particular task productive, or even possible. It requires programming talent, not some set of language features. Kernighan & Plauger
long ago noted this fact in their still relevant book The Elements of Programming
Style:
Good design and programming is not learned by generalities, but by seeing how significant programs can be made clean, easy
to read, easy to maintain and modify, human-engineered, efficient, and reliable, by the application of good design and programming practices.
Careful study and imitation of good designs and programs significantly improves development skills.
One should understand that OOP is an old hat, and several OOP-based languages are 20 or more years old. Essentially we can look at
Simula 67 as the first OO language. Which means that OO is more than 50 years old.
OOP attempts to decompose the world into objects and claims that everything is an object. That was a really natural approach in simulation.
That's why the OO concepts were born out of experience with simulation languages.
But saying that everything is an object does not always (actually pretty rarely does) provide a useful insight into the problem. Just think
of sorting. Will it help you sort the file efficiently if you think of the records as objects? Most probably not. Things that have
a state and change their state while receiving something that can be interpreted as messages are natural candidates to be represented
as objects. In that case it really is a powerful programming paradigm.
Here are some guidelines to help decide if an object-oriented approach is appropriate (a short sketch follows the list):
Does your code manipulate a data structure that corresponds to a real-world object (such as a window on the screen)?
Is there a group of variables that you can group into a structure processed by some set of manipulation functions which can
be interpreted as operations on this data structure?
Is a hierarchical structure of the function namespace useful, or is a flat structure adequate?
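As a short, hypothetical sketch for these guidelines: a group of variables (the position and size of a window) plus the functions that operate on them is a natural candidate for a class; a flat, function-plus-structure alternative is shown for contrast.

class Window:
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height

    def move(self, dx, dy):          # an operation on the grouped data
        self.x += dx
        self.y += dy

    def area(self):
        return self.width * self.height

# The flat alternative: pass the structure explicitly to plain functions.
def move(win, dx, dy):
    win["x"] += dx
    win["y"] += dy

w = Window(0, 0, 640, 480)
w.move(10, 20)
print(w.x, w.y, w.area())            # prints 10 20 307200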
OOP emphasizes the creation of a set of classes as a universal method of decomposition of the problem. But in reality such a decomposition
heavily depends on the set of data representations and algorithms that the programmer knows and is comfortable with. That's why a typical decomposition
of the problem into classes by a Unix programmer can be completely different from (and often better than) the decomposition of the same problem
by a Windows-only programmer.
The essence of programming is algorithms operating on data structures, and the "programming culture" used in a particular OS exerts
a heavy influence on the way programmers think. In no way can OO by itself help you come up with optimal storage structures and
algorithms for solving the problem. Moreover, OO introduced entirely new and quite obscure terminology with the purpose of mystifying
some old, useful mechanisms of program structuring:
The binding of procedures to a data structure (called an object). Procedures whose pointers are stored in the structure
itself are now called methods. The ability to store a pointer to a procedure in a record field has been available in languages since the
early 60s (PL/1).
Constructing a new data structure (called a subclass) by adding new fields to (extending) a given structure (the superclass).
It's essentially a variation of the like construct introduced in PL/1. The cosmetics here are to consider any
structure as a new type and use the type checking mechanism to prevent certain errors in the manipulation of pointers to such structures.
In addition, the process of creating a new instance (and its initialization) can be controlled with a hidden call to a special
procedure called a constructor (see the sketch below).
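A hedged sketch (the names are invented) of the two mechanisms just listed: "methods" as function references stored inside the structure itself, and a "subclass" built by copying the structure and adding fields, roughly what the PL/1 like attribute did.

def describe(obj):
    return f"{obj['kind']} at ({obj['x']}, {obj['y']})"

point = {"kind": "point", "x": 1, "y": 2, "describe": describe}
print(point["describe"](point))          # calling a "method" stored in the structure

# "Subclass": copy the existing structure and extend it with new fields.
pixel = dict(point, kind="pixel", color="red")
print(pixel["describe"](pixel), pixel["color"])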
Also, while OO emphasizes the concept of the object (which can be abstracted as a coroutine with its own state), in reality many so-called
OO languages do not implement the concept of the coroutine. As such, their methods do not have a real state: they can't be suspended and
resumed. In other words, they are just a new and slightly perverted way to use Algol-style procedures.
As for the paradigm shift, OO can be compared to the introduction of a local LAN instead of a mainframe. That means that we now have a bunch
of small, autonomous PCs, each with its own CPU, communicating with each other via messages over the net. It takes a somewhat perverted
imagination to see a simple procedure call as a real message passing mechanism -- only threads communicate through real messages.
So the true object model is intrinsically connected with multithreading, yet this connection is not understood well. A true message mechanism
presupposes that the object (an autonomous PC with its own CPU) was active before receiving the message (has an initial state) and will be active
after processing it (in a new state). To a certain extent, real OOP style is a special case of concurrent programming, but without
any concurrency.
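A minimal sketch of this "active object" reading of OO (the class and values are invented for illustration): the object runs in its own thread, is alive before a message arrives, changes its state when it processes it, and stays alive afterwards.

import threading
import queue

class Accumulator(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.inbox = queue.Queue()
        self.total = 0                    # the object's state

    def run(self):
        while True:
            message = self.inbox.get()    # wait for the next message
            if message is None:           # poison pill: stop the object
                break
            self.total += message

acc = Accumulator()
acc.start()
for value in (1, 2, 3):
    acc.inbox.put(value)                  # "send" messages to the object
acc.inbox.put(None)
acc.join()
print(acc.total)                          # prints 6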
Pointers are a great programming concept. But like any powerful feature, they are a dangerous feature. OO tries to remove
explicit pointers from the language by hiding them and assigning them a type within the concept of instantiating a class. An instance
of a class is essentially a typed pointer, pointing to the memory area occupied by the particular structure.
At the same time, removing pointers from the language as first-class language elements (which they have been since PL/1) is not without
problems. It removes a lot of the expressive power of the language. As Perl demonstrated quite convincingly, the presence of pointers even
in a scripting language framework is very beneficial. Generally, the idea that you need to switch to an OO framework in order to use typed
pointers looks problematic in retrospect.
And the idea of run-time access to the elements of the symbol table is a very powerful one and can be extended far beyond the concept of
typed pointers. For example, a PL/1-style ON SUBSCRIPTRANGE exception can be implemented this way. Here again the analogy with the
Unix filesystem is appropriate. One of the powerful concepts introduced by the Unix filesystem is the concept of i-nodes -- descriptors
for each file.
Now, with modern hardware, this idea of having a descriptor for each variable is an idea whose time has come. It provides multiple
benefits and allows really elegant things to be accomplished. It is a dream of compiler writers (having the compiler symbol table available at
run time) that has come true.
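As a hedged illustration of the descriptor idea (not any particular language runtime), here is a small C sketch in which an array carries a descriptor with its bounds, so an out-of-range subscript can be caught at run time, in the spirit of PL/1's SUBSCRIPTRANGE condition; all names are invented.

    #include <stdio.h>
    #include <stdlib.h>

    /* the "descriptor": data pointer plus the bound kept alongside it */
    typedef struct {
        int *data;
        size_t length;
    } ArrayDescriptor;

    /* checked access: raise the "subscript range" condition instead of
       silently reading out of bounds */
    int array_get(const ArrayDescriptor *a, size_t index) {
        if (index >= a->length) {
            fprintf(stderr, "onsubscriptrange: index %zu out of 0..%zu\n",
                    index, a->length - 1);
            exit(EXIT_FAILURE);
        }
        return a->data[index];
    }

    int main(void) {
        int raw[4] = {10, 20, 30, 40};
        ArrayDescriptor a = { raw, 4 };
        printf("%d\n", array_get(&a, 2));   /* fine: prints 30 */
        printf("%d\n", array_get(&a, 9));   /* triggers the run-time check */
        return 0;
    }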
I see no real reason to remove pointers from a programming language, as most OO languages do, and I think it is impossible to do so without
diminishing the power and expressiveness of the language. Of course, you can argue that in OO languages "everything is a pointer." In
a way yes, but the design hides the concept rather than illuminating it.
In modern languages the pointer can and should be a first-class language construct, now indirect -- pointing to the descriptor
of the variable instead of the actual memory area of the variable as in C (which is designed as a systems programming language, where direct pointers are a
must). That allows a lot of checking which prevents typical blunders with pointers. It is like converting an
ancient straight razor -- a very useful but very dangerous tool -- into the modern safety-razor form :-)
Of course, tricks possible in C, such as the reinterpretation of a particular structure (memory area) as another, become more difficult, but
they are still possible -- at least some subclass of them.
Coroutines are a necessary element of an OO framework. That's why they were present in Simula 67 -- the ancestor of C++ and the grandmother
of all modern OO languages. If we assume that an object needs to have its own state, that automatically implies that each method should have
its own state too, which it needs to preserve from one invocation to another.
That means that OO languages that do not support the concept of coroutines are cripples, missing a fundamental feature of
the OO model, and should not generally be viewed as "real" OO languages.
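To show what "a method with its own state, preserved from one invocation to the next" means in practice, here is a hedged sketch in C using the well-known switch-based coroutine trick; the Counter type and the function names are invented for the example.

    #include <stdio.h>

    typedef struct {
        int state;   /* where to resume on the next call */
        int i;       /* preserved between invocations, not on the stack */
    } Counter;

    /* Each call resumes the routine, yields the next value, and suspends. */
    int counter_next(Counter *c, int limit) {
        switch (c->state) {
        case 0:
            for (c->i = 0; c->i < limit; c->i++) {
                c->state = 1;
                return c->i;          /* suspend: the state lives in the struct */
        case 1:;                      /* resume lands here on the next call */
            }
        }
        c->state = 0;                 /* exhausted; reset for possible reuse */
        return -1;
    }

    int main(void) {
        Counter c = { 0, 0 };
        int v;
        while ((v = counter_next(&c, 3)) >= 0)
            printf("%d\n", v);        /* prints 0, 1, 2 */
        return 0;
    }

Most mainstream OO languages give a method no such resumable state of its own; whatever state there is lives in the object's fields, and the method itself remains an ordinary Algol-style procedure.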
Implementation of exceptions without implementing methods as coroutines is always deficient. In essence an exception is nothing
but a stopped coroutine, and that means that all regular methods in an OO language that supports exceptions should be coroutines too and
should allocate all variables on the heap.
An exception generally ruins the stack, and keeping all variables on the heap is simply a necessity for a clean implementation.
Allocating all variables on the heap, which is necessary for a proper implementation of exceptions (which break the stack-based
procedure call model), generally presupposes garbage collection.
You can view exceptions as frozen coroutines which were initialized and instantly relinquished control to the caller. An exception
is just a reactivation of such a coroutine. And if processing of the exception allows a return to the point where the exception was raised, we
have the full coroutine mechanism in action.
In this sense OO languages that do not support garbage collection are cripples. This list includes C++.
Decomposition of a program into modules/classes is an art. OO tends to encourage a more strictly hierarchical (aka bureaucratic) decomposition.
This is not the only type of decomposition possible or desirable.
Sometimes a much better way is non-hierarchical decomposition, in which some frequently used operations are implemented
outside the hierarchical structure as shortcuts to typical sequences of operations.
It is true that premature optimization is the root of all evil, but complete neglect of this aspect is not good either.
Programs with "stupid OO-style decomposition" tend to have unnecessarily deep procedure-call nesting during execution, which
is not that good for modern CPUs with multistage execution pipelines and speculative execution.
It also makes maintenance of programs written by an OO zealot, who tries to enforce this paradigm in an area which is not suitable for it,
a real nightmare. I encountered several cases when it was more economical to rewrite such procedures from scratch instead of maintaining
the mess that the organization inherited when such a developer left.
Actually, one interesting feature of OO zealots is that they are too preoccupied with the hierarchy of objects in their program to produce
anything useful :-) All their energy and time goes into this typically fruitless area instead of trying to understand and
solve the actual problem. That's why such programs are typically real junk: all the effort went into the construction of an elaborate and
useless hierarchy of "universal" and "reusable" classes, which will never be reused.
Actually "true OO" is very similar to the idea of compiler-compiler as it tried to create some kind of abstract language (in a form
of hierarchy of classes) that can help to solve particular problem and hopefully (often this is a false expectation) is reusable
to others similar problems. But I think that more explicit approach of creating such an abstract language and a real compiler from it
into some other "target" language can work better then OO.
Moreover, there is a great danger in thinking only in terms of a hierarchy of classes, well known to people who have designed compilers. There
is a great temptation to switch attention from solving the problem to designing a "perfect" set of classes, instead of
solving the actual problem: making them more elegant, more generic, more flexible -- you name it. This is an infinite and most often
fruitless process. Often those refinements are not necessary for the particular problem, and the design becomes "art for the sake of art" --
completely detached from reality.
So the process of designing classes becomes a self-perpetuating activity, disconnected from the task at hand (with the usual justification
that this "universal" set of classes will help with other problems later down the road, which never happens). The key point is that
it becomes very similar to an addiction and occupies the lion's share of the developer's time, which often dooms the project he (or the team) is trying to
complete. I would call this effect the OO class-design addiction trap.
Moreover, in a team of programmers there is often at least one member who is psychologically predisposed to this type of addiction
and who instantly jumps at the opportunity, disrupting the work of the other members of the team with a constant desire to
improve/change the set of classes used. Often such people are as wrong as they are fanatical, and in their fanatical zeal they can
do substantial damage to the team.
This "class design addiction trap" is very pronounced negative effect of OO, but people often try to hide it and never admit to it.
The OO class-design addiction trap has another side, which is well demonstrated in Java. People end up using so many class libraries
that the application slows down considerably, and loading them at startup is a nuisance even on computers with SSDs. Moreover, subtle
interactions between different versions introduce errors that are very difficult to debug with each upgrade.
In other words, the use of a huge mass of Java class libraries increases the complexity of a typical application program to the level where
debugging becomes an art. And that often nullifies any savings in the design and coding phases of program development.
The rat race for generalization/abstraction of the functionality of each and every class is a distinct danger that exists in OO programming.
In the absence of a better term let's call it "over-universalization" and understand it as a distinct tendency to consider the most generic
case when designing class libraries. It is a problem of programming as an art, and the way of solving it often distinguishes a master
programmer from an average one, in the sense that the master programmer knows where to stop.
But OO tends to make it more pronounced. Again, the problem is universal in programming and exists in designing regular procedural
subroutine libraries as well, for example glib. See, for example, the history of the development of Midnight
Commander.
This distinct tendency to make classes as abstract and as generic as possible makes them less suitable for the particular problem
domain. It also increases the complexity of the design and maintenance. In other words, it often backfires. In extreme cases the class
library becomes so universal that it is not really applicable to any case where it could be useful, and programmers start re-implementing
those primitives again instead of using the ones from the class library. Such a paradox.
The same problem, though to a lesser extent, happens with designers of libraries or modules for regular procedural languages or scripting
languages that do not emphasize OO programming, such as Perl. You can definitely see it in cgi.pm.
The typical path of development recalls the proverb that the road to hell is paved with good intentions. I remember an example from
my experience as a compiler writer. Initially the subroutine that outputs diagnostic messages to the screen and writes them
to the log is simple and useful. Then a second parameter is introduced and it becomes able to process and output message severity
levels (terminal, severe, error, warning, info, etc.); then collection of statistics for all those levels is introduced; then it
becomes able to expand macros, then to output the context of the error; then the ability to send messages above a certain severity via SMTP is
added -- and then nobody uses it in the next project. Instead a simple subroutine that accepts a single parameter (the diagnostic
message) is quickly written, and the cycle of enhancements starts again with new players.
Programs rarely remain static, and invariably the original class structure becomes less useful with time. That results in more code
being added as new classes, which undermines the conceptual integrity of the initial design and leads to "class hell": the number
of classes grows to the level where nobody can see the whole picture and, because of this, people start reinventing the wheel.
Moreover, the number of class libraries often grows to the level where just loading them at startup consumes considerable time,
making Java look very slow despite significant progress on the JVM side. It looks like Gosling, in his attempt to fix some problems with
C++, badly missed the ideas of prototype-based programming, the ideas
that found their way into JavaScript. In a blog entry he even mentioned:
Over the years I've used and created a wide variety of scripting languages, and in general, I'm a big fan of them. When the project
that Java came out of first started, I was originally planning to do a scripting language. But a number of forces pushed me away
from that.
When a custom class library is used, there is another danger. Once it is designed and working, people often see better ways to
do something. And the temptation to introduce changes is almost irresistible. If not properly regulated, this becomes like building on
shifting sand.
The class library mess that exists in Java, and that makes Java so vulnerable to exploits, suggests that there should be better paradigms
for modularizing OO programs than the Simula-67-style class model. In this sense the
prototype-oriented OO model probably deserves a second look.
One telling sign of a cult is unwillingness to discuss any alternatives. And true enough, alternative methodologies are never
discussed in OO books. As we are dealing with a techno-cult, let's be realists and remember what Niccolo Machiavelli observed:
“And one should bear in mind that there is nothing more difficult to execute, nor more dubious of success, nor more dangerous
to administer than to introduce a new order to things; for he who introduces it has all those who profit from the old order as his
enemies; and he has only lukewarm allies in all those who might profit from the new.
This lukewarmness partly stems from fear of their adversaries, who have the law on their side, and partly from the skepticism
of men, who do not truly believe in new things unless they have personal experience in them.”
So it is often better to "dilute" or "subvert" the OO development methodology than to openly oppose it, especially if the company brass
is hell-bent on Java, Python or some other fancy OO language. Techno-cult adherents usually close ranks when they face a
frontal attack. And as Paul Graham [The Hundred-Year Language]
observed, groupthink has one interesting property: "It is irresistible to large organizations."
That can be done in various creative ways, so the discussion below provides just a few tips. All of them can be "squeezed" into compatibility
with the use of some OO language (for example, Python can be used instead of TCL in the dual-language
programming methodology), despite the fact that each of them subverts the idea of OO in some fundamental way.
Using a scripting language such as TCL and a compiled language such as C in a single project has a lot of promise, as it better separates
programming in the large (glue language) from programming in the small (component programming). See also
Greenspun's Tenth Rule of Programming. Lua
can be used instead of TCL. Julia is an interesting new alternative that can call C directly, with no wrappers or special API.
In a way this is a simple implementation of an abstract machine, with C subroutines and the scripting language library representing the
machine operations and the scripting language serving as the glue (TCL for C). For many problems this "scripting language + compiled language"
approach is a better paradigm of software development, as access to the implementation of the interpreter exposes C programmers to the development
discipline already established in the scripting-interpreter development community. The libraries used by the interpreter are usually
of very high quality and serve both as an example of how things should be done and as a means of preventing "reinventing the wheel" --
the tendency to re-implement parts of the library that are already implemented in any decent scripting interpreter. Programmers
usually learn by example, and the code of even a simple interpreter like AWK or gawk is a great school. We can reformulate
Greenspun's Tenth Rule of Programming as follows:
Any sufficiently complicated OO program written in Java, C++ or another OO language contains an ad hoc, informally-specified,
bug-ridden, slow implementation of half of a scripting language interpreter.
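As a hedged sketch of what "dropping down into C" looks like in this dual-language approach, here is a C primitive registered as a Tcl command; the command name square and the function names are invented, and the calls are the standard Tcl 8.x C API.

    /* square.c -- expose a C primitive to Tcl scripts as the command "square".
       Sketch only: package boilerplate and build details are omitted. */
    #include <tcl.h>

    static int SquareCmd(ClientData cd, Tcl_Interp *interp,
                         int objc, Tcl_Obj *const objv[])
    {
        int n;
        if (objc != 2) {
            Tcl_WrongNumArgs(interp, 1, objv, "n");
            return TCL_ERROR;
        }
        if (Tcl_GetIntFromObj(interp, objv[1], &n) != TCL_OK)
            return TCL_ERROR;
        Tcl_SetObjResult(interp, Tcl_NewIntObj(n * n));   /* return n*n to the script */
        return TCL_OK;
    }

    /* called when the extension is loaded into the interpreter */
    int Square_Init(Tcl_Interp *interp)
    {
        Tcl_CreateObjCommand(interp, "square", SquareCmd, NULL, NULL);
        return TCL_OK;
    }

The glue side stays in Tcl (for example, puts [square 7]), while the performance-critical primitive lives in C. John Ousterhout, the creator of Tcl, defended exactly this division of labor in his reply to Stallman's criticism of Tcl: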
I think that Stallman's objections to Tcl may stem largely from one aspect of Tcl's design that he either doesn't understand
or doesn't agree with. This is the proposition that you should use *two* languages for a large software system:
one, such as C or C++, for manipulating the complex internal data structures where performance is key, and another, such as Tcl,
for writing small-ish scripts that tie together the C pieces and are used for extensions. For the Tcl scripts, ease of learning,
ease of programming and ease of glue-ing are more important than performance or facilities for complex data structures and algorithms.
I think these two programming environments are so different that it will be hard for a single language to work well in both.
For example, you don't see many people using C (or even Lisp) as a command language, even though both of these languages work well
for lower-level programming.
Thus I designed Tcl to make it really easy to drop down into C or C++ when you come across tasks that make more sense in a lower-level
language. This way Tcl doesn't have to solve all of the world's problems. Stallman appears to prefer an approach where a single
language is used for everything, but I don't know of a successful instance of this approach. Even Emacs uses substantial
amounts of C internally, no?
I didn't design Tcl for building huge programs with 10's or 100's of thousands of lines of Tcl, and I've been pretty
surprised that people have used it for huge programs. What's even more surprising to me is that in some cases the resulting
applications appear to be manageable. This certainly isn't what I intended the language for, but the results haven't been
as bad as I would have guessed.
This approach is closely connected with the idea of structuring an application as an abstract machine with well-defined primitives
(opcodes). If a full language is developed (which actually is not necessary), then this language does not need to produce
object code: compiling into a lower-level language such as C, C++ or Java is a more viable approach.
In this case maintenance of the application can be split into two distinct parts: maintenance of the higher-level codebase and maintenance
of the abstract machine that implements the higher-level language and the associated run-time infrastructure.
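Here is a minimal sketch, in C, of the "abstract machine with well-defined primitives" idea; the opcodes and the toy program are invented for illustration.

    #include <stdio.h>

    /* the primitives (opcodes) of the abstract machine */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    void run(const int *program) {
        int stack[64], sp = 0;
        for (const int *pc = program; ; pc++) {
            switch (*pc) {
            case OP_PUSH:  stack[sp++] = *++pc;             break;
            case OP_ADD:   sp--; stack[sp-1] += stack[sp];  break;
            case OP_PRINT: printf("%d\n", stack[sp-1]);     break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        /* the "higher level codebase": a program written in terms of the primitives */
        int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(program);                /* prints 5 */
        return 0;
    }

Maintenance then splits exactly as described above: the opcode set and the run() dispatcher are the abstract machine, while the programs written in terms of the opcodes form the higher-level codebase.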
The great advantage of this approach is that it allows architects to be engaged in actual programming, which always leads to a higher quality
final product: many primitives can be created from pre-existing Unix utilities and programs and glued together via the shell. See
Real Insights into Architecture Come Only From Actual Programming.
As the cost of programming heavily depends on the level of the language used, use of a higher-level language allows you to dramatically
lower the cost of development. This approach also stimulates prototyping, as often the first
version of the application can be glued together from shell scripts and pre-existing Unix utilities and applications in a relatively short time, which
makes the whole design process more manageable.
Even if the idea of defining the language is thrown out later and another approach to development is adopted, the positive effects
of creating such a prototype can be felt for the rest of project development. In this sense "operate at a higher level" is not just an
empty slogan.
Compilers stopped being a "black art" in the late 70s, and this technology is greatly underutilized in modern software development.
With it you can catch some high-level errors at the syntactic level, which is impossible with OO, although in many ways this is similar to
the "compiler-compiler" methodology. In a lightweight form the problem can be structured in compiler-like form, with distinct
lexical analysis, syntax analysis and code generation parts. Multipass compilation with an intermediate representation writable to
disk is a great tool for solving complex problems, and it naturally allows subsequent optimization by converting read/write statements into
a coroutine interface. When the intermediate representations between different passes are formally defined, they can also be analyzed
for correctness. Flexible switching between writing intermediate files and coroutine linkage greatly simplifies debugging. XML can
be used as a powerful intermediate representation language, although in many cases it is overkill. Some derivative of the SMTP message
format is another commonly used representation.
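As a hedged sketch of the "flexible switching between intermediate files and coroutine linkage" point, here is a toy pass in C that emits tokens through a sink callback, which can either write them to disk as an inspectable intermediate representation or feed the next pass directly (a plain callback stands in for a real coroutine here); all names are illustrative.

    #include <stdio.h>
    #include <ctype.h>

    typedef void (*TokenSink)(const char *token, void *ctx);

    /* pass 1: a trivial lexer that splits the source on whitespace */
    void lex_pass(const char *src, TokenSink sink, void *ctx) {
        char buf[64];
        int n = 0;
        for (const char *p = src; ; p++) {
            if (*p && !isspace((unsigned char)*p)) {
                if (n < 63) buf[n++] = *p;
            } else if (n > 0) {
                buf[n] = '\0';
                sink(buf, ctx);
                n = 0;
                if (!*p) break;
            } else if (!*p) {
                break;
            }
        }
    }

    /* sink A: write the intermediate representation out for inspection */
    void file_sink(const char *token, void *ctx) {
        fprintf((FILE *)ctx, "%s\n", token);
    }

    /* sink B: feed the next pass directly (here it just counts tokens) */
    void count_sink(const char *token, void *ctx) {
        (void)token;
        (*(int *)ctx)++;
    }

    int main(void) {
        const char *src = "a := b + c";
        int count = 0;
        lex_pass(src, count_sink, &count);   /* direct linkage to the next pass */
        printf("%d tokens\n", count);        /* prints "5 tokens" */
        lex_pass(src, file_sink, stdout);    /* intermediate "file" (stdout here) */
        return 0;
    }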
This is the newest methodology, often based on LAMP, where a whole virtual instance of the OS becomes part of the application, and the
application uses the OS logging, the OS scheduler, etc. instead of reinventing the wheel. This is a new and promising approach to programming a
substantial class of problems. The specialized virtual machine provides services via a network interface, for example a Web interface.
The LAMP stack, which can be used in this approach, has proved to be a tremendously useful development paradigm. And in most cases non-OO
languages are used for the "P" part of this acronym. But Python and Ruby have well-implemented OO features, so this approach does not completely
exclude the use of OO where it can be really beneficial and is not dictated by groupthink or fashion.
One important advantage of this approach is that executables in any OS are much more like objects than the classes with methods in modern
OO languages. They definitely have their own state, can be interrupted and resumed, and communicate with other executables via messages (which
include sockets). So the OS infrastructure in general can be viewed as an object-oriented environment "in the large", while all OO languages
provide OO only "in the small".
OOP became popular primarily because of GUI interfaces. In fact, many non-programmers think that "Object" in OOP means
a screen object such as a button, icon, or listbox. They often talk about drag-and-drop "objects". GUI's
sold products. Anything associated with GUI's was sure to get market and sales brochure attention, regardless of whether this association
was accurate or not. I have even seen salary surveys from respected survey companies that have a programming classification
called "GUI/OOP Programming".
Screen objects can correspond closely with OOP objects, making them allegedly easier to manipulate in a program. We do not
disagree that OOP works fairly well for GUI's, but it is now being sold as the solve-all and be-all of programming.
Some argue that OOP is still important even if not dealing directly with GUI's. In our opinion, much of the hype about OOP
is faddish. OOP in itself does NOT allow programs to do things that they could not do before. OOP is more of
a program organizational philosophy rather than a set of new external solutions or operations.
He also provided a deep insight that the attractiveness of OO is somewhat similar to the attractiveness of a social doctrine like communism
(with its ideas of a central hierarchical planning model and idealistic hopes that this will eliminate wasteful, redundant procedures).
Actually, the idea that both OO and Marxism overemphasized classes is pretty cute :-). As is the idea that full hierarchical
decomposition is a close analogy to the bureaucracy that makes organizations so dysfunctional:
Unfortunately, OOP and economic communism suffer similar problems. They both get bogged down
in their own bureaucracy and have a difficult time dealing with change and outside influences which are not a part of the internal
bureaucracy. For example, a process may be stuck in department X because it may be missing a piece of information that the
next department, Y, or later departments may not even need. Department X may not know or care that the waiting piece of information
is not needed by later departments. It simply has it's rules and regulations and follows them like a good little bureaucratic
soldier.
This analogy may well look stretched, but highly placed "object-oriented jerks" from academia really remind me of the high priests
of Marxism-Leninism in at least one aspect -- complete personal corruption.
Object-oriented programming (OOP) is an ancient (25-year-old) technology, now being pushed as
the answer to all the world's programming ills. While not denying that there are advantages to OOP,
I argue that it is being oversold. In particular, OOP gives little support to GUI and network support, some of the
biggest software problems we face today. It is difficult to constrain relationships between objects
(something SmallTalk did better than C++). Fundamentally, object reuse has much more to do with the underlying models being supported
than with the "objectness" of the programming language. Object-oriented languages tend to burn CPU cycles,
both at compile and execution time, out of proportion to the benefits they provide. In summary, the good things
about OOP are often the information hiding and consistent underlying models which derive from clean thoughts, not linguistic clichés.
...Somehow the idea of reusability got attached to object-oriented programming in the 1980s, and
no amount of evidence to the contrary seems to be able to shake it free. But although some object-oriented software is reusable,
what makes it reusable is its bottom-upness, not its object-orientedness.
Consider libraries: they're reusable because they're language, whether they're written in an object-oriented style or not.
I don't predict the demise of object-oriented programming, by the way. Though I don't think it has
much to offer good programmers, except in certain specialized domains, it is irresistible to large organizations. Object-oriented
programming offers a sustainable way to write spaghetti code. It lets you accrete programs as a series of patches.
Large organizations always tend to develop software this way, and I expect this to be as true in a hundred years as it is today.
...One helpful trick here is to use the length of the program
as an approximation for how much work it is to write. Not the length in characters, of course, but the length in distinct syntactic
elements-- basically, the size of the parse tree. It may not be quite true that the shortest program
is the least work to write, but it's close enough that you're better off aiming for the solid target of brevity than the fuzzy, nearby
one of least work. Then the algorithm for language design becomes: look at a program and ask, is there any way to
write this that's shorter?
Object Oriented Programming (OOP) is currently being hyped as the best way to do everything
from promoting code reuse to forming lasting relationships with persons of your preferred sexual orientation. This
paper tries to demystify the benefits of OOP. We point out that, as with so many previous software engineering fads, the biggest
gains in using OOP result from applying principles that are older than, and largely independent of, OOP. Moreover, many of the
claimed benefits are either not true or true only by chance, while occasioning some high costs that are rarely discussed. Most
seriously, all the hype is preventing progress in tackling problems that are both more important and harder: control of parallel
and distributed applications, GUI design and implementation, fault tolerant and real-time programming. OOP has little to offer
these areas. Fundamentally, you get good software by thinking about it, designing it well, implementing it carefully, and testing
it intelligently, not by mindlessly using an expensive mechanical process.
The working assumption should be "Nobody, including myself, will ever reuse this code." It is a very realistic assumption, as programmers
are notoriously reluctant to reuse code written by somebody else. And as your programming skills evolve, your old code will look pretty
foreign to you.
"In the one and only true way. The object-oriented version of 'Spaghetti code' is, of course, 'Lasagna code'. (Too many layers)."
- Roberto Waltman
This week on our show we discuss this quote. Does OOP encourage too many layers in code?
I first saw this phenomenon when doing Java programming. It wasn't a fault of the language itself, but of excessive levels of
abstraction. I wrote about this before in
the false abstraction antipattern
So what is your story of there being too many layers in the code? Or do you disagree with the quote, or us?
Bertil Muth, Dec 9 '18
I once worked for a project, the codebase had over a hundred classes for quite a simple job to be done. The programmer was no
longer available and had almost used every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality.
Maybe the unnecessary thick lasagne is a symptom of devs looking for a one-size-fits-all solution.
Nested Software, Dec 9 '18 (edited Dec 16)
I think there's a very pervasive mentality of "I must to use these tools, design patterns, etc." instead of "I need
to solve a problem" and then only use the tools that are really necessary. I'm not sure where it comes from, but there's a kind of
brainwashing that people have where they're not happy unless they're applying complicated techniques to accomplish a task. It's a
fundamental problem in software development...
Nested Software, Dec 9 '18
I tend to think of layers of inheritance when it comes to OO. I've seen a lot of cases where the developers just build
up long chains of inheritance. Nowadays I tend to think that such a static way of sharing code is usually bad. Having a base class
with one level of subclasses can be okay, but anything more than that is not a great idea in my book. Composition is almost always
a better fit for re-using code.
"... Lasagna Code is layer upon layer of abstractions, objects and other meaningless misdirections that result in bloated, hard to maintain code all in the name of "clarity". ..."
"... Turbo Pascal v3 was less than 40k. That's right, 40 thousand bytes. Try to get anything useful today in that small a footprint. Most people can't even compile "Hello World" in less than a few megabytes courtesy of our object-oriented obsessed programming styles which seem to demand "lines of code" over clarity and "abstractions and objects" over simplicity and elegance. ..."
Anyone who claims to be even remotely versed in computer science knows what "spaghetti code" is. That type of code still sadly
exists. But today we also have, for lack of a better term (and sticking to the pasta metaphor), "lasagna code".
Lasagna Code is layer upon layer of abstractions, objects and other meaningless misdirections that result in bloated, hard to
maintain code all in the name of "clarity". It drives me nuts to see how badly some code today is. And then you come across
how small Turbo Pascal v3 was, and after comprehending it was a
full-blown Pascal compiler, one wonders why applications and compilers today are all so massive.
Turbo Pascal v3 was less than 40k. That's right, 40 thousand bytes. Try to get anything useful today in that small a footprint.
Most people can't even compile "Hello World" in less than a few megabytes courtesy of our object-oriented obsessed programming styles
which seem to demand "lines of code" over clarity and "abstractions and objects" over simplicity and elegance.
Back when I was starting out in computer science I thought by today we'd be writing a few lines of code to accomplish much. Instead,
we write hundreds of thousands of lines of code to accomplish little. It's so sad it's enough to make one cry, or just throw your
hands in the air in disgust and walk away.
There are bright spots. There are people out there that code small and beautifully. But they're becoming rarer, especially when
someone who seemed to have thrived on writing elegant, small, beautiful code recently passed away. Dennis Ritchie understood you
could write small programs that did a lot. He comprehended that the algorithm is at the core of what you're trying to accomplish.
Create something beautiful and well thought out and people will examine it forever, such as
Thompson's version of Regular Expressions!
"... I once worked for a project, the codebase had over a hundred classes for quite a simple job to be done. The programmer was no longer available and had almost used every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality. Maybe the unnecessary thick lasagne is a symptom of devs looking for a one-size-fits-all solution. ..."
Shrek: Object-oriented programs are like onions. Donkey: They stink? Shrek: Yes. No. Donkey: Oh, they make you cry. Shrek: No. Donkey: Oh, you leave em out in the sun, they get all brown, start sproutin’ little white hairs. Shrek: No. Layers. Onions have layers. Object-oriented programs have layers. Onions have layers. You get it? They both have layers. Donkey: Oh, they both have layers. Oh. You know, not everybody like onions.
Unrelated, but I love both spaghetti and lasagna 😋
Inheritance is my preferred option for things that model type hierarchies. For example, widgets in a UI, or literal types in a
compiler.
One reason inheritance is over-used is because languages don't offer enough options to do composition correctly. It ends up becoming
a lot of boilerplate code. Proper support for mixins would go a long way to reducing bad inheritance.
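To illustrate the composition point made in the comments above, here is a minimal hedged sketch in plain C; the Widget and Button types are invented for the example.

    #include <stdio.h>

    typedef struct { int x, y, width, height; } Widget;

    typedef struct {
        Widget base;          /* composed as a part, not inherited */
        const char *label;
    } Button;

    void widget_move(Widget *w, int dx, int dy) { w->x += dx; w->y += dy; }

    int main(void) {
        Button ok = { { 0, 0, 80, 24 }, "OK" };
        widget_move(&ok.base, 10, 5);     /* reuse by delegating to the part */
        printf("%s at (%d,%d)\n", ok.label, ok.base.x, ok.base.y);
        return 0;
    }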
It is always up to the task. For small programs of course you don't need so many layers, interfaces and so on. For a bigger,
more complex one you need it to avoid a lot of issues: code duplication, unreadable code, constant merge conflicts etc.
I'm building a personal project as a mean to get something from zero to production for learning purpose, and I am struggling with
wiring the front-end with the back. Either I dump all the code in the fetch callback or I use DTOs, two sets of interfaces to describe
API data structure and internal data structure... It's a mess really, but I haven't found a good level of compromise.
It's interesting, because a project that gets burned by spaghetti can drift into lasagna code to overcompensate. Still bad, but lasagna
code is somewhat more manageable (just a huge headache to reason about).
But having an ungodly combination of those two... I dare not think about it. shudder
The pasta theory is a theory of programming. It is a common analogy for application development describing different programming
structures as popular pasta dishes. Pasta theory highlights the shortcomings of the code. These analogies include spaghetti, lasagna
and ravioli code.
Code smells or anti-patterns are a common classification of source code quality. There is also classification based on food which
you can find on Wikipedia.
Spaghetti code is a pejorative term for source code that has a complex and tangled control structure, especially one using many
GOTOs, exceptions, threads, or other "unstructured" branching constructs. It is named such because program flow tends to look
like a bowl of spaghetti, i.e. twisted and tangled. Spaghetti code can be caused by several factors, including inexperienced programmers
and a complex program which has been continuously modified over a long life cycle. Structured programming greatly decreased the incidence
of spaghetti code.
Ravioli code
Ravioli code is a type of computer program structure, characterized by a number of small and (ideally) loosely-coupled software
components. The term is in comparison with spaghetti code, comparing program structure to pasta; with ravioli (small pasta pouches
containing cheese, meat, or vegetables) being analogous to objects (which ideally are encapsulated modules consisting of both code
and data).
Lasagna code
Lasagna code is a type of program structure, characterized by several well-defined and separable layers, where each layer of code
accesses services in the layers below through well-defined interfaces. The term is in comparison with spaghetti code, comparing program
structure to pasta.
Spaghetti with meatballs
The term "spaghetti with meatballs" is a pejorative term used in computer science to describe loosely constructed object-oriented
programming (OOP) that remains dependent on procedural code. It may be the result of a system whose development has transitioned
over a long life-cycle, language constraints, micro-optimization theatre, or a lack of coherent coding standards.
Do you know about other interesting source code classification?
Awesome video, I loved watching it. In my experience, there are many situations where,
like you pointed out, procedural style makes things easier and prevents you from overthinking
and overgeneralizing the problem you are trying to tackle. However, in some cases,
object-oriented programming removes unnecessary conditions and switches that make your code
harder to read. Especially in complex game engines where you deal with a bunch of objects
which interact in diverse ways to the environment, other objects and the physics engine. In a
procedural style, a program like this would become an unmanageable clutter of flags,
variables and switch-statements. Therefore, the statement "Object-Oriented Programming is
Garbage" is an unnecessary generalization. Object-oriented programming is a tool programmers
can use - and just like you would not use pliers to get a nail into a wall, you should not
force yourself to use object-oriented programming to solve every problem at hand. Instead,
you use it when it is appropriate and necessary. Nevertheless, i would like to hear how you
would realize such a complex program. Maybe I'm wrong and procedural programming is the best
solution in any case - but right now, I think you need to differentiate situations which
require a procedural style from those that require an object-oriented style.
I have been brainwashed with c++ for 20 years. I have recently switched to ANSI C and my
mind is now free. Not only I feel free to create design that are more efficient and elegant,
but I feel in control of what I do.
You make a lot of very solid points. In your refactoring of the Mapper interface to a
type-switch though: what is the point of still using a declared interface here? If you are
disregarding extensibility (which would require adding to the internal type switch, rather
than conforming a possible new struct to an interface) anyway, why not just make Mapper of
type interface{} and add a (failing) default case to your switch?
I recommend to install the Gosublime extension, so your code gets formatted on save and
you can use autocompletion. But looks good enough. But I disagree with large functions. Small
ones are just easier to understand and test.
Being the lead designer of a larger app (2m lines of code as of 3 years ago). I like to
say we use C+. Because C++ breaks down in the real world. I'm happy to use encapsulation when
it fits well. But developers that use OO just for OO-ness sake get their hands slapped. So in
our app small classes like PhoneNumber and SIN make sense. Large classes like UserInterface
also work nicely (we talk to specialty hardware like forklifts and such). So, it may be all
coded in C++ but basic C developers wouldn't have too much of an issue with most of it. I
don't think OO is garbage. It's just that a lot of people use it in inappropriate ways. When all you
have is a hammer, everything looks like a nail. So if you use OO on everything then you
sometimes end up with garbage.
Loving the series. The hardest part of actually becoming an efficient programmer is
unlearning all the OOP brainwashing. It can be useful for high-level structuring so I've been
starting with C++ then reducing everything into procedural functions and tightly-packed data
structs. Just by doing that I reduced static memory use and compiled program size at least
10-15%+ (which is a lot when you only have 32kb.) And holy damn, nearly 20 years of C and I
never knew you could nest a function within a function, I had to try that right away.
I have a design for a networked audio platform that goes into large buildings (over 11
stories) and can have 250 networked nodes (it uses an E1 style robbed bit networking system)
and 65K addressable points (we implemented 1024 of them for individual control by grouping
them). This system ties to a fire panel at one end with a microphone and speakers at the
other end. You can manually select any combination of points to page to, or the fire panel
can select zones to send alarm messages to. It works in real time with 50mS built in delays
and has access to 12 audio channels. What really puts the frosting on this cake is, the CPU
is an i8051 running at 18MHz and the code is a bit over 200K bytes that took close to 800K
lines of code. In assembler. And it took less than a Year from concept to first installation.
By one designer/coder. The only OOP in this code was when an infinite loop happened or a bug
crept in - "OOPs!"
There's a way of declaring subfunctions in C++ (idk if works in C). I saw it done by my
friend. General idea is to declare a struct inside which a function can be declared. Since
you can declare structs inside functions, you can safely use it as a wrapper for your
function-inside-function declaration. This has been done in MSVC but I believe it will
compile in gcc too.
"Is pixel an object or a group of objects? Is there a container? Do I have to ask a
factory to get me a color?" I literally died there... that's literally the best description
of my programming for the last 5 years.
It's really sad that we are only taught OOP and no other paradigms in our college, when I
discovered programming I had no idea about OOP and it was really easy to build programs, but
then I came across OOP:"how to deconstruct a problem statement into nouns for objects and
verbs for methods" and it really messed up my thinking, I have been struggling for a long
time on how to organize my code on the conceptual level, only recently I realized that OOP is
the reason for this struggle, handmadehero helped a lot to bring me back to the roots of how
programming is done, remember never push OOP into areas where it is not needed, you don't have
to model your program as real world entities cause it's not going to run on the real world, it's
going to run on CPU!
I lost an entire decade to OOP, and agree with everything Casey said here. The code I
wrote in my first year as a programmer (before OOP) was better than the code I wrote in my
15th year (OOP expert). It's a shame that students are still indoctrinated into this
regressive model.
Unfortunately, when I first started programming, I encountered nothing but tutorials that
jumped right into OOP like it was the only way to program. And of course I didn't know any
better! So much friction has been removed from my process since I've broken free from that
state of mind. It's easier to judge when objects are appropriate when you don't think they're
always appropriate!
"It's not that OOP is bad or even flawed. It's that object-oriented programming isn't the
fundamental particle of computing that some people want it to be. When blindly applied to
problems below an arbitrary complexity threshold, OOP can be verbose and contrived, yet
there's often an aesthetic insistence on objects for everything all the way down. That's too
bad, because it makes it harder to identify the cases where an object-oriented style truly
results in an overall simplicity and ease of understanding." -
https://prog21.dadgum.com/156.html
The first language I was taught was Java, so I was taught OOP from the get go. Removing
the OOP mindset was actually really easy, but what was left stuck in my head is the practice
of having small functions and make your code look artificially "clean". So I am in a constant
struggle of refactoring and not refactoring, knowing that over-refactoring will unnecessarily
complicate my codebase if it gets big. Even after removing my OOP mindset, my emphasis is
still on the code itself, and that is much harder to cure in comparison.
"I want to emphasize that the problem with object-oriented programming is not the concept
that there could be an object. The problem with it is the fact that you're orienting your
program, the thinking, around the object, not the function. So it's the orientation that's
bad about it, NOT whether you end up with an object. And it's a really important distinction
to understand."
Nicely stated, HH. On youtube, MPJ, Brian Will, and Jonathan Blow also address this
matter. OOP sucks and can be largely avoided. Even "reuse" is overdone. Straightline probably
results in faster execution but slightly greater memory use. But memory is cheap and the
resultant code is much easier to follow. Learn a little assembly language. X86 is fascinating
and you'll know what the computer is actually doing.
I think schools should teach at least 3 languages / paradigms, C for Procedural, Java for
OOP, and Scheme (or any Lisp-style languages) for Functional paradigms.
It sounds to me like you're describing JavaScript framework programming that people learn
to start from. It hasn't seemed to me like object-oriented programmers who aren't doing web
stuff have any problem directly describing an algorithm and then translating it into
imperative or functional or just direct instructions for a computer. it's quite possible to
use object-oriented languages or languages that support object-oriented stuff to directly
command a computer.
I dunno man. Object oriented programming can (sometimes badly) solve real problems -
notably polymorphism. For example, if you have a Dog and a Cat sprite and they both have a
move method. The "non-OO" way Casey does this is using tagged unions - and that was not an
obvious solution when I first saw it. Quite glad I watched that episode though, it's very
interesting! Also see this tweet thread from Casey -
https://twitter.com/cmuratori/status/1187262806313160704
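For readers unfamiliar with the tagged-union alternative mentioned in the comment above, here is a minimal hedged sketch in C; the sprite kinds and movement rules are invented for the example.

    #include <stdio.h>

    typedef enum { SPRITE_DOG, SPRITE_CAT } SpriteKind;

    typedef struct {
        SpriteKind kind;      /* the tag */
        float x, y;
    } Sprite;

    /* one "move" function dispatching on the tag instead of a virtual method */
    void sprite_move(Sprite *s, float dt) {
        switch (s->kind) {
        case SPRITE_DOG: s->x += 2.0f * dt; break;   /* dogs run along x */
        case SPRITE_CAT: s->y += 1.0f * dt; break;   /* cats climb along y */
        }
    }

    int main(void) {
        Sprite pets[2] = { { SPRITE_DOG, 0, 0 }, { SPRITE_CAT, 0, 0 } };
        for (int i = 0; i < 2; i++)
            sprite_move(&pets[i], 1.0f);
        printf("%.1f %.1f\n", pets[0].x, pets[1].y);   /* prints 2.0 1.0 */
        return 0;
    }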
My deepest feeling after crossing so many discussions and books about this is a sincere
YES.
Without entering in any technical details about it, because even after some years I
don't find myself qualified to talk about this (is there someone who really understands
it completely?), I would argue that the main problem is that every time I read something
about OOP it is trying to justify why it is “so good”.
Then, a huge amount of examples are shown, many arguments, and many expectations are
created.
It is not stated simply like this: “oh, this is another programming paradigm.”
It is usually stated that: “This is a fantastic paradigm, it is better, it is simpler,
it permits so many interesting things, ... it is this, it is that... and so on.”
What happens is that, based on the “good” arguments, it creates some
expectation that things produced with OOP should be very good. But, no one really knows if
they are doing it right. They say: the problem is not the paradigm, it is you that are not
experienced yet. When will I be experienced enough?
Are you following me? My feeling is that the common place of saying it is so good at the
same time you never know how good you are actually being makes all of us very frustrated and
confused.
Yes, it is a great paradigm, as long as you see it just as another paradigm and drop all the
expectations and excessive claims that it is so good.
It seems to me, that the great problem is that huge propaganda around it, not the paradigm
itself. Again, if it had more humble claims about its advantages and how difficult it is to
achieve them, people would be much less frustrated.
Sourav Datta, answered August 6, 2015:
In recent years, OOP is indeed being regarded as an overrated paradigm by many. If we look
at the most recent famous languages like Go and Rust, they do not have the traditional OO
approaches in language design. Instead, they choose to pack data into something akin to
structs in C and provide ways to specify "protocols" (similar to interfaces/abstract methods) which can work on those packed
data...
The last decade has seen object oriented programming (OOP) dominate the programming world.
While there is no doubt that there are benefits of OOP, some programmers question whether OOP
has been overrated and ponder whether alternate styles of coding are worth pursuing. To even
suggest that OOP has in some way failed to produce the quality software we all desire could in
some instances cost a programmer his job, so why even ask the question ?
Quality software is the goal.
Likely all programmers can agree that we all want to produce quality software. We would like
to be able to produce software faster, make it more reliable and improve its performance. So
with such goals in mind, shouldn't we be willing to at least consider all possibilities ? Also
it is reasonable to conclude that no single tool can match all situations. For example, while
few programmers today would even consider using assembler, there are times when low level
coding such as assembler could be warranted. The old adage applies "the right tool for the
job". So it is fair to pose the question, "Has OOP been over used to the point of trying to
make it some kind of universal tool, even when it may not fit a job very well ?"
Others are asking the same question.
I won't go into detail about what others have said about object oriented programming, but I
will simply post some links to some interesting comments by others about OOP.
I have watched a number of videos online and read a number of articles by programmers about
different concepts in programming. When OOP is discussed they talk about things like modeling
the real world, abstractions, etc. But two things are often missing in such discussions, which I
will discuss here. These two aspects greatly affect programming, but may not be discussed.
First is, what is programming really ? Programming is a method of using some kind of human
readable language to generate machine code (or scripts eventually read by machine code) so one
can make a computer do a task. Looking back at all the years I have been programming, the most
profound thing I have ever learned about programming was machine language. Seeing what a CPU is
actually doing with our programs provides a great deal of insight. It helps one understand why
integer arithmetic is so much faster than floating point. It helps one understand what graphics
is really all about (simply the moving around a lot of pixels or blocks of four bytes). It
helps one understand what a procedure really must do to have parameters passed. It helps one
understand why a string is simply a block of bytes (or double bytes for unicode). It helps one
understand why we use bytes so much and what bit flags are and what pointers are.
When one looks at OOP from the perspective of machine code and all the work a compiler must
do to convert things like classes and objects into something the machine can work with, then
one very quickly begins to see that OOP adds significant overhead to an application. Also if a
programmer comes from a background of working with assembler, where keeping things simple is
critical to writing maintainable code, one may wonder if OOP is improving coding or making it
more complicated.
Second, is the often said rule of "keep it simple". This applies to programming. Consider
classic Visual Basic. One of the reasons it was so popular was that it was so simple compared
to other languages, say C for example. I know what is involved in writing a pure old fashioned
WIN32 application using the Windows API and it is not simple, nor is it intuitive. Visual Basic
took much of that complexity and made it simple. Now Visual Basic was sort of OOP based, but
actually mostly in the GUI command set. One could actually write all the rest of the code using
purely procedural style code and likely many did just that. I would venture to say that when
Visual Basic went the way of dot.net, it left behind many programmers who simply wanted to keep
it simple. Not that they were poor programmers who didn't want to learn something new, but that
they knew the value of simple and taking that away took away a core aspect of their programming
mindset.
Another aspect of simple is also seen in the syntax of some programming languages. For
example, BASIC has stood the test of time and continues to be the language of choice for many
hobby programmers. If you don't think that BASIC is still alive and well, take a look at this
extensive list of different BASIC programming languages.
While some of these BASICs are object oriented, many of them are also procedural in nature.
But the key here is simplicity. Natural readable code.
Simple and low level can work together.
Now consider this. What happens when you combine a simple language with the power of machine
language ? You get something very powerful. For example, I write some very complex code using
purely procedural style coding, using BASIC, but you may be surprised that my appreciation for
machine language (or assembler) also comes to the fore. For example, I use the BASIC language
GOTO and GOSUB. How some would cringe to hear this. But these constructs are native to machine
language and very useful, so when used properly they are powerful even in a high level
language. Another example is that I like to use pointers a lot. Oh how powerful pointers are.
In BASIC I can create variable length strings (which are simply a block of bytes) and I can
embed complex structures into those strings by using pointers. In BASIC I use the DIM AT
command, which allows me to dimension an array of any fixed data type or structure within a
block of memory, which in this case happens to be a string.
Appreciating machine code also affects my view of performance. Every CPU cycle counts. This
is one reason I use BASICs GOSUB command. It allows me to write some reusable code within a
procedure, without the need to call an external routine and pass parameters. The performance
improvement is significant. Performance also affects how I tackle a problem. While I want code
to be simple, I also want it to run as fast as possible, so amazingly some of the best
performance tips have to do with keeping code simple, with minimal overhead and also
understanding what the machine code must accomplish to do with what I have written in a higher
level language. For example in BASIC I have a number of options for the SELECT CASE structure.
One option can optimize the code using jump tables (compiler handles this), one option can
optimize if the values are only Integers or DWords. But even then the compiler can only do so
much. What happens if a large SELECT CASE has to compare dozens and dozens of string constants
to a variable length string being tested ? If this code is part of a parser, then it really can
slow things down. I had this problem in a scripting language I created for an OpenGL based 3D
custom control. The 3D scripting language is text based and has to be interpreted to generate
3D OpenGL calls internally. I didn't want the scripting language to bog things down. So what
would I do ?
The solution was simple and appreciating how the compiled machine code would have to compare
so many bytes in so many string constants, one quickly realized that the compiler alone could
not solve this. I had to think like I was an assembler programmer, but still use a high level
language. The solution was so simple, it was surprising. I could use a pointer to read the
first byte of the string being parsed. Since the first character would always be a letter in
the scripting language, this meant there were 26 possible outcomes. The SELECT CASE simply
tested for the first character value (convert to a number) which would execute fast. Then for
each letter (A, B, C, ...) I would only compare the parsed word to the scripting language keywords
which started with that letter. This in essence improved speed by 26 fold (or better).
The fastest solutions are often very simple to code. No complex classes needed here. Just a
simple procedure to read through a text string using the simplest logic I could find. The
procedure is a little more complex than what I describe, but this is the core logic of the
routine.
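A hedged sketch of the first-character dispatch described above, in C rather than BASIC; the keyword lists are invented, and only the core idea (compare a word only against the keywords starting with the same letter) is shown.

    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>

    /* one short keyword list per starting letter (most rows left empty here) */
    static const char *keywords[26][4] = {
        ['c' - 'a'] = { "color", "cube" },
        ['r' - 'a'] = { "rotate", "render" },
    };

    /* returns the position of the word within its letter group, or -1 */
    int keyword_index(const char *word) {
        if (!islower((unsigned char)word[0]))
            return -1;
        const char **group = keywords[word[0] - 'a'];
        for (int i = 0; i < 4 && group[i] != NULL; i++)
            if (strcmp(word, group[i]) == 0)
                return i;
        return -1;
    }

    int main(void) {
        printf("%d\n", keyword_index("rotate"));  /* 0: found in the 'r' group */
        printf("%d\n", keyword_index("zoom"));    /* -1: not a keyword */
        return 0;
    }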
From experience, I have found that a purely procedural style of coding, using a language
which is natural and simple (BASIC), while using constructs of the language which are closer to
pure machine (or assembler) in the language produces smaller and faster applications which are
also easier to maintain.
Now I am not saying that all OOP is bad. Nor am I saying that OOP never has a place in
programming. What I am saying though is that it is worth considering the possibility that OOP is
not always the best solution and that there are other choices.
Here are some of my other blog articles which may interest you if this one interested
you:
Classic Visual Basic's end marked a key change in software development.
Yes it is. For application code at least, I'm pretty sure.
Not claiming any originality here, people smarter than me already noticed this fact ages
ago.
Also, don't misunderstand me, I'm not saying that OOP is bad. It probably is the best
variant of procedural programming.
Maybe the term is OOP overused to describe anything that ends up in OO systems.
Things like VMs, garbage collection, type safety, modules, generics or declarative queries
(Linq) are a given , but they are not inherently object oriented.
I think these things (and others) are more relevant than the classic three principles.
Inheritance
Current advice is usually "prefer composition over inheritance". I totally agree.
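As a rough illustration of what that advice means in practice, here is a small Python sketch (class names invented): instead of subclassing a logger to add timestamps, the wrapper below is handed a logger and delegates to it, so any logger-like object can be reused.

import time

class ConsoleLogger:
    def log(self, message):
        print(message)

class TimestampedLogger:
    def __init__(self, inner):
        self.inner = inner                      # composed, not inherited

    def log(self, message):
        # Delegate to the wrapped logger, adding behavior without subclassing.
        self.inner.log(time.strftime("%H:%M:%S") + " " + message)

TimestampedLogger(ConsoleLogger()).log("started")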
Polymorphism
This is very, very important. Polymorphism cannot be ignored, but you don't write lots of
polymorphic methods in application code. You implement the occasional interface, but not every
day.
Mostly you just use them, because polymorphism is what you need to write reusable components,
and far less to merely use them.
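A minimal Python sketch of that division of labor (all names invented): the reusable component is written against an abstract interface, while application code only occasionally supplies an implementation.

from abc import ABC, abstractmethod

class Notifier(ABC):
    @abstractmethod
    def send(self, text): ...

class EmailNotifier(Notifier):              # the "occasional interface" you implement
    def send(self, text):
        print("email: " + text)

def report_failure(job_name, notifier):     # the reusable code relies on polymorphism
    notifier.send("job " + job_name + " failed")

report_failure("nightly-import", EmailNotifier())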
Encapsulation
Encapsulation is tricky. Again, if you ship reusable components, then method-level access
modifiers make a lot of sense. But if you work on application code, such fine grained
encapsulation can be overkill. You don't want to struggle over the choice between internal and
public for that fantastic method that will only ever be called once. Except in test code maybe.
Hiding all implementation details in private members while retaining nice simple tests can be
very difficult and not worth the trouble (InternalsVisibleTo being the least trouble, abstruse
mock objects bigger trouble, and Reflection-in-tests Armageddon).
Nice, simple unit tests are just more important than encapsulation for application code, so
hello public!
So, my point is, if most programmers work on applications, and application code is not very
OO, why do we always talk about inheritance at the job interview? 🙂
PS
If you think about it, C# hasn't been pure object oriented since the beginning (think
delegates) and its evolution is a trajectory from OOP to something else, something
multiparadigm.
If you want to refer to a global variable in a function, you can use the global keyword to declare which variables are
global. You don't have to use it in all cases (as someone here incorrectly claims): if the name referenced in an expression cannot
be found in the local scope or in the scopes of the enclosing functions, it is looked up among global variables.
However, if you assign to a variable not declared as global in the function, it is implicitly treated as local, and it can
shadow any existing global variable with the same name.
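A short Python example of the lookup rules described above (variable names are arbitrary):

counter = 0          # a module-level (global) variable

def read_only():
    # No assignment here, so `counter` is found by the normal lookup:
    # local scope -> enclosing scopes -> globals. No `global` needed.
    return counter + 1

def increment():
    global counter   # required because we assign to the name below;
    counter += 1     # without the declaration this raises UnboundLocalError

def shadowing():
    counter = 99     # assignment without `global`: a new *local* variable
    return counter   # the module-level `counter` is untouched

increment()
print(read_only(), counter, shadowing())   # -> 2 1 99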
Also, global variables are useful, contrary to some OOP zealots who claim otherwise - especially for smaller scripts, where OOP
is overkill.
Absolutely re. zealots. Most Python users use it for scripting and create little functions to separate out small bits of code.
– Paul Uszak Sep 22 at 22:57
The OOP paradigm has been criticised for a number of reasons, including not meeting its
stated goals of reusability and modularity, [36][37]
and for overemphasizing one aspect of software design and modeling (data/objects) at the
expense of other important aspects (computation/algorithms). [38][39]
Luca Cardelli has
claimed that OOP code is "intrinsically less efficient" than procedural code, that OOP can take
longer to compile, and that OOP languages have "extremely poor modularity properties with
respect to class extension and modification", and tend to be extremely complex. [36]
The latter point is reiterated by Joe Armstrong , the principal
inventor of Erlang , who is quoted as
saying: [37]
The problem with object-oriented languages is they've got all this implicit environment
that they carry around with them. You wanted a banana but what you got was a gorilla holding
the banana and the entire jungle.
A study by Potok et al. has shown no significant difference in productivity between OOP and
procedural approaches. [40]
Christopher J.
Date stated that critical comparison of OOP to other technologies, relational in
particular, is difficult because of lack of an agreed-upon and rigorous definition of OOP;
[41]
however, Date and Darwen have proposed a theoretical foundation that uses OOP as a kind
of customizable type system to support RDBMS.
[42]
In an article Lawrence Krubner claimed that compared to other languages (LISP dialects,
functional languages, etc.) OOP languages have no unique strengths, and inflict a heavy burden
of unneeded complexity. [43]
I find OOP technically unsound. It attempts to decompose the world in terms of interfaces
that vary on a single type. To deal with the real problems you need multisorted algebras --
families of interfaces that span multiple types. I find OOP philosophically unsound. It
claims that everything is an object. Even if it is true it is not very interesting -- saying
that everything is an object is saying nothing at all.
Paul Graham has suggested
that OOP's popularity within large companies is due to "large (and frequently changing) groups
of mediocre programmers". According to Graham, the discipline imposed by OOP prevents any one
programmer from "doing too much damage". [44]
Leo Brodie has suggested a connection between the standalone nature of objects and a
tendency to duplicate
code[45] in
violation of the don't repeat yourself principle
[46] of
software development.
Object Oriented Programming puts the Nouns first and foremost. Why would you go to such
lengths to put one part of speech on a pedestal? Why should one kind of concept take
precedence over another? It's not as if OOP has suddenly made verbs less important in the way
we actually think. It's a strangely skewed perspective.
Rich Hickey ,
creator of Clojure ,
described object systems as overly simplistic models of the real world. He emphasized the
inability of OOP to model time properly, which is getting increasingly problematic as software
systems become more concurrent. [39]
Eric S. Raymond
, a Unix programmer and
open-source
software advocate, has been critical of claims that present object-oriented programming as
the "One True Solution", and has written that object-oriented programming languages tend to
encourage thickly layered programs that destroy transparency. [48]
Raymond compares this unfavourably to the approach taken with Unix and the C programming language .
[48]
Rob Pike , a programmer
involved in the creation of UTF-8 and Go , has called object-oriented
programming "the Roman
numerals of computing" [49] and has
said that OOP languages frequently shift the focus from data structures and algorithms to types . [50]
Furthermore, he cites an instance of a Java professor whose
"idiomatic" solution to a problem was to create six new classes, rather than to simply use a
lookup table .
[51]
For efficiency's sake, Objects are passed to functions NOT by their value but by
reference.
What that means is that functions will not pass the Object, but instead pass a
reference or pointer to the Object.
If an Object is passed by reference to an Object Constructor, the constructor can put that
Object reference in a private variable which is protected by Encapsulation.
But the passed Object is NOT safe!
Why not? Because some other piece of code has a pointer to the Object, viz. the code that
called the Constructor. It MUST have a reference to the Object, otherwise it couldn't have passed
it to the Constructor.
The Reference Solution
The Constructor will have to Clone the passed in Object. And not a shallow clone but a deep
clone, i.e. every object that is contained in the passed in Object and every object in those
objects and so on and so on.
So much for efficiency.
And here's the kicker. Not all objects can be Cloned. Some have Operating System resources
associated with them making cloning useless at best or at worst impossible.
And EVERY single mainstream OO language has this problem.
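The complaint is easier to see in code. Here is a Python sketch (class names invented): the constructor stores the reference it was given, so the caller can still mutate the "encapsulated" state, and the defensive deep copy that fixes it is exactly the cost being described.

import copy

class Engine:
    def __init__(self):
        self.rpm = 0

class Car:
    def __init__(self, engine):
        self._engine = engine      # "encapsulated" -- but only a reference to a shared object

    def status(self):
        return self._engine.rpm

engine = Engine()
car = Car(engine)
engine.rpm = 9000                  # the caller still holds the reference...
print(car.status())                # -> 9000: the "private" state changed from outside

class SafeCar:
    def __init__(self, engine):
        # Defensive fix: deep-copy the dependency. This costs time and memory,
        # and is impossible for objects tied to OS resources (files, sockets, handles).
        self._engine = copy.deepcopy(engine)

    def status(self):
        return self._engine.rpm

safe = SafeCar(engine)
engine.rpm = 0                     # further changes by the caller...
print(safe.status())               # -> 9000: ...no longer leak into SafeCar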
The world is filled with conformism and groupthink. Most people do not wish to think for
themselves. Thinking for oneself is dangerous, requires effort and often leads to rejection by
the herd of one's peers.
The profession of arms, the intelligence business, the civil service bureaucracy, the
wondrous world of groups like the League of Women Voters, Rotary Club as well as the empire of
the thinktanks are all rotten with this sickness, an illness which leads inevitably to
stereotyped and unrealistic thinking, thinking that does not reflect reality.
The worst locus of this mentally crippling phenomenon is the world of the academics. I have
served on a number of boards that awarded Ph.D and post doctoral grants. I was on the Fulbright
Fellowship federal board. I was on the HF Guggenheim program and executive boards for a long
time. Those are two examples of my exposure to the individual and collective academic
minds.
As a class of people I find them unimpressive. The credentialing exercise in acquiring a
doctorate is basically a nepotistic process of sucking up to elders and a crutch for ego
support as well as an entrance ticket for various hierarchies, among them the world of the
academy. The process of degree acquisition itself requires sponsorship by esteemed academics
who recommend candidates who do not stray very far from the corpus of known work in whichever
narrow field is involved. The endorsements from RESPECTED academics are often decisive in the
award of grants.
This process is continued throughout a career in academic research. PEER REVIEW is the
sine qua non for acceptance of a "paper," invitation to career making conferences, or
to the Holy of Holies, TENURE.
This life experience forms and creates CONFORMISTS, people who instinctively boot-lick their
fellows in a search for the "Good Doggy" moments that make up their lives. These people are for
sale. Their price may not be money, but they are still for sale. They want to be accepted as
members of their group. Dissent leads to expulsion or effective rejection from the group.
This mentality renders doubtful any assertion that a large group of academics supports any
stated conclusion. As a species academics will say or do anything to be included in their
caste.
This makes them inherently dangerous. They will support any party or parties, of any
political inclination if that group has the money, and the potential or actual power to
maintain the academics as a tribe. pl
That is the nature of tribes and humans are very tribal. At least most of them.
Fortunately, there are outliers. I was recently reading "Political Tribes" which was written
by a couple who are both law professors that examines this.
Take global warming (aka the rebranded climate change). Good luck getting grants to do any
skeptical research. This highly complex subject which posits human impact is a perfect
example of tribal bias.
My success in the private sector comes from consistent questioning what I wanted to be
true to prevent suboptimal design decisions.
I also instinctively dislike groups that have some idealized view of "What is to be
done?"
As Groucho said: "I refuse to join any club that would have me as a member"
The 'isms' had it, be it Nazism, Fascism, Communism, Totalitarianism, or Elitism: all demand
conformity and adherence to group think. If one does not kowtow to whichever 'ism' is at
play, those outside their group think are persecuted, ostracized, jailed, and executed, all
because they defy their conformity demands, and defy allegiance to them.
One world, one religion, one government, one Borg. all lead down the same road to --
Orwell's 1984.
David Halberstam: The Best and the Brightest. (Reminder how the heck we got into Vietnam,
when the best and the brightest were serving as presidential advisors.)
Also good Halberstam re-read: The Powers that Be - when the conservative media controlled
the levers of power; not the uber-liberal one we experience today.
"... In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modeling follows naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best paradigm. ..."
"... In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules ..."
"... I get tired of the purists who think that OO is the only possible answer. The world is not a nail. ..."
OOP has been a golden hammer ever since Java, but we've noticed the downsides quite a while ago. Ruby on Rails was the
convention-over-configuration darling child of the last decade and stopped a large piece of the circular abstraction craze that
Java was/is. Every half-assed PHP toy project is kicking Java's ass on the web, and it's because WordPress gets the job done, fast,
despite having a DB model that was built by non-programmers on crack.
Most critical processes are procedural, even today.
There are a lot of mediocre programmers who follow the principle "if you have a hammer, everything looks like a nail". They
know OOP, so they think that every problem must be solved in an OOP way.
In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modeling follows
naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best
paradigm.
In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules.
For example, I am working on a natural language system that is supposed to generate textual answers to user inquiries. What
"object" am I supposed to create to do this task? An "Answer" object that generates itself? Yes, that would work, but an imperative,
static "generate answer" method makes at least as much sense.
There are different ways of thinking, different ways of modelling a problem. I get tired of the purists who think that OO
is the only possible answer. The world is not a nail.
Object-oriented programming generates a lot of what looks like work. Back in the days of
fanfold, there was a type of programmer who would only put five or ten lines of code on a
page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming
is like crack for these people: it lets you incorporate all this scaffolding right into your
source code. Something that a Lisp hacker might handle by pushing a symbol onto a list
becomes a whole file of classes and methods. So it is a good tool if you want to convince
yourself, or someone else, that you are doing a lot of work.
Eric Lippert observed a similar occupational hazard among developers. It's something he
calls object happiness .
What I sometimes see when I interview people and review code is symptoms of a disease I call
Object Happiness. Object Happy people feel the need to apply principles of OO design to
small, trivial, throwaway projects. They invest lots of unnecessary time making pure virtual
abstract base classes -- writing programs where IFoos talk to IBars but there is only one
implementation of each interface! I suspect that early exposure to OO design principles
divorced from any practical context that motivates those principles leads to object
happiness. People come away as OO True Believers rather than OO pragmatists.
I've seen so many problems caused by excessive, slavish adherence to OOP in production
applications. Not that object oriented programming is inherently bad, mind you, but a little
OOP goes a very long way . Adding objects to your code is like adding salt to a dish: use a
little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's
better to err on the side of simplicity, and I tend to favor the approach that results in
less code, not more .
Given my ambivalence about all things OO, I was amused when Jon Galloway forwarded me a link to Patrick Smacchia's web page . Patrick
is a French software developer. Evidently the acronym for object oriented programming is
spelled a little differently in French than it is in English: POO.
That's exactly what I've imagined when I had to work on code that abused objects.
But POO code can have another, more constructive, meaning. This blog author argues that OOP
pales in importance to POO. Programming fOr Others , that
is.
The problem is that programmers are taught all about how to write OO code, and how doing so
will improve the maintainability of their code. And by "taught", I don't just mean "taken a
class or two". I mean: have it pounded into your head in school, spend years as a professional being
mentored by senior OO "architects", and only then finally kind of understand how to use it
properly, some of the time. Most engineers wouldn't consider using a non-OO language, even if
it had amazing features. The hype is that major.
So what, then, about all that code programmers write before their 10 years OO
apprenticeship is complete? Is it just doomed to suck? Of course not, as long as they apply
other techniques than OO. These techniques are out there but aren't as widely discussed.
The improvement [I propose] has little to do with any specific programming technique. It's
more a matter of empathy; in this case, empathy for the programmer who might have to use your
code. The author of this code actually thought through what kinds of mistakes another
programmer might make, and strove to make the computer tell the programmer what they did
wrong.
In my experience the best code, like the best user interfaces, seems to magically
anticipate what you want or need to do next. Yet it's discussed infrequently relative to OO.
Maybe what's missing is a buzzword. So let's make one up, Programming fOr Others, or POO for
short.
The principles of object oriented programming are far more important than mindlessly,
robotically instantiating objects everywhere:
Stop worrying so much about the objects. Concentrate on satisfying the principles of
object orientation rather than object-izing everything. And most of all, consider the poor
sap who will have to read and support this code after you're done with it . That's why POO
trumps OOP: programming as if people mattered will always be a more effective strategy than
satisfying the architecture astronauts .
Daniel Korenblum, works at Bayes Impact (updated May 25, 2015):
There are many reasons why non-OOP languages and paradigms/practices are on the rise,
contributing to the relative decline of OOP.
First off, there are a few things about OOP that many people don't like, which makes them
interested in learning and using other approaches. Below are some references from the OOP wiki
article:
One of the comments therein linked a few other good wikipedia articles which also provide
relevant discussion on increasingly-popular alternatives to OOP:
Modularity and design-by-contract are better implemented by module systems ( Standard ML
)
Personally, I sometimes think that OOP is a bit like an antique car. Sure, it has a bigger
engine and fins and lots of chrome etc., it's fun to drive around, and it does look pretty. It
is good for some applications, all kidding aside. The real question is not whether it's useful
or not, but for how many projects?
When I'm done building an OOP application, it's like a large and elaborate structure.
Changing the way objects are connected and organized can be hard, and the design choices of the
past tend to become "frozen" or locked in place for all future times. Is this the best choice
for every application? Probably not.
If you want to drive 500-5000 miles a week in a car that you can fix yourself without
special ordering any parts, it's probably better to go with a Honda or something more easily
adaptable than an antique vehicle-with-fins.
Finally, the best example is the growth of JavaScript as a language (officially called
EcmaScript now?). Although JavaScript/EcmaScript (JS/ES) is not a pure functional programming
language, it is much more "functional" than "OOP" in its design. JS/ES was the first mainstream
language to promote the use of functional programming concepts such as higher-order functions,
currying, and monads.
The recent growth of the JS/ES open-source community has not only been impressive in its
extent but also unexpected from the standpoint of many established programmers. This is partly
evidenced by the overwhelming number of active repositories on GitHub using
JavaScript/EcmaScript.
Because JS/ES treats both functions and objects as structs/hashes, it encourages us to blur
the line dividing them in our minds. This is a division that many other languages impose -
"there are functions and there are objects/variables, and they are different".
This seemingly minor (and often confusing) design choice enables a lot of flexibility and
power. In part this seemingly tiny detail has enabled JS/ES to achieve its meteoric growth
between 2005-2015.
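For readers unfamiliar with those terms, here is a tiny sketch of higher-order functions and (partial-application-style) currying, written in Python for brevity rather than JavaScript:

from functools import partial

def apply_twice(f, x):              # higher-order: takes a function as an argument
    return f(f(x))

def add(a, b):
    return a + b

increment = partial(add, 1)         # partial application, the everyday stand-in for
                                    # currying: fix the first argument of add
print(apply_twice(increment, 40))   # -> 42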
This partially explains the rise of JS/ES and the corresponding relative decline of OOP. OOP
had become a "standard" or "fixed" way of doing things for a while, and there will probably
always be a time and place for OOP. But as programmers we should avoid getting too stuck in one
way of thinking / doing things, because different applications may require different
approaches.
Above and beyond the OOP-vs-non-OOP debate, one of our main goals as engineers should be
custom-tailoring our designs by skillfully choosing the most appropriate programming
paradigm(s) for each distinct type of application, in order to maximize the "bang for the buck"
that our software provides.
Although this is something most engineers can agree on, we still have a long way to go until
we reach some sort of consensus about how best to teach and hone these skills. This is not only
a challenge for us as programmers today, but also a huge opportunity for the next generation of
educators to create better guidelines and best practices than the current OOP-centric
pedagogical system.
Here are a couple of good books that elaborate on these ideas and techniques in more
detail. They are free to read online:
Mike MacHenry, software engineer, improv comedian, maker (answered Feb 14, 2015):
Because the phrase itself was overhyped to an extraordinary degree. Then, as is common with
overhyped things, many other things took on that phrase as a name. Then people got confused and
stopped calling what they do OOP.
Yes I think OOP ( the phrase ) is on the decline because people are becoming more educated
about the topic.
It's like artificial intelligence, now that I think about it. There aren't many people
these days that say they do AI to anyone but laymen. They would say they do machine
learning or natural language processing or something else. These are fields that the vastly
overhyped and really nebulous term AI used to describe, but then AI (the term) experienced a
sharp decline while these very concrete fields continued to flourish.
There is nothing inherently wrong with some of the functionality it offers; it's the way
OOP is abused as a substitute for basic good programming practices.
I was helping interns - students from a local CC - deal with idiotic assignments like
making a random number generator USING CLASSES, or displaying text to a screen USING CLASSES.
Seriously, WTF?
A room full of career programmers could not even figure out how you were supposed to do
that, much less why.
What was worse was a lack of understanding of basic programming skill, or even the use of
variables, as the kids were being taught that EVERY program was to be assembled solely by
sticking together bits of libraries.
There was no coding, just hunting for snippets of preexisting code to glue together. Zero
idea they could add their own, much less how to do it. OOP isn't the problem; it's the idea
that it replaces basic programming skills and best practice.
That and the obsession with absofrackinglutely EVERYTHING just having to be a formally
declared object, including the whole program being an object with a run() method.
Some things actually cry out to be objects, some not so much. Generally, I find that my
most readable and maintainable code turns out to be a procedural program that manipulates
objects.
Even there, some things just naturally want to be a struct or just an array of values.
The same is true of most ingenious ideas in programming. It's one thing if code is
demonstrating a particular idea, but production code is supposed to be there to do work, not
grind an academic ax.
For example, slavish adherence to "patterns". They're quite useful for thinking about code
and talking about code, but they shouldn't be the end of the discussion. They work better as
a starting point. Some programs seem to want patterns to be mixed and matched.
In reality those problems are just cargo cult programming one level higher.
I suspect a lot of that is because too many developers barely grasp programming and never
learned to go beyond the patterns they were explicitly taught.
When all you have is a hammer, the whole world looks like a nail.
Inheritance, while not "inherently" bad, is often the wrong solution. See: Why
extends is evil [javaworld.com]
Composition is frequently a more appropriate choice. Aaron Hillegass wrote this funny
little anecdote in Cocoa
Programming for Mac OS X [google.com]:
"Once upon a time, there was a company called Taligent. Taligent was created by IBM and
Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached the
peak of its mindshare, I met one of its engineers at a trade show.
I asked him to create a simple application for me: A window would appear with a button,
and when the button was clicked, the words 'Hello, World!' would appear in a text field. The
engineer created a project and started subclassing madly: subclassing the window and the
button and the event handler.
Then he started generating code: dozens of lines to get the button and the text field onto
the window. After 45 minutes, I had to leave. The app still did not work. That day, I knew
that the company was doomed. A couple of years later, Taligent quietly closed its doors
forever."
Almost every programming methodology can be abused by people who really don't know how to
program well, or who don't want to. They'll happily create frameworks, implement new
development processes, and chart tons of metrics, all while avoiding the work of getting the
job done. In some cases the person who writes the most code is the same one who gets the
least amount of useful work done.
So, OOP can be misused the same way. Never mind that OOP essentially began very early and
has been reimplemented over and over, even before Alan Kay. For example, files in Unix are essentially
an object-oriented system. It's just data encapsulation and separating work into manageable
modules. That's how it was before anyone ever came up with the dumb name "full-stack
developer".
Posted by EditorDavid on Slashdot, Monday July 22, 2019, from the OOPs dept. (medium.com)
Senior full-stack engineer Ilya Suzdalnitski recently published a lively 6,000-word essay calling
object-oriented programming "a trillion dollar disaster."
Precious time and brainpower are being spent thinking about "abstractions" and "design patterns"
instead of solving real-world problems... Object-Oriented Programming (OOP) has been created with
one goal in mind -- to manage the complexity of procedural codebases. In other words, it was
supposed to improve code organization. There's no objective and open evidence that OOP is better
than plain procedural programming... Instead of reducing complexity, it encourages promiscuous
sharing of mutable state and introduces additional complexity with its numerous design patterns.
OOP makes common development practices, like refactoring and testing, needlessly hard...
As a developer who started in the days of FORTRAN (when it was all-caps), I've watched the
rise of OOP with some curiosity. I think there's a general consensus that abstraction and
re-usability are good things - they're the reason subroutines exist - the issue is whether
they are ends in themselves.
I struggle with the whole concept of "design patterns". There are clearly common themes in
software, but there seems to be a great deal of pressure these days to make your
implementation fit some pre-defined template rather than thinking about the application's
specific needs for state and concurrency. I have seen some rather eccentric consequences of
"patternism".
Correctly written, OOP code allows you to encapsulate just the logic you need for a
specific task and to make that specific task available in a wide variety of contexts by
judicious use of templating and virtual functions that obviate the need for
"refactoring".
Badly written, OOP code can have as many dangerous side effects and as much opacity as any
other kind of code. However, I think the key factor is not the choice of programming
paradigm, but the design process.
You need to think first about what your code is intended to do and in what circumstances
it might be reused. In the context of a larger project, it means identifying commonalities
and deciding how best to implement them once. You need to document that design and review it
with other interested parties. You need to document the code with clear information about its
valid and invalid use. If you've done that, testing should not be a problem.
Some people seem to believe that OOP removes the need for some of that design and
documentation. It doesn't and indeed code that you intend to be reused needs *more* design
and documentation than the glue that binds it together in any one specific use case. I'm
still a firm believer that coding begins with a pencil, not with a keyboard. That's
particularly true if you intend to design abstract interfaces that will serve many purposes.
In other words, it's more work to do OOP properly, so only do it if the benefits outweigh the
costs - and that usually means you not only know your code will be genuinely reusable but
will also genuinely be reused.
I struggle with the whole concept of "design patterns".
Because design patterns are stupid.
A reasonable programmer can understand reasonable code so long as the data is documented
even when the code isn't documented, but will struggle immensely if it were the other way
around.
Bad programmers create objects for objects' sake, and because of that they have to follow
so-called "design patterns", because no amount of code commenting makes the code easily
understandable when it's a spaghetti web of interacting "objects". The "design patterns" don't
make the code easier to read, just easier to write.
Those OOP fanatics, if they do "document" their code, add comments like "// increment the
index" which is useless shit.
The big win of OOP is only in the encapsulation of the data with the code, and great code
treats objects like data structures with attached subroutines, not as "objects", and documents
the fuck out of the contained data, while more or less letting the code document itself.
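A small Python sketch of that style (names invented): a plain, well-documented data structure plus ordinary functions that operate on it, rather than a deep object hierarchy.

from dataclasses import dataclass

@dataclass
class Account:
    """A bank account.

    owner:         display name of the account holder.
    balance_cents: current balance in integer cents (never a float).
    """
    owner: str
    balance_cents: int = 0

def deposit(account, amount_cents):
    """Add amount_cents (a non-negative int) to the account balance."""
    account.balance_cents += amount_cents

acct = Account(owner="Alice")
deposit(acct, 1500)
print(acct)   # -> Account(owner='Alice', balance_cents=1500)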
680,303 lines of Java code in the main project in my system.
Probably would've been more like 100,000 lines if you had used a language whose ecosystem
doesn't goad people into writing so many superfluous layers of indirection, abstraction and
boilerplate.
Of the omitted language features, the designers explicitly argue against assertions and
pointer arithmetic, while defending the choice to omit type inheritance as giving a more useful
language, encouraging instead the use of interfaces to
achieve dynamic dispatch [h] and
composition to reuse code.
Composition and delegation are in fact largely
automated by struct embedding; according to researchers Schmager et al. , this feature
"has many of the drawbacks of inheritance: it affects the public interface of objects, it is
not fine-grained (i.e., no method-level control over embedding), methods of embedded objects
cannot be hidden, and it is static", making it "not obvious" whether programmers will overuse
it to the extent that programmers in other languages are reputed to overuse inheritance.
[61]
The designers express an openness to generic programming and note that built-in functions
are in fact type-generic, but these are treated as special cases; Pike calls this a
weakness that may at some point be changed. [53]
The Google team built at least one compiler for an experimental Go dialect with generics, but
did not release it. [96] They are
also open to standardizing ways to apply code generation. [97]
Initially omitted, the exception -like panic / recover
mechanism was eventually added, which the Go authors advise using for unrecoverable errors such
as those that should halt an entire program or server request, or as a shortcut to propagate
errors up the stack within a package (but not across package boundaries; there, error returns
are the standard API). [98]
"... Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to offer me? ..."
"... So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn, but it gets so overwhelming trying to find information because everything I find just assumes you're a software developer with all this prerequisite knowledge. Additionally, how the hell do you find the time to learn all of this? It seems like new DevOps software or platforms or whatever you call them spin up every single month. I'm already in the middle of trying to learn JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in addition to networking concepts in general), and AV design stuff (like Crestron programming). ..."
What the hell is DevOps? Every couple months I find myself trying to look into it as all I
ever hear and see about is DevOps being the way forward. But each time I research it I can only
find things talking about streamlining software updates and quality assurance and yada yada
yada. It seems like DevOps only applies to companies that make software as a product. How does
that affect me as a sysadmin for higher education? My "company's" product isn't software.
Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to
offer me? Again, when I try to research them a majority of what I find just links back to
software development.
To give a rough idea of what I deal with, below is a list of my three main
responsibilities.
macOS/iOS Systems Administration (I'm the only sysadmin that does this for around 150+
machines)
Network Administration (I just started with this a couple months ago and I'm slowly
learning about our infrastructure and network administration in general from our IT
director. We have several buildings spread across our entire campus with a mixture of
Juniper, Dell, and Brocade equipment.)
AV Systems Design and Programming (I'm the only person who does anything related to
video conferencing, meeting room equipment, presentation systems, digital signage, etc. for
7 buildings.)
So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn,
but it gets so overwhelming trying to find information because everything I find just assumes
you're a software developer with all this prerequisite knowledge. Additionally, how the hell do
you find the time to learn all of this? It seems like new DevOps software or platforms or
whatever you call them spin up every single month. I'm already in the middle of trying to learn
JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in
addition to networking concepts in general), and AV design stuff (like Crestron programming).
I've been working at the same job for 5 years and I feel like I'm being left in the dust by the
entire rest of the industry. I'm being pulled in so many different directions that I feel like
it's impossible for me to ever get another job. At the same time, I can't specialize in
anything because I have so many different unrelated areas I'm supposed to be doing work in.
And this is what I go through/ask myself every few months I try to research and learn
DevOps. This is mainly a rant, but I am more than open to any and all advice anyone is willing
to offer. Thanks in advance.
There are a lot of tools that can be used to make your life much easier and that are used on a
daily basis for DevOps, but apparently that's not the case for you. When you manage infra as
code, you're using DevOps.
There's a lot of space for operations guys like you (and me), so look to DevOps as an
alternative source of knowledge, just to stay tuned to the trends of the industry and improve
your skills.
For higher education, this is useful for managing large projects and looking for
improvement during the development of the product/service itself. But again, that's not the
case for you. If you intend to switch to another position, you may try to search for a
certification program that suits your needs.
"... In the programming world, the term silver bullet refers to a technology or methodology that is touted as the ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more impressive, it will do all of this without any effort on your part! ..."
"... Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and architecture. ..."
"... OO will insure the success of your project: An object-oriented approach to software development does not guarantee the automatic success of a project. A developer cannot ignore the importance of sound design and architecture. Only careful analysis and a complete understanding of the problem will make the project succeed. A successful project will utilize sound techniques, competent programmers, sound processes and solid project management. ..."
"... OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger and slower than programs written using other techniques. ..."
"... OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in protecting you from making a mistake. ..."
"Hooked on Objects" is dedicated to providing readers with insight into object-oriented technologies. In our first
few articles, we introduced the three tenants of object-oriented programming: encapsulation, inheritance and
polymorphism. We then covered software process and design patterns. We even got our hands dirty and dissected the
Java class.
Each of our previous articles had a common thread. We have written about the strengths and benefits of
the object paradigm and highlighted the advantages the object approach brings to the development effort. However, we
do not want to give anyone a false sense that object-oriented techniques are always the perfect answer.
Object-oriented techniques are not the magic "silver bullets" of programming.
In the programming world, the term silver bullet refers to a technology or methodology that is touted as the
ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically
make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more
impressive, it will do all of this without any effort on your part!
Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never
will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and
architecture.
If anything, using OO makes design and architecture more important because without a clear, well-planned design,
OO will fail almost every time. Spaghetti code (that which is written without a coherent structure) spells trouble
for procedural programming, and weak architecture and design can mean the death of an OO project. A poorly planned
system will fail to achieve the promises of OO: increased productivity, reusability, scalability and easier
maintenance.
Some critics claim OO has not lived up to its advance billing, while others claim its techniques are flawed. OO
isn't flawed, but some of the hype has given OO developers and managers a false sense of security.
Successful OO requires careful analysis and design. Our previous articles have stressed the positive attributes of
OO. This time we'll explore some of the common fallacies of this promising technology and some of the potential
pitfalls.
Fallacies of OO
It is important to have realistic expectations before choosing to use object-oriented technologies. Do not allow
these common fallacies to mislead you.
OO will insure the success of your project: An object-oriented approach to software development does not guarantee
the automatic success of a project. A developer cannot ignore the importance of sound design and architecture. Only
careful analysis and a complete understanding of the problem will make the project succeed. A successful project will
utilize sound techniques, competent programmers, sound processes and solid project management.
OO makes you a better programmer: OO does not make a programmer better. Only experience can do that. A coder might
know all of the OO lingo and syntactical tricks, but if he or she doesn't know when and where to employ these
features, the resulting code will be error-prone and difficult for others to maintain and reuse.
OO-derived software is superior to other forms of software: OO techniques do not make good software; features make
good software. You can use every OO trick in the book, but if the application lacks the features and functionality
users need, no one will use it.
OO techniques mean you don't need to worry about business plans: Before jumping onto the object bandwagon, be
certain to conduct a full business analysis. Don't go in without careful consideration or on the faith of marketing
hype. It is critical to understand the costs as well as the benefits of object-oriented development. If you plan for
only one or two internal development projects, you will see few of the benefits of reuse. You might be able to use
preexisting object-oriented technologies, but rolling your own will not be cost effective.
OO will cure your corporate ills: OO will not solve morale and other corporate problems. If your company suffers
from motivational or morale problems, fix those with other solutions. An OO Band-Aid will only worsen an already
unfortunate situation.
OO Pitfalls
Life is full of compromise and nothing comes without cost. OO is no exception. Before choosing to employ object
technologies it is imperative to understand this. When used properly, OO has many benefits; when used improperly,
however, the results can be disastrous.
OO technologies take time to learn: Don't expect to become an OO expert overnight. Good OO takes time and effort
to learn. Like all technologies, change is the only constant. If you do not continue to enhance and strengthen your
skills, you will fall behind.
OO benefits might not pay off in the short term: Because of the long learning curve and initial extra development
costs, the benefits of increased productivity and reuse might take time to materialize. Don't forget this or you
might be disappointed in your initial OO results.
OO technologies might not fit your corporate culture: The successful application of OO requires that your
development team feels involved. If developers are frequently shifted, they will struggle to deliver reusable
objects. There's less incentive to deliver truly robust, reusable code if you are not required to live with your work
or if you'll never reap the benefits of it.
OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger
and slower than programs written using other techniques. This isn't as much of a problem today. Memory prices are
dropping every day. CPUs continue to provide better performance and compilers and virtual machines continue to
improve. The small efficiency that you trade for increased productivity and reuse should be well worth it. However,
if you're developing an application that tracks millions of data points in real time, OO might not be the answer for
you.
OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every
situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to
design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO
technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in
protecting you from making a mistake.
What do you need to do to avoid these pitfalls and fallacies? The answer is to keep expectations realistic. Beware
of the hype. Use an OO approach only when appropriate.
Programmers should not feel compelled to use every OO trick that the implementation language offers. It is wise to
use only the ones that make sense. When used without forethought, object-oriented techniques could cause more harm
than good.
Of course, there is one other thing that you should always do to improve your OO: Don't miss a single installment of
"Hooked on Objects."
David Hoag is vice president-development and chief object guru for ObjectWave, a Chicago-based
object-oriented software engineering firm. Anthony Sintes is a Sun Certified Java Developer and team member
specializing in telecommunications consulting for ObjectWave. Contact them at [email protected] or visit their Web
site at www.objectwave.com.
This isn't a general discussion of OO pitfalls and conceptual weaknesses, but a discussion of how conventional 'textbook' OO
design approaches can lead to inefficient use of cache & RAM, especially on consoles or other hardware-constrained environments.
But it's still good.
"... Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping). ..."
Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by
confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy
of concepts (aka subtyping).
The only widely used OO language (for sufficiently narrow values of wide and wide values
of OO) to get that right used to be Objective Caml, and recently its stepchildren F# and Scala. So it is actually FP
that helps you with the classification.
This is a very interesting point and should be highlighted. You said implementation artifacts (especially in reference
to reducing code duplication), and for clarity, I think you are referring to the definition of operators on data (class
methods, friend methods, and so on).
I agree with you that subclassing (for the purpose of reusing behavior), traits
(for adding behavior), and the like can be confused with classification to such an extent that modern designs tend to
depart from type systems and be used for mere code organization.
"was there really a point to the illusion of wrapping the entrypoint main() function in a class (I am looking at you,
Java)?"
Far be it from me to defend Java (I hate the damn thing), but: main is just a function in a class. The class is the
entry point, as specified on the command line; main is just what the JVM looks for, by convention. You could have a "main"
in each class, but only the one in the specified class will be the entry point.
The way of the theorist is to tell any non-theorist that the non-theorist is wrong, then leave without any explanation.
Or, simply hand-wave the explanation away, claiming it as "too complex" too fully understand without years of rigorous
training. Of course I jest. :)
While the OO critique is good (although most points are far from new) and to the point, the proposed solution is not.
There is no universal opener for creating elegant, reliable programs.
Notable quotes:
"... Object-Oriented Programming has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization. There's no objective and open evidence that OOP is better than plain procedural programming. ..."
"... The bitter truth is that OOP fails at the only task it was intended to address. It looks good on paper -- we have clean hierarchies of animals, dogs, humans, etc. However, it falls flat once the complexity of the application starts increasing. Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns . OOP makes common development practices, like refactoring and testing, needlessly hard. ..."
"... C++ is a horrible [object-oriented] language And limiting your project to C means that people don't screw things up with any idiotic "object model" c&@p. -- Linus Torvalds, the creator of Linux ..."
"... Many dislike speed limits on the roads, but they're essential to help prevent people from crashing to death. Similarly, a good programming framework should provide mechanisms that prevent us from doing stupid things. ..."
"... Unfortunately, OOP provides developers too many tools and choices, without imposing the right kinds of limitations. Even though OOP promises to address modularity and improve reusability, it fails to deliver on its promises (more on this later). OOP code encourages the use of shared mutable state, which has been proven to be unsafe time and time again. OOP typically requires a lot of boilerplate code (low signal-to-noise ratio). ..."
The ultimate goal of every software developer should be to write reliable code. Nothing else matters if the code is buggy
and unreliable. And what is the best way to write code that is reliable? Simplicity . Simplicity is the opposite of complexity
. Therefore our first and foremost responsibility as software developers should be to reduce code complexity.
Disclaimer
I'll be honest, I'm not a raving fan of object-orientation. Of course, this article is going to be biased. However, I have good
reasons to dislike OOP.
I also understand that criticism of OOP is a very sensitive topic -- I will probably offend many readers. However, I'm doing what
I think is right. My goal is not to offend, but to raise awareness of the issues that OOP introduces.
I'm not criticizing Alan Kay's OOP -- he is a genius. I wish OOP was implemented the way he designed it. I'm criticizing the modern
Java/C# approach to OOP.
I will also admit that I'm angry. Very angry. I think that it is plain wrong that OOP is considered the de-facto standard for
code organization by many people, including those in very senior technical positions. It is also wrong that many mainstream languages
don't offer any other alternatives to code organization other than OOP.
Hell, I used to struggle a lot myself while working on OOP projects. And I had no single clue why I was struggling this much.
Maybe I wasn't good enough? I had to learn a couple more design patterns (I thought)! Eventually, I got completely burned out.
This post sums up my first-hand decade-long journey from Object-Oriented to Functional programming. I've seen it all. Unfortunately,
no matter how hard I try, I can no longer find use cases for OOP. I have personally seen OOP projects fail because they become too
complex to maintain.
TLDR
Object oriented programs are offered as alternatives to correct ones --
Edsger
W. Dijkstra , pioneer of computer science
<img src="https://miro.medium.com/max/1400/1*MTb-Xx5D0H6LUJu_cQ9fMQ.jpeg" width="700" height="467"/>
Photo by
Sebastian Herrmann on
Unsplash
Object-Oriented Programming has been created with one goal in mind -- to manage the complexity of procedural codebases. In
other words, it was supposed to improve code organization.
There's no objective and open evidence that OOP is better than plain procedural programming.
The bitter truth is that OOP fails at the only task it was intended to address. It looks good on paper -- we have clean hierarchies
of animals, dogs, humans, etc. However, it falls flat once the complexity of the application starts increasing. Instead of reducing
complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns
. OOP makes common development practices, like refactoring and testing, needlessly hard.
Some might disagree with me, but the truth is that modern OOP has never been properly designed. It never came out of a proper
research institution (in contrast with Haskell/FP). I do not consider Xerox or another enterprise to be a "proper research institution".
OOP doesn't have decades of rigorous scientific research to back it up. Lambda calculus offers a complete theoretical foundation
for Functional Programming. OOP has nothing to match that. OOP mainly "just happened".
Using OOP is seemingly innocent in the short-term, especially on greenfield projects. But what are the long-term consequences
of using OOP? OOP is a time bomb, set to explode sometime in the future when the codebase gets big enough.
Projects get delayed, deadlines get missed, developers get burned-out, adding in new features becomes next to impossible.
The organization labels the codebase as the "legacy codebase" , and the development team plans a rewrite .
OOP is not natural for the human brain, our thought process is centered around "doing" things -- go for a walk, talk to a friend,
eat pizza. Our brains have evolved to do things, not to organize the world into complex hierarchies of abstract objects.
OOP code is non-deterministic -- unlike with functional programming, we're not guaranteed to get the same output given the same
inputs. This makes reasoning about the program very hard. As an oversimplified example, the output of 2+2 or calculator.Add(2,
2) mostly is equal to four, but sometimes it might become equal to three, five, and maybe even 1004. The dependencies of the
Calculator object might change the result of the computation in subtle, but profound ways.
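The calculator example is of course contrived; the sketch below (in Python, with invented names) just shows the mechanism being complained about: a result that silently depends on mutable state hidden inside the object.

class Calculator:
    def __init__(self, rounding_mode="exact"):
        self.rounding_mode = rounding_mode      # hidden dependency

    def add(self, a, b):
        result = a + b
        if self.rounding_mode == "lossy":       # behavior driven by internal state
            result -= 1
        return result

calc = Calculator()
print(calc.add(2, 2))            # -> 4
calc.rounding_mode = "lossy"     # mutated far away, by unrelated code
print(calc.add(2, 2))            # -> 3: same inputs, different output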
The Need for a Resilient Framework
I know, this may sound weird, but as programmers, we shouldn't trust ourselves to write reliable code. Personally, I am unable
to write good code without a strong framework to base my work on. Yes, there are frameworks that concern themselves with some very
particular problems (e.g. Angular or ASP.Net).
I'm not talking about the software frameworks. I'm talking about the more abstract dictionary definition of a framework:
"an essential supporting structure " -- frameworks that concern themselves with the more abstract things like code organization
and tackling code complexity. Even though Object-Oriented and Functional Programming are both programming paradigms, they're also
both very high-level frameworks.
Limiting our choices
C++ is a horrible [object-oriented] language And limiting your project to C means that people don't screw things up with any
idiotic "object model" c&@p.
-- Linus Torvalds, the creator of Linux
Linus Torvalds is widely known for his open criticism of C++ and OOP. One thing he was 100% right about is limiting programmers
in the choices they can make. In fact, the fewer choices programmers have, the more resilient their code becomes. In the quote above,
Linus Torvalds highly recommends having a good framework to base our code upon.
<img src="https://miro.medium.com/max/1400/1*ujt2PMrbhCZuGhufoxfr5w.jpeg" width="700" height="465"/>
Photo by
specphotops on
Unsplash
Many dislike speed limits on the roads, but they're essential to help prevent people from crashing to death. Similarly, a good
programming framework should provide mechanisms that prevent us from doing stupid things.
A good programming framework helps us to write reliable code. First and foremost, it should help reduce complexity by providing
the following things:
Modularity and reusability
Proper state isolation
High signal-to-noise ratio
Unfortunately, OOP provides developers too many tools and choices, without imposing the right kinds of limitations. Even though
OOP promises to address modularity and improve reusability, it fails to deliver on its promises (more on this later). OOP code encourages
the use of shared mutable state, which has been proven to be unsafe time and time again. OOP typically requires a lot of boilerplate
code (low signal-to-noise ratio).
... ... ...
Messaging
Alan Kay coined the term "Object Oriented Programming" in the 1960s. He had a background in biology and was attempting to make
computer programs communicate the same way living cells do.
Alan Kay's big idea was to have independent programs (cells) communicate by sending messages to each other. The state of
the independent programs would never be shared with the outside world (encapsulation).
That's it. OOP was never intended to have things like inheritance, polymorphism, the "new" keyword, and the myriad of design
patterns.
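As a rough illustration of that idea -- not Smalltalk or Erlang, just a sketch in TypeScript -- an "object" can be nothing more than a receiver of immutable messages whose state is never exposed:

```typescript
// A minimal sketch of "objects as message receivers": state is private,
// and the only way in is an immutable message. Names are made up.
type CounterMessage =
  | { readonly kind: "increment"; readonly by: number }
  | { readonly kind: "read" };

function spawnCounter() {
  let count = 0; // encapsulated: nothing outside this closure can touch it
  return function send(msg: CounterMessage): number {
    switch (msg.kind) {
      case "increment":
        count += msg.by;
        return count;
      case "read":
        return count;
    }
  };
}

const counter = spawnCounter();
counter({ kind: "increment", by: 3 });
console.log(counter({ kind: "read" })); // 3
```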
OOP in its purest form
Erlang is OOP in its purest form. Unlike more mainstream languages, it focuses on the core idea of OOP -- messaging. In
Erlang, objects (processes) communicate by passing immutable messages to one another.
Is there proof that immutable messages are a superior approach compared to method calls?
Hell yes! Erlang is probably the most reliable language in the world. It powers most of the world's telecom (and
hence the internet) infrastructure. Some of the systems written in Erlang have a reliability of 99.9999999% (you read that right --
nine nines).
Code Complexity
With OOP-inflected programming languages, computer software becomes more verbose, less readable, less descriptive, and harder
to modify and maintain.
The most important aspect of software development is keeping the code complexity down. Period. None of the fancy features
matter if the codebase becomes impossible to maintain. Even 100% test coverage is worth nothing if the codebase becomes too complex
and unmaintainable.
What makes the codebase complex? There are many things to consider, but in my opinion, the top offenders are: shared mutable state,
erroneous abstractions, and low signal-to-noise ratio (often caused by boilerplate code). All of them are prevalent in OOP.
The Problems of State
What is state? Simply put, state is any temporary data stored in memory -- think variables or fields/properties in OOP. Imperative
programming (including OOP) describes computation in terms of the program state and changes to that state. Declarative (functional)
programming describes the desired result instead, and doesn't specify changes to the state explicitly.
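A small TypeScript illustration of the difference (summing prices over 9 is an arbitrary example):

```typescript
const prices = [5, 12, 8, 20];

// Imperative: describe state changes step by step.
let totalImperative = 0;
for (const p of prices) {
  if (p > 9) {
    totalImperative += p; // explicit mutation of program state
  }
}

// Declarative: describe the desired result; no explicit state changes.
const totalDeclarative = prices.filter(p => p > 9).reduce((sum, p) => sum + p, 0);

console.log(totalImperative, totalDeclarative); // 32 32
```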
... ... ...
To make the code more efficient, objects are passed not by value but by reference. This is where "dependency
injection" falls flat.
Let me explain. Whenever we create an object in OOP, we pass references to its dependencies to the constructor.
Those dependencies have internal state of their own. The newly created object stores references to those dependencies
in its internal state and is then free to modify them in any way it pleases. It also passes those references down to anything
else it might end up using.
This creates a complex graph of promiscuously shared objects that all end up changing each other's state. This, in turn, causes
huge problems since it becomes almost impossible to see what caused the program state to change. Days might be wasted trying
to debug such state changes. And you're lucky if you don't have to deal with concurrency (more on this later).
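Here is a minimal TypeScript sketch of that failure mode (Logger, BillingService and OrderService are hypothetical names, not from the essay): two objects receive the same injected reference, and one quietly mutates it.

```typescript
// Constructor-injected references end up promiscuously shared.
class Logger {
  level = "info"; // mutable state, reachable by everyone who holds the reference
}

class BillingService {
  constructor(private log: Logger) {}
  charge() {
    this.log.level = "debug"; // "helpfully" mutates the shared dependency
  }
}

class OrderService {
  constructor(private log: Logger, private billing: BillingService) {}
  placeOrder() {
    this.billing.charge();
    // Surprise: this.log.level is now "debug", and nothing in OrderService
    // (or in the code that created it) explains why.
    console.log(this.log.level);
  }
}

const logger = new Logger();
new OrderService(logger, new BillingService(logger)).placeOrder(); // prints "debug"
```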
Methods/Properties
The methods or properties that provide access to particular fields are no better than changing the value of a field directly.
It doesn't matter whether you mutate an object's state by using a fancy property or method -- the result is the same: mutated
state.
Some people say that OOP tries to model the real world. This is simply not true: OOP has nothing to relate to in the real world.
Trying to model programs as objects is probably one of the biggest OOP mistakes.
The real world is not hierarchical
OOP attempts to model everything as a hierarchy of objects. Unfortunately, that is not how things work in the real world. Objects
in the real world interact with each other using messages, but they are mostly independent of one another.
Inheritance in the real world
OOP inheritance is not modeled after the real world. The parent object in the real world is unable to change the behavior of child
objects at run-time. Even though you inherit your DNA from your parents, they're unable to make changes to your DNA as they please.
You do not inherit "behaviors" from your parents, you develop your own behaviors. And you're unable to "override" your parents' behaviors.
The real world has no methods
Does the piece of paper you're writing on have a "write" method? No! You take an empty piece of paper, pick up a pen,
and write some text. You, as a person, don't have a "write" method either -- you make the decision to write some text based on outside
events or your internal thoughts.
The Kingdom of Nouns
Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions
and data structures belong in totally different worlds.
Objects (or nouns) are at the very core of OOP. A fundamental limitation of OOP is that it forces everything into nouns. And not
everything should be modeled as nouns. Operations (functions) should not be modeled as objects. Why are we forced to create a
Multiplier class when all we need is a function that multiplies two numbers? Simply have a Multiply function,
let data be data and let functions be functions!
In non-OOP languages, doing trivial things like saving data to a file is straightforward -- very similar to how you would describe
an action in plain English.
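As a rough TypeScript sketch of the contrast (the Multiplier example comes from the text above; the file-writing line assumes a Node.js environment):

```typescript
import { writeFileSync } from "node:fs";

// The noun-centric version: a class whose only job is to hold one operation.
class Multiplier {
  multiply(a: number, b: number): number {
    return a * b;
  }
}
const six = new Multiplier().multiply(2, 3);

// The "let functions be functions" version.
const multiply = (a: number, b: number): number => a * b;
const alsoSix = multiply(2, 3);

// Saving data to a file, stated roughly the way you'd say it in English.
writeFileSync("result.txt", String(six === alsoSix)); // writes "true"
```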
Real-world example, please!
Sure. Going back to the painter example: the painter owns a PaintingFactory. He has hired a dedicated BrushManager,
a ColorManager, a CanvasManager and a MonaLisaProvider. His good friend, a zombie, makes
use of a BrainConsumingStrategy. Those objects, in turn, define the following methods: CreatePainting,
FindBrush, PickColor, CallMonaLisa, and ConsumeBrainz.
Of course, this is plain stupidity, and could never have happened in the real world. How much unnecessary complexity has been
created for the simple act of drawing a painting?
There's no need to invent strange concepts to hold your functions when they're allowed to exist separately from the objects.
Unit Testing
Automated testing is an important part of the development process and helps tremendously in preventing regressions (i.e. bugs
being introduced into existing code). Unit Testing plays a huge role in the process of automated testing.
Some might disagree, but OOP code is notoriously difficult to unit test. Unit testing assumes testing things in isolation, and
to make a method unit-testable we have to do the following (a sketch of the resulting ceremony follows below):
Extract its dependencies into a separate class.
Create an interface for the newly created class.
Declare fields to hold instances of the newly created class.
Use a mocking framework to mock the dependencies.
Use a dependency-injection framework to inject the dependencies.
How much more complexity has to be created just to make a piece of code testable? How much time was wasted just to make some code
testable?
PS: we'd also have to instantiate the entire class in order to test a single method. This also brings in the code from
all of its parent classes.
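Here is the ceremony sketched in TypeScript -- ReportService, IClock and FixedClock are invented names, used only to illustrate the steps listed above:

```typescript
// To test one method "in isolation" we extract the dependency, define an
// interface for it, hold it in a field, and hand-roll a mock.
interface IClock {
  now(): Date;
}

class Clock implements IClock {
  now(): Date {
    return new Date();
  }
}

class ReportService {
  constructor(private clock: IClock) {} // field + injected dependency
  header(): string {
    return `Report generated ${this.clock.now().toISOString()}`;
  }
}

// The "mock" -- one more class that exists only for the test.
class FixedClock implements IClock {
  now(): Date {
    return new Date("2020-01-01T00:00:00Z");
  }
}

// The test itself is one line; everything above it is scaffolding.
const header = new ReportService(new FixedClock()).header();
console.assert(header === "Report generated 2020-01-01T00:00:00.000Z");
```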
With OOP, writing tests for legacy code is even harder -- almost impossible. Entire companies (e.g. TypeMock) have been built
around the problem of testing legacy OOP code.
Boilerplate code
Boilerplate code is probably the biggest offender when it comes to the signal-to-noise ratio. Boilerplate code is "noise" that
is required to get the program to compile. Boilerplate code takes time to write and makes the codebase less readable because of the
added noise.
While "program to an interface, not to an implementation" is the recommended approach in OOP, not everything should become an
interface. We'd have to resort to using interfaces in the entire codebase, for the sole purpose of testability. We'd also probably
have to make use of dependency injection, which further introduced unnecessary complexity.
Testing private methods
Some people say that private methods shouldn't be tested. I tend to disagree: unit testing is called "unit" for a reason -- we test
small units of code in isolation. Yet testing private methods in OOP is nearly impossible, and we shouldn't be making private methods
internal just for the sake of testability.
In order to achieve testability of private methods, they usually have to be extracted into a separate object. This, in turn, introduces
unnecessary complexity and boilerplate code.
Refactoring
Refactoring is an important part of a developer's day-to-day job. Ironically, OOP code is notoriously hard to refactor. Refactoring
is supposed to make the code less complex, and more maintainable. On the contrary, refactored OOP code becomes significantly more
complex -- to make the code testable, we'd have to make use of dependency injection, and create an interface for the refactored class.
Even then, refactoring OOP code is really hard without dedicated tools like Resharper.
In the essay's original example (not reproduced here), the line count more than doubled just to extract a single method. Why does refactoring create
even more complexity, when the code is being refactored in order to decrease complexity in the first place?
Contrast this with a similar refactor of non-OOP code in JavaScript: the code stays essentially the same -- the isValidInput function
simply moves to a different file, and a single line is added to import it. An _isValidInput parameter is also added to the function
signature for the sake of testability.
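Since neither snippet survives in this excerpt, here is a hedged reconstruction of the contrast in TypeScript (SignupForm, IInputValidator and the validation rule are made-up names, not the essay's code):

```typescript
// OOP-style extraction of isValidInput: a new interface, a new class,
// and constructor injection, all so the method can be tested.
interface IInputValidator {
  isValidInput(s: string): boolean;
}

class InputValidator implements IInputValidator {
  isValidInput(s: string): boolean {
    return s.trim().length > 0;
  }
}

class SignupForm {
  constructor(private validator: IInputValidator) {} // injected for testability
  submit(name: string): boolean {
    return this.validator.isValidInput(name);
  }
}

// Function-style extraction: the function moves to its own module and is
// simply imported or passed in; the calling code barely changes.
const isValidInput = (s: string): boolean => s.trim().length > 0;

function submit(name: string, _isValidInput = isValidInput): boolean {
  return _isValidInput(name);
}
```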
This is a simple example, but in practice the complexity grows exponentially as the codebase gets bigger.
And that's not all. Refactoring OOP code is extremely risky. Complex dependency graphs and state scattered all over the OOP
codebase make it impossible for the human brain to consider all of the potential issues.
The Band-aids
What do we do when something is not working? It's simple: we have only two options -- throw it away or try to fix it. OOP is not
something that can be thrown away easily; millions of developers are trained in OOP, and millions of organizations worldwide are
using it.
You probably see by now that OOP doesn't really work: it makes our code complex and unreliable. And you're not alone! People
have been thinking hard for decades trying to address the issues prevalent in OOP code. They've come up with a myriad of
design patterns.
Design patterns
OOP provides a set of guidelines that should theoretically allow developers to incrementally build larger and larger systems:
the SOLID principles, dependency injection, design patterns, and others.
Unfortunately, the design patterns are nothing other than band-aids. They exist solely to address the shortcomings of OOP.
A myriad of books has even been written on the topic. They wouldn't have been so bad, had they not been responsible for the introduction
of enormous complexity to our codebases.
The problem factory
In fact, it is impossible to write good and maintainable Object-Oriented code.
On one side of the spectrum we have an OOP codebase that is inconsistent and doesn't seem to adhere to any standards. On the other
side of the spectrum we have a tower of over-engineered code, a bunch of erroneous abstractions built on top of one another.
Design patterns are very helpful in building such towers of abstractions.
Soon, adding in new functionality, and even making sense of all the complexity, gets harder and harder. The codebase will be full
of things like SimpleBeanFactoryAwareAspectInstanceFactory, AbstractInterceptorDrivenBeanDefinitionDecorator,
TransactionAwarePersistenceManagerFactoryProxy or RequestProcessorFactoryFactory.
Precious brainpower has to be wasted trying to understand the tower of abstractions that the developers themselves have created.
The absence of structure is in many cases better than having bad structure (if you ask me).
Is Object-Oriented Programming a Trillion Dollar Disaster?
(medium.com) Posted by EditorDavid on Monday July 22, 2019 @01:04AM from the OOPs dept.
Senior full-stack engineer Ilya Suzdalnitski recently published a lively 6,000-word essay
calling object-oriented programming "a trillion dollar disaster." Precious time and
brainpower are being spent thinking about "abstractions" and "design patterns" instead of
solving real-world problems... Object-Oriented Programming (OOP) has been created with one goal
in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to
improve code organization. There's
no objective and open evidence that OOP is better than plain procedural programming...
Instead of reducing complexity, it encourages promiscuous sharing of mutable state and
introduces additional complexity with its numerous design patterns. OOP makes common
development practices, like refactoring and testing, needlessly hard...
Using OOP is seemingly innocent in the short-term, especially on greenfield projects. But
what are the long-term consequences of using OOP? OOP is a time bomb, set to explode
sometime in the future when the codebase gets big enough. Projects get delayed, deadlines get
missed, developers get burned-out, adding in new features becomes next to impossible.
The organization labels the codebase as the "legacy codebase", and the development
team plans a rewrite.... OOP provides developers too many tools and choices, without
imposing the right kinds of limitations. Even though OOP promises to address modularity and
improve reusability, it fails to deliver on its promises...
I'm not criticizing Alan Kay's OOP -- he is a genius. I wish OOP was implemented the way he
designed it. I'm criticizing the modern Java/C# approach to OOP... I think that it is plain
wrong that OOP is considered the de-facto standard for code organization by many people,
including those in very senior technical positions. It is also wrong that many mainstream
languages don't offer any other alternatives to code organization other than OOP.
The essay ultimately blames Java for the popularity of OOP, citing Alan Kay's comment that
Java "is the most distressing thing to happen to computing since MS-DOS." It also quotes Linus
Torvalds's observation that "limiting your project to C means that people don't screw things up
with any idiotic 'object model'."
And it ultimately suggests Functional Programming as a superior alternative, making the
following assertions about OOP:
"OOP code encourages the use of shared mutable state, which
has been proven to be unsafe time and time again... [E]ncapsulation, in fact, is glorified
global state." "OOP typically requires a lot of boilerplate code (low signal-to-noise ratio)."
"Some might disagree, but OOP code is notoriously difficult to unit test... [R]efactoring OOP
code is really hard without dedicated tools like Resharper." "It is impossible to write good
and maintainable Object-Oriented code."
There's no objective and open evidence that OOP is better than plain procedural
programming...
...which is followed by the author's subjective opinions about why procedural programming
is better than OOP. There's no objective comparison of the pros and cons of OOP vs. procedural,
just a rant about some of OOP's problems.
We start from the point-of-view that OOP has to prove itself. Has it? Has any project or
programming exercise ever taken less time because it is object-oriented?
Precious time and brainpower are being spent thinking about "abstractions" and "design
patterns" instead of solving real-world problems...
...says the person who took the time to write a 6,000 word rant on "why I hate
OOP".
Sadly, that was something you hallucinated. He doesn't say that anywhere.
Inheritance, while not "inherently" bad, is often the wrong solution. See: Why
extends is evil [javaworld.com]
Composition is frequently a more appropriate choice. Aaron Hillegass wrote this funny
little anecdote in Cocoa
Programming for Mac OS X [google.com]:
"Once upon a time, there was a company called Taligent. Taligent was created by IBM and
Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached the
peak of its mindshare, I met one of its engineers at a trade show. I asked him to create a
simple application for me: A window would appear with a button, and when the button was
clicked, the words 'Hello, World!' would appear in a text field. The engineer created a
project and started subclassing madly: subclassing the window and the button and the event
handler. Then he started generating code: dozens of lines to get the button and the text
field onto the window. After 45 minutes, I had to leave. The app still did not work. That
day, I knew that the company was doomed. A couple of years later, Taligent quietly closed its
doors forever."
Almost every programming methodology can be abused by people who really don't know how to
program well, or who don't want to. They'll happily create frameworks, implement new
development processes, and chart tons of metrics, all while avoiding the work of getting the
job done. In some cases the person who writes the most code is the same one who gets the
least amount of useful work done.
So, OOP can be misused the same way. Never mind that OOP essentially began very early and
has been reimplemented over and over, even before Alan Kay. I.e., files in Unix are essentially
an object-oriented system. It's just data encapsulation and separating work into manageable
modules. That's how it was before anyone ever came up with the dumb name "full-stack
developer".
As a developer who started in the days of FORTRAN (when it was all-caps), I've watched the
rise of OOP with some curiosity. I think there's a general consensus that abstraction and
re-usability are good things - they're the reason subroutines exist - the issue is whether
they are ends in themselves.
I struggle with the whole concept of "design patterns". There are clearly common themes in
software, but there seems to be a great deal of pressure these days to make your
implementation fit some pre-defined template rather than thinking about the application's
specific needs for state and concurrency. I have seen some rather eccentric consequences of
"patternism".
Correctly written, OOP code allows you to encapsulate just the logic you need for a
specific task and to make that specific task available in a wide variety of contexts by
judicious use of templating and virtual functions that obviate the need for "refactoring".
Badly written, OOP code can have as many dangerous side effects and as much opacity as any
other kind of code. However, I think the key factor is not the choice of programming
paradigm, but the design process. You need to think first about what your code is intended to
do and in what circumstances it might be reused. In the context of a larger project, it means
identifying commonalities and deciding how best to implement them once. You need to document
that design and review it with other interested parties. You need to document the code with
clear information about its valid and invalid use. If you've done that, testing should not be
a problem.
Some people seem to believe that OOP removes the need for some of that design and
documentation. It doesn't and indeed code that you intend to be reused needs *more* design
and documentation than the glue that binds it together in any one specific use case. I'm
still a firm believer that coding begins with a pencil, not with a keyboard. That's
particularly true if you intend to design abstract interfaces that will serve many purposes.
In other words, it's more work to do OOP properly, so only do it if the benefits outweigh the
costs - and that usually means you not only know your code will be genuinely reusable but
will also genuinely be reused.
[...] I'm still a firm believer that coding begins with a pencil, not with a keyboard.
[...]
This!
In fact, even more: I'm a firm believer that coding begins with a pencil designing the data
model that you want to implement.
Everything else is just code that operates on that data model. Though I agree with most of
what you say, I believe the classical "MVC" design-pattern is still valid. And, you know
what, there is a reason why it is called "M-V-C": Start with the Model, continue with the
View and finalize with the Controller. MVC not only stood for Model-View-Controller but also
for the order of the implementation of each.
And preferably, as you stated correctly, "... start with pencil & paper
..."
I struggle with the whole concept of "design patterns".
Because design patterns are stupid.
A reasonable programmer can understand reasonable code so long as the data is documented,
even when the code isn't documented, but will struggle immensely if it were the other way
around. Bad programmers create objects for objects' sake, and because of that they have to
follow so-called "design patterns", because no amount of code commenting makes the code easily
understandable when it's a spaghetti web of interacting "objects". The "design patterns" don't
make the code easier to read, just easier to write.
Those OOP fanatics, if they do "document" their code, add comments like "// increment the
index" which is useless shit.
The big win of OOP is only in the encapsulation of the data with the code, and great
code treats objects like data structures with attached subroutines, not as "objects",
documents the fuck out of the contained data while more or less letting the code document
itself, and keeps OO elements to a minimum. As it turns out, OOP is just much more effort than
procedural and it rarely pays off to invest that effort, at least for me.
The problem isn't the object orientation paradigm itself, it's how it's applied.
The big problem in any project is that you have to understand how to break down the final
solution into modules that can be developed independently of each other to a large extent and
identify the items that are shared. But even when you have items that are apparently
identical, that doesn't mean they will stay that way in the long run, so shared code may even be
dangerous, because future developers don't know that by fixing problem A they create problems
B, C, D and E.
Any time you make something easier, you lower the bar as well and now have a pack of
idiots that never could have been hired if it weren't for a programming language that
stripped out a lot of complexity for them.
Exactly. There are quite a few aspects of writing code that are difficult regardless of
language and there the difference in skill and insight really matters.
I have about 35+ years of software development experience, including with procedural, OOP
and functional programming languages.
My experience is: The question "is procedural better than OOP or functional?" (or
vice-versa) has a single answer: "it depends".
Like in your cases above, I would exactly do the same: use some procedural language that
solves my problem quickly and easily.
In large-scale applications, I mostly used OOP (having learned OOP with Smalltalk &
Objective-C). I don't like C++ or Java - but that's a matter of personal preference.
I use Python for large-scale scripts or machine learning/AI tasks.
I use Perl for short scripts that need to do a quick task.
Procedural is in fact easier to grasp for beginners as OOP and functional require a
different way of thinking. If you start developing software, after a while (when the project
gets complex enough) you will probably switch to OOP or functional.
Again, in my opinion neither is better than the other (procedural, OOP or functional). It
just depends on the task at hand (and of course on the experience of the software
developer).
There is nothing inherently wrong with some of the functionality it offers; it's the way
OOP is abused as a substitute for basic good programming practices. I was helping interns -
students from a local CC - deal with idiotic assignments like making a random number
generator USING CLASSES, or displaying text to a screen USING CLASSES. Seriously, WTF? A room
full of career programmers could not even figure out how you were supposed to do that, much
less why. What was worse was a lack of understanding of basic programming skill or even the
use of variables, as the kids were being taught EVERY program was to be assembled solely
by sticking together bits of libraries. There was no coding, just hunting for snippets of
preexisting code to glue together. Zero idea they could add their own, much less how to do
it. OOP isn't the problem, it's the idea that it replaces basic programming skills and best
practice.
That and the obsession with absofrackinglutely EVERYTHING just having to be a formally
declared object, including the whole program being an object with a run() method.
Some things actually cry out to be objects, some not so much. Generally, I find that my
most readable and maintainable code turns out to be a procedural program that manipulates
objects.
Even there, some things just naturally want to be a struct or just an array of values.
The same is true of most ingenious ideas in programming. It's one thing if code is
demonstrating a particular idea, but production code is supposed to be there to do work, not
grind an academic ax.
For example, slavish adherence to "patterns". They're quite useful for thinking about code
and talking about code, but they shouldn't be the end of the discussion. They work better as
a starting point. Some programs seem to want patterns to be mixed and matched.
In reality those problems are just cargo cult programming one level higher.
I suspect a lot of that is because too many developers barely grasp programming and never
learned to go beyond the patterns they were explicitly taught.
When all you have is a hammer, the whole world looks like a nail.
There are a lot of mediocre programmers who follow the principle "if you have a hammer,
everything looks like a nail". They know OOP, so they think that every problem must be solved
in an OOP way. In fact, OOP works well when your program needs to deal with relatively
simple, real-world objects: the modelling follows naturally. If you are dealing with abstract
concepts, or with highly complex real-world objects, then OOP may not be the best
paradigm.
In Java, for example, you can program imperatively, by using static methods. The problem
is knowing when to break the rules. For example, I am working on a natural language system
that is supposed to generate textual answers to user inquiries. What "object" am I supposed
to create to do this task? An "Answer" object that generates itself? Yes, that would work,
but an imperative, static "generate answer" method makes at least as much sense.
There are different ways of thinking, different ways of modelling a problem. I get tired
of the purists who think that OO is the only possible answer. The world is not a nail.
I'm approaching 60, and I've been coding in COBOL, VB, FORTRAN, REXX, SQL for almost 40
years. I remember seeing Object Oriented Programming being introduced in the 80s, and I went
on a course once (paid by work). I remember not understanding the concept of "Classes", and
my impression was that the software we were buying was just trying to invent stupid new words
for old familiar constructs (eg: Files, Records, Rows, Tables, etc). So I never transitioned
away from my reliable mainframe programming platform. I thought the phrase OOP had died out
long ago, along with "Client Server" (whatever that meant). I'm retiring in a few years, and
the mainframe will outlive me. Everything else is buggy.
"limiting your project to C means that people don't screw things up with any idiotic
'object model'."
GTK.... hold my beer... it is not a good argument against OOP
languages. But first, let's see how OOP came into place. OOP was designed to provide
encapsulation, like components, and to support reuse and code sharing. It was the next step coming
from modules and units, which were better than plain libraries, as functions and procedures got
namespaces, which helped structure the code. OOP is a great idea when writing UI toolkits or
similar stuff, as you can as
Like all things OO is fine in moderation but it's easy to go completely overboard,
decomposing, normalizing, producing enormous inheritance trees. Yes your enormous UML diagram
looks impressive, and yes it will be incomprehensible, fragile and horrible to maintain.
That said, it's completely fine in moderation. The same goes for functional programming.
Most programmers can wrap their heads around things like functions, closures / lambdas,
streams and so on. But if you mean true functional programming then forget it.
As for the kernel's choice to use C, that really boils down to the fact that a kernel
needs to be lower level than a typical user land application. It has to do its own memory
allocation and other things that were beyond C++ at the time. The STL wouldn't have been usable, nor
would new / delete, or exceptions & unwinding. And at that point, why even bother? That
doesn't mean C is wonderful or doesn't inflict its own pain and bugs on development. But at
the time, it was the only sane choice.
"... The conventional Simula 67-like pattern of class and instance will get you {1,3,7,9}, and I think many people take this as a definition of OO. ..."
"... Because OO is a moving target, OO zealots will choose some subset of this menu by whim and then use it to try to convince you that you are a loser. ..."
"... In such a pack-programming world, the language is a constitution or set of by-laws, and the interpreter/compiler/QA dept. acts in part as a rule checker/enforcer/police force. Co-programmers want to know: If I work with your code, will this help me or hurt me? Correctness is undecidable (and generally unenforceable), so managers go with whatever rule set (static type system, language restrictions, "lint" program, etc.) shows up at the door when the project starts. ..."
Here is an a la carte menu of features or properties that are related to these terms; I have heard OO defined to be many different
subsets of this list.
Encapsulation - the ability to syntactically hide the implementation of a type. E.g. in C or Pascal you always know
whether something is a struct or an array, but in CLU and Java you can hide the difference.
Protection - the inability of the client of a type to detect its implementation. This guarantees that a behavior-preserving
change to an implementation will not break its clients, and also makes sure that things like passwords don't leak out.
Ad hoc polymorphism - functions and data structures with parameters that can take on values of many different types.
Parametric polymorphism - functions and data structures that parameterize over arbitrary values (e.g. list of anything).
ML and Lisp both have this. Java doesn't quite because of its non-Object types.
Everything is an object - all values are objects. True in Smalltalk (?) but not in Java (because of int and friends).
All you can do is send a message (AYCDISAM) = Actors model - there is no direct manipulation of objects, only communication
with (or invocation of) them. The presence of fields in Java violates this.
Specification inheritance = subtyping - there are distinct types known to the language with the property that a value of
one type is as good as a value of another for the purposes of type correctness. (E.g. Java interface inheritance.)
Implementation inheritance/reuse - having written one pile of code, a similar pile (e.g. a superset) can be generated in
a controlled manner, i.e. the code doesn't have to be copied and edited. A limited and peculiar kind of abstraction. (E.g.
Java class inheritance.)
Sum-of-product-of-function pattern - objects are (in effect) restricted to be functions that take as first argument a distinguished
method key argument that is drawn from a finite set of simple names.
So OO is not a well defined concept. Some people (eg. Abelson and Sussman?) say Lisp is OO, by which they mean {3,4,5,7} (with
the proviso that all types are in the programmers' heads). Java is supposed to be OO because of {1,2,3,7,8,9}. E is supposed to be
more OO than Java because it has {1,2,3,4,5,7,9} and almost has 6; 8 (subclassing) is seen as antagonistic to E's goals and not necessary
for OO.
The conventional Simula 67-like pattern of class and instance will get you {1,3,7,9}, and I think many people take this as
a definition of OO.
Because OO is a moving target, OO zealots will choose some subset of this menu by whim and then use it to try to convince
you that you are a loser.
Perhaps part of the confusion - and you say this in a different way in your little
memo - is that the C/C++ folks see OO as a liberation from a world
that has nothing resembling first-class functions, while Lisp folks see OO as a prison since it limits their use of functions/objects
to the style of (9.). In that case, the only way OO can be defended is in the same manner as any other game or discipline -- by arguing
that by giving something up (e.g. the freedom to throw eggs at your neighbor's house) you gain something that you want (assurance
that your neighbor won't put you in jail).
This is related to Lisp being oriented to the solitary hacker and discipline-imposing languages being oriented to social packs,
another point you mention. In a pack you want to restrict everyone else's freedom as much as possible to reduce their ability to
interfere with and take advantage of you, and the only way to do that is by either becoming chief (dangerous and unlikely) or by
submitting to the same rules that they do. If you submit to rules, you then want the rules to be liberal so that you have a chance
of doing most of what you want to do, but not so liberal that others nail you.
In such a pack-programming world, the language is a constitution or set of by-laws, and the interpreter/compiler/QA dept.
acts in part as a rule checker/enforcer/police force. Co-programmers want to know: If I work with your code, will this help me or
hurt me? Correctness is undecidable (and generally unenforceable), so managers go with whatever rule set (static type system, language
restrictions, "lint" program, etc.) shows up at the door when the project starts.
I recently contributed to a discussion of anti-OO on the e-lang list. My main anti-OO message (actually it only attacks points
5/6) was http://www.eros-os.org/pipermail/e-lang/2001-October/005852.html
. The followups are interesting but I don't think they're all threaded properly.
(Here are the pet definitions of terms used above:
Value = something that can be passed to some function (abstraction). (I exclude exotic compile-time things like parameters
to macros and to parameterized types and modules.)
Object = a value that has function-like behavior, i.e. you can invoke a method on it or call it or send it a message
or something like that. Some people define object more strictly along the lines of 9. above, while others (e.g. CLTL) are more
liberal. This is what makes "everything is an object" a vacuous statement in the absence of clear definitions.
In some languages the "call" is curried and the key-to-method mapping can sometimes be done at compile time. This technicality
can cloud discussions of OO in C++ and related languages.
Function = something that can be combined with particular parameter(s) to produce some result. Might or might not be
the same as object depending on the language.
Type = a description of the space of values over which a function is meaningfully parameterized. I include both types
known to the language and types that exist in the programmer's mind or in documentation.)
"... If there's something I've noticed in my career that is that there are always some guys that desperately want to look "smart" and they reflect that in their code. ..."
If there's something I've noticed in my career that is that there are always some guys that desperately want to look "smart"
and they reflect that in their code.
If there's something else that I've noticed in my career, it's that their code is the hardest to maintain and for some reason
they want the rest of the team to depend on them since they are the only "enough smart" to understand that code and change it.
No need to say that these guys are not part of my team. Your code should be direct, simple and readable. End of story.
"... If there's something I've noticed in my career that is that there are always some guys that desperately want to look "smart"
and they reflect that in their code. ..."
If there's something I've noticed in my career that is that there are always some guys that desperately want to look "smart"
and they reflect that in their code.
If there's something else that I've noticed in my career, it's that their code is the hardest to maintain and for some reason
they want the rest of the team to depend on them since they are the only "enough smart" to understand that code and change it.
No need to say that these guys are not part of my team. Your code should be direct, simple and readable. End of story.
James Maguire's
article raises some interesting questions as to why teaching Java to first
year CS / IT students is a bad idea. The article mentions both Ada and Pascal
– neither of which really "took off" outside of the States, with the former
being used mainly by contractors of the US Dept. of Defense.
This is my own, personal, extension to the article – which I agree with –
and why first year students should be taught C in first year. I'm biased though,
I learned C as my first language and extensively use C or C++ in
projects.
Java is a very high level language that has interesting features that make
it easier for programmers. The two main points, that I like about Java, are
libraries (although libraries exist for C / C++ ) and memory management.
Libraries
Libraries are fantastic. They offer an API and abstract a metric fuck tonne
of work that a programmer doesn't care about. I don't care how
the library works inside, just that I have a way of putting in input and getting
expected output (see my post on
abstraction).
I've extensively used libraries, even this week, for audio codec decoding. Libraries
mean not reinventing the wheel and reusing code (something students are discouraged
from doing, as it's plagiarism, yet in the real world you are rewarded). Again,
starting with C means that you appreciate the libraries more.
Memory Management
Managing your program's memory manually is a pain in the hole. We all know
this after spending countless hours finding memory leaks in our programs. Java's
inbuilt memory management is great – it saves me from having to do it.
However, if I had learned Java first, I would assume (for a short amount
of time) that all languages managed memory for you, or that
all languages were shite compared to Java because they don't
manage memory for you. Going from a "lesser" language like C to Java makes you
appreciate the memory manager.
What's so great about C?
In the context of a first language to teach students, C is perfect. C is
Relatively simple
Procedural
Lacks OOP features, which confuse freshers
Low level
Fast
Imperative
Weakly typed
Easy to get bugs
Java is a complex language that will spoil a first year student. However,
as noted, CS / IT courses need to keep student retention rates high. As an example,
my first year class was about 60 people, final year was 8. There are ways to
keep students, possibly with other, easier, languages in the second semester
of first year – so that students don't hate the subject when choosing the next
year's subjects after exams.
Conversely, I could say that you should teach Java in first
year and expand on more difficult languages like C or assembler (which should
be taught side by side, in my mind) later down the line – keeping retention
high in the initial years, and drilling down with each successive semester to
more systems level programming.
There's a time and place for Java, which I believe is third year or final
year. This will keep Java fresh in the students' minds while they are going job
hunting after leaving the bosom of academia. This will give them a good head
start, as most companies are Java houses in Ireland.
In
computer science, abstraction is the process by which
data and
programs are defined with a
representation similar to its meaning (semantics),
while hiding away the
implementation details. Abstraction tries to reduce and factor out details
so that the
programmer
can focus on a few concepts at a time. A system can have several abstraction
layers whereby different meanings and amounts of detail are exposed
to the programmer. For example,
low-level abstraction layers expose details of the
hardware
where the program is
run, while high-level layers deal with the
business
logic of the program.
That might be a bit too wordy for some people, and not at all clear. Here's
my analogy of abstraction.
Abstraction is like a car
A car has a few features that makes it unique.
A steering wheel
Accelerator
Brake
Clutch
Transmission (Automatic or Manual)
If someone can drive a Manual transmission car, they can drive any Manual
transmission car. Automatic drivers, sadly, cannot drive a Manual transmission
car without "relearning" to drive. That is an aside; we'll assume that all
cars are Manual transmission cars – as is the case in Ireland for most cars.
Since I can drive my car, which is a Mitsubishi Pajero, that means that I
can drive your car – a Honda Civic, Toyota Yaris, Volkswagen Passat.
All I need to know, in order to drive a car – any car – is how to use the
brakes, accelerator, steering wheel, clutch and transmission. Since I already
know this in my car, I can abstract away your car and its
controls.
I do not need to know the inner workings of your car in
order to drive it, just the controls. I don't need to know how exactly the brakes
work in your car, only that they work. I don't need to know that your car has
a turbocharger, only that when I push the accelerator, the car moves. I also
don't need to know the exact revs that I should gear up or gear down (although
that would be better on the engine!)
Virtually all controls are the same. Standardization means that the clutch,
brake and accelerator are all in the same place, regardless of the car. This
means that I do not need to relearn how a car works. To me, a car is
just a car, and is interchangeable with any other car.
Abstraction means not caring
As a programmer, or someone using a third party API (for example), abstraction
means not caring how the inner workings of some function work
– Linked list data structure, variable names inside the function, the sorting
algorithm used, etc – just that I have a standard (preferable unchanging) interface
to do whatever I need to do.
Abstraction can be thought of as a black box: for input, you get output. That
shouldn't be the case, but often is. We need abstraction so that, as programmers,
we can concentrate on other aspects of the program – this is the cornerstone
of large-scale, multi-developer software projects.
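To tie the analogy back to code, here is a minimal TypeScript sketch of "car as interface" (the model names and the exact set of controls are made up):

```typescript
// The driver (caller) only sees the controls, never the engine.
interface Car {
  accelerate(): void;
  brake(): void;
  steer(degrees: number): void;
}

class Pajero implements Car {
  accelerate() { /* turbocharger, injectors, ... hidden in here */ }
  brake() { /* ABS details hidden in here */ }
  steer(degrees: number) { /* power-steering details hidden in here */ }
}

class Civic implements Car {
  accelerate() { /* different engine, same pedal */ }
  brake() { /* different brakes, same pedal */ }
  steer(degrees: number) { /* different rack, same wheel */ }
}

// The "driver" works against the controls only, so any Car is interchangeable.
function driveToShop(car: Car) {
  car.accelerate();
  car.steer(15);
  car.brake();
}

driveToShop(new Pajero());
driveToShop(new Civic());
```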
So what are programmers doing wrong? One thing is too much use of inheritance. "It
is obviously hugely overused," he says. "There are languages where you can't express yourself without inheritance - they fit everything
into a hierarchy and it doesn't make any sense. Inheritance should come from the domain, from the problem. It is good where there
is an 'is a' or 'kind of' relationship in the fundamental domain. Shapes fit into this, there is something natural there. Similarly
device controllers have natural hierarchies that you should exploit. If you forget about programming languages and look at the application
domain, the questions about deep or shallow inheritance answer themselves.
He also takes care to distinguish "implementation inheritance, where in some sense you want deep hierarchies so that
most of the implementation is shared, and interface inheritance - where you don't care, all you want to do is to hide a set
of implementations behind a common interface. I don't think people distinguish that enough."
Another bugbear is protected visibility. "When you build big hierarchies you get two kinds of users [of the classes]: the general
users, and the people who extend the hierarchy. People who extend the hierarchy often need protected access.
The reason I like public or private is that if it is private, nobody can mess with it.
"If I say protected, about some data, anybody can mess with it and scramble my data. That has been a problem.
It is not such a problem if the protected interface really is functional, a set of functions that you
have provided as support for implementers of new classes... The ideal is public or private, and sometimes out of necessity
we use protected," he said.
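As a rough illustration of the distinction being drawn here (interface vs. implementation inheritance, and protected behavior vs. protected data), a minimal TypeScript sketch built around the Shape example; the class names and the caching detail are invented:

```typescript
// Interface inheritance: all we want is to hide a set of implementations
// behind a common interface.
interface Shape {
  area(): number;
}

class Circle implements Shape {
  private radius: number;               // private: nobody outside can mess with it
  constructor(radius: number) { this.radius = radius; }
  area(): number { return Math.PI * this.radius ** 2; }
}

// Implementation inheritance: a hierarchy that shares code. The protected
// member is a *function* offered to subclass authors, not raw data.
abstract class CachedShape implements Shape {
  private cache?: number;               // private data: safe even from subclasses
  protected abstract compute(): number; // protected *behavior* for extenders
  area(): number {
    if (this.cache === undefined) this.cache = this.compute();
    return this.cache;
  }
}

class Square extends CachedShape {
  constructor(private side: number) { super(); }
  protected compute(): number { return this.side * this.side; }
}

console.log(new Circle(1).area(), new Square(2).area()); // ~3.14159 4
```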
I cannot help my desire to say a couple of words. My coding experience is 25+ years, and OOP never
was attractive to me. Probably that was because I always had what OOP could give, thanks to Modula-2 and Oberon-2.
Philosophy aside, the OOP coding practice I happened to observe was that OOP mainly provided means for:
Modularisation (encapsulation).
Making reusable code libraries.
IMHO that was the reason for OOP's success, and procedural programming could give all of that by means of modules.
Now it is time to recall that the (celebrated) OOP method is to create a number of objects and to fire up their interaction by
message exchange.
Regarding its methodology, the OOP approach seems to be much more obscure than PP. Maybe
that is because it is much more natural for a human being to invent an algorithm for a purpose than to build an
abstract machine which would work according to some model in such a way that it implements an algorithm for that purpose...
AFAIC the only domains where the OOP metaphor fits more or less nicely are windowed GUIs (interactive graphics)
and the modeling of automation-system blocks.
OOP is an abstraction made up of too many false hopes, and thus counterproductive, as 99% of people see it now ;)
Here we are, a real-life
example: every comment in this thread [from the OO point of view] is a derived class of 26 ASCII letters, but how useful is such an
abstraction for the matter at hand?
Larry Wall had a classic example somewhere: a radio tower and a plumbing pipe are made from a single base class, but they have almost
nothing in common.
Class hierarchies,...
The vertical inheritance paradigm clearly becomes insufficient,.. then horizontal (aka transparent) inheritance comes into play,
making a mess of the ideal initial picture. (Forgive me for asking,.. does Java have it?.. Perl does, but,..)
Isolation,.. Students get the wrong idea about it, listening to OOP fairy tales. They start to believe that an object's properties
become invisible when someone's eyes are closed.
Instead of learning about modularity, decomposition, protocols, state machines, code re-use,.. every OO writer spends most
of his life reinventing the wheel, cloning classes and methods from a zillion similar ones.
The result is: almost every OOP product ends up looking like a collection of procedures, and is thus nearly impossible to make lock-free for threading.
I vote for data-driven modular design -- back to nature, to stop lying to ourselves.
OOP as scientific abstraction is fine... and limited.
Some might not be able to grasp OO but this is a small number compared to the number of people who
just cannot understand pointers. For a great many people pointers remain magical.
Object-oriented programming generates a lot of what looks like work. Back in the days
of fanfold, there was a type of programmer who would only put five or ten lines of code on a page, preceded by twenty lines of
elaborately formatted comments.
Object-oriented programming is like crack for these people: it lets you incorporate all this scaffolding right into your source
code.
Something that a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file
of classes and methods. So it is a good tool if you want to convince yourself, or someone else, that you are doing a lot of work.
Eric Lippert observed a similar occupational hazard among developers. It's something he calls
object happiness.
What I sometimes see when I interview people and review code is symptoms of a disease I call Object
Happiness. Object Happy people feel the need to apply principles of OO design to small, trivial, throwaway projects.
They invest lots of unnecessary time making pure virtual abstract base classes -- writing programs where IFoos talk to IBars but
there is only one implementation of each interface!
I suspect that early exposure to OO design principles divorced from any practical context that
motivates those principles leads to object happiness. People come away as OO True Believers rather than OO pragmatists.
I've seen so many problems caused by excessive, slavish adherence to OOP in production applications. Not that object oriented
programming is inherently bad, mind you, but a little OOP goes a very long way. Adding objects to your code is like adding
salt to a dish: use a little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's better to err
on the side of simplicity, and I tend to favor the approach that results in less code, not more.
Given my ambivalence about all things OO, I was amused when Jon Galloway
forwarded me a link to Patrick Smacchia's web page.
Patrick is a French software developer. Evidently the acronym for object oriented programming is spelled a little differently in
French than it is in English: POO.
March 5, 2007
Todd Blanchard
Well, there's objects and then there's Objects. I work in Smalltalk - real objects everywhere and it feels pretty natural.
OTOH, Java's Objects(TM) are characterized by cargo cult engineering. Lots of form without function. Factory is just
one pattern that is horrifically overused in that world and usually for no good reason.
You have to know when to use common sense. Very rare in software. Sometimes a script is just a script.
Opeth
Making something ridiculously complex for the sake of making it simple is like trying to put out a fire with gasoline.
Some programmers just need to take a deep breath and write code that is a delicious salami sandwich, and not an extravagantly
prepared four course meal that tastes like shit.
Ed
"It has been said that democracy is the worst form of government except all the others that have been tried." - Churchill
Erm, I guess people do go object-crazy. The problem, as I'm sure is documented elsewhere, is the crappy teaching phase driving
home that "OO is all about inheritance" when it's not. Inheritance is a powerful tool that is sorely abused. Most of my
object hierarchies are flat, I mark all classes as sealed unless I do intend for someone to derive off of them, and I don't create
interfaces until I really need them (and usually it's only for testing so I can swap in a test implementation).
OO to me just provides a better way to hide implementation and abstract ideas away so I can create more complex, but logically
simpler programs because I don't have to hold onto all the nuances of everything at once. It's no panacea, but it is nicer to
work with when done right.
Anyways, don't throw the baby out with the bathwater. Just because some cars suck, do you stop driving altogether? So, until
something better comes along...
March 5, 2007 1:51 AM
Phil Deneka
I second Mr. Haack's thoughts. I was very fortunate in both high school and college in having teachers who taught both the
thinking structure for OOP and why it works. We consistently had to work in groups and be able to read each others' code at a
glance and understand what it did, how, and why.
I didn't understand just how important that was until many years later. It has shaped every program I've touched since.
Cesar Viteri
I read somewhere the following: "The difference between a terrorist and an Object-Oriented Methodologist is that you can try
to negotiate with the terrorist". A lot of people that behave like you describe in this post make it come true :o)
Excellent post, keep it coming :o)
Thomas Flamer
When I studied computer science at the University of Oslo, we had a lecturer called Kristen Nygaard who actually invented object
oriented programming. He invented OOP for a language called Simula as a technique for modeling real-world objects and behaviours.
You do not have to program OOP in Simula, unlike in Java.
dnm
I like one of the quotes in Damian Conway's Perl Best Practices:
Always write your code as though it will have to be maintained by an angry axe murderer who knows where you live...
Eric Turner
I think this can be summed up very easily. Bad programming is bad programming no matter the language or the technique used.
VB has a bad rap because so many bad programmers coded in it. C had lots of good programmers in the beginning. Not sure why in
either case, but still true.
OO can be equally bad. A language or technique is neither good nor bad; bad use or implementation of it is, however, bad. Programmers
should be aware of the strengths and weaknesses of each tool they use. If they don't, can they really call themselves programmers?
I would say they are just coders. Programmers use the strengths from languages and techniques to reduce the weaknesses. If you
don't then you are just a coder pretending to be a programmer.
Rabid Wolverine
They used to call it spaghetti code; OO architects like to call it lasagna code. However, most of
the time OOP winds up as ravioli code…
OK, we got OOP. We got POO.
Dave
Let's make another acronym: Perfect at Object Oriented Programming (POOP). You can make this a certification that people
can get by taking an exam or something. It can be sponsored or standardized by different vendors. You can be MCSPOOP, or Sun certified
POOP. You can have all different flavors of POOP. (Yuck.)
Tom
OOP is an excursion into futility.
It is oversold, and rarely are the benefits worth the costs.
Far from making code clearer it generally adds to obfuscation.
A methodology or tool, adopted with religious fervour, cannot substitute for good design and high quality coding.
What is required is clear thought, clear structure, well chosen names, and precise, accurate, and pertinent commenting.
There is nothing that can be done in C++ or Java that could not be done quicker, more clearly and just as effectively in C,
and in other problem domains completely different languages such as LISP and Prolog are in any case more appropriate tools.
The drawbacks of OOP become most apparent when trying to maintain an OOP-horror. The sequence of procedure calls and values
of variables was easily tracked in old-fashioned C. Troubleshooting is an order of magnitude more difficult in C++ or Java.
Inheritance is more trouble than it's worth. Under the doubtful disguise of the holy "code reuse" an insane amount of gratuitous
complexity is added to our environment, which makes necessary industrial quantities of syntactical sugar to make the ensuing mess
minimally manageable.
I would read it as sarcasm. Try reading this manifesto
[pbm.com] and updating Fortran to C to account for 20 years of shift in the industry. Anyone not using C is just eating Quiche.
Although his joke went over your head, it is worth pointing out that OO is not a paradigm.
I know wikipedia thinks that it is, and so do a hoard of practically illiterate researchers publishing
crap papers in junk conferences. But that doesn't make it true.
Object Orientation is just a method of [name space] organization for procedural languages. Although it helps code maintenance and does a better job of unit management than modules alone, it doesn't change the
underlying computational paradigm.
I say procedural languages because class-based programming in functional languages is actually a different type of beast although
it gets called OO to appeal to people from an imperative background.
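A tiny TypeScript sketch of that claim (the Account names are made up): mechanically, a method call is a namespaced procedure call with the receiver passed implicitly.

```typescript
type Account = { balance: number };

// Procedural: the "namespace" is just the function-name prefix, and the
// receiver is an explicit first argument.
function accountDeposit(acc: Account, amount: number): Account {
  return { balance: acc.balance + amount };
}

// OO flavor of the same thing: the class groups the procedures and supplies
// the receiver implicitly as `this`.
class AccountObj {
  constructor(public balance: number) {}
  deposit(amount: number): AccountObj {
    return new AccountObj(this.balance + amount);
  }
}

console.log(accountDeposit({ balance: 10 }, 5).balance); // 15
console.log(new AccountObj(10).deposit(5).balance);      // 15
```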
I would like to warn that "is Harmful" is a cliché so compromised by
Edsger W. Dijkstra that now an article with this title
should be taken with a grain of salt.
"Object-oriented programming is an exceptionally bad idea which could only have originated in California." -- Edsger
Dijkstra
"object-oriented design is the roman numerals of computing." --
Rob Pike
"The phrase "object-oriented" means a lot of things. Half are obvious, and the other half are mistakes."
-- Paul Graham
"Implementation inheritance causes the same intertwining and brittleness that have been observed when goto statements are
overused. As a result, OO systems often suffer from complexity and lack of reuse." -- John Ousterhout Scripting, IEEE Computer,
March 1998
"90% of the shit that is popular right now wants to rub its object-oriented nutsack all over my code" -- kfx
[Dec 13, 2010] Skeptical quotes about OO programming and languages
"It has been discovered that C++ provides a remarkable facility for concealing the trivial details of a program - such as where
its bugs are." – David Keppel
"In the one and only true way. The object-oriented version of 'Spaghetti code' is, of course, 'Lasagna code'. (Too many layers)."
- Roberto Waltman.
"Java is the most distressing thing to hit computing since MS-DOS." – Alan Kay
"If the principles of OOP are introduced too early it may lead to cognitive overload for some students resulting in confusion and
disillusionment with the subject."
Historically, research suggests that students have always found computer programming difficult; the abstract nature of programming
involving problem solving and logical thinking requires a certain aptitude, and the necessary skills and disciplines are not always
easy to learn and execute.
Even students who are bright and successful in other areas of study often struggle to grasp the basics of programming, and this
has traditionally led to higher than average failure and drop-out rates. Many students end up disillusioned and look for ways to
avoid the subject later in the programme.
Modern programming paradigms, based upon the object-oriented programming (OOP) paradigm, and introduced in recent years, have
additional complex concepts and constraints associated with them.
OOP languages such as Java and VB.NET are now widely used for teaching introductory programming modules in many universities.
These place an additional cognitive burden on students over and above the already difficult programming
principles associated with all programming languages.
Many students complain that they find it difficult to understand some of the complexities associated with object orientation.
Trying to deal with these concepts at an early stage leads to having less time to focus on more fundamental principles and often
results in students having a poorer understanding of the basics.
Add to this the need to include modern windows programming environments with graphics controls and event handling, and it all
becomes too much for many students to handle; they simply cannot see the wood for the trees.
If the principles of OOP are introduced too early it may lead to cognitive overload for some students
resulting in confusion and disillusionment with the subject.
This additional complexity makes the problem of teaching contemporary programming at an introductory level even more acute and
if not addressed is likely to lead to even higher failure and drop-out rates in the early phases of computing programmes, with more
students trying to avoid programming at all costs.
Why is it that students find programming courses more difficult than they did in the past?
One reason is that the range of abilities of student cohorts has undoubtedly widened in recent years. Another is simply that OO
programming is more complex and difficult to understand. It has often been suggested that the difficulty in the teaching and understanding
of a programming language can be seen by examining the complexity of the ubiquitous 'Hello World' program.
The 'Hello World' program illustrates the simplest form of human-computer interaction (HCI); it sends a text message from a computer
program to the user, displayed on the screen. 'Hello World' will be familiar to many computer lecturers and students as it is considered
to be the most basic of programs, and is normally used as the first program example in many undergraduate programming text books
and introductory programming modules.
To illustrate the additional complexity of OOP consider as a simple metric the comparison of the program code for 'Hello World'
written in Pascal, a language used in many universities to teach introductory programming in the past, and Java a contemporary OOP
language widely used commercially and in universities today to teach programming.
PASCAL:
program HelloWorld;
begin
  write('Hello World')
end.
JAVA:
class Message
{
    public static void main(String args[])
    {
        Message helloWorld = new Message();
        helloWorld.printMessage();
    }

    void printMessage()
    {
        System.out.print("Hello World");
    }
}
These two programs perform exactly the same function. It is not difficult to see that the early-generation Pascal program is very
simple and easy to understand; most students, and even most ordinary adults, would have no problem understanding what is going on.
"Although there have been, and will always be, religious fanatics who think their language is the only way to code, the really organized
OOP hype started in the late 1980's. "... "It is said that countries get the governments they deserve, and perhaps that is true of professions
as well--a lot of the energy fueling this hype derives from the truly poor state of software development."
1994 | USENIX
Object Oriented Programming (OOP) is currently being hyped as the best way to do everything from promoting code reuse to forming
lasting relationships with persons of your preferred sexual orientation. This paper tries to demystify the benefits of OOP. We point
out that, as with so many previous software engineering fads, the biggest gains in using OOP result from applying principles that
are older than, and largely independent of, OOP. Moreover, many of the claimed benefits are either not true or true only by chance,
while occasioning some high costs that are rarely discussed. Most seriously, all the hype is preventing progress in tackling problems
that are both more important and harder: control of parallel and distributed applications, GUI design and implementation, fault tolerant
and real-time programming. OOP has little to offer these areas. Fundamentally, you get good software by thinking about it, designing
it well, implementing it carefully, and testing it intelligently, not by mindlessly using an expensive mechanical process.
Define Your Terms
Object Oriented Programming (OOP) is a term largely borrowed from the SmallTalk community, who were espousing many of these techniques
in the mid 1970's. In turn, many of their ideas derive from Simula 67, as do most of the core ideas in C++. Key notions such as encapsulation
and reuse have been discussed as far back as the 60's, and received a lot of discussion during the rounds of the Ada definition.
Although there have been, and will always be, religious fanatics who think their language is the only
way to code, the really organized OOP hype started in the late 1980's. By the early 1990's, both NeXT and Microsoft
were directing their marketing muscle into persuading us to give up C and adopt C++, while SmallTalk and Eiffel both were making
a respectable showing, and object oriented operating systems and facilities (DOE, PenPoint, CORBA) were getting a huge play in the
trade press--the hype wars were joined.
It is said that countries get the governments they deserve, and perhaps that is true of professions
as well--a lot of the energy fueling this hype derives from the truly poor state of software development. While hardware
developers have provided a succession of products with radically increasing power and lower cost, the software world has seen very
little productivity improvement. Major, highly visible products from industry leaders continue to be years late (Windows NT), extremely
buggy (Solaris) or both, costs skyrocket, and, most seriously, people are very reluctant to pay 1970's software costs when they are
running cheap 1990's hardware. I believe a lot of non-specialists look at software development and see it as so completely screwed
up that the cause cannot be profound--it must be something simple, something a quick fix could fix. Maybe if they just used objects...
To be more precise, most of what I say will apply to C++, viewed as a poor stepchild by most of the OOP elite. Actually, the few
comments I will make about more dynamically typed languages like SmallTalk make C++ look good by comparison. I will also focus my
concern fairly narrowly. I am interested in tools, including languages, that make it easier and more
productive to generate large serious high quality software products. So focusing rules out a bunch of sometimes entertaining philosophical
and aesthetic arguments best entertained over beer.
... ... ...
What Works in OOP
Those who report big benefits from using OOP are not lying. Many of the reported benefits come from focusing on designing the
software models, including the roles and interactions of the modules, enabling the modules to encapsulate expertise, and carefully
designing the interfaces between these modules. While most OOP systems allow you, and even encourage you, to do these things, most
older programming systems allow these techniques as well. These are good, old ideas that have proved their worth in the trenches
for decades, whether they were called OOP, structured programming, or just common sense. I have seen excellent programs written in
assembler that used these principles, and terrible programs in C++ that did not. The use of objects and inheritance is not what makes
these programs good.
What works in all these cases is that the programs were well thought out and the design was done intelligently, based on a clear
and well communicated set of organizing principles. The language and the operating system just don't matter. In many cases, the same
organizing principles used to guide the design can be used to guide the construction and testing of the product as well. What makes
a piece of software good has a lot to do with the application of thought to the problem being addressed, and not much to do with
what language or methodology you used. To the extent that the OOP methodology makes you think problems through and forces you to
make hidden assumptions explicit, it leads to better code.
OOP Claims Unmasked
The hype for OOP usually claims benefits such as faster development time, better code reuse, and higher quality and reliability
of the final code. As the last section shows, these are not totally empty claims, but when true they don't have much to do with OOP
methodology. This section examines these claims in more detail.
OOP is supposed to allow code to be developed faster; the question is, "faster than what?". Will OOP let you write a parser faster
than Yacc, or write a GUI faster than using a GUI-builder? Will your favorite OOP replace awk or Perl or csh
within a few years? I think not.
Well, maybe faster than C, and I suppose if we consider only raw C this claim has some validity. But a large part of most OOP
environments is a rich set of classes that allow the user to manipulate the environment--build windows, send messages across a network,
receive keystrokes, etc. C, by design, has a much thinner package of such utilities, since it is used in so many different environments.
There were some spectacularly productive environments based on LISP a few years back (and not even the most diehard LISP fanatic
would say that LISP is object oriented). A lot of what made these environments productive was a rich, well designed set of existing
functions that could be accessed by the user. And that is a lot of what makes OOP environments productive compared to raw C. Another
way of saying this is that a lot of the productivity improvement comes from code reuse.
There is probably no place where the OOP claims are more misleading than the claims of code reuse. In fact, code reuse is a complex
and difficult problem--it has been recognized as desirable for decades, and the issues that make it hard are not materially facilitated
by OOP.
In order for me to reuse your code, your code needs to do something that I want done (that's the easy part), and your code needs
to operate within the same model of the program and environment as my code (that's the hard part). OOP addresses some of the gratuitous
problems that occasionally plagued code reuse attempts (for example, issues of data layout), but the fundamental problems are, and
remain, hard.
An example should make this clearer. One of the most common examples of a reused program is a string package (this is particularly
compelling in C++, since C has such limited string handling facilities). Suppose you have written a string package in C++, and I
want to use it in my compiler symbol table. As it happens, many of the strings that a compiler uses while compiling a function do
not need to be referenced after that function has been compiled. This is commonly dealt with by providing an arena-based allocator,
where storage can be allocated out of an arena associated with a function, and then the whole arena can be discarded when the function
has been processed. This minimizes the chance of memory leaks and makes the deallocation of storage essentially free (Similar techniques
are used to handle transaction-based storage in a transaction processing system, etc.).
So, I want to use your string package, but I want your string package to use my arena-based allocator. But, almost certainly,
you have encapsulated knowledge of storage allocation so that I can't have any contact with it (that is a feature of OOP,
after all), so I can't use your package with my storage allocator. Actually, I would probably have more luck reusing your package
had it been in C, since I could supply my own malloc and free routines (although that has its own set of
problems).
If you had designed your string package to allow me to specify the storage allocator, then I could use it. But this just makes
the point all the more strongly. The reason we do not reuse code is that most code is not designed to be reused (notice I said nothing
about implementation). When code is designed to be reused (the C standard library comes to mind) it doesn't need object
oriented techniques to be effective. I will have more to say about reuse by inheritance below.
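To make the point concrete, here is a minimal sketch (not the author's package; the class names Allocator, ArenaAllocator, and String are hypothetical) of a string type that takes its allocator as an explicit constructor argument, which is one way the package could have been "designed to be reused" with an arena-based allocator:

#include <cstdlib>
#include <cstring>
#include <vector>

struct Allocator {                        // abstract allocation interface
    virtual void* allocate(std::size_t n) = 0;
    virtual void deallocate(void* p) = 0;
    virtual ~Allocator() {}
};

struct MallocAllocator : Allocator {      // default policy: plain malloc/free
    void* allocate(std::size_t n) { return std::malloc(n); }
    void deallocate(void* p) { std::free(p); }
};

struct ArenaAllocator : Allocator {       // arena policy: free everything at once
    std::vector<void*> blocks;
    void* allocate(std::size_t n) { void* p = std::malloc(n); blocks.push_back(p); return p; }
    void deallocate(void*) {}             // individual frees are no-ops
    ~ArenaAllocator() { for (std::size_t i = 0; i < blocks.size(); ++i) std::free(blocks[i]); }
};

class String {                            // the "reusable" string package
    Allocator& alloc;
    char* data;
    String(const String&);                // copying elided to keep the sketch short
public:
    String(const char* s, Allocator& a) : alloc(a) {
        data = static_cast<char*>(alloc.allocate(std::strlen(s) + 1));
        std::strcpy(data, s);
    }
    ~String() { alloc.deallocate(data); }
    const char* c_str() const { return data; }
};

int main() {
    MallocAllocator heap;
    String name("general-purpose string", heap);   // ordinary heap allocation
    {
        ArenaAllocator arena;                      // e.g., a per-function compiler arena
        String sym("symbol", arena);
        String tab("table", arena);
    }                                              // the whole arena is discarded here at once
    return 0;
}

This is essentially the design decision the author describes as "designed to be reused"; C++ later took a similar route by making the allocator a template parameter of std::basic_string.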
One of the major long-term advantages of object-oriented techniques may be that it can support broad algorithmic reuse, of a style
similar to the Standard Template Library of C++. However, the underlying language is enormously overbuilt
for such support, allowing all sorts of false traps and dead-ends for the unwary. The Standard Template Library took
several generations and a dozen of the best minds in the C++ community to reach its current state, and it's no mistake that several
of the early generations were coded in Ada and SCHEME--its power is not in the language, but in the ideas.
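As an illustration of the kind of algorithmic reuse meant here, consider the following small sketch: a single generic algorithm works unchanged over unrelated containers, and nothing about it depends on objects or inheritance, only on the iterator conventions the containers agree to follow.

#include <algorithm>
#include <iostream>
#include <list>
#include <vector>

int main() {
    std::vector<int> v;            // contiguous container
    std::list<int> l;              // linked container, unrelated to vector
    for (int i = 0; i < 10; ++i) { v.push_back(i % 3); l.push_back(i % 3); }

    // std::count is written once and works on both containers, purely by
    // convention on what an iterator must support.
    std::cout << std::count(v.begin(), v.end(), 0) << "\n";   // prints 4
    std::cout << std::count(l.begin(), l.end(), 0) << "\n";   // prints 4
    return 0;
}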
The final advantage claimed for OOP is higher quality code. Here again, there is a germ of truth to
this claim, since some problems with older methods (such as name clashes in libraries) are harder to make and easier to detect using
OOP. To the extent that we can reuse "known good" code, our quality will increase--this doesn't depend on OOP. However, basically
code quality depends on intelligent design, an effective implementation process, and aggressive testing. OOP does not address the
first or last step at all, and falls short in the implementation step.
For example, we might wish to enforce some simple style rules on our object implementations, such as requiring that every object
have a print method or a serialize method for dumping the object to disc. The best that many object-oriented
systems can do is provide you (or, rather, your customer) with a run-time error when you try to dump an object to disc that has not
defined such a method (C++ actually does a bit better than that). Many of the more dynamically typed systems, such as SmallTalk or
PenPoint, do not provide any typing of arguments of messages, or enforce any conventions as to which messages can be sent to which
objects. This makes messages as unstructured as GOTO's were in the 1970's, with a similar impact on correctness and quality.
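A minimal sketch of the compile-time variant of such a style rule, assuming a hypothetical Persistent base class: declaring serialize() as a pure virtual member means a class that forgets to define it simply cannot be instantiated, so the mistake is caught by the compiler rather than by the customer at run time.

#include <iostream>

class Persistent {
public:
    virtual void serialize(std::ostream& os) const = 0;  // required of every object
    virtual ~Persistent() {}
};

class Account : public Persistent {
    int balance;
public:
    Account(int b) : balance(b) {}
    void serialize(std::ostream& os) const { os << "Account " << balance << "\n"; }
};

class Ledger : public Persistent {
    // serialize() accidentally omitted
};

int main() {
    Account a(42);
    a.serialize(std::cout);     // fine
    // Ledger l;                // error: cannot declare 'l' to be of abstract
                                // type 'Ledger' -- caught at compile time
    return 0;
}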
One of the most unfortunate effects of the OOP bandwagon is that it encourages the belief that how you speak is more important
than what you say. It is rather like suggesting that if someone uses perfect English grammar they must be truthful. It is
what you say, and not how you say it.
... ... ...
He said that She said that He had Halitosis
Using a computer language is a social, and even political act, akin to voting for a candidate or buying
a certain brand of car. As such, our choices are open to manipulation by marketeers, influence by fads, and various forms of rationalization
by those who were burned and have trouble admitting it. In particular, much of what is "known" about a language is
something that was true, or at least widely believed, at one point in the language's history, but may not be true currently. At one point,
"everybody" knew that PL/I had no recursive functions, ALGOL 68 was too big a language to be useful, Ada was too slow, and C could
not be used for numerical problems. Some of these beliefs were never true, and none of them are true now, but they are still widely
held. It is worth looking at OOP in this light.
Some of the image manipulators target nontechnical people such as our bosses and customers, and may try to persuade them that
OOP would solve their problems. As we have seen, however, many of the things that are "true" of OOP (for example, that it makes reuse
easy) are difficult to justify when you look more carefully. As professionals, it is our responsibility to ask whether moving to
OOP is in the best interests of ourselves, our company, or our profession. We must also have the courage to reject the fad when it
is a diversion or will not meet our needs. We must also make this decision anew for each project, considering all the potential factors.
Realistically, the answer will probably be that some projects should use OOP, others should not, and for a fair number in the middle
it doesn't matter very much.
Summary
The only way to construct good software is to think about it. Since the scope of problems that software attempts to address is
so vast, the kinds of solutions that we need are also vast. OOP is a good tool to have in our toolbox, and there are places where
it is my tool of choice. But there are also places where I would avoid it like the plague. It is important to all of us that we continue
to have that option.
OO is definitely overkill for a lot of web projects. It seems to me that so many people
use OO frameworks like Ruby and Zope because "it's enterprise level". But using an 'enterprise' framework
for small to medium sized web applications just adds so much overhead and frustration at having to learn the framework that it just
doesn't seem worth it to me. Having said all this I must point out that I'm distrustful
of large corporations and hate their dehumanizing hierarchical structure. Therefore I am naturally drawn towards open
source and away from the whole OO/enterprise/hierarchy paradigm. Maybe people want to push open source to the enterprise level in
the hope that they will adopt the technology and therefore they will have more job security. Get over it - go and learn Java and
.NET if you want job security and preserve open source software as an oasis of freedom away from the corporate world. Just my 2c.
===
OOP has its place, but the diversity of frameworks is just as challenging to figure out as a new class you didn't write, if not
more. None of them work the same or keep a standard convention between them that makes learning them easier.
Frameworks are great, but sometimes I think maybe they don't all have to be OO. I keep
a small personal library of functions I've (and others have) written procedurally and include them just like I would a class. Beyond
the overhead issues is complexity. OOP has you chasing declarations over many files to figure out what's happening. If you're trying
to learn how that unique class you need works, it can be time consuming to read through it and see how the class is structured. By
the time you're done you may as well have written the class yourself, at least by then you'd have a solid understanding. Encapsulation
and polymorphism have their advantages, but the cost is complexity which can equal time. And for smaller projects that will likely
never expand, that time and energy can be a waste.
Not trying to bash OOP, just to defend procedural style. They each have their place.
===
Sorry, but I don't like your text, because you mix Ruby and Ruby on Rails a lot. Ruby is in my opinion easier to use than PHP,
because PHP has no design principle besides "make it work, somehow easy to use". Ruby has some really cool stuff I miss quite often
when I have to program in PHP again (blocks, for example), and has a clearer, more logical syntax.
Ruby on Rails is of course not that easy to use, at least when speaking about small-scale projects. This is because it does a
lot more than PHP does. Of course, there are other good reasons to prefer PHP over Rails (like the better support by providers,
more modules, more documentation), but in my opinion, most projects done in PHP that reach the complexity of a blog could profit from
being programmed in Rails, from a purely technical point of view. At least I won't program in PHP again unless a customer asks me.
===
I have a reasonable level of experience with PHP and Python but unfortunately haven't touched Ruby yet. They both seem to be a
good choice for low complexity projects. I can even say that I like Python a lot. But I would never consider it again for projects
where design is an issue. They also say it is for (rapid) prototyping. My experience is that as long as you can't afford a proper
IDE Python is maybe the best place to go to. But a properly "equipped" environment can formidably boost
your productivity with a statically typed language like Java. In that case Python's advantage shrinks to the benefits
of quick tests accessible through its command line.
Another problem of Python is that it wants to be everything: simple and complete, flexible and structured,
high-level while allowing for low-level programming. The result is a series of obscure features.
Having said all that I must give Python all the credits of a good language. It's just not perfect. Maybe it's Ruby. My apologies
for not sticking too closely to the subject of the article.
===
The one thing I hate is OOP geeks trying to prove that they can write code that does nothing useful and that nobody understands.
"You don't have to use OOP in ruby! You can do it PHP way! So you better do your homework before making
such statements!"
Then why use ruby in the first place?
"What is really OVERKILL to me, is to know the hundreds of functions, PHP provides out of the box, and available in ANY scope!
So I have to be extra carefull wheter I can use some name. And the more functions - the bigger the MESS."
On the other hand, in ruby you use only functions avaliable for particullar object you use.
I would rather say: "some text".length than strlen("some text"); which is much more meaningful! Ruby language itself much more
descriptive. I remember myself, from my old PHP days, heaving alwayse to look up the php.net for appropriate function, but now I
can just guess!"
Yeah, you must have a weak memory if you can't remember whether strlen() is for strings or for numbers….
Doesn't Ruby have the same number of functions, just stored in objects?
Look, if you can't remember strlen, then invent your own classes - you can make a whole useless OOP framework for PHP in a day……
"I don't predict the demise of object-oriented programming, by the way. Though I don't think it has much to offer good programmers,
except in certain specialized domains, it is irresistible to large organizations. Object-oriented programming offers a sustainable way
to write spaghetti code. It lets you accrete programs as a series of patches. Large organizations always tend to develop software this
way, and I expect this to be as true in a hundred years as it is today." -- This is a pretty interesting observation
which links some (objectionable) properties of OO with the properties of the large organizations.
April 2003 | Keynote from PyCon2003
...I have a hunch that the main branches of the evolutionary tree pass through the languages that have the smallest, cleanest
cores. The more of a language you can write in itself, the better.
...Languages evolve slowly because they're not really technologies. Languages are notation. A program is a formal description
of the problem you want a computer to solve for you. So the rate of evolution in programming languages is more like the rate of evolution
in mathematical notation than, say, transportation or communications. Mathematical notation does evolve, but not with the giant leaps
you see in technology.
...I learned to program when computer power was scarce. I can remember taking all the spaces out of my Basic programs so they
would fit into the memory of a 4K TRS-80. The thought of all this stupendously inefficient software burning up cycles doing the same
thing over and over seems kind of gross to me. But I think my intuitions here are wrong. I'm like someone who grew up poor, and can't
bear to spend money even for something important, like going to the doctor.
Some kinds of waste really are disgusting. SUVs, for example, would arguably be gross even if they ran on a fuel which would never
run out and generated no pollution. SUVs are gross because they're the solution to a gross problem. (How to make minivans look more
masculine.) But not all waste is bad. Now that we have the infrastructure to support it, counting the minutes of your long-distance
calls starts to seem niggling. If you have the resources, it's more elegant to think of all phone calls as one kind of thing, no
matter where the other person is.
There's good waste, and bad waste. I'm interested in good waste-- the kind where, by spending more, we can get simpler designs.
How will we take advantage of the opportunities to waste cycles that we'll get from new, faster hardware?
The desire for speed is so deeply engrained in us, with our puny computers, that it will take a conscious effort to overcome it.
In language design, we should be consciously seeking out situations where we can trade efficiency for even the smallest increase
in convenience.
Most data structures exist because of speed. For example, many languages today have both strings and
lists. Semantically, strings are more or less a subset of lists in which the elements are characters. So why do you need a separate
data type? You don't, really. Strings only exist for efficiency. But it's lame to clutter up the semantics of the language with hacks
to make programs run faster. Having strings in a language seems to be a case of premature optimization.
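A small sketch of this claim, in C++ for continuity with the rest of this page: once a string is viewed as a plain sequence of characters, generic "list" operations apply to it unchanged, so a separate string type buys efficiency rather than new semantics.

#include <algorithm>
#include <iostream>
#include <list>
#include <string>

int main() {
    std::string s = "hello world";
    std::list<char> l(s.begin(), s.end());   // same characters, viewed as a list

    // the same generic operations work on both representations
    std::cout << std::count(s.begin(), s.end(), 'l') << "\n";  // prints 3
    std::cout << std::count(l.begin(), l.end(), 'l') << "\n";  // prints 3

    std::reverse(s.begin(), s.end());
    std::cout << s << "\n";   // "dlrow olleh" -- the string behaves as a
                              // character sequence throughout
    return 0;
}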
... Inefficient software isn't gross. What's gross is a language that makes programmers do needless work. Wasting programmer time
is the true inefficiency, not wasting machine time. This will become ever more clear as computers get faster
...Somehow the idea of reusability got attached to object-oriented programming in the 1980s, and no amount of evidence to the
contrary seems to be able to shake it free. But although some object-oriented software is reusable, what makes it reusable is its
bottom-upness, not its object-orientedness. Consider libraries: they're reusable because they're language, whether they're written
in an object-oriented style or not.
I don't predict the demise of object-oriented programming, by the way. Though I don't think it has much to offer good programmers,
except in certain specialized domains, it is irresistible to large organizations. Object-oriented programming offers a sustainable
way to write spaghetti code. It lets you accrete programs as a series of patches. Large organizations always tend to develop software
this way, and I expect this to be as true in a hundred years as it is today.
...As this gap widens, profilers will become increasingly important. Little attention
is paid to profiling now. Many people still seem to believe that the way to get fast applications is to write compilers that generate
fast code. As the gap between acceptable and maximal performance widens, it will become increasingly clear that the way to get fast
applications is to have a good guide from one to the other.
...One of the most exciting trends in the last ten years has been the rise of open-source languages like Perl, Python, and Ruby.
Language design is being taken over by hackers. The results so far are messy, but encouraging. There
are some stunningly novel ideas in Perl, for example. Many are stunningly bad, but that's always true of ambitious
efforts. At its current rate of mutation, God knows what Perl might evolve into in a hundred years.
...One helpful trick here is to use the length of the program
as an approximation for how much work it is to write. Not the length in characters, of course, but the length in distinct syntactic
elements-- basically, the size of the parse tree. It may not be quite true that the shortest program
is the least work to write, but it's close enough that you're better off aiming for the solid target of brevity than the fuzzy, nearby
one of least work. Then the algorithm for language design becomes: look at a program and
ask, is there any way to write this that's shorter?
Demonstration of how statically checkable rules can prevent the problem from occurring [a separate document]
A more formal and general presentation of this topic is given in a paper and a talk at a Monterey 2001 workshop (June 19-21, 2001,
Monterey, CA): Subtyping-OOP.ps.gz [35K] and
MTR2001-Subtyping-talk.ps.gz [67K]
Does OOP really separate interface from implementation?
Decoupling of abstraction from implementation is one of the holy grails of good design. Object-oriented programming in general
and encapsulation in particular are claimed to be conducive to such separation, and therefore to more reliable code. In the end,
productivity and quality are the only true merits a programming methodology is to be judged upon. This article is to show a very
simple example that questions if OOP indeed helps separate interface from implementation. The example is a very familiar one, illustrating
the difference between subclassing and subtyping. The article carries this example of Bags and Sets one step further, to a rather
unsettling result. The article set out to follow good software engineering; this makes the resulting failure even more ominous.
The article aims to give a more-or-less "real" example, which one can run and see the result for himself. By necessity the example
had to be implemented in some language. The present article uses C++. It appears however that similar code (with similar conclusions)
can be carried on in many other OO languages (e.g., Java, Python, etc).
Suppose I was given a task to implement a Bag -- an unordered collection of possibly duplicate items (integers in this example).
I chose the following interface:
typedef int const * CollIterator; // Primitive but will do
class CBag {
  public:
    int size(void) const;              // The number of elements in the bag
    virtual void put(const int elem);  // Put an element into the bag
    int count(const int elem) const;   // Count the number of occurrences
                                       // of a particular element in the bag
    virtual bool del(const int elem);  // Remove an element from the bag
                                       // Return false if the element
                                       // didn't exist
    CollIterator begin(void) const;    // Standard enumerator interface
    CollIterator end(void) const;
    CBag(void);
    virtual CBag * clone(void) const;  // Make a copy of the bag
  private:
    // implementation details elided
};
Other useful operations of the CBag package are implemented without the knowledge of CBag's internals. The functions below use only
the public interface of the CBag class:
// Standard "print-on" operator
ostream& operator << (ostream& os, const CBag& bag);
// Union (merge) of the two bags
// The return type is void to avoid complications with subclassing
// (which is incidental to the current example)
void operator += (CBag& to, const CBag& from);
// Determine if CBag a is subbag of CBag b
bool operator <= (const CBag& a, const CBag& b);
inline bool operator >= (const CBag& a, const CBag& b)
{ return b <= a; }
// Structural equivalence of the bags
// Two bags are equal if they contain the same number of the same elements
inline bool operator == (const CBag& a, const CBag& b)
{ return a <= b && a >= b; }
It has to be stressed that the package was designed to minimize the number of functions that need to know details of CBag's implementation.
Following good practice, I wrote validation code (file vCBag.cc
[Code]) that tests all the functions
and methods of the CBag package and verifies common invariants.
Suppose you are tasked with implementing a Set package. Your boss defined a set as an unordered collection where each element
has a single occurrence. In fact, your boss even said that a set is a bag with no duplicates. You have found my CBag package
and realized that it can be used with few additional changes. The definition of a Set as a Bag, with some constraints, made the decision
to reuse the CBag code even easier.
class CSet : public CBag {
  public:
    bool memberof(const int elem) const { return count(elem) > 0; }
    // Overriding of CBag::put
    void put(const int elem)
      { if (!memberof(elem)) CBag::put(elem); }
    CSet * clone(void) const
      { CSet * new_set = new CSet(); *new_set += *this; return new_set; }
    CSet(void) {}
};
The definition of a CSet makes it possible to mix CSets and CBags, as in set += bag; or bag += set;
These operations are well-defined, keeping in mind that a set is a bag that happens to have the count of all members exactly one.
For example, set += bag; adds all elements from a bag to a set, unless they are already present. bag += set;
is no different than merging a bag with any other bag.
You too wrote a validation suite to test all CSet methods (newly defined and inherited from a bag) and to verify common expected
properties, e.g., a+=a is a.
In my package, I have defined and implemented a function:
// A sample function. Given three bags a, b, and c, it decides
// if a+b is a subbag of c
bool foo(const CBag& a, const CBag& b, const CBag& c)
{
  CBag & ab = *(a.clone());  // Clone a to avoid clobbering it
  ab += b;                   // ab is now the union of a and b
  bool result = ab <= c;
  delete &ab;
  return result;
}
It was verified in the regression test suite. You have tried this function on sets, and found it satisfactory.
Later on, I revisited my code and found my implementation of foo() inefficient. Memory for the ab object is unnecessarily
allocated on the heap. I rewrote the function as
bool foo(const CBag& a, const CBag& b, const CBag& c)
{
  CBag ab;
  ab += a;                   // Clone a to avoid clobbering it
  ab += b;                   // ab is now the union of a and b
  bool result = ab <= c;
  return result;
}
It has exactly the same interface as the original foo(). The code hardly changed. The behavior of the new implementation is also
the same -- as far as I and the package CBag are concerned. Remember, I have no idea that you're re-using my package. I re-ran the
regression test suite with the new foo(): everything tested fine.
However, when you run your code with the new implementation of foo(), you notice that something has changed! You can see
this for yourself: download the complete code from
[Code]. make vCBag1 and
make vCBag2 run validation tests with the first and the second implementations of foo(). Both tests complete successfully,
with the identical results. make vCSet1 and make vCSet2 test the CSet package. The tests -- other than
those of foo() -- all succeed. Function foo() however yields markedly different results. It is debatable which implementation of
foo() gives truer results for CSets. In any case, changing internal algorithms of a pure function foo() while keeping the
same interfaces is not supposed to break your code. What happened?
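What follows is a minimal, self-contained reconstruction (not the author's actual CBag package, whose internals are elided above) that reproduces the effect. The first implementation builds ab by cloning a; since clone() is virtual, cloning a CSet yields a CSet, and the duplicates contributed by b are silently dropped. The second implementation builds ab as a plain stack-allocated CBag, so the duplicates accumulate, and the two versions of foo() disagree on the very same arguments.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

class CBag {
public:
    virtual void put(const int elem) { elems.push_back(elem); }
    int count(const int elem) const {
        return static_cast<int>(std::count(elems.begin(), elems.end(), elem));
    }
    virtual CBag* clone() const { return new CBag(*this); }
    virtual ~CBag() {}
    std::vector<int> elems;   // public only to keep the sketch short
};

// a <= b iff every element occurs in b at least as often as in a
bool operator<=(const CBag& a, const CBag& b) {
    for (std::size_t i = 0; i < a.elems.size(); ++i)
        if (a.count(a.elems[i]) > b.count(a.elems[i])) return false;
    return true;
}

void operator+=(CBag& to, const CBag& from) {
    for (std::size_t i = 0; i < from.elems.size(); ++i) to.put(from.elems[i]);
}

class CSet : public CBag {
public:
    void put(const int elem) { if (count(elem) == 0) CBag::put(elem); }
    CSet* clone() const { return new CSet(*this); }
};

// First implementation: ab is a clone of a; if a is really a CSet, so is ab,
// and CSet::put silently drops the duplicates coming from b.
bool foo1(const CBag& a, const CBag& b, const CBag& c) {
    CBag& ab = *(a.clone());
    ab += b;
    bool result = ab <= c;
    delete &ab;
    return result;
}

// Second implementation: ab is a plain CBag on the stack, so duplicates
// from a and b accumulate even when the arguments are CSets.
bool foo2(const CBag& a, const CBag& b, const CBag& c) {
    CBag ab;
    ab += a;
    ab += b;
    return ab <= c;
}

int main() {
    CSet a, b, c;
    a.put(1); b.put(1); c.put(1);        // a, b, c are all the set {1}
    std::cout << foo1(a, b, c) << "\n";  // 1: ab stays {1}, a subbag of c
    std::cout << foo2(a, b, c) << "\n";  // 0: ab becomes {1,1}, not a subbag of c
    return 0;
}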
What makes this problem more unsettling is that both you and I tried to do everything by the book. We wrote safe, typechecked
code. We eschewed casts. The g++ (2.95.2) compiler with flags -W and -Wall issued not a single warning. Normally these flags cause g++
to become very annoying. You didn't try to override methods of CBag to deliberately break the CBag package. You attempted to preserve
CBag's invariants (weakening a few as needed). Real-life classes usually have far more obscure algebraic properties. We both wrote
regression tests for our implementations of a CBag and a CSet, and they passed. And yet, despite all my efforts to separate interface
and implementation, I failed. Should a programming language or the methodology take at least a part of the blame?
[OOP-problems]
Subtyping vs. Subclassing
The problem with CSet is caused by CSet design's breaking of the Liskov Substitution Principle (LSP)
[LSP]. CSet has been declared as a
subclass of CBag. Therefore, C++ compiler's typechecker permits passing a CSet object or a CSet reference to a function that
expects a CBag object or reference. However, it is well known
[Subtyping-Subclassing]
that a CSet is not a subtype of a CBag. The next few paragraphs give a simple proof of this fact, for the sake of reference.
One approach is to consider Bags and Sets as pure values, without any state or intrinsic behavior -- just like integers
are. This approach is taken in the next article,
Preventing-Trouble.html. The other
point of view -- the one used in this article -- is Object-Oriented Programming, of objects that encapsulate state and behavior.
Behavior means an object can accept a message, send a reply and possibly change its state. Let us consider a Bag and a Set separately,
without regard to their possible relationship. Throughout this section we use a different, concise notation to emphasize the general
nature of the argument.
We will define a Bag as an object that accepts two messages:
(send a-Bag 'put x)
puts an element x into the Bag, and
(send a-Bag 'count x)
gives the count of occurrences of x in the Bag (without changing a-Bag's state).
Likewise, a Set is defined as an object that accepts two messages:
(send a-Set 'put x)
puts an element x into a-Set unless it was already there,
(send a-Set 'count x)
gives the count of occurrences of x in a-Set (which is always either 0 or 1).
Let's consider a function
(define (fnb bag)
  (send bag 'put 5)
  (send bag 'put 5)
  (send bag 'count 5))
The behavior of this function can be summed up as follows: given a Bag, the function adds two elements to it and returns (+ 2 (send orig-bag 'count 5))
Technically you can pass to fnb a Set object as well. Just as a Bag, a Set object accepts messages put
and count. However, applying fnb to a Set object will break the function's post-condition, which is stated
above. Therefore, passing a Set object where a Bag was expected changes the behavior of some program. According to the Liskov Substitution
Principle (LSP), a Set is not substitutable for a Bag -- a Set cannot be a subtype of a Bag.
Let's consider a function
(define (fns set)
  (send set 'put 5)
  (send set 'count 5))
The behavior of this function is: given a Set, the function adds an element into it and returns 1. If you pass to this function a
bag (which -- just as a set -- replies to messages put and count), the function fns may return
a number greater than 1. This will break fns's contract, which promised always to return 1.
Therefore, from the OO point of view, neither a Bag nor a Set is a subtype of the other. This is the crux of the problem. Bag
and Set only appear similar. The interface or an implementation of a Bag and a Set appears to invite subclassing of a
Set from a Bag (or vice versa). Doing so however will violate the LSP -- and you have to brace for very subtle errors. The previous
section intentionally broke the LSP to demonstrate how insidious the errors are and how difficult it may be to find them. Sets and
Bags are very simple types, far simpler than the ones you deal with in production code. Alas, the LSP, when considered from an OOP point
of view is undecidable. You cannot count on a compiler for help in pointing out an error. You cannot rely on regression tests
either. It's manual work -- you have to see the problem
[OOP-problems].
Subtyping and Immutability
One may claim that "A Set *is not a* Bag, but an ImmutableSet *is an* ImmutableBag." That is not correct. An immutability per se
does not confer subtyping to "derived" classes of data. As an example, consider a variation of the previous argument. We will use
a C++ syntax for a change. The examples will hold if re-written in Java, Haskell, Self or any other language with a native or emulated
OO system.
class BagV {
  virtual BagV put(const int) const;
  int count(const int) const;
  ...  // other similar const members
};

class SetV {
  virtual SetV put(const int) const;
  int count(const int) const;
  ...  // other similar const members
};
Instances of BagV and SetV classes are immutable, yet the classes are not subtypes of each other. To see that, let us consider
a polymorphic function
  int f(const BagV& bag) { return bag.put(1).count(1); }
Over a set of BagV instances, the behavior of this function can be represented by the invariant f(bag) == 1 + bag.count(1)
If we take an object asetv = SetV().put(1) and pass it to f(), the invariant above will be broken. Therefore,
by LSP, a SetV is not substitutable for BagV: a SetV is not a BagV.
In other words, if one defines
int fb(const BagV& bag) { return bag.put(1).count(1); }
he can potentially pass a SetV instance to it: e.g., either by making SetV a subclass of BagV, or by reinterpret_cast<const
BagV&>(aSetV). Doing so will generate no overt error; yet this will break fb()'s invariant and alter program's behavior in
unpredictable ways. A similar argument will show that BagV is not a subtype of SetV.
C++ objects are record-based. Subclassing is a way of extending records, possibly altering some slots in the parent record.
Those slots must be designated as modifiable by the keyword virtual. In this context, prohibiting mutation and overriding makes subclassing
imply subtyping. This was the reasoning behind BRules [Preventing-Trouble.html].
However merely declaring the state of an object immutable is not enough to guarantee that derivation leads to subtyping: An object
can override parent's behavior without altering the parent. This is easy to do when an object is implemented as a functional closure,
when a handler for an incoming message is located with the help of some kind of reflexive facilities, or in prototype-based OO systems.
Incidentally, if we do permit a derived object to alter its base object, we implicitly allow behavior overriding. For example, an object
A can react to a message M by forwarding the message to an object B stored in A's
slot. If an object C derived from A alters that slot it hence overrides A's behavior with
respect to M.
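A hedged sketch of that delegation scenario in C++ (all names here are hypothetical): A answers message M only by forwarding it to whatever handler sits in its slot, and C, without overriding any method of A, replaces the slot in its constructor and thereby changes A's observable behavior for M.

#include <iostream>

struct Handler {
    virtual int handleM() const { return 1; }
    virtual ~Handler() {}
};

struct LoudHandler : Handler {
    int handleM() const { return 100; }
};

class A {
public:
    A() : slot(new Handler()) {}
    virtual ~A() { delete slot; }
    int M() const { return slot->handleM(); }  // A only forwards M to its slot
protected:
    Handler* slot;                             // the forwarding slot
};

class C : public A {
public:
    C() { delete slot; slot = new LoudHandler(); }  // alters the inherited slot only
};

int main() {
    A a;
    C c;
    std::cout << a.M() << "\n";  // prints 1
    std::cout << c.M() << "\n";  // prints 100: M's behavior is overridden
                                 // without overriding any method of A
    return 0;
}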
For example, http://pobox.com/~oleg/ftp/Scheme/index.html#pure-oo
implements a purely functional OO system. It supports objects with an identity, state and behavior, inheritance and polymorphism.
Everything in that system is immutable. And yet it is possible to define something like a BagV, and derive SetV from it by
overriding a put message handler. Acting this way is bad and invites trouble as this breaks the LSP as shown earlier.
Yet it is possible. This example shows that immutability per se does not turn object derivation into subtyping.
"I've seen so many problems caused by excessive, slavish adherence to OOP in production applications. Not that object oriented programming
is inherently bad, mind you, but a little OOP goes a very long way. Adding objects to your code is like adding salt to a dish: use a
little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's better to err on the side of simplicity,
and I tend to favor the approach that results in less code, not more. "
Object-oriented programming generates a lot of what looks like work. Back in the days of fanfold, there was a type of programmer
who would only put five or ten lines of code on a page, preceded by twenty lines of elaborately formatted comments. Object-oriented
programming is like crack for these people: it lets you incorporate all this scaffolding right into your source code. Something
that a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file of classes and methods. So it is a good tool
if you want to convince yourself, or someone else, that you are doing a lot of work.
Eric Lippert observed a similar occupational hazard among developers. It's something he calls
object happiness.
What I sometimes see when I interview people and review code is symptoms of a disease I call Object Happiness. Object Happy people
feel the need to apply principles of OO design to small, trivial, throwaway projects. They invest lots of unnecessary time making
pure virtual abstract base classes -- writing programs where IFoos talk to IBars but there is only one implementation of each
interface! I suspect that early exposure to OO design principles divorced from any practical context that motivates those principles
leads to object happiness. People come away as OO True Believers rather than OO pragmatists.
I've seen so many problems caused by excessive, slavish adherence to OOP in production applications. Not that object oriented
programming is inherently bad, mind you, but a little OOP goes a very long way. Adding objects to your code is like adding
salt to a dish: use a little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's better to err
on the side of simplicity, and I tend to favor the approach that results in less code, not more.
Given my ambivalence about all things OO, I was amused when Jon Galloway
forwarded me a link to Patrick Smacchia's web page.
Patrick is a French software developer. Evidently the acronym for object oriented programming is spelled a little differently in
French than it is in English: POO.
Actually Spolsky does not understand the role of scripting languages. But he is right on target with his critique of OO. Object-oriented
programming is no silver bullet.
Joel Spolsky is one of our most celebrated pundits on the practice of software development, and he's full of terrific insight.
In a recent blog post, he decries the fallacy of "Lego
programming" -- the all-too-common assumption that sophisticated new tools will make writing applications as easy as snapping
together children's toys. It simply isn't so, he says -- despite the fact that people have been claiming it for decades -- because
the most important work in software development happens before a single line of code is written.
By way of support, Spolsky reminds us of a quote from the most celebrated pundit of an earlier generation of developers. In his
1987 essay "No Silver Bullet,"
Frederick P. Brooks wrote, "The essence of a software entity is a construct of interlocking concepts
... I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the
labor of representing it and testing the fidelity of the representation ... If this is true, building software will always be hard.
There is inherently no silver bullet."
As Spolsky points out, in the 20 years since Brooks wrote "No Silver Bullet," countless products have reached the market heralded
as the silver bullet for effortless software development. Similarly, in the 30 years since Brooks published "
The Mythical Man-Month" -- in which, among other
things, he debunks the fallacy that if one programmer can do a job in ten months, ten programmers can do the same job in one month
-- product managers have continued to buy into various methodologies and tricks that claim to make running software projects as easy
as stacking Lego bricks.
Don't you believe it. If, as Brooks wrote, the hard part of software development is the initial design,
then no amount of radical workflows or agile development methods will get a struggling project out the door, any more than the latest
GUI rapid-development toolkit will.
And neither will open source. Too often, commercial software companies decide to turn over their orphaned software to "the community"
-- if such a thing exists -- in
the naive belief that open source will be a miracle cure to get a flagging project back on track. This is just another fallacy, as
history demonstrates.
In 1998, Netscape released the source code to its Mozilla browser to the public to much fanfare, but only lukewarm response from
developers. As it turned out, the Mozilla source was much too complex and of too poor quality for developers outside Netscape to
understand it. As Jamie Zawinski recounts, the resulting decision
to rewrite the browser's rendering engine from scratch set the project back anywhere from six to ten months.
This is a classic example of the fallacy of the mythical man-month. The problem with the Mozilla code was poor design, not lack
of an able workforce. Throwing more bodies at the project didn't necessarily help; it may have even hindered it. And while implementing
a community development process may have allowed Netscape to sidestep its own internal management problems, it was certainly no silver
bullet for success.
The key to developing good software the first time around is doing the hard work at the beginning: good design, and rigorous testing
of that design. Fail that, and you've got no choice but to take the hard road. As Brooks observed all those years ago, successful
software will never be easy. No amount of open source process will change that, and to think otherwise is just more Lego-programming
nonsense.
I participated in a debate on the question "Objects Have Failed" at OOPSLA 2002 in Seattle, Washington. My teammate was Brian
Foote, and our opponents were Guy L. Steele Jr. and James Noble. My opening remarks were scripted, as were Guy Steele's, and my rebuttals
were drawn from an extensive set of notes.
"Obsessive embrace has spawned a search for purity that has become an ideological weapon, promoting an incremental advance as the
ultimate solution to our software problems. " ... "Needless to say, object-orientation provides an important lens through which to understand
and fashion systems in the new world, but it simply cannot be the only lens. " ... "And as a result we find that object-oriented languages
have succumbed to static thinkers who worship perfect planning over runtime adaptability, early decisions over late ones, and the wisdom
of compilers over the cleverness of failure detection and repair."
November 6, 2002
Opening remarks
What can it mean for a programming paradigm to fail? A paradigm fails when the narrative it embodies fails to speak truth or when
its proponents embrace it beyond reason. The failure to speak truth centers around the changing needs of software in the 21st century
and around the so-called improvements on OO that have obliterated its original benefits. Obsessive embrace
has spawned a search for purity that has become an ideological weapon, promoting an incremental advance as the ultimate solution
to our software problems. The effect has been to brainwash people on the street. The statement "everything is an object"
says that OO is universal, and the statement "objects model the real world" says that OO has a privileged position. These are very
seductive invitations to a totalizing viewpoint. The result is to starve research and development on alternative paradigms.
Someday, the software we have already written will be a set of measure 0. We have lived through three ages of computing-the first
was machine coding; the second was symbolic assemblers, interpreter routines, and early compilers; and the third was imperative,
procedural, and functional programming, and compiler-based languages. Now we are in the fourth: object-oriented programming. These
first four ages featured single-machine applications. Even though such systems will remain important, increasingly our systems will
be made up of dozens, hundreds, thousands, or millions of disparate components, partial applications, services, sensors, and actuators
on a variety of hardware, written by a variegated set of developers, and it won't be incorrect to say that no one knows how it all
works. In the old world, we focussed on efficiency, resource limitations, performance, monolithic programs, standalone systems, single
author programs, and mathematical approaches. In the new world we will foreground robustness, flexibility, adaptation, distributed
systems, multiple-author programs, and biological metaphors for computing.
Needless to say, object-orientation provides an important lens through which to understand and fashion
systems in the new world, but it simply cannot be the only lens. In future systems, unreliability will be common,
complexity will be out of sight, and anything like carefully crafted precision code will be unrealistic. It's like a city: Bricks
are important for building part of some buildings, but the complexity and complicated way a variety of building materials and components
come together under the control of a multitude of actors with different cultures and goals, talents and proclivities means that the
kind of thinking that goes into bricks will not work at the scale of the city. Bricks are just too limited, and the circumstances
where they make sense are too constrained to serve as a model for building something as diverse and unpredictable as a city. And
further, the city itself is not the end goal, because the city must also-in the best case-be a humane structure for human activity,
which requires a second set of levels of complexity and concerns. Using this metaphor to talk about future computing systems, it's
fair to say that OO addresses concerns at the level of bricks.
The modernist tendency in computing is to engage in totalizing discourse in which one paradigm or one story is expected to supply
all in every situation. Try as they might, OO's promoters cannot provide a believable modernist grand narrative to the exclusion
of all others. OO holds no privileged position. So instead of Java for example embracing all the components developed elsewhere,
its proponents decided to develop their own versions so that all computing would be embraced within the Java narrative.
Objects, as envisioned by the designers of languages like Smalltalk and Actors-long before C++ and Java came around- were for
modeling and building complex, dynamic worlds. Programming environments for languages like Smalltalk were written in those languages
and were extensible by developers. Because the philosophy of dynamic change was part of the post-Simula OO worldview, languages and
environments of that era were highly dynamic.
But with C++ and Java, the dynamic thinking fostered by object-oriented languages was nearly fatally assaulted by the theology
of static thinking inherited from our mathematical heritage and the assumptions built into our views of computing by Charles Babbage
whose factory-building worldview was dominated by omniscience and omnipotence.
And as a result we find that object-oriented languages have succumbed to static thinkers who worship
perfect planning over runtime adaptability, early decisions over late ones, and the wisdom of compilers over the cleverness of failure
detection and repair.
Beyond static types, precise interfaces, and mathematical reasoning, we need self-healing and self-organizing mechanisms, checking
for and responding to failures, and managing systems whose overall complexity is beyond the ken of any single person.
One might think that such a postmodern move would have good consequences, but unlike Perl, the combination was not additive but
subtractive-as if by undercutting what OO was, OO could be made more powerful. This may work as a literary or artistic device,
but the idea in programming is not to teach but to build.
The apparent commercial success of objects and our love affair with business during the past decade have combined to stifle research
and exploration of alternative language approaches and paradigms of computing. University and industrial research communities retreated
from innovating in programming languages in order to harvest the easy pickings from the OO tree. The business frenzy at the end of
the last century blinded researchers to diversity of ideas, and they were into going with what was hot, what was uncontroversial.
If ever there was a time when Kuhn's normal science dominated computing, it was during this period.
My own experience bears this out. Until 1995, when I went back to school to study poetry, my research career centered on the programming
language, Lisp. When I returned in 1998, I found that my research area had been eliminated. I was forced to find new ways to earn
a living within the ecology created by Java, which was busily recreating the computing world in its own image.
Smalltalk, Lisp, Haskell, ML, and other languages languish while C++, Java, and their near-clone C# are the only languages getting
attention. Small languages like Tcl, Perl, and Python are gathering adherents, but are making no progress in language and system
design at all.
Our arguments come in several flavors:
The object-oriented approach does not adequately address the computing requirements of the future.
Object-oriented languages have lost the simplicity - some would say purity - that made them special and which was the source
of their expressive and development power.
Powerful concepts like encapsulation were supposed to save people from themselves while developing software, but encapsulation
fails for global properties or when software evolution and wholesale changes are needed. Open Source handles this better. It's
likely that modularity-keeping things local so people can understand them-is what's really important about encapsulation.
Objects promised reuse, and we have not seen much success.
Despite the early clear understanding of the nature of software development by OO pioneers, the current caretakers of the
ideas have reverted to the incumbent philosophy of perfect planning, grand design, and omniscience inherited from Babbage's theology.
The over-optimism spawned by objects in the late 1990s led businesses to expect miracles that might have been possible with
objects unpolluted by static thinking, and when software developers could not deliver, the outrageous business plans of those
businesses fell apart, and the result was our current recession.
Objects require programming by creating communicating entities, which means that programming is
accomplished by building structures rather than by linguistic expression and description through form, and this often leads to
a mismatch of language to problem domain.
Object design is like creating a story in which objects talk and interact with each other, leading people to expect that learning
object-oriented programming is easy, when in fact it is as hard as ever. Again, business was misled.
People enthused by objects hogged the road, would not get out of the way, would not allow alternatives to be explored-not
through malice but through exuberance-and now resources that could be used to move ahead are drying up. But sometimes this exuberance
was out-and-out lying to push others out of the way.
But in the end, we don't advocate changing the way we work on and with objects and object-oriented languages. Instead, we argue
for diversity, for work on new paradigms, for letting a thousand flowers bloom. Self-healing, self-repair, massive and complex systems,
self-organization, adaptation, flexibility, piecemeal growth, statistical behavior, evolution, emergence, and maybe dozens of other
ideas and approaches we haven't thought of-including new physical manifestations of non-physical action-should be allowed and encouraged
to move ahead.
This is a time for paradigm definition and shifting. It won't always look like science, won't always even appear to be rational;
papers and talks explaining and advocating new ideas might sound like propaganda or fiction or even poetry; narrative will play a
larger role than theorems and hard results. This will not be normal science.
In the face of all this, it's fair to say that objects have failed.
[Feb 14, 2006] OOP Criticism Object Oriented
Programming Oversold by B. Jacobs. OOP criticism and OOP problems. The emperor has no clothes! Reality Check 101. Snake OOil.
5/14/2005
OOP Myths Debunked:
Myth: OOP is a proven general-purpose technique
Myth: OOP models the real world better
Myth: OOP makes programming more visual
Myth: OOP makes programming easier and faster
Myth: OOP eliminates the "complexity" of "case" or "switch" statements
Myth: OOP reduces the number of places that require changing
Myth: OOP increases reuse (recycling of code)
Myth: Most things fit nicely into hierarchical taxonomies
Myth: Sub-typing is a stable way to model differences
Myth: Self-handling nouns are more useful than self-handling verbs
Myth: Most operations have one natural "primary noun"
Myth: OOP does automatic garbage-collection better
Myth: Procedural cannot do components well
Myth: OO databases can better store large, multimedia data
Myth: OODBMS are overall faster than RDBMS
Myth: OOP better hides persistence mechanisms
Myth: C and Pascal are the best procedural can get
Myth: SQL is the best relational language
Myth: OOP would have prevented more Y2K problems
Myth: OOP "does patterns" better
Myth: Only OOP can "protect data"
Myth: Implementation changes significantly more often than interfaces
Myth: Procedural/Relational ties field types and sizes to the code more
Myth: Procedural cannot extend compiled portions very well
Myth: No procedural language can re-compile at the routine level
Myth: Procedural/Relational programs cannot "factor" as well
Myth: OOP models human thought better (Which human?)
Software engineering has two kinds of cargo cult: slavish adherence to process without regard to the effect on product, and reliance
on personal heroics, again without regard to product. In both cases, organizations try to mimic a programming style or paradigm, but only
mimic its external appearance, without understanding the real programming techniques and ideas behind the technology.
The real difference is not which style is chosen, but what education, training, and understanding is brought to
bear on the project. Rather than debating process vs. commitment, we should be looking for ways to raise the average level of developer
and manager competence. That will improve our chances of success regardless of which development style we choose.
I'm fairly sure you could accurately gauge the maturity of a programming team by the amount of superstition
in the source code they produce. Code superstitions are a milder form of
cargo cult software development, in
which you find people writing code constructs that have no conceivable value with respect to the functions that the code is meant
to fulfill.
A recent conversation reminded me of an example I find particularly disturbing. Sample code for dealing
with JDBC is especially prone to being littered with this error, as shown below. (I suspect that is not coincidental;
I'll be coming back to that.) A typical sample looks like this:
import java.sql.*;

public class JdbcSample {
    public static void main(String[] args) {
        Connection conn = null;
        try {
            conn = DriverManager.getConnection("jdbc:someUrl");
            // ...more JDBC stuff...
        } catch (SQLException ex) {
            // Too often that is silently ignored, but that's another blog entry
        } finally {
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException sqlEx) {
                    conn = null;   // the superstitious line discussed below
                }
            }
        }
    }
}
The "superstition" part is that setting the connection to null can have absolutely no useful effect;
being a local variable, "conn" will become eligible for garbage collection as soon as it goes out of scope anyway - which, as the most
rudimentary analysis of flow control reveals, it does immediately after being set to null.
I am always particularly interested in finding out what goes on in the minds of programmers who write
this kind of thing, because that will sometimes reveal the roots of the superstition. Most of the time, though, if you raise the question
in a design review the programmer will say something like "I copied and pasted it from sample code". This is how the superstitions
spread - and it's also a red flag with respect to the team's practice maturity - but rarely an occasion to gain insight into why
the superstition took hold, which is what you'll need to know in "remedial" training.
Now, the "null" concept, obvious as it seems, is a likely place for superstitions to accrete around.
If you look closely, "null" is anything but obvious. Comparing Java and Smalltalk, for instance, we find that they differ radically
with respect to calling instance methods on null, or "nil" as it's called in Smalltalk; "nil" does have some instance methods you
can call. Also, what is the type of the "null" value in Java ? It is a special type called "the null type", which looks like a sensible
answer but incidentally breaks the consistency of the type system; the only types which are assignable to variables are the type
of the variable or subtypes of that type, so "null type" should be a subclass of every Java class. (It actually works that way in
Eiffel, as Nat Pryce reminds me - see comments.)
See also
here for another example of a
null-related Java superstition, also surprisingly common, as you can verify by Googling for "equals null".
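For concreteness, the kind of code that search turns up is usually some variant of the sketch below; this is a reconstruction for illustration, not the code from the linked example.

    public class EqualsNullSuperstition {
        public static void main(String[] args) {
            String name = args.length > 0 ? args[0] : null;

            // The superstitious "null check": if name is non-null, the Object.equals
            // contract requires equals(null) to return false; if name IS null, the
            // call throws a NullPointerException before it can "detect" anything.
            if (name != null) {
                System.out.println("equals(null) says: " + name.equals(null)); // always false
            }

            // The straightforward check:
            if (name == null) {
                System.out.println("no name supplied");
            }
        }
    }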
In the case of JDBC, I would bet that idioms of resource allocation and deallocation inherited from
non-garbage collected languages, like C, were the main force in establishing the superstition. Even people new to Java get used to
not calling "dispose" or "delete" to deallocate objects, but unfortunately the design of the JDBC "bridges" between the object and
relational worlds suffers from a throwback to idioms of explicit resource allocation/deallocation.
Owing to what many see as a major design flaw in Java, "going out of scope" cannot be relied on as
an indicator that a resource is no longer in use, either, so whenever they deal with JDBC Java programmers are suddenly thrown back
into a different world, one where deallocation is something to think about, like not forgetting your keys at home. And so, in precisely
the same way as I occasionally found myself patting my pockets to check for home keys when I left the office, our fingers
reflexively type in the closest equivalent we find in Java to an explicit deallocation - setting to null.
You may object that the setting-to-null superstition is totally harmless. So is throwing salt over
your shoulder. While this may be true of one particular superstition, I would be particularly concerned about a team which
had many such habits, just like you wouldn't want to trust much of importance your batty old aunt who avoids stepping on cracks,
stays home on Fridays, crosses herself on seeing a black cat, but always sends you candy for Christmas.
Laurent Bossavit
explains the notion of "Cargo Cult" programming - the example being setting a temporary variable to null (i.e., one that is going
out of scope)
What superstitious coding practices does your group have?
Comments
null helps GC yes? no? [john mcintosh] November 15, 2004 19:23:32 EST
I once had a fellow phone me from Hong Kong who explained a performance problem they were having.
Seems that at the end of each method, and in each "destroy" method for a class (used to destroy instances), they would set all
the variables to NULL. The best was of course iterating over thousands of array elements, setting them to NULL, since they felt this
was helping the GC find the NULL (garbaged) variables faster. Once they stopped doing this, the windows just snapped closed....
Empty Java constructor [Jason Dufair] November 16, 2004 10:08:33 EST
I'm on a team doing Java right now. I see a lot of empty Java constructors. Being a Smalltalker making
a living doing Java, I figured they must be there for a reason. Come to find out an empty constructor just calls the super's constructor.
As if it weren't there in the first place. Whee!
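A minimal illustration of the point (my own sketch, not that team's code): the explicit empty constructor below does exactly what the compiler-generated default constructor would do, namely call super().

    public class Widget {
        // Pure ritual: with no constructors declared at all, the compiler would
        // generate an identical no-argument constructor that calls super().
        public Widget() {
            super();
        }
    }

The one case where an explicit no-argument constructor earns its keep is when the class also declares constructors with parameters, because the compiler then stops generating the default one.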
[PDF]This paper is a personal account of my experience of teaching Java programming to undergraduate
and postgraduate students. These students enter their respective subjects with no previous Java programming knowledge. However, the
undergraduate students have previous experience with Visual Basic programming. In contrast, the postgraduate students are enrolled
in a "conversion" course which, in most cases, means that they were unfamiliar with any form of programming language or, in some
cases, some core information technology skills. Irrespective of these differences, I have witnessed how both groups independently
develop what can be described as a trade-based culture, with similarities to 'cargo cults', around the Java language. This
anthropological term provides a useful term of reference, as the focus of programming activity for many students increasingly centres
upon the imitation of code gathered from the lecturer or, in some cases, from each other. This is particularly evident as project deadlines
approach. In extreme examples of this cargo cult fever, students will discard potentially strong project developments that incorporate
many features of good software design in favour of inelegant, cobbled-together code, on the single criterion of better functionality.
In this paper I use the concept of the cargo cult to frame the differing expectations surrounding "learning
Java" that are held by students and their lecturer. I draw upon my own observations and experiences as the teacher in these learning
environments and upon feedback from my most recent cohort of undergraduate students undertaking a BSc(Hons) programme within a UK
university. The student feedback is drawn from a questionnaire containing six questions relating to their experiences and expectations
regarding a Java programming subject. The definition and description of the cargo cult is also used to consider how this relationship
can be established in a way that encourages positive learning outcomes through the obligations and reciprocation associated with
gifts – in this case, clearly labeled gifts of code. The cargo cult and the erroneous form of thinking associated with it provide
a useful framework for understanding the teaching and learning environment in which I taught Java. In this way the interactions
and motivation of students and the lecturers who ultimately share the common goal of obtaining their academic success can be scrutinized
with the aim of improving this experience for all those involved. The cargo cult is not, however, 'simply' an anachronistic
analogy drawn from social anthropology. Cargo cult thinking has been identified within contemporary culture as readily as in tribal
cultures, and with equal significance (Hirsch 2002; Cringely 2001; Fitzgerald 1999; Feynman 1974).
2. Cargo Cult Thinking
It is important to acknowledge that cargo cult thinking is not necessarily the 'wrong' way of thinking
or that this paper seeks to castigate students' study practices. Cargo cult thinking is based, in part, on conclusions drawn from
only partially observed phenomena. In many respects this paper is a reflexive exercise regarding my own teaching practices and an
examination of the ways in which cargo cult thinking can be employed to achieve positive learning outcomes. Nonetheless, despite
this acknowledgement, the actions of cargo cult followers are based upon a "fallacious reasoning" of cause and effect. This could
be summarized in the context of Java programming as the assumption that if I, as a student, write my code like you, the lecturer,
do, or use your code as much as possible, I will be a programmer like you and this is what is required for me to do well - or at
least pass - this subject. However, as teachers of Java it is necessary to acknowledge the – perhaps dormant – presence of
this attitude and to consequently offer offhand code examples with extreme caution. I have repeatedly spotted examples of my own
code embedded within students' projects. Although the code may originally have been offered as a quick and incomplete example of
a concept or a particular line of thinking it can too readily become the cornerstone of larger scale classes without modification.
It is perhaps unsurprising, then, that the cargo cult attitude does develop among students when they are first learning a programming
language and the concepts of programming. The consequence of pursuing this belief unchecked parallels the effects of learning
in a "Java for Dummies" manner. Deeper, conceptual understanding and problem-solving techniques remain undeveloped and students are
left able only to imitate the step-by-step procedures outlined by the textbook. This step-by-step form of explicit instruction
discourages exploration and keeps students from appreciating the learning that occurs when they disentangle Java compiler
errors. This is perhaps one of the most revealing differences between students and lecturers. While experienced programmers
treat compiler errors as useful feedback, new programmers see them as "just one more thing" getting in the way of a successfully executing
application. This suggests a lack of awareness that programming is not synonymous with writing code. The consuming focus
in the majority of undergraduate and postgraduate assessment projects is upon pursuing and obtaining functionality in their code
to the detriment of the user interface, clear documentation, class structures, code reusability, extensibility or reliability.
When code is reused, and especially when code is acquired from outsourced teams or incorporated via Web
services technologies, there's a real opportunity for cargo cult practices to take hold. Source code may follow unfamiliar naming
conventions, and design documents and internal memos may be written in unfamiliar languages, or written in a language we do know
by people who don't speak it very well. We may not even have the source; we may have only WSDL (Web Services Description Language)
or some other interface definition to guide us.
The wooden headphones may bear fancy names like
"design patterns," but they're still an indicator that we may be building systems that look like those that have worked before-instead
of designing from deep understanding toward solutions that meet new needs.
"The first principle is that you must not fool yourself," said the late physicist Richard Feynman
in the 1974 Caltech commencement address that's often considered the origin of the "cargo cult" phrase, at least as used by coders.
That's a good principle. Reusing code that we don't understand or reusing familiar methods merely because we do understand them are
behaviors for which we should be on guard.
In the not-so-recent past, headlines proclaimed, "Software ICs Will Revolutionize Computer Programming." The claim was that this kind of
development would reduce programming to assembling standardized "objects," and that the need for programmers would decline as software "technicians"
with minimal training would develop the software of the future.
Ten years have passed, and this clearly hasn't happened. Skilled programmers are in greater demand, the skill levels
required are higher, and software is harder to develop. The business press says that nirvana is now just around the corner; companies
that have the words "object-oriented" in their business plan are in demand among venture firms. Yet object-oriented methodologies
are over 20 years old. Are today's technological forecasts any more accurate than those of 10 years ago?
This is not to say that OO can't work. There are examples of successful OO projects; usually these are showcase
projects staffed with top developers. For the most part, however, object-oriented technology has not been the "magic bullet." In
this article, I'll briefly discuss some reasons that OO has thus far failed to deliver. More importantly, I'll address some ways
that organizations with average programmers can achieve high levels of reuse and shorten development cycles.
Let me count the ways
The principal benefit cited for object-oriented methodologies is "reuse." This sounds like a valuable benefit;
if we improve reuse, we write less code. Less code means faster development and easier maintenance in the future. Less code also
means fewer chances for bugs, so it indirectly affects product quality. However, industry watchers report that there is only 15 percent
average reuse in today's object-based projects. That's a pretty damning statistic, if true; we did better 20 years ago with COBOL
subroutines! Others have cited different statistics; one major consulting firm reports 25 percent reuse across clients, and some
academic centers report 80 percent reuse. So what's the real story?
All of these figures beg the question: "How do you measure reuse?" Is reuse a measure of code that is referenced
in more than one place? (Subroutines could do that before OO.) Is code referenced in 50 places counted differently from code referenced
in two places? One measure of reuse might be the size of an application developed using OO technology versus one developed using
a different technology. This measurement, however, is impossible to perform, as such systems don't exist. Further, a search of the
literature turns up no widely-used standards for measuring reuse.
Yet another complication is the granularity involved in measuring reuse. The usual unit is the object itself. But
no one looks inside the object. One can create a simple object that can be used for only one specific function. This object can be
made to serve more functions (thus improving its reuse) by adding methods to it. Perhaps, however, the same programming benefit could
have been achieved by creating a new object for the additional functions rather than enhancing the first object with additional methods.
The amount of programming work is the same in both cases, but the bulkier single object with additional methods counts for a higher
level of reuse to most people, even though this object is carrying around a lot of unused "baggage" in any one instantiation.
The bottom line is that there is no practical objective way to measure reuse. Anyone out to make a point (positive
or negative) about reuse can find a metric to prove that point. This creates a new problem. If you can't measure something, how can
you improve it? For the time being, we will have to assume that we know good reuse when we see it, even if we can't measure it. We
can do this by observing how long it takes to develop an application or how much code it takes to develop the application (assuming
experienced, competent programmers). By using this subjective approach, it is apparent to most developers that we are still losing
ground.
Objects and Components
Agreeing on what constitutes an "object" is a fundamental problem with object-oriented technology. In theory, an
object represents a real-world entity, such as a person, vehicle, merchandise, etc. Yet most programmers think of objects as processing
entities -- listboxes, text widgets, windows, etc. While it would be possible to start with widgets and, through encapsulation and
inheritance, end up with, say, vehicles, developers just don't do this when building real systems. So one problem is that most OO
development is not truly object oriented, but rather programming with predefined widgets. Just because you are programming in C++
does not mean that you are doing object-oriented development. As we used to say, "Real FORTRAN programmers can write FORTRAN in any
language" -- and real procedural programmers can write procedural code in C++.
There is a well-established, theoretical basis for object-oriented methodology. Even if some developers don't understand
it, don't use it correctly, or disagree with it, there is a body of reference material that precisely defines objects and regulates
their use.
The computer industry has recently begun to shift focus from "objects" to "components" as the answer to our dreams.
But what is a component? Some simply use the term "component" as another name for a widget. I have a catalog in front of me that
purports to offer "components." It includes charting tools, a cryptographic package, a Text Edit control developer's kit, a collection
of widgets (grids, trees, notebooks, meters, etc.), communications drivers, and similar entities. This definition of "component"
is not the answer we are seeking, however.
A search of the literature doesn't help, either. There are many articles that discuss components, but few that
actually define a component. Industry expert Judith Hurwitz says, "Components are made up of business rules, application functionality,
data, or resources that are encapsulated to allow reuse in multiple applications." Alan Radding, who writes about multi-tier development,
responds, "In [Judith] Hurwitz Consulting's hypertier scheme, everything in effect ends up as a component." Don Kiely, writing about
components for IEEE's Computer magazine, never actually defines components, but he does define "framework assemblies" as groups of
components "that could be plugged into an application as easily as individual components." This is a significant statement because
it shows that Kiely, Hurwitz, and Radding are thinking along the same lines, even if they use different words. Kiely also makes the
useful observation that, "to be truly effective, components should be portable and inter-operable across applications," something
that I will come back to later.
One common misconception is that one cannot do object-oriented design in C, or in any language that isn't approved by the OOP
zealots. This is just not true: while it may be more natural to write a good object-oriented design in C++, Java or Smalltalk,
it can also be done in C or BASIC.
One can create objects in C by creating a structure, then passing that structure to every function that operates on it;
a common use is to pass a pointer to the structure as the first argument of each such function.
It is also possible to achieve polymorphism and inheritance with function pointers and other techniques.
Omega
I tend to agree with the author..
OOP (IMHO -- I'm crazy for the acronyms today), is just a fad. Like structured programming was before it.. Unfortunately a
lot of these companies today fall into "trendy" programming methodologies. Personally, I believe you should program using the
style you're most comfortable and familiar with. If you're trying to fit a mold it will slow you down..
AlgUSF
The biggest problem with OOP is when people use it too much, and end up with like a million classes.
Duh, the comparison is simple!
Hairy_Potter | about 13 years ago
Both communism and OOP rely on the concept of classes for the fundamental flavor.
These problems form obstacles to the further development of object-oriented software engineering, and in some situations
are beginning to cause its outright rejection. Such problems can be solved either by a variety of ad hoc tools and methodologies,
or by progress in language technology (both design and implementation). Here are some things that could or should be done in the
various areas.
Economy of execution. Much can be done to improve the efficiency of method invocation by clever program
analysis, as well as by language features (e.g. by "final" methods and classes); this is the topic of a large and promising body
of current work. We also need to design type systems that can statically check many of the conditions that now require dynamic
subclass checks.
Economy of compilation. We need to adopt languages and type systems that allow the separate compilation
of (sub)classes, without resorting to recompilation of superclasses and without relying on "private" information in interfaces.
Economy of small-scale development. Improvements in type systems for object-oriented languages will
improve error detection and the expressiveness of interfaces. Much promising work has been done already and needs to be applied
or further deployed [1][5].
Economy of large-scale development. Major progress should be achieved by formulating and enforcing inheritance
interfaces: the contract between a class and its subclasses (as opposed to the instantiation interface which is essentially an
object type). This recommendation requires the development of adequate language support. Parametric polymorphism is beginning
to appear in many object-oriented languages, and its interactions with object-oriented features need to be better understood.
Subtyping and subclassing must be separated. Similarly, classes and interfaces must be separated.
Economy of language features. Prototype-based languages have already tried to reduce the complexity
of class-based languages by providing simpler, more composable features. Even within class-based languages, we now have a better
understanding of how to achieve simplicity and orthogonality, but much remains to be done. How can we design an object-oriented
language that is powerful and simple; one that allows powerful engineering but also simple and reliable engineering?
OOP Criticism -- good OOP criticism and
OOP problems (The emperor has no clothes!). Contains a very good collection of links
This research study investigates some of the problems and unresolved issues in the OOPar. Contrary
to adopting a WHAT (the problem) and HOW (the solution) approach, it uniquely asks WHY these problems and issues exist.
We argue that the WHAT & HOW approach, although useful in the short term, does not provide a long term solution to the problems in
data modelling (DM). As a result of adopting such an approach, and the empirical and wide-ranging nature of chapter 3, four aspects
are proposed.
2.0 Concepts of the OO model
The main concepts that underlie the OOPar are outlined in the following sections.
2.1 Object Classes & Objects
In the OOPar, the problem domain is modeled using object classes and their instances, objects (Booch,
1994). An object is any abstract or real-world item that is relevant to the system. An object class is a grouping of these objects.
For example, in a library information system the object classes would be such things as members, books, etc. Objects would be instances
of these classes, e.g. Joe Bloggs, Object-oriented analysis by Martin, etc.
2.2 Methods
Methods are predefined operations associated with an object class. "Methods specify the way
in which an object's data are manipulated" (Martin & Odell, 1992, p.17). Therefore, the member object class identified earlier
may contain methods such as reserve_book, borrow_book, etc. Access to an object is only granted via its methods.
This, in fact, is one of the key features of the OO model: behaviour (methods) and data structures
(i.e. the declarative aspect of an object) are not separated; they are encapsulated together in one module.
2.3 Encapsulation
The process of keeping methods and data together, and granting access to the object only through the
methods is referred to as encapsulation. This achieves information hiding, i.e. "The object hides its data from other objects
and allows the data to be accessed via its own methods" (Martin & Odell, p.17).
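As a concrete, purely illustrative rendering of sections 2.1-2.3, a minimal Java sketch of the library example might look as follows; the names Member, borrowBook and reserveBook are my own, not taken from the cited texts. The data is private and reachable only through the methods, which is the encapsulation being described.

    import java.util.ArrayList;
    import java.util.List;

    public class Member {
        // The data (the "declarative aspect" of the object) is hidden...
        private final String name;
        private final List<String> borrowedTitles = new ArrayList<>();

        public Member(String name) {
            this.name = name;
        }

        // ...and is manipulated only through the object's methods.
        public void borrowBook(String title) {
            borrowedTitles.add(title);
        }

        public void reserveBook(String title) {
            // reservation logic would go here
        }

        public int booksOnLoan() {
            return borrowedTitles.size();
        }
    }

    // An object is an instance of the class:
    //   Member joe = new Member("Joe Bloggs");
    //   joe.borrowBook("Object-oriented analysis");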
2.4 Inheritance
Inheritance is the process whereby a high-level object class can be specialised into sub-classes. Wirfs-Brock
et al. (1990) define inheritance as "... the ability of one class to define the behaviour and data structure of its instances as a
superset of the definition of another class or classes." (p.24).
For example, in the library system we may find at a later stage that two types of members exist,
children and adults. To accommodate this, we can make use of inheritance by abstracting all the common features into a high-level
member class and then creating two new sub-classes, adult-member and child-member, under the member class. Sub-classes also
inherit data and functions from the superclass.
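In Java, the specialisation described here might look roughly like the sketch below, reusing the hypothetical Member class from the earlier example; both sub-classes inherit its data and methods.

    // in AdultMember.java
    public class AdultMember extends Member {
        public AdultMember(String name) {
            super(name);
        }
        // inherits borrowBook, reserveBook and booksOnLoan from Member
    }

    // in ChildMember.java
    public class ChildMember extends Member {
        public ChildMember(String name) {
            super(name);
        }
    }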
2.4.1 Multiple Inheritance
Multiple inheritance is almost identical in concept to single inheritance; however, in this case
a sub-class can inherit from many super-classes. For example, at a later design stage of the library system we may have a situation
where a book is fiction and is also a reference book. This potentially allows the use of multiple inheritance.
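It is worth noting that Java, the language used for the other sketches on this page, does not permit a class to inherit implementation from more than one superclass, so the fiction/reference-book case can only be approximated there through interfaces; a hypothetical sketch:

    // Java allows multiple inheritance of types (interfaces) but not of implementation.
    interface Fiction {
        String genre();
    }

    interface ReferenceWork {
        boolean forLibraryUseOnly();
    }

    class FictionReferenceBook implements Fiction, ReferenceWork {
        public String genre() { return "fiction"; }
        public boolean forLibraryUseOnly() { return true; }
    }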
2.5 Polymorphism & Dynamic Binding
The term polymorphism originates from the Greek for "many forms". In the context
of the OOPar, the polymorphism concept allows different objects to react to the same stimulus (i.e. message) differently (Hymes, 1995).
For example, adult and child members may only be allowed to borrow books for up to 6 and 3 weeks respectively. Therefore the borrow_book
message to the adult-member class will produce a different response (i.e. date books forward by 6 weeks) than the same message to the
child-member class (date books forward by 3 weeks). There are variations and degrees of polymorphism (e.g. operator overloading), for
which the interested reader is directed to standard OO textbooks (see refs. at end).
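Rendered in Java, the different responses to the same borrow_book message come down to method overriding and dynamic binding. The sketch below is my own illustration (the class and method names are hypothetical), using the 6- and 3-week loan periods given in the text.

    import java.time.LocalDate;

    abstract class LibraryMember {
        // Each kind of member answers the same message differently.
        abstract int loanPeriodInWeeks();

        LocalDate dueDate(LocalDate borrowedOn) {
            return borrowedOn.plusWeeks(loanPeriodInWeeks());
        }
    }

    class AdultLibraryMember extends LibraryMember {
        int loanPeriodInWeeks() { return 6; }   // adults: books dated forward by 6 weeks
    }

    class ChildLibraryMember extends LibraryMember {
        int loanPeriodInWeeks() { return 3; }   // children: 3 weeks
    }

    // The same call site works for either kind of member (dynamic binding):
    //   LibraryMember m = new ChildLibraryMember();
    //   m.dueDate(LocalDate.now());   // three weeks from today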
2.6 Genericity
Generic (or parametric) classes are those that define a whole family of related classes, differing
only in matters concerning one or more types that are used as arguments to the class declaration (de Champeaux et al, 1993). The
concept of genericity allows the designer to specify standard generic classes that can be reused. For example, in designing any system,
a number of common programming situations require the same class structure to be applied to different data types. Examples of several
situations in user interface systems are the following:
queue class
a queue of characters entered by a user
a queue of mouse events that have occurred and are waiting to be handled.
In each case the same basic algorithms and supporting data structures are needed. What varies among
uses of the class is the type of the data being manipulated. Lists are also used to maintain relationships in OO programming languages,
hence in the case of the library system a standard generic list class could be defined to maintain the relationship between members
and books reserved, for example.
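In Java this corresponds to generics: one parametric class reused for unrelated element types. The sketch below is my own minimal illustration; a String stands in for whatever mouse-event type the interface toolkit would actually supply.

    import java.util.ArrayDeque;

    // One generic class, reused for unrelated element types.
    class SimpleQueue<T> {
        private final ArrayDeque<T> items = new ArrayDeque<>();

        void enqueue(T item) { items.addLast(item); }
        T dequeue()          { return items.pollFirst(); }
        boolean isEmpty()    { return items.isEmpty(); }
    }

    class QueueDemo {
        public static void main(String[] args) {
            SimpleQueue<Character> typed = new SimpleQueue<>();      // characters entered by a user
            typed.enqueue('a');

            SimpleQueue<String> pending = new SimpleQueue<>();       // stand-in for queued mouse events
            pending.enqueue("mouse click at (10, 20)");

            System.out.println(typed.dequeue() + " / " + pending.dequeue());
        }
    }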
3.0 Claimed Benefits
This section describes some of the key general claimed benefits of the OOPar. The list is, of course,
not exhaustive, and there are many other claimed benefits of this approach which are described later, in their respective sections.
3.1 Naturalness of Analysis & Design (Cognition)
One of the frequently claimed benefits of the OOPar is that it is natural (therefore more understandable),
and is assumed to be cognitively similar to the way human beings perceive and understand the real world (Meyer, 1988; Rosson &
Alpert, 1988; Rosson & Alpert, 1990). Martin & Odell (1992, p.31), for example, state: "The way of thinking is more natural for most
people than the techniques of structured analysis and design. After all, the world consists of objects." Mcfadden & Hoffer (1994)
similarly note: "The notation and approach of the object-oriented data model builds on these paradigms that people constantly
use to cope with complexity." This claim therefore assumes that it is more natural for developers to decompose a problem into objects,
at least as compared to the traditional structured languages. In other words, it should be natural for developers and users to map
the problem into objects and into classification hierarchies.
3.2 Software Reuse
Software reuse is perhaps the most publicised benefit of the OOPar. Advocates of the OOPar claim
that it provides effective mechanisms to allow software to be reused (Meyer, 1988). For example, Budd (1996) states: "Well designed
objects in object-oriented systems are the basis for systems to be assembled largely from reusable modules, leading to higher productivity."
(p.31). Martin & Odell (1992) similarly state: "It [OO] leads to a world of reusable classes, where much of the software construction
process will be the assembly of existing well-proven classes." (p.31).
These mechanisms are encapsulation, polymorphism, and inheritance. For example, encapsulation allows
object classes to be modified, or even added to new systems without requiring additional modification to other classes in the system.
The end-goal of this is to develop a component-based software industry (as Martin & Odell point out), where classes can be purchased
and plugged in. Inheritance allows existing code to be reused. Genericity allows the reuse of one standard class.
3.3 Communication Process
Curtis & Waltz (1990) and Krasner, Curtis, & Iscoe (1987) report that at the software team level,
some of the key problems encountered are communication and coordination, capturing and using domain knowledge, and organisational
issues. With the OO approach, advocates claim that communication and coordination between the project team and client(s), and also
within the team, are enhanced. For example, Rumbaugh et al. (1991, p.4) claimed that the "greatest benefits [of OO] come from helping
specifiers and developers express abstract concepts and communicate them to each other." Martin & Odell (1992, p.34) similarly state,
"Business people more easily understand the OO paradigm. They think in terms of objects... OO methodologies encourage better understanding
as the end users and developers share a common model." Similar statements can be found in many popular textbooks, e.g. Coad & Yourdon
(1991, p.3); Jacobson, Christerson, Johnsson, & Overgaard (1992, p.43); Wirfs-Brock, Wilkerson, & Weiner (1990, pp. 10-11).
This claim is based on two premises:
Naturalness of OO (described above) makes understanding easier,
Objects are constructed from the problem domain, hence communication between the project team and
client(s) is enhanced. Also, because a single representation permeates all stages of the life-cycle, communication
and coordination within the team are facilitated.
3.4 Refinement & Extensibility
OO advocates also claim that "software" developed using the OOPar is easy to refine and extend.
Khoshafian (1990) states: "Object oriented programming techniques allow the development of extensible
and reusable modules" (p.274). Graham (1994) similarly notes, "Inheritance, genericity or other forms of polymorphism make exception
handling easier and improve the extensibility of systems." (p.37). These claims are related to three key principles of the OOPar,
encapsulation, inheritance and polymorphism.
For example, encapsulation allows the internal implementation of a class to be modified without requiring
changes to its services (i.e. methods). It also allows new classes to be added to a system, without major modifications to the system.
Inheritance allows the class hierarchy to be further refined, and combined with polymorphism, the superclass does not have to "know"
about the new class, i.e. modifications do not have to be made at the superclass.
4.0 Definition of OO Model
In critiquing a concept it is common to start with a formal definition. However, in this research
project we will not do this, for the following reasons:
1. Unlike the relational model, the OO model does not have one commonly accepted formal definition.
2. As a result of (1), trying to define the OO model is in itself a considerable research task.
Our approach will therefore be to investigate reported problems in the application of the OOPar,
together with academic critiques, and then to try to identify common threads and similarities between these problems.
5.0 Conclusion
In summary, in section 2 we outlined some of the core concepts that underline the OO model. Section
3 provided an outline of some of the key claimed benefits of the OO model. Finally in section 4, we discussed our reasons for not
formally defining the OO model.
Paul Graham and Jonathan
Rees discuss the nature and appeal of object-orientation. (Graham holds quite hackerish views regarding language design that lack
a bit of the specific sense of esthetics that comes with mathematical culture, and his take on abstraction really is somewhat flat, but
anyway ...)
Object Oriented Programming Oversold! Detailed OOP
criticism by a programmer of business applications who advocates a procedural/relational approach factoring out the management
of relationships to the database.
"Object-oriented programming is an exceptionally bad idea which could only have originated in California." -- Edsger
Dijkstra
"object-oriented design is the roman numerals of computing." --
Rob Pike
"The phrase "object-oriented" means a lot of things. Half are obvious, and the other half are mistakes." -- Paul Graham
"Implementation inheritance causes the same intertwining and brittleness that have been observed when goto statements are
overused. As a result, OO systems often suffer from complexity and lack of reuse." -- John Ousterhout Scripting, IEEE Computer,
March 1998
"90% of the shit that is popular right now wants to rub its object-oriented nutsack all over my code" -- kfx
BlueTail's "Why OO Sucks" - Quote: If a
[paradigm] is so bad that it creates a new industry to solve problems of its own making, then it must be a good idea for the guys
who want to make money.
Paul Graham Excluding OOP From New Language
- Paul co-started a very successful e-stores company using LISP. To my chagrin, however, he does not like databases
either. LISP shares my philosophy of treating larger-scale data organization similar to code organization. I just think that nested
lists are less "grokkable" and flexible than (good) tables.
Database Debunkings - A website influenced by the ideas of Chris
Date, the author of a popular university database textbook. Date generally finds object orientation conceptually impure compared
to relational theory, and believes that OO thinking re-exposes
outdated database and data structure thinking
of the 1960's (specifically hierarchical and network databases).
Objects Have Failed - Narrative by
Richard P. Gabriel. Some of it seems to be complaining about lack of dynamic languages more than about OO itself. I wonder if Richard
would not be complaining if Smalltalk were "in" instead of Java.
OOP Better in Theory than in Fact - R. Mansfield's
complaints about OO that seem to echo many of the complaints presented here. (Coincidence? I'll let you be the judge.) Note that
I generally do not support copy-and-paste
for reuse. There are plenty of other non-OO ways to get reuse.
OOP for Heretics - Tony Marston
has similarly found that the world of OOP lacks consistency and
science.
"We Don't Need No Stinkin' OO Proof!" - Well
at least some are realizing there is no real evidence. "Feels good" is sufficient for this guy. I would like to see his allegedly
inferior procedural code. He was probably just bad at procedural. Many OO fans are. See
here as far as the modeling claim.
Object Oriented Programming (OOP) is currently being hyped as the best way to do everything from promoting code reuse to forming
lasting relationships with persons of your preferred sexual orientation. This paper tries to demystify the benefits of OOP. We point
out that, as with so many previous software engineering fads, the biggest gains in using OOP result from applying principles that
are older than, and largely independent of, OOP. Moreover, many of the claimed benefits are either not true or true only by chance,
while occasioning some high costs that are rarely discussed. Most seriously, all the hype is preventing progress in tackling problems
that are both more important and harder: control of parallel and distributed applications, GUI design and implementation, fault tolerant
and real-time programming. OOP has little to offer these areas. Fundamentally, you get good software by thinking about it, designing
it well, implementing it carefully, and testing it intelligently, not by mindlessly using an expensive mechanical process.
Object-oriented design is supposed to make our software more robust and resilient, yet we still see systems
that are as fragile as their procedural ancestors. Are developers adopting aggressive practices because they think the technology
will protect them?
Object-oriented software development practices are supposed to make our software more robust and resilient to change.
Yet we still see systems designed using these practices that are as rigid and fragile as their procedural ancestors. Adding new features
still causes a cascade of change throughout the software and often results in the creation of new bugs. It wasn't supposed to be
this way. Many software development organizations invested heavily in object technology, expecting something better. They expected
the changes to be localized and the software to be resilient to bugs. Is it possible that object technology is the software equivalent
of four-wheel drive? Does it provide greater control and safety, only to be abused by programmers who develop more aggressively because
they think objects will protect them?
The problem is, today's object-oriented software often lacks modularity. The systems are just as hard as, if not harder
than, their procedural brethren to modify or enhance. What appear to be simple one-line fixes end up taking three weeks to implement.
Simple alterations cause a cascade of sympathetic changes to wash over the entire system.
It is my argument that we rely too heavily on object technology's safety features and ignore good software development
practices such as planning, design, review and assessment in the name of expediency. We hope that at least one of our four driving
wheels will somehow grab and prevent us from losing control on the slippery roadway we have been driving along at a reckless speed.
Re Beware of C Hackers -- A rebuttal to Bertrand Meyer
by Robert Martin (3 Jul 95). Meyer draws a clear distinction between C programmers and C hackers. He even states that he expects everyone
to know C (at least back in '95, when they had that discussion); he knows C himself very well, and he points to the fact that some C
hackers are not well-suited for the creation of huge, complex systems that must be reliable, because they (i.e. the hackers) chase runtime
and memory efficiency and lose sight of the more important points: maintainability, readability, etc. I think he has a point there. He
does not use the term 'C hacker' for someone who is a good programmer and uses C, as you might assume.
I have recently acquired a copy of Bertrand Meyer's new book "Object Success". I would like to say that I have
a great deal of respect for Meyer. Moreover, I have read many good things in this book so far.
However I take extreme exception to something he wrote in this book. On page 91 he writes the following which is
included in its entirety. I will comment on it afterwards.
PRUDENT HIRING PRINCIPLE: Beware of C hackers.
A "C hacker" is someone who has had too much practice writing low-level C software and making use of all the
special techniques and tricks permitted by that language.
Why single out C? First, interestingly enough, one seldom hears about Pascal hackers, Ada hackers or Modula
hackers. C, which since the late nineteen-seventies has spread rapidly throughout the computing community, especially in the
USA, typifies a theology of computing where the Computer is the central deity and its altar reads Efficiency. Everything is sacrificed
to low-level performance, and programs are built in terms of addresses, words, memory cells, pointers, manual memory allocation
and deallocation, unsafe type conversions, signals and similar machine-oriented constructs. In this almost monotheist cult, where
the Microsecond and the Kilobyte complete the trinity, there is little room for such idols of software engineering as Readability,
Provability and Extendibility.
Not surprisingly, former believers need a serious debriefing before they can rejoin the rest of the computing
community and its progress towards more modern forms of software development.
The above principle does not say "Stay away from C hackers", which would show lack of faith in the human aptitude
to betterment. There have indeed been cases of former C hackers who became born-again O-O developers. But in general you should
be cautious about including C hackers in your projects, as they are often the ones who have the most trouble adapting to the abstraction-based
form of software development that object technology embodies.
There is only one word that can accurately describe these sentiments. That word is bigotry. I don't like to use
a word like that to describe the words of someone who is obviously intelligent. Yet there is no other option. The words he has written
create a class of people whom he recommends ought to be hired only with caution.
Who are these "C Hackers"? Has Dr. Meyer given us any means to identify them? Yes.
A "C hacker" is someone who has had too much practice writing low-level C software and making use of all the special techniques
and tricks permitted by that language.
What possible recourse can a manager have but to look with prejudice against anyone who happens to put "C" on their
resume. By associating "C" with "Hackers", Dr. Meyer damages everyone who uses that language, whether they are hackers or not. In
effect, Dr. Meyer is making a statement that is equivalent to: "Beware of the Thieving Frenchmen."
What is a hacker? A hacker is someone who writes computer programs without employing sound principles of software
engineering. Someone who simply throws code together without thought to structure or lifecycle.
Certainly there are hackers who use C. But there are Hackers who use every language. And in this, Dr. Meyer is
quite negligent, for he says nearly the opposite:
Why single out C? First, interestingly enough, one seldom hears about Pascal hackers, Ada hackers or Modula hackers.
This may or may not be true, I have no statistics. However, *if* it is true I would be willing to bet that the
reason has something to do with the difference in the number of C programmers as compared to Ada, Pascal and Modula programmers.
If there are 20 times as many C programmers, then there are probably 20 times as many C hackers.
My point is that C does not predispose someone to be a hacker. And that the ratio of C hackers to C programmers
is probably the same as Ada hackers to Ada programmers.
So Dr. Meyer casts aspersions upon all C programmers while giving amnesty to Ada, Pascal and Modula programmers.
According to Dr. Meyer, it is only, or especially, the "C hacker" that you must be wary of. He does not say: "Beware of Hackers",
rather he says: "Beware of C hackers." And this is simply bigotry, the segregation and defamation of a class of people based only
upon the language that they program in.
And why this malevolence towards C? One can only conjecture. He offers reasons, but they are nearly mystical in
their descriptions. Consider:
C [...] typifies a theology of computing where the Computer is the central deity and its altar reads Efficiency.
Dr. Meyer does not provide any proof, or even a scrap of evidence, to support this ridiculous claim. He states it
as fact. This is an abuse of authority. What every author fears (or ought to fear, in my opinion) is that he will cast his own opinions
as unalterable truth. Yet, rather than proceed with trepidation, Dr. Meyer seems to glory in his deprecation of C. His writing becomes
almost frenzied as he attacks it.
Everything is sacrificed to low-level performance, and programs are built in terms of addresses, words, memory
cells, pointers, manual memory allocation and deallocation, unsafe type conversions, signals and similar machine-oriented constructs.
In this almost monotheist cult, where the Microsecond and the Kilobyte complete the trinity, there is little room for such idols
of software engineering as Readability, Provability and Extendibility.
Here he names every evil trick and bad practice that he can, and ascribes it all to C, as though no other language
had the capability of supporting bad practices. He also claims that C programmers religiously follow these bad practices as the sacraments
of their religion.
These statements are extremely irresponsible. There is no basis of fact that Dr. Meyer has supplied for these extreme
accusations and defamations. Dr. Meyer has a right to dislike C if he chooses. But his vehemence against its programmers is unreasonable,
and unreasoned.
It is easy to refute nearly all of Dr. Meyer's claims regarding C programmers. I have known many, many C programmers
who were very concerned with good software engineering, who considered the quest for ultimate efficiency to be absurd, and who were careful
with their programming practices. In fact, I have never met a single C programmer who fits the description that Dr. Meyer ascribes
to them all.
In my opinion, he is very wrong, not only professionally, but morally. And he owes the industry an apology and a
retraction.
I have been doing custom business programming for small and medium projects since the late 1980's. When Object
Oriented Programming started popping its head into the mainstream, I began looking into it to see how it could improve the type
of applications that I work on.
Note that this excludes large business frameworks such as SAP, PeopleSoft, etc. I have never built a SAP-clone
and probably never will, as with many others in my niche.
I have come to the conclusion that although OO may help in building the fundamental components of business applications,
and even the language itself, any minor organizational improvement OO adds to the applications themselves is not justified by the
complexity, confusion, and training effort it will likely add to a business-oriented language. In other words, OO is not a general-purpose
software organizational paradigm, and "selling" it as such harms progress in the alternatives.
I have used languages where the GUI, collections handling, and other basic frameworks are built into the language
in such a way that OO's benefits would rarely help the language deal with them. It is also my opinion that the language of base framework
implementations probably should not be the same as the application's language for the most part. For example, most Visual Basic components
are written in C++. Meyer seems to have more of a one-size-fits-all view of languages and paradigms than I do.
For a preview of my opinions and analysis of this situation, may I suggest the following links:
Although the stated niche is not representative of all programming tasks, it is still a rather large one and should
not be ignored when choosing paradigms.
Here is a quick summary of my criticisms of OOSC2:
Meyer tends to build up false or crippled representations of OO's competitors, which distorts OO's alleged
comparative advantages.
A good many of the patterns that OO improves are not something needed directly by the stated niche, except
in rare cases.
We have very conflicting views and philosophies on data sharing.
Note that although my writing style has at times been called sarcastic and harsh, please do not confuse the delivery
tone with the message.
Also note that I am not against abstraction and generic-ness. I am only saying that OO's brand of these
is insufficient for my niche.
The Last but not Least. Technology is dominated by two types of people: those who understand what they do not manage and those who
manage what they do not understand. ~ Archibald Putt, Ph.D.