"The great thing about Object Oriented code is that it can make small, simple
problems look like large, complex ones."
Top 50 Funny Computer Quotes cargo cult programming: n. A style of (incompetent) programming dominated by ritual inclusion of code or program structures that serve no real purpose. A cargo cult programmer will usually explain the extra code as a way of working around some bug encountered in the past, but usually neither the bug nor the reason the code apparently avoided the bug was ever fully understood (compare shotgun debugging, voodoo programming). The term `cargo cult' is a reference to aboriginal religions that grew up in the South Pacific after World War II. The practices of these cults center on building elaborate mockups of airplanes and military style landing strips in the hope of bringing the return of the god-like airplanes that brought such marvelous cargo during the war. Hackish usage probably derives from Richard Feynman's characterization of certain practices as "cargo cult science" in his book "Surely You're Joking, Mr. Feynman!" (W. W. Norton & Co, New York 1985, ISBN 0-393-01921-7). Both communism and OOP rely on the concept of classes. Both generate a lot of fanaticism and overuse this notion. That might be not an accident. A data structure is a structure, not an object. Only if you exclusively use
the methods to manipulate the structure (via function pointers if you're using C) and each method
is implemented as co-routine, then you have an object. Such an approach is typically an overkill.
OO zealots make mistake typical for other zealots by insisting that it must be used for everywhere
and reject other useful approaches. This is religious zealotry. And please remember that Dark
Ages lasted several hundred years. |
I often wonder why object-oriented programming (OO) is so popular despite being a failure as a programming paradigm. It is rarely used in Web programming, which is the most dynamically developing application area (and many of those programs are based on the LAMP stack, with PHP as the "P" in it). Is it becoming something that is talked about a lot but rarely practiced? Just a topic artificially promoted for mercantile gains by "a horde of practically illiterate and corrupt researchers publishing crap papers in junk conferences"?

To me it looks like a more dangerous development: a variant of computer science Lysenkoism (and I can attest that the current level of degradation of computer science is somewhat reminiscent of the degradation of the social sciences under Stalin; it is now more about fashion than research, with cloud computing as the latest hot fashion). If you read books considered "OO classics", the distinct impression one gets is that "the king is naked". But if this is a variant of Lysenkoism, the absurdity of the dogma does not matter and does not diminish the number of adherents. As the universities are captured, it has huge staying power despite all this. The same trick is played in US universities with neoclassical economics.
In the end, productivity and quality are the only true merits by which a programming methodology is to be judged. As Paul Graham noted, the length of a program can serve as a useful (although far from perfect) metric for how much work it is to write it. Not the length in characters, of course, but the length in distinct syntactic elements (tokens): basically, the number of lexical elements or, if you wish, the number of leaves in the parse tree. It may not be quite true that the shortest program requires the least effort to write, but in general the length of a program in lexical tokens correlates well with its complexity and the effort involved. The OO approach fails by this metric: in a language that permits structuring the program in both non-OO (procedural) and OO fashion (C++, Perl, etc.), the program structured in OO fashion is typically longer.

Being extremely verbose (Java may be the king of verbosity among widely adopted languages; it is really the scion of Cobol ;-) is only one problem that negatively affects both the creation and, especially, the maintenance of programs. Java is so verbose that this factor alone pushes its level below, say, PL/1. And it is sad that PL/1, which was created in the early 1960s, is still competitive with and superior to a language created 40 years later. Attempts to raise Java's level using elaborate "frameworks" with complex sets of classes introduced other problems: bad performance and difficulties in debugging.

OO is a fuzzy concept that has both good and bad sides. First of all, it is often implemented in an incomplete, crippled way (C++, Java), which undermines its usefulness. To enjoy the advantages of OO programming the language should provide allocation of all variables on the heap, availability of coroutines, a correct implementation of exception handling, and garbage collection. As such it is an expensive proposition (execution-wise).
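As a rough illustration of this metric, here is a minimal Java sketch (the class name and the decision to skip comments are my own assumptions) that approximates program length in lexical tokens using the standard java.io.StreamTokenizer:

    import java.io.FileReader;
    import java.io.IOException;
    import java.io.Reader;
    import java.io.StreamTokenizer;

    // Crude program-length metric: count lexical tokens rather than characters.
    public class TokenCount {
        public static void main(String[] args) throws IOException {
            try (Reader r = new FileReader(args[0])) {
                StreamTokenizer st = new StreamTokenizer(r);
                st.slashSlashComments(true);   // skip // comments
                st.slashStarComments(true);    // skip /* ... */ comments
                int tokens = 0;
                while (st.nextToken() != StreamTokenizer.TT_EOF) {
                    tokens++;                  // each word, number, or symbol counts once
                }
                System.out.println(args[0] + ": " + tokens + " tokens");
            }
        }
    }

By this crude measure a verbose OO rendition of a task will usually score worse than its procedural equivalent, which is exactly the point being made here.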
Also, we should clearly state that the OO model does incorporate several good ideas (encapsulation and information hiding among them).
While those good ideas shine in certain classes of programming tasks, such as GUI interfaces, there are programming tasks (for example, computational ones) where they are useless or even harmful. When I see classic computer algorithms books polluted by OO nonsense, it is just a vivid reminder to me that Lysenkoism in computer science is still alive and thriving. And it serves as indirect proof of the pretty sad observation that in modern science at least one third of scientists are money-seeking charlatans, while another third are intellectual prostitutes (among professors of economics this proportion is even higher). Conversion of a previously decent, honest researcher into one of those two despicable categories is not only possible, but happens quite often. A lot of modern science is actually pseudoscience. Life in academia these days is very tough, and often survival becomes the highest priority. As somebody said, "Out of the crooked timber of humanity no straight thing was ever made."

Such dominance of OO in books devoted to the description of algorithms is a completely absurd, intellectually bankrupt way to explain algorithms to students. It is really terrifying in view of the fact that books such as Donald Knuth's masterpiece The Art of Computer Programming have existed since 1968. OO does not replace, but complements, procedural programming: not everything should be an object, whatever some hot-headed OO enthusiasts (priests of this new techno-cult) suggest.
Good design is not about following the latest hot fad, but about finding a unique set of tools and methods that make performing the task productive, or even possible. Kernighan & Plauger noted this fact long ago in their still relevant book The Elements of Programming Style:

Good design and programming is not learned by generalities, but by seeing how significant programs can be made clean, easy to read, easy to maintain and modify, human-engineered, efficient, and reliable, by the application of good programming practices. Careful study and imitation of good designs and programs significantly improves development skills.
"The true faith compels us to believe there is
one holy Catholic Apostolic Church and this we firmly believe and plainly confess. And outside
of her there is no salvation or remission from sins." - Boniface VII, Pope (1294-1303) |
Object-oriented programming (OOP) is often treated like a new Christianity, and the religious zeal of converts often borders on stupidity. It definitely attracts numerous charlatans who propose magic cures, like various object methodologies, and, what is worse, write books about them ;-).

All in all, my impression is that over almost 30 years of its existence OO has failed to improve programming productivity in comparison with alternative approaches, such as scripting languages (some of which incorporate OO "just in case", to ride the fashion, as you should never upset religious zealots ;-).

If so, this is more of a religious dogma, much as structured programming and the verification bonanza were before it (with Edsger Dijkstra as the first high priest of the modern computer science techno-cult, aka "the church of computer scientology" ;-). And it is true that there are academic fields more corrupted than computer science, such as economics.

But still, it is really terrifying in the sense that it replicates the Lysenkoism mentality on a new level, with its sycophants and self-reproducing cult mentality. And this cult mentality is a real problem. As Prince Kropotkin used to say about the prison guards in Alexandrov Central (one of the strictest-regime prisons in Tsarist Russia), where he served his prison term: "People are better than institutions."

As in any cult, the high priests do not believe one bit in the nonsense they spread. For them it is just a way of getting prestige and money. Just ask yourself: where were there more sincere communists in, say, the 1970s -- in the Politburo of the CPSU of the USSR, or in any small Montmartre cafe? As in all such cases, failure does not discourage the rank-and-file members of the cult. Paradoxically, it just increases the cult's cohesion and zeal.
And in 2014 OO adepts are still brainwashing CS students, despite the failure of OO to provide the advertised benefits over the last 25 years (Release 2.0 of C++ came in 1989). And they will continue, just because it is very profitable economically. They do not care about the negative externalities (an economic term that is fully applicable in this case) connected with such behavior. Just give me a Mercedes (or a tenured position) now, and f*ck the future of computer science.

So far all this bloat and inefficiency has been covered by Moore's law. In other words, you can claim any software development methodology highly successful, because even if it is not, its bloat and inefficiencies will be well covered by the tremendous growth in the power of computers, which still continues unabated, although it has slowed down a bit.

OO is a set of particular ideas which are not a panacea, and as such it never was and never will be a universally applicable programming paradigm. Object orientation has limited applicability and should be used where it brings distinct advantages, not pushed for everything, as naive or crooked (mostly crooked and greedy) authors of "Object Oriented Books" (TM) do.
Here is the number of books whose authors wanted to milk the cow and included the words "object oriented" in the title, for each year since 2000 (data extracted from the Library of Congress):

2000 | 2001 | 2002 | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012
  92 |   83 |  104 |   76 |   69 |   70 |   76 |   68 |   57 |   60 |   61 |   46 |   27
So there are a lot of authors trying to sell the latest fad to an unsuspecting audience, much like the snake oil salesmen of the past.

I have a strong personal hatred for authors who write "object-oriented algorithms and data structures" books, and especially for authors who convert previously procedure-oriented algorithms books into object-oriented ones in an attempt to earn a fast buck; corruption is a real problem in academia, you should know that ;-).

In a way, the term "object oriented cult" has a deeper meaning: as in most cults, the high priests (including most "object oriented" book authors) really love only money and power. And they do not believe in anything they preach...
Many common applications can be developed better under different paradigms, such as multi-pass processing, compiler-like structure, the abstract machine paradigm, functional languages, and so on. Just imagine somebody trying to solve, in an object-oriented way, a typical text string parsing problem that can be solved with regular expressions (see the sketch below). Of course, any string is a derived object of an alphabet of 26 letters, but how far will we get with such an "OO approach"? Or look at the poverty of books that sell the object-oriented approach to students who want to study algorithms and data structures. The snake oil salesmen who write such books, using OO as a marketing trick to make a quick buck, do not deserve the title of computer scientist, and their degrees should probably be revoked ;-) Lord Tebbit once said, "You can judge a man by his enemies." Judging from the composition of the pro-OO camp in computer science, any promoter of an alternative paradigm/methodology, or even a skeptic like me, looks good by definition ;-)
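To make the point concrete, here is a minimal sketch (the "key=value; ..." record format is my own invented example) of the one-pattern regex solution; an "everything is an object" design would typically bury the same job under Token, Scanner, and Parser classes:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Parsing a simple "key=value; key=value" record with one regular expression.
    // The whole job is a pattern and a loop, not a class hierarchy.
    public class RegexParse {
        private static final Pattern PAIR = Pattern.compile("(\\w+)=([^;]+)");

        public static void main(String[] args) {
            String record = "user=joe; host=example.com; status=ok";
            Matcher m = PAIR.matcher(record);
            while (m.find()) {
                System.out.println(m.group(1) + " -> " + m.group(2).trim());
            }
        }
    }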
When I think about OO I see two distinct trends:
One should understand that OOP is old hat: several OOP-based languages are 20 or more years old.
OOP attempts to decompose the world into objects and claims that everything is an object. But saying that everything is an object does not always provide a useful insight into the problem. Just think of sorting: will it help you sort a file efficiently if you think of the records as objects? Most probably not (see the sketch below). A reasonable guideline for deciding whether an object-oriented approach is appropriate: things that have state and change their state are natural candidates for representation as objects.
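As a minimal illustration (the "name,age" record format is my own assumption): the working core of a sort is the key extraction and the comparison, and declaring the records to be "objects" adds nothing to the efficiency of the algorithm:

    import java.util.Arrays;
    import java.util.Comparator;

    // Sorting "name,age" records: the algorithmic content is the comparison
    // function, which is procedural; wrapping each record in a class adds
    // nothing to the sort itself.
    public class SortRecords {
        public static void main(String[] args) {
            String[] records = { "smith,42", "jones,17", "brown,29" };
            Arrays.sort(records,
                Comparator.comparingInt(r -> Integer.parseInt(r.split(",")[1])));
            System.out.println(Arrays.toString(records));
        }
    }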
OOP emphasizes the creation of a set of classes as a universal method of decomposing the problem. But in reality such a decomposition depends heavily on the set of data representations and algorithms that the programmer knows and is comfortable with. That is why a typical decomposition of a problem into classes by a Unix programmer can be completely different from (and often better than) the decomposition of the same problem by a Windows-only programmer. The essence of programming is algorithms operating on data structures, and the "programming culture" of a particular OS exerts a heavy influence on the way programmers think. OO by itself can in no way help you come up with optimal storage structures and algorithms for solving the problem. Moreover, OO introduced entirely new and quite obscure terminology that mystifies some old, useful mechanisms of program structuring.
Also, while OO emphasizes the concept of an object (which can be abstracted as a coroutine with its own state), in reality many so-called OO languages do not implement the concept of a coroutine. As such, their methods do not have a real state and cannot be suspended and resumed. In other words, they are just a new and slightly perverted way to use Algol-style procedures. As for the paradigm shift, OO can be compared to the introduction of a local LAN instead of a mainframe: we now have a bunch of small, autonomous PCs, each with its own CPU, communicating with each other via messages over the net. It takes some imagination to see a simple procedure call as a real message-passing mechanism; only threads communicate through real messages. So the true object model is intrinsically connected with multithreading, yet this connection is not well understood. A true message mechanism presupposes that the object (an autonomous PC with its own CPU) was active before receiving the message and will remain active after processing it (see the sketch below). To a certain extent, real OOP style is a special case of concurrent programming.
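A minimal Java sketch of this reading, with all names being my own illustration: the "active object" is a thread that is alive before and after each message, and a method call is replaced by a real message through a queue:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // An "active object": alive before the message arrives, alive after it is
    // processed. A procedure call is replaced by a real message through a queue.
    public class ActiveCounter implements Runnable {
        private final BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(16);
        private int count;                      // the object's own persistent state

        public void send(String msg) throws InterruptedException {
            mailbox.put(msg);                   // message passing, not a procedure call
        }

        @Override public void run() {
            try {
                while (true) {
                    String msg = mailbox.take();    // suspend until a message arrives
                    if (msg.equals("stop")) return;
                    count++;
                    System.out.println("got '" + msg + "', count=" + count);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            ActiveCounter obj = new ActiveCounter();
            Thread t = new Thread(obj);
            t.start();                          // the object is active before any message
            obj.send("hello");
            obj.send("world");
            obj.send("stop");
            t.join();
        }
    }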
Pointers are a great programming concept, but like any powerful feature they are dangerous. OO tries to remove explicit pointers from the language by hiding them and assigning them a type upon instantiation of a class. An instance of a class is essentially a typed pointer, pointing to the memory area occupied by a particular structure.

At the same time, removing pointers from the language as first-class elements is not without problems: it removes a lot of the expressive power of the language. As Perl demonstrated quite convincingly, the presence of pointers in a scripting language framework is very beneficial. In retrospect, the idea that you need to switch to an OO framework in order to use typed pointers looks problematic.

And the idea of run-time access to the elements of the symbol table is a powerful one that can be extended far beyond the concept of typed pointers. For example, PL/1-style ON SUBSCRIPTRANGE exception handling can be implemented this way.
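In Java terms (a sketch; the class and field names are my own examples), run-time symbol-table access survives as reflection, and a PL/1-style SUBSCRIPTRANGE condition survives as a catchable ArrayIndexOutOfBoundsException:

    import java.lang.reflect.Field;

    // Run-time access to the "symbol table" via reflection, plus a
    // subscript-range violation caught as an exception, ON SUBSCRIPTRANGE style.
    public class SymbolTableDemo {
        public int level = 3;                   // a field we will look up by name

        public static void main(String[] args) throws Exception {
            SymbolTableDemo obj = new SymbolTableDemo();
            Field f = obj.getClass().getField("level");  // look up the symbol at run time
            System.out.println("level = " + f.getInt(obj));

            int[] table = new int[4];
            try {
                table[10] = 1;                  // out-of-range subscript
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("subscript range violated: " + e.getMessage());
            }
        }
    }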
Coroutines are a necessary element of an OO framework and were present in Simula 67, the ancestor of C++ and the grandmother of all modern OO languages. If we assume that an object needs to have its own state, that automatically implies that each method should have its own state too.

That means that OO languages that do not support the concept of coroutines are cripples missing a fundamental feature of the OO model, and generally should not be viewed as "real" OO languages.

An implementation of exceptions without implementing methods as coroutines is always deficient. In essence, an exception is nothing but a stopped coroutine, which means that all regular methods in an OO language that supports exceptions should be coroutines too, and should allocate all variables on the heap.

Allocating all variables on the heap generally presupposes garbage collection. In this sense, OO languages that do not support garbage collection are cripples. This list includes C++. The implementation of exceptions requires allocating all variables on the heap, as an exception generally ruins the stack, and state needs to be saved on the heap in any case.
Decomposition of a program into modules/classes is an art. OO tends to stimulate a more strictly hierarchical (aka bureaucratic) decomposition. This is not the only type of decomposition possible or desirable. Sometimes a much better approach is non-hierarchical decomposition, in which some frequently used operations are implemented outside the hierarchical structure as shortcuts for typical sequences of operations. It is true that premature optimization is the root of all evil, but completely neglecting this aspect is not good either. OO programs with "stupid decomposition" tend to have unnecessarily deep procedure-call hierarchies during execution, which is not that good for modern CPUs with multistage execution pipelines and branch prediction.
Actually "true OO" is very similar to the idea of compiler-compiler as it tried to create some kind of abstract language (in a form of hierarchy of classes) that can help to solve particular problem and hopefully (often this is a false expectation) is reusable to others similar problems. But I think that more explicit approach of creating such an abstract language and a real compiler from it into some other "target" language can work better then OO.
Moreover there is a great danger in thinking just in term of hierarchy of classes well known to people who designed compilers. There is a great temptation to switch attention from the solving of the problem to the designing of a "perfect" set of classes. Instead of solving problem. Making them more elegant, more generic, more flexible. You name it. Often those refinement are not necessary for the particular problem and design became "art for the sake of art" -- completely detached from reality.
So the process of designing classes became self-perpetuating activity, disconnected with the task in hand (with usual justifications that this "universal" set of classes will help to design other problem later on the read, which never happens). The key point is that it became a very similar to addition and occupy lion share of developer time, which often dooms the problem he (or team) is trying to solve. I would call this effect OO class design addiction trap.
Moreover in a team of programmers there is often at least one member who psychologically is predisposed to this type of addiction (kind of and who instantly jump into opportunity disrupting the work of other members of the team with they constant desire to improve/change the set of classes used. Often such people as a wrong as they are fanatical and in the fanatical zeal they can do substantial damage to the team.
This "class design addiction trap" is very pronounced negative effect of OO, but people often try to hide it and never admit to it.
The OO class-design addiction trap has another side, well demonstrated in Java. People end up using so many class libraries that the application slows down considerably, and loading them at startup is a nuisance even on computers with SSDs. Moreover, subtle interactions between different versions introduce errors that are very difficult to debug with each upgrade.

In other words, the use of a huge mass of Java class libraries increases the complexity of a typical application program to the level where debugging becomes an art. And that often nullifies any savings in the design and coding phases of program development.
The rat race for generalization/abstraction of the functionality of each and every class is a distinct danger in OO programming. In the absence of a better term, let's call it "over-universalization" and understand it as a distinct tendency to consider the most generic case when designing class libraries. It is a problem of programming as an art, and the way of solving it often distinguishes a master programmer from an average one, in the sense that a master programmer knows where to stop.

OO tends to make it more pronounced, but the problem is universal in programming and exists in the design of regular procedural subroutine libraries, for example glib. See, for example, the history of development of Midnight Commander.

This distinct tendency to make classes as abstract and as generic as possible makes them less suitable for the particular problem domain. It also increases the complexity of the design and of maintenance. In other words, it often backfires. In extreme cases the class library becomes so universal that it is not well suited to any case where it could be useful, and programmers start re-implementing its primitives again instead of using the ones from the class library. Such a paradox.

The same problem, though to a lesser extent, happens with designers of libraries or modules for regular procedural or scripting languages that do not emphasize OO programming, such as Perl. You can definitely see it in cgi.pm.
The typical path of development recalls the proverb that the road to hell is paved with good intentions. I remember an example from my experience as a compiler writer. Initially, the subroutine that outputs diagnostic messages to the screen and writes them to the log is simple and useful. Then a second parameter is introduced and it becomes able to process and output message severity levels (terminal, severe, error, warning, info, etc.); then collection of statistics for all those levels is introduced; then it becomes able to expand macros, then to output the context of the error; then the ability to send messages above a certain severity via SMTP is added; and then nobody uses it in the next project. Instead, a simple subroutine that accepts a single parameter (the diagnostic message) is quickly written, and the cycle of enhancements starts again with new players.
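For reference, a sketch of that humble first version (all names are my own illustration), the one each new project quickly rewrites before the cycle of enhancements starts again:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    // Version 1 of every diagnostic routine: print the message and log it.
    // Everything beyond this (severity levels, statistics, macro expansion,
    // mail notification) is where the feature creep described above begins.
    public class Diag {
        public static void log(String msg) {
            System.err.println(msg);
            try (PrintWriter out = new PrintWriter(new FileWriter("run.log", true))) {
                out.println(msg);               // append to the log file
            } catch (IOException e) {
                System.err.println("cannot write log: " + e.getMessage());
            }
        }

        public static void main(String[] args) {
            log("undeclared identifier 'foo' at line 12");
        }
    }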
Programs rarely remain static, and invariably the original class structure becomes less useful with time. That results in more code being added as new classes, which undermines the conceptual integrity of the initial design and leads to "class hell": the number of classes grows to the level where nobody can see the whole picture, and because of this people start reinventing the wheel.

Moreover, the number of class libraries often grows to the level where just loading them at startup consumes considerable time, making Java look very slow despite significant progress on the JVM side. It looks like Gosling, in his attempt to fix some problems with C++, badly missed the prototype-based programming ideas that later found their way into JavaScript. In a blog entry he even mentioned:
Over the years I've used and created a wide variety of scripting languages, and in general, I'm a big fan of them. When the project that Java came out of first started, I was originally planning to do a scripting language. But a number of forces pushed me away from that.
James Gosling, Dec 15, 2005
There is another danger when a custom class library is used. Once it is already designed and working, people often see better ways to do something, and the temptation to introduce changes is almost irresistible. If not properly regulated, this becomes like building on shifting sand.

The class library mess that exists in Java, and that makes Java so vulnerable to exploits, suggests that there should be better paradigms for modularizing OO programs than the Simula-67-style class model. In this sense the prototype-based OO model probably deserves a second look.
One telling sign of a cult is unwillingness to discuss any alternatives. And true enough, alternative methodologies are never discussed in OO books. As we are dealing with a techno-cult, let's be realists and remember what Niccolo Machiavelli observed:
"And one should bear in mind that there is nothing more difficult to execute, nor more dubious of success, nor more dangerous to administer than to introduce a new order to things; for he who introduces it has all those who profit from the old order as his enemies; and he has only lukewarm allies in all those who might profit from the new. This lukewarmness partly stems from fear of their adversaries, who have the law on their side, and partly from the skepticism of men, who do not truly believe in new things unless they have personal experience in them."
So it is often better to "dilute" or "subvert" the OO development methodology than to openly oppose it, especially if the company brass is hell-bent on Java. Techno-cult adherents usually close ranks when they face a frontal attack. And as Paul Graham observed [The Hundred-Year Language], "it is irresistible to large organizations."

That can be done in various creative ways, so the discussion below provides just a few tips. All of them can be "squeezed" into compatibility with the use of some OO language (for example, Python can be used instead of TCL in the dual-language programming methodology), despite the fact that each of them subverts the idea of OO in some fundamental way.
As a programming methodology, OO programming competes with several others:
Using a scripting language such as TCL and a compiled language such as C in a single project has a lot of promise, as it better separates programming in the large (the glue language) from programming in the small (component programming). See also Greenspun's Tenth Rule of Programming. In a way, this is a simple implementation of an abstract machine, with C subroutines and the scripting language library representing the machine operations, and the scripting language serving as the glue (TCL for C). For many problems this "scripting language + compiled language" approach is a better paradigm of software development, as access to the implementation of the interpreter by C programmers enforces the development discipline already established in the scripting interpreter development community. And the libraries used by the interpreter are usually of very high quality and serve both as an example of how things should be done and as prevention against "reinventing the wheel": the tendency to re-implement parts of a library that are already implemented in any decent scripting interpreter. Programmers usually learn by example, and the code of even a simple interpreter like AWK or gawk is a great school. We can reformulate Greenspun's Tenth Rule of Programming as follows:

Any sufficiently complicated OO program written in Java, C++, or another OO language contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a scripting language interpreter.
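Transposed into Java terms, here is a minimal sketch of the dual-language split using the standard javax.script API; it assumes a JSR-223 JavaScript engine (such as Nashorn or GraalJS) is actually available, which a recent bare JDK does not guarantee:

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.ScriptException;

    // Programming in the small: a compiled "component" in the host language.
    class Calc {
        public double add(double a, double b) { return a + b; }
    }

    // Programming in the large: a short script glues the components together.
    public class DualLanguage {
        public static void main(String[] args) throws ScriptException {
            ScriptEngine engine =
                new ScriptEngineManager().getEngineByName("javascript");
            if (engine == null) {
                System.err.println("no JSR-223 JavaScript engine on this JDK");
                return;
            }
            engine.put("calc", new Calc());     // expose the compiled component
            // The glue layer: late-bound, easy to change without recompiling.
            engine.eval("var total = calc.add(2.5, 4.0); print('total = ' + total);");
        }
    }

In Ousterhout's original formulation, quoted below, Tcl plays this glue role for C; the sketch merely restates the same split inside a single JVM process.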
As John Ousterhout aptly put it:
I think that Stallman's objections to Tcl may stem largely from one aspect of Tcl's design that he either doesn't understand or doesn't agree with. This is the proposition that you should use *two* languages for a large software system: one, such as C or C++, for manipulating the complex internal data structures where performance is key, and another, such as Tcl, for writing small-ish scripts that tie together the C pieces and are used for extensions. For the Tcl scripts, ease of learning, ease of programming and ease of glue-ing are more important than performance or facilities for complex data structures and algorithms. I think these two programming environments are so different that it will be hard for a single language to work well in both. For example, you don't see many people using C (or even Lisp) as a command language, even though both of these languages work well for lower-level programming.
Thus I designed Tcl to make it really easy to drop down into C or C++ when you come across tasks that make more sense in a lower-level language. This way Tcl doesn't have to solve all of the world's problems. Stallman appears to prefer an approach where a single language is used for everything, but I don't know of a successful instance of this approach. Even Emacs uses substantial amounts of C internally, no?
I didn't design Tcl for building huge programs with 10's or 100's of thousands of lines of Tcl, and I've been pretty surprised that people have used it for huge programs. What's even more surprising to me is that in some cases the resulting applications appear to be manageable. This certainly isn't what I intended the language for, but the results haven't been as bad as I would have guessed.
This approach is closely connected with the idea of structuring an application as an abstract machine with well-defined primitives (opcodes). If a full language is developed (which actually is not necessary), then this language does not need to produce object code: compiling into a lower-level language such as C, C++, or Java is a more viable approach.

In this case maintenance of the application can be split into two distinct parts: maintenance of the higher-level codebase, and maintenance of the abstract machine that implements the higher-level language and the associated run-time infrastructure.

The great advantage of this approach is that it allows architects to engage in actual programming, which always leads to higher quality of the final product: many primitives can be created from pre-existing Unix utilities and programs and glued together via the shell language. See Real Insights into Architecture Come Only From Actual Programming.

As the cost of programming is heavily dependent on the level of the language used, use of a higher-level language allows one to dramatically lower the cost of development. This approach also stimulates prototyping, as often the first version of the application can be glued together from shell scripts and pre-existing Unix utilities and applications in a relatively short time, which makes the whole design process more manageable.

Even if the idea of defining the language is thrown out later and another approach to development is adopted, the positive effects of creating such a prototype can be felt for the rest of the project's development. In this sense "operate at a higher level" is not just an empty slogan.
Compilers stopped being a "black art" in the late 1970s, and this technology is greatly underutilized in modern software development. With it you can catch some high-level errors at the syntactic level, which is impossible with OO (although in many ways OO is similar to the "compiler-compiler" methodology). In a lightweight form, the problem can be structured in compiler-like form with distinct lexical analysis, syntax analysis, and code generation parts. Multipass compilation with an intermediate representation writable to disk is a great tool for solving complex problems, and it naturally allows the subsequent optimization of converting read/write statements into a coroutine interface. When the intermediate representations between passes are formally defined, they can also be analyzed for correctness. Flexible switching between writing intermediate files and coroutine linkage greatly simplifies debugging. XML can be used as a powerful intermediate representation language, although in many cases it is overkill. Some derivative of the SMTP mail message format is another commonly used representation.
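A minimal sketch of the multipass structure (the file name and the token format are my own assumptions): pass one writes a line-oriented intermediate representation to disk, pass two consumes it, and the file between them can be inspected or validated, or later replaced by coroutine-style in-memory linkage:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.List;

    // Two-pass, compiler-like structure with an on-disk intermediate representation.
    public class TwoPass {
        // Pass 1: the "lexer" turns raw input into one token per line.
        static void lexPass(String input, Path ir) throws IOException {
            List<String> tokens = new ArrayList<>();
            for (String word : input.trim().split("\\s+")) {
                String kind = word.matches("\\d+") ? "NUM" : "IDENT";
                tokens.add(kind + "\t" + word);     // trivially checkable IR format
            }
            Files.write(ir, tokens);                // the pass boundary: a plain file
        }

        // Pass 2: consumes only the IR; it never sees the raw input.
        static void countPass(Path ir) throws IOException {
            long nums = Files.readAllLines(ir).stream()
                             .filter(line -> line.startsWith("NUM")).count();
            System.out.println("numeric tokens: " + nums);
        }

        public static void main(String[] args) throws IOException {
            Path ir = Path.of("tokens.ir");
            lexPass("x 42 y 7 total", ir);
            countPass(ir);                          // debugging aid: inspect tokens.ir
        }
    }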
This is the newest methodology, often based on LAMP, in which a whole virtual instance of the OS becomes part of the application, and the application uses OS logging, the OS scheduler, etc. instead of reinventing the wheel. This is a new and promising approach for a substantial class of problems. The specialized virtual machine provides services via a network interface, for example a Web interface. The LAMP stack, which can be used in this approach, has proved to be a tremendously useful development paradigm. In most cases non-OO languages are used for the "P" part of the acronym; but Python and Ruby have well-implemented OO features, so this approach does not completely exclude the use of OO where it can be really beneficial and is not dictated by groupthink or fashion.

One important advantage of this approach is that executables in any OS are much more like objects than the classes with methods of modern OO languages. They definitely have their own state, can be interrupted and resumed, and communicate with other executables via messages (including sockets). So OS infrastructure in general can be viewed as an object-oriented environment "in the large", while all OO languages implement OO only "in the small" (see the sketch below).
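A small Java sketch of this view (assuming a Unix system with tr(1) on the PATH): the running executable is the "object", and its stdin/stdout pipe is the message channel:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;

    // An OS-level "object": a running executable with its own state, receiving
    // messages on stdin and replying on stdout.
    public class OsObject {
        public static void main(String[] args)
                throws IOException, InterruptedException {
            Process p = new ProcessBuilder("tr", "a-z", "A-Z").start();
            try (PrintWriter in = new PrintWriter(p.getOutputStream(), true);
                 BufferedReader out = new BufferedReader(
                         new InputStreamReader(p.getInputStream()))) {
                in.println("hello, object in the large");  // send a message
                in.close();                                // signal end of input
                System.out.println(out.readLine());        // read the reply
            }
            p.waitFor();
        }
    }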
In his paper Object Oriented Programming Oversold! B. Jacobs aptly noted:
OOP became popular primarily because of GUI interfaces. In fact, many non-programmers think that "Object" in OOP means a screen object such as a button, icon, or listbox. They often talk about drag-and-drop "objects". GUI's sold products. Anything associated with GUI's was sure to get market and sales brochure attention, regardless of whether this association was accurate or not. I have even seen salary surveys from respected survey companies that have a programming classification called "GUI/OOP Programming". Screen objects can correspond closely with OOP objects, making them allegedly easier to manipulate in a program. We do not disagree that OOP works fairly well for GUI's, but it is now being sold as the solve-all and be-all of programming.
Some argue that OOP is still important even if not dealing directly with GUI's. In our opinion, much of the hype about OOP is faddish. OOP in itself does NOT allow programs to do things that they could not do before. OOP is more of a program organizational philosophy rather than a set of new external solutions or operations.
He also provided a deep insight that the attractiveness of OO is somewhat similar to the attractiveness of a social doctrine like communism (with its central hierarchical planning model and idealistic hopes that it will eliminate wasteful, redundant procedures). Actually, the idea that both OO and Marxism overemphasized classes is pretty cute :-). As is the idea that full hierarchical decomposition is a close analogue of the bureaucracy that makes organizations so dysfunctional:
Unfortunately, OOP and economic communism suffer similar problems. They both get bogged down in their own bureaucracy and have a difficult time dealing with change and outside influences which are not a part of the internal bureaucracy. For example, a process may be stuck in department X because it may be missing a piece of information that the next department, Y, or later departments may not even need. Department X may not know or care that the waiting piece of information is not needed by later departments. It simply has its rules and regulations and follows them like a good little bureaucratic soldier.
This analogy may well look stretched, but highly placed "object oriented jerks" from academia really do remind me of the high priests of Marxism-Leninism in at least one respect: complete personal corruption.
In his old Usenix paper Objecting To Objects, Stephen C. Johnson wrote:

Object-oriented programming (OOP) is an ancient (25-year-old) technology, now being pushed as the answer to all the world's programming ills. While not denying that there are advantages to OOP, I argue that it is being oversold. In particular, OOP gives little support to GUI and network support, some of the biggest software problems we face today. It is difficult to constrain relationships between objects (something SmallTalk did better than C++). Fundamentally, object reuse has much more to do with the underlying models being supported than with the "objectness" of the programming language. Object-oriented languages tend to burn CPU cycles, both at compile and execution time, out of proportion to the benefits they provide. In summary, the good things about OOP are often the information hiding and consistent underlying models which derive from clean thoughts, not linguistic cliches.
In his April 2003 keynote for PyCon 2003, Paul Graham suggested [The Hundred-Year Language]:
...Somehow the idea of reusability got attached to object-oriented programming in the 1980s, and no amount of evidence to the contrary seems to be able to shake it free. But although some object-oriented software is reusable, what makes it reusable is its bottom-upness, not its object-orientedness.
Consider libraries: they're reusable because they're language, whether they're written in an object-oriented style or not.
I don't predict the demise of object-oriented programming, by the way. Though I don't think it has much to offer good programmers, except in certain specialized domains, it is irresistible to large organizations. Object-oriented programming offers a sustainable way to write spaghetti code. It lets you accrete programs as a series of patches.
Large organizations always tend to develop software this way, and I expect this to be as true in a hundred years as it is today.
...One helpful trick here is to use the length of the program as an approximation for how much work it is to write. Not the length in characters, of course, but the length in distinct syntactic elements-- basically, the size of the parse tree. It may not be quite true that the shortest program is the least work to write, but it's close enough that you're better off aiming for the solid target of brevity than the fuzzy, nearby one of least work. Then the algorithm for language design becomes: look at a program and ask, is there any way to write this that's shorter?
Dr. Nikolai Bezroukov
P.S. Newer version of the paper might be available at Object-Oriented Cult: A Slightly Skeptical View on the Object-Oriented Programming
The road to Hell is paved with good intentions. -- Proverb
The Register
So what are programmers doing wrong? One thing is too much use of inheritance. "It is obviously hugely overused," he says. "There are languages where you can't express yourself without inheritance - they fit everything into a hierarchy and it doesn't make any sense. Inheritance should come from the domain, from the problem. It is good where there is an 'is a' or 'kind of' relationship in the fundamental domain. Shapes fit into this, there is something natural there. Similarly device controllers have natural hierarchies that you should exploit. If you forget about programming languages and look at the application domain, the questions about deep or shallow inheritance answer themselves."

He also takes care to distinguish "implementation inheritance, where in some sense you want deep hierarchies so that most of the implementation is shared, and interface inheritance - where you don't care, all you want to do is to hide a set of implementations behind a common interface. I don't think people distinguish that enough."
Another bugbear is protected visibility. "When you build big hierarchies you get two kinds of users [of the classes]: the general users, and the people who extend the hierarchy. People who extend the hierarchy often need protected access. The reason I like public or private is that if it is private, nobody can mess with it.
"If I say protected, about some data, anybody can mess with it and scramble my data. That has been a problem. It is not such a problem if the protected interface really is functional, a set of functions that you have provided as support for implementers of new classes... The ideal is public or private, and sometimes out of necessity we use protected," he said.
Linux Journal
Surprise: Ancient advocacy alive :)
Gene, 09/07/2010 - 03:32.

Can not help my desire to say a couple of words. My coding experience is 25+ years, and OOP was never attractive to me. Probably that was because I always had what OOP could give, thanks to Modula-2 and Oberon-2.

modularity vs. OOP

Philosophy aside, the OOP coding practice I happened to observe was that OOP provided means mainly for:
- Modularisation (encapsulation).
- Making reusable code libraries.
IMHO that was the reason for OOP's success, and all of that PP could already give by means of modules.

Now it is time to recall that the (celebrated) OOP method is to create a number of objects and to fire up their interaction by message exchange.

Regarding its methodology, the OOP approach seems to be much more obscure than PP. Maybe that is because it is much more natural for a human being to invent an algorithm for a purpose than to build an abstract machine which would work according to some model in such a way that it implements an algorithm for a purpose...

AFAIC the only domains where the OOP metaphor fits more or less nicely are windowed GUIs (interactive graphics) and the modelling of automation system blocks.
Regards,
Gene

vova, Wed, 09/01/2010 - 17:11.
OOP is an abstraction made up of too many false hopes, thus counterproductive as 99% of people see it now ;)

Here we are, a real-life example: every comment in this thread is [from the OO point of view] a derived class of 26 ASCII letters; but how useful is such an abstraction for the matter at hand?

Larry Wall had a classic example somewhere: a radio tower and a plumbing pipe are made from a single base class, but they have almost nothing in common.
Class hierarchies,...

The vertical inheritance paradigm clearly becomes insufficient,.. then horizontal (aka transparent) inheritance comes into play, making a mess of the ideal initial picture. (Forgive me for asking,.. does Java have it?.. Perl does, but,..)

Isolation,.. Students are getting the wrong idea about it, listening to OOP fairy tales. They start to believe that an object's properties become invisible when someone's eyes are closed.

Instead of learning about modularity, decomposition, protocols, state machines, and code re-use,.. every OO writer spends most of his life reinventing the wheel, cloning classes and methods from zillions of similar ones.

The result: almost every OOP product looks like a collection of procedures, and is thus nearly impossible to adapt for lock-free threading.

I vote for data-driven modular design -- back to nature, to stop lying to ourselves.

OOP as a scientific abstraction is fine... and limited.
Gordon J Milne, 09/02/2010 - 16:02.

Some might not be able to grasp OO, but this is a small number compared to the number of people who just cannot understand pointers. For a great many people pointers remain magical. Joel has a great article (http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html) on this.
The division between OO and procedural is but a hair's breadth compared to that between those who understand pointers and those that do not.
I'm not a fan of object orientation for the sake of object orientation. Often the proper OO way of doing things ends up being a productivity tax. Sure, objects are the backbone of any modern programming language, but sometimes I can't help feeling that slavish adherence to objects is making my life a lot more difficult. I've always found inheritance hierarchies to be brittle and unstable, and then there's the massive object-relational divide to contend with. OO seems to bring at least as many problems to the table as it solves.
Perhaps Paul Graham summarized it best:
Object-oriented programming generates a lot of what looks like work. Back in the days of fanfold, there was a type of programmer who would only put five or ten lines of code on a page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming is like crack for these people: it lets you incorporate all this scaffolding right into your source code.
Something that a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file of classes and methods. So it is a good tool if you want to convince yourself, or someone else, that you are doing a lot of work.
Eric Lippert observed a similar occupational hazard among developers. It's something he calls object happiness.
What I sometimes see when I interview people and review code is symptoms of a disease I call Object Happiness. Object Happy people feel the need to apply principles of OO design to small, trivial, throwaway projects. They invest lots of unnecessary time making pure virtual abstract base classes -- writing programs where IFoos talk to IBars but there is only one implementation of each interface! I suspect that early exposure to OO design principles divorced from any practical context that motivates those principles leads to object happiness. People come away as OO True Believers rather than OO pragmatists.
I've seen so many problems caused by excessive, slavish adherence to OOP in production applications. Not that object oriented programming is inherently bad, mind you, but a little OOP goes a very long way. Adding objects to your code is like adding salt to a dish: use a little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's better to err on the side of simplicity, and I tend to favor the approach that results in less code, not more.
Given my ambivalence about all things OO, I was amused when Jon Galloway forwarded me a link to Patrick Smacchia's web page. Patrick is a French software developer. Evidently the acronym for object oriented programming is spelled a little differently in French than it is in English: POO.
March 5, 2007
Todd Blanchard
Well, there's objects and then there's Objects. I work in Smalltalk -- real objects everywhere -- and it feels pretty natural.

OTOH, Java's Objects(TM) are characterized by cargo cult engineering. Lots of form without function. Factory is just one pattern that is horrifically overused in that world, and usually for no good reason.
You have to know when to use sense. Very rare in software. Sometimes a script is just a script.
Opeth
Making something ridiculously complex for the sake of making it simple is like trying to put out a fire with gasoline.
Some programmers just need to take a deep breath and write code that is a delicious salami sandwich, and not an extravagantly prepared four course meal that tastes like shit.
Ed
"It has been said that democracy is the worst form of government except all the others that have been tried." - Churchill
Erm, I guess people do go object-crazy. The problem, as I'm sure is documented elsewhere, is the crappy teaching phase driving home that "OO is all about inheritance" when it's not. Inheritance is a powerful tool that is sorely abused. Most of my object hierarchies are flat; I mark all classes as sealed unless I do intend for someone to derive from them, and I don't create interfaces until I really need them (and usually it's only for testing, so I can swap in a test implementation).
OO to me just provides a better way to hide implementation and abstract ideas away so I can create more complex, but logically simpler programs because I don't have to hold onto all the nuances of everything at once. It's no panacea, but it is nicer to work with when done right.
Anyways, don't throw the baby out with the bathwater. Just because some cars suck, do you stop driving altogether? So, until something better comes along... (March 5, 2007, 1:51 AM)
Phil Deneka
I second Mr. Haack's thoughts. I was very fortunate in both high school and college in having teachers who taught both the thinking structure for OOP and why it works. We consistently had to work in groups and be able to read each others' code at a glance and understand what it did, how, and why.
I didn't understand just how important that was until many years later. It has shaped every program I've touched since.
Cesar Viteri
I read somewhere the following: "The difference between a terrorist and an Object-Oriented Methodologist is that you can try to negotiate with the terrorist." A lot of people who behave as you describe in this post make it come true :o)
Excellent post, keep it coming :o)
Thomas Flamer
When I studied computer science at the University of Oslo, we had a lecturer called Kristen Nygaard, who actually invented object-oriented programming. He invented OOP for a language called Simula as a technique for modeling real-world objects and behaviours. You do not have to program OOP in Simula, unlike in Java.
dnm
I like one of the quotes in Damian Conway's Perl Best Practices:
Always write your code as though it will have to be maintained by an angry axe murderer who knows where you live...
Eric Turner
I think this can be summed up very easily. Bad programming is bad programming, no matter the language or the technique used. VB has a bad rap because so many bad programmers coded in it. C had lots of good programmers in the beginning. Not sure why in either case, but still true.

OO can be equally bad. A language or technique is neither good nor bad; bad use or implementation of it, however, is bad. Programmers should be aware of the strengths and weaknesses of everything they use; if they aren't, can they really call themselves programmers? I would say they are just coders. Programmers use the strengths of languages and techniques to reduce the weaknesses. If you don't, then you are just a coder pretending to be a programmer.
Rabid Wolverine
They used to call it spaghetti code; OO architects like to call it lasagna code; however, most of the time OOP winds up as ravioli code…
ok, we got OOP. we got POO.
Dave
Tom: Let's make another acronym: Perfect at Object Oriented Programming (POOP). You can make this a certification that people can get by taking an exam or something. It can be sponsored or standardized by different vendors. You can be an MCSPOOP, or a Sun-certified POOP. You can have all different flavors of POOP. (yuck)
OOP is an excursion into futility.
It is oversold, and rarely are the benefits worth the costs.
Far from making code clearer it generally adds to obfuscation.
A methodology or tool, adopted with religious fervour, cannot substitute for good design and high quality coding.
What is required is clear thought, clear structure, well chosen names, and precise, accurate, and pertinent commenting.
There is nothing that can be done in C++ or Java that could not be done quicker, more clearly and just as effectively in C, and in other problem domains completely different languages such as LISP and Prolog are in any case more appropriate tools.
The drawbacks of OOP become most apparent when trying to maintain an OOP horror. The sequence of procedure calls and the values of variables were easily tracked in old-fashioned C; troubleshooting is an order of magnitude more difficult in C++ or Java.
Inheritance is more trouble than it's worth. Under the doubtful disguise of the holy "code reuse" an insane amount of gratuitous complexity is added to our environment, which makes necessary industrial quantities of syntactical sugar to make the ensuing mess minimally manageable.
See Also

- Bad Engineering Properties of Object-Oriented Languages by Luca Cardelli.
- Why OO Sucks by Joe Armstrong.
- Pitfalls of Object Oriented Programming by Tony Albrecht of Sony Computer Entertainment Europe, Research & Development Division.
- Object-Oriented Considered Harmful by Frans Faase.
- Object Oriented Programming Oversold!
- I Hate Patterns by Parand Tony Darugar.
- Why Arc Isn't Particularly Object-Oriented by Paul Graham.
- The questions about inheritance in the Java IAQ.
Re: So you want to learn object oriented now? (Score:5, Informative) -- by smallfries (601545), Saturday February 28, @02:32AM (#27021181)
I would read it as sarcasm. Try reading this manifesto [pbm.com] and updating Fortran to C to account for 20 years of shift in the industry. Anyone not using C is just eating Quiche.
Although his joke went over your head, it is worth pointing out that OO is not a paradigm. I know Wikipedia thinks that it is, and so do a horde of practically illiterate researchers publishing crap papers in junk conferences. But that doesn't make it true.

Object Orientation is just a method of [name space] organization for procedural languages. Although it helps code maintenance and does a better job of unit management than modules alone, it doesn't change the underlying computational paradigm.
I say procedural languages because class-based programming in functional languages is actually a different type of beast although it gets called OO to appeal to people from an imperative background.
BCS - The Chartered Institute for IT
Historically, research suggests that students have always found computer programming difficult; the abstract nature of programming involving problem solving and logical thinking requires a certain aptitude, and the necessary skills and disciplines are not always easy to learn and execute.
Even students who are bright and successful in other areas of study often struggle to grasp the basics of programming, and this has traditionally led to higher than average failure and drop-out rates. Many students end up disillusioned and look for ways to avoid the subject later in the programme.
Modern programming paradigms, based upon the object-oriented programming (OOP) paradigm, and introduced in recent years, have additional complex concepts and constraints associated with them.
OOP languages such as Java and VB.NET are now widely used for teaching introductory programming modules in many universities. These place an additional cognitive burden on students over and above the already difficult programming principles associated with all programming languages.
Many students complain that they find it difficult to understand some of the complexities associated with object orientation. Trying to deal with these concepts at an early stage leads to having less time to focus on more fundamental principles and often results in students having a poorer understanding of the basics.
Add to this the need to include modern windows programming environments with graphics controls and event handling, and it all becomes too much for many students to handle; they simply cannot see the wood for the trees.
If the principles of OOP are introduced too early it may lead to cognitive overload for some students resulting in confusion and disillusionment with the subject.
This additional complexity makes the problem of teaching contemporary programming at an introductory level even more acute and if not addressed is likely to lead to even higher failure and drop-out rates in the early phases of computing programmes, with more students trying to avoid programming at all costs.
Why is it that students find programming courses more difficult than they did in the past?
One reason is that the range of abilities of student cohorts has undoubtedly widened in recent years. Another is simply that OOP is more complex and difficult to understand. It has often been suggested that the difficulty of teaching and understanding a programming language can be gauged by examining the complexity of the ubiquitous 'Hello World' program.
The 'Hello World' program illustrates the simplest form of human-computer interaction (HCI); it sends a text message from a computer program to the user, displayed on the screen. 'Hello World' will be familiar to many computer lecturers and students as it is considered to be the most basic of programs, and is normally used as the first program example in many undergraduate programming text books and introductory programming modules.
To illustrate the additional complexity of OOP, consider as a simple metric the comparison of the program code for 'Hello World' written in Pascal, a language used in many universities to teach introductory programming in the past, and in Java, a contemporary OOP language widely used commercially and in universities today to teach programming.
PASCAL:
program HelloWorld;
begin
  write ('Hello World')
end.

JAVA:
class Message
{
public static void main (String args[ ])
{
Message helloWorld = new Message ( );
helloWorld.printMessage ( );
}
void printMessage ( )
{
System.out.print ("Hello World");
}
}

These two programs perform exactly the same function. It is not difficult to see that the early-generation Pascal program is very simple and easy to understand; most students, and even most ordinary adults, would have no problem understanding what is going on.
1994 | USENIX
Object Oriented Programming (OOP) is currently being hyped as the best way to do everything from promoting code reuse to forming lasting relationships with persons of your preferred sexual orientation. This paper tries to demystify the benefits of OOP. We point out that, as with so many previous software engineering fads, the biggest gains in using OOP result from applying principles that are older than, and largely independent of, OOP. Moreover, many of the claimed benefits are either not true or true only by chance, while occasioning some high costs that are rarely discussed. Most seriously, all the hype is preventing progress in tackling problems that are both more important and harder: control of parallel and distributed applications, GUI design and implementation, fault tolerant and real-time programming. OOP has little to offer these areas. Fundamentally, you get good software by thinking about it, designing it well, implementing it carefully, and testing it intelligently, not by mindlessly using an expensive mechanical process.
Define Your Terms
Object Oriented Programming (OOP) is a term largely borrowed from the SmallTalk community, who were espousing many of these techniques in the mid-1970's. In turn, many of their ideas derive from Simula 67, as do most of the core ideas in C++. Key notions such as encapsulation and reuse have been discussed as far back as the 60's, and received a lot of discussion during the rounds of the Ada definition. Although there have been, and will always be, religious fanatics who think their language is the only way to code, the really organized OOP hype started in the late 1980's. By the early 1990's, both NeXT and Microsoft were directing their marketing muscle into persuading us to give up C and adopt C++, while SmallTalk and Eiffel both were making a respectable showing, and object oriented operating systems and facilities (DOE, PenPoint, CORBA) were getting a huge play in the trade press -- the hype wars were joined.
It is said that countries get the governments they deserve, and perhaps that is true of professions as well--a lot of the energy fueling this hype derives from the truly poor state of software development. While hardware developers have provided a succession of products with radically increasing power and lower cost, the software world has seen very little productivity improvement. Major, highly visible products from industry leaders continue to be years late (Windows NT), extremely buggy (Solaris) or both, costs skyrocket, and, most seriously, people are very reluctant to pay 1970's software costs when they are running cheap 1990's hardware. I believe a lot of non-specialists look at software development and see it as so completely screwed up that the cause cannot be profound--it must be something simple, something a quick fix could fix. Maybe if they just used objects...
To be more precise, most of what I say will apply to C++, viewed as a poor stepchild by most of the OOP elite. Actually, the few comments I will make about more dynamically typed languages like SmallTalk make C++ look good by comparison. I will also focus my concern fairly narrowly. I am interested in tools, including languages, that make it easier and more productive to generate large serious high quality software products. So focusing rules out a bunch of sometimes entertaining philosophical and aesthetic arguments best entertained over beer.
... ... ...
What Works in OOP
Those who report big benefits from using OOP are not lying. Many of the reported benefits come from focusing on designing the software models, including the roles and interactions of the modules, enabling the modules to encapsulate expertise, and carefully designing the interfaces between these modules. While most OOP systems allow you, and even encourage you, to do these things, most older programming systems allow these techniques as well. These are good, old ideas that have proved their worth in the trenches for decades, whether they were called OOP, structured programming, or just common sense. I have seen excellent programs written in assembler that used these principles, and terrible programs in C++ that did not. The use of objects and inheritance is not what makes these programs good.
What works in all these cases is that the programs were well thought out and the design was done intelligently, based on a clear and well communicated set of organizing principles. The language and the operating system just don't matter. In many cases, the same organizing principles used to guide the design can be used to guide the construction and testing of the product as well. What makes a piece of software good has a lot to do with the application of thought to the problem being addressed, and not much to do with what language or methodology you used. To the extent that the OOP methodology makes you think problems through and forces you to make hidden assumptions explicit, it leads to better code.
OOP Claims Unmasked
The hype for OOP usually claims benefits such as faster development time, better code reuse, and higher quality and reliability of the final code. As the last section shows, these are not totally empty claims, but when true they don't have much to do with OOP methodology. This section examines these claims in more detail.
OOP is supposed to allow code to be developed faster; the question is, "faster than what?". Will OOP let you write a parser faster than Yacc, or write a GUI faster than using a GUI-builder? Will your favorite OOP replace awk or Perl or csh within a few years? I think not.

Well, maybe faster than C, and I suppose if we consider only raw C this claim has some validity. But a large part of most OOP environments is a rich set of classes that allow the user to manipulate the environment -- build windows, send messages across a network, receive keystrokes, etc. C, by design, has a much thinner package of such utilities, since it is used in so many different environments. There were some spectacularly productive environments based on LISP a few years back (and not even the most diehard LISP fanatic would say that LISP is object oriented). A lot of what made these environments productive was a rich, well designed set of existing functions that could be accessed by the user. And that is a lot of what makes OOP environments productive compared to raw C. Another way of saying this is that a lot of the productivity improvement comes from code reuse.
There is probably no place where the OOP claims are more misleading than the claims of code reuse. In fact, code reuse is a complex and difficult problem--it has been recognized as desirable for decades, and the issues that make it hard are not materially facilitated by OOP.
In order for me to reuse your code, your code needs to do something that I want done (that's the easy part), and your code needs to operate within the same model of the program and environment as my code (that's the hard part). OOP addresses some of the gratuitous problems that occasionally plagued code reuse attempts (for example, issues of data layout), but the fundamental problems are, and remain, hard.
An example should make this clearer. One of the most common examples of a reused program is a string package (this is particularly compelling in C++, since C has such limited string handling facilities). Suppose you have written a string package in C++, and I want to use it in my compiler symbol table. As it happens, many of the strings that a compiler uses while compiling a function do not need to be referenced after that function has been compiled. This is commonly dealt with by providing an arena-based allocator, where storage can be allocated out of an arena associated with a function, and then the whole arena can be discarded when the function has been processed. This minimizes the chance of memory leaks and makes the deallocation of storage essentially free (Similar techniques are used to handle transaction-based storage in a transaction processing system, etc.).
So, I want to use your string package, but I want your string package to use my arena-based allocator. But, almost certainly, you have encapsulated knowledge of storage allocation so that I can't have any contact with it (that is a feature of OOP, after all), so I can't use your package with my storage allocator. Actually, I would probably have more luck reusing your package had it been in C, since I could supply my own malloc and free routines (although that has its own set of problems).

If you had designed your string package to allow me to specify the storage allocator, then I could use it. But this just makes the point all the more strongly. The reason we do not reuse code is that most code is not designed to be reused (notice I said nothing about implementation). When code is designed to be reused (the C standard library comes to mind) it doesn't need object oriented techniques to be effective. I will have more to say about reuse by inheritance below.
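To make the author's fix concrete, here is a minimal sketch (mine, not from the paper; the names are invented) of a string class that takes its allocator as an explicit constructor parameter, so a caller can plug in an arena:

#include <cstddef>
#include <cstring>

// The allocator interface the string codes against.
struct Allocator {
    virtual void* allocate(std::size_t n) = 0;
    virtual void deallocate(void* p) = 0; // an arena may make this a no-op
    virtual ~Allocator() {}
};

// A string that borrows all of its storage from a caller-supplied allocator.
class String {
    Allocator& alloc_;
    char* data_;
    String(const String&);            // copying elided in this sketch
    String& operator=(const String&);
public:
    String(const char* s, Allocator& a) : alloc_(a) {
        data_ = static_cast<char*>(alloc_.allocate(std::strlen(s) + 1));
        std::strcpy(data_, s);
    }
    ~String() { alloc_.deallocate(data_); }
    const char* c_str() const { return data_; }
};

An arena allocator would implement deallocate as a no-op and release the whole arena at once when the compiler finishes a function. The reuse succeeds only because the dependency on storage was designed into the interface, which is exactly the paper's point.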
One of the major long-term advantages of object-oriented techniques may be that they can support broad algorithmic reuse, of a style similar to the Standard Template Library of C++. However, the underlying language is enormously overbuilt for such support, allowing all sorts of false traps and dead-ends for the unwary. The Standard Template Library took several generations and a dozen of the best minds in the C++ community to reach its current state, and it is no accident that several of the early generations were coded in Ada and SCHEME -- its power is not in the language, but in the ideas.
The final advantage claimed for OOP is higher quality code. Here again, there is a germ of truth to this claim, since some mistakes that were easy to make with older methods (such as name clashes in libraries) are harder to make and easier to detect using OOP. To the extent that we can reuse "known good" code, our quality will increase -- this doesn't depend on OOP. However, code quality basically depends on intelligent design, an effective implementation process, and aggressive testing. OOP does not address the first or last step at all, and falls short in the implementation step.
For example, we might wish to enforce some simple style rules on our object implementations, such as requiring that every object have a serialize method for dumping the object to disc. The best that many object-oriented systems can do is provide you (or, rather, your customer) with a run-time error when you try to dump an object to disc that has not defined such a method (C++ actually does a bit better than that). Many of the more dynamically typed systems, such as SmallTalk or PenPoint, do not provide any typing of arguments of messages, or enforce any conventions as to which messages can be sent to which objects. This makes messages as unstructured as GOTO's were in the 1970's, with a similar impact on correctness and quality.

One of the most unfortunate effects of the OOP bandwagon is that it encourages the belief that how you speak is more important than what you say. It is rather like suggesting that if someone uses perfect English grammar they must be truthful. It is what you say, and not how you say it.
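Returning to the serialize example above: the "bit better" that C++ does is presumably the pure virtual function, which turns the missing-method failure into a compile-time error rather than a run-time one. A minimal sketch (my illustration; the class names are invented):

#include <ostream>

// Every persistent object must implement serialize; the compiler,
// not the customer, catches omissions.
class Persistent {
public:
    virtual void serialize(std::ostream& out) const = 0; // pure virtual
    virtual ~Persistent() {}
};

class Account : public Persistent {
    int balance_;
public:
    explicit Account(int b) : balance_(b) {}
    void serialize(std::ostream& out) const { out << balance_; }
};

// A class that inherits from Persistent but forgets serialize stays
// abstract: `new Forgot()` fails to compile instead of failing at the
// customer's site at run time.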
... ... ...
He said that She said that He had Halitosis
Using a computer language is a social, and even political act, akin to voting for a candidate or buying a certain brand of car. As such, our choices are open to manipulation by marketeers, influence by fads, and various forms of rationalization by those who were burned and have trouble admitting it. In particular, much of what is "known" about a language is something that was true, or at least widely believed, at one point in the language's history, but may not be true currently. At one point, "everybody" knew that PL/I had no recursive functions, ALGOL 68 was too big a language to be useful, Ada was too slow, and C could not be used for numerical problems. Some of these beliefs were never true, and none of them are true now, but they are still widely held. It is worth looking at OOP in this light.
Some of the image manipulators target nontechnical people such as our bosses and customers, and may try to persuade them that OOP would solve their problems. As we have seen, however, many of the things that are "true" of OOP (for example, that it makes reuse easy) are difficult to justify when you look more carefully. As professionals, it is our responsibility to ask whether moving to OOP is in the best interests of ourselves, our company, or our profession. We must also have the courage to reject the fad when it is a diversion or will not meet our needs. We must also make this decision anew for each project, considering all the potential factors. Realistically, the answer will probably be that some projects should use OOP, others should not, and for a fair number in the middle it doesn't matter very much.
Summary
The only way to construct good software is to think about it. Since the scope of problems that software attempts to address is so vast, the kinds of solutions that we need are also vast. OOP is a good tool to have in our toolbox, and there are places where it is my tool of choice. But there are also places where I would avoid it like the plague. It is important to all of us that we continue to have that option.
Blog Archive KILLERPHP.COM
OO is definitely overkill for a lot of web projects. It seems to me that so many people use OO frameworks like Ruby and Zope because "it's enterprise level". But using an 'enterprise' framework for small to medium sized web applications just adds so much overhead and frustration at having to learn the framework that it just doesn't seem worth it to me. Having said all this I must point out that I'm distrustful of large corporations and hate their dehumanizing hierarchical structure. Therefore i am naturally drawn towards open source and away from the whole OO/enterprise/hierarchy paradigm. Maybe people want to push open source to the enterprise level in the hope that they will adopt the technology and therefore they will have more job security. Get over it - go and learn Java and .NET if you want job security and preserve open source software as an oasis of freedom away from the corporate world. Just my 2c.
===
OOP has its place, but the diversity of frameworks is just as challenging to figure out as a new class you didn't write, if not more. None of them work the same or keep a standard convention between them that makes learning them easier. Frameworks are great, but sometimes I think maybe they don't all have to be OO. I keep a small personal library of functions I've (and others have) written procedurally and include them just like I would a class. Beyond the overhead issues is complexity. OOP has you chasing declarations over many files to figure out what's happening. If you're trying to learn how that unique class you need works, it can be time consuming to read through it and see how the class is structured. By the time you're done you may as well have written the class yourself, at least by then you'd have a solid understanding. Encapsulation and polymorphism have their advantages, but the cost is complexity which can equal time. And for smaller projects that will likely never expand, that time and energy can be a waste.
Not trying to bash OOP, just to defend procedural style. They each have their place.
===
Sorry, but I don't like your text, because you mix Ruby and Ruby on Rails a lot. Ruby is in my opinion easier to use than PHP, because PHP has no design principle beside "make it work, somehow easy to use". Ruby has some really cool stuff I miss quite often when I have to program in PHP again (blocks, for example), and has a clearer and more logical syntax.
Ruby on Rails is of course not that easy to use, at least when speaking about small-scale projects. This is because it does a lot more than PHP does. Of course, there are other good reasons to prefer PHP over Rails (like the better support by providers, more modules, more documentation), but in my opinion, most projects done in PHP of the complexity of a blog could profit from being programmed in Rails, from the purely technical point of view. At least I won't program in PHP again unless a customer asks me to.
===
I have a reasonable level of experience with PHP and Python but unfortunately haven't touched Ruby yet. They both seem to be a good choice for low-complexity projects. I can even say that I like Python a lot. But I would never consider it again for projects where design is an issue. They also say it is for (rapid) prototyping. My experience is that as long as you can't afford a proper IDE, Python is maybe the best place to go. But a properly "equipped" environment can formidably boost your productivity with a statically typed language like Java. In that case Python's advantage shrinks to the benefits of quick tests accessible through its command line.
Another problem of Python is that it wants to be everything: simple and complete, flexible and structured, high-level while allowing for low-level programming. The result is a series of obscure features.
Having said all that, I must give Python all the credit of a good language. It's just not perfect. Maybe Ruby is. My apologies for not sticking too closely to the subject of the article.
===
The one thing I hate is OOP geeks trying to prove that they can write code that does nothing useful and that nobody understands.
"You don't have to use OOP in ruby! You can do it PHP way! So you better do your homework before making such statements!"
Then why use ruby in the first place?
"What is really OVERKILL to me, is to know the hundreds of functions, PHP provides out of the box, and available in ANY scope! So I have to be extra carefull wheter I can use some name. And the more functions - the bigger the MESS."
On the other hand, in ruby you use only functions avaliable for particullar object you use.
I would rather say: "some text".length than strlen("some text"); which is much more meaningful! Ruby language itself much more descriptive. I remember myself, from my old PHP days, heaving alwayse to look up the php.net for appropriate function, but now I can just guess!"
Yeah you must have weak memory and can`t remember wheter strlen() is for strings or for numbers….
Doesn`t ruby have the same number of functions just stored in objects?
Look if you can`t remember strlen than invent your own classes you can make a whole useless OOP framework for PHP in a day……
April 2003 | Keynote from PyCon2003
...I have a hunch that the main branches of the evolutionary tree pass through the languages that have the smallest, cleanest cores. The more of a language you can write in itself, the better.
...Languages evolve slowly because they're not really technologies. Languages are notation. A program is a formal description of the problem you want a computer to solve for you. So the rate of evolution in programming languages is more like the rate of evolution in mathematical notation than, say, transportation or communications. Mathematical notation does evolve, but not with the giant leaps you see in technology.
...I learned to program when computer power was scarce. I can remember taking all the spaces out of my Basic programs so they would fit into the memory of a 4K TRS-80. The thought of all this stupendously inefficient software burning up cycles doing the same thing over and over seems kind of gross to me. But I think my intuitions here are wrong. I'm like someone who grew up poor, and can't bear to spend money even for something important, like going to the doctor.
Some kinds of waste really are disgusting. SUVs, for example, would arguably be gross even if they ran on a fuel which would never run out and generated no pollution. SUVs are gross because they're the solution to a gross problem. (How to make minivans look more masculine.) But not all waste is bad. Now that we have the infrastructure to support it, counting the minutes of your long-distance calls starts to seem niggling. If you have the resources, it's more elegant to think of all phone calls as one kind of thing, no matter where the other person is.
There's good waste, and bad waste. I'm interested in good waste-- the kind where, by spending more, we can get simpler designs. How will we take advantage of the opportunities to waste cycles that we'll get from new, faster hardware?
The desire for speed is so deeply engrained in us, with our puny computers, that it will take a conscious effort to overcome it. In language design, we should be consciously seeking out situations where we can trade efficiency for even the smallest increase in convenience.
Most data structures exist because of speed. For example, many languages today have both strings and lists. Semantically, strings are more or less a subset of lists in which the elements are characters. So why do you need a separate data type? You don't, really. Strings only exist for efficiency. But it's lame to clutter up the semantics of the language with hacks to make programs run faster. Having strings in a language seems to be a case of premature optimization.
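Graham's claim is easy to check in any language with generic sequences (my illustration, not from the keynote): once a string is just a list of characters, the generic list operations subsume the dedicated string functions.

#include <algorithm>
#include <iostream>
#include <list>

int main() {
    // A "string" represented as a plain list of characters.
    std::list<char> s;
    const char* hello = "hello";
    for (const char* p = hello; *p; ++p) s.push_back(*p);

    std::cout << s.size() << "\n";                             // strlen
    std::cout << std::count(s.begin(), s.end(), 'l') << "\n";  // occurrences
    s.reverse();                                               // strrev
    for (std::list<char>::iterator it = s.begin(); it != s.end(); ++it)
        std::cout << *it;
    std::cout << "\n";                                         // "olleh"
    return 0;
}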
... Inefficient software isn't gross. What's gross is a language that makes programmers do needless work. Wasting programmer time is the true inefficiency, not wasting machine time. This will become ever more clear as computers get faster.
...Somehow the idea of reusability got attached to object-oriented programming in the 1980s, and no amount of evidence to the contrary seems to be able to shake it free. But although some object-oriented software is reusable, what makes it reusable is its bottom-upness, not its object-orientedness. Consider libraries: they're reusable because they're language, whether they're written in an object-oriented style or not.
I don't predict the demise of object-oriented programming, by the way. Though I don't think it has much to offer good programmers, except in certain specialized domains, it is irresistible to large organizations. Object-oriented programming offers a sustainable way to write spaghetti code. It lets you accrete programs as a series of patches. Large organizations always tend to develop software this way, and I expect this to be as true in a hundred years as it is today.
...As this gap widens, profilers will become increasingly important. Little attention is paid to profiling now. Many people still seem to believe that the way to get fast applications is to write compilers that generate fast code. As the gap between acceptable and maximal performance widens, it will become increasingly clear that the way to get fast applications is to have a good guide from one to the other.
...One of the most exciting trends in the last ten years has been the rise of open-source languages like Perl, Python, and Ruby. Language design is being taken over by hackers. The results so far are messy, but encouraging. There are some stunningly novel ideas in Perl, for example. Many are stunningly bad, but that's always true of ambitious efforts. At its current rate of mutation, God knows what Perl might evolve into in a hundred years.
...One helpful trick here is to use the length of the program as an approximation for how much work it is to write. Not the length in characters, of course, but the length in distinct syntactic elements-- basically, the size of the parse tree. It may not be quite true that the shortest program is the least work to write, but it's close enough that you're better off aiming for the solid target of brevity than the fuzzy, nearby one of least work. Then the algorithm for language design becomes: look at a program and ask, is there any way to write this that's shorter?
An extensive discussion of subtyping, insidious problems with subclassing, and practical rules to avoid them.
- Does OOP really separate interface from implementation?
- The manifestation of a problem: an example of how an implementation inheritance prevents separation of interface and implementation
- Subtyping vs. Subclassing
- Explanation why the problem above happened
- Subclassing errors, OOP style, and practically checkable rules to prevent them
- Demonstration how statically checkable rules can prevent the problem from occurring [a separate document]
A more formal and general presentation of this topic is given in a paper and a talk at a Monterey 2001 workshop (June 19-21, 2001, Monterey, CA): Subtyping-OOP.ps.gz [35K] and MTR2001-Subtyping-talk.ps.gz [67K]
Does OOP really separate interface from implementation?
Decoupling of abstraction from implementation is one of the holy grails of good design. Object-oriented programming in general and encapsulation in particular are claimed to be conducive to such separation, and therefore to more reliable code. In the end, productivity and quality are the only true merits a programming methodology is to be judged upon. This article shows a very simple example that questions whether OOP indeed helps separate interface from implementation. The example is a very familiar one, illustrating the difference between subclassing and subtyping. The article carries this example of Bags and Sets one step further, to a rather unsettling result. The article set out to follow good software engineering practice; this makes the resulting failure even more ominous.
The article aims to give a more-or-less "real" example, which one can run to see the result for oneself. By necessity the example had to be implemented in some language. The present article uses C++. It appears however that similar code (with similar conclusions) can be written in many other OO languages (e.g., Java, Python, etc).
Suppose I was given a task to implement a Bag -- an unordered collection of possibly duplicate items (integers in this example). I chose the following interface:
typedef int const * CollIterator;     // Primitive but will do

class CBag {
public:
    int size(void) const;             // The number of elements in the bag
    virtual void put(const int elem); // Put an element into the bag
    int count(const int elem) const;  // Count the number of occurrences
                                      // of a particular element in the bag
    virtual bool del(const int elem); // Remove an element from the bag
                                      // Return false if the element
                                      // didn't exist
    CollIterator begin(void) const;   // Standard enumerator interface
    CollIterator end(void) const;

    CBag(void);
    virtual CBag * clone(void) const; // Make a copy of the bag
private:
    // implementation details elided
};
Other useful operations of the CBag package are implemented without the knowledge of CBag's internals. The functions below use only the public interface of the CBag class:
// Standard "print-on" operator ostream& operator << (ostream& os, const CBag& bag); // Union (merge) of the two bags // The return type is void to avoid complications with subclassing // (which incidental to the current example) void operator += (CBag& to, const CBag& from); // Determine if CBag a is subbag of CBag b bool operator <= (const CBag& a, const CBag& b); inline bool operator >= (const CBag& a, const CBag& b) { return b <= a; } // Structural equivalence of the bags // Two bags are equal if they contain the same number of the same elements inline bool operator == (const CBag& a, const CBag& b) { return a <= b && a >= b; }
It has to be stressed that the package was designed to minimize the number of functions that need to know details of CBag's implementation. Following good practice, I wrote validation code (file vCBag.cc [Code]) that tests all the functions and methods of the CBag package and verifies common invariants.
Suppose you are tasked with implementing a Set package. Your boss defined a set as an unordered collection where each element has a single occurrence. In fact, your boss even said that a set is a bag with no duplicates. You have found my CBag package and realized that it can be used with few additional changes. The definition of a Set as a Bag, with some constraints, made the decision to reuse the CBag code even easier.
class CSet : public CBag {
public:
    bool memberof(const int elem) const { return count(elem) > 0; }

    // Overriding of CBag::put
    void put(const int elem)
    { if( !memberof(elem) ) CBag::put(elem); }

    CSet * clone(void) const
    {
        CSet * new_set = new CSet();
        *new_set += *this;
        return new_set;
    }
    CSet(void) {}
};
The definition of a CSet makes it possible to mix CSets and CBags, as in set += bag; or bag += set; These operations are well-defined, keeping in mind that a set is a bag that happens to have the count of all members exactly one. For example, set += bag; adds all elements from a bag to a set, unless they are already present. bag += set; is no different than merging a bag with any other bag.

You too wrote a validation suite to test all CSet methods (newly defined and inherited from a bag) and to verify common expected properties, e.g., that a += a is a.

In my package, I have defined and implemented a function:
// A sample function. Given three bags a, b, and c, it decides
// if a+b is a subbag of c
bool foo(const CBag& a, const CBag& b, const CBag& c)
{
    CBag & ab = *(a.clone());  // Clone a to avoid clobbering it
    ab += b;                   // ab is now the union of a and b
    bool result = ab <= c;
    delete &ab;
    return result;
}
It was verified in the regression test suite. You have tried this function on sets, and found it satisfactory.

Later on, I revisited my code and found my implementation of foo() inefficient. Memory for the ab object is unnecessarily allocated on the heap. I rewrote the function as
bool foo(const CBag& a, const CBag& b, const CBag& c)
{
    CBag ab;
    ab += a;                   // Clone a to avoid clobbering it
    ab += b;                   // ab is now the union of a and b
    bool result = ab <= c;
    return result;
}
It has exactly the same interface as the original foo(). The code hardly changed. The behavior of the new implementation is also the same -- as far as I and the package CBag are concerned. Remember, I have no idea that you're re-using my package. I re-ran the regression test suite with the new foo(): everything tested fine.

However, when you run your code with the new implementation of foo(), you notice that something has changed! You can see this for yourself: download the complete code from [Code]. make vCBag1 and make vCBag2 run validation tests with the first and the second implementations of foo(). Both tests complete successfully, with identical results. make vCSet1 and make vCSet2 test the CSet package. The tests -- other than those of foo() -- all succeed. Function foo() however yields markedly different results. It is debatable which implementation of foo() gives truer results for CSets. In any case, changing the internal algorithm of a pure function foo() while keeping the same interface is not supposed to break your code. What happened?

What makes this problem more unsettling is that both you and I tried to do everything by the book. We wrote safe, typechecked code. We eschewed casts. The g++ (2.95.2) compiler with flags -W and -Wall issued not a single warning. Normally these flags cause g++ to become very annoying. You didn't try to override methods of CBag to deliberately break the CBag package. You attempted to preserve CBag's invariants (weakening a few as needed). Real-life classes usually have far more obscure algebraic properties. We both wrote regression tests for our implementations of a CBag and a CSet, and they passed. And yet, despite all my efforts to separate interface and implementation, I failed. Should a programming language or the methodology take at least a part of the blame? [OOP-problems]
Subtyping vs. Subclassing
The problem with CSet is that its design breaks the Liskov Substitution Principle (LSP) [LSP]. CSet has been declared as a subclass of CBag. Therefore, the C++ compiler's typechecker permits passing a CSet object or a CSet reference to a function that expects a CBag object or reference. However, it is well known [Subtyping-Subclassing] that a CSet is not a subtype of a CBag. The next few paragraphs give a simple proof of this fact, for the sake of reference.
One approach is to consider Bags and Sets as pure values, without any state or intrinsic behavior -- just like integers are. This approach is taken in the next article, Preventing-Trouble.html. The other point of view -- the one used in this article -- is that of Object-Oriented Programming: objects encapsulate state and behavior. Behavior means an object can accept a message, send a reply, and possibly change its state. Let us consider a Bag and a Set separately, without regard to their possible relationship. Throughout this section we use a different, concise notation to emphasize the general nature of the argument.
We will define a Bag as an object that accepts two messages:
(send a-Bag 'put x)
- puts an element x into the Bag, and
(send a-Bag 'count x)
- gives the count of occurrences of x in the Bag (without changing a-Bag's state).
Likewise, a Set is defined as an object that accepts two messages:
(send a-Set 'put x)
- puts an element x into a-Set unless it was already there,
(send a-Set 'count x)
- gives the count of occurrences of x in a-Set (which is always either 0 or 1).
Let's consider a function
(define (fnb bag)
  (send bag 'put 5)
  (send bag 'put 5)
  (send bag 'count 5))
The behavior of this function can be summed up as follows: given a Bag, the function adds two elements into it and returns (+ 2 (send orig-bag 'count 5)).

Technically you can pass to fnb a Set object as well. Just as a Bag, a Set object accepts messages put and count. However, applying fnb to a Set object will break the function's post-condition stated above. Therefore, passing a set object where a bag was expected changes the behavior of the program. According to the Liskov Substitution Principle (LSP), a Set is not substitutable for a Bag -- a Set cannot be a subtype of a Bag.

Let's consider a function
(define (fns set)
  (send set 'put 5)
  (send set 'count 5))

The behavior of this function is: given a Set, the function adds an element into it and returns 1. If you pass to this function a bag (which -- just as a set -- replies to messages put and count), the function fns may return a number greater than 1. This will break fns's contract, which promised always to return 1.

Therefore, from the OO point of view, neither a Bag nor a Set is a subtype of the other. This is the crux of the problem. Bag and Set only appear similar. The interface or implementation of a Bag and a Set appear to invite subclassing of a Set from a Bag (or vice versa). Doing so however will violate the LSP -- and you have to brace for very subtle errors. The previous section intentionally broke the LSP to demonstrate how insidious the errors are and how difficult it may be to find them. Sets and Bags are very simple types, far simpler than the ones you deal with in production code. Alas, the LSP, when considered from an OOP point of view, is undecidable. You cannot count on a compiler for help in pointing out an error. You cannot rely on regression tests either. It's manual work -- you have to see the problem [OOP-problems].
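The same argument can be rendered directly in the C++ classes defined earlier (my sketch, not from the article):

// For any genuine CBag, fnb returns the bag's original count of 5 plus 2.
// The typechecker happily accepts a CSet here, since CSet derives from
// CBag -- but then the second put is silently dropped and the
// post-condition fails.
int fnb(CBag& bag)
{
    bag.put(5);
    bag.put(5);
    return bag.count(5);
}

No cast and no warning is involved; the violation surfaces only in behavior.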
Subtyping and Immutability
One may claim that "A Set *is not a* Bag, but an ImmutableSet *is an* ImmutableBag." That is not correct. Immutability per se does not confer subtyping on "derived" classes of data. As an example, consider a variation of the previous argument. We will use C++ syntax for a change. The examples hold if re-written in Java, Haskell, Self or any other language with a native or emulated OO system.
class BagV {
    virtual BagV put(const int) const;
    int count(const int) const;
    ... // other similar const members
};

class SetV {
    virtual SetV put(const int) const;
    int count(const int) const;
    ... // other similar const members
};
Instances of BagV and SetV classes are immutable, yet the classes are not subtypes of each other. To see that, let us consider a polymorphic function
template <typename T> int f(const T& t)
{
    return t.put(1).count(1);
}
Over a set of BagV instances, the behavior of this function can be represented by an invariant
f(bag) == 1 + bag.count(1)
If we take an object asetv = SetV().put(1) and pass it to f(), the invariant above will be broken. Therefore, by the LSP, a SetV is not substitutable for a BagV: a SetV is not a BagV.

In other words, if one defines
int fb(const BagV& bag) { return bag.put(1).count(1); }
he can potentially pass a SetV instance to it: e.g., either by making SetV a subclass of BagV, or by reinterpret_cast<const BagV&>(aSetV). Doing so will generate no overt error; yet this will break fb()'s invariant and alter the program's behavior in unpredictable ways. A similar argument shows that a BagV is not a subtype of a SetV.

C++ objects are record-based. Subclassing is a way of extending records, possibly altering some slots in the parent record. Those slots must be designated as modifiable by the keyword virtual. In this context, prohibiting mutation and overriding makes subclassing imply subtyping. This was the reasoning behind BRules [Preventing-Trouble.html].
However, merely declaring the state of an object immutable is not enough to guarantee that derivation leads to subtyping: an object can override its parent's behavior without altering the parent. This is easy to do when an object is implemented as a functional closure, when a handler for an incoming message is located with the help of some kind of reflexive facilities, or in prototype-based OO systems. Incidentally, if we do permit a derived object to alter its base object, we implicitly allow behavior overriding. For example, an object A can react to a message M by forwarding the message to an object B stored in A's slot. If an object C derived from A alters that slot, it hence overrides A's behavior with respect to M.

For example, http://pobox.com/~oleg/ftp/Scheme/index.html#pure-oo implements a purely functional OO system. It supports objects with an identity, state and behavior, inheritance and polymorphism. Everything in that system is immutable. And yet it is possible to define something like a BagV, and derive SetV from it by overriding a put message handler. Acting this way is bad and invites trouble, as it breaks the LSP as shown earlier. Yet it is possible. This example shows that immutability per se does not turn object derivation into subtyping.

The present page is a compilation and extension of two articles posted on the comp.object, comp.lang.functional, and comp.lang.c++.moderated newsgroups on Jun 18 and Jul 14, 2000.
Acknowledgment

Andy Gaynor has asked the right questions. This article is merely an answer.

Discussion thread: http://www.deja.com/viewthread.xp?AN=644379349.1&search=thread&recnum=%[email protected]%3e%231/5&group=comp.object&frpage=viewthread.xp
I'm not a fan of object orientation for the sake of object orientation. Often the proper OO way of doing things ends up being a productivity tax. Sure, objects are the backbone of any modern programming language, but sometimes I can't help feeling that slavish adherence to objects is making my life a lot more difficult. I've always found inheritance hierarchies to be brittle and unstable, and then there's the massive object-relational divide to contend with. OO seems to bring at least as many problems to the table as it solves.
Perhaps Paul Graham summarized it best:
Object-oriented programming generates a lot of what looks like work. Back in the days of fanfold, there was a type of programmer who would only put five or ten lines of code on a page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming is like crack for these people: it lets you incorporate all this scaffolding right into your source code. Something that a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file of classes and methods. So it is a good tool if you want to convince yourself, or someone else, that you are doing a lot of work.

Eric Lippert observed a similar occupational hazard among developers. It's something he calls object happiness.
What I sometimes see when I interview people and review code is symptoms of a disease I call Object Happiness. Object Happy people feel the need to apply principles of OO design to small, trivial, throwaway projects. They invest lots of unnecessary time making pure virtual abstract base classes -- writing programs where IFoos talk to IBars but there is only one implementation of each interface! I suspect that early exposure to OO design principles divorced from any practical context that motivates those principles leads to object happiness. People come away as OO True Believers rather than OO pragmatists.

I've seen so many problems caused by excessive, slavish adherence to OOP in production applications. Not that object oriented programming is inherently bad, mind you, but a little OOP goes a very long way. Adding objects to your code is like adding salt to a dish: use a little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's better to err on the side of simplicity, and I tend to favor the approach that results in less code, not more.
Given my ambivalence about all things OO, I was amused when Jon Galloway forwarded me a link to Patrick Smacchia's web page. Patrick is a French software developer. Evidently the acronym for object oriented programming is spelled a little differently in French than it is in English: POO.
Dec 14, 2006 | InfoWorld
Joel Spolsky is one of our most celebrated pundits on the practice of software development, and he's full of terrific insight. In a recent blog post, he decries the fallacy of "Lego programming" -- the all-too-common assumption that sophisticated new tools will make writing applications as easy as snapping together children's toys. It simply isn't so, he says -- despite the fact that people have been claiming it for decades -- because the most important work in software development happens before a single line of code is written.
By way of support, Spolsky reminds us of a quote from the most celebrated pundit of an earlier generation of developers. In his 1987 essay "No Silver Bullet," Frederick P. Brooks wrote, "The essence of a software entity is a construct of interlocking concepts ... I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation ... If this is true, building software will always be hard. There is inherently no silver bullet."
As Spolsky points out, in the 20 years since Brooks wrote "No Silver Bullet," countless products have reached the market heralded as the silver bullet for effortless software development. Similarly, in the 30 years since Brooks published " The Mythical Man-Month" -- in which, among other things, he debunks the fallacy that if one programmer can do a job in ten months, ten programmers can do the same job in one month -- product managers have continued to buy into various methodologies and tricks that claim to make running software projects as easy as stacking Lego bricks.
Don't you believe it. If, as Brooks wrote, the hard part of software development is the initial design, then no amount of radical workflows or agile development methods will get a struggling project out the door, any more than the latest GUI rapid-development toolkit will.
And neither will open source. Too often, commercial software companies decide to turn over their orphaned software to "the community" -- if such a thing exists -- in the naive belief that open source will be a miracle cure to get a flagging project back on track. This is just another fallacy, as history demonstrates.
In 1998, Netscape released the source code to its Mozilla browser to the public to much fanfare, but only lukewarm response from developers. As it turned out, the Mozilla source was much too complex and of too poor quality for developers outside Netscape to understand it. As Jamie Zawinski recounts, the resulting decision to rewrite the browser's rendering engine from scratch set the project back anywhere from six to ten months.
This is a classic example of the fallacy of the mythical man-month. The problem with the Mozilla code was poor design, not lack of an able workforce. Throwing more bodies at the project didn't necessarily help; it may have even hindered it. And while implementing a community development process may have allowed Netscape to sidestep its own internal management problems, it was certainly no silver bullet for success.
The key to developing good software the first time around is doing the hard work at the beginning: good design, and rigorous testing of that design. Fail that, and you've got no choice but to take the hard road. As Brooks observed all those years ago, successful software will never be easy. No amount of open source process will change that, and to think otherwise is just more Lego-programming nonsense.
Resolved: Objects Have Failed
I participated in a debate on the question "Objects Have Failed" at OOPSLA 2002 in Seattle, Washington. My teammate was Brian Foote, and our opponents were Guy L. Steele Jr. and James Noble. My opening remarks were scripted, as were Guy Steele's, and my rebuttals were drawn from an extensive set of notes.
November 6, 2002
Opening remarks
What can it mean for a programming paradigm to fail? A paradigm fails when the narrative it embodies fails to speak truth or when its proponents embrace it beyond reason. The failure to speak truth centers around the changing needs of software in the 21st century and around the so-called improvements on OO that have obliterated its original benefits. Obsessive embrace has spawned a search for purity that has become an ideological weapon, promoting an incremental advance as the ultimate solution to our software problems. The effect has been to brainwash people on the street. The statement "everything is an object" says that OO is universal, and the statement "objects model the real world" says that OO has a privileged position. These are very seductive invitations to a totalizing viewpoint. The result is to starve research and development on alternative paradigms.
Someday, the software we have already written will be a set of measure 0. We have lived through three ages of computing -- the first was machine coding; the second was symbolic assemblers, interpreter routines, and early compilers; and the third was imperative, procedural, and functional programming, and compiler-based languages. Now we are in the fourth: object-oriented programming. These first four ages featured single-machine applications. Even though such systems will remain important, increasingly our systems will be made up of dozens, hundreds, thousands, or millions of disparate components, partial applications, services, sensors, and actuators on a variety of hardware, written by a variegated set of developers, and it won't be incorrect to say that no one knows how it all works. In the old world, we focussed on efficiency, resource limitations, performance, monolithic programs, standalone systems, single author programs, and mathematical approaches. In the new world we will foreground robustness, flexibility, adaptation, distributed systems, multiple-author programs, and biological metaphors for computing.
Needless to say, object-orientation provides an important lens through which to understand and fashion systems in the new world, but it simply cannot be the only lens. In future systems, unreliability will be common, complexity will be out of sight, and anything like carefully crafted precision code will be unrealistic. It's like a city: Bricks are important for building part of some buildings, but the complexity and complicated way a variety of building materials and components come together under the control of a multitude of actors with different cultures and goals, talents and proclivities means that the kind of thinking that goes into bricks will not work at the scale of the city. Bricks are just too limited, and the circumstances where they make sense are too constrained to serve as a model for building something as diverse and unpredictable as a city. And further, the city itself is not the end goal, because the city must also -- in the best case -- be a humane structure for human activity, which requires a second set of levels of complexity and concerns. Using this metaphor to talk about future computing systems, it's fair to say that OO addresses concerns at the level of bricks.
The modernist tendency in computing is to engage in totalizing discourse in which one paradigm or one story is expected to supply all in every situation. Try as they might, OO's promoters cannot provide a believable modernist grand narrative to the exclusion of all others. OO holds no privileged position. So instead of Java, for example, embracing all the components developed elsewhere, its proponents decided to develop their own versions so that all computing would be embraced within the Java narrative.
Objects, as envisioned by the designers of languages like Smalltalk and Actors -- long before C++ and Java came around -- were for modeling and building complex, dynamic worlds. Programming environments for languages like Smalltalk were written in those languages and were extensible by developers. Because the philosophy of dynamic change was part of the post-Simula OO worldview, languages and environments of that era were highly dynamic.
But with C++ and Java, the dynamic thinking fostered by object-oriented languages was nearly fatally assaulted by the theology of static thinking inherited from our mathematical heritage and the assumptions built into our views of computing by Charles Babbage, whose factory-building worldview was dominated by omniscience and omnipotence.
And as a result we find that object-oriented languages have succumbed to static thinkers who worship perfect planning over runtime adaptability, early decisions over late ones, and the wisdom of compilers over the cleverness of failure detection and repair.
Beyond static types, precise interfaces, and mathematical reasoning, we need self-healing and self-organizing mechanisms, checking for and responding to failures, and managing systems whose overall complexity is beyond the ken of any single person.
One might think that such a postmodern move would have good consequences, but unlike Perl, the combination was not additive but subtractive -- as if by undercutting what OO was, OO could be made more powerful. This may work as a literary or artistic device, but the idea in programming is not to teach but to build.
The apparent commercial success of objects and our love affair with business during the past decade have combined to stifle research and exploration of alternative language approaches and paradigms of computing. University and industrial research communities retreated from innovating in programming languages in order to harvest the easy pickings from the OO tree. The business frenzy at the end of the last century blinded researchers to diversity of ideas, and they were into going with what was hot, what was uncontroversial. If ever there was a time when Kuhn's normal science dominated computing, it was during this period.
My own experience bears this out. Until 1995, when I went back to school to study poetry, my research career centered on the programming language Lisp. When I returned in 1998, I found that my research area had been eliminated. I was forced to find new ways to earn a living within the ecology created by Java, which was busily recreating the computing world in its own image.
Smalltalk, Lisp, Haskell, ML, and other languages languish while C++, Java, and their near-clone C# are the only languages getting attention. Small languages like Tcl, Perl, and Python are gathering adherents, but are making no progress in language and system design at all.
Our arguments come in several flavors:
- The object-oriented approach does not adequately address the computing requirements of the future.
- Object-oriented languages have lost the simplicity - some would say purity - that made them special and that was the source of their expressive and development power.
- Powerful concepts like encapsulation were supposed to save people from themselves while developing software, but encapsulation fails for global properties or when software evolution and wholesale changes are needed. Open Source handles this better. It's likely that modularity -- keeping things local so people can understand them -- is what's really important about encapsulation.
- Objects promised reuse, and we have not seen much success.
- Despite the early clear understanding of the nature of software development by OO pioneers, the current caretakers of the ideas have reverted to the incumbent philosophy of perfect planning, grand design, and omniscience inherited from Babbage's theology.
- The over-optimism spawned by objects in the late 1990s led businesses to expect miracles that might have been possible with objects unpolluted by static thinking, and when software developers could not deliver, the outrageous business plans of those businesses fell apart, and the result was our current recession.
- Objects require programming by creating communicating entities, which means that programming is accomplished by building structures rather than by linguistic expression and description through form, and this often leads to a mismatch of language to problem domain.
- Object design is like creating a story in which objects talk and interact with each other, leading people to expect that learning object-oriented programming is easy, when in fact it is as hard as ever. Again, business was misled.
- People enthused by objects hogged the road, would not get out of the way, would not allow alternatives to be explored -- not through malice but through exuberance -- and now resources that could be used to move ahead are drying up. But sometimes this exuberance was out-and-out lying to push others out of the way.
But in the end, we don't advocate changing the way we work on and with objects and object-oriented languages. Instead, we argue for diversity, for work on new paradigms, for letting a thousand flowers bloom. Self-healing, self-repair, massive and complex systems, self-organization, adaptation, flexibility, piecemeal growth, statistical behavior, evolution, emergence, and maybe dozens of other ideas and approaches we haven't thought of -- including new physical manifestations of non-physical action -- should be allowed and encouraged to move ahead.
This is a time for paradigm definition and shifting. It won't always look like science, won't always even appear to be rational; papers and talks explaining and advocating new ideas might sound like propaganda or fiction or even poetry; narrative will play a larger role than theorems and hard results. This will not be normal science.
In the face of all this, it's fair to say that objects have failed.
5/14/2005
OOP Myths Debunked:
The real difference is not which style is chosen, but what education, training, and understanding is brought to bear on the project. Rather than debating process vs. commitment, we should be looking for ways to raise the average level of developer and manager competence. That will improve our chances of success regardless of which development style we choose.
I'm fairly sure you could accurately gauge the maturity of a programming team by the amount of superstition in the source code they produce. Code superstitions are a milder form of cargo cult software development, in which you find people writing code constructs that have no conceivable value with respect to the functions that the code is meant to fulfill.
A recent conversation reminded me of an example I find particularly disturbing. Sample code for dealing with JDBC is especially prone to being littered with this particular error, as shown below. (I suspect that is not coincidental; I'll be coming back to that.) I have elided most braces for clarity and terseness - imagine that this is a cross between Java and Python:
import java.sql.*;

public class JdbcSample {
    public static void main(String[] args) {
        Connection conn = null;
        try
            conn = DriverManager.getConnection("jdbc:someUrl");
            // ...more JDBC stuff...
        catch (SQLException ex)
            // Too often that is silently ignored, but that's another blog entry
        finally
            if (conn != null)
                try conn.close();
                catch (SQLException sqlEx)
                conn = null;
    }
}

The "superstition" part is that setting the connection to null can have absolutely no useful effect; being a local variable, "conn" becomes eligible for garbage collection as soon as it goes out of scope anyway, which the most rudimentary flow-control analysis shows happens immediately after it is set to null.
I am always particularly interested in finding out what goes on in the minds of programmers who write this kind of thing, because that will sometimes reveal the roots of the superstition. Most of the time, though, if you raise the question in a design review the programmer will say something like "I copied and pasted it from sample code". This is how the superstitions spread - and it's also a red flag with respect to the team's practice maturity - but rarely an occasion to gain insight into why the superstition took hold, which is what you'll need to know in "remedial" training.
Now, the "null" concept, obvious as it seems, is a likely place for superstitions to accrete around. If you look closely, "null" is nothing but obvious. Comparing Java and Smalltalk, for instance, we find that they differ radically with respect to calling instance methods on null, or "nil" as it's called in Smalltalk; "nil" does have some instance methods you can call. Also, what is the type of the "null" value in Java ? It is a special type called "the null type", which looks like a sensible answer but incidentally breaks the consistency of the type system; the only types which are assignable to variables are the type of the variable or subtypes of that type, so "null type" should be a subclass of every Java class. (It actually works that way in Eiffel, as Nat Pryce reminds me - see comments.)
See also here for another example of a null-related Java superstition, also surprisingly common, as you can verify by Googling for "equals null".
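The linked example is not reproduced here, but judging by the search phrase it is presumably the x.equals(null) idiom. A minimal sketch of why that check is pure superstition (class and method names are mine):

    public class EqualsNullSuperstition {
        static boolean isMissing(String name) {
            // Superstitious check: name.equals(null) can never usefully succeed.
            // If name really were null, the call itself would throw a
            // NullPointerException; if name is not null, any sane equals()
            // returns false for a null argument. Either way the test is dead weight.
            // return name.equals(null);
            return name == null;   // the working idiom
        }
        public static void main(String[] args) {
            System.out.println(isMissing(null));     // true
            System.out.println(isMissing("book"));   // false
        }
    }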
In the case of JDBC, I would bet that idioms of resource allocation and deallocation inherited from non-garbage-collected languages, like C, were the main force in establishing the superstition. Even people new to Java get used to not calling "dispose" or "delete" to deallocate objects, but unfortunately the design of the JDBC "bridges" between the object and relational worlds suffers from a throwback to idioms of explicit resource allocation/deallocation.
Owing to what many see as a major design flaw in Java, "going out of scope" cannot be relied on as an indicator that a resource is no longer in use, either, so whenever they deal with JDBC Java programmers are suddenly thrown back into a different world, one where deallocation is something to think about, like not forgetting your keys at home. And so, in precisely the same way as I occasionally found myself patting my pockets to check for home keys when I left the office, our fingers reflexively type in the closest equivalent we find in Java to an explicit deallocation - setting to null.
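Java has since grown a direct fix for this throwback: from Java 7 on, try-with-resources closes the connection automatically, with no finally gymnastics and nothing left to null out. A minimal sketch, reusing the hypothetical "jdbc:someUrl" from the sample above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class JdbcSampleTwr {
        public static void main(String[] args) {
            // conn is closed automatically on exit from the try block,
            // whether it exits normally or by exception
            try (Connection conn = DriverManager.getConnection("jdbc:someUrl")) {
                // ...more JDBC stuff...
            } catch (SQLException ex) {
                ex.printStackTrace();   // at least don't swallow it silently
            }
        }
    }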
You may object that the setting-to-null superstition is totally harmless. So is throwing salt over your shoulder. While this may be true of one particular superstition, I would be particularly concerned about a team which had many such habits, just like you wouldn't want to trust much of importance your batty old aunt who avoids stepping on cracks, stays home on Fridays, crosses herself on seeing a black cat, but always sends you candy for Christmas.
Posted by Morendil at November 15, 2004 04:57 PM
Laurent Bossavit explains the notion of "Cargo Cult" programming - the example being setting a temporary variable to null (i.e., one that is going out of scope)
What superstitious coding practices does your group have?
Comments
null helps GC yes? no? [john mcintosh] November 15, 2004 19:23:32 EST
I once had a fellow phone me from Hong Kong to explain a performance problem they were having. It seems that at the end of each method, and in each "destroy" method for a class (used to destroy instances), they would set all the variables to NULL. The best was, of course, iterating over thousands of array elements, setting them to NULL, since they felt this was helping the GC find NULL (garbaged) variables faster. Once they stopped doing this, windows just snapped closed....
Empty Java constructor [Jason Dufair] November 16, 2004 10:08:33 EST
I'm on a team doing Java right now. I see a lot of empty Java constructors. Being a Smalltalker making a living doing Java, I figured they must be there for a reason. Come to find out an empty constructor just calls the super's constructor. As if it weren't there in the first place. Whee!
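The point generalizes into a two-class sketch (class names are mine): both classes below behave identically, because a class with no declared constructor gets a default one that does nothing but call the superclass constructor.

    class WithEmptyConstructor {
        WithEmptyConstructor() {
            super();   // explicit, but adds nothing the compiler wouldn't add
        }
    }

    class WithoutConstructor {
        // No constructor declared: the compiler supplies
        // WithoutConstructor() { super(); } automatically.
    }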
This paper is a personal account of my experience of teaching Java programming to undergraduate and postgraduate students. These students enter their respective subjects with no previous Java programming knowledge. However, the undergraduate students have previous experience with Visual Basic programming. In contrast, the postgraduate students are enrolled in a "conversion" course which, in most cases, means that they were unfamiliar with any form of programming language or, in some cases, some core information technology skills. Irrespective of these differences, I have witnessed how both groups independently develop what can be described as a trade-based culture, with similarities to 'cargo cults', around the Java language. This anthropological term provides a useful term of reference, as the focus of programming activity for many students increasingly centres upon the imitation of code gathered from the lecturer or, in some cases, each other. This is particularly evident as project deadlines approach. In extreme examples of this cargo cult fever, students will discard potentially strong project developments that incorporate many features of good software design in favour of inelegant, cobbled-together code on the single criterion of better functionality.

In this paper I use the concept of the cargo cult to frame the differing expectations surrounding "learning Java" that are held by students and their lecturer. I draw upon my own observations and experiences as the teacher in these learning environments and upon feedback from my most recent cohort of undergraduate students undertaking a BSc (Hons) programme within a UK university. The student feedback is drawn from a questionnaire containing six questions relating to their experiences and expectations regarding a Java programming subject. The definition and description of the cargo cult is also used to consider how this relationship can be established in a way that encourages positive learning outcomes through the obligations and reciprocation associated with gifts – in this case, clearly labeled gifts of code. The cargo cult and the erroneous form of thinking associated with it provide a useful framework for understanding the teaching and learning environment in which I taught Java. In this way the interactions and motivations of students and lecturers, who ultimately share the common goal of the students' academic success, can be scrutinized with the aim of improving this experience for all those involved. The cargo cult is not, however, 'simply' an anachronistic analogy drawn from social anthropology. Cargo cult thinking has been identified within contemporary culture as readily as in tribal cultures, and with equal significance (Hirsch 2002; Cringely 2001; Fitzgerald 1999; Feynman 1974).
2. Cargo Cult Thinking
It is important to acknowledge that cargo cult thinking is not necessarily the 'wrong' way of thinking, and this paper does not seek to castigate students' study practices. Cargo cult thinking is based, in part, on conclusions drawn from only partially observed phenomena. In many respects this paper is a reflexive exercise regarding my own teaching practices and an examination of the ways in which cargo cult thinking can be employed to achieve positive learning outcomes. Nonetheless, despite this acknowledgement, the actions of cargo cult followers are based upon a "fallacious reasoning" of cause and effect. This could be summarized, in the context of Java programming, as the assumption that if I, as a student, write my code like you, the lecturer, do, or use your code as much as possible, I will be a programmer like you - and this is what is required for me to do well in, or at least pass, this subject.

However, as teachers of Java it is necessary to acknowledge the - perhaps dormant - presence of this attitude and consequently to offer offhand code examples with extreme caution. I have repeatedly spotted examples of my own code embedded within students' projects. Although the code may originally have been offered as a quick and incomplete example of a concept or a particular line of thinking, it can too readily become the cornerstone of larger-scale classes without modification. It is perhaps unsurprising, then, that the cargo cult attitude does develop among students when they are first learning a programming language and the concepts of programming. The consequence of pursuing this belief unchecked parallels the effects of learning in a "Java for Dummies" manner. Deeper conceptual understanding and problem-solving techniques remain undeveloped, and students are left able only to imitate the step-by-step procedures outlined by the textbook. This step-by-step form of explicit instruction discourages exploration and keeps students from appreciating the learning that occurs when they disentangle Java compiler errors. This is perhaps one of the most revealing differences between students and lecturers: while experienced programmers use compiler errors as diagnostic feedback, new programmers see them as "just one more thing" getting in the way of a successfully executing application. This suggests a lack of awareness that programming is not synonymous with writing code. The consuming focus in the majority of undergraduate and postgraduate assessment projects is upon pursuing and obtaining functionality in the code, to the detriment of the user interface, clear documentation, class structures, code reusability, extensibility or reliability.
When code is reused, and especially when code is acquired from outsourced teams or incorporated via Web services technologies, there's a real opportunity for cargo cult practices to take hold. Source code may follow unfamiliar naming conventions, and design documents and internal memos may be written in unfamiliar languages, or in a language that we know by people who don't speak that language very well. We may not even have the source - we may have only WSDL (Web Services Description Language) or some other interface definition to guide us. The wooden headphones may bear fancy names like "design patterns," but they're still an indicator that we may be building systems that look like those that have worked before - instead of designing from deep understanding toward solutions that meet new needs.
"The first principle is that you must not fool yourself," said the late physicist Richard Feynman in the 1974 Caltech commencement address that's often considered the origin of the "cargo cult" phrase, at least as used by coders. That's a good principle. Reusing code that we don't understand or reusing familiar methods merely because we do understand them are behaviors for which we should be on guard.
May 7, 1998
In the not-so-recent past, headlines proclaimed, "Software ICs Will Revolutionize Computer Programming." The claim was that development would be reduced to assembling standardized "objects," and that the need for programmers would decline as software "technicians" with minimal training developed the software of the future.
Ten years have passed, and this clearly hasn't happened. Skilled programmers are in greater demand, the skill levels required are higher, and software is harder to develop. The business press says that nirvana is now just around the corner; companies that have the words "object-oriented" in their business plan are in demand among venture firms. Yet object-oriented methodologies are over 20 years old. Are today's technological forecasts any more accurate than those of 10 years ago?
This is not to say that OO can't work. There are examples of successful OO projects; usually these are showcase projects staffed with top developers. For the most part, however, object-oriented technology has not been the "magic bullet." In this article, I'll briefly discuss some reasons that OO has thus far failed to deliver. More importantly, I'll address some ways that organizations with average programmers can achieve high levels of reuse and shorten development cycles.
Let me count the ways
The principal benefit cited for object-oriented methodologies is "reuse." This sounds like a valuable benefit; if we improve reuse, we write less code. Less code means faster development and easier maintenance in the future. Less code also means fewer chances for bugs, so it indirectly affects product quality. However, industry watchers report that there is only 15 percent average reuse in today's object-based projects. That's a pretty damning statistic, if true; we did better 20 years ago with COBOL subroutines! Others have cited different statistics; one major consulting firm reports 25 percent reuse across clients, and some academic centers report 80 percent reuse. So what's the real story?
All of these figures raise the question: "How do you measure reuse?" Is reuse a measure of code that is referenced in more than one place? (Subroutines could do that before OO.) Is code referenced in 50 places counted differently from code referenced in two places? One measure of reuse might be the size of an application developed using OO technology versus one developed using a different technology. This measurement, however, is impossible to perform, as such paired systems don't exist. Further, a search of the literature turns up no widely used standard for measuring reuse.
Yet another complication is the granularity involved in measuring reuse. The usual unit is the object itself. But no one looks inside the object. One can create a simple object that can be used for only one specific function. This object can be made to serve more functions (thus improving its reuse) by adding methods to it. Perhaps, however, the same programming benefit could have been achieved by creating a new object for the additional functions rather than enhancing the first object with additional methods. The amount of programming work is the same in both cases, but the bulkier single object with additional methods counts for a higher level of reuse to most people, even though this object is carrying around a lot of unused "baggage" in any one instantiation.
The bottom line is that there is no practical objective way to measure reuse. Anyone out to make a point (positive or negative) about reuse can find a metric to prove that point. This creates a new problem. If you can't measure something, how can you improve it? For the time being, we will have to assume that we know good reuse when we see it, even if we can't measure it. We can do this by observing how long it takes to develop an application or how much code it takes to develop the application (assuming experienced, competent programmers). By using this subjective approach, it is apparent to most developers that we are still losing ground.
Objects and Components
Agreeing on what constitutes an "object" is a fundamental problem with object-oriented technology. In theory, an object represents a real-world entity, such as a person, vehicle, merchandise, etc. Yet most programmers think of objects as processing entities -- listboxes, text widgets, windows, etc. While it would be possible to start with widgets and, through encapsulation and inheritance, end up with, say, vehicles, developers just don't do this when building real systems. So one problem is that most OO development is not truly object oriented, but rather programming with predefined widgets. Just because you are programming in C++ does not mean that you are doing object-oriented development. As we used to say, "Real FORTRAN programmers can write FORTRAN in any language" -- and real procedural programmers can write procedural code in C++.
There is a well-established theoretical basis for object-oriented methodology. Even if some developers don't understand it, don't use it correctly, or disagree with it, there is a body of reference material that precisely defines objects and regulates their use.
The computer industry has recently begun to shift focus from "objects" to "components" as the answer to our dreams. But what is a component? Some simply use the term "component" as another name for a widget. I have a catalog in front of me that purports to offer "components." It includes charting tools, a cryptographic package, a Text Edit control developer's kit, a collection of widgets (grids, trees, notebooks, meters, etc.), communications drivers, and similar entities. This definition of "component" is not the answer we are seeking, however.
A search of the literature doesn't help, either. There are many articles that discuss components, but few that actually define a component. Industry expert Judith Hurwitz says, "Components are made up of business rules, application functionality, data, or resources that are encapsulated to allow reuse in multiple applications." Alan Radding, who writes about multi-tier development, responds, "In [Judith] Hurwitz Consulting's hypertier scheme, everything in effect ends up as a component." Don Kiely, writing about components for IEEE's Computer magazine, never actually defines components, but he does define "framework assemblies" as groups of components "that could be plugged into an application as easily as individual components." This is a significant statement because it shows that Kiely, Hurwitz, and Radding are thinking along the same lines, even if they use different words. Kiely also makes the useful observation that, "to be truly effective, components should be portable and inter-operable across applications," something that I will come back to later.
Slashdot
jon_c
common misconception
One common misconception is that one cannot do object-oriented design in C, or in any other language that isn't approved by the OOP zealots. This is just not true; while it may be more natural to write a good object-oriented design in C++, Java or Smalltalk, it can also be done in C or BASIC.

One can create objects in C by creating a structure, then passing that structure to every function that operates on it. A common use could be something like this:
struct window_t win;

window_init(&win);
window_draw(&win);
window_destroy(&win);

It is also possible to perform polymorphism and inheritance with function pointers and other techniques.
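The post leaves those "other techniques" unspecified, but the usual device is a struct of function pointers acting as a hand-rolled vtable. A minimal sketch in the same C style as the window example (all names are mine, not from the post):

    #include <stdio.h>

    /* A "method table": one function pointer per overridable operation. */
    struct shape_t {
        void (*draw)(struct shape_t *self);
    };

    static void circle_draw(struct shape_t *self) { printf("drawing a circle\n"); }
    static void square_draw(struct shape_t *self) { printf("drawing a square\n"); }

    int main(void) {
        struct shape_t circle = { circle_draw };
        struct shape_t square = { square_draw };
        struct shape_t *shapes[] = { &circle, &square };

        /* Late binding: the call site doesn't know which draw() it invokes. */
        for (int i = 0; i < 2; i++)
            shapes[i]->draw(shapes[i]);
        return 0;
    }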
Omega
I tend to agree with the author..
OOP (IMHO -- I'm crazy for the acronyms today) is just a fad, like structured programming was before it. Unfortunately a lot of companies today fall into "trendy" programming methodologies. Personally, I believe you should program using the style you're most comfortable and familiar with. If you're trying to fit a mold, it will slow you down.
AlgUSF
The biggest problem with OOP is when people use it too much, and end up with like a million classes.
Duh, the comparison is simple! [Hairy_Potter]

Both communism and OOP rely on the concept of classes for the fundamental flavor.
These problems form obstacles to the further development of object-oriented software engineering, and in some situations are beginning to cause its outright rejection. Such problems can be solved either by a variety of ad hoc tools and methodologies, or by progress in language technology (both design and implementation). Here are some things that could or should be done in the various areas.
- Economy of execution. Much can be done to improve the efficiency of method invocation by clever program analysis, as well as by language features (e.g. by "final" methods and classes); this is the topic of a large and promising body of current work. We also need to design type systems that can statically check many of the conditions that now require dynamic subclass checks.
- Economy of compilation. We need to adopt languages and type systems that allow the separate compilation of (sub)classes, without resorting to recompilation of superclasses and without relying on "private" information in interfaces.
- Economy of small-scale development. Improvements in type systems for object-oriented languages will improve error detection and the expressiveness of interfaces. Much promising work has been done already and needs to be applied or further deployed [1] [5].
- Economy of large-scale development. Major progress should be achieved by formulating and enforcing inheritance interfaces: the contract between a class and its subclasses (as opposed to the instantiation interface, which is essentially an object type). This recommendation requires the development of adequate language support. Parametric polymorphism is beginning to appear in many object-oriented languages, and its interactions with object-oriented features need to be better understood. Subtyping and subclassing must be separated. Similarly, classes and interfaces must be separated (see the sketch after this list).
- Economy of language features. Prototype-based languages have already tried to reduce the complexity of class-based languages by providing simpler, more composable features. Even within class-based languages, we now have a better understanding of how to achieve simplicity and orthogonality, but much remains to be done. How can we design an object-oriented language that is powerful and simple; one that allows powerful engineering but also simple and reliable engineering?
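To make the separation of classes and interfaces concrete in Java terms (my gloss, not the original author's): clients depend on an interface, the pure type, while subclassing remains a private implementation matter. A minimal sketch with invented names:

    import java.util.ArrayList;
    import java.util.List;

    interface Stack<T> {                        // the type: all that clients may rely on
        void push(T item);
        T pop();
    }

    class ArrayStack<T> implements Stack<T> {   // one implementation among many
        private final List<T> items = new ArrayList<>();
        public void push(T item) { items.add(item); }
        public T pop() { return items.remove(items.size() - 1); }
    }

    public class StackDemo {
        public static void main(String[] args) {
            Stack<String> s = new ArrayStack<>();  // implementations can be swapped
            s.push("subtyping != subclassing");    // without clients naming ArrayStack again
            System.out.println(s.pop());
        }
    }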
Contents
- To Main
- OOP Questions & Answers
- Table Oriented Programming
- Critique of Bertrand Meyer's OOSC2 NEW!
- Code Challenge to OO Fans NEW!
- Tabled GUI's (alternative to OOP GUI's)
- Buzz-Words (Incoherentance, Entrapsulation, Polydwarfism, and others)
- An OOP Forum
- Competing Paradigms
- OOP's Goals
- Why More Don't Speak Up
- Subtype Proliferation Myth
- The Driver Pattern: A Narrow Niche?
- OOP Criticism Part 2 (includes Black Box issues)
- Code and scenario examples: Shapes, Bank, Publications
External Links
- Reuse Not High according to Dr. Dobb's
- Other Comments on Reuse and OOP (Wikiwiki forums)
- Nuts to OOP (The "emperor's clothes" reference was made independently)
- More on "Nuts to OOP"
- Objecting To Objects, by Stephen C. Johnson (full article requires a membership, but you can still read the summary)
- OOP Paradigm Critique by Shajan Miah (It is a long article and I have not fully reviewed it yet.)
- Critique of OOP by James M. Coggins
- Reuse Is Tough (An InfoWeek article not really about OO, but a good reality check)
This research study investigates some of the problems and unresolved issues in the OOPar. Contrary to adopting a WHAT (the problem) and HOW (the solution) approach, it uniquely asks WHY these problems and issues exist. We argue that the WHAT & HOW approach, although useful in the short term, does not provide a long-term solution to the problems in data modelling (DM). As a result of adopting such an approach, and of the empirical and wide-ranging nature of chapter 3, four aspects are proposed.
2.0 Concepts of the OO model
The main concepts that underlie the OOPar are outlined in the following sections.
2.1 Object Classes & Objects
In the OOPar, the problem domain is modeled using object-classes and their instances, objects (Booch, 1994). An object is any abstract or real-world item that is relevant to the system. An object class is a grouping of these objects. For example, in a library information system the object-classes would be such things as members, books, etc. Objects would be instances of these classes, e.g. Joe Bloggs, "Object-oriented analysis" by Martin, etc.
2.2 Methods
Methods are predefined operations associated with an object-class. "Methods specify the way in which an object's data are manipulated" (Martin & Odell, 1992, p.17). Therefore, the member object class identified earlier may contain methods such as reserve_book, borrow_book, etc. Access to an object is only granted via the methods.
This, in fact, is one of the key features of the OO model: behaviour (methods) and data structures (i.e. the declarative aspect of an object) are not separated - they are encapsulated together in one module.
2.3 Encapsulation
The process of keeping methods and data together, and granting access to the object only through the methods is referred to as encapsulation. This achieves information hiding, i.e. "The object hides its data from other objects and allows the data to be accessed via its own methods" (Martin & Odell, p.17).
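A minimal Java sketch of the member example (field and method names are mine): data and behaviour live in one module, and other objects can reach the data only through the methods.

    import java.util.ArrayList;
    import java.util.List;

    public class Member {
        private final String name;                            // data: hidden from other objects
        private final List<String> loans = new ArrayList<>();

        public Member(String name) { this.name = name; }

        public void borrowBook(String title) {                // behaviour: the only way in
            loans.add(title);
        }
        public void returnBook(String title) {
            loans.remove(title);
        }
    }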
2.4 Inheritance
Inheritance is the process whereby a high-level object class can be specialised into sub-classes. Wirfs-Brock et al. (1990) define inheritance as "... the ability of one class to define the behaviour and data structure of its instances as a superset of the definition of another class or classes." (p.24).
For example, in the library system we may find at a later stage that two types of members exist, children and adults. To accommodate this, we can make use of inheritance by abstracting all the common features into a high-level member class and creating two new sub-classes, adult-members and child-members, under the member class. The sub-classes inherit data and functions from the super class.
2.4.1 Multiple Inheritance
Multiple inheritance is almost identical in concept to single inheritance; however, in this case a sub-class can inherit from many super-classes. For example, at a later design stage of the library system we may have a situation where a book is of type fiction and is also a reference book. This potentially allows the use of multiple inheritance.
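Java, the language most discussed on this page, deliberately omits multiple class inheritance; the nearest rendering of the fiction-and-reference book uses interfaces for the extra parents. A sketch with invented names:

    interface Fiction {
        String genre();
    }

    interface Reference {
        boolean availableForLoan();
    }

    class Book { /* data and methods common to all books */ }

    class FictionReferenceBook extends Book implements Fiction, Reference {
        public String genre() { return "fiction"; }
        public boolean availableForLoan() { return false; }  // reference copies stay in the library
    }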
2.5 Polymorphism & Dynamic Binding
The term polymorphism derives from the Greek 'poly morph', meaning many forms. In the context of the OOPar, the polymorphism concept allows different objects to react differently to the same stimulus (i.e. message) (Hymes, 1995). For example, adult and child members may only be allowed to borrow books for up to 6 and 3 weeks respectively. Therefore the borrow_book message to the adult-member class will produce a different response (dating books by 6 weeks) than the same message to the child-member class (dating books by 3 weeks). There are variations and degrees of polymorphism (e.g. operator overloading), for which the interested reader is referred to standard OO textbooks (see refs. at end).
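Putting sections 2.4 and 2.5 together in one runnable Java sketch (the six- and three-week figures come from the text; class and method names are mine): the same borrow_book message produces different behaviour depending on the receiver's class.

    public class LoanDemo {
        static abstract class Member {
            abstract int loanWeeks();                      // specialised by each sub-class
            void borrowBook(String title) {
                System.out.println(title + " is due in " + loanWeeks() + " weeks");
            }
        }
        static class AdultMember extends Member { int loanWeeks() { return 6; } }
        static class ChildMember extends Member { int loanWeeks() { return 3; } }

        public static void main(String[] args) {
            Member[] members = { new AdultMember(), new ChildMember() };
            for (Member m : members)
                m.borrowBook("Object-oriented analysis");  // same message, two behaviours
        }
    }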
2.6 Genericity
Generic (or parametric) classes are those that define a whole family of related classes, differing only in matters concerning one or more types that are used as arguments to the class declaration (deChampeux et al, 1993). The concept of genericity allows the designer to specify standard generic classes that can be reused. For example, in designing any system, a number of common programming situations require the same class structure to be applied to different data types. Examples of several situations in user interface systems are the following:
queue class
- a queue of characters entered by a user
- a queue of mouse events that have occurred and are waiting to be handled.
In each case the same basic algorithms and supporting data structures are needed. What varies among uses of the class is the type of the data being manipulated. Lists are also used to maintain relationships in OO programming languages, hence in the case of the library system a standard generic list class could be defined to maintain the relationship between members and books reserved, for example.
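Java's generics (added to the language well after this excerpt was written) express exactly this: one queue class, parameterized by element type. A small sketch (the element types are mine):

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class GenericQueueDemo {
        public static void main(String[] args) {
            Queue<Character> keystrokes = new ArrayDeque<>();  // a queue of characters
            Queue<String> mouseEvents = new ArrayDeque<>();    // a queue of mouse events

            keystrokes.add('a');
            mouseEvents.add("click @ (10, 20)");

            // Same algorithms and supporting structure; only the element type varies.
            System.out.println(keystrokes.poll());
            System.out.println(mouseEvents.poll());
        }
    }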
3.0 Claimed Benefits
This section describes some of the key, general claimed benefits of the OOPar. The list is, of course, not exhaustive, and there are many other claimed benefits of this approach, which are described later in their respective sections.
3.1 Naturalness of Analysis & Design (Cognition)
One of the frequently claimed benefits of the OOPar is that it is natural (therefore more understandable), and is assumed to be cognitively similar to the way human beings perceive and understand the real world (Meyer, 1988; Rosson & Alpert, 1988; Rosson & Alpert, 1990). Martin & Odell (1992, p.31), for example, state: "The way of thinking is more natural for most people than the techniques of structured analysis and design. After all, the world consists of objects." McFadden & Hoffer (1994) similarly note: "The notation and approach of the object-oriented data model builds on these paradigms that people constantly use to cope with complexity." This claim therefore assumes that it is more natural for developers to decompose a problem into objects, at least as compared to the traditional structured languages. In other words, it should be natural for developers and users to map the problem into objects and into classification hierarchies.
3.2 Software Reuse
Software reuse is perhaps the most publicised benefit of the OOPar. Advocates of the OOPar claim that it provides effective mechanisms to allow software to be reused (Meyer, 1988). For example, Budd (1996) states: "Well designed objects in object-oriented systems are the basis for systems to be assembled largely from reusable modules, leading to higher productivity." (p.31). Martin & Odell (1992) similarly state: "It [OO] leads to a world of reusable classes, where much of the software construction process will be the assembly of existing well-proven classes." (p.31).
These mechanisms are encapsulation, polymorphism, and inheritance. For example, encapsulation allows object classes to be modified, or even added to new systems, without requiring additional modification to other classes in the system. The end-goal of this is to develop a component-based software industry (as Martin & Odell point out), where classes can be purchased and plugged in. Inheritance allows existing code to be reused. Genericity allows one standard class to be reused.
3.3 Communication Process
Curtis & Waltz (1990) and Krasner, Curtis & Iscoe (1987) report that at the software team level some of the key problems encountered are communication and coordination, capturing and using domain knowledge, and organisational issues. With the OO approach, advocates claim that communication and coordination between the project team and client(s), and also within the team, are enhanced. For example, Rumbaugh et al. (1991, p.4) claimed that the "greatest benefits [of OO] come from helping specifiers, developers, express abstract concepts and communicate them to each other." Martin & Odell (1992, p.34) similarly state: "Business people more easily understand the OO paradigm. They think in terms of objects.....OO methodologies encourage better understanding as the end users and developers share a common model." Similar statements can be found in many popular textbooks, e.g. Coad & Yourdan (1991, p.3); Jacobson, Christerson, Johnsson, & Overgaard (1992, p.43); Wirfs-Brock, Wilkerson, & Weiner (1990, pp. 10-11).
This claim is based on two premises:
- Naturalness of OO (described above) makes understanding easier,
- Objects are constructed from the problem domain, hence communication between the project team and client(s) is enhanced. Also, because a single representation permeates all stages of the life-cycle, communication and coordination within the team are facilitated.
3.4 Refinement & Extensibility
OO advocates also claim that "software" developed using the OOPar is easy to refine and extend.
Khoshafian (1990) states: "Object oriented programming techniques allow the development of extensible and reusable modules" (p.274). Graham (1994) similarly notes: "Inheritance, genericity or other forms of polymorphism make exception handling easier and improve the extensibility of systems." (p.37). These claims are related to three key principles of the OOPar: encapsulation, inheritance and polymorphism.
For example, encapsulation allows the internal implementation of a class to be modified without requiring changes to its services (i.e. methods). It also allows new classes to be added to a system, without major modifications to the system. Inheritance allows the class hierarchy to be further refined, and combined with polymorphism, the superclass does not have to "know" about the new class, i.e. modifications do not have to be made at the superclass.
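A compact sketch of that extensibility claim (the StudentMember class and its four-week loan period are my invention): a new member type is added, and neither the superclass nor the polymorphic client changes.

    public class ExtensionDemo {
        static abstract class Member {
            abstract int loanWeeks();
        }
        static class AdultMember extends Member { int loanWeeks() { return 6; } }

        // The extension: added later, without touching Member or printDueDate().
        static class StudentMember extends Member { int loanWeeks() { return 4; } }

        static void printDueDate(Member m) {   // client code: unchanged by the extension
            System.out.println("Due in " + m.loanWeeks() + " weeks");
        }

        public static void main(String[] args) {
            printDueDate(new AdultMember());
            printDueDate(new StudentMember());
        }
    }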
4.0 Definition of OO Model
In critiquing a concept it is common to start with a formal definition. However, in this research project we will not do this, for the following reasons:
1. Unlike the relational model, the OO model does not have one commonly accepted formal definition.
2. As a result of (1), trying to define the OO model is in itself a considerable research task.
Our approach will therefore be to investigate reported problems in the application of the OOPar, together with academic critiques, and then to try to identify common threads and similarities between these problems.
5.0 Conclusion
In summary, in section 2 we outlined some of the core concepts that underlie the OO model. Section 3 provided an outline of some of the key claimed benefits of the OO model. Finally, in section 4, we discussed our reasons for not formally defining the OO model.
Subtyping, Subclassing, and Trouble with OOP
UML criticism by A.J.H.Simons and coauthors
Paul Graham and Jonathan Rees discuss the nature and appeal of object-orientation. (Graham holds quite hackerish views regarding language design that lack a bit of the specific sense of esthetics that comes with mathematical culture, and his take on abstraction is somewhat flat, but anyway ...)
Object Oriented Programming Oversold! Detailed OOP criticism by a programmer of business applications who advocates a procedural/relational approach factoring out the management of relationships to the database.
Why Java is not my favorite programming language
Objecting to Objects, by Stephen C. Johnson (USENIX conference invited talk, 1994)
Object Oriented Programming is Inherently Harmful
Critique of the Object Oriented Paradigm: Beyond Object-Orientation Date: 14th May, 1997 Shajan Miah
OOP Criticism -- good OOP criticism and OOP problems (The emperor has no clothes!). Contains a very good collection of links
Object Oriented Programming (OOP) is currently being hyped as the best way to do everything from promoting code reuse to forming lasting relationships with persons of your preferred sexual orientation. This paper tries to demystify the benefits of OOP. We point out that, as with so many previous software engineering fads, the biggest gains in using OOP result from applying principles that are older than, and largely independent of, OOP. Moreover, many of the claimed benefits are either not true or true only by chance, while occasioning some high costs that are rarely discussed. Most seriously, all the hype is preventing progress in tackling problems that are both more important and harder: control of parallel and distributed applications, GUI design and implementation, fault tolerant and real-time programming. OOP has little to offer these areas. Fundamentally, you get good software by thinking about it, designing it well, implementing it carefully, and testing it intelligently, not by mindlessly using an expensive mechanical process.
June 2000.
Object-oriented design is supposed to make our software more robust and resilient, yet we still see systems that are as fragile as their procedural ancestors. Are developers adopting aggressive practices because they think the technology will protect them?
by Steve Adolph
Object-oriented software development practices are supposed to make our software more robust and resilient to change. Yet we still see systems designed using these practices that are as rigid and fragile as their procedural ancestors. Adding new features still causes a cascade of change throughout the software and often results in the creation of new bugs. It wasn't supposed to be this way. Many software development organizations invested heavily in object technology, expecting something better. They expected the changes to be localized and the software to be resilient to bugs. Is it possible that object technology is the software equivalent of four-wheel drive? Does it provide greater control and safety, only to be abused by programmers who develop more aggressively because they think objects will protect them?
The problem is, today's object-oriented software often lacks modularity. The systems are just as hard to modify or enhance as their procedural brethren, if not harder. What appear to be simple one-line fixes end up taking three weeks to implement. Simple alterations cause a cascade of sympathetic changes to wash over the entire system.
It is my argument that we rely too heavily on object technology's safety features and ignore good software development practices such as planning, design, review and assessment in the name of expediency. We hope that at least one of our four driving wheels will somehow grab and prevent us from losing control on the slippery roadway we have been driving along at a reckless speed.
Re: Beware of C Hackers -- A rebuttal to Bertrand Meyer by Robert Martin (3 Jul 95). Meyer draws a clear distinction between C programmers and C hackers. He even states that he expects everyone to know C (at least back in '95, when they had that discussion); he knows C himself very well, and he points to the fact that some C hackers are not well suited to the creation of huge, complex systems that must be reliable, because they (the hackers) chase runtime and memory efficiency and lose sight of the more important points: maintainability, readability, etc. I think he has a point there. He does not use the term 'C hacker' for someone who is a good programmer and uses C, as you might assume.
I have recently acquired a copy of Bertrand Meyer's new book "Object Success". I would like to say that I have a great deal of respect for Meyer. Moreover, I have read many good things in this book so far.
However I take extreme exception to something he wrote in this book. On page 91 he writes the following which is included in its entirety. I will comment on it afterwards.
PRUDENT HIRING PRINCIPLE: Beware of C hackers.
A "C hacker" is somewone who has had too much practice writing low-level C software and making use of all the special techniques and tricks permitted by that language.
Why single out C? First, interestingly enough, one seldom hears about Pascal hackers, Ada hackers or Modula hackers. C, which since the late nineteen-seventies has spread rapidly throughout the computing community, especially in the USA, typifies a theology of computing where the Computer is the central deity and its altar reads Efficiency. Everything is sacrificed to low-level performance, and programs are built in terms of addresses, words, memory cells, pointers, manual memory allocation and deallocation, unsafe type conversions, signals and similar machine-oriented constructs. In this almost monotheist cult, where the Microsecond and the Kilobyte complete the trinity, there is little room for such idols of software engineering as Readability, Provability and Extendibility.
Not surprisingly, former believers need a serious debriefing before they can rejoin the rest of the computing community and its progress towards more modern forms of software development.
The above principle does not say "Stay away from C hackers", which would show lack of faith in the human aptitude to betterment. There have indeed been cases of former C hackers who became born-again O-O developers. But in general you should be cautious about including C hackers in your projects, as they are often the ones who have the most trouble adapting to the abstraction-based form of software development that object technology embodies.
There is only one word that can accurately describe these sentiments. That word is bigotry. I don't like to use a word like that to describe the words of someone who is obviously intelligent, yet there is no other option. The words he has written create a class of people whom, he recommends, ought to be hired only with caution.
Who are these "C Hackers"? Has Dr. Meyer given us any means to identify them? Yes.
A "C hacker" is somewone who has had too much practice writing low-level C software and making use of all the special techniques and tricks permitted by that language.What possible recourse can a manager have but to look with prejudice against anyone who happens to put "C" on their resume. By associating "C" with "Hackers", Dr. Meyer damages everyone who uses that language, whether they are hackers are not. In effect, Dr. Meyer is making a statement that is equivalent to: "Beware of the Thieving Frenchmen."
What is a hacker? A hacker is someone who writes computer programs without employing sound principles of software engineering. Someone who simply throws code together without thought to structure or lifecycle.
Certainly there are hackers who use C. But there are Hackers who use every language. And in this, Dr. Meyer is quite negligent, for he says nearly the opposite:
Why single out C? First, interestingly enough, one seldom hears about Pascal hackers, Ada hackers or Modula hackers.

This may or may not be true; I have no statistics. However, *if* it is true, I would be willing to bet that the reason has something to do with the difference in the number of C programmers as compared to Ada, Pascal and Modula programmers. If there are 20 times as many C programmers, then there are probably 20 times as many C hackers.
My point is that C does not predispose someone to be a hacker. And that the ratio of C hackers to C programmers is probably the same as Ada hackers to Ada programmers.
So Dr. Meyer casts aspersions upon all C programmers while giving amnesty to Ada, Pascal and Modula programmers. According to Dr. Meyer, it is only, or especially, the "C hacker" that you must be wary of. He does not say: "Beware of hackers"; rather he says: "Beware of C hackers." And this is simply bigotry: the segregation and defamation of a class of people based only upon the language that they program in.
And why this malevolence towards C? One can only conjecture. He offers reasons, but they are nearly mystical in their descriptions. Consider:
C [...] typifies a theology of computing where the Computer is the central deity and its altar reads Efficiency.

Dr. Meyer does not provide any proof, or even a scrap of evidence, to support this ridiculous claim. He states it as fact. This is an abuse of authority. What every author fears (or ought to fear, in my opinion) is casting his own opinions as unalterable truth. Yet, rather than proceed with trepidation, Dr. Meyer seems to glory in his deprecation of C. His writing becomes almost frenzied as he attacks it.
Everything is sacrificed to low-level performance, and programs are built in terms of addresses, words, memory cells, pointers, manual memory allocation and deallocation, unsafe type conversions, signals and similar machine-oriented constructs. In this almost monotheist cult, where the Microsecond and the Kilobyte complete the trinity, there is little room for such idols of software engineering as Readability, Provability and Extendibility.

Here he names every evil trick and bad practice that he can, and ascribes it all to C, as though no other language had the capability of supporting bad practices. He also claims that C programmers religiously follow these bad practices as the sacraments of their religion.
These statements are extremely irresponsible. There is no basis of fact that Dr. Meyer has supplied for these extreme accusations and defamations. Dr. Meyer has a right to dislike C if he chooses. But his vehemence against its programmers is unreasonable, and unreasoned.
It is easy to refute nearly all of Dr. Meyer's claims regarding C programmers. I have known many, many C programmers who were very concerned with good software engineering; who considered the quest for ultimate efficiency to be absurd; who were careful with their programming practices. In fact, I have never met a single C programmer who fits the description that Dr. Meyer ascribes to them all.
In my opinion, he is very wrong, not only professionally but morally. And he owes the industry an apology and a retraction.
Introduction
I have been doing custom business programming for small and medium projects since the late 1980's. When Object Oriented Programming started popping its head into the mainstream, I began looking into it to see how it could improve the type of applications that I work on.

Note that this excludes large business frameworks such as SAP, PeopleSoft, etc. I have never built a SAP-clone and probably never will, as with many others in my niche.

I have come to the conclusion that although OO may help in building the fundamental components of business applications, and even the language itself, any minor organizational improvement OO adds to the applications themselves is not justified by the complexity, confusion, and training effort it will likely add to a business-oriented language. In other words, OO is not a general-purpose software organizational paradigm, and "selling" it as such harms progress in the alternatives.

I have used languages where the GUI, collections handling, and other basic frameworks are built into the language in such a way that OO's benefits would rarely help the language deal with them. It is also my opinion that the language of base framework implementations probably should not be the same as the application's language for the most part. For example, most Visual Basic components are written in C++. Meyer seems to have more of a one-size-fits-all view of languages and paradigms than I do.
For a preview of my opinions and analysis of this situation, may I suggest the following links:
Introduction to OO criticism
The Driver Pattern
Subtype Proliferation Myth
Black Box Wire Bloat
Although the stated niche is not representative of all programming tasks, it is still a rather large one and should not be ignored when choosing paradigms.
Here is a quick summary of my criticisms of OOSC2:
- Meyer tends to build up false or crippled representations of OO's competitors, which distorts OO's alleged comparative advantages.
- A good many of the patterns that OO improves are not something needed directly by the stated niche, except in rare cases.
- We have very conflicting views and philosophies on data sharing.
Note that although my writing style has at times been called sarcastic and harsh, please do not confuse the delivery tone with the message.
Also note that I am not against abstraction and generic-ness. I am only saying that OO's brand of these is insufficient for my niche.
C++ Critique
A Critique of C++ (3rd ed., Ian Joyner, Oct 1996) -- a pretty weak critique of C++ from the position of an OO diehard
For a more or less reasonable sample of OO advocacy one can read Object Orientation: The Importance of Being Earnest