There are several questions that are usually swept under the carpet when writing about Unix history:
The actual level of influence of Multics, CTSS and ITS. To what extent was Unix a copycat OS, and to what extent was it a new and unique amalgam of the best features of earlier OSes, taking them to a new level? Unix definitely introduced several important new features into OS design. It also elegantly integrated pre-existing approaches to time sharing, as exemplified by CTSS -- approaches that are now considered a Unix achievement. But the key ideas, with the exception of pipes and regular expressions, were borrowed, mainly from Multics and CTSS. Multics was the key battlefield on which the Unix designers honed their skills, got key ideas, and were introduced to MIT culture. Without participation in Multics development, both Thompson and Ritchie would most probably have remained second-rate designers, first-class programmers at Bell Labs, and nobody would ever have known about them. For example, the idea of a shell as a utility-style application not tightly connected to the kernel, which is often cited as one of Unix's unique design features, predates Unix.
The role of Unix in promoting scripting. Unix proved to be fertile ground for scripting development and essentially pioneered scripting: it was the first environment in which scripting was put on a solid basis, with AWK, C-shell, and later Perl. See also Shell Giants, Larry Wall and Perl, and John K. Ousterhout and Tcl.
AWK was originally written in 1977 and distributed with Version 7 Unix. The book The AWK Programming Language, the first scripting book, was published in 1988.
C shell (csh) was written by Bill Joy and first distributed with BSD in 1979.
The first important non-Unix scripting language that was more or less widely used was probably REXX which was designed and first implemented between 1979 and mid-1982 by Mike Cowlishaw of IBM.
Perl was first released in 1987.
The level of incompetence of AT&T management. It is actually pretty amazing how incompetent the people at the top of leading engineering companies can be, and not only in the USA but in Germany too (see Bootlickocracy: Interaction of Cronyism, Nepotism and Incompetence in Corporate IT and The Peter Principle). Bill Gates' characterization of AT&T management probably holds for the whole period of Unix development. Essentially, Unix happened not so much because of the efforts of AT&T brass as despite them. Gates, speaking at Unix Expo in 1996, reminded listeners about AT&T's leadership vacuum:
"I have to admit, it was fairly difficult to work with AT&T back then. They simply didn't understand what they had. They didn't understand how to manage the asset, either in terms of promoting it properly or in terms of making sure that there wasn't fragmentation in how different implementations were put together. And so that vacuum in leadership created a bit of a dilemma for everybody who was involved in Unix.
"Well, Microsoft stepped back and looked at that situation and said that the best thing for us might be to start from scratch: build a new system, focus on having a lot of the great things about Unix, a lot of the great things about Windows, and also being a file-sharing server that would have the same kind of performance that, up until that point, had been unique to Novell's Netware.
"And through Windows NT, you can see it throughout the design. In a weak sense, it is a form of Unix."
Microsoft's contributions to Unix.
Real vs. imagined contributions of Unix to the OS field (IMHO Unix is first of all the first system that created a component model of programming, with the shell and pipes as "glue" between components). But many things which are attributed to Unix were actually taken from Multics...
Limits on the complexity of OSes. Was Unix's excessive complexity (this was an OS designed by developers for developers) the key problem which prevented its wider adoption? The MS-DOS backlash (MS-DOS and later Windows essentially wiped Unix clean from desktops/workstations; Linux has lately slightly reversed the trend) and its significance.
Flaws in the design of C and their influence on Unix in general and Unix security in particular.
Multics seems to be a dramatically under-appreciated predecessor of Unix -- all key Bell Labs folks who participated in Unix development were trained on Multics and borrowed a lot from this system, including the key idea of the hierarchical filesystem, many commands (ls, ps, etc.), and the key idea of using a high-level language for writing an OS. The C language was essentially a simplified dialect of PL/1 with BCPL address arithmetic (see also links on the Multics Home page). For example, Thompson stated in a recent interview:
Thompson. The one thing I stole was the hierarchical file system because it was a really good idea—the difference being that Multics was a virtual memory system and these "files" weren't files but naming conventions for segments. After you walk one of these hierarchical name spaces, which were tacked onto the side and weren't really part of the system, you touch it and it would be part of your address space and then you use machine instructions to store the data in that segment. I just plain lifted this.
By the same token, Multics was a virtual memory system with page faults, and it didn't differentiate between data and programs. You'd jump to a segment as it was faulted in, whether it was faulted in as data or instructions. There were no files to read or write—nothing you could remote—which I thought was a bad idea. This huge virtual memory space was the unifying concept behind Multics—and it had to be tried in an era when everyone was looking for the grand unification theory of programming—but I thought it was a big mistake.
A good but biased description of the origins and history of UNIX (its bashing of Multics is completely misguided and, from the point of view of Unix history, plain stupid) can be found in Origins and History of Unix, 1969-1995:
A notorious ‘second-system effect’ often afflicts the successors of small experimental prototypes. The urge to add everything that was left out the first time around all too frequently leads to huge and overcomplicated design. Less well known, because less common, is the ‘third-system effect’; sometimes, after the second system has collapsed of its own weight, there is a chance to go back to simplicity and get it really right.
The original Unix was a third system. Its grandfather was the small and simple Compatible Time-Sharing System (CTSS), either the first or second timesharing system ever deployed (depending on some definitional questions we are going to determinedly ignore). Its father was the pioneering Multics project, an attempt to create a feature-packed ‘information utility’ that would gracefully support interactive timesharing of mainframe computers by large communities of users.
... ... ...
... Thompson had been a researcher on the Multics project, an experience which spoiled him for the primitive batch computing that was the rule almost everywhere else. But the concept of timesharing was still a novel one in the late 1960s; the first speculations on it had been uttered barely ten years earlier by computer scientist John McCarthy (also the inventor of the Lisp language), the first actual deployment had been in 1962, seven years earlier, and timesharing operating systems were still experimental and temperamental beasts.
Computer hardware was at that time more primitive than even people who were there to see it can now easily recall. The most powerful machines of the day had less computing power and internal memory than a typical cellphone of today.[13] Video display terminals were in their infancy and would not be widely deployed for another six years. The standard interactive device on the earliest timesharing systems was the ASR-33 teletype — a slow, noisy device that printed upper-case-only on big rolls of yellow paper. The ASR-33 was the natural parent of the Unix tradition of terse commands and sparse responses.
Information about the history of Linux, one of the most recent Unix re-implementations, can be found at Nikolai Bezroukov. Portraits of Open Source Pioneers. Ch 4: A Slightly Skeptical View on Linus Torvalds
See also my review of A Quarter Century of UNIX
October 13, 2011 | NYTimes.com
Dennis M. Ritchie, who helped shape the modern digital era by creating software tools that power things as diverse as search engines like Google and smartphones, was found dead on Wednesday at his home in Berkeley Heights, N.J. He was 70.
Mr. Ritchie, who lived alone, was in frail health in recent years after treatment for prostate cancer and heart disease, said his brother Bill.
In the late 1960s and early ’70s, working at Bell Labs, Mr. Ritchie made a pair of lasting contributions to computer science. He was the principal designer of the C programming language and co-developer of the Unix operating system, working closely with Ken Thompson, his longtime Bell Labs collaborator.
The C programming language, a shorthand of words, numbers and punctuation, is still widely used today, and successors like C++ and Java build on the ideas, rules and grammar that Mr. Ritchie designed. The Unix operating system has similarly had a rich and enduring impact. Its free, open-source variant, Linux, powers many of the world’s data centers, like those at Google and Amazon, and its technology serves as the foundation of operating systems, like Apple’s iOS, in consumer computing devices.
“The tools that Dennis built — and their direct descendants — run pretty much everything today,” said Brian Kernighan, a computer scientist at Princeton University who worked with Mr. Ritchie at Bell Labs.
Those tools were more than inventive bundles of computer code. The C language and Unix reflected a point of view, a different philosophy of computing than what had come before. In the late ’60s and early ’70s, minicomputers were moving into companies and universities — smaller and at a fraction of the price of hulking mainframes.
Minicomputers represented a step in the democratization of computing, and Unix and C were designed to open up computing to more people and collaborative working styles. Mr. Ritchie, Mr. Thompson and their Bell Labs colleagues were making not merely software but, as Mr. Ritchie once put it, “a system around which fellowship can form.”
C was designed for systems programmers who wanted to get the fastest performance from operating systems, compilers and other programs. “C is not a big language — it’s clean, simple, elegant,” Mr. Kernighan said. “It lets you get close to the machine, without getting tied up in the machine.”
Such higher-level languages had earlier been intended mainly to let people without a lot of programming skill write programs that could run on mainframes. Fortran was for scientists and engineers, while Cobol was for business managers.
C, like Unix, was designed mainly to let the growing ranks of professional programmers work more productively. And it steadily gained popularity. With Mr. Kernighan, Mr. Ritchie wrote a classic text, “The C Programming Language,” also known as “K. & R.” after the authors’ initials, whose two editions, in 1978 and 1988, have sold millions of copies and been translated into 25 languages.
Dennis MacAlistair Ritchie was born on Sept. 9, 1941, in Bronxville, N.Y. His father, Alistair, was an engineer at Bell Labs, and his mother, Jean McGee Ritchie, was a homemaker. When he was a child, the family moved to Summit, N.J., where Mr. Ritchie grew up and attended high school. He then went to Harvard, where he majored in applied mathematics.
While a graduate student at Harvard, Mr. Ritchie worked at the computer center at the Massachusetts Institute of Technology, and became more interested in computing than math. He was recruited by the Sandia National Laboratories, which conducted weapons research and testing. “But it was nearly 1968,” Mr. Ritchie recalled in an interview in 2001, “and somehow making A-bombs for the government didn’t seem in tune with the times.”
Mr. Ritchie joined Bell Labs in 1967, and soon began his fruitful collaboration with Mr. Thompson on both Unix and the C programming language. The pair represented the two different strands of the nascent discipline of computer science. Mr. Ritchie came to computing from math, while Mr. Thompson came from electrical engineering.
“We were very complementary,” said Mr. Thompson, who is now an engineer at Google. “Sometimes personalities clash, and sometimes they meld. It was just good with Dennis.”
Besides his brother Bill, of Alexandria, Va., Mr. Ritchie is survived by another brother, John, of Newton, Mass., and a sister, Lynn Ritchie of Hexham, England.
Mr. Ritchie traveled widely and read voraciously, but friends and family members say his main passion was his work. He remained at Bell Labs, working on various research projects, until he retired in 2007.
Colleagues who worked with Mr. Ritchie were struck by his code — meticulous, clean and concise. His writing, according to Mr. Kernighan, was similar. “There was a remarkable precision to his writing,” Mr. Kernighan said, “no extra words, elegant and spare, much like his code.”
May 23 2004 | Google Groups
John Mashey (old_systems_...@yahoo.com), newsgroup comp.std.c, 23 May 2004
Subject: Re: Why does getenv() return char*, not const char*?
lawrence.jo...@ugsplm.com wrote in message <news:[email protected]>...
> Seungbeom Kim <sb...@stanford.edu> wrote:
> > Then why is getenv() specified to return char*, not const char*?
> Because getenv() predates const and the committee didn't want to break
> all the existing code that used it.
Not only did getenv() precede const, but there is more history.
In some ways, it is slightly strange (historically) that the standard
says:
a) You can't modify strings pointed at by return value of getenv().
b) You *can* modify strings pointed at by argv pointers.
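For the curious, here is a tiny C illustration of that asymmetry (my example, not part of the thread): argv strings are explicitly modifiable per the C standard, while modifying what getenv() returns is undefined behavior.

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        /* (b) argv strings are modifiable: well-defined per the C standard. */
        if (argc > 1)
            argv[1][0] = 'X';

        /* (a) the string getenv() returns must be treated as read-only;
           writing through the returned pointer is undefined behavior. */
        char *home = getenv("HOME");
        if (home != NULL)
            printf("HOME=%s\n", home);   /* read it, don't write it */
        return 0;
    }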
History:
1) Starting in 1975, the PWB/UNIX shell had (single-letter) variables,
of which one in particular ($s) was initialized by the shell to the
user's home directory, which it got from some PWB-extra data kept in
the per-process data area. The $p variable was initialized to the
contents of $s/.path if such existed, or to ":/bin/:/usr/bin" if it
didn't.
2) During 1977, in particular, there was a long set of discussions
about moving to Steve Bourne's new shell, at least partly with the
idea of consolidating the mess of different UNIX shell variants that
had grown up, either directly from the original Thompson shell, or
indirectly from it via PWB or USG shells.
3) The PWB shell variables and the variable path-search features had
proved extremely useful, but were very limited. Among other things,
variables were not inherited in any general way. It seemed that we
needed to do something better in conjunction with wide introduction of
the Bourne shell, if we were going to convince people to switch
happily.
4) There were discussions among many people, but particularly, Steve
Bourne, Dennis Ritchie and I thrashed through a lot of different
possibilities regarding the semantics and implementation of what
became "the environment".
We explored various grand schemes of kernel-internal associative
memories kept per process group, with complicated protection schemes
and concerns about side-effects, and plenty of function/syscalls for
manipulating them.
Nothing was very simple. Fortunately:
5) In typical UNIX fashion, Dennis suggested that the environment
could just be handled as an extra set of argc/argv-like pointers and
strings normally passed automatically upon exec, which most programs
would never modify, but which could be interrogated without a system
call. Programs that wanted to modify the environment could do so
explicitly, just like argv manipulation.
Thus, the mechanisms for initializing the environment would be the
same as for argv. The storage cost would accrue in user programs,
rather than (really precious) kernel memory, although it would add
some overhead to exec.
For minimality, the *only* C-level function provided was getenv(3), on
the belief that many programs needed simple access to environment
variables, but very few needed to delete them, change them, etc, and
if (a few) people were doing that, they could just go ahead and write
code appropriate to their needs,
which could either be fairly simple [like for the "env" command] or
more complex [like the shell].
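As an illustration of that minimality, getenv() needs nothing more than a scan of the argv-like pointer array -- no system call involved. This is a hedged sketch over the conventional environ global, my illustration rather than the historical Bell Labs source:

    #include <string.h>

    extern char **environ;  /* NULL-terminated array of "NAME=value" strings */

    char *my_getenv(const char *name)
    {
        size_t len = strlen(name);

        for (char **ep = environ; *ep != NULL; ep++) {
            /* Match "NAME=" at the start of each environment string. */
            if (strncmp(*ep, name, len) == 0 && (*ep)[len] == '=')
                return *ep + len + 1;   /* pointer into the string, no copy */
        }
        return NULL;   /* not found; still no system call needed */
    }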
One might fairly complain that we should have thought harder about
supplying a complete set of *env functions [akin to the putenv /
setenv / unsetenv / clearenv functions that have since grown up].
However, note that there have been lots of arguments over the
implementations of these things over the years, and different systems
have done them differently.
Also, for context, recall that there were still many PDP-11s around;
a typical PDP-11/45 had 248KB of memory and a PDP-11/70 1MB, with the
former running perhaps 16 users and the latter up to 48 (but usually
less);
both were limited to 64KB instruction plus 64KB data memory.
Heavyweight features were still viewed with suspicion, and we were
interested in supplying a minimalist feature set good enough to
handle the problems we knew we had.
6) For string constants, many implementors have wanted them guaranteed
constant for decades, for storage reduction and performance tricks
like:
a) Keeping only one copy of a given literal string.
b) Including them in a read-only text segment (on some machines with
PC-relative addressing, this can be helpful).
c) Putting them in a read-only data segment, if the OS supported that,
thus letting them be shared amongst processes running the same
executable.
On the other hand, 7th Edition UNIX environment variables were really
thought of as convenient, usually-hidden extra arguments, with no more
read-onlyness than regular arguments.
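A small illustration of the implementors' side of this trade-off (a hypothetical snippet, not from the post): with literal pooling and read-only placement, the two pointers below may compare equal, and the commented-out store would fault.

    #include <stdio.h>

    int main(void)
    {
        char *a = "usage: env [-] [name=value]...";
        char *b = "usage: env [-] [name=value]...";

        /* (a) the compiler may keep only one copy of a given literal,
           so the two pointers can compare equal. */
        printf("pooled: %s\n", a == b ? "yes" : "no");

        /* (b)+(c) with literals in a read-only (possibly shared) segment,
           writing through the pointer is undefined behavior and
           typically crashes: */
        /* a[0] = 'U'; */

        return 0;
    }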
I would guess that the viewpoint change comes from having a set of
functions to modify the environment, and wanting to better hide the
data.
In August 1969, Ken Thompson, a programmer at AT&T subsidiary Bell Laboratories, saw the month-long departure of his wife and young son as an opportunity to put his ideas for a new operating system into practice. He wrote the first version of Unix in assembly language for a wimpy Digital Equipment Corp. (DEC) PDP-7 minicomputer, spending one week each on the operating system, a shell, an editor and an assembler.
Thompson and a colleague, Dennis Ritchie, had been feeling adrift since Bell Labs had withdrawn earlier in the year from a troubled project to develop a time-sharing system called Multics (Multiplexed Information and Computing Service). They had no desire to stick with any of the batch operating systems that predominated at the time, nor did they want to reinvent Multics, which they saw as grotesque and unwieldy.
After batting around some ideas for a new system, Thompson wrote the first version of Unix, which the pair would continue to develop over the next several years with the help of colleagues Doug McIlroy, Joe Ossanna and Rudd Canaday. Some of the principles of Multics were carried over into their new operating system, but the beauty of Unix then (if not now) lay in its less-is-more philosophy.
"A powerful operating system for interactive use need not be expensive either in equipment or in human effort," Ritchie and Thompson would write five years later in the Communications of the ACM (CACM), the journal of the Association for Computing Machinery. "[We hope that] users of Unix will find that the most important characteristics of the system are its simplicity, elegance, and ease of use."
Apparently they did. Unix would go on to become a cornerstone of IT, widely deployed to run servers and workstations in universities, government facilities and corporations. And its influence spread even farther than its actual deployments, as the ACM noted in 1983 when it gave Thompson and Ritchie its top prize, the A.M. Turing Award for contributions to IT: "The model of the Unix system has led a generation of software designers to new ways of thinking about programming."
Early steps
Of course, Unix's success didn't happen all at once. In 1971 it was ported to the PDP-11 minicomputer, a more powerful platform than the PDP-7 for which it was originally written. Text-formatting and text-editing programs were added, and it was rolled out to a few typists in the Bell Labs Patent department, its first users outside the development team.
In 1972, Ritchie wrote the high-level C programming language (based on Thompson's earlier B language); subsequently, Thompson rewrote Unix in C, which greatly increased the OS' portability across computing environments. Along the way it picked up the name Unics (Uniplexed Information and Computing Service), a play on Multics; the spelling soon morphed into Unix.
It was time to spread the word. Ritchie and Thompson's July 1974 CACM article, "The UNIX Time-Sharing System," took the IT world by storm. Until then, Unix had been confined to a handful of users at Bell Labs. But now with the Association for Computing Machinery behind it -- an editor called it "elegant" -- Unix was at a tipping point.
"The CACM article had a dramatic impact," IT historian Peter Salus wrote in his book The Daemon, the Gnu and the Penguin. "Soon, Ken was awash in requests for Unix."
Hackers' heaven
Thompson and Ritchie were the consummate "hackers," when that word referred to someone who combined uncommon creativity, brute force intelligence and midnight oil to solve software problems that others barely knew existed.
Their approach, and the code they wrote, greatly appealed to programmers at universities, and later at startup companies without the mega-budgets of an IBM, Hewlett-Packard or Microsoft. Unix was all that other hackers, such as Bill Joy at the University of California, Rick Rashid at Carnegie Mellon University and David Korn later at Bell Labs, could wish for.
"Nearly from the start, the system was able to, and did, maintain itself," wrote Thompson and Ritchie in the CACM article. "Since all source programs were always available and easily modified online, we were willing to revise and rewrite the system and its software when new ideas were invented, discovered, or suggested by others."
Korn, an AT&T Fellow today, worked as a programmer at Bell Labs in the 1970s. "One of the hallmarks of Unix was that tools could be written, and better tools could replace them," he recalls. "It wasn't some monolith where you had to buy into everything; you could actually develop better versions." He developed the influential Korn shell, essentially a programming language to direct Unix operations, now available as open-source software.
Author and technology historian Salus recalls his work with the programming language APL on an IBM System/360 mainframe as a professor at the University of Toronto in the 1970s. It was not going well. But the day after Christmas in 1978, a friend at Columbia University gave him a demonstration of Unix running on a minicomputer. "I said, 'Oh my God,' and I was an absolute convert," says Salus.
He says the key advantage of Unix for him was its "pipe" feature, introduced in 1973, which made it easy to pass the output of one program to another. The pipeline concept, invented by Bell Labs' McIlroy, was subsequently copied by many operating systems, including all the Unix variants, Linux, DOS and Windows.
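To make the pipeline concept concrete, here is a minimal C sketch of what the shell does to wire up ls | wc -l, using pipe(2), fork(2), and dup2(2). This is an illustration, not McIlroy's original code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); exit(1); }

        pid_t lspid = fork();
        if (lspid == 0) {               /* child 1: "ls" writes to the pipe */
            dup2(fd[1], STDOUT_FILENO); /* stdout -> pipe write end */
            close(fd[0]); close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            perror("execlp ls"); _exit(127);
        }

        pid_t wcpid = fork();
        if (wcpid == 0) {               /* child 2: "wc -l" reads from the pipe */
            dup2(fd[0], STDIN_FILENO);  /* stdin <- pipe read end */
            close(fd[0]); close(fd[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            perror("execlp wc"); _exit(127);
        }

        close(fd[0]); close(fd[1]);     /* parent: close both ends, then wait */
        waitpid(lspid, NULL, 0);
        waitpid(wcpid, NULL, 0);
        return 0;
    }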
One big moment that isn't often recognized, he says, is when DARPA -- working with a number of contractors, including Collins Radio, BBN and others -- demonstrated the first successful TCP connection traversing three dissimilar but interconnected networks. November 22, 2007, marked the 30th anniversary of that demo.
... ... ..
After his stint at DARPA, Kahn didn't stop pioneering. In 1986, he started the Corporation for National Research Initiatives (CNRI). The Reston, Va.-based organization helps shepherd various technology infrastructure projects. With funding from the National Science Foundation and DARPA, the CNRI helped create the first Gigabit networks operating at speeds above 1 billion bits per second. The CNRI also funded the development of Mosaic, the first popular Web browser, at the University of Illinois.
...Charles M. Hannum: I'm one of the creators of the NetBSD Project, and served as its de facto technical lead for a long time. I was also involved in creating the NetBSD Foundation, and served as its president and chairman of the board. (Note: I was never the Foundation's secretary or treasurer.)
How did the NetBSD Project start?
Charles M. Hannum: The four people who started NetBSD were Chris Demetriou, Adam Glass, Theo de Raadt, and me. At the time, Chris and Adam were attending Berkeley and close to graduating. Theo was working for a living. I was doing other things. Chris is still sort of around, but doesn't really do anything; Adam went to work at Microsoft and is now lost to us; and we all know about Theo. There is some info in the NetBSD entry on Wikipedia.
I think the most striking difference between late 1992 and today is that there really was no "web." The original software had been released, and some of us had started using it for small (non-graphical!) websites, but it was still very much a hobbyist thing. More people were getting "email accounts" of various types, but penetration of the internet into homes was still quite low. Even so, this was near the end of the NSFNet era, and there were increasing problems with the backbone being overloaded. There were also some as-yet-undiscovered problems with TCP flow control which caused additional performance problems. And lastly, nobody had started taking network security seriously.
On the open source side, you could liken it to the Stone Age. The operating systems (primarily 386BSD 0.1 and Linux 0.12) were nothing to write home about; they were buggy and incomplete. There were no "desktop" packages or "office" suites of any significance. (I actually ported Applixware to NetBSD under contract, because a certain blue company wanted it for their thin clients that ran NetBSD.) Development of X was hampered by the dissolution of the X Consortium and a vacuum of leadership there. We discovered later that GCC development was also being hampered by mismanagement (since fixed).
What I'd like to stress is that there was no Dummies Guide to Starting an Open Source Project. The term "open source" wasn't even being used yet, but that's beside the point. We were the first big collaborative project to use a version management system. (As examples, Linux used none, and most GNU software at best used "backup files" for version management.) There were no previous successes--or really even significant failures--to look at for guidance about how to organize the project. So we made it up.
I want to elaborate on the point about network security a little. Keep in mind that, in this time frame, most people were still using rlogin and unencrypted telnet. Buffer overflows were rampant. I (and other people) had already started doing experiments with subverting web sites; we all knew it was possible then, but nobody cared. It was several years before most people saw the writing on the wall and started clamping down access to central repositories and whatnot. Even today, most software distributions on the net (and this includes NetBSD) are not signed.
The primary reason for starting NetBSD at that time, ironically enough, is that there was a perceived lack of management in 386BSD. The actual 386BSD release only ran on a handful of systems, and was quite buggy. There was a rapidly growing community around it nonetheless, and many people had contributed patches. However, 386BSD's leader simply vanished. Nobody had any idea what he was doing, or whether he was even looking at the patches or working on another release. Eventually we decided that the only answer was to make a go of it ourselves--that's right, it all started with a fork.
Would you like to talk about the fork that originated OpenBSD?
Charles M. Hannum: No.
Since it came up in the /. thread, though, I would like to make one correction. It's widely claimed that I'm "the one" who ejected Theo from the NetBSD community. That is false. At that time in NetBSD's history, Chris G. Demetriou was playing the role of alpha male, and I wasn't even given a choice. I was certain it was going to bite us in the ass. I think the question for historians is not whether it did bite us in the ass, but how many times and how hard.
Why did you focus on portability?
Charles M. Hannum: At the very beginning, this was not actually a focus. It quickly became apparent that there were a large number of people interested in NetBSD (and open source OSes in general) who were currently on non-x86 platforms. Remember that this was still the 80486 era--PC processors weren't very good. Most "workstation"-class computers were based on something else--a myriad of Motorola 68k and 88k, SPARC, POWER, etc. On the HP9000/300 and SPARC platforms, we also had the advantage of access to code already written--though in both cases the integration was complex, and the code only supported a handful of systems at first.
I was also hanging out (and occasionally doing some work) at the Free Software Foundation, where there were a lot of HP9000/300s--running MORE/BSD, which had long since been abandoned. So I set out to get NetBSD running on these machines. Just getting a cross compiler working at the time was quite a bear, but it didn't take too long before these machines were all running NetBSD.
Yes, several of the earliest NetBSD development systems were owned by the Free Software Foundation.
With working 68k support in hand, we quickly spawned Amiga and Mac development groups. I then helped Theo get the SPARC support from LBL integrated. The whole thing snowballed.
Of course, the fact that you've ported the system to tons of machines does not mean you have good portability. We were still in the days of "copy and edit"--and so, for example, although the Amiga and Mac code [was] heavily cribbed from the HP9000/300 code, they were separate and had different features and bugs. Trying to fix bugs in this environment was making me crazy--and wasting a lot of time--and eventually I just started smashing the code together at high speed and sifting out the common parts. Other developers followed ("lead by example" works sometimes), and so this led into our global thinking about portability architecture, shared device drivers, etc. Oddly enough, Microsoft was working on similar ideas at the same time, in the development of Windows NT.
How was the relationship with hardware manufacturers at that time?
Charles M. Hannum: Terrible. We rarely got documentation from anyone. I actually wrote a lot of device driver code by guessing what the device was supposed to do. (Lots of previous experience reverse-engineering code helped there, I'm sure.) We had severe problems trying to deal with things like the "programmable" Adaptec SCSI controllers that became very popular; it was so bad that I was honestly talking about staging a sit-in at Adaptec HQ, and probably should have done it.
There were some notable exceptions, though. It took a while, but NCR, and later LSI, finally came around and dropped a heaping pile of (fairly good) documentation on me for their SCSI controllers. We did eventually get some material out of BusLogic, but only for their older "heavyweight" controllers (what we call "bha").
Intel, for its part, has been pretty good about putting CPU and chipset documentation online for the last several years, which I applaud, but their networking documentation (both wired and wireless) has been extremely poor. The strangest part of the Intel story was when the i82559 manual became restricted, even though it was substantially identical to the i82557 manual [that] had been published in their networking databooks earlier. Most other companies producing Ethernet controllers have been decent to us, except for Broadcom and Marvell, which have [each] been a 100% loss. Wireless vendors have generally been a tremendous annoyance, generating excuses but no documentation--I think Atmel and Realtek are the only exceptions.
We got some scattered documentation from other companies--e.g. Ensoniq and Realtek--but it's sometimes been incomplete and very difficult to make sense of.
Fortunately we didn't have to deal with most of the PC video card circus ourselves; XFree86 and now X.org have taken care of that.
What type of relationship did you have with the license? Has this relationship changed over time?
Charles M. Hannum: Most of us had a very strong distaste for the so-called "virus" clause in the GPL, and this is the primary reason we did not adopt it. There was also some thinking that CSRG (Berkeley) and the X Consortium had been successful with leaner, looser licenses, so why bother. In retrospect I think this was naive; if you look at the history, you'll see that neither CSRG nor the X Consortium were really successful in getting third parties to contribute back most of their changes--and so what we really got in both cases was a long list of derived but very different, and often incompatible, systems.
Linux has not been wholly successful in this either, and today there are myriad distributions which are subtly incompatible. However, they definitely did better.
If I were doing it again, I might very well switch to the LGPL. I'll just note that it didn't exist at the time.
How much did the (in)famous BSD lawsuit hit NetBSD code base and popularity?
Charles M. Hannum: There was a lot of FUD around this issue--some of it from Linus, actually--and it did cause us some problems. The reality is that we had a signed agreement with USL that essentially said we had to upgrade certain files from their Net/2 versions to 4.4-Lite, and not distribute some other files at all (which we never used in the first place). We were in the process of moving to a 4.4-Lite base anyway, so this had virtually no impact on development. It did, however, delay making our CVS history public--far longer than it should have--because we needed to remove some of that early history in order to meet the conditions of the agreement.
I've never seen the similar agreement between USL and FreeBSD, but my understanding from what I've heard is that it is quite different. This caused some more FUD to be generated, because apparently what we did would not have met the terms of FreeBSD's agreement.
Had Novell not bought USL when it did, it's unclear to me how this would have panned out. I've never been able to convince myself that Berkeley was in the "right." However, Novell put a swift end to the suit, the agreement is very clear, and nobody cares about that early code history any more--so this is all water under the bridge.
Have you read the legal settlement after it was recently made public? Any surprise?
Charles M. Hannum: Yes, I read it. The first thing to note is that this agreement was with Berkeley. We executed a separate agreement with USL (which has not been made public), and that is what governs our relationship. There were no surprises when we read the settlement, but it wasn't really relevant to us.
If you had a separate agreement, you were free to work on your software without problems. Do you think that the general and chaotic FUD about the lawsuit hit you even if you had already solved the problem?
Charles M. Hannum: Absolutely it hurt us. A lot of people (and I don't want to be divisive, but honestly they were mostly Linux proponents, including Linus himself) spread FUD for years about BSD systems being "unsafe"--even after the UCB/USL lawsuit was settled. The fact is that there was no danger in using NetBSD in a product, and a number of companies did so.
How was NetBSD funded during these years? Who managed these funds and how?
Charles M. Hannum: In the beginning, we just put the machines on the UCB and MIT AI lab networks, because we had access to do that and nobody minded. The server equipment was purchased by Chris Demetriou. The project per se had no "funds." Later on, colo space and machines were donated by a variety of organizations (NASA, iki.fi and hut.fi, etc.), and again no money changed hands. Later on we started collecting donations to purchase hardware; colo space was (and is) still donated, primarily by ISC now.
As far as funding marketing work, such as conference appearances and merchandise, most of that was paid for by me, d.b.a. The NetBSD Mission. A fraction of the cost of the Comdex booths was paid for by LinuxMall. Most of the other conferences gave us "free" booths--that means the conference itself didn't charge for the booth, but we (I) still had to pay for everything else (carpeting, furniture, shipping or renting equipment, union labor, etc.). Producing CDs and T-shirts to give away (we tried selling them at conferences, but that didn't go over well, especially at Comdex) was also fairly expensive; it adds up quickly.
Do you have any regret about the way NetBSD promoted the project and did advocacy at conferences around the world?
Charles M. Hannum: No. Unfortunately there is no longer a concerted effort to do this, and particularly to give away copies that people can try. Frankly I'm not sure (and wasn't even then) that giving away copies to install on a PC will impress people much anyway, given that NetBSD's installer is still very primitive compared to the Linux distros. Many reviews have focused on this and lambasted NetBSD for it.
Ibrahim Haddad
Tuesday, July 5, 2005 12:18:35 PM
After 13 years, Addison-Wesley has published an update to a classic UNIX System programming text: Advanced Programming in the UNIX Environment. After the death of the original author, Rich Stevens, in 1999, it was difficult to find someone to tackle a project this big. We recently caught up with the co-author, Steve Rago, to get a behind-the-scenes look at this project.
Steve, you were one of the developers of UNIX System V Release 4. Can you tell us more about your background and contributions and how you became the co-author of the second edition of one of the most popular UNIX books?
After getting a BE and MS from Stevens Institute of Technology, I got a job working in the UNIX System V Development Laboratory at AT&T Bell Labs. I had wanted to work at Bell Labs, where my father worked, since I was 12 years old. Ironically, a year after joining Bell Labs, AT&T reorganized us into a different business unit, so we weren't Bell Labs anymore. I started out working on System V Release 2.0, helping to maintain and benchmark the VAX port. Eventually, I worked on networking software, which led me to STREAMS. After most of the original STREAMS developers completed the port of Dennis Ritchie's streams to System V Release 3, I ended up taking over responsibility for it somewhere between SVR3.1 and SVR3.2. During SVR4 development, I enhanced the STREAMS mechanism, converted the open file table to use dynamically-allocated memory (thus removing the historic NOFILE limit to a UNIX process's open files), moved the poll(2) system call under the vnode framework, and did a lot of general clean-up work in the kernel.
I spent 7 years at AT&T, then left for a small start-up company just before AT&T created USL. I worked on file systems, writing one that transparently compressed and uncompressed files on the fly, and another that sped up system throughput and used an intent log for fast recovery. These were eventually ported to the SCO OpenServer V UNIX System. Then I developed stackable file systems for commercial UNIX systems. The file system business evolved into a file server business, and then the company was bought by EMC, where I still work as a manager of one of the file system groups. In total, I have about 20 years of UNIX programming experience, both kernel-level and user-level.
Since I was involved in the review of the first edition of APUE, Addison-Wesley contacted me for suggestions for candidates to update the book. I wanted the book to be updated properly, the way Rich would have wanted, and to honor his memory, so I volunteered for the project.
Why did you update APUE and how does the second edition of APUE differ from the first edition? Where did you have the most changes?
Rich's book is a classic, but the world has changed a lot since it was first published in 1992. Standards have evolved, UNIX system implementations have come and gone, and technology has advanced significantly. I added a chapter on sockets, two chapters on threads, and totally rewrote the chapter on communicating with a printer to reflect the technological advancement from a serial PostScript printer to a network-attached PostScript printer. I removed the chapter that dealt with modem communication, but I made it available on the book's Web site. Other than the printer chapter, Chapter 2 shows the most change. It deals with standards, and these have changed significantly over the past 13 years. One other major change is that I shifted the implementation focus from 4.3+BSD and SVR4 to more contemporary platforms: FreeBSD 5.2.1, Linux 2.4.22, Mac OS X 10.3, and Solaris 9. (The source code for the examples is also available on the book's Web site.)
Of course, the idea of actually executing this unknown program is ridiculous. We went through all that over a decade ago, when Robert T. Morris, Jr. let loose the famous Internet Worm and took down a whole lot of BSD systems. We hadn't paid much attention to security up till then, but we certainly did in the aftermath. Now, ten years later, it appears that people still haven't learnt. We have three major problems:
Microsoft actively encourages people to transmit executable programs. Sure, we can do that in UNIX as well, but we don't. That's not because UNIX runs on multiple platforms: more and more programs are being written in interpretative languages such as perl and tcl, and if this worm had been written in one of these languages, it would have the potential to damage UNIX systems as well as Microsoft systems. The real reason we don't transmit executable programs is that the whole idea is such a security risk that it seems completely absurd.
Microsoft has done nothing to protect systems. This isn't the first time that a massive security breach has been propagated by e-mail, yet their systems don't have any concept of security: the program can do whatever it wants. If I were designing an execution environment for executable mail attachments, I'd put it in its own directory and chroot it there, so that it couldn't access the rest of the system (see the sketch after this list).
Users haven't learnt either. I heard that one British publisher has apparently lost all its image data, which was stored on disk. What would they have done if the disk had failed? This seems to be a general problem with Microsoft users: they don't make backups.
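A minimal sketch of the chroot-jail idea from point 2 above (my illustration under stated assumptions: a pre-built jail directory /var/jail containing the program and anything it needs, and the unprivileged uid/gid 65534, traditionally "nobody"):

    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
            return 1;
        }

        /* Confine the process: after chroot(2) + chdir, "/" is the jail.
           This step requires root privileges. */
        if (chroot("/var/jail") == -1 || chdir("/") == -1) {
            perror("chroot");
            return 1;
        }

        /* Drop privileges *before* running the untrusted program. */
        if (setgid(65534) == -1 || setuid(65534) == -1) {
            perror("drop privileges");
            return 1;
        }

        execv(argv[1], &argv[1]);   /* path is interpreted inside the jail */
        perror("execv");
        return 127;
    }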
This message caused damage comparable in magnitude to Bill Gates' personal fortune. Who's to blame? Not really the perpetrator. We know how to stop this damage. In the UNIX world, we stopped it a decade ago. Microsoft knows about the dangers, but has done nothing to stop it.
It's a pity that the press didn't see this. I haven't heard a single mention in the press that the vendor of the software might be to blame. Even so, though, it makes the man in the street more aware of security issues, and that can only be to the benefit of secure operating systems.
On a different topic, I've been doing some work on describing the differences between BSD and Linux lately. Given the similarity between the systems, it's not surprising that people keep asking what the differences are. Here's the current state of a document I'm writing on the subject. Any comparison has to be subjective, but I'm trying to be fair to everybody here. If you find something incorrect or disadvantageous to any side, including Linux, please let me know.
In the late 1970s and early 1980s (the good old days of "hobby computing"), before the IBM PC and its clones took over the world, Steve Hosgood built a Unix clone at home. He was used to V7 Unix on the PDP-11 at university and wasn't keen to step backwards 10 years to the technology of CP/M and BASIC programming.
He did not know that eight years later a guy called Linus Torvalds was going to think the same thoughts and do much the same things. The big difference was that Linus was in the right place at the right time and had internet connectivity - Steve didn't have any of these advantages!
Caldera International has done a very good thing. They have released the "Ancient" Unices they inherited when they purchased SCO under a "BSD-style" license. The license is available here, instructions on finding the source are here. Caldera (and before that SCO) had required people to obtain a free (as in beer) but somewhat restrictive license in order to get these old sources. The new BSD-style licensing only applies to the 16-bit PDP-11 versions and some of the early 32-bit releases (excluding System III and System V), but it's still very cool.
The Unix archives are available via http://www.tuhs.org/archive_sites.html. The Unix Heritage Society website is at http://www.tuhs.org/. And the PDP Unix Preservation Society (PUPS) Home Page is at http://minnie.tuhs.org/PUPS/. This webpage has information on installing and running the software.
According to the PUPS FAQ, the Santa Cruz Operation (SCO) owned Unix research editions 1 to 7, PWB/UNIX, Mini-UNIX, 32V, System III, System V, and parts of 2.xBSD. In May, 2001, Caldera completed the acquisition of SCO's Server Software and Professional Services divisions, and SCO's UnixWare and OpenServer technologies. (The Santa Cruz Operation already provided free personal source code licenses.)
History of the Usenet
History of the ukr Internet K.I.S.S.
It's been a long while, but finally people are coming to accept Solaris, the System V-based operating system that replaced SunOS 4. Still, six years is a long time, and it would have taken much longer if Sun had continued to maintain SunOS 4. Why such loyalty? They are, after all, both versions of Unix.
The last thing I want to do here is revive the SunOS vs. Solaris debate, but I will draw attention to the biggest single difference between SunOS 4 and SunOS 5, the operating system component of today's Solaris: SunOS 4 is based on 4.2 BSD, the version of Unix developed at the University of California at Berkeley and the first operating system with support for TCP/IP. By contrast, SunOS 5 (commonly called Solaris, though that's not quite accurate) is based on AT&T's Unix System V.4. BSD is different enough from System V that six years after the "death" of SunOS 4, it still has a large number of supporters. BSD development was significantly hampered when Unix System Laboratories (USL) filed a lawsuit against BSDI, alleging abuse of AT&T source code.
Historically, each project was founded after differences of opinion about what constituted a good operating system. Since the software is free, any group of people can decide to custom build their own operating system. If it doesn't work, they can just stop building. In fact, all current BSD varieties, including BSDI, stem from Bill Jolitz's 386 BSD project, which faded into oblivion in 1994.
On the face of it, this doesn't seem to be a good approach: why not bite the bullet and compromise? In practice, the system shows remarkable self-regulating tendencies: 386 BSD is the only project that has closed up shop, and its descendants are all doing well and actively cross-pollinating. The fact that each version has a different kernel means survival of the fittest applies to kernel code as well, whereas in Linux it applies only to user code. For example, the fledgling FreeBSD SPARC port didn't start from scratch: it started from the NetBSD implementation and immediately raised the question: what can we do better? This process automatically raises the standard necessary for success. As a result, many such attempts fail, but the ones that don't create world-class code.
Twenty Years of Berkeley Unix From AT&T-Owned to Freely Redistributable Marshall Kirk McKusick
This article traces some of the intermediate history of the UNIX Operating System, from the mid nineteen-seventies to the early eighties. It is slightly updated from an article that appeared as "The Evolution of UNIX from 1974 to the Present, Part 1" in Microsystems [Darw1984a]. It was intended as part 1 of 3; unfortunately, that issue was also the last issue of Microsystems. This article discusses "Research UNIX": V6, V7 and V8; and tells the tale of many programs and subsystems that are today part of 4.4 BSD, System V or both. Subsequent articles were planned to discuss in more detail the history of Berkeley UNIX, System V, and commercialized UNIXes. We have not written those other articles; this article is being submitted to DaemonNews in hopes that those who have written other histories of other parts of UNIX's history will come forward.
BSTJ version of the C.ACM Unix paper -- one of the most famous papers in computer science:
Unix is a general-purpose, multi-user, interactive operating system for the larger Digital Equipment Corporation PDP-11 and the Interdata 8/32 computers. It offers a number of features seldom found even in larger operating systems, including a hierarchical file system incorporating demountable volumes; compatible file, device, and inter-process I/O; the ability to initiate asynchronous processes; a system command language selectable on a per-user basis; and over 100 subsystems including a dozen languages. This paper discusses the nature and implementation of the file system and of the user command interface.
The Evolution of the Unix Time-sharing System
This paper presents a brief history of the early development of the Unix operating system. It concentrates on the evolution of the file system, the process-control mechanism, and the idea of pipelined commands. Some attention is paid to social conditions during the development of the system.
During the past few years, the Unix operating system has come into wide use, so wide that its very name has become a trademark of Bell Laboratories. Its important characteristics have become known to many people. It has suffered much rewriting and tinkering since the first publication describing it in 1974 [1], but few fundamental changes. However, Unix was born in 1969, not 1974, and the account of its development makes a little-known and perhaps instructive story. This paper presents a technical and social history of the evolution of the system.
Unix philosophy
Most of the key people on the Unix project (Ken Thompson, Dennis Ritchie, Joe Ossanna, Bob Morris, Doug McIlroy, Brian Kernighan) came from the Multics project, where they were trained in OS design. Here is the relevant quote from UNIX and Multics:
When Bell Telephone Laboratories (BTL) joined with MIT Project MAC and General Electric's computer department to create the Multics project, BTL contributed some of the finest programmers in the world to the team. I first met Ken Thompson because he had written a slick editor for CTSS called QED. It was descended from QED on the SDS-940, but was quite different because Ken had added regular expressions to it, and made many other changes. (Ken had published a paper on compiling regular expressions into machine code just before joining the Multics project.) Ken worked on the Multics I/O switch design. Dennis Ritchie and Rudd Canaday were BCPL jocks. Joe Ossanna worked on the I/O system design and wrote one of the original six Multics papers; Bob Morris, Doug McIlroy, Dave Farber, and Jim Gimpel worked on EPL, Stu Feldman worked on the I/O switch, Peter Neumann managed the team and worked on file system design, Brian Kernighan worked on the support tools.
Ken Thompson home page. See also entries in Jargon file
Reflections on Trusting Trust -- Turing lecture. Published in Communications of the ACM, Vol. 27, No. 8, August 1984, pp. 761-763. Here is the introduction:
I thank the ACM for this award. I can't help but feel that I am receiving this honor for timing and serendipity as much as technical merit. UNIX swept into popularity with an industry-wide change from central mainframes to autonomous minis. I suspect that Daniel Bobrow (1) would be here instead of me if he could not afford a PDP-10 and had had to "settle" for a PDP-11. Moreover, the current state of UNIX is the result of the labors of a large number of people.
There is an old adage, "Dance with the one that brought you," which means that I should talk about UNIX. I have not worked on mainstream UNIX in many years, yet I continue to get undeserved credit for the work of others. Therefore, I am not going to talk about UNIX, but I want to thank everyone who has contributed.
That brings me to Dennis Ritchie. Our collaboration has been a thing of beauty. In the ten years that we have worked together, I can recall only one case of miscoordination of work. On that occasion, I discovered that we both had written the same 20-line assembly language program. I compared the sources and was astounded to find that they matched character-for-character. The result of our work together has been far greater than the work that we each contributed.
I am a programmer. On my 1040 form, that is what I put down as my occupation. As a programmer, I write programs. I would like to present to you the cutest program I ever wrote. I will do this in three stages and try to bring it together at the end.
Why Pascal is Not My Favorite Programming Language -- bold critique of Pascal from the time when "structured programming" ayatollahs ruled the world of programming ;-)
UNIX History Unix Timeline by Éric Lévénez
Datametrics--Handout for UNIX a Brief History
Unix History -- a rather funny version.
Licenses are available for the following versions:
Mini UNIX
UNIX V6
PWB UNIX
UNIX V7 (which also covers Editions 1-5, and the 32V)
Follow these instructions to obtain a source code license for "ancient" versions of UNIX.
These licenses permit hobbyists and enthusiasts to have access to the source code of these historic releases, for personal and non-commercial use, and to share experiences and code updates with other authorized individuals having corresponding licenses. SCO has received numerous favorable responses from UNIX enthusiasts around the world, including messages such as, "Future computer historians will greatly appreciate what you have achieved!" and "I've wanted access to this material for nearly 20 years! Well done!"
Executive Bios: Bill Joy. Since joining Sun from Berkeley in 1982, he has led Sun's technical strategy, spearheading its open systems philosophy. He designed Sun's Network File System (NFS) and was a co-designer of the SPARC microprocessor architecture. In 1991 he did the basic pipeline design of UltraSPARC-I and its multimedia processing features. This basic pipeline is the one used in all of Sun's SPARC microprocessors shipping today.
Unix Differences. (from Unix FAQ)
The E-Business Network's Geek of the Week show interviews Lynne Jolitz of 386BSD fame. The eShow description says "Lynne Jolitz talks about the creation of 386BSD, the first shareware BSD implementation for Intel platforms, and her current quest to combat the problem of etherlag." ("shareware"?)
In the interview, she discusses some of the history and highlights of the 386BSD project. She said that the Berkeley Unix (Tahoe) only ran on old VAX machines before her work on the 386 platform. She said that they (Bill and Lynne) were overwhelmed with all the patches and fixes they received. Jolitz also mentioned that they "started playing with role-based security" and plug-n-play, and that many of their concepts have not been implemented yet.
Watch (or listen to) the RealAudio show at Oracle.com's E-Business Network.
Who provides support, service, and training for BSD?
BSDI have always supported BSD/OS, and they have recently announced support contracts for FreeBSD.
In addition, each of the projects has a list of consultants for hire: FreeBSD, NetBSD and OpenBSD.
The BSD project home pages
Other references to BSD
Riding the web wave
Twenty Years of Berkeley Unix
Whatever Happened to BSD?
A new thorn in Microsoft's side?
BSD's Big Break?
Three Unixlike systems may be better than Linux.
BSD a better OS than Linux?
The legend of BSD
Getting to know OpenBSD
BSD Unix: Power to the people, from the code
Unix Lore and History -- from USAIL
Mach and Windows NT Operating Systems Compared -- paper by Mike Podanoffsky.