I am writing this page about a semi-forgotten, short, but very influential epoch in programming. It spans from 1980 to
approximately 1995: just fifteen years.
There have always been gifted young programmers, and there always will be. But the PC revolution was a unique period that
probably will never be repeated. It was a real revolution in programming, despite the fact that DOS was a pretty primitive
OS, actually a program loader plus a couple of dozen utilities. The PC was the first computer that gave programmers unlimited
time to develop their own programs, and not in isolation but interacting with peers, as the PC revolution generated large-scale social activity around it in the form of PC user groups.
At the beginning the field was quite open for anybody with talent to make his or her mark: it was an era of re-invention of
old concepts on a new level and of the creation of new ones. Companies like Borland and Lotus were started in garages before
they became large players.
1973
Gary Kildall writes a simple operating system in his PL/M language. He calls it CP/M (Control Program/Monitor, later Control Program for Microcomputers).
1979
February Apple Computer releases DOS 3.2.
July Apple Computer releases DOS 3.2.1
1980
April Tim Paterson begins writing an operating system for use with Seattle Computer Products' 8086-based
computer (built around the new 16-bit Intel 8086 CPU). Seattle Computer Products decides to make its own disk operating
system (DOS) because of delays by Digital Research in releasing a CP/M-86 operating system. QDOS had been developed
as a clone of the eight-bit CP/M operating system in order to provide compatibility with the popular business applications
of the day, such as WordStar and dBase. CP/M (Control Program for Microcomputers) had been written by Gary Kildall of
Digital Research several years earlier and was the first operating system for microcomputers in general use.
August QDOS 0.10 (Quick and Dirty Operating System) is shipped by Seattle Computer Products. Paterson's
DOS was approximately 4,000 lines of assembler source. Although it was completed in a mere six weeks (about two
man-months), QDOS was sufficiently different from CP/M to be considered legal, and it worked surprisingly well.
Paterson was later hired by Microsoft. A week later, the EDLIN line editor was created; EDLIN was supposed
to last only six months before being replaced.
September Tim Paterson shows Microsoft his 86-DOS, written for the 8086 chip.
October Microsoft's Paul Allen contacts Seattle Computer Products' Tim Paterson, asking for the rights
to sell SCP's DOS to an unnamed client (IBM). Microsoft pays less than US$100,000 for the right.
December Seattle Computer Products renames QDOS to 86-DOS, releasing it as version 0.3. Microsoft then
bought non-exclusive rights to market 86-DOS.
1981
February MS-DOS runs for the first time on IBM's prototype microcomputer.
July In July 1981, Microsoft bought all rights to the DOS from Seattle Computer, and the name MS-DOS
was adopted. Shortly afterward, IBM announced the Personal Computer, using as its operating system what was essentially
Seattle Computer's 86-DOS 1.14. Microsoft continued improving the DOS, providing version 1.24 to IBM
(as IBM's version 1.1), with MS-DOS version 1.25 as the general release to all MS-DOS customers in March 1982.
Version 2.0, released in February 1983, was announced together with IBM's new XT computer.
August IBM announces the IBM 5150 PC Personal Computer, featuring a 4.77-MHz Intel 8088 CPU, 64KB RAM,
40KB ROM, one 5.25-inch floppy drive, and PC-DOS 1.0 (Microsoft's MS-DOS), for US$3000. While architecturally it
was largely an improved version of QDOS, IBM subjected the operating system to an extensive quality-assurance program,
reportedly found well over 300 bugs, and decided to rewrite some programs. This is why PC-DOS is copyrighted by
both IBM and Microsoft.
1982
May Microsoft releases MS-DOS 1.1 to IBM, for the IBM PC. It supports 320KB double-sided floppy
disk drives. Microsoft also releases MS-DOS 1.25, similar to 1.1 but for IBM-compatible computers.
1983
March MS-DOS 2.0 for PCs is announced. It was rewritten from scratch, supporting 10 MB hard drives, a
tree-structured file system, and 360 KB floppy disks.
October IBM introduces PC-DOS 2.1 with the IBM PCjr.
1984
March Microsoft releases MS-DOS 2.1 for the IBM PCjr. Microsoft releases MS-DOS 2.11. It includes
enhancements to better allow conversion into different languages and date formats.
August Microsoft releases MS-DOS 3.0 for PCs. It adds support for 1.2 MB floppy disks, and bigger (than
10 MB) hard disks.
November Microsoft releases MS-DOS 3.1. It adds support for Microsoft networks.
1986
January Microsoft releases MS-DOS 3.2. It adds support for 3.5-inch 720 KB floppy disk drives. Microsoft
releases MS-DOS 3.25.
August Microsoft ships MS-DOS 3.3, the most successful version of DOS ever. It became the de
facto standard DOS in most markets, quickly displacing older versions.
November Compaq ships MS-DOS 3.31 with support for drives over 32 MB.
1988
Digital Research transforms CP/M into DR DOS.
June Microsoft releases MS-DOS 4.0, including a graphical/mouse interface. DOS 4.0 was not very successful.
July IBM ships DOS 4.0. It adds a shell menu interface and support for hard disk partitions over 32 MB.
April Microsoft introduces Russian MS-DOS 4.01 for the Soviet market. It did not have much success, as
DOS 3.3 was already entrenched and Soviet PCs were weaker than those on the Western market.
May Digital Research releases DR DOS 5.0.
1991
June Microsoft releases MS-DOS 5.0, which became hugely successful and gradually displaced DOS 3.3 as the
main DOS version. It adds a full-screen editor, undelete and unformat utilities, and task swapping. GW-BASIC is
replaced with QBasic, based on Microsoft's QuickBASIC.
September Digital Research Inc. releases DR DOS 6.0, for US$100.
1993
March
Microsoft introduces the MS-DOS 6.0 Upgrade, including DoubleSpace disk compression. 1 million copies of the new
and upgrade versions are sold through retail channels within the first 40 days.
November Microsoft releases MS-DOS 6.2.
1994
February Microsoft releases MS-DOS 6.21, removing DoubleSpace disk compression as a result of the Stac Electronics lawsuit.
April IBM releases PC-DOS 6.3, the first version of PC DOS significantly different from MS-DOS.
June Microsoft releases MS-DOS 6.22, bringing back disk compression under the name DriveSpace. This became
Microsoft's last standalone version of DOS, although its popularity never reached the levels of DOS 3.3 and DOS 5.0. It
was a surprisingly good and feature-rich version, although IBM's PC DOS 7 was more innovative.
1995
February IBM announces PC DOS 7, with integrated data compression from Stac Electronics (Stacker).
April IBM releases PC DOS 7, with a REXX interpreter and an XEDIT-style editor. Many consider this version
to be the "ultimate DOS".
September Microsoft releases DOS 7 as a part of Windows 95. It provides support for long filenames when
Windows is running, but removes a large number of utilities, some of which are on the Windows 95 CD in the
\other\oldmsdos directory. (see
http://www.nukesoft.co.uk/msdos/dosversions.shtml
)
1997
Version 7.1 was released by Microsoft. This version is part of OEM Service Release 2 and later of Windows
95. The main change is support for FAT32 hard disks, a more efficient and robust way of storing data on large drives
(see http://www.nukesoft.co.uk/msdos/dosversions.shtml
)
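A rough, illustrative sketch of why FAT32 mattered on large drives (the numbers below are back-of-the-envelope, not taken from the text above): FAT16's 16-bit cluster numbers cap a volume at roughly 65,525 clusters, so big partitions are forced into huge clusters and small files waste most of each one, while FAT32's 28-bit cluster numbers keep clusters small.

```python
# Illustrative sketch (not from the article): smallest power-of-two cluster
# size needed to cover a volume, given a file system's cluster-count limit.

def min_cluster_size(volume_bytes: int, max_clusters: int) -> int:
    """Smallest power-of-two cluster that covers the volume within the limit."""
    size = 512  # one sector, the smallest practical cluster
    while size * max_clusters < volume_bytes:
        size *= 2
    return size

VOLUME = 2 * 10**9  # a ~2 GB drive, large for the mid-1990s
fat16 = min_cluster_size(VOLUME, 65525)      # 16-bit cluster numbers
fat32 = min_cluster_size(VOLUME, 1 << 28)    # FAT32's 28-bit cluster numbers

assert fat16 == 32 * 1024  # FAT16 is forced into 32 KB clusters
assert fat32 == 512        # FAT32's cluster count is no longer the constraint
```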
DOS (Disk Operating System) was a derivative of CP/M. The latter was a single-user, single-tasking
computer operating system with a command-line interface. It was written by Gary Kildall, the brilliant
programmer who single-handedly introduced several important concepts into microcomputer OS design; the use of a BIOS is probably
the most important one. CP/M dominated the world of 8-bit microprocessors.
In 1980, IBM approached Digital Research, at Bill Gates's suggestion, to license CP/M for its upcoming IBM PC.
Although he knew about the meeting, Kildall missed the first part because he chose to deliver software in person to
North Star Computers in
his Pitts Special airplane at the same
time. His wife Dorothy, an attorney, handled the business discussions as was usually the case. She hesitated to sign
IBM's complex non-disclosure
agreement, which the IBM representatives required before revealing any details of the project. Upon returning to
DR, Kildall quickly signed the agreement, but he remained unenthusiastic about porting CP/M to the IBM PC's
8088 processor.[2]
IBM related the story to Gates, who was already providing the
ROM BASIC interpreter for the PC. Gates was astonished that
Kildall had not shown more interest; in later years he would claim that Kildall capriciously "went flying." IBM representatives
expressed doubts that the project could continue without CP/M, so Gates agreed to provide a CP/M-compatible OS for the
PC, expecting that he could license the CP/M clone QDOS
from Seattle Computer Products. Microsoft adapted QDOS for the IBM PC and delivered it to IBM as
PC-DOS.
Kildall believed that PC-DOS infringed on CP/M's
copyright, but
software copyright law was
in its infancy—the decision in the landmark
Apple v. Franklin case was still
two years away—and according to accounts of Kildall's employees and friends, Kildall was wary of engaging IBM in a lengthy
and costly lawsuit. Nevertheless, he confronted IBM
in late 1980 with his allegation, and they agreed to offer CP/M as an OS option for the PC in return for Digital's release
of liability.[2] When
the IBM PC was introduced, IBM sold the operating system as an unbundled (but necessary) option. One of the operating
system options was PC-DOS, priced at US$60. A new port of CP/M, called
CP/M-86, was offered a few months later and priced
at $240. Largely due to its early availability and the substantial price difference, PC-DOS became the preferred operating
system. IBM's decision to source its favored operating system from
Microsoft marked the beginning of Digital Research's
decline.
Kildall never spoke publicly about the IBM negotiations or the success of MS-DOS, although he talked about it within
DR and wrote an unpublished 226-page memoir shortly before his death that contained his account. This memoir served
as source material for a chapter about Kildall and CP/M in the 2004 book They Made America by
Harold Evans.
People who have disputed his account include QDOS author
Tim Paterson, DR legal counsel Gerry Davis,
DR programmers Gordon Eubanks and
Alan Cooper, and members of the IBM PC team.
Eubanks has also rejected Gates's suggestion that Kildall took the day off when IBM visited,
noting that he flew on company business.[3][4][2]
Tim Paterson was 24 when he wrote DOS and just out of the college. Here how he recollects the events in his blog entry
The First DOS Machine
Seattle Computer Products (SCP) introduced their 8086 16-bit computer system in October 1979, nearly two years before
the introduction of the IBM PC. By "computer system", I actually mean a set of three plug-in cards for the S-100 bus:
the 8086 CPU card, the CPU Support Card, and the 8/16 RAM. At that time SCP did not manufacture the S-100 chassis these
cards would plug into. That chassis and a computer terminal would be needed to make a complete working computer.
The S-100 Bus
Inside of a modern personal computer you'll find a motherboard crammed with the heart of the computer system: CPU,
memory, disk interface, network interface, USB interface, etc. Off in one corner you usually find four or five PCI slots
for add-in cards, but for most people no additional cards are needed (except maybe a graphics card).
In contrast, the S-100 Bus motherboard contained no components at all. A full-size motherboard had nothing but 18
– 22 card slots. Each slot accepted a 5" x 10" S-100 card with its 100-pin edge connector. A typical computer system
would have a card with a CPU, possibly several cards with memory, a card for a floppy disk interface, a card for serial
I/O, possibly a video card, etc.
This arrangement was started by MITS with the Altair 8800 computer, but eventually became standardized by the Institute
of Electrical and Electronics Engineers as IEEE-696. During the standardization process, the S-100 bus was extended
from being an 8-bit bus (typically used by 8080 and Z80 processors) to a 16-bit bus. It was this extension to 16-bits
that made the S-100 bus a suitable target for the 8086 computer from SCP.
SCP also wanted to take advantage of the vast range of existing cards for the S-100 bus. They didn't need to make
cards for disk interface, serial I/O, video, etc. since they were already available. Even the (empty) chassis itself
was a standard item. An existing computer owner could simply swap out his 8-bit CPU card and replace it with the 16-bit
SCP card, and all the hardware would work together (but the software was another matter).
The SCP 16-bit Computer System
The 8086 CPU card was an Intel 8086 microprocessor with dozens of logic chips needed to interface it to the S-100
bus. The signals and timings of the bus were built around the original 8-bit Intel 8080, and it took a lot of "glue"
logic to create the same signals with a different microprocessor (this was also true for the Z80). For the 8086, however,
there was also a significant added layer of working in "8-bit mode" or "16-bit mode". These modes were defined by the
IEEE standard so that a 16-bit CPU could be used with existing 8-bit memory with a performance penalty, or with new
16-bit memory at full speed. Essentially, the CPU card could request 16 bits for each memory access. If there was no
response, the card went into 8-bit mode: the microprocessor would be stopped momentarily while logic on the card ran
two consecutive 8-bit memory cycles to fetch the required 16 bits.
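The fallback Paterson describes can be sketched as follows (a toy illustration of the idea, not real bus logic): a 16-bit fetch that gets no 16-bit response is satisfied by two consecutive 8-bit cycles whose bytes are then reassembled.

```python
# Toy model of the IEEE-696 8-bit/16-bit memory modes described above
# (an illustration only, not actual S-100 hardware behavior).

def fetch16(memory: bytes, addr: int, has_16bit_ram: bool) -> int:
    """Fetch a 16-bit word, in one cycle or in two 8-bit cycles."""
    if has_16bit_ram:
        # 16-bit memory responded: one full-speed 16-bit cycle.
        return int.from_bytes(memory[addr:addr + 2], "little")
    # No response: run two consecutive 8-bit cycles, low byte then high byte,
    # while the CPU is momentarily stopped, and reassemble the word.
    low = memory[addr]
    high = memory[addr + 1]
    return low | (high << 8)

mem = bytes([0x34, 0x12])
# Either path yields the same word; the 8-bit path is just slower in hardware.
assert fetch16(mem, 0, True) == fetch16(mem, 0, False) == 0x1234
```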
The SCP 8086 CPU had a mechanical switch on it that allowed the microprocessor to run at either 4 MHz or 8 MHz (while
a processor you get today would run at around 3,000 MHz = 3 GHz). When SCP first started sales, Intel could not yet
provide the 8 MHz CPU chip so early units could only be run at 4 MHz.
The CPU Support Card was a collection of stuff needed to make a working computer. The most important items were:
A boot ROM with debugger.
A serial port to connect to a computer terminal.
A time-of-day clock.
The 8/16 RAM had 16 KB (not MB!) of memory. It could operate in "16-bit mode" so the 16-bit processor could run at
full speed. In the early days, using four of these cards to build a system with 64 KB of memory would have been considered
plenty.
The only 16-bit software available when the system was first released was Microsoft Stand-Alone Disk BASIC. SCP did
not make a floppy disk controller, but supported disk controllers made by Cromemco and Tarbell.
Development Timeline
I earned my BS in Computer Science in June of 1978 and started work at SCP. Intel had just announced their new 8086 microprocessor,
and I was sent to a technical seminar to check it out. The May 1978 issue of IEEE Computer Magazine had published the
first draft of the S-100 standard which included 16-bit extensions, so I was excited about the possibility of making
a 16-bit S-100 computer. My primary duties at SCP were to improve and create new S-100 memory cards, which at that time
were SCP's only products. But I was given the go-ahead to investigate a computer design when I had the time.
By January of 1979 my design for the 8086 CPU and CPU Support Card had been realized in prototypes. I was able to
do some testing, but then hardware development needed to be put on hold while I wrote some software. Not even Intel
could provide me with an 8086 assembler to do the most basic programming, so I wrote one myself. It was actually a Z80
program that ran under CP/M, but it generated 8086 code. Next I wrote a debugger that would fit into the 2 KB ROM on
the CPU Support Card.
In May we began work with Microsoft to get their BASIC running on our machine. I brought the computer to their offices
and sat side by side with Bob O'Rear as we debugged it. I was very impressed with how quickly he got it working. They
had not used a real 8086 before, but they had simulated it so BASIC was nearly ready to go when I arrived. At Microsoft's
invitation, I took the 8086 computer to New York to demonstrate it at the National Computer Conference in the first
week of June.
There was a small setback when the June 1979 issue of IEEE Computer Magazine came out. The draft of the IEEE S-100
standard had changed significantly. I got involved in the standards process to correct some errors that had been introduced.
But I still had to make design changes to the 8086 CPU which required a whole new layout of the circuit board, with
a risk of introducing new errors. It turned out, however, that no mistakes were made so production was able to start
3 months later.
Evolution
The 16 KB memory card was eventually replaced by a 64 KB card. Intel introduced the 8087 coprocessor that performed
floating-point math, and SCP made an adapter that plugged into the 8086 microprocessor socket that made room for it.
Later SCP updated the 8086 CPU card so it had space for the 8087.
The software situation did not change until I wrote DOS for this machine, first shipping it in August 1980.
When IBM introduced their PC in August 1981, its 8088 processor used 8-bit memory, virtually identical in performance
to using 8-bit memory with the SCP 8086 CPU. Except IBM ran their processor at 4.77 MHz while the SCP machine ran at
8 MHz. So the SCP 8086 computer system was about three times faster than the IBM PC.
IBM also reintroduced memory limitations that I had specifically avoided in designing the 8086 CPU. For S-100 computers,
a low-cost alternative to using a regular computer terminal was to use a video card. The video card, however, used up
some of the memory address space. The boot ROM would normally use up address space as well. SCP systems were designed
to be used with a terminal, and the boot ROM could be disabled after boot-up. This made the entire 1 MB of memory address
space available for RAM. IBM, on the other hand, had limited the address space in their PC to 640 KB of RAM due to video
and boot/BIOS ROM. This limitation has been called the "DOS 640K barrier", but it had nothing to do with DOS.
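The 1 MB figure above follows from the 8086's 20-bit segmented addressing, which a small sketch makes concrete (the arithmetic is standard 8086 real mode; the code itself is just an illustration):

```python
# Sketch of 8086 real-mode address arithmetic. A 16-bit segment and a
# 16-bit offset combine into a 20-bit physical address, so the total
# address space is 2**20 bytes = 1 MB.

def physical_address(segment: int, offset: int) -> int:
    """Combine a segment:offset pair into a 20-bit physical address."""
    return ((segment << 4) + offset) & 0xFFFFF  # wraps at the 1 MB boundary

TOTAL = 1 << 20            # 1 MB total address space on the 8086/8088
CONVENTIONAL = 640 * 1024  # RAM below the video/ROM region on the IBM PC

# Segment A000h is where the IBM PC's video memory begins, which is why
# only the first 640 KB of the 1 MB space was available as RAM on the PC:
assert physical_address(0xA000, 0x0000) == CONVENTIONAL
```

On the SCP machine, with the terminal replacing a video card and the boot ROM disabled after startup, nothing sat inside that range, so all 1 MB could be RAM.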
Microsoft took full advantage of the SCP system's capability. In 1988, years after SCP had shut down, they were still
using the SCP system for the one task that only it could perform ("linking the linker"). Their machine was equipped with the
full 1 MB of RAM (16 of the 64 KB cards). That machine could not be retired until 32-bit software tools were developed
for Intel's 386 microprocessor.
In July 1981, Microsoft bought all rights to the DOS from Seattle Computer to fulfill the contract it had already signed with IBM, and the name
PC-DOS was adopted. Shortly
afterward, IBM announced the Personal Computer, using as its operating system what was essentially Seattle Computer's 86-DOS
1.14. Microsoft continued improving the DOS, providing version 1.24 to IBM (as IBM's version 1.1), with MS-DOS
version 1.25 as the general release to all MS-DOS customers in March 1982. Version 2.0, released in February 1983,
was announced together with IBM's new XT computer. There was also an interesting article, "The Roots of DOS," in Softalk for the IBM PC
magazine, dated March 1983:
One does not write an operating system and fail to pick up a little wisdom in the process, particularly when that
operating system becomes the property of Microsoft Corporation.
Tim Paterson, who has changed jobs and companies fairly often in the past few years, is satisfied with his position
at present. He's at Seattle Computer, a company that made its name in the S-100 market and has since developed its own
microcomputer--the Gazelle. It's been almost a year since he quit Microsoft. His experiences constructing an operating
system that eventually would be central to IBM's Personal Computer and many other computers is quite a story.
Seen and Not Seen. Paterson looks good. He wears faded jeans. He has a dark beard and moustache, and he often
breaks into a mischievous grin. A terrific programmer and hardware engineer, at twenty-six Paterson is typically innocent-looking.
He and his kind are the backbone of computer companies. There is no shortage of business
people and production people in the computer industry. There are only a few terrific programmers. Everyone else is replaceable.
Yet, in a big company, programmers are sometimes the least known, least appreciated employees.
"With all the code being written out there, who gets the credit? People like Bill Gates get it all and he hasn't
written anything in years," says Paterson.
Paterson has been a pawn, a very valuable pawn, in a gigantic game of corporate chess.
He has been moved around the board and asked to perform numerous tasks, some of which he'd like to forget.
Like any good pawn, he's been relatively straightforward and dependable. But deep down, he's always wanted to be his
own master, to call his own moves.
"My dad was an electrical engineer and when I went to school it was the first thing I tried," Paterson says.
In high school, Paterson took a semester of Fortran. He worked with a 7400, one of the TTL series of small logic
chips. He didn't learn good textbook design from classes. "I learned it by reading and playing with it. I got a lot
of exposure to electronics stuff at home."
It was as a student at Seattle's University of Washington that Paterson first encountered personal computers. In
early 1976, his roommate bought an IMSAI 8080. "He provided the cash. I selected and maintained it," recalls Paterson.
"It had a 4K memory board with the 8080 chip and no I/O devices. You could play a few stupid games like 'chase the bit,'
which was entertaining for fifteen minutes at a time."
Days of Wire and Solder. Later that year, Paterson got a job as a technician at a Seattle-area retail computer
store. There he was an eyewitness to those early days when the only way to own a microcomputer was to buy components
and assemble it yourself. He's not kidding when he says, "Life begins with a disk drive."
With his university courses and practical experience in the retail store, Paterson learned quickly. He started toying
around with designing his own peripheral boards.
"I got to know Rod Brock of Seattle Computer when he came into the store periodically. We were selling his boards.
Eventually he asked me to consult for Seattle Computer.
"They were having problems with a 16K static memory board for the S-100," Paterson continues. "Rod Brock hired me
in June 1978, at fifty dollars a day, to make the board work. I left the retail computer store at that time." After
a few weeks of consulting, he was made a salaried employee of Seattle Computer.
In his computer science classes, Paterson had become interested in operating systems, as well as hardware and compilers.
After receiving his bachelor of science degree in June '78, he attended graduate courses off and on for about another
year. But he gradually lost interest. "I thought they were too oriented towards theory and not what I needed."
At Seattle Computer he at first worked on several projects--redesigning an S-100 memory board, which led to two new
memory board designs. Things changed when he attended a local seminar on the Intel 8086 chip in late July 1978.
"I gained the respect of Rod Brock and made suggestions. I thought doing something with the 8086 would be neat and
Brock gave it the go-ahead.
"The first design of the 8086 CPU card was finished by the end of January. We had a prototype by May 1979. We built
three boards, but didn't wire them all up. There were both layout and design errors, but we got two prototypes working."
Rainy Day Computer. Seattle Computer was toying with the idea of eventually building its own computer, thus
the CPU card project. They wanted to diversify, but had no firm business plan.
Once a prototype of the 8086 CPU was up and running, Seattle was approached by Digital Research to see if it could
get CP/M to run on it. Microsoft, which had moved to the Seattle area in January 1979, wanted to see if some of its
programs would work, too. At the end of May 1979, Paterson went to Microsoft to work with Bob O'Rear there. In a week
or so, Paterson cranked out all 32K of Microsoft's Basic onto the 8086.
"That's pretty remarkable," says Paterson. "Microsoft already had developed several good utilities, like a cross-assembler
for use with the PDP-10. There were a few bugs, but basically the CPU worked and the development tools worked."
At the 1979 National Computer Conference in New York, Seattle Computer was the guest of Microsoft and Lifeboat. They
showed off standalone Basic-86, then the only software for the 8086. Seattle Computer started shipping the product with
its CPU card in November, primarily to software developers.
In April 1980, Paterson began work on an operating system.
"We needed an operating system at Seattle Computer for our own computers and I wanted to do one. So we decided to
go for it.
"I was waiting for Digital to come out with CP/M-86. I thought they would have it real soon. If they had beat me
I wouldn't have taken the trouble.
"I had always wanted to write my own operating system. I've always hated CP/M and thought I could do it a lot better."
Fast and Grimy. In the spring of 1980, Paterson started working on what would become MS-DOS. By July, he had
finished roughly 50 percent of the operating system and called it QDOS 0.10 (for quick and dirty). He quickly found
a bug and it became QDOS 0.11.
"Step one was to write down what CP/M-80 did. Step two was to design a file system that was fast and efficient."
One of the significant differences between MS-DOS and CP/M-86, when it finally appeared, was the file management
system. CP/M usually provides a window to no more than 16K or 32K. With MS-DOS, the entire file is available. Paterson
created QDOS's file management module using the same method found in standalone Basic-86.
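The difference Paterson describes can be sketched with a toy model of FAT-style allocation (illustrative only, not his actual code): each file is a chain of clusters, the allocation table maps each cluster to the next, and reading any part of a file is just following the chain, with no fixed 16K or 32K window.

```python
# Toy model of FAT-style file allocation (an illustration, not MS-DOS source).
# The FAT maps each cluster number to the next cluster in the file's chain;
# a sentinel value marks the end of the chain.

EOF = 0xFFF  # end-of-chain marker (12-bit FAT convention)

def read_chain(fat: dict[int, int], start: int) -> list[int]:
    """Return the ordered list of clusters belonging to one file."""
    chain, cluster = [], start
    while cluster != EOF:
        chain.append(cluster)
        cluster = fat[cluster]  # hop to the next cluster of the file
    return chain

# A file occupying clusters 2 -> 5 -> 6, then ending. The whole file is
# reachable by walking the table; no part of it is outside a "window".
fat = {2: 5, 5: 6, 6: EOF}
assert read_chain(fat, 2) == [2, 5, 6]
```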
"I'm into bottom-up programming. You know that you're going to need certain functions later in the program. I build
tools on which to make the next layer.
"When you're programming top-down, it's stepwise refining going from general actions to smaller actions. With my
method there isn't a lot of diagramming. Bottom-up programming is definitely legitimate, it just doesn't get a lot of
press."
By the end of August 1980, QDOS 0.11 worked well and was being shipped. It didn't stay QDOS very long, and not many
copies were distributed. For the much improved update, Paterson worked hard to include all the necessary features of
a complete operating system.
"There was an assembler, resident in the operating system, and debugging, but no editor. I wrote the quickest line
editor I could imagine--quick in how fast I wrote it, two weeks.
"I was aghast," says Paterson, "when I heard that IBM was using it and not throwing it out the window."
Eighty-six on Cue. In December 1980, Paterson and company came out with 86-DOS, 0.33, which had significant
improvements over QDOS. "86-DOS reflected pretty much everything we had learned so far. Just about the only thing it
didn't have was variable sector record sizes. The assembler, originally written on the Z-80, was made faster. We also
made some changes in the system calls. It was a pretty polished package when it was released."
Starting at the end of 1980, Seattle Computer sold 86-DOS to OEMs (original equipment manufacturers) and other companies
like Microsoft.
"They [Microsoft] paid us a flat fee. It was not a per-copy royalty, but per OEM. Part of the contract said we couldn't
ask them who they were selling it to or planning to sell it to."
In early 1981 the IBM Personal Computer had not yet been announced, but rumors were flying about Big Blue's micro.
"We all had our suspicions that it was IBM that Microsoft was dealing with, but we didn't know for sure."
Around April 1981, while he was doing some internal changes on 86-DOS--modifying system calls and including error
handling for hard disks--Paterson decided to quit Seattle Computer. In May, he went to Microsoft to work full-time on
the PC-DOS version of 86-DOS.
"The first day on the job I walk through the door and 'Hey! It's IBM,' " says Paterson, grinning impishly. "I worked
at Microsoft a neat eleven months. In May, June, and July I worked on things I hadn't quite finished, refining PC-DOS."
International Business Machinations. This was the beginning of an eleven-month hurricane. Almost daily, Paterson
shipped stuff to Boca Raton for IBM's approval, and IBM would instantly return comments, modifications, and more problems.
"They were real thorough. I would send them a disk the same day via Delta Dash. IBM would be on the phone to me as
soon as the disk arrived." Paterson pauses and winds up. He's remembering one request that clashed violently with his
view of the project.
"IBM wanted CP/M prompts. It made me throw up." But when IBM asks, you comply if you're a lowly programmer, and that
is what Paterson did.
He finished PC-DOS in July, one month before the pc was officially announced to the world. By this time, 86-DOS had
become MS-DOS.
"Microsoft wanted to own it--pay all monies now and take it off Seattle Computer's hands. Both companies realized
a good deal when they saw it. Seattle Computer really didn't have the marketing clout of Microsoft.
"So on July 27, 1981, the operating system became Microsoft's property. According to the deal, Seattle Computer can
still see the source code, but is otherwise just another licensee. I think both companies were real happy. The deal
was closed just a few weeks before the pc was announced. Microsoft was quite confident." Paterson pauses.
"Selling it was the right move. Seattle Computer is doing good without it. The timing was appropriate. I was invited
to sit in on the meeting between Rod Brock and Paul Allen. I was flattered."
Is There Life after DOS? After the initial version of PC-DOS, Paterson went on to other programming tasks
at Microsoft. He worked on an 8086 version of Microsoft's Basic compiler.
Paterson is like many programmers in the industry. Sure, he likes elegance. Sure, he likes simplicity. Sure, he likes
to make things easy for the user. But what he likes more than anything else in a program or system is speed.
"I love assembly language. I'm a speed freak, especially with mathematical routines. If there's a loop that you want
to repeat five times, I'll just rewrite the line that many times. Bam, bam, bam, woosh! The 8086 does normal loops real
slow."
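The trick Paterson describes is loop unrolling. A hedged sketch in Python (he was of course writing 8086 assembly, where the payoff is removing the per-iteration counter update and jump):

```python
# Illustration of the loop unrolling Paterson describes: replace a counted
# loop with its body written out the required number of times.

def scale_looped(values, factor):
    # The straightforward loop: one index update and branch per element.
    out = []
    for v in values:
        out.append(v * factor)
    return out

def scale_unrolled_5(values, factor):
    # Fully unrolled for exactly five elements: "rewrite the line that many
    # times" -- no loop bookkeeping at all, at the cost of fixed-size input.
    v0, v1, v2, v3, v4 = values
    return [v0 * factor, v1 * factor, v2 * factor, v3 * factor, v4 * factor]

# Both forms compute the same result; only the control flow differs.
assert scale_looped([1, 2, 3, 4, 5], 2) == scale_unrolled_5([1, 2, 3, 4, 5], 2)
```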
Paterson, still with Microsoft, did some consulting for Seattle Computer that fall, working on a RAM card for the
IBM pc. Soon after he finished, he heard that Microsoft was working on a similar project.
"It was a real thrill to design a card for my employer's competitors. Microsoft was not too upset. They didn't chop
my fingers off. By March 1982, Seattle's board had become quite popular. It came out months before anyone else came
out with a multifunction board."
Late in 1981, Paterson and Microsoft got the word that IBM was looking for a 1.1 update. In November, he was made
technical product manager of MS-DOS. He and his group delivered the initial version of 1.1 at the end of the year, a
few days early. Then the Boca Raton deluge came.
"The whole process drove me crazy. A lot of bugs--PTRs [program trouble reports] -- kept dribbling in and IBM would
make a phone call a day. It really drove me nuts. I felt like a puppet on an IBM string."
In March 1982, after two months of going back and forth with Paterson, IBM accepted 1.1. Paterson spent the last
weeks of that month planning 2.0. Then, on April 1, he suddenly quit Microsoft. (Mark Zbikowski became the group manager
and has brought MS-DOS to the brink of releasing the 2.0 version, which Paterson had little to do with.)
Not a man to burn his bridges, Paterson left Microsoft on good terms and returned to Seattle Computer to work, at
first, on Seattle's floppy disk controller.
Wrong-Way Paterson. Seattle Computer was doing quite well. Paterson had owned 10 percent of the company since
1979, and had been an officer and member of the board. Achieving such a position at Microsoft was unlikely.
"It was not what was wrong with Microsoft, but what Seattle Computer had to offer. At Seattle, I'm director of engineering.
Hey! That's really motivating. It was the basis for my moving back. I was a little irritated with Microsoft, mainly
having to work with IBM. Microsoft recognizes talent. If somebody complains, they try to move them around."
Paterson moved himself, though, out the front door.
At present, he and Seattle Computer are "catching up." They have new S-100 products in the works and will have "new
IBM stuff" soon.
"We're working on our expansion cards, making them with four functions instead of just two. We want to catch up with
those four function guys. We're also working on a new enclosure for the Gazelle.
"I have the urge to do another operating system--to see if, under the circumstances, I couldn't do it better. I don't
think I will, though, not at Seattle Computer. I don't have the time.
"Currently, my job does not include designing software, though I consider myself a software designer. I can picture
all kinds of neat things--combined packages. Memory's cheap these days and it should be possible to have a spreadsheet,
word processor, database, and more at fingertip control in one package.
"Still, the 8086/8088 looks like it for a while. It'll go through a software jump; it hasn't caught up with the Z-80
yet."
Speed Racer. When far from the world of programming and corporate politics, Paterson is something of a race-car
nut. He codrives a Mazda RX-2 in pro rallies.
"The car has a roll cage and we wear helmets and all that. I have an RX-7 and, yeah, I'm kinda into cars."
Paterson is still looking for that elusive something. Independently minded, he seeks complete freedom. He doesn't
want to work for someone else all his life. More properly put, he doesn't want always to be doing someone else's work.
Some year Paterson would like to start his own company. When his Seattle Computer stock is worth enough, he just
may sell it and go off on his own.
"Don't worry, the boss knows. Rod Brock said to me, 'Tim, in a few years you'll go.' There is the potential that
I'll go all the way with Seattle, but I just don't know. Small companies that make it either become big or become part
of a big company."
For the moment, Paterson is just another brilliant programmer. He's happy, but a little sad sometimes.
The acronym DOS was not new even then. It had originally been used by IBM in the 1960s in the
name of an operating system (i.e., DOS/360) for its System/360 line of mainframes. After the QDOS code was quickly polished up and presented to IBM
for evaluation, IBM found itself with Microsoft's offering of "Microsoft Disk Operating System 1.0". An agreement was
reached between the two companies, and IBM agreed to accept 86-DOS as the main operating system for its new PC.
Microsoft initially
kept the IBM deal a secret from Seattle Computer Products. Microsoft purchased all rights to 86-DOS
only in July 1981, and "IBM
Personal Computer DOS 1.0" was ready for the introduction of the IBM PC in October 1981. IBM subjected the operating system
to an extensive quality-assurance program, reportedly found well over 300 bugs, and decided to rewrite several programs. This
is why PC-DOS is copyrighted by both IBM and Microsoft. And in what was to become another extremely fortuitous move,
Bill Gates persuaded IBM to let his company retain marketing rights for the operating system separately from the IBM PC
project. So there were two versions of the same OS on the market: PC-DOS (the IBM version) and MS-DOS (the Microsoft version).
The two versions were initially nearly identical, but they eventually diverged.
Digital Research wanted $495 for CP/M-86, while PC-DOS, with similar functionality, cost about $40. As a result,
many software developers found it easier to port existing CP/M software to DOS than to the new version of CP/M.
IBM's $39.95 DOS was far cheaper than anyone else's alternative, and everyone bought DOS. Formally, by the way, the
IBM PC shipped without an operating system; IBM didn't start bundling DOS until the second-generation AT/339 came out.
Until its acquisition of QDOS and the development of MS-DOS on its base, Microsoft had been a major vendor of programming
languages for microcomputers. It started with a BASIC interpreter: Gates and co-founder Paul Allen had written Microsoft
BASIC and were selling it on disks and tape, mostly to PC hobbyists. Later they added several other languages. But due
to the tremendous success of MS-DOS, Microsoft all at once became an operating systems company. As the PC became the most
popular personal computer, revenue from its sales fueled Microsoft's phenomenal growth, and MS-DOS was the key to the
company's rapid emergence as the dominant firm in the software industry. The product continued to be the largest single
contributor to Microsoft's income well after the company had become more famous for Windows.
The original version of DOS (version 1.0) was quickly superseded by newer versions due to the phenomenal pace of
development at Microsoft. For several years Microsoft produced a new version each year, and each version contained
significant enhancements:
Version 1.25, released in 1982, added support for double-sided disks, thereby eliminating the need to manually turn
the disks over to access the reverse side.
Version 2.0, released the next year, added support for directories, for IBM's then huge 10MB hard disk and for 360KB,
5.25-inch floppy disks. This was followed by version 2.11 later in the same year, which added support for foreign and
extended characters.
Version 3.0, launched in 1984, added support for 1.2MB floppy disks and 32MB hard disks. This was soon followed
by version 3.1, which added support for networks.
DOS 3.2 was probably the most popular DOS before DOS 5 was released. It was so tiny, in fact, that it could fit on
a single 360K floppy disk and still leave sufficient room on the same disk for a Borland C compiler and data files.
By 1984, IBM's PC DOS had grabbed 35% of the personal computer market. With the arrival of IBM-compatible
clones running MS-DOS, the path was set for DOS to become the standard by sheer volume alone. By 1986 DOS had become
a de facto standard.
Additions and improvements in subsequent versions included support for multiple hard disk partitions, for disk compression
and for larger partitions as well as an improved disk checking program, improved memory management, a disk defragmenter
and an improved text editor.
In spite of its very small size and relative simplicity, it is one of the most successful operating systems ever
developed. Paradoxically, this simple system was more popular than all versions of Unix combined, despite the fact
that Unix is a much more complex and richer OS -- or maybe because of that. Due to the mass market created by DOS and PCs,
the quality of DOS applications was very high, and in most cases they simply wiped the floor with their Unix counterparts.
DOS has more than a dozen variants, with three main ones: MS-DOS (Microsoft), PC DOS (IBM) and DR DOS (Digital Research).
DOS is a text-based, 16-bit, single-user, single-tasking operating system. The early success of Microsoft is mainly based
on the success of MS-DOS. Later versions of MS-DOS brought some changes in memory handling and peripheral support,
but because of its architecture, DOS was unable to make use of the capabilities of the 386 and later Intel chips.
Though limited to running a single program at a time, DOS is still preferred by many programmers (mainly developers of games), because
the application has full control over all resources in the PC. Some unique programs, like
1-2-3, MS Word,
Norton Commander, Xtree and Ghost, originated in DOS. While the two-button
mouse was widely available on PCs from approximately 1986, DOS programs made wide use of "letter accelerators" for menus.
DOS games became the most successful segment of the software industry, and many of them
became instant classics.
Many sites on the Internet still make them available for download. Such games as
Tetris,
Digger, Alleycat, Frogger, Adventure of
Captain Comic, Battle Chess, Lemmings,
Prince of Persia and Doom became part of
software game history.
Microsoft used a loophole in the IBM contract to market its own version of DOS, called MS-DOS. Digital Research was the first
company to try to unseat the Microsoft monopoly by producing better versions of DOS than MS-DOS, but Microsoft proved to be a
very tough competitor. As Wikipedia states:
The first version was released in May 1988. Version
numbers were chosen to reflect features relative to MS-DOS; the first version promoted to the public was DR-DOS 3.41,
which offered features comparable or superior to those of the massively successful MS-DOS 3.3 - and Compaq's version, Compaq DOS
3.31. (Compaq's variant was the first to introduce the system for supporting hard disk partitions of over 32MB, which
later became the standard used in MS-DOS 4.0 and all subsequent releases.)
At this time, MS-DOS was only available bundled with hardware, so DR-DOS achieved some immediate success as it was
possible for consumers to buy it through normal retail channels. Also, DR-DOS was cheaper to license than MS-DOS. As
a result, DRI was approached by a number of PC manufacturers who were interested in a third-party DOS, and this prompted
several updates to the system.
Most significant update
The most significant was DR-DOS 5.0 in May 1990. (The
company skipped version 4, avoiding comparison with MS-DOS 4.0.) This introduced
ViewMAX, a
GEM based
GUI file management shell, and bundled disk-caching software,
but more significantly, it also offered vastly improved memory management over MS-DOS. Compared to earlier MS-DOS 4.01
which already bundled a 386-mode memory manager (EMM386.SYS), capable of converting
Extended
Memory Specification (XMS) memory into
Expanded
Memory Specification (LIM 4.0 EMS) memory more commonly used by DOS applications, memory management in DR-DOS had
two extra features.
First, on Intel 80286 or better microprocessors
with 1MB or more RAM, the DR-DOS kernel and structures such as disk buffers could be located in the
High Memory Area, the first 64KB
of extended memory which were accessible
in real mode due to an incomplete compatibility
of the 80286 with earlier processors. This freed up the equivalent amount of critical "base" or
Conventional memory, the
first 640KB of the PC's RAM – which was the area in which all MS-DOS applications had to run. Using high memory was
not a new idea, as this memory could previously be used by Windows applications starting with
Windows/286 2.1 released in 1988, but offering
more memory to old DOS applications was more interesting.
Additionally, on Intel 80386 machines,
DR-DOS's EMS memory manager allowed the OS to load DOS device drivers into upper memory blocks, further freeing base
memory. For more information on this, see the article on the
Upper Memory Area.
DR-DOS 5 was the first DOS to integrate such functionality into the base OS (under MS-DOS, loading device drivers into upper memory
blocks was possible only with third-party memory managers such as QEMM). As such, on
a 386 system, it could offer vastly more free conventional memory than any other DOS. Once drivers for a mouse, multimedia
hardware and a network stack were loaded, an MS-DOS machine typically might only have 300 to 400KB of free conventional
memory – too little to run most late-1980s software. DR-DOS 5, with a small amount of manual tweaking, could load all
this and still keep almost all of its conventional memory free - allowing, after some necessary DOS data structures, as
much as 620KB out of the 640KB to remain free.
So much, in fact, that some programs would fail to load as they started "impossibly" low in memory – inside the first
64KB. DR-DOS 5's new LOADFIX command worked around this by leaving a small empty space at the start of the memory map.
Given the constraints of the time, this was an incredibly powerful technology which made life much easier for PC
technicians of the day, and this propelled DR-DOS 5.0 to rapid and considerable popularity.
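MS-DOS 5 later exposed the same HMA/UMB facilities through CONFIG.SYS directives (HIMEM.SYS, EMM386.EXE, DOS=HIGH,UMB, DEVICEHIGH). A typical sketch of such a configuration - the mouse-driver path is a hypothetical example, and driver ordering varied by machine:

```
REM Enable extended memory and the High Memory Area
DEVICE=C:\DOS\HIMEM.SYS
REM 386 memory manager; NOEMS provides upper memory blocks without EMS
DEVICE=C:\DOS\EMM386.EXE NOEMS
REM Load the DOS kernel into the HMA and link in the upper memory blocks
DOS=HIGH,UMB
REM Load drivers into upper memory instead of conventional memory
DEVICEHIGH=C:\MOUSE\MOUSE.SYS
```

DR-DOS 5 achieved the equivalent with its own drivers and directives, but the principle was the same: move the kernel and drivers out of the first 640KB so applications could have it.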
Aggressive competition by Microsoft
Faced with substantial competition in the DOS arena,
Microsoft responded strongly. They announced
the development of MS-DOS 5.0 in May 1990, to be released a few months later and to include advanced features similar to
those of DR-DOS. This has been seen as vaporware,
as MS-DOS 5.0 was not released until June 1991. It matched DR's enhancements in memory management, but did not
offer all of the improvements to the syntax of DOS commands that DR did.
DR responded with DR-DOS 6.0 in 1991. This bundled in
SuperStor on-the-fly disk compression, to maximise the space available on the tiny hard disks of the time - 40MB was
still not an atypical size, which, with the growth of larger applications and especially
Microsoft Windows, was frequently
not enough. DR-DOS 6.0 also included an
API
for multitasking on
CPUs capable of memory
protection, namely the Intel 80386 and newer.
The API was available only to DR-DOS aware applications, but well-behaved ordinary DOS applications could also be pre-emptively
multitasked by the bundled task-switcher, TaskMax. On 286-based systems, DOS applications could be suspended to the
background to allow others to run. However, DR's multitasking system was seen as technically inferior to third-party
offerings such as DESQview, which could multitask
applications which performed direct hardware access and graphical applications and even present them in scalable on-screen
windows. Though far from being a true "multitasking" operating system, TaskMax nonetheless represented an important
"tick on the box" - a feature on the list of specifications.
Microsoft responded with MS-DOS 6.0, which again matched the more important features of DR-DOS 6.0 - but the use
of stolen code caused legal problems. See the article on
MS-DOS for more.
Though DR-DOS was almost 100% binary compatible with applications written for MS-DOS, Microsoft nevertheless expended
considerable effort in attempts to break compatibility. In one example, they inserted code into the beta version of
Windows 3.1 to return a non-fatal error message if it detected a non-Microsoft DOS. With the detection code disabled
(or if the user canceled the error message), Windows ran perfectly under DR-DOS.
[1] This code was removed from final release of Windows 3.1 and all subsequent versions, however. (see also
Embrace,
extend and extinguish for other Microsoft tactics.)
IBM was the second company that tried to fight MS-DOS dominance, but it entered the war too late, when the major battles
were already lost. Until version 6, all versions of MS-DOS and PC DOS were very similar, but at version 6 the two OSes
diverged. Especially different was PC DOS 7, which many consider the best DOS ever created. Both PC DOS 6 and PC DOS
7 arrived too late to change history. PC DOS 7 was substantially more reliable and easier to configure than any of its
competitors. In short, PC DOS 7 was a winner. It even contained a
REXX interpreter and an XEDIT-style editor. The final release
of PC DOS was Y2K compliant and was also known as PC DOS 2000. Versions 7 and 2000 natively support the XDF floppy format
(the 800.com driver was used for this purpose before).
The final major version was probably DOS 7.0, which was released in 1995 as part of Microsoft Windows 95. It featured
close integration with that operating system, including support for long filenames, and the removal of some command-line
utilities, which were preserved and could be installed separately from the Windows 95 CD-ROM. It was revised in 1997 with
version 7.1, which added support for the FAT32 filesystem on hard disks.
Although many of its features were copied from Unix and CP/M, MS-DOS was a distinct OS that surpassed Unix in
command-line interface sophistication.
The demise of MS-DOS came about mainly due to the rise in importance of GUI interfaces and the dramatic growth in the power
of personal computers. The introduction of the Apple Macintosh in 1984 brought about a surge of interest in GUIs (graphical user interfaces),
and it soon became apparent that they were the future of the personal computer interface.
Although many MS-DOS programs created their own, sometimes very sophisticated, character-based GUIs, this approach required
duplication of programming effort, and the lack of a consistent GUI API made it more difficult for users to learn new programs
and for developers to develop them.
It took Microsoft several years to provide a high-quality GUI of its own, but it finally succeeded with the introduction
of Windows 95 in 1995, which took the market by storm, making DOS obsolete. Even earlier versions of Windows, starting
with Windows 3.0 released in 1990, had a very capable GUI. Protected mode and real mode are the two modes of
operation supported by the Intel x86 architecture. The former enables 32-bit memory addressing, thereby permitting use of
the extended memory that cannot be easily accessed from real mode. This makes it possible to assign separate memory areas
to the operating system kernel and to each process (i.e., program or task), thus resulting in much more stable multitasking
than can be attained with real mode.
Early versions of Microsoft Windows ran under MS-DOS, whereas later versions were launched under MS-DOS but then
extended it by going into protected mode. Windows NT and its successors, Windows 2000 and XP, do not use MS-DOS; however,
they contain an emulation layer on which MS-DOS programs can be run, mainly for backward compatibility, which is excellent:
most DOS 1 programs can still run on both Windows 2000 and XP.
MS-DOS has a relatively small number of commands, and an even smaller number of commonly used ones.
They can be glued together by a very primitive shell in the form of batch files. Moreover, these commands
are generally inflexible because, in contrast to Unix-like operating systems, they are designed to accommodate few options
or arguments (i.e., values that can be passed to the commands).
Some of the most common commands are as follows (corresponding commands on Unix-like operating systems are shown in parentheses):
CD - changes the current directory (similar to unix cd)
COPY - copies a file (cp)
DEL - deletes a file (rm)
DIR - lists directory contents (ls)
EDIT - starts an editor to create or edit text files
FORMAT - formats a disk to accept DOS files (mformat)
HELP - displays information about a command (man, info)
MKDIR - creates a new directory (mkdir)
RD - removes a directory (rmdir)
REN - renames a file (similar to unix mv)
TYPE - displays contents of a file on the screen (similar to Unix cat)
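These commands could be glued together in a batch file, the primitive scripting mechanism mentioned above. A minimal sketch (the drive letters, paths and filenames are hypothetical):

```
@ECHO OFF
REM Back up all .TXT files from a floppy to a directory on the hard disk
IF NOT EXIST C:\BACKUP\NUL MKDIR C:\BACKUP
FOR %%F IN (A:\*.TXT) DO COPY %%F C:\BACKUP
ECHO Backup complete.
```

The `IF NOT EXIST C:\BACKUP\NUL` line is the classic DOS idiom for testing whether a directory exists, by probing for the NUL device inside it; batch files had no direct directory test.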
The primitivism of the DOS shell stimulated the creation of various file managers (see
The History of Development of Norton Commander)
with additional shell capabilities, as well as alternative shells, the most successful of which was
4DOS.
MS-DOS and Linux have much in common, primarily because MS-DOS copied many ideas from UNIX. However, there are some very
fundamental differences, including:
Linux is a full-fledged multiuser, multitasking operating system, whereas MS-DOS is a single-user, single-tasking
operating system.
Despite being a more complex and powerful OS, Linux has a much weaker interface with the keyboard, and the quality of
its full-screen command-mode applications is inferior to that of DOS.
MS-DOS does not have built-in security concepts such as file-ownership and permissions, which are fundamental to
Linux.
Linux has an inverse tree-like filesystem in which all directories and files branch from a single directory, i.e.,
the root directory, and its subdirectories. MS-DOS can have multiple, independent root directories, such as A:, C:,
D:, etc.
Linux uses forward slashes "/" to separate directories, whereas MS-DOS uses backslashes "\" for the same purpose.
Linux filenames can contain up to 255 characters. MS-DOS filenames are limited to eight characters plus a three-character
extension and have restrictions on allowable characters. Also, filenames are case-sensitive in Linux, whereas they are
not in MS-DOS.
Linux has a richer command line utilities set than does MS-DOS, with a much greater number of commands and individual
commands having greater power, flexibility and ease of use. Commands are case-sensitive in Linux, but they are not in
MS-DOS.
Although Linux and MS-DOS both have pipes and I/O redirection, the MS-DOS pipes use a completely different -- and
inferior -- implementation: because DOS is single-tasking, it runs the first command to completion, storing its output
in a temporary file, and only then runs the second command with that file as input.
The great success of MS-DOS led to the development of several similar operating systems, including DR-DOS, FreeDOS, OpenDOS
and PC-DOS. Development of FreeDOS was begun in 1994 by Jim Hall, then a physics student at the University of Wisconsin-River
Falls. His motivation was Microsoft's announcement that it would stop supporting MS-DOS because of its impending replacement
by Windows 95.
Like MS-DOS, FreeDOS is lean and robust, and it can run on old hardware and in embedded systems. A major improvement
as compared with MS-DOS is the addition of options to the commands. FreeDOS is released under the GPL (although some software
in the distribution is covered by other licenses).
Because Linux was originally developed on PCs and at a time when MS-DOS was the dominant PC operating system, a variety
of tools were developed to help developers and users bridge the gap between the two operating systems. Among them is
dosemu, a DOS emulator which is included with Red Hat and other distributions and on which it is possible to run DOS
programs. Emulators are also available for running DOS on other versions of Unix, even on non-x86 processors.
mtools is a collection of utilities that make it easy to access an MS-DOS floppy disk from Linux by merely inserting
it into the floppy disk drive and without having to use any mount commands (which can be tricky for inexperienced
users). Included in mtools are more than 20 commands, all of which are identical to their MS-DOS counterparts except that
the letter m is added to the start of each of their names. For example, the MS-DOS command type a:\file1.txt
to display the contents of file1.txt that is located on a floppy disk would become mtype a:/file1.txt (mtools
commands use forward slashes instead of backslashes).
Although it is widely believed that MS-DOS is an antiquated, dead operating system with few features and capabilities, this
is far from correct. In fact, although not generally publicized, MS-DOS is still used today by numerous businesses and individuals
around the world. It has survived for so long because it is robust, relatively simple, and continues to get the job done with
a minimum of maintenance.
DOS is one of the most secure operating systems in existence. Applications for DOS do not "dial
home" and do not perform automatic updates, which are essentially a backdoor. At the same time, the quality of
applications is extremely high. Some components of MS Office, such as MS Word, exist for DOS.
DOS is still one of the best ways to run recovery programs, low-level disk utilities, system BIOS flashing, and
diagnostics. It is used in several popular products, with Norton Ghost 2003
probably the most popular.
In many cases, it was not DOS itself that was the limiting factor in system performance; rather, it was the hardware,
including small memories, slow CPUs and slow video cards. The capabilities of DOS, in fact, continued to increase
for several years after Microsoft Windows 95 became widespread. This was a result of continuing advances in hardware
support and the introduction of new or improved utilities and applications. The most active segments were not MS-DOS but other DOS
variants, particularly IBM PC DOS 7, DR-DOS and FreeDOS.
Due to IBM's efforts, PC DOS usage actually rose after 1995, and version 7 is still used in some parts of Eastern Europe and Asia.
It is probably the highest-quality version of DOS in existence. PC DOS 7.1 fully supports FAT32 and LBA. Its latest publicly available build is from December 2003, available from http://www.ibm.com/systems/management/sgstk.html
DOS will be around for many years into the future, not only because of the continued existence of legacy applications
and of an extremely rich, high-quality toolchain that even today is in some segments richer than the Linux toolchain, but also
because of the security concerns that are now widespread in the Windows environment.
The main area of growth will most likely be simple embedded applications,
for which DOS is attractive because of its extremely small size, very reliable operation, well-developed toolset
including high-quality C compilers, and
zero cost (in the case of DR-DOS and FreeDOS).
(cnet.com) Posted by BeauHD on Monday October 14,
2019 @10:10PM from the nostalgia-blast dept. The latest update from Internet Archive
brings thousands of MS-DOS games from the '90s like 3D Bomber, Zool and Alien Rampage. CNET
reports: On Sunday, Internet Archive released 2,500 MS-DOS games that
includes action, strategy and adventure titles. Some of the games are Vor Terra, Spooky Kooky
Monster Maker, Princess Maker 2 and I Have No Mouth And I Must Scream. "This will be our
biggest update yet, ranging from tiny recent independent productions to long-forgotten big-name
releases from decades ago," Internet Archive software curator Jason Scott wrote on
the site's blog.
One game that might trigger a few memories is the 1992 action-adventure horror game
Alone in the Dark,
published by Infogrames. In the game, you can play private investigator Edward Carnby or family
member Emily Hartwood, who's investigating the suspicious death of Jeremy Hartwood in his
Louisiana mansion called Derceto, which is now supposedly haunted. Fighting against rats,
zombies and giant worms, you have to solve a number of puzzles to escape. Another retro game
included by Internet Archive is a 1994 title played on PCs and Amiga computers called
Mr. Blobby (a remake
of the SNES game Super Troll Islands). Players can choose from three different characters --
Mr. Blobby, Mrs. Blobby and Baby Blobby. The goal of the game is to color in the computer
screen by walking over it. Levels include climbing ladders, avoiding spikes and bouncing on
springs.
FreeDOS turns 25 years old: an origin story. The operating system's history is a
great example of the open source software model: developers working together to create
something.
That's a major milestone for
any open source software project, and I'm proud of the work that we've done on it over the past
quarter century. I'm also proud of how we built FreeDOS because it is a great example of how
the open source software model works.
For its time, MS-DOS was a powerful operating system. I'd used DOS for years, ever since my
parents replaced our aging Apple II computer with a newer IBM machine. MS-DOS provided a
flexible command line, which I quite liked and that came in handy to manipulate my files. Over
the years, I learned how to write my own utilities in C to expand its command-line capabilities
even further.
Around 1994, Microsoft announced that its next planned version of Windows would do away with
MS-DOS. But I liked DOS. Even though I had started migrating to Linux, I still booted into
MS-DOS to run applications that Linux didn't have yet.
I figured that if we wanted to keep DOS, we would need to write our own. And that's how
FreeDOS was born.
On June 29, 1994, I made a small announcement about my idea to the comp.os.msdos.apps
newsgroup on Usenet.
ANNOUNCEMENT OF PD-DOS PROJECT:
A few months ago, I posted articles relating to starting a public domain version of DOS. The
general support for this at the time was strong, and many people agreed with the statement,
"start writing!" So, I have
Announcing the first effort to produce a PD-DOS. I have written up a "manifest" describing
the goals of such a project and an outline of the work, as well as a "task list" that shows
exactly what needs to be written. I'll post those here, and let discussion follow.
While I announced the project as PD-DOS (for "public domain," although the abbreviation was
meant to mimic IBM's "PC-DOS"), we soon changed the name to Free-DOS and later FreeDOS.
I started working on it right away. First, I shared the utilities I had written to expand
the DOS command line. Many of them reproduced MS-DOS features, including CLS, DATE, DEL, FIND,
HELP, and MORE. Some added new features to DOS that I borrowed from Unix, such as TEE and TRCH
(a simple implementation of Unix's tr). I contributed over a dozen FreeDOS utilities.
By sharing my utilities, I gave other developers a starting point. And by sharing my source
code under the GNU
General Public License (GNU GPL), I implicitly allowed others to add new features and fix
bugs.
Other developers who saw FreeDOS taking shape contacted me and wanted to help. Tim Norman
was one of the first; Tim volunteered to write a command shell (COMMAND.COM, later named
FreeCOM). Others contributed utilities that replicated or expanded the DOS command line.
We released our first alpha version as soon as possible. Less than three months after
announcing FreeDOS, we had an Alpha 1 distribution that collected our utilities. By the time we
released Alpha 5, FreeDOS boasted over 60 utilities. And FreeDOS included features never
imagined in MS-DOS, including internet connectivity via a PPP dial-up driver and dual-monitor
support using a primary VGA monitor and a secondary Hercules Mono monitor.
New developers joined the project, and we welcomed them. By October 1998, FreeDOS had a
working kernel, thanks to Pat Villani. FreeDOS also sported a host of new features that brought
not just parity with MS-DOS but surpassed MS-DOS, including ANSI support and a print spooler
that resembled Unix lpr.
You may be familiar with other milestones. We crept our way towards the 1.0 label, finally
releasing FreeDOS 1.0 in September 2006, FreeDOS 1.1 in January 2012, and FreeDOS 1.2 in
December 2016. MS-DOS stopped being a moving target long ago, so we didn't need to update as
frequently after the 1.0 release.
Today, FreeDOS is a very modern DOS. We've moved beyond "classic DOS," and now FreeDOS
features lots of development tools such as compilers, assemblers, and debuggers. We have lots
of editors beyond the plain DOS Edit editor, including Fed, Pico, TDE, and versions of Emacs
and Vi. FreeDOS supports networking and even provides a simple graphical web browser (Dillo).
And we have tons of new utilities, including many that will make Linux users feel at home.
FreeDOS got where it is because developers worked together to create something. In the
spirit of open source software, we contributed to each other's work by fixing bugs and adding
new features. We treated our users as co-developers; we always found ways to include people,
whether they were writing code or writing documentation. And we made decisions through
consensus based on merit. If that sounds familiar, it's because those are the core values of
open source software: transparency, collaboration, release early and often, meritocracy, and
community. That's the open
source way !
I encourage you to download FreeDOS 1.2 and give it a try.
Microsoft co-founder Paul Allen died today from complications with non-Hodgkin's lymphoma. He was 65. Allen said
earlier this month
that he was being treated for the disease.
Allen was a childhood friend of Bill Gates, and together, the two started Microsoft in 1975. He left the company in 1983 while
being treated for Hodgkin's lymphoma and remained a board member with the company through 2000. He was first treated for non-Hodgkin's
lymphoma in 2009, before seeing it go into remission.
In a statement given to ABC News , Gates said he was "heartbroken by the passing of one of my oldest and dearest friends."
He went on to commend his fellow co-founder for his life after Microsoft:
From our early days together at Lakeside School, through our partnership in the creation of Microsoft, to some of our joint
philanthropic projects over the years, Paul was a true partner and dear friend. Personal computing would not have existed without
him.
But Paul wasn't content with starting one company. He channelled his intellect and compassion into a second act focused on
improving people's lives and strengthening communities in Seattle and around the world. He was fond of saying, "If it has the
potential to do good, then we should do it." That's the kind of person he was.
Paul loved life and those around him, and we all cherished him in return. He deserved much more time, but his contributions
to the world of technology and philanthropy will live on for generations to come. I will miss him tremendously.
Microsoft CEO Satya Nadella said Allen's contributions to both Microsoft and the industry were "indispensable." His full statement
is quoted below:
Paul Allen's contributions to our company, our industry, and to our community are indispensable. As co-founder of Microsoft,
in his own quiet and persistent way, he created magical products, experiences and institutions, and in doing so, he changed the
world. I have learned so much from him -- his inquisitiveness, curiosity, and push for high standards is something that will continue
to inspire me and all of us at Microsoft. Our hearts are with Paul's family and loved ones. Rest in peace.
In a memoir published in 2011, Allen says
that he was responsible for naming Microsoft and creating the two-button mouse. The book also portrayed Allen as
going under-credited for his
work at Microsoft, and Gates as having taken more ownership of the company than he deserved. It created some drama when it arrived,
but the two men ultimately appeared to remain friends,
posing for a photo together two years later.
After leaving Microsoft, Allen became an investor through his company Vulcan, buying into a diverse set of companies and markets.
Vulcan's current portfolio ranges from the Museum of Pop Culture in Seattle, to a group focused on using machine learning for climate
preservation, to Stratolaunch, which is
creating a spaceplane . Allen's investments and donations made him a major name in Seattle, where much of his work was focused.
He recently
funded a $46 million building in South Seattle that will house homeless and low-income families.
Both Apple CEO Tim Cook and Google CEO Sundar Pichai called Allen a tech "pioneer" while highlighting his philanthropic work in
statements on Twitter. Amazon CEO Jeff Bezos said Allen's work "inspired so many."
Allen has long been the owner of the Portland Trail Blazers and Seattle Seahawks as well. NFL Commissioner Roger Goodell said
Allen "worked tirelessly" to "identify new ways to make the game safer and protect our players from unnecessary risk." NBA Commissioner
Adam Silver said Allen "helped lay the foundation for the league's growth internationally and our embrace of new technologies."
He also launched a number of philanthropic efforts, which were later combined under the name Paul G. Allen Philanthropies. His
"philanthropic contributions exceed $2 billion," according to Allen's own website, and he had committed to giving away the majority
of his fortune.
Allen's sister, Jody Allen, wrote a statement on his family's behalf:
My brother was a remarkable individual on every level. While most knew Paul Allen as a technologist and philanthropist, for
us he was a much loved brother and uncle, and an exceptional friend.
Paul's family and friends were blessed to experience his wit, warmth, his generosity and deep concern. For all the demands
on his schedule, there was always time for family and friends. At this time of loss and grief for us – and so many others – we
are profoundly grateful for the care and concern he demonstrated every day.
Some of Allen's philanthropy has taken a scientific bent: Allen founded the
Allen Institute for Brain Science in 2003, pouring
$500 million into the non-profit that aims to give scientists the tools and data they need to probe how the brain works. One recent project,
the Allen Brain Observatory , provides an open-access "catalogue
of activity in the mouse's brain," Saskia de Vries, senior scientist on the project,
said in a video . That kind of data is key to piecing together
how the brain processes information.
In a statement emailed to The Verge, The Allen Institute's President and CEO Allan Jones said:
Paul's vision and insight have been an inspiration to me and to many others both here at the Institute that bears his name,
and in the myriad of other areas that made up the fantastic universe of his interests. He will be sorely missed. We honor his
legacy today, and every day into the long future of the Allen Institute, by carrying out our mission of tackling the hard problems
in bioscience and making a significant difference in our respective fields.
Man what a shock! I was lucky enough to be working at a Seattle startup that Paul bought
back in the 90s (doing VoIP SOHO phone systems). He liked to swing by the office on a regular
basis as we were just a few blocks from Dicks hamburgers on Mercer St (his favorite). He was
really an engineer's engineer. We'd give him a status report on how things were going and
within a few minutes he was up at the white board spitballing technical solutions to ASIC or
network problems. I especially remember him coming by the day he bought the Seahawks. Paul
was a big physical presence ( 6'2" 250lbs in those days ), but he kept going on about how
after meeting the Seahawks players, he never felt so physically small in his life. Ignore the
internet trolls. Paul was a good guy. He was a humble, modest, down-to-earth guy. There was
always a pick-up basketball game on his court on Thursday nights. Jam sessions over at his
place were legendary (I never got to play with him, but every musician that I know that
played with him was impressed with his guitar playing ). He left a huge legacy in the pacific
northwest. We'll miss you Paul!
The book Paul Allen wrote stops short of a full account, but gives the impression that Bill Gates
was so combative that Paul Allen left the company because interacting with him was bad for his
health.
Quotes from the book, Idea Man
[amazon.com] by Paul Allen.
Page 49:
THREE DECADES AFTER teaching Bill and me at Lakeside, Fred Wright was asked what he'd
thought about our success with Microsoft. His reply: "It was neat that they got along well
enough that the company didn't explode in the first year or two."
Page 96:
When Bill pushed on licensing terms or bad-mouthed the flaky Signetics cards, Ed thought
he was insubordinate. You could hear them yelling throughout the plant, and it was quite a
spectacle - the burly ex-military officer standing toe to toe with the owlish prodigy about
half his weight, neither giving an inch.
Page 177:
Bill was sarcastic, combative, defensive, and contemptuous.
Page 180:
For Bill, the ground had already begun shifting. At product review meetings, his scathing
critiques became a perverse badge of honor. One game was to count how many times Bill
confronted a given manager; whoever got tagged for the most "stupidest things" won the
contest. "I give my feedback," he grumbled to me, "and it doesn't go anywhere."
He used to have the nickname "Doctor NetVorkian" because many of the things he invested in
promptly tanked in one way or another after his investment. He had a lot of bad luck with his
investments.
For those who don't understand the joke, a certain Dr. Kevorkian became notorious for
helping ill patients commit suicide.
But you can wipe Windows off your hard drive, so I don't get your point. Paul Allen was
a great guy in many, many ways.
Agreed. Even if you could "blame" him for all or part of Windows, he did start the
Museum of Pop
Culture [wikipedia.org]. If you are ever in Seattle, it is a must see. I mean, they have
what is probably the best Star Trek museum display anywhere (which is saying a lot since the
Smithsonian has a very nice one as well), including most of the original series set pieces
and I believe one of the only actual Enterprise models used for filming. In my mind, that
gives him a great deal of geek cred. Plus, as I under
I knew someone would say that. You are right. I won't. But he won't either. He was a
patent troll. Oh but: RIP and thoughts and prayers, right? He was a great guy and will be
missed.
Posted by EditorDavid on Sunday December 25, 2016 @02:56PM from the long-term-projects dept.
Very long-time Slashdot reader Jim Hall
-- part of GNOME's board of directors -- has a Christmas gift. Since 1994 he's been overseeing
an open source project that maintains a replacement for the MS-DOS operating system, and has just
announced the release of the "updated, more modern"
FreeDOS 1.2 !
[Y]ou'll find a few nice surprises. FreeDOS 1.2 now makes it easier to connect to a network.
And you can find more tools and games, and a few graphical desktop options including OpenGEM.
But the first thing you'll probably notice is the all-new installer that makes it much easier
to install FreeDOS. And after you install FreeDOS, try the FDIMPLES program to install new programs
or to remove any you don't want. The official announcement is also available at the
FreeDOS Project blog.
FreeDOS also lets you play classic DOS games like Doom , Wolfenstein 3D ,
Duke Nukem , and Jill of the Jungle -- and today marks a very special occasion,
since it's been almost five years since the release of FreeDOS 1.1. "If you've followed FreeDOS,
you know that we don't have a very fast release cycle," Jim
writes on his blog . "We just don't
need to; DOS isn't exactly a moving target anymore..."
I have been looking around for varieties of PC-DOS 7.1
Currently I have several builds:
1.10, 1.11, 1.19, 1.26, 1.28, 1.29, 1.30, 1.32 and one earlier one (no built-in version).
All of these have ibmbio.com, ibmdos.com, and command.com.
1.19 is very common (ghost 2003), but only has the kernel files and some pcdos 2000 files.
1.26 also has himem.sys
1.28 has himem.sys and fdisk32.com, but these come from different copies of 1.28
1.32 is the most complete set, from the server script kit. MSCDEX is not specifically identified as 1.32, but it differs
from the PC DOS 2000 version (in two bytes), so it is counted as first appearing here.
For PCDOS, I have modified files like bootnt4/bootw98 to install Windows NT, 9x, PC-DOS, PCDOS71, and MSDOS622 sectors.
These files write new boot sectors. You can use bootpart or my modified ibmpart to move the necessary files into place.
I installed all of these boot sectors onto a vm, and then reverted to pcdos71. The vm rebooted perfectly.
I've also found a proggie in
http://omniplex.om.f...os/pcdosasm.htm like dos622.com. It tells you how to change the version to any pcdos or msdos
number (even things like pcdos 5.82). Still, it's handy since you can test msdos 6.22 under Windows 2000, without the
need for a virtual machine. (dos622 command.com, and then run commands from there). I made versions for all pcdos and
msdos (ibm320, 330, 400, 500, 502, 600, 630, 700, 710), and msdos (500, 600, 620, 622, 700, 710, 800). Other dos versions
like 6.21 and 6.10 are really 6.20 and 6.00 respectively. There's room to handle dos versions like 20.45 etc. Beats
setver.
"PC DOS 7.1, however, fully supports FAT32 and LBA. Its latest publicly available build is from December 2003 - new enough
in my humble opinion. Available from http://www.ibm.com/systems/management/sgstk.html"
The licensing is complicated, but AFAIK DR-DOS/OpenDOS 7.01, 7.02 and 7.03 (pre-DeviceLogics) are still free,
certainly for private use. Source is free for OpenDOS 7.02. DRDOS, Inc. can't change licensing for an ancestor of its
product that it never owned. If they could, Udo Kuhnt would be in a lot of trouble.
... ... ...
===
Snipped from the OpenDOS 7.01 source license:
* REDISTRIBUTION OF THE SOFTWARE IS PERMITTED FOR NON-COMMERCIAL
PURPOSES provided that the copyright notices, marks and these terms
and conditions in this document are duplicated in all such forms and
that the documentation, advertising materials, and other materials
related to such distribution and use acknowledge that the software was
developed by Caldera, Inc.
For the source code license grant, you may:
* modify, translate, compile, disassemble, or create derivative works
based on the Software provided that such modifications are for
non-commercial use and that such modifications are provided back to
Caldera except for those who have obtained the right from Caldera
in writing to retain such modifications; any modification, translation,
compilation, disassembly or derivative work used for commercial gain
requires a separate license from Caldera;
[End snips]
So it looks to me like Udo is OK, as long as he doesn't charge money for his OS.
====
excerpt from OpenDOS 7.0.1 license.txt:
---------------------------------------------------------------------
Caldera grants you a non-exclusive license to use the Software in
source or binary form free of charge if
(a) you are a student, faculty member or staff member {...}
(b) your use of the Software is for the purpose of evaluating
whether to purchase an ongoing license to the Software. The evaluation
period for use by or on behalf of a commercial entity is limited
to 90 days; evaluation use by others is not subject to this 90 day
limit but is still limited to a reasonable period.
IBM PC DOS 2000 is the complete solution for the millions of personal computers in the world still running an older
version of DOS. This powerful new version of IBM PC DOS contains the newest revision of PC DOS (v7.0 revision 1.0) along
with an impressive list of features including Y2K compliance.
How do I get Virtual PC 2007 to access my CD/DVD drive for my IBM PC DOS 2000
VM? When I run DOSShell in PC DOS, I see only the A, B and C drives. I have
a floppy (A) and 2 hard drives (C, D). My DVD burner is E. The B drive in
DOSShell is irrelevant because it corresponds to nothing on my hardware; but
I'm still trying to get PC DOS to recognize my DVD drive.
IBM announced its new machine, the 5150, on 12 August 1981. It was no ordinary launch: the 5150 wasn't the 'big iron'
typical of Big Blue - it was a personal computer.
A 12-strong team was assembled under Don Estridge, the Development Director of the project,
codenamed 'Chess'. Lewis Eggebrecht was brought on board as Chief Designer.
Rather than create
the 5150 from scratch, Estridge's engineers used existing parts from a variety of other companies,
seemingly in marked contrast with IBM tradition. The company had always made a virtue of the fact that
it made the components used in its machines. When you bought an IBM computer, it had IBM's imprimatur
of quality through and through.
Re: VPC 2007 and PC DOS 2000 (CD/DVD drive not recognized by DOS VM)
On Tue, 25 Aug 2009 23:34:10 GMT, "CookyMonzta via WindowsKB.com"
<u50174@xxxxxx> wrote:
Quote:
>How do I get Virtual PC 2007 to access my CD/DVD drive for my IBM PC DOS 2000
>VM? When I run DOSShell in PC DOS, I see only the A, B and C drives. I have
>a floppy (A) and 2 hard drives (C, D). My DVD burner is E. The B drive in
>DOSShell is irrelevant because it corresponds to nothing on my hardware; but
>I'm still trying to get PC DOS to recognize my DVD drive.
You need to install CD-ROM drivers. DOS does not come with them
preinstalled.
The easiest way is to use the VMAdditions from VPC2004, you can
download them from my website.
"Thirty years ago, on July 27 1981,
Microsoft bought the rights for QDOS
(Quick and Dirty Operating System) from Seattle Computer Products (SCP) for $25,000. QDOS, otherwise known as 86-DOS,
was designed by SCP to run on the Intel 8086 processor, and was originally thrown together in just two months for a
0.1 release in 1980 (thus the name). Meanwhile, IBM had planned on powering its first Personal Computer with CP/M-86,
which had been the standard OS for Intel 8086 and 8080 architectures at the time, but a deal could not be struck with
CP/M's developer, Digital Research.
IBM then approached Microsoft, which already had a few years of experience under its belt with M-DOS, BASIC, and
other important tools - and as you can probably tell from the landscape of the computer world today, the IBM/Microsoft
partnership worked out rather well indeed."
Osgeld:
what a half assed summary, and it was not the IBM/Microsoft partnership that did shit, its the MS licencing agreement
that allowed MS to sell to other people than IBM that made a huge fucking difference when the clones came in and
obliterated IBM at their own game
Chemisor:
And just remember how WordPerfect 5.1 met all your word processing needs in less than 640k, while OpenOffice
writer needs 640M to do it.
Rockoon:
Microsoft was already well established in the market by that point. It was hard to find a machine that did not
have a Microsoft BASIC baked into a ROM chip, and even harder to find one that didn't rely on any Microsoft BASIC
at all. Everyone used Microsoft.
IBM was doing business with an already established partner when they contracted Microsoft for an OS.
dintech:
The MS-DOS acronym always made me wonder. If QDOS was Quick and Dirty Operating System, then surely MS-DOS
is Microsoft Dirty Operating System. It's a weird way to brand your product.
harrkev:
Well, I remember when I was a kid, the computer world was very fragmented. Apple was incompatible with Atari
was incompatible with Commodore was incompatible with IBM. Need I mention the other minor players, such as Franklin,
Acorn, TI, Sinclair, etc.? Great game came out? Odds are it won't run on the system that YOU have. As much as I
generally dislike the major players, at least there are only three major platforms that you have to develop for.
In fact, you can develop a game for only one market, and still have the opportunity to make quite a bit of money.
IBM chose to use 5" double-density soft-sectored disks with 40 tracks, 8 sectors per track, and 512 bytes per sector.
This gave their disk a total capacity of 163,840 bytes, known as the "160k disk".
In the spring of 1981 I left Seattle Computer to work for Microsoft. My first task was to help put the finishing
touches on the adaptation of DOS to the IBM PC hardware. This work was generally driven by bug reports and feature requests
from IBM, and I stuck to doing the work I was asked to do. But eventually I got up the nerve to ask: Why were there
only 8 sectors per track on the IBM floppy disk?
We had worked with many disk formats at Seattle Computer, and I had gotten deep into the nitty-gritty of how they
were laid out. A 5" disk spun at 300 RPM, or 0.2 seconds per revolution. The double-density bit rate was 250 kbps, meaning
there was time for 50,000 raw bits, or 6250 bytes, in one revolution. There was considerable overhead for each sector,
something like 114 bytes. But IBM was only using (512 + 114) * 8 = 5008 bytes of the track. Adding a 9th sector on the
track would increase this to 5634 bytes, still far less than the total available. You could almost fit a 10th sector,
but not quite (or maybe you could reduce the overhead and make 10 fit!).
IBM's response was something like "Oh, really?" They had never done the calculations. It was also too late to put
a change like this into the product (it was probably 2 – 3 months before the PC shipped). They said they would save
it as a new, upgraded feature for DOS 1.1. The first IBM PCs would ship with the 160k format.
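Paterson's arithmetic is easy to verify; a small sketch, with the numbers taken directly from the text above:

```python
# Track-capacity arithmetic for the IBM PC's 5.25" double-density floppy.
RPM = 300                      # rotation speed: 0.2 seconds per revolution
BIT_RATE = 250_000             # double-density data rate, bits per second
SECTOR_DATA = 512              # data bytes per sector
SECTOR_OVERHEAD = 114          # approximate per-sector overhead (IDs, gaps, CRCs)

bits_per_rev = BIT_RATE * 60 // RPM      # 50,000 raw bits per revolution
bytes_per_track = bits_per_rev // 8      # 6250 bytes of raw track capacity

def track_usage(sectors):
    """Bytes of the track consumed by a given sector count."""
    return (SECTOR_DATA + SECTOR_OVERHEAD) * sectors

print(bytes_per_track)    # 6250
print(track_usage(8))     # 5008 -> IBM's original 8-sector layout
print(track_usage(9))     # 5634 -> a 9th sector still fits comfortably
print(track_usage(10))    # 6260 -> just over capacity, as the text notes
```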
More Formats
IBM refreshed the Personal Computer line around 6 months after the initial release. This not only included DOS 1.1,
but now the floppy disk drives could use both sides of the disks, doubling the capacity.
For compatibility, they still had the single-sided format with 8 sectors per track – the 160k disk. But now older
PCs (with single-sided disks) could upgrade to DOS 1.1 and use the 9-sector format – the 180k disk. And of course new
machines would want to use the double-sided 9-sector format – the 360k disk. For some reason, IBM also supported a double-sided
8-sector format – the 320k disk – which served no useful purpose.
Two years later IBM introduced the Personal Computer AT, based on the Intel 286 microprocessor. With it came the
high-capacity 5" floppy disk. It basically gave the 5" disk the same specifications as the older 8" disks – doubling
the data rate and spinning the disk at the faster 360 RPM. Capacity increased to 1.2 MB, the same as the 8" disks.
Eventually the 5" floppy disk was replaced by the 3.5" disk with its 1.44 MB capacity. I still see a few of my computers
around the office with 3.5" disk drives, but their time has passed as well.
An important design parameter for the OS was performance, which drove my choice for the file system. I had learned a
handful of file management techniques, and I spent some time analyzing what I knew before making a choice. These were
the candidates:
North Star DOS and the UCSD p-system used the simplest method: contiguous file allocation. That is, each file occupies
consecutive sectors on the disk. The disk directory only needs to keep track of the first sector and the length for
each file – very compact. Random access to file data is just as fast as sequential access, because it's trivial to compute
the sector you want. But the big drawback is that once a file is boxed in by other files on the disk, it can't grow.
The whole file would then have to be moved to a new spot with more contiguous free space, with the old location leaving
a "hole". After a while, all that's left are the holes. Then you have to do a time-consuming "pack" to shuffle all the
files together and close the holes. I decided the drawbacks of contiguous allocation were too severe.
UNIX uses a clever multi-tiered approach. For small files, the directory entry for a file has a short table of the sectors
that make up the file. These sectors don't have to be contiguous, so it's easy to extend the file. If the file gets
too large for the list to fit in the table, UNIX adds a tier. The sectors listed in the table no longer reference data
in the file; instead, each entry identifies a sector which itself contains nothing but a list of sectors of file data.
If the file gets huge, yet another tier is added – the table entries each reference a sector whose entries reference
a sector whose entries identify the file data. Random access to file data is very fast for small files, but as the files
get larger and the number of tiers grows, it will take one or two additional disk reads just to find the location of
the data you really want.
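The extra-read cost of that tiered scheme can be sketched as follows; the slot counts here are illustrative assumptions, not actual UNIX constants:

```python
# Hypothetical model of tiered file mapping: a few direct slots in the
# directory entry, then single- and double-indirect blocks of pointers.
POINTERS_PER_BLOCK = 128   # assumed: 512-byte block / 4-byte sector numbers

def reads_to_locate(block_index, direct_slots=10):
    """Extra disk reads needed just to find where one file block lives."""
    if block_index < direct_slots:
        return 0                       # direct: pointer is already in memory
    block_index -= direct_slots
    if block_index < POINTERS_PER_BLOCK:
        return 1                       # read one indirect block first
    return 2                           # read two levels of indirect blocks

print(reads_to_locate(5))      # 0 -> small files are fast
print(reads_to_locate(5000))   # 2 -> huge files pay per access
```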
CP/M didn't track disk space directly in terms of sectors. Instead it grouped sectors together into a "cluster" or "allocation
unit". The original CP/M was designed specifically around 8" disks, which had 2002 sectors of 128 bytes each. By making
each cluster 8 sectors (1K), there were less than 256 clusters on a disk. Thus clusters could be identified using only
one byte. The directory entry for CP/M had a table of 16 entries listing the clusters in the file, so for a file of 16K or less
both random and sequential access were fast and efficient. But when a file exceeded 16K, it needed a whole new directory
entry to store an additional 16K of cluster numbers. There was no link between these entries; they simply contained
the same name and a number identifying which section of the file it represented (the "extent"). This led to a potential
performance nightmare, especially for random access. When switching between extents, the system had to perform its standard
linear search of the directory for a file of the correct name and extent. This search could take multiple disk reads
before the requested data was located.
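A toy model of the extent scheme, with purely illustrative directory contents, shows where that cost comes from:

```python
# Each directory entry maps one 16K "extent" of a file via 16 one-byte
# cluster slots; switching extents forces a linear directory search.
EXTENT_BYTES = 16 * 1024

directory = [                      # (name, extent number, cluster slots)
    ("REPORT.TXT", 0, list(range(1, 17))),
    ("OTHER.DAT",  0, list(range(17, 33))),
    ("REPORT.TXT", 1, list(range(33, 49))),
]

def find_extent(name, offset):
    """Linear scan for the entry covering a byte offset; returns the
    extent number and how many entries had to be examined."""
    wanted = offset // EXTENT_BYTES
    for scanned, (n, ext, clusters) in enumerate(directory, start=1):
        if n == name and ext == wanted:
            return ext, scanned
    return None

print(find_extent("REPORT.TXT", 0))       # extent 0 found immediately
print(find_extent("REPORT.TXT", 20000))   # extent 1: scanned 3 entries
```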
Microsoft Stand-Alone Disk BASIC used the File Allocation Table (FAT). Unlike all the other file systems, the FAT system
separates the directory entry (which has the file name, file size, etc.) from the map of how the data is stored (the
FAT). I will not give a detailed explanation of how that worked here as the system has been well documented, such as
my 1983 article An Inside Look at MS-DOS at http://www.patersontech.com/dos/Byte/InsideDos.htm.
Like CP/M, BASIC used a 1K cluster so that, once again, there were less than 256 on the standard 8" floppy disk of
the day. The FAT needs one entry per cluster, and for BASIC the entry needed to be just one byte, so the FAT fit within
two 128-byte sectors. This small size also meant it was practical, even with the limited memory of the time, to keep
the entire FAT in memory at all times.
To me, the big appeal of the FAT system was that you never had to read the disk just to find the location of the data
you really wanted. FAT entries are in a chain – you can't get to the end without visiting every entry in between – so
it is possible the OS would have to pass through many entries finding the location of the data. But with the FAT entirely
in memory, passing through a long chain would still be 100 times faster than a single sector read from a floppy disk.
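A minimal sketch of following a FAT chain held entirely in memory (the values are illustrative; real FAT12 packs two 12-bit entries into three bytes on disk):

```python
# Each FAT slot holds the number of the next cluster in the file.
# 0 marks a free cluster; END marks the last cluster of a file.
END = 0xFFF

fat = [0] * 16
fat[2], fat[3], fat[4] = 3, 4, END     # a file occupying clusters 2 -> 3 -> 4

def chain(first_cluster):
    """Walk the in-memory chain; no disk reads are needed to locate data."""
    clusters = []
    c = first_cluster
    while c != END:
        clusters.append(c)
        c = fat[c]
    return clusters

print(chain(2))   # [2, 3, 4]
```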
Another thing I liked about FAT was its space efficiency. There were no tables of sectors or clusters that might be
half full because the file wasn't big enough to need them all. The size of the FAT was set by the size of the disk.
When I designed DOS I knew that fitting the cluster number in a single byte, limiting the number of clusters to 256,
wouldn't get the job done as disks got bigger. I increased the FAT entry to 12 bits, allowing over 4000 clusters. With
a cluster size of as much as 16K bytes, this would allow for disks as large as 64MB. You could even push it to a 32K
cluster and 128MB disk size, although that large cluster could waste a lot of space. These disk sizes seemed enormous to
me in 1980. Only recently had we seen the first 10MB hard disks come out for microcomputers, and that size seemed absurdly
lavish (and expensive).
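The capacity arithmetic in that paragraph checks out directly:

```python
# A 12-bit FAT entry allows about 4K clusters; disk size scales with
# cluster size, per the figures in the text.
clusters = 2 ** 12                                      # 4096 clusters
max_disk_mb = {k: clusters * k // 1024 for k in (16, 32)}
print(max_disk_mb)   # {16: 64, 32: 128} -> 64MB and 128MB limits
```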
Obviously I'm no visionary. Disk size has grown faster than any other computer attribute, up by a factor of 100,000.
(Typical memory size is up by a factor of 30,000, while clock speed is only up 1000x.) Microsoft extended the FAT entry
to 16 bits, then to 32 bits to keep up. But this made the FAT so large that it was no longer kept entirely in memory,
taking away the performance advantage.
Anonymous:
Greetings,
I don't even know where to begin, given that I am writing to the creator of DOS. But, to start I'll say it is
good to see this blog, and I hope to see more from you here (I definitely bookmarked it).
On the DOS-v-CP/M issue, I take your side. What I suspect actually happened was just a case of sour grapes. DOS
got the market, CP/M lost it. That's all, I doubt any high principles were at work here. DRI just lost a lot of
money.
Technically, it should be obvious DOS was superior to CP/M. You emphasize the FAT, of course, but also better
device error handling comes to mind.
Another point you should take up, and I can't believe you haven't yet, is that later DR-DOS is a "rip-off" of
MS-DOS! Ironic, huh? Sort of how Apple sued Microsoft for stealing an idea Apple stole from Xerox PARC.
I noticed also the "feud" with this fool Wharton. An article I found on the subject mentioned him as a "pal"
of Kildall's. It is reasonable to expect he is biased on the matter, given this. You should be able to find the
article as it is titled something like "The man who could have been Bill Gates."
I'd also like to ask: DOS's FAT was obviously better than CP/M's extent system. What about the fact that DOS
later adopted Handles for file access, which are simpler to use than FCB's. Didn't CP/M-86 continue with FCB's instead
of adding support for Handles?
Despite an interest in this issue, I personally am still "turned off" by all this bickering and fighting over
money, fame, and whatever. I sometimes wonder why we can't all just get along and enjoy doing and tinkering with
cool and interesting things? Then, the answer comes to me. Money.
All told, I'm glad DOS won and CP/M lost. Granted, being an active (!) DOS hacker/hobbyist, and long-time user,
I would be expected to say this, but whatever. I'm glad you provided something that, perhaps incidentally, was better
than what we could have had. Well, it was either that or UCSD p-System :)
IBM released its Personal Computer in August 1981 running version 1.14 of SCP's
QDOS - but a few months later Microsoft produced MS-DOS 1.24, which then became the standard IBM
PC operating system. In March 1983, both MS-DOS 2.0 and the IBM PC/XT were released. The rest, as they say, is history.
MS-DOS 3.0 followed in 1984 (alongside the IBM PC/AT), and MS-DOS 4.0 with a mouse-powered, menu-driven interface arrived
in 1988. It's around this point that IBM's PC operating system, PC-DOS, began to diverge from MS-DOS - and of course,
come 1990, Microsoft released Windows 3.0, which would change Microsoft's focus forever. It's also around this time
that developers start to feel the pinch of the
640KB conventional memory
limit imposed by IBM's original hardware specifications.
Still, come 1991, MS-DOS 5.0 was released (along
with the much-loved
QBASIC),
and MS-DOS 6.0 with the much-maligned
DoubleSpace disk compression tool appeared in 1993. By this stage, IBM, Digital
Research (DR-DOS), and Microsoft were all leapfrogging each other with different
version numbers and features. IBM released PC-DOS 6.1, and MS followed quickly with
MS-DOS 6.2. IBM released PC-DOS 6.3 - and Novell trumped them all by releasing Novell
DOS 7.0. In 1995, however, Windows 95 was released with an underpinning of MS-DOS 7.0
(its FAT32 file system followed in the OSR2 update), and the history of DOS draws to a close. Every other
version of DOS was quickly squished out of existence by Windows 95, and it wouldn't
be until the late 90s and the emergence of the Dot Com Bubble that another command-line
OS would yet again rise to prominence in the shape of
Linux.
In 1987, I was asked by a magazine editor to write an article about data compression. I wrote a manuscript and an
accompanying program, sent them to the editor, and forgot about them. The next time I heard from him I was told that
the magazine was discontinued. So I uploaded my program, a simple implementation of the LZSS compression algorithm (see
below) to PC-VAN, a big Japanese BBS run by NEC. That was May 1, 1988.
Soon a number of hobby programmers gathered and began improving on that program. The project culminated in Kazuhiko
Miki's archiver LArc, which was fairly widely used in Japan. (Dr. Miki was then a medical specialist working at
a governmental office. I heard he left office and began work on freeware/shareware promotion.)
The LZSS algorithm is based on a very simple idea. Suppose I'm going to write "compression" here. But probably I've
already used that word before in this file. If I used that word 57 characters before, I might as well write "go 57 characters
backward, and read 11 characters," or <57,11> for short. In general, when I've already used the string of characters
among the recent 4096 characters, say, I encode the string by a <position,length> pair.
In Storer's [8] terminology, this is a sliding dictionary algorithm, analyzed first by Ziv and Lempel
[14] and then by Storer and Szymanski [9], among others.
Later versions of my LZSS implementations and Miki's LArc used binary search trees to make string search faster;
see Bell [1].
Incidentally, there are two distinct Ziv-Lempel (LZ) methods: sliding dictionary [14] and dynamic
dictionary [15] in Storer's [8] terminology. The LZW algorithm [12]
belongs to the latter. Most pre-LHarc compression tools, such as 'compress', 'ARC', and 'PKARC', used LZW.
During the summer of 1988, I wrote another compression program, LZARI. This program is based on the following
observation: each output of LZSS is either a single character or a <position,length> pair. A single character can be
coded as an integer between 0 and 255. As for the <length> field, if its range is 2 to 257, say, it can be coded as an
integer between 256 and 511. Thus there are 512 kinds of "characters," and the "characters"
256 through 511 are accompanied by a <position> field. These 512 "characters" can be Huffman-coded, or better still,
algebraically coded. The <position> field can be coded in the same manner. In LZARI I used adaptive algebraic
compression [13], [2] to encode the "characters," and static algebraic compression
to encode the <position> field. (There were several versions of LZARI; some of them were slightly different from
the above description.) The compression of LZARI was very tight, though rather slow.
Haruyasu Yoshizaki (Yoshi), a physician and guru hobby programmer, worked very hard to make LZARI faster.
Most importantly, he replaced LZARI's algebraic compression by dynamic Huffman coding.
His program, LZHUF, was very successful. It was much faster than my LZARI. As for compression ratio,
Huffman cannot beat algebraic compression, but the difference turned out to be very small.
Yoshi rewrote the compression engine of LZHUF in assembler, and added a nifty user interface. His archiver,
LHarc, soon became the de facto standard among Japanese BBS users. After Prof. Kenjirou Okubo, a mathematician,
introduced LHarc to the United States, it became world-famous. Other vendors began using similar techniques:
sliding dictionary plus statistical compression such as Huffman and Shannon-Fano. (I wondered why they used Shannon-Fano
rather than Huffman, which is guaranteed to compress tighter than Shannon-Fano. As it turned out, a then-popular book
on compression published in the U.S. contained a wrong description and buggy sample programs, such that Shannon-Fano outperformed
(buggy) Huffman on many files.)
Although LHarc was much faster than LZARI, we weren't quite satisfied with its speed. Because LHarc
was based on dynamic Huffman, it had to update its Huffman tree every time it received a character. Yoshi and I tried other
dynamic Huffman algorithms [5], [10], [11], but improvements were
not as great as we desired.
So I took a different step: replacing LHarc's dynamic Huffman by a static Huffman method.
Traditional static Huffman coding algorithm first scans the input file to count character distribution, then builds
Huffman tree and encodes the file. In my approach, the input file is read only once. It is first compressed by a sliding
dictionary method like LZARI and LHarc, and at the same time the distributions of the "characters" (see
above) and positions are counted. The output of this process is stored in main memory. When the buffer in memory is
full (or the input is exhausted), the Huffman trees are constructed, and the half-processed content of the buffer is
actually compressed and output.
In static Huffman, the Huffman tree must be stored in the compressed file. In the traditional approach this information
consumes hundreds of bytes. My approach was to standardize Huffman trees so that (1) each left subtree is no deeper
than its right counterpart, and (2) the leaves at the same level are sorted in ascending order. In this way the Huffman
tree can be uniquely specified by the lengths of the codewords. Moreover, the resulting table is again compressed by
the same Huffman algorithm.
To make the decoding program simpler, the Huffman tree is adjusted so that the codeword lengths do not exceed 16
bits. Since this adjusting is rarely needed, the algorithm is made very simple. It does not create optimal length-limited
Huffman trees; see e.g. [6] for an optimal algorithm. Incidentally, my early program had a bug here,
which was quickly pointed out and corrected by Yoshi.
The sliding dictionary algorithm is also improved by Yoshi using a "PATRICIA tree" data structure; see McCreight
[7] and Fiala and Greene [4].
After completing my algorithm, I learned that Brent [3] also used a sliding dictionary plus Huffman
coding. His method, SLH, is simple and elegant, but since it doesn't find the most recent longest match, the
distribution of match position becomes flat. This makes the second-stage Huffman compression less efficient.
On the basis of these new algorithms, Yoshi began to rewrite his LHarc, but it took him so long (remember
he was a busy doctor!) that I decided to write my own archiver. My archiver was quite recklessly named 'ar'. (Actually
I appended version numbers as in 'ar002' for version 0.02.) I should have named it 'har' (after my name), say, because
'ar' collides with the name of UNIX's archiver. I didn't want my program to compete with LHarc, but I wanted
many people to try the algorithm, so I wrote it in pure ANSI C. This is the reason 'ar' lacked many bells and whistles
necessary for a real archiver.
Note: The version of 'ar002' most often found in the U.S. had a bug. Line 24 of maketbl.c should
read, of course,
while (i <= 16) {
    weight[i] = 1U << (16 - i);
    i++;
}
Somehow the bug didn't show up when compiled by Turbo C.
Yoshi finally showed us his new archiver written in C. It was tentatively named LHx. He then rewrote the main
logic in assembler. Yoshi and I wrote an article describing his new archiver, which would be named LH, in the
January 1991 issue of "C Magazine" (in Japanese). The suffix 'arc' of LHarc was deliberately dropped because
the people who sold ARC did not want others to use the name.
Then we learned that for the new DOS 5.0, LH meant LoadHigh, an internal command. We decided to rename LH
to LHA.
Also, I was told that the algorithm described in Fiala and Greene [4] got patented ("Textual Substitution
Data Compression With Finite Length Search Windows," U.S. Patent 4,906,991, Mar. 6, 1990. Actually they got three patents!
The other two were: "Start, Step, Stop Unary Encoding for Data Compression," Application Ser. No. 07/187,697, and "Search
Tree Data Structure Encoding for Textual Substitution Data Compression Systems," Application Ser. No. 07/187,699.)
Furthermore, I learned that the original Ziv-Lempel compression method (Eastman et al., U.S. Patent 4,464,650, 8/1984)
and the LZW method (Welch, 4,558,302, 12/1985) were patented. I also heard that Richard Stallman, of the Free Software
Foundation, author of the EMACS editor and leader of the GNU project, ceased to use 'compress' program any more because
its LZW algorithm got patented.
Are algorithms patentable? (See [16].) If these patents are taken seriously,
all compression programs now in use may infringe some of them. (Luckily, not all claims made by those algorithm
patents seem to be valid.)
The foregoing is a slight modification of what I wrote in 1991. The year 1991 was a very busy year for me. In 1992,
I joined the faculty of Matsusaka University. This opportunity should have given me more free time, but as it turned
out I got ever busier. I stopped hacking on my compression algorithms; so did Yoshi.
Luckily, all good things in LHA were taken over, and all bad things abandoned, by the new great archiver
zip and the compression tool gzip.
I admire the efforts of Jean-loup Gailly and others.
A brief historical comment on PKZIP: At one time a programmer for PK and I were in close contact. We exchanged
a lot of ideas. No wonder PKZIP and LHA are so similar.
Another historical comment: LHICE and ICE are definitely not written by Yoshi (or me or anyone
I know). I think they are faked versions of LHarc.
[6] Lawrence L. Larmore and Daniel S. Hirschberg. A fast algorithm for optimal length-limited Huffman codes. Journal
of the Association for Computing Machinery, 37(3):464--473, 1990.
[9] James A. Storer and Thomas G. Szymanski. Data compression via textual substitution. Journal of the Association
for Computing Machinery, 29(4):928--951, 1982.
[15] Jacob Ziv and Abraham Lempel. Compression of individual sequences via variable-rate coding. IEEE Transactions
on Information Theory, IT-24(5):530--536, 1978.
"...instrumental in inexpensive, dependable communication..." -
Leonard Levine
PKZip began life humbly, as
PKARC, which was effectively
a clone of the ARC file compression
utility. Following legal action by SEA, ARC's developer, an agreement was reached allowing one last version of the
software using the old file format, and PKPAK was released to an adoring BBS public in
1988.
PKWare later developed its
own file format, which became immensely popular when the file format was put into the
public domain,
and BBS users began boycotting
the ARC program and using PK programs. The BBS world began to convert all the compressed files from ARC to ZIP format,
and ARC slipped into oblivion.
ARC had become popular amongst BBS users, who were paying large amounts of money to transfer files across (by today's
standards) painfully slow modems. Any improvement meant money in users' pockets, and tighter compression would mean
smaller files, which whipped across the
POTS much faster. Its
shareware status and
simplicity of use were vital to the everyday user, and businesses, recognising its power and versatility, began using
it to archive data for
better storage (necessary in the days of very expensive hard drives).
Phil Katz, the
brains behind the company and the product, had already dramatically speeded up the compression process when he developed
PKPAK and
PKUNPAK, and now began
to work on improving the
compression algorithm in 1988.
Phil was writing in
assembly
language, and used the best possible algorithms and the
386 processor's features where
appropriate. He was still using the
LZW algorithm, and whilst he had made improvements in compression and storage techniques, he had still not 'invented'
compression.
Phil's Zip algorithm was also used in
gzip, and remains one of the
most ported utilities. His improvements to the compression process, and the hatred felt by many toward the
creators of ARC, meant that the PKZip program became a standard
for the IBM PC under MS-DOS,
and even now remains the most popular file compression format in many Windows implementations. The MS-DOS version
is now at version 2.50, and includes support for
long file
names.
"When we were doing the original IBM PC -- and consider this was a brand new hardware and software design -- it was
hanging all the time," Bradley says. The only option engineers had to continue the work was to turn off the computer
and start it again. That required at least a minute to boot back up because of the Power On Self Test (POST) feature
that was built in. To this day, all Windows computers do a POST when they reboot; it's built into every ROM sequence.
But back in those early days, the need to reboot "would happen a lot," Bradley says. "Depending on what you were working
on, that could be daily, hourly, even every five minutes if you were working on a particular shortcut."
So Bradley came up with the Ctrl-Alt-Del keystroke combination -- three keys distant enough on the keyboard to make
it virtually impossible for someone to hit all three accidentally and simultaneously. "So, if you hit those keys, instead
of taking a minute to start up the PC again, it would be much quicker -- the equivalent of turning the machine off and
on without running POST."
The combination escaped from IBM labs and hit popular culture when application developers, in the days when programs
ran on diskette, decided to publish the combination to help users start their applications faster.
After that, end users got used to it, and the rest is, well, history.
At the 20th anniversary of the unveiling of the IBM PC -- that was in August 2001 -- Bradley appeared on a panel featuring
Bill
Gates and other industry luminaries. "I thought we were there to have fun," Bradley says, remembering the moment
that he joked with Gates about helping make his combo so well known.
Bradley laughs when recalling the joke. "I said, 'I may have invented it, but Bill [Gates] is the one who made it
famous.'" In return, he received a glare from Gates.
Nowadays,
Microsoft Windows intercepts the Control-Alt-Delete key combination and displays a pop-up window that allows users
to shut down the PC or shows what programs are running.
Bradley muses that it's funny "that I got famous for this, when I did so many other nifty and difficult things." Among
the Purdue Ph.D.'s accomplishments: He developed the ROM BIOS for the first IBM PC, led the development of the ROM BIOS
and system diagnostics on the PC/XT, and was project manager for several PS/2 models. In 1992, he began working on higher-performing
IBM systems built around the
PowerPC
RISC CPU.
Retired from IBM since 2004, Bradley has received engineering awards and will likely be immortalized as being part of
the original IBM PC team. But like it or not, his place in computer history as the father of the three-finger salute
is here to stay.
SAN FRANCISCO (MarketWatch) -- Among the engineers who laid the groundwork for the modern software industry,
Tim Paterson didn't exactly cash in on his technical prowess like many of his extraordinarily wealthy peers.
Microsoft (MSFT)
Chairman Bill Gates plans to step back from the company he founded in 1975, and focus more of his attention and vast
fortune -- estimated at just under $60 billion, according to Forbes -- on philanthropy, funding efforts like fighting
AIDS and educating impoverished children.
Meanwhile Paterson, the architect of the computer code that helped spawn that fortune, is busying himself with a small
firm operated out of his home in Issaquah, Wash., selling modified widgets designed to improve the performance of aging
digital-video recorders.
Paterson developed the DOS operating system in 1980, the year before it was sold to Microsoft for $50,000. DOS
then formed the core of what became Microsoft's Windows software, a flagship product that has stoked the Redmond, Wash.-based
company's colossal fortunes over the past quarter of a century.
'I don't remember ever thinking at one point how big [DOS] is going to be in the future.'
For Microsoft's first fiscal quarter ended in September, the unit that includes Windows contributed $4.1 billion in
sales.
Paterson Technology, which Paterson founded in 1989 as a diversion, will exceed $50,000 in sales this year, he
said -- the same amount that DOS fetched some 26 years ago. But more importantly, he added, the firm satisfies his undying
need to tinker.
Job at Microsoft
Paterson had developed DOS while working at Seattle Computer Products, and took a job with Microsoft around the
time it bought the operating system, helping "tune and spruce" what became known as "MS-DOS" in its first iterations,
he said. Paterson would work sporadically for Microsoft for the next 17 years on various products.
But DOS, and what it wrought, is his most prominent legacy. "I don't remember ever thinking at one point how big this
is going to be in the future," Paterson said this week.
By the early 1990s, however, Paterson recalls thinking: "Wow, there're 100 million copies of this thing out there,
and I wrote it originally."
Al Gillen, an analyst with IDC, said that the operating system proved historic. "DOS is fundamentally the product that
made Microsoft into the powerhouse it is today," he commented.
The imprint of DOS on Windows lasted until the XP version, released in 2001, according to Directions on Microsoft
analyst Michael Cherry. Cherry said that DOS also begat companions to Windows -- Microsoft's Office applications for
word processing and other tasks. The suite is part of a unit that also contributed $4.1 billion in sales in Microsoft's
September quarter. Office emerged, Cherry added, simply because the advent of computers with an operating system meant
"people needed something to do with them."
Paterson's focus, though, has shifted away from PC software. Paterson Technology began churning out devices in 2004
that let digital-video recorders, such as discontinued lines made by ReplayTV, continue to automatically change the
channels on satellite-TV receivers.
Running the firm out of his house, Paterson said that he's enamored with improving such widespread, existing technologies.
He is in the middle of a production run of 400 of his "translators" for digital-video recorders, and reports to be pleased
with the consistent demand for them.
"I've always enjoyed making little gizmos," he said.
'He knew my name'
Paterson said that he first met Gates and Microsoft co-founder Paul Allen in 1979.
Allen, who proved to be Paterson's primary contact at Microsoft, resigned from his executive role at the company in
1983, and has since become the owner of professional sports teams, among other high-profile pursuits. Allen usually
ranks highly on the annual Forbes list of the world's richest people -- with Gates typically at the top.
Paterson describes his relationship with Gates during his Microsoft days as mostly unaffected by his creation of
DOS. "I was just another developer," commented Paterson.
"If we ran into each other at an event, he knew my name," he said, "because of the really early days when we had had
some one-on-one time."
Later, Paterson's only contact with Gates occurred during the regular product presentations to the chief executive,
something required of all Microsoft units. Gates transitioned from Microsoft's chief to the role of chairman in 2000.
This summer, Gates plans to dramatically reduce the time he spends at Microsoft to one day per week. The rest of his
time will be dedicated to the Bill & Melinda Gates Foundation, the charitable organization he founded with his wife
in 2000.
Paterson said that Gates has had a unique impact on Microsoft as a technologically adept executive able to see
the big picture for the company's business direction.
"He had this impact as far as the overall vision; he knew what the other products were doing, and he had this vision
of where things should go," he added. "It seemed funny to me that any old product, it still got a review by the top
guy."
A Microsoft spokesman said that such reviews will necessarily have to be limited as Gates steps back. "It might
not be the same quantity; it'll be a little more focused in specific areas that he decides upon," the spokesman commented.
Following the launch of his own business in the mid-1980s, Paterson rejoined Microsoft from 1986 through 1988. He left
and then signed on again in 1990 to focus on the company's Visual Basic programming language. "I'd gotten married and
needed to make more money," he said.
Upon rejoining Microsoft, Paterson said he was shown that had he stayed on with the company after his initial hiring
in the early 1980s, his stock options would already have made him a millionaire.
"I had no idea what options really meant. It was hard to fathom," Paterson said. By the time he left Microsoft for good
in 1998, however, he noted that his compensation package was "enough to retire and fix me for life."
'Just another tech company'
In "Pirates of Silicon Valley," a 1999 film about the origins of Microsoft and rival Apple Inc., an actor playing Paul
Allen approaches a young man named Rod Brock at Seattle Computer Products and offers to buy DOS.
"Why?" a befuddled Brock responds. When Allen offers him $50,000 for it, Brock, wide-eyed, has to take a moment to
catch his breath.
The real-life Brock, Paterson said, was a "middle-aged, balding guy" who owned Seattle Computer at the time, and he
pointed out that portraying the deal as poorly considered wasn't quite accurate.
"Seattle Computer was really small, and it was a hardware company," Paterson elaborated. Such flat-fee arrangements
were common then, he said, and it was Microsoft's additional engineering strength that ultimately made the product so
valuable.
More irritating, Paterson said, is the lingering notion that he lifted much of DOS from an existing operating system
called CP/M. That notion apparently was reinforced in a 2004 book by Sir Harold Evans, called "They Made America."
Paterson filed a defamation suit against Evans and his publisher in 2005. But a judge dismissed Paterson's suit
earlier this year, citing that questions relating to DOS and CP/M had been widespread before the book.
According to Paterson, he had discussed making clarifications with Evans, but those talks ended when the suit was dismissed.
"He always said he wanted to get it right," Paterson said of Evans. "If he really did, he could have kept talking
to me."
These days, Paterson is focused on turning out his next batch of video-recorder translators, along with developing a
new model that won't require preordered parts from Taiwan.
Looking to "simplify" his life recently, Paterson shed all of his directly owned stocks, including those in Microsoft.
He allows that he still likely owns some stock in the company through mutual funds, though he doesn't pay any special
interest to mentions of Microsoft in the news.
"I'm coming around to reading about it as just another tech company," he said.
John Letzing is a MarketWatch reporter based in San Francisco.
Seattle Computer Products (SCP) introduced their 8086 16-bit computer system in October 1979, nearly two years before
the introduction of the IBM PC. By "computer system", I actually mean a set of three plug-in cards for the S-100 bus:
the 8086 CPU card, the CPU Support Card, and the 8/16 RAM. At that time SCP did not manufacture the S-100 chassis these
cards would plug into. That chassis and a computer terminal would be needed to make a complete working computer.
The S-100 Bus
Inside of a modern personal computer you'll find a motherboard crammed with the heart of the computer system: CPU,
memory, disk interface, network interface, USB interface, etc. Off in one corner you usually find four or five PCI slots
for add-in cards, but for most people no additional cards are needed (except maybe a graphics card).
In contrast, the S-100 Bus motherboard contained no components at all. A full-size motherboard had nothing but 18
– 22 card slots. Each slot accepted a 5" x 10" S-100 card with its 100-pin edge connector. A typical computer system
would have a card with a CPU, possibly several cards with memory, a card for a floppy disk interface, a card for serial
I/O, possibly a video card, etc.
This arrangement was started by MITS with the Altair 8800 computer, but eventually became standardized by the Institute
of Electrical and Electronics Engineers as IEEE-696. During the standardization process, the S-100 bus was extended
from being an 8-bit bus (typically used by 8080 and Z80 processors) to a 16-bit bus. It was this extension to 16-bits
that made the S-100 bus a suitable target for the 8086 computer from SCP.
SCP also wanted to take advantage of the vast range of existing cards for the S-100 bus. They didn't need to make
cards for disk interface, serial I/O, video, etc. since they were already available. Even the (empty) chassis itself
was a standard item. An existing computer owner could simply swap out his 8-bit CPU card and replace it with the 16-bit
SCP card, and all the hardware would work together (but the software was another matter).
The SCP 16-bit Computer System
The 8086 CPU card was an Intel 8086 microprocessor with dozens of logic chips needed to interface it to the S-100
bus. The signals and timings of the bus were built around the original 8-bit Intel 8080, and it took a lot of "glue"
logic to create the same signals with a different microprocessor (this was also true for the Z80). For the 8086, however,
there was also a significant added layer of working in "8-bit mode" or "16-bit mode". These modes were defined by the
IEEE standard so that a 16-bit CPU could be used with existing 8-bit memory with a performance penalty, or with new
16-bit memory at full speed. Essentially, the CPU card could request 16 bits for each memory access. If there was no
response, the card went into 8-bit mode: the microprocessor would be stopped momentarily while logic on the card ran
two consecutive 8-bit memory cycles to fetch the required 16 bits.
The SCP 8086 CPU had a mechanical switch on it that allowed the microprocessor to run at either 4 MHz or 8 MHz (while
a processor you get today would run at around 3,000 MHz = 3 GHz). When SCP first started sales, Intel could not yet
provide the 8 MHz CPU chip so early units could only be run at 4 MHz.
The CPU Support Card was a collection of stuff needed to make a working computer. The most important items were:
A boot ROM with debugger.
A serial port to connect to a computer terminal.
A time-of-day clock.
The 8/16 RAM had 16 KB (not MB!) of memory. It could operate in "16-bit mode" so the 16-bit processor could run at
full speed. In the early days, using four of these cards to build a system with 64 KB of memory would have been considered
plenty.
The only 16-bit software available when the system was first released was Microsoft Stand-Alone Disk BASIC. SCP did
not make a floppy disk controller, but supported disk controllers made by Cromemco and Tarbell.
Development Timeline
I earned my BS in Computer Science in June of 1978 and started work at SCP. Intel had just announced their new 8086 microprocessor,
and I was sent to a technical seminar to check it out. The May 1978 issue of IEEE Computer Magazine had published the
first draft of the S-100 standard which included 16-bit extensions, so I was excited about the possibility of making
a 16-bit S-100 computer. My primary duties at SCP were to improve and create new S-100 memory cards, which at that time
were SCP's only products. But I was given the go-ahead to investigate a computer design when I had the time.
By January of 1979 my design for the 8086 CPU and CPU Support Card had been realized in prototypes. I was able to
do some testing, but then hardware development needed to be put on hold while I wrote some software. Not even Intel
could provide me with an 8086 assembler to do the most basic programming, so I wrote one myself. It was actually a Z80
program that ran under CP/M, but it generated 8086 code. Next I wrote a debugger that would fit into the 2 KB ROM on
the CPU Support Card.
In May we began work with Microsoft to get their BASIC running on our machine. I brought the computer to their offices
and sat side by side with Bob O'Rear as we debugged it. I was very impressed with how quickly he got it working. They
had not used a real 8086 before, but they had simulated it so BASIC was nearly ready to go when I arrived. At Microsoft's
invitation, I took the 8086 computer to New York to demonstrate it at the National Computer Conference in the first
week of June.
There was a small setback when the June 1979 issue of IEEE Computer Magazine came out. The draft of the IEEE S-100
standard had changed significantly. I got involved in the standards process to correct some errors that had been introduced.
But I still had to make design changes to the 8086 CPU which required a whole new layout of the circuit board, with
a risk of introducing new errors. It turned out, however, that no mistakes were made so production was able to start
3 months later.
Evolution
The 16 KB memory card was eventually replaced by a 64 KB card. Intel introduced the 8087 coprocessor that performed
floating-point math, and SCP made an adapter that plugged into the 8086 microprocessor socket that made room for it.
Later SCP updated the 8086 CPU card so it had space for the 8087.
The software situation did not change until I wrote DOS for this machine, first shipping it in August 1980.
When IBM introduced their PC in August 1981, its 8088 processor used 8-bit memory, virtually identical in performance
to using 8-bit memory with the SCP 8086 CPU. Except IBM ran their processor at 4.77 MHz while the SCP machine ran at
8 MHz. So the SCP 8086 computer system was about three times faster than the IBM PC.
IBM also reintroduced memory limitations that I had specifically avoided in designing the 8086 CPU. For S-100 computers,
a low-cost alternative to using a regular computer terminal was to use a video card. The video card, however, used up
some of the memory address space. The boot ROM would normally use up address space as well. SCP systems were designed
to be used with a terminal, and the boot ROM could be disabled after boot-up. This made the entire 1 MB of memory address
space available for RAM. IBM, on the other hand, had limited the address space in their PC to 640 KB of RAM due to video
and boot/BIOS ROM. This limitation has been called the "DOS 640K barrier", but it had nothing to do with DOS.
Microsoft took full advantage of the SCP system capability. In 1988, years after SCP had shut down, they were still
using the SCP system for one task that only it could perform ("linking the linker"). Their machine was equipped with the
full 1 MB of RAM – 16 of the 64 KB cards. That machine could not be retired until 32-bit software tools were developed
for Intel's 386 microprocessor.
I set to work writing an operating system (OS) for the 16-bit Intel 8086 microprocessor in April of 1980. At that
point my employer, Seattle Computer Products (SCP), had been shipping their 8086 computer system (which I had designed)
for about 6 months. The only software we had for the computer system was Microsoft Stand-Alone Disk BASIC. "Stand-Alone"
means that it worked without an OS, managing the disk directly with its own file system. It was fast for BASIC, but
it wouldn't have been suitable for writing, say, a word processor.
We knew Digital Research was working on a 16-bit OS, CP/M-86. At one point we were expecting it to be available at the
end of 1979. Had it made its debut at any time before DOS was working, the DOS project would have been dropped. SCP
wanted to be a hardware company, not a software company.
I envisioned the power of the 8086 making it practical to have a multi-user OS, and I laid out a plan to the SCP board
of directors to develop a single-user OS and a multi-user OS that would share the same Application Program Interface
(API). This would be a big design job that would take time to get right – but we were already shipping our computer
system and needed an OS now. So I proposed to start with a "quick and dirty" OS that would eventually be thrown away.
Baseline Experience
I had graduated from college (BS in Computer Science) less than two years earlier. I spent the next year tentatively
in graduate school while also working at SCP. So I didn't have much experience in the computer industry.
This is not to say I had no experience at all. In college I had an IMSAI 8080 with a North Star floppy disk system.
I made my own peripherals for it, which included a Qume daisy-wheel printer with its own Z80-based controller that I
designed and programmed myself. I also designed my own Z80-based road-rally computer that used a 9-inch CRT video display
mounted in the glove box of my car. And my school work included writing a multitasking OS for the Z80 microprocessor
as a term project. The thrust of that project had been to demonstrate preemptive multitasking with synchronization between
tasks.
For SCP, my work had been ostensibly hardware-oriented. But the 8086 CPU had required me to develop software tools including
an assembler (the most basic tool for programming a new processor) and a debugger. These tools shipped with the product.
My hands-on experience with operating systems was limited to those I had used on microcomputers. I had never used a
"big computer" OS at all. All programming projects in high school and college were submitted on punched cards. On my
own computer I used North Star DOS, and at SCP we had Cromemco CDOS, a CP/M look-alike.
File System Performance
An important design parameter for the OS was performance, which drove my choice for the file system. I had learned a
handful of file management techniques, and I spent some time analyzing what I knew before making a choice. These were
the candidates:
North Star DOS and the UCSD p-system used the simplest method: contiguous file allocation. That is, each file occupies
consecutive sectors on the disk. The disk directory only needs to keep track of the first sector and the length for
each file – very compact. Random access to file data is just as fast as sequential access, because it's trivial to compute
the sector you want. But the big drawback is that once a file is boxed in by other files on the disk, it can't grow.
The whole file would then have to be moved to a new spot with more contiguous free space, with the old location leaving
a "hole". After a while, all that's left are the holes. Then you have to do a time-consuming "pack" to shuffle all the
files together and close the holes. I decided the drawbacks of contiguous allocation were too severe.
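The arithmetic that makes contiguous allocation so fast can be sketched in a few lines. This is purely an illustration (Python, with North Star's 256-byte sector size; the starting sector and byte offset are made-up values):

```python
SECTOR_SIZE = 256  # North Star DOS used 256-byte sectors

def sector_for_offset(start_sector, byte_offset):
    """With contiguous allocation, finding the sector that holds any
    byte of a file is a single arithmetic step from the directory
    entry -- no extra disk reads, no chain to follow."""
    return start_sector + byte_offset // SECTOR_SIZE

# A file starting at sector 120: byte 5000 is 19 sectors in, i.e. sector 139.
print(sector_for_offset(120, 5000))  # -> 139
```

The same one-line computation serves both sequential and random access, which is exactly why the method is fast; the cost, as described above, is the "holes" left behind when files move.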
UNIX uses a clever multi-tiered approach. For small files, the directory entry for a file has a short table of the sectors
that make up the file. These sectors don't have to be contiguous, so it's easy to extend the file. If the file gets
too large for the list to fit in the table, UNIX adds a tier. The sectors listed in the table no longer reference data
in the file; instead, each entry identifies a sector which itself contains nothing but a list of sectors of file data.
If the file gets huge, yet another tier is added – the table entries each reference a sector whose entries reference
a sector whose entries identify the file data. Random access to file data is very fast for small files, but as the files
get larger and the number of tiers grows, it will take one or two additional disk reads just to find the location of
the data you really want.
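The cost of those tiers can be made concrete with a small sketch. The geometry numbers here are illustrative only, not the actual UNIX parameters (real UNIX inodes had a specific mix of direct, indirect, and double-indirect entries):

```python
# Hypothetical geometry: 512-byte sectors, 4-byte sector numbers,
# 10 direct entries in the per-file table.
SECTOR_SIZE = 512
PTRS_PER_SECTOR = SECTOR_SIZE // 4   # 128 sector numbers per indirect sector
DIRECT = 10

def extra_reads(file_sectors):
    """Disk reads spent on indirect sectors before reaching the data."""
    if file_sectors <= DIRECT:
        return 0                     # direct table alone locates the data
    if file_sectors <= DIRECT + PTRS_PER_SECTOR:
        return 1                     # one indirect sector must be read first
    return 2                         # double-indirect: two reads before data

print(extra_reads(5), extra_reads(50), extra_reads(1000))  # -> 0 1 2
```

Small files cost nothing extra; once a file outgrows the direct table, every random access to an uncached region pays one or two reads of bookkeeping before the read you actually wanted.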
CP/M didn't track disk space directly in terms of sectors. Instead it grouped sectors together into a "cluster" or "allocation
unit". The original CP/M was designed specifically around 8" disks, which had 2002 sectors of 128 bytes each. By making
each cluster 8 sectors (1K), there were fewer than 256 clusters on a disk. Thus clusters could be identified using only
one byte. The directory entry for CP/M had a table of 16 entries listing the clusters in the file, so for a file of 16K or less
both random and sequential access were fast and efficient. But when a file exceeded 16K, it needed a whole new directory
entry to store an additional 16K of cluster numbers. There was no link between these entries; they simply contained
the same name and a number identifying which section of the file it represented (the "extent"). This led to a potential
performance nightmare, especially for random access. When switching between extents, the system had to perform its standard
linear search of the directory for a file of the correct name and extent. This search could take multiple disk reads
before the requested data was located.
Microsoft Stand-Alone Disk BASIC used the File Allocation Table (FAT). Unlike all the other file systems, the FAT system
separates the directory entry (which has the file name, file size, etc.) from the map of how the data is stored (the
FAT). I will not give a detailed explanation of how that worked here as the system has been well documented, such as
my 1983 article An Inside Look at MS-DOS at
http://www.patersontech.com/dos/Byte/InsideDos.htm.
Like CP/M, BASIC used a 1K cluster so that, once again, there were fewer than 256 clusters on the standard 8" floppy disk of the
day. The FAT needs one entry per cluster, and for BASIC the entry needed to be just one byte, so the FAT fit within
two 128-byte sectors. This small size also meant it was practical, even with the limited memory of the time, to keep
the entire FAT in memory at all times.
To me, the big appeal of the FAT system was that you never had to read the disk just to find the location of the data
you really wanted. FAT entries are in a chain – you can't get to the end without visiting every entry in between – so
it is possible the OS would have to pass through many entries to find the location of the data. But with the FAT entirely
in memory, passing through a long chain would still be 100 times faster than a single sector read from a floppy disk.
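Following a FAT chain held entirely in memory looks like this sketch. The table contents and the end-of-chain value are toy values (real FAT12 reserves a small range of entry values to mark end-of-chain):

```python
END = 0xFFF  # toy end-of-chain marker (real FAT12 uses a reserved range)

def chain(fat, start_cluster):
    """Walk a FAT chain that is already in memory. Every step is a
    table lookup, not a disk read, which is the performance point
    Paterson describes."""
    clusters = []
    c = start_cluster
    while c != END:
        clusters.append(c)
        c = fat[c]
    return clusters

# Toy FAT: a file occupies clusters 2 -> 5 -> 3, then the chain ends.
fat = {2: 5, 5: 3, 3: END}
print(chain(fat, 2))  # -> [2, 5, 3]
```

Even a long chain is just a loop over an in-memory table, hence the claim that it beats a single floppy sector read by a couple of orders of magnitude.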
Another thing I liked about FAT was its space efficiency. There were no tables of sectors or clusters that might be
half full because the file wasn't big enough to need them all. The size of the FAT was set by the size of the disk.
When I designed DOS I knew that fitting the cluster number in a single byte, limiting the number of clusters to 256,
wouldn't get the job done as disks got bigger. I increased the FAT entry to 12 bits, allowing over 4000 clusters. With
a cluster size of as much as 16K bytes, this would allow for disks as large as 64MB. You could even push it to a 32K
cluster and 128MB disk size, although that large cluster could waste a lot of space. These disk sizes seemed enormous to
me in 1980. Only recently had we seen the first 10MB hard disks come out for microcomputers, and that size seemed absurdly
lavish (and expensive).
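The size limits quoted above follow directly from the entry width, as this bit of arithmetic shows (ignoring the handful of reserved entry values in a real FAT):

```python
# A 12-bit FAT entry can name 2**12 = 4096 clusters (a few values
# are reserved in practice, hence "over 4000" in the text).
clusters = 2 ** 12

for cluster_kib in (16, 32):
    disk_mib = clusters * cluster_kib // 1024
    print(f"{cluster_kib}K clusters -> {disk_mib}MB disk")
# -> 16K clusters -> 64MB disk
# -> 32K clusters -> 128MB disk
```

So the 64MB and 128MB figures are simply (number of addressable clusters) times (cluster size); growing disks forced Microsoft to widen the entry to 16 and then 32 bits.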
Obviously I'm no visionary. Disk size has grown faster than any other computer attribute, up by a factor of 100,000.
(Typical memory size is up by a factor of 30,000, while clock speed is only up 1000x.) Microsoft extended the FAT entry
to 16 bits, then to 32 bits to keep up. But this made the FAT so large that it was no longer kept entirely in memory,
taking away the performance advantage.
Hardware Performance
On the few computer systems I personally used, I had experienced a range of disk system performance. North Star DOS
loaded files with lightning speed, while the CP/M look-alike Cromemco CDOS took much longer. This was not a file system
issue – the difference still existed with files less than 16K (just one CP/M "extent") and contiguously allocated.
North Star DOS did the best job possible with its hardware. Each track had 10 sectors of 256 bytes each, and it could
read those 10 sectors consecutively into memory without interruption. To read in a file of 8KB would require reading
32 sectors; the nature of the file system ensured they were contiguous, so the data would be found on four consecutive
tracks. When stepping from track to track, it would have missed the start of the new track and had to wait for it to
spin around again. That would mean it would take a total of 6.2 revolutions (including three revolutions lost to track
stepping) to read the 8KB. The 5-inch disk turned at 5 revolutions per second so the total time would be less than 1.3
seconds.
The standard disk with CP/M (CDOS) had a track with 26 sectors of 128 bytes each. CP/M could not read these sectors
consecutively. It used an interleave factor of 6, meaning it would read every sixth sector. The five-sector gap between
reads presumably allowed for processing time. An 8KB file would occupy about 2½ tracks, which, at 6 revolutions per
track (because of the interleave), would take about 17 revolutions of the disk to be read (including two revolutions
lost to track stepping). The 8-inch disk turned at 6 revolutions per second so the total time would be over 2.8 seconds.
This is more than twice as long as the North Star system which used fundamentally slower hardware.
At least part of the reason CP/M was so much slower was because of its poor interface to the low-level "device driver"
software. CP/M called this the BIOS (for Basic Input/Output System). Reading a single disk sector required five separate
requests, and only one sector could be requested at a time. (The five requests were Select Disk, Set Track, Set Sector,
Set Memory Address, and finally Read Sector. I don't know if all five were needed for every Read if, say, the disk or
memory address were the same.)
I called the low-level driver software the I/O System, and I was determined its interface would not be a bottleneck.
Only a single request was needed to read disk data, and that request could be for any number of sectors. This put more
of the work on the I/O System, but it allowed it to maximize performance. The floppy disk format did not use interleave,
and an 8KB file could be read from an 8-inch disk in 4½ revolutions, which is less than 0.8 seconds.
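The three timing figures given in this section all come from the same simple model: total revolutions (data plus revolutions lost to track stepping) divided by rotational speed. A quick check of the arithmetic, using the revolution counts stated in the text:

```python
def read_time(data_revs, step_revs, rps):
    """Seconds to read a file: revolutions spent on data, plus
    revolutions lost to track stepping, divided by disk speed."""
    return (data_revs + step_revs) / rps

# North Star: 32 sectors at 10/track read back-to-back -> 3.2 revs of
# data, 3 revs lost stepping across 4 tracks, at 5 rev/sec.
print(round(read_time(3.2, 3, 5), 2))        # ~1.24 s (under 1.3)

# CP/M (CDOS): ~2.46 tracks, but interleave 6 costs 6 revs per track,
# plus 2 revs lost stepping, at 6 rev/sec.
print(round(read_time(2.46 * 6, 2, 6), 2))   # ~2.8 s

# DOS I/O System: no interleave -> ~2.46 revs of data, 2 steps, 6 rev/sec.
print(round(read_time(2.46, 2, 6), 2))       # ~0.74 s (under 0.8)
```

The model confirms the point of the comparison: interleave, not raw hardware speed, is what made CP/M more than twice as slow as North Star, and removing it let DOS beat both.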
When hard disks were first introduced on the IBM PC/XT in 1983, the I/O System provided by IBM once again used an interleave
factor of 6. Some aftermarket add-on hard disks were available with interleave factor as low as 3. In 1983 I founded
Falcon Technology which made the first PC hard disk system that required no interleave. Once hard disks started having
built-in memory, interleave was completely forgotten.
CP/M Translation Compatibility
For DOS to succeed, it would need useful applications (like word processing) to be written for it. I was concerned that
SCP might have trouble persuading authors of application software to put in the effort to create a DOS version of their
programs. Few people had bought SCP's 16-bit computer, so the installed base was small. Without the applications, there
wouldn't be many users, and without the users, there wouldn't be many applications.
My hope was that by making it as easy as possible to port existing 8-bit applications to our 16-bit computer, we would
get more programmers to take the plunge. And it seemed to me that CP/M translation compatibility was what would make
the job as easy as possible. Intel had defined rules for translating 8-bit programs into 16-bit programs; CP/M translation
compatibility means that when a program's request to CP/M went through the translation, it would become an equivalent
request to DOS. My first blog entry explains this in more detail.
So I made CP/M translation compatibility a fundamental design goal. This required me to create a very specific Application
Program Interface that implemented the translation compatibility. I did not consider this the primary API – there was,
in fact, another API more suited to the 16-bit world and that had more capabilities. Both APIs used CP/M-defined constructs
(such as the "File Control Block"); the compatibility API had to, and I didn't see a reason to define something different
for the primary API.
I myself took advantage of translation compatibility. The development tools I had written, such as the assembler, were
originally 8-bit programs that ran under CP/M (CDOS). I put them through the translator and came up with 16-bit programs
that ran under DOS. These translated tools were included with DOS when shipped by SCP. But I don't think anyone else
ever took advantage of this process.
Gary Kildall's CP/M was the first general-purpose operating system (OS) for 8-bit computers. It demonstrated that it
was possible to pare down the giant operating systems of mainframes and minicomputers into an OS that provided the essential
functionality of a general-purpose Application Program Interface, while leaving enough memory for applications
to run. This was a radical idea.
Beyond this technical achievement is the success CP/M had in the marketplace. It stands alone as the catalyst that launched
the microcomputer industry as a successful business. Without those signs of success, companies like IBM wouldn't have
been tempted to enter the business and fuel its growth.
The Significance of the BIOS Interface
The concept of a software interface between separate programs or program components has been around for a long, long
time. The most obvious example is the Application Program Interface (API) that all operating systems provide to application
programs.
But interfaces can exist at other levels, not just between the OS and applications. Interfaces are used whenever two
components must interact, but the components are developed independently or may need to be changed independently.
Much has been made of the layers of interfaces used by CP/M. (John Wharton: "Gary's most profound contribution"; Harold
Evans: "truly revolutionary", "a phenomenal advance"; Tom Rolander: "supreme accomplishment", "originator of that layering
of the software".) The core of CP/M was the Basic Disk Operating System (BDOS). On one side of the BDOS was the API
exposed to application programs; on the other side was the Basic Input/Output System (BIOS) that connected the BDOS
to specific computer hardware.
Certainly the idea of these layers of interfaces was not new to the computer industry. For example, UNIX (like all operating
systems) provided an API for application programs, and connected to the hardware with an interface to low-level device
driver software. These layers are equivalent to CP/M's BDOS and BIOS, but much more sophisticated. So I am a bit
mystified as to what is so revolutionary about CP/M's design.
Of course, being the first microcomputer OS, CP/M was the first to put these layers on a microcomputer. So I guess in
distilling down the essence of an OS so it would fit on a microcomputer, we can give Kildall credit for not distilling
out too much and leaving out the interface layers. Except that he actually did, originally. As described in his memoirs,
quoted by Evans, in early versions of CP/M he had to rewrite the parts that manage the hardware "so many times that
the tips of my fingers were wearing thin, so I designed a general interface, which I called the BIOS." I read somewhere
that the BIOS interface was first added to version 1.3 of CP/M. It may well be that Kildall had not seen a device driver
or BIOS-level interface before. But the advantages that became obvious to him had been just as visible to his predecessors.
I equipped my first computer, an IMSAI 8080, with the North Star floppy disk system. It came with North Star DOS, which
was the first OS that I personally used. It, of course, had a low-level interface so it could be tailored to work with
any hardware. This is where my experience with interface layers began. I had not seen CP/M at the time.
CP/M & the DOS Connection
I can think of no specific technical innovations demonstrated by CP/M. The new idea was simply that you could do it
– you could actually put a general-purpose operating system on a microcomputer. It worked and the market loved it.
DOS was built on top of this general groundwork. Kildall distilled the OS to a minimal, useful set of capabilities that
would fit on a microcomputer. This simplified my task in setting the functionality of DOS. It was a matter of adding to
or subtracting from an existing & working set rather than developing the list from scratch.
DOS has been accused of being a "rip-off" or "knockoff" of the CP/M "design" or "architecture". I guess this may refer
to the fact that DOS has the same interface layers. Or it may refer to the similar function set. In that sense, since
Kildall picked an appropriate set of functions, any subsequent microcomputer OS would have the same ones and would be
some sort of "knockoff".
In his book They Made America (Little, Brown & Co., 2004), author Harold Evans revives claims that DOS is
based on the late Gary Kildall's CP/M, using words such as "slapdash clone" and "blatant copies". I sued the author
& publisher for making false and defamatory statements.
The case was dismissed last week shortly before it was to go to trial. The main reason this happened is because the
judge ruled that I am a "limited purpose public figure." This sets a very high bar of protection for free speech, leading
the judge to then rule that the book represented protected opinions.
Facts not in dispute
What may be most surprising about the issue is that there is no significant dispute on the actual relationship between
DOS and CP/M. The relationship is simply this: DOS implements the same Application Program Interface (API) as CP/M.
The API is how an application program (such as a word processor) asks the operating system to perform a task, such as
to read or write a disk file.
There is no suggestion that I copied any CP/M code when I wrote DOS. (To this day, I have never seen any CP/M code.)
And the internal workings of DOS are quite different. For example, unlike CP/M, DOS used the FAT (File Allocation Table)
system for organizing disk files, which made it much faster but meant floppy disks were not interchangeable between
CP/M and DOS.
One point of disagreement: In his memoirs (quoted by Evans), Kildall claims that I dissected CP/M to learn how it worked.
This is not true, and it doesn't even make sense. Since DOS worked so differently, there would have been nothing I could
learn from CP/M's internal workings to help in writing DOS.
What do I mean by "implement the same API"? Every operating system has basic functions like reading and writing disk
files. The API defines the exact details of how to make it happen and what the results are. For example, to "open" a
file in preparation for reading or writing, the application would pass the location of an 11-character file name and
the function code 15 to CP/M through the "Call 5" mechanism. The very same sequence would also open a file in DOS, while,
say, UNIX, did not use function code 15, 11-character file names, or "Call 5" to open a file.
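The idea of "implementing the same API" can be modeled in a few lines. This is a deliberately toy simulation of a function-code dispatch in the spirit of CP/M's "Call 5" mechanism; the handler bodies and strings are invented, not actual CP/M or DOS internals:

```python
def make_os(open_file_impl):
    """Build a toy OS entry point dispatched by function code,
    like CP/M's "Call 5". Function code 15 is Open File."""
    handlers = {15: open_file_impl}
    def call5(func_code, arg):
        return handlers[func_code](arg)
    return call5

# Two different systems, internally unrelated, exposing the same API.
cpm = make_os(lambda name: f"CP/M opened {name}")
dos = make_os(lambda name: f"DOS opened {name}")

# The very same request (code 15, an 11-character name) works on both.
print(cpm(15, "LETTER  TXT"))  # -> CP/M opened LETTER  TXT
print(dos(15, "LETTER  TXT"))  # -> DOS opened LETTER  TXT
```

That is the whole relationship Paterson describes: identical calling conventions and function codes on the outside, completely different machinery on the inside.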
Translation Compatibility
Since CP/M and DOS both had the same API, you would think that a program for one would run on the other, right? Wrong.
CP/M was only for 8-bit computers based on the 8080 or Z80 microprocessors. DOS was only for 16-bit computers based
on the Intel 8086 microprocessor. At the time DOS was written, there was a vast library of 8-bit CP/M programs, none
of which could run on 16-bit DOS computers.
While 8-bit programs could not run on 16-bit computers, Intel documented how the original software developer could mechanically
translate an 8-bit program into a 16-bit program. Only the developer of the program with possession of the source code
could make this translation. I designed DOS so the translated program would work the same as it had with CP/M – translation
compatibility. The key to making this work was implementing the CP/M API.
So sue me
When you boil it all down, the thrust of Evans' story is that Kildall and his company, Digital Research (DRI), should
have sued for copyright infringement and DOS would be dead. CP/M would have then been chosen for the IBM PC (because
it was the only choice left), and the history of the PC would be much different.
While DRI was free to sue for copyright infringement, the likely success of such action is still controversial at best.
The question was whether the published API, used by all the applications that ran under CP/M, was protected from being
implemented in another operating system. There are experts who say no, and there are experts who say maybe, but from
what I can tell as of 2007 there has yet to be a successful, finalized case.
I say that because Evans & I each brought our own copyright experts into our negotiations to settle the case. Mine was
Lee Hollaar of the University of Utah, while Evans used Douglas Lichtman of the University of Chicago. Lichtman provided
a handful of case citations to show "that the issues here are not clear in either direction, and, in a hypothetical
copyright suit filed by Kildall, he might have won and he might have lost." But the only case that seemed to support
DRI's side was a preliminary injunction in 2003 (Positive Software v. New Century Mortgage). We didn't think it really
applied, and as a preliminary injunction it hadn't gone through a full trial, let alone appeal. I would suppose this
is the best he had. Hollaar said to me "I feel that it's clear from the cases under similar circumstances that you would
have won." I have wished my suit against Evans could have really become a copyright suit about CP/M & DOS so this could
be settled.
If tiny Seattle Computer Products had been sued by Digital Research back when DOS was new, I'm sure we would have caved
instead of fighting it. I would have changed DOS so the API details were nothing like CP/M, and translation compatibility
would have been lost. But in the end that would have made absolutely no difference. No one ever used or cared about
translation compatibility. I had been wrong to think it was a valuable feature.
Lichtman mentioned to me that he was working for SCO in their lawsuits against Linux. If I understood him correctly,
SCO was trying to establish that an API can be protected by copyright, so it could be used to collect royalties on Linux,
whose API is the same as or similar to UNIX.
An overlooked court case in Seattle has helped restore the reputation of the late computer pioneer Gary Kildall.
Last week, a Judge dismissed a defamation lawsuit brought by Tim Paterson, who sold a computer operating system
to Microsoft in 1980, against journalist and author Sir Harold Evans and his publisher Little Brown. The software became
the basis of Microsoft's MS-DOS monopoly, and the basis of its dominance of the PC industry.
But history has overlooked the contribution of Kildall, who Evans justifiably described as "the true founder of the
personal computer revolution and the father of PC software" in a book published three years ago.
In a chapter devoted to Kildall in Evans' They Made America: From the Steam Engine to the Search Engine: Two Centuries
of Innovators, Evans related how Paterson "[took] 'a ride on' Kildall's operating system, appropriated the 'look
and feel' of [Kildall's] CP/M operating system, and copied much of his operating system interface from CP/M."
The story of how Bill Gates came to acquire an operating system is well known. In 1980, Kildall's Digital Research
provided the operating system for a wide range of microcomputers, and was established as the industry standard. IBM
had approached Microsoft, then a tiny software company in the Seattle area, to provide a BASIC run-time for its first
micro, the IBM PC. Gates offered to provide IBM an operating system too, even though he didn't have one at the time.
This required a hasty purchase.
Microsoft turned to Tim Paterson, whose garage operation Seattle Computer Products was selling a CP/M clone called
86-DOS. This had been developed under the code name QDOS (for "quick and dirty operating system"), and SCP sold it alongside
an add-in CPU card. Microsoft turned this into the hugely successful DOS franchise.
(The oft-told story of Kildall spurning IBM to fly his plane is deeply misleading. It was IBM's distribution and
pricing of CP/M, which in the end was one of three operating systems offered with the first IBM PC, that ensured MS-DOS
captured the market.)
Paterson brought the case against Evans in March 2005, as we reported
here, claiming that Evans' defamatory
chapter caused him "great pain and mental anguish".
Evans was puzzled that the chapter drew a defamation suit as it merely "recapitulate[d] and state[d] what 11, 12,
15 other books [said] and there [was] no public outcry, no public corrections, no website corrections, no criticism
in reviews [that any of the accounts were erroneous]".
Taking a dim view of lawsuits designed to curb the First Amendment rights of journalists, Judge Thomas Zilly found
that Paterson's lawsuit failed on several important counts. In US law, Zilly pointed out, "truth is an absolute defense to a claim of defamation".
Judge Zilly said Paterson falsely claimed Evans credited Kildall as the "inventor" of DOS, weakening his case. At
the same time, the Judge found, Evans had faithfully recorded Paterson's denial of Kildall's view that QDOS "ripped
off" CP/M.
The Judge also agreed that Paterson copied CP/M's API, including the first 36 functions and the parameter passing
mechanism, although Paterson renamed several of these. Kildall's "Read Sequential" function became "Sequential Read",
for example, while "Read Random" became "Random Read".
(DR came to regret not suing Microsoft "very early on". For his part, Paterson was to plead that his operating system
of choice, Kildall's CP/M-86, was at the time unavailable for products based on Intel's 8086 that he wanted to sell,
necessitating the hasty clone).
Finally, Judge Zilly concluded that Evans acted without malice, and castigated the plaintiffs
for introducing irrelevancies into court, including the claim that Kildall was an alcoholic.
"Plaintiffs fail to provide any evidence regarding 'serious doubts' about the accuracy of the Kildall chapter. Instead,
a careful review of the Lefer notes... provides a research picture tellingly close to the substance of the final chapter."
And with that, the case was dismissed.
The PC world might have looked very different today had Kildall's Digital Research prevailed as the operating system
of choice for personal computers. DRI offered manufacturers the same low-cost licensing model which Bill Gates is today
credited with inventing by sloppy journalists - only
with far superior technology. DRI's roadmap showed a smooth migration to reliable multi-tasking, and in GEM, a portable
graphical environment which would undoubtedly have brought the GUI to the low-cost PC desktop years before Microsoft's
Windows finally emerged as a standard.
But then Kildall was motivated by technical excellence, not by the need to dominate his fellow man.
I remember the book by David Bradley. Assembly Language Programming for the IBM Personal Computer, Prentice-Hall,
1984 (ISBN 0130491713). Bradley's accomplishments are numerous - he wrote the BIOS code for the original PC and rose to become
architecture manager at the PC group. In 1992 he became the architecture manager for the group that developed the PowerPC
RISC microprocessor. See also the Inventor of the Week Archive and the David Bradley (engineer) article on
Wikipedia.
In 1981 these men changed how we live
David Smith, technology correspondent
Sunday August 6, 2006 The Observer
'IBM Corporation today announced its smallest, lowest-priced computer system - the IBM Personal Computer,' ran the
press release 25 years ago this week. 'Designed for business, school and home, the easy-to-use system sells for
as little as $1,565. It offers many advanced features and, with optional software, may use hundreds of popular application
programs.'
On 12 August 1981 no one could guess quite how profound an impact the announcement from International Business
Machines would have on hundreds of millions of lives. Nor how wildly divergent would be the fortunes of three men
who were there at the genesis of the IBM PC 5150 - an invention to rank in importance with the motor car, telephone
and television.
One of those men was David Bradley, 57, a member of
the original 12 engineers who worked on the secret project and who is still amazed by its profound consequences,
from email and iPods to Google and MySpace. Speaking from his home in North Carolina last week, he said: 'Computers
have improved the productivity of office workers and become a toy for the home. I don't want to assert that the
PC invented the internet, but it was one of the preconditions.'
The man with perhaps most cause to toast the industry standard PC's 25th birthday on Saturday, even more than
the engineers who built it, is Bill Gates. His software for the IBM PC, and nearly all the computers that followed
it, made him the world's richest man. But for IBM, the story was arguably one of defeat snatched from the jaws of
victory.
Bradley was also working on a similar machine when, in September 1980, he was recruited to the IBM team and sent
to Boca Raton in Florida to come up with a PC that would rival the pioneering Apple II. A few months later the team
had grown and got its leader - Don Estridge, a photographer's son from Florida who had worked for the army and NASA.
Racing against a 12-month deadline, the engineers scoured the country for components, and asked Intel, then a manufacturer
of memory chips, to deliver the central processing unit, or 'brain'.
IBM also needed operating system software. The man in the right place at the right time was a young geek who
had dropped out of Harvard. Bill Gates of Microsoft specialised in more modest computer languages but assured the
IBM team that he could come up with an operating system for their new machines in just a few days. After Estridge's
task force had left for their hotel, Gates went around the corner to a tiny company which had written a system for
the Intel processor and bought it out for £26,000. He then customised the system for IBM and sold it to them for
£42,000. Critically, Gates retained the right to license the system to other manufacturers who could, and would,
clone the IBM design. A quarter of a century later, he has an estimated wealth of £26bn.
IBM's failure to secure exclusive rights to Gates's software is often regarded as a blunder comparable to that
of the music executives who spurned The Beatles. But Bradley disagrees, saying that there was a higher purpose -
he and his colleagues used 'open architecture', off-the-shelf parts which others could acquire, and so defined a
standard that allowed others to build compatible machines capable of running the same software.
Experts generally regard this as the result of haste rather than altruism on IBM's part, but Bradley points out
that in the spirit of openness it published technical manuals to explain how the PC worked. Unlike Apple, which stuck
by its proprietary system and lost the lion's share of the market, IBM made its PC an open invitation to rivals eager
to imitate and improve upon it.
Bradley said: 'I believe the primary reason it was so successful is that it was an open system. There was a microprocessor
from Intel and an operating system from Microsoft. We published everything we knew so that if you wanted to work
on an application program you had all the information to do it and you could be reasonably confident IBM wouldn't
change things later.
'The participation of the rest of the industry was important because IBM alone could not possibly have invented
all the applications that people would want.'
The IBM PC 5150 weighed 25lbs, stood just under six inches high and had 64 kilobytes of memory and a five-and-a-quarter
inch floppy disk drive. Initial sales forecasts expected 242,000 to be sold over five years, but the figure was
exceeded in a single month. It was a personal triumph for Estridge, the 'father of the PC', but he would not live
to see its full legacy in the democratization of computing.
On 2 August 1985 Estridge was on Delta Air Lines Flight 191 from Fort Lauderdale, Florida approaching Dallas-Fort
Worth airport. It was caught in a freak wind and plummeted to the ground, bursting into flames. Of 152 passengers
on board, 128 died, including 48-year-old Estridge, his wife and several IBM executives.
IBM was overtaken in the PC market by Compaq in 1994. IBM sold its PC division to Chinese giant Lenovo for £628m
last year. 'I'm sad and disillusioned that IBM got out of the computer business since I was there at the very beginning,'
added Bradley. 'But as an IBM stockholder I think it was an extremely sensible business decision.'
Bradley quit IBM in 2004 after 28 years and lives in comfortable retirement. He mused: 'I have no regrets about
what happened. I was there when it was just a glimmer in everybody's eye and it's a privilege to still be here to
talk about it. And no, I don't envy Bill Gates.'
A decades-old quarrel over a defining event in computer history -- the creation of the program
that propelled Microsoft to dominance -- has suddenly become a legal dispute that could lead to a public trial.
Tim Paterson, the programmer widely credited for the software that became Microsoft's
landmark operating system, MS-DOS, filed a defamation suit this week against prominent historian and author Harold Evans
and the publishers of his book, "They Made America," released last year.
At issue is a chapter in the book that calls Paterson's program "a slapdash clone"
and "rip-off" of CP/M, an operating system developed in the 1970s by Seattle native Gary Kildall, founder of Digital
Research Inc. Paterson's suit disputes that claim and a long list of related assertions in the 16-page chapter
on Kildall, who died in 1994 at age 52.
The intent of the chapter, Evans said yesterday, was to "correct history" and set the
record straight on Kildall's role as a software pioneer. Evans based key elements of the chapter on Kildall's unpublished
memoirs. Evans said he stands by the facts as portrayed in the book and plans to "enter a vigorous defense" against
Paterson's lawsuit.
But Paterson, now 48 and retired in Redmond, said the chapter misrepresents history.
One possible resolution, he said yesterday, could include the release of a new edition of the book correcting the alleged
misrepresentations outlined in his suit.
"It's really a matter of the truth coming out and being widely understood, and if it
takes a trial to do that, then maybe a trial can help, but it's not necessary," Paterson said. "We're trying to get
their attention, first of all, and then see where that leads in terms of rectifying the problem."
In a statement, Microsoft criticized the version of events as portrayed in "They Made
America" as "one-sided and inaccurate."
Microsoft's statement acknowledged the "important" work of Kildall and others at his
company. However, the statement added: "The early history of the personal computer industry has been written many
times, and Microsoft is proud of the foundational role we played in the industry and for delivering the combination
of technical and business acumen that proved to be the catalyst for the revolution that followed."
The lawsuit promises to draw attention not only because of the subject matter but also because
of the prominence of the players.
Evans, married to Washington Post columnist and former Vanity Fair and New Yorker editor
Tina Brown, has worked as editor of the Sunday Times of London, president and publisher of Random House, and editorial
director and vice chairman of U.S. News & World Report, among other high-profile positions. The broader "They Made America"
book formed the basis for a PBS television series.
Paterson said he first became aware of the book not long before its October publication,
when contacted by a BusinessWeek reporter seeking comment for a story the magazine published about the book's chapter
on Kildall. Paterson said neither Evans nor his collaborators contacted him or interviewed him for the chapter, relying
instead on some of his previously published writing.
The lawsuit, filed Monday in U.S. District Court in Seattle, names as defendants Evans,
collaborators Gail Buckland and David Lefer, and publishers Little, Brown & Co. and Time Warner Book Group. The collaborators
and representatives of the publishing companies couldn't be reached for comment yesterday.
The suit acknowledges that Paterson sought to make the application programming
interfaces in his QDOS operating system, the predecessor of MS-DOS, compatible with Kildall's CP/M. Application programming
interfaces link programs to operating systems, and by ensuring compatibility, Paterson was seeking to "make it [as] easy
as possible for software developers to write applications" for his operating system, the suit said.
However, the suit disputes the book's assertion that Paterson's program was a rip-off
of Kildall's software. It also dismisses any notion that Kildall was the actual originator of the DOS Microsoft ended
up using.
"It is known in the computer world and the public in general that DOS was invented by
Plaintiff Tim Paterson," the lawsuit says.
The suit seeks unspecified monetary damages above the $75,000 threshold for federal court
jurisdiction.
If the case ever comes to court, "the devil is in the details and in who can remember
what details about what happened," said Paul Freiberger, co-author of "Fire in the Valley," a book about the creation of
the personal computer, first published in 1984.
"DOS certainly looked a lot like CP/M to the user," Freiberger said yesterday. "That
gets into all kinds of conflicts about look and feel and when someone is infringing."
As described in the suit, Paterson developed what would become known as DOS in the late
1970s and early '80s while working as a programmer for Tukwila-based Seattle Computer Products. Microsoft bought the
operating system in the early 1980s. Paterson later went to work for Microsoft.
"They Made America," which retails for $40, tells the story of inventors such as Henry
Ford, Wilbur and Orville Wright, and Walt Disney, among others. The chapter on Kildall describes him as "utterly brilliant"
at programming.
Describing the development of CP/M, Evans wrote that "Kildall created the bedrock
and subsoil out of which the PC software industry would grow."
"Entirely out of his own head, without the backing of a research lab or anyone,
he wrote the first language for a microcomputer operating system ... before there was even a microcomputer," Evans wrote.
Among other things, the chapter dismisses as myth the legendary story in which
Kildall is said to have missed a chance to sell his operating system to IBM because he decided to go flying. What's
not in dispute is that Microsoft and a young Bill Gates were able to strike a deal instead, providing the operating
system for IBM's early PC and launching a pivotal era for what has since become the world's largest software company.
DOSEMU-HOWTO [tldp.org] is the official Linux
DOSEMU HOWTO.
It even seems to be kept up to date (as popular as DOS is these days, anyhow).
There's still a dusty corner of systems design and programming that takes place on DOS: some embedded programming
tools (compilers, flash burners, in-circuit emulator debuggers) for some chips still work "best" on DOS.
Only now, we can use DOSEMU to run them under Linux and get the benefit of a real development environment when
supporting legacy apps. We can open a bash shell and use Perl, GNU make, emacs/vim, etc. to drive development,
then have a DOSEMU / FreeDOS window to drive download and debug.
It can be quite difficult to automate the Windows versions of these tools to that same level. Most of our projects
use Windows tools (running in VMware on Linux), but we did one two years ago hosted on DOSEMU, using Bytecraft's
now-excellent compiler for the PIC chips.
Best of both worlds, and many, many thanks to all the hackers that made it work so well.
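The workflow described above, with native Linux tooling driving a DOS-hosted step, can be sketched as a small wrapper. This is a hypothetical sketch: the compiler path `C:\CC\CC.EXE` is made up, and it assumes dosemu's `-dumb` dumb-terminal mode, which runs non-interactively and echoes the DOS program's output to the Linux terminal so diagnostics land in the build log.

```python
import subprocess

def run_dos_tool(exe, *args, dry_run=False):
    """Run a DOS-hosted tool under dosemu so a Linux build can drive it.

    The exe path and arguments here are illustrative. With dry_run=True
    the assembled command is returned instead of executed (handy on a
    machine without dosemu installed).
    """
    cmd = ["dosemu", "-dumb", " ".join((exe,) + args)]
    if dry_run:
        return cmd
    # Raises CalledProcessError if the DOS tool exits non-zero.
    return subprocess.run(cmd, check=True)

# Hypothetical usage from a build script or Makefile rule:
#   run_dos_tool(r"C:\CC\CC.EXE", "-o", "MAIN.OBJ", "MAIN.C")
```

A wrapper like this lets GNU make treat the legacy DOS compiler as just another rule, exactly the "best of both worlds" setup the post describes.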
Why? Because I gave away lots of my old but good DOS programs, complete with licence of course, years ago.
It would be nice to run almost bug-free, stable things like Word Perfect 5.2 again. (I did find one bug in that
actually, but it was not too serious and did not cause data loss). Then there was a magazine cover disk with
50 free utilities, about 20 of which were actually useful and worked, and got used every day, and all the old
C programs I wrote, which would compile and run on both DOS and Unix, but not for some reason, Windoze, even
in a command window.
It would be nice to run non-bloated code again. I used to be amazed at the speed of spell-checking in WP
5.2 on a 286, it would most probably still beat Word 2000 on my Athlon 2.6GHz. Life was much less troublesome
then, before truly abominable software, designed by idiots, for idiots, became dominant.
Now, if DOS could be combined with Unix version 7, that would be almost perfection.
You fail to mention here that Gary Kildall drew heavily upon another source -- DEC's RT-11 disk operating system
-- for his design. Not only were most of the user level commands the same in CP/M; the APIs were also similar. Given
that RT-11 was designed to be a minimal disk operating system for machines that were running "real time" applications
(hence the "RT"), it was already very lean and hence a good model for CP/M.
Note: DJGPP is spelled all
upper case when it would normally be capitalized, and all lower case otherwise. It is never correct to spell it ``Djgpp''.
Also, please be careful not to let your fingers get confused and type something like dgjpp or gjgpp.
DJGPP was born around 1989 (originally called djgcc), when Richard Stallman spoke at a meeting
of the Northern New England Unix Users Group (NNEUUG) at Data General, where I then worked. I asked if the FSF ever
planned on porting gcc to MS-DOS (I wanted to use it to write a 32-bit operating system for PCs), and he said it couldn't
be done because gcc was too big and MS-DOS was a 16-bit operating system. Challenge in hand, I began.
The first gcc I built was 1.35, which I built on an ISC Unix system running on a 386/16.
I wrote custom replacements for the system calls, linked with ISC's libc.a, wrote a custom program to turn the resulting
binary into a 32-bit EXE that Phar Lap's extender could use, and had the first gcc that ran on MS-DOS.
Because Phar Lap hadn't discovered virtual memory yet, and I didn't have enough physical
memory to let gcc compile itself, I had to write an extender that could provide virtual memory. Go32 was born. The first
files were control.c, mswitch.s, paging.c, and valloc.c, among a few other minor files. By the time I got this working,
gcc 1.37 was out, so that was the first version that was built on a DOS platform. I used this version for a while to
work on my 32-bit OS, which I still have lurking around on my hard drive (in source form).
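The problem go32's paging.c had to solve is classic demand paging: keep only some of a program's pages in scarce physical memory and bring the rest in from disk on a page fault, evicting a resident page to make room. A toy simulator of that bookkeeping, using LRU eviction (illustrative only; this is not go32's actual code or policy):

```python
from collections import OrderedDict

def count_page_faults(refs, frames):
    """Count page faults for a page-reference string with LRU eviction.

    A toy model of what a DOS extender with virtual memory must do:
    when a program touches more pages than physical memory holds,
    the least recently used resident page is pushed out to disk.
    """
    resident = OrderedDict()              # resident pages, LRU order
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)    # hit: now most recently used
        else:
            faults += 1                   # fault: page loaded from disk
            if len(resident) >= frames:
                resident.popitem(last=False)   # evict the LRU page
            resident[page] = None
    return faults

# Four pages cycled through only three frames: every access faults.
print(count_page_faults([1, 2, 3, 4, 1, 2, 3, 4], frames=3))   # 8
```

With enough frames the fault count drops to one per distinct page, which is why gcc compiling itself was hopeless without virtual memory once its working set exceeded the physical RAM of the day.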
I scrounged through the net and found the recently free'd BSD sources, and ported their
libc.a to MS-DOS. This, and many custom routines, was the basis for djgpp's standard library. The headers were based
on the original g++-includes from g++ 1.37. This is why the headers don't match the libraries all the time.
The first version that made it big was djgpp 1.03, which can still be found in a few
shareware catalogs, even though 1.03 was pre-grok-copyleft, and those shareware dealers are distributing it illegally.
The name was changed from djgcc to djgpp when C++ was added. I forget which release this
was. Since C++ is integral to gcc, djgpp no longer stands for "DJ's G++" but probably stands for something like "DJ's
GNU Programming Platform".
djgpp 1.05 was another big hit, as this was one of the first that was commercially available.
djgpp 1.06 added VCPI. 1.10 added DPMI. 1.11 added DPMIEMU, the first step towards version 2, and appeared on the GNU
Compiler Binary CD-ROM. Today, djgpp appears in many programming journals and on many CD-ROM distributions in many countries
and languages.
Version 2 began due to a need to have a system that could be fully self-bootstrapping.
Since go32 requires a Borland compiler to build, it didn't fit the bill. Cygnus, a big user of djgpp for their DOS-based
products, requested a self-bootstrapping version, so version 2 was born. The first part was writing an assembler that
could produce the 16-bit stub, so djasm was written. This stub relies on DPMI, a break from djgpp's traditional "run
on anything" policy. The old go32 was used to provide DPMI services through an offspring product called CWSDPMI (by Charles
Sandmann), which will ship with djgpp. Eventually, even the DPMI server will be written in a combination of 32-bit gcc
code and 16-bit djasm code, or if gcc can produce 16-bit code by then, in that.
Much "third-party" software has been written for djgpp, and many applications that are
outgrowing their 16-bit roots are being rewritten to take advantage of djgpp's 32-bit environment. Popular examples
of these (some are still in the works) are Quake, Info-Zip, GhostScript, Executor/DOS, WatTCP, Xemu, DESQview/X's developers
kit, and countless data processing programs used by companies and individuals throughout the world.
Where we are now
Version 2.00 shipped on February 5, 1996, after more than two years of development. Version
2.01 shipped October 19, 1996. Version 2.02 shipped December 6, 1998.