Compiler construction (see also my page with the collection of links) ceased to be a black art approximately after the publication of David Gries's famous book. It is now a fairly established field, but the truth is that there are few good books on the topic. The widely praised Compilers: Principles, Techniques, and Tools by Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman is in my opinion a weak book that puts too much stress on syntax parsing and obscures rather than illuminates the design of a compiler. In other words, the Dragon Book is way overhyped. It is confusing and a complete nightmare to understand, especially for students. It actually kills interest in compiler writing instead of stimulating it. The authors have a penchant for useless formalisms ("art for the sake of art"). The book has some value for instructors but almost none for students. Actually, it proves that Alfred Aho did not participate much in the development of the AWK interpreter and language :-). AWK was a revolutionary language (probably the first scripting language in existence) with a well-written interpreter that integrated a regular expression engine into the language (a path later extended by Perl). I would like to stress that, in a fundamental way, OO technology can be viewed as a rather primitive implementation of the compiler-compiler paradigm -- extending the base language dynamically into a new, more specialized language.
One of the most underestimated books on compilers is probably the first volume of The Art of Computer Programming, a book that should be on the shelf of any compiler writer. The algorithms described in it, especially coroutines and those related to trees, as well as the MIX assembler, are useful examples that any compiler writer can draw on. Generally, a book with the complete code of a simple compiler is a good start, as the theoretical methods expounded in books like Compilers: Principles, Techniques, and Tools by Aho, Sethi, and Ullman look incomprehensible at the beginning and turn out not to be that practical in the end. Paradoxically, only after writing your own compiler or interpreter do you start to understand how primitive the thinking of all those "theorists" is about this complex subject and how far detached from reality their writings are. In this sense, for a practitioner of the field, sound skepticism toward those "pseudo-theories" makes perfect sense :-).
At the same time compilers are a very interesting, fascinating area, and on an abstract level the methods used for writing them essentially constitute a higher-level programming paradigm, which can be called language-oriented programming and which is no less productive than OO (actually more, as OO is way overhyped and is itself related to the compiler-compiler idea).
This concept is described in detail in the paper by Martin Ward entitled "Language Oriented Programming",[1] published in Software - Concepts and Tools, Vol.15, No.4, pp 147-161, 1994,[2] and in the article by Sergey Dmitriev entitled "Language Oriented Programming: The Next Programming Paradigm".[3]
The concept of language-oriented programming takes the approach of capturing requirements in the user's terms and then trying to create an implementation language as isomorphic as possible to the user's descriptions, so that the mapping between requirements and implementation is as direct as possible. A measure of the closeness of this isomorphism is the "redundancy" of the language, defined as the number of editing operations needed to implement a stand-alone change in requirements. It is not assumed a priori what the best language for implementing the new language is. Rather, the developer can choose among options created by analysis of the information flows: what information is acquired, what its structure is, when it is acquired, from whom, and what is done with it.[4]
Structuring a program as a compiler is a very powerful method for solving a very broad class of complex problems related to processing a stream of structured data (not necessarily text, HTML, XML and the like). In a way this approach amounts to constructing a compiler for some specialized language designed specifically for the problem domain (a language which might be completely hidden from the user and exist only in some internal representation of the data, or just in the structuring of the program). It is similar to, but at the same time a higher-level methodology than, the now-dominant OO approach. BTW, OO can be viewed as a primitive compiler-compiler approach to program construction -- you extend the language to create a more specialized language for processing your data -- although few view it this way.
Structuring the output of a program as a generated program is also a very powerful idea. See Generative programming. For one short period (approximately 2000-2003) this area became fashionable, but it was soon simply forgotten. It does not deserve to be forgotten. This "pattern" of program construction has great value even in simple program development. For example, in Unix the migration of traditional user accounts from one flavor of Unix to another is best done by decompiling the /etc/passwd and /etc/group files into a set of groupadd and useradd statements and then running the resulting script on the target server, as sketched below.
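To make the idea concrete, here is a minimal Python sketch of that /etc/passwd decompilation. The field handling, the UID threshold for skipping system accounts, and the useradd option set are simplifying assumptions; a real migration must also handle /etc/shadow, secondary group membership and UID/GID collisions:

#!/usr/bin/env python3
# Minimal sketch of the "decompile /etc/passwd into useradd commands" idea.
# Assumes the standard 7-field passwd format.

def passwd_to_useradd(passwd_lines):
    """Generate one useradd command per regular user account."""
    script = ["#!/bin/sh"]
    for line in passwd_lines:
        name, _pw, uid, gid, gecos, home, shell = line.rstrip("\n").split(":")
        if int(uid) < 1000:        # skip system accounts (threshold varies)
            continue
        script.append(
            f"useradd -u {uid} -g {gid} -c '{gecos}' -d {home} -s {shell} {name}"
        )
    return "\n".join(script)

if __name__ == "__main__":
    with open("/etc/passwd") as f:
        print(passwd_to_useradd(f))

Run on the source server, redirect the output to a file, and execute that file on the target server.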
In any case, separation of the lexical, syntactic and semantic aspects of the task has proved to be a very fruitful approach. Even such a supposedly minor thing as treating diagnostics as a special subsystem, with its own rules for classifying the errors typical for the compiler, can drastically improve the quality of many programs. Providing a summary of diagnostic messages also helps, but nowadays it is almost never done.
The same goes for attention to the semantics of error processing (recoverable errors vs. non-recoverable errors) and for a set of built-in facilities for making diagnostic messages more comprehensible to the user. This is an almost forgotten art; only the IBM diagnostic compiler for PL/I and a couple of other compilers oriented toward students, such as the PL/C compiler, approached this problem from a sound position.
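Here is a minimal sketch of such a diagnostics subsystem in Python. The severity names, the error limit and the hint mechanism are illustrative assumptions, not a reconstruction of the IBM PL/I or PL/C designs:

# Sketch of a compiler diagnostics subsystem with severity classes and a
# final summary, in the spirit of the classic compilers mentioned above.
from collections import Counter

SEVERITIES = ("note", "warning", "error", "fatal")   # ordered by weight

class Diagnostics:
    def __init__(self, max_errors=20):
        self.counts = Counter()
        self.max_errors = max_errors

    def emit(self, severity, line, message, hint=None):
        assert severity in SEVERITIES
        self.counts[severity] += 1
        print(f"line {line}: {severity}: {message}")
        if hint:                       # the "make it comprehensible" part
            print(f"    hint: {hint}")
        if severity == "fatal" or self.counts["error"] > self.max_errors:
            self.summary()
            raise SystemExit(1)        # non-recoverable: stop compilation

    def summary(self):
        total = ", ".join(f"{self.counts[s]} {s}(s)" for s in SEVERITIES)
        print(f"compilation summary: {total}")

diag = Diagnostics()
diag.emit("warning", 7, "missing semicolon before '}'",
          hint="a semicolon was assumed at the end of line 7")
diag.summary()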
If you want to create your own language, a look at books like Programming Language Pragmatics can help you avoid typical blunders. Usually a compiler writer already knows three or four languages in some depth (for example C, C++, Perl and Lisp), and if this is not the case, self-education is a necessary step.
Still, history (which includes the history of language design blunders) is often ignored, and it is disgusting to see in modern languages blatant design errors that were already known as such in the late 1960s. Among those we can mention the absence of multi-level closure of nested blocks via local labels:
1: if (... ) {
       if ( ...) {
          ...
          while (... ) {
             ...
   }:1    # closes all levels of nesting opened since local label 1 was set

Here 1 is a local label that can be reused multiple times in different places in the same program (the only restriction is that identical labels cannot be nested). Pascal-style numeric local labels are perfectly suitable for this task. This "multiple closure" solution has been known since at least 1968 or so, when it was first implemented in PL/I.
The first phase of a typical compiler is the lexical scanner, which takes the input text and splits it into "lexical tokens" (eliminating comments), simultaneously creating a table of identifiers and possibly a table of literals (this way you can handle complex literals and factor out multiple uses of the same literal, which is useful when it is a long string). Those tables are used by subsequent phases of the compiler. References to identifiers are at this phase replaced by references to the symbol table. The program processed this way is submitted to the next phase, typically the syntax analyzer, which verifies that the program is constructed correctly according to the grammar of the language and builds the "parse tree" that is used in the next and most complex phase of the compiler: code generation. There can be an optimization phase between those two stages which operates on the program graph and tries to "fold" certain constructs into meta-constructs, or to factor out or completely eliminate some redundant computations.
In any case the first phase of a typical compiler is the lexical scanner, which can operate as a subroutine or, better, as a coroutine. In the extreme it can be a separate "first pass", but that makes feedback from the syntax analyzer to the lexical scanner more difficult. The coroutine structure has some advantages in this sense (for example, it allows treating a missing semicolon at the end of a line sensibly, a very frequent error in languages such as C or Perl; in most cases such errors are treated "stupidly" by modern compilers and interpreters for C-like languages, and there has been a distinct regress in the quality of diagnostics since the days of the classic compilers for the IBM/360).
There are several lexical scanner generators of varying quality, but in my view the lexical analysis phase should be coded by hand, and it can do some lookahead in order to simplify the next phase, syntax analysis. Even if you deal with a simple, regular lexical structure, you do not need Lex (or flex, or similar tools) to generate the scanner. Hand-written scanners are simpler, more powerful and more flexible. They can provide additional help to syntax analysis by looking ahead for a particular lexical token. Moreover, lexical analyzers can usually exploit advanced instructions of a particular architecture (like tr), so a mixture of C and assembler is the best way to go.
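As an illustration, here is a minimal hand-written scanner sketched in Python (the paragraph above argues for C plus assembler in production). The token categories and the operator set are assumptions made for the example, and the generator form is a cheap stand-in for the coroutine organization discussed earlier:

# Minimal hand-written lexical scanner, structured as a generator.
def scan(text):
    i, n = 0, len(text)
    while i < n:
        c = text[i]
        if c in " \t\n":
            i += 1
        elif c.isdigit():
            j = i
            while j < n and text[j].isdigit():
                j += 1
            yield ("NUMBER", text[i:j]); i = j
        elif c.isalpha() or c == "_":
            j = i
            while j < n and (text[j].isalnum() or text[j] == "_"):
                j += 1
            yield ("IDENT", text[i:j]); i = j   # would also go to the symbol table
        elif c in "+-*/()=":
            yield ("OP", c); i += 1
        else:
            raise SyntaxError(f"unexpected character {c!r} at offset {i}")

print(list(scan("x1 = (alpha + 42) * beta")))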
Rule number one is not to spend too much time on syntax analysis. A lot of people have hurt themselves and missed deadlines by overdoing this part of the compiler instead of concentrating on code generation. If you can mold your language into recursive descent parsing, just do it and forget about the problem. Recursive descent also provides better error diagnostics (an Achilles' heel of YACC). Wirth was a very talented compiler writer, and he preferred recursive descent parsers.
That said, for syntax analyzers, using something like YACC can improve the reliability of the parser and can be recommended: YACC has some debugging capabilities, and you can ask questions and probably get answers in the comp.compilers newsgroup. But for simple languages recursive descent is the better technology. It is not only simpler and better linked to code generation (the most difficult part of a compiler) but also gives better diagnostics. See for example LEPL 2.3:
LEPL is a recursive descent parser library written in Python. It is based on parser combinator libraries popular in functional programming, but also exploits Python language features. Operators provide... a friendly syntax, and the consistent use of generators supports full backtracking and resource management. Backtracking implies that a wide variety of grammars are supported; appropriate memoisation ensures that even left-recursive grammars terminate
If it's your own language, you can almost always tweak the grammar to conform to a recursive descent parser. Things like function definitions can always be moved to the front of the program by the lexical scanner.
If this is a language that you can't tweak and the grammar is imposed on you, use YACC or something similar and, again, forget about the problem :-). Usually you can split the language into several simpler sublanguages which can be parsed more easily if they are treated separately; for example, some sublanguages may be parsable by a regular expression engine. So you can definitely use a different parsing strategy for each of them.
This approach can be implemented using the Floyd-Evans production language, a specialized language for writing parsers that operates on a stack of lexical tokens. This way you can combine recursive descent for the program as a whole with bottom-up approaches for parsing expressions. A sketch of the recursive descent side follows.
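Here is a minimal recursive descent parser in Python over the token stream from the scanner sketch above. The expr/term/factor grammar with conventional precedence is the standard textbook example, used purely as an illustration:

# Minimal recursive descent parser for arithmetic expressions, producing a
# nested-tuple AST: one parsing procedure per grammar rule.
def parse(tokens):
    toks = list(tokens) + [("EOF", "")]
    pos = [0]

    def peek():     return toks[pos[0]]
    def advance():  pos[0] += 1; return toks[pos[0] - 1]

    def expect(kind, value=None):
        k, v = peek()
        if k != kind or (value is not None and v != value):
            raise SyntaxError(f"expected {value or kind}, got {v!r}")
        return advance()

    def factor():                  # factor -> NUMBER | IDENT | '(' expr ')'
        k, v = peek()
        if k in ("NUMBER", "IDENT"):
            return advance()
        expect("OP", "(")
        node = expr()
        expect("OP", ")")
        return node

    def term():                    # term -> factor (('*'|'/') factor)*
        node = factor()
        while peek() in (("OP", "*"), ("OP", "/")):
            node = (advance()[1], node, factor())
        return node

    def expr():                    # expr -> term (('+'|'-') term)*
        node = term()
        while peek() in (("OP", "+"), ("OP", "-")):
            node = (advance()[1], node, term())
        return node

    tree = expr()
    expect("EOF")
    return tree

toks = [("OP", "("), ("IDENT", "alpha"), ("OP", "+"), ("NUMBER", "42"),
        ("OP", ")"), ("OP", "*"), ("IDENT", "beta")]
print(parse(toks))   # ('*', ('+', ('IDENT','alpha'), ('NUMBER','42')), ('IDENT','beta'))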
At the beginning you can start with an interpreter; that completely bypasses the problem of generating assembly code. This approach is especially useful while the language is in flux. As you polish the language, you can convert the interpreter into a compiler: first compile your language to C, then to assembler, and then to object code. Such a gradual approach is useful because you learn in the process.
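The two routines below suggest how small that first step can be: a tree-walking evaluator over the tuple AST produced by the parser sketch, next to an equally naive routine that emits the corresponding C expression. Both are illustrative sketches under those assumptions, not a prescription:

# Interpreter-first route: evaluate the tuple AST directly, then later swap
# eval_node() for emit_c() to begin the interpreter-to-compiler conversion.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.floordiv}

def eval_node(node, env):
    kind = node[0]
    if kind == "NUMBER":
        return int(node[1])
    if kind == "IDENT":
        return env[node[1]]            # name lookup; env is a plain dict here
    op, left, right = node
    return OPS[op](eval_node(left, env), eval_node(right, env))

def emit_c(node):
    """Same walk, but generating a C expression instead of a value."""
    if node[0] in ("NUMBER", "IDENT"):
        return node[1]
    op, left, right = node
    return f"({emit_c(left)} {op} {emit_c(right)})"

tree = ("*", ("+", ("IDENT", "alpha"), ("NUMBER", "42")), ("IDENT", "beta"))
print(eval_node(tree, {"alpha": 1, "beta": 2}))   # -> 86
print(emit_c(tree))                               # -> ((alpha + 42) * beta)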
One interesting possibility is to use XML as an intermediate language and then modify existing XML tools to convert it into C or assembler. XML captures the nested structure of computer programs quite well and maps naturally onto the tuple form that is the basic pre-code-generation representation used in compilers. Using XML lets you use common XML editors for viewing your intermediate representation. This alone simplifies debugging immensely, as good visibility of the intermediate representation of the program is a key to success in compiler writing. You can also try to use XSLT as a poor man's code generator (see for example Xalan, which is available in C++), at least in the beginning stages, and the knowledge you obtain in the process can be reused in other valuable applications. Of course, XML is immensely abused these days, with simple config files coded in it left and right, but that is a problem with any fashionable technology. In the compiler writing domain I think the additional time spent learning XML and using it as an intermediate representation is probably time well spent. While XML is not perfect, it does make the intermediate representation of the program more transparent with very little effort.
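For instance, the tuple AST from the earlier sketches can be dumped as XML with the Python standard library, after which any XML editor or XSLT stylesheet can inspect or transform it. The element and attribute names below are arbitrary choices:

# Dump the tuple AST as XML so ordinary XML tools can be used to view and
# transform the intermediate representation.
import xml.etree.ElementTree as ET

def ast_to_xml(node):
    if node[0] in ("NUMBER", "IDENT"):
        leaf = ET.Element(node[0].lower())
        leaf.set("value", node[1])
        return leaf
    op_el = ET.Element("binop")
    op_el.set("op", node[0])
    op_el.append(ast_to_xml(node[1]))   # left operand
    op_el.append(ast_to_xml(node[2]))   # right operand
    return op_el

tree = ("*", ("+", ("IDENT", "alpha"), ("NUMBER", "42")), ("IDENT", "beta"))
print(ET.tostring(ast_to_xml(tree), encoding="unicode"))
# <binop op="*"><binop op="+"><ident value="alpha"/><number value="42"/></binop><ident value="beta"/></binop>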
Compilers do not exist in isolation: a lot depends on the hardware and the programming languages at hand. These days most compilers are written in C, but scripting languages such as Python, Perl and Ruby, if available, can make the initial design phase more productive. After all, a compiler is a text-manipulating program. But always start simple: generate C code or assembler code. You need to adapt to the target computer first and learn a lot before converting your compiler to generate object code directly. With modern CPU speeds this may not even be necessary, unless the language gains really wide acceptance, which is a very rare case indeed ;-) If you plan on generating assembly or object code, a set of good books on assembler should be added to your library.
Compiler writing has produced several interesting algorithms, including algorithms on directed graphs, a very fascinating area. I hope that Donald Knuth will eventually write a volume devoted to compiler construction, but my hopes dim with each passing year (as of 2015 he is already 77 years old). This area definitely needs a giant. Meanwhile, the combination of Gries' book and Writing Compilers and Interpreters or Compiler Design in C (plus a couple of more advanced books like Muchnick's Advanced Compiler Design and Implementation) can probably serve as a substitute for this nonexistent volume of The Art of Computer Programming.
Although this area is semi-forgotten, one active software development paradigm related to compiler technologies is so-called program generation and generative programming patterns ;-). There are several books of varying quality related to this topic.
The last and most complex part of compiler writing is code optimization, and here a tree representation can be extremely useful. I strongly recommend trying peephole optimization as the first optimization method. It was introduced in the paper McKeeman, W.M., "Peephole Optimization", CACM 8 (July 1965), pp. 443-444. See also Aho, Alfred V., Ravi Sethi, Jeffrey D. Ullman, Compilers: Principles, Techniques, and Tools, Addison-Wesley, 1986.
Peephole optimization is a method of improving the quality of a program by examining short sequences of target instructions and applying to them some well-defined pattern recognition technology such as regular expressions. The peephole is a small moving window on the target program; instructions in the peephole are optimized considering only the local context present in the peephole. It can also be defined as pattern matching and conditional replacement performed on small sections of code (see Vicki H. Allan, "Peephole Optimization as a Targeting and Coupling Tool", pp. 112-121, 1989, ACM Proceedings of the 22nd annual international workshop on microprogramming and microarchitecture). Among other things it includes (see Subject D1, Peephole Optimization and Optimal Code Generation):
Redundant-instruction elimination, as in:

    (1) MOV R0, a
    (2) MOV a, R0    # whenever (2) is executed right after (1), it is redundant

Dead-code elimination, as in:

    debug = 0;
    if (debug) {
        print debugging information
    }
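A minimal sketch of such a pass in Python, sliding a two-instruction window over the code and deleting the redundant reload shown above. The MOV syntax just follows the example, and a real pass would also have to respect labels, aliasing and condition codes:

# Toy peephole pass: drop "MOV x,R" when it immediately follows "MOV R,x".
import re

MOV = re.compile(r"MOV\s+(\w+),\s*(\w+)")

def peephole(instructions):
    out = []
    for ins in instructions:
        if out:
            prev, cur = MOV.fullmatch(out[-1]), MOV.fullmatch(ins)
            # MOV a,R0 right after MOV R0,a reloads a value already in R0
            if prev and cur and (prev.group(1), prev.group(2)) == (cur.group(2), cur.group(1)):
                continue               # skip the redundant instruction
        out.append(ins)
    return out

code = ["MOV R0, a", "MOV a, R0", "ADD R0, b"]
print(peephole(code))   # -> ['MOV R0, a', 'ADD R0, b']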
See Peter B. Kessler, "Discovering Machine Specific Code Improvement", SIGPLAN Notices, July 1986, pp. 249-254, and a related paper by Massalin: "Superoptimizer: A Look at the Smallest Program", Proceedings of the Second International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS II), Oct. 1987, IEEE:
Abstract:
The superoptimizer is a tool that, given an instruction set, finds the shortest program to compute a function. Startling programs have been generated, many of them engaging in convoluted bit-fiddling bearing little resemblance to the source programs which defined the functions. The key idea in the superoptimizer is a probabilistic test that makes exhaustive searches practical for programs of useful size. The search space is defined by the processor's instruction set, which may include the whole set, but it is typically restricted to a subset. By constraining the instructions and observing the effect on the output program, it is possible to gain insight into the design of instruction sets. In addition, superoptimized programs may be used by peephole optimizers to improve the quality of generated code, or by assembly language programmers to improve manually written code
We can mention the following books (details and selected reader reviews below):
Hardcover / Published 1985
One of the best books on the topic -- by kievite, January 19, 2011
- Series: McGraw-Hill computer science series
- Hardcover: 816 pages
- Publisher: Mcgraw-Hill College (May 1985)
- Language: English
- ISBN-10: 0070651612
- ISBN-13: 978-0070651616
- Product Dimensions: 1.2 x 7 x 9.5 inches
The key feature of a good book is that it makes complex issues more understandable. Few compiler books belong to this category, as most are "over-excited" with formalisms, especially in the syntax area.
The few that are more practically oriented and contain source code often lose the key ideas behind the mass of code.
In this book the authors manage to cover a large body of compiler-related topics in a relatively unobscured manner. They took the most difficult middle road and tried to balance formal methods with pseudo-code. That's a very difficult task, and the fact that the authors managed it is a tremendous achievement in itself.
Essentially all chapters starting from chapter 8 (symbol table handling techniques) have by and large preserved their value for the reader/student. The most valuable part of the book is the discussion of optimization techniques. That's the area that distinguishes simple compilers from more complex ones, and some methods, like constant folding, are a must even for simple compilers/interpreters. At the same time optimization greatly influences the way all other compiler tasks are performed, and first of all the intermediate representation of the program. The authors cover the topic in two large chapters, 12 and 13 (pp. 610-721).
Published in 1971
A Customer on October 8, 1999
Excellent book which still remains excellent
This was one of the best books I read when I was a student 25 years ago. You really understand how to write compilers after reading it. It is invaluable for those who learn programming. I believe it contributed greatly to turning me into a professional programmer. It's a shame that this book is not available any more...
M. Cleary
- Hardcover: 592 pages
- Publisher: Course Technology; 1 edition (January 24, 1997)
- Language: English
- ISBN: 0534939724
- Average Customer Review: based on 10 reviews.
- Amazon.com Sales Rank: #177,613 in Books
One of the best books, April 6, 2005 -- meerkat "Captain Meerkat" (Moscow, ID USA)
This book is outstanding! The Dragon Book is way overhyped. I have tried again and again to follow the Dragon Book, and each time I found it too difficult. On the other hand, Louden's book has answered many questions that I had in a clear, concise manner! I love this book! I have also flipped through almost all other compiler/interpreter books on the market in various bookstores, but none of them compare. This is *THE* book on introductory compiler design. Other books you might want if interested in writing your own programming language/compiler are "Programming Language Pragmatics", "Lex and Yacc", "Java Virtual Machine Specification" and "Virtual Machine Design and Implementation in C/C++".
I taught from this book, May 6, 2003
This is an excellent basic book on compilers. Its strength is its strong practical approach combined with using YACC/LEX technology. It hand-holds you through the development of a simple compiler. If I wanted to learn about compilers I would read this first. Its weakness is that it is too narrow: there are plenty of language features that are addressed only in passing. Its goal is to get a compiler built. For a compilers 101 class there is no better book.
Selected Reviews
SVENSSON KURT from Stockholm Sweden
- Hardcover: 856 pages ; Dimensions (in inches): 1.73 x 9.57 x 7.70
- Publisher: Morgan Kaufmann; 1st edition (January 15, 2000)
- ISBN: 1558604421
- In-Print Editions: Paperback | All Editions
- Average Customer Review:
A classic, May 30, 2003 -- A reader from Los Angeles
For everyone that has ever worked on or plans to implement a compiler/interpreter. It gave me many new insights on different implementation issues. This book is written in such a pedagogical, clear and exciting way that it can be read like a novel or a thriller (pick your choice).
A magnificent achievement -- a bedrock of knowledge, for life, March 17, 2002 -- Cher-Wah Tan from Texas, SA USA
I am in the process of reading this book for a review and already I am compelled (at the conclusion of chp. 7 of 13) to write a glowing review. This book truly is an achievement and it lives up to my predecessor's comments. I hasten to emphasize that this textbook combines form and content to a very high degree: it is written superbly with great clarity, the topics are organized extremely well and meaningfully, and finally it provides a comprehensive overview of all aspects of programming.
In the course of my reading, I have never felt the need to skip sections; there are no sections that are abstruse or cursorily covered. All sections are integrated with the others and each section offers very useful knowledge. The author clearly displays a profound understanding of all aspects of his endeavor. I must emphasize that in the vast majority of cases with textbooks, in any academic area, the impression is that the author has intimate knowledge of 60% of the material he covers, and as for the latter 40% of the material he has at best good knowledge or passing familiarity but is able to speak on these topics because of his qualifications.
The greatest merit of this book is that one can very profitably go through a self-study programme through all 13 chapters and come away much superior to one's peers in college or graduate school or industry (I qualify this statement at present: I have read only 7 chapters yet, but intend to read all 13). Finally, if you are familiar with the excellent book on Computer Architecture by Patterson and Hennessy, then I say that this book is on a level par with that venerable textbook.
Tough Topic - Crystal Clear Explanation, June 3, 2001
I have always enjoyed reading programming-language and compiler books and most of them are quite tough on a first read. Programming Language Pragmatics is one huge exception. None of the books I have read come close to the clarity that this book exhibits. On many occasions, the choice of words and presentation in this book has made me go 'Wow, I thought I already knew this stuff...'
Besides the core topics, it has interesting discussions of concurrency, data abstraction (object orientation) and non-imperative programming models (functional and logic).
TOC (with my comments)
- Ch. 1 Introduction
- Ch. 2 Programming Language Syntax (theory of Regular Expression, Context-Free Grammars, Automata etc)
- Ch. 3 Names, Scopes, and Bindings (binding, scope rules, closures etc)
- Ch. 4 Semantic Analysis (attribute grammars, attribute flow, syntax tree etc)
- Ch. 5 Assembly-Level Computer Architecture (keeping the pipeline full, register allocation etc)
- Ch. 6 Control Flow (expression evaluation, iteration, recursion, nondeterminacy etc)
- Ch. 7 Data Types (type checking, pointers and recursive types etc)
- Ch. 8 Subroutines and Control Abstraction (stack layout, calling sequences, parameter passing etc)
- Ch. 9 Building a Runnable Program (back-end compiler structure, intermediate forms etc)
- Ch. 10 Data Abstraction and Object Orientation (encapsulation, inheritance, dynamic method binding, multiple inheritance, the object model of smalltalk)
- Ch. 11 Nonimperative Programming Models: Functional and Logic Languages
- Ch. 12 Concurrency (shared memory, message passing etc)
- Ch. 13 Code Improvement (peephole, redundancy elimination, data flow analysis, loop improvement, instruction scheduling, register allocation etc)
App. A Programming Languages Mentioned
App. B Language Design and Language Implementation
This is a very impressive book; truly one of my best investments in books so far.
Amazon.com
Andrew Kadatch (Redmond, WA United States) -- Excellent, December 12, 2001
This book is definitely _not_ for beginners, but compilers are not supposed to be written by novices -- if there is rocket science in computers, it is compiler development. The crystal clear style and language make this book easy reading, and LCC is the best non-optimizing compiler I've seen (and believe me, I've seen many compiler sources): orthogonal, easy-to-follow design, well-thought-out data structures and overall architecture.
I treat this book as a perfect collection of brilliant ideas, many of which you will find implemented in most commercial compilers.
Will it help you write your own compiler? Sure. Are you thinking about an IR (internal representation) that will be easy to create and, most important, to walk and manipulate? Take a look at how Fraser et al. did it; they did it well. Wondering how to write a front end or code generator? It's all there. Sure, blind copying won't work: an optimizing compiler will call for a way more sophisticated BURG-like technique (one of the best known code generation techniques by now), but, all in all, it will be BURG-like, and that is in the book as well.
So, if you want to show your students (or learn yourself) how compilers should be written, you cannot find anything better than LCC accompanied by this book. Fraser's team did it right.
Nikolai Bezroukov
Announcing the first Art of Computer Programming eBooks
For many years I've resisted temptations to put out a hasty electronic version of The Art of Computer Programming, because the samples sent to me were not well made.
But now, working together with experts at Mathematical Sciences Publishers, Addison-Wesley and I have launched an electronic edition that meets the highest standards. We've put special emphasis into making the search feature work well. Thousands of useful "clickable" cross-references are also provided --- from exercises to their answers and back, from the index to the text, from the text to important tables and figures, etc.
Note: However, I have personally approved ONLY the PDF versions of these books. Beware of glitches in the ePUB and Kindle versions, etc., which cannot be faithful to my intentions because of serious deficiencies in those alternative formats. Indeed, the Kindle edition in particular is a travesty, an insult to Addison-Wesley's proud tradition of quality.
Readers of the Kindle edition should not expect the mathematics to make sense! Maybe the ePUB version is just as bad, or worse --- I don't know, and I don't really want to know. If you have been misled into purchasing any part of eTAOCP in an inferior format, please ask the publisher for a replacement.
The first fascicle can be ordered from Pearson's InformIT website, and so can Volumes 1, 2, 3, and 4A.
MMIXware
Hooray: After fifteen years of concentrated work with the help of numerous volunteers, I'm finally able to declare success by releasing Version 1.0 of the software for MMIX. This represents the most difficult set of programs I have ever undertaken to write; I regard it as a major proof-of-concept for literate programming, without which I believe the task would have been too difficult.
Version 0.0 was published in 1999 as a tutorial volume of Springer's Lecture Notes in Computer Science, Number 1750. Version 1.0 has now been published as a thoroughly revised printing, available both in hardcopy and as an eBook. I hope readers will enjoy such things as the exposition of a computer's pipeline, which is discussed via analogy to the activities in a high tech automobile repair shop. There also is a complete implementation of IEEE standard floating point arithmetic in terms of operations on 32-bit integers, including original routines for floating point input and output that deliver the maximum possible accuracy. The book contains extensive indexes, designed to enhance the experience of readers who wish to exercise and improve their code-reading skills.
The MMIX Supplement
I'm pleased to announce the appearance of an excellent 200-page companion to Volumes 1, 2, and 3, written by Martin Ruckert. It is jam-packed with goodies from which an extraordinary amount can be learned. Martin has not merely transcribed my early programs for MIX and recast them in a modern idiom using MMIX; he has penetrated to their essence and rendered them anew with elegance and good taste.
His carefully checked code represents a significant contribution to the art of pedagogy as well as to the art of programming.
- File Size: 558 KB
- Print Length: 169 pages
- Simultaneous Device Usage: Unlimited
- Publication Date: September 20, 2015
- Sold by: Amazon Digital Services, Inc.
- Language: English
Amazon.com
File Size: 4095 KB
Publisher: Pearson; 1 edition (June 1, 2013)
Publication Date: June 1, 2013
Sold by: Amazon Digital Services, Inc.
Language: English
ASIN: B00LOBJZ82
Amazon.com
File Size: 5023 KB
Print Length: 536 pages
Publisher: Pearson; 1 edition (May 2, 2012)
Publication Date: May 2, 2012
Sold by: Amazon Digital Services, Inc.
Language: English
"Throw away your compiler theory book! Terence Parr shows how to write practical parsers, translators, interpreters, and other language applications using modern tools and design patterns. Whether you're designing your own DSL or mining existing code for bugs or gems, you'll find example code and suggested patterns in this clearly written book about all aspects of parsing technology."
- Paperback: 374 pages
- Publisher: Pragmatic Bookshelf; 1 edition (January 10, 2010)
- Language: English
- ISBN-10: 193435645X
- ISBN-13: 978-1934356456
-Guido van Rossum, Creator of the Python language
Table of Contents
- What Readers Are Saying About Language Implementation Patterns
- Acknowledgments
- Preface
- What to Expect from This Book
- How This Book Is Organized
- What You'll Find in the Patterns
- Who Should Read This Book
- How to Read This Book
- Languages and Tools Used in This Book
- Part 1: Getting Started with Parsing
- Chapter 1: Language Applications Cracked Open
- The Big Picture
- A Tour of the Patterns
- Dissecting a Few Applications
- Choosing Patterns and Assembling Applications
- Chapter 2: Basic Parsing Patterns
- Identifying Phrase Structure
- Building Recursive-Descent Parsers
- Parser Construction Using a Grammar DSL
- Tokenizing Sentences
- Mapping Grammars to Recursive-Descent Recognizers
- LL(1) Recursive-Descent Lexer
- LL(1) Recursive-Descent Parser
- LL(k) Recursive-Descent Parser
- Chapter 3: Enhanced Parsing Patterns
- Parsing with Arbitrary Lookahead
- Parsing like a Pack Rat
- Directing the Parse with Semantic Information
- Backtracking Parser
- Memoizing Parser
- Predicated Parser
- Part 2: Analyzing Languages
- Chapter 4: Building Intermediate Form Trees
- Why We Build Trees
- Building Abstract Syntax Trees
- Quick Introduction to ANTLR
- Constructing ASTs with ANTLR Grammars
- Parse Tree
- Homogeneous AST
- Normalized Heterogeneous AST
- Irregular Heterogeneous AST
- Chapter 5: Walking and Rewriting Trees
- Walking Trees and Visitation Order
- Encapsulating Node Visitation Code
- Automatically Generating Visitors from Grammars
- Decoupling Tree Traversal from Pattern Matching
- Embedded Heterogeneous Tree Walker
- External Tree Visitor
- Tree Grammar
- Tree Pattern Matcher
- Chapter 6: Tracking and Identifying Program Symbols
- Collecting Information About Program Entities
- Grouping Symbols into Scopes
- Resolving Symbols
- Symbol Table for Monolithic Scope
- Symbol Table for Nested Scopes
- Chapter 7: Managing Symbol Tables for Data Aggregates
- Building Scope Trees for Structs
- Building Scope Trees for Classes
- Symbol Table for Data Aggregates
- Symbol Table for Classes
- Chapter 8: Enforcing Static Typing Rules
- Computing Static Expression Types
- Automatic Type Promotion
- Enforcing Static Type Safety
- Enforcing Polymorphic Type Safety
- Part 3: Building Interpreters
- Chapter 9: Building High-Level Interpreters
- Designing High-Level Interpreter Memory Systems
- Tracking Symbols in High-Level Interpreters
- Processing Instructions
- Syntax-Directed Interpreter
- Tree-Based Interpreter
- Chapter 10: Building Bytecode Interpreters
- Programming Bytecode Interpreters
- Defining an Assembly Language Syntax
- Bytecode Machine Architecture
- Where to Go from Here
- Bytecode Assembler
- Stack-Based Bytecode Interpreter
- Register-Based Bytecode Interpreter
- Part 4: Translating and Generating Languages
- Chapter 11: Translating Computer Languages
- Syntax-Directed Translation
- Rule-Based Translation
- Model-Driven Translation
- Constructing a Nested Output Model
- Syntax-Directed Translator
- Rule-Based Translator
- Target-Specific Generator Classes
- Chapter 12: Generating DSLs with Templates
- Getting Started with StringTemplate
- Characterizing StringTemplate
- Generating Templates from a Simple Input Model
- Reusing Templates with a Different Input Model
- Using a Tree Grammar to Create Templates
- Applying Templates to Lists of Data
- Building Retargetable Translators
- Chapter 13: Putting It All Together
- Finding Patterns in Protein Structures
- Using a Script to Build 3D Scenes
- Processing XML
- Reading Generic Configuration Files
- Tweaking Source Code
- Adding a New Type to Java
- Pretty Printing Source Code
- Compiling to Machine Code
- Bibliography
Great book for those who want to write and exploit parsing technology -- Maciej Pilichowski on December 13, 2012
This is a great book for those who want to learn how to write and exploit parsing technology to create DSLs (domain specific languages) and transform programs using parsing technology when regular expressions won't do the job or are too complex to be coded easily.
The book discusses many aspects of compiler technology and interpreter technology. It will help you write a compiler or interpreter using ANTLR3. It goes in depth into the many things you need to know such as - LL(1), LL(k), and LL(*) parsers and symbol tools.
The book instructs one in how to use ANTLR - which is the lexer/parser generator which the author of the book makes freely available. ANTLR is the most powerful generator on the market today and used by many companies such as Oracle.
To get the most from this informative book, you should also buy "The Definitive ANTLR 4 Reference", which Amazon also sells.
Parr is a genius. He has produced the definitive parser generator and the definitive books that enable one to use this generator. Even if one will not use ANTLR, the books are valuable for their overview and in depth discussion of writing compilers and parsers.
Note that the book does not discuss some things necessary for writing a world class compiler such as register allocation. I'm using it to write translators from one language to another and in that application its advice is spot on and extremely helpful.
1.0 out of 5 stars The maze to slow you down
Guido van Rossum wrote in his review of this book "Throw away your compiler theory book". If I were about to throw away any book from my compilers bookshelf, this book would be the first one. And it is not because it has no ideas; it has some, but almost everything is done in such a twisted way that instead of boosting your performance it actually slows you down. Take any sentence, and I knew less after reading it than before. It is like asking a stranger for the time and listening in return to mumbling about how time in general is an interesting concept and about the ways of measuring time, and in the end getting the information that it is about the same time as last Sunday. All of it absolutely true and correct, but not useful in the context of the question.
My question for this book was "how do I build my own DSL using ANTLR?", and the book failed to deliver a clear answer. Here is why...
The book has three layers: how you would do a given task in general (in theory), how you would write ANTLR, and finally how you would use ANTLR. I made this distinction myself, because for the author everything blends together, so while reading some parts you could be surprised ("should I write all this stuff?") only to guess some time later that you were looking at ANTLR internals, which are already written. This smooth drifting from one layer to another was a problem for me, because I kept missing which part I was supposed to implement myself and which was already implemented.
Every layer has flaws, starting from the basic problem that the book is too thin to fit a good explanation of all of them. In my opinion the author should have decided on one topic and focused solely on it: either a theory book or a book about using ANTLR. Here you have a little of this, a little of that, and in fact for a good understanding of the theory you should get another book, for a good understanding of ANTLR you should also use another book (actually there is only just one: The Definitive ANTLR Reference: Building Domain-Specific Languages (Pragmatic Programmers)), and for writing ANTLR -- I don't know what you should get, because I don't intend to do it.
Whenever some code is presented you will be surprised by the author's style of compressing space: since it is a printed book, not only do you not get any substitute for syntax highlighting, but you will read through single-line "if"s, and even "try-catch" blocks squeezed onto one line. The code, presented after all for educational purposes, looks more like the output of an obfuscator.
The theory part (on scanning and parsing) is the best part of this book (which does not mean it is good); it is possible to get some knowledge out of it, but if you are interested in those stages I would not recommend this book for learning about them. The later stages of compilation are described leaning heavily on ANTLR, and they are as obscure as everything about ANTLR in this book.
The ANTLR internals part -- I didn't pay much attention to it, because I don't understand why I should read about it in the first place. I have an input, I need the output; I am writing, for example, a grammar (for parsing), not a parser. This part is a mistake.
This leaves using ANTLR. I had high hopes for it, and it was at the same time the biggest disappointment. I already mentioned that it is hard to dissect this part from the others (especially from ANTLR internals), but worse: on parsing, the key point for me, there is more about the internals of ANTLR than about using ANTLR! With some struggle I figured out how to write a grammar in ANTLR, just to find out that the next chapter (generating an AST) is not based on parsing! Instead of gradually creating a more and more advanced DSL (for example), the book uses disjoint examples of algorithms implemented in ANTLR; for the AST it manually "injects" symbols to show how the AST is built. Who ever saw AST building for which the source was not parsing?
After reading the book, instead of a complete picture of the tool I have blurry images of how scanning is done, how parsing is done, how I can create an AST (in several ways), how to optimize... but how I should connect those stages -- I have no idea.
In the end I gave up on the book and on ANTLR altogether (only after reading the book did I find out I would have failed anyway, because I aimed to use C#, and ANTLR and C# are not a reliable pair). This experience was one of the most frustrating of my life, because I really tried to understand how to use ANTLR, and I had a solid background (at the time of reading I had already written all stages of a compiler), yet I couldn't move past the AST because of all this mumbling -- there are regular, irregular, homogeneous, heterogeneous trees (why does the parser have to know this?), but on how to build any of those trees using the output from the parser -- silence. On the other hand there is a big chapter about traversing trees (in general).
Maybe ANTLR is over-engineered, maybe I am not smart enough for this book; however, I spent about 2 months with both books by Terence Parr without any success, and on the other hand I was so annoyed that I wrote a lexer and LALR parser from scratch in 3 weeks. If I can still do the math, it means I spent less time on my custom tools and as a result got an even better understanding of building compilers. I also found much better books, so with the benefit of hindsight the time spent on reading this book was a complete waste, because I didn't learn literally anything new. It was an obstacle for me, not a help. It kept me thinking about artificial problems (like choosing heterogeneous or homogeneous trees; artificial because the choice is an ANTLR requirement) instead of moving forward to solving the problem.
It goes without saying that I don't recommend this book; simply put, avoid it. It is a mixture of solid knowledge, but the mixture is badly served. If you would like to learn about ANTLR (up to parsing and evaluating the parse tree), I highly recommend the free ANTLR 3.x tutorial by Scott Stanchfield (use Google; the same topic, ANTLR, but what a difference in teaching).
If you would like to learn about building a compiler using existing tools for scanning and parsing, I think Modern Compiler Implementation in Java could be a good pick as an introductory book (however, I didn't read the entire book, so I cannot say whether I recommend it). If you are interested in building your own parser, a killer book: Parsing Techniques: A Practical Guide (Monographs in Computer Science).
And if you prefer video lectures to a book, I could not praise enough Alex Aiken's Compilers course (free on the Coursera platform); now I can hardly stand anything less concise.
All those resources are good examples that, no matter what the topic, you can always use clear language and explain the problem in an understandable manner. And if you start with them, I doubt you will ever find this book useful, not to mention well written.
- Hardcover: 720 pages
- Publisher: Pearson; 1 edition (November 7, 2009)
- Language: English
- ISBN-10: 0136067050
- ISBN-13: 978-0136067054
- Product Dimensions: 7.6 x 1.6 x 9.2 inches
Brandon on March 25, 2002
Excellent book
I am 22. I found this rare book at a library sale for [money]. My interest at that time was compiler design, more out of curiosity than for any real project. So maybe this review is not from the perspective of a professional, but of a curious student 3 years ago. I did find this book to be rather incisive. The book is also heavy on terminology: in the first chapter the authors give a detailed description of different classes of compilers. The second chapter goes into lexical analysis. And in the next few chapters they give the student an exercise to write a small compiler that is rather trivial. That is the plus of this book: they give exercises for the student.
This book also has a chapter on scanning, which is the best I have ever seen in any compiler design book I have read. They talk about concepts of set theory as they relate to lexical analysis. Then they talk about regular expressions and finite automata. This book is a great read indeed, and very easy to read.
There are quite a few chapters dedicated to parsing. In the chapters related to parsing they compare top-down and bottom-up parsing. They even cover well-known utilities like Yacc. The last few chapters go into depth, chapter by chapter, on implementing control structures (conditional, iterative, recursive) and the appropriate runtimes, like code generation. There is even one chapter on the fundamental data structures for a compiler. The last chapter is called "Parsing In The Real World".
The code examples in this book are based on the ADA-CS language, of which there is a brief tutorial. But the code is just illustration, as they do not use a full language. I think this is important, because the book focuses more on design rather than design in a particular language.
I really can't find anything wrong with this book. I definitely got more than my money's worth, as I only spent [money] on it, but I would easily have spent [money]. Simply because I am drawn to this type of information. And even though in 1999, when I found this book, compiler design was not in much demand in the workplace, I still find this a great book for students.
I would encourage anyone to purchase this book, if you can find it; I'm sure it is very hard to find. My copy is an instructor's edition and was not previously for sale. But if you ever see this book at a yard sale or library sale, please pick it up, especially if you are a student.
Here is one review for the 1991 edition:
Mr James S Battle from London
Good treatment of difficult material, December 23, 1999
This book has a nice balance of theory and practical algorithms. There is enough detail to allow a (patient) reader to implement his own compiler tools, though like most other books on the subject, this book leaves you with the feeling that the area might have died about twenty years ago (no insult intended!); an update is needed, to include OO languages and some treatment of the complexities associated with parsing modern languages such as C++.
All things considered, still a great book, well worth the money.
- Hardcover: 302 pages
- Publisher: JEM-BRM, LLC (November 5, 2009)
- Language: English
- ISBN-10: 0982505736
- ISBN-13: 978-0982505731
- Product Dimensions: 6.1 x 0.7 x 9.2 inches
This exciting and practical book for compiler construction combines history and development of several early programming languages together with sufficient theory to develop a compiler for an extensive language. The book reflects the author's views that compiler construction can best be learned by the actual implementation of a compiler. A source language, equivalent to early translating languages, is developed. An object language consisting entirely of numbers is also developed.
The student will learn to write programs in the developed source and object language. Using the language C++, the author gently leads the student through the steps which are necessary to complete a working compiler in a one-semester effort. Extensive exercises at the end of each chapter keep the student's focus on the big project - the implementation of a working compiler.
- Hardcover: 801 pages
- Publisher: Morgan Kaufmann (October 27, 2003)
- ISBN: 155860698X
- Average Customer Review: based on 5 reviews.
- Amazon.com Sales Rank: #148,516 in Books
Paperback - 400 pages, Bk&Cd-Rom edition (March 26, 2001)
Addison-Wesley Pub Co; ISBN: 0201719622; Dimensions (in inches): 0.81 x 9.22 x 7.37
Grant Steinfeld from New York, NY United States
Parser Design for the 21st century, February 7, 2002
I found this powerful parser framework easy to understand (with a little help from my friends) and a pleasure to incorporate into my programmer's toolbox.
Aho is for computer scientists and mathematicians, while the organic nature of Steve's thinking and his elegant application of design patterns to the problem of creating an extensible parser is more up the alley of a biologist turned webmaster.
In less than a few days we were able to convert IDL to WSDL, in less than 100 lines of code!
The only issue I had was that the text sometimes could have benefited from some graphical depiction of the concepts, or even an accompanying flash animation / demo website. Maybe in the next edition?
Paperback - 448 pages, Bk&Cd-Rom edition (February 7, 2001)
Prentice Hall PTR; ISBN: 0130258784; Dimensions (in inches): 1.21 x 9.22 x 6.99
A reader from San Jose, CA United States -- Worth reading if you have interest in code generation, February 21, 2002
This book is definitely interesting for understanding how code generation works and how to utilize some of the newer technologies like XML and XSL to generate software. I am very impressed with some of the new, advanced code generators like CodeCharge, which utilize XML and XSL but do not give us insight into the internals of how they work.
While those tools prove that XML and XSL are great for generating code, this book explains how it is done.
Soumen Sarkar from Fremont, CA United States -- The ideas in the book are worth exploring, February 9, 2002
Agreed that XML may not be the best language to capture domain specification expressiveness. But use of XML/XSLT to do custom code generation has the benefit of rapid application prototyping and development. The crucial fact is that the domain specification is captured in XML only relatively few times, and project software developers mainly use the generated code. The question is how many people in the project are exposed to the 'ugliness' of XML, and how many times. The advantages of 'neat' code generation far outweigh the disadvantages of an 'ugly' domain specification in XML.
In a real network management software development project I achieved 60% generated code (EJB, SNMP, Java utilities) by using custom code generation with XML/XSLT. Only I dealt with XML; the other software developers happily used the generated code. You can imagine the lead the project had, and continues to have, because of the use of XML/XSLT in project-specific custom code generation. The code generation system is stable now -- any new addition to the EJB or SNMP model results in thousands of lines of Java/SQL/XML/SVG code without any additional effort.
I would, therefore, continue to recommend the book as worth exploring. This book really contributed new techniques to software development. More specifically, with XML/XSLT you have freely available tools to implement "model driven programming" in your software project.
A reader from Victoria, Canada -- So-so, January 15, 2002
While the book has interesting ideas, it ignores useful results of the domain-specific language community. More important, it preaches using XML as a domain-specific language, which is in my opinion a disastrous idea. Terence Parr (jGuru.com) provides an excellent argument why this is the case in his article "Answers to the question 'When shouldn't you use XML?'", August 2001, IBM developerWorks, XML zone:
"XML is a poor human interface: Humans have an innate ability to apply structure to a stream of characters (sentences), therefore, adding markup symbols can only make it harder for us to read and more laborious to type.
The problem is that most programmers have very little experience designing and parsing computer languages. Rather than spending the time to design and parse a human-friendly language, programmers are using the fastest path to providing a specification language and implementation: "Oh, use XML. Done." And that's OK, but I want programmers to recognize that they are providing an inferior interface when they take that easy route."
Besides, the book is poorly typeset. It appears that the font was increased until the book had more than 400 pages. I have never seen a bigger font in a computing book! I don't know why Prentice Hall endangers their good reputation with such a poorly typeset publication. Better try to borrow the book first before potentially wasting your money.
by Dick Grune (Editor), Henri Bal, Ceriel Jacobs, Koen Langendoen
Paperback - 754 pages 1 edition (August 30, 2000)
John Wiley & Sons; ISBN: 0471976970
- Paperback: 864 pages ; Dimensions (in inches): 1.22 x 9.22 x 7.42
- Publisher: Addison-Wesley Pub Co; 1st edition (June 6, 2000)
- ISBN: 0201309777
- Average Customer Review: Based on 15 reviews.
A reader from Vermont
5 Stars with caveats......., October 10, 2000
It's hard to tell from the title of this book who will benefit from reading it, but from a practical standpoint C++ library designers and those with an interest in the "bleeding edge" of software engineering should find it very enlightening. The primary focus of this book is speeding up the lifecycle of program design by utilizing "Generative Programming". GP is a fancy name for programming using domain-specific notations and generating highly optimized code without burdening the application programmer with the low-level details of domain libraries.
- Chapter 1 "What is this book about?" - The authors describe GP. Short and sweet.....
- Chapter 2 "Domain Engineering" - A rather dry, pedantic review of current Domain Engineering methods. This chapter reads like a PHD lit review. Boring....
- Chapter 3 "Domain Engineering and OO Analysis and Design" - Why OO Analysis isn't appropriate for designing reusable libraries and analysis methods that are more suitable for the task. Quick and painless....
- Chapter 4 "Feature Modeling" - One of the high points of the book. For those of you who have been stymied by the inflexibility of UML, the authors introduce the technique of "feature diagrams" which allow library designers to defer decisions like inheritance vs. aggregation until later in the design. Potentially very useful.
- Chapter 5 "The Process of GP" - Describes how GP should work in an ideal world (which unfortunately doesn't exist yet). A bit too abstract.....
- Chapter 6 "Generic Programming" - Describes type based programming (i.e. C++ templates) and various languages support for Generic Programming. Java programmers won't like this one!
- Chapter 7 "Component-Oriented Template-Based C++ Programming Techniques" - The title pretty much says it all. Good introduction to C++ templates.
- Chapter 8 "Aspect-Oriented Programming" - Aspects are portions of code that have little to do with the actual intent of the code. Examples are synchronization and error handling. This chapter describes how messy aspects can make code and how to separate aspects from core functionality. Good stuff....
- Chapter 9 "Generators" - Describes how ideal code Generators should work. Good introduction to the topic.
- Chapter 10 "Static Metaprogramming in C++" - For me this is the high point of the book. Compile time control structures such as IF<>, SWITCH<>, DO<> and WHILE<> are introduced. These can be used to generate configurable types as shown in later chapters. These structures are difficult to debug but if used conservatively are very powerful!
- Chapter 11 "Intentional Programming" - A description of Microsoft's Intentional Programming environment. IP is the ideal GP development environment that allows library designers to enhance the main IDE with domain specific libraries. Developers interact directly with the source parse trees that are rendered to the IDE in a domain specific manner. The description is interesting but the IP Software is potential Vaporware and I'm kinda sick of reading about MS development tools that will change the world (C# anyone????)
- Chapter 12-14 - The final chapters describe how to build template class generators that allow the application programming to specify functionality as a template parameter and the generator will build the type. It's as close to GP as we can get today. A list container class, bank account class and a highly optimized matrix library are designed using the GP methodology. It's nice to see the authors actually practicing what they preach.
Aside from the overly academic feel to the book and touting Microsoft fantasy-ware (which may become available... who knows?) this book offers much food for thought for system designers and C++ library implementers. The template tricks described are difficult to debug but with a little luck future compilers will provide better support for this style of compile time design. I look forward to the 2nd or 3rd edition of this book when this stuff matures.
Selected Reviews
SVENSSON KURT from Stockholm, Sweden
- Hardcover: 856 pages ; Dimensions (in inches): 1.73 x 9.57 x 7.70
- Publisher: Morgan Kaufmann; 1st edition (January 15, 2000)
- ISBN: 1558604421
- In-Print Editions: Paperback | All Editions
A classic, May 30, 2003
A reader from Los Angeles
For everyone who has ever worked on or plans to implement a compiler/interpreter. It gave me many new insights into different implementation issues. This book is written in such a pedagogical, clear and exciting way that it can be read like a novel or a thriller (pick your choice).
A magnificent achievement -- a bedrock of knowledge, for life, March 17, 2002
Cher-Wah Tan from Texas, USA
I am in the process of reading this book for a review, and already I am compelled (at the conclusion of chp. 7 of 13) to write a glowing review. This book truly is an achievement, and it lives up to my predecessor's comments. I hasten to emphasize that this textbook combines form and content to a very high degree: it is written superbly with great clarity, the topics are organized extremely well and meaningfully, and finally it provides a comprehensive overview of all aspects of programming.
In the course of my reading, I have never felt the need to skip sections; there are no sections that are abstruse or cursorily covered. All sections are integrated with the others and each section offers very useful knowledge. The author clearly displays a profound understanding of all aspects of his endeavor.
I must emphasize that in the vast majority of cases with textbooks, in any academic area, the impression is that the author has intimate knowledge of 60% of the material he covers, and as for the remaining 40% he has at best good knowledge or passing familiarity, but is able to speak on these topics because of his qualifications. The greatest merit of this book is that one can very profitably go through a self-study program through all 13 chapters and come away much superior to one's peers in college or graduate school or industry (I qualify this statement at present: I have read only 7 chapters yet, but intend to read all 13).
Finally, if you are familiar with the excellent book on Computer Architecture by Patterson and Hennessy, then I say that this book is on a par with that venerable textbook.
Tough Topic - Crystal Clear Explanation, June 3, 2001
I have always enjoyed reading programming-language and compiler books, and most of them are quite tough on a first read. Programming Language Pragmatics is one huge exception. None of the books I have read come close to the clarity that this book exhibits. On many occasions, the choice of words and presentation in this book has made me go 'Wow, I thought I already knew this stuff...'
Besides core topics, it has interesting discussions of concurrency, data abstraction (object orientation) and non-imperative programming models (functional and logic).
TOC (with my comments)
- Ch. 1 Introduction
- Ch. 2 Programming Language Syntax (theory of Regular Expression, Context-Free Grammars, Automata etc)
- Ch. 3 Names, Scopes, and Bindings (binding, scope rules, closures etc)
- Ch. 4 Semantic Analysis (attribute grammars, attribute flow, syntax tree etc)
- Ch. 5 Assembly-Level Computer Architecture (keeping the pipeline full, register allocation etc)
- Ch. 6 Control Flow (expression evaluation, iteration, recursion, nondeterminacy etc)
- Ch. 7 Data Types (type checking, pointers and recursive types etc)
- Ch. 8 Subroutines and Control Abstraction (stack layout, calling sequences, parameter passing etc)
- Ch. 9 Building a Runnable Program (back-end compiler structure, intermediate forms etc)
- Ch. 10 Data Abstraction and Object Orientation (encapsulation, inheritance, dynamic method binding, multiple inheritance, the object model of smalltalk)
- Ch. 11 Nonimperative Programming Models: Functional and Logic Languages
- Ch. 12 Concurrency (shared memory, message passing etc)
- Ch. 13 Code Improvement (peephole, redundancy elimination, data flow analysis, loop improvement, instruction scheduling, register allocation etc)
App. A Programming Languages Mentioned
App. B Language Design and Language Implementation
This is a very impressive book; truly one of my best investments in books so far.
Hardcover - 560 pages (January 1998)
Cambridge Univ Pr (Short); ISBN: 0521583888 ; Dimensions (in inches): 1.22 x 9.59 x 7.77
Number of Reviews: 10
Hariharan Thantry from East Lansing, MI
Please don't buy it!, February 23, 2000
If you are a genius at writing compilers without ever needing a book, go ahead and buy this book. If you want to learn something, please buy the book by Aho, Ullman and Sethi. I bought this book as part of a course requirement and found it to be absolutely useless. The author doesn't care to explain anything, and his programming exercises are extremely vague.
Might be good if you have too much money to splurge.
I think it is recommended in the universities because of the support tools JLex and CUP, the documentation of which is even more pathetic!
M. Cleary: One of the best books, April 6, 2005
- Hardcover: 592 pages
- Publisher: Course Technology; 1 edition (January 24, 1997)
- Language: English
- ISBN: 0534939724
- Average Customer Review: based on 10 reviews.
- Amazon.com Sales Rank: #177,613 in Books
This book is outstanding! The Dragon Book is way overhyped. I have tried again and again to follow the Dragon Book, and each time I found it too difficult. On the other hand, Louden's book has answered many questions that I had in a clear, concise manner! I love this book! I have also flipped through almost all other compiler/interpreter books on the market in various bookstores, but none of them compare.
This is *THE* book on introductory compiler design. Other books you might want if interested in writing your own programming language/compiler are "Programming Language Pragmatics", "Lex and Yacc", "Java Virtual Machine Specification" and "Virtual Machine Design and Implementation in C/C++".
meerkat "Captain Meerkat" (Moscow, ID USA): I taught from this book, May 6, 2003
This is an excellent basic book on compilers. Its strength is its strong practical approach combined with the use of YACC/LEX technology. It hand-holds you through the development of a simple compiler. If I wanted to learn about compilers, I would read this first. Its weakness is that it is too narrow: there are plenty of language features that are addressed only in passing. Its goal is to get a compiler built. For a compilers 101 class there is no better book.
Paperback: 864 pages
Publisher: Wiley; 2 edition (August 10, 1996)
Language: English
ISBN-10: 0471113530
ISBN-13: 978-0471113539
Product Dimensions: 7.4 x 1.8 x 9.2 inches
Shipping Weight: 2.8 pounds
Average Customer Review: 4.0 out of 5 stars (28 customer reviews)
David Hunter, 4.0 out of 5 stars: Good, but repetitive, January 12, 2002
Effectively, you purchase this text to learn how to write compilers and interpreters, and the book does this well. The downside is the fact that 50-60% of the book is repetitious code. You are quickly thrown into concepts that help define how a compiler works. Details covered range from the functions of a compiler down to function blocks of discrete code.
Exceptionally thorough, this book is written in a very linear fashion. Almost as if 'A to Z', you're taken from basic line indexing through assembly output for x86. Provided you have the patience to properly work through this book, once you finish, you will definitely have the tools to write your own compiler.
Overall, this is a pretty good book. I would not say great, because it does not keep a steady 'beat' with its steps. Fast and slow, it can be disorienting for some people. Rather than spending pages upon pages on code, I would like to see a CD included with the book. The code would be replaced by simplified function blocks to help speed the process. (To *really* grasp what the author is doing, you have to decipher the exact details of the code.)
Sean O'Sullivan, 2.0 out of 5 stars: Mak is useful, but do use it with caution, April 15, 2000
There are several things you should know about this book:
- The book implements a top-down or recursive-descent parser, as opposed to a standard shift-reduce parser. This is *very* important, as lex/yacc, Visual Parse++, and other parsing tools are efficient shift-reduce machines. Thus, the parser isn't really portable. Even so, I did find the symbol table design that's used by the parser to be critical for what I needed.
- The printed material is mostly (say 70%) code listings; thus even though the book is a whopping 838 pages, it would be much slimmer with fewer listings. The code is downloadable from the publisher's (Wiley) site.
- The 30% of text and figures that are in the book could be much more insightful. For example, Chapter 11 - the interactive debugger should at least have some description (screenshots perhaps) of how to use the debugger. (Hint, the commands end with a semi-colon.)
- Even though this book is C++ oriented, it doesn't use standard containers like linked lists or trees (maps/sets). The classes have pointers in them that make each class also act as its own node in a list or whatever. This makes the design much more confusing than it needs to be.
- The symbol table implementation has heavy circular dependencies. Quite honestly, I don't know of a better implementation (yet). This does, however, pose a problem if you need to extend the design (to use STL containers, to self-serialize, etc.)
The book has been a godsend, but I couldn't honestly let the 4 and 5 star reviews sit unchallenged. If I had known the above sooner, I could have saved quite a few weekends.
I think an Ideal Writing Compilers book would come bundled with a thirty day version of Visual Parse++ or Dr. Parse, and work from there.
Nathan Moore says:
I can't think of any reason that a recursive descent parser would be considered any less portable than a yacc parser. Most introductions to parsers present recursive descent first because it's easier to understand than what a yacc parser does, easier to construct by hand (making your own table-driven parser of any type is pretty difficult), and easier to debug if you don't get it right the first time.
In addition, it's pretty straightforward to add good error messages to a recursive descent parser, and error recovery is more straightforward than with yacc.
As for actual production compilers, recursive descent is not dead: a recursive descent parser can parse non-context-free languages that a yacc parser can't really handle, since in a recursive descent parser you are already coding in what is almost certainly a Turing-complete language, while yacc's language recognition is limited to the power of a pushdown automaton.
This is likely to be a benefit to many in this book's target audience, who may be writing for an input language not of their own design, which may not be context free.
Stefan Vorkoetter says:
I agree with Nathan on all points. Particularly, recursive descent parsers are _more_ portable, since they can be ported to any machine that has a compiler for the language you wrote it in (e.g. C or C++).
Furthermore, recursive descent parsers force you to actually understand the grammar of the language you are parsing. I've written parsers for C, SQL, C++, and Modelica, all using recursive descent.
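To make the recursive descent argument in these comments concrete, here is a minimal sketch of a hand-written recursive descent parser/evaluator for arithmetic expressions. The grammar and class are invented for this note and have nothing to do with Mak's actual code; they only illustrate the one-function-per-nonterminal style being discussed.

```cpp
#include <cctype>
#include <iostream>
#include <stdexcept>
#include <string>

// Minimal recursive descent parser/evaluator for the grammar:
//   expr   -> term   (('+' | '-') term)*
//   term   -> factor (('*' | '/') factor)*
//   factor -> NUMBER | '(' expr ')'
// Each nonterminal becomes one function; the call stack does the parsing.
class Parser {
public:
    explicit Parser(std::string text) : src_(std::move(text)) {}
    double parse() {
        double v = expr();
        skipSpace();
        if (pos_ != src_.size()) throw std::runtime_error("trailing input");
        return v;
    }
private:
    double expr() {
        double v = term();
        while (peek() == '+' || peek() == '-')
            v = (get() == '+') ? v + term() : v - term();
        return v;
    }
    double term() {
        double v = factor();
        while (peek() == '*' || peek() == '/')
            v = (get() == '*') ? v * factor() : v / factor();
        return v;
    }
    double factor() {
        if (peek() == '(') {
            get();
            double v = expr();
            if (get() != ')') throw std::runtime_error("expected ')'");
            return v;
        }
        return number();
    }
    double number() {
        skipSpace();
        size_t start = pos_;
        while (pos_ < src_.size() &&
               (std::isdigit((unsigned char)src_[pos_]) || src_[pos_] == '.')) ++pos_;
        if (start == pos_) throw std::runtime_error("expected number");
        return std::stod(src_.substr(start, pos_ - start));
    }
    char peek() { skipSpace(); return pos_ < src_.size() ? src_[pos_] : '\0'; }
    char get()  { char c = peek(); if (pos_ < src_.size()) ++pos_; return c; }
    void skipSpace() {
        while (pos_ < src_.size() && std::isspace((unsigned char)src_[pos_])) ++pos_;
    }
    std::string src_;
    size_t pos_ = 0;
};

int main() {
    std::cout << Parser("2 * (3 + 4) - 5").parse() << "\n";  // prints 9
}
```

Because each grammar rule maps to one ordinary function, adding a specific error message at any point (Nathan's point above) is as easy as editing that function.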
Hugh K. Boyd, 5.0 out of 5 stars: Excellent Treatment of a Tough Subject, September 27, 2001
I bought this book in 1996 when I was a CS graduate student. The course text was the traditional "dragon book", which is a complete nightmare to understand. I read this book in hopes of better understanding how compilers and interpreters are implemented, and to this day I feel like I hit the jackpot. The book focuses primarily on the practical implementation of language interpreters and compilers and includes the code (C++) for a full-featured Pascal interpreter (not just a minimal implementation that interprets a few statements). The author walks the reader through each class virtually line by line and presents the material in a way that any intermediate-level C++ developer can easily understand.
Notwithstanding the pragmatic focus of this book, it also provides excellent treatment of the theory of compiler design. While it is at least 5 years old, I still keep this book in my library.
Hardcover - 404 pages (August 1996)
John Wiley & Son Ltd; ISBN: 0471941484 ; Dimensions (in inches): 1.19 x 9.52 x 7.69
Amazon.com Sales Rank: 46,548
Number of Reviews: 3. For the complete review, see the Sept '97 issue of Dr. Dobb's Journal.
Hardcover - 606 pages (March 1, 1995)
Addison-Wesley Pub Co; ISBN: 0201422905 ; Dimensions (in inches): 1.36 x 9.49 x 7.02
Amazon.com Sales Rank: 173,987
Number of Reviews: 1
Not very practical. The authors are too preoccupied with mathematical formalisms.
Authoritative, informative, and dull, November 28, 1999
Reviewer: Ray Dillinger from Silicon Valley
This is a useful and highly informative text. It covers techniques and structures for the efficient compilation of OO, functional, and logic programming languages -- languages not well covered by the Dragon Book. The code examples are sparse, and in pseudocode. The authors present a lot of theory as mathematical formalisms -- one of the most precise and complete ways to do it, of course, but reading it is uphill work. They also cover technique and give a reasonable discussion of the complexity of various approaches. The coverage of detail is absolutely superb.
However, to my eye and mind, the book is dreadfully dull. I find most compiler texts fun and engaging, inviting me to explore new ideas and make judgments about approaches. By contrast, this text is like being led by the hand (or by the nose) through every decision, idea, and comparison by someone who knows everything there is to know about it and doesn't care what you think or whether you get it. The technique is presented as an implementation of the theory, but real-world examples of situations requiring the application of that theory are scarce. Finally, the entire thing is written without a trace of wit or humor. I can't fault this book technically -- but I'm not confident of its ability to hold a student's attention.
Amazon.com
- Paperback: 592 pages
- Publisher: Addison-Wesley Professional; 1 edition (February 10, 1995)
- Language: English
- ISBN-10: 0805316701
- ISBN-13: 978-0805316704
- Product Dimensions: 7.2 x 1.2 x 9.1 inches
Andrew Kadatch (Redmond, WA United States): Excellent, December 12, 2001
This book is definitely _not_ for beginners, but compilers are not supposed to be written by novices -- if there is rocket science in computers, it is compiler development. The crystal-clear style and language make this book easy reading, and LCC is the best non-optimizing compiler I've seen (and believe me, I've seen many compiler sources): an orthogonal, easy-to-follow design, well-thought-out data structures and overall architecture.
I treat this book as a perfect collection of brilliant ideas, many of which you will find implemented in most commercial compilers.
Will it help you write your own compiler? Sure. Are you thinking about an IR (internal representation) that will be easy to create and, most important, to walk through and manipulate? Take a look at how Fraser et al. did it; they did it well. Wondering how to write a front end or code generator? It's all there. Sure, blind copying won't work -- an optimizing compiler will call for a way more sophisticated BURG-like technique (one of the best-known code generation techniques by now) -- but, all in all, it'll be BURG-like, and that's in the book as well.
So, if you want to show your students (or learn yourself) how compilers should be written, you cannot find anything better than LCC accompanied by this book. Fraser's team did it right.
Paperback / Published 1996
Paperback 2nd edition (July 1996)
John Wiley & Sons; ISBN: 0471113530 ; Dimensions (in inches): 1.90 x 9.20 x 7.50
Amazon.com Sales Rank: 27,811
Avg. Customer Review:
Number of Reviews: 6
Great Introduction, March 22, 2000
Reviewer: Kevin P. Albrecht from Tampa, Florida
This is a good introduction for people with no previous knowledge of writing a compiler. I recommend a good working knowledge of C++; and if you know Pascal, you're even better off. Knowledge of basic data structures (stacks, linked lists, binary trees) is also important. The language that he implements is Pascal, but it would be a simple task to implement another language.
A fine book on compiler construction using C++, August 30, 1999
Reviewer: Lee Carlson ([email protected]) from St. Louis, MO
This book gives a very detailed discussion of how to write a compiler using C++. As such it could function as a supplementary textbook for a course in compilers or as one for an advanced course in C++. The author describes in detail every step of the way, and it makes interesting and fun reading. Buy it: it is well worth the price.
- Hardcover: 1919 pages
- Publisher: Prentice-Hall; 2nd edition (March 27, 1990)
- Language: English
- ISBN-10: 0131550454
- ISBN-13: 978-0131550452
Olivier Langlois: My best compiler book, November 2, 2006
This book is more accessible than the Dragon book (Compilers: Principles, Techniques, and Tools) but is less complete.
This book presents complete source code for parser generator tools and a C compiler.
Even though this book is getting a little bit old and targets the DOS platform, that should not stop you from acquiring this goldmine of very useful information for anyone interested in compilers, at a very reasonable price.
Anonymous:
I have had this book for 8 years. It clearly describes compiler theory, with examples. It is very useful when I develop a very fast parser. (The code generated by lex isn't fast enough.) I am not in the compiler-writing business. This book is perfect for me.
- Paperback: 812 pages
- Publisher: Addison Wesley (July 11, 1991)
- Language: English
- ISBN-10: 0805321667
- ISBN-13: 978-0805321661
eoi from LA, CA
Crafting a Compiler with C offers an innovative approach to compiler design for students or professional programmers who use C. Through numerous examples and exercises, you'll learn how to design a working compiler from start to finish. The book also provides balanced coverage of both theoretical and implementation issues, with detailed discussions of standard compiler topics such as top-down and bottom-up parsing, semantic analysis, intermediate representations, and code generation. All the procedures in this book are presented in a readable, C-based notation. Features:
- Based on the best-selling Crafting a Compiler.
- Balances an excellent, readable introduction to compiler theory with a wealth of realistic compiler design examples and exercises.
- Emphasizes the use of compiler tools that generate parsers and scanners.
- Discusses LR parsing and reduction techniques thoroughly.
- Introduces FLex and ScanGen early.
- Includes optional advanced topics at the end of each chapter.
Chapter 1 Introduction
An overview of the compilation process begins the text. The concept of constructing a compiler from a collection of components is emphasized. The idea of using tools to generate some of these components is introduced.
Chapter 2 A Simple Compiler
A very simple language, Micro, is presented, and each of the components of a compiler is discussed with respect to compiling Micro. Parts of the text of a compiler for Micro (written in Ada) are included in this chapter. The compilation of features of more comprehensive Ada subsets is the motivation for the techniques presented in the following chapters.
Chapter 3 Scanning Theory and Practice
The basic concepts and techniques for building the lexical analysis component of a compiler are presented. This discussion includes both the development of hand-coded scanners and the use of scanner generation tools for implementation of table-driven scanners.
Chapter 4 Grammars and Parsing
Fundamentals of formal language concepts and grammars are presented in this chapter, including context-free grammars, BNF notation, derivations, and parse trees. Since First and Follow sets are used in the definitions of both top-down and bottom-up parsing techniques, they are defined in this chapter. A discussion of language and grammar relationships is also included.
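As a quick illustration of what computing First sets involves (the standard fixpoint iteration, not necessarily this book's particular presentation), here is a minimal C++ sketch over a toy expression grammar; the grammar and all names are invented for this note.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Iterative (fixpoint) computation of FIRST sets for a toy grammar:
//   E -> T E'      E' -> + T E' | epsilon      T -> id | ( E )
// "eps" stands for the empty string; anything not in 'nonterms' is a terminal.
int main() {
    using Rule = std::pair<std::string, std::vector<std::string>>;
    std::vector<Rule> rules = {
        {"E",  {"T", "E'"}},
        {"E'", {"+", "T", "E'"}},
        {"E'", {}},                       // epsilon production
        {"T",  {"id"}},
        {"T",  {"(", "E", ")"}},
    };
    std::set<std::string> nonterms = {"E", "E'", "T"};
    std::map<std::string, std::set<std::string>> first;

    bool changed = true;
    while (changed) {                     // repeat until no FIRST set grows
        changed = false;
        for (const auto& [lhs, rhs] : rules) {
            auto& f = first[lhs];
            size_t before = f.size();
            bool allNullable = true;
            for (const auto& sym : rhs) {
                if (!nonterms.count(sym)) {          // terminal: add it, stop
                    f.insert(sym);
                    allNullable = false;
                    break;
                }
                for (const auto& t : first[sym])      // add FIRST(sym) \ {eps}
                    if (t != "eps") f.insert(t);
                if (!first[sym].count("eps")) { allNullable = false; break; }
            }
            if (allNullable) f.insert("eps");         // whole rhs derives epsilon
            if (f.size() != before) changed = true;
        }
    }
    for (const auto& [nt, f] : first) {
        std::cout << "FIRST(" << nt << ") = { ";
        for (const auto& t : f) std::cout << t << " ";
        std::cout << "}\n";   // FIRST(E)={( id} FIRST(E')={+ eps} FIRST(T)={( id}
    }
}
```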
Chapter 5 LL(1) Grammars and Parsing
Top-down parsing is presented as the initial approach to syntax analysis. Both recursive descent and LL(1) are discussed, with an emphasis on the latter. Use of parser generators is a major focus of this chapter.
Chapter 6 LR Parsing
Bottom-up parsing is presented as an alternative approach to syntax analysis. LR, SLR and LALR parsing concepts are introduced and compared with LL techniques. Again, use of parser generators is a major focus of the chapter.
Chapter 7 Semantic Processing
The fundamentals of semantic processing in conjunction with top-down and bottom-up parsers are presented in this chapter. Topics include a comparison of alternative compiler organizations, addition of action symbols to a grammar (for top-down parsing), rewriting grammars for "semantic hooks" (for bottom-up parsing), definition of semantic records and use of a semantic stack, checking semantic correctness, and producing intermediate code.
Chapter 8 Symbol Tables
This chapter stresses the use of a symbol table as an abstract component, utilized by the rest of the compiler through a precisely defined interface. Possible implementations are presented, followed by discussions of symbol tables for handling nested scopes and language features used to define names accessible from surrounding scopes (such as records and Ada packages).
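The "symbol table as an abstract component" idea is easy to make concrete: a nested-scope table can be a stack of hash maps behind a small interface. The sketch below is invented for this note (the interface is not the book's), assuming a simple declare/lookup protocol.

```cpp
#include <iostream>
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

// A nested-scope symbol table: one hash map per open scope.
// openScope/closeScope bracket blocks; lookup searches inner scopes first.
struct SymbolInfo { std::string type; int offset; };

class SymbolTable {
public:
    void openScope()  { scopes_.emplace_back(); }
    void closeScope() { scopes_.pop_back(); }
    bool declare(const std::string& name, SymbolInfo info) {
        return scopes_.back().emplace(name, info).second;  // false: redeclaration
    }
    std::optional<SymbolInfo> lookup(const std::string& name) const {
        for (auto s = scopes_.rbegin(); s != scopes_.rend(); ++s) {
            auto hit = s->find(name);
            if (hit != s->end()) return hit->second;
        }
        return std::nullopt;  // undeclared identifier
    }
private:
    std::vector<std::unordered_map<std::string, SymbolInfo>> scopes_;
};

int main() {
    SymbolTable st;
    st.openScope();                             // global scope
    st.declare("x", {"integer", 0});
    st.openScope();                             // inner block
    st.declare("x", {"real", 4});               // shadows the outer x
    std::cout << st.lookup("x")->type << "\n";  // prints: real
    st.closeScope();
    std::cout << st.lookup("x")->type << "\n";  // prints: integer
}
```

Because the rest of the compiler sees only declare/lookup/openScope/closeScope, the implementation behind the interface can later be swapped for hashing with scope-tagged entries or any other scheme without touching the semantic routines.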
Chapter 9 Run-time Storage Organization
Basic techniques for run-time storage management are presented, including discussions of static allocation, stack-based allocation and generalized dynamic (heap) allocation.
Chapter 10 Processing Declarations
Basic techniques for processing type, variable, and constant declarations are discussed. The organization of this material is based on semantic routines for handling specific language features.
Chapter 11 Processing Expressions and Data Structure References
Semantic routines for handling variable references and arithmetic and Boolean expressions are outlined. Address computation methods for array elements and record fields are included in the discussion of variable references (a sketch of the standard array computation follows below). In this and the next two chapters, emphasis is placed on techniques for checking semantic correctness and generating intermediate code for use by a target code generator.
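For reference, the standard row-major address computation for an element A[i][j] of an array declared with bounds low1..high1 and low2..high2 looks like this (a textbook technique, not necessarily this book's exact presentation; the function name is invented for illustration):

```cpp
#include <cstddef>
#include <iostream>

// Row-major address of A[i][j] for an array declared A[low1..high1][low2..high2]
// with element size 'size'. A compiler emits exactly this arithmetic, folding
// the parts that are known at compile time into a constant.
std::size_t elementAddress(std::size_t base, long i, long j,
                           long low1, long low2, long high2, std::size_t size) {
    long columns = high2 - low2 + 1;
    return base + static_cast<std::size_t>((i - low1) * columns + (j - low2)) * size;
}

int main() {
    // A[1..3][1..4] of 8-byte elements at base 1000:
    // A[2][3] is at 1000 + (1*4 + 2)*8 = 1048.
    std::cout << elementAddress(1000, 2, 3, 1, 1, 4, 8) << "\n";
}
```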
Chapter 12 Translating Control Structures
Compilation techniques for features such as if statements, case statements, and various looping constructs are the focus of this chapter. A point of emphasis is effective use of a semantic stack or syntax tree to simplify the job of handling these constructs, which can be nested and which can extend over arbitrary amounts of program text. Students should gain an understanding of the advantage of this general technique over ad hoc approaches.
Chapter 13 Translating Procedures and Functions
Techniques for processing both declarations and calls of subprograms are presented. Since much of the complexity of this topic involves parameters, considerable material is provided that deals with building parameter descriptions, checking for correctness of actual parameters in subprogram calls, and code-generation techniques required by various parameter modes. The concept of a run-time activation stack is discussed here, and the support routines necessary to implement one are outlined.
Chapter 14 Attribute Grammars and Multipass Translation
Multipass translation is modeled by traversal over an intermediate form. The attribute model of information flow receives particular emphasis.
Chapter 15 Code Generation and Local Code Optimization
The code generator is presented as a separate component that translates from the intermediate code generated by the semantic routines to the final target code of the compiler. Such topics as instruction selection, register management, and use of addressing modes are presented. Use of a code generator-generator is discussed. Discussion of basic block optimizations is included in this chapter.
Chapter 16 Global Optimization
The focus of this chapter is on practical techniques that yield useful improvements from a moderate amount of effort. Thus the main sections of the chapter include global data flow analysis, optimizing subprogram calls, and optimizing loops.
Chapter 17 Parsing in the Real World
This chapter includes material on two major topics necessary for implementing practical compilers: syntax-error handling and table compaction. The error-handling section presents error-recovery and error-repair techniques applicable to recursive descent, LL and LR parsers. The table compaction techniques included are applicable to both LL and LR parser tables, as well as to scanner tables and any other situation requiring efficient storage with fast access to elements of sparse tables.
Mr. James S. Battle: Good treatment of difficult material, December 23, 1999
This book has a nice balance of theory and practical algorithms. There is enough detail to allow a (patient) reader to implement his own compiler tools, though like most other books on the subject, this one leaves you with the feeling that the area might have died about twenty years ago (no insult intended!); an update is needed, to include OO languages and some treatment of the complexities associated with parsing modern languages such as C++. All things considered, still a great book, well worth the money.
Luca Regini: 4.0 stars Good introduction, January 9, 2007
This book is quite dense and requires some serious work to be understood properly. It is quite complete, even if it is a bit old compared with the latest twists in compiler theory. It is a mix of theory and practical implementation. This is its main strength and its main weakness: it is neither a comprehensive theoretical work on compilers nor a complete "practical" tutorial. Anyway, it is a good introduction for the (college-level) student who is willing to do some serious work.
Dr R M. Siegfried: Handles a tough subject in a thorough and readable way, April 21, 2004
I teach compiler construction, and I personally hate the "Dragon book" because it is beyond the level of many students. Fischer and LeBlanc present most of the same material, and they make it readable. Theirs was the first book to devote an entire chapter to symbol tables, the central data structure that all components of a compiler use. There should be a law mandating that compiler courses use either this book or Thomas Parsons'.
Book and Disk / Paperback / Published 1994
Amazon price: $43.99 ~ You Save: $11.00 (20%)
Paperback - 452 pages Book&Disk edition (August 15, 1994)
John Wiley & Sons; ISBN: 0471597546 ; Dimensions (in inches): 1.15 x 9.18 x 7.46
Other Editions: Software
Number of Reviews: 1
A good first step, April 20, 1999
Reviewer: A reader from Omaha, Nebraska
This is a good, basic, "gateway" book on compiler and interpreter design and implementation. It can easily provide the reader with the basic concepts of this tricky topic in a way that will allow the reader to move on to more complicated materials. Having taken a compiler construction class in college using "Compilers: Principles, Techniques, and Tools", I can say that this book is much easier to understand, and I wish we had spent the first 2-3 weeks of the course covering the material therein.
If you are new to compiler construction or are interested in producing a simple interpreter, this book is for you. If you already consider yourself well read in compiler technology, this book may be of questionable value.
368 pages 1 edition (May 16, 1997)
Prentice Hall; ISBN: 0130481904 ; Dimensions (in inches): 0.90 x 9.28 x 7.07
Amazon.com Sales Rank: 203,118
Number of Reviews: 2
Kevin N. Weinhold from Kansas, USA
Concerning Tom Pittman, April 7, 2000
Tom Pittman is an excellent teacher. Having his instruction in writing is the second best thing. He was a pioneer in microcomputers, having created one of the first compilers available. Strongly Recommended.
Pavel Tatarintsev from Voronezh, Russia
Very good book on compilers, March 26, 2000
This book is one of the best I've ever seen on compiler design. It is one of those books that were written several years ago but remain very helpful. The language is not simple, but it is exact. I recommend it to all students and specialists who are interested in compiler architecture.
366 pages 2Nd/update edition (October 1992)
O'Reilly & Associates; ISBN: 1565920007 ; Dimensions (in inches): 0.87 x 8.98 x 6.01
Amazon.com Sales Rank: 12,571
Number of Reviews: 19
Anyone who wants to use compiler techniques in his/her project should probably start with these two tools or their derivatives. Lexical analyzer generation with Lex is essentially optional: you just need to know the interface to the syntax analyzer to write a better program yourself. In most cases a much better lexical analyzer can be written manually, but the syntactic analyzer needs to be generated, at least for a part of the language (expressions, etc.)
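To show how little machinery a hand-written lexical analyzer actually needs (and why generating one with Lex buys you little), here is a minimal sketch; the token set is invented for illustration.

```cpp
#include <cctype>
#include <iostream>
#include <string>
#include <vector>

// A minimal hand-written scanner of the kind that often outperforms
// lex-generated code: one pass, one switch on the current character class.
enum class Tok { Number, Ident, Op, End };
struct Token { Tok kind; std::string text; };

std::vector<Token> scan(const std::string& src) {
    std::vector<Token> out;
    size_t i = 0;
    while (i < src.size()) {
        unsigned char c = src[i];
        if (std::isspace(c)) { ++i; continue; }
        size_t start = i;
        if (std::isdigit(c)) {                       // integer literal
            while (i < src.size() && std::isdigit((unsigned char)src[i])) ++i;
            out.push_back({Tok::Number, src.substr(start, i - start)});
        } else if (std::isalpha(c) || c == '_') {    // identifier
            while (i < src.size() &&
                   (std::isalnum((unsigned char)src[i]) || src[i] == '_')) ++i;
            out.push_back({Tok::Ident, src.substr(start, i - start)});
        } else {                                     // single-character operator
            out.push_back({Tok::Op, std::string(1, src[i++])});
        }
    }
    out.push_back({Tok::End, ""});
    return out;
}

int main() {
    for (const auto& t : scan("x1 = 42 + y"))
        std::cout << "'" << t.text << "' ";
    std::cout << "\n";   // prints: 'x1' '=' '42' '+' 'y' ''
}
```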
Incomplete, poorly organized, and not very well written, April 6, 1999
Reviewer: A reader from Austin, TX
As with several other O'Reilly books, I found Lex & Yacc to be maddeningly uneven. The approach is to give a too-brief synopsis of the tool, then illustrate its use with a very specific example that, one suspects, is merely the handiest project the authors had available. I had a fair bit of programming experience when I bought the book, but none with Lex or Yacc. Some fundamental questions came up during the course of my muddling through, and these were left unanswered. I actually got more insight into these tools from a ~20-page web site on the topic.
The reference chapters are organized alphabetically ("ambiguities & conflicts", "bugs", ..., "%ident declaration"), and in a way that does not help someone who is looking for a specific answer (in trying to find out about the possibility of more than one parser in a program, who would think to look under 'v' for "variant and multiple grammars"?). These 'reference chapters' seemed more like a place to dump the information not discussed elsewhere.
Maybe it's a lost cause, finding a comprehensive, well-written introduction to such an arcane topic, but I'm still looking.
Hardcover / Published 1985
One of the best books on the topic
- Series: McGraw-Hill computer science series
- Hardcover: 816 pages
- Publisher: Mcgraw-Hill College (May 1985)
- Language: English
- ISBN-10: 0070651612
- ISBN-13: 978-0070651616
- Product Dimensions: 1.2 x 7 x 9.5 inches
By kievite on January 19, 2011
Format: Hardcover
The key feature of a good book is that it makes complex issues more understandable. Few compiler books belong to this category, as most are "over-excited" with formalisms, especially in the syntax area.
The few that are more practically oriented and contain source code often lose the key ideas behind the mass of code.
In this book the authors manage to cover a large body of compiler-related topics in a relatively unobscured manner. They took the most difficult middle road and tried to balance formal methods with pseudo-code. That is a very difficult task, and the fact that the authors managed it is a tremendous achievement in itself.
Essentially all chapters starting from chapter 8 (symbol table handling techniques) have by and large preserved their value for the reader/student. The most valuable part of the book is the part devoted to the discussion of optimization techniques. That's the area that distinguishes simple compilers from more complex ones, and some methods like constant folding are a must even for simple compilers/interpreters (see the sketch below). At the same time optimization greatly influences the way all other compiler tasks are performed, first of all the intermediate representation of the program. The authors cover the topic in two large chapters, 12 and 13 (pp. 610-721).
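Constant folding, mentioned above, is the simplest classic optimization: evaluate any subtree whose operands are all literals and replace it with the result. A minimal sketch over an invented expression AST (illustrative only, not the book's code):

```cpp
#include <iostream>
#include <memory>

// Minimal constant folding over an expression AST: if both operands of a
// binary node are literals, replace the node with the computed literal.
struct Node {
    char op = 0;                 // 0 means leaf (literal)
    long value = 0;              // meaningful for leaves
    std::unique_ptr<Node> lhs, rhs;
};

std::unique_ptr<Node> lit(long v) {
    auto n = std::make_unique<Node>();
    n->value = v;
    return n;
}

std::unique_ptr<Node> bin(char op, std::unique_ptr<Node> l, std::unique_ptr<Node> r) {
    auto n = std::make_unique<Node>();
    n->op = op;
    n->lhs = std::move(l);
    n->rhs = std::move(r);
    return n;
}

std::unique_ptr<Node> fold(std::unique_ptr<Node> n) {
    if (!n->op) return n;                    // literal: nothing to do
    n->lhs = fold(std::move(n->lhs));        // fold children bottom-up
    n->rhs = fold(std::move(n->rhs));
    if (!n->lhs->op && !n->rhs->op) {        // both operands are now literals
        long a = n->lhs->value, b = n->rhs->value;
        switch (n->op) {
            case '+': return lit(a + b);
            case '-': return lit(a - b);
            case '*': return lit(a * b);
        }
    }
    return n;                                // can't fold: keep the node
}

int main() {
    // (2 + 3) * 4 folds to the single literal 20 before any code is emitted.
    auto tree = fold(bin('*', bin('+', lit(2), lit(3)), lit(4)));
    std::cout << tree->value << "\n";        // prints 20
}
```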
Raymond: Easy to read and understand, August 15, 2003
The author has done a good job by presenting basic compiler theory and implementing a simple compiler using the Java programming language.
A reader: Best introduction ever written, September 22, 2001
Good illustration of compiler concepts. One of the better basic compiler books I have read so far. The next books should be "Programming Language Pragmatics" followed by "Advanced Compiler Design and Implementation".
I've purchased or borrowed 5 books on compiler design. There is no doubt that this book should be the choice for any introductory course. The authors explain everything tightly and provide a lot of actual examples in the text. All of it is in Java, of course. Don't worry if you don't use Java: it's very easy to understand if you have any experience with any OO language. I prefer Object Pascal and had no trouble whatsoever with the code. This book will not provide proofs or a lot in the way of choices for designing a compiler. This is good when you are starting out. The last thing you need if you actually want to learn about compiler design from front to back is a hundred different ways of doing the same thing. The text takes you through a small version of the "Triangle" language ("Mini-Triangle") - and the code for the entire Triangle language is available for download.
This book makes learning about compilers effortless for anyone with an OO background and a little knowledge of the most common algorithms learned in any intro course on algorithms. If you can't learn from this text, then don't bother with any other.
The next book I'd recommend after reading this text is the Dragon Book. Then you can try on Advanced Compiler Design for size - which I am doing at present.
A great book to read along with (or just before or after) this text is Programming Language Pragmatics. I read it in parallel. If I had to do it again, I'd probably read it first.
Hardcover - 796 pages (November 1985)
Addison-Wesley Pub Co; ISBN: 0201100886 ; Dimensions (in inches): 1.44 x 9.54 x 6.58
Amazon.com Sales Rank: 5,595
A reader from St. Louis, MO, USA
Excellent Introductory Compiler Text, September 28, 1999
This is a comprehensive and easy to understand text. It covers all the fundamental stages of compiler design, with plenty of explanation (both practical and theoretical). It doesn't exhaustively cover every conceivable topic, but it does leave you with a good taste of what's involved. Of course, it is not a book for beginning programmers, and there are very few code examples. Judging by the comments of some reviewers, I would suspect that they gave poor reviews because they lacked the prerequisite background (familiarity with a good HLL like C, data structures, mathematical background etc). As with any 'advanced' topic in computer science, there is quite a lot expected from you. Upon first reading, some topics occasionally seem overwhelming. Welcome to Earth. This is where your library card comes in. Do a little research and then come back to this text; you'll find that it is well organized and extremely clear. If you want a cookbook this book isn't for you. If you want a solid understanding of compiler fundamentals then this book is your best bet.
Great for hard-core compiler gurus, November 29, 1999
Reviewer: A reader from Dallas, TX
I picked this text up in anticipation of a compiler course at Georgia Tech. I have not read any other compiler books, so I have little to compare it to. However, I can definitely say that this is a book for people who are looking for "hard-core" compiler knowledge. It is a very dry and meticulous book. Contrary to the opinions of other reviewers, this is not an "easy to understand" text. It will take quite a bit of determination to get the most out of it. If you don't love the stuff, you'll stop reading at page 100 or so. As for the topics explained in this book, it seems to cover just about everything you will need to understand and write a full-blown compiler.
Paperback / Published 1996
Robert C. Morgan / Paperback / Published 1998
Amazon price: $59.95
Axel T. Schreiner, et al / Hardcover / Published 1985
William A. Barrett, John D. Couch / Hardcover / Published 1979
Hardcover / Published 1987
Hardcover: 493 pages
Publisher: Wiley; 1 edition (January 15, 1971)
Language: English
ISBN-10: 047132776X
ISBN-13: 978-0471327769
Product Dimensions: 9.9 x 7 x 1.3 inches
Shipping Weight: 2.3 pounds
A Customer on October 8, 1999
Excellent book which still remains excellent
This was one of the best books I read when I was a student 25 years ago. You really understand how to write compilers after reading it. It is invaluable for those who learn programming. I believe it contributed greatly to turning me into a professional programmer. It's a shame that this book is not available any more...
J. Conklin on December 21, 2007
It Works
Many years ago I used this book to build a compiler which generated test cases for a complex real-time system. The result was a syntax-oriented, single-pass, context-independent processor with no restricted variables. It all worked as described in the book. I believe the construction concepts are still valid, mutatis mutandis. Simply disregard the language anachronisms.
Amazon Customer on March 26, 2014
I read it 30 years ago when I worked on my bachelor's thesis; it was very clear and useful. After reading it I was convinced that I could write a compiler.
2003 edition
Hardcover, 1000 pages / Morgan Kaufmann Publishers / July 1997 / ISBN: 1558603204
Reviewer: A reader from Cupertino, CA USA
Must-Have CS book, August 9, 2000
If you are a good CSE engineer and want to become even better, this is a book that should be on your shelf. If you are a compiler engineer, it is a must-have. I agree that it is a collection of research papers, but it is the only comprehensive collection covering all aspects of compilation. I personally find it very helpful. The Dragon book is cool, but it is only for beginners. If you are a beginner, always start with the Dragon book, but don't abuse this classic book.
Paul Haahr ([email protected]) from San Francisco, CA
The definitive compiler book for the 1990s, September 23, 1998
This book is the comprehensive text for anyone working on an optimizing compiler for uniprocessor systems. It gives good detail on all major approaches and is up to date on important techniques like SSA form and partial redundancy information. As someone working directly in the field, it's saved me the effort of hunting up original research papers in many areas. One drawback for this book as a practical tool: the pseudocode used to illustrate examples is often pretty far from being suitable for real implementations.
A warning: this is not an introductory book, and people who want to learn about the basics of building a compiler should look elsewhere; perhaps Andrew Appel's "Modern Compiler Implementation" series. Muchnick's book is for people who want to write compilers which generate high-performance code.
Hardcover / Published 1995
Amazon price: $66.95
Hardcover - 564 pages (January 1995)
Addison-Wesley Pub Co; ISBN: 0805316701 ; Dimensions (in inches): 1.34 x 9.81 x 7.94
Amazon.com Sales Rank: 49,632
Number of Reviews: 3
This is a good book, but you need to know some compiler-writing basics to benefit from it. The authors use a recursive descent parser, so this book can be an excellent introduction to this approach.
Almost everything you need to know about a simple compiler, January 27, 1999
Reviewer: A reader from Cambridge, MA USA
I first read this book when I ported lcc 3.5 to the Alpha (and later helped tune the production 4.0 port). The book is extremely clear and complete with regard to the lcc compiler itself; it is an invaluable reference for anyone who works with lcc. In the two years since I last worked directly with lcc, I've consulted the book on numerous occasions; Messrs. Fraser and Hanson have a clear writing and programming style that makes this book (and the awesome paper that they wrote with Todd Proebsting on lburg) one of my standard "how-to" books on simple IR and code generation issues.
I'd only like to see more information about lburg; in particular, more about using lburg to do some simple optimizations. The writing style is clear and reasonably concise, but the constraints of retrofitting literate programming techniques onto an existing software project can make the code presentation a little fragmented. Still, I always found what I wanted and usually found the explanation to be quite good.
How they wrote *their* compiler, not how to write *yours*, April 5, 1999
Reviewer: A reader from USA
I did not like this book. First of all, the authors are egotistical, saying things along the lines of "this compiler *must* be great, hundreds of people are using it." Secondly, they wrote their compiler in pre-ANSI C, making it difficult to read. Similarly, they use a hokey "hypertext"-style format for presenting the source code, also making it difficult to read. Thirdly, their techniques are questionable: they don't use automatic tools for scanning or parser generation. In fact, their scanner is one big 'case' statement. Their parser is recursive descent, hand-written. This is one of the least maintainable and hardest to read parsing techniques. I do give them credit, though, for writing a compiler with easily changeable back ends. This part is way cool, especially with such diverse platforms as Sparc and x86. Finally, their writing is not easily read -- especially with the hokey code interspersed. I bought it wanting to learn about their code generation, but have decided to return it, and will probably buy Advanced Compiler Design and Implementation by Steve Muchnick instead.
Hardcover 2nd edition (December 1987)
Van Nostrand Reinhold; ISBN: 0317636367
Excellent book but not for a `crash course', November 4, 1999
Reviewer: Vikram Vijayaraghavan ([email protected]) from India
The top researchers in the field give us this (by now) legendary compiler book. Crisp and stimulating discussions of the various phases of compilers. A major disadvantage is that examples & exercises are few. While reading the stuff on LALR parsers for the first time, one feels woefully lost... serious stuff intended for careful study - not just another book for perusal.
The Best Book Available on Compiler Design, September 28, 1999
Reviewer: Fred Bourgeois ([email protected]) from Santa Cruz, California
The quintessential reference for anyone interested in the subject of compiler design and development. This sub-field of Computer Science forms a scientific core whose theory is universally applicable to so many areas of our field that every professional computer scientist and software developer/programmer should be intimately familiar with the basic tenets included: lexical analysis, parsing, optimization, symbol management, space vs. time considerations, and especially BNF (notation for specifying grammars). Even if you are not a compiler developer and have no intention of becoming one, this knowledge is so fundamental to being a good software developer and an intelligent user of compilers that no professional can afford not to have read this book and to keep it handy as a reference.
One of the few really good books, May 13, 1999
Reviewer: A reader from Princeton, NJ
It is really a great book, especially for self-study. Unlike newer variations on the same theme that are more concerned with stuffing a book with something that makes the table of contents look attractive, this one really covers things in detail. Very well written too. Books like this re-kindle the '...love of study, a passion which derives fresh vigour from enjoyment...' as Gibbon put it. It makes you suddenly recall why you are still in this damn profession. Keep it handy -- for psychological reasons, to be used in moments of developmental distress triggered by Microsoft "technologies".
They used to write good books (tm) <g>
Published in 1985 by Addison-Wesley.
This is actually a university-style introductory book that is heavy on theory but contains almost no implementation details. Mostly outdated, but some of the graph algorithms presented are still useful.
Alfred V. Aho, Jeffrey D. Ullman
Actually this is an algorithms and data structures book. Not very practical, but it contains descriptions of several interesting algorithms that are difficult to find elsewhere. Still valuable as a complementary volume to Knuth.
B. Lorho (Editor) / Published 1984
The material below is partially based on The Teaching About Programming Languages Project. There is a huge number of books of this type, and many are too scholastic and/or use (unnecessary) formalisms without justifying their usefulness.
- Hardcover: 654 pages
- Publisher: Prentice Hall; 3 edition (July 27, 1995)
- Language: English
- ISBN-10: 0136780121
- ISBN-13: 978-0136780120
- Product Dimensions: 1.2 x 7.2 x 9.8 inches
- Shipping Weight: 2.5 pounds
The first edition of this book (written by Terrence Pratt) was really impressive...
I also remember his excellent paper:
Control Computations and the Design of Loop Control Structures. TSE 4(2): 81-89 (1978)
I do not know much about Marvin Zelkowitz.
Author's WEB page. Courses that use this book:
Robert W. Sebesta / Hardcover / Published 1998
The author's WEB page: Concepts of Programming Languages (Addison-Wesley Publishing Company, 3rd edition, 1996).
Courses that use this book:
- See also course by Dick Botting
Hardcover / Published 1992
Amazon price: $75.93
Authors' WEB page: Essentials of Programming Languages by Daniel P. Friedman, Mitchell Wand, and Christopher T. Haynes (MIT Press and McGraw-Hill, 1992).
Courses that use this book:
- Chris Haynes
- Matthias Felleisen (who, however, "mostly" relies "on lecture notes")
- Gary T. Leavens
- Mitchell Wand
- Phillip J. Windley
- Andrew Wright (who, however, "mostly" relies "on lecture notes")
- graduate courses that use this book.
/ Hardcover / Published 1999
Author's WEB page: Principles of Programming Languages: Design, Evaluation and Implementation (Oxford University Press, third edition, 1999). Courses that use this book:
Programming Language Concepts and Paradigms (Prentice Hall International Series in Computer Science)
David A. Watt
Author's WEB page: Programming Language Concepts and Paradigms (Prentice-Hall, 1990).
Programming Language Essentials
Henri Bal / Paperback / Published 1994
Amazon price: $29.95 (Special Order)
Author's WEB page: Programming Language Essentials by Henri E. Bal (Addison-Wesley Publishing Company, 1994).
Author's WEB page: Programming Languages: An Interpreter-Based Approach (Addison-Wesley Publishing Company, 1990).
If you use this text, you might also be interested in graduate courses that use this book.
Hardcover / Published 1996
Authors WEB page: Programming Languages: Concepts and Constructs by Ravi Sethi (Addison-Wesley Publishing Company, 1996)
Hardcover / Published 1985
Hardcover / Published 1983
Published 1977