To sell a book of any worth to a major publisher, a writer needs a capable,
professional agent. On behalf of thousands of writers without an agent or access
to one via insider introduction, I will describe what it is like for an outsider
to try to gain representation.
My overall professional background: a writer
of 20 books, published in New York (Morrow, fiction; Ballantine, nonfiction),
and a freelancer with a long record of achievement in print, broadcast, and Internet
media worldwide for some of the best corporations, magazines, media outlets, and similar
interests.
Those credentials, plus $1, will buy you a really rotten cup of coffee.
Analysts aren't too bullish on self-publishing. "I don't think self-publishing
will be big," says Jupiter's Hertzberg. "Everybody's a writer or a filmmaker, yes,
but talent will still determine where the market is, and there aren't that many
talented authors."
5 trends in open source documentation
Certain trends in tech documentation stand out. We round up five top trends from 2016.
I've been doing open source documentation for a long time. Over the past decade, there have
been a lot of attitude shifts regarding authoring and publishing. Some of these trends seem to go
in cycles, such as the popularity of semantic markup. The latest trends move documentation closer
to code, what many have called docs as code. Let's look at a few of the larger themes in documentation
trends:
1. Git
When I first started doing documentation work for GNOME, we wrote our documentation in
DocBook and stored it in CVS repositories alongside our code. These days, most GNOME
documentation is written in Mallard and stored in a Git repository (after a brief stint
with SVN). Although formats and tools have changed, the constant factor is that sources
are stored in revision control, just like code.
It may seem odd to call this a trend when we've been doing it for so long, but a few things have
changed, and some of that revolves around what Git has brought to the table. Git is one of the decentralized
version control systems that arrived on the scene over the past decade or so. Some people continue
to use decentralized version control systems the same way they used CVS or SVN, but that doesn't
expose the real power of these systems. Documentation writers are increasingly proficient using Git
for what it is. They're creating development, staging, and production branches, and they're merging
disparate contributions. This wasn't as common just a few years ago.
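A minimal sketch of that branching workflow, with illustrative branch names and file paths:

git checkout -b develop                 # draft new material on a development branch
git add docs/install.md
git commit -m "Draft the install guide"
git checkout staging
git merge develop                       # promote drafts to staging for review
git checkout master
git merge staging                       # merge reviewed docs to production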
Git is certainly not the only decentralized version control system. There are also Bazaar and
Mercurial, to name just two, and you will find writers wielding the same power with those tools as
well. But Git has taken the majority of the mind share, thanks in large part to popular Git hosting
sites.
This is an area in which open source has led the trend in the overall software documentation
industry. A quick glance at technical writing forums will show plenty of people across the industry
looking for information on how to effectively transition to Git. In the past, they may have stored
their sources on a network drive with no revision control, or they may have used a proprietary management
system. Git and tools like it have drastically changed the way the entire software industry deals
with documentation.
2. Lightweight languages
There have always been plenty of choices for documentation source formats. There are semantic
XML formats, and SGML formats before that. There are TeX dialects and
troff dialects.
There are the source formats of word processors, page layout tools, and help authoring tools. There
are the internal formats of various wikis and content management systems. There's HTML. And there
are a handful of lightweight markup languages that are designed to be easy to type in a text editor.
People are increasingly choosing lightweight markup languages for a number of reasons. They are
usually easier to write, at least for simple things. They tend to play better with version control
systems, because they're generally line oriented. And they can help lower the barrier to entry for
new contributors, although you should be careful not to expect a change in source format alone to
drive lots of contributors to your project.
Lightweight markup languages have their downsides, too. The tools for working with them tend to
be limited in scope, and don't often provide the kind of data model you need to write other tools.
They also don't usually provide as much semantic information. With XML formats, for example, there
are a wealth of tools for translation, validation, link checking, status reporting, and various types
of testing and data extraction. This kind of tooling isn't currently as extensive for lightweight
formats. So although lightweight formats might ease the barrier to entry for new contributors, they
can also create new barriers to long-term maintenance. As with all things, there are always trade-offs.
The three most popular lightweight formats right now are Markdown, AsciiDoc, and reStructuredText.
Markdown is the simplest, but it doesn't offer much for anything but the most basic documentation
needs. It also comes in many different, slightly incompatible flavors, depending on which processing
tool you use. AsciiDoc offers more semantics and more types of elements. It originally focused on
being a front end to DocBook, but it has grown to natively support lots of output formats.
reStructuredText came from the Python community, and for a long time its use was largely limited
to Python projects. It has grown in popularity lately due to hosting sites such as Read the Docs.
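To make the comparison concrete, here is the same heading and link written in each of the three
formats (the content itself is illustrative):

Markdown:
# Installing
See the [user guide](https://example.org/guide).

AsciiDoc:
= Installing
See the https://example.org/guide[user guide].

reStructuredText:
Installing
==========
See the `user guide <https://example.org/guide>`_.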
3. Static site generators
Five years ago, the trend was to use wikis and blogging platforms to create documentation sites.
They were easy to set up, and giving people accounts to contribute was easy. Particularly brave people
would even open their wiki to anonymous contributions. These days, the trend is to keep sources in
version control, then build and publish sites with mostly static HTML files.
Generating static sites isn't new. My first job out of college was working on internal tools used
at a software company to build and publish static files for tens of thousands of pages of documentation.
But static sites have become increasingly popular for projects of all sizes, for a number of reasons.
First, there are increasingly good off-the-shelf static site generators. Tools like Middleman and
Jekyll are just as easy to deploy as a wiki or a blog. Unless you have specialized needs, you no
longer have to write and maintain your own site-generating tool. Static site generators have become
increasingly popular among web developers, and technical writers get to ride that wave.
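For example, Jekyll's standard workflow takes only a few commands (the site name here is hypothetical):

gem install jekyll bundler
jekyll new project-docs      # scaffold a new site
cd project-docs
jekyll build                 # render static HTML into _site/
jekyll serve                 # preview locally at http://localhost:4000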
Another reason static sites are more popular is that source hosting sites are easier to use, and
a growing number of technical people use them. One of the draws of a wiki was that somebody could
contribute without downloading anything or installing special tools. If your source files are stored
in a hosting service like GitHub, anybody with a GitHub account can edit them right in their web
browser and ask you to merge their changes.
4. Continuous integration
Continuous integration is the key that ties the previous trends together. You can write your documentation
in a simple format, store it in Git and edit it on the web using a Git hosting service, and publish
a site from those sources. With continuous integration, you don't even need a human to kick off the
publishing process. If you're brave, you can publish automatically after every commit to master,
and you'll have a nearly wiki-like experience for writers.
Some projects will be more conservative and only publish from a production branch. But even when
publishing from a branch, continuous integration removes tedious human intervention. You can also
automatically publish staging sites for development branches.
Continuous integration isn't just about publishing, either. Projects can use it to automatically
test their documentation for things like validity and link integrity, or to generate reports on status
and coverage.
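As a sketch, such a job might look like the following, written here in GitHub Actions syntax; the
workflow, branch, and link checker are illustrative assumptions rather than tools named in this article:

name: docs
on:
  push:
    branches: [master]
jobs:
  build-and-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the site
        run: |
          gem install jekyll html-proofer
          jekyll build
      - name: Check validity and link integrity
        run: htmlproofer ./_site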
5. Hosted documentation services
Automatically publishing documentation sites with continuous integration is easier than ever,
but now there are hosted services that take care of everything for you. Just pass them a Git repository,
and they'll automatically build, publish, and host your documentation. The most well-known example
is Read the Docs. Originally coming out of the Python community, its ease of use has made it popular
for all sorts of projects.
Whether free hosted documentation sites can be financially viable remains to be seen -- to keep sites
like that running costs money and people hours. If the sites can't maintain a certain level of quality,
people will take their documentation elsewhere. If you benefit from one of these free services, I
encourage you to see how you can help financially.
I believe the hosted documentation services trend will continue. Smart people will figure out
how to smooth the bumps. I also suspect we'll start seeing paid hosted documentation services for
proprietary software. Open source has led the way on documentation technology over the past decade,
and it will continue to do so.
Sixty years ago the futurist Arthur C. Clarke observed that any sufficiently advanced technology is
indistinguishable from magic. The internet -- how we both communicate with one another and together
preserve the intellectual products of human civilization -- fits Clarke's observation well. In Steve
Jobs's words, "it just works," as readily as clicking, tapping, or speaking. And every bit as much
aligned with the vicissitudes of magic, when the internet doesn't work, the reasons are typically so
arcane that explanations for it are about as useful as trying to pick apart a failed spell.
Underpinning our vast and simple-seeming digital networks are technologies that, if they hadn't already been invented, probably
wouldn't unfold the same way again. They are artifacts of a very particular circumstance, and it's unlikely that in an alternate
timeline they would have been designed the same way.
The internet's distinct architecture arose from a distinct constraint and a distinct freedom: First, its academically minded
designers didn't have or expect to raise massive amounts of capital to build the network; and second, they didn't want or expect to
make money from their invention.
The internet's framers thus had no money to simply roll out a uniform centralized network the way
that, for example, FedEx metabolized a capital outlay of tens of millions of dollars to deploy
liveried planes, trucks, people, and drop-off boxes, creating a single point-to-point delivery
system. Instead, they settled on the equivalent of rules for how to bolt existing networks together.
Rather than a single centralized network modeled after the legacy telephone system, operated by a government or a few massive
utilities, the internet was designed to allow any device anywhere to interoperate with any other device, allowing any provider able
to bring whatever networking capacity it had to the growing party. And because the network's creators did not mean to monetize, much
less monopolize, any of it, the key was for desirable content to be provided naturally by the network's users, some of whom would
act as content producers or hosts, setting up watering holes for others to frequent.
Unlike the briefly ascendant proprietary networks such as CompuServe, AOL, and Prodigy,
content and network would be separated. Indeed, the internet had and has no main menu, no CEO, no public stock offering, no formal
organization at all. There are only engineers who meet every so often to refine its suggested communications protocols that hardware
and software makers, and network builders, are then free to take up as they please.
So the internet was a recipe for mortar, with an invitation for anyone, and everyone, to bring their own bricks. Tim Berners-Lee
took up the invite and invented the protocols for the World Wide Web, an application to run on the internet. If your computer spoke
"web" by running a browser, then it could speak with servers that also spoke web, naturally enough known as websites. Pages on sites
could contain links to all sorts of things that would, by definition, be but a click away, and might in practice be found at servers
anywhere else in the world, hosted by people or organizations not only not affiliated with the linking webpage, but entirely unaware
of its existence. And webpages themselves might be assembled from multiple sources before they displayed as a single unit,
facilitating the rise of ad networks that could be called on by websites to insert surveillance beacons and ads on the fly, as pages
were pulled together at the moment someone sought to view them.
And like the internet's own designers, Berners-Lee gave away his protocols to the world for
free -- enabling a design that omitted any form of centralized management or control, since
there was no usage to track by a World Wide Web, Inc., for the purposes of billing. The web,
like the internet, is a collective hallucination, a set of independent efforts united by common
technological protocols to appear as a seamless, magical whole.
This absence of central control, or even easy central monitoring, has long been celebrated as an instrument of grassroots democracy
and freedom. It's not trivial to censor a network as organic and decentralized as the internet. But more recently, these features
have been understood to facilitate vectors for individual harassment and societal destabilization, with no easy gating points
through which to remove or label malicious work not under the umbrellas of the major social-media platforms, or to quickly identify
their sources. While both assessments have power to them, they each gloss over a key feature of the distributed web and internet:
Their designs naturally create gaps of responsibility for maintaining valuable content that others rely on. Links work seamlessly
until they don't. And as tangible counterparts to online work fade, these gaps represent actual holes in humanity's knowledge.
Before today's internet, the primary way to preserve something for the ages was to consign it to writing -- first on stone, then
parchment, then papyrus, then 20-pound acid-free paper, then a tape drive, floppy disk, or hard-drive platter -- and store the result
in a temple or library: a building designed to guard it against rot, theft, war, and natural
disaster. This approach has facilitated preservation of some material
for thousands of years. Ideally, there would be multiple identical copies stored in multiple libraries, so the failure of one
storehouse wouldn't extinguish the knowledge within. And in rare instances in which a document was surreptitiously altered, it could
be compared against copies elsewhere to detect and correct the change.
These buildings didn't run themselves, and they weren't mere warehouses. They were staffed with clergy and then librarians, who
fostered a culture of preservation and its many elaborate practices, so precious documents would be both safeguarded and made
accessible at scale -- certainly physically, and, as important, through careful indexing, so an inquiring mind could be paired with
whatever a library had that might slake that thirst. (As Jorge Luis Borges pointed out, a
library without an index becomes paradoxically less informative as it grows.)
At the dawn of the internet age, 25 years ago, it seemed the internet would make for immense improvements to, and perhaps some
relief from, these stewards' long work. The quirkiness of the internet and web's design was the apotheosis of ensuring that the
perfect would not be the enemy of the good. Instead of a careful system of designation of "important" knowledge distinct from
day-to-day mush, and importation of that knowledge into the institutions and cultures of permanent preservation and access
(libraries), there was just the infinitely variegated web, with canonical reference websites like those for academic papers and
newspaper articles juxtaposed with PDFs, blogs, and social-media posts hosted here and there.
Enterprising students designed web crawlers to automatically follow and record every single link they could find, and then follow
every link at the end of that link, and then build a concordance that would allow people to search across a seamless whole, creating
search engines returning the top 10 hits for a word or phrase among, today, more than 100 trillion
possible pages. As Google puts it, "The web is like an ever-growing library with billions of books
and no central filing system."
Now, I just quoted from Google's corporate website, and I used a hyperlink so you can see my source. Sourcing is the glue that holds
humanity's knowledge together. It's what allows you to learn more about what's only briefly mentioned in an article like this one,
and for others to double-check the facts as I represent them to be. The link I used points to
https://www.google.com/search/howsearchworks/crawling-indexing/.
Suppose Google were to change what's on that page, or reorganize its website anytime between when I'm writing this article and when
you're reading it, eliminating it entirely. Changing what's there would be an example of content drift; eliminating it entirely is
known as link rot.
It turns out that link rot and content drift are endemic to the web, which is both unsurprising
and shockingly risky for a library that has "billions of books and no central filing
system." Imagine if libraries didn't exist and there was only a "sharing economy" for physical books: People could register what
books they happened to have at home, and then others who wanted them could visit and peruse them. It's no surprise that such a
system could fall out of date, with books no longer where they were advertised to be -- especially if someone reported a book being in
someone else's home in 2015, and then an interested reader saw that 2015 report in 2021 and tried to visit the original home
mentioned as holding it. That's what we have right now on the web.
Whether humble home or massive government edifice, hosts of content can and do fail. For example, President Barack Obama signed the
Affordable Care Act in the spring of 2010. In the fall of 2013, congressional Republicans shut down day-to-day government funding in
an attempt to kill Obamacare. Federal agencies, obliged to cease all but essential activities, pulled the plug on websites across
the U.S. government, including access to thousands, perhaps millions, of official government documents, both current and archived,
and of course very few having anything to do with Obamacare. As night follows day, every single link pointing to the affected
documents and sites no longer worked. NASA's website at the time, for example, showed visitors
only a notice that the site was unavailable because of the lapse in government funding.
In 2010, Justice Samuel Alito wrote a concurring opinion in a case before the Supreme Court, and his opinion linked to a website as
part of the explanation of his reasoning. Shortly after the opinion was released, anyone following the link wouldn't see whatever it
was Alito had in mind when writing the opinion. Instead, they would find this message:
"Aren't you glad you didn't cite to this webpage? If you had, like Justice Alito did, the original
content would have long since disappeared and someone else might have come along and purchased the
domain in order to make a comment about the transience of linked information in the internet age."
Inspired by cases like these, some colleagues and I joined those investigating the extent of link rot in 2014 and again this past
spring.
The first study, with Kendra Albert and Larry Lessig, focused on documents meant to endure
indefinitely: links within scholarly papers, as found in the Harvard Law Review, and judicial
opinions of the Supreme Court. We found that 50 percent of the links embedded in Court opinions
since 1996, when the first hyperlink was used, no longer worked. And 75 percent of the links in
the Harvard Law Review no longer worked.
People tend to overlook the decay of the modern web, when in fact these numbers are extraordinary -- they represent a comprehensive
breakdown in the chain of custody for facts. Libraries exist, and they still have books in them, but they aren't stewarding a huge
percentage of the information that people are linking to, including within formal, legal documents. No one is. The flexibility of
the web -- the very feature that makes it work, that had it eclipse CompuServe and other centrally organized networks -- diffuses
responsibility for this core societal function.
The problem isn't just for academic articles and judicial opinions. With John Bowers and Clare
Stanton, and the kind cooperation of The New York Times, I was able to analyze approximately
2 million externally facing links found in articles at nytimes.com since its inception in 1996.
We found that 25 percent of deep links have rotted. (Deep links are links to specific
content -- think theatlantic.com/article, as opposed to just theatlantic.com.) The older the
article, the less likely it is that the links work. If you go back to 1998, 72 percent of the
links are dead. Overall, more than half of all articles in The New York Times that contain deep
links have at least one rotted link.
Our studies are in line with others. As far back as 2001, a team at Princeton University studied
the persistence of web references in scientific articles, finding that the raw number of URLs
contained in academic articles was increasing but that many of the links were broken, including
53 percent of those in the articles they had collected from 1994. Thirteen years later, six
researchers created a data set of more than 3.5 million scholarly articles about science,
technology, and medicine, and determined that one in five no longer points to its originally
intended source. In 2016, an analysis with the same data set found that 75 percent of all
references had drifted.
Of course, there's a keenly related problem of permanency for much of what's online. People communicate in ways that feel ephemeral
and let their guard down commensurately, only to find that a Facebook comment can stick around forever. The upshot is the worst of
both worlds: Some information sticks around when it shouldn't, while other information vanishes when it should remain.
So far, the rise of the web has led to routinely cited sources of information that aren't part of more formal systems; blog entries
or casually placed working papers at some particular web address have no counterparts in the pre-internet era. But surely anything
truly worth keeping for the ages would still be published as a book or an article in a scholarly journal, making it accessible to
today's libraries, and preservable in the same way as before? Alas, no.
Because information is so readily placed online, the incentives for creating paper counterparts, and storing them in the traditional
ways, declined slowly at first and have since plummeted. Paper copies were once considered originals, with any digital complement
being seen as a bonus. But now, both publisher and consumer -- and libraries that act in the long term on behalf of their consumer
patrons -- see digital as the primary vehicle for access, and paper copies are deprecated.
From my vantage point as a law professor, I've seen the last people ready to turn out the lights at the end of the party: the
law-student editors of academic law journals. One of the more stultifying rites of passage for entering law students is to
"subcite," checking the citations within scholarship in progress to make sure they are in the exacting and byzantine form required
by legal-citation standards, and, more directly, to make sure the source itself exists and says what the citing author says it says.
(In a somewhat alarming number of instances, it does not, which is a good reason to entertain the subciting exercise.)
The original practice for, say, the Harvard Law Review, was to require a student subciter to
lay eyes on an original paper copy of the cited source, such as a statute or a judicial opinion.
The Harvard Law Library would, in turn, endeavor to keep a physical copy of everything -- ideally
every law and case from everywhere -- for just that purpose. The Law Review has since eased up,
allowing digital images of printed text to suffice, and that's not entirely unwelcome: It turns out
that the physical law (as distinct from the laws of physics) takes up a lot of space, and Harvard
Law School was sending more and more books out to a remote depository, to be laboriously retrieved
when needed.
A few years ago I helped lead an effort to digitize all of that paper, both as images and as
searchable text -- more than 40,000 volumes comprising more than 40 million pages -- which completed
the scanning of nearly every published case from every state from the time of that state's inception
up through the end of 2018. (The scanned books have been sent to an abandoned limestone mine in
Kentucky, as a hedge against some kind of digital or even physical apocalypse.)
A special quirk allowed us to do that scanning, and to then treat the longevity of the result as seriously as we do that of any
printed material: American case law is not copyrighted, because it's the product of judges. (Indeed, any work by the U.S. government
is required by statute to be in the public domain.) But the Harvard Law School library is no longer collecting the print editions from which
to scan -- it's too expensive. And other printed materials are essentially trapped on paper until copyright law is refined to better
account for digital circumstances.
Into that gap has entered material that's born digital, offered by the same publishers that would previously have been selling on
printed matter. But there's a catch: These officially sanctioned digital manifestations of material have an asterisk next to their
permanence. Whether it's an individual or a library acquiring them, the purchaser is typically buying mere access to the material
for a certain period of time, without the ability to transfer the work into the purchaser's own chosen container. This is true of
many commercially published scholarly journals, for which "subscription" no longer signifies a regular delivery of paper volumes
that, if canceled, simply means no more are forthcoming. Instead, subscription is for ongoing access to the entire corpus of
journals hosted by the publishers themselves. If the subscription arrangement is severed, the entire oeuvre becomes inaccessible.
Libraries in these scenarios are no longer custodians for the ages of anything, whether tangible or intangible, but rather poolers
of funding to pay for fleeting access to knowledge elsewhere.
Similarly, books are now often purchased on Kindles, which are the Hotel Californias of digital devices: They enter but can't be
extracted, except by Amazon. Purchased books can be involuntarily zapped by Amazon, which has been known to do so, refunding the
original purchase price. For example, 10 years ago, a third-party bookseller offered a well-known book in Kindle format on Amazon
for 99 cents a copy, mistakenly thinking it was no longer under copyright. Once the error was noted, Amazon -- in something of a
panic -- reached into every Kindle that had downloaded the book and deleted it. The book was,
fittingly enough, George Orwell's 1984. (You don't have 1984. In fact, you never had 1984.
There is no such book as 1984.)
At the time, the incident was seen as evocative but not truly worrisome; after all, plenty of
physical copies of 1984 were available. Today, as both individual and library book buying shifts
from physical to digital, a de-platforming of a Kindle book -- including a retroactive one -- can
carry much more weight.
Deletion isn't the only issue. Not only can information be removed, but it also can be changed. Before the advent of the internet,
it would have been futile to try to change the contents of a book after it had been long published. Librarians do not take kindly to
someone attempting to rip out or mark up a few pages of an "incorrect" book. The closest approximation of post-hoc editing would
have been to influence the contents of a later edition.
Ebooks don't have those limitations, both because of how readily new editions can be created and how simple it is to push "updates"
to existing editions after the fact. Consider the experience of Philip Howard, who sat down to
read a printed edition of War and Peace in 2010. Halfway through reading the brick-size tome, he
purchased a 99-cent electronic edition for his Nook e-reader:
As I was reading, I came across this sentence: "It was as if a light had been Nookd in a carved
and painted lantern ..." Thinking this was simply a glitch in the software, I ignored the intrusive
word and continued reading. Some pages later I encountered the rogue word again. With my third
encounter I decided to retrieve my hard cover book and find the original (well, the translated)
text. For the sentence above I discovered this genuine translation: "It was as if a light had been
kindled in a carved and painted lantern ..."
A search of this Nook version of the book confirmed it: Every instance of the word kindle had been
replaced by nook, in perhaps an attempt to alter a previously made Kindle version of the book for
Nook use. I took screenshots at the time to document it.
It is only a matter of time before the retroactive malleability of these forms of publishing becomes a new area of pressure and
regulation for content censorship. If a book contains a passage that someone believes to be defamatory, the aggrieved person can sue
over it -- and receive monetary damages if they're right. Rarely is the book's existence itself called into question, if only because
of the difficulty of putting the cat back into the bag after publishing.
Now it's far easier to make demands for a refinement or an outright change of the offending sentence or paragraph. So long as those
remedies are no longer fanciful, the terms of a settlement can include them, as well as a promise not to advertise that a change has
even been made. And a lawsuit need never be filed; only a demand made, publicly or privately, and not one grounded in a legal claim,
but simply one of outrage and potential publicity. Rereading an old Kindle favorite might then become reading a slightly (if
momentously) tweaked version of that old book, with only a nagging feeling that it isn't quite how one remembers it.
This isn't hypothetical. This month, the best-selling author Elin Hilderbrand published a new novel. The novel, widely praised by
critics, included a snippet of dialogue in which one character makes a wry joke to another about spending the summer in an attic on
Nantucket, "like Anne Frank." Some readers took to social media
to criticize this moment between characters as anti-Semitic. The author sought to explain the character's use of
the analogy before offering an apology and saying that she had asked her publisher to remove the passage from digital versions of
the book immediately.
There are sufficient technical and typographical alterations to ebooks after they're published that a publisher itself might not
even have a simple accounting of how often it, or one of its authors, has been importuned to alter what has already been published.
Nearly 25 years ago I helped Wendy Seltzer start a site, now called Lumen, that tracks requests
for elisions from institutions ranging from the University of California to the Internet Archive
to Wikipedia, Twitter, and Google -- often for claimed copyright
infringements found by clicking through links published there. Lumen thus makes it possible to learn more about what's missing or
changed from, say, a Google web search, because of outside demands or requirements.
For example, thanks to the site's record-keeping both of deletions and of the source and text of demands for removals, the law
professor Eugene Volokh was able to identify a number of removal requests made with fraudulent
documentation -- nearly 200 out of 700 "court orders" submitted to Google that he reviewed turned
out to have been apparently Photoshopped from whole cloth. The Texas attorney general has since
sued a company for routinely submitting these falsified court orders to Google for the purpose of
forcing content removals. Google's
relationship with Lumen is purely voluntary -- YouTube, which, like Google, has the parent company Alphabet, is not currently sending
notices. Removals through other companies -- like book publishers and distributors such as Amazon -- are not publicly available.
The rise of the Kindle points out that even the concept of a link -- a "uniform resource locator," or URL -- is under great stress. Since
Kindle books don't live on the World Wide Web, there's no URL pointing to a particular page or passage of them. The same goes for
content within any number of mobile apps, leaving people to trade screenshots -- or, as
The Atlantic's Kaitlyn Tiffany put it, "the gremlins of the internet" -- as a way of conveying content.
Here, courtesy of the law professor Alexandra Roberts, is how a district-court opinion pointed
to a TikTok video: "A May 2020 TikTok video featuring the Reversible Octopus Plushies now has over 1.1 million likes and 7.8 million
views. The video can be found at Girlfriends mood #teeturtle #octopus #cute #verycute #animalcrossing #cutie #girlfriend #mood
#inamood #timeofmonth #chocolate #fyp #xyzcba #cbzzyz #t (tiktok.com)."
Which brings us full circle to the fact that long-term writing, including official documents, might often need to point to
short-term, noncanonical sources to establish what they mean to say -- and the means of doing that is disintegrating before our eyes
(or worse, entirely unnoticed). And even long-term, canonical sources such as books and scholarly journals are in fugacious
configurations -- usually to support digital subscription models that require scarcity -- that preclude ready long-term linking, even as
their physical counterparts evaporate.
The project of preserving and building on our intellectual track, including all its meanderings and false starts, is thus falling
victim to the catastrophic success of the digital revolution that should have bolstered it. Tools that could have made humanity's
knowledge production available to all instead have, for completely understandable reasons, militated toward an ever-changing "now,"
where there's no easy way to cite many sources for posterity, and those that are citable are all too mutable.
Again, the stunning success of the improbable, eccentric architecture of our internet came about because of a wise decision to favor
the good over the perfect and the general over the specific. I have admiringly called this the
"Procrastination Principle," wherein an elegant network design would not be unduly complicated by
attempts to solve every possible problem that one could imagine materializing in the future. We
see the principle at work
in Wikipedia, where the initial pitch for it would seem preposterous: "We can generate a consummately thorough and
mostly reliable encyclopedia by allowing anyone in the world to create a new page and anyone else in the world to drop by and revise
it."
It would be natural to immediately ask what would possibly motivate anyone to contribute constructively to such a thing, and what
defenses there might be against edits made ignorantly or in bad faith. If Wikipedia garnered enough activity and usage, wouldn't
some two-bit vendor be motivated to turn every article into a spammy ad for a Rolex watch?
Indeed, Wikipedia suffers from vandalism, and over time, its sustaining community has developed tools and practices for dealing with
it that didn't exist when Wikipedia was created. If they'd been implemented too soon, the extra hurdles to starting and editing
pages might have deterred many of the contributions that got Wikipedia going to begin with. The Procrastination Principle paid off.
Similarly, it wasn't on the web inventor Tim Berners-Lee's mind to vet proposed new websites according to any standard of truth,
reliability, or anything else. People could build and offer whatever they wanted, so long as they had the hardware and
connectivity to set up a web server, and others would be free to visit that site or ignore it as they wished. That websites would
come and go, and that individual pages might be rearranged, was a feature, not a bug. Just as the internet could have been
structured as a big CompuServe, centrally mediated, but wasn't, the web could have had any number of features to better assure
permanence and sourcing. Ted Nelson's Xanadu project contemplated all that and more, including
"two-way links" that would alert a site every time someone out there chose to link to it. But
Xanadu never took off.
As procrastinators know, later doesn't mean never, and the benefits of the internet and web's flexibility -- including permitting the
building of walled app gardens on top of them that reject the idea of a URL entirely -- now come at great risk and cost to the larger
tectonic enterprise to, in Google's early words, "organize the world's information and make it
universally accessible and useful."
Sergey Brin and Larry Page's idea was a noble one -- so noble that for it to be entrusted to a single company, rather than society's
long-honed institutions, such as libraries, would not do it justice. Indeed, when Google's
founders first released a paper describing
the search engine they had invented, they included an appendix about "advertising and mixed motives," concluding that "the issue of
advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the
academic realm." No such transparent, academic competitive search engine exists in 2021. By making the storage and organization of
information everyone's responsibility and no one's, the internet and web could grow, unprecedentedly expanding access, while making
any and all of it fragile rather than robust in many instances in which we depend on it.
What are we going to do about the crisis we're in? No one is more keenly aware of the problem of the internet's ephemerality than
Brewster Kahle, a technologist who founded the Internet Archive in 1996 as a nonprofit effort to preserve humanity's knowledge,
especially and including the web. Brewster had developed a precursor to the web called WAIS, and then a web-traffic-measurement
platform called Alexa, eventually bought by Amazon. That sale put Brewster in a position personally to help fund the Internet
Archive's initial operations, including the Wayback Machine, specifically designed to collect,
save, and make available webpages even after they've gone away. It did this by
picking multiple entry points to start "scraping" pages -- saving their contents rather than merely displaying them in a browser for a
moment -- and then following as many successive links as possible on those pages, and those pages' linked pages.
It is no coincidence that a single civic-minded citizen like Brewster was the one to step up, instead of our existing institutions.
In part that's due to potential legal risks that tend to slow down or deter well-established organizations. The copyright
implications of crawling, storing, and displaying the web were at first unsettled, typically leaving such actions either to parties
who could be low key about it, saving what they scraped only for themselves; to large and powerful commercial parties like search
engines whose business imperatives made showing only the most recent, active pages central to how they work; or to tech-oriented
individuals with a start-up mentality and little to lose. An example of the latter is at work with Clearview AI, where a single
rakish entrepreneur scraped billions of images and tags
from social-networking sites such as Facebook, LinkedIn, and Instagram in order to build a
facial-recognition database capable of identifying nearly any photo or video clip of someone.
Brewster is superficially in that category, too, but -- in the spirit of the internet and web's inventors -- is doing what he's doing
because he believes in his work's virtue, not its financial potential. The Wayback Machine's approach is to save as much as possible
as often as possible, and in practice that means a lot of things every so often. That's vital work, and it should be supported much
more, whether with government subsidy or more foundation support. (The Internet Archive was
a semifinalist for the MacArthur Foundation's "100 and Change" initiative, which awards
$100 million individually to worthy causes.)
A complementary approach to "save everything" through independent scraping is for whoever is creating a link to make sure that a
copy is saved at the time the link is made. Researchers at the Berkman Klein Center for Internet
& Society, which I co-founded, designed such a system with an open-source package called Amberlink.
The internet and the web invite any form of additional building on them, since no one formally approves new additions. Amberlink can
run on some web servers to make it so that what's at the end of a link can be captured when a webpage on an Amberlink-empowered
server first includes that link. Then, when someone clicks on a link on an Amber-tuned site, there's an opportunity to see what the
site had captured at that link, should the original destination no longer be available. (Search engines such as Google have this
feature, too -- you can often ask to see the search engine's "cached" copy of a webpage linked
from a search-results page, rather than just following the link to try to see the site yourself.)
Amber is an example of one website archiving another, unrelated website to which it links. It's also possible for websites to
archive themselves for longevity. In 2020, the Internet Archive announced a partnership with
a company called Cloudflare, which is used by popular or controversial websites to be more resilient against denial-of-service
attacks conducted by bad actors that could make the sites unavailable to everyone. Websites that enable an "always online" service
will see their content automatically archived by the Wayback Machine, and if the original host becomes unavailable to Cloudflare,
the Internet Archive's saved copy of the page will be made available instead.
These approaches work generally, but they don't always work specifically. When a judicial opinion, scholarly article, or editorial
column points to a site or page, the author tends to have something very distinct in mind. If that page is changing -- and there's no
way to know if it will change -- then a 2021 citation to a page isn't reliable for the ages if the nearest copy of that page available
is one archived in 2017 or 2024.
Taking inspiration from Brewster's work, and indeed partnering with the Internet Archive, I worked
with researchers at Harvard's Library Innovation Lab to start Perma.
Perma is an alliance of more than 150 libraries. Authors of enduring documents -- including
scholarly papers, newspaper articles, and judicial opinions -- can ask Perma to convert the links
included within them into permanent ones archived at http://perma.cc;
participating libraries treat snapshots of what's found at those links as accessions to their collections, and undertake to preserve
them indefinitely.
In turn, the researchers Martin Klein, Shawn Jones, Herbert Van de Sompel, and Michael Nelson have honed a service called
Robustify to
allow archives of links from whatever source, including Perma, to be incorporated into new "dual-purpose" links so that they can
point to a page that works in the moment, while also offering an archived alternative if the original page fails. That could allow
for a rolling directory of snapshots of links from a variety of archives -- a networked history that is both prudently distributed,
internet-style, while shepherded by the long-standing institutions that have existed for this vital public-interest purpose:
libraries.
A technical infrastructure through which authors and publishers can preserve the links they draw on is a necessary start. But the
problem of digital malleability extends beyond the technical. The law should hesitate before allowing the scope of remedies for
claimed infringements of rights -- whether economic ones such as copyright or more personal, dignitary ones such as defamation -- to
expand naturally as the ease of changing what's already been published increases.
Compensation for harm, or the addition of corrective material, should be favored over quiet retroactive alteration. And publishers
should establish clear and principled policies against undertaking such changes under public pressure that falls short of a legal
finding of infringement. (And, in plenty of cases, publishers should stand up against legal pressure, too.)
The benefit of retroactive correction in some instances -- imagine fixing a typographical error in the proportions of a recipe, or
blocking out someone's phone number shared for the purposes of harassment -- should be contextualized against the prospect of systemic,
chronic demands for revisions by aggrieved people or companies single-mindedly demanding changes that serve to eat away at the
public record. The public's interest in seeing what's changed -- or at least being aware that a change has been made and why -- is as
legitimate as it is diffuse. And because it's diffuse, few people are naturally in a position to speak on its behalf.
For those times when censorship is deemed the right course, meticulous records should be kept of what has been changed. Those
records should be available to the public, the way that Lumen's records of copyright takedowns in Google search are, unless that
very availability defeats the purpose of the elision. For example, to date, Google does not report to Lumen when it removes a
negative entry in a web search about someone who has invoked Europe's "right to be forgotten," lest the public merely consult Lumen
to see the very material that has been found under European law to be an undue drag on someone's reputation (balanced against the
public's right to know).
In those cases, there should be a means of record-keeping that, while unavailable to the public in just a few clicks, should be
available to researchers wanting to understand the dynamics of online censorship. John Bowers, Elaine Sedenberg, and I have
described how that might work, suggesting that libraries can again serve as semi-closed archives
of both public and private censorial actions online. We can build what the Germans used to call a
giftschrank, a "poison cabinet" containing dangerous works that nonetheless should be preserved
and accessible in certain circumstances. (Art imitates life: There is a "restricted section" in
Harry Potter's universe, and an aptly named "poison room" in the television adaptation of
The Magicians.)
It is really tempting to cover for mistakes by pretending they never happened. Our technology now
makes that alarmingly simple, and we should build in a little less efficiency, a little more of
the inertia that the nature of printed texts previously provided in ample quantities. Even the
Supreme Court hasn't been above a few retroactive tweaks to inaccuracies in its edicts. As the law
professor Jeffrey Fisher said after our colleague Richard Lazarus discovered changes, "In Supreme
Court opinions, every word matters ... When they're changing the wording of opinions, they're
basically rewriting the law."
On an immeasurably more modest scale, if this article has a mistake in it, we should all want an author's or editor's note at the
bottom indicating where a correction has been applied and why, rather than that kind of quiet revision. (At least, I want that
before I know just how embarrassing an error it might be, which is why we devise systems based on principle, rather than trying to
navigate in the moment.)
Society can't understand itself if it can't be honest with itself, and it can't be honest with itself if it can only live in the
present moment. It's long overdue to affirm and enact the policies and technologies that will let us see where we've been, including
and especially where we've erred, so we might have a coherent sense of where we are and where we want to go.
Jonathan Zittrain
is a law professor and computer-science professor at Harvard, and a co-founder of its Berkman Klein
Center for Internet & Society.
Leanpub is a powerful platform for serious authors. This platform is the combination of two
things: a publishing workflow and a storefront. Oh, and we pay 80% royalties.
Leanpub is more than the sum of its parts, however – by combining a simple, elegant
writing and publishing workflow with a store focused on selling in-progress ebooks, it's
something different. Leanpub is a magical typewriter for authors: just write in plain text,
and to publish your ebook, just click a button. (You can click a Preview button first if you
want!) Once you've clicked the Publish button, anyone in the world can instantly buy your ebook
from Leanpub and read it on their computer, tablet, phone, or ebook reader. Whenever you want
to distribute an update to all your readers, just click the Publish button again. It really is
that easy.
Authors can sign up for our Free plan to create 100 books or courses for FREE! Authors can
also get more features and unlimited previews and publishes by signing up for a Standard or Pro
plan.
"talk-embed-stream-container"> hootowl 11 hours ago remove Share link
Copy The CIA is funded, populated, and controlled by sociopathic
dual-staters and drug cartels. They don't give a damn about Americans or real American
interests.
17 so-called "Intelligence Agencies" are an existential threat, a clear and present
danger to what remains of our constitutional freedoms and prosperity.
Who the hell can possibly control 17 intelligence agencies run by sociopaths and corrupt
politicians (is that redundant)?
hootowl, 11 hours ago:
The CIA is funded, populated, and controlled by sociopathic dual-staters and drug cartels. They
don't give a damn about Americans or real American interests.
17 so-called "Intelligence Agencies" are an existential threat, a clear and present danger to
what remains of our constitutional freedoms and prosperity.
Who the hell can possibly control 17 intelligence agencies run by sociopaths and corrupt
politicians (is that redundant)?
Do pink slippers go with pink hats? I heard a rumor that Huffington Post laid off all its
opinion writers. Looks like it's true:
https://www.huffingtonpost....
"These giant platforms, they broke our industry. This is an existential challenge for every
single publisher." -- HuffPost Editor-in-Chief Lydia Polgreen on platforms such as Google and Facebook
I wonder what took her so long in figuring out the obvious.
Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool | It's FOSS
LaTeX editors are excellent when it comes to writing academic and scientific documentation.
There is a steep learning curve involved, of course. And this learning curve becomes steeper
if you have to write complex mathematical equations.
Mathpix is a nifty little tool that helps you in this regard.
Suppose you are reading a document that has mathematical equations. If you want to use those
equations in your LaTeX document, you need to use your ninja LaTeX skills and plenty of time.
But Mathpix solves this problem for you. With Mathpix, you take a screenshot of the mathematical
equations, and it will instantly give you the LaTeX code. You can then use this code in your
favorite LaTeX editor.
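For instance, a screenshot of the quadratic formula would come back as LaTeX source along these
lines (illustrative output, not captured from Mathpix itself):

x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}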
Pandoc is a command-line tool for converting files from one markup language to another. In my
introduction to Pandoc, I explained how to convert text written in Markdown into a website, a
slideshow, and a PDF.
In this follow-up article, I'll dive deeper into Pandoc, showing how to produce a website and an
ePub book from the same Markdown source file. I'll use my upcoming e-book, GRASP Principles for
the Object-Oriented Mind, which I created using this process, as an example.
First I will explain the file structure used for the book, then how to use Pandoc to generate a
website and deploy it on GitHub. Finally, I will demonstrate how to generate its companion ePub book.
I do all of my writing in Markdown syntax. You can also use HTML, but the more HTML you
introduce, the higher the risk that problems will arise when Pandoc converts Markdown to an ePub
document. My books follow the one-chapter-per-file pattern. Declare chapters using the Markdown
H1 heading (#). You can put more than one chapter in each file, but putting them in separate
files makes it easier to find content and do updates later.
The meta-information follows a similar pattern: each output format has its own
meta-information file. Meta-information files define information about your documents, such as
text to add to your HTML or the license of your ePub. I store all of my Markdown documents in a
folder named parts (this is important for the Makefile that generates the website and
ePub). As an example, let's take the table of contents, the preface, and the about chapters
(divided into the files toc.md, preface.md, and about.md) and, for clarity, leave out
the remaining chapters.
My about file might begin like this:
# About this book {-}
## Who should read this book {-}
Before creating a complex software system one needs to create a solid foundation.
General Responsibility Assignment Software Principles (GRASP) are guidelines to assign
responsibilities to software classes in object-oriented programming.
Once the chapters are finished, the next step is to add meta-information to set up the formats
for the website and the ePub.
Generating the website
Create the HTML meta-information file
The meta-information file (web-metadata.yaml) for my website is a simple YAML file that
contains information about the author, title, rights, content for the <head> tag, and
content for the beginning and end of the HTML file.
I recommend (at minimum) including the following fields in the web-metadata.yaml
file:
---
title: <a href="/grasp-principles/toc/">GRASP principles for the Object-oriented
mind</a>
author: Kiko Fernandez-Reyes
rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
header-includes:
- |
```{=html}
<link href="https://fonts.googleapis.com/css?family=Inconsolata" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Gentium+Basic|Inconsolata"
rel="stylesheet">
```
include-before:
- |
```{=html}
<p>If you like this book, please consider
spreading the word or
<a href="https://www.buymeacoffee.com/programming">
buying me a coffee
</a>
</p>
```
include-after:
- |
```{=html}
<div class="footnotes">
<hr>
<div class="container">
<nav class="pagination" role="pagination">
<ul>
<p>
<span class="page-number">Designed with</span> ❤️ <span
class="page-number"> from Uppsala, Sweden</span>
</p>
<p>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img
alt="Creative Commons License" style="border-width:0"
src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a>
</p>
</ul>
</nav>
</div>
</div>
```
---
Some variables to note:
The header-includes variable contains HTML that will be embedded inside the <head> tag.
The line after calling a variable must be - |. The next line must begin with triple backquotes
aligned with the |, or Pandoc will reject it. {=html} tells Pandoc that this is raw text and
should not be processed as Markdown. (For this to work, you need to check that the raw_attribute
extension in Pandoc is enabled. To check, type pandoc --list-extensions | grep raw and make sure
the returned list contains an item named +raw_html; the plus sign indicates it is enabled.)
The include-before variable adds some HTML at the beginning of your website; here I ask readers
to consider spreading the word or buying me a coffee.
The include-after variable appends raw HTML at the end of the website and shows my book's license.
These are only some of the fields available; take a look at the template variables in HTML
(my article introduction to Pandoc covered this for LaTeX, but the process is the same for HTML)
to learn about others.
Split the website into chapters
The website can be generated as a whole, resulting in one long page with all the content, or
split into chapters, which I think is easier to read. I'll explain how to divide the website
into chapters so the reader doesn't get intimidated by one long page.
To make the website easy to deploy on GitHub Pages, we need to create a root folder called
docs (the folder GitHub Pages uses by default to render a website). Then we need to create a
folder for each chapter under docs, placing each chapter's HTML in its own folder as a file
named index.html. For example, the about.md file is converted to a file named index.html that
is placed in a folder named about (about/index.html). This way, when users type
http://<your-website.com>/about/, the index.html file from the about folder
is displayed in their browser.
# Creation and copy of stylesheet and images into
# the assets folder. This is important to deploy the
# website to GitHub Pages.
DOCS=docs

setup:
	@mkdir -p $(DOCS)
	@cp -r assets $(DOCS)

# Creation of folder and index.html file on a
# per-chapter basis
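A minimal sketch of such a rule, assuming a GNU Make pattern rule and the DOCS variable defined above (this is an illustration, not the author's exact Makefile; recipe lines must be indented with a tab):

%: parts/%.md
	@mkdir -p $(DOCS)/$@
	@pandoc -s web-metadata.yaml parts/$@.md \
		-c /assets/pandoc.css \
		-o $(DOCS)/$@/index.html

With a rule like this, make about would regenerate docs/about/index.html from parts/about.md.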
The option -c /assets/pandoc.css declares which CSS stylesheet to use; it will be fetched
from /assets/pandoc.css. In other words, inside the <head> HTML tag, Pandoc adds the
following line:
<link rel="stylesheet" href="/assets/pandoc.css">
To generate the website, type:
make
The root folder should now contain the following structure and files:
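A rough sketch of that layout, under the assumptions above (chapter names illustrative):

Makefile
web-metadata.yaml
epub-meta.yaml
assets/pandoc.css
parts/toc.md
parts/preface.md
parts/about.md
docs/assets/
docs/toc/index.html
docs/preface/index.html
docs/about/index.html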
To deploy the website on GitHub, follow these steps (a command sketch follows the list):
Create a new repository
Push your content to the repository
Go to the GitHub Pages section in the repository's Settings and select the option for
GitHub to use the content from the Master branch
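A minimal sketch of those commands (the repository URL is a placeholder; GitHub Pages at the time served content from the master branch):

git init
git add .
git commit -m "Publish book"
git remote add origin https://github.com/<username>/<repository>.git
git push -u origin master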
You can get more details on the GitHub
Pages site.
Check out my book's website , generated
using this process, to see the result.
Generating the ePub book
Create the ePub meta-information file
The ePub meta-information file, epub-meta.yaml, is similar to the HTML meta-information
file. The main difference is that ePub offers other template variables, such as publisher and
cover-image . Your ePub book's stylesheet will probably differ from your website's; mine uses
one named epub.css.
---
title: 'GRASP principles for the Object-oriented Mind'
publisher: 'Programming Language Fight Club'
author: Kiko Fernandez-Reyes
rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
cover-image: assets/cover.png
stylesheet: assets/epub.css
...
Update the Makefile and deploy the ePub
Add the following content to the previous Makefile:
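A minimal sketch of what that target could look like, based on the description that follows (the --toc flag and the output path assets/book.epub are assumptions, not necessarily the author's exact rule; the recipe line must be indented with a tab):

epub:
	@pandoc -s --toc epub-meta.yaml \
		$(addprefix parts/, $(addsuffix .md, $(DEPENDENCIES))) \
		-o assets/book.epub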
The command for the ePub target takes all the dependencies from the HTML version (your
chapter names), appends the Markdown extension to them, and prepends them with the path to the
parts folder so Pandoc knows how to process them. For example, if $(DEPENDENCIES) contained only
preface about, then the Makefile would call:
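Under the sketch above, the expanded command would be roughly:

pandoc -s --toc epub-meta.yaml parts/preface.md parts/about.md -o assets/book.epub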
Pandoc would take these two chapters, combine them, generate an ePub, and place the book
under the assets folder.
Here's an example
of an ePub created using this process.
Summarizing the process
The process to create a website and an ePub from Markdown files isn't difficult, but there
are a lot of details. The following outline may make it easier for you to follow:
Write the chapters in Markdown and store them in the parts folder.
Create the web-metadata.yaml and epub-meta.yaml meta-information files.
Add the setup, per-chapter, and epub targets to the Makefile and run make.
Deploy the generated docs folder to GitHub Pages and distribute the ePub.
Englishpage.com has conducted an extensive text analysis of over 2,000 novels and resources
and has found 680 irregular verbs so far, including prefixed verbs (misunderstand,
reread) as well as rare and antiquated forms (colorbreed, bethink).
According to Englishpage.com's text analysis of over 2,000 novels and resources, the most
common irregular verbs in English are: be, have, say, do, know, get, see, think, go, and take.
EPUB e-book files can be converted to a Kindle-compatible format using a desktop converter app or
online conversion site. Since the Kindle doesn't natively support EPUB files, conversion is the only way to enjoy your EPUB books
on your Kindle without a separate purchase from the Kindle store.
Calibre
is an open-source e-book management program that works not only as a converter but also as a reader.
Calibre is free and can handle many file formats, including EPUB and Kindle formats.
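If you prefer the command line, Calibre also ships a converter tool, ebook-convert, which handles the same conversion in one step (file names here are illustrative; AZW3 is one Kindle-compatible output format):

ebook-convert book.epub book.azw3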
These easy-to-use open source apps can help you sharpen your writing skills, research more efficiently, and stay organized.
If you've read my article about how I switched
to Linux , then you know that I'm a superuser. I also stated that I'm not an "expert" on anything. That's still fair to say.
But I have learned many helpful things over the last several years, and I'd like to pass these tips along to other new Linux users.
Today, I'm going to discuss the tools I use when I write. I based my choices on three criteria:
My main writing tool must produce files compatible with any publisher when I submit stories or articles.
The software must be quick and simple to use.
Free is good.
There are some wonderful all-in-one free solutions, but I tend to get lost and lose my train of thought when I'm trying to find
information, so I opted for multiple applications that suit my needs. Also, I don't want to be reliant on the internet in case
service goes down. I set these programs up on my monitor so I can see them all at once.
Consider the following tool suggestions -- everyone works differently, and you might find some other app that better fits the
way you work. These tools are current as of this writing:
Word processor
LibreOffice 6.0.1. Until recently, I used
WPS, but font-rendering problems (Times New Roman always rendered in bold)
nixed it. The newest version of LibreOffice handles Microsoft Office formats very nicely, and the fact that it's open source ticks the
box for me.
Thesaurus
Artha gives you synonyms, antonyms, derivatives, and more.
It's clean-looking and fast. Type the word "fast," for example, and you'll get the dictionary definition as well as the
other options listed above. Artha is a huge gift to the open source community, and more people should try it, as it seems to be one
of those obscure little programs. If you're using Linux, install this application now. You won't regret it.
Note-taking
Zim touts itself as a desktop wiki, but it's also the easiest multi-level note-taking
app you'll find anywhere. There are other, prettier note-taking programs available, but Zim is exactly what I need to manage my characters,
locations, plots, and sub-plots.
Submission tracking
I once used a proprietary piece of software called FileMaker Pro , and
it spoiled me. There are plenty of database applications out there, but in my opinion the easiest one to use is
Glom . It suits my needs graphically, letting me enter information in a form
rather than a table. In Glom, you create the form you need so you can see relevant information instantly (for me, digging through
a spreadsheet table to find information is like dragging my eyeballs over shards of glass). Although Glom no longer appears to be
in development, it remains relevant.
Research
I've begun using StartPage.com as my default search engine. Sure,
Google can be one of your best friends when you're writing. But I don't like
how Google tracks me every time I want to learn about a specific person/place/thing. So I use StartPage.com instead; it's fast and
does not track your searches. I also use DuckDuckGo.com as an alternative to
Google.
As you might have noticed, my taste in apps tends to merge the best of Windows, MacOS, and the open source Linux alternatives
mentioned here. I hope these suggestions help you discover helpful new ways to compose (thank you, Artha!) and track your written
works.
If you've got the means to print money (or to simply post it and jockey it plus or minus on
electronic score boards) and you can maintain it as the world's standard instrument of trade,
you'll have people lined up to get some. And what the hell, it's just numbers on paper. It's
backed by "faith and credit".
Everything Wiki is CIA approved. They do have a sense of humor and a sense of irony. One
can often find the relevant details buried within the deep layers of bullshit.
Procrastination is like a sore throat; it's a symptom with many possible causes. Unless you
know the cause, treating the symptom might make things worse. This column contains the five
most common causes of procrastination and how to overcome them.
1. The size of a task seems overwhelming.
Explanation: Every time you think about the task it seems like a huge mountain of work that
you'll never be able to complete. You therefore avoid starting.
Solution: Break the task into small steps and then start working on them. This builds
momentum and makes the task far less daunting.
Example: You've decided to write a book. Rather than sitting down and trying to write the
book (which will probably cause you to stare at a blank screen), spend one hour on each of
the following sub-tasks:
Draft a rough outline and gather your source materials.
Assign those materials to sections of your outline.
Write the first three paragraphs of a sample chapter.
Create a schedule to write two pages a day.
2. The number of tasks seems overwhelming.
Explanation: Your to-do list has so many tasks in it that you feel as if you'll never be
able to finish them all, so why bother getting started?
Solution: Combine the tasks into a conceptual activity and then set a time limit for how
long you'll pursue that activity.
Example: Your email account is being peppered by so many requests and demands that you feel
as if you can't possibly get them done. Rather than fret about the pieces and parts, set aside
a couple of hours to "do email." Schedule a similar session tomorrow or later that day.
Thinking of the work as an activity rather than a bunch of action items makes them seem less
burdensome.
3. A set of tasks seems repetitive and boring.
Explanation: You're a creative person with an active mind so you naturally put off any
activity that doesn't personally interest you.
Solution: Set a time limit for completing a single task in the set and then compete against
yourself to see if you can beat that time limit. Reward yourself each time you beat the
clock.
Example: You're a newly-hired salesperson who must write personalized emails to two dozen
customers. The work involves quickly researching their account, addressing any issues they've
had with the previous salesperson, and then introducing yourself.
Rather than just slogging through the work, estimate the maximum amount of time it should
take to write one email (let's say 5 minutes). At that rate it should take you 120 minutes (2 hours)
to write all two dozen.
Start the stopwatch and write the first email. If you have time left over, do something else
(like read the news). When the stopwatch buzzes, reset it, write the second email, and so on.
4. The task seems so important that it's daunting.
Explanation: You realize that if you screw this task up, it might mean losing your job or
missing a huge opportunity. You avoid it because you don't want to risk failure.
Solution: Contact somebody you trust and ask if they'll review your work (if the task is
written) or act as a sounding board (if the task is verbal). Doing the task for your reviewer
is low-risk, and thus the task is easier to start. The reviewer's perspective and approval
provide extra confidence when you actually execute the task.
Example: You need to write an email demanding payment from a customer who's in arrears.
Because you don't want to damage the relationship and yet need to be paid, it's a difficult
balancing act--so difficult that you avoid writing the email.
To break the mental log-jam, ask a colleague or friend if they'll review your email before
you send it to see if it hits the right tone. Writing the email then becomes easier because
you're writing it for your friend to read rather than for the customer.
5. You just don't feel like working.
Explanation: You're feeling burned out and generally unmotivated, so you're finding it very
hard to get down to work.
Solution: You have two choices: 1) reschedule the activity for a time when you'll be more
motivated or 2) motivate yourself in the short-term by setting a reward.
Example: You need to write a trip report but you're tired after a long day of travel. While
you know that the report will be more accurate if you write it now, you decide to write it
tomorrow morning after breakfast and coffee--a time when you're typically more motivated.
Alternatively, you motivate yourself in the short term by promising yourself that you'll buy and
download a book that you've been wanting to read... but only if you write the report tonight.
Calibre 3.4 is here only one week after the release of the 3.3 update, which means that it's
not a major version: it only adds a few user interface improvements, along with the usual
bug fixes. The most important thing introduced in Calibre 3.4 is a new method of exporting
books to your computer. In the Edit Book component, there's now an option called "Export
selected files" when you right-click on the File browser, which makes it a lot easier to export
all selected files to your computer. In addition, there's now a configurable shortcut to move
the focus from the Quickview component to the book list.
[December 02, 2016] (arstechnica.com, via Slashdot)
20-year-old Lan Cai was in a car crash this summer, after she was plowed into
by a drunk driver and broke two bones in her lower back. She didn't know how to
navigate her car insurance and prove damages, so she reached out for legal
help.
Things didn't go as one would have liked, initially, as
ArsTechnica
documents:
The help she got, Cai said, was less than satisfactory. Lawyers
from the Tuan A. Khuu law firm ignored her contacts, and at one point they came
into her bedroom while Cai was sleeping in her underwear. "Seriously, it's
super unprofessional!" she wrote on Facebook. (The firm maintains it was
invited in by Cai's mother.) She also took to Yelp to warn others about her bad
experience. The posts led to a threatening e-mail from Tuan Khuu attorney Keith
Nguyen. Nguyen and his associates went ahead and filed that lawsuit, demanding
the young woman pay up between $100,000 and $200,000 -- more than 100 times
what she had in her bank account. Nguyen said he didn't feel bad at all about
suing Cai. Cai didn't remove her review, though. Instead she fought back
against the Khuu firm, all thanks to attorney Michael Fleming, who took her
case pro bono. Fleming filed a motion arguing that, first and foremost, Cai's
social media complaints were true. Second, she couldn't do much to damage the
reputation of a firm that already had multiple poor reviews. He argued the
lawsuit was a clear SLAPP (Strategic Lawsuit Against Public Participation).
Ultimately, the judge agreed with Fleming, ordering the Khuu firm to pay
$26,831.55 in attorneys' fees.
[November 30, 2016] (technologyreview.com, via Slashdot)
Reader Joe_NoOne writes:
Like
TV, social media now increasingly entertains us, and even more so than
television it amplifies our existing beliefs and habits. It makes us feel more
than think, and it comforts more than challenges. The result is a deeply
fragmented society, driven by emotions, and radicalized by lack of contact and
challenge from outside. This is why Oxford Dictionaries designated "post-truth"
as the word of 2016: an adjective "relating to circumstances in which objective
facts are less influential in shaping public opinion than emotional appeals."
Traditional television still entails some degree of surprise. What you see on
television news is still picked by human curators, and even though it must be
entertaining to qualify as worthy of expensive production, it is still likely
to challenge some of our opinions (emotions, that is). Social media, in
contrast,
uses algorithms to encourage comfort and complaisance, since its entire
business model is built upon maximizing the time users spend inside of it
.
Who would like to hang around in a place where everyone seems to be negative,
mean, and disapproving? The outcome is a proliferation of emotions, a
radicalization of those emotions, and a fragmented society.
This is way more dangerous for the idea of democracy founded on the
notion of informed participation. Now what can be done? Certainly the
explanation for Trump's rise cannot be reduced to a technology- or
media-centered argument. The phenomenon is rooted in more than that; media or
technology cannot create; they can merely twist, divert, or disrupt. Without
the growing inequality, shrinking middle class, jobs threatened by
globalization, etc. there would be no Trump or Berlusconi or Brexit. But we
need to stop thinking that any evolution of technology is natural and
inevitable and therefore good. For one thing, we need more text than videos in
order to remain rational animals. Typography, as Postman describes, is in
essence much more capable of communicating complex messages that provoke
thinking. This means we should write and read more, link more often, and watch
less television and fewer videos -- and spend less time on Facebook, Instagram,
and YouTube.
"... Newspapers exist to process and assess the rival claims of experts � politicians, governments, corporations, the professoriate, pollsters, authors, whistleblowers, filmmakers, and denizens of the blogosphere. When its own claims to authority are misplaced � a spectacular example having been the Monday before the election, when newspapers were still expecting a Clinton victory � the print press and its kith and kin correct themselves (the next day) and investigate the prior beliefs that led them to error. A free and competitive press resembles the other great self-correcting systems that have evolved over centuries � democracy, markets, and science. ..."
"... And as for social media, the new highly-decentralized content producers, to the extent they are originators of new information, the claims made there are slowly becoming subject to the same checking and assessment routines as are claims advanced in other realms. (No, the Pope did not endorse Donald Trump.) As for intelligence services, in which the experts' job is to know more than is public, it is the newspapers that make them less secret. More than any other institution in democratic industrial societies, newspapers produce a provisional version of the truth. So the condition of newspapers should concern us all ..."
"... In What If the Newspaper Industry Made a Colossal Mistake? , in Politico , Jack Shafer speculated recently the newspaper companies had "wasted hundreds of millions of dollars" by building out web operations instead of investing in their print editions, "where the vast majority of their readers still reside and where the overwhelming majority of advertising and subscription revenue still come from." As perspicacious a press critic as is writing today, Shafer was reporting on an essay by a pair of University of Texas professors, H. Iris Chyi and Ori Tenenboim, in Journalism Practice . ..."
"... More serious has been the lack of thinking-out-loud about the future of those print editions. No one needs to be told that smart phones have replaced newspapers, radio, and television as the tip of the spear of news. It appears that Facebook and Twitter have supplanted cable television and radio talk shows as the dominant forum for political discussion. ..."
"... The immense prestige associated with newspapers arose from the fact that for centuries they were reliable money machines, thanks to their semi-monopoly on readers' attention. ..."
"... In a world in which the gas pump starts talking to you when you pick up the hose and video commercials are everywhere online, the virtues of print are many-sided, for readers and advertisers alike. In Why Print Still Rules , Shafer laid out the case for print's superiority as a medium � "an amazingly sophisticated technology for showing you what's important, and showing you a lot of it." It's finite. It attracts a paying crowd, which is why advertisers are willing to pay more � much more � for space. ..."
"... The WSJ costs $525 a year for six days, including a first-rate weekend edition. The Times charges $980 a year for seven days a week, including a Sunday edition that contains much more content than most readers need. (Its ads bring in a ton of money.) That's why the WSJ decision to cut back to from four to two daily sections is significant: it acknowledges the reduced but still very powerful claim of print on consumers' ever-more stretched budget of time. It puts more pressure on the Times's luxury brand. ..."
The Other Infrastructure, Economic Principals: Bridges, roads, airports, the electricity grid, pipelines,
food and fuel and water systems: all of these are underfunded to some degree. So are the myriad new
arrangements, from satellites and ocean buoys to emission scrubbers and ocean barriers, required
to keep abreast and cope with climate change. Which wheels will begin to get the grease in coming
months? We'll see.
At the moment I am even more interested in the well-being of social information systems. Last week
The Wall Street Journal announced it would reduce its print edition from four sections to
two, bringing it into line with the Financial Times. Should that be an occasion for concern?
On the contrary, let me try to convince you that it is welcome news.
Although newspapers still carry crossword puzzles, comics, and agony aunts, and churn out all manner
of fashion magazines, they are mainly in the business of producing provisionally reliable knowledge.
What's that? I have in mind propositions on which every honest and knowledgeable person can agree.
Not so much big judgements, such as whether climate change is occurring or whether Vladimir Putin
is a despot, but rather ascertainable facts, beginning with what parties to various debates are saying
about themselves and each other and about their pasts. These are the foundations on which big judgements
are based.
A case in point: almost all of what the world knows about Donald Trump, that is, that we consider
that we really know, we owe to The New York Times , The Wall Street Journal
, The Washington Post , the Financial Times , and various newspaper-like organizations,
Bloomberg News, Politico , and the Guardian in particular. The Associated Press, Reuters
and the BBC contributed a little less; magazines still less; the rest of radio and television, hardly
anything at all, with the notable exception of Fox News anchor Megyn Kelly's lead off question in
the first presidential debate . Someone will prepare a list of the fifty or a hundred of the
best stories of the last year, I expect. I'll only mention a few memorable examples:
The Post's coverage of the Trump Foundation; the Times' many investigations,
including those of his tax strategies and his practices as a young landlord; a Politico
roundtable of five Trump biographers; the WSJ's pursuit of the George Washington bridge
closing, coverage that changed the course of the campaign; and the FT's continuing emphasis
on the foreign policy implications of the American election. The same thing could be said about
newspapers' coverage of Hillary Clinton.
Newspapers exist to process and assess the rival claims of experts -- politicians, governments,
corporations, the professoriate, pollsters, authors, whistleblowers, filmmakers, and denizens of
the blogosphere. When its own claims to authority are misplaced -- a spectacular example having been
the Monday before the election, when newspapers were still expecting a Clinton victory -- the print
press and its kith and kin correct themselves (the next day) and investigate the prior beliefs that
led them to error. A free and competitive press resembles the other great self-correcting systems
that have evolved over centuries -- democracy, markets, and science.
And as for social media, the new highly-decentralized content producers, to the extent they are
originators of new information, the claims made there are slowly becoming subject to the same checking
and assessment routines as are claims advanced in other realms. (No, the Pope did not endorse Donald
Trump.) As for intelligence services, in which the experts' job is to know more than is public, it
is the newspapers that make them less secret. More than any other institution in democratic industrial
societies, newspapers produce a provisional version of the truth. So the condition of newspapers
should concern us all.
In What If the Newspaper Industry Made a Colossal Mistake?, in Politico, Jack Shafer speculated
recently that the newspaper companies had "wasted hundreds of millions of dollars" by building out web
operations instead of investing in their print editions, "where the vast majority of their readers
still reside and where the overwhelming majority of advertising and subscription revenue still come
from." As perspicacious a press critic as is writing today, Shafer was reporting on an essay by a
pair of University of Texas professors, H. Iris Chyi and Ori Tenenboim, in Journalism Practice.
Chyi and Tenenboim overstated their case, I think. Those dollars invested in web operations weren't
wasted; they had to be spent. Most newspapers, all but the WSJ , made the mistake of making
their content free on the Web for several years. Only gradually did they come round to the approach
the Journal had pioneered: a paywall, with some sort of a metering technology designed to
encourage online subscriptions.
More serious has been the lack of thinking-out-loud about the future of those print editions.
No one needs to be told that smart phones have replaced newspapers, radio, and television as the
tip of the spear of news. It appears that Facebook and Twitter have supplanted cable television and
radio talk shows as the dominant forum for political discussion. But newspapers haven't gone away;
indeed, by establishing beachheads for the content they produce on social media platforms, they have
become more influential than ever.
The immense prestige associated with newspapers arose from the fact that for centuries they were
reliable money machines, thanks to their semi-monopoly on readers' attention. It is no longer news
that the revenue model has turned upside down. Advertisers used to pay two thirds or more of the
cost of publishing a successful newspaper; today it is more like a third, if that. Attention was
slowly eroded away by radio, broadcast and pay television, until the invention of search-based advertising
in 2002 turned decline into a seeming rout. The basic business model is still the same, as Tim Wu
explains in
The Attention Merchants: The Epic Scramble to Get Inside Our Heads (Knopf, 2016): "free diversion
in exchange for a moment of your consideration, sold in turn to the highest-bidding advertiser."
It's the technology that has changed.
In a world in which the gas pump starts talking to you when you pick up the hose and video commercials
are everywhere online, the virtues of print are many-sided, for readers and advertisers alike. In
Why Print Still Rules, Shafer laid out the case for print's superiority as a medium -- "an amazingly
sophisticated technology for showing you what's important, and showing you a lot of it." It's finite.
It attracts a paying crowd, which is why advertisers are willing to pay more -- much more -- for space.
The fancy newspapers are in good shape to refurbish their printed editions. Three of the four
have new owners with deep pockets. Rupert Murdoch, a maverick Australian, now a US citizen, bought
the WSJ in 2007; Amazon's Jeff Bezos, thought to be the second richest American, after Bill
Gates, bought the WPost in 2013; the Japanese newspaper group around Nikkei bought
the FT in 2015. The NYT is the shakiest of the four, but there seems little doubt that
the cousins of the Sulzberger/Ochs clan will find a suitable partner, the oft-expressed enmity of
President-elect Trump notwithstanding.
Pricing, meanwhile, is all over the map, as is the appropriate size of the paper edition itself.
The FT delivers two sections of tightly-written no-jump news over five days and a great weekend
edition for $406 a year. The WSJ costs $525 a year for six days, including a first-rate weekend
edition. The Times charges $980 a year for seven days a week, including a Sunday edition that
contains much more content than most readers need. (Its ads bring in a ton of money.) That's why
the WSJ decision to cut back from four to two daily sections is significant: it acknowledges
the reduced but still very powerful claim of print on consumers' ever-more stretched budget of time.
It puts more pressure on the Times's luxury brand.
It's the regional papers that worry me, as much for their roles as distributors of news as producers
of it. When the Times , WSJ and FT are placed on the stoop in the morning, my
old paper, The Boston Globe , is not among them. At around $770 a year, it simply costs too
much, especially considering the meager local content it provides. Assume that the "right" price
for a year of a fancy paper today is somewhere between the FT and the WSJ , at around
$500 a year. At around half as much, or even $300, a print edition of the Globe would be highly
attractive. My hunch is that circulation would again begin to increase, and, in the process, shore
up the metropolitan area's home-delivery network. Instead I buy digital versions of the Globe
(for $208) and the Post (for $149). Want to know what a year of the print Post costs?
So does the copy editor. But I stopped looking after interrogating the web page for five minutes.
Newspapers are notorious for gulling their subscribers. Not even the FT is straightforward
about it.
Like the other leading papers � the Chicago Tribune , Los Angeles Times , Philadelphia
Inquirer , and Baltimore Sun � the Globe was sold for a song to a non-newspaper
owner in the course of the panic that followed the advent of search advertising in 2002. These publishers
no longer seem to see themselves as part of an industry that was quite tight-knit before the fall.
That's another disadvantage with which the big national dailies must cope. For many years, newspaperfolk
considered that their businesses were mostly exempt from the laws of supply and demand. Price cuts
play a big part in the lore of its past. Today, the future of the industry depends on the recognition
that price/performance is everything.
"This book was a pleasurable, gripping, interesting read...It is academically
focused with lots of bibliographic notes and references, yet it is clearly written
for the general reader too. The skills of a journalist shine through: collect,
curate and create a clearly understandable text from a seething mass of ideas."
(Darren Ingram, Darren Ingram Media)
General readers, media and publishing professionals, journalism students
"[A] hard-hitting examination of the future of news and reporting - and a
'must' for social issues and journalism collections alike." (California Bookwatch,
The Journalism Shelf Midwest Book Review )
"The book is essential reading for many journalists today who must prepare
themselves for the digital dilemmas of tomorrow." (Geoff Ward All Voices
)
"The book is optimistic without being sentimental, thought-provoking without
being pretentious and realistic without being harsh, which makes it comforting
for someone with a keen interest in seeing journalism prevail and hopefully
eye-opening for those who wish to better understand it." (Madeleine Maccar
Chicago Center for Literature and Photography )
"Commendably well written and annotated, this volume will be valuable to
anyone interested in journalism, mass communication, or digital media. Summing
up : Highly recommended." (R.A. Logan CHOICE )
"Brock's writing is crisp, concise, and clear and his research extensive.
The book is impeccably edited and presented in a very reader-friendly fashion...As
reference material, Out of Print is an essential addition to any media-related
collection. To members of the journalism field who've endured years of angst
over the future of their profession, it's so much more. Brock's analysis is
too well-reasoned and supported to be easily dismissed as blind optimism, lighting
a beacon of hope to those interested in seeing journalism right itself from
its current state of upheaval." (Rich Rezler ForeWord Reviews )
"[A]rgues that the experimentation and inventiveness of the new news media
are cause for greater optimism than the red ink on the balance sheets of media
companies. Seeking to reassure the doom-mongers, he delves back into the history
of journalism and demonstrates the shaky beginnings and rapid innovation that
powered news journalism for three centuries before the maturation and slow decline
of the business in the 20th century. His précis of the history is fascinating
and elegantly done." (Emily Bell New Statesman )
"A brief survey of journalism's history and evolution leads toward modern
transformations that are forcing people to rethink how journalism can be accomplished,
both ethically and profitably... Out of Print is a 'must-read' for anyone
in today's journalism or periodical industries, and is worthy of the highest
recommendation for public or college library Media Studies shelves." (Library
Bookwatch, The Journalism Shelf Midwest Book Review )
"[P]rovides an insightful and detailed analysis of journalism through history
and reviews the effects of the digital age on journalism's current state, as
well as its potential future... By working through the history of journalism
starting from its uncertain beginnings with the development of the postal service
in the 15th century, Brock emphasizes the fact that journalism has never been
fixed, but has continued to develop and evolve in a fluid manner and has undergone
radical periods of change before the development of the internet in the 1990s...
Although arguably an overly positive analysis of journalism today, Brock's stance
is refreshing and the book is a pleasure to read."
( WAN-IFRA )
"A good overview of the problems--and some of the opportunities--facing those
in the world of media. While the book paints a picture of where the newspaper
industry has gone wrong, which is a sad story that tends to dominate the media
(surprise!), it also makes the oft-overlooked point that print media is just
one stage in the evolution of journalism. Therefore, it's possible to come away
from this book, which is ostensibly about the death of a great industry, feeling
upbeat and even excited about the possibilities for the next stage of media's
evolution. What exactly that will be is uncertain, but it's clear--from the
book and just by surveying the current media landscape--that it will be a lot
less centralized, more democratic and, likely, much less profitable for those
in charge than in print media's heyday. Which is probably a good thing." (Phil
Stott)
"[Brock's] particularly good at analyzing the changes which have taken place,
such as digital technology, and showing that they should force a complete rethink
of journalism rather than attempts to adapt old ways to fit new technology.
The chapter on 'Rethinking Journalism Again' is a thought-provoking look at
what is changing and how it should be regarded both within the industry and
as a consumer." (Sue Magee The Bookbag )
"[A] comprehensive look at the history of the news. getAbstract recommends
[Brock's] historical overview to those in and out of the news business who believe
that a free society prospers when journalism does." (getAbstract Inc.
)
" Out of Print does what 'think books' about contemporary journalism
do best: It addresses a larger public who might not know about the problems
facing journalism but also offers an academic discussion rooted in a conversation
about the past, present, and future of journalism. Brock's work makes a significant
contribution in the field." (Nikki Usher International Journal of Communication
)
"[A]n unsentimental look at the fall of the 'golden age' of newspapers as
much as it is an optimistic take on the future of the news business...Brock's
frank, level headed take on business models, ethics, and other tenets of journalism
is approachable and refreshing." (Karen Fratti Media Bistro, 10,000 Words
)
"Its greatest virtue, by far, is in seeing the changes in journalism throughout
history as a ceaseless process. Brock refuses to fall into the trap of technological
determinism. He accepts that technological developments lead to change but rightly
understands that, even between the inventions which have influenced how news
is gathered and transmitted, journalism has always been in a state of flux."
(Roy Greenslade The Guardian )
"All journalists and certainly journalism students should read this book.
And bloggers and technologists interested in the media biz should, too." (Hope
Leman Critical Margins )
Top Customer Reviews
5.0 out of 5 stars: Lessons in digital disruption, by John Gibbs, September 5, 2013 (Kindle Edition)
Many busy people
take journalism for granted, but the disruption of journalism should be a matter
of urgent concern to democratic societies because the free flow, integrity and
independence of journalism is essential to citizens who vote, according to journalism
professor George Brock in this book. The book aims to explain why the news media
is undergoing radical alteration, and what the result ought to be and might
be.
The book provides an entertaining overview of the history of journalism,
from its messy and opinionated beginnings featuring sensational and unreliable
news stories through to the Leveson Inquiry in 2011 and 2012 into the culture,
practices and ethics of the British press following the News International phone
hacking scandal. In a 2000-page final report, Justice Leveson made a range of
recommendations which would improve the protection of privacy in the UK and
restrain the excesses of the press.
However, it is not the Leveson recommendations which provide the greatest
threat to the press; rather, it is the digital disruption brought about by the
Internet. Shrinking subscriber bases and advertising revenue have resulted in
the crumbling of the established business model. Experiments have been made with
paywalls and meters, but so far no-one has established a clearly viable new
business model.
5.0 out of 5 stars: Journalism: Past, Present and Future, by Shalom Freedman (HALL OF FAME), December 26, 2013 (Paperback)
This is a book which
in a sense is written in the hope of revitalizing Journalism. It provides a
history of the business and tries to contend with the general pessimism which
has come to the profession in recent years with the contracting of Print Media
and the ascension of digital formats of expression. It points out that the centralized
powerful Print world many think of as the only face of Journalism is a relatively
recent development in its history. The Golden Era of Journalism which began
in the 1890's Brock suggests had already begun to fade somewhat in the fifties
of the twentieth century. Brock tells the story of the digital transformation:
the drastic loss in advertising revenues, the contraction in personnel and
outlets which came to the Print world once the Computer began taking over. He
indicates however that News as we think of it was not necessarily the primary
business of that grab-bag creation the Newspaper. All in all he provides in
this age of Abundance of Information a great deal of information and clear thought
about journalism, its ideas and ideals. He suggests that much of its future is
open to experimentation and that new developments will come which will help
strengthen the free flow of ideas, the objective reporting of reality, the investigating
of and keeping honest government and business officials. This is a book for
the General Reader but it should be of course of first interest to all who practice
and would practice the trade of Journalism.
3.0 out of 5 stars: Clear-Eyed Dissection of the Contemporary Newspaper Industry (with a British focus), by Dr. Laurence Raw, January 17, 2014 (Paperback)
OUT OF PRINT takes
a long, hard look at the British newspaper industry - its past, present and
future. The author, a former journalist with many years of experience - for
example, at the London SUNDAY TIMES - looks at the way in which newspapers acquired
a position of considerable primacy in British culture from the mid-eighteenth
to the late twentieth centuries, a position that is now under threat through
digitization. Brock is well aware of how the internet has changed the ways in
which readers consume news - looking for outlets other than that of the newspapers
and exercising freedom of choice, as well as making the news themselves through
blogs. On the other hand, he believes that there is a future for the printed
newspaper - perhaps the circulation figures will not be as substantial as they
were in the past, but Brock understands how many readers prefer paper to the
screen, even if they own an iPad or a smartphone. Ultimately OUT OF PRINT calls
for the newspaper industry to become more flexible, to reject its antediluvian
practices of the past, both in terms of news-gathering and distribution, and
adapt itself to changing practices. A combination of the tried and tested, the
reliable and the trustworthy, allied to new, innovative methods of delivering
the news, both in print and online, seems like the formula for future success.
Perhaps the book is a little too parochial in focus (there is too much on the
Leveson inquiry, and not enough on developments within the American newspaper
industry), but it is nonetheless well written and highly accessible.
Writing a book is a grueling, long, long job. And the chances that your book becomes
a hit are slim even if you manage to find a publisher -- too much depends
on advertising...
Dear patient readers,
Loyalists may have noticed that I am still not back up to my old level of
posts. That is because I still have heavy duty book responsibilities. I now
know why Spalding Gray called one of his books "the monster in the box" (in
the box in those days because manuscripts were typewritten).
The manuscript was in to my editor August 4. I still don't have her edits
back, but I have tons to do anyhow (this is typical, BTW) particularly because
one chapter still does not work and needs to be rewritten yet again (9th time,
7 of 8 previous rewrites were major. It does not want to submit). And aside
from cleanup and nailing down some very important open details (to say anything
about CDOs, you need to do primary research, the media did not get deeply enough
into that one, no doubt because it is a difficult product and data does not
converge neatly), I need to worry about continuity and redundancy (for instance, when
I talk about CDS and return to it 4 chapters later, how much do I have to reintroduce
the concept for a lay reader?).
The book was supposed to go to copy edit August 18, which was clearly nuts.
I had to make myself obnoxious to get that pushed back a mere six days. I get
half the book back out of copy edit Sept 2, the rest the following week. The
overall deadline is still the same, which means the manuscript is pretty much locked
down on Sept 23.
I review copy edits and can still make changes till then, and will keep editing
while in copy edit. I also need to get some permissions for a few charts I use
before Sept 23.
The manuscript then goes for a proofreading (an extra step most publishers
don't take) and goes into page proofs, which I review again, but you really
cannot change page proofs much at all (you can maybe artfully change a line
if it does not change the rest of the page).
The book goes into galleys as of mid October and galleys are ready Nov. 1.
And you may remember the book is not out until late Feb-March. Why such a
long lead time? They want to send galleys to long lead time publications. It
takes time to assign books to reviewers. To get reviews in some magazines for
Feb-March, they need galleys 4 months plus prior.
Now this book is a big historical sweep, but I wonder what happens if this
ides of September is even a pale shadow of the last one.
And I have a client project starting the day the book goes into copy edit
(Aug 24), so even if I wanted to take a few days off then that is not in the
cards.
KISSING FROGS: THE GREATEST RISK (John
Joss, August 20, 2007)
"Ability is of no account without opportunity"
-- Napoleon Buonaparte
Career choices remain, for most of us, the highest life risk. Bad decisions,
early, may spell doom. The rot may set in while we are still in our teens, picking
poor study specialties that become dead ends. Though we will each have ten or
more separate jobs during our working life, it's better to work into areas with
genuine career potential. Buggy whips are no longer made in quantity. Repairing
typewriters is not a growth trade.
The most significant risk I ever took was trying to become a writer. To be accepted
as a writer is to offer one's most intimate self -- the mind and heart -- for public
appraisal. If this leads to authentication, so much the better. If not . . .
After years spent slaving in the corporate world and creating soulless promotional
and business writing, I decided to take the plunge and write a novel -- well,
three. Because the mortgage payment fell due every month, I wrote them between
three and eight AM while working full time (for a freelance, around 60-80 hours
a week). Each novel took nine months, a pregnant period to consider. When my
first, SIERRA SIERRA, was taken by William Morrow in New York, I was elated.
I was launched as a novelist. No longer would I need to slave over commercial
'writing,' with its intrinsic limitations and its lack of creativity. Now I
could let my brain, heart and imagination soar in a series of novels already
planned in my mind. I could not have been more wrong. How naïve! What delusions!
One accepted book, especially a first novel, does not begin to approximate a
writing career.
People who have chosen wisely not to take up writing for a living often ask
me 'What's it like to be a writer?'
I sometimes detect a hint of envy, for reasons that escape me. These are, as
far as I can tell, people -- seemingly sane -- already receiving a regular paycheck.
My counsel to them is invariably to keep working at that salaried job they now
hold and study to remain current, or become a home hobbyist with a working spouse.
Many people apparently imagine that writers enjoy a glamorous life: lots of
partying, approached by agents, directors and producers eager to produce articles,
books, films or TV series based on their work, traveling to exotic locations,
being wined and dined by publishers who sit at their feet and press huge advances
and lucrative contracts on them, rubbing shoulders with celebrities, receiving
adoration from Beautiful People, being interviewed and lionized by the media,
earning pots of money.
For a few of the world's scribblers, this is reality. You read about them everywhere:
their latest work or three (already accepted, based on a working title, huge
advances paid), their brushes with the law, their drugs, sexual proclivities
and conquests, their current partner(s), what they are wearing and eating, their
travels -- to Venice for Carnevale, to Tibet to meet the Dalai Lama, to the Vatican
(private audience with the Pope).
For the vast majority of writers, this existence is fantasy. The real writing
life is solitary, often lonely, with (for me, anyway) endless hours spent trying
to assemble words correctly, failing frequently. And badly paid. Perhaps one
of the riskiest and most precarious activities on earth, especially if you enjoy
eating and drinking, clothing and shelter. I have had years in which I have
earned six figures (once, years ago -- commerce pays, art doesn't). But I have
had years, an embarrassingly large number, in which my writing earnings were
in four figures.
It is not easy to live on a four-figure salary in the U.S., well below the poverty
line, especially not in the high-cost-of-living Bay Area of Northern California.
Once, in a burst of masochism, I calculated that
I had earned less than $1 an hour in one particularly bad year;
that calculation did not include work done but not sold. For the sake of your
mental health, try not to indulge in such math. And stay out of the cooking
sherry: alcohol is a depressant and most writers are already depressed enough.
The great, Oscar-winning screenwriter William ('Butch Cassidy') Goldman wrote
famously: "No one knows anything." He was referring to Hollywood green-lighters'
inability to predict movie popularity, the confusion and rapid head-lopping
surrounding costly failures deemed certain winners before production and the
surprising success of films despised and predicted to fail, often rejected by
dozens of the industry's supposedly finest arbiters of quality and box-office
potential.
The same phenomenon applies to every artistic field. The history of art in every
form is littered with examples of artists now accepted as great who were spurned
when they first emerged. Mozart, Van Gogh . . . the list is endless and I am
not comparing myself to them. Since writing is applied thought and thought precedes
any physical manifestation of worthwhile art, I confine my comments here to
writing, specifically to the writing of books, though I've written in many other
forms during my so-called writing life (for some unaccountable reason, non-writers
always equate writing with books). So, risk takers, go for it and try to be
a writer. You have nothing to lose but the roof over your head and the ability
to eat regularly. 'Trust me.' Ooof.
To sell a book of any worth to a major publisher a writer needs a capable, professional
agent. On behalf of thousands of writers without an agent or access to one via
insider introduction, I will describe what it is like for an outsider to try
to gain representation. My over-all professional background: a writer of 20
books, published in New York (Morrow, fiction; Ballantine, nonfiction), and
a freelance with a long record of achievement in print, broadcast and Internet
media worldwide for some of the best corporations, magazines, media and similar
interests. Those credentials, plus $1, will buy you a really rotten cup of coffee.
Before evaluating the agent perplex, consider the basic dynamics of book writing
in this age of bottom-line, 'pull' publishing in which publishers rarely support
new authors:
If you get a great, original idea for a book -- fiction or nonfiction;
And if you have the skill, energy and dedication to write it;
And if, preferably, you're young and of 'desirable' gender and ethnicity (translation: not old, not male, not Caucasian);
And if you manage to hit a cultural 'fad' window successfully;
And if you know a friendly editor to straighten you out before you attempt to submit your oeuvre;
And if you have the courage, skill and will to edit your own work meticulously to punctilious standards of quality;
And if you can find the 'right' professional agent (see below) to represent your book;
And if that agent reads your work, likes it and agrees to represent you;
And if that agent knows, by first name, publishers' editors who might like it;
And if one of those editors likes it enough to put it on his or her work list and supports it enthusiastically;
And if it survives vs. the house's numerous other projects;
And if the book acquires production values and a publicity budget to promote the work (i.e. publisher investment based on estimated potential revenues);
And if the critics, reviewing maybe one in 100 books, like it and the review is published in a publicly visible place;
And if the distribution system, down to major chains such as Amazon, Barnes & Noble and Borders and their equivalents outside the U.S., selling 95% by volume and taking ~5% by title of all books offered (mostly from 'name' writers and the major publishers), accepts and distributes the book;
And if enough public word-of-mouth buzz creates decent sales numbers and long-term attention for you and your work;
Then maybe you might have published a successful book. Maybe. I say again: maybe.
Might. I repeat: might. Don't try to spend the money until the check has cleared.
The odds of the above happening -- all must, for success -- are hundreds of thousands
to one against, and success may take years or decades. The odds are higher that
lightning will strike you or that you will win the lottery -- or, more likely, that
you will shrivel and die of old age in the meantime.
Welcome to writing reality. Never forget the difference between amateurs and
professionals, especially when it comes to writing: amateurs can perform brilliantly
on occasion; professionals must deliver well, fast, consistently, no matter
how they feel, or starve. Professional writing is merciless, and its deadlines
are relentless meat-grinders, writer's block or not.
Publishers are under immense pressure to be profitable: many or perhaps most
are now owned by conglomerates run by accountants focused on bottom-line profits,
based on evaluations suitable in, say, the manufacturing or service industries.
They cannot afford to staff enough competent editors to read submissions
from authors, and so consign all unsolicited material to 'slush piles.'
Supply far outstrips demand. They
are receiving enough from writers they are already publishing and rely on agents
as gatekeepers. Much great writing dies on slush piles (an agent reportedly
picked billionaire J.K. Rowling's first Potter randomly from his slush pile).
By contrast, dead authors such as Ludlum have 'new' books ghost written and
earn millions from the grave. Brands sell regardless of quality. All Ludlum's
books were reviewed at once, in TIME: "The Ludlum Formula."
Publishers know that only one in ten offerings will succeed, even from known
sources, but don't know which one that might be. That's why agents can rarely
get new writers accepted regardless of quality. A typical agency receives 500+
submissions per month (25+/day, but sometimes four or five times as many) and
rejects >99.5%. An aspiring writer could query 250 agents (about the right number
of the 2,500 listed in specific genres) and get perhaps one positive response≈≈but
don't bet on it. My favorite, probably apocryphal tale in this area is about
the chairman of a huge conglomerate who bought a New York publishing house.
"How many books did you publish last year?" he asked the CEO of the acquired
publishing house.
"About 650," said the CEO.
"How many made money?"
"Oh, maybe 65."
"Well, next year you should publish 65≈≈the ones that make money." Brilliant.
The paradox: as Goldman explained, "no one knows anything." Many best-sellers
were rejected dozens of times (Richard Bach's Seagull went to 30 publishers
before Eleanor Friede at Macmillan took it). Jerzy Kosinski's Painted Bird
was submitted as a test under another title soon after publication; Doris Lessing,
probing the realities for unknowns, sent two of her best-sellers under other
titles. Result: all were rejected, by form letter. This experiment has been
repeated many times, with identical results, and reported widely in the Press.
There were no follow-up accounts of writers' suicides
in which the suicide note cited these awful realities.
Writing correctly is basic and essential, but for all but geniuses it
takes a long time to learn. Experienced readers, starting with agents looking
at queries, reject incompetents outright. Flawless spelling, grammar, syntax,
vocabulary and style are vital to anyone trying to write professionally. It's
akin to the need for job applicants to dress and behave properly for interviews -- inappropriate
speech, manner and dress close interviews almost before they start. Cap on backwards?
Bad idea.
Writing competence is obvious to a capable agent in the first few pages, or
in the first paragraph. Note: this does not apply to best-seller junk from established
'authors' who are accepted regardless of literary skills -- one well-known and
financially successful 'writer' of flash trash for a big house sends in her
'work' hand written in pencil on un-numbered legal-pad pages, leaving an editor
to assemble the mess and turn it into a book. The 'writing' is barely readable,
I might say. I do say.
[May 11, 2007]
theShepler
How to publish your book for a dozen friends
Paper is cumbersome to deal with in great quantities, but many people
still find it helpful (comforting?) to have documents printed on paper instead
of reading them in electronic form.
As you have noticed by now, I am involved with the NFSv4.1
effort and the resultant document which is currently at
464 pages. While a great majority of those NFS engineers interested in this
document will opt for either a soft copy or the
html formatted version there are those engineers that like to have a stack
of paper to inspect.
Well, printing the 464 pages on your printer is likely a hassle; toss in
getting the pagination correct or binding the result, and it is a real pain.
The audience for this type of technical document is very limited. The audience
is limited even further given that the protocol is still evolving. How does
one get something like this easily printed? Drop it off at Kinko's? Maybe. How
do you share the result with your engineering buddies? Especially when they
are spread around the world? Well, one example is
cafepress.com (I am sure there are others, but this is the one I bumped into
first).
Create a simple account and the resultant
store, upload
a PDF of your document in the page size that is preferable for your material,
generate a little cover art, and voila -- you have your own
NFSv4.1 Draft 10 Book. Printed, bound, and shipped to your door. What a
great deal!
In my particular case, I am not in it for the money. Just the convenience.
The resultant price is the cost charged by cafepress.com and there is a nominal
shipping fee. Very cool and helpful to the 12 or so NFSv4.1 friends that really
care.
And yes, I chose the genre for the book very late at night with little patience
for choosing something more appropriate.
SAN FRANCISCO (Reuters) - Free software is about to get freer.
Wikipedia founder Jimmy Wales said on Monday his for-profit company, Wikia
Inc., is ready to give away -- for free -- all the software, computing, storage
and network access that Web site builders need to create community collaboration
sites.
Wikia, a commercial counterpart to the non-profit Wikipedia, will go even further
to provide customers -- bloggers or other operators who meet its criteria for
popular Web sites -- 100 percent of any advertising revenue from the sites they
build.
Started two years ago, Wikia (http://www.wikia.com)
aims to build on the anyone-can-edit success of the Wikipedia online encyclopedia.
Using the same underlying software, called MediaWiki, Wikia hosts group publishing
sites, known as wikis, on topics from Star Wars to psychology to travel to iPods.
"It is open-source software and open content," Wales said in a phone interview.
"We will be providing the computer hosting for free, and the publisher can keep
the advertising revenue."
That could prove disruptive to business models of Web sites that provide
free services to customers but require a cut of any resulting revenue in return.
Wikia gives away the tools and the revenue to its users. It requires only
that sites built with the company's resources link to Wikia.com, which makes
money through advertising.
Wikia calls the free-hosting service "OpenServing" (http://www.openserving.com).
It runs on an easy-to-use version of MediaWiki software developed by ArmchairGM.com,
a sports fan community site Wikia recently acquired and plans to extend.
Wales is betting the plunging cost of computers and networks can help Wikia
support the free services offer. "It is becoming more and more practical and
feasible to do," he said.
WISDOM TO PREVAIL
"We don't have all the business model answers, but we are confident -- as
we always have been -- that the wisdom of our community will prevail," he said.
The move follows the announcement last week that Amazon.com (NASDAQ: AMZN) had
become Wikia's first corporate investor and is acting as the sole investor in
Wikia's second round of funding. Terms were not disclosed.
Wikia took $4 million in funding in March from Bessemer Venture Partners,
Omidyar Network, and high-profile Silicon Valley "angel" backers including Marc
Andreessen, Dan Gillmor, Reid Hoffman, Mitch Kapor, and Joichi Ito of Japan.
In recent months, Amazon.com has revealed an ambitious strategy of its own
to offer a range of low-cost computer, data storage and Web site hosting services
to companies large and small, which could come into play for Wikia.
Wales said using Amazon to supply Web services is not part of Wikia's deal
with Amazon. "Potentially, but this is really completely separate," he said
when asked if there was a tie.
Wikia aims to become a clearinghouse of free software.
Armchair's software is the first of hundreds of freely licensed software
packages to be hosted by the company in the near future, Wales said. These could
include popular open-source publishing software such as WordPress and Drupal.
Consumers would then have a single password across all sites.
"The real concept is to become much broader, to host lots of different free
software and free content, Wales said.
Thirty thousand users have posted 400,000 articles so far on Wikia sites.
The San Mateo, California-based company employs 38 people, including top volunteer
editors from Wikipedia.
Print on demand (POD) is the commonly used term for the digital
printing technology that allows a complete book to be printed and bound in a
matter of minutes. This makes it easy and cost-effective to produce books one
or two at a time or in small lots, rather than in larger print runs of several
hundred or several thousand.
POD has a number of applications. Commercial and academic publishers use it
to print advance reading copies, or when they can't justify the expense of producing
and warehousing a sizeable print run--for instance, to keep backlist books available.
Some independent publishers use it as a more economical fulfillment method,
trading lower startup costs against smaller per-book profits (due to economies
of scale, digitally printed books have a higher unit production cost than books
produced in large runs on offset presses). Last but not least, there are the
POD-based publishing service providers, which offer a for-fee service that can
be described, depending on one's bias, as either vanity publishing or self-publishing.
The "POD Publisher" and the POD Stigma
Strictly speaking, "print on demand" is simply a term for a kind of printing
technology, and doesn't describe any particular business model. Over the past
few years, however, digital technology has become so firmly associated with
a particular complex of business practices that the term "POD publisher" has
taken on specific meaning.
What defines a POD publisher?
Inadequate selectivity. Some POD publishers accept everyone who
submits; others do more screening, but aren't expert enough to ensure high
quality.
Inadequate editing. Some POD publishers do no more than a light
copy edit, releasing books that are essentially unedited. Others employ
inexperienced or unprofessional editors, to more or less the same effect.
Some POD publishers do no editing of any kind.
High cover prices. As noted above, the unit cost for digitally
printed books is higher than for books printed on offset presses. Cover
prices, therefore, must be correspondingly higher in order for the publisher
to make a profit. Depending on length, a POD book can cost more than twice
as much as its offset counterpart (see the cost sketch below).
Short discounts. Booksellers expect discounts of 40% or more.
POD publishers often offer much smaller discounts.
Nonreturnability. Booksellers expect to be able to return unsold
books to the publisher for full credit. POD publishers rarely accept returns,
or if they do, have such a limited returns policy that it's hardly more
attractive than no policy at all.
Minimal marketing and distribution. POD publishers don't want
to cut into their profits by spending money on book promotion. They'll ensure
that their books are available for order online and through a wholesaler
such as Ingram, but they won't advertise, and will make little or no effort
to obtain professional reviews and bookstore placement.
Other nonstandard practices. These may include amateurish formatting,
terrible cover design, hellacious contracts, and fees of various kinds.
Most of these practices, including the fee, are characteristic
of the POD-based publishing service providers discussed in the next section.
However, they're increasingly common among POD-based independent publishers,
whose often inexperienced staff may not have the skill to rigorously select
and edit (never mind market and promote) their books, and whose shoestring budgets
force them to keep costs as low as possible.
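To make the cost point above concrete, here is a minimal sketch of the two
cost curves, in Python. The dollar figures are illustrative assumptions, not
any particular printer's rates: offset printing carries a large fixed setup
cost and a low per-copy cost, while digital printing has essentially no setup
cost but a higher per-copy cost.

    # Illustrative unit-cost comparison: offset vs. digital (POD) printing.
    # All dollar figures are assumptions made up for this example.
    OFFSET_SETUP = 2000.0    # plates and make-ready (assumed)
    OFFSET_PER_COPY = 1.50   # assumed run-on cost per copy
    POD_PER_COPY = 4.50      # assumed digital cost per copy, no setup

    def offset_unit_cost(copies: int) -> float:
        return OFFSET_SETUP / copies + OFFSET_PER_COPY

    for copies in (100, 500, 1000, 5000):
        print(f"{copies:>5} copies: offset ${offset_unit_cost(copies):.2f}/copy"
              f" vs. POD ${POD_PER_COPY:.2f}/copy")

With these assumed numbers the curves cross somewhere below 1,000 copies:
digital printing wins decisively for single copies and short runs, offset wins
for long runs, and that gap is why POD cover prices must run higher.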
Not all POD-based independents employ these practices, of course. Unfortunately,
a great many do. Together with the aggressive policies and poor-quality offerings
of the POD-based publishing service providers, this has tainted print on demand
in general. Many booksellers, reviewers, and readers are wary of POD on principle,
and may assume that a publisher that relies exclusively or mainly on digital
technology is a POD publisher, even if the publisher is entirely professional.
This is the POD stigma, and it's something that anyone who's thinking of signing
a contract with a POD-based independent publisher needs to take into account,
because it can make marketing extremely difficult.
IMHO Lulu is too expensive... I saw one book from lulu.com: it was overpriced
and badly written.
You need to read deeper into the article. Different publishers are accepting
source materials in different formats. Blurb has their composer on a web site,
Picaboo gives you a free download of their software, and Lulu takes PDFs. Shop
around, and find the one willing to work with you. They all seem comparably
priced for the end product, which isn't much more than you'd pay for an ordinary
hardbound edition from a well-respected author.
I played around with Lulu.com's print-on-demand service a few months ago;
it was surprisingly easy. I laid out the book in OpenOffice, saved it to a
PDF, checked it in xpdf, and sent the file to them. A week or so later, I had
a hard copy with a professional-looking cover and everything. One thing to note
before ordering from them: Lulu's 6" x 9" format is actually larger than most
paperback books; if you want yours to look "normal," don't use it. Anyway, overall
it was a fairly positive experience; I'd recommend them for low-volume book
printing.
The typical paperback (what's called a "mass market paperback" in the publishing
biz) is about 4.25 x 7 inches. The 6 x 9" size is called a "trade paperback."
My experience with lulu has been a little more mixed. I have some free-information
textbooks that I sell in print. (Even though they're free to download, sometimes
it's nice to have a real printed, bound copy.)
I had been buying them in batches of about 500 from a local guy, storing
them in a closet, and selling them to schools and individuals.
The problem was, it was just an incredibly inefficient way to do business.
Recently, I've been experimenting with lulu. The good news is that they're
incredibly efficient, and can produce a single book at about the same unit
price as I'd been getting from a traditional printing process (or maybe
just a little more).
When I get an individual retail order, they take care of it. I've canceled
my credit card processing account (which was a major pain to have). No more
trips to the post office to mail books. Most importantly, I no longer have
to keep ~$10,000 worth of inventory in a closet.
There have been some
problems, though:
They sometimes do a lousy job of packaging books, and the books
arrive damaged. If you complain, they're willing to send replacements,
but only if you send them digital camera pictures to show the damage.
It doesn't seem that reasonable to me to expect my customers to go through
that kind of hassle for something that's basically due to lulu's sloppy
packaging.
A bigger problem has been that they don't do a very good job of
supporting the PDF standard and OSS. Basically the situation seems to
be that they have a number of subcontractors who actually produce the
book, and which subcontractor it's sent to may depend on the geographical
location of the customer. These subcontractors don't fully support the
PDF standard. Part of the issue seems to be that some PDF documents
take a lot of CPU time to print, so they put arbitrary, undocumented
limits on various things. Also, there are things you can do with fonts
(such as subsetting) that are allowed by the PDF standard, but that
certain subcontractors may not allow. The machines (DocuTechs?) they
use are totally proprietary. What it adds up to is that some of my books
would print 10 or 100 times just fine, and then on one particular order
I'd get a message passed back from the subcontractor saying that it
failed to print. You can post on their forums about problems, and people
there have been very helpful, but you actually can't get any information
back from the subcontractor. Basically lulu says that if you use Acrobat
to produce your PDF, and embed all fonts without subsetting, it will
work, but if you use OSS to produce your PDF, it may or may not work.
A little ironic, since IIRC the founder of lulu was one of the guys
who started Red Hat. It's a little like web designers who only test
their sites on IE; lulu only cares if their system works on Acrobat
output.
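As an aside, if you want to check a PDF for these font problems before
uploading it, poppler's pdffonts utility lists each font along with whether
it is embedded; subset fonts are recognizable by the six-capital-letter
prefix (e.g., ABCDEF+SomeFont) that the PDF spec requires for subsets. A
minimal sketch, assuming pdffonts is installed and on your PATH:

    # Sketch: print a PDF's font table and flag subset fonts.
    # Relies on poppler-utils' "pdffonts"; parsing is best-effort.
    import re
    import subprocess
    import sys

    def flag_subset_fonts(pdf_path: str) -> None:
        out = subprocess.run(["pdffonts", pdf_path],
                             capture_output=True, text=True, check=True).stdout
        print(out)  # the "emb" column shows whether each font is embedded
        for line in out.splitlines()[2:]:      # skip the two header lines
            fields = line.split()
            if fields and re.match(r"^[A-Z]{6}\+", fields[0]):
                print(f"warning: {fields[0]} is a subset font")

    if __name__ == "__main__":
        flag_subset_fonts(sys.argv[1])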
If you know how to use LaTeX, you could set
up a lulu.com book in about 10 minutes.
LaTeX has had a "book" template for years, and true to its purpose as typesetting
software (LaTeX is Leslie Lamport's macro layer atop TeX, which Donald Knuth
created at Stanford), it creates an absolutely
picture-perfect document with chapter headings and eye-pleasing margins
and hyphenation. This is all done automatically according to the principles
of typography printers have been using for hundreds of years (though of
course they can be manually overridden). All that is required is that you
learn a few HTML-like markup commands to format your text.
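For the curious, the markup involved is roughly this; a minimal sketch of a
6" x 9" book, assuming the standard geometry package for the trim size:

    \documentclass[11pt]{book}  % the stock "book" template mentioned above
    \usepackage[paperwidth=6in,paperheight=9in,margin=0.75in]{geometry}
    \begin{document}
    \title{My Novel}
    \author{A. N. Author}
    \maketitle
    \tableofcontents
    \chapter{In Which We Begin}
    The text of the first chapter goes here; LaTeX supplies the
    hyphenation, margins, and page numbers automatically.
    \end{document}

Run that through pdflatex and you get a PDF already sized for Lulu's
6" x 9" format.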
I've printed one novel with lulu.com and LaTeX, and the inner text was easily
as good as hard-cover books from the 50s and 60s (which I consider kind
of a golden age of printing). The cover, though, does require some graphic
design skill, as I think a professional designer noted above (though lulu.com
does have a gallery of about 50 stock covers you can use).
Also, lulu.com was started by Bob Young, co-founder of Red Hat, because
of the terrible experience he had publishing a book through conventional
means. I believe lulu.com runs on FOSS software.
When Steve Mandel, a management trainer from Santa Cruz, Calif., wants to
show his friends why he stays up late to peer through a telescope, he pulls
out a copy of his latest book, "Light in the Sky," filled with pictures he has
taken of distant nebulae, star clusters and galaxies.
[Photo caption: Steve Mandel created his book "Light in the Sky" using software
from Blurb.com; the cover image is of the Hale-Bopp comet.]
"I consistently get a very big 'Wow!' The printing of my photos was spectacular
- I did not really expect them to come out so well." he said. "This is as
good as any book in a bookstore."
Mr. Mandel, 56, put his book together himself with free software from
Blurb.com. The 119-page edition is
printed on coated paper, bound with a linen fabric hard cover, and then wrapped
with a dust jacket. Anyone who wants one can buy
it for $37.95, and Blurb will make a copy just for that buyer.
The print-on-demand business is gradually moving toward the center of the
marketplace. What began as a way for publishers to reduce their inventory and
stop wasting paper is becoming a tool for anyone who needs a bound document.
Short-run presses can turn out books economically in small quantities or singly,
and new software simplifies the process of designing a book.
As the technology becomes simpler, the market is expanding beyond the earliest
adopters, the aspiring authors. The first companies like AuthorHouse, Xlibris,
iUniverse and others pushed themselves as new models of publishing, with an
eye on shaking up the dusty book business. They aimed at authors looking for
someone to edit a manuscript, lay out the book and bring it to market.
The newer ventures also produce bound books, but they do not offer the same
hand-holding or the same drive for the best-seller list. Blurb's product will
appeal to people searching for a publisher, but its business is aimed at anyone
who needs a professional-looking book, from architects with plans to present
to clients, to travelers looking to immortalize a trip.
Blurb.com's design software, which is still in beta testing, comes with a
number of templates for different genres like cookbooks, photo collections and
poetry books. Once one is chosen, it automatically lays out the page and lets
the designer fill in the photographs and text by cutting and pasting. If the
designer wants to tweak some details of the template -- say, the position of
a page number or a background color -- the changes affect all the pages.
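Blurb has not published how its software works, but the behavior described,
one tweak restyling every page, is what you get when each page holds a
reference to a single shared template object rather than its own copy. A toy
illustration in Python:

    # Toy model of template sharing: every page references one Template,
    # so editing the template restyles all pages at once.
    class Template:
        def __init__(self, page_number_pos: str, background: str):
            self.page_number_pos = page_number_pos
            self.background = background

    class Page:
        def __init__(self, template: Template, content: str):
            self.template = template  # shared reference, not a copy
            self.content = content

        def render(self) -> str:
            t = self.template
            return f"[{t.background} bg, page no. {t.page_number_pos}] {self.content}"

    book = Template(page_number_pos="bottom-center", background="white")
    pages = [Page(book, f"Photo {i}") for i in range(1, 4)]
    book.background = "cream"   # one tweak...
    for page in pages:
        print(page.render())    # ...shows up on every page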
The software is markedly easier to use -- although less capable -- than InDesign
from Adobe or QuarkXPress, professional publishing packages that cost around
$700. It is also free because Blurb expects to make money from printing the
book. Prices start at $29.95 for books of 1 to 40 pages and rise to $79.95 for
books of 301 to 440 pages.
Blurb, based in San Francisco, has many plans for expanding its software.
Eileen Gittins, the chief executive, said the company would push new tools for
"bookifying" data, beginning with a tool that "slurps" the entries from a blog
and places them into the appropriate templates.
The potential market for these books is attracting a number of start-ups
and established companies, most of them focusing on producing bound photo albums.
Online photo processing sites like
Kodak Gallery (formerly Ofoto), Snapfish and Shutterfly and popular packages
like the iPhoto software from
Apple let their customers order bound volumes of their prints.
These companies offer a wide variety of binding fabrics, papers, templates
and background images, although the styles are dominated by pink and blue pastels.
Snapfish offers wire-bound "flipbooks" that begin at $4.99. Kodak Gallery offers
a "Legacy Photo Book" made with heavier paper and bound in either linen or leather.
It starts at $69.99. Apple makes a tiny 2.6-by-3.5-inch softbound book that
costs $3.99 for 20 pages and 29 cents for each additional page.
The nature and style of these options are changing as customers develop new
applications. "Most of the people who use our products are moms with kids,"
says Kevin McCurdy, a co-founder of
Picaboo.com in Palo Alto, Calif. But he said there had been hundreds of
applications the company never anticipated: teachers who make a yearbook for
their class, people who want to commemorate a party and businesses that just
want a high-end brochure or catalog.
Picaboo, like Blurb, distributes a free copy of its book design software,
which runs on the user's computer. Mr. McCurdy said that running the software
on the user's machine saves users the time and trouble of uploading pictures.
The companies that offer Web-based design packages, however, point out that
their systems do not require installing any software and also offer a backup
for the user's photos.
As more companies enter the market, they are searching for niches. One small
shop in Duvall, Wash., called SharedInk.com,
emphasizes its traditional production techniques and the quality of its product.
Chris Hickman, the founder, said that each of his books was printed and stitched
together by "two bookbinders who've been in the industry for 30 or 40 years."
The result, he said, is a higher level of quality that appeals to professional
photographers and others willing to pay a bit more. Books of 20 pages start
at $39.95.
Some companies continue to produce black-and-white books. Lulu.com is a combination
printer and order-fulfillment house that prints both color and black-and-white
books, takes orders for them and places them with bookstores like
Amazon.com.
Lulu works from a PDF file, an approach that forces users to rely on basic
word processors or professional design packages. If this is too complex, Lulu
offers a marketplace where book designers offer their services. Lulu does offer
a special cover design package that will create a book's cover from an image
and handle the specialized calculations that compute the size of the spine from
the number of pages and the weight of the paper.
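The spine calculation Lulu automates is, in its usual industry form, just the
page count divided by the paper's pages-per-inch (PPI) bulk rating, plus an
allowance for the cover stock. A minimal sketch; the PPI values are
illustrative assumptions, not Lulu's actual figures:

    # Sketch of the standard spine-width formula: pages / PPI, plus a
    # cover allowance for hardbacks. PPI values here are assumptions.
    PPI = {"standard": 444, "heavier": 360}  # pages per inch of paper bulk (assumed)

    def spine_width_inches(pages: int, paper: str = "standard",
                           cover_allowance: float = 0.0) -> float:
        return pages / PPI[paper] + cover_allowance

    print(f'{spine_width_inches(150):.2f} in')  # ~0.34" for a 150-page softcover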
A 6-by-9-inch softcover book with 150 black-and-white pages from Lulu would
cost $7.53 per single copy.
These packages are adding features that stretch the concept of a book, in
some cases undermining the permanent, fixed nature that has been part of a book's
appeal. The software from SharedInk.com, for instance, lets a user leave out
pages from some versions of the book. If Chris does not like Pat, for instance,
then the copy going to Chris could be missing the pages with Pat's pictures.
Blurb is expanding its software to let a community build a book. Soon, it
plans to introduce a tool that would allow group projects, like a Junior League
recipe book, to be created through Blurb's Web site. The project leader would
send out an e-mail message inviting people to visit the site and add their contributions
to customized templates, which would then be converted into book pages.
"Books are breaking wide open," Ms. Gittins said. "Books are becoming vehicles
that aren't static things."
Fatbrain.com contends that Simon & Schuster's
decision not to let Fatbrain.com join other online retailers like Amazon.com
and Barnesandnoble.com in selling King's book was a way of punishing Fatbrain.com
for presuming to poach on the venerable publisher's territory.
Here's what this story led me to say to Fatbrain
CEO Chris McAskill:
S&S is right, though. Fatbrain put a stake in
the ground, and started acting like a publisher rather than a reseller. As I've
argued repeatedly on the StudioB mailing list, it's tough to be both a publisher
and a retailer, because you end up with the worst of both rather than the best
of both. Not only do publishers rightly see you as a competitor, but authors
see you as a publisher who has only one outlet: your own web presence. So unless
you get dominant share REALLY quickly, you're out of the game, because given
the choice between a publisher with single-point distribution and one with
multi-point, multi-layer distribution, authors will see the latter as having
significantly more reach.
Later on in the Salon story, this point is driven
home:
[David] Gernert [John Grisham's agent] says
that electronic publishers have approached Grisham, but none has succeeded
in persuading him to go digital, partly because the needs of author and
e-publisher don't, as Gernert sees it, entirely coincide. "For an electronic
publisher to say that they're publishing Grisham is instant legitimacy and
instant publicity and instant viability," he says. "As an author you would
want a story to go on as many computers, Web sites and devices as possible."
Until we have a system where "publishing" is
distinct from "distribution" and from "retailing", and "publishing" means being
an intermediary between authors and a complex, multi-point distribution system,
we won't have a market that is ready for prime time. (The nature of that publishing
intermediary includes shielding retailers (and ultimately consumers) from the
slush pile, and shielding authors from building relationships with thousands
of resellers.)
It's OK to have a publishing arm, I think, but
not OK to munge publishing and retailing together. It's OK for a retailer to
publish some of its own books, but not to compete with publishers for original
content by offering royalty levels that ignore what publishers bring to the
table. It's OK for a publisher to have some direct sales, as long as they don't
cut out their resellers by offering preferred pricing to direct customers.
Fatbrain has made some good progress by separating
mightywords.com from
fatbrain.com. That makes mightywords your publishing arm. Now, maybe you
can find a way to get fatbrain.com back into the ebook retailing/distribution
space, where I predict all your competitors will soon be, using a format that
reproduces many of the characteristics of print publishing:
1. The author/publisher can produce the work
once, and have it resold by many parties.
2. Distributors will allow authors/publishers
to reach specialty retailers, so that every retailer can participate without
the overhead of one-to-one relationships with every publisher.
3. Specialty distributors/retailers/publishers
may make the work available in alternate versions.
4. Third parties will catalog and review the
various published works.
5. To support the needs of libraries, companies
like Netlibrary will make works available for "check out" rather than purchase.
There are a couple of other points I'd make,
partly coming off #3 above:
There will likely be two or three branches of
the online book tree.
3a. There is likely to be a format that is targeted
for download, either to the PC or to a small device. The format that ultimately
succeeds may well need to be easily transferable from one to the other.
3b. There is likely to be a format that is targeted
for online/connected access, which benefits (e.g. in the tech book space) from
integrated online searching across a library of titles, supports other ancillary
materials from the web space, and so on. This kind of thing might be hosted
by a publisher, by a corporate intranet, by a library, or by some new class
of information reseller/integrator.
3c. The solution to prevail will include print-on-demand
(and/or the bundled sale of print and online copies). In fact, the ideal Digital
Rights Management solution would support the aggregation of a, b, and c, such
that someone could buy a copy for download (which would take advantage of the
ability to buy the product from a variety of retailers), but present some sort
of credential representing that purchase to a central site (hosted either by
a publisher or a third party) so that they can get access to that book in the
context of other services provided by that aggregator. Such a DRM solution would
allow tiered pricing (either up or down) for the purchase of added services
(such as print on demand) or for some kind of repeat-purchaser discounting.
In any event, it will be interesting to see how
it all plays out. The one thing I'm sure of is that we'll see a repeat of what
we saw in the web space, where everyone started out thinking "disintermediation"
but things didn't take off till we had reintermediation, with the development
of a rich ecology of sites and services cooperating to make a fully functioning
marketplace.
In the early days of the web (1993), when we
had created GNN, the first web portal and the first web site supported by advertising,
we had a huge uphill struggle, because we had to do everything ourselves. We
had to get people on the web in the first place (equivalent to getting them
to download some kind of ebook reader, but even harder); we had to convince advertisers
that there was a market there (we commissioned the first ever market research
study on Internet demographics); we had to evangelize the possibilities and
experiment with different formats. The list goes on and on.
I contrasted this with my experience as a print
publisher, where we fit neatly into an ecology, with manufacturers who already
knew how to make our product, retailers and wholesalers who came to sign us
up, natural places to advertise and create demand, known standards for pricing,
customer expectations of what a book looked like, etc. etc.
I ended up going around giving talks saying that
the web wasn't going to take off till it looked more like print publishing.
When I was explaining this to Ted Leonsis of AOL, he "got it" with the memorable
line: "You're saying 'Where's the Publisher's Clearinghouse for the Web?'" Exactly.
There are all these crazy intermediaries who make any branch of print publishing
work, from rack jobbers to remainder houses, to folks who've figured out how
to make school children into a sales force :-(
Now, on the web, we're seeing the success grow
in proportion to the richness of that cooperating ecology:
ISPs and hosting services
self-published sites (authors)
online "magazines" (publishers) like Salon
portals
search engines
ad agencies
ad hosting services
caching services
web design firms
market researchers to justify the ad pricing
So the challenge I put out to all would-be ebook
publishers is to envision a future in which they aren't the only party who succeeds.
The market won't take off till it's a win for many parties.
This isn't to say that there won't be massive
realignments of power and success in the new market (you only have to look at
how much market share amazon.com took from traditional booksellers to know that.)
There will be new publishers, new retailers, new wholesalers, and new "manufacturers"
(software platform providers) springing up, as well as new providers of various
support services. But my suspicion is that anyone who tries to go it alone will
be left behind by folks who figure out what niche in the ecology they want to
own, and pursue it wholeheartedly.
Tim O'Reilly is founder and CEO of
O'Reilly Media, Inc., and an activist for Internet standards and for open source
software. For everything Tim, see
tim.oreilly.com.
At 37Signals, Jason Fried
asks: "What do you think about self-published books?" There's a lot of great
reader feedback. I wrote some comments myself, recounting my start as a self-published
author to becoming one of the largest computer book publishers in the country.
(My first print run was 100 copies. In the twenty years since, I've sold more
than 30 million copies of a thousand-odd titles.)
Anyway, to make a long story short, several people suggested I repeat my comments
here. Here goes:
Well, I like to think of myself as a self publisher who grew up into a real
publisher. So I've seen the world from both sides. I never thought when I printed
my first run of 100 copies of Learning the Unix Operating System in 1985 that
it would go on to sell hundreds of thousands of copies, and start me on the
path to being one of the largest computer book publishers in the country. It's
been a long and fruitful ride, which took me in many unexpected directions,
and with a huge number of mistakes, some of which turned out to be inspired!
Here are the differences between self publishing and working with an established
publisher as I see them:
1) If you're not an experienced author, having a good editor can help you produce
a book you'll be proud of. You guys have already written a book, so you know
what help you got, and whether or not it improved your book. So scratch that
issue.
2) If you're not well known, you may have real trouble getting visibility and
distribution for your book. You guys are well known and have a built-in distribution
channel and audience. Get your book on Amazon, plus sell it from your own site,
and you'll probably move as many copies as most publishers would move of a comparable
book from less well known authors. (Given your current notoriety, you might
even be able to sell as many copies as New Riders sold of your Defensive Design
book, or more.) My guess is that I could sell significantly more copies of your book
via additional channels than you would sell yourself, but probably not enough
to make up the difference in margin that you'd make by printing and selling
the book yourselves. So scratch that issue as well.
3) If you sell a lot of books, you'll find yourself having to build a lot of
the apparatus of a publisher. When we were small, we hired a temp to ship out
books, and when a shipment arrived from the printer, all our employees would
make a bucket brigade to carry the cartons to the basement. But that gets old
fast. This is the biggest question for you: what business do you want to be
in? A successful publisher (self or otherwise) ends up in the business of book
design, copyediting and layout, printing (contracted out, but still a set of
relationships and processes you need to manage), warehousing, shipping, order
taking (can mostly be done self service), customer service ("where's my book?";
"my copy was damaged in shipping", etc.), and many other mundane but necessary
tasks.
And of course, once you have more than a couple of books, you really need to
start expanding your channels, your retail marketing (very challenging to get
a foot in the door in today's market), and your sales force. So you start up
the ramp, as I did, of becoming a full fledged publisher yourself.
Of course, there are alternatives to doing all the work. For example, you could
become what's called a packager, where you establish a series and a brand,
and deliver camera ready copy to a publisher, who pays you a higher than normal
royalty because they provide no editing or development services, but still takes
the inventory risk, and thereafter treats the book as one of their own products.
Pogue Press (now wholly owned by O'Reilly) and Deke Press are two O'Reilly imprints
that started out as packaging deals. To make something like this work, you need
to have a strong brand (you do), a scalable publishing idea (rather than just
a single book), and the ability to deliver completed books to the publisher.
The next step up is to publishing itself, which adds the element of inventory
risk. That is, it's easy to say, "Wow, print a book for $2, sell it for $30,
pocket $28." But what happens instead is "print 1000 copies of a book for $5
each, 5000 copies for $3 each, or 10,000 copies for $2 each." And then if you
sell fewer than you expect, you might end up with a very different cost of goods
than you expect. Many small publishers make the mistake of printing too many
copies, and their cost of goods (and warehousing those goods) becomes much higher
than they expect. So if you print 10,000 for $20,000, sell 1,000 directly
from your website at $30, and another 1,000 from Amazon at $14 (which is about
what you'll get after discount), you're grossing $44,000 on a $20,000 investment,
not the $300,000 that the naive math of $2 manufacturing vs. $30 list price
would suggest. Still, not bad, and a real option -- if you want to be in the
publishing business for the long haul. Self publishing a single book can be
fun. But I'd bet that after the second or third, you either decide to be in the
publishing business full bore, or look for a partner to take on some of the
chores.
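A quick sketch of that arithmetic, using exactly the figures from the
paragraph above, makes the gap between gross, net, and the naive math plain:

    # Inventory-risk arithmetic from the example above: print 10,000
    # copies at $2 each, but sell only 2,000 of them.
    print_run, unit_cost = 10_000, 2.00
    investment = print_run * unit_cost   # $20,000 up front

    direct = 1_000 * 30.00               # list price from your own site
    amazon = 1_000 * 14.00               # roughly list minus the discount
    gross = direct + amazon              # $44,000
    naive = print_run * 30.00            # $300,000 if every copy sold at list

    print(f"investment ${investment:,.0f}, gross ${gross:,.0f}, "
          f"net ${gross - investment:,.0f}, naive ${naive:,.0f}")

which prints an investment of $20,000, gross of $44,000, and net of $24,000
-- a fine return, but an order of magnitude short of the naive $300,000.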
FWIW, many small publishers are distributed by larger publishers. When O'Reilly
was small, for example, Addison-Wesley and later Thomson did our international
distribution before we started our own international companies. And today, O'Reilly
distributes smaller presses like the Pragmatic Programmers, No Starch, Paraglyph,
Sitepoint, and Syngress. That leverages our sales force, our distribution systems,
and our relationships with major retailers.
Note however, that in order to take either the packaging or distribution route,
you really need to be thinking about more than a single book.
Tim O'Reilly is founder and CEO of
O'Reilly Media, Inc., and an activist for Internet standards and for open source
software. For everything Tim, see
tim.oreilly.com.
Now that I'm going through the book, this appeals to me on a few different levels:
#1 - I had a strong practical need for this book NOW - not in 5 months, but
now now now. THANK YOU to the authors for making this available early. It has
helped me immensely.
#2 - They have a
wonderful
error-submitting page that they respond to daily. I found a few typos as
I was going through the examples, submitted them, and got a reply that they
were fixed the following day. THIS IS BRILLIANT! Why wait until it's on the
bookshelves to find out that there are typos?
#3 - I prefer technical books on PDF anyway.
Releasing books in beta-format takes advantage of the fact that there are
different kinds of readers. Some, like me, need the info sooner, even if
it's not "perfect" yet. We're avid fans of the technology. We'll hear on the
mailing list that you are making this available. We'll be right there giving
feedback daily, which will improve the book for when it's released to the much-larger
public.
I hope more authors and publishers do this.
Derek Sivers is the founder, president,
and sole programmer behind
CD Baby, independent music distribution, and
HostBaby, web hosting
for musicians.
Here's another one
2005-07-13 08:00:15
alexfarran
Peter Seibel's book, Practical Common Lisp, is available on his web site
http://www.gigamonkeys.com/book/ and was discussed on the comp.lang.lisp
newsgroup as he wrote it. Didn't seem to hurt the sales at all.
Interesting. In essence, this takes the principle of the wiki, in a restricted
form, to the book authoring process. The strengths (and weaknesses) of wikis
are well known.
Write a book and Xlibris will (for free) format the file,
design a cover, and tag it with an ISBN -- which means booksellers can
track the title. The book gets posted on the website and sells for an average
of $16. Extra service fees for design range from $300 to $1,200 per book. If
someone orders a copy, Lightning Source prints and ships the title, and Xlibris
and the author split the profit, typically about $3.
The company charges writers $459 for formatting, posting and
publicizing books, including arranging author appearances and posting audio
books that can be downloaded from the Net. Audio books are the fastest-growing
part of the publishing industry, a $2 billion annual market, according to the
Audio Publishers Association.
"I don't have a publishing background," McCormack says, "which
is great because it makes me think anything is possible."
Author Stephen King's recent foray
into e-book publishing has kicked off a new round of activity among book publishers
and software developers looking to gain a foothold in the nascent market.
Since last year, well-known book publishers such as Houghton Mifflin Co.,
The McGraw-Hill Cos. and Simon & Schuster Inc. have hooked up with software
companies to convert books into digital format and enable them to be read on
screen.
Now Adobe Systems Inc. and Microsoft Corp. are duking it out in the market
for e-book reading software.
This week at the Seybold Seminars conference here, Adobe announced that it
has acquired Glassbook Inc., a provider of e-book software that last week announced
a beta version of its Reader 2.0 software. Terms of the deal were not released.
Features of Reader 2.0, due to ship in mid-September, include two-page views,
text-to-speech capabilities, screen rotation, text annotation and highlighting
with electronic sticky notes, searching and text enhancement to make content
easier to read, said officials from Glassbook, of Waltham, Mass.
Adobe, of San Jose, Calif., also announced an expanded partnership with digital
content services company iUniverse.com to offer authors and publishers a faster
and less expensive way to publish, manage and distribute their content as e-books.
Separately at the show, Microsoft announced a partnership with online retailer
Amazon.com to create a customized version of the Microsoft Reader e-book software.
The new version will enable consumers to purchase and download e-book titles
directly from Amazon.com. Microsoft Reader will be the preferred format for
Amazon.com's future e-book store.
Microsoft and Adobe have similar relationships with Barnes&Noble.com.
Microsoft, of Redmond, Wash., also selected ContentGuard Inc., a provider
of digital rights management software for content, to help booksellers, publishers
and consumers easily adopt the digital format. ContentGuard will support Microsoft
customers who want to build distribution systems using Microsoft's Digital Asset
Server in order to launch digital offerings.
Adobe already has a similar relationship with ContentGuard.
ContentGuard's eBook Practice, announced this week, provides content preparation
and management services to online bookstores and publishers so they can create
digital offerings. The eBook Practice also provides an outsourced operation
to manage the entire eBook distribution process and a consulting service to
install and manage in-house e-book operations, said officials of the McLean,
Va., company.
The stakes in the race to create standard e-book software are not small.
"If you get to be the provider of software for reading, essentially you get
to be the toll keeper," said Jonathan Gaw, Internet research manager at International
Data Corp. in Mountain View, Calif. "For every book or magazine article that
uses this software, then you get to charge a toll, and if it's 5 or 10 cents
per article, that adds up quickly. The question is, who will develop the best
and most widely accepted standard?"
Boston-based Houghton Mifflin is working with several of the reading software
vendors, including a pending agreement with Microsoft in the next few months,
to reach more consumers and cover the different ways -- PC, laptop, personal
digital assistant -- they want to receive e-books.
"We're working to do this as intelligently as we can," said David Jost, vice
president of electronic publishing in the company's trade and reference division.
"Every e-book company has their own reading or e-book format. We have to check
each company's [digital] version of the book to make it consistent with our
[print] version."
McGraw-Hill is also embracing the technology in this evolving market. It
plans to have a total of 700 e-books on its list by the end of September, and
250 of them will be sold through the company's new online store.
"We're providing our customers with books in as many [digital] formats as
possible," said April Hattori, a spokeswoman for McGraw-Hill in New York. "We
want to be able to give our customers a choice. It's important to be in as many
places as possible."
Although both Jost and Hattori believe publishing e-books will lower costs
in the long run, they said it's too early to predict how much. While e-books
will cut down on unused hard copy inventory, publishers will still have to pay
software programmers to convert books into the proper electronic format, Jost
said.
According to Billy Pidgeon, a Web technologies analyst at Jupiter Communications
Inc. in New York, the market for e-books is in its infancy. "It's really early.
Dedicated hardware is very iffy. ... Audio books as digital files with an MP3
player is a more immediate market."
Publishers have to develop consumer awareness for e-books, and vendors have
to drive consumer adoption, Pidgeon said, noting that book printers are working
with publishers to support common standards for text such as Microsoft Word
and Adobe PDF.
"To a large extent the publishing industry is still in the Guttenberg era,
and it's being dragged into the digital era," he said. "It's going to be a rough
ride for some of these publishers. Fortunately, the market is still young, so
there will be some time. But for publishers not looking at this space, they
may be losing an opportunity."
Dominican asks: How often are books revised? Open to the author?
Tim responds:
In our early days, we revised our books constantly. For example, I did ten editions
of Managing UUCP and Usenet between 1986 and 1991--about one every six months.
The book grew in something much like an open source software process, where
I was constantly incorporating reader feedback, and rolling it into the next
printing. We didn't do a big marketing push about it being a new edition, we
just had a change log on the copyright page, much like you do with a piece of
software, each time you check it in and out of your source code control system.
Now that we're much larger (and many of our authors no longer
work directly for us), it's harder to do that, but we still roll in a lot of
small changes each time we go back to print.
The reason why it's harder mainly has to do with the inefficiency
of retail distribution. When there are thousands of copies sitting in bookstores
waiting to be bought, rolling out a "new edition" is a big deal, since you have
to take back and recycle all the old ones. So you have to go through a process
of letting the inventory "stored in the channel" thin out. This means that,
especially for a very successful book, you can't do updates as often as you
otherwise might like. We slipstream in fixes to errors and other small changes,
but major changes need to be characterized as a "new edition" with all the attendant
hoopla.
There is also the issue you advert to in your question, and
that is the availability of the author to do the update. Sometimes an author
like David Flanagan has a number of bestselling books, and he updates them in
round-robin fashion. Sometimes an author loses interest in a topic, or gets
a new job and doesn't have time any more, and we have to find someone else.
Sometimes the technology is fairly stable, and so we don't need to do a new
edition.
Sometimes we know we need a new edition, but we just get distracted,
and don't get around to it as quickly as we should! At least we don't do what
a lot of other publishers do, which is issue a "new edition" for marketing reasons
only, where the content stays pretty much the same, but it's called a new edition
just so they can sell it in to bookstores all over again.
t-money asks: Fatbrain.com has recently announced that it will offer an electronic publishing
service, E-matter. What do you think about offering documents for download for
a fee? Is this something that O'Reilly might be undertaking in the future?
Tim responds:
Well, we were part of FatBrain's ematter announcement, and we're going to be
working with them. But I have to confess that the part of their project I liked
the best wasn't the bit about selling short documents in online-only form, it
was the idea of coordinating sale of online and print versions.
I know that there's a lot of talk about putting books up online
for free, and we're doing some experiments there, but to be honest, I think
that it's really in all of our best interests to "monetize" online information
as soon as possible. Money, after all, is just a mutually-agreed ratio of exchange
for services. When the price is somewhere between zero and a large number, based
on negotiation, the uncertainty often means that the product is not available.
In general, I foresee a large period of experimentation, until
someone or other figures out the right way to deliver AND pay for the kinds
of things that people want to read online. We've seen it take about five years
to develop enough confidence in advertising as a revenue model for the web (starting
from our first-ever internet advertising on O'Reilly's prototype GNN portal
in early 1993). Similarly, I think that the "pay for content" sites--whether
eMatter or ibooks.com, or books24x7, or itknowledge.com--will take some time
to shake out. Meanwhile, we're playing with a bunch of these people, and doing
some experiments of our own as well.
the_tsi asks: Not to start a free SQL server war here, but I notice there is a (quite good)
book on mSQL and MySQL, but nothing for PostgreSQL. Are there any plans to cover
it in the near future?
Tim responds:
We're looking at this but haven't started any projects yet. We've had a huge
number of requests for a book on PostgreSQL, and we're taking them very seriously.
Tet asks: You've said that the Linux Network Administrator's Guide sold significantly
less than would normally be expected as a result of the text of the book being
freely available on the net. By what sort of margin? How many copies did it
sell, and how many would you have expected to sell under normal circumstances?
Would you release another book in a similar manner if the author accepts that
they'll make less money from it? Did the book actually make a loss, or just
not make as much profit as expected?
Tim responds:
Well, it's always hard to say what something *would* have done if circumstances
had been otherwise. But on average, the book sold about a thousand copies a
month in a period where Running Linux sold 3,000-4,000 and Linux Device Drivers about
1500. Now the book is badly out of date (though a new edition is in the works),
but you'd expect that there are more people doing network admin than there are
writing device drivers. (And in fact, reader polls have actually put the NAG
at the top of the list of "most useful" of our Linux books.)
Frank Willison, our editor in chief, made the following additional
comments about the NAG and its free publication:
"We can demonstrate that we lost money because another publisher
(SSC) also published the same material when it became available online.
Because the books were identical, word for word (a requirement the author
put on anyone else publishing the material), every copy sold of the SSC
book was a loss of the sale of one copy of our book.
One interesting side note was that SSC published the book
for a lower price than we did. Of course, we had the fixed costs: editing,
reviewing, production, design. But those fixed costs didn't make the difference:
when you took out the retail markup, the difference in price was equal to
the author royalty on the book.
The above may be too much info, and isn't directly related
to current Open Source practices, but it still chafes my butt."
If I had to quantify the effect, I'd guess that making a book
freely available might cut sales by 30%. But note that this is for O'Reilly--we've
got books with a great reputation, which makes people seek them out. And we
cover "need to know" technologies where people are already looking for the O'Reilly
book on the topic. For J. Random Author out there, open sourcing a book might
be a terrible idea, or a great one. An author with some unique material that
doesn't fall into an obvious "I already know I need this" category can build
a real cult following online, and then turn that into printed book sales to
a wider audience. We're hoping to do the same thing in publishing Eric Raymond's
The Cathedral and the Bazaar (and other essays) this fall. Most of you guys
have probably read them online, but there is a larger population who've probably
heard the buzz, and will pick them up in the bookstore. On the other hand, an
author who puts a lousy book online will only show this to the world, and sales
will be 10% of what they'd been if the reader hadn't been able to see the book
first.
Perhaps more compelling is the evidence from the Java world,
where sales of the Addison-Wesley books based on the Sun documentation (which
is mostly available online) are quite dismal, while our unique standalone books
(like those from other publishers) do quite well. More importantly, though, programmers
in our focus groups for Java report spending far less overall on books than
programmers in other areas, because they say that they get most of the info
they need online.
All of this is what tells me we need to tread carefully in
this area, since I have to look out for the interests of my employees and my
authors as well as my customers. In the end, free books online may look like
a great deal, but it won't look so good if it ends up disincentivizing authors
from doing work that you guys need.
And frankly, we have conversations all the time that go like
this: "I'm making $xxx as a consultant. I'd love to write a book, but it's really
not worth my while." At O'Reilly, we try to use authors who really know their
stuff. So writing a book is either a labor of love, or it's a competitive situation
with all the other things that author could be doing with their time. So money
is an issue.
maelstrom asks:
(two out of three submitted) What books would you recommend a budding writer
should read and study? and Do you read every book you publish?
Tim responds:
Books about writing that I like are Strunk & White (The Elements of Style) and
William Zinsser's On Writing Well. But really, read any books that you like.
Reading good technical books, and thinking about what works about them for you,
is always great. We learn far more by osmosis than by formal instruction. So
read, and then write.
Going back to the recurrent questions about free documentation--a
great way to learn to write is to do it. Contribute your efforts to one of the
many open source software projects as a documentation writer, get criticism
from the user community, and learn by doing.
I would say that the ability to organize your thoughts clearly
is the most important skill for a technical writer. Putting things in the right
order, and not leaving anything out (or rather, not leaving out anything important,
but everything unimportant), is far more important than trying to write deathless
prose. The best writing is invisible, not showy. My favorite quote about writing
(which came from a magazine interview that I read many years ago) was from Edwin
Schlossberg: "The skill of writing is to create a context in which other people
can think."
As to your second question: alas, I no longer have time to
read everything we publish. We have a number of senior editors whose work I
trust completely -- I never read their stuff unless I'm trying to use it myself.
For new or more junior editors, I generally do a bit of a "sample" of each book
somewhere during the development process. If I like it, I say so, and don't
feel I have to look at it again. If I don't like it, I may make terrible trouble,
as some of my editors and authors can attest.
howardjp asks: One of the biggest complaints among critics of the BSD operating systems is
the lack of available books. Since O'Reilly is the leader in Open Source documentation,
you are well positioned to enter the BSD market. With that in mind, why hasn't
O'Reilly published any BSD books in recent memory?
Tim responds:
Every once in a while we make a stupid editorial decision, as, for instance,
when we turned down Greg Lehey's proposed BSD book (now published by Walnut
Creek CDROM). This was based on the fact that the BSD documentation, which we'd
co-published with Usenix, had done really poorly, and on the poor sales of
our original BSD UNIX in a Nutshell relative to our System V/Solaris one. That
was many years ago now, and BSD has emerged from the shadows of the AT&T lawsuit,
and become a real force in the open source community. So I definitely think
that there are some books that we might want to do there. Proposals are welcome.
That being said, so many of our books cover BSD (just like
they cover Linux, even if they don't say Linux on the cover). After all, BSD
is one of the great mothers of the open source movement. What is BIND, what
is sendmail, what is vi, what is a lot of the TCP/IP utility suite but the gift
of BSD? It's so much part of the air we all breathe that it doesn't always
stand out as a topic that gets a separate name on it.
chromatic asks: Would you ever consider making previous editions of
certain books free for download when supplanted by newer editions?
For example, when Larry Wall finally gets around to writing
the 3rd edition of the Camel (probably about the same time as Perl 6), would
you consider making the second edition available in electronic format?
I realize this has the possibility of forking documentation,
but it's hard to find anyone more qualified than Larry, Randal, and Tom, for
example. It would only work for certain books.
Tim responds:
The previous edition of CGI Programming for the WWW is available online now,
while we work on a new edition, as is MH & xmh and Open Sources. You can read
these at http://www.oreilly.com/openbooks/.
We'd like to put more of our out of print books online, but it's a matter of
man hours. Our Web team is organizing a new effort around this now, so look
for more books to appear on this page.
And in fact, an awful lot of Programming Perl *is* available
for free online, as part of the Perl man page or other perl documentation. It's
not available in exactly the same form, but it's available. That's one of the
big questions for online documentation: does the online version always look
like the print version?
But this is a good question, and it's certainly
something we can think about. Might be another interesting experiment in understanding
the ecology of online publishing.
Crutcher asks: Not sure how to phrase this, but, well, what is the status of O'Reilly and
marketing books to schools and colleges for use as textbooks? Our textbooks
suck, and if there were textbook versions of y'all's books it would rock.
Tim responds:
We actually do quite a bit of marketing to schools and colleges, and they are
used as textbooks in a number of places. If you know of a professor who ought
to be adopting an O'Reilly book, please send mail to our manager of college
and library sales, Kerri Bonasch, at [email protected]. We also have a Web site
to support this effort at
http://www.oreilly.com/sales/edu/.
Are there any specific things that you see as obstacles to
using our books as textbooks? What topics would you especially like to see
as textbooks?
zilym asks: Are there any plans to improve the binding on your future books? Many of
us use O'Reilly books to death, and the binding is the first thing to go. I know I
certainly wouldn't mind paying slightly more for a stronger version of some of
the most heavily used titles.
Tim responds:
Hmmm. We use a special high-cost binding, which allows the books to lie flat.
It's quite a bit more expensive than the normal perfect binding used by most
publishers, and we think it's worth it. I have heard lots of compliments on
how great this binding is. I haven't heard complaints about it breaking down--at
least not under use that would break down a normal perfect-bound book as well.
I don't know of any way to make it more durable.
Maybe hardcover? It would be great to have a Slashdot poll
on how many people share your problem and would like to see O'Reilly books in
hardcover. (One caveat: we once tried an experiment, for our PHIGS Programming
Manuals--real behemoths--of offering books in both hardcover and softcover, so
people could choose. Despite polls that said people would pay more for a more
durable hardcover, everyone bought the softcover to save the difference in price.)
So, if there is a poll, how much would you pay for a more durable book?
jzawodn asks: Given some of the recent discussion surrounding the
Linux Documentation Project (LDP), I began to wonder about its long-term direction
and viability.
I "grew up" with Linux by reading *many* of the HOWTOs and
other documents that were part of the LDP. In many ways, I'd have been lost
without the LDP. But with the growth of Linux mind-share and increased demand
for texts that help newcomers get acquainted with the various aspects of running
their own Linux systems, there seems to have been a stagnation in much of the
free documentation. I can't help but to wonder if many of the folks who would
be working on LDP-type material have opted to write books for publishers instead.
Where do you see free documentation projects like the LDP
going? What advice can you offer to the LDP and those who write documents for
inclusion in the project? Might we see electronic versions of O'Reilly books
(or parts of them) included in free documentation projects?
Tim responds:
I don't think that the slowdown of the LDP is because of authors deserting it
to write commercial books. In fact, I think you're going to see a reinvigoration
of free documentation efforts, as publishers try to contribute to these projects.
I think that the right answer is for those who are writing books to figure out
some useful subset of their work that can be distributed online as part of
the free documentation, and for there to be some added value available only
in books. I think this has worked pretty well for the core Perl documentation,
where an update to the Camel and an update to the online docs are really seen
as parts of the same project.
When O'Reilly is directly involved in an open source project,
this is fairly typical of what we do. For example, O'Reilly was one of the original
drivers behind the development of the DocBook DTD, which is now used by the
LDP. (We started the Davenport Group, which developed DocBook, back in the
early '90s.)
We're releasing a book about DocBook, by Norm Walsh and Len
Muellner, called DocBook: The Definitive Guide. It will be out in October.
Norm and Len's book will also be available for free online through the OASIS
web site as the official documentation of the DocBook DTD. This is our contribution
to users of DocBook; without our signing and creating this book, good documentation
for DocBook wouldn't exist. (This is in addition to our historical support of
the creation of DocBook.)
Our goal here, though, is evangelical. We want more people
to use DocBook (and XML in general), and we think that making the documentation
free will help that goal.
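For readers who haven't seen DocBook, here is a minimal, invented fragment.
The element names (book, chapter, title, para) and the DOCTYPE are real DocBook;
the document itself is made up purely for illustration:

<?xml version="1.0"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
  "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd">
<book>
  <title>An Invented Sample</title>
  <chapter>
    <title>Why Semantic Markup?</title>
    <para>Because the tags say what things are, not how they look,
    the same source can be rendered as HTML, PDF, or man pages.</para>
  </chapter>
</book>

That separation of content from presentation is what makes DocBook attractive
to projects like the LDP, which publish the same documents in many formats.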
CmdrTaco asks (on behalf of a friend): I understand from a very reliable source that O'Reilly is moving their website
from a single Sun box and an internally developed web server to an NT cluster and some
barely functioning proprietary software. Their bread and butter has been Unix.
They have been taking a more and more vocal position within the OSS community.
Why are they switching to NT?
Tim responds:
Well, your very reliable source has only part of the story right, and that's
because it's a long and involved story. It started about 18 months ago, when
the people on our web team wanted to replace what had become a fairly obsolete
setup whose original developers no longer work for the company.
This system--which was about five years old--involves a lot
of convoluted Perl scripts that take data in a pseudo-SGML format and generate
a bunch of internal documents (marketing reports, sales sheets, copy for catalogs,
etc.) as well as web pages. We wanted to do something more up to date, and didn't
have the internal resources to devote to a complete rework.
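To make the shape of such a single-source pipeline concrete, here is a minimal
Perl sketch; the tag names, record format, and outputs are hypothetical, not
O'Reilly's actual scripts:

#!/usr/bin/perl
# Minimal single-source sketch: parse one pseudo-SGML record,
# then generate both a web page and a plain-text sales sheet.
# (Hypothetical tags and outputs, for illustration only.)
use strict;
use warnings;

my %book;
while (my $line = <DATA>) {
    # Pull <tag>value</tag> pairs out of the pseudo-SGML source.
    $book{$1} = $2 while $line =~ m{<(\w+)>(.*?)</\1>}g;
}

# One source record, two generated documents.
print "<html><head><title>$book{title}</title></head><body>\n",
      "<h1>$book{title}</h1><p>$book{blurb}</p>\n",
      "<p>Price: \$$book{price}</p></body></html>\n\n";

print "SALES SHEET: $book{title} (\$$book{price})\n";

__DATA__
<title>Example Book</title><price>29.95</price>
<blurb>A book about single-source publishing.</blurb>

Real pipelines of this sort accumulate more record types and more output
formats over the years, which is exactly how they become convoluted.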
So we went out to a number of web design firms for bids. The
winning firm does work on both NT and UNIX, but they showed us all kinds of
nifty things that they said they had already developed on NT that we could use.
These were tools for surveys, content management, etc. There was also stuff
around integration with the spreadsheets and databases and reports used by our
financial and customer service people. To recreate these tools on their UNIX
side would cost several hundred thousand dollars.
So I said: "We can either walk the talk, or talk the walk.
I don't care which, as long as what we do and what we say line up. If you can
do it better and cheaper on NT, go ahead and do it, and I'll go out there and
tell the world why the NT solution was better."
I was prepared to have to tell a story about interoperability--after
all, despite all our efforts to champion open source, we realize that our customers
use many, many different technologies, and we try to use them all ourselves
as well. We were looking at doing some things on NT--the stuff our vendor said
they already had working--while incorporating other elements on UNIX, Mac, Linux,
and Pick (yes, we run a Pick system too!). The whole thing was going to be a
demonstration of ways that you can choose from and integrate tools from many
different platforms.
Instead, I have to tell the story so familiar to Slashdot
readers, of promises of easy-to-use tools that, unfortunately, don't work as
advertised. As your source suggests, the NT parts of the system haven't been
delivered on time or on budget; what we've seen doesn't appear to work, and
we're considering scrapping that part of the project and going back to the safe choice.
To put a new spin on an old saw: No one ever got fired for using open source.
I say that tongue-in-cheek of course, because unlike a lot
of open source partisans, I don't think that all good things come from the open
source community. We like to bash Microsoft with the idea that "no matter how
big you are, all the smart people don't work for you" but it's just as true
that they don't all work for the open source community either. There are great
ideas coming from companies like Sun and Microsoft, and (most of) the people
who work there are just like us. They care about doing a good job. They want
to solve interesting problems and make the world a better place. And sometimes
they do.
I consider it my job to give them a fair shake at convincing
me, and if they do, to give you a fair shake at learning what they've done right
as well as what they've done wrong. I'll keep you posted.
****
Virtualbookworm
They screen manuscripts, have great customer support, a very moderate setup fee,
good distribution, and a fair contract; the only problem is that their authors'
discount is a little low.
This is the only publisher that actually met all of the criteria outlined
in the "What to look for" section of the
Is POD for me? guide.
Llumina Press
Their setup fee is a little expensive relative to their royalties (they are
affordable otherwise), but they screen manuscripts, they have good distribution,
and graphics are not a problem (though they recently became an extra).
They don't screen manuscripts, don't offer a choice of trim sizes, give authors
little control over a couple of aspects, and readers can't buy the books directly
from the publisher's site; but they are reasonably priced, they offer great
royalties, you can buy copies of your own book for little more than printing
costs, they have good distribution, and their books seem to have a competitive
retail price.
PageFree
A publisher that doesn't screen manuscripts but gives authors a great deal of
control over the whole project and has a lot of flexibility when it comes to
format. The main problems have to do with a couple of grey areas in their contract
and a basic package that may be a little too bare-bones for some
authors' needs.
Julie Duffy, former Director of Author Services at fee-based POD Xlibris,
offers a series of
print-on-demand articles that provide a good overview of the POD process,
from the technology itself to sales and contract concerns. There's some very
useful information here about the technical issues you need to consider if you're
thinking of using a POD service.
Printondemand.com
is a "digital printer's resource," offering news and information on digital
technology. Among other things, it allows you to search for a digital printing
service provider in your area.
From Aeonix Publishing Group: a concise
discussion of the
issues that factor into an independent publisher's decision whether or not to
use digital technology to produce its books.
Supply and (Print
on) Demand, an article by Jaclyn Pare for Poets & Writers, provides
a good, if dated, overview of both the hope and the hype surrounding the POD
services, and discusses the marketing challenges that POD-published writers
face.
Clea Saal's
Books
and Tales website provides comparisons of a number of POD services, as well
as a series of articles on the stages of the POD publication process (note:
Writer Beware has received complaints about several of the publishers listed
here, including publishers that this site rates highly).
The POD Quandary: Brenda Rollins takes another look at the pros and
cons of POD services.
POD-dy Mouth
is a blog that tracks all things POD. Among many other informative
items, it features a series of Q&A sessions with established agents
and editors about their attitudes toward books from POD services.
An article assessing the pitfalls of POD services from the true self-publisher's
perspective:
The Truth About POD Publishing, by David Taylor.
Print
on Demand, One Year Later: this article by Adam Barr, who published
a book with POD service iUniverse, highlights some of the difficulties and frustrations
authors who use such a service may encounter.
A more positive perspective on the POD service experience is provided in
this series of interviews with writers who published with
Xlibris, 1st Books Library (now AuthorHouse), and
iUniverse.
Beyond Vanity Fare? This 2003 article from Publishing Trends
magazine reports on the kinds of changes that POD services have undertaken in
order to provide their authors with more options.
True Stories About PublishAmerica: Authors' own accounts of their experiences
with one of the more infamous POD-based author mills (Writer Beware has received
more than 100 complaints about this self-styled "traditional" publisher).
E-author Karen Weisner discusses the differences between subsidy and non-subsidy
digital publishing, as well as the different kinds of fee-based e- and POD publishers,
in Electronic Publishing: Subsidy vs. Non-Subsidy.
Self-Publishing: Is It for You? An article from writer Thomas
M. Sipos on the pros and cons of publishing via POD, with an interesting discussion
of the traditional stigma attached to pay-to-publish, and possible ways to cope
with it.
Also from Thomas M. Sipos,
Marketing
Through Amazon addresses that perennial literary mystery--the meaning
of Amazon.com sales rankings. This is a very useful article for any POD-, e-,
or self-published author.
ALC Press - publisher
of books and specialized software related to local area networks. Focus is on
Novell's NetWare and security.
Artech House -
books and software for high-technology professionals.
BHV Publishing Group
- publishing house issuing computer literature in Russian.
Business
and Vocational Employment - creators of Step by Step software training manuals,
explaining the latest spreadsheets, word processors, and other office computer
applications.
Canadian Electronic
Scholarly Network (CESN) -- Provides information on the Electronic Publishing
Promotion Project (EPPP) and on other electronic publishing projects, especially
Canadian initiatives.
Citation -- Resources for Canadian publishers and publishing professionals.
Electronic Publishing
Association LLC -- An open international association of firms, companies
and individuals dealing with language teaching and electronic publishing.
Graphic Communications Today
-- Site for Graphic Communications Association (GCA), the leading technical
management association in the publishing, printing, and information technologies
industries.
Newsletter Publishers
Association - Represents the interests of and provides information to publishers
of for-profit newsletters and specialized information services.
Women in Publishing
- Promotes the status of women in the industry. The site features training and
meeting programmes, reports, a mentoring network, and international contacts.
Re: Experience with Lulu.com (Score:3, Informative), by Fear the Clam:
The typical paperback (what's called a "mass market paperback" in the publishing
biz) is about 4.25 x 7 inches. The 6 x 9 inch size is called a "trade paperback."