This page describes, in Unix manual page style, a Perl program available for downloading from this site which corrects numerous errors and incompatibilities in HTML generated by, or edited with, Microsoft applications. The demoroniser keeps you from looking dumber than a bag of dirt when your Web page is viewed by a user on a non-Microsoft platform.
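The core trick is simple enough to sketch. The following is a minimal illustration of the idea, not the demoroniser itself: map the Windows-1252 punctuation bytes that Microsoft tools emit into portable ASCII or HTML entities. The character table here is abbreviated and assumed; the real program handles many more cases.

```perl
#!/usr/bin/perl
# Minimal sketch of the demoroniser idea (not the real program): replace
# Windows-1252 "smart" punctuation bytes with portable equivalents.
use strict;
use warnings;

my %moron = (
    chr(0x91) => "'",        # left single quote
    chr(0x92) => "'",        # right single quote
    chr(0x93) => '"',        # left double quote
    chr(0x94) => '"',        # right double quote
    chr(0x96) => '-',        # en dash
    chr(0x97) => '--',       # em dash
    chr(0x85) => '...',      # ellipsis
    chr(0x99) => '&trade;',  # trademark sign
);
my $class = join '', map { quotemeta } keys %moron;

sub demoronise {
    my ($text) = @_;
    $text =~ s/([$class])/$moron{$1}/g;
    return $text;
}

print demoronise("Don\x{92}t use \x{93}smart\x{94} quotes\x{85}"), "\n";
```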
About: trillbox is a flexible and extendable toolkit for building dynamic Web pages. Written in Perl and based on Template::Recall, it provides "widgets" (or controls) that you can quickly integrate into your Perl Web application. trillbox widgets are designed to be independent points of control that can be easily plugged into a Web programming system, e.g. a CGI application, template-based, or included as part of an application framework. Widgets purposely have no direct knowledge of each other in order to offer the greatest flexibility (although they may be designed so that output and input can be piped between widgets).
Changes: A Treeview widget has been added. It will build nested structures of nodes, like a file system directory tree. There is a demo on the homepage.
Many projects are mirrored worldwide. Mirmon helps monitor these mirrors. In a concise graphic format, mirmon shows each site's history over the last two weeks, making it easy to spot stale or dead mirrors. Mirmon quietly probes a subset of the sites in a given list, writes the results to the 'state' file, and generates a Web page with the results.
#!/usr/bin/perl
# Generate a Google sitemap for all *.html files under $sitepath,
# then compress it to sitemap.gz.
my $sitepath = "/yourhtdocs";
my $website  = "http://yoursite.com";
chdir($sitepath) or die "cannot chdir to $sitepath: $!";
my @stuff = `find . -type f -name "*.html"`;
open(O, ">sitemap") or die "cannot create sitemap: $!";
print O <<EOF;
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">
EOF
foreach (@stuff) {
    chomp;
    # report file names containing characters unsafe in a URL
    my $badone = $_;
    $badone =~ tr/-_.\/a-zA-Z0-9//cd;
    print if ($badone ne $_);
    s/^..//;    # strip the leading "./" from the find output
    my $rfile = "$sitepath/$_";
    my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size,
        $atime, $mtime, $ctime, $blksize, $blocks) = stat $rfile;
    my ($sec, $min, $hour, $mday, $mon, $year) = localtime($mtime);
    $year += 1900;
    $mon++;
    my $mod = sprintf("%0.4d-%0.2d-%0.2d", $year, $mon, $mday);
    my $freq = "monthly";
    $freq = "daily" if /index.html/;
    my $priority = "0.5";
    $priority = "0.7" if /index.html/;
    $priority = "0.9" if /\/index.html/;
    print O <<EOF;
<url>
<loc>$website/$_</loc>
<lastmod>$mod</lastmod>
<changefreq>$freq</changefreq>
<priority>$priority</priority>
</url>
EOF
}
print O <<EOF;
</urlset>
EOF
close O;
unlink("sitemap.gz");
system("gzip sitemap");
A multi-page HTML site index generator that easily handles virtual domains.
Clean up your Web pages with HTML TIDY
The maintenance of Tidy has now been taken over by a group of enthusiastic volunteers at Source Forge, see http://tidy.sourceforge.net.
When editing HTML it's easy to make mistakes. Wouldn't it be nice if there was a simple way to fix these mistakes automatically and tidy up sloppy editing into nicely laid out markup? Well now there is! Dave Raggett's HTML TIDY is a free utility for doing just that. It also works great on the atrociously hard to read markup generated by specialized HTML editors and conversion tools, and can help you identify where you need to pay further attention on making your pages more accessible to people with disabilities.
Tidy is able to fix up a wide range of problems and to bring to your attention things that you need to work on yourself. Each item found is listed with the line number and column so that you can see where the problem lies in your markup. Tidy won't generate a cleaned up version when there are problems that it can't be sure of how to handle. These are logged as "errors" rather than "warnings".
Dave Raggett has now passed the baton for maintaining Tidy to a group of volunteers working together as part of the open source community at Source Forge. The source code continues to be available under an open source license, and you are encouraged to pass on bug reports and enhancement requests at http://tidy.sourceforge.net.
If you find HTML Tidy useful and you would like to say thanks, then please send me a (paper) postcard or other souvenir from the area in which you live along with a few words on what you are using Tidy for. It will be fun to map out where Tidy users are to be found! My postal address is given at the end of this file.
The W3C public email list devoted to HTML Tidy is: <[email protected]>. To subscribe send an email to [email protected] with the word subscribe in the subject line (include the word unsubscribe if you want to unsubscribe). The archive for this list is accessible online. If you would like to contact the developers, or you just want to submit an enhancement request or a bug report, please visit http://tidy.sourceforge.net.
Tidy can now perform wonders on HTML saved from Microsoft Word 2000! Word bulks out HTML files with stuff for round-tripping presentation between HTML and Word. If you are more concerned about using HTML on the Web, check out Tidy's "Word-2000" config option! Of course Tidy does a good job on Word'97 files as well!
Tidy features in an article by Scott Nesbitt on webreview.com, and more recently on Dave Central's Best of Linux, and as tool of the month on Unix Review by Joe Brockmeier, who writes:
"One thing I love about the UNIX philosophy is the idea that each program should do one job and do it really well. There are zillions of small tools for UNIX-type OSes that make life much easier and are hugely useful, but they don't necessarily get written about. They certainly don't receive the same kind of coverage that Apache and Sendmail receive. One of my favorites, HTML Tidy, is a tool for HTML/Web development that I think will interest a lot of folks. HTML Tidy cleans up HTML produced by WYSIWYG editors and such."
Tidy is available as a downloadable binary, as source code (ANSI C), or as an online service at W3C, Info Network, HTML Help's site Valet and other sites.
Tutorials for HTML and CSS
If you are just starting off and would like to know more about how to author Web pages, you may find my guide to HTML and CSS helpful. Please send me feedback on this, and I will do my best to further improve it.
Features:
Automatically generates site index based on the file/folder hierarchy
Handles multiple virtual domains
Splits site index into multiple pages to limit links-per-page
Indented text for different levels of hierarchy
Simple, small, and easy to modify or improve.
Valid HTML (4.01 Transitional)
License:
This software is essentially free, but please read my payment spiel
Please read the full license
Examples:
I use it to generate the site_indexes for all my domains, such as the top site index of this domain.
Download:
It's a single perl script.
IndexMaker is a Perl script that makes an index.html file from PDF files, HTML files, VRML files and other files. From version 5.0 IndexMaker works with Perl 5.x. The modules you need: PDF library 1.05 (http://www.geocities.com/CapeCanaveral/Hangar/4794/) written by Antonio Rosella (mailto:[email protected]), libnet (http://www.connect.net/gbarr/libnet/), and libwww (http://www.sn.no/libwww-perl/).
Usage: indexmaker [-options ...] list
Options:
-help print out this message
-verbose verbose output
-recursive directory scan the directory recursively
-match files match particular files, e.g. *.pdf, a?.* (requires the -recursive option)
-configure file default indexmaker.cfg
-output file default index.html
The list argument accepts metacharacters, relative and absolute path names, FTP URLs like ftp://ftp.host.com/directory/file, and HTTP URLs like http://www.host.com/directory/file.
Examples:
indexmaker *.pdf
indexmaker -c tests/test.cfg ftp://ftp.host.com/directory/file *.pdf
indexmaker -v */*.html
indexmaker -o home.htm *.gif *.tiff *.jpeg
indexmaker -m *.pdf -r my_directory *.gz
If you want to know more about this tool, you might want to read the docs. They come together with indexmaker! Home: http://www.geocities.com/CapeCanaveral/Lab/3469/indexmaker.html Enjoy it! Send suggestions to Fabrizio Pivari (mailto:[email protected]). Home: http://www.geocities.com/CapeCanaveral/Lab/3469
Html to Xhtml Convertor is a straightforward Perl script to convert HTML pages into XHTML pages. It can process batches of files, convert Windows/Unix/Mac line breaks, and deal with attribute minimization, quoting of attribute values, and more.
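Two of those conversions are easy to sketch. The snippet below is a simplistic illustration, not the actual convertor: it assumes one simple tag at a time, whereas a real tool would use a proper HTML parser.

```perl
#!/usr/bin/perl
# Sketch of two XHTML fix-ups (not the actual script): quote bare
# attribute values and expand minimized attributes.
use strict;
use warnings;

sub fix_attributes {
    my ($tag) = @_;
    # quote unquoted attribute values: width=100 -> width="100"
    $tag =~ s/(\w+)=([^\s"'>]+)/$1="$2"/g;
    # expand minimized attributes: <option selected> -> selected="selected"
    $tag =~ s/<(\w+)([^>]*)\s\b(checked|selected|disabled)\b(?=[\s>])/<$1$2 $3="$3"/g;
    return $tag;
}

print fix_attributes('<td width=100 align=left>'), "\n";  # <td width="100" align="left">
print fix_attributes('<option selected>'), "\n";          # <option selected="selected">
```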
You can strip HTML tables of their tags and convert them into comma-delimited (or otherwise delimited) tables which you can easily import into Excel or a database.
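A minimal sketch of how such a conversion can work, assuming simple, well-formed, non-nested tables (this is an illustration of the approach, not the actual tool):

```perl
#!/usr/bin/perl
# Turn a simple, well-formed HTML table into delimited rows by
# splitting on <tr> and <td>/<th> tags.
use strict;
use warnings;

sub table_to_csv {
    my ($html, $sep) = @_;
    $sep = ',' unless defined $sep;
    my @rows;
    while ( $html =~ m{<tr[^>]*>(.*?)</tr>}sig ) {
        my $row = $1;
        my @cells;
        while ( $row =~ m{<t[dh][^>]*>(.*?)</t[dh]>}sig ) {
            my $cell = $1;
            $cell =~ s/<[^>]+>//g;      # strip any remaining markup
            $cell =~ s/^\s+|\s+$//g;    # trim whitespace
            push @cells, $cell;
        }
        push @rows, join $sep, @cells;
    }
    return join "\n", @rows;
}

my $table = '<table><tr><th>Name</th><th>Size</th></tr>'
          . '<tr><td>index.html</td><td>4k</td></tr></table>';
print table_to_csv($table), "\n";
```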
Navigation.pl is a Perl CGI script for generating navigation trees on your Web page. The tree content is loaded from a file on the Web server. Default file can be specified, but you can also pass a parameter to the script and specify a different config file. Thus, one instance of this script on your Web server can be used to show different trees.
mimeStrip.pl strips base64-encoded attachments and decodes and saves them in the specified folder. Because Netscape saves sent messages with attachments, this script was written to extract these attachments and reduce the size of the Sent mail folder. The attachment is replaced by a message in the stripped folder indicating where the attachment was saved.
About: Tagreader is a Perl extension module which allows you to read HTML or XML files tag by tag in a manner similar to reading text files with "while(<>)". It also includes utilities to check for broken links in Web pages, build tar archives from a number of HTML pages (including images), and to list all of the links in an HTML page.
Adobe PDF2HTML conversion page. Also check out ps2pdf.com; it's pretty cool. You can upload PostScript files and it will convert them into PDF files for you. This is nice if you don't have access to a UNIX machine.
The web servers were monitored by a Perl script which was triggered by cron every 10 minutes. As this is a bit short for retrieving more than 100 web pages - only four timeouts would exceed this time span - the script has to use several parallel external processes to get the pages.
This is done by the fork/exec mechanism. Calling fork creates a new process which is identical to its creator. All the variables have the same value, except $pid, which receives the return value of the call. In the parent process it contains the process ID of the new child, while the child is given a 0, which sends it into the other branch of the if statement.
There, it executes a new program via exec(). This means the process gives itself up completely as exec() replaces it with the new program. There is no turning back here, if the called program terminates, so does the child process.
This is exactly what the creator has been waiting for in the meantime, now calling 'wait' to round up its flock. Only after a futile wait call returning the value "-1" has signalled that there are no more active child processes will it start evaluating the results, which can be found in the respective temporary files.
The outer if-loop makes sure there isn't more than a given maximum number of processes to avoid system overload. If the number of active child processes becomes greater than $maxfork the parent process waits until the first one has finished.
The program called via exec() - again a Perl script - executes a simple HTTP GET request for the IP address submitted as a parameter. A timeout causes the request to be aborted after three minutes and prompts an error message.
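The usual Perl idiom for such a timeout is alarm() with a local SIGALRM handler. The sketch below illustrates the idea only; do_get (passed in as a code reference) is a hypothetical stand-in for whatever performs the actual HTTP request, and the three-minute limit is shortened for the demonstration.

```perl
#!/usr/bin/perl
# Timeout idiom: wrap the blocking request in eval with a SIGALRM
# handler, so a hung server cannot stall the probe past $timeout seconds.
use strict;
use warnings;

sub fetch_with_timeout {
    my ($host, $timeout, $do_get) = @_;
    my $result;
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm($timeout);           # arm the timer
        $result = $do_get->($host);
        alarm(0);                  # disarm on success
    };
    if ($@) {
        die $@ unless $@ eq "timeout\n";   # re-throw real errors
        return undef;                      # timed out
    }
    return $result;
}

# demonstration with fake requests: one fast, one that "hangs"
my $ok   = fetch_with_timeout("fast.example", 5, sub { "200 OK" });
my $slow = fetch_with_timeout("slow.example", 1, sub { sleep 3; "200 OK" });
print defined $ok   ? "fast: $ok\n"   : "fast: timed out\n";
print defined $slow ? "slow: $slow\n" : "slow: timed out\n";
```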
Should script execution take more than ten minutes, the existing lock file prevents the start of a new instance before the old one has been finished. Multiple execution could result in a death spiral plunging the entire system into an abyss.
We show only part of another loop, in which the test program checks the web server gateways. The list read in at the beginning contains each server's IP address and also its gateway's. As alternative routing may change this address, as well as the equally important hop count (the number of stations up to the gateway), this list is updated in the background.
The actual test program's output may look like this:
953550000 www.heise.de 193.100.232.131 1 1 1
The first value is the elapsed time since 1.1.1970 in seconds. This value is retrieved when the program is started and does not represent the precise time of measurement. After that we have server name and address. The last three values signal with either '0' or '1' whether the server could be pinged, whether the gateway was accessible, and finally whether the HTTP request was successful.
After more than four weeks, over 4 MBytes of data had been collected and had to be evaluated. This was done using - you guessed it - a little Perl script which added up and assessed the time spans between two web server status changes. Another Perl script used this output to generate an overview of uptimes and downtimes per server which a third analysis script evaluated statistically according to search criteria like operating system or web server.
#!/usr/bin/perl
#...
$maxfork = 20;       # maximum number of procs
$stime   = time();   # get current time in secs since 1970

# --- check if older proc is already running ------
stat $lock;
if (-e _) {
    $curtime = scalar localtime($stime);
    print STDERR "$curtime: LOCKED, skipping.\n";
    exit 1;
}
# nope, so lock it
open LOCK, ">$lock";
print LOCK "$$";
close LOCK;

# get host list
open IFILE, $hlist;
# --- extract file info into hashes ----
while (<IFILE>) {
    ($name, $ip, $gate, $hop, $url, $ping) = split " ", $_;
    $allhosts{$ip}    = $name;
    $allgates{$ip}    = $gate;
    $allhops{$ip}     = $hop;
    $allurls{$ip}     = $url;
    $allgates_{$gate} = $ip;    # no double gates here
}
close IFILE;

#--- test gateways ---
foreach $gate ( keys %allgates_ ) {
    ...
    # use "traceroute" with fixed hop-count
    open TRACE, "$trace -f $hop -m $hop $host 2> /dev/null |";
    ...
}

# --- get web-page ---
$loop   = 0;
$forked = 0;
foreach $ip (keys %allhosts) {
    my ($url);
    $url = $allurls{$ip};
    if ( $forked > $maxfork ) {
        wait;
        $forked--;
    }
    if ( $pid = fork ) {
        $forked++;                # parent
    } elsif ( defined $pid ) {    # child
        exec("$gwww $ip $url > $tmp/$loop.o 2> $tmp/$loop.e");
    } else {
        die "error forking!";
    }
    $loop++;
}
# wait for child procs
while ( wait != -1 ) { ; }

# now get file results
$loops = $loop;
for ( $loop = 0; $loop < $loops; $loop++ ) {
    open( LOG, "<$tmp/$loop.o" );
    ($host, $stat) = split ' ', <LOG>;
    $wstat{$host} = $stat;
    close( LOG );
}

# --- print all results ---
foreach $ip (keys %allhosts) {
    if (!defined($pstat{$ip})) { $pstat{$ip} = "-"; }
    if (!defined($gstat{$ip})) { $gstat{$ip} = "-"; }
    if (!defined($wstat{$ip})) { $wstat{$ip} = "-"; }
    print "$stime $allhosts{$ip} $ip ",
          "$pstat{$ip} $gstat{$ip} $wstat{$ip}\n";
}
print "#-----\n";
unlink $lock;
(Perl) Simple Perl 5 script to calculate hex notation for web colors.
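The calculation itself is a one-liner around sprintf; a minimal sketch of the idea (not the script referred to above):

```perl
#!/usr/bin/perl
# Convert decimal RGB components to the #RRGGBB hex notation
# used in HTML and CSS.
use strict;
use warnings;

sub rgb_to_hex {
    my (@rgb) = @_;
    for (@rgb) {
        die "component $_ out of range 0-255\n" if $_ < 0 || $_ > 255;
    }
    return sprintf "#%02X%02X%02X", @rgb;
}

print rgb_to_hex(255, 140, 0), "\n";   # prints "#FF8C00"
```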
A fast HTML link checker shareware package. This is a total rewrite of Rick Jansen's webxref by Jim Bowlin, written in Perl. Checks local files, remote sites, and remote links. Good documentation.
A huge catalog of contributed scripts you can use for Web development. Lots of samples in Perl, Java, JavaScript, Tcl, Shell, AppleScript, VB, C/C++ and other languages. Well organized.
WebPages is a CGI script written in Perl that can centralize all pages from several web servers containing this script, and merge them all into one page. It is especially useful for distributed sites that hold personal pages.
webgrep is a set of 7 different check and search utilities for the web-master.
http://www.linuxfocus.org/~guido.socher/webgrep-1.7.tar.gz Alternate Download: http://www.oche.de/~bearix/g/webgrep-1.7.tar.gz Red Hat Packages:
http://www.linuxfocus.org/~guido.socher/webgrep-1.6-1.i386.rpm Homepage: [April 16, 1999] Proxy Caching -- parameters using in caching
Philip and Alex's Guide to Web Publishing -- a 17-chapter book containing everything I've learned from building nearly 100 RDBMS-backed Web sites
Adding Collaboration to Your Web Site (without installing an RDBMS or hiring a dbadmin)
Building Collaborative Web Services Using the ArsDigita Community System, a free open-source toolkit that can save you $1 million
Site Development (a structured approach to coordinating multiple people working on a mostly-static site)
ArsDigita Server Architecture, a way to keep that beautiful site up and running reliably
Recommended Links
Softpanorama University WWW-Scripting Links
World Wide Web Software HTML Converters in the Yahoo! Directory
Perl Web Site Management Scripts
Matt's Script Archive, Inc. Free Perl CGI Scripts
The CGI Resource Index: Programs and Scripts: Perl Tests and Quizzes
Bookmarks Converters
Favorites.pl (Perl)
Perl script that creates a web page from an Internet Explorer favorites list.
BM.PL (Perl)
BM.PL is a command-line script that will convert Microsoft Internet Explorer Favorites into a Netscape Navigator Bookmark file.
bmark (Perl)
This is a variation of bmarkcgi, which creates separate Web pages from your Netscape bookmarks file.
bmarkcgi (Perl)
A CGI program which allows you to publish your Netscape bookmarks file on your Web site.
Table Generation and Conversion
Table Creator (Perl)
This script creates HTML tables on the fly by reading in your data from an ASCII text file.
Table Maker (Perl)
Creating lots of tables takes a lot of time, and this Perl script is a solution that will make it easy for you.
HTML to Plain Text and vice versa
HTML Converters
Lynx and W3 can batch convert HTML into text. Mosaic and other browsers can do it too
txt2html (Perl)
Perl script that converts plain text files into HTML.
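A toy version of what such a converter does can fit in a few lines: escape the HTML metacharacters, then wrap blank-line-separated blocks in <p> tags. The real txt2html also detects headings, lists, and more; this sketch only illustrates the principle.

```perl
#!/usr/bin/perl
# Minimal text-to-HTML conversion: escape metacharacters, then
# treat blank-line-separated blocks as paragraphs.
use strict;
use warnings;

sub text_to_html {
    my ($text) = @_;
    # escape the characters that are special in HTML
    $text =~ s/&/&amp;/g;
    $text =~ s/</&lt;/g;
    $text =~ s/>/&gt;/g;
    # runs of blank lines separate paragraphs
    my @paras = grep { /\S/ } split /\n\s*\n/, $text;
    return join "\n", map { "<p>$_</p>" } @paras;
}

print text_to_html("First paragraph.\n\nSecond > first."), "\n";
```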
Global Replacement
sarep (Console/Editors) Command-line search and replace tool written in Perl.
Sep 16th 1998, 21:51 stable: 0.32 - devel: none - license: freely distributable
rpl
PelDaddy - May 26th 1999, 11:15 EST
rpl is a UNIX text replacement utility. It will replace strings with new strings in multiple text files. It can scan directories recursively and replace strings in all files found. Includes source, build script, and man page. Should work on most flavors of Unix.
ftp://ftp2.laffeycomputer.com/pub/current_builds/rpl.tar.gz
ftp://ftp.laffeycomputer.com/ftp/pub/current_builds/rpl.tar.gz
replacer.pl (Perl) A utility to replace all instances of a given text string with a new text string in all the files in a single directory.
Treesed -- Freeware
Treesed, a Perl program, is a search/replace tool for lists of files. It can search for patterns in a list of files, or even a tree of directories with files.
Usage:
treesed pattern1 <pattern2> -files <file1 file2 ...>
treesed pattern1 <pattern2> -tree
Treesed searches for pattern1. If pattern2 is supplied, pattern1 is replaced by pattern2; if pattern2 is not supplied, treesed just searches. A list of files can be supplied with the -files parameter. Treesed can also search/replace in files in subdirectories if you supply the -tree parameter; all files in the current directory and its subdirectories are then processed. A backup of the original file is always made, with a random numeric suffix.
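The replacement pass for a single file can be sketched as follows. This illustrates the behaviour described above (in-place replace with a randomly-suffixed backup); it is an assumption-laden sketch, not Treesed itself.

```perl
#!/usr/bin/perl
# Sketch of a treesed-style replacement for one file: slurp it,
# substitute pattern1 with pattern2, and keep the original under
# a random numeric suffix.
use strict;
use warnings;

sub replace_in_file {
    my ($file, $pat1, $pat2) = @_;
    open my $in, '<', $file or die "cannot read $file: $!";
    local $/;                       # slurp the whole file
    my $text = <$in>;
    close $in;
    my $count = ($text =~ s/$pat1/$pat2/g);
    if ($count) {
        my $backup = "$file." . int(rand(100000));
        rename $file, $backup or die "cannot back up $file: $!";
        open my $out, '>', $file or die "cannot write $file: $!";
        print $out $text;
        close $out;
    }
    return $count;
}

# demonstration on a throwaway file
my $f = "/tmp/treesed_demo.$$";
open my $fh, '>', $f or die $!;
print $fh "old text, old habits\n";
close $fh;
my $n = replace_in_file($f, qr/old/, "new");
print "replaced $n occurrence(s)\n";
unlink glob "$f*";
```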
Sitemap generator
- nSite generates site maps for a given WWW site. It walks a site from the root URL and generates an HTML, text, or XML link page which illustrates the structure and links of the site. This is a highly configurable Perl 5 script and companion module. The site map can contain the page URL, title, unique fingerprint, summary, and lists of internal (blue) and external (orange) links. Using this tool can be a quick way to determine the structure of an unknown or complex web site. It was inspired by and extends the sitemapper.pl v1.016 and WWW::Sitemap v0.002 utilities.
- QuickLinks (Perl) This script creates a file with HTML links to every file in a specified directory. The name of each link matches the name of the file, and the links appear in alphabetical order.
Table of contents generators
htmltoc (Perl) htmltoc is a Perl program to generate a Table of Contents (ToC) for HTML documents.
HTML Converters
See also
Accessibility Tools page -- adobe PDF2HTML conversion page
Hypermail 2 alpha 16
Daniel Stenberg - March 14th 1999, 05:40 EST
Hypermail 2 is a much enhanced version of the popular tool that converts mail into nicely formatted HTML pages. Version 2 has a lot of new features, including MIME support. Perfect for archiving mailing lists and similar.
Changes: "text_types" for MIME types treated as text, better url parsing, no more options.h, as well as various bugfixes.
asp2php 0.01 converts Active Server Pages (ASP) files, which only run on the Microsoft IIS Web Server, into PHP3 pages that run on Apache. This is the first alpha release, to let people know this is being done. It will currently convert only very simple ASP pages.
Michael Kohn @ 12/22/98 - 15:03 EST txt2html - Text to HTML converter
Programming Languages
- perl2html.pl - converts Perl code to HTML (this is not the authoritative source)
- C++ to HTML
- C++ to HTML another one.
- ctoohtml is a C/C++ to HTML filter that HTMLizes a set of source and header files. Function and macro definitions are automatically anchored, and references to these definitions are hyperlinked. In addition, the user may include HTML markup within the code.
- CXX2HTML is perhaps better than the previous two.
- The Cocoon Utilities process C++ include files and produce a net of web pages that document the libraries, classes, and global functions and types that are found in them
- Object Outline extracts comments directly from source code. Requires no source changes or ugly comment tags. Combine external design documents, source code, and comments into a single, coherent, up to date, document. Automatic hyper linking. Integrate into an internal WWW project page. Works with all of the major 32 bit Windows Compilers.
- c2man can be used to generate man page style HTML documentation from C source.
- HTCLtoTCL preprocessor that converts TCL code with HTML directives embedded as comments into HTML documents and TCL source.
- HyperCode - generates HTML representations of program source code including definition/use links, documentation links, user comments, execution profiles and more.
Source Code Documentation
Cocoon (C/C++) The Cocoon utilities process C++ include files and produce a net of web pages that document the libraries, classes, and global functions and types that are found in them.
Cxx2html (Perl) Cxx2html creates HTML pages from C++ header file information.
Esd2html (Perl) Esd2html extracts embedded source documentation into linked html pages.
PERCEPS (Perl) PERCEPS is a Perl script designed to parse C/C++ header files and automatically generate documentation in a variety of formats based on the class definitions, declarations, and comment information found in those files.
Src2html (Perl) Src2html is a program which takes a C source tree and creates a set of HTML hypertext documents that allows the most important symbols in the source tree to be found easily.
Srcdoc (Perl) This tool generates documentation in HTML format from C source files.
The EXT System (Perl) The EXT system is a set of programs that generate documentation for the World-Wide Web from specially-formatted C programs and automatically place function prototypes in header files.
Mail Convertors
- Hypermail 2 alpha 16 download
- Usenet-Web 1.0.2 -- Usenet-Web 1.0.2 is a combination Usenet newsgroup archiver and archiveWeb presentation system. It includes facilities for searching the archive 'From' and 'Subject' lines for ranges of dates and can handle multiple newsgroup archives.
- mail2html - another one by Earl Hood, written in Perl. mail2html has been replaced with MHonArc.
- MHonArc (Perl) MHonArc is a Perl mail-to-HTML converter.
htxp -- htxp is a macro preprocessor for HTML by M.K. Kwong that provides time-saving features for writing HTML files: user-specified abbreviations, and built-in and user-definable macros.
Babymail -- Babymail is a Perl script that converts a Unix mailbox file into a browseable HTML archive.
HTML Validation
Fixspell (Perl)
Perl script that fixes spelling in text files and HTML pages.
htmlchek (Perl)
Perl source code for an HTML validator.
Weblint (Perl)
Weblint is a syntax and minimal style checker for HTML.
Link Verifier
MOMspider (Perl)
MOMspider is a web robot written in Perl that verifies links across multiple websites.
Webxref (Perl)
Webxref is a WWW link checker and cross-referencing tool, intended to quickly check a local set of HTML documents for missing files, anchors, etc.
Macro Language
htmlpp (Perl)
Htmlpp is a pre-processor for HTML documents. Its purpose is to simplify the work of writing and packaging large numbers of HTML documents.
Jamal (Just Another Macro Language) (Perl)
Jamal is an HTML macro extension language.
FAQ
FAQ Manager (Perl) This Perl script makes it easy to prepare FAQ pages.
Faq-O-Matic (Perl) The Faq-O-Matic is a CGI-based system that automates the process of maintaining a FAQ (or Frequently Asked Questions list).
Website generators
Gtindex (Perl) This script will create an HTML index of all the graphics specified on the command line.
Gtml (Perl) Gtml is an HTML pre-processor which adds extra features, specially designed for maintaining multiple Web pages.
HomePageMaker (Perl) HomePageMaker enables you to host small homepages built and maintained by your visitors.
HTML Editing Suite (Perl) Script to allow direct editing of files on a server.
HTML suite (Perl) HTML suite allows you to edit files online, advanced features include dynamic index building and page editing.
Makehomeidx (Perl) makehomeidx is a Perl program that creates an HTML file (a home page index) containing links to users' home pages on the machine where makehomeidx is invoked.
MaxiGen (Perl) MaxiGen is an HTML generator and editor designed to automate and ease the process of publishing web pages on a personal homepage account.
Page O Matic (Perl) Page 'O Matic allows you to maintain Web pages through your web browser without messing around with FTP'ing HTML files.
The Homepage Machine (Perl) Perl script for a simple to use homepage generator.
Web Page Generator (Perl)
This program allows the user to create a generic web page.
Random Findings
Crnuke (Perl)
Reduce the size of your web pages with basic HTML compression using crnuke.
Catalog
Loic Dachary - February 01st 1999, 11:54 EST
Download: http://www.senga.org/download.html Mirror List: http://www.perl.com/CPAN/modules/by-module/Catalog/ Homepage: http://www.senga.org/ Changelog: http://www.senga.org/Catalog/current/ChangeLog Catalog allows you to build and maintain Yahoo!-style catalogs. It includes a browsing interface and an easy-to-use template system for customization of the HTML pages. It is fully documented and distributed under the GNU General Public Licence. Catalog is also included in CPAN.
VelociGen for Perl (VEP) 1.0c VelociGen for Perl is a high performance web application server using Perl as its programming language. It features a CGI compatible mode, allowing you to speed up your existing CGI scripts by as much as 20 times without modifying your existing code, as well as an embedded mode which allows you to embed Perl code within your HTML pages, resulting in dynamic, template based pages. Standard Perl extensions can be used to access databases, create images on the server, and much more. The Linux version is free for non-commercial use. This release has been updated to work with Apache 1.3.3. It is available for both Libc5 and Glibc based systems.
Fish Head @ 11/11/98 - 15:39 EST Net::DNS version 0.12
Net::DNS is a Perl interface to the DNS resolver. It allows the programmer to perform any type of DNS query from a Perl script.
Language: Perl. Platform: Unix.
Download Complete Source Code, 0.090M bytes
Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.
This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...
You can use PayPal to buy a cup of coffee for the authors of this site. Disclaimer:
The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.
Last modified: March 12, 2019