Adapted from IBM General Parallel File System - Wikipedia and IBM Redbooks Implementing the IBM General Parallel File System (GPFS) in a Cross Platform Environment
The IBM General Parallel File System (GPFS) is a high-performance clustered network filesystem. Competitors include Lustre, Red Hat GFS2, CXFS (specialized for storage area network (SAN) environments), and OCFS2 (available with Oracle Linux 5 but not later versions; it is bundled with Oracle's Unbreakable Enterprise Kernel).
It is a rather old software product: it was initially released for AIX on RS/6000 systems (1998). With GPFS, IBM proved that it can play the naming game as well as HP; the latest known name is Spectrum Scale.
GPFS assumes that nodes within a subnet are connected using high-speed networks. It can natively utilize SAN and Infiniband networks. In the SAN case Ethernet is still used for tokens and some metadata operations, but all nodes for a given GPFS filesystem can point to the same LUN.
With version 4.2 GPFS became too complicated, and it is never going to be a general-purpose parallel file system. The IBM licensing model alone guarantees that.
It can store metadata and data separately (which allows placing metadata on SSD drives) and can access them via different network channels. There are three usage options for each NSD: dataAndMetadata, dataOnly, and metadataOnly.
GPFS uses a token management system to provide data consistency while allowing multiple independent paths to the same file, by the same name, from anywhere in the cluster. GPFS provides storage management based on three constructs for grouping files in the filesystem: storage pools, policies, and filesets.
Up to version 3.5 this was just a reliable distributed filesystem. In version 4.2 it lost conceptual integrity and became an all-singing, all-dancing solution: it supports compression, encryption, replication, quality-of-service I/O, WAN connectivity and more. Version 4.2 also introduced a GUI, which tries to hide the complexity of GPFS and just makes things worse.
On level zero you can view it as NFS with multiple servers instead of a single server that exports the filesystem to the nodes. Both are client-server solutions; the main difference is that in GPFS there can be multiple servers serving the same set of nodes.
Upper limits are really impressive: 18 PB and up to 2048 disks per filesystem, up to 256 filesystems, up to 2^64 files per filesystem, and roughly 5,000 nodes max.
The key function of GPFS is similar to NFS: it allows applications on multiple ("computational") nodes to share file data, but not via a single "mothership" as in NFS -- via a whole fleet of storage nodes. Like most parallel filesystems it stripes data across multiple logical disks called NSDs (network shared disks). GPFS is based on a shared-disk model which provides lower-overhead access to disks not directly attached to the application nodes, and uses a distributed locking protocol to provide full data coherence for access from any node.
It offers many of the standard POSIX file system interfaces allowing most applications to execute without modification or recompiling. These capabilities are available while allowing high speed access to the same data from all nodes of the cluster and providing full data coherence for operations occurring on the various nodes. GPFS attempts to continue operation across various node and component failures assuming that sufficient resources exist to continue.
GPFS can use Infiniband directly (with Mellanox switches and cards) and Remote Direct Memory Access (RDMA) to provide access to the file system. This is especially useful in HPC clusters that have high I/O requirements, such as genome decoding. TCP, even with a 10Gbit card, is way too slow.
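As a minimal sketch (the InfiniBand port name below is illustrative, not taken from any particular setup), RDMA is typically switched on cluster-wide via configuration parameters and normally needs a restart of the GPFS daemon to take effect:
mmchconfig verbsRdma=enable
mmchconfig verbsPorts="mlx4_0/1"
mmlsconfig | grep verbs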
As a cluster file system GPFS provides a global namespace, shared file system access among GPFS clusters, simultaneous file access from multiple nodes, high recoverability and data availability through replication, the ability to make changes while a file system is mounted, and simplified administration even in large environments.
The same file can be accessed concurrently from multiple nodes. GPFS is designed to provide high availability through advanced clustering technologies, dynamic file system management and data replication. GPFS can continue to provide data access even when the cluster experiences storage or node malfunctions. GPFS scalability and performance are designed to meet the needs of data intensive applications such as engineering design, digital media, data mining, relational databases, financial analytics, seismic data processing, scientific research and scalable file serving.
The unique differentiation points of GPFS versus other file systems are as follows:
GPFS is commercial software licensed by IBM. It has one open source component -- the so-called portability layer, which is compiled for each kernel during installation. Unlike Lustre, it is neither free nor open source. As usual, IBM plays a rather dirty licensing game -- licenses are per socket, not per node.
There are three types of licenses: Express, Standard (+storage pools, +policy, +Hadoop, +cNFS, +WAN features, +GUI) and Advanced (+crypto, +compression).
In addition, there are two types of licenses depending on the type of node:
Client: The IBM Spectrum Scale Client license permits exchange of data between nodes that locally mount the same GPFS file system. No other export of the data is permitted. The GPFS client cannot be used for nodes to share GPFS data directly through any application, service, protocol or method, such as Network File System (NFS), Common Internet File System (CIFS), File Transfer Protocol (FTP), or Hypertext Transfer Protocol (HTTP). For these functions, an IBM Spectrum Scale Server license would be required.
Licenses also distinguish between computational nodes and storage nodes. The Express license is the cheapest.
GPFS began as the Tiger Shark file system, a research project at IBM's Almaden Research Center, as early as 1993. Tiger Shark was initially designed to support high-throughput multimedia applications. This design turned out to be well suited to scientific computing.
Another ancestor of GPFS is IBM's Vesta filesystem, developed as a research project at IBM's Thomas J. Watson Research Center between 1992-1995. Vesta introduced the concept of file partitioning to accommodate the needs of parallel applications that run on high-performance multicomputers with parallel I/O subsystems. With partitioning, a file is not a sequence of bytes, but rather multiple disjoint sequences that may be accessed in parallel. The partitioning is such that it abstracts away the number and type of I/O nodes hosting the filesystem, and it allows a variety of logical partitioned views of files, regardless of the physical distribution of data within the I/O nodes. The disjoint sequences are arranged to correspond to individual processes of a parallel application, allowing for improved scalability.
Vesta was commercialized as the PIOFS filesystem around 1994 and was succeeded by GPFS around 1998. The main difference between the older and newer filesystems was that GPFS replaced the specialized interface offered by Vesta/PIOFS with the standard Unix API: all the features to support high performance parallel I/O were hidden from users and implemented under the hood.
Today, GPFS is used by many of the top 500 supercomputers listed on the Top 500 Supercomputing Sites web site. Since inception GPFS has been successfully deployed for many commercial applications including: digital media, grid analytics and scalable file service.
In 2010 IBM released a version of GPFS that included a capability known as GPFS-SNC, where SNC stands for Shared Nothing Cluster. This allows GPFS to be used as a filesystem for locally attached disks on a cluster of network connected servers rather than requiring sharing of disks using a SAN with dedicated servers. GPFS-SNC is suitable for workloads with high data locality.
GPFS provides high performance by allowing data to be accessed over multiple computers at once. Most existing file systems are designed for a single server environment, and adding more file servers does not improve performance. GPFS provides higher input/output performance by "striping" blocks of data from individual files over multiple disks, and reading and writing these blocks in parallel. Other features provided by GPFS include high availability, support for heterogeneous clusters, disaster recovery, security, DMAPI, HSM and ILM.
GPFS consists of two types of actors: NSD servers and GPFS clients.
According to (Schmuck and Haskin), a file that is written to the filesystem is broken up into blocks of a configured size, less than 1 megabyte each. These blocks are distributed across multiple filesystem nodes, so that a single file is fully distributed across the disk array. This results in high reading and writing speeds for a single file, as the combined bandwidth of the many physical drives is high. This makes the filesystem vulnerable to disk failures --- any one disk failing would be enough to lose data. To prevent data loss, the filesystem nodes have RAID controllers - multiple copies of each block are written to the physical disks on the individual nodes. It is also possible to opt out of RAID-replicated blocks, and instead store two copies of each block on different filesystem nodes.
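For illustration (a sketch only; the device name, descriptor file and replica counts are hypothetical), GPFS-level replication is requested with the replica flags of mmcrfs, and it only protects against disk loss if the NSDs are placed in different failure groups in the disk descriptors:
mmcrfs fs1 -F disks.txt -m 2 -M 2 -r 2 -R 2 -T /gpfs/fs1
Here -m/-M set the default and maximum number of metadata replicas, and -r/-R the default and maximum number of data replicas.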
Other features of this filesystem include
It is interesting to compare this with Hadoop's HDFS filesystem, which is designed to store similar or greater quantities of data on commodity hardware - that is, datacenters without RAID disks and a Storage Area Network (SAN).
http://ti-alejandro.blogspot.com/2010/12/global-file-system-gpfs-disk-descriptor.html
The Disk Descriptor
When a disk is defined as an NSD for use by GPFS a descriptor is written to the disk so that it can be identified when the GPFS daemon starts. The descriptor is written in the first few sectors of each disk and contains information like the disk name and ID. The format of the disk descriptor layout is as follows:
Sector 1 contains the "FS unique id", which is assigned when the NSD is assigned to a file system. This id is matched in the File System Descriptor (FSDesc) to a GPFS disk name. The id is written when one of the GPFS commands mmcrfs, mmadddisk, or mmrpldisk is run.
Sector 2 contains the NSD id, which GPFS matches with a GPFS disk name in the /var/mmfs/gen/mmsdrfs file. This is written when the mmcrnsd command is run.
Sectors 8+ contain a copy of the FSDesc, but it may not be the most current copy. This area of the descriptor is written when mmcrfs, mmadddisk, or mmrpldisk is run. A small subset (1, 3, 5, or 6) of the NSDs in the file system contains the most current version of the FSDesc. These are called the "descriptor quorum" or "desc" disks, and can be seen using the command mmlsdisk -L.
When GPFS starts up or is told that there are disk changes, it scans all the disks it has locally attached to see which ones have which NSD ids. (There is a hint file from the last search in /var/mmfs/gen/nsdmap). If it does not see an NSD id on a disk it assumes it is not a GPFS disk. A mount request will check again that the physical disk it sees has the correct NSD id and also that it has the correct "FS unique id" from the most recent FSDesc.
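To illustrate (a sketch; 'fs1' is a hypothetical file system name), the NSD-id-to-device mapping and the descriptor quorum disks can be inspected with:
mmlsnsd -m
mmlsdisk fs1 -L
The first command shows how NSD names and ids map to local devices on each node; in the second, the "desc" remark marks the disks holding the most current FSDesc.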
Storage pools allow for the grouping of disks within a file system. This way tiers of storage can be created by grouping disks based on performance (SSD vs rotating, 15K RPM vs 10K RPM, etc.), locality or reliability characteristics. For example, one pool could be high-performance Fibre Channel disks and another more economical SATA storage.
A fileset is a sub-tree of the file system namespace and provides a way to partition the namespace into smaller, more manageable units. Filesets provide an administrative boundary that can be used to set quotas and be specified in a policy to control initial data placement or data migration. Data in a single fileset can reside in one or more storage pools. Where the file data resides and how it is migrated is based on a set of rules in a user defined policy.
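A minimal sketch of fileset administration (the file system and fileset names are hypothetical): a fileset is created, then linked into the namespace at a junction, after which quotas or policy rules can refer to it:
mmcrfileset fs1 projects
mmlinkfileset fs1 projects -J /gpfs/fs1/projects
mmedquota -j fs1:projects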
There are two types of user-defined policies in GPFS: file placement and file management. File placement policies direct file data to the appropriate storage pool as files are created. File placement rules are determined by attributes such as file name, user name or fileset. File management policies allow a file's data to be moved or replicated, or files to be deleted. File management policies can be used to move data from one pool to another without changing the file's location in the directory structure. File management policies are determined by file attributes such as last access time, path name or size of the file.
The GPFS policy processing engine is scalable and can be run on many nodes at once. This allows management policies to be applied to a single file system with billions of files and complete in a few hours.
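As a hedged sketch of the policy language (the pool names, the fileset name and the 30-day threshold are all illustrative), one placement rule sends new files of a fileset to a SATA pool, the default rule keeps everything else in the system pool, and a management rule migrates files not accessed for a month; placement rules are installed with mmchpolicy, and management rules are executed by mmapplypolicy:
RULE 'scratch_placement' SET POOL 'sata' FOR FILESET ('scratch')
RULE 'default' SET POOL 'system'
RULE 'age_out' MIGRATE FROM POOL 'system' THRESHOLD(90,70) TO POOL 'sata' WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
mmchpolicy fs1 /tmp/policy.txt
mmapplypolicy fs1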
See also
https://sites.google.com/site/torontoaix/gpfs_home/gpfs_intro
http://www.slideshare.net/IBMDK/ibm-general-parallel-file-system-introduction
IBM products, licensing, etc
26 years at IBM, so I've seen this before. When you, the product manager, go to Legal before launch to get the licensing decided, your project can end up being assigned to the nasty, paranoid tight-ass, and you end up with an unworkable proposition in the marketplace.
Even at behemoth companies like IBM, it often comes down to the attitudes of individuals.
In my day, it was not unusual to have to get a very large number of Go/NoGo signoffs before being permitted to launch. In one case, 57 signoffs, any one of whom could kill the product after hundreds of thousands had been spent on developing it.
15 Apr 2014 | Re: IBM products, licensing, etc
You forgot to say "and then they sack all of the developers".
15 Apr 2014
IBM already make such products: the enterprise-class SONAS and mid-range IBM V7000 Unified. They are just clustered Linux/Samba NAS gateways running GPFS, see:
http://www.redbooks.ibm.com/abstracts/sg248010.html
Open Chapter 7.
V7000 Unified
Peter Gathercole
The V7000 Unified has many shortcomings... here are ten...
1. The V7000 Unified TSM client is limited (not full), it doesn't allow for backup sets etc.
2. The number of snapshots is limited (can be limited to a couple per day depending on the rate of change of data), deletion of snapshots can cause performance issues
3. Support is limited - anyone with any significant knowledge is based in Mainz Germany and you better be a large client to get access to them
4. NFS version 4 is not supported
5. SMB 2.1 and SMB signing is not supported
6. TPC reporting is constrained on the V7000 Unified (if you're after file information, rather than block)
7. IBM have decimated their UK pre-sales engineering teams and are relying on re-sellers to provide client pre-sales support, this is not working well yet
8. The product has suffered from data corruption and data loss issues
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004483
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004375
9. Try and find a training course - IBM now rely on partners who never seem to be able to get enough people to run a course
10. There is no SVC equivalent for files on the Unified, so migrations to it can be challenging
Mostor Astrakan
GPFS is an old-school product. It's been around for a long time (I first heard about it as mmfs about 20 years ago), and as such it is configured like an old-school product.
But I would say that it seriously benefits from not being set up by a point-and-click GUI. It is a very high performance filesystem, and really benefits from the correct analysis of the expected workload to size and place the vdisks and stripe the filesystems accordingly. It's just one of those systems that is traditionally deployed in high-cost, high function environments where the administrators are used to/prefer to work using a CLI. If it were to appear in more places, it may need to change, but then that is what I thought SONAS was supposed to provide.
I have been working with GNR and the GPFS disk hospital for the last two years on a P7IH system, and now that the main bugs have been worked out (which were actually mostly in the control code for the P7IH Disk Enclosure which provide 384 disks in 4U of rack space, although it is a wide and deep rack), it really works quite well, although like everything else in GPFS, it's CLI based. But to my mind, that's not a problem. But it is very different, and takes a bit of getting used to, and it could be integrated with AIX's error logging system and device configuration a bit better.
Dapprman"But I would say that it seriously benefits from not being set up by a point-and-click GUI."
Oh yes. Many things do. You can get an HACMP cluster running in roughly five minutes using the user friendly SmittyWizard. Any idiot can do it. Which leads one to the disadvantage of having something that any idiot can set up: You get a cluster (or in this case a high performance file system), that you are going to trust the weight of your Enterprise to... set up by idiots. Which is why the section of IT bods who are not idiots never go for the easy install option.
Miss running GPFS systems | Anonymous Coward
I'm another GPFS fan, however I also fear looking at the costs. I describe it as the sort of product where, if you have the requirements and the financial backers, then it is worth it; however if you're missing one of those it's just too expensive.
Back in ~1999/2000 (almost a decade before I started using it) I remember there were three tiers - a basic, very limited free version, a cheap version with no resilience, and the full fat resilient version. I think the first two got dropped, as people tried running setups with them and then complained that it was a useless system when they had a disk failure or a node went down/was taken down.
BTW - with experience you can get it up and running rather quickly, it just depends on what additional complexities you want to introduce.
Preaching to the choir
GPFS is great. Couple it with IBM LTFS and you have the best/least costly archive storage platform around. Throw some Flash into that mix and you have a storage platform which will suit almost everyone's requirements at a fraction of the competitors' costs (people need a little high IOPS/low latency storage and a lot of high capacity/low cost storage). IBM needs to bundle it and make it easy to buy. They have been making strides with LTFS EE (GPFS combined with LTFS).
GPFS Solution Architects
The OP's comments about GPFS and GSS are generally spot-on. GPFS is primarily a storage tool, yet it's sold by IBM folks who don't have storage backgrounds and don't understand its competitive advantages (primarily when supporting complex global workflows, multi-PB capacities, or compute-intensive workflows).
One of the primary benefits of GPFS is a dramatic cost reduction (both CAPEX and OPEX) for customers using petabytes of Tier-1 disk. If you're buying Tier-1 disk, do the research -- you'll be shocked to find out what's possible using a multi-tiered approach with a tape-based storage tier for archiving and integrated data protection (no need to 'duplicate & replicate' for DR).
As GPFS solution architects, we've made a living being that 'last mile' between the customer and IBM. It's ironic, but the fact that IBM is 'difficult to work with' has given us a place to be relevant.
Also, GPFS has recently undergone a tremendous amount of development.
For an easy, good read:
https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=stg-web&S_PKG=ov21284
John Aiken
www.re-store.net
24 Apr 2014 | Re: GPFS Solution Architects
I deployed a V7000 system for IBM and the IBM sales folks are indeed the biggest problem. They promise the world and do not understand what they are talking about.
The V7000 had serious limitations and bugs when I was setting it up (Q1 2012) and as Peter Gathercole mentioned the GUI interface that tries to hide the complexity of GPFS just makes things worse.
Add in the support issues John mentioned and it seems Re-Store have a sweet niche helping folks with a genuine need for GPFS.
Introduction
We're installing a two-node GPFS cluster: gpfs1 and gpfs2. These are RHEL5 systems, accessing a shared disk as '/dev/sdb'. We're not using the GPFS client/server feature, just two nodes with direct access to the NSD.
Installation
On each node, make sure you've got these packages installed,
rpm -q \
 libstdc++ \
 compat-libstdc++-296 \
 compat-libstdc++-33 \
 libXp \
 imake \
 gcc-c++ \
 kernel \
 kernel-headers \
 kernel-devel \
 xorg-x11-xauth
On each node, make sure the node names resolve and that root can log in to every node, including itself,
cat /etc/hosts
ssh-keygen -t dsa -P ''
copy/paste the public keys from each node,
cat .ssh/id_dsa.pub
into the same authorized_keys2 on all the nodes,
vi ~/.ssh/authorized_keys2
check that the nodes can connect to each other, including themselves,
ssh gpfs1
ssh gpfs2
On each node, extract and install IBM Java,
./gpfs_install-3.2.1-0_i386 --text-only
rpm -ivh /usr/lpp/mmfs/3.2/ibm-java2-i386-jre-5.0-4.0.i386.rpm
extract again and install the GPFS RPMs,
./gpfs_install-3.2.1-0_i386 --text-only
rpm -ivh /usr/lpp/mmfs/3.2/gpfs*.rpm
On each node, get the latest GPFS update (http://www14.software.ibm.com/webapp/set2/sas/f/gpfs/download/home.html) and install it,
mkdir /usr/lpp/mmfs/3.2.1-13
tar xvzf gpfs-3.2.1-13.i386.update.tar.gz -C /usr/lpp/mmfs/3.2.1-13
rpm -Uvh /usr/lpp/mmfs/3.2.1-13/*.rpm
On each node, prepare the portability layer build,
#mv /etc/redhat-release /etc/redhat-release.dist
#echo 'Red Hat Enterprise Linux Server release 5.3 (Tikanga)' > /etc/redhat-release
cd /usr/lpp/mmfs/src
export SHARKCLONEROOT=/usr/lpp/mmfs/src
rm config/site.mcr
make Autoconfig
check those values in the configuration,
grep ^LINUX_DISTRIBUTION config/site.mcr
grep 'define LINUX_DISTRIBUTION_LEVEL' config/site.mcr
grep 'define LINUX_KERNEL_VERSION' config/site.mcr
Note. "2061899" for kernel "2.6.18-128.1.10.el5"
On each node, build it,
make clean
make World
make InstallImages
On each node, edit the PATH,
vi ~/.bashrc
add this line,
PATH=$PATH:/usr/lpp/mmfs/bin
apply,
source ~/.bashrc
On some node, create the cluster,
mmcrcluster -N gpfs1:quorum,gpfs2:quorum -p gpfs1 -s gpfs2 -r /usr/bin/ssh -R /usr/bin/scp
Note. gpfs1 as primary configuration server, gpfs2 as secondary
On some node, start the cluster on all the nodes,
mmstartup -a
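Optionally (not part of the original walkthrough), verify that the daemon reached the "active" state on both nodes and that the portability layer modules are loaded before continuing,
mmgetstate -a
lsmod | grep mmfs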
On some node, create the NSD,
vi /etc/diskdef.txt
like,
/dev/sdb:gpfs1,gpfs2::::
apply,
mmcrnsd -F /etc/diskdef.txt
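Note. (an assumption about the GPFS 3.2 descriptor syntax, kept deliberately rough) the colon-separated fields are approximately DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool, so the empty fields above simply take the defaults (dataAndMetadata usage, default failure group, an auto-generated NSD name, the system pool).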
On some node, create the filesystem,
mmcrfs gpfs1 -F /etc/diskdef.txt -A yes -T /gpfs
Note. '-A yes' for automount
Note. check for changes into '/etc/fstab'
On some node, mount /gpfs on all the nodes,
mmmount /gpfs -a
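To double-check the mount on both nodes (a quick verification, not in the original steps),
df -h /gpfs
mmlsmount gpfs1 -L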
On some node, check you've got access to the GUI,
/etc/init.d/gpfsgui start
Note. if you need to change the default ports, edit those files and change "80" and "443" to the ports you want,
#vi /usr/lpp/mmfs/gui/conf/config.properties
#vi /usr/lpp/mmfs/gui/conf/webcontainer.properties
wait a few seconds (starting JAVA...) and go to node's GUI URL,
https://gpfs2/ibm/console/
On each node, you can now disable the GUI to save some RAM,
/etc/init.d/gpfsgui stop
chkconfig gpfsgui off
and make sure gpfs is enabled everywhere,
chkconfig --list | grep gpfs
Note. also make sure the shared disk shows up at boot.
Usage
For troubleshooting, watch the logs there,
tail -F /var/log/messages | grep 'mmfs:'
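The GPFS daemon also keeps its own, more detailed log (standard location on GPFS installs),
tail -f /var/adm/ras/mmfs.log.latest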
On some node, to start the cluster and mount the file system on all the nodes,
mmstartup -a
mmmount /gpfs -a
Note. "mmshutdown" to stop the cluster.
Show cluster information,
mmlscluster
#mmlsconfig
#mmlsnode
#mmlsmgr
Show file systems and mounts,
#mmlsnsd
#mmlsdisk gpfs1
mmlsmount all
show file system options,
mmlsfs gpfs1 -a
To disable automount,
mmchfs gpfs1 -A no
to re-enable automount,
mmchfs gpfs1 -A yes
References
- Install and configure General Parallel File System (GPFS) on xSeries : http://www.ibm.com/developerworks/eserver/library/es-gpfs/
- General Parallel File System (GPFS) : http://www.ibm.com/developerworks/wikis/display/hpccentral/General+Parallel+File+System+(GPFS)
- Managing File Systems : http://www.ibm.com/developerworks/wikis/display/hpccentral/Managing+File+Systems
- GPFS V3.1 Problem Determination Guide : http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfs31/bl1pdg1117.html
- GPFS : http://csngwinfo.in2p3.fr/mediawiki/index.php/GPFS
- GPFS V3.2 and GPFS V3.1 FAQs : http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html
Mar 21, 2006 | IBM
A file system describes the way information is stored on a hard disk; examples include ext2, ext3, ReiserFS, and JFS. The General Parallel File System (GPFS) is another type of file system, available for a clustered environment. GPFS is designed for higher throughput and high fault tolerance.
This article discusses a simple case of GPFS implementation. To keep things easy, you'll use machines with two hard disks -- the first hard disk is used for a Linux® installation and the second is left "as is" (in raw format).