Learn to script (Score:4, Interesting)
by holden_t (444907) <[email protected]> on Thursday October 09, @03:09PM (#7175570)

Certainly I haven't read the book, but it looks as if Kirk is offering examples of how to write scripts to handle everyday gruntwork. Good idea.

But I say to those who call themselves sysadmins: learn how to script!!!

I work at a large bankrupt telecom :) and it's amazing the number of admins who don't have the slightest idea how to write the simplest loop. Or use ksh, bash, or csh's command history. Or vi. Maybe this is just a corporate thing. They were raised, in a sense, in a setting where all they had to do was add users and replace disks. Maybe they never learned how to do anything else.

Back in '83 I took manuals home and pored over every page, every weekend for months. That didn't make me a good admin, but it gave me a good foundation. From there I had to just halfway use my head (imagination?) and start writing scripts. Ugly? Sure. Did they get better? Of course!
Now I play admin on 110+ machines, and I stay bored. Why? Because I've written a response engine in Expect that handles most of my everyday problems. I call it AGE, Automated Gruntwork Eliminator.
There's no way I could have done this if I had just sat back and floated, not put in a bit of effort to learn new things.
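For anyone wondering what "the simplest loop" even looks like, here is a minimal sketch; the directories it walks are hypothetical stand-ins for whatever you currently check by hand.

```shell
#!/bin/sh
# A first sysadmin loop: count the entries in a few directories you care
# about. The directory list is a hypothetical example.
for d in /tmp /var; do
    n=$(ls "$d" 2>/dev/null | wc -l)
    echo "$d: $n entries"
done
```

From here it is a short step to looping over hostnames, log files, or user accounts instead of directories.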
Multiple Machines (Score:5, Interesting)
by BrookHarty (9119) on Thursday October 09, @01:48PM (#7175005)
(http://www.ironwolve.com/)

One of the problems we have is when you have clusters with 100+ machines and need to push configs, or gather stats off each box.

On Solaris, we run a script called "shout" that does a for loop that sshs into each box and runs a command for us. We also have one called "Scream" which does some root-privileged ssh-enabled commands.

Nortel has a nice program called CLIManager (used to be called CLImax) that allows you to telnet into multiple Passports and run commands. Same idea, but the program formats the data for display. Say you wanted to display "ipconfig" on 50 machines: it would format the output so you have columns of data, easy to read and put in reports.

It also has a "Watch" command that will repeat a command and format the data (say you want to display counters).

I have not seen an opensource program that does the same as CLIManager, but it has to be one of the best ideas that should be implemented in open source. Basically, it logs into multiple machines, parses and displays data, and outputs all errors in another window to keep your main screen clean.

Think of logging into 10 machines and doing a tail -f on an active log file. The program would parse the data, display it in a table, and highlight all updates.

I haven't spoken to the author of CLIManager, but I guess he also hated logging into multiple machines and running the same command. The program has been updated over the years and is now the standard interface to the nodes. It just uses telnet and a command line, but you can log into hundreds of nodes at once.

Wish I could post pics and the tgz file; maybe someone from Nortel can comment. (Runs on Solaris, NT, and Linux.)
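The core of the CLIManager idea (one command fanned out to many nodes, output labeled per host) can be sketched in a few lines of shell. This is a hedged sketch, not the Nortel tool: the host names are hypothetical, and RSH defaults to a dry-run echo so the loop can be tried without any network; set RSH=ssh for real use.

```shell
#!/bin/sh
# Fan one command out to several hosts and label each host's output.
# RSH="echo" is a dry-run stand-in; replace with RSH=ssh in real life.
RSH="${RSH:-echo}"
for host in nodeA nodeB; do          # hypothetical host list
    printf '%-12s %s\n' "$host" "$($RSH "$host" uptime)"
done
```

The real program's added value (column formatting, error windows, highlighting) is all post-processing on top of a loop like this.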
Re:Multiple Machines (Score:2)
by Xzzy (111297) <sether@ t r u 7 h.org> on Thursday October 09, @04:21PM (#7176481)
(http://tru7h.org)

> Nortel has a nice program called CLIManager (use
> to be called CLImax), that allows you telnet into
> multiple passports and run commands.

Fermilab has a tool called rgang that does something like this (minus the output formatting):

http://fermitools.fnal.gov/abstracts/rgang/abstract.html

We use it regularly on a cluster of 176 machines. Its biggest flaw is that it tends to hang when one of the machines it encounters is down.
But it is free so I won't complain.
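The hang-on-a-dead-host problem can be worked around in a home-grown runner: OpenSSH's BatchMode and ConnectTimeout options make an unreachable machine fail fast instead of blocking the loop. A hedged sketch follows; machinelist.txt is a hypothetical host list, and RSH defaults to a dry-run echo so the loop can be exercised offline.

```shell
#!/bin/sh
# Serial run-everywhere loop that skips dead hosts instead of hanging.
# Dry run by default; for real use set:
#   RSH="ssh -o BatchMode=yes -o ConnectTimeout=5"
RSH="${RSH:-echo ssh}"
printf 'nodeA\nnodeB\n' > machinelist.txt    # hypothetical host list
while read -r host; do
    $RSH "$host" uptime || echo "SKIP: $host unreachable" >&2
done < machinelist.txt
```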
Multiple Machines in Parallel (Score:1)

by cquark (246669) on Thursday October 09, @04:29PM (#7176572)

> One of the problems we have, is when you have clusters with 100+ machines, and need to push configs, or gather stats off each box. On solaris, we run a script called "shout" that does a for/next loop that ssh's into each box and runs a command for us. We also have one called "Scream" which does some root privilege ssh enabled commands.

While the serial approach of looping through machines is a huge improvement over making changes by hand, for large-scale environments you need a parallel approach, with 16 or so processes contacting machines in parallel. I wrote my own script, but these days the Parallel::ForkManager [cpan.org] module for Perl does the process management part for you.

Re:Multiple Machines (Score:2)
by Sevn (12012) on Thursday October 09, @04:57PM (#7176807)
(http://www.dangpow.com/~sevn | Last Journal: Tuesday April 01, @07:18PM)

I do pretty much the same thing this way:

Generate an ssh key pair.
Put the public key in $HOME/.ssh/authorized_keys2 on the remote machines.
Have a text file with a list of all the names the machines resolve to.

for i in `cat machinelist.txt`; do echo "running blah on $i"; ssh user@$i 'some command I want to run on all machines'; echo " "; done

It comes in handy for stuff like checking the mail queues or doing a tail -50 on a log file. Mundane stuff like that. Every once in a while I'll do basically the same thing with scp instead. It can get as complicated as you want. I used a for loop like this to remount 150 /tmp dirs noexec and make the edits to fstab.

Re:Multiple Machines (Score:2)
by drinkypoo (153816) <[email protected] minus distro> on Thursday October 09, @10:00PM (#7179637)
(http://slashdot.org/ | Last Journal: Friday November 21, @04:31PM)

IBM also owns Tivoli Systems, which made something called TME10 (the current name of which escapes me at the moment). TME10 uses CORBA; their ORB is now Java, but it used to be basically ANSI C plus classes, compiled with the Microsoft compiler on Windows and gcc on most other platforms. Lots of it was Perl, some of it was shell, plenty of it was C. Methods called Perl scripts pretty damn frequently. The interface was completely configurable, and not only could you customize it without purchasing any additional products (if you felt froggy), but they also sold products to make this easier to do.

Last I checked, this package ran with varying degrees of ability (though most operating systems were very well supported) on all major commercial Unices, BSDi, Linux, OS/2, NT, Novell, and a bunch of random Unices that most people have never heard of, and never had to. It was sometimes problematic, but the fact is that it was incredibly cross-platform.

It was a neat way to do system monitoring. It would be nice to develop something open source like that, and I think that today it would not be all that difficult a task. I'd like to see all communications encrypted, with arbitrary shapes allowed in the network in terms of who talks to whom, and who has control over whom, to reflect the realities of organizations.
Re:Multiple Machines (Score:0)
by Anonymous Coward on Thursday October 09, @04:14PM (#7176396)

IBM has two solutions, depending on the environment. PSSP under AIX will allow you to run distributed commands across nodes with either a correct RSH config or SSH keys with no passphrase. PSSP also allows for parallel copy. Under Linux (and AIX, actually) there is CSM, which also allows for DSH with the same config requirements. You can do parallel copy under CSM, but you have to be tricky with something like "dsh headnode:/file /file".

Re:Learn to script (Score:2)
by Wolfrider (856) <[email protected] minus city> on Friday October 10, @08:10PM (#7187085)
(http://wolfrdr.tripod.com/linuxtips.html)

O'Reilly's book helped me quite a bit:
http://www.oreilly.com/catalog/bash2/

In addition, Debian has a new package called abs-guide that I haven't checked out yet:
http://packages.debian.org/unstable/doc/abs-guide.html

--I've written a bunch of helpful bash scripts to help me with everyday stuff, as well as aliases and functions. If you want, email me (kingneutron at yahoo NOSPAM dot com) and put "Request for bash scripts" in the subject line, and I'll send you a tarball.
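Everyday helpers of the sort described here are often just small shell functions. A minimal sketch (the log path and the 50-line default are assumptions, not anyone's actual script):

```shell
#!/bin/sh
# lastlog: show the last N lines of a log file.
# Defaults (50 lines, /var/log/messages) are arbitrary assumptions.
lastlog() {
    tail -n "${2:-50}" "${1:-/var/log/messages}"
}

# Demo on a throwaway file:
seq 1 100 > demo.log
lastlog demo.log 10
```

Drop a function like this in your shell's startup file and it is available everywhere, which is the usual reason to prefer functions over one-off scripts for tiny helpers.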
Might be useful... (Score:2)
by Vrallis (33290) on Friday October 10, @12:22AM (#7180451)
(http://krynn.penguinpowered.com)

This might very well be a book I'll pick up sometime. I'm always looking for more ideas.

I maintain ~170 remote Linux boxes (in our company's retail stores and warehouses), as well as our ~30 or so inhouse servers.

I went through a lot of work to enable our rollout and conversion to go more smoothly. The network and methodology for users, printers, etc. are extremely simplified and patterned.

For each of the 3 'models' of PCs we use, I have a master system that I produced. I used Mondo Rescue [mondorescue.com] to produce CD backups of these systems. These systems act as serial terminal controllers, print spoolers, routers, desktop systems (OpenOffice, Mozilla, Kmail under KDE), and serve other functions as needed.
When we need to replace a system, or rollout a new location, we grab a system, pop in the Mondo CD, and do a nuke restore. When done, we have a standard configuration user that we log in as. It runs a quick implementation script where you answer anywhere from 3-8 questions (depending on the system type and options), and it configures everything. All networking, users, sets up Kmail, configures all printers and terminals (we use Comtrol Rocketport serial boards), and so on.
If the system is physically ready, we can have it ready software-wise in about 20 minutes (2 CDs to restore).
Updates are done via a couple different methods. I use SSH (over our internal VPN, using key authentication) in scripts to do most updates. If I need to do anything major, such as recently updating Mozilla, we do a CD distribution. The users have a simple menu to take care of running the update for them, even with autorun under KDE. Just pop in the CD, and it automatically takes them into the menu they need.
All logs are duplicated across the network to a central server, but intrusion is less likely as these systems sit on a private frame network. They do, however, have fully secured network setups, as we use cheap dial-up internet access as a backup in case the frame circuit goes down.
I can't help but feel every day like this is just one big hack/kludge, but it works, works damned well, and cost about half as much as any other solution (e.g. higher-end Cisco routers to handle various functions, plus Equinox ELS-IIs or the like; those pieces of crap never would work right, and we finally pulled the only 2 we had in use, which are currently collecting dust in a storage cabinet).
Needless to say, I am *always* looking for ideas to improve upon this.
The simple approach: Do it yourself (DIY)
Theoretically (and with the right tools!) anyone can build a configuration parser, right? The Perl Cookbook, for one, shows a quick implementation that provides a good start. So how hard can it be to write a configuration file parser if you begin with this kind of implementation?
Quite hard, actually, because this kind of project raises several more complex issues like these:
- Blank lines and comments in the configuration file
- Erroneous lines (like misspelled keywords), and the question of which are critical and which can be ignored
- The probability that you may have to write your own parser, because you are likely to need a variety of different data structures (booleans, scalars, arrays, and hashes, for example)
- Multiple configuration files
- Variable defaults
- Integrating command-line options with the file configuration and controlling how they interact
- Educating users in yet another DIY configuration file format (This usually goes something like: "This will work, as long as you have no '=' on a line by itself. Oh, and comments begin with '#' but they have to be by themselves. Don't forget to use uppercase for the keywords and lowercase for the values. Come back! Come back! I didn't tell you about the mandatory keywords!")
- Rewriting or copying possibly buggy configuration code instead of reusing a module
- Making the configuration an object with a consistent interface instead of the usual DIY haphazard hash of keywords
Scared yet? That's why we have AppConfig. It can handle all these concerns. It's more than likely that DIY is not what you should be using.
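To make the pain concrete, here is a sketch of a DIY parser in shell that handles only the first two items on the list above (blank lines and '#' comments). Everything else — typo detection, richer data structures, defaults, command-line overrides — is left exactly as unsolved as the article warns; the file name and keys are hypothetical.

```shell
#!/bin/sh
# Naive key=value parser: skips blanks and comments, and nothing more.
cat > config.sample <<'EOF'
# sample config
name = demo

port = 8080
EOF

while IFS='=' read -r key val; do
    key=$(echo "$key" | tr -d ' ')            # strip spaces around the key
    case "$key" in ''|\#*) continue ;; esac   # skip blank lines and comments
    val=$(echo "$val" | sed 's/^ *//')        # trim leading spaces in value
    echo "parsed: $key -> $val"
done < config.sample
```

Every unhandled case (a stray '=' on its own line, a misspelled keyword, an array value) needs another branch here, which is exactly how DIY parsers grow into the haphazard hashes the article describes.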
With the Blaster worm seeming to be under control, alleged virus-author Jeffrey Parson under house arrest in Minnesota, and hacker Adrian Lamo under the watchful eye of the feds, business-technology managers may have enjoyed a few hours of peace and quiet last week. But it was short-lived. On Sept. 10, Microsoft issued a security bulletin warning of three new critical vulnerabilities in the Windows operating system, sending systems administrators rushing to patch their computers. It's become an all-too-common scenario--and one that's causing some businesses to re-evaluate their heavy reliance on Microsoft products.
A year-and-a-half after Bill Gates declared that trustworthy computing had become Microsoft's No. 1 priority, the software bugs keep coming. The latest vulnerabilities involve the Remote Procedure Call service in Windows, making it possible for a malicious hacker to take control of a target system, introduce an infectious worm, or launch a denial-of-service attack. A week earlier, Microsoft issued five other warnings, four involving the omnipresent Office applications suite. For the year, the tally stands at 39.
And those are just the holes that have been uncovered by others and reported to Microsoft. In addition, the software vendor is combing through its code, finding holes, and issuing patches without publicizing the flaws. No one knows how many more are yet to be uncovered. "There's no way to wrap your hands around that," says Dan Ingevaldson, engineering manager with security vendor Internet Security Systems Inc.
Some business and technology professionals are running out of patience. "The issues around these vulnerabilities are escalating to the point where it's not just CIOs or CTOs, it's corporate officers, it's boards of directors asking: 'What are we going to do?'" says Ruth Harenchar, CIO of Bowne & Co., which last week scrambled to patch 4,500 Windows PCs and 500 servers in the United States and more overseas. "The situation appears to be getting worse, not better."
The patching work has thrown Bowne & Co.'s technology projects off schedule. Now, the specialty-printing-services company is assessing its options. Among them: redesigning its network around a thin-client model to reduce the number of PCs running Windows and, on other machines, migrating to Linux. "It's getting to be enough of a burden that you have to seriously start thinking about alternatives," Harenchar says.
Raymond James & Associates has assembled a team of IT staffers to manage the constant patching. "Organizations have to mobilize and realize this is going to be a way of life for the foreseeable future," says VP of IS Gene Fredriksen.
The financial-services firm, with offices around the world, last week began the arduous task of patching 10,000 PCs and 1,000 servers. "The pressure is on," Fredriksen says. "Anybody that isn't patched by the weekend is going to have trouble." The fear is that the latest vulnerability leaves Windows computers open to a Blaster-like worm. "There's a very good chance that a worm is going to be developed" to take advantage of the latest security holes, says ISS's Ingevaldson.
"People are getting fed up," says Lloyd Hession, chief information security officer at financial-network provider Radianz, adding that the number of Windows patches is reaching "epic proportions." The situation is causing more than just a few disgruntled customers to re-evaluate how much they use Microsoft products. Says Gartner security analyst John Pescatore, "There's definitely a very large trend towards that."
June 2003 | Sun
Summary
Has increased demand caused a shortage of resources on your server? Are customers complaining about slow response times? In these days of exponential network growth, keeping up with demand can be a difficult challenge. Jamie Wilson explains what you can do to analyze your current resource demands, and gives tips on planning for future growth.
(2,100 words)
It's a phone call most administrators never want to receive. "The server is slow, no one can check email. Web pages are loading slowly, or not at all!" Too often administrators find themselves trying to climb up the steep slope of increased demand. As a user base grows, the demand placed on the server grows as well. This growth may be linear and predictable, or it may be completely random or exponential.
There are ways to avoid the angry phone call altogether. Understanding system bottlenecks and gathering statistical data can help you project your system's current and future needs. This can eliminate user complaints -- and prevent that phone from ringing.
What causes a bottleneck?
Why does a system slow down in the first place? Slowdowns can usually be attributed to one or more bottlenecks, which are caused when part of the system is not running fast enough to keep up with the demands placed on it. The most common bottlenecks occur for the following reasons:
- Slow disks or disk arrays aren't able to handle I/O requests quickly enough
- The system is starved for memory, so applications are forced to swap to disk, which can slow response
- The system is out of processor power
- The network interface is overloaded
So how can you tell which of these systems may be having a problem? By using the various tools of the capacity planning trade: sar, netstat, lockstat, and top.
sar
sar is by far one of the most valuable tools an administrator has to track past trends and predict future demand. sar is only installed by default with the full distribution of Solaris. Verify that sar is installed on your system:

    pkginfo -l SUNWaccu

If it's not currently installed, you can add it by installing SUNWaccu.

Once sar is installed, you'll need to configure it to begin collecting data. First, edit the system's crontab:

    crontab -e sys

Remove the comments so that you have these lines:

    0 * * * 0-6 /usr/lib/sa/sa1
    20,40 8-17 * * 1-5 /usr/lib/sa/sa1
    5 18 * * 1-5 /usr/lib/sa/sa2 -s 8:00 -e 18:01 -i 1200 -A
Then vi /etc/init.d/perf and remove the comments below "Uncomment the following lines". This will enable sar for system-activity reporting.

You may also want to increase sar's log retention:

    vi /usr/lib/sa/sa2
    /usr/bin/find /var/adm/sa ( -name 'sar*' -o -name 'sa*' ) -mtime +30 -exec /usr/bin/rm {} ;

Your system will now begin gathering data. For a detailed explanation of how to use sar, please see the sar man pages. Here is a quick list of sar's more useful features:

- sar run with no options shows CPU usage
- sar -q shows your average queue size
- sar -p and sar -g show paging activity
- sar -d shows disk utilization
- sar -f reads a previously saved file, e.g. sar -f /var/adm/sa/sa03
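Those saved files are what make trend analysis possible. As a hedged sketch, here is how the %idle column of sar-style CPU output can be averaged with awk; the canned sample below stands in for real sar -u output, whose columns on Solaris are time, %usr, %sys, %wio, and %idle.

```shell
#!/bin/sh
# Average the %idle column (field 5) of sar-style CPU output.
# sar_u.sample is a canned stand-in so this runs without Solaris.
cat > sar_u.sample <<'EOF'
00:00:01  %usr %sys %wio %idle
00:20:01    10    5    2    83
00:40:01    20   10    5    65
EOF

avg_idle=$(awk 'NR > 1 { sum += $5; n++ } END { printf "%d", sum / n }' sar_u.sample)
echo "average %idle: $avg_idle"
```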
netstat

One of sar's shortcomings is that it will not trend network traffic for you. This can be done using netstat. netstat -in will show you your network interfaces, how much traffic they have passed since booting, and any problems with them:

    netstat -in
    Name Mtu  Net/Dest      Address       Ipkts      Ierrs  Opkts      Oerrs Collis Queue
    hme1 1500 192.168.100.0 192.168.100.1 1477758588 0      2897473608 0     0      0
    hme2 1500 192.168.101.0 192.168.101.1 3228181693 157415 3365694030 0     0      0

From this example, you can see that hme1 and hme2 are very busy, with hme2 having seen some incoming errors on its interface.

lockstat
With Solaris 2.6 and up, Sun included a utility called lockstat, which can show you what is causing kernel locking. The lockstat man pages are available for more information. Here is one example of how to use this utility:

    lockstat sleep 30 > /tmp/lock.out
    more /tmp/lock.out

Callers with the most lock counts may be causing problems. If you see hmestart or qfestart causing many kernel locks, you may need to add another network interface.

top
top is not installed with Solaris, but it is an invaluable tool that offers a realtime snapshot of what's happening on the system; you can download it from http://www.sunfreeware.com. top will show you how much memory is free on the system, and which processes are using the most CPU or memory resources.

So where's the slowdown?
Using tools such as sar, netstat, and lockstat can help you determine where a slowdown might be happening, or where one is about to happen. Here are some examples of how you can use these tools:

- sar with no options. This will show how idle the CPUs are. If your CPUs are spending a lot of time in %usr or %sys, you may have to add extra CPUs to deal with increased demand. If %wio is high, your system is waiting for your I/O subsystems to catch up; you may have a slow disk or array.
- sar -g. If you have many pgscan/s, your system is swapping (no swapping is the only good swapping), and it is probably short on memory. Use sar -r to verify this.
- netstat -in. Look to see if an interface is overloaded with traffic. If so, you may have to add another physical interface. Also look for Ierrs, Oerrs, and Collis; these should all be relatively low numbers, if not zero. High numbers in these columns can indicate network problems, such as speed or duplex autonegotiation issues, bad cabling, or a bad switch port.
- top. If all else fails, look at top. What process is taking up the most resources?

Analyze the data and make recommendations
So you've put together all of your reporting tools. You're able to do past trend analysis and future growth predictions based on sar. You can also do realtime snapshots using top. What should you do to make the system perform better now, as well as in the future?

It's very important to note that if you do identify and solve a bottleneck, your solution can potentially cause even worse problems. For example, if you have idle CPU and a busy disk, replacing the busy disk with a faster one can cause CPU usage to spike. Remember, capacity planning is a constant exercise, not a one-time activity. Here are some scenarios:

- Busy I/O subsystems. Say you've determined by using sar -d that one or more of your disks is very busy (more than 90 percent busy). Either move I/O from that disk to a faster disk or array, or split the I/O among many arrays, depending on the data. Remember that SCSI interfaces can be overloaded as well; this is difficult to determine, but it's a good idea to add new SCSI interfaces and balance I/O traffic accordingly. Improving I/O access can have a major impact on CPU or network performance.
- Busy CPUs. Using sar, it may become apparent that your system is spending heavily in %usr and %sys. You may also want to use mpstat to see more information about your CPUs. Adding CPUs in this situation can help, but it may not solve the problem: a poorly written application can consume any amount of CPU you give it.
- Busy network. netstat -in and lockstat may show your network interface to be very busy. Add another physical interface, but beware of increased I/O and CPU demands.
- Swapping. Is the system swapping? Add more memory, and do whatever you can to prevent the system from swapping. If possible, create swap on fast disks.

Application slowdowns
Sometimes system hardware isn't the problem at all. Remember that applications are what consume system resources, and poorly written applications can be very difficult to deal with. Here are some bits of advice:

- Beware of single-threaded applications. While a single-threaded application is generally easier to develop, it's also more costly to run. Many applications developed in-house are single-threaded. The worst case is the single-threaded nonforking application: one that's not only single-threaded, but also won't fork copies of itself to consume resources more efficiently. top will show only one instance of such a daemon running, and ps -eLf will show only one thread. This can be a very challenging application, as it may consume only a single CPU no matter how many CPUs you add. Single-threaded applications that fork copies of themselves are much easier to deal with, but still not as efficient as a multithreaded application.
- Learn as much as possible about the application you're dealing with. Talk to the vendors or the authors, because they'll know which tricks and tips work best. Often, entries need to be made in /etc/system so that an application can work at peak capacity, and ndd settings may also need to be tweaked based on your current needs. Consider all of these performance suggestions before adding new hardware.

Planning for future capacity
Sometimes the best way to plan for the future is to look at your past performance data. Using sar, you can ascertain a trend in the resource consumption on your system. If your system CPU was 90 percent idle three months ago, and now it's 80 percent idle, it's not unreasonable to assume that in three months your system will have only 70 percent idle CPU. Some parts of your system, such as I/O or network subsystems, may grow at exponential rates. That's why it's important to constantly gather data, so you can see where you've been and where you're going. You may also want to consider writing scripts that monitor sar and alert you when certain thresholds are reached. If your I/O is 70 percent busy for more than a week, it's probably time to consider a replacement or an upgrade.

Communication within your own organization can help you meet future capacity as well. You need to know if your marketing department is planning a big push to acquire more customers, or if a new accounting system is going into place next week. Growth is then predictable, as you can plan for increased access to your database or for exponential growth in your Web server's traffic. Knowing how your customers will be using your servers will help you provide better performance.
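The threshold-monitoring scripts suggested here can start very small. A hedged sketch: the canned sample stands in for live sar -u output, the 30 percent floor is arbitrary, and a real script would page or mail someone rather than just echoing.

```shell
#!/bin/sh
# Warn when the most recent %idle sample drops below a floor.
THRESHOLD=30                      # arbitrary example floor
cat > sar_u.sample <<'EOF'
00:00:01  %usr %sys %wio %idle
08:20:01    60   15    5    20
EOF

idle=$(awk 'END { print $5 }' sar_u.sample)   # %idle of the last line
if [ "$idle" -lt "$THRESHOLD" ]; then
    echo "WARNING: CPU idle down to ${idle}% (threshold ${THRESHOLD}%)"
fi
```

Run from cron against the current day's sar data, this is the "alert when thresholds are reached" idea in its simplest form.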
Scaling horizontally and vertically
For large-scale applications, it's extremely important to be able to scale your systems both horizontally and vertically. Horizontal scaling allows you to add many boxes to serve the same application, while vertical scaling allows you to break the application into pieces so that each one can be scaled horizontally. A system designed to be both horizontally and vertically scalable allows you to add servers as demand increases. This way, you avoid the pitfalls of trying to scale one big box, and can benefit from having many small boxes.
Here are some examples of horizontal and vertical scaling:
- Horizontal Web servers. Multiple Web servers are set up serving identical content, using independent hardware on different networks. DNS round robin or load balancing can be used.
- Horizontal and vertical email solutions. Each component of the email server (mx, SMTP, POP, Web mail) can be run on its own independent server. Multiple individual servers can be set up to balance the load. In this way, you can have four mx servers, two SMTP servers, two POP servers, and one Web mail server, or whatever configuration you need to meet demand.
- Horizontal and vertical Web servers. Multiple Web servers can be set up -- some that serve graphics, and others that serve just CGI scripts. Servers can be added as demand increases.
Staying ahead of the curve
Using reporting tools such as sar makes it possible to identify trends on your system. Learning about the applications on your system and communicating with your organization can also help when planning future growth. Finally, designing a system that can scale both horizontally and vertically can help you stay one step ahead of the growth curve.

Resources

- The Unix Insider Topical Index, a comprehensive listing of all Unix Insider articles by subject: http://www.unixinsider.com/common/swol-siteindex.html
- Visit sunWHERE, launchpad to hundreds of online resources for Sun users: http://www.unixinsider.com/sunwhere.html
- Explore Unix Insider's back issues: http://www.unixinsider.com/common/swol-backissues.html
May 29, 2003 | ONLamp.com