The upcoming 2.6.20 Linux kernel brings a nice virtualization framework for all the virtualization fans out there. It's called KVM, short for Kernel-based Virtual Machine. Not only is it user-friendly, it also performs well and is very stable, even though it is not yet officially released. This article tries to explain how it all works, in theory and practice, together with some simple benchmarks.
A little bit of theory
There are several approaches to virtualization today. One of them is so-called paravirtualization, where the guest OS must be slightly modified in order to run virtualized. The other method is called "full virtualization," where the guest OS can run as it is, unmodified. It has been said that full virtualization trades performance for compatibility, because it is harder to achieve good performance without the guest OS assisting in the process of virtualization. On the other hand, recent processor developments tend to narrow that gap: the latest processors from both Intel (VT) and AMD (AMD-V) have hardware support for virtualization, tending to make paravirtualization unnecessary. This is exactly what KVM builds on: by adding virtualization capabilities to a standard Linux kernel, we can enjoy all the fine-tuning work that has gone (and is going) into the kernel, and bring that benefit into a virtualized environment.
Under KVM's model, every virtual machine is a regular Linux process scheduled by the standard Linux scheduler. A normal Linux process has two modes of execution: kernel and user. KVM adds a third mode: guest mode (which has its own kernel and user modes).
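To make that process model concrete, here is a minimal sketch of the KVM userspace interface. This is an illustration only: it assumes the kvm module is loaded and /dev/kvm exists, error handling is reduced to the bare minimum, and a real launcher (such as QEMU) would go on to map guest memory and enter guest mode via KVM_RUN.

    /* sketch.c -- probe /dev/kvm and create an empty VM with one VCPU */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);   /* device exposed by the kvm module */
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        /* the VM created here is an ordinary process to the scheduler */
        int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);
        if (vmfd < 0) { perror("KVM_CREATE_VM"); return 1; }

        int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0);   /* first guest CPU */
        if (vcpufd < 0) { perror("KVM_CREATE_VCPU"); return 1; }

        puts("VM and VCPU created; KVM_RUN would enter guest mode.");
        return 0;
    }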
November 22, 2006 (onlamp.com) Virtualization is a trendy topic in the server room now, especially as commodity computers begin to support features that mainframes have had for decades. Mainframes aren't standing still, however; IBM's POWER5 architecture supports powerful virtualization features on AIX... and Linux. Ken Milberg describes some of the benefits of the recent work on this platform. [Linux]
Microsoft Virtual Server 2005 consistently proves to be worth its weight in gold, with new implementations thought up every day. With this product now a free download from Microsoft, scores of new users are able to experience what the power of virtualization can do for their networks. The book is aimed at network administrators who are interested in ways that Virtual Server 2005 can be implemented in their organizations in order to save money and increase network productivity.
Linux Journal
Submitted by Armando Ortiz (not verified) on Mon, 2006-09-11 09:52.
I'm a Linux desktop fan. While I don't appreciate that Novell isn't making Xen user-friendly in a more expeditious way, I do love the desktop they polished for use. We're a mostly-Microsoft house - our database is written in Access with links to SQL Server, our documents use primarily Word and Excel - but we do have some Linux servers. I'm the only SuSE desktop user - but then again, I know what I'm doing. When I go home, I'll sometimes have to access the resources here at work through a VPN. However, I detest having my laptop or desktops at home join the domain, and it was somewhat difficult to access the resources available, especially in running our Access applications that want to see SQL Server. So what I did with VMware was create a Win2k 'machine' that has all of the applications I would run as if I were at the office and is part of the domain; as soon as I get into the network through the VPN, I simply full-screen my VMware Win2k machine and act like I'm AT the office. It's been a treat for me and I honestly don't mind that the source is closed. VMware makes everything worthwhile for me.
To underscore the usefulness of VMware, I also ran proofs-of-concept for some projects using FreeBSD, various Linux distros and a thin client which we're planning to use. I couldn't have done this without VMware and its freeness.
Submitted by Petem (not verified) on Fri, 2006-09-01 13:21.
i have been using the free version of vmware server as a test bed here.. and it works great.. however.. if you want speed.. you really have to go with their infrastructure product.. yes.. it does cost money.. but it is probably what you want to compare against XEN.. now.. XEN.. i have yet to get it working... AND.. from what i have read.. it is a bear to get windows to run in a xen vm if at all.. and i have a need for that... IF and when XEN becomes as easy to set up and will run the platforms i need i will definitely give it a try... im not too crazy about using a closed source product.. but it works.. and it works really well..
Troubled conscience at Xensource?
Submitted by Michael L (not verified) on Fri, 2006-09-01 02:11.
Xensource is actively contributing to the codebase of a GPL product and selling services built around it, which is the classic model of building a business model around open-source software. VMWare is giving away closed-source software in an attempt to sell their priced closed-source products. Whatever one can say about the relative merits of each virtualization product (eg - Xen's paravirtualized environment runs with an order of magnitude less overhead than VMWare), I don't think that one can fairly criticize Xensource about their open-source chops while praising a completely closed-source company. Sure, the free-as-in-beer release of VMWare server is cool....I've installed it on one of my machines. However, I have a RIGHT to use the Xen code that I run on several servers while VMWare could take their ball and go home any time that they wanted.
Submitted by Anonymous (not verified) on Fri, 2006-09-15 20:27.
The open source community has been spewing this kind of crap for over a decade now without any major examples of this occurrence taking place. It's the same old philosophical BS applied to a new situation with absolutely no supporting evidence.
When you guys figure out the truth in what he is saying, you might actually start to penetrate the enterprise in a meaningful way. The simple fact is that without a quality front end to make management of the product efficient to a wide audience of admins, the product will find no serious place in any enterprise.
Until development activities include accounting for that fact, proprietary software will continue to kick your asses with simple little tactics like this one regardless of the merits of the open source philosophies.
you might actually start to penetrate the enterprise
Submitted by Anonymous (not verified) on Thu, 2006-10-05 14:57.
Well, well. But the enterprise is not about a single service. The enterprise is about integration of essentials.
Text-based configuration currently rules the world of administered systems: with Perl it took about 200 lines of code to manage DNS, DHCP, QoS, LDAP, a wiki, user accounts on non-LDAP systems and several more things, plus about the same number of makefile lines spread across the systems, turning configuration testing, backup, deployment and verification into a single command.
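For illustration only, a hypothetical sketch of the kind of single-command pipeline the comment alludes to. The host name, file names and the choice of BIND tools are invented; the point is test, back up, deploy and verify in one invocation.

    #!/bin/sh
    # check-backup-deploy for a text-based DNS config
    set -e
    named-checkconf named.conf                                          # test before deploying
    scp ns1.example.com:/etc/named.conf backup/named.conf.$(date +%F)   # keep a dated backup
    scp named.conf ns1.example.com:/etc/named.conf                      # deploy
    ssh ns1.example.com rndc reload                                     # reload and verify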
Apples and Oranges
Submitted by Bryce Leo (not verified) on Thu, 2006-08-31 12:48.
You cannot compare Xen and VMware; they're not the same type of emulator. Yes, they both do the same job, but Xen is a hypervisor while VMware provides full virtualization. The Xen approach technically allows speeds much closer to what you would attain running directly on hardware. It's a very complex subject, and Xen and VMware are two very different products. If you'd care to email me a reminder to dig up the article (in Linux Format, I believe) about Xen and how it relates to VMware, I'd be glad to. I'm not a fanboy and I completely respect the need for the tried and tested VMware in an enterprise environment, but I do think that Xen has the edge on being the overall performance king when all is said and done.
Submitted by Anonymous (not verified) on Thu, 2006-08-31 10:28.
For Web-based UIs, check out PHPLDAPAdmin (http://phpldapadmin.sourceforge.net/) and Gosa (mentioned above). For non-Web GUIs, there's LAT (LDAP Administration Tool) at http://dev.mmgsecurity.com/projects/lat/ and Luma (http://luma.sourceforge.net/).
Submitted by AK (not verified) on Wed, 2006-08-30 17:51.
Redhat has been working on a Gnome frontend for Xen. Check it out at http://virt-manager.et.redhat.com/index.html
Submitted by Jed Reynolds (not verified) on Wed, 2006-08-30 15:23.
Given the proper libraries, it could also do Vmware and Virtual PC (hah!) management. I believe it's called Enomalism.
http://bitratchet.prweblogs.com/2006/07/23/interesting-find-enomalism/
Submitted by Anonymous (not verified) on Thu, 2006-08-31 10:23.
I agree, Webmin is a miserable choice for administering OpenLDAP. Instead, you should have a look at PHPLDAPAdmin: http://phpldapadmin.sourceforge.net/ Gosa (mentioned above) is another feasible alternative. If you want a GUI instead of a web app, have a look at LAT (LDAP Administration Tool) at http://dev.mmgsecurity.com/projects/lat/ and Luma (http://luma.sourceforge.net).
Submitted by Steve Scott (not verified) on Wed, 2006-08-30 13:13.
Yes, there are a couple for Webmin. There is also GOsa (http://gosa.gonicus.de/), which "is a GPL'ed PHP based administration tool for managing accounts and systems in LDAP databases".
Submitted by Tom Adelstein on Wed, 2006-08-30 13:02.
anonymous wrote:
"I would think that webmin would have plugins for administrating daemons like openldap and such. Try looking into that."
No thanks. Take a look at VMware's console. I was alluding to robust web frontends, not ancient ones.
Virtualization holds great promise, but current proprietary technologies add cost and complexity in the form of new management requirements and performance overhead. This session looks at how virtualization makes the data center more efficient and flexible while accelerating data center initiatives and improving total cost of ownership. We'll assess the virtualization landscape and discuss strengths and weaknesses of existing virtualization solutions as well as new solutions that leverage industry-standard and open source technologies. We'll also discuss the role of emerging technologies like hardware-assisted virtualization and the Xen open source hypervisor, as well as advanced software capabilities that enable virtualization of enterprise-class workloads. This session will clarify the new wave of virtualization-related technologies and explain what to consider when creating a virtualization landscape.
Vasilevsky has over 20 years of engineering, technology leadership, and management experience. At Virtual Iron, he has been instrumental in defining and creating the technology and architecture behind the company. Previously, he was Chief Technology Officer at Ucentric Systems (acquired by Motorola), a leading provider of home media software for media centers. An expert in parallel processing, grid runtime systems, and advanced optimizing compilers, Vasilevsky also held senior engineering and management roles in companies such as Ucentric Systems, Avid Technology, and Thinking Machines. He holds five US patents for his work in parallel processing, and is the winner of three IEEE Gordon Bell Awards.
ZDNet Asia Red Hat and Novell, the two top Linux sellers, have only just begun building Xen virtualization software into their products. But they're already planning to add a higher-level option.
Xen is a "hypervisor" that lets a single computer run several operating systems simultaneously, using an idea called "virtualization." This enables companies to use a single server more efficiently--something that could save them money. Now "containers," a higher-level virtualization approach that makes a single operating system look like many, is also getting traction.
Specifically, containers are likely to appear in the next major versions of Red Hat Enterprise Linux (RHEL) and Novell's Suse Linux Enterprise Server (SLES). The technology could even be added before those updates, company executives said.
Two projects are under way to bring containers to Linux: Vserver and OpenVZ, the latter backed by a company called SWsoft. Overall, their prospects look bright.
"I think the big advantage of a containers approach, compared to a hypervisor, is a lot less overhead. You get much higher performance," Gabriel Consulting Group analyst Dan Olds said.
Containers are increasingly popular. Sun Microsystems introduced its own container technology in 2005 with Solaris 10. And Microsoft is working on an adaptation of existing technology.
They are not suited to all tasks. Containers require all applications to use the same copy of the underlying operating system, for example. Xen and the established virtualization leader, EMC's VMware, don't have that requirement. Nevertheless, containers are desirable.
Next on the agenda
"It's something that we want to see happen," Red Hat's chief technology officer, Brian Stevens, said in an interview here during the LinuxWorld Conference & Expo. Red Hat hasn't decided whether to use OpenVZ or Vserver, he added.Xen is the priority for RHEL 5, due to arrive at the end of the year, but after that will come containers, Stevens said. "I'm looking at that as a RHEL 6 thing," he said.
Novell, which wants to maintain Suse's reputation as the first place to find advanced new features for Linux, is more eager and is considering adding OpenVZ in Service Pack 1 of SLES 10. "We are still evaluating if this is something we can take into SP1," said Holger Dyroff, vice president of Linux product management.
If containers don't arrive with SLES 10 Service Pack 1, Novell will urge SWsoft to work with Linux programmers so that the software can be easily added to SLES 11, Dyroff said.
Debian Linux, a noncommercial version of the open-source operating system, added OpenVZ to its "Sid" development version in August.
And some work being done for Xen will help pave the way for containers. Specifically, this will provide management tools that let customers start, stop and otherwise control virtual machines. The same technology can be used to control containers, Stevens said.
"It'll be a lot easier next time. We'll be able to just plug it in. There already will be tools to manage it," Stevens said.
But SWsoft, the company that is sponsoring the OpenVZ project and that sells a fuller-featured commercial version called Virtuozzo, sees things the other way around. Last week, the company announced that its container management tools will also be able to manage Xen virtual machines, said Chief Executive Serguei Beloussov.
(IDG News Service) -- Virtualization software provider XenSource Inc. will launch its first product, XenEnterprise, next week, competing head-to-head with industry leader VMware Inc. in the space, its CEO said yesterday.
"XenEnterprise is ready to go. We believe there is a lot of demand for this stuff,” Peter Levine, president and CEO of XenSource, said after an address at the LinuxWorld Conference & Expo in San Francisco.
Levine described it as a soft launch, with a more formal launch to follow in the fourth quarter of this year. XenSource has set up a two-tier sales channel and distributors around Europe and North America, he said.
XenEnterprise, an open-source product, serves as a hypervisor, a thin software layer that lets multiple operating systems share a single physical machine. Enterprises struggling to reduce costs and control an unwieldy IT infrastructure find that multiple servers are often underutilized. Virtualization, as it's called, allows multiple applications to run on one server but operate independently, allowing the enterprise to better utilize its servers.
Although virtualization has been the buzz among technology providers, only 6% of enterprises have actually deployed virtualization on their networks, said Levine, citing a TWP Research report. That makes the other 94% a wide-open market.
XenSource’s open-source product competes against proprietary virtualization systems from VMware, a unit of EMC Corp. Levine acknowledged VMware’s role in establishing virtualization.
“VMware has done a great job of educating the market. As a start-up, we don’t have to go out and say, 'Virtualization is important because of this or that.' VMware has done that,” Levine said.
XenEnterprise is supported in the new release of Novell Inc.’s SUSE Enterprise Linux distribution, while Microsoft Corp. pledged to support Xen-virtualized Linux with its forthcoming Longhorn server virtualization technology. IBM, meanwhile, announced that its low-end servers and middleware will support Xen via the new SUSE release.
Although the concept of virtualization has been around for years in mainframes, it is now catching on in client/server environments, Levine said, and is changing the industry.
“Virtualization is having an amazing global impact. It hasn’t solved hunger, but it is having a significant impact,” Levine said.
July 14, 2006 | InformationWeek
IBM said Friday it will support Novell's Suse 10 Linux and Xen virtualization on its BladeCenter and other x86 hardware. It will also allow management of Xen virtual machines under its Virtualization Engine, letting IBM customers use familiar IBM management software to provision and manage multiple Xen virtual machines. Xen can convert a low-cost Intel or AMD processor-based server into multiple virtual machines, each running a separate application. As freely available open-source code, Xen is expected to play a major role in server consolidation over the next few years. A consolidated server running six or seven applications will achieve far higher utilization rates than one running a single application.
IBM support is a plus for Novell, which is getting Linux out the door with Xen ahead of its competitor, Red Hat. Both have announced support for Xen on their future Linux distributions. Novell on its Web site says it's putting "the final touches" on its Suse 10 distribution.
Red Hat plans to offer a distribution including Xen 3.0 late this year. IBM says the company will support Xen running on Red Hat Linux when Red Hat gets its distribution out containing Xen 3.0.
Xen was originally developed at Cambridge University in England, and its originators formed XenSource, a commercial company, to provide technical support for its adoption.
As a more mature Xen version 3.0 approached release last year, the virtualization market leader, VMware, made a bid to compete with the open-source code by making VMware Server, a base-level, single-server virtualization product, available free. VMware, an independent business unit of EMC, reported revenues of $157 million in its second quarter of 2006, a growth rate of 73%. If revenues continue at that pace for four quarters, VMware will become a $630 million-a-year software company. EMC hasn't previously broken out revenue figures for VMware.
IBM, HP, and Sun Microsystems (SUNW) are lined up behind open-source Xen as a way of bidding for part of the burgeoning virtualization software revenues currently commanded by VMware.
February 14th, 2007 | ZDNet.com
Red Hat plans to squeeze VMware on pricing as it bundles virtualization technology with its operating system.
... Crenshaw argues that with current VMware software a customer buys software from VMware and then has to buy more operating system licenses for each instance virtualized. By combining the operating system (in this case Red Hat) with virtualization technology, those additional licenses aren't necessary.
...Red Hat will also aim to simplify pricing with virtualization. "We will have everything you need to virtualize your environment into one SKU. It will be much more economical than buying separate components," says Crenshaw.
Red Hat's target is pretty clear. For instance, VMware Infrastructure 3 comes in three editions (starter, standard and enterprise) with prices ranging from $1,000 per two processors to $5,750. Support and upgrade subscriptions boost those totals.
...KVM appears to be on the fast path. This project first surfaced in October 2006; it found its way into the 2.6.20 kernel a few months later. On 25 February, KVM 15 was announced. This release has an interesting new feature: live migration. The speed with which the KVM developers have been able to add relatively advanced features is impressive; equally impressive is just how simple the code that implements live migration is.
KVM starts with a big advantage over other virtualisation projects: it relies on support from the hardware, which is only available in recent processors. As a result, KVM will not work on the bulk of currently deployed systems. On the other hand, designing for future hardware is often a good idea -- the future tends to come quickly in the technology world.
By focusing on hardware-supported virtualisation, KVM is able to concentrate on developing interesting features to run on the systems that companies are buying now.
The migration code is built into the QEMU emulator; the relevant source file is less than 800 lines long.
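As a rough sketch of how that migration path is driven in practice (hedged: host names and the disk image are invented, the binary name varies by distribution - qemu-kvm, kvm or qemu-system-x86_64 - and the image must be visible to both hosts, e.g. on shared storage):

    # on the destination host: wait for the incoming guest state
    qemu-system-x86_64 -m 512 -hda /shared/guest.img -incoming tcp:0:4444

    # on the source host, in the running guest's QEMU monitor:
    (qemu) migrate tcp:destination-host:4444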
Re:Yawn
(Score:5, Informative) by giminy (94188) on Friday March 09, @02:16PM (#18292472)
We took a system that according to our monitoring sat at essentially 0-1% used (load average: 0.01, 0.02, 0.01) and put it on a virtual.
(http://www.readingfordummies.com/blog/ | Last Journal: Thursday November 21, @05:10PM)
Load average is a bad way of looking at machine utilization. Load average is the average number of processes on the run queue over the last 1, 5 and 15 minutes. Programs that are purely I/O-bound will sit on the sleep queue while the kernel does the I/O, giving you a load average of near zero even though your machine is busy scrambling for files on disk or waiting for network data. Likewise, a program that consists entirely of NOOPs will give you a load average of one (+1 for each additional instance) even if its nice value is all the way up and it is quite interruptible, really putting zero strain on your system.
Before deciding that a machine is virtualizable, don't just look at load average. Run a real monitoring utility and look at iowait times, etc.
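A quick sketch of what "look deeper than load average" means in practice, using the standard procps/sysstat tools (the sampling intervals are arbitrary):

    cat /proc/loadavg        # the three 1/5/15-minute run-queue averages
    vmstat 5 3               # the "wa" column is the CPU's I/O-wait percentage
    iostat -x 5 3            # per-device utilization, from the sysstat package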
- by hackstraw (262471) on Friday March 09, @03:48PM (#18293756)
(http://www.spamgourmet.com/)
Load average is a bad way of looking at machine utilization. Load average is the average number of processes on the run queue over the last 1, 5, 15 minutes.
This may be wrong, but I've always looked at load as the number of processes waiting for resources (usually disk, CPU, or network).
I've seen boxes with issues that have had a number of processes stuck in the nonkillable D (disk) wait state that were just stuck, but they had no real impact on the system besides artificially running the load up.
I've also seen where load was reported as N/NCPUs and N regardless of the number of CPUs.
Like all statistics, any single number in isolation is just a number. Even if the real meaning is the average number of processes in the run queue, that does not tell you much. Thinking of it as the number of processes waiting for some piece of hardware seems more accurate.
- by T-Ranger (10520) <jeffw@cheb u c t o . n s . ca> on Friday March 09, @02:19PM (#18292508)
(http://coherentnetworksolutions.com/) Well, disks may not be a great example. VMWare is of course a product of EMC, which makes (drumroll) high-end SAN hardware and software management tools. While I'm not quite saying that there is a clear conflict of interest here, the EMC big picture is clear: "now that you have saved a metric shitload of cash on server hardware, spend some of that on a shiny new SAN system". The nicer way to put it is that EMC SANs and VMware do the same thing: consolidation of hardware onto better hardware, abstraction of services provided, finer-grained allocation of services, shared overhead - and management.
If spikes on one VM are killing the whole physical host, then you are surely doing something wrong. Perhaps you do need that SAN with very fast disk access. Perhaps you need to schedule migration of VMs from one physical host to another when your report server pegs the hardware. Or, if it's an unscheduled spike, you need to have rules that trigger migration when one VM is degrading service to others.
- Re:Yawn
(Score:2, Interesting)
by dthable (163749) <dhable AT uwm DOT edu> on Friday March 09, @01:11PM (#18291506)
(Last Journal: Monday February 12, @01:59PM)
I could also see their use when upgrading or patching machines. Just take a copy of the virtual image and try to execute the upgrade (after testing, of course). If it all goes to hell, just flip the switch back. Then you can take hours trying to figure out what went wrong instead of being under the gun.
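The same "copy, try the upgrade, flip back" workflow, sketched here with QEMU/KVM disk snapshots rather than the commenter's (unnamed) product; file and snapshot names are invented:

    qemu-img snapshot -c pre-upgrade guest.qcow2   # point-in-time snapshot (VM shut down)
    # ...boot the VM and run the upgrade; if it all goes to hell:
    qemu-img snapshot -a pre-upgrade guest.qcow2   # roll back to the snapshot
    qemu-img snapshot -l guest.qcow2               # list snapshots kept in the image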
- Re:Yawn
(Score:4, Interesting)
by afidel (530433) on Friday March 09, @01:55PM (#18292162)
Well, our Oracle servers are DL585's with four dual-core CPUs, 32GB of RAM and dual HBA's backed by a 112-disk SAN, and they regularly max out both HBA's; trying to run that kind of load on a VM just doesn't make sense with the I/O latency and throughput degradation that I've seen with VMWare. I know I'm not the only one, as I have seen this advice from a number of top professionals that I know and respect. If you have a lightly loaded SQL server or some AD controllers handling a small number of users then they might be good candidates, but any server that is I/O bound and/or spends a significant percentage of the day busy is probably the lowest priority to try to virtualize. You can probably get 99+% of the benefit of virtualization from the other 80-90% of your servers that are likely good candidates.
- by ergo98 (9391) <[email protected]> on Friday March 09, @02:08PM (#18292350)
(http://www.yafla.com/dforbes/ | Last Journal: Tuesday September 27, @10:43AM)
I know I'm not the only one as I have seen this advice from a number of top professionals that I know and respect.
Indeed, it has become a bit of an unqualified, blanket meme: "Don't put database servers on virtual machines!" we hear. I heard it just yesterday from an outsourced hardware rep, for crying out loud (they were trying to display that they "get" virtualization).
Ultimately, however, it's one of those easy bits of "wisdom" that people parrot because it's cheap advice, and it buys some easy credibility.
Unqualified, however, the statement is complete and utter nonsense. It is absolutely meaningless (just because something can superficially get called a "database" says absolutely nothing about what usage it sees, its disk access patterns, CPU and network needs, what it is bound by, etc).
An accurate rule would be "a machine that saturates one of the resources of a given piece of hardware is not a good candidate to be virtualized on that same piece of hardware" (e.g. your aforementioned database server). That really isn't rocket science, and I think it's obvious to everyone. It also doesn't rely upon some meaningless simplification of application roles.
Note that all of the above is speaking more towards the industry generalization, and not towards you. Indeed, you clarified it more specifically later on.
- Re:Yawn
(Score:2)
by Courageous (228506) on Friday March 09, @03:23PM (#18293384)
Well. VMWare has issues with IO latency. One has to watch for that, not try to virtualize everything. But. You say "Virtualization bad" for "CPU intensive," and I cannot agree with that. SPECint2006 and SPECfp2006, as well as the rate variants, are within 5% of bare metal for ESX. I've run the tests myself. Old-school "CPU intensive" applications are a non-conversation in virtualization today.
It's the network IO and network latency that will kill you if you don't know what you're doing. VMWare has known issues in that area (although they must break through these entirely or 10GE will never work properly in VMWare). One can work around these issues; however, I'd say it's a Best Practice to simply plan to "not virtualize everything." I'd say target 65%-ish of your compute infrastructure in preplanning and base your real decisions on an actual analysis.
C//
- Re:He must be talking about freeware
(Score:5, Informative) by Semireg (712708) on Friday March 09, @12:59PM (#18291308)
I'm certified for both VMware ESX 2.5 and VMware VI3. VMware's best practices are to never use a single path, whether it be for NIC or FC HBA (storage). VMware also has Virtual Switches, which not only allow you to team NICs for load balancing and failover, but also to use port groups (VLANs). You can then view pretty throughput graphs for either physical NICs or virtual adapters. It's crazy amazing(TM).
As for "putting many workloads on a box and uptime," this writer should really take a look at VMware VI3 and Vmotion. Not only can you migrate a running VM without downtime, you can "enter maintenance mode" on a physical host, and using DRS (distributed resource scheduler) it will automatically migrate the VMs to hosts and achieve a load balance between CPU/Memory. It's crazy amazing(TM).
Lastly, just to toot a bit of the virtualization horn... VMware's HA will automatically restart your VMs on other physical hosts in your HA cluster. It's not unusual for a Win2k3 VM to boot in under 20 seconds (VMware's BIOS posts in about .5 seconds, compared to an IBM xSeries 3850, which takes 6 minutes). Oh, and there is the whole snapshotting feature, memory and disk, which allows for point-in-time recovery on any host. Yea... downsides indeed.
Virtualization is Sysadmin Utopia. -- cvl, a Virtualization Consultant
- Re:He must be talking about freeware
(Score:2) by div_2n (525075) on Friday March 09, @02:05PM (#18292314) I'm managing VI3 and we use it for almost everything. Ran into some trouble with one antiquated EDI application that just HAD to have a serial port. That is a long discussion, but for reasons I'm quite sure you could guess, I offloaded it to an independent box. We run our ERP software on it and the vendor has tried (unsuccessfully) several times to blame VMWare for issues.
You don't mention it, but consolidated backup just rocks. I have some external Linux based NAS machines that use rsync to keep local copies of both our nightly backups and occasional image backups at both sites.
Thanks to VMWare, it's like I've told management--"Our main facility could burn to the ground and I could have our infrastructure back up and running at our remote site before the remains stop smoldering much less get a check from the insurance company."
- He must. ESX set up properly avoids most pitfalls
(Score:5, Insightful)
by cbreaker (561297) on Friday March 09, @01:38PM (#18291910)
(Last Journal: Tuesday December 12, @07:54PM)
Indeed. If you have a proper ESX configuration - at least two hosts, SAN back-end, multiple NICs, supported hardware - you'll find that almost none of the points are valid.
Teaming, hot-migrations, resource management, and lots of other great tools make modern x86 virtualization really enterprise caliber.
I think that the people that see it as a toy are people that have never used virtualization in the context of a large environment, being used properly with proper hardware. You can virtualize almost any server if you plan properly for it.
In the end, by going virtual you end up actually removing so much complexity from your systems that you'll never know how you did it before. No longer does each server have its own drivers, quirks, OpenManage/hardware monitor, etc. etc. You can create a new VM from a template in 5 minutes, ready to go. You can clone a server in minutes. You can snapshot the disks (and RAM, in ESX3) and you can migrate them to new hardware without bringing them down. You can create scheduled copies of production servers for your test environment. So much simpler than all-hardware.
I'll admit that you shouldn't use virtual servers for everything (yet) but you will eventually be able to run everything virtual, so it's best to get used to it now.
- Virtualization
(Score:4, Interesting)
by DesertBlade (741219) on Friday March 09, @12:51PM (#18291170)
Good story, but I disagree in some areas.
Bandwidth concerns: You can have more than one NIC installed on the server and dedicate one to each virtual machine.
Downtime: If you need to do maintenance on the host, that may be a slight issue, but I hardly ever have to do anything to the host. Also, if the host is dying, you can shut down the virtual machine and copy it to another server (or move the drive) and bring it up fairly quickly. You also have cluster capability with virtualization.
- it is all roses for Disaster Recovery
(Score:2) by QuantumRiff (120817) on Friday March 09, @12:52PM (#18291196) If your servers become toast, due to whatever reason, you can get a simple workstation, put a ton of RAM in it, and load up your virtual systems. Of course they will be slower, but they will still be running. We don't need to carry expensive 4-hour service contracts, just next-business-day contracts, saving a ton of money. The nice thing for me with virtual servers is that they are device-agnostic, so if I have to recover, worst case, I have only one server to worry about for NIC drivers, RAID settings/drivers, etc. After that, it's just loading up the virtual server files.
- Re:it is all roses for Disaster Recovery
(Score:1) by bigredradio (631970) on Friday March 09, @01:08PM (#18291444)
(http://www.storix.com/ | Last Journal: Sunday August 20, @03:39PM) Sort of... I agree that you can limit hardware needs, but you also have a central point of failure. If the host OS or local storage goes, you have now lost multiple systems instead of one. One issue I have seen is with external SCSI support. At least with Xen, you cannot dynamically allocate a PCI SCSI card to each node. This may also hold true for fibre channel cards (not sure). That means no offsite tape backups for the individual nodes and no access to SAN storage through the virtual nodes.
- We're about 95% virtualized and never going back!
(Score:1, Interesting) by Anonymous Coward on Friday March 09, @12:56PM (#18291262) The only places it has not been appropriate are locations requiring high amounts of disk IO. It has been a godsend everywhere else. All of our web servers, application servers, support servers, management servers, blah blah blah. It's all virtual now. Approximately 175 servers are now virtual. The rest are huge SQL Server/Oracle systems.
License controls are fine. All the major players support flexible VM licensing. The only people that bark about change control are those who simply don't understand virtual infrastructure and a good sit-down solved that issue. "Compliance" has not been an issue for us at all. As far as politics are concerned -- if they can't keep up with the future, then they should get out of IT.
FYI: We run VMware ESX on HP hardware (DL585 servers) connected to an EMC Clariion SAN.
- Home Use
(Score:2, Insightful) by 7bit (1031746) on Friday March 09, @12:59PM (#18291316) I find Virtualization to be great for home use.
It's safer to browse the web through a VM that is set to not allow access to your main HDs or partitions. Great for any internet activity really, like P2P or running your own server; if it gets hacked, they still can't affect the rest of your system or data outside of the VM's domain. It's also much safer to try out new and untested software from within a VM, in case of virus or spyware infection, or just registry corruption or what have you. It can also be useful for code development within a protected environment.
Did I mention portability? Keep backups of your VM file and run it on any system you want after installing something like the free VMware Server:
http://www.vmware.com/products/server/ [vmware.com]
- Hype Common Sense
(Score:3, Interesting)
by micromuncher (171881) on Friday March 09, @01:14PM (#18291556)
The article mentions a point of common sense that I fought tooth 'n nail about and lost in the Big Company I'm at now.
For a year I fought against virtualizing our sandbox servers because of resource contention issues. One machine pretending to be many, with one NIC and one router. We had a web app that pounded a database... pre-virtualization it was zippy. Post-virtualization it was unusable. I explained that even though you can tune virtualized servers, it happens after the fact, and it becomes a big active-management problem to make sure your IT department doesn't load up tons of virtual servers to the point that it affects everyone virtualized. They argued, well, you don't have a lot of use (a few users, and not a lot of resource utilization).
My boss eventually gave in. The client went from zippy workability in an app being developed to a slow piece of crap because of resource contention, and it's hard to explain that an IT change forced under the hood was the reason for SLOW. And in UAT, SLOW = BUSTED.
That was a huge nail in the coffin for the project. When the user can't use the app on demand, for whatever reason, they don't want to hear jack about tuning or saving rack space.
So all you IT managers and people thinking you'll get big bonuses by virtualizing everything... consider this... ONE MACHINE, ONE NETWORK CARD, pretending to be many...
- Author is completely uninformed
(Score:5, Insightful)
by LodCrappo (705968) on Friday March 09, @01:28PM (#18291790)
(http://www.spogbiper.com/)
Increased uptime requirements arise when enterprises stack multiple workloads onto a single server, making it even more essential to keep the server running. "The entire environment becomes as critical as the most critical application running on it," Mann explains. "It is also more difficult to schedule downtime for maintenance, because you need to find a window that's acceptable for all workloads, so uptime requirements become much higher."
No, no, no. First of all, in a real enterprise type solution (something this author seems unfamiliar with) the entire environment is redundant. "the" server? You don't run anything on "the" server, you run it on a server and you just move the virtual machine(s) to another server as needed when there is a problem or maintenance is needed. It is actually very easy to deal with hardware failures.. you don't ever have to schedule downtime, you just move the VMs, fix the broken node, and move on. For software maintenance you just snapshot the image, do your updates, and if they don't work out, you're back online in no time.
In a physical server environment, each application runs on a separate box with a dedicated network interface card (NIC), Mann explains. But in a virtual environment, multiple workloads share a single NIC, and possibly one router or switch as well.
Uh... well, maybe you would just install more NICs? It seems the "expert" quoted in this article has played around with some workstation-level product and has no idea how enterprise-level solutions actually work.
The only valid point I find in this whole article is the mention of additional training and support costs. These can be significant, but the flexibility and reliability of the virtualized environment is very often well worth the cost.
- Re:Should I jump?
(Score:2) by 15Bit (940730) on Friday March 09, @03:17PM (#18293318) It will depend what you do with your 3 shuttles.
I just ditched my dual opteron (linux) + shuttle (windows) setup and replaced it with a single Core Duo box with linux virtualized under WinXP. I'm running the VMware free server software (http://www.vmware.com/products/free_virtualization.html [vmware.com]) and i have to say i'm impressed.
The only negatives i've found so far (aside from the obvious ones related to two systems in one computer) are some slowdown in mouse responsiveness in the virtualized linux and the lack of hardware accelerated graphics (these might be the same thing, i don't know). You also have to turn off access of the virtualized OS to the DVDROM or everything gets confused.
The positives are that it was piss easy to set up and really "just works". The VMware'd linux talked to my network card without intervention and happily picked up a unique IP from my DHCPD. NIS/NFS to my fileserver "just worked". I can allocate the VMware OS 1 or 2 cores and vary the amount of RAM it sees. My main use for the linux V/OS is molecular dynamics simulations, the software running message passing via LAM/MPI and all compiled under Intel C and Fortran Compilers. Again, all of that "just worked".
In terms of performance, MD calcs done on 1 cpu seem to be at close to full speed for one core, but running them dual gives only an 80% scaling improvement. That slowdown is about as expected, given that there's another OS running. Another nice side benefit is that i can run an MD calc on 1 cpu and play games with the other. I don't notice any lag.
So to summarise - if i'd paid money for VMware i'd be seriously impressed, but for something to do exactly what i want for free is truly amazing.
- A nice buffer zone!
(Score:1) by Gazzonyx (982402) on Friday March 09, @03:19PM (#18293356)
I've found that virtualization is a nice buffer zone from management decisions! Case in point: yesterday my boss (he's got a degree in comp. sci - 20 years ago...), who's just getting somewhat used to the linux server that I set up, decided that we should 'put /var in the /data directory tree'; I had folded once when he wanted to put /home in /data, for backup reasons, and made it a symlink from / (see the sketch after this comment).
Now when he gets these ideas, before just going and doing it on the production server, I can say "How about I make a VM and we'll see how that goes over", thinking under my breath the words of Keith Moon: "That'll go over like a lead zeppelin". It gives me a technology to leverage where I can show that an idea is a Bad Idea, without having to trash the production server to prove my point.
I've even set up a virtual network (1 Samba PDC and 3 Windows machines) to simulate our network on a small scale for proofs of concept. If they don't believe that something will work, I can show them without having their blessing to mess with our network. If it doesn't work, I roll back to my snapshots and I have a virgin virtual network again.
Does anyone do this? Has it worked out where you can do a proof of concept that otherwise, without virtualization, would be confined to whiteboard concepts that no one would listen to?
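For reference, the /home-into-/data change the comment describes is just the following two commands (run with no users logged in):

    mv /home /data/home
    ln -s /data/home /home    # /home keeps working as a path via the symlink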
- Re:the sad thing is how much we need virtualizatio
(Score:2, Interesting)
by dthable (163749) <dhable AT uwm DOT edu> on Friday March 09, @01:17PM (#18291608)
(Last Journal: Monday February 12, @01:59PM)
And if the software doesn't require a dedicated machine, the IT department wants one. The company I used to work for would buy a new machine for every application component because they didn't want Notes and a homegrown ASP application to conflict with each other. Seemed like a waste of hardware in my opinion.
- This is FUD
(Score:2)
by fyngyrz (762201) * on Friday March 09, @05:38PM (#18295052)
(http://www.blackbeltsystems.com/ | Last Journal: Saturday January 27, @06:16PM)
...virtualization's problems can include cost accounting (measurement, allocation, license compliance); human issues (politics, skills, training); vendor support (lack of license flexibility); management complexity; security (new threats and penetrations, lack of controls); and image and license proliferation.
Examine that quote from the article closely. See anything there that indicates virtualization "doesn't work"? No, nor do I. What they are talking about here has nothing to do with how well virtualization works; what they're complaining about is that a particular tool requires competence to use well in various work environments. Well, no one ever said that virtualization would gift brains to some middle-level manager, or teach anyone how to use an office suite, or imbue morals and ethics into those who would steal; virtualization lets you run an operating system in a sandbox, sometimes under another operating system entirely. And it does that perfectly well, or in other words, it works very well indeed. I call FUD.
Sun will release working Xen support code in July. This code will give OpenSolaris the ability to run on Xen as a "Domain 0" (Dom0), or host, system, with support for 32-bit and 64-bit guest (DomU) Solaris systems.
OpenSolaris will get full Xen support by October, which will be extended to Solaris 10 in the first half of 2007, Sun said.
Under Xen, a virtualized machine is called a "domain," and operating systems must be modified at the kernel level to run under Xen - an approach called paravirtualization that is designed to allow maximum performance. The Dom0 system is itself paravirtualized, but has direct access to hardware, unlike DomU systems.
So far, Linux operating systems such as SUSE Linux Professional 9.3, the upcoming SUSE Linux Enterprise 10 and Red Hat's Fedora Core 3 and 4 have been modified for Xen support. Operating systems such as Windows can run as unmodified guests using the virtualization technology found in newer Intel chips and upcoming AMD chips.
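For the paravirtualized case, a hypothetical minimal domU configuration and launch from Dom0, using Xen 3.0-era xm syntax (the kernel path, LVM volume and names are invented):

    # /etc/xen/domu-test
    kernel = "/boot/vmlinuz-2.6-xen"                 # Xen-modified guest kernel
    memory = 256
    name   = "domu-test"
    disk   = [ 'phy:/dev/vg0/domu-test,xvda,w' ]
    vif    = [ '' ]                                  # one default network interface
    root   = "/dev/xvda ro"

    # from Dom0:
    xm create -c domu-test    # start the domain and attach to its console
    xm list                   # show Dom0 and the running DomU domains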
Virtualization is expected to revolutionize the use of operating systems, applications and even malware once it goes mainstream. Xen, developed at the University of Cambridge, is an open-source competitor to virtualization providers such as VMware. Sun also provides its own container technology, but said it plans to provide users with the ability to mix and match.
Sun initially got Solaris working with Xen in a rudimentary form in July 2005. In February 2006 Sun released the first, early OpenSolaris-on-Xen code.
"Running on Xen, OpenSolaris is reasonably stable, but it's still very much 'pre-alpha' compared with our usual finished code quality," wrote Sun engineer Tim Marsland in his blog at the time. "Installing and configuring a client is do-able, but not for the faint of heart."
Now, a newer variety of virtualization is emerging that employs a lighter-weight approach so that a single operating system can be sliced into independent sections.
While details of the concept are just beginning to emerge, it's likely only a matter of time before it shows up in Windows and Linux. "It's something any operating system vendor has to have," said Serguei Beloussov, chief executive of software maker SWsoft, whose products enable the lightweight approach.
The overall goals of the two approaches are the same: Make a single computer more efficient, divide work among separate non-interfering partitions, and eventually move to a fluid world where software tasks move among computers in response to shifting computing priorities.
The new approach, virtualizing above the operating system, requires less computer memory, permitting dozens of partitions on the same machine in some Linux cases, but sacrifices some flexibility and partition independence.
While servers are likely to be the first place the technology is used, it holds promises for PCs, too, where users could easily create partitions for trying new software, dividing work and home tasks, or isolating potentially risky applications such as Web browsers.
The idea is used in Solaris 10, which Sun Microsystems released in early 2005 with a feature called Solaris Containers. Now it's spreading to other operating systems.
Mike Neil, product unit manager for Microsoft's virtualization technologies, confirmed that his company is working on the lightweight virtualization approach variously known as containers, virtual private servers or virtual environments.
"You'll see that as an evolutionary step," he said in an interview at the LinuxWorld Conference and Expo here last week, though he declined to say when it might become available as a product.
Microsoft is following in the footsteps of SWsoft, a much smaller company whose Virtuozzo product is available for Windows and Linux. And Beloussov says programmers are moving swiftly to build container technology into Linux through a project called OpenVZ, the foundation of Virtuozzo.
Beloussov believes the kernel at the heart of the open-source operating system will soon--likely this year--get some important portions of container technology. It will be "something you can actually use," he said, adding that the company is getting help from Linux sellers Red Hat and Novell.
Increasing the efficiency of computer utilization is the main draw for the technique, Gabriel Consulting Group analyst Dan Olds said. "Tens or even hundreds of low-demand user workspaces can be layered on a few systems," he said. But there's a significant concern in moving critical tasks to containers. "A single operating system kernel is a potential vulnerability. If it goes down, everyone goes down. I think the VMware approach is the better solution for x86-based systems right now," he said.
But SWsoft is making progress. OpenVZ project manager Kirill Korotaev proposed adding some container foundations to the kernel in late March, and received a favorable reply from others including Herbert Poetzl, lead programmer of an OpenVZ alternative called VServer. Korotaev then submitted patches.
But there's work to be done convincing the Linux kernel's top brass, including Andrew Morton, a key deputy to Linux founder and leader Linus Torvalds.
"It's enabling infrastructure which will permit further feature work in the future," Morton said in an interview about the OpenVZ work. "I'd need to get a clearer idea of where it's all headed before supporting the addition of such a thing."
Pricing complications
But like other virtualization technologies, containers introduce yet another complication into traditional software pricing. Standard pricing models assume a single operating system running on a computer with a fixed number of processors. Containers not only present the appearance of many different operating systems, they raise the possibility of constantly changing numbers.
Consolidating Legacy Applications onto Sun x64 Servers (pdf), a Sun BluePrints article, offers a consolidation technique for moving Microsoft Windows NT Applications onto Sun x64 servers using VMware ESX Server. An example shows how to consolidate an Apache web server running on the Windows NT Server operating system onto ESX Server running on a Sun Fire V40z server, with no changes to the application or its configuration.
"In a virtualized environment, it becomes much cleaner around how we wrap and package things, and redeployment and reuse becomes phenomenal."The traditional use of virtualization has been server and storage consolidation helping enterprises deal with underutilized servers.
"Enterprises today have low server utilization, and we're trying to help customers to a point where they can utilize near 100 percent capacity."
"Historically, virtualization has been driven by enterprise need at Dell. We believe that's about to change."
Kettler demoed a number of possibilities of running virtualized, purpose-built applications, such as a secure browsing experience or a dedicated gaming stack.
"It allows you to separate and isolate different aspects of what you might be doing on your machine," Kettler said. "A single machine with unique personalities that you can plug in and out."
The opportunity for Linux is for Linux developers to develop unique personalities to plug into the virtualized environments. The use of virtual machines was also noted as a way to deal with legacy operating systems issues. So instead of being forced to migrate, users will now have a choice.
"With virtualization, the opportunity is to drive Linux adoption even deeper on the client," Kettler said.
"There are still a few things that need to happen to make virtualization pervasive. Users need to embrace virtualization, and developers need to understand the opportunity. They need to support standards, and vendors need to revisit licensing concerns.
"We believe virtualization is key, Linux is key and both together can play a strong role both within the enterprise and the client."
CNET News.com Microsoft will support customers who choose to run Linux with Microsoft's Virtual Server 2005 R2, software for running multiple operating systems on one machine. In addition, the company on Monday said that it has now made Virtual Server 2005 R2 - for which the company had charged $99 for up to four physical processors or $199 for an unlimited number of processors - a free download. The announcements were made in conjunction with the LinuxWorld conference in Boston this week.
Virtualization, an emerging technology which is garnering growing interest from corporate customers, allows a server to run multiple instances of an operating system. This makes it easier for corporations to consolidate many applications on a single hardware server and provides a level of reliability.
Microsoft said that it has developed software to simplify the installation of Linux distributions from Red Hat and Novell SuSE to run on Virtual Server 2005 R2 on Windows. In addition, Microsoft will provide technical support to customers running Windows and Linux side by side.
"We’ve made a long-term commitment to make sure that non-Windows operating systems can be run in a supported manner, both on top of Virtual Server and our future virtualization products," said Zane Adam, director of Windows Server product marketing, in a statement.
Microsoft has said that the server edition of Windows Vista will have virtualization built into it. Specifically, it said it is developing so-called hypervisor software, code-named Viridian, to host multiple operating systems on one machine.
Microsoft faces competition in the market from EMC subsidiary VMware and increasingly the Xen project that's being built into forthcoming versions of Suse Linux Enterprise Server and Red Hat Enterprise Linux.
VMware believes that the benefits of server virtualization should be universally available. Period. VMware has introduced free VMware Server Beta for immediate download.
VMware Server is a robust yet easy to use product for users new to server virtualization technology. VMware Server enables companies to partition a physical server into multiple virtual machines, and to start experiencing the benefits of virtualization. With VMware Server, companies can provision a new server in minutes without investing in new hardware, run multiple different operating systems and applications on the same physical host server, move virtual machines from one physical host to another without re-configuration, and much more!
VMware Server can be used to streamline software development and testing, evaluate software in ready-to-run virtual machines, re-host legacy applications or simplify server provisioning. In addition, users can leverage a wide variety of plug-and-play virtual appliances for commonly used infrastructure.
VMware Server offers more than GSX Server
In addition to the VMware GSX Server capabilities, the generally available release of VMware Server plans to offer the following unique features:
- Virtual SMP
- Experimental support for Intel® Virtualization Technology
- Support for 64-bit guest operating systems
Learn more about VMware Server.
Support and Subscription Options
VMware is fully committed to GSX Server customers' continued success. Notwithstanding VMware's Support and Subscription agreement terms, GSX Server will be fully supported by VMware for two years after VMware Server becomes generally available. GSX Server customers will be able to renew existing support contracts during that period.
Upgrade Options
The free VMware Server represents the upgrade path for all GSX Server customers. Once VMware Server is generally available, which is currently planned for Q2 2006, it will replace GSX Server as VMware's hosted server virtualization offering. At that time, VMware will also start offering Support and Subscription services for VMware Server for purchase.
VMware also offers very favorable terms for upgrading from GSX Server to VMware Virtual Infrastructure products—VMware ESX Server and VirtualCenter. To learn more about the terms of purchasing Support and Subscription for VMware Server or upgrading to ESX Server and VirtualCenter, please read VMware Server Order Information.
To learn more, please read the GSX Server FAQ.
Start experiencing the benefits of server virtualization
- Download VMware Server.
- Download pre-built, ready-to-run virtual appliances from industry-leading ISV partners, open source partners and the VMware community.
The upcoming Fedora Core 5 (FC 5) community release and the end-of-year RHEL 5 are expected to integrate virtualization more seamlessly, and in a more enterprise-user-friendly way, than ever before.
Brian Stevens, Red Hat CTO, explained in a conference call that FC 4 was the "anti-integrated thing." Stevens added that with FC 4, Red Hat built a para-virtualized kernel and a FAQ on how a user could actually install it.
"It was a pretty arduous process to actually get to a running virtualized environment," Stevens said. "We didn't worry about any of the other tools; as well we didn't worry about an applet for monitoring what was going on in the system, for doing VM control and all the finish and polish that you'd expect. It was more of a developer focus than a user focus."
In FC 5, it is expected to be considerably easier for users to deploy and, to some degree, to manage virtualization.
Stevens explained that FC 5 is going to take the rocket science away from the end user of virtualization and provide an "out of the box" virtualization experience.
The expectation is that users will try out Xen virtualization and figure out how to deploy it and how to improve it, the results of which will find their way into the more stable and enterprise-ready RHEL 5 release at the end of the year.
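To give a feel for the "arduous" manual route Stevens describes, a hand-built Xen guest of that era typically required writing a small domain configuration file under /etc/xen and starting it with the xm tool. The sketch below is illustrative only; the kernel path, image location and guest name are invented for the example, and details differ between Xen and distribution versions.

    # /etc/xen/fc5guest -- hypothetical paravirtualized guest definition
    kernel = "/boot/vmlinuz-2.6-xenU"     # guest (domU) kernel; path is an example
    memory = 256                          # MB of RAM for the guest
    name   = "fc5guest"
    disk   = ['file:/var/lib/xen/images/fc5guest.img,xvda,w']
    vif    = ['']                         # one default network interface
    root   = "/dev/xvda1 ro"

The guest would then be started and inspected from domain 0:

    xm create -c fc5guest    # boot the guest and attach to its console
    xm list                  # list running domains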
Red Hat plans on providing Virtualization Migration and Assessment Services for RHEL 5 customers in addition to including Xen as part of the release.
Red Hat's use of Xen isn't only a good thing for Red Hat; it's also being hailed as a good thing for Xen itself.
"The community and XenSource can do an awful lot of testing around Xen itself, but the hypervisor is only one piece of an integrated virtualization stack," Frank Artale, XenSource vice president of business development, said on the call.
"In general only an operating system vendor can test the entire stack from top to bottom -- can certify that all parts and pieces work.
"For us having the Xen open source hypervisor be part of the RHEL 5 release train is critical to the continued building up of quality of the Xen open source hypervisor."
Last November, Red Hat rolled out its 2006 roadmap, which addressed the inclusion of the Xen open source virtualization hypervisor in its upcoming RHEL 5 flagship Linux release.
Backed by XenSource, Xen has IBM's support. And Xen 3.0, which was released in December, supports hardware virtualization technology, including Intel's VT-x virtualization technology and AMD Pacifica.
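A quick, distribution-neutral way to check whether a given Linux machine exposes the hardware support Xen 3.0 can use is to look for the corresponding CPU flags in /proc/cpuinfo: "vmx" indicates Intel VT-x and "svm" indicates AMD Pacifica (AMD-V).

    # Prints the flags lines if the processor advertises hardware virtualization.
    grep -E 'vmx|svm' /proc/cpuinfo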
Xen is no stranger to Linux distributions. It found its way into Novell SUSE Linux version 9.3 and Red Hat Fedora Core 4, both released in 2005.
Consolidating several small machines into one powerful one has advantages in administration and resource usage. It also has implications for security and encapsulation. FreeBSD's jails feature allows you to host multiple separate services on a single machine while keeping them securely separate. Dan Langille shows how.
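For readers who want a taste of the mechanics, the classic recipe from the jail(8) manual page populates a directory tree from /usr/src and then starts the jail with its own hostname and IP alias. The paths, interface name and address below are placeholders, and the exact make targets differ between FreeBSD releases:

    # Build and install a jail tree (paths and addresses are examples).
    D=/usr/jail/www
    mkdir -p $D
    cd /usr/src
    make buildworld
    make installworld DESTDIR=$D
    make distribution DESTDIR=$D

    # Give the jail its own IP alias, then start it.
    ifconfig em0 alias 192.0.2.10 netmask 255.255.255.255
    jail $D www.example.org 192.0.2.10 /bin/sh /etc/rc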
Microsoft's CEO, Steve Ballmer, hosted the Microsoft Management Summit in Las Vegas right after we went to press with the prior issue of The Linux Beacon, and at that event, he said Microsoft would finally deliver full support for non-Windows operating systems within its Virtual Server 2005 virtual machine partitioning middleware for the Windows platform.
Virtual Server supported Linux when it was created by Connectix several years ago, but when Microsoft bought the company, it initially tried to position the product as a Windows server consolidation tool and did not offer installation or technical support for Linux even though the software clearly did support Linux.
With Virtual Server 2005 Service Pack 1, which has just been put into beta and will be delivered later this year, Microsoft is conceding that it cannot just support Windows with this product. "We've added support for non-Windows virtual machines being hosted on top of our Virtual Server product, including support for Linux," explained Ballmer. "We know folks are going to want to run Windows systems and Linux systems and other systems together on top of our Virtual Server and Windows. You'll see support for that later in the year." Later in his presentation, when Microsoft demonstrated Red Hat Enterprise Linux Server 3 running on Virtual Server, Ballmer got a laugh. "As much as that hurts my eyes, I know that's an important capability for the virtual server technology for our customers."
Part of what Microsoft needs to do to improve Linux support is tweak its Microsoft Operations Manager (MOM) system management tools so that they can gather information from the Linux instances and make them more manageable from this Windows tool.