Version 2.0 with updates of May 2, 2015
The security department in a large corporation is often staffed with people who would be useless in any department, and they become really harmful in this role that is new to them. One typical case is the enthusiastic "know-nothing" who, after the move to the security department, almost instantly turns into a script kiddie, ready to test the latest and greatest exploits downloaded from the Internet on internal corporate infrastructure (aka production servers), seeing such activity as a sacred duty inherent in the newly minted "security specialist" role.
So let's discuss how to protect ourselves from those "institutionalized" hackers, who are often more clueless and no less ambitious than a typical teenage script kiddie (Wikipedia):
In a Carnegie Mellon report prepared for the U.S. Department of Defense in 2005, script kiddies are defined as "The more immature but unfortunately often just as dangerous exploiter of security lapses on the Internet. The typical script kiddy uses existing and frequently well known and easy-to-find techniques and programs or scripts to search for and exploit weaknesses in other computers on the Internet - often randomly and with little regard or perhaps even understanding of the potentially harmful consequences." [5]
In this case you need to think about some defenses. Enabling a firewall on an internal server is probably overkill and might get you into hot water quicker than you can realize the consequences of such a conversion. But enabling TCP wrappers can help cut off oxygen for overzealous "know-nothings", who are typically able to operate only from a dozen or so IP addresses (those addresses are visible in logs; your friends in the networking department can also help ;-).
The simplest way is to include in the deny file those services that caused problems in previous scan attempts. For example, the ftpd daemon sometimes crashes when subjected to the sequence of packets that constitutes an exploit; the HP-UX version of wu-ftpd is probably one of the weakest in this respect. Listing in /etc/hosts.deny the selected hosts from which the exploits are run can help. One thing that is necessary is to ensure that the particular daemon is compiled with TCP wrappers support (which is the case for vsftpd and wu-ftpd), or is run via xinetd on Linux and Solaris and via tcpd (TCP wrappers) on other Unixes.
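As a minimal sketch (the subnets below are hypothetical, and the daemon names must match the way the daemons are actually invoked on your system), the relevant TCP wrappers entries might look like this:

    # /etc/hosts.allow -- checked first: allow ssh only from a hypothetical admin subnet and localhost
    sshd : 10.20.1.0/255.255.255.0 127.0.0.1

    # /etc/hosts.deny -- checked second: refuse ssh from everywhere else, and refuse ftp
    # from the subnet the "scans" came from (again, a made-up address range)
    sshd   : ALL
    vsftpd : 10.10.5.0/255.255.255.0

Remember that hosts.allow is consulted before hosts.deny and the first matching rule wins; anything matched by neither file is allowed.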
Including SSH and telnet in the wrappers restrictions might also be helpful, as it blocks the possibility of exploiting some unpatched servers. The latest fashion in large corporations is to make a big show out of any remotely exploitable bug, with spreadsheets going up to high management. This creates a substantial and completely idiotic load on rank-and-file system administrators, as the typical corporate architecture has holes an elephant could walk through, and any particular exploit changes nothing in this picture. In this case, restricting ssh access to a handful of useful servers or local subnets can save you some time, not only with the currently "hot" exploit but with future ones too.
In this case, even after inserting a new entry into /etc/passwd or something similar, it is impossible to log in to the server from outside a small subset of IP addresses. This is a good security practice in any case.
See entries collected in the "Old News" section for additional hints.
Another useful measure is creating a baseline of /etc and several other vital directories (/root and the cron files). A simple script comparing the current state with the baseline is useful not only against those jerks but can also provide important information about the actions of your co-workers, who sometimes make changes and forget to notify the other people who administer the server. And that means you.
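A minimal sketch of such a check is below; the baseline location, the watched directories, and the mail address are illustrative assumptions, and md5sum can be replaced by cksum or openssl md5 on Unixes that lack it. The baseline itself is assumed to have been created earlier with the same find/checksum pipeline.

    #!/bin/sh
    # baseline-check.sh -- compare vital config trees against a stored checksum baseline
    BASELINE=/var/local/etc-baseline.md5      # hypothetical location of the stored baseline
    CURRENT=/tmp/etc-baseline.$$
    DIFF=/tmp/etc-baseline.diff.$$

    find /etc /root /var/spool/cron -type f 2>/dev/null | sort | xargs md5sum > "$CURRENT"

    # Any added, removed, or modified file shows up in the diff
    if ! diff "$BASELINE" "$CURRENT" > "$DIFF"; then
        mail -s "Baseline deviation on `hostname`" admin@example.com < "$DIFF"
    fi

    rm -f "$CURRENT" "$DIFF"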
And last but not least, new, unique entries in the output of the last command should be mailed to you within an hour. The same goes for new, unique entries in /var/log/messages. A server in a corporate environment mostly performs dull, repetitive tasks, and one week's worth of logs serves as an excellent baseline of what to expect from it. Any lines that are substantially different should generate a report, which in the simplest case can be mailed, or collected via scp on a special server with a web server to view them. Or they can be converted into a pseudo-mailbox with the ability to view those reports via a simple webmail interface.
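A crude hourly cron job along these lines (the state file and mail address are again just placeholders) might look like:

    #!/bin/sh
    # new-logins.sh -- mail login records that were absent from the previous run of `last`
    STATE=/var/local/last.seen
    NOW=/tmp/last.now.$$

    last | sort > "$NOW"
    [ -f "$STATE" ] || : > "$STATE"

    # comm -13 prints lines that appear only in the new snapshot
    NEW=`comm -13 "$STATE" "$NOW"`
    if [ -n "$NEW" ]; then
        echo "$NEW" | mail -s "New login records on `hostname`" admin@example.com
    fi
    mv "$NOW" "$STATE"

Run it from cron, for example "0 * * * * /usr/local/bin/new-logins.sh"; the same pattern works for new lines in /var/log/messages.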
Nowadays each Unix/Linux server has its own built-in firewall, which with careful configuration can protect the server from many "unpatched" remote exploits. But the security department, like any bureaucratic organization, has its own dynamics, and after some spectacular remote exploit or a revelation about the activities of three-letter agencies, some wise head in this department comes up with the initiative of installing additional firewalls.
To object to such an initiative means to take responsibility for a possible breach, so usually the corporate brass approves it. At this point the real fun starts, as typically the originator of the idea has no clue about the vulnerable aspects of the corporate architecture and is simply guided by the principle "the more the better", improvising rules in ad hoc fashion. The key problem with such an approach is that as soon as the complexity of the rulesets rises above the IQ of their creators, they become a source of denial of service attacks on the corporate infrastructure. In other words, the firewall becomes a liability rather than a protection.
Moreover, the people who put forward such initiatives typically never ask themselves whether there are alternative ways to log in to the server. On a modern server there is such a way (actually an extremely vulnerable one): using ILO/DRAC or another built-in specialized management computer. Without putting those connections on a specially protected network segment and updating the firmware regularly (I think exploits for it are a priority for three-letter agencies), this path represents essentially an open backdoor to the server OS that bypasses all these "multiplexer"-controlled logins.
This tendency toward arbitrary, caprice-based compartmentalization using badly configured firewalls has other aspects that we will discuss in a separate paper. One of them is that it typically raises the complexity of the infrastructure far above the IQ of the staff, and the set of rules soon becomes unmanageable. Nobody fully understands it, and as such it represents a weak point in the security infrastructure, not a strong point. Unless periodically pruned, such a ruleset often makes even trivial activities such as downloading patches extremely cumbersome and subject to periodic "security-inspired" denial of service attacks.
Actually, all this additional firewall infrastructure soon magically turns into a giant denial of service attack on users and system administrators, an attack on a scale that hackers can only envy, with hidden losses in productivity far exceeding the cost of typical script kiddie games ;-).
Another typical "fashion item" is install special server (we will call in login multiplexor) to block direct user logins and files transfer to the servers in some datacenter or lab. Fro now on everybody should use "multiplexer" for initial login. Only from it you can connect and retrieve data from the any individual server with "multiplexed" territory.
The problem with this approach is that if two-factor authentication is used, there is already a central server (the server that controls the tokens) that collects all the data about user logins. Why do we need another one? If the security folks don't have the IQ to use the data already available, what exactly does a login multiplexer change in this situation?
And if the multiplexer does not use two-factor authentication with some type of token, it is a perfect point for harvesting all corporate passwords. So it is in itself a huge vulnerability, one that should be avoided at all costs.
That suggests that while this idea has its merits, a correct implementation is not trivial and, as with everything in corporate security, requires a deep understanding of the architecture, an understanding that is by definition lacking in security departments.
This move is especially tricky (and critically affects the productivity of researchers) in labs. And the first question here is: "What are we protecting?" One cynical observation is that, due to waves of downsizing, staff might have so little loyalty that if somebody wants the particular information we are trying to protect, he or she can buy it really cheaply, without the trouble of breaking into particular lab computers. Moreover, breaking into the cloud-based email accounts of key researchers (emails which, BTW, are replicated on their cell phones) in best NSA style is also a cheaper approach.
A naively implemented "login multiplexer" severely restricts the "free movement of data", which is important for research as a discipline, and as such has a huge negative effect on researchers.
Also, while the multiplexer records all the login data for connections to it, it does not automatically provide any means to analyze them; that part has to be either written or acquired elsewhere. If this is not done, and the activity of users is not analyzed so that negative effects can be minimized, the multiplexer soon becomes a pretty useless additional roadblock for the users, just another ritual piece of infrastructure. And nobody has the courage to shout "The king is naked".
Linux is a complex OS with a theoretically unlimited number of exploits both in the kernel and in the major filesystems. So patching a single vulnerability logically does not improve security much, as we never know how many "zero-day" exploits exist "in the wild" and are in the hands of hackers and three-letter agencies.
At the same time, paranoia is artificially whipped up by all those security companies, which understand that it represents a chance to sell their (often useless or harmful) wares to gullible folks in large corporations (aka milk cows).
Typically, the conditions for applicability and the exact nature of the exploit are not revealed, so it is just proclaimed to be another "Big Bad Exploit" and everybody rushes to patch it in a pretty crazy frenzy, as if their life depended on it, forgetting the proverb that mushrooms usually grow in packs, and that exploits which are "really exploitable" (see for example Top 5 Worst Software Exploits of 2014 -- [Added May 2, 2015, NNB]) often point to flaws in the network architecture of the corporation (the OpenSSL and OpenSSH vulnerabilities and, especially, the Cryptolocker Trojan are quite interesting examples here). It goes without saying that architectural flaws can't be fixed by patching.
So patching a new remotely exploitable vulnerability becomes much like a voodoo ritual by which the shaman tries to persuade evil demons to stay away from the corporation. Questions about which aspects of the current datacenter architecture make this vulnerability more (or less) dangerous, and how to make the architecture more "exploit resistant", are never asked.
All efforts are spent on patching the one currently "fashionable" exploit that got great (and typically incompetent) MSM coverage, often without even investigating whether it can be used in the particular environment or not. All activity is driven by a spreadsheet with the list of "vulnerable" servers. Achieving a zero count is all that matters. After this noble goal is achieved, all activity stops until the next "fashionable" exploit.
"We are going to spend a large amount of money to produce a more complicated artifact, and it is not easy to quantify what we are buying for all the money and effort"
Bob Blakey, principal analyst

Bullshitting is not exactly lying, and bullshit remains bullshit whether it's true or false. The difference lies in the bullshitter's complete disregard for whether what he's saying corresponds to facts in the physical world: he does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.

Harry G. Frankfurt
Marketing-based security has changed expectations about the products pushed by security companies in a very bad way. Like the MSM in foreign policy coverage, vendors manufacture a particular "artificial reality", painting an illusion and then showing how their products make that imaginary reality more secure.
We should be aware that "snake oil" sellers have recently moved into security, and with considerable financial success. Of course, the IT industry as a whole uses "dirty" marketing tricks to sell its wares, but here the situation is really outstanding, when a company (ISS) which produced nothing but crap was in the end bought for 1.3 billion dollars by IBM in 2006. I think the IBM brass would have been better off spending that money in the best style of former TYCO CEO Dennis Kozlowski, with wild orgies somewhere on Cyprus or another Greek island :-). At least that way the money would have been wasted in style ;-).
Let's talk about one area, the one that I used to know really well -- network intrusion detection systems (NIDS).
For some reason unknown to me, the whole industry became pretty rotten, selling mostly hype and FUD. Still, I have to admit that FUD sells well. The total size of the world market for network IDS is probably several hundred million dollars, and this market niche is occupied by a lot of snake oil salesmen:
Synergy Research Group reported that worldwide network security market spending continued to be over $1 billion in the fourth quarter of 2005, across all segments -- hybrid solutions (firewall/VPN, appliances, and hybrid software solutions), Intrusion Detection/Prevention Systems (IDS/IPS), and SSL VPN.
IDS/IPS sales increased seven percent for the quarter and were up 30 percent over 2004. Read article here.
Most money spent on IDS could be spent with a much greater return on investment on ESM software, as well as on improving rules in existing firewalls, increasing the quality of log analysis, and host-based integrity checking.
That means that network IDS is a natural area where open source software is more competitive than any commercial software. Simplifying, we can even state that the acquisition of a commercial IDS by an organization can be a sign of weak or incompetent management (although reality is more complex: sometimes such an acquisition is just a reaction to pressures outside IT, such as compliance-related pressures; moreover, some implementations were done with a "loss leader" mentality, under the motto "let those jerks who want it have this sucker").
Actually, an organization that spends money on NIDS without first creating a solid foundation by deploying ESM commits what is called "innocent fraud" ;-). It does not matter what traffic you detect if you do not understand what exactly is happening on your servers and workstations and view your traffic as an unstructured stream, a pond out of which the IDS magically fishes alerts. In reality, most of the time the IDS is crying wolf, and the few useful alerts are buried in the noise. Also, "real time" detection, the main selling point of IDS, does not really matter: most organizations have no ability to react promptly to alerts, even if we assume there are (very rare) cases when a NIDS picks up a useful signal instead of noise. A good introduction to NIDS can be found in NIST Draft Special Publication 800-94, Guide to Intrusion Detection and Prevention (IDP) Systems (Adobe PDF (2,333 KB), Zipped PDF (1,844 KB)).
A typical network IDS (NIDS) uses network card(s) in promiscuous mode, sniffing all packets on each network segment the server is connected to. Installations usually consist of several sensors and a central console to aggregate and analyze data. NIDS can be classified into several types:
On the other side, there is the possibility of adding NIDS functionality to regular firewalls, and this is a promising path for NIDS development. If you think about it, a NIDS can be considered a passive device that listens to traffic and analyzes only those packets that have passed the firewall filtering rules. That means that 90% of the processing required for a NIDS has already been done at the firewall level. It is in this narrow sense that Gartner's 2003 statement that NIDS are irrelevant and we should switch to IPS might be true.
At the same time, like local firewalls, they represent a danger to the networking stack of the computer they supposedly protect.
The second important classification of NIDS is the placement:
Organizations rarely have the resources to investigate every "security" event. Instead, they must attempt to identify and address the top issues, using the tools they've been given. This is practically impossible if an IDS is listening to a large traffic stream with many different types of servers and protocols. In this case security personnel, if any, are forced to practice triage: tackle the highest-impact problems first and move on from there. Eventually this is replaced with an even simpler approach: ignore them all ;-). Of course, much depends on how well the signatures are tuned to the particular network infrastructure. Therefore another classification can be based on the type of signature used:
Even in the case when you limit the monitored traffic to a specific segment of the internal network (for example, local sites of a national or international corporation, which is probably the best NIDS deployment strategy), the effectiveness of a network IDS is low, but definitely above zero. It can be marginally useful in this restricted environment. Moreover, it might have value for network troubleshooting, especially if it is also configured to act as a blackbox recorder for traffic; the latter can easily be done using tcpdump as the first stage and postprocessing the tcpdump results with Perl scripts, say, every quarter of an hour. Please note that all the talk about real-time detection is 99% pure security FUD. Nothing can be done in most large organizations in less than an hour ;-).
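For illustration, such a blackbox recorder can be as simple as the script below. The interface name and capture directory are assumptions, and the -G rotation option requires a reasonably recent tcpdump; with an older version you would restart tcpdump from an external loop instead.

    #!/bin/sh
    # traffic-recorder.sh -- write raw traffic into 15-minute capture slices
    # that a Perl (or any other) postprocessing script can pick up and analyze
    IFACE=eth1                       # hypothetical monitoring interface
    DUMPDIR=/var/spool/traffic       # hypothetical capture directory

    mkdir -p "$DUMPDIR"
    # -G 900 rotates the output file every 15 minutes; the strftime pattern names each slice
    exec tcpdump -i "$IFACE" -n -s 0 -G 900 \
         -w "$DUMPDIR/slice-%Y%m%d-%H%M.pcap"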
In order to preserve their business (and revenue stream), IDS vendors started to hype intrusion prevention systems as the next generation of IDS. But IPS is a very questionable idea that mixes the role of a firewall with the role of an IDS sensor. It is not surprising that it backfired many times for early (and/or too enthusiastic) adopters (beta addicts).
It is very symptomatic, and proves the point about "innocent fraud", that intrusion prevention is usually advertised on the basis of its ability to detect mail viruses, network worms, and spyware. For any specialist it is evident that mail viruses should actually be detected on the mail gateway, and it is benign idiocy to try to detect them at the packet filter level. Still, idiocy might be the key to commercial success, and most IDS vendors pay a lot of attention to the rules or signatures that provide positive PR, which automatically drives them into the virus/worm detection wonderland. There are two very important points here:
Maybe things will eventually improve, but right now I do not see how a commercial IDS can justify the return on investment, and NIDS looks like a perfect area for open source solutions. In this sense, please consider this page a pretty naive attempt (one that misses the organizational dynamics and power-grab issues in large organizations) to counter "innocent fraud", to borrow the catchphrase used by the famous economist John Kenneth Galbraith in the title of his last book, "The Economics of Innocent Fraud".
Another important criterion for a NIDS is the level of programmability:
It's rather counterproductive to place a NIDS in segments with heavy network traffic. Port mirroring on switches works in simple cases, but in complex cases with multiple virtual LANs it will not work, as usually only one port can be mirrored. Mirroring also increases the load on the switch. Taps are an additional component and are somewhat risky on high-traffic segments unless they are designed to pass all traffic through in case of failure. Logically, a network IDS belongs in the firewall, and some commercial firewalls have rudimentary IDS functionality. A personal firewall with a NIDS component might be even more attractive for most consumers, as it provides some insight into what is happening; it can also be useful for troubleshooting. Their major market is small businesses, and probably people connected by DSL or cable who fear that their home computers may be invaded by crackers.
The problem is that the useful signal about probes or actual intrusions is usually buried under mountains of data, and a wrong signal may drive you in the wrong direction. A typical way to cope with information overload from a network IDS is to rely more on aggregation of data (for example, detect scans rather than single probes) and on "anomaly detection" (imitate a firewall detector or use statistical criteria for traffic aggregation).
Misuse detection is more costly and more problematic than the anomaly detection approach, with the notable exception of honeypots. It might be beneficial to use hybrid tools that combine honeypots and NIDS. Just as a sophisticated home security system might comprise both external cameras and sensors and internal monitoring equipment, watching for suspicious activity both outside and within the house, so should an intrusion detection system.
You may not know it, but a surprisingly large number of IDS vendors have license provisions that can prohibit you from communicating information about the quality and usability of their security software. Some vendors have used these software license provisions to file or threaten lawsuits to silence users who criticized software quality in places such as Web sites, Usenet newsgroups, user group bulletin boards, and the technical support boards maintained by software vendors themselves. Here open source has a definite advantage, because it may not be the best, but at least it is open, has reasonable quality (for example, Snort is very competitive with the most popular commercial solutions), or at least is the cheapest alternative among several equally bad choices ;-).
IDS are often (and wrongly) considered to be the key component of enterprise-level security. Often that is "achieved" by buying fashionable but mainly useless outsourced IDS services. Generally this idea has a questionable value proposition, because the level of false positives and the problems with the internal infrastructure (often stupid misconfigurations at the web server level, inability to apply patches in a timely manner, etc.) far outweigh any IDS-provided capabilities. If you are buying an IDS, a good starting point is to ask the vendor to show what attacks they have recently detected, and to negotiate a one- to six-month trial before you pay the money ("try before you buy").
The problem of false positives is a very important one that is rarely discussed at a sound technological level. I don't think there is a 'best' IDS. But here are some considerations:
You have probably got the idea at this point: the IQ of the network/security administrators and the ability to adapt the solution to the organization are of primary importance in the IDS area, more important than in, say, virus protection (where precooked signature sets rule despite being a huge overkill).
All in all, the architecture and the level of customization of the rulebase are more important than the capabilities of the NIDS.
From: Anonymous

Reverse tunneling is very, very useful, but only in quite specific cases. Those cases are usually the result of extreme malice and/or incompetence of the network staff.

The only difficult part here is to determine which is the more common attribute of IT/networking departments: malice or incompetence. My last IT people certainly had both.
HP
The File Transfer Protocol (FTP) enables you to transfer files between a client host system and a remote server host system. On the client system, a file transfer program provides a user interface to FTP; on the server, the requests are handled by the FTP daemon, ftpd. WU-FTPD is the FTP daemon for HP-UX systems. It is based on the replacement FTP daemon developed at Washington University. WU-FTPD 2.6.1 is the latest version of WU-FTPD available on the HP-UX 11i v1, HP-UX 11i v2, and HP-UX 11i v3 platforms.
The FTP client with SSL support is available for download from this page for the HP-UX 11i v2 operating system. Starting from May 2010, the WU-FTPD 2.6.1 bundle that you can download from this page contains the FTP daemon with SSL support for the HP-UX 11i v3 operating system.
Table 1: Latest WU-FTPD 2.6.1 Bundle Numbers
Product Version Number   Operating System   Bundle Version Number   Release Date
HP Revision: 1.014 (a)   HP-UX 11i v1       B.11.11.01.014          July 2010
HP Revision: 1.001 (a)   HP-UX 11i v2 (b)   B.11.23.01.001          September 2008
HP Revision: 6.0 (a)     HP-UX 11i v3 (b)   C.2.6.1.7.0             May 2011

(a) IPv6-enabled version of WU-FTPD 2.6.1 available.
(b) The TLS/SSL feature is available for the HP-UX 11i v2 and HP-UX 11i v3 operating systems.

WU-FTPD 2.6.1 offers the following features:
- Virtual hosts support
- The privatepw utility
- New clauses in the /etc/ftpd/ftpaccess file
- IPv6 support
- New command-line options
- New features related to data transfer
- New configuration file, /etc/ftpd/ftpservers
- A set of virtual domain configuration files used by ftp
WU-FTPD 2.6.1 for the HP-UX 11i v2 and HP-UX 11i v3 operating systems now supports the TLS/SSL feature. For more information on the TLS/SSL feature, see WU-FTPD 2.6.1 Release Notes on the HP Business Support Center.
IMPORTANT: The WU-FTPD 2.6.1 depot that you can download from this page is the TLS/SSL-enabled version of FTP. The core (default) HP-UX 11i v2 operating system still contains the non-TLS/SSL version of FTP. For patch updates to WU-FTPD 2.6.1 in the core HP-UX 11i v2 operating system, see http://itrc.hp.com
Compatibility Information
For HP-UX 11i v1 customers, WU-FTPD 2.6.1 adds new functionality to the already existing WU-FTPD 2.4 software, which is delivered as part of the core networking products on HP-UX 11i v1. For HP-UX 11.0, this version allows customers to upgrade to WU-FTPD 2.6.1 from either the legacy FTP version, which is delivered with the core networking products on HP-UX 11.0, or from WU-FTPD 2.4, which is available in the patch PHNE_21936.
Documentation
The following product documentation is available with WU-FTPD 2.6.1.

Man Pages
The following man pages are distributed with the WU-FTPD 2.6.1 depot:
- ftp.1
- ftpd.1m
- ckconfig.1
- ftprestart.1
- ftpwho.1
- ftpcount.1
- ftpshut.1
- privatepw.1
- ftpaccess.4
- ftpgroups.4
- ftpservers.4
- ftpconversions.4
- ftpusers.4
- ftphosts.4
- xferlog.5
arstechnica.com
Few common IT policies drive users to distraction as regularly and reliably as the aggressiveness of enterprise password policies.
... ... ...
Passwords are still important, but the value of aggressive password policies as security against unauthorized access is questionable, said Andrew Marshall, CIO of Philadelphia-based Campus Apartments, in an interview with Ars Technica. "Statistical attacks-repeated attempts at guessing a password using hints or a dictionary-are unlikely to yield results, provided that the enterprise system implements a 'lockout after X incorrect attempts' policy," he said. "Enforcing tricky complexity and length rules increases the likelihood that the password will be written down somewhere."
Even strong passwords don't prevent breaches. Scott Greaux, a product manager at Phishme, a security risk assessment firm, said that most recent data breaches have been the result of social engineering attacks like phishing. "Every major breach has been initiated by phishing," he said. "Password controls are great. Mature authentication systems enforce strong passwords, and have reasonable lockouts for failed login attempts, so brute-forcing is increasingly difficult."
But, Greaux says, the weak link is a user's trusting nature. "I could ask people for their strong, complex password," he added, "and they'll probably give it to me."
If users aren't writing down or giving up their password, many just forget them, increasing the workload on help desks. Adam Roderick, director of IT services at Aspenware, tells Ars that he frequently hears from client companies that a quarter to a third of all help-desk requests are the result of forgotten passwords or locked accounts. Despite the availability of self-service password recovery systems such as those from ManageEngine, "I do not see much investment from corporate IT in password recovery tools," he said.
Roderick said single sign-on systems could significantly reduce the problem, since users' frustrations usually come from having to manage multiple passwords.
Aug.20, 2009 | NachoTech
If you want to access an iLO behind a firewall, there are some TCP ports that need to be opened on the firewall to allow all iLO traffic to flow through. Here is a list of the default ports used by iLO, but these can be modified on iLO's Administration… Access… Services… tab.
ILO FUNCTION             SOCKET TYPE    PORT NUMBER
----------------------   -----------    -----------
Secure Shell (SSH)       TCP            22
Remote Console/Telnet    TCP            23
Web Server Non-SSL       TCP            80
Web Server SSL           TCP            443
Terminal Services        TCP            3389
Virtual Media            TCP            17988
Shared Remote Console    TCP            9300
Console Replay           TCP            17990
Raw Serial Data          TCP            3002
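A hypothetical illustration (not part of the quoted article): if the firewall in front of the iLO interfaces happened to be a Linux gateway using iptables, opening the default ports toward a made-up management subnet could look like this:

    #!/bin/sh
    # Allow the default iLO service ports through a Linux/iptables gateway.
    # 10.0.99.0/24 is a hypothetical subnet holding the iLO interfaces.
    ILO_NET=10.0.99.0/24
    for port in 22 23 80 443 3389 9300 17988 17990 3002; do
        iptables -A FORWARD -p tcp -d "$ILO_NET" --dport "$port" -j ACCEPT
    done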
HP Communities
Hi Guys,
We have found that the remote console port defined for iLo3 has changed from being 3389 (standard RDP port) to 17990.
Can one of you please ask HP about the reasoning about this change and if it will be an issue if we change this to the standard 3389 port. The alternative is that we get NS to open the port 17990 on the firewall then we do not have to manually change every iLO 3 interface for servers in ecom.
***************************
David responded:
**************************
I think they're confusing 2 different things. Port #3389 is a standard RDP port and was valid for the "iLO Terminal Services Pass-through" but never was the port for accessing the iLO remote console. Since TS Pass-through is no longer available with iLO3, this doesn't apply.
HP
For an iLO device to work properly when going across routers using port blocking and/or firewalls, ports 23, 80, 443, and 17988 must be open.
The directory services LDAP port (636) may be required. The Terminal Services RDP port (3389) may be required.
Port 23 is for the Telnet ports where the remote and graphical Remote Console is used, port 80 is for HTTP communications, port 443 is required for the HTTPS connection, and port 17988 is for Virtual Media.
LDAP traffic from a directory server uses random port numbers to enter the iLO device.
The inability to access the iLO management ports is often confused with incorrect proxy settings. When in doubt, disable proxy in Internet Explorer or Netscape.
gawker.com
Before he was deposed from Apple the first time around, Jobs already had a reputation internally for acting like a tyrant. Jobs regularly belittled people, swore at them, and pressured them until they reached their breaking point. In the pursuit of greatness he cast aside politeness and empathy. His verbal abuse never stopped. Just last month Fortune reported about a half-hour "public humiliation" Jobs doled out to one Apple team:
"Can anyone tell me what MobileMe is supposed to do?" Having received a satisfactory answer, he continued, "So why the fuck doesn't it do that?"

"You've tarnished Apple's reputation," he told them. "You should hate each other for having let each other down."

Jobs ended by replacing the head of the group on the spot.
In his book about Jobs' time at NeXT and return to Apple, The Second Coming of Steve Jobs, Alan Deutschman described Jobs' rough treatment of underlings:
He would praise and inspire them, often in very creative ways, but he would also resort to intimidating, goading, berating, belittling, and even humiliating them... When he was Bad Steve, he didn't seem to care about the severe damage he caused to egos or emotions... suddenly and unexpectedly, he would look at something they were working on say that it "sucked," it was "shit."
Jobs had his share of personal shortcomings, too. He has no public record of giving to charity over the years, despite the fact he became wealthy after Apple's 1980 IPO and had accumulated an estimated $7 billion net worth by the time of his death. After closing Apple's philanthropic programs on his return to Apple in 1997, he never reinstated them, despite the company's gusher of profits.
It's possible Jobs has given to charity anonymously, or that he will posthumously, but he has hardly embraced or encouraged philanthropy in the manner of, say, Bill Gates, who pledged $60 billion to charity and who joined with Warren Buffet to push fellow billionaires to give even more.
"He clearly didn't have the time," is what the director of Jobs' short-lived charitable foundation told the New York Times. That sounds about right. Jobs did not lead a balanced life. He was professionally relentless. He worked long hours, and remained CEO of Apple through his illness until six weeks before he died. The result was amazing products the world appreciates. But that doesn't mean Jobs' workaholic regimen is one to emulate.There was a time when Jobs actively fought the idea of becoming a family man. He had his daughter Lisa out of wedlock at age 23 and, according to Fortune, spent two years denying paternity, even declaring in court papers "that he couldn't be Lisa's father because he was 'sterile and infertile, and as a result thereof, did not have the physical capacity to procreate a child.'" Jobs eventually acknowledged paternity, met and married his wife, now widow, Laurene Powell, and had three more children. Lisa went to Harvard and is now a writer.
Windows XP's retail release was October 25, 2001, ten years ago today. Though no longer readily available to buy, it continues to cast a long shadow over the PC industry: even now, a slim majority of desktop users are still using the operating system.
Windows XP didn't boast exciting new features or radical changes, but it was nonetheless a pivotal moment in Microsoft's history. It was Microsoft's first mass-market operating system in the Windows NT family. It was also Microsoft's first consumer operating system that offered true protected memory, preemptive multitasking, multiprocessor support, and multiuser security.
The transition to pure 32-bit, modern operating systems was a slow and painful one. Though Windows NT 3.1 hit the market in 1993, its hardware demands and software incompatibility made it a niche operating system. Windows 3.1 and 3.11 both introduced small amounts of 32-bit code, and the Windows 95 family was a complex hybrid of 16-bit and 32-bit code. It wasn't until Windows XP that Windows NT was both compatible enough-most applications having been updated to use Microsoft's Win32 API-and sufficiently light on resources.
In the history of PC operating systems, Windows XP stands alone. Even Windows 95, though a landmark at its release, was a distant memory by 2005. No previous PC operating system has demonstrated such longevity, and it's unlikely that any future operating system will. Nor is its market share dominance ever likely to be replicated; at its peak, Windows XP was used by more than 80 percent of desktop users.
The success was remarkable for an operating system whose reception was initially quite muted. In the wake of the September 11th attacks, the media blitz that Microsoft planned for the operating system was toned down; instead of arriving with great fanfare, it slouched onto the market. Retail sales, though never a major way of delivering operating systems to end users, were sluggish, with the operating system selling at a far slower rate than Windows 98 had done three years previously.
It faced tough competition from Microsoft's other operating systems. Windows 2000, released less than two years prior, had won plaudits with its marriage of Windows NT's traditional stability and security to creature comforts like USB support, reliable plug-and-play, and widespread driver support, and was widely adopted in businesses. For Windows 2000 users, Windows XP was only a minor update: it had a spruced up user interface with the brightly colored Luna theme, an updated Start menu, and lots of little bits and pieces like a firewall, UPnP, System Restore, and ClearType. ...
Long in the tooth it may be, but Windows XP still basically works. Regardless of the circumstances that led to its dominance and longevity, the fact that it remains usable so long after release is remarkable. Windows XP was robust enough, modern enough, well-rounded enough, and usable enough to support this extended life. Not only was Windows XP the first (and only) PC operating system that lasted ten years: it was the first PC operating system that was good enough to last ten years. Windows 98 didn't have the security or stability; Windows 2000 didn't have the security or comfort; Mac OS X 10.1 didn't have the performance, the richness of APIs, or the hardware support.
... ... ...
Given current trends, Windows 7 will overtake XP within the next year, with many businesses now moving away from the decade-old OS in earnest. Not all-there are still companies and governments rolling out Windows XP on new hardware-but the tide has turned. Windows XP, with its weaker security and inferior support for modern hardware, is now becoming a liability; Windows 7 is good enough for business and an eminently worthy successor, in a way that Windows Vista was never felt to be.

Ten years is a good run for any operating system, but it really is time to move on. Windows 7 is more than just a solid replacement: it is a better piece of software, and it's a much better match for the software and hardware of today. Being usable for ten years is quite an achievement, but the stagnation it caused hurts, and is causing increased costs for administrators and developers alike. As incredible as Windows XP's longevity has been, it's a one-off. Several factors-the 32-bit transition, the Longhorn fiasco, even the lack of competition resulting from Apple's own Mac OS X transition-conspired to make Windows XP's position in the market unique. We should not want this situation to recur: Windows XP needs to be not only the first ten-year operating system; it also needs to be the last.
Selected comments
Muon:
"We should not want this situation to recur: Windows XP needs to be not only the first ten-year operating system; it also needs to be the last."
It feels like you completely missed the point.
Stability. Matters.
In today's fast-paced, constantly iterating (not innovating, as they claim) world, "good enough" is an alien concept, a foreign language. Yet we reached "good enough" ten years ago and it shows no signs of ever going away.
superslav223
OttoResponder wrote: All those words and yet one was missed: monopoly. You can't really talk about XP - or any other Microsoft OS - without talking about the companies anti-competitive practices. For example, the way they strong-armed VARs into selling only Windows... Sure the Bush Administration let them off the hook, but the court's judgement still stands.
Pirated XP is still installed far more than Linux despite being an OS from 2001.
Linux on the desktop has shortcomings and pretending they don't exist won't make them go away.
Microsoft used strong arm tactics but the competition also sucked. I have known many geeks that chose XP over Linux because they found the latter to be too much of a hassle, not because of OEMs or software compatibility.
xpclient:
"Windows XP didn't boast exciting new features". I stopped reading there because that's a load of crap/myth. XP came with a large number of NEW and EXCITING features. Read more about them here: http://en.wikipedia.org/wiki/Features_new_to_Windows_XP . XP was a very well engineered system that improved by orders of magnitude upon Windows 2000. Its popularity and continued use demonstrate just how well designed the system was. Great compatibility, excellent stability and performance. Security was an Achilles heel but SP2 nailed it and XP became a very good OS.
Windows 7 has some nice features but plenty of regressions too. Windows 7 can't even do basic operations like freely arrange pictures in a folder or not force a sorting order on files. The search is totally ruined for real-time searching, WMP12 is a UI disaster. Service packs and updates take hours to install instead of minutes and can't be slipstreamed into setup files. There's no surround sound audio in games. There is no choice of a Classic Start Menu. Windows Explorer, the main app where I live (instead of living on Facebook) is thoroughly dumbed down.
chabig:
"Windows XP didn't boast exciting new features or radical changes, but it was nonetheless a pivotal moment in Microsoft's history. It was Microsoft's first mass-market operating system in the Windows NT family. It was also Microsoft's first consumer operating system that offered true protected memory, preemptive multitasking, multiprocessor support, and multiuser security."
Talk about contradictions. First, claim there were no new or exciting features, then list a bunch of them. XP was the first fully 32-bit Windows OS and broke dependence on DOS. I'd say it did offer radical changes for the better.

carlisimo:

I'm still on XP, and I'm hesitant about upgrading... I don't like the changes to Windows Explorer at all.

dnjake:

How often do you need a new spoken language or a new hammer? When people spend effort learning how to use an operating system and that operating system meets their needs, change is a losing proposition. The quality of Microsoft's work is going down. But Microsoft's quality is still far better than almost any of the low-grade work that is standard for the Web. Windows 7 does offer some improvement over XP and it is a more mature operating system. It will be used longer than XP, and it remains to be seen how long XP's life will turn out to be. The quality of the Web is still low. Even the most basic forms and security sign-in applications are primitive and often broken. It may easily take another decade. But the approach Microsoft is talking about with Windows 8 will probably eventually provide a mature system based on the HTML DOM as the standard UI. Between that and XAML, it is hard to see why anything more will be needed. The days of ever-changing operating systems are drawing to a close.
microlith
superslav223 wrote:
Windows 2008R2 is probably safer than RHEL for web hosting.
Really? I'd like to see some results on this.
Quote: IE6 and IE9 might as well be entirely different browsers.
Yes, because they had competition surpassing them. Otherwise you get 5+ years of... nothing.
theJonTech:
I am in charge of the PC Deployment team of a Fortune 400 company. I can tell you first hand the nightmare of upgrading to Windows 7. The facts are, legacy apps haven't been upgraded to run on Windows 7, much less Windows 7 64bit. We had the usual suspects, Symantec, Cisco, Citrix all ready for launch, but everyone else drags their feet and we have had to tell our customers, either do away with the legacy app and we can find similar functionality in another application or keep your legacy app and you will be sent an older PC with XP on it (Effectively redeploying what they already have with more memory). Add to that, we are using Office 2010 and it's a complete shell shock to most end users used to Office 2003, though going from 2007 isn't as bad.
On the other hand I do small business consulting and moved over a 10 person office and they were thrilled with Windows 7, as it really took advantage of the newer hardware.
It just depends on the size and complexity of the upgrade. My company just cannot throw away a working OS, when these production applications won't work... Maybe in a perfect IT world
Hagen:
fyzikapan wrote:
The various Linux distros still haven't managed to come up with a solid desktop OS that just works, to say nothing of the dearth of decent applications, and what is there frequently looks and works like some perpetual beta designed by nerds in their spare time.
It's funny b/c it's so very true
jiffylube1024:
Windows XP's longevity is truly remarkable. The article makes a good point in that the strong push towards internet-connected PC's and internet security made running all pre-XP Microsoft desktop OS'es untenable after a few years, especially after Windows XP SP2 released with beefier security.
I personally jumped ship to Vista as soon as I could, because after the stability issues were ironed out within the first 6 months, it was a much smoother, better PC experience than XP (long boot times notwithstanding). Windows 7, which was essentially just a large service pack of Vista sold as a new OS (think OS X releases), was a smoother, more refined Vista.
I believe that Windows 7 is "the new XP", and it will probably still command well over 10% of the desktop market in 5+ years. I believe that for non-touch screen PC's, Windows 7 will be the gold standard for years to come, and that is the vast majority of business PC's and home PC's. New builds of Windows 7 boot faster than XP, and run smoother with fewer hiccups. The GPU-accelerated desktop really does run smoother than the CPU driven ones of the past.
Nothing will approach XP's 10-year run, most of that as the dominant desktop OS. The lines between desktop and laptop have been blurred lately as well; Windows 7 and Mac OS X are considered "desktop" OS'es even when they run on laptops. There is a newfound emphasis on mobile OS'es like never before today. More and more people will use tablet devices as media consumption devices - to surf the net, watch videos, etc. More and more people use computers while watching TV; it's a trend that is only increasing, and smartphones and tablets make this even easier.
Because Windows 8's "Metro" UI is so touch-focused, I could see it taking off in school usage, laptops, and, of course, tablets in the 201x decade. It will be interesting to see how Windows 8 tablets run when the OS first launches in late 2012; tablet hardware is at least an order of magnitude slower than desktop hardware. Within a few years of Windows 8's launch, however, there may be no perceptible performance difference between Tablet and desktop/laptop usage.
superslav223
Hagen wrote:
OS X 10.0-10.7 = $704
Windows XP - 7 via upgrades (XP Professional Full - Ultimate - Ultimate) = $780
Windows XP - 7 via full versions (Professional - Ultimate - Ultimate) = $1020
OS X has been cheaper if you were looking for the full experience of Windows each time. I don't have time to research which of the versions were valid upgrades of each other, b/c that was a super PITA, so I'll just leave that there for others to do.
You're too lazy to do the research for your own comparison?
Ultimate mainly exists for the people that max out laptops because they have nothing better to do with their money. The best feature of Ultimate (bitlocker) has a free alternative (truecrypt).
You certainly don't need Ultimate for the full Windows experience.
EmeraldArcana:
DrPizza wrote: I don't even understand the question, particularly not with regard to how it relates to Apple. Apple doesn't give you iterative improvements. It releases big new operating systems that you have to buy.
I think the case can be made that the transition between versions of Windows, traditionally, are a bit larger of a jump than transitions between versions of Mac OS X.
The leap from Windows XP to Windows Vista was quite large; compare that to the changes between Mac OS 10.3 and Mac OS 10.4. Similarly, Vista to 7 looks "more" than 10.4 to 10.5.
While Apple's charging for each point iteration of its operating system and adding new features, most of the underlying elements are essentially the same. They round out corners, remove some brushed metal here and there, change a candy stripe or two, but the visual differences between versions aren't as dramatic.
chronomitch
cactusbush wrote: Yeah, we are obliged to upgrade OS's eventually, when we purchase new hardware. Problem is that Windoze is getting worse - not better. Windoze Vista and 7 -STINK- and the future with Win8 looks even bleaker. My issues revolve around having command and control over my personal machine rather than it behaving like a social machine or by having the OS protect me from the machine's inner workings.
Several previous commentators have already remarked along similar lines, their frustration with Win7&8's isolation and de-emphasis of the file managing 'Explorer' app. - "aliasundercover" listed several Win7 shortcomings; including excessive 'nag screens', less control over where things get put, "piles of irrelevant things run incessantly in the background", the need for the machine to 'Phone Home' constantly and the "copy protection and activation getting meaner".
Years ago the purchase of XP and its new limited installation to only one computer; determined that XP would be my last MS OS purchase. Linux however has not yet blossomed to a desirable point.
MS is developing dumbed down style of operating systems that I don't want and don't like.
1) There is absolutely nothing stopping you from "having command and control" over your personal Windows 7 machine. In fact, Windows 7 provides more and better facilities for automatically starting and managing various tasks.
2) Nag screens can be easily disabled. Moreover, there are multiple settings for these screens. For example, I only see these screens when I install or uninstall software.
3) I don't see how explorer has been "isolated" or "de-emphasized." There are a few UI changes to the program, but most can be reverted to what XP looked like, save for the lack of an "up" button (which can be restored with certain software). Learn to use the new search in the start menu. It will save you a lot of time in the long run.
4) I'm not sure what the "less control over where things get put" complaint is about. Pretty much every program installer allows you to change where programs are installed.
5) Windows 7 runs faster and more efficiently than Windows XP, regardless of background processes.
6) Windows 7 activation has been painless. I don't see why anyone cares about Windows "phoning home" for activation after installation or the copy protection scheme, unless you're a pirate. Buy a copy of Windows 7 for each PC and stop being such a cheap-ass.
Honestly, it sounds like you have had very little actual exposure to Windows 7 and have just picked up complaints from other people. Neither XP nor 7 are perfect OSes, but 7 is leagues above XP in terms of security, performance, and standards. Windows 7 is a modern OS in every sense of the word. XP is an OS that has been patched and updated many times during its lifespan to include features and security it should have had in the first place.
me987654
lwatcdr wrote:
Nightwish wrote: That and most people walk into a best buy and get whatever they're told to get having no idea what an OS is or that there's a choice. Enterprise is terrified of change.
Enterprise needs to get work done. Vista had all the problems of a major update with the benefits of a minor update.
I'm left wondering the age of the people spouting this "enterprise is terrified of change" meme.
Seriously. This isn't meant to insult younger people. It isn't bad to be young. However, youth often don't fully grasp the factors that go into the decision-making process.
IT departments aren't afraid of change. Change is exactly what keeps them employed and makes their job interesting. You'll find that they usually run the latest and greatest at home, likely have a brand new gadgets, and spend their free time on sites like ars.
So why don't they upgrade? Because upgrading costs time, money, and the devoted attention of people in key roles. It also results in lost productivity in the short term. The benefits of upgrading must be weighed against the costs of upgrading. But not only that, the upgrade must be weighed against other projects that might help the organization more. Only so much change can be managed and endured simultaneously.
Meanwhile, casual and overly emotional observers pretend that IT departments are sticking with XP because they're lazy or haven't given the topic much thought. Rest assured, migration from XP has been given a ton of attention and the decision of when to leap isn't made lightly.
Great post... I think young people don't fully grasp how important it is to keep those main line of business applications operating.
MekkelRichards
I love XP and always will. I have been using it for almost 10 years. Got it just after it came out on my first real computer. The Dell Dimension 4100 with 733MHZ Pentium3 and 128MB SDRAM.
Just a few months ago I sold a brand new replacement laptop that I was sent from Dell so that I could buy an older, cheap laptop. A 2006 Inspiron E1405. It has a 1.83GHZ Core Duo, released before even the Core 2 Duos came out, only a 32-bit CPU. I am running XP SP3 on it with 2 gigs of RAM and it flies. I run every program that any normal person would. Currently have 13 tabs open in Chrome, Spotify open, some WinExplorer windows, and Word 2010 open. Not a hint of slowdown.

XP is just so lean and can be leaned out even further through tons of tweaks.
FOR ANYONE WHO STILL RUNS XP, download an UxTheme.dll patcher so that you can use custom themes!
NuSkoolTone
All this crying about "Holding us back". I say to the contrary, it kept us from moving "Forward". Forward as in needing new hardware every couple of years that in the end gave us NO REAL new functionality, speed, or efficiency. It wasn't until the CORE processors from Intel that there was any NEED for a new OS to take advantage.
Being able to keep working on old hardware that STILL PERFORMED, or being able to upgrade when you FELT like it (instead of being FORCED to because the new crappy whizbang OS brought it to its knees) with results that FLEW was NICE.
Windows 7 is a worthy successor to XP, but that doesn't mean XP wasn't a GREAT OS during its run!
Mr Bil
A few of us are using XP because the 70-250 thousand dollar instrument requires a particular OS to run the software. Upgrading to a new OS (if offered) is a 3 to 14 thousand dollar cost for new controller boards in the PC and the new software, not to mention the additional cost of a new PC. We have two Win98, one WinNT, and three WinXP machines in our lab running instruments.
metalhead0043
I just got on the Windows 7 bandwagon a little over a month ago. There are some things I like and some things I don't like. The boot times and shutdown times are considerably faster than XP's. Also, I feel like the entire OS is just much more stable. I never get programs that hang on shutdown. It just plain works. I don't care much for the new Windows 7 themes. I immediately went to the Windows Classic theme as soon as I found it. However, I still like the old XP start menu more. It was just more compact and cleaner. I do like the search feature in Windows 7. There are some other things I don't like, like Explorer automatically refreshing when I rename a file. It's a pointless feature that adds nothing to the experience.
At my job, however, I see no upgrade from XP in sight. I work for a major office supply retailer, and we are hurting financially. We still use the same old Pentium 4 boxes from when I started back in 2003.
trollhunter
What moves people (and companies) to upgrade (or not) their OS? Basically, 2 things: applications and hardware. For a long time, XP covered these 2 items very well (even in the case of legacy Win-16 or DOS applications in most cases, either natively or through 3rd-party support like DOSBox), so the market felt no big need to change. Microsoft knew this; that's why things like DirectX 10 got no support on XP. OTOH, the hardware market evolved to 64-bit platforms, and things like the 3GB address space limit in XP and immature multi-core processor support became real problems.
I think that Windows 7 / Windows 8 will spread faster when people and enterprises feel the need to migrate to 64 bit hardware (that was my case, BTW).
October 09, 2011 | Moon of Alabama
As the Chaos Computer Club, a 25 year old hacker organization which promotes privacy, found, the "Federal Trojan" software the police uses for sniffing into Skype calls allows full manipulation of the hosting PC. The software can install additional programs and it can upload, download and manipulate files.
"This refutes the claim that an effective separation of just wiretapping internet telephony and a full-blown trojan is possible in practice – or even desired," commented a CCC speaker. "Our analysis revealed once again that law enforcement agencies will overstep their authority if not watched carefully. In this case functions clearly intended for breaking the law were implemented in this malware: they were meant for uploading and executing arbitrary code on the targeted system."Even worse, the software is written on an amateur level, uses unsecured communication methods and, once installed, leaves the computer open to be manipulated by anyone on the Internet.
Slashdot
"Today we celebrate Dennis Ritchie Day, an idea proposed by Tim O'Reilly. Ritchie, who died earlier this month, made contributions to computing that are so deeply woven into the fabric that they impact us all. We now have to remark on the elephant in the room. If Dennis Ritchie hadn't died just after Steve Jobs, there would probably have been no suggestion of a day to mark his achievements.
We have to admit that it is largely a response to the perhaps over-reaction to Steve Jobs which highlighted the inequality in the public recognition of the people who really make their world work."
2003-07-31 | WU-FTPD Development Group
A vulnerability has been found in current versions of WU-FTPD, up to 2.6.2. Information describing the vulnerability is available from:
- CIAC bulletin N-132
- CVE CAN-2003-0466
- Red Hat errata RHSA-2003-245 (with updated packages)
- isec.pl
Please apply the realpath.patch patch to WU-FTPD 2.6.2.
This fixes an off-by-one error in the fb_realpath() function, as derived from the realpath function in BSD. It may allow attackers to execute arbitrary code, as demonstrated in wu-ftpd 2.5.0 through 2.6.2 via commands that cause pathnames of length MAXPATHLEN+1 to trigger a buffer overflow, including (1) STOR, (2) RETR, (3) APPE, (4) DELE, (5) MKD, (6) RMD, (7) STOU, or (8) RNTO.
Additionally, applying the connect-dos.patch is advised for all systems.
This patch fixes a possible denial of service attack on systems that allow only one non-connected socket bound to the same local address.
Additionally, applying the skeychallenge.patch is advised strongly for systems using S/Key logins.
This patch fixes a stack overflow in the S/Key login handling.
Jan 18, 2004 | HP Communities
The following assumes that you downloaded TCP wrappers from hpux.connect.org.uk or one of the mirrors. I think (but I might be wrong) that support for keyword "banners" may not be compiled in.
If you have a look at the source package Makefile at http://hpux.cs.utah.edu/hppd/cgi-bin/wwwtar?/hpux/Networking/Admin/tcp_wrappers-7.6/tcp_wrappers-7...., you will find that the optional features are not enabled:

#STYLE = -DPROCESS_OPTIONS # Enable language extensions.

So you can get the source code package from http://hpux.cs.utah.edu/ftp/hpux/Networking/Admin/tcp_wrappers-7.6/tcp_wrappers-7.6-ss-11.00.tar.gz..., uncomment the STYLE line in the Makefile and recompile the package.
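For instance, the rebuild might look roughly like this (a sketch only; the tarball name is taken from the URL above, the unpacked directory name may differ, and the make target for your HP-UX release may differ, so check the Makefile):

gunzip -c tcp_wrappers-7.6-ss-11.00.tar.gz | tar xf -
cd tcp_wrappers-7.6                  # directory name may differ in this source set
# edit Makefile: uncomment the line
#   STYLE = -DPROCESS_OPTIONS # Enable language extensions.
# then rebuild; hpux10 is commonly used for 11.x, adjust to your release
make REAL_DAEMON_DIR=/usr/lbin hpux10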
An alternative is to get TCP Wrappers from the HP software web site. That build seems to have the optional features compiled in (but I haven't verified this).
http://www.software.hp.com/portal/swdepot/displayProductInfo.do?productNumber=TCPWRAP
freebsd.org
Suppose that a situation occurs where a connection should be denied, yet a reason should be sent to the individual who attempted to establish that connection. How could it be done? That action can be made possible by using the twist option. When a connection attempt is made, twist will be called to execute a shell command or script. An example already exists in the hosts.allow file:

# The rest of the daemons are protected.
ALL : ALL \
        : severity auth.info \
        : twist /bin/echo "You are not welcome to use %d from %h."

This example shows that the message "You are not allowed to use daemon from hostname." will be returned for any daemon not previously configured in the access file. This is extremely useful for sending a reply back to the connection initiator right after the established connection is dropped. Note that any message returned must be wrapped in quote (") characters; there are no exceptions to this rule.
Warning: It may be possible to launch a denial of service attack on the server if an attacker, or group of attackers could flood these daemons with connection requests.
Another possibility is to use the spawn option in these cases. Like twist, the spawn option implicitly denies the connection and may be used to run external shell commands or scripts. Unlike twist, spawn will not send a reply back to the individual who established the connection. For an example, consider the following configuration line:
# We do not allow connections from example.com:
ALL : .example.com \
        : spawn (/bin/echo %a from %h attempted to access %d >> \
           /var/log/connections.log) \
        : deny

This will deny all connection attempts from the *.example.com domain, simultaneously logging the hostname, IP address, and the daemon which they attempted to access to the /var/log/connections.log file.
Aside from the already explained substitution characters above, e.g. %a, a few others exist. See the hosts_access(5) manual page for the complete list.
Thus far the ALL option has been used continuously throughout the examples. Other options exist which could extend the functionality a bit further. For instance, ALL may be used to match every instance of either a daemon, domain or an IP address. Another wildcard available is PARANOID which may be used to match any host which provides an IP address that may be forged. In other words, PARANOID may be used to define an action to be taken whenever a connection is made from an IP address that differs from its hostname. The following example may shed some more light on this discussion:
# Block possibly spoofed requests to sendmail:
sendmail : PARANOID : deny

In that example, all connection requests to sendmail which have an IP address that varies from its hostname will be denied.
Caution: Using the PARANOID wildcard may severely cripple servers if the client or server has a broken DNS setup. Administrator discretion is advised.
To learn more about wildcards and their associated functionality, see the hosts_access(5) manual page.
Before any of the specific configuration lines above will work, the first configuration line should be commented out in hosts.allow, as was noted at the beginning of this section.
October 7, 2011 | Mail Online
A father he never knew, a love child he once denied and a sister he only met as an adult: the tangled family of Steve Jobs... and who could inherit his $8.3 BILLION fortune.
- The Apple co-founder is survived by two sisters, his wife and their three children
- But he also had a love child, Lisa Brennan-Jobs, with Chrisann Brennan
- His Syrian biological father never had a conversation with Jobs as an adult
Steve Jobs's tangled family of a forgotten father, long-lost sister and love child means lawyers may face a delicate task breaking up his $8.3 billion fortune. The 56-year-old co-founder and former CEO of Apple is widely seen as one of the world's greatest entrepreneurs, and he died just outside the top 100 of the world's richest billionaires. But behind the iconic Californian's wealth and fame lies an extraordinary story of a fragmented family. Mr Jobs, of Palo Alto, California, is survived by his sisters Patti Jobs and Mona Simpson, his wife Laurene Powell Jobs and their three children Eve, Erin and Reed.
STEVE JOBS AND HIS FAMILY
- Biological parents: Joanne Schieble and Abdulfattah Jandali
- Biological sister: Mona Simpson
- Adoptive parents: Clara and Paul Jobs
- Adoptive sister: Patti Jobs
- Wife: Laurene Powell Jobs
- Children: Eve, Erin and Reed
- Love child: Lisa Brennan-Jobs, from his relationship with Chrisann Brennan
But his family is far from straightforward. He was adopted as a baby and, despite his biological father's attempts to contact him later on, remained estranged from his natural parents. In his early twenties Mr Jobs became embroiled in a family scandal, before his days of close media scrutiny, after he fathered a love child with his high school sweetheart Chrisann Brennan. Ms Brennan, who was his first serious girlfriend, became pregnant in 1977, and he at first denied he was the father. She gave birth to Lisa Brennan-Jobs in 1978, and in the same year Mr Jobs created the 'Lisa' computer, but insisted the name only stood for 'Local Integrated Software Architecture'. The mother initially raised their daughter on benefits, but he accepted his responsibilities two years later, after a court-ordered blood test proved he was the father, despite his claims of being 'infertile'.
Ms Brennan-Jobs has made a living for herself, after graduating from Harvard University, as a journalist and writer. She was eventually invited into her father's life as a teenager and told Vogue that she 'lived with him for a few years'. 'In California, my mother had raised me mostly alone,' Lisa wrote in an article for Vogue in 2008. 'We didn't have many things, but she is warm and we were happy. We moved a lot. We rented.
'My father was rich and renowned, and later, as I got to know him, went on vacations with him, and then lived with him for a few years, I saw another, more glamorous world.'
Mr Jobs was born to Joanne Schieble and Syrian student Abdulfattah Jandali before being given up for adoption.
Mr Jandali was a Syrian student and was not married to Ms Schieble at the time of Mr Jobs's birth in San Francisco, California, in February 1955.
She did not want to bring up a child out of wedlock and went to San Francisco from their home in Wisconsin to have the baby.
Mr Jobs is thought never to have made contact with his biological father.
Mr Jandali, 80, a casino boss, has said he wanted to meet his son but was worried that Mr Jobs might think he was after money. He had always hoped that his son would call him to make contact, and had emailed him a few times in an attempt to speak. Mr Jandali once said he 'cannot believe' his son created so many gadgets. 'This might sound strange, though, but I am not prepared, even if either of us was on our deathbeds, to pick up the phone to call him,' he said.
Ms Schieble and Mr Jandali then had a second child, Mona Simpson, who became a novelist. Ms Simpson is an author who once wrote a book loosely based on her biological brother. She lives in Santa Monica, California, with her two children and was once married to producer Richard Appel. But Mr Jobs did not actually meet Ms Simpson until he was aged 27. He never wanted to explain how he tracked down his sister, but she described their relationship as 'close'. Mr Jobs was adopted by working-class couple Clara and Paul Jobs, who have both since died; they also later adopted a second child, Patti Jobs. He later had his love child with his longtime girlfriend Ms Brennan in 1978. He met his wife Laurene Powell in 1989 while speaking at Stanford's graduate business school, and he had three children with her: Eve, Erin and Reed.
They married in 1991 and Reed was born soon after. He is their oldest child, aged 20. Mr Jobs registered an incredible 338 U.S. patents or patent applications for technology and electronic accessories, reported the International Business Times. He was believed to have driven a 2007 Mercedes-Benz SL55 AMG, which was worth around $130,000 new at the time. His 5,700 sq ft home was a 1930s Tudor-style property with seven bedrooms and four bathrooms, estimated by CNBC to be worth $2.6 million.
Mr Jobs also owned a huge historic Spanish colonial home in Woodside, which had 14 bedrooms and 13 bathrooms, set across six acres of forested land. But he later had it knocked down to make way for a smaller property after a long legal battle.
His charitable giving was always kept quiet, just like most other elements of his lifestyle. Mr Jobs reportedly declined to get involved with the Giving Pledge, founded by Warren Buffett and Bill Gates to get the wealthiest people to give away at least half of their wealth. But he is rumoured to have given $150 million to the Helen Diller Family Comprehensive Cancer Center at the University of California, San Francisco, reported the New York Times. Cancer organisations are the most likely to be supported if any charities are in his will, as he died on Wednesday at the age of 56 from the pancreatic form of the illness.
Read more: http://www.dailymail.co.uk/news/article-2046031/Steve-Jobs-death-Apple-boss-tangled-family-inherit-8-3bn-fortune.html
I don't care how you spell it out..... money, love children, tangled web of a life..... The world just lost the Henry Ford and the Thomas Edison of our day. Can anyone set the dollar value of his estate aside and look at Steve Jobs holistically? The guy was damn brilliant.... RIP Steve and sympathies to your wife and children. Right now, they don't care what your net worth was..... you were their dad and a father....
- Kurt R, Northville, MI, 08/10/2011 00:07
@Nancy Briones: he was a hero because he epitomized the American dream - brought up in a very modest household, dropped out of college to save his parents' money, started a business in his garage, made it big, failed and was fired, got back up and made it big all over again. His visions and his attention to detail have changed everyone's lives, whether you use Apple products or not. All computers changed because of Apple; the music industry was dragged kicking & screaming into the 21st century, to the benefit of consumers. Pixar revolutionized animated movies. Just simple computer typography changed massively because of Jobs. He took the computer version of ergonomics (that is, their ease of use) to levels no-one else could be remotely bothered to take them. He made computers useful for the liberal arts field, not just number crunching. His mission in life was to improve the world. His salary was $1 per year. He got rich just because he was successful at changing the world.
- DBS, San Francisco, USA, 08/10/2011 00:00
My name is Ozymandias, king of kings: Look on my works, ye Mighty, and despair
- Clive, Fife, 07/10/2011 15:24
Why was he such a hero? He benefited greatly from his creations. It was his job and he was paid for it. Funny how his cancer diagnosis somehow made us all so sympathetic to someone whose mission in life was to amass wealth, not save the world. My heart goes out to his family in their time of loss, however.
Software specification: Workstation/Server, HP-UX11.11
Man Pages
- tcpd(1M)
- tcpdmatch(1)
- tcpdchk(1)
- hosts_access(3)
- hosts_access(5)
- hosts_options(5)
- tcpd.conf(4)
- try-from(1)
linuxhelp.blogspot.com
Wildcards
You can use wildcards in the client section of the rule to broadly classify a set of hosts. These are the valid wildcards that can be used.
Patterns
- ALL - Matches everything
- LOCAL - Matches any host that does not contain a dot (.) like localhost.
- KNOWN - Matches any host where the hostname and host addresses are known or where the user is known.
- UNKNOWN - Matches any host where the hostname or host address are unknown or where the user is unknown.
- PARANOID - Matches any host where the hostname does not match the host address.
You can also use patterns in the client section of the rule. Some examples are as follows:
ALL : .xyz.com
Matches all hosts in the xyz.com domain. Note the dot (.) at the beginning.

ALL : 123.12.
Matches all the hosts in the 123.12.0.0 network. Note the dot (.) at the end of the rule.

ALL : 192.168.0.1/255.255.255.0
An IP address/netmask pair can be used in the rule.

ALL : *.xyz.com
An asterisk (*) matches entire groups of hostnames or IP addresses.

sshd : /etc/sshd.deny
If the client list begins with a slash (/), it is treated as a filename. In the above rule, TCP wrappers looks up the file sshd.deny for all SSH connections.

sshd : ALL EXCEPT 192.168.0.15
If the above rule is included in the /etc/hosts.deny file, then it will allow ssh connections only from the machine with the IP address 192.168.0.15 and block all other connections. Here EXCEPT is an operator.

Note: If you want to restrict use of NFS and NIS, you may include a rule for portmap, because NFS and NIS depend on portmap to work. In addition, changes to portmap rules may not take effect immediately.
Suppose I want to log all connections made to SSH with a priority of emergency. See my previous post to know more on logging. I could do the following:
sshd : .xyz.com : severity emerg

Note: You can use the options allow or deny to permit or restrict access on a per-client basis in either of the files hosts.allow and hosts.deny:

in.telnetd : 192.168.5.5 : deny
in.telnetd : 192.168.5.6 : allow
ef12517-2.tu-sofia.bg
Wildcards allow TCP wrappers to more easily match groups of daemons or hosts. They are used most frequently in the client list field of access rules.
The following wildcards may be used:
Caution: The KNOWN, UNKNOWN, and PARANOID wildcards should be used with care as a disruption in name resolution may prevent legitimate users from gaining access to a service.
- ALL - Matches everything. It can be used for both the daemon list and the client list.
- LOCAL - Matches any host that does not contain a period (.), such as localhost.
- KNOWN - Matches any host where the hostname and host address are known or where the user is known.
- UNKNOWN - Matches any host where the hostname or host address are unknown or where the user is unknown.
- PARANOID - Matches any host where the hostname does not match the host address.
www.samag.com
The AIX error logging facility components are part of the bos.rte and the bos.sysmgt.serv_aid packages, both of which are automatically placed on the system as part of the base operating system installation. Some of these components are shown in Table 1.
Unlike the syslog daemon, which performs no logging at all in its default configuration as shipped, the error logging facility requires no configuration before it can provide useful information about the system. The errdemon is started during system initialization and continuously monitors the special file /dev/error for new entries sent by either the kernel or by applications. The label of each new entry is checked against the contents of the Error Record Template Repository, and if a match is found, additional information about the system environment or hardware status is added, before the entry is posted to the error log.
The actual file in which error entries are stored is configurable; the default is /var/adm/ras/errlog. That file is in a binary format and so should never be truncated or zeroed out manually. The errlog file is a circular log, storing as many entries as can fit within its defined size. A memory buffer is set by the errdemon process, and newly arrived entries are put into the buffer before they are written to the log to minimize the possibility of a lost entry. The name and size of the error log file and the size of the memory buffer may be viewed with the errdemon command:
[aixhost:root:/] # /usr/lib/errdemon -l
Error Log Attributes
--------------------------------------------
Log File                /var/adm/ras/errlog
Log Size                1048576 bytes
Memory Buffer Size      8192 bytes

The parameters displayed may be changed by running the errdemon command with other flags, documented in the errdemon man page. The default sizes and values have always been sufficient on our systems, so I've never had reason to change them.
Due to use of a circular log file, it is not necessary (or even possible) to rotate the error log. Without intervention, errors will remain in the log indefinitely, or until the log fills up with new entries. As shipped, however, the crontab for the root user contains two entries that are executed daily, removing hardware errors that are older than 90 days, and all other errors that are older than 30 days.
0 11 * * * /usr/bin/errclear -d S,O 30
0 12 * * * /usr/bin/errclear -d H 90

These entries are commented out on my systems, as I prefer that older errors be removed "naturally", when they are replaced by newer entries.
Viewing Errors
Although a record of system errors is a good thing (as most sys admins would agree), logs are useless without a way to read them. Because the error log is stored in binary format, it can't be viewed as logs from syslog and other applications are. Fortunately, AIX provides the errpt command for reading the log.
The errpt command supports a number of optional flags and arguments, each designed to narrow the output to the desired amount. The man page for the errpt command provides detailed usage; Table 2 provides a short summary of the most useful arguments. (Note that all date/time specifications used with the errpt command are in the format of mmddHHMMyy, meaning "month", "day", "hour", "minute", "year"; seconds are not recorded in the error log, and are not specified with any command.)
Each entry in the AIX error log can be classified in a number of ways; the actual values are determined by the entry in the Error Record Template Repository that corresponds with the entry label as passed to the errdemon from the operating system or an application process. This classification system provides a more fine-grained method of prioritizing the severity of entries than does the syslog method of using a facility and priority code. Output from the errpt command may be confined to the types of entries desired by using a combination of the flags in Table 2. Some examples are shown in Table 3.
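For instance (a sketch; these flags are described in the errpt man page, and the timestamps follow the mmddHHMMyy format mentioned above):

errpt                                # one-line summary of every logged entry
errpt -a -j 5DFED6F1                 # full detail for one identifier
errpt -d H -T PERM                   # permanent hardware errors only
errpt -s 0201000011 -e 0228235911    # entries logged during February 2011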
Dissecting an Error Log Entry
Entries in the error log are formatted in a standard layout, defined by their corresponding template. While different types of errors will provide different information, all error log entries follow a basic format. The one-line summary report (generated by the errpt command without using the "-a" flag) contains the fields shown in Table 4:
Here are several examples of error log entry summaries:
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME DESCRIPTION
D1A1AE6F   0223070601 I H rmt3          TAPE SIM/MIM RECORD
5DFED6F1   0220054301 I O SYSPFS        UNABLE TO ALLOCATE SPACE IN FILE SYSTEM
1581762B   0219162801 T H hdisk98       DISK OPERATION ERROR

And here is the full entry of the second error summary above:
LABEL:           JFS_FS_FRAGMENTED
IDENTIFIER:      5DFED6F1
Date/Time:       Tue Feb 20 05:43:35
Sequence Number: 146643
Machine Id:      00018294A400
Node Id:         rescue
Class:           O
Type:            INFO
Resource Name:   SYSPFS

Description
UNABLE TO ALLOCATE SPACE IN FILE SYSTEM

Probable Causes
FILE SYSTEM FREE SPACE FRAGMENTED

Recommended Actions
CONSOLIDATE FREE SPACE USING DEFRAGFS UTILITY

Detail Data
MAJOR/MINOR DEVICE NUMBER
000A 0006
FILE SYSTEM DEVICE AND MOUNT POINT
/dev/hd9var, /var

Monitoring with errreporter
Most, if not all systems administrators have had to deal with an "overload" of information. Multiple log files and process outputs must be monitored constantly for signs of trouble or required intervention. This problem is compounded when the administrator is responsible for a number of systems. Various solutions exist, including those built into the logging application (i.e., the use of a loghost for syslog messages), and free third-party solutions to monitor log files and send alerts when something interesting appears. One such tool that we rely on is "swatch", developed and maintained by Todd Atkins. Swatch excels at monitoring log files for lines that match specific regular expressions, and taking action for each matched entry, such as sending an email or running a command.
For all of the power of swatch, though, I was unable to set up a configuration to perform a specific task: monitoring entries in the AIX error log, ignoring certain specified identifiers, and emailing the full version of each entry to a specified address, with an informative subject line. So, I wrote my own simple program that performs the task I desired. errreporter (Listing 1) is a Perl script that runs the errpt command in concurrent mode, checks new entries against a list of identifiers to be ignored, crafts a subject line based upon several fields in the entry, and emails the entire entry to a specified address.
errreporter can be run from the command line, though I have chosen to have it run automatically at system startup, with the following entry in /etc/inittab (all on a single line, but broken here, for convenience):
errrptr:2:respawn:/usr/sec/bin/errreporter -f /usr/sec/etc/errreporter.conf >/dev/console 2>&1

Of course, if you choose to use this script, be sure to set the proper locations in your inittab entry. The system must have Perl installed; Perl is included with AIX as of version 4.3.3, and is available in source and compiled forms from numerous Web sites. It relies only on modules that are included with the base Perl distribution (see Listing 2 for the errreporter.conf file).
Although this script perfectly suits my current needs, there are many areas in which it could be expanded upon or improved. For instance, it may be useful to have entries mailed to different addresses, based upon the entry's identifier. Another useful feature would be to incorporate "loghost"-like functionality, so that a program running on a single server can receive error log entries sent by other systems, communicating via sockets à la the syslog "@loghost" method.
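The errreporter script itself is not reproduced here (see Listing 1 in the original article), but the idea can be sketched in a few lines of shell; the mail address and ignore list below are just placeholders:

#!/bin/sh
# Follow the AIX error log in concurrent mode and mail each new summary line.
ADMIN="root@localhost"            # placeholder address
IGNORE="5DFED6F1"                 # identifiers to silently skip

errpt -c | while read ident stamp type class resource descr; do
    [ "$ident" = "IDENTIFIER" ] && continue              # skip any header line
    case " $IGNORE " in *" $ident "*) continue ;; esac   # skip ignored identifiers
    echo "$ident $stamp $type $class $resource $descr" |
        mail -s "errpt on `hostname`: $descr" "$ADMIN"
done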
... ... ...
The first source to go to for information on the usage of the commands and programs that are part of the Error Logging Facility is the man pages for the errpt, errdemon, errclear, errinstall, errupdate, errlogger, and errdemon commands, and for the errlog subroutine.
The complete listing of the entries in the Error Template Repository can be generated with the "errpt -t" command, for the one-line summary, or the "errpt -at" command, for the full text of each error template.
A more in-depth overview of the Error Logging Facility can be found in Chapter 10 of the AIX 4.3 Problem Solving Guide and Reference, available online at:
http://www.rs6000.ibm.com/doc_link/en_US/a_doc_lib/aixprob/prbslvgd/errlogfac.htm
freshmeat.net
synctool is a cluster administration tool that keeps configuration files synchronized across all nodes in a cluster. Nodes may be part of a logical group or class, in which case they need a particular subset of configuration files. synctool can restart daemons when needed, if their relevant configuration files have been changed. synctool can also be used to do patch management or other system administrative tasks.
freshmeat.net
Survey is a nearly complete list of your system's configuration files. It also lists installed packages, hardware info, dmesg output, etc. The resulting printout is 25-50 pages in size, fully documenting your system. Survey is invaluable after a blown install or upgrade when it is too late to get this information. Large organizations could use survey to document every Linux system.
Many Unix flavors are supported by TCP wrappers, so you shouldn't have any trouble building from source. There are, however, a few decisions to make at compile time. Features, for example, can be turned on or off through definitions. Here is a list, with default values shown where appropriate:
- STYLE = -DPROCESS_OPTIONS: Enable language extensions. This is disabled by default.
- FACILITY = LOG_MAIL: Where do log records go? I prefer to set this to LOG_DAEMON so that everything goes to /var/log/daemon.
- SEVERITY = LOG_INFO: Indicates what level to give to the log message. The default, LOG_INFO, is fine.
- HOSTS_ACCESS: When compiled with this option, wrapper programs support a simple form of access control. Because this is the raison d'être of the suite, it's defined by default.
- PARANOID: When compiled with -DPARANOID, wrappers will always try to look up and double-check the client hostname, and will always refuse service in the case of a discrepancy between hostname and IP address. This is a reasonable policy for most systems. When compiled without -DPARANOID, wrappers still perform hostname lookups; however, where such lookups give conflicting results for hostname and IP address, hosts are not automatically rejected. They can be matched with the PARANOID wildcard in the access files, and a decision made on whether or not to grant access.
- DOT = -DAPPEND_DOT: This appends a dot to every domain name, transforming example.com into example.com., for instance. This is done because on many Unix systems the resolver will append substrings of the local domain and try to look up those hostnames before trying to resolve the name it has actually been given. Use of the APPEND_DOT feature stops this waste of time and resources. It is off by default.
- AUTH = -DALWAYS_RFC931: Causes the system to always try to look up the remote username. For this to be of any use, the remote host must run a daemon that supports the RFC 931 (ident) protocol. Such lookups aren't possible for UDP-based connections. By default this is turned off, and the wrappers look up the remote username only when the access control rules specify such behavior.
- RFC931_TIMEOUT = 10: Sets the username lookup timeout.
- -DDAEMON_UMASK = 022: The default file permission mask for processes run under the control of the wrappers.
- ACCESS = -DHOSTS_ACCESS: Sets host access control. This feature can also be turned off at runtime by providing no, or empty, access control tables. Enabled by default.
- TABLES = -DHOSTS_DENY=\"/etc/hosts.deny\" -DHOSTS_ALLOW=\"/etc/hosts.allow\": Sets the pathnames for the access control tables.
- HOSTNAME = -DALWAYS_HOSTNAME: Sets the system to always attempt to look up the client hostname. If this is disabled, the client hostname lookup is postponed until the name is required by an access control rule or by a %letter expansion. If this is what you want, note that PARANOID mode must be disabled as well. This is on by default.
- -DKILL_IP_OPTIONS: This is for protection against hosts that pretend they have someone else's host address, i.e. host address spoofing. This option isn't needed on modern Unix systems that can stop source-routed traffic in the kernel, e.g. Linux, Solaris 2.x, 4.4 BSD and derivatives.
- -DNETGROUP: Determines whether or not your system has NIS support. This is used only in conjunction with host access control, so if you're not using that, don't bother with this in any case. Off by default.
Some definitions are given that work around system bugs (just the basics here; see the makefile for details). The standard define is BUGS = -DGETPEERNAME_BUG -DBROKEN_FGETS -DLIBC_CALLS_STRTOK.
Having set the options to your requirements, type make sys-type, with sys-type being one of the following:
In the unlikely event that none of these match your system, you'll have to edit the system dependencies sections in the makefile and do a make other.
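For example, a Linux build with the options language switched on might look like this; REAL_DAEMON_DIR is the Makefile variable naming the directory holding the real daemons, and /usr/sbin is only an assumption for your system:

# build for Linux with the options language enabled
make REAL_DAEMON_DIR=/usr/sbin STYLE=-DPROCESS_OPTIONS linux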
There are two ways to install the software -- the easy way and the advanced way.
... ... ...
And here is the same record after modification to support TCP wrappers:
telnet stream tcp nowait root /sbin/tcpd /sbin/in.telnetd
After editing this file, remember to tell inetd to reread it by sending it a SIGHUP (kill -1 with inetd's process ID).
Access control
The core idea behind TCP wrappers is that of an access control policy. The policy rules are held
in two files: /etc/hosts.allow and /etc/hosts.deny. These are the default pathnames,
which can be changed in the makefile.
Access can be controlled per host, per service, or in combinations thereof. Access control can also be used to connect clients to particular services, depending on the requested service, the origin of the request, and the host address to which the client connects. For example, a www daemon might be set to serve documents in French when contacted from within France, but otherwise respond in English.
The format of these files is described in detail in hosts_access(5). Basically, each file consists of a set of rules, which are searched first in hosts.allow and then in hosts.deny. The search stops at the first match, so if a host is granted access in hosts.allow it doesn't matter if it's then blocked in hosts.deny. Remember, the first rule matched determines what action the system will take.
There are two basic keywords, allow and deny. Both are used in conjunction with either specific hostnames or a wildcard from the list below.
- A string beginning with a dot (.) matches all hostnames that end with that string. For example, .example.com would match dunne.example.com.
- A string ending with a dot (.) matches all hosts whose IP addresses begin with that sequence. For example, 192.168. would match all addresses in the range 192.168.xxx.xxx.
- A string beginning with @ is treated as an NIS netgroup name.
- A string of the form n.n.n.n/m.m.m.m is treated as a network/mask pair.
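Put together in hosts.allow, those pattern forms might look like this (the host names, netgroup and addresses are purely illustrative):

# suffix match: any host under example.com
sshd : .example.com
# prefix match: any address beginning 192.168.
in.ftpd : 192.168.
# NIS netgroup named trusted-hosts
ALL : @trusted-hosts
# network/mask pair
ALL : 10.1.2.0/255.255.255.0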
There are also some special shorthand names:
There is also a set of symbolic names that expand to various information about the client and server. The full list of such expansions is shown in the table below.
- %a: the client IP address
- %c: client information (user@host, user@IP, etc.)
- %d: argv[0] of the daemon process
- %h: the client host name or IP address
- %n: the client host name
- %p: the process ID of the daemon
- %s: server information
- %u: the client username
- %%: a literal %
Examples
There are several typical forms of access control that provide examples of using the access control files. Explicitly authorized hosts are listed in hosts.allow, while most other rules are put in hosts.deny.
To deny all access, leave hosts.allow blank and put this in hosts.deny:

/etc/hosts.deny:
    ALL: ALL
To allow all access, simply leave both files blank. To allow controlled access, add rules to hosts.allow and hosts.deny as appropriate. The simplest way to do this is to list banned sites in hosts.deny.
/etc/hosts.deny:
    ALL: .evilcrackers.com
On the other hand, you can also deny access to all save selected sites:
/etc/hosts.allow:
    ALL: .example.com

/etc/hosts.deny:
    ALL: ALL
Remember, the first match is the important one -- the ALL in hosts.deny won't block example.com.
Booby traps
A useful feature is the ability to trigger actions on the host which are based on attempted connections.
For example, should you detect a remote site attempting to use your TFTP server, the following rule
in /etc/hosts.deny not only rejects the attempt, but notifies the system administrator:
in.tftpd: ALL: finger -l @%h 2>&1 | mail -s 'remote tftp attempt' sysadm
Note that use of this feature relies on the PROCESS_OPTIONS option. This option also provides some other useful features:
See the hosts_options(5) man page for full details of these and other options.
Logging
Log records are written to the syslog daemon, syslogd, with facility and level as specified
in the makefile at compile time. What happens to the logs there is determined by the syslogd
config file, /etc/syslog.conf. If PROCESS_OPTIONS has been defined, the facility
and level can be changed at runtime, using the keyword severity. For example severity
mail.info specifies logging with facility mail at level info. An undotted
argument is understood as a level.
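For instance, a pair of entries along these lines (a sketch that assumes the wrappers were built with PROCESS_OPTIONS and that sshd is wrapped on the machine) would log accepted ssh connections with facility mail at level info and route them to a file of their own:

# /etc/hosts.allow
sshd : .example.com : severity mail.info : allow

# /etc/syslog.conf
mail.info        /var/log/tcpd.log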
... ... ...
A good account of the thinking that led to the creation of the TCP wrappers is the paper "TCP Wrapper: Network Monitoring, Access Control, and Booby Traps," which is available from the same FTP site as the TCP wrappers software. Look for tcp_wrapper.<format>.Z.
PMSVN is a server configuration management and monitoring tool. It helps keep track of administrative actions for many servers with many administrators. It allows administrators to put specific configuration files under revision control and eases the burden of having to remember to commit changes. It can synchronize and monitor the consistency of small bits of configuration that are the same or mostly the same across many servers.
Oct 22, 2011 | freshmeat.net
NSC is a set of utilities for easy administration of DNS servers. You write very simple configuration files and NSC automatically generates zone files, reverse zone files, and the BIND configuration. Among other things, it also supports classless reverse delegations and IPv6 addressing. NSC is written in POSIX shell and GNU M4, and it should run on all systems where these two languages are available. Generation of DNS daemon config files is currently limited to BIND 8 or 9, but it is very easy to add your own config file generators for other daemons.
freshmeat.net
MyPurgeLogs is a script that can delete files older than a given number of days and can rotate and compress other files. It simply reads a configuration file to know what it must do. Wildcards for directories and files are allowed. The compression method is chosen to optimize free space on the filesystem. This script has only been tested on AIX.
[****] Linux Security 101 Issue 14 By Kelley Spoon, [email protected] -- very good discussion of tcpd
There's a daemon that's probably been installed on your machine that you don't know about. Or at least, you're not aware of what it can do. It's called tcpd, and it's how we shut off access to some of the basic services that the Bad Guys can use to get on our system.
Since tcpd can be pretty complex, I'm not going to go into all the details and tell you how to do the fancy stuff. The goal here is to keep the mischievous gibbons from knocking down what it took so long for us to set up.
tcpd is called into action from another daemon, inetd, whenever someone tries to access a service like in.telnetd, wu.ftpd, in.fingerd, in.rshd, etc. tcpd's job is to look at two files and determine if the person who is trying to access the service has permission or not.
The files are /etc/hosts.allow and /etc/hosts.deny. Here's how it all works:
- Someone tries to use a service that tcpd is monitoring.
- tcpd wakes up, and makes a note of the attempt to the syslog.
- tcpd then looks at hosts.allow
- if it finds a match, tcpd goes back to sleep and lets the user access the service.
- tcpd now takes a look at hosts.deny
- if it finds a match, tcpd closes the user's connection
- If it can't find a match in either file, or if both files are empty, tcpd shrugs, guesses it's OK to let the user on, and goes back to sleep.
Now, there are a couple of things to note here. First, if you haven't edited hosts.allow or hosts.deny since you installed Linux, then tcpd assumes that you want to let everyone have access to your machine. The second thing to note is that if tcpd finds a match in hosts.allow, it stops looking. In other words, we can put an entry in hosts.deny and deny access to all services from all machines, and then list ``friendly'' machines in the hosts.allow file.
Let's take a look at the man page. You'll find the info you need by typing man 5 hosts_access (don't forget the 5 and the underscore).
daemon_list : client_list

daemon_list is a list of one or more daemon process names or wildcards. client_list is a list of one or more host names, host addresses, patterns or wildcards that will be matched against the remote host name or address. List elements should be separated by blanks and/or commas.

Now, if you go take a look at the man page, you'll notice that I didn't show you everything that was in there. The reason for that is because the extra option (the shell_command) can be used to do some neat stuff, but *most Linux distributions have not enabled the use of this option in their tcpd binaries*. We'll save how to do this for an article on tcpd itself.
If you absolutely have to have this option, get the source from here and recompile.
Back to business. What the above section from the hosts_access man page was trying to say is that the format of hosts.[allow|deny] is made up of a list of services and a list of host name patterns, separated by a ``:''
You'll find the name of the services you can use by looking in your /etc/inetd.conf...they'll be the ones with /usr/sbin/tcpd set as the server path.
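For example, something like the following lists the wrapped entries (the exact path to tcpd varies between distributions):

# show which inetd services are run through the wrapper
grep tcpd /etc/inetd.conf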
The rules for determining host patterns are pretty simple, too:
- if you want to match all hosts in a domain, put a ``.'' at the front.
- Ex: .bar.com will match "foo.bar.com", "sailors.bar.com", "blue.oyster.bar.com", etc.
- if you want to match all IPs in a domain, put a "." at the end.
- Ex: 192.168.1. will match "192.168.1.1", "192.168.1.2", "192.168.1.3", etc.
And finally, there are some wildcards you can use:
- ALL matches everything. If in daemon_list, matches all daemons; if in client_list, it matches all host names.
- Ex: ALL : ALL would match any machine trying to get to any service.
- LOCAL matches host names that don't have a dot in them.
- Ex: ALL : LOCAL would match any machine that is inside the domain or search aliases given in your /etc/resolv.conf
- except isn't really a wildcard, but it comes in useful. It excludes a pattern from the list.
- Ex: ALL : ALL except .leetin-haxor.org would match all services to anyone who is not from ``*.leetin-haxor.org''
Ok. Enough technical stuff. Let's get to some examples.
Let's pretend we have a home LAN, and a computer for each member of the family.
Our home network looks like this:
linux.home.net   192.168.1.1
dad.home.net     192.168.1.2
mom.home.net     192.168.1.3
sis.home.net     192.168.1.4
bro.home.net     192.168.1.5

Now, since no one in the family is likely to try and ``hack root,'' we can assume they're all friendly. But....we're not so sure about the rest of the people on the Internet. Here's how we go about setting things up so people on home.net have full access to our machine, but no one else does. In /etc/hosts.allow:
# /etc/hosts.allow for linux.home.net
ALL: .home.net

And in /etc/hosts.deny:
# /etc/hosts.deny for linux.home.net
ALL : ALL

Since tcpd looks at hosts.allow first, we can safely deny access to all services for everybody. If tcpd can't match the machine sending the request to ``*.home.net'', the connection gets refused.
Now, let's pretend that Mom has been reading up on how Unix stuff works, and she's started doing some unfriendly stuff on our machine. In order to deny her access to our machine, we simply change the line in hosts.allow to:
ALL: .home.net except mom.home.net

Now, let's pretend a friend from....uh....friend.com wants to get something off our ftp server. No problem, just edit hosts.allow again:
# /etc/hosts.allow for linux.home.net
ALL: .home.net except mom.home.net
wu.ftpd: .friend.com

Things are looking good. The only problem is that the name server for home.net is sometimes down, and the only way we can identify someone as being on home.net is through their IP address. Not a problem:
# /etc/hosts.allow for linux.home.net
ALL: .home.net except mom.home.net
ALL: 192.168.1. except 192.168.1.3
ALL: .friend.com

And so on....
I have found that it's easier to deny everybody access and list your friends in hosts.allow than it is to allow everybody access and deny only the people who you know are RBG's. If you are running a private machine, this won't really be a problem, and you can rest easy.
However, if you're trying to run a public service (like an ftp archive of Tetris games for different OS's) and you can't afford to be this paranoid, then you shouldn't put anything in hosts.allow, and should just put all of the people you don't want touching your machine in hosts.deny.
freshmeat.net
iBackup simplifies the task of backing up the system configuration files (those under /etc) for Solaris, *BSD, and Linux systems. You can run it from any directory and it will, by default, save the (maybe compressed) tarball to /root. It is possible to encrypt the tarball, to upload the tarball to another host, and to run the backup automated in a cron job. You can also create a nice HTML summary of a system using the included sysconf.
Glenn Brunette's Security Weblog
Before answering this question, let's first provide a little background. TCP Wrappers has been around for many, many years. It is used to restrict access to TCP services based on host name, IP address, network address, etc. For more detail on what TCP Wrappers is and how you can use it, see tcpd(1M). TCP Wrappers was integrated into Solaris starting in Solaris 9, where both Solaris Secure Shell and inetd-based (streams, nowait) services were wrapped. Bonus points are awarded to anyone who knows why UDP services are not wrapped by default.

TCP Wrappers support in Secure Shell was always enabled, since Secure Shell always called the TCP Wrappers function hosts_access(3) to determine if a connection attempt should proceed. If TCP Wrappers was not configured on the system, access, by default, would be granted. Otherwise, the rules as defined in the hosts.allow and hosts.deny files would apply. For more information on these files, see hosts_access(4). Note that this and all of the TCP Wrappers manual pages are stored under /usr/sfw/man in Solaris 10. To view this manual page, you can use the following command:
$ man -M /usr/sfw/man -s 4 hosts_access

inetd-based services use TCP Wrappers in a different way. In Solaris 9, to enable TCP Wrappers for inetd-based services, you must edit the /etc/default/inetd file and set the ENABLE_TCPWRAPPERS parameter to YES. By default, TCP Wrappers was not enabled for inetd.
In Solaris 10, two new services were wrapped: sendmail and rpcbind. sendmail works in a way similar to Secure Shell. It always calls the host_access function and therefore TCP Wrappers support is always enabled. Nothing else needs to be done to enable TCP Wrappers support for that service. On the other hand, TCP Wrappers support for rpcbind must be enabled manually using the new Service Management Framework ("SMF"). Similarly, inetd was modified to use a SMF property to control whether TCP Wrappers is enabled for inetd-based services.
Let's look at how to enable TCP Wrappers for inetd and rpcbind...
To enable TCP Wrappers support for inetd-based services, you can simply use the following commands:
# inetadm -M tcp_wrappers=true
# svcadm refresh inetd

This will enable TCP Wrappers for inetd-based (streams, nowait) services like telnet, rlogin, and ftp (for example):
# inetadm -l telnet | grep tcp_wrappers
default  tcp_wrappers=TRUE

You can see that this setting has taken effect for inetd by running the following command:
# svcprop -p defaults inetd
defaults/tcp_wrappers boolean true

Note that you can also use the svccfg(1M) command to enable TCP Wrappers for inetd-based services.
# svccfg -s inetd setprop defaults/tcp_wrappers=true
# svcadm refresh inetd

Whether you use inetadm(1M) or svccfg is really a matter of preference. Note that you can also use inetadm or svccfg to enable TCP Wrappers on a per-service basis. For example, let's say that we wanted to enable TCP Wrappers for telnet but not for ftp. By default, both the global and per-service settings for TCP Wrappers are disabled:
# inetadm -p | grep tcp_wrappers
tcp_wrappers=FALSE
# inetadm -l telnet | grep tcp_wrappers
default  tcp_wrappers=FALSE
# inetadm -l ftp | grep tcp_wrappers
default  tcp_wrappers=FALSE

To enable TCP Wrappers for telnet, use the following command:
# inetadm -m telnet tcp_wrappers=TRUE

Let's check our settings again:
# inetadm -p | grep tcp_wrappers
tcp_wrappers=FALSE
# inetadm -l telnet | grep tcp_wrappers
tcp_wrappers=TRUE
# inetadm -l ftp | grep tcp_wrappers
default  tcp_wrappers=FALSE

As you can see, TCP Wrappers has been enabled for telnet but for none of the other inetd-based services. Pretty cool, eh?
You can enable TCP Wrappers support for rpcbind by running the following command:
# svccfg -s rpc/bind setprop config/enable_tcpwrappers=true
# svcadm refresh rpc/bind

This change can be verified by running:
# svcprop -p config/enable_tcpwrappers rpc/bind
true

That is all that there is to it! Quick, easy and painless! As always, let me know what you think!
Take care!
Comments:

Just wondering if using IP Filter would not be a better way of blocking/allowing machines to connect to services? I used to use TCP Wrappers all the time, but find now that I rarely use them in favor of using IP Filter. Is there some advantage to using both? Just curious....
Posted by Jason Grove on April 07, 2005 at 11:58 PM EDT #
Jason, thank you for your question. In my opinion, I agree with you: I too would rarely use TCP Wrappers in favor of IP Filter. The reasons for this are simple: IP Filter simply has a richer feature set and offers greater flexibility for defining filtering policy. Further, if you use Solaris containers, keep in mind that the IP Filter policy is defined in the global zone (versus TCP Wrappers, which is done per container). The benefit of configuring IP Filter from the global zone is that if a local zone is breached, an attacker (even with root privileges) will not be able to alter the firewall policy or touch the firewall logs, since they are safely protected in the global zone.
That said, TCP Wrappers was designed to protect TCP services and it does that very well. Further it offers an easy to understand and use interface for configuring policy. The choice to use IP Filter or TCP Wrappers will likely depend on your experience and comfort level with these tools as well as on your filtering requirements. If you are looking for a more comprehensive host-based firewall solution however, I would certainly recommend IP Filter.
Thanks again!
Glenn
Posted by Glenn Brunette (192.18.128.12) on April 08, 2005 at 01:17 PM EDT
Nice article. To answer the bonus question, the UDP services are not wrappable because they are stateless, so there is no connection to manage. As the first comment said, IP Filter can manage services, including UDP, because it is low enough in the protocol stack to cover both UDP and TCP.
Posted by Ben Strother on August 17, 2005 at 08:20 PM EDT
Etcsvn is a command line program for managing system configurations in subversion. It doesn't make a working copy out of your /etc, but uses a temporary workspace. It will preserve ownership/permissions of the files being tracked.
Dpsyco is an automated system for distributing system configurations to several computers. It is written mainly for the Debian distribution but should be portable (without too much difficulty) to other distributions or Unixes as well. It consists of a number of shell scripts to perform the desired actions. With it you can handle users, add ssh public keys, patch the system, update things using cfengine, install files (overriding other packages' files), and more.
freshmeat.net

CyberDNS is a Web application to manage DNS zones and BIND configurations easily. It lets you add entries and then watch as they are generated into zone and configuration files. It supports IPv6 and most DNS record types. The Web interface runs on the master DNS server, and cron is used to synchronize the SQLite database and zone files to slave DNS servers. It is easy to make snapshots to revert to in the event of failure.
Written in Python
bakonf is a tool for making backups of configuration files on GNU/Linux or Unix-like systems. It uses various methods to reduce the size of the backups it creates, and is designed to be useful for unattended remote servers.
Oct 25, 2011
programmer99 has asked for the wisdom of the Perl Monks concerning the following question:

Hello. This is my first post, so please forgive me for any mistakes I make. I have been programming for about 16 years now and Perl has always been one of my absolute favorite languages. I have been wondering how I can make a programming language using Perl, only Perl and nothing but Perl. How would I go about doing this?
I have never created a language before. I know the basics of what a compiler is and how it works. But I don't know how to make one. After I make the language (which I don't know how to do) how do I make the language work with the compiler?
I truly do not want to read another book, as these can be extremely unhelpful. There really aren't any good tutorials that I can find. Can anyone give me some kind of tutorial or a point in the right direction? Thanks
P.S.- No mistakes that I can find! I'm good at this :-)
BrowserUk (Pope):

See Exploring Programming Language Architecture in Perl by Bill Hails for one of the most complete explorations of doing exactly what the title suggests.
Your Mother (Abbot):

BUK's suggestion is quite good. IIRC, it's a fun manuscript.
I've been meaning to suggest Marpa to you for awhile. While still marked experimental it should be excellent for this kind of thing. I've wanted to play with it myself for a long time but tuits being what they are...
Marpa would be used, I'd expect, to convert a language spec/grammar to Perl and then you could use any number of strategies to freeze the resulting code (in memory, file, some other specialized cache) until the original source changed, at which point it could be "recompiled." Doing it this way should afford an easy path for testing and building up a grammar piece by piece. If you don't write tests as you go, you will be sorry down the road. If you do write tests, you'll be able to breeze through wrong turns and accidentally introduced bugs with confidence.
programmer99:

Ok, I have a few questions (if you don't mind). What do you mean by grammar? How will Marpa help me? Thanks.
Your Mother (Abbot):

As I say, tuits in the round are the issue, and without the time to cook up some working example I'd just be glossing the various documentation and BNF/ParseRec stuffage. I think a simple example can be cooked up, but it's also not entirely trivial or my forte (hint, hint to others who might have something to show). This is quite interesting and I've been waiting to try some Marpa, though. If I can grab an hour tonight I'll revisit this; otherwise, the weekend; otherwise... uh, what were we talking about?
ikegami:

The grammar of a language defines its syntax. Parsers (e.g. Marpa) take a sequence of bytes, characters or tokens, check whether it conforms to the grammar, and assign meaning to the components of the sequence as per the grammar.
For example, the job of the parser is to receive "a+b*c" and return "a multiplication of ( an addition of ( identifier a ) and (identifier b ) ) and ( identifier c )".
ikegami:

"Marpa - Parse any Language You Can Describe in BNF"
Hopefully, it can do a bit more than that, because associativity cannot be described in BNF.
davies:

Have a look at Jonathan Worthington's slides (http://www.jnthn.net/papers/2011-yapc-russia-compiler.pdf). The code is next door on his web site. But as this was a public talk, there may be a recording of it on YAPC.tv. My spies tell me that the talk was outstanding, writing a complete language (well, almost) from next to nothing in under an hour.
Regards,
John Davies
ikegami:

If you're interested in parsing and compiling, consider getting your hands on Compilers: Principles, Techniques, and Tools (the "Dragon Book"). You won't find any Perl in it, though.
Corion (Saint):

In my opinion a very good book is Compiler Construction (PDF warning) by Niklaus Wirth. I've only used the German edition from 1986, but it introduces a small language and how to build a parser and compiler for it. It only teaches you the technique of recursive descent. Recursive descent is a technique that is currently enjoying a comeback, because with it it is easier to produce meaningful error messages than when using lex and yacc or other parsing tools. The German book has 115 pages; the English and more recent edition (from 1996) has 130 pages. This seems to me a good starting point in the sense that little previous knowledge is needed, and it gives you a feel for implementing a language with Pascal-like syntax. Actually, in the English PDF, it's an Oberon-like syntax, but that's still close enough.
When I first read the book, I skipped chapters 1 to 4, as they concern themselves with the theoretical background of the languages that the approach can handle. Looking back over those chapters, I'm not sure whether it was really necessary to skip them, as they don't seem as dry to me now as they did back then. On the other hand, I now have much more experience and have also learned the theory of regular languages and context-free languages at university.
Browsing through the PDF, I see that it also mentions bottom-up (yacc-style) parsing and gives a rough outline of it.
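To give a feel for recursive descent without any modules, here is a tiny sketch (not code from Wirth's book): one sub per grammar rule, evaluating simple arithmetic as it parses.

#!/usr/bin/perl
# Toy recursive-descent parser/evaluator for + - * / and parentheses.
use strict;
use warnings;

my @tokens;

sub peek       { $tokens[0] }
sub next_token { shift @tokens }

# Grammar:
#   expr   ::= term   { ('+'|'-') term }
#   term   ::= factor { ('*'|'/') factor }
#   factor ::= NUMBER | '(' expr ')'
sub expr {
    my $value = term();
    while ( defined peek() && peek() =~ /^[+-]$/ ) {
        my $op  = next_token();
        my $rhs = term();
        $value = $op eq '+' ? $value + $rhs : $value - $rhs;
    }
    return $value;
}

sub term {
    my $value = factor();
    while ( defined peek() && peek() =~ m{^[*/]$} ) {
        my $op  = next_token();
        my $rhs = factor();
        $value = $op eq '*' ? $value * $rhs : $value / $rhs;
    }
    return $value;
}

sub factor {
    my $t = next_token();
    die "unexpected end of input\n" unless defined $t;
    if ( $t eq '(' ) {
        my $value = expr();
        my $close = next_token();
        die "expected ')'\n" unless defined $close && $close eq ')';
        return $value;
    }
    die "expected a number, got '$t'\n" unless $t =~ /^\d+$/;
    return $t;
}

my $input = '2 + 3 * (4 - 1)';
@tokens = $input =~ /(\d+|[-+*\/()])/g;
print expr(), "\n";    # prints 11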
spx2
The work of Crenshaw at http://compilers.iecc.com/crenshaw/ appears to be simpler. But YMMV.
There's a lot of theory about "bla grammars bla bla translators bla ... context-free, context-dependent, bla bla etc."
hbm (Pilgrim)
You can just throw that Dragon Book out the window and study the grammar of some small language like Basic or JavaScript to learn this stuff
(here you can find a lot of grammars to many many languages).
This is not rocket science, it's engineering.
So I totally agree with you about not wanting to read 9999 books on the subject; you need to build something.
I'm not in any way a compiler/language expert but I do feel that you want to get down to business
so... here's Parse::Yapp, here's an article, here's another article, here's some benchmarks and comparisons between different parser generators for Perl,
and another comparison, and another article. If you have further questions, I'm sure the monastery has lots of monks with knowledge of Parse::Yapp,
so your questions will receive an answer.
Now go write a language, and good luck!
For ideas, you might look at Brainf*ck, which has a simple grammar of: <>+-[],.
Your Mother
sundialsvc4
Speaking of which, this is worth revisiting for anyone interested or who missed it the first time around: Ook interpreter.
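To show how little machinery a grammar that small needs, here is a quick Brainf*ck interpreter sketch in Perl (a toy illustration, not the Ook interpreter linked above):

#!/usr/bin/perl
# Minimal Brainf*ck interpreter: eight one-character commands, one tape.
use strict;
use warnings;

my $code = '++++++++[>++++++++<-]>+.';    # prints "A"
my @ops  = split //, $code;

# Pre-compute matching brackets so [ and ] can jump directly.
my ( %jump, @stack );
for my $i ( 0 .. $#ops ) {
    if    ( $ops[$i] eq '[' ) { push @stack, $i }
    elsif ( $ops[$i] eq ']' ) {
        my $open = pop @stack;
        die "unbalanced ]\n" unless defined $open;
        $jump{$open} = $i;
        $jump{$i}    = $open;
    }
}
die "unbalanced [\n" if @stack;

my @tape = (0) x 30_000;
my ( $pc, $ptr ) = ( 0, 0 );
while ( $pc < @ops ) {
    my $op = $ops[$pc];
    if    ( $op eq '>' ) { $ptr++ }
    elsif ( $op eq '<' ) { $ptr-- }
    elsif ( $op eq '+' ) { $tape[$ptr] = ( $tape[$ptr] + 1 ) % 256 }
    elsif ( $op eq '-' ) { $tape[$ptr] = ( $tape[$ptr] - 1 ) % 256 }
    elsif ( $op eq '.' ) { print chr $tape[$ptr] }
    elsif ( $op eq ',' ) { $tape[$ptr] = ord( getc(STDIN) // "\0" ) }
    elsif ( $op eq '[' ) { $pc = $jump{$pc} if $tape[$ptr] == 0 }
    elsif ( $op eq ']' ) { $pc = $jump{$pc} if $tape[$ptr] != 0 }
    $pc++;
}
print "\n";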
JavaFan
Creating a programming language from scratch is an interesting exercise. I not only did that, but sold umpteen thousand copies of a system that uses it. (So there...) But I dunno if I would implement the entire language in Perl. If you do it, I'd love to see it. I'd start by first thinking about what your new language should do. Is it going to be a domain-specific language? Or a general-purpose one? If the latter, what is its strength going to be? Is it going to be procedural? Functional? OO? Block based? List based? Something else?
spx2:
Second, make some trade-offs. Do you want to write a simple compiler/interpreter? (That probably means a simple syntax and not many features in your language.) Does the compiler have to run fast? Or do you want to give much power to the user of the language (resulting in a more complex compiler)? Only then would I worry about the implementation.
Oh, and my step 0 would be to ponder: "if I really haven't a fucking clue on how to do that, is my disdain to read a book justified?"
well, just look over Parse::Yapp; that would be your entry point.
JavaFan (Abbot)
In a way, I have already thought this out. The issue I am having now is how to get started with the actual code.
Anonymous Monk
So, your question really is: I know what I want to do, but I'm not telling you what it is, yet I ask you how to get started?
Is that how you ask for directions in an unknown city as well? You ask: I want to go somewhere, in which direction do I go?
LOL *cough* *cough* aXML *cough* *cough*
Anonymous Monk
Parsing Strings and Trees with Parse::Eyapp (An Introduction to Compiler Construction)
http://search.cpan.org/dist/HOP-Lexer/lib/HOP/Lexer/Article.pod - Lexing Without Grammars: When Regular Expressions Suck
http://hop.perl.plover.com/Examples/ALL/calculator
http://hop.perl.plover.com/Examples/ALL/regex-parser
http://hop.perl.plover.com/linogram/
There is a robust online Source Code to HTML option at ToGoTutor - Code2Html. The website has tools for Perl, Java and other languages.
This is how I like my code, in no specific order. :) (A short snippet applying several of these rules follows the list.)
- 4 space indents
- No tabs in code (includes indents)
- Always Class->method, never method Class (this includes "new"!)
- Cuddled else: } else {
- Opening curly on the same line as the keyword it belongs to
- Closing vertically aligned with that keyword
- Space after comma or semi-colon, but not before
- No extra spaces around or inside parens: foo, (bar, baz), quux
- Extra spaces in arrayref constructor: [ foo, bar ]
- Extra spaces in hashref constructor: { foo => bar }
- Extra spaces in code delimiting curlies: sort { $a <=> $b } @foo
- No $a or $b except when sorting
- No parens unless needed for clarity
- Space between special keyword and its arguments: if (...) { ... }
- No space between keyword and its arguments if the "looks like a function, therefore it is a function" rule applies:
  print((split)[22]), not print ((split)[22]). (And of course not print (split)[22].)
- No subroutine prototypes if they're ignored anyway
- No subroutine prototypes just to hint the number of arguments
- Prototypes enforce context, so use them only if that makes sense
- No globals when access from another package is not needed
- use strict and -w. Loading of normal modules comes after loading strict.
- Lots of modules, but not to replace few-liners or simple regexes
- Comments on code lines have two spaces before and one after the # symbol
- No double spaces except for vertical alignment and comments
- Only && || ! where parens would be needed with and or not
- No double empty lines
- Empty line between logical code chunks
- Explicit returns from subs
- Guards (return if ...) are nicer than large else-blocks
- No space between array/hash and index/key: $foo[0], $foo{bar}
- No quotes for simple literal hash keys
- Space around index/key if it is complex: $foo{ $bar{baz}{bar} }
- Long lines: indent according to parens, but always 4 spaces (or [], {}, etc)
- Long lines: continuing lines are indented
- Long lines: Lines end with operator, unless it's || && and or
- No "outdent"s
- No half indents
- No double indents
- grep EXPR and map EXPR when BLOCK is not needed
- Logical order in comparisons: $foo == 4, but never 4 == $foo
- English identifiers
- Not the English.pm module
- Multi-word identifiers have no separation, or are separated by underscores
- Lowercase identifiers, but uppercase for constants
- Whatever tool is useful: no OO when it does not make sense
- It's okay to import symbols
- No here-documents, but multi-line q/qq. Even repeated prints are better :) (Okay, here-docs can be used when they're far away from code that contains any logic. Code MUST NOT break when (un)indented.)
- Always check return values where they are important
- No spaces around: -> **
- Spaces around: =~ !~ * / % + - . << >> comparison_ops & | ^ && || ?: assignment_ops => and or xor
- Spaces or no spaces, depending on complexity: .. ... x
- No space after, unless complex: ~ u+ u-
- Long lines: break between method calls, -> comes first on a line, space after it
- => where it makes sense
- qw where useful
- qw when importing, but '' when specifying pragma behaviour
- () for empty list, not qw()
- -> to dereference, where possible
- No abbreviations (acronyms are okay, and so are VERY common abbreviations) NEVER "ary"
- Data type not represented in variable name: %foo and @foo, but not %foo_hash or @foo_array
- Sometimes: data type of referent in reference variable names: $bla_hash is okay
- Sometimes: data type 'reference' in reference variable names: $hashref is okay
- No one-letter variable names, unless $i or alike
- $i is a(n index) counter
- Dummy variables can be called foo, bar, baz, quux or just dummy
- Taint mode *only* for setuid programs
- No sub main(), unless it needs to be called more often than once
- Subs before main code!
- Declare variables on first use, not before (unless required)
- \cM > \x0d > \015. \r only where it makes sense as carriage return.
- Complex regexes get /x
- No space between ++/-- and the variable
- List assignment for parameters/arguments, not lots of shifts
- Only shift $self from @_ if @_ is used elsewhere in the sub
- Direct @_ access is okay in very short subs
- No eval STRING if not needed
- Constructor "new" does not clone. Only handles a *class* as $_[0]
- Constructor that clones is called "clone"
- Constructor can be something else than "new", but "new" is an alias
- No setting of $| when it is not needed
- Lexical filehandles
- No v-strings
- Single quotes when double-quote features not used
- In DBI: value interpolation using placeholders only
- use base 'BaseClass' instead of use BaseClass and setting @ISA
- Comments where code is unclear
- Comments usually explain the WHY, not the HOW
- POD at the bottom, not top, not interleaved
- Sane variable scopes
- No local, except for perlvar vars
- No C-style loop for skipless iteration
- No looping over indexes if only the element is used
- 80 characters width. It's okay to give up some whitespace
- Unbalanced custom delimiters are not metacharacters and not alphanumeric
- RHS of complex s///e is delimited by {}
- Favourite custom delimiter is []
- Semi-colon only left out for implicit return or in single-statement block
- No $&, $` or $'
- Localization of globals if they're to be changed (local $_ often avoids weird bugs)
- Semi-colon not on its own line
- (in|de)crement in void context is post(in|de)crement
- No map or grep in void context
- ? and : begin lines in complex expressions
- True and false are always implied. No $foo == 0 when testing for truth.
- Only constructors return $self. Accessor methods never do this.
- Stacking methods is okay, but a non-constructor method should never return $self.
- Accessor methods should behave like variables (Attribute::Property!)
- Other methods should behave like subroutines
- our $VERSION, not use vars qw($VERSION);
- Module version numbers are ^\d+\.\d\d\z
- Error checking is done using or. This means open or do { ... } instead of unless (open) { ... } when handling the error is more than a simple statement.
- The result of the modulus operator (%) has no useful boolean meaning (it is reversed), so explicit == 0 should be used.
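As promised above, here is a short snippet (an illustration, not part of the original list) that applies several of these rules: 4-space indents, a cuddled else, a guard instead of a large else-block, a lexical filehandle, list assignment of arguments, explicit returns, error checking with "open or do", and subs before the main code.

#!/usr/bin/perl
use strict;
use warnings;

# Count non-comment lines in each file given on the command line.
sub count_code_lines {
    my ($path) = @_;                  # list assignment, not a chain of shifts
    open my $fh, '<', $path or do {   # lexical filehandle, "open or do" on error
        warn "Can't read $path: $!\n";
        return;
    };
    my $count = 0;
    while (my $line = <$fh>) {
        next if $line =~ /^\s*#/;     # guard instead of a large else-block
        $count++;
    }
    close $fh;
    return $count;
}

sub report {
    my (@paths) = @_;
    for my $path (@paths) {
        my $count = count_code_lines($path);
        if (defined $count) {
            printf "%-20s %d\n", $path, $count;
        } else {
            print "$path skipped\n";  # cuddled else, opening curly on the keyword's line
        }
    }
    return;
}

report(@ARGV);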
Jun 16, 2002 | www.perlmonks.org
How can I convert Perl code into HTML with syntax highlighting?
ChemBoy
Use Perltidy's HTML formatting options (-html etc.). As shown in the Perltidy manual, you can generate a whole page or a simple preformatted section, with embedded or linked style sheets, and set different colors for all manner of different constructs.
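A minimal sketch of the same thing through Perl::Tidy's module interface (the file names here are made up, and the exact option plumbing is worth double-checking against the Perl::Tidy documentation); the command-line equivalent is simply perltidy -html script.pl.

use strict;
use warnings;
use Perl::Tidy;

# Turn script.pl into syntax-highlighted HTML (hypothetical file names).
Perl::Tidy::perltidy(
    source      => 'script.pl',
    destination => 'script.pl.html',
    argv        => '-html',
);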
vim can do it.
:runtime! syntax/2html.vim
To view the help page:
:help 2html
Details are in the vim help under "convert-to-HTML".
Code2HTML is a free Perl script that syntax-highlights 15 different programming languages.
Emacs can do it. I often use the htmlfontify library.
By Wolf Richter www.testosteronepit.com
Tuition has done it again: up by 8.3% for universities and by 8.7% for community colleges, according to the College Board. Here in California, tuition increases are outright ridiculous. Much of it will be paid for with student loans (though grants, scholarships, other aid, and tax credits will cover some of it). Student loan debt will exceed $1 trillion by the end of the year-a stunning amount. But unlike other debt, it cannot be discharged in a bankruptcy.
The skyrocketing costs of higher education add to the strains already weighing down the middle class, whose median household income has fallen 9.8% between December 2007 and June 2011 (Sentier Research) and whose real wages have declined 1.8% over the last year (BLS) and around 9% since their peak in 1999.
We all support education; we want the next generation to be productive. So now, under increased pressure to "do something," the Obama administration has come up with a Band-Aid, which includes income-based payment limits and ultimate debt forgiveness in certain cases-an accelerated implementation of program improvements that would have taken effect in 2014. Looking forward, it is likely that more taxpayer funded relief is on the way.
But the system itself is dysfunctional. The cause: a misalignment of interests within the complex relationships between students, universities, the student-loan industry, and the federal government.
Universities are businesses, and businesses have the natural purpose to charge the maximum price the market will bear. But unlike Walmart or the shop down the street, universities operate in a system that has become devoid of market forces, and when they demand higher tuition, the whole system falls in line to support those increases:
- Students want an education, and they have to get it within the higher education system. When costs go up, they can't massively drop out; doing so would endanger their future. They can choose cheaper universities and community colleges, but they're all within the same system, and they're all doing the same thing: jacking up tuition and fees. And the very logical mechanism of "out-of-state tuition" ends up being a highly anti-competitive measure. So students fight tuition increases the only way they can: obtain more funding.
- The student loan industry profits from processing student loans and related government subsidies. Naturally, they encourage students to take on more debt. Risk would function as a natural brake for making loans. But in the student loan industry, there is little or no risk as the government guarantees the loans, and loans cannot be discharged in bankruptcy. Further, the amount of a student loan is a function of the cost of a particular school-which further reduces price competition between schools.
- The government, in constant need of voter support, will fund or guarantee whatever it takes to allow students to get their education. Any cutback would be perceived as a way to strangle the education of the next generation. The only option the government would have is to limit what universities can charge, similar to the limits that the government imposes on Medicare providers, but that option is a non-starter.
As a consequence, university budgets have become huge. Administrator salaries, bonuses, benefits, golden parachutes, and pensions have shocked the public when they're exposed in the media. Programs that have little or nothing to do with education swallow up more and more money. And sure, everybody loves to have well-equipped labs.
sellstop
Tuition is high in part because people are lazy. Instead of going to a junior college, working while they do it, then moving up to a better-paying job and continuing their education as they can pay for it, they go for the full-time 4-10 years of college to get a big degree while they party. Then when they get out of college they expect to be made CEO of some company. 'Cause they don't like real work.
In other words, it is the same old credit-based society and the inflation that goes with it. Don't blame anyone but the participants. gh
blindman http://www.youtube.com/watch?v=pLwh9eMvcuc MONEY: Before Ron Paul, was Merrill M.E. Jenkins Sr. (M.R.) . http://www.youtube.com/watch?v=ff0dcR1_ZeE&feature=related MONEY: Before Ron Paul, was Merrill Jenkins (M.R.) 2 . http://www.youtube.com/watch?NR=1&v=3Ha8HKSUxgk MONEY: Before Ron Paul, was Merrill Jenkins (M.R.) 3 . http://www.youtube.com/watch?v=0yrSHMn_CZo&feature=related MONEY: Before Ron Paul, was Merrill Jenkins (M.R.) 4 . ...etc.
Beancounter
What do you call it again when cheap subsidized credit is chasing a modestly constrained resource (a degree) produced by a tax-subsidized entity? A bubble? Hell yeah. Time to eliminate the tax exemptions for nonprofit education...
Catullus
Missing in the "income-based" student loan repayment is that if the government "discovers" cost savings 15 years from now by not allowing the debt to be forgiven, it becomes a permanent 10% income tax increase. It's how you sneak in an income tax increase to a generation of debt slaves.
philipat
It seems to me that education has been very slow to adopt technology. I mean attending lectures is just a waste of everyone's time when the subject matter could be better covered online. By adopting online study programs, courses could be shorter, especially for students who are prepared to work harder, and staff numbers could be reduced. But, of course...........
Mark Pappa
Colleges have the most bloated administrative staffs I have ever seen. Who would ever quit when their kids get to attend the college for free? In CT, they are giving in-state tuition to illegal immigrants, squeezing out the in-state taxpayers. The system is insane.
topcallingtroll
I had always planned for my kids to be able to go to harvard or some other expensive private school. However i wont qualify for financial aid.
I now think they are much better off skipping harvard and me just giving them a couple hundred thousand.
Being born in the mid sixties I was part of the last group that could benefit from getting as much of the most expensive education possible. That strategy for success is no longer valid.
RafterManFMJ
I had always planned for my kids to be able to go to harvard or some other expensive private school. However i wont qualify for financial aid.
I now think they are much better off skipping harvard and me just giving them a couple hundred thousand.
Being born in the mid sixties I was part of the last group that could benefit from getting as much of the most expensive education possible. That strategy for success is no longer valid.
Your biggest mistake was working hard and making a large income, as well as being white. I'll bet you feel pretty damn stoopid. Here, I'll help you improve your plans for your kids; convert your 'couple a hundred thousand' into shiny PMs, then give them to them...
...and why go to Harvard now? Would be terrible to graduate into the 'elite' just when people start hanging them from lampposts by the thousands, right?
AldousHuxley
Employee name | Job title | Campus | Overtime pay | Gross pay (descending)
Tedford, Jeff | Head Coach-Intercolg Athletics | Berkeley | $0 | $2,342,315
Howland, Benjamin Clark | Coach, Intercol Athletics, Head | Los Angeles | $0 | $2,058,475
Busuttil, Ronald W | Professor-Medcomp-A | Los Angeles | $0 | $1,776,404
Leboit, Philip E | Prof Of Clin___-Medcomp-A | San Francisco | $0 | $1,574,392
Mccalmont, Timothy H | Prof Of Clin___-Medcomp-A | San Francisco | $0 | $1,573,494
Shemin, Richard J | Professor-Medcomp-A | Los Angeles | $0 | $1,400,000
Neuheisel, Richard Gerald | Coach, Intercol Athletics, Head | Los Angeles | $0 | $1,290,136
Azakie, Anthony | Assoc Prof In Res-Medcomp-A | San Francisco | $0 | $1,098,588
Ames, Christopher P | Aso Prof Of Clin___-Medcomp-A | San Francisco | $0 | $945,407
Montgomery, Michael J. | Head Coach-Intercolg Athletics | Berkeley | $0 | $918,562
Weinreb, Robert N. | Professor-Medcomp-A | San Diego | $0 | $869,436
Lawton, Michael T | Prof In Res-Medcomp-A | San Francisco | $0 | $821,468
Esmailian, Fardad | Hs Clin Prof-Medcomp-A | Los Angeles | $0 | $815,958
Reemtsen, Brian L | Hs Asst Clin Prof-Medcomp-A | Los Angeles | $0 | $813,937
Berggren, Marie N | Treasurer Of The Regents | Office of the President | $0 | $810,341
Jamieson, Stuart W | Professor-Medcomp-A | San Diego | $0 | $807,500
Gershwin, Merrill E | Professor-Medcomp-A | Davis | $0 | $796,135
Prusiner, Stanley B | Professor-Medcomp-A | San Francisco | $0 | $795,574
Schanzlin, David J. | Prof Of Clin___-Medcomp-A | San Diego | $0 | $793,000
Vail, Thomas P | Professor-Medcomp-A | San Francisco | $0 | $780,000
Read more: http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2009/06/05/ucpay2008.DT...
AldousHuxley
Internet exposes educational system's corruption.
Six figure salaries for teachers in OHIO:
Check out AXNER DAVID's $180k/year salary while working only 260 days a year!
Year Name District Building Salary Days Worked 2007 ABEGGLEN JOHN W. West Clermont Local West Clermont Local $121,034.00 218 2008 ABEGGLEN JOHN W. West Clermont Local West Clermont Local $123,454.00 218 2010 ACKERMAN JILL A. Lima City Lima City $102,000.00 260 2011 ACKERMAN JILL A. Lima City Lima City SD $104,040.00 260 2011 ACKERMANN TIMOTHY A. Milford Exempted Village Milford Ex Vill SD $102,375.00 223 2010 Acomb Michael J. Solon City Orchard Middle School $104,252.00 227 2011 Acomb Michael J. Solon City Orchard Middle School $104,252.00 227 2011 ACTON JAMES F. Finneytown Local Finneytown Local SD $104,981.00 265 2007 AdamsJohn Willoughby-Eastlake City Willoughby-Eastlake City $117,147.00 261 2008 AdamsJohn Willoughby-Eastlake City Willoughby-Eastlake City $117,147.00 261 2008 AdamsJennie Berea City Berea City $104,864.00 260 2009 AdamsJohn Willoughby-Eastlake City Willoughby-Eastlake City $123,675.00 261 2009 AdamsJennie Berea City Berea City $107,333.00 260 2010 AdamsJohn Willoughby-Eastlake City Willoughby-Eastlake City $123,675.00 261 2010 AdamsJennie Berea City Berea City $109,451.00 260 2011 ADAMS NATASHA L. Forest Hills Local Nagel Middle School $103,384.00 232 2011 ADAMS WILLIAM J. Revere Local Revere High School $100,410.00 248 2011 AdamsJohn Willoughby-Eastlake City Willoughby-Eastlake City SD $123,675.00 261 2011 AdamsJennie Berea City Berea City SD $113,573.00 260 2011 ADEN MARC E Cleveland Heights-University Heights City Cleveland Heights High School $101,914.00 220 2010 ADKINS MARY-ANNE Aurora City Craddock/Miller Elementary School $101,004.00 185 2011 ADKINS MARY-ANNE Aurora City Craddock/Miller Elementary School $101,004.00 185 2011 ADKINS PATRICK D. Port Clinton City Port Clinton City SD $100,587.00 260 2007 ADREAN ANGELA A. Gahanna-Jefferson City Gahanna South Middle School $105,209.00 220 2008 ADREAN ANGELA A. Gahanna-Jefferson City Gahanna South Middle School $109,646.00 220 2009 ADREAN ANGELA A. Gahanna-Jefferson City Gahanna South Middle School $114,254.00 220 2010 ADREAN ANGELA A. Gahanna-Jefferson City Gahanna South Middle School $119,039.00 220 2011 ADREAN ANGELA A. Gahanna-Jefferson City Gahanna South Middle School $120,397.00 220 2007 AERNI KENNETH H. Maumee City Maumee City $110,626.00 223 2008 AERNI KENNETH H. Maumee City Maumee City $114,461.00 224 2009 AERNI KENNETH H. Maumee City Maumee City $118,430.00 224 2010 AERNI KENNETH H. Maumee City Maumee City $118,430.00 224 2011 AERNI KENNETH H. Maumee City Maumee City SD $120,759.00 223 2010 AHO MARTIN H. Twinsburg City Twinsburg City $106,446.00 260 2011 AHO MARTIN H. Twinsburg City Twinsburg City SD $108,166.00 260 2011 Aker Jeffrey C. Solon City Solon High School $102,694.00 186 2007 ALEXANDER GLENN C. Clermont County Educ Serv Cntr Clermont County Educ Serv Cntr $121,550.00 261 2007 ALEXANDER GLENN C. Clermont County Educ Serv Cntr Clermont County Educ Serv Cntr $121,550.00 132 2008 ALEXANDER GLENN C. Clermont County Educ Serv Cntr Clermont County Educ Serv Cntr $121,550.00 261 2009 ALEXANDER GLENN C. Clermont County Educ Serv Cntr Clermont County Educ Serv Cntr $121,550.00 261 2010 ALEXANDER GLENN C. Clermont County Educ Serv Cntr Clermont County Educ Serv Cntr $121,550.00 261 2011 ALEXANDER GLENN C. Clermont County Educ Serv Cntr Clermont County Educ Serv Cntr $121,550.00 261 2007 ALEXANDER RAMI CAROLINE J. Olmsted Falls City Olmsted Falls City $112,054.00 227 2008 ALEXANDER RAMI CAROLINE J. Olmsted Falls City Olmsted Falls High School $112,718.00 227 2009 ALEXANDER RAMI CAROLINE J. 
Olmsted Falls City Olmsted Falls High School $112,718.00 227 2011 ALEXANDER-JONES SHIRLEY L Columbus City School District Columbus City Schools City SD $103,332.06 260 2010 ALIG JANA M. Reynoldsburg City French Run Elementary School $101,674.00 232 2011 ALIG JANA M. Reynoldsburg City Herbert Mills Elementary School $101,674.00 232 2007 ALLEN DAVID L. Mason City School District William Mason High School $111,330.00 261 2007 ALLEN DENNIS L. Rocky River City Rocky River City $134,780.00 261 2007 ALLEN DONALD S. London City London City $105,400.00 260 2007 ALLEN ALFRED Gahanna-Jefferson City Lincoln High School $102,310.00 220 2008 ALLEN ALFRED Gahanna-Jefferson City Lincoln High School $111,097.00 261 2008 ALLEN NANCY S. Centerville City Centerville City $109,930.00 224 2008 ALLEN DAVID L. Mason City School District William Mason High School $116,238.00 262 2008 ALLEN DONALD S. London City London City $107,400.00 260 2009 ALLEN DONALD S. London City London City $111,588.00 260 2009 ALLEN ALFRED Gahanna-Jefferson City Lincoln High School $111,097.00 261 2009 ALLEN DAVID L. Mason City School District William Mason High School $121,331.00 261 2009 ALLEN NANCY S. Centerville City Centerville City $113,357.00 261 2010 ALLEN DAVID L. Mason City School District William Mason High School $121,331.00 261 2010 ALLEN DONALD S. London City London City $114,500.00 260 2010 ALLEN NANCY S. Centerville City Centerville City $113,656.00 260 2010 ALLEN DAVID L. Mason City School District Mason City School District $125,123.00 261 2011 AllenAmy R. Kettering City J F Kennedy Elementary School $101,403.00 220 2011 ALLEN MARY C Columbus City School District Oakland Park Alternative Elementary $100,022.00 260 2011 ALLEN DAVID L. Mason City School District Mason City City SD $130,194.00 260 2011 ALLEN DONALD S. London City London City SD $114,500.00 260 2009 ALLENICK DAVID S. South Euclid-Lyndhurst City Brush High School $101,120.00 215 2010 ALLENICK DAVID S. South Euclid-Lyndhurst City Brush High School $106,074.00 215 2011 ALLENICK DAVID S. South Euclid-Lyndhurst City Brush High School $106,074.00 215 2011 ALSTON ANTHONY E Columbus City School District Beechcroft High School $104,562.12 260 2007 ALTHERR EVELYN E. Middletown City Middletown City $115,781.00 225 2008 ALTHERR EVELYN E. Middletown City Middletown City $118,652.00 225 2009 ALTHERR EVELYN E. Middletown City Middletown City $118,652.00 225 2008 AMES JOHN A. Loveland City Loveland City $102,677.00 222 2009 AMES JOHN A. Loveland City Loveland City $104,637.00 222 2010 AMES JOHN A. Loveland City Loveland City $107,954.00 222 2011 AMES JOHN A. Loveland City Loveland City SD $110,560.00 222 2008 AMODIO ROBERT M. Fairfield City Fairfield City $101,499.00 228 2009 AMODIO ROBERT M. Fairfield City Fairfield City $101,499.00 228 2010 AMODIO ROBERT M. Norwood City Norwood City $105,000.00 260 2011 AMODIO ROBERT M. Norwood City Norwood City Schools City SD $105,000.00 260 2007 AMOS MICHAEL J. Oak Hills Local Oak Hills Local $107,234.00 260 2008 AMOS MICHAEL J. Oak Hills Local Oak Hills Local $107,234.00 260 2009 AMOS MICHAEL J. Oak Hills Local Oak Hills Local $117,561.00 260 2010 AMOS MICHAEL J. Oak Hills Local Oak Hills Local $117,561.00 260 2011 AMOS MICHAEL J. Oak Hills Local Oak Hills Local SD $122,310.00 260 2007 ANDERSON STEPHEN P. Jackson City Jackson City $112,255.00 260 2007 ANDERSON BART G. 
ESC of Central Ohio Educ Serv ESC of Central Ohio Educ Serv $139,500.00 255 2008 AndersonLarry D Bellefontaine City Bellefontaine City $104,738.00 262 2008 ANDERSON STEPHEN P. Jackson City Jackson City $126,309.00 260 2008 ANDERSON BART G. ESC of Central Ohio Educ Serv ESC of Central Ohio Educ Serv $145,000.00 255 2008 ANDERSON EDNA S. Columbiana County JVSD Columbiana County JVSD $101,057.00 260 2008 Anderson Dennis P. Solon City Solon City $100,288.00 260 2009 ANDERSON ELIZABETH A. Rocky River City Rocky River City $118,994.00 260 2009 ANDERSON STEPHEN P. Jackson City Jackson City $128,252.00 264 2009 ANDERSON BART G. ESC of Central Ohio Educ Serv ESC of Central Ohio Educ Serv $149,350.00 255 2009 AndersonLarry D Bellefontaine City Bellefontaine City $109,975.00 261 2009 ANDERSON EDNA S. Columbiana County JVSD Columbiana County JVSD $104,594.00 260 2009 Anderson Dennis P. Solon City Solon City $104,211.00 260 2009 ANDERSON RONALD W. Greene Educ Serv Cntr Greene Educ Serv Cntr $102,870.00 190 2010 ANDERSON ELIZABETH A. Rocky River City Rocky River City $121,739.00 261 2010 ANDERSON BART G. ESC of Central Ohio Educ Serv ESC of Central Ohio Educ Serv $151,590.00 212 2010 ANDERSON EDNA S. Columbiana County JVSD Columbiana County JVSD $108,046.00 260 2010 ANDERSON RONALD W. Greene Educ Serv Cntr Greene Educ Serv Cntr $106,598.00 190 2010 AndersonLarry D Bellefontaine City Bellefontaine City $109,975.00 261 2010 ANDERSON MICHAEL D. East Cleveland City School District East Cleveland City School District $100,202.00 260 2011 ANDERSON ELIZABETH A. Rocky River City Rocky River City SD $129,192.00 261 2011 ANDERSON BART G. ESC of Central Ohio Educ Serv ESC of Central Ohio Educ Serv $155,590.00 212 2011 AndersonLarry D Bellefontaine City Bellefontaine City SD $109,975.00 261 2011 ANDERSON MICHAEL D. East Cleveland City School District East Cleveland City Schools Ci $109,492.00 260 2011 ANDERSON RONALD W. Greene Educ Serv Cntr Greene Educ Serv Cntr $111,750.00 190 2011 ANDERSON EDNA S. Columbiana County JVSD Columbiana County JVSD $111,612.00 260 2011 ANDRE CHARLES M. ESC of Central Ohio Educ Serv ESC of Central Ohio Educ Serv $103,083.00 200 2007 ANDREWS GEOFFREY G. Polaris JVSD Polaris Career Center JVSD $106,279.00 215 2008 ANDREWS GEOFFREY G. Oberlin City Schools Oberlin City Schools $101,416.00 260 2009 ANDREWS GEOFFREY G. Oberlin City Schools Oberlin City Schools $110,950.00 260 2010 ANDREWS GEOFFREY G. Oberlin City Schools Oberlin City Schools $113,085.00 260 2011 ANDREWS GEOFFREY G. Oberlin City Schools Oberlin City SD $113,085.00 260 2009 ANDRZEJEWSKI JANINE M. Parma City Valley Forge High School $103,713.00 230 2010 ANDRZEJEWSKI JANINE M. Parma City Valley Forge High School $105,780.00 230 2011 ANDRZEJEWSKI JANINE M. Parma City Valley Forge High School $105,780.00 230 2007 ANGELLO MARTHA J. Sycamore Community City Sycamore Community City $107,734.00 260 2008 ANGELLO MARTHA J. Sycamore Community City Sycamore Community City $110,966.00 260 2009 ANGELLO MARTHA J. Sycamore Community City Sycamore Community City $110,966.00 260 2010 ANGELLO MARTHA J. Sycamore Community City Sycamore Community City $115,740.00 260 2011 ANGELLO MARTHA J. Sycamore Community City Sycamore Community City SD $117,740.00 260 2011 ANTEL SHIRLEY A. Amherst Exempted Village Amherst Ex Vill SD $108,397.00 260 2008 APOLITO ELIZABETH A. Montgomery Educ Serv Cntr Montgomery Educ Serv Cntr $103,646.00 200 2009 APOLITO ELIZABETH A. Montgomery Educ Serv Cntr Montgomery Educ Serv Cntr $100,339.00 180 2010 APOLITO ELIZABETH A. 
Montgomery Educ Serv Cntr Montgomery Educ Serv Cntr $102,446.00 229 2011 APOLITO ELIZABETH A. Montgomery Educ Serv Cntr Montgomery Educ Serv Cntr $103,572.00 229 2011 APPLEBAUM ROBERT J. Maple Heights City Maple Heights City SD $100,981.00 260 2011 ARBAUGH JAY G. Keystone Local Keystone Local SD $106,000.00 260 2009 ARCHER RUTH Cleveland Metropolitan Cleveland Metropolitan $100,785.50 260 2010 ARCHER RUTH Cleveland Metropolitan Cleveland Metropolitan $100,785.50 260 2011 AREND LOIS E Columbus City School District Columbus City Schools City SD $103,093.12 260 2010 Argalas Roger Northwest Local Northwest Local $107,482.00 251 2011 Argalas Roger Northwest Local Northwest Local SD $110,643.00 251 2011 ARGENTI KAREN M. Amherst Exempted Village Marion L Steele High School $108,603.00 184 2009 ARLEDGE JR ROBERT L. Greene Educ Serv Cntr Greene Educ Serv Cntr $103,179.00 260 2010 ARLEDGE JR ROBERT L. Greene Educ Serv Cntr Greene Educ Serv Cntr $103,179.00 260 2011 ARLEDGE JR ROBERT L. Greene Educ Serv Cntr Greene Educ Serv Cntr $106,274.00 260 2007 ARMOCIDA ANTHONY M. Yellow Springs Exempted Village Yellow Springs Exempted Village $109,496.00 248 2008 ARMOCIDA ANTHONY M. Yellow Springs Exempted Village Yellow Springs Exempted Village $109,496.00 248 2009 ARMOCIDA ANTHONY M. Yellow Springs Exempted Village Yellow Springs Exempted Village $109,496.00 248 2007 ARMSTRONG BRUCE W. Highland Local Highland Local $128,048.00 260 2008 ARMSTRONG BRUCE W. Highland Local Highland Local $128,048.00 260 2009 ARMSTRONG BRUCE W. Highland Local Highland Local $128,048.00 260 2011 Arnoff Gail Solon City Grace L Roxbury Elementary School $102,694.00 186 2011 ARNOLD MATTHEW D. Beavercreek City Ferguson Middle School $103,965.00 213 2007 ASHWORTH MARY ANNE Delaware City Delaware City $111,427.00 261 2007 ASHWORTH DENNIS D. West Clermont Local Glen Este High School $102,630.00 260 2008 ASHWORTH MARY ANNE Delaware City Delaware City $115,210.00 262 2008 ASHWORTH DENNIS D. West Clermont Local Glen Este High School $104,683.00 260 2009 ASHWORTH MARY ANNE Delaware City Delaware City $121,757.00 261 2009 ASHWORTH DENNIS D. West Clermont Local Glen Este High School $112,456.00 260 2010 ASHWORTH DENNIS D. West Clermont Local Glen Este High School $114,704.00 260 2010 ASHWORTH MARY ANNE Delaware City Delaware City $121,290.00 260 2011 ASHWORTH MARY ANNE Delaware City Delaware City SD $121,290.00 260 2008 ATCHLEY JAMES R. Ansonia Local Ansonia Local $100,060.00 254 2009 ATCHLEY JAMES R. Ansonia Local Ansonia Local $103,062.00 260 2010 ATCHLEY JAMES R. Ansonia Local Ansonia Local $103,062.00 260 2011 ATCHLEY JAMES R. Ansonia Local Ansonia Local SD $103,062.00 260 2008 ATKINSON CHERYL L. Lorain City Lorain City $175,000.00 260 2009 ATKINSON CHERYL L. Lorain City Lorain City $175,000.00 260 2010 ATKINSON CHERYL L. Lorain City Lorain City $210,058.00 260 2011 ATKINSON CHERYL Lorain City Lorain City SD $210,058.00 260 2007 AUBUCHON WILLIAM R. Parma City Parma City $102,457.00 230 2008 AUBUCHON WILLIAM R. Lorain County JVS JVSD Lorain County JVS JVSD $113,000.00 260 2008 AUBUCHON WILLIAM R. Parma City Parma City $102,457.00 230 2009 AUBUCHON WILLIAM R. Lorain County JVS JVSD Lorain County JVS JVSD $113,000.00 260 2010 AUBUCHON WILLIAM R. Lorain County JVS JVSD Lorain County JVS JVSD $113,000.00 260 2009 AUGINAS CHRISTINE M. Shaker Heights City Shaker Heights City $113,626.00 250 2010 AUGINAS CHRISTINE M. Shaker Heights City Shaker Heights City $113,626.00 250 2011 AUGINAS CHRISTINE M. 
Shaker Heights City Shaker Heights City SD $113,626.00 250 2007 AukermanCatherine Berea City Berea City $101,862.00 260 2008 AukermanCatherine Berea City Berea City $107,685.00 260 2009 Aukerman Catherine Highland Local Highland Local $120,000.00 260 2009 Aukerman Catherine Berea City Berea City $107,685.00 260 2010 AUKERMAN CATHERINE L. Highland Local Highland Local $122,100.00 260 2011 AUKERMAN CATHERINE L. Highland Local Highland Local SD $122,100.00 260 2007 AULT MARK C. Indian Hill Exempted Village Indian Hill Exempted Village $109,000.00 260 2008 AULT MARK C. Indian Hill Exempted Village Indian Hill Exempted Village $116,480.00 260 2009 AULT MARK C. Indian Hill Exempted Village Indian Hill Exempted Village $120,557.00 260 2010 AULT MARK C. Indian Hill Exempted Village Indian Hill Exempted Village $122,667.00 260 2011 AULT MARK C. Indian Hill Exempted Village Indian Hill Ex Vill SD $124,813.00 260 2010 AUSTIN SHENISHA D Cleveland Heights-University Heights City Frank L Wiley Middle School $118,000.00 190 2008 AXNER DAVID E. Dublin City Dublin City $170,000.00 260 2008 AXNER DAVID E. Chagrin Falls Exempted Village Chagrin Falls Exempted Village $135,990.00 260 2009 AXNER DAVID E. Dublin City Dublin City $175,100.00 260 2010 AXNER DAVID E. Dublin City Dublin City $180,353.00 260 2011 AXNER DAVID E. Dublin City Dublin City SD $182,157.00 259 2007 BABCANEC WAYNE Norwalk City Norwalk City $101,113.00 260 2008 BABCANEC WAYNE Norwalk City Norwalk City $105,195.00 260 2009 BABCANEC WAYNE Norwalk City Norwalk City $105,195.00 260 2010 BABCANEC WAYNE Norwalk City Norwalk City $108,877.00 260 2011 BABCANEC WAYNE Norwalk City Norwalk City SD $108,877.00 260 2010 BACHO ALAN A. Sylvania City Sylvania City $102,827.00
boiltherich
But But But what happened to letting people make as much as they possibly can and keeping every cent without even income taxes to worry about?
I see, it is OK when it is YOUR money but a total bitch kitty when it is other people's.
I have a brother with whom I have nothing to do for personal reasons. He grew up in the same grinding poverty I did, real poverty, went to college in 1973 after high school, took a LOT of drugs, became a hippie, and dropped out of school. I went back to college late and got a degree in Finance; by then he was burned out on planting trees in mountain country in all weather and smoking weed for minimum wage, and he went back to school the following year for an accounting degree. I graduated in 1996, he in 1997.
I faced my own set of issues after graduation; he went to work for the county auditor/controller. For thirteen years he did the best work, and when the auditor/controller died or retired or whatever, he was appointed or elected into that position (I am not sure which, because I still think he is a flaming asshole and we do not speak). But California lists all elected officials' salary ranges (http://lgcr.sco.ca.gov/), and there is his name, $125,000-167,000, and I assume he makes the low end of the range.
The job description you can skip over if you like, but I post it for a reason: the responsibility and liability are huge, and a lot of the time it is out of your control. Serves as chief accounting officer in the County; records the financial transactions of the County and other related agencies; audits and processes claims for payment; issues receipts for all monies received by the County; prepares financial reports; compiles the County budget; audits and issues payroll checks; maintains personnel earning and benefit records; accounts for property tax monies; oversees the divisions of Payroll, Accounting, Accounts Receivable, Audit and Cost Accounting and Property Tax.
He was 56 yesterday. $125,000 now for a CPA and elected auditor-controller is just about equal to $12 grand a year when we were kids in the sixties. MIDDLE CLASS. It is anything but rich. People try to better themselves, and a few like my brother and me succeed in upward mobility from true poverty (you know true poverty when you have to go to school having eaten nothing in a couple of days, wearing a blouse your sister outgrew last year, and you are a boy) to just middle class. The vast majority of people doing what we did find they did it for nothing; they can get no job, let alone a middle-class or better job.
And you all who say take any job, if illegal Mexicans are willing to break the law and work under the table for less than minimum wages then you should be too, eat my feces and die, employers who hire such people do so because they are nameless and faceless, seen as subhuman, and I would have worked for them at points of desperation but they would not hire me because it is one thing to use illegal aliens as slaves, it is far worse to use Americans with an education for such work. They will not hire you. They go from a fine to a prison sentence if they do.
I had to take out a student loan in 1988 when I originally went back to college. Only $2,400, I also had $1,200 in California Board of Governors grant, $1,400 in Pell grant, a veteran disability of $62 per month, and old GI bill that paid $365 a month. Not enough, I had to work 20 hours a week in the hospital on top of that. I was still eating and living with mom. Our rent on the duplex we lived in was $850 a month. Cheapest place we could get.
All vets had a different office from the financial aid office, because of our GI Bill issues we all had one guy assigned just to take care of veterans. He went on vacation during semester break and while he was gone some fat cunt in the regular financial aid office took it upon herself to recalculate all the Pell Grants for veterans using their VA disability as income even though it was bold capitalized in black letters on the application form that VA disability was not to be counted. The result was when I went to collect my pay the next first of the month I was told I had been found to be $20 over the limit for Pell Grant and thus it was canceled.
When the vet rep came back from vacation he said don't worry about it I will fix it. Three days later I was told once a Pell Grant is voided it cannot be "fixed." I had to drop a class and take more hours at work to adjust for the loss of the Pell Grant. But when I did the VA and BOG Grant were also reduced by half forcing me to drop all classes and go to work full time just to eat. Now I owed the VA money for the payments in the semester I dropped out, with the intention of going back in the fall once the mess (that I had no part in creating) was cleared up.
No go. Once you have an overpayment from the VA you can't qualify for any federal financial aid, period till it is satisfied. I appealed but it took months and by the time they found in my favor the six months was up on the student loan, now I had to repay that before I could get any financial aid. Jesus fucking Key rist!
I lost my vehicle, I lost my job because it was 1984. I had no place to turn and was essentially homeless for years. No credit, part time jobs with no benefits, minimum wage or close to it jobs. Many of them. Moved every few months because I could not pay and nothing ever worked out the way it was supposed to. Broken promises, crooked employers that looked to take advantage of willing people. One employer was deducting SS and FICA but when I looked at my statement later they had no record I ever worked there, the boss was just pocketing the money they were supposed to pay into the government.
I finally gave up and asked the VA to reexamine my disability. They did and a couple years later they said OK we will up it to 40%. Why does it take years? But it qualified me to go back to college under Chapter 31 benefits for disabled vets. It was still only $800 a month plus tuition and books, with free medical care. I could not go back to school in 1993 on that, not in California, so I had a friend in New York that said I could live at his place free of charge if I wanted to go to school at UCON in Danbury. I agreed and went there, but the VA messed up the paperwork and my tuition was not paid. I had to go back to California and start over again.
I had to drive a car for a transport company because I could not afford to fly or take a train. I drove to Santa Cruz in less than 72 hours non stop. Most states were still 55 MPH speed limit then too. Talk about an intrusion into our daily lives.
They fixed the problem and I went to Columbus Ohio for their program in business at Ohio state. When I got there I was told that the annual $20,000 tuition was only for Ohio residents, my tuition would be $40,000. The VA would not pay it. But, they said that there was a private business university in town called Franklin University that they would pay for. I took it. I still had to work full time plus, and go to school in the evenings for almost four years, because we all know you can't get a four year degree in four years these days.
Fuck fuck fuck fuck fuck! It never ends for people born to parents who are not wealthy. All the promises made by America are broken unless you are rich to start with. At one point my debit card to access my VA money did not work. It had expired. The money was in my account but I could not access it. There were no BAC branches in Ohio, and they would not even talk to me on the phone. They did say they would send a new ATM card in the mail. Two days later my mother called and said she got a letter from Bank of America. They had sent my card to her address. I had not eaten in three days by then. I had my mother send it via overnight mail so I could get something to eat. Two days later, no card. I called the post office and bitched them out royally, and all they could say was that if I wanted to file a complaint or track overnight mail I could come into their main office. Total bottom. I had not eaten in five days. At that point I was actually and really ready to die killing fat pigs rather than spend one more night without eating. When my card would not work, all I had in the house was the sugar in a little sugar bowl. I searched the place for fallen tidbits, found a small bag of wallpaper paste powder, and ate some of it before I read that it had insecticide in it to kill silverfish. I was hallucinating when someone knocked on the door and said she was from the post office. A black woman with a three-foot-tall hair tower and six-inch nails, in a smoking beat-up old Camaro, asks me if I am Mark. I said yes, and was so embarrassed because I think I had a big diarrhea stain; I was so horrified. She had the mail from my mother; mom had spelled the street name wrong, printing Elsmere Street instead of Elesmere Street. For that dropped letter I had to deal with a bureaucracy that can only be deemed evil, and I ended up going six days with no food.
Trust me when I say that is only a small taste of the abuse our society has given me. No wonder people give up and go criminal. And I am supposed to lick the boots of the wealthy because the GOP says that your human worth is equal to your net cash worth?
Call me a terrorist, call me a commie, call me anything you want, but I am an honest, loving, educated man who happens to be gay. A citizen first, a man second, yet I am full to overflowing with the shit the rich and their boot-licking GOP have given us. DONE! Wall Street resists with violence now. Get ready for more.
Schmuck Raker
AH, are you familiar with the terms 'Proportion' or 'Link'?
falun bong
Depends what a country wants to spend its money on. America decided to spend all the money on great new ways to kill goat herders on the other side of the world. Educating its citizens? Not so much...
AldousHuxley
America has no choice.
They squandered generations of wealth gambling it away on Wall St. Since 1980, America has taken the position of colonizing oil-rich countries to support the gambling habit. WAR-OIL-FED is all part of the petrodollar system.
Everything else takes a backseat.
acaciapuffin
Could Wal-Mart start a college? It sounds funny; however, with the success they have had with their business, they could do something like this. Also, there could be an alternative to college which essentially gets people some basic skills they need.
Education for the 21st century is also something that should be addressed at the high school level. I am under the impression that in the 30's and 40's someone with a high school education was very highly educated, up to the current level of an associate's degree. What has happened in the last 50 years?
Don Smith
Acacia, thanks for your service. Now, about Wal-Mart - they are a huge part of the problem. They have destroyed more jobs in America than, I would guess, any other company.
1) They lure Americans into buying cheap crap from China.
2) They squeeze the vendors for lower-cost goods, keeping those vendors' profits down.
3) Jobs are shifted overseas through wage and tax arbitrage.
4) People take lower-paying jobs, and now need to shop at Wal-Mart to survive.
5) Their competitive advantage bankrupts smaller stores, forcing out more jobs.
6) Rinse and Repeat.
Were it not for wage arbitrage and our ridiculous tax policies, Wal-Mart would compete fairly and we wouldn't have the behemoth we have now. I applaud Wal-Mart for its model and how it got where it is, but for the last 15 or so years its effect on our economy has been irreversible and wholly negative.
Who cares about a $30 DVD player if the only job you can get is at Wal-Mart?
philipat
Not just Wal-Mart. Globalisation has been very kind to all US corporations but not so kind to the masses. Manufacturing has been systematically shifted to the lowest-cost locations, whilst transfer pricing has systematically shifted profits to offshore tax havens, such that most US corporations pay little, if any, tax. They also wholly own Congress, which therefore does nothing to change the totally corrupt system.
This is, or should be, at the heart of the Occupy Wall Street (And DC) movement. Unless the system changes, the future is bleak for the citizens of The United Corporatocracy of America.
acaciapuffin
My education took place in the military, in the schools I attended there and the field work I did. It shocked me when I would come back to my hometown and chat with the kids who had gone to college instead. I was out there learning about radios and law enforcement, and they were taking remedial English. In the meantime I have gone back to college and received my bachelor's; much of my military experience came in handy.
Melin
"The ultimate but innocent and well-meaning enabler: the setup of government guaranteed student loans."
Are you referring to the government as innocent and well-meaning in this sentence?
rlouis
We're from the government and we're here to help you....
darteaus
American ideal (used to be):
You pay for yourself, and I pay for myself. If you can't afford it, you can't buy it; and me neither. If I drive my cab twice as many hours as you do, then I should make twice the money. If I make money then it is my money. I don't owe you anything, and you don't owe me anything. If you watch TV all day, that's your wife's problem.
batterycharged
Yeah, unlike insurance companies and the health care system. That is working great.
Realize that the reason schools are expensive is that banks lend money in order to own students for life.
Take banks out of the picture and schools will have to function within a budget. Unlike the federal gov't, states can't print money.
When Joe from the ghetto can't get a $100,000 loan from sallie mae, all bets are off on tuition increases.
This is exactly what Milton Friedman wanted: indentured-servant students. Look it up.
boiltherich
The REAL problem at base is a debt problem, exactly as it was in the 1920s. I keep saying, and people keep ignoring me: when you live in an economy with a debt-based fiat currency (which does not have to be a bad thing if properly controlled), anybody who borrows or lends is in essence creating more debt-based money; that is counterfeiting. When there were strict standards, accounting rules, and enforcement long ago this was fine; the money supply was adequate most of the time to handle the transfers needed to service debt. But we have had a 40-year run-up in borrowing and lending, mostly among the top elite 1%, without a concomitant run-up in the cash money supply to service this debt, and the result is borrowing to pay bills on all levels.
The day you borrow a dime to pay a debt, you are toast. And that is as true for government as it is for me. The same laws that prevented this long ago are still mostly on the books, because it is fraud, but they are no longer enforced because business has bribed government (also illegal). Up the money supply, send every American adult a packet of cash, say $100,000, and then enforce the laws that are on the books, and guess what? The fallout will be far less than you think, and a whole lot better than poverty and class warfare and blood and death; the road we are on leads to the end of America.
navy62802
So the government is going to step in and try to fix a problem that it created. Wonderful. And by the way, this student debt problem has been around for several years now. It's only getting attention because it has been allowed to grow so incredibly large.
Mark my words, this is a debt bomb that is just waiting to explode. Students graduating with such incredible debt loads (sometimes in the six figures) are entering a collapsing job market. This story does not end well for anyone involved ... neither the students nor the taxpayers.
Vampyroteuthis
This crap will end quickly when the next credit crunch comes. Parents and students are both unemployed and no credit results in unpaid tuition. No students.
navy62802
At this pace, the US is going to be a nation of toothless vagabonds roaming the dust-covered ruins of a once-powerful nation. And all the profit that was squeezed out of the American Dream will be absolutely worthless.
- new setting ftp:use-tvfs (yes, no, auto).
- improved ftp path handling for servers without TVFS feature.
- improved closure matching, now *.EXT matches URLs ending with ".EXT".
- updated man page.
- updated translations.
- fixed mirror target directory naming.
This release provides updated system ROM images for the latest maintenance releases of HP ProLiant DL580 G7 (P65) Servers.
SP54712.exe (4.4 MB)
To ensure the integrity of your download, HP recommends verifying your results with this MD5 checksum:
File: SP54712.exe
Installation:
USB Key - HPQUSB.exe is a Windows-based utility to locally partition, format and copy necessary files to a USB flash media device (e.g. HP Drive Key) through the Windows environment. The created USB flash media device is made bootable and ready to locally restore and/or update the firmware on the system.
1. Obtain a formatted USB Key media.
2. Download the SoftPaq to a directory on a system running Microsoft Windows 2000, Microsoft Windows XP, Microsoft Windows Vista, Microsoft Windows 7, Microsoft Windows Server 2003, Microsoft Windows Server 2008, or Microsoft Windows Server 2008 R2 and change to that directory.
3. From that drive and directory, execute the downloaded SoftPaq file: Simply double click on the SPxxxxx.exe file and follow the installation wizard to complete the SoftPaq installation process. At the end of a successful installation of the SoftPaq a web page will automatically appear to provide you with the different methods for restoring and/or upgrading the firmware on the system.
4. After the USB Key is created, you may delete the downloaded file if you wish.
5. Insert this USB Key into the USB Key port of the system to be updated and power the system on to boot to the USB Key.
Version: 2011.05.23 (27 Sep 2011)
This component provides updated system firmware that can be installed directly on supported Operating Systems. Additionally, when used in conjunction with HP Smart Update Manager (HPSUM), this Smart Component allows the user to update firmware on remote servers from a central location. This remote deployment capability eliminates the need for the user to be physically present at the server in order to perform a firmware update.
CP016066.scexe
Installation:
To update firmware from Linux operating system on target server:
1. Login as root. (You must be root in order to apply the ROM update.)
2. Place the Smart Component in a temporary directory.
3. From the same directory, run the Smart Component.
For example: ./CPXXXXXX.scexe
4. Follow the directions displayed by the Smart Component to complete the firmware update.
5. Reboot your system if you would like the update to take effect immediately.
To use HPSUM on the HP Smart Update Firmware DVD (version 9.00 and later) or the Service Pack for ProLiant 2011.07.0 (and later):
1. Place the Firmware DVD or Service Pack for ProLiant on a USB key using the HP USB Key Creator Utility.
2. Place the firmware to be used for updating in the directory, /HPFWUPxxx/hp/swpackages on the USB key.
3. Boot from the newly created USB key.
4. Follow the on-screen directions to complete the firmware update.
When a hard drive, CD/DVD, USB stick, or any digital storage media is on its way to the Great Bitbucket in the Sky, GNU ddrescue is my favorite data recovery tool. GNU ddrescue is included in the default SystemRescue image. Before we dive into the fun stuff, there is some vexing naming confusion to clear up. There are two ddrescue programs in SystemRescue. GNU ddrescue, by Antonio Diaz, is the one I prefer. The version on the current SystemRescue release is ddrescue 1.14. There is also a dd_rescue, version 1.23, by Kurt Garloff. dd_rescue is nice, but it's slower than ddrescue and doesn't include as many features.
Just to keep it interesting, Debian Linux adds its own bizarre naming conventions. The Debian package name for GNU ddrescue is gddrescue, and the package name for dd_rescue is ddrescue. But the binary for gddrescue is /sbin/ddrescue, and the binary for dd_rescue is /bin/dd_rescue. Fortunately, SystemRescue doesn't mess with the original binary names, and calls them /usr/bin/ddrescue and /bin/dd_rescue.
Enough of that; let's talk about what makes GNU ddrescue my favorite. It performs block-level copies of the failing media, and so it doesn't matter what filesystem is on the media. You're probably thinking it sounds like the venerable dd command, and it is similar, with some significant improvements. dd works fine on healthy disks, but when it encounters a read error it stops, and you have to manually restart it. It reads the media sequentially, which is very slow, and if there are a lot of bad blocks it may never complete a full pass.
GNU ddrescue is fully automatic and fast for a block-level copy program, and you want speed when a drive full of important data is dying. It seeks out good blocks to copy and skips over the bad blocks. It optionally records all activity in a logfile, so you can resume where you left off if the copying is interrupted for any reason. It is best to always generate a logfile, because every time you power up the failing drive it becomes more likely to die completely. Using a logfile ensures that ddrescue will not repeat operations, but will move on and look for new good blocks to copy.
When you are rescuing a failing drive, the first step is to copy it with ddrescue. Then take the original offline, and perform any additional recovery operations on the copy. Don't touch the original any more than you have to. You can copy the copy as many times as you need for insurance.
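For example, once the block copy has completed you might check and browse the copy rather than the original (a sketch; device names are illustrative and an ext filesystem is assumed):
# fsck -f /dev/sdb1
# mkdir /mnt/rescued
# mount -o ro /dev/sdb1 /mnt/rescued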
You need a healthy drive to copy your rescued data to. I prefer USB-attached media such as a USB hard drive, USB thumb drive, Compact Flash, or SD cards. Of course a second internal hard drive is a good option, or this might be your chance to finally use that eSATA port that always looked like it should be cool and useful, but you never found a reason to use it. Your second drive should be at least 50% larger than the drive you're recovering. The troubled drive must not be mounted. The simplest invocation looks like this:
# ddrescue /dev/sda1 /dev/sdb1 logfile
Here, /dev/sda1 is a partition on the failing drive. Everything on /dev/sdb1 will be overwritten, and the logfile is created in your current working directory, which should not be on either of the two drives. You can name the logfile anything you want. You can rescue an entire drive if you prefer, like this:
# ddrescue /dev/sda /dev/sdb logfile
Note that if there is more than one partition on the failing drive and the partition table is damaged, you will have to re-create it on the rescue drive. I copy one partition at a time to avoid this sort of drama.
You can have ddrescue make multiple passes with the -r option; sometimes you can make a more complete recovery this way. You can go as high as you want; I use 3-5:
# ddrescue -r5 /dev/sda2 /dev/sdb1 logfile
Sometimes ddrescue is nearly magical for rescuing scratched CDs and DVDs. The first command makes a quick copy of the readable blocks, and the second retries the bad sectors using direct disc access:
# ddrescue -n -b2048 /dev/cdrom image logfile
# ddrescue -d -b2048 /dev/cdrom image logfile
You can give the image file whatever name you like. While I've never needed to go beyond the basics in this article, ddrescue has a whole lot of other capabilities that you can learn about in the GNU ddrescue manual.
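If the rescue succeeds, the recovered image can be loop-mounted read-only to verify its contents before burning it to a fresh disc (a sketch; the mount point is illustrative):
# mkdir /mnt/cdimage
# mount -o loop,ro image /mnt/cdimage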
Hello.
Firstly, thank you for your efforts in creating such a useful collection in a package that works so well. I love it and think it's a great system rescue tool.
Will you please consider including the Linux Disk Editor (http://lde.sourceforge.net/) in the next release of SystemRescue CD. I use it for recovering 'lost' partitions that parted won't even look at (yes I use 1.6.6 from your 0.2.8 CD). I use the statically linked lde-i386, as downloaded directly from SourceForge, and run it from a floppy after booting from your SystemRescue CD.
Incidentally, to really mess up a disk's partitions, just create them with parted, then load Partition Magic and let it 'fix' the 'misalignment' errors it finds, then watch as neither Partition Magic nor parted will look at the disk again. It doesn't always happen, but sometimes it does. One way to avoid this is to have only one person with one set of tools work on a system. Not always possible, unfortunately.
To fix this, I use gpart to give me a list of 'possible' partition locations, use linux disk editor to view the contents of the partition tables, and a calculator to determine the 'actual' table locations, then linux disk editor again to edit the tables so they work. Tedious, but such a relief (especially for the owner) when it all works again.
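The first step of that workflow looks roughly like this (a sketch; the device name is illustrative, and gpart only prints its guesses, it does not write anything by default):
# gpart /dev/sda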
I used to boot a DOS floppy and use Norton Disk Editor, but I much prefer to stay within Linux and use Linux tools.
Thank you for your consideration.
Andrew
Softpedia
...SystemRescueCd is based on Gentoo and, now, with this version, you can add the packages of your choice using Gentoo's package manager. This is done by including development tools (like gcc, automake, autoconf, etc.) and Gentoo-specific applications (such as emerge, equery, etc.) required to install new packages, since everything installed on Gentoo must be compiled.
SystemRescueCd includes 4 kernels, 2 standard ones (rescuecd and rescue64) and 2 alternative ones (altker32 and altker64). Now you can compile whichever kernel best fits your needs. This is usually done when you want more recent sources, need another driver, or simply need different compilation options.
Another clever feature in this version is the backstore, used primarily to keep your changes across a reboot. A backing-store is a loopback filesystem containing all the changed files of a system. Every file change (an edit, creation or deletion) is recorded in the backing-store, so you just have to load the appropriate one to return to the state you want.
Now, let's have a look at a list with some of the highlights of SystemRescueCd 1.1.0:
· The two kernel sources, standard and alternative, have been swapped;
· The majority of drivers are compiled as modules in the standard kernels (2.6.25.16);
· The majority of drivers are built into the alternative kernels (2.6.25.14);
· The necessary development tools (gcc, make, ...) and Gentoo tools (emerge, equery, ...) have been added;
· The nameif option has been added, which can be used to specify the name of each ethernet interface using the MAC address (ex: "nameif=eth0!00:0C:29:57:D0:6E,eth1!00:0C:29:57:D0:64");
· Support for backing-store loopback file systems has been introduced;
· Added support for speakup (screen reader support for blind users).
You can find more information regarding the new features and updates by visiting the official changelog.
Solid-state drives (SSDs) are gaining popularity as their prices fall. They outperform conventional hard drives in both random and streaming access patterns and open new possibilities. SSDs work in Linux out-of-the-box, but their performance and endurance can be greatly improved through various tunings: file systems like Ext4, Btrfs or XFS allow online and batched discard to trim unused sectors and have special features like pre-discard on file system initialization. Over-provisioning an SSD compensates for possible drawbacks when discard is not possible. Additional tweaks such as the noatime mount option, tmpfs for volatile data, native command queuing, and a few more finally provide I/O performance never seen before. Attendees know how to manage device mapper, LVM, software RAID, and Ext4. They want to solve I/O bottlenecks on Linux servers or simply accelerate their Linux laptops by using SSDs in the best possible way.
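As a rough illustration of two of the tunings mentioned (this is a sketch, not from the talk itself; it assumes an ext4 filesystem on /dev/sda1 mounted at /data):
mount -o noatime /dev/sda1 /data   # skip access-time updates to reduce write traffic
fstrim -v /data                    # batched discard: tell the SSD which blocks are unused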
Werner Fischer, Thomas-Krenn.AG Werner Fischer is a technology specialist at Thomas-Krenn.AG and chief editor of the Thomas Krenn Wiki. His current main focus is hardware-monitoring and I/O optimization, especially with Flash-based storage. Besides that Werner mainly deals with virtualization and high availability. He is a regular speaker about all these topics at many conferences, including LinuxTag, Open Source Data Center Conference, Profoss, Open Source Meets Business, Security Forum Hagenberg and other conferences (see www.wefi.net) and has written articles for the German ADMIN Magazin, Linux Technical Review, network Computing, and LANline Magazin. Before joining Thomas Krenn in 2005, Werner worked at IBM in Germany and Austria. Together with other IBMers he wrote two IBM Redbooks in the HA and storage area. Werner holds a graduate degree in computer and media security from the University of Applied Sciences of Upper Austria in Hagenberg.
BarryWilfred is right – and I add that I never noticed any of the neoliberals even trying; they consigned it to the 'far future, when the state withers away, and True Communism emerges'. John, the entire right-wing movement (including movement libertarianism) makes much much more sense when viewed through the paradigm of 'redistribute wealth upwards'. And we've seen what the GOP does when stripped of the usual luxuries, and put backwards onto their core – they went for removing as much money from the 90-odd percent as possible, while happily subsidizing the top 1%. And over the decades, the right has eagerly and successfully attacked entitlements, while keeping and extending crony capitalism. When they start acting like people who actually believe their propaganda, I'll believe it. Until then, their propaganda is just designed to set up the victims, and to salve the conscience of those 'useful idiot' liberals who look upon evil works, and come up with bullsh*t to cover for those works.
You put out a sweet review of Frum's 'Dead Right' several years ago, and have now seen those views put into actual operation. You and he both figured that those views were not politically achievable – yet the right has achieved most of them.
Are you willing to accept the evidence in front of you?
I'm getting a bit personal, John, but this has become your trademark, coming up with reasons to not accept the evidence of evil.
New features introduced in SystemRescueCd-1.1.0
- advanced customization: You can now install new packages to SystemRescueCd by doing an advanced customization.
- kernel recompilation: There is a new documentation about building a customized SystemRescueCd with your own kernel
- backstore: It allows you to keep your changes when you reboot sysresccd
- nameif: It allows you to specify the name of each ethernet interface using the mac address.
- rsync tutorial: Here is a new documentation about how to use rsync
Features introduced in SystemRescueCd-1.0.x
- You can now install new packages to SystemRescueCd by doing an advanced customization.
- New option root=auto to boot the first linux system found on the hard-disk
- SystemRescueCd has been ported to unicode (utf8)
- Use SystemRescueCd remotely with VNC-server
- New boot options for advanced ethernet configuration
- How to use Xvesa when Xorg fails to start, so that you can always get the graphical environment to work.
- New chapter in the handbook that explains How to manage remote servers using SystemRescueCd
- The GPT disklabel is the new generation partition table that supports large disks (over 2TB) and more than four partitions.
- The autorun feature has been rewritten. It supports more options and scripts can be downloaded from an http web server.
- Network booting via PXE can download the sysrcd.dat filesystem through TFTP as well as HTTP
- SystemRescueCd-1.0.1 is now based on unionfs and it comes with JWM as the default window manager.
KNOPPIX: Live GNU/Linux System KNOPPIX is a GNU/Linux live system running off DVD, USB flash disks or over the network, focused on productive mobile working on different computers with a personal, customized system. It was first presented at the Atlanta Linux Showcase 11 years ago. Now based on Debian, it has undergone several changes and extensions to keep it useful for many purposes, including "Microknoppix" as a base for custom derivatives. The talk gives an overview of the various technologies used in Knoppix and the Knoppix build process, and presents some possibly lesser-known use cases.
Klaus Knopper, KNOPPER.NET Klaus Knopper holds a master's degree in electrical engineering, works as a freelance consultant and software developer in his main profession, and teaches software engineering at the department of business administration of the University of Applied Sciences Kaiserslautern/Zweibrücken. His current projects are the KNOPPIX live GNU/Linux system, the ADRIANE desktop for blind computer users and a few other related projects like LINBO. He has been speaking at LinuxAsia/OsiWeek India, LinuxTag, Atlanta Linux Showcase and CeBIT.
SSDs are becoming more and more common, but they are still restricted in size and lifetime. This makes their usability as the main hard drive rather limited. So instead it makes more sense to use them for caching I/O accesses to a normal HDD. In this talk I will present two approaches to this, 'flashcache' and 'ssdcache'. The first is an existing implementation currently in use at taobao (cf. taobao.com). It uses an on-disk bitmap format to store the block references and implements both write-through and write-back caching. The second is a re-implementation of mine using a different paradigm: metadata is not stored on disk, but rather in a persistent memory region, and latency is minimized, preferring fast I/O to expensive cache lookups. I'll give some results for both and discuss the pros and cons of each approach.
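For flashcache, setting up a write-back cache device looks roughly like this (a sketch, not taken from the talk; it assumes the flashcache kernel module and utilities are installed, and the device names are illustrative):
flashcache_create -p back cachedev /dev/sdc /dev/sdb   # /dev/sdc is the SSD caching the slower /dev/sdb
mount /dev/mapper/cachedev /data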
Hans Reinecke, SUSE
Studied physics with a main focus on image processing in Heidelberg from 1990 until 1997, followed by a PhD at Edinburgh's Heriot-Watt University in 2000. Worked as a sysadmin during his studies, mainly in the Mathematical Institute in Heidelberg. Linux addict since the earliest days (0.95); various patches to get Linux up and running. Now working for SUSE Linux Products GmbH as a senior engineer with focus on storage and mainframe support. Main points of interest are all the nifty things you can do with storage: (i)SCSI, multipathing, FCoE and the like. And S/390, naturally. Plus occasionally maintaining the aic79xx driver.
It's been a while since I wrote my review of lightweight Linux distributions. In my very first review of 2008 I took a look at the following distributions:
- Arch 2007.08-2
- Damn Small Linux 4.2.5
- Puppy 4.0
- TinyMe Test7-KD
- Xubuntu 8.04
- Zenwalk 5.0
Those days I concluded TinyMe was the best choice, followed directly by Zenwalk. Now let's see how those distributions have evolved in a couple of mini reviews...
First of all, at this moment TinyMe isn't the way to go; on the 6th of April 2011 the developer decided to 'hibernate' the project for an undetermined length of time. This is in my opinion not very good news and I expect TinyMe to die. The last stable release was almost three years ago!
Zenwalk is still under heavy development; in May of this year version 7 was released...
Puppy Linux is also alive and kicking; at the time of writing version 5.2.8 is only one month old.
I won't re-review Arch; although it's still under full development and is a great distribution, it doesn't really fit this category. You can make Arch lightweight, but then you have to do it yourself. Damn Small Linux drops out because it is too lightweight, and Xubuntu because it has lost most of its lightweightness (according to 'reliable' sources on the web; to be honest I no longer have personal experience with Xubuntu).
The newcomer is Lubuntu, also based on Ubuntu, but lighter ('the old Xubuntu'?).
So, the battle will go between:
- Zenwalk 7.0 Standard Edition
- Puppy Linux 5.2.8
- Lubuntu 11.10
Of course these are not all the lightweight Linux distributions out there, but according to Distrowatch.com these are more or less the most popular ones.
naked capitalism
Readers may recall that we discussed a Financial Times op-ed by University of Massachusetts professor of political science and favorite Naked Capitalism curmudgeon Tom Ferguson which described a particularly sordid aspect of American politics: an explicit pay-to-play system in Congress. Congresscritters who want to sit on influential committees, and even more important, exercise leadership roles, are required to kick in specified amounts of money to their party's coffers. That in turn increases the influence of party leadership, since funds provided by the party machinery itself are significant in election campaigning. And make no mistake about it, they are used as a potent means of rewarding good soldiers and punishing rabble-rousers.
A new article by Ferguson in the Washington Spectator sheds more light on this corrupt and defective system. Partisanship and deadlocks are a direct result of the increased power of a centralized funding apparatus. It's easy to raise money for grandstanding on issues that appeal to well-heeled special interests, so dysfunctional behavior is reinforced.
Let's first look at how crassly explicit the pricing is. Ferguson cites the work of Marian Currander on how it works for the Democrats in the House of Representatives:
Under the new rules for the 2008 election cycle, the DCCC [Democratic Congressional Campaign Committee] asked rank-and-file members to contribute $125,000 in dues and to raise an additional $75,000 for the party.
- Subcommittee chairpersons must contribute $150,000 in dues and raise an additional $100,000.
- Members who sit on the most powerful committees … must contribute $200,000 and raise an additional $250,000.
- Subcommittee chairs on power committees and committee chairs of non-power committees must contribute $250,000 and raise $250,000.
- The five chairs of the power committees must contribute $500,000 and raise an additional $1 million.
- House Majority Leader Steny Hoyer, Majority Whip James Clyburn, and Democratic Caucus Chair Rahm Emanuel must contribute $800,000 and raise $2.5 million.
- The four Democrats who serve as part of the extended leadership must contribute $450,000 and raise $500,000, and the nine Chief Deputy Whips must contribute $300,000 and raise $500,000.
- House Speaker Nancy Pelosi must contribute a staggering $800,000 and raise an additional $25 million.
Ferguson teases out the implications:
Uniquely among legislatures in the developed world, our Congressional parties now post prices for key slots on committees. You want it - you buy it, runs the challenge. They even sell on the installment plan: You want to chair an important committee? That'll be $200,000 down and the same amount later, through fundraising…..
The whole adds up to something far more sinister than the parts. Big interest groups (think finance or oil or utilities or health care) can control the membership of the committees that write the legislation that regulates them. Outside investors and interest groups also become decisive in resolving leadership struggles within the parties in Congress. You want your man or woman in the leadership? Just send money. Lots of it….
The Congressional party leadership controls the swelling coffers of the national campaign committees, and the huge fixed investments in polling, research, and media capabilities that these committees maintain - resources the leaders use to bribe, cajole, or threaten candidates to toe the party line… Candidates rely on the national campaign committees not only for money, but for message, consultants, and polling they need to be competitive but can rarely afford on their own..
This concentration of power also allows party leaders to shift tactics to serve their own ends….They push hot-button legislative issues that have no chance of passage, just to win plaudits and money from donor blocs and special-interest supporters. When they are in the minority, they obstruct legislation, playing to the gallery and hoping to make an impression in the media…
The system …ensures that national party campaigns rest heavily on slogan-filled, fabulously expensive lowest-common-denominator appeals to collections of affluent special interests. The Congress of our New Gilded Age is far from the best Congress money can buy; it may well be the worst. It is a coin-operated stalemate machine that is now so dysfunctional that it threatens the good name of representative democracy itself.
If that isn't sobering enough, a discussion after the Ferguson article describes the mind-numbing amount of money raised by the members of the deficit-cutting super committee. In addition, immediately after being named to the committee, several members launched fundraising efforts that were unabashed bribe-seeking. But since the elites in this country keep themselves considerably removed from ordinary people, and what used to be considered corruption in their cohort is now business as usual, nary an ugly word is said about these destructive practices.
Ferguson gave a preview of his article last week on Dylan Ratigan:
Selected comments
Rex
Congressional theme song?
LucyLulu
How depressing. Secession isn't looking better all the time. Maybe nobody would notice if a group left with Wyoming.
anon
Over at his blog, Glenn Greenwald highlights an article that takes the bought-and-paid-for-government problem a step further:
Rex
"UPDATE IV: In Slate, Anne Applebaum actually argues that the Wall Street protests are anti-democratic because of their "refusal to engage with existing democratic institutions." In other words, it's undemocratic to protest oligarchic rule; if these protesters truly believed in democracy, they would raise a few million dollars, hire lobbying firms filled with ex-political officials, purchase access to and influence over political leaders, and then use their financial clout to extract the outcomes they want. Instead, they're attempting to persuade their fellow citizens that we live under oligarchy, that our democratic institutions are corrupted and broken, and that fundamental change is urgent - an activity which, according to Applebaum, will "simply weaken the [political system] further."
Could someone please explain to her that this is precisely the point? Protesting a political system and attempting to achieve change outside of it is "anti-democratic" only when the political system is a healthy and functioning democracy. Oligarchies and plutocracies don't qualify."
http://www.salon.com/2011/10/17/what_are_those_ows_people_so_angry_about/singleton/
Watched Rachel Maddow this evening (Oct 17). In one part she interviewed Barney Frank, who mainly whined that the protesters aren't doing things properly, i.e. through voting for Democrats - that's worked so well recently.
patrick
I looked for a link that gets right to the interview - this may work - http://www.realclearpolitics.com/video/rachel_maddow/ You may need to find the section titled *Barney Frank: "I'd Like To Go Even Lower" On Surtax For Millionaires*
My immediate thought was that they had two years of pretty strong power and they didn't fix much. Voting is working so well for us to this point. Choices between rocks and hard pointy places abound.
I guess not only the bankers are a bit uncomfortable about this OWS kind of uprising.
Mark Twain said: "If voting could change anything it would be illegal."
darms
Frank had the chance of making a significant change and he, Dodd, Obama, and the Democrats did nothing. We would be foolish to think that they have learnt any lessons since.
Sorry but the quote is from Emma Goldman, not Samuel Clemens…
Linus Huber
That the legislator has been corrupted is obvious considering how banks were able to get the laws that allow them to loot the system. We have too wide a gap between justice and the law. Occupy Wall Street is one of the results of this situation, but people generally are not yet able to pinpoint where the system went wrong. In my opinion it all starts with the spirit of the rule of law, which has been violated repeatedly in favor of the few at the expense of the many.
sleeper
The few had better start running, because such great injustice caused by the effectiveness of lobbying by financial interests will unleash a powerful reaction.
The old adage is that Congress holds the purse strings, that is, it sets taxes and spending.
Z
So to raise funds Congress routinely sells tax code "adjustments". That is why the tax code is so long and convoluted. So any congressional talk on taxes or simplifying taxes is the pot calling the kettle black.
These guys will happily sell their grandmother if need be.
And the MSM ought to be prefacing their interviews with -
Good morning Congressman. Welcome to Talking Heads. Before we get started, could you tell us how much you've paid so far for your committee seat? Oh, and how's the fundraising going?
I worked for a democratic congressional campaign in 2006 … this was back when the despicable emanuel ran the dccc. Our candidate was against the Iraq War, which is why I volunteered to work for his campaign. The polling data we got was produced by Lake Research, run by democratic establishment hack Celinda Lake. I strongly believe that the polling was either provided by the dccc or the dccc strongly encouraged our campaign to use Lake. The polling data came up with really screwy conclusions, such that even the voters in my congressional district, which was predominantly republican, that were not in favor of bush were still in favor of the war. That didn't make much sense to me, but that's what Lake's outfit came up with, which just happened to support what emanuel was pushing for democratic candidates: don't run against the war. Though I didn't realize what was going on at the time, as I was removed from the inner strategic workings of the campaign, emanuel was heavily backing democratic candidates who either favored the war or didn't make an issue of it over those that were running on opposing it, despite the fact that the majority of the public was going against the war. Our candidate went to dc to talk to the dccc about getting funding from them and shortly afterward changed his views on the war, to the point that he even took the stance that he would have voted in favor of the resolution to allow bush the power to go to war with Iraq.
Timothy Gawne
The democrats won a ton of seats in the 2006 election because the country was fed up with bush and the war, but emanuel made sure that the "right" kind of democrats won: ones that were more beholden to the establishment than the people, which set the stage for the numerous "compromises" the democratic leadership have made with the blue dogs to pass pro-corporate, pro-war, pro-wall street legislation.
Z
Excuse me: partisanship and deadlock? In your dreams! I wish we had more partisanship and deadlock, it might be a good thing.
mobster rule
We only have 'deadlock' when there is the potential for voting for something that might benefit the average American. When the rich and powerful want something there is absolutely no deadlock at all. Not a jot.
Look at how easily congress passed the latest 'free' trade bill giving away yet more American jobs and industries. Or how easily they gave away trillions in dollars to the big banks. Or how easily they pass bills giving hundreds of billions to the big defense contractors. No debate, no muss, no fuss.
With respect, any talk of 'deadlock' is just missing the point, and allowing yourself to be fooled by the Kabuki theater that is American politics.
Since trading in influence and abuse of function are institutionalized in Congress, it's interesting that the international review of the US for the Convention Against Corruption was scoped to exclude Articles 18 and 19 regarding trading in influence and abuse of function. Previous review efforts reported without comment the US claim that it had the two offenses covered by USC bribery statutes. So I'm sure that given the rock-solid integrity of the Holder DoJ, we can expect a rerun of ABSCAM any day now, huh?
Joe Rebholz
So congress should change the way committee assignments are made. Below is a suggestion. I make this suggestion not because I think it will be accepted now or later, but just because if we don't put out ideas of how things might be improved, they never will be improved. Articles like this without suggestions for improvement are likely to increase cynicism and hopelessness. So here is the suggestion:
Have committee assignments, chairmanships, and all other leadership positions be determined by vote of the whole chamber. And no money-raising requirements by parties for any elected person. This might even get rid of the party system.
Think about it, modify it, make other suggestions. There are probably a zillion ways to fix this. Think about it even if you don't believe there is any way they - congress - will change. Because they surely won't if we can't imagine a better system. We have to imagine better systems and then push to implement them.
propertius
And you expect a corrupt Congress to vote to eliminate the corrupt system from which they all benefit? Really? It seems to me that the only way to get something like this implemented is to throw out every single incumbent, regardless of party (since both parties are in on the game). That's certainly my plan.
Schofield
Chime this with half the representatives in Congress being millionaires and two-thirds in the Senate, and you have a so-called democratic body which in reality represents mainly the interests of the rich.
Recently there has been much interest in the security aspects of operating systems and software. At issue is the ability to prevent undesired disclosure of information, destruction of information, and harm to the functioning of the system. This paper discusses the degree of security which can be provided under the UNIX system and offers a number of hints on how to improve security.
The first fact to face is that UNIX was not developed with security, in any realistic sense, in mind; this fact alone guarantees a vast number of holes. (Actually the same statement can be made with respect to most systems.) The area of security in which UNIX is theoretically weakest is in protecting against crashing or at least crippling the operation of the system. The problem here is not mainly in uncritical acceptance of bad parameters to system calls -- there may be bugs in this area, but none are known -- but rather in lack of checks for excessive consumption of resources. Most notably, there is no limit on the amount of disk storage used, either in total space allocated or in the number of files or directories. Here is a particularly ghastly shell sequence guaranteed to stop the system:
while : ; do
        mkdir x
        cd x
done
Either a panic will occur because all the i-nodes on the device are used up, or all the disk blocks will be consumed, thus preventing anyone from writing files on the device.
In this version of the system, users are prevented from creating more than a set number of processes simultaneously, so unless users are in collusion it is unlikely that any one can stop the system altogether. However, creation of 20 or so CPU or disk-bound jobs leaves few resources available for others. Also, if many large jobs are run simultaneously, swap space may run out, causing a panic.
It should be evident that excessive consumption of disk space, files, swap space, and processes can easily occur accidentally in malfunctioning programs as well as at command level. In fact
UNIX is essentially defenseless against this kind of abuse, nor is there any easy fix. The best that can be said is that it is generally fairly easy to detect what has happened when disaster strikes, to identify the user responsible, and take appropriate action. In practice, we have found that difficulties in this area are rather rare, but we have not been faced with malicious users, and enjoy a fairly generous supply of resources which have served to cushion us against accidental overconsumption.
The picture is considerably brighter in the area of protection of information from unauthorized perusal and destruction. Here the degree of security seems (almost) adequate theoretically, and the problems lie more in the necessity for care in the actual use of the system.
Each UNIX file has associated with it eleven bits of protection information together with a user identification number and a user-group identification number (UID and GID). Nine of the protection bits are used to specify independently permission to read, to write, and to execute the file to the user himself, to members of the user's group, and to all other users. Each process generated by or for a user has associated with it an effective UID and a real UID, and an effective and real GID. When an attempt is made to access the file for reading, writing, or execution, the user process's effective UID is compared against the file's UID; if a match is obtained, access is granted provided the read, write, or execute bit respectively for the user himself is present.
If the UID for the file and for the process fail to match, but the GID's do match, the group bits are used; if the GID's do not match, the bits for other users are tested.
The last two bits of each file's protection information, called the set-UID and set-GID bits, are used only when the file is executed as a program. If, in this case, the set-UID bit is on for the file, the effective UID for the process is changed to the UID associated with the file; the change persists until the process terminates or until the UID changed again by another execution of a set-UID file. Similarly the effective group ID of a process is changed to the GID associated with a file when that file is executed and has the set-GID bit set. The real UID and GID of a process do not change when any file is executed, but only as the result of a privileged system call.
The basic notion of the set-UID and set-GID bits is that one may write a program which is executable by others and which maintains files accessible to others only by that program. The classical example is the game-playing program which maintains records of the scores of its players. The program itself has to read and write the score file, but no one but the game's sponsor can be allowed unrestricted access to the file lest they manipulate the game to their own advantage. The solution is to turn on the set-UID bit of the game program. When, and only when, it is invoked by players of the game, it may update the score file but ordinary programs executed by others cannot access the score.
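In modern shell terms the game example amounts to something like this (a sketch; the file names and the owning account are illustrative):
chown games /usr/games/adventure /var/games/scores
chmod 4755 /usr/games/adventure   # set-UID bit: the game runs with its owner's UID
chmod 600 /var/games/scores       # only the owner, and therefore the game, can touch the score file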
There are a number of special cases involved in determining access permissions. Since executing a directory as a program is a meaningless operation, the execute-permission bit, for directories, is taken
instead to mean permission to search the directory for a given file during the scanning of a path name; thus if a directory has execute permission but no read permission for a given user, he may access files with known names in the directory, but may not read (list) the entire contents of the directory. Write permission on a directory is interpreted to mean that the user may create and delete files in that directory; it is impossible for any user to write directly into any directory.
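For example, a directory that others may search but not list could be set up like this (a sketch; the directory name is illustrative):
chmod 711 ~/private   # others may open files inside whose names they know, but cannot list the directory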
Another, and from the point of view of security, much more serious special case is that there is a ``super user'' who is able to read any file and write any non-directory. The super-user is also able to change the protection mode and the owner UID and GID of any file and to invoke privileged system calls. It must be recognized that the mere notion of a super-user is a theoretical, and usually practical, blemish on any protection scheme.
The first necessity for a secure system is of course arranging that all files and directories have the proper protection modes. Traditionally,
UNIX software has been exceedingly permissive in this regard; essentially all commands create files readable and writable by everyone. In the current version, this policy may be easily adjusted to suit the needs of the installation or the individual user. Associated with each process and its descendants is a mask, which is in effect and-ed with the mode of every file and directory created by that process. In this way, users can arrange that, by default, all their files are no more accessible than they wish. The standard mask, set by login, allows all permissions to the user himself and to his group, but disallows writing by others.
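In today's shell syntax, a mask that disallows only writing by others corresponds to umask 002 (a small illustration):
umask 002   # new files come out rw-rw-r--, new directories rwxrwxr-x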
To maintain both data privacy and data integrity, it is necessary, and largely sufficient, to make one's files inaccessible to others. The lack of sufficiency could follow from the existence of set-UID programs created by the user and the possibility of total breach of system security in one of the ways discussed below (or one of the ways not discussed below). For greater protection, an encryption scheme is available. Since the editor is able to create encrypted documents, and the crypt command can be used to pipe such documents into the other text-processing programs, the length of time during which clear text versions need be available is strictly limited. The encryption scheme used is not one of the strongest known, but it is judged adequate, in the sense that cryptanalysis is likely to require considerably more effort than more direct methods of reading the encrypted files. For example, a user who stores data that he regards as truly secret should be aware that he is implicitly trusting the system administrator not to install a version of the crypt command that stores every typed password in a file.
Needless to say, the system administrators must be at least as careful as their most demanding user to place the correct protection mode on the files under their control. In particular, it is necessary that special files be protected from writing, and probably reading, by ordinary users when they store sensitive files belonging to other users. It is easy to write programs that examine and change files by accessing the device on which the files live.
On the issue of password security, UNIX is probably better than most systems. Passwords are stored in an encrypted form which, in the absence of serious attention from specialists in the field, appears reasonably secure, provided its limitations are understood. In the current version, it is based on a slightly defective version of the Federal DES; it is purposely defective so that easily-available hardware is useless for attempts at exhaustive key-search. Since both the encryption algorithm and the encrypted passwords are available, exhaustive enumeration of potential passwords is still feasible up to a point. We have observed that users choose passwords that are easy to guess: they are short, or from a limited alphabet, or in a dictionary. Passwords should be at least six characters long and randomly chosen from an alphabet which includes digits and special characters.
Of course there also exist feasible non-cryptanalytic ways of finding out passwords. For example, write a program which types out ``login: '' on the typewriter and copies whatever is typed to a file of your own. Then invoke the command and go away until the victim arrives.
The set-UID (set-GID) notion must be used carefully if any security is to be maintained. The first thing to keep in mind is that a writable set-UID file can have another program copied onto it. For example, if the super-user (su) command is writable, anyone can copy the shell onto it and get a password-free version of su. A more subtle problem can come from set-UID programs which are not sufficiently careful of what is fed into them. To take an obsolete example, the previous version of the mail command was set-UID and owned by the super-user. This version sent mail to the recipient's own directory. The notion was that one should be able to send mail to anyone even if they want to protect their directories from writing. The trouble was that mail was rather dumb: anyone could mail someone else's private file to himself. Much more serious is the following scenario: make a file with a line like one in the password file which allows one to log in as the super-user. Then make a link named ``.mail'' to the password file in some writable directory on the same device as the password file (say /tmp). Finally mail the bogus login line to /tmp/.mail; You can then login as the super-user, clean up the incriminating evidence, and have your will.
The fact that users can mount their own disks and tapes as file systems can be another way of gaining super-user status. Once a disk pack is mounted, the system believes what is on it. Thus one can take a blank disk pack, put on it anything desired, and mount it. There are obvious and unfortunate consequences. For example: a mounted disk with garbage on it will crash the system; one of the files on the mounted disk can easily be a password-free version of su; other files can be unprotected entries for special files. The only easy fix for this problem is to forbid the use of mount to unprivileged users. A partial solution, not so restrictive, would be to have the mount command examine the special file for bad data, set-UID programs owned by others, and accessible special files, and balk at unprivileged invokers.
ACLs *ARE NOT NECESSARY* (Score:5, Insightful)
by Captain_Carnage ([email protected]) on Sunday February 25, @06:30PM EST (#82)
(User #4901 Info)
There are two basic access needs that people need to have to data: the ability to READ the data, and the ability to MODIFY the data. In ALL cases (at least, in all useful cases), these privileges can be granted using standard Unix permissions. Let's say you have a directory full of files, and you need some people to be able to write to these files (which implies they'll also need to be able to read the files, to verify their changes), and you have another group of people who need to be able to read the files. Everyone else in the organization should have NO access. This is the most complicated case.
Can this be done with standard Unix permissions? At first glance, you might think that you can't, because the only permissions provided in Unix are User (owner), Group, and Other (world). You can't control the access for a second group, which is what you need, right?
However, the answer is YES! You can do this. Here's how:
Create one group each for the people who need to be able to read the files, and write the files. For simplicity of the example, let's call the groups "read" and "write" respectively.
Now, add every user who needs read access to those files to the "read" group, and add all users who need write access to BOTH groups.
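On a typical Linux system those two steps might look like this (a sketch; the user names are illustrative, and alice here only needs to read while bob needs to write, so bob goes into both groups):
# groupadd read
# groupadd write
# usermod -a -G read alice
# usermod -a -G read,write bob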
Now, create a top level directory, like this (only ownerships, permissions, and the name are shown for brevity):
drwxr-x--- root read topdir
# mkdir topdir
# chgrp read topdir
# chmod 750 topdir
Both groups we created can cd into this directory (because we added the users in the "write" group to the "read" group as well, remember?). Now, under that directory, create one or more directories where your data will be stored, like this:
drwxrwsr-x root write datadir
# cd topdir
# mkdir datadir
# chgrp write datadir
# chmod 2775 datadir
The '2' sets the SGID bit on the directory, which forces all files created in this directory to be created group-owned by the "write" group (it copies the group ownership of the directory to all new files in it). It will also make new files created in this directory group-writable by default (again, copying the group permissions from the directory).
You might also want to prevent users from deleting files they don't own, by setting the sticky bit on the directory, which will make the '2' a '3' instead.
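That variant would be (a sketch):
# chmod 3775 datadir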
Now, users in the "write" group can create and write to files in this directory, and users in the "read" group will be able to read them, because they will be readable by other (world). However, everyone else will NOT be able to read them, because in order to do so, they would have needed to be in the "read" group in order to cd into topdir to get to datadir (which is why we also included the users in the "write" group in the "read" group)!
Thus, your problem is solved. Do this for every directory where the groups of people who need each type of access are different. This is BETTER than ACLs because a) it is either the same amount of administrative effort as managing ACLs on a per-directory basis (but you manage group membership instead), or LESS administrative effort than managing ACLs on a per-file basis; and b) it FORCES you to organize your data hierarchically by who has access to it.
Get over ACLs... they are a waste of time and programming effort.
You could argue that you might want some third group of people to have write access ONLY, but the practical value of this is very limited. If you feel that you need this you are probably being silly or WAY too paranoid, even for a system administrator. Limiting people from reading data that they can over-write is generally nonsensical.
I don't deny that there are certain very narrow applications for that sort of access limitation, but the likelihood that such an application would also present the need to have groups with each of those access requirements (read, read/write, and write-only) seems rather slim.
Note to slashdot maintainers: PLEASE make the damn text box for typing comments into bigger! The one currently provided on the web form makes typing long comments especially painful. And allowing the CODE HTML tag would be nice too.
ACLs are a Bad Idea (tm) (Score:1)
by tbray on Sunday February 25, @10:35PM EST (#110)
(User #95102 Info)
IMHO the original Unix user-group-world read-write-execute is one of the great 80/20 points in computing history. The biggest downside of ACLs is their potential to reduce security due to the inevitable human error introduced in dealing with the complexity. Perhaps the canonical example is (old-fart alert) release X.0 (I forget X) of the late unlamented VAX/VMS, which ignored all the lessons of Unix except for (in early releases) the user-group-world model, except that they added an idiotic and useless "delete" access.
Anyhow, in X.0, VMS introduced ACLs; rather good and clean ones. Unfortunately, they screwed up the ACL on one of the core system name translation tables, and left it wide open to subversion by anybody who wandered by and noticed.
I tend to think that this pattern is the rule rather than the exception... the cost of ACLs immensely exceeds the benefit, which isn't hard since it's often negative.
Cheers, Tim
ACLs are too cumbersome to maintain (Score:1)
by herbierobinson on Monday February 26, @03:07AM EST (#134)
(User #183222 Info) http://www.curbside-recording.com/hrmusic/index.html
ACLs (at least in the implementations I've seen) are too cumbersome because they are too hard to maintain. One ends up having to copy the same ACL onto hundreds of directories (which is not so bad). The bad part is when you have to change it: you need a tree walk program to make the change to all of those directories. One possibility would be to have some named object in the file system which defines a set of access rights. That named object would be referenced from all the directories (and optionally files) it should control access to.
There are a number of articles in the latest JACM on security which are very relevant to this discussion, too. One of them discussed making access control task-based rather than object-based. In other words, users get assigned to one or more tasks (aka groups) and each task defines the access to all objects somehow (the somehow is the hard part...).
ACLs are really not enough. (Score:4, Insightful)
by Anonymous Coward on Sunday February 25, @03:28PM EST (#59)
I've worked with ten or twelve operating systems at the system administration level, and I've done so in academic, corporate, medical, and military-industrial settings. Most of the proprietary Unices (if you count different *nixen as separate OSes, double the OS count given above) have their own lame, incompatible implementations of ACLs. These are typically layered over the antique Unix rwxrwxrwx scheme.
"rwx" is insufficient. People often exhort others to "think unix" - and when you are talking about pipes & I/O redirection, or any of the other wonderful features of Unix, that's great. But if you "think unix" to the extent that you can't see how fundamentally LAME the unix access control mechanisms are, you are crippling yourself. To put it in perspective, consider the IBM MVS file protection system RACF - in RACF, you cannot grant a user write permission without also granting read permission. This is partially because of the underlying architecture of MVS, but that doesn't mean it's not lame and restrictive. However, most hardcore mainframers literally cannot conceive of a situation where they'd want write and not read.
Novell has the most advanced and useful system of file attributes I am aware of. For example, "create" and "write" are separate - this allows the creation of dead-drop boxes: folders where a user can create a file but cannot afterwards modify it. If you can't conceive a situation where you could put this to use, you are "thinking unix" to the point of wearing blinders. NOTE: the foregoing statement will cause this post to be labeled "flamebait" and modded into oblivion by self-righteous Berkleyites >:^} while simultaneously generating dozens of "oh yeah name one" followups.
There are many other aspects of the Novell system of file protection and user rights that are very advanced. Consult your local guru, but I'll mention "rename inhibit" as one useful ability. If Stef Murky, excuse me, Jeff Mirkey, ever gets his MANOS opsystem going under the GPL I personally will immediately convert to his Novell-clone filesystem. Even DEC VMS does not compare, and the VMS access control mechanisms beat Unix hands down.
I don't recommend Novell because it's not Open Source and because the complexity of X500 or NDS type "directory" systems adds instability and management overhead that is seldom warranted to achieve concrete goals. That being said, as the world becomes increasingly networked the previous statement becomes increasingly less accurate.
LDAP interfaces to SQL backends like MySQL and Postgres will eventually be the way to go (but not today, and ADS will never fit the problem space). The one warning I would sound is that when you keep file data and file attributes in separate places - as many systems do - you markedly decrease the robustness of your system. User data in the user directory, file attributes in the file header, is a better idea. Just like it's a better idea to put the comments in the source code than in a separate documentation file (don't get me started about that stupid practice).
Sorry my .02 pence is so lengthy. I could rant some more, but I think I got the major point across - ACLs on a "rwxrwxrwx" system are like streamlining a 1954 Volkswagen Beetle.
Re:ACLs *ARE NOT NECESSARY* (Score:3, Insightful)
by coyote-san on Sunday February 25, @07:52PM EST (#91)
(User #38515 Info)
There are three small problems with this scheme. First, you run into a combinatorial explosion in the real world. Try running through your example in a small CS department with 5 professors, 50 students, and each student enrolled in three classes. Everyone has to work in teams, but the teams are different in each class. So each student needs to see his own files, and *some* of the files in 6-12 other user accounts (with team sizes from 3-5 people). He can't see all because that would be "cheating."
Now maintain that over 5 years, as students enroll, graduate, etc.
The second problem is that ACLs often support much more than simple read/write/execute. There's "modify" vs. "append." That is so important that ext2 supports "append-only" as one of the extended attributes. There's "change attributes (chattr)" as a separate permission than "write." Some ACL systems even control your ability to "stat" a file, to determine its ownership, size, time of access, etc. Some of this can be handled by your scheme, but it makes it much more complex.
The final problem is that ACLs are far more powerful than most implementations would suggest. Besides being able to grant - or revoke - access to individuals for read/write/execute/modify/append/delete/chattr/stat, I've seen ACL specs which allowed restriction based on time-of-day, day-of-week, and even access terminal (local vs. network). You can use 'cron' to automatically change some bits, but it's hard to set it up so that, e.g., the faculty can play games any time, the staff can play them after hours, and the students can play them on weekends.
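The cron workaround mentioned here would look roughly like this in /etc/crontab, opening the game to everyone at 18:00 on weekdays and closing it again at 08:00 (a sketch; the path and times are illustrative, and it only expresses one simple time-based rule):
0 18 * * 1-5   root   chmod o+x /usr/games/rogue
0 8  * * 1-5   root   chmod o-x /usr/games/rogue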
For every complex problem there is an answer that is clear, simple, and wrong. -- H L Mencken
Re:ACLs *ARE NOT NECESSARY* (Score:1)
by Captain_Carnage ([email protected]) on Sunday February 25, @10:53PM EST (#111)
(User #4901 Info)
In the real world, in most cases, going through the trouble I describe is not necessary. It is only necessary in a (usually) small number of cases where there are two distinct groups of people that require two different types of access. In the University example that you describe, it is unlikely that all of these classes will have people working in teams for every assignment. In many such courses, the students' work is entirely individual.
In those cases where it is not, you simply need to create a Unix group for each team. All of the files for that team's project are kept in a central project-related directory. There is no reason whatsoever for any user not in that group to have access of any kind to the files of that group's project, so a more complicated scheme is not necessary.
Moreover, the classes offered from semester to semester don't tend to change much, so for the most part the groups will stay the same too, so you're not likely to need to spend a lot of time maintaining that, nor are you likely to run out of groups, even in a much larger CS department.
In the "real world", your first case just isn't really a problem. I learned how to use Unix permissions from the sysadmin of my college, whose CS department has over a thousand users, who successfully employed this tecnique for years.
The second case, modify vs. append: To me the latter is just a special case of the former. I personally see no reason why one should be treated differently from the other. If you have a compelling reason why someone should be allowed to append data to a file, but not modify the data that's already in the file, I'd certainly like to hear it.
Your permission to stat a file is controlled by whether or not you have read access to the directory the file is in. What legitimate reason can you suggest for preventing a user from seeing SOME files in a directory they legitimately have access to, but not others? What practical purpose does this serve?
Re:ACLs *ARE NOT NECESSARY* (Score:2)
by Baki ([email protected]) on Monday February 26, @08:31AM EST (#143)
(User #72515 Info)
Real world problems that suffer from the mentioned combinatorial explosion are too complex: the business model in such cases should be simplified. The solution to complex access schemes is not to add a complex technical implementation, but to simplify the scheme. The (many) times I've seen ACLs in action to implement overly complex access control schemes, it became chaos, and nobody knew anymore who was allowed to see what, and why the ACLs were the way they were. Maybe massive bureaucratic measures could prevent chaos in such cases, but one had better rethink the way permissions are granted.
As for modify/append: There are a couple of very specific (system) tasks where append-only might be useful, especially for logfiles that intruders may not tamper with. But for general purpose use, I don't see the need. Append-only can just as well be implemented as a write-only directory (a la /tmp on Unix systems) where a user can "append" a record by creating a new file. Then cat them all in order of mtime, and you have about the same.
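That drop-box style directory can be approximated like this: anyone can create files in it but cannot list it, and the sticky bit stops users from deleting or renaming files they don't own (a sketch; the path is illustrative):
# mkdir /var/spool/dropbox
# chmod 1733 /var/spool/dropbox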
Time-dependent access etc. is insane to administer, and again IMO the business model should rather be simplified.
Maybe ACL's are nice for control-freak type of system administrators that don't have much work to do, but for normal situations they're no good.
Securing Debian HOWTO Chapter 8 Frequently asked Questions
8.4 Questions regarding users and groups
8.5 Are all system users necessary?
Yes and no. Debian comes with some predefined users (id < 99 as described in Debian Policy) for some services so that installing new services is easy (they are already run by the appropriate user). If you do not intend to install new services, you can safely remove those users who do not own any files in your system and do not run any services.
You can easily find users not owning any files by executing the following command (be sure to run it as root, since a common user might not have enough permissions to go through some sensitive directories):
cut -f 1 -d : /etc/passwd | while read i; do find / -user "$i" | grep -q . || echo "$i"; done
These users are provided by base-passwd. You will find in its documentation more information on how these users are handled in Debian.
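If, say, majordom turns out to own nothing and run nothing on a given system, it could be removed like this (a sketch; double-check with the command above before deleting anything):
deluser majordom
delgroup majordom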
The list of default users (with a corresponding group) follows:
- root: Root is (typically) the superuser.
- daemon: Some unprivileged daemons that need to be able to write to some files on disk run as daemon.daemon (portmap, atd, probably others). Daemons that don't need to own any files can run as nobody.nogroup instead, and more complex or security conscious daemons run as dedicated users. The daemon user is also handy for locally installed daemons, probably.
- bin: maintained for historic reasons.
- sys: same as with bin. However, /dev/vcs* and /var/spool/cups are owned by group sys.
- sync: The shell of user sync is /bin/sync. Thus, if its password is set to something easy to guess (such as ""), anyone can sync the system at the console even if they have no account on the system.
- games: Many games are sgid to games so they can write their high score files. This is explained in policy.
- man: The man program (sometimes) runs as user man, so it can write cat pages to /var/cache/man
- lp: Used by printer daemons.
- mail: Mailboxes in /var/mail are owned by group mail, as is explained in policy. The user and group is used for other purposes as well by various MTA's.
- news: Various news servers and other associated programs (such as suck) use user and group news in various ways. Files in the news spool are often owned by user and group news. Programs such as inews that can be used to post news are typically sgid news.
- uucp: The uucp user and group is used by the UUCP subsystem. It owns spool and configuration files. Users in the uucp group may run uucico.
- proxy: Like daemon, this user and group is used by some daemons (specifically, proxy daemons) that don't have dedicated user id's and that need to own files. For example, group proxy is used by pdnsd, and squid runs as user proxy.
- majordom: Majordomo has a statically allocated uid on Debian systems for historical reasons. It is not installed on new systems.
- postgres: Postgresql databases are owned by this user and group. All files in /var/lib/postgresql are owned by this user to enforce proper security.
- www-data: Some web servers run as www-data. Web content should *not* be owned by this user, or a compromised web server would be able to rewrite a web site. Data written out by web servers, including log files, will be owned by www-data.
- backup: So backup/restore responsibilities can be locally delegated to someone without full root permissions.
- operator: Operator is historically (and practically) the only 'user' account that can login remotely, and doesn't depend on NIS/NFS.
- list: Mailing list archives and data are owned by this user and group. Some mailing list programs may run as this user as well.
- irc: Used by irc daemons. A statically allocated user is needed only because of a bug in ircd -- it setuid()s itself to a given UID on startup.
- gnats.
- nobody, nogroup: Daemons that need not own any files run as user nobody and group nogroup. Thus, no files on a system should be owned by this user or group.
Other groups which have no associated user:
- adm: Group adm is used for system monitoring tasks. Members of this group can read many log files in /var/log, and can use xconsole. Historically, /var/log was /usr/adm (and later /var/adm), thus the name of the group.
- tty: Tty devices are owned by this group. This is used by write and wall to enable them to write to other people's tty's.
- disk: Raw access to disks. Mostly equivalent to root access.
- kmem: /dev/kmem and similar files are readable by this group. This is mostly a BSD relic, but any programs that need direct read access to the system's memory can thus be made sgid kmem.
- dialout: Full and direct access to serial ports. Members of this group can reconfigure the modem, dial anywhere, etc.
- dip: The group's name stands for "Dialup IP". Being in group dip allows you to use a tool such as ppp, dip, wvdial, etc. to dial up a connection. The users in this group cannot configure the modem, they can just run the programs that make use of it.
- fax: Allows members to use fax software to send / receive faxes.
- voice: Voicemail, useful for systems that use modems as answering machines.
- cdrom: This group can be used locally to give a set of users access to a cdrom drive.
- floppy: This group can be used locally to give a set of users access to a floppy drive.
- tape: This group can be used locally to give a set of users access to a tape drive.
- sudo: Members of this group do not need to type their password when using sudo. See /usr/share/doc/sudo/OPTIONS.
- audio: This group can be used locally to give a set of users access to an audio device.
- src: This group owns source code, including files in /usr/src. It can be used locally to give a user the ability to manage system source code.
- shadow: /etc/shadow is readable by this group. Some programs that need to be able to access the file are set gid shadow.
- utmp: This group can write to /var/run/utmp and similar files. Programs that need to be able to write to it are sgid utmp.
- video: This group can be used locally to give a set of users access to a video device.
- staff: Allows users to add local modifications to the system (/usr/local, /home) without needing root privileges. Compare with group "adm", which is more related to monitoring/security.
- users: While Debian systems use the user group system by default (each user has their own group), some prefer to use a more traditional group system. In that system, each user is a member of the 'users' group.
8.5.1 What is the difference between the adm and the staff group?
'adm' is for administrators and is mostly useful for allowing them to read log files without having to su.
'staff' is useful for helpdesk/junior sysadmin type people and gives them the ability to do things in /usr/local and create directories in /home.
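For example, a minimal sketch of delegating log reading via the adm group (the user name jdoe is hypothetical):
# let jdoe read /var/log/* without su by adding a supplementary group
usermod -a -G adm jdoe
# verify the membership (takes effect at the next login)
id jdoe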
[Oct 14, 2011] Nasty surprise with the command cd joeuser; chown -R joeuser:joeuser .*
This is a classic case of the side effect of the .* wildcard combined with the -R flag: .* matches .. as well, so chown recurses up into the parent and traverses a large part of the tree. The key here is not to panic. Recovery is possible even if you do not have a map of all file ownership and permissions (and you had better keep one on a regular basis). The first step is to use
for p in $(rpm -qa); do rpm --setugids $p; done
The second step is to copy the remaining ownership info from a similar system. It is especially important to restore ownership in the /dev directory.
A similar approach can be used for restoring permissions:
for p in $(rpm -qa); do rpm --setperms $p; done
Please note that the rpm --setperms command actually resets setuid, setgid, and sticky bits. These must be set manually using some existing system as a baseline.
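As a precaution, it helps to keep a periodic baseline of ownership and permissions so that recovery does not depend on finding a similar donor system. A minimal sketch (the output path and schedule are arbitrary choices, not part of the original tip):
# record mode, owner, group and path for every file on local filesystems (GNU find);
# run this from cron on a regular basis and keep a few generations
find / -xdev -printf '%m %u %g %p\n' > /root/perm-baseline-$(date +%F).txt
# after an accident, the saved listing can be consulted (or scripted against) to
# restore ownership and modes with chown/chmod for files rpm does not know about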
[Oct 14, 2011] Dennis Ritchie, 70, Dies, Programming Trailblazer - by Steve Lohr
October 13, 2011 | NYTimes.com
Dennis M. Ritchie, who helped shape the modern digital era by creating software tools that power things as diverse as search engines like Google and smartphones, was found dead on Wednesday at his home in Berkeley Heights, N.J. He was 70.
Mr. Ritchie, who lived alone, was in frail health in recent years after treatment for prostate cancer and heart disease, said his brother Bill.
In the late 1960s and early '70s, working at Bell Labs, Mr. Ritchie made a pair of lasting contributions to computer science. He was the principal designer of the C programming language and co-developer of the Unix operating system, working closely with Ken Thompson, his longtime Bell Labs collaborator.
The C programming language, a shorthand of words, numbers and punctuation, is still widely used today, and successors like C++ and Java build on the ideas, rules and grammar that Mr. Ritchie designed. The Unix operating system has similarly had a rich and enduring impact. Its free, open-source variant, Linux, powers many of the world's data centers, like those at Google and Amazon, and its technology serves as the foundation of operating systems, like Apple's iOS, in consumer computing devices.
"The tools that Dennis built - and their direct descendants - run pretty much everything today," said Brian Kernighan, a computer scientist at Princeton University who worked with Mr. Ritchie at Bell Labs.
Those tools were more than inventive bundles of computer code. The C language and Unix reflected a point of view, a different philosophy of computing than what had come before. In the late '60s and early '70s, minicomputers were moving into companies and universities - smaller and at a fraction of the price of hulking mainframes.
Minicomputers represented a step in the democratization of computing, and Unix and C were designed to open up computing to more people and collaborative working styles. Mr. Ritchie, Mr. Thompson and their Bell Labs colleagues were making not merely software but, as Mr. Ritchie once put it, "a system around which fellowship can form."
C was designed for systems programmers who wanted to get the fastest performance from operating systems, compilers and other programs. "C is not a big language - it's clean, simple, elegant," Mr. Kernighan said. "It lets you get close to the machine, without getting tied up in the machine."
Such higher-level languages had earlier been intended mainly to let people without a lot of programming skill write programs that could run on mainframes. Fortran was for scientists and engineers, while Cobol was for business managers.
C, like Unix, was designed mainly to let the growing ranks of professional programmers work more productively. And it steadily gained popularity. With Mr. Kernighan, Mr. Ritchie wrote a classic text, "The C Programming Language," also known as "K. & R." after the authors' initials, whose two editions, in 1978 and 1988, have sold millions of copies and been translated into 25 languages.
Dennis MacAlistair Ritchie was born on Sept. 9, 1941, in Bronxville, N.Y. His father, Alistair, was an engineer at Bell Labs, and his mother, Jean McGee Ritchie, was a homemaker. When he was a child, the family moved to Summit, N.J., where Mr. Ritchie grew up and attended high school. He then went to Harvard, where he majored in applied mathematics.
While a graduate student at Harvard, Mr. Ritchie worked at the computer center at the Massachusetts Institute of Technology, and became more interested in computing than math. He was recruited by the Sandia National Laboratories, which conducted weapons research and testing. "But it was nearly 1968," Mr. Ritchie recalled in an interview in 2001, "and somehow making A-bombs for the government didn't seem in tune with the times."
Mr. Ritchie joined Bell Labs in 1967, and soon began his fruitful collaboration with Mr. Thompson on both Unix and the C programming language. The pair represented the two different strands of the nascent discipline of computer science. Mr. Ritchie came to computing from math, while Mr. Thompson came from electrical engineering.
"We were very complementary," said Mr. Thompson, who is now an engineer at Google. "Sometimes personalities clash, and sometimes they meld. It was just good with Dennis."
Besides his brother Bill, of Alexandria, Va., Mr. Ritchie is survived by another brother, John, of Newton, Mass., and a sister, Lynn Ritchie of Hexham, England.
Mr. Ritchie traveled widely and read voraciously, but friends and family members say his main passion was his work. He remained at Bell Labs, working on various research projects, until he retired in 2007.
Colleagues who worked with Mr. Ritchie were struck by his code - meticulous, clean and concise. His writing, according to Mr. Kernighan, was similar. "There was a remarkable precision to his writing," Mr. Kernighan said, "no extra words, elegant and spare, much like his code."
CINT
CINT is an interpreter for C and C++ code. It is useful e.g. for situations where rapid development is more important than execution time. Using an interpreter the compile and link cycle is dramatically reduced facilitating rapid development. CINT makes C/C++ programming enjoyable even for part-time programmers.
CINT is written in C++ itself, with slightly less than 400,000 lines of code. It is used in production by several companies in the banking, integrated devices, and even gaming environment, and of course by ROOT, making it the default interpreter for a large number of high energy physicists all over the world.
Features
CINT covers most of ANSI C (mostly before C99) and ISO C++ 2003. A CINT script can call compiled classes/functions and compiled code can make callbacks to CINT interpreted functions. Utilities like makecint and rootcint automate the process of embedding compiled C/C++ library code as shared objects (as Dynamic Link Library, DLL, or shared library, .so). Source files and shared objects can be dynamically loaded/unloaded without stopping the CINT process. CINT offers a gdb like debugging environment for interpreted programs.
Download
CINT is free software, both in terms of cost and freedom of use: it is licensed under the X11/MIT license. See the included COPYING file for details.
The source of CINT 5.18.00 from 2010-07-02 is available here (tar.gz, 2MB).
CINT 5.16.19 from 2007-03-19 is available via anonymous ftp:
- Source package for all platforms (2MB) (with bash configure script for most of the platforms)
- Binary package for Windows (2MB).
To build the source package do:
$ tar xfz cint-5.16.19-source.tar.gz
$ cd cint-5.16.19
$ ./configure
$ gmake
The current sources of CINT can be downloaded via subversion. From a bash shell (the $ is meant to denote the shell prompt), run
$ svn co http://root.cern.ch/svn/root/trunk/cint cint
$ cd cint
Once you have the sources you can simply update them by running svn up.
You can also download a certain version of CINT using subversion:
$ svn co http://root.cern.ch/svn/cint/tags/v5-16-19 cint-v5.16.19
$ cd cint-v5.16.19
You can build CINT from these sources by running
$ ./configure
$ make -j2
For Windows you will need to install the cygwin package to build CINT from sources.
Before downloading check the release notes of the latest version.
Portability
CINT works on a number of operating systems. Linux, HP-UX, SunOS, Solaris, AIX, Alpha-OSF, IRIX, FreeBSD, NetBSD, NEC EWS4800, NewsOS, BeBox, HI-UX, Windows-NT/95/98/Me, MS-DOS, MacOS, VMS, NextStep, and Convex have all been reported as working at some point in time; Linux, Windows, MacOS and Solaris are actively developed. A number of compilers are supported, e.g. GCC, Microsoft Visual C++, Intel's ICC, HP-CC/aCC, Sun-CC/CC5, IBM-xlC, Compaq-cxx, SGI-CC, Borland-C++, Symantec-C++, DJGPP, cygwin-GCC.
The ROOT System
The ROOT system embeds CINT to be able to execute C++ scripts and C++ command line input. CINT also provides extensive RTTI capabilities to ROOT. See the following chapters on how CINT is used in ROOT:
- Adding Your own Classes to ROOT
- CINT as Dictionary Generator
- CINT as Command Line and Macro Interpreter
The Authors
CINT is developed by Masaharu Goto, who works for Agilent Technologies, Philippe Canal and Paul Russo at Fermilab, and Leandro Franco, Diego Marcos, and Axel Naumann from CERN.
Limitations
CINT implements a very large subset of C++, but also has some differences and limitations. We have developed a new version of CINT where most of these limitations have been removed. This new version is called CINT 7 (aka "new core"); it is now the default for stand-alone CINT installations.
CINT Mailing List
[email protected] is the CINT mailing list. In order to subscribe, send an e-mail to [email protected] containing the line 'subscribe cint [preferred mail address]', where [preferred mail address] is optional. The archive of the mailing list is also available; you can also find it on Nabble.com.
For more detailed CINT information see below:
- README: features, license condition, installation, etc...
- Releases Notes: current revision, recent changes
- ChangeLog: Subversion changes
- FAQ
- Cint man page: command line options, debugger commands
- Reference: reference manual in alphabetical order
- Commands
- Bytecode compiler
- CINT API
- Error messages
- Limitations (concepts, ISO compatibility)
- Limitations (sizes, lengths)
- Makecint: utility for embedding C/C++ library - reference manual
- Adding External Libraries: embedding C/C++ library - user guide
Windows XP(SP3) Firewall Slow To Startup - Windows XP Support
Some Linux Foundation crack attack details emerge ZDNet
The malware seems to have been on a Linux machine
@ Badgered: The answer to your question is in the link provided by Vaughan-Nichols.
The term 'malware compromised PC' is something that Vaughan-Nichols simply made up (as he tends to do), unless he's posted the wrong link. The link he posted makes no reference to a PC. Rather, it states that a trojan was discovered on 'HPA's personal colo machine' -- a 'personal machine', not a 'PC'.
More importantly, the source also states that a 'trojan startup file was added to rc3.d'. As anyone familiar with Linux will know, 'rc3.d' is a directory containing start-up scripts for run level 3. The Linux run level scheme was copied from Unix, and as anyone familiar with Windows will know, Windows does not use run levels, nor has it ever.
In short, what Vaughan-Nichols calls a 'malware compromised PC' was apparently a 'personal co[-]lo[cation] machine' running Linux. It was apparently infected, along with several other Linux machines, by a trojan that targets Linux. It was Linux malware, full stop.
Anyone who's puzzled by a high-profile infection of Linux systems should consider the following:
1. Every production operating system contains bugs
2. Every user/administrator makes mistakes (much more important than 1)
3. Containing user/administrator mistakes and managing problems caused by bugs requires considerable resources
4. It's exceedingly unlikely that the Linux Kernel Organization, a non-profit, can match the resources of large commercial firms
5. Despite the myths spread by the technically inept, Linux isn't inherently more secure than Windows (indeed, as Charlie Miller has pointed out, Linux desktops are probably easier to hack than Windows desktops)
To those who haven't the first clue about security and think Linux is magically protected by pixies (i.e. most Linux zealots), the fact that hackers were able to compromise kernel.org and apparently remain undetected for some time must come as a shock. To anyone who actually understands the Linux, Unix and Windows security models, however, it isn't the least bit surprising.
[Oct 12, 2011] Microsoft Says IE9 Blocks More Malware Than Chrome
"For every computer exploited using a Windows flaw, 100 are exploited using Flash. Acrobat Reader and Java are the other major culprits. "
October 11, 2011 | Slashdot
CSHARP123 writes "In a move that's sure to raise some eyebrows, Microsoft today debuted a new web site designed to raise awareness of security issues in web browsers. When you visit the site, called Your Browser Matters, it allows you to see a score for the browser you're using. Only IE, Chrome, or Firefox are included - other browsers are excluded. Not surprisingly, Microsoft's latest release, Internet Explorer 9, gets a perfect 4 out of 4. Chrome or Firefox do not even come close to the score of 4. Even though the web site makes it easy for users to upgrade to the latest version of their choice of browser, Roger Capriotti hopes people will choose IE9, as it blocks more malware compared to Chrome or Firefox." Of note in the Windows Team post is that the latest Microsoft Security Intelligence Report discovered that 0-day exploits account for a mere tenth of a percent of all intrusions. Holes in outdated software and social engineering account for the majority of successful attacks.
Hatta:
NoScript blocks more malware than either.
Tridus:
I've seen the same data from Mcafee, and it was really something. For every computer exploited using a Windows flaw, 100 are exploited using Flash. Acrobat Reader and Java are the other major culprits.
In a lot of ways, browser security itself has never been better. There's several highly capable ones out there in this area. The weak link is some truly terrible plugins.
LordLimeca:
It might have been informative. Seriously, when you accuse Chrome of not meeting the requirement, "Does the browser help protect you from websites that are known to distribute socially engineered malware?" when google's anti-malware service is the basis for at least two browsers, and predates IE's effort by at least a year (probably more like 2), it sort of hampers your credibility.
znerk:
Get Adobe Flash player
This page requires Flash Player version 10.2.0 or higher.
My browser only scored a 2 out of 4, yet was able to keep me from seeing most of the malicious content on the linked page.
NoScript and AdBlockPlus, thank you.
My browser: 1 Microsoft FUD: 0
Moving along, now... so much more internet to see, so little time.
FyberOptic:
Why does everyone fall back on attacking Microsoft for press releases like this? Statistically, IE HAS been safer than other browsers in certain respects nowadays. It's silly to dismiss their complete turnaround in taking security seriously just because it's fun to hate on the company.
Of course there's going to be some marketing thrown into it as well. But what company doesn't? Why isn't everyone attacking Apple when they claim Safari is the fastest and safest browser? Or Mozilla, which has made the same claims for years too? It's not true for either of those, and they certainly can't both be right at the same time. Everyone lets that slide, because it's not cool to hate on them, despite their own terrible histories with security/vulnerability problems.
I haven't used IE for years (stopped for security reasons, in fact), but that doesn't change the fact that I can still offer them kudos for helping keep the web a safer place, especially when they still provide the dominant browser. Fewer infected machines on the internet is beneficial to ALL of us.
rsmith-mac:
Even though the site is the usual mix of MS inaccuracies, one thing it does do a good job pointing out is that Firefox is the odd man out right now when it comes to sandboxing. IE has it, Chrome has it, Safari on the Mac has it. Yet Firefox as the #2/#3 browser in the world lacks it. And while it's of limited use in protecting against attacks on plugins (which are the most common vector), it means it's easier to exploit the browser itself.
The FF devs should be working on getting Firefox appropriately sandboxed, even if it's Windows-only at the start. It would go a long way towards bringing it up to par with Chrome, which is Firefox's real competition.
[Oct 12, 2011] HP Integrated Lights-Out 3 Version: 1.26 (29 Aug 2011)
Installation:
To update firmware from the Linux operating system on target server:
Download the SCEXE file to the target server. Execute:
sh CP015458.scexe
To obtain the firmware image for updating via the iLO user interface, utilities, or scripting interface:
Download the SCEXE file to a client running a Linux operating system. Execute:
sh CP015458.scexe --unpack=directory
This command will unpack the iLO3 bin into a user specified "directory". If the directory does not exist, the unpacker will attempt to create it.
To use HP Smart Update Manager on the Firmware Maintenance CD:
- Place the Firmware Maintenance CD on a USB key using the USB Key Creator Utility.
- Copy CP015458.scexe to /hp/swpackages directory on the USB Key.
- Follow HP Smart Update Manager steps to complete firmware update.
[Oct 12, 2011] CentOS 5.5 ethtool
1.49.1. RHBA-2010:0279: bug fix and enhancement update
An enhanced ethtool package that fixes a number of minor issues is now available.
The ethtool utility allows the querying and changing of specific settings on network adapters. These settings include speed, port, link auto-negotiation settings and PCI locations.
This updated package adds the following enhancements:
* ethtool can now display all NIC speeds, not just 10/100/1000. (BZ#450162)
* the redundant INSTALL file has been removed from the package. (BZ#472034)
* the ethtool usage message has been fixed to not state that -h requires a DEVNAME. (BZ#472038)
* ethtool now recognizes 10000 as a valid speed and includes it as a supported link mode. (BZ#524241, BZ#529395)
All ethtool users should upgrade to this updated package which provides these enhancements.
[Oct 12, 2011] AOL Creates Fully Automated Data Center
October 11, 2011 | Slashdot
miller60 writes with an excerpt from a Data Center Knowledge article: "AOL has begun operations at a new data center that will be completely unmanned, with all monitoring and management being handled remotely. The new 'lights out' facility is part of a broader updating of AOL infrastructure that leverages virtualization and modular design to quickly deploy and manage server capacity. 'These changes have not been easy,' AOL's Mike Manos writes in a blog post about the new facility. 'It's always culturally tough being open to fundamentally changing business as usual.'" Mike Manos's weblog post provides a look into AOL's internal infrastructure. It's easy to forget that AOL had to tackle scaling to tens of thousands of servers over a decade before the term Cloud was even coined.
Wow ... we were doing this 10 years ago, before virtual systems were commonplace; 'computers on a card' were just coming out. The data center was 90 miles away.
All monitoring and managing was done remotely. The only time we ever went to the physical data center was if a physical piece of hardware had to be swapped out. Multiple IP addresses were configured per server so any single server on one tier could act as a failover for another one on the same tier.
We used firewalls to automate failovers, hardware failures were too infrequent to spend money on other methods.
We could rebuild Sun servers in 10 minutes from saved images. All software updates were scripted and automated. A separate maintenance network was maintained. Logins were not allowed except on the maintenance network, and all ports were shut down except for ssh.
A remote serial interface provided hard-console access to each machine if the network to a system wasn't available.
rubycodez:
virtual systems were commonplace in the 1960s. But finally these bus-oriented microcomputers, and PC wintel type "servers" have gotten into it. Young 'uns.......
ebunga:
Eh, machines of that era required constant manual supervision, and uptime was measured in hours, not months or years. That doesn't negate the fact that many new tech fads are poor reimplementations of technology that died for very good reasons.
timeOday:
And other new tech fads are good reimplementations of ideas that didn't pan out in the past but are now feasible due to advances in technology. You really can't generalize without looking at specifics - "somebody tried that a long time ago and it wasn't worth it" doesn't necessarily prove anything.
rednip:
"somebody tried that a long time ago and it wasn't worth it" doesn't necessarily prove anything.
Unless there is some change in technology or technique, past failures are a good indicator of continued inability.
timeOday:
The tradeoff between centralized and decentralized computing is a perfect example of a situation where the technology is constantly evolving at a rapid pace. Whether it's better to have a mainframe, a cluster, a distributed cluster (cloud), or fully decentralized (peer-to-peer) varies from application to application and from year-to-year. None of those options can be ruled in or out by making generalizations from the year 2000, let alone the 1960's.
- One - If there is redundancy and virtualization, AOL can certainly keep services running while a tech goes in, maybe once a week, and swaps out the failed blades that have already been remotely disabled and their usual services relocated. This is not a problem. Our outfit here has a lights-out facility that sees a tech maybe every few weeks, and other than that a janitor keeps the dust bunnies at bay and makes sure the locks work daily. And yes, they've asked him to flip power switches and tell them what color the lights were. He's gotten used to this. That center doesn't have state-of-the-art stuff in it, either.
- Two - Didn't AOL run on a mainframe (or more than one) in the 90s? It predated anything useful, even the Web I think. Netscape was being launched in 1998, Berners-Lee was making a NeXT browser in 1990, and AOL for Windows existed in 1991. Mosaic and Lynx were out in 1993. AOL sure didn't need any PC infrastructure, it predated even Trumpet Winsock, I think, and Linux. I don't think I could have surfed the Web in 1991 with a Windows machine, but I could use AOL.
All about Linux Find the speed of your Ethernet card in Linux
For logging on to the net or for attaching as a node on a LAN, your computer needs a network card. The network card forms the interface between your computer and the network. There are different kinds of network cards available in the market depending on their speed and other features. Here is a tip to find out the characteristics of your network card.
If you want to find what type of network card is used, its speed, on which IRQ it is listed, and the chip type used, you use the following command:
# dmesg | grep eth0
Here eth0 is the first network card. If you have additional cards, they will be named eth1, eth2 and so on. And here is the output of the above command:
divert: allocating divert_blk for eth0
eth0: RealTek RTL8139 at 0xd800, 00:80:48:34:c2:84, IRQ 9
eth0: Identified 8139 chip type 'RTL-8100B/8139D'
divert: freeing divert_blk for eth0
divert: allocating divert_blk for eth0
eth0: RealTek RTL8139 at 0xd800, 00:90:44:34:a5:33, IRQ 9
eth0: Identified 8139 chip type 'RTL-8100B/8139D'
eth0: link up, 100Mbps, full-duplex, lpa 0x41E1
eth0: no IPv6 routers present
...
As you can see from the above listing, my ethernet card is a RealTek RTL8139 chipset based card on IRQ 9 (Interrupt Request). Its speed is 100 Mbps and it is a full-duplex card. And the link is up.
And it uses autonegotiation to bring up the link. You can call the above device a 10/100 NIC.
Another tool which also does the same thing is ethtool. Try the following command on your machine to see the output.
# ethtool eth0
Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 32
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbg
        Wake-on: p
        Current message level: 0x00000007 (7)
        Link detected: yes
Here full duplex, half duplex and auto-negotiation have the following meanings.
Full Duplex - Logic that enables concurrent sending and receiving. This is usually desirable and enabled when your computer is connected to a switch.
Half Duplex - This logic requires a card to only send or receive at a single point of time. When your machine is connected to a Hub, it auto-negotiates itself and uses half duplex to avoid collisions.
Auto-negotiation - This is the process of deciding whether to work in full duplex mode or half duplex mode. An ethernet card supporting autonegotiation will decide for itself which mode is the optimal one depending on the network it is attached to.
Task: Install mii-tool and ethtool tools
If you are using Debian Linux you can install both of these packages with the following command:
# apt-get install ethtool net-tools
If you are using Red Hat Enterprise Linux you can install both of these packages with the following command:
# up2date ethtool net-tools
If you are using Fedora Core Linux you can install both of these packages with the following command:
# yum install ethtool net-tools
Task: Get speed and other information for eth0
Type following command as root user:
# ethtool eth0
Output:
Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 32
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbg
        Wake-on: d
        Current message level: 0x00000007 (7)
        Link detected: yes
Or use the mii-tool command as follows:
# mii-tool eth0
Output:
eth0: negotiated 100baseTx-FD flow-control, link ok
Task: Change the speed and duplex settings
Setup eth0 negotiated speed with mii-tool
Disable autonegotiation, and force the MII to either 100baseTx-FD, 100baseTx-HD, 10baseT-FD, or 10baseT-HD:
# mii-tool -F 100baseTx-HD
# mii-tool -F 10baseT-HD
Setup eth0 negotiated speed with ethtool:
# ethtool -s eth0 speed 100 duplex full
# ethtool -s eth0 speed 10 duplex half
To make these settings permanent you need to create a shell script and call it from /etc/rc.local (Red Hat), or, if you are using Debian, create a script in the /etc/init.d/ directory and run the update-rc.d command to register it.
Read the man pages of mii-tool and ethtool for more information.
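On Red Hat style systems there is also a simpler route; this is a minimal sketch assuming the stock ifcfg layout (the interface name and values are examples only, not from the article). Setting ETHTOOL_OPTS in the interface file makes the network init scripts reapply the options every time the interface comes up:
# /etc/sysconfig/network-scripts/ifcfg-eth0 (fragment; example values)
# the init scripts pass this string to ethtool when eth0 is brought up
ETHTOOL_OPTS="speed 100 duplex full autoneg off"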
[Oct 12, 2011] A Tcpdump Tutorial and Primer danielmiessler.com
Below are a few options (with examples) that will help you greatly when working with the tool. They're easy to forget and/or confuse with other types of filters, i.e. ethereal, so hopefully this page can serve as a reference for you, as it does me.
First off, I like to add a few options to the tcpdump command itself, depending on what I'm looking at. The first of these is -n, which requests that names are not resolved, resulting in the IPs themselves always being displayed. The second is -X, which displays both hex and ascii content within the packet. The final one is -S, which changes the display of sequence numbers to absolute rather than relative. The idea there is that you can't see weirdness in the sequence numbers if they're being hidden from you. Remember, the advantage of using tcpdump vs. another tool is getting manual interaction with the packets.
It's also important to note that tcpdump only takes the first 68 bytes of data from a packet by default (96 bytes as of tcpdump 4.0; see the bracketed note below). If you would like to look at more, add the -s number option to the mix, where number is the number of bytes you want to capture. I recommend using 0 (zero) for a snaplength, which gets everything. Here's a short list of the options I use most:
- -i any : Listen on all interfaces just to see if you're seeing any traffic.
- -n : Don't resolve hostnames.
- -nn : Don't resolve hostnames or port names.
- -X : Show the packet's contents in both hex and ASCII.
- -XX : Same as -X, but also shows the ethernet header.
- -v, -vv, -vvv : Increase the amount of packet information you get back.
- -c : Only get x number of packets and then stop.
- -s : Define the size of the capture (use -s0 unless you are intentionally capturing less.)
- -S : Print absolute sequence numbers.
- -e : Get the ethernet header as well.
- -q : Show less protocol information.
- -E : Decrypt IPSEC traffic by providing an encryption key.
- -s : Set the snaplength, i.e. the amount of data that is being captured in bytes
- -c : Only capture x number of packets, e.g. 'tcpdump -c 3'
[ The default snaplength as of tcpdump 4.0 has changed from 68 bytes to 96 bytes. While this will give you more of a packet to see, it still won't get everything. Use -s 1514 to get full coverage ]
So, based on the kind of traffic I'm looking for, I use a different combination of options to tcpdump, as can be seen below:
- Basic communication // see the basics without many options
# tcpdump -nS
- Basic communication (very verbose) // see a good amount of traffic, with verbosity and no name help
# tcpdump -nnvvS
- A deeper look at the traffic // adds -X for payload but doesn't grab any more of the packet
# tcpdump -nnvvXS
- Heavy packet viewing // the final "s" increases the snaplength, grabbing the whole packet
# tcpdump -nnvvXSs 1514
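These options combine naturally with tcpdump's filter expressions. A small sketch (the interface, host, and port are made-up examples, not from the article):
# full packets, no name resolution, absolute sequence numbers,
# limited to HTTP traffic to or from a single host on eth0
tcpdump -i eth0 -nnvvXSs 1514 host 10.0.0.5 and tcp port 80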
[Oct 11, 2011] Reset ILO (Integrated Lights-Out 2) on HP Server
Stracca Blog
Recently I needed to reset the iLO interface of an HP ProLiant server.
I found that you need to connect via ssh (or telnet) to do it.
Once connected, give these commands:
cd /Map1
reset
Here is an example:
User:admin logged-in to ILOGB87451B7E(10.1.1.15)
iLO 2 Advanced 1.81 at 11:05:47 Jan 15 2010
Server Name: myserver.mydomain.com
Server Power: On

hpiLO-> cd /Map1
status=0
status_tag=COMMAND COMPLETED
/Map1

hpiLO-> reset
status=0
status_tag=COMMAND COMPLETED
Resetting iLO.

CLI session stopped
[Oct 10, 2011] w3perl
W3Perl 3.13 is a Web logfile analyzer. But it can also read FTP/Squid or mail logfiles. It allows most statistical data to be output with graphical and textual information. An administration interface is available to manage the package.(more)
[Oct 10, 2011] Uncrustify 0.59
Written in C++
Uncrustify is a source code beautifier that allows you to banish crusty code. It works with C, C++, C#, D, Java, and Pawn and indents (with spaces only, tabs and spaces, and tabs only), adds and removes... newlines, has a high degree of control over operator spacing, aligns code, is extremely configurable, and is easy to modify (more)
[Oct 10, 2011] The Jim Interpreter 0.72
Jim is a small footprint implementation of the Tcl programming language. It implements a large subset of Tcl and adds new features like references with garbage collection, closures, a built-in object oriented... programming system, functional programming commands, and first class arrays. The interpreter's executable file is only 70 KB in size, and can be reduced by further excluding some commands. It is appropriate for inclusion inside existing programs, for scripting without dependencies, and for embedded systems
prettyprinter.de
This is a Web-based source code beautifier (source code formatter), similar to indent. Please make a backup before you replace your code!
How To Install, Secure, And Automate AWStats (CentOS-RHEL) HowtoForge - Linux Howtos and Tutorials
Now that YUM has its additional repository we are ready to install. From the commandline type:
yum install awstats
Modify AWStats Apache Configuration:
Edit /etc/httpd/conf.d/awstats.conf (Note: when you put your conf file in the /etc/httpd/conf.d/ folder it is automatically loaded as part of the Apache configuration; there is no need to add it again in httpd.conf. This setup is usually chosen for one of two reasons: a cleaner approach that separates different applications into their own configuration files, or a hosted environment that does not allow direct editing of httpd.conf):
Alias /awstats/icon/ /var/www/awstats/icon/
ScriptAlias /awstats/ /var/www/awstats/
<Directory /var/www/awstats/>
    DirectoryIndex awstats.pl
    Options ExecCGI
    order deny,allow
    allow from all
</Directory>
Alias /awstatsclasses "/var/www/awstats/lib/"
Alias /awstats-icon/ "/var/www/awstats/icon/"
Alias /awstatscss "/var/www/awstats/examples/css"
Note: the mod_cgi module must be pre-loaded into Apache, otherwise Apache will not execute the awstats.pl script, it will simply display it. This can be done in two ways: either enable it for the entire web server, or, utilizing VirtualHosts, enable it only for AWStats.
Edit the following lines in the default awstats configuration file /etc/awstats/awstats.localhost.localdomain.conf:
SiteDomain="<server name>.<domain>" HostAliases="<any aliases for the server>"Rename config file:
mv /etc/awstats/awstats.localhost.localdomain.conf /etc/awstats/awstats.<server name>.<domain>.conf
Update Statistics (Note: By default, statistics will be updated every hour.):
/usr/bin/awstats_updateall.pl now -confdir="/etc" -awstatsprog="/var/www/awstats/awstats.pl"
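If the packaged hourly cron job is not present, an equivalent can be added by hand. A minimal sketch (the schedule and file name are arbitrary choices; the paths mirror the command above):
# /etc/cron.d/awstats -- rebuild statistics once an hour, at five minutes past
5 * * * * root /usr/bin/awstats_updateall.pl now -confdir="/etc" -awstatsprog="/var/www/awstats/awstats.pl" >/dev/null 2>&1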
Start Apache:
/etc/init.d/httpd start
To automate startup of Apache on boot up, type
chkconfig --add httpd
Verify Install
Go to http://<server name>.<domain>/awstats/awstats.pl?config=<server name>.<domain>
Securing AWStats
Setting File System Permissions
The webserver needs only read-access to your files in order for you to be able to access AWStats from the browser. Limiting your own permissions will keep you from accidentally messing with files. Just remember that with this setup you will have to run perl to execute scripts rather than executing the scripts themselves.
$ find ./awstats -type d -exec chmod 701 '{}' \;
$ find ./awstats -not -type d -exec chmod 404 '{}' \;
Apache doesn't need direct access to AWStats configuration files, therefore we can secure them tightly without affecting the relationship between them. To ensure that your configuration files are not readable via the browser:
chmod 400 /etc/awstats/*.conf
Protecting The AWStats Directory With htpasswd And .htaccess
Securing the AWStats folder(s) is a multi-step process: ensuring the awstats folder is owned by the user that needs access to it, creating an htpasswd.users file, and adding the corresponding .htaccess file to authenticate against it. Let's first secure the awstats folder by typing the following from the command line:
find ./awstats -type d -exec chmod 701 '{}' \;
find ./awstats -not -type d -exec chmod 404 '{}' \;
Now that our folders have been secured, we'll need to create the .htpasswd.users file. Go to the /etc/awstats folder and execute the following command:
htpasswd -c /etc/awstats/htpasswd.users user
(Select whatever username you'd like.)
It'll ask you to add a password for the user you've selected, add it and re-type it for confirmation and then save. The final step is to create an .htaccess file pointing to the .htpasswd file for authentication. Go to /var/www/awstats/ and create a new file called .htaccess using your favorite editor, typically nano or vi tend to be the more popular ones. In this example we'll use vi. From the command line type
vi .htaccess
An alternate method of creating an .htaccess file is using the Htaccess Password Generator. Add the following content to your newly created .htaccess file:
AuthName "STOP - Do not continue unless you are authorized to view this site! - Server Access" AuthType Basic AuthUserFile /etc/awstats/htpasswd.users Require valid-user htpasswd -c /etc/awstat/htpasswd.users awstats_onlineOnce done, secure the .htaccess file by typing:
chmod 404 awstats/.htaccess
[Oct 10, 2011] coccigrep 1.3
Coccigrep is a semantic grep for the C language. It can be used to find where in code files a given structure is used or where one of its attributes is used, set, or used in a test.
[Oct 10, 2011] Another File Integrity Checker 2.18
afick is another file integrity checker, designed to be fast and fully portable between Unix and Windows platforms. It works by first creating a database that represents a snapshot of the most essential... parts of your computer system. You can then run the script to discover all modifications made since the snapshot was taken (i.e. files added, changed, or removed). The configuration syntax is very close to that of aide or tripwire, and a graphical interface is provided.
[Oct 08, 2011] Microsoft Touch Mouse Microsoft Mouse Microsoft Hardware
[Oct 7, 2011] Excerpts From Steve Jobs' Wikipedia Entry
October 06, 2011 | Moon of Alabama
Considering today's Steve Jobs hype, citing some excerpts from the Wikipedia entry about him seems appropriate.
Jobs returned to his previous job at Atari and was given the task of creating a circuit board for the game Breakout. According to Atari founder Nolan Bushnell, Atari had offered $100 for each chip that was eliminated in the machine. Jobs had little interest in or knowledge of circuit board design and made a deal with Wozniak to split the bonus evenly between them if Wozniak could minimize the number of chips. Much to the amazement of Atari, Wozniak reduced the number of chips by 50, a design so tight that it was impossible to reproduce on an assembly line. According to Wozniak, Jobs told Wozniak that Atari had given them only $700 (instead of the actual $5,000) and that Wozniak's share was thus $350.
...
While Jobs was a persuasive and charismatic director for Apple, some of his employees from that time had described him as an erratic and temperamental manager.
...
In the coming months, many employees developed a fear of encountering Jobs while riding in the elevator, "afraid that they might not have a job when the doors opened. The reality was that Jobs' summary executions were rare, but a handful of victims was enough to terrorize a whole company." Jobs also changed the licensing program for Macintosh clones, making it too costly for the manufacturers to continue making machines.
...
After resuming control of Apple in 1997, Jobs eliminated all corporate philanthropy programs.
...
In 2005, Jobs responded to criticism of Apple's poor recycling programs for e-waste in the U.S. by lashing out at environmental and other advocates at Apple's Annual Meeting in Cupertino in April.
...
In 2005, Steve Jobs banned all books published by John Wiley & Sons from Apple Stores in response to their publishing an unauthorized biography, iCon: Steve Jobs.
The article doesn't go into the outsourcing of the production of Apple products to a Chinese company which essentially uses slave labor, with 16-hour work days and a series of employee suicides. This while Apple products are beyond real price competition and the company is making extraordinary profits.
Jobs was reported to be 42nd on the list of the richest men in the United States.
He marketed some good products. The NeXT cube was nice. Jobs though wasn't a nice man.
b @ jdmckay: NeXT OS & Development tools were 5-10 years beyond... *anything* else out there. NeXT STEP *defined* OOP... when CS professors were still saying it was a fad.
NeXT came out 1988/89.
I learned object oriented programming (OOP) 1985/86 on a Symbolics LISP Machine which had a very nice graphic interface. The machine was of course running at a computer science department at a university and there were several capable CS professors around who were working on such machines and saw them as the future and not as a fad.
Jobs didn't invent with NeXT. He created a really nice package of existing technologies using a UNIX derivative and aspects of the LISP Machine and Smalltalk. Objective-C was developed in the early 1980s; Jobs just licensed it. People at XEROX and elsewhere had been working on such stuff for years before Jobs adopted it.
NeXTStep did not define OOP. It made it more widely available. There were already some 7000+ LISP machines sold before NeXT came onto the market.
What is this thing doing to the network?
To see all network related system calls (name resolution, opening sockets, writing/reading to sockets, etc)
strace -e trace=network curl --head http://www.redhat.com
What files are trying to be opened
A common troubleshooting technique is to see what files an app is reading. You might want to make sure it's reading the proper config file, or looking at the correct cache, etc. `strace` by default shows all file I/O operations.
But to make it a bit easier, you can filter strace output. To see just file open()'s
strace -eopen ls -al
This is a wonderful way to discover any configuration files that might be queried, as well as determining the order of the PATH settings.
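The same filtering idea extends to several system calls at once. A small sketch (the traced command is just an example):
# trace only the file-related calls open, stat and read
strace -e trace=open,stat,read ls -al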
[Oct 06, 2011] Text Processing in Python (a book)
A couple of you make donations each month (out of about a thousand of you reading the text each week). Tragedy of the commons and all that... but if some more of you would donate a few bucks, that would be great support of the author.
In a community spirit (and with permission of my publisher), I am making my book available to the Python community. Minor corrections can be made to later printings, and at the least errata noted on this website. Email me at <[email protected]> .
A few caveats:
(1) This stuff is copyrighted by AW (except the code samples which are released to the public domain). Feel free to use this material personally; but no permission is given for further distribution beyond your personal use.
(2) The book is provided in "smart ASCII" format. This is converted to print (and maybe to fancier electronic formats) by automated scripts (txt->LaTeX->PDF for the printed version).
As a highly sophisticated "digital rights management" system, those scripts are not themselves made readily available. :-)
acknowledgments.txt  FOLKS WHO HAVE MADE THIS BOOK BETTER
intro.txt            INTRODUCTION
chap1.txt            PYTHON BASICS
chap2.txt            BASIC STRING OPERATIONS
chap3.txt            REGULAR EXPRESSIONS
chap4.txt            PARSERS AND STATE-MACHINES
chap5.txt            INTERNET TOOLS AND TECHNIQUES
appendix_a.txt       A SELECTIVE AND IMPRESSIONISTIC SHORT REVIEW OF PYTHON
appendix_b.txt       A DATA COMPRESSION PRIMER
appendix_c.txt       UNDERSTANDING UNICODE
appendix_d.txt       A STATE-MACHINE FOR ADDING MARKUP TO TEXT
glossary.txt         GLOSSARY TERMS
Strace analysis of Linux System Calls by Mark L. Mitchell, Jeffrey Oldham
Oct 12, 2001 | InformIT
Sample Chapter is provided courtesy of New Riders.
Linux currently provides about 200 different system calls. A listing of system calls for your version of the Linux kernel is in /usr/include/asm/unistd.h. Some of these are for internal use by the system, and others are used only in implementing specialized library functions. In this sample chapter, authors Jeffrey Oldham and Mark Mitchell present a selection of system calls that are likely to be the most useful to application and system programmers.
So far, we've presented a variety of functions that your program can invoke to perform system-related functions, such as parsing command-line options, manipulating processes, and mapping memory. If you look under the hood, you'll find that these functions fall into two categories, based on how they are implemented.
- A library function is an ordinary function that resides in a library external to your program. Most of the library functions we've presented so far are in the standard C library, libc. For example, getopt_long and mkstemp are functions provided in the C library.
- A call to a library function is just like any other function call. The arguments are placed in processor registers or onto the stack, and execution is transferred to the start of the function's code, which typically resides in a loaded shared library.
- A system call is implemented in the Linux kernel. When a program makes a system call, the arguments are packaged up and handed to the kernel, which takes over execution of the program until the call completes. A system call isn't an ordinary function call, and a special procedure is required to transfer control to the kernel. However, the GNU C library (the implementation of the standard C library provided with GNU/Linux systems) wraps Linux system calls with functions so that you can call them easily. Low-level I/O functions such as open and read are examples of system calls on Linux.
- The set of Linux system calls forms the most basic interface between programs and the Linux kernel. Each call presents a basic operation or capability.
- Some system calls are very powerful and can exert great influence on the system. For instance, some system calls enable you to shut down the Linux system or to allocate system resources and prevent other users from accessing them. These calls have the restriction that only processes running with superuser privilege (programs run by the root account) can invoke them. These calls fail if invoked by a nonsuperuser process.
Note that a library function may invoke one or more other library functions or system calls as part of its implementation.
Linux currently provides about 200 different system calls. A listing of system calls for your version of the Linux kernel is in /usr/include/asm/unistd.h. Some of these are for internal use by the system, and others are used only in implementing specialized library functions. In this chapter, we'll present a selection of system calls that are likely to be the most useful to application and system programmers.
Most of these system calls are declared in <unistd.h>.
8.1 Using strace
Before we start discussing system calls, it will be useful to present a command with which you can learn about and debug system calls. The strace command traces the execution of another program, listing any system calls the program makes and any signals it receives.
To watch the system calls and signals in a program, simply invoke strace, followed by the program and its command-line arguments. For example, to watch the system calls that are invoked by the hostname command, use this command:
% strace hostname
This produces a couple screens of output. Each line corresponds to a single system call. For each call, the system call's name is listed, followed by its arguments (or abbreviated arguments, if they are very long) and its return value. Where possible, strace conveniently displays symbolic names instead of numerical values for arguments and return values, and it displays the fields of structures passed by a pointer into the system call. Note that strace does not show ordinary function calls.
In the output from strace hostname, the first line shows the execve system call that invokes the hostname program:
execve("/bin/hostname", ["hostname"], [/* 49 vars */]) = 0The first argument is the name of the program to run; the second is its argument list, consisting of only a single element; and the third is its environment list, which strace omits for brevity. The next 30 or so lines are part of the mechanism that loads the standard C library from a shared library file.
Toward the end are system calls that actually help do the program's work. The uname system call is used to obtain the system's hostname from the kernel,
uname({sys="Linux", node="myhostname", ...}) = 0Observe that strace helpfully labels the fields (sys and node) of the structure argument. This structure is filled in by the system call-Linux sets the sys field to the operating system name and the node field to the system's hostname. The uname call is discussed further in Section 8.15, "uname."
Finally, the write system call produces output. Recall that file descriptor 1 corresponds to standard output. The third argument is the number of characters to write, and the return value is the number of characters that were actually written.
write(1, "myhostname\n", 11) = 11This may appear garbled when you run strace because the output from the hostname program itself is mixed in with the output from strace.
If the program you're tracing produces lots of output, it is sometimes more convenient to redirect the output from strace into a file. Use the option -o filename to do this.
Understanding all the output from strace requires detailed familiarity with the design of the Linux kernel and execution environment. Much of this is of limited interest to application programmers. However, some understanding is useful for debugging tricky problems or understanding how other programs work.
Rudimentary profiling
One thing that strace can be used for that is useful for debugging performance problems is some simple profiling.
strace -c ls -la
Invoking strace with '-c' will cause a cumulative report of system call usage to be printed. This includes the approximate amount of time spent in each call, and how many times a system call is made.
This can sometimes help pinpoint performance issues, especially if an app is doing something like repeatedly opening/closing the same files.
strace -tt ls -al
The -tt option causes strace to print out the time each call finished, in microseconds.
strace -r ls -al
The -r option causes strace to print out the time since the last system call. This can be used to spot where a process is spending large amounts of time in user space or in especially slow syscalls.
[Oct 06, 2011] Oracle Updates Linux, Sticks with Intel and Promises Solaris by Sean Michael Kerner
October 5, 2011 | Datamation
Oracle is now updating that kernel to version 2, delivering even more performance thanks to an improved scheduler for high thread count applications like Java. The Unbreakable Enterprise Kernel 2 release also provides transmit packet steering across CPUs, which Screven said delivers lower network latency. There is also a virtual switch that enables VLAN isolation as well as Quality of Service (QoS) and monitoring.
The new kernel also provides Linux containers, which are similar to the Solaris containers, for virtualization isolation.
"Linux containers give you low-overhead operating system isolation," Screven said.
In another nod to Solaris, Oracle is now also bringing Solaris' Dtrace to Linux. Dtrace is one of the primary new features that debuted in Solaris 10 and provides administrators better visibility into their system performance.
More info
Overview of linux system calls (http://www.quepublishing.com/articles/article.asp?p=23618&rl=1)
PDF version of Advanced Linux Programming (http://www.advancedlinuxprogramming.com/alp-folder)
[Oct 06, 2011] Linux Troubleshooting Wiki
strace
Strace is one of the most powerful tools available for troubleshooting. It allows you to see what an application is doing, to some degree.
`strace` displays all the system calls that an application is making, what arguments it passes to them, and what the return code is. A system call is generally something that requires the kernel to do something. This generally means I/O of all sorts, process management, shared memory and IPC usage, memory allocation, and network usage.
examples
The simplest example of using strace is as follows:
strace ls -al
This starts the strace process, which then starts `ls -al` and shows every system call. For `ls -al` this is mostly I/O related calls. You can see it calling stat() on files, opening config files, opening the libs it is linked against, allocating memory, and calling write() to output the contents to the screen.
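A few illustrative lines of the kind of output to expect (the exact calls, paths and sizes will differ from system to system):
open("/etc/ld.so.cache", O_RDONLY) = 3
stat64(".", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
write(1, "total 24\n", 9) = 9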
Following forks and attaching to running processes
Often it is difficult or impossible to run a command under strace (an apache httpd for instance). In this case, it's possible to attach to an already running process.
strace -p 12345
where 12345 is the PID of the process. This is very handy for trying to determine why a process has stalled. Many times a process might be blocking while waiting for I/O. With strace -p, this is easy to detect.
Lots of processes start other processes. It is often desirable to see a strace of all the processes.
strace -f /etc/init.d/httpd start
This will strace not just the bash process that runs the script, but also any helper utilities executed by the script, and httpd itself.
Since strace output is often a handy way to help a developer solve a problem, it's useful to be able to write it to a file. The easiest way to do this is with the -o option.
strace -o /tmp/strace.out program
Being somewhat familiar with the common syscalls for Linux is helpful in understanding strace output, but most of the common ones are simple enough to figure out from context.
A line in strace output is essentially the system call name, the arguments to the call in parentheses (sometimes truncated...), and then the return status. A return status indicating an error is typically -1, but this varies. For more information about the return status of a particular system call, invoke `man 2 syscallname`. Usually the return status will be documented in the "RETURN VALUE" section.
Another thing to note about strace is that it often shows the "errno" status. If you're not familiar with UNIX system programming, errno is a global variable that gets set when certain calls fail; its value identifies the particular error. More info on this can be found in `man errno`. Typically, strace will show the brief description for any errno value it gets, e.g.
open("/foo/bar", O_RDONLY) = -1 ENOENT (No such file or directory)strace -s Xthe -s option tells strace to show the first X digits of strings. The default is 32 characters, which sometimes is not enough. This will increase the info available to the user.
[Oct 06, 2011] Directory permissions in chroot SFTP
We ban this because allowing a user write access to a chroot target is dangerously close to allowing write access to the root of a filesystem. If you want the default directory that users start in to be writable, then you must create their home directory under the chroot. After sshd(8) has chrooted to the ChrootDirectory, it will chdir to the home directory as normal.
OpenSSH Dev
Re: Directory permissions in chroot SFTP Remove Highlighting [In reply to]
On Tue, 11 Nov 2008, Carlo Pradissitto wrote:
> Hi,
> I configured openssh 5.1p1 for sftp server.
>
> Here the specifications in sshd_config file:
>
> Subsystem sftp internal-sftp
> Match Group sftp
> ForceCommand internal-sftp
> ChrootDirectory /home/%u
> AllowTcpForwarding no
>
> When a user is logged in, he can't upload his document and he receives
> this message:
>
> carlo [at] Musi:~$ sftp user [at] 213
> Connecting to 213.217.147.123...
> user [at] 213's password:
> sftp> put prova
> Uploading prova to /prova
> Couldn't get handle: Permission denied
> sftp>
> From the sshd_config manual page:
> ChrootDirectory
> Specifies a path to chroot(2) to after authentication. This path,
> and all its components, must be root-owned directories that are
> not writable by any other user or group.
> Here the directory permissions:
>
> [root [at] sftp-serve ~]# ls -la /home/user/
> total 24
> drwxr-xr-x 6 root sftp 4096 Nov 10 18:05 .
> drwxr-xr-x 54 root root 4096 Nov 10 16:48 ..
>
> OK, my user is a sftp group member, and the sftp group hasn't
> sufficient permissions to write in user's home directory.
Your permissions are correct.
> I add the write permission for the sftp group:
>
> [root [at] sftp-serve ~]# chmod 770 /home/user/
> [root [at] sftp-serve ~]# ls -la /home/user/
> total 24
> drwxrwx--- 6 root sftp 4096 Nov 10 18:05 .
> drwxr-xr-x 54 root root 4096 Nov 10 16:48 ..
>
>
> But now the user can't access:
>
> carlo [at] Musi:~$ sftp user [at] 213
> Connecting to 213.217.147.123...
> user [at] 213's password:
> Read from remote host 213.217.145.321: Connection reset by peer
> Couldn't read packet: Connection reset by peer
>
> Here the error message in /var/log/messages of sftp-server:
>
> Nov 11 11:33:02 sftp-server sshd[10254]: Accepted password for user
> from 213.217.145.329 port 38685 ssh2
> Nov 11 11:33:02 sftp-server sshd[10256]: fatal: bad ownership or modes
> for chroot directory "/home/user"
Right, this is on purpose. We ban this because allowing a user write access to a chroot target is dangerously close to allowing write access to the root of a filesystem.
If you want the default directory that users start in to be writable then you must create their home directory under the chroot. After sshd(8) has chrooted to the ChrootDirectory, it will chdir to the home directory as normal. So, for a passwd line like:
djm:*:1000:1000:Damien Miller:/home/djm:/bin/ksh
Create a home directory "/chroot/djm/home/djm". Make the final "djm" directory user-owned and writable (everything else must be root-owned). Set "ChrootDirectory /chroot" in sshd_config.
A variant of this that yields less deep directory trees would be to set the passwd file up as:
djm:*:1000:1000:Damien Miller:/upload:/bin/ksh
Create "/chroot/djm/upload", with "upload" the only user-owned and writable
component.-d
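A minimal sketch of the layout described above, using the example user and paths from the post (adjust the user, group and modes to your environment):
# everything on the path to and including the chroot must be root-owned and not group/world-writable
mkdir -p /chroot/djm/home/djm
chown root:root /chroot /chroot/djm /chroot/djm/home
chmod 755 /chroot /chroot/djm /chroot/djm/home
# only the final directory is owned and writable by the user
chown djm:djm /chroot/djm/home/djm
chmod 755 /chroot/djm/home/djm
Then set "ChrootDirectory /chroot" in sshd_config as described above.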
Chrooting SFTP - a knol by Dirk H. Schulz
Chrooting is a technique of restricting a process or user (who, in UNIX, is just a process) to a certain directory, which becomes its root directory "/". Since this directory is the topmost entry of the process' file system, it cannot break out of this jail.
Giving somebody SSH/SFTP access to a server has the disadvantage of letting him/her roam the entire file system (having a close look at it, one can find lots of files that are world readable). So there is a need to restrict those users to certain directories, in most cases their home directories, webserver document folders or whatever.
Here is how to do that easily using onboard means.
In this article I show how to set up chrooting by means of PAM. I have done and verified this on RHEL5, so you should be able to reproduce it exactly on CentOS 5.
In this setup all users reside in one jail; the home directories (the individual root directories) are subdirectories of the jail. That has the psychological disadvantage of one user being able to cd into other users' home dirs (without being able to read or write anything there, see below), and it has the advantage of one directory of shared binaries. Inside a chroot jail, the user has access only to the binaries INSIDE the jail. Typing e.g. "ls" in the shell only works if the program file "ls" is located inside the jail. So if every user has their own jail, every user needs their own set of binaries - that can mean a lot of redundant copying.
So we use a shared jail with one set of binaries.
Dependencies
pam_chroot.so has to be installed. On RHEL5 it is installed by default and located in /lib/security/. For other distros this has to be checked.
Configuration
In /etc/pam.d/sshd the following entry has to be added at the end:
session required pam_chroot.so debug
The "debug" is optional and can be used for troubleshooting during config and verification phase.
In /etc/ssh/sshd_config the following has to be uncommented or added:
UsePAM yes
Next we have to create the chroot jail. The place in the file system is up to the server admin; I have used /var/chroot:
mkdir /var/chroot
chmod 755 /var/chroot
As explained above a set of binaries, config files and others is needed for SFTP to work inside the jail. Here is the complete list:
http://knowledgebase.kinzesberg.de/files/lslr_varchroot.txt
These files should be copied with
cp -p /etc/onefile /var/chroot/etc/onefile
to preserve permissions.
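If you prefer to script the copying, a rough sketch along these lines could be used (illustrative only: the binary list is an example, and GNU cp's --parents option is assumed, which recreates the directory structure under the jail):
# copy a few binaries plus the libraries they need into the jail
for f in /bin/ls /bin/cp /usr/bin/scp; do
    cp -p --parents "$f" /var/chroot
    for lib in $(ldd "$f" | awk '{for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i}'); do
        cp -p --parents "$lib" /var/chroot
    done
done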
In addition to the files in this list there is the directory /var/chroot/home where the home directories/chroot directories of the SFTP users reside.
The list of binaries, libraries etc. has been thoroughly tested. It makes SFTP work, but not SSH. In our setup SSH was not needed, and preventing it was an additional means of security. So if both are needed, there has to be quite some testing to find out what SSH needs additionally.
Next, there are a few device files that have to be created inside the jail:
cd /var/chroot/dev/
mknod random c 1 8
mknod tty c 5 0
mknod urandom c 1 9
mknod zero c 1 5
mknod pts/1 c 136 1
mknod null c 1 3
To make user management easier we have symlinked /home to /var/chroot/home. One beneficial side effect of this is that /etc/passwd and /var/chroot/etc/passwd do not have to differ.
mv /home /var/chroot
ln -s /var/chroot/home /home
Adding Users
All our users are system users, so they are added the way we always do it (useradd in the shell or some GUI tool). Remember to copy /etc/passwd, /etc/group and /etc/shadow into the jail afterwards.
Every user that should be chrooted needs an entry in /etc/security/chroot.conf with user name and path to the jail, e.g.
testuser /var/chroot/
That makes it possible to exclude certain users (e.g. root) from the chroot mechanism. Otherwise remote administration of our server would become a bit complicated. :-)
Do not faint, please
If you test the setup now you will find that a chrooted user can cd into the home dir/chroot dir of every other user. No problem! He will have no rights to read and write there, so "ls" shows an empty directory even if it is filled.
One more thing
Chrooting SFTP users can be combined with chrooted FTP access using vsftpd. So the same user can use FTP and SFTP and be jailed into the same directory. Chrooting vsftpd is described in a separate article.
www.brandonhutchinson.com
Fedora Core 1 instructions
1. Remove the vendor-supplied OpenSSH RPMs.
# rpm -e openssh openssh-clients openssh-server
2. Download and install the latest openssh-chroot tarball from http://chrootssh.sourceforge.net/download/
3. Create an sshd startup/shutdown script.
cat << END_FILE > /etc/init.d/sshd
#!/bin/sh
# chkconfig: 2345 55 25
# description: OpenSSH server daemon
case $1 in
'start' )
/usr/local/sbin/sshd
;;
'stop' )
pkill sshd
;;
*)
echo "usage: `basename $0` {start|stop}"
esac
END_FILE
4. Add the sshd startup/shutdown script to chkconfig.
# /sbin/chkconfig --add sshd
5. Create the chroot environment. The following shell script installs all $REQUIRED_CHROOT_FILES, shared library dependencies, and required device files in $CHROOT_DIR. Note: /lib/libnss_files.so.2 is required for UID-to-username resolution. Otherwise, you may receive "cannot find username for UID" errors.
#!/bin/sh
CHROOT_DIR=/chroot
REQUIRED_CHROOT_FILES=" /bin/cp \
/bin/ls \
/bin/mkdir \
/bin/mv \
/bin/rm \
/bin/rmdir \
/bin/sh \
/usr/local/libexec/sftp-server \
/lib/libnss_files.so.2"
# Create CHROOT_DIR
[ ! -d $CHROOT_DIR ] && mkdir $CHROOT_DIR
cd $CHROOT_DIR
# Copy REQUIRED_CHROOT_FILES and shared library dependencies
# to chroot environment
for FILE in $REQUIRED_CHROOT_FILES
do
DIR=`dirname $FILE | cut -c2-`
[ ! -d $DIR ] && mkdir -p $DIR
cp $FILE `echo $FILE | cut -c2-`
for SHARED_LIBRARY in `ldd $FILE | awk '{print $3}'`
do
DIR=`dirname $SHARED_LIBRARY | cut -c2-`
[ ! -d $DIR ] && mkdir -p $DIR
[ ! -s "`echo $SHARED_LIBRARY | cut -c2-`" ] && cp $SHARED_LIBRARY `echo $SHARED_LIBRARY | cut -c2-`
done
done
# Create device files
mkdir $CHROOT_DIR/dev
mknod $CHROOT_DIR/dev/null c 1 3
mknod $CHROOT_DIR/dev/zero c 1 5
# Create chroot /etc/passwd placeholder
mkdir $CHROOT_DIR/etc
touch $CHROOT_DIR/etc/passwd
6. Create the chroot user. The chroot user's home directory should use the following format:
/path_to_chroot/./home_directory
To support chrooted ssh and sftp, use /bin/sh as the chroot user's shell.
To support chrooted sftp-only, use /usr/local/libexec/sftp-server as the chroot user's shell.
ex. $ grep hutch /etc/passwd
hutchib:x:1000:1:Brandon Hutchinson:/home/chroot/./home/hutch:/bin/sh
7. Add each chroot user's /etc/passwd entry to /etc/passwd within the chroot directory. Note: if /etc/passwd does not exist in the chroot directory, chrooted sftp will work, but chrooted ssh will not.
ex. # grep hutch /etc/passwd >> /home/chroot/etc/passwd
When user "hutch" logs in via ssh or sftp, he will be chrooted to /home/chroot and placed in the /home/hutch directory.
Solaris 7 instructions
1. Download and install the latest openssh-chroot tarball from http://chrootssh.sourceforge.net/download/
2. Create the chroot environment.
Note: the file system containing the chroot jail must be mounted suid. Attempting to use a chroot jail in a nosuid-mounted file system may result in the following error message:
ld.so.1: /bin/sh: fatal: /dev/zero: open failed: No such file or directory
Remounting the nosuid file system with mount -o remount,suid file_system will not fix the problem. You must unmount the file system, remove nosuid from /etc/vfstab (if applicable), and remount the file system.
The following shell script builds a chroot environment for OpenSSH 3.7.1p2 on a Solaris 7 Sparc system.
#!/bin/sh
CHROOT_DIRECTORY=chroot_directory
mkdir $CHROOT_DIRECTORY
cd $CHROOT_DIRECTORY
# Create directories
mkdir -m 755 -p bin dev usr/local/ssl/lib usr/local/lib usr/local/libexec usr/lib usr/bin usr/platform/`uname -i`/lib
# Copy files
cp -p /bin/sh $CHROOT_DIRECTORY/bin/sh
cp -p /usr/bin/cp /usr/bin/ls /usr/bin/mkdir /usr/bin/mv /usr/bin/rm /usr/bin/rmdir $CHROOT_DIRECTORY/usr/bin
cp -p /usr/lib/ld.so.1 /usr/lib/libc.so.1 /usr/lib/libdl.so.1 /usr/lib/libgen.so.1 /usr/lib/libmp.so.2 /usr/lib/libnsl.so.1 /usr/lib/libsocket.so.1 /usr/lib/librt.so.1 /usr/lib/libaio.so.1 $CHROOT_DIRECTORY/usr/lib
cp -p /usr/local/lib/libz.so $CHROOT_DIRECTORY/usr/local/lib
cp -p /usr/local/libexec/sftp-server $CHROOT_DIRECTORY/usr/local/libexec
cp -p /usr/local/ssl/lib/libcrypto.so.0.9.6 $CHROOT_DIRECTORY/usr/local/ssl/lib
cp -p /usr/platform/`uname -i`/lib/libc_psr.so.1 $CHROOT_DIRECTORY/usr/platform/`uname -i`/lib
# Create required character devices
mknod $CHROOT_DIRECTORY/dev/zero c 13 12
mknod $CHROOT_DIRECTORY/dev/null c 13 2
chmod 666 $CHROOT_DIRECTORY/dev/zero $CHROOT_DIRECTORY/dev/null
3. Create the chroot user. The chroot user's home directory should use the following format:
/path_to_chroot/./home_directory
To support chrooted ssh and sftp, choose /bin/sh as the chroot user's shell.
To support chrooted sftp-only, choose /usr/local/libexec/sftp-server as the chroot user's shell.
ex. $ grep hutch /etc/passwd
hutchib:x:1000:1:Brandon Hutchinson:/home/chroot/./home/hutch:/bin/sh
When user "hutch" logs in via ssh or sftp, he will be chrooted to /home/chroot and placed in the /home/hutch directory.
Back to brandonhutchinson.com.
[Oct 06, 2011] Apple's Steve Jobs, visionary leader, dead at 56
Yahoo! Finance
Six years ago, Jobs had talked about how a sense of his mortality was a major driver behind that vision.
"Remembering that I'll be dead soon is the most important tool I've ever encountered to help me make the big choices in life," Jobs said during a Stanford commencement ceremony in 2005.
"Because almost everything -- all external expectations, all pride, all fear of embarrassment or failure -- these things just fall away in the face of death, leaving only what is truly important."
"Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked. There is no reason not to follow your heart."
[Oct 06, 2011] DTrace for Linux
Adam Leventhal's blog
Back in 2005, I worked on Linux-branded Zones, Solaris containers that contained a Linux user environment. I wrote a coyly-titled blog post about examining Linux applications using DTrace. The subject was honest - we used precisely the same techniques to bring the benefits of DTrace to Linux applications - but the title wasn't completely accurate. That wasn't exactly "DTrace for Linux", it was more precisely "The Linux user-land for Solaris where users can reap the benefits of DTrace"; I chose the snappier title.
I also wrote about DTrace knockoffs in 2007 to examine the Linux counter-effort. While the project is still in development, it hasn't achieved the functionality or traction of DTrace. Suggesting that Linux was inferior brought out the usual NIH reactions which led me to write a subsequent blog post about a theoretical port of DTrace to Linux. While a year later Paul Fox started exactly such a port, my assumption at the time was that the primary copyright holder of DTrace wouldn't be the one porting DTrace to Linux. Now that Oracle is claiming a port, the calculus may change a bit.
What is Oracle doing? Even among Oracle employees, there's uncertainty about what was announced. Ed Screven gave us just a couple of bullet points in his keynote; Sergio Leunissen, the product manager for OEL, didn't have further details in his OpenWorld talk beyond it being a beta of limited functionality; and the entire Solaris team seemed completely taken by surprise.
What is in the port? Leunissen stated that only the kernel components of DTrace are part of the port. It's unclear whether that means just fbt or includes sdt and the related providers. It sounds certain, though, that it won't pass the DTrace test suite which is the deciding criterion between a DTrace port and some sort of work in progress.
What is the license? While I abhor GPL v. CDDL discussions, this is a pretty interesting case. According to the release manager for OEL, some small kernel components and header files will be dual-licensed while the bulk of DTrace - the kernel modules, libraries, and commands - will use the CDDL as they had under (the now defunct) OpenSolaris (and to the consternation of Linux die-hards I'm sure). Oracle already faces an interesting conundrum with their CDDL-licensed files: they can't take the fixes that others have made to, for example, ZFS without needing to release their own fixes. The DTrace port to Linux is interesting in that Oracle apparently thinks that the CDDL license will make DTrace too toxic for other Linux vendors to touch.
5 Simple Ways To Troubleshoot Using Strace by Vidar Hokstad
Jun 11, 2008 | Vidar Hokstad V2.0
Strace is quite simply a tool that traces the execution of system calls. In its simplest form it can trace the execution of a binary from start to end, and output a line of text with the name of the system call, the arguments and the return value for every system call over the lifetime of the process.
But it can do a lot more:
- It can filter based on the specific system call or groups of system calls
- It can profile the use of system calls by tallying up the number of times a specific system call is used, and the time taken, and the number of successes and errors.
- It traces signals sent to the process.
- It can attach to any running process by pid.
If you've used other Unix systems, this is similar to "truss". Another (much more comprehensive) tool is Sun's DTrace.
This is just scratching the surface, and in no particular order of importance:
1) Find out which config files a program reads on startup
Ever tried figuring out why some program doesn't read the config file you thought it should? Had to wrestle with custom compiled or distro-specific binaries that read their config from what you consider the "wrong" location?
The naive approach:
$ strace php 2>&1 | grep php.ini
open("/usr/local/bin/php.ini", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/local/lib/php.ini", O_RDONLY) = 4
lstat64("/usr/local/lib/php.ini", {st_mode=S_IFLNK|0777, st_size=27, ...}) = 0
readlink("/usr/local/lib/php.ini", "/usr/local/Zend/etc/php.ini", 4096) = 27
lstat64("/usr/local/Zend/etc/php.ini", {st_mode=S_IFREG|0664, st_size=40971, ...}) = 0
So this version of PHP reads php.ini from /usr/local/lib/php.ini (but it tries /usr/local/bin first).
The more sophisticated approach if I only care about a specific syscall:
$ strace -e open php 2>&1 | grep php.ini
open("/usr/local/bin/php.ini", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/local/lib/php.ini", O_RDONLY) = 4
The same approach works for a lot of other things. Have multiple versions of a library installed at different paths and wonder exactly which one actually gets loaded? etc.
2) Why does this program not open my file?
Ever run into a program that silently refuses to read a file it doesn't have read access to, and only figured it out after swearing for ages because you thought it simply couldn't find the file? Well, you already know what to do:
$ strace -e open,access yourprogram 2>&1 | grep your-filename
Look for an open() or access() syscall that fails (yourprogram and your-filename are placeholders).
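For illustration, a permission problem typically shows up as something like this (the path is invented for the example):
open("/etc/some-app.conf", O_RDONLY) = -1 EACCES (Permission denied)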
3) What is that process doing RIGHT NOW?
Ever had a process suddenly hog lots of CPU? Or had a process seem to be hanging?
Then you find the pid, and do this:
root@dev:~# strace -p 15427
Process 15427 attached - interrupt to quit
futex(0x402f4900, FUTEX_WAIT, 2, NULL
Process 15427 detached
Ah. So in this case it's hanging in a call to futex(). Incidentally in this case it doesn't tell us all that much - hanging on a futex can be caused by a lot of things (a futex is a locking mechanism in the Linux kernel). The above is from a normally working but idle Apache child process that's just waiting to be handed a request.
But "strace -p" is highly useful because it removes a lot of guesswork, and often removes the need for restarting an app with more extensive logging (or even recompile it).
4) What is taking time?
You can always recompile an app with profiling turned on, and for accurate information, especially about which parts of your own code are taking time, that is what you should do. But often it is tremendously useful to be able to just quickly attach strace to a process to see what it's currently spending time on, especially to diagnose problems. Is that 90% CPU use because it's actually doing real work, or is something spinning out of control?
Here's what you do:
root@dev:~# strace -c -p 11084
Process 11084 attached - interrupt to quit
Process 11084 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 94.59    0.001014          48        21           select
  2.89    0.000031           1        21           getppid
  2.52    0.000027           1        21           time
------ ----------- ----------- --------- --------- ----------------
100.00    0.001072                      63           total
root@dev:~#
After you've started strace with -c -p you just wait for as long as you care to, and then exit with ctrl-c. Strace will spit out profiling data as above.
In this case, it's an idle Postgres "postmaster" process that's spending most of its time quietly waiting in select(). It's calling getppid() and time() in between each select() call, which is a fairly standard event loop.
You can also run this "start to finish", here with "ls":
root@dev:~# strace -c >/dev/null ls
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 23.62    0.000205         103         2           getdents64
 18.78    0.000163          15        11         1 open
 15.09    0.000131          19         7           read
 12.79    0.000111           7        16           old_mmap
  7.03    0.000061           6        11           close
  4.84    0.000042          11         4           munmap
  4.84    0.000042          11         4           mmap2
  4.03    0.000035           6         6         6 access
  3.80    0.000033           3        11           fstat64
  1.38    0.000012           3         4           brk
  0.92    0.000008           3         3         3 ioctl
  0.69    0.000006           6         1           uname
  0.58    0.000005           5         1           set_thread_area
  0.35    0.000003           3         1           write
  0.35    0.000003           3         1           rt_sigaction
  0.35    0.000003           3         1           fcntl64
  0.23    0.000002           2         1           getrlimit
  0.23    0.000002           2         1           set_tid_address
  0.12    0.000001           1         1           rt_sigprocmask
------ ----------- ----------- --------- --------- ----------------
100.00    0.000868                      87        10 total
Pretty much what you'd expect: it spends most of its time in two calls to read the directory entries (only two since it was run on a small directory).
5) Why the **** can't I connect to that server?
Debugging why some process isn't connecting to a remote server can be exceedingly frustrating. DNS can fail, connect can hang, the server might send something unexpected back etc. You can use tcpdump to analyze a lot of that, and that too is a very nice tool, but a lot of the time strace will give you less chatter, simply because it will only ever return data related to the syscalls generated by "your" process. If you're trying to figure out what one of hundreds of running processes connecting to the same database server does for example (where picking out the right connection with tcpdump is a nightmare), strace makes life a lot easier.
This is an example of a trace of "nc" connecting to www.news.com on port 80 without any problems:
$ strace -e poll,select,connect,recvfrom,sendto nc www.news.com 80
sendto(3, "\24\0\0\0\26\0\1\3\255\373NH\0\0\0\0\0\0\0\0", 20, 0, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 20
connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory)
connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory)
connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("62.30.112.39")}, 28) = 0
poll([{fd=3, events=POLLOUT, revents=POLLOUT}], 1, 0) = 1
sendto(3, "\213\321\1\0\0\1\0\0\0\0\0\0\3www\4news\3com\0\0\34\0\1", 30, MSG_NOSIGNAL, NULL, 0) = 30
poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, 5000) = 1
recvfrom(3, "\213\321\201\200\0\1\0\1\0\1\0\0\3www\4news\3com\0\0\34\0\1\300\f"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("62.30.112.39")}, [16]) = 153
connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("62.30.112.39")}, 28) = 0
poll([{fd=3, events=POLLOUT, revents=POLLOUT}], 1, 0) = 1
sendto(3, "k\374\1\0\0\1\0\0\0\0\0\0\3www\4news\3com\0\0\1\0\1", 30, MSG_NOSIGNAL, NULL, 0) = 30
poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, 5000) = 1
recvfrom(3, "k\374\201\200\0\1\0\2\0\0\0\0\3www\4news\3com\0\0\1\0\1\300\f"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("62.30.112.39")}, [16]) = 106
connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("62.30.112.39")}, 28) = 0
poll([{fd=3, events=POLLOUT, revents=POLLOUT}], 1, 0) = 1
sendto(3, "\\\2\1\0\0\1\0\0\0\0\0\0\3www\4news\3com\0\0\1\0\1", 30, MSG_NOSIGNAL, NULL, 0) = 30
poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, 5000) = 1
recvfrom(3, "\\\2\201\200\0\1\0\2\0\0\0\0\3www\4news\3com\0\0\1\0\1\300\f"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("62.30.112.39")}, [16]) = 106
connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("216.239.122.102")}, 16) = -1 EINPROGRESS (Operation now in progress)
select(4, NULL, [3], NULL, NULL) = 1 (out [3])
So what happens here?
Notice the connection attempts to /var/run/nscd/socket? They mean nc first tries to connect to NSCD - the Name Service Cache Daemon - which is usually used in setups that rely on NIS, YP, LDAP or similar directory protocols for name lookups. In this case the connects fail.
It then moves on to DNS (DNS is port 53, hence the "sin_port=htons(53)" in the following connect). You can see it then does a sendto() call, sending a DNS packet that contains www.news.com. It then reads back a packet. For whatever reason it tries three times, the last time with a slightly different request. My best guess why in this case is that www.news.com is a CNAME (an "alias"), and the multiple requests may just be an artifact of how nc deals with that.
Then in the end, it finally issues a connect() to the IP it found. Notice it returns EINPROGRESS. That means the connect was non-blocking - nc wants to go on processing. It then calls select(), which succeeds when the connection was successful.
Try adding "read" and "write" to the list of syscalls given to strace and enter a string when connected, and you'll get something like this:
read(0, "test\n", 1024) = 5
write(3, "test\n", 5) = 5
poll([{fd=3, events=POLLIN, revents=POLLIN}, {fd=0, events=POLLIN}], 2, -1) = 1
read(3, "
This shows it reading "test" + linefeed from standard in, and writing it back out to the network connection, then calling poll() to wait for a reply, reading the reply from the network connection and writing it to standard out. Everything seems to be working right.
Comments
Brian
strace -c `ps h -u 505 | awk '$1 > 3 {print "-p "$1}' | perl -pe 's/\n/ /g'`
Shows what my oracle (uid 505) is doing. I'd not used -c before I read your post.. THanks!
Prakash
Great article. It will be of immense help in troubleshooting problems.
[root@www01 ~]# strace -c -p 2079
Process 2079 attached - interrupt to quit
Process 2079 detached
I couldn't get profiling data for a Java process. Why is Java different from other system processes?
Thanks, Prakash.
Vidar Hokstad
Prakash,
Most likely the process wasn't executing any system calls while you were tracing it.
If the JVM you're using is multi-threaded, it's possible any syscalls were executed by another thread. If not, it's possible it simply was executing only in userspace at the time, and strace can't trace that.
You can try "ltrace" that Josh mentioned. "ltrace" is sort of a cousin of strace that traces dynamic library calls instead of syscalls, but that only helps for non-Java code that's dynamically linked into the JVM.
If you have access to a machine with Solaris/OpenSolaris you can also use "dtrace" which supports tracing Java code in much more detail.
2008-06-23 19:21 UTC
[Oct 05, 2011] Jobs, Apple Cofounder and Visionary, Is Dead
NYTimes.com
Apple said in a press release that it was "deeply saddened" to announce that Mr. Jobs had passed away on Wednesday.
"Steve's brilliance, passion and energy were the source of countless innovations that enrich and improve all of our lives," the company said. "The world is immeasurably better because of Steve.
Mr. Jobs stepped down from the chief executive role in late August, saying he could no longer fulfill his duties, and became chairman. He underwent surgery for pancreatic cancer in 2004, and received a liver transplant in 2009.
Rarely has a major company and industry been so dominated by a single individual, and so successful. His influence went far beyond the iconic personal computers that were Apple's principal product for its first 20 years. In the last decade, Apple has redefined the music business through the iPod, the cellphone business through the iPhone and the entertainment and media world through the iPad. Again and again, Mr. Jobs gambled that he knew what the customer would want, and again and again he was right.
The early years of Apple long ago passed into legend: the two young hippie-ish founders, Mr. Jobs and Steve Wozniak; the introduction of the first Macintosh computer in 1984, which stretched the boundaries of what these devices could do; Mr. Jobs's abrupt exit the next year in a power struggle. But it was his return to Apple in 1996 that started a winning streak that raised the company from the near-dead to its current position of strength.
Bill Gates, the former chief executive of Microsoft, said in a statement that he was "truly saddened to learn of Steve Jobs's death." He added: "The world rarely sees someone who has had the profound impact Steve has had, the effects of which will be felt for many generations to come. For those of us lucky enough to get to work with him, it's been an insanely great honor. I will miss Steve immensely."
Mr. Jobs's family released a statement that said: "Steve died peacefully today surrounded by his family. In his public life, Steve was known as a visionary; in his private life, he cherished his family. We are thankful to the many people who have shared their wishes and prayers during the last year of Steve's illness; a Web site will be provided for those who wish to offer tributes and memories."
On the home page of Apple's site, product images were replaced with a black-and-white photo of Mr. Jobs.
Mr. Jobs's decision to step down in August inspired loving tributes to him on the Web and even prompted some fans to head to Apple stores to share their sentiments with others. Some compared him to legendary innovators like Thomas Edison.
Using Grub To Change RedHat Linux's Root Password
08/06/2008 | The Linux and Unix Menagerie
- Thanks for this Additional Useful Information From zcat:
On many distros the 'single' or 'rescue' boot will still ask for a password. You can get around this by starting Linux without starting init and just launching a shell instead; it is also blindingly fast.
'e' to edit the boot entry, select the kernel line and press 'e' again, then type "init=/bin/bash", enter, press 'b' to boot it. You end up at a root prompt with / mounted read-only. (depending on the distro, you might need /bin/sh instead)
# mount / -o remount,rw
# passwd
<change your root password here>
# mount / -o remount,ro
<three-finger salute or hit the reset button>
It's also useful for fixing up boot problems, if you're silly enough to have put commands in various init scripts that don't actually exit or daemonize...
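For illustration, after appending the parameter the GRUB kernel line might look roughly like this (the kernel version and root device are placeholders, not values taken from the article):
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 init=/bin/bash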
Shutting down after init=/bin/bash
LinuxQuestions.org
By the way, when you do init=/bin/sh (or bash), it isn't strictly necessary to reboot afterwards (well, depending on what you change I suppose), you can just do an 'exec /sbin/init' to continue the boot process. Make sure the state of the system is as it would normally be though (e.g. umount /usr, make / readonly again etc).
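In other words, something like this (a sketch only; which filesystems need to be unmounted or remounted read-only depends on what you touched):
# undo manual changes, then hand control back to init
mount -o remount,ro /
exec /sbin/init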
Resetting the Root Password
Linux Journal
The following methods can be used for resetting the root password if the root password is unknown.
If you use GRUB for booting, select the system to be booted, and add 1 to the end of the kernel boot command. If you're not presented with an edit "box" to add boot parameters, try using GRUB's edit command (the letter e). The 1 tells the kernel to boot to single-user mode.
The system now should boot to a root prompt. At this point, simply use the passwd command to change the root password.
Another option is to boot a rescue CD or an installation CD that lets you get to the command line. Once you're at a command prompt, mount the system's root directory if it's not already mounted:
$ mkdir /mnt/system
$ mount /dev/sda1 /mnt/system
Now, do a chroot and reset the password:
$ chroot /mnt/system
$ passwd
lostlinuxpassword.html
Lost Root Password Linux
First, try single user. If you don't see either a LILO or GRUB boot screen, try hitting CTRL-X to get one. If it's LILO, just type "linux single" and that should do it (assuming that "linux" is the lilo label). If GRUB, hit 'e", then select the "kernel" line, hit "e" again, and add " single" (or just " 1") to the end of the line. Press ENTER, and then "b" to boot.
You should get a fairly normal looking boot sequence except that it terminates a little early at a bash prompt. If you get a "Give root password for system maintenance", this isn't going to work, so see the "init" version below.
If you do get the prompt, the / filesystem may not be mounted rw (although "mount" may say it is). Do
mount -o remount,rw /
If that doesn't work (it might not), just type "mount" to find out where "/" is mounted. Let's say it is on /dev/sda2. You'd then type:
mount -o remount,rw /dev/sda2
If you can do this, just type "passwd" once you are in and change it to whatever you like. Or just edit /etc/shadow to remove the password field: move to just beyond the first ":" and remove everything up to the next ":". With vi, that would be "/:" to move to the first ":", space bar once, then "d/:" and ENTER. You'll get a warning about changing a read-only file; that's normal. Before you do this, /etc/shadow might look like:
root:$1$8NFmV6tr$rT.INHxDBWn1VvU5gjGzi/:12209:0:99999:7:-1:-1:1074970543
bin:*:12187:0:99999:7:::
daemon:*:12187:0:99999:7:::
adm:*:12187:0:99999:7:::
and after, the first few lines should be:
root::12209:0:99999:7:-1:-1:1074970543
bin:*:12187:0:99999:7:::
daemon:*:12187:0:99999:7:::
adm:*:12187:0:99999:7:::
You'll need to force the write: with vi, ":wq!". (If that still doesn't work, you needed to do the -o remount,rw, see above).
Another trick is to add "init=/bin/bash" (LILO "linux init=/bin/bash" or add it to the Grub "kernel" line). This will dump you to a bash prompt much earlier than single user mode, and a lot less has been initialized, mounted, etc. You'll definitely need the "-o remount,rw" here. Also note that other filesystems aren't mounted at all, so you may need to mount them manually if you need them. Look in /etc/fstab for the device names.
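For example (the device names and mount points here are placeholders; take the real ones from /etc/fstab):
mount -o remount,rw /
mount /dev/sda3 /usr
mount /dev/sda5 /var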
See also
http://aplawrence.com/Bofcusm/861.html
http://aplawrence.com/Bofcusm/872.html
http://aplawrence.com/Bofcusm/873.html
Howtos Reset A Forgotten Root Password
5dollarwhitebox.org Media Wiki
In Grub :
- Type 'e' to edit the default kernel line
- Then 'e' again on the line that starts with 'kernel'
- Add 'init=/bin/bash' to the end of the 'kernel' line
- Press <ENTER>
- Type 'b' to boot it
Once you're at a /bin/bash prompt, remount the filesystem read/write (it will be read-only when booting straight into /bin/bash):
# mount -o remount,rw /
Then change the passwd:
# passwd root
Remount the filesystem back to read/only (keep things clean):
# mount -o remount,ro /
Then CTRL-ALT-DELETE (though this will most likely result in a kernel panic). After rebooting the system you should be good to go.
[Oct 03, 2011] How to reset a root password
FedoraProject
While your system is starting up, hold down the Ctrl key or Esc to see the boot loader menu. After you see the menu:
- Use the arrows to select the boot entry you want to modify.
- Press e to edit the entry.
- Use the arrows to go to kernel line.
- Press a or e to append to this entry.
- At the end of the line add the word single or the number 1.
- Press Enter to accept the changes.
- Press b to boot this kernel.
A series of text messages scrolls by and after a short time, a root prompt appears awaiting your commands (#).
[Oct 01, 2011] Step by Step Enable Root Login on Fedora 11
Many links about enabling root login in Fedora suggest editing only one file /etc/pam.d/gdm. This is incorrect and does not work. Two files should be edited: /etc/pam.d/gdm and /etc/pam.d/gdm-password
GUI Desktop Linux Windows Install Setup Configuration Project
To enable root login, two files need to be edited: /etc/pam.d/gdm and /etc/pam.d/gdm-password.
In each file, remove or comment out the following line by prefixing it with #.
# auth required pam_succeed_if.so user != root quiet
Save and close the file. Log out from the terminal and from the GUI itself. Now you should be able to log in as the root user using the GDM GUI login manager.
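If you prefer to do this non-interactively, a rough sketch (back the files up first; the sed pattern assumes the line looks exactly like the one quoted above):
cp -p /etc/pam.d/gdm /etc/pam.d/gdm.bak
cp -p /etc/pam.d/gdm-password /etc/pam.d/gdm-password.bak
sed -i 's/^auth.*pam_succeed_if.so user != root quiet/#&/' /etc/pam.d/gdm /etc/pam.d/gdm-password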