Due to the size of the page, the introductory note was converted to an Editorial on a separate page. Please read it, as it might help you to avoid typical hardening mistakes. It is not very current: it needs to be updated for Solaris 10, as zones and privilege sets changed the Solaris security landscape significantly.
Hardening is a special type of tuning that is usually performed after OS installation. It is very important to understand that security is a human-related feature (or, more correctly, a feature related to organizational IQ -- organizations with stupid management usually do not have great security), and Solaris admins are often more qualified and more professionally trained than Linux admins. Large corporations often require them to be certified (although Red Hat certification is better than Sun certification). They are often older and have more years under the belt, although this is both an advantage and a disadvantage (see the remark about firewalls below).
See http://www.softpanorama.org/Articles/softpanorama_laws_of_security.shtml
The Solaris security advantage rests on a combination of unique features: RBAC (which now includes the concept of privileges) and zones. In addition, ACLs are more widely used in Solaris, although they are now fully available in Linux as well; Linux admins typically do not know this feature and as a result do not use it.
There are some minor factors, like the fact that the primary CPU for Solaris is a pretty obscure RISC CPU (UltraSPARC), which kills most exploits dead (security via obscurity), but this advantage is now being matched by IBM, which is trying to promote Linux on Power CPUs.
Also, Linux is now the Microsoft of the Unix world, and that means that most exploits are directed at popular Linux distributions, especially Red Hat.
Solaris filesystem security is weaker than in FreeBSD, but somewhat better than in Linux. For example, you can make /usr read-only in Solaris, and JASS (the standard hardening toolkit) does exactly that. A Sun BluePrints article (pdf) describes the Solaris Fingerprint Database (sfpDB), a security tool that enables users to verify the integrity of files distributed with the Solaris OS. It is different from, and better than, RPM-based security checking.
Linux has a distinct advantage in the wider and more established use of a local firewall (Red Hat training actually presupposes that this feature is enabled; Solaris training does not).
The networking stack is better engineered in Solaris and as such is more secure. Solaris implemented IPv6 earlier than Linux, and its implementation is more mature and less prone to problems.
As for application security hardening, Suse AppArmor is superior to anything Solaris has. See http://en.opensuse.org/Apparmor
An internal firewall, which has now become an integral component of any more or less secure server, changed priorities in hardening quite significantly. Firewall-based hardening is easier for entry-level administrators because the issues and tradeoffs are easier to understand and more transparent.
Still, it is worth noting that beginners and entry-level admins usually show excessive zeal in hardening and can screw up a server (or a dozen ;-) almost in no time, due to insufficient testing of the feature (or firewall rules) on test servers, insufficient understanding of the limitations of the hardening packages, or both.
Although such an experiment represents a tremendous learning opportunity, it is better to avoid it. As Benjamin Franklin said, "Experience keeps a very expensive school, but fools can learn in no other," and George Bernard Shaw aptly added, "There are two tragedies in life. One is to lose your heart's desire. The other is to gain it."
Those two quotes are fully applicable to hardening. Naive enthusiasm after some trade presentation (or a minor bribe from security snake-oil salesmen), followed by attempts to implement the ideas on production servers, has probably caused ten times more damage than all hacker attacks together.
It is very important to understand that many more servers were hosed by hardening mistakes than by hacker attacks. That does not mean that hardening is unimportant or that it is better ignored. What it means is that you should not overdo it. Excessive zeal really hurts here. Each change that supposedly increases the level of protection should be weighed against the convenience of working with the server. The level of hardening should correspond to the general level of security in a company. If holes are everywhere and nobody is paying attention to such problems as role-based access, authentication, etc., hardening does not increase the general level of security: it is always only as good as the weakest link.
The situation with hardening tools on Solaris looks like a one-man game: JASS is still maintained, but Titan (although I like Titan's simple approach to writing hardening modules better) is not. Unless you want to improve it yourself (and Titan is more suitable for adaptation than JASS), it does not make much sense to use it (it never made sense to use Titan blindly, anyway). Google lists Comparison of Solaris Hardening Scripts among the top findings on the "Solaris hardening" topic. Ignore it; this is a very old paper that has outlived its usefulness. Another entry, YASSP, has been dead for so long that I start to be wary about the Google algorithm when I see it in the top findings (YASSP: Hardening Script for Solaris - stable beta).
The availability of an internal firewall creates a new situation, where the old Josef Stalin-style recipe for hardening ("if you have a network service, you have a problem; if you do not have a network service, you have no problems") can be applied to any service or port ;-). Now you can limit vulnerable services to specific subnets or even hosts, as the sketch below shows. That also means that the excessive zeal in elimination of /etc/inetd.conf entries is now much less justified.
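To illustrate, here is a minimal sketch using the IP Filter firewall bundled with Solaris 10 (the management subnet 10.10.1.0/24 is hypothetical; adjust to your environment). It limits sshd to one subnet instead of disabling the service outright:

# /etc/ipf/ipf.conf -- block everything inbound by default,
# then allow ssh (port 22) only from the management subnet
block in all
pass in quick proto tcp from 10.10.1.0/24 to any port = 22 keep state
pass out quick all keep state

# activate the ruleset
# svcadm enable network/ipfilter
# ipf -Fa -f /etc/ipf/ipf.conf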
Dr. Nikolai Bezroukov
September 25, 2014 | troyhunt.com
Remember Heartbleed? If you believe the hype today, Shellshock is in that league, with an equally awesome name albeit bereft of a cool logo (someone in the marketing department of these vulns needs to get on that). But in all seriousness, it does have the potential to be a biggie, and as I did with Heartbleed, I wanted to put together something definitive, both for me to get to grips with the situation and for others to dissect the hype from the true underlying risk. To set the scene, let me share some content from Robert Graham's blog post; he has been doing some excellent analysis on this. Imagine an HTTP request like this:
target = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)
http-header = Cookie:() { :; }; ping -c 3 209.126.230.74
http-header = Host:() { :; }; ping -c 3 209.126.230.74
http-header = Referer:() { :; }; ping -c 3 209.126.230.74

Which, when issued against a range of vulnerable IP addresses, results in the vulnerable machines pinging the listed host, demonstrating remote command execution.
en.wikipedia.org
Analysis of the source code history of Bash shows that the vulnerabilities had existed undiscovered since approximately version 1.13 in 1992.[4] The maintainers of the Bash source code have difficulty pinpointing the time of introduction due to the lack of comprehensive changelogs.[1]
In Unix-based operating systems, and in other operating systems that Bash supports, each running program has its own list of name/value pairs called environment variables. When one program starts another program, it provides an initial list of environment variables for the new program.[14] Separately from these, Bash also maintains an internal list of functions, which are named scripts that can be executed from within the program.[15] Since Bash operates both as a command interpreter and as a command, it is possible to execute Bash from within itself. When this happens, the original instance can export environment variables and function definitions into the new instance.[16] Function definitions are exported by encoding them within the environment variable list as variables whose values begin with parentheses ("()") followed by a function definition. The new instance of Bash, upon starting, scans its environment variable list for values in this format and converts them back into internal functions. It performs this conversion by creating a fragment of code from the value and executing it, thereby creating the function "on-the-fly", but affected versions do not verify that the fragment is a valid function definition.[17] Therefore, given the opportunity to execute Bash with a chosen value in its environment variable list, an attacker can execute arbitrary commands or exploit other bugs that may exist in Bash's command interpreter.
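A quick illustration of this mechanism on a pre-patch Bash (patched versions either refuse the import or, in later fixes, mangle the exported variable name so plain variables can no longer be mistaken for functions):

$ myfunc() { echo hello; }
$ export -f myfunc              # encode the function into the environment
$ env | grep -A1 '^myfunc'      # the value begins with "()", as described above
myfunc=() {  echo hello
}
$ bash -c myfunc                # a child Bash re-creates the function and runs it
hello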
The name "shellshock" is attributed[by whom?][not in citation given] to Andreas Lindh from a tweet on 24 September 2014.[18][non-primary source needed]
On October 1st, Zalewski released details of the final bugs and confirmed that Florian Weimer's patch does indeed prevent them.
CGI-based web server attack
When a web server uses the Common Gateway Interface (CGI) to handle a document request, it passes various details of the request to a handler program in the environment variable list. For example, the variable HTTP_USER_AGENT has a value that, in normal usage, identifies the program sending the request. If the request handler is a Bash script, or if it executes one, for example using the system(3) call, Bash will receive the environment variables passed by the server and will process them as described above. This provides a means for an attacker to trigger the Shellshock vulnerability with a specially crafted server request.[4] The security documentation for the widely used Apache web server states: "CGI scripts can ... be extremely dangerous if they are not carefully checked."[20] Other methods of handling web server requests are often used, however. There are a number of online services which attempt to test the vulnerability against web servers exposed to the Internet.
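As an illustration only (the URL is hypothetical, and such probes must not be pointed at systems you do not own), a single crafted header is enough, because the web server copies it into an environment variable such as HTTP_USER_AGENT before invoking the handler:

$ curl -A '() { :; }; /bin/cat /etc/passwd' http://www.example.com/cgi-bin/status.cgi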
SSH server example
OpenSSH has a "ForceCommand" feature, where a fixed command is executed when the user logs in, instead of just running an unrestricted command shell. The fixed command is executed even if the user specified that another command should be run; in that case the original command is put into the environment variable "SSH_ORIGINAL_COMMAND". When the forced command is run in a Bash shell (if the user's shell is set to Bash), the Bash shell will parse the SSH_ORIGINAL_COMMAND environment variable on start-up, and run the commands embedded in it. The user has used their restricted shell access to gain unrestricted shell access, using the Shellshock bug.[21]
DHCP example
Some DHCP clients can also pass commands to Bash; a vulnerable system could be attacked when connecting to an open Wi-Fi network. A DHCP client typically requests and gets an IP address from a DHCP server, but it can also be provided a series of additional options. A malicious DHCP server could provide, in one of these options, a string crafted to execute code on a vulnerable workstation or laptop.[9]
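The effect can be simulated locally without a rogue server (the variable name is illustrative; real DHCP client scripts export option values under names such as new_domain_name):

$ env new_domain_name='() { :; }; echo code from the DHCP server runs here' \
      bash -c 'echo configuring interface'
code from the DHCP server runs here
configuring interface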
Note on offline system vulnerability
The bug can potentially affect machines that are not directly connected to the Internet when they perform offline processing that involves the use of Bash.
Initial report (CVE-2014-6271)
This original form of the vulnerability involves a specially crafted environment variable containing an exported function definition, followed by arbitrary commands. Bash incorrectly executes the trailing commands when it imports the function.[22] The vulnerability can be tested with the following command:
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

In systems affected by the vulnerability, the above command will display the word "vulnerable" as a result of Bash executing the command "echo vulnerable", which was embedded into the specially crafted environment variable named "x".[23][24]
There was an initial report of the bug made to the maintainers of Bash (Report# CVE-2014-6271). The bug was corrected with a patch to the program. However, after the release of the patch there were subsequent reports of different, yet related vulnerabilities. On 26 September 2014, two open-source contributors, David A. Wheeler and Norihiro Tanaka, noted that there were additional issues, even after patching systems using the most recently available patches. In an email addressed to the oss-sec list and the bash bug list, Wheeler wrote: "This patch just continues the 'whack-a-mole' job of fixing parsing errors that began with the first patch. Bash's parser is certain [to] have many many many other vulnerabilities".[25]
On 27 September 2014, Michal Zalewski announced his discovery of several other Bash vulnerabilities,[26] one based upon the fact that Bash is typically compiled without address space layout randomization.[27] Zalewski also strongly encouraged all concerned to immediately apply a patch made available by Florian Weimer.[26][27]

CVE-2014-6277
CVE-2014-6277 relates to the parsing of function definitions in environment variables by Bash. It was discovered by Michał Zalewski.[26][27][28][29]
This causes a segfault.
() { x() { _; }; x() { _; } <<a; }
CVE-2014-6278
CVE-2014-6278 relates to the parsing of function definitions in environment variables by Bash. It was discovered by Michał Zalewski.[30][29]
() { _; } >_[$($())] { echo hi mom; id; }
CVE-2014-7169
On the same day the bug was published, Tavis Ormandy discovered a related bug, which was assigned the CVE identifier CVE-2014-7169.[21] Official and distributed patches for this began to be released on 26 September 2014. The bug is demonstrated by the following code:
env X='() { (a)=>\' sh -c "echo date"; cat echo
which would trigger a bug in Bash to execute the command "date" unintentionally. This would become CVE-2014-7169.[21]
- Testing example
Here is an example of a system that has a patch for CVE-2014-6271 but not CVE-2014-7169:
$ X='() { (a)=>\' bash -c "echo date"
bash: X: line 1: syntax error near unexpected token `='
bash: X: line 1: `'
bash: error importing function definition for `X'
$ cat echo
Fri Sep 26 01:37:16 UTC 2014

The patched system displays the same error, notifying the user that CVE-2014-6271 has been prevented. However, the attack causes the writing of a file named 'echo' into the working directory, containing the result of the 'date' call. The existence of this issue resulted in the creation of CVE-2014-7169 and the release of patches for several systems.
A system patched for both CVE-2014-6271 and CVE-2014-7169 will simply echo the word "date" and the file "echo" will not be created.
$ X='() { (a)=>\' bash -c "echo date"
date
$ cat echo
cat: echo: No such file or directory

CVE-2014-7186
CVE-2014-7186 relates to an out-of-bounds memory access error in the Bash parser code.[31] While working on patching Shellshock, Red Hat researcher Florian Weimer found this bug.[23]
- Testing example
Here is an example of the vulnerability, which leverages the use of multiple "<<EOF" declarations:
bash -c 'true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF' || echo "CVE-2014-7186 vulnerable, redir_stack"
- A vulnerable system will echo the text "CVE-2014-7186 vulnerable, redir_stack".
CVE-2014-7187
CVE-2014-7187 relates to an off-by-one error, allowing out-of-bounds memory access, in the Bash parser code.[32] While working on patching Shellshock, Red Hat researcher Florian Weimer found this bug.[23]
- Testing example
Here is an example of the vulnerability, which leverages the use of multiple "done" declarations:
(for x in {1..200} ; do echo "for x$x in ; do :"; done; for x in {1..200} ; do echo done ; done) | bash || echo "CVE-2014-7187 vulnerable, word_lineno"
- A vulnerable system will echo the text "CVE-2014-7187 vulnerable, word_lineno".
Sep 26, 2014 | securityblog.redhat.com
Why are there four CVE assignments?
The original flaw in Bash was assigned CVE-2014-6271. Shortly after that issue went public a researcher found a similar flaw that wasn't blocked by the first fix and this was assigned CVE-2014-7169. Later, Red Hat Product Security researcher Florian Weimer found additional problems and they were assigned CVE-2014-7186 and CVE-2014-7187. It's possible that other issues will be found in the future and assigned a CVE designator even if they are blocked by the existing patches.
... ... ...
Why is Red Hat using a different patch than others?
Our patch addresses the CVE-2014-7169 issue in a much better way than the upstream patch; we wanted to make sure the issue was properly dealt with.
I have deployed web application filters to block CVE-2014-6271. Are these filters also effective against the subsequent flaws?
If configured properly and applied to all relevant places, the "() {" signature will work against these additional flaws.
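As an illustration, a ModSecurity-style rule along these lines matches the function-definition marker in any request header (a sketch only; the rule id is arbitrary, and real deployments should use vendor-supplied signatures):

SecRule REQUEST_HEADERS "@rx \(\s*\)\s*{" \
    "id:1000100,phase:1,t:none,deny,status:403,msg:'Possible Shellshock probe'"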
Does SELinux help protect against this flaw?
SELinux can help reduce the impact of some of the exploits for this issue. SELinux guru Dan Walsh has written about this in depth in his blog.
Are you aware of any new ways to exploit this issue?
Within a few hours of the first issue becoming public (CVE-2014-6271), various exploits were seen live; they attacked the services we identified as at risk in our first post:
- from dhclient,
- CGI serving web servers,
- sshd+ForceCommand configuration,
- git repositories.
We did not see any exploits which were targeted at servers which had the first issue fixed, but were affected by the second issue. We are currently not aware of any exploits which target bash packages which have both CVE patches applied.
Why wasn't this flaw noticed sooner?
The flaws in Bash were in a quite obscure feature that was rarely used; it is not surprising that this code had not been given much attention. When the first flaw was discovered it was reported responsibly to vendors who worked over a period of under 2 weeks to address the issue.
Update 2014-09-25 16:00 UTC
Red Hat is aware that the patch for CVE-2014-6271 is incomplete. An attacker can provide specially-crafted environment variables containing arbitrary commands that will be executed on vulnerable systems under certain conditions. The new issue has been assigned CVE-2014-7169. We are working on patches in conjunction with the upstream developers as a critical priority. For details on a workaround, please see the knowledgebase article.
Red Hat advises customers to upgrade to the version of Bash which contains the fix for CVE-2014-6271 and not wait for the patch which fixes CVE-2014-7169. CVE-2014-7169 is a less severe issue and patches for it are being worked on.
Bash, or the Bourne-Again Shell, is a UNIX shell, which is perhaps one of the most installed utilities on any Linux system. From its creation in 1989, Bash has evolved from a simple terminal-based command interpreter to many other fancy uses.
In Linux, environment variables provide a way to influence the behavior of software on the system. They typically consist of a name which has a value assigned to it. The same is true of the Bash shell. It is common for a lot of programs to run the Bash shell in the background. It is often used to provide a shell to a remote user (via ssh, telnet, for example), to provide a parser for CGI scripts (Apache, etc.), or even to provide limited command execution support (git, etc.).
Coming back to the topic, the vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the Bash shell. These variables can contain code, which gets executed as soon as the shell is invoked. The name of these crafted variables does not matter, only their contents. As a result, this vulnerability is exposed in many contexts, for example:
- ForceCommand is used in sshd configs to provide limited command execution capabilities for remote users. This flaw can be used to bypass that and provide arbitrary command execution. Some Git and Subversion deployments use such restricted shells. Regular use of OpenSSH is not affected because users already have shell access.
- Apache server using mod_cgi or mod_cgid are affected if CGI scripts are either written in Bash, or spawn subshells. Such subshells are implicitly used by system/popen in C, by os.system/os.popen in Python, system/exec in PHP (when run in CGI mode), and open/system in Perl if a shell is used (which depends on the command string).
- PHP scripts executed with mod_php are not affected even if they spawn subshells.
- DHCP clients invoke shell scripts to configure the system, with values taken from a potentially malicious server. This would allow arbitrary commands to be run, typically as root, on the DHCP client machine.
- Various daemons and SUID/privileged programs may execute shell scripts with environment variable values set / influenced by the user, which would allow for arbitrary commands to be run.
- Any other application which is hooked onto a shell or runs a shell script using Bash as the interpreter. Shell scripts which do not export variables are not vulnerable to this issue, even if they process untrusted content and store it in (unexported) shell variables and open subshells.
Like "real" programming languages, Bash has functions, though in a somewhat limited implementation, and it is possible to put these Bash functions into environment variables. This flaw is triggered when extra code is added to the end of these function definitions (inside the enivronment variable). Something like:
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a test

The patch used to fix this flaw ensures that no code is allowed after the end of a Bash function. So if you run the above example with the patched version of Bash, you should get an output similar to:
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

We believe this should not affect any backward compatibility. It would, of course, affect any scripts which try to use environment variables created in the way described above, but doing so should be considered bad programming practice.
Red Hat has issued security advisories that fix this issue for Red Hat Enterprise Linux. Fedora has also shipped packages that fix this issue.
We have additional information regarding specific Red Hat products affected by this issue that can be found at https://access.redhat.com/site/solutions/1207723
Information on CentOS can be found at http://lists.centos.org/pipermail/centos/2014-September/146099.html.
zdnet.com
The only thing you have to fear with Shellshock, the Unix/Linux Bash security hole, is fear itself. Yes, Shellshock can serve as a highway for worms and malware to hit your Unix, Linux, and Mac servers, but you can defend against it.
The real and present danger is for servers. According to the National Institute of Standards and Technology (NIST), Shellshock scores a perfect 10 for potential impact and exploitability. Red Hat reports that the most common attack vectors are:
- httpd (Your Web server): CGI [Common-Gateway Interface] scripts are likely affected by this issue: when a CGI script is run by the web server, it uses environment variables to pass data to the script. These environment variables can be controlled by the attacker. If the CGI script calls Bash, the script could execute arbitrary code as the httpd user. mod_php, mod_perl, and mod_python do not use environment variables and we believe they are not affected.
- Secure Shell (SSH): It is not uncommon to restrict remote commands that a user can run via SSH, such as rsync or git. In these instances, this issue can be used to execute any command, not just the restricted command.
- dhclient: The Dynamic Host Configuration Protocol Client (dhclient) is used to automatically obtain network configuration information via DHCP. This client uses various environment variables and runs Bash to configure the network interface. Connecting to a malicious DHCP server could allow an attacker to run arbitrary code on the client machine.
- CUPS (Linux, Unix and Mac OS X's print server): It is believed that CUPS is affected by this issue. Various user-supplied values are stored in environment variables when cups filters are executed.
- sudo: Commands run via sudo are not affected by this issue. Sudo specifically looks for environment variables that are also functions. It could still be possible for the running command to set an environment variable that could cause a Bash child process to execute arbitrary code.
- Firefox: We do not believe Firefox can be forced to set an environment variable in a manner that would allow Bash to run arbitrary commands. It is still advisable to upgrade Bash as it is common to install various plug-ins and extensions that could allow this behavior.
- Postfix: The Postfix [mail] server will replace various characters with a ?. While the Postfix server does call Bash in a variety of ways, we do not believe an arbitrary environment variable can be set by the server. It is however possible that a filter could set environment variables.
So much for Red Hat's thoughts. Of these, the Web servers and SSH are the ones that worry me the most. The DHCP client is also troublesome, especially if, as is the case with small businesses, your external router doubles as your Internet gateway and DHCP server.
Of these, Web server attacks seem to be the most common by far. As Florian Weimer, a Red Hat security engineer, wrote: "HTTP requests to CGI scripts have been identified as the major attack vector." Attacks are being made against systems running both Linux and Mac OS X.
Jaime Blasco, labs director at AlienVault, a security management services company, ran a honeypot looking for attackers and found "several machines trying to exploit the Bash vulnerability. The majority of them are only probing to check if systems are vulnerable. On the other hand, we found two worms that are actively exploiting the vulnerability and installing a piece of malware on the system."
Other security researchers have found that the malware is the usual sort. They typically try to plant distributed denial of service (DDoS) IRC bots and attempt to guess system logins and passwords using a list of poor passwords such as 'root', 'admin', 'user', 'login', and '123456.'
So, how do you know if your servers can be attacked? First, you need to check to see if you're running a vulnerable version of Bash. To do that, run the following command from a Bash shell:
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
If you get the result:
vulnerable
this is a test
Bad news: your version of Bash can be hacked. If you see:
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test
You're good. Well, to be more exact, you're as protected as you can be at the moment.
26 Sep 2014 | support.novell.com
We have fixed the critical issue CVE-2014-6271 (http://support.novell.com/security/cve/CVE-2014-6271.html) with updates for all supported and LTSS code streams.
SLES 10 SP3 LTSS, SP4 LTSS, SLES 11 SP1 LTSS, SLES 11 SP2 LTSS, SLES 11 SP3, openSUSE 12.3, 13.1.
The issue CVE-2014-7169 (http://support.novell.com/security/cve/CVE-2014-7169.html) is less severe (no trivial code execution) but will also receive fixes for the above. As more patches are under discussion around the Bash parser, we will wait some days to collect them, to avoid a third Bash update.
Describes the Solaris Fingerprint Database (sfpDB), a security tool that enables users to verify the integrity of files distributed with the Solaris OS.
From: Gideon T. Rasmussen, CISSP, CISA, CISM, CFSO, SCSA <lists_at_infostruct.net>
Date: Sun, 30 Jan 2005 14:58:08 -0500

I just sent an e-mail to a gent I met at a UNIX auditing course. Thought it might be of interest...
To take a quick Solaris security audit, use the CIS Solaris benchmarking tool (http://www.cisecurity.org/bench_solaris.html). It produces a vulnerability assessment report. There is a corresponding Solaris hardening standard on the same page.
My Solaris hardening recommendations can be found at: http://www.sun.com/bigadmin/content/submitted/Solaris_build_document.pdf
Additional Solaris hardening resources can be found at:
http://www.sun.com/blueprints/browsesubject.html#security
http://www.nsa.gov/snac/downloads_sunsol.cfm?MenuID=scg10.3.1.1

The usual hardening disclaimers apply here. Test in a non-production environment and conduct thorough functionality testing...
You may also want to take a look at my INFOSEC site (http://www.ussecurityawareness.org). It has auditing resources you may find of interest.
Contact me if you have any questions or comments.
Kind regards,
Gideon
Gideon T. Rasmussen
CISSP, CISA, CISM, CFSO, SCSA
Boca Raton, FL
gideon_at_infostruct.net
On a FreeBSD system, you can set the "immutable flag" on a file. Given a high enough system securelevel, that file will be completely resistant to change (including unsetting that flag). This is extremely handy for locking down file signature databases, kernel files, and other likely targets for stealth modification. So long as that portion of the kernel stands intact, the system can never be completely clandestinely owned.

Very interesting. This FAQ [osxfaq.com] suggests that OS X retains BSD's immutable flag. In theory, the only way to change this flag in OS X is to reboot in single-user mode. I wonder if a rootkit could force a reboot into single-user mode, change these flags, and reboot back to remotely own an OS X machine? I would assume that unless the rootkit can insert something into the single-user mode start-up sequence, the system immutable flag should be fairly safe. The big downside would be that System Update would cease to work (and probably create a corrupt partial update) if the wrong file were locked in this way (security vs. ease-of-use again!).
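For reference, the mechanics on FreeBSD look roughly like this (the database path is just an example):

# chflags schg /var/db/tripwire/tw.db     # set the system immutable flag
# ls -lo /var/db/tripwire/tw.db           # the "schg" flag shows in the flags column
# sysctl kern.securelevel=1               # at securelevel >= 1 even root cannot clear it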
The Solaris Security Toolkit, formerly known as the JumpStart Architecture and Security Scripts (JASS) toolkit, provides a flexible and extensible mechanism to harden and audit Solaris Operating Systems (OSs). The Solaris Security Toolkit simplifies and automates the process of securing Solaris Operating Systems and is based on proven security best practices and practical customer site experience gathered over many years. This toolkit can be used to secure SPARC(R)-based and x86/x64-based systems.

The Solaris Security Toolkit 4.2 release is available now, and the toolkit is fully supported as part of Solaris Software Support Service Plans or the SunSpectrum(SM) Service Plan contract. For more information on Solaris Support go to:
http://www.sun.com/service/support/software/solaris/
The Solaris Security Toolkit 4.2 software is fully supported on the following SPARC and x86/x64 Solaris Operating System releases:
- Solaris 10
- Solaris 9
- Solaris 8
Today is the big day! The Solaris Security Toolkit version 4.2 has been released. The biggest change in this new release is its support of the Solaris 10 OS (global and local zones). You can read all about the changes in this new update in the Release Notes. With this release, you have a fully documented and supported tool for hardening the Solaris 10 OS (as well as previous releases) on SPARC, Intel, and AMD platforms!
Commentor: Casper Dik
Added: September 7, 2004
Comment:
It is rather pointless to install TCP wrappers on Solaris 9 and later, as the version included in the OS is exactly the same as the one available on porcupine. That version has also been revved twice because of bugs we ran into. Solaris 9 SSH already has libwrap support compiled in. In S10 and later we also provide rpcbind linked with libwrap.
[Apr 3, 2005] Conversion of application accounts to roles is a simple but effective hardening technique; a sketch follows. See Security/RBAC/conversion_of_application_accounts_to_roles
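A minimal sketch of the idea on Solaris 10 (account and user names are hypothetical):

# usermod -K type=role oracle     # the application account becomes a role: no direct login
# usermod -R oracle jsmith        # only jsmith may now assume it via "su - oracle"

Direct logins to "oracle" now fail; jsmith must authenticate as himself first, which leaves an audit trail.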
by Glenn Brunette
This Sun BluePrints Cookbook describes how to centralize and automate the collection of file integrity information using the following Solaris features:
* Secure Shell
* Role-based Access Control (RBAC)
* Process Privileges
* Basic Auditing and Reporting Tool (BART)

Each of these features can be quickly and easily integrated to centralize and automate the process of collecting file fingerprints across a network of Solaris 10 systems.
Note: This article is available in PDF Format only.
This Tech Tip explains how to use NFS to inspect the underlying directory structure if the reported disk usage seems inconsistent.
Read about the build, configuration, and subsequent hardening of UNIX servers that constitute a secured FTP solution.
So what makes Solaris Privileges different? Why didn't we copy something else like Trusted Solaris Privileges or "POSIX" capabilities?
Let's start from what we formulated as our requirements near the beginning of our project.
One of the important features of Solaris is complete binary backward compatibility; in order to offer that, we needed to design the privilege subsystem in such a manner that current practices, binaries, and products would continue to work. Of course, some have solved this issue by providing a system-wide knob to turn: root / root + privileges / just privileges. We don't like knobs in our OS, specifically not ones which drastically alter the behavior of a system. They make it harder to develop software; it needs to work for all settings. Certain products may require conflicting settings, and so on. So we decided on a "per-process" knob which is largely automatic.
With backward compatibility comes the onus on the software developer to develop future proof interfaces; that ruled out all other interfaces as they all have fixed bitmaps and fixed privilege/capability numbers, fixed structure sizes in the programmer visible parts of the system. Solaris Privileges have none of that. And while we could safely reuse the names of the Trusted Solaris interfaces we can not redefine interfaces even from a defunct standard. So we have interfaces which smell like Trusted Solaris but with a completely new userland representation of privileges and privilege sets. We can never have more signals; but we can have more privileges and more privilege sets!
The privileges and privilege sets in Solaris 10 are represented to userland processes and non-core kernel modules as strings; privilege sets are bitmasks of undetermined size; they can only be allocated through the C library routines. Privilege set names are also strings and not plain integer indices; this gives us even more flexibility. A Solaris binary compiled for 4 privilege sets of each 32 privileges will continue to work on a Solaris system with 5 privilege sets each of which can contain 64 privileges and with all the privileges having their internal representation renumbered.
... Many software exploits count on this escalated privilege to gain superuser access to a machine via bugs like buffer overflows and data corruption. To combat this problem, the Solaris 10 Operating System includes a new least privilege model, which gives a specified process only a subset of the superuser powers and not full access to all privileges.
The least privilege model evolved from Sun's experiences with Trusted Solaris and the tighter security model used there. The Solaris 10 OS least privilege model conveniently enables normal users to do things like mount file systems, start daemon processes that bind to lower numbered ports, and change the ownership of files. On the other hand, it also protects the system against programs that previously ran with full root privileges because they needed limited access to things like binding to ports lower than 1024, reading from and writing to user home directories, or accessing the Ethernet device. Since setuid root binaries and daemons that run with full root privileges are rarely necessary under the least privilege model, an exploit in a program no longer means a full root compromise. Damage due to programming errors like buffer overflows can be contained to a non-root user, which has no access to critical abilities like reading or writing protected system files or halting the machine.
The Solaris 10 OS least privilege model includes nearly 50 fine-grained privileges as well as the basic privilege set.
- The defined privileges are broken into the groups contract, cpc, dtrace, file, ipc, net, proc, and sys.
- The basic privilege set includes all privileges granted to unprivileged processes under the traditional security model: proc_fork, proc_exec, proc_session, proc_info, and file_link_any.
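On a running system these sets can be inspected and trimmed with the ppriv(1) utility. A short sketch (privilege specifications follow the documented Solaris 10 syntax):

$ ppriv $$                                  # list the privilege sets of the current shell
$ ppriv -s EIP-proc_info $$                 # drop proc_info from this shell's sets
$ ppriv -e -s A=basic,!proc_session ps -ef  # run one command with a reduced privilege set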
Increasing life expectancy
The past 12-24 months have seen a significant downward shift in successful random attacks against Linux-based systems. Recent data from our honeynet sensor grid reveals that the average life expectancy to compromise for an unpatched Linux system has increased from 72 hours to 3 months. This means that an unpatched Linux system with commonly used configurations (such as server builds of Red Hat 9.0 or Suse 6.2) has an online mean life expectancy of 3 months before being successfully compromised. Meanwhile, the time to live for unpatched Win32 systems appears to continue to decrease. Such observations have been reported by various organizations, including Symantec[1], Internet Storm Center[2] and even USAToday[3]. The few Win32 honeypots we have deployed support this. However, Win32 compromises appear to be based primarily on worm activity.
THE DATA
Background: Our data is based on 12 honeynets deployed in eight different countries (US, India, UK, Pakistan, Greece, Portugal, Brazil and Germany). Data was collected during the calendar year of 2004, with most of the data collected in the past six months. Each honeynet deployed a variety of different Linux systems accessible from anywhere on the Internet. In addition, several Win32 based honeypots were deployed, but these were limited in number and could not be used to identify widespread trends.

A total of 24 unpatched Unix honeypots were deployed, of which 19 were Linux, primarily Red Hat. These unpatched honeypots were primarily default server installations with additional services enabled (such as SSH, HTTPS, FTP, SMB, etc.). In addition, on several systems insecure or easily guessed passwords were used. In most cases, host based firewalls had to be modified to allow inbound connections to these services. These systems were targets of little perceived value, often on small home or business networks. They were not registered in DNS or any search engines, so the systems were found primarily by random or automated means.

Most were default Red Hat installations. Specifically, one was RH 7.2, five RH 7.3, one RH 8.0, eight RH 9.0, and two Fedora Core 1 deployments. In addition, there were one Suse 7.2 and one Suse 6.3 Linux distribution, two Solaris Sparc 8, two Solaris Sparc 9, and one FreeBSD 4.4 system. Of these, only four Linux honeypots (three RH 7.3 and one RH 9.0) and three Solaris honeypots were compromised. Two of the Linux systems were compromised by brute password guessing and not a specific vulnerability. Keep in mind, our data sets are not based on targets of high value, or targets that are well known. Linux systems that are of high value (such as company webservers, CVS repositories or research networks) potentially have a shorter life expectancy.
The science consists of methodical, premeditated actions to gather and analyze evidence. The technology, in the case of computers, is programs that suit particular roles in the gathering and analysis of evidence. The crime scene is the computer and the network (and other network devices) to which it is connected.
Your job, as a forensic investigator, is to do your best to comb through the sources of evidence -- disc drives, log files, boxes of removable media, whatever -- and do two things: make sure you preserve as much of this data as possible in its original form, and try to reconstruct the events that occurred during a criminal act, producing a meaningful starting point for police and prosecutors to do their jobs.
Every incident will be different. In one case, you may simply assist in the seizure of a computer system, which is analyzed by law enforcement agencies. In another case, you may need to collect logs, file systems, and first hand reports of observed activity from dozens of systems in your organization, wade through all of this mountain of data, and reconstruct a timeline of events that yields a picture of a very large incident.
In addition, when you begin an incident investigation, you have no idea what you will find, or where. You may at first see nothing (especially if a "rootkit" is in place.) You may find a process running with open network sockets that doesn't show up on a similar system. You may find a partition showing 100% utilization, but adding things up with du only comes to 50%. You may find network saturation, originating from a single host (by way of tracing its ethernet address or packet counts on its switch port), a program eating up 100% of the CPU, but nothing in the file system with that name.
The steps taken in each of these instances may be entirely different, and a competent investigator will use experience and hunches about what to look for, and how, in order to get to the bottom of what is going on. They may not necessarily be followed 1, 2, 3. They may be way more than is necessary. They may just be the beginning of a detailed analysis that involves decompilation of recovered programs and correlation of packet dumps from multiple networks.
Instead of being a "cookbook" that you follow, consider this a collection of techniques that a chef uses to construct a fabulous and unique gourmet meal. Once learned, you'll discover there are plenty more steps than just those listed here.
It's also important to remember that the steps in preserving and collecting evidence should be done slowly, carefully, methodically, and deliberately. The various pieces of data -- the evidence -- on the system are what will tell the story of what occurred. The first person to respond has the responsibility of ensuring that as little of this evidence as possible is damaged, so that it remains useful in contributing to a meaningful reconstruction of what occurred.
One thing is common to every investigation, and it cannot be stressed enough. Keep a regular old notebook handy and take careful notes of what you do during your investigation. These may be necessary to refresh your memory months later, to tell the same long story to a new law enforcement agent who takes over the case, or to refresh your own memory when/if it comes time to testify in court. It will also help you accurately calculate the cost of responding to the incident, avoiding the potentially exaggerated estimates that have been seen in some recent computer crime cases. Crimes deserve justice, but justice should be fair and reasonable.
As for the technology aspect, the description of basic forensic analysis steps provided here assumes Red Hat Linux on i386 (any Intel compatible motherboard) hardware. The steps are basically the same with other versions of Unix, but certain things specific to i386 systems (e.g., use of IDE controllers, limitations of the PC BIOS, etc.) will vary from other Unix workstations. Consult system administration or security manuals specific to your version of Unix.
It is helpful to set up a dedicated analysis system on which to do your analysis. An example analysis system in a forensic lab might be set up as follows:
- Fast i386 compatible motherboard with 2 IDE controllers
- At least two large (>8GB) hard drives on the primary IDE controller (to fit the OS and tools, plus have room to copy partitions off tape or recover deleted file space from victim drives)
- Leave the second IDE cable empty. This means you won't need to mess with jumpers on discs -- just plug them in and they will show up as /dev/hdc (master) or /dev/hdd (slave)
- SCSI interface card (e.g., Adaptec 1542)
- DDS-3 or DDS-4 4mm tape drive (you need enough capacity to handle the largest partitions you will be backing up)
- If this system is on the network, it should be FULLY PATCHED and have NO NETWORK SERVICES RUNNING except SSH (for file transfer and secure remote access) -- Red Hat Linux 6.2 with Bastille-Linux hardening is a good choice
(It can be argued that no services should be running, not even SSH, on your analysis systems. You can use netcat to pipe data into the system, encrypting it with DES or Blowfish stream cyphers for security. This is fine, provided you do not need remote access to the system.)
Another handy analysis system is a new laptop. An excellent way of taking the lab to the victim: a fast laptop with a 10/100 combo ethernet card, an 18+GB hard drive, and a backpack with a padded case allows you to easily carry everything you need to obtain file system images (later written to tape for long-term storage), analyze them, display the results, crack intruders' crypt() passwords you encounter, etc.
A cross-over 10Base-T cable allows you to get by without a hub or switch, and to still use the network to communicate with the victim system on an isolated mini-network of two systems. (You will need to set up static route table entries in order for this to work.)
A Linux analysis system will work for analyzing file systems from several different operating systems whose file system types are supported under Linux, e.g., Sun UFS. You simply need to mount the file system with the proper type and options, e.g. (Sun UFS):
# mount -r -t ufs -o ufstype=sun /dev/hdd2 /mnt
Another benefit of Linux is "loopback" devices, which allow you to mount a file containing an image copy (obtained with dd) into the analysis system's file system, as shown below. See Appendices A and B.
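For example (device names and paths are illustrative):

# dd if=/dev/hdc1 of=/forensics/victim-hdc1.img bs=1024k            # image the suspect partition
# mount -r -o loop -t ext2 /forensics/victim-hdc1.img /mnt/victim   # mount the image read-only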
Let's first provide a little background. TCP Wrappers has been around for many, many years (see Wietse Venema's FTP archive). It is used to restrict access to TCP services based on host name, IP address, network address, and so on. For more details on what TCP Wrappers is and how you can use it, see tcpd(1M). TCP Wrappers was integrated into the Solaris Operating System starting in the Solaris 9 release, where both Solaris Secure Shell and inetd-based (streams, nowait) services were wrapped. Bonus points are awarded to anyone who knows why UDP services are not wrapped by default.

TCP Wrappers support in Secure Shell was always enabled, since Secure Shell always called the TCP Wrappers function host_access(3) to determine if a connection attempt should proceed. If TCP Wrappers was not configured on that system, access, by default, would be granted. Otherwise, the rules as defined in the hosts.allow and hosts.deny files would apply. For more information on these files, see hosts_access(4). Note that this and all of the TCP Wrappers manual pages are stored under /usr/sfw/man in the Solaris 10 OS. To view this manual page, you can use the following command:

$ man -M /usr/sfw/man -s 4 hosts_access

inetd-based services use TCP Wrappers in a different way. In the Solaris 9 OS, to enable TCP Wrappers for inetd-based services, you must edit the /etc/default/inetd file and set the ENABLE_TCPWRAPPERS parameter to YES. By default, TCP Wrappers was not enabled for inetd.

In the Solaris 10 OS, two new services were wrapped: sendmail and rpcbind. sendmail works in a way similar to Secure Shell: it always calls the host_access function, and therefore TCP Wrappers support is always enabled. Nothing else needs to be done to enable TCP Wrappers support for that service. On the other hand, TCP Wrappers support for rpcbind must be enabled manually using the new Service Management Facility (SMF). Similarly, inetd was modified to use an SMF property to control whether TCP Wrappers is enabled for inetd-based services.
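For reference, the enabling steps on Solaris 10 look roughly like this (a sketch using the standard service FMRIs):

# inetadm -M tcp_wrappers=TRUE                               # wrap all inetd-managed services globally
# inetadm -m svc:/network/telnet:default tcp_wrappers=TRUE   # or wrap a single service only
# svccfg -s svc:/network/rpc/bind setprop config/enable_tcpwrappers=true
# svcadm refresh svc:/network/rpc/bind                       # make the rpcbind change take effect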
The next item of my list of lesser known and/or publicized security enhancements to the Solaris 10 OS is account lockout. Account lockout is the ability of a system or service to administratively lock an account after that account has suffered "n" consecutive failed authentication attempts. Very often "n" is three hence the "three strikes" reference.
Recall from yesterday's entry on non-login and locked accounts that there is in fact a difference. Locked accounts are not able to access any system services whether interactively or through the use of delayed execution mechanisms such as cron(1M). So, when an account is locked out using this capability, only a system administrator is able to re-enable the account, using the passwd(1) command with the "-u" option.
Account lockout can be enabled in one of two ways. The first way enables account lockout globally for all users. The second method allows more granular control of which users will or will not be subject to the account lockout policy. Note that the account lockout capability applies only to accounts local to the system. We will look at both in a little more detail below.
Before we look at how to enable or disable the account lockout policy, let's first take a look at how you configure the number of consecutive failed authentication attempts that will serve as your line in the sand. Any number of consecutive failed attempts beyond the number selected will result in the account being locked. This number is based on the RETRIES parameter in the /etc/default/login file. By default, this parameter is set to 5. You can certainly customize this parameter based on your local needs and policy. By default, the Solaris Security Toolkit will set the RETRIES parameter to 3.
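Putting the pieces together, the relevant settings look roughly like this (the per-user account names are hypothetical):

# /etc/default/login -- lock after this many consecutive failures
RETRIES=3

# /etc/security/policy.conf -- enable lockout globally
LOCK_AFTER_RETRIES=YES

# Per-user override and recovery:
# usermod -K lock_after_retries=no oracle   # exempt one account from the policy
# passwd -u jsmith                          # administrator unlocks a locked account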
The Solaris Service Manager
To better handle software faults, Sun has redesigned the way it starts and monitors services. Instead of the traditional /etc/init.d startup scripts, many programs in the Solaris 10 OS have been converted to use the service management framework (smf) of the Solaris Service Manager to start, stop, modify, and monitor programs. The service manager is also used to identify software interdependencies and ensure that services are started in the correct order. Should a service, such as sendmail, suddenly die, the service manager automatically verifies that all of the requirements for the sendmail service are running and respawns the necessary programs. When a hardware fault occurs and hardware is offlined, the service manager can restart any programs under service manager control that needed to be stopped to remove the hardware from service.

Each service under the control of the service manager is controlled by an XML configuration file, called a manifest, that defines the name of the service, the type, any dependencies, and other important information. These manifests are stored in a repository and can be viewed and modified by the repository daemon, svc.configd(1M). The repository is read by the master restarter daemon, svc.startd(1M), which evaluates the dependencies and initiates the services as needed. Traditional inetd services are now part of the service manager as well. Any of the inetd services can be enabled, disabled, or restarted via the same mechanism as any other service manager-enabled program.
Itch scratching, and audit (Score:3, Interesting)
by RedPhoenix (124662) on Tuesday September 14, @09:15PM (#10251879)

At the risk of the post sounding like a discussion at a head-lice convention, everyone has their own personal itch to scratch. Several posts thus far have questioned the viability of establishing yet another secure-Debian project, similar to other existing projects, and have indicated that there would be a better use of available resources if everyone would just get along and work together (or at least form under a single project). Fair enough.
However, there are a whole range of reasons why diversity and natural selection w.r.t many competing projects can provide benefits over and above a single large project - organizational inertia, effective and efficient communication, and development priority differences, for example.
'Organizational inertia' in particular, whereby the larger an organization/project gets, the slower it can react to changing requirements, is a good reason why this effort-amalgamation can potentially be a bad thing.
Each of these projects probably has a slightly different 'itch' to 'scratch'. There's no reason why, later on down the track, the best elements of each of these projects cannot be merged into something cohesive.
A good example is the current situation in Linux Auditing (as in C2/CAPP style auditing and event logging, not code verification) and host-based audit-related intrusion detection. Over time, we've had Snare (http://www.intersectalliance.com), SLES (http://www.suse.com), and Riks Audit Daemon (http://www.redhat.com). Each project had a slightly different focus, and each development team have come up with some great solutions to the problems of auditing / event logging.
The developers of each of these projects are now communicating and collaborating, with a view to bringing an effective audit subsystem to Linux that incorporates the best ideas from each approach.
BTW: How about auditing in this project? Here's a starting point:
http://www.gweep.net/~malk/snare_debian.shtml

Red. (Snare Developer)
About: pam_passwdqc is a simple password strength checking module for PAM-aware password changing programs, such as passwd(1). In addition to checking regular passwords, it offers support for passphrases and can provide randomly generated passwords. All features are optional and can be (re-)configured without rebuilding.
Changes: The module will now assume invocation by root only if both the UID is 0 and the PAM service name is "passwd". This should fix changing expired passwords on Solaris and HP-UX and make "enforce=users" safe. The proper English explanations of requirements for strong passwords will now be generated for a wider variety of possible settings.
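A typical PAM stack entry looks like this (illustrative only; the five min values set length thresholds for the different password classes and should be tuned to local policy):

password requisite pam_passwdqc.so min=disabled,24,12,8,7 max=40 passphrase=3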
The topic for this article is the Solaris 10 Reduced Networking Software Group (also commonly known as the Solaris 10 Reduced Networking Meta Cluster). This software group is new and joins the five existing software groups available in Solaris today: Core, End User, Developer, Entire, and Entire + OEM. The Reduced Networking Software Group is positioned as a subset of Core and represents the smallest amount of Solaris that can or should be installed while still having a working and supported system. (Note that for support reasons, it is not advised to remove packages installed by the Reduced Networking Software Group.)
To install the Reduced Networking Software Group, simply select it from the list when doing a graphical installation. If you are using JumpStart, you should use the cluster keyword with the new value SUNWCrnet. The following is a sample JumpStart profile that uses the Reduced Networking Software Group. This profile was also used to build the system used as an example in this article.
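A minimal sketch of such a profile (the disk layout values are illustrative, not necessarily those of the article's original profile):

install_type    initial_install
system_type     standalone
cluster         SUNWCrnet
partitioning    explicit
filesys         c0t0d0s1 512 swap
filesys         c0t0d0s0 free /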
Each CERT Security Improvement module addresses an important but narrowly defined problem in network security. It provides guidance to help organizations improve the security of their networked computer systems.
Each module page links to a series of practices and implementations. Practices describe the choices and issues that must be addressed to solve a network security problem. Implementations describe tasks that implement recommendations described in the practices. For more information, read the section about module structure.
- List of modules
- List of practices
- List of implementations
- Configuring NCSA httpd and Web-server content directories on a Sun Solaris 2.5.1 host
- Enabling process accounting on systems running Solaris 2.x
- Installing, configuring, and using tcp wrapper to log unauthorized connection attempts on systems running Solaris 2.x
- Configuring and using syslogd to collect logging messages on systems running Solaris 2.x
- Using newsyslog to rotate files containing logging messages on systems running Solaris 2.x
- Installing, configuring, and using logdaemon to log unauthorized login attempts on systems running Solaris 2.x
- Installing, configuring, and using logdaemon to log unauthorized connection attempts to rshd and rlogind on systems running Solaris 2.x
- Understanding system log files on a Solaris 2.x operating system
- Installing, configuring, and using swatch to analyze log messages on systems running Solaris 2.x
- Installing, configuring, and using logsurfer on systems running Solaris 2.x
- Configuring and installing lsof 4.50 on systems running Solaris 2.x
- Configuring and installing top 3.5 on systems running Solaris 2.x
- Installing, configuring, and using npasswd to improve password quality on systems running Solaris 2.x
- Installing and configuring sps to examine processes on systems running Solaris 2.x
- Installing and securing Solaris 2.6 servers
- Installing, configuring, and operating the secure shell (SSH) on systems running Solaris 2.x
- Characterizing files and directories with native tools on Solaris 2.x
- Detecting changes in files and directories with native tools on Solaris 2.x
- Installing and operating lastcomm on systems running Solaris 2.x
- Installing, configuring, and using spar 1.3 on systems running Solaris 2.x
- Installing and operating tcpdump 3.5.x on systems running Solaris 2.x
- Installing, configuring, and using argus to monitor systems running Solaris 2.x
- Using newarguslog to rotate log files on systems running Solaris 2.x
- Installing libpcap to support network packet tools on systems running Solaris 2.x
- Writing rules and understanding alerts for Snort, a network intrusion detection system
- Disabling network services on systems running Solaris 2.x
- Installing noshell to support the detection of access to disabled accounts on systems running Solaris 2.x
- Disabling user accounts on systems running Solaris 2.x
- Installing OpenSSL to ensure availability of cryptographic libraries on systems running Solaris 2.x
- Installing and operating ssldump 0.9 Beta 1 on systems running Solaris 2.x
Linux sources that might be useful (some Linux HOWTOs are not bad and are largely applicable to other Unix environments):
A word of thanks is due to Dr. Cohen for making this valuable tool, the Deception Toolkit (DTK), freely available. Check it out!
This DTK is remarkable. Within three hours of successful installation, I was able to interdict a vicious (and persistent) little ankle-biter who has been troubling me for weeks.
Honeynet Project:
Some insecurely configured Web proxy servers can be exploited by a remote attacker to make arbitrary connections to unauthorized hosts. Two common abuses of a misconfigured proxy server are using it to bypass firewall restrictions and using it to send spam email. A proxy is used to bypass a firewall by connecting to it from outside the firewall and then opening a connection to a host inside the firewall. A proxy is used to send spam by connecting to it and then having it connect to an SMTP server. It has been reported that many Web proxy servers are distributed with insecure default configurations.
Users should carefully configure Web proxy servers to prevent unauthorized connections. It has been reported that http://www.monkeys.com/security/proxies/ contains secure configuration guidelines for many Web proxy servers. We cannot verify the accuracy of this information; if there are any questions, users should contact their vendors.
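A quick way to test whether a proxy will relay to arbitrary destinations is to ask it, from outside the firewall, to CONNECT to an SMTP port; a sketch using netcat, with placeholder host names:

    # Hypothetical open-proxy check; proxy.example.com and
    # mail.example.com are placeholders. A correctly restricted
    # proxy should refuse this CONNECT request.
    printf 'CONNECT mail.example.com:25 HTTP/1.0\r\n\r\n' | nc proxy.example.com 8080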
Solaris Fingerprint Database Companion & Sidekick
Sun Managers Mailing List Archive
Yassp Development Mailing List Archive
See also
A stack smashing attack is most typical for C programs. Many C programs have buffer overflow vulnerabilities, both because the C language lacks array bounds checking and because the culture of C programmers encourages a performance-oriented style that avoids error checking... Several papers contain "cookbook-style" descriptions of stack smashing attacks. If the attacker has access to a non-privileged account, then, unless the server has hardware or software stack protection, the only remaining work for a would-be attacker is to find a suitable unpatched utility and download or write an exploit. Hundreds of such exploits have been reported in recent years.
Aleph One's "Smashing The Stack For Fun And Profit" from Phrack 49
Mudge's "Compromised - Buffer - Overflows, from Intel to SPARC Version 8"
Mudge's "How to write Buffer Overflows"
Prym's "finding and exploiting programs with buffer overflows"
Richard Jones and Paul Kelly's bounds checking patches to GCC
Solar Designer's Non-executable user stack area -- Linux kernel patch
Miller, Fredriksen, and So's "An Empirical Study of the Reliability of UNIX Utilities"
Description of the StackGuard Mechanism
"StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks", Proceedings of the 7th USENIX Security Conference. Postscript, HTML
LinuxExpo 1999, Day 4: Protecting Systems from Stack Smashing ...
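On the defensive side, the ideas behind StackGuard and Solar Designer's non-executable stack patch (both referenced above) later became standard equipment. A sketch of the modern equivalents, assuming a reasonably recent GCC and a SPARC Solaris box:

    # Stack canaries in the StackGuard tradition (GCC 4.9 or later):
    gcc -fstack-protector-strong -o myprog myprog.c
    # Non-executable user stack on SPARC Solaris: add these lines to
    # /etc/system and reboot; the second line logs violation attempts.
    #   set noexec_user_stack=1
    #   set noexec_user_stack_log=1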
Other Exploits
*** SecurityPortal.com Securing your File System in Linux. Average discussion.
Best practices in Linux file system security dictate a philosophy of configuring file system access rights in the most restrictive way possible that still allows legitimate users and processes to function properly. However, even with the most careful planning and restrictive settings, successful file system attacks and corruption can occur. To have the most comprehensive plan for Linux file system security, a system administrator needs to modify a default installation's settings, proactively monitor and audit file system changes and have multiple methods to recover from a file system attack.
In configuring file system security, the key areas to be concerned about are: access rights granted to legitimate users to create/modify files and execute programs, access to the file system granted to remote machines, and any area of the file system designated as world-writable.
To quickly review Linux permissions for files and directories, there are three basic types: read (numerically represented as 4), write (2) and execute (1). The values are summed to determine the permissions for the file or directory - a value of 4 meaning read-only, a value of 7 meaning read, write and execute are allowed. A file or directory is assigned three standard sets of permissions: access allowed to the owner, the associated group, and everyone.
umask: A common occurrence over time on Linux systems is that when files get created or modified, the permissions become significantly more liberal than what was originally intended. When new files are created by users, administrators, or processes, the default permissions granted are determined by the umask. By using umask to set a restrictive default, files and directories that are created retain more restrictive permissions unless they are manually changed with chmod. Umask defaults for all users are set in /etc/profile. Default permissions are determined by masking the umask value out of 777 (effectively 666 for regular files, since newly created files do not get the execute bit). Files created by a user with a umask of 037 would have permissions of 640, which means the owner can read and write the file, the group can read it, and everyone else has no access. Setting the umask to 077 means no one other than the owner has any access to newly created files.
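A short interactive illustration of the arithmetic (file names are arbitrary):

    umask 037            # new files get 666 & ~037 = 640 (rw-r-----)
    touch report.txt     # owner can read/write, group can read, others none
    umask 077            # new files get 666 & ~077 = 600 (rw-------)
    touch private.txt    # no one other than the owner has access
    chmod 640 report.txt # the same 640 set explicitly with chmod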
... ... ...
NFS, Samba - The "Not For Security" file system should be avoided where possible on Linux boxes directly connected to the Internet. NFS requires a high degree of trust in the peer machines that will be mounting your partitions. You must be very careful about granting anything beyond read access to hosts in /etc/exports. Samba, while not using a peer trust system, can nonetheless make maintaining user rights complex. Both are network file services, and the only way to be sure that your file system is not at risk is to run them in a completely trusted environment.
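If NFS must be exported at all, a read-only, host-restricted entry is the safest form; on Linux an /etc/exports line might look like the following sketch (path and subnet are placeholders):

    # Hypothetical /etc/exports entry: read-only, limited to one
    # subnet, with remote root mapped to an unprivileged user.
    /export/pub   192.168.1.0/24(ro,root_squash)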
Auditing your file system regularly is a must. You should look for files with the permission anomalies described above. You should also look for changes in standard files and packages. At a minimum, you can use the find command to search for questionable file permissions:
Suid & sgid: find / \( -perm -2000 -o -perm -4000 \) -ls (You can add -o -perm -1000 to catch sticky bit files and directories)
World-writable files: find / -perm -2 ! -type l -ls
Files with no owner: find / \( -nouser -o -nogroup \) -print (thanks to Michael Wood for correcting this)
You can create a cron job for a simple script that directs this output to a file, compares it with the file produced by the previous day's search, and mails the difference to you, as sketched below.
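A minimal sketch of such a script; the directory, file names, and recipient are illustrative:

    #!/bin/sh
    # Hypothetical daily permission audit: compare today's SUID/SGID
    # list with yesterday's and mail any difference to root.
    DIR=/var/local/audit
    mkdir -p "$DIR"
    [ -f "$DIR/perms.today" ] && mv "$DIR/perms.today" "$DIR/perms.yesterday"
    find / \( -perm -2000 -o -perm -4000 \) -ls > "$DIR/perms.today" 2>/dev/null
    if [ -f "$DIR/perms.yesterday" ]; then
        CHANGES=$(diff "$DIR/perms.yesterday" "$DIR/perms.today")
        [ -n "$CHANGES" ] && echo "$CHANGES" | \
            mail -s "SUID/SGID changes on $(hostname)" root
    fi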
As you might guess, several people have written tools, ranging from simple to complex, that check for files with questionable permissions, checksum binaries to detect tampering, and perform a host of other functions. Here are a few:
Remote audit services
Last modified: March 12, 2019