Tmpfs was pioneered in Solaris and is designed primarily as a performance enhancement that allows short-lived files to be written and accessed without generating disk or network I/O. Tmpfs maximizes file manipulation speed while preserving UNIX file semantics. It does not require dedicated disk space for files and has no negative performance impact. The most brilliant idea, somewhat reminiscent of Multics, was to use the virtual memory system facilities for file storage and access. Tmpfs files are written and accessed directly from memory maintained by the kernel; they are not differentiated from other uses of physical memory. This means tmpfs file data can be "swapped" or paged to disk, freeing VM resources for other needs. General VM system routines are used to perform many low-level tmpfs file system operations. This reduces the amount of code needed to maintain the file system and ensures that tmpfs resource demands can coexist with other VM resource users with no adverse effects. As such it is more advanced than a RAM disk, and more dynamic: tmpfs can use both RAM and swap, pushing inactive files to disk. It was invented in Solaris and later reimplemented, in a slightly different form, in Linux.
The TMPFS file system is activated automatically in the Solaris environment by an entry in the /etc/vfstab file. The TMPFS file system stores files and their associated information in memory (in the /tmp directory) rather than on disk, which speeds access to those files. This results in a major performance improvement for applications that use the file system intensively, such as Sendmail and other MTAs, databases, and web servers (although for web servers a solid state drive also gives very good results, as most of their content is static). See Swap Space and Virtual Memory.
As tmpfs is typically used for /tmp, most of the information about it is provided in the discussion of swap (The Swap File System).
The TMPFS file system is activated automatically in the Solaris environment by an entry in the /etc/vfstab file. The TMPFS file system stores files and their associated information in memory (in the /tmp directory) rather than on disk, which speeds access to those files. This feature results in a major performance enhancement for applications such as compilers and DBMS products that use /tmp heavily...

Physical memory is the random-access memory (RAM) installed in a computer. To view the amount of physical memory installed in your computer, type the following:

prtconf | grep "Memory size"

The system displays a message similar to the following:

Memory size: 384 Megabytes

Not all physical memory is available for Solaris processes. Some memory is reserved for kernel code and data structures. The remaining memory is referred to as available memory. Processes and applications on a system can use available memory.
Physical memory is supplemented by specially configured space on the physical disk that is known as swap space; together they are referred to as virtual memory. Swap space is configured either on a special disk partition known as a swap partition or on a swap file system (swapfs). In addition to swap partitions, special files called swap files can also be configured in existing Unix file systems (UFS) to provide additional swap space when needed.
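For illustration only, extra swap can be added to a running system with a swap file; the path and size below are arbitrary examples:

# mkfile 512m /export/data/swapfile     # create an empty 512 MB file
# swap -a /export/data/swapfile         # add it to the swap configuration
# swap -l                               # list the configured swap areas

To make the addition permanent, a corresponding swap-file line can also be added to /etc/vfstab.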
Every process running on a Solaris system requires space in memory. Space is allocated to processes in units known as pages. Some of a process's pages are used to store the process executable, and other pages are used to store the process's data.
Physical memory is a finite resource on any computer, and sometimes there are not enough pages in physical memory for all of a system's processes. When a physical memory shortfall is encountered, the virtual memory system begins moving data from physical memory out to the system's configured swap areas. When a process requests data that has been sent to a swap area, the virtual memory system brings that data back into physical memory. This process is known as paging.
The Solaris virtual memory system maps the files on disk to virtual addresses in memory; this is referred to as virtual swap space. As data in those files is needed, the virtual memory system maps the virtual addresses in memory to real physical addresses in memory. This mapping process greatly reduces the need for large amounts of physical swap space on systems with large amounts of available memory.
The virtual swap space provided by swapfs reduces the need for configuring large amounts of disk-based swap space on systems with large amounts of physical memory. This is because swapfs provides virtual swap space addresses rather than real physical swap space addresses in response to the requests to reserve swap space.
With the virtual swap space provided by swapfs, real disk-based swap space is required only with the onset of paging, because when paging occurs, processes are contending for memory. In this situation, swapfs must convert the virtual swap space addresses to physical swap space addresses in order for paging to actual disk-based swap space to occur.
Swap Space and TMPFS
The temporary file system (TMPFS) makes use of virtual memory for its storage; this can be either physical RAM or swap space, and it is transparent to the user. /tmp is a good example of a TMPFS file system where temporary files and their associated information are stored in memory (in the /tmp directory) rather than on disk. This speeds up access to those files and results in a major performance enhancement for applications such as compilers and database management system (DBMS) products that use /tmp heavily.
TMPFS allocates space in the /tmp directory from the system's virtual memory (that is, swap) resources. This means that as you use up space in /tmp, you are also using up virtual memory and swap space. So if your applications use /tmp heavily and you do not monitor swap usage, your system could run out of swap space.
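To keep an eye on this, both tmpfs usage and the overall swap situation can be checked with standard commands; a quick illustrative check:

$ df -k /tmp        # how much space the tmpfs file system currently uses
$ swap -s           # summary of allocated, reserved, used and available swap

If the "available" figure reported by swap -s keeps shrinking while /tmp fills up, applications may soon start failing with the "File system full" errors described below.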
Use the following if you want to use TMPFS but your swap resources are limited:
Using your compiler's TMPDIR variable only controls whether the compiler is using the /tmp directory. This variable has no effect on other programs' use of the /tmp directory.
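For example, a compiler's scratch files can be redirected to the disk-based /var/tmp for one session (Bourne/ksh syntax; the source file name is just an illustration):

$ TMPDIR=/var/tmp
$ export TMPDIR
$ cc -O -c big_module.c      # compiler temporaries now land in /var/tmp instead of /tmp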
Swap-Related Error Messages
These messages indicate that an application was trying to get more anonymous memory and there was no swap space left to back it.
TMPFS-Related Error Messages
directory: File system full, swap space limit exceeded
This message is displayed if a page could not be allocated when writing a file. This can occur when TMPFS tries to write more than it is allowed, or if currently executing programs are using a lot of memory.
directory: File system full, memory allocation failed
This message means TMPFS ran out of physical memory while attempting to create a new file or directory.
See TMPFS(7FS) for information on recovering from the TMPFS-related error messages.
The main paper on Solaris tmpfs is "tmpfs: A Virtual Memory File System" by P. Snyder:
This paper describes tmpfs, a memory-based file system that uses resources and structures of the SunOS virtual memory subsystem. Rather than using dedicated physical memory such as a ‘‘RAM disk’’, tmpfs uses the operating system page cache for file data. It provides increased performance for file reads and writes, allows dynamic sizing of the file system while requiring no disk space, and has no adverse effects on overall system performance. The paper begins with a discussion of the motivations and goals behind the development of tmpfs, followed by a brief description of the virtual memory resources required in its implementation. It then discusses how some common file system operations are accomplished. Finally, system performance with and without tmpfs is compared and analyzed.
1. Introduction
This paper describes the design and implementation of tmpfs, a file system based on SunOS virtual memory resources. Tmpfs does not use traditional non-volatile media to store file data; instead, tmpfs files exist solely in virtual memory maintained by the UNIX kernel. Because tmpfs file systems do not use dedicated physical memory for file data but instead use VM system resources and facilities, they can take advantage of kernel resource management policies.
Tmpfs is designed primarily as a performance enhancement to allow short lived files to be written and accessed without generating disk or network I/O. Tmpfs maximises file manipulation speed while preserving UNIX file semantics. It does not require dedicated disk space for files and has no negative performance impact.
Tmpfs is intimately tied to many of the SunOS virtual memory system facilities. Tmpfs files are written and accessed directly from the memory maintained by the kernel; they are not differentiated from other uses of physical memory. This means tmpfs file data can be ‘‘swapped’’ or paged to disk, freeing VM resources for other needs. General VM system routines are used to perform many low level tmpfs file system operations. This reduces the amount of code needed to maintain the file system, and ensures that tmpfs resource demands may coexist with other VM resource users with no adverse effects.
This paper begins by describing why tmpfs was developed and compares its implementation against other projects with similar goals. Following that is a description of its use and its appearance outside the kernel. Section 4 briefly discusses some of the structures and interfaces used in the tmpfs design and implementation. Section 5 explains basic file system operations and how tmpfs performs them differently than other file system types. Section 6 discusses and analyzes performance measurements. The paper concludes with a summary of tmpfs goals and features.
2. Implementation Goals
Tmpfs was designed to provide the performance gains inherent to memory-based file systems, while making use of existing kernel interfaces and minimising impact on memory resources. Tmpfs should support UNIX file semantics while remaining fully compatible with other file system types. Tmpfs should also provide additional file system space without using additional disk space or affecting other VM resource users.
File systems are comprised of two types of information; the data for files residing on a file system, and control and attribute information used to maintain the state of a file system. Some file system operations require that control information be updated synchronously so that the integrity of the file system is preserved. This causes performance degradation because of delays in waiting for the update’s I/O request to complete.
Memory-based file systems overcome this and provide greater performance because file access only causes a memory-to-memory copy of data, no I/O requests for file control updates are generated. Physical memory-based file systems, usually called RAM disks, have existed for some time. RAM disks reserve a fairly large chunk of physical memory for exclusive use as a file system. These file systems are maintained in various ways; for example, a kernel process may fulfill I/O requests from a driver-level strategy routine that reads or writes data from private memory [McKusick1990]. RAM disks use memory inefficiently; file data exists twice in both RAM disk memory and kernel memory, and RAM disk memory that is not being used by the file system is wasted. RAM disk memory is maintained separately from kernel memory, so that multiple memory-to-memory copies are needed to update file system data.
Tmpfs uses memory much more efficiently. It provides the speed of a RAM disk because file data is likely to be in main memory, causing a single memory-to-memory copy on access, and because all file system attributes are stored once in physical memory, no additional I/O requests are needed to maintain the file system. Instead of allocating a fixed amount of memory for exclusive use as a file system, tmpfs file system size is dynamic depending on use, allowing the system to decide the optimal use of memory.
3. tmpfs usage
Tmpfs file systems are created by invoking the mount command with ‘‘tmp’’ specified as the file system type. The resource argument to mount (e.g., raw device) is ignored because tmpfs always uses memory as the file system resource. There are currently no mount options for tmpfs. Most standard mount options are irrelevant to tmpfs; for example, a ‘‘read only’’ mount of tmpfs is useless because tmpfs file systems are always empty when first mounted. All file types are supported, including symbolic links and block and character special device files. UNIX file semantics are supported. Multiple tmpfs file systems can be mounted on a single system, but they all share the same pool of resources.
Because the contents of a volatile memory-based file system are lost across a reboot or unmount, and because these files have relatively short lifetimes, they would be most appropriate under /tmp, (hence the name tmpfs). This means /usr/tmp is an inappropriate directory in which to mount a tmpfs file system because by convention its contents persist across reboots.
The amount of free space available to tmpfs depends on the amount of unallocated swap space in the system. The size of a tmpfs file system grows to accommodate the files written to it, but there are some inherent tradeoffs for heavy users of tmpfs. Tmpfs shares resources with the data and stack segments of executing programs. The execution of very large programs can be affected if tmpfs file systems are close to their maximum allowable size. Tmpfs is free to allocate all but 4MB of the system’s swap space. This is enough to ensure most programs can execute, but it is possible that some programs could be prevented from executing if tmpfs file systems are close to full. Users who expect to run large programs and make extensive use of tmpfs should consider enlarging the swap space for the system.
4. Design
SunOS virtual memory consists of all its available physical memory resources (e.g., RAM and file systems). Physical memory is treated as a cache of ‘‘pages’’ containing data accessed as memory ‘‘objects’’. Named memory objects are referenced through UNIX file systems, the most common of which is a regular file. SunOS also maintains a pool of unnamed (or anonymous, defined in section 4.4) memory to facilitate cases where memory cannot be accessed through a file system. Anonymous memory is implemented using the processor's primary memory and swap space.
Tmpfs uses anonymous memory in the page cache to store and maintain file data, and competes for pages along with other memory users. Because the system does not differentiate tmpfs file data from other page cache uses, tmpfs files can be written to swap space. Control information is maintained in physical memory allocated from kernel heap. Tmpfs file data is accessed through a level of indirection provided by the VM system, whereby the data’s tmpfs file and offset are transformed to an offset into a page of anonymous memory.
Tmpfs itself never causes file pages to be written to disk. If a system is low on memory, anonymous pages can be written to a swap device if they are selected by the pageout daemon or are mapped into a process that is being swapped out. It is this mechanism that allows pages of a tmpfs file to be written to disk. The following sections provide an overview of the design and implementation of tmpfs.
For detailed information about the SunOS virtual memory design and implementation, please consult the references [Gingell1987] and [Moran1988] listed at the end of the paper.
To understand how tmpfs stores and retrieves file data, some of the structures used by the kernel to maintain virtual memory are described. The following is a brief description of some key VM and tmpfs structures and their use in this implementation.
4.1. tmpfs data structures
4.2. vnodes
A vnode is the fundamental structure used within the kernel to name file system memory objects. Vnodes provide a file system independent structure that gives access to the data comprising a file object. Each open file has an associated vnode. This vnode contains a pointer to a file system dependent structure for the private use of the underlying file system object. It is by this mechanism that operations on a file pass through to the underlying file system object [Kleiman1986].
4.3. page cache
The VM system treats physical memory as a cache for file system objects. Physical memory is broken up into a number of page frames, each of which has a corresponding page structure. The kernel uses page structures to maintain the identity and status of each physical page frame in the system. Each page is identified by a vnode and offset pair that provides a handle for the physical page resident in memory and that also designates the part of the file system object the page frame caches.
4.4. anonymous memory
Anonymous memory is a term describing page structures whose name (i.e., vnode and offset pair) is not part of a file system object. Page structures associated with anonymous memory are identified by the vnode of a swap device and the offset into that device. It is not possible to allocate more anonymous memory than there is swap space in a system. The location of swap space for an anonymous page is determined when the anonymous memory is allocated. Anonymous memory is used for many purposes within the kernel, such as: the uninitialised data and stack segments of executing programs, System V shared memory, and pages created through copy on write faults [Moran1988]. An anon structure is used to name every anonymous page in the system. This structure introduces a level of indirection between anonymous pages and their position on a swap device. An anon_map structure contains an array of pointers to anon structures and is used to treat a collection of anonymous pages as a unit. Access to an anonymous page structure is achieved by obtaining the correct anon structure from an anon_map. The anon structure contains the swap vnode and offset of the anonymous page.
4.5. tmpnode
A tmpnode contains information intrinsic to tmpfs file access. It is similar in content and function to an inode in other UNIX file systems (e.g., BSD Fast file system). All file-specific information is included in this structure, including file type, attributes, and size. Contained within a tmpnode is a vnode. Every tmpnode references an anon_map which allows the kernel to locate and access the file data. References to an offset within a tmpfs file are translated to the corresponding offset in an anon_map and passed to routines dealing with anonymous memory. In this way, an anon_map is used similarly to the direct disk block array of an inode. tmpnodes are allocated out of kernel heap, as are all tmpfs control structures. Directories in tmpfs are special instances of tmpnodes. A tmpfs directory structure consists of an array of filenames stored as character strings and their corresponding tmpnode pointers.
4.6. vnode segment (seg_vn)
SunOS treats individual mappings within a user address space in an object oriented manner, with public and private data and an operations vector to maintain the mapping. These objects are called ‘‘segments’’ and the routines in the ‘‘ops’’ vector are collectively termed the ‘‘segment driver’’. The seg_vn structure and segment driver define a region of user address space that is mapped to a regular file. The seg_vn structure describes the range of a file being mapped, the type of mapping (e.g., shared or private), and its protections and access information. User mappings to regular files are established through the mmap system call. The seg_vn data structure references an anon_map, similar in structure and use to that of tmpfs. When a mapping to a tmpfs file is first established, the seg_vn structure is initialised to point to the anon_map associated with the tmpfs file. This ensures that any change to the tmpfs file (e.g., truncation) is reflected in all seg_vn mappings to that file.
4.7. kernel mapping (seg_map)
The basic algorithm for SunOS read and write routines is for the system to first establish a mapping in kernel virtual address space to a vnode and offset, then copy the data from or to the kernel address as appropriate (i.e., for read or write, respectively). Kernel access to non-memory resident file data causes a page fault, which the seg_map driver handles by calling the appropriate file system routines to read in the data. The kernel accesses file data much the same way a user process does when using file mappings. Users are presented with a consistent view of a file whether they are mapping the file directly or accessing it through read or write system calls. The seg_map segment driver provides a structure for maintaining vnode and offset mappings of files and a way to resolve kernel page faults when this data is accessed. The seg_map driver maintains a cache of these mappings so that recently-accessed offsets within a file remain in memory, decreasing the amount of page fault activity.
5. Implementation
All file operations pass through the vnode layer, which in turn calls the appropriate tmpfs routine. In general, all operations that manipulate file control information (e.g., truncate, setattr, etc.) are handled directly by tmpfs. Tmpfs uses system virtual memory routines to read and write file data. Page faults are handled by anonymous memory code which reads in the data from a swap device if it is not memory resident. Algorithms for some basic file system operations are outlined below.
5.1. mount
Tmpfs file systems are mounted in much the same manner as any other file system. As with other file systems, a vfs structure is allocated, initialised and added to the kernel’s list of mounted file systems. A tmpfs-specific mount routine is then called, which allocates and initialises a tmpfs mount structure, and then allocates the root directory for the file system. A tmpfs mount point is the root of the entire tmpfs directory tree, which is always memory resident.
5.2. read/write
The offset and vnode for a particular tmpfs file are passed to tmpfs through the vnode layer from the read or write system call. The tmpfs read or write routine is called, which locates the anon structure for the specified offset. If a write operation is extending the file, or if a read tries to access an anon structure that does not yet exist (i.e., a hole exists in the file), an anon structure is allocated and initialised. If it is a write request, the anon_map is grown if it is not already large enough to cover the additional size. The true location of the tmpfs file page (i.e., the vnode and offset on the swap device) is found from the anon structure. A kernel mapping (using a seg_map structure) is made to the swap vnode and offset. The kernel then copies the data between the kernel and user address space. Because the kernel mapping has the vnode and offset of the swap device, if the file page needs to be faulted in from the swap device, the appropriate file system operations for that device will be executed.
5.3. Mapped files
Mappings to tmpfs files are handled in much the same way as in other file systems. A seg_vn structure is allocated to cover the specified vnode and offset range. However, tmpfs mappings are accessed differently than with other file systems. With many file systems, a seg_vn structure contains the vnode and offset range corresponding to the file being mapped. With tmpfs, the segment is initialised with a null vnode. Some seg_vn segment driver routines assume that pages should be initialised to the vnode contained in the seg_vn structure, unless the vnode is null. If the vnode were set to be that of the tmpfs file, routines expecting to write the page out to a swap device (e.g., the pageout daemon) would write the page to the tmpfs file with no effect. Instead, routines initialise pages with the vnode of the swap device. Shared mappings to tmpfs files simply share the anon_map with the file's tmpnode. Private mappings allocate a new anon_map and copy the pointers to the anon structures, so that copy-on-write operations can be performed.
6. Performance
All performance measurements were conducted on a SPARCstation 1 configured with 16MB of physical memory and 32MB of local (ufs) swap.
6.1. File system operations
Table 1 refers to results from a Sun internal test suite developed to verify basic operations of ‘‘NFS’’ file system implementations. Nine tests were used:

create        Create a directory tree 5 levels deep.
remove        Removes the directory tree from create.
lookup        stat a directory 250 times.
setattr       chmod and stat 10 files 50 times each.
read/write    write and close a 1MB file 10 times, then read the same file 10 times.
readdir       read 200 files in a directory 200 times.
link/rename   rename, link and unlink 10 files 200 times.
symlink       create and read 400 symlinks on 10 files.
statfs        stat the tmpfs mount point 1500 times.

Test type      nfs (sec)   ufs (sec)   tmpfs (sec)
create             24.15       16.44        0.14
remove             20.23        6.94        0.80
lookups             0.45        0.22        0.22
setattr            19.23       22.31        0.48
write             135.22       25.26        2.71
read                1.88        1.76        1.78
readdirs           10.20        5.45        1.85
link/rename        14.98       13.48        0.23
symlink            19.84       19.93        0.24
statfs              3.96        0.27        0.26

Table 1: nfs test suite

The create, remove, setattr, write, readdirs, link and symlink benchmarks all show an order of magnitude performance increase under tmpfs. This is because for tmpfs, these file system operations are performed completely within memory, but with ufs and nfs, some system I/O is required. The other operations (lookup, read, and statfs) do not show the same performance improvements largely because they take advantage of various caches maintained in the kernel.
6.2. File create and deletes
While the previous benchmark measured the component parts of file access, this benchmark measures overall access times. This benchmark was first used to compare file create and deletion times for various operating systems [Ousterhout1990]. The benchmark opens a file, writes a specified amount of data to it, and closes the file. It then reopens the file, rereads the data, closes, and deletes the file. The numbers are the average of 100 runs.
File size (kilobytes)   nfs (ms)    ufs (ms)    tmpfs (ms)
0                          82.63       72.34        1.61
10                        236.29      130.50        7.25
100                       992.45      405.45       46.30
1024 (1MB)              15600.86     2622.76      446.10

Table 2: File creates and deletes

File access under tmpfs shows great performance gains over other file systems. As the file size grows, the difference in performance between file system types decreases. This is because as the file size increases, all of the file system read and write operations take greater advantage of the kernel page cache.
6.3. kernel compiles
Table 3 presents compilation measurements for various types of files with ‘‘/tmp’’ mounted from the listed file system types. The ‘‘large file’’ consisted of approximately 2400 lines of code. The ‘‘benchmark’’ compiled was the NFS test suite from the section above. The kernel was a complete kernel build from scratch.

Compile type   nfs          ufs           tmpfs
large file        50.46        40.22         32.82
benchmark         50.72        47.98         38.52
kernel        39min 49.9   32min 27.45   27min 8.11

Table 3: Typical compile times (in seconds)

Even though tmpfs is always faster than either ufs or nfs, the differences are not as great as with the previous benchmarks. This is because compiler performance is affected more by the speed of the CPU and compiler than by I/O rates to the file system. Also, there can be much system paging activity during compiles, causing tmpfs pages to be written to swap and decreasing the performance gains.
6.4. Performance discussion
Tmpfs performance varies depending on usage and machine configuration. A system with ample physical memory but a slow disk, or one on a busy network, notices improvements from using tmpfs much more than a machine with minimal physical memory and a fast local disk. Applications that create and access files that fit within the available memory of a system have much faster performance than applications that create large files causing a demand for memory. When memory is in high demand, the system writes pages out and frees them for other uses. Pages associated with tmpfs files are just as likely to be written out to backing store as other file pages, minimising tmpfs performance gains. Tmpfs essentially caches in memory those writes normally scheduled for a disk or network file system. Tmpfs provides the greatest performance gains for tasks that generate the greatest number of file control updates. Tmpfs never writes out control information for files, so directory and file manipulation is always a performance gain.
7. Summary
The tmpfs implementation meets its design goals. Tmpfs shows the performance gains associated with memory-based file systems but also provides significant advantages over RAM disk style file systems. Tmpfs uses memory efficiently because it is not a fixed size, and memory not used by tmpfs is available for other uses. It is tightly integrated with the virtual memory system and so takes advantage of system page cache and the kernel's resource management policies. Tmpfs uses many of the kernel's interfaces and facilities to perform file system operations. It provides additional file system space, and supports UNIX file semantics while remaining fully compatible with other file system types.
Applies to: Solaris SPARC Operating System - Version 8.0 and later [Release: 8.0 and later] - All Platforms

Goal

Performance of the tmpfs file system can be improved by setting the tmpfs tunable "tmp_nopage = 1" in /etc/system. This issue is raised in bug
Solution
Tmpfs is a memory resident file system. It uses the page cache for caching file data. Files created in a tmpfs file system avoid physical disk reads and writes.
The primary goal of designing tmpfs was to improve read/write performance of short lived files without invoking network and disk I/O.
Tmpfs does not use dedicated memory such as a "RAM disk". Instead it uses virtual memory (VM) maintained by the kernel. This allows it to use VM and kernel resource allocation policies. Tmpfs files are written and read directly from kernel memory. Pages allocated to tmpfs files are treated the same way as any other physical memory pages.
Physical memory assigned to tmpfs files uses anonymous memory to store file data. The kernel does not differentiate tmpfs file data from the page cache. During memory pressure, tmpfs pages can be freed and written back to the physical swap device if the page daemon selects them as candidates for such.
It is the user's responsibility to keep a backup of tmpfs files by copying them to a disk-based file system such as ufs. Otherwise, tmpfs files will be lost in case of a crash or reboot.
In Solaris, fsflush (the file system flush daemon) is responsible for flushing dirty pages to disk. A page is considered dirty when its content has been modified in memory and has not yet been synced to disk. For every dirty page in memory, fsflush calls the putpage() routine of the file system responsible for writing the page to the backing store: for the ufs file system fsflush calls fs_putpage(), and similarly for a tmpfs dirty page it calls tmpfs_putpage(). Pages in memory are identified using a vnode and offset.
When a tmpfs file is created or modified, its pages are marked dirty. Tmpfs pages stay dirty until the file is deleted. The only time the tmpfs_putpage() routine pushes dirty tmpfs pages to the swap device is when the system experiences memory pressure. Systems with no physical swap device, or configured with plenty of physical memory, can avoid this overhead by setting the tmpfs tunable

tmpfs:tmp_nopage = 1

in /etc/system. Setting this tunable causes tmpfs_putpage() to return immediately, avoiding its overhead.
tmpfs_putpage() Overhead

There is a great deal of work done in the tmp_putpage() routine. For every vnode and offset, tmpfs searches for the dirty page in the global page hash list and locks the page. To make sure it can write multiple dirty pages in chunks, it performs a similar search for pages adjacent to the locked page. tmpfs_putpage() then does a lookup for the backing store for the page. If the physical swap device is full or not configured, it unlocks the pages and returns without writing the dirty pages. The page-out operation to the swap device only happens when free memory (freemem) is low. For every successful page-out, tmpfs_putpage() increments tmp_putpagecnt and tmp_pagespushed. Systems with no physical swap device, or systems with physical swap but plenty of memory, should have a zero value for tmp_putpagecnt and tmp_pagespushed.
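On a live system these counters can be inspected with mdb, in the same spirit as the tmp_nopage example further down; this is only a sketch and assumes the symbols are visible without a module prefix:

# echo 'tmp_putpagecnt/D' | mdb -k
# echo 'tmp_pagespushed/D' | mdb -k

Non-zero and growing values indicate that tmpfs pages really are being pushed to the swap device.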
If the system has no swap device configured, then the option to use paging out to free up memory is not available.
Testing and Verification
Lab tests have shown that copying a large file (1 GB in size) from a tmpfs to a ufs file system gets a huge performance boost when the tmp_nopage tunable is set to 1. Test results are shown below:
tmp_nopage=0 (default)
$ mkfile 1024m /tmp/one
$ ptime cp /tmp/one /fast/one
real     2:27.301
user        0.044
sys      2:27.207
$ mkfile 1024m /tmp/two
$ ptime cp /tmp/two /fast/two
real     2:27.452
user        0.044
sys      2:27.352
tmp_nopage=1
Setting tmp_nopage=1 on a live system using mdb:
# echo 'tmp_nopage/W 1' | mdb -kw
$ rm /tmp/* /fast/*
$ mkfile 1024m /tmp/one
$ ptime cp /tmp/one /fast/one
real       18.767    << 18 seconds instead of over 2 minutes
user        0.044
sys        18.695
$ mkfile 1024m /tmp/two
$ ptime cp /tmp/two /fast/two
real       19.160
user        0.040
sys        19.095
Setting tmp_nopage permanently
To set this on a permanent basis, the following line should be placed in /etc/system and the system rebooted:
set tmpfs:tmp_nopage=1
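After the reboot the setting can be verified with mdb (illustrative):

# echo 'tmp_nopage/D' | mdb -k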
January 24, 2011 | sonia hamilton
A useful script from BigAdmin for increasing the size of a tmpfs /tmp without rebooting. See also SoftPanorama, Talking about RAM disks in the Solaris OS.

#!/bin/ksh
if [ $# -ne 1 ]; then
   echo ""
   echo "\tUsage: $0 newsize"
   echo ""
   echo "Where newsize is the size in kilobytes (default) that you want /tmp to be"
   echo "Alternatively you can specify size in (p)ages (m)egabytes or (g)igabytes"
   echo ""
   exit
fi
if [ -z `id | grep "uid=0"` ]; then
   echo "ERROR -- you must be root to run this script"
   exit
fi
pagesize=`pagesize`
pagesize=$(( $pagesize / 1024 ))
echo "Pages are ${pagesize}K"
newsize=`echo $1 | sed -e 's/\([kKmMgGpP]\)/ \1/' | tr '[a-z]' '[A-Z]'`
type=`echo "$newsize" | awk '{print $2}'`
newsize=`echo "$newsize" | awk '{print $1}'`
case "$type" in
   P) newsize=$(( $newsize * $pagesize )) ;;
   M) newsize=$(( $newsize * 1024 )) ;;
   G) newsize=$(( $newsize * 1024 * 1024 )) ;;
esac
if [ "$newsize" -lt 102400 ]; then
   echo "ERROR -- this script won't let you go below 100MB (102400K)"
   echo ""
   exit
fi
tmp_size=`df -k /tmp | grep ^swap | awk '{print $2}'`
if [ "$tmp_size" -eq 0 ]; then
   echo "Error, cannot get size reading on /tmp"
   exit
fi
tmp_pages=$(( $tmp_size / $pagesize ))
echo "/tmp is ${tmp_size}K (${tmp_pages} pages)"
newsize_pages=$(( $newsize / $pagesize ))
echo "/tmp will be resized to ${newsize}K (${newsize_pages})"
if [ "$tmp_size" -gt "$newsize" ]; then
   echo "ERROR -- this script cannot be used to shrink /tmp"
   echo ""
   exit
fi
tmp_addresses=`echo "vfs" | crash | grep tmpfs | awk '{print $6}'`
if [ -z "$tmp_addresses" ]; then
   echo "Ach, cannot get addresses from crash..."
   exit
fi
for i in $tmp_addresses; do
   echo "Looking at address $i"
   mysize=`echo "${i}+18/e" | adb -k | grep -v physmem | awk '{print $2}'`
   if [ "$mysize" -eq "$tmp_pages" ]; then
      if [ -z "$foundit" ]; then
         echo "Looks like $i is the one!"
         foundit=$i
      else
         echo "Interesting! Looks like there's more than one match."
         echo "You're going to have to do this by hand"
         exit
      fi
   fi
done
if [ -z "$foundit" ]; then
   echo "Error -- cannot locate a tmpfs filesystem that's the size of /tmp"
   exit
fi
echo "Before:"
df -k /tmp
echo "${foundit}+18/Z 0T${newsize_pages}" | adb -k -w
#echo "${foundit}+18/Z 0T${newsize_pages}"
echo "After:"
df -k /tmp
##############################################################################
### This script is submitted to BigAdmin by a user of the BigAdmin community.
### Sun Microsystems, Inc. is not responsible for the contents or the code enclosed.
###
### Copyright Sun Microsystems, Inc. ALL RIGHTS RESERVED
### Use of this software is authorized pursuant to the terms of the license found at
### http://www.sun.com/bigadmin/common/berkeley_license.jsp
##############################################################################
Background
Adaptive Server devices usually are raw devices or file system devices. Solaris users have a third option, tmpfs, for tempdb.
tmpfs -- the temporary file system -- caches writes only for a session. Files are not preserved across operating system reboots.
Note: Other UNIX platforms may allow you to create a temporary file system device. See your operating system System Administrator.
Should you use tmpfs?
To determine whether tmpfs would benefit your system, perform benchmarks comparing the memory assigned to tmpfs versus the memory assigned to the data cache.
Usually, it is more effective to give extra memory to the server for use as general data cache rather than creating a tmpfs device for tempdb. If tempdb is used heavily, then it will use a fair share of the data cache. If tempdb is not used often, then the server can use the memory assigned to the data cache for non-tempdb data processing, but if the memory is assigned to tmpfs it is wasted.
Servers that are most likely to benefit from using tmpfs are those that are already near the addressable memory limit:
- For Sybase SQL Server 11.0.x, see TechNote 20239: Addressable Memory Limits for Sybase SQL Server 11.0.x .
- For Adaptive Server Enterprise 11.5.x, see TechNote 20101: Addressable Memory Limits in Adaptive Server Enterprise 11.5.x .
- For Adaptive Server 11.9.2, the limits generally are the same as in TechNote 20101.
Addressable memory in Adaptive Server 11.9.3 generally is 4TB, and therefore tmpfs may not be as beneficial.
Creating a tmpfs device
Follow these steps:
- Create and test an operating system startup script that creates tmpfs after every operating system reboot (a minimal example of such a script is sketched after these steps). See the Solaris man page on tmpfs for details on creating a tmpfs filesystem.
- Create the tmpfs device with disk init just like creating any other filesystem device, except that you are specifying the tmpfs filesystem you just created. For example, if you named and mounted it as "/mytmpfs":
1> use master
2> go
1> disk init name = "tempdb1_dev1",
2> physname = "/mytmpfs/tempdb",
3> vdevno = 3, size = 102400
4> go

This creates a 200MB device for tempdb on the /mytmpfs device.
- Use alter database to extend tempdb to the tmpfs device:
1> alter database tempdb
2> on tempdb1 = 200
3> go

- Modify your RUN_Server file to issue a UNIX touch command against tempdb on the tmpfs device before the call to the dataserver. This creates the file if it does not exist, as might happen if the operating system had been rebooted. Upon startup, the server can activate the device and rewrite tempdb. If the file entry was missing, the server would not be able to activate it and tempdb would not be available. For example:
RUN_SYBASE:
----------------------------------------------
#!/bin/sh
#
# Adaptive Server name: SYBASE
# Master device path: /devices/master.dev
# Error log path: /sybase/install/SYBASE.log
# Directory for shared memory files: /sybase
#
touch /mytmpfs/tempdb_dev1
/sybase/bin/dataserver -sSYBASE -d/devices/master.dev \
-e/sybase/install/SYBASE.log -M/sybase \
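As mentioned in the first step above, a startup script has to recreate the tmpfs mount after every reboot. A minimal sketch of such an rc script is shown below; the script name, mount point and size are assumptions and must be adapted to the local setup:

#!/bin/sh
# Hypothetical /etc/rc2.d/S90mytmpfs: recreate the tmpfs used for tempdb
[ -d /mytmpfs ] || mkdir -p /mytmpfs
/usr/sbin/mount -F tmpfs -o size=512m swap /mytmpfs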
February 3, 2008 | JakUB - Jakub's Universal Blog
A year ago, I started to work on a problem one of our customers had with the tmpfs file system. Since the problem has already been fixed in all relevant Solaris and OpenSolaris releases, I feel I can share some of the more interesting technical bits.

The problem, as described by the customer, was that the system would hang if an attempt was made to fill a tmpfs filesystem (e.g. /tmp). In other words, a mere:
dd if=/dev/zero of=/tmp/testfile
would slowly hang the system.
During the course of the investigation, we actually found two bugs that contributed to this hang, even though the system would usually be only extremely slow and non-responsive, not completely hung.
The first bug showed when one process held the majority of all available memory pages in the dirty state and wanted to allocate yet another page for itself. When there were no more available pages, this process got blocked waiting for the pageout thread to pageout the dirty pages to swap and make them available again. The pageout daemon walks the pages and when it finds a dirty page, it uses the respective vnode's putpage routine to do the actual pageout. In case of tmpfs, the putpage routine has a check that verifies that the tn_contents rwlock (i.e. the lock protecting the contents of the tmpnode to which the dirty page belongs) is not being held. If it is, putpage simply gives up this page and moves on to another one. Now, the problem was that when the dd process went to sleep, it held the tn_contents lock of /tmp/testfile from our example command. Moreover, almost every single page in the system belonged to this file and was dirty. The result? The pageout daemon could not do any forward progress as it had to give up paging out majority of pages due to the held tn_contents lock and the dd process could not unlock the tn_contents lock because it wanted another page.
This bug got fixed by modifying the wrtmp() routine, which is on the write(2) execution call path. The fix simply dropped the tn_contents lock before the thread in wrtmp() would get blocked waiting for a page and reacquired it later. Nevertheless, the problem didn't go away completely and we learned we only cured half of it (for the side effects of this cure, read on), but maybe the more serious half.
It turned out that a fix for large ISM segments on systems with ZFS introduced a change in the reservation of anonymous memory which was a regression for tmpfs and the second half of the required solution. After the addition of ZFS, databases which needed to create large shared mappings started to fail (i.e. were unable to create these large shared mappings) due to ZFS caching memory which would otherwise be necessary for the shared segment. This was fixed by inserting a call that reaped several different caches in front of the test in anon_resvmem() that checks the amount of reservable memory (i.e. availrmem) and fails if the anon_resvmem() request cannot be satisfied. The problem with this is that this fix made it much harder for a tmpfs allocation to fail. Additionally, the procedure which reaped the caches was pretty heavyweight and contained a loop, which could delay every anon_resvmem() call by as much as 60 seconds! What? Yes, the theoretical maximum delay per request was 60 seconds. In the lab, I was able to reproduce this and, using a clever dtrace script, I measured a maximum delay of 13 seconds, which is still pretty horrible.
I fixed this by selectively disabling the cache-reap for tmpfs completely, so that now, when there is no reservable memory, the dd command above will fail. Users can still guarantee tmpfs reservations by creating disk swap of sufficient size. Everything beyond the disk swap size is not guaranteed and the reservation may fail. Note that this behaviour is consistent with the documentation for swap.
As it turned out, the fix for the latter bug was sufficient because it prevented the former bug from occurring. But it was difficult to see the latter bug without first fixing the former one. Moreover, the customer didn't make use of the option to cap the tmpfs size with the "size" mount option, which would have also solved the problem.
Finally, let me go back to the side effects of dropping the tn_contents lock. Due to the implementation of wrtmp(), the act of increasing the file's size and creating the new portion of its content was no longer atomic as seen from the perspective of a process which writes to a tmpfs file and at the same time mmaps its end and tries to read it. Although documented and forbidden, such behaviour was a nuisance. So I slightly reordered wrtmp() and put back the fix for this last Tuesday.
As of now, I am still not completely done with tmpfs and will come back to this topic later, when there is a little more to add.
December 2008 | BigAdmin Homepage: http://www.bnsmb.de/
In Solaris /tmp is by default a memory based file system mounted on swap:
# df -k /tmp
Filesystem            kbytes    used   avail capacity  Mounted on
swap                 1961928     504 1961424     1%    /tmp

This has some advantages:
- access to /tmp is fast
- there is always a writable directory even if Solaris can not mount the disks in read/write mode or if Solaris is booted from a read-only NFS share
- you do not need to think about cleaning up /tmp after or before a reboot
- It's not necessary to create a filesystem for /tmp - just mount it and use it
On the other hand there are some things to take care of if using /tmp:
Because /tmp is mounted on swap you should not use it for files which should survive a reboot - use the disk-based directory for temporary files, /var/tmp, instead.
One very important point:
Every user can write to /tmp. And in the default configuration /tmp is mounted without a size limitation. This fact results in the possibility that every user can use the whole virtual memory of the machine (that is physical memory and swap) by simply filling up /tmp with garbage.
To avoid this situation you should mount /tmp with an upper limit for the size, e.g. in /etc/vfstab change the line

swap - /tmp tmpfs - yes -

to

swap - /tmp tmpfs - yes size=1024m

(replace 1024m with an appropriate value for the machine)
Unfortunately you can not change the size for /tmp while Solaris is running:
# lockfs /tmp
/tmp: Inappropriate ioctl for device
# mount -o remount,size=512m swap /tmp
mount: Operation not supported

Therefore you must reboot the machine to activate the change.
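After the reboot the new limit can be verified, for example:

# df -h /tmp
# mount -v | grep /tmp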
Because tmpfs is a "normal" file system in Solaris you can always add additional memory-based file systems, e.g. to create another tmpfs on the fly use:

[Mon Mar 17 21:53:19 root@sol9 /]
# mkdir /mytmp
[Mon Mar 17 22:05:44 root@sol9 /]
# mount -o size=100m -F tmpfs swap /mytmp
[Mon Mar 17 22:06:04 root@sol9 /]
# df -k /mytmp
Filesystem            kbytes    used   avail capacity  Mounted on
swap                  102400       0  102400     0%    /mytmp

To create this new filesystem every time the machine boots up simply add another line to the /etc/vfstab:
swap - /mytmp tmpfs - yes size=1024m

There are some restrictions for tmpfs filesystems:
- There is not really a device for a memory-based filesystem like /dev/dsk/c#t#d#s#.. for harddisks or /dev/lofi/# for lofi mounts. In particular, there is no raw device for memory-based filesystems.
- There are some restrictions in tmpfs (see tmpfs(7FS) )
- And you can only use tmpfs on these memory-based file systems; you cannot, for example, create a ufs or vxfs file system on them.
But because Solaris is a real operating system there is a solution for this problem also:
Instead of using tmpfs to create a memory-based file system, use ramdiskadm. ramdiskadm has been part of the Solaris OS since (at least) version 9.
ramdiskadm is part of the SUNWcsu package and therefore should be installed on every Solaris machine (x86 and SPARC, of course). ramdiskadm can be used to create real ramdisk devices which can be used like any other disk device, e.g.:
# create the ramdisk
[Mon Mar 17 22:15:03 root@sol9 /]
# ramdiskadm -a mydisk 40m
/dev/ramdisk/mydisk

# check the result
[Mon Mar 17 22:15:21 root@sol9 /]
# ls -l /dev/ramdisk/mydisk
lrwxrwxrwx   1 root  root  40 Mar 17 22:15 /dev/ramdisk/mydisk -> ../../devices/pseudo/ramdisk@1024:mydisk
[Mon Mar 17 22:16:04 root@sol9 /]
# ls -l /dev/rramdisk/mydisk
lrwxrwxrwx   1 root  root  44 Mar 17 22:15 /dev/rramdisk/mydisk -> ../../devices/pseudo/ramdisk@1024:mydisk,raw

# check the fstype
[Mon Mar 17 22:16:07 root@sol9 /]
# fstyp /dev/rramdisk/mydisk
unknown_fstyp (no matches)

# create a filesystem on the ramdisk
[Mon Mar 17 22:16:22 root@sol9 /]
# newfs /dev/rramdisk/mydisk
/dev/rramdisk/mydisk: Unable to find Media type. Proceeding with system determined parameters.
newfs: construct a new file system /dev/rramdisk/mydisk: (y/n)? y
/dev/rramdisk/mydisk: 81872 sectors in 136 cylinders of 1 tracks, 602 sectors
        40.0MB in 9 cyl groups (16 c/g, 4.70MB/g, 2240 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 9664, 19296, 28928, 38560, 48192, 57824, 67456, 77088,

# mount the ramdisk
[Mon Mar 17 22:16:44 root@sol9 /]
# mkdir /myramdisk
[Mon Mar 17 22:16:51 root@sol9 /]
# mount /dev/ramdisk/mydisk /myramdisk
[Mon Mar 17 22:17:01 root@sol9 /]
# df -k /myramdisk
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/ramdisk/mydisk    38255    1041   33389     4%    /myramdisk
[Mon Mar 17 22:17:06 root@sol9 /]

Be aware that these ramdisks are also gone after a reboot. If you need them to be permanent you should create an init script or an SMF service to recreate them while booting the machine.
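When such a ramdisk is no longer needed it can be unmounted and removed again; a short sketch using the names from the example above:

# umount /myramdisk
# ramdiskadm -d mydisk
# ramdiskadm            # without arguments lists the ramdisks that still exist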
For more detailed information about ramdiskadm please consult the man page of ramdiskadm(1m) and ramdisk(7d); The man page of ramdiskadm also describes how to give users other than root access to create and delete ramdisks and the man page for ramdisk explains how much memory can be used for ramdisks.
And, for the record, you can use a ramdisk created with ramdiskadm also for an SVM mirror. This can be useful if an application is mostly reading from the disk; in this case you can change the read policy for the mirror to first read from the ramdisk. But that's a story for another wiki entry.
Update 23.11.2008
A script to start and stop ramdisks in the Solaris OS can be found in this article:
A script to start and stop ramdisks in the Solaris OS
Update 06.12.2008
There's an interesting blog entry about ramdisks and swap:
Are Solaris RAM Disks swappable?
Oct 28, 2008, bnsmb says:
>> Could you elaborate on how capping /tmp with the "size=Xg" value in /etc/vfstab impacts memory usage?
>> If I've formatted a disk with a slice for swap, for say, 4G and then I cap /tmp with size=512m, what
>> happens to the other 3.5G of disk space allocated? Is it still usable? Thanks.
Yes, it is still usable as swap space for Solaris (for example to page out memory from running processes).
The point is :
If you do not limit the size for /tmp, /tmp can use the complete free memory. Example:
Suppose you have 4 GB real memory and 4 GB swap. If you mount /tmp without a size restriction, in this example /tmp could use at most 4 GB + 4 GB - <memory_used_by_solaris>.
And /tmp is writable by every user on the system! Even the user nobody can write to /tmp.
Example:
# real memory: = 4 GB
# prtconf | more
System Configuration:  Sun Microsystems  sun4u
Memory size: 4096 Megabytes

# swap space: = 4 GB
# swap -l
swapfile             dev  swaplo  blocks    free
/dev/md/dsk/d20     85,5      16 8094960 8094960

# free space in /tmp (= free memory of the machine)
# df -h /tmp
Filesystem             size   used  avail capacity  Mounted on
swap                   6.5G   120K   6.5G     1%    /tmp
In this example every user can create files that use up to 6.5 GB in /tmp and fill all free memory on the machine.
A couple more follow-ups to this:

I found this document: http://docs.sun.com/app/docs/doc/817-0404/6mg74vs9q?a=view which says that tmpfs:tmpfs_minfree can be set to ensure that tmpfs leaves a specific amount of swap space for the rest of the system. By default it's set to 256 pages, and the page size on my machines is 8192 bytes, so that's 2 MB assured to be free.
Setting this variable to a higher value will limit the maximum size of a tmpfs. For example, here are some settings on my test server along with the size of a default tmpfs:

# no tmpfs:tmpfs_minfree setting (default)
swap                 2.1G     0K    2.1G     0%    /mnt

# tmpfs:tmpfs_minfree=62500
swap                 1.6G     0K    1.6G     0%    /mnt

# tmpfs:tmpfs_minfree=125000
swap                 1.1G     0K    1.1G     0%    /mnt

# tmpfs:tmpfs_minfree=187500
swap                 657M     0K    657M     0%    /mnt
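In other words, the value is a number of pages, so the reserved amount is tmpfs_minfree multiplied by the page size. As a worked example, keeping roughly 1 GB of swap out of tmpfs's reach on a system with 8 KB pages (1073741824 / 8192 = 131072 pages) would mean putting the following in /etc/system:

set tmpfs:tmpfs_minfree=131072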
Additionally, I had the fortune to attend a Solaris 10 Boot Camp today during which I asked the presenter about limiting resources. I saw a slide that said memory sets are available in Solaris 10 Update 1, and the presenter said that swap sets are in the works. For now, she said it is possible to create a project in the global zone with RSS and VM limits and start a zone in that project. Supposedly the Sun Management Center - Solaris Container Manager eases the implementation of such projects for zones as well as other zone settings, but it's currently a for-pay product.
http://www.sun.com/software/products/container_mgr/