You need to monitor disk use carefully to prevent running out of disk space. You can make better use of the disk space you have, or recover used space, in the ways described in the following sections. You might also consider adding hard disks to your system to increase the amount of disk space available.
You can use the df(1M) command to monitor filesystem use. In general, filesystems should have at least 10 to 15 percent of their capacity available. If available space falls below 10 percent, filesystem fragmentation increases and performance degrades. Note that on an sfs (secure) or ufs filesystem, if the available space falls below 10 percent of capacity, non-root users cannot write to the filesystem.
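As a rough sketch (not part of the manual), the df report can be filtered to flag filesystems that have crossed the 10 percent threshold. This assumes a ``df -k''-style report with the percentage of capacity used in the fifth field and the mount point in the last field; adjust the field numbers for your df variant:

```shell
# Print filesystems that are more than 90 percent full.
# Assumes "df -k" output with a capacity column such as "95%"
# in field 5 and the mount point in the last field.
df -k | awk 'NR > 1 && $5 + 0 > 90 { print $NF " is " $5 " full" }'
```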
The default system configuration is set up so the filesystem blocks are allocated in an optimum way for most environments. See ``Managing filesystem types'' for more information on filesystem allocation.
You can also control filesystem space by balancing the load between filesystems. To do this, user directories often need to be moved. It is best to group users with common interests in the same filesystem.
To balance filesystem space:
Example: Move directory trees userx and usery from filesystem fs1 to fs2 where there is more space available.
cd /fs1
find userx usery -depth -print | cpio -pdm /fs2
/usr/sbin/usermod -d /fs2/userx userx
/usr/sbin/usermod -d /fs2/usery usery
rm -rf /fs1/userx /fs1/usery
Very large directories are inefficient and can degrade performance. If a directory grows beyond 10K (twenty 512-byte blocks, or about 600 entries of average name length), directory searches can cause performance problems. With larger block sizes, big directories are less of a problem, but they should still be watched carefully. The find command can locate large directories:
find / -type d -size +20 -print
For most filesystem types, removing files from a directory does not make that directory smaller. When a file is removed from a directory, its space is left in the directory and is available for new files added to the directory.
For example, when a file is removed from a directory in an s5 filesystem, the inode (file header) number is cleared. This leaves an unused slot that can be reused; over time the number of empty slots might become large. For example, if you have a directory on an s5 filesystem with 100 files in it and you remove the first 99 files, the directory still contains 99 empty slots, at 16 bytes per slot, preceding the active slot. Unless a directory is reorganized on the disk, it will retain the largest size it has ever achieved.
Note that some filesystem types, such as ufs and sfs, support dynamic shrinking of a directory when new files are created in it. However, because free directory data blocks are not coalesced and a directory shrinks only back to the block containing the last useful file entry, the same problem (retention of the largest-ever size) remains if the last useful file entry happens to fall in a later data block.
You can reduce directory size by locating inactive files, backing them up, and then deleting them.
To locate and delete files:
find / -mtime +90 -atime +90 -print > files
where files contains the names of files neither written to nor accessed within a specified time period, here 90 days (``+90'').
Before you reorganize a directory, use the ``Locating and deleting inactive files'' procedure to remove files that are no longer useful.
To reorganize a single directory:
Example:
mv /home/bob /home/obob
Example:
mkdir /home/bob
Example:
cd /home/obob
find . -print | cpio -plm ../bob
Example:
cd ..
rm -rf obob
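The mv/mkdir/cpio/rm steps above might be combined into one sketch. The reorganize_dir helper and the .old naming are illustrative additions; try it on scratch data first:

```shell
#!/bin/sh
# reorganize_dir DIR: rebuild DIR so the directory file shrinks to
# fit its current entries, following the steps above.
reorganize_dir() {
    dir=$1
    old="${dir}.old"
    mv "$dir" "$old"
    mkdir "$dir"
    # Link the files into the fresh directory, then remove the old one.
    ( cd "$old" && find . -print | cpio -plm "../$(basename "$dir")" 2>/dev/null )
    rm -rf "$old"
}

# Example corresponding to the steps above:
# reorganize_dir /home/bob
```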
If you install an application such as a database program that creates very large files, you may need to increase the maximum file size that the system can handle.
The maximum file size for the system is determined by the parameters SFSZLIM and HFSZLIM. These parameters are described in ``Tunable parameters''.
To increase the maximum file size:
Make the two values identical, unless you have a good reason to do otherwise. HFSZLIM must not be less than SFSZLIM.
Example: To change the maximum file size to 10MB (10,485,760 bytes), change the values of HFSZLIM and SFSZLIM to 0xA00000 (the 0x denotes hexadecimal).
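As a quick check of the hexadecimal arithmetic (not part of the tuning procedure), a POSIX shell can convert the value:

```shell
# 0xA00000 hexadecimal = 10 x 1024 x 1024 = 10,485,760 bytes (10MB).
printf '%d\n' 0xA00000
```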
A file consists of multiple disk blocks, which may or may not be contiguous. Files that consist of contiguous disk blocks can be accessed more efficiently than those that aren't. A heavily used filesystem composed of noncontiguous disk blocks might produce performance problems. You can make your filesystem more efficient by rearranging the files to make the constituent blocks contiguous, which also has the effect of shrinking your directories. You cannot reorganize the root filesystem.
The following sections describe two methods for improving performance by reorganizing files: one for filesystems of any type except s5, and one for s5 filesystems.
To reorganize any type of filesystem except an s5 filesystem:
eval `mkfs -F file_sys_type -m device`
Example:
eval `mkfs -F s5 -m /dev/dsk/c0b0t0d0sc`
To reorganize an s5 filesystem:
The first argument is the name of the filesystem you are reorganizing. It should be a character device. The second argument is the name of the spare disk slice.
Example:
/usr/sbin/dcopy -F s5 /dev/rdsk/c0b0t0d0sc /dev/dsk/c0b0t0d0s4
The choice of filesystem type can affect the performance of your system. The default filesystem type provided during installation is the VERITAS filesystem (vxfs) with a logical block size of 1K (1024 bytes) for filesystems up to 8GB. For most applications, this should provide the best balance of performance and reliability because vxfs offers speedy system boot and shutdown and fast recovery from system outages such as power failures. However, some applications may perform better using other filesystem types. For detailed information about the vxfs filesystem type, see ``The vxfs filesystem type''.
If you want to change the filesystem type of an existing filesystem, the procedure is the same as for reorganizing a filesystem: back up the filesystem and then remake it.
Depending on the average size of the files, you might also want to change either the logical block size or the filesystem type of the filesystem. vxfs uses logical block sizes of 2K (2048 bytes), 4K (4096 bytes), and 8K (8192 bytes), in addition to the default size of 1024-byte blocks. Other filesystem types that can be selected include: s5, sfs and ufs.
There are three logical block sizes for s5 filesystems: 512 bytes, 1K (1024 bytes), and 2K (2048 bytes). The ufs and sfs filesystems also have three logical block sizes: 2K (2048 bytes), 4K (4096 bytes), and 8K (8192 bytes). Each has its advantages and disadvantages in terms of performance.
vxfs allocates storage in extents that are collections of one or more blocks, so there are no fragments with vxfs. Because vxfs does allocation and I/O in multiple-block extents, keeping the logical block size as small as possible increases performance and reduces wasted space for most workloads. For the most efficient space utilization, best performance, and least fragmentation, use the smallest block size available on the system. The smallest block size available is 1K, which is the default block size for vxfs filesystems created on the system.
For a vxfs filesystem, select a logical block size of 1K, 2K, 4K, or 8K bytes; the default is 1024-byte blocks for a filesystem smaller than 8GB.
Generally, you will get the best possible performance (system throughput) from sfs, s5, and ufs filesystems if the logical block size is the same as the page size. The system kernel uses the logical block size when reading and writing files. For example, if the logical block size of the filesystem is 4K, whenever I/O is done between a file and memory, 4K chunks of the file are read into or out of memory. The ufs and sfs filesystems provide the option to specify fragment size, too; the s5 filesystem does not provide this feature.
A large logical block size improves disk I/O performance by reducing seek time, and also decreases CPU I/O overhead. On the other hand, if the logical block size is too large, then disk space is wasted. The space is lost because even if only a portion of a block is needed the entire block is allocated. For example, if files are stored in 1K (1024 bytes) logical blocks, then a 24-byte file wastes 1000 bytes. If the same 24-byte file is stored on a filesystem with a 2K (2048 bytes) logical block size, then 2024 bytes are wasted. However, if most files on the filesystem are very large this waste is reduced considerably.
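The waste figures above follow directly from the block size; as a sketch in shell:

```shell
# A file smaller than one logical block still occupies a whole block,
# so the waste is block_size - file_size.
file_size=24
for block_size in 1024 2048
do
    echo "$((block_size - file_size)) bytes wasted with ${block_size}-byte blocks"
done
```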
For a filesystem with mostly small files, the small logical block sizes (512 byte and 1K) available for the s5 filesystem have the advantage of less wasted space on disk. However, CPU overhead might be increased for files larger than the block size. Similarly, for sfs and ufs filesystems, when there are mostly small files, small fragment sizes have the advantage of less wasted space on disk.
The sar command with the -u option can help determine if large I/O transfers are slowing the system down. See ``Checking CPU use with sar -u''.
For an sfs or ufs filesystem, select a 2K, 4K, or 8K block size. The 4K block size provides distinctly better performance than the 2K or 8K block size on a machine with a 4K page size.
For an sfs or ufs filesystem, you can choose a fragment size, also. This size can be any power of two between 512 and the block size. The number of fragments per logical block must not be larger than 8. Using fragments is not worthwhile on an sfs or ufs 2K (block size) filesystem because the amount of space saved is less than the 10 percent that would be reserved to prevent excessive fragmentation.
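The fragment-size constraints above (a power of two from 512 bytes up to the block size, with at most 8 fragments per block) can be enumerated as a sketch; mkfs enforces the real limits:

```shell
# List the fragment sizes valid for an 8K logical block: powers of
# two between 512 and the block size, with no more than 8 fragments
# per block.
block_size=8192
frag=512
while [ "$frag" -le "$block_size" ]
do
    if [ $((block_size / frag)) -le 8 ]
    then
        echo "$frag"
    fi
    frag=$((frag * 2))
done
```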
For an s5 filesystem, the default is 1024-byte blocks. You can select a 512-byte, 1K, or 2K block size. A 2K block size provides the best performance for an s5 filesystem on a machine with a page size equal to or greater than 2K. There are no fragments defined for an s5 filesystem.