1.5.4 Mass-Storage Management

As we have already seen, the computer system must provide secondary storage to back up main memory. Most modern computer systems use HDDs and NVM devices as the principal on-line storage media for both programs and data. Most programs-including compilers, web browsers, word processors, and games-are stored on these devices until loaded into memory. The programs then use the devices as both the source and the destination of their processing. Hence, the proper management of secondary storage is of central importance to a computer system. The operating system is responsible for the following activities in connection with secondary storage management:

- Mounting and unmounting

- Free-space management

- Storage allocation

- Disk scheduling

- Partitioning

- Protection

Because secondary storage is used frequently and extensively, it must be used efficiently. The entire speed of operation of a computer may hinge on the speeds of the secondary storage subsystem and the algorithms that manipulate that subsystem.
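
To make one of these activities concrete, free-space management is commonly done with a bit map that records, for each disk block, whether it is free or in use. The sketch below is a simplified user-space illustration only; the block count and function names are made up, and a real file system would keep the map on the device itself and update it consistently.

```c
#include <stdio.h>
#include <string.h>

#define NBLOCKS 64                           /* hypothetical number of disk blocks */
static unsigned char bitmap[NBLOCKS / 8];    /* one bit per block: 1 = in use, 0 = free */

/* Mark a block as allocated. */
static void set_used(int b) { bitmap[b / 8] |=  (1 << (b % 8)); }

/* Mark a block as free. */
static void set_free(int b) { bitmap[b / 8] &= ~(1 << (b % 8)); }

/* Find and claim the first free block, or return -1 if none is left. */
static int alloc_block(void)
{
    for (int b = 0; b < NBLOCKS; b++)
        if (!(bitmap[b / 8] & (1 << (b % 8)))) {
            set_used(b);
            return b;
        }
    return -1;
}

int main(void)
{
    memset(bitmap, 0, sizeof bitmap);        /* all blocks start out free */
    int b1 = alloc_block();                  /* allocate two blocks */
    int b2 = alloc_block();
    printf("allocated blocks %d and %d\n", b1, b2);
    set_free(b1);                            /* release the first one again */
    printf("freed block %d\n", b1);
    return 0;
}
```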

At the same time, there are many uses for storage that is slower and lower in cost (and sometimes higher in capacity) than secondary storage. Backups of disk data, storage of seldom-used data, and long-term archival storage are some examples. Magnetic tape drives and their tapes and CD, DVD, and Blu-ray drives and platters are typical tertiary storage devices.

Tertiary storage is not crucial to system performance, but it still must be managed. Some operating systems take on this task, while others leave tertiary-storage management to application programs. Some of the functions that operating systems can provide include mounting and unmounting media in devices, allocating and freeing the devices for exclusive use by processes, and migrating data from secondary to tertiary storage.

Techniques for secondary storage and tertiary storage management are discussed in Chapter 11.

1.5.3 File-System Management

To make the computer system convenient for users, the operating system provides a uniform, logical view of information storage. The operating system abstracts from the physical properties of its storage devices to define a logical storage unit, the file. The operating system maps files onto physical media and accesses these files via the storage devices.

File management is one of the most visible components of an operating system. Computers can store information on several different types of physical media. Secondary storage is the most common, but tertiary storage is also possible. Each of these media has its own characteristics and physical organization. Most are controlled by a device, such as a disk drive, that also has its own unique characteristics. These properties include access speed, capacity, data-transfer rate, and access method (sequential or random).

A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data. Data files may be numeric, alphabetic, alphanumeric, or binary. Files may be free-form (for example, text files), or they may be formatted rigidly (for example, fixed fields such as an mp3 music file). Clearly, the concept of a file is an extremely general one.

The operating system implements the abstract concept of a file by managing mass storage media and the devices that control them. In addition, files are normally organized into directories to make them easier to use. Finally, when multiple users have access to files, it may be desirable to control which user may access a file and how that user may access it (for example, read, write, append).

The operating system is responsible for the following activities in connection with file management:

- Creating and deleting files

- Creating and deleting directories to organize files

- Supporting primitives for manipulating files and directories (a sketch follows this list)

- Mapping files onto mass storage

- Backing up files on stable (nonvolatile) storage media
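
Several of these activities are visible to user programs through system calls. The following sketch uses POSIX file and directory primitives that most UNIX-like systems provide; the file and directory names are invented for the example.

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
    /* Create a directory to organize files. */
    if (mkdir("notes", 0755) == -1)
        perror("mkdir");

    /* Create a file inside it and append some data. */
    int fd = open("notes/todo.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    write(fd, "read chapter 13\n", 16);
    close(fd);

    /* Delete the file, then the (now empty) directory. */
    unlink("notes/todo.txt");
    rmdir("notes");
    return 0;
}
```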

File-management techniques are discussed in Chapter 13, Chapter 14, and Chapter 15.

1.5.2 Memory Management

As discussed in Section 1.2.2, the main memory is central to the operation of a modern computer system. Main memory is a large array of bytes, ranging in size from hundreds of thousands to billions. Each byte has its own address. Main memory is a repository of quickly accessible data shared by the CPU and I/O devices. The CPU reads instructions from main memory during the instruction-fetch cycle (on a von Neumann architecture). As noted earlier, the main memory is generally the only large storage device that the CPU is able to address and access directly. For example, for the CPU to process data from disk, those data must first be transferred to main memory by CPU-generated I/O calls. In the same way, instructions must be in memory for the CPU to execute them.

For a program to be executed, it must be mapped to absolute addresses and loaded into memory. As the program executes, it accesses program instructions and data from memory by generating these absolute addresses. Eventually, the program terminates, its memory space is declared available, and the next program can be loaded and executed.
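
The sketch below makes this concrete from the process's point of view: every reference to code or data is an address generated by the running program. On most modern systems the values printed are virtual addresses that the hardware translates to physical memory, a topic taken up in the chapters cited at the end of this section. The segment labels in the comments are conventional names, not guarantees of any particular layout.

```c
#include <stdio.h>
#include <stdlib.h>

int global_value = 42;              /* typically placed in the data segment */

void show(void) { }                 /* typically placed in the text (code) segment */

int main(void)
{
    int local_value = 7;            /* lives on the stack */
    int *heap_value = malloc(sizeof *heap_value);   /* lives on the heap */

    /* Each access below ultimately becomes a memory address issued by the CPU. */
    printf("code  : %p\n", (void *)show);
    printf("data  : %p\n", (void *)&global_value);
    printf("heap  : %p\n", (void *)heap_value);
    printf("stack : %p\n", (void *)&local_value);

    free(heap_value);
    return 0;
}
```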

To improve both the utilization of the CPU and the speed of the computer's response to its users, general-purpose computers must keep several programs in memory, creating a need for memory management. Many different memory-management schemes are used. These schemes reflect various approaches, and the effectiveness of any given algorithm depends on the situation. In selecting a memory-management scheme for a specific system, we must take into account many factors-especially the hardware design of the system. Each algorithm requires its own hardware support.

The operating system is responsible for the following activities in connection with memory management:

- Keeping track of which parts of memory are currently being used and which process is using them

- Allocating and deallocating memory space as needed

- Deciding which processes (or parts of processes) and data to move into and out of memory

Memory-management techniques are discussed in Chapter 9 and Chapter 10.

1.5.1 Process Management

A program can do nothing unless its instructions are executed by a CPU. A program in execution, as mentioned, is a process. A program such as a compiler is a process, and a word-processing program being run by an individual user on a PC is a process. Similarly, a social media app on a mobile device is a process. For now, you can consider a process to be an instance of a program in execution, but later you will see that the concept is more general. As described in Chapter 3, it is possible to provide system calls that allow processes to create subprocesses to execute concurrently.
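
On UNIX-like systems, for example, the fork() system call creates such a subprocess, and parent and child then execute concurrently. A minimal sketch (the output formatting is arbitrary):

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();             /* create a subprocess */

    if (pid == 0) {
        /* Child: a separate process running the same program image. */
        printf("child  pid=%d\n", getpid());
    } else if (pid > 0) {
        /* Parent: continues concurrently with the child. */
        printf("parent pid=%d, child=%d\n", getpid(), pid);
        wait(NULL);                 /* reclaim the child when it terminates */
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}
```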

A process needs certain resources-including CPU time, memory, files, and I/O devices-to accomplish its task. These resources are typically allocated to the process while it is running. In addition to the various physical and logical resources that a process obtains when it is created, various initialization data (input) may be passed along. For example, consider a process running a web browser whose function is to display the contents of a web page on a screen. The process will be given the URL as an input and will execute the appropriate instructions and system calls to obtain and display the desired information on the screen. When the process terminates, the operating system will reclaim any reusable resources.

We emphasize that a program by itself is not a process. A program is a passive entity, like the contents of a file stored on disk, whereas a process is an active entity. A single-threaded process has one program counter specifying the next instruction to execute. (Threads are covered in Chapter 4.) The execution of such a process must be sequential. The CPU executes one instruction of the process after another, until the process completes. Further, at any time, one instruction at most is executed on behalf of the process. Thus, although two processes may be associated with the same program, they are nevertheless considered two separate execution sequences. A multithreaded process has multiple program counters, each pointing to the next instruction to execute for a given thread.
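
As a small illustration of the multithreaded case, the POSIX threads sketch below creates two threads within one process; each thread advances through worker() with its own program counter and stack. The thread count and messages are arbitrary choices for the example.

```c
#include <stdio.h>
#include <pthread.h>

/* Each thread runs this function with its own program counter and stack. */
static void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld executing\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);

    pthread_join(t1, NULL);         /* wait for both threads to finish */
    pthread_join(t2, NULL);

    return 0;
}
```

On most systems this is compiled with the -pthread option.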

A process is the unit of work in a system. A system consists of a collection of processes, some of which are operating-system processes (those that execute system code) and the rest of which are user processes (those that execute user code). All these processes can potentially execute concurrently-by multiplexing on a single CPU core-or in parallel across multiple CPU cores.

The operating system is responsible for the following activities in connection with process management:

- Creating and deleting both user and system processes

- Scheduling processes and threads on the CPUs

- Suspending and resuming processes

- Providing mechanisms for process synchronization

- Providing mechanisms for process communication

We discuss process-management techniques in Chapter 3 through Chapter 7.


1.5 Resource Management

As we have seen, an operating system is a resource manager. The system's CPU, memory space, file-storage space, and I/O devices are among the resources that the operating system must manage.

LINUX TIMERS

On Linux systems, the kernel configuration parameter HZ specifies the frequency of timer interrupts. An HZ value of 250 means that the timer generates 250 interrupts per second, or one interrupt every 4 milliseconds. The value of HZ depends upon how the kernel is configured, as well as the machine type and architecture on which it is running. A related kernel variable is jiffies, which represents the number of timer interrupts that have occurred since the system was booted. A programming project in Chapter 2 further explores timing in the Linux kernel.
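
As a sketch of how these values can be inspected, the kernel module below prints HZ and jiffies when it is loaded and unloaded. It is written in the spirit of the Chapter 2 project but is not that project's code; a kernel build tree and the usual module Makefile are assumed.

```c
/* hz.c - print HZ and jiffies when the module is loaded and unloaded. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/jiffies.h>   /* jiffies */
#include <linux/param.h>     /* HZ */

static int __init hz_init(void)
{
    printk(KERN_INFO "HZ = %d, jiffies = %lu\n", HZ, jiffies);
    return 0;
}

static void __exit hz_exit(void)
{
    printk(KERN_INFO "jiffies at unload = %lu\n", jiffies);
}

module_init(hz_init);
module_exit(hz_exit);
MODULE_LICENSE("GPL");
```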

1.4.3 Timer

We must ensure that the operating system maintains control over the CPU. We cannot allow a user program to get stuck in an infinite loop or to fail to call system services and never return control to the operating system. To accomplish this goal, we can use a timer. A timer can be set to interrupt the computer after a specified period. The period may be fixed (for example, 1/60 second) or variable (for example, from 1 millisecond to 1 second). A variable timer is generally implemented by a fixed-rate clock and a counter. The operating system sets the counter. Every time the clock ticks, the counter is decremented. When the counter reaches 0, an interrupt occurs. For instance, a 10-bit counter with a 1-millisecond clock allows interrupts at intervals from 1 millisecond to 1,024 milliseconds, in steps of 1 millisecond.

Before turning over control to the user, the operating system ensures that the timer is set to interrupt. If the timer interrupts, control transfers automatically to the operating system, which may treat the interrupt as a fatal error or may give the program more time. Clearly, instructions that modify the content of the timer are privileged.
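
The same clock-plus-counter idea can be observed from user space with an interval timer: a process asks the operating system for a signal after a specified period, and the kernel's timer delivers it. The sketch below uses the POSIX setitimer() call as a user-level analogue of the mechanism described above, not as the kernel's own implementation; the 250-millisecond period and the four-tick limit are arbitrary choices.

```c
#include <stdio.h>
#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;

/* Invoked each time the interval timer expires and SIGALRM is delivered. */
static void on_alarm(int sig)
{
    (void)sig;
    ticks++;
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGALRM, &sa, NULL);

    /* Request an interrupt (signal) every 250 milliseconds. */
    struct itimerval it;
    it.it_interval.tv_sec = 0;
    it.it_interval.tv_usec = 250000;
    it.it_value.tv_sec = 0;
    it.it_value.tv_usec = 250000;
    setitimer(ITIMER_REAL, &it, NULL);

    while (ticks < 4)
        pause();                    /* block until the next signal arrives */

    printf("timer fired %d times\n", (int)ticks);
    return 0;
}
```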