A ready queue or run queue is used in computer scheduling. Modern computers are capable of running many different programs or processes at the same time; however, each CPU can execute only one process at any given moment. Processes that are ready for the CPU are kept in a queue of "ready" processes. Other processes, which are waiting for some event such as the completion of an I/O request, are kept elsewhere and are not placed in the ready queue.
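As a rough illustration (not any particular kernel's implementation), a ready queue can be modeled as a FIFO of process descriptors: the scheduler dispatches the process at the head, and newly ready processes are appended at the tail. The ReadyQueue and Process names below are hypothetical.

    from collections import deque

    class Process:
        def __init__(self, pid, name):
            self.pid = pid          # process identifier
            self.name = name
            self.state = "ready"    # new, ready, running, waiting, terminated

    class ReadyQueue:
        """FIFO of processes that are ready for the CPU (illustrative sketch)."""
        def __init__(self):
            self._queue = deque()

        def enqueue(self, process):
            process.state = "ready"
            self._queue.append(process)

        def dispatch(self):
            # Hand the process at the head of the queue to the CPU.
            process = self._queue.popleft()
            process.state = "running"
            return process

    rq = ReadyQueue()
    rq.enqueue(Process(1, "editor"))
    rq.enqueue(Process(2, "compiler"))
    running = rq.dispatch()   # pid 1 runs; pid 2 keeps waiting in the ready queue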
Scheduling goals include minimizing wait time (the time from work becoming ready until the first point it begins execution) and minimizing latency or response time (the time from work becoming ready until it is finished, in the case of batch activity, [1] [2] [3] or until the system responds and hands the first output to the user, in the case of interactive activity). [4]
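As a hedged illustration of these metrics, the small calculation below assumes three batch jobs that all become ready at time 0 and run to completion in arrival order (first-come, first-served); the burst lengths are invented for the example.

    # Hypothetical CPU bursts (ms) for jobs A, B, C, all ready at t = 0,
    # scheduled first-come, first-served in that order.
    bursts = {"A": 24, "B": 3, "C": 3}

    start = 0
    for job, burst in bursts.items():
        wait = start            # time from becoming ready until first execution
        finish = start + burst  # for batch work, response time = completion time
        print(f"{job}: wait = {wait} ms, response (turnaround) = {finish} ms")
        start = finish

    # A waits 0 ms, B waits 24 ms, C waits 27 ms;
    # average wait = (0 + 24 + 27) / 3 = 17 ms.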
When a process is created (initialized or installed), the operating system creates a corresponding process control block, which specifies and tracks the process state (i.e. new, ready, running, waiting or terminated). Since it is used to track process information, the PCB plays a key role in context switching. [1]
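A minimal sketch of a process control block and its role in a context switch, assuming a simplified set of fields (the field and function names here are illustrative, not those of any specific operating system):

    from dataclasses import dataclass, field

    @dataclass
    class ProcessControlBlock:
        pid: int                      # process identifier
        state: str = "new"            # new, ready, running, waiting, terminated
        program_counter: int = 0      # where to resume execution
        registers: dict = field(default_factory=dict)   # saved CPU registers
        open_files: list = field(default_factory=list)  # I/O bookkeeping

    def context_switch(cpu, old_pcb, new_pcb):
        # Save the state of the outgoing process into its PCB ...
        old_pcb.program_counter = cpu["pc"]
        old_pcb.registers = dict(cpu["regs"])
        old_pcb.state = "ready"
        # ... then restore the incoming process from its PCB.
        cpu["pc"] = new_pcb.program_counter
        cpu["regs"] = dict(new_pcb.registers)
        new_pcb.state = "running"

    cpu = {"pc": 120, "regs": {"r0": 7}}
    a = ProcessControlBlock(pid=1, state="running")
    b = ProcessControlBlock(pid=2, state="ready", program_counter=48)
    context_switch(cpu, a, b)   # a's context is saved; b now owns the CPU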
Some second-level CPU caches run slower than the processor core. When the processor needs to access external memory, it places the address of the requested information on the address bus and must then wait for the answer, which may come back tens if not hundreds of cycles later. Each of the cycles spent waiting is called a wait state.
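As a rough back-of-the-envelope illustration (the cycle counts and rates below are assumed, not measured), the cost of those wait states on average access time can be estimated like this:

    # Assumed figures for illustration only.
    core_cycle_ns    = 0.5    # 2 GHz core: one cycle every 0.5 ns
    cache_hit_cycles = 4      # access satisfied by the cache
    memory_cycles    = 200    # access that goes to external memory (wait states included)
    hit_rate         = 0.95

    avg_cycles = hit_rate * cache_hit_cycles + (1 - hit_rate) * memory_cycles
    print(f"average access = {avg_cycles:.1f} cycles "
          f"({avg_cycles * core_cycle_ns:.1f} ns)")
    # -> average access = 13.8 cycles (6.9 ns): even a small miss rate adds
    #    many wait-state cycles per access on average.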
During the portion of time required for CPU cycles, the process is executing and occupies the CPU. During the time required for I/O cycles, the process does not use the processor; instead, it is either waiting to perform input/output or is actually performing it. An example of this is reading from or writing to a file on disk.
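One hedged way to picture this is to describe a process as an alternating sequence of CPU and I/O bursts; the durations below are made up for the example.

    # A process alternates between CPU bursts and I/O bursts.
    # ("cpu", n) means n ms of computation; ("io", n) means n ms spent waiting
    # for or performing I/O, during which the CPU is free for other work.
    process_bursts = [("cpu", 5), ("io", 30), ("cpu", 4), ("io", 25), ("cpu", 2)]

    cpu_time = sum(t for kind, t in process_bursts if kind == "cpu")
    io_time  = sum(t for kind, t in process_bursts if kind == "io")
    print(f"CPU time: {cpu_time} ms, I/O time: {io_time} ms")
    # An I/O-bound process like this one spends most of its lifetime off the
    # processor, e.g. reading from or writing to a file on disk.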
Processes are also removed from the run queue when they ask to sleep, are waiting on a resource to become available, or have been terminated. In the Linux operating system (prior to kernel 2.6.23), each CPU in the system is given a run queue, which maintains both an active and expired array of processes.
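A very loose sketch of the active/expired idea (not the actual kernel code, and with the per-priority arrays and bitmaps omitted): each CPU's run queue keeps two lists, a task that exhausts its timeslice moves to the expired list, and when the active list empties the two are swapped.

    from collections import deque

    class RunQueue:
        """Per-CPU run queue with an active and an expired list (loose sketch)."""
        def __init__(self):
            self.active = deque()
            self.expired = deque()

        def wake(self, task):
            # Tasks that sleep, block on a resource, or terminate are simply
            # never re-queued, so they drop off the run queue.
            self.active.append(task)

        def timeslice_expired(self, task):
            # The task has used up its timeslice; park it until the swap.
            self.expired.append(task)

        def pick_next(self):
            if not self.active:
                # Active list is empty: swap the two lists and keep going.
                self.active, self.expired = self.expired, self.active
            return self.active.popleft() if self.active else None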
The CPU-bound process will get and hold the CPU. During this time, all the other processes will finish their I/O and will move into the ready queue, waiting for the CPU. While the processes wait in the ready queue, the I/O devices are idle. Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O device.
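A small, assumption-laden simulation of this "convoy effect" under first-come, first-served scheduling: while the long CPU-bound burst runs, the short I/O-bound jobs sit in the ready queue and the I/O devices stay idle. The burst lengths are invented.

    # FCFS: one CPU-bound job followed by two I/O-bound jobs, all ready at t = 0.
    jobs = [("cpu_bound", 100), ("io_bound_1", 2), ("io_bound_2", 2)]

    t = 0
    for name, cpu_burst in jobs:
        print(f"{name}: waits {t} ms in the ready queue, runs {cpu_burst} ms")
        t += cpu_burst
    # The two short jobs each wait behind the 100 ms burst before they can even
    # issue their next I/O request, so the I/O devices sit idle in the meantime.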
A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely. A steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU. [1]
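One common remedy for this kind of starvation is aging: gradually raising the priority of processes that have been waiting a long time. The sketch below is a generic illustration, not any specific system's policy; the priority values, the aging step, and the function name are assumptions.

    def pick_next(ready, aging_step=1):
        """Pick the highest-priority process (lowest number) and age the rest.

        ready: list of [priority, name] entries, mutated in place. The aging
        step keeps a steady stream of high-priority arrivals from starving
        low-priority work indefinitely.
        """
        ready.sort(key=lambda entry: entry[0])
        chosen = ready.pop(0)
        for entry in ready:
            entry[0] -= aging_step   # waiting processes slowly gain priority
        return chosen

    ready = [[1, "high_a"], [10, "low"], [1, "high_b"]]
    print(pick_next(ready))   # -> [1, 'high_a']; 'low' is aged to priority 9
    print(pick_next(ready))   # -> [0, 'high_b']; 'low' is aged to priority 8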