Noop scheduler

[Figure: The location of I/O schedulers in a simplified structure of the Linux kernel.]

The NOOP scheduler is the simplest I/O scheduler for the Linux kernel. It was developed by Jens Axboe.

Overview

The NOOP scheduler inserts all incoming I/O requests into a simple FIFO queue and implements request merging. The scheduler is useful when it has been determined that the host should not attempt to re-order requests based on the sector numbers they contain; in other words, it assumes that the host cannot productively re-order requests itself.
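
As an illustration of the behavior just described, the following is a minimal sketch in Python of a FIFO request queue that attempts simple front and back merges of contiguous requests before falling back to plain insertion. All names are hypothetical and the model does not mirror the kernel's actual C data structures.

    from collections import deque

    class NoopLikeQueue:
        """Toy model of a FIFO request queue with simple merging.

        Requests are (start_sector, sector_count) tuples.
        """

        def __init__(self):
            self.queue = deque()

        def add_request(self, start, count):
            # Try to merge the new request with one that is contiguous on disk.
            for i, (s, c) in enumerate(self.queue):
                if s + c == start:             # back merge: new request follows old
                    self.queue[i] = (s, c + count)
                    return
                if start + count == s:         # front merge: new request precedes old
                    self.queue[i] = (start, c + count)
                    return
            self.queue.append((start, count))  # otherwise, plain FIFO insertion

        def dispatch(self):
            # Requests leave the queue strictly in arrival order;
            # no sorting by sector number is ever attempted.
            return self.queue.popleft() if self.queue else None

    # Example: the second request is merged into the first; FIFO order is kept.
    q = NoopLikeQueue()
    q.add_request(100, 8)    # sectors 100-107
    q.add_request(108, 8)    # back-merged into (100, 16)
    q.add_request(50, 4)     # non-contiguous, queued behind
    print(q.dispatch())      # (100, 16)
    print(q.dispatch())      # (50, 4)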

There are generally three situations in which this behavior is desirable:

  • If I/O scheduling will be handled at a lower layer of the I/O stack. Examples of lower layers that might handle the scheduling include block devices, intelligent RAID controllers, Network Attached Storage, or an externally attached controller such as a storage subsystem accessed through a switched Storage Area Network.[1] Since I/O requests are potentially rescheduled at the lower level, re-sequencing I/O operations at the host level spends host CPU time on work that will simply be undone at the lower level, increasing latency and decreasing throughput for no productive reason.
  • Because accurate details of sector position are hidden from the host system. An example would be a RAID controller that performs no scheduling on its own. Even though the host has the ability to re-order requests and the RAID controller does not, the host system lacks the visibility to accurately re-order the requests to lower seek time. Since the host has no way of judging whether one sequence is better than another, it cannot restructure the active queue optimally and should, therefore, pass it on to the device that is (theoretically) more aware of such details.
  • Because read/write head movement doesn't impact application performance enough to justify the reordering overhead. This is usually the case with non-rotational media such as flash drives or solid-state drives (SSDs).

However, NOOP is not necessarily the preferred I/O scheduler for the scenarios above. As is typical of performance tuning, guidance must be based on observed workload patterns, which undermines one's ability to create simplistic rules of thumb. If there is contention for available I/O bandwidth from other applications, other schedulers may still deliver better performance by carving up that bandwidth more intelligently for the applications deemed most important. For example, running an LDAP directory server may benefit from deadline's read preference and latency guarantees, while a user with a desktop system running many different applications may want access to CFQ's tunables or its ability to prioritize bandwidth for particular applications over others (ionice).
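
For context, the active scheduler for a given device can be inspected and changed through sysfs. The sketch below uses Python with hypothetical helper names; the device name "sda" is only an example, and writing the file requires root privileges.

    from pathlib import Path

    def scheduler_file(device="sda"):
        # Per-device scheduler attribute exposed by the block layer.
        return Path(f"/sys/block/{device}/queue/scheduler")

    def available_schedulers(device="sda"):
        # The file reads like "noop deadline [cfq]"; the bracketed
        # entry is the scheduler currently in use.
        return scheduler_file(device).read_text().split()

    def set_scheduler(name, device="sda"):
        offered = {s.strip("[]") for s in available_schedulers(device)}
        if name not in offered:
            raise ValueError(f"{name!r} is not offered by this kernel for {device}")
        scheduler_file(device).write_text(name + "\n")

    print(available_schedulers())   # e.g. ['noop', 'deadline', '[cfq]']
    # set_scheduler("noop")         # requires root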

If there is no contention between applications, there is little to no benefit in selecting a scheduler for the three scenarios listed above, because there is no opportunity to deprioritize one workload's operations in a way that makes additional capacity available to another. In other words, if the I/O paths are not saturated and the requests from all workloads do not cause unreasonable movement of drive heads (which the operating system is aware of), prioritizing one workload may simply waste CPU time on I/O scheduling instead of providing the desired benefits.

The Linux kernel also exposes the nomerges sysfs parameter as a scheduler-agnostic setting that allows the block layer's request merging logic to be disabled either entirely or only for more complex merging attempts.[2] This reduces the need for the NOOP scheduler, as the overhead of most I/O schedulers is associated with their attempts to locate adjacent sectors in the request queue in order to merge them. However, most I/O workloads benefit from a certain level of request merging, even on fast low-latency storage such as SSDs.[3][4]
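
As an illustration (a sketch, not an official interface; the helper names and the device name "sda" are only examples), the nomerges attribute can be read and written through sysfs. According to the kernel's queue-sysfs documentation, 0 enables all merging, 1 permits only simple one-hit merges, and 2 disables merging entirely.

    from pathlib import Path

    def nomerges_file(device="sda"):
        return Path(f"/sys/block/{device}/queue/nomerges")

    def get_nomerges(device="sda"):
        # 0 = all merging enabled (default), 1 = one-hit merges only, 2 = none.
        return int(nomerges_file(device).read_text())

    def set_nomerges(value, device="sda"):
        if value not in (0, 1, 2):
            raise ValueError("nomerges accepts only 0, 1 or 2")
        nomerges_file(device).write_text(f"{value}\n")   # requires root

    # Usage: set_nomerges(2)  # skip all merge lookups for /dev/sda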

References

  1. ^ "Choosing an I/O Scheduler for Red Hat Enterprise Linux 4 and the 2.6 Kernel". Red Hat. Archived from the original on 2007-08-27. Retrieved 2007-08-10.
  2. ^ "Documentation/block/queue-sysfs.txt". Linux kernel documentation. kernel.org. December 1, 2014. Retrieved December 14, 2014.
  3. ^ "6.4.3. Noop (Red Hat Enterprise Linux 6 Documentation)". Red Hat. October 8, 2014. Retrieved December 14, 2014.
  4. ^ Paul Querna (August 15, 2014). "Configure flash drives in High I/O instances as Data drives". Rackspace. Retrieved December 15, 2014.
