In computer science, a fast path is a path through a program with a shorter instruction path length than the normal path. To be effective, a fast path must handle the most commonly occurring tasks more efficiently than the normal path, leaving the latter to handle uncommon cases, corner cases, error handling, and other anomalies. Fast paths are a form of optimization.[1]
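
As a minimal sketch of the idea (with illustrative names, not any particular library's API), the lookup function below serves repeated queries from a one-entry cache on the fast path and falls back to a full linear search, which handles every case, only on a miss:

```c
#include <stddef.h>
#include <string.h>

struct table {
    const char *keys[256];
    int         values[256];
    size_t      count;
    size_t      last_hit;   /* one-entry cache: index of the last match;
                               the struct is assumed zero-initialized */
};

int table_lookup(struct table *t, const char *key, int *out)
{
    /* Fast path: the previous query asked for the same key,
       so a single string comparison suffices. */
    if (t->count > 0 && strcmp(t->keys[t->last_hit], key) == 0) {
        *out = t->values[t->last_hit];
        return 1;
    }
    /* Slow path: full scan; handles first-time keys and misses. */
    for (size_t i = 0; i < t->count; i++) {
        if (strcmp(t->keys[i], key) == 0) {
            t->last_hit = i;
            *out = t->values[i];
            return 1;
        }
    }
    return 0;   /* not found */
}
```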

For example, the dedicated packet-routing hardware used to build computer networks often handles the most common kinds of packets in hardware, with other kinds passed to the "slow path", usually implemented by software running on the control processor. Packets with special control information, packets with errors, or packets directed at the device itself rather than being routed elsewhere would be passed to the slow path. The slow path is more flexible and can handle any kind of packet.
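
A rough sketch of such a classification step might look like the following; the header fields, constants, and the `hw_forward`/`punt_to_cpu` interfaces are simplified assumptions for illustration, not the layout or API of any particular router:

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified IPv4-style header; real hardware parses the wire format. */
struct pkt_hdr {
    uint8_t  version;       /* 4 for IPv4 */
    uint8_t  header_len;    /* in 32-bit words; 5 means "no options" */
    uint8_t  ttl;
    bool     checksum_ok;
    uint32_t dst_addr;
};

extern uint32_t local_addr;                 /* this device's own address */
extern void hw_forward(struct pkt_hdr *);   /* hardware forwarding engine */
extern void punt_to_cpu(struct pkt_hdr *);  /* slow path on the control CPU */

void classify(struct pkt_hdr *p)
{
    /* Fast path: an ordinary, well-formed transit packet. */
    if (p->version == 4 && p->header_len == 5 && p->checksum_ok &&
        p->ttl > 1 && p->dst_addr != local_addr) {
        hw_forward(p);
        return;
    }
    /* Slow path: options, errors, expiring TTL, or packets
       addressed to the device itself. */
    punt_to_cpu(p);
}
```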

The same split is applied in pure software to maximize the performance of packet processing. In these implementations, the networking stack is divided into two layers, and the lower layer processes the majority of incoming packets outside the operating system (OS) environment without incurring the OS overheads that degrade overall performance. Only the rare packets that require complex processing are forwarded to the OS networking stack, which performs the necessary management, signaling, and control functions.
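
A heavily simplified view of such a split, with all interface names hypothetical, is a user-space polling loop that forwards simple packets itself and hands anything it cannot handle to the OS stack through an exception path:

```c
#include <stdbool.h>
#include <stddef.h>

struct packet { unsigned char *data; size_t len; };

/* Hypothetical driver and OS interfaces, for illustration only. */
extern bool nic_poll(struct packet *p);             /* fetch next packet, bypassing the OS */
extern bool fastpath_forward(struct packet *p);     /* true if handled entirely in user space */
extern void inject_into_os_stack(struct packet *p); /* exception path into the kernel stack */

void run_datapath(void)
{
    struct packet p;
    for (;;) {
        if (!nic_poll(&p))
            continue;               /* nothing received, keep polling */
        if (fastpath_forward(&p))
            continue;               /* common case: handled without the OS */
        inject_into_os_stack(&p);   /* rare case: control traffic, fragments, ... */
    }
}
```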

Some hardware RAID controllers implement a "fast path" for write-through access which bypasses the controller's cache in certain situations. This tends to increase IOPS, particularly for solid-state drives.

For a fast path to be beneficial, it must handle the majority of operations, because the test that chooses between the fast and slow paths itself adds overhead, further slowing the slow path. One common way to mount a denial-of-service attack is to flood a device with packets that require use of the slow path.
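
Because the dispatch test runs on every operation, implementations often hint to the compiler that the fast branch is the likely one. The sketch below uses GCC/Clang's `__builtin_expect`, wrapped in the conventional `likely`/`unlikely` macros; the `packet_is_simple` check and the two handlers are placeholder assumptions:

```c
/* Conventional wrappers around the GCC/Clang branch-prediction hint. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

extern int  packet_is_simple(const void *pkt);  /* placeholder dispatch test */
extern void handle_fast(const void *pkt);
extern void handle_slow(const void *pkt);

void dispatch(const void *pkt)
{
    /* The hint biases code layout so the fast path falls through
       without a taken branch; the slow path pays the jump. */
    if (likely(packet_is_simple(pkt)))
        handle_fast(pkt);
    else
        handle_slow(pkt);
}
```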
