
Title:
Techniques for Memory-efficiency, Low-latency and High-throughput in RDMA-based Datacenter Networks and Applications
Author:
Xue, Jiachen, author.
ISBN:
9780438120884
Physical Description:
1 electronic resource (119 pages)
General Note:
Source: Dissertation Abstracts International, Volume: 79-11(E), Section: B.
Advisor: Mithuna Thottethodi. Committee members: Milind Kulkarni; Anand Raghunathan; Sanjay Rao; T.N. Vijaykumar.
Abstract:
Remote Direct Memory Access (RDMA) fabrics such as InfiniBand and Converged Ethernet report latencies up to a factor of 50 shorter than TCP's. As such, RDMA augurs well for the emerging class of user-facing, low-latency applications, such as Web search and memcached, and is a potential replacement for TCP in datacenters (DCs). Employing RDMA in datacenters, however, poses three challenges: (1) RDMA provides hop-by-hop flow control but not end-to-end congestion control. (2) The well-known incast problem, where multiple senders' flows converge at a switch, causes long latency tails in RDMA. (3) RDMA's buffer management schemes either incur memory wastage or demand significant programming effort.
Previous approaches to the flow-control and incast challenges focus on latency by modulating sending rates, which may unnecessarily sacrifice throughput. Instead, my proposal, called Blitz, decouples the handling of the throughput issue of edge congestion (including incasts) from the latency issue of transient, in-network congestion. Blitz's approach to congestion control achieves low latency without sacrificing throughput.
To address buffer management, prior approaches force a choice between memory wastage (e.g., conservatively over-allocating memory to hold all possible incoming message sizes and bursts from all possible sources) and programmer effort (carefully tuning the application to prevent over-allocation). In contrast, my proposal, RIMA -- remote indirect memory access -- avoids both pitfalls by using indirection. Indirection enables RIMA to offer 'append' semantics, which are vastly more memory-efficient than existing RDMA communication semantics: they eliminate the need for over-allocation and require no additional programmer effort.
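The memory-efficiency argument above can be illustrated with a minimal sketch. This is not RIMA's actual interface; the names and sizes below are illustrative assumptions. It contrasts the conventional approach, where a receiver reserves a worst-case buffer per possible sender, with an append-style scheme in which a single tail pointer (the indirection) places each arriving message at the end of one receiver-managed region, so memory grows only with data actually received.

```python
# Hedged sketch, not RIMA's API: compares receive-side memory footprints.

MAX_MSG = 4096      # worst-case message size a receiver must assume (illustrative)
NUM_SENDERS = 100   # number of possible sources (illustrative)

def preallocated_footprint(messages):
    """Conventional RDMA buffering: reserve one worst-case buffer per
    possible sender up front, regardless of how much data arrives."""
    return NUM_SENDERS * MAX_MSG

def append_footprint(messages):
    """Append semantics: each message is placed at the current tail of a
    shared region; the tail pointer is the level of indirection, so the
    footprint tracks the bytes actually received."""
    tail = 0
    for size in messages:
        tail += size  # receiver advances the tail; no per-sender slab
    return tail

incoming = [64] * 10  # ten small messages actually received
print(preallocated_footprint(incoming))  # 409600 bytes reserved
print(append_footprint(incoming))        # 640 bytes used
```

Under these assumed parameters, the conservative scheme reserves 400 KB to receive 640 bytes of data, which is the over-allocation the append semantics avoid.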
RIMA requires hardware support. The final contribution of this thesis is a set of techniques that maximize throughput and minimize latency even on legacy RDMA systems (i.e., without RIMA hardware). Beyond improving performance at the RDMA communication layer, the proposed techniques also improve end-to-end latency and throughput for an important datacenter application: memcached. Existing memcached designs suffer from performance bottlenecks because they focus on optimizing server-side throughput. My design optimizes client-side throughput, which is more valuable because it directly benefits the front-end servers that access the storage and memory-caching tiers. The resulting memcached implementation achieves significantly better throughput and latency than recently proposed RDMA-based key-value stores.
Local Note:
School code: 0183
Available:
| Shelf Number | Item Barcode | Shelf Location |
|---|---|---|
| XX(687777.1) | 687777-1001 | Proquest E-Thesis Collection |


