With the increasing use of Web-based services, the design of high-performance, scalable, and dependable data centers has become a critical issue. Recent studies show that a clustered, multi-tier architecture is a cost-effective approach to designing such servers. To complement this architecture, Java 2 Enterprise Edition (J2EE) provides the tools and the environment to standardize the implementation of Web-based applications. However, current data centers use TCP/IP over Ethernet for inter- and intra-cluster communication; none has exploited the InfiniBand Architecture (IBA) to enhance communication performance. In this context, this thesis analyzes the possible benefits of utilizing the InfiniBand communication mechanism. Our research consists of five parts.
First, we have implemented a three-tier data center prototype based on the J2EE specification. This prototype is used for comparative experiments that analyze the possible benefits of IBA over traditional Ethernet. Our results with two popular benchmarks, RUBiS and SPECjAppServer2004, show that IBA can provide 21% higher throughput and up to 15% lower average response time than Ethernet.
Second, we propose a novel load balancing mechanism for effectively distributing load among the application servers in the mid-tier. Our technique estimates the processing and communication requirement (weight) of each request from the amount of data it must retrieve from the database to be completed, and forwards requests to application servers accordingly. We have implemented this load balancing mechanism on our prototype data center. Using the same experimental setup, our technique provides 11% higher throughput and up to 10% lower average response time than the default round-robin implementation running over SDP. This improvement is in addition to the benefits of IBA over Ethernet.
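The dispatch policy described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes each request carries an estimate of the database bytes it needs, and the `WeightedDispatcher` name and `db_bytes` field are hypothetical.

```python
# Hypothetical sketch of weight-based dispatch: a request's "weight" is
# estimated from the amount of database data it needs, and the request is
# forwarded to the application server with the least pending weight.

class WeightedDispatcher:
    def __init__(self, servers):
        # pending[s] approximates the outstanding work queued at server s
        self.pending = {s: 0.0 for s in servers}

    def estimate_weight(self, request):
        # Assumption: the request carries an estimate of the bytes it
        # will pull from the database to be completed.
        return request["db_bytes"]

    def dispatch(self, request):
        target = min(self.pending, key=self.pending.get)
        self.pending[target] += self.estimate_weight(request)
        return target

    def complete(self, server, request):
        self.pending[server] -= self.estimate_weight(request)
```

Unlike round-robin, which alternates blindly, this sketch keeps a heavy request from being stacked on an already-loaded server.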
Third, we investigate the feasibility of minimizing the response time of a Web server by exploiting the advantages of both user-level communication and coscheduling. We therefore propose a coscheduled server model in which remote cache accesses are coscheduled on different nodes to reduce response time. We evaluate this concept using two known coscheduling techniques, Dynamic Coscheduling (DCS) and DCS with immediate blocking. Simulation-based performance evaluation indicates that the average response time of a distributed server can be reduced by up to 80%, and by 40% on average, using a coscheduling mechanism.

Fourth, we propose a mechanism that exploits the local memory of programmable Network Interface Cards (NICs) to improve the performance of cluster-based Web servers. Our mechanism uses the NIC memory to cache recently accessed data blocks. We have implemented a prototype of the proposed NIC caching mechanism for a distributed Web server on an 8-node Linux cluster. Measurements with several server workloads show that NIC caching can enhance throughput by up to 27% and reduce the average intra-cluster latency by around 20% by minimizing DMA and PCI bus overhead.
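The NIC caching idea can be illustrated with a minimal LRU block cache. This is an assumption-laden sketch: the class name and capacity are illustrative, and a real programmable NIC would manage raw memory regions rather than Python objects.

```python
from collections import OrderedDict

# Sketch of the NIC-caching idea: recently accessed data blocks are kept
# in the NIC's limited local memory so repeated requests can be served
# without crossing the PCI bus and invoking another DMA transfer.

class NICBlockCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # key -> block payload, in LRU order

    def get(self, key):
        if key in self.blocks:
            self.blocks.move_to_end(key)  # mark as most recently used
            return self.blocks[key]       # hit: served from NIC memory
        return None                       # miss: caller fetches over PCI

    def put(self, key, payload):
        self.blocks[key] = payload
        self.blocks.move_to_end(key)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
```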
Finally, we have conducted a characterization study of the network behavior within a clustered, multi-tier data center. Using a real implementation of a clustered three-tier data center, we analyzed the arrival rate and inter-arrival time distribution of requests to individual server nodes, the network traffic between tiers, and the average size of messages exchanged between tiers. The main results of this study are: (1) in most cases, request inter-arrival times follow a log-normal distribution, and self-similarity exists when the data center is heavily loaded; (2) message sizes can be modeled by a log-normal distribution; and (3) service times fit reasonably well with a Pareto distribution and show heavy-tailed behavior at heavy loads.
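The log-normal fitting behind results (1) and (2) can be sketched as follows. This is illustrative only, using synthetic data in place of the measured traces; it relies on the standard fact that a log-normal's parameters are the mean and standard deviation of the log-transformed sample.

```python
import math
import random

# Fitting a log-normal model to a trace of positive observations
# (e.g. inter-arrival times or message sizes) reduces to computing the
# mean and standard deviation of the logarithms of the observations.

def fit_lognormal(samples):
    logs = [math.log(x) for x in samples]
    mu = sum(logs) / len(logs)
    var = sum((v - mu) ** 2 for v in logs) / len(logs)
    return mu, math.sqrt(var)

# Synthetic stand-in for a measured trace: draw from a known log-normal
# and check that the fitted parameters recover (mu=2.0, sigma=0.5).
random.seed(7)
trace = [random.lognormvariate(2.0, 0.5) for _ in range(50000)]
mu, sigma = fit_lognormal(trace)
```

In practice, goodness of fit against alternatives such as the Pareto distribution would be judged with a statistical test rather than by parameter recovery alone.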
This research helps in understanding the possible benefits of IBA in cluster-based, multi-tier data centers. Our results show that IBA can improve data center performance, and that the proposed load balancing, coscheduling, and NIC caching mechanisms yield further gains. In addition, the results of our workload characterization study help explain the inner workings of data centers and could thus be used in designing and fine-tuning network interfaces and servers in the future.