Sunday, July 21, 2019

Application Performance Optimization and Load Balancing

Application Performance Optimization and Load Balancing using RAID and Caching Techniques
Akilesh Kailash, Sunil Iyer Kolar Suresh Kumar, Sabarish Venkatraman

ABSTRACT
As data processing and the demand for storage grow, the disk I/O performance of a critical application must remain intact. There have been considerable improvements in disk seek times, latency and spindle speeds; however, these improvements alone have not met the need for better performance and load balancing. The challenge for any database administrator is to maximize application I/O performance while ensuring high availability with zero downtime. This challenge can be met using I/O monitoring, load balancing, cache management and RAID (Redundant Array of Inexpensive Disks) technologies. The primary goal of this paper is to show how the I/O problems of a database application can be solved in a consistent fashion with appropriate RAID configurations, caching mechanisms and a load balancing algorithm.

Categories and Subject Descriptors
B.3.2 [Design Styles]: Mass storage – RAID. D.4.2 [Storage Management]: Secondary storage, storage hierarchies. D.4.3 [File Systems Management]: File organization. D.4.4 [Communications Management]: Input/Output. D.4.5 [Reliability]: Backup procedures, fault tolerance.

General Terms
Algorithms, Performance, Design, Theory, Reliability.

Keywords
RAID – Redundant Array of Inexpensive Disks; I/O – Input/Output; DBA – Database Administrator; HA – High Availability; OLTP – Online Transaction Processing; IOPS – I/O Operations Per Second; HBA – Host Bus Adapter.

1. INTRODUCTION
RAID technology addresses the need for higher storage capacity in an I/O system and provides data redundancy. This enables efficient, improved disk access and avoids data loss from disk failures. Conceptually, RAID creates a logical disk from two or more physical disk drives in order to provide high bandwidth. RAID is an integral part of the storage stack and fabric layer and is supported by storage vendors such as EMC, Hitachi and NetApp. RAID technologies offer different methods of building storage stacks and subsystems for different kinds of databases. The two main technical reasons for switching to RAID are therefore scalability and high availability in the context of I/O and system performance. As database sizes have grown from the gigabyte to the petabyte range, the ability to scale the I/O performance of such gigantic systems has become essential for critical applications.

Load balancing is a critical factor in environments such as operating systems, clusters, networks and applications. It plays a quintessential role in the performance and reliability of any environment by avoiding catastrophic failures. In a typical scenario, resource allocation and load balancing are done through hashing methods, genetic algorithms and the various scheduling algorithms found in operating systems.

Many database applications demand high throughput and availability from their storage subsystems. For instance, a stock market application running in the New York Stock Exchange needs high throughput and bandwidth with absolutely no downtime. This requires continuous operation, i.e., the need to satisfy every I/O request even in the case of disk failures. It is not acceptable to meet these requirements at the cost of degraded performance, especially in real-time applications such as video and audio.
It is highly unacceptable if a video plays at a slower speed, or if data is lost during transmission and playback ends abruptly. Since a database application may encounter extreme I/O activity, or suffer a sudden spike of I/O for a brief period of time, the organization of the database structure on disk becomes imperative.

2. PROBLEM DEFINITION
Mission-critical data centers have a compelling need for highly available applications and services, thereby ensuring zero downtime. Current clustering solutions such as MSCS or HP Serviceguard enable HA for vital applications. However, such solutions are specific to the OS/application for which they are designed. The I/O performance and I/O patterns of a database application have to be analyzed by understanding their relation to the physical storage, which helps in determining how to deploy the application for any given workload. I/O from an application needs to be categorized so that appropriate techniques can be used to improve its performance. There are many DBA tuning tools which are primarily used to index the database and monitor drive activity. This approach is effective, but it requires a lot of time and in reality is quite tedious.

3. ABSTRACT SOLUTION
The possible solutions are:

Determining the RAID level and stripe size. RAID levels are determined by factors such as the type of I/O, disk cost, the read/write mix and so on. The data transfer rate and IOPS are strongly influenced by the segment size and stripe size chosen. For example, consider a RAID 5 configuration with 4 data disks and 1 parity disk and a segment size of 64 KB per disk. A 64 KB I/O is written to the first drive, the next 64 KB I/O is written to the next drive, and so on; finally the parity of the four segments is calculated and written to the last disk. In the case of RAID 1 (mirroring) with two data drives and two mirror drives, a 64 KB I/O is written to each data drive and to its mirror. (A minimal stripe-mapping sketch follows this section.)

Caching techniques. Splitting the cache: the cache acts as an interface between the host application and the RAID controllers. It can be divided into two parts, a front-end cache and a back-end cache; database applications can rely on the front-end cache. Prefetching: OLTP applications may issue I/O operations that are not sequential; the prefetch algorithm predicts the addresses that will be fetched in the future and loads them into memory. The amount of data to be prefetched depends on the application requirements, the available memory and the performance desired by the application. (A prefetch sketch also follows this section.)

Database organization on a storage system. Database objects such as tables, logs and views can be organized on the storage layout in many different ways. Based on the structure of the database layout, an appropriate storage configuration is chosen.

Load balancing. I/O load balancing across cluster nodes is performed using regression analysis. If a port of an HBA or a fabric node is heavily loaded, the I/O is rebalanced across the ports that are not utilized to their full potential.
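To make the stripe-size discussion concrete, the following is a minimal sketch, under the assumptions of the example above (four 64 KB data segments plus one parity segment computed as their XOR), of how a full-stripe write could be laid out across drives. The names (xor_parity, map_write) are hypothetical illustrations, not the controller algorithm of any particular array.

# Minimal sketch: mapping a full-stripe write onto a 4-data-plus-1-parity layout.
# Segment size and disk count are assumptions taken from the example above.

SEGMENT_SIZE = 64 * 1024   # 64 KB segments, as in the example
DATA_DISKS = 4             # four data drives plus one parity drive

def xor_parity(segments):
    """Parity segment = byte-wise XOR of the data segments in a stripe."""
    parity = bytearray(SEGMENT_SIZE)
    for seg in segments:
        for i, b in enumerate(seg):
            parity[i] ^= b
    return bytes(parity)

def map_write(stripe_data):
    """Place one full stripe (4 x 64 KB) onto data disks 0-3 and parity on disk 4."""
    assert len(stripe_data) == DATA_DISKS * SEGMENT_SIZE
    segments = [stripe_data[i * SEGMENT_SIZE:(i + 1) * SEGMENT_SIZE]
                for i in range(DATA_DISKS)]
    layout = {disk: seg for disk, seg in enumerate(segments)}
    layout[DATA_DISKS] = xor_parity(segments)   # disk 4 holds the parity
    return layout

if __name__ == "__main__":
    stripe = bytes(range(256)) * (DATA_DISKS * SEGMENT_SIZE // 256)
    placement = map_write(stripe)
    for disk, seg in placement.items():
        role = "parity" if disk == DATA_DISKS else "data"
        print(f"disk {disk}: {len(seg)} bytes ({role})")

A RAID 1 write of the same 64 KB would instead be copied verbatim to a data drive and its mirror, which doubles the write traffic but requires no parity computation.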
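The prefetching idea can likewise be sketched as a simple read-ahead cache: when the last few block reads look sequential, the next few blocks are loaded into memory before they are requested. This is a hypothetical illustration of the general technique, assuming a caller-supplied backend_read function; it is not the controller algorithm referenced here.

# Sketch of sequential read-ahead prefetching (hypothetical, simplified).
# backend_read stands in for an actual disk or RAID read of one block.

from collections import OrderedDict

class PrefetchCache:
    def __init__(self, backend_read, capacity=1024, prefetch_depth=8):
        self.backend_read = backend_read      # function: block number -> bytes
        self.capacity = capacity              # maximum cached blocks (memory budget)
        self.prefetch_depth = prefetch_depth  # how far ahead to read
        self.cache = OrderedDict()            # block number -> data, in LRU order
        self.last_block = None

    def _put(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the least recently used block

    def read(self, block):
        sequential = (self.last_block is not None and block == self.last_block + 1)
        self.last_block = block
        if block not in self.cache:
            self._put(block, self.backend_read(block))
        if sequential:
            # Access looks sequential: warm the cache with the next few blocks.
            for ahead in range(block + 1, block + 1 + self.prefetch_depth):
                if ahead not in self.cache:
                    self._put(ahead, self.backend_read(ahead))
        return self.cache[block]

if __name__ == "__main__":
    cache = PrefetchCache(lambda n: f"block-{n}".encode(), prefetch_depth=4)
    for n in (10, 11, 12, 99):                # reads of 11 and 12 trigger read-ahead
        cache.read(n)
    print(sorted(cache.cache))                # blocks 13-16 are already cached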
4. LITERATURE SURVEY
I/O performance and disk I/O contention play a vital role in critical applications. Our work on application performance monitoring, I/O tuning and load balancing is motivated by the "Oracle I/O Performance" and "Array Tuning Best Practices" papers, and the proposed solution and enhancements follow similar lines. We start the survey by explaining the technical feasibility and the pros and cons of the approaches discussed in these papers, and briefly describe the issue we are addressing based on the survey findings.

5. PERFORMANCE BOTTLENECKS
Application performance and write access are generally obtained using storage arrays with different RAID configurations. For instance, mirroring data across multiple disks using RAID 1 in order to achieve redundancy is the most common way of obtaining high availability.

Disk failure vulnerabilities in enterprise storage. The main motivation for adopting redundant RAID configurations is the vulnerability to disk failures in enterprise storage arrays, which can result in catastrophic loss of data. This high availability of the application and its I/O is obtained at the cost of write performance.

Keeping write operations in sync. During a write operation, all the writes have to be applied simultaneously to all the disks in order to keep them in sync. This severely penalizes write-heavy workloads. In addition, maintaining the synchronization of data across all disks while achieving concurrency is a difficult task and can lead to system crashes. To overcome these problems, a number of different striping mechanisms have been proposed, each with its own trade-offs in cost, performance, scalability and robustness. Most RAID configurations differ in how the data is interleaved and in the pattern in which the redundant information is distributed across the disks.

Load balancing of I/O and resource utilization. Load balancing is commonly implemented in SQL Server clustering and is a very common practice. Many third-party tools provide load balancing and resource utilization solutions; however, the limitation of such tools is that the factors used to decide on load balancing are very system specific and depend heavily on the characteristics of each application. As the database size grows over a short period, query speed generally takes a performance hit as the number of rows increases. This is mainly observed in applications where performance data is collected at frequent intervals while the same data is simultaneously read from the database for other purposes. The quick solutions for optimizing query speed are partitioning the views, indexing and table partitioning, but even then things remain slow. The main problem with such solutions is that the database tables and views are located on different servers. Hence a server cluster is used, which adds reliability if a performance issue is seen on one of the cluster nodes.

6. RAID LEVEL SELECTION CRITERIA
The choice of RAID level is based on several factors. When a mirrored configuration such as RAID 1 or RAID 1+0 is chosen, each write request is duplicated to disk by the RAID controller. This causes performance issues if the application does not actually need data duplication and its availability. When a higher-level, parity-based RAID configuration is used, things get more intricate. Consider RAID 5 or RAID 6 when the size of a write I/O is less than the stripe size, which is frequently observed in database applications where writes are around 4 KB pages against a segment size of around 128 KB: the RAID controller has to perform several I/O operations for a single request. For such a small write, the controller must first fetch the existing data from the back-end disk into memory, insert the fresh data at the appropriate position, calculate the new parity stripe and then write it back to disk. Hence one logical I/O results in roughly 3 to 4 times the IOPS, and this overhead grows further when parity has to be calculated for two sets, as in RAID 6. (A sketch of this read-modify-write penalty follows this section.)

The other factors in choosing a RAID configuration are drive cost and the I/O pattern. The redundancy cost is zero for RAID 0, since there is no redundancy, and highest for RAID 1 or its combinations such as RAID 10, because of drive mirroring. The cost of RAID 5 is comparatively lower than RAID 1, but it has one disk dedicated to parity. A clear distinction is required to classify small and large I/O: bursty, large I/O is seen when an I/O request exceeds one third of the cache size, while small I/Os are served from the cache, avoiding RAID access altogether. All in all, RAID 5 and RAID 6 are generally preferred for large and sequential I/O operations, while RAID 1 and RAID 10 are preferred for short I/O operations.
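To illustrate the write penalty described above, here is a rough, hypothetical accounting of back-end disk operations per small logical write for a few RAID levels. The counts follow the standard read-modify-write argument sketched in the section above, not measurements from any particular array, and the 5,000 IOPS workload in the example is assumed.

# Rough accounting of back-end disk ops per small front-end write (sketch only).

def backend_writes_per_logical_write(raid_level: str) -> int:
    """Approximate back-end I/O operations caused by ONE small front-end write."""
    penalties = {
        "RAID0": 1,   # write the data segment only, no redundancy
        "RAID1": 2,   # write the data segment and its mirror copy
        "RAID10": 2,  # same as RAID 1: data plus mirror
        "RAID5": 4,   # read old data, read old parity, write new data, write new parity
        "RAID6": 6,   # as RAID 5, but with two parity sets to read and rewrite
    }
    return penalties[raid_level]

def backend_iops(front_end_iops: float, raid_level: str) -> float:
    """Back-end IOPS the disks must sustain for a small-write workload."""
    return front_end_iops * backend_writes_per_logical_write(raid_level)

if __name__ == "__main__":
    # Example: an OLTP-like workload issuing 5,000 small (4 KB) writes per second.
    for level in ("RAID0", "RAID1", "RAID10", "RAID5", "RAID6"):
        print(f"{level:>6}: {backend_iops(5000, level):>6.0f} back-end write IOPS")

This amplification is why mirrored levels suit short, random writes, while the parity levels are better reserved for large or sequential I/O, where full-stripe writes avoid the read-modify-write cycle.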
7. SCOPE FOR IMPROVEMENT
This paper builds on the aforementioned aspects and concentrates on monitoring the I/O pattern, analyzing the load on each I/O path and performing load balancing if required. In addition, taking the I/O pattern into consideration, an appropriate RAID configuration along with a write-back cache is used where necessary.

8. PROPOSED SOLUTION
Characterize the I/O pattern. The first step is to monitor the I/O and characterize it. This is done using tools such as Perfmon or IOmeter. We plan to use these tools to analyze the I/O pattern of a given application. This monitoring is required because we characterize each request stream as read intensive or write intensive and track how the load varies over time.

Perform load balancing upon reaching an I/O threshold. The second step is to perform load balancing. This is done by analyzing the load and identifying the I/O threshold along the path from a server HBA port through the fabric layer to the storage array. A threshold is a boundary that serves as a benchmark for comparison or guidance; any deviation beyond it may result in a change of state of the overall system. Our proposed infrastructure identifies the threshold by analyzing the I/O graph and monitoring two parameters: the linear regression fit and the slope of the curve. Using linear regression, the value of the slope is calculated. Based on these two parameters, if we observe that one of the HBA ports is heavily loaded, we balance it out by redistributing the excess load to other cluster nodes. Once the I/O is balanced, an appropriate RAID configuration is calculated. (A minimal sketch of this slope-based check follows.)
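The following is a minimal sketch of the slope-based threshold check described above, assuming per-port throughput samples have already been collected (for example, from a Perfmon counter log). The least-squares slope of recent samples indicates whether a port's load is trending upward past a threshold; the function names and threshold values are hypothetical.

# Minimal sketch of the slope-based load check (hypothetical names and thresholds).
# Input: recent throughput samples per HBA port, e.g. MB/s gathered by a monitor.

from typing import Dict, List

def least_squares_slope(samples: List[float]) -> float:
    """Slope of the best-fit line through (0, s0), (1, s1), ... via least squares."""
    n = len(samples)
    if n < 2:
        return 0.0
    mean_x = (n - 1) / 2.0
    mean_y = sum(samples) / n
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(samples))
    den = sum((i - mean_x) ** 2 for i in range(n))
    return num / den

def overloaded_ports(port_samples: Dict[str, List[float]],
                     load_threshold: float = 350.0,   # MB/s, assumed value
                     slope_threshold: float = 5.0) -> List[str]:
    """Ports whose current load exceeds the threshold and is still trending upward."""
    flagged = []
    for port, samples in port_samples.items():
        if samples and samples[-1] > load_threshold \
                and least_squares_slope(samples) > slope_threshold:
            flagged.append(port)
    return flagged

if __name__ == "__main__":
    samples = {
        "hba0_port0": [300, 320, 355, 380, 410],  # heavy and rising -> rebalance
        "hba0_port1": [120, 115, 130, 125, 118],  # lightly loaded -> leave alone
    }
    for port in overloaded_ports(samples):
        print(f"{port} exceeds its threshold; redistribute excess I/O to idle ports")

In a fuller implementation, the flagged ports would feed the rebalancing step, after which the RAID configuration would be re-evaluated as described above.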
9. CONCLUSION AND FUTURE WORK
After studying the I/O access patterns of various workloads, we can clearly map the database application onto the physical storage, thereby achieving high performance and fast access and retrieval. This would help DBAs deploy management applications and make it easy to track application performance. The analysis can also be applied to enterprise-level configurations, resulting in efficient usage of physical storage, making it cost effective and reducing the work for DBAs and lab administrators.

10. REFERENCES
The RAID Book, Sixth Edition. RAID Advisory Board.
LaCie. RAID Technology White Paper.
Peter M. Chen, Edward K. Lee. RAID: High-Performance, Reliable Secondary Storage. ACM Computing Surveys.
Array Tuning Best Practices. A Dell Technical White Paper. http://www.dell.com/downloads/global/products/pvaul/en/powervault-md3200i-performance-tuning-white-paper.pdf
Exploring Disk Size and Oracle Disk I/O Performance. http://www.openmpe.com/cslproceed/HPW02CD/paper/11026.pdf
