Channel: Dell TechCenter

If it’s Wednesday, it must be random… optimizing SQL Server workloads for performance using EqualLogic PS Series storage


How would you rate yourself at grocery shopping on any particular day? For me, it depends on whether I went grocery shopping prepared or unprepared. For example, if I were coming home from work one Wednesday and made the last-minute decision to stop by the grocery store, I would be unprepared and have no shopping list. If I still remembered and picked up all 5 of the items I needed, I would meet my performance metric for that Wednesday.

On another day, say a Sunday, I go to the grocery store to do my weekly shopping. This time, I need about 20 items, so I come prepared with a shopping list. If I can finish my grocery shopping in 20 minutes, that is my performance metric for Sunday.

Random vs. sequential data access patterns are like grocery shopping on a Wednesday vs. a Sunday. In a sequential workload, blocks are read and/or written in a sequential manner on the disk (e.g., applications such as data warehousing). In a random workload, blocks are read and/or written at scattered locations on the disk (e.g., online transaction processing).
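To make the contrast concrete, here is a minimal Python sketch of the byte offsets each pattern touches. The block size and disk size are illustrative assumptions for the example, not EqualLogic parameters:

```python
import random

BLOCK = 8192   # assumed 8 KiB block size (illustrative)
N = 5          # number of I/Os to illustrate

# Sequential: each I/O starts where the previous one ended.
sequential = [i * BLOCK for i in range(N)]

# Random: each I/O lands at an arbitrary block somewhere on the disk.
rng = random.Random(42)  # seeded so the example is repeatable
random_access = [rng.randrange(0, 1000) * BLOCK for _ in range(N)]

print(sequential)     # adjacent offsets: 0, 8192, 16384, ...
print(random_access)  # scattered offsets across the disk
```

A disk head (or cache prefetcher) can stream through the first list with almost no seeking, while the second list forces a seek for nearly every I/O, which is why the two patterns are measured differently.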

In the storage world, the performance of these two workloads is measured by IOPS in one case and throughput in the other, just as the performance measure differs between grocery shopping on a Wednesday and on a Sunday.

IOPS (input/output operations per second) represents the number of individual I/O operations taking place in a second. IOPS figures can be very useful, but only when you understand the nature of the I/O, such as the I/O size and how random it is.

Throughput (also known as bandwidth) is a measure of data volume over time—in other words, the amount of data that can be pushed or pulled through a system per second. Throughput figures are therefore usually given in units of MB/sec and are more meaningful for sequential workloads since blocks are accessed serially.

 Throughput = IOPS x I/O size
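As a quick sanity check of that formula, the sketch below plugs in illustrative numbers (not benchmark results) for a small-block random workload and a large-block sequential one:

```python
def throughput_mib_per_s(iops, io_size_kib):
    # Throughput = IOPS x I/O size, converted from KiB/s to MiB/s.
    return iops * io_size_kib / 1024.0

# Illustrative figures only, not EqualLogic benchmark results:
oltp_mb = throughput_mib_per_s(5000, 8)   # 8 KiB random I/O
dw_mb = throughput_mib_per_s(400, 256)    # 256 KiB sequential I/O

print(oltp_mb)  # ~39 MiB/s despite 5000 IOPS
print(dw_mb)    # 100 MiB/s from only 400 IOPS
```

Note how the random workload produces far more IOPS yet far less throughput; that is why each metric suits its own workload type.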

The most common database application workload models are online transaction processing (OLTP) and data warehouse (DW)/Decision Support Systems (DSS). SQL Server DW workloads differ significantly from traditional OLTP workloads in how they should be tuned for performance because of the different I/O patterns inherent in both designs—just like grocery shopping.

Some of the most important differences between OLTP and DW workloads are summarized below:

OLTP: small, random I/Os; many concurrent short transactions; performance measured in IOPS.
DW/DSS: large, sequential I/Os; long-running analytical queries over large data sets; performance measured in throughput (MB/sec).

DW applications are typically designed to support complex analytical query activities using large data sets. The queries executed on a DSS database typically take a long time to complete and usually require processing large amounts of data. A DSS query may fetch millions of records from the database for processing. To support these queries, the server reads large amounts of data from the storage devices.

OLTP applications are optimal for managing rapidly changing data. These applications typically have many users performing transactions while at the same time changing real-time data. Although individual data requests by users usually reference few records, many of these requests are being made at the same time. Examples of different types of OLTP systems include airline ticketing systems, banking/financial transaction systems, and web ordering systems.

Since DW workloads differ significantly from OLTP workloads, it is important to understand the performance criteria for each of them and then properly design, size and deploy Microsoft SQL Server on the right storage platforms. Hence, careful planning prior to deployment is crucial for a successful SQL Server environment.

Maximizing SQL Server performance and scalability is a complex engineering challenge, as I/O characteristics vary considerably between applications depending on the nature of the access patterns. Several factors must be considered when gathering storage requirements before arriving at a conclusion. A key challenge for SQL Server database and SAN administrators is to effectively design and manage system storage, especially to accommodate performance, capacity and future growth requirements. At Dell, engineers perform benchmark testing to provide storage best practices and sizing guidance that help you plan before you deploy these applications in your environment. Based on this benchmark testing, we recently published two performance-based white papers focusing on SQL Server DW and OLTP workloads on EqualLogic PS6110 and PS6100 storage arrays, respectively. To get a deeper understanding of how EqualLogic storage arrays performed on these applications, see these detailed best practices documents:

Best Practices and Sizing Guidelines for Transaction Processing Applications with Microsoft SQL Server 2012 using EqualLogic PS Series Storage and

Best Practices for Decision Support Systems with Microsoft SQL Server 2012 using Dell EqualLogic PS Series Storage Arrays.

Visit the EqualLogic Technical Content page for a collection of EqualLogic best practice whitepapers.

You can follow me on Twitter @Magi_Kapoor for more SQL and EqualLogic discussions.

