Oracle Database I/O Performance Tuning: Capturing Extra-Database I/O Performance Metrics

August 25, 2010

Before tuning an Oracle 11g Release 2 (11gR2) database for optimal I/O performance, it's important to understand how well the underlying I/O architecture is performing versus its expected I/O rates. Using tools and concepts already discussed in prior articles, this article - the third in this series - explores how to implement a suitable extra-database I/O performance test and then capture the actual I/O performance results to show that I/O performance testing is still just as much art as science.

The prior article in this series covered how to:

  • Measure expected and actual I/O performance using established I/O monitoring tools
  • Create pre-database and post-database I/O performance workloads

Before attempting to determine the I/O performance of any Oracle 11gR2 database, however, I'll first need to gather some pertinent statistics on what type of performance I can reasonably expect from the underlying I/O subsystem itself. I'll demonstrate exactly how to accomplish this with several detailed examples of:

  • Implementing extra-database I/O performance tests
  • Evaluating the results of extra-database I/O performance testing

Extra-Database I/O Testing: An Experimental I/O Subsystem Configuration

I'll first need to configure a sufficiently complex I/O subsystem, one that lets me isolate and/or combine I/O devices with sufficiently different capacities and capabilities. If you've read any of my prior article series, you already know I've illustrated many times just how well VMware works for rapidly building and deploying virtualized testing environments. As with many of my prior experiments, I'll take advantage of my own home-grown server with a dual-core Intel CPU and 4 GB of addressable memory, and I'll use VMware Workstation 7.0.0 to create a virtual machine (VM) that uses just under 3 GB of my server's available memory to run the 64-bit version of the Oracle Enterprise Linux (OEL) operating system. I've deployed all available OEL 5 Update 4 RPMs compatible with Linux kernel 2.6.18-92.5.

For my I/O subsystem, I've installed and configured three different physical storage devices:

  • One low-capacity 80GB HDD
  • One high-capacity 750GB HDD
  • One 30GB MLC solid-state drive (SSD)

The pertinent technical specifications for these storage devices are listed in Table 3-1 below.

Table 3-1. Physical I/O Devices Evaluated

Manufacturer          Model        Capacity (GB)  Rotational Speed (RPM)  Cache Size (MB)
OCZ Flash Drive       Vertex       30             N/A                     64
Samsung HDD           HD753LJ      750            7200                    32
Western Digital HDD   800JD-00JN   80             7200                    8

To leverage these devices' different I/O capabilities, I'll use VMware's ability to access virtualized devices by creating at least one virtual file on each of the physical devices. I've listed the virtual files in Table 3-2.

Table 3-2. Virtualized Files Present on Physical I/O Devices

Physical Device Type  File Name         Maximum File Size (GB)  VM Guest Mount Point
Large Capacity HDD    Ora11gR2FRA.vmdk  60                      /dev/mapper/vg1-lvfra
Large Capacity HDD    Ora11gR2FRA.vmdk  20                      /dev/sdi8
Large Capacity HDD    Ora11gR2FRA.vmdk  20                      /dev/sdi9
Flash / SSD           SSD1.vmdk         10                      /dev/sdj1
Small Capacity HDD    vmw020.vmdk       10                      /dev/sdk1
Small Capacity HDD    vmw021.vmdk       10                      /dev/sdl1
Small Capacity HDD    vmw022.vmdk       20                      /dev/sdm1

Once I've started my test VM from within VM Workstation, these virtual files will become visible to its operating system as virtualized mount points. To illustrate this, I've captured the output from the fdisk -l command I issued as the root user on the VM in Listing 3.1.
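A quick way to confirm that the guest OS actually sees each virtualized device, before running fdisk -l as root, is to inspect the kernel's partition list. (The sd* device names below follow Table 3-2; actual names will vary from system to system.)

```shell
# Show every block device and partition the guest kernel currently
# recognizes; each virtualized .vmdk file should surface as an sd*
# entry (/dev/sdi, /dev/sdj, and so on, per Table 3-2).
cat /proc/partitions
```

Any device missing from this list will also be missing from fdisk -l output, which usually points to a VM configuration problem rather than an OS one.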

I/O Load Generation Using ORION

Oracle Corporation provides a free I/O load generation tool called ORION (Oracle Input/Output Numbers) that's easily obtained from the Oracle Technology Network web site. One of ORION's major advantages is that it's extremely simple to install and use because, unlike vdbench and similar tools, it does not require the Java Runtime Environment (JRE) for execution.

Once I'd downloaded the compressed executable, I simply unzipped it to my Oracle user account's home directory. The 64-bit version of the ORION executable has a footprint of just under 50 MB; as I mentioned in the previous article in this series, running the executable without any arguments simply prints its usage instructions. After a few abortive attempts at running a quick test of its capabilities, I decided to take a few extra moments to read through the additional documentation provided in the ORION User's Guide. (The link to this documentation seems to have been almost deliberately hidden at the bottom of the ORION utility download page.)

Configuring ORION I/O Load Generation Tests. First and foremost, ORION needs to know which I/O devices should be the target(s) of its I/O load generation efforts. I therefore created five ORION test configuration files, one for each combination of virtualized mount points I wished to test. To build the configuration file for the test named HDD_3Drives1LVM, for example, I constructed a file named HDD_3Drives1LVM.lun. I've laid out these test configurations in Listing 3.2. What's absolutely crucial to keep in mind here is that ORION will target every device named within the test configuration file - so if even a small number of write operations is planned, make sure the mount points are either completely empty or contain no files of significance!
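An ORION test configuration file is nothing more than a newline-delimited list of target devices, one per line. As a sketch (the device names below are drawn from Table 3-2; the exact LUN membership of each of my five tests appears in Listing 3.2):

```shell
# Build a .lun configuration file for the HDD_3Drives1LVM test:
# one target device per line, no comments or blank lines.
# Device paths are illustrative; substitute your own mount points.
cat > HDD_3Drives1LVM.lun <<'EOF'
/dev/mapper/vg1-lvfra
/dev/sdk1
/dev/sdl1
/dev/sdm1
EOF

# Double-check the file's contents before running any write-enabled
# test, since ORION will target every device listed here.
cat HDD_3Drives1LVM.lun
```

ORION locates this file by appending .lun to the value passed via -testname, so the file must sit in the working directory when the test is launched.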

Executing a Simple ORION I/O Load Generation Test. Now that my test configuration files are ready, it's a simple matter to fire off an I/O test from a terminal window session. Here's an example of the feedback from my first test run:

[oracle@11gR2Base ~]$ /home/oracle/orion_x86_64 -run simple -testname HDD_3Drives1LVM

ORION: ORacle IO Numbers -- Version 11.1.0.7.0

HDD_3Drives1LVM_20100815_1627

Test will take approximately 9 minutes

Larger caches may take longer

. . .

This first simple test generates only a perfunctory, standard I/O workload against the targeted LUNs. By default, ORION runs this "simple" test for just 60 seconds per data point, first using an 8 KB I/O size for small random reads and then a 1 MB size for large sequential reads; no write I/Os are performed at all. The "simple" test also simulates concatenated I/O by chaining all specified LUNs together and reading each specified LUN from its first block to its last block.

Executing an Advanced ORION I/O Load Generation Test. ORION can also generate much more complex I/O tests against just about any combination of physical or virtualized I/O devices, and it offers a plethora of I/O test generation options, including the parameters described in the following table:

Table 3-3. ORION I/O Load Generation Parameters

-duration (default: 60)
    Tells ORION how long to test each specific data point, in seconds.

-matrix (default: detailed)
    Specifies the types of I/O workloads to test. The default (detailed) tells ORION to execute small random, large random, and large sequential I/O workloads in combination. There are several other settings as well; see the ORION documentation for complete details.

-type (default: rand)
    Specifies the type of large I/O workload:
      • The default value, rand, tells ORION to generate a large random I/O workload
      • A value of seq tells ORION to generate a large sequential I/O workload

-size_small (default: 8)
    Controls the small random I/O workload size, in KB.

-size_large (default: 1024)
    Controls the large sequential I/O workload size, in KB.

-simulate (default: concat)
    Tells ORION how to simulate a large sequential workload (if one is requested):
      • The default value, concat, specifies that all generated I/O will begin at the start of a LUN and end at the end of that LUN
      • The other value, raid0, tells ORION to stripe all generated I/O across all listed LUNs. This option is suggested for simulating ASM striping

-num_streamIO (default: 1)
    Tells ORION how many outstanding I/Os should be generated per sequential I/O stream. This parameter only has meaning when the value for -type is seq.

-write (default: 0)
    Controls what percentage of generated I/O writes data to the target LUNs vs. reading data from them. Note that for large sequential I/O, this actually corresponds to the number of I/O streams generating either read or write activity; in other words, there are no partial large sequential read/write streams.

-stripe (default: 1024)
    The size of the I/O stripe written when -simulate is set to raid0; its default of 1024 KB can be overridden to simulate larger ASM allocation units (e.g. 4096 for a 4 MB AU).

-num_small (default: 0)
    Controls the maximum number of outstanding I/O requests when a small random I/O workload is generated for special I/O generation patterns. See the ORION documentation on the -matrix parameter for complete details.

-num_large (default: 0)
    Controls the maximum number of outstanding I/O requests when a large random I/O workload is generated for special I/O generation patterns. See the ORION documentation on the -matrix parameter for complete details.

-cache_size (no default)
    When large sequential I/O workloads are to be generated against enterprise-level storage arrays, this parameter tells ORION the approximate size of the storage array's read/write cache, in MB. If this parameter is set, ORION will "warm up" the cache by generating several random large I/Os. Setting this parameter to zero (0) tells ORION not to "warm" the cache at all.

The ORION test scripts I've used to generate all five I/O test workloads are shown in Listing 3.3; please notice that these workloads take advantage of many of these parameter settings to generate I/O workloads of different types, durations, and percentages of read vs. write activity against dramatically different combinations of virtualized devices.
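To make those parameters concrete, here's a sketch of how one advanced run might be assembled. The flag values below are illustrative choices of mine, not the exact settings from Listing 3.3, and the command is echoed rather than executed, since any nonzero -write percentage will overwrite data on the listed LUNs.

```shell
# Assemble a hypothetical advanced ORION invocation: 120-second data
# points, 20% writes, and large I/O striped across all LUNs (raid0)
# to mimic ASM striping. ORION derives the configuration file name
# from -testname, so HDD_3Drives1LVM.lun must already exist here.
ORION=/home/oracle/orion_x86_64   # path used in the simple test above
CMD="$ORION -run advanced -testname HDD_3Drives1LVM \
 -duration 120 -matrix detailed -write 20 \
 -simulate raid0 -stripe 1024 -size_small 8 -size_large 1024"

# Echo instead of executing; review the command (and the .lun file!)
# before running a write-enabled test for real.
echo "$CMD"
```

Dropping the echo and running "$CMD" directly would launch the test; plan for it to run considerably longer than the simple test, since -matrix detailed multiplies the number of data points.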

Interpreting the Results of an ORION I/O Load Generation Test. After each I/O test successfully completed, ORION logged the results into a set of text and comma-separated values (CSV) files. Each file is named so it corresponds to the I/O generation test executed; ORION also adds a date and time stamp to each file name so that prior results won't be overwritten by future executions of the same test. Table 3-4 describes these output files and their contents:

Table 3-4. ORION Test Results: Output Files

File Name (each prefixed with [Test Name]_[Test Date]_[Test Time]_)

summary.txt
    Summarizes the ORION test's input parameters (the test name, its target devices, and the type(s) of tests performed) along with some high-level I/O performance statistics

iops.csv
    Captures the I/O performance (in I/O operations per second) as each test point was completed

mbps.csv
    Details the I/O throughput (in MB per second) as each test point was completed

lat.csv
    Shows statistics on I/O latency (i.e. the delays encountered as higher amounts of throughput are demanded) as each test point was completed

trace.txt
    Captures detailed results of all ORION test points, including average IOPS, MBPS, and latency statistics at the end of each test point

The comma-separated values (CSV) files are intended as input for common graphing tools, such as Microsoft Excel 7.0. Figures 3.1, 3.2, and 3.3 below illustrate how to take advantage of these data points. I've selected probably the most interesting and most complex ORION test series - XDD6Drives3Devices, which incorporated six LUNs from three different physical devices - to present the resulting test data.
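Before charting, it can also help to pull the headline numbers straight from the CSV at the command line. The sketch below fabricates a tiny sample file with made-up figures (and a simplified two-column layout, not ORION's actual matrix of test points) purely to illustrate the awk technique:

```shell
# Hypothetical, simplified throughput data as "load level, MBPS"
# pairs. The numbers are invented for illustration; real ORION
# mbps.csv files arrange their data points as a matrix, so the
# awk field handling below would need adjusting for them.
cat > sample_mbps.csv <<'EOF'
1, 38.2
2, 61.5
4, 74.9
8, 71.3
EOF

# Report the peak throughput and the load level where it occurred.
awk -F', *' '$2 > max { max = $2; load = $1 }
  END { printf "peak %.1f MBPS at %s outstanding I/Os\n", max, load }' \
  sample_mbps.csv
```

The same pattern works against iops.csv and lat.csv once the field positions are matched to the real file layout.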

Figure 3.1. ORION Test XDD6Drives3Devices: MBPS.

Figure 3.2. ORION Test XDD6Drives3Devices: IOPS.

Figure 3.3. ORION Test XDD6Drives3Devices: Latency.

The results from all of these tests - including a Microsoft Excel 7.0 spreadsheet that contains the raw data I used to create these graphs - are available for download here.

Next Steps

Now that I've got baseline I/O performance statistics for my I/O subsystem, the next step is to actually create Oracle database files within these mount points to determine the impact that different I/O configurations will have upon my database's performance. The next article in this series will therefore focus on how to use well-established database application workloads to evaluate the relative I/O performance. I will illustrate how to:

  • Create and load sample TPC-E and TPC-H database schemas
  • Perform intra-database I/O performance tests
  • Evaluate the I/O subsystem's response to different application workloads

References and Additional Reading

Before you proceed to experiment with any of these techniques, I strongly suggest that you look over the corresponding detailed Oracle documentation first. I've drawn upon the following Oracle Database 11g Release 2 documents for this article's technical details:

E10713-03 Oracle Database 11gR2 Concepts

E10500-02 Oracle Database 11gR2 Storage Administrator's Guide







