Oracle Exadata Basics

Since its introduction in September 2008, Exadata has quickly become both a familiar term and a familiar sight in the IT/database realm. The system has undergone several changes in its short history, evolving from a storage solution into a complete database appliance.

What IS Exadata?

Exadata is a machine composed of matched and tuned components that provide enhancements available in no other configuration, improving the performance of the database tier. The system includes database servers, storage servers, an internal InfiniBand network with switches, and storage devices (disks), all configured by Oracle Advanced Customer Support personnel to meet the customer's requirements. Can it improve every situation? No, but it wasn't designed to. Originally built as a data warehouse/business intelligence appliance, Exadata has supported OLTP applications as well since the V2 release.

Available Configurations

The previous Exadata release was X2, which had four available configurations, three of which included a keyboard/video/mouse module:

  • X2-2 Quarter Rack – Two database servers each with 2 6-core Xeon processors and 48 GB of RAM, three storage servers, 36 disk drives
  • X2-2 Half Rack – Four database servers each with 2 6-core Xeon processors and 48 GB of RAM, seven storage servers, 84 disk drives, spine switch for expansion
  • X2-2 Full Rack – Eight database servers each with 2 6-core Xeon processors and 48 GB of RAM, fourteen storage servers, 168 disk drives, spine switch for expansion
  • X2-8 Full Rack – Two database servers each with 8 10-core Xeon processors and 2 TB of RAM, fourteen storage servers, 168 disk drives, spine switch for expansion, no keyboard/video/mouse module

The -2 and -8 descriptors indicate the CPU count per database server. Note that the X2-8 configuration has more total computing power (16 10-core CPUs versus the 16 6-core CPUs of the full rack X2-2), at the expense of the internal keyboard/video/mouse module. An X3 series of Exadata machines is now available, replacing the X2 series, in the following five configurations:
  • X3-2 Eighth Rack – Two database servers each with 2 8-core Xeon processors (only 8 of the 16 cores enabled) and 256 GB of RAM, three storage servers, 36 disk drives
  • X3-2 Quarter Rack – Two database servers each with 2 8-core Xeon processors and 256 GB of RAM, three storage servers, 36 disk drives
  • X3-2 Half Rack – Four database servers each with 2 8-core Xeon processors and 256 GB of RAM, seven storage servers, 84 disk drives, spine switch for expansion
  • X3-2 Full Rack – Eight database servers each with 2 8-core Xeon processors and 256 GB of RAM, fourteen storage servers, 168 disk drives, spine switch for expansion
  • X3-8 Full Rack – Two database servers each with 8 10-core Xeon processors and 2 TB of RAM, fourteen storage servers, 168 disk drives, spine switch for expansion, no keyboard/video/mouse module

In general, the X3 series of Exadata machines is twice as powerful as the X2 series in a 'like for like' comparison.

Storage

How much raw storage you have depends on whether you choose High Capacity or High Performance drives: High Capacity drives provide 3 TB each of raw storage at 7,200 RPM, and High Performance drives provide 600 GB each at 15,000 RPM. A Quarter Rack or Eighth Rack configuration with High Capacity disks provides 108 TB of total raw storage, with roughly 40 TB available for data after normal ASM redundancy is configured. With High Performance disks the same units provide 21.1 TB of total raw storage, with approximately 8.4 TB usable under normal ASM redundancy. High redundancy reduces usable storage by roughly another third in both configurations; the tradeoff is an additional ASM mirror in case of disk failure, since high redundancy maintains three copies of each extent (the primary plus two mirrors) while normal redundancy maintains two (the primary plus one mirror).
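The relationship between raw and usable storage can be sketched with some simple arithmetic. This is a rough illustration, not Oracle's sizing method: the "usable" figures here only divide raw capacity by the number of extent copies, while Oracle's published usable-space numbers (such as the ~40 TB quoted above) also subtract the free-space headroom ASM reserves so it can rebalance after a disk failure.

```python
# Rough sizing sketch: Exadata raw vs. usable storage under ASM
# redundancy. Disk counts and sizes come from the article; the
# usable-space fractions are approximations (normal redundancy keeps
# two copies of each extent, high redundancy keeps three) and ignore
# the free-space reserve ASM needs for post-failure rebalancing.

def raw_tb(disk_count: int, disk_size_tb: float) -> float:
    """Total raw capacity across all disks, in TB."""
    return disk_count * disk_size_tb

def usable_tb(raw: float, redundancy: str) -> float:
    """Approximate usable capacity after ASM mirroring."""
    copies = {"normal": 2, "high": 3}[redundancy]
    return raw / copies

# Quarter Rack, High Capacity: 36 drives x 3 TB = 108 TB raw.
hc_raw = raw_tb(36, 3.0)
print(hc_raw)                       # 108.0
print(usable_tb(hc_raw, "normal"))  # 54.0 (before ASM's reserve)
print(usable_tb(hc_raw, "high"))    # 36.0
```

The gap between the 54 TB computed here and the roughly 40 TB quoted for normal redundancy is that reserved rebalancing headroom plus space for system diskgroups.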

The disks are accessed through the storage servers (or cells). Interestingly, there is no direct access to the storage from the database servers; the only way they can 'see' the disks is through ASM. In the X3-2 Quarter Rack and Eighth Rack configurations there are three storage cells, each controlling 12 disks. Each storage server provides two eight-core Xeon processors and 256 GB of RAM. The various Exadata configurations differ in the number of database servers (often referred to as 'compute nodes') and the number of storage servers, or cells; the greater the number of storage cells, the more storage the Exadata machine can control internally.
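Since every cell controls the same number of disks, the disk counts quoted for each configuration fall directly out of the cell counts. A minimal sketch, using the figures from the article (the configuration names and the 12-disks-per-cell ratio are the only inputs):

```python
# Disk capacity scales linearly with storage cell count:
# each Exadata storage cell controls 12 disks.

CELLS_PER_CONFIG = {
    "eighth": 3,    # same physical cells as the quarter rack
    "quarter": 3,
    "half": 7,
    "full": 14,
}
DISKS_PER_CELL = 12

def disk_count(config: str) -> int:
    """Total disk drives for a given rack configuration."""
    return CELLS_PER_CONFIG[config] * DISKS_PER_CELL

print(disk_count("quarter"))  # 36
print(disk_count("half"))     # 84
print(disk_count("full"))     # 168
```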

Smart Flash Cache

Another part of the Exadata performance package is the Smart Flash Cache: 1,600 GB of solid-state flash storage in each storage cell, spread across four Sun Flash Accelerator F40 PCIe cards. A Quarter Rack configuration (three storage cells) therefore offers 4.8 TB of flash storage; a Full Rack provides 22.4 TB. The flash is usable as a smart cache to service large volumes of random reads and writes, or it can be configured as flash disk devices and mounted as an ASM diskgroup.
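The flash totals follow the same per-cell arithmetic as the disks. A quick check, using decimal terabytes; published figures may be rounded slightly differently:

```python
# Smart Flash Cache capacity: 1,600 GB of flash per storage cell,
# so total flash is simply the cell count times 1.6 TB (decimal).

FLASH_GB_PER_CELL = 1600

def flash_tb(cells: int) -> float:
    """Total flash cache, in decimal TB, for a given cell count."""
    return cells * FLASH_GB_PER_CELL / 1000.0

print(flash_tb(3))   # 4.8  -- Quarter Rack (three cells)
print(flash_tb(14))  # 22.4 -- Full Rack (fourteen cells)
```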

Even More Storage

Expansion racks consist of both storage servers and disk drives. Three configurations are available: Quarter Rack, Half Rack and Full Rack, any of which can connect to any Oracle Exadata machine. For the Quarter Rack configuration an additional spine switch is necessary; the IP address for the spine switch is left unassigned during configuration, so if one is installed the address is available with no need to reconfigure the machine.

Besides adding storage, these racks also add computing power for Smart Scan operations: the smallest expansion rack contains four storage servers and 48 disk drives, adding eight six-core CPUs to the mix, while the largest provides 18 storage servers with 216 disk drives. Actual storage depends on whether the system uses High Capacity or High Performance disk drives; the drive types cannot be mixed in a single Exadata machine/expansion rack configuration, so if the Exadata machine uses High Capacity drives the expansion rack must also contain High Capacity disks. One reason for this requirement is that ASM stripes data across the total number of drives in a disk group, so the size and geometry of the disk units must be uniform across the storage tier.

The beauty of these expansion racks is that they integrate seamlessly with the existing storage on the host Exadata machine. Once the new disks are added to the diskgroups, ASM automatically triggers a rebalance to distribute the extents evenly across the total number of available disks. If the storage in the expansion rack is destined for a new diskgroup, the disk type does not need to match that of the Exadata system; the only requirement is that all disks in a single diskgroup be of the same type.
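The rebalance behavior described above can be illustrated with a toy model. This is only a sketch of the even-distribution goal, not ASM's actual allocation algorithm (which works at the extent level with mirroring and failure groups): extents striped across every disk in a diskgroup end up evenly spread, and redistributing after disks are added restores that even spread.

```python
# Toy illustration of why ASM rebalances after disks are added to a
# diskgroup: striping extents round-robin across all member disks
# leaves each disk holding roughly the same number of extents.
# (Hypothetical disk names; not real ASM allocation logic.)

from collections import Counter

def stripe(extent_count: int, disks: list) -> Counter:
    """Assign extents to disks round-robin; return extents per disk."""
    placement = Counter()
    for extent in range(extent_count):
        placement[disks[extent % len(disks)]] += 1
    return placement

before = stripe(1200, ["disk1", "disk2", "disk3"])
after = stripe(1200, ["disk1", "disk2", "disk3", "disk4"])  # disk added

print(before["disk1"])  # 400 -- evenly spread over three disks
print(after["disk1"])   # 300 -- evenly spread again after rebalance
```

This also hints at why uniform disk size and geometry matter: an even extent count per disk only translates into even space and I/O load when every disk in the group is the same.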

Summary

An Exadata machine is a complex configuration of database servers, storage servers, disk drives and an internal InfiniBand network, with modifications designed to address many performance issues in a unique way. It takes a 'divide and conquer' approach to query processing, offloading work to the storage cells, that can dramatically improve performance and reduce query response time. It also includes the Smart Flash Cache, a write-back cache that can handle large volumes of reads and writes and is well suited to OLTP systems; this cache can also be configured as flash disk. Additional storage is available in the form of Exadata Expansion Racks, which can be added to any Exadata configuration to extend the storage and add storage-cell computing power. The storage in the expansion rack must be the same type (High Capacity or High Performance) as in the Exadata machine if it is to be added to an existing diskgroup; should a new diskgroup be created from the expansion rack's disks, those disks need not match the ones in the Exadata machine, only each other within the diskgroup.


David Fitzjarrell
David Fitzjarrell has more than 20 years of administration experience with various releases of the Oracle DBMS. He has installed the Oracle software on many platforms, including UNIX, Windows and Linux, and monitored and tuned performance in those environments. He is knowledgeable in the traditional tools for performance tuning – the Oracle Wait Interface, Statspack, event 10046 and 10053 traces, tkprof, explain plan and autotrace – and has used these to great advantage at the U.S. Postal Service, American Airlines/SABRE, ConocoPhillips and SiriusXM Radio, among others, to increase throughput and improve the quality of the production system. He has also set up scripts to regularly monitor available space and set thresholds to notify DBAs of impending space shortages before they affect the production environment. These scripts generate data which can also be used to trend database growth over time, aiding in capacity planning. He has used RMAN, Streams, RAC and Data Guard in Oracle installations to ensure full recoverability and failover capabilities as well as high availability, and has configured a 'cascading' set of DR databases using the primary DR databases as the source, managing the archivelog transfers manually and monitoring, through scripts, the health of these secondary DR databases. He has also used ASM, ASMM and ASSM to improve performance and manage storage and shared memory.
