Saturday, November 2, 2019

What's new in Oracle Exadata Deployment Assistant in latest releases

About Oracle Exadata Deployment Assistant:

Oracle Exadata Deployment Assistant includes a configuration tool and a deployment tool.

Oracle Exadata Deployment Assistant uses the configuration file created by the configuration tool to configure the Oracle Exadata Database Machine. The Oracle Exadata Deployment Assistant configuration tool runs on a client. The client must run one of the following operating systems:

- Oracle Linux x86-64

- Oracle Solaris SPARC (64-bit)

- Microsoft Windows

- Apple OS X (64-bit)


Supported Releases:

This update of Oracle Exadata Deployment Assistant supports the following Oracle Database releases and platforms:

Note:

The requirements for Exadata storage servers and database servers listed in My Oracle Support note 888828.1 must be met before installing Oracle Database 12c Release 1 (12.1.0.2) or Oracle Database 12c Release 1 (12.1.0.1).

Oracle Database 19c

- Oracle Linux x86-64
- Oracle Solaris SPARC (64-bit)

Oracle Database 18c

- Oracle Linux x86-64
- Oracle Solaris SPARC (64-bit)

Oracle Database 12c Release 2 (12.2.0.1)

- Oracle Linux x86-64
- Oracle Solaris SPARC (64-bit)

Oracle Database 12c Release 1 (12.1.0.2)

- Oracle Linux x86-64
- Oracle Linux SPARC
- Oracle Solaris x86-64 (64-bit)
- Oracle Solaris SPARC (64-bit)

Oracle Database 11g Release 2 (11.2.0.4)

- Oracle Linux x86-64
- Oracle Solaris x86-64 (64-bit)
- Oracle Solaris SPARC (64-bit)

Note:

Oracle Grid Infrastructure release 11.2.0.4 is not supported on Oracle Linux 7. Oracle Database release 11.2.0.4.180717 or higher is supported on Oracle Linux 7 but you must use Oracle Grid Infrastructure release 12.1.0.2.180717 or higher.

Using Oracle Exadata Deployment Assistant to Create the Configuration Files
Oracle Exadata Deployment Assistant (OEDA) configures the engineered system based on the configuration information entered by the customer.

OEDA is provided as a compressed file. To use OEDA, download the compressed file, extract the file on a client machine, and then follow the instructions in one of the following sections.

OEDA is now available with two different interface types: Java-based, and a new, web-based interface. Both the Java-based and the web-based versions of OEDA are included as part of the same patch download and delivered as part of the normal OEDA release cycle. Oracle Exadata Database Machine and Oracle Zero Data Loss Recovery Appliance use the web-based version of OEDA. Oracle SuperCluster configurations continue to be configured using the OEDA Java-based interface.


Topics:

Using the OEDA Web-based Interface
The web-based interface for OEDA is available starting with the October 2018 release of OEDA.
Using the OEDA Java-based Interface
If you downloaded OEDA release September 2018 or earlier, or you are configuring Oracle SuperCluster, then you use the Java-based version of OEDA.

Using the OEDA Web-based Interface

The web-based interface for OEDA is available starting with the October 2018 release of OEDA.

OEDA Web is available for Linux, OS X, and Windows. The OEDA web-based interface can import OEDA XML configuration files created previously with the Java-based version of OEDA.

Note:

The OEDA web interface is supported only on Chrome and Firefox browsers.

Extract the contents of the downloaded compressed file. When you extract the contents, it creates a directory based on the operating system, such as linux-x64, macosx-x64, or windows-i586, to store the extracted files. This is referred to as the OEDA Home directory. The compressed file must be extracted into a directory with no spaces in its path.

Before you can use the web-based interface, you must install and run the Web Application Server. In the created directory, locate and run the installOedaServer program. You do not have to be logged in as an administrator user to run this program. Use one of the following commands, where the -p option specifies the port number:

On Linux, Apple OS X, and UNIX:

./installOedaServer.sh -p 7001 [-g]
On Microsoft Windows:

installOedaServer.cmd -p 7001 [-g]

Note:

You can specify a non-default port (for example, 8002) by using a value other than 7001. However, it is not recommended to use port numbers less than 1024.

By default, the OEDA Web Server will only listen to the localhost (127.0.0.1) interface. Using the command line option -g will enable the OEDA Web Server to listen on all network interfaces.

When you run the installOedaServer program, it first stops and removes any previous installation of the OEDA Web Server. Then it installs and starts the latest version of the OEDA Web Server on the local system.

After the OEDA Web Server has been installed, you can access the web-based application by opening a browser and entering the following URL:

http://localhost:7001/oeda

Refer to My Oracle Support note 2460104.1 for more information.

Managing the OEDA Web Server after Installation

After you have installed the OEDA Web Server, you can perform basic management tasks on the web server.
About the OEDA Web-Based Interface Pages


Thursday, June 20, 2019

Network interfaces disappear upon reboot when linked to IB switches?

Issue:

InfiniBand SUN DCS 36p switch port is set to auto-disabled when the link exhibits sub-optimal link speed or bandwidth.

Error in messages file:

ip : [ID 505317 kern.error] ibd: DL_ATTACH_REQ failed: DL_SYSERR (errno 22)
ip:  [ID 590039 kern.error] ibd: DL_BIND_REQ failed: DL_OUTSTATE 
ip: [ID 312130 kern.error] ibd: DL_UNBIND_REQ failed: DL_OUTSTATE 
ip: [ID 505317 kern.error] ibd: DL_ATTACH_REQ failed: DL_SYSERR (errno 22)

Below is a description of the autodisable functionality:
Switch chip ports and their connectors can be configured to automatically disable should their links exhibit high error rates or sub-optimal link speed or width.

You use the autodisable command to add connectors to the autodisable list, which has two parts: one for connectors whose links fail from high error rates, and another for connectors whose links fail from suboptimal link speed or width. A connector can be configured for both parts.

The autodisable feature monitors the following to determine if a connector and its respective link are experiencing high error rates:
     SNMP traps
     Oracle ILOM event log
     Syslog
     Email alerts
The autodisable feature also monitors the link speed and width, and if any of the following combinations are discovered, the link is considered suboptimal:
     1x SDR
     1x DDR
     1x QDR
     4x SDR
     4x DDR
 As a side note, this issue may also be caused by a bad partner link or misconfiguration.

Solution:

This feature is enabled starting with firmware 2.x. When a port goes down with AutomaticBadSpeedOrWidth, re-enable it using enableswitchport --automatic once you have confirmed that no faults are reported at the physical layer.
For more details, refer to My Oracle Support Doc ID 1605955.1 on how to enable the switch port when it is auto-disabled.

Saturday, May 25, 2019

I/O Issues between DB and Storage tiers in Exadata ?

How storage servers detect and cancel or repair slow I/Os and hung I/Os, and confine sick disks.

I/O is constantly flowing between the database and storage tiers.
Let's see which problems can be handled at the storage tier.

1. Slow I/O?    ->  Cell I/O Latency Capping

  What happens if we hit slow I/Os in the storage tier? Exadata has a feature called Cell I/O Latency Capping, which monitors I/O timings; if any disk is taking too long, it redirects reads to a mirror copy and writes to an alternate healthy disk.

2. Hung I/O?      ->  I/O Hang Detection

  A truly hung I/O that escalates all the way up to a controller-level problem can be really bad; it can stall your entire system. I/O Hang Detection helps with detection and repair, and may even reset a whole cell if the problem is severe enough, to make sure the system does not stop.

3. Sick disk?     -> Predictive failure / confinement

If a disk is about to die and its I/O service times are really bad, the predictive failure feature built into the controllers uses heuristics to tell when the disk is going to fail and puts it into predictive failure mode. This feature monitors whether disk and flash I/Os are being serviced properly across all the different components; if they are not, it can offline the sick disk.


What happens if there is an undiscovered hardware or software issue on the storage tier, such as a bug or a network glitch on the InfiniBand network connecting to the cells?

4. Undiscovered hardware / software issue?  -> Database tier I/O latency capping

From the database tier, Exadata monitors how long I/Os are taking. If a problem is detected, it cancels the affected I/Os and redirects them to a healthy cell.




Tuesday, May 14, 2019

Exadata X8

Technical Specifications:

> Latest Intel Xeon processors
> Latest PCIe NVMe flash technology
> 25 Gbps Ethernet for client connectivity


Exadata X8-2 Features:
> Up to 912 CPU cores and 28.5 TB memory per rack for database processing
> Up to 576 CPU cores per rack dedicated to SQL processing in storage
> From 2 to 19 database servers per rack
> From 3 to 18 storage servers per rack
> Up to 920 TB of flash capacity (raw) per rack
> Up to 3.0 PB of disk capacity (raw) per rack
> Hybrid Columnar Compression often delivers 10X-15X compression ratios
> 40 Gb/second (QDR) InfiniBand Network
> Complete redundancy for high availability


Exadata X8-2 Benefits:
> Pre-configured, pre-tested system optimized for all database applications
> Uncompressed I/O bandwidth of up to 560 GB/second per full rack from SQL
> Ability to perform up to 4.8M 8K database read I/O operations, or 4.3M 8K flash write I/O operations per second per full rack
> Easily add compute or storage servers to meet the needs of any size application
> Scale by connecting multiple Exadata Database Machine X8-2 racks or Exadata Storage Expansion Racks. Up to 18 racks can be  connected by simply adding InfiniBand cables and internal switches. Larger configurations can be built with external InfiniBand switches
 
New hardware: Extended (XT)
-> Much lower cost Exadata storage
       - Used for infrequently accessed, older, or regulatory data
-> Better performance:
       - 560 GB/sec I/O throughput
       - 60% more for all-flash storage vs X7
-> 6.57 million OLTP read IOPS
       - 25% more per storage server vs X7
       - 3.5 million IOPS under 250 microseconds
-> Dramatically faster than leading all-flash arrays in every metric


Smart System Software:
Analytics:
Smart Scan technology:
- Exadata automatically offloads data-intensive SQL operations to storage
 - Unique Smart Scan technology offloads SQL processing to storage and delivers:
    - Over 560 GB/sec throughput while offloading database CPUs
 - Unique algorithms offload Data Mining, Decryption, Aggregation and Backups to storage
- Exadata automatically reduces I/O
 - Unique database-aware flash caching yields the speed of PCI flash with the capacity of disk
 - Unique storage indexes eliminate I/O that is not relevant to a particular query
- Exadata uses an analytics-optimized columnar format
 - Unique Hybrid Columnar Compression reduces space and speeds analytics by up to an order of magnitude
Exadata brings in-memory analytics performance to storage:
In-memory columnar scans, but also in-flash columnar scans at the storage level
 - As Exadata flash throughput approaches memory throughput, the SQL bottleneck moves from I/O to CPU
 - Exadata storage automatically transforms table data into in-memory database columnar formats in the Exadata Flash Cache
   - Enables fast vector processing for storage server queries
 - Uniquely optimizes next-generation flash as memory
   - Now works for both row-format OLTP databases and Hybrid Columnar Compressed analytics databases
 Preview - Intel Optane DC Persistent Memory will be enabled for columnar data in database and Exadata servers
for more speed and more columnar storage for analytics


OLTP:

Exadata automatically eliminates the traditional OLTP bottleneck: random I/O.
Through its unique scale-out storage, ultra-fast NVMe flash, and ultra-fast InfiniBand it delivers:
- Unique Smart Flash Logging automatically optimizes OLTP logging to flash
Exadata automatically eliminates OLTP stalls from failed or sick components
  - Unique detection of server failures without a long timeout avoids system hangs
  - Unique sub-second redirection of I/Os around sick devices avoids database hangs
Exadata automatically eliminates inter-node cluster coordination bottlenecks
 - Unique direct-to-wire protocol gives 3x faster inter-node OLTP messaging
 - Unique Smart Fusion Block Transfer eliminates the log write on inter-node block moves
 - Unique RDMA protocol coordinates transactions between nodes
Persistent memory for even faster OLTP in storage
 - Exadata storage servers will add a persistent memory OLTP accelerator in front of flash memory
   - Using Intel Optane DC Persistent Memory
 - RDMA bypasses the software stack, giving 20x lower latency to remote persistent memory
 - Persistent memory is mirrored across storage servers for fault tolerance
 - Persistent memory used as a shared cache increases its value 10x versus using it directly as expensive storage
 - Makes it cost-effective to run multi-TB databases in memory

Consolidation:
Exadata uniquely optimizes mixed workloads and consolidation.
Completely automatic, no management required:
 - Exadata automatically prioritizes latency-sensitive operations
    - Unique prioritization of critical network messages for locks, cache fusion, logging, etc.
    - Unique prioritization of OLTP I/O over analytic or batch I/O
 - Exadata automatically prioritizes important workloads based on user policies
     - Unique prioritization of CPU and I/O by job, user, service, PDB, session, or SQL
 - Exadata automatically provides isolation between multiple tenants
    - Unique prioritization and separation by database or pluggable database


Software Release:
> Exadata System Software 19.1.0.0.0 and 19.2.0.0.0
> Oracle Linux 7.6
> AIDE (Advanced Intrusion Detection Environment)
> Automatic monitoring of CPU, network and memory using machine learning
 - Detects and alerts on stuck processes, memory leaks, flaky networks, etc.


Automated management:
 - Automation and optimization of configuration, updates, performance and management, culminating in fully autonomous infrastructure and database





Monday, January 28, 2019

Switch Port xx is down (AutomaticBadSpeedOrWidth)



 

Solution

For versions 1.3.3-2, 2.1.3-4 and above, make sure the file /conf/disabledports.conf doesn't exist.
# version
SUN DCS 36p version: 1.3.3-2
Build time: Apr  4 2011 11:15:19
SP board info:
Manufacturing Date: 2011.12.25
Serial Number: "NCD7K0152"
Hardware Revision: 0x0006
Firmware Revision: 0x0000
BIOS version: SUN0R100
BIOS date: 06/22/2010

# getportstatus 29
Port status for connector 3B Switch Port 29
Adminstate:......................Disabled (AutomaticBadSpeedOrWidth)
LinkWidthEnabled:................1X or 4X
LinkWidthSupported:..............1X or 4X
LinkWidthActive:.................4X
LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkState:.......................Down
PhysLinkState:...................Disabled
LinkSpeedActive:.................2.5 Gbps
LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps

 
The command below works on version 2.1.3-4 but not on version 1.3.3-2:

# enableswitchport --reason=AutomaticBadSpeedOrWidth 29
Invalid reason AutomaticBadSpeedOrWidth
Usage:
enableswitchport [--reason=reason] connector | [ibdevicename] port
Values for ibdevicdename: Switch
Values for port: 1-36
Values for connector: 0A-17A, 0B-17B
Values for reason: Blacklist, Partition

 
Somehow /conf/disabledports.conf had these settings. The only possibility I can think of is that this system once had firmware with the autodisable feature, two ports hit this condition, and then the system was downgraded and this file did not get updated. The file should be renamed or removed completely.
cat /conf/disabledports.conf
# List of Disabled ports
# Format:
# ibdev port Adminstate
#Switch 34 AutomaticBadSpeedOrWidth
#Switch 29 AutomaticBadSpeedOrWidth

After removing or renaming the file /conf/disabledports.conf, enable the port as follows:
enableswitchport Switch 29
getportstatus 29
Port status for connector 5B Switch Port 29
Adminstate:......................Enabled
LinkWidthEnabled:................1X or 4X
LinkWidthSupported:..............1X or 4X
LinkWidthActive:.................1X
LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkState:.......................Active
PhysLinkState:...................LinkUp
LinkSpeedActive:.................10.0 Gbps
LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps

ibportstate 29 query
PortInfo:
# Port info: Lid 29 port 0
LinkState:.......................Active
PhysLinkState:...................LinkUp
LinkWidthSupported:..............1X or 4X
LinkWidthEnabled:................1X or 4X
LinkWidthActive:.................4X
LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedActive:.................10.0 Gbps


#listlinkup
Connector  4B Present <-> Switch Port 27 is up (Enabled)
Connector  5B Present <-> Switch Port 29 is up (Enabled)
Connector  6B Present <-> Switch Port 36 is up (Enabled)

 
To disable the port again, you can use:
disableswitchport switch 29

Now, for SUN DCS 36p version 2.1.3-4 and above:
# listlinkup

Connector  5B Present <-> Switch Port 29 is down (AutomaticBadSpeedOrWidth) <----------

# enableswitchport  --automatic Switch 29
Enable connector 5B Switch port 29
Adminstate:......................Enabled
LinkWidthEnabled:................1X or 4X
LinkWidthSupported:..............1X or 4X
LinkWidthActive:.................4X
LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkState:.......................Down
PhysLinkState:...................PortConfigurationTraining
LinkSpeedActive:.................2.5 Gbps
LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps
NeighborMTU:.....................2048
OperVLs:.........................VL0-7
#  ibportstate 3 29
PortInfo:
# Port info: Lid 3 port 29
LinkState:.......................Down
PhysLinkState:...................Polling
LinkWidthSupported:..............1X or 4X
LinkWidthEnabled:................1X or 4X
LinkWidthActive:.................4X
LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedActive:.................10.0 Gbps
#  ibportstate 3 29
PortInfo:
# Port info: Lid 3 port 29
LinkState:.......................Down
PhysLinkState:...................Polling
LinkWidthSupported:..............1X or 4X
LinkWidthEnabled:................1X or 4X
LinkWidthActive:.................4X
LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedActive:.................10.0 Gbps
#
#  ibportstate 3 29
PortInfo:
# Port info: Lid 3 port 29
LinkState:.......................Active
PhysLinkState:...................LinkUp
LinkWidthSupported:..............1X or 4X
LinkWidthEnabled:................1X or 4X
LinkWidthActive:.................4X
LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedActive:.................2.5 Gbps
Peer PortInfo:
# Port info: Lid 3 DR path slid 65535; dlid 65535; 0,29 port 2
LinkState:.......................Active
PhysLinkState:...................LinkUp
LinkWidthSupported:..............1X or 4X
LinkWidthEnabled:................1X or 4X
LinkWidthActive:.................4X
LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedActive:.................2.5 Gbps
ibwarn: [2687] validate_speed: Peer ports operating at active speed 1 rather than  4 (10.0 Gbps)
[root@scam07sw-ibb0 IBdata]# ibportstate 3 29
PortInfo:
# Port info: Lid 3 port 29
LinkState:.......................Down
PhysLinkState:...................Disabled
LinkWidthSupported:..............1X or 4X
LinkWidthEnabled:................1X or 4X
LinkWidthActive:.................4X
LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedActive:.................2.5 Gbps
[root@scam07sw-ibb0 IBdata]#


This might explain why the port got disabled in the first place: the peer port has an SDR link speed instead of QDR. It could be either a cable or an HCA problem; try swapping cables.
 

Thursday, January 3, 2019

Negative Values of USABLE_FILE_MB

When you lose multiple disks from multiple failure groups, then you could lose both the primary and the redundant copies of your data. In addition, if you do not have enough capacity to restore redundancy, then Oracle ASM can continue to operate. However, if another disk fails, then the system may not be able to tolerate additional failures.
 
The V$ASM_DISKGROUP view contains the following columns that contain information to help you manage capacity:
 
SQL> SELECT name, type, total_mb, free_mb, required_mirror_free_mb,
     usable_file_mb FROM V$ASM_DISKGROUP;

NAME         TYPE     TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
------------ ------ ---------- ---------- ----------------------- --------------
DATA         NORMAL    4194304     398204                 2260992        -931394

 
  • REQUIRED_MIRROR_FREE_MB indicates the amount of space that must be available in a disk group to restore full redundancy after the worst failure that can be tolerated by the disk group without adding additional storage. This requirement ensures that there are sufficient failure groups to restore redundancy. Also, this worst failure refers to a permanent failure where the disks must be dropped, not the case where the disks go offline and then back online.
    The amount of space displayed in this column takes the effects of mirroring into account. The value is computed as follows:
    • A normal redundancy disk group with more than two REGULAR failure groups
      The value is the total raw space for all of the disks in the largest failure group. The largest failure group is the one with the largest total raw capacity. For example, if each disk is in its own failure group, then the value would be the size of the largest capacity disk.
    • A high redundancy disk group with more than three REGULAR failure groups
      The value is the total raw space for all of the disks in the two largest failure groups. 
  • USABLE_FILE_MB indicates the amount of free space, adjusted for mirroring, that is available for new files to restore redundancy after a disk failure. USABLE_FILE_MB is computed by subtracting REQUIRED_MIRROR_FREE_MB from the total free space in the disk group and then adjusting the value for mirroring.
  • TOTAL_MB is the total usable capacity of a disk group in megabytes. The calculations for data in this column take the disk header overhead into consideration. The disk header overhead depends on the number of Oracle ASM disks and Oracle ASM files. This value is typically about 1% of the total raw storage capacity.
  • FREE_MB is the unused capacity of the disk group in megabytes, without considering any data imbalance.
USABLE_FILE_MB calculation:
(FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2 = USABLE_FILE_MB => Normal redundancy
(FREE_MB - REQUIRED_MIRROR_FREE_MB) / 3 = USABLE_FILE_MB => High redundancy
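Plugging the numbers from the example output above into the normal-redundancy formula reproduces the negative value (a quick sketch using shell arithmetic):

```shell
# USABLE_FILE_MB for a normal-redundancy disk group:
# (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2
FREE_MB=398204
REQUIRED_MIRROR_FREE_MB=2260992
USABLE_FILE_MB=$(( (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2 ))
echo "USABLE_FILE_MB = ${USABLE_FILE_MB}"   # prints USABLE_FILE_MB = -931394
```

Because FREE_MB is already far below REQUIRED_MIRROR_FREE_MB here, the result is negative before mirroring is even applied.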

Due to the relationship between FREE_MB, REQUIRED_MIRROR_FREE_MB, and USABLE_FILE_MB, USABLE_FILE_MB can become negative. Although this is not necessarily a critical situation, it does mean that:
  • Depending on the value of FREE_MB, you may not be able to create new files.
  • The next failure might result in files with reduced redundancy.
If USABLE_FILE_MB becomes negative, it is strongly recommended that you add more space to the disk group as soon as possible.
  

Tuesday, December 25, 2018

IORM plan on Exadata

Configuring an IORM plan on Exadata


The IORM plan can be configured using the ALTER IORMPLAN command in the command-line interface (CellCLI) utility on each Exadata storage cell. It consists of two parameters: dbplan and catplan. While the "dbplan" is used to create the I/O resource directives for the databases, the "catplan" is used to allocate resources by workload category consolidated on the target system. Both parameters are optional; for example, if catplan is not specified, category-wise I/O allocation does not take place. The directives in an inter-database plan specify allocations to databases, rather than to consumer groups. To create a database plan, IORM uses the attributes listed below.
  • name - Specify the database name or a profile name (profiles from Exadata Storage Server Software release 12.1.2.1). Use "other" when specifying allocation and "default" when specifying share for databases.
  • level - Specify the level of allocation. In a multi-level plan, if the current level is unable to utilize the allocated resources, the resources are cascaded to the next level.
  • role - Specify the database role, i.e., primary or standby, in an Oracle Data Guard environment. It indicates that the directive is applicable only if the database is running in the specified role. For the "other" and "default" directives, the attribute is not applicable.
  • allocation/share - Specify the resource allocation to a specific database in terms of a percentage or shares. If you specify both allocation and share, the directive is invalidated. With percentage-based allocation, you can specify a "level" so that unused resources cascade to successive levels. There can be a maximum of eight levels, and the sum of all allocations at a level must not exceed 100. Likewise, there can be a maximum of 32 directives.
With share-based allocation, you do not have to specify levels or an allocation percentage. A share can be a value between 1 and 32, which represents the degree of importance of a specific database. Share-based allocation supports up to 1024 directives.
  • limit - Specify the maximum disk utilization limit for a database. This is a handy directive in consolidation exercises because it helps in achieving consistent I/O performance and a pay-for-performance capability.
  • flashCache - Specify whether or not a database can make use of the flash cache.
  • flashLog - Specify whether or not a database can make use of the flash log.
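As an illustration, a simple share-based inter-database plan combining these attributes might look like the following sketch (the database names finance and reporting, and all of the values, are hypothetical):

```
CellCLI> ALTER IORMPLAN dbplan=((name=finance, share=8, flashcache=on, flashlog=on), (name=reporting, share=2, limit=50), (name=default, share=1))
```

Here finance gets four times the importance of reporting, reporting is capped at 50% disk utilization, and "default" covers every database without an explicit directive.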
 
From Exadata cell version 11.2.3.2 onward, IORM is enabled by default with the BASIC objective. The BASIC objective lets IORM protect high-latency small I/O requests and manage the flash cache. To enable IORM for user-defined plans, you must set the objective to AUTO. To disable IORM, set the objective back to BASIC.

CellCLI> ALTER IORMPLAN OBJECTIVE = AUTO;
 
 

The IORM Objective

The objective is an essential setting in an IORM plan. It is used to optimize how I/O requests are issued based on the workload characteristics. An IORM objective can be basic, auto, low_latency, high_throughput, or balanced.
  • basic - This is the default setting and does not deal with user-defined plans. It only guards high-latency small I/Os while maximum throughput is maintained.
  • low_latency - The objective is to reduce latency by capping the number of concurrent I/O requests maintained in the disk drive buffer. This setting is suitable specifically for OLTP workloads.
  • high_throughput - This objective is used for warehouse workloads to maximize throughput by maintaining a larger buffer of concurrent I/O requests.
  • balanced - This objective balances low latency and high throughput.
  • auto - This objective lets IORM decide the appropriate objective depending on the active workload on the cell.
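For example, on a cell serving a predominantly OLTP workload you might set and then verify the objective like this (a sketch; output formatting can vary by software release):

```
CellCLI> ALTER IORMPLAN objective = low_latency
CellCLI> LIST IORMPLAN ATTRIBUTES objective
```

Remember to apply the same setting on every storage cell so the whole grid behaves consistently.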

 

Managing Exadata Flash Cache

One of the key enablers of Exadata's extreme performance and scalability is the Exadata Smart Flash Cache. The I/O Resource Manager allows enabling and disabling the use of the flash cache by the multiple databases consolidated on an Exadata machine. An IORM plan directive can set the "flashCache" attribute to prevent a database from using the flash cache. If the attribute is not specified in the directive, the database is assumed to be using the flash cache. Disabling the flash cache for a database requires considerable thought and strong justification. The usage of flash logs can also be controlled through IORM plans. You can set the attribute "flashLog" in the plan directive to enable or disable flash log usage for a database. But since it consumes a very small portion of the total flash, it is recommended to make use of the flash log.
 
Starting with Exadata Storage Server software release 12.1.2.1, IORM can also manage flash I/Os along with disk I/Os using a feature known as Flash IORM. OLTP flash I/Os are automatically prioritized over scan I/Os, thus ensuring faster OLTP response times. Based on the allocations made in the IORM plan directives, the flash bandwidth can be distributed across multiple databases. The distribution of excess flash bandwidth between scans cascades up to the consumer groups in each database.
Also new in Exadata Storage Server software release 12.1.2.1, Flash Cache Resource Management allows users to configure the minimum and maximum amount of flash cache that a database can consume. Of the new attributes, "flashCacheMin" determines the minimum flash cache guaranteed for a database, while "flashCacheLimit" is a soft upper limit. The "flashCacheLimit" is enforced only when the flash cache is full.
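A directive combining the flash cache attributes might look like the following sketch (the database name finance and the sizes are hypothetical):

```
CellCLI> ALTER IORMPLAN dbplan=((name=finance, share=8, flashCacheMin=50G, flashCacheLimit=100G), (name=default, share=1))
```

With this plan, finance is guaranteed at least 50 GB of flash cache, and its soft 100 GB ceiling is enforced only once the flash cache fills up.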