Tuesday, December 25, 2018

IORM plan on Exadata

Configuring an IORM plan on Exadata


An IORM plan is configured using the ALTER IORMPLAN command in the CellCLI utility on each Exadata storage cell. It consists of two parameters: dbplan and catplan. The "dbplan" creates the I/O resource directives for the databases, while the "catplan" allocates resources by workload category across the databases consolidated on the target system. Both parameters are optional; for example, if catplan is not specified, no category-wise I/O allocation takes place. The directives in an inter-database plan specify allocations to databases, rather than to consumer groups. A database plan directive is built from the attributes listed below.
  • name - Specify the database name or, starting with Exadata Storage Server Software release 12.1.2.1, a profile name. Use "other" when specifying an allocation and "default" when specifying a share for the remaining databases.
  • level - Specify the allocation level. In a multi-level plan, resources that the current level cannot consume cascade to the next level.
  • role - Specify the database role, i.e. primary or standby, in an Oracle Data Guard environment. The directive applies only when the database is running in the specified role. The attribute is not applicable to "other" and "default" directives.
  • allocation/share - Specify the resource allocation to a specific database as a percentage or as a number of shares. If you specify both allocation and share, the directive is invalidated. With percentage-based allocation, you can specify a "level" so that unused resources cascade to successive levels. There can be a maximum of eight levels, the sum of all allocations at a level must not exceed 100, and there can be a maximum of 32 directives.
With share-based allocation, you do not specify levels or percentages. A share is a value between 1 and 32 that represents the relative importance of a database. Share-based plans support up to 1024 directives.
  • limit - Specify the maximum disk utilization limit for a database. This directive is handy in consolidation exercises because it helps achieve consistent I/O performance and enables a pay-for-performance model.
  • flashCache - Specify whether or not a database can use the flash cache.
  • flashLog - Specify whether or not a database can use the flash log.
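Putting these attributes together, an inter-database plan can be created with a single ALTER IORMPLAN command. The following is a minimal sketch; the database names PROD and TEST and the share and limit values are illustrative assumptions, not taken from any real system:

CellCLI> ALTER IORMPLAN dbplan=((name=PROD, share=16, limit=80, role=primary), -
                                (name=TEST, share=4), -
                                (name=default, share=1))

The trailing "-" is the CellCLI line-continuation character; the command can equally be entered on a single line.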
 
From Exadata cell version 11.2.3.2 onward, IORM is enabled by default with the BASIC objective. The BASIC objective lets IORM protect high-latency small I/O requests and manage the flash cache. To enable IORM for user-defined plans, you must set the objective to AUTO. To disable IORM, set the objective back to BASIC.

CellCLI> ALTER IORMPLAN OBJECTIVE = AUTO;
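The active plan and objective can be verified at any time with the LIST command:

CellCLI> LIST IORMPLAN DETAIL

This displays the plan name, the catPlan and dbPlan contents, the current objective, and the plan status for the cell.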
 
 

The IORM Objective

The objective is an essential setting in an IORM plan: it tunes how I/O requests are issued based on the workload characteristics. An IORM objective can be basic, auto, low_latency, high_throughput, or balanced.
  • basic - This is the default setting; user-defined plans are not enforced. IORM only protects small I/Os from high latency while maintaining maximum throughput.
  • low_latency - The objective is to reduce latency by capping the number of concurrent I/O requests kept in the disk drive buffer. This setting is best suited to OLTP workloads.
  • high_throughput - The objective is to maximize throughput for data warehouse workloads by keeping a larger buffer of concurrent I/O requests.
  • balanced - The objective balances low latency and high throughput.
  • auto - The objective lets IORM choose the appropriate objective based on the active workload on the cell.
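For example, to bias every cell toward OLTP latency, the objective can be set to low_latency; with dcli the same command can be pushed to all cells in one step (the cell group file cell_group and the celladmin user are assumptions that depend on your setup):

CellCLI> ALTER IORMPLAN OBJECTIVE = low_latency

dcli -g cell_group -l celladmin "cellcli -e 'ALTER IORMPLAN OBJECTIVE = low_latency'"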

 

Managing Exadata Flash Cache

One of the key enablers of Exadata's extreme performance and scalability is the Exadata Smart Flash Cache. The I/O Resource Manager allows enabling or disabling flash cache usage per database when multiple databases are consolidated on an Exadata machine. An IORM plan directive can set the "flashCache" attribute to prevent a database from using the flash cache; if the attribute is not specified, the database is allowed to use it. Disabling the flash cache for a database requires careful thought and strong justification. Flash log usage can also be controlled through IORM plans by setting the "flashLog" attribute in a plan directive. But since the flash log consumes only a very small portion of total flash, the recommendation is to leave it enabled.
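As a sketch, a directive that turns off the flash cache but retains the flash log for a single database could look like this (the database name DEV and the share values are hypothetical examples):

CellCLI> ALTER IORMPLAN dbplan=((name=DEV, share=2, flashCache=off, flashLog=on), -
                                (name=default, share=1))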
 
Starting with Exadata Storage Server software release 12.1.2.1, IORM can also manage flash I/Os along with disk I/Os using a feature known as Flash IORM. OLTP flash I/Os are automatically prioritized over scan I/Os, ensuring fast OLTP response times. Flash bandwidth is distributed across multiple databases based on the allocations in the IORM plan directives, and the distribution of excess flash bandwidth between scans cascades to the consumer groups in each database.
Another new feature in Exadata Storage Server software release 12.1.2.1, Flash Cache Resource Management, allows users to configure the minimum and maximum amount of flash cache a database can consume. Two new attributes support this: "flashCacheMin" sets the minimum flash cache guaranteed to a database, while "flashCacheLimit" sets a soft upper limit that is enforced only when the flash cache is full.
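A sketch of the new attributes follows; the database name and the sizes are illustrative assumptions only:

CellCLI> ALTER IORMPLAN dbplan=((name=PROD, share=8, flashCacheMin=100G, flashCacheLimit=300G), -
                                (name=default, share=1))

Here PROD is guaranteed at least 100 GB of flash cache, and may exceed the 300 GB soft limit only while the flash cache still has free space.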
 

Tuesday, December 18, 2018

[CELL-05651] [oracle.ossmgmt.ms.core.MSCellMetricDef] File system "/opt/oracle" is now 100% used.

CELL-01514: Connect Error. Verify that Management Server is listening at the specified HTTP port: 8888


 

Oracle Exadata Storage Server Software - Version 18.1.0.0.0 to 18.1.4.0.0 [Release 12.2]
Information in this document applies to any platform.

 

Symptoms

File system "/opt/oracle" is 100% full on Cell

Cause

Bug 26995980 EXADATA /OPT/ORACLE FULL DUE TO ACCESS LOGS FULL OF /CLISERVICE MESSAGES

Bug 27525029 CONTENT INCLUSION OF 26995980 IN EXADATA PSU 18.1.5.0.0

Solution

This is a known issue caused by Base Bug 26995980 'EXADATA /OPT/ORACLE FULL DUE TO ACCESS LOGS FULL OF /CLISERVICE MESSAGES'

Release Notes:

WebServer logs (access.log) are not removed after reaching allowed count number.

--

No other workarounds have been found. The recommendation is to manually remove the older access log files to free up the space.
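A cleanup along the following lines can be used on each affected cell. The exact directory holding the MS WebServer access logs varies by cell software version, so the find-based approach below (and the 30-day age threshold) is an assumption; always review the listed files before deleting anything:

# list access log files under /opt/oracle older than 30 days -- review first
find /opt/oracle -name 'access.log*' -mtime +30 -ls

# after review, remove them to free up space
find /opt/oracle -name 'access.log*' -mtime +30 -exec rm -f {} +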

The fix for Bug 26995980 is included in Exadata release 18.1.5.0.0 and later.

Reference:

Bug 26995980 EXADATA /OPT/ORACLE FULL DUE TO ACCESS LOGS FULL OF /CLISERVICE MESSAGES

Bug 27525029 CONTENT INCLUSION OF 26995980 IN EXADATA PSU 18.1.5.0.0
 

Tuesday, December 11, 2018

Where to find supported versions for Exadata


 

Purpose

This document lists the software patches and releases for Oracle Exadata Database Machine. This document includes versions for both the database servers and storage servers of Oracle Exadata Database Machine with database servers running Intel x86-64 processors.

For an index and references to the most frequently used My Oracle Support Notes with respect to Oracle Exadata and Oracle Exadata Database Machine environments, refer to the Master Note for Oracle Exadata Database Machine and Exadata Storage Server
Note 1187674.1.

Scope

The information in this document applies only to Exadata software 11.2 and higher. It does not apply to any previous version of Exadata software. Current releases for other Exadata software versions are maintained in a different note.

Note: The currently supported versions may change frequently, so it is important to review this document immediately prior to any Oracle Exadata Database Machine deployment.

Details 


Latest Releases and Patching News

Before upgrading, see the Requirements for Exadata Feature Usage section in this document for software requirements that may necessitate pre-upgrade patch application to other software in order to support specific Exadata features or patch application methods.
  • New Oracle Exadata Deployment Assistant (OEDA) release Nov 2018 - Supports 18 (18.1.0-18.4.0), 12.2.0.1 (BP170620-RU181016), 12.1.0.2 (BP1-BP181016) and 11.2.0.4 (BP1-BP181016)
  • New Exadata 18.1.10.0.0 (Note 2463368.1)
  • New QFSDP release - Quarterly Full Stack Download Patch (QFSDP) Oct 2018
  • New Exadata 19.1.0.0.0 (Note 2334423.1)
    • Requires OEDA Oct 2018 or later.
    • Verify minimum Grid Infrastructure and Database version requirements are met before updating to this release. See Note 2334423.1 for details
    • Database servers configured as physical or domU move to Oracle Linux 7.5. Verify compatibility of custom-installed software with OL7 before updating to this release.
  • New 18c Database release - 18.4.0.0.181016 Release Update
  • New 12.2.0.1 Database release - 12.2.0.1.181016 Release Update
  • New 12.1.0.2 Database release - 12.1.0.2.181016 Database Proactive Bundle Patch
  • New 11.2.0.4 Database release - 11.2.0.4.181016 Database Patch for Exadata
  • Updated ACFS drivers required for the most efficient CVE-2017-5715 (Spectre variant 2) mitigation in Exadata versions >= 18.1.5.0.0 and >= 12.2.1.1.7 are included in the July 2018 quarterly database releases. Earlier quarterly database releases (April 2018 and earlier) still require a separate ACFS patch. See Document 2356385.1 for details.
  • Oracle Database and Grid Infrastructure Upgrade Recommendations
    • If you are currently running 11.2.0.4 or 12.1.0.2, review the upgrade recommendations in Document 742060.1 to help you stay within the guidelines of Lifetime Support and Error Correction Policies.
 

Exadata Software Updates Overview and Guidelines

For an explanation and overview of Oracle Exadata Database Machine updates, and guidelines for applying and testing software on Exadata, refer to Document 1262380.1.

Exadata Software and Hardware Maintenance Planning

For information about Exadata Software and Hardware Support Lifecycle, see Document 1570460.1.
For information about planning for software and hardware maintenance, see Document 1461240.1.

Critical Issues

Review Document 1270094.1 for Exadata Critical Issues.

Security-Related Guidance

Review Document 1405320.1 for responses to common Exadata security scan findings.

 

Disable pstack Called From Diagsnap After Applying PSU/RU released between October 2017 and July 2018 to Grid Infrastructure (GI) Home on 12.1.0.2 and 12.2. (Doc ID 2422509.1)

Description

Troubleshooting node reboots/evictions within Grid Infrastructure (GI) is often difficult due to the lack of network and OS-level resource information. To help circumvent this, the diagsnap feature was developed and integrated with Grid Infrastructure. Diagsnap is triggered to collect network and OS-level resource information when a node is about to get evicted or when Grid Infrastructure is about to crash.
  
The diagsnap feature is enabled automatically starting from the 12.1.0.2 Oct 2017 PSU and the 12.2.0.1 Oct 2017 RU. For more information about the diagsnap feature, refer to Document 2345654.1 "What is diagsnap resource in 12c GI and above?"

Occurrence

In certain situations diagsnap executes pstack (and pfiles on Solaris) against critical daemons like ocssd.bin and gipcd.bin. 
Although very infrequent, taking pstack and pfiles on ocssd.bin can suspend the ocssd.bin daemon long enough to cause node reboots and evictions. For this reason, Oracle has decided to ask customers to disable the diagsnap functionality until the proper fixes are provided in a future PSU and/or RU. Once the fixes are applied, diagsnap will no longer call pstack (or pfiles on Solaris).

Symptoms

Node reboots and evictions occur after applying the 12.1.0.2 Oct 2017 PSU (and later) or the 12.2.0.1 Oct 2017 RU (and later), but before the 12.1.0.2 Oct 2018 PSU and 12.2.0.1 Oct 2018 RU, to the Grid Infrastructure (GI) home.
The problem is fixed in the 12.1.0.2 Oct 2018 PSU and the 12.2.0.1 Oct 2018 RU.


Workaround

Either apply the patch, or disable osysmond from issuing pstack (and diagsnap from issuing pfiles on Solaris).

For non-Solaris environments:

1. Apply the latest PSU or RU, or the patch for Bug 28266751; the fix disables osysmond from issuing pstack.

The fix for Bug 28266751 is included in the 12.1.0.2 Oct 2018 PSU and the 12.2.0.1 Oct 2018 RU, so the strong recommendation is to apply the 12.1.0.2 Oct 2018 PSU or the 12.2.0.1 Oct 2018 RU or later. Refer to Document 756671.1 "Master Note for Database Proactive Patch Program" for the patch number of the latest 12.1.0.2 PSU and 12.2.0.1 RU.

OR
2. Disable osysmond from issuing pstack. As the root user, issue:
crsctl stop res ora.crf -init
Then update PSTACK=DISABLE in $GRID_HOME/crf/admin/crf<HOSTNAME>.ora and restart:
crsctl start res ora.crf -init
 

Patches

The following bugs were opened to remove the pstack and pfiles calls from diagsnap.
Bug 28266751 - REMOVE PSTACK FOR CSS AND GIPC IN DIAGSNAP
Bug 26943660 - DIAGSNAP.PL SHOULDN'T RUN PFILES ON CRSD.BIN

ORA-09817: Write to audit file failed. Linux-x86_64 Error: 28: No space left on device



 Affects:

Product (Component): Oracle Server (PCW)
Range of versions believed to be affected: (Not specified)
Versions confirmed as being affected:
Platforms affected: Generic (all / most platforms affected)

Fixed:


The fix for 28266751 is first included in

Interim patches may be available for earlier versions; check My Oracle Support.


Description:

  On Clusterware, you may see pstack collected on processes like ocssd or gipcd.

 

Rediscovery Notes:  

   CHM sometimes calls pstack on ocssd.bin or gipcd.

Workaround:  

    None


For more details, check Doc ID 2422509.1.