Wednesday, February 24, 2016

Configuring the Sun ZFS Backup Appliance for Oracle Exadata

Configuring Networks, Pools, and Shares

The following sections summarize best practices for optimizing Sun ZFS Backup Appliance network, pool, and share configurations to support backup and restore processing.
Network Configuration
This section describes how to configure the IP network multipathing (IPMP) groups, and how to configure routing in the Sun ZFS Backup Appliance. The basic network configuration steps are:
1. Connect the Sun ZFS Backup Appliance to the Oracle Exadata as described in the previous chapter.
2. Configure ibp0, ibp1, ibp2, and ibp3 with address 0.0.0.0/8 (necessary for IPMP), connected mode, and partition key ffff. To identify the partition key used by the Oracle Exadata system, run the following command as the root user:
# cat /sys/class/net/ib0/pkey
3. Configure an active/standby IPMP group over ibp0 and ibp3, with ibp0 active and ibp3 standby.
4. Configure an active/standby IPMP group over ibp1 and ibp2, with ibp2 active and ibp1 standby.
5. Enable adaptive routing to ensure traffic is load balanced appropriately when multiple IP addresses on the same subnet are owned by the same head, as occurs after a cluster failover.
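Before configuring the appliance interfaces in step 2, it can be useful to confirm that every database node reports the same partition key. A sketch of that check, assuming the same /home/oracle/dbs_group node-list file used later in this paper:

```
# dcli -l root -g /home/oracle/dbs_group cat /sys/class/net/ib0/pkey
```

All nodes should return the identical value (ffff in the default configuration shown above).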

Pool Configuration
This section describes design considerations to determine the most appropriate pool configuration for the Sun ZFS Backup Appliance for Oracle RMAN backup and restore operations based on data protection and performance requirements. The system planner should consider pool protection based on the following guidelines:
  • Use parity-based protection for general-purpose and capacity-optimized systems:
    • RAID-Z for protection from single-drive failure on systems subject to random workloads.
    • RAID-Z2 for protection from two-drive failure on systems with streaming workloads only.
  • Use mirroring for high-performance with incrementally applied backup.
  • Configure pools based on performance requirements:
    • Configure a single pool for management-optimized systems.
    • Configure two pools for performance-optimized systems. Two-pool systems can be configured by using half the drives from each tray.
  • Configure log device protection:
    • Stripe log devices for RAID-Z and mirrored pool configurations.
    • Mirror log devices for RAID-Z2 pool configurations.
Share Configuration
The default options for Sun ZFS Backup Appliance shares provide a good starting point for general-purpose workloads. Sun ZFS Backup Appliance shares can be optimized for Oracle RMAN backup and restore operations as follows:
  • Create a project to store all shares related to backup and recovery of a single database. For a two-pool implementation, create two projects, one for each pool.
  • Configure the shares supporting Oracle RMAN backup and restore workloads with the following values:
    • Database record size (recordsize): 128 KB
    • Synchronous write bias (logbias): Throughput (for processing backup sets and image copies) or Latency (for incrementally applied backups)
    • Cache device usage (secondary cache): None (for backup sets) or All (when supporting incrementally applied backups or database clone operations)
    • Data compression (compression): Off for performance-optimized systems, LZJB or gzip-2 for capacity-optimized systems
    • Number of shares per pool: 1 for management-optimized systems, 2 or 4 for performance-optimized systems
Additional share configuration options, such as higher-level gzip compression or replication, can be applied to shares used to support Oracle Exadata backup and restore, as customer requirements mandate. 
Customers implementing additional Sun ZFS Backup Appliance data services should consider implementation-specific testing to verify the implications of deviations from the practices described earlier.
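As an illustration, the share property names that correspond to the settings above can be applied at the project level from the appliance CLI roughly as follows. This is a hedged sketch only: the project name dbprj is hypothetical, and the exact CLI prompts and context syntax may vary by appliance software release.

```
appliance:> shares project dbprj
appliance:shares dbprj (uncommitted)> set recordsize=128K
appliance:shares dbprj (uncommitted)> set logbias=throughput
appliance:shares dbprj (uncommitted)> set secondarycache=none
appliance:shares dbprj (uncommitted)> set compression=off
appliance:shares dbprj (uncommitted)> commit
```

Setting the properties on the project lets all shares created within it inherit the values, which keeps a multi-share backup layout consistent.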

Configuring Oracle RMAN and the Oracle Database Instance

Oracle RMAN is an essential component for protecting the content of Oracle Exadata. Oracle RMAN can be used to create backup sets, image copies, and incrementally updated backups of Oracle Exadata content on Sun ZFS Backup Appliances. To optimize performance of Oracle RMAN backups from Oracle Exadata to a Sun ZFS Backup Appliance, the database administrator should apply the following best practices:
    • Load balance Oracle RMAN channels evenly across the nodes of the database machine.
    • Load balance Oracle RMAN channels evenly across Sun ZFS Backup Appliance shares and controllers.
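The two load-balancing practices above can be expressed directly in an Oracle RMAN run block by pinning each channel to a database instance and a share. A sketch only: the connect strings, mount points, and instance names are hypothetical, and the password is elided.

```
RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK
    CONNECT 'sys/<password>@dbm1' FORMAT '/zfssa/dbm/backup1/%U';
  ALLOCATE CHANNEL ch2 DEVICE TYPE DISK
    CONNECT 'sys/<password>@dbm2' FORMAT '/zfssa/dbm/backup2/%U';
  BACKUP AS BACKUPSET DATABASE;
}
```

With more nodes and shares, the same pattern simply rotates channels across the instance connect strings and the share mount points.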
To optimize buffering of the Oracle RMAN channel to the Sun ZFS Backup Appliance, you can tune the values of several hidden instance parameters. For Oracle Database 11g Release 2, the following parameters can be tuned:
  • For backup set backup and restore:
    • _backup_disk_bufcnt=64
    • _backup_disk_bufsz=1048576
  • For image copy backup and restore:
    • _backup_file_bufcnt=64
    • _backup_file_bufsz=1048576
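Note that each channel allocates bufcnt × bufsz of buffer memory, so with the values above every RMAN channel consumes 64 × 1 MiB = 64 MiB, which is worth checking against available memory when many channels are allocated. Because these are hidden parameters, they are set with quoted names; a sketch of setting the backup set parameters from SQL*Plus (an instance restart then picks up the SPFILE values):

```
SQL> ALTER SYSTEM SET "_backup_disk_bufcnt"=64 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET "_backup_disk_bufsz"=1048576 SCOPE=SPFILE;
```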
Oracle Direct NFS (dNFS) is a high-performance NFS client that delivers exceptional performance for Oracle RMAN backup and restore operations. dNFS should be configured for customers seeking maximum throughput for backup and restore operations.

Configuring the Sun ZFS Backup Appliance Network

Sun ZFS Backup Appliance network configuration steps include assigning IP addresses, and optionally IPMP groups, to the physical network interface cards (NICs). For maximum throughput, set the Link Mode to Connected Mode for IB interfaces and select Use Jumbo Frames for 10 Gb Ethernet.

Configuring the Sun ZFS Backup Appliance Storage Pool 

Pool configuration assigns physical disk drive resources to logical storage pools for backup data storage. To maximize system throughput, configure two equally sized storage pools by assigning half of the physical drives in each drive tray to each storage pool.


Configuring the Client NFS Mount

When configuring the Sun ZFS Backup Appliance, any server that accesses the appliance, including Oracle Exadata servers, is considered a client. Configuring the client NFS mount includes creating the target directory structure for access to the Sun ZFS Backup Appliance as well as the specific NFS mount options necessary for optimal system performance. Mount options for Linux clients are: rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600
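For example, a corresponding /etc/fstab entry for one share might look like the following; the appliance IP address, export path, and mount point are hypothetical:

```
192.168.36.200:/export/dbname/backup1  /zfssa/dbname/backup1  nfs  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600  0 0
```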
For detailed configuration steps, see Configuration Details for Client NFS Mount and Oracle dNFS.


Note - Implementations using Oracle Database version 11.2.0.1 may encounter bug 9244583 (ORA-27054) when running Oracle RMAN over NFS. The patch for this bug and workarounds are documented in the My Oracle Support document ORA-27054 WHEN RUNNING RMAN WITH NFS IN 11.2 (WORKS FINE ON 10.2 AND 11.1) [ID 1076405.1].


Tuning the Linux Network and Kernel

Depending on the specific Linux installation, the NFS client software and necessary supporting software subsystems may or may not be enabled. Two Linux services required to run NFS are portmap and nfslock. The services can be configured to run after reboot using the chkconfig command and enabled dynamically using the service command as follows:
# chkconfig portmap on
# service portmap start
# chkconfig nfslock on
# service nfslock start

To ensure the portmap service has access to the /etc/hosts.allow and /etc/hosts.deny files, grant the rpc group read permission on these files, after verifying with local system administration staff that read permission may be granted:
# ls -l /etc/host*
-rw-r--r-- 1 root root   17 Jul 23  2000 /etc/host.conf
-rw-r--r-- 1 root root 1394 Mar  4 10:36 /etc/hosts
-rw------- 1 root root  161 Jan 12  2000 /etc/hosts.allow
-rw-r--r-- 1 root root  147 Mar  3 14:03 /etc/hosts.backupbyExadata
-rw------- 1 root root  347 Jan 12  2000 /etc/hosts.deny
-rw-r--r-- 1 root root  273 Mar  3 14:03 /etc/hosts.orig
 
# dcli -l root -g /home/oracle/dbs_group chmod 640 /etc/hosts.allow
# dcli -l root -g /home/oracle/dbs_group chmod 640 /etc/hosts.deny
# dcli -l root -g /home/oracle/dbs_group chown root:rpc /etc/hosts.allow
# dcli -l root -g /home/oracle/dbs_group chown root:rpc /etc/hosts.deny
 
# ls -l /etc/host*
-rw-r--r-- 1 root root 17 Jul 23 2000 /etc/host.conf
-rw-r--r-- 1 root root 1394 Mar 4 10:36 /etc/hosts
-rw-r----- 1 root rpc  161 Jan 12 2000 /etc/hosts.allow
-rw-r--r-- 1 root root 147 Mar 3 14:03 /etc/hosts.backupbyExadata
-rw-r----- 1 root rpc  347 Jan 12 2000 /etc/hosts.deny
-rw-r--r-- 1 root root 273 Mar 3 14:03 /etc/hosts.orig

For Oracle Exadata, the Linux cpuspeed service is disabled by default, which optimizes throughput for some network devices. In a general Linux implementation, cpuspeed may be enabled by default, which can reduce NFS throughput over 10 Gb Ethernet. If this service is not being used, or its use is less valuable than maximizing NFS performance over 10 Gb Ethernet, it can be disabled at boot and stopped dynamically with the chkconfig and service commands as follows:
# chkconfig cpuspeed off
# service cpuspeed stop

Further client, operating system, network, and kernel tuning may be needed, including software updates, to maximize device driver, networking, and kernel throughput related to network I/O processing. These tuning procedures are system-specific and beyond the scope of this paper. Consult with your operating system and NIC vendors for evaluation and implementation details.

Configuring Oracle Direct NFS (dNFS)

A complete description of Direct NFS (dNFS) configuration is available for each specific release of the Oracle Database software from http://support.oracle.com
For detailed configuration steps, see Configuration Details for Client NFS Mount and Oracle dNFS.


Note - Prior to configuring dNFS, apply Oracle Database patch 8808984 to ensure optimal dNFS operation. Patch 8808984 is available from http://support.oracle.com, and is included in Oracle Exadata Database 11.2.0.1 (BP 8).


A summary of how to configure dNFS is as follows:
1. Shut down the running instance of the Oracle Database software.
2. Enable dNFS using one of the options below:
    • For version 11.2.0.2 or greater of the Oracle Database software, enter:
$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on
    • For a version prior to 11.2.0.2, enter:
$ ln -sf \
     $ORACLE_HOME/lib/libnfsodm11.so \
     $ORACLE_HOME/lib/libodm11.so
3. Update the oranfstab file (/etc/oranfstab) with entries listing the channels and shares accessed on the Sun ZFS Backup Appliance in cases where multiple IP addresses are used to access a single share. The following example shows how shares on two appliance controllers, aie-test-l-71-ib-data and aie-test-l-72-ib-data, are each accessed over four pairs of local and remote IP addresses.
server: aie-test-l-71-ib-data
local: 192.168.36.100 path: 192.168.36.200
local: 192.168.36.101 path: 192.168.36.202
local: 192.168.36.102 path: 192.168.36.204
local: 192.168.36.103 path: 192.168.36.206
dontroute
export: /export/qs/backup1 mount: /zfssa/qs/backup1
export: /export/qs/backup3 mount: /zfssa/qs/backup3
export: /export/qs/backup5 mount: /zfssa/qs/backup5
export: /export/qs/backup7 mount: /zfssa/qs/backup7
server: aie-test-l-72-ib-data
local: 192.168.36.100 path: 192.168.36.201
local: 192.168.36.101 path: 192.168.36.203
local: 192.168.36.102 path: 192.168.36.205
local: 192.168.36.103 path: 192.168.36.207
dontroute
export: /export/qs/backup2 mount: /zfssa/qs/backup2
export: /export/qs/backup4 mount: /zfssa/qs/backup4
export: /export/qs/backup6 mount: /zfssa/qs/backup6
export: /export/qs/backup8 mount: /zfssa/qs/backup8 
4. Restart the Oracle Database software instance.
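After the instance restarts and a backup or restore has run, dNFS usage can be verified from the dynamic performance views; for example (a sketch, as available columns vary slightly by release):

```
SQL> SELECT svrname, dirname FROM v$dnfs_servers;
```

If the configured appliance servers and export paths appear in the output, the Oracle RMAN I/O is flowing through dNFS rather than the kernel NFS client.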
