Upgrade Grid Infrastructure 12.1 to 12.2 + Install the 2017 August Release Update (RU) on Exadata

Preface

This document describes the upgrade of the Grid Infrastructure (GI) from 12.1 to 12.2, including the installation of the August 2017 Release Update (RU).

The Exadata system consists of:

  • Two database nodes: exadb1/exadb2
  • Three storage cells

The software release of Exadata components:

Infiniband Software version:    2.2.6-2 (July 2017)
Storage cells version:          12.2.1.1.2.170714
DB node version:                12.2.1.1.2.170714
GI Version:                     12.1.0.2.170418
RDBMS Version:                  12.1.0.2.170418

The order of actions:

  • Out-of-place installation of Grid Infrastructure 12.2 on all database servers
  • Installation of the August 2017 RU patch on all database servers
  • Upgrade of the Grid Infrastructure to 12.2

Installation of Grid Infrastructure 12.2

Prepare the environment

Prepare the file /root/dbs_group containing all database node names or IP addresses:

exadb1
exadb2
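
Before proceeding, it is worth verifying that dcli can reach every node listed in the file (a quick check; this assumes root SSH equivalence between the nodes is already set up):

[root@exadb1]# dcli -g /root/dbs_group -l root hostname

Each node should respond with its own hostname.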

Create the directories for the new Grid Infrastructure home and change their ownership to the GI software owner (in my example: oracle).

This action will be executed from the first database node as the root user:

[root@exadb1]# dcli -g ~/dbs_group -l root mkdir -p /u01/app/12.2.0.1/grid
[root@exadb1]# dcli -g ~/dbs_group -l root chown oracle:oinstall /u01/app/12.2.0.1/grid

Download and prepare the 12.2 software on both database hosts

This action will be executed from the first database node as root user:

[root@exadb1]# dcli -g /root/dbs_group -l root mkdir /software_stage
[root@exadb1]# cd /software_stage
[root@exadb1]# scp V840012-01.zip exadb2:/software_stage
[root@exadb1]# dcli -g /root/dbs_group -l root chown oracle:oinstall /software_stage/V840012-01.zip

This action will be performed on all database servers as user oracle:

Server exadb1:

[oracle@exadb1]$ unzip -q /software_stage/V840012-01.zip -d /u01/app/12.2.0.1/grid

Server exadb2:

[oracle@exadb2]$ unzip -q /software_stage/V840012-01.zip -d /u01/app/12.2.0.1/grid

Download and install the new version of OPatch

This action will be performed on all database servers as user oracle:

Download the file p6880880_121010_Linux-x86-64.zip, copy it to the directory /software_stage, and then unzip it into the 12.2 GI home:

Server exadb1:

[oracle@exadb1]$ cp p6880880_121010_Linux-x86-64.zip /software_stage/
[oracle@exadb1]$ unzip -q /software_stage/p6880880_121010_Linux-x86-64.zip -d /u01/app/12.2.0.1/grid

Server exadb2:

[oracle@exadb2]$ cp p6880880_121010_Linux-x86-64.zip /software_stage/
[oracle@exadb2]$ unzip -q /software_stage/p6880880_121010_Linux-x86-64.zip -d /u01/app/12.2.0.1/grid

Check the OPatch version on both database servers as user oracle:

[oracle@exadb1]$ cd /u01/app/12.2.0.1/grid/OPatch
[oracle@exadb1 OPatch]$ ./opatch version

OPatch Version: 12.2.0.1.9

Check the environment with cluvfy

As user oracle on the first database server:

[oracle@exadb1]$ /u01/app/12.2.0.1/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.1.0.2/grid -dest_crshome /u01/app/12.2.0.1/grid -dest_version 12.2.0.1.0 -fixupnoexec -verbose

Important: carefully check the output for errors and warnings and fix them before continuing.
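
Because the run uses -fixupnoexec, cluvfy only generates fixup scripts instead of executing them. If fixable findings are reported, the generated script can be run as root on each affected node; the exact path is printed in the cluvfy output (the path below is only an illustrative example):

[root@exadb1]# /tmp/CVU_12.2.0.1.0_oracle/runfixup.sh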

Change SGA Values for the ASM

For the 12.2 upgrade the ASM instance needs an SGA of at least 3 GB.

Set the environment to GI 12.1 and start SQL*Plus:

[oracle@exadb1]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1

[oracle@exadb1]$ sqlplus / as sysasm

SQL> alter system set sga_max_size = 3G scope=spfile sid='*';
SQL> alter system set sga_target = 3G scope=spfile sid='*';
SQL> alter system set memory_target=0 sid='*' scope=spfile;
SQL> alter system set memory_max_target=0 sid='*' scope=spfile;
SQL> alter system reset memory_max_target sid='*' scope=spfile;
SQL> alter system set use_large_pages=true sid='*' scope=spfile;
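
To verify the new values in the spfile, a quick check in the same SQL*Plus session can be used (a sketch):

SQL> select sid, name, value from v$spparameter where name in ('sga_max_size','sga_target','memory_target','use_large_pages');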

Verify no active rebalance is running:

SQL> select count(*) from gv$asm_operation;

COUNT(*)
----------
0

Set Limits

Set the soft stack limit for the GI software owner as root on all database servers. The file /etc/security/limits.conf should contain the entry shown below:

oracle soft stack 10240
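
A quick way to confirm the entry on all nodes is a dcli check from the first node (a sketch, re-using the dbs_group file from above):

[root@exadb1]# dcli -g /root/dbs_group -l root "grep 'oracle soft stack' /etc/security/limits.conf"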

Software-Only Installation of Grid Infrastructure 12.2

These steps must be executed on all database servers as the GI software owner (in my example, user oracle). You need a configured X11 environment to start the OUI.

[oracle@exadb1]$ cd /u01/app/12.2.0.1/grid/
[oracle@exadb1]$ ./gridSetup.sh

Choose "Software Only".
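
If no X11 environment is available, the software-only installation can alternatively be driven by a response file (a hedged sketch; /software_stage/grid_swonly.rsp is a hypothetical file name, copied from install/response/gridsetup.rsp in the new home, with at least oracle.install.option=CRS_SWONLY plus the inventory, ORACLE_BASE and group settings filled in):

[oracle@exadb1]$ cd /u01/app/12.2.0.1/grid/
[oracle@exadb1]$ ./gridSetup.sh -silent -responseFile /software_stage/grid_swonly.rsp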

Open a PuTTY session and execute the root.sh script as root:

[root@exadb1]# /u01/app/12.2.0.1/grid/root.sh

Output:

Performing root user operation.
The following environment variables are set as:

ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.2.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: n
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: n
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: n

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.

Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster or Grid Infrastructure for a Stand-Alone Server execute the following command as grid user:
/u01/app/12.2.0.1/grid/gridSetup.sh
This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

Return to the OUI and click "OK":

Installation of the August 2017 Bundle Patch (RU) on top of the GI 12.2 home

Installation Patch 26610291: GRID INFRASTRUCTURE RELEASE UPDATE 12.2.0.1.170814

These steps must be executed on all database servers as the GI software owner (in my example, user oracle).

Prepare the installation file p26610291_122010_Linux-x86-64.zip:

[oracle@exadb1]$ cp p26610291_122010_Linux-x86-64.zip /software_stage
[oracle@exadb1]$ cd /software_stage
[oracle@exadb1]$ unzip p26610291_122010_Linux-x86-64.zip
[oracle@exadb1]$ cd 26610291

OPatch Conflict check:

[oracle@exadb1]$ export ORACLE_HOME=/u01/app/12.2.0.1/grid
[oracle@exadb1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
[oracle@exadb1]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./26609817
[oracle@exadb1]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./26609966
[oracle@exadb1]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./25586399

Expected output:

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch SystemSpace Check:

[oracle@exadb1]$ vi /tmp/patch_list_gihome.txt

/software_stage/26610291/26609817
/software_stage/26610291/26609966
/software_stage/26610291/25586399

[oracle@exadb1]$ opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt

Output:

Prereq "checkSystemSpace" passed.

Installation Patch 26609966: OCW RU 12.2.0.1.170814

These steps must be executed on all database servers as the GI software owner (in my example, user oracle).

[oracle@exadb1]$ export ORACLE_HOME=/u01/app/12.2.0.1/grid
[oracle@exadb1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
[oracle@exadb1]$ cd /software_stage/26610291/26609966
[oracle@exadb1]$ opatch apply -local

Installation Patch 26609817: DB RU 12.2.0.1.170814

These steps must be executed on all database servers as the GI software owner (in my example, user oracle).

[oracle@exadb1]$ export ORACLE_HOME=/u01/app/12.2.0.1/grid
[oracle@exadb1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
[oracle@exadb1]$ cd /software_stage/26610291/26609817
[oracle@exadb1]$ opatch apply -local

Installation Patch 25586399: ACFS RU 12.2.0.1.170718

These steps must be executed on all database servers as the GI software owner (in my example, user oracle).

[oracle@exadb1]$ export ORACLE_HOME=/u01/app/12.2.0.1/grid
[oracle@exadb1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
[oracle@exadb1]$ cd /software_stage/26610291/25586399
[oracle@exadb1]$ opatch apply -local

Check:

[oracle@exadb1]$ opatch lspatches

Output:

25586399;ACFS Patch Set Update : 12.2.0.1.170718 (25586399)
26609817;DATABASE RELEASE UPDATE: 12.2.0.1.170814 (26609817)
26609966;OCW Patch Set Update : 12.2.0.1.170814(26609966)

Save the cluster configuration

These steps must be executed on the first database server as the GI software owner (in my example, user oracle).

Set environment:

[oracle@exadb1]$ cd /home/grid
[oracle@exadb1]$ . grid.env

Save network configuration:

[oracle@exadb1]$ srvctl config listener > listener_config_save_before_upgrade.txt
[oracle@exadb1]$ srvctl config network > network_config_save_before_upgrade.txt
[oracle@exadb1]$ srvctl config scan -all > scan_config_save_before_upgrade.txt
[oracle@exadb1]$ srvctl config scan_listener -all > scan_listener_config_save_before_upgrade.txt
[oracle@exadb1]$ srvctl config vip -node exadb1 > vip_config_save_before_upgrade.txt
[oracle@exadb1]$ srvctl config vip -node exadb2 >> vip_config_save_before_upgrade.txt

Save whole configuration:

[oracle@exadb1]$ crsctl stat res -p > crsctl_statresp_output_save_before_upgrade.txt
[oracle@exadb1]$ crsctl stat res -t > crsctl_statrest_output_save_before_upgrade.txt

Shut down all databases

These steps must be executed on the first database server as the RDBMS software owner (in my example, user oracle).

Set environment:

[oracle@exadb1]$ . oraenv
ORACLE_SID = [oracle] ? mydb1

[oracle@exadb1]$ srvctl stop database -d mydb1
[oracle@exadb1]$ srvctl stop database -d mydb2
[oracle@exadb1]$ srvctl stop database -d mydb3
...
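
If many databases are registered in the cluster, they can also be stopped in a loop over the output of srvctl config database (a sketch; run as the RDBMS software owner with the environment set as above):

[oracle@exadb1]$ for db in $(srvctl config database); do srvctl stop database -d "$db"; done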

Upgrade the Grid Infrastructure

These steps must be executed on the first database server as the GI software owner (in my example, user oracle).

Unset all ORACLE-related environment variables and verify that none remain set:

[oracle@exadb1]$ set | grep ORA

The output must be empty.
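
If the check still shows variables, unset them explicitly, for example:

[oracle@exadb1]$ unset ORACLE_HOME ORACLE_SID ORACLE_BASE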

Start the installer (the -J options are used here to skip creation of the Grid Infrastructure Management Repository database, MGMTDB):

[oracle@exadb1]$ /u01/app/12.2.0.1/grid/gridSetup.sh -J-Doracle.install.mgmtDB=false -J-Doracle.install.crs.enableRemoteGIMR=false

Launching Oracle Grid Infrastructure Setup Wizard...

Some checks are unsuccessful due to the customer-specific network configuration.

Now open a PuTTY session as root.

These steps must be executed on all database servers as user root.
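
Before running rootupgrade.sh, check which ACFS file systems are currently mounted so they can be unmounted first (a sketch; acfsutil is usually installed under /sbin):

[root@exadb1]# mount -t acfs
[root@exadb1]# /sbin/acfsutil registry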

DB Server exadb1

Unmount all ACFS file systems on the server, for example:

[root@exadb1]# umount /acfs_mounts/my_mount
[root@exadb1]# /u01/app/12.2.0.1/grid/rootupgrade.sh

Output:

Performing root user operation.
The following environment variables are set as:

ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/12.2.0.1/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.
Relinking oracle with rac_on option

Using configuration parameter file: /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/exadb1/crsconfig/rootcrs_exadb1_2017-08-24_11-15-56AM.log
2017/08/24 11:15:58 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2017/08/24 11:15:58 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2017/08/24 11:16:50 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2017/08/24 11:16:50 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2017/08/24 11:16:58 CLSRSC-595: Executing upgrade step 3 of 19: 'GenSiteGUIDs'.
2017/08/24 11:16:59 CLSRSC-595: Executing upgrade step 4 of 19: 'GetOldConfig'.
2017/08/24 11:16:59 CLSRSC-464: Starting retrieval of the cluster configuration data
2017/08/24 11:17:06 CLSRSC-515: Starting OCR manual backup.
2017/08/24 11:17:47 CLSRSC-516: OCR manual backup successful.
2017/08/24 11:17:56 CLSRSC-486:

At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2017/08/24 11:17:56 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.
2017/08/24 11:17:56 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2017/08/24 11:17:56 CLSRSC-615:
3. The last node to downgrade cannot be a Leaf node.
2017/08/24 11:18:01 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2017/08/24 11:18:01 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2017/08/24 11:18:03 CLSRSC-363: User ignored prerequisites during installation
2017/08/24 11:18:08 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2017/08/24 11:18:13 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2017/08/24 11:18:21 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2017/08/24 11:18:23 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2017/08/24 11:18:23 CLSRSC-482: Running command: '/ora01/app/12.1.0.2/grid/bin/crsctl start rollingupgrade 12.2.0.1.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2017/08/24 11:18:28 CLSRSC-482: Running command: '/u01/app/12.2.0.1/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /ora01/app/12.1.0.2/grid -oldCRSVersion 12.1.0.2.0 -firstNode true -startRolling false '
ASM configuration upgraded in local node successfully.
2017/08/24 11:18:29 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2017/08/24 11:18:37 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2017/08/24 11:19:04 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2017/08/24 11:19:06 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2017/08/24 11:19:06 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2017/08/24 11:19:11 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2017/08/24 11:19:11 CLSRSC-595: Executing upgrade step 12 of 19: 'InstallAFD'.
2017/08/24 11:19:15 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2017/08/24 11:19:19 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2017/08/24 11:19:34 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
2017/08/24 11:20:02 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'exadb1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'exadb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/08/24 11:20:28 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2017/08/24 11:20:36 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'exadb1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'exadb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'exadb1'
CRS-2672: Attempting to start 'ora.evmd' on 'exadb1'
CRS-2676: Start of 'ora.mdnsd' on 'exadb1' succeeded
CRS-2676: Start of 'ora.evmd' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'exadb1'
CRS-2676: Start of 'ora.gpnpd' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'exadb1'
CRS-2676: Start of 'ora.gipcd' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'exadb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'exadb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'exadb1'
CRS-2676: Start of 'ora.diskmon' on 'exadb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'exadb1'
CRS-2672: Attempting to start 'ora.ctssd' on 'exadb1'
CRS-2676: Start of 'ora.ctssd' on 'exadb1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'exadb1'
CRS-2676: Start of 'ora.asm' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'exadb1'
CRS-2676: Start of 'ora.storage' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'exadb1'
CRS-2676: Start of 'ora.crf' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'exadb1'
CRS-2676: Start of 'ora.crsd' on 'exadb1' succeeded
CRS-6017: Processing resource auto-start for servers: exadb1
CRS-6016: Resource auto-start has completed for server exadb1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'exadb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'exadb1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'exadb1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'exadb1'
CRS-2673: Attempting to stop 'ora.DBFS_DG.dg' on 'exadb1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'exadb1'
CRS-2677: Stop of 'ora.DATA.dg' on 'exadb1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'exadb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'exadb1'
CRS-2677: Stop of 'ora.asm' on 'exadb1' succeeded
CRS-2673: Attempting to stop 'ora.net.network' on 'exadb1'
CRS-2677: Stop of 'ora.net1.network' on 'exadb1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'exadb1' has completed
CRS-2677: Stop of 'ora.crsd' on 'exadb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'exadb1'
CRS-2673: Attempting to stop 'ora.crf' on 'exadb1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'exadb1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'exadb1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'exadb1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'exadb1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'exadb1' succeeded
CRS-2677: Stop of 'ora.crf' on 'exadb1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'exadb1' succeeded
CRS-2677: Stop of 'ora.asm' on 'exadb1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'exadb1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'exadb1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'exadb1'
CRS-2673: Attempting to stop 'ora.evmd' on 'exadb1'
CRS-2677: Stop of 'ora.ctssd' on 'exadb1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'exadb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'exadb1'
CRS-2677: Stop of 'ora.cssd' on 'exadb1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'exadb1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'exadb1'
CRS-2677: Stop of 'ora.gipcd' on 'exadb1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'exadb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'exadb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'exadb1'
CRS-2672: Attempting to start 'ora.evmd' on 'exadb1'
CRS-2676: Start of 'ora.mdnsd' on 'exadb1' succeeded
CRS-2676: Start of 'ora.evmd' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'exadb1'
CRS-2676: Start of 'ora.gpnpd' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'exadb1'
CRS-2676: Start of 'ora.gipcd' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'exadb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'exadb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'exadb1'
CRS-2676: Start of 'ora.diskmon' on 'exadb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'exadb1'
CRS-2672: Attempting to start 'ora.ctssd' on 'exadb1'
CRS-2676: Start of 'ora.ctssd' on 'exadb1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'exadb1'
CRS-2676: Start of 'ora.asm' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'exadb1'
CRS-2676: Start of 'ora.storage' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'exadb1'
CRS-2676: Start of 'ora.crf' on 'exadb1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'exadb1'
CRS-2676: Start of 'ora.crsd' on 'exadb1' succeeded
CRS-6017: Processing resource auto-start for servers: exadb1

...

CRS-6016: Resource auto-start has completed for server exadb1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/08/24 11:28:25 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/08/24 11:28:25 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2017/08/24 11:28:28 CLSRSC-474: Initiating upgrade of resource types
2017/08/24 11:31:37 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p first'
2017/08/24 11:31:37 CLSRSC-475: Upgrade of resource types successfully initiated.
2017/08/24 11:32:44 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2017/08/24 11:32:48 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

DB Server exadb2

[root@exadb2]# umount /acfs_mounts/my_mount
[root@exadb2]# cd /u01/app/12.2.0.1/grid/
[root@exadb2]# ./rootupgrade.sh

Output:

Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/12.2.0.1/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...

...

2017/08/24 11:55:32 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2017/08/24 11:55:40 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
Start upgrade invoked..
2017/08/24 11:56:27 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2017/08/24 11:56:27 CLSRSC-482: Running command: '/u01/app/12.2.0.1/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 12.2.0.1.0.
2017/08/24 11:57:37 CLSRSC-479: Successfully set Oracle Clusterware active version
2017/08/24 11:57:58 CLSRSC-476: Finishing upgrade of resource types
2017/08/24 11:59:14 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p last'
2017/08/24 11:59:14 CLSRSC-477: Successfully completed upgrade of resource types
2017/08/24 11:59:35 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2017/08/24 12:00:45 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Return to the OUI and continue with the remaining wizard steps.

Some checks via cluvfy are unsuccessful due to the customer-specific network configuration. In my environment this is expected behavior.

Check your cluvfy output and fix any remaining errors and warnings.

After the upgrade

Checks:

[root@exadb1]# /u01/app/12.2.0.1/grid/bin/crsctl check cluster -all

**************************************************************
exadb1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
exadb2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
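
The new Clusterware version can also be confirmed directly:

[root@exadb1]# /u01/app/12.2.0.1/grid/bin/crsctl query crs activeversion
[root@exadb1]# /u01/app/12.2.0.1/grid/bin/crsctl query crs softwareversion

Both should report 12.2.0.1.0.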

Check all cluster resources:

[oracle@exadb2]$ crsctl stat res -t

All resources except the databases (which are still stopped) should be ONLINE.
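
To list only resources that are not ONLINE, the crsctl state filter can be used (a sketch):

[oracle@exadb2]$ crsctl stat res -w "STATE != ONLINE" -t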

Disable diagsnap for Exadata

MOS Note 2111010.1:

NOTE: Due to unpublished bugs 24900613 25785073 and 25810099, Diagsnap should be disabled for Exadata

This step must be executed on the first database server as user oracle.

[oracle@exadb1]$ cd /u01/app/12.2.0.1/grid/bin
[oracle@exadb1]$ ./oclumon manage -disable diagsnap
Diagsnap is already Disabled on exadb1
Diagsnap is already Disabled on exadb2

No action was required here: diagsnap was already disabled after the upgrade.

Verify that the Flex ASM cardinality is set to "ALL"

These steps must be executed on the first database server as user oracle.

[oracle@exadb1]$ . grid.env
[oracle@exadb1]$ export TNS_ADMIN=/ora01/app/grid/network/admin
[oracle@exadb1]$ srvctl config asm

ASM home: <CRS home>
Password file: +DBFS_DG/orapwASM
Backup of Password file: +DBFS_DG/orapwASM_backup
ASM listener: LISTENER
ASM instance count: ALL
Cluster ASM listener: ASMNET1LSNR_ASM,ASMNET2LSNR_ASM

[oracle@exadb1]$ srvctl status asm
ASM is running on exadb1,exadb2
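
If the ASM instance count were not ALL, the cardinality could be changed as the GI owner (a sketch; not needed in my environment):

[oracle@exadb1]$ srvctl modify asm -count ALL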

Update the inventory node list for the new GI home:

[oracle@exadb1]$ /u01/app/12.2.0.1/grid/oui/bin/runInstaller -ignoreSysPrereqs -silent -updateNodeList ORACLE_HOME=/u01/app/12.2.0.1/grid "CLUSTER_NODES={exadb1,exadb2}" CRS=true LOCAL_NODE=exadb1

Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 24575 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Start the databases

These steps must be executed on the first database server as the RDBMS software owner (in my example, user oracle).

Set environment:

[oracle@exadb1]$ . oraenv
ORACLE_SID = [oracle] ? mydb1

[oracle@exadb1]$ srvctl start database -d mydb1
[oracle@exadb1]$ srvctl start database -d mydb2
[oracle@exadb1]$ srvctl start database -d mydb3

...

Follow-up tasks

Deactivate and delete the MGMTDB database

MOS Note 2111010.1:

To deconfigure MGMTDB post upgrade or post deployment, perform the following steps. Note that the steps are performed from both the old and the new GI_HOME, and the last step is executed as user root.

(oracle)$ export ORACLE_SID=-MGMTDB
(oracle)$ export ORACLE_BASE=/u01/app/oracle
(oracle)$ export ORACLE_HOME=/u01/app/12.1.0.2/grid
(oracle)$ export PATH=$ORACLE_HOME/bin:$PATH
(oracle)$ dbca -silent -deleteDatabase -sourceDB -MGMTDB
(oracle)$ srvctl stop mgmtlsnr
(oracle)$ srvctl remove mgmtlsnr
(oracle)$ /u01/app/12.2.0.1/grid/bin/crsctl stop res ora.crf -init
(root)# /u01/app/12.2.0.1/grid/bin/crsctl modify res ora.crf -attr "ENABLED=0" -init

If MGMTDB needs to be added back, follow the steps in MOS Note 2246123.1 (12.2: How to Create GI Management Repository).

Deinstall the GI 12.1 GRID_HOME on all database servers

Detach the old GI ORACLE_HOME

Deinstall the old GI ORACLE_HOME
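
A minimal sketch of these two steps, assuming the old home is /u01/app/12.1.0.2/grid; verify the supported procedure in MOS Doc ID 2111010.1 before removing anything, since the deinstall tool deletes the software tree:

[oracle@exadb1]$ /u01/app/12.1.0.2/grid/oui/bin/runInstaller -silent -detachHome ORACLE_HOME=/u01/app/12.1.0.2/grid
[oracle@exadb1]$ /u01/app/12.1.0.2/grid/deinstall/deinstall -local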

Used documentation

  • 12.2 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.3 and later on Oracle Linux (Doc ID 2111010.1)
  • How to Upgrade to/Downgrade from Grid Infrastructure 12.2 and Known Issues (Doc ID 2240959.1)
  • Information Center: Oracle Exadata Database Machine (Doc ID 1306791.2)
  • How Do I Find the Exadata Documentation, Such as Owner and User Guide? (Doc ID 1342281.1)
  • Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Author: Neselovskyi, Borys

Oracle Database / Middleware / Engineered System Infrastructure Solution Architect

Comments

  1. I have done it this way on a standard RAC and my gridSetup pointed out that OPatch files were missing; of course they were there, but a higher version. Then during linking it failed to link irman, but a simple retry fixed it and the installation completed successfully. I don't understand why you deleted the -MGMTDB; you know that Oracle builds its ML Autonomous Healthcheck Framework on this feature?
