This document describes the upgrade of the Grid Infrastructure (GI) from 12.1 to 12.2, including the installation of the August 2017 Release Update (RU).
The Exadata consists of:
- Two db nodes: exadb1/exadb2
- Three cell servers
The software releases of the Exadata components:
- InfiniBand software version: 2.2.6-2 (July 2017)
- Storage cells version: 184.108.40.206.2.170714
- DB node version: 220.127.116.11.2.170714
- GI version: 18.104.22.168.170418
- RDBMS version: 22.214.171.124.170418
The order of actions:
- Out-of-place installation of Grid Infrastructure 12.2 on all database servers
- Installation of the August 2017 RU patch on all database servers
- Upgrade of the Grid Infrastructure to 12.2
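The steps above could be sketched roughly as follows. This is a hedged outline, not the exact procedure from the post: the staging paths, home locations, and the RU patch directory are placeholders, and whether you patch via gridSetup.sh or opatchauto depends on your OPatch version.

```shell
# Hypothetical paths and patch directory -- adjust to your environment.

# 1) Out-of-place installation: 12.2 GI ships as a gold image, so unzip it
#    directly into a NEW Grid home (the installer runs from that home).
unzip -q /stage/linuxx64_12201_grid_home.zip -d /u01/app/12.2.0.1/grid

# 2) Apply the August 2017 RU to the new home before upgrading.
#    One option in 12.2 is to let the installer patch the home during setup:
#      /u01/app/12.2.0.1/grid/gridSetup.sh -applyPSU /stage/<RU_PATCH_DIR>
#    Alternatively, patch a software-only home with opatchauto (as root):
/u01/app/12.2.0.1/grid/OPatch/opatchauto apply /stage/<RU_PATCH_DIR> \
    -oh /u01/app/12.2.0.1/grid

# 3) Upgrade the running GI stack: start the installer from the new home,
#    choose "Upgrade Oracle Grid Infrastructure", and run rootupgrade.sh
#    on each database node when prompted.
/u01/app/12.2.0.1/grid/gridSetup.sh
```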
Read more: „Upgrade Grid Infrastructure 12.1 to 12.2 + Install the 2017 August Release Update (RU) on Exadata“
This document describes the configuration of Oracle Data Guard 12.2. We will configure a standby for a non-multitenant database.
Our test environment consists of two Oracle Linux (OEL 7.3) servers: oradg1 and oradg2. I have already created the non-multitenant primary database on the server oradg1; we will configure the standby database on the server oradg2. We will use a new feature of 12.2: creating a physical standby via DBCA, the Oracle Database Configuration Assistant.
Primary server name/os: oradg1/Oracle Linux Server 7.3 (64 bit)
Standby server name/os: oradg2/Oracle Linux Server 7.3 (64 bit)
RDBMS Version (primary and standby): 126.96.36.199.0
ORACLE_HOME (primary and standby): /u01/app/oracle/product/12.2.0/dbhome_1
Primary ORACLE_SID/DB_UNIQUE_NAME: ORCL1/ORCL1PRM
Standby ORACLE_SID/DB_UNIQUE_NAME: ORCL1/ORCL1STB
Listener name/port (primary and standby): LISTENER/1521
Path to DB files (primary and standby): /u01/app/oracle/oradata/orcl1
Recovery area (primary and standby): /u01/app/oracle/fast_recovery_area/orcl1
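Based on the environment listing above, the new 12.2 DBCA standby creation could look like the sketch below. The connect string and the service name ORCL1PRM are assumptions (the primary is assumed to register that service with its listener); the full post behind the read-more link contains the exact invocation.

```shell
# Run on the standby server (oradg2); hedged sketch, not the post's exact command.
# -createDuplicateDB with -createAsStandby is the 12.2 DBCA feature mentioned above.
dbca -silent -createDuplicateDB \
  -gdbName ORCL1 \
  -sid ORCL1 \
  -primaryDBConnectionString oradg1:1521/ORCL1PRM \
  -createAsStandby \
  -dbUniqueName ORCL1STB
```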
Preparing the primary database
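Typical primary-side preparation includes archivelog mode, force logging, and standby redo logs. The following is a hedged sketch of those steps (the 200M log size is an assumption; the full post describes the exact preparation):

```shell
# On oradg1, as the oracle OS user; sketch only -- adapt sizes and groups.
sqlplus -s / as sysdba <<'EOF'
-- Archivelog mode is required for redo shipping to the standby.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Force logging so that NOLOGGING operations cannot corrupt the standby.
ALTER DATABASE FORCE LOGGING;
-- Standby redo logs: same size as the online redo logs, one extra group.
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 200M;
EOF
```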
Read more: „Steps to configure Oracle Data Guard 12.2 for non-multitenant database“
The high availability of applications in the WebLogic environment is realized by clustering. Managed servers in the cluster work together. The information about transactions is distributed cluster-wide. If a cluster member fails, another server takes over the tasks of the failed server and executes them. In this way, the applications are kept running without interruption.
The AdminServer is a single point of failure: if the server fails, the Domain is no longer administrable:
- Configuration changes cannot be performed
- The administration console is not available
The managed servers are still running and can continue to work, even if the AdminServer is not available: this requires the activation of MSI (Managed Server Independence) Mode.
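MSI is configured per managed server (on recent WebLogic releases it is enabled by default). It could be checked or set with a WLST snippet along these lines; the credentials, admin URL, and server name ms1 are placeholder assumptions:

```shell
# Hedged WLST sketch run from a shell (environment set via setDomainEnv.sh);
# adjust credentials, URL, and the managed server name to your domain.
java weblogic.WLST <<'EOF'
connect('weblogic', '<password>', 't3://adminhost:7001')
edit()
startEdit()
# ManagedServerIndependenceEnabled lets the server boot from its local
# cached configuration when the AdminServer is unreachable.
cd('/Servers/ms1')
set('ManagedServerIndependenceEnabled', true)
save()
activate()
disconnect()
EOF
```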
How can the AdminServer be protected from failure? In this blog post I will describe all the steps necessary to keep your server running safely.
Read more: „How to configure the WebLogic AdminServer for High Availability“