Steps to configure Oracle Data Guard version 12.2 for a pluggable database

This document describes the configuration of Oracle Data Guard version 12.2. We will configure a standby for a pluggable database.

My test environment consists of two Linux (OEL 7.3) servers: oradg1 and oradg2. I have already created a container database with pluggable databases on the server oradg1. The database consists of the following containers:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> set lines 200
SQL> col name format a20
SQL> select con_id, name,open_mode, restricted from v$pdbs;

    CON_ID NAME                 OPEN_MODE  RES
---------- -------------------- ---------- ---
         2 PDB$SEED             READ ONLY  NO
         3 ORCLPDB1             READ WRITE NO
         4 ORCLPDB2             READ WRITE NO

SQL> select PDB_ID,PDB_NAME,CON_ID,STATUS,LOGGING,FORCE_LOGGING from dba_pdbs;

    PDB_ID PDB_NAME                 CON_ID STATUS     LOGGING   FOR
---------- -------------------- ---------- ---------- --------- ---
         2 PDB$SEED                      2 NORMAL     LOGGING   NO
         3 ORCLPDB1                      3 NORMAL     LOGGING   NO
         4 ORCLPDB2                      4 NORMAL     LOGGING   NO
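
Note that FORCE_LOGGING is still NO for all containers. For a Data Guard configuration the primary is typically switched to force logging mode before the standby is built; a minimal sketch of that step (it is part of the preparation and not shown in the output above):

SQL> alter database force logging;

Database altered.

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
YES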

The following tasks will be executed:

  • Configuration of the standby database on the server oradg2.
  • Creation and drop of a PDB on the primary database, and verification of what happens on the standby side.
  • Execution of the switchover tests.

In my previous post Steps to configure Oracle Data Guard 12.2 for non-multitenant database I created the physical standby for a non-multitenant database using dbca, the Database Configuration Assistant (a new 12.2 feature). Unfortunately this feature is not available for a pluggable database, so we will create the physical standby with the RMAN utility.
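
The standby creation itself will then look roughly like this (a sketch only; the TNS aliases cdbprm and cdbstb are placeholders for the primary and standby CDB services, and the full preparation steps follow in the post):

$ rman target sys@cdbprm auxiliary sys@cdbstb

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY
        FROM ACTIVE DATABASE
        DORECOVER
        SPFILE SET DB_UNIQUE_NAME='cdbstb'
        NOFILENAMECHECK;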

Continue reading „Steps to configure Oracle Data Guard version 12.2 for a pluggable database“

Upgrade Grid Infrastructure 12.1 to 12.2 + Install the 2017 August Release Update (RU) on Exadata

Preface

This document describes the upgrade of the Grid Infrastructure (GI) from 12.1 to 12.2, including the installation of the August 2017 Release Update (RU).

The Exadata consists of:

  • Two db nodes: exadb1/exadb2
  • Three cell servers

The software release of Exadata components:

Infiniband Software version:    2.2.6-2 (July 2017)
Storage cells version:          12.2.1.1.2.170714
DB node version:                12.2.1.1.2.170714
GI version:                     12.1.0.2.170418
RDBMS version:                  12.1.0.2.170418

The order of the actions:

  • Out-of-place installation of Grid Infrastructure 12.2 on all database servers (sketched below)
  • Installation of the August 2017 RU patch on all database servers
  • Upgrade of the Grid Infrastructure to 12.2
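
A minimal sketch of the first two steps on one database node (directory names, the zip file name and the patch location are placeholders, not taken from this post):

# As the grid user on exadb1: out-of-place, image-based 12.2 installation
$ mkdir -p /u01/app/12.2.0.1/grid
$ unzip -q linuxx64_12201_grid_home.zip -d /u01/app/12.2.0.1/grid
$ cd /u01/app/12.2.0.1/grid
$ ./gridSetup.sh

# Then apply the August 2017 RU to the new home before the actual upgrade,
# for example with opatchauto (run as root); the exact commands follow in the full post:
# opatchauto apply /u01/stage/<RU_patch_dir> -oh /u01/app/12.2.0.1/grid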

Continue reading „Upgrade Grid Infrastructure 12.1 to 12.2 + Install the 2017 August Release Update (RU) on Exadata“

Steps to configure Oracle Data Guard 12.2 for non-multitenant database

This document describes the configuration of Oracle Data Guard version 12.2. We will configure a standby for a non-multitenant database.

Our test environment consists of two Linux (OEL 7.3) servers: oradg1 and oradg2. I already created the non-multitenant database on the server oradg1. We will configure the standby database on the server oradg2, using a new 12.2 feature: creating a physical standby via dbca, the Oracle Database Configuration Assistant.

Environment overview:

Primary server name/os: oradg1/Oracle Linux Server 7.3 (64 bit)

Standby server name/os: oradg2/Oracle Linux Server 7.3 (64 bit) 

RDBMS Version (primary and standby): 12.2.0.1.0 

ORACLE_HOME (primary and standby): /u01/app/oracle/product/12.2.0/dbhome_1 

Primary ORACLE_SID/DB_UNIQUE_NAME: ORCL1/ORCL1PRM 

Standby ORACLE_SID/DB_UNIQUE_NAME: ORCL1/ORCL1STB 

Listener name/port (primary and standby): LISTENER/1521 

Path to DB files (primary and standby): /u01/app/oracle/oradata/orcl1 

Recovery area (primary and standby): /u01/app/oracle/fast_recovery_area/orcl1
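
With the values above, the dbca call on the standby server oradg2 will look roughly like this (a sketch only; listener configuration and password handling are omitted and are described later in the post):

$ dbca -silent -createDuplicateDB \
    -gdbName ORCL1 \
    -primaryDBConnectionString oradg1:1521/ORCL1PRM \
    -sid ORCL1 \
    -createAsStandby \
    -dbUniqueName ORCL1STB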

Blueprint:

Preparing the primary database

Continue reading „Steps to configure Oracle Data Guard 12.2 for non-multitenant database“

How to configure the WebLogic AdminServer for High Availability

Introduction

The high availability of applications in the WebLogic environment is realized by clustering. Managed servers in the cluster work together. The information about transactions is distributed cluster-wide. If a cluster member fails, another server takes over the tasks of the failed server and executes them. In this way, the applications are kept running without interruption.

The AdminServer is a single point of failure: if it fails, the domain can no longer be administered:

– Configuration changes cannot be performed

– The administration console is not available

The managed servers keep running and can continue to work even if the AdminServer is not available; this requires the activation of MSI (Managed Server Independence) mode.

How can the AdminServer be protected from failure? In this blog post I will describe all the steps that are necessary to keep your server running safely.

Continue reading „How to configure the WebLogic AdminServer for High Availability“

Save the Date: DOAG 2017 IMC (Infrastruktur-Middleware) Days

On 07.09.2017 the middleware community will meet in Berlin to listen to exciting talks and to exchange ideas about current IT topics. Well-known speakers will share their experience on topics around Oracle middleware administration and give an overview of the current cloud trends.

Info and agenda here

See you in Berlin!

 

How to hide passwords and account information in WLST scripts (WebLogic Server)

If you are worried about your password showing up in clear text in the startup scripts, you can use the storeUserConfig command to generate an encrypted user configuration file and a user key file that you can then use with the „nmConnect“ and „connect“ commands.

Place these two files in a protected location and make sure to set restrictive permissions on them.
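
For example (the path and file names are placeholders; the files themselves are created in the steps below):

$ chmod 600 /u01/secure/nm_config.properties /u01/secure/nm_key.properties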

Create storeUserConfig for the Node Manager

Start wlst via the shell script $ORACLE_HOME/oracle_common/common/bin/wlst.sh:

/u01/app/oracle/product/FMW/Oracle_Home/oracle_common/common/bin/wlst.sh
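
Inside WLST the flow for the Node Manager looks roughly like this (a sketch; host, port, domain name, credentials and file paths are placeholders):

wls:/offline> nmConnect('nmadmin', '<password>', 'adminhost', 5556, 'mydomain', '/u01/app/oracle/domains/mydomain', 'ssl')
wls:/nm/mydomain> storeUserConfig('/u01/secure/nm_config.properties', '/u01/secure/nm_key.properties', 'true')
wls:/nm/mydomain> nmDisconnect()

Afterwards you can connect to the Node Manager without typing a clear-text password:

wls:/offline> nmConnect(userConfigFile='/u01/secure/nm_config.properties', userKeyFile='/u01/secure/nm_key.properties', host='adminhost', port=5556, domainName='mydomain', domainDir='/u01/app/oracle/domains/mydomain', nmType='ssl')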

Continue reading „How to hide passwords and account information in WLST scripts (WebLogic Server)“