Cloud Control: the Graph in the Performance Top Activity view shrinks to the left

Sometimes I have a problem with the view of the Top Activity page in Cloud Control:

The graph shrinks to the left and you can see only part of the picture:

Continue reading “Cloud Control: the Graph in the Performance Top Activity view shrinks to the left”

KVM Virtualization on top of a VMware Workstation 12 guest VM

Kernel-based Virtual Machine (KVM) makes it possible to use the Oracle (or Red Hat/CentOS) Linux kernel as a hypervisor.

In my example, I will configure KVM virtualization inside a guest VM running OEL 7.4. The guest VM itself runs on VMware Workstation 12.

To enable the virtualization layer, open the VM settings and navigate to the Processor settings. Enable the option “Virtualize Intel VT-x/EPT or AMD-V/RVI”.
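
If you prefer to set this without the GUI, the same option can be placed directly in the VM’s .vmx configuration file while the VM is powered off. This is only a sketch based on my understanding of the Workstation configuration format; verify the entry against your Workstation version:

vhv.enable = "TRUE"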

The guest VM is now enabled for nested virtualization (such as KVM). Start the guest VM.

Install and configure KVM virtualization on OEL 7.4 Linux
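
In outline, the installation looks roughly like the following sketch. The package set and the hostname in the prompt are placeholders of mine; the full configuration is described behind the link below:

# Verify that the hardware virtualization CPU flags are visible inside the guest
[root@oel74 ~]# grep -E -c 'vmx|svm' /proc/cpuinfo

# Install the KVM/libvirt packages and start the libvirt daemon
[root@oel74 ~]# yum install -y qemu-kvm libvirt virt-install
[root@oel74 ~]# systemctl enable libvirtd
[root@oel74 ~]# systemctl start libvirtd

# Check that the kvm kernel modules are loaded
[root@oel74 ~]# lsmod | grep kvm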

Continue reading “KVM Virtualization on top of a VMware Workstation 12 guest VM”

Extend Filesystem /u01 on an Exadata ComputeNode

Do you know the problem: an Exadata compute node does not have enough space in the /u01 filesystem? The compute nodes have local storage, so you can increase /u01.

Steps:

1. Check the filesystem and the associated volume:

[root@myexadb01]# df -h

Filesystem                      Size    Used    Avail    Use%    Mounted on
/dev/mapper/VGExaDb-LVDbSys1     30G    23G    5.7G    80%    /
tmpfs                           757G    4.0K    757G    1%    /dev/shm
/dev/sda1                       488M    48M    405M    11%    /boot
/dev/mapper/VGExaDb-LVDbOra1    158G    107G    43G    72%    /u01

2. Check the free space in the volume group:

[root@myexadb01]# pvs
PV         VG        Fmt    Attr    PSize    PFree
/dev/sda2    VGExaDb   lvm2   a--u    1.63t    1.38t
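
There is enough free space in VGExaDb, so the logical volume and the filesystem can be grown online. The following is only a sketch with an example size of +50G; check first whether /u01 is really ext4 on your node (on newer images with xfs you would use xfs_growfs instead of resize2fs):

# Extend the logical volume behind /u01 by 50 GB, then grow the filesystem
[root@myexadb01]# lvextend -L +50G /dev/VGExaDb/LVDbOra1
[root@myexadb01]# resize2fs /dev/VGExaDb/LVDbOra1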

Continue reading “Extend Filesystem /u01 on an Exadata ComputeNode”

Steps to configure Oracle Data Guard version 12.2 for a pluggable database

This document describes the configuration of Oracle Data Guard version 12.2. We will configure a standby for a pluggable database.

My test environment consists of two Linux (OEL 7.3) servers: oradg1 and oradg2. I have already created a pluggable database on the server oradg1. The database consists of the following containers:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> set lines 200
SQL> col name format a20
SQL> select con_id, name,open_mode, restricted from v$pdbs;

CON_ID NAME         OPEN_MODE RES
---------- -------------------- ---------- ---
     2 PDB$SEED        READ ONLY NO
     3 ORCLPDB1        READ WRITE NO
     4 ORCLPDB2        READ WRITE NO

SQL> select PDB_ID,PDB_NAME,CON_ID,STATUS,LOGGING,FORCE_LOGGING from dba_pdbs;

PDB_ID PDB_NAME         CON_ID STATUS LOGGING    FOR
---------- -------------------- ---------- ---------- --------- ---
     2 PDB$SEED             2 NORMAL LOGGING    NO
     3 ORCLPDB1             3 NORMAL LOGGING    NO
     4 ORCLPDB2             4 NORMAL LOGGING    NO
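
The FORCE_LOGGING column still shows NO here. Before building the standby I would normally enable force logging at the CDB level, so that all redo needed by the standby is guaranteed to be generated. A minimal sketch; adapt it to your own logging requirements:

SQL> alter database force logging;
SQL> select force_logging from v$database;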

The following tasks will be executed:

  • Configure the standby database on the server oradg2.
  • Create and drop a PDB on the primary database and verify what happens on the standby side.
  • Execute the switchover tests.

In my previous post Steps to configure Oracle Data Guard 12.2 for non-multitenant database I created the physical standby for a non-multitenant database using the Database Configuration Assistant, dbca (a new 12.2 feature). Unfortunately, this feature is not available for a pluggable database, so we will create the physical standby with the RMAN utility.
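
For orientation, the RMAN step will look roughly like this. The TNS aliases cdbprm and cdbstb are placeholders I made up for the primary and the standby CDB; the exact duplicate clauses and the preparation of the auxiliary instance are shown in the full post:

RMAN> connect target sys@cdbprm
RMAN> connect auxiliary sys@cdbstb
RMAN> duplicate target database for standby from active database nofilenamecheck dorecover;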

Continue reading “Steps to configure Oracle Data Guard version 12.2 for a pluggable database”

Upgrade Grid Infrastructure 12.1 to 12.2 + Install the 2017 August Release Update (RU) on Exadata

Preface

This document describes the upgrade of the Grid Infrastructure (GI) from 12.1 to 12.2, including the installation of the August 2017 Release Update (RU).

The Exadata consists of:

  • Two db nodes: exadb1/exadb2
  • Three cell servers

The software release of Exadata components:

Infiniband Software version:    2.2.6-2 (July 2017)
Storage cells version:          12.2.1.1.2.170714
DB node version:                12.2.1.1.2.170714
GI Version:                     12.1.0.2.170418
RDBMS Version:                  12.1.0.2.170418
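
For reference, this is roughly how I collect these versions on a database node. The Grid home path and the cell group file cell_group are assumptions and depend on your installation:

[root@exadb1]# imageinfo
[root@exadb1]# /u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion
[root@exadb1]# /u01/app/12.1.0.2/grid/OPatch/opatch lspatches
[root@exadb1]# dcli -g cell_group -l root imageinfo | grep "Active image version"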

The order of actions:

  • Out-of-place installation of Grid Infrastructure 12.2 on all database servers
  • Installation of the August 2017 RU patch on all database servers
  • Upgrade of the Grid Infrastructure to 12.2

Continue reading “Upgrade Grid Infrastructure 12.1 to 12.2 + Install the 2017 August Release Update (RU) on Exadata”

Steps to configure Oracle Data Guard 12.2 for non-multitenant database

This document describes the configuration of Oracle Data Guard version 12.2. We will configure a standby for a non-multitenant database.

Our test environment consists of two Linux (OEL 7.3) servers: oradg1 and oradg2. I have already created the non-multitenant database on the server oradg1. We will configure the standby database on the server oradg2, using a new 12.2 feature: creating a physical standby via dbca, the Oracle Database Configuration Assistant.

Environment overview:

Primary server name/os: oradg1/Oracle Linux Server 7.3 (64 bit)

Standby server name/os: oradg2/Oracle Linux Server 7.3 (64 bit) 

RDBMS Version (primary and standby): 12.2.0.1.0 

ORACLE_HOME (primary and standby): /u01/app/oracle/product/12.2.0/dbhome_1 

Primary ORACLE_SID/DB_UNIQUE_NAME: ORCL1/ORCL1PRM 

Standby ORACLE_SID/DB_UNIQUE_NAME: ORCL1/ORCL1STB 

Listener name/port (primary and standby): LISTENER/1521 

Path to DB files (primary and standby): /u01/app/oracle/oradata/orcl1 

Recovery area (primary and standby): /u01/app/oracle/fast_recovery_area/orcl1
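
Using the names from the overview above, the dbca call on the standby server oradg2 will look roughly like this. This is only a sketch; the exact switches, prompts, and the preparation of the primary are covered in the blueprint below:

[oracle@oradg2]$ dbca -silent -createDuplicateDB \
    -gdbName ORCL1 -sid ORCL1 \
    -primaryDBConnectionString oradg1:1521/ORCL1PRM \
    -createAsStandby -dbUniqueName ORCL1STB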

Blueprint:

Preparing the primary database

Continue reading “Steps to configure Oracle Data Guard 12.2 for non-multitenant database”

How to configure the WebLogic AdminServer for High Availability

Introduction

The high availability of applications in the WebLogic environment is realized by clustering. Managed servers in the cluster work together. The information about transactions is distributed cluster-wide. If a cluster member fails, another server takes over the tasks of the failed server and executes them. In this way, the applications are kept running without interruption.

The AdminServer is a single point of failure: if the server fails, the Domain is no longer administrable:

– Configuration changes cannot be performed

– The administration console is not available

The managed servers are still running and can continue to work, even if the AdminServer is not available: this requires the activation of MSI (Managed Server Independence) Mode.

How can the AdminServer be protected from failure? In this blog post I will describe all the steps necessary to keep your server running safely.

Continue reading “How to configure the WebLogic AdminServer for High Availability”