Monday, 6 January 2020

RAC GRID Upgrade From 11.2.0.3/4 To 12.1.0.2

Upgrade grid from Oracle version 11.2.0.3/4 to 12.1.0.2.

Overview
Preinstallation

  • Verify Size Of OCR Disks
  • Create CRQ
  • Create Grid Directory
  • Clean the Software Directory
  • Copy the required files to one node of the RAC
  • Unzip the files
  • Pre-Requisite Check
  • Create grid response file
  • Deploy software
  • Verify Installation
  • Copy init.ora and password files
  • Complete preinstallation items

Rolling GI Upgrade
  • Stop Instances
  • Execute rootupgrade.sh
  • Verify CRS Version
  • Verify ASM Version
  • Start Instance
  • Repeat for each RAC Node
  • Configure Tools
  • Final Cluster Check


Apply the latest PSU
  • Update OPatch
  • Unzip Patch
  • Create OCM Response file

Post Installation
  • Remove 11.2.0.4 listener valid node registration
  • Update OEM
  • Move MGMTDB off the OCR Diskgroup
  • Close CRQ
*************************************************************************************************************


Overview

Upgrade grid from Oracle version 11.2.0.3/4 to 12.1.0.2. This includes conversion to Flex ASM.


Preinstallation

Verify Size Of OCR Disks

Ensure all OCR disks are at least 500 MB in size; add space if required. In addition, the current standard is five OCR disks, so add disks if needed. Five disks are not required, but recommended.
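
A minimal verification sketch as grid, assuming the OCR resides in an ASM diskgroup named OCRDG (the diskgroup name is a placeholder for your environment):

. oraenv                     # answer with the local +ASM SID to set the environment
ocrcheck                     # reports OCR version and total/used space
asmcmd lsdsk -k -G OCRDG     # lists the diskgroup's disks with their sizes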

Create CRQ

A CRQ is required for an upgrade of grid.

Create Grid Directory

As root on each node

mkdir -p /u01/app/12.1.0.2/grid
chown -R grid:oinstall /u01/app/12.1.0.2/grid
chmod -R 775 /u01/app/12.1.0.2/grid

Clean the software directory

Remove all previous installation software from /oraworkspace/software and change the directory group to dba so our personal IDs can copy software into the directory.

As root on each node

cd /oraworkspace/software
rm -rf database grid psu gpsu
chgrp dba /oraworkspace/software
chmod 775 /oraworkspace/software


Copy required files to one node of the RAC

Please use the latest versions.

Grid software files

p17694377_12120_Linux-x86-64_3of8.zip (grid #1)
p17694377_12120_Linux-x86-64_4of8.zip (grid #2)

PSU Patch software

p20485724_121020_Linux-x86-64.zip (PSU 3 - copy to all nodes)


OPatch software 

p6880880_121010_Linux-x86-64.zip (OPatch - copy to all nodes)
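
For example, the files can be pushed from a staging host (the hostnames racnode1/racnode2 are placeholders):

scp p17694377_12120_Linux-x86-64_3of8.zip p17694377_12120_Linux-x86-64_4of8.zip grid@racnode1:/oraworkspace/software/
for node in racnode1 racnode2; do
  scp p20485724_121020_Linux-x86-64.zip p6880880_121010_Linux-x86-64.zip grid@${node}:/oraworkspace/software/
done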

Unzip the files

As grid

cd /oraworkspace/software
unzip p17694377_12120_Linux-x86-64_3of8.zip
unzip p17694377_12120_Linux-x86-64_4of8.zip

Pre-Requisite Check

To upgrade from a previous version to 12.1.0.2, make sure all the prerequisite checks are met. The prerequisite check may fail for some patches. It uses the following syntax.

As grid 

cd /oraworkspace/software/grid
./runcluvfy.sh stage -pre crsinst -upgrade [-n node_list] [-rolling] -src_crshome src_Gridhome \
  -dest_crshome dest_Gridhome -dest_version dest_version [-fixupnoexec] [-verbose]
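
For example, for this upgrade (a sketch using the grid homes from this document):

./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
  -src_crshome /u01/app/11.2.0.4/grid \
  -dest_crshome /u01/app/12.1.0.2/grid \
  -dest_version 12.1.0.2.0 -fixupnoexec -verbose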

# Note that there are 6 dependencies for nfs-utils to be installed.

# yum -y install nfs-utils
Loaded plugins: product-id,rhnplugin,security,subscription-manager
.....................
Installed:
nfs-utils.x86_64 1:1.2.3-39.el6_5.3
Dependency Installed:
keyutils.x86_64 0:1.4-4.el6        libevent.x86_64 0:1.4.13-4.el6             libgssglue.x86_64 0:0.1-11.el6
libtirpc.x86_64 0:0.2.1-6.el6_5.2  nfs-utils-lib.x86_64 0:1.1.5-6.el6_5       rpcbind.x86_64 0:0.2.0-11.el6

Complete!

#/tmp/CVU_12.1.0.2.0_grid/runfixup.sh
All fix-up operations were completed successfully 


# Typically the /etc/sysconfig/network file is unreadable by grid, so the zeroconf check will always fail.
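
The setting itself can be confirmed directly as root; the zeroconf check looks for the following entry:

# as root
grep -i NOZEROCONF /etc/sysconfig/network    # expected: NOZEROCONF=yes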

Create grid response file

As grid

cd /oraworkspace/software
grep -i ^[A-Z] grid/response/grid_install.rsp | sort > grid.rsp


Modify the following variables as per the RAC environment you are upgrading.

INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/12.1.0.2/grid
oracle.install.asm.diskGroup.AUSize=1
oracle.install.asm.OSASM=asmadmin
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip
oracle.install.crs.config.ClusterType=STANDARD
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.useIPMI=false
oracle.install.option=UPGRADE


Deploy software

As grid
cd /oraworkspace/software/grid
./runInstaller -silent -responseFile /oraworkspace/software/grid.rsp -showProgress -waitForCompletion

Example:

cd /oraworkspace/software/grid
./runInstaller -silent -responseFile /oraworkspace/software/grid.rsp -showProgress -waitForCompletion
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 415 MB.   Actual 1655 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 31923 MB   Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2019-12-31_14-23-34PM. Please wait ...[WARNING] [INS-13014]
Cause: Some of the optional prerequisites are not met. See the logs for details: /u01/app/oraInventory/logs/installActions2019-12-31_14-23-34PM.log
Action: Identify the list of failed prerequisite checks from the log /u01/app/oraInventory/logs/installActions2019-12-31_14-23-34PM.log

You can find the log of this install session at:
/u01/app/oraInventory/logs/installActions2019-12-31_14-23-34PM.log
Prepare in progress.
........................................................7% Done.
........................................................19% Done.
........................................................25% Done.
........................................................30% Done.
........................................................35% Done.
.........................
Copy files successful
Link binaries in progress
........................................................100% Done.
Successfully setup software.
As install user, execute the following script to complete the configuration.

Note:

1. This script must be run on the same host from where installer was run.
2. This script needs a small password properties file for configuration assistants that require passwords.

Verify installation

As grid

Verify the software installed on all nodes.

if [ -f /etc/oratab ]; then
  ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
  if [ ${#ORACLE_SID} -gt 0 ]; then
    ORAENV_ASK="NO"
    . oraenv -s
    unset ORAENV_ASK
    for node in `${ORACLE_HOME}/bin/olsnodes`; do
      ssh -q ${node} du -sk /u01/app/12.1.0.2/grid
    done
  fi
fi

The output should look something like the following, where the directory sizes are similar:

6848660  /u01/app/12.1.0.2/grid
6847272  /u01/app/12.1.0.2/grid

Verify the software, release, and active versions.

if [ -f /etc/oratab ]; then
  ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
  if [ ${#ORACLE_SID} -gt 0 ]; then
    ORAENV_ASK="NO"
    . oraenv -s
    unset ORAENV_ASK
    ${ORACLE_HOME}/bin/crsctl query crs activeversion
    ${ORACLE_HOME}/bin/crsctl query crs releaseversion
    for node in `${ORACLE_HOME}/bin/olsnodes`; do
      ${ORACLE_HOME}/bin/crsctl query crs softwareversion ${node}
    done
  fi
fi

Expect output similar to:

Oracle Clusterware active version on the cluster is [11.2.0.4.0]
Oracle High Availability Services release version on the local node is [11.2.0.4.0]
Oracle Clusterware version on node [oradb1] is [11.2.0.4.0]
Oracle Clusterware version on node [oradb2] is [11.2.0.4.0]

Copy init.ora and password files

Copy the init.ora and password files from the current grid home to the new one

As grid

cp -p /u01/app/11.2.0.4/grid/init+ASM1.ora /u01/app/12.1.0.2/grid/init+ASM1.ora 

cp -p /u01/app/11.2.0.4/grid/passwd+ASM1.ora /u01/app/12.1.0.2/grid/passwd+ASM1.ora

Complete preinstallation items

At this point the software is deployed on all nodes and ready to be used. The following steps require a rolling outage by node.

Rolling GI Upgrade

On the first node to be upgraded:

Stop Instances

As grid

if [ -f /etc/oratab ]; then
  ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
  if [ ${#ORACLE_SID} -gt 0 ]; then
    ORAENV_ASK="NO"
    . oraenv -s
    unset ORAENV_ASK
    for db in `srvctl config database`; do
      echo "stop instance for database ${db}"
      srvctl stop instance -d ${db} -n `hostname -s`
    done
  fi
fi

Execute rootupgrade.sh

As root

/u01/app/12.1.0.2/grid/rootupgrade.sh


Example Output:

Check /u01/app/12.1.0.2/grid/install/root_oradb1_2019-12-31_16-02-46.log for the output of root script

Verify CRS Version

As grid

if [ -f /etc/oratab ]; then
  ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
  if [ ${#ORACLE_SID} -gt 0 ]; then
    ORAENV_ASK="NO"
    . oraenv -s
    unset ORAENV_ASK
    ${ORACLE_HOME}/bin/crsctl query crs activeversion
    ${ORACLE_HOME}/bin/crsctl query crs releaseversion
    for node in `${ORACLE_HOME}/bin/olsnodes`; do
      ${ORACLE_HOME}/bin/crsctl query crs softwareversion ${node}
    done
  fi
fi

Example output (all nodes except the last):

Oracle Clusterware active version on the cluster is [11.2.0.4.0]
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
Oracle Clusterware version on node [oradb1] is [12.1.0.2.0]
Oracle Clusterware version on node [oradb2] is [11.2.0.4.0]

Example output (last node):

Oracle Clusterware active version on the cluster is [11.2.0.4.0]
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
Oracle Clusterware version on node [oradb1] is [12.1.0.2.0]
Oracle Clusterware version on node [oradb2] is [12.1.0.2.0]

Verify ASM Version

As grid


if [ -f /etc/oratab ]; then
  ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
  if [ ${#ORACLE_SID} -gt 0 ]; then
    ORAENV_ASK="NO"
    . oraenv -s
    unset ORAENV_ASK
    ${ORACLE_HOME}/bin/sqlplus /nolog <<EndSQL
connect / as sysasm
set linesize 91
select * from v\$version;
exit
EndSQL
  fi
fi

Example Output:

Banner                                                                       CON_ID
---------------------------------------------------------------------------  ------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production  0
PL/SQL Release 12.1.0.2.0 - Production                                       0
CORE  12.1.0.2.0 - Production                                                0
TNS for Linux: Version 12.1.0.2.0 - Production                               0
NLSRTL Version 12.1.0.2.0 - Production                                       0

Start Instances

As grid

if [ -f /etc/oratab ]; then
  ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
  if [ ${#ORACLE_SID} -gt 0 ]; then
    ORAENV_ASK="NO"
    . oraenv -s
    unset ORAENV_ASK
    for db in `srvctl config database`; do
      echo "start instance for database ${db}"
      srvctl start instance -d ${db} -n `hostname -s`
    done
  fi
fi

Repeat for each RAC Node

As grid on each node


  • Stop instance
  • Execute rootupgrade.sh
  • Verify CRS version
  • Verify ASM Version
  • Start Instance

Configure Tools

As grid


cd /oraworkspace/software
echo "oracle.assistants.asm|S_ASMPASSWORD=Sysdba12"                          >CRSTool.conf
echo "oracle.assistants.asm|S_ASMMONITORPASSWORD=Sysdba12"                   >>CRSTool.conf

/u01/app/12.1.0.2/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=CRSTool.conf


Example Output:

Setting the invPtrLoc to  /u01/app/12.1.0.2/grid/oraInst.loc

perform -mode is starting for action : configure
Dec 31,2019 7:17:09 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO:UpdateNodelist data:
Dec 31,2019 7:17:09 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO:oracle.installer.oui_loc:/u01/app/12.1.0.2/grid/oui
........................................................
INFO:Read: Look at the log file "/u01/app/12.1.0.2/grid/cfgtoollogs/dbca/_mgmtdb/<DBName>/_mgmtdb.log
Dec 31,2019 7:26:26 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess

You can see log file:/u01/app/12.1.0.2/grid/cfgtoollogs/oui/configActions2019-12-31_07-17-08PM.log


Final Cluster Check

As grid

if [ -f /etc/oratab ]; then
  ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
  if [ ${#ORACLE_SID} -gt 0 ]; then
    ORAENV_ASK="NO"
    . oraenv -s
    unset ORAENV_ASK
    for node in `${ORACLE_HOME}/bin/olsnodes`; do
      crsctl query crs softwareversion ${node}
    done
    crsctl query crs activeversion
    crsctl query crs releaseversion
    crsctl check cluster -all
    srvctl status asm
  fi
fi


Example Output:

Oracle Clusterware version on node [oradb1] is [12.1.0.2.0]
Oracle Clusterware version on node [oradb2] is [12.1.0.2.0]
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
************************************************************************************************
oradb1:
CRS-4537:Cluster Ready Services is online
CRS-4529:Cluster Synchronization  Services is online
CRS-4533:Event Manager is online
************************************************************************************************
oradb2:
CRS-4537:Cluster Ready Services is online
CRS-4529:Cluster Synchronization  Services is online
CRS-4533:Event Manager is online

************************************************************************************************
ASM is running on oradb1,oradb2


Apply the latest PSU

Apply the latest PSU in a rolling fashion. This will shut down the instances and grid on each node being patched.

Update OPatch

As grid

cd ${ORACLE_HOME}
unzip -o /oraworkspace/software/p6880880_121010_Linux-x86-64.zip
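
Then confirm the new OPatch version took effect:

${ORACLE_HOME}/OPatch/opatch version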


Unzip Patch

As grid

cd /oraworkspace/software
mkdir gpsu
cd gpsu
unzip ../p20485724_121020_Linux-x86-64.zip


Create OCM response file

Check for an existing OCM response file: ls -lt /tmp/ocm.rsp /var/tmp/ocm.rsp /oraworkspace/software/ocm.rsp

If ocm.rsp doesn't exist, then create it.

As grid

The following command has two prompts:

1. Email address/User Name: Leave blank
2. Do you wish to remain uninformed of security issues (Yes, No) [N]: Answer "Y"

ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
if [ ${#ORACLE_SID} -gt 0 ]; then
  ORAENV_ASK="NO"
  . oraenv -s
  unset ORAENV_ASK
  cd ${ORACLE_HOME}/OPatch/ocm/bin
  ./emocmrsp -no_banner -output /oraworkspace/software/ocm.rsp
fi


Analyze Patch

As root

ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
if [ ${#ORACLE_SID} -gt 0 ]; then
  ORAENV_ASK="NO"
  . oraenv -s
  unset ORAENV_ASK
  ${ORACLE_HOME}/OPatch/opatchauto apply /oraworkspace/software/gpsu/20485724 -analyze -ocmrf /oraworkspace/software/ocm.rsp
fi

Example Output

OPatch Automation Tool
Copyright (C) 2014, Oracle Corporation . All rights reserved.

OPatchauto Version  :12.1.0.1.5
OUI Version         :12.1.0.2.0
Running from        :/u01/app/12.1.0.2/grid      
opatchauto log file :/u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/20485724/opatch_gi_2019-12-31_20-31-12_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system

Parameter validation:Successful

Grid Infrastructure home:
/u01/app/12.1.0.2/grid 

Configuration Validation : Successful

Patch Location:/oraworkspace/software/gpsu/20485724
Grid Infrastructure Patch(es):20299023 19392590 19392604
RAC Patch(es): 20299023 19392604


Patch Validation:Successful  

Analyzing Patch(es) on "/u01/app/12.1.0.2/grid"...
Patch "/oraworkspace/software/gpsu/20485724/20299023" successfully analyzed on "/u01/app/12.1.0.2/grid" for apply.
Patch "/oraworkspace/software/gpsu/20485724/19392590" successfully analyzed on "/u01/app/12.1.0.2/grid" for apply.
Patch "/oraworkspace/software/gpsu/20485724/19392604" successfully analyzed on "/u01/app/12.1.0.2/grid" for apply.

Apply Summary:
Following patch(es) successfully analyzed :
GI Home: /u01/app/12.1.0.2/grid:20299023,19392590,19392604
opatchauto succeeded 


Stop Instances:

As oracle or grid

ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
if [ ${#ORACLE_SID} -gt 0 ]; then
  ORAENV_ASK="NO"
  . oraenv -s
  unset ORAENV_ASK
  for db in `srvctl config database`; do
    echo "stop instance for database ${db}"
    srvctl stop instance -d ${db} -n `hostname -s`
  done
fi

Shut down any ACFS file systems

As root

Note that these instructions may be incomplete



srvctl config filesystem
# From the above information, stop the ACFS file systems
srvctl stop filesystem -d /dev/asm/ggatevol-398
# Verify ACFS is unmounted on all nodes

Example:
# mount | grep /gg/GG11

Verify ACFS is properly stopped - you should be able to stop/start CRS with no errors.

# May need "-f" to force the stop below.
crsctl stop crs
# Restart before continuing
crsctl start crs


Apply Grid PSU

As root

Set environment and apply patch


ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
if [ ${#ORACLE_SID} -gt 0 ]; then
  ORAENV_ASK="NO"
  . oraenv -s
  unset ORAENV_ASK
  ${ORACLE_HOME}/OPatch/opatchauto apply /oraworkspace/software/gpsu/20485724 -oh ${ORACLE_HOME} -ocmrf /oraworkspace/software/ocm.rsp
fi

The above OPatch command takes 15 to 20 minutes to run.

Example Output:


OPatch Automation Tool
Copyright (C) 2014, Oracle Corporation . All rights reserved.

OPatchauto Version  :12.1.0.1.5
OUI Version         :12.1.0.2.0
Running from        :/u01/app/12.1.0.2/grid      
opatchauto log file :/u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/20485724/opatch_gi_2019-12-31_20-31-12_deploy.log


Parameter validation:Successful

User specified following Grid Infrastructure home:
/u01/app/12.1.0.2/grid 



Configuration Validation : Successful

Patch Location:/oraworkspace/software/gpsu/20485724
Grid Infrastructure Patch(es):20299023 19392590 19392604
RAC Patch(es): 20299023 19392604


Patch Validation:Successful  

Applying Patch(es) on "/u01/app/12.1.0.2/grid"...
Patch "/oraworkspace/software/gpsu/20485724/20299023" successfully applied to "/u01/app/12.1.0.2/grid".
Patch "/oraworkspace/software/gpsu/20485724/19392590" successfully applied to "/u01/app/12.1.0.2/grid".
Patch "/oraworkspace/software/gpsu/20485724/19392604" successfully applied to "/u01/app/12.1.0.2/grid".

Starting CRS.....Successful

Apply Summary:
Following patch(es) successfully installed :
GI Home: /u01/app/12.1.0.2/grid:20299023,19392590,19392604
opatchauto succeeded 


Start Instance

As oracle or grid

ORACLE_SID="`grep -i ^+[A-Z] /etc/oratab | head -1 | cut -d: -f1`"
if [ ${#ORACLE_SID} -gt 0 ]; then
  ORAENV_ASK="NO"
  . oraenv -s
  unset ORAENV_ASK
  for db in `srvctl config database`; do
    echo "start instance for database ${db}"
    srvctl start instance -d ${db} -n `hostname -s`
  done
fi

Repeat on next node

Copy OPatch and the OCM response file to the next node.
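
A minimal copy sketch, assuming passwordless ssh between the nodes (node2 is a placeholder):

scp /oraworkspace/software/p6880880_121010_Linux-x86-64.zip grid@node2:/oraworkspace/software/
scp /oraworkspace/software/ocm.rsp grid@node2:/oraworkspace/software/

Then repeat: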

  • Update OPatch
  • Unzip Patch
  • Stop Instance
  • Apply Grid PSU
  • Start Instance
Reset iiAgent Symlinks

Reset the iiAgent symlinks from the old grid home to the new grid home.


As root on all nodes

cd /usr/Tivoli/ABC
rm -f oracle_crsd.log oracle_ocssd.log
ln -s /u01/app/12.1.0.2/grid/log/`hostname -s`/crsd/crsd.log oracle_crsd.log
ln -s /u01/app/12.1.0.2/grid/log/`hostname -s`/cssd/ocssd.log oracle_ocssd.log

Remove TFA on all nodes

Follow the link below to remove TFA.

Post Installation

Remove 11.2.0.4 listener valid node registration

As grid on all nodes

sed -i '/VALID_NODE_CHECKING_REGISTRATION_LISTENER/d' ${ORACLE_HOME}/network/admin/listener.ora
sed -i '/REGISTRATION_INVITED_NODES_LISTENER/d' ${ORACLE_HOME}/network/admin/listener.ora
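
Optionally confirm the entries are gone (no output expected):

grep -iE 'VALID_NODE_CHECKING_REGISTRATION|REGISTRATION_INVITED_NODES' ${ORACLE_HOME}/network/admin/listener.ora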

Update OEM

The listener and ASM configurations must be updated with the new ORACLE_HOME.


• Update Listener
  • Open OEM and enter the hostname in the "Search target name" box in the upper right
  • Select the "Target Name" corresponding with the "Listener" "Target Type"
  • From the "Oracle Listener" drop-down (upper left) select "Target Setup"/"Monitoring Configuration"
  • Update the directory paths for the "Listener.ora Directory" and "Oracle Home" fields

• Update ASM
  • Select the "Target Name" corresponding with the "Automatic Storage Management" "Target Type"
  • From the "Automatic Storage Management" drop-down (upper left) select "Target Setup"/"Monitoring Configuration"
  • Update the value of the "ORACLE HOME PATH". Test the connection to ensure all is in order
  • Click OK to save

Move MGMTDB off the OCR Diskgroup

Check the location of the datafiles for the -MGMTDB database. If they are not on DATADG (they generally default to the OCR diskgroup), remove the existing MGMTDB and recreate it on the DATADG diskgroup. If they are already on DATADG, this section can be ignored.

The Grid Infrastructure Management Repository is automatically installed with Oracle Grid Infrastructure 12c Release 1.
It enables features such as Cluster Health Monitor, Oracle QoS Management, and Rapid Home Provisioning, and provides a historical metric repository that simplifies viewing of past performance and diagnosis of issues.

This capability is fully integrated into Oracle Enterprise Manager Cloud Control for seamless management. It is an Oracle single-instance database managed by Grid Infrastructure that fails over to a surviving node if the hosting node crashes.
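
A minimal sketch of the datafile check, run as grid on the node hosting the management database (see step 2 below to locate it):

export ORACLE_SID=-MGMTDB
export ORACLE_HOME="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f2`"
${ORACLE_HOME}/bin/sqlplus -s / as sysdba <<EndSQL
select name from v\$datafile;
exit
EndSQL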

1. On each node as root user

 export ORACLE_SID="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f1`"
 export ORACLE_HOME="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f2`"
 ${ORACLE_HOME}/bin/crsctl stop res ora.crf -init
 ${ORACLE_HOME}/bin/crsctl modify res ora.crf -attr ENABLED=0 -init

2. Locate the node the management database is running on, as grid

 export ORACLE_SID="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f1`"
 export ORACLE_HOME="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f2`"
 ${ORACLE_HOME}/bin/srvctl status mgmtdb

3. From the node where the ora.mgmtdb resource is running, as grid

 export ORACLE_SID="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f1`"
 export ORACLE_HOME="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f2`"
 ${ORACLE_HOME}/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB

4. From the node where the ora.mgmtdb resource was running, as grid. The template and extra flags below are reconstructed from the standard MGMTDB recreation procedure; adjust DATADG to your target diskgroup.

 export ORACLE_SID="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f1`"
 export ORACLE_HOME="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f2`"
 ${ORACLE_HOME}/bin/dbca -silent -createDatabase -sid -MGMTDB -createAsContainerDatabase true \
  -templateName MGMTSeed_Database.dbc -gdbName _mgmtdb -storageType ASM \
  -diskGroupName DATADG -autoGeneratePasswords

5. Create the PDB within MGMTDB using DBCA, as grid.
   Make sure the cluster name is properly fetched by the code below.
 export ORACLE_SID="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f1`"
 export ORACLE_HOME="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f2`"
 ${ORACLE_HOME}/bin/cemutlo -n | tr '[a-z]' '[A-Z]' | tr '-' '_'

NOTE: The CLUSTER_NAME needs to have hyphens ("-") replaced with underscores ("_"); the tr '-' '_' above handles this.
If the above code properly displays the cluster name, then cut and paste the code below and run it to create the pluggable database. If it didn't display properly, replace the backquoted cemutlo expression with the cluster name.

 export ORACLE_SID="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f1`"
 export ORACLE_HOME="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f2`"
 ${ORACLE_HOME}/bin/dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName `${ORACLE_HOME}/bin/cemutlo -n | tr '[a-z]' '[A-Z]' | tr '-' '_'`

6. Confirm the node on which MGMTDB is running, as grid

 export ORACLE_SID="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f1`"
 export ORACLE_HOME="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f2`"
 ${ORACLE_HOME}/bin/srvctl status mgmtdb

7. Populate the management database, as grid

 export ORACLE_SID="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f1`"
 export ORACLE_HOME="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f2`"
 ${ORACLE_HOME}/bin/mgmtca

No output is expected if it executes without error.

8. Enable and start the ora.crf resource on each node, as root

 export ORACLE_SID="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f1`"
 export ORACLE_HOME="`grep ^+ASM /etc/oratab | head -1 | cut -d: -f2`"
 ${ORACLE_HOME}/bin/crsctl modify res ora.crf -attr ENABLED=1 -init
 ${ORACLE_HOME}/bin/crsctl start res ora.crf -init
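
Optionally verify the resource is back online:

 ${ORACLE_HOME}/bin/crsctl stat res ora.crf -init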

Close CRQ
