Friday, August 27, 2021

Duplicate database on ODA (source and target databases will reside on the same server)


For this duplicate-database test on ODA, the source database is MANGO and the duplicate database name is clonedb.

1. Preparation

1.1 Gather source database information.

First gather MANGO database details such as:
  1. datafile locations.
  2. number of redo log files.
  3. db_recovery_file_dest location.

Check datafile location:


set lines 800
col file_name for a100
select file_name,bytes/1024/1024 MB ,maxbytes/1024/1024 MB from dba_data_files;

FILE_NAME                                                                                                    MB         MB
---------------------------------------------------------------------------------------------------- ---------- ----------
/u02/app/oracle/oradata/datastore/.ACFS/snaps/mango/MANGO/datafile/o1_mf_system_hplvqsyj_.dbf               700 32767.9844
/u02/app/oracle/oradata/datastore/.ACFS/snaps/mango/MANGO/datafile/o1_mf_sysaux_hplvqz98_.dbf              1530 32767.9844
/u02/app/oracle/oradata/datastore/.ACFS/snaps/mango/MANGO/datafile/o1_mf_undotbs1_hplvr39f_.dbf             460 32767.9844
/u02/app/oracle/oradata/datastore/.ACFS/snaps/mango/MANGO/datafile/o1_mf_undotbs2_hplvrgpc_.dbf             200 32767.9844
/u02/app/oracle/oradata/datastore/.ACFS/snaps/mango/MANGO/datafile/o1_mf_users_hplvrj49_.dbf                  5 32767.9844
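As a quick sanity check, the datafile sizes above can be totalled to estimate how much space the duplicate will need; a minimal sketch (the MB values are taken from the listing, the script itself is my own):

```shell
# Sum the datafile sizes (MB) reported by dba_data_files above to
# estimate the space the duplicate database will need.
total_mb=$(awk '{s+=$1} END {print s}' <<'EOF'
700
1530
460
200
5
EOF
)
echo "datafiles total: ${total_mb} MB"
```

Compare this total against the free space reported later by oakcli show dbstorage before creating the clone storage.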

Check redo log file locations:

SQL> show parameter db_uni

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      mango
set lines 700
col member for a100
select a.member, b.bytes/1024/1024 MB from v$logfile a, v$log b where a.group# = b.group#;

  MEMBER                                                                                   MB
-------------------------------------------------------------------------------- ----------
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_1_hplvqqz6_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_1_hplvqqz6_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_1_hplvqqz6_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_1_hplvqqz6_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_2_hplvqrvn_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_2_hplvqrvn_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_2_hplvqrvn_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_2_hplvqrvn_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_3_hplwd0s5_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_3_hplwd0s5_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_3_hplwd0s5_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_3_hplwd0s5_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_4_hplwd1t0_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_4_hplwd1t0_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_4_hplwd1t0_.log          1024
/u01/app/oracle/oradata/datastore/mango/MANGO/onlinelog/o1_mf_4_hplwd1t0_.log          1024

Fast recovery area location (db_recovery_file_dest):

SQL> show parameter db_rec

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u01/app/oracle/fast_recovery_
                                                 area/datastore/mango

1.2 Backup source database.

We need a backup of the source database. If no scheduled backup exists, use the RUN block below to back up the source database.

############## Backup source database

RUN
{
  ALLOCATE CHANNEL ch1 TYPE DISK MAXPIECESIZE 5G;
  ALLOCATE CHANNEL ch2 TYPE DISK MAXPIECESIZE 5G;
  BACKUP 
  FORMAT '/db_backup/MANGO_BKP/%d_D_%T_%u_s%s_p%p'
  DATABASE
  CURRENT CONTROLFILE 
  FORMAT '/db_backup/MANGO_BKP/%d_C_%T_%u' 
  SPFILE 
  FORMAT '/db_backup/MANGO_BKP/%d_S_%T_%u' 
  PLUS ARCHIVELOG 
  FORMAT '/db_backup/MANGO_BKP/%d_A_%T_%u_s%s_p%p'; 
  RELEASE CHANNEL ch2;
}
If you want to take a separate archivelog backup, use the RUN block below.

############### archive backup 

run 
{ 
allocate channel c1 type disk format '/db_backup/MANGO_BKP/arc_%U'; 
BACKUP ARCHIVELOG ALL TAG ARCH;
} 
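For reference, the FORMAT masks in the two RUN blocks above use standard RMAN substitution variables; this sketch just prints a legend:

```shell
# Legend for the RMAN FORMAT substitution variables used above.
legend='%d  database name
%T  date in YYYYMMDD format
%u  8-character name derived from the backup set number and creation time
%s  backup set number
%p  piece number within the backup set'
echo "$legend"
```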



1.3 Create database storage for the duplicate database.

########### create directory for duplicate

oakcli create dbstorage -db CLONEDB

[root@ecl-odabase-0 ~]# oakcli create dbstorage -db CLONEDB
INFO: 2021-08-20 13:20:17: Please check the logfile  '/opt/oracle/oak/log/ecl-odabase-0/tools/18.8.0.0.0/createdbstorage_CLONEDB_20796.log' for more details
INFO: 2021-08-20 13:20:38: Storage for the Database with the name CLONEDB is possible

Please enter the 'SYSASM'  password : (During deployment we set the SYSASM password to 'welcome1'):
Please re-enter the 'SYSASM' password:
Please select one of the following for Database Class  [1 .. 6] :
1    => odb-01s  (   1 cores ,     4 GB memory)
2    =>  odb-01  (   1 cores ,     8 GB memory)
3    =>  odb-02  (   2 cores ,    16 GB memory)
4    =>  odb-04  (   4 cores ,    32 GB memory)
5    =>  odb-06  (   6 cores ,    48 GB memory)
6    =>  odb-12  (  12 cores ,    96 GB memory)
1
The selected value is : odb-01s  (   1 cores ,     4 GB memory)

...SUCCESS: Ran /usr/bin/rsync -tarqvz /opt/oracle/oak/onecmd/ root@192.168.16.28:/opt/oracle/oak/onecmd --exclude=*zip --exclude=*gz --exclude=*log --exclude=*trc --exclude=*rpm and it returned: RC=0

.........
SUCCESS: All nodes in /opt/oracle/oak/onecmd/tmp/db_nodes are pingable and alive.


INFO: 2021-08-20 13:23:33: Successfully setup the storage structure for the database 'CLONEDB'
INFO: 2021-08-20 13:23:36: Set the following directory structure for the Database CLONEDB
INFO: 2021-08-20 13:23:36: DATA: /u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB
INFO: 2021-08-20 13:23:36: REDO: /u01/app/oracle/oradata/datastore/CLONEDB
INFO: 2021-08-20 13:23:36: RECO: /u01/app/oracle/fast_recovery_area/datastore/CLONEDB
SUCCESS: 2021-08-20 13:23:36: Successfully setup the Storage for the Database : CLONED

After creation, make sure to verify the created storage mount points:


[root@ecl-odabase-0 ~]# oakcli show dbstorage

All the DBs with DB TYPE as non-CDB share the same volumes

DB_NAMES           DB_TYPE    Filesystem                                        Size     Used    Available    AutoExtend Size  DiskGroup
-------            -------    ------------                                    ------    -----    ---------   ----------------   --------
mango              non-CDB    /u02/app/oracle/oradata/datastore                 3665G     2.98G    3662.02G             N/A          DATA
                              /u02/app/oracle/oradata/flashdata                  279G   144.88G     134.12G             N/A          FLASH
                              /u01/app/oracle/oradata/datastore                   60G     4.30G      55.70G             N/A          REDO

                              /u01/app/oracle/fast_recovery_area/datastore      4874G    40.63G    4833.37G             N/A          RECO

cdbdev             CDB        /u02/app/oracle/oradata/datcdbdev                  400G    26.65G     373.35G             40G          DATA
                              /u01/app/oracle/oradata/rdocdbdev                    6G     4.21G       1.79G              1G          REDO
                              /u01/app/oracle/fast_recovery_area/rcocdbdev       530G    45.87G     484.13G             53G          RECO

[root@ecl-odabase-0 ~]#

1.4 Drop any existing clone database.

If a clone database already exists, make sure to drop it before running the duplicate.


#################### drop previous  database

##### Method 1 (instance can start from its existing parameter file)

startup mount exclusive restrict;
drop database;

##### Method 2

SQL> startup nomount pfile='/home/oracle/DUP_SCRIPTS/initCLONEDB.ora' restrict;
ORACLE instance started.

Total System Global Area  876609536 bytes
Fixed Size                  2930224 bytes
Variable Size             763365840 bytes
Database Buffers          100663296 bytes
Redo Buffers                9650176 bytes
SQL> alter database mount;

Database altered.

SQL> drop database;

Database dropped.

2. Duplicate the source database

Source database: MANGO; target (duplicate) database: clonedb.

2.1 Create the duplicate RUN block.

In this scenario we are performing a backup-based duplicate.

################# run block
RUN {
ALLOCATE AUXILIARY CHANNEL T1 TYPE DISK;
ALLOCATE AUXILIARY CHANNEL T2 TYPE DISK;
ALLOCATE AUXILIARY CHANNEL T3 TYPE DISK;
ALLOCATE AUXILIARY CHANNEL T4 TYPE DISK;
ALLOCATE AUXILIARY CHANNEL T5 TYPE DISK;
DUPLICATE DATABASE MANGO TO CLONEDB
backup location '/db_backup/MANGO_BKP'
UNTIL TIME "TO_DATE('2021-08-20 13:41:00', 'YYYY-MM-DD HH24:MI:SS')";
}
 

################# rman_duplicate.sh script
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=clonedb1
rman auxiliary / cmdfile=/home/oracle/DUP_SCRIPTS/CLONEDB.cmd log=/home/oracle/DUP_SCRIPTS/CLONEDB_20082021.log

2.2 Init file parameters for the auxiliary instance.

Note: do not use db_file_name_convert and log_file_name_convert for this ODA duplicate, because the database uses OMF (Oracle Managed Files); with db_create_file_dest set, RMAN generates new OMF file names automatically. See the Oracle documentation section "Creating a Duplicate Database on a Local or Remote Host" to understand these parameters and why they should not be used here.

########## init file parameters for ODA
[oracle@ecl-odabase-0 DUP_SCRIPTS]$ cat initCLONEDB.ora
control_files='/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/control01.ctl'
db_name='clonedb'
log_archive_dest_1='location=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB'
*.compatible='12.1.0.2.0'
db_create_file_dest='/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB'
db_create_online_log_dest_1='/u01/app/oracle/oradata/datastore/CLONEDB'
[oracle@ecl-odabase-0 DUP_SCRIPTS]$
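Before starting the auxiliary instance, it is worth sanity-checking that the pfile carries the parameters this OMF-based duplicate relies on; a minimal sketch (the pfile contents are copied from the listing above, the check itself is my own):

```shell
# Verify the auxiliary pfile contains the parameters the duplicate needs.
pfile="control_files='/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/control01.ctl'
db_name='clonedb'
log_archive_dest_1='location=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB'
*.compatible='12.1.0.2.0'
db_create_file_dest='/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB'
db_create_online_log_dest_1='/u01/app/oracle/oradata/datastore/CLONEDB'"
missing=0
for p in control_files db_name db_create_file_dest db_create_online_log_dest_1; do
  if printf '%s\n' "$pfile" | grep -q "$p="; then
    echo "$p: ok"
  else
    echo "$p: MISSING"
    missing=1
  fi
done
```

If any parameter reports MISSING, the restored files would land in default locations instead of the CLONEDB directories.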

2.3 Start the auxiliary instance in NOMOUNT state.

Start the database using the pfile:

startup nomount pfile='/home/oracle/DUP_SCRIPTS/initCLONEDB.ora'

Once the instance is started, run the rman_duplicate.sh script in nohup mode and tail the log.

Please find the complete logfile below for verification.

[oracle@ecl-odabase-0 DUP_SCRIPTS]$ cat CLONEDB_20082021.log

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Aug 23 12:50:08 2021

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to auxiliary database: CLONEDB (not mounted)

RMAN> RUN {
2> ALLOCATE AUXILIARY CHANNEL T1 TYPE DISK;
3> ALLOCATE AUXILIARY CHANNEL T2 TYPE DISK;
4> ALLOCATE AUXILIARY CHANNEL T3 TYPE DISK;
5> ALLOCATE AUXILIARY CHANNEL T4 TYPE DISK;
6> ALLOCATE AUXILIARY CHANNEL T5 TYPE DISK;
7> DUPLICATE DATABASE MANGO TO CLONEDB
8> backup location '/db_backup/MANGO_BKP'
9> UNTIL TIME "TO_DATE('2021-08-20 13:41:00', 'YYYY-MM-DD HH24:MI:SS')";
10> }
11>
12>
allocated channel: T1
channel T1: SID=994 device type=DISK

allocated channel: T2
channel T2: SID=1242 device type=DISK

allocated channel: T3
channel T3: SID=1366 device type=DISK

allocated channel: T4
channel T4: SID=1490 device type=DISK

allocated channel: T5
channel T5: SID=1614 device type=DISK

Starting Duplicate Db at 23-AUG-21

contents of Memory Script:
{
   sql clone "create spfile from memory";
}
executing Memory Script

sql statement: create spfile from memory

contents of Memory Script:
{
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     876609536 bytes

Fixed Size                     2930224 bytes
Variable Size                763365840 bytes
Database Buffers             100663296 bytes
Redo Buffers                   9650176 bytes
allocated channel: T1
channel T1: SID=1118 device type=DISK
allocated channel: T2
channel T2: SID=1242 device type=DISK
allocated channel: T3
channel T3: SID=1366 device type=DISK
allocated channel: T4
channel T4: SID=1490 device type=DISK
allocated channel: T5
channel T5: SID=1614 device type=DISK

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''MANGO'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''CLONEDB'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile from  '/db_backup/MANGO_BKP/MANGO_C_20210820_0706vuu7';
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''MANGO'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''CLONEDB'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area     876609536 bytes

Fixed Size                     2930224 bytes
Variable Size                763365840 bytes
Database Buffers             100663296 bytes
Redo Buffers                   9650176 bytes
allocated channel: T1
channel T1: SID=1118 device type=DISK
allocated channel: T2
channel T2: SID=1242 device type=DISK
allocated channel: T3
channel T3: SID=1366 device type=DISK
allocated channel: T4
channel T4: SID=1490 device type=DISK
allocated channel: T5
channel T5: SID=1614 device type=DISK

Starting restore at 23-AUG-21

channel T3: skipped, AUTOBACKUP already found
channel T4: skipped, AUTOBACKUP already found
channel T1: skipped, AUTOBACKUP already found
channel T5: skipped, AUTOBACKUP already found
channel T2: restoring control file
channel T2: restore complete, elapsed time: 00:00:09
output file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/control01.ctl
Finished restore at 23-AUG-21

database mounted

contents of Memory Script:
{
   set until scn  12935274;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  2 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 23-AUG-21

channel T1: starting datafile backup set restore
channel T1: specifying datafile(s) to restore from backup set
channel T1: restoring datafile 00001 to /u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_system_%u_.dbf
channel T1: restoring datafile 00003 to /u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_undotbs1_%u_.dbf
channel T1: restoring datafile 00004 to /u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_undotbs2_%u_.dbf
channel T1: reading from backup piece /db_backup/MANGO_BKP/MANGO_D_20210820_0506vutt_s5_p1
channel T2: starting datafile backup set restore
channel T2: specifying datafile(s) to restore from backup set
channel T2: restoring datafile 00002 to /u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_sysaux_%u_.dbf
channel T2: restoring datafile 00005 to /u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_users_%u_.dbf
channel T2: reading from backup piece /db_backup/MANGO_BKP/MANGO_D_20210820_0406vutt_s4_p1
channel T1: piece handle=/db_backup/MANGO_BKP/MANGO_D_20210820_0506vutt_s5_p1 tag=TAG20210820T121244
channel T1: restored backup piece 1
channel T1: restore complete, elapsed time: 00:00:07
channel T2: piece handle=/db_backup/MANGO_BKP/MANGO_D_20210820_0406vutt_s4_p1 tag=TAG20210820T121244
channel T2: restored backup piece 1
channel T2: restore complete, elapsed time: 00:00:07
Finished restore at 23-AUG-21

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=6 STAMP=1081342289 file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_system_jl7njb4r_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=7 STAMP=1081342289 file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_sysaux_jl7njb4t_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=8 STAMP=1081342289 file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_undotbs1_jl7njb85_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=9 STAMP=1081342289 file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_undotbs2_jl7njbbz_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=10 STAMP=1081342289 file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_users_jl7njb7l_.dbf

contents of Memory Script:
{
   set until time  "to_date('AUG 20 2021 13:41:00', 'MON DD YYYY HH24:MI:SS')";
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 23-AUG-21

starting media recovery

channel T1: starting archived log restore to default destination
channel T1: restoring archived log
archived log thread=1 sequence=65
channel T1: restoring archived log
archived log thread=2 sequence=46
channel T1: restoring archived log
archived log thread=1 sequence=66
channel T1: restoring archived log
archived log thread=1 sequence=67
channel T1: restoring archived log
archived log thread=2 sequence=47
channel T1: restoring archived log
archived log thread=1 sequence=68
channel T1: restoring archived log
archived log thread=2 sequence=48
channel T1: restoring archived log
archived log thread=1 sequence=69
channel T1: restoring archived log
archived log thread=2 sequence=49
channel T1: restoring archived log
archived log thread=2 sequence=50
channel T1: restoring archived log
archived log thread=1 sequence=70
channel T1: restoring archived log
archived log thread=1 sequence=71
channel T1: reading from backup piece /db_backup/MANGO_BKP/arc_0e07048e_1_1
channel T2: starting archived log restore to default destination
channel T2: restoring archived log
archived log thread=2 sequence=45
channel T2: reading from backup piece /db_backup/MANGO_BKP/MANGO_A_20210820_0b06vuv0_s11_p1
channel T2: piece handle=/db_backup/MANGO_BKP/MANGO_A_20210820_0b06vuv0_s11_p1 tag=TAG20210820T121320
channel T2: restored backup piece 1
channel T2: restore complete, elapsed time: 00:00:01
channel T1: piece handle=/db_backup/MANGO_BKP/arc_0e07048e_1_1 tag=ARCH
channel T1: restored backup piece 1
channel T1: restore complete, elapsed time: 00:00:03
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_65_1051745212.dbf thread=1 sequence=65
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_45_1051745212.dbf thread=2 sequence=45
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_45_1051745212.dbf RECID=1 STAMP=1081342292
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_46_1051745212.dbf thread=2 sequence=46
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_65_1051745212.dbf RECID=4 STAMP=1081342293
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_66_1051745212.dbf thread=1 sequence=66
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_66_1051745212.dbf RECID=10 STAMP=1081342293
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_67_1051745212.dbf thread=1 sequence=67
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_46_1051745212.dbf RECID=2 STAMP=1081342293
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_47_1051745212.dbf thread=2 sequence=47
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_67_1051745212.dbf RECID=5 STAMP=1081342293
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_68_1051745212.dbf thread=1 sequence=68
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_47_1051745212.dbf RECID=3 STAMP=1081342293
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_48_1051745212.dbf thread=2 sequence=48
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_68_1051745212.dbf RECID=7 STAMP=1081342293
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_69_1051745212.dbf thread=1 sequence=69
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_48_1051745212.dbf RECID=6 STAMP=1081342293
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_49_1051745212.dbf thread=2 sequence=49
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_49_1051745212.dbf RECID=9 STAMP=1081342293
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_50_1051745212.dbf thread=2 sequence=50
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_69_1051745212.dbf RECID=8 STAMP=1081342293
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_70_1051745212.dbf thread=1 sequence=70
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_70_1051745212.dbf RECID=12 STAMP=1081342293
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_71_1051745212.dbf thread=1 sequence=71
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch1_71_1051745212.dbf RECID=13 STAMP=1081342293
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/arch2_50_1051745212.dbf RECID=11 STAMP=1081342293
media recovery complete, elapsed time: 00:00:11
Finished recover at 23-AUG-21
Oracle instance started

Total System Global Area     876609536 bytes

Fixed Size                     2930224 bytes
Variable Size                763365840 bytes
Database Buffers             100663296 bytes
Redo Buffers                   9650176 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''CLONEDB'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
}
executing Memory Script

sql statement: alter system set  db_name =  ''CLONEDB'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile
Oracle instance started

Total System Global Area     876609536 bytes

Fixed Size                     2930224 bytes
Variable Size                763365840 bytes
Database Buffers             100663296 bytes
Redo Buffers                   9650176 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CLONEDB" RESETLOGS ARCHIVELOG
  MAXLOGFILES    192
  MAXLOGMEMBERS      3
  MAXDATAFILES     1024
  MAXINSTANCES    32
  MAXLOGHISTORY      292
 LOGFILE
  GROUP   1  SIZE 1 G ,
  GROUP   2  SIZE 1 G
 DATAFILE
  '/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_system_jl7njb4r_.dbf'
 CHARACTER SET AL32UTF8

sql statement: ALTER DATABASE ADD LOGFILE

  INSTANCE 'i2'
  GROUP   3  SIZE 1 G ,
  GROUP   4  SIZE 1 G

contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   switch clone tempfile all;
   catalog clone datafilecopy  "/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_sysaux_jl7njb4t_.dbf",
 "/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_undotbs1_jl7njb85_.dbf",
 "/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_undotbs2_jl7njbbz_.dbf",
 "/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_users_jl7njb7l_.dbf";
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_temp_%u_.tmp in control file

cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_sysaux_jl7njb4t_.dbf RECID=1 STAMP=1081342341
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_undotbs1_jl7njb85_.dbf RECID=2 STAMP=1081342341
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_undotbs2_jl7njbbz_.dbf RECID=3 STAMP=1081342341
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_users_jl7njb7l_.dbf RECID=4 STAMP=1081342341

datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=1081342341 file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_sysaux_jl7njb4t_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=1081342341 file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_undotbs1_jl7njb85_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=1081342341 file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_undotbs2_jl7njbbz_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=1081342341 file name=/u02/app/oracle/oradata/datastore/.ACFS/snaps/CLONEDB/CLONEDB/datafile/o1_mf_users_jl7njb7l_.dbf

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Cannot remove created server parameter file
Finished Duplicate Db at 23-AUG-21

Recovery Manager complete.


3. Verification.

Once the duplicate is complete, run "oakcli show databases" to verify the database status.



[root@ecl-odabase-0 ~]# oakcli show databases
Name     Type       Storage   HomeName             HomeLocation                                       Edition Type Version
-----    ------     --------  --------------       ----------------                                   ------------ ----------
willow   RAC        ACFS      OraDb12102_home1     /u01/app/oracle/product/12.1.0.2/dbhome_1          Enterprise   12.1.0.2.191015
mango    RAC        ACFS      OraDb12102_home2     /u01/app/oracle/product/12.1.0.2/dbhome_2          Enterprise   12.1.0.2.191015
CDBDEV   RAC        ACFS      OraDb12102_home2     /u01/app/oracle/product/12.1.0.2/dbhome_2          Enterprise   12.1.0.2.191015
clonedb  RAC        ACFS      OraDb12102_home2     /u01/app/oracle/product/12.1.0.2/dbhome_2          Enterprise   12.1.0.2.191015
[root@ecl-odabase-0 ~]#

Also check the database status using the query below.

set lines 300
select instance_name, status, host_name,
       to_char(startup_time,'dd/mm/yyyy hh24:mi') startup_time
from gv$instance;


INSTANCE_NAME    STATUS       HOST_NAME                                                        STARTUP_TIME
---------------- ------------ ---------------------------------------------------------------- ----------------
clonedb1         OPEN         ecl-odabase-0                                                    13/09/2021 14:42
clonedb2         OPEN         ecl-odabase-1                                                    13/09/2021 14:45


Monday, August 23, 2021

ODA upgrade 12.1.2.12 to 18.3.0.0 -- Journey to 19.9.0.0 - Part 1

ODA upgrade 12.1.2.12 to 18.3.0.0



There are two options to upgrade an ODA to the latest version.

1. Re-image the system with the latest image.

    Re-imaging requires re-creating the whole structure of your environment, which is complicated.

2. Upgrade in place from 12.1.2.12 to 18.3.0.0.

I have tried to cover as many issues as possible to make this upgrade smooth. Okay, let's get ready for the challenge.

As a first step, we will gather all the required details about the ODA environment, e.g. server version, hardware version, and whether it is a bare-metal or virtualized deployment. It is very important to understand the environment; only then can a patching plan be prepared.

This is just the start of the journey to 19.9.0.0. The plan below was put together from the Oracle documentation.

Section 1: Gather ODA server information

Find the ODA hardware version.


######### Find ODA version:
[root@ecl-oda-DOM0-0 ~]# ipmitool sunoem cli 'show System' | grep model
        model = ODA X5-2
        component_model = ORACLE SERVER X5-2
[root@ecl-oda-DOM0-0 ~]#
 
[root@ecl-oda-DOM0-1 ~]# ps -ef | grep pmon
root      5577  4433  0 13:57 pts/2    00:00:00 grep pmon
[root@ecl-oda-DOM0-1 ~]# ipmitool sunoem cli 'show System' | grep model
        model = ODA X5-2
        component_model = ORACLE SERVER X5-2
[root@ecl-oda-DOM0-1 ~]#

Find server version:

[root@ecl-odabase-0 delshare]# oakcli show server
 
        Power State              : On
        Open Problems            : 0
        Model                    : ODA X5-2
        Type                     : Rack Mount
        Part Number              : 33765810+1+1
        Serial Number            : 1535NMF006
        Primary OS               : Not Available
        ILOM Address             : 10.11.99.202
        ILOM MAC Address         : 00:10:E0:8D:5E:AC
        Description              : Oracle Database Appliance X5-2 1535NMF006
        Locator Light            : Off
        Actual Power Consumption : 288 watts
        Ambient Temperature      : 15.750 degree C
        Open Problems Report     : System is healthy


Find hardware version:
[root@ecl-odabase-0 delshare]# oakcli show env_hw
VM-ODA_BASE ODA X5-2
[root@ecl-odabase-0 delshare]#
Check for running VMs and repositories:

If the environment is virtualized, you will need to note down the VM names and repository names before patching.
These repositories are ACFS mount points allocated to store virtual machines and virtual disks.

=======================================
Check the running VMs and repositories
=======================================


[root@ecl-odabase-0 PATCH]# oakcli show vm

          NAME                                  NODENUM         MEMORY          VCPU            STATE           REPOSITORY

        kali_server                             0               4196M              2            ONLINE          kali_test
        qualyssrv                               0               4196M              2            ONLINE          qualys


[root@ecl-odabase-0 PATCH]#



[root@ecl-odabase-0 PATCH]# oakcli show repo

          NAME                          TYPE            NODENUM  FREE SPACE     STATE           SIZE


          kali_test                     shared          0            94.78%     ONLINE          512000.0M

          kali_test                     shared          1            94.78%     ONLINE          512000.0M

          odarepo1                      local           0               N/A     N/A             N/A

          odarepo2                      local           1               N/A     N/A             N/A

          qualys                        shared          0            98.44%     ONLINE          204800.0M

          qualys                        shared          1            98.44%     ONLINE          204800.0M

          vmdata                        shared          0            99.99%     ONLINE          4068352.0M

          vmdata                        shared          1               N/A     UNKNOWN         N/A

          vmsdev                        shared          0            99.99%     ONLINE          1509376.0M

          vmsdev                        shared          1               N/A     UNKNOWN         N/A


Before downloading anything and getting ready for patching, we need to find the current ODA version.

This is a very important step in patch preparation.


########### Check running ODA details 

[root@ecl-odabase-0 PATCH]# oakcli show version -detail
Reading the metadata. It takes a while...
System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
12.1.2.12.0
                Controller_INT            4.650.00-7176             Up-to-date
                Controller_EXT            13.00.00.00               Up-to-date
                Expander                  0018                      Up-to-date
                SSD_SHARED {
                [ c1d20,c1d21,c1d22,      A29A                      Up-to-date
                c1d23,c1d44,c1d45,c1
                d46,c1d47 ]
                [ c1d16,c1d17,c1d18,      A29A                      Up-to-date
                c1d19,c1d40,c1d41,c1
                d42,c1d43 ]
                             }
                HDD_LOCAL                 A7E0                      Up-to-date
                HDD_SHARED {
                [ c1d0,c1d1,c1d2,c1d      PAG1                      Up-to-date
                3,c1d4,c1d5,c1d6,c1d
                7,c1d8,c1d9,c1d10,c1
                d11,c1d12,c1d13,c1d1
                4,c1d15,c1d28 ]
                [ c1d24,c1d25,c1d26,      A3A0                      Up-to-date
                c1d27,c1d29,c1d30,c1
                d31,c1d32,c1d33,c1d3
                4,c1d35,c1d36,c1d37,
                c1d38,c1d39 ]
                             }
                ILOM                      3.2.9.23 r116695          Up-to-date
                BIOS                      30110000                  Up-to-date
                IPMI                      1.8.12.4                  Up-to-date
                HMP                       2.3.5.2.8                 Up-to-date
                OAK                       12.1.2.12.0               Up-to-date
                OL                        6.8                       Up-to-date
                OVM                       3.4.3                     Up-to-date
                GI_HOME                   12.1.0.2.170814(2660      Up-to-date
                                          9783,26609945)
                DB_HOME                   12.1.0.2.170814(2660      Up-to-date
                                          9783,26609945)
[root@ecl-odabase-0 PATCH]#

Section 2: Patching preparation steps

For ODA patching you have to follow the documentation at http://docs.oracle.com .

There is no single MetaLink (My Oracle Support) note that covers all the patching steps.

Go to http://docs.oracle.com and navigate to the Engineered Systems page.


Then select the Oracle Database Appliance :



Navigate the earlier-release pages to work out the patching plan from the 19.11 document; our plan is to upgrade the ODA from 12.1.2.12 up to 19.8.0.0.



This is how we can get to 19.9.0.0 from 12.1.2.12:

12.1.2.12 -> 18.3.0.0
18.3.0.0  -> 18.8.0.0
18.8.0.0  -> 19.8.0.0
19.8.0.0  -> 19.9.0.0

In the table, the left column shows the target patch version and the right column shows the minimum version you must already be on to upgrade the ODA.
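The hop sequence above can be encoded as a small lookup. This is purely illustrative: the version strings come from the plan above, and `next_hop` is a made-up helper name.

```shell
#!/bin/sh
# Illustrative sketch: map the currently installed ODA version to the
# next required hop in the upgrade plan above.
next_hop() {
  case "$1" in
    12.1.2.12) echo 18.3.0.0 ;;
    18.3.0.0)  echo 18.8.0.0 ;;
    18.8.0.0)  echo 19.8.0.0 ;;
    19.8.0.0)  echo 19.9.0.0 ;;
    *)         return 1 ;;      # not on a known hop version
  esac
}

next_hop 12.1.2.12   # prints 18.3.0.0
```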

Table 3-1 Minimum Patch Requirements for Oracle Database Appliance Releases






Now the plan is to upgrade the ODA from 12.1.2.12 to 18.3.0.0; the steps are in the 18.3 document section.



The 18.3 patches can be downloaded from the link below: https://updates.oracle.com/Orion/PatchDetails/process_form?patch_num=28864520

2.1 Address known issues.

Always check the known-issues pages for 18.3. For this patching we have to apply an ACFS patch before starting.

We need to apply patch 29608813 before upgrading the ODA to 18.3 as a proactive action to avoid the bug described in

Upgrading to 18.x from 12.x Can Automatically Result in ADVM Processes Consuming Excessive CPU and IO During Automatic Resilvering (Doc ID 2525427.1).


Another very useful link that covers most of the ODA upgrade issues: ODA Upgrade « Timur Akhmadeev's blog (wordpress.com)

2.1.1 Applying the ACFS patch

Before applying the patch, check the currently registered ACFS mount points:


[root@ecl-odabase-0 ACFS_BUG]# /sbin/acfsutil registry | grep -i 'Mount Point'
  Mount Point: /u02/app/oracle/oradata/datastore
  Mount Point: /u02/app/oracle/oradata/datcdbdev
  Mount Point: /u01/app/sharedrepo/kali_test
  Mount Point: /u01/app/sharedrepo/qualys
  Mount Point: /u01/app/sharedrepo/vmdata
  Mount Point: /u02/app/oracle/oradata/flashdata
  Mount Point: /u01/app/oracle/fast_recovery_area/datastore
  Mount Point: /db_backup
  Mount Point: /delshare
  Mount Point: /prdmgtshare
  Mount Point: /u01/app/oracle/fast_recovery_area/rcocdbdev
  Mount Point: /u01/app/sharedrepo/vmsdev
  Mount Point: /u01/app/oracle/oradata/datastore
  Mount Point: /u01/app/oracle/oradata/rdocdbdev
[root@ecl-odabase-0 ACFS_BUG]#


To apply the patch we need to unmount the ACFS file systems, so before unmounting we need to stop the associated repositories.

Stop the currently running VMs and repositories.

Stopping sequence:

1.     VMs

2.     repositories

oakcli stop vm <vm_name>

oakcli stop repo <repo_name> -node <0|1>
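To avoid stopping each VM by hand, the ONLINE ones can be pulled straight out of the `oakcli show vm` listing shown earlier. The column positions are assumed from that output, and `online_vms` is a made-up helper; the live loop is commented out because it needs root on the ODA.

```shell
#!/bin/sh
# Sketch: in the "oakcli show vm" listing above, column 1 is NAME and
# column 5 is STATE, so filter for ONLINE rows and print the names.
online_vms() {
  awk '$5 == "ONLINE" {print $1}'
}

# Live usage (commented out for the sketch):
# oakcli show vm | online_vms | while read -r vm; do
#   oakcli stop vm "$vm"
# done
```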

Once the VMs and repositories are stopped, we can unmount the ACFS file systems.


/bin/umount /u02/app/oracle/oradata/datastore
/bin/umount /u02/app/oracle/oradata/datcdbdev
/bin/umount /u01/app/sharedrepo/kali_test
/bin/umount /u01/app/sharedrepo/qualys
/bin/umount /u01/app/sharedrepo/vmdata
/bin/umount /u02/app/oracle/oradata/flashdata
/bin/umount /u01/app/oracle/fast_recovery_area/datastore
/bin/umount /db_backup
/bin/umount /delshare
/bin/umount /prdmgtshare
/bin/umount /u01/app/oracle/fast_recovery_area/rcocdbdev
/bin/umount /u01/app/sharedrepo/vmsdev
/bin/umount /u01/app/oracle/oradata/datastore
/bin/umount /u01/app/oracle/oradata/rdocdbdev
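Instead of typing each umount, the mount points can be parsed out of the registry listing itself (same `Mount Point: /path` format as shown above). `list_acfs_mounts` is a made-up helper; the live loop is commented out because it needs root.

```shell
#!/bin/sh
# Sketch: extract the path after "Mount Point: " from acfsutil registry
# output read on stdin.
list_acfs_mounts() {
  awk -F': ' '/Mount Point/ {print $2}'
}

# Live usage (commented out for the sketch):
# /sbin/acfsutil registry | list_acfs_mounts | while read -r mp; do
#   /bin/umount "$mp"
# done
```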

Check the current OPatch version before applying the patch; if OPatch is older than 12.1.0.1.7, upgrade it first.

The output of `opatch version` should report 12.1.0.1.7 or later.
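The 12.1.0.1.7 minimum can be checked mechanically with a dotted-version comparison. This assumes GNU `sort -V`, and `opatch_ok` is a made-up helper name; the live path is the grid home used in this environment.

```shell
#!/bin/sh
# Sketch: return 0 when the supplied OPatch version meets the
# 12.1.0.1.7 minimum; GNU "sort -V" orders dotted versions correctly.
MIN_OPATCH=12.1.0.1.7
opatch_ok() {
  lowest=$(printf '%s\n%s\n' "$MIN_OPATCH" "$1" | sort -V | head -n1)
  [ "$lowest" = "$MIN_OPATCH" ]
}

# Live usage against one home (commented out for the sketch):
# ver=$(/u01/app/12.1.0.2/grid/OPatch/opatch version | awk '/^OPatch Version/{print $3}')
# opatch_ok "$ver" || echo "OPatch $ver is too old - upgrade it first"
```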


######### ecl-odabase-0
DB:
========
[oracle@ecl-odabase-0 ~]$ /u01/app/oracle/product/12.1.0.2/dbhome_2/OPatch/opatch version
OPatch Version: 12.2.0.1.8

OPatch succeeded.
[oracle@ecl-odabase-0 ~]$

Grid
=====
[grid@ecl-odabase-0 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.8

OPatch succeeded.
[grid@ecl-odabase-0 ~]$

########### ecl-odabase-1

DB:
=======
[oracle@ecl-odabase-1 ~]$ /u01/app/oracle/product/12.1.0.2/dbhome_2/OPatch/opatch version
OPatch Version: 12.2.0.1.8

OPatch succeeded.
[oracle@ecl-odabase-1 ~]$

Grid
=======
[grid@ecl-odabase-1 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.8

OPatch succeeded.
[grid@ecl-odabase-1 ~]$


Check for patch conflicts using opatchauto:

/u01/app/12.1.0.2/grid/OPatch/opatchauto apply /u01/PATCH/ACFS_BUG/29608813/29608813 -oh /u01/app/12.1.0.2/grid -analyze 

Apply the patch:

/u01/app/12.1.0.2/grid/OPatch/opatch apply -oh /u01/app/12.1.0.2/grid -local /u01/PATCH/ACFS_BUG/29608813/29608813

Verification: run the commands below to validate the patch:


=================
Validation
=================
export GI=/u01/app/12.1.0.2/grid
export OH=/u01/app/oracle/product/12.1.0.2
/u01/app/12.1.0.2/grid/OPatch/opatch lsinventory -oh $GI | grep ^Patch    # collect data at the end
/u01/app/oracle/product/12.1.0.2/OPatch/opatch lsinventory -oh $OH | grep ^Patch    # collect data at the end

Expected output:


[grid@ecl-odabase-1 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch lsinventory -oh $GI | grep ^Patch
Patch  29608813     : applied on Thu Aug 12 14:37:26 EDT 2021
Patch description:  "ACFS Interim patch for 29608813"
Patch  26609945     : applied on Fri Sep 08 15:31:12 EDT 2017
Patch description:  "OCW Patch Set Update : 12.1.0.2.170814 (26609945)"
Patch  26477255     : applied on Fri Sep 08 15:30:03 EDT 2017
Patch  25897615     : applied on Fri Sep 08 15:29:29 EDT 2017
Patch  25034396     : applied on Fri Sep 08 15:28:56 EDT 2017
Patch  26609783     : applied on Fri Sep 08 15:28:07 EDT 2017
Patch description:  "Database Patch Set Update : 12.1.0.2.170814 (26609783)"
Patch  21436941     : applied on Fri Sep 08 15:25:34 EDT 2017
Patch description:  "WLM Patch Set Update: 12.1.0.2.5 (21436941)"
Patch level status of Cluster nodes :
[grid@ecl-odabase-1 ~]$

[grid@ecl-odabase-0 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch lsinventory -oh $GI | grep ^Patch
Patch  29608813     : applied on Thu Aug 12 14:03:11 EDT 2021
Patch description:  "ACFS Interim patch for 29608813"
Patch  26609945     : applied on Fri Sep 08 15:31:12 EDT 2017
Patch description:  "OCW Patch Set Update : 12.1.0.2.170814 (26609945)"
Patch  26477255     : applied on Fri Sep 08 15:30:03 EDT 2017
Patch  25897615     : applied on Fri Sep 08 15:29:29 EDT 2017
Patch  25034396     : applied on Fri Sep 08 15:28:56 EDT 2017
Patch  26609783     : applied on Fri Sep 08 15:28:07 EDT 2017
Patch description:  "Database Patch Set Update : 12.1.0.2.170814 (26609783)"
Patch  21436941     : applied on Fri Sep 08 15:25:34 EDT 2017
Patch description:  "WLM Patch Set Update: 12.1.0.2.5 (21436941)"
Patch level status of Cluster nodes :
[grid@ecl-odabase-0 ~]$
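The same check can be scripted: grep the inventory for the interim patch number, relying on the `Patch  <number>     : applied` line format in the output above. `patch_applied` is a made-up helper; the live invocation is commented out.

```shell
#!/bin/sh
# Sketch: read "opatch lsinventory" output on stdin and return 0 when
# the given patch number appears as an applied-patch line.
patch_applied() {
  grep -q "^Patch  $1 "
}

# Live usage (commented out for the sketch):
# GI=/u01/app/12.1.0.2/grid
# $GI/OPatch/opatch lsinventory -oh $GI | patch_applied 29608813 \
#   && echo "29608813 applied in GI home"
```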

2.3 Space check.

First of all we need to ensure we have enough space on /, /u01 and /opt file systems. At least 20 GB should be available. If not, we can do some cleaning or extend the LVM partitions.

Usage of these mount points:

1.     Unpacking happens in the /opt mount point.

2.     Grid and database patching and upgrades happen in the /u01 mount point.

3.     The OS upgrade happens in the / mount point.
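The 20 GB requirement can be checked with `df -P`, which reports availability in 1K blocks. `enough_space` is a made-up helper for this sketch.

```shell
#!/bin/sh
# Sketch: flag any of the patching file systems with less than 20 GB
# available. df -P reports free space in 1K blocks.
enough_space() {                       # enough_space <avail_kb>
  [ "$1" -ge $((20 * 1024 * 1024)) ]
}

for fs in / /u01 /opt; do
  avail=$(df -P "$fs" 2>/dev/null | awk 'NR==2 {print $4}')
  [ -n "$avail" ] || continue          # skip paths that do not exist here
  enough_space "$avail" || echo "LOW: $fs has less than 20 GB free"
done
```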


######### Space check

df -h / /u01 /opt

[root@ecl-odabase-1 PATCH]# df -h / /u01 /tmp /opt
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2       55G   34G   18G  66% /
/dev/xvdb1       92G   55G   33G  63% /u01
/dev/xvda2       55G   34G   18G  66% /
/dev/xvda2       55G   34G   18G  66% /
[root@ecl-odabase-1 PATCH]#

2.4 Unpacking patches and upgrade repository

Use the commands below to unpack the 18.3.0.0 patches into the /opt mount point.

Make sure you have downloaded the patch from My Oracle Support:
Download the Oracle Database Appliance Server Patch for the ODACLI/DCS stack (patch 28864520).

If the zips were downloaded to /tmp:

# oakcli unpack -package /tmp/p28864520_183000_Linux-x86-64_1of3.zip
# oakcli unpack -package /tmp/p28864520_183000_Linux-x86-64_2of3.zip
# oakcli unpack -package /tmp/p28864520_183000_Linux-x86-64_3of3.zip

In our case the zips were staged under /u01/PATCH:

# oakcli unpack -package /u01/PATCH/p28864520_183000_Linux-x86-64_1of3.zip
# oakcli unpack -package /u01/PATCH/p28864520_183000_Linux-x86-64_2of3.zip
# oakcli unpack -package /u01/PATCH/p28864520_183000_Linux-x86-64_3of3.zip

Expected output:


[root@ecl-odabase-0 PATCH]# oakcli unpack -package /u01/PATCH/p28864520_183000_Linux-x86-64_1of3.zip
Unpacking will take some time,  Please wait...
Successfully unpacked the files to repository.

[root@ecl-odabase-0 PATCH]# oakcli unpack -package /u01/PATCH/p28864520_183000_Linux-x86-64_2of3.zip
Unpacking will take some time,  Please wait...
Successfully unpacked the files to repository.

[root@ecl-odabase-0 PATCH]# oakcli unpack -package /u01/PATCH/p28864520_183000_Linux-x86-64_3of3.zip
Unpacking will take some time,  Please wait...
Successfully unpacked the files to repository.
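The three unpack commands can also be folded into one loop over the staged zips. `unpack_all` is a made-up wrapper, and the real oakcli call is left commented out in this sketch.

```shell
#!/bin/sh
# Sketch: unpack every piece of the 18.3 bundle found under the given
# staging directory (the real oakcli call is commented out).
unpack_all() {
  for z in "$1"/p28864520_183000_Linux-x86-64_*of3.zip; do
    [ -e "$z" ] || continue            # glob matched nothing
    echo "unpacking $z"
    # oakcli unpack -package "$z"
  done
}

# unpack_all /u01/PATCH
```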

Once the unpacking is successful, verify that the repository was updated with the latest patches:

oakcli update -patch 18.3.0.0.0 --verify

Section 3: Patching

3.1 Server patching.

Patching is divided into two parts:

1.     Server patching - this patches several components.

2.     Storage patching.

The server patching step also executes patches on Dom0 and DomU.

Before starting, stop the VMs and repositories so that the patching process runs smoothly. Even if we forget to stop them beforehand, the command below will take care of stopping the VMs and repositories, but stopping these components is time consuming.

Always make sure you execute these patching commands from node01.

Note:
Sequence before patching:
first stop the VMs, then the repositories.

script /tmp/output.txt    # record all the steps
/opt/oracle/oak/bin/oakcli update -patch 18.3.0.0.0 --server 

You can follow the patching steps in the log location below.

Log location:

/opt/oracle/oak/log/ecl-odabase-0/patch/18.3.0.0.0 - LOGS

3.1.1 Troubleshooting patching issues.

We faced an issue while running the patching: the session suddenly got terminated after a few components had been patched on node02.


ERROR  : Ran '/usr/bin/ssh -l root ecl-odabase-0 /opt/oracle/oak/pkgrepos/System/18.3.0.0.0/bin/prepatch -v 18.3.0.0.0 --infra' and it returned code(1) and output is:
         INFO: 2021-08-13 09:47:31: Checking for available free space on /, /boot, /tmp, /u01 and /opt
 INFO: 2021-08-13 09:47:43: Infra prepatch.....
 INFO: 2021-08-13 09:47:43: Performing a dry run for OS patching
 INFO: 2021-08-13 09:48:02: There are no conflicts. OS upgrade could be successful
 INFO: 2021-08-13 09:48:06: Checking if patch 29520544 is installed on the grid home
 INFO: 2021-08-13 09:48:10: Patch 29520544 is installed on the grid home
 WARNING: 2021-08-13 09:48:39: FAILED to  update the inventory
 WARNING: 2021-08-13 09:48:40: FAILED to  update the inventory
 WARNING: 2021-08-13 09:48:41: FAILED to  update the inventory
 WARNING: 2021-08-13 09:48:42: Failed to get the good disks to read the partition sizes
 WARNING: 2021-08-13 09:48:42: Failed to get the valid Disk partition sizes to modify the oak conf. 0
 INFO: Shutdown of local VM and Repo on both nodes.
 ERROR: OAKD on local node is not available.
 INFO: Failed to stop all resources on local node.
 ERROR: 2021-08-13 09:49:44: Failed to stop all resources on local node.
 INFO: Exiting...
error at and errnum=<1>
ERROR  : Command = /usr/bin/ssh -l root ecl-odabase-0 /opt/oracle/oak/pkgrepos/System/18.3.0.0.0/bin/prepatch -v 18.3.0.0.0 --infra did not complete successfully. Exit code 1 #Step -1#
Exiting...
ERROR: Unable to apply the patch 

To overcome this we used the workaround below and restarted the patching process.

ODA ODAVP Applying the Bundle Patch 12.2.1.4 FAILS - 'CRS version [12.1.0.2.0] is incompatible for OAK 12.1/12.2/18.1' (Doc ID 2454218.1)

First check the /opt/oracle/oak/install/oakdrun file on both nodes.
If one node shows that it is in upgrade mode, change it to non-cluster mode on both nodes and start the oakd service.

echo "non-cluster" > /opt/oracle/oak/install/oakdrun
/etc/init.d/init.oakd start
ps -ef | grep oakd
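The workaround can be wrapped so the file is only rewritten when a node is actually stuck. The exact "upgrade" string is an assumption based on the note above, and `fix_oakdrun` is a made-up helper; the live usage line is commented out.

```shell
#!/bin/sh
# Sketch: put oakd back into non-cluster mode only when the state file
# reports "upgrade" (file content string assumed from Doc ID 2454218.1).
fix_oakdrun() {
  f=$1
  if [ "$(cat "$f")" = "upgrade" ]; then
    echo "non-cluster" > "$f"
    echo "reset $f to non-cluster"
  fi
}

# Live usage on each node (commented out for the sketch):
# fix_oakdrun /opt/oracle/oak/install/oakdrun && /etc/init.d/init.oakd start
```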

3.2 Storage patching.

Verify that the server patching completed by running the command below on both servers.

If the Supported Version column shows Up-to-date, we can move on to storage patching.


oakcli show version -detail    # verification

######### NODE - 01

[root@ecl-odabase-0 grid]# oakcli show version -detail
Reading the metadata. It takes a while...
System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
18.3.0.0.0
                Controller_INT            4.650.00-7176             Up-to-date

                Controller_EXT            13.00.00.00               Up-to-date
                Expander                  0018                      Up-to-date
                SSD_SHARED {
                [ c1d20,c1d21,c1d22,      A29A                      Up-to-date
                c1d23,c1d44,c1d45,c1
                d46,c1d47 ]
                [ c1d16,c1d17,c1d18,      A29A                      Up-to-date
                c1d19,c1d40,c1d41,c1
                d42,c1d43 ]
                             }
                HDD_LOCAL                 A7E0                      Up-to-date
                HDD_SHARED {
                [ c1d0,c1d1,c1d2,c1d      PAG1                      Up-to-date
                3,c1d4,c1d5,c1d6,c1d
                7,c1d8,c1d9,c1d10,c1
                d11,c1d12,c1d13,c1d1
                4,c1d15,c1d28 ]
                [ c1d24,c1d25,c1d26,      A3A0                      Up-to-date
                c1d27,c1d29,c1d30,c1
                d31,c1d32,c1d33,c1d3
                4,c1d35,c1d36,c1d37,
                c1d38,c1d39 ]
                             }
                ILOM                      4.0.2.26.b r125868        Up-to-date
                BIOS                      30130500                  Up-to-date
                IPMI                      1.8.12.4                  Up-to-date
                HMP                       2.4.1.0.14                Up-to-date
                OAK                       18.3.0.0.0                Up-to-date
                OL                        6.10                      Up-to-date
                OVM                       3.4.4                     Up-to-date
                GI_HOME                   18.3.0.0.180717           Up-to-date
                DB_HOME                   12.1.0.2.170814           12.1.0.2.180717
[root@ecl-odabase-0 grid]#
[root@ecl-odabase-0 grid]#

######### NODE - 02

[root@ecl-odabase-1 ~]# oakcli show version -detail
Reading the metadata. It takes a while...

System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
18.3.0.0.0

                Controller_INT            4.650.00-7176             Up-to-date

                Controller_EXT            13.00.00.00               Up-to-date
                Expander                  0018                      Up-to-date
                SSD_SHARED {
                [ c1d20,c1d21,c1d22,      A29A                      Up-to-date
                c1d23,c1d44,c1d45,c1
                d46,c1d47 ]
                [ c1d16,c1d17,c1d18,      A29A                      Up-to-date
                c1d19,c1d40,c1d41,c1
                d42,c1d43 ]
                             }
                HDD_LOCAL                 A7E0                      Up-to-date
                HDD_SHARED {
                [ c1d4,c1d24,c1d25,c      PAG1                      Up-to-date
                1d26,c1d27,c1d28,c1d
                29,c1d30,c1d31,c1d32
                ,c1d33,c1d34,c1d35,c
                1d36,c1d37,c1d38,c1d
                39 ]
                [ c1d0,c1d1,c1d2,c1d      A3A0                      Up-to-date
                3,c1d5,c1d6,c1d7,c1d
                8,c1d9,c1d10,c1d11,c
                1d12,c1d13,c1d14,c1d
                15 ]
                             }
                ILOM                      4.0.2.26.b r125868        Up-to-date
                BIOS                      30130500                  Up-to-date
                IPMI                      1.8.12.4                  Up-to-date
                HMP                       2.4.1.0.14                Up-to-date
                OAK                       18.3.0.0.0                Up-to-date
                OL                        6.10                      Up-to-date
                OVM                       3.4.4                     Up-to-date
                GI_HOME                   18.3.0.0.180717           Up-to-date
                DB_HOME                   12.1.0.2.170814           12.1.0.2.180717
[root@ecl-odabase-1 ~]#

Then we can start the storage patching using the command below. In 18.3 there is no storage upgrade, but we still run this command to make sure the storage is up to date.

Before storage patching we again need to stop the VMs and repositories:

1.     First stop the VMs.

2.     Then stop the repositories.

/opt/oracle/oak/bin/oakcli update -patch 18.3.0.0.0 --storage

3.3 Database Patching

Now we have patched most of the components; the only thing left is the database. Here we apply the latest PSU patches to bring the databases to 12.1.0.2.180717.

Let's check the Oracle homes running in the ODA environment.


***************************************************************************************
#######################################################################################
*************** Database Patching

######## show homes
[root@ecl-odabase-0 ~]# oakcli show dbhomes 
Oracle Home Name      Oracle Home version Home Location                              Home Edition
----------------      ------------------- ------------                               ------------
OraDb12102_home1      12.1.0.2.180717     /u01/app/oracle/product/12.1.0.2/dbhome_1  Enterprise
OraDb12102_home2      12.1.0.2.180717     /u01/app/oracle/product/12.1.0.2/dbhome_2  Enterprise
[root@ecl-odabase-0 ~]#


######### check running databases
[root@ecl-odabase-0 tmp]# oakcli show databases
Name     Type       Storage   HomeName             HomeLocation                                       Edition Type Version
-----    ------     --------  --------------       ----------------                                   ------------ ----------
willow   RAC        ACFS      OraDb12102_home1     /u01/app/oracle/product/12.1.0.2/dbhome_1          Enterprise   12.1.0.2.170814
mango    RAC        ACFS      OraDb12102_home2     /u01/app/oracle/product/12.1.0.2/dbhome_2          Enterprise   12.1.0.2.170814
CDBDEV   RAC        ACFS      OraDb12102_home2     /u01/app/oracle/product/12.1.0.2/dbhome_2          Enterprise   12.1.0.2.170814

Run the command below to patch the databases to the latest PSU:

/opt/oracle/oak/bin/oakcli update -patch 18.3.0.0.0 --database

3.4 Verification

Once the patching is complete, you can run the commands below to verify.

They display whether each component is patched. Expected output:

oakcli show version -detail
oakcli update -patch 18.3.0.0.0 --verify
########################### After patching

######### NODE - 01

[root@ecl-odabase-0 grid]# oakcli show version -detail
Reading the metadata. It takes a while...
System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
18.3.0.0.0
                Controller_INT            4.650.00-7176             Up-to-date

                Controller_EXT            13.00.00.00               Up-to-date
                Expander                  0018                      Up-to-date
                SSD_SHARED {
                [ c1d20,c1d21,c1d22,      A29A                      Up-to-date
                c1d23,c1d44,c1d45,c1
                d46,c1d47 ]
                [ c1d16,c1d17,c1d18,      A29A                      Up-to-date
                c1d19,c1d40,c1d41,c1
                d42,c1d43 ]
                             }
                HDD_LOCAL                 A7E0                      Up-to-date
                HDD_SHARED {
                [ c1d0,c1d1,c1d2,c1d      PAG1                      Up-to-date
                3,c1d4,c1d5,c1d6,c1d
                7,c1d8,c1d9,c1d10,c1
                d11,c1d12,c1d13,c1d1
                4,c1d15,c1d28 ]
                [ c1d24,c1d25,c1d26,      A3A0                      Up-to-date
                c1d27,c1d29,c1d30,c1
                d31,c1d32,c1d33,c1d3
                4,c1d35,c1d36,c1d37,
                c1d38,c1d39 ]
                             }
                ILOM                      4.0.2.26.b r125868        Up-to-date
                BIOS                      30130500                  Up-to-date
                IPMI                      1.8.12.4                  Up-to-date
                HMP                       2.4.1.0.14                Up-to-date
                OAK                       18.3.0.0.0                Up-to-date
                OL                        6.10                      Up-to-date
                OVM                       3.4.4                     Up-to-date
                GI_HOME                   18.3.0.0.180717           Up-to-date
                DB_HOME                   12.1.0.2.170814           Up-to-date
[root@ecl-odabase-0 grid]#
[root@ecl-odabase-0 grid]#


######### NODE - 02

[root@ecl-odabase-1 ~]# oakcli show version -detail
Reading the metadata. It takes a while...

System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
18.3.0.0.0

                Controller_INT            4.650.00-7176             Up-to-date

                Controller_EXT            13.00.00.00               Up-to-date
                Expander                  0018                      Up-to-date
                SSD_SHARED {
                [ c1d20,c1d21,c1d22,      A29A                      Up-to-date
                c1d23,c1d44,c1d45,c1
                d46,c1d47 ]
                [ c1d16,c1d17,c1d18,      A29A                      Up-to-date
                c1d19,c1d40,c1d41,c1
                d42,c1d43 ]
                             }
                HDD_LOCAL                 A7E0                      Up-to-date
                HDD_SHARED {
                [ c1d4,c1d24,c1d25,c      PAG1                      Up-to-date
                1d26,c1d27,c1d28,c1d
                29,c1d30,c1d31,c1d32
                ,c1d33,c1d34,c1d35,c
                1d36,c1d37,c1d38,c1d
                39 ]
                [ c1d0,c1d1,c1d2,c1d      A3A0                      Up-to-date
                3,c1d5,c1d6,c1d7,c1d
                8,c1d9,c1d10,c1d11,c
                1d12,c1d13,c1d14,c1d
                15 ]
                             }
                ILOM                      4.0.2.26.b r125868        Up-to-date
                BIOS                      30130500                  Up-to-date
                IPMI                      1.8.12.4                  Up-to-date
                HMP                       2.4.1.0.14                Up-to-date
                OAK                       18.3.0.0.0                Up-to-date
                OL                        6.10                      Up-to-date
                OVM                       3.4.4                     Up-to-date
                GI_HOME                   18.3.0.0.180717           Up-to-date
                DB_HOME                   12.1.0.2.170814           Up-to-date
[root@ecl-odabase-1 ~]#

Section 4 : Post Patching

Once everything is complete, check that the Oracle cluster is running and the databases are in the open state.

ps -ef | grep d.bin
ps -ef | grep pmon
sqlplus / as sysdba
select open_mode,database_role from gv$database;
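The open-mode result from gv$database can be judged with a tiny helper. `mode_ok` and the accepted values (READ WRITE for a primary, READ ONLY WITH APPLY for an Active Data Guard standby) are assumptions for illustration, not part of the original checklist.

```shell
#!/bin/sh
# Sketch: accept the open modes we would expect to see after patching.
mode_ok() {
  case "$1" in
    "READ WRITE"|"READ ONLY WITH APPLY") return 0 ;;
    *) return 1 ;;
  esac
}

mode_ok "READ WRITE" && echo "instance looks healthy"
```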


