Thursday, September 30, 2021

Oracle Audit Vault - 20.4 - Part 2 - Agent registration and configure audit data collection

 





This article illustrates the host agent registration and the configuration of audit data collection for a database. To collect audit data from a database, the agent must be installed on the database server. The agent binary is available in the AV web console.

Acronyms:

AV : Audit Vault
DF : Database Firewall

After registering the hosts on the Audit Vault Server, perform the following steps to be able to collect audit records:

  1. Download the Audit Vault Agent software from the Audit Vault Server console
  2. Deploy the Audit Vault Agent
  3. Activate the Audit Vault Agent
  4. Register one or more targets from which the audit data needs to be collected
  5. Start audit trails using the Audit Vault Server console

1. Download the agent jar file.


Log in to the AV console and download the agent jar file. Once the download completes, copy the file to the database server.

This is a sample of the login URL.

####### console url
https://192.168.56.20/console/f?p=7700:LOGIN::::::
 


 2. Install agent 


To install the agent on the database server we need Java 1.8. Set JAVA_HOME and verify the version using java -version.

Verify the Java version.

export JAVA_HOME=$ORACLE_HOME/jdk
export PATH=$JAVA_HOME/bin:$PATH

java -version

[oracle@crs01 config]$ java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
[oracle@crs01 config]$
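
If you want to guard the installation against a wrong JDK, a minimal sketch like the one below can fail early (the grep pattern is an assumption based on the version string format shown above).

# Sketch: abort early if the JDK is not 1.8
if java -version 2>&1 | grep -q '"1\.8'; then
  echo "Java 1.8 detected - OK to install the agent"
else
  echo "Agent requires Java 1.8, found: $(java -version 2>&1 | head -1)" >&2
  exit 1
fi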


The next step is to install the agent on the database server.


chown oracle:oinstall agent.jar


[oracle@crs01 dbhome_1]$ java -jar /home/Stage/agent.jar -d /u01/app/oracle/avdf_agent
Agent installed successfully.
If deploying hostmonitor please refer to product documentation for additional installation steps.
[oracle@crs01 dbhome_1]$





3. Register agent on db server.

To register the agent with the Audit Vault Server we need to obtain the activation key, which is available in the AV console. Register the agent using ./agentctl start -k (the -k option prompts for the key); on subsequent restarts the -k option is not needed.



[oracle@crs01 dbhome_1]$ cd /u01/app/oracle/avdf_agent/bin
[oracle@crs01 bin]$ ./agentctl start -k
Enter Activation Key:
Checking for updates...
Agent is updating. This operation may take a few minutes. Please wait...
Agent updated successfully.
Agent started successfully.
[oracle@crs01 bin]$

Commands to manage and monitor the agent:





[oracle@crs01 bin]$ ./agentctl start
Agent started successfully.

[oracle@crs01 bin]$ ./agentctl status
Agent is running.

[oracle@crs01 bin]$ ./agentctl stop
Stopping Agent...
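
If you want the agent to recover from crashes automatically, a simple watchdog sketch (my own addition, not part of the product) can be scheduled from cron; it keys off the 'Agent is running' text shown above.

# Sketch: restart the agent if it is not running (agent home path from this environment)
cd /u01/app/oracle/avdf_agent/bin || exit 1
./agentctl status | grep -q "Agent is running" || ./agentctl start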

4. Verify the agent registration.

Log in to the Audit Vault web console and navigate to the Agents tab to verify the agent status.

The agent should be displayed in green with the status 'Running'.



 

5. Register target database.

The next step is to register the target database in Audit Vault. Navigate to the Targets tab and click Register.








To configure the target registration we need to perform the tasks below:
  1. Create a user
  2. Grant the required privileges

5.1 User creation.

In this scenario, we are creating a user called avdfuser.

    
    create user avdfuser identified by Oracle_4U;
    
  

5.2 Grant required privilege.

To complete the grants we need to execute the script below three times, passing SETUP, SPA, and ENTITLEMENT as the second argument: @/u01/app/oracle/avdf_agent/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql avdfuser <mode> e.g. @/u01/app/oracle/avdf_agent/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql avdfuser SETUP


  
  [oracle@crs01 config]$ echo $ORACLE_PDB_SID
TWHSE_PDB
[oracle@crs01 config]$

[oracle@crs01 config]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Mon Sep 27 14:35:31 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 TWHSE_PDB                      READ WRITE NO

SQL> @/u01/app/oracle/avdf_agent/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql AVDFUSER SETUP

Session altered.

Granting privileges to "AVDFUSER" ... Done.
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@crs01 config]$

SQL>  @/u01/app/oracle/avdf_agent/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql avdfuser SPA

Session altered.

Granting privileges to "AVDFUSER" ... Done.
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> @/u01/app/oracle/avdf_agent/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql avdfuser ENTITLEMENT

Session altered.

Granting privileges to "AVDFUSER" ... Done.
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
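
Since the same script is executed three times with different modes, a small loop is an option. This is a sketch, assuming ORACLE_PDB_SID is exported so that "/ as sysdba" lands in the target PDB, as shown above.

# Sketch: run the setup script for all three modes in one pass
export ORACLE_PDB_SID=TWHSE_PDB
for mode in SETUP SPA ENTITLEMENT; do
  sqlplus -s / as sysdba @/u01/app/oracle/avdf_agent/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql avdfuser $mode
done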
  
  

5.3 Verify the grants. 

Execute the queries below to validate the grants.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 TWHSE_PDB                      READ WRITE NO

############# check privileges

SQL> select granted_role from dba_role_privs where grantee='AVDFUSER';

GRANTED_ROLE
--------------------------------------------------------------------------------
AUDIT_ADMIN
AUDIT_VIEWER
RESOURCE

SQL> select privilege from dba_sys_privs where grantee='AVDFUSER';

PRIVILEGE
----------------------------------------
AUDIT ANY
AUDIT SYSTEM
CREATE SESSION

6. Configure Auditing in the database

Configuring auditing in the database involves two steps:

  •  Create a new tablespace for the audit data
  •  Execute procedures to set the appropriate audit parameters

Let's verify the current audit settings using the SQL query below before performing any changes.

Current settings:


set lines 600
col OWNER for a10
col TABLE_NAME for a30
col INTERVAL for a20
select owner,table_name,interval,partitioning_type,partition_count,def_tablespace_name from dba_part_Tables where owner='AUDSYS';
OWNER      TABLE_NAME                     INTERVAL             PARTITION PARTITION_COUNT DEF_TABLESPACE_NAME
---------- ------------------------------ -------------------- --------- --------------- ------------------------------
AUDSYS     AUD$UNIFIED                    INTERVAL '1' MONTH   RANGE             1048575 SYSAUX

Create a tablespace to store the audit data.


SQL> create tablespace avdf_aud_data datafile '/oradata/TWHSE01/TWHSE_PDB/avdf_aud_data01.dbf' size 2048m;

Tablespace created.
Set the audit trail location to the new tablespace.

SQL> BEGIN
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
audit_trail_location_value => 'AVDF_AUD_DATA');
END;
/  2    3    4    5    6

PL/SQL procedure successfully completed.

SQL> BEGIN
DBMS_AUDIT_MGMT.INIT_CLEANUP(
AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
DEFAULT_CLEANUP_INTERVAL => 1 );
END;
/  2    3    4    5    6



PL/SQL procedure successfully completed.

SQL> BEGIN
DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (
AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
AUDIT_TRAIL_PURGE_INTERVAL => 1,
AUDIT_TRAIL_PURGE_NAME => 'CLEANUP_OS_DB_AUDIT_RECORDS',
USE_LAST_ARCH_TIMESTAMP => TRUE );
END;
/  2    3    4    5    6    7    8

PL/SQL procedure successfully completed.

Verification:


set lines 600
col owner for a10
col table_name for a30
col interval for a30
select owner,table_name,interval,partitioning_type,partition_count,def_tablespace_name from dba_part_Tables where owner='AUDSYS';


OWNER      TABLE_NAME                     INTERVAL                       PARTITION PARTITION_COUNT DEF_TABLESPACE_NAME
---------- ------------------------------ ------------------------------ --------- --------------- ------------------------------
AUDSYS     AUD$UNIFIED                    INTERVAL '1' MONTH             RANGE             1048575 AVDF_AUD_DATA
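
As an additional cross-check, the audit management configuration can be queried directly from DBA_AUDIT_MGMT_CONFIG_PARAMS; a quick sketch:

# Sketch: confirm the unified audit trail now points at the new tablespace
sqlplus -s / as sysdba <<'EOF'
col parameter_name for a30
col parameter_value for a20
select parameter_name, parameter_value, audit_trail
from   dba_audit_mgmt_config_params
where  parameter_name = 'DB AUDIT TABLESPACE';
EOF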


6.1 Add audit data collection


To collect audit trails from targets, you must deploy the Audit Vault Agent on a standalone host computer which is usually the same computer where the target resides. The Audit Vault Agent includes plug-ins for each target type.

For audit trail collection perform the following:
  1. Register the host
  2. Deploy the Audit Vault Agent
  3. Register the target
  4. Add audit trails for the targets

Once the database-side configuration is complete, configure the audit trail in the web console.
Navigate to the target, select audit data collection, and add the database settings, such as trail type, agent host, etc.





Adding database settings to web console.


After completing this step, verify the audit trail status in the window below.




Tuesday, September 28, 2021

How to create Linux local repo using iso image

Create a local repository using an ISO image to install Oracle RPMs.





This blog covers local repository creation using an ISO image. In most environments customers do not expose their servers to the internet for security reasons, so we need a way to install the required RPMs using yum. This article addresses RPM installation from a yum repo: the default yum repos require internet access, but we can create a local repo to run yum commands without it.

In this scenario, we mount the ISO image through the cdrom device and create a local repository from it.

For this testing we are using VirtualBox to attach the ISO image.

First change the boot priority and mount the image to virtual cdrom.

Change the boot priority as mentioned below.


Browse the OL7 image.




First we need to mount the ISO image to the Linux VM.

After mounting it to the virtual machine, run df -h to verify that the cdrom is mounted.



 [root@crs01 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
devtmpfs                         1.8G     0  1.8G   0% /dev
tmpfs                            1.8G  4.0K  1.8G   1% /dev/shm
tmpfs                            1.8G  9.5M  1.8G   1% /run
tmpfs                            1.8G     0  1.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root               38G  5.4G   32G  15% /
/dev/mapper/oradatavg-oradatalv   19G   45M   18G   1% /oradata
/dev/mapper/orabinvg-orabinlv     29G   45M   27G   1% /u01
/dev/mapper/ol-home               19G   33M   19G   1% /home
/dev/sda1                       1014M  235M  780M  24% /boot
tmpfs                            365M   28K  365M   1% /run/user/0
/dev/sr0                         4.5G  4.5G     0 100% /run/media/root/OL-7.9 Server.x86_64
[root@crs01 ~]#

1. Create a soft link to cdrom 

We need to set this up under a local mount point because the cdrom is read-only. Create a softlink in a local directory pointing to the mounted cdrom image.

[root@crs01 cdrom]# mkdir /cdrom
[root@crs01 cdrom]# ln -s '/run/media/root/OL-7.9 Server.x86_64' /cdrom/  -- As this has space we have to use quotations
[root@crs01 cdrom]# cd /cdrom/
[root@crs01 cdrom]# ls -lrth
total 0
lrwxrwxrwx. 1 root root 36 Sep 22 14:51 OL-7.9 Server.x86_64 -> /run/media/root/OL-7.9 Server.x86_64

1.1 Install the createrepo package.

 We need the createrepo rpm to create the repository. Install this rpm as the initial step.

[root@crs01 OL-7.9 Server.x86_64]# cd Packages/
[root@crs01 Packages]# ls -l createrepo*
-rw-rw-r--. 1 root root 95344 May 27  2017 createrepo-0.9.9-28.el7.noarch.rpm
[root@crs01 Packages]# rpm -ivh createrepo-0.9.9-28.el7.noarch.rpm
warning: createrepo-0.9.9-28.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
        package createrepo-0.9.9-28.el7.noarch is already installed
[root@crs01 Packages]#

1.2 Create repo 

 createrepo -v /cdrom 

1.3 Add entry to yum.repos.d


Create a new file and add the entries below, specifying the local mount point path; in this case it's /cdrom. There will be other repo entries that use the internet to download rpm files, so you can move those files aside to get clean output from yum repolist.

######## create yum repo file
[root@crs01 Packages]# cat /etc/yum.repos.d/cdrom.repo
[cdrom]
name=OL7.9
baseurl=file:///cdrom
gpgcheck=0
enabled=1
[root@crs01 Packages]#

-- move the default yum repo files as mentioned below.

[root@crs01 Packages]# cd /etc/yum.repos.d/
[root@crs01 yum.repos.d]# ls -lrth
total 20K
-rw-r--r--. 1 root root  226 Jul  1  2020 virt-ol7.repo
-rw-r--r--. 1 root root 2.6K Jul  1  2020 uek-ol7.repo
-rw-r--r--. 1 root root 4.2K Jul  1  2020 oracle-linux-ol7.repo
-rw-r--r--. 1 root root   61 Sep 22 14:56 cdrom.repo
[root@crs01 yum.repos.d]# mv virt-ol7.repo virt-ol7.repo.ori
[root@crs01 yum.repos.d]# mv uek-ol7.repo uek-ol7.repo.ori
[root@crs01 yum.repos.d]# mv oracle-linux-ol7.repo oracle-linux-ol7.repo.ori

1.4 Verification. 


 Now run the yum repolist command to get clean repolist output.

[root@crs01 yum.repos.d]# yum repolist
Loaded plugins: langpacks, ulninfo
repo id                                                                                                             repo name                                                                                                          status
cdrom                                                                                                               OL7.9                                                                                                              5,320
repolist: 5,320
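
To be explicit about which repo serves a package while the internet repos are moved aside, yum can be pinned to the cdrom repo; the package name here is just an example.

# Sketch: install a package strictly from the local cdrom repo
yum --disablerepo="*" --enablerepo="cdrom" install -y ksh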

2. Install the Oracle 19c prerequisite packages.


 Use the yum install -y oracle-database-preinstall-19c command to install the 19c prerequisite rpms.

[root@crs01 yum.repos.d]# yum install -y oracle-database-preinstall-19c
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-preinstall-19c.x86_64 0:1.0-2.el7 will be installed
--> Processing Dependency: libaio-devel for package: oracle-database-preinstall-19c-1.0-2.el7.x86_64
--> Processing Dependency: ksh for package: oracle-database-preinstall-19c-1.0-2.el7.x86_64
--> Running transaction check
---> Package ksh.x86_64 0:20120801-142.0.1.el7 will be installed
---> Package libaio-devel.x86_64 0:0.3.109-13.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================================================================================================
 Package                                                                 Arch                                            Version                                                        Repository                                      Size
=============================================================================================================================================================================================================================================
Installing:
 oracle-database-preinstall-19c                                          x86_64                                          1.0-2.el7                                                      cdrom                                           19 k
Installing for dependencies:
 ksh                                                                     x86_64                                          20120801-142.0.1.el7                                           cdrom                                          882 k
 libaio-devel                                                            x86_64                                          0.3.109-13.el7                                                 cdrom                                           12 k

Transaction Summary
=============================================================================================================================================================================================================================================
Install  1 Package (+2 Dependent packages)

Total download size: 914 k
Installed size: 3.2 M
Downloading packages:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                        105 MB/s | 914 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ksh-20120801-142.0.1.el7.x86_64                                                                                                                                                                                           1/3
  Installing : libaio-devel-0.3.109-13.el7.x86_64                                                                                                                                                                                        2/3
  Installing : oracle-database-preinstall-19c-1.0-2.el7.x86_64                                                                                                                                                                           3/3
  Verifying  : oracle-database-preinstall-19c-1.0-2.el7.x86_64                                                                                                                                                                           1/3
  Verifying  : libaio-devel-0.3.109-13.el7.x86_64                                                                                                                                                                                        2/3
  Verifying  : ksh-20120801-142.0.1.el7.x86_64                                                                                                                                                                                           3/3

Installed:
  oracle-database-preinstall-19c.x86_64 0:1.0-2.el7

Dependency Installed:
  ksh.x86_64 0:20120801-142.0.1.el7                                                                                   libaio-devel.x86_64 0:0.3.109-13.el7

Complete!

Monday, September 27, 2021

Oracle Audit Vault - 20.4 - Part 1 - Installation


In this article we will cover the Audit Vault installation and the issues encountered while performing it.

Intro

Database security is in great demand in the current data era. Companies spend colossal sums of money to enhance the security of their database servers. Oracle came up with Oracle Audit Vault and Database Firewall to protect data from internal and external attackers.

It's important to understand what Audit Vault and Database Firewall are.

What is DB Audit Vault?

Oracle Audit Vault and Database Firewall (AVDF) is a complete Database Activity Monitoring (DAM) solution that combines native audit data with network-based SQL traffic capture. ... A Quick-JSON collector simplifies ingesting audit data from databases like MongoDB.

What is Database Firewall?

Oracle Database Firewall acts as the first line of defense for databases, helping prevent internal and external attacks from reaching the database. Highly accurate SQL grammar-based technology monitors and blocks unauthorized SQL traffic on the network before it reaches the database.


NOTE :

The Audit Vault Server and Database Firewall server are software appliances. You must not make any changes to the Linux operating system through the command line on these servers unless following official Oracle documentation or under guidance from Oracle Support.






Make sure to fulfill these prerequisites before starting the installation.

In this scenario we are going to install this on virtual box.

1. Pre-requisites

1.1 VM creation

  • 8GB - Memory
  • 250GB - HDD
  • 1 core cpu.

1.2 Network settings.


Make sure to note the VM network address range and default gateway, because we need these to set up the VM network and access the console URL.




1.3 Download the image


We can download the ISO image from: https://edelivery.oracle.com/

Then search for: Oracle Audit Vault and Database Firewall.
In this case we are going to download Audit Vault and Database Firewall 20.4.




This comes with two separate ISO images:
1.  audit vault
2.  firewall







2. Installation

First we need to create a VM with 8GB memory, a 250GB hard disk, and 1 core. After that, attach the ISO to the IDE controller. This is pre-packaged installation media consisting of both the database and the application installation.

Mount iso file:


It will then automatically mount the ISO image and start booting from it.



2.1 Installation Issues : HDD Capacity

We faced an issue when trying to allocate less capacity for the HDD.

Error:




Solution:

Increase the HDD size to 250GB or more.

2.2 Installation issues : Lack of memory

If you try to install with only 4GB of memory, the installation will fail with the error below, so make sure to allocate 8GB to avoid this failure.


2.3 Setup root account password


We have taken a few screenshots of the installation to give an understanding of the installation components.
The first screenshot shows the disk partitioning. The second screenshot shows the installation of the required rpms.





After this step, setup moves on to the root password configuration.


The installation will then prompt for the ISO again to continue with the remaining steps. Browse to the same ISO image from the IDE controller.




2.4 Setup network


I have added two network interfaces here. The network settings information (Chapter 1 - Pre-requisites) is required to set up the network.

1. NAT network to access the internet.
2. VM host-only network to access the VM within the 192.168.56.x range.


Feed network information:




On completion of the network setup, the installation moves on to the ASM and database installation.






Once the repository creation is done, setup applies the Grid RU.



2.5 Setup Application.

The last two steps of the installation illustrate the application installation and the migration of the created repository to ASM storage.





Congratulations, the installation is now complete!

You can access the web application from the URL below.




3. Web Application Access

Initial access requires root credentials. After that we need to set up several users to segregate the accounts.


This Oracle documentation link covers the setup of these accounts for segregation of duties.



 3.4 Setting the Usernames and Passwords of Audit Vault Server Users 
(taken from oracle documents)

Set up usernames and passwords for Oracle Audit Vault and Database Firewall (Oracle AVDF).

In the post-install configuration page, you set up usernames and passwords for the following Oracle Audit Vault and Database Firewall users:
  • Super Administrator
  • Super Auditor
  • Repository Encryption Keystore
  • Support
  • Root

Changing the root user password on this screen is optional as it is already set during installation.

Password requirements:

If your password contains Unicode characters (such as non-English characters with accent marks), the password requirement is only that it:
  • Be between 8 and 30 characters long.
If you are using English-only, ASCII printable characters, Oracle Audit Vault and Database Firewall requires that passwords:
  • Be between 8 and 30 characters long.
  • Contain at least one of each of the following:
  • Lowercase letters: a-z
  • Uppercase letters: A-Z
  • Digits: 0-9
  • Punctuation marks: comma (,), period (.), plus sign (+), colon (:), exclamation mark (!), and underscore (_)
A rough check of the ASCII policy is sketched after this list.
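
Below is a rough bash sketch of that ASCII policy (an illustration only; the console performs the real validation).

# Sketch: check a candidate password against the ASCII policy listed above
check_avdf_password() {
  local p="$1"
  [ ${#p} -ge 8 ] && [ ${#p} -le 30 ] || { echo "length must be 8-30"; return 1; }
  case "$p" in *[a-z]*) ;; *) echo "needs a lowercase letter"; return 1;; esac
  case "$p" in *[A-Z]*) ;; *) echo "needs an uppercase letter"; return 1;; esac
  case "$p" in *[0-9]*) ;; *) echo "needs a digit"; return 1;; esac
  case "$p" in *[,.+:!_]*) ;; *) echo "needs one of , . + : ! _"; return 1;; esac
  echo "password meets the basic policy"
}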


Console output:



Tuesday, September 14, 2021

Virtualized ODA upgrade 18.3.0.0 to 18.8.0.0 - Journey to 12.1.2.12 - 19.13.0.0 - Part 2

 

ODA upgrade 18.3.0.0 to 18.8.0.0





I hope the previous blog was useful for patching the ODA from 12.1.2.12 to 18.3.0.0.

Our plan is to upgrade the ODA from 12.1.2.12 to the latest 19.13.0.0. Before moving to 19.13.0.0 we need to upgrade the ODA to 18.8.0.0. (The plan is in ODA upgrade 12.1.2.12 to 18.3.0.0 -- Journey to 19.13.0.0 - Part 1.)

This blog elaborates on the steps taken to patch a virtualized ODA from 18.3.0.0 to 18.8.0.0. On the ODA virtualized platform (X5), the upgrade commands are orchestrated by the oakcli utility.

The previous patching upgraded the grid from 12c to 18c, and that was the major upgrade. In this patching, 18.8.0.0 applies the grid PSU on top of the current 18c grid binaries, and a few storage patches are included in this upgrade.

This article is focused on the 18.8.0.0 upgrade for the X5 hardware platform.

How to find the hardware version:


[root@ecl-odabase-0 delshare]# oakcli show env_hw
VM-ODA_BASE ODA X5-2
[root@ecl-odabase-0 delshare]#



Patching plan :

12.1.2.12 - >  18.3.0.0 - complete
18.3.0.0  - >  18.8.0.0 - In - progress
18.8.0.0  - >  19.8.0.0 -
19.8.0.0  - >  19.9.0.0 -
To get an understanding please find the patching sequence below. 

Also make sure to run oakcli show disk to validate the disk status; if there are any disk failures, address them before patching.

###########  Patching sequence

1. computenodes -ODA_BASE
2. storage
3. database - create new 18.8.0.0 home and move database to 18.8 or 
           upgrade oracle database with latest psu that comes with 18.8 bundle

############ Pre-check: disk status
oakcli show disk

1. Preparation

1.1 Space requirement



First of all we need to ensure there is enough space on the / (root), /u01, and /opt file systems. At least 20 GB should be available on each. If not, we can do some cleanup or extend the LVM partitions to gain space.

df -h / /u01 /opt

[root@ecl-odabase-0 18.8.0.0]# df -h / /opt /u01
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2       55G   33G   20G  63% /
/dev/xvda2       55G   33G   20G  63% /
/dev/xvdb1       92G   61G   27G  70% /u01
[root@ecl-odabase-0 18.8.0.0]#
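
A quick scripted check of the same requirement can save a failed patch attempt; this is a sketch of the 20 GB rule stated above.

# Sketch: warn if /, /u01 or /opt has less than 20 GB free
for fs in / /u01 /opt; do
  avail_mb=$(df -Pm "$fs" | awk 'NR==2 {print $4}')
  [ "$avail_mb" -ge 20480 ] || echo "WARNING: $fs has less than 20 GB free (${avail_mb} MB)"
done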

Download the Oracle Database Appliance Server Patch for OAK Stack and Virtualized Platforms (patch 30518438)

Stage the patches in the /u01 mount point and unpack the binaries.

# oakcli unpack -package /tmp/p30518438_188000_Linux-x86-64_1of2.zip
# oakcli unpack -package /tmp/p30518438_188000_Linux-x86-64_2of2.zip

/u01/PATCH
# oakcli unpack -package /u01/PATCH/18.8.0.0/p30518438_188000_Linux-x86-64_1of2.zip
# oakcli unpack -package /u01/PATCH/18.8.0.0/p30518438_188000_Linux-x86-64_2of2.zip
Please find the expected output after unpacking

expected output:

[root@ecl-odabase-0 18.8.0.0]# oakcli unpack -package /u01/PATCH/18.8.0.0/p30518438_188000_Linux-x86-64_1of2.zip
Unpacking will take some time,  Please wait...
Successfully unpacked the files to repository.

[root@ecl-odabase-0 18.8.0.0]# oakcli unpack -package /u01/PATCH/18.8.0.0/p30518438_188000_Linux-x86-64_2of2.zip
Unpacking will take some time,  Please wait...
Successfully unpacked the files to repository.
[root@ecl-odabase-0 18.8.0.0]#

Once the unpacking completes, verify that the repository has been updated with the latest bundle patches.

############ verify the patch repository against the bundle
oakcli update -patch 18.8.0.0.0 --verify

1.2 Backup ODA Base 


An ODA_BASE backup can be taken from Dom0. Also, take a full database backup and VM backups before performing this upgrade activity.


- Take a level zero backup of the running databases
- Backup the running VMs
- Backup oda_base (domU) from Dom0

2. Pre-Patching Steps

Before running the patching commands, always make sure to check the pre-patching report for the OS and components. If there are any major issues, you can work with Oracle to address them before patching.

2.1 OS pre-validation steps

Use the command mentioned below to validate readiness for the OS upgrade. Run this from both nodes.
########## Validate ospatch
oakcli validate -c ospatch -ver 18.8.0.0.0
The following command is used to validate the ODA components.

########## Validate from first node

oakcli validate -a

3. Patching 

We will follow the patching sequence mentioned below: first the compute nodes (ODA_BASE), then storage, and last the database PSU.

###########  Patching sequence
computenodes -ODA_BASE
storage
database - create new 18.8.0.0 home and move database to 18.8 or 
           upgrade oracle database with latest psu that comes with 18.8 bundle
First note down the running VMs and the repo details. Use the commands below to record the running repos.

[root@ecl-odabase-0 18.3.0.0]# oakcli show repo



          NAME                          TYPE            NODENUM  FREE SPACE     STATE           SIZE


          kali_test                     shared          0               N/A     OFFLINE         N/A

          kali_test                     shared          1               N/A     OFFLINE         N/A

          odarepo1                      local           0               N/A     N/A             N/A

          odarepo2                      local           1               N/A     N/A             N/A

          qualys                        shared          0               N/A     OFFLINE         N/A

          qualys                        shared          1               N/A     OFFLINE         N/A

          vmdata                        shared          0               N/A     OFFLINE         N/A

          vmdata                        shared          1            99.99%     ONLINE          4068352.0M

          vmsdev                        shared          0               N/A     OFFLINE         N/A

          vmsdev                        shared          1               N/A     UNKNOWN         N/A
          
Use the command below to note down the running VMs.

[root@ecl-odabase-1 PATCH]# oakcli show vm

          NAME                                  NODENUM         MEMORY          VCPU            STATE           REPOSITORY

        kali_server                             0               4196M              2            UNKNOWN         kali_test
        qualyssrv                               0               4196M              2  
Note: In 18.8 there is a bug related to TFA:

TFA should be stopped manually before starting the patching process.

/etc/init.d/init.tfa stop
expected output (TFA):

[root@ecl-odabase-0 18.8.0.0]# /etc/init.d/init.tfa stop
Stopping TFA from init for shutdown/reboot
oracle-tfa stop/waiting
WARNING - TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.
TFAmain Force Stopped Successfully : status mismatch
TFA Stopped Successfully
Killing TFA running with pid 19343
. . .
Successfully stopped TFA..
[root@ecl-odabase-0 18.8.0.0]#

[root@ecl-odabase-1 ~]# /etc/init.d/init.tfa stop
Stopping TFA from init for shutdown/reboot
oracle-tfa stop/waiting
WARNING - TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.
TFA-00104 Cannot establish connection with TFA Server. Please check TFA Certificates
Killing TFA running with pid 10681
. . .
Successfully stopped TFA..
[root@ecl-odabase-1 ~]#

3.1 Patching ODA Base servers.


Run the patching command below in a screen terminal so we do not need to panic about connection drops. If the connection is interrupted during the patching window, we can reattach the screen using screen -r.

screen
screen -ls -- screen terminal verification
script /tmp/odabase_upgrade_18800_19082021.txt - record all the steps 
/opt/oracle/oak/bin/oakcli update -patch 18.8.0.0.0 --server

3.2 Troubleshooting server patching issues

Patching failed due to grid pre-check failure. 
This is the error displayed in the terminal window.



 TFA-00002 Oracle Trace File Analyzer (TFA) is not running
 TFA-00002 Oracle Trace File Analyzer (TFA) is not running
 TFA-00002 Oracle Trace File Analyzer (TFA) is not running
 stop: Unknown instance:
 TFA-00002 Oracle Trace File Analyzer (TFA) is not running
 SUCCESS: 2021-08-17 15:14:55: Successfully update AHF rpm.
 INFO: 2021-08-17 15:14:55: ------------------Patching Grid-------------------------
 INFO: 2021-08-17 15:14:57: Clusterware is not running on local node
 INFO: 2021-08-17 15:14:57: Attempting to start clusterware and its resources on local
 node
 INFO: 2021-08-17 15:16:16: Successfully started the clusterware on local node

 INFO: 2021-08-17 15:16:16: Checking for available free space on /, /tmp, /u01
 INFO: 2021-08-17 15:16:16: Shutting down Clusterware and CRS on local node.
 INFO: 2021-08-17 15:16:16: Clusterware is running on local node
 INFO: 2021-08-17 15:16:16: Attempting to stop clusterware and its resources locally
 SUCCESS: 2021-08-17 15:17:18: Successfully stopped the clusterware on local node

 INFO: 2021-08-17 15:17:18: Shutting down CRS on the node...
 SUCCESS: 2021-08-17 15:17:21: Successfully stopped CRS processes on the node
 INFO: 2021-08-17 15:17:21: Checking for running CRS processes on the node.
 INFO: 2021-08-17 15:17:43: Starting up CRS and Clusterware on the node
 INFO: 2021-08-17 15:17:43: Starting up CRS on the node...
 SUCCESS: 2021-08-17 15:20:49: CRS has started on the node

 INFO: 2021-08-17 15:20:51: Running cluvfy to correct cluster state
 ERROR: 2021-08-17 15:24:49: Clusterware state is not NORMAL.
 ERROR: 2021-08-17 15:24:49: Failed to patch server (grid) component

error at Command = /usr/bin/ssh -l root ecl-odabase-1 /opt/oracle/oak/pkgrepos/System/18.8.0.0.0/bin/PatchDriver -tag 20210817140429 -server -version 18.8.0.0.0> and errnum=
ERROR  : Command = /usr/bin/ssh -l root ecl-odabase-1 /opt/oracle/oak/pkgrepos/System/18.8.0.0.0/bin/PatchDriver -tag 20210817140429 -server -version 18.8.0.0.0 did not complete successfully. Exit code 1 #Step -1#
Exiting...
ERROR: Unable to apply the patch 
ODA patching and other logs are under the /opt mount point. The patching log location is: /opt/oracle/oak/log/ecl-odabase-0/patch/18.8.0.0.0

ecl-odabase-1: PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
                addresses, but SCAN "ecl-oda-scan" resolves to only
                "/10.11.30.48,/10.11.30.49"

 ecl-odabase-0: PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
                addresses, but SCAN "ecl-oda-scan" resolves to only
                "/10.11.30.48,/10.11.30.49"

   Verifying DNS/NIS name service 'ecl-oda-scan' ...FAILED
   PRVG-1101 : SCAN name "ecl-oda-scan" failed to resolve

 Verifying Clock Synchronization ...FAILED
   Verifying Network Time Protocol (NTP) ...FAILED
     Verifying NTP daemon is synchronized with at least one external time source
     ...FAILED
     ecl-odabase-1: PRVG-13602 : NTP daemon is not synchronized with any
                    external time source on node "ecl-odabase-1".

     ecl-odabase-0: PRVG-13602 : NTP daemon is not synchronized with any
                    external time source on node "ecl-odabase-0".


 CVU operation performed:      stage -post crsinst
 Date:                         Aug 17, 2021 3:20:55 PM
 CVU home:                     /u01/app/18.0.0.0/grid/
 User:                         grid

2021-08-17 15:24:49: Executing cmd: /u01/app/18.0.0.0/grid/bin/crsctl query crs activeversion -f
2021-08-17 15:24:49: Command output:
>  Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [3769208751]. ,
>End Command output
2021-08-17 15:24:49: ERROR:  Clusterware state is not NORMAL.

Followed: How to resolve the cluster upgrade state of [UPGRADE FINAL] after successfully upgrading Grid Infrastructure (GI) to 18c or higher (Doc ID 2583141.1)

The solution to address grid pre-patching issues.

######### Solution

1. Issue "/u01/app/18.0.0.0/grid/bin/cluvfy stage -post crsinst -gi_upgrade -n all"
2. Fix the critical errors that above command reports
3. Rerun "/u01/app/18.0.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all"
4. Issue "/u01/app/18.0.0.0/grid/bin/crsctl query crs activeversion -f" and confirm that the cluster upgrade state is [NORMAL].
5. If the above command still reports that the cluster upgrade state is [UPGRADE FINAL], repeat steps 1 to 3 and fix all critical errors.

Three issues are to be addressed in this scenario:

  1.  NTP
  2.  DNS
  3.  Only two SCAN addresses are configured

3.2.1 Resolution for NTP

In this environment we do not have an NTP server, so the only option is to use the cluster's own time synchronization. To use the built-in Cluster Time Synchronization Service (CTSS) we need to move the NTP configuration files aside and restart the cluster.


mv /etc/ntp.conf /etc/ntp.conf.ori
rm /var/run/ntpd.pid
crsctl start crs
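
To confirm the cluster has switched to its own time synchronization once the NTP files are moved aside, crsctl can report the CTSS state:

# Sketch: CTSS should now report Active mode instead of Observer mode
/u01/app/18.0.0.0/grid/bin/crsctl check ctss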

3.2.2 Resolution for DNS issue 

In this scenario we do not have a DNS server, so the plan is to use the /etc/hosts file as an alternative. Add the required IP addresses to the /etc/hosts file on both servers. First, comment out the nameserver entry in /etc/resolv.conf; otherwise the cluster will still try to resolve the addresses through the DNS server.

[root@ecl-odabase-0 18.8.0.0]# cat /etc/resolv.conf

# Following added by OneCommand
search newco.local
#nameserver 10.11.30.254
# End of section
[root@ecl-odabase-0 18.8.0.0]#

[oracle@ecl-odabase-0 gg_191004]$ cat /etc/hosts


# Following added by OneCommand
127.0.0.1    localhost.localdomain localhost

# PUBLIC HOSTNAMES

# PRIVATE HOSTNAMES
192.168.16.27    ecl-oda-lab1-priv0.newco.local    ecl-oda-lab1-priv0
192.168.16.28    ecl-oda-lab2-priv0.newco.local    ecl-oda-lab2-priv0

# NET(0-3) HOSTNAMES
10.11.30.155    ecl-odabase-0.newco.local ecl-odabase-0
10.11.30.156    ecl-odabase-1.newco.local ecl-odabase-1

# VIP HOSTNAMES
10.11.30.157    ecl-oda-0-vip.newco.local  ecl-oda-0-vip
10.11.30.158    ecl-oda-1-vip.newco.local  ecl-oda-1-vip

# Below are SCAN IP addresses for reference.
# SCAN_IPS=(10.11.30.48 10.11.30.49)
10.11.30.48     ecl-oda-scan.newco.local  ecl-oda-scan
10.11.30.49     ecl-oda-scan.newco.local  ecl-oda-scan
10.11.30.50     ecl-oda-scan.newco.local  ecl-oda-scan

10.11.30.105  eclipsys-noc.localdomain          eclipsys-noc
[oracle@ecl-odabase-0 gg_191004]$
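
A quick resolution check from both nodes (a sketch using the standard NSS lookup) should now return the SCAN entries from /etc/hosts:

# Sketch: all three SCAN addresses should be listed
getent ahosts ecl-oda-scan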

3.2.3 Resolution for missing scan address. 

In this environment we had only two SCAN IP addresses, so we need to configure a new SCAN address. Check with your network team and obtain an IP address from the same SCAN range. In this environment the new SCAN address is 10.11.30.50. Once you add it to /etc/hosts on both nodes, run the command mentioned below to discover the new SCAN address.

/u01/app/18.0.0.0/grid/bin/srvctl modify scan -n ecl-oda-scan
Now run the config command to verify:

[root@ecl-odabase-0 ~]# /u01/app/18.0.0.0/grid/bin/srvctl config scan
SCAN name: ecl-oda-scan, Network: 1
Subnet IPv4: 10.11.30.0/255.255.255.0/eth0, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 10.11.30.48
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.11.30.49
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 3 IPv4 VIP: 10.11.30.50
SCAN VIP is enabled.
Once it’s discovered make sure to check the scan listener status and start the newly configured scan.

[root@ecl-odabase-0 ~]# srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node ecl-odabase-1
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node ecl-odabase-0
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is not running
[root@ecl-odabase-0 ~]#
Now it's time to run the cluster post-check again. If there are any issues, we need to address them before patching.

/u01/app/18.0.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
When there are no more cluster issues, we can start the patching again using the commands below, which were already mentioned in chapter 3.1 (Patching ODA Base servers).

script /tmp/odabase_upgrade_18800_19082021.txt - record all the steps 
/opt/oracle/oak/bin/oakcli update -patch 18.8.0.0.0 --server

3.3 Server patching failed on ilom 


Again we faced an obstacle during server patching; this time it failed on the ILOM patching.

Note: We faced this error while performing the 18.3 to 18.8 ODA patch on node01.

Error:
ERROR  : Ran '/usr/bin/scp  root@192.168.16.28:/opt/oracle/oak/install/oakpatch_summary /opt/oracle/oak/install/oakpatch_summary' and it returned code(1) and output is:
         ssh: connect to host 192.168.16.28 port 22: Connection timed out

INFO: Infrastructure patching summary on node: 192.168.16.28

INFO: Running post-install scripts
INFO: Running postpatch on node 1...
ERROR  : Ran '/usr/bin/ssh -l root 192.168.16.28 /opt/oracle/oak/pkgrepos/System/18.8.0.0.0/bin/postpatch -v 18.8.0.0.0 --infra --gi -tag 20210819112727' and it returned code(255) and output is:
         ssh: connect to host 192.168.16.28 port 22: Connection timed out

error at Command = /usr/bin/ssh -l root 192.168.16.28 /opt/oracle/oak/pkgrepos/System/18.8.0.0.0/bin/postpatch -v 18.8.0.0.0 --infra --gi -tag 20210819112727 and errnum=
ERROR  : Command = /usr/bin/ssh -l root 192.168.16.28 /opt/oracle/oak/pkgrepos/System/18.8.0.0.0/bin/postpatch -v 18.8.0.0.0 --infra --gi -tag 20210819112727 did not complete successfully. Exit code 255 #Step -1#
Exiting...
ERROR: Unable to apply the patch 

3.3.1 ILOM patching solution

The only solution is to restart ODA_BASE and Dom0 from the ILOM console; after the reboot, check the ODA components.

Log in to the ILOM and power cycle the node 01 server.
### verify the component version once the node is fully up 
oakcli show version -detail
Validation output


========================
18.8 After patching
========================
#### Node 01

[root@ecl-odabase-0 ~]# oakcli show version -detail
Reading the metadata. It takes a while...

System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
18.8.0.0.0
                Controller_INT            4.650.00-7176             Up-to-date
                Controller_EXT            13.00.00.00               Up-to-date
                Expander                  0018                      001E
                SSD_SHARED {
                [ c1d20,c1d21,c1d22,      A29A                      Up-to-date
                c1d23,c1d44,c1d45,c1
                d46,c1d47 ]
                [ c1d16,c1d17,c1d18,      A29A                      Up-to-date
                c1d19,c1d40,c1d41,c1
                d42,c1d43 ]
                             }
                HDD_LOCAL                 A7E0                      Up-to-date
                HDD_SHARED {
                [ c1d0,c1d1,c1d2,c1d      PAG1                      PD51
                3,c1d4,c1d5,c1d6,c1d
                7,c1d8,c1d9,c1d10,c1
                d11,c1d12,c1d13,c1d1
                4,c1d15,c1d28 ]
                [ c1d24,c1d25,c1d26,      A3A0                      Up-to-date
                c1d27,c1d29,c1d30,c1
                d31,c1d32,c1d33,c1d3
                4,c1d35,c1d36,c1d37,
                c1d38,c1d39 ]
                             }
                ILOM                      4.0.4.52 r132805          Up-to-date
                BIOS                      30300200                  Up-to-date
                IPMI                      1.8.15.0                  Up-to-date
                HMP                       2.4.5.0.1                 Up-to-date
                OAK                       18.8.0.0.0                Up-to-date
                OL                        6.10                      Up-to-date
                OVM                       3.4.4                     Up-to-date
                GI_HOME                   18.8.0.0.191015           Up-to-date
                DB_HOME                   12.1.0.2.180717           12.1.0.2.191015
[root@ecl-odabase-0 ~]#

3.4 Troubleshoot shared repo start-up issues.


On completion, we noticed that the shared repositories were not coming up due to HAVIP startup issues, because all the exportfs resources were missing from the cluster.

The Dom0 mount points are mounted as NFS shares, and the dynamic entries created under /etc/mtab were missing.
Please find a sample /etc/mtab entry for your perusal.
/dev/sda3 / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/sda2 /OVS ext3 rw 0 0
/dev/sda1 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
debugfs /sys/kernel/debug debugfs rw 0 0
xenfs /proc/xen xenfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
none /var/lib/xenstored tmpfs rw 0 0
192.168.18.21:/u01/app/sharedrepo/vmstor1 /OVS/Repositories/vmstor1 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,nfsvers=3,timeo=600,addr=192.168.18.21 0 0
[root@pinode0 ~]#
This shared-repo issue is recorded under the known issues section, but we found a slight difference because the repo is not mounted on the Dom0 server.

https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/18.8/cmtrn/issues-with-oda-odacli.html#GUID-5BA56322-127F-424F-8D1E-DEB3939CD60C





Error log :

########## Error

####################### DOM 0 NODE01
2021-08-30 12:19:41,201 [Cmd_EnvId] [MainThread] [repoactions] [INFO] [162] Checking for shared repos
2021-08-30 12:19:44,228 [Cmd_EnvId] [MainThread] [repoactions] [ERROR] [182] Error encountered while checking for shared repos: OAKERR:7084The HAVIP 192.168.18.21 is not pingable
2021-08-30 12:19:44,230 [Cmd_EnvId] [MainThread] [agentutils] [DEBUG] [181] Created xml string 0OAKERR:7084The HAVIP 192.168.18.21 is not pingable
2021-08-30 12:19:59,169 [Cmd_EnvId] [MainThread] [repoactions] [INFO] [162] Checking for shared repos
2021-08-30 12:20:02,193 [Cmd_EnvId] [MainThread] [repoactions] [ERROR] [182] Error encountered while checking for shared repos: OAKERR:7084The HAVIP 192.168.18.21 is not pingable
2021-08-30 12:20:02,194 [Cmd_EnvId] [MainThread] [agentutils] [DEBUG] [181] Created xml string 0OAKERR:7084The HAVIP 192.168.18.21 is not pingable

####################### DOM 0 NODE02
2021-08-30 12:30:16,328 [Cmd_EnvId] [MainThread] [repoactions] [INFO] [162] Checking for shared repos
2021-08-30 12:30:19,364 [Cmd_EnvId] [MainThread] [repoactions] [ERROR] [182] Error encountered while checking for shared repos: OAKERR:7084The HAVIP 192.168.19.21 is not pingable
2021-08-30 12:30:19,366 [Cmd_EnvId] [MainThread] [agentutils] [DEBUG] [181] Created xml string 0OAKERR:7084The HAVIP 192.168.19.21 is not pingable
2021-08-30 12:31:36,167 [Cmd_EnvId] [MainThread] [repoactions] [INFO] [162] Checking for shared repos
2021-08-30 12:31:39,188 [Cmd_EnvId] [MainThread] [repoactions] [ERROR] [182] Error encountered while checking for shared repos: OAKERR:7084The HAVIP 192.168.19.21 is not pingable
2021-08-30 12:31:39,189 [Cmd_EnvId] [MainThread] [agentutils] [DEBUG] [181] Created xml string 0OAKERR:7084The HAVIP 192.168.19.21 is not pingable
2021-08-30 12:32:53,686 [Cmd_EnvId] [MainThread] [repoactions] [INFO] [162] Checking for shared repos

Secondly, check the ACFS mount point status using the command mentioned below.


[root@ecl-odabase-0 ~]# /sbin/acfsutil registry -l
Device : /dev/asm/datastore-37 : Mount Point : /u02/app/oracle/oradata/datastore : Options : none : Nodes : all : Disk Group: DATA : Primary Volume : DATASTORE : Accelerator Volumes :
Device : /dev/asm/datcdbdev-37 : Mount Point : /u02/app/oracle/oradata/datcdbdev : Options : none : Nodes : all : Disk Group: DATA : Primary Volume : DATCDBDEV : Accelerator Volumes :
Device : /dev/asm/kali_test-37 : Mount Point : /u01/app/sharedrepo/kali_test : Options : none : Nodes : all : Disk Group: DATA : Primary Volume : KALI_TEST : Accelerator Volumes :
Device : /dev/asm/qualys-37 : Mount Point : /u01/app/sharedrepo/qualys : Options : none : Nodes : all : Disk Group: DATA : Primary Volume : QUALYS : Accelerator Volumes :
Device : /dev/asm/vmdata-37 : Mount Point : /u01/app/sharedrepo/vmdata : Options : none : Nodes : all : Disk Group: DATA : Primary Volume : VMDATA : Accelerator Volumes :
Device : /dev/asm/flashdata-216 : Mount Point : /u02/app/oracle/oradata/flashdata : Options : none : Nodes : all : Disk Group: FLASH : Primary Volume : FLASHDATA : Accelerator Volumes :
Device : /dev/asm/datastore-445 : Mount Point : /u01/app/oracle/fast_recovery_area/datastore : Options : none : Nodes : all : Disk Group: RECO : Primary Volume : DATASTORE : Accelerator Volumes :
Device : /dev/asm/db_backup-445 : Mount Point : /db_backup : Options : none : Nodes : all : Disk Group: RECO : Primary Volume : DB_BACKUP : Accelerator Volumes :
Device : /dev/asm/delshare-445 : Mount Point : /delshare : Options : none : Nodes : all : Disk Group: RECO : Primary Volume : DELSHARE : Accelerator Volumes :
Device : /dev/asm/prdmgtshare-445 : Mount Point : /prdmgtshare : Options : none : Nodes : all : Disk Group: RECO : Primary Volume : PRDMGTSHARE : Accelerator Volumes :
Device : /dev/asm/rcocdbdev-445 : Mount Point : /u01/app/oracle/fast_recovery_area/rcocdbdev : Options : none : Nodes : all : Disk Group: RECO : Primary Volume : RCOCDBDEV : Accelerator Volumes :
Device : /dev/asm/vmsdev-445 : Mount Point : /u01/app/sharedrepo/vmsdev : Options : none : Nodes : all : Disk Group: RECO : Primary Volume : VMSDEV : Accelerator Volumes :
Device : /dev/asm/datastore-158 : Mount Point : /u01/app/oracle/oradata/datastore : Options : none : Nodes : all : Disk Group: REDO : Primary Volume : DATASTORE : Accelerator Volumes :
Device : /dev/asm/rdocdbdev-158 : Mount Point : /u01/app/oracle/oradata/rdocdbdev : Options : none : Nodes : all : Disk Group: REDO : Primary Volume : RDOCDBDEV : Accelerator Volumes :
If all the ACFS mount points are mounted, check the cluster status.

[root@ecl-odabase-0 ~]# /u01/app/18.0.0.0/grid/bin/crsctl status res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
ora.crf
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
ora.crsd
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
ora.cssd
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
ora.ctssd
      1        ONLINE  ONLINE       ecl-odabase-0            OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
ora.drivers.oka
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
ora.gipcd
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
ora.gpnpd
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
ora.mdnsd
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
ora.storage
      1        ONLINE  ONLINE       ecl-odabase-0            STABLE
--------------------------------------------------------------------------------
[root@ecl-odabase-0 ~]# ps -ef | grep pmon
Check the HAVIP status; it shows that the export file systems are not mounted.

####### HAVIP issue 

-- as root
/u01/app/18.0.0.0/grid/bin/srvctl config havip

[grid@ecl-odabase-0 trace]$ /u01/app/18.0.0.0/grid/bin/srvctl start havip -id havip_3 -n ecl-odabase-0
PRCE-1026 : Cannot start HAVIP resource without an Export FS resource.

Now let's check the log file for oda_base. This is where you can find the actual problem.

Log file : /opt/oracle/oak/log//odabaseagent/odabaseagent* Error message below:

 OAKERR8038 The filesystem could not be exported as a crs resource  
OAKERR:5015 Start repo operation has been disabled by flag

We can validate the mounted NFS shares using the command mentioned below.

showmount -e
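
It can also help to compare what the clusterware thinks it should be exporting with the live NFS exports; a sketch (the srvctl exportfs syntax is an assumption, check it against your GI version):

# Sketch: clusterware-managed exports vs. what NFS is actually serving
/u01/app/18.0.0.0/grid/bin/srvctl config exportfs
showmount -e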

3.4.1 Solution for shared repo issue.


Let's enable the shared repo from ODA_BASE and reboot the cluster in a rolling fashion. The better option is to stop the cluster and reboot the nodes from the ILOM.

 Meta Link note: Shared Repo Startup Fails with OAKERR:8038 and OAKERR:5015 on ODA 12.2.1.2.0 (Doc ID 2379347.1) 

 Known issues Link :

 https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/18.8/cmtrn/issues-with-oda-odacli.html#GUID-5BA56322-127F-424F-8D1E-DEB3939CD60C

 [root@ecl-odabase-0 ~]# oakcli enable startrepo -node 0
Start repo operation is now ENABLED on node 0
[root@ecl-odabase-0 ~]# oakcli enable startrepo -node 1
Start repo operation is now ENABLED on node 1
[root@ecl-odabase-0 ~]#
 
oakcli show repo

Now only two components are left to patch 
  1.  Storage 
  2.  Database


3.5 Storage patching

Before storage patching, make sure to stop the VMs and repos.

script /tmp/output_storage_08202021.txt
/opt/oracle/oak/bin/oakcli update -patch version --storage

/opt/oracle/oak/bin/oakcli update -patch 18.8.0.0.0 --storage
Run below-mentioned command for verification.

===============================
After verification
===============================

[root@ecl-odabase-0 ~]# oakcli show version -detail
Reading the metadata. It takes a while...
System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
18.8.0.0.0
                Controller_INT            4.650.00-7176             Up-to-date
                Controller_EXT            13.00.00.00               Up-to-date
                Expander                  001E                      Up-to-date
                SSD_SHARED {
                [ c1d20,c1d21,c1d22,      A29A                      Up-to-date
                c1d23,c1d44,c1d45,c1
                d46,c1d47 ]
                [ c1d16,c1d17,c1d18,      A29A                      Up-to-date
                c1d19,c1d40,c1d41,c1
                d42,c1d43 ]
                             }
                HDD_LOCAL                 A7E0                      Up-to-date
                HDD_SHARED {
                [ c1d24,c1d25,c1d26,      A3A0                      Up-to-date
                c1d27,c1d29,c1d30,c1
                d31,c1d32,c1d33,c1d3
                4,c1d35,c1d36,c1d37,
                c1d38,c1d39 ]
                [ c1d0,c1d1,c1d2,c1d      PD51                      Up-to-date
                3,c1d4,c1d5,c1d6,c1d
                7,c1d8,c1d9,c1d10,c1
                d11,c1d12,c1d13,c1d1
                4,c1d15,c1d28 ]
                             }
                ILOM                      4.0.4.52 r132805          Up-to-date
                BIOS                      30300200                  Up-to-date
                IPMI                      1.8.15.0                  Up-to-date
                HMP                       2.4.5.0.1                 Up-to-date
                OAK                       18.8.0.0.0                Up-to-date
                OL                        6.10                      Up-to-date
                OVM                       3.4.4                     Up-to-date
                GI_HOME                   18.8.0.0.191015           Up-to-date
                DB_HOME                   12.1.0.2.180717           12.1.0.2.191015
[root@ecl-odabase-0 ~]#

4. Post Patching Validation


Once the patching is complete, validate the ODA environment as mentioned below.
ps -ef | grep pmon - check that the databases are up and running


ps -ef | grep pmon 
grid     22358     1  0 Sep10 ?        00:00:17 asm_pmon_+ASM1
grid     26041     1  0 Sep10 ?        00:00:17 apx_pmon_+APX1
oracle   93837     1  0 Sep13 ?        00:00:05 ora_pmon_clonedb1
root     98071 97908  0 14:02 pts/0    00:00:00 grep pmon

Also, execute oakcli show repo to validate the running shared repositories and oakcli show vm to validate the running VMs.



[root@ecl-odabase-0 ~]# oakcli show repo

          NAME                          TYPE            NODENUM  FREE SPACE     STATE           SIZE


          kali_test                     shared          0            94.74%     ONLINE          512000.0M

          kali_test                     shared          1            94.74%     ONLINE          512000.0M

          odarepo1                      local           0               N/A     N/A             N/A

          odarepo2                      local           1               N/A     N/A             N/A

          qualys                        shared          0            98.35%     ONLINE          204800.0M

          qualys                        shared          1            98.35%     ONLINE          204800.0M

          vmdata                        shared          0            99.99%     ONLINE          4068352.0M

          vmdata                        shared          1            99.99%     ONLINE          4068352.0M

          vmsdev                        shared          0            99.99%     ONLINE          1509376.0M

          vmsdev                        shared          1            99.99%     ONLINE 



[root@ecl-odabase-0 ~]# oakcli show vm

          NAME                                  NODENUM         MEMORY          VCPU            STATE           REPOSITORY

        kali_server                             0               4196M              2            OFFLINE         kali_test
        qualyssrv                               0               4196M              2  
