Friday, June 30, 2023

Oracle Base Database patching to 19.17 using DBCLI

 




OCI (Oracle Cloud Infrastructure)

Intro

Cloud platforms are booming and many organizations are planning a cloud journey to gain a competitive advantage. The cloud market is vast, with a diverse selection of services, so it is not easy to select a suitable cloud platform for an organization. There are certain criteria you need to consider when selecting a platform.

    • Certifications & Standards
    • Technologies & Service Roadmap
    • Data Security, Data Governance, and Business policies
    • Service Dependencies & Partnerships
    • Contracts, Commercials & SLAs
    • Reliability & Performance
    • Migration Support, Vendor Lock-in & Exit Planning
    • Business health & Company profile
    • Cost of the cloud services and licensing
    All these factors play a huge role when selecting a cloud platform suited for your system workloads. If your IT infrastructure consists of many Oracle databases, I would say there is no better place than OCI (Oracle Cloud Infrastructure) to host your workload. Oracle has been in the market for a long time and offers an immense selection of options for hosting Oracle workloads.

    Autonomous selection:
    • Autonomous Database
    • Autonomous Data Warehouse
    • Autonomous JSON Database
    • Autonomous Transaction Processing
    Exadata services:
    • Exadata at Oracle Cloud
    • Exadata Cloud@Customer
    DB service selection:
    • Oracle Base Database Service
    VM selection:
    • Bare Metal

    The best part about OCI is the new orchestration tools Oracle introduced to standardize and ease administration work. I have listed the orchestration tools and the administration tasks we can perform with each, with a sample invocation after the list.

    • OCI CLI - Oracle Cloud Command Line Interface:
    The OCI CLI is a small-footprint tool that makes calls to Oracle Cloud Infrastructure REST APIs using HTTPS requests. It is built with the SDK for Python.
    • dbcli - Database CLI:
    dbcli is a command line tool available on virtual machine and bare metal DB systems (not on Exadata DB systems).
    • cliadm - CLI admin command:
    Updates your dbcli tool to get the new commands that support new features added to the cloud services.
    • dbaascli
    dbaasapi is deprecated and replaced by dbaascli. 

    This tool is the command line tool for life-cycle and administration operations for Exadata databases. It must be run on the local Exadata compute node. Many dbaascli commands can be run by the Oracle user. However, some commands require root administrator privileges.

    Summary of dbaascli tool tasks:
    1. Scale up/down OCPUs in disconnected mode (ExaCC only)
    2. Download and list available software images
    3. Managing pluggable databases (PDBs) – use SQL alternatively
    4. Rotating the TDE master encryption key – use SQL alternatively
    5. Starting and stopping the Oracle Net listener – use lsnrctl alternatively
    6. Managing databases created via dbaasapi (dbaascli) on a subset of the nodes (not recommended)
    • bkup_api
    bkup_api was replaced by dbaascli.

    • exacli
    exacli is the command line for getting storage cell metrics and diagnostics information on Exadata DB systems. 
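
    As a quick illustration of the OCI CLI, here is a minimal sketch of listing the DB systems in a compartment (this assumes the CLI is already configured; the compartment OCID is a placeholder):

    oci db system list --compartment-id ocid1.compartment.oc1..example --output table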


    In this article, I will illustrate how you can patch a VM DB system.

    Before running any command, you should upgrade cliadm to the latest patch version. Upgrading it ensures your dbcli supports the latest features added to the cloud services.

    Note: you do not need to worry about staging the patches; Oracle stages them in all the environments.

    Upgrade cliadm

    Once you execute cliadm update-dbcli, the status can be checked by running the dbcli list-jobs command.

    
    
    [root@dbsdpl21 ~]# cliadm update-dbcli
    
    Job details
    ----------------------------------------------------------------
                         ID:  79c1ed97-3b3f-4d35-94e6-b5cba5565ecd
                Description:  DcsCli patching
                     Status:  Created
                    Created:  April 14, 2023 2:54:32 PM UTC
                    Message:  Dcs cli will be updated
    
    Task Name                                                                Start Time                          End Time                            Status
    ------------------------------------------------------------------------ ----------------------------------- ----------------------------------- ----------
    
    

    Validate cliadm upgrade status

    
    
    [root@dbsdpl21 ~]# dbcli list-jobs
    79c1ed97-3b3f-4d35-94e6-b5cba5565ecd     DcsCli patching    Friday, April 14, 2023, 14:54:32 UTC Success
    
    

    Describe the component

    Execute the dbcli describe-component command and check the latest patch available to apply.

    
    
    [root@dbsdpl21 ~]# dbcli describe-component
    System Version
    ---------------
    21.2.3.0.0
    
    Component                                Installed Version    Available Version
    ---------------------------------------- -------------------- --------------------
    GI                                        19.11.0.0.0           19.18.0.0
    DB                                        19.11.0.0.0           19.18.0.0
    
    

    Describe latest patches

    
    
    [root@dbsdpl21 ~]# dbcli describe-latestpatch
    
    componentType   availableVersion
    --------------- --------------------
    gi              12.2.0.1.230117
    gi              12.1.0.2.230117
    gi              18.16.0.0.0
    gi              19.18.0.0.0
    gi              21.9.0.0.0
    db              11.2.0.4.230117
    db              12.2.0.1.230117
    db              12.1.0.2.230117
    db              18.16.0.0.0
    db              19.18.0.0.0
    db              21.9.0.0.0
    [root@dbsdpl21 ~]#
    
    

    Upgrade database

    The upgrade command needs the database home ID, which can be obtained by running dbcli list-dbhomes. Make sure to clean up the database home folder and gain space before the upgrade (a quick space check sketch follows the note below).

    Note: I would recommend staying one patch level down (N-1), so here we apply the 19.17.0.0.0 patch for the database even though 19.18 is available.
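
    A quick space check before patching (a generic OS-level sketch, not a dbcli command; adjust the paths to your layout):

    [root@dbsdpl21 ~]# df -h /u01
    [root@dbsdpl21 ~]# du -sh /u01/app/oracle/product/*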

    
    
    [root@dbsdpl21 ~]# dbcli list-dbhomes
    
    ID                                       Name                 DB Version                               Home Location                                 Status
    ---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
    42935588-54f1-4f08-98cb-5f95d95cb881     OraDB19000_home1     19.11.0.0.0                              /u01/app/oracle/product/19.0.0.0/dbhome_1     Configured
    
    [root@dbsdpl21 ~]#
    
    
    
    
    [root@dbsdpl21 ~]# dbcli update-dbhome -i 42935588-54f1-4f08-98cb-5f95d95cb881 -v 19.17.0.0.0
    {
      "jobId" : "31904f6a-f9b9-498b-8e5f-8ecfab1d6114",
      "status" : "Created",
      "message" : null,
      "errorCode" : "",
      "reports" : [ ],
      "createTimestamp" : "April 14, 2023 15:01:46 PM UTC",
      "resourceList" : [ ],
      "description" : "Database inplace image patching with dbhomeId : 42935588-54f1-4f08-98cb-5f95d95cb881",
      "updatedTime" : "April 14, 2023 15:01:52 PM UTC",
      "percentageProgress" : "0%",
      "cause" : null,
      "action" : null
    }
    [root@dbsdpl21 ~]
    
    

    Upgrade: Status Check

    The upgrade consists of four steps, which you can track with dbcli describe-job (a polling sketch follows the list).


    • Precheck DBHome patching tasks
    • DBHome patching
    • Post DBHome patching tasks
    • Install object store swift module
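
    A simple way to poll the job until it finishes (a sketch; substitute the job ID returned by update-dbhome):

    [root@dbsdpl21 ~]# watch -n 60 'dbcli describe-job -i 31904f6a-f9b9-498b-8e5f-8ecfab1d6114'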

    
    
    [root@dbsdpl21 log]# dbcli describe-job -i 31904f6a-f9b9-498b-8e5f-8ecfab1d6114
    
    Job details
    ----------------------------------------------------------------
                         ID:  31904f6a-f9b9-498b-8e5f-8ecfab1d6114
                Description:  Database inplace image patching with dbhomeId : 42935588-54f1-4f08-98cb-5f95d95cb881
                     Status:  Running
                    Created:  April 14, 2023 3:01:46 PM UTC
                   Progress:  99%
                    Message:
                 Error Code:
    
    Task Name                                                                Start Time                          End Time                            Status
    ------------------------------------------------------------------------ ----------------------------------- ----------------------------------- ----------
    Precheck DBHome patching tasks                                           April 14, 2023 3:02:43 PM UTC       April 14, 2023 3:09:53 PM UTC       Success
    DBHome patching                                                          April 14, 2023 3:09:53 PM UTC       April 14, 2023 3:53:46 PM UTC       Success
    Post DBHome patching tasks                                               April 14, 2023 3:53:46 PM UTC       April 14, 2023 3:53:46 PM UTC       Running
    
    [root@dbsdpl21 log]#
    
    

    DB Upgrade: Validate

    The upgrade is complete when the job status shows Success and the progress reaches 100%:

    
    
    [root@dbsdpl21 log]# dbcli describe-job -i 31904f6a-f9b9-498b-8e5f-8ecfab1d6114
    
    Job details
    ----------------------------------------------------------------
                         ID:  31904f6a-f9b9-498b-8e5f-8ecfab1d6114
                Description:  Database inplace image patching with dbhomeId : 42935588-54f1-4f08-98cb-5f95d95cb881
                     Status:  Success
                    Created:  April 14, 2023 3:01:46 PM UTC
                   Progress:  100%
                    Message:
                 Error Code:
    
    Task Name                                                                Start Time                          End Time                            Status
    ------------------------------------------------------------------------ ----------------------------------- ----------------------------------- ----------
    Precheck DBHome patching tasks                                           April 14, 2023 3:02:43 PM UTC       April 14, 2023 3:09:53 PM UTC       Success
    DBHome patching                                                          April 14, 2023 3:09:53 PM UTC       April 14, 2023 3:53:46 PM UTC       Success
    Post DBHome patching tasks                                               April 14, 2023 3:53:46 PM UTC       April 14, 2023 4:19:32 PM UTC       Success
    Install object store swift module                                        April 14, 2023 4:18:54 PM UTC       April 14, 2023 4:19:06 PM UTC       Success
    
    [root@dbsdpl21 log]#
    
    

    Conclusion

    In summary, OCI is maturing day by day. These orchestration tools ease the life of Oracle DBAs who spend time on administration tasks like patching.

    If you want to learn about server patching, refer to my article Oracle Cloud Infrastructure (OCI) - Part 2: OCI - VM Server and DB Patching.
     

    OVM - Remove stale cluster entries

     






    Intro 

    It has been ages since Oracle released its own hypervisor, OVM. OVM is based on paravirtualization and uses a Xen-based hypervisor. The latest available release is 3.4.6.3. Oracle announced extended support for OVM, which started in March 2021 and will end on March 31, 2024.

    If you need more information about OVM support, read the article below: https://blogs.oracle.com/virtualization/post/announcing-oracle-vm-3-extended-support/

    This is the end of the OVM tree; after this there will be no further OVM releases. The technology going forward is Oracle KVM, which is much more stable than OVM and gives you more flexibility in the virtualization environment. If you are still planning to stay on-prem, I would say this is the right time to plan your journey to KVM.

    In this article, I will cover the issue we faced recently in the OVM environment.

     

    Overview of the issue

        
    We faced a new issue with the OVM cluster environment, caused by a sudden data center power outage. Once everything was back online we were not able to start the OVM hypervisor, so we had to perform a complete reinstallation of the node.

    When I tried to add the node back to the cluster, we faced an issue with mounting the repositories. The next option was to remove the node from the cluster again; this action was performed via the GUI.

    I did a validation and realized that cluster entries were still present on node02. We found stale entries on both the master node and node02.

    Please refer to the My Oracle Support note that covers stale entries:
    OVM - How To Remove Stale Entry of the Oracle VM Server which was Removed from The Pool (Doc ID 2418834.1)

    How to identify stale entries

    First, validate the o2cb status. This is the cluster service, which holds all the information about the cluster. Note the node list in the output below ("Nodes in O2CB cluster: 0 1").

    [root@ovm-node02 ~]# service o2cb status
    Driver for "configfs": Loaded
    Filesystem "configfs": Mounted
    Stack glue driver: Loaded
    Stack plugin "o2cb": Loaded
    Driver for "ocfs2_dlmfs": Loaded
    Filesystem "ocfs2_dlmfs": Mounted
    Checking O2CB cluster "f6f6b47b38e288e0": Online
      Heartbeat dead threshold: 61
      Network idle timeout: 60000
      Network keepalive delay: 2000
      Network reconnect delay: 2000
      Heartbeat mode: Global
    Checking O2CB heartbeat: Active
      0004FB0000050000B705B4397850AAD6 /dev/dm-2
    Nodes in O2CB cluster: 0 1
    Debug file system at /sys/kernel/debug: mounted
    
    
    Now let's check the entries from node02. If the node had been correctly removed from the cluster you should see only one entry, but here there are two.
    
    [root@ovm-node02 ovm-node02]# ls -lrth /sys/kernel/config/cluster/f6f6b47b38e288e0/node/
    total 0
    drwxr-xr-x 2 root root 0 Jun 23 09:28 ovm-node02
    drwxr-xr-x 2 root root 0 Jun 23 09:33 ovm-node01
    [root@ovm-node02 ovm-node02]#
    
    

    The next step is to validate the database entries on the master node (ovm-node01). The dump shows two entries in pool_member_ip_list.

    
    
    [root@ovm-node01]# ovs-agent-db dump_db server
    {'cluster_state': 'DLM_Ready',
     'clustered': True,
     'fs_stat_uuid_list': ['0004fb000005000015c1fb14ef761f40',
                           '0004fb000005000079ae03177c3edc7e',
                           '0004fb000005000065985109f8834e8b'],
     'is_master': True,
     'manager_event_url': 'https://192.168.85.152:7002/ovm/core/wsapi/rest/internal/Server/08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:ef:de:6a/Event',
     'manager_ip': '192.168.85.152',
     'manager_statistic_url': 'https://192.168.85.152:7002/ovm/core/wsapi/rest/internal/Server/08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:ef:de:6a/Statistic',
     'manager_uuid': '0004fb0000010000c8ecbd219dc6b1ee',
     'node_number': 0,
     'pool_alias': 'EclipsysOVM',
     'pool_master_ip': '192.168.85.177',
     'pool_member_ip_list': ['192.168.85.177', '192.168.85.178'],
     'pool_uuid': '0004fb0000020000f6f6b47b38e288e0',
     'poolfs_nfsbase_uuid': '',
     'poolfs_target': '/dev/mapper/36861a6fddaa0481ec0dd3584514a8d62',
     'poolfs_type': 'lun',
     'poolfs_uuid': '0004fb0000050000b705b4397850aad6',
     'registered_hostname': 'ovm-node01',
     'registered_ip': '192.168.85.177',
     'roles': set(['utility', 'xen'])}
    [root@ovm-node01]#
    
    

    Remove node from cluster (command line)

    Now we can remove ovm-node02 from the cluster. Run the command from the master node (ovm-node01).

    
    [root@ovm-node01]# o2cb remove-node f6f6b47b38e288e0 ovm-node02
    

    Validate node entries

    After removing node02, we can see only one entry in the cluster configuration.

    
    [root@ovm-node01]# ls /sys/kernel/config/cluster/f6f6b47b38e288e0/node/
    ovm-node01
    [root@ovm-node01]#
    

    Validate using O2CB

    First, restart the ovs-agent on both nodes and validate the o2cb cluster status from node01.

    
    [root@ovm-node01]# service ovs-agent restart
    Stopping Oracle VM Agent:                                  [  OK  ]
    Starting Oracle VM Agent:                                  [  OK  ]
    
    [root@ovm-node01 ~]# service ovs-agent status
    log server (pid 32442) is running...
    notificationserver server (pid 32458) is running...
    remaster server (pid 32464) is running...
    monitor server (pid 32466) is running...
    ha server (pid 32468) is running...
    stats server (pid 32470) is running...
    xmlrpc server (pid 32474) is running...
    fsstats server (pid 32476) is running...
    apparentsize server (pid 32477) is running...
    [root@ovm-node01 ~]#
    
    

    Also, I would recommend restarting node02 after the node removal. Once the node is back online, validate /etc/ocfs2/cluster.conf.

    
     
     [root@ovm-node01 ~]# cat /etc/ocfs2/cluster.conf
    cluster:
            heartbeat_mode = global
            node_count = 1
            name = f6f6b47b38e288e0
    
    node:
            number = 0
            cluster = f6f6b47b38e288e0
            ip_port = 7777
            ip_address = 10.110.110.101
            name = ovm-node01
    
    heartbeat:
            cluster = f6f6b47b38e288e0
            region = 0004FB0000050000B705B4397850AAD6
     
     

    Note: an ovs-agent restart won't have any impact on running VMs.

    
      
      [root@ovm-node01]# ovs-agent-db dump_db server
    {'cluster_state': 'DLM_Ready',
     'clustered': True,
     'fs_stat_uuid_list': ['0004fb000005000015c1fb14ef761f40',
                           '0004fb000005000079ae03177c3edc7e',
                           '0004fb000005000065985109f8834e8b'],
     'is_master': True,
     'manager_event_url': 'https://192.168.85.152:7002/ovm/core/wsapi/rest/internal/Server/08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:ef:de:6a/Event',
     'manager_ip': '192.168.85.152',
     'manager_statistic_url': 'https://192.168.85.152:7002/ovm/core/wsapi/rest/internal/Server/08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:ef:de:6a/Statistic',
     'manager_uuid': '0004fb0000010000c8ecbd219dc6b1ee',
     'node_number': 0,
     'pool_alias': 'EclipsysOVM',
     'pool_master_ip': '192.168.85.177',
     'pool_member_ip_list': ['192.168.85.177'],
     'pool_uuid': '0004fb0000020000f6f6b47b38e288e0',
     'poolfs_nfsbase_uuid': '',
     'poolfs_target': '/dev/mapper/36861a6fddaa0481ec0dd3584514a8d62',
     'poolfs_type': 'lun',
     'poolfs_uuid': '0004fb0000050000b705b4397850aad6',
     'registered_hostname': 'ovm-node01',
     'registered_ip': '192.168.85.177',
     'roles': set(['utility', 'xen'])}
    [root@ovm-node01]#
      
      

    Conclusion

    There can be situations where the GUI will not remove the entries from the OVM hypervisor. Always validate the OVM database entries before retrying the node addition to the cluster, and make sure the cluster-shared repositories mount automatically (a quick check is shown below).
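
    A quick way to confirm the cluster-shared OCFS2 repositories are mounted after the node rejoins (a generic sketch):

    [root@ovm-node02 ~]# mount -t ocfs2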

     

    Monday, June 26, 2023

    OLVM : Create FC shared data domain

     





    Intro 

    Nowadays all business markets are highly competitive and organizations want to gain an advantage over others. When the business is critical, databases and application servers need to be up and running 24x7. Virtualization is a technology that helps minimize IT infrastructure cost, and high availability for VMs can be achieved easily with virtualization clustering. Virtualization also eases the cloud migration journey.

    Virtualization can be deployed in two ways.

    • Converged architecture.
    • Hyper-converged architecture.
    The links below are useful for getting a clear understanding of both deployment types.

    Converged deployment : https://www.bmc.com/blogs/converged-infrastructure-vs-hyper-converged-infrastructure/#:~:text=In%20a%20non%2Dconverged%20architecture,Network%2Dattached%20storage%20(NAS)

    Hyperconverged: https://www.redhat.com/en/topics/hyperconverged-infrastructure/what-is-hyperconverged-infrastructure.

    What is Converged architecture? 

    In a non-converged architecture, physical servers operate a virtualization hypervisor that manages each of the virtual machines (VMs) that have been created on the server. For data storage, there are typically three options:
    • A storage-area network (SAN)
    • Direct-attached storage (DAS)
    • Network-attached storage (NAS)
    With converged architecture, storage is attached directly to the physical server. Regular converged architecture can be just as fast (if not faster) than hyper-converged alternatives. In this setup, flash storage is almost always used. (The need for expensive SAN and NAS, in particular, is eliminated.)

    What is Hyper-converged architecture? 

    Hyperconvergence is an approach to IT infrastructure that consolidates computing, storage, and networking resources into a unified system. A hyper-converged infrastructure (HCI) consists of compute resources (virtual machines) managed with a hypervisor, software-defined storage, and software-defined networking. Hyperconvergence of virtualized resources allows you to manage your resources from a single, unified interface.

    With software-defined computing and storage integrated together, you can reduce data center complexity and footprint and support more modern workloads with flexible architectures on industry-standard hardware.

    There are a few prerequisites you need to understand before creating a cluster. 

    • Are you going with converged or hyper-converged architecture?
    • How many nodes do you need in the cluster? (The node count affects fencing and shared storage.)
    • What is the virtualization management network bandwidth?
    • What is the shared storage type best suited for the environment?

    For optimal virtualization architecture, we need at least 3 nodes.  Having 3 nodes will handle the fencing more efficiently than 2 nodes. 

    It all depends on the capex; you can start with 2 nodes and plan to move to 3 nodes within 2 years or so. There is no restriction that prevents you from creating an OLVM cluster with 2 nodes, but some of the fencing features will not work as expected.

    Note: Make sure not to implement GlusterFS with 2 nodes. For a GlusterFS hyper-converged storage architecture you need at least 3 nodes.

    In this article, I will cover how to implement a shared Fibre Channel (FC) storage domain between two nodes.

    As per Figure 1, the FC LUN should be mapped to both KVM nodes. In an OLVM shared cluster, the SPM (Storage Pool Manager) role handles mounting this shared storage on both nodes.

    Note: Make sure to map the same FC LUN to both nodes (a quick way to verify this is shown below).
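
    To confirm both nodes see the same LUN, you can compare the multipath WWIDs (a sketch; the WWID below is the one used in this article):

    [root@KVM01 ~]# multipath -ll 3624a93701561d6718da94a200001104c
    [root@KVM02 ~]# multipath -ll 3624a93701561d6718da94a200001104c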



                                          Figure 1: How to map the Fiber storage domain to both nodes.


    Mapping the FC LUN to node01.

    
    
    [root@KVM01 ~]# lsblk | grep 3624a93701561d6718da94a200001104c
    └─3624a93701561d6718da94a200001104c     252:2    0  200G  0 mpath
    └─3624a93701561d6718da94a200001104c     252:2    0  200G  0 mpath
    └─3624a93701561d6718da94a200001104c     252:2    0  200G  0 mpath
    └─3624a93701561d6718da94a200001104c     252:2    0  200G  0 mpath
    [root@KVM01 ~]#
    
    


    Mapping the FC LUN to node02.

    
    
    [root@KVM02 ~]# lsblk | grep 3624a93701561d6718da94a200001104c
    └─3624a93701561d6718da94a200001104c     252:2    0  200G  0 mpath
    └─3624a93701561d6718da94a200001104c     252:2    0  200G  0 mpath
    └─3624a93701561d6718da94a200001104c     252:2    0  200G  0 mpath
    └─3624a93701561d6718da94a200001104c     252:2    0  200G  0 mpath
    [root@KVM02 ~]#
    
    
    


    Adding FC storage from OLVM

    Figure 2 illustrates how to create a Fibre Channel data domain. Always double-check the LUN ID before selecting the disk. It takes around 10 minutes to create the LVM2 structures and the file layout.





                                       Figure 2:  Add Fiber LUN as an FC-Data-Domain.


    If the domain is created successfully, all 3 tasks will be completed with a green taskbar.


                                         Figure 3: OLVM task detail tab.


    Once you have successfully created the FC domain, its status will be displayed in green in the storage section.



    Conclusion

    In summary, two nodes will not give you the expected fencing results. When designing an OLVM architecture with Gluster storage, optimal results can be achieved with a 3-node architecture.

    For two-node architectures, we can achieve OLVM environment stability by implementing Fibre Channel storage domains. If you are still planning to use Gluster storage, the KVM hosts must have 10G network cards, and you need to implement an arbiter to avoid split-brain.

    Moving to FC storage completely removes storage traffic from the management network. The OLVM SPM (Storage Pool Manager) handles the mounting on both nodes, and FC also gives you enhanced storage performance.


    Friday, June 9, 2023

    ODA - Virtualized - DB upgrade (oakcli)

     








    A virtualized ODA eases the database upgrade process using the oakcli orchestration utility. The tool became more stable after 19.8, and all the ODA components can be orchestrated via oakcli. However, 19.13 will be the last release for the virtualized platform; after that, Oracle is discontinuing OVM and moving to KVM-based virtualization. This is going to be a revolutionary change for ODA.

    In this article, I will cover the steps to perform a database upgrade via the oakcli tool.

    There are prerequisite steps that need to be performed before the upgrade:

    • Gain space in /u01 for the upgrade.
    • Create a new database home. I have covered database home creation in a previous blog post: https://chanaka-dbhelp.blogspot.com/2022/07/oda-x5-db-home-creation-error.html
    • Execute the pre-upgrade check.
    • Fix the pre-upgrade issues.
    • Back up the database before the upgrade.
    • Upgrade the database.


    1. We can gain space by removing the old repository files; this can be achieved by executing the oakcli manage cleanrepo command.

    
    oakcli manage cleanrepo --ver 19.8.0.0.0
    

    Check the database homes that are configured. As per this example we have 3 database homes: two 19c and one 12c.

    
    
    [root@ecl-odabase-0 app]# oakcli show dbhomes
    Oracle Home Name      Oracle Home version Home Location                              Home Edition
    ----------------      ------------------- ------------                               ------------
    OraDb12102_home2      12.1.0.2.191015     /u01/app/oracle/product/12.1.0.2/dbhome_2  Enterprise
    OraDb19000_home1      19.13.0.0.211019    /u01/app/oracle/product/19.0.0.0/dbhome_1  Enterprise
    OraDb19000_home2      19.13.0.0.211019    /u01/app/oracle/product/19.0.0.0/dbhome_2  Enterprise
    [root@ecl-odabase-0 app]#
    
    


    2.  Download the pre-upgrade utility from Oracle support.

    How to Download and Run Oracle's Database Pre-Upgrade Utility (Doc ID 884522.1)


    Note: We noticed that the pre-upgrade checks are also handled by the oakcli utility. But as a best practice, I would recommend fixing the pre-upgrade suggestions before running the upgrade.


    Please find a sample pre-upgrade execution output below:

    
    
    [oracle@ecl-odabase-0 TESTDB]$ /u01/app/oracle/product/12.1.0.2/dbhome_2/jdk/bin/java -jar /home/oracle/PRE_UPGRADE/TESTDB/preupgrade.jar FILE DIR /home/oracle/PRE_UPGRADE/TESTDB/
    ==================
    PREUPGRADE SUMMARY
    ==================
      /home/oracle/PRE_UPGRADE/TESTDB/preupgrade.log
      /home/oracle/PRE_UPGRADE/TESTDB/preupgrade_fixups.sql
      /home/oracle/PRE_UPGRADE/TESTDB/postupgrade_fixups.sql
    
    Execute fixup scripts as indicated below:
    
    Before upgrade:
    
    Log into the database and execute the preupgrade fixups
    @/home/oracle/PRE_UPGRADE/TESTDB/preupgrade_fixups.sql
    
    After the upgrade:
    
    Log into the database and execute the postupgrade fixups
    @/home/oracle/PRE_UPGRADE/TESTDB/postupgrade_fixups.sql
    
    Preupgrade complete: 2023-03-20T13:55:32
    [oracle@ecl-odabase-0 TESTDB]$
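
    Before the upgrade, run the pre-upgrade fixups as the output indicates (a sketch using the paths from the summary above; ORACLE_SID is the local instance name):

    [oracle@ecl-odabase-0 TESTDB]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
    [oracle@ecl-odabase-0 TESTDB]$ export ORACLE_SID=testdb1
    [oracle@ecl-odabase-0 TESTDB]$ $ORACLE_HOME/bin/sqlplus / as sysdba
    SQL> @/home/oracle/PRE_UPGRADE/TESTDB/preupgrade_fixups.sql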
    
    
    

    Upgrade Database 

    As per this example, we are upgrading testdb to 19c, so the command needs to point to the 19c home. The upgrade takes close to an hour to complete all the tasks, so make sure to start a screen session before executing the command (a sketch follows).
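
    Starting a screen session first keeps the upgrade running even if the SSH connection drops (a minimal sketch; the session name is arbitrary):

    [root@ecl-odabase-0 ~]# screen -S db_upgrade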

    
    
    [root@ecl-odabase-0 incident]# oakcli upgrade database -db testdb -to OraDb19000_home1
    INFO: 2023-03-20 14:31:04: Look at the log file '/opt/oracle/oak/log/ecl-odabase-0/tools/19.13.0.0.0/dbupgrade_96057.log' for more details
    
    Please enter the 'SYS'  password :
    Please re-enter the 'SYS' password:
    INFO: 2023-03-20 14:31:52: Upgrading the database testdb. It will take few minutes. Please wait...
    ...
    
    
    
    

    After upgrade

    This is the oakcli output after the upgrade:

    
    
    [root@ecl-odabase-0 incident]# screen -ls
    There is a screen on:
            96799.pts-1.ecl-odabase-0       (Attached)
    1 Socket in /var/run/screen/S-root.
    
    [root@ecl-odabase-0 incident]# oakcli upgrade database -db testdb -to OraDb19000_home1
    INFO: 2023-03-20 14:31:04: Look at the log file '/opt/oracle/oak/log/ecl-odabase-0/tools/19.13.0.0.0/dbupgrade_96057.log' for more details
    
    Please enter the 'SYS'  password :
    Please re-enter the 'SYS' password:
    INFO: 2023-03-20 14:31:52: Upgrading the database testdb. It will take few minutes. Please wait...
    ...
    
    
    ...
    SUCCESS: 2023-03-20 15:04:10: Successfully upgraded the database testdb
    
    

    I have shared the complete log below to give a proper understanding; the oakcli utility handles all the tasks: pre-patching, patching, and post-patching.


    Complete log

    
    
    [root@ecl-odabase-0 ~]# cat /opt/oracle/oak/log/ecl-odabase-0/tools/19.13.0.0.0/dbupgrade_96057.log
    2023-03-20 14:31:04: INFO:  Look at the log file '/opt/oracle/oak/log/ecl-odabase-0/tools/19.13.0.0.0/dbupgrade_96057.log' for more details
    2023-03-20 14:31:04: Command received is : oakcli upgrade database -db testdb -to OraDb19000_home1
    2023-03-20 14:31:04: This is V4 machine
    2023-03-20 14:31:05: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:05: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:05: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:05: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:05: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:05: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:05: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:05: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:05: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:05: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:05: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:05: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:06: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:06: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:06: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:06: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:06: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:06: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:18: Checking SSH equivalence for root.
    2023-03-20 14:31:18: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster
    2023-03-20 14:31:18: Command output:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online ,
    >End Command output
    2023-03-20 14:31:18: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:18: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:18:   Invoking "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 ecl-odabase-0 /bin/true" as user "root"
    2023-03-20 14:31:18: Executing cmd: /bin/su  root -c "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 ecl-odabase-0 /bin/true"
    2023-03-20 14:31:18: SSH equivalence for root to ecl-odabase-0: success
    2023-03-20 14:31:18:   Invoking "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 ecl-odabase-1 /bin/true" as user "root"
    2023-03-20 14:31:18: Executing cmd: /bin/su  root -c "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 ecl-odabase-1 /bin/true"
    2023-03-20 14:31:18: SSH equivalence for root to ecl-odabase-1: success
    2023-03-20 14:31:18:   Invoking "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 192.168.16.27 /bin/true" as user "root"
    2023-03-20 14:31:18: Executing cmd: /bin/su  root -c "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 192.168.16.27 /bin/true"
    2023-03-20 14:31:19: SSH equivalence for root to 192.168.16.27: success
    2023-03-20 14:31:19:   Invoking "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 192.168.16.28 /bin/true" as user "root"
    2023-03-20 14:31:19: Executing cmd: /bin/su  root -c "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 192.168.16.28 /bin/true"
    2023-03-20 14:31:19: SSH equivalence for root to 192.168.16.28: success
    2023-03-20 14:31:19: Checking SSH equivalence for oracle.
    2023-03-20 14:31:19: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster
    2023-03-20 14:31:19: Command output:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online ,
    >End Command output
    2023-03-20 14:31:19: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:19: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:19:   Invoking "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 ecl-odabase-0 /bin/true" as user "oracle"
    2023-03-20 14:31:19: Executing cmd: /bin/su  oracle -c "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 ecl-odabase-0 /bin/true"
    2023-03-20 14:31:19: SSH equivalence for oracle to ecl-odabase-0: success
    2023-03-20 14:31:19:   Invoking "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 ecl-odabase-1 /bin/true" as user "oracle"
    2023-03-20 14:31:19: Executing cmd: /bin/su  oracle -c "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 ecl-odabase-1 /bin/true"
    2023-03-20 14:31:19: SSH equivalence for oracle to ecl-odabase-1: success
    2023-03-20 14:31:19:   Invoking "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 192.168.16.27 /bin/true" as user "oracle"
    2023-03-20 14:31:19: Executing cmd: /bin/su  oracle -c "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 192.168.16.27 /bin/true"
    2023-03-20 14:31:19: SSH equivalence for oracle to 192.168.16.27: success
    2023-03-20 14:31:19:   Invoking "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 192.168.16.28 /bin/true" as user "oracle"
    2023-03-20 14:31:19: Executing cmd: /bin/su  oracle -c "/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 192.168.16.28 /bin/true"
    2023-03-20 14:31:20: SSH equivalence for oracle to 192.168.16.28: success
    2023-03-20 14:31:32: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:32: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:32: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:32: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:32: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:32: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:33: Executing cmd: export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2; /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/srvctl status database -d testdb
    2023-03-20 14:31:33: Command output:
    >  Instance testdb1 is running on node ecl-odabase-0
    >  Instance testdb2 is running on node ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:33: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:33: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:33: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:33: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:33: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:33: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:34: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:34: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:34: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:34: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:34: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:34: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:34: run_as_user2: Running /bin/su oracle -c ' export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2;/u01/app/oracle/product/12.1.0.2/dbhome_2/bin/srvctl config database -d testdb '
    2023-03-20 14:31:35: Removing file /tmp/file2c3Y6L
    2023-03-20 14:31:35:
    2023-03-20 14:31:35: Successfully removed file: /tmp/file2c3Y6L
    2023-03-20 14:31:35: /bin/su successfully executed
    
    2023-03-20 14:31:35: Getting the JOB_Queue_Processes for the database : testdb
    2023-03-20 14:31:35: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:35: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:35: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:35: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:35: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:35: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:35: Executing cmd: export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2; /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/srvctl config database -db testdb
    2023-03-20 14:31:36: Command output:
    >  Database unique name: testdb
    >  Database name: testdb
    >  Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_2
    >  Oracle user: oracle
    >  Spfile: +DATA/TESTDB/PARAMETERFILE/spfile.296.1131972439
    >  Password file: +DATA/TESTDB/PASSWORD/pwdtestdb.269.1131971689
    >  Domain:
    >  Start options: open
    >  Stop options: immediate
    >  Database role: PRIMARY
    >  Management policy: AUTOMATIC
    >  Server pools:
    >  Disk Groups: DATA,REDO
    >  Mount point paths:
    >  Services:
    >  Type: RAC
    >  Start concurrency:
    >  Stop concurrency:
    >  OSDBA group: dba
    >  OSOPER group: racoper
    >  Database instances: testdb1,testdb2
    >  Configured nodes: ecl-odabase-0,ecl-odabase-1
    >  Database is administrator managed ,
    >End Command output
    2023-03-20 14:31:36: db_domain for the database testdb is
    2023-03-20 14:31:36: Executing cmd: /u01/app/19.0.0.0/grid/bin/srvctl config scan
    2023-03-20 14:31:37: Command output:
    >  SCAN name: ecl-oda-scan, Network: 1
    >  Subnet IPv4: 10.11.30.0/255.255.255.0/eth0, static
    >  Subnet IPv6:
    >  SCAN 1 IPv4 VIP: 10.11.30.48
    >  SCAN VIP is enabled.
    >  SCAN 2 IPv4 VIP: 10.11.30.49
    >  SCAN VIP is enabled.
    >  SCAN 3 IPv4 VIP: 10.11.30.50
    >  SCAN VIP is enabled. ,
    >End Command output
    2023-03-20 14:31:37: Executing cmd: /u01/app/19.0.0.0/grid/bin/srvctl config scan_listener -i 1 -S 1
    2023-03-20 14:31:37: Command output:
    >  #@=result[0]: res_name={ora.LISTENER_SCAN1.lsnr} netnum={1} lsnr_name={LISTENER_SCAN1} port={1521,1531} ports={null} enabled={true} enabled_nodes={} disabled_nodes={}  ,
    >End Command output
    2023-03-20 14:31:37: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:38: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:38: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:38: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:38: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:38: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:38: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:38: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:38: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:38: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:38: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:38: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:38: Will be running following sql statements as user: oracle:
    
                      export oracle_home=/u01/app/oracle/product/12.1.0.2/dbhome_2,
                      export sid= testdb1,
                      /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/sqlplus -L sys/password@ecl-oda-scan/testdb as sysdba
    
                      set heading off
    set echo off
    set lines 200
    show parameter job_queue_processes
    
    2023-03-20 14:31:40: Removing file /tmp/fileKzXXW6
    2023-03-20 14:31:40:
    2023-03-20 14:31:40: Successfully removed file: /tmp/fileKzXXW6
    2023-03-20 14:31:40: /bin/su successfully executed
    
    2023-03-20 14:31:40: Output is :
     job_queue_processes                  integer     1000
    
    2023-03-20 14:31:40: JOB_Queue_Processes for the database testdb is 1000
    2023-03-20 14:31:40: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:40: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:40: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:40: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:40: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:40: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:40: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:40: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:40: Executing cmd: /u01/app/19.0.0.0/grid/bin/srvctl config scan
    2023-03-20 14:31:41: Command output:
    >  SCAN name: ecl-oda-scan, Network: 1
    >  Subnet IPv4: 10.11.30.0/255.255.255.0/eth0, static
    >  Subnet IPv6:
    >  SCAN 1 IPv4 VIP: 10.11.30.48
    >  SCAN VIP is enabled.
    >  SCAN 2 IPv4 VIP: 10.11.30.49
    >  SCAN VIP is enabled.
    >  SCAN 3 IPv4 VIP: 10.11.30.50
    >  SCAN VIP is enabled. ,
    >End Command output
    2023-03-20 14:31:41: Executing cmd: /u01/app/19.0.0.0/grid/bin/srvctl config scan_listener -i 1
    2023-03-20 14:31:41: Command output:
    >  SCAN Listeners for network 1:
    >  Registration invited nodes:
    >  Registration invited subnets:
    >  Endpoints: TCP:1521,1531
    >  SCAN Listener LISTENER_SCAN1 exists
    >  SCAN Listener is enabled. ,
    >End Command output
    2023-03-20 14:31:41: Executing cmd: /u01/app/19.0.0.0/grid/bin/cemutlo -n
    2023-03-20 14:31:41: Command output:
    >  ecl-oda-lab-c ,
    >End Command output
    2023-03-20 14:31:41: Executing cmd: hostname -d
    2023-03-20 14:31:41: Command output:
    >  newco.local ,
    >End Command output
    2023-03-20 14:31:41: Executing cmd: /u01/app/19.0.0.0/grid/bin/srvctl config nodeapps -a
    2023-03-20 14:31:42: Command output:
    >  Network 1 exists
    >  Subnet IPv4: 10.11.30.0/255.255.255.0/eth0, static
    >  Subnet IPv6:
    >  Ping Targets:
    >  Network is enabled
    >  Network is individually enabled on nodes:
    >  Network is individually disabled on nodes:
    >  VIP exists: network number 1, hosting node ecl-odabase-0
    >  VIP Name: ecl-oda-0-vip.newco.local
    >  VIP IPv4 Address: 10.11.30.157
    >  VIP IPv6 Address:
    >  VIP is enabled.
    >  VIP is individually enabled on nodes:
    >  VIP is individually disabled on nodes:
    >  VIP exists: network number 1, hosting node ecl-odabase-1
    >  VIP Name: ecl-oda-1-vip.newco.local
    >  VIP IPv4 Address: 10.11.30.158
    >  VIP IPv6 Address:
    >  VIP is enabled.
    >  VIP is individually enabled on nodes:
    >  VIP is individually disabled on nodes:  ,
    >End Command output
    2023-03-20 14:31:43: INFO : Logging all actions in the file /opt/oracle/oak/onecmd/tmp/ecl-odabase-0-20230320143143.log and traces in the file /opt/oracle/oak/onecmd/tmp/ecl-odabase-0-20230320143143.trc
    2023-03-20 14:31:43: INFO : Loading the configuration file /opt/oracle/oak/onecmd/upgrade_databse.params...
    2023-03-20 14:31:45: INFO : Creating the node list files...
    2023-03-20 14:31:45: Executing cmd: ssh -o StrictHostKeyChecking=no root@ecl-odabase-0 "/bin/df -k /u01 /tmp"
    2023-03-20 14:31:45: Command output:
    >  Filesystem     1K-blocks     Used Available Use% Mounted on
    >  /dev/xvdb1      95988492 78908600  12197204  87% /u01
    >  /dev/xvda2      57060636 43633540  10521856  81% / ,
    >End Command output
    2023-03-20 14:31:45: Free space on /u01 on ecl-odabase-0 is 12197204 1K-blocks
    2023-03-20 14:31:45: Free space on / on ecl-odabase-0 is 10521856 1K-blocks
    2023-03-20 14:31:45: Executing cmd: ssh -o StrictHostKeyChecking=no root@ecl-odabase-1 "/bin/df -k /u01 /tmp"
    2023-03-20 14:31:45: Command output:
    >  Filesystem     1K-blocks     Used Available Use% Mounted on
    >  /dev/xvdb1      95988492 77826264  13279540  86% /u01
    >  /dev/xvda2      57060636 41370112  12785284  77% / ,
    >End Command output
    2023-03-20 14:31:45: Free space on /u01 on ecl-odabase-1 is 13279540 1K-blocks
    2023-03-20 14:31:45: Free space on / on ecl-odabase-1 is 12785284 1K-blocks
    2023-03-20 14:31:45: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:45: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:45: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:45: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:46: Executing cmd: export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2; /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/srvctl config database -db testdb
    2023-03-20 14:31:47: Command output:
    >  Database unique name: testdb
    >  Database name: testdb
    >  Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_2
    >  Oracle user: oracle
    >  Spfile: +DATA/TESTDB/PARAMETERFILE/spfile.296.1131972439
    >  Password file: +DATA/TESTDB/PASSWORD/pwdtestdb.269.1131971689
    >  Domain:
    >  Start options: open
    >  Stop options: immediate
    >  Database role: PRIMARY
    >  Management policy: AUTOMATIC
    >  Server pools:
    >  Disk Groups: DATA,REDO
    >  Mount point paths:
    >  Services:
    >  Type: RAC
    >  Start concurrency:
    >  Stop concurrency:
    >  OSDBA group: dba
    >  OSOPER group: racoper
    >  Database instances: testdb1,testdb2
    >  Configured nodes: ecl-odabase-0,ecl-odabase-1
    >  Database is administrator managed ,
    >End Command output
    2023-03-20 14:31:47: db_domain for the database testdb is
    2023-03-20 14:31:47: Executing cmd: /u01/app/19.0.0.0/grid/bin/srvctl config scan
    2023-03-20 14:31:47: Command output:
    >  SCAN name: ecl-oda-scan, Network: 1
    >  Subnet IPv4: 10.11.30.0/255.255.255.0/eth0, static
    >  Subnet IPv6:
    >  SCAN 1 IPv4 VIP: 10.11.30.48
    >  SCAN VIP is enabled.
    >  SCAN 2 IPv4 VIP: 10.11.30.49
    >  SCAN VIP is enabled.
    >  SCAN 3 IPv4 VIP: 10.11.30.50
    >  SCAN VIP is enabled. ,
    >End Command output
    2023-03-20 14:31:47: Executing cmd: /u01/app/19.0.0.0/grid/bin/srvctl config scan_listener -i 1 -S 1
    2023-03-20 14:31:48: Command output:
    >  #@=result[0]: res_name={ora.LISTENER_SCAN1.lsnr} netnum={1} lsnr_name={LISTENER_SCAN1} port={1521,1531} ports={null} enabled={true} enabled_nodes={} disabled_nodes={}  ,
    >End Command output
    2023-03-20 14:31:48: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:48: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:48: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:48: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:49: Will be running following sql statements as user: oracle:
    
                      export oracle_home=/u01/app/oracle/product/12.1.0.2/dbhome_2,
                      export sid= testdb1,
                      /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/sqlplus -L sys/password@ecl-oda-scan/testdb as sysdba
    
                      set echo off
    set heading off
    select value from v$parameter where name='pga_aggregate_target'
    
    2023-03-20 14:31:50: Removing file /tmp/filezIGyIR
    2023-03-20 14:31:50:
    2023-03-20 14:31:50: Successfully removed file: /tmp/filezIGyIR
    2023-03-20 14:31:50: /bin/su successfully executed
    
    2023-03-20 14:31:50: Output is :
     2147483648
    
     1 row selected.
    
    
    2023-03-20 14:31:50: Will be running following sql statements as user: oracle:
    
                      export oracle_home=/u01/app/oracle/product/12.1.0.2/dbhome_2,
                      export sid= testdb1,
                      /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/sqlplus -L sys/password@ecl-oda-scan/testdb as sysdba
    
                      set echo off
    set heading off
    select value from v$parameter where name='pga_aggregate_limit'
    
    2023-03-20 14:31:51: Removing file /tmp/fileBNgBgF
    2023-03-20 14:31:51:
    2023-03-20 14:31:51: Successfully removed file: /tmp/fileBNgBgF
    2023-03-20 14:31:51: /bin/su successfully executed
    
    2023-03-20 14:31:51: Output is :
     4294967296
    
     1 row selected.
    
    
    2023-03-20 14:31:51: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:51: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:51: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:51: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:52: INFO:  Upgrading the database testdb. It will take few minutes. Please wait...
    2023-03-20 14:31:52: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:52: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:52: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:52: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:52: Executing cmd: export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2; /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/srvctl status database -d testdb
    2023-03-20 14:31:53: Command output:
    >  Instance testdb1 is running on node ecl-odabase-0
    >  Instance testdb2 is running on node ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:53: INFO : Running the command /u01/app/19.0.0.0/grid/bin/crsctl stat resource ora.testdb.db -p
    2023-03-20 14:31:58: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 14:31:58: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 14:31:58: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 14:31:58: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 14:31:58: INFO : Did not do scp for node : ecl-odabase-0
    2023-03-20 14:31:59: INFO : This is root, will become oracle and run: /bin/su oracle -c /usr/bin/ssh -l oracle ecl-odabase-0 /opt/oracle/oak/onecmd/tmp/dbstats_testdb.sh
    2023-03-20 14:31:59: INFO : Running on the local node: /bin/su oracle -c /opt/oracle/oak/onecmd/tmp/dbstats_testdb.sh
    2023-03-20 14:32:37: INFO : Running dbua to upgrade testdb
    2023-03-20 14:32:37: INFO : Running dbua command 
    2023-03-20 14:32:38: INFO : This is root, will become oracle and run: /bin/su oracle -c /usr/bin/ssh -l oracle ecl-odabase-0 /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/dbua -silent -performFixUp true -dbName testdb
    2023-03-20 14:32:38: INFO : Running on the local node: /bin/su oracle -c /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/dbua -silent -performFixUp true -dbName testdb
    2023-03-20 15:03:35: INFO : upgrade testdb <>
    2023-03-20 15:03:35: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 15:03:35: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 15:03:35: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 15:03:35: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 15:03:35: Executing cmd: export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1; /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/srvctl status database -d testdb
    2023-03-20 15:03:36: Command output:
    >  Instance testdb1 is running on node ecl-odabase-0
    >  Instance testdb2 is running on node ecl-odabase-1 ,
    >End Command output
    2023-03-20 15:03:36: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 15:03:36: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 15:03:36: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 15:03:37: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 15:03:39: run_as_user2: Running /bin/su oracle -c ' export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1;/u01/app/oracle/product/19.0.0.0/dbhome_1/bin/srvctl status instance -d testdb -n ecl-odabase-0 '
    2023-03-20 15:03:39: Removing file /tmp/filetjwniy
    2023-03-20 15:03:39:
    2023-03-20 15:03:39: Successfully removed file: /tmp/filetjwniy
    2023-03-20 15:03:39: /bin/su successfully executed
    
    2023-03-20 15:03:39: INFO : Did not do scp for node : ecl-odabase-0
    2023-03-20 15:03:40: INFO : This is root, will become oracle and run: /bin/su oracle -c /usr/bin/ssh -l oracle ecl-odabase-0 /opt/oracle/oak/onecmd/tmp/runDatapatch.sh
    2023-03-20 15:03:40: INFO : Running on the local node: /bin/su oracle -c /opt/oracle/oak/onecmd/tmp/runDatapatch.sh
    2023-03-20 15:04:09: Executing cmd: /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
    2023-03-20 15:04:10: Command output:
    >  **************************************************************
    >  ecl-odabase-0:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  **************************************************************
    >  ecl-odabase-1:
    >  CRS-4537: Cluster Ready Services is online
    >  CRS-4529: Cluster Synchronization Services is online
    >  CRS-4533: Event Manager is online
    >  ************************************************************** ,
    >End Command output
    2023-03-20 15:04:10: Executing cmd: /u01/app/19.0.0.0/grid/bin/olsnodes
    2023-03-20 15:04:10: Command output:
    >  ecl-odabase-0
    >  ecl-odabase-1 ,
    >End Command output
    2023-03-20 15:04:10: SUCCESS:  Successfully upgraded the database testdb
    [root@ecl-odabase-0 ~]#
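
    Once the tool reports success, a quick sanity check is worth running before handing the database back to users. This is a minimal sketch, assuming the 19c home and database name shown in the log above; adjust ORACLE_SID and the paths for your environment:

    # Confirm both instances are running out of the new 19c home
    export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
    $ORACLE_HOME/bin/srvctl status database -d testdb

    # Check component versions after the upgrade (ORACLE_SID assumed to be testdb1 on node 0)
    export ORACLE_SID=testdb1
    $ORACLE_HOME/bin/sqlplus -S / as sysdba <<'EOF'
    select comp_id, version, status from dba_registry;
    EOF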
    
    

    Conclusion

    ODA is the best option for small and medium enterprises. Oracle Database Appliance simplifies deployment, maintenance, and support for high-availability database solutions, built from Oracle Database templates designed for optimal performance on the platform. For virtualized environments, oakcli simplifies upgrading the database to 19c. Keep in mind that 19.13 is the most stable release for virtualized ODA, and it is also the last supported release for the virtualized deployment model.


    Wednesday, June 7, 2023

    OLVM Upgrade 4.4.8 - 4.4.10

     


    Intro 

    OLVM (Oracle Linux Virtualization Manager) release 4.4 is based on the oVirt 4.4.0 through 4.4.10 releases. oVirt has already released version 4.5, but Oracle has not shipped it yet; at the time of writing, they are still testing the 4.5 release for bugs. If you are planning to upgrade OLVM and the KVM hosts, make sure to upgrade the OLVM engine first. Once the engine is up to date, you can plan the KVM host upgrades.

    Note: The OLVM engine upgrade can be performed without downtime; there is no impact on running VMs or KVM hosts.

    Please refer to the Oracle and oVirt documentation below for the new features introduced between 4.4.8 and 4.4.10.

    Oracle :

    https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-manager/relnotes/relnotes-whatsnew.html#whatsnew

    Ovirt : 

    https://www.ovirt.org/release/4.4.10/


    In this article, I will illustrate how we can upgrade the OLVM engine from 4.4.8 to 4.4.10.

    Please refer to the Oracle note for the upgrade: OLVM: How to set OLVM Engine and KVM Hosts to Maintenance Mode? (Doc ID 2915795.1)

    Prerequisite steps before upgrading the engine:

    • If the engine runs on a virtualized platform, I recommend taking a consistent snapshot (to get a consistent snapshot, shut down the VM before taking the snap).
    • Back up the OLVM engine database using the command mentioned below.
    • Execute the engine-upgrade-check command, which checks for the latest RPMs.

    OLVM backup command

    
    engine-backup --scope=all --mode=backup --file=/root/backup_Before_Upgrade_05June2023.bck --log=/root/backuplog_before_upgrade.log
    

    Sample backup output

    
    
    [root@local-olvm-01 ~]# engine-backup --scope=all --mode=backup --file=/root/backup_Before_Upgrade_05June2023.bck --log=/root/backuplog_before_upgrade.log
    Start of engine-backup with mode 'backup'
    scope: all
    archive file: /root/backup_Before_Upgrade_05June2023.bck
    log file: /root/backuplog_before_upgrade.log
    Backing up:
    Notifying engine
    - Files
    - Engine database 'engine'
    - DWH database 'ovirt_engine_history'
    - Grafana database '/var/lib/grafana/grafana.db'
    Packing into file '/root/backup_Before_Upgrade_05June2023.bck'
    Notifying engine
    Done.
    [root@local-olvm-01 ~]#
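
    Knowing the restore path before you need it is just as important as the backup itself. Below is a minimal restore sketch for a freshly installed engine host, assuming the backup file created above and the standard 4.4 engine-backup restore flags; verify the exact options for your release with engine-backup --help:

    # Restore files and all databases from the backup taken above (flags assumed for a 4.4 engine)
    engine-backup --mode=restore --scope=all \
      --file=/root/backup_Before_Upgrade_05June2023.bck \
      --log=/root/restorelog_after_failure.log \
      --provision-all-databases --restore-permissions

    # Re-run engine-setup afterwards to finish wiring up the restored configuration
    engine-setup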
    
    

    Pre-Upgrade check

    • Execute engine-upgrade-check to confirm whether newer engine RPMs are available.

    Sample pre-upgrade engine log

    
    [root@local-olvm-01 ~]# engine-upgrade-check
    VERB: Creating transaction
    VERB: Queue package ovirt-engine-setup for update
    VERB: Building transaction
    VERB: Transaction built
    VERB: Transaction Summary:
    VERB:     install   : ovirt-engine-setup-plugin-websocket-proxy-4.4.10.7-1.0.22.el8.noarch
    VERB:     install   : ovirt-engine-setup-plugin-cinderlib-4.4.10.7-1.0.22.el8.noarch
    VERB:     install   : python3-ovirt-engine-lib-4.4.10.7-1.0.22.el8.noarch
    VERB:     install   : ovirt-engine-setup-plugin-imageio-4.4.10.7-1.0.22.el8.noarch
    VERB:     install   : ovirt-engine-setup-plugin-ovirt-engine-4.4.10.7-1.0.22.el8.noarch
    VERB:     install   : ovirt-engine-setup-plugin-ovirt-engine-common-4.4.10.7-1.0.22.el8.noarch
    VERB:     install   : ovirt-engine-setup-4.4.10.7-1.0.22.el8.noarch
    VERB:     install   : ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.10.7-1.0.22.el8.noarch
    VERB:     install   : ovirt-engine-setup-base-4.4.10.7-1.0.22.el8.noarch
    VERB:     remove    : ovirt-engine-setup-4.4.8.6-1.0.11.el8.noarch
    VERB:     remove    : ovirt-engine-setup-base-4.4.8.6-1.0.11.el8.noarch
    VERB:     remove    : ovirt-engine-setup-plugin-cinderlib-4.4.8.6-1.0.11.el8.noarch
    VERB:     remove    : ovirt-engine-setup-plugin-imageio-4.4.8.6-1.0.11.el8.noarch
    VERB:     remove    : ovirt-engine-setup-plugin-ovirt-engine-4.4.8.6-1.0.11.el8.noarch
    VERB:     remove    : ovirt-engine-setup-plugin-ovirt-engine-common-4.4.8.6-1.0.11.el8.noarch
    VERB:     remove    : ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.8.6-1.0.11.el8.noarch
    VERB:     remove    : ovirt-engine-setup-plugin-websocket-proxy-4.4.8.6-1.0.11.el8.noarch
    VERB:     remove    : python3-ovirt-engine-lib-4.4.8.6-1.0.11.el8.noarch
    VERB: Closing transaction with commit
    Upgrade available.
    [root@local-olvm-01 ~]#
    
    

    Upgrade engine

    I would recommend executing this in screen or tmux to avoid any terminal connection interruption; the process takes roughly 30 to 45 minutes. You can validate the RPM updates by reviewing /var/log/dnf.rpm.log. A minimal tmux workflow is sketched below.
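
    This sketch assumes nothing beyond tmux being installed; the session name is just illustrative:

    # Start a named tmux session so the upgrade survives a dropped SSH connection
    tmux new -s olvm-upgrade

    # Inside the session, run the package update
    dnf update ovirt\*setup\*

    # From a second tmux window (Ctrl-b c) or another terminal, follow the RPM transaction log
    tail -f /var/log/dnf.rpm.log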

    Sample engine upgrade log

    
    
    [root@local-olvm-01 ~]# dnf update ovirt\*setup\*
    Last metadata expiration check: 4:30:51 ago on Mon 05 Jun 2023 04:00:02 PM EDT.
    Dependencies resolved.
    =============================================================================================================================================================================================================
     Package                                                                      Architecture                       Version                                         Repository                             Size
    =============================================================================================================================================================================================================
    Upgrading:
     ovirt-engine-dwh-grafana-integration-setup                                   noarch                             4.4.10-1.0.3.el8                                ovirt-4.4                              89 k
     ovirt-engine-dwh-setup                                                       noarch                             4.4.10-1.0.3.el8                                ovirt-4.4                              96 k
     ovirt-engine-setup                                                           noarch                             4.4.10.7-1.0.22.el8                             ovirt-4.4                              22 k
     ovirt-engine-setup-base                                                      noarch                             4.4.10.7-1.0.22.el8                             ovirt-4.4                             119 k
     ovirt-engine-setup-plugin-cinderlib                                          noarch                             4.4.10.7-1.0.22.el8                             ovirt-4.4                              43 k
     ovirt-engine-setup-plugin-imageio                                            noarch                             4.4.10.7-1.0.22.el8                             ovirt-4.4                              30 k
     ovirt-engine-setup-plugin-ovirt-engine                                       noarch                             4.4.10.7-1.0.22.el8                             ovirt-4.4                             208 k
     ovirt-engine-setup-plugin-ovirt-engine-common                                noarch                             4.4.10.7-1.0.22.el8                             ovirt-4.4                             126 k
     ovirt-engine-setup-plugin-vmconsole-proxy-helper                             noarch                             4.4.10.7-1.0.22.el8                             ovirt-4.4                              42 k
     ovirt-engine-setup-plugin-websocket-proxy                                    noarch                             4.4.10.7-1.0.22.el8                             ovirt-4.4                              43 k
     python3-ovirt-engine-lib                                                     noarch                             4.4.10.7-1.0.22.el8                             ovirt-4.4                              44 k
    
    Transaction Summary
    =============================================================================================================================================================================================================
    Upgrade  11 Packages
    
    Total download size: 862 k
    Is this ok [y/N]: y
    
    

    Engine-setup

    Running engine-setup reconfigures the engine with the newly installed packages and your configuration inputs.
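
    If you ever need to repeat the run non-interactively, note that engine-setup records your answers in an answer file (see the "Generating answer file" line near the end of the log below). A minimal sketch of replaying it, assuming the answer-file path generated by your own run:

    # Replay a previous run's recorded answers instead of responding to each prompt
    engine-setup --config-append=/var/lib/ovirt-engine/setup/answers/<answer-file>.conf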

    Sample engine-setup log

    
    
    [root@local-olvm-01 ~]# engine-setup
    [ INFO  ] Stage: Initializing
    [ INFO  ] Stage: Environment setup
              Configuration files: /etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, /etc/ovirt-engine-setup.conf.d/10-packaging.conf
              Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20220725103208-vp6a4t.log
              Version: otopi-1.9.5 (otopi-1.9.5-1.el8)
    [ INFO  ] Stage: Environment packages setup
    [ INFO  ] Stage: Programs detection
    [ INFO  ] Stage: Environment setup (late)
    [ INFO  ] Stage: Environment customization
    
              --== PRODUCT OPTIONS ==--
    
              Configure Cinderlib integration (Currently in tech preview) (Yes, No) [No]:
              Configure Engine on this host (Yes, No) [Yes]:
    
              Configuring ovirt-provider-ovn also sets the Default cluster's default network provider to ovirt-provider-ovn.
              Non-Default clusters may be configured with an OVN after installation.
              Configure ovirt-provider-ovn (Yes, No) [Yes]:
              Configure WebSocket Proxy on this host (Yes, No) [Yes]:
    
              * Please note * : Data Warehouse is required for the engine.
              If you choose to not configure it on this host, you have to configure
              it on a remote host, and then configure the engine on this host so
              that it can access the database of the remote Data Warehouse host.
              Configure Data Warehouse on this host (Yes, No) [Yes]:
              Configure VM Console Proxy on this host (Yes, No) [Yes]:
              Configure Grafana on this host (Yes, No) [Yes]:
    
              --== PACKAGES ==--
    
    [ INFO  ] Checking for product updates...
    [ INFO  ] DNF Package grafana-postgres available, but not installed.
    [ INFO  ] No product updates found
    
              --== NETWORK CONFIGURATION ==--
    
              Host fully qualified DNS name of this server [local-olvm-01.local.com]:
    [WARNING] Failed to resolve kvm02.local.com using DNS, it can be resolved only locally
    
              Setup can automatically configure the firewall on this system.
              Note: automatic configuration of the firewall may overwrite current settings.
              Do you want Setup to configure the firewall? (Yes, No) [Yes]:
    [ INFO  ] firewalld will be configured as firewall manager.
    
              --== DATABASE CONFIGURATION ==--
    
              Where is the DWH database located? (Local, Remote) [Local]:
    
              Setup can configure the local postgresql server automatically for the DWH to run. This may conflict with existing applications.
              Would you like Setup to automatically configure postgresql and create DWH database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
              Where is the Engine database located? (Local, Remote) [Local]:
    
              Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
              Would you like Setup to automatically configure postgresql and create Engine database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
    
              --== OVIRT ENGINE CONFIGURATION ==--
    
              Engine admin password:
              Confirm engine admin password:
    [WARNING] Password is weak: The password is shorter than 8 characters
              Use weak password? (Yes, No) [No]: Yes
              Application mode (Virt, Gluster, Both) [Both]:
              Use default credentials (admin@internal) for ovirt-provider-ovn (Yes, No) [Yes]:
    
              --== STORAGE CONFIGURATION ==--
    
              Default SAN wipe after delete (Yes, No) [No]:
    
              --== PKI CONFIGURATION ==--
    
              Organization name for certificate [local.com]:
    
              --== APACHE CONFIGURATION ==--
    
              Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
              Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
    
              Setup can configure apache to use SSL using a certificate issued from the internal CA.
              Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
    
              --== SYSTEM CONFIGURATION ==--
    
    
              --== MISC CONFIGURATION ==--
    
              Please choose Data Warehouse sampling scale:
              (1) Basic
              (2) Full
              (1, 2)[1]:
              Use Engine admin password as initial Grafana admin password (Yes, No) [Yes]:
    
              --== END OF CONFIGURATION ==--
    
    [ INFO  ] Stage: Setup validation
    [WARNING] Less than 16384MB of memory is available
    
              --== CONFIGURATION PREVIEW ==--
    
              Application mode                        : both
              Default SAN wipe after delete           : False
              Host FQDN                               : local-olvm-01.local.com
              Firewall manager                        : firewalld
              Update Firewall                         : True
              Set up Cinderlib integration            : False
              Configure local Engine database         : True
              Set application as default page         : True
              Configure Apache SSL                    : True
              Engine database host                    : localhost
              Engine database port                    : 5432
              Engine database secured connection      : False
              Engine database host name validation    : False
              Engine database name                    : engine
              Engine database user name               : engine
              Engine installation                     : True
              PKI organization                        : local.com
              Set up ovirt-provider-ovn               : True
              Grafana integration                     : True
              Grafana database user name              : ovirt_engine_history_grafana
              Configure WebSocket Proxy               : True
              DWH installation                        : True
              DWH database host                       : localhost
              DWH database port                       : 5432
              DWH database secured connection         : False
              DWH database host name validation       : False
              DWH database name                       : ovirt_engine_history
              Configure local DWH database            : True
              Configure VMConsole Proxy               : True
    
              Please confirm installation settings (OK, Cancel) [OK]:
    [ INFO  ] Stage: Transaction setup
    [ INFO  ] Stopping engine service
    [ INFO  ] Stopping ovirt-fence-kdump-listener service
    [ INFO  ] Stopping dwh service
    [ INFO  ] Stopping vmconsole-proxy service
    [ INFO  ] Stopping websocket-proxy service
    [ INFO  ] Stage: Misc configuration (early)
    [ INFO  ] Stage: Package installation
    [ INFO  ] DNF Downloading 4 files, 16083.99KB
    [ INFO  ] DNF Downloaded ovirt-vmconsole-1.0.9-3.el8.noarch.rpm
    [ INFO  ] DNF Downloaded selinux-policy-3.14.3-80.0.4.el8_5.2.noarch.rpm
    [ INFO  ] DNF Downloaded selinux-policy-targeted-3.14.3-80.0.4.el8_5.2.noarch.rpm
    [ INFO  ] DNF Downloaded ovirt-vmconsole-proxy-1.0.9-3.el8.noarch.rpm
    [ INFO  ] DNF Upgraded: selinux-policy-3.14.3-80.0.4.el8_5.2.noarch
    [ INFO  ] DNF Upgraded: selinux-policy-targeted-3.14.3-80.0.4.el8_5.2.noarch
    [ INFO  ] DNF Upgraded: ovirt-vmconsole-1.0.9-3.el8.noarch
    [ INFO  ] DNF Upgraded: ovirt-vmconsole-proxy-1.0.9-3.el8.noarch
    [ INFO  ] DNF Unknown: ovirt-vmconsole-proxy-1.0.9-2.el8.noarch
    [ INFO  ] DNF Unknown: ovirt-vmconsole-1.0.9-2.el8.noarch
    [ INFO  ] DNF Unknown: selinux-policy-targeted-3.14.3-80.0.1.el8.noarch
    [ INFO  ] DNF Unknown: selinux-policy-3.14.3-80.0.1.el8.noarch
    [ INFO  ] DNF Verify: selinux-policy-3.14.3-80.0.4.el8_5.2.noarch 1/8
    [ INFO  ] DNF Verify: selinux-policy-3.14.3-80.0.1.el8.noarch 2/8
    [ INFO  ] DNF Verify: selinux-policy-targeted-3.14.3-80.0.4.el8_5.2.noarch 3/8
    [ INFO  ] DNF Verify: selinux-policy-targeted-3.14.3-80.0.1.el8.noarch 4/8
    [ INFO  ] DNF Verify: ovirt-vmconsole-1.0.9-3.el8.noarch 5/8
    [ INFO  ] DNF Verify: ovirt-vmconsole-1.0.9-2.el8.noarch 6/8
    [ INFO  ] DNF Verify: ovirt-vmconsole-proxy-1.0.9-3.el8.noarch 7/8
    [ INFO  ] DNF Verify: ovirt-vmconsole-proxy-1.0.9-2.el8.noarch 8/8
    [ INFO  ] Stage: Misc configuration
    [ INFO  ] Upgrading CA
    [ INFO  ] Initializing PostgreSQL
    [ INFO  ] Creating PostgreSQL 'engine' database
    [ INFO  ] Configuring PostgreSQL
    [ INFO  ] Creating PostgreSQL 'ovirt_engine_history' database
    [ INFO  ] Configuring PostgreSQL
    [ INFO  ] Creating CA: /etc/pki/ovirt-engine/ca.pem
    [ INFO  ] Creating CA: /etc/pki/ovirt-engine/qemu-ca.pem
    [ INFO  ] Updating OVN SSL configuration
    [ INFO  ] Updating OVN timeout configuration
    [ INFO  ] Creating/refreshing DWH database schema
    [ INFO  ] Setting up ovirt-vmconsole proxy helper PKI artifacts
    [ INFO  ] Setting up ovirt-vmconsole SSH PKI artifacts
    [ INFO  ] Configuring WebSocket Proxy
    [ INFO  ] Creating/refreshing Engine database schema
    [ INFO  ] Creating a user for Grafana
    [ INFO  ] Creating/refreshing Engine 'internal' domain database schema
    [ INFO  ] Creating default mac pool range
    [ INFO  ] Adding default OVN provider to database
    [ INFO  ] Adding OVN provider secret to database
    [ INFO  ] Setting a password for internal user admin
    [ INFO  ] Install selinux module /usr/share/ovirt-engine/selinux/ansible-runner-service.cil
    [ INFO  ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
    [ INFO  ] Stage: Transaction commit
    [ INFO  ] Stage: Closing up
    [ INFO  ] Starting engine service
    [ INFO  ] Starting dwh service
    [ INFO  ] Starting Grafana service
    [ INFO  ] Restarting ovirt-vmconsole proxy service
    
              --== SUMMARY ==--
    
    [ INFO  ] Restarting httpd
              Please use the user 'admin@internal' and password specified in order to login
              Web access is enabled at:
                  http://local-olvm-01.local.com:80/ovirt-engine
                  https://local-olvm-01.local.com:443/ovirt-engine
              Internal CA CF:C8:A2:E0:42:FE:5F:19:55:B3:E2:9F:A9:7F:4C:DC:49:8D:C7:CB
              SSH fingerprint: SHA256:aRJ1E8zUzNaYsXG2tCfvV4EGTMxw/mMzrKQQt2ZFZKE
    [WARNING] Less than 16384MB of memory is available
              Web access for grafana is enabled at:
                  https://local-olvm-01.local.com/ovirt-engine-grafana/
              Please run the following command on the engine machine kvm02.local.com, for SSO to work:
              systemctl restart ovirt-engine
    
              --== END OF SUMMARY ==--
    
    [ INFO  ] Stage: Clean up
              Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20220725103208-vp6a4t.log
    [ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20220725103526-setup.conf'
    [ INFO  ] Stage: Pre-termination
    [ INFO  ] Stage: Termination
    [ INFO  ] Execution of setup completed successfully
    [root@local-olvm-01 ~]#
    
    

    Engine startup issue

    After the upgrade, we were not able to start the OLVM engine because of a JDBC driver issue.

    Error

    
    Caused by: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [select * from  gettagsbyparent_id(?, ?, ?, ?, ?, ?, ?, ?, ?)]; nested exception is org.postgresql.util.PSQLException: ERROR: function gettagsbyparent_id(uuid, unknown, character varying, character varying, unknown, boolean, integer, unknown, unknown) does not exist
      Hint: No function matches the given name and argument types. You might need to add explicit type casts.
      Position: 16
    
    

    Complete Error log

    
    
    2023-06-05 20:44:07,605-04 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 50) 
    MSC000001: Failed to start service jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: org.jboss.msc.service.StartException in service jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: java.lang.IllegalStateException: 
    WFLYEE0042: Failed to construct component instance
            at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.ComponentStartService$1.run(ComponentStartService.java:57)
            at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
            at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
            at org.jboss.threads@2.4.0.Final//org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
            at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1990)
            at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
            at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
            at java.base/java.lang.Thread.run(Thread.java:829)
            at org.jboss.threads@2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
    Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
            at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:163)
            at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:134)
            at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:88)
            at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.singleton.SingletonComponent.getComponentInstance(SingletonComponent.java:127)
            at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.singleton.SingletonComponent.start(SingletonComponent.java:141)
            at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.ComponentStartService$1.run(ComponentStartService.java:54)
            ... 8 more
    Caused by: javax.ejb.EJBException: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@2bd6ae3f
            at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:239)
            at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:446)
            at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.LifecycleCMTTxInterceptor.processInvocation(LifecycleCMTTxInterceptor.java:70)
            at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
            at org.jboss.as.weld@23.0.2.Final//org.jboss.as.weld.injection.WeldInjectionContextInterceptor.processInvocation(WeldInjectionContextInterceptor.java:43)
            at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
            at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
            at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
            at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
            at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
            at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
            at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
            at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.singleton.StartupCountDownInterceptor.processInvocation(StartupCountDownInterceptor.java:25)
            at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
            at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
            at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:161)
            ... 13 more
    Caused by: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@2bd6ae3f
            at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
            at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
            at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
            at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:232)
            ... 28 more
    Caused by: java.lang.reflect.InvocationTargetException
            at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.base/java.lang.reflect.Method.invoke(Method.java:566)
            at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:83)
            ... 59 more
    Caused by: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar 
    [select * from  gettagsbyparent_id(?, ?, ?, ?, ?, ?, ?, ?, ?)]; nested exception is org.postgresql.util.PSQLException: 
    ERROR: function gettagsbyparent_id(uuid, unknown, character varying, character varying, unknown, boolean, integer, unknown, unknown) does not exist
      Hint: No function matches the given name and argument types. You might need to add explicit type casts.
      Position: 16
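
    The key line is the final PSQLException: the engine calls the gettagsbyparent_id function with argument types that PostgreSQL can no longer match to an existing function signature. As a quick diagnostic, you can confirm the function itself is still present in the engine database. This is a minimal sketch, assuming the default engine database and the engine-psql.sh wrapper shipped with ovirt-engine:

    # Hypothetical check: list the available overloads of gettagsbyparent_id
    # (engine-psql.sh wraps psql with the engine's connection settings)
    /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "\df gettagsbyparent_id"

    If the function is listed, the mismatch is on the driver side: the old JDBC driver sends parameter types the database cannot match, which is exactly what the driver upgrade below fixes.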

    Solution

    This error is caused by an outdated PostgreSQL JDBC driver. Oracle has a metalink (My Oracle Support) note that addresses this driver failure: OLVM: 500 - Internal Server Error after OLVM 4.4.10 dnf update (Doc ID 2909844.1).

    Validate the Postgres JDBC driver version 

    First, validate the currently installed JDBC driver version as shown below; this tells you whether the driver is up to date. The issue is fixed by installing the postgresql-jdbc-42.2.14-1.el8.noarch package. The latest rpm activity can also be verified from /var/log/dnf.rpm.log, as sketched after the listing below.


    [root@local-olvm-01 ovirt-engine]# rpm -qa | grep jdbc*
    postgresql-jdbc-42.2.3-3.el8_2.noarch
    ovirt-engine-extension-aaa-jdbc-1.2.0-1.el8.noarch
    
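    To cross-check what dnf has actually installed or cleaned up for this package, you can grep the dnf rpm log. A small sketch, assuming the default log location:

    # Show postgresql-jdbc install/upgrade/cleanup events recorded by dnf
    grep -i 'postgresql-jdbc' /var/log/dnf.rpm.log
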

    Upgrade the Postgres JDBC driver.

    
    [root@sofe-olvm-01 log]# dnf update postgresql-jdbc-42.2.14-1.el8.noarch
    Last metadata expiration check: 0:19:54 ago on Mon 05 Jun 2023 08:43:55 PM EDT.
    Dependencies resolved.
    =============================================================================================================================================================================================================
     Package                                             Architecture                               Version                                              Repository                                         Size
    =============================================================================================================================================================================================================
    Upgrading:
     postgresql-jdbc                                     noarch                                     42.2.14-1.el8                                        ol8_appstream                                     753 k
    
    Transaction Summary
    =============================================================================================================================================================================================================
    Upgrade  1 Package
    
    Total download size: 753 k
    Is this ok [y/N]: y
    Downloading Packages:
    postgresql-jdbc-42.2.14-1.el8.noarch.rpm                                                                                                                                     788 kB/s | 753 kB     00:00
    -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Total                                                                                                                                                                        786 kB/s | 753 kB     00:00
    Running transaction check
    Transaction check succeeded.
    Running transaction test
    Transaction test succeeded.
    Running transaction
      Preparing        :                                                                                                                                                                                     1/1
      Upgrading        : postgresql-jdbc-42.2.14-1.el8.noarch                                                                                                                                                1/2
      Cleanup          : postgresql-jdbc-42.2.3-3.el8_2.noarch                                                                                                                                               2/2
      Verifying        : postgresql-jdbc-42.2.14-1.el8.noarch                                                                                                                                                1/2
      Verifying        : postgresql-jdbc-42.2.3-3.el8_2.noarch                                                                                                                                               2/2
    
    Upgraded:
      postgresql-jdbc-42.2.14-1.el8.noarch
    
    Complete!
    [root@sofe-olvm-01 log]#
    
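    The engine only picks up the new driver JAR after a restart. A minimal verification sketch, assuming the standard ovirt-engine service name:

    # Restart the engine so it loads the upgraded JDBC driver
    systemctl restart ovirt-engine
    # Confirm the service is healthy and the new driver version is installed
    systemctl status ovirt-engine --no-pager
    rpm -q postgresql-jdbc

    After the restart, the WFLYEE0042 / WELD-000049 errors should no longer appear in the engine log.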

    Conclusion

    OLVM 4.4.10 looks much more stable than 4.4.8 and also ships bug fixes for VM high availability and snapshots. Before upgrading OLVM, I would always recommend taking a backup of the OLVM engine, as sketched below, so that we can restore from it in case of failure.
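
    For the backup itself, OLVM inherits oVirt's engine-backup utility; a minimal sketch, with placeholder file and log paths:

    # Back up the engine configuration and database before upgrading
    engine-backup --mode=backup --scope=all \
        --file=/backup/engine-backup-$(date +%Y%m%d).tar.gz \
        --log=/backup/engine-backup-$(date +%Y%m%d).log

    Restoring after a failed upgrade uses the same tool with --mode=restore.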

    Also, these upgrades are generally smooth, and most upgrade issues are covered by Oracle metalink notes.
