Monday, April 7, 2025

Protecting Your Encryption Keys (OKV - Part 2)

 






Intro 

The Role of Oracle Key Vault in Securing Encryption Keys

One of the fundamental aspects of database security is effective key management. A common yet risky practice is storing Transparent Data Encryption (TDE) keys locally. This approach exposes organizations to data loss due to disk corruption or system failures. Furthermore, manually managing encryption keys across multiple Oracle and MySQL databases can be complex and prone to errors, increasing security risks.

Oracle Key Vault (OKV) provides a centralized, secure repository for managing encryption keys, certificates, and secrets. It plays a vital role in enhancing security by:

  • Protecting Encryption Keys: OKV ensures that TDE keys and other critical credentials are securely stored, mitigating risks associated with local key storage.

  • Automating Key Management: By streamlining key distribution and rotation, OKV reduces operational overhead and human error.

  • Enhancing Compliance: Many regulatory frameworks require robust encryption key management. OKV helps organizations meet compliance standards by providing a secure and auditable key management solution.

  • Supporting Disaster Recovery: Backing up encryption keys to an external, secure repository is crucial for disaster recovery. OKV facilitates seamless recovery processes, ensuring business continuity in case of system failures.

Strengthening Security Best Practices

To mitigate the risks associated with security breaches and ensure robust protection of sensitive data, organizations should adopt the following best practices:

  1. Implement Centralized Key Management: Use Oracle Key Vault or similar solutions to securely store and manage encryption keys.

  2. Enforce Strong Access Controls: Restrict access to encryption keys and credentials based on the principle of least privilege.

  3. Regularly Rotate Keys and Passwords: Frequent key and password rotation reduces the risk of prolonged exposure in the event of a breach.

  4. Monitor for Anomalous Activity: Utilize security monitoring tools to detect unauthorized access attempts and potential threats.

  5. Backup Encryption Keys Securely: Store backups in an encrypted, external repository to ensure data recovery in case of corruption or loss.

Initial setup.

In the previous article, we set up the admin user to connect to Oracle Key Vault (OKV). Now, use the admin account to log in to the OKV console.

If you need installation guidance, refer to: Protecting Your Encryption Keys: Lessons from the Oracle Cloud Security Breach (OKV - Part 1).


For endpoint creation and registration, refer to the links below. They provide valuable background and are highly useful for understanding the concepts.

- Installation





- DataSheet

- Setup Oracle Key Vault 21.9 in Guest VM



                                                             Figure 1: Initial login page


We need to create an endpoint in Oracle Key Vault (OKV). In this example, I have created a database named TWHSE01. Before proceeding, I will first set up the following folder structure.


Note: wallet_root is a special folder that holds all the OKV files; best practice is to keep the OKV endpoint (EP) installation under the wallet_root folder.


mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/tde       # Location for TDE keys
mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/okv       # Location for the OKV endpoint
mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/tde_seps  # Location for the external SSO key
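The layout above can be scripted so it is repeatable per database. A minimal sketch; the function name and the scratch demo path are illustrative, while the directory names come from this post:

```shell
# Create the WALLET_ROOT layout used in this post:
#   tde      - TDE keys
#   okv      - OKV endpoint software
#   tde_seps - external SSO secret
make_wallet_root() {
    base="$1"
    for d in tde okv tde_seps; do
        mkdir -p "$base/$d" || return 1
    done
}

# Demo against a scratch directory; for a real database you would pass
# the actual path, e.g. /u01/app/oracle/admin/TCDB/wallet_root.
demo="$(mktemp -d)/wallet_root"
make_wallet_root "$demo" && ls "$demo"
```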


Create an OKV endpoint (EP) 

Overview of EP 

Endpoints are Oracle Key Vault clients that securely store and retrieve security objects such as keys, passwords, certificates, and credential files.

These endpoints can include Oracle database servers, Oracle middleware servers, operating systems, and more. They use Oracle Key Vault for long-term secret retention, secure sharing with trusted peers, and on-demand retrieval.

Oracle Key Vault also provides a library that enables Transparent Data Encryption (TDE) to communicate with it. While Oracle Enterprise Manager can manage database server endpoints in Oracle Key Vault, it does not support TDE integration with Key Vault.


 
Figure 2: Create OKV endpoint


The following shows the endpoint after it has been created for the TWHSE01 database.



                                                       Figure 3: Endpoint 

Default wallet

Create a default wallet and click Save. In this example, I have created WL_TWHSE01.


                                                   Figure 4: Create default wallet


Download EP 

Before downloading, be sure to note the token key, as it is required to download the endpoint software. The token key is shown in the endpoint details.


                                                                

                                                        Figure 5: Token 

                                       

Log out of Oracle Key Vault, return to the main login window, and click Endpoint Enrollment and Software Download.



                                                      Figure 6: Download EP software.
 

After logging in, you will be redirected to the Enroll Endpoint & Download Software page. Enter the token key and click Enroll to download the endpoint software.



                                                  Figure 7: Add a token to download EP software.                                      

                                                           

Install the Endpoint

The downloaded file is sensitive; after installation, be sure to remove it from the server. To simplify identification, name each endpoint software file after the corresponding database.

In this example, I copied the file to the wallet_root/okv folder and renamed it to okvclient_TWHSE01.jar.

Note: For security best practices, ensure you delete this file after installation to prevent potential exploitation by hackers.

For endpoint installation, I am using 0KV2025! as the endpoint and external OKV password. Since TDE is file-based, I will use F1LE2025!.

  • Endpoint password: 0KV2025!

  • TDE password: F1LE2025!

Use the command below to install:


$ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/admin/TWHSE01/wallet_root/okv/okvclient_TWHSE01.jar -d /u01/app/oracle/admin/TWHSE01/wallet_root/okv -v
        

Sample installation output

        
[oracle@crs01 wallet_root]$ $ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/admin/TWHSE01/wallet_root/okv/okvclient_TWHSE01.jar -d /u01/app/oracle/admin/TWHSE01/wallet_root/okv -v
Detected JAVA_HOME: /u01/app/oracle/product/19.0.0/dbhome_1/jdk
Detected ORACLE_HOME: /u01/app/oracle/product/19.0.0/dbhome_1
Detected ORACLE_BASE: /u01/app/oracle
Using OKV_HOME: /u01/app/oracle/admin/TWHSE01/wallet_root/okv
Please set environment variables ORACLE_HOME, ORACLE_BASE, and OKV_HOME
consistently across processes.
Enter new Key Vault endpoint password ( for auto-login): 0KV2025!
Confirm new Key Vault endpoint password: 0KV2025!
The endpoint software for Oracle Key Vault installed successfully.
Deleted the file : /u01/app/oracle/admin/TWHSE01/wallet_root/okv/okvclient_TWHSE01.jar
[oracle@crs01 wallet_root]$
        
        
        
After the extract process is complete, run the root.sh script. This script creates the directory tree /opt/oracle/extapi/64/hsm/oracle/1.0.0, sets the appropriate ownership and permissions, and copies the PKCS#11 library into the directory.

The PKCS#11 library liborapkcs.so is used by Oracle Database to communicate with OKV.

Sample root.sh Script execution output


        
[root@crs01 oracle]#/u01/app/oracle/admin/TWHSE01/wallet_root/okv/bin/root.sh
Creating directory: /opt/oracle/extapi/64/hsm/oracle/1.0.0/
Copying PKCS library to /opt/oracle/extapi/64/hsm/oracle/1.0.0/
Setting PKCS library file permissions
Installation successful.
[root@crs01 oracle]#

        
        
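After root.sh completes, a quick sanity check can confirm both artifacts are in place: the PKCS#11 library at the fixed path above, and the endpoint configuration file okvclient.ora under the OKV install directory. A hedged sketch; the function name is illustrative, the paths are the ones used in this post:

```shell
# Verify the two artifacts an OKV endpoint install should leave behind.
check_okv_install() {
    okv_home="$1"
    lib="/opt/oracle/extapi/64/hsm/oracle/1.0.0/liborapkcs.so"
    rc=0
    [ -f "$lib" ] || { echo "missing PKCS#11 library: $lib"; rc=1; }
    [ -f "$okv_home/conf/okvclient.ora" ] \
        || { echo "missing $okv_home/conf/okvclient.ora"; rc=1; }
    return $rc
}

# Example with the OKV_HOME used in this post:
check_okv_install /u01/app/oracle/admin/TWHSE01/wallet_root/okv \
    && echo "endpoint install looks complete" \
    || echo "endpoint install incomplete"
```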

Before the installation, the endpoint status was REGISTERED.


After the installation, the endpoint status changed to ENROLLED.



Environment Variables.

To complete the installation, we must properly set all environment variables. When migrating TDE to OKV, these variables must be configured so that the correct paths are identified.

The important variables are ORACLE_HOME, ORACLE_SID, OKV_HOME, and JAVA_HOME.


        
export ORACLE_BASE=/u01/app/oracle
export DB_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=TWHSE01
export ORACLE_TERM=xterm
export OKV_HOME=/u01/app/oracle/admin/TWHSE01/wallet_root/okv
export JAVA_HOME=/u01/app/oracle/product/19.0.0/dbhome_1/jdk

        
        
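A small guard that fails fast when one of these variables is unset can save a confusing okvutil or sqlplus error later. A sketch; the function name is illustrative, the variable list is the one above:

```shell
# Fail fast if any variable required for OKV/TDE work is unset or empty.
check_env() {
    missing=0
    for v in ORACLE_HOME ORACLE_SID OKV_HOME JAVA_HOME; do
        eval "val=\${$v}"
        if [ -z "$val" ]; then
            echo "ERROR: $v is not set"
            missing=1
        fi
    done
    return $missing
}

check_env && echo "environment looks complete" \
          || echo "environment incomplete"
```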

Upload TDE keys to OKV

Now, let's upload the TDE keys to the OKV wallet using okvutil. The command usage is shown below.


        
[oracle@crs01 bin]$ ./okvutil upload -h
Usage 1: okvutil upload -l <location> -t <type> [-o] [-g <group>]
  type := WALLET | JKS | JCEKS
Usage 2: okvutil upload -l <location> -t <type> [-o] [-g <group>] [-d <description>]
  type := SSH | KERBEROS | TDE_KEY_BYTES | OTHER
Usage 3: okvutil upload -l <location> -t <type> -U <SSH-user> -L <length> [-g <group>] [-i <SSH-private-key-id>] [-d <description>]
  type := SSH_PUBLIC_KEY
Usage 4: okvutil upload -l <location> -t <type> -U <SSH-user> -L <length> [-g <group>] [-d <description>]
  type := SSH_PRIVATE_KEY
Description:
  Use the upload command to upload data to the server.
Options:
  -l, --location <location>
        Read information from <location>
        For wallets, point to the directory containing the wallet.
        For all other types, point to the file.
  -t, --type <type>
        Type of store.
        type := WALLET | JKS | JCEKS | SSH | KERBEROS | TDE_KEY_BYTES | SSH_PUBLIC_KEY | SSH_PRIVATE_KEY | OTHER
  -o, --overwrite
        Overwrite any conflicting data on the server with the data to upload.
  -g, --group <group>
        Name of object group (Oracle Key Vault virtual wallet).
        Note that group must exist and the endpoint must have sufficient access privileges.
  -d, --description description
        Add a free-form description. This option is only valid when the source file type is one that Oracle Key Vault stores as a single object (OTHER, KERBEROS, SSH, TDE_KEY_BYTES, SSH_PUBLIC_KEY).
  -i, --item SSH-private-key-id
        ID of the SSH private key linked to the SSH public key.
  -U, --ssh-user SSH-user
        SSH user who owns the SSH public or private key.
  -L, --length length
        Length (in bits) of the SSH public or private key to be uploaded.
Example:
  * okvutil upload -l . -t wallet -g Group1
  * okvutil upload -l foo.txt -t other -d description
  * okvutil upload -l tde_key_bytes.txt -t tde_key_bytes -d master_key_for_db
  * okvutil upload -l ./keystore.jks -t jks -g Group2
[oracle@crs01 bin]$
        
        

This is how you upload the TDE keys to the OKV wallet:
        
/u01/app/oracle/admin/SWHSE01/okv/bin/okvutil upload -t WALLET -l /u01/app/oracle/admin/SWHSE01/wallet/tde -g WL_SWHSE01 -v 4
        
    

Sample output :

        
[oracle@crs01 bin]$ pwd
/u01/app/oracle/admin/SWHSE01/okv/bin
[oracle@crs01 bin]$ /u01/app/oracle/admin/SWHSE01/okv/bin/okvutil upload -t WALLET -l /u01/app/oracle/admin/SWHSE01/wallet/tde  -g WL_SWHSE01 -v 4
okvutil version 21.10.0.0.0
Endpoint type: Oracle Database
Configuration file: /u01/app/oracle/admin/SWHSE01/okv/conf/okvclient.ora
Server: 192.168.56.210:5696
Standby Servers:
Uploading from /u01/app/oracle/admin/SWHSE01/wallet/tde
Enter source wallet password:
No auto-login wallet found, password needed
Enter Oracle Key Vault endpoint password:
ORACLE.SECURITY.ID.ENCRYPTION.
Trying to connect to 192.168.56.210:5696 ...
Connected to 192.168.56.210:5696.
ORACLE.SECURITY.KB.ENCRYPTION.
Trying to connect to 192.168.56.210:5696 ...
Connected to 192.168.56.210:5696.
ORACLE.SECURITY.KM.ENCRYPTION.AUjX3D9pzU9xv0601AarqbMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.AUjX3D9pzU9xv0601AarqbMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY

Uploaded 1 TDE keys
Uploaded 0 SEPS entries
Uploaded 0 other secrets
Uploaded 3 opaque objects

Uploading private key
Uploading certificate request
Uploading trust points

Uploaded 1 private keys
Uploaded 1 certificate requests
Uploaded 0 user certificates
Uploaded 0 trust points

Upload succeeded
[oracle@crs01 bin]$
        
        

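If the upload hangs or cannot connect, a quick probe of the KMIP port (5696, per the Server line in the output above) helps separate network problems from wallet problems. A sketch; the helper name is illustrative and the host is the lab address used in this post:

```shell
# Probe a TCP port to check that the endpoint can reach the OKV server.
okv_reachable() {
    host="$1"; port="$2"
    # /dev/tcp is a bash feature; timeout bounds the wait on filtered ports.
    timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

if okv_reachable 192.168.56.210 5696; then
    echo "OKV KMIP port is reachable"
else
    echo "cannot reach OKV on port 5696 - check routing/firewall"
fi
```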

Migrate TDE keys to OKV


Add a secret to allow use of "External Store".

I will securely store the OKV password in the keystore as a secret, allowing the use of the EXTERNAL STORE option instead of manually entering the password.

Next, I will create an external key in Oracle Key Vault (OKV) and migrate the existing encryption keys to OKV. The secret will be stored in the following directory: /u01/app/oracle/admin/TWHSE01/wallet_root/tde_seps


ADMINISTER KEY MANAGEMENT ADD SECRET '0KV2025!' FOR CLIENT 'OKV_PASSWORD' TO LOCAL AUTO_LOGIN KEYSTORE '/u01/app/oracle/admin/TWHSE01/wallet_root/tde_seps';
        

Note: As mentioned in the previous post:
  • The keystore must be located in a subdirectory of WALLET_ROOT named "tde_seps" to be recognized.
  • The "FOR CLIENT" entry must be 'OKV_PASSWORD' for proper detection.
  • The keystore must be set to AUTO_LOGIN to ensure it can be opened and used automatically.

Enabling Auto Login for Oracle Key Vault (OKV) Keystore

To streamline access to the Oracle Key Vault (OKV) keystore, I will store the OKV password as a secret within the keystore. This setup enables AUTO_LOGIN, eliminating the need for manual password entry when accessing the OKV keystore.

The database can securely authenticate with OKV by configuring auto-login, ensuring seamless key management while maintaining strong encryption security.

Note: Before creating a new OKV SSO file, back up the current SSO file.

ADMINISTER KEY MANAGEMENT ADD SECRET '0KV2025!' FOR CLIENT 'HSM_PASSWORD' TO AUTO_LOGIN KEYSTORE '/u01/app/oracle/admin/TWHSE01/wallet_root/tde';
      
The parameter "KEYSTORE_CONFIGURATION=OKV|FILE" means that the database will get the encryption key from OKV and the auto_login file cwallet.sso from local disk.
    
alter system set tde_configuration = "KEYSTORE_CONFIGURATION=OKV|FILE" scope=both sid='*';
    
    

Validate the wallet status


SQL> set lines 600
SQL> col WALLET for a20
SQL> col WALLET_LOCATION for a80
SQL> select WRL_TYPE wallet, status, WALLET_TYPE, wrl_parameter wallet_location, KEYSTORE_MODE from v$encryption_wallet;

WALLET               STATUS                         WALLET_TYPE          WALLET_LOCATION                                                                  KEYSTORE
-------------------- ------------------------------ -------------------- -------------------------------------------------------------------------------- --------
FILE                 OPEN                           UNKNOWN              /u01/app/oracle/admin/SWHSE01/wallet/tde/                                        NONE
OKV                  CLOSED                         UNKNOWN                                                                                               NONE
FILE                 OPEN                           UNKNOWN                                                                                               UNITED
OKV                  CLOSED                         UNKNOWN                                                                                               UNITED
FILE                 OPEN                           UNKNOWN                                                                                               UNITED
OKV                  CLOSED                         UNKNOWN                                                                                               UNITED

6 rows selected.

SQL>

    
Before migrating keys, ensure the wallet is open. To open the wallet using OKV, you must first close the file-based auto-login wallet.

    ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE;
    ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "0KV2025!"; -- OKV endpoint password
    

After this, you can validate the wallet status; both the OKV and FILE-based wallets should show "OPEN_NO_MASTER_KEY".
TDE status after TDE_CONFIGURATION change:


    
WALLET               STATUS                         WALLET_TYPE          WALLET_ORDER         WALLET_LOCATION                                                                  KEYSTORE
-------------------- ------------------------------ -------------------- -------------------- -------------------------------------------------------------------------------- --------
FILE                 OPEN_NO_MASTER_KEY             AUTOLOGIN            SINGLE               /u01/app/oracle/admin/TWHSE01/wallet_root/tde/                                   NONE
OKV                  OPEN_NO_MASTER_KEY             OKV                  SINGLE     
SQL>
   
    

Migrate the FILE local wallet to OKV

Now, execute the key migration command. This command must complete successfully to ensure full integration with OKV.

    
SQL> ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY "0KV2025!" MIGRATE USING "F1LE2025!"  WITH BACKUP;
keystore altered.
    

Validate wallet status


Once the TDE key migration is complete, the status should display OKV as the primary keystore and File as the secondary.


    
SQL> @tde_status.sql

WALLET               STATUS                         WALLET_TYPE          WALLET_ORDER         WALLET_LOCATION                                                                  KEYSTORE
-------------------- ------------------------------ -------------------- -------------------- -------------------------------------------------------------------------------- --------
FILE                 OPEN                           AUTOLOGIN            SECONDARY            /u01/app/oracle/admin/TWHSE01/wallet_root/tde/                                   NONE
OKV                  OPEN                           OKV                  PRIMARY                                                                                               NONE

SQL>

    
    

Conclusion

The recent Oracle Cloud security incident serves as a stark reminder of the evolving threat landscape. While Oracle Cloud provides strong security capabilities, organizations must take a proactive approach to encryption key management and data protection. Implementing solutions like Oracle Key Vault enhances security, streamlines compliance, and safeguards sensitive information against potential breaches. By prioritizing robust key management strategies, organizations can significantly reduce risks and ensure the integrity of their cloud environments.

Tuesday, April 1, 2025

Protecting Your Encryption Keys: Lessons from the Oracle Cloud Security Breach (OKV - Part 1 )

 








Intro

Recent reports have surfaced about a significant security breach involving Oracle Cloud. A threat actor, identified as "rose87168," claims to have exfiltrated over six million records, potentially affecting more than 140,000 tenants. The compromised data reportedly includes encrypted Single Sign-On (SSO) and Lightweight Directory Access Protocol (LDAP) passwords, Java KeyStores (JKS) files, and Enterprise Manager Java Platform Security (JPS) keys. The attacker is allegedly demanding payments from affected organizations to prevent further exposure.

This incident underscores the critical need for robust encryption key management and security best practices. While Oracle Cloud offers built-in security measures, organizations must proactively safeguard their sensitive credentials and encryption keys. This is where Oracle Key Vault (OKV) plays a crucial role in enhancing data security and mitigating potential threats.

It's not advisable to store your Transparent Data Encryption (TDE) keys locally, as disk corruption could lead to the loss of these critical keys. Managing keys manually across multiple Oracle and MySQL databases with TDE encryption can be challenging and error-prone. That’s why I always encourage customers to back up their keys to an external repository as part of disaster recovery planning. Oracle Key Vault (OKV) serves a crucial role in this context by providing a centralized repository to protect and manage keys. This product is highly mature in terms of security and offers robust protection for sensitive data.

How Oracle Key Vault Can Help

Oracle Key Vault (OKV) is designed to provide centralized and secure storage for encryption keys, certificates, and credentials, reducing the risk of unauthorized access. In light of this breach, organizations leveraging OKV can benefit in the following ways:

  1. Secure Key Management – OKV provides a dedicated environment for securely storing and managing encryption keys, preventing exposure even if other credentials are compromised.

  2. Separation of Duties – By decoupling encryption key storage from application environments, OKV minimizes the risk of key exposure in case of a breach.

  3. End-to-End Encryption – Ensuring that sensitive data, including authentication credentials and cryptographic keys, remains encrypted both in transit and at rest.

  4. Auditing & Compliance – OKV provides robust auditing capabilities, helping organizations track key access and usage to meet regulatory compliance requirements.

  5. Automated Key Rotation – Regular key rotation can help mitigate risks associated with long-term key exposure, making it harder for attackers to exploit stolen credentials.


In this blog, I will walk you through the installation of the Oracle Key Vault (OKV) appliance in an on-premises environment. For this demonstration, I’ll be using VirtualBox to set up and test the deployment.

Initial Step 

The first step is to download the Oracle Key Vault (OKV) ISO image from Oracle e-Delivery.

  1. Visit Oracle e-Delivery.

  2. In the search bar, type Oracle Key Vault and locate the latest version (21.10).

  3. Download the ISO file for installation.




                                                         Figure 1 : Download OKV


Create a VM that meets the following system requirements:

  • CPU: Minimum: x86-64 16 cores. Recommended: 24-48 cores with cryptographic acceleration support (Intel AESNI).
  • Memory: Minimum 16 GB of RAM. Recommended: 32–64 GB.
  • Disk: Both BIOS and UEFI boot modes are supported. For a system with a boot disk larger than 2 TB, Oracle Key Vault supports booting in UEFI mode only.
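The CPU and memory minimums can be checked on the virtualization host before building the VM. A minimal Linux-only sketch; the thresholds come from the list above, everything else is illustrative:

```shell
# Check host resources against the OKV minimums (16 cores, 16 GB RAM).
cores=$(nproc)
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))

echo "host: ${cores} cores, ${mem_gb} GB RAM"
[ "$cores" -ge 16 ]  || echo "WARNING: fewer than the minimum 16 cores"
[ "$mem_gb" -ge 16 ] || echo "WARNING: less than the minimum 16 GB RAM"
```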

Installation 

After downloading the OKV ISO, boot your system from the ISO image. Once the installation menu appears, simply press Enter to start a fresh installation of Oracle Key Vault.

                                             
                                                    Figure 2: Installation   


Note: The installation will fail if the recommended values are not provided. For example, insufficient disk space can cause the installation to fail, as shown below.



                                                 Figure 3: Installation failure.

After pressing Enter, the installation process will begin by creating the Logical Volume Manager (LVM) and then proceed with the database installation.


                                            Figure 4 : Installation: LVM creation


During the installation, you will be prompted to provide the ISO file again. Reattach the ISO to proceed with the database installation and Oracle REST Data Services (ORDS), which powers the web-based interface built on APEX.



                                                       Figure 5: ISO file prompt 01 


                      

                                                    Figure 6 : ISO file prompt 02 


Setup Network


The next step in the installation process is configuring the network for Oracle Key Vault (OKV). Since this is a test setup, I am selecting Classic Mode for the network configuration.

  
                                                         Figure 7 : Setup network

Choose the appropriate network interface and configure the IP address for the Oracle Key Vault (OKV) appliance.


                                                      Figure 8 : Setup network - 2


                                                     Figure 9 : Setup network - 3


After applying the Database Release Update (RU), the installation proceeds with deploying the Oracle Key Vault (OKV) application within Oracle REST Data Services (ORDS).



                                                    Figure 9: Apply DB RU



                                                       Figure 10: Beginning application installation.

This figure illustrates the final step of the application installation process.


                                              Figure 10: Completing the application installation.

Now we can access the OKV URL using the IP address or the hostname.



                                                           Figure 11: OKV login page

Set up roles and users for OKV.

There are 3 different roles in OKV. 
  • Key Administrator

  • System Administrator

  • Audit Manager

Alternatively, you can use the same account for all these roles. For this configuration, I will use a single account.



                                                    Figure 12: Setup users.

Once everything is configured, this is how the dashboard will appear when you log in to OKV.



                                              Figure 13 : Dashboard view.


In the next article, I will show you how to migrate TDE keys to OKV.

Conclusion

In an era where AI-driven security threats are becoming increasingly sophisticated, robust encryption key management is more critical than ever. Oracle Key Vault (OKV) provides a secure, centralized solution for managing encryption keys, reducing the risks associated with manual handling and local storage. By integrating OKV into their security strategy, organizations can enhance data protection, ensure compliance, and mitigate potential threats. As cybersecurity challenges evolve, proactive key management remains a fundamental pillar of a strong defense strategy.

Friday, March 21, 2025

Ensuring Database Integrity During Migration: A Production DBA's Challenge with Datapatch Execution (19c PDB : datapatch issue :ORA-01110 )

 





Intro

In today’s fast-paced data-driven world, data is more valuable than gold, and protecting databases has become a critical priority. As a production DBA, one of my key responsibilities is ensuring that the software stays up to date with the latest patches, as Oracle releases these patches every quarter.

Recently, I faced a challenge after migrating a database from ODA to Exadata, upgrading from Oracle version 19.13 to 19.24. During the process, the datapatch execution failed on a pluggable database (PDB) due to a missing PDB temporary file.

Before running datapatch, I performed a sanity check to confirm that everything was in good condition, and the results were clean.

SANITY CHECK

Sanity check results : 



[oracle@exanode01 OPatch]$ ./datapatch -sanity_checks
SQL Patching sanity checks version 19.24.0.0.0 on Wed 23 Oct 2024 04:20:17 PM UTC
Copyright (c) 2021, 2024, Oracle.  All rights reserved.

Log file for this invocation: /u02/app/oracle/cfgtoollogs/sqlpatch/sanity_checks_20241023_162017_119323/sanity_checks_20241023_162017_119323.log

Running checks
Use of uninitialized value in concatenation (.) or string at /u02/app/oracle/product/19.0.0.0/dbhome_1/sqlpatch/sqlpatch_sanity_checks.pm line 645.
Use of uninitialized value in concatenation (.) or string at /u02/app/oracle/product/19.0.0.0/dbhome_1/sqlpatch/sqlpatch_sanity_checks.pm line 645.
Use of uninitialized value in concatenation (.) or string at /u02/app/oracle/product/19.0.0.0/dbhome_1/sqlpatch/sqlpatch_sanity_checks.pm line 645.

JSON report generated in /u02/app/oracle/cfgtoollogs/sqlpatch/sanity_checks_20241023_162017_119323/sqlpatch_sanity_checks_summary.json file
Checks completed. Printing report:

Check: Database component status - OK
Check: PDB Violations - OK
Check: Invalid System Objects - OK
Check: Tablespace Status - OK
Check: Backup jobs - OK
Check: Temp file exists - OK
Check: Temp file online - OK
Check: Data Pump running - OK
Check: Container status - OK
Check: Oracle Database Keystore - OK
Check: Dictionary statistics gathering - WARNING
  Patching the database while dictionary statistics are being collected may lead to performance issues.
  Dictionary statistics are currently being collected in the database.
  Consider patching the database when dictionary statistics are not being collected. To check if such operations are currently running, run this query:
    SELECT sofar, totalwork, units, start_time, time_remaining, message
    FROM sys.v$session_longops
    WHERE opname = 'Gather Dictionary Schema Statistics'
    AND time_remaining > 0;
  PDB$SEED:
    |     LATEST      |        OPERATION        |
    |-----------------+-------------------------|
    | 17-JUL-22 09:16 | gather_dictionary_stats |
    |-----------------+-------------------------|
Check: Scheduled Jobs - WARNING
  Execution of scheduler jobs while database patching is running may lead to failures and/or performance issues.
  There are jobs currently running or scheduled to be executed during next hour.
  Consider patching the database when jobs are not running and will not be scheduled to run during patching.
  To check for jobs that are running or scheduled to run:
    SELECT owner as schema_name, job_name, state, next_run_date
    FROM sys.all_scheduler_jobs
    WHERE state = 'RUNNING'
    UNION
      SELECT owner as schema_name, job_name, state, next_run_date
      FROM sys.all_scheduler_jobs
      WHERE state = 'SCHEDULED'
      and cast(next_run_date as date) > sysdate
      and cast(next_run_date as date) < sysdate + 1/24;
  CDB$ROOT:
    |         JOB_NAME         |            NEXT_RUN_DATE            | SCHEMA_NAME |   STATE   |
    |--------------------------+-------------------------------------+-------------+-----------|
    | CLEANUP_ONLINE_IND_BUILD | 23-OCT-24 04.48.21.428584 PM +00:00 |     SYS     | SCHEDULED |
    |--------------------------+-------------------------------------+-------------+-----------|
    |    CLEANUP_ONLINE_PMO    | 23-OCT-24 04.49.01.459102 PM +00:00 |     SYS     | SCHEDULED |
    |--------------------------+-------------------------------------+-------------+-----------|
    |   CLEANUP_TAB_IOT_PMO    | 23-OCT-24 04.48.31.422134 PM +00:00 |     SYS     | SCHEDULED |
    |--------------------------+-------------------------------------+-------------+-----------|
  EGDEV_PDB1:
    |         JOB_NAME         |            NEXT_RUN_DATE            | SCHEMA_NAME |   STATE   |
    |--------------------------+-------------------------------------+-------------+-----------|
    | CLEANUP_ONLINE_IND_BUILD | 23-OCT-24 05.11.03.390152 PM +00:00 |     SYS     | SCHEDULED |
    |--------------------------+-------------------------------------+-------------+-----------|
    |    CLEANUP_ONLINE_PMO    | 23-OCT-24 05.11.43.403227 PM +00:00 |     SYS     | SCHEDULED |
    |--------------------------+-------------------------------------+-------------+-----------|
    |   CLEANUP_TAB_IOT_PMO    | 23-OCT-24 05.11.13.329389 PM +00:00 |     SYS     | SCHEDULED |
    |--------------------------+-------------------------------------+-------------+-----------|
  EGQA_PDB1:
    |         JOB_NAME         |            NEXT_RUN_DATE            | SCHEMA_NAME |   STATE   |
    |--------------------------+-------------------------------------+-------------+-----------|
    | CLEANUP_ONLINE_IND_BUILD | 23-OCT-24 05.11.03.441575 PM +00:00 |     SYS     | SCHEDULED |
    |--------------------------+-------------------------------------+-------------+-----------|
    |    CLEANUP_ONLINE_PMO    | 23-OCT-24 05.11.43.187682 PM +00:00 |     SYS     | SCHEDULED |
    |--------------------------+-------------------------------------+-------------+-----------|
    |   CLEANUP_TAB_IOT_PMO    | 23-OCT-24 05.11.13.443853 PM +00:00 |     SYS     | SCHEDULED |
    |--------------------------+-------------------------------------+-------------+-----------|
Check: GoldenGate triggers - OK
Check: Logminer DDL triggers - OK
Check: Check sys public grants - OK
Check: Statistics gathering running - OK
Check: Optim dictionary upgrade parameter - OK
Check: Symlinks on oracle home path - OK
Check: Central Inventory - OK
Check: Queryable Inventory dba directories - OK
Check: Queryable Inventory locks - OK
Check: Queryable Inventory package - OK
Check: Queryable Inventory external table - OK
Check: Imperva processes - OK
Check: Guardium processes - OK
Check: Locale - OK

Refer to MOS Note 2680521.1 and debug log
/u02/app/oracle/cfgtoollogs/sqlpatch/sanity_checks_20241023_162017_119323/sanity_checks_debug_20241023_162017_119323.log

SQL Patching sanity checks completed on Wed 23 Oct 2024 04:21:18 PM UTC
[oracle@exanode01  OPatch]$


Everything looked clean, so I executed datapatch -verbose to update the registry information. However, it hung for a long time. When I checked the database alert log, I noticed complaints about the temp files under PDB$SEED.

I decided to kill the session since the temp files were missing. The side effect: PDB$SEED moved to the MOUNT state.


Datapatch output:



[oracle@exanode01 OPatch]$ pwd
/u02/app/oracle/product/19.0.0.0/dbhome_2/OPatch

[oracle@exanode01 OPatch]$ ./datapatch -verbose
SQL Patching tool version 19.24.0.0.0 Production on Tue Dec 10 22:15:39 2024
Copyright (c) 2012, 2024, Oracle.  All rights reserved.

Log file for this invocation: /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_380836_2024_12_10_22_15_39/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...Unable to acquire sqlpatch global lock in EXCLUSIVE mode
because another datapatch session is currently running.
Waiting for that session to complete before continuing...



^CError: prereq checks failed!
DBD::Oracle::st execute failed: ORA-01013: user requested cancel of current operation
ORA-06512: at "SYS.DBMS_LOCK", line 92
ORA-06512: at "SYS.DBMS_LOCK", line 108
ORA-06512: at line 9 (DBD ERROR: OCIStmtExecute) [for Statement "DECLARE
           lockmode INTEGER;
         BEGIN
           IF ? = 'SHARED' THEN
             lockmode := dbms_lock.s_mode;
           ELSE
             lockmode := dbms_lock.x_mode;
           END IF;
           ? := dbms_lock.request(?, lockmode, dbms_lock.maxwait, false);
         END;" with ParamValues: :p1='EXCLUSIVE', :p2='1', :p3='10737418441073741844167'] at /u02/app/oracle/product/19.0.0.0/dbhome_2/sqlpatch/sqlpatch.pm line 2134.


Please refer to MOS Note 1609718.1 and/or the invocation log
/u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_380836_2024_12_10_22_15_39/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Tue Dec 10 22:26:39 2024
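Before killing a waiting datapatch session, it can help to confirm which session actually holds the sqlpatch global lock. Datapatch requests that lock through DBMS_LOCK, and DBMS_LOCK user locks appear in GV$LOCK with type 'UL'. The query below is a general diagnostic sketch, not part of the original transcript:

```sql
-- Identify holders and waiters of DBMS_LOCK user locks ('UL'),
-- which is how datapatch implements its sqlpatch global lock.
-- LMODE = 0 means the session is still waiting for the lock.
SELECT s.inst_id, s.sid, s.serial#, s.username, s.program,
       DECODE(l.lmode, 0, 'WAITING', 'HOLDING') AS lock_state
FROM   gv$lock l
       JOIN gv$session s
         ON s.inst_id = l.inst_id
        AND s.sid     = l.sid
WHERE  l.type = 'UL';
```

If the holder is a stuck or abandoned datapatch run, you can then decide whether to wait or terminate it.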

We migrated this database using the RMAN backup-and-restore method. RMAN, however, does not restore temp files during that process, so the PDB$SEED temp files were missing. That, in turn, prevented datapatch from completing its operations in PDB$SEED.


Alert log error:



PDB$SEED(2):Pluggable database PDB$SEED closing
PDB$SEED(2):Increasing priority of 2 RS
PDB$SEED(2):JIT: pid 141609 requesting stop
PDB$SEED(2):Closing sequence subsystem (601820723513).
PDB$SEED(2):Buffer Cache flush started: 2
PDB$SEED(2):Buffer Cache flush finished: 2
2024-10-23T16:25:13.153882+00:00
Errors in file /u02/app/oracle/diag/rdbms/egdev_exa/DEV1/trace/DEV1_dbw0_59940.trc:
ORA-01157: cannot identify/lock data file 1026 - see DBWR trace file
ORA-01110: data file 1026: '/u02/app/oracle/oradata/DEV/DEV/datafile/temp012024-05-06_05-30-49-610-AM.dbf'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
2024-10-23T16:25:13.154139+00:00
Errors in file /u02/app/oracle/diag/rdbms/dev_exa/DEV1/trace/DEV1_dbw0_59940.trc:
ORA-01186: file 1026 failed verification tests
ORA-01157: cannot identify/lock data file 1026 - see DBWR trace file
ORA-01110: data file 1026: '/u02/app/oracle/oradata/DEV/DEV/datafile/temp012024-05-06_05-30-49-610-AM.dbf'
2024-10-23T16:25:13.154193+00:00
File 1026 not verified due to error ORA-01157

Error from the datapatch log:



[2024-10-23 16:25:12] sql_error_handler called: [2024-10-23 16:25:12] 1 ADD_TO_QUEUE PDB$SEED: DBD::Oracle::st bind_param failed: ORA-01157: cannot identify/lock data file 1026 - see DBWR trace file
ORA-01110: data file 1026: '/u02/app/oracle/oradata/EGDEV/EGDEV/datafile/temp012024-05-06_05-30-49-610-AM.dbf' (DBD ERROR: OCILobCreateTemporary) [for Statement "DECLARE
             rec dba_registry_sqlpatch%ROWTYPE;
           BEGIN
             rec.patch_id := :patch_id;
             rec.patch_uid := :patch_uid;
             rec.patch_type := :patch_type;
             rec.action := :action;
             rec.description := SUBSTR(:description, '1', '100');
             rec.flags := :flags;
             rec.patch_descriptor := :patch_descriptor;
             rec.patch_directory := :patch_directory;
             rec.source_version := :source_version;
             rec.source_build_description := :source_build_description;
             rec.source_build_timestamp :=
               TO_TIMESTAMP(:source_build_timestamp, 'YYMMDDHH24MISS');
             rec.target_version := :target_version;
             rec.target_build_description := :target_build_description;
             rec.target_build_timestamp :=
               TO_TIMESTAMP(:target_build_timestamp, 'YYMMDDHH24MISS');
             sys.dbms_sqlpatch.set_patch_metadata(rec);


Resolving PDB$SEED in MOUNT State to Create Tempfiles

After PDB$SEED moved to the MOUNT state, it could no longer be accessed to create the necessary tempfiles. I used the following method to reopen PDB$SEED and proceed with tempfile creation.


SQL> alter session set container=PDB$SEED;
SQL> alter session set "_oracle_script"=TRUE;
SQL> alter pluggable database pdb$seed close immediate instances=all;
SQL> alter pluggable database pdb$seed OPEN READ ONLY;
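With the seed reopened, a quick check (not in the original transcript) confirms its open mode before retrying anything else:

```sql
-- Verify the seed is back in READ ONLY mode after the reopen.
SELECT name, open_mode FROM v$pdbs;
```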



Recreating Tempfiles in PDB$SEED

To resolve the missing temp files in PDB$SEED, follow these steps:

  1. Create New Tempfiles: Create new tempfiles so that the database can function as expected.
  2. Handle Old Tempfiles: Take the old tempfile entries, which point to files that no longer exist, offline and drop them.

This ensures the smooth creation of new temp files and the proper cleanup of the stale entries.
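The tempfile-creation step can be sketched as follows. The tablespace name TEMP, the file path, and the sizing clauses are illustrative assumptions; check DBA_TEMP_FILES inside PDB$SEED for the actual names in your environment:

```sql
-- Run inside the seed: alter session set container=PDB$SEED;
-- Tablespace name, path, and sizes below are assumptions, not
-- values taken from the original environment.
ALTER TABLESPACE temp
  ADD TEMPFILE '/u02/app/oracle/oradata/DEV/DEV/datafile/temp01.dbf'
  SIZE 2G AUTOEXTEND ON NEXT 256M MAXSIZE 32767M;
```

With a valid tempfile in place, the stale entries can then be taken offline and dropped, as shown in the transcript.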



SQL> alter database tempfile '/u02/app/oracle/oradata/DEV/DEV/datafile/temp012024-05-06_05-30-49-610-AM.dbf' offline;

Database altered.

SQL>  alter database tempfile '/u02/app/oracle/oradata/DEV/DEV/datafile/temp012024-05-06_05-30-49-610-AM.dbf' drop including datafiles;

Database altered.

SQL>


Once all the necessary changes are in place, rerun datapatch -verbose to confirm that the patching process completes successfully.


[oracle@exadevdb02-tsgb71 OPatch]$ ./datapatch -verbose
SQL Patching tool version 19.24.0.0.0 Production on Wed Oct 23 16:53:31 2024
Copyright (c) 2012, 2024, Oracle.  All rights reserved.

Log file for this invocation: /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_258820_2024_10_23_16_53_31/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...

done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done

Determining current state...done

Current state of interim SQL patches:
Interim patch 34086870 (OJVM RELEASE UPDATE: 19.16.0.0.220719 (34086870)):
  Binary registry: Not installed
  PDB CDB$ROOT: Applied successfully on 17-JUL-22 09.18.13.470863 AM
  PDB DEV_PDB1: Applied successfully on 17-JUL-22 09.18.24.040293 AM
  PDB QA_PDB1: Applied successfully on 17-JUL-22 09.18.24.040293 AM
  PDB PDB$SEED: Applied successfully on 17-JUL-22 09.18.24.040293 AM
Interim patch 36414915 (OJVM RELEASE UPDATE: 19.24.0.0.240716 (36414915)):
  Binary registry: Installed
  PDB CDB$ROOT: Not installed
  PDB DEV_PDB1: Not installed
  PDB QA_PDB1: Not installed
  PDB PDB$SEED: Not installed

Current state of release update SQL patches:
  Binary registry:
    19.24.0.0.0 Release_Update 240627235157: Installed
  PDB CDB$ROOT:
    Applied 19.16.0.0.0 Release_Update 220703022223 successfully on 17-JUL-22 09.18.13.460370 AM
  PDB DEV_PDB1:
    Applied 19.16.0.0.0 Release_Update 220703022223 successfully on 17-JUL-22 09.18.24.036238 AM
  PDB QA_PDB1:
    Applied 19.16.0.0.0 Release_Update 220703022223 successfully on 17-JUL-22 09.18.24.036238 AM
  PDB PDB$SEED:
    Applied 19.16.0.0.0 Release_Update 220703022223 successfully on 17-JUL-22 09.18.24.036238 AM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED DEV_PDB1 QA_PDB1
    The following interim patches will be rolled back:
      34086870 (OJVM RELEASE UPDATE: 19.16.0.0.220719 (34086870))
    Patch 36582781 (Database Release Update : 19.24.0.0.240716 (36582781)):
      Apply from 19.16.0.0.0 Release_Update 220703022223 to 19.24.0.0.0 Release_Update 240627235157
    The following interim patches will be applied:
      36414915 (OJVM RELEASE UPDATE: 19.24.0.0.240716 (36414915))

Installing patches...

Patch installation complete.  Total patches installed: 12

Validating logfiles...done
Patch 34086870 rollback (pdb CDB$ROOT): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/34086870/24803071/34086870_rollback_DEV_CDBROOT_2024Oct23_16_54_23.log (no errors)
Patch 36582781 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/36582781/25751445/36582781_apply_DEV_CDBROOT_2024Oct23_16_54_24.log (no errors)
Patch 36414915 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/36414915/25700138/36414915_apply_DEV_CDBROOT_2024Oct23_16_54_24.log (no errors)
Patch 34086870 rollback (pdb PDB$SEED): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/34086870/24803071/34086870_rollback_DEV_PDBSEED_2024Oct23_16_58_32.log (no errors)
Patch 36582781 apply (pdb PDB$SEED): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/36582781/25751445/36582781_apply_DEV_PDBSEED_2024Oct23_16_58_59.log (no errors)
Patch 36414915 apply (pdb PDB$SEED): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/36414915/25700138/36414915_apply_DEV_PDBSEED_2024Oct23_16_58_59.log (no errors)
Patch 34086870 rollback (pdb DEV_PDB1): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/34086870/24803071/34086870_rollback_DEV_DEV_PDB1_2024Oct23_16_58_32.log (no errors)
Patch 36582781 apply (pdb DEV_PDB1): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/36582781/25751445/36582781_apply_DEV_DEV_PDB1_2024Oct23_16_58_34.log (no errors)
Patch 36414915 apply (pdb DEV_PDB1): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/36414915/25700138/36414915_apply_DEV_DEV_PDB1_2024Oct23_16_58_33.log (no errors)
Patch 34086870 rollback (pdb QA_PDB1): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/34086870/24803071/34086870_rollback_DEV_QA_PDB1_2024Oct23_16_58_31.log (no errors)
Patch 36582781 apply (pdb QA_PDB1): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/36582781/25751445/36582781_apply_DEV_QA_PDB1_2024Oct23_16_58_33.log (no errors)
Patch 36414915 apply (pdb QA_PDB1): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/36414915/25700138/36414915_apply_DEV_QA_PDB1_2024Oct23_16_58_33.log (no errors)
SQL Patching tool complete on Wed Oct 23 17:04:31 2024

Conclusion

Migrating a database and applying patches is a complex process that requires careful attention to detail. Although the sanity check showed no immediate issues, the missing PDB$SEED temp files caused a failure during the datapatch execution. This experience reinforced the importance of verifying all database components, especially temp files, before patching. By recreating the missing temp files and cleaning up the stale entries, I was able to resolve the issue and complete the patching process successfully. As a production DBA, these challenges are part of the job, and being proactive and methodical ensures smooth database operations and minimal downtime.

Friday, January 31, 2025

EXACS: Creating an 11g Database Home

 



Intro

Oracle Exadata Cloud Service (ExaCS) historically supported Oracle Database 11g, but with recent updates, Oracle has phased out support for older database versions like 11g in ExaCS. While existing Oracle Database 11g instances may continue to run, it is no longer possible to provision new Oracle 11g databases on Exadata Cloud Service through the Oracle Cloud Infrastructure (OCI) console. Oracle encourages users to upgrade to more recent supported versions, such as Oracle Database 19c or 23ai, to leverage enhanced features, security, and performance improvements.

Some customers continue to run Oracle 11g because they have not been able to schedule downtime for their mission-critical business applications. Additionally, Oracle Database 11g Release 2 (11.2.0.4) is considered one of the most stable releases Oracle has built. These customers also face challenges in finding an efficient migration method, as their databases have grown to terabyte sizes.

Customers are still stuck on Oracle 11g for several key reasons:

Mission-Critical Applications: Many businesses run critical applications that require constant uptime, making it difficult to schedule downtime for upgrades or migrations.

Stable Environment: Oracle Database 11g Release 2 (11.2.0.4) is considered a highly stable release. Some businesses prefer to stick with this known stability rather than risk issues with newer versions.

Cost and Complexity: Upgrading Oracle databases, especially from 11g, can be expensive and complex. It may involve re-architecting applications, testing, and retraining staff, which some companies may not have the resources to handle.

Large Databases: Over time, databases have grown to terabyte sizes, which makes migration and upgrade processes much more challenging in terms of time, performance, and storage.

Custom Solutions and Legacy Code: Many organizations have customized solutions that are tightly integrated with Oracle 11g, and updating these systems can require significant rework, leading to compatibility issues with newer versions.

Business Prioritization: In some cases, businesses prioritize operational stability over modernization, choosing to delay upgrades to focus on other initiatives.

In this article, I will show you how to create an 11g database home in ExaCS using dbaascli.

Note: Do this only if you still have upgrade support for 11g; Oracle no longer officially supports running 11g in the ExaCS environment.

List db images in EXACS



[root@exadevdb-01 ~]# dbaascli cswlib showImages
DBAAS CLI version 24.3.1.0.0
Executing command cswlib showImages
Job id: 10706bde-2e70-42fe-a371-7cb9bd572f7c
Session log: /var/opt/oracle/log/cswLib/showImages/dbaastools_2024-09-06_05-54-57-PM_59941.log
Log file location: /var/opt/oracle/log/cswLib/showImages/dbaastools_2024-09-06_05-54-57-PM_59941.log

############ List of Available database Artifacts  #############

1.IMAGE_TAG=11.2.0.4.230418
  VERSION=11.2.0.4.230418
  DESCRIPTION=11.2 APR 2023 DB Image
2.IMAGE_TAG=12.2.0.1.230718
  VERSION=12.2.0.1.230718
  DESCRIPTION=12.2 JUL 2023 DB Image
3.IMAGE_TAG=18.23.0.0.0
  VERSION=18.23.0.0.0
  DESCRIPTION=18c JUL 2023 DB Image
4.IMAGE_TAG=19.22.0.0.0
  VERSION=19.22.0.0.0
  DESCRIPTION=19c JAN 2024 DB Image
5.IMAGE_TAG=12.1.0.2.231017
  VERSION=12.1.0.2.231017
  DESCRIPTION=12.1 OCT 2023 DB Image
6.IMAGE_TAG=23.4.0.24.05
  VERSION=23.4.0.24.05
  DESCRIPTION=23ai DB image 23.4.0.24.05
7.IMAGE_TAG=18.22.0.0.0
  VERSION=18.22.0.0.0
  DESCRIPTION=18c APR 2023 DB Image
8.IMAGE_TAG=11.2.0.4.231017
  VERSION=11.2.0.4.231017
  DESCRIPTION=11.2 OCT 2023 DB Image
9.IMAGE_TAG=12.1.0.2.230418
  VERSION=12.1.0.2.230418
  DESCRIPTION=12.1 APR 2023 DB Image
10.IMAGE_TAG=19.23.0.0.0
  VERSION=19.23.0.0.0
  DESCRIPTION=19c APR 2024 DB Image
11.IMAGE_TAG=19.20.0.0.0
  VERSION=19.20.0.0.0
  DESCRIPTION=19c JUL 2023 DB Image
12.IMAGE_TAG=23.5.0.24.07
  VERSION=23.5.0.24.07
  DESCRIPTION=23ai DB image 23.5.0.24.07
13.IMAGE_TAG=12.2.0.1.230418
  VERSION=12.2.0.1.230418
  DESCRIPTION=12.2 APR 2023 DB Image
14.IMAGE_TAG=11.2.0.4.230718
  VERSION=11.2.0.4.230718
  DESCRIPTION=11.2 JUL 2023 DB Image
15.IMAGE_TAG=12.1.0.2.230718
  VERSION=12.1.0.2.230718
  DESCRIPTION=12.1 JUL 2023 DB Image
16.IMAGE_TAG=19.24.0.0.0
  VERSION=19.24.0.0.0
  DESCRIPTION=19c JUL 2024 DB Image
17.IMAGE_TAG=12.2.0.1.231017
  VERSION=12.2.0.1.231017
  DESCRIPTION=12.2 OCT 2023 DB Image
18.IMAGE_TAG=18.24.0.0.0
  VERSION=18.24.0.0.0
  DESCRIPTION=18c OCT 2023 DB Image
19.IMAGE_TAG=19.21.0.0.0
  VERSION=19.21.0.0.0
  DESCRIPTION=19c OCT 2023 DB Image
Images can be downloaded using their image tags. For details, see help using 'dbaascli cswlib download --help'.

dbaascli execution completed
[root@exadevdb-01 ~]#



Download the 11204 binary, OCT2023 version.


dbaascli cswlib download --version 11204 --bp OCT2023 --bp_update no
Sample output


[root@exadevdb-01 ~]# dbaascli cswlib download --version 11204 --bp OCT2023 --bp_update no
DBAAS CLI version 24.3.2.0.0
Executing command cswlib download --version 11204 --bp OCT2023 --bp_update no
Job id: 8dab6933-9505-40a9-a398-c1ba46eae6e3
Session log: /var/opt/oracle/log/cswLib/download/dbaastools_2024-09-19_02-00-06-PM_166429.log
Loading PILOT...
Session ID of the current execution is: 40
Log file location: /var/opt/oracle/log/cswLib/download/pilot_2024-09-19_02-00-10-PM_166741
-----------------
Running initialize job
Completed initialize job
-----------------
Running validate_url_reachability job
Completed validate_url_reachability job
-----------------
Running validate_image_tag_existence job
Completed validate_image_tag_existence job
-----------------
Running validate_free_space job
Completed validate_free_space job
-----------------
Running validate_file_permissions job
Completed validate_file_permissions job
-----------------
Running download_image job
Image location=/var/opt/oracle/dbaas_acfs/dbnid/11.2.0.4.231017
Download succeeded
Completed download_image job
-----------------
Running decrypt job
Completed decrypt job
-----------------
Running post_download_update job
Completed post_download_update job
-----------------
Running verify_signature job
Skipping verifySignature check because the catalog contains signed property as false
Completed verify_signature job
-----------------
Running validate_sha256sum job
Skipping Sha256Sum check because the catalog does not contain sha256sum
Skipping Sha256Sum check because the catalog does not contain sha256sum
Completed validate_sha256sum job
-----------------
Running update_file_permissions job
Completed update_file_permissions job
{"11.2.0.4.231017":{"file_list":[{"file":"/var/opt/oracle/dbaas_acfs/dbnid/11.2.0.4.231017/db11204_bits_EXA.zip"},{"file":"/var/opt/oracle/dbaas_acfs/dbnid/11.2.0.4.231017/exadbf_11204.tar.gz"}]}}

dbaascli execution completed
[root@exadevdb-01 ~]#


List downloaded db images

After downloading the image, you can list the local database binary images using dbaascli cswLib listLocal.


[opc@exadevdb-01 ~]$ sudo dbaascli cswLib listLocal
DBAAS CLI version 24.3.1.0.0
Executing command cswLib listLocal
Job id: 0405333f-04d5-4043-97f6-0fa0a9eca51b
Session log: /var/opt/oracle/log/cswLib/listLocal/dbaastools_2024-09-06_06-15-17-PM_154824.log
Log file location: /var/opt/oracle/log/cswLib/listLocal/dbaastools_2024-09-06_06-15-17-PM_154824.log

############ List of Available Database Images  #############

1.IMAGE_TAG=11.2.0.4.231017
  IMAGE_SIZE=3GB
  VERSION=11.2.0.4.231017
  DESCRIPTION=11.2 OCT 2023 DB Image
2.IMAGE_TAG=19.24.0.0.0
  IMAGE_SIZE=5GB
  VERSION=19.24.0.0.0
  DESCRIPTION=19c JUL 2024 DB Image

dbaascli execution completed
[opc@exadevdb-01 ~]$

Create a new database home using 11g binary

The next step is to create a database home from the downloaded 11.2.0.4 image:

[root@exadevdb-01 ~]# dbaascli dbhome create --version 11.2.0.4
DBAAS CLI version 24.3.2.0.0
Executing command dbhome create --version 11.2.0.4
Job id: 615dc2e8-af82-47c1-8612-526082b6e9bf
Session log: /var/opt/oracle/log/dbHome/create/dbaastools_2024-10-10_07-30-24-PM_99121.log
Loading PILOT...
Session ID of the current execution is: 34
Log file location: /var/opt/oracle/log/dbHome/create/pilot_2024-10-10_07-30-27-PM_99403
-----------------
Running Plugin_initialization job
Acquiring native write lock: _u02_app_oracle_product_11.2.0_dbhome_1
Acquiring native write lock: OraHome2
Completed Plugin_initialization job
-----------------
Running OH_image_validate job
Completed OH_image_validate job
-----------------
Running DB_home_version_check job
Completed DB_home_version_check job
-----------------
Running DB_home_environment_version_check job
Completed DB_home_environment_version_check job
-----------------
Running GI_version_check job
Completed GI_version_check job
-----------------
Running Validate_backup_location job
Completed Validate_backup_location job
-----------------
Running Cluster_nodes_check job
Completed Cluster_nodes_check job
-----------------
Running Validate_users_group job
Completed Validate_users_group job
-----------------
Running OH_image_download job
Completed OH_image_download job
-----------------
Running OH_image_version_check job
Completed OH_image_version_check job
-----------------
Running Disk_space_check job
Completed Disk_space_check job
Acquiring write lock: provisioning
-----------------
Running Pre_OH_creation_lock_manager job
Completed Pre_OH_creation_lock_manager job
-----------------
Running OH_pre_existence_check job
Completed OH_pre_existence_check job
-----------------
Running Local_node_oh_image_unzip job
Completed Local_node_oh_image_unzip job
-----------------
Running OH_creation_cvu_prechecks job
Completed OH_creation_cvu_prechecks job
-----------------
Running Local_node_oh_clone job

Completed Local_node_oh_clone job
-----------------
Running Update_dbnid_bits job
Skipping. Job is detected as not applicable.
-----------------
Running Local_node_oh_backup_creation job
Completed Local_node_oh_backup_creation job
-----------------
Running Remote_nodes_software_copy job

Completed Remote_nodes_software_copy job
-----------------
Running Remote_nodes_attach_home job
Completed Remote_nodes_attach_home job
-----------------
Running Remote_nodes_post_install_steps job
Skipping. Job is detected as not applicable.
-----------------
Running Inventory_nodes_list_update job
Completed Inventory_nodes_list_update job
-----------------
Running Root_script_run job
Completed Root_script_run job
-----------------
Running Post_OH_creation_lock_manager job
Completed Post_OH_creation_lock_manager job
Releasing lock: provisioning
-----------------
Running release_lock job
Releasing native lock: OraHome2
Releasing native lock: _u02_app_oracle_product_11.2.0_dbhome_1
Completed release_lock job
-----------------
Running Generate_dbhome_metadata job
Completed Generate_dbhome_metadata job
-----------------
Running Generate_dbhome_system_details job
Acquiring native write lock: global_dbsystem_details_generation
Releasing native lock: global_dbsystem_details_generation
Completed Generate_dbhome_system_details job
-----------------
Running Plugin_cleanup job

Completed Plugin_cleanup job
---------- START OF PLUGIN RESULT ----------
{"ORACLE_HOME_NAME":"OraHome2","ORACLE_HOME":"/u02/app/oracle/product/11.2.0/dbhome_1"}
---------- END OF PLUGIN RESULT ----------

dbaascli execution completed

Conclusion


While Oracle Exadata Cloud Service (ExaCS) no longer officially supports provisioning new Oracle Database 11g instances through the OCI console, it is still possible to create and manage these databases using tools like dbaascli. This approach provides a practical solution for customers who rely on the stability and performance of Oracle 11g for mission-critical applications but face challenges in scheduling downtime or migrating to newer database versions. 

However, it’s essential to recognize the limitations of continuing to use an unsupported version, such as missing out on critical security updates, performance enhancements, and new features offered by modern versions like Oracle Database 19c or 23ai. Organizations are encouraged to develop a long-term migration strategy to align with Oracle's support roadmap and ensure the sustainability of their database operations.

EXACS: Resolving PDB Console Issue

 





Intro 


The roles of DBAs and architects are continually evolving, becoming more technical and requiring broader expertise. For Oracle DBAs, it's no longer just about managing databases; there's a growing need to understand and work with the OCI (Oracle Cloud Infrastructure) GUI as well. This is because many OCI services depend on the database being properly visible in the OCI console to function correctly.

Recently, I faced an issue after migrating a database from on-premises to an Exadata Cloud Service (ExaCS) deployment: the pluggable database (PDB) did not appear in the OCI console.

I followed the steps below for the migration:

  • Created a dummy database directly from the OCI console.
  • Deleted this dummy database at the backend.
  • Used RMAN to perform a duplicate operation.

  • After the RMAN duplicate, the PDB (pluggable database) was correctly visible under the CDB (container database). However, its status in the OCI console was still displayed as "failed."

    This highlights the importance of understanding the OCI console and ensuring proper configuration post-migration to avoid operational disruptions.


    As shown in Figure 1, the OCI console displays the PDB state as "failed".

                                                        Figure 1 : PDB state from OCI console.

    Troubleshooting steps 


    If a Pluggable Database (PDB) resource appears in a Failed status in the Cloud Console but remains available in the VM, refer to the following My Oracle Support (MOS) note for details on this known issue:

    • MOS Note: Pluggable Database (PDB) Resource Shows Failed Status in Cloud Console While It Is Available in VM (Doc ID 2855062.1).

    Step 1: Validate the CON_UID

    Log in to the database and run the following query to verify the container UID (CON_UID):

    SELECT CON_UID, NAME FROM V$PDBS;

    
    
    SQL> l
      1* select  CON_UID, NAME from V$PDBS
    SQL> /
    
       CON_UID NAME
    ---------- --------------------------------------------------------------------------------------------------------------------------------
    2368539052 PDB$SEED
    1479666877 EGDEV_PDB1
    2175707955 EGQA_PDB1
    
    SQL>
    
    

    Step 2: Confirm the PDB Details

    
    
    dbaascli pdb getDetails --dbname EGPRD --pdbUID 2297085946
    
    

    Step 3: Handle Orphaned PDB Records

  • If the above command returns a valid result, the PDB entry still exists in the backend.
  • In such a case, you may need to delete the orphaned database entry manually.

    Step 4: Delete the PDB Manually


    To manually delete the PDB using dbaascli, run the following command:

    
    dbaascli pdb delete --dbName  --pdbUID 2297085946
    

    Important Note:

    🚨 Before deleting any PDB, always verify the CON_UID returned from the database.

    • If the CON_UID from V$PDBS matches the one being checked, you are dealing with a live PDB.
    • Do NOT delete the database in this case, as it is currently active.

    If the CON_UID in the Cloud Console is different from the one in the database, you can safely delete the PDB from the console.

    Once deleted, wait 15-20 minutes, and the correct PDB will be reflected in the Cloud Console.




                                                      Figure 2: The same console view, highlighting the CON_UID.
