Thursday, August 7, 2025

Performance Issue: Handling High SQL Version Counts with High-Volume Kafka Inserts on ExaCS

 






Intro 

We are in the era of AI, where increasing processing power is crucial for handling growing and complex workloads. As enterprise systems continue to scale, database performance challenges become increasingly common, especially under high-throughput operations. In my experience, no other database platform matches Oracle’s flexibility and tooling when it comes to identifying and resolving such performance issues.

Recently, we encountered a performance bottleneck in an Exadata Cloud@Customer (ExaCS) environment, where the database was handling a high volume of insert operations coming from a Kafka stream.

In this article, I’ll walk through the technical details of the SQL version count issue we faced and the solution we implemented to stabilize performance.

The Issue: Excessive SQL Versioning and Hard Parses

The ExaCS database was receiving a continuous stream of dynamically structured INSERT statements from Kafka. The column structure of these inserts varied significantly; some contained 100 columns, while others had up to 150. This variation was ongoing and unpredictable.

Due to these structural differences, Oracle treated each statement as a unique SQL statement. As a result, the database began to experience:
  • Excessive hard parses

  • High CPU utilization

  • Shared pool contention and pressure

Even though we were running Oracle 19.24, which includes enhancements to SQL plan management and version count handling, the database still created new cursor versions for each structurally distinct INSERT, leading to rapid cursor growth and degraded overall performance.
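To see why Oracle refuses to share the child cursors of a given SQL_ID, the V$SQL_SHARED_CURSOR view exposes one Y/N flag per non-sharing reason. A diagnostic sketch (substitute the SQL_ID reported in AWR; only a few of the many flag columns are shown):

```sql
-- Each 'Y' column is a reason the child cursor could not be shared.
SELECT sql_id, child_number,
       bind_mismatch, bind_length_upgradeable,
       optimizer_mode_mismatch, language_mismatch
FROM   v$sql_shared_cursor
WHERE  sql_id = '&sql_id';
```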


                                    
                                        Figure 1: AWR report for Oracle SQLID version count


Temporary Workaround: Manual Flushing of High Version Count Cursors

As an immediate mitigation step, we identified the SQLs with high version counts and manually flushed them from the shared pool using their memory IDs. This helped temporarily relieve pressure on CPU and memory by:

  • Reducing shared pool bloat

  • Freeing up memory consumed by excessive cursor versions

  • Preventing further hard parsing on the same overloaded SQL

However, it's important to note that this is only a temporary workaround. The relief is short-lived, as the issue resurfaces once new INSERT statements with varying structures continue streaming in from Kafka.

    To clarify, this issue has not been resolved in Oracle 19.24, despite the version including several recent patches and updates. Here’s the output from the environment confirming the exact patch level:
    
    [oracle@exaprd01-node01 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
    34697081;NOT SHIPPING LIBAUTH_SDK_IAM.SO IN 23 SHIPHOME INSTALL
    36538667;JDK BUNDLE PATCH 19.0.0.0.240716
    36414915;OJVM RELEASE UPDATE: 19.24.0.0.240716 (36414915)
    36587798;OCW RELEASE UPDATE 19.24.0.0.0 (36587798)
    36582781;Database Release Update : 19.24.0.0.240716 (36582781)
    OPatch succeeded.
    [oracle@exaprd01-node01 ~]$
    


    To monitor and identify SQL statements with high version counts (typically those contributing to shared pool pressure), you can use the following query:
    
    SELECT version_count, sql_id, sql_text
    FROM   v$sqlarea
    WHERE  version_count > 512;
    
    For any SQLs with unusually high version counts, manual flushing can be performed as a short-term mitigation step using the following commands:

    SELECT inst_id, address, hash_value FROM v$sqlarea WHERE sql_id LIKE '&sqlid';
    EXEC sys.DBMS_SHARED_POOL.PURGE('-ADDRESS-, -HASH-VALUE-', 'C');
    

    Note: Use manual flushing with caution, especially in production environments, as it may impact performance for frequently executed queries.
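    The lookup-and-purge steps above can also be combined into a single PL/SQL block. This is a sketch that purges every cursor above the chosen threshold in one pass; it requires execute privilege on SYS.DBMS_SHARED_POOL, and the same production caution applies:

```sql
-- Purge all cursors whose version count exceeds the threshold (here 512).
BEGIN
  FOR c IN (SELECT address, hash_value
            FROM   v$sqlarea
            WHERE  version_count > 512) LOOP
    sys.dbms_shared_pool.purge(c.address || ',' || c.hash_value, 'C');
  END LOOP;
END;
/
```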

    Permanent Solution

    To address the high version count issue more permanently, Oracle provides specific guidance in the following My Oracle Support note:

    High Version Counts For SQL Statements (>1024) Post Upgrade To 12.2 and Above Causing Database Slow Performance (Doc ID 2431353.1)

    As per the recommendation, you should add the following initialization parameter: _cursor_obsolete_threshold = <recommended_value>

    This parameter helps control the number of obsolete child cursors and can significantly reduce version count growth, improving shared pool performance and overall database stability.

    If you’re running in a RAC environment, apply this change and restart the database instances in a rolling fashion to avoid downtime.

    
    alter system set "_cursor_obsolete_threshold"=1024 scope=spfile;
    
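    After the rolling restart, the effective value of the hidden parameter can be verified from the underlying parameter views. A sketch, to be run as SYS (hidden parameters are not visible in V$PARAMETER by default):

```sql
-- Check the current value of the hidden parameter (run as SYS).
SELECT a.ksppinm AS parameter, b.ksppstvl AS value
FROM   x$ksppi a, x$ksppcv b
WHERE  a.indx = b.indx
AND    a.ksppinm = '_cursor_obsolete_threshold';
```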

    Conclusion

    This issue showed us how dynamic SQL from systems like Kafka can create serious performance problems, even in powerful environments like Exadata Cloud@Customer. Because each INSERT had a slightly different structure, Oracle treated them as new statements, leading to too many cursor versions, high CPU usage, and shared pool pressure.

    Even though we were on Oracle 19.24, the problem still occurred. The key was identifying the root cause and taking action, monitoring version counts, applying a temporary fix, and then implementing a permanent solution using the _cursor_obsolete_threshold parameter.

    In short, managing SQL behavior and understanding how Oracle handles different workloads is critical for keeping your systems running smoothly, especially in today’s fast-moving, high-volume environments.

    Friday, August 1, 2025

    OLVM : Expanding a Fibre Channel (FC) Data Storage Domain in OLVM

     




    Intro

    We are living in a data-driven era where AI is advancing rapidly, placing even greater demands on processing power and virtualized environments. While cloud adoption continues to grow—largely fueled by virtualization—many organizations still rely heavily on their on-premises virtual infrastructure alongside cloud technologies.

    Oracle Linux Virtualization Manager (OLVM) is quickly emerging as a strong alternative, addressing key gaps left by other platforms like VMware. With Broadcom tightening its licensing policies, many organizations are now considering a move away from VMware. For those looking to transition, OLVM stands out as a reliable and cost-effective option, backed by Oracle’s 24/7 enterprise-grade support.

    In this article, I’ll walk you through how to expand a Fibre Channel (FC) data storage domain in OLVM.

    Document Reference:

    OLVM: Expanding the Size of a Storage Domain (FC/iSCSI) (Doc ID 2881013.1)

    Steps to increase the FC data domain:

    • Increase the storage LUN at the SAN level.
    • On all KVM hypervisors where the storage is mounted, execute: /usr/bin/rescan-scsi-bus.sh
    • Increase the size of the data domain from the OLVM storage side.


    This is a sample output from executing /usr/bin/rescan-scsi-bus.sh.

    Please ensure any issues are resolved before proceeding with the disk size increase from the OLVM side.

    Sample output :


    
    [root@kvm01 ~]# /usr/bin/rescan-scsi-bus.sh
    Scanning SCSI subsystem for new devices
    Scanning host 0 for  SCSI target IDs  0 1 2 3 4 5 6 7, all LUNs
     Scanning for device 0 0 0 0 ...
    OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: Generic- Model: SD/MMC CRW       Rev: 1.00
          Type:   Direct-Access                    ANSI SCSI revision: 06
    .Scanning host 1 for  all SCSI target IDs, all LUNs
     Scanning for device 1 0 0 0 ...
    OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 00
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 0 1 ...
    OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 01
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 0 2 ...
    OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 02
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 0 3 ...
    OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 03
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 0 5 ...
    OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 05
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 0 9 ...
    OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 09
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 0 10 ...
    OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 10
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 0 11 ...
    OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 11
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 1 0 ...
    OLD: Host: scsi1 Channel: 00 Id: 01 Lun: 00
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 1 1 ...
    OLD: Host: scsi1 Channel: 00 Id: 01 Lun: 01
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 1 2 ...
    OLD: Host: scsi1 Channel: 00 Id: 01 Lun: 02
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 1 3 ...
    OLD: Host: scsi1 Channel: 00 Id: 01 Lun: 03
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 1 5 ...
    OLD: Host: scsi1 Channel: 00 Id: 01 Lun: 05
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 1 9 ...
    OLD: Host: scsi1 Channel: 00 Id: 01 Lun: 09
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 1 10 ...
    OLD: Host: scsi1 Channel: 00 Id: 01 Lun: 10
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 1 0 1 11 ...
    OLD: Host: scsi1 Channel: 00 Id: 01 Lun: 11
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
    Scanning host 2 for  SCSI target IDs  0 1 2 3 4 5 6 7, all LUNs
     Scanning for device 2 0 0 0 ...
    OLD: Host: scsi2 Channel: 00 Id: 00 Lun: 00
          Vendor: HPE      Model: Smart Adapter    Rev: 3.53
          Type:   Enclosure                        ANSI SCSI revision: 05
     Scanning for device 2 1 0 0 ... 0 ...
    OLD: Host: scsi2 Channel: 01 Id: 00 Lun: 00
          Vendor: HPE      Model: LOGICAL VOLUME   Rev: 3.53
          Type:   Direct-Access                    ANSI SCSI revision: 05
     Scanning for device 2 2 0 0 ... 0 ...
    OLD: Host: scsi2 Channel: 02 Id: 00 Lun: 00
          Vendor: HPE      Model: P408i-a SR Gen10 Rev: 3.53
          Type:   RAID                             ANSI SCSI revision: 05
    Scanning host 3 for  all SCSI target IDs, all LUNs
     Scanning for device 3 0 0 0 ...
    OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 00
          Vendor: DGC      Model: VRAID            Rev: 0532
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 3 0 0 1 ...
    OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 01
          Vendor: DGC      Model: VRAID            Rev: 0532
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 3 0 0 2 ...
    OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 02
          Vendor: DGC      Model: VRAID            Rev: 0532
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 3 0 1 0 ...
    OLD: Host: scsi3 Channel: 00 Id: 01 Lun: 00
          Vendor: DGC      Model: VRAID            Rev: 0532
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 3 0 1 1 ...
    OLD: Host: scsi3 Channel: 00 Id: 01 Lun: 01
          Vendor: DGC      Model: VRAID            Rev: 0532
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 3 0 1 2 ...
    OLD: Host: scsi3 Channel: 00 Id: 01 Lun: 02
          Vendor: DGC      Model: VRAID            Rev: 0532
          Type:   Direct-Access                    ANSI SCSI revision: 04
    Scanning host 4 for  all SCSI target IDs, all LUNs
     Scanning for device 4 0 0 0 ...
    OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 00
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 0 1 ...
    OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 01
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 0 2 ...
    OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 02
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 0 3 ...
    OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 03
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 0 5 ...
    OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 05
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 0 9 ...
    OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 09
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 0 10 ...
    OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 10
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 0 11 ...
    OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 11
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 1 0 ...
    OLD: Host: scsi4 Channel: 00 Id: 01 Lun: 00
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 1 1 ...
    OLD: Host: scsi4 Channel: 00 Id: 01 Lun: 01
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 1 2 ...
    OLD: Host: scsi4 Channel: 00 Id: 01 Lun: 02
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 1 3 ...
    OLD: Host: scsi4 Channel: 00 Id: 01 Lun: 03
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 1 5 ...
    OLD: Host: scsi4 Channel: 00 Id: 01 Lun: 05
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 1 9 ...
    OLD: Host: scsi4 Channel: 00 Id: 01 Lun: 09
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 1 10 ...
    OLD: Host: scsi4 Channel: 00 Id: 01 Lun: 10
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
     Scanning for device 4 0 1 11 ...
    OLD: Host: scsi4 Channel: 00 Id: 01 Lun: 11
          Vendor: DGC      Model: VRAID            Rev: 0430
          Type:   Direct-Access                    ANSI SCSI revision: 04
    Scanning host 5 for  all SCSI target IDs, all LUNs
    0 new or changed device(s) found.
    0 remapped or resized device(s) found.
    0 device(s) removed.
    
    


    Once the rescan is completed on all KVM hosts, the next step is to increase the Data Domain from the storage side.

    Keep in mind that even if you extend the LUN at the SAN level, the change will not automatically reflect in the Data Domain.
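    If the LUNs are served through device-mapper-multipath (as is typical for FC), the multipath map may also need to pick up the new size before the extra capacity becomes visible. A hedged sketch, assuming device-mapper-multipath is in use (the map name is environment-specific):

```shell
# Identify the multipath map that backs the resized LUN
multipath -ll
# Ask multipathd to re-read the device size (replace <map_name> with your map)
multipathd resize map <map_name>
```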

    From OLVM, navigate to Storage > Storage Domain > (select the respective data domain) > Manage Domain, as highlighted below.

                                                           
                                                        Figure 1 : Select Manage Domain

    The Manage Domain window will display the updated size, allowing you to increase it as needed.


                                                   Figure 2 : Manage window

    Now, select the desired size and click OK to apply the increase.



                                                   Figure 3: Increase in size.

    The data domain will now reflect the new size.


    Figure: Data domain after an increase in size. 

    Conclusion 

    In today's evolving IT landscape, organizations are under increasing pressure to modernize infrastructure while maintaining flexibility, control, and cost-efficiency. As AI and data workloads grow, the need for robust, scalable virtualization solutions becomes even more critical.

    With Broadcom’s licensing changes pushing many to reconsider their reliance on VMware, OLVM offers a compelling path forward. It not only fills the functionality gaps but also provides enterprise-grade reliability, backed by Oracle’s 24/7 support.

    Whether you're planning a full migration or building out a hybrid environment, OLVM is well-positioned to meet the demands of modern workloads. In this article, we explored how to expand a Fibre Channel (FC) data storage domain in OLVM, helping organizations take a step forward in building a resilient and future-ready virtual infrastructure.


    Thursday, June 26, 2025

    Streamlining TDE Key Management with Oracle Key Vault and Secure Endpoint Deployment

     






    Intro


    One of the core pillars of database security is robust encryption key management. Yet many organizations still store Transparent Data Encryption (TDE) keys locally, a practice that introduces significant risk because keys can be lost through disk corruption, system failure, or accidental mismanagement. Managing those keys manually across multiple Oracle and MySQL databases only compounds the challenge and increases the potential breach surface.

    Oracle Key Vault (OKV) offers a modern alternative by centralizing the storage and lifecycle management of encryption keys, certificates, and secrets in a secure, policy‑driven environment that communicates with each database over KMIP. Using OKV translates to reduced development time, improved security posture, and compliance with data‑protection standards. While there are associated costs for using the service, the benefits of enhanced security and streamlined operations often outweigh the expenses.

    To accelerate adoption at scale, OKV exposes a RESTful API that lets you automate installation and registration of endpoint software, dramatically shortening the time required to bring hundreds of database servers under centralized key management. For added protection, you can restrict the API to trusted IP ranges during rollout and disable the reset functionality once onboarding is complete, further narrowing the attack surface.

    The remainder of this post walks through a practical, script‑driven workflow that installs the endpoint client, registers it with OKV, and provisions a default wallet, laying the foundation for secure, compliant, and easily auditable encryption across your Oracle estate.

    In this article, I will elaborate on how we can perform the EP installation via RESTful API calls.

    Configure the OKV RESTful service


    First, create a directory /u01/app/oracle/okvapi into which to download the okvrestcli package. Additionally, create a wallet directory that will be used to store the downloaded OKV wallet files.

    Proceed to download the REST API package using the curl command.

    
    mkdir -p /u01/app/oracle/okvapi
    
    [oracle@ora-01 okvapi]$ curl -O -k https://10.0.0.180:5695/okvrestclipackage.zip
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 4134k  100 4134k    0     0  96.1M      0 --:--:-- --:--:-- --:--:-- 96.1M
    


    Once the download is complete, extract the package. This will create the required directory structure and files, including the configuration file that needs to be updated with OKV server details.


    
    [oracle@ora-01 okvapi]$ ls -lrth
    total 4.1M
    -rw-r--r--. 1 oracle oinstall 4.1M Jun 13 14:36 okvrestclipackage.zip
    [oracle@ora-01 okvapi]$ unzip okvrestclipackage.zip
    Archive:  okvrestclipackage.zip
       creating: lib/
       creating: bin/
      inflating: bin/okv
      inflating: bin/okv.bat
       creating: conf/
      inflating: conf/okvrestcli_logging.properties
      inflating: conf/okvrestcli.ini
      inflating: lib/okvrestcli.jar
    [oracle@ora-01 okvapi]$ 


    Next, update the okvrestcli.ini file located in the conf directory with the appropriate OKV server details. Additionally, ensure that the client_wallet parameter is included and correctly set.

    
    
    [oracle@ora-01 conf]$ cat okvrestcli.ini
    #Provide absolute path for log_property, okv_client_config properties
    [Default]
    log_property=/u01/app/oracle/okvapi/conf/okvrestcli_logging.properties
    server=10.0.0.180
    okv_client_config=./conf/okvclient.ora
    user=admin
    client_wallet=/u01/app/oracle/okvapi/wallet
    
    


    Create the wallet for the OKV RESTful API

    If the client_wallet directory is already specified in the configuration file, the next step is to download the wallet from the Oracle Key Vault (OKV) server. This wallet is essential for secure communication between the client and OKV.

    Before downloading the wallet, export the configuration file location, then initialize the client wallet with "/u01/app/oracle/okvapi/bin/okv admin client-wallet add --client-wallet /u01/app/oracle/okvapi/wallet --wallet-user admin", as detailed in the following steps.

    Step 1: Set the Configuration File Path

    
    export JAVA_HOME=/u01/app/oracle/product/19.0.0/dbhome_1/jdk
    export OKV_RESTCLI_CONFIG=/u01/app/oracle/okvapi/conf/okvrestcli.ini
    
    

    Step 2: Initialize the Client Wallet


    Run the following command to create the client wallet:
    
    
    /u01/app/oracle/okvapi/bin/okv admin client-wallet add \
      --client-wallet /u01/app/oracle/okvapi/wallet \
      --wallet-user admin
    


    You will be prompted to enter the password. Upon success, you should see:

    
    export JAVA_HOME=/u01/app/oracle/product/19.0.0/dbhome_1/jdk
    export OKV_RESTCLI_CONFIG=/u01/app/oracle/okvapi/conf/okvrestcli.ini
    [oracle@ora-01 okvapi]$ /u01/app/oracle/okvapi/bin/okv admin client-wallet add --client-wallet /u01/app/oracle/okvapi/wallet --wallet-user admin
    Password:
    {
      "result" : "Success"
    }
    [oracle@ora-01 okvapi]$
    
    

    Important Note:

    This command will fail if the RESTful service is not enabled in OKV. By default, the REST service is disabled to maintain security.

    Example of failure message:

    
    [oracle@dbsdpl55 bin]$ /u01/app/oracle/okvapi/bin/okv admin client-wallet add --client-wallet /u01/app/oracle/okvapi/wallet --wallet-user admin
    Password:
    {
      "result" : "Failure",
      "message" : "REST service is disabled"
    }
    [oracle@dbsdpl55 bin]$
    


    Solution

    For testing purposes, you may temporarily enable the REST service for all IP addresses. However, in a production environment, it is strongly recommended to enable the service only for specific, trusted targets. Once the installation and configuration are complete, you can disable the REST service again to maintain a secure setup.





    Verifying the OKV RESTful API Installation

    Use the following commands to verify the REST CLI and retrieve server details (here OKV_HOME points to the REST CLI installation directory, /u01/app/oracle/okvapi):

    
    [oracle@ora-01 okvapi]$ $OKV_HOME/bin/okv
    {
      "restCLIVersion" : "21.10.0.0.0"
    }
    
    [oracle@ora-01 okvapi]$  $OKV_HOME/bin/okv server info get
    {
      "result" : "Success",
      "value" : {
        "caCertificateExpirationDate" : "2028-05-19 18:59:37",
        "cpuCores" : "2",
        "deploymentType" : "Cluster",
        "diskInGB" : "3695",
        "fraInGB" : "160",
        "installDate" : "2025-05-20 18:59:54",
        "memoryInKB" : "16365128",
        "platformCertificatesExpirationDate" : "2027-03-12 02:48:20",
        "serverCertificateExpirationDate" : "2026-05-20 19:03:31",
        "serverTime" : "2025-06-13 14:41:53",
        "version" : "21.10.0.0.0"
      }
    }
    
    This confirms that the OKV RESTful API is correctly installed and communicating with the server.

    Configuring Endpoint (EP) via REST API


    To configure OKV using RESTful calls, follow the steps below:

    Prerequisites

    Before provisioning the endpoint, ensure the following directory structure exists under the wallet_root:
    
    mkdir -p /u01/app/oracle/admin/ORCL/wallet_root/tde        # Location for TDE keys
    mkdir -p /u01/app/oracle/admin/ORCL/wallet_root/okv        # Location for OKV endpoint
    mkdir -p /u01/app/oracle/admin/ORCL/wallet_root/tde_seps   # Location for External SSO Key
    

    Step-by-Step Configuration

    1. Create the Endpoint

    
    /u01/app/oracle/okvapi/bin/okv admin endpoint create --endpoint EP_ORCL --description "$HOSTNAME, $(hostname -i)" --type ORACLE_DB --platform LINUX64
    
    

    Sample Output:
    
    export OKV_RESTCLI_CONFIG=/u01/app/oracle/okvapi/conf/okvrestcli.ini
    export JAVA_HOME=/u01/app/oracle/product/19c/dbhome_1/jdk
    
    [oracle@ora-01 wallet_root]$ /u01/app/oracle/okvapi/bin/okv admin endpoint create --endpoint EP_ORCL --description "$HOSTNAME, $(hostname -i)" --type ORACLE_DB --platform LINUX64
    {
      "result" : "Success",
      "value" : {
        "status" : "PENDING",
        "locatorID" : "CBB0383C-CC14-4EA4-BF75-01981C522F74"
      }
    }
    [oracle@ora-01 wallet_root]$
    

    After creating the endpoint, verify its status in the OKV console; it should display as "REGISTERED". To complete the endpoint setup, you must proceed with the provisioning step.


    2. Set the Default Wallet for the Endpoint

    
    /u01/app/oracle/okvapi/bin/okv manage-access wallet set-default \
      --wallet WLT_ORCL \
      --endpoint EP_ORCL
    

    3. Provision the Endpoint

    
    /u01/app/oracle/okvapi/bin/okv admin endpoint provision \
      --endpoint EP_ORCL \
      --location /u01/app/oracle/admin/ORCL/wallet_root/okv \
      --auto-login FALSE
    

    Post-Configuration

    Once the endpoint is created, you can verify its status in the OKV console. The endpoint status should change to "ENROLLED" after successful provisioning. This completes the REST-based configuration of an Oracle Database endpoint with OKV.
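    With the endpoint enrolled, the database can then be pointed at OKV for TDE key management. A minimal sketch for a 19c database, assuming WALLET_ROOT has not yet been set (the subsequent ADMINISTER KEY MANAGEMENT steps to open the keystore and set the master key are omitted here):

```sql
-- Point the database at the wallet_root created earlier (instance restart required).
ALTER SYSTEM SET wallet_root = '/u01/app/oracle/admin/ORCL/wallet_root' SCOPE = SPFILE;
-- After the restart, configure TDE to use OKV backed by a local file keystore:
ALTER SYSTEM SET tde_configuration = 'KEYSTORE_CONFIGURATION=OKV|FILE' SCOPE = BOTH;
```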




    Conclusion

    Configuring Oracle Key Vault (OKV) using RESTful API provides a secure, automated, and centralized approach to managing encryption keys for Oracle databases. By following the steps outlined for setting up the configuration, creating the required directories, defining the endpoint, associating the default wallet, and provisioning the endpoint, you establish a robust foundation for TDE and other security integrations.

    Always ensure the RESTful service is enabled securely, especially in production environments, by restricting access to trusted IPs. Once the configuration is verified and functional, you can disable the REST service to further harden your setup.

    This approach not only simplifies key lifecycle management but also enhances compliance and operational security across your Oracle infrastructure.

    Friday, June 20, 2025

    Oracle Key Vault (OKV) Endpoint (EP) Health Check: Best Practices & Troubleshooting Tips

     






    Intro 

    Oracle Key Vault (OKV) plays a vital role in the AI era, where data security, encryption, and regulatory compliance are more critical than ever. OKV communicates securely with its targets using KMIP (Key Management Interoperability Protocol) over TLS. In simple terms, without a properly configured OKV Endpoint (EP), OKV cannot communicate with the target system.

    Installing the Endpoint is essential because it provides the secure link between the database and OKV.
     



                                                 Figure 1 : OKV communication with target

    Before installing the OKV Endpoint agent, there are several prerequisites to consider. One of the most important is the correct folder structure. All required directories must be created directly under the wallet_root location.

    Folder structure required for OKV installation. 

    
    mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/tde - Location for TDE keys
    mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/okv - Location for OKV End point
    mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/tde_seps - Location for External SSO Key    


    After extracting the OKV Endpoint binary, it's important to run the provided health check script. This script validates the endpoint configuration, ensuring everything is set up correctly. If there are any issues, especially with the folder structure, the health check will flag them.

    The OKV health check scripts are located in the bin directory of the extracted endpoint package. Below is a sample taken from an Oracle DBCS (Database Cloud Service) instance:
    
    [oracle@dbsdpl55 bin]$ pwd
    /opt/oracle/dcs/commonstore/wallets/PWHSE01_3c9_yyz/okv/bin
    [oracle@dbsdpl55 bin]$ ls -lrth
    total 8.4M
    -rw-r--r-- 1 oracle oinstall    0 May 23 11:11 okv.log.0
    -rwxr-x--- 1 oracle oinstall 5.3K May 23 15:03 root.sh
    -rwxr-x--- 1 oracle oinstall 8.0K May 23 15:03 okvutil
    -rwxr-x--- 1 oracle oinstall  29K May 23 15:03 okv_ssh_ep_lookup_authorized_keys
    -rwxr-x--- 1 oracle oinstall 8.4M May 23 15:03 okveps.x64
    -rwxr-x--- 1 oracle oinstall  20K May 23 15:03 ep_healthcheck.sh
    [oracle@dbsdpl55 bin]$
    
    

    Executing OKV Endpoint Health Check.


    Once the OKV Endpoint binary is extracted and the environment is set up, you should run the health check script to validate the configuration. Below is a sample output from a DBCS environment:

    
     
     [oracle@dbsdpl55 bin]$ ./ep_healthcheck.sh
    Verifying the environment variables...
    PASSED.
    
    
    Verifying if there are multiple HSM libraries...
    PASSED.
    
    
    WALLET_ROOT=/opt/oracle/dcs/commonstore/wallets/PWHSE01_3c9_yyz is set. Verifying whether okv,tde and tde_seps folders exist under /opt/oracle/dcs/commonstore/wallets/PWHSE01_3c9_yyz...
    PASSED.
    
    
    Verifying whether the environment variables set for gen0 process...
    PASSED.
    
    
    Verifying whether Oracle Key Vault client configuration is set correctly...
    PASSED.
    
    
    Verifying whether the SSL wallet location is set correctly...
    PASSED.
    
    Enter endpoint Password (for an auto login endpoint, just press Enter):
    Verifying whether okvutil fetches data successfully...
    
    Connected to 10.0.0.180:5696.
    7F7407FF-651F-412B-834C-A616BC97C509    Template        Default template for PWHSE01_EP
    PASSED.
    
    
    
    Fetching Persistent cache details...
    WARNING: Persistent Cache file doesn't exist. If the 'EXPIRE PKCS11 PERSISTENT CACHE ON DATABASE SHUTDOWN' parameter is configured for this endpoint then persistent cache will not be accessible. Please note that this parameter won't appear in the okvclient.ora file and can be modified only from OKV webconsole.
    DONE.
    
    Verifying whether PKCS Library is set correctly...
    PASSED.
    
    
    Please verify /opt/oracle/dcs/commonstore/wallets/PWHSE01_3c9_yyz/okv/log/ep_healthcheck_PWHSE01_2025.05.23-11.25.01.log for more details.
    
    [oracle@dbsdpl55 bin]$
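
    Each check prints PASSED or FAILED, and the script also writes a detailed log (the ep_healthcheck_*.log path shown at the end of the run). For unattended runs, the log can be scanned for failures afterwards. A hedged sketch follows; the log content below is fabricated for illustration, and in practice you would point LOG at the real log path.

    ```shell
    # Scan a health-check log for failed checks. The log content here is a
    # fabricated sample; in practice set LOG to the ep_healthcheck_*.log
    # path printed at the end of the run.
    LOG=$(mktemp)
    printf 'Verifying the environment variables...\nPASSED.\nVerifying whether PKCS Library is set correctly...\nFAILED.\n' > "$LOG"
    fails=$(grep -c '^FAILED' "$LOG")
    echo "failed checks: $fails"
    ```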
    
     
     

    Conclusion

    In today's AI-driven landscape, where safeguarding sensitive data is paramount, Oracle Key Vault (OKV) offers a robust and centralized approach to managing encryption keys and secrets. A correctly configured OKV Endpoint (EP) is not just a technical requirement—it is the foundation for secure communication between the database and OKV. By ensuring the proper setup of the EP, including the necessary folder structures and successful health checks, organizations can enforce strong encryption practices, maintain compliance, and confidently protect their data assets in both on-premises and cloud environments.

    Wednesday, June 4, 2025

    EXACS: Creating an 11g Database Home

     



    Intro

    Oracle Exadata Cloud Service (ExaCS) historically supported Oracle Database 11g, but with recent updates, Oracle has phased out support for older database versions like 11g in ExaCS. While existing Oracle Database 11g instances may continue to run, it is no longer possible to provision new Oracle 11g databases on Exadata Cloud Service through the Oracle Cloud Infrastructure (OCI) console. Oracle encourages users to upgrade to more recent supported versions, such as Oracle Database 19c or 23ai, to leverage enhanced features, security, and performance improvements.

    Some customers continue to run Oracle 11g because they have not been able to schedule downtime for their mission-critical business applications. Additionally, Oracle Database 11g Release 2 (11.2.0.4) is considered one of the most stable releases Oracle has built. However, these customers struggle to find an efficient migration method, as their databases have grown to terabyte sizes.

    Customers are still stuck on Oracle 11g for several key reasons:

    Mission-Critical Applications: Many businesses run critical applications that require constant uptime, making it difficult to schedule downtime for upgrades or migrations.

    Stable Environment: Oracle Database 11.2.0.4 is considered a highly stable release. Some businesses prefer to stay with this known stability rather than risk issues with newer versions.

    Cost and Complexity: Upgrading Oracle databases, especially from 11g, can be expensive and complex. It may involve re-architecting applications, testing, and retraining staff, which some companies may not have the resources to handle.

    Large Databases: Over time, databases have grown to terabyte sizes, which makes migration and upgrade processes much more challenging in terms of time, performance, and storage.

    Custom Solutions and Legacy Code: Many organizations have customized solutions that are tightly integrated with Oracle 11g, and updating these systems can require significant rework, leading to compatibility issues with newer versions.

    Business Prioritization: In some cases, businesses prioritize operational stability over modernization, choosing to delay upgrades to focus on other initiatives.

    In this article, I will show you how to create an 11g database home in ExaCS using dbaascli.

    Note: You can create this only if you have upgrade support for 11g; Oracle no longer supports provisioning new 11g databases in the ExaCS environment.

    List db images in EXACS

    
    
    [root@exadevdb-01 ~]# dbaascli cswlib showImages
    DBAAS CLI version 24.3.1.0.0
    Executing command cswlib showImages
    Job id: 10706bde-2e70-42fe-a371-7cb9bd572f7c
    Session log: /var/opt/oracle/log/cswLib/showImages/dbaastools_2024-09-06_05-54-57-PM_59941.log
    Log file location: /var/opt/oracle/log/cswLib/showImages/dbaastools_2024-09-06_05-54-57-PM_59941.log
    
    ############ List of Available database Artifacts  #############
    
    1.IMAGE_TAG=11.2.0.4.230418
      VERSION=11.2.0.4.230418
      DESCRIPTION=11.2 APR 2023 DB Image
    2.IMAGE_TAG=12.2.0.1.230718
      VERSION=12.2.0.1.230718
      DESCRIPTION=12.2 JUL 2023 DB Image
    3.IMAGE_TAG=18.23.0.0.0
      VERSION=18.23.0.0.0
      DESCRIPTION=18c JUL 2023 DB Image
    4.IMAGE_TAG=19.22.0.0.0
      VERSION=19.22.0.0.0
      DESCRIPTION=19c JAN 2024 DB Image
    5.IMAGE_TAG=12.1.0.2.231017
      VERSION=12.1.0.2.231017
      DESCRIPTION=12.1 OCT 2023 DB Image
    6.IMAGE_TAG=23.4.0.24.05
      VERSION=23.4.0.24.05
      DESCRIPTION=23ai DB image 23.4.0.24.05
    7.IMAGE_TAG=18.22.0.0.0
      VERSION=18.22.0.0.0
      DESCRIPTION=18c APR 2023 DB Image
    8.IMAGE_TAG=11.2.0.4.231017
      VERSION=11.2.0.4.231017
      DESCRIPTION=11.2 OCT 2023 DB Image
    9.IMAGE_TAG=12.1.0.2.230418
      VERSION=12.1.0.2.230418
      DESCRIPTION=12.1 APR 2023 DB Image
    10.IMAGE_TAG=19.23.0.0.0
      VERSION=19.23.0.0.0
      DESCRIPTION=19c APR 2024 DB Image
    11.IMAGE_TAG=19.20.0.0.0
      VERSION=19.20.0.0.0
      DESCRIPTION=19c JUL 2023 DB Image
    12.IMAGE_TAG=23.5.0.24.07
      VERSION=23.5.0.24.07
      DESCRIPTION=23ai DB image 23.5.0.24.07
    13.IMAGE_TAG=12.2.0.1.230418
      VERSION=12.2.0.1.230418
      DESCRIPTION=12.2 APR 2023 DB Image
    14.IMAGE_TAG=11.2.0.4.230718
      VERSION=11.2.0.4.230718
      DESCRIPTION=11.2 JUL 2023 DB Image
    15.IMAGE_TAG=12.1.0.2.230718
      VERSION=12.1.0.2.230718
      DESCRIPTION=12.1 JUL 2023 DB Image
    16.IMAGE_TAG=19.24.0.0.0
      VERSION=19.24.0.0.0
      DESCRIPTION=19c JUL 2024 DB Image
    17.IMAGE_TAG=12.2.0.1.231017
      VERSION=12.2.0.1.231017
      DESCRIPTION=12.2 OCT 2023 DB Image
    18.IMAGE_TAG=18.24.0.0.0
      VERSION=18.24.0.0.0
      DESCRIPTION=18c OCT 2023 DB Image
    19.IMAGE_TAG=19.21.0.0.0
      VERSION=19.21.0.0.0
      DESCRIPTION=19c OCT 2023 DB Image
    Images can be downloaded using their image tags. For details, see help using 'dbaascli cswlib download --help'.
    
    dbaascli execution completed
    [root@exadevdb-01 ~]#
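
    The listing pairs each IMAGE_TAG with its VERSION and DESCRIPTION. When scripting, the tags for a given release can be filtered out of the captured output. Here is a hedged sketch; a few sample lines are inlined so it runs standalone, and in practice you would pipe the output of dbaascli cswlib showImages into the sed command instead.

    ```shell
    # Extract IMAGE_TAG values for the 11.2 release from showImages output.
    # The sample lines below are taken from the listing above.
    images='1.IMAGE_TAG=11.2.0.4.230418
    8.IMAGE_TAG=11.2.0.4.231017
    16.IMAGE_TAG=19.24.0.0.0'
    tags=$(printf '%s\n' "$images" | sed -n 's/^[ ]*[0-9]*\.IMAGE_TAG=\(11\.2\..*\)/\1/p')
    printf '%s\n' "$tags"
    ```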
    
    
    

    Download the 11.2.0.4 (11204) binary, OCT2023 patch level.

    
    dbaascli cswlib download --version 11204 --bp OCT2023 --bp_update no
    
    Sample output
    
    
    [root@exadevdb-01 ~]# dbaascli cswlib download --version 11204 --bp OCT2023 --bp_update no
    DBAAS CLI version 24.3.2.0.0
    Executing command cswlib download --version 11204 --bp OCT2023 --bp_update no
    Job id: 8dab6933-9505-40a9-a398-c1ba46eae6e3
    Session log: /var/opt/oracle/log/cswLib/download/dbaastools_2024-09-19_02-00-06-PM_166429.log
    Loading PILOT...
    Session ID of the current execution is: 40
    Log file location: /var/opt/oracle/log/cswLib/download/pilot_2024-09-19_02-00-10-PM_166741
    -----------------
    Running initialize job
    Completed initialize job
    -----------------
    Running validate_url_reachability job
    Completed validate_url_reachability job
    -----------------
    Running validate_image_tag_existence job
    Completed validate_image_tag_existence job
    -----------------
    Running validate_free_space job
    Completed validate_free_space job
    -----------------
    Running validate_file_permissions job
    Completed validate_file_permissions job
    -----------------
    Running download_image job
    Image location=/var/opt/oracle/dbaas_acfs/dbnid/11.2.0.4.231017
    Download succeeded
    Completed download_image job
    -----------------
    Running decrypt job
    Completed decrypt job
    -----------------
    Running post_download_update job
    Completed post_download_update job
    -----------------
    Running verify_signature job
    Skipping verifySignature check because the catalog contains signed property as false
    Completed verify_signature job
    -----------------
    Running validate_sha256sum job
    Skipping Sha256Sum check because the catalog does not contain sha256sum
    Skipping Sha256Sum check because the catalog does not contain sha256sum
    Completed validate_sha256sum job
    -----------------
    Running update_file_permissions job
    Completed update_file_permissions job
    {"11.2.0.4.231017":{"file_list":[{"file":"/var/opt/oracle/dbaas_acfs/dbnid/11.2.0.4.231017/db11204_bits_EXA.zip"},{"file":"/var/opt/oracle/dbaas_acfs/dbnid/11.2.0.4.231017/exadbf_11204.tar.gz"}]}}
    
    dbaascli execution completed
    [root@exadevdb-01 ~]#
    
    

    List downloaded db images

    After downloading the image, you can list the local database binary images using cswLib listLocal.
    
    
    [opc@exadevdb-01 ~]$ sudo dbaascli cswLib listLocal
    DBAAS CLI version 24.3.1.0.0
    Executing command cswLib listLocal
    Job id: 0405333f-04d5-4043-97f6-0fa0a9eca51b
    Session log: /var/opt/oracle/log/cswLib/listLocal/dbaastools_2024-09-06_06-15-17-PM_154824.log
    Log file location: /var/opt/oracle/log/cswLib/listLocal/dbaastools_2024-09-06_06-15-17-PM_154824.log
    
    ############ List of Available Database Images  #############
    
    1.IMAGE_TAG=11.2.0.4.231017
      IMAGE_SIZE=3GB
      VERSION=11.2.0.4.231017
      DESCRIPTION=11.2 OCT 2023 DB Image
    2.IMAGE_TAG=19.24.0.0.0
      IMAGE_SIZE=5GB
      VERSION=19.24.0.0.0
      DESCRIPTION=19c JUL 2024 DB Image
    
    dbaascli execution completed
    [opc@exadevdb-01 ~]$
    
    

    Create a new database home using 11g binary

    The next step is to create a database home from the downloaded 11.2.0.4 image:

    [root@exadevdb-01 ~]# dbaascli dbhome create --version 11.2.0.4
    DBAAS CLI version 24.3.2.0.0
    Executing command dbhome create --version 11.2.0.4
    Job id: 615dc2e8-af82-47c1-8612-526082b6e9bf
    Session log: /var/opt/oracle/log/dbHome/create/dbaastools_2024-10-10_07-30-24-PM_99121.log
    Loading PILOT...
    Session ID of the current execution is: 34
    Log file location: /var/opt/oracle/log/dbHome/create/pilot_2024-10-10_07-30-27-PM_99403
    -----------------
    Running Plugin_initialization job
    Acquiring native write lock: _u02_app_oracle_product_11.2.0_dbhome_1
    Acquiring native write lock: OraHome2
    Completed Plugin_initialization job
    -----------------
    Running OH_image_validate job
    Completed OH_image_validate job
    -----------------
    Running DB_home_version_check job
    Completed DB_home_version_check job
    -----------------
    Running DB_home_environment_version_check job
    Completed DB_home_environment_version_check job
    -----------------
    Running GI_version_check job
    Completed GI_version_check job
    -----------------
    Running Validate_backup_location job
    Completed Validate_backup_location job
    -----------------
    Running Cluster_nodes_check job
    Completed Cluster_nodes_check job
    -----------------
    Running Validate_users_group job
    Completed Validate_users_group job
    -----------------
    Running OH_image_download job
    Completed OH_image_download job
    -----------------
    Running OH_image_version_check job
    Completed OH_image_version_check job
    -----------------
    Running Disk_space_check job
    Completed Disk_space_check job
    Acquiring write lock: provisioning
    -----------------
    Running Pre_OH_creation_lock_manager job
    Completed Pre_OH_creation_lock_manager job
    -----------------
    Running OH_pre_existence_check job
    Completed OH_pre_existence_check job
    -----------------
    Running Local_node_oh_image_unzip job
    Completed Local_node_oh_image_unzip job
    -----------------
    Running OH_creation_cvu_prechecks job
    Completed OH_creation_cvu_prechecks job
    -----------------
    Running Local_node_oh_clone job
    
    Completed Local_node_oh_clone job
    -----------------
    Running Update_dbnid_bits job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Local_node_oh_backup_creation job
    Completed Local_node_oh_backup_creation job
    -----------------
    Running Remote_nodes_software_copy job
    
    Completed Remote_nodes_software_copy job
    -----------------
    Running Remote_nodes_attach_home job
    Completed Remote_nodes_attach_home job
    -----------------
    Running Remote_nodes_post_install_steps job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Inventory_nodes_list_update job
    Completed Inventory_nodes_list_update job
    -----------------
    Running Root_script_run job
    Completed Root_script_run job
    -----------------
    Running Post_OH_creation_lock_manager job
    Completed Post_OH_creation_lock_manager job
    Releasing lock: provisioning
    -----------------
    Running release_lock job
    Releasing native lock: OraHome2
    Releasing native lock: _u02_app_oracle_product_11.2.0_dbhome_1
    Completed release_lock job
    -----------------
    Running Generate_dbhome_metadata job
    Completed Generate_dbhome_metadata job
    -----------------
    Running Generate_dbhome_system_details job
    Acquiring native write lock: global_dbsystem_details_generation
    Releasing native lock: global_dbsystem_details_generation
    Completed Generate_dbhome_system_details job
    -----------------
    Running Plugin_cleanup job
    
    Completed Plugin_cleanup job
    ---------- START OF PLUGIN RESULT ----------
    {"ORACLE_HOME_NAME":"OraHome2","ORACLE_HOME":"/u02/app/oracle/product/11.2.0/dbhome_1"}
    ---------- END OF PLUGIN RESULT ----------
    
    dbaascli execution completed
    
    

    Conclusion


    While Oracle Exadata Cloud Service (ExaCS) no longer officially supports provisioning new Oracle Database 11g instances through the OCI console, it is still possible to create and manage these databases using tools like dbaascli. This approach provides a practical solution for customers who rely on the stability and performance of Oracle 11g for mission-critical applications but face challenges in scheduling downtime or migrating to newer database versions. 

    However, it’s essential to recognize the limitations of continuing to use an unsupported version, such as missing out on critical security updates, performance enhancements, and new features offered by modern versions like Oracle Database 19c or 23ai. Organizations are encouraged to develop a long-term migration strategy to align with Oracle's support roadmap and ensure the sustainability of their database operations.

    Tuesday, June 3, 2025

    EXACS Standby - ORA-12154: TNS (Disable RFS client)

     



    Intro 

    In today’s data-driven world, safeguarding data is crucial. Organizations must be equipped to handle potential failures by implementing strong disaster recovery (DR) strategies. For those managing critical data, a DR site is essential. Oracle Data Guard provides a robust solution for database disaster recovery, ensuring data protection and continuous availability.

    Recently, I encountered an issue while creating a standby database from an ExaCS environment. These databases were manually created in ExaCS, and we also manually registered them in the OCI console.

    After configuring Data Guard, the RFS process transfers the latest archive to the standby database but then automatically transitions to a disabled state with the message "Disable RFS client." The process keeps failing because the primary database reports the error:

    ORA-12154: TNS: could not resolve the connect identifier specified.

    Here is the alert log information we gathered from the standby database:
    
    ALTER SYSTEM SET log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=BOTH SID='*';
    2024-12-17T05:34:15.663557+00:00
     rfs (PID:5936): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is ASYNC (PID:336925)
     rfs (PID:5936): Primary database is in MAXIMUM PERFORMANCE mode
     rfs (PID:5936): Enable RFS client [krsr.c:5782]
     rfs (PID:5936): Disable RFS client [kcrlc.c:1531]
    2024-12-17T05:34:15.699574+00:00
     rfs (PID:5936): Selected LNO:21 for T-1.S-1255 dbid 4130507393 branch 1187061775
    2024-12-17T05:34:18.286167+00:00
     rfs (PID:6076): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is ASYNC (PID:239908)
    
    
    

    Interestingly, SQL*Plus connections from the primary to the standby database are working fine. This makes the issue even more puzzling, as there are no apparent connectivity problems. However, the primary database alert log continues to report ORA-12154, and even the Data Guard Broker configuration is failing.

    After hours of troubleshooting, we came across the relevant My Oracle Support note:

    "RFS is not coming up on standby due to ORA-12154 on transport (primary side)" (Doc ID 2196182.1).

    In our Exadata database, the TNS_ADMIN parameter was missing in the cluster configuration. We had to add the TNS_ADMIN parameter and restart the database to resolve the issue. Only then did it correctly pick up the TNS location.


    
    srvctl getenv database -d <db_unique_name> -t "TNS_ADMIN"
    srvctl setenv database -d <db_unique_name> -T "TNS_ADMIN=<path of directory holding the tnsnames.ora>"
    
    
    Sample output
    
    [oracle@exaprd01-node01 ]$ srvctl setenv database -d TEST2_EXA -T "TNS_ADMIN=/u02/app/oracle/product/19.0.0.0/dbhome_4/network/admin/TEST2"
    [oracle@exaprd01-node01 ]$ srvctl getenv database -d TEST2_EXA -t "TNS_ADMIN"
    TEST2_EXA:
    TNS_ADMIN=/u02/app/oracle/product/19.0.0.0/dbhome_4/network/admin/TEST2
    [oracle@exaprd01-node01 ]$
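
    Before pointing srvctl at a directory, it is worth confirming that the directory actually contains a readable tnsnames.ora with the standby alias, since a typo in the path would reproduce the same ORA-12154. A hedged sketch follows; the temporary directory and the STBY_EXA alias are fabricated for illustration, so substitute your real <dbhome>/network/admin/<db_name> path and alias.

    ```shell
    # Sanity-check a candidate TNS_ADMIN directory before running
    # `srvctl setenv`. The directory and alias below are fabricated.
    TNS_DIR=$(mktemp -d)
    cat > "$TNS_DIR/tnsnames.ora" <<'EOF'
    STBY_EXA=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=stby-scan)(PORT=1521))
      (CONNECT_DATA=(SERVICE_NAME=STBY_EXA)))
    EOF
    if [ -r "$TNS_DIR/tnsnames.ora" ] && grep -q 'STBY_EXA=' "$TNS_DIR/tnsnames.ora"; then
      echo "OK: $TNS_DIR is usable as TNS_ADMIN"
    fi
    ```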
    



    Conclusion 

    The issue arose while setting up a standby database in an ExaCS environment, where the databases were manually created and registered in the OCI console. Despite successful SQL*Plus connections between the primary and standby databases, the RFS process repeatedly failed with ORA-12154 errors on the primary side, preventing archive log transfer.

    After extensive troubleshooting, we discovered that the root cause was a missing TNS_ADMIN parameter in the cluster configuration. This prevented the database from correctly resolving TNS names. The issue was resolved by adding the TNS_ADMIN parameter and restarting the database, allowing Data Guard to function properly.

    EXACS: Database creation using dbaascli

     







    Intro

    OCI (Oracle Cloud Infrastructure) provides robust automation capabilities for routine maintenance tasks such as patching, creating database homes, and server OS updates. Oracle further simplifies these tasks by handling the staging process, reducing the administrative burden.

    Additionally, Oracle has introduced several tools to streamline maintenance and management tasks, including:

  • dbaascli
  • exacli
  • dbcli
  • exadbcpatchmulti

    dbaascli is a command-line utility for managing Oracle Database Cloud Service (DBaaS) instances, primarily on Oracle Exadata Cloud Service. It provides several features for administering Oracle databases running on Oracle Cloud Infrastructure (OCI), allowing users to perform administrative tasks such as backup and recovery, patching, scaling, and checking the status of the database environment.

    Key Features of dbaascli:

    1. Backup and Restore:

      • Manage and schedule backups.
      • Perform on-demand backups.
      • Restore the database from backups.
    2. Patching:

      • Apply database patches.
      • Check for available patches.
      • Rollback patches if necessary.
    3. Database Management:

      • Start and stop databases.
      • Perform database health checks.
      • Manage Data Guard configurations (for high availability and disaster recovery).
    4. Storage Management:

      • Resize the database storage as required.
    5. Diagnostic Tools:

      • Collect diagnostic information to troubleshoot issues.
      • Generate diagnostic logs for support cases.
    As a DBA, it's essential to start utilizing these orchestration tools to streamline and simplify your tasks. These tools make database management more efficient by automating key processes. With dbaascli, for example, DBAs no longer need to manually download and copy binary files to servers—dbaascli will automatically download the latest binaries, making the entire process faster and smoother.

    In this article, I will demonstrate how to create a database using dbaascli.

    Note: Creating a non-CDB database is not supported via the console. The only way to create a non-CDB database is through the backend.

    Step 1: Identify the Database Home


    First, identify and note down the Database Home associated with the version you want to use for creating the database.

    For this example, since I want to create a database using the 19c home, I will record the path for the 19c Database Home.

    
    
    [root@exadevdb-node01 ~]# dbaascli dbhome info
    DBAAS CLI version 24.3.2.0.0
    Executing command dbhome info
    [INFO] [DBAAS-14011] - The usage of this command is deprecated.
       ACTION: It is recommended to use 'dbaascli dbHome getDetails or dbaascli system getDBHomes' for this operation.
    Enter a homename or just press enter if you want details of all homes
    
    1.HOME_NAME=OraHome1
      HOME_LOC=/u02/app/oracle/product/19.0.0.0/dbhome_1
      VERSION=19.24.0.0
      PATCH_LEVEL=19.24.0.0.0
      DBs installed=
       OH Backup=NOT Configured
    
    2.HOME_NAME=OraHome3
      HOME_LOC=/u02/app/oracle/product/11.2.0/dbhome_1
      VERSION=11.2.0.4
      PATCH_LEVEL=11.2.0.4.231017
      DBs installed=TEST
       Agent DB IDs=ec3f0396-d79d-450f-a9b3-aaae88b37c56
     OH Backup=NOT Configured
    
    [root@exadevdb-node01 ~]#
    
    
    

    To retrieve the DB home information, you can use the latest dbaascli command:

    
    dbaascli system getDBHomes
    
    The output is returned as JSON:
    
    
    [root@exadevdb-node01 ~]# dbaascli system getDBHomes
    DBAAS CLI version 24.3.2.0.0
    Executing command system getDBHomes
    Job id: 1530c019-99c4-467d-9b2f-be3a2b5d3cd0
    Session log: /var/opt/oracle/log/system/getDBHomes/dbaastools_2024-09-30_03-45-24-PM_342051.log
    {
      "OraHome1" : {
        "homePath" : "/u02/app/oracle/product/19.0.0.0/dbhome_1",
        "homeName" : "OraHome1",
        "version" : "19.24.0.0.0",
        "createTime" : 1724441733874,
        "updateTime" : 1724683497105,
        "unifiedAuditEnabled" : false,
        "ohNodeLevelDetails" : {
          "exadevdb-node02" : {
            "nodeName" : "exadevdb-node02",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          },
          "exadevdb-node01" : {
            "nodeName" : "exadevdb-node01",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          }
        },
        "messages" : [ ]
      },
      "OraHome2" : {
        "homePath" : "/u02/app/oracle/product/19.0.0.0/dbhome_2",
        "homeName" : "OraHome2",
        "version" : "19.24.0.0.0",
        "createTime" : 1724680153103,
        "updateTime" : 1724692594136,
        "unifiedAuditEnabled" : false,
        "ohNodeLevelDetails" : {
          "exadevdb-node02" : {
            "nodeName" : "exadevdb-node02",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          },
          "exadevdb-node01" : {
            "nodeName" : "exadevdb-node01",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          }
        },
        "messages" : [ ]
      },
      "OraHome3" : {
        "homePath" : "/u02/app/oracle/product/11.2.0/dbhome_1",
        "homeName" : "OraHome3",
        "version" : "11.2.0.4.231017",
        "createTime" : 1724683333257,
        "updateTime" : 1727708629760,
        "unifiedAuditEnabled" : false,
        "ohNodeLevelDetails" : {
          "exadevdb-node02" : {
            "nodeName" : "exadevdb-node02",
            "version" : "11.2.0.4.231017",
            "patches" : [ "25139545", "33613829", "33991024", "35638387", "22291453", "30432076", "31335037", "32327201", "32558369", "34006614", "34533061", "34698179", "35099667", "35220732", "35239280", "35313335", "35685663", "32224895", "22366322", "31228670", "21289564", "35574089" ]
          },
          "exadevdb-node01" : {
            "nodeName" : "exadevdb-node01",
            "version" : "11.2.0.4.231017",
            "patches" : [ "25139545", "33613829", "33991024", "35638387", "22291453", "30432076", "31335037", "32327201", "32558369", "34006614", "34533061", "34698179", "35099667", "35220732", "35239280", "35313335", "35685663", "32224895", "22366322", "31228670", "21289564", "35574089" ]
          }
        },
        "messages" : [ ]
      },
      "OraHome4" : {
        "homePath" : "/u02/app/oracle/product/19.0.0.0/dbhome_3",
        "homeName" : "OraHome4",
        "version" : "19.24.0.0.0",
        "createTime" : 1724695480172,
        "updateTime" : 1727709224645,
        "unifiedAuditEnabled" : false,
        "ohNodeLevelDetails" : {
          "exadevdb-node02" : {
            "nodeName" : "exadevdb-node02",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          },
          "exadevdb-node01" : {
            "nodeName" : "exadevdb-node01",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          }
        },
        "messages" : [ ]
      }
    }
    
    dbaascli execution completed
    [root@exadevdb-node01 ~]#
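
    Because getDBHomes returns JSON, it is easy to post-process when automating home selection. Below is a hedged sketch using python3 (available on the ExaCS domU images I have worked with, but verify on yours). The inlined JSON is a trimmed sample of the output above; capturing the live JSON with a sed range such as sed -n '/^{/,/^}/p' is my assumption about how to strip dbaascli's header lines.

    ```shell
    # List homeName, homePath and version from getDBHomes JSON. The JSON
    # here is a trimmed sample of the output above; in practice capture it
    # with something like: json=$(dbaascli system getDBHomes | sed -n '/^{/,/^}/p')
    json='{"OraHome1":{"homePath":"/u02/app/oracle/product/19.0.0.0/dbhome_1","version":"19.24.0.0.0"},
           "OraHome3":{"homePath":"/u02/app/oracle/product/11.2.0/dbhome_1","version":"11.2.0.4.231017"}}'
    homes=$(printf '%s' "$json" | python3 -c '
    import json, sys
    for name, d in json.load(sys.stdin).items():
        print(name, d["homePath"], d["version"])
    ')
    printf '%s\n' "$homes"
    ```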
    
    

    Create a database using dbaascli commands


    Here is a sample command to create the database. For detailed information, refer to the Oracle documentation at:  Using dbaascli for Exadata Database Service

    Since we are creating a non-container database, the createAsCDB parameter is set to false.

    
    
    /bin/dbaascli database create \
    --dbname SYNPOC \
    --dbUniqueName SYNPOC_EXA \
    --dbSID SYNPOC1 \
    --createAsCDB false \
    --dbCharset AL32UTF8 \
    --dbNCharset AL16UTF16 \
    --sgaSizeInMB 4096 \
    --pgaSizeInMB 1024 \
    --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_1 \
    --oracleHomeName OraHome1 \
    --tdeConfigMethod FILE \
    --executePrereqs no
    
    

    Sample log output

    
    
    [root@exadevdb-node01 ~]# /bin/dbaascli database create \
    > --dbname SYNPOC \
    > --dbUniqueName SYNPOC_EXA \
    > --dbSID SYNPOC \
    > --createAsCDB false \
    > --dbCharset AL32UTF8 \
    > --dbNCharset AL16UTF16 \
    > --sgaSizeInMB 4096 \
    > --pgaSizeInMB 1024 \
    > --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_1 \
    > --oracleHomeName OraHome1 \
    > --tdeConfigMethod FILE \
    > --executePrereqs no
    DBAAS CLI version 24.3.2.0.0
    Executing command database create --dbname SYNPOC --dbUniqueName SYNPOC_EXA --dbSID SYNPOC1 --createAsCDB false --dbCharset AL32UTF8 --dbNCharset AL16UTF16 --sgaSizeInMB 4096 --pgaSizeInMB 1024 --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_1 --oracleHomeName OraHome1 --tdeConfigMethod FILE --executePrereqs no
    Job id: 88f650c6-0467-4b3b-bcda-3f38eaa0ea6e
    Session log: /var/opt/oracle/log/SYNPOC/database/create/dbaastools_2024-09-30_03-51-46-PM_368426.log
    Enter SYS_PASSWORD:
    
    Enter SYS_PASSWORD (reconfirmation):
    
    Enter TDE_PASSWORD:
    
    Enter TDE_PASSWORD (reconfirmation):
    
    Loading PILOT...
    Enter SYS_PASSWORD
    ********************
    Enter SYS_PASSWORD (reconfirmation):
    **********************
    Enter TDE_PASSWORD
    ***********************
    Enter TDE_PASSWORD (reconfirmation):
    ********************
    Session ID of the current execution is: 1490
    Log file location: /var/opt/oracle/log/SYNPOC/database/create/pilot_2024-09-30_03-53-13-PM_374149
    -----------------
    Running Plugin_initialization job
    Completed Plugin_initialization job
    Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
    -----------------
    Running Default_value_initialization job
    Completed Default_value_initialization job
    -----------------
    Running Validate_input_params job
    Completed Validate_input_params job
    -----------------
    Running Validate_cpu_availability job
    Completed Validate_cpu_availability job
    -----------------
    Running Validate_asm_availability job
    Completed Validate_asm_availability job
    -----------------
    Running Validate_disk_space_availability job
    Completed Validate_disk_space_availability job
    -----------------
    Running Validate_huge_pages_availability job
    Completed Validate_huge_pages_availability job
    -----------------
    Running Validate_hostname_domain job
    Completed Validate_hostname_domain job
    -----------------
    Running Validate_crs_state job
    Completed Validate_crs_state job
    -----------------
    Running Install_db_cloud_backup_module job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Perform_dbca_prechecks job
    Completed Perform_dbca_prechecks job
    -----------------
    Running Validate_backup_report job
    Skipping. Job is detected as not applicable.
    Acquiring write lock: create_db_cloud
    -----------------
    Running Setup_acfs_volumes job
    Completed Setup_acfs_volumes job
    -----------------
    Running Setup_db_folders job
    Completed Setup_db_folders job
    -----------------
    
    Running DB_creation job
    Completed DB_creation job
    Releasing lock: create_db_cloud
    -----------------
    Running Generate_db_metadata job
    Completed Generate_db_metadata job
    -----------------
    Running Create_db_from_backup job
    Skipping. Job is detected as not applicable.
    Completed Create_db_from_backup job
    -----------------
    Running Load_db_details job
    Completed Load_db_details job
    -----------------
    Running Populate_creg job
    Completed Populate_creg job
    -----------------
    Running Register_ocids job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Run_datapatch job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Create_users_tablespace job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Configure_pdb_service job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Set_pdb_admin_user_profile job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Lock_pdb_admin_user job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Configure_flashback job
    Completed Configure_flashback job
    -----------------
    Running Update_cloud_service_recommended_config_parameters job
    Completed Update_cloud_service_recommended_config_parameters job
    -----------------
    Running Update_distributed_lock_timeout job
    Completed Update_distributed_lock_timeout job
    -----------------
    Running Configure_archiving job
    Completed Configure_archiving job
    -----------------
    Running Configure_huge_pages job
    Completed Configure_huge_pages job
    -----------------
    Running Set_credentials job
    Completed Set_credentials job
    -----------------
    Running Update_dba_directories job
    Completed Update_dba_directories job
    -----------------
    Running Set_cluster_interconnects job
    Completed Set_cluster_interconnects job
    -----------------
    Running Create_db_secure_profile job
    Completed Create_db_secure_profile job
    -----------------
    Running Set_utc_timezone job
    Completed Set_utc_timezone job
    -----------------
    Running Run_dst_post_installs job
    Completed Run_dst_post_installs job
    -----------------
    Running Enable_auditing job
    Completed Enable_auditing job
    -----------------
    Running Apply_security_measures job
    Completed Apply_security_measures job
    -----------------
    Running Set_listener_init_params job
    Completed Set_listener_init_params job
    -----------------
    Running Update_db_wallet job
    Completed Update_db_wallet job
    -----------------
    Running Add_oratab_entry job
    Completed Add_oratab_entry job
    -----------------
    Running Setup_dbaastools_schema job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Purge_rmf_configuration job
    Completed Purge_rmf_configuration job
    -----------------
    Running Configure_sqlnet_ora job
    Completed Configure_sqlnet_ora job
    -----------------
    Running Configure_tnsnames_ora job
    Completed Configure_tnsnames_ora job
    -----------------
    Running Enable_fips job
    Completed Enable_fips job
    -----------------
    Running DTRS_DB_Configure_Job job
    Completed DTRS_DB_Configure_Job job
    -----------------
    Running Restart_database job
    Completed Restart_database job
    -----------------
    Running Create_db_login_environment_file job
    Completed Create_db_login_environment_file job
    -----------------
    Running Generate_dbsystem_details job
    Acquiring native write lock: global_dbsystem_details_generation
    Releasing native lock: global_dbsystem_details_generation
    Completed Generate_dbsystem_details job
    -----------------
    Running Cleanup job
    Completed Cleanup job
    Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
    
    dbaascli execution completed
    [root@exadevdb-node01~]# 
    
    

    Conclusion

    To sum up, dbaascli is an effective command-line tool for managing the full lifecycle of a database on Oracle cloud platforms. It streamlines tasks such as creating databases, administering Oracle homes, and retrieving configuration details. By scripting routine activities with dbaascli, administrators can automate repetitive work, reduce the risk of manual errors, and keep database management consistent and efficient.
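    The lifecycle operations covered in this post map onto a handful of dbaascli subcommands. The sketch below shows representative invocations; the database name, home path, and version are placeholders, and exact flags can vary between dbaascli releases, so verify against `dbaascli <command> --help` on your own node before running anything:

```shell
# Illustrative dbaascli invocations (run as root on a database node).
# Names, paths, and versions are placeholders; flags may differ across
# dbaascli releases -- check `dbaascli <command> --help` first.

# List the Oracle homes registered on the system
dbaascli system getDBHomes

# Create a new Oracle home for a given release
dbaascli dbHome create --version 19.24.0.0.0

# Create a database in an existing home
dbaascli database create --dbName ORCL \
    --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_1

# Fetch details of an existing database
dbaascli database getDetails --dbname ORCL
```

    Each command acquires its own locks and runs the same kind of job pipeline shown in the log above, so a failed step can usually be diagnosed from the job name it reports.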
