Thursday, June 26, 2025

Streamlining TDE Key Management with Oracle Key Vault and Secure Endpoint Deployment

 






Intro


One of the core pillars of database security is robust encryption key management. Yet many organizations still store Transparent Data Encryption (TDE) keys locally, a practice that introduces significant risk because keys can be lost through disk corruption, system failure, or accidental mismanagement. Managing those keys manually across multiple Oracle and MySQL databases only compounds the challenge and increases the potential breach surface.

Oracle Key Vault (OKV) offers a modern alternative by centralizing the storage and lifecycle management of encryption keys, certificates, and secrets in a secure, policy‑driven environment that communicates with each database over KMIP. Using OKV translates to reduced development time, improved security posture, and compliance with data‑protection standards. While there are associated costs for using the service, the benefits of enhanced security and streamlined operations often outweigh the expenses.

To accelerate adoption at scale, OKV exposes a RESTful API that lets you automate installation and registration of endpoint software, dramatically shortening the time required to bring hundreds of database servers under centralized key management. For added protection, you can restrict the API to trusted IP ranges during rollout and disable the reset functionality once onboarding is complete, further narrowing the attack surface.

The remainder of this post walks through a practical, script‑driven workflow that installs the endpoint client, registers it with OKV, and provisions a default wallet, laying the foundation for secure, compliant, and easily auditable encryption across your Oracle estate.

In this article, I will walk through performing the endpoint (EP) installation via RESTful API calls.

Configure the OKV RESTful service


First, create a directory /u01/app/oracle/okvapi into which the okvrestcli package will be downloaded. Additionally, create a wallet directory that will be used to hold the downloaded OKV wallet files.

Proceed to download the REST API package using the curl command.


mkdir -p /u01/app/oracle/okvapi
mkdir -p /u01/app/oracle/okvapi/wallet

[oracle@ora-01 okvapi]$ curl -O -k https://10.0.0.180:5695/okvrestclipackage.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 4134k  100 4134k    0     0  96.1M      0 --:--:-- --:--:-- --:--:-- 96.1M


Once the download is complete, extract the package. This will create the required directory structure and files, including the configuration file that needs to be updated with OKV server details.



[oracle@ora-01 okvapi]$ ls -lrth
total 4.1M
-rw-r--r--. 1 oracle oinstall 4.1M Jun 13 14:36 okvrestclipackage.zip
[oracle@ora-01 okvapi]$ unzip okvrestclipackage.zip
Archive:  okvrestclipackage.zip
   creating: lib/
   creating: bin/
  inflating: bin/okv
  inflating: bin/okv.bat
   creating: conf/
  inflating: conf/okvrestcli_logging.properties
  inflating: conf/okvrestcli.ini
  inflating: lib/okvrestcli.jar
[oracle@ora-01 okvapi]$ 


Next, update the okvrestcli.ini file located in the conf directory with the appropriate OKV server details. Additionally, ensure that the client_wallet parameter is included and correctly set.



[oracle@ora-01 conf]$ cat okvrestcli.ini
#Provide absolute path for log_property, okv_client_config properties
[Default]
log_property=/u01/app/oracle/okvapi/conf/okvrestcli_logging.properties
server=10.0.0.180
okv_client_config=./conf/okvclient.ora
user=admin
client_wallet=/u01/app/oracle/okvapi/wallet


Create the wallet for the OKV RESTful API

If the client_wallet directory is already specified in the configuration file, the next step is to download the wallet from the Oracle Key Vault (OKV) server. This wallet is essential for secure communication between the client and OKV.

Before downloading, make sure to export the configuration file location (OKV_RESTCLI_CONFIG) and JAVA_HOME, and then execute "/u01/app/oracle/okvapi/bin/okv admin client-wallet add --client-wallet /u01/app/oracle/okvapi/wallet --wallet-user admin", as shown in the steps below.

Step 1: Set the Configuration File Path


export JAVA_HOME=/u01/app/oracle/product/19.0.0/dbhome_1/jdk
export OKV_RESTCLI_CONFIG=/u01/app/oracle/okvapi/conf/okvrestcli.ini

Step 2: Initialize the Client Wallet


Run the following command to create the client wallet:


/u01/app/oracle/okvapi/bin/okv admin client-wallet add \
  --client-wallet /u01/app/oracle/okvapi/wallet \
  --wallet-user admin


You will be prompted to enter the password. Upon success, you should see:


export JAVA_HOME=/u01/app/oracle/product/19.0.0/dbhome_1/jdk
export OKV_RESTCLI_CONFIG=/u01/app/oracle/okvapi/conf/okvrestcli.ini
[oracle@ora-01 okvapi]$ /u01/app/oracle/okvapi/bin/okv admin client-wallet add --client-wallet /u01/app/oracle/okvapi/wallet --wallet-user admin
Password:
{
  "result" : "Success"
}
[oracle@ora-01 okvapi]$

Important Note:

This command will fail if the RESTful service is not enabled in OKV. By default, the REST service is disabled to maintain security.

Example of failure message:


[oracle@dbsdpl55 bin]$ /u01/app/oracle/okvapi/bin/okv admin client-wallet add --client-wallet /u01/app/oracle/okvapi/wallet --wallet-user admin
Password:
{
  "result" : "Failure",
  "message" : "REST service is disabled"
}
[oracle@dbsdpl55 bin]$


Solution

For testing purposes, you may temporarily enable the REST service for all IP addresses. However, in a production environment, it is strongly recommended to enable the service only for specific, trusted targets. Once the installation and configuration are complete, you can disable the REST service again to maintain a secure setup.





Verifying the OKV RESTful API Installation

Use the following commands to verify the REST CLI and retrieve server details:
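The examples below use $OKV_HOME as shorthand for the REST client installation directory. Based on the directories used above, that would be set (together with the variables exported earlier) roughly as follows:

export OKV_HOME=/u01/app/oracle/okvapi
export OKV_RESTCLI_CONFIG=$OKV_HOME/conf/okvrestcli.ini
export JAVA_HOME=/u01/app/oracle/product/19.0.0/dbhome_1/jdk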


[oracle@ora-01 okvapi]$ $OKV_HOME/bin/okv
{
  "restCLIVersion" : "21.10.0.0.0"
}

[oracle@ora-01 okvapi]$  $OKV_HOME/bin/okv server info get
{
  "result" : "Success",
  "value" : {
    "caCertificateExpirationDate" : "2028-05-19 18:59:37",
    "cpuCores" : "2",
    "deploymentType" : "Cluster",
    "diskInGB" : "3695",
    "fraInGB" : "160",
    "installDate" : "2025-05-20 18:59:54",
    "memoryInKB" : "16365128",
    "platformCertificatesExpirationDate" : "2027-03-12 02:48:20",
    "serverCertificateExpirationDate" : "2026-05-20 19:03:31",
    "serverTime" : "2025-06-13 14:41:53",
    "version" : "21.10.0.0.0"
  }
}
This confirms that the OKV RESTful API is correctly installed and communicating with the server.

Configuring Endpoint (EP) via REST API


To configure OKV using RESTful calls, follow the steps below:

Prerequisites

Before provisioning the endpoint, ensure the following directory structure exists under the wallet_root:

mkdir -p /u01/app/oracle/admin/ORCL/wallet_root/tde        # Location for TDE keys
mkdir -p /u01/app/oracle/admin/ORCL/wallet_root/okv        # Location for OKV endpoint
mkdir -p /u01/app/oracle/admin/ORCL/wallet_root/tde_seps   # Location for External SSO Key
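
These directories line up with the database's WALLET_ROOT setting. If WALLET_ROOT has not yet been set for ORCL, a minimal sketch looks like the following (assuming the instance uses an spfile and OS authentication for SYSDBA; WALLET_ROOT only takes effect after a restart, and TDE_CONFIGURATION is switched to OKV later, once the endpoint software is in place and the keys are in OKV):

echo "SHOW PARAMETER wallet_root" | sqlplus -S / as sysdba
echo "ALTER SYSTEM SET wallet_root='/u01/app/oracle/admin/ORCL/wallet_root' SCOPE=SPFILE SID='*';" | sqlplus -S / as sysdba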

Step-by-Step Configuration

1. Create the Endpoint


/u01/app/oracle/okvapi/bin/okv admin endpoint create --endpoint EP_ORCL --description "$HOSTNAME, $(hostname -i)" --type ORACLE_DB --platform LINUX64


Sample Output:

export OKV_RESTCLI_CONFIG=/u01/app/oracle/okvapi/conf/okvrestcli.ini
export JAVA_HOME=/u01/app/oracle/product/19c/dbhome_1/jdk

[oracle@ora-01 wallet_root]$ /u01/app/oracle/okvapi/bin/okv admin endpoint create --endpoint EP_ORCL --description "$HOSTNAME, $(hostname -i)" --type ORACLE_DB --platform LINUX64
{
  "result" : "Success",
  "value" : {
    "status" : "PENDING",
    "locatorID" : "CBB0383C-CC14-4EA4-BF75-01981C522F74"
  }
}
[oracle@ora-01 wallet_root]$

After creating the endpoint, verify its status in the OKV console; it should display as "REGISTERED". To complete the endpoint setup, you must proceed with the provisioning step.


2. Set the Default Wallet for the Endpoint


/u01/app/oracle/okvapi/bin/okv manage-access wallet set-default \
  --wallet WLT_ORCL \
  --endpoint EP_ORCL
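
This assumes the virtual wallet WLT_ORCL already exists in OKV. If it does not, it can be created from the same CLI first; a sketch (wallet name as used in this example):

/u01/app/oracle/okvapi/bin/okv manage-access wallet create --wallet WLT_ORCL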

3. Provision the Endpoint


/u01/app/oracle/okvapi/bin/okv admin endpoint provision \
  --endpoint EP_ORCL \
  --location /u01/app/oracle/admin/ORCL/wallet_root/okv \
  --auto-login FALSE

Post-Configuration

Once the endpoint is created, you can verify its status in the OKV console. The endpoint status should change to "ENROLLED" after successful provisioning. This completes the REST-based configuration of an Oracle Database endpoint with OKV.
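
As an optional sanity check from the database host, the provisioned location now contains the endpoint software, including okvutil. Listing the endpoint's objects confirms that it can reach the OKV server (a sketch; you will be prompted for the endpoint password because auto-login was set to FALSE above):

cat /u01/app/oracle/admin/ORCL/wallet_root/okv/conf/okvclient.ora
/u01/app/oracle/admin/ORCL/wallet_root/okv/bin/okvutil list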




Conclusion

Configuring Oracle Key Vault (OKV) using RESTful API provides a secure, automated, and centralized approach to managing encryption keys for Oracle databases. By following the steps outlined for setting up the configuration, creating the required directories, defining the endpoint, associating the default wallet, and provisioning the endpoint, you establish a robust foundation for TDE and other security integrations.

Always ensure the RESTful service is enabled securely, especially in production environments, by restricting access to trusted IPs. Once the configuration is verified and functional, you can disable the REST service to further harden your setup.

This approach not only simplifies key lifecycle management but also enhances compliance and operational security across your Oracle infrastructure.

Friday, June 20, 2025

Oracle Key Vault (OKV) Endpoint (EP) Health Check: Best Practices & Troubleshooting Tips

 






Intro 

Oracle Key Vault (OKV) plays a vital role in the AI era, where data security, encryption, and regulatory compliance are more critical than ever. OKV communicates securely with its targets using KMIP (Key Management Interoperability Protocol) over TLS. In simple terms, without a properly configured OKV Endpoint (EP), OKV cannot communicate with the target system.

Installing the Endpoint is essential because it provides the secure link between the database and OKV.
 



                                             Figure 1: OKV communication with target

Before installing the OKV Endpoint agent, there are several prerequisites to consider. One of the most important is the correct folder structure. All required directories must be created directly under the wallet_root location.

Folder structure required for OKV installation. 


mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/tde        # Location for TDE keys
mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/okv        # Location for OKV endpoint
mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/tde_seps   # Location for External SSO Key


After extracting the OKV Endpoint binary, it's important to run the provided health check script. This script validates the endpoint configuration, ensuring everything is set up correctly. If there are any issues, especially with the folder structure, the health check will flag them.
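
Before running the health check, it can save time to confirm on the database side that WALLET_ROOT points to the directory created above and that a TDE keystore is visible. A minimal sketch, assuming SYSDBA OS authentication:

sqlplus -S / as sysdba <<'EOF'
SHOW PARAMETER wallet_root
SHOW PARAMETER tde_configuration
SELECT wrl_type, status, wallet_type FROM v$encryption_wallet;
EOF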

The OKV health check scripts are located in the bin directory of the extracted endpoint package. Below is a sample taken from an Oracle DBCS (Database Cloud Service) instance:

[oracle@dbsdpl55 bin]$ pwd
/opt/oracle/dcs/commonstore/wallets/PWHSE01_3c9_yyz/okv/bin
[oracle@dbsdpl55 bin]$ ls -lrth
total 8.4M
-rw-r--r-- 1 oracle oinstall    0 May 23 11:11 okv.log.0
-rwxr-x--- 1 oracle oinstall 5.3K May 23 15:03 root.sh
-rwxr-x--- 1 oracle oinstall 8.0K May 23 15:03 okvutil
-rwxr-x--- 1 oracle oinstall  29K May 23 15:03 okv_ssh_ep_lookup_authorized_keys
-rwxr-x--- 1 oracle oinstall 8.4M May 23 15:03 okveps.x64
-rwxr-x--- 1 oracle oinstall  20K May 23 15:03 ep_healthcheck.sh
[oracle@dbsdpl55 bin]$


Executing OKV Endpoint Health Check.


Once the OKV Endpoint binary is extracted and the environment is set up, you should run the health check script to validate the configuration. Below is a sample output from a DBCS environment:


 
 [oracle@dbsdpl55 bin]$ ./ep_healthcheck.sh
Verifying the environment variables...
PASSED.


Verifying if there are multiple HSM libraries...
PASSED.


WALLET_ROOT=/opt/oracle/dcs/commonstore/wallets/PWHSE01_3c9_yyz is set. Verifying whether okv,tde and tde_seps folders exist under /opt/oracle/dcs/commonstore/wallets/PWHSE01_3c9_yyz...
PASSED.


Verifying whether the environment variables set for gen0 process...
PASSED.


Verifying whether Oracle Key Vault client configuration is set correctly...
PASSED.


Verifying whether the SSL wallet location is set correctly...
PASSED.

Enter endpoint Password (for an auto login endpoint, just press Enter):
Verifying whether okvutil fetches data successfully...

Connected to 10.0.0.180:5696.
7F7407FF-651F-412B-834C-A616BC97C509    Template        Default template for PWHSE01_EP
PASSED.



Fetching Persistent cache details...
WARNING: Persistent Cache file doesn't exist. If the 'EXPIRE PKCS11 PERSISTENT CACHE ON DATABASE SHUTDOWN' parameter is configured for this endpoint then persistent cache will not be accessible. Please note that this parameter won't appear in the okvclient.ora file and can be modified only from OKV webconsole.
DONE.

Verifying whether PKCS Library is set correctly...
PASSED.


Please verify /opt/oracle/dcs/commonstore/wallets/PWHSE01_3c9_yyz/okv/log/ep_healthcheck_PWHSE01_2025.05.23-11.25.01.log for more details.

[oracle@dbsdpl55 bin]$

 
 

Conclusion

In today's AI-driven landscape, where safeguarding sensitive data is paramount, Oracle Key Vault (OKV) offers a robust and centralized approach to managing encryption keys and secrets. A correctly configured OKV Endpoint (EP) is not just a technical requirement—it is the foundation for secure communication between the database and OKV. By ensuring the proper setup of the EP, including the necessary folder structures and successful health checks, organizations can enforce strong encryption practices, maintain compliance, and confidently protect their data assets in both on-premises and cloud environments.

Wednesday, June 4, 2025

EXACS: Creating an 11g Database Home

 



Intro

Oracle Exadata Cloud Service (ExaCS) historically supported Oracle Database 11g, but with recent updates, Oracle has phased out support for older database versions like 11g in ExaCS. While existing Oracle Database 11g instances may continue to run, it is no longer possible to provision new Oracle 11g databases on Exadata Cloud Service through the Oracle Cloud Infrastructure (OCI) console. Oracle encourages users to upgrade to more recent supported versions, such as Oracle Database 19c or 23ai, to leverage enhanced features, security, and performance improvements.

Some customers continue to run Oracle 11g because they have not been able to schedule downtime due to mission-critical business applications. Additionally, Oracle Database 11g Release 2 (11.2.0.4) is considered one of the most stable releases Oracle has built. However, these customers face challenges in finding an efficient migration method, as their databases have grown to terabyte sizes.

Customers are still stuck on Oracle 11g for several key reasons:

Mission-Critical Applications: Many businesses run critical applications that require constant uptime, making it difficult to schedule downtime for upgrades or migrations.

Stable Environment: Oracle Database 11g Release 2 (11.2.0.4) is considered a highly stable release. Some businesses prefer to stick with this known stability rather than risk issues with newer versions.

Cost and Complexity: Upgrading Oracle databases, especially from 11g, can be expensive and complex. It may involve re-architecting applications, testing, and retraining staff, which some companies may not have the resources to handle.

Large Databases: Over time, databases have grown to terabyte sizes, which makes migration and upgrade processes much more challenging in terms of time, performance, and storage.

Custom Solutions and Legacy Code: Many organizations have customized solutions that are tightly integrated with Oracle 11g, and updating these systems can require significant rework, leading to compatibility issues with newer versions.

Business Prioritization: In some cases, businesses prioritize operational stability over modernization, choosing to delay upgrades to focus on other initiatives.

In this article, I will show how you can create an 11g database home in ExaCS using dbaascli.

Note: You can do this only if you still have upgrade support for 11g; Oracle no longer officially supports provisioning 11g in the ExaCS environment.

List db images in EXACS



[root@exadevdb-01 ~]# dbaascli cswlib showImages
DBAAS CLI version 24.3.1.0.0
Executing command cswlib showImages
Job id: 10706bde-2e70-42fe-a371-7cb9bd572f7c
Session log: /var/opt/oracle/log/cswLib/showImages/dbaastools_2024-09-06_05-54-57-PM_59941.log
Log file location: /var/opt/oracle/log/cswLib/showImages/dbaastools_2024-09-06_05-54-57-PM_59941.log

############ List of Available database Artifacts  #############

1.IMAGE_TAG=11.2.0.4.230418
  VERSION=11.2.0.4.230418
  DESCRIPTION=11.2 APR 2023 DB Image
2.IMAGE_TAG=12.2.0.1.230718
  VERSION=12.2.0.1.230718
  DESCRIPTION=12.2 JUL 2023 DB Image
3.IMAGE_TAG=18.23.0.0.0
  VERSION=18.23.0.0.0
  DESCRIPTION=18c JUL 2023 DB Image
4.IMAGE_TAG=19.22.0.0.0
  VERSION=19.22.0.0.0
  DESCRIPTION=19c JAN 2024 DB Image
5.IMAGE_TAG=12.1.0.2.231017
  VERSION=12.1.0.2.231017
  DESCRIPTION=12.1 OCT 2023 DB Image
6.IMAGE_TAG=23.4.0.24.05
  VERSION=23.4.0.24.05
  DESCRIPTION=23ai DB image 23.4.0.24.05
7.IMAGE_TAG=18.22.0.0.0
  VERSION=18.22.0.0.0
  DESCRIPTION=18c APR 2023 DB Image
8.IMAGE_TAG=11.2.0.4.231017
  VERSION=11.2.0.4.231017
  DESCRIPTION=11.2 OCT 2023 DB Image
9.IMAGE_TAG=12.1.0.2.230418
  VERSION=12.1.0.2.230418
  DESCRIPTION=12.1 APR 2023 DB Image
10.IMAGE_TAG=19.23.0.0.0
  VERSION=19.23.0.0.0
  DESCRIPTION=19c APR 2024 DB Image
11.IMAGE_TAG=19.20.0.0.0
  VERSION=19.20.0.0.0
  DESCRIPTION=19c JUL 2023 DB Image
12.IMAGE_TAG=23.5.0.24.07
  VERSION=23.5.0.24.07
  DESCRIPTION=23ai DB image 23.5.0.24.07
13.IMAGE_TAG=12.2.0.1.230418
  VERSION=12.2.0.1.230418
  DESCRIPTION=12.2 APR 2023 DB Image
14.IMAGE_TAG=11.2.0.4.230718
  VERSION=11.2.0.4.230718
  DESCRIPTION=11.2 JUL 2023 DB Image
15.IMAGE_TAG=12.1.0.2.230718
  VERSION=12.1.0.2.230718
  DESCRIPTION=12.1 JUL 2023 DB Image
16.IMAGE_TAG=19.24.0.0.0
  VERSION=19.24.0.0.0
  DESCRIPTION=19c JUL 2024 DB Image
17.IMAGE_TAG=12.2.0.1.231017
  VERSION=12.2.0.1.231017
  DESCRIPTION=12.2 OCT 2023 DB Image
18.IMAGE_TAG=18.24.0.0.0
  VERSION=18.24.0.0.0
  DESCRIPTION=18c OCT 2023 DB Image
19.IMAGE_TAG=19.21.0.0.0
  VERSION=19.21.0.0.0
  DESCRIPTION=19c OCT 2023 DB Image
Images can be downloaded using their image tags. For details, see help using 'dbaascli cswlib download --help'.

dbaascli execution completed
[root@exadevdb-01 ~]#



Download the 11.2.0.4 binary, OCT 2023 version.


dbaascli cswlib download --version 11204 --bp OCT2023 --bp_update no
Sample output


[root@exadevdb-01 ~]# dbaascli cswlib download --version 11204 --bp OCT2023 --bp_update no
DBAAS CLI version 24.3.2.0.0
Executing command cswlib download --version 11204 --bp OCT2023 --bp_update no
Job id: 8dab6933-9505-40a9-a398-c1ba46eae6e3
Session log: /var/opt/oracle/log/cswLib/download/dbaastools_2024-09-19_02-00-06-PM_166429.log
Loading PILOT...
Session ID of the current execution is: 40
Log file location: /var/opt/oracle/log/cswLib/download/pilot_2024-09-19_02-00-10-PM_166741
-----------------
Running initialize job
Completed initialize job
-----------------
Running validate_url_reachability job
Completed validate_url_reachability job
-----------------
Running validate_image_tag_existence job
Completed validate_image_tag_existence job
-----------------
Running validate_free_space job
Completed validate_free_space job
-----------------
Running validate_file_permissions job
Completed validate_file_permissions job
-----------------
Running download_image job
Image location=/var/opt/oracle/dbaas_acfs/dbnid/11.2.0.4.231017
Download succeeded
Completed download_image job
-----------------
Running decrypt job
Completed decrypt job
-----------------
Running post_download_update job
Completed post_download_update job
-----------------
Running verify_signature job
Skipping verifySignature check because the catalog contains signed property as false
Completed verify_signature job
-----------------
Running validate_sha256sum job
Skipping Sha256Sum check because the catalog does not contain sha256sum
Skipping Sha256Sum check because the catalog does not contain sha256sum
Completed validate_sha256sum job
-----------------
Running update_file_permissions job
Completed update_file_permissions job
{"11.2.0.4.231017":{"file_list":[{"file":"/var/opt/oracle/dbaas_acfs/dbnid/11.2.0.4.231017/db11204_bits_EXA.zip"},{"file":"/var/opt/oracle/dbaas_acfs/dbnid/11.2.0.4.231017/exadbf_11204.tar.gz"}]}}

dbaascli execution completed
[root@exadevdb-01 ~]#


List downloaded db images

After downloading the image, you can list the locally available database binary images using dbaascli cswLib listLocal.


[opc@exadevdb-01 ~]$ sudo dbaascli cswLib listLocal
DBAAS CLI version 24.3.1.0.0
Executing command cswLib listLocal
Job id: 0405333f-04d5-4043-97f6-0fa0a9eca51b
Session log: /var/opt/oracle/log/cswLib/listLocal/dbaastools_2024-09-06_06-15-17-PM_154824.log
Log file location: /var/opt/oracle/log/cswLib/listLocal/dbaastools_2024-09-06_06-15-17-PM_154824.log

############ List of Available Database Images  #############

1.IMAGE_TAG=11.2.0.4.231017
  IMAGE_SIZE=3GB
  VERSION=11.2.0.4.231017
  DESCRIPTION=11.2 OCT 2023 DB Image
2.IMAGE_TAG=19.24.0.0.0
  IMAGE_SIZE=5GB
  VERSION=19.24.0.0.0
  DESCRIPTION=19c JUL 2024 DB Image

dbaascli execution completed
[opc@exadevdb-01 ~]$

Create a new database home using 11g binary

The next step is to create a database home from the downloaded 11g image:

[root@exadevdb-01 ~]# dbaascli dbhome create --version 11.2.0.4
DBAAS CLI version 24.3.2.0.0
Executing command dbhome create --version 11.2.0.4
Job id: 615dc2e8-af82-47c1-8612-526082b6e9bf
Session log: /var/opt/oracle/log/dbHome/create/dbaastools_2024-10-10_07-30-24-PM_99121.log
Loading PILOT...
Session ID of the current execution is: 34
Log file location: /var/opt/oracle/log/dbHome/create/pilot_2024-10-10_07-30-27-PM_99403
-----------------
Running Plugin_initialization job
Acquiring native write lock: _u02_app_oracle_product_11.2.0_dbhome_1
Acquiring native write lock: OraHome2
Completed Plugin_initialization job
-----------------
Running OH_image_validate job
Completed OH_image_validate job
-----------------
Running DB_home_version_check job
Completed DB_home_version_check job
-----------------
Running DB_home_environment_version_check job
Completed DB_home_environment_version_check job
-----------------
Running GI_version_check job
Completed GI_version_check job
-----------------
Running Validate_backup_location job
Completed Validate_backup_location job
-----------------
Running Cluster_nodes_check job
Completed Cluster_nodes_check job
-----------------
Running Validate_users_group job
Completed Validate_users_group job
-----------------
Running OH_image_download job
Completed OH_image_download job
-----------------
Running OH_image_version_check job
Completed OH_image_version_check job
-----------------
Running Disk_space_check job
Completed Disk_space_check job
Acquiring write lock: provisioning
-----------------
Running Pre_OH_creation_lock_manager job
Completed Pre_OH_creation_lock_manager job
-----------------
Running OH_pre_existence_check job
Completed OH_pre_existence_check job
-----------------
Running Local_node_oh_image_unzip job
Completed Local_node_oh_image_unzip job
-----------------
Running OH_creation_cvu_prechecks job
Completed OH_creation_cvu_prechecks job
-----------------
Running Local_node_oh_clone job

Completed Local_node_oh_clone job
-----------------
Running Update_dbnid_bits job
Skipping. Job is detected as not applicable.
-----------------
Running Local_node_oh_backup_creation job
Completed Local_node_oh_backup_creation job
-----------------
Running Remote_nodes_software_copy job

Completed Remote_nodes_software_copy job
-----------------
Running Remote_nodes_attach_home job
Completed Remote_nodes_attach_home job
-----------------
Running Remote_nodes_post_install_steps job
Skipping. Job is detected as not applicable.
-----------------
Running Inventory_nodes_list_update job
Completed Inventory_nodes_list_update job
-----------------
Running Root_script_run job
Completed Root_script_run job
-----------------
Running Post_OH_creation_lock_manager job
Completed Post_OH_creation_lock_manager job
Releasing lock: provisioning
-----------------
Running release_lock job
Releasing native lock: OraHome2
Releasing native lock: _u02_app_oracle_product_11.2.0_dbhome_1
Completed release_lock job
-----------------
Running Generate_dbhome_metadata job
Completed Generate_dbhome_metadata job
-----------------
Running Generate_dbhome_system_details job
Acquiring native write lock: global_dbsystem_details_generation
Releasing native lock: global_dbsystem_details_generation
Completed Generate_dbhome_system_details job
-----------------
Running Plugin_cleanup job

Completed Plugin_cleanup job
---------- START OF PLUGIN RESULT ----------
{"ORACLE_HOME_NAME":"OraHome2","ORACLE_HOME":"/u02/app/oracle/product/11.2.0/dbhome_1"}
---------- END OF PLUGIN RESULT ----------

dbaascli execution completed
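
With the 11g home in place, a database can then be created inside it with dbaascli database create, for example (a sketch only; the database name is hypothetical, and character sets, memory, and TDE settings should be adjusted to your environment):

dbaascli database create \
  --dbname TEST11G \
  --oracleHome /u02/app/oracle/product/11.2.0/dbhome_1 \
  --oracleHomeName OraHome2 \
  --executePrereqs no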

Conclusion


While Oracle Exadata Cloud Service (ExaCS) no longer officially supports provisioning new Oracle Database 11g instances through the OCI console, it is still possible to create and manage these databases using tools like dbaascli. This approach provides a practical solution for customers who rely on the stability and performance of Oracle 11g for mission-critical applications but face challenges in scheduling downtime or migrating to newer database versions. 

However, it’s essential to recognize the limitations of continuing to use an unsupported version, such as missing out on critical security updates, performance enhancements, and new features offered by modern versions like Oracle Database 19c or 23ai. Organizations are encouraged to develop a long-term migration strategy to align with Oracle's support roadmap and ensure the sustainability of their database operations.

Tuesday, June 3, 2025

EXACS Standby - ORA-12154: TNS (Disable RFS client)

 



Intro 

In today’s data-driven world, safeguarding data is crucial. Organizations must be equipped to handle potential failures by implementing strong disaster recovery (DR) strategies. For those managing critical data, a DR site is essential. Oracle Data Guard provides a robust solution for database disaster recovery, ensuring data protection and continuous availability.

Recently, I encountered an issue while creating a standby database from an ExaCS environment. These databases were manually created in ExaCS, and we also manually registered them in the OCI console.

After configuring Data Guard, the RFS process transfers the latest archive to the standby database but then automatically transitions to a disabled state with the message "Disable RFS client." The process keeps failing because the primary database reports the error:

ORA-12154: TNS: could not resolve the connect identifier specified.

Here is the alert log information we gathered from the standby database:

ALTER SYSTEM SET log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=BOTH SID='*';
2024-12-17T05:34:15.663557+00:00
 rfs (PID:5936): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is ASYNC (PID:336925)
 rfs (PID:5936): Primary database is in MAXIMUM PERFORMANCE mode
 rfs (PID:5936): Enable RFS client [krsr.c:5782]
 rfs (PID:5936): Disable RFS client [kcrlc.c:1531]
2024-12-17T05:34:15.699574+00:00
 rfs (PID:5936): Selected LNO:21 for T-1.S-1255 dbid 4130507393 branch 1187061775
2024-12-17T05:34:18.286167+00:00
 rfs (PID:6076): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is ASYNC (PID:239908)


Interestingly, SQL*Plus connections from the primary to the standby database are working fine. This makes the issue even more puzzling, as there are no apparent connectivity problems. However, the primary database alert log continues to report ORA-12154, and even the Data Guard Broker configuration is failing.

After hours of troubleshooting, we accidentally found the relevant My Oracle Support note:

"RFS is not coming up on standby due to ORA-12154 on transport (primary side)" (Doc ID 2196182.1).

In our Exadata database, the TNS_ADMIN parameter was missing in the cluster configuration. We had to add the TNS_ADMIN parameter and restart the database to resolve the issue. Only then did it correctly pick up the TNS location.



srvctl getenv database -d <db_unique_name> -t "TNS_ADMIN"
srvctl setenv database -d <db_unique_name> -T "TNS_ADMIN=<path of directory holding the tnsnames.ora>"

Sample output

[oracle@exaprd01-node01 ]$ srvctl setenv database -d TEST2_EXA -T "TNS_ADMIN=/u02/app/oracle/product/19.0.0.0/dbhome_4/network/admin/TEST2"
[oracle@exaprd01-node01 ]$ srvctl getenv database -d TEST2_EXA -t "TNS_ADMIN"
TEST2_EXA:
TNS_ADMIN=/u02/app/oracle/product/19.0.0.0/dbhome_4/network/admin/TEST2
[oracle@exaprd01-node01 ]$
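
The new environment value is only picked up when the database is restarted through the clusterware, so restart the database after setting TNS_ADMIN and confirm the value, for example:

srvctl stop database -d TEST2_EXA
srvctl start database -d TEST2_EXA
srvctl getenv database -d TEST2_EXA -t "TNS_ADMIN"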



Conclusion 

The issue arose while setting up a standby database in an ExaCS environment, where the databases were manually created and registered in the OCI console. Despite successful SQL*Plus connections between the primary and standby databases, the RFS process repeatedly failed with ORA-12154 errors on the primary side, preventing archive log transfer.

After extensive troubleshooting, we discovered that the root cause was a missing TNS_ADMIN parameter in the cluster configuration. This prevented the database from correctly resolving TNS names. The issue was resolved by adding the TNS_ADMIN parameter and restarting the database, allowing Data Guard to function properly.

ExaCS database creation using dbaascli

 







Intro

OCI (Oracle Cloud Infrastructure) provides robust automation capabilities for routine maintenance tasks such as patching, creating database homes, and server OS updates. Oracle further simplifies these tasks by handling the staging process, reducing the administrative burden.

Additionally, Oracle has introduced several tools to streamline maintenance and management tasks, including:

  • dbaascli
  • exacli
  • dbcli
  • exadbcpatchmulti

    dbaascli is a command-line utility for managing Oracle Database Cloud Service (DBaaS) instances, mainly on Oracle Exadata Cloud. It provides several features to administer Oracle databases running on Oracle Cloud Infrastructure (OCI) and allows users to perform administrative tasks such as backup and recovery, patching, scaling, and checking the status of the database environment.

    Key Features of dbaascli:

    1. Backup and Restore:

      • Manage and schedule backups.
      • Perform on-demand backups.
      • Restore the database from backups.
    2. Patching:

      • Apply database patches.
      • Check for available patches.
      • Rollback patches if necessary.
    3. Database Management:

      • Start and stop databases.
      • Perform database health checks.
      • Manage Data Guard configurations (for high availability and disaster recovery).
    4. Storage Management:

      • Resize the database storage as required.
    5. Diagnostic Tools:

      • Collect diagnostic information to troubleshoot issues.
      • Generate diagnostic logs for support cases.
    As a DBA, it's essential to start utilizing these orchestration tools to streamline and simplify your tasks. These tools make database management more efficient by automating key processes. With dbaascli, for example, DBAs no longer need to manually download and copy binary files to servers—dbaascli will automatically download the latest binaries, making the entire process faster and smoother.

    In this article, I will demonstrate how to create a database using dbaascli.

    Note: Creating a non-CDB database is not supported via the console. The only way to create a non-CDB database is through the backend.

    Step 1: Identify the Database Home


    First, identify and note down the Database Home associated with the version you want to use for creating the database.

    For this example, since I want to create a database using the 19c home, I will record the path for the 19c Database Home.

    
    
    [root@exadevdb-node01 ~]# dbaascli dbhome info
    DBAAS CLI version 24.3.2.0.0
    Executing command dbhome info
    [INFO] [DBAAS-14011] - The usage of this command is deprecated.
       ACTION: It is recommended to use 'dbaascli dbHome getDetails or dbaascli system getDBHomes' for this operation.
    Enter a homename or just press enter if you want details of all homes
    
    1.HOME_NAME=OraHome1
      HOME_LOC=/u02/app/oracle/product/19.0.0.0/dbhome_1
      VERSION=19.24.0.0
      PATCH_LEVEL=19.24.0.0.0
      DBs installed=
       OH Backup=NOT Configured
    
    2.HOME_NAME=OraHome3
      HOME_LOC=/u02/app/oracle/product/11.2.0/dbhome_1
      VERSION=11.2.0.4
      PATCH_LEVEL=11.2.0.4.231017
      DBs installed=TEST
       Agent DB IDs=ec3f0396-d79d-450f-a9b3-aaae88b37c56
     OH Backup=NOT Configured
    
    [root@exadevdb-node01 ~]#
    
    
    

    To retrieve the DB home information, you can use the latest dbaascli command:

    
    dbaascli system getDBHomes
    
    The output is returned in JSON format:
    
    
    [root@exadevdb-node01 ~]# dbaascli system getDBHomes
    DBAAS CLI version 24.3.2.0.0
    Executing command system getDBHomes
    Job id: 1530c019-99c4-467d-9b2f-be3a2b5d3cd0
    Session log: /var/opt/oracle/log/system/getDBHomes/dbaastools_2024-09-30_03-45-24-PM_342051.log
    {
      "OraHome1" : {
        "homePath" : "/u02/app/oracle/product/19.0.0.0/dbhome_1",
        "homeName" : "OraHome1",
        "version" : "19.24.0.0.0",
        "createTime" : 1724441733874,
        "updateTime" : 1724683497105,
        "unifiedAuditEnabled" : false,
        "ohNodeLevelDetails" : {
          "exadevdb-node02" : {
            "nodeName" : "exadevdb-node02",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          },
          "exadevdb-node01" : {
            "nodeName" : "exadevdb-node01",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          }
        },
        "messages" : [ ]
      },
      "OraHome2" : {
        "homePath" : "/u02/app/oracle/product/19.0.0.0/dbhome_2",
        "homeName" : "OraHome2",
        "version" : "19.24.0.0.0",
        "createTime" : 1724680153103,
        "updateTime" : 1724692594136,
        "unifiedAuditEnabled" : false,
        "ohNodeLevelDetails" : {
          "exadevdb-node02" : {
            "nodeName" : "exadevdb-node02",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          },
          "exadevdb-node01" : {
            "nodeName" : "exadevdb-node01",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          }
        },
        "messages" : [ ]
      },
      "OraHome3" : {
        "homePath" : "/u02/app/oracle/product/11.2.0/dbhome_1",
        "homeName" : "OraHome3",
        "version" : "11.2.0.4.231017",
        "createTime" : 1724683333257,
        "updateTime" : 1727708629760,
        "unifiedAuditEnabled" : false,
        "ohNodeLevelDetails" : {
          "exadevdb-node02" : {
            "nodeName" : "exadevdb-node02",
            "version" : "11.2.0.4.231017",
            "patches" : [ "25139545", "33613829", "33991024", "35638387", "22291453", "30432076", "31335037", "32327201", "32558369", "34006614", "34533061", "34698179", "35099667", "35220732", "35239280", "35313335", "35685663", "32224895", "22366322", "31228670", "21289564", "35574089" ]
          },
          "exadevdb-node01" : {
            "nodeName" : "exadevdb-node01",
            "version" : "11.2.0.4.231017",
            "patches" : [ "25139545", "33613829", "33991024", "35638387", "22291453", "30432076", "31335037", "32327201", "32558369", "34006614", "34533061", "34698179", "35099667", "35220732", "35239280", "35313335", "35685663", "32224895", "22366322", "31228670", "21289564", "35574089" ]
          }
        },
        "messages" : [ ]
      },
      "OraHome4" : {
        "homePath" : "/u02/app/oracle/product/19.0.0.0/dbhome_3",
        "homeName" : "OraHome4",
        "version" : "19.24.0.0.0",
        "createTime" : 1724695480172,
        "updateTime" : 1727709224645,
        "unifiedAuditEnabled" : false,
        "ohNodeLevelDetails" : {
          "exadevdb-node02" : {
            "nodeName" : "exadevdb-node02",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          },
          "exadevdb-node01" : {
            "nodeName" : "exadevdb-node01",
            "version" : "19.24.0.0.0",
            "patches" : [ "34697081", "36538667", "36414915", "36587798", "36582781" ]
          }
        },
        "messages" : [ ]
      }
    }
    
    dbaascli execution completed
    [root@exadevdb-node01 ~]#
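
    Because the home details come back as JSON, they are easy to post-process. A rough sketch (assuming jq is installed and that the JSON block begins and ends at the first column of the output) that pulls out each home's path and version:

    sudo dbaascli system getDBHomes \
      | sed -n '/^{/,/^}/p' \
      | jq -r 'to_entries[] | "\(.key)  \(.value.homePath)  \(.value.version)"'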
    
    

    Create a database using dbaascli commands


    Here is a sample command to create the database. For detailed information, refer to the Oracle documentation at:  Using dbaascli for Exadata Database Service

    Since we are creating a non-container database, the createAsCDB parameter is set to false.

    
    
    /bin/dbaascli database create \
    --dbname SYNPOC \
    --dbUniqueName SYNPOC_EXA \
    --dbSID SYNPOC1 \
    --createAsCDB false \
    --dbCharset AL32UTF8 \
    --dbNCharset AL16UTF16 \
    --sgaSizeInMB 4096 \
    --pgaSizeInMB 1024 \
    --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_1 \
    --oracleHomeName OraHome1 \
    --tdeConfigMethod FILE \
    --executePrereqs no
    
    

    Sample log output

    
    
    [root@exadevdb-node01 ~]# /bin/dbaascli database create \
    > --dbname SYNPOC \
    > --dbUniqueName SYNPOC_EXA \
    > --dbSID SYNPOC \
    > --createAsCDB false \
    > --dbCharset AL32UTF8 \
    > --dbNCharset AL16UTF16 \
    > --sgaSizeInMB 4096 \
    > --pgaSizeInMB 1024 \
    > --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_1 \
    > --oracleHomeName OraHome1 \
    > --tdeConfigMethod FILE \
    > --executePrereqs no
    DBAAS CLI version 24.3.2.0.0
    Executing command database create --dbname SYNPOC --dbUniqueName SYNPOC_EXA --dbSID SYNPOC1 --createAsCDB false --dbCharset AL32UTF8 --dbNCharset AL16UTF16 --sgaSizeInMB 4096 --pgaSizeInMB 1024 --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_1 --oracleHomeName OraHome1 --tdeConfigMethod FILE --executePrereqs no
    Job id: 88f650c6-0467-4b3b-bcda-3f38eaa0ea6e
    Session log: /var/opt/oracle/log/SYNPOC/database/create/dbaastools_2024-09-30_03-51-46-PM_368426.log
    Enter SYS_PASSWORD:
    
    Enter SYS_PASSWORD (reconfirmation):
    
    Enter TDE_PASSWORD:
    
    Enter TDE_PASSWORD (reconfirmation):
    
    Loading PILOT...
    Enter SYS_PASSWORD
    ********************
    Enter SYS_PASSWORD (reconfirmation):
    **********************
    Enter TDE_PASSWORD
    ***********************
    Enter TDE_PASSWORD (reconfirmation):
    ********************
    Session ID of the current execution is: 1490
    Log file location: /var/opt/oracle/log/SYNPOC/database/create/pilot_2024-09-30_03-53-13-PM_374149
    -----------------
    Running Plugin_initialization job
    Completed Plugin_initialization job
    Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
    -----------------
    Running Default_value_initialization job
    Completed Default_value_initialization job
    -----------------
    Running Validate_input_params job
    Completed Validate_input_params job
    -----------------
    Running Validate_cpu_availability job
    Completed Validate_cpu_availability job
    -----------------
    Running Validate_asm_availability job
    Completed Validate_asm_availability job
    -----------------
    Running Validate_disk_space_availability job
    Completed Validate_disk_space_availability job
    -----------------
    Running Validate_huge_pages_availability job
    Completed Validate_huge_pages_availability job
    -----------------
    Running Validate_hostname_domain job
    Completed Validate_hostname_domain job
    -----------------
    Running Validate_crs_state job
    Completed Validate_crs_state job
    -----------------
    Running Install_db_cloud_backup_module job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Perform_dbca_prechecks job
    Completed Perform_dbca_prechecks job
    -----------------
    Running Validate_backup_report job
    Skipping. Job is detected as not applicable.
    Acquiring write lock: create_db_cloud
    -----------------
    Running Setup_acfs_volumes job
    Completed Setup_acfs_volumes job
    -----------------
    Running Setup_db_folders job
    Completed Setup_db_folders job
    -----------------
    
    Running DB_creation job
    Completed DB_creation job
    Releasing lock: create_db_cloud
    -----------------
    Running Generate_db_metadata job
    Completed Generate_db_metadata job
    -----------------
    Running Create_db_from_backup job
    Skipping. Job is detected as not applicable.
    Completed Create_db_from_backup job
    -----------------
    Running Load_db_details job
    Completed Load_db_details job
    -----------------
    Running Populate_creg job
    Completed Populate_creg job
    -----------------
    Running Register_ocids job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Run_datapatch job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Create_users_tablespace job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Configure_pdb_service job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Set_pdb_admin_user_profile job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Lock_pdb_admin_user job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Configure_flashback job
    Completed Configure_flashback job
    -----------------
    Running Update_cloud_service_recommended_config_parameters job
    Completed Update_cloud_service_recommended_config_parameters job
    -----------------
    Running Update_distributed_lock_timeout job
    Completed Update_distributed_lock_timeout job
    -----------------
    Running Configure_archiving job
    Completed Configure_archiving job
    -----------------
    Running Configure_huge_pages job
    Completed Configure_huge_pages job
    -----------------
    Running Set_credentials job
    Completed Set_credentials job
    -----------------
    Running Update_dba_directories job
    Completed Update_dba_directories job
    -----------------
    Running Set_cluster_interconnects job
    Completed Set_cluster_interconnects job
    -----------------
    Running Create_db_secure_profile job
    Completed Create_db_secure_profile job
    -----------------
    Running Set_utc_timezone job
    Completed Set_utc_timezone job
    -----------------
    Running Run_dst_post_installs job
    Completed Run_dst_post_installs job
    -----------------
    Running Enable_auditing job
    Completed Enable_auditing job
    -----------------
    Running Apply_security_measures job
    Completed Apply_security_measures job
    -----------------
    Running Set_listener_init_params job
    Completed Set_listener_init_params job
    -----------------
    Running Update_db_wallet job
    Completed Update_db_wallet job
    -----------------
    Running Add_oratab_entry job
    Completed Add_oratab_entry job
    -----------------
    Running Setup_dbaastools_schema job
    Skipping. Job is detected as not applicable.
    -----------------
    Running Purge_rmf_configuration job
    Completed Purge_rmf_configuration job
    -----------------
    Running Configure_sqlnet_ora job
    Completed Configure_sqlnet_ora job
    -----------------
    Running Configure_tnsnames_ora job
    Completed Configure_tnsnames_ora job
    -----------------
    Running Enable_fips job
    Completed Enable_fips job
    -----------------
    Running DTRS_DB_Configure_Job job
    Completed DTRS_DB_Configure_Job job
    -----------------
    Running Restart_database job
    Completed Restart_database job
    -----------------
    Running Create_db_login_environment_file job
    Completed Create_db_login_environment_file job
    -----------------
    Running Generate_dbsystem_details job
    Acquiring native write lock: global_dbsystem_details_generation
    Releasing native lock: global_dbsystem_details_generation
    Completed Generate_dbsystem_details job
    -----------------
    Running Cleanup job
    Completed Cleanup job
    Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
    
    dbaascli execution completed
    [root@exadevdb-node01~]# 
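
    After the job completes, a quick check confirms that the instances are registered with the clusterware and that the database was created as a non-CDB (a sketch; the home, SID, and unique name follow the values used above):

    export ORACLE_HOME=/u02/app/oracle/product/19.0.0.0/dbhome_1
    export ORACLE_SID=SYNPOC1
    $ORACLE_HOME/bin/srvctl status database -d SYNPOC_EXA
    echo 'SELECT name, cdb, open_mode FROM v$database;' | $ORACLE_HOME/bin/sqlplus -S / as sysdba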
    
    

    Conclusion

    To sum up, dbaascli is an effective command-line tool for handling the various tasks involved in the lifecycle of a database in the Oracle cloud. It makes tasks such as creating databases, administering database homes, and fetching configuration information effortless. With dbaascli, administrators can automate most routine activities, cut down on errors, and increase efficiency, making database management smooth and effective.

    Monday, June 2, 2025

    Protecting Your Encryption Keys (OKV - Part 2)

     






    Intro 

    The Role of Oracle Key Vault in Securing Encryption Keys

    One of the fundamental aspects of database security is effective key management. A common yet risky practice is storing Transparent Data Encryption (TDE) keys locally. This approach exposes organizations to data loss due to disk corruption or system failures. Furthermore, manually managing encryption keys across multiple Oracle and MySQL databases can be complex and prone to errors, increasing security risks.

    Oracle Key Vault (OKV) provides a centralized, secure repository for managing encryption keys, certificates, and secrets. It plays a vital role in enhancing security by:

    • Protecting Encryption Keys: OKV ensures that TDE keys and other critical credentials are securely stored, mitigating risks associated with local key storage.

    • Automating Key Management: By streamlining key distribution and rotation, OKV reduces operational overhead and human error.

    • Enhancing Compliance: Many regulatory frameworks require robust encryption key management. OKV helps organizations meet compliance standards by providing a secure and auditable key management solution.

    • Supporting Disaster Recovery: Backing up encryption keys to an external, secure repository is crucial for disaster recovery. OKV facilitates seamless recovery processes, ensuring business continuity in case of system failures.

    Strengthening Security Best Practices

    To mitigate the risks associated with security breaches and ensure robust protection of sensitive data, organizations should adopt the following best practices:

    1. Implement Centralized Key Management: Use Oracle Key Vault or similar solutions to securely store and manage encryption keys.

    2. Enforce Strong Access Controls: Restrict access to encryption keys and credentials based on the principle of least privilege.

    3. Regularly Rotate Keys and Passwords: Frequent key and password rotation reduces the risk of prolonged exposure in the event of a breach.

    4. Monitor for Anomalous Activity: Utilize security monitoring tools to detect unauthorized access attempts and potential threats.

    5. Backup Encryption Keys Securely: Store backups in an encrypted, external repository to ensure data recovery in case of corruption or loss.

    Initial setup.

    In the previous article, we set up the admin user to connect to Oracle Key Vault (OKV). Now, use the admin account to log in to the OKV console.

    If you need installation guidance, refer to: Protecting Your Encryption Keys: Lessons from the Oracle Cloud Security Breach (OKV - Part 1).


    For endpoint creation and registration, I refer to the links below. They provide valuable insights and are highly useful for understanding the concept.

    - Installation

    - DataSheet

    - Setup Oracle Key Vault 21.9 in Guest VM



                                                                 Figure 1: Initial login page


    We need to create an endpoint in Oracle Key Vault (OKV). In this example, I have created a database named TWHSE01. Before proceeding, I will first set up the following folder structure.


    Note: wallet_root is a special folder where you keep all the OKV files; best practice is to keep the OKV EP installation under the wallet_root folder.

    
    mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/tde        # Location for TDE keys
    mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/okv        # Location for OKV endpoint
    mkdir -p /u01/app/oracle/admin/TCDB/wallet_root/tde_seps   # Location for External SSO Key
    


    Create an OKV endpoint (EP) 

    Overview of EP 

    Endpoints are Oracle Key Vault clients that securely store and retrieve security objects such as keys, passwords, certificates, and credential files.

    These endpoints can include Oracle database servers, Oracle middleware servers, operating systems, and more. They use Oracle Key Vault for long-term secret retention, secure sharing with trusted peers, and on-demand retrieval.

    Oracle Key Vault also provides a library that enables Transparent Data Encryption (TDE) to communicate with it. While Oracle Enterprise Manager can manage database server endpoints in Oracle Key Vault, it does not support TDE integration with Key Vault.


     
    Figure 2: Create OKV endpoint


    This step follows the creation of the endpoint for the TWHSE01 database.



                                                           Figure 3: Endpoint 

    Default wallet

    Create a default wallet and click Save. In this example, I have created WL_TWHSE01.


                                                       Figure 4: Create default wallet


    Download EP 

    Before downloading, be sure to note the token key, as it is required to download the endpoint software. The token key is shown in the endpoint details.


                                                                    

                                                            Figure 5: Token 

                                           

    Log out of Oracle Key Vault, return to the main login window, and click Endpoint Enrollment and Software Download.



                                                          Figure 6: Download EP software.
     

    After logging in, you will be redirected to the Enroll Endpoint & Download Software page. Enter the token key and click Enroll to download the endpoint software.



                                                      Figure 7: Add a token to download EP software.                                      

                                                               

    Install the Endpoint

    This file is crucial. After installation, be sure to remove it from the server. To simplify identification, name each endpoint software after the corresponding database.

    In this example, I copied the file to the wallet_root/okv folder and renamed it to okvclient_TWHSE01.jar.

    Note: For security best practices, ensure you delete this file after installation to prevent potential exploitation by hackers.

    For endpoint installation, I am using 0KV2025! as the endpoint and external OKV password. Since TDE is file-based, I will use F1LE2025!.

    • Endpoint password: 0KV2025!

    • TDE password: F1LE2025!

    Use the command below to install:

    
    $ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/admin/TWHSE01/wallet_root/okv/okvclient_TWHSE01.jar -d /u01/app/oracle/admin/TWHSE01/wallet_root/okv -v
            

    Sample installation output

            
    [oracle@crs01 wallet_root]$ $ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/admin/TWHSE01/wallet_root/okv/okvclient_TWHSE01.jar -d /u01/app/oracle/admin/TWHSE01/wallet_root/okv -v
    Detected JAVA_HOME: /u01/app/oracle/product/19.0.0/dbhome_1/jdk
    Detected ORACLE_HOME: /u01/app/oracle/product/19.0.0/dbhome_1
    Detected ORACLE_BASE: /u01/app/oracle
    Using OKV_HOME: /u01/app/oracle/admin/TWHSE01/wallet_root/okv
    Please set environment variables ORACLE_HOME, ORACLE_BASE, and OKV_HOME
    consistently across processes.
    Enter new Key Vault endpoint password ( for auto-login): 0KV2025!
    Confirm new Key Vault endpoint password: 0KV2025!
    The endpoint software for Oracle Key Vault installed successfully.
    Deleted the file : /u01/app/oracle/admin/TWHSE01/wallet_root/okv/okvclient_TWHSE01.jar
    [oracle@crs01 wallet_root]$
            
            
            
    After the extract process is complete, run the root.sh script. This script creates the directory tree /opt/oracle/extapi/64/hsm/oracle/1.0.0, sets the appropriate ownership and permissions, and copies the PKCS#11 library into the directory.

    The PKCS#11 library liborapkcs.so is used by Oracle Database to communicate with OKV.

    Sample root.sh Script execution output

    
            
    [root@crs01 oracle]#/u01/app/oracle/admin/TWHSE01/wallet_root/okv/bin/root.sh
    Creating directory: /opt/oracle/extapi/64/hsm/oracle/1.0.0/
    Copying PKCS library to /opt/oracle/extapi/64/hsm/oracle/1.0.0/
    Setting PKCS library file permissions
    Installation successful.
    [root@crs01 oracle]#
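
    A quick way to confirm that the library landed where the database expects it:

    ls -l /opt/oracle/extapi/64/hsm/oracle/1.0.0/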
    
            
            

    Before the installation, the endpoint status was REGISTERED.


    After the installation, the endpoint status changed to ENROLLED.



    Environment Variables.

    To complete the installation, we must properly set up all the environment variables. When migrating TDE to OKV, the environment variables should be configured so that the correct paths are picked up.

    Important variables are ORACLE_HOME, ORACLE_SID, OKV_HOME, and JAVA_HOME.

    
            
    export ORACLE_BASE=/u01/app/oracle
    export DB_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1
    export ORACLE_HOME=$DB_HOME
    export ORACLE_SID=TWHSE01
    export ORACLE_TERM=xterm
    export OKV_HOME=/u01/app/oracle/admin/TWHSE01/wallet_root/okv
    export JAVA_HOME=/u01/app/oracle/product/19.0.0/dbhome_1/jdk
    
            
            

    Upload TDE keys to OKV

    Now, let's upload the TDE keys to the endpoint using the following command to add them to the OKV wallet.

    
            
    [oracle@crs01 bin]$ ./okvutil upload -h
    Usage 1: okvutil upload -l <location> -t <type> [-o] [-g <group>]
      type := WALLET | JKS | JCEKS
    Usage 2: okvutil upload -l <location> -t <type> [-o] [-g <group>] [-d <description>]
      type := SSH | KERBEROS | TDE_KEY_BYTES | OTHER
    Usage 3: okvutil upload -l <location> -t <type> -U <SSH-user> -L <length> [-g <group>] [-i <SSH-private-key-id>] [-d <description>]
      type := SSH_PUBLIC_KEY
    Usage 4: okvutil upload -l <location> -t <type> -U <SSH-user> -L <length> [-g <group>] [-d <description>]
      type := SSH_PRIVATE_KEY
    Description:
      Use the upload command to upload data to the server.
    Options:
      -l, --location <location>
            Read information from <location>
            For wallets, point to the directory containing the wallet.
            For all other types, point to the file.
      -t, --type <type>
            Type of store.
            type := WALLET | JKS | JCEKS | SSH | KERBEROS | TDE_KEY_BYTES | SSH_PUBLIC_KEY | SSH_PRIVATE_KEY | OTHER
      -o, --overwrite
            Overwrite any conflicting data on the server with the data to upload.
      -g, --group <group>
            Name of object group (Oracle Key Vault virtual wallet).
            Note that group must exist and the endpoint must have sufficient access privileges.
      -d, --description <description>
            Add a free-form description. This option is only valid when the source file type is one that Oracle Key Vault stores as a single object (OTHER, KERBEROS, SSH, TDE_KEY_BYTES, SSH_PUBLIC_KEY).
      -i, --item <SSH-private-key-id>
            ID of the SSH private key linked to the SSH public key.
      -U, --ssh-user <SSH-user>
            SSH user who owns the SSH public or private key.
      -L, --length <length>
            Length (in bits) of the SSH public or private key to be uploaded.
    Example:
      * okvutil upload -l . -t wallet -g Group1
      * okvutil upload -l foo.txt -t other -d description
      * okvutil upload -l tde_key_bytes.txt -t tde_key_bytes -d master_key_for_db
      * okvutil upload -l ./keystore.jks -t jks -g Group2
    [oracle@crs01 bin]$
            
            

    You can upload the TDE keys to the OKV virtual wallet as follows:
            
    /u01/app/oracle/admin/SWHSE01/okv/bin/okvutil upload -t WALLET -l /u01/app/oracle/admin/SWHSE01/wallet/tde -g WL_SWHSE01 -v 4
            
        

    Sample output:

            
    [oracle@crs01 bin]$ pwd
    /u01/app/oracle/admin/SWHSE01/okv/bin
    [oracle@crs01 bin]$ /u01/app/oracle/admin/SWHSE01/okv/bin/okvutil upload -t WALLET -l /u01/app/oracle/admin/SWHSE01/wallet/tde  -g WL_SWHSE01 -v 4
    okvutil version 21.10.0.0.0
    Endpoint type: Oracle Database
    Configuration file: /u01/app/oracle/admin/SWHSE01/okv/conf/okvclient.ora
    Server: 192.168.56.210:5696
    Standby Servers:
    Uploading from /u01/app/oracle/admin/SWHSE01/wallet/tde
    Enter source wallet password:
    No auto-login wallet found, password needed
    Enter Oracle Key Vault endpoint password:
    ORACLE.SECURITY.ID.ENCRYPTION.
    Trying to connect to 192.168.56.210:5696 ...
    Connected to 192.168.56.210:5696.
    ORACLE.SECURITY.KB.ENCRYPTION.
    Trying to connect to 192.168.56.210:5696 ...
    Connected to 192.168.56.210:5696.
    ORACLE.SECURITY.KM.ENCRYPTION.AUjX3D9pzU9xv0601AarqbMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    ORACLE.SECURITY.DB.ENCRYPTION.AUjX3D9pzU9xv0601AarqbMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY
    
    Uploaded 1 TDE keys
    Uploaded 0 SEPS entries
    Uploaded 0 other secrets
    Uploaded 3 opaque objects
    
    Uploading private key
    Uploading certificate request
    Uploading trust points
    
    Uploaded 1 private keys
    Uploaded 1 certificate requests
    Uploaded 0 user certificates
    Uploaded 0 trust points
    
    Upload succeeded
    [oracle@crs01 bin]$
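
    The inverse operation is also available if you ever need to restore the wallet contents from OKV to a local directory. A minimal sketch, assuming the target directory exists and that you will be prompted for a new wallet password:

    # Download the contents of the WL_SWHSE01 virtual wallet into a new local wallet
    /u01/app/oracle/admin/SWHSE01/okv/bin/okvutil download -l /tmp/wallet_restore -t WALLET -g WL_SWHSE01 -v 4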
            
            


    Migrate TDE keys to OKV


    Add a secret to allow use of "External Store".

    I will securely store the OKV password in the keystore as a secret, allowing the use of the EXTERNAL STORE option instead of manually entering the password.

    Next, I will create a new master key in Oracle Key Vault (OKV) and migrate the existing encryption keys to OKV. The secret (the OKV endpoint password) will be stored in a keystore in the following directory: /u01/app/oracle/admin/TWHSE01/wallet_root/tde_seps

    
    ADMINISTER KEY MANAGEMENT ADD SECRET '0KV2025!' FOR CLIENT 'OKV_PASSWORD' TO LOCAL AUTO_LOGIN KEYSTORE '/u01/app/oracle/admin/TWHSE01/wallet_root/tde_seps';
            

    Note: As mentioned in the previous post:
    • The keystore must be located in a subdirectory of WALLET_ROOT named "tde_seps" to be recognized.
    • The "FOR CLIENT" entry must be 'OKV_PASSWORD' for proper detection.
    • The keystore must be set to AUTO_LOGIN to ensure it can be opened and used automatically.
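
    With the tde_seps secret in place, later ADMINISTER KEY MANAGEMENT commands can reference EXTERNAL STORE instead of the clear-text password. A minimal example (run it only after the OKV configuration below is completed):

    ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY EXTERNAL STORE;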

    Enabling Auto Login for Oracle Key Vault (OKV) Keystore

    To streamline access to the Oracle Key Vault (OKV) keystore, I will store the OKV password as a secret within the keystore. This setup enables AUTO_LOGIN, eliminating the need for manual password entry when accessing the OKV keystore.

    The database can securely authenticate with OKV by configuring auto-login, ensuring seamless key management while maintaining strong encryption security.

    Note: Before creating a new OKV SSO file, back up the current SSO file.
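
    A minimal way to do that from the operating system, assuming the auto-login wallet lives under WALLET_ROOT/tde:

    # Keep a timestamped copy of the current auto-login wallet before modifying it
    cp /u01/app/oracle/admin/TWHSE01/wallet_root/tde/cwallet.sso \
       /u01/app/oracle/admin/TWHSE01/wallet_root/tde/cwallet.sso.$(date +%Y%m%d%H%M%S)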

    ADMINISTER KEY MANAGEMENT ADD SECRET '0KV2025!' FOR CLIENT 'HSM_PASSWORD' TO AUTO_LOGIN KEYSTORE '/u01/app/oracle/admin/TWHSE01/wallet_root/tde';
          
    The parameter "KEYSTORE_CONFIGURATION=OKV|FILE" means that the database retrieves the encryption key from OKV and the auto-login file cwallet.sso from the local disk.
        
    alter system set tde_configuration = "KEYSTORE_CONFIGURATION=OKV|FILE" scope=both sid='*';
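
    Before moving on, you can confirm that the parameter took effect; a quick check from SQL*Plus:

    SQL> show parameter tde_configuration
    -- The VALUE column should read KEYSTORE_CONFIGURATION=OKV|FILE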
        
        

    Validate the wallet status

    
    SQL> set lines 600
    SQL> col WALLET for a20
    SQL> col WALLET_LOCATION for a80
    SQL> select WRL_TYPE wallet, status, WALLET_TYPE, wrl_parameter wallet_location, KEYSTORE_MODE from v$encryption_wallet;
    
    WALLET               STATUS                         WALLET_TYPE          WALLET_LOCATION                                                                  KEYSTORE
    -------------------- ------------------------------ -------------------- -------------------------------------------------------------------------------- --------
    FILE                 OPEN                           UNKNOWN              /u01/app/oracle/admin/SWHSE01/wallet/tde/                                        NONE
    OKV                  CLOSED                         UNKNOWN                                                                                               NONE
    FILE                 OPEN                           UNKNOWN                                                                                               UNITED
    OKV                  CLOSED                         UNKNOWN                                                                                               UNITED
    FILE                 OPEN                           UNKNOWN                                                                                               UNITED
    OKV                  CLOSED                         UNKNOWN                                                                                               UNITED
    
    6 rows selected.
    
    SQL>
    
        
    
    Before migrating keys, ensure the wallet is open. To open the wallet using OKV, you must first close the file-based auto-login wallet.
    
        ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE;
        ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "0KV2025!"; -- OKV file based password 
        

    After this, validate the wallet status again: both the OKV keystore and the file-based wallet should show "OPEN_NO_MASTER_KEY".
    TDE status after the TDE_CONFIGURATION change:

    
        
    WALLET               STATUS                         WALLET_TYPE          WALLET_ORDER         WALLET_LOCATION                                                                  KEYSTORE
    -------------------- ------------------------------ -------------------- -------------------- -------------------------------------------------------------------------------- --------
    FILE                 OPEN_NO_MASTER_KEY             AUTOLOGIN            SINGLE               /u01/app/oracle/admin/TWHSE01/wallet_root/tde/                                   NONE
    OKV                  OPEN_NO_MASTER_KEY             OKV                  SINGLE     
    SQL>
       
        

    Migrate the FILE local wallet to OKV

    "Now, execute the key migration command. This command must complete successfully to ensure full integration with OKV.
    
        
    SQL> ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY "0KV2025!" MIGRATE USING "F1LE2025!"  WITH BACKUP;
    keystore altered.
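
    A complementary check, once the migration command returns, is to query V$ENCRYPTION_KEYS; a sketch, with columns trimmed for readability:

    SQL> select KEY_ID, KEYSTORE_TYPE, ACTIVATION_TIME from V$ENCRYPTION_KEYS order by ACTIVATION_TIME;
    -- The most recently activated master key should report a KEYSTORE_TYPE of OKV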
        
    

    Validate wallet status


    Once the TDE key migration is complete, the status should show OKV as the PRIMARY keystore and FILE as the SECONDARY.

    
        
    SQL> @tde_status.sql
    
    WALLET               STATUS                         WALLET_TYPE          WALLET_ORDER         WALLET_LOCATION                                                                  KEYSTORE
    -------------------- ------------------------------ -------------------- -------------------- -------------------------------------------------------------------------------- --------
    FILE                 OPEN                           AUTOLOGIN            SECONDARY            /u01/app/oracle/admin/TWHSE01/wallet_root/tde/                                   NONE
    OKV                  OPEN                           OKV                  PRIMARY                                                                                               NONE
    
    SQL>
    
        
        

    Conclusion

    The recent Oracle Cloud security incident serves as a stark reminder of the evolving threat landscape. While Oracle Cloud provides strong security capabilities, organizations must take a proactive approach to encryption key management and data protection. Implementing solutions like Oracle Key Vault enhances security, streamlines compliance, and safeguards sensitive information against potential breaches. By prioritizing robust key management strategies, organizations can significantly reduce risks and ensure the integrity of their cloud environments.

    EXACS: Resolving PDB Console Issue

     





    Intro 


    The roles of DBAs and architects are continually evolving, becoming more technical and requiring broader expertise. For Oracle DBAs, it's no longer just about managing databases; there's a growing need to understand and work with the OCI (Oracle Cloud Infrastructure) GUI as well. This is because many OCI services depend on the database being properly visible in the OCI console to function correctly.

    Recently, I faced an issue after migrating a database from on-premises to the cloud in an Exadata Cloud Service (ExaCS) deployment. The pluggable database (PDB) did not appear in the OCI console.

    I followed the steps below for the migration:

  • Created a dummy database directly from the OCI console.
  • Deleted this dummy database at the backend.
  • Used RMAN to perform a duplicate operation.

  • After the RMAN duplicate, the PDB (pluggable database) was correctly visible under the CDB (container database). However, the status in the OCI console was still displayed as "failed."

    This highlights the importance of understanding the OCI console and ensuring proper configuration post-migration to avoid operational disruptions.


    As shown in Figure 1, the OCI console displays the PDB state as "failed".

                                                        Figure 1: PDB state from the OCI console.

    Troubleshooting steps 


    If a Pluggable Database (PDB) resource appears in a Failed status in the Cloud Console but remains available in the VM, refer to the following My Oracle Support (MOS) note for details on this known issue:

    • MOS Note: Pluggable Database (PDB) Resource Shows Failed Status in Cloud Console While It Is Available in VM (Doc ID 2855062.1).

    Step 1: Validate the CON_UID

    Log in to the database and run the following query to verify the Container UID (CON_UID):

    SELECT CON_UID, NAME FROM V$PDBS;

    
    
    SQL> l
      1* select  CON_UID, NAME from V$PDBS
    SQL> /
    
       CON_UID NAME
    ---------- --------------------------------------------------------------------------------------------------------------------------------
    2368539052 PDB$SEED
    1479666877 EGDEV_PDB1
    2175707955 EGQA_PDB1
    
    SQL>
    
    

    Step 2: Confirm the PDB Details

    
    
    dbaascli pdb getDetails --dbname EGPRD --pdbUID 2297085946
    
    

    Step 3: Handle Orphaned PDB Records

  • If the above command returns a valid result, the PDB entry still exists in the backend.
  • In such a case, you may need to delete the orphaned database entry manually.

    Step 4: Delete Manually


    To manually delete the PDB using dbaascli, run the following command:

    
    dbaascli pdb delete --dbName  --pdbUID 2297085946
    

    Important Note:

    🚨 Before deleting any PDB, always verify the CON_UID returned from the database.

    • If the CON_UID from V$PDBS matches the one being checked, you are dealing with a live PDB.
    • Do NOT delete the database in this case, as it is currently active.

    If the CON_UID in the Cloud Console is different from the one in the database, you can safely delete the PDB from the console.

    Once deleted, wait 15-20 minutes, and the correct PDB will be reflected in the Cloud Console.
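
    If you prefer the command line to the console for this verification, the OCI CLI can list the PDB resources attached to a container database. This is a sketch that assumes the CLI is installed and configured, and that <database-ocid> is replaced with the OCID of the CDB:

    oci db pluggable-database list --database-id <database-ocid> \
        --query 'data[].{name:"pdb-name", state:"lifecycle-state"}' --output table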




                                                      Figure 2: The same console view as Figure 1, with the CON_UID highlighted.
