Hi all,

Hope you’re doing good!

Since I started creating AutoUpgrade-Composer, I’ve set up a lab environment (a Virtual Machine) using the OCI Tenancy that the ACE Program provides to Oracle ACEs (thank you, Oracle ACE Program!!).

In my lab, basically I have:

  • CDB running on 12.2
  • Non-CDB running on 12.2
  • CDB running on 19.x
  • Non-CDB running on 19.x

I also have some Oracle Homes that I’ve installed:

  • RU 12.2.0.1.240716
  • RU 19.26
  • RU 19.27

So, in my VM, everything related to Oracle is under /u01, a 150GB block volume attached to my instance/VM.

In the process of testing AutoUpgrade-Composer, the AutoUpgrade tool downloads Oracle patches and creates new Oracle Homes, and also patches or upgrades the DBs I have.

The main point is: I don’t want to deinstall Oracle Homes, flashback the databases to GRPs, etc. My plan was to keep a gold backup of the block volume, so, every time I performed a test, I followed the process below:

Inside the server (OS):

  • Stop any processes reading/writing files from/to /u01
  • Unmount the block volume

Inside the OCI Console:

  • Detach the block volume from the instance
  • Terminate the block volume
  • Create a new block volume from the block volume backup
  • Attach the new block volume to the instance

Inside the server (OS):

  • Mount the block volume (optional – I will explain why soon)

Following this process, I have my /u01 ready for the tests again — very quickly!

But you will all agree that, even though this is an easy and fast process, there are still some manual tasks to do, on two different platforms: the server and the OCI Console. It usually took me about 10 minutes to complete.

So, why not automate this task? (And many of you know: I LOVE AUTOMATION!)

Thinking about how easy it would be to run all those tasks from a single script, I decided to do exactly that.

Based on this, I created a shell script that uses oci-cli in the backend to perform the tasks that I explained above — and we’ll walk through the script very soon.

As a prerequisite, of course, you need to install and configure the oci-cli on the server where you plan to perform the tasks.

I will not explain how to install and configure the oci-cli, as there are a lot of good sites and blogs already covering it. Anyway, I will share below the URL to Gerald Venzl’s blog post, which explains in detail how to accomplish this task:

Installing the Oracle Cloud Infrastructure CLI on Oracle Linux 8
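
Just to give a rough idea of what is involved, here is a minimal sketch of that setup (the repository and package names below are assumptions for an Oracle Linux 8 box; follow Gerald’s post for the actual steps):

# Enable the developer repository and install the CLI
# (assumed repo/package names: ol8_developer / python36-oci-cli)
sudo dnf config-manager --enable ol8_developer
sudo dnf install -y python36-oci-cli

# Create ~/.oci/config interactively (user/tenancy OCIDs, region, API key)
oci setup config

# Quick sanity check: prints the Object Storage namespace of the tenancy
oci os ns get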

Perfect!

I’ve installed the oci-cli under the opc user, so, for all commands that require root privileges, the script that I will show below uses “sudo” for privilege delegation.
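
On the OCI-provided Oracle Linux images, opc already has passwordless sudo, so this works out of the box. If you run the script as a different OS user, that user needs an equivalent rule; a hypothetical sudoers entry would look like this:

# /etc/sudoers.d/restore-lab - hypothetical rule for a non-default user
labuser ALL=(ALL) NOPASSWD: ALL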

On OCI, we have a property called OCID. OCID is an Oracle-assigned unique ID for a cloud resource. This ID is included as part of the resource’s information in both the Console and API.

All OCI resources have an OCID.
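
If you have never looked closely at one, this is the general OCID syntax from the OCI documentation:

ocid1.<RESOURCE TYPE>.<REALM>.[REGION][.FUTURE USE].<UNIQUE ID>

For example, a block volume in the Phoenix region of the commercial realm looks like ocid1.volume.oc1.phx.<unique id>.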

Let’s break down this information:

  • The block volume attached to the instance has an OCID related to the attachment
  • The block volume has an OCID
  • The block volume backup has an OCID
  • The instance/VM has an OCID
  • The compartment where the instance is running has an OCID

So, to be able to perform all the tasks we listed earlier (detach, terminate, create, attach), we need these 5 OCIDs.

As we destroy and recreate the volume as many times as we want, two of these OCIDs will change every time:

  • Block volume OCID
  • Compute instance block volume attachment OCID

I also created a naming convention (a pattern): every time I create a new block volume, it always gets the same name. That way, every time the script runs, it can simply search for the resources by that specific name.

These are the resources I need to search for by name (you will find a CLI sketch of these lookups right after the list):

  • Compartment name — to get the Compartment OCID, which will be used to search the Instance OCID and the Block Volume OCID. It will also be used during the creation of the new block volume from the backup
  • Block Volume name — to get the Block Volume OCID to be terminated/deleted, which will be used to search the Attachment OCID. It will also be used during the creation of the new block volume
  • Instance name — to get the Instance OCID, which will be used to search the Attachment OCID. It will also be used during the attachment of the new block volume
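
Here is a sketch of what these name-based lookups can look like using the CLI’s JMESPath --query together with --raw-output, assuming each display name matches exactly one resource (which is exactly what the naming convention guarantees). My script further down uses a grep/awk pipeline instead, which does the same job:

# Sketch only: resolve OCIDs by display name; data[0].id assumes a single match
compartment_id=$(oci iam compartment list --name "Prod" \
  --query 'data[0].id' --raw-output)

instance_id=$(oci compute instance list --compartment-id "$compartment_id" \
  --display-name "upgrade lab" --query 'data[0].id' --raw-output)

volume_id=$(oci bv volume list -c "$compartment_id" \
  --display-name "u01_restore_tst1_cdb1_cdb2_1922_1926_1927" \
  --lifecycle-state "AVAILABLE" --query 'data[0].id' --raw-output)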

Just to illustrate what we will be doing here:

  • I will always use the Compartment named Prod.
  • I will always detach/attach the volume from/to the instance named upgrade lab.
  • I will always detach/terminate/create/attach a block volume named u01_restore_tst1_cdb1_cdb2_1922_1926_1927.
  • I will always restore/create a new block volume from a block volume backup named u01_bkp_tst1_tst2_cdb1_cdb2_cdb3_122_1922_1926_1927.

With that said, let me show the script here – I named it create_volume.sh:

#!/bin/bash
set -e

while :; do
  pids=$(sudo lsof +D /u01 2>/dev/null | awk 'NR>1 {print $2}' | sort -u)

  if [ -z "$pids" ]; then
    echo "No more processes using /u01. Exiting loop."
    break
  fi

  for pid in $pids; do
    sudo kill -9 "$pid" 2>/dev/null
  done

  sleep 1
done


if mountpoint -q /u01; then
  sudo umount -f /u01
else
  echo "/u01 is not mounted."
fi

compartment_id=$(oci iam compartment list --name "Prod" --query 'data[*].id' | grep ocid | awk '{print $1}' | tr -d '"')
instance_id=$(oci compute instance list --compartment-id ${compartment_id} --display-name "upgrade lab" --query 'data[*].id' | grep oci | awk '{print $1}' | tr -d '"')
volume_id=$(oci bv volume list -c ${compartment_id} --display-name "u01_restore_tst1_cdb1_cdb2_1922_1926_1927" --lifecycle-state "AVAILABLE" --query 'data[*].id' | grep oci | awk '{print $1}' | tr -d '"')
attachment_id=$(oci compute volume-attachment list --instance-id ${instance_id} --volume-id ${volume_id} --query 'data[*].id' | grep ocid | awk '{print $1}' | tr -d '"')
backup_id=$(oci bv backup list --compartment-id $compartment_id --display-name "u01_bkp_tst1_tst2_cdb1_cdb2_cdb3_122_1922_1926_1927" --query 'data[*].id' | grep ocid | awk '{print $1}' | tr -d '"')

oci compute volume-attachment detach --volume-attachment-id ${attachment_id} --force --wait-for-state "DETACHED"
oci bv volume delete --volume-id ${volume_id} --force --wait-for-state "TERMINATED" 
echo "."
echo "."
echo "Create a new volume from backup..."
oci bv volume create --display-name u01_restore_tst1_cdb1_cdb2_1922_1926_1927 --availability-domain "lWzT:PHX-AD-3" --compartment-id ${compartment_id} --volume-backup-id ${backup_id} --wait-for-state "AVAILABLE" >/dev/null
echo "."
echo "."
new_volume_id=$(oci bv volume list -c ${compartment_id} --display-name "u01_restore_tst1_cdb1_cdb2_1922_1926_1927" --lifecycle-state "AVAILABLE" --query 'data[*].id' | grep oci | awk '{print $1}' | tr -d '"')
echo "Attaching the new volume..."
oci compute volume-attachment attach --instance-id ${instance_id} --type "paravirtualized" --volume-id ${new_volume_id} --wait-for-state "ATTACHED" > /dev/null
echo "."
echo "."

sleep 10
df -h /u01

Let’s break down the script:

while :; do
  pids=$(sudo lsof +D /u01 2>/dev/null | awk 'NR>1 {print $2}' | sort -u)

  if [ -z "$pids" ]; then
    echo "No more processes using /u01. Exiting loop."
    break
  fi

  for pid in $pids; do
    sudo kill -9 "$pid" 2>/dev/null
  done

  sleep 1
done

In the part above, we have a loop that collects the PIDs of all processes using /u01 and kills them. The script only exits the loop once no process is using /u01 anymore, which is a required step before we unmount the file system.
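
As a side note, a blunter one-liner that achieves the same result is fuser, which kills every process using the filesystem mounted at /u01 (again, lab-only behavior):

# -k kills the processes (SIGKILL is the default signal),
# -m targets every process using the filesystem mounted at /u01
sudo fuser -km /u01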


if mountpoint -q /u01; then
  sudo umount -f /u01
else
  echo "/u01 is not mounted."
fi

Above, we unmount /u01 on the server; this is a required step before detaching the volume from the instance.


compartment_id=$(oci iam compartment list --name "Prod" --query 'data[*].id' | grep ocid | awk '{print $1}' | tr -d '"')

Gets the OCID for the compartment Prod.


instance_id=$(oci compute instance list --compartment-id ${compartment_id} --display-name "upgrade lab" --query 'data[*].id' | grep oci | awk '{print $1}' | tr -d '"')

Gets the OCID for the instance named upgrade lab in the compartment Prod.


volume_id=$(oci bv volume list -c ${compartment_id} --display-name "u01_restore_tst1_cdb1_cdb2_1922_1926_1927" --lifecycle-state "AVAILABLE" --query 'data[*].id' | grep oci | awk '{print $1}' | tr -d '"')

Gets the Block Volume OCID (must be AVAILABLE) by name.


attachment_id=$(oci compute volume-attachment list --instance-id ${instance_id} --volume-id ${volume_id} --query 'data[*].id' | grep ocid | awk '{print $1}' | tr -d '"')

Gets the Attachment OCID for the volume attached to the instance.


backup_id=$(oci bv backup list --compartment-id $compartment_id --display-name "u01_bkp_tst1_tst2_cdb1_cdb2_cdb3_122_1922_1926_1927" --query 'data[*].id' | grep ocid | awk '{print $1}' | tr -d '"')

Gets the OCID of the block volume backup to be restored.


oci compute volume-attachment detach --volume-attachment-id ${attachment_id} --force --wait-for-state "DETACHED"

Detaches the block volume from the instance, waiting until it’s detached.


oci bv volume delete --volume-id ${volume_id} --force --wait-for-state "TERMINATED"

Deletes the block volume, waiting until it’s terminated.


oci bv volume create --display-name u01_restore_tst1_cdb1_cdb2_1922_1926_1927 --availability-domain "lWzT:PHX-AD-3" --compartment-id ${compartment_id} --volume-backup-id ${backup_id} --wait-for-state "AVAILABLE" >/dev/null

Creates a new block volume from the backup in the same AD as the instance.
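
The availability domain is hardcoded here because my lab is a single instance in a single AD. If you want to avoid hardcoding it, one option (a sketch, not something my script currently does) is to read the AD from the instance itself and pass it to the create call via --availability-domain:

# Sketch: reuse the instance's availability domain instead of hardcoding it
availability_domain=$(oci compute instance get --instance-id "$instance_id" \
  --query 'data."availability-domain"' --raw-output)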


new_volume_id=$(oci bv volume list -c ${compartment_id} --display-name "u01_restore_tst1_cdb1_cdb2_1922_1926_1927" --lifecycle-state "AVAILABLE" --query 'data[*].id' | grep oci | awk '{print $1}' | tr -d '"')

Gets the OCID for the newly created volume (ensures we get the new one, not the old/terminated one).


oci compute volume-attachment attach --instance-id ${instance_id} --type "paravirtualized" --volume-id ${new_volume_id} --wait-for-state "ATTACHED" > /dev/null

Attaches the new block volume to the instance using paravirtualized attachment.


sleep 10

df -h /u01

Waits a bit and checks if /u01 is mounted.
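
One detail worth explaining: /u01 comes back mounted even though the script never calls mount. That is not something the attach call does for you; it depends on how /u01 is configured on the server. Since the restore is a block-for-block copy of the backup, the file system keeps its UUID, so an /etc/fstab entry along the lines of the hypothetical one below, using systemd automount, gets it mounted again on first access. If your setup does not have that, a plain sudo mount /u01 at the end of the script does the job.

# /etc/fstab - hypothetical entry; the UUID survives the block-level restore,
# and x-systemd.automount mounts /u01 on first access (the df above triggers it)
UUID=<filesystem uuid>  /u01  xfs  defaults,nofail,x-systemd.automount  0 0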


You may ask me: why did I use paravirtualized and not iSCSI?

iSCSI is indeed better than paravirtualized attachment:

IOPS performance is better with iSCSI attachments compared to paravirtualized attachments.

You can read more about it here:

Overview of Block Volume

However, using iSCSI, we need to run additional commands on the server (iSCSI initiator setup) and then mount the volume. Using paravirtualized, after we attach the volume, it’s ready to use — no extra commands needed.
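
For comparison, this is the kind of extra server-side work an iSCSI attachment implies. These are the standard attach commands OCI shows you in the attachment details, with placeholders for the IQN and portal IP (shown here only as a sketch); after running them, you would still need to mount the volume:

# Register the target, enable automatic login on boot, and log in;
# <IQN> and <portal IP> come from the volume attachment details
sudo iscsiadm -m node -o new -T <IQN> -p <portal IP>:3260
sudo iscsiadm -m node -o update -T <IQN> -n node.startup -v automatic
sudo iscsiadm -m node -T <IQN> -p <portal IP>:3260 -l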

Keep in mind that this is a lab environment, and I’m aware of the performance trade-off I may face.

Let’s run our script:

$ ./create_volume.sh 
No more processes using /u01. Exiting loop.
Action completed. Waiting until the resource has entered state: ('DETACHED',)
Action completed. Waiting until the resource has entered state: ('TERMINATED',)
.
.
Create a new volume from backup...
Action completed. Waiting until the resource has entered state: ('AVAILABLE',)
.
.
Attaching the new volume...
Action completed. Waiting until the resource has entered state: ('ATTACHED',)
.
.
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       150G   94G   57G  63% /u01

Using this script, I’m able to complete this task in less than one minute!

Hope it helps.

Peace,

Vinicius
