In this post we are going to deploy a RAC system ready to run a production load, starting with near-zero knowledge of RAC, Oracle Clusterware or the Oracle database.
We are going to use the “Deploy Cluster Tool”, provided by Oracle, which can deploy any of the database architectures you may need: Oracle single instance, Oracle Restart or Oracle RAC. The tool lets you choose between Enterprise Edition and Standard Edition, and between Oracle release 11g and 12c.
For this demonstration we are going to deploy a RAC 12cR2 in Standard Edition.
What you need at this stage
- An OVM infrastructure as described in this post. In this infrastructure we have:
- 2 virtual machines called rac001 and rac002, with the network cabling and disk configuration required to run RAC
- The 2 VMs are created from the Oracle template, which includes everything needed to deploy whichever configuration you choose
- A copy of the latest release of the “Deploy Cluster Tool”, available here
The most important part here is to edit the 2 configuration files that describe the configuration we want:
- deployRacProd_SE_RAC_netconfig.ini: network parameters needed for the deployment
- deployRacProd_SE_RAC_params.ini: parameters related to database memory, name, ASM disk groups, User UID and so on
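For reference, the only params.ini values this particular run makes visible are the ones echoed back by buildcluster.sh and its pre-install checks; the excerpt below shows just those (the rest of the file's keys are documented by its own inline comments, so treat anything beyond these three lines as out of scope for this excerpt):

```
# No database is created at build time (reported by buildcluster.sh)
BUILD_RAC_DATABASE=no
BUILD_SI_DATABASE=no
# Memory prerequisite checks are skipped (reported by the pre-install checks)
CLONE_SKIP_MEMORYCHECKS=yes
```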
This is the content of the network configuration used for this infrastructure:
-bash-4.1# egrep -v "^$|^#" deployRacProd_SE_RAC_netconfig.ini
NODE1=rac001
NODE1IP=192.168.179.210
NODE1PRIV=rac001-priv
NODE1PRIVIP=192.168.3.210
NODE1VIP=rac001-vip
NODE1VIPIP=192.168.179.211
NODE2=rac002
NODE2IP=192.168.179.212
NODE2PRIV=rac002-priv
NODE2PRIVIP=192.168.3.212
NODE2VIP=rac002-vip
NODE2VIPIP=192.168.179.213
PUBADAP=eth1
PUBMASK=255.255.255.0
PUBGW=192.168.179.1
PRIVADAP=eth2
PRIVMASK=255.255.255.0
RACCLUSTERNAME=cluprod01
DOMAINNAME=
DNSIP=""
NETCONFIG_DEV=/dev/xvdc
SCANNAME=cluprod01-scan
SCANIP=192.168.179.205
FLEX_CLUSTER=yes
FLEX_ASM=yes
ASMADAP=eth3
ASMMASK=255.255.255.0
NODE1ASMIP=192.168.58.210
NODE2ASMIP=192.168.58.212
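Since netconfig.ini uses plain shell key=value syntax, it can be sourced to sanity-check the addresses before launching the deployment. A minimal sketch (assumptions: it is run from the tool directory on the OVM Manager, and an address that already answers ping is considered taken):

```shell
#!/bin/bash
# Load the NODE*/VIP/SCAN variables from the network configuration file.
. ./deployRacProd_SE_RAC_netconfig.ini

# None of these addresses should answer yet: they get assigned during deployment.
for ip in "$NODE1IP" "$NODE1VIPIP" "$NODE2IP" "$NODE2VIPIP" "$SCANIP"; do
  if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
    echo "WARNING: $ip already answers ping"
  else
    echo "OK: $ip looks free"
  fi
done
```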
Let’s start from the OVM Manager server: go to the “Deploy Cluster Tool” directory and initiate the first stage of the deployment:
-bash-4.1# cd /root/deploycluster3
-bash-4.1# ./deploycluster.py -u admin -M rac00? -P deployRacProd_SE_RAC_params.ini -N deployRacProd_SE_RAC_netconfig.ini
Oracle DB/RAC OneCommand (v3.0.5) for Oracle VM - deploy cluster - (c) 2011-2017 Oracle Corporation
(com: 29100:v3.0.4, lib: 231275:v3.0.5, var: 1800:v3.0.5) - v2.6.5 - ovmm (x86_64)
Invoked as root at Mon Dec 18 14:19:48 2017 (size: 43900, mtime: Tue Feb 28 01:03:00 2017)
Using: ./deploycluster.py -u admin -M rac00? -P deployRacProd_SE_RAC_params.ini -N deployRacProd_SE_RAC_netconfig.ini
INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
Password:
INFO: Attempting to connect to Oracle VM Manager...
Oracle VM Manager Core WS-API Shell 3.4.2.1384 (20160914_1384)
Copyright (C) 2007, 2016 Oracle. All rights reserved.
See the LICENSE file for redistribution information.
Connecting to https://localhost:7002/...
INFO: Oracle VM Client CONNECTED to Oracle VM Manager (3.4.4.1709) UUID (0004fb00000100001f20e914973507f6)
INFO: Inspecting /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini for number of nodes defined....
INFO: Detected 2 nodes in: /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini
INFO: Located a total of (2) VMs; 2 VMs with a simple name of: ['rac001', 'rac002']
INFO: Detected a RAC deployment...
INFO: Starting all (2) VMs...
INFO: VM with a simple name of "rac001" is in a Stopped state, attempting to start it.................................OK.
INFO: VM with a simple name of "rac002" is in a Stopped state, attempting to start it.................................OK.
INFO: Verifying that all (2) VMs are in Running state and pass prerequisite checks.....
INFO: Detected that all (2) VMs specified on command line have (9) common shared disks between them (ASM_MIN_DISKS=5)
INFO: The (2) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
      netconfig.ini (Network setup): /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini
      params.ini (Overall build options): /root/deploycluster3/deployRacProd_SE_RAC_params.ini
      buildcluster: yes
INFO: Starting to send configuration details to all (2) VM(s)...............................
INFO: Sending to VM with a simple name of "rac001"..........................................
INFO: Sending to VM with a simple name of "rac002"..........................................
INFO: Configuration details sent to (2) VMs...
      Check log (default location /u01/racovm/buildcluster.log) on build VM (rac001)...
INFO: deploycluster.py completed successfully at 14:21:28 in 100.4 seconds (0h:01m:40s)
Logfile at: /root/deploycluster3/deploycluster23.log
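As the closing message says, the build itself is logged to /u01/racovm/buildcluster.log on the build VM (rac001). A small sketch for keeping an eye on it once connected to that node (the ERROR/FATAL grep is an assumption about what a failed step would print, not an exhaustive check):

```shell
# Follow the cluster build as it runs on the build VM.
tail -f /u01/racovm/buildcluster.log

# After it finishes, count suspicious lines (0 matches makes grep exit non-zero).
grep -cE "ERROR|FATAL" /u01/racovm/buildcluster.log
```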
At this stage we have 2 nodes with the required network configuration (host names, IP addresses). The deployment script has also pushed the configuration files mentioned above into the VMs.
So we connect to the first VM, rac001:
-bash-4.1# ssh root@192.168.179.210
Warning: Permanently added '192.168.179.210' (RSA) to the list of known hosts.
root@192.168.179.210's password:
Last login: Mon Dec 11 10:31:03 2017
[root@rac001 ~]#
Then we go to the deployment directory, which is part of the template, and execute the deployment:
[root@rac001 racovm]# ./buildcluster.sh -s
Invoking on rac001 as root...
Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.9) for Oracle VM - (c) 2010-2017 Oracle Corporation
Cksum: [2551004249 619800 racovm.sh] at Mon Dec 18 09:06:43 EST 2017
Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
Kit Version: 12.2.0.1.170814 (RAC Mode, 2 nodes, Enterprise Edition)
Step(s): buildcluster
INFO (node:rac001): Skipping confirmation, flag (-s) supplied on command line
2017-12-18 09:06:43:[buildcluster:Start:rac001] Building 12cR2 RAC Cluster
INFO (node:rac001): No database created due to (BUILD_RAC_DATABASE=no) & (BUILD_SI_DATABASE=no) setting in params.ini
2017-12-18 09:06:45:[setsshroot:Start:rac001] SSH Setup for the root user...
..
INFO (node:rac001): Passwordless SSH for the root user already configured, skipping...
2017-12-18 09:06:46:[setsshroot:Done :rac001] SSH Setup for the root user completed successfully
2017-12-18 09:06:46:[setsshroot:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:46:[copykit:Start:rac001] Copy kit files to remote nodes
Kit files: buildsingle.sh buildcluster.sh netconfig.sh netconfig.ini common.sh cleanlocal.sh diskconfig.sh racovm.sh ssh params.ini doall.sh netconfig GetSystemTimeZone.class kitversion.txt mcast
INFO (node:rac001): Copied kit to remote node rac002 as root user
2017-12-18 09:06:48:[copykit:Done :rac001] Copy kit files to (1) remote nodes
2017-12-18 09:06:48:[copykit:Time :rac001] Completed successfully in 2 seconds (0h:00m:02s)
2017-12-18 09:06:48:[usrsgrps:Start:rac001] Verifying Oracle users & groups on all nodes (create/modify mode)..
..
2017-12-18 09:06:51:[usrsgrpslocal:Start:rac001] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Start:rac002] Verifying Oracle users & groups (create/modify mode)..
INFO (node:rac001): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)
2017-12-18 09:06:51:[usrsgrpslocal:Done :rac001] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
INFO (node:rac002): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)
2017-12-18 09:06:51:[usrsgrpslocal:Done :rac002] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Time :rac002] Completed successfully in 0 seconds (0h:00m:00s)
....
INFO (node:rac001): Passwordless SSH for the Oracle user (oracle) already configured to all nodes; not re-setting users passwords
2017-12-18 09:06:55:[usrsgrps:Done :rac001] Verifying Oracle users & groups on all nodes (create/modify mode)..
2017-12-18 09:06:55:[usrsgrps:Time :rac001] Completed successfully in 7 seconds (0h:00m:07s)
INFO (node:rac001): Parameters loaded from params.ini...
Users & Groups:
 Role Separation: no   Running as: root
 OInstall   : oinstall  GID: 54321
 RAC Owner  : oracle    UID: 54321
  DB OSDBA   : dba   GID: 54322
  DB OSOPER  :       GID:
  DB OSBACKUP: dba   GID:
  DB OSDGDBA : dba   GID:
  DB OSKMDBA : dba   GID:
  DB OSRAC   : dba   GID:
 Grid Owner : oracle    UID: 54321
  GI OSDBA   : dba   GID: 54322
  GI OSOPER  :       GID:
  GI OSASM   : dba   GID: 54322
Software Locations:
 Operating Mode: RAC   Database Edition: STD   Flex Cluster: yes   Flex ASM: yes
 Central Inventory: /u01/app/oraInventory
 Grid Home: /u01/app/12.2.0/grid (Detected: 12cR2, Enterprise Edition)   Grid Name: OraGrid12c
 RAC Home : /u01/app/oracle/product/12.2.0/dbhome_1 (Detected: 12cR2, Enterprise Edition)   RAC Name : OraRAC12c
 RAC Base : /u01/app/oracle
 DB/RAC OVM kit : /u01/racovm
 Attach RAC Home: yes   GI Home: yes   Relink Homes: no   On OS Change: yes   Addnode Copy: no
Database & Storage:
 Database : no   DBName: ORCL   SIDName: ORCL   DG: DGDATA
 Listener Port: 1521   Policy Managed: no
 DBExpress: no   DBExpress port: 5500
 Grid Management DB: no   GIMR diskgroup name:   Separate GIMR diskgroup: no
 Cluster Storage: ASM
 ASM Discovery String: /dev/xvd[k-s]1
 ASM diskgroup: dgocrvoting   Redundancy: EXTERNAL   Allocation Unit (au_size): 4
  Disks : /dev/xvdk1 /dev/xvdl1 /dev/xvdm1 /dev/xvdn1 /dev/xvdo1
 Recovery DG : DGFRA   Redundancy: EXTERNAL
  Disks : /dev/xvdr1 /dev/xvds1
  Attributes: 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0'
 Extra DG #1 : DGDATA   Redundancy: EXTERNAL
  Disks : /dev/xvdp1 /dev/xvdq1
  Attributes: 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0'
 Persistent disknames: yes   Stamp: yes   Partition: yes   Align: yes   GPT: no   Permissions: 660
 ACFS Filesystem: no
Network information loaded from netconfig.ini...
 Default Gateway: 192.168.179.1   Domain:   DNS:
 Public NIC : eth1   Mask: 255.255.255.0
 Private NIC: eth2   Mask: 255.255.255.0
 ASM NIC    : eth3   Mask: 255.255.0.0
 SCAN Name: cluprod01-scan   SCAN IP: 192.168.179.205   Scan Port: 1521
 Cluster Name: cluprod01
 Nodes & IP Addresses (2 of 2 nodes)
  Node 1: PubIP : 192.168.179.210   PubName : rac001 (Hub)
          VIPIP : 192.168.179.211   VIPName : rac001-vip
          PrivIP: 192.168.3.210     PrivName: rac001-priv
          ASMIP : 192.168.58.210
  Node 2: PubIP : 192.168.179.212   PubName : rac002 (Hub)
          VIPIP : 192.168.179.213   VIPName : rac002-vip
          PrivIP: 192.168.3.212     PrivName: rac002-priv
          ASMIP : 192.168.58.212
Running on rac001 as root...
Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.9) for Oracle VM - (c) 2010-2017 Oracle Corporation
Cksum: [2551004249 619800 racovm.sh] at Mon Dec 18 09:06:55 EST 2017
Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
Kit Version: 12.2.0.1.170814 (RAC Mode, 2 nodes, Enterprise Edition)
2017-12-18 09:06:56:[printparams:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:56:[setsshora:Start:rac001] SSH Setup for the Oracle user(s)...
..
INFO (node:rac001): Passwordless SSH for the oracle user already configured, skipping...
2017-12-18 09:06:57:[setsshora:Done :rac001] SSH Setup for the oracle user completed successfully
2017-12-18 09:06:57:[setsshora:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:57:[diskconfig:Start:rac001] Storage Setup
2017-12-18 09:06:58:[diskconfig:Start:rac001] Running in configuration mode (local & remote nodes)
.
2017-12-18 09:06:58:[diskconfig:Disks:rac001] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq............................OK
2017-12-18 09:07:02:[diskconfig:Disks:rac001] Checking contents of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1.
2017-12-18 09:07:02:[diskconfig:Remote:rac001] Assuming persistent disk names on remote nodes with stamping (existence check)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo......../dev/xvdr./dev/xvds...../dev/xvdp./dev/xvdq........OK
2017-12-18 09:07:23:[diskconfig:Remote:rac001] Verify disks are free on remote nodes...
rac002....................OK
2017-12-18 09:07:52:[diskconfig:Disks:rac001] Checking contents of disks (remote nodes)...
rac002.......OK
2017-12-18 09:07:54:[diskconfig:Disks:rac001] Setting disk permissions for next startup (all nodes)...
.....OK
2017-12-18 09:07:56:[diskconfig:ClearPartTables:rac001] Clearing partition tables...
./dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.....................OK
2017-12-18 09:08:03:[diskconfig:CreatePartitions:rac001] Creating 'msdos' partitions on disks (as needed)...
./dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.....................OK
2017-12-18 09:08:13:[diskconfig:CleanPartitions:rac001] Cleaning new partitions...
./dev/xvdk1./dev/xvdl1./dev/xvdm1./dev/xvdn1./dev/xvdo1./dev/xvdr1./dev/xvds1./dev/xvdp1./dev/xvdq1...OK
2017-12-18 09:08:13:[diskconfig:Done :rac001] Done configuring and checking disks on all nodes
2017-12-18 09:08:13:[diskconfig:Done :rac001] Storage Setup
2017-12-18 09:08:13:[diskconfig:Time :rac001] Completed successfully in 76 seconds (0h:01m:16s)
2017-12-18 09:08:15:[clearremotelogs:Time :rac001] Completed successfully in 2 seconds (0h:00m:02s)
2017-12-18 09:08:15:[check:Start:rac001] Pre-install checks on all nodes
..
INFO (node:rac001): Check found that all (2) nodes have the following (25586399 26609817 26609966) patches applied to the Grid Infrastructure Home (/u01/app/12.2.0/grid), the following (25811364 26609817 26609966) patches applied to the RAC Home (/u01/app/oracle/product/12.2.0/dbhome_1)
.2017-12-18 09:08:20:[checklocal:Start:rac001] Pre-install checks
2017-12-18 09:08:21:[checklocal:Start:rac002] Pre-install checks
2017-12-18 09:08:22:[usrsgrpslocal:Start:rac001] Verifying Oracle users & groups (check only mode)..
INFO (node:rac001): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)
2017-12-18 09:08:22:[usrsgrpslocal:Done :rac001] Verifying Oracle users & groups (check only mode)..
2017-12-18 09:08:22:[usrsgrpslocal:Start:rac002] Verifying Oracle users & groups (check only mode)..
INFO (node:rac002): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)
2017-12-18 09:08:22:[usrsgrpslocal:Done :rac002] Verifying Oracle users & groups (check only mode)..
INFO (node:rac001): Node forming new RAC cluster; Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
WARNING (node:rac001): Not performing any memory checks due to (CLONE_SKIP_MEMORYCHECKS=yes) in params.ini.
INFO (node:rac001): Running disk checks on all nodes, persistent disk names (/u01/racovm/diskconfig.sh -n 2 -D 1 -s)
2017-12-18 09:08:23:[diskconfig:Start:rac001] Running in dry-run mode (local & remote nodes, level 1), no stamping, partitioning or OS configuration files will be modified...(assuming persistent disk names)
INFO (node:rac002): Node forming new RAC cluster; Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
WARNING (node:rac002): Not performing any memory checks due to (CLONE_SKIP_MEMORYCHECKS=yes) in params.ini.
INFO (node:rac002): Running network checks...
......
2017-12-18 09:08:24:[diskconfig:Disks:rac001] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.............................OK
2017-12-18 09:08:29:[diskconfig:Disks:rac001] Checking existence of automatically renamed disks (localhost)...
/dev/xvdk1./dev/xvdl1./dev/xvdm1./dev/xvdn1./dev/xvdo1./dev/xvdr1./dev/xvds1./dev/xvdp1./dev/xvdq1.
2017-12-18 09:08:30:[diskconfig:Disks:rac001] Checking permissions of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1
2017-12-18 09:08:30:[diskconfig:Disks:rac001] Checking contents of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1..
2017-12-18 09:08:31:[diskconfig:Remote:rac001] Assuming persistent disk names on remote nodes with NO stamping (existence check)...
rac002........OK
2017-12-18 09:08:37:[diskconfig:Remote:rac001] Verify disks are free on remote nodes...
rac002........
INFO (node:rac001): Waiting for all checklocal operations to complete on all nodes (At 09:08:50, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 13222 13365)...
...............
INFO (node:rac002): Check completed successfully
2017-12-18 09:09:07:[checklocal:Done :rac002] Pre-install checks
2017-12-18 09:09:07:[checklocal:Time :rac002] Completed successfully in 46 seconds (0h:00m:46s)
.......OK
2017-12-18 09:09:11:[diskconfig:Remote:rac001] Checking existence of automatically renamed disks (remote nodes)...
rac002...
2017-12-18 09:09:17:[diskconfig:Remote:rac001] Checking permissions of disks (remote nodes)...
rac002....
2017-12-18 09:09:21:[diskconfig:Disks:rac001] Checking contents of disks (remote nodes)...
rac002.......OK
2017-12-18 09:09:26:[diskconfig:Done :rac001] Dry-run (local & remote, level 1) completed successfully, most likely normal run will too
..
INFO (node:rac001): Running multicast check on 230.0.1.0 port 42050 for 2 nodes...
INFO (node:rac001): All nodes can multicast to all other nodes on interface eth2 multicast address 230.0.1.0 port 42050...
INFO (node:rac001): Running network checks...
....................
INFO (node:rac001): Check completed successfully
2017-12-18 09:10:11:[checklocal:Done :rac001] Pre-install checks
2017-12-18 09:10:11:[checklocal:Time :rac001] Completed successfully in 111 seconds (0h:01m:51s)
INFO (node:rac001): All checklocal operations completed on all (2) node(s) at: 09:10:12
2017-12-18 09:10:12:[check:Done :rac001] Pre-install checks on all nodes
2017-12-18 09:10:13:[check:Time :rac001] Completed successfully in 117 seconds (0h:01m:57s)
2017-12-18 09:10:13:[creategrid:Start:rac001] Creating 12cR2 Grid Infrastructure
..
2017-12-18 09:10:16:[preparelocal:Start:rac001] Preparing node for Oracle installation
INFO (node:rac001): Resetting permissions on Oracle Homes... May take a while...
2017-12-18 09:10:17:[preparelocal:Start:rac002] Preparing node for Oracle installation
INFO (node:rac002): Resetting permissions on Oracle Homes... May take a while...
INFO (node:rac001): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.5G     0  1.5G   0% /dev/shm
2017-12-18 09:10:27:[preparelocal:Done :rac001] Preparing node for Oracle installation
2017-12-18 09:10:27:[preparelocal:Time :rac001] Completed successfully in 11 seconds (0h:00m:11s)
INFO (node:rac002): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.5G     0  1.5G   0% /dev/shm
2017-12-18 09:10:31:[preparelocal:Done :rac002] Preparing node for Oracle installation
2017-12-18 09:10:31:[preparelocal:Time :rac002] Completed successfully in 14 seconds (0h:00m:14s)
2017-12-18 09:10:32:[prepare:Time :rac001] Completed successfully in 19 seconds (0h:00m:19s)
....
2017-12-18 09:10:40:[giclonelocal:Start:rac001] Attaching 12cR2 Grid Infrastructure Home
INFO (node:rac001): Running on: rac001 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.2.0/grid 2>/dev/null
2017-12-18 09:10:41:[giattachlocal:Start:rac001] Attaching Grid Infratructure Home on node rac001
INFO (node:rac001): Running on: rac001 as oracle: /u01/app/12.2.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.2.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle' CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.
Actual 4095 MB    Passed
2017-12-18 09:10:41:[giclonelocal:Start:rac002] Attaching 12cR2 Grid Infrastructure Home
INFO (node:rac002): Running on: rac002 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.2.0/grid 2>/dev/null
2017-12-18 09:10:42:[giattachlocal:Start:rac002] Attaching Grid Infratructure Home on node rac002
INFO (node:rac002): Running on: rac002 as oracle: /u01/app/12.2.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.2.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle' CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory pointer is located at /etc/oraInst.loc
INFO (node:rac001): Waiting for all giclonelocal operations to complete on all nodes (At 09:11:06, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 18135 18141)...
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.
INFO (node:rac001): Running on: rac001 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2017-12-18 09:11:08:[giattachlocal:Done :rac001] Attaching Grid Infratructure Home on node rac001
2017-12-18 09:11:08:[giattachlocal:Time :rac001] Completed successfully in 27 seconds (0h:00m:27s)
2017-12-18 09:11:09:[girootlocal:Start:rac001] Running root.sh on Grid Infrastructure home
INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/root.sh -silent
Check /u01/app/12.2.0/grid/install/root_rac001_2017-12-18_09-11-09-287116939.log for the output of root script
2017-12-18 09:11:09:[girootlocal:Done :rac001] Running root.sh on Grid Infrastructure home
2017-12-18 09:11:09:[girootlocal:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)
INFO (node:rac001): Resetting permissions on Oracle Home (/u01/app/12.2.0/grid)...
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.
INFO (node:rac002): Running on: rac002 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2017-12-18 09:11:10:[giattachlocal:Done :rac002] Attaching Grid Infratructure Home on node rac002
2017-12-18 09:11:10:[giattachlocal:Time :rac002] Completed successfully in 28 seconds (0h:00m:28s)
2017-12-18 09:11:10:[girootlocal:Start:rac002] Running root.sh on Grid Infrastructure home
INFO (node:rac002): Running on: rac002 as root: /u01/app/12.2.0/grid/root.sh -silent
Check /u01/app/12.2.0/grid/install/root_rac002_2017-12-18_09-11-10-934545273.log for the output of root script
2017-12-18 09:11:11:[girootlocal:Done :rac002] Running root.sh on Grid Infrastructure home
2017-12-18 09:11:11:[girootlocal:Time :rac002] Completed successfully in 1 seconds (0h:00m:01s)
INFO (node:rac002): Resetting permissions on Oracle Home (/u01/app/12.2.0/grid)...
2017-12-18 09:11:11:[giclonelocal:Done :rac001] Attaching 12cR2 Grid Infrastructure Home
2017-12-18 09:11:11:[giclonelocal:Time :rac001] Completed successfully in 33 seconds (0h:00m:33s)
2017-12-18 09:11:13:[giclonelocal:Done :rac002] Attaching 12cR2 Grid Infrastructure Home
2017-12-18 09:11:13:[giclonelocal:Time :rac002] Completed successfully in 34 seconds (0h:00m:34s)
INFO (node:rac001): All giclonelocal operations completed on all (2) node(s) at: 09:11:14
2017-12-18 09:11:14:[giclone:Time :rac001] Completed successfully in 42 seconds (0h:00m:42s)
....
2017-12-18 09:11:18:[girootcrslocal:Start:rac001] Running rootcrs.pl
INFO (node:rac001): rootcrs.pl log location is: /u01/app/oracle/crsdata/rac001/crsconfig/rootcrs_rac001_<timestamp>.log
INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/rac001/crsconfig/rootcrs_rac001_2017-12-18_09-11-19AM.log
2017/12/18 09:11:30 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/12/18 09:11:30 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:12:08 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:12:08 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2017/12/18 09:12:19 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/12/18 09:12:25 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/12/18 09:12:28 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/12/18 09:12:40 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2017/12/18 09:13:23 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/12/18 09:13:23 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/12/18 09:14:06 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/12/18 09:14:19 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/12/18 09:14:19 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2017/12/18 09:14:27 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/12/18 09:14:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2017/12/18 09:15:48 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2017/12/18 09:16:56 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:18:14 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/12/18 09:18:23 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.driver.afd' on 'rac001'
CRS-2672: Attempting to start 'ora.evmd' on 'rac001'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac001'
CRS-2676: Start of 'ora.driver.afd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac001'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac001' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac001' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac001'
CRS-2676: Start of 'ora.gpnpd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac001'
CRS-2676: Start of 'ora.gipcd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac001'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac001'
CRS-2676: Start of 'ora.diskmon' on 'rac001' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac001' succeeded
Disk label(s) created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-171218AM091955.log for details.
Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-171218AM091955.log for details.
2017/12/18 09:24:06 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade oracle oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'rac001'
CRS-2672: Attempting to start 'ora.storage' on 'rac001'
CRS-2676: Start of 'ora.storage' on 'rac001' succeeded
CRS-2676: Start of 'ora.crf' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac001'
CRS-2676: Start of 'ora.crsd' on 'rac001' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk e0321712cd544fa6bf438b5849f11155.
Successfully replaced voting disk group with +dgocrvoting.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                  File Name          Disk group
--  -----    -----------------                  ---------          ---------
 1. ONLINE   e0321712cd544fa6bf438b5849f11155   (AFD:DGOCRVOTING1) [DGOCRVOTING]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac001'
CRS-2677: Stop of 'ora.crsd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac001'
CRS-2673: Attempting to stop 'ora.crf' on 'rac001'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac001'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac001'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac001'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac001' succeeded
CRS-2677: Stop of 'ora.storage' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac001'
CRS-2677: Stop of 'ora.crf' on 'rac001' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac001'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac001'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac001'
CRS-2677: Stop of 'ora.evmd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac001'
CRS-2677: Stop of 'ora.cssd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'rac001'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac001'
CRS-2677: Stop of 'ora.driver.afd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac001' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/12/18 09:26:52 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac001'
CRS-2672: Attempting to start 'ora.evmd' on 'rac001'
CRS-2676: Start of 'ora.mdnsd' on 'rac001' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac001'
CRS-2676: Start of 'ora.gpnpd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac001'
CRS-2676: Start of 'ora.gipcd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac001'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac001'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac001'
CRS-2676: Start of 'ora.diskmon' on 'rac001' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac001'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac001'
CRS-2676: Start of 'ora.ctssd' on 'rac001' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac001'
CRS-2676: Start of 'ora.asm' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac001'
CRS-2676: Start of 'ora.storage' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac001'
CRS-2676: Start of 'ora.crf' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac001'
CRS-2676: Start of 'ora.crsd' on 'rac001' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac001
CRS-6016: Resource auto-start has completed for server rac001
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:30:15 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/12/18 09:30:15 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac001'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac001'
CRS-2676: Start of 'ora.asm' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.DGOCRVOTING.dg' on 'rac001'
CRS-2676: Start of 'ora.DGOCRVOTING.dg' on 'rac001' succeeded
2017/12/18 09:32:38 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2017/12/18 09:34:22 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
2017-12-18 09:34:26:[girootcrslocal:Done :rac001] Running rootcrs.pl
2017-12-18 09:34:27:[girootcrslocal:Time :rac001] Completed successfully in 1388 seconds (0h:23m:08s)
2017-12-18 09:34:49:[girootcrslocal:Start:rac002] Running rootcrs.pl
INFO (node:rac002): rootcrs.pl log location is: /u01/app/oracle/crsdata/rac002/crsconfig/rootcrs_rac002_<timestamp>.log
INFO (node:rac002): Running on: rac002 as root: /u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at: /u01/app/oracle/crsdata/rac002/crsconfig/rootcrs_rac002_2017-12-18_09-34-50AM.log
2017/12/18 09:35:03 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/12/18 09:35:04 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:35:18, elapsed: 0h:00m:31s, 1) node remaining, all background pid(s): 7263)...
2017/12/18 09:35:44 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:35:44 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
.2017/12/18 09:35:51 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/12/18 09:35:56 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/12/18 09:35:56 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/12/18 09:36:02 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
..2017/12/18 09:36:59 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/12/18 09:37:01 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/12/18 09:37:08 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/12/18 09:37:16 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/12/18 09:37:17 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
.2017/12/18 09:37:23 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/12/18 09:37:40 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
..
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:38:21, elapsed: 0h:03m:34s, 1) node remaining, all background pid(s): 7263)...
.2017/12/18 09:39:00 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
...2017/12/18 09:40:24 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
..
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:41:25, elapsed: 0h:06m:38s, 1) node remaining, all background pid(s): 7263)...
2017/12/18 09:41:40 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/12/18 09:41:42 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
.CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac002'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac002' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/12/18 09:42:04 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
.....
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:44:27, elapsed: 0h:09m:40s, 1) node remaining, all background pid(s): 7263)...
.CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac002'
CRS-2672: Attempting to start 'ora.evmd' on 'rac002'
CRS-2676: Start of 'ora.mdnsd' on 'rac002' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac002'
CRS-2676: Start of 'ora.gpnpd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac002'
CRS-2676: Start of 'ora.gipcd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac002'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac002'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac002'
CRS-2676: Start of 'ora.diskmon' on 'rac002' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac002'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac002'
CRS-2676: Start of 'ora.ctssd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac002'
CRS-2676: Start of 'ora.crf' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac002'
CRS-2676: Start of 'ora.crsd' on 'rac002' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac002'
CRS-2676: Start of 'ora.asm' on 'rac002' succeeded
CRS-6017: Processing resource auto-start for servers: rac002
CRS-2672: Attempting to start 'ora.net1.network' on 'rac002'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac002'
CRS-2676: Start of 'ora.net1.network' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'rac002'
CRS-2676: Start of 'ora.ons' on 'rac002' succeeded
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac002'
CRS-2676: Start of 'ora.asm' on 'rac002' succeeded
CRS-6016: Resource auto-start has completed for server rac002
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:45:22 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/12/18 09:45:22 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
.2017/12/18 09:45:49 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
..2017/12/18 09:46:38 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
2017-12-18 09:46:40:[girootcrslocal:Done :rac002] Running rootcrs.pl
2017-12-18 09:46:40:[girootcrslocal:Time :rac002] Completed successfully in 711 seconds (0h:11m:51s)
INFO (node:rac001): All girootcrslocal operations completed on all (2) node(s) at: 09:46:42
2017-12-18 09:46:42:[girootcrs:Time :rac001] Completed successfully in 2128 seconds (0h:35m:28s)
2017-12-18 09:46:42:[giassist:Start:rac001] Running RAC Home assistants (netca, asmca)
INFO (node:rac001): Creating the node Listener using NETCA... (09:46:44)
INFO (node:rac001): Running on: rac001 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.2.0/grid; /u01/app/12.2.0/grid/bin/netca /orahome /u01/app/12.2.0/grid /instype typical /inscomp client,oraclenet,javavm,server,ano /insprtcl tcp /cfg local /authadp NO_VALUE /responseFile /u01/app/12.2.0/grid/network/install/netca_typ.rsp /silent /orahnam OraGrid12c
Parsing command line arguments:
Parameter "orahome" = /u01/app/12.2.0/grid
Parameter "instype" = typical
Parameter "inscomp" = client,oraclenet,javavm,server,ano
Parameter "insprtcl" = tcp
Parameter "cfg" = local
Parameter "authadp" = NO_VALUE
Parameter "responsefile" = /u01/app/12.2.0/grid/network/install/netca_typ.rsp
Parameter "silent" = true
Parameter "orahnam" = OraGrid12c
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Profile configuration complete.
Listener "LISTENER" already exists.
Oracle Net Services configuration successful. The exit code is 0
INFO (node:rac001): Running on: rac001 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.2.0/grid; /u01/app/12.2.0/grid/bin/asmca -silent -postConfigureASM
Post configuration completed successfully
INFO (node:rac001): Setting initial diskgroup name dgocrvoting's attributes as defined in RACASMGROUP_ATTRIBUTES ('compatible.asm'='12.2.0.1.0', 'compatible.rdbms'='12.2.0.1.0')...
INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:48:38: alter diskgroup dgocrvoting set attribute 'compatible.asm'='12.2.0.1.0';
Diskgroup altered.
INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:48:40: alter diskgroup dgocrvoting set attribute 'compatible.rdbms'='12.2.0.1.0';
Diskgroup altered.
2017-12-18 09:48:44:[creatediskgroups:Start:rac001] Creating additional diskgroups
INFO (node:rac001): Creating Recovery diskgroup (DGFRA) at: 09:50:31...
INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1: create diskgroup "DGFRA" EXTERNAL redundancy disk 'AFD:RECO1','AFD:RECO2' attribute 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0';
Diskgroup created.
Elapsed: 00:00:09.34
INFO (node:rac001): Creating Extra diskgroup (DGDATA) at: 09:52:22...
INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1: create diskgroup "DGDATA" EXTERNAL redundancy disk 'AFD:DGDATA1','AFD:DGDATA2' attribute 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0';
Diskgroup created.
Elapsed: 00:00:10.48
INFO (node:rac001): Successfully created the following ASM diskgroups (DGFRA DGDATA), setting them for automount on startup and attempting to mount on all nodes...
INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:52:35: alter system set asm_diskgroups='DGDATA', 'DGFRA';
System altered.
INFO (node:rac001): Successfully set the ASM diskgroups (DGDATA DGFRA) to automount on startup
INFO (node:rac001): Attempting to mount diskgroups on nodes running ASM: rac001 rac002
INFO (node:rac001): Running SQL on: rac002 as oracle user using SID: +ASM2 at: 09:52:38: alter diskgroup "DGFRA" mount;
Diskgroup altered.
INFO (node:rac001): Running SQL on: rac002 as oracle user using SID: +ASM2 at: 09:52:39: alter diskgroup "DGDATA" mount;
Diskgroup altered.
INFO (node:rac001): Successfully mounted the created (DGFRA DGDATA) ASM diskgroups on all nodes running an ASM instance (rac001 rac002)
2017-12-18 09:52:41:[creatediskgroups:Done :rac001] Creating additional diskgroups
2017-12-18 09:52:41:[creatediskgroups:Time :rac001] Completed successfully in 237 seconds (0h:03m:57s)
WARNING (node:rac001): Management Database not created due to CLONE_GRID_MANAGEMENT_DB=no.
Note that starting with release 12.1.0.2 and higher, the Management Database (GIMR) is required for a fully supported environment
2017-12-18 09:52:41:[giassist:Done :rac001] Running RAC Home assistants (netca, asmca)
2017-12-18 09:52:41:[giassist:Time :rac001] Completed successfully in 359 seconds (0h:05m:59s)
2017-12-18 09:52:41:[creategrid:Done :rac001] Creating 12cR2 Grid Infrastructure
2017-12-18 09:52:41:[creategrid:Time :rac001] Completed successfully in 2548 seconds (0h:42m:28s)
INFO (node:rac001): Skipping CVU post crsinst checks, due to CLONE_SKIP_CVU_POSTCRS=yes
2017-12-18 09:52:41:[cvupostcrs:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)
2017-12-18 09:52:41:[racclone:Start:rac001] Cloning 12cR2 RAC Home on all nodes
..
INFO (node:rac001): Changing Database Edition to: 'Standard Edition'; The Oracle binary (/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle) is linked as (Enterprise Edition Release), however Database Edition set to (Standard Edition) in params.ini
2017-12-18 09:53:04:[racclonelocal:Start:rac001] Cloning 12cR2 RAC Home
INFO (node:rac001): Running on: rac001 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.2.0/dbhome_1 2>/dev/null
INFO (node:rac001): Running on: rac001 as oracle: /u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/clone/bin/clone.pl -silent ORACLE_BASE='/u01/app/oracle' ORACLE_HOME='/u01/app/oracle/product/12.2.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' INVENTORY_LOCATION='/u01/app/oraInventory' OSDBA_GROUP=dba OSOPER_GROUP= OSKMDBA_GROUP=dba OSDGDBA_GROUP=dba OSBACKUPDBA_GROUP=dba OSRACDBA_GROUP=dba oracle_install_db_InstallEdition=STD 'CLUSTER_NODES={rac001,rac002}' "LOCAL_NODE=rac001" '-ignoreSysPrereqs'
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 500 MB. Actual 6740 MB Passed
Checking swap space: must be greater than 500 MB. Actual 4059 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-12-18_09-53-12AM. Please wait ...
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:53:14, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 12271 12277)...
INFO (node:rac002): Changing Database Edition to: 'Standard Edition'; The Oracle binary (/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle) is linked as (Enterprise Edition Release), however Database Edition set to (Standard Edition) in params.ini
2017-12-18 09:53:15:[racclonelocal:Start:rac002] Cloning 12cR2 RAC Home
INFO (node:rac002): Running on: rac002 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.2.0/dbhome_1 2>/dev/null
INFO (node:rac002): Running on: rac002 as oracle: /u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/clone/bin/clone.pl -silent ORACLE_BASE='/u01/app/oracle' ORACLE_HOME='/u01/app/oracle/product/12.2.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' INVENTORY_LOCATION='/u01/app/oraInventory' OSDBA_GROUP=dba OSOPER_GROUP= OSKMDBA_GROUP=dba OSDGDBA_GROUP=dba OSBACKUPDBA_GROUP=dba OSRACDBA_GROUP=dba oracle_install_db_InstallEdition=STD 'CLUSTER_NODES={rac001,rac002}' "LOCAL_NODE=rac002" '-ignoreSysPrereqs'
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 500 MB. Actual 6757 MB Passed
Checking swap space: must be greater than 500 MB. Actual 4082 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-12-18_09-53-42AM. Please wait ....
You can find the log of this install session at: /u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-12AM.log
.................................................. 5% Done.
.................................................. 10% Done.
.................................................. 15% Done.
.................................................. 20% Done.
.................................................. 25% Done.
.................................................. 30% Done.
.................................................. 35% Done.
.................................................. 40% Done.
.................................................. 45% Done.
.................................................. 50% Done.
.................................................. 55% Done.
.................................................. 60% Done.
.................................................. 65% Done.
.................................................. 70% Done.
.................................................. 75% Done.
.................................................. 80% Done.
.................................................. 85% Done.
..........
Copy files in progress.
.
Copy files successful.
Link binaries in progress.
You can find the log of this install session at: /u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-42AM.log
.................................................. 5% Done.
.................................................. 10% Done.
.................................................. 15% Done.
.................................................. 20% Done.
.................................................. 25% Done.
.................................................. 30% Done.
.................................................. 35% Done.
.................................................. 40% Done.
.................................................. 45% Done.
.................................................. 50% Done.
.................................................. 55% Done.
.................................................. 60% Done.
.................................................. 65% Done.
.................................................. 70% Done.
.................................................. 75% Done.
.................................................. 80% Done.
.................................................. 85% Done.
..........
Copy files in progress.
Copy files successful.
Link binaries in progress.
...
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:56:19, elapsed: 0h:03m:35s, 2) nodes remaining, all background pid(s): 12271 12277)...
..
Link binaries successful.
Setup files in progress.
.
Setup files successful.
Setup Inventory in progress.
.
Setup Inventory successful.
Finish Setup successful.
The cloning of OraRAC12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-12AM.log' for more details.
Setup Oracle Base in progress.
Setup Oracle Base successful.
.................................................. 95% Done.
As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.2.0/dbhome_1/root.sh
Execute /u01/app/oracle/product/12.2.0/dbhome_1/root.sh on the following nodes:
[rac001]
.................................................. 100% Done.
INFO (node:rac001): Relinking the oracle binary to disable Database Enterprise Edition options (09:58:45)...
..
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:59:25, elapsed: 0h:06m:42s, 2) nodes remaining, all background pid(s): 12271 12277)...
..2017-12-18 10:00:28:[racrootlocal:Start:rac001] Running root.sh on RAC Home
Check /u01/app/oracle/product/12.2.0/dbhome_1/install/root_rac001_2017-12-18_10-00-28-680953042.log for the output of root script
2017-12-18 10:00:29:[racrootlocal:Done :rac001] Running root.sh on RAC Home
2017-12-18 10:00:29:[racrootlocal:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
INFO (node:rac001): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.2.0/dbhome_1)...
2017-12-18 10:00:29:[racclonelocal:Done :rac001] Cloning 12cR2 RAC Home
2017-12-18 10:00:29:[racclonelocal:Time :rac001] Completed successfully in 464 seconds (0h:07m:44s)
Link binaries successful.
Setup files in progress.
Setup files successful.
Setup Inventory in progress.
.
Setup Inventory successful.
Finish Setup successful.
The cloning of OraRAC12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-42AM.log' for more details.
Setup Oracle Base in progress.
Setup Oracle Base successful.
.................................................. 95% Done.
As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.2.0/dbhome_1/root.sh
Execute /u01/app/oracle/product/12.2.0/dbhome_1/root.sh on the following nodes:
[rac002]
.................................................. 100% Done.
INFO (node:rac002): Relinking the oracle binary to disable Database Enterprise Edition options (10:01:17)...
..2017-12-18 10:02:21:[racrootlocal:Start:rac002] Running root.sh on RAC Home
Check /u01/app/oracle/product/12.2.0/dbhome_1/install/root_rac002_2017-12-18_10-02-21-660060386.log for the output of root script
2017-12-18 10:02:22:[racrootlocal:Done :rac002] Running root.sh on RAC Home
2017-12-18 10:02:22:[racrootlocal:Time :rac002] Completed successfully in 1 seconds (0h:00m:01s)
INFO (node:rac002): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.2.0/dbhome_1)...
2017-12-18 10:02:22:[racclonelocal:Done :rac002] Cloning 12cR2 RAC Home
2017-12-18 10:02:22:[racclonelocal:Time :rac002] Completed successfully in 576 seconds (0h:09m:36s)
INFO (node:rac001): All racclonelocal operations completed on all (2) node(s) at: 10:02:24
2017-12-18 10:02:24:[racclone:Done :rac001] Cloning 12cR2 RAC Home on all nodes
2017-12-18 10:02:24:[racclone:Time :rac001] Completed successfully in 583 seconds (0h:09m:43s)
INFO (node:rac002): Disabling passwordless ssh access for root user (from remote nodes)
2017-12-18 10:02:28:[rmsshrootlocal:Time :rac002] Completed successfully in 0 seconds (0h:00m:00s)
INFO (node:rac001): Disabling passwordless ssh access for root user (from remote nodes)
2017-12-18 10:02:31:[rmsshrootlocal:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)
2017-12-18 10:02:31:[rmsshroot:Time :rac001] Completed successfully in 7 seconds (0h:00m:07s)
INFO (node:rac001): Current cluster state (10:02:31)...
INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/bin/olsnodes -n -s -t
rac001 1 Active Hub Unpinned
rac002 2 Active Hub Unpinned
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
Oracle Clusterware version on node [rac001] is [12.2.0.1.0]
CRS Administrator List: oracle root
Cluster is running in "flex" mode
CRS-41008: Cluster class is 'Standalone Cluster'
ASM Flex mode enabled: ASM instance count: 3
ASM is running on rac001,rac002
INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/bin/crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGDATA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGFRA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGOCRVOTING.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.net1.network
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ons
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac001                   STABLE
               OFFLINE OFFLINE      rac002                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac001                   STABLE
ora.asm
      1        ONLINE  ONLINE       rac001                   Started,STABLE
      2        ONLINE  ONLINE       rac002                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac001                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac001.vip
      1        ONLINE  ONLINE       rac001                   STABLE
ora.rac002.vip
      1        ONLINE  ONLINE       rac002                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac001                   STABLE
--------------------------------------------------------------------------------
INFO (node:rac001): For an explanation on resources in OFFLINE state, see Note:1068835.1
2017-12-18 10:02:40:[clusterstate:Time :rac001] Completed successfully in 9 seconds (0h:00m:09s)
2017-12-18 10:02:40:[buildcluster:Done :rac001] Building 12cR2 RAC Cluster
2017-12-18 10:02:40:[buildcluster:Time :rac001] Completed successfully in 3357 seconds (0h:55m:57s)
INFO (node:rac001): This entire build was logged in logfile: /u01/racovm/buildcluster3.log
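The buildcluster log also makes it easy to see where the 56 minutes went. A minimal sketch of extracting the per-phase timings (the heredoc simply reuses "[phase:Time :node]" summary lines from the run above as sample data; in practice, point the grep at your real /u01/racovm/buildcluster3.log):

```shell
# Sample data: ":Time :" summary lines as emitted by the Deploy Cluster Tool
# (copied from the build output above); replace with your real build log.
cat > /tmp/buildcluster_sample.log <<'EOF'
2017-12-18 09:46:42:[girootcrs:Time :rac001] Completed successfully in 2128 seconds (0h:35m:28s)
2017-12-18 09:52:41:[creatediskgroups:Time :rac001] Completed successfully in 237 seconds (0h:03m:57s)
2017-12-18 10:02:40:[buildcluster:Time :rac001] Completed successfully in 3357 seconds (0h:55m:57s)
EOF

# Print "<phase> <seconds>" pairs, longest phases first
grep -o '\[[a-z]*:Time :[^]]*\] Completed successfully in [0-9]* seconds' /tmp/buildcluster_sample.log \
  | sed 's/\[\([a-z]*\):Time :[^]]*\] Completed successfully in \([0-9]*\) seconds/\1 \2/' \
  | sort -k2 -rn
# buildcluster 3357
# girootcrs 2128
# creatediskgroups 237
```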
The configuration is not yet highly available at the network level, so we are now going to multiplex both the cluster heartbeat and the ASM network. We also need only 2 Flex ASM instances for our 2-node RAC:
[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl modify asm -count 2

[root@rac002 ~]# /u01/app/12.2.0/grid/bin/srvctl config listener -asmlistener
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.3.0
Home: <CRS home>
End points: TCP:1525
Listener is disabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

/u01/app/12.2.0/grid/bin/srvctl add listener -asmlistener -l ASMNET2LSNR_ASM -subnet 192.168.58.0
/u01/app/12.2.0/grid/bin/srvctl start listener -l ASMNET2LSNR_ASM

[root@rac002 ~]# /u01/app/12.2.0/grid/bin/srvctl config listener -asmlistener
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.3.0
Home: <CRS home>
End points: TCP:1525
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
Name: ASMNET2LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.58.0
Home: <CRS home>
End points: TCP:1526
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg getif
eth1  192.168.179.0  global  public
eth2  192.168.3.0  global  cluster_interconnect
eth3  192.168.58.0  global  asm

[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg setif -global eth2/192.168.3.0:cluster_interconnect,asm
[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg setif -global eth3/192.168.58.0:cluster_interconnect,asm

[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg getif
eth1  192.168.179.0  global  public
eth2  192.168.3.0  global  cluster_interconnect,asm
eth3  192.168.58.0  global  cluster_interconnect,asm
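Before trusting the new layout, it is worth checking that each private role really sits on two interfaces. A small illustrative check, not part of the Deploy Cluster Tool: the heredoc mimics the final `oifcfg getif` output above, so on a real cluster you would feed it the live command output instead.

```shell
# Sample 'oifcfg getif' output after multiplexing (taken from the run above)
cat > /tmp/oifcfg_getif.out <<'EOF'
eth1  192.168.179.0  global  public
eth2  192.168.3.0  global  cluster_interconnect,asm
eth3  192.168.58.0  global  cluster_interconnect,asm
EOF

# Each role should appear on at least two interfaces to survive a NIC failure
for role in cluster_interconnect asm; do
  n=$(grep -c "$role" /tmp/oifcfg_getif.out)
  if [ "$n" -ge 2 ]; then
    echo "$role: redundant ($n interfaces)"
  else
    echo "$role: NOT redundant ($n interface)"
  fi
done
# cluster_interconnect: redundant (2 interfaces)
# asm: redundant (2 interfaces)
```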
Let's check the overall cluster state:
[root@rac002 ~]# /u01/app/12.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ASMNET2LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGDATA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGFRA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGOCRVOTING.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.net1.network
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ons
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac001                   STABLE
               OFFLINE OFFLINE      rac002                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac002                   STABLE
ora.asm
      1        ONLINE  ONLINE       rac001                   Started,STABLE
      2        ONLINE  ONLINE       rac002                   Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       rac002                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac001.vip
      1        ONLINE  ONLINE       rac001                   STABLE
ora.rac002.vip
      1        ONLINE  ONLINE       rac002                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac002                   STABLE
--------------------------------------------------------------------------------
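Reading the full `crsctl stat res -t` table by eye gets tedious on bigger clusters. A hedged awk sketch that lists only the resources with at least one OFFLINE instance (the sample input is an excerpt copied from the output above; ora.proxy_advm and ora.qosmserver are expected OFFLINE here, see MOS Note 1068835.1):

```shell
# Excerpt of 'crsctl stat res -t' output (taken from the state shown above)
cat > /tmp/crsctl_stat_sample.out <<'EOF'
ora.DGDATA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac001                   STABLE
               OFFLINE OFFLINE      rac002                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
EOF

# Remember the current resource name, then flag it if any instance is OFFLINE
awk '/^ora\./ {res=$1; next} /OFFLINE/ {seen[res]=1} END {for (r in seen) print r}' \
  /tmp/crsctl_stat_sample.out | sort
# ora.proxy_advm
# ora.qosmserver
```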
The cluster is now up, functional and resilient to network failure. We could have chosen to create a database as part of the deployment process, since the "Deploy Cluster Tool" allows that as well. Nevertheless, for this demonstration, I chose to run the database creation manually on top of this deployment.
So we add a new database to the cluster:
[oracle@rac001 ~]$ dbca -silent -ignorePreReqs \
> -createDatabase \
> -gdbName app01 \
> -nodelist rac001,rac002 \
> -templateName General_Purpose.dbc \
> -characterSet AL32UTF8 \
> -createAsContainerDatabase false \
> -databaseConfigType RAC \
> -databaseType MULTIPURPOSE \
> -dvConfiguration false \
> -emConfiguration NONE \
> -enableArchive true \
> -memoryMgmtType AUTO_SGA \
> -memoryPercentage 75 \
> -nationalCharacterSet AL16UTF16 \
> -adminManaged \
> -storageType ASM \
> -diskGroupName DGDATA \
> -recoveryGroupName DGFRA \
> -sysPassword 0rAcle-Sys \
> -systemPassword 0rAcle-System \
> -useOMF true
Copying database files
1% complete
2% complete
15% complete
27% complete
Creating and starting Oracle instance
29% complete
32% complete
36% complete
40% complete
41% complete
43% complete
45% complete
Creating cluster database views
47% complete
63% complete
Completing Database Creation
64% complete
65% complete
68% complete
71% complete
72% complete
Executing Post Configuration Actions
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/app01/app01.log" for further details.

[oracle@rac001 ~]$ export ORACLE_SID=app011
[oracle@rac001 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Dec 20 08:54:03 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

SQL> select host_name from gv$instance ;

HOST_NAME
----------------------------------------------------------------
rac001
rac002
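A silent dbca run finishes quietly even when something went wrong along the way, so it pays to scan its log for ORA- errors before handing the database over. An illustrative sketch, where the heredoc merely stands in for the real /u01/app/oracle/cfgtoollogs/dbca/app01/app01.log mentioned above:

```shell
# Stand-in for the dbca log referenced in the output above
cat > /tmp/app01_dbca.log <<'EOF'
Copying database files
Creating and starting Oracle instance
Completing Database Creation
100% complete
EOF

# Fail loudly if dbca logged any ORA- or SEVERE message
if grep -Eq 'ORA-[0-9]+|SEVERE' /tmp/app01_dbca.log; then
  echo "dbca reported errors"
else
  echo "dbca log clean"
fi
# dbca log clean
```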
We now have a RAC 12.2 Standard Edition ready to run our critical applications, with the latest OS and Oracle patch levels and with all best practices and requirements applied.
This post demonstrates how much simpler it is, with a good underlying OVM infrastructure, to deploy various kinds of Oracle database infrastructure. The same automation approach can just as easily be applied to other technologies such as PostgreSQL, to "Big Data" stacks like Hortonworks, or to any other application.
I hope this helps, and please do not hesitate to contact us if you have any questions or require further information.
The article Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 02) first appeared on Blog dbi services.