After having worked with OVM on various architectures, I can say that it is a good technology for easily building virtualized environments for production applications. Because it is based on Xen and integrates simply with existing storage (FC, iSCSI, NFS, …) and networking solutions (bonding, LACP, …), it is a robust and convenient way to virtualize IT infrastructures while keeping "bare-metal" performance.
Besides, it is a hard partitioning technology that complies with Oracle's licensing policies for partitioned environments, allowing CPU counting to be controlled for licensing purposes.
The aim of this post is to demonstrate how simple it is to build an HA virtualized architecture using the OVM Manager command line tool only (doc link). We will create 1 VM on each server, including all the OS, network and storage requirements to run Oracle RAC 12.2.
Initial state:
- 2 physical servers installed with Oracle VM Server 3.4 (OVS, installation procedure here) to host the VMs, each with:
- 5 NICs each (no bonding for this example, but bonding is recommended for production systems)
- eth0: administrative network connected to the organization’s administrative network
- eth1: application network dedicated to the applications
- eth2: storage network cabled to the storage array for iSCSI LUN and NFS access
- eth3: cabled between both OVS Servers for RAC interconnect link #1/2
- eth4: cabled between both OVS Servers for RAC interconnect link #2/2
- 1 server with Oracle VM Manager 3.4 already installed (installation procedure here)
- eth0: administrative network connected to the organization’s administrative network
- 1 storage system (here we use a ZFS Storage Appliance)
- 1 OVM Template from Oracle Corp. (available here)
Summary
- Step 0: Connect to the OVM Manager client
- Step 1: Discover OVS servers
- Step 2: Discover file server
- Step 3: Discover NAS Storage
- Step 4: Creation of a server pool
- Step 5: Add servers to the server pool
- Step 6: Creation of a repository to store VMs’ configuration files and to import the Oracle Template
- Step 7: Create VMs called rac001 and rac002 for my 2-node RAC
- Step 8: Network definition for RAC interconnect and application network
- Step 9: Shared disks attachment to VMs for RAC ASM
Step 0: Connect to the OVM Manager client
Because the client connects through the SSH protocol (default port 10000, user admin), you can connect to the OVM Manager CLI from anywhere that has network connectivity to the OVM Manager server.
OVMCLI is a separate service from OVM Manager that runs on the OVM Manager server. Here I check the OVMCLI service status and connect from within the OVM Manager server itself.
service ovmcli status
ssh -l admin localhost -p 10000
OVM>                                             # --> prompt for OVM
OVM> ?                                           # --> to show which actions can be done
OVM> list ?                                      # --> to show which options are available for the command "list"
OVM> set OutputMode={ Verbose | Sparse | Xml }   # --> to make the output match your automation style
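Since everything goes through SSH, these commands can also be scripted. Below is a minimal sketch, assuming the OVMCLI accepts commands piped on standard input (behaviour may vary between OVM versions) and that authentication for the admin user is handled separately (SSH key or an expect wrapper); host and port are the ones used in this post.
#!/bin/bash
# Minimal sketch: feed a batch of OVM CLI commands to the OVMCLI service over SSH.
# Assumption: the CLI reads commands from standard input; authentication for
# the "admin" user is handled separately (SSH key or an expect wrapper).
OVMM_HOST=localhost   # OVM Manager host; here we run from the Manager itself
OVMM_PORT=10000       # default OVMCLI port

ssh -l admin -p "$OVMM_PORT" "$OVMM_HOST" <<'EOF'
set OutputMode=Sparse
list Server
exit
EOF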
Step 1: Discover OVS servers
discoverServer ipAddress=192.168.56.101 password=oracle takeOwnership=Yes
discoverServer ipAddress=192.168.56.102 password=oracle takeOwnership=Yes
Step 2: Discover file server
In this example I am going to store the server pool file systems on NFS shares from the ZFS Storage Appliance, but any NFS technology could be used, or they could be stored directly on iSCSI/FC LUNs.
OVM> list FileServerPlugin
Command: list FileServerPlugin
Status: Success
Time: 2017-10-19 14:51:31,311 CEST
Data:
  id:oracle.ocfs2.OCFS2.OCFS2Plugin (0.1.0-47.5)  name:Oracle OCFS2 File system
  id:oracle.generic.NFSPlugin.GenericNFSPlugin (1.1.0)  name:Oracle Generic Network File System

OVM> create FileServer plugin="Oracle Generic Network File System" accessHost=192.168.238.10 adminServers=ovs001,ovs002 name=zfsstorage
Command: create FileServer plugin="Oracle Generic Network File System" accessHost=192.168.238.10 adminServers=ovs001,ovs002 name=zfsstorage
Status: Success
Time: 2017-10-19 14:58:46,411 CEST
JobId: 1508417926209
Data:
  id:0004fb00000900004801ecf9996f1d43  name:zfsstorage

OVM> refreshAll
Command: refreshAll
Status: Success
Time: 2017-10-19 16:26:58,705 CEST
JobId: 1508422976145

OVM> list FileSystem
Command: list FileSystem
Status: Success
Time: 2017-10-19 17:41:35,737 CEST
Data:
  id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
  id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> refresh FileSystem id=75734f6d-704d-48ee-9853-f6cc09b5af65
Command: refresh FileSystem id=75734f6d-704d-48ee-9853-f6cc09b5af65
Status: Success
Time: 2017-10-19 17:42:28,516 CEST
JobId: 1508427714903

OVM> refresh FileSystem id=3f81dcad-e1ce-41b9-b0f3-3222b3816b17
Command: refresh FileSystem id=3f81dcad-e1ce-41b9-b0f3-3222b3816b17
Status: Success
Time: 2017-10-19 17:43:02,144 CEST
JobId: 1508427760257
Step 3: Discover NAS Storage
OVM> list StorageArrayPlugin
Command: list StorageArrayPlugin
Status: Success
Time: 2017-10-19 15:28:23,932 CEST
Data:
  id:oracle.s7k.SCSIPlugin.SCSIPlugin (2.1.2-3)  name:zfs_storage_iscsi_fc
  id:oracle.generic.SCSIPlugin.GenericPlugin (1.1.0)  name:Oracle Generic SCSI Plugin

OVM> create StorageArray plugin=zfs_storage_iscsi_fc name=zfsstorage storageType=ISCSI accessHost=192.168.238.10 accessPort=3260 adminHost=192.168.238.10 adminUserName=ovmuser adminPassword=oracle pluginPrivateData="OVM-iSCSI,OVM-iSCSI-Target"
Command: create StorageArray plugin=zfs_storage_iscsi_fc name=zfsstorage storageType=ISCSI accessHost=192.168.238.10 accessPort=3260 adminHost=192.168.238.10 adminUserName=ovmuser adminPassword=***** pluginPrivateData="OVM-iSCSI,OVM-iSCSI-Target"
Status: Success
Time: 2017-10-19 15:48:00,761 CEST
JobId: 1508420880565
Data:
  id:0004fb0000090000c105d1003f051fbd  name:zfsstorage

OVM> addAdminServer StorageArray name=zfsstorage server=ovs001
Command: addAdminServer StorageArray name=zfsstorage server=ovs001
Status: Success
Time: 2017-10-19 16:11:32,448 CEST
JobId: 1508422292175

OVM> addAdminServer StorageArray name=zfsstorage server=ovs002
Command: addAdminServer StorageArray name=zfsstorage server=ovs002
Status: Success
Time: 2017-10-19 16:11:35,424 CEST
JobId: 1508422295266

OVM> validate StorageArray name=zfsstorage
Command: validate StorageArray name=zfsstorage
Status: Success
Time: 2017-10-19 16:10:04,937 CEST
JobId: 1508422128777

OVM> refreshAll
Command: refreshAll
Status: Success
Time: 2017-10-19 16:26:58,705 CEST
JobId: 1508422976145
Step 4: Creation of a server pool
OVM needs to group its physical servers into a logical space called a server pool. A server pool uses at least 2 storage spaces:
- a storage space for the cluster configuration and disk heartbeat (at least 10 GB according to OVM 3.4’s recommendations); it is better to use a dedicated network path for this storage in order to avoid unwanted cluster evictions.
- a storage space for the server pool in which we can store VM configuration files, templates, ISOs and so on.
OVM> list FileSystem
Command: list FileSystem
Status: Success
Time: 2017-10-19 17:41:35,737 CEST
Data:
  id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
  id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> create ServerPool clusterEnable=yes filesystem=3f81dcad-e1ce-41b9-b0f3-3222b3816b17 name=prod01 description='Server pool for production 001' startPolicy=CURRENT_SERVER
Command: create ServerPool clusterEnable=yes filesystem=3f81dcad-e1ce-41b9-b0f3-3222b3816b17 name=prod01 description='Server pool for production 001' startPolicy=CURRENT_SERVER
Status: Success
Time: 2017-10-19 17:15:11,431 CEST
Data:
  id:0004fb0000020000c6b2c32fc58646e7  name:prod01
Step 5: Add servers to the server pool
OVM> list server
Command: list server
Status: Success
Time: 2017-10-19 17:15:28,111 CEST
Data:
  id:65:72:21:77:7b:0d:47:47:bc:43:e5:1f:64:3d:56:d9  name:ovs002
  id:bb:06:3c:3e:a4:76:4b:e2:9c:bc:65:69:4e:35:28:b4  name:ovs001

OVM> add Server name=ovs001 to ServerPool name=prod01
Command: add Server name=ovs001 to ServerPool name=prod01
Status: Success
Time: 2017-10-19 17:17:55,131 CEST
JobId: 1508426260895

OVM> add Server name=ovs002 to ServerPool name=prod01
Command: add Server name=ovs002 to ServerPool name=prod01
Status: Success
Time: 2017-10-19 17:18:21,439 CEST
JobId: 1508426277115
Step 6: Creation of a repository to store VMs’ configuration files and to import the Oracle Template
OVM> list filesystem
Command: list filesystem
Status: Success
Time: 2017-10-19 17:44:23,811 CEST
Data:
  id:0004fb00000500009cbc79dde9b6649e  name:Server Pool File System
  id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
  id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> create Repository name=RepoOracle on FileSystem name="nfs on 192.168.238.10://export//RepoOracle"
Command: create Repository name=RepoOracle on FileSystem name="nfs on 192.168.238.10://export//RepoOracle"
Status: Success
Time: 2017-10-19 17:45:22,346 CEST
JobId: 1508427888238
Data:
  id:0004fb0000030000f1c8182390a36c8c  name:RepoOracle

OVM> add ServerPool name=prod01 to Repository name=RepoOracle
Command: add ServerPool name=prod01 to Repository name=RepoOracle
Status: Success
Time: 2017-10-19 17:53:08,020 CEST
JobId: 1508428361049

OVM> refresh Repository name=RepoOracle
Command: refresh Repository name=RepoOracle
Status: Success
Time: 2017-10-19 17:53:40,922 CEST
JobId: 1508428394212

OVM> importTemplate Repository name=RepoOracle url="ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz,ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-2of2.tar.gz"
Command: importTemplate Repository name=RepoOracle url="ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz,ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-2of2.tar.gz"
Status: Success
Time: 2017-11-02 12:05:29,341 CET
JobId: 1509619956729
Data:
  id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz
Step 7: Create VMs called rac001 and rac002 for my 2-node RAC
Here we create VMs by cloning the template OVM_OL7U4_X86_64_12201DBRAC_PVHVM from Oracle.
OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 12:07:06,077 CET
Data:
  id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz

OVM> edit vm id=0004fb00001400005f68a4067eda1e6b name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM
Command: edit vm id=0004fb00001400005f68a4067eda1e6b name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM
Status: Success
Time: 2017-11-02 12:07:30,392 CET
JobId: 1509620850142

OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 12:07:36,282 CET
Data:
  id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM

OVM> clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac001 serverPool=prod01
Command: clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac001 serverPool=prod01
Status: Success
Time: 2017-11-02 12:31:31,798 CET
JobId: 1509622291342
Data:
  id:0004fb0000060000d4819629ebc0687f  name:rac001

OVM> clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac002 serverPool=prod01
Command: clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac002 serverPool=prod01
Status: Success
Time: 2017-11-02 13:57:34,125 CET
JobId: 1509627453634
Data:
  id:0004fb0000060000482c8e4790b7081a  name:rac002

OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 15:23:54,077 CET
Data:
  id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM
  id:0004fb0000060000d4819629ebc0687f  name:rac001
  id:0004fb0000060000482c8e4790b7081a  name:rac002

OVM> edit vm name=rac001 memory=2048 memoryLimit=2048
Command: edit vm name=rac001 memory=2048 memoryLimit=2048
Status: Success
Time: 2017-11-02 17:14:45,542 CET
JobId: 1509639285374

OVM> edit vm name=rac002 memory=2048 memoryLimit=2048
Command: edit vm name=rac002 memory=2048 memoryLimit=2048
Status: Success
Time: 2017-11-02 17:14:59,458 CET
JobId: 1509639299301
Step 8: Network definition for RAC interconnect and application network
create Network roles=VIRTUAL_MACHINE name=Application-Network
create Network roles=VIRTUAL_MACHINE name=Interco-Network-01
create Network roles=VIRTUAL_MACHINE name=Interco-Network-02

OVM> list network
Command: list network
Status: Success
Time: 2017-10-17 00:31:53,673 CEST
Data:
  id:108572a7ca  name:Application-Network
  id:10922ff6d7  name:Interco-Network-01
  id:106765828d  name:Interco-Network-02
Next, we attach the OVS servers’ physical NICs to the corresponding networks.
OVM> list port
Command: list port
Status: Success
Time: 2017-11-02 16:03:40,026 CET
Data:
  id:0004fb00002000007667fde85d2a2944  name:eth0 on ovs002
  id:0004fb00002000001fa791c597d71947  name:eth1 on ovs002
  id:0004fb00002000003842bd1f3acb476b  name:eth2 on ovs002
  id:0004fb000020000031652acb25248275  name:eth3 on ovs002
  id:0004fb00002000006fb524dac1f2319c  name:eth4 on ovs001
  id:0004fb0000200000748a37db41f80fb2  name:eth4 on ovs002
  id:0004fb00002000000178e5cefb3c0161  name:eth3 on ovs001
  id:0004fb000020000020373da7c0cdf4cf  name:eth2 on ovs001
  id:0004fb0000200000b0e747714aa822b7  name:eth1 on ovs001
  id:0004fb00002000002787de2e68f61ecd  name:eth0 on ovs001

add Port id=0004fb0000200000b0e747714aa822b7 to Network name=Application-Network
add Port id=0004fb00002000000178e5cefb3c0161 to Network name=Interco-Network-01
add Port id=0004fb00002000006fb524dac1f2319c to Network name=Interco-Network-02
add Port id=0004fb00002000001fa791c597d71947 to Network name=Application-Network
add Port id=0004fb000020000031652acb25248275 to Network name=Interco-Network-01
add Port id=0004fb0000200000748a37db41f80fb2 to Network name=Interco-Network-02
Then we create the virtual NICs for the virtual machines (the order matters, as the first vNIC created fills the first slot of the VM).
OVM> list vnic
Command: list vnic
Status: Success
Time: 2017-11-02 15:25:54,571 CET
Data:
  id:0004fb00000700001fe86897bfb0ecd4  name:Template Vnic
  id:0004fb00000700005351eb55314ab34e  name:Template Vnic

create Vnic name=rac001_vnic_admin network=Admin-Network on Vm name=rac001
create Vnic name=rac001_vnic_application network=Application-Network on Vm name=rac001
create Vnic name=rac001_vnic_interconnect network=Interco-Network-01 on Vm name=rac001
create Vnic name=rac001_vnic_interconnect network=Interco-Network-02 on Vm name=rac001
create Vnic name=rac002_vnic_admin network=Admin-Network on Vm name=rac002
create Vnic name=rac002_vnic_application network=Application-Network on Vm name=rac002
create Vnic name=rac002_vnic_interconnect network=Interco-Network-01 on Vm name=rac002
create Vnic name=rac002_vnic_interconnect network=Interco-Network-02 on Vm name=rac002

OVM> list vnic
Command: list vnic
Status: Success
Time: 2017-11-02 15:27:34,642 CET
Data:
  id:0004fb00000700005631bb2fbbeed53c  name:rac002_vnic_interconnect
  id:0004fb00000700005e93ec7e8cf529b6  name:rac001_vnic_interconnect
  id:0004fb0000070000c091c9091b464846  name:rac002_vnic_admin
  id:0004fb00000700001fe86897bfb0ecd4  name:Template Vnic
  id:0004fb00000700009430b0a26566d6e3  name:rac002_vnic_application
  id:0004fb0000070000c4113fb1d9375791  name:rac002_vnic_interconnect
  id:0004fb00000700005351eb55314ab34e  name:Template Vnic
  id:0004fb0000070000e1abd7e572bffc3a  name:rac001_vnic_admin
  id:0004fb000007000079bb1fbf1d1942c9  name:rac001_vnic_application
  id:0004fb000007000085d8a41dc8fd768c  name:rac001_vnic_interconnect
Step 9: Shared disks attachment to VMs for RAC ASM
Thanks to the storage plugin available for the ZFS appliance, we can create LUNs directly from the OVM CLI. You may find the plugin for your storage vendor on the Oracle website: https://www.oracle.com/virtualization/storage-connect-partner-program.html.
The storage plugin needs to be installed on each OVS server, and the OVS servers need to be rediscovered afterwards.
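As an illustration, the plugin installation and server rediscovery could be scripted as below. This is a sketch only: the RPM file name is hypothetical (use the package downloaded from the Oracle Storage Connect program for your array), and the rediscovery simply reuses the discoverServer command from Step 1.
#!/bin/bash
# Sketch only: copy the vendor storage plugin RPM to each OVS server, install it,
# then rediscover the servers from the OVM CLI so the plugin is picked up.
# "osc-zfssa-plugin.rpm" is a hypothetical file name; use your vendor's package.
PLUGIN_RPM=osc-zfssa-plugin.rpm

for ovs in 192.168.56.101 192.168.56.102; do
  scp "$PLUGIN_RPM" root@"$ovs":/tmp/
  ssh root@"$ovs" "rpm -ivh /tmp/$PLUGIN_RPM"
done

# Rediscover both OVS servers (same command as in Step 1)
ssh -l admin -p 10000 localhost <<'EOF'
discoverServer ipAddress=192.168.56.101 password=oracle takeOwnership=Yes
discoverServer ipAddress=192.168.56.102 password=oracle takeOwnership=Yes
exit
EOF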
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu001 name=clu001dgclu001 on VolumeGroup name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu002 name=clu001dgclu002 on VolumeGroup name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu003 name=clu001dgclu003 on VolumeGroup name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu004 name=clu001dgclu004 on VolumeGroup name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu005 name=clu001dgclu005 on VolumeGroup name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgdata001 name=clu001dgdata001 on VolumeGroup name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgdata002 name=clu001dgdata002 on VolumeGroup name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgfra001 name=clu001dgfra001 on VolumeGroup name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgfra002 name=clu001dgfra002 on VolumeGroup name=data/local/OracleTech

OVM> list PhysicalDisk
Command: list PhysicalDisk
Status: Success
Time: 2017-11-02 11:44:41,624 CET
Data:
  id:0004fb0000180000ae02df42a4c8e582  name:clu001dgclu004
  id:0004fb0000180000d91546f7d1a09cfb  name:clu001dgclu005
  id:0004fb0000180000ab0030fb540a55b9  name:clu001dgclu003
  id:0004fb0000180000d20bb1d7d50d6875  name:clu001dgfra001
  id:0004fb00001800009e39a0b8b1edcf90  name:clu001dgfra002
  id:0004fb00001800003742306aa30bfdd4  name:clu001dgdata001
  id:0004fb00001800006131006a7a9fd266  name:clu001dgdata002
  id:0004fb0000180000a5177543a1ef0464  name:clu001dgclu001
  id:0004fb000018000035bd38c6f5245f66  name:clu001dgclu002

create vmdiskmapping slot=10 physicalDisk=clu001dgclu001 name=asm_disk_cluster_rac001_clu001dgclu001 on Vm name=rac001
create vmdiskmapping slot=11 physicalDisk=clu001dgclu002 name=asm_disk_cluster_rac001_clu001dgclu002 on Vm name=rac001
create vmdiskmapping slot=12 physicalDisk=clu001dgclu003 name=asm_disk_cluster_rac001_clu001dgclu003 on Vm name=rac001
create vmdiskmapping slot=13 physicalDisk=clu001dgclu004 name=asm_disk_cluster_rac001_clu001dgclu004 on Vm name=rac001
create vmdiskmapping slot=14 physicalDisk=clu001dgclu005 name=asm_disk_cluster_rac001_clu001dgclu005 on Vm name=rac001
create vmdiskmapping slot=15 physicalDisk=clu001dgdata001 name=asm_disk_cluster_rac001_clu001dgdata001 on Vm name=rac001
create vmdiskmapping slot=16 physicalDisk=clu001dgdata002 name=asm_disk_cluster_rac001_clu001dgdata002 on Vm name=rac001
create vmdiskmapping slot=17 physicalDisk=clu001dgfra001 name=asm_disk_cluster_rac001_clu001dgfra001 on Vm name=rac001
create vmdiskmapping slot=18 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac001_clu001dgfra002 on Vm name=rac001
create vmdiskmapping slot=10 physicalDisk=clu001dgclu001 name=asm_disk_cluster_rac002_clu001dgclu001 on Vm name=rac002
create vmdiskmapping slot=11 physicalDisk=clu001dgclu002 name=asm_disk_cluster_rac002_clu001dgclu002 on Vm name=rac002
create vmdiskmapping slot=12 physicalDisk=clu001dgclu003 name=asm_disk_cluster_rac002_clu001dgclu003 on Vm name=rac002
create vmdiskmapping slot=13 physicalDisk=clu001dgclu004 name=asm_disk_cluster_rac002_clu001dgclu004 on Vm name=rac002
create vmdiskmapping slot=14 physicalDisk=clu001dgclu005 name=asm_disk_cluster_rac002_clu001dgclu005 on Vm name=rac002
create vmdiskmapping slot=15 physicalDisk=clu001dgdata001 name=asm_disk_cluster_rac002_clu001dgdata on Vm name=rac002
create vmdiskmapping slot=16 physicalDisk=clu001dgdata002 name=asm_disk_cluster_rac002_clu001dgdata on Vm name=rac002
create vmdiskmapping slot=17 physicalDisk=clu001dgfra001 name=asm_disk_cluster_rac002_clu001dgfra001 on Vm name=rac002
create vmdiskmapping slot=18 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002

#Output of an attachment:
OVM> create vmdiskmapping slot=51 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002
Command: create vmdiskmapping slot=51 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002
Status: Success
Time: 2017-11-02 15:49:44,573 CET
JobId: 1509634184144
Data:
  id:0004fb0000130000d1a3ecffefcc0b5b  name:asm_disk_cluster_rac002_clu001dgfra002
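Because these mappings are highly repetitive, they lend themselves to generation by a small script. The sketch below simply prints the create vmdiskmapping commands for both VMs, using the disk names created above and slots starting at 10; the output can then be pasted into the CLI or piped to it over SSH as in Step 0.
#!/bin/bash
# Minimal sketch: generate the repetitive "create vmdiskmapping" commands
# for both RAC VMs, using the disk names created above and slots from 10.
DISKS=(clu001dgclu001 clu001dgclu002 clu001dgclu003 clu001dgclu004 clu001dgclu005 \
       clu001dgdata001 clu001dgdata002 clu001dgfra001 clu001dgfra002)

for vm in rac001 rac002; do
  slot=10
  for disk in "${DISKS[@]}"; do
    echo "create vmdiskmapping slot=$slot physicalDisk=$disk name=asm_disk_cluster_${vm}_${disk} on Vm name=$vm"
    slot=$((slot + 1))
  done
done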
OVM> list vmdiskmapping
Command: list vmdiskmapping
Status: Success
Time: 2017-11-02 15:50:05,117 CET
Data:
  id:0004fb0000130000a2e52668e38d24f0  name:Mapping for disk Id (0004fb00001200008e5043cea31e4a1c.img)
  id:0004fb00001300000b0202b6af4254b1  name:asm_disk_cluster_rac002_clu001dgclu003
  id:0004fb0000130000f573415ba8af814d  name:Mapping for disk Id (0004fb0000120000073fd0cff75c5f4d.img)
  id:0004fb0000130000217c1b6586d88d98  name:asm_disk_cluster_rac002_clu001dgclu002
  id:0004fb00001300007c8f1b4fd9e845c4  name:asm_disk_cluster_rac002_clu001dgclu001
  id:0004fb00001300009698cf153f616454  name:asm_disk_cluster_rac001_clu001dgfra002
  id:0004fb0000130000c9caf8763df6bfe0  name:asm_disk_cluster_rac001_clu001dgfra001
  id:0004fb00001300009771ff7e2a1bf965  name:asm_disk_cluster_rac001_clu001dgdata002
  id:0004fb00001300003aed42abb7085053  name:asm_disk_cluster_rac001_clu001dgdata001
  id:0004fb0000130000ac45b70bac2cedf7  name:asm_disk_cluster_rac001_clu001dgclu005
  id:0004fb000013000007069008e4b91b9d  name:asm_disk_cluster_rac001_clu001dgclu004
  id:0004fb0000130000a8182ada5a07d7cd  name:asm_disk_cluster_rac001_clu001dgclu003
  id:0004fb00001300009edf25758590684b  name:asm_disk_cluster_rac001_clu001dgclu002
  id:0004fb0000130000a93c8a73900cbf80  name:asm_disk_cluster_rac002_clu001dgfra001
  id:0004fb0000130000c8c35da3ad0148c4  name:asm_disk_cluster_rac001_clu001dgclu001
  id:0004fb0000130000d1a3ecffefcc0b5b  name:asm_disk_cluster_rac002_clu001dgfra002
  id:0004fb0000130000ff84c64175d7e6c1  name:asm_disk_cluster_rac002_clu001dgdata
  id:0004fb00001300009c08b1803928536d  name:Mapping for disk Id (dd3c390b29af49809caba202f234a443.img)
  id:0004fb0000130000e85ace19b45c0ad6  name:Mapping for disk Id (0004fb00001200002aa671facc8a1307.img)
  id:0004fb0000130000e595c3dc5788b87a  name:Mapping for disk Id (0004fb000012000087341e27f9faaa17.img)
  id:0004fb0000130000c66fe2d0d66b7276  name:asm_disk_cluster_rac002_clu001dgdata
  id:0004fb00001300009c85bca66c400366  name:Mapping for disk Id (46da481163424b739feeb08b4d22c1b4.img)
  id:0004fb0000130000768a2af09207e659  name:asm_disk_cluster_rac002_clu001dgclu004
  id:0004fb000013000092836d3ee569e6ac  name:asm_disk_cluster_rac002_clu001dgclu005

OVM> add StorageInitiator name=iqn.1988-12.com.oracle:1847e1b91b5b to AccessGroup name=cluster001
Command: add StorageInitiator name=iqn.1988-12.com.oracle:1847e1b91b5b to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 16:59:32,116 CET
JobId: 1509638311277

OVM> add StorageInitiator name=iqn.1988-12.com.oracle:a5c84f2c8798 to AccessGroup name=cluster001
Command: add StorageInitiator name=iqn.1988-12.com.oracle:a5c84f2c8798 to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 16:57:31,703 CET
JobId: 1509638191228

add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu002 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu003 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu004 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu005 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgdata001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgdata002 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgfra001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgfra002 to AccessGroup name=cluster001

#Output of an Access addition:
OVM> add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
Command: add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 17:10:13,636 CET
JobId: 1509639013463
OVM> refreshStorageLayer Server name=ovs001
Command: refreshStorageLayer Server name=ovs001
Status: Success
Time: 2017-11-02 16:42:26,230 CET
JobId: 1509637330270

OVM> refreshStorageLayer Server name=ovs002
Command: refreshStorageLayer Server name=ovs002
Status: Success
Time: 2017-11-02 16:42:51,296 CET
JobId: 1509637355423
Final state: 2 VMs hosted on 2 different servers, with all the OS, network and storage requirements to run RAC 12.2.
This concludes this part and demonstrates how easy it can be to automate those commands and deploy many different architectures.
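As a simple illustration of that automation, the whole sequence could be kept in a plain command file and replayed against the CLI in one shot. The sketch below assumes, as in Step 0, that the OVMCLI reads commands from standard input; deploy_rac_infra.ovm is a hypothetical file containing the commands shown in this post.
#!/bin/bash
# Sketch: replay a saved list of OVM CLI commands against the OVMCLI service.
# deploy_rac_infra.ovm is a hypothetical file holding the commands from this post.
ssh -l admin -p 10000 localhost < deploy_rac_infra.ovm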
The next part will describe how to deploy RAC 12.2 on top of this infrastructure with the Oracle DeployCluster tool in a few commands …
I hope this helps, and please do not hesitate to contact us if you have any questions or require further information.