Ø
Overview
This document is intended for IT and database administrators who are deploying Oracle Database 11g R2 database software on Red Hat Enterprise Linux 6.x. The paper is written as a step-by-step guide on how to install, configure, and build an Oracle Real Application Clusters (RAC) system on Red Hat Enterprise Linux 6.x Server.
Ø
Pre-installation Requirements
§ Hardware Requirements
The minimum required RAM is 1.5 GB for Grid Infrastructure for a cluster, or 2.5 GB for Grid Infrastructure for a cluster and Oracle RAC. To check your RAM, issue:
root@nodedb1:~ # grep MemTotal /proc/meminfo
The minimum required swap space is 1.5 GB. Oracle recommends that you set the swap space to:
- 1.5 times the amount of RAM for systems with 2 GB of RAM or less.
- The same amount as RAM for systems with 2 GB to 16 GB of RAM.
- 16 GB for systems with more than 16 GB of RAM.
A quick way to compare your RAM and swap against these recommendations is sketched below.
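The following lines (a minimal sketch; the thresholds simply restate the recommendations above) compare installed RAM and swap with the recommended swap size:
root@nodedb1:~ # ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
root@nodedb1:~ # swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)
root@nodedb1:~ # ram_gb=$((ram_kb / 1024 / 1024))
root@nodedb1:~ # if [ $ram_gb -le 2 ]; then rec_kb=$((ram_kb * 3 / 2)); elif [ $ram_gb -le 16 ]; then rec_kb=$ram_kb; else rec_kb=$((16 * 1024 * 1024)); fi
root@nodedb1:~ # echo "RAM: ${ram_kb} kB, swap: ${swap_kb} kB, recommended swap: ${rec_kb} kB"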
To check swap space, issue:
root@nodedb1:~ # grep SwapTotal /proc/meminfo
You need at least 1 GB of free space in /tmp; having more does no harm. To check your temp space, issue:
root@nodedb1:~ # df -h /tmp
You will need at least 4.5 GB of available disk space for the Grid home directory, which includes both the binary files for Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) and their associated log files, and at least 4 GB of available disk space for the Oracle Database home directory. To check the space in the OS partition, issue:
root@nodedb1:~ # df -h
§ Network Hardware Requirements
Each node must have at least two network interface cards (NICs), or network adapters. One adapter is for the public network interface and the other adapter is for the private network interface (the interconnect).
You need to install additional network adapters on a node if that node does not have at least two network adapters, or if it has two network interface cards but is using network attached storage (NAS). You should have a separate network adapter for NAS.
Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes.
You should configure the same private interface names for all nodes as well. If eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node.
The private network adapters must support the User Datagram Protocol (UDP), using high-speed network adapters and a network switch that supports TCP/IP (Gigabit Ethernet or better). Oracle recommends that you use a dedicated network switch.
§ IP Address Requirements
You must have a DNS server in order for the SCAN listener to work. So, before you proceed with the installation, prepare your DNS server. You must create the following entries manually in your DNS server:
i) A public IP address for each node
ii) A virtual IP address for each node
iii) Three single client access name (SCAN) addresses for the cluster
During installation a SCAN for the cluster is configured, which is a domain name that resolves to all the SCAN addresses allocated for the cluster. The IP addresses used for the SCAN addresses must be on the same subnet as the VIP addresses. The SCAN must be unique within your network. The SCAN addresses should not respond to ping commands before installation.
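Before you start the installer you can verify both properties from any node (a quick pre-check; the names below are the ones used later in this paper):
root@nodedb1:~ # nslookup nodedb-scan
root@nodedb1:~ # ping -c1 nodedb-scan
The nslookup should return all three SCAN addresses, and the ping should receive no reply, confirming the addresses are not yet in use.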
§ OS and Software Requirements
To determine which distribution and version of Linux is installed, as the root user issue:
root@nodedb1:~ # cat /proc/version
Be sure your Linux version is supported by Oracle Database 11gR2.
To determine which chip architecture each server is using, and which version of the software you should install, as the root user issue:
root@nodedb1:~ # uname -m
This command displays the processor type. For a 64-bit architecture, the output would be "x86_64".
To determine whether the required errata level is installed, as the root user issue:
root@nodedb1:~ # uname -r
2.6.32-358.6.1.el6.x86_64
root@nodedb1:~ # rpm -q package_name
With the Cluster Verification Utility, as well as by running the OUI, you can determine whether you have missed any packages that are required to install Grid Infrastructure. If any package is missing, you can install it with:
root@nodedb1:~ # rpm -Uvh package_name
Ø
Preparing the server to install Grid Infrastructure
§ Synchronize the time between all RAC nodes
Oracle Clusterware 11g Release 2 (11.2) requires time synchronization across all nodes within a cluster when Oracle RAC is deployed.
root@nodedb1:~ # system-config-date
This command provides a GUI through which you can set the same time across all nodes. But for accurate time synchronization across the nodes you have two options: an operating-system-configured Network Time Protocol (NTP), or Oracle Cluster Time Synchronization Service (CTSS).
Oracle recommends using Cluster Time Synchronization Service because it can synchronize time among cluster members without contacting an external time server.
Note that if you use NTP, then the Oracle Cluster Time Synchronization Service daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode.
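Once Grid Infrastructure is up, you can confirm which mode CTSS chose (shown here for reference; crsctl is only available after the install):
[grid@nodedb1 ~]$ crsctl check ctss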
If you have NTP daemons on your server but you cannot configure them to synchronize time with a time server, and you want to use Cluster Time Synchronization Service to provide synchronization in the cluster, then deactivate and de-install the Network Time Protocol (NTP).
To deactivate it, do the following:
root@nodedb1:~ # /sbin/service ntpd stop
root@nodedb1:~ # chkconfig ntpd off
root@nodedb1:~ # mv /etc/ntp.conf /etc/ntp.conf.org
Also move the following file aside:
root@nodedb1:~ # mv /var/run/ntpd.pid /var/run/ntpd.pid.org
§ Create the required OS users and groups
NOTE: Oracle recommends separate users for the installation of the Grid Infrastructure (GI) and the Oracle RDBMS home. The GI will be installed in a separate Oracle base, owned by the user 'grid'. After the grid install, the GI home will be owned by root, and inaccessible to unauthorized users.
1. Create the OS groups using the commands below. Enter these commands as the 'root' user:
root@nodedb1:~ # /usr/sbin/groupadd -g 501 oinstall
root@nodedb1:~ # /usr/sbin/groupadd -g 502 dba
root@nodedb1:~ # /usr/sbin/groupadd -g 503 oper
root@nodedb1:~ # /usr/sbin/groupadd -g 505 asmadmin
root@nodedb1:~ # /usr/sbin/groupadd -g 506 asmdba
root@nodedb1:~ # /usr/sbin/groupadd -g 507 asmoper
2. Create the users that will own the Oracle software using the commands:
root@nodedb1:~ # /usr/sbin/useradd -u 501 -c "Oracle Grid Infrastructure Owner" -g oinstall -G asmadmin,asmdba,asmoper grid
root@nodedb1:~ # /usr/sbin/useradd -u 502 -c "Oracle RDBMS Owner" -g oinstall -G dba,oper,asmdba oracle
3. Set the passwords for the grid and oracle accounts using the following commands. Replace the password with your own:
root@nodedb1:~ # passwd grid
Changing password for user grid.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
root@nodedb1:~ # passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
4. Repeat Step 1 through Step 3 on each node in your cluster.
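To confirm the users and groups came out identical on every node (a simple sanity check, not part of the original steps), compare the id output across the cluster; the UIDs and GIDs must match on all nodes:
root@nodedb1:~ # id grid
root@nodedb1:~ # id oracle
root@nodedb2:~ # id grid
root@nodedb2:~ # id oracle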
§ Configure the network
1. Determine your cluster name. The cluster name should satisfy the following conditions: it is globally unique throughout your host domain; it is at least 1 character and no more than 15 characters long; and it consists of the same character set used for host names, namely single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-). We set the cluster name to nodedb and use nodedb-scan as the SCAN name.
2. Determine the public host name for each node in the cluster. For the public host name, use the primary host name of each node. In other words, use the name displayed by the hostname command, for example: nodedb1.
3. Determine the public virtual hostname for each node in the cluster. The virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle recommends that you provide a name in the format <public hostname>-vip, for example: nodedb1-vip. The virtual hostname must meet the following requirements:
- The virtual IP address and the network name must not be currently in use.
- The virtual IP address must be on the same subnet as your public IP address.
- The virtual host name for each node should be registered with your DNS.
4. Determine the private hostname for each node in the cluster. This private hostname does not need to be resolvable through DNS and should be entered in the /etc/hosts file. A common naming convention for the private hostname is <public hostname>-priv, for example: nodedb1-priv.
- The private IP should NOT be accessible to servers not participating in the local cluster.
- The private network should be on standalone dedicated switches.
- The private network should NOT be part of a larger overall network topology.
- The private network should be deployed on Gigabit Ethernet or better.
It is recommended that redundant NICs are configured with the Linux bonding driver. Active/passive is the preferred bonding method due to its simple configuration; a minimal sketch follows below.
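A minimal active/passive bond for RHEL 6 might look like the following. This example is not part of the original configuration; the interface names and addresses are illustrative, so adapt them to your environment:
root@nodedb1:~ # echo "alias bond0 bonding" > /etc/modprobe.d/bonding.conf
root@nodedb1:~ # cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.10.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=1 miimon=100"
root@nodedb1:~ # cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
A second slave interface would get an identical ifcfg file with its own DEVICE name. Mode 1 (active-backup) keeps one slave active and fails over on link loss detected by miimon, which is why its configuration stays simple.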
5. Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin). The SCAN IPs must NOT be in the /etc/hosts file; the SCAN name must be resolved by DNS.
root@nodedb1:~ # nslookup nodedb-scan
Server: 10.176.128.131
Address: 10.176.128.131#53
Name: nodedb-scan.localdomain.com
Address: 10.10.8.106
Name: nodedb-scan.localdomain.com
Address: 10.10.8.104
Name: nodedb-scan.localdomain.com
Address: 10.10.8.105
root@nodedb2:~ # nslookup nodedb-scan
Server: 10.176.128.131
Address: 10.176.128.131#53
Name: nodedb-scan.localdomain.com
Address: 10.10.8.104
Name: nodedb-scan.localdomain.com
Address: 10.10.8.105
Name: nodedb-scan.localdomain.com
Address: 10.10.8.106
6. Even if you are using a DNS, Oracle recommends that you add lines to the /etc/hosts file on each node, specifying the public IP, VIP and private addresses. Configure the /etc/hosts file so that it is similar to the following example.
NOTE: The SCAN IPs must not be in the /etc/hosts file; resolving the SCAN through /etc/hosts would result in only one SCAN IP for the entire cluster.
root@nodedb1:~ # cat /etc/hosts
#Public IP
10.10.8.10 nodedb1.localdomain.com nodedb1
10.10.8.11 nodedb2.localdomain.com nodedb2
#Private IP
192.168.10.10 nodedb1-priv.localdomain.com nodedb1-priv
192.168.10.11 nodedb2-priv.localdomain.com nodedb2-priv
#Virtual IP
10.10.8.102 nodedb1-vip.localdomain.com nodedb1-vip
10.10.8.103 nodedb2-vip.localdomain.com nodedb2-vip
7. If you configured the IP addresses in a DNS server, then, as the root user, change the hosts search order in /etc/nsswitch.conf on all nodes as shown here:
Old: hosts: files nis dns
New: hosts: dns files nis
After modifying the nsswitch.conf file, restart the nscd daemon on each node using the following command:
root@nodedb1:~ # /sbin/service nscd restart
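Because getent resolves through nsswitch.conf, it is a convenient way to confirm the new order is in effect (a quick check, not in the original paper); the SCAN should now come back with its three DNS addresses:
root@nodedb1:~ # getent ahosts nodedb-scan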
My full IP address assignment table is as follows.
Identity | Host Node | Name | Type | Address | Static or Dynamic | Resolved by
Node 1 Public | nodedb1 | nodedb1 | Public | 10.10.8.10 | Static | DNS
Node 1 Virtual | Selected by Oracle Clusterware | nodedb1-vip | Virtual | 10.10.8.102 | Static | DNS and/or hosts file
Node 1 Private | nodedb1 | nodedb1-priv | Private | 192.168.10.10 | Static | DNS, hosts file, or none
Node 2 Public | nodedb2 | nodedb2 | Public | 10.10.8.11 | Static | DNS
Node 2 Virtual | Selected by Oracle Clusterware | nodedb2-vip | Virtual | 10.10.8.103 | Static | DNS and/or hosts file
Node 2 Private | nodedb2 | nodedb2-priv | Private | 192.168.10.11 | Static | DNS, hosts file, or none
SCAN VIP 1 | Selected by Oracle Clusterware | nodedb-scan | Virtual | 10.10.8.104 | Static | DNS
SCAN VIP 2 | Selected by Oracle Clusterware | nodedb-scan | Virtual | 10.10.8.105 | Static | DNS
SCAN VIP 3 | Selected by Oracle Clusterware | nodedb-scan | Virtual | 10.10.8.106 | Static | DNS
In your /etc/resolv.conf file, enter your DNS name server addresses on both nodes.
root@nodedb1:~ # vi /etc/resolv.conf
search am.mot-mobility.com
nameserver 10.176.128.131
nameserver 144.188.179.6
nameserver 4.2.2.1
nameserver 4.2.2.2
Verify the network configuration by using the ping command to test the connection from each node in your cluster to all the other nodes.
root@nodedb2:~ # ping -c3 nodedb1
root@nodedb1:~ # ping -c3 nodedb2
root@nodedb2:~ # ping -c3 nodedb1-priv
root@nodedb1:~ # ping -c3 nodedb2-priv
root@nodedb2:~ # ping -c3 nodedb1-vip
root@nodedb1:~ # ping -c3 nodedb2-vip
Note that before installation the VIP addresses should NOT respond, since Oracle Clusterware brings them up later; a reply from a VIP at this stage means the address is already in use.
§ Synchronizing the Time on ALL Nodes
root@nodedb1:~ # ls -lr /etc/ntp.conf
-rw-r--r-- 1 root root 1833 Dec 9 2009 /etc/ntp.conf
root@nodedb1:~ # service ntpd stop
Shutting down ntpd: [ OK ]
root@nodedb1:~ # mv /etc/ntp.conf /etc/ntp.conf.bkp
§ Configuring Kernel Parameters
1. As the root user, add the following kernel parameter settings to /etc/sysctl.conf. If any of the parameters are already in the /etc/sysctl.conf file, the higher of the two values should be used.
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
NOTE: The latest information on kernel parameter settings for Linux can be found in My Oracle Support Note 169706.1.
2. Run the following as the root user to allow the new kernel parameters to be put in place:
root@nodedb1:~ # /sbin/sysctl -p
3. Repeat steps 1 and 2 on all cluster nodes.
NOTE: The OUI checks the current settings for various kernel parameters to ensure they meet the minimum requirements for deploying Oracle RAC.
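To spot-check the live values after reloading them (a convenience, not part of the original steps):
root@nodedb1:~ # /sbin/sysctl kernel.shmmni kernel.sem fs.file-max net.ipv4.ip_local_port_range
root@nodedb1:~ # /sbin/sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max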
4. Set shell limits for the oracle and grid users:
To improve the performance of the software on Linux systems, you must increase the shell limits for these users. Add the following lines to the /etc/security/limits.conf file:
root@nodedb1:~ # vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
5. Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:
root@nodedb1:~ # vi /etc/pam.d/login
session required pam_limits.so
6. Make the following changes to the default shell startup file; add the following lines to the /etc/profile file:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:
if ( $USER == "oracle" || $USER == "grid" ) then
  limit maxproc 16384
  limit descriptors 65536
endif
7. Repeat this procedure on all other nodes in the cluster.
§ Creating the directories
1. Create the Oracle Inventory directory:
To create the Oracle Inventory directory, enter the following commands as the root user:
root@nodedb1:~ # mkdir -p /p01/app/oraInventory
root@nodedb1:~ # chown -R grid:oinstall /p01/app/oraInventory
root@nodedb1:~ # chmod -R 775 /p01/app/oraInventory
2. Create the Oracle Grid Infrastructure home directory:
root@nodedb1:~ # mkdir -p /p02/app/11.2.0/grid
root@nodedb1:~ # chown -R grid:oinstall /p02/app/11.2.0/grid
root@nodedb1:~ # chmod -R 775 /p02/app/11.2.0/grid
3. Create the Oracle Base directory:
To create the Oracle Base directory, enter the following commands as the root user:
root@nodedb1:~ # mkdir -p /p02/app/oracle
root@nodedb1:~ # mkdir /p02/app/oracle/cfgtoollogs   # needed to ensure that dbca is able to run after the rdbms installation
root@nodedb1:~ # chown -R oracle:oinstall /p02/app/oracle
root@nodedb1:~ # chmod -R 775 /p02/app/oracle
4. Create the Oracle RDBMS home directory:
To create the Oracle RDBMS home directory, enter the following commands as the root user:
root@nodedb1:~ # mkdir -p /p02/app/oracle/product/11.2.0/db01
root@nodedb1:~ # chown -R oracle:oinstall /p02/app/oracle/product/11.2.0/db01
root@nodedb1:~ # chmod -R 775 /p02/app/oracle/product/11.2.0/db01
5. Set the Grid home and SID in .bash_profile
Log in as the "grid" user and add the following lines at the end of the "/home/grid/.bash_profile" file.
root@nodedb1:~ # su - grid
[grid@nodedb1 ~]$ cat .bash_profile
# User specific environment and startup programs
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/p02/app/grid; export ORACLE_BASE
ORACLE_HOME=/p02/app/11.2.0/grid; export ORACLE_HOME
ORACLE_PATH=/p02/app/oracle/common/oracle/sql; export ORACLE_PATH
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/p02/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
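After saving the file, you can confirm the environment loads as expected (a quick check, not part of the original steps):
[grid@nodedb1 ~]$ source ~/.bash_profile
[grid@nodedb1 ~]$ echo $ORACLE_SID $ORACLE_BASE $ORACLE_HOME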
6. Set the Oracle home and SID in .bash_profile
Log in as the "oracle" user and add the following lines at the end of the "/home/oracle/.bash_profile" file.
# User specific environment and startup programs
ORACLE_SID=prod1; export ORACLE_SID
ORACLE_UNQNAME=prod; export ORACLE_UNQNAME
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/p02/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db01; export ORACLE_HOME
ORACLE_PATH=/p02/app/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/p02/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
Ø
Check OS Software Requirements
The OUI will check for missing packages during the install and you will have the opportunity to install them at that point during the pre-checks. Nevertheless, you might want to validate that all required packages have been installed prior to launching the OUI.
NOTE: These requirements are for 64-bit versions of OEL 6.x and RHEL 6.x. Requirements for other supported platforms can be found in My Oracle Support Note 169706.1.
binutils-2.15.92.0.2
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.97
elfutils-libelf-devel-0.97
expat-1.95.7
gcc-3.4.6
gcc-c++-3.4.6
glibc-2.3.4-2.41
glibc-2.3.4-2.41 (32 bit)
glibc-common-2.3.4
glibc-devel-2.3.4
glibc-headers-2.3.4
libaio-0.3.105
libaio-0.3.105 (32 bit)
libaio-devel-0.3.105
libaio-devel-0.3.105 (32 bit)
libgcc-3.4.6
libgcc-3.4.6 (32 bit)
libstdc++-3.4.6
libstdc++-3.4.6 (32 bit)
libstdc++-devel-3.4.6
make-3.80
pdksh-5.2.14
sysstat-5.0.5
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)
The following command can be run on the system to list the currently installed packages:
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-libstdc++-33 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libstdc++-devel \
make \
sysstat \
unixODBC \
unixODBC-devel
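For a missing package, rpm prints "package <name> is not installed", so you can filter the same query down to just the gaps (a small convenience, not in the original paper):
rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel | grep "is not installed"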
Ø
Prepare the shared storage for Oracle RAC using UDEV rules
This section describes how to prepare the shared storage for Oracle RAC. Each node in a cluster requires external shared disks for storing the Oracle Clusterware files (Oracle Cluster Registry and voting disk) and the Oracle Database files. To ensure high availability of Oracle Clusterware files on Oracle ASM:
- All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.
- A disk group should not contain more than one partition on a single physical disk device. Using logical volumes as a device in an Automatic Storage Management disk group is not supported with Oracle RAC.
- The user account with which you perform the installation (typically, 'oracle') must have write permissions to create the files in the path that you specify.
§ Shared Storage
For this example installation we will be using ASM for Clusterware and Database storage on top of SAN technology. The following table shows the storage layout for this implementation:
Block Device | ASMlib Name | Size | Comments
/dev/sdc1 | CRS1 | 53.7 GB | ASM disk group for OCR and voting disks
/dev/sdc2 | CRS2 | 53.7 GB | ASM disk group for OCR and voting disks
/dev/sdc3 | DATA1 | 210 GB | ASM data disk group
/dev/sdc4 | DATA2 | 210 GB | ASM data disk group
§ Partition the Shared Disks
This section describes how to partition the shared storage for Oracle RAC.
1. Once the LUNs have been presented from the SAN to ALL servers in the cluster, partition the LUNs from one node only; run fdisk to create a single whole-disk partition with exactly 1 MB offset on each LUN to be used as an ASM disk.
root@nodedb1:~ # fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): u
Changing display/entry units to sectors
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First sector (63-2097151, default 63):
Using default value 63
Last sector or +size or +sizeM or +sizeK (63-2097151, default 2097151): 53.7 G
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
root@nodedb1:~ #
Load the updated block device partition tables by running the following on ALL servers participating in the cluster:
root@nodedb1:~ # /sbin/partprobe
In each case, the sequence of answers is "n", "p", "1", "Return", "Return" and "w".
Once all the disks are partitioned, the results can be seen by listing the device nodes:
root@nodedb1:~ # cd /dev
root@nodedb1:/dev # ls sd*
sda sda1 sda2 sda3 sdb sdb1 sdc sdc1 sdc2 sdd
§ Configure UDEV rules, as per the details below.
Add the following to the "/etc/scsi_id.config" file to configure SCSI devices as trusted. Create the file if it doesn't already exist.
options=-g
The SCSI IDs of my disks are displayed below.
root@nodedb1:~ # /sbin/scsi_id -g -u -d /dev/sdb
1ATA_QEMU_HARDDISK_QM00002
root@nodedb1:~ # /sbin/scsi_id -g -u -d /dev/sdc
1IET_00010001
root@nodedb1:~ # /sbin/scsi_id -g -u -d /dev/sdd
1IET_00010002
Using these values, edit the "/etc/udev/rules.d/99-oracle-asmdevices.rules" file, adding the following 4 entries. All parameters for a single entry must be on the same line.
KERNEL=="sdc1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/sdc", RESULT=="1IET_00010001", NAME="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdc2", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/sdc", RESULT=="1IET_00010001", NAME="asm-disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdc3", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/sdc", RESULT=="1IET_00010001", NAME="asm-disk3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdc4", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/sdc", RESULT=="1IET_00010001", NAME="asm-disk4", OWNER="grid", GROUP="asmadmin", MODE="0660"
Load the updated block device partition tables:
root@nodedb1:~ # /sbin/partprobe /dev/sdc1
root@nodedb1:~ # /sbin/partprobe /dev/sdc2
root@nodedb1:~ # /sbin/partprobe /dev/sdc3
root@nodedb1:~ # /sbin/partprobe /dev/sdc4
Test that the rules are working as expected:
root@nodedb1:~ # /sbin/udevadm test /block/sdc/sdc1
root@nodedb1:~ # /sbin/udevadm test /block/sdc/sdc2
root@nodedb1:~ # /sbin/udevadm test /block/sdc/sdc3
root@nodedb1:~ # /sbin/udevadm test /block/sdc/sdc4
Reload the UDEV rules and start UDEV:
root@nodedb1:~ # /sbin/udevadm control --reload-rules
root@nodedb1:~ # /sbin/start_udev
The disks should now be visible and have the correct ownership, which you can confirm with the following command. If they are not visible, your UDEV configuration is incorrect and must be fixed before you proceed.
root@nodedb1:~ # ls -al /dev/asm*
brw-rw---- 1 grid asmadmin 8, 33 Jun 25 08:49 /dev/asm-disk1
brw-rw---- 1 grid asmadmin 8, 34 Jun 25 05:57 /dev/asm-disk2
brw-rw---- 1 grid asmadmin 8, 35 Jun 25 08:49 /dev/asm-disk3
brw-rw---- 1 grid asmadmin 8, 36 Jun 25 08:49 /dev/asm-disk4
The shared disks are now configured for the grid infrastructure.
NOTE: All of the above steps should be performed on all nodes in the RAC environment.
Ø
Oracle Grid Infrastructure Install in silent mode (without GNS and IPMI)
§ The Cluster Verification Utility
Run the Cluster Verification Utility (CVU) to check the nodes in preparation for installation. The CVU checks the network connectivity, the hardware and the operating system. Try to fix all the errors reported by the CVU.
- To run the Cluster Verification Utility, log in to any node as the grid user and run the script below.
root@nodedb1:~ # su - grid
[grid@nodedb1 grid]$ cd /p01/grid
[grid@nodedb1 grid]$ ls
doc install response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html
[grid@nodedb1 grid]$ ./runcluvfy.sh stage -pre crsinst -n nodedb1,nodedb2 -verbose > /tmp/runcluvfy.log
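Since the output was redirected to a file, scan it for anything that did not pass before moving on (a simple filter, not in the original paper):
[grid@nodedb1 grid]$ grep -i "failed" /tmp/runcluvfy.log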
§ Clusterware Silent Installation
To install the Oracle 11gR2 clusterware in silent mode, first prepare a response file for your environment. The response file I used at installation time is reproduced below; use it as a starting point and change it to match your environment.
> Response file for cluster installation:
root@nodedb1:/p02 # cat crs_install.rsp
###############################################################################
## Copyright(c) Oracle Corporation 1998,2008. All rights reserved. ##
## ##
## Specify values for the variables listed below to customize ##
## your installation. ##
## ##
## Each variable is associated with a comment. The comment ##
## can help to populate the variables with the appropriate ##
## values. ##
## ##
## IMPORTANT NOTE: This file contains plain text passwords and ##
## should be secured to have read permission only by oracle user ##
## or db administrator who owns this installation. ##
## ##
###############################################################################
###############################################################################
## ##
## Instructions to fill this response file ##
## To install and configure 'Grid Infrastructure for Cluster' ##
## - Fill out sections A,B,C,D,E,F and G ##
## - Fill out section G if OCR and voting disk should be placed on ASM ##
## ##
## To install and configure 'Grid Infrastructure for Standalone server' ##
## - Fill out sections A,B and G ##
## ##
## To install software for 'Grid Infrastructure' ##
## - Fill out sections A,B and C ##
## ##
## To upgrade clusterware and/or Automatic storage management of earlier ##
## releases ##
## - Fill out sections A,B,C,D and H ##
## ##
###############################################################################
#------------------------------------------------------------------------------
# Do not change the following system generated value.
#------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v11_2_0
###############################################################################
# #
# SECTION A - BASIC #
# #
###############################################################################
#-------------------------------------------------------------------------------
# Specify the hostname of the system as set during the install. It can be used
# to force the installation to use an alternative hostname rather than using the
# first hostname found on the system. (e.g., for systems with multiple hostnames
# and network interfaces)
#-------------------------------------------------------------------------------
ORACLE_HOSTNAME=
#-------------------------------------------------------------------------------
# Specify the location which holds the inventory files.
#-------------------------------------------------------------------------------
INVENTORY_LOCATION=/p01/app/oraInventory
#-------------------------------------------------------------------------------
# Specify the languages in which the components will be installed.
#
# en : English ja : Japanese
# fr : French ko : Korean
# ar : Arabic es : Latin American Spanish
# bn : Bengali lv : Latvian
# pt_BR: Brazilian Portuguese lt : Lithuanian
# bg : Bulgarian ms : Malay
# fr_CA: Canadian French es_MX: Mexican Spanish
# ca : Catalan no : Norwegian
# hr : Croatian pl : Polish
# cs : Czech pt : Portuguese
# da : Danish ro : Romanian
# nl : Dutch ru : Russian
# ar_EG: Egyptian zh_CN: Simplified Chinese
# en_GB: English (Great Britain) sk : Slovak
# et : Estonian sl : Slovenian
# fi : Finnish es_ES: Spanish
# de : German sv : Swedish
# el : Greek th : Thai
# iw : Hebrew zh_TW: Traditional Chinese
# hu : Hungarian tr : Turkish
# is : Icelandic uk : Ukrainian
# in : Indonesian vi : Vietnamese
# it : Italian
#
# Example : SELECTED_LANGUAGES=en,fr,ja
#-------------------------------------------------------------------------------
SELECTED_LANGUAGES=en
#-------------------------------------------------------------------------------
# Specify the installation option.
# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY
# CRS_CONFIG - To configure Grid Infrastructure for cluster
# HA_CONFIG - To configure Grid Infrastructure for stand alone server
# UPGRADE - To upgrade clusterware software of earlier release
# CRS_SWONLY - To install clusterware files only (can be configured for cluster
# or stand alone server later)
#-------------------------------------------------------------------------------
oracle.install.option=CRS_CONFIG
#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Base.
#-------------------------------------------------------------------------------
ORACLE_BASE=/p02/app/grid
#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Home.
#-------------------------------------------------------------------------------
ORACLE_HOME=/p02/app/11.2.0/grid
################################################################################
# #
# SECTION B - GROUPS #
# #
# The following three groups need to be assigned for all CRS installations. #
# OSDBA and OSOPER can be the same or different. OSASM must be different #
# than the other two. #
# #
################################################################################
#-------------------------------------------------------------------------------
# The DBA_GROUP is the OS group which is to be granted OSDBA privileges.
#-------------------------------------------------------------------------------
oracle.install.asm.OSDBA=asmdba
#-------------------------------------------------------------------------------
# The OPER_GROUP is the OS group which is to be granted OSOPER privileges.
#-------------------------------------------------------------------------------
oracle.install.asm.OSOPER=asmoper
#-------------------------------------------------------------------------------
# The OSASM_GROUP is the OS group which is to be granted OSASM privileges. This
# must be different than the previous two.
#-------------------------------------------------------------------------------
oracle.install.asm.OSASM=asmadmin
################################################################################
# #
# SECTION C - SCAN #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify a name for SCAN
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.scanName=nodedb-scan
#-------------------------------------------------------------------------------
# Specify an unused port number for the SCAN service
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.scanPort=1525
################################################################################
# #
# SECTION D - CLUSTER & GNS #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify a name for the Cluster you are creating.
#
# The maximum length allowed for the cluster name is 15 characters. The name can be
# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-)
# and underscore(_).
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterName=nodedb
#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure Grid Naming Service(GNS), else
# specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.configureGNS=false
#-------------------------------------------------------------------------------
# Applicable only if you choose to configure GNS
# Specify the GNS subdomain and an unused virtual hostname for GNS service
# Additionally you may also specify if VIPs have to be autoconfigured
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.autoConfigureClusterNodeVIP=
#-------------------------------------------------------------------------------
# Specify a list of public node names, and virtual hostnames that have to be
# part of the cluster.
#
# The list should be a comma-separated list of nodes. Each entry in the list
# should be a colon-separated string that contains 2 fields.
#
# The fields should be ordered as follows:
# 1. The first field is for public node name.
# 2. The second field is for virtual host name
# (specify as AUTO if you have chosen 'auto configure for VIP'
# i.e. autoConfigureClusterNodeVIP=true)
#
# Example: oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterNodes=nodedb1:nodedb1-vip,nodedb2:nodedb2-vip
#-------------------------------------------------------------------------------
# The value should be a comma separated strings where each string is as shown below
# InterfaceName:SubnetMask:InterfaceType
# where InterfaceType can be either "1" or "2"(2 indicates private, and 1 indicates public)
#
# For example: eth0:140.87.24.0:1,eth1:140.87.40.0:2,eth2:140.87.52.0:1
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.privateInterconnects=eth0:10.10.8.0:1,eth1:192.168.10.0:2
################################################################################
# #
# SECTION E - STORAGE #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting
# Disks files
# - ASM_STORAGE
# - FILE_SYSTEM_STORAGE
#-------------------------------------------------------------------------------
oracle.install.crs.config.storageOption=ASM_STORAGE
#-------------------------------------------------------------------------------
# THIS PROPERTY NEEDS TO BE FILLED ONLY IN CASE OF WINDOWS INSTALL.
# Specify a comma separated list of strings where each string is as shown below:
# Disk Number:Partition Number:Drive Letter:Format Option
# The Disk Number and Partition Number should refer to the location which has to
# be formatted. The Drive Letter should refer to the drive letter that has to be
# assigned. "Format Option" can be either of the following -
# 1. SOFTWARE - Format to place software binaries.
# 2. DATA - Format to place the OCR/VDSK files.
#
# For example: 1:2:P:DATA,1:3:Q:SOFTWARE,1:4:R:DATA,1:5:S:DATA
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.sharedFileSystemStorage.diskDriveMapping=
#-------------------------------------------------------------------------------
# These properties are applicable only if FILE_SYSTEM_STORAGE is chosen for
# storing OCR and voting disk
# Specify the location(s) and redundancy for OCR and voting disks
# In case of windows, mention the drive location that is specified to be
# formatted for DATA in the above property.
# Multiple locations can be specified, separated by commas
# Redundancy can be one of these:
# EXTERNAL - one(1) location should be specified for OCR and voting disk
# NORMAL - three(3) locations should be specified for OCR and voting disk
#-------------------------------------------------------------------------------
oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=EXTERNAL
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=EXTERNAL
################################################################################
# #
# SECTION F - IPMI #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure Intelligent Power Management interface
# (IPMI), else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.useIPMI=false
#-------------------------------------------------------------------------------
# Applicable only if you choose to configure IPMI
# i.e. oracle.install.crs.config.useIPMI=true
# Specify the username and password for using IPMI service
#-------------------------------------------------------------------------------
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
################################################################################
# #
# SECTION G - ASM #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify a password for SYSASM user of the ASM instance
#-------------------------------------------------------------------------------
oracle.install.asm.SYSASMPassword=xxxxxxx
#-------------------------------------------------------------------------------
# The ASM DiskGroup
#
# Example: oracle.install.asm.diskGroup.name=data
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.name=CRS
#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following
# - NORMAL
# - HIGH
# - EXTERNAL
# Example: oracle.install.asm.diskGroup.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.redundancy=EXTERNAL
#-------------------------------------------------------------------------------
# List of disks to create a ASM DiskGroup
#
# Example: oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disks=/dev/asm-disk1,/dev/asm-disk2
#-------------------------------------------------------------------------------
# The disk discovery string to be used to discover the disks used to create an ASM DiskGroup
#
# Example: oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm-disk*
#-------------------------------------------------------------------------------
# oracle.install.asm.monitorPassword=password
#-------------------------------------------------------------------------------
oracle.install.asm.monitorPassword=xxxxxxxx
################################################################################
# #
# SECTION H - UPGRADE #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify nodes for Upgrade.
# Example: oracle.install.crs.upgrade.clusterNodes=node1,node2
#-------------------------------------------------------------------------------
oracle.install.crs.upgrade.clusterNodes=
#-------------------------------------------------------------------------------
# For RAC-ASM only. oracle.install.asm.upgradeASM=true/false
#-------------------------------------------------------------------------------
oracle.install.asm.upgradeASM=false
After creating the response file, log in as the grid user, go to the location of runInstaller for the grid installation, and execute runInstaller with the response file.
root@nodedb1:~ # su - grid
[grid@nodedb1 ~]$ cd /p01/grid/
[grid@nodedb1 grid]$ ls
doc install response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html
[grid@nodedb1 grid]$ ./runInstaller -silent -force -responseFile /p02/crs_install.rsp
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 1359 MB Passed
Checking swap space: must be greater than 150 MB. Actual 18431 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/installActions2013-05-28_04-01-43PM.log. Please wait ...
You can find the log of this install session at:
/p01/app/oraInventory/logs/installActions2013-05-28_04-01-43PM.log
As a root user, execute the following script(s):
1. /p01/app/oraInventory/orainstRoot.sh
2. /p02/app/11.2.0/grid/root.sh
As install user, execute the following script to complete the configuration.
1. /p02/app/11.2.0/grid/cfgtoollogs/configToolAllCommands
Note:
1. This script should be run in the same environment from where the installer has been run.
2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).
Successfully Setup Software.
[grid@nodedb1 ~]$
Run the root scripts as the root user, orainstRoot.sh first and then root.sh, on each node in turn:
root@nodedb1:~ # /p01/app/oraInventory/orainstRoot.sh
root@nodedb1:~ # /p02/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /p02/app/11.2.0/grid
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2013-05-28 19:11:27: Parsing the host name
2013-05-28 19:11:27: Checking for super user privileges
2013-05-28 19:11:27: User has super user privileges
Using configuration parameter file: /p02/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on
CRS-2672: Attempting to start 'ora.gipcd' on 'nodedb1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'nodedb1'
CRS-2676: Start of 'ora.gipcd' on 'nodedb1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'nodedb1'
CRS-2676: Start of 'ora.gpnpd' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'nodedb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'nodedb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'nodedb1'
CRS-2676: Start of 'ora.diskmon' on 'nodedb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'nodedb1'
CRS-2676: Start of 'ora.ctssd' on 'nodedb1' succeeded
ASM created and started successfully.
DiskGroup CRS created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'nodedb1'
CRS-2676: Start of 'ora.crsd' on 'nodedb1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 8c2e933c78a44f43bf481338ce37f26b.
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 8c2e933c78a44f43bf481338ce37f26b (/dev/asm-disk1) [CRS]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'nodedb1'
CRS-2677: Stop of 'ora.crsd' on 'nodedb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'nodedb1'
CRS-2677: Stop of 'ora.asm' on 'nodedb1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'nodedb1'
CRS-2677: Stop of 'ora.ctssd' on 'nodedb1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'nodedb1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'nodedb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'nodedb1'
CRS-2677: Stop of 'ora.cssd' on 'nodedb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'nodedb1'
CRS-2677: Stop of 'ora.gpnpd' on 'nodedb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'nodedb1'
CRS-2677: Stop of 'ora.gipcd' on 'nodedb1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'nodedb1'
CRS-2677: Stop of 'ora.mdnsd' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'nodedb1'
CRS-2676: Start of 'ora.mdnsd' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'nodedb1'
CRS-2676: Start of 'ora.gipcd' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'nodedb1'
CRS-2676: Start of 'ora.gpnpd' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'nodedb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'nodedb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'nodedb1'
CRS-2676: Start of 'ora.diskmon' on 'nodedb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'nodedb1'
CRS-2676: Start of 'ora.ctssd' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'nodedb1'
CRS-2676: Start of 'ora.asm' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'nodedb1'
CRS-2676: Start of 'ora.crsd' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'nodedb1'
CRS-2676: Start of 'ora.evmd' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'nodedb1'
CRS-2676: Start of 'ora.asm' on 'nodedb1' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'nodedb1'
CRS-2676: Start of 'ora.CRS.dg' on 'nodedb1' succeeded
nodedb1 2013/05/28 19:27:12 /p02/app/11.2.0/grid/cdata/nodedb1/backup_20130528_192712.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
All looks good so far: the CRS voting disk diskgroup was created successfully, and the ASM instance started successfully.
Once root.sh finishes on the last node, check:
1. $GRID_HOME/bin/crsctl stat res -t
2. $GRID_HOME/bin/crsctl stat res -t -init
3. $GRID_HOME/bin/crsctl check cluster -all
All should return positive results; all resources should be ONLINE except the gsd resources.
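To pick out anything that is not ONLINE at a glance (a convenience filter, not part of the original checks):
[grid@nodedb1 ~]$ $GRID_HOME/bin/crsctl stat res -t | grep -i offline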
There is NO need to run /p02/app/11.2.0/grid/cfgtoollogs/configToolAllCommands on ALL nodes; you only need to execute it on node1, where the OUI was started.
After running root.sh on all nodes, go back to node1 and, as the grid user, follow these steps to run the configuration assistants with the configToolAllCommands script.
1. Create a response file using the syntax filename.properties, for example: $ touch cfgrsp.properties
2. Open the file with a text editor, and cut and paste the password template, modifying it as needed. See the Oracle documentation (http://download.oracle.com/docs/cd/E11882_01/install.112/e24660/scripts.htm), section "Example B-1 Password response file for Oracle Real Application Clusters".
[grid@nodedb1 cfgtoollogs]$ cat /p02/app/11.2.0/grid/cfgtoollogs/cfgrsp.properties
oracle.assistants.server|S_SYSPASSWORD=xxxxxxxx
oracle.assistants.server|S_SYSTEMPASSWORD=xxxxxxxx
oracle.assistants.server|S_SYSMANPASSWORD=xxxxxxxx
oracle.assistants.server|S_DBSNMPPASSWORD=xxxxxxxx
oracle.assistants.server|S_HOSTUSERPASSWORD=xxxxxxxx
oracle.assistants.server|S_ASMSNMPPASSWORD=xxxxxxxx
Change permissions to secure the file. For example:
[grid@nodedb1 cfgtoollogs]$ chmod 600 cfgrsp.properties
Run the configuration script as the grid user using the following syntax:
[grid@nodedb1 ~]$ /p02/app/11.2.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/p02/app/11.2.0/grid/cfgtoollogs/cfgrsp.properties
Setting the invPtrLoc to /p02/app/11.2.0/grid/oraInst.loc
perform - mode is starting for action: configure
perform - mode finished for action: configure
You can see the log file: /p02/app/11.2.0/grid/cfgtoollogs/oui/configActions2013-06-03_11-04-19-AM.log
[grid@nodedb1 ~]$
Check whether configToolAllCommands ran successfully by inspecting the generated log file:
The action configuration is performing
------------------------------------------------------
The plug-in Oracle Net Configuration Assistant is running
Parsing command line arguments:
Parameter "orahome" = /p02/app/11.2.0/grid
Parameter "orahnam" = Ora11g_gridinfrahome1
Parameter "instype" = typical
Parameter "inscomp" = client,oraclenet,javavm,server
Parameter "insprtcl" = tcp
Parameter "cfg" = local
Parameter "authadp" = NO_VALUE
Parameter "responsefile" = /p02/app/11.2.0/grid/network/install/netca_typ.rsp
Parameter "silent" = true
Parameter "silent" = true
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Profile configuration complete.
nodedb1...
nodedb2...
Oracle Net Listener Startup:
Listener started successfully.
Check the trace file for details: /p02/app/grid/cfgtoollogs/netca/trace_Ora11g_gridinfrahome1-13060311AM0422.log
Oracle Net Services configuration successful. The exit code is 0
The plug-in Oracle Net Configuration Assistant has successfully been performed
------------------------------------------------------
------------------------------------------------------
The plug-in Automatic Storage Management Configuration Assistant is running
The plug-in Automatic Storage Management Configuration Assistant has failed its perform method
------------------------------------------------------
The action configuration has failed its perform method
###################################################
Note: Create the diskgroup DATA for the Oracle database using the asmca utility.
Ø
Creating Diskgroups using ASM Configuration Assistant (ASMCA)
§ Set up the disks to be used by ASM
I used ASMLIB to create 4 disks for ASM, and I can see them as follows:
$ oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
§ Configure ASM
Run the following command:
$ asmca -silent -configureASM -sysAsmPassword xxxxxx -asmsnmpPassword xxxxxx -diskString 'ORCL:*' -diskGroupName DATA -disk 'ORCL:*' -redundancy EXTERNAL
As I used ASMLIB disks, I specified 'ORCL:*' for the ASM discovery string. Make sure you specify the correct value for your environment.
On a successful run, the above command should have returned:
ASM created and started successfully.
DiskGroup DATA created successfully.
And it should have performed the following:
- Started the cluster synchronisation services daemon, ocssd.bin
- Started three agents: cssdagent, oraagent.bin and orarootagent.bin
- Started the disk monitor, diskmon.bin
- Created and started the ASM instance, +ASM
- Created the external redundancy disk group DATA
- Created the ASM spfile in disk group DATA
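As the grid user, you can confirm the new disk group is mounted with asmcmd (a quick check, not in the original paper):
[grid@nodedb1 ~]$ asmcmd lsdg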
Ø Oracle RDBMS Software Install in silent mode
§ Database Software Silent Installation
The response file I used to install the software silently is reproduced below.
> Response file for Oracle Home installation:
####################################################################
## Copyright(c) Oracle Corporation 1998,2008. All rights reserved.##
## ##
## Specify values for the variables listed below to customize ##
## your installation. ##
## ##
## Each variable is associated with a comment. The comment ##
## can help to populate the variables with the appropriate ##
## values. ##
## ##
## IMPORTANT NOTE: This file contains plain text passwords and ##
## should be secured to have read permission only by oracle user ##
## or db administrator who owns this installation. ##
## ##
####################################################################
#------------------------------------------------------------------------------
# Do not change the following system generated value.
#------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v11_2_0
#------------------------------------------------------------------------------
# Specify the installation option.
# It can be one of the following:
# 1. INSTALL_DB_SWONLY
# 2. INSTALL_DB_AND_CONFIG
# 3. UPGRADE_DB
#-------------------------------------------------------------------------------
oracle.install.option=INSTALL_DB_SWONLY
#-------------------------------------------------------------------------------
# Specify the hostname of the system as set during the install. It can be used
# to force the installation to use an alternative hostname rather than using the
# first hostname found on the system. (e.g., for systems with multiple hostnames
# and network interfaces)
#-------------------------------------------------------------------------------
ORACLE_HOSTNAME=
#-------------------------------------------------------------------------------
# Specify the Unix group to be set for the inventory directory.
#-------------------------------------------------------------------------------
UNIX_GROUP_NAME=oinstall
#-------------------------------------------------------------------------------
# Specify the location which holds the inventory files.
#-------------------------------------------------------------------------------
INVENTORY_LOCATION=/p01/app/oraInventory
#-------------------------------------------------------------------------------
# Specify the languages in which the components will be installed.
#
# en : English ja : Japanese
# fr : French ko : Korean
# ar : Arabic es : Latin American Spanish
# bn : Bengali lv : Latvian
# pt_BR: Brazilian Portuguese lt : Lithuanian
# bg : Bulgarian ms : Malay
# fr_CA: Canadian French es_MX: Mexican Spanish
# ca : Catalan no : Norwegian
# hr : Croatian pl : Polish
# cs : Czech pt : Portuguese
# da : Danish ro : Romanian
# nl : Dutch ru : Russian
# ar_EG: Egyptian zh_CN: Simplified Chinese
# en_GB: English (Great Britain) sk : Slovak
# et : Estonian sl : Slovenian
# fi : Finnish es_ES: Spanish
# de : German sv : Swedish
# el : Greek th : Thai
# iw : Hebrew zh_TW: Traditional Chinese
# hu : Hungarian tr : Turkish
# is : Icelandic uk : Ukrainian
# in : Indonesian vi : Vietnamese
# it : Italian
#
# Example : SELECTED_LANGUAGES=en,fr,ja
#------------------------------------------------------------------------------
SELECTED_LANGUAGES=en
#------------------------------------------------------------------------------
# Specify the complete path of the Oracle Home.
#------------------------------------------------------------------------------
ORACLE_HOME=/p02/app/oracle/product/11.2.0/db01
#------------------------------------------------------------------------------
# Specify the complete path of the Oracle Base.
#------------------------------------------------------------------------------
ORACLE_BASE=/p02/app/oracle
#------------------------------------------------------------------------------
# Specify the installation edition of the component.
#
# The value should contain only one of these choices.
# EE : Enterprise Edition
# SE : Standard Edition
# SEONE : Standard Edition One
# PE : Personal Edition (WINDOWS ONLY)
#------------------------------------------------------------------------------
oracle.install.db.InstallEdition=EE
#------------------------------------------------------------------------------
# This variable is used to enable or disable custom install.
#
# true : Components mentioned as part of 'customComponents' property
# are considered for install.
# false : Value for 'customComponents' is not considered.
#------------------------------------------------------------------------------
oracle.install.db.isCustomInstall=false
#------------------------------------------------------------------------------
# This variable is considered only if 'IsCustomInstall' is set to true.
#
# Description: List of Enterprise Edition Options you would like to install.
#
# The following choices are available. You may specify any
# combination of these choices. The components you choose should
# be specified in the form "internal-component-name:version"
# Below is a list of components you may specify to install.
#
# oracle.rdbms.partitioning:11.2.0.1.0 - Oracle Partitioning
# oracle.rdbms.dm:11.2.0.1.0 - Oracle Data Mining
# oracle.rdbms.dv:11.2.0.1.0 - Oracle Database Vault
# oracle.rdbms.lbac:11.2.0.1.0 - Oracle Label Security
# oracle.rdbms.rat:11.2.0.1.0 - Oracle Real Application Testing
# oracle.oraolap:11.2.0.1.0 - Oracle OLAP
#------------------------------------------------------------------------------
oracle.install.db.customComponents=oracle.server:11.2.0.1.0,oracle.sysman.ccr:10.2.7.0.0,oracle.xdk:11.2.0.1.0,oracle.rdbms.oci:11.2.0.1.0,oracle.network:11.2.0.1.0,oracle.network.listener:11.2.0.1.0,oracle.rdbms:11.2.0.1.0,oracle.options:11.2.0.1.0,oracle.rdbms.partitioning:11.2.0.1.0,oracle.oraolap:11.2.0.1.0,oracle.rdbms.dm:11.2.0.1.0,oracle.rdbms.dv:11.2.0.1.0,oracle.rdbms.lbac:11.2.0.1.0,oracle.rdbms.rat:11.2.0.1.0
###############################################################################
# #
# PRIVILEGED OPERATING SYSTEM GROUPS #
# ------------------------------------------ #
# Provide values for the OS groups to which OSDBA and OSOPER privileges #
# needs to be granted. If the install is being performed as a member of the #
# group "dba", then that will be used unless specified otherwise below. #
# #
###############################################################################
#------------------------------------------------------------------------------
# The DBA_GROUP is the OS group which is to be granted OSDBA privileges.
#------------------------------------------------------------------------------
oracle.install.db.DBA_GROUP=dba
#------------------------------------------------------------------------------
# The OPER_GROUP is the OS group which is to be granted OSOPER privileges.
#------------------------------------------------------------------------------
oracle.install.db.OPER_GROUP=dba
#------------------------------------------------------------------------------
# Specify the cluster node names selected during the installation.
#------------------------------------------------------------------------------
oracle.install.db.CLUSTER_NODES=nodedb1,nodedb2
#------------------------------------------------------------------------------
# Specify the type of database to create.
# It can be one of the following:
# - GENERAL_PURPOSE/TRANSACTION_PROCESSING
# - DATA_WAREHOUSE
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.type=GENERAL_PURPOSE
#------------------------------------------------------------------------------
# Specify the Starter Database Global Database Name.
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.globalDBName=
#------------------------------------------------------------------------------
# Specify the Starter Database SID.
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.SID=
#------------------------------------------------------------------------------
# Specify the Starter Database character set.
#
# It can be one of the following:
# AL32UTF8, WE8ISO8859P15, WE8MSWIN1252, EE8ISO8859P2,
# EE8MSWIN1250, NE8ISO8859P10, NEE8ISO8859P4, BLT8MSWIN1257,
# BLT8ISO8859P13, CL8ISO8859P5, CL8MSWIN1251, AR8ISO8859P6,
# AR8MSWIN1256, EL8ISO8859P7, EL8MSWIN1253, IW8ISO8859P8,
# IW8MSWIN1255, JA16EUC, JA16EUCTILDE, JA16SJIS, JA16SJISTILDE,
# KO16MSWIN949, ZHS16GBK, TH8TISASCII, ZHT32EUC, ZHT16MSWIN950,
# ZHT16HKSCS, WE8ISO8859P9, TR8MSWIN1254, VN8MSWIN1258
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.characterSet=AL32UTF8
#------------------------------------------------------------------------------
# This variable should be set to true if Automatic Memory Management
# in Database is desired.
# If Automatic Memory Management is not desired, and memory allocation
# is to be done manually, then set it to false.
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.memoryOption=false
#------------------------------------------------------------------------------
# Specify the total memory allocation for the database. Value(in MB) should be
# at least 256 MB, and should not exceed the total physical memory available
# on the system.
# Example: oracle.install.db.config.starterdb.memoryLimit=512
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.memoryLimit=
#------------------------------------------------------------------------------
# This variable controls whether to load Example Schemas onto the starter
# database or not.
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.installExampleSchemas=false
#------------------------------------------------------------------------------
# This variable includes enabling audit settings, configuring password profiles
# and revoking some grants to public. These settings are provided by default.
# These settings may also be disabled.
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.enableSecuritySettings=false
###############################################################################
# #
# Passwords can be supplied for the following four schemas in the #
# starter database: #
# SYS #
# SYSTEM #
# SYSMAN (used by Enterprise Manager) #
# DBSNMP (used by Enterprise Manager) #
# #
# Same password can be used for all accounts (not recommended) #
# or different passwords for each account can be provided (recommended) #
# #
###############################################################################
#------------------------------------------------------------------------------
# This variable holds the password that is to be used for all schemas in the
# starter database.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.password.ALL=
#-------------------------------------------------------------------------------
# Specify the SYS password for the starter database.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.password.SYS=
#-------------------------------------------------------------------------------
# Specify the SYSTEM password for the starter database.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.password.SYSTEM=
#-------------------------------------------------------------------------------
# Specify the SYSMAN password for the starter database.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.password.SYSMAN=
#-------------------------------------------------------------------------------
# Specify the DBSNMP password for the starter database.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.password.DBSNMP=
#-------------------------------------------------------------------------------
# Specify the management option to be selected for the starter database.
# It can be one of the following:
# 1. GRID_CONTROL
# 2. DB_CONTROL
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.control=
#-------------------------------------------------------------------------------
# Specify the Management Service to use if Grid Control is selected to manage
# the database.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.gridcontrol.gridControlServiceURL=
#-------------------------------------------------------------------------------
# This variable indicates whether to receive email notification for critical
# alerts when using DB control.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.dbcontrol.enableEmailNotification=
#-------------------------------------------------------------------------------
# Specify the email address to which the notifications are to be sent.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.dbcontrol.emailAddress=
#-------------------------------------------------------------------------------
# Specify the SMTP server used for email notifications.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.dbcontrol.SMTPServer=
###############################################################################
# #
# SPECIFY BACKUP AND RECOVERY OPTIONS #
# ------------------------------------ #
# Out-of-box backup and recovery options for the database can be mentioned #
# using the entries below. #
# #
###############################################################################
#------------------------------------------------------------------------------
# This variable is to be set to false if automated backup is not required. Else
# this can be set to true.
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.automatedBackup.enable=false
#------------------------------------------------------------------------------
# Regardless of the type of storage that is chosen for backup and recovery, if
# automated backups are enabled, a job will be scheduled to run daily at
# 2:00 AM to backup the database. This job will run as the operating system
# user that is specified in this variable.
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.automatedBackup.osuid=
#-------------------------------------------------------------------------------
# Regardless of the type of storage that is chosen for backup and recovery, if
# automated backups are enabled, a job will be scheduled to run daily at
# 2:00 AM to backup the database. This job will run as the operating system user
# specified by the above entry. The following entry stores the password for the
# above operating system user.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.automatedBackup.ospwd=
#-------------------------------------------------------------------------------
# Specify the type of storage to use for the database.
# It can be one of the following:
# - FILE_SYSTEM_STORAGE
# - ASM_STORAGE
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.storageType=ASM_STORAGE
#-------------------------------------------------------------------------------
# Specify the database file location which is a directory for datafiles, control
# files, redo logs.
#
# Applicable only when oracle.install.db.config.starterdb.storageType=FILE_SYSTEM_STORAGE
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.fileSystemStorage.dataLocation=
#-------------------------------------------------------------------------------
# Specify the backup and recovery location.
#
# Applicable only when oracle.install.db.config.starterdb.storageType=FILE_SYSTEM_STORAGE
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation=
#-------------------------------------------------------------------------------
# Specify the existing ASM disk groups to be used for storage.
#
# Applicable only when oracle.install.db.config.starterdb.storageType=ASM_STORAGE
#-------------------------------------------------------------------------------
oracle.install.db.config.asm.diskGroup=DATA
#-------------------------------------------------------------------------------
# Specify the password for ASMSNMP user of the ASM instance.
#
# Applicable only when oracle.install.db.config.starterdb.storageType=ASM_STORAGE
#-------------------------------------------------------------------------------
oracle.install.db.config.asm.ASMSNMPPassword=Manager1
#------------------------------------------------------------------------------
# Specify the My Oracle Support Account Username.
#
# Example : MYORACLESUPPORT_USERNAME=metalink
#------------------------------------------------------------------------------
MYORACLESUPPORT_USERNAME=
#------------------------------------------------------------------------------
# Specify the My Oracle Support Account Username password.
#
# Example : MYORACLESUPPORT_PASSWORD=password
#------------------------------------------------------------------------------
MYORACLESUPPORT_PASSWORD=
#------------------------------------------------------------------------------
# Specify whether to enable the user to set the password for
# My Oracle Support credentials. The value can be either true or false.
# If left blank it will be assumed to be false.
#
# Example : SECURITY_UPDATES_VIA_MYORACLESUPPORT=true
#------------------------------------------------------------------------------
SECURITY_UPDATES_VIA_MYORACLESUPPORT=
#------------------------------------------------------------------------------
# Specify whether the user declines to configure Security Updates.
# The value can be either true or false. If left blank it will be assumed
# to be false.
#
# Example : DECLINE_SECURITY_UPDATES=false
#------------------------------------------------------------------------------
DECLINE_SECURITY_UPDATES=true
#------------------------------------------------------------------------------
# Specify the Proxy server name. Length should be greater than zero.
#
# Example : PROXY_HOST=proxy.domain.com
#------------------------------------------------------------------------------
PROXY_HOST=
#------------------------------------------------------------------------------
# Specify the proxy port number. Should be numeric and at least 2 chars.
#
# Example : PROXY_PORT=25
#------------------------------------------------------------------------------
PROXY_PORT=
#------------------------------------------------------------------------------
# Specify the proxy user name. Leave PROXY_USER and PROXY_PWD
# blank if your proxy server requires no authentication.
#
# Example : PROXY_USER=username
#------------------------------------------------------------------------------
PROXY_USER=
#------------------------------------------------------------------------------
# Specify the proxy password. Leave PROXY_USER and PROXY_PWD
# blank if your proxy server requires no authentication.
#
# Example : PROXY_PWD=password
#------------------------------------------------------------------------------
PROXY_PWD=
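One practical note before running the installer: since the response file can carry plain-text passwords (as its own header warns), it is worth restricting its permissions, for example:
[oracle@nodedb1 ~]$ chmod 600 /p02/db_install.rsp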
After creating the response file for the Oracle home installation, log in as the oracle user, change to the directory containing runInstaller, and execute the command below.
[oracle@nodedb1 database]$ ./runInstaller -silent -responseFile /p02/db_install.rsp -invPtrLoc /etc/oraInst.loc
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 1358 MB Passed
Checking swap space: must be greater than 150 MB. Actual 18431 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/installActions2013-05-31_03-16-19PM.log. Please wait ...
You can find the log of this install session at:
/p01/app/oraInventory/logs/installActions2013-05-31_03-16-19PM.log
As a root user, execute the following script(s):
1. /p02/app/oracle/product/11.2.0/db01/root.sh
Successfully Setup Software.
[oracle@nodedb1 database]$
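While the silent install was running, the session log named in the output above could be followed from another terminal, for example:
[oracle@nodedb1 ~]$ tail -f /p01/app/oraInventory/logs/installActions2013-05-31_03-16-19PM.log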
Run the root.sh scripts on both nodes:
root@nodedb1:~ # /p02/app/oracle/product/11.2.0/db01/root.sh
Check /p02/app/oracle/product/11.2.0/db01/install/root_nodedb1.localdomain.com_2013-06-03_10-08-55.log for the output of root script
root@nodedb1:~ # cat /p02/app/oracle/product/11.2.0/db01/install/root_nodedb1.localdomain.com_2013-06-03_10-08-55.log
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /p02/app/oracle/product/11.2.0/db01
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
root@nodedb1:~ #
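As an optional sanity check, the new Oracle home should now be registered in the central inventory (the paths are from this install; adjust to your environment):
[oracle@nodedb1 ~]$ grep -i db01 /p01/app/oraInventory/ContentsXML/inventory.xml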
§ RAC database installation using templates:
If you already have a RAC database installed, you can use it as a template: run the dbca tool to create an XML template from the existing database (a sketch of this step follows), then run dbca with the silent option to create your new RAC database, providing the new global database name and System Identifier (SID).
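The template-creation step is not shown in this walkthrough, but as a minimal sketch it might look like the following; the source SID olddb, the template name my_rac_template, and the password are placeholders, not values from this install:
# Hypothetical example: create an XML template from an existing database.
# Adjust -sourceDB, -templateName, and the SYSDBA password for your environment.
[oracle@nodedb1 bin]$ ./dbca -silent -createTemplateFromDB \
-sourceDB olddb \
-templateName my_rac_template \
-sysDBAUserName sys -sysDBAPassword xxxxxx
With the XML template in place, the silent database-creation command I ran was: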
[oracle@nodedb1 bin]$ ./dbca -silent -nodelist nodedb1,nodedb2 \
-createDatabase \
-templateName "/p02/app/oracle/product/11.2.0/db01/assistants/dbca/templates/General_Purpose.dbc" \
-gdbName prod.localdomain.com \
-sid prod -sysPassword xxxxxx -systemPassword xxxxxxx \
-emConfiguration NONE -storageType ASM -asmSysPassword xxxxxx \
-diskGroupName DATA -nationalCharacterSet "AL16UTF16" \
-characterSet "AL32UTF8" -totalMemory 16384
Copying database files
DBCA_PROGRESS : 1%
DBCA_PROGRESS : 3%
DBCA_PROGRESS : 9%
DBCA_PROGRESS : 15%
DBCA_PROGRESS : 21%
DBCA_PROGRESS : 27%
DBCA_PROGRESS : 30%
Creating and starting Oracle instance
DBCA_PROGRESS : 32%
DBCA_PROGRESS : 36%
DBCA_PROGRESS : 40%
DBCA_PROGRESS : 44%
DBCA_PROGRESS : 45%
DBCA_PROGRESS : 48%
DBCA_PROGRESS : 50%
Creating cluster database views
DBCA_PROGRESS : 52%
DBCA_PROGRESS : 70%
Completing Database Creation
DBCA_PROGRESS : 73%
DBCA_PROGRESS : 76%
DBCA_PROGRESS : 85%
DBCA_PROGRESS : 94%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
/p02/app/oracle/cfgtoollogs/dbca/prod.
Database Information:
Global Database Name:prod.localdomain.com
System Identifier(SID) Prefix:prod
[oracle@nodedb1 bin]$
The Oracle 11gR2 two-node RAC installation on RHEL 6.4 was successful.
§ Verify the RAC database installation using the commands below:
1. Check the status of CRS on a specific node:
=================================================
login as: grid
grid@10.10.8.10's password:
Last login: Thu Jun 6 07:43:06 2013 from 10.46.128.12
[grid@nodedb1 ~]$ hostname
nodedb1.localdomain.com
[grid@nodedb1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
2. Check the status of the node applications on all nodes:
==================================================
[grid@nodedb1 ~]$ srvctl status nodeapps
VIP nodedb1-vip is enabled
VIP nodedb1-vip is running on node: nodedb1
VIP nodedb2-vip is enabled
VIP nodedb2-vip is running on node: nodedb2
Network is enabled
Network is running on node: nodedb1
Network is running on node: nodedb2
GSD is disabled
GSD is not running on node: nodedb1
GSD is not running on node: nodedb2
ONS is enabled
ONS daemon is running on node: nodedb1
ONS daemon is running on node: nodedb2
eONS is enabled
eONS daemon is running on node: nodedb1
eONS daemon is running on node: nodedb2
Note: GSD is not mandatory for 11g RAC to function properly. It is present only for backward compatibility with 9i RAC. In 11g RAC, GSD is disabled by default and is not started.
3. Check the status of the complete clusterware stack on all nodes:
===================================================================
[grid@nodedb1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
ONLINE ONLINE nodedb1
ONLINE ONLINE nodedb2
ora.DATA.dg
ONLINE ONLINE nodedb1
ONLINE ONLINE nodedb2
ora.LISTENER.lsnr
ONLINE ONLINE nodedb1
ONLINE ONLINE nodedb2
ora.asm
ONLINE ONLINE nodedb1 Started
ONLINE ONLINE nodedb2 Started
ora.eons
ONLINE ONLINE nodedb1
ONLINE ONLINE nodedb2
ora.gsd
OFFLINE OFFLINE nodedb1
OFFLINE OFFLINE nodedb2
ora.net1.network
ONLINE ONLINE nodedb1
ONLINE ONLINE nodedb2
ora.net2.network
ONLINE ONLINE nodedb1
ONLINE ONLINE nodedb2
ora.ons
ONLINE ONLINE nodedb1
ONLINE ONLINE nodedb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE nodedb1
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE nodedb2
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE nodedb2
ora.ebizprd.db
1 ONLINE ONLINE nodedb1 Open
2 ONLINE ONLINE nodedb2 Open
ora.ebizprd.newsrv.svc
1 ONLINE ONLINE nodedb2
ora.ebizprd.newsrv1.svc
1 ONLINE ONLINE nodedb1
2 ONLINE ONLINE nodedb2
ora.oc4j
1 OFFLINE OFFLINE
ora.scan1.vip
1 ONLINE ONLINE nodedb1
ora.scan2.vip
1 ONLINE ONLINE nodedb2
ora.scan3.vip
1 ONLINE ONLINE nodedb2
ora.wmprddb1.vip
1 ONLINE ONLINE nodedb1
ora.wmprddb2.vip
1 ONLINE ONLINE nodedb2
[grid@nodedb1 ~]$
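Beyond the three checks above, a few additional spot-checks can be run as the grid user. The database name prod comes from the dbca step earlier; substitute your own if it differs:
[grid@nodedb1 ~]$ srvctl status database -d prod
[grid@nodedb1 ~]$ srvctl config scan
[grid@nodedb1 ~]$ crsctl check cluster -all
These should confirm that the database instances are running on both nodes, show the three SCAN VIPs, and verify the clusterware stack on every node.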