Troubleshoot Oracle Grid Infrastructure Installation

By Scott Jesse, Bill Burton, Bryan Vongray on September 26, 2013


The Oracle Grid Infrastructure installation procedure involves three major stages: the initial OUI stage, the root.sh stage, and the assistants stage. Let’s look at the log files each stage produces, which tell us what, if anything, went wrong.

Oracle Universal Installer Stage

The installer writes to the oraInventory directory specified as part of the install process. The log file has a name similar to the one shown here:

   [grid]$ ls /u01/app/oraInventory/logs/inst*
   /u01/app/oraInventory/logs/installActions2010-07-16_10-54-18AM.log

The installer initially stages its files and logs in /tmp until the permanent oraInventory directory is created, so early install errors may be found in that staging area. It is also possible to run the installer in debug mode, which will show everything it is doing and will generally pinpoint exactly where an issue occurs. This verbose output is displayed to stdout (your terminal), and it is a good idea to capture terminal output as shown:

   [grid]$ script /tmp/installlog
   Script started, file is /tmp/installlog
   [grid]$ ./runInstaller -debug
   ---- Output from Installer too verbose to show.
   [grid]$ exit
   exit
   Script done, file is /tmp/installlog

NOTE

The most common problem we have seen during installation is the installer hanging at around 65 percent completion while copying files to remote nodes. This is normally caused by forgetting to disable SELinux and the Linux firewall (iptables) before starting the install: the firewall causes the remote copy to fail, and the installer does not detect the failure.

root.sh Stage

As you saw previously, root.sh writes some basic progress information to the terminal to show what it is doing. The full text, along with any errors reported, is found in $GI_HOME/cfgtoollogs/crsconfig/rootcrs_<hostname>.log, which should be the first place you look when a problem occurs with root.sh. The rootcrs log may simply point you to the area where the problem occurred; you can then dig deeper using the clusterware alert log and the agent and process logs. These cluster logs are found in $GI_HOME/log/<hostname>/.
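When root.sh fails, searching the rootcrs log for failure markers is usually the fastest way to find the step that broke. A sketch (the exact message text varies by release, so the grep patterns here are assumptions):

```shell
# The failure normally appears near the end of the log
tail -200 $GI_HOME/cfgtoollogs/crsconfig/rootcrs_$(hostname -s).log

# Or search the whole log for common failure markers (case-insensitive,
# with line numbers so you can read the surrounding context)
grep -inE 'fail|error' $GI_HOME/cfgtoollogs/crsconfig/rootcrs_$(hostname -s).log
```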

If the problem occurs in ASM, the trace files for the instance will be found in $GI_BASE/diag/asm/+asm/<asm instance>/trace.
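Sorting that trace directory by modification time brings the most recent files to the top, which is usually what you want after a fresh failure. A sketch, assuming +ASM1 as the instance name on node 1:

```shell
# Newest ASM trace files first; the instance alert log lives in the
# sibling "trace" directory alongside these files
ls -lt $GI_BASE/diag/asm/+asm/+ASM1/trace | head
```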

Assistants Stage

The configuration assistants, such as asmca and netca, write their trace files to $GI_BASE/cfgtoollogs.
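Each assistant writes under its own subdirectory there, so a time-bounded find is a convenient way to pull up the logs from the run that just failed; a minimal sketch:

```shell
# List assistant logs modified in the last day (asmca, netca, and so on)
find $GI_BASE/cfgtoollogs -name '*.log' -mtime -1 -ls
```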
