One reason that raw device support for Oracle RAC has been deprecated in Oracle Database 11g Release 2 is the difficulty of managing files on raw devices. Recall that if your shared files are on raw devices, you must have a separate raw slice (with a link name) for every file. Not only does this make large numbers of files difficult to manage, but it also means that archived logs cannot be created on raw slices at all. Consequently, the most common solution in the days of raw devices was to set the archive destination to a private drive on each node. The drawback is that if media recovery becomes necessary, the archived redo logs from all instances must first be copied to the archive destination from which recovery is being initiated.
To speed up the recovery process, and to simplify it greatly, you can avoid this copying step by setting up NFS mounts on each node, similar to the following:
mount -t nfs rmsclnxclu2:/u01/app/oracle/oradata/test/archive /archive
Here we have mounted the archive destination from Node2 (rmsclnxclu2) to a directory called /archive on Node1 (rmsclnxclu1). Assuming that Node1 (rmsclnxclu1) uses the same local path for archiving, you can then define two archive destinations, as follows:
LOG_ARCHIVE_DEST_1='location=/u01/app/oracle/oradata/test/archive/'
LOG_ARCHIVE_DEST_2='location=/archive/'
By doing this, Node1 now archives to two separate destinations: the first is its own local archive directory, and the second, the /archive directory, is actually the NFS-mounted archive destination used by Node2. If you reverse the process on Node2, mounting the archive destination on Node1 back to Node2 via NFS, each instance archives both to its own local archive destination and to the archive destination of the other node. This means that in the event of media recovery, you should have access to the archived logs from all threads no matter which node you run the recovery from. As you can see, this can get rather complicated if you have more than two nodes, but in a two-node cluster it is a workable solution.
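To make the reciprocal setup concrete, here is a sketch of the commands on both nodes. The directory paths and hostnames follow the example above; the /etc/exports options (rw, no_root_squash) and the use of exportfs are illustrative assumptions about a typical Linux NFS server configuration.

# On Node1 (rmsclnxclu1): export the local archive directory to Node2
# /etc/exports entry, then re-export:
/u01/app/oracle/oradata/test/archive  rmsclnxclu2(rw,no_root_squash)
exportfs -a

# On Node2 (rmsclnxclu2): export the local archive directory to Node1
/u01/app/oracle/oradata/test/archive  rmsclnxclu1(rw,no_root_squash)
exportfs -a

# On Node1: mount Node2's archive directory as /archive
mount -t nfs rmsclnxclu2:/u01/app/oracle/oradata/test/archive /archive

# On Node2: mount Node1's archive directory as /archive
mount -t nfs rmsclnxclu1:/u01/app/oracle/oradata/test/archive /archive

With the same local archive path on both nodes, both instances can then use the two LOG_ARCHIVE_DEST settings shown above.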
NOTE
If you make both archive destinations mandatory, the instance may hang if the NFS mount point for LOG_ARCHIVE_DEST_2 becomes inaccessible. Therefore, we recommend making the second destination optional in this configuration, to lessen the impact if one of the nodes is down.
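For example, the destinations could be defined with explicit attributes so that the local directory is mandatory while the NFS-mounted directory is optional; the REOPEN interval shown is an illustrative assumption:

LOG_ARCHIVE_DEST_1='location=/u01/app/oracle/oradata/test/archive/ MANDATORY'
LOG_ARCHIVE_DEST_2='location=/archive/ OPTIONAL REOPEN=60'

With OPTIONAL, the online redo logs can be reused even if archiving to that destination fails, and REOPEN specifies how many seconds the archiver waits before retrying the failed destination.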
Direct NFS Client
Oracle Database 11g introduced the Direct NFS Client, a feature that integrates NFS client functionality directly into the Oracle software. This integration allows Oracle to optimize the I/O path between the database and the NFS server, resulting in significantly improved performance. In addition, the Direct NFS Client simplifies, and in many cases automates, the tuning of the NFS client configuration for database workloads.
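As a brief sketch of how the Direct NFS Client is typically set up (the server name, network path, and mount point below are illustrative assumptions, not values from this example): the NFS storage is described in an oranfstab file, which Oracle looks for in locations such as $ORACLE_HOME/dbs, and on 11g Release 2 the client is switched on by relinking with the dnfs_on make target.

# $ORACLE_HOME/dbs/oranfstab -- describes the NFS server to the Direct NFS Client
server: mynfsfiler
path:   192.168.10.21
export: /vol/oradata  mount: /u02/oradata

# Enable the Direct NFS Client by relinking the Oracle binary (11g Release 2)
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

Once the database is restarted, you can verify that the Direct NFS Client is in use by querying the v$dnfs_servers view.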