The three different wait events of ‘control file sequential read’, ‘control file single write’, and ‘control file parallel write’ all contribute to the amount of time Oracle takes to keep the control file current.
Oracle maintains a record of the consistency of the database’s physical structures and operational state through a set of control files. The control file is essential to database operation and to the ability to recover from an outage; in fact, if you lose the control file(s) associated with an instance you may not be able to recover completely.
The state recorded in the control file(s) changes constantly through activities such as adding data files, altering the size or location of data files, generating redo, creating archive logs, taking backups, advancing the SCN, and taking checkpoints.
Through normal operation the control file is continuously hammered with reads and writes as it is being updated.
Why control file wait events occur
Poor performance for reads and writes against the control files is often an indication that the control files are misplaced: they share the same I/O access path or sit on devices that are heavily used. It is interesting to note that Oracle has always defaulted the creation of control files to a single directory. You can check where your control files reside on disk with this simple query.
SELECT name FROM v$controlfile;
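Alternatively, the same list of locations is stored in the control_files initialization parameter, which you can inspect with a query like the one below.
SELECT value FROM v$parameter WHERE name = 'control_files';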
If you want to check the total system-wide waits that have occurred for control file reads and writes, you can query the V$SYSTEM_EVENT view. This will give you the total number of waits, timeouts, and accumulated wait time for these events.
SELECT * FROM v$system_event WHERE event LIKE '%control%';
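If you prefer a more focused view, a sketch like the following pulls just the columns of interest from V$SYSTEM_EVENT and orders the three control file events by time waited.
SELECT event, total_waits, total_timeouts, time_waited, average_wait
  FROM v$system_event
 WHERE event LIKE 'control file%'
 ORDER BY time_waited DESC;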
Likewise you could query the V$SESSION_WAIT view to see which sessions are experiencing control file wait events in real time.
SELECT event, wait_time, p1, p2, p3 FROM v$session_wait WHERE event LIKE '%control%';
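To see who is actually waiting, you can join V$SESSION_WAIT to V$SESSION on the SID column; the sketch below assumes you want the username and program of each waiting session.
SELECT s.sid, s.username, s.program, w.event, w.wait_time
  FROM v$session_wait w, v$session s
 WHERE s.sid = w.sid
   AND w.event LIKE '%control%';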
Here WAIT_TIME is the elapsed time for control file reads or writes. P1, P2, and P3 are file#, block#, and blocks for ‘control file sequential read’ and ‘control file single write’, but files, blocks, and requests for ‘control file parallel write’. Since there are no Oracle internal views for looking at control file layout as there are for ‘normal’ data and temporary files, the only way to look inside the control files is to generate a dump. You can do this through the following ALTER SESSION command, where <level#> is typically 1 to 3 and represents dumping the file header, the database and checkpoint records, or the circular reuse record types. In the resulting trace file, which is generated in user_dump_dest, you can see that the control file(s) have a file# of 0.
ALTER SESSION set events 'immediate trace name controlf level <level#>';
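If you are not sure where that trace file will land, you can check the user_dump_dest parameter before generating the dump, for example:
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';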
Reducing time spent for control file reads and writes
So how can you reduce the time spent on control file reads and writes? There are two distinct answers. First, ensure that you have placed your control files on disks that are not under heavy I/O load.
When trying to determine how many control files to have, keep in mind that the more control files you have, the more I/O and time will be needed to write to all of those copies. It is often better to have the O/S mirror the control files and reduce the Oracle overhead.
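If you find that a control file does need to move to a quieter device, one common approach (sketched below for an spfile-based instance, with made-up paths) is to change the control_files parameter, shut the instance down, copy the files at the O/S level, and start back up.
-- example paths only; substitute your own locations
ALTER SYSTEM SET control_files =
  '/u01/oradata/PROD/control01.ctl',
  '/u05/oradata/PROD/control02.ctl'
  SCOPE = SPFILE;
-- SHUTDOWN IMMEDIATE, copy the control files to the new locations, then STARTUP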
Second, since the number of reads and writes is dictated by the type of activity within the database, it is often a good idea to revisit application logic and processing steps to ensure they are not causing excessive reads and writes to the control files; for instance, code that produces excessive commits or even forces log switches. Since DBA activity is typically concentrated on modifying structures in the database, you also need to be careful when performing batch runs of administrative scripts that could conflict with normal application processing. So be on the lookout for high activity levels such as log switching, checkpointing, backups taking place, and structural changes.
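One quick way to spot an application run that is forcing excessive log switches (and the associated control file updates) is to count switches per hour from V$LOG_HISTORY; a simple sketch:
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS log_switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;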
Final thoughts
The control files are vital to database operations. Since they hold such critical information about the database, it is imperative that they be safeguarded against disk I/O contention while also being protected against corruption or loss.
Balancing these two concerns is often difficult, but the common practice is to have multiple copies of the control files, keep them on separate physical disks, and avoid putting them on heavily accessed disks.
Also consider using operating system mirroring instead of Oracle multiplexing of the control files, taking and verifying your control file backups daily, and creating a new backup control file whenever a structural change is made.
By taking control of your control files you can eliminate I/O contention and steer clear of becoming another war story on the net about yet another lost control file and unrecoverable database system.
Reet says
does the control file contain tempfiles record or not?
Janis says
Yes, the control file contains the location, name, size, etc. of the tempfiles. If you back up your control file with the command ALTER DATABASE BACKUP CONTROLFILE TO TRACE; you will see the tempfile creation statements at the end of the generated script. The name and location of the trace file that the above command creates can be found in the alert log.
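For reference, the tail end of that generated script typically contains tempfile re-creation statements along these lines (the file name and sizes here are only illustrative):
ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/oradata/PROD/temp01.dbf'
  SIZE 500M REUSE AUTOEXTEND ON NEXT 64M MAXSIZE UNLIMITED;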