DB_BLOCK_SIZE sets the default data block size, fixed when the database is created. Since Oracle 9i, each tablespace can have its own block size, making the choice of default block size somewhat less critical, although a separate cache memory allocation must be made for each different block size. It is still very important to choose wisely: multiple block-size tablespaces are not truly a performance feature, as the nondefault buffer caches are not optimized for performance, so you still want to put the bulk of your data in the default buffer cache. You must rebuild the database if you want to increase the DB_BLOCK_SIZE. The block size for data warehouses is often 32K (you want many rows read at a time), whereas OLTP systems often use 8K. Most experts recommend an 8K block size.
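To confirm the default block size an existing database was created with, you can query V$PARAMETER; a minimal check looks like this:

```sql
-- Show the default block size chosen at database creation
SELECT value AS db_block_size
FROM   v$parameter
WHERE  name = 'db_block_size';
```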
The data block cache for the default block size is set using the DB_CACHE_SIZE initialization parameter. Cache for other block sizes is allocated with DB_nK_CACHE_SIZE, where n is the block size in KB. The larger the DB_BLOCK_SIZE, the more rows fit inside a single block and the more efficiently large amounts of data can be retrieved. A small DB_BLOCK_SIZE lets you retrieve single records faster and saves space in memory; a smaller block size can also improve transactional concurrency and reduce redo log generation. As a rule of thumb, a data warehouse should use the maximum block size available for your platform (either 16K or 32K), as long as no bugs exist for that block size (check Metalink to make sure), whereas a transaction-processing system should use an 8K block size. Rarely is a block size smaller than 8K beneficial, but I’ve used a 2K block size for a stock exchange application and I’ve seen a 4K block size in benchmarks. If you have an extremely high transaction rate or very limited system memory, you might consider a block size smaller than 8K.
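As a sketch of how the two parameters work together, the following allocates a 16K buffer cache and then creates a tablespace that uses the nondefault 16K block size. The tablespace name, datafile path, and sizes here are illustrative assumptions, not recommendations:

```sql
-- Allocate a separate buffer cache for 16K blocks; this must exist
-- before any 16K tablespace can be created (size is illustrative)
ALTER SYSTEM SET db_16k_cache_size = 256M;

-- Create a tablespace with the nondefault 16K block size
-- (name, datafile path, and size are hypothetical)
CREATE TABLESPACE dw_data
  DATAFILE '/u01/oradata/orcl/dw_data01.dbf' SIZE 1G
  BLOCKSIZE 16K;
```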
Full table scans are limited to the maximum I/O size of the box (historically 64K, though most systems now support 1M I/Os). You can increase the amount of data read into memory in a single I/O by raising the DB_BLOCK_SIZE to 8K or 16K. You can also increase the DB_FILE_MULTIBLOCK_READ_COUNT, up to a maximum of (max I/O size) / DB_BLOCK_SIZE.
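For example, with a 1M maximum I/O size and an 8K block size, that ceiling works out to 1,048,576 / 8,192 = 128 blocks per read. Assuming those values apply to your platform, the setting would be:

```sql
-- (max I/O size) / DB_BLOCK_SIZE = 1048576 / 8192 = 128 blocks per I/O
ALTER SYSTEM SET db_file_multiblock_read_count = 128;
```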
Environments that run many single-row queries could use a smaller block size, although “hot spots” in those systems still benefit from a larger one. Sites that need to read large amounts of data in a single I/O should increase the DB_FILE_MULTIBLOCK_READ_COUNT. This may not be necessary in 11g, where the default is already much larger; my default was 128, but check your system for variations. According to the Oracle documentation, the default corresponds to the maximum I/O size that can be performed efficiently and is platform-dependent. Setting the DB_FILE_MULTIBLOCK_READ_COUNT higher is especially important for data warehouses that retrieve lots of records. If a higher DB_FILE_MULTIBLOCK_READ_COUNT starts to cause too many full table scans (because the optimizer now decides it can perform them much faster and so does more of them), set OPTIMIZER_INDEX_COST_ADJ to a value between 1 and 10 (I usually use 10) to force index use more frequently, as sketched below.
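A minimal sketch of that adjustment, plus a query to verify whether DB_FILE_MULTIBLOCK_READ_COUNT is still at its platform-derived default (the value 10 follows the suggestion above; tune it for your own workload):

```sql
-- Make index access look roughly 10x cheaper to the optimizer
-- (the default for OPTIMIZER_INDEX_COST_ADJ is 100)
ALTER SYSTEM SET optimizer_index_cost_adj = 10;

-- Check current values and whether they are still at their defaults
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name IN ('db_file_multiblock_read_count',
                'optimizer_index_cost_adj');
```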
TIP
The database must be rebuilt if you increase the DB_BLOCK_SIZE. Increasing the DB_FILE_MULTIBLOCK_READ_COUNT allows more blocks to be read in a single I/O, giving a benefit similar to a larger block size.