Tech Paper - Best Practices for System Maintenance

Version 3

    Deploying the ArcFM Solution on SQL Server White Paper


    You should perform regular maintenance tasks to keep SQL Server running optimally. Your maintenance schedule should include implementing a backup strategy, updating database statistics, rebuilding and reorganizing indexes, and identifying and correcting excessive fragmentation.


    Guidance for Backing Up the Database

    You should back up the database regularly to protect your data. You should also back up the transaction log frequently to protect your data and to keep the transaction log file from growing too large.


    Use Backup Compression

    Backing up a large database can require a significant amount of time and a large amount of disk space for the backup file(s). With SQL Server 2008 backup compression, the backup file is compressed as it is written; this requires less storage, less disk I/O, and less time but incurs more (sometimes significantly more) CPU cycles as overhead.

    The compression is achieved by specifying the WITH COMPRESSION clause in the BACKUP command or by selecting compression on the Options page in the Back Up Database dialog box. There is also a global setting to compress all backups taken on a server instance by default. (This setting is accessed by using the Database Settings tab in the Server Properties dialog box or by running sp_configure with the backup compression default option set to 1.) The RESTORE command automatically detects that a backup is compressed and decompresses the backup during the restore operation.
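    As a minimal sketch, both approaches look like the following; the database name and backup path are placeholders for your environment:

    ```sql
    -- Enable compression by default for all backups on this instance
    -- ('backup compression default' is the sp_configure option name).
    EXEC sp_configure 'backup compression default', 1;
    RECONFIGURE;

    -- Or compress a single backup explicitly. The database name and
    -- backup path are placeholders.
    BACKUP DATABASE YourDatabase
    TO DISK = N'D:\Backups\YourDatabase.bak'
    WITH COMPRESSION;
    ```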

    For more information, see the article Tuning the Performance of Backup Compression in SQL Server 2008.


    Implement a Backup Schedule

    A suggested schedule is a weekly full backup, a daily differential backup, and a log backup every hour. Frequent log backups can keep the size of the log file smaller and can also minimize data loss when implementing log shipping for disaster recovery. Frequent full backups can improve the speed of recovery since it is not necessary to restore as many log files. Your organization may have specific requirements for recovery times, and you should discuss these with a qualified database administrator to determine a backup profile that meets these requirements. If you implement peer-to-peer replication, you must also implement a backup schedule for the target database(s). The distributor database operates in simple recovery mode, so it does not need to be backed up.
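    Such a schedule is typically implemented with SQL Server Agent jobs that run statements like the following; the database name and file paths are placeholders:

    ```sql
    -- Weekly full backup (names and paths are placeholders).
    BACKUP DATABASE YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_full.bak';

    -- Daily differential backup (changes since the last full backup).
    BACKUP DATABASE YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_diff.bak'
    WITH DIFFERENTIAL;

    -- Hourly transaction log backup (requires the full or
    -- bulk-logged recovery model).
    BACKUP LOG YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_log.trn';
    ```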


    Back Up System Databases

    SQL Server maintains a set of system-level databases (called system databases) that are essential for the operation of a server instance. Several of the system databases must be backed up after every significant update: msdb, master, and model. If any database uses replication on the server instance, the distribution system database (which may reside on a remote server) must also be backed up. Backups of these system databases let you restore and recover the SQL Server system in the event of a system failure, such as the loss of a hard disk.
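    The system databases are backed up like any user database; in this sketch the backup paths are placeholders:

    ```sql
    -- Back up the core system databases (paths are placeholders).
    BACKUP DATABASE master TO DISK = N'D:\Backups\master.bak';
    BACKUP DATABASE model  TO DISK = N'D:\Backups\model.bak';
    BACKUP DATABASE msdb   TO DISK = N'D:\Backups\msdb.bak';
    ```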

    For more information, see the article Considerations for Backing Up and Restoring System Databases.


    Recommendations for Updating Statistics

    SQL Server collects statistics about individual columns (single-column statistics) or sets of columns (multi-column statistics). Statistics are used by the query optimizer to estimate the selectivity of expressions and thus the size of intermediate and final query results. Reliable statistics let the optimizer accurately assess the cost of different query plans and then choose a high-quality plan.

    For a typical installation, it is recommended to keep the AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS database options on (which are the default settings).
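    These options can be set (or confirmed) per database; the statements below use a placeholder database name:

    ```sql
    -- Ensure automatic statistics creation and updating are enabled
    -- (these are the defaults; 'YourDatabase' is a placeholder).
    ALTER DATABASE YourDatabase SET AUTO_CREATE_STATISTICS ON;
    ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON;

    -- Statistics can also be refreshed manually when needed:
    EXEC sp_updatestats;
    ```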

    For more information, see the article Statistics Used by the Query Optimizer in Microsoft SQL Server 2008.


    Recommendations for Reorganizing or Rebuilding Indexes

    The SQL Server Database Engine automatically maintains indexes whenever insert, update, or delete operations are applied to the underlying data. Over time, these modifications can cause the information in the index to become scattered or fragmented in the database. Fragmentation exists when indexes have pages in which the logical ordering, based on the key value, does not match the physical ordering inside the data file. Heavily fragmented indexes can degrade query performance and cause your application to respond slowly when performing scans. (Note that density of data in the pages relates directly to how much data can be cached, and therefore can often affect performance.)

    You can defragment indexes by either reorganizing or rebuilding. For partitioned indexes built on a partition scheme, you can rebuild or reorganize a complete index or a single partition of an index.

    To decide which defragmentation method to use, analyze the index to determine the degree of fragmentation. By using the dynamic management function (DMF) sys.dm_db_index_physical_stats, you can detect fragmentation in a specific index, all indexes on a table or indexed view, all indexes in a database, or all indexes in all databases. The function returns the current fragmentation level of each index in a well-defined set of columns.
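    For example, the following query reports the fragmentation of every index in the current database (the LIMITED scan mode is the fastest of the available modes):

    ```sql
    -- Report fragmentation for all indexes in the current database.
    -- DB_ID() plus the NULL arguments scope the function to every
    -- table and index; 'LIMITED' is the fastest scan mode.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id;
    ```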

    Table 1 shows some of the information returned by the sys.dm_db_index_physical_stats function.




    Column                          Description
    avg_fragmentation_in_percent    The percent of logical fragmentation (out-of-order pages in the index).
    fragment_count                  The number of fragments (physically consecutive leaf pages) in the index.
    avg_fragment_size_in_pages      The average number of pages in one fragment in an index.
    page_count                      The total number of index or data pages.

    Table 1, Columns returned from sys.dm_db_index_physical_stats

    You can then use Table 2 as a guide to determine the best method to correct the fragmentation.


    avg_fragmentation_in_percent value    Corrective statement
    > 10 percent and < 80 percent         ALTER INDEX REORGANIZE
    > 80 percent                          ALTER INDEX REBUILD

    Table 2, Corrective defragmentation method to use

    Note that very low levels of fragmentation (less than 10%) should not be addressed by either of these commands because the benefit from removing such a small amount of fragmentation is almost always vastly outweighed by the cost of reorganizing or rebuilding the index.


    Rebuild Indexes When Necessary

    Rebuilding an index creates an entirely new B-tree index structure. In doing this, fragmentation is removed, disk space is reclaimed by compacting the pages using the specified or existing fill-factor setting, and the index rows are reordered in contiguous pages (allocating new pages as needed). This can improve disk performance by reducing the number of page reads required to obtain the requested data.

    The following methods can be used to rebuild clustered and non-clustered indexes:

      • ALTER INDEX with the REBUILD clause. This statement replaces the DBCC DBREINDEX statement.
      • CREATE INDEX with the DROP_EXISTING clause.
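    The statements above can be sketched as follows; the index, table, and column names are placeholders:

    ```sql
    -- Reorganize (lightweight, always online): compacts and reorders
    -- the leaf pages in place. Names are placeholders.
    ALTER INDEX IX_YourIndex ON dbo.YourTable REORGANIZE;

    -- Rebuild: creates a new B-tree using the specified fill factor.
    ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD WITH (FILLFACTOR = 80);

    -- Equivalent rebuild via CREATE INDEX with DROP_EXISTING
    -- (the column list is a placeholder):
    CREATE NONCLUSTERED INDEX IX_YourIndex ON dbo.YourTable (YourColumn)
    WITH (DROP_EXISTING = ON);
    ```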


    Recommendations for Update Management

    Following is guidance for update management.


    Maintain Your Microsoft Software

    It is recommended to keep your Microsoft software up to date with the most recent software updates. Software updates (hotfixes, cumulative updates, service packs, and so on) can help prevent or fix problems, enhance the security of your computers, and improve how the computers work. Consider testing and applying software updates on a regular basis.

    For more information on applying updates, see Update Management TechCenter and SQL Server 2008 failover cluster rolling patch and service pack process.