EMS Best Practices for SQL Server

This topic provides best practice information on the following:

  • SQL Instance Properties
  • Database Properties
  • tempdb
  • SQL Instance Maintenance

SQL Instance Properties

  • Max SQL Memory

    • If the machine is hosting no applications other than SQL Server, maximum memory should be set to 3 to 8 GB less than the machine’s total physical memory. For example, if the machine has 32 GB, the SQL max should be set to 28 GB. For further assistance determining the max SQL memory, use the SQL Max Memory Calculator.

    • To set, use a SQL query window. The change takes effect immediately; no downtime is required.

      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sp_configure 'max server memory (MB)', 28000;
      RECONFIGURE;
  • To determine total memory, run the following from a command window:  

    systeminfo | findstr /C:"Total Physical Memory" 
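    The configured and running values for the memory setting can be read back from the sys.configurations catalog view; a minimal sketch:

    ```sql
    -- Show the configured and in-use values for max server memory
    SELECT name, value, value_in_use
    FROM sys.configurations
    WHERE name = 'max server memory (MB)';
    ```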
  • Maximum Degree of Parallelism (MaxDOP)

    • The best value depends on the number of logical CPU cores and on how highly CXPACKET ranks in the wait-statistics DMV (sys.dm_os_wait_stats).

    • You will likely use a value other than the default of 0, which lets SQL Server use all available cores for a parallel plan.

    • To set, use a SQL query window. The change takes effect immediately; no downtime is required.

      EXEC sp_configure 'max degree of parallelism', 4;
      RECONFIGURE;
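    The CXPACKET ranking mentioned above can be checked with a query against the wait-statistics DMV; a minimal sketch that lists the top waits by total wait time:

    ```sql
    -- Rank wait types by total wait time; note where CXPACKET falls
    SELECT TOP (10)
        wait_type,
        wait_time_ms,
        waiting_tasks_count
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;
    ```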
  • Cost Threshold for Parallelism

    • The best value depends on the number of logical CPU cores and on how highly CXPACKET ranks in the wait-statistics DMV (sys.dm_os_wait_stats).

    • You will likely use a value higher than the default of 5, perhaps 25 or 50.

    • To set, use a SQL query window. The change takes effect immediately; no downtime is required.

      EXEC sp_configure 'cost threshold for parallelism', 25;
      RECONFIGURE;
  • Optimize for Ad Hoc Workloads

    • Change this from the default 0 to 1. This helps relieve memory pressure by preventing the plan cache from filling with compiled plans that are never reused.

    • To set, use a SQL query window. The change takes effect immediately; no downtime is required.

      EXEC sp_configure 'optimize for ad hoc workloads', 1;
      RECONFIGURE;
  • Instant File Initialization
    • Enable this by granting the "Perform volume maintenance tasks" right to the SQL Server service account in Local Security Policy. See the instructions from Brent Ozar.

    • This allows data file auto-grows to happen instantly. If not enabled, a large file growth, such as 20 GB, might take a few minutes.

    • Run the following in a SQL query window to determine the current state:

      Exec sys.xp_readerrorlog 0, 1, N'Database Instant File Initialization' 
  • Compressed Backups by Default

    • Enabling this does not affect normal system performance; it does, however, shorten the backup window, because a compressed backup typically takes 20–50% of the time of an uncompressed one.

    • Compressed backups also require less storage, because they are typically 20–50% of the size of uncompressed backups.

    • To set, use a SQL query window. The change takes effect immediately; no downtime is required.

      EXEC sp_configure 'backup compression default', 1;
      RECONFIGURE;
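Whether or not the default is changed, an individual backup can request compression explicitly; a sketch, using a hypothetical backup path:

```sql
-- The path below is a placeholder; adjust to your backup location
BACKUP DATABASE EMS
TO DISK = N'D:\Backups\EMS.bak'
WITH COMPRESSION;
```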

Database Properties 

  • Turn on Asynchronous Updating of Statistics

    • This should be done in both (or all) EMS databases.

    • To set, use a SQL query window. The change takes effect immediately; no downtime is required.

    • First, ensure auto-update of statistics is turned on:

      ALTER DATABASE dbNameHere SET AUTO_UPDATE_STATISTICS ON
    • Then make auto-update of statistics asynchronous:

      ALTER DATABASE dbNameHere SET AUTO_UPDATE_STATISTICS_ASYNC ON
  • Allocated Database Size

    • Estimate the expected database size in bytes after one year, using a formula whose variables reflect how large and how active the client database is.

    • Data file(s) should anticipate a year’s growth (dependent upon formula shown above).

    • Log file should be 25–50% of the size of the data files.

    • Data and log files should be on separate physical disks, preferably SSD.

    • Auto-file growth should be set to fixed bytes, not a percentage. Do not use the default setting of 1 MB.
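The fixed-byte growth setting above can be applied per file with ALTER DATABASE; a sketch, assuming a hypothetical logical file name of EMS_Data:

```sql
-- EMS_Data is a hypothetical logical file name; check sys.database_files for yours
ALTER DATABASE EMS
MODIFY FILE (NAME = EMS_Data, FILEGROWTH = 256MB);
```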

  • Compatibility Level

    • Background:

      • A database’s compatibility level (CL) is usually the same as the version of the instance.

      • However, it is possible to set a CL back one or more versions.

      • When upgrading a database to a newer version of SQL Server, the database CL may remain the same from the previous version.

      • When the CL for 2014 (120) is used, performance degradation is possible. This is caused by a documented bug in the cardinality estimator of the SQL Server 2014 query engine.

    • Ensure the CL is set as follows:

      Compatibility Level   Version                                   DB Compatibility
      110                   Microsoft SQL Server 2012 = 11.00.XXXX    110
      120                   Microsoft SQL Server 2014 = 12.00.XXXX    110 (avoid CL for 2014)
      130                   Microsoft SQL Server 2016 = 13.00.XXXX    130

      Important!

      2017 is not yet officially supported by EMS Software.
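The current CL of every database can be read from sys.databases and changed per the table above; a minimal sketch:

```sql
-- Check compatibility levels for all databases
SELECT name, compatibility_level
FROM sys.databases;

-- Set the EMS database to the 2012 level (110)
ALTER DATABASE EMS SET COMPATIBILITY_LEVEL = 110;
```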

tempdb 

  • tempdb is a system database and the temporary work space (scratchpad) for a SQL Server instance’s query processing. It is critical that it be configured for maximum efficiency. In SQL Server 2014 and earlier, tempdb’s default settings do not yield a proper configuration; the out-of-the-box configuration will likely be a bottleneck.

  • tempdb should be located on yet another physical disk, preferably the fastest disk available (SSD). Fault tolerance is not needed.

  • Number of data files should equal the number of logical CPU cores, up to 8.

  • Data files should be the same size.

  • Only one log file should be used.

  • Auto-file growth should be set to fixed bytes, not a percentage, with files pre-sized so that no growth is anticipated.
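The sizing guidance above can be applied with ALTER DATABASE; a sketch, assuming the default logical file names (tempdev, templog) and example sizes only:

```sql
-- Pre-size the first data file and the log; sizes here are examples, not recommendations
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2048MB, FILEGROWTH = 512MB);

-- Add equally sized data files up to the number of logical cores (max 8);
-- the path T:\tempdb2.ndf is a placeholder for your tempdb disk
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = N'T:\tempdb2.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
```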

SQL Instance Maintenance 

  • The following maintenance should be run at least weekly:

    • Defragmentation: a REBUILD or REORGANIZE of indexes should be performed weekly.

      • Threshold for reorganize: 5% fragmentation

      • Threshold for rebuild: 15% fragmentation

    • DBCC CHECKDB: should be run weekly at off-peak hours. This ensures the integrity of the storage media.

        DBCC CHECKDB (EMS)
        DBCC CHECKDB (EMS_Master)
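    The reorganize/rebuild thresholds above can be checked per index with the sys.dm_db_index_physical_stats DMV; a minimal sketch for one database:

    ```sql
    -- List index fragmentation in the EMS database, worst first
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID('EMS'), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent >= 5
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- Then, per index (IX_Example on dbo.SomeTable is hypothetical):
    -- ALTER INDEX IX_Example ON dbo.SomeTable REORGANIZE;  -- 5-15% fragmentation
    -- ALTER INDEX IX_Example ON dbo.SomeTable REBUILD;     -- over 15% fragmentation
    ```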
    • Update Statistics: should be done weekly, at off-peak hours, using the following method:

        USE EMS
        EXEC sp_updatestats
        USE EMS_Master
        EXEC sp_updatestats

      Or optionally, if much data has been changed:

        UPDATE STATISTICS TableName WITH FULLSCAN
    • Manage the number of Virtual Log Files (VLFs)

      • A high VLF count contributes to poor performance and should be checked weekly. Anything over 50 should be addressed by shrinking the log file and then resizing it to a size that will NOT require further growth.

      • VLFs are essentially fragmentation of the transaction log file caused by auto-growth. If auto-growth is set to too small an increment, there will be many growth events, producing a high number of VLFs.
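The VLF count mentioned above can be read with DBCC LOGINFO, which returns one row per VLF (undocumented but widely used); a minimal sketch:

```sql
USE EMS;
-- The number of rows returned is the VLF count for the current database's log
DBCC LOGINFO;
```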