This is a continuation of the storage story we started in Part I. With that as the foundation, let us take a step closer to other aspects of the storage system: what needs to be taken care of when working with physical files, filegroups, transaction log design, SAN / RAID and more. We will cover as much as possible, and let me highlight some of the best practices along the same lines.
In Part I we discussed the requirements and needs from the system databases side, but here we will get started with the user databases. When evaluating solutions, you should also consider their effect on your high-availability requirements. This is most often decided by the customer's needs, and I would like you to keep them in mind.
The location for storing the user databases depends on the hardware and the database solution. Generally, you should separate the random database file I/O from the sequential log file I/O. Adding more files than the number of CPUs in your SQL Server environment will not improve performance unless you also provide additional physical storage devices. You should also design an appropriate filegroup strategy that provides parallel access to different physical devices, by storing different database objects in filegroups created on different physical storage devices.
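To make that concrete, here is a minimal T-SQL sketch. The database name, filegroup names, and drive paths are illustrative assumptions, not a prescription for your environment:

```sql
-- Illustrative only: two filegroups backed by different physical drives.
ALTER DATABASE SalesDB ADD FILEGROUP FG_Data;
ALTER DATABASE SalesDB ADD FILEGROUP FG_Indexes;

ALTER DATABASE SalesDB ADD FILE
    (NAME = SalesData1, FILENAME = 'E:\SQLData\SalesData1.ndf', SIZE = 2GB)
    TO FILEGROUP FG_Data;
ALTER DATABASE SalesDB ADD FILE
    (NAME = SalesIdx1, FILENAME = 'F:\SQLIdx\SalesIdx1.ndf', SIZE = 1GB)
    TO FILEGROUP FG_Indexes;

-- Make the new filegroup the default so user data stays off PRIMARY,
-- leaving only the system tables there.
ALTER DATABASE SalesDB MODIFY FILEGROUP FG_Data DEFAULT;

-- A table and its nonclustered index can now sit on different spindles,
-- giving SQL Server parallel access paths.
CREATE TABLE dbo.Orders (OrderID int NOT NULL, OrderDate datetime NOT NULL)
    ON FG_Data;
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate) ON FG_Indexes;
```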
Before we get to the fundamentals, you must be wondering: the physical storage is mostly based on RAID / SAN configurations. Let me call out some of the points to remember when working with RAID:
- The vulnerability of a RAID array after a disk drive failure. This decides your support for hot-spare disk drives.
- The time it takes to rebuild a RAID array.
- The number of hot-spare disk drives the RAID array supports, and the number or ratio that you will be using.
- A hardware RAID controller that meets the SQL Server I/O requirements.
You can use RAID 5 for data files, but it is not appropriate for log files. RAID 10 (a stripe of mirrors) is a better choice than RAID 0+1 (a mirror of stripes), because RAID 0+1 could fail if one disk on each side of the mirror fails. The probability of a RAID 10 array failing is much lower because it can survive more than one disk drive failure. You should NEVER use RAID 0 in a high-availability solution.
You should analyze the current space requirements and plan for future growth. The auto-growth process can make the system unavailable while SQL Server acquires space from the disk drive. Auto-growth also causes external disk fragmentation, and therefore you should avoid relying on it. You can read http://www.extremeexperts.com/SQL/Articles/AutoGrowth.aspx for more insight into what I am talking about.
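If you want to see where you stand today, a quick check along these lines (the database and logical file names are illustrative) shows the current size and growth settings, and then pre-grows a file so auto-growth does not fire during business hours:

```sql
-- Current size and growth settings for the files of the current database.
-- size and growth are stored in 8 KB pages, so divide by 128 for MB.
SELECT name,
       size / 128 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' percent'
            ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;

-- Pre-allocate space up front instead of relying on auto-growth
-- (illustrative names and size).
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesData1, SIZE = 10GB);
```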
File / Filegroups
The primary and secondary database files are fundamental elements of an effective storage
solution for high-availability databases. They can improve performance,
simplify administrative tasks, and form the basis of your backup strategy. You
need to design your filegroup strategy before you implement a high-availability
database storage solution, because it can be difficult to change your strategy
after the system has been placed in the production environment. Your strategy
will vary, depending upon whether you are designing a file structure that is
composed of multiple physical files or multiple logical filegroups.
If the storage solution is composed of multiple physical files, here are some points to keep in mind:
- Choose an appropriate size for database files. Auto-growth of database files has an impact on performance, as discussed before. By pre-allocating and monitoring the free capacity of database files, you can avoid performance issues during normal operations.
- Place multiple database files on separate drive arrays. You can improve performance and minimize disk contention by separating database files onto separate I/O channels. Ensure that you do not saturate the I/O bus.
- Match the number of database files to the number of CPUs. The maximum number of threads that SQL Server will utilize to access database files is equal to the number of CPUs included in your system, taking into account any affinity mask settings.
- Set an appropriate maximum file size, for example one equivalent to your removable media. This will help the DBA perform administrative tasks, such as moving a database by detaching locally and attaching remotely.
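The points above can be sketched in one CREATE DATABASE statement; the names, paths, and sizes here are illustrative assumptions (the 4 GB cap mirrors a DVD-sized removable medium):

```sql
-- Pre-sized files spread across separate drives, auto-growth kept in check,
-- and an explicit MAXSIZE per file (all values illustrative).
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_Sys,   FILENAME = 'E:\SQLData\SalesDB_Sys.mdf',
     SIZE = 200MB, MAXSIZE = 4GB, FILEGROWTH = 100MB),
    (NAME = SalesDB_Data1, FILENAME = 'F:\SQLData\SalesDB_Data1.ndf',
     SIZE = 4GB, MAXSIZE = 4GB, FILEGROWTH = 0),
    (NAME = SalesDB_Data2, FILENAME = 'G:\SQLData\SalesDB_Data2.ndf',
     SIZE = 4GB, MAXSIZE = 4GB, FILEGROWTH = 0)
LOG ON
    -- Sequential log I/O on its own drive, away from the data files.
    (NAME = SalesDB_Log,   FILENAME = 'H:\SQLLogs\SalesDB_Log.ldf',
     SIZE = 2GB, MAXSIZE = 4GB, FILEGROWTH = 200MB);
```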
If instead we design this as multiple logical filegroups, use these as some pointers:
- Keep only the system tables on the PRIMARY filegroup. Separate your user data from the metadata by defining a new default filegroup. Create a set of secondary database files associated with this new filegroup.
- Improve performance by separating different I/O patterns. You can maximize performance with filegroups by separating your table data, large object (LOB) data, and indexes onto different drive arrays. BTW, I would make an exception for cases like partitioned tables, where I would prefer the same filegroup, keeping indexes aligned with their data.
- Plan an appropriate backup strategy. SQL Server allows you to back up your user databases at the file or filegroup level. By designing an appropriate file or filegroup strategy, you can minimize the impact that backups have on your production system.
- Reduce downtime during database file failure. SQL Server 2005 allows you to perform online file restores while the database remains online. While you are restoring a damaged file, only that file and its filegroup will be offline. This feature is available only in SQL Server 2005 Enterprise Edition.
- Use NTFS compression on read-only filegroups only for historical data that is rarely accessed. In general, you should avoid NTFS compression, because compressed filegroups are slower than uncompressed ones, and there is administrative overhead involved if you need to modify the data.
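As a sketch of the backup and online-restore points, assuming the illustrative filegroup and file names used earlier:

```sql
-- Back up a single filegroup rather than the whole database
-- (filegroup, file, and disk paths are illustrative).
BACKUP DATABASE SalesDB
    FILEGROUP = 'FG_Data'
    TO DISK = 'J:\Backups\SalesDB_FG_Data.bak';

-- Online file restore (Enterprise Edition): while the damaged file is
-- being restored, only its filegroup is offline; the rest of the
-- database stays available.
RESTORE DATABASE SalesDB
    FILE = 'SalesData1'
    FROM DISK = 'J:\Backups\SalesDB_FG_Data.bak'
    WITH NORECOVERY;

-- Roll the restored file forward with subsequent log backups.
RESTORE LOG SalesDB
    FROM DISK = 'J:\Backups\SalesDB_Log.trn'
    WITH RECOVERY;
```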
Transaction Log - Special case
The transaction log records all the database modifications that are made by each transaction. The transaction log file is made up of a number of Virtual Log Files (VLFs) that are used sequentially by the SQL Server database engine in a round-robin manner. Therefore, there is limited value in having multiple log files. You can use the DBCC LOGINFO command to identify the active VLFs, and you can use the sys.databases catalog view to determine why SQL Server is waiting to reuse transaction log space. Remember that the amount of log file activity is also affected by the database recovery model you use: simple, full, or bulk-logged. Let us look at some guidelines and characteristics for designing transaction log file structures:
Transaction log files are accessed sequentially by SQL Server, whereas database files are accessed randomly. Due to the serial architecture of the transaction log, you will not realize any performance gains by allocating multiple log files, even on separate disk drives. Consider creating an appropriate disk array and one transaction log file that spans the array to avoid data loss.
The transaction log of your database represents a potential bottleneck in your
database solution. You should use faster and more expensive drives for your
transaction logs. You should carefully determine the RAID solution that you
will implement for your transaction log. The log is write-intensive in most SQL
Server solutions. Therefore, you should not use RAID 5. You can use a RAID 1 or
RAID 10 array for better performance.
Your storage solution should allow you to back up the tail-log in the event of a
database failure. This will enable you to back up all the DML operations in
your database after the last transaction log backup, thus avoiding any data
loss in the event of a disk drive failure.
You need to allocate enough space and configure an appropriate monitoring and alerting solution to ensure that your transaction log does not run out of space.
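A small T-SQL toolbox ties these log points together; the database name and backup path are illustrative:

```sql
-- One row per VLF; a status of 2 marks the active virtual log files.
DBCC LOGINFO ('SalesDB');

-- Why is SQL Server waiting to reuse log space?
-- Typical values: NOTHING, LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'SalesDB';

-- Log size and percentage used for every database on the instance;
-- useful input for a monitoring and alerting job.
DBCC SQLPERF (LOGSPACE);

-- Tail-log backup after a data-file failure: NO_TRUNCATE lets you back up
-- the log even when the database is damaged, so no committed work is lost.
BACKUP LOG SalesDB
    TO DISK = 'J:\Backups\SalesDB_tail.trn'
    WITH NO_TRUNCATE;
```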
RAID and SAN - Best Practices
Now that we have talked about the logical entities, next we move to the actual physical entity where we will land the actual files. The primary purpose of a RAID array is to protect against potential disk failures and help you store information redundantly to avoid data loss. It also improves the performance of the Input/Output subsystem by distributing I/O across the disk drives used concurrently, thereby improving disk Input/Output throughput. There are many RAID levels to choose from, depending on the configuration of the database server. Each level has its advantages and disadvantages. All the disks in a RAID configuration function as one logical storage unit regardless of the RAID level used. Here are some points that you need to keep in mind while working with storage and SQL Server in particular.
RAID 0 provides performance improvements by striping data across multiple physical disks. However, this configuration does not provide fault tolerance. If one physical disk fails, the entire disk set fails and all the data is lost. Therefore, RAID 0 is not appropriate for database servers and high-availability solutions.
RAID 1 provides mirroring of data and fault tolerance. In a RAID 1 configuration,
you need a minimum of two disks. Mirroring affects the performance of write
operations because data is written to both disks at the same time. You can
improve the performance of the read and write operations in RAID 1 by
parallelizing these operations. Generally, SCSI RAID disk controllers can write
to multiple disks simultaneously and thereby improve the performance of read
and write operations. In a RAID 1 configuration, the storage capacity is
exactly half of the total physical capacity available.
RAID 5 uses block-level striping and stores parity data in a different stripe
section. You can use a simple XOR operation and store redundant data in the
system so that you can recover the data in the event of a disk failure. If a
disk drive fails, you can recover its data by performing an XOR operation on
the parity data and the data in the other disk. The parity calculation produces
some overhead. Therefore, RAID 5 is not recommended for write-intensive workloads.
RAID 10, or RAID 1+0, is a stripe of mirrored (RAID 1) disks. With this approach, you
can increase the Input/Output throughput by striping data, and you can protect
the data by using mirroring. RAID 0+1 is a mirror of striping disks. RAID 0+1
is weaker than RAID 10 because if two drives on different sides of the mirror
fail, the entire array of disks will fail. In RAID 10, the system will fail
only if both the failed disks are members of the same mirror set.
A SAN is a shared repository for storing data. One or more servers can connect to
a SAN device and access the data stored in it. In SQL Server clustering, a SAN
provides common storage for a SQL Server instance that can run from any node in
the cluster. The most common SAN configuration is Fibre Channel Networking with
the SCSI command set. SAN devices using this technology appear to the servers
as directly-attached storage devices. A SAN is compatible with all types of
RAID configurations, but it may be configured differently, depending on the
hardware vendor. It is a normal practice to virtualize different RAID
configurations inside a SAN device. A different SAN configuration is iSCSI, which replaces Fibre Channel switches with Ethernet switches, carrying the SCSI command set over TCP/IP.
Most hardware RAID solutions allow you to install spare drives that will be
automatically used by the system when a disk fails. Your server can have more
than one hot spare drive. Spare drives also allow you to schedule the
replacement of damaged disks so that you can avoid the risk of another disk
failure and data loss.
TempDB - Special Mention
The storage solution for the tempdb database should ensure reliable performance; moreover, a damaged disk here can bring the whole instance down. The storage requirements for this database depend on how you plan to utilize it. Certain SQL Server 2005 features, such as MARS, triggers, and snapshot-based isolation levels, are based on row versioning and utilize the tempdb database. Therefore, you need to plan your storage solution accordingly.
You should store the tempdb database on a local disk array because it is I/O-intensive. You should not make it share storage with other I/O-intensive subsystems, such as the Windows pagefile, the temp directory, or other production databases. In most database systems, the tempdb database supports intensive write activity, and this might increase the load on some physical devices.
There is no need to store the tempdb database on a SAN for administrative purposes
because it is rebuilt each time the SQL Server engine starts. However, if you
still need to store the tempdb database on a SAN, the SAN technology that you
have implemented would determine the performance. You should define multiple
files that are stored in multiple physical drives for the tempdb database to
minimize the resource load. You should create as many files as the number of
physical CPUs that the server uses for SQL Server. In a system with 16 or more
CPUs, you should start with a lower number of files, and increase this number
until you reduce the load to a reasonable level.
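A sketch of the one-file-per-CPU advice for a hypothetical four-CPU server follows; the drive letter and sizes are illustrative, and equally sized files matter because SQL Server's proportional-fill algorithm spreads allocations across files of equal free space:

```sql
-- tempdb's default data file has the logical name 'tempdev'.
-- Resize it and add files to match four CPUs (values illustrative);
-- the resize of the existing file takes effect after an instance restart.
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev,  SIZE = 1GB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
     SIZE = 1GB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
     SIZE = 1GB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf',
     SIZE = 1GB, FILEGROWTH = 0);
```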
You can use RAID 10 to store the tempdb database. Using a RAID 5 array is not
appropriate because the tempdb database is write-intensive. It is also not
appropriate to use a RAID 0 array in a high-availability solution because a
drive failure could cause a SQL Server failure.
As always, when working with SQL Server it is a complete 360-degree view of your database that helps it scale and perform at its best. So this is just the start, and you can see how much there was to talk about for the storage system alone. I would urge you to keep monitoring and reviewing the storage-system reports provided by vendors from time to time, to make sure your database / server remains healthy and you catch problems well before they occur.