In the previous post, we talked about when you should use identity/sequence as a clustered index key and when it’s problematic.
In a nutshell: since we traditionally want our clustered index key to be narrow, static and ever-increasing, Identity is in many cases a very good choice. But since all inserts go to the last page of the index, when you load data from many sessions in parallel, they interfere with each other, because each session has to acquire a latch on that page in order to insert rows into it. This problem is sometimes referred to as the tail insert problem.
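If you suspect you are hitting this problem, one way to confirm it while the load is running is to look for sessions waiting on exclusive page latches (a diagnostic sketch; in a tail insert scenario, many sessions typically wait on the very same page):

-- Sessions currently waiting on an exclusive page latch; under tail insert
-- contention, wait_resource will show the same page for many sessions
SELECT session_id, wait_type, wait_resource, wait_time
FROM sys.dm_exec_requests
WHERE wait_type = 'PAGELATCH_EX';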
Today, we will talk about the possible solutions for this situation. We will use the following table and see how each solution changes its creation script:
Transact-SQL
CREATE TABLE dbo.UserEntries_RegularWithIdentity
(
Id BIGINT IDENTITY PRIMARY KEY CLUSTERED,
UserId INT NOT NULL,
CreatedDate DATETIME2 NOT NULL
)
Solution #1: Load to a Heap
Loading into a heap works differently than loading into a clustered index. Instead of traversing the index tree to find the relevant page for the insert, SQL Server uses the combination of the GAM, SGAM and PFS allocation maps to find a page to insert the row into. When loading into a heap, a common problem is that the PFS, which is a shared data structure, becomes a hotspot. The way to solve this is simply to add more data files to the database/filegroup, since each file (and each ~64MB of data, to be exact) has a separate PFS page.
Transact-SQL
CREATE TABLE dbo.UserEntries_RegularWithIdentity
(
Id BIGINT IDENTITY PRIMARY KEY NONCLUSTERED,
UserId INT NOT NULL,
CreatedDate DATETIME2 NOT NULL
)
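Adding the extra data files can look like this (a sketch; the database name, file names, paths and sizes are placeholders you would adjust to your environment):

-- Hypothetical example: add two more data files so that concurrent inserts
-- are spread across several sets of allocation maps (one PFS chain per file)
ALTER DATABASE MyDatabase
ADD FILE (NAME = 'MyDatabase_Data2', FILENAME = 'D:\Data\MyDatabase_Data2.ndf', SIZE = 1GB);
ALTER DATABASE MyDatabase
ADD FILE (NAME = 'MyDatabase_Data3', FILENAME = 'D:\Data\MyDatabase_Data3.ndf', SIZE = 1GB);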
Solution #2: Work with In-Memory OLTP
The In-Memory OLTP (aka Hekaton) engine uses lock-free data structures in order to avoid latches altogether. By doing so (and other super-cool stuff), inserting data from many threads in parallel can become much faster. The problem is that working with In-Memory OLTP in SQL Server 2014 is pretty hard: migration is difficult, you have to work with a binary collation, table size is limited, and there are many other limitations. In SQL Server 2016, however, it will be much easier to work with this technology. It's more mature, and many of the limitations are gone.
Transact-SQL
CREATE TABLE dbo.UserEntries_InMemoryWithIdentity
(
Id BIGINT IDENTITY(1,1) NOT NULL
PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
UserId INT NOT NULL,
CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON)
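To get the full benefit of the engine, the inserts themselves can go through a natively compiled stored procedure. A minimal sketch (the procedure and parameter names are illustrative, not from the original post):

CREATE PROCEDURE dbo.InsertUserEntry @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
	-- Each call inserts one row; the identity value is generated by the engine
	INSERT INTO dbo.UserEntries_InMemoryWithIdentity (UserId, CreatedDate)
	VALUES (@UserId, SYSDATETIME());
END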
Solution #3: Work with a non-sequential key
Another solution is to use a non-sequential key, so that data is loaded all over the index and not only at its end. By doing this, we spread the load instead of creating a single hotspot. At first this sounds like a very bad idea, because it generates lots of page splits and fragmentation during the inserts, but it turns out page splits are less harmful to insert performance than latch contention. Also, in order to limit the amount of fragmentation, you can work with a low fillfactor (even 10%): this essentially pre-creates "buckets" that are ready to receive each row, instead of generating page splits when the inserts occur. If you use this method, watch your table size, as this approach can make the table considerably larger than the other methods.
Transact-SQL
CREATE TABLE dbo.UserEntries_RegularWithNewID
(
Id UNIQUEIDENTIFIER DEFAULT NEWID()
CONSTRAINT PK_UserEntries_RegularWithNewID PRIMARY KEY CLUSTERED,
UserId INT NOT NULL,
CreatedDate DATETIME2 NOT NULL
)
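To pre-allocate those "buckets", remember that fillfactor is applied only when the index is built or rebuilt, not to ongoing inserts. A sketch of how you might apply it (10% is the extreme mentioned above; you would tune the value to your insert rate):

-- After an initial load, rebuild the clustered index mostly empty so that
-- subsequent random (NEWID-based) inserts find free space on existing pages
-- instead of splitting them
ALTER INDEX PK_UserEntries_RegularWithNewID
ON dbo.UserEntries_RegularWithNewID
REBUILD WITH (FILLFACTOR = 10);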
Solution #4: Work with Hash Partitioning
This solution uses SQL Server partitioning in order to split the clustered index into a few separate physical trees and spread the load between those trees. It involves adding a computed column, based on the Identity (or sequence) column, that determines which tree each row goes to. So if, for example, we want to spread the load across nine hash buckets, the computed column will be IdentityColumn % 9, as in the script below.
Transact-SQL
CREATE PARTITION FUNCTION pf_hash (TINYINT)
AS RANGE LEFT FOR VALUES (0,1,2,3,4,5,6,7,8)

CREATE PARTITION SCHEME ps_hash
AS PARTITION pf_hash ALL TO ([PRIMARY])

CREATE TABLE dbo.UserEntries_RegularWithHash
(
Id BIGINT IDENTITY NOT NULL,
UserId INT NOT NULL,
CreatedDate DATETIME2 NOT NULL,
HashId AS CAST(Id % 9 AS TINYINT) PERSISTED NOT NULL,
CONSTRAINT PK_UserEntries_RegularWithHash
PRIMARY KEY CLUSTERED (Id, HashId)
)
ON ps_hash(HashId)
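One way to check that the load is indeed spread across the physical trees (a diagnostic sketch) is to count the rows per partition:

-- Rows per partition of the clustered index: with identity values arriving
-- evenly, each hash partition should hold roughly the same number of rows
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.UserEntries_RegularWithHash')
AND p.index_id = 1
ORDER BY p.partition_number;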
Solution #5: Pad the table
This solution adds a filler Char column in order to make each row bigger, so that fewer rows fit in a single page, which reduces the number of sessions competing for a latch on each page.
If (and only if) your table is small, you can pad each row so that only one row fits in each page. This eliminates latch contention entirely, since each session gets a page of its own, and sessions no longer have to fight over the same page. Adding a char(4100) column will do the job.
Transact-SQL
CREATE TABLE dbo.UserEntries_RegularWithIdentity
(
Id BIGINT IDENTITY PRIMARY KEY CLUSTERED,
UserId INT NOT NULL,
CreatedDate DATETIME2 NOT NULL,
PadColumn CHAR(4100) NOT NULL
)
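You can verify the padding achieved its goal (a diagnostic sketch) by comparing the leaf-level page count to the row count; with one row per page, they should match:

-- With one ~8KB row per page, page_count should equal record_count
-- at the leaf level (index_level = 0)
SELECT index_level, page_count, record_count, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats
	(DB_ID(), OBJECT_ID('dbo.UserEntries_RegularWithIdentity'), NULL, NULL, 'DETAILED')
WHERE index_level = 0;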
Solution #6: Use a business key
In many cases, your business key will not be an ever-increasing column. In such a case, and assuming the distribution of the rows is relatively even and does not create a hotspot of its own, using the business key as the clustered index key should solve the tail insert problem.
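For example, if entries are always looked up per user, clustering on the business columns might look like this (a sketch; the table name is hypothetical, and it assumes (UserId, CreatedDate) is unique for your data, otherwise you would add a uniquifier column):

-- Hypothetical example: cluster on the business key instead of an identity;
-- concurrent sessions insert under different UserId values, so their inserts
-- land on different pages of the index instead of one tail page
CREATE TABLE dbo.UserEntries_BusinessKey
(
UserId INT NOT NULL,
CreatedDate DATETIME2 NOT NULL,
CONSTRAINT PK_UserEntries_BusinessKey PRIMARY KEY CLUSTERED (UserId, CreatedDate)
)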
Solution #7: Implement a “reverse index”
A reverse index means taking the bits of a number and flipping their order. This distributes the values across the index and eliminates the hotspot at its end. It is pretty similar to the non-sequential key solution, but instead of using a Uniqueidentifier, which is 16 bytes, you can use an Int or a Bigint in order to keep the clustered and non-clustered indexes narrow.
While other database systems have a built-in reverse index, in SQL Server we have to implement it ourselves with a user-defined function (or, even better, a natively compiled In-Memory OLTP user-defined function).
For more information and for the function code, read the article on Rick’s SQL Server Blog.
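As a rough illustration of the idea (a minimal loop-based sketch, not the optimized function from the referenced article), such a function can look like this:

-- Reverses the low 62 bits of a non-negative BIGINT, so consecutive sequence
-- values land far apart in the index (the sign bit is left clear)
CREATE FUNCTION dbo.BitReverse (@Input BIGINT)
RETURNS BIGINT
WITH SCHEMABINDING
AS
BEGIN
	DECLARE @Output BIGINT = 0, @Bit INT = 0;
	WHILE @Bit < 62
	BEGIN
		SET @Output = @Output * 2 + @Input % 2; -- append the lowest input bit
		SET @Input = @Input / 2;                -- drop the bit we just consumed
		SET @Bit += 1;
	END;
	RETURN @Output;
END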
The table creation script will look like this:
Transact-SQL
CREATE TABLE dbo.UserEntries_BitReverse
(
Id BIGINT PRIMARY KEY CLUSTERED,
UserId INT NOT NULL,
CreatedDate DATETIME2 NOT NULL
)
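The insert example that follows assumes a sequence exists to feed the function; a minimal definition (the CACHE size is an arbitrary illustrative choice):

CREATE SEQUENCE dbo.SeqForBitReverse AS BIGINT
START WITH 1 INCREMENT BY 1 CACHE 1000;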
And here’s an example for an insert:
Transact-SQL
INSERT INTO dbo.UserEntries_BitReverse
SELECT dbo.BitReverse(NEXT VALUE FOR SeqForBitReverse), 1, GETDATE()
After a few inserts, the table will look like this:
In Conclusion
There are a few options for solving the tail insert problem. Each has its pros and cons. Weigh them against your system and get to work.