A while back I worked on an issue where a client hit the 8060 byte limit for a row in SQL Server. 8060 bytes is the maximum size of a single row, as that is the size of a page minus the header (this doesn’t include LOB data types, which are handled differently). They were using software functionality that adds and removes columns on a table as fields are added and removed within the software. The issue this particular client was seeing was that they had reached the limit when adding new fields (columns), but when they removed fields (dropped columns) to try to make room, they still saw the error message.

At first this didn’t quite make sense. The column had been dropped, so that space should be available, right? Ultimately we found this Microsoft bug report, which was closed as functioning to spec.

From Microsoft:

“This behaviour is expected. Dropping a column is a metadata-only change and can leave gaps in column-offsets on the row. When new columns are added to such a table the space left by the dropped columns may or may not be reused for the new column; it depends on the size and type of the old/new columns. Bottom-line is that if you drop/add columns enough times there will be a point at which the max-fixed-size of the row will be exceeded due to the “holes” left behind by the dropped columns.

How to fix this? If your table has a clustered index, then just rebuilding the clustered index will recompact the rows getting rid of the holes left behind by dropped columns. If your table is a heap, then there is no direct way, but you could create/drop a dummy clustered index just to get this above effect. You could use DBCC CleanTable, but it only reclaims space from dropped variable length columns (not fixed length columns). We are looking into the heap issue for a future release.”

We rebuilt the clustered index as instructed and this resolved the issue. While it was great that the issue was resolved and the explanation made sense, I still wanted to see it for myself. Below are the steps I took to step through this.

Let’s create a test table with eight CHAR(1000) columns and populate it with some data. This will bring us to 8000 bytes of fixed-length data per row.

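The setup looks roughly like this (the table and column names are my assumptions, since the original script isn’t shown; the single-character values pad out to 1000 bytes each because the columns are fixed-length CHAR):

```sql
-- Eight CHAR(1000) columns = 8000 bytes of fixed-length data per row.
CREATE TABLE dbo.RowSizeTest (
    col1 CHAR(1000), col2 CHAR(1000), col3 CHAR(1000), col4 CHAR(1000),
    col5 CHAR(1000), col6 CHAR(1000), col7 CHAR(1000), col8 CHAR(1000)
);

INSERT INTO dbo.RowSizeTest (col1, col2, col3, col4, col5, col6, col7, col8)
VALUES ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H');
```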

Now let’s try to add another CHAR(1000) column. It will fail as this would exceed the 8060 byte limit. This is what the client saw in their environment.

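Sticking with my assumed table name, the failing statement would be something like:

```sql
-- A ninth CHAR(1000) column would push the row past 8060 bytes,
-- so SQL Server rejects the ALTER with an error about exceeding
-- the maximum allowable row size.
ALTER TABLE dbo.RowSizeTest ADD col9 CHAR(1000);
```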

Now let’s drop one of the columns, col5. This should free up 1000 bytes, right?
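The drop itself (against my assumed table name):

```sql
-- Dropping a column is a metadata-only change; no rows are rewritten.
ALTER TABLE dbo.RowSizeTest DROP COLUMN col5;
```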

With 1000 bytes free we should be able to add another column. However, when we try we see that same familiar error.

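In my sketch, this is simply the earlier statement run again:

```sql
-- Fails again with the same row-size error, even though the
-- metadata no longer shows col5.
ALTER TABLE dbo.RowSizeTest ADD col9 CHAR(1000);
```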

But we dropped a column so shouldn’t that free up space? It certainly appears that way when we look at sys.columns.

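A query along these lines shows the metadata view (table name assumed as before):

```sql
-- col5 is gone from the metadata, and the remaining fixed-length
-- columns add up to only 7000 bytes.
SELECT c.name, c.max_length
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('dbo.RowSizeTest');
```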

Something has to give, so let’s take a look at what is written to the actual data page. First we will use DBCC IND to locate one of the pages we will look at.

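Assuming a database named TestDB, the call looks like this (DBCC IND is undocumented, so treat the syntax as “works on my machine”):

```sql
-- Lists the pages allocated to the table; rows with PageType = 1
-- are data pages. Note the PageFID and PagePID of one of them.
DBCC IND ('TestDB', 'dbo.RowSizeTest', -1);
```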

Next we will turn on trace flag 3604 and use DBCC PAGE to view the page contents. We dropped col5, which contained the value ‘E’, so let’s scroll down and look for it. We see the values ‘D’ and ‘F’, and between them ‘DROPPED = NULL’ where ‘E’ used to exist, and we can see that it is still taking up 1000 bytes.

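The commands look roughly like this; the file and page IDs below are placeholders — substitute the PageFID and PagePID you got back from DBCC IND:

```sql
-- Trace flag 3604 redirects DBCC output to the client session
-- instead of the error log.
DBCC TRACEON (3604);

-- (database, file_id, page_id, print option 3 = per-row detail)
DBCC PAGE ('TestDB', 1, 312, 3);
```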

There is no clustered index on this table, so rather than rebuilding a clustered index we will just rebuild the heap. Then we will follow the same steps to view one of our data pages.


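With the same assumed names, the rebuild and re-inspection look like this (ALTER TABLE … REBUILD requires SQL Server 2008 or later):

```sql
-- Rebuilding the heap rewrites every row, compacting out the
-- space left behind by the dropped column.
ALTER TABLE dbo.RowSizeTest REBUILD;

-- The rebuild allocates new pages, so find a current data page
-- again before re-running DBCC PAGE.
DBCC IND ('TestDB', 'dbo.RowSizeTest', -1);
```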

Now when we look at the page, we no longer see the 1000 bytes col5 used to consume, and if we try to add another column it will succeed.

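The final step is just the statement that failed twice before, now succeeding against my assumed table:

```sql
-- With the hole from col5 compacted away, the row has room again.
ALTER TABLE dbo.RowSizeTest ADD col9 CHAR(1000);
```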