SQL Azure Table Size

I’ve been fighting with a DotNetNuke install hosted on Azure for a while. We’ve been testing it because there’s a lot we like about Azure, but the performance when editing DotNetNuke sent us down a different route…but that’s another story. In our test site, the SQL database size just kept growing. To find the culprit, I wanted to know the size of each table. This post from Alexandre Brisebois did just what I needed; my only tweak was to order by size.


-- Size and row count for every user table, largest first.
SELECT
    o.name AS [table_name],
    SUM(p.reserved_page_count) * 8.0 / 1024 / 1024 AS [size_in_gb],  -- pages are 8 KB
    p.row_count AS [records]
FROM sys.dm_db_partition_stats AS p
INNER JOIN sys.objects AS o
    ON p.object_id = o.object_id
WHERE o.is_ms_shipped = 0  -- user tables only
GROUP BY o.name, p.row_count
ORDER BY size_in_gb DESC

For me, it was pretty clear that I needed to truncate the DotNetNuke EventLog and ScheduleHistory tables. In production, you’d want to schedule this (or use the dated DELETE sketch below), as it’s amazing how quickly these tables can grow. We hadn’t even put this site into production yet, but the EventLog was 20 GB after just a few months.


TRUNCATE TABLE EventLog
TRUNCATE TABLE ScheduleHistory
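
If you’d rather keep some recent history than wipe the tables completely, a dated DELETE works too. This is just a sketch: the LogCreateDate and EndDate columns are my assumption of a standard DNN schema, so verify them against your database before running it.

-- Keep the last 30 days instead of truncating everything.
-- Column names assume a standard DNN schema; verify before use.
DELETE FROM EventLog
WHERE LogCreateDate < DATEADD(DAY, -30, GETUTCDATE())

DELETE FROM ScheduleHistory
WHERE EndDate < DATEADD(DAY, -30, GETUTCDATE())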

This may or may not be related to anything you need, but in my SQL Azure situation I noticed that even though I had truncated about 20 GB of data, the Azure Dashboard didn’t reflect it. Digging around, it seems to be related to my indexes being fragmented. So, following Dilkush Patel’s post, I ran this query to see my fragmentation. (Note: I did make a minor change to order by the fragmentation percentage.)


-- Fragmentation for every index in the database, worst first.
SELECT
    DB_NAME() AS DBName,
    OBJECT_NAME(ps.object_id) AS TableName,
    i.name AS IndexName,
    ips.index_type_desc,
    ips.avg_fragmentation_in_percent
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
    ON ps.object_id = i.object_id
    AND ps.index_id = i.index_id
-- 'LIMITED' is the cheapest scan mode; it's enough for fragmentation numbers.
CROSS APPLY sys.dm_db_index_physical_stats(DB_ID(), ps.object_id, ps.index_id, NULL, 'LIMITED') ips
ORDER BY ips.avg_fragmentation_in_percent DESC, ps.object_id, ps.index_id

For me, several tables were over 60% fragmented, so I ran Dilkush’s script:


-- Rebuild every index on every user table, falling back to an
-- offline rebuild where ONLINE = ON isn't supported.
DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
SELECT '[' + IST.TABLE_SCHEMA + '].[' + IST.TABLE_NAME + ']' AS [TableName]
FROM INFORMATION_SCHEMA.TABLES IST
WHERE IST.TABLE_TYPE = 'BASE TABLE'

OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT('Rebuilding indexes on ' + @TableName)
    BEGIN TRY
        EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD WITH (ONLINE = ON)')
    END TRY
    BEGIN CATCH
        PRINT('Cannot rebuild with ONLINE = ON, taking table ' + @TableName + ' offline to rebuild')
        EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD')
    END CATCH
    FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor
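
As an aside, rebuilding every index on every table is a big hammer. Here’s a sketch of a lighter variant that reuses the same DMV as the fragmentation query above and only rebuilds indexes past the commonly cited 30% threshold; it assumes ONLINE = ON is available, so drop that clause if your edition rejects it.

-- Build one ALTER INDEX ... REBUILD statement per fragmented index, then run them.
DECLARE @sql nvarchar(max) = N''

SELECT @sql = @sql
    + N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id))
    + N'.' + QUOTENAME(OBJECT_NAME(ips.object_id))
    + N' REBUILD WITH (ONLINE = ON); '
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
INNER JOIN sys.indexes i
    ON i.object_id = ips.object_id
    AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
    AND i.name IS NOT NULL                                 -- skip heaps
    AND OBJECTPROPERTY(ips.object_id, 'IsMSShipped') = 0   -- user objects only

EXEC sp_executesql @sql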

However, after running Dilkush’s script (several times, in fact), the usage in my Azure Dashboard has not changed. I’ll wait and see if it changes and update this post if it does.
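
If you want to reconcile the numbers yourself, you can sum the same DMV used in the first query to see the space the database reports from the inside. My (unconfirmed) assumption is that the dashboard reflects allocated rather than used space, which could explain the lag.

-- Total space used as reported from inside the database, in GB.
SELECT SUM(reserved_page_count) * 8.0 / 1024 / 1024 AS [used_gb]
FROM sys.dm_db_partition_stats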

Azure database ‘xxx’ has reached its size quota

I keep bumping into this issue, so I thought I’d post my steps to resolution. When you create an Azure SQL database, it sets a size limit for the database. When the database fills up, you’ll get an error like this:

The database ‘XXX’ has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions.

The fix requires two steps: first you increase the database size in the Azure portal, and then you alter the database itself in SQL.

  1. Login to the Azure Management Portal and go to your SQL Database.
  2. On the Dashboard tab, you should see that the size of your database is 100% of your total in the Usage Overview.
  3. Click on the Scale tab and change the Max Size and click Save.


That fixes the Azure max size limit, but now we need to update our SQL database itself.

  1. Fire up SQL Management Studio and connect to your Azure SQL Server.
  2. On your database, open a New Query.
  3. Run this query replacing YOURDATABASE with the name of your database in both locations.

-- Pass the bare database name: wrapping it in [brackets] makes DATABASEPROPERTYEX
-- look for a database literally named "[YOURDATABASE]" and return NULL.
SELECT
    DATABASEPROPERTYEX('YOURDATABASE', 'EDITION') AS Edition,
    CONVERT(bigint, DATABASEPROPERTYEX('YOURDATABASE', 'MAXSIZEINBYTES')) / 1024 / 1024 / 1024 AS [Max Size in GB]

  4. This shows you your Azure database edition and current max size. Now run this query on the MASTER database, changing the database name, edition, and MAXSIZE values as needed.

ALTER DATABASE [YOURDATABASE] MODIFY (EDITION='Standard', MAXSIZE=40GB)
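
The change can take a little while to apply; rerunning the property check from above should then report the new limit:

SELECT CONVERT(bigint, DATABASEPROPERTYEX('YOURDATABASE', 'MAXSIZEINBYTES')) / 1024 / 1024 / 1024 AS [Max Size in GB]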

Hit refresh on your site and it should now come up without error.

SQL Server Migration Assistant for Access nightmare

Getting from Access to SQL is not as much fun as it should be, and it seems to get harder with each release. The Upsizing Wizard is gone in Access 2013, and the recommended path is now the SQL Server Migration Assistant (SSMA).

Like a lot of people, I run a 64-bit Windows OS with 32-bit Office (which is Microsoft’s recommendation). When running SSMA, I kept hitting the following error:


Access Object Collector error: Database

     Retrieving the COM class factory for component with CLSID {CD7791B9-43FD-42C5-AE42-8DD2811F0419} failed due to the following error: 80040154. This error may be a result of running SSMA as 64-bit application while having only 32-bit connectivity components installed or vice versa. You can run 32-bit SSMA application if you have 32-bit connectivity components or 64-bit SSMA application if you have 64-bit connectivity components, shortcut to both 32-bit and 64-bit SSMA can be found under the Programs menu. You can also consider updating your connectivity components from http://go.microsoft.com/fwlink/?LinkId=197502.

     An error occurred while loading database content.


Based on this post, I’d run regsvr32 on Dao360.dll and added that folder to my environment’s PATH, but neither helped. For others, running the 32-bit version of SSMA was the suggested fix, but that didn’t work for me either. I almost went down the path of setting CorFlags but just didn’t feel that was my issue.

Thinking it was all about 32-bit vs. 64-bit, I pulled out my tablet, which runs 32-bit Windows 8.1, but it hit the exact same error. Finally, I found this post with the fix: install the Microsoft Access Database Engine 2010 Redistributable. That post used the 2007 edition, but I used 2010 and it worked fine. It may also work with the 2013 engine, but once the 2010 edition worked, I moved on. I’m not sure why having Access 2013 installed isn’t enough, but I know a lot of other people are struggling with this issue and not getting much support from the SSMA team. In fact, support for that product seems really lacking: on the SSMA 5.2 version page (5.3 is out and is the one I installed), there were several comments (some very frustrated) describing my exact issue, with no response from the Microsoft team. I emailed their help address, which replied with an auto-response telling me to open a ticket. I did, but still no response. Hopefully this post will help someone and you won’t feel so alone. 🙂

DPM 2012 fails to backup SQL 2012 database

Our SQL 2008 backups were working just fine with Data Protection Manager 2012 until we upgraded to SQL 2012. Then we started getting this error:

The DPM job failed for SQL Server 2012 database <SQL database> on <our sql server> because the protection agent did not have sysadmin privileges on the SQL Server instance. (ID 33424 Details: )

The suggested action is to add “NT Service\DPMRA” to the sysadmin role on the SQL Server instance. That’s very specific, so that must be the fix. The problem is I don’t have an ‘NT Service\DPMRA’ user in Windows or SQL. Here’s the fix:

  1. In SQL Management Studio, connect to the SQL 2012 Server and then expand Security.
  2. Expand Logins and right click on NT AUTHORITY\SYSTEM and select Properties.
  3. Click Server Roles, check sysadmin, and click OK (or use the T-SQL below).
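
If you’d rather do it with a query, the T-SQL equivalent of those steps (using the ALTER SERVER ROLE syntax introduced in SQL 2012) is:

-- Grant sysadmin to the account the DPM agent connects as.
ALTER SERVER ROLE [sysadmin] ADD MEMBER [NT AUTHORITY\SYSTEM]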

I read a post saying you could also add NT Service\DPMRA as the Recommended Action in DPM states, but I don’t have that as a SQL login and wasn’t able to find it as a Windows account to create one.

Once I added sysadmin to the server roles, I was able to right-click and run “Perform Consistency Check…”, and everything took off. You may also go to the jobs themselves and click “Run configuration protection job again”.

Note: for me, the consistency checks still failed on some, but this time with a new (more common) error saying “Recovery point creation failed”. The fix was simply to create a new recovery point.