CID Superfecta returning CID Superfecta! on Anonymous

If you’re using Asterisk/FreePBX, you’re probably using CID Superfecta for your caller ID lookups. We started noticing that if the number wasn’t provided, the name returned was CID Superfecta!, which looks odd to users. I started digging and figured out that if a CID isn’t provided, the trunk supplies 5555555555.

Scheme Asked is: Default
The DID is: 5555555555
The CNUM is: anonymous
The CNAME is: CID Superfecta!

Executing Trunk Provided
Looking for Trunk Provided Caller ID ...
found value of CID Superfecta! ...
determined good.
'CID Superfecta!'
result took 0.0007 seconds.

I found a resolved bug report for CID Superfecta where 5555555555 was returning CID Superfecta! from the trunk. That seems to be exactly the issue, but it was fixed in Superfecta v13.03.19, and when I checked for module updates I already had the latest, yet I still had the problem. Fortunately, there’s an easy workaround: just add an entry for 5555555555 to the Asterisk Phonebook and set its CNAME to whatever you want.

I set mine to “Unknown”, and since we have the Asterisk Phonebook at the top of our CID Superfecta sources, lookups now return Unknown rather than CID Superfecta!.

A little tip if you want to test: prefix the number with *67 to temporarily block your caller ID. *67 blocks the CID for just that one call, so it’s a great way to test from your cell phone.

Why Google Home Caused Me to Switch to Spotify

I love Google Home. We have it controlling lights, locks, and anything else we can. But by far the main thing we use it for is playing music. So when we started, I reviewed who had the best music streaming service, and the winner was Google Play Music. In just about every way, Google Play Music is better than Spotify. I won’t go into all the details since you can find plenty of articles comparing the services, but briefly, some key differences for me: the number of offline devices is 10 with Google versus 5 with Spotify, and the number of people on a family plan is 6 with Google versus 5 with Spotify. And I assumed Google Play would work best with Google Home…makes sense, right? Ironically, it doesn’t!

Part of the Google Home integration does work better. I loved being able to say “Hey Google, I like this song” and have it automatically added to my ‘thumbs up playlist’ (Spotify makes thumbs up so hard, but that’s another story). What didn’t work was getting Google Home to play the playlist you want on Google Play Music. Sometimes a playlist that had worked for weeks would just stop being recognized, and Home would instead play some album with a somewhat similar name. You can find lots of posts from people with this issue. Rather than giving your playlist priority, for some reason it chooses an album or artist first. People have tricks like naming playlists something simple like “first”, “second”, “third” or something unique like “octopus” or “asparagus”. The simple names helped but still weren’t flawless.

Then, if you’re a country music fan, they broke it altogether. What I would call the “main” country playlist was called Hot Country. I could say “Hey Google, play playlist Hot Country” and it worked like a champ for months. Then all of a sudden Google didn’t know what I meant. I checked online and it was now called “Today’s Country Hits”, but Home wouldn’t recognize that either. I think the name change killed it: in Stations it’s ‘Today’s Country Hits’, but if you save it to your library it appears under the old name ‘Country Hotlist’. I opened two tickets with Google Home support trying to help them fix it but finally gave up.

This totally broke my routine: something that had been an easy voice command now required me to start the music on my phone and cast it to Home. If it’s not hands-free, what’s the point?

So I switched back to Spotify, set it as my primary music service, and now I just say “Okay Google, play Hot Country” and it works like a champ. I’m not sure if others had the same experience, but for me it was actually Google Home that caused me to switch to Spotify. I’ll be curious whether YouTube Music works better, but switching isn’t easy with a family plan since everyone gets tired of being yanked around after getting their favorites and playlists set up the way they want. So for now, we’ll stick with Spotify and Google Home.

Sophos XG DHCP Scope Not Working for VLAN

I have been fighting with implementing a voice VLAN on Sophos XG for months. We’d set the VLAN to ‘voice’ on the switch and the phones would join the voice VLAN with no problem, but then some phones just would not get a DHCP address. I had been searching and searching, but somehow today I got lucky and found Sophos KB article 123952. The issue for me was that the phones had been on the LAN DHCP scope, so XG wouldn’t give them a new IP on the voice VLAN. I have no idea why some phones could bounce from VLAN to VLAN without issue and others couldn’t, but the fix makes that a moot point. The key is to set the static entry scope to global. Issue this command from the Sophos CLI:

system dhcp static-entry-scope global

For me that was an instant fix. I rebooted the phones and everything worked as expected. And while I say this is the fix (and it is), the reason isn’t 100% clear. Today I had a phone that had no entry in DHCP on the LAN scope, but it would not get a voice VLAN IP from DHCP. Yet as soon as I set the static-entry-scope to global, it worked immediately after a reboot. XG doesn’t give you a way to flush DHCP leases, which I really don’t like. Since DHCP is one of the least developed areas of the XG GUI (seriously, we can’t set DHCP options in the web GUI!), my guess is that even though the MAC and IP aren’t showing in the web GUI, the record is still sitting somewhere in XG’s internal DHCP table. That’s just a guess…and again, the key is that this should fix your issue.


Restart Service on Remote Computer with PowerShell

If you search for “PowerShell Restart Service Remote Computer”, the challenge is that a lot of the top results are for earlier versions of PowerShell and far more complicated than needed. For some reason, Microsoft doesn’t give the Restart-Service command a -ComputerName switch…I guess that would be too intuitive. After some digging, I found it’s still easy; you just need to Get-Service first. Below is an example of restarting BITS. I used a wildcard just to show that wildcards work. Run this from an elevated PowerShell prompt and replace <COMPUTERNAME> with the name of your computer.


Get-Service -Name BIT* -ComputerName <COMPUTERNAME> | Restart-Service
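
One note: the cross-platform PowerShell releases (6 and later) dropped the -ComputerName parameter from the service cmdlets. If that’s what you’re running, a PowerShell remoting call does the same job (assuming WinRM is enabled on the target):

# Restart BITS on a remote computer over PS remoting
Invoke-Command -ComputerName <COMPUTERNAME> -ScriptBlock { Restart-Service -Name BITS }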

This works great for restarting a service on a few remote computers. If you need to restart a service on every computer in your domain, here’s a script to help with that process. It needs to run on an AD server since it requires Get-ADComputer, or you’ll need to install the ActiveDirectory PowerShell module on the server where you’ll be running it. The value here is that it pulls all the active computers and restarts your selected service. This helped us with an issue where our remote access software, ScreenConnect, started dropping out of our console. A bug in their keepalives causes the issue and the temp fix is to restart the service…but sometimes it’s hard to remember which machines are missing, so this goes through all the active computers in your domain and restarts the service.


$today = Get-Date
$cutoffdate = $today.AddDays(-15)

# Export every computer that has logged on in the last 15 days
Get-ADComputer -Properties LastLogonDate -Filter {LastLogonDate -gt $cutoffdate} | Select-Object -ExpandProperty DNSHostName | Out-File C:\All-Computers.txt

$computers = Get-Content "C:\All-Computers.txt"
$amount = $computers.Count
$a = 0

foreach ($computer in $computers)
{
   $a++
   Write-Progress -Activity "Working..." -CurrentOperation "$a complete of $amount" -Status "Please wait. Restarting service."
   # Restart the service remotely; silently skip machines that are offline or error out
   Invoke-Command -ComputerName $computer { Restart-Service -Name 'ScreenConnect Client (f95335af7be34c6f)' } -ErrorAction SilentlyContinue
}

Enjoy!

Can’t Change Display Brightness on Windows 10

If after upgrading to Windows 10 you can’t change your display brightness (the option may be missing in Settings/Display), this may be your fix.

  1. Right-Click the Start button and select Device Manager
  2. Expand the Monitors section
  3. Right-click on Generic PnP Monitor and click on Enable
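
If you’d rather script it, here’s a minimal PowerShell sketch of the same fix using the built-in PnpDevice cmdlets, assuming the Generic PnP Monitor is the device that’s disabled (run it from an elevated prompt):

# Disabled devices report a Status of 'Error'; find any disabled monitors and re-enable them
Get-PnpDevice -Class Monitor | Where-Object Status -eq 'Error' | Enable-PnpDevice -Confirm:$false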

After fighting to find the latest video drivers (HP hasn’t released Win 10 drivers for my Pavilion), this fixed my issue.

Windows 10 Fix for Remote Gateway VPN Bug

When connected to a VPN, we often want to keep using our local connection for Internet traffic rather than forwarding it through the tunnel. With Win 10, there’s a bug that prevents us from clicking the Properties button for TCP/IP v4. David Carroll posted a fix here. Other posts suggest editing the RAS phonebook, but David’s PowerShell method is much easier to me. I’m posting the steps here as well (mostly for my records). One key thing I’ve noticed: if you have a space in the name of your VPN, Get-VpnConnection won’t return your connection info. So with a connection named “VPN 1” this method wouldn’t work for me, but named “VPN1” it works fine.

  1. From PowerShell, run Get-VpnConnection while connected to your VPN. You’ll notice that SplitTunneling is set to False.
  2. Run Set-VpnConnection “VPN1” -SplitTunneling 1 (replace VPN1 with the name of your VPN as returned by Get-VpnConnection).
  3. Disconnect from your VPN session and reconnect.
  4. In Bing or Google, just type “what is my ip” and you should see your local Internet IP rather than the one going through the VPN.
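
For reference, here’s the whole sequence as a quick sketch (assuming a connection named VPN1):

# Check the current split tunneling state
Get-VpnConnection -Name "VPN1" | Select-Object Name, SplitTunneling

# Send Internet traffic out the local connection instead of through the tunnel
Set-VpnConnection -Name "VPN1" -SplitTunneling $true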

That should do it.

SQL Azure Table Size

I’ve been fighting with a DotNetNuke install hosted on Azure for a while. We’ve been testing because there’s a lot we like about Azure, but the performance when editing DotNetNuke caused us to go a different route…that’s another story. In our test site, the SQL database size just kept growing. To find the culprit, I wanted to know the size of each table. This post from Alexandre Brisebois did just what I needed. My only tweak was to order by the size.


SELECT
o.name AS [table_name],
sum(p.reserved_page_count) * 8.0 / 1024 / 1024 AS [size_in_gb],
p.row_count AS [records]
FROM
sys.dm_db_partition_stats AS p,
sys.objects AS o
WHERE
p.object_id = o.object_id
AND o.is_ms_shipped = 0

GROUP BY o.name , p.row_count
ORDER BY size_in_gb DESC

For me, it was pretty clear that I needed to truncate the DotNetNuke EventLog and ScheduleHistory.  In production, you would want to schedule this as it’s amazing how quickly these can grow.  We hadn’t put this site into production yet but the EventLog was 20 GB after just a few months.


truncate table EventLog
truncate table ScheduleHistory
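
Since you’d want to schedule this in production, here’s a minimal sketch of running the cleanup from PowerShell, assuming the SqlServer module’s Invoke-Sqlcmd and placeholder server, database, and credential values:

# Truncate the runaway DotNetNuke log tables on the Azure SQL database
Invoke-Sqlcmd -ServerInstance "yourserver.database.windows.net" -Database "YourDatabase" -Username "youruser" -Password "yourpassword" -Query "TRUNCATE TABLE EventLog; TRUNCATE TABLE ScheduleHistory;"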

This may or may not be related to anything you need, but in my SQL Azure situation I noticed that even though I had truncated about 20 GB of data, the Azure Dashboard didn’t reflect it. Digging around, it seems to be related to my indexes being fragmented. So, following Dilkush Patel’s post, I ran this query to see my fragmentation. (Note: I made a minor change to order by the % of fragmentation.)


SELECT
DB_NAME() AS DBName
,OBJECT_NAME(ps.object_id) AS TableName
,i.name AS IndexName
,ips.index_type_desc
,ips.avg_fragmentation_in_percent
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
ON ps.object_id = i.object_id
AND ps.index_id = i.index_id
CROSS APPLY sys.dm_db_index_physical_stats(DB_ID(), ps.object_id, ps.index_id, null, 'LIMITED') ips
ORDER BY ips.avg_fragmentation_in_percent desc, ps.object_id, ps.index_id

I had several tables over 60% fragmented, so I ran Dilkush’s script:


DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
SELECT '[' + IST.TABLE_SCHEMA + '].[' + IST.TABLE_NAME + ']' AS [TableName]
FROM INFORMATION_SCHEMA.TABLES IST
WHERE IST.TABLE_TYPE = 'BASE TABLE'

OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0

BEGIN
PRINT('Rebuilding Indexes on ' + @TableName)
Begin Try
EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD with (ONLINE=ON)')
End Try
Begin Catch
PRINT('Cannot do rebuild with Online=On option, taking table ' + @TableName+' down to rebuild')
EXEC('ALTER INDEX ALL ON ' + @TableName + ' REBUILD')
End Catch
FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor

However, after running this (several times in fact) the usage in my Azure Dashboard has not changed.  I’ll wait and see if it changes and update this post if it does.

Azure database ‘xxx’ has reached its size quota

I keep bumping into this issue, so I thought I’d post my steps to resolution. When you create an Azure SQL database, it sets a size limit for the database. When the database fills up, you’ll get an error like this:

The database ‘XXX’ has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions.

The fix requires two steps: first you increase the database size in the Azure portal, and then you alter the database in SQL.

  1. Log in to the Azure Management Portal and go to your SQL database.
  2. On the Dashboard tab, you should see that the size of your database is 100% of your total in the Usage Overview.
  3. Click on the Scale tab and change the Max Size and click Save.


That fixes the Azure max size limit, but now we need to update our SQL database itself.

  1. Fire up SQL Management Studio and connect to your Azure SQL Server.
  2. On your database, open a New Query.
  3. Run this query replacing YOURDATABASE with the name of your database in both locations.

SELECT DATABASEPROPERTYEX('YOURDATABASE', 'EDITION') AS Edition, CONVERT(BIGINT, DATABASEPROPERTYEX('YOURDATABASE', 'MAXSIZEINBYTES'))/1024/1024/1024 AS 'Max Size IN GB'

  4. This shows your Azure database edition and your current max size. Now run this query on the MASTER database, changing the database name, edition, and max size values as needed.

ALTER DATABASE [YOURDATABASE] MODIFY (EDITION='Standard', MAXSIZE=40GB)
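
If you want to script the portal half too, the classic Azure PowerShell module has a Set-AzureSqlDatabase cmdlet that can raise the cap; a sketch, assuming that module is installed and using placeholder names:

# Raise the Azure-side max size (classic Azure PowerShell module)
Set-AzureSqlDatabase -ServerName "yourserver" -DatabaseName "YOURDATABASE" -Edition "Standard" -MaxSizeGB 40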

Hit refresh on your site and it should now come up without error.

SQL Server Migration Assistant for Access nightmare

Getting from Access to SQL is not as much fun as it should be, and it seems to get harder with each release. The upsize tool is gone in Access 2013, and the recommended way is now the SQL Server Migration Assistant (SSMA).

Like a lot of people, I run a 64-bit Windows OS with 32-bit Office (which is Microsoft’s recommendation). When running SSMA, I kept hitting the following error:


Access Object Collector error: Database

     Retrieving the COM class factory for component with CLSID {CD7791B9-43FD-42C5-AE42-8DD2811F0419} failed due to the following error: 80040154. This error may be a result of running SSMA as 64-bit application while having only 32-bit connectivity components installed or vice versa. You can run 32-bit SSMA application if you have 32-bit connectivity components or 64-bit SSMA application if you have 64-bit connectivity components, shortcut to both 32-bit and 64-bit SSMA can be found under the Programs menu. You can also consider updating your connectivity components from http://go.microsoft.com/fwlink/?LinkId=197502.

     An error occurred while loading database content.


Based on this post, I’d registered dao360.dll with regsvr32 and added that folder to my environment’s PATH, but neither helped. For others, running the 32-bit version of SSMA was the suggested fix, but this also didn’t work for me. I almost went down the path of setting CorFlags but just didn’t feel that was my issue.

Thinking it was all about 32-bit versus 64-bit, I pulled out my tablet, which runs 32-bit Win 8.1, but it threw the exact same error. Finally, I found this post with the fix: install the Microsoft Access Database Engine 2010 Redistributable. That post used the 2007 edition, but I used 2010 and it worked fine. It may also work with the Microsoft Access Database Engine 2013, but once the 2010 edition worked, I moved on. I’m not sure why having Access 2013 installed isn’t enough, but I know a lot of other people are struggling with this issue and not getting much support from the SSMA team. In fact, support for that product seems really lacking. On the SSMA 5.2 version page (5.3 is out and is the one I installed), there were several comments (some very frustrated) describing my exact issue, with no response from the Microsoft team. I emailed their help address, which replied with an auto-response to open a ticket; I did, but still no response. Hopefully this post will help someone and you won’t feel so alone. 🙂

DPM 2012 fails to backup SQL 2012 database

Our SQL 2008 backups were working just fine with Data Protection Manager 2012 until we upgraded to SQL 2012. Then we started getting this error:

The DPM job failed for SQL Server 2012 database <SQL database> on <our sql server> because the protection agent did not have sysadmin privileges on the SQL Server instance. (ID 33424 Details: )

The suggested action is to “add ‘NT Service\DPMRA’ to the sysadmin role on the SQL Server instance.” That’s very specific, so that must be the fix. The problem is I don’t have an ‘NT Service\DPMRA’ user in Windows or SQL. Here’s the fix:

  1. In SQL Management Studio, connect to the SQL 2012 Server and then expand Security.
  2. Expand Logins and right click on NT AUTHORITY\SYSTEM and select Properties.
  3. Click Server Roles, check sysadmin and click OK.
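
If you’d rather make the change from a query window instead of clicking through the GUI, this does the same thing (a sketch assuming the SqlServer module’s Invoke-Sqlcmd; ALTER SERVER ROLE requires SQL 2012 or later, which fits here):

# Grant sysadmin to the account the DPM agent runs under
Invoke-Sqlcmd -ServerInstance "YourSqlServer" -Query "ALTER SERVER ROLE [sysadmin] ADD MEMBER [NT AUTHORITY\SYSTEM];"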

I read a post saying you could also add NT Service\DPMRA like the Recommended Action in DPM states, but I don’t have that as a SQL login and wasn’t able to find it as a Windows account to create one.

Once I added sysadmin to the Server Roles, I was able to right-click and run “Perform Consistency Check…” and everything took off. You may also go to the jobs themselves and click “Run configuration protection job again”.

Note: for me the consistency checks failed on some, but this time with a new (more common) error saying “Recovery point creation failed”. The fix was simply to create a new recovery point.