Recently, we needed to find out who had updated some management packs, and when. A bit of digging turned up the following query, which you can run against the OperationsManager database.
Here is the plaintext code:
-- Assumes the default [dbo].[ManagementPack] table in the OperationsManager database
SELECT TOP (1000) [MPName]
      ,[MPLastModified]
FROM [dbo].[ManagementPack]
ORDER BY [MPLastModified] DESC
I found a really great article about using the TFS client DLLs here. However, it only worked for VS2013; I had to adapt it for VS2015.
Interestingly, the assemblies are no longer GACed, nor are they in a default location. They are now sym-linked to the existing version.
Here is my example PowerShell code:
$vs2015Path = "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer"
Add-Type -Path "$vs2015Path\Microsoft.TeamFoundation.Client.dll"
Add-Type -Path "$vs2015Path\Microsoft.TeamFoundation.Common.dll"
Add-Type -Path "$vs2015Path\Microsoft.TeamFoundation.Build.Client.dll"
Add-Type -Path "$vs2015Path\Microsoft.TeamFoundation.Build.Common.dll"
Add-Type -Path "$vs2015Path\Microsoft.TeamFoundation.WorkItemTracking.Client.dll"
Add-Type -Path "$vs2015Path\Microsoft.TeamFoundation.VersionControl.Client.dll"
Add-Type -Path "$vs2015Path\Microsoft.TeamFoundation.ProjectManagement.dll"
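Once the assemblies are loaded, you can use the client API directly from PowerShell. Here is a minimal sketch; the collection URL is a placeholder for your own server, not something from the original article:

```powershell
# Hypothetical collection URL; substitute your own TFS server.
$uri = New-Object System.Uri("http://tfsserver:8080/tfs/DefaultCollection")
$collection = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($uri)
$collection.EnsureAuthenticated()

# Grab the work item store and list the team projects it knows about.
$store = $collection.GetService([Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore])
$store.Projects | Select-Object -ExpandProperty Name
```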
As we move forward in DevDiv, our next steps are to move to http://visualstudio.com as our default instance of TFS. This also comes with a switch to Git as the default VCS.
Here are some of the caveats we’ve discovered.
1. Git does not handle binaries well. There are practical size limits on Git repositories and the files in them. See here and here. This means we need to componentize our source tree: a lot of pruning and splitting apart.
2. Workflow changes. Today I have one command to make sure my local repo is up to date and merged (git pull origin master). You will need to understand that branches in Git are cheap, disposable, and far easier to work with than in traditional TFS. I can create and destroy as many local branches as I want, merge back into master, and push only those changes back up.
3. You have to read the documentation. Yes, it’s a VCS, but it’s not a centralized one like SVN or TFS. Even though it may be TFS on the back end, you still need to RTFM.
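The cheap-branch workflow from point 2 can be sketched like this. It assumes git is installed; everything happens in a throwaway temp repo, and the file and branch names are made up for the demo:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "v1" > file.txt
git add file.txt
git commit -qm "initial commit"
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
git checkout -qb topic                  # branches are cheap: create one...
echo "v2" > file.txt
git commit -qam "work on a topic branch"
git checkout -q "$main"
git merge -q topic                      # ...merge it back into the mainline...
git branch -qd topic                    # ...and throw the branch away
```

Only the merged changes remain on the mainline; the topic branch itself never has to leave your machine.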
I frequently need to figure out who the owner of a computer object is in Active Directory.
PowerShell to the rescue!
Easy as pie. Here is the full code for you to use.
Import-Module ActiveDirectory
$oADObject = Get-ADComputer <ComputerName> -Properties *
$oAceObj = Get-Acl -Path ("ActiveDirectory:://RootDSE/" + $oADObject.DistinguishedName)
$oAceObj.Owner
Gist Link: https://gist.github.com/dalehhirt/3f03b9ee27a245933e1a
I just read an article titled The 3 Configuration Options that you always have to change in SQL Server! by Klaus Aschenbrenner (@Aschenbrenner). I went and looked over my SOP script that I keep for all of my SQL servers. The full version can be found here.
My script does several things:
- Sets 3 of the 4 most important config settings I need in my environment (more on these later).
- Sets a standard size for the Model data and log files (1024MB with 1024MB growth).
- Sets all database owners to sa. This is so we’re not reliant on Windows Authentication if someone leaves the company.
- Sets all databases to PAGE_VERIFY CHECKSUM. This is vital to make sure data is valid within the page when reading and writing from disk (See Kendra Little’s blog here).
- Computes the value of max server memory (#4 of 4 most important config settings) based on Jonathan Kehayias’ blog post.
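The max server memory arithmetic from that last bullet can be sketched in T-SQL. This is my reading of the commonly quoted rule of thumb (reserve 1 GB for the OS, 1 GB per 4 GB of RAM between 4 and 16 GB, and 1 GB per 8 GB above 16 GB); the linked script may round differently:

```sql
-- Total physical RAM on the box, in GB.
DECLARE @totalGB INT = (SELECT total_physical_memory_kb / 1048576
                        FROM sys.dm_os_sys_memory);

-- Memory to leave for the OS, per the rule of thumb above.
DECLARE @osReserveGB INT =
      1                                                               -- base 1 GB for the OS
    + (CASE WHEN @totalGB > 16 THEN 12
            WHEN @totalGB > 4  THEN @totalGB - 4
            ELSE 0 END) / 4                                           -- 1 GB per 4 GB from 4-16 GB
    + (CASE WHEN @totalGB > 16 THEN @totalGB - 16 ELSE 0 END) / 8;    -- 1 GB per 8 GB above 16 GB

DECLARE @maxMemMB INT = (@totalGB - @osReserveGB) * 1024;
EXEC sp_configure 'max server memory (MB)', @maxMemMB;
RECONFIGURE;
```

For a 64GB server this works out to 1 + 3 + 6 = 10GB reserved, i.e. a max server memory of 54GB.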
So I only do one of the three that Klaus recommends. Well, let me rephrase that: I only script out one of the three. The other two, max degree of parallelism and cost threshold for parallelism, I tend to leave alone unless I see actual performance issues.
The configuration options I do set, on lines 25-33 of my script, are: backup compression default, optimize for ad hoc workloads, and remote admin connections.
Backup compression is needed because we do a lot of network shipping of backup files, both at creation time and for secondary purposes (testing restores, dev environments, and offsite tape backups). It has been a lifesaver when restoring some 600GB monster database whose backup file is less than 30GB. In SQL Server 2014, creating compressed backups is supported from Standard Edition on up, and every edition can restore a compressed backup. So use it often.
Optimize for ad hoc workloads won't ever hurt you, and will most likely help, as it's more about reducing the storage of compiled query plans than anything else. I found this made a big difference in several of my OLTP workloads.
Remote admin connections. I can't tell you the number of times this has saved my bacon when I couldn't log in to the Windows server but needed to perform some random admin function to get data back. The DAC (Dedicated Administrator Connection) is a lifesaver.
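The three options above can all be set with sp_configure. This is a minimal equivalent, not the exact lines from my script (which lives at the link above):

```sql
-- 'optimize for ad hoc workloads' is an advanced option, so expose those first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'backup compression default', 1;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
```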
What are your mandatory configuration options?
My operations team runs System Center Operations Manager 2012 R2 internally.
I host the OperationsManager database on a SQL Server 2012 Availability Group, and for the most part, it works well. However, for space reasons, we needed to move the Data Warehouse database to its own server backed by SAN storage.
After following this guide (https://technet.microsoft.com/en-us/library/hh268492.aspx), the database was successfully moved, and the data is flowing correctly. In addition, all reporting is working as expected. It was moved from scomsql (SQL 2012 Availability Group) to scomsql3 (standalone instance).
Everything worked out except for the Get-SCOMDataWarehouseSetting PowerShell cmdlet, which still returned the name of the old server.
To find the current value, I ran the following:
After correcting the stale entry, the cmdlet now returns the right server name.
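The exact statements are in the SQL script linked below. As a hedged sketch only, the table name is my assumption about where SCOM 2012 R2 stores this setting, and the relevant column names carry a per-install GUID suffix, so inspect them rather than hard-coding one:

```sql
-- Hypothetical: the data warehouse server name sits on the Data Warehouse
-- managed type table in the OperationsManager database. SELECT * first and
-- find the server-name column (its name includes a per-install GUID).
SELECT *
FROM dbo.[MT_Microsoft$SystemCenter$DataWarehouse];
```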
Update: due to the vagaries of WordPress, I’m linking to the SQL script here.