Using Git from PowerShell

This year has been full of changes for me. One of the biggest is that my job now requires me to use Git and GitHub for almost all of my work. Before this job, I had never used Git. By default, the Git installer installs the bash command shell, and most of the documentation assumes that you are using bash. However, I prefer to work in PowerShell. In this article I show how I set up my environment to enable Git functionality in PowerShell. This is not meant to be a tutorial on using Git but, rather, an example of what works for me and for my workflow.

Download and install Git for Windows

The first step is to install Git for Windows.

Download and run the Git for Windows installer. As you step through the installation wizard, you are presented with several options. The following is a list of the options on each page of the wizard, with the reasoning behind my choices.

  • The Select Components page
    • Check Git LFS (Large File Support)
    • Check Associate .git* configuration files with the default text editor
    • Check Use a TrueType font in all console windows
      I prefer the TrueType font Consolas as my monospaced font for command shells and code editors.
  • The Choosing the default editor used by Git page
    • Select Use Visual Studio Code as Git’s default editor
      VS Code does everything.
  • The Adjusting your PATH environment page
    • Select Use Git from the Windows Command Prompt
      This adds the Git tools to your PATH so that Git works in Cmd, PowerShell, or bash.
  • The Choosing HTTPS transport backend page
    • Select Use the native Windows Secure Channel library
  • The Configure the line ending conversions page
    • Select Checkout Windows-style, commit Unix-style line endings
      This is the recommended setting on Windows and provides the most compatibility for cross-platform projects.
  • The Configuring the terminal emulator to use with Git bash page
    • Select Use Windows’ default console window
      This is the console that PowerShell uses and works best with other Windows console-based applications.
  • The Configuring extra options page
    • Check Enable file system caching
      This option is checked by default. Caching improves performance of certain Git operations.
    • Check Enable Git Credential Manager
      The Git Credential Manager for Windows (GCM) provides secure Git credential storage for Windows. GCM provides multi-factor authentication support for Visual Studio Team Services, Team Foundation Server, and GitHub. Enabling GCM prevents the need for Git to continuously prompt for your Git credentials for nearly every operation. For more information see the GCM documentation on GitHub.
    • Check Enable symbolic links

These are the options I chose. You may have different requirements in your environment.
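The line-ending choice above maps to a single Git setting, so you can change it later without rerunning the installer. A minimal sketch (assumes only that git is on your PATH):

```shell
# "Checkout Windows-style, commit Unix-style line endings"
# is equivalent to setting core.autocrlf to true:
git config --global core.autocrlf true

# Show the value to confirm it took effect
git config --global core.autocrlf
```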

Install the Posh-Git module

Now that we have the Git client installed, we need to enable Git functionality for PowerShell by installing the Posh-Git module from the PowerShell Gallery. For more information about Posh-Git, see Posh-Git on GitHub.

If you have PsGet installed just run:

Install-Module posh-git

Alternatively, you can install Posh-Git manually using the instructions in the README.md in the GitHub repository.

Once Posh-Git is installed you need to integrate Git into your PowerShell environment. Posh-Git includes an example profile script that you can adapt to your needs.

Integrate Git into your PowerShell environment

Integrating Git into PowerShell is simple. There are three main things to do:

  1. Load the Posh-Git module
  2. Start the SSH Agent Service
  3. Configure your prompt to show the Git status

Add the following lines to your PowerShell profile script.

Import-Module posh-git
Start-SshAgent -Quiet
function global:prompt {
    $identity = [Security.Principal.WindowsIdentity]::GetCurrent()
    $principal = [Security.Principal.WindowsPrincipal] $identity
    $name = ($identity.Name -split '\\')[1]
    $path = Convert-Path $executionContext.SessionState.Path.CurrentLocation
    $prefix = "($env:PROCESSOR_ARCHITECTURE)"

    if($principal.IsInRole([Security.Principal.WindowsBuiltInRole] 'Administrator')) { $prefix = "Admin: $prefix" }
    $realLASTEXITCODE = $LASTEXITCODE
    $prefix = "Git $prefix"
    Write-Host ("$prefix[$Name]") -nonewline
    Write-VcsStatus
    ("`n$('+' * (get-location -stack).count)") + "PS $($path)$('>' * ($nestedPromptLevel + 1)) "
    $global:LASTEXITCODE = $realLASTEXITCODE
    $host.ui.RawUI.WindowTitle = "$prefix[$Name] $($path)"
}

The prompt function integrates Git into your PowerShell prompt to show an abbreviated Git status. See the Posh-Git README for a full explanation of the abbreviated status. I have also customized my prompt to show my user context, whether I am running a 64-bit or 32-bit shell, and whether I am running elevated. Customize this function to meet your needs or preferences.

At this point you are done. You can use Git from PowerShell. Go forth and clone a repo.

Customize your Git environment

You may want to customize some of the settings of your Git environment, especially if this is a new install of Git. To be a good project contributor, you should identify yourself so that Git knows whom to credit (or blame) for your commits. Also, I found that the default colors Git uses in the shell can be hard to read, so I customized them to be more visible. For more information, see the Customizing Git topic in the Git documentation.

The following commands only need to be run once. You are setting global preferences so, once they are set, they are used every time you start a new shell.

# Configure your user information to match your GitHub profile
git config --global user.name "John Doe"
git config --global user.email "alias@example.com"

# Set up the colors to improve visibility in the shell
git config --global color.ui true
git config --global color.status.changed "magenta bold"
git config --global color.status.untracked "red bold"
git config --global color.status.added "yellow bold"
git config --global color.status.unmerged "yellow bold"
git config --global color.branch.remote "magenta bold"
git config --global color.branch.upstream "blue bold"
git config --global color.branch.current "green bold"

As I said at the beginning, this is what works for me. Your mileage may vary. Customize this for your preferences and environmental needs.

In future articles, I plan to share scripts I have created to help me with my Git workflow. Do you use Git with PowerShell? Share your questions and experiences in the comments.

Posted in Git, GitHub, PowerShell

Opening the door to the Mystery of Dates in PowerShell

Formatting and converting dates can be very confusing. Every programming language, operating system, and runtime environment seems to do it differently. Part of the difficulty in converting is knowing what units, and what starting point, the stored value uses.

First, it is helpful to know the Epoch (or starting date) a stored value is based on. Wikipedia has a good article on this. Here is a brief excerpt.

Epoch date – Notable uses – Rationale for selection

  • January 1, AD 1 – Microsoft .NET – the Common Era (ISO 2014, RFC 3339)
  • January 1, 1601 – NTFS, COBOL, Win32/Win64 – 1601 was the first year of the 400-year Gregorian calendar cycle in effect when Windows NT was designed
  • January 0, 1900 – Microsoft Excel, Lotus 1-2-3 – logically, January 0, 1900 is equivalent to December 31, 1899, but these systems do not allow users to specify the latter date
  • January 1, 1904 – Apple Inc.'s Mac OS through version 9 – 1904 is the first leap year of the 20th century
  • January 1, 1970 – the Unix epoch, aka POSIX time; used by Unix and Unix-like systems (Linux, Mac OS X) and by most C/C++ implementations, Java, JavaScript, Perl, PHP, Python, Ruby, Tcl, and ActionScript
  • January 1, 1980 – IBM BIOS INT 1Ah, DOS, OS/2, and the FAT12/FAT16/FAT32/exFAT file systems – the IBM PC with its BIOS, as well as 86-DOS, MS-DOS, and PC DOS with their FAT12 file system, were developed and introduced between 1980 and 1981
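Because each system simply counts from a different starting point, converting between epochs is a fixed offset. For example, the well-known 11,644,473,600-second gap between the FILETIME epoch (1601) and the Unix epoch (1970) can be verified directly (a quick illustration, not from the original article):

```powershell
$ntfsEpoch = Get-Date '1601-01-01'
$unixEpoch = Get-Date '1970-01-01'

# The offset between the two epochs, in seconds
(New-TimeSpan -Start $ntfsEpoch -End $unixEpoch).TotalSeconds
# 11644473600
```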

Common Date Conversion Tasks

WMI Dates

PS > $installDate = (Get-WmiObject win32_operatingsystem | select Installdate ).InstallDate
PS > [system.management.managementdatetimeconverter]::ToDateTime($InstallDate)
Friday, September 12, 2008 6:50:57 PM

PS > [System.Management.ManagementDateTimeConverter]::ToDmtfDateTime($(get-date))
20151127144036.886000-480

Excel dates – Excel stores dates as sequential serial numbers so that they can be used in calculations. By default, January 1, 1900, is serial number 1.

PS > ((Get-Date).AddDays(1) - (get-date "12/31/1899")).Days
42335

In this example, the value of Days is 42335, which is the serial number for 11/27/2015 in Excel. The date "12/31/1899" is equivalent to January 0, 1900. The difference between "12/31/1899" and "11/27/2015" is 42334 days, but since the serial numbers start at 1 you need to add 1 day to get the serial number for "11/27/2015".

Converting from custom string formats

PS > $information = '12Nov(2012)18h30m17s'
PS > $pattern = 'ddMMM\(yyyy\)HH\hmm\mss\s'
PS > [datetime]::ParseExact($information, $pattern, $null)
Monday, November 12, 2012 6:30:17 PM

FILETIME conversion – FILETIME is a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC).

PS > get-aduser username -prop badPasswordTime,lastLogonTimestamp | select badPasswordTime,lastLogonTimestamp
badPasswordTime : 130927962789982434
lastLogonTimestamp : 130931333173599571

PS > [datetime]::fromfiletime(130927962789982434)
Monday, November 23, 2015 3:51:18 PM

PS > [datetime]::fromfiletime(130931333173599571)
Friday, November 27, 2015 1:28:37 PM

CTIME or Unix format – an integer value representing the number of seconds elapsed since 00:00:00 UTC on January 1, 1970 (that is, a Unix timestamp).

PS > $epoch = get-date "1/1/1970"
PS > $epoch.AddMilliseconds(1448302797803)
Monday, November 23, 2015 6:19:57 PM

PS > $epoch.AddSeconds(1448302797.803)
Monday, November 23, 2015 6:19:57 PM
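Going the other direction, from a date back to a Unix timestamp, is the same arithmetic in reverse. A quick sketch that mirrors the examples above (which, like the article, work in local time; for a true POSIX timestamp you would work in UTC):

```powershell
$epoch = Get-Date '1/1/1970'

# Seconds since the Unix epoch for an arbitrary date
$date = Get-Date '11/23/2015 18:19:57'
[int64]($date - $epoch).TotalSeconds
# 1448302797
```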

References

Standard Date and Time Format Strings in .NET
https://docs.microsoft.com/dotnet/standard/base-types/standard-date-and-time-format-strings

Custom Date and Time Format Strings in .NET
https://docs.microsoft.com/dotnet/standard/base-types/custom-date-and-time-format-strings

Formatting Dates and Times in PowerShell
https://docs.microsoft.com/previous-versions/windows/it-pro/windows-powershell-1.0/ee692801(v=technet.10)

PowerTip: Use PowerShell to Format Dates
https://devblogs.microsoft.com/scripting/powertip-use-powershell-to-format-dates/

Parsing Custom Date and Time Formats
http://community.idera.com/powershell/powertips/b/tips/posts/parsing-custom-date-and-time-formats
https://msdn.microsoft.com/en-us/library/system.datetime.parseexact(v=vs.110).aspx

Wikipedia – Epoch (reference date)
https://en.wikipedia.org/wiki/Epoch_(reference_date)

Posted in PowerShell, Uncategorized

Adding a Contact to a Distribution List with PowerShell

The PowerShell ActiveDirectory module has a lot of great features that I use on a daily basis. However, there is one shortcoming that I have struggled with for a while. I did a lot of internet searching and testing to see if I was missing some hidden secret. But, alas, this is one task that the AD module does not do.

Here is the scenario. We have a lot of AD groups (distribution lists) we use for notification messages. We want to send notifications to mobile devices, which we do by sending an email to each device's email address. For example:

2065551212@mobilecarrier.xyz.com

These external email addresses are created as Contact objects in AD.

The problem is that the cmdlets for managing AD group objects only allow you to add objects that have a SamAccountName (and therefore a SID) to a group. This is fine for user and group objects, but Contact objects do not have SIDs. So now what do you do?

The answer is to do it the old way, the way you would have done it in VBScript: use ADSI.

$dlGroup = [adsi]'LDAP://CN=DL-Group Name,OU=Corp Distribution Lists,DC=contoso,DC=net'
$dlGroup.Member.Add('CN=mobile-username,OU=Corp Contacts,DC=contoso,DC=net')
$dlGroup.psbase.CommitChanges()
Posted in PowerShell

Use PowerShell and EWS to find out who is sending you email

I get a lot of email from a lot of different sources. Much of it is automated alerts generated by service accounts that monitor the various applications my team supports. Each month I like to see how many messages I have gotten from each source. Looking at these numbers over time can help identify trends. If we are suddenly getting more alerts from a particular sender, we may want to look more closely at the health of that system.

Using Outlook’s rules engine I send all of these alert messages to a specific folder. Now I just need an easy way to count them. I created a script that scans that folder and counts the number of messages from each sender. The output looks like this:

Count Name
----- ----
   10 Service Account A <SMTP:svca@contoso.com>
   10 Ops Monitor 2 <SMTP:opsmon2@contoso.com>
    7 Ops Monitor 3 <SMTP:opsmon3@contoso.com>
    6 Service Account D <SMTP:svcd@contoso.com>
    6 Service Account E <SMTP:svce@contoso.com>

The script is pretty simple. I created two functions:

  • one to find the specific folder in the mailbox
  • one to iterate through all the items in the folder

To find the target folder you must walk the folder tree until you reach your destination. Once you have the target folder you can create an ItemView and search for all the messages in the folder. PowerShell’s Group-Object cmdlet does the work of counting for you.

# Load the EWS dll
Add-Type -Path 'C:\Program Files\Microsoft\Exchange\Web Services\2.2\Microsoft.Exchange.WebServices.dll'

#-----------------------------------------------------
function GetTargetFolder {
   param([string]$folderPath)

   $fldArray = $folderPath.Split("\")
   $tfTargetFolder = $MsgRoot

   for ($x = 1; $x -lt $fldArray.Length; $x++)
   {
      #$fldArray[$x]
      $fvFolderView = new-object Microsoft.Exchange.WebServices.Data.FolderView(1)
      $SfSearchFilter = new-object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo(
         [Microsoft.Exchange.WebServices.Data.FolderSchema]::DisplayName,
         $fldArray[$x]
      )
      $findFolderResults = $service.FindFolders($tfTargetFolder.Id,$SfSearchFilter,$fvFolderView)
      if ($findFolderResults.TotalCount -gt 0)
      {
         foreach($folder in $findFolderResults.Folders)
         {
             $tfTargetFolder = $folder
         }
      }
      else
      {
         "Error Folder Not Found"
         $tfTargetFolder = $null
         break
      }
   }
   $tfTargetFolder
}
#-----------------------------------------------------
function GetItems {
   param ($targetFolder)
   #Define an ItemView to retrieve just 100 items at a time
   $ivItemView = New-Object Microsoft.Exchange.WebServices.Data.ItemView(100) 

   $AQSString = $null  #find all messages
   do
   {
        $fiItems = $service.FindItems($targetFolder.Id,$AQSString,$ivItemView)
        foreach($Item in $fiItems.Items)
        {
            $Item.Load()
            $Item
        }
        $ivItemView.Offset += $fiItems.Items.Count
   }
   while($fiItems.MoreAvailable -eq $true)
}
#-----------------------------------------------------
$ExchangeVersion = [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP2
$service = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService($ExchangeVersion)

$service.UseDefaultCredentials = $true
$MailboxName = "mymailbox@contoso.com"
$service.AutodiscoverUrl($MailboxName)

#Bind to the Root of the mailbox so I can search the folder namespace for the target
$MsgRootId = [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::MsgFolderRoot
$MsgRoot = [Microsoft.Exchange.WebServices.Data.Folder]::Bind($service,$MsgRootId)
$targetFolder = GetTargetFolder '\Inbox\Alert Message\Current'

$itemList = GetItems $targetFolder
$itemList | group-object Sender -noelement | sort Count -desc | ft -a
Posted in PowerShell

Understanding Byte Arrays in PowerShell

In a previous article, I presented a PowerShell script for inspecting and validating certificates stored as PFX files. My goal is to get the data into an X509Certificate2 object so that I can validate the certificate properties. The X509Certificate2 Import() methods come in two sets of variations. One set takes the filename of the certificate file to be imported. The second set takes a Byte array containing the certificate data to be imported.

In this script, I can import PFX certificate files either by downloading a byte stream from a web server or by reading a file stored on disk. I want to avoid creating temporary files, and I want a generic import function that can be used independently of the data retrieval method. I settled on an array of bytes as the import format for both scenarios.

To import a PFX file from disk I use the Get-Content cmdlet. Let’s take a closer look at how Get-Content works and what it returns.

PS C:\temp> $pfxbytes = Get-Content .\DEV113.pfx
PS C:\temp> $pfxbytes.GetType().Name
Object[]
PS C:\temp> $pfxbytes[0].GetType().Name
String
PS C:\temp> $pfxbytes[0].length
18
PS C:\temp> $pfxbytes.length
70
PS C:\temp> $pfxbytes | ForEach-Object { $count += $_.length }
PS C:\temp> $count
7489
PS C:\temp> Get-ChildItem .\DEV113.pfx

Directory: C:\temp

Mode              LastWriteTime     Length Name
----              -------------     ------ ----
-a---       9/30/2014   1:03 PM       7558 DEV113.pfx

By default, we see that Get-Content returns an array of String objects. There are two problems with this for my use case.

  1. If you add up the lengths of all 70 strings you get a total of 7489 characters. But the file size is 7558 bytes, so this does not match. The data in a PFX file is not string-oriented; it is binary data.
  2. I need a Byte array to import the data into an X509Certificate2 object.

Fortunately, using the -Encoding parameter you can specify that you want Byte-encoded data returned instead of strings. (This applies to Windows PowerShell; PowerShell 6 and later replace -Encoding Byte with the -AsByteStream parameter.)

PS C:\temp> $pfxbytes = Get-Content .\DEV113.pfx -Encoding Byte
PS C:\temp> $pfxbytes.GetType().Name
Object[]
PS C:\temp> $pfxbytes[0].GetType().Name
Byte
PS C:\temp> $pfxbytes[0].length
1
PS C:\temp> $pfxbytes.length
7558

Notice that Get-Content still returns an array of objects but those objects are Bytes. The total length of $pfxbytes now matches the size on disk.

To download the PFX file from the web server I use the System.Net.WebClient class. WebClient has three main ways of downloading content from a web server:

  • The DownloadString methods are useful when you are only expecting to receive text data (e.g. HTML, XML, or JSON). Since the PFX file format is binary, not text, this will not work as I have already shown above with Get-Content.
  • The DownloadFile methods would work except that I don’t want to have to save the file to disk as required by these methods.
  • The DownloadData methods return a byte array containing the data requested. This is the method that best meets our needs.

But what is a Byte array, and how is it different from a string? A byte array can contain arbitrary binary data; the data does not have to be character data. Character data is subject to interpretation because it implies an encoding, and there is more than one way to encode a character. Take the following example:

PS C:\temp> $string = 'Hello World'
PS C:\temp> $string.length
11
PS C:\temp> $bytes = [System.Text.Encoding]::Unicode.GetBytes($string)
PS C:\temp> $bytes.length
22

As you can see, the length of $string is 11 characters. If we convert it to a byte[] using the Unicode (UTF-16) encoding, we get 22 bytes of data, because each character is encoded as two bytes. It is also important to know the format of the source data when you are converting between encoding schemes. Take for example:

PS C:\temp> $array = @(72,101,108,108,111,32,87,111,114,108,100)
PS C:\temp> $string = [System.Text.Encoding]::UTF8.GetString($array)
PS C:\temp> $string.length
11
PS C:\temp> $string
Hello World

It is possible to convert the byte[] $array to a UTF8-encoded string because each byte represents one character. However, if you try to convert that same array as Unicode, each pair of bytes is treated as a single character.

PS C:\temp> $string = [System.Text.Encoding]::Unicode.GetString($array)
PS C:\temp> $string.length
6
PS C:\temp> $string
??????

The result is an unreadable value stored in $string.

Posted in Scripting

Throwback Thursday – Windows Command Shell (Batch) scripting

This Thursday I am returning to my scripting roots (if you don't count VAX DCL) to talk about Windows Command Shell scripts. With powerful scripting options like PowerShell available, why does anyone bother with "DOS Batch" scripts anymore?

First off, let me set the record straight. Windows Command Shell scripting is much more powerful than “DOS Batch” files. Yes, they share a common heritage and syntax. But the Windows Command shell can do so much more. Not to mention, there are lots of older systems still deployed that don’t have PowerShell installed. The Windows Command shell is guaranteed to be installed.

Some things you can do in Windows command shell scripts that you may not have known:

  • Arithmetic (using the SET /A command)
  • Complex FOR loops (parsing, counting, collection enumeration)
  • Subroutines within a single script file (using “CALL :label”)
  • Text parsing (using the FOR command)
  • Additional built-in environment variables (e.g. %CD%, %RANDOM%, and others)
  • Variable Substring extraction and replacement (see the help for the SET command)
  • Variable value transformation (e.g. get the size of the file named by the variable using %~z1)

I will illustrate a few of these enhancements while I discuss different ways of handling command-line arguments.

The Windows Command shell provides variables for the first nine arguments passed on the command line when executing a script. These variables are numbered %1 through %9. But what if you want (or need) to pass more parameters than that? Take the following script:

@echo off
echo Arg[1] = %1
echo Arg[2] = %2
echo Arg[3] = %3
echo Arg[4] = %4
echo Arg[5] = %5
echo Arg[6] = %6
echo Arg[7] = %7
echo Arg[8] = %8
echo Arg[9] = %9
echo Arg[10] = %10
echo Arg[11] = %11
echo Arg[12] = %12
echo Arg[13] = %13
echo Arg[14] = %14
echo Arg[15] = %15

Let’s see what happens when you try to access the 10th argument:

C:\temp> test15args.cmd a b c d e f g h i j k l m n o
Arg[1] = a
Arg[2] = b
Arg[3] = c
Arg[4] = d
Arg[5] = e
Arg[6] = f
Arg[7] = g
Arg[8] = h
Arg[9] = i
Arg[10] = a0
Arg[11] = a1
Arg[12] = a2
Arg[13] = a3
Arg[14] = a4
Arg[15] = a5

Notice that starting with the 10th argument the script just outputs the value of the 1st argument followed by a digit: %10 is interpreted as %1 followed by a literal 0. Also, using the numbered variables implies that you expect command-line arguments to be passed in a specific order. What if you want to pass arguments in any order, or handle more than nine values? This is where the SHIFT command comes in. SHIFT is not new; it existed in DOS before the Windows Command shell. But combined with other new features of the Command shell, it is more powerful. Take this next example:

@echo off
setlocal
set /a c=1
:top
echo Arg[%c%] = "%1"
set /a c+=1
shift
if "%1" NEQ "" goto :top
goto :eof
echo You should never reach this line of the script.

SHIFT allows us to access each argument by SHIFT-ing it through the %1 variable. Bonus: notice the use of “set /a”. This is how you do arithmetic. Here is the output:

C:\temp> testAllargs.cmd a b c d e f g h i j k l m n o
Arg[1] = "a"
Arg[2] = "b"
Arg[3] = "c"
Arg[4] = "d"
Arg[5] = "e"
Arg[6] = "f"
Arg[7] = "g"
Arg[8] = "h"
Arg[9] = "i"
Arg[10] = "j"
Arg[11] = "k"
Arg[12] = "l"
Arg[13] = "m"
Arg[14] = "n"
Arg[15] = "o"

So now we have a method of looking at each command line argument and handling it independent of its position on the command line. Let’s look at a more complex script.

I would like to check all the command line arguments and determine if they are valid for the script before ever trying to execute the main logic of the script. It would also be nice to separate blocks of code in the script into “subroutines” so that the complex logic of a specific task can be isolated from the main logic flow of the script. Here is a high-level outline of such a script.

  ForEach arg in args[]
  {
    if valid(arg) then add arg to collection
    if not valid(arg) then display error message
  }
  ForEach arg in collection
  {
    call subroutine to handle arg
  }

Take a look at the script. The script is divided into four sections:

  • Initialization – where we validate the arguments
  • Main block – where we control the order of execution
  • Subroutines – where the actual work gets done
  • Helper functions – specialized tasks that are not part of the main logic

@echo off
setlocal EnableDelayedExpansion
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:GetOpts - Check Command line options
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
if "%1" EQU "" (
   goto :FinInit
)
if /i "%1" EQU "ACTION01" goto :AddOpt
if /i "%1" EQU "ACTION02" goto :AddOpt
if /i "%1" EQU "ACTION03" goto :AddOpt
if /i "%1" EQU "ACTION04" goto :AddOpt
if /i "%1" EQU "ACTION05" goto :AddOpt
if /i "%1" EQU "ACTION06" goto :AddOpt
if /i "%1" EQU "ACTION07" goto :AddOpt
if /i "%1" EQU "ACTION08" goto :AddOpt
if /i "%1" EQU "ACTION09" goto :AddOpt
if /i "%1" EQU "ACTION10" goto :AddOpt
if /i "%1" EQU "ACTION11" goto :AddOpt
if /i "%1" EQU "ACTION12" goto :AddOpt
if /i "%1" EQU "ACTION13" goto :AddOpt
if /i "%1" EQU "ACTION14" goto :AddOpt
if /i "%1" EQU "ACTION15" goto :AddOpt
:::::::::::::::::::::::::
:   Fall through to error if no match
:::::::::::::::::::::::::
goto :CMDError %1

:::::::::::::::::::::::::
:AddOpt
:::::::::::::::::::::::::
set MY_OPT_LIST=!MY_OPT_LIST! %1
:::::::::::::::::::::::::
:NextOpt
:::::::::::::::::::::::::
shift
goto :GetOpts

::::::::::::::::::::::::::::::::::::::::::
:FinInit  Finish initializing script environment
::::::::::::::::::::::::::::::::::::::::::
echo Finish initializing script environment.
echo Add commands here to create temp files or other tasks required by the script.
::::::::::::::::::::::::::::::::::::::::::
:         Count the number of options
::::::::::::::::::::::::::::::::::::::::::
set MY_OPT_LIST
set /a optcnt = 0
for %%a in (%MY_OPT_LIST%) do (
   set /a optcnt = !optcnt! + 1
)
echo Total options selected = %optcnt%
::::::::::::::::::::::::::::::::::::::::::
:        End of Init section
::::::::::::::::::::::::::::::::::::::::::
:Main    Main Section - Process selected options
::::::::::::::::::::::::::::::::::::::::::
if %optcnt% EQU 0 goto :eof
for %%a in (%MY_OPT_LIST%) do (
   call :%%a
)
goto :eof
::::::::::::::::::::::::::::::::::::::::::
:         End of Main section
::::::::::::::::::::::::::::::::::::::::::
:         Begin Action subroutines
::::::::::::::::::::::::::::::::::::::::::
:ACTION01
echo Add your Action01 commands here.
goto :eof
:ACTION02
echo Add your Action02 commands here.
goto :eof
:ACTION03
echo Add your Action03 commands here.
goto :eof
:ACTION04
echo Add your Action04 commands here.
goto :eof
:ACTION05
echo Add your Action05 commands here.
goto :eof
:ACTION06
echo Add your Action06 commands here.
goto :eof
:ACTION07
echo Add your Action07 commands here.
goto :eof
:ACTION08
echo Add your Action08 commands here.
goto :eof
:ACTION09
echo Add your Action09 commands here.
goto :eof
:ACTION10
echo Add your Action10 commands here.
goto :eof
:ACTION11
echo Add your Action11 commands here.
goto :eof
:ACTION12
echo Add your Action12 commands here.
goto :eof
:ACTION13
echo Add your Action13 commands here.
goto :eof
:ACTION14
echo Add your Action14 commands here.
goto :eof
:ACTION15
echo Add your Action15 commands here.
goto :eof
::::::::::::::::::::::::::::::::::::::::::
:         End Action subroutines section
::::::::::::::::::::::::::::::::::::::::::
:         Begin Helper functions
::::::::::::::::::::::::::::::::::::::::::
:CMDError - report error in command line options
::::::::::::::::::::::::::::::::::::::::::
echo Error: '%1' is not a valid option
echo.
echo Valid options are:
echo    ACTION01, ACTION02, ACTION03, ACTION04, ACTION05,
echo    ACTION06, ACTION07, ACTION08, ACTION09, ACTION10,
echo    ACTION11, ACTION12, ACTION13, ACTION14, ACTION15
echo.
goto :EOF
::::::::::::::::::::::::::::::::::::::::::
:        End of Script
::::::::::::::::::::::::::::::::::::::::::

Here is the example output:

C:\temp> cmdparams.cmd action01 action05 action12 action03
Finish initializing script environment.
Add commands here to create temp files or other tasks required by the script.
MY_OPT_LIST= action01 action05 action12 action03
Total options selected = 4
Add your Action01 commands here.
Add your Action05 commands here.
Add your Action12 commands here.
Add your Action03 commands here.

C:\temp> cmdparams.cmd action1
Error: 'action1' is not a valid option

Valid options are:
   ACTION01, ACTION02, ACTION03, ACTION04, ACTION05,
   ACTION06, ACTION07, ACTION08, ACTION09, ACTION10,
   ACTION11, ACTION12, ACTION13, ACTION14, ACTION15

OK, what makes a subroutine in the Windows Command shell? In DOS Batch, you had to put subroutines in a separate batch file and CALL that file from your main script. The Windows Command shell added functionality to the CALL command that allows you to CALL a label located in the current script file instead of using GOTO. The Command shell also added the special label :EOF to indicate that the subroutine is done executing and control should return to the caller. This allows you to separate the "business logic" of your script into subroutines and keep it separate from the "flow logic" of the main portion of the script.
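Stripped to its essentials, the CALL :label pattern looks like this minimal sketch (the :Greet label and its arguments are made up for illustration):

```batch
@echo off
rem CALL :label runs the code at that label and returns
rem when that code reaches goto :EOF.
call :Greet World
call :Greet Batch
goto :EOF

:Greet
rem Arguments passed to CALL are visible as %1, %2, ... inside the subroutine.
echo Hello, %1!
goto :EOF
```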

PS: this is a shoutout to DonR who got me started in a lot of scripting languages way back in 1987.

Posted in Command Shell, Scripting

Working with certificates in PowerShell

Overview

Today’s script is an attempt to bring together several things I have learned about writing good PowerShell scripts. I still have a lot to learn and this is not necessarily a sterling example of best practices. However, it does illustrate some more advanced scripting topics, including:

  • Comment-based help
  • Parameter sets
  • Passing parameters on the pipeline
  • Error handling with try-catch blocks
  • Simple HTTP downloads using System.Net.Webclient
  • Inspecting certificates imported from external sources (http or file)
  • Inspecting certificates in the local store

I rarely use comment-based help in my scripts since I am usually writing scripts for my own use. They tend to be one-off utilities designed to fulfill an immediate need. This script, however, is going to be used by other support technicians outside of my immediate team. So documentation was important. Comment-based help allows you to include documentation in the script (rather than a separate file that can get lost or out of date). And it gives help in a format that users expect for any other PowerShell command.

Parameter handling in PowerShell is extremely versatile. Through the advanced parameter options you can create parameter sets, specify which parameters are mandatory, perform data validation, accept input from the pipeline, and much more. All of this is controlled via the parameter definition; there is no need to write code to validate parameters or ensure valid parameter combinations. PowerShell does the heavy lifting for you.
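As a hedged illustration of those features (the function, parameter, and set names here are hypothetical, not taken from the script itself), a param() block combining parameter sets, mandatory parameters, validation, and pipeline input might look like this:

```powershell
function Get-Thing {
    [CmdletBinding(DefaultParameterSetName = 'ByName')]
    param(
        # Mandatory in the 'ByName' set; accepts pipeline input
        [Parameter(Mandatory, ParameterSetName = 'ByName',
                   ValueFromPipeline = $true)]
        [ValidateNotNullOrEmpty()]
        [string[]]$Name,

        # A second, mutually exclusive parameter set with range validation
        [Parameter(Mandatory, ParameterSetName = 'ById')]
        [ValidateRange(1, 9999)]
        [int]$Id
    )
    process {
        "Parameter set in use: $($PSCmdlet.ParameterSetName)"
    }
}
```

PowerShell then rejects calls that mix -Name and -Id, and Get-Help shows each parameter set as a separate syntax, with no validation code written by hand.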

My focus will be on the certificate-management portions of the script and on outlining the scenario that this script attempts to support.

The Scenario

We have a set of devices that require a device-specific certificate to be installed. We have a scripted process for creating, publishing, and installing these certificates. The certificates are created in bulk for a large number of devices, then exported to PFX files and copied to a folder shared by a web server. Each device can then download its PFX file and import it into the local certificate store on the device. The devices and the certificates follow a standardized naming scheme (e.g. DEV###), which makes it easy to identify which certificate belongs to which device.

The certificate lifecycle is an unmanaged process. There is no policy mechanism to ensure that the device has installed the proper certificate or that the installed certificate is correct and valid. Occasionally we have problems where the installed certificate is not working properly or the PFX file published to the web server does not match the certificate issued by the CA. To troubleshoot these issues we need to be able to verify the certificates on the device and in the PFX files published on the web server.

The solution

This script looks for certificates in one of three locations: the certificate broker (web server), the local certificate store, or PFX files stored in the file system. In all cases, the output is the same for each certificate found. The script displays some basic information about the certificate and then checks that each certificate in the validity chain is still valid.

Example 1 – check the published PFX file for a device

This was the first scenario I needed to solve for. The script takes the specified device name and attempts to download the matching PFX file from the certificate broker.

PS C:\> .\check-devicecert.ps1 -devices DEV101

You can specify one or more device IDs as an array of strings.

Example 2 – search for the device certificate in the local store

The script takes the specified device name and searches for a certificate with a matching Subject name in the local certificate store.

PS C:\> .\check-devicecert.ps1 -devices DEV101 -local

You can specify one or more device IDs as an array of strings.

Example 3 – load a PFX file from disk

The script loads the specified PFX file(s) from disk.

PS C:\> .\check-devicecert.ps1 -pfxfiles .\DEV113.pfx
PS C:\> dir *.pfx | .\check-devicecert.ps1

You can specify one or more PFX filenames as an array of strings. You can also pass an array of files on the pipeline.

For examples #1 and #3 we are working with PFX files. The first step is to obtain the contents of the PFX file as an array of bytes so that we can create an X.509 certificate object. To download the PFX file from the certificate broker we do the following:

$client = New-Object -TypeName System.Net.WebClient
$url = "https://certbroker.contoso.com/pfxshare/$($num).pfx"
$pfxBytes = $client.DownloadData($url)

The DownloadData() method of System.Net.WebClient does this nicely for us.

To load a PFX file from disk I use the Get-Content cmdlet and specify that I want Byte encoding.

$pfxBytes = Get-Content -path $file -encoding Byte -ErrorAction:SilentlyContinue
if ($error.count -ne 0) {throw $error}

Also, note the ErrorAction parameter. For some reason, exceptions occurring inside of Get-Content were not being caught by my Try-Catch block. I had to override the ErrorAction to force Get-Content to continue silently, check to see if an error occurred, then re-throw the exception so that it would get caught by my Try-Catch block.
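Another option worth knowing (not the approach I used above) is -ErrorAction Stop, which promotes Get-Content's non-terminating errors into terminating exceptions that a try/catch sees directly, with no re-throw needed. A sketch, using a hypothetical file path:

```powershell
$file = '.\missing.pfx'   # hypothetical path for illustration
try {
    ## Stop promotes the cmdlet's non-terminating error to a terminating
    ## exception, so the catch block handles it without the re-throw dance
    $pfxBytes = Get-Content -Path $file -Encoding Byte -ErrorAction Stop
}
catch {
    "Could not read ${file}: $($_.Exception.Message)"
}
```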

Once I had the Byte array containing the PFX-formatted data blob I needed to import it into an X.509 certificate object.

function import-pfxbytes {
  param($pfxBytes)
  ## Import cert into a new object. No need to import it into a certificate store.
  $pfxPass = 'pFxP@$5w0rd'
  $X509Cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2
  $X509Cert.Import([byte[]]$pfxBytes, $pfxPass,"Exportable,PersistKeySet")
  return $X509Cert
}

The import-pfxbytes function creates an empty X.509 certificate object, imports the data using a static password, and returns the certificate object. In this case, I have hard-coded the password. For better security, you should prompt the user to enter a password (for example, using Read-Host -AsSecureString).
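A prompting variant might look like the sketch below (the function name is hypothetical). The Import() method has an overload that accepts a SecureString, so the password never has to appear in the script:

```powershell
function import-pfxbytes-prompt {
    param([byte[]]$pfxBytes)
    ## Ask the operator for the password instead of hard-coding it;
    ## X509Certificate2.Import() accepts the SecureString directly
    $pfxPass = Read-Host -Prompt 'PFX password' -AsSecureString
    $X509Cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2
    $X509Cert.Import($pfxBytes, $pfxPass, "Exportable,PersistKeySet")
    return $X509Cert
}
```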

For example #2 I am using PowerShell’s built-in provider to access the local certificate store. With this access method, you receive a certificate object, not a PFX-formatted data blob. Once I have an X.509 certificate object I pass it to show-certinfo to inspect the important properties and verify the validity of the trust chain.
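As a sketch (the device name is hypothetical), the certificate provider exposes the store as a Cert: drive, so the lookup is just Get-ChildItem with a filter:

```powershell
## Search the local machine Personal store for a device certificate;
## the Cert: drive is supplied by PowerShell's built-in certificate provider
$certs = @(Get-ChildItem -Path Cert:\LocalMachine\My -ErrorAction SilentlyContinue |
    Where-Object { $_.Subject -like 'CN=DEV101*' })
foreach ($cert in $certs) {
    '{0}  {1}' -f $cert.Thumbprint, $cert.Subject
}
```

Each item returned is already an X509Certificate2 object, ready to hand to show-certinfo.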

<#
.SYNOPSIS
Checks a device certificate for validity. 

.DESCRIPTION
The script downloads a device certificate PFX file from the cert broker or reads an existing PFX file, then checks its validity.

.PARAMETER devices
An array of device numbers.

.PARAMETER local
Indicates that you want to search the local certificate store.

.PARAMETER pfxfiles
An array of pathnames to PFX files stored on disk.

.INPUTS
You can provide an array of PFX file names on the pipeline.

.EXAMPLE
PS C:\> .\check-devicecert.ps1 -devices DEV101
    ==================================================
    Downloading DEV101.pfx

    Subject      : CN=DEV101@contoso.com
    Issuer       : CN=Contoso Corporate Enterprise CA 02, DC=contoso, DC=com
    NotBefore    : 5/2/2013 12:39:58 PM
    NotAfter     : 5/1/2017 12:39:58 PM
    SerialNumber : 27DC85E200060000B6D2

    Validating certificate chain...

    Valid   Certificate
    -----   -----------
    True    CN=DEV101@contoso.com
    True    CN=Contoso Corporate Enterprise CA 02, DC=contoso, DC=com
    True    CN=Contoso Corporate Root CA, O=CONTOSO
    ==================================================

The example above illustrates downloading the PFX file from the certificate broker and checking its validity.
.EXAMPLE
PS C:\> .\check-devicecert.ps1 -devices DEV369,DEV123
    ==================================================
    Downloading DEV369.pfx

    Subject      : CN=DEV369@contoso.com
    Issuer       : CN=Contoso Corporate Enterprise CA 02, DC=contoso, DC=com
    NotBefore    : 5/2/2013 3:37:14 PM
    NotAfter     : 5/1/2017 3:37:14 PM
    SerialNumber : 287ED09B00060000CD63

    Validating certificate chain...

    Valid   Certificate
    -----   -----------
    True    CN=DEV369@contoso.com
    True    CN=Contoso Corporate Enterprise CA 02, DC=contoso, DC=com
    True    CN=Contoso Corporate Root CA, O=CONTOSO
    ==================================================
    Downloading DEV123.pfx
    Error downloading DEV123 - The remote server returned an error: (404) Not Found.

The example above illustrates downloading multiple PFX files from the certificate broker and checking their validity.
.EXAMPLE
PS C:\temp> .\check-devicecert.ps1 -devices DEV101 -local
    ==================================================
    Reading Cert:LocalMachine\My\584C772D4E9EAA9F5858742B2AE4F3E9A0D602C7

    Subject      : CN=DEV101@contoso.com
    Issuer       : CN=Contoso Corporate Enterprise CA 02, DC=contoso, DC=com
    NotBefore    : 5/2/2013 12:39:58 PM
    NotAfter     : 5/1/2017 12:39:58 PM
    SerialNumber : 27DC85E200060000B6D2

    Validating certificate chain...

    Valid   Certificate
    -----   -----------
    True    CN=DEV101@contoso.com
    True    CN=Contoso Corporate Enterprise CA 02, DC=contoso, DC=com
    True    CN=Contoso Corporate Root CA, O=CONTOSO
    ==================================================

The example above searches for a certificate in the local certificate store and tests its validity.
.EXAMPLE
PS C:\temp> .\check-devicecert.ps1 -pfxfiles .\DEV113.pfx
    ==================================================
    Reading .\DEV113.pfx

    Subject      : CN=DEV113@contoso.com
    Issuer       : CN=Contoso Corporate Enterprise CA 02, DC=contoso, DC=com
    NotBefore    : 5/2/2013 3:30:35 PM
    NotAfter     : 5/1/2017 3:30:35 PM
    SerialNumber : 2878BAEA00060000CCAD

    Validating certificate chain...

    Valid   Certificate
    -----   -----------
    True    CN=DEV113@contoso.com
    True    CN=Contoso Corporate Enterprise CA 02, DC=contoso, DC=com
    True    CN=Contoso Corporate Root CA, O=CONTOSO
    ==================================================

The example above checks the validity of an existing, locally stored PFX file.
.EXAMPLE
PS C:\temp> dir *.pfx | .\check-devicecert.ps1
    ==================================================
    Reading C:\temp\DEV113.pfx

    Subject      : CN=DEV113@contoso.com
    Issuer       : CN=Contoso Corporate Enterprise CA 02, DC=contoso, DC=com
    NotBefore    : 5/2/2013 3:30:35 PM
    NotAfter     : 5/1/2017 3:30:35 PM
    SerialNumber : 2878BAEA00060000CCAD

    Validating certificate chain...

    Valid   Certificate
    -----   -----------
    True    CN=DEV113@contoso.com
    True    CN=Contoso Corporate Enterprise CA 02, DC=contoso, DC=com
    True    CN=Contoso Corporate Root CA, O=CONTOSO
    ==================================================

The example above checks the validity of all the PFX files stored in a folder.
#>

[CmdletBinding(DefaultParameterSetName="names")]
param (
       [parameter(ParameterSetName="names",Position=0,Mandatory=$true,
        ValueFromPipeline=$false,HelpMessage="Enter device number, Ex DEV123")]
       [string[]]$devices,
       [parameter(ParameterSetName="names",Position=1,Mandatory=$false,
        ValueFromPipeline=$false,HelpMessage="Look for certificate in the local certificate store.")]
       [switch]$local,
       [parameter(ParameterSetName="files",Position=0,Mandatory=$true,
        ValueFromPipeline=$true,HelpMessage="Enter PFX file name, Ex C:\folder\DEV123.pfx")]
       [string[]]$pfxfiles
      )

function import-pfxbytes {
   param($pfxBytes)
   ## Import cert into a new object. No need to import it into a certificate store.
   $pfxPass = 'pFxP@$5w0rd'
   $X509Cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2
   $X509Cert.Import([byte[]]$pfxBytes, $pfxPass,"Exportable,PersistKeySet")
   return $X509Cert
}
function show-certinfo {
   param($cert)
   $cert | Select-Object -property Subject,Issuer,NotBefore,NotAfter,SerialNumber

   $certChain = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Chain
   ## Set the revocation policy before building the chain so it takes effect
   $certChain.ChainPolicy.RevocationFlag = "EntireChain"
   $certChain.ChainPolicy.RevocationMode = "Online"
   $result = $certChain.Build($cert)
   Write-Host -Object "Validating certificate chain..." -foreground black -background yellow
   "`r`nValid`tCertificate"
   "-----`t-----------"
   foreach ($element in $certChain.ChainElements) {
       "{0}`t{1}" -f $element.Certificate.Verify(),$element.Certificate.Subject
   }
}

$Error.Clear()
("=" * 50)

try {
    switch ($PsCmdlet.ParameterSetName) {
        "names" {
            $client = New-Object  -TypeName System.Net.WebClient
            foreach ($num in $devices) {
                if ($local) {
                    $certs = @(Get-ChildItem -Recurse -Path Cert: | Where-Object { $_.Subject -like "CN=$num*" })
                    if ($certs.count -eq 0) {
                        "No matching certificates found in the local store."
                        continue
                    }
                    foreach ($cert in $certs) {
                        $certpath = $cert.pspath -replace 'Microsoft.PowerShell.Security\\Certificate::',"Cert:"
                        Write-host -Object "Reading $certpath"  -foreground black -background yellow
                        show-certinfo($cert)
                        ("=" * 50)
                    }
                }
                else {
                    $url = "https://certbroker.contoso.com/pfxshare/$($num).pfx"
                    Write-host -Object "Downloading $num.pfx" -foreground black -background yellow
                    $pfxBytes = $client.DownloadData($url)
                    $cert = import-pfxbytes($pfxBytes)
                    show-certinfo($cert)
                    ("=" * 50)
                }
            }
        }

        "files" {
            foreach ($file in $pfxfiles) {
                Write-host -Object "Reading $file" -foreground black -background yellow
                $pfxBytes = Get-Content -path $file -encoding Byte -ErrorAction:SilentlyContinue
                if ($error.count -ne 0) {throw $error}
                $cert = import-pfxbytes($pfxBytes)
                show-certinfo($cert)
                ("=" * 50)
            }
        }
    }
}
catch {
    $_.Exception.Message
    $_.InvocationInfo.PositionMessage
}
Posted in PowerShell, Scripting

Throwback Thursday – Windows IT Pro Magazine

A few years ago I had an opportunity to contribute articles to Windows IT Pro Magazine (now known as ITPro Today). The articles are still available online.

Monitor System Startup Performance in Windows 7
Jul. 26, 2010 | https://www.itprotoday.com/windows-78/monitor-system-startup-performance-windows-7

An Easier Way to View Incoming WMI Queries
Sep. 24, 2010 | https://www.itprotoday.com/windows-78/easier-way-view-incoming-wmi-queries

Reap the Power of MPS_Reports Data
Mar. 23, 2009 | https://www.itprotoday.com/cloud-computing/reap-power-mpsreports-data

Posted in Throwback Thursday

Using PowerShell and EWS to monitor a mailbox

I support a suite of application services that implement our ITIL processes. One of the functions allows users to create trouble tickets by sending a specially crafted email message to a specific email address. The application has a service that polls that mailbox once a minute to retrieve those messages and create new Incidents. Periodically, that email polling service stops working, causing messages to queue up in the mailbox. The process is still running and providing other functions, but it is no longer processing the inbound messages. We have monitoring on the system to watch that service and alert us when it hangs or crashes. However, since the service is still running, we never get alerted.

I needed another way to monitor for this problem. What if I could create a script that would check the mailbox to see if there were any messages in the Inbox that had arrived more than a few minutes ago? Since the service polled the mailbox every minute, any message more than five minutes old would indicate that the polling service had stopped functioning. But how can you read the Inbox?

In the past, I have used VBScript to automate Outlook and manage email on my desktop. However, I didn’t want to install Outlook on the server. Installing Outlook incurs licensing costs and is way more overhead than I really need. That also means that I need to manage patching for Outlook and other Office components on a server which we don’t normally do in our environment. Searching the internet I found some scriptable POP3 and IMAP clients. Some were commercial and some open source. But the cost of these options (in money and learning curve) was too high and the supportability was questionable.

We use Microsoft Exchange for our email services. I know that PowerShell is now the preferred method for most management tasks in Exchange. This led me to search for Exchange PowerShell options.

Enter Exchange Web Services (EWS) and its friendly .NET managed API. Installing and using the EWS Managed API is simple. It can be installed on any Windows machine and does not require any other Exchange components. Simply download the MSI package and install it.

I am running this script as a scheduled task on Windows Server 2008 R2. The scheduled task runs once per hour and is configured to run with domain credentials that have access to the target mailbox. The script uses EWS to access the mailbox and check for stale messages. It also uses Net.Mail.SmtpClient to send alert messages. I could have used EWS to send the alert message but Net.Mail.SmtpClient is so much easier to use.

param($mailboxName = "new-tickets@contoso.com",
      $smtpServerName = "smtp.contoso.com",
      $emailFrom = "monitorservice@contoso.com",
      $emailTo = "support@contoso.com"
     )

# Load the EWS Managed API
Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\2.2\Microsoft.Exchange.WebServices.dll"

try {
  $Exchange2007SP1 = [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2007_SP1
  $Exchange2010    = [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010
  $Exchange2010SP1 = [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP1
  $Exchange2010SP2 = [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP2
  $Exchange2013    = [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2013
  $Exchange2013SP1 = [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2013_SP1

  # create EWS Service object for the target mailbox name
  $exchangeService = New-Object -TypeName Microsoft.Exchange.WebServices.Data.ExchangeService -ArgumentList $Exchange2010SP2
  $exchangeService.UseDefaultCredentials = $true
  $exchangeService.AutodiscoverUrl($mailboxName)

  # bind to the Inbox folder of the target mailbox
  $inboxFolderName = [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Inbox
  $inboxFolder = [Microsoft.Exchange.WebServices.Data.Folder]::Bind($exchangeService,$inboxFolderName)

  # Optional: reduce the query overhead by viewing the inbox 10 items at a time
  $itemView = New-Object -TypeName Microsoft.Exchange.WebServices.Data.ItemView -ArgumentList 10
  # search the mailbox for messages older than 15 minutes
  $dateTimeItem = [Microsoft.Exchange.WebServices.Data.ItemSchema]::DateTimeReceived
  $15MinutesAgo = (Get-Date).AddMinutes(-15)
  $searchFilter = New-Object -TypeName Microsoft.Exchange.WebServices.Data.SearchFilter+IsLessThanOrEqualTo -ArgumentList $dateTimeItem,$15MinutesAgo
  $foundItems = $exchangeService.FindItems($inboxFolder.Id,$searchFilter,$itemView)

  # report the results via email and Application event log
  $entryType = "Information"
  $messageBody = "Self-service mailbox scan completed at {0}.`r`n" -f (get-date -format "MM/dd/yyyy hh:mm:ss")
  if ($foundItems.TotalCount -ne 0) {
    $entryType = "Warning"
    $subject = "Self-service mailbox hung"
    $messageBody  = "Inbox has {0} message(s) that are more than 15 minutes old.`r`n" -f $foundItems.TotalCount
    $messageBody += "Inbox has {0} message(s) total.`r`n`r`n" -f $inboxFolder.TotalCount
    $messageBody += "Please restart the Email Engine on SERVER01`r`n"
    $messageBody += "Self-service mailbox scan completed at {0}.`r`n" -f (get-date -format "MM/dd/yyyy hh:mm:ss")
    $messageBody += "Script run from $env:computername`r`n"
    $smtpClient = New-Object -TypeName Net.Mail.SmtpClient -ArgumentList $smtpServerName
    $smtpClient.Send($emailFrom, $emailTo, $subject, $messageBody)
  }
  Write-EventLog -LogName "Application" -Source "Application" -EventId 1 -Category 4 -EntryType $entryType -Message $messageBody
}
catch
{
  $entryType = "Error"
  $subject = "Error in mailbox monitor script"
  $messageBody = "{0}`r`n{1}" -f $_.Exception.Message,$_.InvocationInfo.PositionMessage
  Write-EventLog -LogName "Application" -Source "Application" -EventId 1 -Category 4 -EntryType $entryType -Message $messageBody
  $smtpClient = New-Object -TypeName Net.Mail.SmtpClient -ArgumentList $smtpServerName
  $smtpClient.Send($emailFrom, $emailTo, $subject, $messageBody)
}
Posted in Scripting

Fun with Regular Expressions

A while back a friend of mine mentioned that he could not find a regular expression that was capable of parsing Windows Performance counter strings. He said that it couldn’t be done with regex alone and he had written a lot of code to manually parse the strings. That sounded like a challenge to me. I had recently been working on a project where I needed regular expressions to find and clean up text that I was extracting from a large database. I had spent a lot of time learning what I could about regex to make the job easier. Along the way, I found a great tool called Expresso.

[Screenshot: Expresso showing the parsed results of this regex]

Expresso is a power tool for developing and testing regular expressions. In just a few minutes I was able to create a regex that fit the bill. Since he was writing code in PowerShell to process these performance counters I sent him this proof of concept.

$ctrs = (
  '\\IDCWEB1\Processor(_Total)\% Processor Time',
  '\Paging File(\??\C:\pagefile.sys)\% Usage Peak',
  '\MSSQL$SQLServer:Memory Manager\Total Server Memory (KB)',
  '\\BLACKVISE\Paging File(\??\C:\pagefile.sys)\% Usage Peak',
  '\Category(Instance(x))\Counter (x)',
  '\SQLServer:Latches\Latch Waits/sec (ms)'
)

$pattern = '(?<srv>\\\\[^\\]*)?\\(?<obj>[^\(^\)]*)(\((?<inst>.*(\(.*\))?)\))?\\(?<ctr>.*\s?(\(.*\))?)'

foreach ($ctr in $ctrs) {
  if ($ctr -match $pattern) {
    "Server = " + $matches["srv"]
    "Object = " + $matches["obj"]
    "Instance = " + $matches["inst"]
    "Counter = " + $matches["ctr"]
    ""
  }
}

Here is the output :

Server = \\IDCWEB1
Object = Processor
Instance = _Total
Counter = % Processor Time

Server =
Object = Paging File
Instance = \??\C:\pagefile.sys
Counter = % Usage Peak

Server =
Object = MSSQL$SQLServer:Memory Manager
Instance =
Counter = Total Server Memory (KB)

Server = \\BLACKVISE
Object = Paging File
Instance = \??\C:\pagefile.sys
Counter = % Usage Peak

Server =
Object = Category
Instance = Instance(x)
Counter = Counter (x)

Server =
Object = SQLServer:Latches
Instance =
Counter = Latch Waits/sec (ms)
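For readability, the same pattern can be rewritten with the inline (?x) option (IgnorePatternWhitespace) so each capture group carries a comment; this form should be equivalent to the one-liner above:

```powershell
# Same counter-path pattern, annotated group by group
$pattern = [regex]'(?x)
    (?<srv>\\\\[^\\]*)?           # optional \\SERVER prefix
    \\(?<obj>[^\(^\)]*)           # object (category) name
    (\((?<inst>.*(\(.*\))?)\))?   # optional (instance), possibly nested
    \\(?<ctr>.*\s?(\(.*\))?)      # counter name, possibly with (units)
'
$m = $pattern.Match('\\IDCWEB1\Processor(_Total)\% Processor Time')
$m.Groups['srv'].Value   # \\IDCWEB1
$m.Groups['inst'].Value  # _Total
```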

By the way, the PowerShell script he was writing was part of PAL. Check out Clint’s incredible performance analysis tool.

Posted in Scripting