This year has been full of changes for me. One of the biggest changes is that my job requires me to use Git and GitHub for almost all of my work. Before this new job, I had never used Git. By default, the Git installer installs the bash command shell, and most of the documentation is written assuming that you are using bash. However, I prefer to work in PowerShell. In this article I will show how I set up my environment to enable Git functionality in PowerShell. This is not meant to be a tutorial on using Git but, rather, an example of what works for me and for my workflow.
Download and install Git for Windows
First thing is to install Git for Windows.
Download and run the Git for Windows installer. As you step through the installation wizard you are presented with several options. The following is a list of the options on each page of the installation wizard with the reasoning behind my choice.
- The Select Components page
  - Check Git LFS (Large File Support)
  - Check Associate .git* configuration files with the default text editor
  - Check Use a TrueType font in all console windows. I prefer the TrueType font Consolas as my monospaced font for command shells and code editors.
- The Choosing the default editor used by Git page
  - Select Use Visual Studio Code as Git's default editor. VS Code does everything.
- The Adjusting your PATH environment page
  - Select Use Git from the Windows Command Prompt. This adds the Git tools to your PATH so that Git works from Cmd, PowerShell, or bash.
- The Choosing HTTPS transport backend page
  - Select Use the native Windows Secure Channel library.
- The Configure the line ending conversions page
  - Select Checkout Windows-style, commit Unix-style line endings. This is the recommended setting on Windows and provides the most compatibility for cross-platform projects.
- The Configuring the terminal emulator to use with Git bash page
  - Select Use Windows' default console window. This is the console that PowerShell uses and works best with other Windows console-based applications.
- The Configuring extra options page
  - Check Enable file system caching. This option is checked by default. Caching improves the performance of certain Git operations.
  - Check Enable Git Credential Manager. The Git Credential Manager for Windows (GCM) provides secure Git credential storage for Windows, including multi-factor authentication support for Visual Studio Team Services, Team Foundation Server, and GitHub. Enabling GCM prevents Git from prompting for your credentials on nearly every operation. For more information, see the GCM documentation on GitHub.
  - Check Enable symbolic links.

These are the options I chose. You may have different requirements in your environment.
Install the Posh-Git module
Now that we have the Git client installed, we need to enable Git functionality for PowerShell. I use the Posh-Git module from the PowerShell Gallery. For more information about Posh-Git, see Posh-Git on GitHub.
If you have PowerShellGet installed, just run:

```powershell
Install-Module posh-git
```
Alternatively, you can install Posh-Git manually using the instructions in the README.MD in the GitHub repository.
Once Posh-Git is installed you need to integrate Git into your PowerShell environment. Posh-Git includes an example profile script that you can adapt to your needs.
Integrate Git into your PowerShell environment
Integrating Git into PowerShell is simple. There are three main things to do:
- Load the Posh-Git module
- Start the SSH Agent Service
- Configure your prompt to show the Git status
Add the following lines to your PowerShell profile script.
```powershell
Import-Module posh-git
Start-SshAgent -Quiet

function global:prompt {
    $identity = [Security.Principal.WindowsIdentity]::GetCurrent()
    $principal = [Security.Principal.WindowsPrincipal] $identity
    $name = ($identity.Name -split '\\')[1]
    $path = Convert-Path $executionContext.SessionState.Path.CurrentLocation
    $prefix = "($env:PROCESSOR_ARCHITECTURE)"

    if ($principal.IsInRole([Security.Principal.WindowsBuiltInRole]'Administrator')) {
        $prefix = "Admin: $prefix"
    }
    $realLASTEXITCODE = $LASTEXITCODE
    $prefix = "Git $prefix"
    Write-Host ("$prefix[$Name]") -NoNewline
    Write-VcsStatus
    ("`n$('+' * (Get-Location -Stack).Count)") + "PS $($path)$('>' * ($nestedPromptLevel + 1)) "
    $global:LASTEXITCODE = $realLASTEXITCODE
    $host.UI.RawUI.WindowTitle = "$prefix[$Name] $($path)"
}
```
The prompt function integrates Git into your PowerShell prompt to show an abbreviated git status. See the README for Posh-Git for a full explanation of the abbreviated status. I have also customized my prompt to show my user context, whether I am running in a 64-bit or 32-bit shell, and whether I am running elevated. Customize this function to meet your needs or preferences.
At this point you are done. You can use Git from PowerShell. Go forth and clone a repo.
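For example, cloning a repository from your new Git-enabled PowerShell prompt (the repository URL here is just an illustration):

```powershell
git clone https://github.com/PowerShell/PowerShell.git
Set-Location .\PowerShell
```

Once you change into the repository directory, the Posh-Git prompt shows the branch and status information.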
Customize your Git environment
You may want to customize some of the settings of your Git environment, especially if this is a new install of Git. To be a good project contributor, you should identify yourself so that Git knows whom to blame for your commits. Also, I found that the default colors used by Git in the shell could be hard to read, so I customized the colors to make them more visible. For more information, see the Customizing Git topic in the Git documentation.
The following commands only need to be run once. You are setting global preferences so, once they are set, they are used every time you start a new shell.
```powershell
# Configure your user information to match your GitHub profile
git config --global user.name "John Doe"
git config --global user.email "alias@example.com"

# Set up the colors to improve visibility in the shell
git config --global color.ui true
git config --global color.status.changed "magenta bold"
git config --global color.status.untracked "red bold"
git config --global color.status.added "yellow bold"
git config --global color.status.unmerged "yellow bold"
git config --global color.branch.remote "magenta bold"
git config --global color.branch.upstream "blue bold"
git config --global color.branch.current "green bold"
```
As I said at the beginning, this is what works for me. Your mileage may vary. Customize this for your preferences and environmental needs.
In future articles, I plan to share scripts I have created to help me with my Git workflow. Do you use Git with PowerShell? Share your questions and experiences in the comments.
Working with certificates in PowerShell
Overview
Today’s script is an attempt to bring together several things I have learned about writing good PowerShell scripts. I still have a lot to learn and this is not necessarily a sterling example of best practices. However, it does illustrate some more advanced scripting topics, including:

- Comment-based help
- Advanced parameter handling
- Certificate management
I rarely use comment-based help in my scripts since I am usually writing scripts for my own use. They tend to be one-off utilities designed to fulfill an immediate need. This script, however, is going to be used by other support technicians outside of my immediate team. So documentation was important. Comment-based help allows you to include documentation in the script (rather than a separate file that can get lost or out of date). And it gives help in a format that users expect for any other PowerShell command.
Parameter handling in PowerShell is extremely versatile. Through the advanced parameter options, you can create parameter sets, specify which parameters are mandatory, perform data validation, define input from the pipeline, and much more. All of this controlled via parameter definition. No need to write code to validate parameters or ensure valid parameter combinations. PowerShell does the heavy lifting for you.
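As a sketch of what this looks like in practice (the parameter names below are illustrative, not the actual script's), a `param` block can declare parameter sets, mandatory parameters, validation, and pipeline input declaratively:

```powershell
[CmdletBinding(DefaultParameterSetName = 'Broker')]
param(
    # Device IDs following the standardized naming scheme (e.g. DEV001)
    [Parameter(Mandatory, ParameterSetName = 'Broker', Position = 0)]
    [Parameter(Mandatory, ParameterSetName = 'Store', Position = 0)]
    [ValidatePattern('^DEV\d{3}$')]
    [string[]]$DeviceID,

    # PFX files to load from disk; accepts input from the pipeline
    [Parameter(Mandatory, ParameterSetName = 'File', ValueFromPipeline)]
    [string[]]$Path,

    # Search the local certificate store instead of the broker
    [Parameter(ParameterSetName = 'Store')]
    [switch]$LocalStore
)
```

PowerShell rejects invalid parameter combinations and malformed device IDs before any of the script body runs.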
My focus will be on the certificate management portions of the script and on outlining the scenario that this script is attempting to support.
The Scenario
We have a set of devices that require a device-specific certificate to be installed. We have a scripted process for creating, publishing, and installing these certificates. The certificates are created in bulk for a large number of devices. These certificates are then exported to PFX files and copied to a folder shared by a web server. The device can then download its PFX file and import it into the local certificate store on the device. The devices and the certificates follow a standardized naming scheme (for example, DEV###), which makes it easy to identify which certificate belongs to which device.
The certificate lifecycle is an unmanaged process. There is no policy mechanism to ensure that the device has installed the proper certificate or that the installed certificate is correct and valid. Occasionally we have problems where the installed certificate is not working properly or the PFX file published to the web server does not match the certificate issued by the CA. To troubleshoot these issues we need to be able to verify the certificates on the device and in the PFX files published on the web server.
The solution
This script looks for certificates in one of three locations: the certificate broker (web server), the local certificate store, or PFX files stored in the file system. In all cases, the output is the same for each certificate found. The script displays some basic information about the certificate and then checks that each certificate in the validity chain is still valid.
Example 1 – check the published PFX file for a device
This was the first scenario I needed to solve for. The script takes the specified device name and attempts to download the matching PFX file from the certificate broker.
You can specify one or more device IDs as an array of strings.
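A hypothetical invocation (the script name and parameters are placeholders for illustration):

```powershell
.\Test-DeviceCert.ps1 -DeviceID 'DEV001','DEV002'
```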
Example 2 – search for the device certificate in the local store
The script takes the specified device name and searches for a certificate with a matching Subject name in the local certificate store.
You can specify one or more device IDs as an array of strings.
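Again using placeholder names for illustration, this might look like:

```powershell
.\Test-DeviceCert.ps1 -DeviceID 'DEV001','DEV002' -LocalStore
```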
Example 3 – load a PFX file from disk
The script loads the specified PFX file(s) from disk.
You can specify one or more PFX filenames as an array of strings. You can also pass an array of files on the pipeline.
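With the same placeholder script name, both invocation styles look like:

```powershell
.\Test-DeviceCert.ps1 -Path '.\certs\DEV001.pfx','.\certs\DEV002.pfx'

# or pass the files on the pipeline
Get-ChildItem .\certs\*.pfx | .\Test-DeviceCert.ps1
```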
For examples #1 and #3 we are working with PFX files. The first step is to obtain the contents of the PFX file as an array of bytes so that we can create an X.509 certificate object. To download the PFX file from the certificate broker we do the following:
The DownloadData() method of System.Net.WebClient does this nicely for us.
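A minimal sketch of the download, assuming a broker URL pattern like the one below (the hostname and path are hypothetical):

```powershell
$webClient = New-Object System.Net.WebClient
$url = "https://certbroker.contoso.com/pfx/$deviceID.pfx"

# DownloadData returns the resource as a byte array
[byte[]]$pfxBytes = $webClient.DownloadData($url)
```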
To load a PFX file from disk I use the Get-Content cmdlet and specify that I want Byte encoding.
Also, note the ErrorAction parameter. For some reason, exceptions occurring inside of Get-Content were not being caught by my Try-Catch block. I had to override the ErrorAction to force Get-Content to continue silently, check to see if an error occurred, then re-throw the exception so that it would get caught by my Try-Catch block.
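The pattern looks roughly like this:

```powershell
$Error.Clear()

# -Encoding Byte reads raw bytes; -ReadCount 0 returns them as a single array
[byte[]]$pfxBytes = Get-Content -Path $pfxFile -Encoding Byte -ReadCount 0 -ErrorAction SilentlyContinue

if ($Error.Count -gt 0) {
    # Re-throw so the outer Try-Catch block can handle the failure
    throw $Error[0]
}
```

Note that `Get-Content -Encoding Byte` is Windows PowerShell syntax; in PowerShell 6 and later the equivalent is `-AsByteStream`.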
Once I had the Byte array containing the PFX-formatted data blob I needed to import it into an X.509 certificate object.
The import-pfxbytes function creates an empty X.509 certificate object, imports the data using a static password, and returns the certificate object. In this case, I have hard-coded the password. For better security, you should prompt the user to enter a password (for example, using Read-Host -AsSecureString).
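A minimal sketch of such a function (the password is a placeholder, and the function name matches the article's; the body is my reconstruction, not the author's exact code):

```powershell
function import-pfxbytes {
    param([byte[]]$PfxBytes)

    $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2

    # Hard-coded password for illustration only; prefer prompting with
    # Read-Host -AsSecureString in real use
    $cert.Import($PfxBytes, 'P@ssw0rd!', 'DefaultKeySet')
    $cert
}
```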
For example #2 I am using PowerShell’s built-in provider to access the local certificate store. With this access method, you receive a certificate object, not a PFX-formatted data blob. Once I have an X.509 certificate object I pass it to show-certinfo to inspect the important properties and verify the validity of the trust chain.
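For example, searching the personal store of the local machine might look like this (the store location and match logic are illustrative):

```powershell
# The Cert: drive exposes the certificate store as a PowerShell provider
$certs = Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.Subject -match $deviceID }
```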