Building PowerShell Functions – Best Practices

I spend a good deal of time wrapping common tasks into PowerShell functions. Here are a few best practices I’ve picked up along the way.  My apologies if I miss any attributions!

  • Write your function with one purpose.  Don’t build in everything but the kitchen sink.
    • This is one of PowerShell’s strengths – there are existing functions to work with input or output objects, or you can write your own set of functions to work together.
  • Follow naming conventions.
    • Use the Verb-Noun format, use an approved verb, and ensure that your Noun is unique and will not collide with another author’s function now or in the future.
      • Approved Verbs
      • Example:  I’m writing commands to work with a Hyper-V lab.  Set-Lab and Get-Lab are generic and may be used by another author.  I can add a prefix like HV to avoid this – Set-HVLab and Get-HVLab.
    • Use common parameter names and types as appropriate.
      • Example:  Use ComputerName to specify systems.  Do not use ComputerNames, Computer, PC, or any other non-standard parameter name.  If desired, provide an alias for the parameter.
  • Use the built-in comment-based help system.  At a minimum, provide a helpful synopsis, a description, help for every parameter, and at least one example.
  • Let PowerShell do the work for you.  Always use [CmdletBinding()], which allows you to take advantage of features like the following (the skeleton after this list shows several of them in context):
  • Use advanced parameters for validation, accepting pipeline input, specifying mandatory parameters, and other functionality, where possible and appropriate.
  • Provide flexibility with your parameters.  Provide default values, allow arrays instead of single objects, allow wildcards, and provide other helpful parameter features.
    • Example:  [string[]]$ComputerName = $env:COMPUTERNAME is more helpful than [string]$ComputerName
  • Document your code for yourself, readers, and users.
    • Use Write-Verbose, Write-Debug, and Write-Error to provide insight at the shell.
    • Comment your code in everyday language for readers.  If you used a specific command or logic for a reason, explain why it was necessary and why changing it could break things.
    • Use full command names and full named parameters.  This makes the code more readable.  It also prevents issues that could arise if you rely on aliases or positional parameters.
  • Avoid dependencies.  This includes external scripts and modules, binaries, or features exclusive to PowerShell or .NET Framework versions.  If you must include dependencies, be sure to indicate this and provide appropriate error handling.
    • Example:  Get-ADGroupMember requires the ActiveDirectory module.  Instead of relying on this, include or write your own function.
    • Example:  To create a new object, use New-Object -TypeName PSObject -Property @{A=1; B="Two"} | Select-Object A, B instead of [PSCustomObject]@{A=1; B="Two"} to provide compatibility with PowerShell 2.
  • Provide error handling with helpful messages
  • Do not break the user’s environment.  Don’t touch the global scope.
  • Test, Test, Test!  Test any reasonable scenario your function might run under.
    • Test with and without a profile.  Test with 64-bit and 32-bit PowerShell hosts.  Test with the ISE and console host.  Test in a single-threaded apartment and a multi-threaded apartment.  Test with and without the administrative token, with and without actual administrative authority.
  • If your function provides output, use objects.
    • Do not output strings.  Do not use Write-Host.  Do not format the results.  You and your users will get the most out of PowerShell when you provide output as objects that can be passed down the pipeline to other commands.
    • Creating Custom Objects
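
Pulling several of these points together, here is a skeleton of an advanced function, using the hypothetical Get-HVLab example from above.  The work and the VM details it emits are placeholders; a real function would query Hyper-V in the process block.

    function Get-HVLab {
        <#
        .SYNOPSIS
            Gets virtual machines from one or more Hyper-V lab hosts.
        .EXAMPLE
            Get-HVLab -ComputerName Server1, Server2
        #>
        [CmdletBinding()]
        param(
            # Standard parameter name, pipeline support, validation, and a sensible default
            [Parameter(ValueFromPipeline = $true,
                       ValueFromPipelineByPropertyName = $true)]
            [ValidateNotNullOrEmpty()]
            [string[]]$ComputerName = $env:COMPUTERNAME
        )
        process {
            foreach ($Computer in $ComputerName) {
                Write-Verbose "Querying $Computer"
                # Real work would happen here; emit objects, not formatted text
                New-Object -TypeName PSObject -Property @{
                    ComputerName = $Computer
                    Name         = 'PlaceholderVM'
                } | Select-Object ComputerName, Name
            }
        }
    }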

Why bother?

  • Following these best practices will help you and the greater PowerShell community if you choose to share your code.
  • Your function will fit into the PowerShell world, enabling integration with the many technologies PowerShell can work with.
  • Your function will be usable by wider audiences, who may even provide suggestions and tweaks to help improve it.
  • Your function will be flexible and gracefully handle various scenarios you throw at it.
  • Your function will last.  If you avoided or accounted for dependencies, your function should withstand changes to PowerShell, the .NET Framework, and the user’s environment.
  • These practices apply to scripts as well.  You can follow the majority of them whether you are writing a script or a function.

Illustrating the best practices

We will look at Get-InstalledSoftware, a quick function that extracts installed software details from the registry.

Write your function with one purpose.

This function does one thing: get installed software.

Follow naming conventions.

Get-InstalledSoftware follows the Verb-Noun naming format, uses an approved verb, and uses typical parameter names such as ComputerName… but the function name is not unique.  In fact, there is another script out there with the same name.  Perhaps I should have chosen a better example!

Use the built-in comment-based help system.

The help provides a synopsis, a description that points out prerequisites, a description for each parameter, two examples, and a link that takes the user to the TechNet Gallery page when they run Get-Help Get-InstalledSoftware -Online.
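
The structure looks roughly like this – a sketch, not the published help verbatim, and the gallery URL is left out:

    <#
    .SYNOPSIS
        Get installed software from the registry of one or more computers.
    .DESCRIPTION
        Get installed software by reading the uninstall keys in the registry.
        Requires appropriate privileges, network connectivity, and the Remote
        Registry service on remote computers.
    .PARAMETER ComputerName
        One or more computers to query.  Defaults to the local computer.
    .PARAMETER DisplayName
        Regular expression used to filter on the software display name.
    .PARAMETER Publisher
        Regular expression used to filter on the software publisher.
    .EXAMPLE
        Get-InstalledSoftware
    .EXAMPLE
        'Server1', 'Server2' | Get-InstalledSoftware -Publisher Microsoft
    .LINK
        (TechNet Gallery URL here, which Get-Help -Online opens)
    #>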

Let PowerShell do the work for you.

The function uses [CmdletBinding()] and many of the features it enables.

Use advanced parameters

This function uses advanced parameters for ComputerName.  This allows input from the pipeline (e.g. an array of strings), input from the pipeline by property name (e.g. an array of objects with a ComputerName property), and validation that the argument is not null or empty.

Provide flexibility with your parameters.

ComputerName is given a default value of the local machine and allows an array of strings rather than a single string.  The Publisher and DisplayName parameters are used with the -match operator and can therefore accept regular expressions.
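
To see why -match matters, here is a small, runnable illustration using stand-in data; the property names mirror the registry uninstall values the function reads:

    # Stand-in data; the real function builds similar objects from the registry
    $software = @(
        New-Object -TypeName PSObject -Property @{ DisplayName = '7-Zip';          Publisher = 'Igor Pavlov' }
        New-Object -TypeName PSObject -Property @{ DisplayName = 'Microsoft Word'; Publisher = 'Microsoft Corporation' }
        New-Object -TypeName PSObject -Property @{ DisplayName = 'VMware Tools';   Publisher = 'VMware, Inc.' }
    )

    # Because the comparison uses -match, the filter is a regular expression
    $Publisher = 'Microsoft|VMware'
    $software | Where-Object { $_.Publisher -match $Publisher }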

Document your code for yourself, readers, and users.

The code uses Write-Verbose and Write-Error.  Comments explain what is happening.  The ‘help’ information describes prerequisites, and if connectivity fails, verbose output suggests where to start troubleshooting.  Aliases are not used in the function.

Avoid dependencies.

This code does depend on certain factors – privileges, connectivity, and the Remote Registry service.  These are detailed in the help information and in the verbose output.  The language used, including the way we create the output objects, is compatible with PowerShell v2.

Provide error handling with helpful messages

Try/Catch blocks capture errors where they are likely to occur, and are used in a way that allows processing to continue.  For example, if multiple computers are specified and one fails, we move on to the next computer (continue) rather than breaking execution of the command.
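
The pattern looks something like this – a sketch of the approach rather than the exact source, where $ComputerName comes from the parameter block:

    foreach ($Computer in $ComputerName) {
        try {
            Write-Verbose "Opening the remote registry on $Computer"
            $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $Computer)
        }
        catch {
            # Report the failure for this computer, then keep going
            Write-Error "Could not open the registry on ${Computer}: $_"
            continue
        }
        # ...collect and emit software details from $reg for this computer...
    }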

Do not break the user’s environment.

The global scope is not altered by this function.

Test, Test, Test!

This function was tested in a limited number of expected scenarios: with and without a profile, in the ISE and console host, and with and without the administrative token.

One scenario that illustrates the importance of testing is this command’s behavior in a 32-bit session on a 64-bit machine.  In this scenario, the function will miss 64-bit items, and will pull double copies of everything else (the native and Wow6432Node keys will point to the same location).  I added this to the description.  Ideally I should test for and handle this, but doing so would add undue overhead to a lightweight function for what I consider a niche scenario.

If your function provides output, use objects.

This function provides object based output.  Not text.  Not a CSV.  You can use the output with any number of built in or custom commands.
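
Because each result is an object, it drops straight into the rest of the pipeline.  For example – the server names here are placeholders, and the property names are assumptions based on the parameters discussed above:

    Get-InstalledSoftware -ComputerName Server1, Server2 |
        Where-Object { $_.DisplayName -match 'Office' } |
        Sort-Object ComputerName, DisplayName |
        Export-Csv -Path .\InstalledSoftware.csv -NoTypeInformation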

Get-InstalledSoftware in action

  • The end user can use the built-in Get-Help command for help.  The -Online switch takes you right to the TechNet Gallery page.
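
For example, from the console:

    # Full help at the console, or jump to the gallery page in a browser
    Get-Help Get-InstalledSoftware -Full
    Get-Help Get-InstalledSoftware -Online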

  • We can pass in multiple computers and filter Publisher and DisplayName using regular expressions
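
For example – the computer names and the patterns here are placeholders:

    'Server1', 'Server2', 'Server3' |
        Get-InstalledSoftware -Publisher 'Microsoft|VMware' -DisplayName 'SQL|Office'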

Helpful resources

The following resources will provide further help and suggestions for best practices when writing PowerShell.

Good luck!  If you do end up writing advanced functions, please consider posting them to websites like PoshCode, TechNet Script Gallery, CodePlex, or GitHub!

Why PowerShell?

Edit: This material has been adapted and augmented by Don Jones into the Why PowerShell? eBook.

I often find myself explaining why someone with responsibilities on the Windows side of the fence should learn PowerShell.  I decided to write this as a reference going forward.

I won’t be arguing for PowerShell over other Microsoft languages such as VBScript or batch, or general purpose languages such as Python or Perl.  There is a place for all of these languages, but if you work with the Microsoft and surrounding ecosystems, PowerShell is an important language to learn.

Why Scripting?

Before we dive into PowerShell itself, let’s tackle the importance of scripting and automation, an integral facet of PowerShell.

You’ve probably seen this XKCD comic or something similar to justify scripting.  While saving time is certainly a factor behind the importance of scripting and automation, it is hardly the only justification.  Here are a few others to consider:

  • Consistency.  A scripted solution will run the exact same script every time.  No risk of typos, forgetting to complete the task, or doing the task incorrectly.
  • Audit trail.  There are many tasks where having an audit trail would be helpful, perhaps including what task was performed, important results, errors that occurred, when the task ran, who ran it, and so forth.
  • Modular code.  I might spend more time on a particular function than time savings justify, but I can generally re-use or borrow ideas from the code later.
  • Documentation.  Is there documentation for the task?  Is it up to date?  A well written and commented script can generally serve as a helpful base level of documentation that might not exist for a manual task.
  • Education.  Scripting out a task will improve your scripting ability and potentially give you deeper insight into what you are doing than the black box of a GUI.
  • Motivation.  When I was starting out in support, an engineer asked me to help script out alerting, logging, and resolution of a few basic common issues we ran into.  This gave me the opportunity to learn more and grow.  Scripting is a great way to get folks to learn, assuming they want to.
  • Change of pace.  Repetition is not fun.  Removing or minimizing it will improve morale.
  • Delegation.  With a scripted solution, you can typically delegate more functions closer to the teams best equipped to handle them, giving you more time to focus on the important stuff.

The moral of the story is that scripting and automation are important, which is just one factor behind the value of learning PowerShell.

Why PowerShell?

Microsoft describes PowerShell as “a task-based command-line shell and scripting language… built on the .NET Framework.”  What is so great about PowerShell?  Why should you use it?

  • PowerShell is both a command-line shell and scripting language
    • Fight fires quickly using existing or custom PowerShell commands or scripts at the shell; there is no need to compile code.  Develop your code at the command line before creating a function or script around it.  Write quick and dirty scripts that you will use a single time or a handful of times.  Write formal, readable, production-level scripts that will maintain your services for years.
    • What is the cost of this investment?  Learning PowerShell.  Pretty reasonable, considering you will likely need to do so regardless of your current language of choice, assuming you work with the Microsoft ecosystem.
  • PowerShell can interact with a dizzying number of technologies.
    • .NET Framework, the Registry, COM, WMI, ADSI.  Exchange, SharePoint, System Center, Hyper-V, SQL Server.  VMware vCenter, Cisco UCS, Citrix XenApp and XenDesktop.  REST APIs, XML, CSV, JSON, websites, Excel and other Office applications.  C# and other languages, DLLs and other binaries, including *nix tools.  A language that can work with and integrate these various technologies can be incredibly valuable.
    • Windows is not text based.  Sooner or later you will need to do something that you can’t do with *nix tools and other text based languages.  Many of the technologies that PowerShell can interact with simply do not have text based interfaces, and may not even be directly accessible from more formal languages like Perl or Python.
  • PowerShell is object-based.
    • This gives us incredible flexibility.  Filter, sort, measure, group, compare or take other actions on objects as they pass through the pipeline (see the short example after this list).  Work with properties and methods rather than raw text.
    • If you have spent time deciphering and programmatically working with text based output, you know how frustrating it can be.  What delimiter do I split on?  Is there even a delimiter?  What if a particular result has a blank entry for a column?  Do I need to count characters in each column?  Will this count vary depending on the output?  With objects, this is all done for you, and makes it quite simple to tie together commands and data across various technologies.
  • Microsoft is putting its full weight behind PowerShell.
    • PowerShell isn’t going away.  It is a requirement in the Microsoft Common Engineering Criteria, and a Server product cannot be shipped without a PowerShell interface.
    • In many cases Microsoft uses it to build the GUI management consoles for its products.  Some tasks can’t be performed in the GUI and can only be completed in PowerShell.
    • Regardless of how far Microsoft shifts to the *aaS side of the spectrum, they support PowerShell for both on-premises and hosted solutions.
  • PowerShell can help anyone working in the Microsoft ecosystem
    • PowerShell is not just for systems administrators.
    • Douglas Finke wrote a great, quick read on what developers and others can get out of PowerShell.  His blog Development in a Blink, and Joel Bennett’s Huddled Masses provide helpful development-oriented PowerShell articles.  Given the object-based nature of PowerShell and tight integration with .NET and other technologies, this shouldn’t be surprising.
    • The Desktop side isn’t excluded.  Windows 7 and later include PowerShell.  Audit local administrators and other details across your domain.  Support end users without interruption.  Tie together various desktop tools that have a command line interface.  Build a GUI interface for your Support staff that can easily be modified by non-developers.
    • DBAs on the Windows side of the house benefit from PowerShell as well.  Gather an inventory of your SQL instances.  Monitor SQL performance.  Write tools that fit your exact needs rather than spending money on proprietary, not-tailored-to-you solutions.  Chad Miller of the now defunct Sev17 has written many posts on PowerShell for SQL.
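
As a quick illustration of that flexibility, here is the kind of one-liner that object-based output makes trivial:

    # Largest processes by memory – no text parsing required
    Get-Process |
        Where-Object { $_.WorkingSet64 -gt 100MB } |
        Sort-Object WorkingSet64 -Descending |
        Select-Object -First 5 Name, Id, WorkingSet64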

Where can I learn more?

There is a wealth of information on PowerShell.  I keep a running list of resources I’ve found helpful in learning and using PowerShell right here, covering cheat sheets, blogs, videos, books and more.

What resources you find the most helpful will depend on your experience, role, and preferences.

A closing example

Let’s illustrate one solution using PowerShell that ties together a number of technologies.

We use System Center Operations Manager (SCOM) as part of our monitoring solution.  We use VMware vCenter as our virtualization management solution.  If you have used SCOM, you know how sparse and bland the notifications can be.  Let’s take a look at an example:

You get basic information about the alert, and only the alert.  Nothing about the rule/monitor that generated it.  No suggestions on how to fix it.  Nothing to highlight important details, just plain text.  For this particular monitor, you don’t even get details on the disk space that triggered the low disk space alert!  If you try to foist this type of notification upon your teams, there is a good chance they won’t be happy.  You might even end up with a new monitoring solution.

We decided to use PowerShell as follows:

  • A PowerShell script regularly collects inventory details on servers from Active Directory and vCenter, populating a SQL database.
  • SCOM is configured to create a file based on the alert, rather than send an e-mail directly.  A customized PowerShell ‘Script Daemon’ watches the SCOM alert folder, and runs a PowerShell alert processing script for every alert generated.  If more than a specified number of alerts are generated in a short period, they are grouped into a single notification with a synopsis table and attached HTML details.  All alert processing is parallelized using a RunspacePool.
  • The PowerShell alert processing script lets us run anything we want based on the alert.  This means we can…
    • Gather context for every alert, including alert details from SCOM, the inventory database (e.g. who is responsible for the server, class of the server), and even the server itself (in this example, current disk space free, and disk space taken by certain folders).
    • Take action based on this information, perhaps adding or removing recipients, attempting to remediate and including results in the e-mail, adding troubleshooting suggestions, querying our change management database to determine whether any expected changes are underway, etc.
    • Build an HTML notification from the resulting objects and send it out via e-mail (a simplified sketch follows).
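
As a heavily simplified sketch of that final step – the recipients, servers, and property names below are placeholders, not our actual configuration:

    # Turn an enriched alert object into an HTML body and mail it
    $alertDetails = New-Object -TypeName PSObject -Property @{
        Server      = 'SERVER01'
        Alert       = 'Logical disk free space is low'
        Owner       = 'Server Team'
        FreeSpaceGB = 2.1
    }

    $body = $alertDetails |
        ConvertTo-Html -Property Server, Alert, Owner, FreeSpaceGB -As Table |
        Out-String

    Send-MailMessage -To 'team@example.com' -From 'scom@example.com' `
        -Subject "SCOM alert: $($alertDetails.Alert) on $($alertDetails.Server)" `
        -Body $body -BodyAsHtml -SmtpServer 'smtp.example.com'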

Remember the boring alert with no context?  The notifications we see now include all of that context in a formatted HTML message.

Good luck!  PowerShell can be a great benefit to you, your career, and your employer.