How Do I Learn PowerShell?

I often see questions on how to learn PowerShell. Rather than address these each time they come up, I figured it was time for a post. PowerShell is a critical skill for anyone working in IT on the Microsoft side of the fence. Anyone from a service desk associate to a printer admin to a DBA to a developer would benefit from learning it!

There’s no single answer to the question; reflecting on my own path, the following seems like a decent recipe for learning PowerShell. Long story short? Practice, practice, practice, build some formal knowledge, and participate in the community.

First Things First

There are books, videos, cheat sheets, and blog posts galore.  Before any of this, get a system up and running with PowerShell. I assume you can search for the details on how to do the following:

  • Download the latest Windows Management Framework and requisite .NET Framework – .NET 4.5 and WMF 4 at the moment. Even if you target a lower version of PowerShell, the newer ISE will make learning and everyday use more pleasant.
  • Set your execution policy as appropriate, keeping in mind it’s just a seatbelt (I use Bypass on my computers). You do read code before you run it, right?
  • Update your help. If using PowerShell 3 or later, launch PowerShell as an administrator and run Update-Help -Force (see the sketch after this list).
  • This is standard fare for any technology, but remind yourself of the importance of testing. Test against a test environment or target where possible. Consider all the corner cases and scenarios that your code should handle. Test one, few, many, in batches, and finally, all. Consider using read-only or verbose output before sending those results to commands that change things. Read up on -WhatIf and -Confirm, and where these don’t work. Don’t be that guy.

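If it helps, here’s a minimal sketch of those one-time setup steps (pick an execution policy that fits your environment and policies):

# Run from an elevated PowerShell session
Set-ExecutionPolicy RemoteSigned -Scope LocalMachine

# PowerShell 3 or later: download the latest help content
Update-Help -Force

# Confirm which version you're running
$PSVersionTable.PSVersion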

Okay, enough disclaimers. Start exploring and experimenting. A solid foundation built on books and other references is incredibly helpful, but experience is even more important. If you tell yourself “I’ll learn PowerShell when I have more time,” you’ve already lost. You learn through practice and experience; you’re not going to magically learn PowerShell in some boot camp, training program, or book if you don’t regularly use it in the real world.

I Only Learn By Doing!

Building a foundation of PowerShell knowledge is important. You should spend time working with the language, but this isn’t a language where you can ignore formal knowledge; your assumptions will bite you, cause frustration, and lead to poor code. This is not your standard shell or scripting language, for a few key reasons:

  • PowerShell is both a shell and a scripting language; the shell isn’t a slightly different environment where you might run into odd quirks that don’t expose themselves in a script, and the scripting side doesn’t have a more powerful language than the shell. They are one and the same. This leads to interesting design decisions. Shell users know that ‘>’ is a redirection operator. Developers and scripters know that ‘>’ is a comparison operator. This is one of several conflicts between shell and scripting expectations that makes PowerShell a bit unique, and it can inspire wharrgarbl even from talented developers.
  • PowerShell targets IT professionals and power users, rather than developers. A developer might expect the escape character to be ‘\’. If Microsoft had chosen this, we would need to escape every backslash in a path: “C:\\This\\Is\\A\\Pain”. Few IT professionals in the Microsoft ecosystem would use a language like that. Several design decisions like this may confuse seasoned developers. Both points are illustrated in the sketch after this list.
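To make both points concrete, a quick sketch – the escape character PowerShell chose is the backtick:

# Comparisons use named operators; '>' stays a redirection operator
5 -gt 3                      # True
Get-Process > process.txt    # Redirects output to a file

# The backtick escape leaves backslashes in paths alone
"C:\No\Escaping\Needed"
"Tab`tseparated"             # `t is a tab, `n a newline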

If you come in as an experienced scripter or developer, you might get serious heartburn if you make assumptions and don’t look at PowerShell for what it is. Even developers like Helge Klein (author of delprof2 and other tools) make this mistake. Several of my talented co-workers, more familiar with C# and .NET or with Python/Perl and various shell languages, have made this mistake as well. Like most languages, if you’re going to use it, you should spend some time with formal reading material, and you should avoid assumptions.

Formal resources

Hopefully you’ve decided to look at some formal learning materials!  I keep a list of the resources I find helpful for learning PowerShell here.

Prefer books?

  • If you’re a beginner without much scripting / code experience, check out Learn Windows PowerShell 3 in a Month of Lunches.
  • If you have experience with scripting and code, Windows PowerShell in Action is the way to go. This is as deep as it gets, short of in-depth, lengthy blog posts, and you get to read about the reasons behind the language design. Knowing the reason for ‘greater than’ being -gt rather than > should quell your heartburn.
  • Strapped for cash? Mastering PowerShell is a great free book.
  • Want to know what started this all? Read Jeffrey Snover’s Monad Manifesto. This isn’t on PowerShell per se, but it gives insight into the vision behind the language.

Prefer videos?

Prefer training?

There are plenty of training opportunities. Keep in mind that training might cater to the lowest common denominator in the room, and that a few hours, even at breakneck speed, won’t be enough. Of course, this could be my own bias showing.

Join the Community!

The community is a great place to learn, and to get ideas for what you can do with PowerShell.

  • Find popular repositories and publications on sites like GitHub and the TechNet Gallery. Dive into the underlying code and see how and why it works. Experiment with it. Keep in mind that popular code does not mean good code. Contribute your own code when and if you are comfortable with it.
  • Drop in and get to know the communities. Stop by #PowerShell on IRC (Freenode). Check out Twitter; if you don’t use it, you might be surprised at how valuable it can be. Join the PowerShell.org community. Keep an eye out for other communities that might be more specific to your interests or needs.
  • Look for blogs – some might cover general PowerShell, others might cover your IT niche. I keep a few here, but there are many others.
  • Participate in a local PowerShell user group. There’s no single way to find these; look at PowerShellGroup.org and PowerShell.org, and ask around.

As you spend time working with PowerShell and following or participating in the community, you will find some community members that you can generally rely on. PowerShell MVPs, the PowerShell team, and other respected community members put out some fantastic material.

Unfortunately, because everyone loves to contribute, you will find plenty of outdated, inefficient, incorrect, or downright dangerous code out there. This is another reason I tend to steer folks towards curated, formal resources at the outset, so that they can learn enough to recognize bad code at a glance.

Spend Some Time With PowerShell

At this point, you should be good to go! A few suggestions that I’ve found helpful along my way:

  • You can use PowerShell to learn PowerShell – Get-Command, Get-Help, and Get-Member are hugely beneficial. The Get-Help about_* topics might seem dry, but their content is as good as any book. A few starter commands appear after this list.
  • Building functions can be helpful – focus on modular code that you can use across scenarios. Don’t limit yourself by adding GUIs, requiring manual input from Read-Host, or other restrictive designs.
    • As an example, I borrowed code from Boe Prox to write Invoke-Parallel a while back. I use it to speed up many solutions. I borrowed jrich523’s Test-Server, glued it together with Invoke-Parallel, and now I have Invoke-Ping. This lets me parallelize tests against thousands of nodes for services like remote registry, remote RPC, SMB, and RDP. Many of our production scripts start out by querying for a list of nodes and filtering this list with Invoke-Ping. The key is that I have re-usable tools and components that can be integrated across a variety of solutions, not just one-off scripts that are only helpful in a single use case.
  • Spend a few minutes every day!
    • Don’t hold up urgent production troubleshooting if you aren’t ready, but consider revisiting the scenario afterwards. Could you have used PowerShell to detect the issue before it happened? Could you use PowerShell to implement the fix? If you had an issue distributed across many systems, would it have saved time to use PowerShell to troubleshoot?
    • Have a project, task, or tedious manual step to accomplish? Would it make sense to use PowerShell? Spend a little time and see if you can script this out, or write a function to do the work.
    • Start with read only commands. Yes, automation and configuration are important, but you can learn a lot about PowerShell and your environment just by running ‘Get’ commands. This is a great way to learn more about technology in general as well! If I come across a piece of technology that I want to learn, perhaps Infoblox Grid Manager, Citrix NetScaler, SQL Server, or Microsoft Hyper-V, I ask for test access or build my own and try to query it with PowerShell. This helps me learn the basics of many technologies, gives me experience with PowerShell, and gets me involved with a number of fun projects. A month down the line when they want to automate or build tooling for something, we already have some basic tools and experience working with it!
    • Don’t be discouraged. If you had no scripting experience, you’re going to have some growing pains. Work through them; it will be worth it in the end. You will find that most of what you learn can be applied to other areas of PowerShell, given shared conventions and syntax. Don’t be the one saying ‘We have to make changes to 1,000 print queues, we need more FTEs to click through all the menus!’ – with someone who has a basic understanding of PowerShell, you could have the work done more consistently, with a tool you could re-use, and likely in less time. Who wants to click through 1,000 GUIs anyway? Sounds horrid.
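As a starting point for the first bullet above, a few discovery commands worth running:

# What commands are available for working with services?
Get-Command -Noun Service

# What properties and methods do the objects we get back carry?
Get-Service | Get-Member

# Browse the conceptual help topics
Get-Help about_* | Select-Object Name, Synopsis
Get-Help about_Operators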

Good luck, whichever route you take!

REST, PowerShell, and Infoblox

Edit: I wrote a follow-up illustrating a few of these issues through a rudimentary Infoblox PowerShell module.

A short while back, someone asked if I would be up for writing about calling the Infoblox web API through PowerShell. I don’t have extensive experience with this topic, but this is a great opportunity to discuss REST APIs, and the interfaces vendors expose to their customers.

There won’t be any Infoblox PowerShell in this post; that’s in the pipeline.

Interfaces – Software Defined Everything!

With the movement towards software defined everything, more and more products are exposing APIs and interfaces we can control, configure, and orchestrate with.

In general, this is a good thing. Having a nice API is an important first step, but more is needed. Folks like Helge Klein and others with development experience are well served by an API. But what about those of us working on the systems side? Even if we have a loose grasp on the topic, we lose much of the consistency and intuitiveness Jeffrey Snover aimed for with PowerShell.

Why PowerShell?

I grew up playing with Lego, figuring out how to piece together contraptions that don’t come with instructions. I do something similar at work today – I take the commands and consistent syntax and conventions of PowerShell, and put them together to build solutions that vendors don’t provide, or that we can’t afford. Unfortunately, not all vendors provide PowerShell support, so we need to build it ourselves.

This brings us to APIs, including C, Java, .NET, and of course, RESTful APIs. Most of these are beyond the skillset of a good number of systems administrators or engineers, including myself. Some of these APIs are more approachable than others, but vendors do us a disservice by limiting their interfaces to these.

Why APIs Aren’t Enough

As an engineer, I often need to integrate a variety of technologies. Many of these have fantastic PowerShell modules, from VMware’s PowerCLI, to Microsoft’s ActiveDirectory module, to Cisco’s UCS PowerTool. I can rely on the consistency and conventions of PowerShell, and spend my time considering the logic and design of a solution, rather than combing through documentation and figuring out the nuances of invoking a specific, obscure, poorly documented method of an API.

What does it mean if your product doesn’t have a PowerShell module, but may need integration with the wider Microsoft ecosystem?

  • Admins and engineers across the industry waste time and duplicate effort attempting to use your API or binaries through PowerShell
  • Admins and engineers who may not be subject matter experts in your technology end up trying to piece things together, leading to potentially buggy or feature-poor implementations. You as the vendor have the knowledge here, why not help out and ensure a smooth, consistent experience with your product?
  • Admins and engineers who may not have a strong background in PowerShell end up trying to piece a module or functions together, leading to potentially buggy and inconsistent implementations
  • Even if you have an SME for the technology in question who is fluent in PowerShell, they now get to spend copious amounts of time reading through your API documentation and figuring out how to ‘PowerShell-ize’ it

API Overload

Infoblox is one of many examples. EMC Isilon and XtremIO. Citrix NetScaler and Provisioning Services. Commvault Simpana. Thycotic Secret Server. BMC ARS. A wide range of products out there provide no PowerShell module, or something nearly unusable.

Microsoft isn’t immune. SQL Server management often relies on the SMO. A variety of Exchange tasks force you to use the Managed API or EWS. Good luck doing anything useful with the Group Policy module, or even finding an interface to AGPM – DSC is great, but you’re kidding yourself if you think Group Policy is going away any time soon.

Some of these APIs are better than others. A REST API generally means I need to dig through documentation and spend a good deal of time learning the ins and outs of the vendor’s implementation. Web services may be a bit dated, but at least they give me tools to discover the methods, constructors, and other details that help when wrapping an API in PowerShell. With REST? Who knows what you’re going to get; good luck reading and experimenting!
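To illustrate with placeholder URLs: a web service proxy exposes methods you can discover with Get-Member, while a REST call hands back whatever the vendor designed, documented or not. A rough sketch:

# Web service: the generated proxy has discoverable methods
$Proxy = New-WebServiceProxy -Uri 'https://server.example.com/service.asmx?WSDL'
$Proxy | Get-Member -MemberType Method

# REST: no built-in discovery; you read the docs and experiment
Invoke-RestMethod -Uri 'https://server.example.com/api/records' -Credential $Cred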

Closing

How do we solve this problem? A few suggestions:

Vendors – if your product is commonly used in the Microsoft ecosystem, provide a PowerShell module. Don’t just wrap some binaries or APIs in a format that makes sense to you. Follow the standard conventions and best practices for PowerShell that have made it the successful tool that it is today. The Monad Manifesto should be required reading material for anyone responsible for implementing your PowerShell support.

Microsoft – lead by example. The inclusion of PowerShell in the (server) Common Engineering Criteria was a great start. Take steps to encourage your product groups to provide better and more widespread PowerShell solutions. Perhaps consider taking similar steps to encourage and assist other vendors.

Engineers and admins – if you have input on the decision making process, strongly consider whether PowerShell should be a factor in this. It can be incredibly painful ending up with a critical technology that you can’t control programmatically, or with an interface you have no familiarity with or interest in learning. Java or C? Not for me. If you do end up with these technologies, pressure the vendor to include an accessible PowerShell interface. If a vendor doesn’t hear you tell them you want a PowerShell interface, how will they get the prioritization to build one?

Lastly, while this is a big pain point for me, it’s still fantastic to have a glue language like PowerShell that can piece together well written PowerShell modules, .NET libraries, REST APIs, and everything else.

Disclaimer: This assumes you are locked into the Microsoft ecosystem and standardize on PowerShell.

Exploring PowerShell: Common Parameters, Variables, and More!

When writing PowerShell functions and scripts, you might come across a need to identify common parameters, automatic variables, or other details that can change depending on the environment.

It turns out that PowerShell offers a number of tools to explore, from the standard Get-Command, Get-Help, and Get-Member Cmdlets, to the .NET Framework, with tools like reflection and abstract syntax trees.

This past week I received an interesting question on Invoke-Parallel; can we import variables and modules from the end user’s session to make the command more approachable? This was a good question – Invoke-Parallel is widely used by my co-workers, but there is often confusion over the idea of each runspace being an independent environment.

Before we dive into this, let’s take a step back and look at some work from Bruce Payette, “a co-designer of the PowerShell language and the principal author of the language implementation.” Bruce wrote the definitive PowerShell in Action (PiA), my go-to book recommendation for anyone with scripting or development experience.

PowerShell In Action – Constrained endpoints

Constrained, delegated endpoints are getting more attention nowadays with JitJea. It turns out Bruce talked about constrained endpoints in PiA long ago.

Interactive and implicit remoting depend on a few commands. If you want to create a constrained endpoint that works with interactive or implicit remoting, you should define proxy functions for these. Rather than hard code the command names, Bruce tells us how to get them at run time, by creating a certain restricted initial session state and listing the public commands within it:

$iss = [Management.Automation.Runspaces.InitialSessionState]::CreateRestricted("RemoteServer")            
$iss.Commands | Where { $_.Visibility -eq "Public" } | Select Name

[Image: the public commands exposed by the restricted RemoteServer session state]

How does this relate to automatic variables and modules? In both cases we can use PowerShell and the .NET Framework to find the answer at run time, rather than hard coding the answer.

Automatic variables and modules

If we want to pass variables from the user’s session into the Invoke-Parallel runspaces, we probably want to ignore the wide array of automatic variables.

We could certainly hard code these, but what fun is that? The approach we ended up taking was to compare a clean environment with the current environment, by creating a clean PowerShell runspace, listing out the modules, pssnapins, and variables within it, and comparing these with the user’s current session.

$StandardUserEnv = [powershell]::Create().addscript({            
            
    #Get modules and snapins in this clean runspace            
    $Modules = Get-Module | Select -ExpandProperty Name            
    $Snapins = Get-PSSnapin | Select -ExpandProperty Name            
            
    #Get variables in this clean runspace            
    #Called last to get vars like $? into session            
    $Variables = Get-Variable | Select -ExpandProperty Name            
                
    #Return a hashtable where we can access each.            
    @{            
        Variables = $Variables            
        Modules = $Modules            
        Snapins = $Snapins            
    }            
}).invoke()[0]

[Image: automatic variables captured from the clean runspace]

This isn’t perfect; certain automatic variables won’t be created out of the gate. For example, $Matches won’t be listed. But this gives us a good start, and filters out the majority of variables and modules that we can ignore. The end result? A more usable Invoke-Parallel:

$Path = "C:\Temp"            
            
#Query 3 computers for something, save the results under $path 
echo Server1 Server2 Server3 | Invoke-Parallel -ImportVariables { 
                
    #Do something and record some output!            
    $Output = "Some Value for $_"            
            
    #Save it to the path we specified outside the runspace
    $FilePath = Join-Path $Path "$_-$(Get-Date -UFormat '%Y%m%d').txt"                
    Set-Content $FilePath -Value $Output -force            
            
}            
            
#List the output:            
dir $Path

[Image: output from the Invoke-Parallel -ImportVariables example]

In the past, $path would not be passed in, resulting in potential confusion and broken code.

Common parameters

What if you want a list of common parameters? Perhaps you are splatting PSBoundParameters and want to exclude common parameters, or perhaps you just want a list of common parameters to jog your memory.

You could hard code these, but this is no fun, and it might get complicated if you want to cover version-specific parameters like PipelineVariable. Let’s use one of the core commands for exploring PowerShell to get these: Get-Command.

#Define an empty function with cmdletbinding            
Function _temp { [cmdletbinding()] param() }            
            
#Get parameters, only common params are returned            
(Get-Command _temp | Select -ExpandProperty parameters).Keys

[Image: the common parameter names returned by Get-Command]

That’s it! We simply build a temporary function, and ask PowerShell what that function’s parameters are.
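Tying this back to the splatting scenario mentioned earlier, here’s a sketch that strips the common parameters out of PSBoundParameters before splatting (assuming it runs inside an advanced function):

#Get the common parameter names from a temporary function
Function _temp { [cmdletbinding()] param() }
$CommonParameters = (Get-Command _temp | Select -ExpandProperty Parameters).Keys

#Copy everything the caller bound, minus the common parameters
$SplatParams = @{}
foreach($Key in $PSBoundParameters.Keys)
{
    if($CommonParameters -notcontains $Key)
    {
        $SplatParams.Add($Key, $PSBoundParameters[$Key])
    }
}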

Use PowerShell to explore PowerShell

The key takeaway here is that you can use PowerShell or the .NET Framework itself to explore PowerShell. Use this to your advantage when writing functions where details on the runtime environment can improve the end user’s experience.

Cheers!

PowerShell Pipeline Demo

Happy holidays all!  Long ago, an observant co-worker added the Cookie Monster nickname to my office name plate.  Even during off-seasons, this earns me bonus cookies – “hey!  you’re cookie monster, right?  have this cookie!”  As you might imagine, the holidays are worse.  Forgive me if I’m a bit slow this month.

This is a quick hit to cover two topics that often generate confusion: handling pipeline input, and handling the code behind -Confirm and -WhatIf. I often forget what to expect with pipeline input, and use the verbose output from Test-Pipeline (below) to double-check.

Pipeline input

Many of your favorite commands support pipeline input. Get-ADUser | Set-ADUser. Get-ChildItem | Remove-Item. The pipeline is an integral part of PowerShell; it’s covered in two sections of the (subjective) best practices for building PowerShell functions, and examples abound online… yet many community-based functions don’t support it.

The key bits:

  • Use [parameter()] attributes to add pipeline support to a parameter.
  • Use a Process block in your function.
  • Reference the pipeline variable in your Process block.

A function to demonstrate support for pipeline input, on a ComputerName parameter. Copy it into the PowerShell ISE for better code highlighting:

Function Test-Pipeline {            
    [cmdletbinding(SupportsShouldProcess=$true, ConfirmImpact="Medium")]            
    param(            
        [parameter( Mandatory = $false,            
                    ValueFromPipeline = $True,            
                    ValueFromPipelineByPropertyName = $True)]            
        [string[]]$ComputerName = "$env:computername",            
            
        [switch]$Force            
    )            
                
    Begin            
    {            
        $RejectAll = $false            
        $ConfirmAll = $false            
            
        Write-Verbose "BEGIN Block - `$ComputerName is a $(try{$ComputerName.GetType()} catch{$null}) with value $ComputerName`nPSBoundParameters is `t$($PSBoundParameters |Format-Table -AutoSize | out-string )"            
    }            
    Process            
    {            
        Write-Verbose "PROCESS Block - `$ComputerName is a $(try{$ComputerName.GetType()} catch{$null}) with value $ComputerName`nPSBoundParameters is `t$($PSBoundParameters |Format-Table -AutoSize | out-string )"            
                    
        foreach($Computer in $ComputerName)            
        {            
            if($PSCmdlet.ShouldProcess( "Processed the computer '$Computer'",            
                                        "Process the computer '$Computer'?",            
                                        "Processing computer" ))            
            {            
                if($Force -Or $PSCmdlet.ShouldContinue("Are you REALLY sure you want to process '$Computer'?", "Processing '$Computer'", [ref]$ConfirmAll, [ref]$RejectAll)) {            
                    Write-Verbose "----`tPROCESS Block, FOREACH LOOP - processed item is a $(try{$computer.GetType()} catch{$null}) with value $computer`nPSBoundParameters is `t$($PSBoundParameters |Format-Table -AutoSize | out-string )"            
                }            
            }            
        }            
    }            
    End            
    {            
        Write-Verbose "END Block - `$ComputerName is a $(try{$ComputerName.GetType()} catch{$null}) with value $ComputerName`nPSBoundParameters is `t$($PSBoundParameters |Format-Table -AutoSize | out-string )"            
    }            
}

Piping two computers to this command:

[Image: verbose output from piping two computer names to Test-Pipeline]
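For reference, a call along these lines exercises the pipeline without changing anything (server names are placeholders):

"Server1","Server2" | Test-Pipeline -Verbose -WhatIf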

Notice the behavior for $ComputerName in the Begin and End blocks; it might catch you off guard.

SupportsShouldProcess

One of the first things we learn with PowerShell is to look for -WhatIf and -Confirm parameters.  We also learn that these are not available everywhere, and that implementing them is up to the author.  This is another common omission in community-based functions, despite it being a best practice to provide this support where appropriate.

You might also find inconsistent implementations.  The simplest implementation (seen in the Cmdlet snippet) leads to reliance on funky-looking language like Do-Something -Confirm:$False, and includes no -Force switch.

Joel Bennett provided a great guideline for this.  You can see the implementation of this in the Test-Pipeline code above.

Confirmation:

[Image: the confirmation prompt from Test-Pipeline]

Force parameter confirms all as expected:

[Image: -Force suppressing the confirmation prompts]

Wrapping up

If you are submitting production grade functions to the community, or just want to provide a user experience mimicking a Cmdlet, be sure to look into providing support for the pipeline and SupportsShouldProcess.  It looks like a lot of effort, but if you create a snippet or a template for your functions, you can start with code similar to Test-Pipeline and tweak it to meet your needs.

Further reading:

Disclaimer:

I don’t claim to have followed the above for all of my contributions : )  I’m trying to start, though; one of my PowerShell resolutions is to practice what I preach and follow best practices!

PowerShell Splatting – build parameters dynamically

Have you ever needed to run a command with parameters that depend on the runtime environment?  I often see logic like this:

If($Cred) { Get-WmiObject Win32_OperatingSystem -Credential $Cred }            
Else      { Get-WmiObject Win32_OperatingSystem }

It doesn’t look too terrible if you only have one option, but things get ugly fast.  What if you want the same logic for $Credential, $ComputerName, and $Filter?  You end up with eight potential combinations and an unreadable mess of code, and this is with only three parameters.

The answer is splatting!

What is splatting?

Splatting is just a way to pass parameters to commands, typically with a hash table.  It was introduced with PowerShell v2, so it is compatible pretty much anywhere.  Here’s a simple example:

#Define the hash table            
$GWMIParams = @{            
    Class = "Win32_OperatingSystem"            
    Credential = $Cred            
    ComputerName = 'localhost'            
}            
            
#Splat the hash table.  Notice we put an @ in front of it:            
Get-WmiObject @GWMIParams

Many blog posts focus on the readability splatting offers.  Readability is very important, but splatting gives us the framework needed to build up parameters for a command dynamically.

Building up command parameters

Let’s build a simple example.  The basic steps we take include creating a hash table, adding key-value pairs to that hash table, and splatting the hash table against a command.

#Create the initial hash table            
#You can use @{} to create an empty hash table            
$GWMIParams = @{            
    ErrorAction = "Stop"            
}            
            
#Add parameters depending on the environment            
#We use simple logic here, but you can get creative            
if($Cred)            
{            
    #We can add a key value pair with the Add method            
    $GWMIParams.add("Credential", $Cred)            
}            
if($Computer)            
{            
    #This alternative to the Add method is easier to read            
    $GWMIParams.ComputerName = $Computer            
}            
if($Filter)            
{            
    $GWMIParams.Filter = $Filter            
}            
            
#Splat the hash table.  You can splat multiple hash tables and still use parameters            
Get-WmiObject @GWMIParams -Class Win32_OperatingSystem

When we run this, Get-WmiObject will receive different parameters depending on whether we have $Computer, $Filter, or $Cred defined in our session.  We get a side benefit as well:  a hash table with the parameters and their values, which you can use for verbose or debug output.
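For example, a one-liner like this surfaces the final parameter set while debugging (the message text is just an illustration):

Write-Verbose "Calling Get-WmiObject with:`n$($GWMIParams | Out-String)" -Verbose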

If we run the example code without Computer, Filter, or Credential variables defined, the parameters look like this:

[Image: splatted parameters containing only ErrorAction]

If we set $Computer to ‘localhost’ and run the same code, we now get different parameters:

[Image: splatted parameters now including ComputerName]

Next steps

That’s about it!  For further reading, check out Get-Help about_Splatting (the about topic was added in PowerShell 3), or search around for numerous blog posts on the topic.

The real fun is figuring out what logic you should use to work with the hash table you are splatting.  If you follow best practices when writing PowerShell functions, you open up access to the PSBoundParameters, conveniently in the form of a hash table!
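As a hypothetical illustration, a thin wrapper function can splat its own bound parameters straight through to the command it wraps:

Function Get-OSInfo {
    [cmdletbinding()]
    param(
        [string[]]$ComputerName,
        [System.Management.Automation.PSCredential]$Credential
    )
    #Only the parameters the caller actually specified get splatted
    Get-WmiObject @PSBoundParameters -Class Win32_OperatingSystem
}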

Quick hit: Dynamic Where-Object calls

On occasion, you might want to build up a call to Where-Object that changes based on your runtime environment.  Perhaps you have to iterate over a huge collection, or you have an expensive statement to evaluate that doesn’t need to run in all scenarios.  This post will illustrate how to build up a dynamic call to Where-Object using example code from Get-Type.

The ScriptBlock

So we want to modify what runs in Where-Object.  We know the standard call is Where-Object { <# Something #> }, so we’ll dive into the help to find out which parameter takes that scriptblock.  We want a parameter with Position 0 or 1 that takes a ScriptBlock – we dive in and find this is the FilterScript parameter:

Get-Help Where-Object -Full

[Image: Get-Help output highlighting the FilterScript parameter]

Now, we want to create the scriptblock for this parameter dynamically.  If we search around, we might find that you can convert a string to a scriptblock using the Create method of the System.Management.Automation.ScriptBlock class.  It sounds complicated, but the code is pretty straightforward:

$ScriptBlock = [scriptblock]::Create( $String )

Okay!  At this point, we know what parameter takes in the scriptblock, we know how to create a scriptblock from text, and hopefully, we know how to work with strings.

Putting it all together

In Get-Type, we provide a few parameters to allow filtering on the returned types.  If these are set to *, we don’t want to add them to the where clause.  If they aren’t set to *, we want to add a statement to the where clause.

There are many ways to skin this cat; we’re going to build up an array of statements and join them with -and.  You can build your strings as desired.

#Build the Where array            
$WhereArray = @()            
            
#If anything but the default * was provided, evaluate these with like comparison            
if($Module -ne "*"){$WhereArray += '$_.Module -like $Module'}            
if($Assembly -ne "*"){$WhereArray += '$_.Assembly -like $Assembly'}            
if($FullName -ne "*"){$WhereArray += '$_.FullName -like $FullName'}            
if($Namespace -ne "*"){$WhereArray += '$_.Namespace -like $Namespace'}            
if($BaseType -ne "*"){$WhereArray += '$_.BaseType -like $BaseType'}            
            
#Build the where array into a string by joining each statement with -and            
$WhereString = $WhereArray -Join " -and "            
            
#Create the scriptblock with your final string            
$WhereBlock = [scriptblock]::Create( $WhereString )

At this point, we have the scriptblock created!  If we call Get-Type with -Verbose, we can see what the scriptblock looks like depending on the parameters we pass at run time:

[Image: verbose output showing the generated Where-Object scriptblock]
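From there, the generated block drops into Where-Object like any literal filter – roughly this, with $Types standing in for the collection Get-Type gathers, and assuming at least one statement made it into the array:

$Types | Where-Object -FilterScript $WhereBlock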

That’s about it!  We illustrated how to build up a scriptblock dynamically and used it with Where-Object – keep in mind you could use this for other scenarios where you need to build a scriptblock up in pieces.

Edit: A quick follow-up – there are situations where your scriptblock really needs to be dynamically generated.  This example did not need it for performance or functionality; simplicity and clarity of code would generally take priority.  I was just curious.

Credentials and Dynamic Parameters

Everyone has their preferred way to simplify credential handling in PowerShell.  Here are some of my favorites.  Before using these, consider your security policies and posture.

Import and Export PSCredentials

Many functions and examples out there simply serialize the encrypted password to disk, leaving you to handle the username.  Years ago, Hal Rottenberg wrote two handy functions that serialize and deserialize both the username and password: Import-PSCredential and Export-PSCredential.  The links are to very slightly modified versions of these functions.

Export-PSCredential -Path "D:\$ENV:COMPUTERNAME.$ENV:USERNAME.contoso.cmonster.crd" 

$credCMonsterContoso = Import-PSCredential -Path "D:\$ENV:COMPUTERNAME.$ENV:USERNAME.contoso.cmonster.crd"

Wait, isn’t that insecure?

There are a few considerations to take into account, but this isn’t as risky as you might expect.  Serializing the password to disk uses the Windows DPAPI to encrypt your password, limiting decryption to your account, on the computer you encrypted the password from.  Here are two considerations that immediately come to mind:

  • I don’t know of any exploits that can decrypt these files.  Might these already exist?  Might we find a vulnerability and see exploits down the line?  Perhaps.  This risk should be acceptable in most organizations, given password entropy, compensating controls over where these credentials are stored, and other factors.
  • Other processes on this system running as your account could access these credentials.  Dave Wyatt discusses a workaround using secondary entropy.
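For context, exports like this generally boil down to ConvertFrom-SecureString, which applies the DPAPI by default.  A minimal sketch of the round trip:

#Encrypt - only this user, on this computer, can decrypt the result
$Credential = Get-Credential
$Credential.Password | ConvertFrom-SecureString | Set-Content "D:\cmonster.txt"

#Decrypt and rebuild the credential
$Password = Get-Content "D:\cmonster.txt" | ConvertTo-SecureString
New-Object System.Management.Automation.PSCredential("contoso\cmonster", $Password)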

I’m personally comfortable using methods Lee Holmes describes in PowerShell Security Best Practices.  If in doubt, consult your security team.

Dynamic parameters

Dynamic parameters are parameters that are generated at runtime.  They can be both handy and painful.  The basic idea is that you can dynamically generate parameters depending on the runtime environment.

Why are we talking about dynamic parameters?  How are these related to credentials?

Serializing and deserializing credentials to disk is quite handy, but we can take this a step further.  If you don’t have a password management solution with an API, working with passwords can be quite tedious.  We’re going to devise a system where you keep PSCredentials stored in variables, with simplified copy-to-clipboard access via dynamic parameters.

Wait, isn’t that insecure?

Yes.  Copying any confidential data to the clipboard is risky.  Much riskier than relying on the DPAPI.  That being said, information security is about managing risk, not completely eliminating it.  Perhaps you would consider using this on a secured system where you don’t do much day-to-day browsing or other risky activities, and with a certain class of accounts.

Simplified credential management

We’re going to cover three steps; encrypting the credentials (one time, and after any changes), getting the credentials into your session, and a copy-password function.

Prerequisite:  download the dependency functions and get them into your session (dot source them, as below) before using them.

# Load dependencies.            
    . "\\Path\To\Import-PSCredential.ps1"            
    . "\\Path\To\Export-PSCredential.ps1"            
    . "\\Path\To\New-DynamicParam.ps1"

Encrypt credentials using Export-PSCredential as desired.  You only need to do this one time, and any time the credentials change.

# I name mine COMPUTER.CURRENTUSER.[domain.]USER[Qualification as needed] to help identify where I can use them and what accounts they cover.            
# Access to decrypt these is limited to the user that exported them, on the computer they were encrypted on            
# Consider storing these in a secured location.  These are on my D:\ for illustrative purposes only            
            
Export-PSCredential -Path "D:\$ENV:COMPUTERNAME.$ENV:USERNAME.contoso.cmonster.crd"            
Export-PSCredential -Path "D:\$ENV:COMPUTERNAME.$ENV:USERNAME.cmonster.crd"            
Export-PSCredential -Path "D:\$ENV:COMPUTERNAME.$ENV:USERNAME.contoso.TestUser.crd"

Now, any time you want to access these, pull them into your session.  You could put these in your profile so they are always available, or use them in a script that needs credentials.  Don’t forget to dot source the Import-PSCredential function beforehand.

# Import credentials we previously exported.  I'm using names starting with 'Cred'            
    $CredCMonsterDomain = Import-PSCredential -Path "D:\$ENV:COMPUTERNAME.$ENV:USERNAME.domain.cmonster.crd"            
    $CredCMonster = Import-PSCredential -Path "D:\$ENV:COMPUTERNAME.$ENV:USERNAME.cmonster.crd"            
    $CredTestUserDomain = Import-PSCredential -Path "D:\$ENV:COMPUTERNAME.$ENV:USERNAME.domain.TestUser.crd"

Now I can use these credentials as desired:

[Images: using the imported credentials]

This is great for scripts, but if I want quick access to a password in an interactive session, typing this out is tedious.  Let’s write a function to quickly extract passwords from these PSCredentials:

function Copy-Password             
{            
                
    [cmdletbinding()]            
    param()            
    DynamicParam
    {
        #Offer tab completion across every Cred* variable in the session
        $Variables = Get-Variable -Name Cred* | Select -ExpandProperty Name
        New-DynamicParam -Name Credential -ValidateSet $Variables -Mandatory -Position 0
    }
    Begin
    {
        #Resolve the chosen variable and copy its plaintext password to the clipboard
        $Credential = Get-Variable -Name $PSBoundParameters.Credential -ValueOnly
        $Credential.GetNetworkCredential().Password | Clip
    }
}

Now if I have test or other credentials that I need to use very regularly, I have a simple way to get them into my session and to extract the plaintext passwords.

[Image: Copy-Password offering the Cred* variables via tab completion]

You could take this a step further. In the DynamicParam block, perhaps you could get all variables that are PSCredentials, using the -is comparison operator.

Get-Variable | Where-Object {$_.Value -is [PSCredential]}            

Another method would be to create the credential objects using New-Variable, with a specific description we could filter on later.
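That alternative might look something like this (the names are just examples):

#Tag the credential with a description at creation time
New-Variable -Name CredTestUser -Value (Get-Credential) -Description 'PSCredential'

#Later, gather every variable carrying that tag
Get-Variable | Where-Object {$_.Description -eq 'PSCredential'}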

That’s about it! Keep an eye out for other resources as well.  For example, BetterCredentials from Joel Bennett offers a more functional drop-in replacement for Get-Credential. Consider writing your own functions tailored to your needs and environment.

Cheers!