SQLite and PowerShell

March 2015 EDIT: This post has been updated and moved to my new GitHub pages site.

I’ve been planning on sharing some fun projects that involve SQL. Every time I start writing about these, I end up spending a good deal of time writing about MSSQL, and thinking of all the potential caveats that might scare off the uninitiated. Will they have an existing SQL instance they can work with? Will they have access to it? Will they run into a grumpy DBA? Will they be scared off by the idea of standing up their own SQL instance for testing and learning?

Wouldn’t it be great if we could illustrate how to use SQL, and get an idea of how helpful it can be, without the prerequisite of an existing instance with appropriate configurations and access in place?

SQLite

“SQLite is an in-process library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine.”

What do you know, sounds pretty close to what we are looking for!  We want to use this in PowerShell, so where do we start?

Looking around, you’ll stumble upon Jim Christopher’s SQLite PowerShell Provider. If you like working with providers and PSDrives, this is probably as far as you need to go. Other examples abound, including interesting solutions like Chrissy LeMaire’s Invoke-Locate, which leverages SQLite behind the scenes.

I generally prefer standalone functions and cmdlets over providers. I’m also a fan of abstraction, and building re-usable, simple to use tools. The task-based nature of PowerShell makes it a great language for getting things done. We can concentrate on doing what we want to do, not the underlying implementation.

I was looking for something similar to Invoke-Sqlcmd2, which abstracts out the underlying .NET logic to provide simplified SQL queries, the ability to handle SQL parameters, PowerShell-esque behavior for DBNull, and other conveniences.

Invoke-SQLiteQuery

I spent a few minutes with the SQLite binaries and examples from Jim and Chrissy, and simply duct-taped SQLite functionality onto Invoke-Sqlcmd2. Let’s take a look at what we can do.

Download and unblock Invoke-SQLiteQuery, and you’ll be up and running, ready to work with SQLite. Let’s create a data source and a table:

#Import the module, create a data source and a table
Import-Module PSSQLite
$Database = "C:\Names.SQLite"
$Query = "CREATE TABLE NAMES (
    Fullname VARCHAR(20) PRIMARY KEY,
    Surname TEXT,
    Givenname TEXT,
    Birthdate DATETIME)"

#SQLite will create Names.SQLite for us
Invoke-SqliteQuery -Query $Query -DataSource $Database

#We have a database, and a table, let's view the table info
Invoke-SqliteQuery -DataSource $Database -Query "PRAGMA table_info(NAMES)"

[Screenshot: Init]

That was pretty easy! We used a SQLite PRAGMA statement to see basic details on the table we created. Now let’s insert some data and pull it back out:

#Insert some data, use parameters for the fullname and birthdate
$Query = "INSERT INTO NAMES (Fullname, Surname, Givenname, Birthdate)
    VALUES (@full, 'Cookie', 'Monster', @BD)"
Invoke-SqliteQuery -DataSource $Database -Query $Query -SqlParameters @{
    full = "Cookie Monster"
    BD   = (Get-Date).AddYears(-3)
}

#Check to see if we inserted the data
Invoke-SqliteQuery -DataSource $Database -Query "SELECT * FROM NAMES"

[Screenshot: InsertSelect]

In this example we parameterized the query – notice that @full and @BD were replaced with the full and BD values from -SqlParameters, respectively.

Let’s take a quick look at using SQLite in memory:

#Create a SQLite database in memory
#This exists only as long as the connection is open
$C = New-SQLiteConnection -DataSource :MEMORY:

#Add some tables
Invoke-SqliteQuery -SQLiteConnection $C -Query "
    CREATE TABLE OrdersToNames (OrderID INT PRIMARY KEY, Fullname TEXT);
    CREATE TABLE Names (Fullname TEXT PRIMARY KEY, Birthdate DATETIME);"

#Add some data
Invoke-SqliteQuery -SQLiteConnection $C -SqlParameters @{BD = (Get-Date)} -Query "
    INSERT INTO OrdersToNames (OrderID, Fullname) VALUES (1, 'Cookie Monster');
    INSERT INTO OrdersToNames (OrderID) VALUES (2);
    INSERT INTO Names (Fullname, Birthdate) VALUES ('Cookie Monster', @BD)"

#Query the data. Illustrate PSObject vs. DataRow filtering
Invoke-SqliteQuery -SQLiteConnection $C -Query "SELECT * FROM OrdersToNames" |
    Where-Object { $_.Fullname }
Invoke-SqliteQuery -SQLiteConnection $C -Query "SELECT * FROM OrdersToNames" -As DataRow |
    Where-Object { $_.Fullname }

#Joining. Yeah, a CustomerID would make more sense :)
Invoke-SqliteQuery -SQLiteConnection $C -Query "
    SELECT * FROM Names
    INNER JOIN OrdersToNames
        ON Names.Fullname = OrdersToNames.Fullname"

[Screenshot: Memory]

Typically, we might use DataRow output from MSSQL and SQLite queries. As you can see above, DataRow output leads to unexpected filtering behavior – if I filter with Where-Object { $_.Fullname }, I don’t expect to get back a row with no Fullname, yet the DataRow version returns one, because missing values come back as DBNull rather than $null. Thankfully, we have code from Dave Wyatt that can quickly and efficiently convert output to PSObjects that behave as expected in PowerShell.
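To see why, compare DBNull to $null directly. This snippet assumes Windows PowerShell behavior – newer PowerShell versions changed DBNull to evaluate as false:

```powershell
# DataRow output represents missing values as DBNull, not $null
$Missing = [System.DBNull]::Value

$Missing -eq $null   # False in Windows PowerShell - DBNull is not $null

# Any non-null object is "truthy", so a filter like Where-Object { $_.Fullname }
# happily passes rows whose Fullname is DBNull
if ($Missing) { "DBNull evaluates as true" }
```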

We did the querying above in memory. Let’s run PRAGMA STATS to see details on the in-memory data source. If we close the connection and run this again, we see the data is gone:
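Here’s a rough sketch of that check, reusing the $C connection from the previous example:

```powershell
# Stats for the in-memory data source, while the connection is open
Invoke-SqliteQuery -SQLiteConnection $C -Query "PRAGMA STATS"

# Close the connection; the in-memory database ceases to exist
$C.Close()

# A new :MEMORY: connection starts with a fresh, empty database
$C = New-SQLiteConnection -DataSource :MEMORY:
Invoke-SqliteQuery -SQLiteConnection $C -Query "PRAGMA STATS"
```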

[Screenshot: MemoryGone]

Next steps

That’s about it! If you want simplified SQLite queries in PowerShell, check out Invoke-SQLiteQuery. If you delve into the MSSQL side of the house, check out Invoke-Sqlcmd2 from Chad Miller et al. It was used as the basis for Invoke-SQLiteQuery and behaves very similarly.

Now I just have to find more time to write…

Disclaimer: This weekend was the first time I’ve used SQLite. If I’m missing any major functionality, or you see unexpected behavior, contributions or suggestions would be quite welcome!

Testing DSC Configurations with Pester and AppVeyor

I spent the last few days tinkering with AppVeyor. It’s an interesting service to help enable continuous integration and delivery in the Microsoft ecosystem.

Last night I realized it might offer a simple means to test the outcome of your DSC configurations. Here’s the recipe for a simple POC, with plenty of room for you to tweak and integrate with your existing processes:

  • Create a DSC configuration you want to test
  • Create a short script to apply that configuration
  • Create some Pester tests to verify the desired state of your system, post configuration
  • Create the AppVeyor yaml to control this process
  • Add this to an appropriate AppVeyor source (Example covering GitHub)
  • Test away! Make a change to the DSC code, commit, and AppVeyor spins up a VM, applies the DSC configuration, and your Pester tests verify the outcome

This certainly isn’t a perfect method, but it would be a simple way for anyone to get up and running writing and testing DSC configurations and resources.

I’m going to make the assumption that you are familiar with GitHub, Pester, and some of the basics of AppVeyor from my first overview.

Wait, what is DSC?

Windows PowerShell Desired State Configuration is a configuration management platform from Microsoft. It’s still young, but it’s fast-tracked for the Common Engineering Criteria, receives a good deal of attention from the PowerShell team, and the guy who brought us PowerShell is quite excited about it. It’s probably something you should be paying close attention to.

There is quite a variety of resources to get started with. The DSC Book from PowerShell.org is a nice overview, and the soon-to-be-published MVA series was quite helpful (Getting Started, Advanced).

Let’s look at some caveats to testing DSC over AppVeyor.

Yes, but…

  • You could do much of this on your own, with tools like Client Hyper-V and AutomatedLab. You might even have a similar toolset in place at work.
  • No testing of distributed systems. This allows testing on a single VM.
  • Limited selection of operating systems to deploy on. There is an OS option in the yaml; presumably this means we may see more.
  • Your configurations cannot restart the VM. Hoping this will change, but depending on their architecture and design, this may be tough.
  • You’re deploying on a hosted service. If your DSC configurations are integrated with your CMDB and other internal resources, this may be a show stopper.
  • A good DSC resource should have a solid ‘test’ function. That being said, a single DSC resource’s test function might not handle the intricate combination of resources applied to a system.
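On that last point, you can run DSC’s own verification at any time; it exercises each resource’s Test function, which complements rather than replaces independent Pester tests (WMF 4 and later):

```powershell
# Ask the Local Configuration Manager whether the node still matches
# its current configuration; returns True or False
Test-DscConfiguration -Verbose
```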

Right. I’m sure I’m missing others as well. Certainly not perfect, but if your goal is to write and test some general DSC resources or configurations, and you don’t have the tools in house, this is a fantastic and simple way to spin up fresh VMs, configure them with DSC, and verify that the resulting system is now in the desired state.

Another caveat – building your own system that performs similar functionality would be an incredibly valuable experience. They always say “don’t re-invent the wheel,” but if your goal is experience and learning, re-inventing the wheel is a great way to get there!

Pick a source

I’m going to stick to GitHub for this. Keep in mind that AppVeyor is only free for open source projects. Consider your security posture before uploading any sensitive DSC configurations.


Pick a DSC configuration to test

We’re going with an incredibly basic example here:

Configuration ContosoWebsite
{
    param (
        [string[]]$ComputerName = $ENV:ComputerName
    )
    Node $ComputerName
    {
        #Install the IIS Role
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        #Install ASP.NET 4.5
        WindowsFeature ASP
        {
            Ensure = "Present"
            Name   = "Web-Asp-Net45"
        }

        #Install PowerShell AD... for fun...
        WindowsFeature ADPS
        {
            Ensure = "Present"
            Name   = "RSAT-AD-PowerShell"
        }
    }
}

Write a controller script to apply the configuration

There are plenty of ways to do this. Modify this POC example to meet your needs.

# This script will invoke a DSC configuration
# This is a simple proof of concept

"`n`tPerforming DSC Configuration`n"

. .\DSC\WebServer.ps1
( ContosoWebsite -ComputerName $ENV:COMPUTERNAME ).FullName |
    Set-Content -Path .\Artifacts.txt

Start-DscConfiguration .\ContosoWebsite -Wait -Force -Verbose

Most of this should be self-explanatory. We force application of the ContosoWebsite configuration from WebServer.ps1. The one extra bit is that we save the path to the resulting MOF file in Artifacts.txt; we upload this later on.

Write your Pester tests

Go crazy. I wrote some very simple, limited tests for this POC:

Describe "Web Server" {

    It "Is Installed" {
        $Output = Get-WindowsFeature Web-Server
        $Output.InstallState | Should Be "Installed"
    }

    It "Includes ASP.NET 4.5" {
        $Output = Get-WindowsFeature Web-Asp-Net45
        $Output.InstallState | Should Be "Installed"
    }
}

Describe "ActiveDirectory Module" {

    It "Is Installed" {
        $Output = Get-WindowsFeature RSAT-AD-PowerShell
        $Output.InstallState | Should Be "Installed"
    }
}

Keep in mind that you aren’t limited to one pass. You could theoretically apply your configuration, test, add a ‘mischief’ script that messes with the configuration, and test to verify that your DSC configuration brings things back in line.
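A hypothetical ‘mischief’ pass might look like this, reusing the feature names from the configuration above (Remove-WindowsFeature assumes a server OS):

```powershell
# Break the desired state on purpose
Remove-WindowsFeature Web-Asp-Net45

# Re-apply the configuration; DSC should bring the feature back
Start-DscConfiguration .\ContosoWebsite -Wait -Force

# Re-run the same Pester tests to confirm we're back in the desired state
Invoke-Pester .\Tests
```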

Write the AppVeyor yaml

We’re almost done!  At this point, we’re going to tell AppVeyor what to run. There’s a lot more you can configure in the yaml, so be sure to flip through the options and experiment.

# See http://www.appveyor.com/docs/appveyor-yml for many more options

# Skip on updates to the readme.
# We can force this by adding [skip ci] or [ci skip] anywhere in commit message
skip_commits:
  message: /updated readme.*/

install:
  - cinst pester

build: false

test_script:
  # Invoke DSC configuration!
  - ps: . .\Tests\appveyor.dsc.ps1
  # Test with native PS version, finalize
  - ps: . .\Tests\appveyor.pester.ps1 -Test -Finalize

deploy_script:
  - ps: Get-Content .\Artifacts.txt | Foreach-Object { Push-AppveyorArtifact $_ }

What does this do?

  • We ignore any commits that match ‘updated readme’, to avoid unnecessary builds.
  • We run appveyor.dsc.ps1, which applies the DSC configuration and saves the mof path.
  • We run appveyor.pester.ps1, which invokes the pester script, sends test results to AppVeyor, and tells us if the build passed.
  • We upload the mof file, which will be available on the artifacts page for your AppVeyor project.

Tie it all together

We’re good to go! We create a GitHub repository, add this to our AppVeyor projects, and make a commit (covered here). Browse around the project on AppVeyor to see the results:

[Screenshot: AppVeyorDSC]

That’s it – we now have a simple framework for automated DSC resource and configuration testing. There’s a lot more you might want to do, but the simple POC material is in the AppVeyor-DSC-Test repository on GitHub.

Cheers!

GitHub, Pester, and AppVeyor: Part 2

I recently published a quick bit on using GitHub, Pester, and AppVeyor, a slick combination to provide version control, unit testing, and continuous integration to your PowerShell projects.

That post was a quick overview and essentially summed up ideas and implementation straight from Sergei. Before this pull request, I hadn’t worked with Pester or AppVeyor.


We just went through a major upgrade to our EMR, and I’m covering our primary admin for any fallout this morning. Thankfully, there wasn’t much to do, so I spent a few minutes toying with AppVeyor and Pester. This post is a quick summary of the outcome.

The code referenced in this post is now part of the PSDiskPart project on GitHub.

Abstraction

The AppVeyor yaml file used in the Wait-Path repository is fairly straightforward. It installs Pester and runs a few lines of PowerShell. The readability of those PowerShell lines was a bit painful – no syntax highlighting, some shortcuts to keep code on one line, etc. My first step was to abstract most of the PowerShell out to another script.

The PSDiskPart AppVeyor yaml file is the result. It’s a little cleaner; the only test_script lines are calls to a single PowerShell script.

Not everyone would prefer this method, as it adds a layer of complexity, but I like the abstraction, and it enables the second line of PowerShell, where we call PowerShell.exe -version 2. So what’s in the appveyor.pester.ps1 controller script?

AppVeyor.Pester.ps1

My approach is to use a single file. Because we call it several times to cover both PowerShell version 2 and the native PowerShell version, we need to serialize our output and add a ‘finalize’ pass to collect everything and send it to AppVeyor.
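The pattern looks roughly like this – a sketch only; the parameter and file names are illustrative, not the exact contents of appveyor.pester.ps1:

```powershell
param([switch]$Test, [switch]$Finalize)

if ($Test) {
    # Run the tests and serialize the results per PowerShell version
    Invoke-Pester "$env:APPVEYOR_BUILD_FOLDER\Tests" -PassThru |
        Export-Clixml -Path ".\PesterResults_PS$($PSVersionTable.PSVersion.Major).xml"
}

if ($Finalize) {
    # Collect every serialized run and fail the build if anything failed
    $Results = Get-ChildItem .\PesterResults_PS*.xml |
        ForEach-Object { Import-Clixml -Path $_.FullName }
    $FailedCount = ($Results | Measure-Object -Property FailedCount -Sum).Sum
    if ($FailedCount -gt 0) { throw "$FailedCount tests failed." }
}
```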

If you look at my commit or AppVeyor history for this morning, you will see an embarrassing number of small changes and tweaks – multitasking is generally a bad idea; multitasking without caffeine is worse.


The first of my struggles: relative paths. They’re great. But you need to start in the right path. You’ll note a reference to one of the AppVeyor environment variables, APPVEYOR_BUILD_FOLDER. This helped get me to the right path.

The second of my struggles: PowerShell.exe. I didn’t realize this, but if you place ExecutionPolicy and NoProfile parameters before -Version 2.0, PowerShell won’t be happy:
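In other words, -Version has to come first. Based on my testing; the command itself is illustrative:

```powershell
# Fails: -Version 2.0 specified after other parameters
# PowerShell.exe -NoProfile -ExecutionPolicy Bypass -Version 2.0 -Command '...'

# Works: -Version 2.0 first
PowerShell.exe -Version 2.0 -NoProfile -ExecutionPolicy Bypass -Command '$PSVersionTable.PSVersion'
```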


The third struggle: PowerShell 2 and PowerShell 3 are very different. At work, I always target PowerShell 2 and am all too familiar with the helpful language and functionality I must avoid. Throwing a PowerShell 2 iteration into the mix left me embarrassed at the number of PowerShell 3 assumptions I had made in the original PSDiskPart and Pester code!

  • There is no auto module loading in PowerShell 2. We abstracted out the call to pester, so we need to add a line to import that module.
  • Set-Loc<tab> hits Set-LocalGroup before Set-Location. Okay, that’s not PS3, that’s me being sloppy!
  • In the module manifest, PowerShell 3 lets us use RootModule. PowerShell 2 doesn’t recognize RootModule, so we switch to ModuleToProcess.
  • PowerShell 2 did not include the $PSScriptRoot automatic variable.
  • PowerShell 2 will loop over $null
  • Get-Content -Raw was introduced in PowerShell 3
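A few of those differences, sketched out with illustrative snippets (not code from PSDiskPart; the file path is made up):

```powershell
# PowerShell 2 will happily loop over $null; guard collections before iterating
$Items = $null
foreach ($Item in @($Items | Where-Object { $_ })) { "Processing $Item" }

# Get-Content -Raw is PowerShell 3+; a version 2 friendly alternative
$Raw = [System.IO.File]::ReadAllText("C:\Some\File.txt")

# $PSScriptRoot is PowerShell 3+; in version 2, derive it inside a script
$ScriptRoot = Split-Path -Parent $MyInvocation.MyCommand.Path
```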

Walking through the build

Here is the resulting passing build; we’ll step through the basic flow.


First, we ignore any commits that match ‘updated readme’.


Next, we run the first pass of appveyor.pester.ps1, which runs tests in the native PowerShell on the AppVeyor VM.


This runs the AppVeyor testing controller script, which calls the PSDiskPart.Tests.ps1 Pester tests.


Success! Note that we differentiated the PowerShell version in the ‘It’ name. This would be more appropriate for Context, but we wanted the distinction visible on the AppVeyor side.


Next, we run this in PowerShell version 2 mode. Native AppVeyor support for this would be nice; as is, we don’t get colorized output without going through extra effort.


Finally, we want to collect all the results, send our tests to AppVeyor, and give some summary feedback if anything failed.


The Outcome

Apart from working through my many mistakes, we now have our PowerShell abstracted out to a separate file we can view with syntax highlighting, we have a cleaner yaml file, and we have a simple way to test in both PowerShell version 2 mode, and with the native PowerShell.

We can focus on the domain-specific yaml in the yaml file, and PowerShell in the PowerShell file.

Badges

On a side note, I don’t think I mentioned badges in my previous post. In AppVeyor, browse to your project, view the settings, and note the Badges section.


This is a great way to tell folks that your project is building successfully – or that it’s broken, as PSDiskPart was throughout the morning.


Next Steps

If you’re using GitHub for your PowerShell projects and haven’t checked them out yet, definitely consider looking into adding Pester and AppVeyor. If you already have your project in GitHub and your Pester tests laid out, adding them to AppVeyor only takes a moment (barring tests that require access to your internal environment). It took less than 30 seconds to add the InfoBlox module to AppVeyor once I had added a few example Pester tests.

On that note, consider test-driven development. Adding a comprehensive set of tests to a single function after it has been written is difficult. Covering an entire module would be painful. If you follow TDD and write your tests before you write the functionality that they test, you will be ahead of the game. As you can tell by my contributions, I am certainly not there yet, but I like the idea.

There’s a lot more to explore in AppVeyor. As far as I can tell, it looks like it can be used to help enable both continuous integration and continuous delivery. Poke around, experiment, and if you find anything helpful, share it with the community!

Fun with GitHub, Pester, and AppVeyor

This is a quick hit to cover a practical example of some very important processes; version control, unit testing, and continuous integration. We’re going to make a number of assumptions, and quickly run through GitHub (version control), Pester (unit testing for PowerShell), and AppVeyor (continuous integration).

Yesterday evening I added some basic Pester tests to a PowerShell module for DiskPart. I thought it might be fun to test out AppVeyor, but assumed running DiskPart or some other process requiring administrative privileges might not work. I assumed wrong – the build passed!

We’ll make the assumption that if you’re using PowerShell and Pester, you’re using Windows. We’ll also assume you have a PowerShell script or module already written. Finally, we’ll assume that for all the GUI fun below, you might dive into the CLI.

GitHub

I’m not going to ramble on about Git or GitHub. There are heavy duty books, hundreds of blog posts, and various YouTube examples to get you up and running. Sign up. Download the Windows client. Read one of the thousands of articles on using GitHub (Example, others may be better).

You’re signed up. Let’s create a repository and upload our code.  Here’s yet another quick-start, using the website and Windows client, documented with GifCam, a handy tool I found through Doug Finke.

Create a repository:

[Animation: CreateRepository]

Clone the repository on your computer, copy the files you want in there, commit:

[Animation: CloneRepository]

There’s way more to see and do, so do spend a little time experimenting and reading up on it if you haven’t already. There are too many benefits of version control to list, and GitHub adds a simple interface and a nice layer enabling collaboration and sharing. Or go with Mercurial and BitBucket, among the many other combinations.

Pester

Pester is a unit testing framework for PowerShell. Companies like Microsoft and VMware are starting to use it; you should probably check it out.


Jakub Jareš has a great set of articles on PowerShellMagazine.com (One, Two, Three), and you should find plenty of other examples out there.

I took an existing Invoke-Parallel test file, used the relative path to Wait-Path.ps1, and added a few tests. Follow Jakub’s first article or two and you will realize how simple this is. Start using it, and you will realize how valuable it is.

It’s quite comforting knowing that I have a set of tests and don’t need to worry about whether I remembered to test each scenario manually. Even more comforting if you have other people collaborating on a project and can identify when one of them (certainly not you!) does something wrong.

AppVeyor

Continuous integration. You’ve probably heard it before, perhaps around the Jenkins application. Personally, I never got around to checking it out. A short while back, Sergei from Microsoft suggested using AppVeyor for Invoke-Parallel (thank you for the motivation!). He added a mysterious appveyor.yml file and Pester tests, I signed up for AppVeyor, added the Invoke-Parallel project, and all of a sudden Pester tests appeared on the web:


I assumed this required much wizardry. It probably did, but now I can borrow what Sergei did! Sign up with your GitHub creds, log in, and here’s a quick rundown:

[Animation: AddProject]

Simple so far! AppVeyor is waiting for a commit. In the background, I added a readme.md and a line in Wait-Path. Let’s commit:

[Animation: Commit]

Behind the scenes, gears are turning at AppVeyor. After a short wait in the queue, your tests will start to run.


Build Passing!

Take a peek at the appveyor.yml. There’s a whole lot more you can do with AppVeyor, but this is a nice framework for simple PowerShell Pester tests. In the yaml, you can see we use domain specific language to…

  • Install Pester
  • Invoke Pester, pass the results through, and generate output XML
  • Upload the Pester XML output (Why? The Tests tab)
  • If Pester output FailedCount is greater than 0, fail the build
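Roughly, the PowerShell behind those steps looks like this. A sketch only – Pester parameter names vary across versions, and the upload uses the standard AppVeyor NUnit endpoint:

```powershell
# Run the tests, write NUnit XML, and keep the results object
$Results = Invoke-Pester .\Tests -OutputFile .\TestsResults.xml -OutputFormat NUnitXml -PassThru

# Upload results so they appear on the AppVeyor Tests tab
$WebClient = New-Object System.Net.WebClient
$WebClient.UploadFile("https://ci.appveyor.com/api/testresults/nunit/$($env:APPVEYOR_JOB_ID)",
                      (Resolve-Path .\TestsResults.xml))

# Fail the build if any test failed
if ($Results.FailedCount -gt 0) { throw "$($Results.FailedCount) tests failed." }
```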

Better together

Version control, unit testing, and continuous integration: the products we looked at for each of these are fantastic in their own right. If you experiment and try them out together, I think you will find they are even better together. And this particular combination of GitHub + Pester + AppVeyor (for PowerShell) is particularly smooth.

In fact, writing this post, listening to the tail end of the DSC MVA, enjoying some pasta and wine, and recording the various gifs took all of about two hours. Don’t be intimidated, just dive in!

Happy scripting!

Remotely Brick a System

I have another fun project. After some recent system performance analyses, one of my recommendations was to move appropriate systems to VMware’s Paravirtual SCSI controller. We don’t go the vCAC/vRA route yet, so I’m now tasked with integrating this into our fun little ASP.NET/C#, PowerShell, MDT, and SQL deployment Frankenstein. It may be ugly, but it was a fantastic learning experience, and works quite well.

When designing tooling, I usually step through what I want to do manually, and break each step up as needed, building re-usable tools where appropriate. Here’s the recipe, ignoring a paranoid level of error handling:

  • Check to see which of the guest’s disks are online (Why?  We’ll see…)
  • Power off the guest
  • Change all non-system drives to a new Paravirtual SCSI controller
  • Power on the guest
  • All your disks are gone!
  • Set your disks that are offline back to online. Each test case of mine resulted in all migrated disks coming up as offline.
  • See all those ‘Power’ tasks? All this will need to be performed from a remote system.

Most of this is vanilla PowerCLI. A few steps require something like DiskPart though. Yes, Windows 6.2 (Windows 8 / Server 2012) and later include Storage cmdlets. Unfortunately, Microsoft cut off legacy systems, along with the many organizations out there who still rely on them, even if they can roll out 2012 R2 boxes for new projects. DiskPart it is!

Tangent: Whoever decided to include OS-specific cmdlets in certain DSC Resources made me sad. The whole OS-specific-cmdlet idea has led to a good deal of confusion, and relying on it in a technology that’s slated for inclusion in the Common Engineering Criteria (presumably as the standard for configuration management) might not help with adoption.

DiskPart on a remote computer

The blog title should make sense at this point; thankfully, no systems were harmed in the writing of this post. So, we have a set of DiskPart commands that need to run remotely. How do we do it?

A while back I wrote New-RemoteProcess. It was the second PowerShell function I published, definitely not my proudest work, but it does the trick; I have my remoting mechanism. Now I need to parse out online disks to know which offline disks should really be online. I sifted through the numerous PowerShell+DiskPart posts out there and didn’t find much on running it remotely. I did find Alan Conkle’s code for parsing disk, volume, and vdisk output. We tweaked New-RemoteProcess to give us Invoke-DiskPartScript, which we can use inside a few Get-DiskPart* functions that mash in Alan’s code.

The result? You can now remotely brick a system. Or check for offline disks and set them back to online after changing to a Paravirtual controller, your choice!

PSDiskPart

I packaged my functions up into PSDiskPart, and committed them to GitHub. If you need to run Diskpart against a remote system, and remoting + OS-specific-Storage-Cmdlets won’t work for you, give this a shot! If you have any suggestions or tips, pull requests would be appreciated.

Here are a few examples from my environment:

Get disk info for a few computers, pick out a few properties:

[Screenshot: Get-DiskPartDisk]

Get volume information from the current computer:

[Screenshot: Get-DiskPartVolume]

Set a disk to offline… Don’t do this to a production system : )

[Screenshot: Invoke-DiskPartScript-Offline]

Bring a disk back online, and remove the readonly flag if it is set:

[Screenshot: Invoke-DiskPartScript-Online]
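For reference, those examples look roughly like this – the parameter names here are hypothetical, based on the screenshots; check the PSDiskPart repo for the real syntax:

```powershell
# Disk info for a few computers, selecting a few properties
Get-DiskPartDisk -ComputerName Server1, Server2 |
    Select-Object ComputerName, DiskIndex, Size, Status

# Volume information from the current computer
Get-DiskPartVolume

# Set a disk offline... don't do this to a production system :)
Invoke-DiskPartScript -DiskPartText "select disk 1`noffline disk"

# Bring it back online and clear the readonly flag if set
Invoke-DiskPartScript -DiskPartText "select disk 1`nonline disk`nattributes disk clear readonly"
```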

Invoke-DiskPartScript and Sleep

I did run into a small issue with the borrowed logic from New-RemoteProcess. Every so often, I simply wouldn’t get results back. Adding a Start-Sleep resolved this, but seemed inefficient.

I wanted something where I could say ‘wait until you can see this file.’ This is simple enough to write, but one of the things I love about PowerShell is that it is task-based. I want to wait for a path to exist, export to a csv, create a VM, or perform some other specific task, not worry about the logic and error handling behind each of these tasks.

I didn’t see anything out there, so I drafted up Wait-Path, which returns more quickly than hard coding a start-sleep call.
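The core idea is simple; a minimal version of the pattern might look like this (a sketch, not the published Wait-Path, which adds parameter validation and more):

```powershell
function Wait-PathSimple {
    # Poll until a path exists, or give up after a timeout
    param(
        [string]$Path,
        [int]$TimeoutSeconds = 30
    )
    $Timer = [System.Diagnostics.Stopwatch]::StartNew()
    while (-not (Test-Path -Path $Path)) {
        if ($Timer.Elapsed.TotalSeconds -gt $TimeoutSeconds) {
            return $false
        }
        Start-Sleep -Milliseconds 200
    }
    $true
}

# Example: wait up to 10 seconds for DiskPart output to land
Wait-PathSimple -Path C:\Temp\DiskPartOutput.txt -TimeoutSeconds 10
```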

Up next

My wife is out of town this week. This means I have more time to play and to wrap up a few posts I have planned. No promises, but the following are on my plate:

  • REST / Infoblox –  A follow-up walking through a few Infoblox functions, illustrating why it would be quite nice if vendors provided their own PowerShell modules.
  • Invoke-SqlCmd2 – Highlight some of this function’s features, with some practical examples. Pre-staging a computer and applications in MDT? Getting migrated objects from ADMT? Diving into OpsMgr? Too many choices…
  • Building an inventory database – Not everyone has a mature CMDB. Create a database that can track details on servers, SQL instances, SQL databases, scheduled tasks, and more.
  • Filling an inventory database – Now that we have an inventory database, collect the data! This boils down to the products and attributes you want to track, but we can start with some basics.

Cheers!

How Do I Learn PowerShell?

I often see questions on how to learn PowerShell. Rather than address these each time they come up, I figured it was time for a post. PowerShell is a critical skill for anyone working in IT on the Microsoft side of the fence. Anyone from a service desk associate to printer admin to DBA to developer would benefit from learning it!

There’s no single answer to the question; reflecting back on my path, the following seems like a decent recipe for learning PowerShell. Long story short?  Practice, practice, practice, build some formal knowledge, and participate in the community.

First Things First

There are books, videos, cheat sheets, and blog posts galore.  Before any of this, get a system up and running with PowerShell. I assume you can search for the details on how to do the following:

  • Download the latest Windows Management Framework and requisite .NET Framework – .NET 4.5 and WMF 4 at the moment. Even if you target a lower version of PowerShell, the newer ISE will make learning and everyday use more pleasant
  • Set your execution policy as appropriate, keeping in mind it’s just a seatbelt (I use Bypass on my computers). You do read code before you run it, right?
  • Update your help. If using PowerShell 3 or later, run PowerShell as an administrator and run Update-Help -Force
  • This is standard fare for any technology, but remind yourself of the importance of testing. Test against a test environment or target where possible. Consider all the corner cases and scenarios that your code should handle. Test one, few, many, in batches, and finally, all. Consider using read-only or verbose output before sending those results to commands that change things. Read up on -WhatIf, -Confirm, and where these don’t work. Don’t be this guy.
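Two of those habits are worth building early. The C:\Temp paths here are illustrative:

```powershell
# Preview what a destructive command would do, without doing it
Get-ChildItem C:\Temp -Recurse | Remove-Item -WhatIf

# Or prompt for confirmation on each change
Get-ChildItem C:\Temp\*.log | Remove-Item -Confirm
```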


Okay, enough disclaimers. Start exploring and experimenting. A solid foundation built on books and other references is incredibly helpful, but experience is even more important. If you tell yourself “I’ll learn PowerShell when I have more time,” you’ve already lost. You learn through practice and experience; you’re not going to magically learn PowerShell in some boot camp, training program, or book if you don’t regularly use it in the real world.

I Only Learn By Doing!

Building a foundation for PowerShell knowledge is important. You should spend time working with the language, but this isn’t a language where you can ignore formal knowledge; your assumptions will bite you, cause frustration, and lead to poor code. This is not your standard shell or scripting language for a few key reasons:

  • PowerShell is both a shell and a scripting language; the shell isn’t a slightly different environment where you might run into odd quirks that don’t expose themselves in a script. The scripting side doesn’t have more powerful language than the shell. They are one and the same. This leads to interesting design decisions. Shell users know that ‘>’ is a redirection operator. Developers and scripters know that ‘>’ is a comparison operator. This is one of several conflicts between shell and scripting expectations that makes PowerShell a bit unique, and can inspire wharrgarbl even from talented developers.
  • PowerShell targets IT professionals and power users, rather than developers. A developer might expect an escape character to be a ‘\’. If Microsoft chose this, we would need to escape every single slash in a path: “C:\\This\\Is\\A\\Pain”. Few IT professionals in the Microsoft ecosystem would use a language like that. Several design decisions like this may confuse seasoned developers.

If you come in as an experienced scripter or developer, you might get serious heartburn if you make assumptions and don’t look at PowerShell for what it is. Even developers like Helge Klein (author of delprof2 and other tools) make this mistake. Several of my talented co-workers who are more familiar with C# and .NET, or Python/Perl and various shell languages have made this mistake as well. Like most languages, if you’re going to use it, you should spend some time with formal reading material, and should avoid assumptions.

Formal resources

Hopefully you’ve decided to look at some formal learning materials!  I keep a list of the resources I find helpful for learning PowerShell here.

Prefer books?

  • If you’re a beginner without much scripting / code experience, check out Learn Windows PowerShell 3 in a Month of Lunches.
  • If you have experience with scripting and code, Windows PowerShell in Action is the way to go. This is as deep as it gets, short of in-depth, lengthy blog posts, and you get to read about the reasons behind the language design. Knowing the reason for ‘greater than’ being -gt rather than > should quell your heartburn.
  • Strapped for cash? Mastering PowerShell is a great free book.
  • Want to know what started this all? Read Jeffrey Snover’s Monad Manifesto. This isn’t on PowerShell per se, but it gives insight into the vision behind the language.

Prefer videos?

Prefer training?

There are plenty of training opportunities. Keep in mind that training might cater to the lowest common denominator in the room, and that a few hours, even at breakneck speed, won’t be enough. Of course, this could be my own bias showing.

Join the Community!

The community is a great place to learn, and a great source of ideas for what you can do with PowerShell.

  • Find popular repositories and publications on sites like GitHub and the TechNet Gallery. Dive into the underlying code and see how and why it works. Experiment with it. Keep in mind that popular code does not mean good code. Contribute your own code when and if you are comfortable with it.
  • Drop in and get to know the communities. Stop by #PowerShell on IRC (Freenode). Check out Twitter; if you don’t use it, you might be surprised at how valuable it can be. Join the PowerShell.org community. Keep an eye out for other communities that might be more specific to your interests or needs.
  • Look for blogs – some might cover general PowerShell, others might cover your IT niche. I keep a few here, but there are many others.
  • Participate in a local PowerShell user group. There’s no single way to find these; look at PowerShellGroup.org and PowerShell.org, and ask around.

As you spend time working with PowerShell and following or participating in the community, you will find some community members that you can generally rely on. PowerShell MVPs, the PowerShell team, and other respected community members put out some fantastic material.

Unfortunately, because everyone loves to contribute, you will find plenty of outdated, inefficient, incorrect, or downright dangerous code out there. This is another reason I tend to steer folks towards curated, formal resources at the outset, so that they can learn enough to recognize bad code at a glance.

Spend Some Time With PowerShell

At this point, you should be good to go! A few suggestions that I’ve found helpful along my way:

  • You can use PowerShell to learn PowerShell – Get-Command, Get-Help, and Get-Member are hugely beneficial. The Get-Help about_* topics might seem dry, but their content is as good as any book.
  • Building functions can be helpful – focus on modular code that you can use across scenarios. Don’t limit yourself by adding GUIs, requiring manual input from Read-Host, or other restrictive designs.
    • As an example, I borrowed code from Boe Prox to write Invoke-Parallel a while back. I use it to speed up many solutions. I borrowed jrich523’s Test-Server, glued it together with Invoke-Parallel, and now I have Invoke-Ping. This lets me parallelize tests against thousands of nodes for services like remote registry, remote RPC, SMB, and RDP. Many of our production scripts start out by querying for a list of nodes and filtering this list with Invoke-Ping. The key is that I have re-usable tools and components that can be integrated across a variety of solutions, not just one-off scripts that are only helpful in a single use case.
  • Spend a few minutes every day!
    • Don’t hold up urgent production troubleshooting if you aren’t ready, but consider revisiting the scenario afterwards. Could you have used PowerShell to detect the issue before it happened? Could you use PowerShell to implement the fix? If you had an issue distributed across many systems, would it have saved time to use PowerShell to troubleshoot?
    • Have a project, task, or tedious manual step to accomplish? Would it make sense to use PowerShell? Spend a little time and see if you can script this out, or write a function to do the work.
    • Start with read only commands. Yes, automation and configuration are important, but you can learn a lot about PowerShell and your environment just by running ‘Get’ commands. This is a great way to learn more about technology in general as well! If I come across a piece of technology that I want to learn, perhaps Infoblox Grid Manager, Citrix NetScaler, SQL Server, or Microsoft Hyper-V, I ask for test access or build my own and try to query it with PowerShell. This helps me learn the basics of many technologies, gives me experience with PowerShell, and gets me involved with a number of fun projects. A month down the line when they want to automate or build tooling for something, we already have some basic tools and experience working with it!
    • Don’t be discouraged. If you had no scripting experience, you’re going to have some growing pains. Work through them; it will be worth it in the end. You will find that most of what you learn can be applied to other areas of PowerShell, given shared conventions and syntax. Don’t be this guy: ‘We have to make changes to 1,000 print queues, we need more FTEs to click through all the menus!’ – No, with someone who has a basic understanding of PowerShell, you could have it done more consistently, with a tool you could re-use, and likely in less time. Who wants to go clicking through 1,000 GUIs anyway? Sounds horrid.
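The first suggestion in that list deserves a concrete illustration – the discovery trio in action:

```powershell
# What commands are available for working with services?
Get-Command -Noun Service

# How do I call one of them? Show the built-in examples:
Get-Help Get-Service -Examples

# What properties and methods do the resulting objects expose?
Get-Service | Get-Member

# And the conceptual topics are a book unto themselves:
Get-Help about_* | Select-Object Name, Synopsis
```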

Good luck, whichever route you take!

REST, PowerShell, and Infoblox

Edit: Wrote a follow-up illustrating a few of these issues through a rudimentary Infoblox PowerShell module.

A short while back, someone asked if I would be up for writing about calling the Infoblox web API through PowerShell. I don’t have extensive experience with this topic, but this is a great opportunity to discuss REST APIs, and the interfaces vendors expose to their customers.

There won’t be any Infoblox PowerShell in this post, that’s in the pipeline.

Interfaces – Software Defined Everything!

With the movement towards software defined everything, more and more products are exposing APIs and interfaces we can control, configure, and orchestrate with.

In general, this is a good thing. Having a nice API is an important first step, but more is needed. Folks like Helge Klein and others with development experience are well served by an API. But what about those of us working on the systems side? Even if we can get a loose grasp on a raw API, we lose much of the consistency and intuitiveness Jeffrey Snover aimed for with PowerShell.

Why PowerShell?

I grew up playing with Lego, figuring out how to piece together contraptions that don’t come with instructions. I do something similar at work today – I take the commands and consistent syntax and conventions of PowerShell, and put them together to build solutions that vendors don’t provide, or that we can’t afford. Unfortunately, not all vendors provide PowerShell support, so we need to build it ourselves.

This brings us to APIs, including C, Java, .NET, and of course, RESTful APIs. Most of these are beyond the skillset of a good number of systems administrators or engineers, including myself. Some of these APIs are more approachable than others, but vendors do us a disservice by limiting their interfaces to these.

Why APIs Aren’t Enough

As an engineer, I often need to integrate a variety of technologies. Many of these have fantastic PowerShell modules, from VMware’s PowerCLI, to Microsoft’s ActiveDirectory module, to Cisco’s UCS PowerTool. I can rely on the consistency and conventions of PowerShell, and spend my time considering the logic and design of a solution, rather than combing through documentation and figuring out the nuances of invoking a specific, obscure, poorly documented method of an API.

What does it mean if your product doesn’t have a PowerShell module, but may need integration with the wider Microsoft ecosystem?

  • Admins and engineers across the industry waste time duplicating effort as they attempt to use your API or binaries through PowerShell
  • Admins and engineers who may not be subject matter experts in your technology end up trying to piece things together, leading to potentially buggy or feature-poor implementations. You as the vendor have the knowledge here, why not help out and ensure a smooth, consistent experience with your product?
  • Admins and engineers who may not have a strong background in PowerShell end up trying to piece a module or functions together, leading to potentially buggy and inconsistent implementations
  • Even if you have an SME for the technology in question who is fluent in PowerShell, they now get to spend copious amounts of time reading through your API documentation and figuring out how to ‘PowerShell-ize’ it

API Overload

Infoblox is one of many examples. EMC Isilon and XtremIO. Citrix NetScaler and Provisioning Services. Commvault Simpana. Thycotic Secret Server. BMC ARS. A wide range of products out there provide no PowerShell module, or something nearly unusable.

Microsoft isn’t immune. SQL Server management often relies on the SMO. A variety of Exchange tasks force you to use the Managed API or EWS. Good luck doing anything useful with the Group Policy module, or even finding an interface to AGPM – DSC is great, but you’re kidding yourself if you think Group Policy is going away any time soon.

Some of these APIs are better than others. A REST API generally means I need to dig through documentation and spend a good deal of time learning the ins and outs of your API. Web services may be a bit dated, but at least they come with tools to discover the methods, constructors, and other details that help when wrapping an API in PowerShell. With REST?  Who knows what you’re going to get; good luck reading and experimenting!
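To make the contrast concrete: a SOAP-style web service proxy can be inspected with Get-Member, while a REST API gives you nothing to introspect – you read the docs and experiment. The endpoints below are hypothetical:

```powershell
# Web service: the proxy object reveals its own methods
$Proxy = New-WebServiceProxy -Uri "https://server.example.com/Service.asmx?WSDL"
$Proxy | Get-Member -MemberType Method

# REST: no discovery; every URI and verb comes from the vendor's documentation
Invoke-RestMethod -Uri "https://server.example.com/api/v1/records" -Method Get -Credential $Cred
```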

Closing

How do we solve this problem? A few suggestions:

Vendors – if your product is commonly used in the Microsoft ecosystem, provide a PowerShell module. Don’t just wrap some binaries or APIs in a format that makes sense to you. Follow the standard conventions and best practices for PowerShell that have made it the successful tool that it is today. The Monad Manifesto should be required reading material for anyone responsible for implementing your PowerShell support.

Microsoft – lead by example. The inclusion of PowerShell in the (server) Common Engineering Criteria was a great start. Take steps to encourage your product groups to provide better and more widespread PowerShell solutions. Perhaps consider taking similar steps to encourage and assist other vendors.

Engineers and admins – if you have input on the decision making process, strongly consider whether PowerShell should be a factor in this. It can be incredibly painful ending up with a critical technology that you can’t control programmatically, or with an interface you have no familiarity with or interest in learning. Java or C? Not for me. If you do end up with these technologies, pressure the vendor to include an accessible PowerShell interface. If a vendor doesn’t hear you tell them you want a PowerShell interface, how will they get the prioritization to build one?

Lastly, while this is a big pain point for me, it’s still fantastic to have a glue language like PowerShell that can piece together well written PowerShell modules, .NET libraries, REST APIs, and everything else.

Disclaimer: This assumes you are locked into the Microsoft ecosystem and standardize on PowerShell.

Exploring PowerShell: Common Parameters, Variables, and More!

When writing PowerShell functions and scripts, you might come across a need to identify common parameters, automatic variables, or other details that can change depending on the environment.

It turns out that PowerShell offers a number of tools to explore, from the standard Get-Command, Get-Help, and Get-Member Cmdlets, to the .NET Framework, with tools like reflection and abstract syntax trees.

This past week I received an interesting question on Invoke-Parallel; can we import variables and modules from the end user’s session to make the command more approachable? This was a good question – Invoke-Parallel is widely used by my co-workers, but there is often confusion over the idea of each runspace being an independent environment.

Before we dive into this, let’s take a step back and look at some work from Bruce Payette, “a co-designer of the PowerShell language and the principal author of the language implementation.” Bruce wrote the definitive PowerShell in Action (PiA), my go-to book recommendation for anyone with scripting or development experience.

PowerShell In Action – Constrained endpoints

Constrained, delegated endpoints are getting more attention nowadays with JitJea. It turns out Bruce talked about constrained endpoints in PiA long ago.

Interactive and implicit remoting depend on a few commands. If you want to create a constrained endpoint that works with interactive or implicit remoting, you should define proxy functions for these. Rather than hard-code the command names, Bruce shows us how to get them at run time, by creating the restricted ‘RemoteServer’ initial session state and listing the public commands within it:

$iss = [Management.Automation.Runspaces.InitialSessionState]::CreateRestricted("RemoteServer")            
$iss.Commands | Where { $_.Visibility -eq "Public" } | Select Name

(The output lists the small set of public commands – Get-Command, Get-FormatData, Select-Object, Get-Help, Measure-Object, Exit-PSSession, and Out-Default – that interactive and implicit remoting rely on.)

How does this relate to automatic variables and modules? In both cases we can use PowerShell and the .NET Framework to find the answer at run time, rather than hard coding the answer.

Automatic variables and modules

If we want to pass variables from the user’s session into the Invoke-Parallel runspaces, we probably want to ignore the wide array of automatic variables.

We could certainly hard code these, but what fun is that? The approach we ended up taking was to compare a clean environment with the current environment, by creating a clean PowerShell runspace, listing out the modules, pssnapins, and variables within it, and comparing these with the user’s current session.

$StandardUserEnv = [powershell]::Create().addscript({            
            
    #Get modules and snapins in this clean runspace            
    $Modules = Get-Module | Select -ExpandProperty Name            
    $Snapins = Get-PSSnapin | Select -ExpandProperty Name            
            
    #Get variables in this clean runspace            
    #Called last to get vars like $? into session            
    $Variables = Get-Variable | Select -ExpandProperty Name            
                
    #Return a hashtable where we can access each.            
    @{            
        Variables = $Variables            
        Modules = $Modules            
        Snapins = $Snapins            
    }            
}).invoke()[0]

(The result: a hash table listing the variables, modules, and snap-ins present in a clean, default runspace.)

This isn’t perfect; certain automatic variables won’t be created out of the gate. For example, $Matches won’t be listed. But this gives us a good start, and filters out the majority of variables and modules that we can ignore. The end result? A more usable Invoke-Parallel:

$Path = "C:\Temp"            
            
#Query 3 computers for something, save the results under $path 
echo Server1 Server2 Server3 | Invoke-Parallel -ImportVariables { 
                
    #Do something and record some output!            
    $Output = "Some Value for $_"            
            
    #Save it to the path we specified outside the runspace            
    $FilePath = Join-Path $Path "$_-$(Get-Date -UFormat '%Y%m%d').txt"                
    Set-Content $FilePath -Value $Output -force            
            
}            
            
#List the output:            
dir $Path

(With -ImportVariables, each runspace can resolve $Path, and dir returns the three output files under C:\Temp.)

In the past, $path would not be passed in, resulting in potential confusion and broken code.

Common parameters

What if you want a list of common parameters? Perhaps you are splatting PSBoundParameters and want to exclude common parameters, or perhaps you just want a list of common parameters to jog your memory.

You could hard code these, but this is no fun, and might get complicated if you want to cover version-specific parameters like PipelineVariable. Let’s use one of the core commands for exploring PowerShell to get these: Get-Command.

#Define an empty function with cmdletbinding            
Function _temp { [cmdletbinding()] param() }            
            
#Get parameters, only common params are returned            
(Get-Command _temp | Select -ExpandProperty parameters).Keys

(The keys returned are the common parameter names: Verbose, Debug, ErrorAction, WarningAction, ErrorVariable, WarningVariable, OutVariable, OutBuffer, and – on PowerShell 3 and later – PipelineVariable.)

That’s it! We simply build a temporary function, and ask PowerShell what that function’s parameters are.

Use PowerShell to explore PowerShell

The key takeaway here is that you can use PowerShell or the .NET Framework itself to explore PowerShell. Use this to your advantage when writing functions where details on the runtime environment can improve the end user’s experience.

Cheers!

PowerShell Pipeline Demo

Happy holidays all!  Long ago, an observant co-worker added the Cookie Monster nickname to my office name plate.  Even during off-seasons, this earns me bonus cookies – “hey!  you’re cookie monster, right?  have this cookie!”  As you might imagine, the holidays are worse.  Forgive me if I’m a bit slow this month.

This is a quick hit to cover two topics that often generate confusion; handling pipeline input, and handling the code behind -Confirm and -Whatif.  I often forget what to expect with pipeline input, and use the verbose output from Test-Pipeline to double check.

Pipeline input

Many of your favorite commands support pipeline input.  Get-ADUser | Set-ADUser.  Get-ChildItem | Remove-Item.  The pipeline is an integral part of PowerShell; it’s covered in two sections of (the subjective) best practices for building PowerShell functions, and examples abound online… yet many community based functions don’t support it.

The key bits:

  • Use [parameter()] attributes to add pipeline support to a parameter.
  • Use a Process block in your function.
  • Reference the pipeline-enabled parameter in your Process block.
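Stripped to the bare minimum, those three bits look something like this (hypothetical Get-Stuff function):

```powershell
Function Get-Stuff {
    [cmdletbinding()]
    param(
        # ValueFromPipeline enables 'X | Get-Stuff'
        [parameter(ValueFromPipeline = $True)]
        [string[]]$ComputerName
    )
    Process
    {
        # The Process block runs once per pipeline item
        foreach($Computer in $ComputerName)
        {
            "Processing $Computer"
        }
    }
}

"Server1", "Server2" | Get-Stuff
```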

A function to demonstrate support for pipeline input, on a ComputerName variable. Copy it to the PowerShell ISE for better code highlighting:

Function Test-Pipeline {            
    [cmdletbinding(SupportsShouldProcess=$true, ConfirmImpact="Medium")]            
    param(            
        [parameter( Mandatory = $false,            
                    ValueFromPipeline = $True,            
                    ValueFromPipelineByPropertyName = $True)]            
        [string[]]$ComputerName = "$env:computername",            
            
        [switch]$Force            
    )            
                
    Begin            
    {            
        $RejectAll = $false            
        $ConfirmAll = $false            
            
        Write-Verbose "BEGIN Block - `$ComputerName is a $(try{$ComputerName.GetType()} catch{$null}) with value $ComputerName`nPSBoundParameters is `t$($PSBoundParameters |Format-Table -AutoSize | out-string )"            
    }            
    Process            
    {            
        Write-Verbose "PROCESS Block - `$ComputerName is a $(try{$ComputerName.GetType()} catch{$null}) with value $ComputerName`nPSBoundParameters is `t$($PSBoundParameters |Format-Table -AutoSize | out-string )"            
                    
        foreach($Computer in $ComputerName)            
        {            
            if($PSCmdlet.ShouldProcess( "Processed the computer '$Computer'",            
                                        "Process the computer '$Computer'?",            
                                        "Processing computer" ))            
            {            
                if($Force -Or $PSCmdlet.ShouldContinue("Are you REALLY sure you want to process '$Computer'?", "Processing '$Computer'", [ref]$ConfirmAll, [ref]$RejectAll)) {            
                    Write-Verbose "----`tPROCESS Block, FOREACH LOOP - processed item is a $(try{$computer.GetType()} catch{$null}) with value $computer`nPSBoundParameters is `t$($PSBoundParameters |Format-Table -AutoSize | out-string )"            
                }            
            }            
        }            
    }            
    End            
    {            
        Write-Verbose "END Block - `$ComputerName is a $(try{$ComputerName.GetType()} catch{$null}) with value $ComputerName`nPSBoundParameters is `t$($PSBoundParameters |Format-Table -AutoSize | out-string )"            
    }            
}

Piping two computers to this command:

(In the verbose output, the Begin block shows $ComputerName still holding its default value – pipeline input isn’t bound yet – the Process block fires once per piped computer, and the End block sees only the last piped item.)

Notice the behavior of $ComputerName in the Begin and End blocks; it might catch you off guard.

SupportsShouldProcess

One of the first things we learn with PowerShell is to look for –Whatif and –Confirm parameters.  We also learn that these are not available everywhere, and that implementing them is up to the author.  This is another common omission in community based functions, despite it being a best practice to provide this support where appropriate.

You might also find inconsistent implementation.  The simplest implementation (seen in the Cmdlet snippet) leads to reliance on funky-looking syntax like Do-Something -Confirm:$False, and includes no -Force switch.

Joel Bennett provided a great guideline for this.  You can see the implementation of this in the Test-Pipeline code above.

Without -Force, each computer triggers the ShouldProcess confirmation, followed by the ‘Are you REALLY sure’ ShouldContinue prompt.

The -Force switch bypasses the ShouldContinue prompt and confirms all, as expected.

Wrapping up

If you are submitting production grade functions to the community, or just want to provide a user experience mimicking a Cmdlet, be sure to look into providing support for the pipeline and SupportsShouldProcess.  It looks like a lot of effort, but if you create a snippet or a template for your functions, you can start with code similar to Test-Pipeline and tweak it to meet your needs.

Further reading:

Disclaimer:

I don’t claim to have followed the above for all of my contributions : )  I’m trying to start though, one of my PowerShell resolutions is to start practicing what I preach and follow best practices!

PowerShell Splatting – build parameters dynamically

Have you ever needed to run a command with parameters that depend on the runtime environment?  I often see logic like this:

If($Cred) { Get-WmiObject Win32_OperatingSystem -Credential $Cred }            
Else      { Get-WmiObject Win32_OperatingSystem }

It doesn’t look too terrible if you only have one option, but things get ugly fast.  What if you want the same logic for $Credential, $ComputerName, and $Filter?  You end up with eight potential combinations and an unreadable mess of code, and this is with only three parameters.

The answer is splatting!

What is splatting?

Splatting is just a way to pass parameters to commands, typically with a hash table.  It was introduced with PowerShell v2, so it is compatible pretty much anywhere.  Here’s a simple example:

#Define the hash table            
$GWMIParams = @{            
    Class = "Win32_OperatingSystem"            
    Credential = $Cred            
    ComputerName = 'localhost'            
}            
            
#Splat the hash table.  Notice we put an @ in front of it:            
Get-WmiObject @GWMIParams

Many blog posts focus on the readability splatting offers.  Readability is very important, but splatting gives us the framework needed to build up parameters for a command dynamically.

Building up command parameters

Let’s build a simple example.  The basic steps we take include creating a hash table, adding key-value pairs to that hash table, and splatting the hash table against a command.

#Create the initial hash table            
#You can use @{} to create an empty hash table            
$GWMIParams = @{            
    ErrorAction = "Stop"            
}            
            
#Add parameters depending on the environment            
#We use simple logic here, but you can get creative            
if($Cred)            
{            
    #We can add a key value pair with the Add method            
    $GWMIParams.add("Credential", $Cred)            
}            
if($Computer)            
{            
    #This alternative to the Add method is easier to read            
    $GWMIParams.ComputerName = $Computer            
}            
if($Filter)            
{            
    $GWMIParams.Filter = $Filter            
}            
            
#Splat the hash table.  You can splat multiple hash tables and still use parameters            
Get-WmiObject @GWMIParams -Class Win32_OperatingSystem

When we run this, Get-WMIObject will have different parameters depending on whether we have $Computer, $Filter, or $Cred defined in our session.  We get a side benefit to this:  a hash table with parameters and their values, which you can use for providing verbose or debug output.
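That side benefit costs one line – for example, writing the splat table to the verbose stream before the call:

```powershell
# Surface exactly which parameters we are about to splat
# (visible when -Verbose is specified, or $VerbosePreference allows it)
Write-Verbose "Calling Get-WmiObject with:`n$($GWMIParams | Out-String)"
```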

If we run the example code without Computer, Filter, or Credential variables defined, only the ErrorAction entry makes it into the hash table.

If we set $Computer to ‘localhost’ and run the same code, the splatted parameters now include ComputerName as well.

Next steps

That’s about it!  For further reading, check out Get-Help about_Splatting (the about topic was added in PowerShell 3), or search around for numerous blog posts on the topic.

The real fun is figuring out what logic you should use to work with the hash table you are splatting.  If you follow best practices when writing PowerShell functions, you open up access to the $PSBoundParameters automatic variable, which conveniently behaves like a hash table!
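As a closing sketch, a hypothetical wrapper that forwards only the parameters the caller actually supplied, by splatting $PSBoundParameters:

```powershell
Function Get-OSInfo {
    [cmdletbinding()]
    param(
        [string[]]$ComputerName,
        [System.Management.Automation.PSCredential]$Credential
    )

    # $PSBoundParameters contains only what the caller passed in,
    # and both parameter names match Get-WmiObject's, so splat it directly
    Get-WmiObject @PSBoundParameters -Class Win32_OperatingSystem
}

# Credential was not supplied here, so it is not splatted:
Get-OSInfo -ComputerName localhost
```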