Moving to GitHub Pages

WordPress.com has served me well for a few years, but it’s time for a move!

I gave GitHub Pages and Jekyll a try this weekend, and I’m sold.

The content here will remain. This site will be (mostly) static going forward; all new content will be posted on my GitHub page.

Cheers!

SQLite and PowerShell

March 2015 EDIT: This post has been updated and moved to my new GitHub pages site.

I’ve been planning on sharing some fun projects that involve SQL. Every time I start writing about these, I end up spending a good deal of time writing about MSSQL, and thinking of all the potential caveats that might scare off the uninitiated. Will they have an existing SQL instance they can work with? Will they have access to it? Will they run into a grumpy DBA? Will they be scared off by the idea of standing up their own SQL instance for testing and learning?

Wouldn’t it be great if we could illustrate how to use SQL, and get an idea of how helpful it can be, without the prerequisite of an existing instance with appropriate configurations and access in place?

SQLite

“SQLite is an in-process library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine.”

What do you know, sounds pretty close to what we are looking for!  We want to use this in PowerShell, so where do we start?

Looking around, you’ll stumble upon Jim Christopher’s SQLite PowerShell Provider. If you like working with providers and PSDrives, this is probably as far as you need to go. Other examples abound, including interesting solutions like Chrissy LeMaire’s Invoke-Locate, which leverages SQLite behind the scenes.

I generally prefer standalone functions and cmdlets over providers. I’m also a fan of abstraction, and building re-usable, simple to use tools. The task-based nature of PowerShell makes it a great language for getting things done. We can concentrate on doing what we want to do, not the underlying implementation.

I was looking for something similar to Invoke-Sqlcmd2, which abstracts out the underlying .NET logic to provide simplified SQL queries, the ability to handle SQL parameters, PowerShell-esque behavior for DBNull, and other conveniences.

Invoke-SQLiteQuery

I spent a few minutes with the SQLite binaries and examples from Jim and Chrissy, and simply duct-taped SQLite functionality onto Invoke-Sqlcmd2. Let’s take a look at what we can do.

Download and unblock Invoke-SQLiteQuery, and you’ll be up and running, ready to work with SQLite. Let’s create a data source and a table:

#Import the module, create a data source and a table
Import-Module PSSQLite

$Database = "C:\Names.SQLite"
$Query = "CREATE TABLE NAMES (
    Fullname VARCHAR(20) PRIMARY KEY,
    Surname TEXT,
    Givenname TEXT,
    Birthdate DATETIME)"

#SQLite will create Names.SQLite for us
Invoke-SqliteQuery -Query $Query -DataSource $Database

#We have a database and a table; let's view the table info
Invoke-SqliteQuery -DataSource $Database -Query "PRAGMA table_info(NAMES)"


That was pretty easy! We used a SQLite PRAGMA statement to see basic details on the table we created. Now let’s insert some data and pull it back out:

# Insert some data, use parameters for the fullname and birthdate
$Query = "INSERT INTO NAMES (Fullname, Surname, Givenname, Birthdate)
    VALUES (@full, 'Cookie', 'Monster', @BD)"

Invoke-SqliteQuery -DataSource $Database -Query $Query -SqlParameters @{
    full = "Cookie Monster"
    BD   = (Get-Date).AddYears(-3)
}

# Check to see if we inserted the data:
Invoke-SqliteQuery -DataSource $Database -Query "SELECT * FROM NAMES"


In this example we parameterized the query – notice that @full and @BD were replaced with the full and BD values from -SqlParameters, respectively.

Let’s take a quick look at using SQLite in memory:

# Create a SQLite database in memory
# This exists only as long as the connection is open
$C = New-SQLiteConnection -DataSource :MEMORY:

#Add some tables
Invoke-SqliteQuery -SQLiteConnection $C -Query "
    CREATE TABLE OrdersToNames (OrderID INT PRIMARY KEY, Fullname TEXT);
    CREATE TABLE Names (Fullname TEXT PRIMARY KEY, Birthdate DATETIME);"

#Add some data
Invoke-SqliteQuery -SQLiteConnection $C -SqlParameters @{ BD = (Get-Date) } -Query "
    INSERT INTO OrdersToNames (OrderID, Fullname) VALUES (1, 'Cookie Monster');
    INSERT INTO OrdersToNames (OrderID) VALUES (2);
    INSERT INTO Names (Fullname, Birthdate) VALUES ('Cookie Monster', @BD)"

#Query the data. Illustrate PSObject vs. DataRow filtering
Invoke-SqliteQuery -SQLiteConnection $C -Query "SELECT * FROM OrdersToNames" |
    Where-Object { $_.Fullname }

Invoke-SqliteQuery -SQLiteConnection $C -Query "SELECT * FROM OrdersToNames" -As DataRow |
    Where-Object { $_.Fullname }

#Joining. Yeah, a CustomerID would make more sense :)
Invoke-SqliteQuery -SQLiteConnection $C -Query "
    SELECT * FROM Names
    INNER JOIN OrdersToNames
        ON Names.Fullname = OrdersToNames.Fullname"


Typically, we might use DataRow output from MSSQL and SQLite queries. As you can see above, DataRow output leads to unexpected filtering behavior – if I filter on Where {$_.Fullname}, I don’t expect results with no Fullname to come back. Thankfully, we have code from Dave Wyatt that can quickly and efficiently convert output to PSObjects that behave as expected in PowerShell.

We did the querying above in memory. Let’s run PRAGMA STATS to see details on the in-memory data source. If we close the connection and run this again, we see the data is gone:
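A rough sketch of that check, reusing the $C connection from the example above:

# Run a PRAGMA against the open in-memory connection
Invoke-SqliteQuery -SQLiteConnection $C -Query "PRAGMA STATS"

# Close the connection; the in-memory database disappears with it
$C.Close()

# Re-running the same query now fails - the data source is gone
Invoke-SqliteQuery -SQLiteConnection $C -Query "PRAGMA STATS"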


Next steps

That’s about it! If you want simplified SQLite queries in PowerShell, check out Invoke-SQLiteQuery. If you delve into the MSSQL side of the house, check out Invoke-Sqlcmd2 from Chad Miller et al. It was used as the basis for Invoke-SQLiteQuery and behaves very similarly.

Now I just have to find more time to write…

Disclaimer: This weekend was the first time I’ve used SQLite. If I’m missing any major functionality, or you see unexpected behavior, contributions or suggestions would be quite welcome!

Testing DSC Configurations with Pester and AppVeyor

I spent the last few days tinkering with AppVeyor. It’s an interesting service to help enable continuous integration and delivery in the Microsoft ecosystem.

Last night I realized it might offer a simple means to test the outcome of your DSC configurations. Here’s the recipe for a simple POC, with plenty of room for you to tweak and integrate with your existing processes:

  • Create a DSC configuration you want to test
  • Create a short script to apply that configuration
  • Create some Pester tests to verify the desired state of your system, post configuration
  • Create the AppVeyor yaml to control this process
  • Add this to an appropriate AppVeyor source (Example covering GitHub)
  • Test away! Make a change to the DSC code, commit, and AppVeyor spins up a VM, applies the DSC configuration, and your Pester tests verify the outcome

This certainly isn’t a perfect method, but it would be a simple way for anyone to get up and running writing and testing DSC configurations and resources.

I’m going to make the assumption that you are familiar with GitHub, Pester, and some of the basics of AppVeyor from my first overview.

Wait, what is DSC?

Windows PowerShell Desired State Configuration is a configuration management platform from Microsoft. It’s still young, but it’s fast-tracked for the Common Engineering Criteria, receives a good deal of attention from the PowerShell team, and the guy who brought us PowerShell is quite excited about it. It’s probably something you should be paying close attention to.

There is quite a variety of resources to get started with. The DSC Book from PowerShell.org is a nice overview, and the soon-to-be-published MVA series was quite helpful (Getting Started, Advanced).

Let’s look at some caveats to testing DSC over AppVeyor.

Yes, but…

  • You could do much of this on your own, with tools like Client Hyper-V and AutomatedLab. You might even have a similar toolset in place at work.
  • No testing of distributed systems. This allows testing on a single VM.
  • Limited selection of operating systems to deploy on. There is an OS option in the yaml; presumably this means we may see more.
  • Your configurations cannot restart the VM. Hoping this will change, but depending on their architecture and design, this may be tough.
  • You’re deploying on a hosted service. If your DSC configurations are integrated with your CMDB and other internal resources, this may be a show stopper.
  • A good DSC resource should have a solid ‘test’ function. That being said, a single DSC resource’s test function might not handle the intricate combination of resources applied to a system.

Right. I’m sure I’m missing others as well. Certainly not perfect, but if your goal is to write and test some general DSC resources or configurations, and you don’t have the tools in house, this is a fantastic and simple way to spin up fresh VMs, configure them with DSC, and verify that the resulting system is now in the desired state.

Another caveat – building your own system that performs similar functionality would be an incredibly valuable experience. They always say “don’t re-invent the wheel,” but if your goal is experience and learning, re-inventing the wheel is a great way to get there!

Pick a source

I’m going to stick to GitHub for this. Keep in mind that AppVeyor is only free for open source projects. Consider your security posture before uploading any sensitive DSC configurations.


Pick a DSC configuration to test

We’re going with an incredibly basic example here:

Configuration ContosoWebsite
{
    param (
        [string[]]$ComputerName = $ENV:ComputerName
    )

    Node $ComputerName
    {
        #Install the IIS Role
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        #Install ASP.NET 4.5
        WindowsFeature ASP
        {
            Ensure = "Present"
            Name   = "Web-Asp-Net45"
        }

        #Install PowerShell AD... for fun...
        WindowsFeature ADPS
        {
            Ensure = "Present"
            Name   = "RSAT-AD-PowerShell"
        }
    }
}

Write a controller script to apply the configuration

There are plenty of ways to do this. Modify this POC example to meet your needs.

# This script will invoke a DSC configuration
# This is a simple proof of concept

"`n`tPerforming DSC Configuration`n"

. .\DSC\WebServer.ps1

( ContosoWebsite -ComputerName $ENV:COMPUTERNAME ).FullName |
    Set-Content -Path .\Artifacts.txt

Start-DscConfiguration .\ContosoWebsite -Wait -Force -Verbose

Most of this should be self-explanatory. We force application of the ContosoWebsite configuration from WebServer.ps1. The one extra bit is that we save the path to the resulting MOF file in Artifacts.txt, which we upload later on.

Write your Pester tests

Go crazy. I wrote some very simple, limited tests for this POC:

Describe "Web Server" {
It "Is Installed" {
$Output = Get-WindowsFeature web-server
$Output.InstallState | Should Be "Installed"
}
It "Includes ASP.NET 4.5" {
$Output = Get-WindowsFeature Web-Asp-Net45
$Output.InstallState | Should Be "Installed"
}
}
Describe "ActiveDirectory Module" {
It "Is Installed" {
$Output = Get-WindowsFeature RSAT-AD-PowerShell
$Output.InstallState | Should Be "Installed"
}
}

Keep in mind that you aren’t limited to one pass. You could theoretically apply your configuration, test, add a ‘mischief’ script that messes with the configuration, and test to verify that your DSC configuration brings things back in line.

Write the AppVeyor yaml

We’re almost done!  At this point, we’re going to tell AppVeyor what to run. There’s a lot more you can configure in the yaml, so be sure to flip through the options and experiment.

# See http://www.appveyor.com/docs/appveyor-yml for many more options

# Skip on updates to the readme.
# We can force this by adding [skip ci] or [ci skip] anywhere in commit message
skip_commits:
  message: /updated readme.*/

install:
  - cinst pester

build: false

test_script:
  # Invoke DSC configuration!
  - ps: . .\Tests\appveyor.dsc.ps1
  # Test with native PS version, finalize
  - ps: . .\Tests\appveyor.pester.ps1 -Test -Finalize

deploy_script:
  - ps: Get-Content .\Artifacts.txt | Foreach-Object { Push-AppveyorArtifact $_ }

What does this do?

  • We ignore any commits that match ‘updated readme’, to avoid unnecessary builds.
  • We run appveyor.dsc.ps1, which applies the DSC configuration and saves the mof path.
  • We run appveyor.pester.ps1, which invokes the pester script, sends test results to AppVeyor, and tells us if the build passed.
  • We upload the mof file, which will be available on the artifacts page for your AppVeyor project.

Tie it all together

We’re good to go! We create a GitHub repository, add this to our AppVeyor projects, and make a commit (covered here). Browse around the project on AppVeyor to see the results:

[Screenshot: AppVeyor build results for the DSC project]

That’s it – we now have a simple framework for automated DSC resource and configuration testing. There’s a lot more you might want to do, but the simple POC material is in the AppVeyor-DSC-Test repository on GitHub.

Cheers!

GitHub, Pester, and AppVeyor: Part 2

I recently published a quick bit on using GitHub, Pester, and AppVeyor, a slick combination to provide version control, unit testing, and continuous integration to your PowerShell projects.

That post was a quick overview and essentially summed up ideas and implementation straight from Sergei. Before this pull request, I hadn’t worked with Pester or AppVeyor:

[Screenshot: the pull request]

We just went through a major upgrade to our EMR, and I’m covering our primary admin for any fallout this morning. Thankfully, there wasn’t much to do, so I spent a few minutes toying with AppVeyor and Pester. This post is a quick summary of the outcome.

The code referenced in this post is now part of the PSDiskPart project on GitHub.

Abstraction

The AppVeyor yaml file used in the Wait-Path repository is fairly straightforward. It installs Pester and runs a few lines of PowerShell. The readability of those PowerShell lines was a bit painful – no syntax highlighting, some shortcuts to keep code on one line, and so on. My first step was to abstract most of the PowerShell out to another script.

The PSDiskPart AppVeyor yaml file is the result. It’s a little cleaner; the only test_script lines are calls to a single PowerShell script.

Not everyone would prefer this method, as it adds a layer of complexity, but I like the abstraction, and it enables the second line of PowerShell, where we call PowerShell.exe -version 2. So what’s in the appveyor.pester.ps1 controller script?

AppVeyor.Pester.ps1

My approach is to use a single file. Because we call it several times to cover both PowerShell version 2 and the native PowerShell version, we need to serialize our output and add a ‘finalize’ pass to collect everything and send it to AppVeyor.
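A rough sketch of that pattern (simplified; the parameter handling and file paths here are illustrative, not the exact appveyor.pester.ps1 contents):

param([switch]$Test, [switch]$Finalize)

if ($Test) {
    # Run Pester for this PowerShell version and serialize the results to disk
    $Results = Invoke-Pester -PassThru
    $Results | Export-Clixml -Path ".\PesterResults_PS$($PSVersionTable.PSVersion.Major).xml"
}

if ($Finalize) {
    # Collect every serialized run and fail the build if anything failed
    $FailedCount = Get-ChildItem ".\PesterResults_PS*.xml" |
        ForEach-Object { Import-Clixml -Path $_.FullName } |
        Select-Object -ExpandProperty FailedCount |
        Measure-Object -Sum |
        Select-Object -ExpandProperty Sum

    if ($FailedCount -gt 0) {
        throw "$FailedCount tests failed."
    }
}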

If you look at my commit or AppVeyor history for this morning, you will see an embarrassing number of small changes and tweaks – multitasking is generally a bad idea; multitasking without caffeine is worse:

[Screenshots: this morning’s commit and build history]

The first of my struggles: relative paths. They’re great. But you need to start in the right path. You’ll note a reference to one of the AppVeyor environment variables, APPVEYOR_BUILD_FOLDER. This helped get me to the right path.

The second of my struggles: PowerShell.exe. I didn’t realize this, but if you place ExecutionPolicy and NoProfile parameters before -Version 2.0, PowerShell won’t be happy:

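In other words, -Version has to come first. A sketch of the difference, called from an existing PowerShell session:

# Works: -Version is the first parameter
PowerShell.exe -Version 2.0 -NoProfile -ExecutionPolicy Bypass -Command { $PSVersionTable.PSVersion }

# Complains: -Version appears after other parameters
PowerShell.exe -NoProfile -ExecutionPolicy Bypass -Version 2.0 -Command { $PSVersionTable.PSVersion }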

The third struggle: PowerShell 2 and PowerShell 3 are very different. At work, I always target PowerShell 2 and am all too familiar with the helpful language and functionality I must avoid. Throwing a PowerShell 2 iteration into the mix left me embarrassed at the number of PowerShell 3 assumptions I had made in the original PSDiskPart and Pester code!

  • There is no auto module loading in PowerShell 2. We abstracted out the call to pester, so we need to add a line to import that module.
  • Set-Loc<tab> hits Set-LocalGroup before Set-Location. Okay, that’s not PS3, that’s me being sloppy!
  • In the module manifest, PowerShell 3 lets us use RootModule. PowerShell 2 doesn’t recognize RootModule, so we switch to ModuleToProcess.
  • PowerShell 2 did not include the $PSScriptRoot automatic variable.
  • PowerShell 2 will loop over $null, where PowerShell 3 and later will not (illustrated after this list).
  • Get-Content -Raw was introduced in PowerShell 3.
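A quick sketch of a few of these differences:

# The foreach statement loops once over $null in PowerShell 2; PowerShell 3 and later skip it
foreach ($Item in $null) { "This runs under PowerShell 2, but not under 3 and later" }

# $PSScriptRoot isn't populated in PowerShell 2 scripts;
# a common fallback derives the script folder from $MyInvocation
$ScriptRoot = Split-Path -Parent $MyInvocation.MyCommand.Path

# Get-Content -Raw is PowerShell 3 and later; joining lines is a rough PowerShell 2 substitute
$Raw = (Get-Content -Path .\SomeFile.txt) -join "`r`n"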

Walking through the build

Here is the resulting passing build; we’ll step through the basic flow:

[Screenshot: the passing build]

First, we ignore any commits that match updated readme:

[Screenshot: skipping commits that match ‘updated readme’]

Next, we run the first pass of appveyor.pester.ps1, which runs tests in the native PowerShell on the AppVeyor VM:

[Screenshot: the native PowerShell test pass]

This runs the AppVeyor testing controller script, which calls the PSDiskPart.Tests.ps1 Pester tests.

[Screenshot: PSDiskPart.Tests.ps1 Pester output]

Success! Note that we included the PowerShell version in the ‘It’ name. Context would be a more appropriate place for this, but we wanted to differentiate the tests on the AppVeyor side:

[Screenshot: test names including the PowerShell version]

Next, we run this in PowerShell version 2 mode. Native AppVeyor support for this would be nice; as is, we don’t get colorized output without going through extra effort:

[Screenshots: the PowerShell 2 test pass]

Finally, we want to collect all the results, send our tests to AppVeyor, and give some summary feedback if anything failed:

[Screenshots: the finalize pass and AppVeyor test results]

The Outcome

Apart from working through my many mistakes, we now have our PowerShell abstracted out to a separate file we can view with syntax highlighting, we have a cleaner yaml file, and we have a simple way to test in both PowerShell version 2 mode, and with the native PowerShell.

We can focus on the domain-specific yaml in the yaml file, and PowerShell in the PowerShell file.

Badges

On a side note, I don’t think I mentioned badges in my previous post. In AppVeyor, browse to your project, view the settings, note the Badges section:

[Screenshot: the Badges section in AppVeyor project settings]

This is a great way to tell folks that your project is building successfully – or that it’s broken, as PSDiskPart was throughout the morning:

[Screenshot: the build status badge]

Next Steps

If you’re using GitHub for your PowerShell projects and haven’t checked out Pester and AppVeyor yet, definitely consider adding them. If you already have your project in GitHub and your Pester tests laid out, adding them to AppVeyor only takes a moment (barring tests that require access to your internal environment). It took less than 30 seconds to add the Infoblox module to AppVeyor once I had added a few example Pester tests.

On that note, consider test-driven development. Adding a comprehensive set of tests to a single function after it has been written is difficult. Covering an entire module would be painful. If you follow TDD and write your tests before you write the functionality that they test, you will be ahead of the game. As you can tell by my contributions, I am certainly not there yet, but I like the idea.

There’s a lot more to explore in AppVeyor. As far as I can tell, it looks like it can be used to help enable both continuous integration and continuous delivery. Poke around, experiment, and if you find anything helpful, share it with the community!

Querying the Infoblox Web API with PowerShell

My apologies ahead of time. This post is half rant, half discussion on the basics of using the Infoblox Web API. A rudimentary PowerShell module abstracting this out is available here.

This is a follow-up to my thoughts on REST APIs here. Today we’re going to focus more on working with the Infoblox Web API, while highlighting some of the reasons vendors should really step in and provide PowerShell modules that sit on top of their APIs.

Getting Started: Reading

First things first; get ready to read. For every API you work with, chances are you’re going to spend more time reading than writing code. Sign into Infoblox’s support site and download the Web API documentation. Vendors: How much time do you think your customers will spend writing functions or modules that work across API versions? Or that cover more functions than are absolutely necessary?

Now skim through that documentation. As you spend more time working with REST APIs, you’ll pick out the important bits. Sadly, there is little consistency between the various REST implementations; chances are you can borrow snippets of PowerShell code between solutions, or that you might find examples online, but the conventions and syntax for accessing and interpreting output from each REST API will vary wildly.

Thankfully, the basics are summed up in the first twelve pages. The remaining 800+ are relegated to describing some examples, and the various objects we can work with, which you can selectively review later.

Key ingredients

We need a few ingredients to start:

  • SSL. Ideally you have this set up. For a quick, less secure start, consider this solution.
  • The Web API version, which Infoblox uses in the base URL
  • A base URL. In general it looks like this: https://FQDN/wapi/v1.6/
  • A credential with access to the Infoblox
  • The ability and motivation to read lengthy, verbose documentation

Authentication

We need to figure out how to authenticate. Most APIs provide a method to create a token, session, or some other persistent state. Others force you to authenticate with each request.

Some APIs require you to obfuscate the password in some way and construct a header per their specifications. I’ve even seen specs requiring you to generate a header, generate a hash for that header, and use that hash in the real header.

Reminder: use SSL, obfuscation is not secure. Be wary of the misuse of the word ‘encryption’. Base64 encoding is neither encryption nor secure. Thankfully, with the Infoblox we can pass in a standard PSCredential object and leverage HTTPS.

Let’s try to hit the Uri without specifying a resource:

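Something like this (a sketch; the FQDN here is made up):

$Credential = Get-Credential
$BaseUri    = "https://grid.contoso.com/wapi/v1.6"

# No resource specified - the Infoblox answers with a 400 error
Invoke-RestMethod -Uri $BaseUri -Credential $Credential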

No luck. The documentation explains that a 400 error is essentially your fault. Let’s try with an object. Something basic, like the grid itself:

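Again as a sketch, appending the object type to the base URL:

# Querying the grid object returns data - a _ref for the grid
Invoke-RestMethod -Uri "$BaseUri/grid" -Credential $Credential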

Voila! We’re all done, right? What if we have to make a large number of calls. Would a session be more efficient?

Let’s open up the API documentation. Ctrl+F ‘Session’. Nothing relevant. Ctrl+F ‘Token’. Nothing relevant. Ctrl+F ‘Cookie’ – got it! There’s a brief mention in the authentication section. I’m hoping we can use the SessionVariable parameter from our Invoke-RestMethod or Invoke-WebRequest call.

If you’re lucky, you can google around and find a working example. In this case, I was able to look at Don Smith’s REST-PowerShell wrapper. It’s not very PowerShell-y, but it has some examples which come in handy. Borrowing from this, we wrote an ugly New-IBSession.
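The gist of New-IBSession, minus the error handling (a sketch, not the function’s exact contents):

function New-IBSession {
    param(
        [string]$Uri,               # e.g. https://grid.contoso.com/wapi/v1.6
        [System.Management.Automation.PSCredential]$Credential
    )

    # Authenticate once against a cheap object, keeping the resulting
    # auth cookie in a WebRequestSession for subsequent calls
    $null = Invoke-WebRequest -Uri "$Uri/grid" -Credential $Credential -SessionVariable IBSession
    $IBSession
}

$Session = New-IBSession -Uri $BaseUri -Credential $Credential

# Later calls ride the cookie instead of re-authenticating
Invoke-RestMethod -Uri "$BaseUri/grid" -WebSession $Session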

Relatively painless so far; we already know how to authenticate and pull data! But we’re looking at a single API among many, each of which has its own peculiarities and implementation details. Let’s see if there’s more to pulling data than meets the eye.

GETting data

Time to start looking at the data which we actually care about. Each web API will expose different objects to you. In this case, we have 720 pages describing the objects and their various properties. Somewhat painful, but verbose documentation beats no documentation. Wouldn’t it be nice if we had the discoverability and reflection you get with PowerShell?

Let’s pretend we want a DHCP lease address and binding state. We look through the objects, and we see “lease: DHCP Lease object”. Submit a GET request for this:

[Screenshot: 400 error from the lease GET]

I have a bad feeling about this. I just want a lease, what’s going on? Let’s try another obvious object, a network:

[Screenshot: network object results]

Bizarre – I got data back! I dive back into the documentation. The 400 error is generic, but let’s search for it anyways. Ah ha! In the GET method section, we see specific error handling notes. A 400 error means there were too many results.

To whittle down the results, we need to dive into some domain specific CGI that will provide no value to you outside of these Infoblox API calls. With PowerShell, if I spend some time learning the ins-and-outs of the language, it helps me whether I’m working with AD, VMware, or SQL. Vendors: if your competition offers a decent PowerShell module, it might swing my vote.

Time for more reading. Long story short, you need to implement paging. In this case, I say _paging=1, and I specify an appropriate _max_results; I chose 1000. The first page of results includes a next_page_id. I use this to quantify my next call to the Infoblox, rinse and repeat until the Infoblox doesn’t provide me a next_page_id. My implementation is crude, but you can see this in the logic of Get-IBLease.
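Roughly, the paging loop looks like this (a sketch using the same base URL and session as above; _return_as_object=1 wraps the response so next_page_id is exposed):

$Results = @()
$Uri = "$BaseUri/lease?_paging=1&_max_results=1000&_return_as_object=1"

do {
    $Page = Invoke-RestMethod -Uri $Uri -WebSession $Session
    $Results += $Page.result

    # Keep requesting pages until the Infoblox stops handing back a next_page_id
    if ($Page.next_page_id) {
        $Uri = "$BaseUri/lease?_page_id=$($Page.next_page_id)"
    }
} while ($Page.next_page_id)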

Reading the documentation, we see we can call _max_results=[positive number] and it will truncate results, rather than error out:

[Screenshot: truncated lease results]

Woohoo! I got a _ref, an address, and a network_view. That’s not what I’m after. At the very least, I want the binding state for that lease, and I want a way to filter the results.

Filtering

Time for more reading, and more CGI on the end of that Uri. It’s up to you again to invest time learning Infoblox’ specific method of picking out properties to return, and filtering results in an API call.

For each object, the documentation will describe a property, including whether and how you can filter for it:

[Screenshot: property documentation for a lease field]

Hopefully the property you want to filter is searchable! We wanted to look at binding_state, perhaps to see if we have free leases. No luck:

[Screenshot: binding_state is not searchable]

Let’s find another example for filtering. There are plenty more; in this case, I’m searching for leases that were discovered in the past two days (Epoch time is used):

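As a sketch, converting the date to epoch seconds and filtering on it:

# Epoch seconds for two days ago
$Epoch = [int]((Get-Date).AddDays(-2).ToUniversalTime() - [datetime]'1970-01-01').TotalSeconds

# Leases discovered since then; the comparison modifier rides along in the query string
Invoke-RestMethod -Uri "$BaseUri/lease?discovered_data.last_discovered>$Epoch" -WebSession $Session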

Again, crudely implemented, but you can see the construction of these CGI queries and the resulting Uri in the Get-IB* commands, and using verbose output, respectively.

Picking and choosing data

You guessed it, time for more reading! At this point, it should be clear that if you want to work with a vendor’s API, you’re probably going to spend a great deal of time reading. Vendors: at this point, your customers may be tired. They struggled through figuring out your authentication mechanism, your object model, your unique query syntax, and your unique interpretations of error codes. They might not spend much time on important details like error handling, testing, or covering functionality that they don’t have immediate plans for. What if this causes an outage and leaves your brand with a black eye? What if your customers realize they are spending valuable time designing and implementing functions that you could be creating for them?

Back to the task at hand; we want to pull different properties. Reading the documentation, we see that you simply specify _return_fields=comma,separated,list:

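A sketch:

# Ask for specific lease fields instead of the default trio
Invoke-RestMethod -Uri "$BaseUri/lease?_return_fields=address,binding_state,client_hostname" -WebSession $Session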

Here’s an example call to Get-IBLease with verbose output. It specifies a few default properties I find helpful, and allows filtering on properties like address (~= operator) and discovered_data.last_discovered. Yes, this might be too verbose:

[Screenshot: Get-IBLease verbose output]

There are a few other commands in the module, including a generic Get-IBObject. Perhaps you want to search for IPAM entries (IPv4Address) between two addresses:

[Screenshot: Get-IBObject searching IPv4Address between two addresses]

POSTs, PUTs, and DELETEs

Just kidding. Hopefully you’ve learned enough to go back and learn how to work with the Infoblox beyond GET requests.

How do we fix this?

I want to emphasize that this post is not targeting Infoblox specifically: as far as REST APIs go, theirs has been solid. If you’re working with a modern product, chances are it has a web API of some sort. Some vendors do provide a PowerShell module to abstract out the painful process we went through above, but many do not.

I submitted a few potential suggestions in my closing section of the previous REST API post. What do you think? Is this even an issue? Any suggestions on fixing it? What can we do to encourage vendors to provide more than a few simplified examples of hitting their API through PowerShell?

On a side note, if your answer involves a specific vendor’s specific version of an orchestration product, and the specific third party extensions for this, please do not reply : )

Thanks to Don Smith and Anders Wahlqvist for their helpful examples.

Cheers!

Fun with GitHub, Pester, and AppVeyor

This is a quick hit to cover a practical example of some very important processes; version control, unit testing, and continuous integration. We’re going to make a number of assumptions, and quickly run through GitHub (version control), Pester (unit testing for PowerShell), and AppVeyor (continuous integration).

Yesterday evening I added some basic Pester tests to a PowerShell module for DiskPart. I thought it might be fun to test out AppVeyor, but assumed running DiskPart or some other process requiring administrative privileges might not work. I assumed wrong; the build passed!

We’ll make the assumption that if you’re using PowerShell and Pester, you’re using Windows. We’ll also assume you have a PowerShell script or module already written. Finally, we’ll assume that for all the GUI fun below, you might prefer to dive into the CLI instead.

GitHub

I’m not going to ramble on about Git or GitHub. There are heavy-duty books, hundreds of blog posts, and various YouTube examples to get you up and running. Sign up. Download the Windows client. Read one of the thousands of articles on using GitHub (Example, others may be better).

You’re signed up. Let’s create a repository and upload our code.  Here’s yet another quick-start, using the website and Windows client, documented with GifCam, a handy tool I found through Doug Finke.

Create a repository:

[Animated gif: creating the repository]

Clone the repository on your computer, copy the files you want in there, commit:

[Animated gif: cloning the repository and committing]

There’s way more to see and do, so do spend a little time experimenting and reading up on it if you haven’t already. There are too many benefits of version control to list, and GitHub adds a simple interface and a nice layer enabling collaboration and sharing. Or go with Mercurial and BitBucket, among the many other combinations.

Pester

Pester is a unit testing framework for PowerShell. Companies like Microsoft and VMware are starting to use it; you should probably check it out.


Jakub Jareš has a great set of articles on PowerShellMagazine.com (One, Two, Three), and you should find plenty of other examples out there.

I took an existing Invoke-Parallel test file, used the relative path to Wait-Path.ps1, and added a few tests. Follow Jakub’s first article or two and you will realize how simple this is. Start using it, and you will realize how valuable it is.

It’s quite comforting knowing that I have a set of tests and don’t need to worry about whether I remembered to test each scenario manually. Even more comforting if you have other people collaborating on a project and can identify when one of them (certainly not you!) does something wrong.

AppVeyor

Continuous integration. You’ve probably heard it before, perhaps around the Jenkins application. Personally, I never got around to checking it out. A short while back, Sergei from Microsoft suggested using AppVeyor for Invoke-Parallel (thank you for the motivation!). He added a mysterious appveyor.yml file and Pester tests, I signed up for AppVeyor, added the Invoke-Parallel project, and all of a sudden Pester tests appeared on the web:

[Screenshot: Pester test results on AppVeyor]

I assumed this required much wizardry. It probably did, but now I can borrow what Sergei did! Sign up with your GitHub creds, login, and here’s a quick rundown:

[Animated gif: adding the project in AppVeyor]

Simple so far! AppVeyor is waiting for a commit. In the background, I added a readme.md and a line in Wait-Path. Let’s commit:

[Animated gif: committing the change]

Behind the scenes, gears are turning at AppVeyor. After a short wait in the queue, your tests will start to run:

[Screenshots: the build queued, running, and passing]

Build Passing!

Take a peek at the appveyor.yml. There’s a whole lot more you can do with AppVeyor, but this is a nice framework for simple PowerShell Pester tests. In the yaml (a sketch of one follows the list), you can see we use domain specific language to…

  • Install Pester
  • Invoke Pester, pass the results through, and generate output XML
  • Upload the Pester XML output (Why? The Tests tab)
  • If Pester output FailedCount is greater than 0, fail the build
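A minimal appveyor.yml along those lines (a sketch, not the exact file from the repository; Pester’s output parameters have shifted names across versions):

install:
  - cinst pester

build: false

test_script:
  # Run Pester, keep the results object, emit NUnit XML, and upload it to AppVeyor
  - ps: |
      $Results = Invoke-Pester -OutputFormat NUnitXml -OutputFile .\TestsResults.xml -PassThru
      (New-Object System.Net.WebClient).UploadFile("https://ci.appveyor.com/api/testresults/nunit/$($env:APPVEYOR_JOB_ID)", (Resolve-Path .\TestsResults.xml))
      if ($Results.FailedCount -gt 0) { throw "$($Results.FailedCount) tests failed." }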

Better together

Version control, unit testing, and continuous integration – the products we looked at for each of these are fantastic in their own right. If you experiment and try them out together, I think you will find they are even better together. And this particular combination of GitHub + Pester + AppVeyor (for PowerShell) is particularly smooth.

In fact, writing this post, listening to the tail end of the DSC MVA, enjoying some pasta and wine, and recording the various gifs took all of about two hours. Don’t be intimidated; just dive in!

Happy scripting!

Remotely Brick a System

I have another fun project. After some recent system performance analyses, one of my recommendations was to move appropriate systems to VMware’s Paravirtual SCSI controller. We don’t go the vCAC/vRA route yet, so I’m now tasked with integrating this into our fun little ASP.NET/C#, PowerShell, MDT, and SQL deployment Frankenstein. It may be ugly, but it was a fantastic learning experience, and works quite well.

When designing tooling, I usually step through what I want to do manually, and break each step up as needed, building re-usable tools where appropriate. Here’s the recipe, ignoring a paranoid level of error handling:

  • Check to see which of the guest’s disks are online (Why?  We’ll see…)
  • Power off the guest
  • Change all non-system drives to a new Paravirtual SCSI controller
  • Power on the guest
  • All your disks are gone!
  • Set your disks that are offline back to online. Each test case of mine resulted in all migrated disks coming up as offline.
  • See all those ‘Power’ tasks? All this will need to be performed from a remote system.

Most of this is vanilla PowerCLI. A few steps require something like DiskPart though. Yes, Windows 6.2 and later include Storage Cmdlets. Unfortunately, Microsoft cut off legacy systems, along with the many organizations out there who still rely on them, even if they can roll out 2012 R2 boxes for new projects. DiskPart it is!

Tangent: Whoever decided to include OS-specific cmdlets in certain DSC Resources made me sad. The whole OS-specific-cmdlet idea has led to a good deal of confusion, and relying on it in a technology that’s slated for inclusion in the Common Engineering Criteria (presumably as the standard for configuration management) might not help with adoption.

DiskPart on a remote computer

The blog title should make sense at this point; thankfully, no systems were harmed in the writing of this post. So, we have a set of DiskPart commands that need to run remotely. How do we do it?

A while back I wrote New-RemoteProcess. It was the second PowerShell function I published, definitely not my proudest work, but it does the trick; I have my remoting mechanism. Now I need to parse out online disks to know which offline disks should really be online. I sifted through the numerous PowerShell+DiskPart posts out there and didn’t find much on running it remotely. I did find Alan Conkle’s code for parsing disk, volume, and vdisk output. We tweaked New-RemoteProcess to give us Invoke-DiskPartScript, which we can use inside a few Get-DiskPart* functions that mash in Alan’s code.

The result? You can now remotely brick a system. Or check for offline disks and set them back to online after changing to a Paravirtual controller, your choice!

PSDiskPart

I packaged my functions up into PSDiskPart, and committed them to GitHub. If you need to run Diskpart against a remote system, and remoting + OS-specific-Storage-Cmdlets won’t work for you, give this a shot! If you have any suggestions or tips, pull requests would be appreciated.

Here are a few examples from my environment:

Get disk info for a few computers, pick out a few properties:

[Screenshot: Get-DiskPartDisk]

Get volume information from the current computer:

[Screenshot: Get-DiskPartVolume]

Set a disk to offline… Don’t do this to a production system : )

[Screenshot: Invoke-DiskPartScript taking a disk offline]

Bring a disk back online, and remove the readonly flag if it is set:

[Screenshot: Invoke-DiskPartScript bringing a disk online]
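In code, those examples look roughly like this (a sketch; the parameter names are illustrative, so check Get-Help on the module’s functions before relying on them):

# Disk info for a few computers, picking out a few properties
Get-DiskPartDisk -ComputerName Server1, Server2 |
    Select-Object -Property ComputerName, DiskNumber, Status

# Volume info from the current computer
Get-DiskPartVolume

# Raw DiskPart text against a remote system: take disk 1 offline... not on production!
Invoke-DiskPartScript -ComputerName Server1 -DiskPartText "select disk 1`noffline disk"

# Bring it back online and clear the readonly flag if set
Invoke-DiskPartScript -ComputerName Server1 -DiskPartText "select disk 1`nonline disk`nattributes disk clear readonly"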

Invoke-DiskPartScript and Sleep

I did run into a small issue with the borrowed logic from New-RemoteProcess. Every so often, I simply wouldn’t get results back. Adding Start-Sleep resolved this, but seemed inefficient.

I wanted something where I could say ‘wait until you can see this file.’ This is simple enough to write, but one of the things I love about PowerShell is that it is task-based. I want to wait for a path to exist, export to a csv, create a VM, or perform some other specific task, not worry about the logic and error handling behind each of these tasks.

I didn’t see anything out there, so I drafted up Wait-Path, which returns more quickly than hard coding a Start-Sleep call.
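Usage runs along these lines (a sketch; see Get-Help Wait-Path for the actual parameter names):

# Block until the remote process writes its output file, up to a timeout in seconds
Wait-Path -Path "\\Server1\C$\DiskPartOutput.txt" -Timeout 30

# ...then read the results without guessing at a Start-Sleep duration
Get-Content -Path "\\Server1\C$\DiskPartOutput.txt"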

Up next

My wife is out of town this week. This means I have more time to play and to wrap up a few posts I have planned. No promises, but the following are on my plate:

  • REST / Infoblox –  A follow-up walking through a few Infoblox functions, illustrating why it would be quite nice if vendors provided their own PowerShell modules.
  • Invoke-SqlCmd2 – Highlight some of this function’s features, with some practical examples. Pre-staging a computer and applications in MDT? Getting migrated objects from ADMT? Diving into OpsMgr? Too many choices…
  • Building an inventory database – Not everyone has a mature CMDB. Create a database that can track details on servers, SQL instances, SQL databases, scheduled tasks, and more.
  • Filling an inventory database – Now that we have an inventory database, collect the data! This boils down to the products and attributes you want to track, but we can start with some basics.

Cheers!

How Do I Learn PowerShell?

I often see questions on how to learn PowerShell. Rather than address these each time they come up, I figured it was time for a post. PowerShell is a critical skill for anyone working in IT on the Microsoft side of the fence. Anyone from a service desk associate to printer admin to DBA to developer would benefit from learning it!

There’s no single answer to the question; reflecting back on my path, the following seems like a decent recipe for learning PowerShell. Long story short?  Practice, practice, practice, build some formal knowledge, and participate in the community.

First Things First

There are books, videos, cheat sheets, and blog posts galore.  Before any of this, get a system up and running with PowerShell. I assume you can search for the details on how to do the following:

  • Download the latest Windows Management Framework and requisite .NET Framework – .NET 4.5 and WMF 4 at the moment. Even if you target a lower PowerShell version, the newer ISE will make learning and everyday use more pleasant
  • Set your execution policy as appropriate, keeping in mind it’s just a seatbelt (I use Bypass on my computers). You do read code before you run it, right?
  • Update your help. If using PowerShell 3 or later, run PowerShell as an administrator and run Update-Help -Force
  • This is standard fare for any technology, but remind yourself of the importance of testing. Test against a test environment or target where possible. Consider all the corner cases and scenarios that your code should handle. Test one, few, many, in batches, and finally, all. Consider using read-only or verbose output before sending those results to commands that change things. Read up on -WhatIf, -Confirm, and where these don’t work. Don’t be this guy.


Okay, enough disclaimers. Start exploring and experimenting. A solid foundation built on books and other references is incredibly helpful, but experience is even more important. If you tell yourself “I’ll learn PowerShell when I have more time,” you’ve already lost. You learn through practice and experience; you’re not going to magically learn PowerShell in some boot camp, training program, or book if you don’t regularly use it in the real world.

I Only Learn By Doing!

Building a foundation for PowerShell knowledge is important. You should spend time working with the language, but this isn’t a language where you can ignore formal knowledge; your assumptions will bite you, cause frustration, and lead to poor code. This is not your standard shell or scripting language for a few key reasons:

  • PowerShell is both a shell and a scripting language; the shell isn’t a slightly different environment where you might run into odd quirks that don’t expose themselves in a script, and the scripting side isn’t a more powerful language than the shell. They are one and the same. This leads to interesting design decisions. Shell users know that ‘>’ is a redirection operator. Developers and scripters know that ‘>’ is a comparison operator. This is one of several conflicts between shell and scripting expectations that makes PowerShell a bit unique, and can inspire wharrgarbl even from talented developers.
  • PowerShell targets IT professionals and power users, rather than developers. A developer might expect an escape character to be a ‘\’. If Microsoft chose this, we would need to escape every single slash in a path: “C:\\This\\Is\\A\\Pain”. Few IT professionals in the Microsoft ecosystem would use a language like that. Several design decisions like this may confuse seasoned developers.

If you come in as an experienced scripter or developer, you might get serious heartburn if you make assumptions and don’t look at PowerShell for what it is. Even developers like Helge Klein (author of delprof2 and other tools) make this mistake. Several of my talented co-workers who are more familiar with C# and .NET, or Python/Perl and various shell languages have made this mistake as well. Like most languages, if you’re going to use it, you should spend some time with formal reading material, and should avoid assumptions.

Formal resources

Hopefully you’ve decided to look at some formal learning materials!  I keep a list of the resources I find helpful for learning PowerShell here.

Prefer books?

  • If you’re a beginner without much scripting / code experience, check out Learn Windows PowerShell 3 in a Month of Lunches.
  • If you have experience with scripting and code, Windows PowerShell in Action is the way to go. This is as deep as it gets, short of in-depth, lengthy blog posts, and you get to read about the reasons behind the language design. Knowing the reason for ‘greater than’ being -gt rather than > should quell your heartburn.
  • Strapped for cash? Mastering PowerShell is a great free book
  • Want to know what started this all? Read Jeffrey Snover’s Monad Manifesto. This isn’t on PowerShell per se, but it gives insight into the vision behind the language.

Prefer videos?

Prefer training?

There are plenty of training opportunities. Keep in mind that training might cater to the lowest common denominator in the room, and that a few hours, even at breakneck speed, won’t be enough. Of course, this could be my own bias showing.

Join the Community!

The community is a great place learn, and to get ideas for what you can do with PowerShell.

  • Find popular repositories and publications on sites like GitHub and Technet Gallery. Dive into the underlying code and see how and why it works. Experiment with it. Keep in mind that popular code does not mean good code. Contribute your own code when and if you are comfortable with it
  • Drop in and get to know the communities. Stop by #PowerShell on IRC (Freenode). Check out Twitter, if you don’t use it, you might be surprised at how valuable it can be. Join the PowerShell.org community. Keep an eye out for other communities that might be more specific to your interests or needs.
  • Look for blogs – some might cover general PowerShell, others might cover your IT niche. I keep a few here, but there are many others.
  • Participate in a local PowerShell user group. There’s no single way to find these, look at PowerShellGroup.org, PowerShell.org, and ask around.

As you spend time working with PowerShell and following or participating in the community, you will find some community members that you can generally rely on. PowerShell MVPs, the PowerShell team, and other respected community members put out some fantastic material.

Unfortunately, because everyone loves to contribute, you will find plenty of outdated, inefficient, incorrect, or downright dangerous code out there. This is another reason I tend to steer folks towards curated, formal resources at the onset, so that they can learn enough to recognize bad code at a glance.

Spend Some Time With PowerShell

At this point, you should be good to go! A few suggestions that I’ve found helpful along my way:

  • You can use PowerShell to learn PowerShell – Get-Command, Get-Help, and Get-Member are hugely beneficial; see the sketch after this list. The Get-Help about_* topics might seem dry, but their content is as good as any book
  • Building functions can be helpful – focus on modular code that you can use across scenarios. Don’t limit yourself by adding GUIs, requiring manual input from Read-Host, or other restrictive designs.
    • As an example, I borrowed code from Boe Prox to write Invoke-Parallel a while back. I use it to speed up many solutions. I borrowed jrich523’s Test-Server, glued it together with Invoke-Parallel, and now I have Invoke-Ping. This lets me parallelize tests against thousands of nodes for services like remote registry, remote RPC, SMB, and RDP. Many of our production scripts start out by querying for a list of nodes and filtering this list with Invoke-Ping. The key is that I have re-usable tools and components that can be integrated across a variety of solutions, not just one-off scripts that are only helpful in a single use case.
  • Spend a few minutes every day!
    • Don’t hold up urgent production troubleshooting if you aren’t ready, but consider revisiting the scenario afterwards. Could you have used PowerShell to detect the issue before it happened? Could you use PowerShell to implement the fix? If you had an issue distributed across many systems, would it have saved time to use PowerShell to troubleshoot?
    • Have a project, task, or tedious manual step to accomplish? Would it make sense to use PowerShell? Spend a little time and see if you can script this out, or write a function to do the work.
    • Start with read only commands. Yes, automation and configuration are important, but you can learn a lot about PowerShell and your environment just by running ‘Get’ commands. This is a great way to learn more about technology in general as well! If I come across a piece of technology that I want to learn, perhaps Infoblox Grid Manager, Citrix NetScaler, SQL Server, or Microsoft Hyper-V, I ask for test access or build my own and try to query it with PowerShell. This helps me learn the basics of many technologies, gives me experience with PowerShell, and gets me involved with a number of fun projects. A month down the line when they want to automate or build tooling for something, we already have some basic tools and experience working with it!
    • Don’t be discouraged. If you had no scripting experience, you’re going to have some growing pains. Work through them, it will be worth it in the end. You will find that most of what you learn can be applied to other areas of PowerShell, given shared conventions and syntax. Don’t be this guy: ‘We have to make changes to 1,000 print queues, we need more FTEs to click through all the menus!’ – No, if you had someone with basic understanding of PowerShell, you could have it done more consistently, and with a tool you could re-use, likely in less time. Who wants to go clicking through 1,000 GUIs anyways? Sounds horrid.
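A minimal taste of using PowerShell to explore PowerShell:

# What commands deal with services?
Get-Command -Noun Service

# How do I use one of them, with examples?
Get-Help Get-Service -Examples

# What properties and methods do the returned objects carry?
Get-Service | Get-Member

# The about_* topics cover language features in depth
Get-Help about_Splatting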

Good luck, whichever route you take!

REST, PowerShell, and Infoblox

Edit: Wrote a follow-up illustrating a few of these issues through a rudimentary Infoblox PowerShell module.

A short while back, someone asked if I would be up for writing about calling the Infoblox web API through PowerShell. I don’t have extensive experience with this topic, but this is a great opportunity to discuss REST APIs, and the interfaces vendors expose to their customers.

There won’t be any Infoblox PowerShell in this post, that’s in the pipeline.

Interfaces – Software Defined Everything!

With the movement towards software defined everything, more and more products are exposing APIs and interfaces we can control, configure, and orchestrate with.

In general, this is a good thing. Having a nice API is an important first step, but more is needed. Folks like Helge Klein and others with development experience are well served by an API. But what about those of us working on the systems side? Even if we have a loose grasp on the topic, we lose much of the consistency and intuitiveness Jeffrey Snover aimed for with PowerShell.

Why PowerShell?

I grew up playing with Lego, figuring out how to piece together contraptions that don’t come with instructions. I do something similar at work today – I take the commands and consistent syntax and conventions of PowerShell, and put them together to build solutions that vendors don’t provide, or that we can’t afford. Unfortunately, not all vendors provide PowerShell support, so we need to build it ourselves.

This brings us to APIs, including C, Java, .NET, and of course, RESTful APIs. Most of these are beyond the skillset of a good number of systems administrators or engineers, including myself. Some of these APIs are more approachable than others, but vendors do us a disservice by limiting their interfaces to these.

Why APIs Aren’t Enough

As an engineer, I often need to integrate a variety of technologies. Many of these have fantastic PowerShell modules, from VMware’s PowerCLI, to Microsoft’s ActiveDirectory module, to Cisco’s UCS PowerTool. I can rely on the consistency and conventions of PowerShell, and spend my time considering the logic and design of a solution, rather than combing through documentation and figuring out the nuances of invoking a specific, obscure, poorly documented method of an API.

What does it mean if your product doesn’t have a PowerShell module, but may need integration with the wider Microsoft ecosystem?

  • Admins and engineers across the industry waste time and duplicate effort attempting to use your API or binaries through PowerShell
  • Admins and engineers who may not be subject matter experts in your technology end up trying to piece things together, leading to potentially buggy or feature-poor implementations. You as the vendor have the knowledge here, why not help out and ensure a smooth, consistent experience with your product?
  • Admins and engineers who may not have a strong background in PowerShell end up trying to piece a module or functions together, leading to potentially buggy and inconsistent implementations
  • Even if you have an SME for the technology in question who is fluent in PowerShell, they now get to spend copious amounts of time reading through your API documentation and figuring out how to ‘PowerShell-ize’ it

API Overload

Infoblox is one of many examples. EMC Isilon and XtremIO. Citrix NetScaler and Provisioning Services. Commvault Simpana. Thycotic Secret Server. BMC ARS. A wide range of products out there provide no PowerShell module, or something nearly unusable.

Microsoft isn’t immune. SQL Server management often relies on the SMO. A variety of Exchange tasks force you to use the Managed API or EWS. Good luck doing anything useful with the Group Policy module, or even finding an interface to AGPM – DSC is great, but you’re kidding yourself if you think Group Policy is going away any time soon.

Some of these APIs are better than others. A REST API generally means I need to dig through documentation and spend a good deal of time learning the ins-and-outs of your API. Despite being a bit dated, at least with a web service I have tools to discover the methods, constructors, and other details that help when wrapping an API in PowerShell. With REST? Who knows what you’re going to get; good luck reading and experimenting!

Closing

How do we solve this problem? A few suggestions:

Vendors – if your product is commonly used in the Microsoft ecosystem, provide a PowerShell module. Don’t just wrap some binaries or APIs in a format that makes sense to you. Follow the standard conventions and best practices for PowerShell that have made it the successful tool that it is today. The Monad Manifesto should be required reading material for anyone responsible for implementing your PowerShell support.

Microsoft – lead by example. The inclusion of PowerShell in the (server) Common Engineering Criteria was a great start. Take steps to encourage your product groups to provide better and more wide-spread PowerShell solutions. Perhaps consider taking similar steps to encourage and assist other vendors.

Engineers and admins – if you have input on the decision making process, strongly consider whether PowerShell should be a factor in this. It can be incredibly painful ending up with a critical technology that you can’t control programmatically, or with an interface you have no familiarity with or interest in learning. Java or C? Not for me. If you do end up with these technologies, pressure the vendor to include an accessible PowerShell interface. If a vendor doesn’t hear you tell them you want a PowerShell interface, how will they get the prioritization to build one?

Lastly, while this is a big pain point for me, it’s still fantastic to have a glue language like PowerShell that can piece together well written PowerShell modules, .NET libraries, REST APIs, and everything else.

Disclaimer: This assumes you are locked into the Microsoft ecosystem and standardize on PowerShell.

Exploring PowerShell: Common Parameters, Variables, and More!

When writing PowerShell functions and scripts, you might come across a need to identify common parameters, automatic variables, or other details that can change depending on the environment.

It turns out that PowerShell offers a number of tools to explore, from the standard Get-Command, Get-Help, and Get-Member Cmdlets, to the .NET Framework, with tools like reflection and abstract syntax trees.

This past week I received an interesting question on Invoke-Parallel: can we import variables and modules from the end user’s session to make the command more approachable? This was a good question – Invoke-Parallel is widely used by my co-workers, but there is often confusion over the idea of each runspace being an independent environment.

Before we dive into this, let’s take a step back and look at some work from Bruce Payette, “a co-designer of the PowerShell language and the principal author of the language implementation.” Bruce wrote the definitive PowerShell in Action (PiA), my go-to book recommendation for anyone with scripting or development experience.

PowerShell In Action – Constrained endpoints

Constrained, delegated endpoints are getting more attention nowadays with JitJea. It turns out Bruce talked about constrained endpoints in PiA long ago.

Interactive and implicit remoting depend on a few commands. If you want to create a constrained endpoint that works with interactive or implicit remoting, you should define proxy functions for these. Rather than hard code the command names, Bruce tells us how to get them at run time, by creating a certain restricted initial session state and listing the public commands within it:

$iss = [Management.Automation.Runspaces.InitialSessionState]::CreateRestricted("RemoteServer")
$iss.Commands | Where-Object { $_.Visibility -eq "Public" } | Select-Object Name

[Screenshot: the public commands in the restricted RemoteServer session]

How does this relate to automatic variables and modules? In both cases we can use PowerShell and the .NET Framework to find the answer at run time, rather than hard coding the answer.

Automatic variables and modules

If we want to pass variables from the user’s session into the Invoke-Parallel runspaces, we probably want to ignore the wide array of automatic variables.

We could certainly hard code these, but what fun is that? The approach we ended up taking was to compare a clean environment with the current environment, by creating a clean PowerShell runspace, listing out the modules, pssnapins, and variables within it, and comparing these with the user’s current session.

$StandardUserEnv = [powershell]::Create().AddScript({

    #Get modules and snapins in this clean runspace
    $Modules = Get-Module | Select-Object -ExpandProperty Name
    $Snapins = Get-PSSnapin | Select-Object -ExpandProperty Name

    #Get variables in this clean runspace
    #Called last to get vars like $? into session
    $Variables = Get-Variable | Select-Object -ExpandProperty Name

    #Return a hashtable where we can access each.
    @{
        Variables = $Variables
        Modules   = $Modules
        Snapins   = $Snapins
    }
}).Invoke()[0]

[Screenshot: variables, modules, and snapins from the clean runspace]

This isn’t perfect; certain automatic variables won’t be created out of the gate. For example, $Matches won’t be listed. But this gives us a good start, and filters out the majority of variables and modules that we can ignore. The end result? A more usable Invoke-Parallel:

$Path = "C:\Temp"            
            
#Query 3 computers for something, save the results under $path 
echo Server1 Server2 Server3 | Invoke-Parallel -ImportVariables { 
                
    #Do something and record some output!            
    $Output = "Some Value for $_"            
            
    #Save it to the path we specified outsite the runspace            
    $FilePath = Join-Path $Path "$_-$(Get-Date -UFormat '%Y%m%d').txt"                
    Set-Content $FilePath -Value $Output -force            
            
}            
            
#List the output:            
dir $Path

[Screenshot: Invoke-Parallel -ImportVariables output]

In the past, $path would not be passed in, resulting in potential confusion and broken code.

Common parameters

What if you want a list of common parameters? Perhaps you are splatting PSBoundParameters and want to exclude common parameters, or perhaps you just want a list of common parameters to jog your memory.

You could hard code these, but this is no fun, and might get complicated if you want to cover version-specific parameters like PipelineVariable. Let’s use one of the core commands for exploring PowerShell to get these: Get-Command.

#Define an empty function with cmdletbinding
Function _temp { [cmdletbinding()] param() }

#Get parameters; only common params are returned
(Get-Command _temp | Select-Object -ExpandProperty Parameters).Keys

[Screenshot: the common parameter names]

That’s it! We simply build a temporary function, and ask PowerShell what that function’s parameters are.
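As a quick application, here’s a sketch of filtering common parameters out of $PSBoundParameters before splatting the remainder to another command (Some-OtherCommand is a hypothetical placeholder):

Function Invoke-Something {
    [cmdletbinding()]
    param($Name, $Path)

    #Harvest the common parameter names from a throwaway function
    Function _temp { [cmdletbinding()] param() }
    $CommonParameters = (Get-Command _temp | Select-Object -ExpandProperty Parameters).Keys

    #Keep only the bound parameters that aren't common, then splat onward
    $Params = @{}
    foreach ($Key in $PSBoundParameters.Keys) {
        if ($CommonParameters -notcontains $Key) {
            $Params[$Key] = $PSBoundParameters[$Key]
        }
    }

    #Some-OtherCommand is hypothetical - substitute whatever you're wrapping
    Some-OtherCommand @Params
}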

Use PowerShell to explore PowerShell

The key takeaway here is that you can use PowerShell or the .NET Framework itself to explore PowerShell. Use this to your advantage when writing functions where details on the runtime environment can improve the end user’s experience.

Cheers!