SQLite and PowerShell

March 2015 EDIT: This post has been updated and moved to my new GitHub pages site.

I’ve been planning on sharing some fun projects that involve SQL. Every time I start writing about these, I end up spending a good deal of time writing about MSSQL, and thinking of all the potential caveats that might scare off the uninitiated. Will they have an existing SQL instance they can work with? Will they have access to it? Will they run into a grumpy DBA? Will they be scared off by the idea of standing up their own SQL instance for testing and learning?

Wouldn’t it be great if we could illustrate how to use SQL, and get an idea of how helpful it can be, without the prerequisite of an existing instance with appropriate configurations and access in place?


“SQLite is an in-process library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine.”

What do you know, sounds pretty close to what we are looking for!  We want to use this in PowerShell, so where do we start?

Looking around, you’ll stumble upon Jim Christopher’s SQLite PowerShell Provider. If you like working with providers and PSDrives, this is probably as far as you need to go. Other examples abound, including interesting solutions like Chrissy LeMaire’s Invoke-Locate, which leverages SQLite behind the scenes.

I generally prefer standalone functions and cmdlets over providers. I’m also a fan of abstraction, and building re-usable, simple to use tools. The task-based nature of PowerShell makes it a great language for getting things done. We can concentrate on doing what we want to do, not the underlying implementation.

I was looking for something similar to Invoke-Sqlcmd2, which abstracts out the underlying .NET logic to provide simplified SQL queries, the ability to handle SQL parameters, PowerShell-esque behavior for DBNull, and other conveniences.


I spent a few minutes with the SQLite binaries and examples from Jim and Chrissy, and simply duct-taped SQLite functionality onto Invoke-Sqlcmd2. Let’s take a look at what we can do.

Download and unblock Invoke-SQLiteQuery, and you’ll be up and running, ready to work with SQLite. Let’s create a data source and a table:
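Something like this, a minimal sketch assuming you’ve dot-sourced the function and its parameter names match the help (Get-Help Invoke-SQLiteQuery -Full):

```powershell
# Create a SQLite data source (the file is created on first use) and a table
$Database = 'C:\Temp\Names.SQLite'
$Query = 'CREATE TABLE NAMES (Fullname VARCHAR(20) PRIMARY KEY, Surname TEXT, Givenname TEXT, BirthDate DATETIME)'
Invoke-SQLiteQuery -DataSource $Database -Query $Query

# A PRAGMA statement gives us basic details on the new table
Invoke-SQLiteQuery -DataSource $Database -Query 'PRAGMA table_info(NAMES)'
```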


That was pretty easy! We used a SQLite PRAGMA statement to see basic details on the table we created. Now let’s insert some data and pull it back out:
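Roughly, assuming the SqlParameters parameter works as described in the help:

```powershell
# Parameterized INSERT; @full and @BD are bound from the SqlParameters hashtable
Invoke-SQLiteQuery -DataSource $Database -SqlParameters @{
    full = 'Cookie Monster'
    BD   = (Get-Date).AddYears(-3)
} -Query 'INSERT INTO NAMES (Fullname, BirthDate) VALUES (@full, @BD)'

# Pull it back out
Invoke-SQLiteQuery -DataSource $Database -Query 'SELECT * FROM NAMES'
```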


In this example we parameterized the query – notice that @full and @BD were replaced with the full and BD values from SQLParameters, respectively.

Let’s take a quick look at using SQLite in memory:
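A sketch, assuming the New-SQLiteConnection helper bundled alongside the function; an in-memory database only lives as long as its connection, so we keep one open across calls:

```powershell
# The magic :MEMORY: data source
$Connection = New-SQLiteConnection -DataSource :MEMORY:
Invoke-SQLiteQuery -SQLiteConnection $Connection -Query 'CREATE TABLE OrdersToNames (OrderID INT PRIMARY KEY, Fullname TEXT)'
Invoke-SQLiteQuery -SQLiteConnection $Connection -Query "INSERT INTO OrdersToNames (OrderID, Fullname) VALUES (1, 'Cookie Monster')"
Invoke-SQLiteQuery -SQLiteConnection $Connection -Query 'SELECT * FROM OrdersToNames'
```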


Typically, we might use DataRow output from MSSQL and SQLite queries. As you can see above, DataRow output leads to unexpected filtering behavior – if I filter on Where {$_.Fullname}, I don’t expect rows with no Fullname to come back. Thankfully, we have code from Dave Wyatt that can quickly and efficiently convert output to PSObjects that behave as expected in PowerShell.
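To illustrate the difference, assuming an -As parameter that switches between output types:

```powershell
# DataRow output: rows with a DBNull Fullname still sneak through a truthiness filter
Invoke-SQLiteQuery -SQLiteConnection $Connection -Query 'SELECT * FROM OrdersToNames' -As DataRow |
    Where-Object { $_.Fullname }

# PSObject output converts DBNull to $null, so the same filter behaves as expected
Invoke-SQLiteQuery -SQLiteConnection $Connection -Query 'SELECT * FROM OrdersToNames' -As PSObject |
    Where-Object { $_.Fullname }
```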

We did the querying above in memory. Let’s run PRAGMA STATS to see details on the in-memory data source. If we close the connection and run this again, we see the data is gone:
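Along these lines:

```powershell
# Details on the in-memory data source
Invoke-SQLiteQuery -SQLiteConnection $Connection -Query 'PRAGMA STATS'

# Close the connection; re-running the query shows the in-memory data is gone
$Connection.Close()
```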


Next steps

That’s about it! If you want simplified SQLite queries in PowerShell, check out Invoke-SQLiteQuery. If you delve into the MSSQL side of the house, check out Invoke-Sqlcmd2 from Chad Miller et al. It was used as the basis for Invoke-SQLiteQuery and behaves very similarly.

Now I just have to find more time to write…

Disclaimer: This weekend was the first time I’ve used SQLite. If I’m missing any major functionality, or you see unexpected behavior, contributions or suggestions would be quite welcome!

Testing DSC Configurations with Pester and AppVeyor

I spent the last few days tinkering with AppVeyor. It’s an interesting service to help enable continuous integration and delivery in the Microsoft ecosystem.

Last night I realized it might offer a simple means to test the outcome of your DSC configurations. Here’s the recipe for a simple POC, with plenty of room for you to tweak and integrate with your existing processes:

  • Create a DSC configuration you want to test
  • Create a short script to apply that configuration
  • Create some Pester tests to verify the desired state of your system, post configuration
  • Create the AppVeyor yaml to control this process
  • Add this to an appropriate AppVeyor source (Example covering GitHub)
  • Test away! Make a change to the DSC code, commit, and AppVeyor spins up a VM, applies the DSC configuration, and your Pester tests verify the outcome

This certainly isn’t a perfect method, but it would be a simple way for anyone to get up and running writing and testing DSC configurations and resources.

I’m going to make the assumption that you are familiar with GitHub, Pester, and some of the basics of AppVeyor from my first overview.

Wait, what is DSC?

Windows PowerShell Desired State Configuration is a configuration management platform from Microsoft. It’s still young, but it’s fast-tracked for the Common Engineering Criteria, receives a good deal of attention from the PowerShell team, and the guy who brought us PowerShell is quite excited about it. It’s probably something you should be paying close attention to.

There is quite a variety of resources to get started with. The DSC Book from PowerShell.org is a nice overview, and the soon-to-be-published MVA series was quite helpful (Getting Started, Advanced).

Let’s look at some caveats to testing DSC over AppVeyor.

Yes, but…

  • You could do much of this on your own, with tools like Client Hyper-V and AutomatedLab. You might even have a similar toolset in place at work.
  • No testing of distributed systems; this gives you a single VM to test on.
  • Limited selection of operating systems to deploy on. There is an OS option in the yaml; presumably this means we may see more.
  • Your configurations cannot restart the VM. I’m hoping this will change, but depending on AppVeyor’s architecture and design, this may be tough.
  • You’re deploying on a hosted service. If your DSC configurations are integrated with your CMDB and other internal resources, this may be a show stopper.
  • A good DSC resource should have a solid ‘test’ function. That being said, a single DSC resource’s test function might not handle the intricate combination of resources applied to a system.

Right. I’m sure I’m missing others as well. Certainly not perfect, but if your goal is to write and test some general DSC resources or configurations, and you don’t have the tools in house, this is a fantastic and simple way to spin up fresh VMs, configure them with DSC, and verify that the resulting system is now in the desired state.

Another caveat – building your own system that performs similar functionality would be an incredibly valuable experience. They always say “don’t re-invent the wheel,” but if your goal is experience and learning, re-inventing the wheel is a great way to get there!

Pick a source

I’m going to stick to GitHub for this. Keep in mind that AppVeyor is only free for open source projects. Consider your security posture before uploading any sensitive DSC configurations.


Pick a DSC configuration to test

We’re going with an incredibly basic example here:
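Something in this vein; the feature names are illustrative, the real file lives in the repo:

```powershell
# WebServer.ps1: the ContosoWebsite configuration we apply and test below
configuration ContosoWebsite {
    param([string[]]$ComputerName = 'localhost')

    Node $ComputerName {
        # Install IIS
        WindowsFeature WebServer {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
        # Install ASP.NET 4.5
        WindowsFeature ASP {
            Ensure = 'Present'
            Name   = 'Web-Asp-Net45'
        }
    }
}
```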

Write a controller script to apply the configuration

There are plenty of ways to do this. Modify this POC example to meet your needs.
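Here’s the general idea behind appveyor.dsc.ps1; treat it as a sketch rather than the exact script from the repo:

```powershell
# Dot-source and compile the configuration; compiling returns FileInfo for the MOF
. "$env:APPVEYOR_BUILD_FOLDER\WebServer.ps1"
$Mof = ContosoWebsite -OutputPath C:\DSC

# Force application of the configuration and wait for it to complete
Start-DscConfiguration -Path C:\DSC -Force -Wait -Verbose

# Record the MOF path; the yaml uploads it as an artifact later
$Mof.FullName | Out-File "$env:APPVEYOR_BUILD_FOLDER\Artifacts.txt"
```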

Most of this should be self-explanatory. We force application of the ContosoWebsite configuration from WebServer.ps1. The one extra bit is that we save the path to the resulting MOF file in Artifacts.txt. We upload this later on.

Write your Pester tests

Go crazy. I wrote some very simple, limited tests for this POC:
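For example, using Pester v3 syntax; adjust the checks to whatever your configuration promises:

```powershell
Describe 'ContosoWebsite' {

    It 'Installed the Web-Server feature' {
        (Get-WindowsFeature -Name Web-Server).Installed | Should Be $true
    }

    It 'Serves a page on port 80' {
        $Response = Invoke-WebRequest -Uri http://localhost -UseBasicParsing
        $Response.StatusCode | Should Be 200
    }
}
```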

Keep in mind that you aren’t limited to one pass. You could theoretically apply your configuration, test, add a ‘mischief’ script that messes with the configuration, and test to verify that your DSC configuration brings things back in line.

Write the AppVeyor yaml

We’re almost done!  At this point, we’re going to tell AppVeyor what to run. There’s a lot more you can configure in the yaml, so be sure to flip through the options and experiment.
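The general shape of it looks like this; the real file is in the AppVeyor-DSC-Test repo:

```yaml
version: 1.0.{build}

# Skip unnecessary builds
skip_commits:
  message: /updated readme.*/

test_script:
  # Apply the DSC configuration and record the MOF path
  - ps: . .\appveyor.dsc.ps1
  # Run the Pester tests and send results to AppVeyor
  - ps: . .\appveyor.pester.ps1
  # Upload the MOF recorded in Artifacts.txt
  - ps: Get-Content .\Artifacts.txt | ForEach-Object { Push-AppveyorArtifact $_ }
```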

What does this do?

  • We ignore any commits that match ‘updated readme’, to avoid unnecessary builds.
  • We run appveyor.dsc.ps1, which applies the DSC configuration and saves the MOF path.
  • We run appveyor.pester.ps1, which invokes the Pester script, sends test results to AppVeyor, and tells us if the build passed.
  • We upload the MOF file, which will be available on the artifacts page for your AppVeyor project.

Tie it all together

We’re good to go! We create a GitHub repository, add this to our AppVeyor projects, and make a commit (covered here). Browse around the project on AppVeyor to see the results:


That’s it – we now have a simple framework for automated DSC resource and configuration testing. There’s a lot more you might want to do, but the simple POC material is in the AppVeyor-DSC-Test repository on GitHub.


GitHub, Pester, and AppVeyor: Part 2

I recently published a quick bit on using GitHub, Pester, and AppVeyor, a slick combination to provide version control, unit testing, and continuous integration to your PowerShell projects.

That post was a quick overview and essentially summed up ideas and implementation straight from Sergei. Before this pull request, I hadn’t worked with Pester or AppVeyor:


We just went through a major upgrade to our EMR, and I’m covering our primary admin for any fallout this morning. Thankfully, there wasn’t much to do, so I spent a few minutes toying with AppVeyor and Pester. This post is a quick summary of the outcome.

The code referenced in this post is now part of the PSDiskPart project on GitHub.


The AppVeyor yaml file used in the Wait-Path repository is fairly straightforward. It installs Pester and runs a few lines of PowerShell. The readability for those PowerShell lines was a bit painful – no syntax highlighting, some shortcuts to keep code on one line, etc. My first step was to abstract most of the PowerShell out to another script.

The PSDiskPart AppVeyor yaml file is the result. It’s a little cleaner; the only test_script lines are calls to a single PowerShell script.

Not everyone would prefer this method, as it adds a layer of complexity, but I like the abstraction, and it enables the second line of PowerShell, where we call PowerShell.exe -version 2. So what’s in the appveyor.pester.ps1 controller script?


My approach is to use a single file. Because we call it several times to cover both PowerShell version 2 and the native PowerShell version, we need to serialize our output and add a ‘finalize’ pass to collect everything and send it to AppVeyor.
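Stripped way down, the idea is something like this; the full script in the repo handles details I’m glossing over:

```powershell
# appveyor.pester.ps1 (skeleton)
param([switch]$Finalize)

if (-not $Finalize) {
    # Test pass: PowerShell 2 lacks module auto-loading, so import Pester explicitly
    Import-Module Pester
    $Version = $PSVersionTable.PSVersion.Major

    # Serialize results so the finalize pass can gather every run
    Invoke-Pester -PassThru |
        Export-Clixml "$env:APPVEYOR_BUILD_FOLDER\PesterResults_PS$Version.xml"
}
else {
    # Finalize pass: collect results from each PowerShell version we tested
    $Results = Get-ChildItem "$env:APPVEYOR_BUILD_FOLDER\PesterResults_PS*.xml" |
        ForEach-Object { Import-Clixml -Path $_.FullName }

    $Failed = ($Results | Measure-Object -Property FailedCount -Sum).Sum
    if ($Failed -gt 0) {
        throw "$Failed tests failed."
    }
}
```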

If you look at my commit or AppVeyor history for this morning, you will see an embarrassing number of small changes and tweaks – multitasking is generally a bad idea; multitasking without caffeine is worse:


The first of my struggles: relative paths. They’re great. But you need to start in the right path. You’ll note a reference to one of the AppVeyor environment variables, APPVEYOR_BUILD_FOLDER. This helped get me to the right path.

The second of my struggles: PowerShell.exe. I didn’t realize this, but if you place ExecutionPolicy and NoProfile parameters before -Version 2.0, PowerShell won’t be happy:
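The fix is simply putting -Version first:

```powershell
# Fails: -Version 2.0 must be the first parameter
PowerShell.exe -NoProfile -ExecutionPolicy Bypass -Version 2.0 -Command { $PSVersionTable.PSVersion }

# Works
PowerShell.exe -Version 2.0 -NoProfile -ExecutionPolicy Bypass -Command { $PSVersionTable.PSVersion }
```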


The third struggle: PowerShell 2 and PowerShell 3 are very different. At work, I always target PowerShell 2 and am all too familiar with the helpful language and functionality I must avoid. Throwing a PowerShell 2 iteration into the mix left me embarrassed at the number of PowerShell 3 assumptions I had made in the original PSDiskPart and Pester code!

  • There is no auto module loading in PowerShell 2. We abstracted out the call to Pester, so we need to add a line to import that module.
  • Set-Loc<tab> hits Set-LocalGroup before Set-Location. Okay, that’s not PS3, that’s me being sloppy!
  • In the module manifest, PowerShell 3 lets us use RootModule. PowerShell 2 doesn’t recognize RootModule, so we switch to ModuleToProcess.
  • PowerShell 2 did not include the $PSScriptRoot automatic variable.
  • PowerShell 2 will loop over $null.
  • Get-Content -Raw was introduced in PowerShell 3. A few workarounds are sketched below.
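A few of those translate directly into workarounds:

```powershell
# PowerShell 2 loops once over $null; guard the collection before iterating
if ($Collection) {
    foreach ($item in $Collection) { $item }
}

# No $PSScriptRoot in PowerShell 2; derive it inside a script instead
$ScriptRoot = Split-Path -Parent $MyInvocation.MyCommand.Path

# Get-Content -Raw is PowerShell 3+; fall back to .NET for a single string
$Raw = [System.IO.File]::ReadAllText($FilePath)
```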

Walking through the build

Here is the resulting passing build; we’ll step through the basic flow:


First, we ignore any commits that match updated readme:


Next, we run the first pass of appveyor.pester.ps1, which runs tests in the native PowerShell on the AppVeyor VM:


This runs the AppVeyor testing controller script, which calls the PSDiskPart.Tests.ps1 Pester tests.


Success! Note that we differentiated the PowerShell version in the ‘It’ name. This would be more appropriate for Context, but we wanted to differentiate tests on the AppVeyor side:


Next, we run this in PowerShell version 2 mode. Native AppVeyor support for this would be nice; as is, we don’t get colorized output without going through extra effort:



Finally, we want to collect all the results, send our tests to AppVeyor, and give some summary feedback if anything failed:



The Outcome

Apart from working through my many mistakes, we now have our PowerShell abstracted out to a separate file we can view with syntax highlighting, we have a cleaner yaml file, and we have a simple way to test in both PowerShell version 2 mode, and with the native PowerShell.

We can focus on the domain-specific yaml in the yaml file, and PowerShell in the PowerShell file.


On a side note, I don’t think I mentioned badges in my previous post. In AppVeyor, browse to your project, view the settings, note the Badges section:


This is a great way to tell folks that your project is building successfully – or that it’s broken, as PSDiskPart was throughout the morning:


Next Steps

If you’re using GitHub for your PowerShell projects and haven’t checked them out yet, definitely consider looking into adding Pester and AppVeyor. If you already have your project in GitHub and your Pester tests laid out, adding them to AppVeyor only takes a moment (barring tests that require access to your internal environment). It took less than 30 seconds to add the Infoblox module to AppVeyor once I had added a few example Pester tests.

On that note, consider test-driven development. Adding a comprehensive set of tests to a single function after it has been written is difficult. Covering an entire module would be painful. If you follow TDD and write your tests before you write the functionality that they test, you will be ahead of the game. As you can tell by my contributions, I am certainly not there yet, but I like the idea.

There’s a lot more to explore in AppVeyor. As far as I can tell, it looks like it can be used to help enable both continuous integration and continuous delivery. Poke around, experiment, and if you find anything helpful, share it with the community!

Querying the Infoblox Web API with PowerShell

My apologies ahead of time. This post is half rant, half discussion on the basics of using the Infoblox Web API. A rudimentary PowerShell module abstracting this out is available here.

This is a follow-up to my thoughts on REST APIs here. Today we’re going to focus more on working with the Infoblox Web API, while highlighting some of the reasons vendors should really step in and provide PowerShell modules that sit on top of their APIs.

Getting Started: Reading

First things first; get ready to read. For every API you work with, chances are you’re going to spend more time reading than writing code. Sign into Infoblox’s support site and download the Web API documentation. Vendors: How much time do you think your customers will spend writing functions or modules that work across API versions? Or that cover more functions than are absolutely necessary?

Now skim through that documentation. As you spend more time working with REST APIs, you’ll pick out the important bits. Sadly, there is little consistency between the various REST implementations; chances are you can borrow snippets of PowerShell code between solutions, or that you might find examples online, but the conventions and syntax for accessing and interpreting output from each REST API will vary wildly.

Thankfully, the basics are summed up in the first twelve pages. The remaining 800+ are relegated to describing some examples, and the various objects we can work with, which you can selectively review later.

Key ingredients

We need a few ingredients to start:

  • SSL. Ideally you have this set up. For a quick, less secure start, consider this solution.
  • The Web API version, which Infoblox uses in the base URL
  • A base URL. In general it looks like this: https://FQDN/wapi/v1.6/
  • A credential with access to the Infoblox
  • The ability and motivation to read lengthy, verbose documentation


We need to figure out how to authenticate. Most APIs provide a method to create a token, session, or some other persistent state. Others force you to authenticate with each request.

Some APIs require you to obfuscate the password in some way, and construct a header per their specifications. I’ve even seen specs requiring you to generate a header, generate a hash for that header, and use that hash in the real header.

Reminder: use SSL; obfuscation is not secure. Be wary of the misuse of the word ‘encryption’. Base64 encoding is neither encryption nor secure. Thankfully, with the Infoblox we can pass in a standard PSCredential object and leverage HTTPS.

Let’s try to hit the Uri without specifying a resource:
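Using a hypothetical grid FQDN:

```powershell
# Hypothetical grid master, no object on the end of the Uri
$BaseUri = 'https://grid.contoso.com/wapi/v1.6'
$Credential = Get-Credential

# This comes back with a 400 Bad Request
Invoke-RestMethod -Uri "$BaseUri/" -Credential $Credential
```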


No luck. The documentation explains that a 400 error is essentially your fault. Let’s try with an object. Something basic, like the grid itself:
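Same idea, with an object tacked onto the Uri:

```powershell
# GET the grid object itself
Invoke-RestMethod -Uri "$BaseUri/grid" -Credential $Credential
```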


Voila! We’re all done, right? What if we have to make a large number of calls? Would a session be more efficient?

Let’s open up the API documentation. Ctrl+f ‘Session’. Nothing relevant. Ctrl+f ‘Token’. Nothing relevant. Ctrl+f ‘Cookie’ – got it! There’s a brief mention in the authentication section. I’m hoping we can use the SessionVariable parameter from our Invoke-RestMethod or Invoke-WebRequest call.

If you’re lucky, you can google around and find a working example. In this case, I was able to look at Don Smith’s REST-PowerShell wrapper. It’s not very PowerShell-y, but it has some examples which come in handy. Borrowing from this, we wrote an ugly New-IBSession.
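Boiled down, it amounts to something like this; the real function adds validation and error handling:

```powershell
function New-IBSession {
    param(
        [string]$Uri,               # e.g. https://grid.contoso.com/wapi/v1.6
        [System.Management.Automation.PSCredential]$Credential
    )
    # Authenticate once; the auth cookie lands in the session variable
    $null = Invoke-RestMethod -Uri "$Uri/grid" -Credential $Credential -SessionVariable TempSession
    $TempSession
}

# Reuse the session on subsequent calls via -WebSession
$IBSession = New-IBSession -Uri $BaseUri -Credential $Credential
```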

Relatively painless so far; we already know how to authenticate and pull data! But we’re looking at a single API among many, each of which has its own peculiarities and implementation details. Let’s see if there’s more to pulling data than meets the eye.

GETting data

Time to start looking at the data which we actually care about. Each web API will expose different objects to you. In this case, we have 720 pages describing the objects and their various properties. Somewhat painful, but verbose documentation beats no documentation. Wouldn’t it be nice if we had the discoverability and reflection you get with PowerShell?

Let’s pretend we want a DHCP lease address and binding state. We look through the objects, and we see “lease: DHCP Lease object”. Submit a GET request for this:


I have a bad feeling about this. I just want a lease, what’s going on? Let’s try another obvious object, a network:


Bizarre – I got data back! I dive back into the documentation. The 400 error is generic, but let’s search for it anyways. Ah ha! In the GET method section, we see specific error handling notes. A 400 error means there were too many results.

To whittle down the results, we need to dive into some domain-specific CGI that will provide no value to you outside of these Infoblox API calls. With PowerShell, if I spend some time learning the ins-and-outs of the language, it helps me whether I’m working with AD, VMware, or SQL. Vendors: if your competition offers a decent PowerShell module, it might swing my vote.

Time for more reading. Long story short, you need to implement paging. In this case, I say _paging=1, and I specify an appropriate _max_results; I chose 1000. The first page of results includes a next_page_id. I use this to qualify my next call to the Infoblox, rinse and repeat until the Infoblox doesn’t provide a next_page_id. My implementation is crude, but you can see this in the logic of Get-IBLease.
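The heart of it is a loop along these lines; note that paged requests also want _return_as_object=1:

```powershell
# First page: ask for paged output, up to 1000 results per page
$Uri = "$BaseUri/lease?_paging=1&_max_results=1000&_return_as_object=1"

$Leases = do {
    $Page = Invoke-RestMethod -Uri $Uri -WebSession $IBSession
    $Page.result

    # next_page_id feeds the next call; it's absent on the last page
    $Uri = "$BaseUri/lease?_page_id=$($Page.next_page_id)"
} while ($Page.next_page_id)
```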

Reading the documentation, we see we can call _max_results=[positive number] and it will truncate results, rather than error out:


Woohoo! I got a _ref, an address, and a network_view. That’s not what I’m after. At the very least, I want the binding state for that lease, and I want a way to filter the results.


Time for more reading, and more CGI on the end of that Uri. It’s up to you again to invest time learning Infoblox’s specific method of picking out properties to return, and filtering results in an API call.

For each object, the documentation will describe a property, including whether and how you can filter for it:


Hopefully the property you want to filter is searchable! We wanted to look at binding_state, perhaps to see if we have free leases. No luck:


Let’s find another example for filtering. There are plenty more; in this case, I’m searching for leases that were discovered in the past two days (Epoch time is used):
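Epoch time means converting the date in PowerShell first:

```powershell
# Seconds since 1970-01-01 UTC, two days back
$Epoch = [int64]((Get-Date).AddDays(-2).ToUniversalTime() - [datetime]'1970-01-01').TotalSeconds

# Leases discovered in the past two days
Invoke-RestMethod -Uri "$BaseUri/lease?discovered_data.last_discovered>=$Epoch" -WebSession $IBSession
```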


Again, crudely implemented, but you can see how these CGI queries are constructed in the Get-IB* commands, and the resulting Uri in their verbose output.

Picking and choosing data

You guessed it, time for more reading! At this point, it should be clear that if you want to work with a vendor’s API, you’re probably going to spend a great deal of time reading. Vendors: at this point, your customers may be tired. They struggled through figuring out your authentication mechanism, your object model, your unique query syntax, your unique interpretations of error codes. They might not spend much time on important details like error handling, testing, or covering functionality that they don’t have immediate plans for. What if this causes an outage and leaves your brand with a black eye? What if your customers realize they are spending valuable time designing and implementing functions that you could be creating for them?

Back to the task at hand; we want to pull different properties. Reading the documentation, we see that you simply specify _return_fields=comma,separated,list:
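For example, to pull the binding state along with the address (field and filter names here come from the lease object docs; check your version):

```powershell
# Pick the fields to return; a searchable filter keeps the result set sane
$Fields = 'address,binding_state,network_view'
Invoke-RestMethod -Uri "$BaseUri/lease?network=10.0.0.0/8&_return_fields=$Fields" -WebSession $IBSession
```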


Here’s an example call to Get-IBLease with verbose output. It specifies a few default properties I find helpful, and allows filtering on properties like address (~= operator) and discovered_data.last_discovered. Yes, this might be too verbose:


There are a few other commands in the module, including a generic Get-IBObject. Perhaps you want to search for IPAM entries (IPv4Address) between two addresses:



Just kidding. Hopefully you’ve learned enough to go back and learn how to work with the Infoblox beyond GET requests.

How do we fix this?

I want to emphasize that this post is not targeting Infoblox specifically: as far as REST APIs go, theirs has been solid. If you’re working with a modern product, chances are it has a web API of some sort. Some vendors do provide a PowerShell module to abstract out the painful process we went through above, but many do not.

I submitted a few potential suggestions in my closing section of the previous REST API post. What do you think? Is this even an issue? Any suggestions on fixing it? What can we do to encourage vendors to provide more than a few simplified examples of hitting their API through PowerShell?

On a side note, if your answer involves a specific vendor’s specific version of an orchestration product, and the specific third party extensions for this, please do not reply : )

Thanks to Don Smith and Anders Wahlqvist for their helpful examples.


Fun with GitHub, Pester, and AppVeyor

This is a quick hit to cover a practical example of some very important processes; version control, unit testing, and continuous integration. We’re going to make a number of assumptions, and quickly run through GitHub (version control), Pester (unit testing for PowerShell), and AppVeyor (continuous integration).

Yesterday evening I added some basic Pester tests to a PowerShell module for DiskPart. I thought it might be fun to test out AppVeyor, but assumed running DiskPart or some other process requiring administrative privileges might not work. I assumed wrong; the build passed!

We’ll make the assumption that if you’re using PowerShell and Pester, you’re using Windows. We’ll also assume you have a PowerShell script or module already written. Finally, we’ll assume that for all the GUI fun below, you might dive into the CLI.


I’m not going to ramble on about Git or GitHub. There are heavy duty books, hundreds of blog posts, and various youtube examples to get you up and running. Sign up. Download the Windows client. Read one of the thousands of articles on using GitHub (Example, others may be better).

You’re signed up. Let’s create a repository and upload our code.  Here’s yet another quick-start, using the website and Windows client, documented with GifCam, a handy tool I found through Doug Finke.

Create a repository:


Clone the repository on your computer, copy the files you want in there, commit:


There’s way more to see and do, so do spend a little time experimenting and reading up on it if you haven’t already. There are too many benefits of version control to list, and GitHub adds a simple interface and a nice layer enabling collaboration and sharing. Or go with Mercurial and BitBucket, among the many other combinations.


Pester is a unit testing framework for PowerShell. Companies like Microsoft and VMware are starting to use it, you should probably check it out:


Jakub Jareš has a great set of articles on PowerShellMagazine.com (One, Two, Three), and you should find plenty of other examples out there.

I took an existing Invoke-Parallel test file, used the relative path to Wait-Path.ps1, and added a few tests. Follow Jakub’s first article or two and you will realize how simple this is. Start using it, and you will realize how valuable it is.

It’s quite comforting knowing that I have a set of tests and don’t need to worry about whether I remembered to test each scenario manually. Even more comforting if you have other people collaborating on a project and can identify when one of them (certainly not you!) does something wrong.


Continuous integration. You’ve probably heard it before, perhaps around the Jenkins application. Personally, I never got around to checking it out. A short while back, Sergei from Microsoft suggested using AppVeyor for Invoke-Parallel (thank you for the motivation!). He added a mysterious appveyor.yml file and Pester tests, I signed up for AppVeyor, added the Invoke-Parallel project, and all of a sudden Pester tests appeared on the web:


I assumed this required much wizardry. It probably did, but now I can borrow what Sergei did! Sign up with your GitHub creds, login, and here’s a quick rundown:


Simple so far! AppVeyor is waiting for a commit. In the background, I added a readme.md and a line in Wait-Path. Let’s commit:


Behind the scenes, gears are turning at AppVeyor. After a short wait in the queue, your tests will start to run:




Build Passing!

Take a peek at the appveyor.yml. There’s a whole lot more you can do with AppVeyor, but this is a nice framework for simple PowerShell Pester tests. In the yaml (sketched after the list), you can see we use domain-specific language to…

  • Install Pester
  • Invoke Pester, pass the results through, and generate output XML
  • Upload the Pester XML output (Why? The Tests tab)
  • If Pester output FailedCount is greater than 0, fail the build
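From memory, the important bits boil down to this; see the repo for the real file:

```yaml
install:
  # Install Pester via chocolatey
  - cinst pester

test_script:
  # Run the tests, keep the results object, write NUnit XML
  - ps: $res = Invoke-Pester -OutputFormat NUnitXml -OutputFile TestsResults.xml -PassThru
  # Upload the XML so results show up on the Tests tab
  - ps: (New-Object System.Net.WebClient).UploadFile("https://ci.appveyor.com/api/testresults/nunit/$($env:APPVEYOR_JOB_ID)", (Resolve-Path .\TestsResults.xml))
  # Fail the build if any test failed
  - ps: if ($res.FailedCount -gt 0) { throw "$($res.FailedCount) tests failed." }
```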

Better together

Version control, unit testing, and continuous integration are fantastic in their own right, as are the products we looked at for each. If you experiment and try these out together, I think you will find they are even better together. And this particular combination of GitHub + Pester + AppVeyor (for PowerShell) is particularly smooth.

In fact, writing this post, listening to the tail end of the DSC MVA, enjoying some pasta and wine, and recording the various gifs took all of about two hours. Don’t be intimidated, just dive in!

Happy scripting!

Remotely Brick a System

I have another fun project. After some recent system performance analyses, one of my recommendations was to move appropriate systems to VMware’s Paravirtual SCSI controller. We don’t go the vCAC/vRA route yet, so I’m now tasked with integrating this into our fun little ASP.NET/C#, PowerShell, MDT, and SQL deployment Frankenstein. It may be ugly, but it was a fantastic learning experience, and works quite well.

When designing tooling, I usually step through what I want to do manually, and break each step up as needed, building re-usable tools where appropriate. Here’s the recipe, ignoring a paranoid level of error handling:

  • Check to see which of the guest’s disks are online (Why?  We’ll see…)
  • Power off the guest
  • Change all non-system drives to a new Paravirtual SCSI controller
  • Power on the guest
  • All your disks are gone!
  • Set your disks that are offline back to online. Each test case of mine resulted in all migrated disks coming up as offline.
  • See all those ‘Power’ tasks? All this will need to be performed from a remote system.

Most of this is vanilla PowerCLI. A few steps require something like DiskPart though. Yes, Windows 6.2 and later include Storage Cmdlets. Unfortunately, Microsoft cut off legacy systems, along with the many organizations out there who still rely on them, even if they can roll out 2012 R2 boxes for new projects. DiskPart it is!

Tangent: Whoever decided to include OS-specific-Cmdlets in certain DSC Resources made me sad. The whole OS-specific-Cmdlet idea has led to a good deal of confusion, and relying on it in a technology that’s slated for inclusion in the Common Engineering Criteria (presumably as the standard for configuration management) might not help with adoption.

DiskPart on a remote computer

The blog title should make sense at this point; thankfully, no systems were harmed in the writing of this post. So, we have a set of DiskPart commands that need to run remotely. How do we do it?

A while back I wrote New-RemoteProcess. It was the second PowerShell function I published, definitely not my proudest work, but it does the trick; I have my remoting mechanism. Now I need to parse out online disks to know which offline disks should really be online. I sifted through the numerous PowerShell+DiskPart posts out there and didn’t find much on running it remotely. I did find Alan Conkle’s code for parsing disk, volume, and vdisk output. We tweaked New-RemoteProcess to give us Invoke-DiskPartScript, which we can use inside a few Get-DiskPart* functions that mash in Alan’s code.

The result? You can now remotely brick a system. Or check for offline disks and set them back to online after changing to a Paravirtual controller, your choice!


I packaged my functions up into PSDiskPart, and committed them to GitHub. If you need to run Diskpart against a remote system, and remoting + OS-specific-Storage-Cmdlets won’t work for you, give this a shot! If you have any suggestions or tips, pull requests would be appreciated.

Here are a few examples from my environment:

Get disk info for a few computers and pick out a few properties with Get-DiskPartDisk:
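Along these lines; property names here are from memory, check the module’s help for the real ones:

```powershell
Get-DiskPartDisk -ComputerName Server1, Server2 |
    Select-Object ComputerName, DiskIndex, Status, Size
```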

Get volume information from the current computer:


Set a disk to offline… Don’t do this to a production system : )


Bring a disk back online, and remove the readonly flag if it is set:


Invoke-DiskPartScript and Sleep

I did run into a small issue with the borrowed logic from New-RemoteProcess. Every so often, I simply wouldn’t get results back. Adding Start-Sleep resolved this, but seemed inefficient.

I wanted something where I could say ‘wait until you can see this file.’ This is simple enough to write, but one of the things I love about PowerShell is that it is task-based. I want to wait for a path to exist, export to a csv, create a VM, or perform some other specific task, not worry about the logic and error handling behind each of these tasks.

I didn’t see anything out there, so I drafted up Wait-Path, which returns more quickly than hard-coding a Start-Sleep call.
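Usage amounts to a one-liner; parameter names per the function’s help:

```powershell
# Wait up to 30 seconds for the remote output file to show up
Wait-Path -Path "\\Server1\C$\DiskPartOutput.txt" -Timeout 30
```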

Up next

My wife is out of town this week. This means I have more time to play and to wrap up a few posts I have planned. No promises, but the following are on my plate:

  • REST / Infoblox –  A follow-up walking through a few Infoblox functions, illustrating why it would be quite nice if vendors provided their own PowerShell modules.
  • Invoke-Sqlcmd2 – Highlight some of this function’s features, with some practical examples. Pre-staging a computer and applications in MDT? Getting migrated objects from ADMT? Diving into OpsMgr? Too many choices…
  • Building an inventory database – Not everyone has a mature CMDB. Create a database that can track details on servers, SQL instances, SQL databases, scheduled tasks, and more.
  • Filling an inventory database – Now that we have an inventory database, collect the data! This boils down to the products and attributes you want to track, but we can start with some basics.