This blog post is part 3 of a 3-part series on how to write PowerShell Pester tests. After covering the fundamentals of Pester in part 1, we dove into the code and learned the details of the Pester syntax in part 2, which gave you enough ammunition to write your first Pester tests. Today, in part 3 of this series, we will tackle the advanced concepts of Pester and look at some of Pester's features that give us better control over the information our tests produce, such as which tests failed, which ones didn't, and so on.

These blog posts follow each other, but can be read separately. The three blog posts will bring you from zero to hero in no time!

The scripts used throughout this series on Pester tests are available on GitHub here. The scripts for part 3 are located directly under the folder named part3.

Writing Pester tests and working with the Pester object using -PassThru

The great thing about Pester is not only that we can write tests for our scripts, but that it can generate a detailed object containing the exact results of our tests. This is really useful for further scenarios such as continuous integration with tools like TeamCity, Jenkins or AppVeyor.

Accessing the Pester result object couldn't be easier: simply call the Invoke-Pester cmdlet as follows:

Invoke-Pester -PassThru
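For example, here is a minimal sketch of consuming that result object (the property names below come from the Pester 3.x result object):

# Run the tests and capture the result object instead of just console output
$result = Invoke-Pester -PassThru

$result.TotalCount                                    # number of tests that ran
$result.FailedCount                                   # number of failed tests
$result.TestResult | Where-Object { -not $_.Passed }  # details of each failure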

Mocking objects in your Pester tests

There might be cases where you want to test code that calls functions or cmdlets that do destructive things, such as removing an item from the registry, deleting a virtual machine, or deleting a user account in Active Directory. These are parts of your code that you might not want to actually execute, because of the consequences that they imply. But you do know what they return: whether it is $true or $false, or the contents of the returned object. For these cases, we will use mocking.

How to mock an object in Pester?

Basically, a mock replaces the output of a specific command with output that we define. Pester creates a proxy command, which ensures that we still work with objects that look identical to the real ones, just not the real ones from production.

Each time we call a mocked command, a synthetic object is returned with the values we have specified, instead of the real one from production. This way, we can test our code without actually deleting anything.

In Pester, we use the Mock keyword to simulate the results of a specific command.

I have added another test to our Pester test file with the sole purpose of demonstrating Pester's mocking functionality:

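The test was originally shown as a screenshot; here is a hedged reconstruction (Get-ComputerInfos is the function used throughout this series, the property name is an assumption):

Describe 'Get-ComputerInfos mocking' {

    # Replace the real function with a mock that returns 15 synthetic objects
    Mock Get-ComputerInfos {
        1..15 | ForEach-Object {
            [pscustomobject]@{ ComputerName = "Computer$_" }
        }
    }

    It 'returns 15 objects' {
        $result = Get-ComputerInfos -ComputerName 'Whatever'
        $result.Count | Should Be 15
    }
}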

And when calling our tests:

[Screenshot: Invoke-Pester output showing the mocking test passing]

This is not the best way to mock an object several times, but it highlights a point that I want to make a bit later. There is another version a bit further down in this blog post, but keep on reading, we will get there soon.

As you can see, the test passed. We mocked (faked) the creation of 15 objects, and ensured that the result is as we expected it to be.

You can find the code from the example above on GitHub under the folder assertions.

It was mocked, but are we sure it was mocked the way we wanted it to be? We will find out in the next chapter.

How to find out which PowerShell function has been mocked by Pester?

Another neat feature is the possibility to find out which functions have been mocked by Pester. For that, calling the following command will suffice:
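The command itself was lost with the original screenshot; an assumption on my part is that it was a plain help listing, run from inside the Describe block where the mock is active:

# Listing all help topics from inside the Describe block also shows the
# proxies that Pester created for mocked commands
Get-Help *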

It will list the different help entries, but you will also notice an empty help message that Pester created, prefixed with:

PesterIsMocking_<NameOfFunctionMocked>

Read more about mocking on the official GitHub wiki page here.

How to find out if a mocked function has been called?

We have mocked our function, but are we actually sure that it was called the number of times we intended in our Pester test?

The examples below are available on my GitHub page under the folder Assertions.

The Pester test, as originally written, is the mocking test we reconstructed above.


If we run our Pester tests, we get the following output:

[Screenshot: Invoke-Pester output with all tests green]

What's wrong with this, you might ask? Everything is green, which means that everything worked. Right?

In this case, it isn't actually correct, and I'll show you why. We want to call our mocked function a number of times to verify that it can handle a 'big number' of calls.

We will add the following lines to our Pester script:

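A hedged sketch of those lines (Pester 3.x command names). Note that Assert-VerifiableMocks only validates mocks that were declared with the -Verifiable switch, so that switch has to be added to the Mock above as well:

It 'called the verifiable mocks' {
    # Fails if a mock declared with -Verifiable was never called
    Assert-VerifiableMocks
}

It 'called Get-ComputerInfos exactly 15 times' {
    # Fails unless the mocked command was called exactly 15 times
    Assert-MockCalled Get-ComputerInfos -Exactly -Times 15
}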

Assert-VerifiableMocks is the command that validates that the functions we have mocked were really called.

Assert-MockCalled, on the other hand, checks how often the mocked function has been called. In our case, we added the following two parameters: Exactly and Times, which, as their names suggest, check exactly how often our mocked function was called.

Relaunching the code, the following is displayed:

[Screenshot: Invoke-Pester output showing the Assert-MockCalled test failing]

We see that the function has been mocked, but the second test fails, saying that it was supposed to be called 15 times, but instead it was called only once.

What does this mean?

This means that even though we actually had 15 objects, and that was validated by one of our tests, the function was called only once and returned 15 objects, instead of being called 15 times and returning one object per call.

The issue here resides not in our function, but in how we wrote our test. The problem can be found on lines 71 to 91 of the test file, where we mock our Get-ComputerInfos function: as the reconstruction above shows, the loop that creates the objects sits inside the Mock body.


Here the mock creates all 15 objects in a single call, so the function is only ever called once. Adapting the test as follows should solve the issue:

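A hedged version of the corrected test; the loop has moved out of the Mock body and now wraps the function call itself:

Describe 'Get-ComputerInfos mocking' {

    # The mock now returns a single object per call
    Mock Get-ComputerInfos {
        [pscustomobject]@{ ComputerName = 'MockedComputer' }
    } -Verifiable

    It 'returns 15 objects when called 15 times' {
        $result = 1..15 | ForEach-Object {
            Get-ComputerInfos -ComputerName "Computer$_"
        }
        $result.Count | Should Be 15
    }

    It 'called the verifiable mocks' {
        Assert-VerifiableMocks
    }

    It 'called Get-ComputerInfos exactly 15 times' {
        Assert-MockCalled Get-ComputerInfos -Exactly -Times 15
    }
}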

See how the loop is no longer inside the mock block: this time, we really call our function 15 times.

[Screenshot: Invoke-Pester output with all tests passing]

This means that although we wrote a test, and it said that it passed, it actually didn't do exactly what we wanted it to do, and gave us a false positive.

Writing tests is definitely the way to go. Asserting that the code was actually called is even better!

Testing code in modules using the Pester InModuleScope

Until now, we have been dot sourcing our function into our Pester test and calling the function directly from the test. This works well if you only want to test a few functions, but it can become quite messy when different scopes come into play, such as when importing functions from a module. What happens if the same function is already dot sourced into memory as the one that just got loaded from a module?

I wrote some detailed examples of using Pester tests with InModuleScope; they are available on GitHub under ModuleTesting.

To avoid such issues, there is an extra keyword that we can use called InModuleScope. As its name suggests, it ensures that the tests you are writing really target the content of a specific module.

I have put our function into a module that I named GetComputerInfos.psm1.

Instead of dot sourcing our function at the beginning of our script, we import the module using Import-Module.

We then have to specify the keyword InModuleScope, followed by the module we want to scope our tests to.

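The screenshot is gone; here is a minimal sketch, assuming the module file sits next to the test script and exports Get-ComputerInfos:

# Load the module instead of dot sourcing the function
Import-Module "$PSScriptRoot\GetComputerInfos.psm1" -Force

# Everything inside this block runs in the module's scope
InModuleScope GetComputerInfos {
    Describe 'Get-ComputerInfos' {
        It 'returns an object for the local computer' {
            $result = Get-ComputerInfos -ComputerName $env:COMPUTERNAME
            $result | Should Not BeNullOrEmpty
        }
    }
}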

All the tests, variables and test drives created inside an InModuleScope block exist only within that InModuleScope block.

You can read more about testing code located in modules with PowerShell Pester on the official GitHub wiki page here.

PowerShell Pester TestDrive

We have mentioned it briefly before, but it is possible to create and access a test drive during your Pester tests. It is designed to help test things such as creating files and folders and writing output to them. I really love this feature, because the drive automatically gets created and deleted.


A test drive lives for a complete Describe block. This means that any files you have created on the test drive are lost as soon as the logic exits the Describe block, since you get a fresh test drive with each new Describe block.

We can also access the test drive directly through the automatically provided variable $TestDrive, which points to the root of our test drive.
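A minimal sketch of both access methods, reusing our Get-ComputerInfos function (the file name is an assumption):

Describe 'Exporting computer infos' {
    It 'writes a CSV file to the test drive' {
        # 'TestDrive:' behaves like a regular PSDrive inside the Describe block
        Get-ComputerInfos -ComputerName $env:COMPUTERNAME |
            Export-Csv -Path 'TestDrive:\computers.csv' -NoTypeInformation

        'TestDrive:\computers.csv' | Should Exist

        # $TestDrive holds the real filesystem path of the drive root
        Join-Path $TestDrive 'computers.csv' | Should Exist
    }
}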

If you want to read more about the PowerShell Pester test drive, visit the GitHub wiki page here.

Code Coverage

As its name implies, 'code coverage' helps you, once you have written your Pester tests, to identify how much of your code has actually been covered by those tests. The coverage is expressed as the percentage of code that was called. This is a great way to ensure that all, or at least most, of our code has been executed and tested.

The code examples from this part are available on GitHub under part3/codeCoverage.

To get a Pester code coverage report, you have to call your script as follows:

Invoke-Pester -CodeCoverage <YourScript.ps1>

Applied to our Pester test from the example, we would call it as follows:

Invoke-Pester -CodeCoverage GetComputerInfos.ps1

[Screenshot: Invoke-Pester code coverage report showing 100% coverage]

In our Pester test example, we have 100% code coverage. This is great, but we only have one single function in our script.
Reaching 100% code coverage on a real project can be quite difficult, because some things are simply impossible to monitor due to limitations of the technique used to analyze code coverage.

Indeed, parts of the code such as "try / catch" or "else" statements cannot be counted, since the mechanism behind Pester's code coverage sets breakpoints on each line and verifies whether each breakpoint has been triggered. On some keywords, such as try, catch or else, it is not possible to set a breakpoint, and that is why reaching 100% code coverage is often practically impossible.

To highlight this with an example, I have added an additional check on line 19 of our main function. The function was missing a test to see whether the computer is available. If it is not, we want that to be handled gracefully, and the returned object to be empty, containing just the computer name. The updated function is shown below.
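The embedded code is missing; here is a hedged sketch of what the updated function could look like (the parameter and property names are assumptions based on this series):

function Get-ComputerInfos {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true)]
        [string]$ComputerName
    )

    if (-not (Test-Connection -ComputerName $ComputerName -Count 1 -Quiet)) {
        # Computer unreachable: return an 'empty' object with just the name
        return [pscustomobject]@{
            ComputerName    = $ComputerName
            OperatingSystem = $null
            Memory          = $null
        }
    }

    $os = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $ComputerName
    [pscustomobject]@{
        ComputerName    = $ComputerName
        OperatingSystem = $os.Caption
        Memory          = $os.TotalVisibleMemorySize
    }
}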

Remember, all the code is available directly on GitHub here.

If we run Invoke-Pester now, we will have the following results:

[Screenshot: all tests passing, but code coverage at 57%]


We can clearly see that all of our Pester tests passed. So we should be celebrating, right?
Actually not! Looking at the code coverage report, we see it is only at 57% (!). This means that even though all our tests ran and passed, a portion of our code was not called at all. This could be catastrophic, since that untested portion of code could contain a bug and cause us trouble once shipped into production.
In the end, this untested portion of the code could become a source of unexpected behavior. To fix this and increase the code coverage percentage, we have to add another test to our Pester test file, one that makes a call to a non-existing machine.

I have added the following test on line 36 of our Pester test file:

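A hedged reconstruction of that test (the property names mirror the sketch of the function above):

It 'returns an empty object containing only the computer name for an unreachable machine' {
    $result = Get-ComputerInfos -ComputerName 'NonExistingComputer'

    $result.ComputerName    | Should Be 'NonExistingComputer'
    $result.OperatingSystem | Should BeNullOrEmpty
    $result.Memory          | Should BeNullOrEmpty
}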

The Pester test above calls the function with a non-existing computer name, and we expect empty values for all properties except ComputerName, which should contain the non-existing computer name for the test to pass.

[Screenshot: 13 tests passing, code coverage back at 100%]

In this case, we see that 13 tests passed successfully, and that the code coverage is now at 100%.

This means that the function and its tests can be approved and shipped into production.

To summarize: having all tests pass and show up in green means that the tests you wrote have passed. But it doesn't necessarily mean:

a) That every line of existing code has actually been tested.

b) That you are actually testing the right things.

Tag

There are a few pretty handy options that I haven't discussed yet, and 'Tag' is one of them.

Tag is a parameter that can be added to your Describe blocks to attach additional information to them. You can use the same tag on different Describe blocks, enabling you (in a way) to group those Pester tests together, and later call all the tests that share that common tag.

The code examples are available on GitHub under part3/tags.

I have added another test to my Pester example that tests our function more intensively, without mocking it. It calls the function 100 times, and makes sure that 100 objects are returned afterwards.

Notice the -Tag 'OperationalTests' parameter added to the Describe block, as sketched below.

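A hedged sketch of that tagged Describe block:

Describe 'Get-ComputerInfos operational tests' -Tag 'OperationalTests' {
    It 'returns 100 objects when called 100 times' {
        # Calls the real (unmocked) function 100 times
        $results = 1..100 | ForEach-Object {
            Get-ComputerInfos -ComputerName $env:COMPUTERNAME
        }
        $results.Count | Should Be 100
    }
}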

I have also added a new tag named 'Unit_Tests' to our existing Describe block in our Pester test file:

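Only the Describe line changes, along these lines:

Describe 'Get-ComputerInfos' -Tag 'Unit_Tests' {
    # ... the existing mocked tests stay as they are ...
}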


Note: tags should not contain white space, otherwise the call will not work.

Now that we have marked the Describe blocks with different tag names in our Pester test, we can go ahead and call one of them using the -Tag parameter of the Invoke-Pester cmdlet.

[Screenshot: Invoke-Pester called with the -Tag parameter]
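For example, to run only the unit tests:

Invoke-Pester -Tag 'Unit_Tests'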

Tags are great if you want to launch only a particular set of tests (for example, just the unit tests, which won't create or delete anything in your environment).

Once everything is validated, a second call in your workflow can invoke Invoke-Pester with -Tag pointing to the operational tests only.

EnableExit

The -EnableExit switch is quite handy, especially if you use Pester tests in a continuous integration tool. The principle is simple: it runs the Pester tests, counts the number of failed tests, and exits with that number, which ends up in the $LastExitCode variable of the calling shell. If all the tests succeed, $LastExitCode contains 0. Here is an example where I deliberately broke two tests to showcase this functionality.
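A minimal sketch, assuming the test script lives in the current folder; note that -EnableExit exits the PowerShell host, so run it in a child process rather than in your interactive session:

# Run the tests in a child process: -EnableExit makes Pester exit that
# process with the number of failed tests as its exit code
powershell.exe -NoProfile -Command "Invoke-Pester -EnableExit"

# Back in the calling shell, the automatic variable picks up that exit code
"Failed tests: $LASTEXITCODE"   # 0 means every test passed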

[Screenshot: $LastExitCode after a run with two failing tests]

The code example for EnableExit is available on my GitHub page under part3/EnableExit.

I am going to correct one of the errors above in my Pester test, and then we will look at what $LastExitCode contains:

[Screenshot: $LastExitCode containing 1 after one test was fixed]

We see that this time, the automatic variable $LastExitCode actually contains the value 1, which is the number of tests that failed. If all of the tests pass, the $LastExitCode variable will contain 0, which means that you have the green light 😉

This method is best suited for continuous integration systems that cannot read the output of Pester tests automatically. In normal cases, and when possible of course, I would recommend using the -PassThru option, which offers a more detailed version of this information.

OutputFile

The OutputFile parameter is tightly bound to the OutputFormat parameter (which is not mandatory). The Pester OutputFile parameter allows you to save the test results into a standard NUnitXml file. The NUnitXml format is a standard format for expressing test results (not only Pester tests, but any kind of tests), so it can be understood by tools such as TFS, TeamCity, Jenkins, AppVeyor etc.

Why is this cool? Simply because you can feed the NUnitXml file to any of these systems and, for example, gate automation on a set of passing tests.
You can also generate nice graphs and overviews of passing and failing tests. And our management loves that! 😉

Let's recall the same command as before (the one where I deliberately broke one of my tests), but this time, I'll specify a file to export the results to.

The code from this example is available in part3/enableExit on GitHub.

I have put the command in a batch file, which is available as start.cmd:
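The batch file was shown as a screenshot; a plausible version of the command it contains (the result file name is my assumption) is:

powershell.exe -NoProfile -Command "Invoke-Pester -EnableExit -OutputFile .\TestResults.xml -OutputFormat NUnitXml"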

[Screenshot: the generated NUnit XML results file]

We can see that the generated XML file contains detailed information about the different tests we ran. Notice that it contains failure node elements.

We don't really need to dig deep into all the elements of this XML file, since most of these systems have known how to interpret this type of test structure for quite a while.

Summary:

Voilà, that was all for this series about Pester tests. If you want to go through the code again, you can get it on GitHub here.

There are quite a few resources that can help you take another approach to learning Pester and implementing it in your environment.

Please let me know if a blog post that summarizes the different posts about learning Pester in one single post would be something you could be interested in. You decide!

Thanks for reading, and please, give feedback! 🙂