
Hello world! For the past year or so I was busy with everything but my module. Now I'm happy to announce that the project isn't dead; it is alive, and a new version has been published.

This version doesn't bring new commands, nor does it deprecate any. I think the command list is well established and I don't see anything useful to add; people aren't asking for more either. However, there is always work to do on the code: refactoring, optimization, cleanup and so on. Let's look at what I've done here:


  • Moved sources from CodePlex to GitHub

Initially, the project was hosted on CodePlex, which is now dead. I moved all my sources to GitHub, moved the documentation to my web site, and kept CodePlex only as the module download location.

  • Moved binaries to PowerShell Gallery

Since CodePlex is gone, the only real option left to ship binaries was the PowerShell Gallery. It was new to me (I had never used it until now) and I was a bit lost at first, but it turned out to be easier than I expected. Starting with v3.2.7, the module is available in the PowerShell Gallery: PSPKI. Please provide feedback on your experience with getting the PowerShell PKI module from the gallery.
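For reference, pulling the module from the gallery is a one-liner (assuming Windows PowerShell 5.0+ or an installed PowerShellGet; the -Scope parameter is optional):

# Install the PSPKI module from the PowerShell Gallery for the current user
Install-Module -Name PSPKI -Scope CurrentUser

# Later, pick up new releases published to the gallery
Update-Module -Name PSPKI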

  • Deprecated MSI Installer

In the past, I used an MSI installer to ship the module. It is still a very good option, because you can use various tools, like Group Policy or ConfigMgr, to deploy the module across an organization. Thanks to Caphyon Advanced Installer and their free NFR license (as a part of my Microsoft MVP award) I was able to do that, and their tool was really great and easy to use. However, my MVP award benefits are uncertain and the PowerShell Gallery is an acceptable tradeoff, so there is no big need for an MSI anymore.

Fixed bugs


Read more →

This article provides descriptive information about signing an enterprise Certification Authority by a commercial Certification Authority (the external root is sometimes referred to as a "common root").

What is Certification Authority Root Signing?

Consider the following scenario: You work for an organization that requires many digital certificates. You want to ensure that these certificates are trusted by other organizations, such as external partners and customers. For example, you might want to use a code signing certificate for an application or a digital signature certificate for signing a document or email.

If you set up your own public key infrastructure (PKI), also known as a private PKI, the certificates you issue will only be trusted internally. For example, you can publish the root certification authority certificate into your Active Directory Domain Services (AD DS) and quickly have your organization's computers trust certificates issued by your PKI. However, external organizations, such as your customers and partners, would not (by default) trust the certificates issued by your PKI. This means they would see validity or trust error messages if they viewed or tried to validate a certificate issued by your PKI.

If, instead, you subordinate your PKI to one of the commercial PKI roots that are trusted by Microsoft Windows installations, you do not have the same problem. By default, Microsoft Windows ships with a set of predefined root CA certificates (well-known commercial root CAs) that are trusted on any Windows installation. For example, if you access the https://login.live.com/ web site, no additional actions are required from the user, because the SSL certificate is issued by a trusted CA.

Conversely, if a remote user tries to access a web site that uses an SSL certificate from a private PKI, the user receives an error message indicating certificate trust issues. When a user application (like Internet Explorer) does not specifically trust a PKI, an error message is presented every time a certificate from that private PKI is presented to the user.
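The same trust decision can be observed programmatically. Here is a minimal sketch (my own illustration, not part of the article) that builds a chain for a certificate file against the local machine's trust stores and reports why validation fails when the issuing PKI is not trusted; the file path is hypothetical:

# Build a certificate chain and inspect the chain status
$cert  = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 "C:\temp\website.cer"
$chain = New-Object System.Security.Cryptography.X509Certificates.X509Chain

if (-not $chain.Build($cert)) {
    # For a private PKI whose root is not installed locally, the typical status is 'UntrustedRoot'
    $chain.ChainStatus | ForEach-Object { "{0}: {1}" -f $_.Status, $_.StatusInformation.Trim() }
}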

To overcome such an issue, you may decide to implement a PKI that utilizes the trust of a well-known and trusted PKI. This allows your organization to issue certificates that can be trusted and recognized worldwide.


Read more →

Hey guys! I was silent for a while due to a lack of good topics to discuss. Today I want to present another piece of my class work at university, this time for the “Compiler Development” course. The task was to write a lexical parser by hand for a language of my choice. I decided to take JSON, because its syntax is relatively simple and covers the most common parsing techniques. In addition, it has a clean, readable BNF grammar that is convenient for custom parser implementations.

The purpose

The purpose of lexical analysis is to read the source code and convert it into a sequence of tokens (lexemes), the minimal meaningful parts of a language. It is important to understand that lexical analysis doesn't perform semantic (meaning) validation. That is, lexical analysis only determines whether the source code is written in the specific language's alphabet; it doesn't mean that the code will execute successfully. Source code semantics are validated only after lexical analysis and use its output (a set of tables of keywords, operators, literals, identifiers, etc.).

You may think there is no need to write your own lexical parser, because there are LOTS of them already. For example, PowerShell contains a built-in JSON encoder and decoder via the ConvertTo-Json and ConvertFrom-Json cmdlets. However, these cmdlets completely hide the parsing result and perform object conversion; you can't access the internal parser to look at the exact results of the parsing. Still, the results of lexical parsers are actively used on the web. For example, JS-based syntax highlighters use a lexical parser to split the source code into tokens and colorize or highlight them for better readability. My website does this as well (though not via JS): all XML and PowerShell code snippets on my blog are colorized by lexical parsers. For PowerShell code I'm using the Tokenize method of the System.Management.Automation.PSParser class, for XML strings I'm using a custom XML tokenizer, and the tokens are colorized according to their types.
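To illustrate what a tokenizer produces, here is a short sketch (the snippet itself is mine, but it calls the PSParser class mentioned above) that splits a PowerShell one-liner into tokens:

# Tokenize a PowerShell snippet and show each token's type and text
$parseErrors = $null
$tokens = [System.Management.Automation.PSParser]::Tokenize(
    'Get-ChildItem -Path C:\ | Sort-Object -Property Name',
    [ref]$parseErrors
)
# Each token carries its type (Command, CommandParameter, Operator, ...) and position
$tokens | Select-Object Type, Content, StartColumn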


Read more →

Hello blog readers!

Here is another tl;dr blog post! Yesterday I completed my winter exam session at university and want to recall one interesting piece of work I did a year ago in the course called “Data Structures and Algorithms”, where we learned various data structures and manipulation algorithms. During the course we implemented them in programming languages and then analyzed them. In the array search class work I had to implement, analyze and compare two search methods: sentinel search and hash table search.

Most search algorithms have linear, O(n), complexity. This means that their performance depends on the array size: the larger the array, the more time is required to find an element in it. There is binary search, which gives O(log n), better than linear, but it still depends on the array size and requires a sorted array; binary search is impossible on unsorted arrays. What's next? Next is a search algorithm that would give us constant, O(1), complexity. This means that regardless of the array size, a search completes in constant time. This algorithm (actually, a data structure) is the hash table.

What is a hash table? It is an associative array that maps keys to data values. Unlike classic arrays, there is no such thing as an array index; instead, the term key value is used. A key is identifying information about a data value. During the class work I learned a lot about hash tables and faced a number of very interesting challenges while attempting to develop a reliable hash table implementation. This blog post will reveal all of them!
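Before diving in, here is a toy sketch of the idea (my own illustration in PowerShell, not the course implementation): the key is hashed into a bucket index, so a lookup scans only one small bucket regardless of how many elements the table holds.

# A toy hash table with separate chaining: 16 buckets, each bucket is a plain array
$bucketCount = 16
$buckets = @($null) * $bucketCount
for ($i = 0; $i -lt $bucketCount; $i++) { $buckets[$i] = @() }

function Get-BucketIndex([string]$Key) {
    # Simple polynomial string hash, reduced modulo the bucket count
    $hash = 0
    foreach ($ch in $Key.ToCharArray()) { $hash = ($hash * 31 + [int]$ch) % $bucketCount }
    $hash
}

function Add-Entry([string]$Key, $Value) {
    $index = Get-BucketIndex $Key
    $buckets[$index] += @{ Key = $Key; Value = $Value }
}

function Find-Entry([string]$Key) {
    $index = Get-BucketIndex $Key
    # Only one bucket is scanned, so lookup cost does not grow with the total element count
    foreach ($entry in $buckets[$index]) { if ($entry.Key -eq $Key) { return $entry.Value } }
}

Add-Entry 'alpha' 1
Add-Entry 'beta'  2
Find-Entry 'beta'   # returns 2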


Read more →

Hello S-1-1-0, PowerShell CryptoGuy (aka @Crypt32) is here again. Today I want to discuss the X.509 Name Constraints certificate extension. It is not widely used, but sometimes it is necessary. As the extension name suggests, it is used to apply constraints, or restrictions, to the certificate subject and the Subject Alternative Names (SAN) extension.

Brief Description

The Name Constraints extension is defined and described in RFC 5280 §4.2.1.10. The presence of the extension in an end-entity certificate has no effect; it applies only to CA certificates that issue certificates to end entities. Once defined, the extension applies restrictions to any certificate that appears below that CA in the tree. Name Constraints may appear again further down the certification path to set more restrictive constraints, but it is not possible to set less restrictive constraints at lower levels. This prevents lower-level (in the certification path sense) CAs from violating restrictions applied at higher levels.
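To check whether a particular CA certificate carries this extension, you can look for its OID (2.5.29.30) in the certificate's extension list. Here is a minimal sketch (my own illustration; the certificate path is hypothetical):

# Locate the Name Constraints extension (OID 2.5.29.30) in a CA certificate
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 "C:\temp\ca1.cer"
$nameConstraints = $cert.Extensions | Where-Object { $_.Oid.Value -eq '2.5.29.30' }

if ($nameConstraints) {
    # Format($true) returns a multi-line textual dump of the permitted and excluded subtrees
    $nameConstraints.Format($true)
} else {
    'Name Constraints extension is not present in this certificate.'
}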

PKI Hierarchy

Figure 1 - sample certificate chain

Here we see a 3-tier PKI hierarchy with the Name Constraints extension applied at the 2nd level (below the root); this is indicated by a yellow triangle. Name Constraints restrictions apply to all directly and indirectly issued certificates: CA-2 doesn't define the Name Constraints extension in its own certificate, but the restrictions still apply indirectly to certificates issued by CA-2.


Read more →