This article provides descriptive information about enterprise Certification Authority signing by a commercial Certification Authority (sometimes an external root is referred to as a "common root").

What is Certification Authority Root Signing?

Consider the following scenario: You work for an organization that requires many digital certificates. You want to ensure that these certificates are trusted by other organizations, such as external partners and customers. For example, you might want to use a code signing certificate for an application or a digital signature certificate for signing a document or email.

If you set up your own public key infrastructure (PKI), also known as a private PKI, the certificates you issue will only be trusted internally. For example, you can publish the root certification authority certificate into your Active Directory Domain Services (AD DS) and quickly have your organization's computers trust certificates issued by your PKI. However, external organizations, such as your customers and partners, would not (by default) trust the certificates issued by your PKI. This means they would see validity or trust error messages if they viewed or tried to validate a certificate issued by your PKI.
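As a quick illustration, a root CA certificate can be published to AD DS with certutil, after which domain members add it to their trusted root store automatically (the file name below is just a placeholder):

certutil -dspublish -f RootCA.cer RootCA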

If instead you subordinate your PKI to one of the commercial root CAs that are trusted by Microsoft Windows installations, you do not have the same problem. By default, Microsoft Windows ships with a set of predefined root CA certificates (well-known commercial root CAs), which are trusted on any Windows installation out of the box. For example, if you access the https://login.live.com/ web site, no additional action is required from the user, because the SSL certificate is issued by a trusted CA.
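You can see this predefined set yourself by listing the machine's trusted root store in PowerShell:

# list the root CA certificates trusted by this Windows installation
Get-ChildItem Cert:\LocalMachine\Root | Select-Object Subject, NotAfter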

Conversely, if a remote user tries to access a web site that uses an SSL certificate from a private PKI, the user receives an error message indicating certificate trust issues. When a user application (like Internet Explorer) does not specifically trust a PKI, an error message is presented each time a certificate from that private PKI is presented to the user.

To overcome such an issue, you may decide to implement a PKI that utilizes the trust of a well-known and trusted PKI. This allows your organization to issue certificates that can be trusted and recognized worldwide.


Read more →

Hey guys! I was silent for a while due to a lack of good topics to discuss. Today I want to present another piece of my university class work for the “Compiler Development” course. The task was to write a manual lexical parser for a language of my choice. I decided to take JSON, because its syntax is relatively simple and exercises the most common parsing techniques. In addition, it has a clean BNF grammar that is well suited to custom parser implementations.

The purpose

The purpose of lexical analysis is to read the source code and convert it to a sequence of tokens (lexemes), the minimal meaningful units of the language. It is important to understand that lexical analysis doesn’t perform semantic (meaning) validation. That is, lexical analysis only determines whether the source code is written in a specific language’s alphabet; it doesn’t mean that the code will execute successfully. Source code semantics are validated only after lexical analysis, using its output (tables of keywords, operators, literals, identifiers, etc.).
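As a quick illustration (a toy sketch, not the class-work parser), even a single regular expression can split a tiny JSON document into lexemes:

# toy example: split a tiny JSON document into lexemes with a regular expression
$json = '{"enabled": true, "count": 42}'
$pattern = '("(?:[^"\\]|\\.)*")|(-?\d+(?:\.\d+)?)|(true|false|null)|([{}\[\]:,])'
[regex]::Matches($json, $pattern) | ForEach-Object { $_.Value }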

You might think there is no need to write your own lexical parser, because there are LOTS of them. For example, PowerShell contains a built-in JSON encoder and decoder via the ConvertTo-JSON and ConvertFrom-JSON cmdlets. However, these cmdlets completely hide the parsing result and perform object conversion; you can’t access the internal parser to look at the exact results of the parsing. Still, the results of lexical parsers are actively used on the web. For example, JS-based syntax highlighters use a lexical parser to split the source code into tokens and colorize or highlight them for better readability. My website does it as well (though not via JS). For example, all XML and PowerShell code snippets on my blog are colorized by using lexical parsers: for PowerShell code I’m using the Tokenize method of the System.Management.Automation.PSParser class, for XML strings I’m using a custom XML tokenizer, and then I colorize tokens according to their types.
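For example, here is roughly how the built-in PowerShell tokenizer mentioned above can be called to get a token stream (the sample script text is arbitrary):

# tokenize an arbitrary PowerShell snippet with the built-in PSParser
$parseErrors = $null
$script = 'Get-Process | Where-Object {$_.CPU -gt 10}'
$tokens = [System.Management.Automation.PSParser]::Tokenize($script, [ref]$parseErrors)
$tokens | Select-Object Type, Content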


Read more →

Hello blog readers!

Here is another tl;dr blog post! Yesterday I completed my winter exam session at university and want to recall an interesting piece of work I did a year ago in the “Data structures and algorithms” course, where we learned various data structures and manipulation algorithms. During the course we implemented them in programming languages and then analyzed them. In the array search class work I had to implement, analyze and compare two search methods: sentinel search and hash table search.

Most search algorithms have linear, O(n), complexity. This means that their performance depends on the array size: the larger the array, the more time is required to find an element. Binary search gives O(log n), which is better than linear, but it still depends on the array size and requires a sorted array; binary search is impossible on unsorted arrays. What’s next? Next is a search algorithm that gives us constant, O(1), complexity. This means that regardless of array size, the search completes in constant time. This algorithm (actually, a data structure) is the hash table.

What is a hash table? It is an associative array that maps keys to data values. Unlike classic arrays, there is no such term as an array index; instead, the term key is used. A key is identifying information about a data value. During the class work I learned a lot about hash tables and faced a number of very interesting challenges while attempting to develop a reliable hash table implementation. And this blog post will reveal all of them!
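To give a flavor of the idea (a minimal sketch, not the class-work code): a key is run through a hash function that maps it to a bucket index, so the lookup cost doesn’t grow with the number of stored elements.

$bucketCount = 16
$buckets = @($null) * $bucketCount

function Get-BucketIndex([string]$key) {
    # a deliberately simple hash function: fold character codes into a bucket index
    $hash = 0
    foreach ($ch in $key.ToCharArray()) { $hash = ($hash * 31 + [int]$ch) % $bucketCount }
    return $hash
}

# store and retrieve a value; a real implementation would also handle collisions (e.g. by chaining)
$buckets[(Get-BucketIndex "username")] = "CryptoGuy"
$buckets[(Get-BucketIndex "username")]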


Read more →

Hello S-1-1-0, PowerShell CryptoGuy (aka @Crypt32) is here again. Today I want to discuss the X.509 Name Constraints certificate extension. It is not widely used, but sometimes it is necessary. As the extension name suggests, it is used to apply constraints or restrictions to the certificate subject and the subject alternative names (SAN) extension.

Brief Description

The Name Constraints extension is defined and described in RFC 5280 §4.2.1.10. The extension's presence in an end-entity certificate does not have any effect; it applies only to CA certificates that issue certificates to end entities. Once defined, the extension applies restrictions to any certificates that appear below that CA in the tree. Name Constraints may appear further down the certification path to set more restrictive constraints; it is not possible to set less restrictive constraints at lower levels. This prevents low-level (in the certification path meaning) CAs from violating restrictions applied at higher levels.
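For example, you can quickly check whether a CA certificate carries this extension with a few lines of PowerShell (the certificate file path below is just a placeholder):

# dump the Name Constraints extension (OID 2.5.29.30) from a CA certificate, if present
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 "C:\temp\ca-cert.cer"
$nc = $cert.Extensions | Where-Object {$_.Oid.Value -eq "2.5.29.30"}
if ($nc) { $nc.Format($true) } else { "Name Constraints extension is not present." }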

PKI Hierarchy

Figure 1 - sample certificate chain

Here we see a 3-tier PKI hierarchy with the Name Constraints extension applied at the 2nd level (below the root), indicated by a yellow triangle. Name Constraints restrictions apply to all directly and indirectly issued certificates. CA-2 doesn’t define the Name Constraints extension in its own certificate, but the restrictions still apply indirectly to certificates issued by CA-2.


Read more →

Hello S-1-1-0!

In the previous post we gave an introduction to techniques for working with certificate revocation lists (CRLs) in PowerShell. We explored common steps to read a CRL’s basic information, CRL extensions and the revoked certificate collection. Today I will discuss handy CRL shortcuts and signature validation.

Get CRL next publication date and number

In some environments it is impossible to automatically copy CRLs from the CA server to CRL distribution points, or PKI administrators run custom scripts to monitor CRL health at the CRL distribution points and update CRLs that are about to expire. For such purposes I maintain two shortcut methods to quickly identify the required values.

CRL validity is determined by the NextUpdate field: if the current time passes that timestamp, the CRL is considered expired. To provide better validity handling, Microsoft uses its own Next CRL Publish CRL extension. This extension contains a date/time value at which the CA will issue a new CRL. This value (when present) is always set prior to the value in the NextUpdate field, to provide a time window in which to replicate the newly published CRL across all distribution points before existing copies expire. I have a good article on this subject: How ThisUpdate, NextUpdate and NextCRLPublish are calculated (v2). However, the Next CRL Publish extension is present only in CRLs issued by Microsoft CAs and is absent in CRLs from 3rd-party CAs; as a result, their next CRL publication date is determined solely by the NextUpdate field. Moreover, there is the case when a CA is in the decommission process and issues its last CRL, which is supposed to be valid indefinitely.
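Just to illustrate how these fields relate to each other (the timestamps below are purely hypothetical):

# Next CRL Publish (when present) precedes NextUpdate, leaving a replication window
$thisUpdate     = Get-Date "2015-02-01 08:00"
$nextCrlPublish = $thisUpdate.AddDays(7)        # the CA will issue a new CRL at this point
$nextUpdate     = $nextCrlPublish.AddHours(10)  # the current CRL actually expires here

# the effective "get a fresh CRL by" deadline: Next CRL Publish if present, otherwise NextUpdate
$deadline = if ($nextCrlPublish) { $nextCrlPublish } else { $nextUpdate }
"A new CRL must reach all distribution points before: $deadline"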


Read more →