Hello, everyone! Today I’m starting a new community whitepaper publication on certificate autoenrollment in Windows 10 and Windows Server 2016. It is a deeply rewritten version of the whitepaper published 15 years ago by David B. Cross: Certificate Autoenrollment in Windows XP. Certificate enrollment and autoenrollment have changed significantly since the original whitepaper was published, but unfortunately neither Microsoft nor the community has made an effort to update the topic. So I put some effort into exploring the subject and wrote a brand-new whitepaper-style document that covers and reflects all recent changes in certificate autoenrollment.
This whitepaper is a structured compilation of a large number of official Microsoft documents and articles from the TechNet and MSDN sites. A full reference list and a full-featured printable PDF version will be provided in the last post of this series.
The whitepaper uses the following structure:
The first post of the series covers only general questions and the certificate enrollment architecture. It is important to understand how certificate enrollment works in modern Windows operating systems, because autoenrollment relies heavily on this architecture. So, let’s start!
Yesterday I pushed a new PSPKI release, v3.3.0. The new version is even more stable and even more powerful. The detailed technical change list has moved to a dedicated article: Release notes for PSPKI v3.3.0. In this (and possibly the next) blog post I would like to outline the major changes and improvements in this release.
I bet that ADCS database access is one of the most popular features in my module, and for good reason: I put a lot of effort into simplifying access to the CA database and providing flexible filter options. For example, you can get certificates that will expire in the next 30 days.
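A sketch of such a query using the module’s Get-CertificationAuthority and Get-IssuedRequest cmdlets (the CA host name here is a placeholder):

    # Query the CA database for issued certificates whose NotAfter
    # timestamp falls within the next 30 days.
    # "ca01.example.com" is a placeholder CA host name.
    Get-CertificationAuthority "ca01.example.com" |
        Get-IssuedRequest -Filter "NotAfter -ge $(Get-Date)",
                                  "NotAfter -le $((Get-Date).AddDays(30))"

Each returned row carries the request ID, requester name and certificate details, so you can pipe the results straight into further processing.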
Hello world! For the last year or so I was busy with everything but my module. Now I’m happy to announce that the project isn’t dead: it is alive, and a new version is published.
This version doesn’t bring new commands, nor does it deprecate any. I think the command list is well established, I don’t see anything useful to add, and people aren’t asking for anything either. However, there is still work to do on the code: refactoring, optimizing, making it cleaner and so on. Let’s look at what I’ve done here:
Initially, the project was hosted on CodePlex, which is now dead. I moved all my sources to GitHub and the documentation to my web site, and used CodePlex only as the module download location.
Since CodePlex is gone, the only real option to ship binaries was the PowerShell Gallery. It was something new to me (I had never used it until today) and I was a bit lost there, but it turned out to be easier than I thought. Starting with v3.2.7, the module is available on the PowerShell Gallery: PSPKI. Please provide feedback on your experience with getting the PowerShell PKI module from the gallery.
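For those new to the gallery, installation is a one-liner, assuming you have the PowerShellGet cmdlets available (built into Windows 10 and WMF 5):

    # Install the module from the PowerShell Gallery for the current user
    Install-Module -Name PSPKI -Scope CurrentUser

    # Later, update to the newest published version
    Update-Module -Name PSPKI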
In the past, I used an MSI installer to ship the module. It is still a very good option, because you can use tools such as group policies or ConfigMgr to deploy the module within an organization. Thanks to Caphyon Advanced Installer and their free NFR license (part of my Microsoft MVP award) I was able to do that, and their tool was really great and easy to use. However, my MVP award renewal is uncertain and the PowerShell Gallery is an acceptable tradeoff, so there is no big need for an MSI anymore.
Hey guys! I was silent for a while due to a lack of good topics to discuss. Today I want to present another piece of my university class work for the “Compiler Development” course. The task was to write a manual lexical parser for a language of my choice. I decided to take JSON, because its syntax is relatively simple while still requiring the most common parsing techniques. In addition, it has a clean BNF grammar for custom parser implementations.
The purpose of lexical analysis is to read the source code and convert it to a sequence of tokens (lexemes), which are the minimal meaningful parts of a language. It is important to understand that lexical analysis doesn’t perform semantic (meaning) validation. That is, lexical analysis only determines whether the source code is written in a specific language’s alphabet; it doesn’t mean that the code will execute successfully. Source code semantics are validated only after lexical analysis, using its products (a set of tables: keywords, operators, literals, identifiers, etc.).
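To make this concrete, here is a minimal tokenizer sketch in PowerShell. This is not my actual class implementation: the Split-JsonToken name and the token type labels are mine, and string escape sequences are ignored for brevity.

    # A minimal illustrative JSON tokenizer. It recognizes structural
    # characters, strings (no escape handling), numbers and the
    # true/false/null keywords, and emits Type/Value pairs.
    function Split-JsonToken([string]$json) {
        $i = 0
        while ($i -lt $json.Length) {
            $rest = $json.Substring($i)
            if ($json[$i] -match '\s') {                  # skip whitespace
                $i++
            } elseif ($json[$i] -match '[{}\[\]:,]') {    # structural characters
                [pscustomobject]@{Type = 'Punctuator'; Value = [string]$json[$i]}
                $i++
            } elseif ($json[$i] -eq '"') {                # string literal
                $end = $json.IndexOf('"', $i + 1)
                if ($end -lt 0) { throw "Unterminated string at position $i" }
                [pscustomobject]@{Type = 'String'; Value = $json.Substring($i, $end - $i + 1)}
                $i = $end + 1
            } elseif ($rest -match '^-?\d+(\.\d+)?([eE][+-]?\d+)?') {
                [pscustomobject]@{Type = 'Number'; Value = $Matches[0]}
                $i += $Matches[0].Length
            } elseif ($rest -match '^(true|false|null)') {
                [pscustomobject]@{Type = 'Keyword'; Value = $Matches[0]}
                $i += $Matches[0].Length
            } else {
                throw "Unexpected character '$($json[$i])' at position $i"
            }
        }
    }

    # Produces a token stream: Punctuator '{', String '"enabled"', ...
    Split-JsonToken '{"enabled": true, "count": 42}'

Note that the tokenizer happily accepts input like "]42[" that no JSON parser would allow: whether the token sequence forms a valid document is exactly the semantic question that is answered later, after lexical analysis.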
You might think there is no need to write your own lexical parser, because there are LOTS of them already. For example, PowerShell contains a built-in JSON encoder and decoder via the ConvertTo-Json and ConvertFrom-Json cmdlets. However, these cmdlets completely hide the parsing result and perform object conversion; you can’t access the internal parser to look at the exact results of the parsing. Still, the results of lexical parsers are actively used on the web. For example, JS-based syntax highlighters use a lexical parser to split the source code into tokens and colorize or highlight them for better readability. My website does this as well (though not via JS): all XML and PowerShell code snippets on my blog are colorized by using lexical parsers. For PowerShell code I’m using the Tokenize method of the System.Management.Automation.PSParser class; for XML strings I’m using a custom XML tokenizer; and the snippets are colorized according to their token types.
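For illustration, this is roughly how the PSParser-based tokenization looks (the script text being tokenized is just a sample):

    # Tokenize a PowerShell snippet; $errors receives any parse errors.
    $errors = $null
    $tokens = [System.Management.Automation.PSParser]::Tokenize(
        'Get-ChildItem -Path C:\ | Sort-Object Length',
        [ref]$errors)

    # Each token carries its Type (Command, Parameter, String, ...) plus
    # Start/Length positions that a highlighter can use to colorize the source.
    $tokens | Select-Object Type, Content, Start, Length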