VSoft Technologies Blogs



Back in Feb 2019, I blogged about the need for a Package Manager for Delphi. The post garnered lots of mostly useful feedback and encouragement, but until recently I could never find a solid block of time to work on it. Over the last few weeks I've been working hard to get it to an MVP stage.

DPM is an open source package/library manager for Delphi XE2 or later. It is heavily influenced by NuGet, so the CLI, docs etc. will seem very familiar to NuGet users. Delphi's development environment is quite different from .NET's, with different challenges to overcome, so whilst I drew heavily on NuGet, DPM is not identical to it. I also took a close look at many other package managers from other development ecosystems.

What is a Package Manager

A package manager provides a standard for developers to share and consume code. Authors create packages that other developers can consume. The package manager provides a simple way to automate the installation, upgrading or removal of packages. This streamlines the development process, allowing developers to get up and running on a project quickly, without needing to understand the (usually ad hoc) way the project or organization has structured their third party libraries. This also translates into simpler build/CI processes, with fewer 'compiles on my machine' style issues.

Who and Why

DPM’s initial developer is Vincent Parrett (author of DUnitX, FinalBuilder, Continua CI etc). Why is discussed in this blog post.

DPM Status

DPM is still in development, so not all functionality is ready yet. At this stage I would encourage library authors to take a look, play with it, and provide feedback (and perhaps get involved in the development). It's very much at a minimum viable product stage. Potential users are of course welcome to look at it and provide feedback too; it's just that there are no packages for it yet (there are some test packages in the repo, and I'll be creating packages for my open source libraries).

What works

  • Creating packages
  • Pushing packages to a package source
  • Installing packages, including dependencies
  • Restoring packages, including dependencies
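The workflow maps closely onto NuGet's. The commands below are an illustrative sketch only; the file names are made up, and the exact switches may differ, so check the DPM command line documentation for the real options:

```shell
# Create a package from a package spec file (file names illustrative)
dpm pack MyLib.dspec

# Push the resulting package to a package source (a local folder at this stage)
dpm push MyLib.1.0.0 -source local

# Install a package (and its dependencies) into a project
dpm install MyLib

# Restore all packages referenced by a project, e.g. on a CI server
dpm restore
```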

How do I use it

The documentation is at http://docs.delphipm.org

See the getting started guide.

The command line documentation can be found here.

The Source is on GitHub https://github.com/DelphiPackageManager/DPM

Is DPM integrated into the Delphi IDE

Not yet, but it is planned. If you are a whiz with the Open Tools API and want to contribute, let us know.

Is there a central package source

Not yet, but it is planned. At the moment, only local folder-based sources are supported. The client code architecture has provision for HTTP-based sources in the future; right now, however, we are focused on nailing down the package format, dependency resolution, installation, updating packages etc.

Is my old version of Delphi supported

Maybe, see here for supported compiler versions. All target platforms for supported compiler versions are supported.

What about C++ Builder or FPC

See here.

Does it support design time components

Not yet, but that is being worked on.

How does it work

See this page

In this post, we'll take a look at the various options for managing and updating Version Info in Delphi projects using FinalBuilder.

Windows Version Info Primer

Windows Version Info (i.e. the version info shown in Explorer) is stored in a VERSIONINFO resource inside the executable (exe or dll). These resources are created by writing a .rc file and compiling it with either the Windows resource compiler (rc.exe) or the resource compiler Delphi provides (brcc32 or cgrc, depending on the Delphi version). This produces a .res file, which can be linked into the exe at compile time by referencing it in the source code, e.g.:

{$R 'myresource.res'}

I highly recommend familiarising yourself with the VERSIONINFO resource type and its parts.
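For reference, a minimal VERSIONINFO resource script looks something like the sketch below. The company, product and version values are placeholders; see Microsoft's VERSIONINFO documentation for the full set of blocks and keys:

```rc
1 VERSIONINFO
FILEVERSION     1,0,0,0
PRODUCTVERSION  1,0,0,0
BEGIN
  BLOCK "StringFileInfo"
  BEGIN
    BLOCK "040904E4"  // U.S. English, codepage 1252
    BEGIN
      VALUE "CompanyName",     "My Company\0"
      VALUE "FileDescription", "My Application\0"
      VALUE "FileVersion",     "1.0.0.0\0"
      VALUE "ProductName",     "My Product\0"
      VALUE "ProductVersion",  "1.0.0.0\0"
    END
  END
  BLOCK "VarFileInfo"
  BEGIN
    VALUE "Translation", 0x409, 1252
  END
END
```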

Delphi IDE Support for Version Info

The Delphi IDE creates a [YourProjectName].res file next to the dpr or dpk when you create a new project. This is where the IDE places the VERSIONINFO resource when you enable the option to "Include version information in project". When you compile the project in the IDE, the IDE will regenerate this res file with updated version info (if needed) before it is linked into the executable. For exes, this resource file also includes the MAINICON resource (the icon shown in Explorer).

You do not have to use this feature. You can leave the option turned off and manage the version info yourself, by creating your own resource script (.rc) with a VERSIONINFO structure, compiling it, and referencing the resulting .res file in your source code. You can even just reference the .rc file

{$R 'myresource.res' 'myresource.rc'}

and the IDE will compile the rc file and link in the resulting res file. The caveat to this technique is that the command line compilers (dcc32, dcc64 etc.) do not support it.

If your binary doesn't have version info, or has incorrect version info, it's typically because:

1) The version info resource wasn't referenced in the source and so wasn't linked in.
2) There are duplicate VERSIONINFO resources linked; Windows will pick the first one it finds.
3) You set the version info on the wrong IDE configuration (more on this below).

The Delphi IDE, along with the dproj file (which is an MSBuild project file), uses a convoluted configuration inheritance mechanism to set project properties, including version info. Many a developer has been caught out by this scheme, setting the version info at the wrong node in the hierarchy and ending up with no version info in their executables. There have also been issues with dproj files that have been upgraded through multiple versions of Delphi over the years.

Using FinalBuilder

In the development environment the version info usually doesn't matter too much, but for formal releases it's critical, so it's best to leave setting/updating your version info to your automated build tool or your continuous integration server. In FinalBuilder we have invested a lot of time and energy into making version info work correctly with all the different versions of Delphi, dealing with the vagaries and subtle differences of each version (and there are many!).

On the Delphi action in FinalBuilder, the Version Info tab presents the version info in a similar way to how older versions of the Delphi IDE did (a much nicer UI than the current version's, imho). This UI lets you control all the various version info properties (and there are a lot!). Note that these will only take effect if you have the "Regenerate resource" option checked on the Project tab (i.e. regenerate yourproject.res).

Note that the Major, Minor, Release and Build fields are numeric spin edits, and cannot take FinalBuilder variables. That can easily be worked around with some simple scripting in the BeforeAction script event (JavaScript):
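A minimal sketch of the idea: parse the version numbers out of a variable's string value, then assign them to the action's version fields. The parsing below is plain JavaScript; the FinalBuilder object and property names shown in the trailing comment are assumptions, so check the FinalBuilder scripting documentation for the actual object model:

```javascript
// Split a version string such as "1.4.2.107" into its four numeric parts.
function parseVersion(versionString) {
  var parts = versionString.split(".");
  return {
    major:   parseInt(parts[0], 10) || 0,
    minor:   parseInt(parts[1], 10) || 0,
    release: parseInt(parts[2], 10) || 0,
    build:   parseInt(parts[3], 10) || 0
  };
}

// In a BeforeAction script event you would then assign the parts to the
// action's version fields, along these (illustrative) lines:
//   var v = parseVersion(FBVariables.ProductVersion);
//   Action.MajorVer = v.major;
//   Action.MinorVer = v.minor;
//   Action.Release  = v.release;
//   Action.Build    = v.build;
```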

Another option is to use Property Sets to provide the source of the version info. Property sets are especially useful when you have multiple actions that need the same version info, or at least need to share the same version numbers. Creating a Property Set is trivial: just drop a PropertySet Define action on your target, before the Delphi action. In the PropertySet Define action, select Win32 Version Info to manage all version info properties, or Win32 Version Numbers to have the property set manage just the major, minor, release and build numbers.

To set the property set values, add a PropertySet Assign Values action before the Delphi action.

Then in the Delphi action it's a simple task to select the property set on the Version Info tab.

Notice that the version number fields are disabled, since they are now provided by the property set. If you choose the Win32 Version Info property set type, more fields are disabled.

One last thing I should mention is that, along with the version info and the MAINICON, the project.res file also typically (for Windows, at least) contains the manifest. I recommend you look at the Resource Compiler tab, which lets you choose which resource compiler to use (more useful in older versions of Delphi, where brcc32 didn't cope with some hi-colour icon types) and specify the manifest file. I discussed Windows manifest files a few years ago in this blog post.

This new beta release includes substantial improvements to the expression engine, including several new expression objects and functions. We have also made some updates to the stage editor, implemented automatic report generation for some reporting actions, and added several new deployment actions providing support for Docker, Azure, SQL packages, file transfer and SSH.

Continue reading for details of all the new features.

Enhanced expressions engine

The expression engine in Continua CI evaluates expression objects and variables denoted with $ and % characters. It also provides auto-completion suggestions when such expressions are typed into expression fields. It has now been overhauled to include function return types, chaining of functions, nesting of functions as function parameters, selection and filtering of collections, and many improvements to expression parsing. We have added several new functions, objects and collections to give access to more values and allow you to manipulate those values.

Expression in Set Variable action

You can now, for example, use the following expression to get the time that the penultimate build stage finished;


combine the result of multiple flags by chaining functions, as in this expression;


or use the following expression to get the comment of the first build changeset in the build containing the word 'merge' (ignoring case):

$Source.SuperFancyRepo.Changesets.First(Comment, Contains, "merge", true).Comment$

We have also included functions to get the value of a variable as a type, allowing you to use properties or functions on the variable value.

You can, for example, now use the following expression to get the abbreviated day of the week from a variable entered using a DateTime prompt;

$Utils.GetDateTime(%DateTimeTest%).DayOfWeek.Substring(0, 3)$

use expressions to do some more complex maths on a Numeric variable;


or get the first selected value in a checkbox select variable with this expression:


You can see a full list of available expression objects, collections and functions on the Expression Objects page of the documentation.

Auto-completion has also been revamped to show more information in the suggestions list. A list of parameters, with types, is now shown for each function. Descriptions are also displayed on mouse-over for each object, collection and function in the suggestions list. We have also removed some annoying quirks with expression auto-completion where the cursor would end up in the wrong place or end characters would be added in the wrong place.

Stage editor changes

As Continua CI matures, the number of actions (and categories) has increased. This can make it more difficult to find the action you need. We have therefore redesigned the action list.

The list of categories has been pulled up into a drop down menu with all actions listed below by default.

Action list categories

The filtering of actions using the search box is now fuzzier, using partial and keyword matches.

Action list search

Stage buttons now resize (up to a maximum) to fit the stage name. If your stage names are short, this means you can fit more stages into your browser width. If your stage names are long, the text will no longer escape the stage borders. Really long stage names which do not fit the maximum stage button size will now be truncated.


All actions now include a Validate button to allow you to check that all fields have valid values before saving.

New premium deployment actions

We have added a set of premium actions which can be used for deploying the results of your build. The following actions can only be used if you have purchased one or more concurrent build licenses.

File Transfer action: This allows you to upload files to a remote server via FTP, FTPS and SFTP.

SSH Run Script action: This can be used to run a script or list of commands on an SSH server.

Azure actions: Several new actions are available to allow you to deploy web apps, function apps, files and blobs to Azure.

Azure actions

  • Create Azure Resource Group
  • Delete Azure Resource Group
  • Create Azure App Service Plan
  • Delete Azure App Service Plan
  • Create Azure Web App
  • Deploy Azure Web App
  • Upload Azure Web App
  • Control Azure Web App
  • Delete Azure Web App
  • Create Azure Function
  • Deploy Azure Function
  • Delete Azure Function
  • Create Azure Storage Account
  • Get Azure Storage Account Keys
  • Delete Azure Storage Account
  • Create Azure Storage Container
  • Delete Azure Storage Container
  • Upload Azure Blob
  • Delete Azure Blob
  • Create Azure File Share
  • Delete Azure File Share
  • Create Azure Directory
  • Delete Azure Directory
  • Upload Azure File
  • Delete Azure File

Docker actions: These new actions are available to allow you to build, deploy and manage Docker containers.

  • Docker Build
  • Docker Command
  • Docker Commit
  • Docker Inspect
  • Docker Pull
  • Docker Push
  • Docker Run
  • Docker Stop
  • Docker Tag

SQL Package actions: These new actions allow you to create, update and export SQL Server database schemas and table data.

  • SQL Package Export
  • SQL Package Extract
  • SQL Package Import
  • SQL Package Publish
  • SQL Package Script

Other new and updated actions

Extent Reports: Wrapper for the Extent Reports CLI for reporting on NUnit results.

ReportGenerator: Updated to include all the latest command line options.

Rename Directory: Does what it says on the tin.

Automatic reporting

Currently, there are a few steps to configure when setting up a report. You have to ensure that the report files are included in the Workspace Rules and that the report is defined in the Reports section of the configuration wizard. Furthermore, it's also recommended to include the report files in the artifact rules so that you can control when they are cleaned up.

To simplify this process, we have added a new option to the actions which generate reports (FinalBuilder, ReportGenerator and the new Extent Reports action) to automatically register the report with the server. Ticking this option shows a new tab where you can enter the name, description and run order of the report. When a stage completes, any report files generated by actions with this option turned on will automatically be copied to the server workspace. The main report file will be registered as a report and all report files will be registered as artifacts.

FinalBuilder automatic report option

Download the installers for Continua CI v1.9.1 Beta from the Downloads page.

Delphi/Rad Studio desperately needs a proper package/library/component manager. A package manager provides a standardized way of consuming third party libraries. At the moment, use of third party libraries is very much ad hoc, and in many cases this makes it difficult to move projects between machines, or to get a new hire up and running quickly.

Other development environments, like the .NET and JavaScript ecosystems, recognised and solved this problem many years ago. Getting a .NET or JavaScript project up and running in a new working folder or on a new machine is trivial.

With Delphi/Rad Studio, it's much harder than it should be. In consulting work, I made it a point to see how clients were handling third party code, and every client had a different way. The most common technique was... well, best described as ad hoc (with perhaps a readme listing the third party products to install). Getting that code compiling on a CI server was a nightmare.

Existing Package Managers

Embarcadero introduced their GetIt package manager with XE8, and the GetIt infrastructure has certainly made the installation of RAD Studio itself a lot nicer. But as a package manager for third party libraries, it comes up short in a number of areas.

There is also Delphinus, which is an admirable effort, but it hasn't gained much traction, possibly due to its being strongly tied to GitHub (you really need a GitHub account to use it, otherwise you get API rate limiting errors).

Rather than pick apart GetIt or Delphinus, I'd like to outline my ideas for a Delphi package manager. I spend a lot of time working with .NET (NuGet) and JavaScript (npm, yarn), so they have very much influenced what I lay out below.

I have resurrected an old project (from 2013) that I shelved when GetIt was announced. I have spent a good deal of time thinking about package management (not just in Delphi), but I'm sure I haven't thought of everything, so I'd love to hear feedback from people interested in contributing to this project, or just from potential users.

Project Ideals

These are just some notes that I wrote up when I first started working on this back in 2013. I've tried to whip them into some semblance of order for presentation here, but they are just a rough outline of my ideas.

Open Source

The project should be open source. Of course we welcome contributions from commercial entities, but the direction of the project will be controlled by the community (i.e. users). The project will be hosted on GitHub, and contributions will be made through pull requests, with contributions being reviewed by the steering committee (TBA).

Public Package Registry

There will be a public website/package server, where users can browse the available packages, and package authors can upload packages. This will be a second phase of the project, with the initial phase being focused on getting a working client/package architecture, with a local or network share folder for the package source.

The package registry should not be turned into a store. Once a public package registry/server is available, evaluation packages could be allowed, perhaps for a fee (web hosting is not free). Commercial vendors will of course be able to distribute commercial packages directly to their customers, as the package manager will support hosting packages in a shared network or local directory. Package metadata will include flags to indicate whether packages are commercial, eval or free/open source. Users will be able to decide which package types show up in their searches.

Package Submission

Package submission to the public registry should be a simple process, with no filling in, signing and faxing of forms! We will follow the lead of nuget, npm, ruby etc. on this. There should be a dispute process for package names, copyright infringement etc. There will also be the ability to assign ownership of a package, for example when project ownership changes.

Package Authors will be able to reserve a package prefix, in order to prevent other authors from infringing on their names or copyrights. For example, Embarcadero might reserve Emb. as their prefix, TMS might reserve TMS. as theirs. (of course I'm hoping to get both on board). The project will provide a dispute resolution process for package prefixes and names.

Delphi specific challenges

Delphi presents a number of challenges when compared to the .net or nodejs/javascript world.


With npm, packages contain source (typically minified and obfuscated) which is pure JavaScript. Compatibility is very high.

With NuGet, packages contain compiled (to .NET IL) assemblies. A package might contain a few different versions that target different versions of the .NET framework. Again, compatibility is pretty good; an assembly compiled against .NET 2.0 will work on .NET 4.7 (.NET Core breaks this, but it has a new compatibility model, netstandard).

If we look at Delphi, binary compatibility between Delphi compiler versions is pretty much non-existent (yes, I know about 2006/2007 etc.). The dcu, dcp and bpl files are typically only compatible with the compiler version they were built with. They are also only compatible with the platform they were generated for (so you can't share dcus between 32 and 64 bit Windows, or between iOS and Android). So we would need to include binaries for each version of Delphi we want our library to support. This also has major implications for library dependencies. Whereas npm and nuget define dependencies as a range of versions, a binary dependency in Delphi would be fixed to one specific version. There is a way to maintain binary compatibility between releases, provided the interfaces do not change; however, exactly what the rules are for this is hard to come by, so for now we'll ignore that possibility.

That limits the scope for updating to newer versions of libraries, but it can be overcome by including the source code in the package and compiling the library on the fly during install. My preference would be for pre-compiled libraries, as that speeds up the build process (of course, that's an area I have a particular interest in). In continuous integration environments you want to build fast and build often; rebuilding library code with each CI build would be painful (speaking from experience here: 50% of the time building FinalBuilder is spent building the third party libraries).

There's also the consideration of Debug vs Release: if we are including binaries, Release builds would be required, but Debug optional? The size of a package file could be problematic. If the package contains pre-compiled binaries for multiple compiler versions, it could get rather large. So perhaps allow for packages that support either a single compiler version, or multiple? The compilers supported would be exposed in the package metadata, and perhaps also in the package file name. Feedback and ideas around this would be welcome.

Package files would be (as with other package managers) simple zip files, each including a metadata (xml) file which describes the contents of the package, and folders containing binaries, source, resources etc. Packages will not contain any scripts (i.e. to build during install) for security reasons (I don't want to be running random scripts). We will need to provide a way to compile during install (using a simple DSL to describe what needs to be done); this still needs a lot of thought (and very much involves dependencies).
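To make the idea concrete, the metadata file might look something like the sketch below. This is purely illustrative; every element name and value here is an assumption, not a finalised format:

```xml
<!-- Hypothetical package metadata; all element names are illustrative only -->
<package>
  <metadata>
    <id>Acme.AwesomeLib</id>
    <version>1.2.0</version>
    <authors>Jane Developer</authors>
    <!-- which compiler versions and platforms the binaries were built for -->
    <compilers>XE7;10.2;10.3</compilers>
    <platforms>Win32;Win64</platforms>
    <!-- commercial / eval / opensource flags, as discussed above -->
    <flags>opensource</flags>
    <dependencies>
      <!-- a binary dependency is pinned to one exact version -->
      <dependency id="Acme.Utils" version="2.0.1" />
    </dependencies>
  </metadata>
</package>
```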

Library/Search Paths

Say goodbye to the IDE's library path. It was great back in 1995, when we had a few third party libraries and a few projects, and we just upgraded the projects to deal with library versioning (just get on the latest). It's simply incompatible with the notion of using multiple versions of the same libraries these days.

I rarely change major versions of a library during the lifespan of a major release of my products; I might, however, take minor updates for bugfixes or performance improvements. The way to deal with this is simply to use the project search path. Project A can use version 1 of a library, Project B can use version 9, all quite safely (design time components do complicate this).

Where a project targets multiple platforms, installing a package should install for all platforms it supports, but it should be possible for the user to specify which platforms they need the package installed for.

Design time Component Installation

The Rad Studio IDE only allows one version of a design time package to be installed at a time. So when switching projects, which might use different versions of a component library, we would need a system that is aware of component versions, and can uninstall/install components on the fly, as projects are loaded.

I suspect this will be one of the biggest project hurdles to overcome; it will require someone with very good Open Tools API knowledge (i.e. not me!).


Dependencies

Libraries that depend on other libraries will need to specify those dependencies in a metadata file, such that they can be resolved during installation. As I mentioned above, binary compatibility issues make dependency resolution somewhat more complicated, but not insurmountable. The resolution algorithm will need to take into account compiler version and platform. It will also need to handle the case where a package is compiled from source; for example, binary-only packages should not be allowed to depend on source-only packages (to ensure compatibility). If we end up with install-time package compilation, then some serious work will be needed on the dependency tree algorithm to work out what else needs to be done during install (i.e. do any dependencies need to be recompiled?).

This is certainly more complicated than other platforms, and a significant amount of work to get right (ps, if you think it isn't, you haven't considered all the angles!)

General Considerations

Package Install/Restore

The user should be able to choose from a list of packages to install. When a package is installed, this would be recorded either in the dproj, or in a separate file alongside the dproj. The install process will update the project search paths accordingly. Package metadata would control what gets added to the search paths; my preference would be for one folder per package, as that would keep the search path shorter, which improves compile times.

When a project is loaded, the dproj (or packages config file) would be checked, and any missing packages restored automatically. This should also handle the situation where a project is loaded in a different IDE version.


Package Signing

We should allow for signing of packages, such that the signatures can be verified by the client(s). Clients should be able to choose whether to allow only signed packages, or both signed and unsigned, and what to do when signature verification fails. This will give users certainty about the authenticity and integrity of a package (i.e. where it comes from and whether it has been modified/tampered with).


Clients

It is envisaged that there will be at least two clients: a command line tool and a Rad Studio IDE plugin. Clients will download packages and add them to project/config search paths. A local package cache will help with performance, avoiding repeated package downloads, and will also reduce disk space demands. The clients will also detect available updates to packages, and package dependency conflicts.

Command line Client

The command line tool will be similar to nuget or npm, providing the ability to create packages, install or restore missing packages, update packages etc. The tool should allow the compiler versions and platforms to be specified, as these cannot be detected from the dproj alone. This is where the project is currently focused (along with the core package handling functionality).

RAD Studio IDE Client

An IDE plugin client will provide the ability to search for, install, restore, update or remove packages, in a similar manner to the NuGet Visual Studio IDE support (hopefully faster!). This plugin will share its core code with the command line client (i.e. it will not call out to the command line tool). I have not done any work on this yet (help wanted).

Delphi/Rad Studio Version Support

Undecided at the moment. I'm developing with XE7, but it's possible the code will compile with earlier versions, or be made to compile with minor changes.


Simply put, I want/need a package manager for Delphi, one that works as well as nuget, npm, yarn etc. I'm still fleshing out how this might all work, and I'd love some feedback, suggestions and ideas. I'd like to get some people with the right skills 'signed up' to help, particularly people with Open Tools API expertise.

Get Involved!

I have set up a home for the project on GitHub - The Delphi Package Manager Project - RFC. We'll use issues for discussion, and the wiki to document the specifications as we develop them. I have created a few issues covering things that need some discussion. I hope to publish the work I have already done on this in the next few days (it needs tidying up).

Today we released a FinalBuilder 8 update with Visual Studio 2019 and MSBuild 16 preview support. So far, for the most part, Visual Studio 2019 seems to operate (from our point of view, at least) pretty much the same as 2017. There were some changes to the MSBuild location, but other than that it all seems to work fine. Since our support is based on the preview, it's subject to change and/or breakage at any time.


Back in December 2016, I posted some ideas for Delphi language enhancements. That post turned out to be somewhat controversial; I received some rather hostile emails about how I was trying to turn Delphi into C#. That certainly wasn't my intent, which was rather to modernize Delphi in a way that helps me write less, but more maintainable, code. Nearly two years later, Delphi 10.3 Rio actually implements some of those features.

I'm not going to claim credit for the new language features, and the syntax suggestions I made were pretty obvious ones, but I like to think I perhaps spurred them on a bit ;) My blog post had over 12K views, so there was certainly plenty of interest in new language features, and from what I have seen out there on the interwebs they have for the most part been well received.

So let's take a look at which suggestions made the cut for 10.3, referencing my original post.

| Feature | Implemented | Comments |
|---|---|---|
| Local Variable Initialisation | No | |
| Type Inference | Yes! | For inline variables only. Confuses code insight! |
| Inline variable declaration, with type inference and block scope | Yes, Yes and Yes! | Confuses code insight! |
| Loop variable inline declaration | Yes! | Confuses code insight! |
| Shortcut property declaration | No | |
| Interface Helpers | No | |
| Strings (and other non ordinals) in Case Statements | No | |
| Ternary Operator | No | |
| Try/Except/Finally | No | |
| Named Arguments | | |
| Variable method arguments | No | |
| Lambdas | No | |
| Linq | No | Depends on lambdas and interface helpers. |
| Async/Await | No | |
| Non reference counted interfaces | No | |
| Attribute Constraints | No | |
| Operator overloading on classes | No | |
| Improve Generic Constraints | No | |
| Fix IEnumerable | No | |
| Yield return - Iterator blocks | No | |
| Partial classes | No | |
| Allow Multiple Uses clauses | No | |
| Allow non parameterized interfaces to have parameterized methods | No | |


So, 3 out of 23. To be honest, I was pleasantly surprised when I found out about them, given the pace of language change in the past. I'm hopeful this is just the start of things to come, and that we get to see Delphi evolve and catch up with other modern programming languages. I have a bunch of other language features I'd like to see, and have received lots of suggestions from other users.
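For reference, the three implemented features look like this in code. A small console sketch, which should compile in Delphi 10.3 and later:

```delphi
program InlineVarsDemo;
{$APPTYPE CONSOLE}
uses
  System.SysUtils;
begin
  // inline variable declaration with type inference and block scope
  var Total := 0;  // inferred as Integer

  // loop variable inline declaration
  for var I := 1 to 10 do
    Inc(Total, I);

  if Total > 0 then
  begin
    // Msg is scoped to this begin/end block only
    var Msg := Format('Total is %d', [Total]);
    Writeln(Msg);  // Total is 55
  end;
end.
```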

We're still using Delphi XE7 for FinalBuilder 8, and I rarely change compiler versions during the life of a major product version, so I'll only get to use the new language features when I get fully stuck into FinalBuilder 9 and Automise 6. I'm in the process of getting Delphi 10.3 compatible versions of all the third party libraries we use (commercial and open source); as any long time Delphi user will know, that's always more difficult than it should be!

TL;DR: Our forums have moved to https://www.finalbuilder.com/forums

After years of frustration with Active Forums on Dotnetnuke, we finally got around to moving to a new forums platform. 

The old forums had zero facilities for dealing with spammers, and sure enough every day, spammers would register on the website and post spam on the forums. Even after turning on email verification (where registration required verifying your email), spammers would verify their emails and post spam.

The old forums were also terrible at handling images, code markup etc., and would often completely mangle any content you pasted in.

So the hunt was on for a new platform. I've lost count of the number of different forum products I've looked at over the years, none of which totally satisfied my needs/wants. I've even contemplated writing my own, but I have little enough free time as it is, and would much rather focus on our products. 

Discourse looked interesting, so I installed it on an Ubuntu Server 18.04 virtual machine (it runs in a Docker container). After some initial trouble with email configuration (it didn't handle subdomains properly), it was up and running. I'm not great with Linux - I've tinkered with it many times over the years but never really used it for any length of time - so I was a little apprehensive about installing Discourse. However, their guide is pretty good and I managed just fine.

The default settings are pretty good, and it is easy to configure. After experimenting with it for a few days (there are a LOT of options), we liked it a lot and decided to go with it.

Discourse is excellent at handling bad markup - I'm astounded at how well it deals with malformed HTML and just renders a good looking post (most of the time). Inserting images is a breeze, the editor accepts Markdown or HTML, and it gives an accurate preview while you are writing a post. Posting code snippets works well using the same markup as GitHub, with syntax highlighting for a bunch of languages (C#, Delphi, JavaScript, VBScript, XML etc). The preview makes it easy to tell when you have things just right. Discourse also works very well on mobile, although our website does not (the login page is usable) - more work to be done there (like a whole new site!).

Discourse is open source (GPL), so you can either host it yourself (free) or let Discourse.org host it for you (paid, starting at $100 per month). Since we had spare capacity on our web servers (which run Hyper-V 2016), we chose to host it ourselves. That was 11 days ago.

My goal was to import the content from the old forums - there are 12 years of valuable posts there which I was loath to lose.

The first challenge was that Discourse requires unique emails, and our DotNetNuke install did not. After 12 years of upgrades, our database was in a bit of a sorry state. There were many duplicate accounts (some users had 5 accounts) - I guess if you can't remember your login, you just create a new one, right? I can't totally blame users for that; the password reset email system was unreliable in the past (it should be ok now, check your spam folder!). So we cleaned up the database, removing old accounts that had no licenses and no forum posts.
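
The actual cleanup was done in SQL against the DotNetNuke database, but conceptually finding the duplicates is just a case-insensitive group-by on email. A rough sketch in Python (function and field names are illustrative, not from the real tool):

```python
from collections import defaultdict

def find_duplicate_accounts(accounts):
    """Group account usernames by normalised email address.

    accounts is an iterable of (username, email) pairs. Any group with
    more than one account is a duplicate set needing merging or removal.
    """
    groups = defaultdict(list)
    for username, email in accounts:
        # Normalise: emails differing only by case/whitespace are the same user
        groups[email.strip().lower()].append(username)
    return {email: users for email, users in groups.items() if len(users) > 1}
```

The same normalisation has to be applied again at import time, since Discourse treats emails case-insensitively when enforcing uniqueness.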

The next challenge was enabling single sign-on with the website. Someone had written a DotNetNuke extension for it, but I wasn't able to get it working (it was written for an older version), so I spent 2 days writing my own (and almost lost the will to live!). Once that was sorted, I got to work on importing the data. Discourse does have a bunch of import scripts on GitHub - none of which are for DotNetNuke, and they are all written in Ruby (which I have zero experience with). Fortunately, Discourse also has a REST API, so using C# (with Dapper & RestSharp) I set about writing a tool to do the import. Since Discourse doesn't allow you to permanently delete topics, this needed to work first time, and be restartable if an error occurred. It took 4 days to write, much of which was spent figuring out how to get past the rate limits Discourse imposes. I did all this locally with a backup of the website db and a local Discourse instance. The import took several hours, with many restarts (usually due to bad content in the old forums, topics too short etc).
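
The real tool was C#, but the heart of the rate-limit handling was a simple retry loop, sketched here in Python (the function name and retry counts are illustrative; Discourse signals throttling with an HTTP 429 status and a retry delay):

```python
import time

def post_with_retry(send_post, max_retries=5):
    """Call send_post() until it succeeds, backing off when rate limited.

    send_post is any callable returning (status_code, retry_after_seconds).
    Returns the final status code, or raises after max_retries attempts.
    """
    for attempt in range(max_retries):
        status, retry_after = send_post()
        if status == 429:
            # Honour the server's retry hint if given, else back off exponentially
            time.sleep(retry_after if retry_after else 2 ** attempt)
            continue
        return status
    raise RuntimeError(f"still rate limited after {max_retries} attempts")
```

Combined with recording which topics had already been imported, this made the tool safe to restart at any point.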

Backing up the local instance of Discourse was trivial, as was restoring it on the remote server (in LA). We did have to spend several hours fixing a bunch of posts, and then some time with SQL fixing dates (editing a post sends it to the top of a category). I also had to ssh into the container to "rebake" the posts to fix image url issues. Fortunately there is a wealth of info on Discourse's own forums - and search works really well!

We chose not to migrate the FinalBuilder Server forum (the product was discontinued in 2013) or the Action Studio forum (which gets very few posts).  


I'm sure we'll still be tweaking the forums over the next few weeks, but on the whole we are pretty happy with how they are. Let us know what you think (in the Site Feedback forum!).

Version 1.9 is now out of beta and available as a stable release. Thank you to those of you who have already tried out the beta - especially those who reported issues.

This version brings major changes to the notifications system. We redesigned it using a common architecture that makes it much easier to add new notification publisher types. Where previously only email, XMPP and private message notifications were available, there are now publishers for Slack, Teams, Hipchat and Stride. And we can now add more (let us know what you need).

We are no longer limited to one publisher of each type. You may, for example, have different email servers for different teams in your company. You can set up two email publishers, one for each server, and set up subscriptions so that notifications from different projects go to different email servers. Likewise for different Slack workspaces, Teams channel connectors and so on.

We have also improved the XMPP publisher to support sending notifications to rooms. Subscriptions have been improved, allowing you to specify a room and/or channel for this and other publishers.

User preferences have been updated allowing each user to specify a recipient id, username or channel per publisher.

You can see some metrics on the throughput of each publisher (number of messages on queue, messages sent per second, average send time, etc.) on the Publishers page in the Administration area. This page also shows real-time counts of any errors occurring while sending messages, as well as any messages waiting on a retry queue due to rate limiting or service outages. This lets you know when you need to upgrade rate limits or make other service changes.

The Templates page has been updated. Templates are now divided into a tab per publisher. The list of available variables for each event type has been moved to an expandable side panel.

This release is built on .Net Framework version 4.7.2, which has allowed us to upgrade a number of third party libraries, including the database ORM and PostgreSQL drivers. This has noticeably improved performance, as well as providing us with a richer platform to build future features on. The setup wizard will prompt you to install .Net Framework version 4.7.2 before continuing with the installation.

Note that applications running on .Net 4.7.2 do not run on versions of Windows prior to Windows Server 2008R2 and Windows 7 SP1. We are also dropping the 32-bit server installer, mainly to reduce testing overheads. We will still be releasing 32-bit agents for those who are using 32-bit compilers.

We will continue to provide bug fixes for Continua CI version 1.8.1 for a while, to give you time to migrate from older platforms.



I'm not usually one for publishing roadmaps, mostly because I don't like to promise something and not deliver. That said, we've had a few people ask recently what is happening with Continua CI. 

Disclaimer - nothing I write here is set in stone, our plans may change.

A few weeks ago, I wrote up a "roadmap" for Continua CI on the whiteboard in our office. Continua CI 1.8.x has been out for some time, but we have been working on 2.x for quite a while. The length of time it is taking to get some features out is a cause of frustration in the office, which led to a lengthy team discussion; the result was a "new plan".

One of the reasons we had been holding features back for 2.x is that they required a change in the system requirements. Many of the third party libraries we use have dropped .net 4.0 support, so we were stuck on old versions. Rather than wait for 2.0, we will release 1.9 on .net 4.7.2. This will allow us to release some new features while we continue working on 2.0, and to take in bug fixes from third party libraries.

This is "The Plan" :

Version   .NET Framework   x86/x64   Min OS Version          UI      Features
1.8.x     4.0              both      Windows Server 2003R2   MVC 4
1.9.0     4.7.2            x64       Windows Server 2008R2   MVC 5   New notification types
1.9.1     4.7.2            x64       Windows Server 2008R2   MVC 5   Deployment actions
1.9.2     4.7.2            x64       Windows Server 2008R2   MVC 5   Import/Export
2.0.0     .net core 2.1    x64       Windows Server 2012     MVC 6   New architecture
3.0.0     .net core x.x    x64       Windows Server 2012     TBA     New user interface


Let's break down this plan.

1.9.0 Release

The 1.9 release will be built on .net 4.7.2, which has allowed us to take updates to a number of third party libraries, most notably NHibernate and Npgsql (the Postgres driver). These two libraries factor heavily in the performance improvements we see in 1.9.0.

The major new feature in 1.9.0 will be a completely redesigned notifications architecture. In 1.8, notifications are quite limited, offering only email, XMPP and private messages. There was very little shared infrastructure between the notification types, so adding new ones was not simple, and you could only use one mail server and one XMPP server.

In 1.9.0, notifications are implemented as plugins*, using a common architecture that makes it much easier to add new notification types. You can also define multiple notification publishers of the same type, so different projects can use different email servers, for example.

Notification types: Private message, Email, XMPP, Slack, Hipchat, Stride. More will follow in subsequent updates (let us know what you need).

*We probably won't publish this API for others to use just yet, as it will be changing for 2.0 due to differences between the .net framework and .net core.

If you are running Continua CI on a 32-bit machine, then start planning your migration. Supporting both x86/x64 is no longer feasible, and dropping x86 support simplifies a lot of things for us (like updating bundled tools etc). We will continue supporting 1.8.x for a while, but only with bug or security fixes. The minimum OS version will be the same as for .Net Framework 4.7.2 - since Windows Server 2003R2 is out of support these days, it makes sense for us to drop support for it.

1.9.1 Release

Deployment focused actions.  

  - AWS S3 Get/Put
  - Azure Blob Upload, Publish, Cloud Rest Service, Web Deploy
  - Docker Build Image, Push Image, Run Command, Run Image
  - File Transfer (FTP, FTPS, SFTP)
  - SSH Action
  - SQL Package Export, Package Extract, Package Import, Package Publish
  - SSH Run Script
  - Web Deploy

These actions are all mostly complete, but are waiting on some other (UI) changes to make them easier to use. We'll provide more detail about these when they are closer to release.

Note : These actions will only be available to licensed customers, not in the free Solo Edition.

1.9.2 Release

One of the most requested features in Continua CI is the ability to export and import Continua CI projects and configurations. This might be for moving from a proof-of-concept server to a production server, or simply to make small changes and import configurations into other projects. The file format will be YAML.

Continua CI 2.0 Release - .net core.

We originally planned to target .net framework 4.7 with Continua CI 2.0, but with .net core improving significantly in versions 2.0 and 2.1, the time is right to port to .net core. The most obvious reason to target .net core is cross-platform support. This is something we have wanted to do for some time, and even explored with Mono, but we were never able to get things working in a satisfactory manner. It's our hope that .net core will deliver on its cross-platform promise, but for now it's a significant amount of work just to target .net core. So our plan for Continua CI 2.0 is to get it up and running on .net core on Windows only, without losing any functionality or features. During the port we are taking note of what our Windows dependencies are for future reference.

The current (1.8.x) architecture looks like this :

Browser <----> IIS(Asp.net with MVC)<--(WCF)-->Service <--(WCF)-->Agent(s)

With .net core, it's possible to host asp.net in a service process, and that is what we have chosen to do. This cuts out the WCF layer between IIS and the service. .net core doesn't have WCF server support, and to be honest I'm not all that cut up about it ;) That said, we still need a replacement for WCF for communication between the agents and the server. We're currently evaluating a few options for this.

Continua CI 2.0  architecture currently looks like this :

Browser <----> Service(hosting asp.net core 2.1/mvc)<--(TBD)--> Agent(s)

The current state of the port is that most of the code has been ported, the communication between the agents and the server is still being worked on, and none of the UI has been ported yet. We do have asp.net core and mvc running in the service. There are significant differences between asp.net/mvc and asp.net core/mvc, so we're still working through them; I expect it will take a month or so to resolve the issues, then we can move on to new features.

Continua CI 2.0 - new features.

Rest API. This is something we had been working on for a while, but on the .net framework using self-hosted Nancy (in the service, running on a separate port from IIS). Once we made the decision to port to .net core, we chose to use asp.net rather than Nancy. Fortunately we were able to reuse much of what was already done with Nancy (models, services etc) on asp.net core, and we're working on this right now.

Other features - TBA

Continua CI 3.0 - A new UI

Asp.net MVC has served us well over the years, but it relies on a bunch of jQuery code to make the UI usable, and I'll be honest, no one here likes working with jQuery! Even though we ported much of the JavaScript to TypeScript, it's still hard to create complex UIs using jQuery. The Stage Editor is a good example of this: even with some reasonably well structured code, it's still very hard to work on without breaking it. The UI is currently based on Bootstrap 3.0, with a ton of customisations. Of course Bootstrap 4.0 completely breaks things, so we're stuck on 3.0 for now.

So it's time to change tack and use an SPA framework. We've done proofs of concept with Angular and React, and will likely look at Vue before making a decision - right now I'm leaning towards React. Creating a new user interface is a large chunk of work, so work will start on this soon (it's dependent on the REST API). We're likely to look at improving usability and consistency in the UI, and perhaps a styling refresh.

Linux & MacOS Agents - with .net core running on these platforms, this is now a possibility. We looked at this several times before with Mono, but the API coverage or behaviour left a lot to be desired. We do still have some Windows-specific stuff to rework in our agent code, and actions will need to be filtered by platform, but this is all quite doable.

Summing up

We're making a big effort here to get features out more frequently, but you will notice I haven't put any timeframe on releases outlined above, they will be released when ready. We expect a 1.9.0 Beta to be out in the next week or so (currently testing the installer, upgrades etc), and we'll blog when that happens (with more details about the new notifications features). Note that it's highly likely there will be other releases in between the ones outlined above, with bug fixes and other minor new features as per usual. We have a backlog of feature requests to work from, many of which are high priorities, so we're never short of things to do (and we welcome feature requests). 

In the latest version of Continua CI, we have added new archiving functionality to the workspace and repository rules.

Builds can generate a lot of output files: binary library files or report files, for example. Copying a large number of these files back to the server at the end of the build can take time. Manually downloading each individual artefact from the server can be a tedious task, so compressing these files into a handy bundle makes sense.

Previously, you would have needed to use actions, such as the Seven Zip action, in your build stages to zip these files. The compression can now be performed as part of the agent-to-server workspace rules.

To compress a set of files in the agent workspace to an archive in the server workspace, specify a file with a zip extension on the left-hand side of an agent-to-server workspace rule.


    Libraries.zip < Output/**.dll

Note that all the usual operators are taken into account when compressing files so, in the above example, the directory structure is preserved. Likewise, using the <- operator will cause all matching files to be flattened into the root folder of the zip file.

Doubling up with the << operator will delete any existing zip file before compressing to a new file. Without the << operator, multiple sets of files can be added to the same archive file.
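
For instance, assuming the same Output folder as in the earlier example, a flattening rule and an overwriting rule might look like this (the file names are just illustrative):

    Libraries.zip <- Output/**.dll
    Reports.zip << Output/**.html

The first rule places every matching dll in the root of the zip regardless of its source folder; the second deletes any existing Reports.zip before compressing the matching files into a fresh archive.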


    Reports.zip < Output/**.html
    Reports.zip < Output/**.css

You can also compress files into subfolders within the zip file using the new : operator.


    Reports.zip:/css < Output/**.css

Once files have been compressed at the end of one stage, you may need to access the contents of zip files in the next stage. Additionally, you may wish to unpack a zip file from your repository at the start of a stage. The : operator facilitates the extracting of zip files in server-to-agent workspace rules and repository rules.

To extract a set of files from an archive in the server workspace to a folder in the agent workspace, specify a file with a zip extension on the left-hand side of a server-to-agent workspace rule. Ensure that you follow the ‘zip’ with a : operator, otherwise the zip file will just be copied.


    Libraries.zip: > Libraries

This also works for repository rules.


	$Source.MyRepo$/Documents.zip: > Docs/Main

Note that all the usual operators >, >>, -> and --> have the same meaning when extracting files as they have when copying files: they signify whether to preserve the directory structure within the zip file and whether to empty the destination folder.

You can also specify a pattern after the : operator, allowing you to filter the extracted files.


	Libraries.zip:/plugins/**.dll > Libraries/Plugins
	$Source.MyRepo$/Documents.zip:**.md > Docs/Markdown

See the Workspace Rules documentation for further details on the new archive rules syntax.