VSoft Technologies Blogs


VSoft Technologies Blogs - posts about our products and software development.

I'm not usually one for publishing roadmaps, mostly because I don't like to promise something and not deliver. That said, we've had a few people ask recently what is happening with Continua CI. 

Disclaimer - nothing I write here is set in stone, our plans may change.


A few weeks ago, I wrote up a "roadmap" for Continua CI on the whiteboard in our office. Continua CI 1.8.x has been out for some time, but we have been working on 2.x for quite a while. The length of time it is taking to get some features out has been a cause of frustration in the office, which led to a lengthy team discussion; the result was a "new plan".

One of the reasons we had been holding features back for 2.x is that they required a change in the system requirements. Many of the third party libraries we use have dropped .net 4.0 support, so we were stuck on old versions. So rather than wait for 2.0, we will release 1.9 on .net 4.7.2. This will allow us to release some new features while we continue working on 2.0, and to take in bug fixes from third party libraries.

This is "The Plan" :

| Version | .NET Framework | x86/x64 | Min OS Version | UI | Features |
|---------|----------------|---------|----------------|-------|----------|
| 1.8.x | 4.0 | both | Windows Server 2003R2 | MVC 4 | |
| 1.9.0 | 4.7.2 | x64 | Windows Server 2008R2 | MVC 5 | New Notification types |
| 1.9.1 | 4.7.2 | x64 | Windows Server 2008R2 | MVC 5 | Deployment Actions |
| 1.9.2 | 4.7.2 | x64 | Windows Server 2008R2 | MVC 5 | Import/Export |
| 2.0.0 | netcore 2.1 | x64 | Windows Server 2012 | MVC 6 | New Architecture |
| 3.0.0 | netcore x.x | x64 | Windows Server 2012 | TBA | New User Interface |

 

Let's break down this plan.

1.9.0 Release

The 1.9 release will be built on .net 4.7.2, which allowed us to take updates to a number of third party libraries, most notably NHibernate and Npgsql (the PostgreSQL driver). These two libraries factor heavily in the performance improvements we see in 1.9.0.

The major new feature in 1.9.0 will be a completely redesigned notifications architecture. In 1.8, notifications are quite limited, offering only email, XMPP and private messages. There is very little shared infrastructure between the notification types, so adding new notification types was not simple, and you can only use one mail server and one XMPP server.

In 1.9.0, notifications are implemented as plugins*, using a common architecture that makes it much easier to add new notification types. You can also define multiple notification publishers of the same type, so, for example, different projects can use different email servers.

Notification Types :  Private message, Email, XMPP, Slack, Hipchat, Stride. More will follow in subsequent updates (let us know what you need).

*We probably won't publish this api for others to use just yet, as it will be changing for 2.0 due to differences between the .net framework and .net core.

If you are running Continua CI on a 32-bit machine, then start planning your migration. Supporting both x86/x64 is no longer feasible, and dropping x86 support simplifies a lot of things for us (like updating bundled tools etc). We will continue supporting 1.8.x for a while, but only with bug or security fixes. The minimum OS version will be the same as for the .Net Framework 4.7.2 - since Windows Server 2003R2 is out of support these days, it makes sense for us to drop support for it.

1.9.1 Release

Deployment focused actions.  

  - AWS S3 Get/Put
  - Azure Blob Upload, Publish, Cloud Rest Service, Web Deploy
  - Docker Build Image, Push Image, Run Command, Run Image
  - File Transfer (FTP, FTPS, SFTP)
  - SSH Action
  - SQL Package Export, Package Extract, Package Import, Package Publish
  - SSH Run Script
  - Web Deploy

These actions are all mostly completed, but are waiting on some other (UI) changes to make them easier to use. We'll provide more detail about these when they are closer to release.

Note : These actions will only be available to licensed customers, not in the free Solo Edition.

1.9.2 Release

One of the most requested features in Continua CI is the ability to export and import Continua CI Projects and Configurations. This might be for moving from a proof of concept server to a production server, or simply to make small changes and import configurations into other projects. The file format will be YAML.

Continua CI 2.0 Release - .net core.

We originally planned to target .net framework 4.7 with Continua CI 2.0, but with .net core improving significantly with netcore 2.0 and 2.1, the time is right to port to .net core. The most obvious reason to target .net core is cross platform support. This is something we have wanted to do for some time, and even explored with Mono, but we were never able to get things working in a satisfactory manner. It's our hope that .net core will deliver on its cross platform promise, but for now it's a significant amount of work just to target .net core. So, that said, our plan for Continua CI 2.0 is to get it up and running on .net core on Windows only, without losing any functionality or features. During the port we are taking note of what our Windows dependencies are for future reference.

The current (1.8.x) architecture looks like this :

Browser <----> IIS(Asp.net with MVC)<--(WCF)-->Service <--(WCF)-->Agent(s)

With .net core, it's possible to host asp.net in a service process, and that is what we have chosen to do. This cuts out the WCF layer between IIS and the service. .net core doesn't have WCF server support, and to be honest I'm not all that cut up about it ;) That said, we still need a replacement for WCF for communication between the agents and the server. We're currently evaluating a few options for this.

Continua CI 2.0  architecture currently looks like this :

Browser <----> Service(hosting asp.net core 2.1/mvc)<--(TBD)--> Agent(s)

The current state of the port is that most of the code has been ported, the communication between the agents and the server is still being worked on, and none of the UI has been ported. We do have asp.net core and mvc running in the service. There are significant differences between asp.net/mvc and asp.net core/mvc, so we're still working through this; I expect it will take a month or so to go through and resolve the issues, then we can move on to new features.

Continua CI 2.0 - new features.

Rest API. This is something we had been working on for a while, but on the .net framework using self hosted Nancy (in the service, running on a separate port from IIS). Once we made the decision to port to .net core, we chose to just use asp.net rather than Nancy. Fortunately we were able to reuse much of what was already done with Nancy on asp.net core (models, services etc), and this is what we're working on right now.

Other features - TBA

Continua CI 3.0 - A new UI

Asp.net MVC has served us well over the years, but it relies on a bunch of jQuery code to make the UI usable, and I'll be honest, no one here likes working with jQuery! Even though we ported much of the javascript to typescript, it's still hard to create complex UI's using jQuery. The Stage Editor is a good example of this, even with some reasonably well structured javascript, it's still very hard to work on without breaking it. The UI is currently based on Bootstrap 3.0, with a ton of customisations. Of course Bootstrap 4.0 completely breaks things so we're stuck on 3.0 for now.

So it's time to change tack and use an SPA framework. We've done proof of concepts with Angular and React, and will likely look at Vue before making a decision - right now I'm leaning towards React. Creating a new user interface is a large chunk of work, so work will start on this soon (it's dependent on the rest api). We're likely to look at improving usability and consistency in the UI, and perhaps a styling refresh. 

Linux & MacOS Agents - with .net core running on these platforms, this is now a possibility. We looked at this several times before with Mono, but the API coverage and behaviour left a lot to be desired. We do still have some Windows specific stuff to rework in our agent code, and actions will need to be filtered by platform, but this is all quite doable.

Summing up

We're making a big effort here to get features out more frequently, but you will notice I haven't put any timeframe on the releases outlined above - they will be released when ready. We expect a 1.9.0 Beta to be out in the next week or so (currently testing the installer, upgrades etc), and we'll blog when that happens (with more details about the new notifications features). Note that it's highly likely there will be other releases in between the ones outlined above, with bug fixes and other minor new features as per usual. We have a backlog of feature requests to work from, many of which are high priorities, so we're never short of things to do (and we welcome feature requests).

In version 1.8.1.870 of Continua CI, we have added new archiving functionality to the workspace and repository rules.

Builds can generate a lot of output files: binary library files or report files, for example. Copying a large number of these files back to the server at the end of the build can take time. Manually downloading each individual artefact from the server can be a tedious task, so compressing these files into a handy bundle makes sense.

Previously, you would have needed to use actions, such as the Seven Zip action, in your build stages to zip these files. The compression can now be performed as part of the agent-to-server workspace rules.

To compress a set of files in the agent workspace to an archive in the server workspace, specify a file with a zip extension on the left-hand side of an agent-to-server workspace rule.

e.g.



    Libraries.zip < Output/**.dll


Note that all the usual operators are taken into account when compressing files, so in the above example the directory structure is preserved. Likewise, using the <- operator will cause all matching files to be flattened into the root folder of the zip file.

Doubling up with the << operator will delete any existing zip file before compressing to a new file. Without the << operator, multiple sets of files can be added to the same archive file.

e.g.



    Reports.zip < Output/**.html
	Reports.zip < Output/**.css
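
Doubling the operator, on the other hand, would replace any existing Reports.zip on each run. As an illustration based on the description above:

    Reports.zip << Output/**.html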


You can also compress files into subfolders within the zip file using the new : operator.

e.g.



    Reports.zip:/css < Output/**.css


Once files have been compressed at the end of one stage, you may need to access the contents of zip files in the next stage. Additionally, you may wish to unpack a zip file from your repository at the start of a stage. The : operator facilitates the extracting of zip files in server-to-agent workspace rules and repository rules.

To extract a set of files from an archive in the server workspace to a folder in the agent workspace, specify a file with a zip extension on the left-hand side of a server-to-agent workspace rule. Ensure that you follow the ‘zip’ with a : operator, otherwise the zip file will just be copied.

e.g.



    Libraries.zip: > Libraries


This also works for repository rules.

e.g.



	$Source.MyRepo$/Documents.zip: > Docs/Main


Note that all the usual operators >, >>, -> and --> have the same meaning when extracting files as they have when copying files, signifying whether to preserve the directory structure within the zip file and whether to empty the destination folder.
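
For example, to empty the destination folder before extracting, while preserving the directory structure within the zip (an illustration based on the operators described above):

    Libraries.zip: >> Libraries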

You can also specify a pattern after the : operator, allowing you to filter the extracted files.

 e.g.



	Libraries.zip:/plugins/**.dll > Libraries/Plugins
	$Source.MyRepo$/Documents.zip:**.md > Docs/Markdown


See the Workspace Rules documentation for further details on the new archive rules syntax.

SSL standards are changing, and older SSL/TLS protocols are slowly being deprecated, or even turned off by some services. This post shows how to enable TLS 1.2 support in Continua CI.

Yesterday, we started getting reports that the Github Status event handler, and the Github Status action in Continua CI had stopped working.

Sure enough, in our testing here we were able to confirm the issue. While testing this under the debugger, the error we were seeing was rather strange: "The request was aborted: Could not create SSL/TLS secure channel.".

After some research, we found this error was due to being unable to negotiate a common protocol between the client and the server.

Now Continua CI 1.x is built with .NET 4.0 (v2 will be on 4.7.1) - we know that .NET 4.0 doesn't support TLS 1.2, and a quick check of the github api server using SSLLabs shows that they now only support TLS 1.2.


I wondered if this was announced by github - turns out they did announce this 3 weeks ago :

Weak cryptographic standards removal notice

and yesterday they permanently disabled TLS 1.0 and 1.1

Weak cryptographic standards removed

Anyway, back to Continua CI. The good news is that there is a way to enable TLS 1.2 support in Continua CI. Note that this only works when running on Windows Server 2008 or later (Server 2003 does not support TLS 1.2 at all, and we will be dropping support for it with v2).

1) Install .Net Framework 4.5 or later - all 4.x frameworks effectively replace 4.0, and 4.5 added support for TLS 1.2.

2) Edit %ProgramFiles%\VSoft Technologies\ContinuaCI\Server\Continua.Server.Service.exe.config on the server and %ProgramFiles%\VSoft Technologies\ContinuaCI Agent\Continua.Agent.Service.exe.config on each agent - add the following line to the appSettings section:

       <add key="Continua.Service.SecurityProtocolType" value="Tls|Tls11|Tls12" />
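
For context, the key sits inside the existing appSettings element of each config file; a minimal sketch (other entries omitted):

    <configuration>
      <appSettings>
        <!-- existing keys ... -->
        <add key="Continua.Service.SecurityProtocolType" value="Tls|Tls11|Tls12" />
      </appSettings>
    </configuration>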
    
Note that the key supports the following values: Ssl2|Ssl3|Tls|Tls11|Tls12|Default

  - Default = Ssl3|Tls
  - Multiple protocols can be separated with |
  - The value "Tls|Tls11|Tls12" will allow Continua CI to work with services that do not support or have not enabled TLS 1.2, and with services that only support TLS 1.2.

3) Open Regedit and, under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319, add a DWORD value named SchUseStrongCrypto with a value of 1.
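
If you prefer, the same change can be made by importing a .reg file; a sketch of the equivalent registry entry:

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
    "SchUseStrongCrypto"=dword:00000001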

4) Restart the Server and Agent Services.

5) You may need to restart your server(s) for the registry change to take effect.

One last note: this change also affects the communication between the Continua CI Server and agents - if you make the change on the server, make sure you make a compatible change on the agents.

The Windows 10 Fall Creators Update has only been out a few hours, but we're already getting questions about it. 

In our limited testing, FinalBuilder 8 and Automise 5 run fine.

I've only been running the Fall Update for a few hours, but so far I have not noticed any issues. The applications I use daily all run fine. 

The "Windows 10 Creators Update" (ie the one before the Fall Update - stupid release naming imho) broke the Delphi debugger when using runtime packages. Aparently the issue was caused by a library loader optiimisation, not taking into account that dll's can have multiple import tables. I never did see a full explaination or acknowledgement of the problem from Microsoft.

This only affected the debugger (all native code debuggers, not just Delphi), which would load and unload each dll many times (based on the number of imports, for FinalBuilder's core package, it was in the hundreds). Sometimes the application would launch, only for the debugger to crash, sometimes it would just hang, sometimes the Delphi IDE would get out of memory errors. 

For me, this was a big issue, since FinalBuilder and Automise use runtime packages. This affected all versions of Delphi, even the latest 10.2 (Tokyo). Embarcadero did eventually ship an update to 10.2 that mostly resolved the problem (not an easy thing as it involved major linker changes), but that didn't help us as we're using an older version (for reasons I won't go into here!).

So since April 2017, I've been really hamstrung when it comes to debugging. Fortunately we discovered the issue before the Creators Update was installed on our other Delphi development machines (and it's been a constant battle with windows update nagging to install it ever since), so we were still able to debug, just not on my dev machine. Frustrating to say the least.

The good news is that the Fall Update (mostly) fixes the problem.  I still see some dlls/packages getting unloaded and reloaded again, but the application launches and I can debug. 

As far as windows functionality in the Fall Update goes, well the Task Manager has a new GPU section on the performance tab which is mildly interesting, but since I don't use a Pen, or wear a VR headset while working, I'm not noticing much to get excited about. Hopefully, it's just a lot of bug fixes and performance enhancements, minus the show stoppers!! 

In this post, I'm going to look at how to structure a FinalBuilder project so that it will run on your dev machine, or on your Continua CI Server without modification. This allows the best of both worlds, develop and debug your build process on your development machine, and then later run it on your CI server.

I'm going to assume you are familiar with FinalBuilder to speed this along.

Version Control

The very first thing we need to do is add our FinalBuilder project to our version control system. This will ensure that the Continua CI agent will be able to access the projects.

We typically create a Build folder in each repository that has our FinalBuilder projects, installer scripts etc. Make sure you save your projects in uncompressed format (ie, .fbp8 for FinalBuilder 8), as that will make it possible to diff project file changes using your usual diff tool (we use Beyond Compare 4).

So in a typical repository, you might have a folder structure that looks something like this :

        \Build
        \Src
        \Docs
        \Help
        \Tools
        \Output
    

Note the Output folder is ignored by our version control system (via .gitignore or .hgignore for example). Don't forget to add ignores for the FinalBuilder log file, as you don't want to commit that to the repo.
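
For example, a minimal .gitignore for this layout might look something like this (the log file extension is an assumption - check what your FinalBuilder version actually writes):

        # build output
        Output/
        # FinalBuilder log files (extension varies by version)
        Build/*.fbl8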

This is the file structure I'll be using as the example in this post.

Build Workspace

Continua CI runs each build in a separate, clean build workspace folder. The reason for this is to allow multiple builds of the same configuration to be run concurrently (for example, building different branches at the same time). This means our folder structure above will be rooted differently for each build. So to deal with this, we will define some FinalBuilder Project Variables.

FinalBuilder Variables


FinalBuilder Project Variables


You can of course modify these variables to suit your needs. The REPO variable has a default value of ".."; we will use the Path Manipulation action to expand that relative path when our build starts, using the FBPROJECTDIR variable as the base path. This will give us the root folder of our repository, which we can then use as the basis for other path variables.

Side Note: FinalBuilder does not support using unrooted relative paths. The best practice is to fully expand the relative path before using it. Relative paths need to be relative to something, and in windows that is typically the Current Directory for the process, however this is not viable in a multi-threaded application. Rooted relative paths (e.g "%SOURCE%\.." ) will work in most cases, however some windows api functions do not support relative paths at all.

The WORKSPACE variable will have the path to the Continua CI build workspace folder. Depending on how your repository and repository rules are structured, this might be different.

Repository Rules

Continua CI configurations can have multiple Repositories assigned to them; for example, when building FinalBuilder we have multiple repos on the configuration - one for our source and three others for libraries. On our dev machines, we use symlinks to map the libs folder into our main repo's path, so we end up with :

        MainRepo\Src
        MainRepo\Libs <- symlink to LibsRepo\
    

This makes it easier to deal with search paths in Delphi, as we can just use simple relative paths. With .Net, working with libraries is so much simpler due to package managers like nuget or paket.

At the start of a Continua CI Stage, the source code is exported from the Repository Cache (a Mercurial repo) for each repository associated with the Configuration. How and what gets exported is controlled by the Repository Rules for that Stage.

The default rules looks like this :

        $Source$ >> Source
    

This will export all associated repos to a folder structure like this

        workspace\source\repo1
        workspace\source\repo2
    

In this example, I'm going to change the rules to mirror the folder structure on our dev machines :

Repository Rules

This results in the following folder structure :

        workspace\build
        workspace\src
        workspace\libs
    

Note that you can also use rules to limit which parts of the repository are exported to the workspace, so, for example, if you have files in the repository that are not needed for the build process, you can avoid exporting them and save some time (I/O is usually the biggest overhead) using exclusion rules.

Getting back to our FinalBuilder variables, we can see from the above folder structure, that our WORKSPACE variable will have the same value as the REPO variable. If your folder structure is different then you will need to adjust accordingly.

Continua CI Stage Actions

Ok, so now we have a basic FinalBuilder project and Repository Rules set up to export our source to the build workspace; now it's time to call FinalBuilder. Continua CI has a FinalBuilder Action for doing just that.

Continua CI - FinalBuilder Action

We need to tell the action where our FinalBuilder project file lives. This needs to be anchored by the workspace folder, e.g. $Workspace$\Build\FB\FinalBuilderBuild.fbp8

In the Variables tab, we will check the "Automatically apply Continua CI variable values to matching FinalBuilder variables." option. If you want to send any protected Continua CI variables (ie passwords, API keys etc) to FinalBuilder, then check the "Save sensitive variable values to context XML file for use by FinalBuilder." option as well. Lastly, on the Environment Variables tab, we are passing some environment variables that are used in Delphi search paths to FinalBuilder.

Continua CI - FinalBuilder Action

We also set some environment variables that are needed for our build process; Continua CI makes this painless.

Continua CI - Set Environment Variables

FinalBuilder Project

Remember that we want to be able to run the project on our dev machine and from Continua CI. So some logic is required to make sure it behaves the same way in both environments.

I added some targets to the project: Init, Build and Test.

The Init target is a dependency of the Default target, so it will always run first. This is where we initialise the variables, detect if the project is running from Continua CI or not, and set version info. We also create the Output folder here.

FinalBuilder - Init Target

Back in the Default target, we call the Build and Test targets. This is wrapped in a try/finally block and we export the FinalBuilder log file to %OUTPUT%\BuildLog.html in the finally section. We can register this file as an artifact and create a report definition in Continua CI to make it easy to view in the Continua CI UI. I've covered this before in an earlier blog post, so I won't cover it here.

FinalBuilder - Init Target

Note that you don't have to do the whole build/test/deploy all in one FinalBuilder project. When building FinalBuilder, we do the build & test in one project, and the deploy in another that is called from the Deploy stage. This avoids the temptation to deploy from a developer workstation!

When we build Continua CI (on Continua CI of course), we have separate Build, Obfuscate, Test, Package & Deploy Stages that use different FinalBuilder projects. This allows us to run different parts of the build process on different agent machines. The .net obfuscation tool we use is very expensive, so we only have a single license, which is installed on one agent machine (a VM), and the obfuscation stage will only run on that agent. Continua CI selects the best agent based on agent properties and the Stage's Agent Requirements. I'll cover this in more detail in another blog post.

I have added this example project to the Examples repository on GitHub.

Continuous Integration Servers are often underspecified when it comes to hardware. In the early days of Automated Builds, the build server was quite often that old pc in the corner of the office, or an old server in the data center that no one else wanted. Developers weren't doing many builds per day, so it worked, it was probably slow but that didn't seem to matter much.

Fast forward 20 years, and the Continuous Integration Server is now a critical service. The volume and frequency of builds has increased dramatically and a slow CI server can be a real problem in an environment where we want fast feedback on that code we just committed (even though it "worked on my machine"). Continuous Deployment only adds to the workload of the CI server. 

In this post, I'm going to cover off some ideas to hopefully improve the performance of your CI server. I'm not going to cover compilation, unit tests etc. (which can be where a lot of the time is spent). Instead, I'll focus on the environment, machine configuration and some settings on your Continua CI configurations.

Hardware Requirements

It's impossible to provide hard and fast specs for hardware or virtual machines, as it varies greatly depending on the expected load.

There are a bunch of things you can tweak that may improve performance. I will touch on some key points for virtual hosts, but I'm not going to go too deep into tuning virtual hosts - that's not my area of expertise. Of course, dedicated physical machines would be ideal, but these days, even if you do get dedicated hardware for CI/CD, it's most likely going to be as a virtual host (hyper-v or vmware) rather than an OS installed on bare metal (do companies still provision a single os on bare metal servers these days?). Virtualisation brings in a whole bunch of benefits, but it also brings with it some limitations that cannot be ignored.

Continuous Integration environments are mostly I/O bound and Continua CI is no different in that regard. So let's look at the various resources used by CI/CD.

CPU

It's unlikely that CPU will be a limiting factor in the performance of your CI server, unless you are running other CPU intensive tasks on your server. If that's the case, then move your CI server to dedicated hardware, or at least a dedicated virtual host.

At a minimum you should have at least 2 cores on the server. On our production server, which is a virtual machine (on Hyper-V 2012R2) with 4 virtual cores and dynamic RAM, the Windows resource monitor shows that average CPU usage usually sits around 2% when idle (no running builds, measured on the guest OS using resource monitor on the Continua Server Service). With 10 concurrent builds running, the Continua CI server service was using around 6% cpu.

Adding another 4 cores made very little difference. The Hyper-V host machine, which is also running a bunch of agent VM's, has plenty of CPU capacity, with the average CPU usage round 5-7%. Cutting down the number of cores to 2 did make a slight difference, with the VM showing slightly higher CPU usage, however no discernible difference in build times.

This is obviously not very scientific, but it did demonstrate (well to me at least) that CPU is not the limiting factor. I set the server VM back to 4 cores and left it at that. Our Hyper-V host machines are a few years old now, and have 7200 rpm SAS hard drives (in Raid 10) rather than SSD's (they were still too expensive when we bought the machines).

On a Continua CI Agent, we recommend at least 2 cpu cores, and limit the concurrent builds running on the agent to 1 per core. This isn't a hard and fast rule, just a convention we adhere to here (based on some performance testing). You may want to add extra cores depending on what compilers or tools you are running during your build process. The only way to know if this is needed is to monitor cpu on the agent machine while a build is running.

I/O

The most used resources are disk read/write and network read/write. Poor I/O performance will really slow down your builds.

Disk

It goes without saying, but use the fastest disks you have available to you. If you can afford it, new generation nvme/pcie SSD's are the way to go. They are still quite expensive for larger capacities though. At the very least, use a separate disk for the operating system and software installation, and another disk for your Continua CI Server's share folder (or the agents workspace folder on agent machines). This is where most of the I/O happens during builds. This recommendation applies whether running on dedicated hardware or in a virtual machine.

If you are running the server and agent machines on the same virtual host (as we do for our production environment) then this is very important to get right. Poor I/O performance in virtualised environments is not uncommon - having agents and the server fighting for a slice of the same I/O pie is not a good idea.

On the agent machines, good disk performance is critical. When a build is started on the agent, the first thing it does is create a workspace folder. It then exports the source code from the repository cache(s) (Mercurial repo which was cloned from the server) to that folder, using the repository rules (more on this later). This workspace initialisation phase can be very slow if you have poor I/O performance.

Network

Continua CI uses networking to transfer files, repository changes etc between the server and the agents. Poor network performance will impact on build initialisation times (updating the agents repo cache, build workspace) and on build completion times (transferring workspace changes back to the server). Logging between the agent and the server will also be impacted by poor network performance.

By default, Continua CI uses SMB to transfer files, source code (repository caches) between the server and the agents. When the server's share folder is not accessible by SMB, Continua CI will try to use SSH/SFTP (Continua CI installs its own specialised SSH service). In high latency networks (for example if the agent is remote from the server), SSH/SFTP may perform better than SMB.

You can force an agent to use SSH/SFTP by setting the agent's ServerFileTransport.ForceSSH property to true.

Database

Continua CI supports PostgreSQL (the default) or Microsoft SQL Server. If you choose to use MSSQL, we recommend running it on a separate, well specified machine. MSSQL is quite heavy in its use of RAM and disk I/O - it's best run on a machine that has been tuned to run it properly. I'm not going to go into that here; that's a whole other topic in an area where I'm definitely not an expert.

The PostgreSQL database server that is installed by default (unless you select otherwise) with Continua CI is much more frugal when it comes to resources. On our main Continua CI server, PostgreSQL typically uses around 60MB of RAM. Contrast that with SQL Server running on my dev machine, not used or touched for weeks, and it's using 800MB! PostgreSQL can also be tuned; we have tried to provision it with sensible defaults that strike a balance between performance and resource usage. If you need to tune PostgreSQL, then we recommend installing your own PostgreSQL instance and pointing Continua CI at it.

Currently the Continua CI installer doesn't provide any options for the database install location (C:\ProgramData\VSoft\ContinuaCI\PostgreSQLDB). This is something we are looking at for a future release, which will make it possible to put the database on its own drive. For now, it's possible to move the database to another location using a symlink; we have a few customers who have done this successfully. Contact support if you need help with this.
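
As a rough sketch of the symlink approach (paths are examples only; stop the Continua CI and PostgreSQL services first, and take a backup before touching the database folder):

    rem copy the database to the new drive, then replace the original folder with a symlink
    robocopy "C:\ProgramData\VSoft\ContinuaCI\PostgreSQLDB" "D:\ContinuaCI\PostgreSQLDB" /E /COPYALL
    rmdir /S /Q "C:\ProgramData\VSoft\ContinuaCI\PostgreSQLDB"
    mklink /D "C:\ProgramData\VSoft\ContinuaCI\PostgreSQLDB" "D:\ContinuaCI\PostgreSQLDB"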

Virtualisation Tips

Virtual CPU Cores

In a virtual environment, it's very important not to overload your virtual host. Note that there is a difference between overloading and over allocating virtual cores. It's a common practice to allocate more virtual cores across the virtual machines than there are physical/logical cores (logical when HyperThreading is enabled), but this has to be done with the knowledge and understanding of the load on the host machine. Overloading happens when so many cores are allocated and in use that the hypervisor is unable to schedule a core to a virtual machine when needed. This results in pauses and poor performance.

In a clustered environment this is even more important, because when a cluster node dies, or is removed for upgrades etc, virtual machines will move to another node in the cluster - if that node is already overloaded then you will soon start hearing the complaints from users!

The best explanation I have found on how hypervisors allocate cores is this article - https://www.altaro.com/hyper-v/hyper-v-virtual-cpus-explained/ - it's Hyper-V specific (we use Hyper-V here) but much of the information also applies to VMWare.

Virtual Disks

When creating separate virtual disk volumes for your virtual machines, try to put those virtual drives on different physical drives, so they are not competing for the same I/O. Use fixed size virtual disks.

Continua CI Configuration Tuning

Continua CI is not immune to performance problems, we're always working to make it faster and consume less resources. There are however a few things that can be tuned in Continua CI to improve performance.

Repository Branch Settings

Use specific branch patterns to narrow down the number of repository files and folders which are monitored and downloaded. With repositories which use folder-based branches, such as Subversion and TFS, consider moving old branches to a separate archive folder in your repository which will not match the branch patterns. Note that you can use more than one Continua CI repository per actual repository. Some users will have multiple projects in one repository, but only need to build a single one for each configuration. Make use of relative paths, where supported by your repository type, to limit your repository to a single project folder. This can significantly speed up repository initialisation and changeset updating.

Repository Polling

Continua CI polls repositories periodically to detect new commits. Each time this occurs, Continua CI invokes the command line client for that repo, and parses the output of that process. Some clients use a surprising amount of CPU. The git client, for example, uses around 8% CPU per instance on our production server while checking for commits. Most of the time, these processes only run for a very short amount of time (when no changes are detected), however if you have a lot of repositories, these small cpu spikes can add up.

There are a couple of options to keep this under control.

1) Set the appropriate polling interval for your repositories. If changes to a repository occur infrequently, then there's no point polling frequently.

2) Set the Server.RepoMonitor.MaxCheckers server property. This controls how many version control client processes are spawned concurrently; the default (5) is quite conservative, so you should only need to lower this on a very low spec system. If you have plenty of spare CPU capacity, then you can increase this value, however if you do, then monitor CPU usage to make sure you don't overload the server.

3) Manual polling, using post commit hooks. This reduces CPU usage on the server, by only polling for repository changes when requested and has the added benefit of reducing the load on your version control server. This does take some setting up, and depends very much on the capabilities of your version control system. I'll take a look at post commit hooks in a future blog post.

Repository Path Filtering

Repository Path Filtering is an option on all repository types, with the exception of Mercurial (*I'll explain why shortly). What this filtering does is allow you to limit which files get added to the server's repository cache. This filtering has a few benefits, less disk space used on the server and the agents, less network I/O when transferring the changes from the server to the agent, and less I/O when checking out the source into the build workspace.

A typical use case for these rules is when you have files in your repository that rarely change and are not needed for the build process (design docs, deployment notes etc). No point adding them to the repo cache if you don't use them.

Changes to these rules won't affect files that are already in the repository cache, but it will avoid committing changes to those filtered out files to the repo cache. The best bang for buck with these filters will come if the repository is reset (the cache is rebuilt, so filtered out files are never committed to the cache), however that can be an expensive operation, so unlike other repository settings, changing these rules will not force a reset.

* These filters don't apply to Mercurial repositories, as we use Mercurial for our repository cache. When you point Continua CI at a Mercurial repository, it just clones it to the server (repo cache), and then clones it to the agents (repo cache) without any modifications.

Repository Rules

Each Stage has a settings tab called Repository Rules. These rules apply when checking out the source from the agent's repository cache(s) to the build workspace. Only check out the source you need; this will improve performance. If a stage doesn't need the source at all (for example, it's only working with artifacts from previous stages), then just blank out the Repository Rules field.

Don't leave logging of the repository rules turned on unless you are debugging the rules. Logging the files exported to the workspace can be a real performance killer.

Workspace Rules

Similar to Repository Rules, these rules control which files are transferred between the server and agent's build workspace folders, and back again. Only transfer files back to the server's workspace that you actually need, like build artifacts, reports etc.

Don't leave logging of the workspace rules turned on unless you are debugging the rules. Logging the files transferred can be a real performance killer.

Actions

Avoid logging too much information. For example, verbose logging on MSBuild should be avoided unless debugging build issues. Output logged from actions is queued and sent back to the server to be written to the build log, this causes high network and disk I/O.

Disk Space

Disk space is quite often at a premium (especially with SSD's), and it's important to keep on top of it. This is where the Clean up Policies come into play. Continua CI allows you to specify a global clean up policy for both the server and the agents, however it can be overridden at the Project or Configuration level. The clean up policy controls how long to keep old builds and their associated workspaces around. The clean up policy is highly configurable - use it to keep control over disk space. Bear in mind that the work of cleaning up old builds is quite I/O and database intensive, so be sure to schedule it to run during a quiet period.

Anti-virus Software

Anti-virus software can be a major performance killer, and in some instances, an application killer. If I had a dollar for every time anti-virus software turned out to be the cause of a problem with Continua CI or FinalBuilder, well that would be some serious beer money at least!

If you have anti-virus software installed on your server or agents, be sure to add exclusions from real-time scanning for the server's share folder, and the agent's workspace folder. Add scheduled scans on those folders instead. Also, when using the bundled PostgreSQL database, add an exclusion for C:\ProgramData\VSoft\ContinuaCI\PostgreSQLDB  - otherwise you may experience database corruption.

You should also consider adding an exclusion for hg.exe in the "C:\Program Files\VSoft Technologies\ContinuaCI Agent\hg" folder. We found in testing (with Windows Defender) that this speeds up the processing of the repository rules substantially.
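
If you are using Windows Defender, these exclusions can also be scripted with PowerShell (the paths below are examples only - adjust them to your share and workspace locations):

    Add-MpPreference -ExclusionPath "D:\ContinuaCI\Share"
    Add-MpPreference -ExclusionPath "D:\ContinuaCI\Ws"
    Add-MpPreference -ExclusionPath "C:\ProgramData\VSoft\ContinuaCI\PostgreSQLDB"
    Add-MpPreference -ExclusionProcess "C:\Program Files\VSoft Technologies\ContinuaCI Agent\hg\hg.exe"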

Version Control Clients

Avoid installing tools like TortoiseSVN or TortoiseHg on your server or agent machines, as these programs do background indexing (for icon overlays) and can also cause file/folder access issues.

Wrapping Up

I intend to revise this post as I learn more about performance tuning, especially in a virtual environment. If you have any techniques or tweaks that helped speed up your CI Server please feel free to share them with us (and fellow users).

In this post I'm going to look at Windows Manifest Files, what they do, why we need them and how to use them in Delphi and FinalBuilder.

We often get asked questions about UAC prompts, High DPI settings, Windows Themes etc when compiling Delphi & C++Builder projects in FinalBuilder. In this post we'll dissect windows manifest files, and look at how the project settings in Rad Studio interact with the manifest file, and why you should use a custom manifest file.

What is a manifest file and what's it for?

A manifest is an xml file, which is typically embedded as a resource in your native windows executable (x86 or x64) - it can be a separate file, but that's not good practice and is not recommended. This file has information that tells windows (vista or later) what parts of windows it supports, what permissions it needs (UAC), and what common control dependencies it has (Windows Side-by-Side loading).

Without a manifest resource, windows has no idea what permissions your application needs, and will not treat your application kindly. You will find things like common controls (file dialogs etc) looking strange, attempts to write to files failing, and other general misery. In fact your application may fail to run at all on some systems.

Ok, so now we know we really need a manifest - what does one look like?

<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
    <assemblyIdentity 
        name="VSoftTechnologies.FinalBuilder" 
        processorArchitecture="x86" 
        version="8.0.0.0" 
        type="win32"
    />
    <description>FinalBuilder is a GUI-based build automation tool for Windows developers.</description>
    <dependency>
        <dependentAssembly>
            <assemblyIdentity 
                type="win32"    
                name="Microsoft.Windows.Common-Controls" 
                version="6.0.0.0"
                processorArchitecture="x86" 
                publicKeyToken="6595b64144ccf1df" 
                language="*" 
            />
        </dependentAssembly>
    </dependency>
    <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
        <security>
            <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="false" />
            </requestedPrivileges>
        </security>
    </trustInfo>
    <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
        <application>
            <!-- Windows Vista -->
            <supportedOS Id="{e2011457-1546-43c5-a5fe-008deee3d3f0}"/> 
            <!-- Windows 7 -->
            <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
            <!-- Windows 8 -->
            <supportedOS Id="{4a2f28e3-53b9-4441-ba9c-d69d4a4a6e38}"/>
            <!-- Windows 8.1 -->
            <supportedOS Id="{1f676c76-80e1-4239-95bb-83d0f6d0da78}"/>
            <!-- Windows 10 -->
            <supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}"/>
        </application>
    </compatibility>
</assembly>

 

The above example is actually the manifest file used in the FinalBuilder IDE. Let's break it down.

 

AssemblyIdentity

The assemblyIdentity element tells windows about your application and what the system architecture requirements are. This should be unique. Note that the type, name and version attributes are required. Typically, the version field just has the major version info; I guess you could update it with the exact version each time, but I've never found the need to do that.

Description

The description field is pretty self explanatory.

Dependency

The dependency section is where you describe the side by side dll dependencies, which for Delphi/C++ Builder applications means windows common controls. This is pretty standard stuff, but it's important because without it, your application will have pretty strange looking file dialogs, if it loads at all.

TrustInfo

The trustInfo section is all about security; it tells windows what sort of permissions your application should be given. The requestedExecutionLevel level attribute has 3 possible values :

  • asInvoker - requesting no additional permissions. This level requires no additional trust prompts.
  • highestAvailable - requesting the highest permissions available to the parent process. When a standard user runs the application, it will behave the same as asInvoker, ie no UAC prompt. If an administrator runs the application, they will see a UAC prompt.
  • requireAdministrator - requesting full administrator permissions. All users will see a UAC prompt, standard users will be required to enter an administrator password.

 

The uiAccess attribute indicates whether the application requires access to protected user interface elements. Most of the time, this should be set to false. The typical use case for setting it to true is when creating remote desktop style applications (like TeamViewer etc). If you do set it to true, your application needs to be signed - see this post on code signing in FinalBuilder (https://www.finalbuilder.com/resources/blogs/code-signing-changes-for-2016).

 

High DPI support.

I'm not going to go into detail on this; it's a complex issue with major differences between Windows versions, and limited High DPI support in Delphi. I will say, think very carefully before you enable this - High DPI support in Delphi depends very much on the version of Delphi, and on third party control support. Don't just enable High DPI support without serious testing. See the MSDN documentation link at the bottom of this post.

Compatibility

The compatibility section tells windows what versions of windows your application supports. It enables windows functionality in your application. Manifests without a compatibility section default to Windows Vista level functionality.

One of the areas where the compatibility section is very important is detecting the windows version. On Windows 8.1 and 10, in applications that do not specify compatibility with them, GetVersion and GetVersionEx will return 6.2.0.0 for the windows version, rather than 6.3.* for Windows 8.1 and 10.0.* for Windows 10. So your application will think it's running on Windows 8 (or Server 2012) and quite possibly disable functionality.

Don't use Rad Studio's default or auto generated manifest!

I'll explain why in a minute, but first let's look at Delphi's support for manifests. Delphi's manifest support differs quite a lot depending on which version of Delphi you are using. I don't have every version installed to check on.

The earliest version I have installed at the moment is D2010, which just has a cryptically named check box :

This option will include a default manifest in the projectname.res file. No options to change anything about that manifest. The manifest included is woefully inadequate.

Fast forward to XE7 :

 

 

Things got slightly better (not sure in which XE? version though) as you can now point Rad Studio at a custom manifest file (and you should!) to be included with your application. The default manifest included is still woefully inadequate.

Fast forward again, to Seattle:

 

 

Things look different again here, now you can set the badly named "Enable High DPI" and "Enable Administrator Privileges" options. I say badly named, because that's not what those options do. Checking "Enable High DPI" won't make your application support High DPI, it just tells windows it does (when really, it doesn't unless embarcadero fixed it while I wasn't looking). The same applies to the "Enable Administrator Privileges" - it won't give your application Administrator Privileges, it just tells windows your application needs them to run. Semantics shemantics... but I know this has confused many a developer.

Note that the auto generated manifest uses a template, default_app.manifest, which lives in the bin folder (typically under Program Files, so you might need admin access to modify it). It's probably a bad idea to modify it anyway, as it will result in a "works on my machine" moment. This template is different in Seattle and later, as it has some "variables" that get substituted by the IDE when building; this file can't be used when building with FinalBuilder, as we have no way to get at those variables.

The manifest included is slightly better than before, but still inadequate.

Berlin & Tokyo manifest options are the same as Seattle, just some layout/styling changes. The auto generated manifests have the same limitations as Seattle.

So I said earlier, "Don't use Rad Studio's default or auto generated manifest", and said that the auto generated manifests are inadequate. Here's why: they are simply missing information.

There's no assemblyIdentity element, which according to Microsoft is required. There's no description element. The High DPI option just sets the dpiAware element to "True/PM", or not at all. You should use a manifest file that is specific to and reflects your application.

For the versions of Rad Studio that support specifying a custom manifest file, just do that. For versions without custom manifest support, uncheck the "Enable runtime themes" option, and add a resource to your project that includes the manifest :

Example manifest.rc

1 24 "E:\\Source\\app\\myapp.manifest"

Using Manifests in FinalBuilder

FinalBuilder has had support for manifest files for a long time (2007, in FB 5), way before Delphi mentioned the word manifest! On the resource compiler tab of the Delphi and C++Builder actions, there is a field to specify the manifest file.

That's all there is to it, FinalBuilder will add that manifest to the projectname.res file (along with the icon and version info).

One last thing, don't forget to add your custom manifest file to your version control, it's source code after all.

References :

MSDN - Application Manifests

Over the last year or so, we have seen more and more "bug reports" about compiling Delphi projects with FinalBuilder, in particular, reporting issues with compiling version info resources when using Delphi 10.1 (Berlin) and Delphi 10.2 (Tokyo).

This only happens if you tell the Delphi action in FinalBuilder to load version info from the dproj.

delphi action settings

The error typically looks something like this :

resource compiler error

or this :

resource compiler error

"What's this ModuleName variable?? What's com.embarcadero.$(ModuleName) ? I didn't put that there...."

Or "what's this MSBuildProjectName variable?"

This all stems from Embarcadero's support for non-Windows platforms (in this case I would suggest Android). The default values for version info in Berlin and later have "$(ModuleName)" in the FileDescription, ProductName and ProgramID fields.

default version info

This of course is nonsense for the Windows platform, but some bright new embarcadero hire obviously thought this would be a good idea (well that's what I think, who knows what was behind the decision). The problem is, $(ModuleName) is completely unknown outside of the IDE or MSBuild. So when we compile from the command line (using dcc32, dcc64 etc), if your Delphi Action is set to read the version values from the dproj, then you will get this error.

There are three options to resolve this :

  1. Change the FileDescription field in your delphi project settings to something more meaningful, rather than an ad for Embarcadero, like ummm, oh I know, your product name!
  2. Define the ModuleName variable in FinalBuilder - but this still gets the FileDescription field saying com.embarcadero.YourProductName (hey, it's not a java app!).
  3. Don't load the version info from the dproj, let FinalBuilder handle it completely. After all, you are using FinalBuilder to create your production builds, and version information is part of the release process, not the development process. *This would be my recommended option.*

Update 25 Sept 2017 : Microsoft have closed our bug report with a Won't Fix status... seems they are too busy with other things.

The recent Visual Studio 2017 Update (also known as VS 15.3)  introduced a problem with command line compilation when the Lightweight Solution Load feature is enabled for the solution. 

If you are using devenv (ie you have the Use MSBuild option unchecked in FinalBuilder), and have the action set to Rebuild... be aware that while the action will succeed, nothing actually gets compiled!
Just to be clear, this is not a FinalBuilder (or Continua CI) problem :

"C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\devenv.com" /rebuild "Release|Any CPU" ConsoleApps.sln
 
Microsoft Visual Studio 2017 Version 15.0.26730.3.
Copyright (C) Microsoft Corp. All rights reserved.
========== Rebuild All: 0 succeeded, 0 failed, 0 skipped ==========

Turning off Lightweight Solution Load on the solution (don't forget to save the solution) results in proper compilation :

"C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\devenv.com" /rebuild "Release|Any CPU" ConsoleApps.sln
 
Microsoft Visual Studio 2017 Version 15.0.26730.3.
Copyright (C) Microsoft Corp. All rights reserved.
1>------ Rebuild All started: Project: ConsoleApps1, Configuration: Release Any CPU ------
2>------ Rebuild All started: Project: ConsoleApp2, Configuration: Release Any CPU ------
3>------ Rebuild All started: Project: ConsoleApp3, Configuration: Release Any CPU ------
4>------ Rebuild All started: Project: ConsoleApp4, Configuration: Release Any CPU ------
1>  ConsoleApps1 -> C:\Users\vincent.OFFICE\Documents\Visual Studio 2017\Projects\ConsoleApps\ConsoleApps1\bin\Release\ConsoleApps1.exe
2>  ConsoleApp2 -> C:\Users\vincent.OFFICE\Documents\Visual Studio 2017\Projects\ConsoleApps\ConsoleApp2\bin\Release\ConsoleApp2.exe
3>  ConsoleApp3 -> C:\Users\vincent.OFFICE\Documents\Visual Studio 2017\Projects\ConsoleApps\ConsoleApp3\bin\Release\ConsoleApp3.exe
4>  ConsoleApp4 -> C:\Users\vincent.OFFICE\Documents\Visual Studio 2017\Projects\ConsoleApps\ConsoleApp4\bin\Release\ConsoleApp4.exe
========== Rebuild All: 4 succeeded, 0 failed, 0 skipped ==========

MSBuild is not affected by this bug (reported to Microsoft here). Unless you have a very good reason for using devenv to compile visual studio solutions these days, just use MSBuild.

 

TL;DR - The Delphi language is very verbose, dated and unattractive to younger developers. Suggestions for improvements below.

The Delphi/Object Pascal language really hasn't changed all that much in the last 20 years. Yes there have been some changes, but they were mostly just tinkering around the edges. Probably the biggest change was the addition of Generics and Anonymous methods. Those two language features alone enabled a raft of libraries that were simply not possible before, for example auto mocking (Delphi Mocks, DSharp), dependency injection and advanced collections (Spring4D).

Some of the features I list below have the potential to spur on the development of other new libraries, which can only be a good thing. I have several abandoned projects on my hard drive that were only abandoned because what I wanted to do required language features that just didn't exist in Delphi, or in some cases, the generics implementation fell short of what was needed. Many of these potential features would help reduce verbosity, which helps with maintainability. These days I prefer to write fewer lines of more expressive code.

I have tried to focus on language enhancements that would have zero impact on existing code, i.e. they are completely optional. I have often seen comments about language features where people don't want the c#/java/whatever features polluting their pure pascal code. My answer to that is, if you don't like it don't use it!! There are many features in Delphi that I don't like, I just don't use them, but that doesn't mean they should be removed. Each to his own and all that. Another argument put forward is "feature x is just syntactic sugar, we don't need it". That's true, in fact we don't need any language if we are ok with writing binary code! If a feature is sugar, and it helps me write less code, and it's easier to read/comprehend/maintain, then I love sugar, load me up with sugar.

Inspiration

Lots of the examples below borrow syntax from other languages, rather than trying to invent some contrived "pascalish" syntax. The reality is that most developers these days don't just work with one programming language, they switch between several (I use Delphi, C#, JavaScript & TypeScript on a daily basis, with others thrown in on occasion as needed). Trying to invent a syntax just to be different is pointless, just borrow ideas as needed from other languages (just like they borrowed from Delphi in the past!) and get on with it!

I have not listed any functional programming features here; I have yet to spend any real time with functional programming, so I won't even attempt to offer suggestions.

I don't have any suggestions for how any of these features would be implemented in the compiler - having never written a real compiler, I know my limitations!

Ok, so let's get to it. These are not listed in any particular order.

Local Variable Initialisation

Allow initialisation of local variables when declaring them, eg :

	procedure Something;
	var
	  x : integer = 99;
	begin
	.........
	

Benefits : One less line of code to write/maintain per variable, the initial value is shown next to the declaration, easier to read.

Type Inference

	var
	  x = 99; //it's an integer
	  s = 'hello world'; //it's a string
	begin
	........
	

Benefits : less ceremony when declaring variables, still easy to understand.

Inline variable declaration, with type inference and block scope

	procedure Blah;
	begin
	  .......
	  var x : TStrings := TStringList.Create; //no type inference
	  //or
	  var x := TStringList.Create; // it's a TStringList, no need to declare the type
	  .......
	end;
	

Inline declared variables should have block scope :

	if test = 0 then
	begin
	  var x := TStringList.Create;
	  ....
	end;
	x.Add('bzzzzz'); //Compiler error, x not known here!
	

Benefits : Declare variables when they are needed, makes it easier to read/maintain as it results in less eye movement, block scope reduces unintended variable reuse.

Loop variable inline declaration

Declare your loop or iteration variable inline ( and they would have loop block scope)

	for var item in collection do
	begin
	  item.Foo;
	end;
	item.Bar; //<<error item unknown.
	
	for var i : integer := 0 to count do
	  ....
	//or
	for var i := 0 to count do //using type inference
	  ....
	

Benefits : Avoids the old "loop variable value not available outside the loop" error, plus the same benefits as inline declaration/block scope etc.

Shortcut property declaration

Creating properties that don't have getter/setter methods is unnecessarily verbose in Delphi :

	type
	  TMyClass = class
	  private
	    FName : string;
	  public
	    property Name : string read FName write FName;
	  end;
	

All that is really needed is :

	type
	  TMyClass = class
	  public
	    property Name : string;
	  end;
	

Whilst this might seem the same as declaring a public variable, RTTI would be generated differently if it was a variable rather than a property.

Benefits : Cuts down on boilerplate code.

Interface Helpers

Add interface helpers, just like we have for classes and records. Also, remove the limit of one helper per type per unit (without removing that limit, helpers are not really usable for this). Have a look at how prevalent extension methods are in C#; Linq is a prime example, it's essentially just a bunch of extension methods.

	type
	  TIDatabaseConnectionHelper = interface helper for IDatabaseConnection
	    function Query<T> : IQueryable<T>;
	  end;
	

The above is actually a class that extends the interface when its containing unit is used. This would make implementing something like LINQ possible.

Strings (and other non ordinals) in Case Statements

This has to be the most requested feature by far in the history of Delphi imho, sure to make many long-time Delphi fans happy.

	case s of
	  'hello' : x := 1;
	  'goodbye' : x := 2;
	  sWorld : x := 3; //sWorld is a string constant
	end;
	

Case sensitivity could possibly be dealt with at compile time via a CaseSensitive 'helper' (or perhaps an attribute). The above example is case insensitive, the example below is case sensitive :

	case s.CaseSensitive of
	  'hello' : x := 1;
	  'goodbye' : x := 2;
	  sWorld : x := 3; //sWorld is a string constant
	end;
    //or
	case [CaseSensitive]s of
	  'hello' : x := 1;
	  'goodbye' : x := 2;
	  sWorld : x := 3; //sWorld is a string constant
	end;
	

How about :

	case x.ClassType of
	  TBar : x := 1;
	  TFoo : x := 2;
	end;
	

Benefits : Simpler, less verbose code.

Ternary Operator

A simpler way of writing :

	if y = 0 then
	  x := 0
	else
	  x := 99;
	

Syntax : x := bool expr ? true value : false value

	//eg
	x := y = 0 ? 0 : 99;
	

Benefits : Simpler, more succinct code.

Try/Except/Finally

Probably tied for the most requested language feature: allowing try/except/finally without nesting try blocks.

	try
	...
	except
	..
	finally
	...
	end;
	

Much cleaner than:

	try
	  try
	    ...
	  except
	    ...
	  end;
	finally
	  ....
	end;
	

Benefits : Neater, Tidier, Nicer

Named Arguments

Say for example, we have this procedure, with more than one optional parameter :

	procedure TMyClass.DoSomething(const param1 : string; const param2 : ISomething = nil; const param3 : integer = 0);
	

To call this method, if I want to pass a value for param3, I also have to pass a value for param2 :

	x.DoSomething('p1', nil,99);
	

With named parameters, this could be :

	x.DoSomething('p1', param3 = 99);
	

Yes, this means I have to type more, but it's much more readable down the track when maintaining the code. I also don't need to look up the order of the parameters, or provide redundant values for parameters where I just want the defaults. And I don't end up adding a bunch of overloaded methods just to make the method easier to call.

Benefits : More expressive method calls, less overloads.

Variable method arguments

Steal this from C# (which probably borrowed the idea from the C/C++ varargs ... feature).

	procedure DoSomething(params x : array of integer);
	

The method can be called passing either an array parameter, or a variable number of individual arguments :

	procedure TestDoSomething;
	var
	   p : array of integer;
	begin
	...... //(fill out the p array)
	  DoSomething(p);
	  //or
	  DoSomething(1);
	  DoSomething(1,2);
	  DoSomething(1,2,3);
	  ........
	

Benefits : Flexibility in how a method is called.

Lambdas

Delphi's anonymous method syntax is just too damned verbose. Wouldn't you prefer to write/read this:

	var rate := rates.SingleOrDefault(x => x.Currency = currency.Code);
	

rather than this :

	var
	  rate : IRate;
	begin
	  rate := rates.SingleOrDefault(function(x : IRate) : boolean
	                                begin
	                                  result := x.Currency = currency.Code;
	                                end);
	...
	

Ok, so I did sneak an inline type-inferred local var into the first example, and both snippets rely on the existence of interface helpers (rates would be IEnumerable<IRate>) ;)

Benefits : Less code to write/maintain, the smiles on the faces of developers who switch between delphi and c# all the time!

LINQ!

With lambdas and multiple type helpers per type per unit, a LINQ-like feature would be possible. Whilst the Delphi Spring library has had Linq-like extensions to IEnumerable<T> for a while, this would formalise the interfaces and provide an implementation used by the collection classes. Providers for XML and databases (eg FireDac) would be possible.

Even a Linq 2 VCL & Linq 2 FMX would be possible. A common scenario is updating a bunch of controls based on some state change :

	Self.Controls.Where( x => x.HasPropertyWithValue<boolean>('ForceUpdate', true) or x.IsType<IForceUpdate>).Do( c => c.Refresh);  //or something like that.
	

This would make it possible to operate on controls on a form, without a) knowing their names, b) having references to them, or c) knowing if the control actually exists.

Being able to do this with something like a LINQ expression would be a massive improvement - all that's needed is a LINQ provider for each control framework. Sorta like jQuery for the VCL/FMX!

Benefits : Too many to list here!

Caveats : Microsoft have a patent on LINQ - so perhaps this isn't really doable? Perhaps with a slightly different syntax. Or challenge the patent!

Async/Await

Bake the parallel library features into the language/compilers, much like C# and other languages have done over recent years (a la async/await). That makes it possible/easier to write highly scalable servers over the top of asynchronous I/O. Yes, it's technically possible now, but it's so damned difficult to get right (and prove it's right) that very few have attempted it. This would also make it possible to use the Promise pattern, something along these lines - https://github.com/Real-Serious-Games/C-Sharp-Promise - to handle async tasks.
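Purely as an illustration of what that might look like in Delphi (this is invented syntax - the async/await keywords, the TTask<T> return type and the http client below are placeholders, nothing that exists today), and it also leans on the inline variable declarations proposed earlier :

	//hypothetical syntax - none of this compiles in current Delphi
	function TUserService.GetUserName(const id : integer) : TTask<string>; async;
	begin
	  //await would hand control back to the caller until the http call completes
	  var response := await FHttpClient.GetAsync('https://example.org/users/' + IntToStr(id));
	  result := response.Body;
	end;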

Benefits : Write scalable, task oriented server code without messing with low level threading; write responsive, non blocking client code.

Caveats : Microsoft have a patent on Async/Await

Non reference counted interfaces

Make it possible to have interfaces without reference counting. I use interfaces a lot, even on forms and frames - I like to limit what surface is exposed when passing these objects around, but it can be painful when the compiler tries to call _Release on a form that is being destroyed. Obviously there are ways to deal with this (careful clean up) but it's an easy trap to fall into and very difficult to debug.

Possible syntax :

	[NoRefCount] // << decorate interface with attribute to tell compiler reference counting not required.
	IMyInterface = interface
	....
	end;
	

There would have to be some limits imposed (using the next feature in this list), for example not allowing the attribute on an interface that descends from another interface which is reference counted. It would cause mayhem if such an interface were passed to a method that takes the base interface (for which the compiler would then try to generate AddRef/Release calls).

Benefits : Remove the overhead of reference counting, avoid unwanted side effects from compiler generated _Release calls.

Attribute Constraints

(RSP-13322)

Delphi attributes do not currently have any constraint feature that allows the developer to limit their use, so for example an attribute designed to be applied to a class can currently be applied to a method.
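Something along the lines of C#'s AttributeUsage would do the job. A hypothetical sketch (the AttributeUsage attribute and TAttributeTargets type below are made up for illustration) :

	type
	  //hypothetical - tells the compiler this attribute may only be applied to classes
	  [AttributeUsage(TAttributeTargets.Classes)]
	  TableAttribute = class(TCustomAttribute)
	    ....
	  end;

Applying [Table] to a method would then be a compile time error, rather than being silently accepted as it is today.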

Operator overloading on classes

Currently operator overloading is only available on records; adding it to classes would make it possible to create some interesting libraries (DSLs even).

I had discussions with Allen Bauer (and others) about this many years ago. Memory management was always the stumbling block; with ARC this would not be a big issue. Even without ARC, I think Delphi people are capable of dealing with memory management requirements, just like we always have.

Edit : I believe this feature is actually available on the ARC enabled compilers.
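The syntax could simply mirror what records already have. A rough sketch (TMoney is just an illustration) :

	type
	  TMoney = class
	  private
	    FAmount : Currency;
	  public
	    //class operators like these are currently only legal on record types
	    class operator Add(const left, right : TMoney) : TMoney;
	    class operator Equal(const left, right : TMoney) : boolean;
	  end;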

Improve Generic Constraints

Add additional constraint types to generics, for example :

	type
	  TToken<T:enum> = class
	    .....
	  end;
	

Benefits : more type safety, less runtime checking code required.

Fix IEnumerable<T>

Not so much a language change as a library change - please borrow Spring4D's version, which is much easier to implement in one class (Delphi's version is impossible to implement in one class as it confuses the compiler!).

Yield return - Iterator blocks

The yield keyword in C# basically generates Enumerator implementations at compile time, which allows the developer to avoid writing a bunch of enumerator classes, which are essentially simple state machines. These enumerators should be relatively easy for the compiler to generate.

This is a good example (sourced from this stackoverflow post) :

	public IEnumerable<T> Read<T>(string sql, Func<IDataReader, T> make, params object[] parms)
	{
	  using (var connection = CreateConnection())
	  {
	    using (var command = CreateCommand(CommandType.Text, sql, connection, parms))
	    {
	      command.CommandTimeout = dataBaseSettings.ReadCommandTimeout;
	      using (var reader = command.ExecuteReader())
	      {
	        while (reader.Read())
	        {
	          yield return make(reader);
	        }
	      }
	    }
	  }
	}
	

The example above will read the records in from the database as they are consumed, rather than reading them all into a collection first and then returning the collection. This is far more memory efficient when a lot of rows are returned. In essence, the consumer/caller is "pulling" the records from the database when required - this could, for example, be pulling messages from a queue.

I guess for Delphi, this could be a simple Yield() method (or YieldExit() to be more explicit).
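A very rough sketch of what a Delphi equivalent of the example above might look like (invented syntax; the repository, connection and reader types are placeholders) :

	//hypothetical - the compiler would turn this into an enumerator state machine
	function TRepository.Read(const sql : string) : IEnumerable<IDataRecord>;
	begin
	  var reader := FConnection.ExecuteReader(sql);
	  while reader.Next do
	    Yield(reader.Current); //hand each record back to the caller as it is consumed
	end;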

Benefits : less boilerplate enumerator code, lower memory usage when working with large or unbounded datasets or queues etc.

Partial classes

Yes I know, this one is sure to kick up a storm of complaints (I recall an epic thread about this topic on the newsgroups years ago). Like everything else, if you don't like it, don't use it. I don't often use this feature in C#, but it is indispensable for one particular scenario: working with generated code. Imagine generating code from an external model or tool, for example type libraries, UML tools, database schemas, ORM, IDL etc. The generated code would typically be full of warning comments about how it's generated code and shouldn't be modified. Partial classes get around this by allowing you to add code to the generated classes in a separate file - the next time they are regenerated, your added code remains intact. Simple, effective.
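Borrowing C#'s keyword, it might look something like this (hypothetical syntax, glossing over exactly how the parts would be split across units or include files) :

	//Customer.Generated.pas - regenerated by a tool, never edited by hand
	type
	  TCustomer = partial class
	  private
	    FId : integer;
	    FName : string;
	  end;
	
	//Customer.pas - hand written, survives regeneration
	type
	  TCustomer = partial class
	  public
	    function DisplayName : string;
	  end;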

Benefits : Enables code generation scenarios without requiring base classes.

Allow Multiple Uses clauses

(RSP-13777)

This is something that would make commenting/uncommenting units from the uses clause a lot easier, and also make tooling of the uses clause easier.

	uses
	sharemem, System.SysUtils, System.Classes, vcl.graphics{$ifdef debug},logger{$endif};
	

The above syntax is easily messed up by the IDE.

	uses sharemem;
	uses System.SysUtils, System.Classes;
	uses vcl.graphics;
	{$ifdef debug}uses logger;{$endif}
	

This makes commenting out and reorganising unit names much more convenient. It would also make refactoring and other tooling easier.

Allow non-parameterized interfaces to have parameterized methods

(RSP-13725)

	IContainer = interface
	['{2B7B3956-7101-4619-A6DA-C8AF61EE4A81}']
	  function Resolve<T>: T;
	end;
	

That won't compile, but this code works as expected:

	TContainer = class
	  function Resolve<T>: T;
	end;
    

This is a limitation I have come across many times when trying to port code from C# to Delphi.

Conclusion

That's 20+ useful, optional and simple to implement (just kidding) language features that I would personally like to see in Delphi (and I would use every one of them). I could have gone on adding a lot more features, but I think these are enough to make my point.

Here's my suggestion to Embarcadero: invest in the language and make up for lost time! While you are at it, sponsor some developers to try porting some interesting and complex open source libraries to Delphi, and when they hit roadblocks in the language or compiler (and they will), make it work, and iterate until baked. I tried porting ReactiveX http://reactivex.io/ to Delphi a while back, but hit too many roadblocks in the language and compiler (internal compiler errors, limitations in the generics implementation).

The end result would be a modern, capable language that can handle anything thrown at it.