VSoft Technologies BlogsVSoft Technologies Blogs - posts about our products and software development.https://www.finalbuilder.com/resources/blogsNew VSoft Forumshttps://www.finalbuilder.com/resources/blogs/postid/831/new-vsoft-forumsDelphi,General,Web DevelopmentMon, 10 Sep 2018 15:13:04 GMT<p>TLDR; Our forums have moved to <a href="https://www.finalbuilder.com/forums">https://www.finalbuilder.com/forums</a></p> <p>After years of frustration with Active Forums on DotNetNuke, we finally got around to moving to a new forums platform. </p> <p>The old forums had zero facilities for dealing with spammers, and sure enough, every day spammers would register on the website and post spam on the forums. Even after turning on email verification (where registration required verifying your email), spammers would verify their emails and post spam.</p> <p>The old forums were also terrible at handling images, code markup, etc., and would often completely mangle any content you pasted in.</p> <p>So the hunt was on for a new platform. I've lost count of the number of different forum products I've looked at over the years, none of which totally satisfied my needs/wants. I've even contemplated writing my own, but I have little enough free time as it is, and would much rather focus on our products. </p> <p><a href="https://discourse.org">Discourse</a> looked interesting, so I installed it on an Ubuntu Server 18.04 virtual machine (it runs in a Docker container). After some initial trouble with email configuration (it didn't handle subdomains properly) it was up and running. I'm not great with Linux; I've tinkered with it many times over the years but never really used it for any length of time. I was a little apprehensive about installing Discourse, however their guide is pretty good and I managed just fine. </p> <p>The default settings are pretty good, but it is easy to configure. 
After experimenting with it for a few days (there are a LOT of options), we liked it a lot, and decided to go with it. </p> <p>Discourse is excellent at handling bad markup; I'm astounded at how well it deals with malformed HTML and just renders a good-looking post (most of the time). Inserting images is a breeze, the editor will accept Markdown or HTML, and gives an accurate preview while you are writing a post. Posting code snippets works well using the same markup as GitHub, with syntax highlighting for a bunch of languages (C#, Delphi, JavaScript, VBScript, XML, etc.). The preview makes it easy to tell when you have things just right. Discourse also works very well on mobile, although our website does not (the login page is usable) - more work to be done there (like a whole new site!). </p> <p>Discourse is open source (GPL), so you can either host it yourself (free) or let Discourse.org host it for you (paid, starting at $100 per month). Since we had spare capacity on our web servers (which run Hyper-V 2016) we chose to host it ourselves. That was 11 days ago. </p> <p>My goal was to import the content from the old forums; there are 12 years of valuable posts there which I was loath to lose. </p> <p>The first challenge was that Discourse requires unique emails, and our DotNetNuke install did not. After 12 years of upgrades, our database was in a bit of a sorry state. There were many duplicate accounts (some users had 5 accounts); I guess if you can't remember your login you just create a new one, right? I can't totally blame users for that; the password reset email system was unreliable in the past (it should be ok now, check your spam folder!). So we cleaned up the database and removed old accounts that had no licenses and no forum posts. </p> <p>The next challenge was enabling single sign-on with the website. 
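For the curious, Discourse's SSO protocol is pleasantly simple: each side exchanges a base64-encoded query string signed with HMAC-SHA256 using a shared secret. Here's a rough Python sketch of the two halves - to be clear, this is not our actual DotNetNuke extension (which is C#), and the field values are made up:

```python
import base64
import hashlib
import hmac
import urllib.parse

SECRET = b"sso-shared-secret"  # hypothetical; matches the secret configured in Discourse

def validate_sso(sso, sig):
    """Check an incoming payload really came from Discourse, return its nonce."""
    expected = hmac.new(SECRET, sso.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise ValueError("bad SSO signature")
    fields = urllib.parse.parse_qs(base64.b64decode(sso).decode())
    return fields["nonce"][0]

def build_sso_response(nonce, external_id, email, username):
    """Sign the logged-in user's details to send back to Discourse."""
    payload = urllib.parse.urlencode({
        "nonce": nonce,
        "external_id": external_id,
        "email": email,
        "username": username,
    })
    b64 = base64.b64encode(payload.encode()).decode()
    sig = hmac.new(SECRET, b64.encode(), hashlib.sha256).hexdigest()
    return {"sso": b64, "sig": sig}
```

The real flow also involves redirecting the browser back to Discourse with those two values as query parameters, but the signing scheme above is the heart of it.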
Someone had written a DotNetNuke extension for it, but I wasn't able to get it working (it was written for an older version), so I spent 2 days writing my own (and almost losing the will to live!). Once that was sorted, I got to work on importing the data. Discourse does have a bunch of import scripts on <a href="https://github.com/discourse/discourse/tree/master/script/import_scripts">GitHub</a> - none of which are for DotNetNuke, and they are all written in Ruby (which I have zero experience with). Fortunately, Discourse does have a <a href="https://docs.discourse.org/">REST API</a> - so using C# (with Dapper & RestSharp) I set about writing a tool to do the import. Since Discourse doesn't allow you to permanently delete topics, this needed to work first time, and be restartable when an error occurred. This took 4 days to write, much of which was just figuring out how to get past the rate limits Discourse imposes. I did this all locally with a backup of the website DB and a local Discourse instance. The import took several hours, with many restarts (usually due to bad content in the old forums, topics too short, etc.). </p> <p>Backing up the local instance of Discourse was trivial, as was restoring it on the remote server (in LA). We did have to spend several hours fixing a bunch of posts, and then some time with SQL fixing dates (editing a post sends it to the top of a category). I also had to SSH into the container to "rebake" the posts to fix image URL issues. Fortunately there is a wealth of info on <a href="https://meta.discourse.org">Discourse's own forums</a> - and search works really well!</p> <p>We chose not to migrate the FinalBuilder Server forum (the product was discontinued in 2013) or the Action Studio forum (which gets very few posts).  
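In case you're wondering what "getting past the rate limits" looks like in practice: when Discourse decides you're posting too fast it returns HTTP 429, and the trick is to back off (honouring the Retry-After header when present) and retry. A simplified Python sketch of the idea - the real tool was C#, and the endpoint/auth headers here follow the current Discourse API docs, so treat the details as illustrative:

```python
import time
import urllib.error
import urllib.parse
import urllib.request

class RateLimited(Exception):
    """Raised when the server answers 429 Too Many Requests."""
    def __init__(self, retry_after=None):
        self.retry_after = retry_after

def next_delay(attempt, retry_after=None, base=2.0, cap=60.0):
    # Prefer the server's Retry-After hint, otherwise exponential backoff.
    if retry_after is not None:
        return min(float(retry_after), cap)
    return min(base ** attempt, cap)

def call_with_backoff(fn, max_attempts=8, sleep=time.sleep):
    # Retry fn until it succeeds or we run out of attempts.
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimited as e:
            sleep(next_delay(attempt, e.retry_after))
    raise RuntimeError("gave up after repeated 429 responses")

def create_post(base_url, api_key, api_user, title, raw):
    # Create a topic via Discourse's POST /posts.json endpoint.
    data = urllib.parse.urlencode({"title": title, "raw": raw}).encode()
    def attempt():
        req = urllib.request.Request(
            base_url + "/posts.json", data=data,
            headers={"Api-Key": api_key, "Api-Username": api_user})
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            if e.code == 429:
                raise RateLimited(e.headers.get("Retry-After"))
            raise
    return call_with_backoff(attempt)
```

Combine that with recording the last successfully imported topic id somewhere persistent, and you get the "restartable after an error" behaviour the import needed.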
</p> <p style="text-align: center;"><img src="/blogimages/vincent/new-forums/discourse.png" /> </p> <p>I'm sure we'll still be tweaking the forums over the next few weeks, but on the whole we are pretty happy with how they are. Let us know what you think (in the <a href="https://www.finalbuilder.com/forums/c/site-feedback">Site Feedback</a> forum!).</p> 831Introducing Continua CI Version 1.9https://www.finalbuilder.com/resources/blogs/postid/782/introducing-continua-ci-version-19.NET,Continua CI,Delphi,General,Web Development,WindowsTue, 14 Aug 2018 13:47:08 GMT<p><img alt="" src="/blogimages/dave/ContinuaCIWizardImageSmall.png" style="border-width: 0px; border-style: solid; margin-right: 5px; margin-left: 5px; width: 55px; height: 55px;" /></p> <p>Version 1.9 is now out of beta and available as a stable release. Thank you to those of you who have already tried out the beta - especially those who reported issues.</p> <p>This version brings major changes to the notifications system. We redesigned it using a common architecture that makes it much easier to add new notification publisher types. Where previously only email, XMPP and private message notifications were available, there are now publishers for Slack, Teams, Hipchat and Stride. And we can now add more (let us know what you need).</p> <p><img alt="" src="/blogimages/dave/PublisherTypes.png" style="width: 626px; height: 399px; display: block; margin-left: auto; margin-right: auto; " /></p> <p>We are no longer limited to one publisher of each type. You may, for example, have different email servers for different teams in your company. You can set up two email publishers, one for each server, and set up subscriptions so that notifications from different projects go to different email servers. Likewise for different Slack workspaces, Teams channel connectors and so on.</p> <p>We have also improved the XMPP publisher to support sending notifications to rooms. 
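Incidentally, chat publishers like the Slack one ultimately boil down to posting a small JSON document to an incoming-webhook URL. A hand-rolled Python sketch of the idea - this is not Continua CI's implementation, and the message format is made up for illustration:

```python
import json
import urllib.request

def build_payload(build_name, status, url):
    # Slack incoming webhooks accept a JSON body with a "text" field;
    # the message wording here is entirely made up.
    return {"text": f"Build *{build_name}* finished: {status} (<{url}|details>)"}

def notify_slack(webhook_url, build_name, status, url):
    # POST the JSON payload to the Slack incoming-webhook URL.
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(build_payload(build_name, status, url)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```

The hard parts a real publisher has to add on top of this are queuing, retries and rate limiting - which is exactly what the metrics below let you keep an eye on.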
Subscriptions have been improved, allowing you to specify a room and/or channel for this and other publishers.</p> <p><img alt="" src="/blogimages/dave/Subscription.png" style="display: block; margin-left: auto; margin-right: auto; width: 625px; height: 733px;" /></p> <p>User preferences have been updated, allowing each user to specify a recipient id, username or channel per publisher.</p> <p><img alt="" src="/blogimages/dave/UserPreferences.png" style="width: 726px; height: 751px; display: block; margin-left: auto; margin-right: auto; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);" /></p> <p>You can see some metrics on the throughput of each publisher (number of messages on the queue, messages sent per second, average send time, etc.) on the Publishers page in the Administration area. This also shows real-time counts of any errors occurring while sending messages, and any messages waiting on a retry queue due to rate limiting or service outages. This lets you know when you need to upgrade rate limits or make other service changes.</p> <p><img alt="" src="/blogimages/dave/PublisherMetrics.png" style="width: 990px; height: 306px; display: block; margin-left: auto; margin-right: auto; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);" /></p> <p>The Templates page has been updated. Templates are now divided into a tab per publisher. The list of available variables for each event type has been moved to an expandable side panel.</p> <p><img alt="" src="/blogimages/dave/NotificationTemplates.png" style="width: 564px; height: 707px; display: block; margin-left: auto; margin-right: auto; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);" /></p> <p>This release is built on .Net Framework version 4.7.2, which has allowed us to upgrade a number of third party libraries, including the database ORM and PostgreSQL drivers. 
This has noticeably improved performance, as well as providing us with a richer platform to build future features on. The setup wizard will prompt you to install .Net Framework version 4.7.2 before continuing with the installation.</p> <p><img alt="" src="/blogimages/dave/InstallerFrameworkRequirement.PNG" style="width: 503px; height: 391px; display: block; margin-left: auto; margin-right: auto;" /></p> <p>Note that applications running on .Net 4.7.2 do not run on versions of Windows prior to Windows Server 2008R2 and Windows 7 SP1. We are also dropping the 32-bit server installer. This is mainly to reduce testing overheads. We will still be releasing 32-bit agents for those who are using 16-bit compilers.</p> <p>We will continue to provide bug fixes for Continua CI version 1.8.1 for a while to give you time to migrate from older platforms.</p> 782Continua CI Roadmap 2018https://www.finalbuilder.com/resources/blogs/postid/776/continua-ci-roadmap-2018.NET,Continua CI,Delphi,Web DevelopmentMon, 18 Jun 2018 14:49:38 GMT<p>I'm not usually one for publishing roadmaps, mostly because I don't like to promise something and not deliver. That said, we've had a few people ask recently what is happening with Continua CI. </p> <div style="background:#eeeeee;border:1px solid #cccccc;padding:5px 10px;"><strong><span style="color:#e74c3c;">Disclaimer - nothing I write here is set in stone, our plans may change.</span></strong></div> <p><br /> A few weeks ago, I wrote up a "roadmap" for Continua CI on the whiteboard in our office. Continua CI 1.8.x has been out for some time, but we have been working on 2.x for quite a while. The length of time it is taking to get some features out is a cause of frustration in the office; it led to a lengthy team discussion, the result of which was a "new plan". </p> <p>One of the reasons we had been holding features back for 2.x is that they required a change in the system requirements. 
Many of the third party libraries we use have dropped .net 4.0 support, so we were stuck on old versions. So rather than wait for 2.0, we will release 1.9 on .net 4.7.2. This will allow us to release some new features while we continue working on 2.0, and to take in some bug fixes from third party libraries.</p> <p>This is "The Plan" :</p> <style type="text/css">.featureTable { border: 1px solid #0071c5; border-collapse: collapse; width:100%; margin-left: auto; margin-right: auto; font-size: 14px; } .featureTable thead { background-color : #0071c5; color : White; } .featureTable td { border: 1px solid #0071c5; padding : 5px; } </style> <table align="center" cellpadding="0" cellspacing="0" class="featureTable"> <thead> <tr> <td style="width: 70px;">Version</td> <td style="width: 130px;">.NET Framework</td> <td style="width: 70px">x86/x64</td> <td style="width: 170px;">Min OS Version</td> <td style="width: 70px">UI</td> <td>Features</td> </tr> </thead> <tbody> <tr> <td>1.8.x</td> <td>4.0</td> <td>both</td> <td>Windows Server 2003R2</td> <td>MVC 4</td> <td> </td> </tr> <tr> <td>1.9.0</td> <td>4.7.2</td> <td>x64</td> <td>Windows Server 2008R2</td> <td>MVC 5</td> <td>New Notification types</td> </tr> <tr> <td>1.9.1</td> <td>4.7.2</td> <td>x64</td> <td>Windows Server 2008R2</td> <td>MVC 5</td> <td>Deployment Actions</td> </tr> <tr> <td>1.9.2</td> <td>4.7.2</td> <td>x64</td> <td>Windows Server 2008R2</td> <td>MVC 5</td> <td>Import/Export</td> </tr> <tr> <td>2.0.0</td> <td>netcore 2.1</td> <td>x64</td> <td>Windows Server 2012</td> <td>MVC 6</td> <td>New Architecture</td> </tr> <tr> <td>3.0.0</td> <td>netcore x.x</td> <td>x64</td> <td>Windows Server 2012</td> <td>TBA</td> <td>New User Interface</td> </tr> </tbody> </table> <p> </p> <p>Let's break down this plan.</p> <h2>1.9.0 Release</h2> <p>The 1.9 release will be built on .net 4.7.2, which has allowed us to take updates to a number of third party libraries, most notably NHibernate and Npgsql (the Postgres driver). 
These two libraries factor heavily in the performance improvements we see in 1.9.0. </p> <p>The major new feature in 1.9.0 will be a completely redesigned notifications architecture. In 1.8, notifications are quite limited, offering only email, XMPP and private messages. There was very little shared infrastructure between the notification types, so adding new notification types was not simple. You could only use one mail server and one XMPP server.</p> <p>In 1.9.0, notifications are implemented as plugins*, using a common architecture that makes it much easier to add new notification types. You can also define multiple notification publishers of the same type, so different projects can use different email servers, for example.</p> <p>Notification Types : Private message, Email, XMPP, Slack, Hipchat, Stride. More will follow in subsequent updates (let us know what you need).</p> <p>*We probably won't publish this API for others to use just yet, as it will be changing for 2.0 due to differences between the .net framework and .net core.</p> <p>If you are running Continua CI on a 32-bit machine, then start planning your migration. Supporting both x86 and x64 is no longer feasible, and dropping x86 support simplifies a lot of things for us (like updating bundled tools etc.). We will continue supporting 1.8.x for a while, but only with bug or security fixes. The minimum OS version will be the same as for .Net Framework 4.7.2 - since Windows Server 2003R2 is out of support these days, it makes sense for us to drop support for it. </p> <h2>1.9.1 Release</h2> <p>Deployment-focused actions.  
<br /> <br />   - AWS S3 Get/Put<br />   - Azure Blob Upload, Publish, Cloud Rest Service, Web Deploy<br />   - Docker Build Image, Push Image, Run Command, Run Image<br />   - File Transfer (FTP, FTPS, SFTP)<br />   - SSH Action<br />   - SQL Package Export, Package Extract, Package Import, Package Publish<br />   - SSH Run Script<br />   - Web Deploy</p> <p>These actions are all mostly completed, but are waiting on some other (UI) changes to make them easier to use. We'll provide more detail about these when they are closer to release.</p> <p><strong>Note</strong> : These actions will only be available to licensed customers, not in the free Solo Edition.</p> <h2>1.9.2 Release</h2> <p>One of the most requested features in Continua CI is the ability to Export and Import Continua CI Projects and Configurations. This might be for moving from a proof of concept server to a production server, or simply to be able to make small changes and import configurations into other projects. The file format will be YAML.</p> <h2>Continua CI 2.0 Release - .net core.</h2> <p>We originally planned to target .net framework 4.7 with Continua CI 2.0, but with .net core improving significantly in netcore 2.0 and 2.1, the time is right to port to .net core. The most obvious reason to target .net core is cross-platform support. This is something we have wanted to do for some time, and even explored with Mono, but we were never able to get things working in a satisfactory manner. It's our hope that .net core will deliver on its cross-platform promise, but for now it's a significant amount of work just to target .net core. So that said, our plan for Continua CI 2.0 is to get it up and running on .net core on <strong>Windows only</strong>, without losing any functionality or features. During the port we are taking note of what our Windows dependencies are for future reference. 
</p> <p>The current (1.8.x) architecture looks like this :</p> <p><strong><span style="font-family:Courier New,Courier,monospace;">Browser <----> IIS(Asp.net with MVC)<--(WCF)-->Service <--(WCF)-->Agent(s)</span></strong></p> <p>With .net core, it's possible to host asp.net in a service process, and that is what we have chosen to do. This cuts out the WCF layer between IIS and the service. .net core doesn't have WCF server support, and to be honest I'm not all that cut up about it ;) That said, we still need a replacement for WCF for communication between the agents and the server. We're currently evaluating a few options for this.</p> <p>Continua CI 2.0 architecture currently looks like this :</p> <p><strong><span style="font-family:Courier New,Courier,monospace;">Browser <----> Service(hosting asp.net core 2.1/mvc)<--(TBD)--> Agent(s)</span></strong></p> <p>The current state of the port is that most of the code has been ported, the communication between the agents and the server is still being worked on, and none of the UI has been ported. We do have asp.net core and mvc running in the service. There are significant differences between asp.net/mvc and asp.net core/mvc, so we're still working through those; I expect it will take a month or so to resolve the issues, then we can move on to new features. </p> <h3>Continua CI 2.0 - new features.</h3> <p>Rest API. This is something we had been working on for a while, but on the .net framework using self-hosted Nancy (in the service, running on a separate port from IIS). Once we made the decision to port to .net core, we chose to just use asp.net rather than Nancy. 
Fortunately we were able to reuse much of what was already done with Nancy on asp.net core (models, services etc), and we're working on this right now.</p> <p>Other features - TBA</p> <h2>Continua CI 3.0 - A new UI</h2> <p>Asp.net MVC has served us well over the years, but it relies on a bunch of jQuery code to make the UI usable, and I'll be honest, no one here likes working with jQuery! Even though we ported much of the JavaScript to TypeScript, it's still hard to create complex UIs using jQuery. The Stage Editor is a good example of this; even with some reasonably well structured JavaScript, it's still very hard to work on without breaking it. The UI is currently based on Bootstrap 3.0, with a ton of customisations. Of course Bootstrap 4.0 completely breaks things, so we're stuck on 3.0 for now.<br /> <br /> So it's time to change tack and use an SPA framework. We've done proof of concepts with Angular and React, and will likely look at Vue before making a decision - right now I'm leaning towards React. Creating a new user interface is a large chunk of work, so work will start on this soon (it's dependent on the REST API). We're likely to look at improving usability and consistency in the UI, and perhaps a styling refresh. </p> <p>Linux & macOS Agents - with .net core running on these platforms, this is now a possibility. We looked at this several times before with Mono, but the API coverage or behaviour left a lot to be desired. We do still have some Windows-specific stuff to rework in our agent code, and Actions will need to be filtered by platform, but this is all quite doable.</p> <h2>Summing up</h2> <p>We're making a big effort here to get features out more frequently, but you will notice I haven't put any timeframe on the releases outlined above; they will be released when ready. 
We expect a 1.9.0 Beta to be out in the next week or so (we're currently testing the installer, upgrades etc), and we'll blog when that happens (with more details about the new notifications features). Note that it's highly likely there will be other releases in between the ones outlined above, with bug fixes and other minor new features as per usual. We have a backlog of feature requests to work from, many of which are high priorities, so we're never short of things to do (and we welcome feature requests). </p> 776Continuous Integration Server performancehttps://www.finalbuilder.com/resources/blogs/postid/754/continuous-integration-server-performance.NET,Continua CI,Delphi,FinalBuilder,Git,Mercurial,Web DevelopmentMon, 11 Sep 2017 14:31:58 GMT<p>Continuous Integration servers are often underspecified when it comes to hardware. In the early days of automated builds, the build server was quite often that old PC in the corner of the office, or an old server in the data center that no one else wanted. Developers weren't doing many builds per day, so it worked; it was probably slow, but that didn't seem to matter much.</p> <p>Fast forward 20 years, and the Continuous Integration server is now a critical service. The volume and frequency of builds has increased dramatically, and a slow CI server can be a real problem in an environment where we want fast feedback on the code we just committed (even though it "worked on my machine"). Continuous Deployment only adds to the workload of the CI server.&nbsp; </p> <p>In this post, I'm going to cover some ideas to hopefully improve the performance of your CI server. I'm not going to cover compilation, unit tests etc. (which can be where a lot of the time is spent). 
Instead, I'll focus on the environment, machine configuration and some settings on your Continua CI configurations.</p> <h2>Hardware Requirements</h2> <p>It's impossible to provide hard and fast specs for hardware or virtual machines, as it varies greatly depending on the expected load.</p> <p>There are a bunch of things you can tweak that may improve performance. I will touch on some key points for virtual hosts, but I'm not going to go too deep into tuning virtual hosts; that's not my area of expertise. Of course, dedicated physical machines would be ideal, but these days, even if you do get dedicated hardware for CI/CD, it's most likely going to be as a virtual host (Hyper-V or VMware) rather than an OS installed on bare metal (do companies still provision a single OS on bare metal servers these days?). Virtualisation brings a whole bunch of benefits, but it also brings with it some limitations that cannot be ignored.</p> <p>Continuous Integration environments are mostly I/O bound, and Continua CI is no different in that regard. So let's look at the various resources used by CI/CD.</p> <h3>CPU</h3> <p>It's unlikely that CPU will be a limiting factor in the performance of your CI server, unless you are running other CPU intensive tasks on your server. If that's the case, then move your CI server to dedicated hardware, or at least a dedicated virtual host. </p> <p>At a minimum you should have at least 2 cores on the server. On our production server, which is a virtual machine (on Hyper-V 2012R2) with 4 virtual cores and dynamic RAM, the Windows resource monitor shows that average CPU usage usually sits around 2% when idle (no running builds, measured on the guest OS using resource monitor on the Continua Server service). With 10 concurrent builds running, the Continua CI server service was using around 6% CPU.</p> <p>Adding another 4 cores made very little difference. 
The Hyper-V host machine, which is also running a bunch of agent VMs, has plenty of CPU capacity, with average CPU usage around 5-7%. Cutting down the number of cores to 2 did make a slight difference, with the VM showing slightly higher CPU usage, however there was no discernible difference in build times.</p> <p> This is obviously not very scientific, but it did demonstrate (well, to me at least) that CPU is not the limiting factor. I set the server VM back to 4 cores and left it at that. Our Hyper-V host machines are a few years old now, and have 7200 rpm SAS hard drives (in RAID 10) rather than SSDs (they were still too expensive when we bought the machines).</p> <p>On a Continua CI Agent, we recommend at least 2 CPU cores, and limit the concurrent builds running on the agent to 1 per core. This isn't a hard and fast rule, just a convention we adhere to here (based on some performance testing). You may want to add extra cores depending on what compilers or tools you are running during your build process. The only way to know if this is needed is to monitor CPU on the agent machine while a build is running.</p> <h3>I/O</h3> <p>The most used resources are disk read/write and network read/write. Poor I/O performance will really slow down your builds.</p> <h4>Disk</h4> <p>It goes without saying, but use the fastest disks you have available to you. If you can afford it, new generation NVMe/PCIe SSDs are the way to go. They are still quite expensive in larger capacities though. At the very least, use a separate disk for the operating system and software installation, and another disk for your Continua CI Server's share folder (or the agent's workspace folder on agent machines). This is where most of the I/O happens during builds. 
This recommendation applies whether running on dedicated hardware or in a virtual machine.</p> <p> If you are running the server and agent machines on the same virtual host (as we do for our production environment) then this is very important to get right. Poor I/O performance in virtualised environments is not uncommon - having agents and the server fighting for a slice of the same I/O pie is not a good idea.<br /> <br /> On the agent machines, good disk performance is critical. When a build is started on the agent, the first thing it does is create a workspace folder. It then exports the source code from the repository cache(s) (a Mercurial repo which was cloned from the server) to that folder, using the repository rules (more on this later). This workspace initialisation phase can be very slow if you have poor I/O performance.</p> <h4>Network</h4> <p>Continua CI uses networking to transfer files, repository changes etc. between the server and the agents. Poor network performance will impact build initialisation times (updating the agent's repo cache and build workspace) and build completion times (transferring workspace changes back to the server). Logging between the agent and the server will also be impacted by poor network performance.</p> <p>By default, Continua CI uses SMB to transfer files and source code (repository caches) between the server and the agents. When the server's share folder is not accessible by SMB, Continua CI will try to use SSH/SFTP (Continua CI installs its own specialised SSH service). In high latency networks (for example if the agent is remote from the server), SSH/SFTP may perform better than SMB.</p> <p>You can force an agent to use SSH/SFTP by setting the agent's ServerFileTransport.ForceSSH property to true.</p> <h3>Database</h3> <p>Continua CI supports PostgreSQL (the default) or Microsoft SQL Server. If you choose to use MSSQL, we recommend running it on a separate, well specified machine. 
MSSQL is quite heavy in its use of RAM and disk I/O - it's best run on a machine that has been tuned to run it properly. I'm not going to go into that here; that's a whole other topic in an area where I'm definitely not an expert.</p> <p>The PostgreSQL database server that is installed by default (unless you select otherwise) with Continua CI is much more frugal when it comes to resources. On our main Continua CI server, PostgreSQL typically uses around 60MB of RAM. Contrast that with SQL Server running on my dev machine, not used or touched for weeks, and it's using 800MB! PostgreSQL can also be tuned; we have tried to provision it with sensible defaults that strike a balance between performance and resource usage. If you need to tune PostgreSQL, then we recommend installing your own PostgreSQL instance and pointing Continua CI at it.</p> <p>Currently the Continua CI installer doesn't provide any options for the database install location (C:\ProgramData\VSoft\ContinuaCI\PostgreSQLDB); this is something we are looking at for a future release, which will make it possible to put the database on its own drive. For now, it's possible to move the database to another location by using a symlink; we have a few customers who have done this successfully. Contact support if you need help with this.</p> <h2>Virtualisation Tips</h2> <h3>Virtual CPU Cores</h3> <p>In a virtual environment, it's very important not to overload your virtual host. Note that there is a difference between overloading and over-allocating virtual cores. It's a common practice to allocate more virtual cores across the virtual machines than there are physical/logical cores (logical when HyperThreading is enabled), but this has to be done with knowledge and understanding of the load on the host machine. Overloading happens when so many cores are allocated and in use that the hypervisor is unable to schedule a core to a virtual machine when needed. 
This results in pauses and poor performance.</p> <p>In a clustered environment this is even more important, because when a cluster node dies, or is removed for upgrades etc., virtual machines will move to another node in the cluster - if that node is already overloaded then you will soon start hearing complaints from users!</p> <p>The best explanation I have found of how hypervisors allocate cores is this article - <a href="https://www.altaro.com/hyper-v/hyper-v-virtual-cpus-explained/">https://www.altaro.com/hyper-v/hyper-v-virtual-cpus-explained/</a> - it's Hyper-V specific (we use Hyper-V here) but much of the information also applies to VMware.</p> <h3>Virtual Disks</h3> <p>When creating separate virtual disk volumes for your virtual machines, try to put those virtual disks on different physical drives, so they are not competing for the same I/O. Use fixed-size virtual disks.</p> <h2>Continua CI Configuration Tuning</h2> <p>Continua CI is not immune to performance problems; we're always working to make it faster and consume fewer resources. There are, however, a few things that can be tuned in Continua CI to improve performance.</p> <h3>Repository Branch Settings</h3> <p>Use specific branch patterns to narrow down the number of repository files and folders which are monitored and downloaded. With repositories which use folder-based branches, such as Subversion and TFS, consider moving old branches to a separate archive folder in your repository which will not match the branch patterns. Note that you can use more than one Continua CI repository per actual repository. Some users will have multiple projects in one repository, but only need to build a single one for each configuration. Make use of relative paths, where supported by your repository type, to limit your repository to a single project folder. This can significantly speed up repository initialisation and changeset updating. 
</p> <h3>Repository Polling</h3> <p>Continua CI polls repositories periodically to detect new commits. Each time this occurs, Continua CI invokes the command line client for that repo, and parses the output of that process. Some clients use a surprising amount of CPU. The git client, for example, uses around 8% CPU per instance on our production server while checking for commits. Most of the time, these processes only run for a very short amount of time (when no changes are detected), however if you have a lot of repositories, these small CPU spikes can add up.</p> <p> There are a couple of options to keep this under control.</p> <p> 1) Set an appropriate polling interval for your repositories. If changes to a repository occur infrequently, then there's no point polling frequently.</p> <p>2) Set the Server.RepoMonitor.MaxCheckers server property. This controls how many version control client processes are spawned concurrently; the default (5) is quite conservative, so you should only need to lower this on a very low-spec system. If you have plenty of spare CPU capacity, then you can increase this value, however if you do, monitor CPU usage to make sure you don't overload the server.</p> <p>3) Manual polling, using post-commit hooks. This reduces CPU usage on the server, by only polling for repository changes when requested, and has the added benefit of reducing the load on your version control server. This does take some setting up, and depends very much on the capabilities of your version control system. I'll take a look at post-commit hooks in a future blog post.</p> <h3>Repository Path Filtering</h3> <p>Repository Path Filtering is an option on all repository types, with the exception of Mercurial (*I'll explain why shortly). What this filtering does is allow you to limit which files get added to the server's repository cache. 
This filtering has a few benefits: less disk space used on the server and the agents, less network I/O when transferring the changes from the server to the agent, and less I/O when checking out the source into the build workspace.</p> <p>A typical use case for these rules is when you have files in your repository that rarely change and are not needed for the build process (design docs, deployment notes etc). There's no point adding them to the repo cache if you don't use them.</p> <p>Changes to these rules won't affect files that are already in the repository cache, but they will prevent changes to those filtered-out files from being committed to the repo cache. The best bang for buck with these filters comes when the repository is reset (the cache is rebuilt, so filtered-out files are never committed to the cache); however, that can be an expensive operation, so unlike other repository settings, changing these rules will not force a reset.</p> <p>* These filters don't apply to Mercurial repositories, as we use Mercurial for our repository cache. When you point Continua CI at a Mercurial repository, it is simply cloned to the server (repo cache), and then cloned to the agents (repo cache) without any modifications.</p> <h3>Repository Rules</h3> <p>Each Stage has a settings tab called Repository Rules. These rules apply when checking out the source from the agent's repository cache(s) to the build workspace. Only check out the source you need; this will improve performance. If a stage doesn't need the source at all (for example, it's only working with artifacts from previous stages), then just blank out the Repository Rules field.</p> <p>Don't leave logging of the repository rules turned on unless you are debugging the rules. Logging the files exported to the workspace can be a real performance killer.</p> <h3>Workspace Rules</h3> <p>Similar to Repository Rules, these rules control which files are transferred between the server's and agent's build workspace folders, and back again. 
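</p> <p>As a purely illustrative sketch (the patterns below are simplified - see the Continua CI documentation for the exact workspace rule format), a stage that only needs its build output and test reports returned to the server would use patterns matching just those files:</p> <pre class="brush:plain; toolbar:true;">
Output\**.dll
Reports\*.xml
</pre> <p>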
Only transfer files back to the server's workspace that you actually need, like build artifacts, reports etc. </p> <p>Don't leave logging of the workspace rules turned on unless you are debugging the rules. Logging the files transferred can be a real performance killer.</p> <h3>Actions</h3> <p>Avoid logging too much information. For example, verbose logging on MSBuild should be avoided unless you are debugging build issues. Output logged from actions is queued and sent back to the server to be written to the build log, which causes high network and disk I/O.</p> <h3>Disk Space</h3> <p>Disk space is quite often at a premium (especially with SSDs), and it's important to keep on top of it. This is where the Clean up Policies come into play. Continua CI allows you to specify a global clean up policy for both the server and the agents; however, it can be overridden at the Project or Configuration level. The clean up policy controls how long to keep old builds and their associated workspaces around. The clean up policy is highly configurable - use it to keep control over disk space. Bear in mind that the work of cleaning up old builds is quite I/O and database intensive, so be sure to schedule it to run during a quiet period.</p> <h3>Anti-virus Software</h3> <p>Anti-virus software can be a major performance killer, and in some instances, an application killer. If I had a dollar for every time anti-virus software turned out to be the cause of a problem with Continua CI or FinalBuilder, well, that would be some serious beer money at least!</p> <p>If you have anti-virus software installed on your server or agents, be sure to add exclusions from real-time scanning for the server's share folder, and the agent's workspace folder. Add scheduled scans on those folders instead. 
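</p> <p>With Windows Defender, for example, real-time scanning exclusions can be added from an elevated PowerShell prompt. Note that the paths below are placeholders - substitute the actual share and workspace folders from your own installation:</p> <pre class="brush:powershell; toolbar:true;">
# exclude the server's share folder and the agent's workspace folder
# from real-time scanning (paths are examples only)
Add-MpPreference -ExclusionPath "C:\ContinuaShare"
Add-MpPreference -ExclusionPath "C:\ContinuaAgent\Ws"
</pre> <p>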
Also, when using the bundled PostgreSQL database, add an exclusion for C:\ProgramData\VSoft\ContinuaCI\PostgreSQLDB - otherwise you may experience database corruption.</p> <p>You should also consider adding an exclusion for hg.exe in the "C:\Program Files\VSoft Technologies\ContinuaCI Agent\hg" folder. We found in testing (with Windows Defender) that this speeds up the processing of the repository rules substantially. </p> <h3>Version Control Clients</h3> <p>Avoid installing tools like TortoiseSVN or TortoiseHg on your server or agent machines, as these programs do background indexing (for icon overlays) and can also cause file/folder access issues.</p> <h2>Wrapping Up</h2> <p>I intend to revise this post as I learn more about performance tuning, especially in a virtual environment. If you have any techniques or tweaks that have helped speed up your CI server, please feel free to share them with us (and fellow users).</p>754Adding NTLM SSO to Nancyfxhttps://www.finalbuilder.com/resources/blogs/postid/730/adding-ntlm-sso-to-nancyfx.NET,Nancyfx,Open Source,Web DevelopmentMon, 18 May 2015 11:52:21 GMT<a href="https://github.com/NancyFx/Nancy" target="_blank" title="Nancyfx on Github.">Nancyfx</a> is a lightweight, low-ceremony framework for building HTTP-based services on .NET and Mono. It's open source and available on GitHub.&nbsp;<br /> <br /> <p>Nancy supports Forms Authentication, Basic Authentication and Stateless Authentication "out of the box", and it's simple to configure. In my application, I wanted to be able to handle mixed Forms and NTLM Authentication, which is something Nancyfx doesn't support. We have done this before with asp.net on IIS, and it was not a simple task, involving a child site with windows authentication enabled while the main site had forms (IIS doesn't allow both at the same time) and all sorts of redirection. 
It was painful to develop, and it's painful to install and maintain.&nbsp;<br /> <br /> Fortunately with Nancy and <a href="http://owin.org/" target="_blank" title="Owin Website">Owin</a>, it's a lot simpler. Using Microsoft's implementation of the Owin spec, and Nancy's Owin support, it's actually quite easy, without the need for child websites and redirection etc.&nbsp;</p> <p>I'm not going to explain how to use Nancy or Owin here, just the part needed to hook up NTLM support. In my application, NTLM authentication is invoked by a button on the login page ("Login using my windows account") which causes a specific login url to be hit. We're using Owin for hosting rather than IIS, and Owin enables us to get access to the HttpListener, so we can control the authentication scheme for each url. We do this by adding an AuthenticationSchemeSelectorDelegate.</p> <pre class="brush:c#; toolbar:true;">
internal class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var listener = (HttpListener)app.Properties["System.Net.HttpListener"];
        //add a delegate to select the auth scheme based on the url
        listener.AuthenticationSchemeSelectorDelegate = request =&gt;
        {
            //the caller requests we try windows auth by hitting a specific url
            return request.RawUrl.Contains("loginwindows")
                ? AuthenticationSchemes.IntegratedWindowsAuthentication
                : AuthenticationSchemes.Anonymous;
        };
        app.UseNancy();
    }
}
</pre> <br /> What this achieves is to invoke the NTLM negotiation if the "loginwindows" url is hit on our nancy application. 
If the negotiation is successful (i.e. the browser supports NTLM and is able to identify the user), then the Owin environment will have the details of the user, and this is how we get those details out of Owin (in our bootstrapper class).<br /> <br /> <pre class="brush:c#; toolbar:true;">
protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
{
    pipelines.BeforeRequest.AddItemToStartOfPipeline((ctx) =&gt;
    {
        if (ctx.Request.Path.Contains("loginwindows"))
        {
            var env = (IDictionary&lt;string, object&gt;)ctx.Items[Nancy.Owin.NancyOwinHost.RequestEnvironmentKey];
            var user = (IPrincipal)env["server.User"];
            if (user != null &amp;&amp; user.Identity.IsAuthenticated)
            {
                //remove the cookie if someone tried sending one in a request!
                if (ctx.Request.Cookies.ContainsKey("IntegratedWindowsAuthentication"))
                    ctx.Request.Cookies.Remove("IntegratedWindowsAuthentication");
                //Add the user as a cookie on the request object, so that Nancy can see it.
                ctx.Request.Cookies.Add("IntegratedWindowsAuthentication", user.Identity.Name);
            }
        }
        return null; //ensures normal processing continues.
    });
}
</pre> <br /> Note we are adding the user in a cookie on the nancy Request object, which might seem a strange thing to do, but it was the only way I could find to add something to the request that can be accessed inside a nancy module, because everything else on the request object is read only. We don't send this cookie back to the user. So with that done, all that remains is to use that user in our login module:<br /> <br /> <pre class="brush:c#; toolbar:true;">
Post["/loginwindows"] = parameters =&gt;
{
    string domainUser = "";
    if (this.Request.Cookies.TryGetValue("IntegratedWindowsAuthentication", out domainUser))
    {
        //Now we can check if the user is allowed access to the application and if so, add
        //our forms auth cookie to the response.
        ... 
    }
};
</pre> <br /> Of course, this will probably only work on Windows - I'm not sure what the current status of System.Net.HttpListener is on Mono. This code was tested with Nancyfx 1.2 from NuGet.&nbsp;<br />730Downloading and email addresseshttps://www.finalbuilder.com/resources/blogs/postid/560/downloading-and-email-addressesFinalBuilder,Web DevelopmentThu, 17 Aug 2006 04:00:00 GMT<p>As an <a href="http://en.wikipedia.org/wiki/Independent_software_vendor">ISV</a>, you have to decide how people will evaluate your product before they make a purchasing decision. </p> <ul> <li>Can they directly download it from your website?</li> <li>Do they have to sign up and get sent a url and/or a license key?</li> <li>Do they have to contact sales and wait for someone to give them a call on the 'phone before they can get their hands on the trial (if at all)?</li> </ul> <p>Ever since <a href="/finalbuilder">FinalBuilder</a> was released, it has been a direct download.  Anyone can download it, and can do so completely anonymously.  This has a few advantages, but the main one is that it doesn't p*ss anyone off - it's a single click to download. Easy and simple!  <br /> <br /> But what happens if we want to ask these people how they went on the trial? Answer - you can't.  As is the case with all shareware, trialware, demoware, etc, you get a huge number of downloads, and you get a fairly small conversion rate.  We don't track this very accurately, but it's probably in the order of 5%.  That means for every 100 downloads, we get about 5 sales.  I'd say it's a pretty good conversion rate, but why didn't the other 95% of people buy?  Maybe they bought a competing product, maybe they found a bug, maybe they downloaded FinalBuilder just to take a look?  The point is, we just don't know.<br /> <br /> So, obviously the answer is to ask people for their contact details before they download.  That way you can email them or call them and ask them how they go with the trial.  
Put your hand up if you would rather not give your email address just so you can download a trial of a piece of software.  Yep, I'm sure there's a significant percentage of people who would not download in this case - so maybe the answer isn't so simple.  It's a big risk to change from one model (ie. direct downloads) to another (ie. contact details before download), and that's certainly not a risk we're prepared to take.  But we would still like to be able to contact people to ask them how they went.<br /> <br /> So - <b>our answer</b> has been to make it optional.  That is, if you don't want to give us your details, don't; but if you don't mind us contacting you, then you can provide your email address.  I <a href="/downloads/finalbuilder">implemented this change</a> on our website yesterday, so no real results yet, but so far it's been about 50/50.  50% are happy to give their email address, and the other 50% take the direct download link.  Now - granted, the direct download link is not exactly obvious, but it's not hidden or hard to find.  We're not in the business of spamming people, but we will send people an email at the end of their trial asking for their thoughts; hopefully we might get some useful information.<br /> <br /> We're going to monitor this over the next couple of weeks to see how it goes. <br /> <br /> What are your thoughts on this issue?</p> 560