VSoft Technologies Blogs - posts about our products and software development.
https://www.finalbuilder.com/resources/blogs

Introducing DPM - a Package Manager for Delphi
https://www.finalbuilder.com/resources/blogs/postid/837/introducing-dpm-a-package-manager-for-delphi
Delphi,DPM,Open Source
Thu, 12 Dec 2019 09:41:00 GMT
<p>Back in Feb 2019, I <a href="/resources/blogs/delphi-package-manager-rfc" target="_blank">blogged</a> about the need for a package manager for Delphi. The blog post garnered lots of mostly useful feedback and encouragement, but until recently I could never find a solid block of time to work on it. Over the last few weeks I've been working hard to get it to an MVP stage.</p> <p>DPM is an <b>open source</b> package/library manager for Delphi XE2 or later. It is heavily influenced by NuGet, so the CLI, docs etc. will seem very familiar to NuGet users. Delphi's development environment is quite different from .NET, and has different challenges to overcome, so whilst I drew heavily on NuGet, DPM is not identical to it. I also took a close look at many other package managers for other development ecosystems.</p> <h2>What is a Package Manager</h2> <p>A package manager provides a standard way for developers to share and consume code. Authors create packages that other developers can consume. The package manager provides a simple way to automate the installation, upgrading or removal of packages. This streamlines the development process, allowing developers to get up and running on a project quickly, without needing to understand the (usually ad hoc) way the project or organization has structured its third party libraries. This also translates into simpler build/CI processes, with fewer 'compiles on my machine' style issues.</p> <h2>Who and Why</h2> <p>DPM's initial developer is Vincent Parrett (author of DUnitX, FinalBuilder, Continua CI etc).
Why is discussed in <a href="http://www.finalbuilder.com/resources/blogs/delphi-package-manager-rfc">this blog post</a>.</p> <h2>DPM Status</h2> <p>DPM is still in development, so not all functionality is ready yet. It's very much at a minimum viable product stage, so at this time I would encourage library authors to take a look, play with it, and provide feedback (and perhaps get involved in the development). Potential users are of course welcome to look at it and provide feedback too, it's just that, well, there are no packages for it yet (there are some test packages in the repo, and I'll be creating ones for my open source libraries).</p> <h3>What works</h3> <ul> <li>Creating packages</li> <li>Pushing packages to a package source</li> <li>Installing packages, including dependencies</li> <li>Restoring packages, including dependencies</li> </ul> <h3>How do I use it</h3> <p>The documentation is at <a href="http://docs.delphipm.org">http://docs.delphipm.org</a>.</p> <p>See the <a href="http://docs.delphipm.org/get-started/getting-started.html">getting started guide</a>.</p> <p>The command line documentation can be found <a href="http://docs.delphipm.org/commands.html">here</a>.</p> <p>The source is on GitHub: <a href="https://github.com/DelphiPackageManager/DPM">https://github.com/DelphiPackageManager/DPM</a></p> <h3>Is DPM integrated into the Delphi IDE</h3> <p>Not yet, but it is planned. If you are a wiz with the Open Tools API and want to contribute then let us know.</p> <h3>Is there a central package source</h3> <p>Not yet, but it is planned. At the moment, only local folder based <a href="http://docs.delphipm.org/concepts/sources.html">sources</a> are supported.
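</p> <p>To give a flavour of the workflow, a typical session with the dpm command line client might look something like the sketch below. This is illustrative only - the exact command names and option syntax are described in the <a href="http://docs.delphipm.org/commands.html">command line documentation</a>, and the folder and project paths here are made up:</p> <pre class="brush:plain; toolbar:true;">REM add a local folder as a package source (the only source type supported right now)
dpm sources add -name=local -source=C:\DPMFeed

REM install a package (and its dependencies) into a project
dpm install VSoft.CommandLineParser C:\Projects\MyApp\MyApp.dproj

REM restore the packages recorded for a project (e.g. on a build machine)
dpm restore C:\Projects\MyApp\MyApp.dproj</pre> <p>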
The client code architecture has provision for http based sources in the future; however, right now we are focused on nailing down the package format, dependency resolution, installation, updating packages etc.</p> <h3>Is my old version of Delphi supported</h3> <p>Maybe, <a href="http://docs.delphipm.org/compiler-versions.html">see here</a> for supported compiler versions. All target <a href="http://docs.delphipm.org/platforms.html">platforms</a> for supported compiler versions are supported.</p> <h3>What about C++ Builder or FPC</h3> <p><a href="http://docs.delphipm.org/compiler-versions.html">See here</a>.</p> <h3>Does it support design time components</h3> <p>Not yet, but that is being worked on.</p> <h3>How does it work</h3> <p>See <a href="http://docs.delphipm.org/concepts/how-it-works.html">this page</a>.</p>

Delphi Package Manager RFC
https://www.finalbuilder.com/resources/blogs/postid/777/delphi-package-manager-rfc
Delphi,DPM,Open Source
Mon, 16 Jul 2018 01:58:00 GMT
<p>Delphi/Rad Studio desperately needs a proper package/library/component manager. A package manager provides a standardized way of consuming third party libraries. At the moment, use of third party libraries is very much ad hoc, and in many cases this makes it difficult to move projects between machines, or to get a new hire up and running quickly.</p> <p>Other development environments, like the .NET and JavaScript ecosystems, recognised and solved this problem many years ago. Getting a .NET or JavaScript project up and running, in a new working folder or on a new machine, is trivial.</p> <p>With Delphi/Rad Studio, it's much harder than it should be. In consulting work, I've made it a point to see how clients were handling third party code, and every client had a different way. The most common technique was... well, best described as ad hoc (with perhaps a readme with the list of third party products to install).
Getting that code compiling on a CI server was a nightmare.</p> <h2>Existing Package Managers</h2> <p>Embarcadero introduced their GetIt package manager with XE8, and the GetIt infrastructure has certainly made the installation of RAD Studio itself a lot nicer. But as a package manager for third party libraries, it comes up short in a number of areas.</p> <p>There is also Delphinus, which is an admirable effort, but it hasn't gotten much traction, possibly due to it being strongly tied to GitHub (you really need a GitHub account to use it, otherwise you get API rate limiting errors).</p> <p>Rather than pick apart GetIt or Delphinus, I'd like to outline my ideas for a Delphi package manager. I spend a lot of time working with .NET (NuGet) and JavaScript (npm, yarn), so they have very much influenced what I will lay out below.</p> <p>I have resurrected an old project (from 2013) that I shelved when GetIt was announced, and I have spent a good deal of time thinking about package management (not just in Delphi). I'm sure I haven't thought of everything, so I'd love to hear feedback from people interested in contributing to this project, or just from potential users.</p> <h2>Project Ideals</h2> <p>These are just some notes that I wrote up when I first started working on this back in 2013. I've tried to whip them into some semblance of order for presentation here, but they are just a rough outline of my ideas.</p> <h3>Open Source</h3> <p>The project should be open source. Of course we should welcome contributions from commercial entities, but the direction of the project will be controlled by the community (i.e. users). The project will be hosted on GitHub, and contributions will be made through pull requests, with contributions being reviewed by the steering committee (TBA).</p> <h3>Public Package Registry</h3> <p>There will be a public website/package server, where users can browse the available packages, and package authors can upload packages.
This will be a second phase of the project, with the initial phase being focused on getting a working client/package architecture, with a local or network share folder for the package source.</p> <p>The package registry should not be turned into a store. Once a public package registry/server is available, evaluation packages could be allowed, perhaps for a fee (web hosting is not free). Commercial vendors will of course be able to distribute commercial packages directly to their customers, as the package manager will support hosting of packages in a shared network or local directory. Package metadata will include flags to indicate whether packages are commercial, eval or free/open source. Users will be able to decide which package types show up in their searches.</p> <h3>Package Submission</h3> <p>Package submission to the public registry should be a simple process, without any filling in, signing and faxing of forms! We will follow the lead of nuget, npm, ruby etc. on this. There should be a dispute process for package names, copyright infringement etc. There will also be the ability to assign ownership of a package, for example when project ownership changes.</p> <p>Package authors will be able to reserve a package prefix, in order to prevent other authors from infringing on their names or copyrights. For example, Embarcadero might reserve Emb. as their prefix, TMS might reserve TMS. as theirs (of course I'm hoping to get both on board). The project will provide a dispute resolution process for package prefixes and names.</p> <h2>Delphi specific challenges</h2> <p>Delphi presents a number of challenges when compared to the .NET or nodejs/JavaScript world.</p> <h3>Compatibility</h3> <p>With npm, packages contain source (typically minimized and obfuscated) which is pure JavaScript. Compatibility is very high.</p> <p>With NuGet, packages contain compiled (to .NET IL) assemblies.
A package might contain a few different versions that target different versions of the .NET Framework. Again, compatibility is pretty good; an assembly compiled against .NET 2.0 will work on .NET 4.7 (.NET Core breaks this, but it has a new compatibility model, netstandard).</p> <p>If we look at Delphi, binary compatibility between Delphi compiler versions is pretty much non-existent (yes, I know about 2006/7 etc). The dcu, dcp and bpl files are typically only compatible with the compiler version they were compiled with. They are also only compatible with the platform they were generated for (so you can't share dcus between 32 and 64 bit Windows, or between iOS and Android). So we would need to include binaries for each version of Delphi we want our library to support. This also has major implications for library dependencies. Whereas npm and NuGet define dependencies as a range of versions, a binary dependency in Delphi would be fixed to that specific version. There is a way to maintain binary compatibility between releases, provided the interfaces do not change; however, exactly what the rules are for this is hard to come by, so for now we'll ignore that possibility.</p> <p>That limits the scope for updating to newer versions of libraries, but it can be overcome by including the source code in the package, and providing on the fly compilation of the library during install. My preference would be for pre-compiled libraries, as that speeds up the build process (of course, that's an area I have a particular interest in). In Continuous Integration environments, you want to build fast and build often; rebuilding library code with each CI build would be painful (speaking from experience here, 50% of the time building FinalBuilder is spent building the third party libraries).</p> <p>There's also the consideration of Debug vs Release - so if we are including binaries, would Release-compiled binaries be required, and Debug optional? The size of a package file could be problematic.
If the package contains pre-compiled binaries for multiple compiler versions, it could get rather large. So perhaps allow for packages that support either a single compiler version, or multiple versions? The compilers supported would be exposed in the package metadata, and perhaps also in the package file name. Feedback and ideas around this would be welcome.</p> <p>Package files would be (as with other package managers) a simple zip file, which includes a metadata (xml) file that describes the contents of the package, and folders containing binaries, source, resources etc. Packages will not contain any scripts (i.e. to build during install) for security reasons (I don't want to be running random scripts). We will need to provide a way to compile during install (using a simple DSL to describe what needs to be done); this still needs a lot of thought (and very much involves dependencies).</p> <h3>Library/Search Paths</h3> <p>Say goodbye to the IDE's Library path. It was great back in 1995, when we had a few third party libraries and a few projects, and we just upgraded the projects to deal with library versioning (just get on the latest). It's simply incompatible with the notion of using multiple versions of the same libraries these days.</p> <p>I rarely change major versions of a library during the lifespan of a major release of my products; I might however take minor updates for bugfixes or performance improvements. The way to deal with this is simply to use the project search path. Project A can use version 1 of a library, Project B can use version 9, all quite safely (design time components do complicate this).</p> <p>Where a project targets multiple platforms, installing a package should install for all platforms it supports, but it should be possible for the user to specify which platforms they need the package installed for.</p> <h3>Design time Component Installation</h3> <p>The Rad Studio IDE only allows one version of a design time package to be installed at a time.
So when switching projects, which might use different versions of a component library, we would need a system that is aware of component versions, and can uninstall/install components on the fly as projects are loaded.</p> <p>I suspect this will be one of the biggest project hurdles to overcome; it will require someone with very good Open Tools API knowledge (i.e. not me!).</p> <h3>Dependencies</h3> <p>Libraries that depend on other libraries will need to specify those dependencies in a metadata file, such that they can be resolved during installation. As I mentioned above, binary compatibility issues make dependency resolution somewhat more complicated, but not insurmountable. The resolution algorithm will need to take into account compiler version and platform. The algorithm will also need to handle the case where a package is compiled from source; for example, binary-only packages should not be allowed to depend on source-only packages (to ensure compatibility). If we end up with install time package compilation, then some serious work will be needed on the dependency tree algorithm to work out what else needs to be done during install (i.e. do any dependencies need to be recompiled?).</p> <p>This is certainly more complicated than on other platforms, and a significant amount of work to get right (ps, if you think it isn't, you haven't considered all the angles!)</p> <h2>General Considerations</h2> <h3>Package Install/Restore</h3> <p>The user should be able to choose from a list of packages to install. When installing a package, this would be recorded either in the dproj, or in a separate file alongside the dproj. The install process will update the project search paths accordingly.
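</p> <p>As a purely hypothetical sketch of what such a record could look like (the actual format is yet to be designed, and every element and attribute name here is made up for illustration), a packages config file stored alongside the dproj might be something like:</p> <pre class="brush:xml; toolbar:true;">&lt;packages&gt;
  &lt;!-- each entry pins an exact version, per compiler version and platform --&gt;
  &lt;package id="VSoft.CommandLineParser" version="1.0.1" compiler="XE7" platforms="Win32,Win64" /&gt;
  &lt;package id="DUnitX" version="1.5.0" compiler="XE7" platforms="Win32" /&gt;
&lt;/packages&gt;</pre> <p>Pinning exact versions (rather than ranges) reflects the binary compatibility constraints discussed above; a restore would read this file and fetch any missing packages.</p> <p>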
Package metadata would control what gets added to the search paths; my preference would be for one folder per package, as that would keep the search path shorter, which improves compile times.</p> <p>When a project is loaded, the dproj (or packages config file) would be checked, and any missing packages restored automatically. This should also handle the situation where a project is loaded in a different IDE version.</p> <h3>Security</h3> <p>We should allow for signing of packages, such that the signatures can be verified by the client(s). Clients should be able to choose whether to only allow signed packages, or to allow signed and unsigned, and what to do when signature verification fails. This will give users certainty about the authenticity and integrity of a package (i.e. where it comes from and whether it's been modified/tampered with).</p> <h2>Clients</h2> <p>It is envisaged that there will be at least two clients: a command line tool, and a Rad Studio IDE plugin. Clients will download packages and add those packages to project/config search paths. A local package cache will help with performance, avoiding repetitive package downloads, and also reduce disk space demands. The clients will also detect available updates to packages, and package dependency conflicts.</p> <h3>Command line Client</h3> <p>The command line tool will be similar to nuget or npm, providing the ability to create packages, install or restore missing packages, update packages etc. The tool should allow the specification of compiler versions and platforms, as this is not possible to detect from the dproj alone. This is where the project is currently focused (along with the core package handling functionality).</p> <h3>RAD Studio IDE Client</h3> <p>An IDE plugin client will provide the ability to search for, install, restore, update or remove packages, in a similar manner to the NuGet Visual Studio IDE support (hopefully faster!).
This plugin will share the core code with the command line client (i.e. it will not call out to the command line tool). I have not done any work on this yet (help wanted).</p> <h2>Delphi/Rad Studio Version Support</h2> <p>Undecided at the moment. I'm developing with XE7, but it's possible the code will compile with earlier versions, or could be made to compile with minor changes.</p> <h2>Summary</h2> <p>Simply put, I want/need a package manager for Delphi, one that works as well as nuget, npm, yarn etc. I'm still fleshing out how this might all work, and I'd love some feedback, suggestions, ideas etc. I'd like to get some people with the right skills 'signed up' to help, particularly people with Open Tools API expertise.</p> <h2>Get Involved!</h2> <p>I have set up a home for the project on GitHub - <a href="https://github.com/DelphiPackageManager/PackageManagerRFC">The Delphi Package Manager Project - RFC</a>. We'll use issues for discussion, and the wiki to document the specifications as we develop them. I have created a few issues with things that need some discussion. I hope to publish the work I have already done on this in the next few days (it needs tidying up).</p>

VSoft.CommandLineParser for Delphi - Updated
https://www.finalbuilder.com/resources/blogs/postid/740/vsoftcommandlineparser-for-delphi-updated
Delphi,Open Source
Thu, 10 Dec 2015 11:24:20 GMT
<p>A while back I published the VSoft.CommandLineParser library on GitHub, which makes it simple to handle command line options in Delphi applications. The first version only did enough to satisfy the needs I had in DUnitX.</p> <p>In another project I&rsquo;m working on, I needed a command mode, where each command has a unique set of options, while keeping the ability to have global options.&nbsp; I have tried to implement the command mode in a backwards compatible manner, and so far the only change I had to make to an existing project was adding a const to a parameter.
</p> <h3>Adding Commands</h3> <p>Adding commands is quite simple, using TOptionsRegistry.RegisterCommand.</p> <pre class="brush:delphi; toolbar:true;">cmd := TOptionsRegistry.RegisterCommand('help','h','get some help','','commandsample help [command]');
option := cmd.RegisterUnNamedOption&lt;string&gt;('The command you need help for',
  procedure(const value : string)
  begin
    THelpOptions.HelpCommand := value;
  end);</pre> <p>Note: this method returns a TCommandDefinition record that you can add options to. The reason for using a record rather than an interface here is that Delphi interfaces do not support generic methods. Records do, so we use the record type as a wrapper around the ICommandDefinition interface.</p> <p>The helpstring parameter allows you to specify a longer help message that can be displayed when showing command usage.</p> <h3>Handling Commands</h3> <p>The ICommandLineParseResult interface has a new Command property (string) which is used to determine the selected command. It&rsquo;s up to you how to actually run the commands.</p> <h3>Showing Usage</h3> <p>The PrintUsage method now has some overloads and some formatting improvements, and TOptionsRegistry also has new EnumerateCommands and EnumerateCommandOptions methods which make it relatively simple to handle showing usage etc. yourself if you want to.</p> <h3>Where is it?</h3> <p>The source with samples is available on GitHub - <a title="https://github.com/VSoftTechnologies/VSoft.CommandLineParser" href="https://github.com/VSoftTechnologies/VSoft.CommandLineParser" target="_blank">https://github.com/VSoftTechnologies/VSoft.CommandLineParser</a></p>

Delphi-Mocks Parameter Matchers
https://www.finalbuilder.com/resources/blogs/postid/737/delphi-mocks-parameter-matchers
Delphi,GeneralGit,Open Source
Tue, 22 Sep 2015 10:15:11 GMT
<p>We recently updated Delphi Mocks to allow for better parameter matching on expectations registered with the mock.
This allows the developer to place tighter controls on verifying that a mocked interface/object method is called. Below is a simple example of when the parameter matchers can be used.</p> <pre class="brush:delphi; toolbar:true; highlight:[14,15,16,17]">procedure TExample_InterfaceImplementTests.Implement_Multiple_Interfaces;
var
  sutProjectSaver : IProjectSaveCheck;
  mockProject : TMock&lt;IProject&gt;;
begin
  //Test that when we check and save a project, and its dirty, we save.

  //CREATE - The project saver under test.
  sutProjectSaver := TProjectSaveCheck.Create;
  //CREATE - Mock project to control our testing.
  mockProject := TMock&lt;IProject&gt;.Create;
  //SETUP - Mock project will show as dirty and will expect to be saved.
  mockProject.Setup.WillReturn(true).When.IsDirty;
  //NEW! - Add expectation that the save will be called as dirty is returning true.
  //       As we don't care about the filename value passed to us we
  //       allow any string to be passed to report this expectation as met.
  mockProject.Setup.Expect.Once.When.Save(It(0).IsAny&lt;string&gt;());
  //TEST - Visit the mock element to see if our test works.
  sutProjectSaver.Execute(mockProject);
  //VERIFY - Make sure that save was indeed called.
  mockProject.VerifyAll;
end;</pre> <p>Previously the developer writing this test would have to provide the exact filename to be passed to the mocked Save method. As we don't know what the project's filename is going to be (in our example case), we would either have to forgo doing this test, or implement a project object to test with. Neither option is ideal.</p> <p>Parameter matchers resolve this situation. It is now simple to either restrict or broaden the parameters passed to mocked methods that will satisfy the expectation defined.
To achieve this, Delphi-Mocks offers eleven new functions:</p> <pre class="brush:delphi; toolbar:true;">function It(const AParamIndx : Integer) : ItRec;
function It0 : ItRec;
function It1 : ItRec;
function It2 : ItRec;
function It3 : ItRec;
function It4 : ItRec;
function It5 : ItRec;
function It6 : ItRec;
function It7 : ItRec;
function It8 : ItRec;
function It9 : ItRec;</pre> <p>The first, "function It(const AParamIndx : Integer) : ItRec;", allows the developer to specify the index of the parameter they wish to set for the next expectation setup of a mock method. It(0) will refer to the first parameter, It(1) the second, and so forth. Note that the reason for specifying the parameter index is that Delphi's parameter evaluation order is not defined, so we could not rely on the parameters being evaluated in order (which is what we did when we initially wrote this feature). Interestingly, with the 64 bit Delphi compiler, parameter evaluation does appear to happen in order, but we could not be certain this will always be the case.&nbsp;</p> <p>The other ten functions, It0 through to It9, are simply wrappers of the indexed call, passing the index in their name. All these functions return an ItRec.
The ItRec has the following structure:</p> <pre class="brush:delphi; toolbar:true;">ItRec = record
  var
    ParamIndex : cardinal;
  constructor Create(const AParamIndex : Integer);
  function IsAny&lt;T&gt;() : T;
  function Matches&lt;T&gt;(const predicate: TPredicate&lt;T&gt;) : T;
  function IsNotNil&lt;T&gt; : T;
  function IsEqualTo&lt;T&gt;(const value : T) : T;
  function IsInRange&lt;T&gt;(const fromValue : T; const toValue : T) : T;
  function IsIn&lt;T&gt;(const values : TArray&lt;T&gt;) : T; overload;
  function IsIn&lt;T&gt;(const values : IEnumerable&lt;T&gt;) : T; overload;
  function IsNotIn&lt;T&gt;(const values : TArray&lt;T&gt;) : T; overload;
  function IsNotIn&lt;T&gt;(const values : IEnumerable&lt;T&gt;) : T; overload;
  {$IFDEF SUPPORTS_REGEX} //XE2 or later
  function IsRegex(const regex : string; const options : TRegExOptions = []) : string;
  {$ENDIF}
end;</pre> <p>Each of the functions creates a different matcher. For example, IsAny&lt;T&gt; will cause the expectation to be met when the parameter passed to the mock is any value of type T. In the example above this type would be a string. You will also notice that each function returns the type T. This is so that each call can be placed directly within the mocked method call. Doing so helps make sure parameter types match the testing value.</p> <p>IsEqualTo&lt;T&gt; requires that the parameter matches exactly the value passed into IsEqualTo&lt;T&gt;. This can be used to restrict the expectation to a tighter test of the functionality under test.</p> <pre class="brush:delphi; toolbar:true;">//Match on the filename being "temp.txt" only.
mockProject.Setup.Expect.Once.When.Save(It(0).IsEqualTo&lt;string&gt;('temp.txt'));

//VERIFY - Make sure that save was indeed called.
mockProject.VerifyAll;</pre> <p>In the future we are looking to provide &ldquo;And&rdquo;/&ldquo;Or&rdquo; operators.
These operators might also live on the ItRec, and would allow combining as many other matchers of the same type as needed.</p> <pre class="brush:delphi; toolbar:true;">//Match on the filename being "temp.txt" or "temp.doc" only.
mockProject.Setup.Expect.Once.When.Save(
  It(0).Or(It(0).IsEqualTo&lt;string&gt;('temp.txt'),
           It(0).IsEqualTo&lt;string&gt;('temp.doc')));

//VERIFY - Make sure that save was indeed called.
mockProject.VerifyAll;</pre> <p>This might make the resulting code a bit cleaner and the tests easier to read, compared to using a regex, which is also possible in this case. As a result we believe this would be a good addition to the library.</p> <p><a href="https://github.com/VSoftTechnologies/Delphi-Mocks">Feel free to clone the repository from GitHub</a>. If you have some time to spare, submit a pull request or two with your ideas/improvements. We believe this is a great little project worthy of some attention. Let us know what you think of the changes so far.</p>

Adding NTLM SSO to Nancyfx
https://www.finalbuilder.com/resources/blogs/postid/730/adding-ntlm-sso-to-nancyfx
.NET,Nancyfx,Open Source,Web Development
Mon, 18 May 2015 11:52:21 GMT
<a href="https://github.com/NancyFx/Nancy" target="_blank" title="Nancyfx on Github.">Nancyfx</a> is a lightweight, low-ceremony framework for building HTTP based services on .NET and Mono. It's open source and available on GitHub.&nbsp;<br /> <br /> <p>Nancy supports Forms Authentication, Basic Authentication and Stateless Authentication "out of the box", and it's simple to configure. In my application, I wanted to be able to handle mixed Forms and NTLM authentication, which is something Nancyfx doesn't support. We have done this before with ASP.NET on IIS, and it was not a simple task, involving a child site with Windows authentication enabled while the main site had Forms (IIS doesn't allow both at the same time) and all sorts of redirection.
It was painful to develop, and it's painful to install and maintain.&nbsp;<br /> <br /> Fortunately with Nancy and <a href="http://owin.org/" target="_blank" title="Owin Website">Owin</a>, it's a lot simpler. Using Microsoft's implementation of the Owin spec, and Nancy's Owin support, it's actually quite easy, without the need for child websites and redirection etc.&nbsp;</p> <p>I'm not going to explain how to use Nancy or Owin here, just the part needed to hook up NTLM support. In my application, NTLM authentication is invoked by a button on the login page ("Login using my windows account") which causes a specific login url to be hit. We're using Owin for hosting rather than IIS, and Owin enables us to get access to the HttpListener, so we can control the authentication scheme for each url. We do this by adding an AuthenticationSchemeSelectorDelegate.</p> <pre class="brush:c#; toolbar:true;">internal class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var listener = (HttpListener)app.Properties["System.Net.HttpListener"];
        //add a delegate to select the auth scheme based on the url
        listener.AuthenticationSchemeSelectorDelegate = request =&gt;
        {
            //the caller requests we try windows auth by hitting a specific url
            return request.RawUrl.Contains("loginwindows") ?
                AuthenticationSchemes.IntegratedWindowsAuthentication :
                AuthenticationSchemes.Anonymous;
        };
        app.UseNancy();
    }
}</pre> <br /> What this achieves is to invoke the NTLM negotiation when the "loginwindows" url is hit on our Nancy application.
If the negotiation is successful (i.e. the browser supports NTLM and is able to identify the user), then the Owin environment will have the details of the user, and this is how we get those details out of Owin (in our bootstrapper class).<br /> <br /> <pre class="brush:c#; toolbar:true;">protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
{
    pipelines.BeforeRequest.AddItemToStartOfPipeline((ctx) =&gt;
    {
        if (ctx.Request.Path.Contains("loginwindows"))
        {
            var env = ((IDictionary&lt;string, object&gt;)ctx.Items[Nancy.Owin.NancyOwinHost.RequestEnvironmentKey]);
            var user = (IPrincipal)env["server.User"];
            if (user != null &amp;&amp; user.Identity.IsAuthenticated)
            {
                //remove the cookie if someone tried sending one in a request!
                if (ctx.Request.Cookies.ContainsKey("IntegratedWindowsAuthentication"))
                    ctx.Request.Cookies.Remove("IntegratedWindowsAuthentication");
                //Add the user as a cookie on the request object, so that Nancy can see it.
                ctx.Request.Cookies.Add("IntegratedWindowsAuthentication", user.Identity.Name);
            }
        }
        return null; //ensures normal processing continues.
    });
}</pre> <br /> Note we are adding the user in a cookie on the Nancy Request object, which might seem a strange thing to do, but it was the only way I could find to add something to the request that can be accessed inside a Nancy module, because everything else on the request object is read only. We don't send this cookie back to the user. So with that done, all that remains is to use that user in our login module.<br /> <br /> <pre class="brush:c#; toolbar:true;">Post["/loginwindows"] = parameters =&gt;
{
    string domainUser = "";
    if (this.Request.Cookies.TryGetValue("IntegratedWindowsAuthentication", out domainUser))
    {
        //Now we can check if the user is allowed access to the application and if so, add
        //our forms auth cookie to the response.
        ...
    }
};</pre> <br /> Of course, this will probably only work on Windows; I'm not sure what the current status of System.Net.HttpListener is on Mono. This code was tested with Nancyfx 1.2 from nuget.&nbsp;<br />

Introducing VSoft.CommandLineParser for Delphi
https://www.finalbuilder.com/resources/blogs/postid/719/introducing-vsoftcommandline-for-delphi
Delphi,Open Source
Sat, 26 Jul 2014 14:20:12 GMT
<h2>Command line parsing</h2> <p>Pretty much every Delphi console application I have ever written or worked on has had command line options, and every one of those projects tried different ways of defining and parsing the supplied options. Whilst working on DUnitX recently, I needed to add some command line options, and wanted to find a nice way to add them and make it easy to add more in the future. The result is <a href="https://github.com/VSoftTechnologies/VSoft.CommandLineParser" target="_blank">VSoft.CommandLineParser</a> (copies of which are included with the latest DUnitX).</p> <h3>Defining Options</h3> <p>One of the things I really wanted was to have the parsing totally decoupled from the definition and storage of the option values. Options are defined by registering them with the TOptionsRegistry, via TOptionsRegistry.RegisterOption&lt;T&gt;. Whilst it makes use of generics, only certain types can be used; the types are checked at runtime, as generic constraints are not flexible enough to specify which types we allow at compile time. Valid types are string, integer, boolean, enums &amp; sets and floating point numbers.</p> <p>Calling RegisterOption will return a definition object which implements IOptionDefinition. This definition object allows you to set various settings (such as Required).
When registering the option, you specify the long option name, the short option name, the help text (which will be used when showing the usage) and a TProc&lt;T&gt; anonymous method that will take the parsed value as a parameter.</p> <pre class="brush:delphi; toolbar:true;">procedure ConfigureOptions;
var
  option : IOptionDefintion;
begin
  option := TOptionsRegistry.RegisterOption&lt;string&gt;('inputfile','i','The file to be processed',
    procedure(value : string)
    begin
      TSampleOptions.InputFile := value;
    end);
  option.Required := true;

  option := TOptionsRegistry.RegisterOption&lt;string&gt;('outputfile','o','The processed output file',
    procedure(value : string)
    begin
      TSampleOptions.OutputFile := value;
    end);
  option.Required := true;

  option := TOptionsRegistry.RegisterOption&lt;boolean&gt;('mangle','m','Mangle the file!',
    procedure(value : boolean)
    begin
      TSampleOptions.MangleFile := value;
    end);
  option.HasValue := False;

  option := TOptionsRegistry.RegisterOption&lt;boolean&gt;('options','','Options file',nil);
  option.IsOptionFile := true;
end;
</pre> <p>For options that are boolean in nature, i.e. they do not have a value part, the value passed to the anonymous method will be true if the option was specified; otherwise the anonymous method will not be called. The 'mangle' option in the above example shows this scenario. </p> <p>You can also specify that an option is a file, by setting the IsOptionFile property on the option definition. This tells the parser the value will be a file, which contains other options to be parsed (in the same format as the command line). This is useful for working around Windows command line length limitations.</p> <p>Currently the parser will accept<br /> -option:value<br /> --option:value<br /> /option:value </p> <p>Note the : delimiter between the option and the value.</p> <p>Unnamed parameters are registered via the TOptionsRegistry.RegisterUnNamedOption&lt;T&gt; method.
Unlike named options, unnamed options are positional, but only when more than one is registered, as they will be passed to the anonymous methods in the order they are registered.</p> <h3>Parsing the options</h3> <p>Parsing the options is as simple as calling TOptionsRegistry.Parse, which returns an ICommandLineParseResult object. Check the HasErrors property to see if the options were valid; the ErrorText property contains the parser error messages.</p> <h3>Printing Usage</h3> <p>If the parser reports errors, then typically you would show the user what the valid options are and exit the application, e.g.:</p> <pre class="brush:delphi; toolbar:true;">parseresult := TOptionsRegistry.Parse;
if parseresult.HasErrors then
begin
  Writeln(parseresult.ErrorText);
  Writeln('Usage :');
  TOptionsRegistry.PrintUsage(
    procedure(value : string)
    begin
      Writeln(value);
    end);
end
else
  //start normal execution here
</pre> <p>TOptionsRegistry.PrintUsage makes it easy to print the usage to the command line.</p> <p>When I started working on this library, I found some really complex libraries (mostly .net) out there with a lot of options, but I decided to keep mine as simple as possible and only cover the scenarios I need right now. So it's entirely possible this doesn't do everything people might need, but it's pretty easy to extend.
The <a href="https://github.com/VSoftTechnologies/VSoft.CommandLineParser" target="_blank">VSoft.CommandLineParser</a> library (just three units) is open source and available on GitHub, with a sample application and unit tests (DUnitX) included.</p>719DUnitX Updated : Filtering Testshttps://www.finalbuilder.com/resources/blogs/postid/717/dunitx-updated-filtering-testsDelphi,Open Source,Unit TestingThu, 24 Jul 2014 16:39:00 GMT<style type="text/css"> table.categories { margin: 1em 5%; padding: 0; width: auto; border-collapse: collapse; } table.categories td, table.categories th { border: 1px solid black; padding: 6px; text-align: left } table.categories th { background: #117e42; color:white } ol.operators {margin-left: 18px;} </style> <h2>Still evolving</h2> <p>DUnitX is still quite young, and still evolving. One of the features most often requested is the ability to select which tests to run. I found myself wishing for that feature recently. I never missed it while my test suite was relatively small and fast, but as time went by, it was taking longer and longer to debug tests. So, time to add filtering of fixtures and tests.</p> <p>The command line options support in DUnitX was, to be honest, quite useless and poorly thought out. So my first task was to tackle how options were set/used in DUnitX, and find an extensible way of handling command line options. The result turned out better than I expected, so I have published a separate project for that. <a href="https://github.com/VSoftTechnologies/VSoft.CommandLineParser" alt="VSoft.CommandLine project on github" rel="nofollow" target="_blank">VSoft.CommandLine</a> is a very simple library for defining and parsing command line options, which decouples the definition and parsing from where the parsed values are stored. I'll blog about this library separately.</p> <p>I did try to avoid breaking any existing test projects out there.
To invoke the command line option parsing, you will need to add a call to TDUnitX.CheckCommandLine; at the start of your project code, e.g.:</p> <pre class="brush:delphi; toolbar:true;">begin
  try
    TDUnitX.CheckCommandLine;
    //Create the runner
    runner := TDUnitX.CreateRunner;
</pre> <p>The call should be inside the try/except because it will throw exceptions if any errors are found with the command line options. I modified the IDE Expert to include the needed changes in any new projects it creates; I recommend running the expert to generate a project and then comparing it to your existing dpr.</p> <h2>Filtering</h2> <p>The next thing to look at was how to apply filtering. After much experimentation, I eventually settled on pretty much copying how NUnit does it. I ported the filter and CategoryExpression classes from NUnit, with a few minor mods needed to adapt them to our needs. The cool thing here is I was able to port the associated unit tests over with ease!</p> <p>There are two types of filters: namespace/fixture/test filters, and category filters.</p> <h3>Namespace/Fixture/Test filtering</h3> <p>The new command line options are:</p> <pre>--run - specify which Fixtures or Tests to run, separate values with a comma, or specify the option multiple times</pre> eg: <pre>--run:DUnitX.Tests.TestFixture,DUnitX.Tests.DUnitCompatibility.TMyDUnitTest</pre> <p>If you specify a namespace (i.e. a unit name or part of a unit name) then all fixtures and tests matching the namespace will run.</p> <h3>Category Filters</h3> <p>A new CategoryAttribute allows you to apply categories to fixtures and/or tests. Tests inherit their fixture's categories, except when they have their own CategoryAttribute.
You can specify multiple categories, separated by commas, eg:</p> <pre class="brush:delphi; toolbar:true;">[TestFixture]
[Category('longrunning,suspect')]
TMyFixture = class
public
  [Test]
  procedure Test1;

  [Test]
  [Category('fast')]
  procedure Test2;
</pre> <p>In the above example, Test1 would have the "longrunning" and "suspect" categories, whilst Test2 would have just "fast".</p> <p>You can filter tests using these categories, using the --include and/or --exclude command line options. When both options are specified, all the tests with the included categories are run, except for those with the excluded categories. The following info is copied from the NUnit documentation (on which these options are based):</p> <table class="categories"> <thead> <tr> <th>Expression</th> <th>Action</th> </tr> </thead> <tbody> <tr> <td>A|B|C</td> <td>Selects tests having any of the categories A, B or C.</td> </tr> <tr> <td>A,B,C</td> <td>Selects tests having any of the categories A, B or C.</td> </tr> <tr> <td>A+B+C</td> <td>Selects only tests having all three of the categories assigned</td> </tr> <tr> <td>A+B|C</td> <td>Selects tests with both A and B OR with category C.</td> </tr> <tr> <td>A+B-C</td> <td>Selects tests with both A and B but not C.</td> </tr> <tr> <td>-A</td> <td>Selects tests not having category A assigned</td> </tr> <tr> <td>A+(B|C)</td> <td>Selects tests having both category A and either of B or C</td> </tr> <tr> <td>A+B,C</td> <td>Selects tests having both category A and either of B or C</td> </tr> </tbody> </table> <p>As shown by the last two examples, the comma operator is equivalent to | but has a higher precedence.
Order of evaluation is as follows:</p> <ol class="operators"> <li>Unary exclusion operator (-)</li> <li>High-precedence union operator (,)</li> <li>Intersection and set subtraction operators (+ and binary -)</li> <li>Low-precedence union operator (|)</li> </ol> <p> <strong>Note:</strong> Because the operator characters have special meaning, you should avoid creating a category that uses any of them in its name. For example, the category "db-tests" could not be used on the command line, since it appears to mean "run category db, except for category tests." The same limitation applies to characters that have special meaning for the shell you are using. I have also fixed some other minor issues with the naming of repeated tests and test cases to allow them to work with the filter. </p> <h3>Other options</h3> <p>Once you have added the command line check, run yourexe /? to see the other command line options available. None of the options are required, so running the exe without any options will behave as it did before.</p> <h3>Delphi 2010</h3> <p><strong><span style="color: #ff0000;">Resolved</span></strong> - Thanks to Stefan Glienke for figuring this out - D2010 is now supported again. The fix was to remove any use of STRONGLINKTYPES. </p> <p>One thing of note: at the time, these changes broke our D2010 support. I got a linker error when I built:</p> <pre>[DCC Fatal Error] F2084 Internal Error: L1737</pre> <p>Interestingly, the resulting executable is produced and does seem to run ok, however it makes debugging tests impossible, and of course it would fail in an automated build. I did spend several hours trying to resolve this error but got nowhere. Since my usage of DUnitX is currently focused on XE2, I was willing to live with this and just use an older version of DUnitX for D2010.
I have tested with XE2, XE5 and XE6.</p>717Mocking Multiple Interfaces - Delphi Mockshttps://www.finalbuilder.com/resources/blogs/postid/716/mocking-multiple-interfaces-delphi-mocksDelphi,Open SourceMon, 14 Jul 2014 15:01:13 GMT<p>Today we updated Delphi Mocks to enable the mocking of multiple interfaces. This is useful when the interface you wish to mock is cast to another interface during testing. For example, you could have the following system you wish to test.</p> <pre class="brush:delphi; toolbar:true;">type
  {$M+}
  IVisitor = interface;

  IElement = interface
    ['{A2F4744E-7ED3-4DE3-B1E4-5D6C256ACBF0}']
    procedure Accept(const AVisitor : IVisitor);
  end;

  IVisitor = interface
    ['{0D150F9C-909A-413E-B29E-4B869C6BC309}']
    procedure Visit(const AElement : IElement);
  end;

  IProject = interface
    ['{807AF964-E937-4A8A-A3D2-34074EF66EE8}']
    procedure Save;
    function IsDirty : boolean;
  end;

  TProject = class(TInterfacedObject, IProject, IElement)
  protected
    function IsDirty : boolean;
    procedure Accept(const AVisitor : IVisitor);
  public
    procedure Save;
  end;

  TProjectSaveCheck = class(TInterfacedObject, IVisitor)
  public
    procedure Visit(const AElement : IElement);
  end;
  {$M-}

implementation

{ TProjectSaveCheck }

procedure TProjectSaveCheck.Visit(const AElement: IElement);
var
  project : IProject;
begin
  if not Supports(AElement, IProject, project) then
    raise Exception.Create('Element passed to Visit was not an IProject.');
  if project.IsDirty then
    project.Save;
end;
</pre> <p>The trouble previously was that when testing TProjectSaveCheck, a TMock&lt;IElement&gt; would be required, as well as a TMock&lt;IProject&gt;. This is brought about by the Visit procedure requiring the IElement it's passed to be an IProject for the work it's going to perform.</p> <p>This is now very simple with the Implement&lt;I&gt; method available off TMock&lt;T&gt;.
For example, to test that Save is called when IsDirty returns true, the following test could be written:</p> <pre class="brush:delphi; toolbar:true;">procedure TExample_InterfaceImplementTests.Implement_Multiple_Interfaces;
var
  visitorSUT : IVisitor;
  mockElement : TMock&lt;IElement&gt;;
begin
  //Test that when we visit a project, and its dirty, we save.
  //CREATE - The visitor system under test.
  visitorSUT := TProjectSaveCheck.Create;

  //CREATE - Element mock we require.
  mockElement := TMock&lt;IElement&gt;.Create;

  //SETUP - Add the IProject interface as an implementation for the mock
  mockElement.Implement&lt;IProject&gt;;

  //SETUP - Mock project will show as dirty and will expect to be saved.
  mockElement.Setup&lt;IProject&gt;.WillReturn(true).When.IsDirty;
  mockElement.Setup&lt;IProject&gt;.Expect.Once.When.Save;

  //TEST - Visit the mock element to see if our test works.
  visitorSUT.Visit(mockElement);

  //VERIFY - Make sure that save was indeed called.
  mockElement.VerifyAll;
end;
</pre> <br /> <p>The mock mockElement "implements" two interfaces: IElement and IProject. IElement is done via the constructor, and IProject is added through the Implement&lt;I&gt; call. The Implement&lt;I&gt; call adds another sub proxy to the mock object. This sub proxy then allows all the mocking functionality to be performed with the IProject interface.</p> <p>To access the Setup and Expect behaviour there are overloaded generic calls on TMock. These return the correct proxy to interact with, typed as ISetup&lt;I&gt; and IExpect&lt;I&gt;. This is seen in the call to mockElement.Setup&lt;IProject&gt;. This returns an ISetup&lt;IProject&gt;, which allows definition of what should occur when IProject is used from the mock.</p> <p>This feature is really useful when there is a great deal of casting of interfaces done in the system you wish to test.
It can save having to mock base classes directly where multiple interfaces are implemented.</p> <p>The way this works under the hood is fairly straightforward. TVirtualInterfaces are used when an interface is required to be mocked. This allows the capturing of method calls, and the creation of the interface instance when it's required.</p> <p>The Implement&lt;I&gt; functionality simply extends this so that when a TProxyVirtualInterface (inherited from TVirtualInterface) has QueryInterface called, it also looks to its owning proxy. If any of the other proxies implement the requested interface, it's that TProxyVirtualInterface which is returned.</p> <p>In essence this allows us to fake the mock implementing multiple interfaces, when in fact there is a list of TVirtualInterfaces, each implementing a single interface.</p>716Automated UI Testing with ContinuaCI and Seleniumhttps://www.finalbuilder.com/resources/blogs/postid/709/automated-ui-testing-done-right-with-continuaci.NET,Continua CI,GeneralOpen SourceTue, 20 May 2014 11:46:00 GMT<p>You have just completed an awesomely complex change to your shiny new webapp! After running all your unit tests, things are in the green and looking clean.</p> <p>Very satisfied with the quality of your work, you fire up the application to verify that everything is still working as advertised. Below is what greets you on the root path of your app:</p> <p><img alt="Funny error" src="/blogimages/peter/404-notfound-ie5.gif" /></p> <p>We have all been here at some time or another! What happened? Perhaps it was not your code that broke it! Maybe the issue originated from another part of your organisation, or maybe it came from somewhere on the "inter-webs".</p> <p>It's time to look at the underlying problem, however: testing web user interfaces is hard! It's time consuming and difficult to get right. Manual clicks, much typing, cross referencing client specifications etc. Surely there must be an easier way.
At the end of the day we DO need to test our user interfaces!</p> <h2>Automated Web UI Testing</h2> <p>Thankfully UI testing today can be automated, running real browsers in real end to end functional tests, to ensure our results meet (and continue to meet) expectations.</p> <p>For the sake of brevity and clarity in this demonstration we will focus on testing an existing endpoint. It is commonplace to find functional tests included as part of a wider build pipeline, which may consist of such steps as:</p> <ul> <li>Build</li> <li>Unit Test</li> <li>Deploy to Test Environment</li> <li>Perform Functional Tests</li> <li>Deploy to Production</li> </ul> <p>In this article we will be focusing on the functional testing component of this pipeline. We will proceed on the assumption that your code has already been built, unit tested and deployed to a functional test environment. Today we will:</p> <ul> <li>Add automated UI testing to an existing endpoint, google.com</li> <li>Configure ContinuaCI to automatically build our project and perform the tests</li> </ul> <h3>Software Requirements</h3> <ul> <li>Visual Studio 2010 Express Edition SP1 or greater (<a href="http://visualstudio.com/">visualstudio.com</a>)</li> <li>Microsoft .NET Framework version 4 or greater</li> <li>Java JRE (<a href="http://www.oracle.com/technetwork/java/javase/downloads/index.html">http://www.oracle.com/technetwork/java/javase/downloads/index.html</a>)</li> <li>Mercurial (<a href="https://mercurial-scm.org">https://mercurial-scm.org</a>)</li> <li>Mozilla Firefox (<a href="http://getfirefox.com/">getfirefox.com</a>)</li> <li>Nuget (<a href="http://docs.nuget.org/docs/start-here/installing-nuget">docs.nuget.org/docs/start-here/installing-nuget</a>)</li> </ul> <h3>Step 1: Prepare a Selenium endpoint</h3> <p>Firstly we will prepare for our UI tests by setting up a Selenium server.
<span><a href="http://docs.seleniumhq.org/">Selenium</a> is a browser automation framework which will be used to 'remote control' a real browser. </span>This machine will be designated for performing the UI tests against; preferably this will be a machine separate from your ContinuaCI server.<br /> <br /> Log into the machine you have chosen for the Selenium server with administrator privileges.<br /> Download and install Mozilla Firefox (getfirefox.com); this will be the browser that we target as part of this example, however Selenium can target lots of other browsers. For a full breakdown please <a href="http://docs.seleniumhq.org/about/platforms.jsp">see the docs page</a>.<br /> Download Selenium Server (<a href="http://docs.seleniumhq.org/download">docs.seleniumhq.org/download</a>); at the time of writing the latest version is 2.41.0.<br /> <br /> Place it into a permanent location of your choosing, in our example "C:\Program Files (x86)\SeleniumServer".<br /> Download NSSM (<a href="http://nssm.cc/download">nssm.cc/download</a>), unzip it and place it into a permanent location of your choosing, e.g. "C:\Program Files (x86)\nssm-2.22\".<br /> <br /> Ensure that port 4444 is set to allow traffic (this is the default communications port for Selenium).<br /> <br /> Open a console and run the following commands:<br /> <span style="font-family: 'Courier New';">"C:\Program Files (x86)\nssm-2.22\win64\nssm.exe" install Selenium-Server "java" "-jar \"C:\Program Files (x86)\SeleniumServer\selenium-server-standalone-2.41.0.jar\""<br /> net start Selenium-Server</span><br /> <br /> <img alt="name project" src="/blogimages/peter/install-sel-1.png" /><br /> <br /> In order to uninstall the Selenium server service, the following commands can be run:<br /> <span style="font-family: 'Courier New';">net stop Selenium-Server<br /> "C:\Program Files (x86)\nssm-2.22\win64\nssm.exe" remove Selenium-Server </span><br /> <br /> <img alt="name project" src="/blogimages/peter/install-sel-2.png"
/></p> <h3>Step 2: Create a test class and add it to source control</h3> <p>Create a new class library project in Visual Studio; let's call it 'tests'.<br /> Open the Nuget Package Manager Console window (Tools menu -> Library Package Manager -> Package Manager Console), select the test project as the default project and run the following script:<br /> <br /> Install-Package Selenium.Automation.Framework<br /> Install-Package Selenium.WebDriver<br /> Install-Package Selenium.Support<br /> Install-Package NUnit<br /> Install-Package NUnit.Runners<br /> <br /> Create a new class within the tests project (let's call it tests) and place the below code in it (Note: the server address in SetupTest should be changed to the location of the Selenium server we set up in the previous step):</p> <pre class="brush: c#; toolbar:true">
using System;
using System.Text;
using NUnit.Framework;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;
using OpenQA.Selenium.Support.UI;

namespace SeleniumTests
{
    [TestFixture]
    public class test
    {
        private RemoteWebDriver driver;

        [SetUp]
        public void SetupTest()
        {
            // Look for an environment variable
            string server = null;
            server = System.Environment.GetEnvironmentVariable("SELENIUM_SERVER");
            if (server == null)
            {
                server = "http:// *** PUT THE NAME OF YOUR SERVER HERE ***:4444/wd/hub";
            }
            // Remote testing
            driver = new RemoteWebDriver(new Uri(server), DesiredCapabilities.Firefox());
        }

        [TearDown]
        public void TeardownTest()
        {
            try
            {
                driver.Quit();
            }
            catch (Exception)
            {
                // Ignore errors if unable to close the browser
            }
        }

        [Test]
        public void FirstSeleniumTest()
        {
            driver.Navigate().GoToUrl("http://www.google.com/");
            IWebElement query = driver.FindElement(By.Name("q"));
            query.SendKeys("a test");
            Assert.AreEqual(driver.Title, "Google");
        }
    }
}
</pre> <h3>Step 3: Test the test!</h3> <p>Build the solution.<br /> Open the NUnit runner (by default this is located at ~\packages\NUnit.Runners.2.6.3\tools\nunit.exe), select File -> Open Project, and locate the
tests dll that you have built in the previous step.<br /> Click the run button.<br /> After ~15 seconds or so you should have one green test!<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-17.png" /><br /> <br /> So what just happened? Behind the scenes an instance of Firefox was opened (on the Selenium server), a simple Google search query was performed, and a simple NUnit assertion verified that the title of the window was equal to "Google". Very cool!<br /> <br /> Now let's make the test fail: go ahead and change the expected title in the assertion, say to "zzGoogle", build, and rerun the test. We now have a failing test. Go ahead and change it back so that we have a single passing test.<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-18.png" /></p> <h4>Create a source control repository</h4> <p>In this example, we're using Mercurial.</p> <p><span>Open a command prompt at ~\</span><br /> <span>Type "hg init"</span><br /> <span>Add a .hgignore file into the directory. 
For convenience, we have prepared one for you </span><a href="/blogimages/peter/.hgignore.txt">here</a>.<br /> <span>Type "hg add"</span><br /> <span>Type "hg commit -m "initial commit""</span></p> <h3>Step 4: Setting up Automated UI testing in ContinuaCI</h3> <p>Navigate to the ContinuaCI web interface.</p> <h4>Create a project</h4> <p><br /> Open ContinuaCI.<br /> Select "Create Project" from the top tasks dropdown menu.<br /> <br /> <img alt="create project" src="/blogimages/peter/create-project-1.png" /><br /> <br /> Name the project something memorable; in our case: "pete sel test 1".<br /> <img alt="name project" src="/blogimages/peter/create-project-3.png" /><br /> Click the "Save & Complete Wizard" button.</p> <h4>Create a configuration for this project</h4> <p><br /> Click "Create a Configuration".<br /> Name the config something memorable; in our case "sel-testconfig-1".<br /> Click Save & Continue.<br /> Click the 'Enable now' link at the bottom of the page to enable this configuration.</p> <h4>Point to our Repository</h4> <p><br /> Under the section "Configuration Repositories", select the "Create" link.<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-4.png" /><br /> <br /> Name the repository "test_repo".<br /> Select "Mercurial" from the "type" dropdown list.<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-6.png" /><br /> <br /> Select the "Mercurial" tab from the top of the dialogue box.<br /> Enter the repository location under "source path", in our case '\\machinename\c$\sel-blog-final'.<br /> Click validate to ensure all is well.<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-8.png" /><br /> <br /> Click Save; your repository is now ready to go!<br /> <img alt="name project" src="/blogimages/peter/create-project-9.png" /></p> <h4>Add actions to our build</h4> <p><br /> Click on the Stages tab.<br /> We will add a NuGet restore action: click on
the "Nuget" section from the categories on the left.<br /> Drag and drop the action "Nuget Restore" onto the design surface.<br /> Enter the location of the solution file: "$Source.test_repo$\tests.sln"<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-11.png" /><br /> <br /> Click Save.</p> <h4>Build our tests</h4> <p><br /> Click on the "Build runners" category from the categories on the left hand menu.<br /> Drag and drop a Visual Studio action onto the design surface (note that the same outcome can be achieved here with an MSBuild action).<br /> <br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-19.png" /><br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-13.png" /><br /> <br /> Enter the name of the solution file: "$Source.test_repo$\tests.sln"<br /> Specify that this should be a 'Release' build under the configuration option.<br /> Click Save.</p> <h4>Setup ContinuaCI to run our NUnit tests</h4> <p><br /> Select the 'unit testing' category from the left hand menu.<br /> Drag and drop an NUnit action onto the design surface.<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-20.png" /><br /> <br /> Name our action 'run UI tests'.<br /> Within the Files option, specify the name of the tests project '$Source.test_repo$\tests\tests.csproj'.<br /> Within the Project Configuration section specify 'Release'.<br /> Specify which version of NUnit you are using.<br /> In order to provide greater configuration flexibility we can pass in the location of our Selenium server to the tests at runtime. This is done within the 'Environments' tab.
In our case the location of the Selenium server is <span>http://SELSERVER:4444/wd/hub</span>.<br /> <br /> <img alt="environment tab" src="/blogimages/peter/create-project-24.png" /><br /> <br /> Click Save.<br /> <br /> Click Save and Complete Wizard.<br /> We are now ready to build!<br /> <br /> Start a build immediately by clicking the fast forward icon at the top right hand side.<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-14.png" /><br /> A build will start, and complete!<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-15.png" /><br /> When viewing the build log (this can be done by clicking on the green build number, then selecting the log tab) we can see that our UI tests have been run successfully. They are also visible within the 'Unit Tests' tab, which displays further metrics around the tests.<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-23.png" /><br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-22.png" /></p> <h3>Step 5: Getting more advanced</h3> <p>Let's try a slightly more advanced example. This time we will examine a common use case.
A physical visual inspection test needs to be conducted before a release can progress in the pipeline.<br /> <br /> Place the following code within our test class.</p> <pre class="brush: c#; toolbar:true">
using System;
using System.Text;
using NUnit.Framework;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;
using OpenQA.Selenium.Support.UI;

namespace SeleniumTests
{
    [TestFixture]
    public class test
    {
        private RemoteWebDriver driver;

        [SetUp]
        public void SetupTest()
        {
            // Look for an environment variable
            string server = null;
            server = System.Environment.GetEnvironmentVariable("SELENIUM_SERVER");
            if (server == null)
            {
                server = "http:// *** PUT THE NAME OF YOUR SERVER HERE ***:4444/wd/hub";
            }
            // Remote testing
            driver = new RemoteWebDriver(new Uri(server), DesiredCapabilities.Firefox());
        }

        [TearDown]
        public void TeardownTest()
        {
            try
            {
                driver.Quit();
            }
            catch (Exception)
            {
                // Ignore errors if unable to close the browser
            }
        }

        [Test]
        public void FirstSeleniumTest()
        {
            driver.Navigate().GoToUrl("http://www.google.com/");
            IWebElement query = driver.FindElement(By.Name("q"));
            query.SendKeys("a test");
            Assert.AreEqual(driver.Title, "Google");
        }

        [Test]
        public void MySecondSeleniumTest()
        {
            // Navigate to google
            driver.Navigate().GoToUrl("http://www.google.com/");
            IWebElement query = driver.FindElement(By.Name("q"));
            // Write a query into the window
            query.SendKeys("a test");
            // wait at maximum ten seconds for results to display
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            IWebElement myDynamicElement = wait.Until&lt;IWebElement&gt;((d) =&gt;
            {
                return d.FindElement(By.Id("ires"));
            });
            // take a screenshot of the result for visual verification
            var fileName = TestContext.CurrentContext.Test.Name + "-" +
                string.Format("{0:yyyyMMddHHmmss}", DateTime.Now) + ".png";
            driver.GetScreenshot().SaveAsFile(fileName, System.Drawing.Imaging.ImageFormat.Png);
            // perform a code assertion
            Assert.AreEqual(driver.Title, "Google");
        }
    }
}
</pre>
<p><br /> Build, and run the test.<br /> <br /> In this example we added an additional test which performs a Google search, waits a maximum of 10 seconds for results to display, takes a screenshot (storing it to disk), and performs an NUnit assertion. The screenshot output from the test can be made available as an artifact within Continua!<br /> <br /> Firstly let's commit our changes: "hg commit -m "added a more advanced test""<br /> <br /> Open the configuration in Continua CI (by clicking the pencil icon).<br /> Navigate to the Stages section.<br /> Double click on the stage name (which will bring up the Edit Stage dialogue box).<br /> Click on the Workspace Rules tab.<br /> Add the following line to the bottom of the text area: "/ &lt; $Source.test_repo$/tests/bin/Release/**.png". This will tell Continua to return any .png files that we produced from this test back to the ContinuaCI server.<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-25.png" /><br /> <br /> Click on the Artifacts tab.<br /> Add the following line, which will enable any .png files within the workspace to be picked up and displayed within the Artifacts tab:<br /> **.png<br /> <img alt="name project" src="/blogimages/peter/create-project-26.png" /><br /> <br /> Click Save.<br /> Click Save & Complete Wizard.<br /> Start a new build.<br /> <br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-14.png" /><br /> <br /> Sweet! A screenshot of our test was produced, and can be seen within the Artifacts tab!<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-27.png" /><br /> Clicking on 'View' will display the image:<br /> <br /> <img alt="name project" src="/blogimages/peter/create-project-28.png" /><br /> <br /> <span>We have put the source code of this article up on </span><a href="https://github.com/VSoftTechnologies/Automated-UI-Testing">GitHub</a><span>.</span><br /> <br /> Please subscribe and comment!
We are very excited to see what you guys come up with on Continua. Happy testing!</p> <h4>Some additional considerations</h4> <ul> <li>The user which the Selenium service runs under should have the correct privileges</li> <li>The machine designated as the Selenium server may require access to the internet if your webapp has upstream dependencies (e.g. third party APIs like GitHub)</li> </ul> 709Building GitHub Pull Requests with Continua CIhttps://www.finalbuilder.com/resources/blogs/postid/700/building-github-pull-requests-with-continua-ciContinua CI,Open SourceTue, 26 Nov 2013 08:27:00 GMT<p>GitHub makes it relatively simple to contribute to open source projects: just fork the repository, make your changes, and submit a pull request. Couldn't be simpler. </p> <p>Accepting those pull requests is dead simple too, most of the time. But what if you want to build and test the pull request first, before accepting it? Fortunately the nature of GitHub pull requests (or more to the point, Git itself) makes this possible. </p> <h4>Git References<br />  </h4> <p>Git references are a complex topic all on their own, but let's take a quick look at a typical cloned repository. In the .git folder, open the config file in Notepad and take a look at the [remote "origin"] section; here's what mine looks like:</p> <pre class="brush:plain">
[remote "origin"]
    url = https://github.com/VSoftTechnologies/playground.git
    fetch = +refs/heads/*:refs/remotes/origin/*
</pre> <p>The key entry here is the fetch.
Quoting from the <a href="https://git-scm.com/book/en/v2/Git-Internals-The-Refspec" title="Git Internals - The Refspec">git documentation</a>:</p> <p>"The format of the refspec is an optional <code>+</code>, followed by <code>&lt;src&gt;:&lt;dst&gt;</code>, where <code>&lt;src&gt;</code> is the pattern for references on the remote side and <code>&lt;dst&gt;</code> is where those references will be written locally. 
The <code>+</code> tells Git to update the reference even if it isn't a fast-forward."<br /> <br /> The default fetch refspec will pull any branches from the original repository into our clone. But where are our pull requests?</p> <h4>Anatomy of a pull request<br />  </h4> <p>When a pull request is submitted, GitHub makes use of Git references to essentially "attach" your pull request to the original repository. In my local clone, however, I won't see them, because the default fetch refspec doesn't include them. You can see the pull requests by using the <code>git ls-remote</code> command on the origin:</p> <pre class="brush:bash">
$ git ls-remote origin
27dfaaf83f60ac26a6fe465042f2ddb515667ff1  HEAD
654b98d6eb862e247e5c043460e9f9a64b2f0972  refs/heads/Test
27dfaaf83f60ac26a6fe465042f2ddb515667ff1  refs/heads/master
b333438310a56823f1938071af8c697b202bf855  refs/pull/1/head
95cb80af1330e73188ea32659d7744dcfe37ab43  refs/pull/2/head
90ba13b8edaab04505396dbcb1853f6f9bdaed64  refs/pull/2/merge
</pre> <p>Notice something odd there? There are two pull requests, but pull request 2 has two entries in the list, whilst pull request 1 has only one. refs/pull/2/head is a reference to the head commit of the pull request, whilst refs/pull/2/merge is a reference to the result of the automatic merge that GitHub performs. On pull request 1 there was a merge conflict, so the /merge reference was not created; on pull request 2, the merge succeeded. 
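</p> <p>You can see how these references behave by fetching one explicitly. The sketch below simulates the GitHub-side refs with a local bare repository (the directory names and PR number are placeholders for illustration), then fetches the merge ref into a remote-tracking branch:</p>

```shell
set -e
# Simulate a remote that carries GitHub-style pull request refs.
# (Directory names and the PR number are placeholders for illustration.)
git init -q --bare origin.git
git -C origin.git symbolic-ref HEAD refs/heads/master
git init -q work
git -C work -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git -C work push -q ../origin.git HEAD:refs/heads/master HEAD:refs/pull/2/merge

# Clone it; the default fetch refspec only brings down refs/heads/*,
# so the pull request ref stays invisible until we ask for it explicitly.
git clone -q origin.git clone
git -C clone fetch -q origin +refs/pull/2/merge:refs/remotes/origin/pr/2
git -C clone branch -r    # now lists origin/pr/2 alongside origin/master
```

<p>Once fetched, the pull request can be checked out, built and tested like any other branch.</p> <p>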
On the pull request page, you would typically see something like this if the merge succeeded:</p> <div style="text-align: center;"><img alt="" src="/blogimages/vincent/GitHubPull/MergeResult.png" /></div> <h4 style="text-align: left;">Getting Continua CI to see the Pull Requests<br />  </h4> <p style="text-align: left;">The main reason for building pull requests on your CI server is to see whether they build, and to run your unit tests against that build. You can choose to build the original pull request, the result of the automatic merge, or both. In reality, if the automatic merge failed, then the person who submitted the pull request has some more work to do, so there's really no point building/testing the original pull request. What you really want to know is "if I accept this request, will it build and will the tests pass?", so it's generally best to only build the automatic merge version of the pull request. Continua CI makes this quite simple. On the Git repository settings, check the "Fetch other Remote Refs" option. This will show the Other Refs text area, which already has a default refspec that will fetch the pull requests (the merged versions) and create local (to Continua CI) branches with the name pr/#number - so pull request 1 becomes branch pr/1.</p> <div style="text-align: center;"><img alt="" src="https://www.finalbuilder.com/blogimages/vincent/GitHubPull/GitRepository.png" /></div> <p>You can modify this to taste; for example, if you are fetching both the merge and the head versions of the pull requests, you might use a refspec like this:</p> <pre class="brush:plain">
+refs/pull/*/merge:refs/remotes/origin/pr-merge/*
+refs/pull/*/head:refs/remotes/origin/pr-head/*
</pre> <h4>Building the Pull Requests</h4> <p>Now that we have gotten this far (which is to say, you enabled one option and clicked on Save!) we can build the pull requests (it may take a few minutes to fetch the pull requests). 
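</p> <p>Outside of Continua CI, you can reproduce this dual-refspec setup in a plain clone by adding extra fetch lines to the remote configuration. A minimal sketch (the repository layout and PR number are simulated locally for illustration):</p>

```shell
set -e
# Simulated GitHub remote carrying head and merge refs for pull request 1.
# (Directory names and the PR number are placeholders.)
git init -q --bare remote.git
git -C remote.git symbolic-ref HEAD refs/heads/master
git init -q src
git -C src -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git -C src push -q ../remote.git HEAD:refs/heads/master \
    HEAD:refs/pull/1/head HEAD:refs/pull/1/merge

git clone -q remote.git myclone
# Fetch both the merged and head versions of pull requests from now on
git -C myclone config --add remote.origin.fetch '+refs/pull/*/merge:refs/remotes/origin/pr-merge/*'
git -C myclone config --add remote.origin.fetch '+refs/pull/*/head:refs/remotes/origin/pr-head/*'
git -C myclone fetch -q origin
git -C myclone branch -r    # lists origin/pr-merge/1 and origin/pr-head/1
```

<p>After this, every ordinary <code>git fetch</code> keeps the pull request branches up to date as well, which is essentially what Continua CI's Other Refs setting does for you.</p> <p>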
If you manually start a build, you can select the pull request from the branch field for the GitHub repository using the intellisense; just start typing pr/ and you will see a list:</p> <p style="text-align: center;"><img alt="" src="/blogimages/vincent/GitHubPull/SelectPR.png" /></p> <p>Now we can add a trigger to build pull requests (we are talking continuous integration, after all). Using the Pattern Matched Branch feature on <a href="http://wiki.finalbuilder.com/display/continua/Repository+Trigger" title="Pattern Matched Branch">Continua CI Triggers</a>, you can make your trigger start builds when a pull request changeset is fetched from GitHub. The pattern is a regular expression, so ^pr/.* would match our pull request branches (assuming we use the default refspec).</p> <p style="text-align: center;"><img alt="" src="/blogimages/vincent/GitHubPull/PRTrigger.png" /></p> <p style="text-align: left;">Adding a trigger specific to the pull requests allows you to set variables differently from other branches, and you will then be able to tailor your stages according to whether you are building a pull request or not. For example, you probably don't want to run your deploy stage when building a pull request.</p> <h4 style="text-align: left;">Updating GitHub Pull Request Status</h4> <p style="text-align: left;">One last thing you might like to add is updating the <a href="https://github.com/blog/1227-commit-status-api">pull request status</a>. This can be done using the Update GitHub Status action in Continua CI (in a future update this will be done via build event handlers, a new feature currently in development). This is what the pull request might look like after the status is updated by Continua CI:</p> <p style="text-align: center;"><img alt="" src="/blogimages/vincent/GitHubPull/PRStatus.png" /></p> 