Thursday, 12 November 2015

Gartner Symposium 2015

This past week I attended the Gartner Symposium in Barcelona, Spain. Symposium is the annual European gathering of senior IT leaders (CxOs) and an opportunity to analyse trends, reflect on our current progress and network with peers across the industry.

Adoption of IT is now growing at an unprecedented rate, and we often underestimate the impact of technology in the current climate. Access to technology is easier than ever, with multiple options for sourcing and implementing IT solutions. I witness this in my organisation with directorate-sourced solutions (which we MUST then implement) and ad-hoc software-service implementations carried out without any involvement from IT.

As a collective IT team we need to adapt to these challenges to sustain our position as a collaborative care enabler moving forwards.

Key messages from the Gartner Symposium are:

  • Economics of Connections – A rather verbose "Gartner" term for crowd-sourcing. As an organisation we are too focused on trying to solve all of the problems ourselves, and we need to test new mechanisms of delivery. What would it look like if we asked the junior doctors to solve the problems themselves, with IT facilitation, via a crowd-sourced forum? What about an NHS-wide Kickstarter to solve common NHS problems? Further reading on Wikinomics is advised if you are interested.
  • Business is now Bi-Modal – Not only is the mechanism of IT delivery bi-modal (Mode 1 – traditional; Mode 2 – agile and iterative), entire businesses are now moving to bi-modal models, taking the opportunity to shed legacy applications and rebuild revenue streams on top of iterative digital platforms.
  • Platforms are still important – Large monolithic systems are no longer the way to deliver successful digital business. Modern, agile platforms are used across industry as a key enabler of success in the digital age, driven by open APIs and cloud-based scalability.
  • Human contextualisation – It is now more important than ever to understand the human context of delivery. We should focus on user experience and ease of use to drive benefit, rather than on process complexity, to ensure we sustain a successful relationship with our colleagues in the organisation.
  • Cognitive Computing – There were some truly amazing technologies on display at Symposium that show the advances in cognitive computing (speaking with Amelia, the AI helpdesk, was one of the highlights). It is clear that as we move forwards, cognitive algorithms will drive an ethical debate in the industry. At several keynote presentations we witnessed the power of cognitive computing, with machines now able to analyse X-rays 7% more accurately than human radiologists. The move from logic-based to pattern-based algorithms is driving this shift.

During lunch on the second day we had a presentation by Chris Hadfield, a truly inspirational individual who has flown several space missions and was the first Canadian to command the International Space Station. Chris set his life's objective at the age of nine and achieved it by focusing on completing what he believed impossible. If you get time I recommend watching his TED talk here: https://www.ted.com/talks/chris_hadfield_what_i_learned_from_going_blind_in_space

It was clear from the week that we are moving in the right direction with our IT strategy, the current in-flight programmes and the proposed structural changes. Stability and solid foundations are critical to success; enabling platforms that allow business agility is important; and facilitating great customer engagement, partnership and collaboration at all levels must be a priority if we are to retain any position in the industry. We need to move from back-office to front-office and be seen as a care enabler.

As we move forward we all need to keep an eye on innovation and drive it through our organisation. We tend to spend too much time delivering the "now" and not enough focusing on the future. It is these future innovations that will change the health economy, and if we do not start scanning the horizon, the current models of care and IT delivery will be obsolete before we even realise.

Thursday, 23 January 2014

Memory Leak Free Roslyn ScriptEngine

Following on from my recent post regarding scripting with Roslyn, I have witnessed memory leaks whilst using the ScriptEngine class. Although cool, it appears Roslyn does not yet make use of collectible assemblies, which means that every expression evaluation generates and loads a new assembly. Once loaded, that assembly stays in memory for the life of your application, and this happens on every call to session.Execute.
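
To make the growth visible, here is a minimal sketch (it assumes the same Roslyn CTP Roslyn.Scripting.CSharp API used in the full sample further down, including a parameterless CreateSession overload) that evaluates an expression repeatedly and prints the managed heap size alongside the number of assemblies loaded into the current AppDomain; both climb on every Execute:

using System;
using Roslyn.Scripting.CSharp;

class LeakDemo
{
    static void Main()
    {
        // One engine, one session - yet every evaluation below loads another assembly.
        var engine = new ScriptEngine();
        var session = engine.CreateSession();
        session.ImportNamespace("System");

        for (var i = 0; i < 100; i++)
        {
            // Each Execute emits a new, non-collectible assembly into the current AppDomain.
            session.Execute("DateTime.Now.AddDays(-5).ToString()");

            Console.WriteLine("{0} bytes, {1} assemblies loaded",
                GC.GetTotalMemory(true),
                AppDomain.CurrentDomain.GetAssemblies().Length);
        }

        Console.ReadLine();
    }
}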

To prevent this you can make use of Application Domains (AppDomains). An AppDomain provides an isolated execution environment for code: every application runs within a domain and can spawn further domains, and resources loaded inside a domain can be unloaded with it. All assemblies loaded or created inside an isolated domain are destroyed when the domain is unloaded. AppDomains have been around since .NET 1.0 and are an excellent asset for managing dynamic assemblies and isolation, which makes them perfect for disposing of heavy, isolated code.

Below is an extension of my previous sample application which shows the AppDomain approach. My application no longer leaks memory, so long as the application domain is unloaded. This approach does have some limitations, as all objects crossing the domain boundary are required to be serializable. Hope you find this helpful.

using Roslyn.Scripting.CSharp;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Text;
using System.Threading.Tasks;

namespace RoslynMemFree
{
    public class Program
    {
        static void Main(string[] args)
        {
            // Create an isolated domain and a proxy to an Executor instance living inside it.
            var domain = AppDomain.CreateDomain("Roslyn");
            var roslyn = (Executor)domain.CreateInstanceAndUnwrap(typeof(Program).Assembly.FullName, typeof(Executor).FullName);

            using (var file = File.OpenWrite(@"C:\temp\roslyn.csv"))
            using (var streamw = new StreamWriter(file))
            {
                for (var i = 0; i < 100; i++)
                {
                    var url = @"?Item1={VisitId}&Item2={TrustNumber}&Item3={VisitId.Substring(0,1)}&StartDate={System.Web.HttpUtility.UrlEncode(DateTime.Now.AddDays(-5).ToString())}&Item4={(1==0?""No"":""Yes"")}";
                    var item = new PatientVisitEntity { VisitId = "V123456", TrustNumber = "P123456" };
                    var output = roslyn.Execute(item, url);
                    Console.WriteLine(output);

                    // Log the managed heap size after each evaluation, to the console and to the CSV.
                    var memory = GC.GetTotalMemory(true);
                    Console.WriteLine(memory);
                    streamw.WriteLine(memory);
                }
            }

            // Unloading the domain destroys every assembly Roslyn generated inside it.
            AppDomain.Unload(domain);

            Console.WriteLine(GC.GetTotalMemory(true));
            while (Console.ReadLine().ToUpper() != "EXIT")
            {
                Console.WriteLine(GC.GetTotalMemory(true));
            }
        }

        [Serializable]
        public class PatientVisitEntity
        {
            public string VisitId { get; set; }

            public string TrustNumber { get; set; }
        }

        [Serializable]
        public class Executor : MarshalByRefObject
        {
            public override object InitializeLifetimeService()
            {
                // A null lease keeps the proxy alive for the lifetime of the domain.
                return null;
            }

            public string Execute(dynamic t, string exp)
            {
                var engine = new ScriptEngine();

                var session = engine.CreateSession(t);
                session.AddReference(t.GetType().Assembly);
                session.AddReference("System.Web");
                session.ImportNamespace("System");
                session.ImportNamespace("System.Web");

                // Find each {expression} token in the template and evaluate it with Roslyn.
                var expressions = System.Text.RegularExpressions.Regex.Matches(exp, @"\{.*?\}");
                var outputUrl = exp;

                foreach (var func in expressions)
                {
                    var f = func.ToString();
                    var realExp = f.Substring(1, f.Length - 2);
                    var output = session.Execute(realExp).ToString();

                    outputUrl = outputUrl.Replace(f, output);
                }

                return outputUrl;
            }
        }
    }
}



And here is the memory output from the console. You will see that memory grows with every call to Execute until I unload the new AppDomain.



Tuesday, 10 December 2013

Dynamic Addition Of Always On AG Databases

Today I have been working on scripting the capability to dynamically add missing databases to secondary replicas when databases are created on the fly. With that mouthful out of the way, the following PowerShell automatically adds databases to all existing availability groups (AGs) if they are not already members.


#**********************************************************************************
# DYNAMIC ADDITION OF USER-DATABASES TO EXISTING AG GROUP
# GARY MCALLISTER 2013
# RUNS FROM THE PRIMARY REPLICA AND UTILISES REMOTE \\SECONDARY\BACKUP SHARES
# THE SECONDARY MUST ALLOW THE SQL ACCOUNT PERMISSION TO THE BACKUP SHARE
#**********************************************************************************
clear
import-module sqlps

$dbs = @();
$excludedDbs = @('[master]','[model]','[msdb]','[tempdb]','[SSISDB]','[ReportServer]','[ReportServerTempDb]')

[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$sqlServer = new-object ("Microsoft.SqlServer.Management.Smo.Server") "LOCALHOST\ENOTINGINSTANCE"

#THIS ENSURES THAT THE DATABASE IS ADDED TO ALL AVAILABILITY GROUPS 
#IN REALITY YOU PROBABLY ONLY WANT IT IN ONE.
foreach ($agGroup in $sqlServer.AvailabilityGroups)
{
 $agName = $agGroup.name
 $agDbs = $agGroup.AvailabilityDatabases
 
 if ($agName -ne $null)
 { 
  foreach($sqlDatabase in $sqlServer.databases) 
  {
   $db = "$sqlDatabase";
   $found = $false
   foreach ($agDb in $agDbs) 
   {
    if ($db -eq $agDb){
     $found = $true;
     break;
    }
   }
   
   foreach ($sysDb in $excludedDbs)
   {
    if ($db -eq $sysDb){
     $found = $true;
     break;
    }   
   }
   
   #Add this to the ag group
   if ($found -ne $true) {
    $dbs += $sqlDatabase
   }
  }
 } 
 
 $agPrimary;
 foreach ($db in $dbs) {
  foreach ($repl in $agGroup.AvailabilityReplicas) {
   if ($repl.Role -eq "Primary") 
   {
    $agPrimary = $repl;
   }
  
   #if this is the secondary
   if ($repl.Role -eq "Secondary"){
    $dbfName = $db.Name;
       
    $secondaryServerName = $repl.Name.Substring(0, $repl.Name.IndexOf("\"));
    $secondaryServerInstanceName = $repl.ToString().Replace("[","").Replace("]","");
    
    $primaryServerName = $agPrimary.Name.SubString(0, $agPrimary.Name.IndexOf("\"));
    $primaryServerInstanceName = $agPrimary.ToString().Replace("[","").Replace("]","");
    
    $backupFullLocation = "\\$secondaryServerName\backup\$dbfName.bak";
    $backupLogLocation = "\\$secondaryServerName\backup\$dbfName.trn";
    $agPrimaryPath = "SQLSERVER:\SQL\$primaryServerInstanceName\AvailabilityGroups\$agName"
    $agSecondaryPath = "SQLSERVER:\SQL\$secondaryServerInstanceName\AvailabilityGroups\$agName"
    
    if ((Test-Path -Path FileSystem::$backupFullLocation)){
     Remove-Item -Path FileSystem::$backupFullLocation -Force
    }
    
    if ((Test-Path -Path FileSystem::$backupLogLocation)){
     Remove-Item -Path FileSystem::$backupLogLocation -Force
    }
        
    #COPY_ONLY, FORMAT, INIT, SKIP, REWIND, NOUNLOAD, COMPRESSION
    Backup-SqlDatabase -Database $dbfName -BackupFile $backupFullLocation -ServerInstance $primaryServerInstanceName -CopyOnly
    Restore-SqlDatabase -Database $dbfName -BackupFile $backupFullLocation -ServerInstance $secondaryServerInstanceName -NoRecovery
    
    #NOFORMAT, NOINIT, NOSKIP, REWIND, NOUNLOAD, COMPRESSION
    Backup-SqlDatabase -Database $dbfName -BackupFile $backupLogLocation -ServerInstance $primaryServerInstanceName -BackupAction 'Log'
    Restore-SqlDatabase -Database $dbfName -BackupFile $backupLogLocation -ServerInstance $secondaryServerInstanceName -RestoreAction 'Log' -NoRecovery
    
    Add-SqlAvailabilityDatabase -Path $agPrimaryPath -Database $dbfName
    Add-SqlAvailabilityDatabase -Path $agSecondaryPath -Database $dbfName
    
    if ((Test-Path -Path FileSystem::$backupFullLocation)){
     Remove-Item -Path FileSystem::$backupFullLocation -Force
    }
    
    if ((Test-Path -Path FileSystem::$backupLogLocation)){
     Remove-Item -Path FileSystem::$backupLogLocation -Force
    }
   }
  }
 }
}

Thursday, 5 December 2013

jQuery Form Validation and Twitter Bootstrap 3

I have been doing some work developing mobile applications for my MSc recently. To support the mobile data-capture requirements I have made use of Twitter Bootstrap and the jQuery Validation plugin. Together these two frameworks provide a great toolset for validating input and building responsive applications. I have made the process slicker by including a mechanism to show and hide validation errors as the form changes. A sample HTML form which demonstrates how this works is shown below.

<form role="form" class="form-horizontal" id="registrationform" name="registrationform" method="post" action="registersave.php">

<div class="form-group">

<label class="col-sm-2 control-label"  for="username">Username</label>

              <div class="input-group">

                     <span class="input-group-addon">*</span>

                     <input type="text" class="form-control" id="username" name="username" placeholder="Username" required="required">

</div>

              <label class="error label label-danger" for="username"></label>           

       </div>

</form>

And the JavaScript to support the validation is found below:

$(function() {
 $('#username').focus();
 
 $('#registrationform').validate({
  // Called when a field fails validation: flag the form-group and reveal its error label.
  highlight: function(element) {
   $(element).closest('.form-group').removeClass('has-success').addClass('has-error');
   $(element).closest('.form-group').children('.error').removeClass('hide');
  },
  // Called when a field becomes valid: flag the group as successful and hide the label
  // (jQuery Validation passes the error label element to this callback).
  success: function(element) {
   $(element).closest('.form-group').removeClass('has-error').addClass('has-success');
   $(element).addClass('hide');
  }
 });
});

When data is entered into the form, invalid fields are highlighted in red with their error labels shown, and valid fields are marked green with the labels hidden.

Wednesday, 30 October 2013

.NET 4.5, Obtaining Role Membership

Over the last few months I have been doing a significant amount of work with claims-based authentication and authorisation. This has led me into some very tight spots with the management of principals and identities inside the .NET Framework. More recently I have been required to produce a console application which takes the current Windows credential and verifies role membership.

This is a very quick post which provides the solution I have adopted to verify role membership on a Windows machine inside a console application. I am surprised at the effort required to achieve this outcome and am sure there is an easier way (see the sketch after the listing).


    using System;
    using System.DirectoryServices;
    using System.Linq;
    using System.Runtime.InteropServices;
    using System.Security.Principal;
    using System.Threading;

    using GSTT.ENoting.Domain.Security;
    using GSTT.ENoting.Infrastructure.Data.Concrete;
    using GSTT.ENoting.Infrastructure.Data.Interfaces;
    using GSTT.ENoting.Services;
    using GSTT.ENoting.Services.Factories;

    class Program
    {
        static void Main(string[] args)
        {

            try
            {
                // Obtain the current Windows token and impersonate it on this thread.
                var item = WindowsIdentity.GetCurrent().Token;
                var identity = WindowsIdentity.Impersonate(item);

                var principal = WindowsIdentity.GetCurrent();

                Console.WriteLine("Logged on as: " + principal.Name);
                Console.WriteLine("Credential Type: " + WindowsIdentity.GetCurrent().ToString());

                // The identity's role claims are group SIDs; resolve the local group names so the two can be compared.
                var claimRoles = principal.Claims.Where(p => p.Type == principal.RoleClaimType).ToList();
                var groups = Groups();

                using (var uow = new UnitOfWork(new IObjectContextFactory[] { new EnotingContextFactory() }).BeginTransaction(CsCatalog.ENoting))
                {
                    var roles = uow.Entity().Read.ToList();

                    foreach (var role in roles)
                    {
                        var group = groups.FirstOrDefault(p => p.Name == role.RoleName);
                        var exists = group != null && claimRoles.FirstOrDefault(p => p.Value == group.Sid) != null;

                        Console.WriteLine("IsInRole: {0}, {1}", role.RoleName, exists);
                    }
                }
            }
            catch (Exception exception)
            {
                Console.WriteLine(exception.ToString());
            }

            Console.ReadLine();
        }

        private static AdObject[] Groups()
        {
            // Enumerate the local groups on this machine via the WinNT ADSI provider.
            var path = string.Format("WinNT://{0},computer", Environment.MachineName);

            using (var computerEntry = new DirectoryEntry(path))
            {
                var userNames = from DirectoryEntry childEntry in computerEntry.Children
                                where childEntry.SchemaClassName == "Group"
                                select new AdObject { Name = childEntry.Name, Class = childEntry.SchemaClassName, Id = childEntry.Guid.ToString(), Sid = GetSidString((byte[])childEntry.Properties["objectSid"][0]) };

                return userNames.ToArray();
            }           
        }

        private class AdObject
        {

            public string Name { get; set; }

            public string Id { get; set; }

            public string Class { get; set; }

            public string Sid { get; set; }
        }

        [DllImport("advapi32", CharSet = CharSet.Auto, SetLastError = true)]
        static extern bool ConvertSidToStringSid([MarshalAs(UnmanagedType.LPArray)] byte[] pSID, out IntPtr ptrSid);

        public static string GetSidString(byte[] sid)
        {
            // ConvertSidToStringSid converts the binary objectSid into its SDDL string form.
            IntPtr ptrSid;
            string sidString;
            if (!ConvertSidToStringSid(sid, out ptrSid))
            {
                throw new System.ComponentModel.Win32Exception();
            }

            try
            {
                sidString = Marshal.PtrToStringAuto(ptrSid);
            }
            finally
            {
                Marshal.FreeHGlobal(ptrSid);
            }
            return sidString;
        }
    }
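
On the "easier way" point: if the roles you need map directly to Windows group names, WindowsPrincipal.IsInRole does most of the work for you. The sketch below is a hedged alternative rather than the approach used above, and the "ENotingUsers" group name is purely hypothetical:

using System;
using System.Security.Principal;

class RoleCheck
{
    static void Main()
    {
        // Wrap the current Windows identity in a principal and query group membership directly.
        var principal = new WindowsPrincipal(WindowsIdentity.GetCurrent());

        Console.WriteLine("Logged on as: " + principal.Identity.Name);

        // Well-known roles can be checked via the WindowsBuiltInRole enum...
        Console.WriteLine("Administrator: " + principal.IsInRole(WindowsBuiltInRole.Administrator));

        // ...and custom groups by name ("MACHINE\Group" or "DOMAIN\Group").
        // "ENotingUsers" is a hypothetical local group used purely for illustration.
        Console.WriteLine("ENotingUsers: " + principal.IsInRole(Environment.MachineName + @"\ENotingUsers"));

        Console.ReadLine();
    }
}

This works when role names and group names line up one-to-one; the listing above exists because our role names live in the database and have to be matched back to local group SIDs via the identity's claims.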

Please let me know your thoughts.

Thursday, 12 September 2013

Coffee break with Roslyn

Over the last three years Microsoft has made some significant strides towards compiler-as-a-service: a mechanism which enables the developer to take syntax (lexicon) and easily analyse, compile and/or execute the code to obtain a result. Until recently, to write and execute C# code you would need to compile the output using a command-line compiler such as csc.exe, which emits CIL that the runtime then JIT-compiles and executes on your architecture. Creating modules and assemblies has always been easy thanks to the tooling in Visual Studio, but providing scripting and dynamic capabilities within applications has always been very difficult. I am pleased to say that this is no longer the case thanks to the .NET compiler-as-a-service, Roslyn.

Roslyn has led to some really cool use-cases for utilising C# (or VB) as a scripting language. One such project is ScriptCS, which provides the ability to create and maintain full C# projects and run them in a "pure" JIT fashion. No longer do you have to call csc.exe to compile your C# applications; with Roslyn you can write C# dynamically and execute the output without the native language tooling.
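
As a minimal illustration (a sketch that assumes the Roslyn CTP's Roslyn.Scripting.CSharp API, including a parameterless CreateSession overload), evaluating C# at runtime takes only a few lines:

using System;
using Roslyn.Scripting.CSharp;

class QuickScript
{
    static void Main()
    {
        // Create the scripting engine and a session that holds state between evaluations.
        var engine = new ScriptEngine();
        var session = engine.CreateSession();

        // Evaluate plain C# expressions at runtime - no call to csc.exe required.
        Console.WriteLine(session.Execute("1 + 2"));                      // 3
        Console.WriteLine(session.Execute(@"""Hello, "" + ""Roslyn"""));  // Hello, Roslyn
    }
}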

To demonstrate Roslyn I have created the following dynamic url generator. This console sample uses the Roslyn script engine to create a url template that is populated from a state object. As you can see it really is easy to implement and can provide powerful results.

namespace UrlExpressionLibrary
{
    using System;
    using Roslyn.Scripting.CSharp;

    public class Program
    {
        static void Main(string[] args)
        {
            var url = @"?Item1={VisitId}&Item2={TrustNumber}&Item3={VisitId.Substring(0,1)}&StartDate={System.Web.HttpUtility.UrlEncode(DateTime.Now.AddDays(-5).ToString())}&Item4={(1==0?""No"":""Yes"")}";
            var item = new PatientVisitEntity { VisitId = "V123456", TrustNumber = "P123456" };

            var engine = new ScriptEngine();
            var session = engine.CreateSession(item);
            session.AddReference(item.GetType().Assembly);
            session.AddReference("System.Web");
            session.ImportNamespace("System");
            session.ImportNamespace("System.Web");

            // Find each {expression} token in the template so it can be evaluated against the session.
            var expressions = System.Text.RegularExpressions.Regex.Matches(url, @"\{.*?\}");
            var outputUrl = url;

            foreach (var func in expressions)
            {
                var f = func.ToString();
                var realExp = f.Substring(1, f.Length - 2);
                var output = session.Execute(realExp).ToString();

                outputUrl = outputUrl.Replace(f, output);
            }

            Console.WriteLine(outputUrl);
            Console.ReadLine();
        }

        public class PatientVisitEntity
        {
            public string VisitId { get; set; }

            public string TrustNumber { get; set; }
        }
    }
}

Roslyn should not be confused with the DLR and CodeDom capabilities previously provided in the framework. Roslyn is a pure compiler-as-a-service and provides a C# compiler written completely in managed code. It is not used solely for generating code and has no dependency on external resources to compile code. Roslyn is a full syntax analyser which will be the future of the .NET tooling environment. I have no idea where the DLR will go in the future, especially considering Microsoft's preference for type-safe programming.


So, start using Roslyn for all your dynamic use-cases and free yourself from the burden of previous scripting hacks. The opportunities are endless.

Monday, 15 July 2013

Open Source in the NHS

I have worked in the UK health service for over 15 years. During this time I have subscribed to the patient-centric values of the system; as IT employees we often overlook this when managing implementations, configuring hardware or developing software. Over that time I have seen small systems come and go and a national programme deliver lacklustre benefits. Throughout, I was working for a very successful Trust that ignored all the noise and continued to focus on one thing. The patient.

Recently the government asserted its intention to utilise "Open Source" within the health service to ensure interoperability and sustainability for future developments. This message was underpinned by four specific use-cases of open developments in the NHS:

  • Wardware – An open observation and assessment solution managed and controlled by Tactix4. The source is not linked from the home page, Wardware.co.uk.
  • OpenEyes – An open solution for ophthalmology, available openly from GitHub here: https://github.com/openeyes
  • VistA – The veterans' health information systems platform. An *open* platform built on MUMPS, with a view to modernising using JavaScript. Source available via FOI request.
  • King's Integration Engine – An integration platform built around Apache Camel. I cannot find the source or documentation online.

As a senior manager, and a software engineer by trade, I am slightly miffed by some of the recent messages. I am also slightly confused by the understanding of OSS shown in the literature produced to date. For me, an open solution has to meet at least three key criteria in order to be classified as truly "Open".

1.  All of the code should be open, freely available online, well documented and accessible. The code should be managed with multiple commits from multiple sources, and it should be evident that regular commits are made and the code-base is active. This is somewhat a utopia, but good open solutions tend to meet these criteria.
2.  The licensing for the *solution* should utilise an "Open" licence, meaning that *all* elements of the solution are free to use and unrestricted from modification and distribution.
3.  A corporate investment or viable service provider should be clearly backing the product. This may sound counter-intuitive, but in order for an open solution to be viable long-term it needs backing from a corporate entity. Without corporate backing the solution is "unsupported" and there is little evidence to ensure continued development. You could end up with a stale loaf of bread.

If you use the above as a checklist of open compliance, the case studies identified fall short. The only exception is OpenEyes, which, apart from needing some UX polish, is delivering on its ambition and following open principles. I know that Tactix4 and KCH are working towards this, but it shows the difficulty of managing open source.

This is not an attempt to dumb down or belittle the message. The individuals and organisations above are carrying a vital baton for the health software industry; one that, executed correctly, will change and empower the NHS with true interoperability and sustainable platforms.

What we need are published standards and approaches that are vital to NHS certification. The solutions above vary drastically in technology, from PHP to Java to MUMPS and C. This increases the TCO for an organisation, and when the government is trying to reduce costs we should be looking to consolidate the technology in our estates. If that is unachievable, the government needs to provide these solutions as cloud-hosted services, shifting the TCO to the government.

Open source is an ecosystem with a management overhead. All software requires design, build and testing, and even in an agile world this is difficult to do at scale. To create an open EPR with no controlled assurance is high-risk. Who owns the legal obligations of an open system when a buggy commit impacts patient care? Who fixes these issues, and what are the SLAs? Who supports the developer community around these products? These are all vital questions which need resolving to ensure a mature "Open" delivery.

I believe the government should invest in an open platform which provides the core technology stack for "App" development. Such a platform would utilise standard "Open" APIs and architecture for integration and development. The government can control the standards and approaches to inclusion, offering services to NHS Trusts that wish to implement an open offering. This SaaS federation approach is the "Cloud Model", and one that remains untapped for clinical systems to date.

Each NHS Trust is different; each has different views, values, choices and demographic issues. A "one-size-fits-all" approach is not going to work, and this is the challenge in the IT space too. Every Trust has different skills and expertise, and some Trusts have no in-house skills whatsoever. How will we achieve flexible configuration without holistic involvement and commitment from local Trusts? This is where NPfIT failed: there was a lack of realisation that local configuration is critical to successful implementation and takes years to achieve.

It is going to be interesting to see where this goes over the coming years. New services such as http://developer.nhs.uk/ are appearing which genuinely excite me as an IT professional. I sincerely hope that over the coming months some decisions are made about how openness will be delivered to the NHS. Currently the message is unclear and lacks thought, governance and vision.

As the message evolves and matures I look forward to the outcomes: a world where free solutions are available, based on modern technologies, that are safe, usable, configurable and interoperable. Until that day comes I will continue to focus on the most important thing in health. The patient.