Tuesday, 10 December 2013

Dynamic Addition Of Always On AG Databases

Today I have been working on scripting the capability to dynamically add missing databases to the secondary replicas of an Always On availability group when databases are created on the fly.  With that mouthful out of the way, the following PowerShell automatically adds any database that is not already a member to every existing availability group on the instance.


#**********************************************************************************
# DYNAMIC ADDITION OF USER DATABASES TO EXISTING AVAILABILITY GROUPS
# GARY MCALLISTER 2013
# RUNS FROM THE PRIMARY REPLICA AND UTILISES REMOTE \\SECONDARY\BACKUP SHARES
# SECONDARY MUST ALLOW THE SQL SERVICE ACCOUNT PERMISSION TO THE BACKUP SHARE
#**********************************************************************************
clear
import-module sqlps

$dbs = @();
$excludedDbs = @('[master]','[model]','[msdb]','[tempdb]','[SSISDB]','[ReportServer]','[ReportServerTempDb]')

[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$sqlServer = new-object ("Microsoft.SqlServer.Management.Smo.Server") "LOCALHOST\ENOTINGINSTANCE"

#THIS ENSURES THAT THE DATABASE IS ADDED TO ALL AVAILABILITY GROUPS 
#IN REALITY YOU PROBABLY ONLY WANT IT IN ONE.
foreach ($agGroup in $sqlServer.AvailabilityGroups)
{
 $agName = $agGroup.name
 $agDbs = $agGroup.AvailabilityDatabases
 $dbs = @() #reset the list of candidate databases for this availability group
 
 if ($agName -ne $null)
 { 
  foreach($sqlDatabase in $sqlServer.databases) 
  {
   $db = "$sqlDatabase";
   $found = $false
   foreach ($agDb in $agDbs) 
   {
    if ($db -eq $agDb){
     $found = $true;
     break;
    }
   }
   
   foreach ($sysDb in $excludedDbs)
   {
    if ($db -eq $sysDb){
     $found = $true;
     break;
    }   
   }
   
   #Add this to the ag group
   if ($found -ne $true) {
    $dbs += $sqlDatabase
   }
  }
 } 
 
 #determine the primary replica up front so it is set regardless of replica enumeration order
 $agPrimary = $agGroup.AvailabilityReplicas | Where-Object { $_.Role -eq "Primary" }
 foreach ($db in $dbs) {
  foreach ($repl in $agGroup.AvailabilityReplicas) {
  
   #if this is the secondary
   if ($repl.Role -eq "Secondary"){
    $dbfName = $db.Name;
       
    $secondaryServerName = $repl.Name.Substring(0, $repl.Name.IndexOf("\"));
    $secondaryServerInstanceName = $repl.ToString().Replace("[","").Replace("]","");
    
    $primaryServerName = $agPrimary.Name.SubString(0, $agPrimary.Name.IndexOf("\"));
    $primaryServerInstanceName = $agPrimary.ToString().Replace("[","").Replace("]","");
    
    $backupFullLocation = "\\$secondaryServerName\backup\$dbfName.bak";
    $backupLogLocation = "\\$secondaryServerName\backup\$dbfName.trn";
    $agPrimaryPath = "SQLSERVER:\SQL\$primaryServerInstanceName\AvailabilityGroups\$agName"
    $agSecondaryPath = "SQLSERVER:\SQL\$secondaryServerInstanceName\AvailabilityGroups\$agName"
    
    if ((Test-Path -Path FileSystem::$backupFullLocation)){
     Remove-Item -Path FileSystem::$backupFullLocation -Force
    }
    
    if ((Test-Path -Path FileSystem::$backupLogLocation)){
     Remove-Item -Path FileSystem::$backupLogLocation -Force
    }
        
    #COPY_ONLY, FORMAT, INIT, SKIP, REWIND, NOUNLOAD, COMPRESSION
    Backup-SqlDatabase -Database $dbfName -BackupFile $backupFullLocation -ServerInstance $primaryServerInstanceName -CopyOnly
    Restore-SqlDatabase -Database $dbfName -BackupFile $backupFullLocation -ServerInstance $secondaryServerInstanceName -NoRecovery
    
    #NOFORMAT, NOINIT, NOSKIP, REWIND, NOUNLOAD, COMPRESSION
    Backup-SqlDatabase -Database $dbfName -BackupFile $backupLogLocation -ServerInstance $primaryServerInstanceName -BackupAction 'Log'
    Restore-SqlDatabase -Database $dbfName -BackupFile $backupLogLocation -ServerInstance $secondaryServerInstanceName -RestoreAction 'Log' -NoRecovery
    
    Add-SqlAvailabilityDatabase -Path $agPrimaryPath -Database $dbfName
    Add-SqlAvailabilityDatabase -Path $agSecondaryPath -Database $dbfName
    
    if ((Test-Path -Path FileSystem::$backupFullLocation)){
     Remove-Item -Path FileSystem::$backupFullLocation -Force
    }
    
    if ((Test-Path -Path FileSystem::$backupLogLocation)){
     Remove-Item -Path FileSystem::$backupLogLocation -Force
    }
   }
  }
 }
}
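
As the header comments note, each secondary must expose a \\SECONDARY\backup share that the SQL Server service account can write to.  A minimal sketch of creating such a share on a secondary, assuming Windows Server 2012 or later; the path and DOMAIN\sqlservice account are illustrative only:

#create the backup folder, share it and grant the SQL Server service account access
#the D:\Backup path and DOMAIN\sqlservice account are illustrative
New-Item -Path D:\Backup -ItemType Directory -Force
New-SmbShare -Name backup -Path D:\Backup -FullAccess "DOMAIN\sqlservice"
icacls D:\Backup /grant "DOMAIN\sqlservice:(OI)(CI)M"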

Thursday, 5 December 2013

jQuery Form Validation and Twitter Bootstrap 3

I have been doing some work developing mobile applications for my MSc recently.  To support the mobile data-capture requirements I have made use of Twitter Bootstrap 3 and the jQuery Validation plugin.  Together these two frameworks provide a great toolset for validating input and building responsive applications.  I have made the process slicker by adding a mechanism that shows and hides validation errors as the form changes.  A sample HTML form which demonstrates how this works is shown below.

<form role="form" class="form-horizontal" id="registrationform" name="registrationform" method="post" action="registersave.php">
  <div class="form-group">
    <label class="col-sm-2 control-label" for="username">Username</label>
    <div class="input-group">
      <span class="input-group-addon">*</span>
      <input type="text" class="form-control" id="username" name="username" placeholder="Username" required="required">
    </div>
    <label class="error label label-danger" for="username"></label>
  </div>
</form>

And the JavaScript to support the validation is found below:

$(function() {
 $('#username').focus();
 
 $('#registrationform').validate({
  highlight: function(element) {
   $(element).closest('.form-group').removeClass('has-success').addClass('has-error');
   $(element).closest('.form-group').children('.error').removeClass('hide');
  },
  success: function(element) {
   $(element).closest('.form-group').removeClass('has-error').addClass('has-success');
   $(element).addClass('hide');
  }
 });  
});
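
For this to work the page also needs jQuery, the jQuery Validation plugin and the Bootstrap 3 stylesheet referenced.  A minimal sketch using local paths; the paths and file names are assumptions, so adjust them to your own setup or preferred CDN:

<link rel="stylesheet" href="css/bootstrap.min.css">
<script src="js/jquery.min.js"></script>
<script src="js/jquery.validate.min.js"></script>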

When data is entered into the form, each form group is flagged with has-error or has-success and the error label is shown or hidden accordingly.

Wednesday, 30 October 2013

.NET 4.5, Obtaining Role Membership

Over the last few months I have been doing a significant amount of work with claims authentication and authorisation.  This has led me into some very tight spots with management of principals and identities inside the .NET framework.  More recently I have been required to produce a console application which takes the current windows credential and verifies role membership.

This is a very quick post which provides the solution I have adopted to obtain verification of role membership on a windows machine inside a console application.  I am surprised at the effort required to achieve this outcome and am sure there is an easier way.
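
As an aside, when the group names are known up front the built-in WindowsPrincipal.IsInRole check is the easier way; a minimal sketch (the group names are illustrative):

using System;
using System.Security.Principal;

class RoleCheck
{
    static void Main()
    {
        // Wrap the current Windows identity in a principal and query membership directly.
        var principal = new WindowsPrincipal(WindowsIdentity.GetCurrent());

        Console.WriteLine("Is administrator: {0}", principal.IsInRole(WindowsBuiltInRole.Administrator));
        Console.WriteLine("Is in Users:      {0}", principal.IsInRole(@"BUILTIN\Users"));
    }
}

My scenario is more awkward: the role names come from a database and have to be matched to local groups by SID, which is what the full console application below does.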


    using System;
    using System.DirectoryServices;
    using System.Linq;
    using System.Runtime.InteropServices;
    using System.Security.Principal;
    using System.Threading;

    using GSTT.ENoting.Domain.Security;
    using GSTT.ENoting.Infrastructure.Data.Concrete;
    using GSTT.ENoting.Infrastructure.Data.Interfaces;
    using GSTT.ENoting.Services;
    using GSTT.ENoting.Services.Factories;

    class Program
    {
        static void Main(string[] args)
        {

            try
            {
                var item = WindowsIdentity.GetCurrent().Token;
                var identity = System.Security.Principal.WindowsIdentity.Impersonate(item);

                var principal = WindowsIdentity.GetCurrent();

                Console.WriteLine("Logged on as: " + principal.Name);
                Console.WriteLine("Credential Type: " + WindowsIdentity.GetCurrent().ToString());

                var claimRoles = principal.Claims.Where(p => p.Type == principal.RoleClaimType).ToList();
                var groups = Groups();

                using (var uow = new UnitOfWork(new IObjectContextFactory[] { new EnotingContextFactory() }).BeginTransaction(CsCatalog.ENoting))
                {
                    var roles = uow.Entity().Read.ToList();

                    foreach (var role in roles)
                    {
                        var group = groups.FirstOrDefault(p => p.Name == role.RoleName);
                        var exists = group != null && claimRoles.FirstOrDefault(p => p.Value == group.Sid) != null;

                        Console.WriteLine("IsInRole: {0}, {1}", role.RoleName, exists);
                    }
                }
            }
            catch (Exception exception)
            {
                Console.WriteLine(exception.ToString());
            }

            Console.ReadLine();
        }

        private static AdObject[] Groups()
        {
            var path = string.Format("WinNT://{0},computer", Environment.MachineName);

            using (var computerEntry = new DirectoryEntry(path))
            {
                var userNames = from DirectoryEntry childEntry in computerEntry.Children
                                where childEntry.SchemaClassName == "Group"
                                select new AdObject { Name = childEntry.Name, Class = childEntry.SchemaClassName, Id = childEntry.Guid.ToString(), Sid = GetSidString((byte[])childEntry.Properties["objectSid"][0]) };

                return userNames.ToArray();
            }           
        }

        private class AdObject
        {

            public string Name { get; set; }

            public string Id { get; set; }

            public string Class { get; set; }

            public string Sid { get; set; }
        }

        [DllImport("advapi32", CharSet = CharSet.Auto, SetLastError = true)]
        static extern bool ConvertSidToStringSid([MarshalAs(UnmanagedType.LPArray)] byte[] pSID, out IntPtr ptrSid);

        public static string GetSidString(byte[] sid)
        {
            IntPtr ptrSid;
            string sidString;
            if (!ConvertSidToStringSid(sid, out ptrSid))
            {
                throw new System.ComponentModel.Win32Exception();
            }

            try
            {
                sidString = Marshal.PtrToStringAuto(ptrSid);
            }
            finally
            {
                Marshal.FreeHGlobal(ptrSid);
            }
            return sidString;
        }
    }

Please let me know your thoughts.

Thursday, 12 September 2013

Coffee break with Roslyn

Over the last 3 years Microsoft has made some significant strides towards compiler as a service. Compiler as a service is a mechanism which enables the developer to take syntax (the lexicon) and easily analyse, compile and/or execute the code to obtain a result. Until recently, to write and execute C# code you would need to compile it using a command-line compiler such as csc.exe; csc.exe outputs CIL, which the runtime then JIT-compiles and executes on your architecture. Creation of modules and assemblies in CIL has always been easy due to the tooling in Visual Studio, but it has always been very difficult to provide scripting and dynamic capabilities to actual applications. I am pleased to say that this is no longer the case thanks to the .NET compiler as a service, Roslyn.

Roslyn has led to some really cool use-cases for the utilisation of C# (or VB) as a scripting language. One such project is ScriptCS. ScriptCS provides the ability to create and maintain full C# projects and run them in a "pure" JIT fashion. No longer do you have to call csc.exe to compile your C# applications; with Roslyn you can dynamically write C# and execute the output without the native language tooling.

To demonstrate Roslyn I have created the following dynamic url generator. This console sample uses the Roslyn script engine to create a url template that is populated from a state object. As you can see it really is easy to implement and can provide powerful results.

namespace UrlExpressionLibrary
{
    using System;
    using Roslyn.Scripting.CSharp;

    public class Program
    {
        static void Main(string[] args)
        {
            var url = @"?Item1={VisitId}&Item2={TrustNumber}&Item3={VisitId.Substring(0,1)}&StartDate={System.Web.HttpUtility.UrlEncode(DateTime.Now.AddDays(-5).ToString())}&Item4={(1==0?""No"":""Yes"")}";
            var item = new PatientVisitEntity { VisitId = "V123456", TrustNumber = "P123456" };

            var engine = new ScriptEngine();
            var session = engine.CreateSession(item);
            session.AddReference(item.GetType().Assembly);
            session.AddReference("System.Web");
            session.ImportNamespace("System");
            session.ImportNamespace("System.Web");

            var expressions = System.Text.RegularExpressions.Regex.Matches(url, @"\{.*?\}");
            var outputUrl = url;

            foreach (var func in expressions)
            {
                var f = func.ToString();
                var realExp = f.Substring(1, f.Length - 2);
                var output = session.Execute(realExp).ToString();

                outputUrl = outputUrl.Replace(f, output);
            }

            Console.WriteLine(outputUrl);
            Console.ReadLine();
        }

        public class PatientVisitEntity
        {
            public string VisitId { get; set; }

            public string TrustNumber { get; set; }
        }
    }
}

Roslyn should not be confused with the previous DLR and CodeDom capabilities provided in the framework.  Roslyn is a pure compiler as a service and provides the C# compiler written completely in managed code.  It is not limited to generating code and has no dependency on external tools to compile code.  Roslyn is a full syntax analyser which will be the future of the .NET tooling environment.  I have no idea where the DLR will go in the future, especially considering Microsoft's preference for type-safe programming.


So, start using Roslyn for all your dynamic use-cases and free yourself from the burden of previous scripting hacks.  The opportunities are endless.

Monday, 15 July 2013

Open Source in the NHS

I have worked in the UK health service for over 15 years.  During this time I have subscribed to the patient-centric values of the system.  As an IT employee it is easy for this to be overlooked when managing implementations, configuring hardware or developing software.  Over that time I have seen small systems come and go and a national programme deliver lacklustre benefits.  Throughout, I was working for a very successful Trust which ignored all the noise and continued to focus on one thing.  The patient.

Recently the government asserted its intention to utilise "Open Source" within the health service to ensure interoperability and sustainability for future developments.  This message was underpinned by the inclusion of 4 specific use-cases of open developments in the NHS, these being:

·  Wardware - an open observation and assessment solution managed and controlled by Tactix4.  The source is not linked from the home page. Wardware.co.uk
·  OpenEyes - an open solution for ophthalmology, available openly on GitHub: https://github.com/openeyes
·  VistA - the veterans' health information systems platform.  An *Open* platform built on MUMPS, with a view to modernising using JavaScript.  Source available via FOI request.
·  King Integration Engine - an integration platform built around Apache Camel.  I cannot find the source or documentation online.

As a senior manager and software engineer by trade I am slightly miffed by some of the recent messages.  I am also slightly confused by the understanding of OSS in the literature produced to date.  For me an Open solution has to meet at least 3 key criteria in order to be classified as truly "Open".

1.  All the code should be open, freely available online, well-documented and accessible.  The code should be managed with multiple commits from multiple sources, and it should be evident that regular commits are made and the code-base is active.  This is somewhat of a utopia, but good open solutions tend to meet these criteria.
2.  The licensing for the *solution* should utilise an "Open" license, meaning that *all* elements of the solution are free to use and unrestricted from modification and distribution.
3.  A corporate investment/viable service provider should be clearly backing the product.  This may sound counter-intuitive, but in order for an open solution to be viable long-term it needs backing from a corporate entity.  Without corporate backing the solution is "unsupported" and there is little evidence to ensure continued development.  You could end up with a stale loaf of bread.

If you use the above as a check-list of open compliance, the case studies identified fall short.  The only exception is OpenEyes which, apart from some UX polish, is delivering on its ambition and following open principles.  I know that Tactix4 and KCH are working towards this, but it shows the difficulty of managing open source.

This is not an attempt to dumb down or belittle the message.  The individuals and organisations above are carrying a vital baton for the health software industry.  One that, executed correctly, will change and empower the NHS with true interoperability and sustained platforms.

What we need are some published standards and approaches that are vital to NHS certification.  All of the solutions above vary drastically in technology, from PHP to Java to MUMPS and C.  This increases the TCO for an organisation, and when the government is trying to reduce costs we should be looking to consolidate the technology in our estates.  If this is unachievable, the government needs to provide these solutions as cloud-hosted services, shifting the TCO to the government.

Open source is an ecosystem that has a management overhead.  All software requires design, build and testing.  Even in an agile world this is difficult to do on a large scale.  To create an open EPR with no controlled assurance is high-risk.  Who owns the legal obligations of an open system with a buggy commit that impacts patient care?  Who fixes these issues and what are the SLAs? Who supports the developer community around these products?  These are all vital questions which need resolving to ensure a mature “Open” delivery.

I believe the government should invest in an open platform which provides the core technology stack for "App" development.  This platform will utilise standard "Open" APIs and architecture for integration and development.  The government can control the standards and approaches to inclusion, offering services to NHS trusts who wish to implement an open offering.  This SaaS federation approach is the "Cloud Model" and one that is untapped to date for clinical systems.

Each NHS trust is different, each has different views, values, choices and demographic issues.  A "one-size-fits-all" approach is not going to work.  This is the challenge in the IT space too.  Every Trust has different skills and expertise.  Some Trusts have no in-house skills whatsoever.  How will we achieve the flexible configuration without holistic involvement and commitment from local Trusts?  This is where NPfIT failed.  There was a lack of realisation that local configuration is critical to successful implementation and takes years to achieve.

It is going to be interesting to see where this goes over the coming years.  New services such as http://developer.nhs.uk/ are appearing which genuinely excite me as an IT professional.  I sincerely hope that over the coming months some decisions are made around how openness will be delivered to the NHS.  Currently the message is unclear, lacks thought, governance and vision.  

As the message evolves and matures I look forward to the outcomes.  A world where free-solutions are available based on modern technologies that are safe, usable, configurable and interoperable.  Until that day comes I will continue to focus on the most important thing in health.  The patient.

Wednesday, 3 April 2013

DevOps – The harmony of Dev and Ops

An enterprise IT operating model is usually controlled and implemented by senior levels of management.  Unfortunately this often causes a disjointed understanding of what happens on the ground, promoting a “healthy” functional divide.  ITIL and other frameworks continue to stifle an organisation's ability to adapt and react to business change, and promote relatively old methods and mechanisms of operation.  It is difficult to mould ITIL into the new world of continuous delivery, automated testing and feature flagging.

Over the last decade Agile software development methodologies have drastically changed the customer experience of the development process.  No longer are customers expected to provide immense levels of detail up front, and no longer is it acceptable for the business to isolate itself from software delivery.  The IT industry is empirically learning from the mistakes of the past, addressing those mistakes, learning, improving and evolving.

It is with this evolution that we now see change continuing into the IT operations world.  In 2008 the concept of “Dev Infrastructure” was conceived, bringing a new set of business opportunities and capabilities to an organisation: a new breed of IT operating principles known as DevOps.

DevOps is one of the most difficult things to articulate; it is (currently) undefined and means different things to different people.  I believe DevOps is best described as an organisational culture: a culture which enables co-location and co-operation between development and operational functions.  It is a fundamental merger of practices and principles between the previously delineated worlds.  No longer is it acceptable to rely on manual deployments, manual fixes and manual intervention; we now have the capability to deliver code-controlled infrastructure, deployment and configuration.  Gone are the days of zero code capabilities in operations, and we should now seek to ensure the repeatability of operational activities…. This is the DevOps way.

All too often we find ourselves in a situation where developers are not part of the operational team; this often leads to large amounts of documentation and a blame culture.  DevOps aims to address this state of affairs by aligning the two disciplines, promoting the “up-skilling” of existing operational staff and ensuring IT services are delivered in a safer, operationally controlled way.  More and more we hear about the concept of “dog-fooding” IT to our internal teams, and this is never more relevant than when “dog-fooding” the failures back to developers.  Discipline and quality are important in software development, and developers need to “up their game” to support operational teams.

Developers also need to understand and appreciate the discipline and rigour that ITIL change and release management processes bring to an organisation.  Some of the most valuable elements of ITIL are the service-management and continual improvement principles, and sustaining any development output requires an understanding of the value of that continual, customer-focused management.  The DevOps culture will never be an easy journey, but it can be embraced in an ITIL organisation by using collaborative and assured change methods.  ITIL organisations will also need to adapt to automated change mechanisms, reducing the manual overheads and the “log and flog” incident culture.  Manual incident resolution should look to harness DevOps to lean out and streamline 1st and 2nd line call-outs.

I believe that with good service management and great development discipline, Dev and Ops can merge and earn the utmost respect as a valued discipline across the two evolving sectors.  DevOps is an exciting movement which is rapidly gaining traction in the IT market, and I hope to see many an exciting project stem from the automation disciplines evolving from the DevOps methods.

As with every IT change over the last 10 years, success generally relies on the people involved.  DevOps is not a silver bullet, but with the right people, process and tools a DevOps culture will ensure a slicker and radically leaner organisation.

For more information on DevOps tools, please see http://www.opscode.com/chef/ or https://puppetlabs.com/.

Thursday, 21 March 2013

Extending ThinkTecture Identity Server to a 3rd Party Identity Source

This is the 3rd post in a 3-part series on custom claims identity management in the enterprise.  Firstly, I would like to apologise to those who have been waiting for this post.

Thinktecture IdentityServer is a custom identity provider and a great choice for enabling claims authentication in existing applications.  This post will show you how to extend IdentityServer with your own identity store.

In order to follow this post you will need to download the IdentityServer source code, which can be found on GitHub: https://github.com/thinktecture/Thinktecture.IdentityServer.v2

Once downloaded, the easiest way to extend ThinkTecture is to open the solution in Visual Studio; this will give you full debugging support while you make the modifications.

[Screenshot: the IdentityServer solution open in Visual Studio, with the new Gstt.Icm.Repository class library added]

ThinkTecture is a well-organised and easy to follow solution making good use of SOLID principles.  This makes the solution very easy to read, extend and adapt.

One of the common requirements for SSO extensibility is providing custom claims and access control mechanisms.  There are many ways to achieve this, but I would recommend building on the existing core libraries rather than a pure pass-through mechanism.  This enables you to continue to use the membership provider that ships with ThinkTecture while providing additional access control from your own solution.

From the screenshot above you will see I have created a new assembly (Gstt.Icm.Repository).  This will be my extension to IdentityServer.

IdentityServer makes use of two primary interfaces for user and claim management: IUserRepository and IClaimsRepository.  These types can be found in the Thinktecture.IdentityServer.Core assembly.  As we are extending the existing user and claims repositories, you will need the following assembly references.

[Screenshot: the required assembly references, including Thinktecture.IdentityServer.Core]

With these references in place you are ready to set up the environment for this tutorial.  To follow it we will need a sample database, which will represent our 3rd party system.  For this sample I am using SQL Server 2012 and the sample data below.

Database Code
CREATE DATABASE Users
GO

INSERT INTO dbo.[User] (Username, Password)
VALUES
    ('Gary',  HASHBYTES('SHA2_256', N'Test')),
    ('Dominic',  HASHBYTES('SHA2_256', N'Test2'))

INSERT INTO Roles (RoleName)
VALUES ('Admin'), ('Limited'), ('Profit and Loss')

INSERT INTO UserRoles (UserId, RoleId)
SELECT u.Id, r.Id FROM [User] u, Roles r WHERE u.Username = 'Gary' AND r.RoleName = 'Admin'
UNION
SELECT u.Id, r.Id FROM [User] u, Roles r WHERE u.Username = 'Dominic' AND r.RoleName = 'Limited'

Once the database has been configured you should see the SQL schema below:

[Screenshot: database diagram showing the User, Roles and UserRoles tables]
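
A minimal sketch of the three tables behind that schema, inferred from the INSERT statements above and the EF queries used later in this post (the exact column types and constraints are assumptions):

USE Users
GO

CREATE TABLE dbo.[User]
(
    Id INT IDENTITY(1,1) PRIMARY KEY,
    Username NVARCHAR(256) NOT NULL,
    Password VARBINARY(32) NOT NULL -- SHA2_256 hash produced by HASHBYTES
)

CREATE TABLE dbo.Roles
(
    Id INT IDENTITY(1,1) PRIMARY KEY,
    RoleName NVARCHAR(256) NOT NULL
)

CREATE TABLE dbo.UserRoles
(
    UserId INT NOT NULL REFERENCES dbo.[User](Id),
    RoleId INT NOT NULL REFERENCES dbo.Roles(Id),
    PRIMARY KEY (UserId, RoleId)
)
GO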

With the above complete, we can now create our data access components.  To do this I have created a Domain folder in the class library and generated a new Entity Framework edmx model.  This model is a representation of the schema above, created with EF 5.0 Database First.

With the schema generated we will need to include the connection string generated by EF.  IdentityServer stores all configuration in the configuration folder under the web application.  Open the connectionString.config settings and add the new connection string for the SQL database created above.  Please remember that, as a web application, the identity of the application pool should have the required rights to the SQL database; alternatively, configure SQL authentication in the configuration file and encrypt the config.

Once complete you should have a new entry in the connectionString.config.
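
A minimal sketch of what that entry looks like (the metadata resource names, server and database details are illustrative; EF generates the exact string when the model is added):

<connectionStrings>
  <add name="UsersEntities"
       connectionString="metadata=res://*/Domain.Users.csdl|res://*/Domain.Users.ssdl|res://*/Domain.Users.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=.;initial catalog=Users;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>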

Moving on, you will need to create 2 new class files in your extension project.  These classes will implement IUserRepository and IClaimsRepository as above.  To make life easier, copy the existing code from the ThinkTecture core repositories into these new files and rename the files to suit your identity store.

This will leave you with a project that has the skeleton from IdentityServer.  What we need now is an additional type in our new library which we will use as a repository to talk to *our* SQL database.  Below is a sample of the interface and implementation for the new database module.

IIcmUserRepository
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Gstt.Icm.Repository.Interfaces
{
    public interface IIcmUserRepository
    {
        IEnumerable<string> GetRoles();

        bool ValidateUser(string username, string password);

        IEnumerable<string> GetUserRoles(string username);
    }
}
IcmUserRepository
using Gstt.Icm.Repository.Interfaces;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;
using System.Web.Security;

namespace Gstt.Icm.Repository
{
    public class IcmUserRepository : IIcmUserRepository
    {
        public IEnumerable<string> GetRoles()
        {
            using (var context = new Domain.UsersEntities())
            {
                return context.Roles.Select(p => p.RoleName).ToList();
            }
        }

        public bool ValidateUser(string username, string password)
        {
            using (var context = new Domain.UsersEntities())
            {
                var user = context.Users.FirstOrDefault(p => p.Username == username);
                if (user == null)
                {
                    return false;
                }

                var sha256 = SHA256.Create();

                using (var ms = new MemoryStream(System.Text.Encoding.Unicode.GetBytes(password)))
                {
                    var bytes = sha256.ComputeHash(ms);
                    if (user.Password.SequenceEqual(bytes))
                    {
                        return true;
                    }
                }
            }

            return false;
        }

        public IEnumerable<string> GetUserRoles(string username)
        {
            using (var context = new Domain.UsersEntities())
            {
                return (from p in context.Users
                        join r in context.UserRoles on p.Id equals r.UserId
                        join x in context.Roles on r.RoleId equals x.Id
                        where p.Username == username
                        select x.RoleName).ToList();
            }
        }
    }
}

With the new types in the project you should have a class library project which looks something like below.

[Screenshot: the Gstt.Icm.Repository project structure in Solution Explorer]

The eager eyes out there will notice that I also have additional types in my project: GsttConfigurationSection, IcmExportProvider, IcmRepository and IcmClaimsRepository.  IcmRepository and IcmClaimsRepository are my custom implementations of IUserRepository and IClaimsRepository.

ThinkTecture makes use of the Managed Extensibility Framework (MEF) to enable runtime initialisation of components and dependency injection.  This provides the software with the ability to swap out components at runtime (such as the repositories we are building).  To keep our new assembly in alignment with the conventions provided in ThinkTecture we will also make use of MEF; this means creating our own configuration section and MEF export provider.

Below is a sample of the config section and the IcmExportProvider.  As you can see from the code below, we are creating a new config section called gstt.repositories with an attribute of icmUserManagement.  This will provide MEF with the catalogue type entries for runtime type configuration.

Config Section
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Gstt.Icm.Repository.Config
{
    /// <summary>
    /// The IcmConfigurationSection Configuration Section.
    /// </summary>
    public partial class GsttConfigurationSection : global::System.Configuration.ConfigurationSection
    {
        public const string SectionName = "gstt.repositories";

        /// <summary>
        /// The XML name of the <see cref="ConfigurationProvider"/> property.
        /// </summary>
        internal const global::System.String IcmUserRepositoryName = "icmUserManagement";

        /// <summary>
        /// Gets or sets type of the class that provides custom user validation
        /// </summary>
        [global::System.Configuration.ConfigurationProperty(IcmUserRepositoryName, IsRequired = false, IsKey = false, IsDefaultCollection = false, DefaultValue = "Gstt.Icm.Repository.IcmUserRepository, Gstt.Icm.Repository")]
        public global::System.String IcmUserRepository
        {
            get
            {
                return (global::System.String)base[IcmUserRepositoryName];
            }
            set
            {
                base[IcmUserRepositoryName] = value;
            }
        }
    }
}

Out of the box, ThinkTecture comes with a RepositoryExportProvider type providing the catalogue services for the core components.  To keep consistent with ThinkTecture, we create the following MEF export catalogue.  This is a direct copy of the existing code but with my custom type registered; this enables you to swap out my component and hook into your own identity store if you wish to use my extension approach.

MEF Export Provider
using Gstt.Icm.Repository.Config;
using Gstt.Icm.Repository.Interfaces;
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition.Hosting;
using System.ComponentModel.Composition.Primitives;
using System.Configuration;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Thinktecture.IdentityServer.Configuration;

namespace Gstt.Icm.Repository
{
    public class IcmExportProvider : ExportProvider
    {
        private Dictionary<string, string> _mappings;

        public IcmExportProvider()
        {
            var section = ConfigurationManager.GetSection(GsttConfigurationSection.SectionName) as GsttConfigurationSection;

            _mappings = new Dictionary<string, string>
            {
                { typeof(IIcmUserRepository).FullName, section.IcmUserRepository }
            };
        }

        protected override IEnumerable<Export> GetExportsCore(ImportDefinition definition, AtomicComposition atomicComposition)
        {
            var exports = new List<Export>();

            string implementingType;
            if (_mappings.TryGetValue(definition.ContractName, out implementingType))
            {
                var t = Type.GetType(implementingType);
                if (t == null)
                {
                    throw new InvalidOperationException("Type not found for interface: " + definition.ContractName);
                }

                var instance = t.GetConstructor(Type.EmptyTypes).Invoke(null);
                var exportDefintion = new ExportDefinition(definition.ContractName, new Dictionary<string, object>());
                var toAdd = new Export(exportDefintion, () => instance);

                exports.Add(toAdd);
            }

            return exports;
        }
    }
}

As you can see from the above the ExportProvider is used to provide a dynamic type registration from our new config section.

With the config section complete and the MEF export catalogue configured, we can now modify IdentityServer to include the additional files and configuration.  To start this process we need to modify the Global.asax.cs file and register the new export provider; this is achieved via the CompositionContainer constructor in the SetupCompositionContainer method:

Global.asax.cs
public class MvcApplication : System.Web.HttpApplication
{
    [Import]
    public IConfigurationRepository ConfigurationRepository { get; set; }

    [Import]
    public IUserRepository UserRepository { get; set; }

    [Import]
    public IRelyingPartyRepository RelyingPartyRepository { get; set; }

    protected void Application_Start()
    {
        // create an empty config database if it does not exist
        Database.SetInitializer(new ConfigurationDatabaseInitializer());

        // set the anti-CSRF claim to name (that's a unique claim in our system)
        AntiForgeryConfig.UniqueClaimTypeIdentifier = ClaimTypes.Name;

        // setup MEF
        SetupCompositionContainer();
        Container.Current.SatisfyImportsOnce(this);

        AreaRegistration.RegisterAllAreas();

        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes, ConfigurationRepository, UserRepository);
        ProtocolConfig.RegisterProtocols(GlobalConfiguration.Configuration, RouteTable.Routes, ConfigurationRepository, UserRepository, RelyingPartyRepository);
        BundleConfig.RegisterBundles(BundleTable.Bundles);
    }

    private void SetupCompositionContainer()
    {
        var container = new CompositionContainer(new RepositoryExportProvider(), new IcmExportProvider());

        Container.Current = container;
    }
}

Now that the global configuration has been set, you will be able to use the MEF [Import] attribute to include your types dynamically via configuration at runtime.  To complete this part of the tutorial you will need to include the config file; this is done by modifying the web.config to include the new config section.  The section below shows the GSTT configuration section registration XML element.

Web.Config
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="system.identityModel" type="System.IdentityModel.Configuration.SystemIdentityModelSection, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
    <section name="system.identityModel.services" type="System.IdentityModel.Services.Configuration.SystemIdentityModelServicesSection, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />

    <section name="thinktecture.identityServer.repositories" type="Thinktecture.IdentityServer.Configuration.RepositoryConfigurationSection, Thinktecture.IdentityServer.Core" />
    <section name="gstt.repositories" type="Gstt.Icm.Repository.Config.GsttConfigurationSection, Gstt.Icm.Repository" />

    <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
    <!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
  </configSections>
  <!-- ............................................... -->
  <!-- start of identity server relevant configuration -->
  <thinktecture.identityServer.repositories configSource="configuration\repositories.config" />
  <gstt.repositories configSource="configuration\gsttCustom.config" />
gsttCustom.config
<gstt.repositories icmUserManagement="Gstt.Icm.Repository.IcmUserRepository, Gstt.Icm.Repository" />

Configuration sections and types are registered in the form Gstt.Icm.Repository.IcmUserRepository, Gstt.Icm.Repository (fully qualified type name, assembly name).

Now we are ready to use all the types and modifications we have made and change the ThinkTecture configuration.  As we are only extending the role and login systems, I add a domain-level qualifier to the ThinkTecture UserRepository and ClaimsRepository.  The code snippet below shows my custom implementation of the user repository service; it utilises a domain qualifier of TT\* to identify ThinkTecture users and so remains backwards compatible.

Code Snippet
public class IcmRepository : IUserRepository
{
    [Import]
    public IClientCertificatesRepository Repository { get; set; }

    [Import]
    public IIcmUserRepository IcmUserRepository { get; set; }

    public IcmRepository()
    {
        Container.Current.SatisfyImportsOnce(this);
    }

    public bool ValidateUser(string userName, string password)
    {
        // Convention that ThinkTecture Users always begin with TT
        if (userName.StartsWith("TT\\") && userName.Length > 3)
        {
            userName = userName.Substring(3);
            return Membership.ValidateUser(userName, password);
        }
        else
        {
            return IcmUserRepository.ValidateUser(userName, password);
        }
    }

If you log in to ThinkTecture using TT\Admin we direct to the ASP.NET membership system; otherwise we use the MEF-imported repository to check against our custom SQL store.

The same logic applies to the claims repository, where we swap in the roles from the SQL database:

Code Snippet
public class IcmClaimsRepository : IClaimsRepository
{
    private const string ProfileClaimPrefix = "http://identityserver.thinktecture.com/claims/profileclaims/";

    [Import]
    public IIcmUserRepository UserRepository { get; set; }

    public IEnumerable<Claim> GetClaims(ClaimsPrincipal principal, RequestDetails requestDetails)
    {
        var userName = principal.Identity.Name;
        var claims = new List<Claim>(from c in principal.Claims select c);

        if (!userName.StartsWith("TT\\"))
        {
            UserRepository.GetUserRoles(userName).ToList().ForEach(p => claims.Add(new Claim(ClaimTypes.Role, p)));
            return claims;
        }
With all the above complete and our new database referenced, the final step is to modify configuration/repositories.config in the web project, registering our new user and claims repositories.

repositories.config
<thinktecture.identityServer.repositories tokenServiceConfiguration="Thinktecture.IdentityServer.Repositories.Sql.ConfigurationRepository, Thinktecture.IdentityServer.Core.Repositories"
                                          userManagement="Thinktecture.IdentityServer.Repositories.ProviderUserManagementRepository, Thinktecture.IdentityServer.Core.Repositories"
                                          userValidation="Gstt.Icm.Repository.IcmRepository, Gstt.Icm.Repository"
                                          claimsRepository="Gstt.Icm.Repository.IcmClaimsRepository, Gstt.Icm.Repository"
                                          relyingParties="Thinktecture.IdentityServer.Repositories.Sql.RelyingPartyRepository, Thinktecture.IdentityServer.Core.Repositories"
                                          claimsTransformationRules="Thinktecture.IdentityServer.Repositories.PassThruTransformationRuleRepository, Thinktecture.IdentityServer.Core.Repositories"
                                          clientCertificates="Thinktecture.IdentityServer.Repositories.Sql.ClientCertificatesRepository, Thinktecture.IdentityServer.Core.Repositories"
                                          clientsRepository="Thinktecture.IdentityServer.Repositories.Sql.ClientsRepository, Thinktecture.IdentityServer.Core.Repositories"
                                          identityProvider="Thinktecture.IdentityServer.Repositories.Sql.IdentityProviderRepository, Thinktecture.IdentityServer.Core.Repositories"
                                          delegation="Thinktecture.IdentityServer.Repositories.Sql.DelegationRepository, Thinktecture.IdentityServer.Core.Repositories"
                                          caching="Thinktecture.IdentityServer.Repositories.MemoryCacheRepository, Thinktecture.IdentityServer.Core.Repositories" />

One thing to remember: to get the best developer experience, bind the VS project to the IIS root, and on the first run make sure you visit the /InitialConfiguration controller.