
Minima 3.0 Released

Every few months I like to release a new open-source project or at least a major revision of an existing one. Today I would like to introduce Minima 3.0.  This is a completely new Minima Blog Engine that is built on WCF, that is factored into various controls, and that introduces a completely new model for ASP.NET development.

As a Training Tool

Normally I leave this for last, but this time I would like to start off by mentioning how Minima 3.0 may act as a training tool. This will give you a good idea of Minima 3.0's architecture.  Here was the common "As a Training Tool" description for Minima 2.0 (a.k.a. Minima .NET 3.5):

Minima 2.0 could be used as a training tool for ASP.NET, CSS theming, proper use of global.asax, integrating with Windows Live Writer, framework design guidelines, HttpModules, HttpHandlers, HttpHandlerFactories, LINQ, type organization, proper-SQL Server table design and naming scheme, XML serialization, and XML-RPC.NET usage.

Here's the new "As a Training Tool" description for Minima 3.0:

Minima 3.0 can be used as a training tool for the same concepts and technologies as Minima 2.0 as well as SOA principles, custom WCF service host factories, custom WCF behaviors, WCF username authentication, custom WCF declarative operation-level security, WCF exception shielding and fault management, custom WCF message header usage, WCF type organization, WCF-LINQ DTO transformation, enhanced WCF clients, using WCF sessions for Captcha verification, SQL Server 2005 schema security, XmlWriter usage, ASP.NET programmatic user control usage, custom configuration sections, WCF JavaScript clients, ASP.NET control JavaScript registration, JavaScript namespaces, WCF JSON services, WCF RSS services, ASP.NET templated databinding, and ASP.NET control componentization.

As you can see, it's an entirely new beast. As you should also be able to guess, I'm not going to use Minima simply for entry-level .NET training anymore. With this new feature set, it's going to be my primary tool for intermediate and expert-level .NET training.  In the future, I'll post various blog entries giving lessons on various parts of Minima.

New Features

Since it's nowhere near the purpose of Minima, in no version have I ever claimed to have an extraordinary feature set. In fact, the actual end-user feature set of Minima 3.0 is fundamentally the same as Minima 2.0, except where features are naturally added because of the new architecture.  For example, it's now naturally a multi-blog environment, with each blog allowed to have its own blog discovery data, Google sitemap, and other things.

Architecture

There are really three major "pillars" to the architecture of Minima 3.0: WCF, ASP.NET, and my Themelia Foundation (pronounced TH[as in "Thistle"]-MEH-LEE-UH; Koine Greek for "foundations"). It will take more than one blog entry to cover every aspect of Minima's architecture (see my lessons on Themelia), but for now I'll give a very brief overview.  I will explain the ASP.NET and Themelia pillars together.

WCF Architecture

The backend of Minima is WCF and is split up into various services to factor out some of the operations that occur within Minima. Of course, not every single possible operation is included as that would violate the "specificness" of SOA, but the core operations are intact.

The entire Minima security structure is now in WCF using a custom declarative operation-level security implementation.  To set security in Minima, all you have to do on the service side is apply the MinimaBlogSecurityBehavior attribute to an operation and you're all set.  Here's an example:

```csharp
[MinimaBlogSecurityBehavior(PermissionRequired = BlogPermission.Retrieve)]
```
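The attribute's source isn't shown in this post, so here is only a rough sketch of its declarative shape (the enum values, the `GetRequiredPermission` helper, and the sample method are assumptions for illustration; the real Minima implementation also plugs into WCF's operation behavior pipeline to actually enforce the permission):

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical sketch: a permission enum and an attribute carrying
// the required permission as declarative metadata on an operation.
public enum BlogPermission { Retrieve, Create, Update, Delete }

[AttributeUsage(AttributeTargets.Method)]
public sealed class MinimaBlogSecurityBehaviorAttribute : Attribute
{
    public BlogPermission PermissionRequired { get; set; }
}

public static class BlogService
{
    [MinimaBlogSecurityBehavior(PermissionRequired = BlogPermission.Retrieve)]
    public static String GetBlogTitle(String blogGuid)
    {
        return "Sample Blog";
    }

    // Reads the declared permission back via reflection, which is how
    // an enforcement layer could discover what an operation requires.
    public static BlogPermission? GetRequiredPermission(String methodName)
    {
        MethodInfo method = typeof(BlogService).GetMethod(methodName);
        MinimaBlogSecurityBehaviorAttribute attribute = method
            .GetCustomAttributes(typeof(MinimaBlogSecurityBehaviorAttribute), false)
            .Cast<MinimaBlogSecurityBehaviorAttribute>()
            .FirstOrDefault();
        return attribute != null
            ? attribute.PermissionRequired
            : (BlogPermission?)null;
    }

    public static void Main()
    {
        Console.WriteLine(GetRequiredPermission("GetBlogTitle"));
    }
}
```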

Architectural Overview: Using LINQ in WCF

Today I would like to give an architectural overview of my usage of LINQ.  This may actually become the first in a series of architectural discussions on various .NET and AJAX technologies.  In this discussion, I'm going to be talking about the architecture of the next revision of my training blog engine, Minima.  Since the core point of any system is that which goes into and comes out of the system, the goal of this commentary will be to get to the point where LINQ projects data into WCF DTOs.  Let me start by explaining how I organize my types.  Some of you will find this boring, but it's amazing how many times I get questions on this topic.  For good reason too!  These questions show that a person's priorities are in the right place as your type, namespace, and file organization is critical to the manageability and architectural clarity of your system.

However, before we get started, let me state briefly that, as I've stated in my post entitled SQL Server Database Model Optimization for Developers, when you design your database structure you should design it with your O/R mapper in mind.  If you don't, then you will probably fall into all kinds of problems, as my post describes.  This is incredibly important; however, if you keep to normal everyday normalization procedures, you are probably doing OK for the most part anyway.  Since I've written about that before, there's no reason for me to go into detail here.  Just know that, if your database design sucks, your application will probably suck too.  Don't build your house on the sand.

In terms of LINQ, I actually use the VS2008 "LINQ to SQL Classes" template to create the LINQ information.  In most every other area of technology, it's a good practice to avoid wizards and templates like the plague, but when it comes to O/R mapping, you need to be using an automated tool.  If your O/R mapper requires you to do any work (...NHibernate...*cough*cough*), then you can't afford to work with it.  You need to be focusing on the business logic of your system, not playing around with mechanical nonsense.  As I've said in other contexts, stored procedures and ad hoc SQL are forms of unmanaged code: when you are managing the mechanics of a system yourself, it's, by definition, unmanaged.  Stored procedures and ad hoc SQL are to LLBLGen/LINQ as ASP/PHP is to ASP.NET and as C++ is to .NET languages.  In the context of database access, using managed code is the entire point of an O/R mapper.  Furthermore, if the O/R mapping software you are using requires you to write up templates or do manual mapping, that's obviously not completely managed code.

Now, when I create LINQ classes, I will create one for each "architectural domain" of the system that I deem necessary.  For example, in a future release of Minima, there will be a LINQ class to handle my HttpHandler and UrlRewriting subsystem and another LINQ class to handle blog interaction.  There needs to be this level of flexibility, or my WCF services will know too much about my web environment and my web site (a WCF client) will then have direct access to the data which the WCF service is intended to abstract.  Therefore, there will be a LINQ class for web-site-specific mechanics and another LINQ class for service-specific mechanics.  Also, when I create the class for a particular domain, I will give it a simple name with the suffix of LINQ.  So, my Minima core LINQ class is CoreLINQ.cs and my Minima service LINQ class is ServiceLINQ.cs.  Simple.

Upon loading the LINQ designer, either after or before I drop in the specific tables required for that particular architectural domain, I'll set my context namespace to <SimpleName>.Data.Context and my entity namespace to <SimpleName>.Data.Entity.  For example, in the Minima case, I'll then have Core.Data.Context and Core.Data.Entity.  One may argue that there's nothing really going on in Core.Data.Context, to which I must respond: yeah, well, there's already a lot going on in Core.Data (other data-related non-LINQ logic I would create) and Core.Data.Entity.  The reason I say "after or before I drop in the specific tables" is to emphasize the fact that you can change this at a later point.  It's important to keep in mind that LINQ doesn't automatically update its schema with the schema from your database.  LLBLGen Pro does have this feature built in, and it does the refreshing in a masterful way, but currently LINQ doesn't have this ability.  Therefore, to do a refresh, you need to do a "CTRL-A, Delete" to delete all the tables, do a refresh in Server Explorer, and then just re-add them.  It's not much work.

Now, moving on to using LINQ.  When I'm working with both LINQ entities (or LLBLGen entities or whatever) and WCF DTOs in my WCF service, I do not bring in the LINQ entity namespace.  The ability to import types from another namespace is one of the most powerful yet under-appreciated features in all of .NET (um... JavaScript needs it!); however, when you have a Person entity in LINQ and a Person DTO, things can get confusing fast.  Therefore, to avoid all potential conflicts, I leave the import out and, instead, keep a series of type aliases at the top of my service classes just under the namespace imports.  Notice also the visual signal in the BlogEntryXAuthor table name.  This tells the developer that this is a many-to-many linking table.  In this case it's in the database schema, but if it weren't, I could easily alias it as BlogEntryXAuthorLINQ without affecting anyone else.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
//+
using DataContext = Minima.Service.Data.Context.MinimaServiceLINQDataContext;
using AuthorLINQ = Minima.Service.Data.Entity.Author;
using CommentLINQ = Minima.Service.Data.Entity.Comment;
using BlogLINQ = Minima.Service.Data.Entity.Blog;
using BlogEntryLINQ = Minima.Service.Data.Entity.BlogEntry;
using BlogEntryUrlMappingLINQ = Minima.Service.Data.Entity.BlogEntryUrlMapping;
using BlogEntryXAuthorLINQ = Minima.Service.Data.Entity.BlogEntryAuthor;
using LabelLINQ = Minima.Service.Data.Entity.Label;
using LabelXBlogEntryLINQ = Minima.Service.Data.Entity.LabelBlogEntry;
using UserRightLINQ = Minima.Service.Data.Entity.UserRight;
//+

Next, since we are in the context of WCF, we need to discuss validation of incoming information.  The following method is an implementation of a WCF service operation.  As you can see, when a user sends in an e-mail address, there is an immediate validation on the e-mail address that retrieves the author's LINQ entity.  This is why the validation isn't being done in a WCF behavior (even though there are tricks to get data from a behavior too!)  You may also note my camelCasing of instances of LINQ entities.  The purpose of this is to provide an incredibly obvious signal to the brain that this is an object, not simply a type (...as is the point of almost all the Framework Design Guidelines-- buy the book!; 2nd edition due Sept 29 '08).

//- @GetBlogMetaData -//
[MinimaBlogSecurityBehavior(PermissionRequired = BlogPermission.Retrieve)]
public BlogMetaData GetBlogMetaData(String blogGuid)
{
    using (DataContext db = new DataContext(ServiceConfiguration.ConnectionString))
    {
        //+ ensure blog exists
        BlogLINQ blogLinq;
        Validator.EnsureBlogExists(blogGuid, out blogLinq, db);
        //+
        return new BlogMetaData
        {
            Description = blogLinq.BlogDescription,
            FeedTitle = blogLinq.BlogFeedTitle,
            FeedUri = new Uri(blogLinq.BlogFeedUrl),
            Guid = blogLinq.BlogGuid,
            Title = blogLinq.BlogTitle,
            Uri = new Uri(blogLinq.BlogPrimaryUrl),
            CreateDateTime = blogLinq.BlogCreateDate,
            LabelList = new List<Label>(
                blogLinq.Labels.Select(p => new Label
                {
                    Guid = p.LabelGuid,
                    FriendlyTitle = p.LabelFriendlyTitle,
                    Title = p.LabelTitle
                })
            )
        };
    }
}

It would probably be a good idea at this point to step into the Validator class to see what's really going on here.  As you can see in the following class, I have two methods (in reality there are dozens!) and most of it should be obvious.  The validation is obviously in the second method; however, it's the first one that's being directly called.  Notice two things about this: First, notice that I'm passing in my DataContext.  This is to completely obliterate any possibility of overlapping DataContexts and, therefore, any strange locking issues.  Second, notice that I'm pre-registering my messages in a strongly typed Message class (notice also that the members of Message are not marked static-- the magic of const.)  This last piece could easily be done in a way that provides for nice localization.

Now, moving on to the actual validation.  Unless I'm desperately trying to inline some code, I normally declare the LINQ criteria prior to the actual LINQ statement.  Of course, this is exactly what the Func<T, TResult> delegate is doing.  Notice also that I try to bring the semantics of the criteria into the name of the object.  This really helps in making many of your LINQ statements read more naturally: "db.Person.Where(hasEmployees)".

namespace Minima.Service.Validation
{
    internal static class Validator
    {
        //- ~Message -//
        internal class Message
        {
            public const String InvalidEmail = "Invalid author Email";
        }

        //- ~EnsureAuthorExists -//
        internal static void EnsureAuthorExists(String authorEmail, out AuthorLINQ authorLinq, DataContext db)
        {
            EnsureAuthorExists(authorEmail, out authorLinq, Message.InvalidEmail, db);
        }

        //- ~EnsureAuthorExists -//
        internal static void EnsureAuthorExists(String authorEmail, out AuthorLINQ authorLinq, String message, DataContext db)
        {
            Func<AuthorLINQ, Boolean> authorExists = x => x.AuthorEmail == authorEmail;
            authorLinq = db.Authors.SingleOrDefault(authorExists);
            if (authorLinq == null)
            {
                FaultThrower.Throw<ArgumentException>(new ArgumentException(message));
            }
        }
    }
}
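The "declare the criteria first, name it for its semantics" convention above can be shown outside of WCF with a small LINQ-to-Objects sketch (the Person type and the data here are hypothetical, purely for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Person
{
    public String Name { get; set; }
    public Int32 EmployeeCount { get; set; }
}

public static class CriteriaNamingDemo
{
    public static List<Person> FindManagers(List<Person> personList)
    {
        // Declare the criteria first, named for its semantics...
        Func<Person, Boolean> hasEmployees = p => p.EmployeeCount > 0;
        // ...so the query itself reads naturally.
        return personList.Where(hasEmployees).ToList();
    }

    public static void Main()
    {
        List<Person> personList = new List<Person>
        {
            new Person { Name = "Alice", EmployeeCount = 3 },
            new Person { Name = "Bob", EmployeeCount = 0 }
        };
        foreach (Person person in FindManagers(personList))
        {
            Console.WriteLine(person.Name);
        }
    }
}
```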

In the actual query itself, you can see that the semantics of the method are that a maximum of one author should be returned.  Therefore, I'm able to use the Single or SingleOrDefault methods.  Note that if you use these and more than one entity matches, an exception will be thrown, as Single and SingleOrDefault only allow what their names imply.  In this case, AuthorEmail is the primary key in the database and, by definition, there can be only one (at this point I'm sure about 30% of you are doing Sean Connery impressions).  The difference between Single and SingleOrDefault is simple: when the criteria is not met, Single throws an exception and SingleOrDefault returns the type's default value.  The default of a type is that which the C# "default" keyword will return.  In other words, a reference type will be null and a value type will be its zeroed value (e.g. 0 for Int32).  In this case, I'm dealing with my AuthorLINQ class, which is obviously a reference type, and therefore I need to check it for null.  If it's null, then that author doesn't exist and I need to throw a fault (which is what my custom FaultThrower class does).  What's a fault?  That's a topic for a different post.
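Away from the database, the same Single/SingleOrDefault semantics can be demonstrated with plain LINQ to Objects (hypothetical data):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SingleDemo
{
    // SingleOrDefault: returns null (default of a reference type)
    // when no element matches the criteria.
    public static String FindEmail(List<String> emailList, String target)
    {
        return emailList.SingleOrDefault(e => e == target);
    }

    public static void Main()
    {
        List<String> emailList = new List<String> { "a@example.com", "b@example.com" };

        Console.WriteLine(FindEmail(emailList, "a@example.com"));          // a@example.com
        Console.WriteLine(FindEmail(emailList, "c@example.com") == null);  // True

        // Single with the same missing criteria throws instead of returning null.
        try
        {
            emailList.Single(e => e == "c@example.com");
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("Single threw on no match");
        }

        // Both methods throw when more than one element matches.
        try
        {
            emailList.SingleOrDefault(e => e.EndsWith("@example.com"));
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("more than one match");
        }
    }
}
```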

As you can see from the method signatures, not only is the author's e-mail address being validated, but the LINQ entity is also being returned to the caller via an out parameter.  Once I have this authorLinq entity, I can proceed to use its primary key (AuthorId) in various other LINQ queries.  It's critical to remember that you always want to make sure that you are only using validated information.  If you aren't, then you have no idea what will happen to your system.  Therefore, you should ignore all IDs that are sent into a WCF service operation and use only the validated ones.  A thorough discussion of this topic is left for a future discussion.

Now we are finally at the place where LINQ to WCF projection happens.  For clarity, here it is again (no one likes to scroll back and forth):

return new BlogMetaData
{
    Description = blogLinq.BlogDescription,
    FeedTitle = blogLinq.BlogFeedTitle,
    FeedUri = new Uri(blogLinq.BlogFeedUrl),
    Guid = blogLinq.BlogGuid,
    Title = blogLinq.BlogTitle,
    Uri = new Uri(blogLinq.BlogPrimaryUrl),
    CreateDateTime = blogLinq.BlogCreateDate,
    LabelList = new List<Label>(
        blogLinq.Labels.Select(p => new Label
        {
            Guid = p.LabelGuid,
            FriendlyTitle = p.LabelFriendlyTitle,
            Title = p.LabelTitle
        })
    )
};

The basic flow of this is as follows: in DataContext db, in the Blogs table, pull the sub-set where PersonId == AuthorId, then have Select transform that data into a new type.  The DTO projection is obviously happening in the Select method.  This method is akin to a SELECT in SQL.  My point in saying that is to make sure that you are aware that SELECT is not a filter; that's what Where does.  After execution of the Where method, as well as after execution of the Select method, you have an IQueryable<Blog> object, which contains information about the query but no actual data yet.  LINQ defers execution of SQL statements until they are actually used.  In this case, the data is actually being used when the ToList method is called.  This, of course, returns a List<Blog>, which is exactly what this service operation should do.  What's really nice about this is that WCF loves List<T>.  It's not a big fan of Collection<T>, but List<T> is its friend.  Over the wire it's an Array, and when it's being used by a WCF client, it's also a List<T> object.
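Deferred execution is easiest to see outside of SQL; here is a LINQ to Objects analogue of the same idea (hypothetical data, but the mechanics are identical: nothing runs until the data is actually used):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DeferredDemo
{
    public static List<Int32> RunQuery()
    {
        List<Int32> numberList = new List<Int32> { 1, 2, 3 };

        // Where and Select only build up the query; no data is pulled yet.
        IEnumerable<Int32> query = numberList.Where(n => n > 1).Select(n => n * 10);

        // The source can still change before execution...
        numberList.Add(4);

        // ...because the query runs only when ToList (or foreach)
        // actually uses the data.
        return query.ToList();
    }

    public static void Main()
    {
        foreach (Int32 n in RunQuery())
        {
            Console.WriteLine(n);  // 20, 30, 40 -- the 4 added later is included
        }
    }
}
```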

In closing, I should mention something that I know people are going to ask me about: to project from WCF DTO to LINQ, you do the exact same thing.  LINQ isn't a database-specific technology.  You can LINQ between all kinds of things.  Though I use LINQ for my data access in many projects, most of my LINQ usage is actually for searching lists, combining two lists together, or modifying the data that gets bound to the interface.  It's incredibly powerful.

Moving to a non-Minima example: if you needed to have a person's full name in a WPF ListBox and the only name-specific LINQ properties you have are FirstName and LastName, instead of doing tricks in your ItemTemplate, you can just have your ItemsSource use LINQ to sew FirstName and LastName together.

lstPerson.ItemsSource = personList.Select(p => new
{
    FullName = p.FirstName + " " + p.LastName,
    p.PostalCode,
    Country = p.Country ?? String.Empty
});

The really sweet part about this is the fact that LINQ entities implement the INotifyPropertyChanged interface, so when doing WPF data binding, WPF will automatically update the ListBox when the data changes!  Of course, this doesn't help you if you are doing a serious SOA system.  Therefore, my DTOs normally implement INotifyPropertyChanged as well.  This is not a WPF-specific interface (it lives in System.ComponentModel) and therefore does not tie the business object to any presentation.
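As a sketch of that last point, a hand-written DTO can raise change notifications without referencing any presentation assembly (the BlogEntryDto name is hypothetical; only System.ComponentModel is involved):

```csharp
using System;
using System.ComponentModel;

// Hypothetical DTO implementing INotifyPropertyChanged so WPF (or any
// other binding consumer) can observe it, with no WPF reference.
public class BlogEntryDto : INotifyPropertyChanged
{
    private String title;

    public event PropertyChangedEventHandler PropertyChanged;

    public String Title
    {
        get { return title; }
        set
        {
            if (title != value)
            {
                title = value;
                OnPropertyChanged("Title");
            }
        }
    }

    private void OnPropertyChanged(String propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

public static class NotifyDemo
{
    public static void Main()
    {
        BlogEntryDto dto = new BlogEntryDto();
        dto.PropertyChanged += (s, e) => Console.WriteLine("Changed: " + e.PropertyName);
        dto.Title = "Minima 3.0 Released";  // prints: Changed: Title
    }
}
```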

That should show you a bit more of how LINQ can work with all kinds of stuff.  Therefore, it shouldn't be hard to figure out how to project from a WCF DTO to LINQ. You could literally copy/paste the LINQ -> DTO code and just switch around a few names.
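For instance, with hypothetical Label types (stand-ins for a LINQ entity and its WCF DTO; the real Minima types aren't shown here), the two directions are nearly mirror images:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-ins for a LINQ entity and its WCF DTO.
public class LabelLinq
{
    public String LabelGuid { get; set; }
    public String LabelTitle { get; set; }
}

public class LabelDto
{
    public String Guid { get; set; }
    public String Title { get; set; }
}

public static class ProjectionDemo
{
    // Entity -> DTO projection.
    public static List<LabelDto> ToDto(List<LabelLinq> entityList)
    {
        return entityList
            .Select(e => new LabelDto { Guid = e.LabelGuid, Title = e.LabelTitle })
            .ToList();
    }

    // DTO -> entity: the same code with the names switched around.
    public static List<LabelLinq> ToEntity(List<LabelDto> dtoList)
    {
        return dtoList
            .Select(d => new LabelLinq { LabelGuid = d.Guid, LabelTitle = d.Title })
            .ToList();
    }

    public static void Main()
    {
        List<LabelLinq> entityList = new List<LabelLinq>
        {
            new LabelLinq { LabelGuid = "g1", LabelTitle = "WCF" }
        };
        Console.WriteLine(ToDto(entityList)[0].Title);
    }
}
```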

If you are new to LINQ, then I recommend the book Pro LINQ by Joseph C. Rattz Jr. However, if you are already using LINQ or want a view into its internal mechanics, then I must recommend LINQ in Action by Fabrice Marguerie, Steve Eichert, and Jim Wooley.

Spring 2008 Sabbatical

On May 23rd, I'm starting another sabbatical to work on my company projects, to continue my seminary work, and to work on my book (to be clear: sabbatical != vacation).  During this time I will be accepting part-time AJAX, WCF, ASP.NET (no graphics work!-- hire a professional graphic designer; they are worth the money!), or general C# 3.0 and .NET 3.5 telecommuting consulting.  I'll assist in projects, but I'm not going to be able to work as senior architect on any projects.  Also remember, as a web developer, it's my duty to make sure my projects work in Mozilla, Opera, Safari, and IE, and are in no way IE-specific.  IE-only environments are the absolute most difficult to work with.

Also keep in mind that this is 2008, not 1988, and the primary purpose of modern technology is to allow us to have simpler lives; just about every single aspect of our technology has its roots in the Internet allowing us to communicate from anywhere.  What's the point in having web casting and online meeting abilities, or online white boarding, or web-based project management software, or even Google Office if you aren't going to use them in a meaningful way?  Why have e-mail at all if you are going to absolutely rely on the ability to go to the person's office?  The addiction to physical contact is something that needs to be broken in the 21st century.  Stop managing with your physical "field of view" and start managing by results.

I'm a web developer/architect, not a piano mover; I don't need to be in a physical office.  If you are into technology at all, you are into moving your physical resources into a logical cloud.  If I've said it once, I've said it a million times: your associates are your greatest resource and should, therefore, be even more in a logical cloud (as they are humans and would appreciate it more!)  It is inconsistent to pursue logical management of resources and require physical management of personnel.  Not only that, but it costs a lot less (no office space required!)  If your employees don't have enough discipline to work from home, what makes you think they are working in their cube?  Unless you are working off the failed notion of "hourly management" instead of being a results-oriented manager, you won't have a problem with 100% telecommuting.  Results matter, not "time".  Also, if you don't trust your employees, well... maybe you hired the wrong people (or maybe have trust issues in general?)  Trust is the foundation of all life.  I could speak volumes on this topic, but I'll leave that to the expert: Timothy Ferris.  See his blog or get his book for more information.  I'm only an anonymous disciple of his, he is the master and authority on this topic.  Therefore, send your flames (read: insecurities) his way (after you read his book!-- audio also available; they are both worth 100x their weight in gold!)  See also, Scott Hanselman's interview with Timothy Ferris.  His YouTube page is also available.

With regards to the book, let me simply say that it's generically about AJAX communication and I'm not going to give out too many specific details on the project at this point, but I will say this: AJAX + SOA - CSS + Prototype + (ASP.NET Controls) - (ASP.NET AJAX) + WCF + (.NET 2.0 Service Interop) + Silverlight + Development Tools.  Also, I reserve the right to turn it into a video series (likely), make it a complete learning set of reading + video series (even more likely!), or completely chuck the project.  I don't like to do things the classical way, so whatever I do, you can bet on the fact that I won't do the traditional "book".  As I've always said, the blog is the new book, but for this I think I may use a different paradigm.  I've turned down two book offers so far because I absolutely refuse to throw more paper on a bookshelf or do something that's been done a million times before.

If you are moving from ASP to ASP.NET, from PHP to ASP.NET, from ASMX to WCF 3.5 or want to add AJAX to your solutions drop me an e-mail and let's talk.

NetFXHarmonics on CodePlex

Well, I finally broke down.  My public projects are now freely available for download on CodePlex.

Here are the current projects on CodePlex:

As far as creating "releases", these are shared-source/open-source projects, and in the model I'm following, "releases" are always going to be obsolete.  Therefore, I will provide ZIP versions of the archived major revisions of a project, and the current revision will always be available as source code.  The only exception to this may be DevServer, for which I may do monthly releases or releases based upon major upgrades.  I'm currently working on new major revisions for a few other projects, and when they are completed, I will post them on CodePlex as well.

As a reminder, my projects are always architected to follow the current best practices and idioms for a particular technology and are therefore often fully re-architected based on the current technology.  The reason I do this is simple: my core specialty is training (technology or not), and that's the driving principle in my projects.  Therefore, each of my projects has an "As a Training Tool" section that explains that project's technology and architecture as well as what else you might be able to learn from it.

As a final note, SVNBridge is working OK for me and has really helped me get over the CodePlex hurdle.  Scott Hanselman was kind enough to encourage me to try SVNBridge again.  I'm honestly glad I did.  The Team System 2008 Team Explorer client which integrates into Visual Studio works every now and again, but I got absolutely sick of everything locking up every time I would save a file.  Not even a check-in!  A simple local save!  How people put up with "connected" version control systems is beyond me.  Do people not realize that Subversion does locking too?  Anyways, SVNBridge works great for both check outs and commits (we don't "check in" in the Subversion world-- we use transactional terminology).  If you want Visual Studio 2008 integration AND speed and power and flexibility with CodePlex, get VisualSVN.  It's an add-on for VS2008 that uses TortoiseSVN behind the scenes.  With that, depending on my mood, I can commit in both VS2008 (what I would do when working on refactoring or something) and in the Windows shell (what I would do when working with JavaScript files in the world's best JavaScript IDE: Notepad2).

NetFXHarmonics DevServer Released

Two months ago I started work on a project to help me in my AJAX and SOA development.  What I basically needed was a development web server that allowed me to start up multiple web servers at once, monitor server traffic, and bind to specific IP interfaces.  Thus, the creation of NetFXHarmonics DevServer.  I built it completely for myself, but others started to ask for it as well.  When the demand for it became stronger, I realized that I needed to release the project on the web.  Normally I would host it myself, but given the interest from the .NET community, I thought I would put it on CodePlex.  I've only cried twice since I put it on CodePlex, but I'll survive.

NetFXHarmonics DevServer is a web server hosting environment built on WPF and WCF technologies that allows multiple instances of Cassini-like web servers to run in parallel. DevServer also includes tracing capabilities for monitoring requests and responses, request filtering, automatic ViewState and ControlState parsing, visually enhanced HTTP status codes, IP binding modes for both local-only as well as remote access, and easy to use XML configuration.

Using this development server, I am able to simultaneously start multiple web sites to very quickly view everything that happens over the wire and therefore easily debug JSON and SOAP messages flying back and forth between client and server and between services.  This tool has been a tremendous help for me in the past few months to discover exactly why my services are tripping out without having to enable WCF tracing.  It's also been a tremendous help in managing my own web development server instances for all my projects, each having 3-5 web sites (or segregated service endpoints).

Let me give you a quick run down of the various features in NetFXHarmonics DevServer with a little discussion of each feature's usage:

XML Configuration

NetFXHarmonics DevServer has various projects (and therefore assemblies) with the primary being DevServer.Client, the client application which houses the application's configuration.

In the app.config of DevServer.Client, you have a structure that looks something like the following:

<jampad.devServer>
</jampad.devServer>

This is where all your configuration lives and the various parts of this will be explained in their appropriate contexts in the discussions that follow.

Multiple Web Site Hosting

Inside the jampad.devServer configuration section in the app.config file, there is a branch called <servers /> which allows you to declare the various web servers you would like to load.  This is all that's required to configure servers.  Each server requires a friendly name, a port, a virtual path, and a physical path.  Given this information, DevServer will know how to load your particular servers.

<servers>
  <server key="SampleWS1" name="Sample Website 1" port="2001"
          virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  </server>
  <server key="SampleWS2" name="Sample Website 2" disabled="true" port="2003"
          virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite2">
  </server>
</servers>

If you want to disable a specific server from loading, use the "disabled" attribute.  All disabled servers will be completely skipped in the loading process.  On the other hand, if you would like to load a single server, you can actually do this from the command line by setting a server key on the <server /> element and by accessing it via a command line argument:

DevServer.Client.exe -serverKey:SampleWS1

In most scenarios you will probably want to load various sets of servers at once.  This is especially true in properly architected service-oriented solutions.  Thus, DevServer includes a concept of startup profiles.  Each profile will include links to a number of keyed servers.  You configure these startup profiles in the <startupProfiles /> section.

<startupProfiles activeProfile="Sample">
  <profile name="Sample">
    <server key="SampleWS1" />
    <server key="SampleWS2" />
  </profile>
</startupProfiles>

This configuration block lives parallel to the <servers /> block, and the inclusion of servers should be fairly self-explanatory.  When DevServer starts, it will load the profile named in the "activeProfile" attribute.  If the activeProfile attribute is missing, it will be ignored.  If the activeProfile states a profile that does not exist, DevServer will not load.  When using a startup profile, the "disabled" attribute on each server instance is ignored.  That attribute is only for non-startup-profile usage.  An activeProfile may also be set via the command line:

DevServer.Client.exe -activeProfile:Sample

This will override any setting in the activeProfile attribute of <startupProfiles />.  In fact, the "serverKey" command line argument overrides the activeProfile <startupProfiles /> attribute as well.  Therefore, the order of priority is as follows: the command line arguments override the profile configuration, and the profile configuration overrides the "disabled" attribute.
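That priority chain can be sketched as a tiny decision function (hypothetical; DevServer's actual resolution code isn't shown in this post, so the names here are assumptions):

```csharp
using System;

public static class StartupResolver
{
    // Hypothetical sketch of the priority order described above:
    // -serverKey beats -activeProfile, which beats the configured
    // activeProfile attribute; the "disabled" attribute only matters
    // when no profile is in play at all.
    public static String Resolve(String serverKeyArgument, String activeProfileArgument, String configuredActiveProfile)
    {
        if (!String.IsNullOrEmpty(serverKeyArgument))
        {
            return "server:" + serverKeyArgument;
        }
        if (!String.IsNullOrEmpty(activeProfileArgument))
        {
            return "profile:" + activeProfileArgument;
        }
        if (!String.IsNullOrEmpty(configuredActiveProfile))
        {
            return "profile:" + configuredActiveProfile;
        }
        return "non-disabled-servers";
    }

    public static void Main()
    {
        Console.WriteLine(Resolve("SampleWS1", null, "Sample"));  // server:SampleWS1
        Console.WriteLine(Resolve(null, "Sample", "ABCCorp"));    // profile:Sample
    }
}
```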

Most developers don't work on just one project with only one client.  Or, even if they do, they surely have their own projects as well.  Therefore, you may have even more servers in your configuration:

<server key="ABCCorpMainWS" name="Main Website" port="7001"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\Website">
</server>
<server key="ABCCorpKBService" name="KB Service" port="7003"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\KnowledgeBaseService">
</server>
<server key="ABCCorpProductService" name="Product Service" port="7005"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\ProductService">
</server>

These would be grouped together in their own profile with the activeProfile set to that profile.

<startupProfiles activeProfile="ABCCorp">
  <profile name="ABCCorp">
    <server key="ABCCorpMainWS" />
    <server key="ABCCorpKBService" />
    <server key="ABCCorpProductService" />
  </profile>
  <profile name="Sample">
    <server key="SampleWS1" />
    <server key="SampleWS2" />
  </profile>
</startupProfiles>

What about loading servers from different profiles?  Well, think about it... that's a different profile:

<startupProfiles activeProfile="ABCCorpWithSampleWS1">
  <profile name="ABCCorpWithSampleWS1">
    <server key="SampleWS1" />
    <server key="ABCCorpMainWS" />
    <server key="ABCCorpKBService" />
    <server key="ABCCorpProductService" />
  </profile>
</startupProfiles>

One of the original purposes of DevServer was to allow remote non-IIS access to development web sites.  Therefore, in DevServer you can use the <binding /> configuration element to set either "loopback" (or "localhost") to allow access only from your own machine, "any" to allow web access from all addresses, or a specific IP address to bind the web server to a single IP address so that only systems with access to that IP on that interface can reach the web site.

In the following example, the first web site is only accessible by the local machine and the second is accessible by others.  This comes in handy both for testing in a virtual machine and for quickly doing demos.  If your evil project manager (forgive the redundancy) wants to see something, bring the web site up on all interfaces and he can poke around from his desk and then have all his complaints and irrational demands ready when he comes to your desk (maybe you want to keep this feature secret).

<server key="SampleWS1" name="Sample Website 1" port="2001"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  <binding address="loopback" />
</server>
<server key="SampleWS2" name="Sample Website 2" port="2003"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite2">
  <binding address="any" />
</server>

Web Site Settings

In addition to server configuration, there is also a bit of general configuration that applies to all instances.  As you can see from the following example, you can add default documents to the existing defaults and you can also set up content type mappings.  A few content types already exist, but you can override them as the example shows.  In this example, where ".js" is normally sent as "text/javascript", it is overridden to be sent as "application/x-javascript".

<webServer>
  <defaultDocuments>
    <add name="index.jsx" />
  </defaultDocuments>
  <contentTypeMappings>
    <add extension=".jsx" type="application/x-javascript" />
    <add extension=".js" type="application/x-javascript" override="true" />
  </contentTypeMappings>
</webServer>

Request/Response Tracing

One of the core features of DevServer is the ability to do tracing on the traffic in each server.  Tracing is enabled by adding a <requestTracing /> configuration element to a server and setting the "enabled" attribute to true.

<server key="SampleWS1" name="Sample Website 1" port="2001"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  <binding address="loopback" />
  <requestTracing enabled="true" enableVerboseTypeTracing="false" enableFaviconTracing="true" />
</server>

This will have request/response messages show up in DevServer, allowing you to view the status code, date/time, URL, POST data (if any), response data, request headers, and response headers, as well as parsed ViewState and ControlState for both the request and response.  In addition, each entry is color coded based on its status code.  Different colors show for 301/302, 500+, and 404.


When working with the web, you don't always want to see every little thing that happens.  Therefore, by default, you only trace common text-based types like HTML, CSS, JavaScript, JSON, XAML, text, and SOAP and their content.  If you want to trace images and other content going across, then set "enableVerboseTypeTracing" to true.  However, since there is no need to see a big blob of image data, the data of binary types is not sent to the trace viewer even with enableVerboseTypeTracing.  You can also toggle both tracing and verbose type tracing on each server while it is running.

There's also the ability to view custom content types without seeing all the images and extra types.  This is the purpose of the <allowedContentTypes /> configuration block under <requestTracing />, which lives parallel to <servers />.

<requestTracing>
  <allowedContentTypes>
    <add value="application/x-custom-type" />
  </allowedContentTypes>
</requestTracing>

In this case, responses of content-type "application/x-custom-type" are also traced without needing to turn on verbose type tracing.
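Putting the pieces together, the top-level layout of the configuration looks roughly like the following sketch (the root element name and exact nesting are assumptions based on the blocks shown in this post):

```xml
<devServer>
  <servers>
    <!-- keyed <server /> elements, each optionally with <binding /> and <requestTracing /> -->
  </servers>
  <startupProfiles activeProfile="Sample">
    <!-- <profile /> elements containing <server key="..." /> references -->
  </startupProfiles>
  <requestTracing>
    <allowedContentTypes>
      <add value="application/x-custom-type" />
    </allowedContentTypes>
  </requestTracing>
</devServer>
```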

However, there is another way to control this information.  If you want to see all requests, but want the ability to filter at runtime for various content types, then you can use a client-side filter in the request/response list.  In the box immediately above the request/response list, you can type something like the following:

verb:POST;statuscode:200;file:css;contentType:text/css

Filtering occurs as you type, allowing you to find the particular request you are looking for.  The filter is NOT case sensitive.  You can also clear the request/response list with the clear button.  There is also the ability to copy/paste the particular headers you want from the headers list by using typical SHIFT-clicking (for a range) and CTRL-clicking (for single selection).
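For instance, using the same filter keys shown above, a shorter filter narrows the list to just the stylesheet requests:

```
verb:GET;contentType:text/css
```

You only need to supply the criteria you care about; omitted keys simply don't constrain the list.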

Request/Response monitoring actually goes a bit further by automatically parsing both ViewState and ControlState for both request (POST) and response data.  Thanks go to Fritz Onion for granting me permission to use his ViewState parser class in DevServer.

As a Training Tool

When I announce any major project, I always provide an "as a training tool" section to explain how the project can be used for personal training.  NetFXHarmonics DevServer is built on .NET 3.5 and relies heavily on LINQ and WCF with a WPF interface.  It also uses extensive .NET custom configuration for all server configuration.  In terms of LINQ, you can find many examples of how to use both query expression syntax and extension method syntax.  When people first learn LINQ, they think that LINQ is an O/R mapper.  Well, it's not (and probably shouldn't be used for that in enterprise applications! there is only one enterprise-class O/R mapper: LLBLGen Pro).  LINQ allows Language INtegrated Query in both C# and VB.  So, in DevServer, you will see heavy reliance on LINQ to search List<T> objects and also to transform LINQ database entities into WCF DTOs.
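As a quick sketch of the two LINQ syntaxes side by side (the entity type and data here are hypothetical, not DevServer's actual types), searching a List<T> looks like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical server entry; DevServer's real entities differ.
public class ServerEntry
{
    public string Key { get; set; }
    public int Port { get; set; }
}

public static class ServerSearch
{
    // Query expression syntax: filter a List<T> by key.
    public static ServerEntry FindByKeyQuery(List<ServerEntry> servers, string key)
    {
        return (from s in servers
                where s.Key == key
                select s).Single();
    }

    // Extension method syntax: the exact same search.
    public static ServerEntry FindByKeyMethod(List<ServerEntry> servers, string key)
    {
        return servers.Where(s => s.Key == key).Single();
    }
}
```

Both compile down to the same calls; which one you use is purely a matter of readability for the query at hand.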

DevServer also relies heavily on WCF for all inter-process communication via named pipes.  The web servers are actually hosted inside of a WCF service, thus segregating the web server loader from the client application in a very SOA-friendly manner.  The client application loads the service and then acts as a client to the service, calling on it to start, stop, and kill server instances.  WCF is also used to communicate the HTTP requests inside the web server back to the client, which is itself a WCF service to which the web server acts as a client.  Therefore, DevServer is an example of how you can use WCF to communicate between AppDomains.

The entire interface in DevServer is a WPF application that relies heavily on WPF binding for all visual information.  All status information is in a collection to which WPF binds.  Not only that, but all request/response information is also in a collection.  WPF simply binds to the data.  Using WPF, no event handling was required to say "on a click event, obtain the SelectedIndex, pull the data, then set the text of these TextBox instances".  In WPF, you simply have normal everyday data, and WPF controls bind directly to that data, being automatically updated via special interfaces (i.e. INotifyPropertyChanged and INotifyCollectionChanged) or the special generic ObservableCollection<T>.
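The change-notification pattern behind that binding can be sketched in a few lines (the class and property names here are illustrative, not DevServer's actual types):

```csharp
using System;
using System.ComponentModel;

// Minimal sketch of a bindable status object.  Any WPF control bound
// to Status is updated automatically whenever the property changes.
public class ServerStatusModel : INotifyPropertyChanged
{
    private string status = "Stopped";

    public event PropertyChangedEventHandler PropertyChanged;

    public string Status
    {
        get { return status; }
        set
        {
            status = value;
            // Raising this event is what lets WPF refresh bound controls
            // without any manual event handling in the UI code.
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Status"));
        }
    }
}
```

Collections work the same way: ObservableCollection<T> raises the analogous INotifyCollectionChanged event for you, which is why the request/response list can simply be a collection that WPF binds to.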

Since the bindings are completely automated, there also need to be ways to "transform" data.  For example, in the TabItem header I have a little green or red icon showing the status of that particular web server instance.  There was no need to handle this manually.  There is already a property on my web server instance that has a status.  All I need to do is bind the image to my status enumeration and set a TypeConverter that transforms the enumeration value into a specific icon.  When the enumeration is set to "Started", the icon is green; when it is set to "Stopped", the icon is red.  No events are required and the only code required for this scenario is the quick creation of a TypeConverter.

Therefore, DevServer is an example of WPF databinding.  I've heard people say that they are more into architecture and WCF and therefore have no interest in learning WPF.  This statement makes no sense.  If you don't want to mess with UI stuff, you need to learn WPF.  Instead of handling events all over the place and manually setting data, you can do whatever it is you do and have WPF just bind to your data.  When it comes to creating quick client applications, WPF is a much more productive platform than Windows Forms... or even the web!

Links