
Setting a Silverlight 2 Startup Breakpoint using WinDBG



As I've mentioned on a previous occasion, hardcore debugging is done with WinDBG and SOS.  Visual Studio's debugger just isn't enough for the super interesting scenarios.  I've also mentioned that I have no intention of giving a step-by-step introduction to using SOS.  However, given that debugging is directly related to internals and that most people with advanced Silverlight issues will search for "advanced Silverlight debugging" or the like, I'll provide a little information here and there to help people find what they are looking for more quickly.

Here I would like to explain how you can set a breakpoint very early in the Silverlight load process, before your application even loads.  This technique will give you a glimpse into the internals of Silverlight to see how things work.  It will also help you step through your method calls from the very beginning in WinDBG.

First, however, understand that Silverlight has three major components that you need to be aware of:

  • NpCtrl.dll
  • AgCore.dll
  • CoreCLR.dll

Each of these components is critical to loading and running a Silverlight application.  NpCtrl.dll is a browser plug-in that follows the Netscape Plugin API (NPAPI).  When your browser sees that you are trying to load Silverlight 2, it will call the NPInitialize export (a public DLL function).  Later, this plug-in will load AgCore.dll and create an instance of the COM class XcpControl2 to allow NpCtrl to interact with Silverlight (you too can use XcpControl in your own COM development).  AgCore is the unmanaged part of Silverlight 2.  It handles everything from DOM interaction, to rendering, to media playback.

At this point you will see the spinning XAP-loading progress bar.  When this is complete, NpCtrl will finally load CoreCLR.dll.  Upon load, the DllMain function for this DLL is called.  This function will start up the CoreCLR, the CLR that Silverlight 2 uses.  Upon completion, NpCtrl then calls the GetCLRRuntimeHost DLL export to get the loaded instance of the CLR.  The rest of your application is ready to run (as a side note, the order of a few of these things changes under certain circumstances, so you may actually see CoreCLR load AgCore.)

With this information you're ready to start a fresh instance of a web browser (it's crucial that Silverlight 2 isn't already loaded) and attach WinDBG to your browser's process.  Upon attaching, you are given control of the command interface.  To set a breakpoint when Silverlight 2 starts, you set a breakpoint when a specific DLL is loaded.  Depending on your needs, this may be npctrl.dll, agcore.dll, or coreclr.dll.

You set a breakpoint for a DLL load in WinDBG by using the following command:

sxe ld:DLLNAME

Thus, if you wanted to break at a specific point in the Silverlight loading procedure, you use one of the following commands:

sxe ld:npctrl

sxe ld:agcore

sxe ld:coreclr
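
For example, an end-to-end sketch of the flow (assuming you want to break when the CLR itself loads) might look like the following; the $$ lines are WinDBG comments:

$$ break when coreclr.dll is loaded
sxe ld:coreclr
$$ resume; now browse to a Silverlight 2 page
g
$$ when WinDBG breaks, view the call stack
k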

If you want to remove these breakpoints, just type the following:

sxr
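
Note that sxr resets all of WinDBG's exception and event filters to their defaults, not just module-load breaks.  If you only want to stop breaking on one particular DLL, setting that single load event back to ignore with sxi should also work (this is standard sx* family behavior, though I haven't exhaustively verified it for ld events):

sxi ld:coreclr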

Once you have set one or more DLL load breakpoints, resume your application (the 'g' command or F5 in WinDBG) and test it out by going to http://themelia.netfxharmonics.com/. If successful, WinDBG will break when the DLL loads.  In the call stack window (or when you type the 'k' command), you will see a call stack that looks something like the following:

kernel32!LoadLibraryW+0x11
npctrl!GetCLRRuntimeHost+0x112
npctrl!CWindowsServices::CreateCLRRuntimeHost+0x19
agcore!CCLRHost::Initialize+0x12a
agcore!CRuntimeHost::Initialize+0x60
agcore!CCoreServices::CLRStartup+0x95
agcore!CDeployment::Start+0x27
agcore!CCoreServices::StartDeploymentTree+0x53
npctrl!CXcpBrowserHost::put_Source+0x10f
npctrl!CommonBrowserHost::OnGotSourceDownloadResponse+0x4b
npctrl!CXcpDispatcher::OnReentrancyProtectedWindowMessage+0x284
npctrl!CXcpDispatcher::WindowProc+0x51
USER32!InternalCallWinProc+0x28
USER32!UserCallWinProcCheckWow+0x150

Chances are, though, that you won't see the information like this at first.  Instead, you will probably see a series of memory addresses.  To view the actual function names, you must configure your symbol paths in WinDBG.  These symbol files, often called PDBs, relate addresses in the compiled binary back to information from the original source.  In our case, we need them to view the function names.

To set up symbols, use the following command, replacing C:\_SYMBOL with a symbol location of your choice (symbols are very important and, therefore, should not be thrown into a "temp" folder):

.sympath SRV*C:\_SYMBOL*http://msdl.microsoft.com/download/symbols;SRV*C:\_SYMBOL*http://symbols.mozilla.org/firefox

You may also enter this information in the window that is shown when you hit Ctrl-S.  Either way, this will configure the symbols for both Microsoft's symbol server and Mozilla's.  The latter will help you view exactly how far the loading process has gone while in Firefox.

With the symbols configured, you will then be able to see something like the above call stack.  If you set the symbols after you had already loaded Silverlight 2 in your browser, then you need to exit the browser, start a new instance, and reattach WinDBG.  Regardless, every time a DLL is loaded, WinDBG will automatically access the appropriate Microsoft or Mozilla symbol server and download the symbol file for that DLL.  This will take time for every DLL that is loaded.  Expect the WinDBG status bar to read BUSY for a long time.  It will only download the symbols for each DLL once.

The method I've just described works perfectly for breaking into Firefox and IE with WinDBG.  But, what about Chrome?  Depending on settings, Google Chrome uses either a process-per-tab model or a process-per-site model.  While this flexibility is great for daily use, for debugging we can't have processes bouncing around like jack rabbits on cocaine.  You need a process that you can attach to that won't run away as soon as you use it.  So, to successfully attach WinDBG to Chrome, you should start Chrome up in single-process mode.  You do this as follows:

chrome.exe --single-process

At this point, you can attach to Firefox, IE, and Chrome for Silverlight 2 debugging.

Silverlight SOS Commands



Today I became rather curious about what commands Silverlight's version of SOS provides (see my Learning WinDBG/SOS and Advanced Debugging post for more information on SOS).  I didn't really have any guess as to whether there would be more, fewer, or the same.  So I ran Silverlight's SOS.dll through .NET's dumpbin.exe utility with the /exports switch to get a list of what you can do with it.  In this case, the lowercase DLL exports are the actual SOS commands you can use.  I then ran dumpbin.exe on the .NET 2.0 version and did a diff between the two.  The results?  The Silverlight version of SOS actually has more commands than the .NET version.

Here is a list of SOS commands that aren't in the .NET version:

analyzeoom histobj vh
ao histobjfind vo
dumpsigelem histroot  
findroots histstats  
fq hof  
gcheapstat listnearobj  
gcwhere lno  
heapstat t  
histclear tp  
histinit verifyobj  

You can type these into your debugger to see the specific syntax for each of these.  For detailed information on most of them, you'll have to wait until something gets posted on MSDN or one of the SOS developers posts something.  For a few, however, Sasha Goldshtein has provided some information and examples.  Here are some posts from his web site where you can find information on some of these new commands:

In case you're wondering, the .NET version has a few commands that the Silverlight version doesn't have, too.  This isn't a big deal, as the missing commands don't really have much meaning for Silverlight (or have an alternative).  Here's a list of them:

comstate
dumpmethodsig
rcwcleanuplist
tst

Also, for the sake of completeness, below is the complete list of all Silverlight commands.  Just type them into WinDBG with the ! prefix to play with each of them.  For many of them you can type "!sos.help COMMANDNAME" to get help for a specific command.

analyzeoom dumpsigelem histclear syncblk
ao dumpstack histinit t
bpmd dumpstackobjects histobj threadpool
clrstack dumpvc histobjfind threads
da eeheap histroot token2ee
do eestack histstats tp
dso eeversion hof traverseheap
dumparray ehinfo ip2md u
dumpassembly finalizequeue listnearobj verifyheap
dumpclass findappdomain lno verifyobj
dumpdomain findroots minidumpmode vh
dumpheap fq name2ee vmmap
dumpil gchandleleaks objsize vmstat
dumplog gchandles pe vo
dumpmd gcheapstat printexception  
dumpmodule gcinfo procinfo  
dumpmt gcroot savemodule  
dumpobj gcwhere soe  
dumpruntimetypes heapstat sosflush  
dumpsig help stoponexception  
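
For example, with sos.dll loaded, something like the following should print the syntax for one of the new commands (the exact output depends on that particular SOS build):

!sos.help gcwhere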

Then there's the complete list of .NET SOS commands:

bpmd dumpstack procinfo
clrstack dumpstackobjects rcwcleanuplist
comstate dumpvc savemodule
da eeheap soe
do eestack sosflush
dso eeversion stoponexception
dumparray ehinfo syncblk
dumpassembly finalizequeue threadpool
dumpclass findappdomain threads
dumpdomain gchandleleaks token2ee
dumpheap gchandles traverseheap
dumpil gcinfo tst
dumplog gcroot u
dumpmd help verifyheap
dumpmethodsig ip2md vmmap
dumpmodule minidumpmode vmstat
dumpmt name2ee  
dumpobj objsize  
dumpruntimetypes pe  
dumpsig printexception  

If you haven't noticed yet, most of these aren't even documented commands.  However, if you type them into SOS, you will not only see that they exist, but you will also be given the syntax for how to use them (and, then, there's !sos.help).


Learning WinDBG/SOS and Advanced Debugging



In my daily R&D work, as well as in my general development, I always keep WinDBG open so I can quickly debug major problems in a system or just take a look under the covers.  WinDBG is short for Windows Debugger, and it's what you would use if you were to debug a Windows driver or figure out why your system blue screened.  It's an advanced unmanaged debugger.  If you're into internals and eat up books like Windows Internals and Windows via C/C++, then you will love (or probably already do love) Windows Debugger.

You can use it for more than unmanaged debugging, though.  The .NET Framework ships with a component called SOS, which you can load into WinDBG to enable advanced managed debugging.  Actually, with the proper settings ("Enable unmanaged code debugging" set to true), you can sometimes load SOS in Visual Studio.  Using either, you can do anything from breaking when a particular method is called, to dumping out the IL at a particular place in your call stack (yes, this means you can view the code at runtime without needing the source!), to viewing the methods of a type, to breaking when the CLR first loads.  It's incredibly powerful.  You don't even need to be at the system to use it.  Just have someone send you a memory dump and you can use that just as easily as if you were physically at the system.

You can even use it to debug .NET applications inside of unmanaged applications.  For example, WinDBG is the tool I used to figure out why Visual Studio 2008 didn't allow .NET assemblies to be referenced in Silverlight.  Visual Studio was simply loading a managed assembly to do all of its assembly reference dirty work.  Since .NET has this awesome thing called the CTS (Common Type System), types are actual types instead of just chunks of memory.  So when you pause a process and walk through memory, you don't just see memory addresses like 0x018271927, but you see things like System.String with a value of "ctl02".

In addition to being able to debug unmanaged code, .NET code, and .NET code in unmanaged code, you can also use WinDBG to debug Silverlight code.  As it turns out, when you install Silverlight, sos.dll is installed at %ProgramFiles%\Microsoft Silverlight\2.0.31005.0\sos.dll (or, on a 64-bit system, %ProgramFiles(x86)%\Microsoft Silverlight\2.0.31005.0\sos.dll).  Just attach your debugger to the unmanaged browser process to debug the managed Silverlight application.
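
As a sketch of what loading it looks like in WinDBG (the version number in the path will vary with your Silverlight install, and the two ! commands afterward are just ordinary SOS commands to confirm the load worked):

.load C:\Program Files\Microsoft Silverlight\2.0.31005.0\sos.dll
!threads
!clrstack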

At this point you might expect me to announce that I'm going to do some long series on how to effectively use SOS, but that's not the case for two reasons: first, I'm not going to dump out a series of screen shots when many others have done this already.  Second, 90%+ of advanced debugging is all about understanding internals; something that takes many hours of study over many months.  Don't think that learning some tool will help you to effectively debug serious problems.

Thus, remember this: your ability to debug something is directly proportional to your knowledge of the system.  As I say regularly, if you ever try to fix something without understanding it, you are, at best, doing little more than hacking.  Therefore, instead of writing a series, I'm going to refer you to a series of places where you can easily learn all about WinDBG and, more importantly, advanced debugging and system internals.

Before you go diving into a list of resources, though, you need to realize that WinDBG, like Visual Studio's built-in, toned-down debugger, is only a tool.  As I've already mentioned, no tool will ever replace the need for the human to know how to use the tool or how to interpret the information provided by it.  Knowledge of debugging is required before you can use a debugging tool effectively.  Logic is also always required when debugging any issue.  Not only that, but as previously mentioned, deep knowledge of the internals of the system you are debugging is non-negotiable.  If you don't know what a MethodDesc is, what a method table is, or even what a module is, then your debugging abilities will be severely limited.  You won't have a clue how to find or interpret your debug output.

The more you know about the CLR, memory management, threading, garbage collection, the CLR's type system, and Windows the better off you are.  If you understand why .NET finalizers should only be used once every 10 years, you're on the right track.  If you don't even know how many generations are in .NET's garbage collector, you have some serious research ahead of you.  The more you know, the better off you are.  This applies to the whole of the system.  I hear people all the time talking about the "stack" and the "heap" as if they are some ethereal concept beyond human grasp.  These people typically have no idea that these are concepts that predate .NET by decades.  If you can't explain the stack and heap from a C/C++ perspective, then you don't truly understand the concepts.

Also, if you understand memory management in C/C++, you're way ahead of the game.  Even though you don't normally see them in a typical ASP.NET web form, you should know your pointers, as they are the foundation of all memory management.  Without awareness of pointers, the entire concept of "passing by reference" is little more than magic.  You need to know why something is a value type and why something is a reference type.  If you don't understand the concept of a sync block index, then you'll always wonder why you can't lock on a value type, yet can lock on a value type cast to an interface.  This type of information will go a long way to help you not only debug a system but, equally important, make sure it has optimal performance.

You should also not be afraid of IL.  This is the language of .NET.  You don't need to care about becoming fluent in writing IL.  I promise you that you won't be required by any fascist employer to write a full-scale WPF system in IL.  However, if you don't even know what boxing is, you need to hit the books.  Knowledge of IL dramatically aids your understanding of both the Framework Class Library and the CLR.  You should be aware of what is actually happening under the covers.  Don't simply assume that the csc.exe or vbc.exe compilers will fix your coding flaws.  Sometimes the code created by these compilers isn't optimal.  You should also understand what the evaluation stack is and how stack-based programming works, as this is the foundation for IL development.

Fortunately, there are resources for these prerequisites of advanced debugging in addition to the following debugging resources.  Not only that, but there are two books that explain just about everything I've just mentioned: Jeffrey Richter's CLR via C# and Joe Duffy's Professional .NET Framework 2.0.  Buy these books and a stack of highlighters and study until every page has marks, notes, and coffee stains on it.  In addition to this, you should probably also drop by your local book store, grab a book on C/C++, and read a chapter or two on memory management so you can see the types of things the CLR has to deal with so you don't have to.

For even deeper debugging, though, you may want to study the "Processor Architecture" section of the WinDBG help files, get the aforementioned Windows/C++ books, and buy Advanced Windows Debugging as well.  Everything you study is just "code", so the more you know, the deeper you can debug.  Not just debugging though, but knowledge of internals help you get more knowledge of internals.  By understanding C++, assembly language, and Windows internals, you already have just about everything you need to do effective reverse engineering.  This will help you to understand the undocumented internals and features of various technologies.

These things will help further your knowledge of what's actually going on in .NET's brain.  It's also important to remember that you won't learn this stuff overnight.  As I said, your debugging skills are directly related to your knowledge of internals.  Therefore, as you pick up more prerequisites in your career, you will become better at debugging (and performance optimization).  Every few months I take some time to further my understanding of the internals of new (and sometimes even old) technologies for these specific reasons.  For example, in Q1 and Q2 of 2009, I'm spending the bulk of my time studying assembly language and Windows architecture to further my own understanding.

Now on to the resources (which will be updated every now and again):

Advanced Debugging Resources

Reusing .NET Assemblies in Silverlight



Introduction

Long before Silverlight 1.0 was released, it was actually called WPF/E, or WPF Everywhere.  The idea was to allow you to create WPF-like interfaces in your web browser.  This can be seen in a very small way in Silverlight 1.0.  All it provided was very basic primitive objects with the ability to interact with client-side technologies like JavaScript.  However, with Silverlight 2.0, Silverlight is actually more than what was originally promised with the term "WPF/E".  Silverlight is now far more than a graphical technology.  All this stuff about Silverlight being "WPF for the Web" is more to make the marketing folks happy than anything else.

As a technology parallel to .NET, Silverlight is not part of the .NET family.  Rather, it essentially mirrors the .NET platform to create a new platform inside of a web browser where you have a mini-CLR and a mini-Framework Class Library (FCL).  However, even though they are parallel technologies, you would expect that Microsoft would allow some level of reuse between the two.  As it turns out, most concepts are completely reusable.  Among other things, Silverlight has delegates, reference types, value types, a System namespace, and the ability to write code in both C# and VB.

Furthermore, despite the rumors, Silverlight also shares the exact same module and assembly format as .NET.  This may seem completely shocking to some people given the fact that Visual Studio 2008 doesn't allow you to reference a .NET assembly in a Silverlight project.  In reality, however, there's no technical reason for this prohibition.  There isn't a single byte difference between a Silverlight and .NET assembly.  One way to see this is by referencing a Silverlight assembly in a .NET project.  Just try it.  It works great.  So, why doesn't Visual Studio allow .NET assemblies in Silverlight projects?

To answer this, we need to understand that just because an optional helper tool (i.e. Visual Studio) doesn't allow something, that doesn't mean the technology itself doesn't.  In this case, the reason why Visual Studio allows a .NET project to reference Silverlight assemblies, but not the other way around is probably because .NET assemblies can normally do more.  For example, .NET has all kinds of XML related entities in its System.Xml assembly.  If Silverlight were to try to use this, it would blow up at runtime.  However, both Silverlight and .NET have an mscorlib assembly thus giving them a sense of brotherhood.  Having said that, Silverlight has the System.Windows.Browser assembly which, upon access in .NET, would make your .NET application explode!  Thus, the Visual Studio restriction laws are flawed.

Fortunately, there are ways around Visual Studio's fascist regime.  I'm going to talk about two different ways of reusing .NET assemblies and code in Silverlight.  The first is the more powerful assembly-level technique, while the second is the more flexible file-level technique.  Each technique is useful for its own particular scenarios.  Please keep naive comments of "I'm ALWAYS going to…" and "I'm NEVER going to…" to yourself.  You need to decide which of these techniques (or possibly another technique) to use on a case-by-case basis.

The Assembly-Level Technique

For this technique, you need to understand what's going on under the covers when you try to add a .NET reference to your Silverlight application in Visual Studio.  It's actually incredibly simple.  Visual Studio isn't a monolith that controls all your code from a centralized location; sometimes it uses plug-ins to do its dirty work.

In this case, Visual Studio 2008 uses the Microsoft.VisualStudio.Silverlight .NET assembly.  In this assembly is the Microsoft.VisualStudio.Silverlight.SLUtil class, which contains the IsSilverlightAssembly method.  When you add an assembly to a Silverlight project, this method is called internally to see if your assembly is a Silverlight assembly.  If it is, Visual Studio will add it.  If not, it won't.  It's just that simple.  But, given that the Silverlight and .NET assembly formats are the same, how can it know?

You may be shocked to find out that the reason behind this is completely artificial: if the assembly references the 2.0.5.X version of the mscorlib assembly, then Visual Studio says that it's a Silverlight assembly!  This test is essentially all IsSilverlightAssembly does.  Therefore, if you take your .NET 2.x/3.x assembly and change the version of mscorlib that your assembly references from 2.0.0.0 to 2.0.5.0, you may then add the assembly as a reference.  Now let's talk about this with a more hands-on approach.

Below is the sample code we will be working with for this part of the discussion.  Say this code is placed in an empty .NET project.  When it is compiled, we will have an assembly.  Let's call it DotNet.dll.

using System;
//+
namespace DotNet
{
    public class Test
    {
        public String GetText()
        {
            return String.Format("{0} {1} {2} {3}", "This", "is", "a", "test");
        }
    }
}

Before we go any further, let's discuss the state of the universe at this point.  If you ever try to solve a problem without understanding how the system works, you will at best be hacking the system.  Professionals don't do this.  Therefore, let's try to understand what's going on.

The first thing you need to know is that when you add an assembly to a project in Visual Studio, you are simply telling Visual Studio to tell the compiler what references you have, so that when the compiler translates your code into IL, it knows what assemblies to include as "extern assembly" sections.  Even then, only the assemblies that are actually used in your code will have "extern assembly" sections.  Thus, even if you added a reference to every single assembly in your entire system but only used two, the IL would only have two extern sections (i.e., referenced assemblies).  The second thing you need to know is that, no matter what, your assemblies will always have a reference to mscorlib.  This is the root of all things and is where System.Object is stored.

To help you understand this, let's take a look at the IL produced by this class.  To look at this IL, we are going to use .NET's ILDasm utility.  Reflector will not be your tool of choice here.  Reflector is awesome for referencing code, but not for working with it.  It's more about form than function.  With ILDasm we are going to run the below command:

ILDasm DotNet.dll /out:DotNet.il

For the sake of your sanity, use the Visual Studio command prompt for this.  Otherwise you will need to either state the absolute path of ILDasm or set the path.

This command will produce two files: DotNet.il and DotNet.res.  The res file is completely meaningless for our discussion and, therefore, will be ignored.  Here is the IL code in DotNet.il:

.assembly extern mscorlib
{
    .publickeytoken = (B7 7A 5C 56 19 34 E0 89)
    .ver 2:0:0:0
}
.assembly DotNet
{
    /** a lot of assembly level attributes have been left out **/


    .hash algorithm 0x00008004
    .ver 1:0:0:0
}
.module DotNet.dll
.imagebase 0x00400000
.file alignment 0x00000200
.stackreserve 0x00100000
.subsystem 0x0003 
.corflags 0x00000001     


.class public auto ansi beforefieldinit DotNet.Test extends [mscorlib]System.Object
{
    .method public hidebysig instance string GetText() cil managed
    {
        .maxstack    4
        .locals init ([0] object[] CS$0$0000)
        IL_0000:    ldstr "{0} {1} {2} {3}"
        IL_0005:    ldc.i4.4
        IL_0006:    newarr [mscorlib]System.Object
        IL_000b:    stloc.0
        IL_000c:    ldloc.0
        IL_000d:    ldc.i4.0
        IL_000e:    ldstr "This"
        IL_0013:    stelem.ref
        IL_0014:    ldloc.0
        IL_0015:    ldc.i4.1
        IL_0016:    ldstr "is"
        IL_001b:    stelem.ref
        IL_001c:    ldloc.0
        IL_001d:    ldc.i4.2
        IL_001e:    ldstr "a"
        IL_0023:    stelem.ref
        IL_0024:    ldloc.0
        IL_0025:    ldc.i4.3
        IL_0026:    ldstr "test"
        IL_002b:    stelem.ref
        IL_002c:    ldloc.0
        IL_002d:    call  string [mscorlib]System.String::Format(string, object[])
        IL_0032:    ret
    }


    .method public hidebysig specialname rtspecialname 
         instance void    .ctor() cil managed
    {
        .maxstack    8
        IL_0000:    ldarg.0
        IL_0001:    call  instance void [mscorlib]System.Object::.ctor()
        IL_0006:    ret
    }
}

Right now we only care about the first section:

.assembly extern mscorlib
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
  .ver 2:0:0:0
}

This ".assembly extern ASSEMBLYNAME" pattern is how your assembly references are stored in your assembly.  In this case, you can see that mscorlib is referenced using both its version and its public key token.  For our current mission, all we need to do is change the second 0 in the version to a 5.  The public key tokens used in Silverlight are completely different from the ones in .NET, but we are trying to fool Visual Studio, not Silverlight.  This is a compile-time issue, not a runtime issue.  Speaking more technically, we don't care about the public key token because this information is only used when an assembly is to be loaded.  The correct mscorlib assembly will already have been loaded by the Silverlight application itself long before our assembly comes on the scene.  So, in our case, this entire mscorlib reference is really just to make the assembly legal and to fool Visual Studio.
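
For clarity, after the edit the extern section (identical to the one above except for the version) reads:

.assembly extern mscorlib
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89)
  .ver 2:0:5:0
}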

Once you make the change from 2:0:0:0 to 2:0:5:0, all you need to do is use ILAsm to restore the state of the universe (unlike Reflector with C#, ILAsm can put humpty dumpty back together again).  Here's our command for doing this (in this case the resource part is completely optional, but let's add it for completeness):

ilasm DotNet.il /dll /resource:DotNet.res /out:DotNet2.dll

You are now free to reference your .NET assembly in your Silverlight project or application.  As I've already mentioned, Silverlight and .NET have the same assembly format.  There's nothing in Silverlight that stops us from referencing .NET assemblies, it was only Visual Studio stopping us.

At this point you have just the basics of this topic.  However, it's not the end of the story.  As you should be aware, .NET's core assemblies use four-part names; that is, they have a strong name.  This is used to disambiguate them from other assemblies.  So, instead of the System assembly being called merely "System", which could easily conflict with other assemblies (obviously written by non-.NET developers who don't realize that System should be reserved), it's actually named "System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089".  When you reference an assembly, you need to make sure to match the name, version, culture, and public key token.  When it comes to using .NET assemblies in Silverlight, this is critically important.

Let's say, for instance, that you created a .NET project which referenced and used entities from the System, System.ServiceModel, and System.Runtime.Serialization assemblies.  In this case, the IL produced by the .NET compiler will create the following three extern assembly sections:

.assembly extern System
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89)
  .ver 2:0:0:0
}
.assembly extern System.ServiceModel
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89)
  .ver 3:0:0:0
}
.assembly extern System.Runtime.Serialization
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89)
  .ver 3:0:0:0
}

Notice the public key token on each.  Here all three are the same, but for other .NET assemblies they may be different.  What's important here, though, is that these keys identify the assemblies for .NET, not Silverlight.  Thus, even though you did add your .NET assembly to your Silverlight application, an exception would be thrown at runtime at the point where your application tries to access something in one of these assemblies.

The following shows you what would happen in the extreme case of trying to use the System.Web assembly in your Silverlight application.  You would get the same error if you tried to access something in one of the above assemblies.

[Screenshot: assembly load exception]

As it stands, though, we can fix this just as easily as we fixed the mscorlib problem in Visual Studio.  All we need to do is open our IL and change the public keys and versions to the Silverlight versions.  Below is a list of the common Silverlight assemblies each with their public key token and version:

.assembly extern mscorlib
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System.Core
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System.Net
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System.Runtime.Serialization
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System.Windows
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System.Windows.Browser
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}


//+ note the different public key token in the following
.assembly extern System.ServiceModel
{
  .publickeytoken = (31 BF 38 56 AD 36 4E 35)
  .ver 2:0:5:0
}
.assembly extern System.Json
{
  .publickeytoken = (31 BF 38 56 AD 36 4E 35)
  .ver 2:0:5:0
}

Just use the same ILDasm/Edit/ILAsm procedure already mentioned to tell the assembly to use the appropriate Silverlight assemblies instead of the .NET assemblies.  This is an extremely simple procedure consisting of nothing more than a replace, a procedure that could easily be automated with very minimal effort.  It shouldn't take you much time at all to write a simple .NET application to do this for you.  It would just be a simple .NET to Silverlight converter and validator (to test for assemblies not supported in Silverlight).  Put that application in your Post Build Events (one of the top 5 greatest features of Visual Studio!) and you're done.  No special binary hex value searching necessary.  All you're doing is changing two well documented settings (the public key token and version).
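
As a rough, hypothetical sketch of such a converter (the function name and whole-file replace strategy are mine, written in Python for brevity rather than as the .NET application suggested above; the token and version values come straight from the extern lists in this article, and a real tool should restrict its edits to the .assembly extern blocks):

```python
import re

# .NET's public key token for mscorlib/System/etc. vs. Silverlight's.
DOTNET_TOKEN = "B7 7A 5C 56 19 34 E0 89"
SILVERLIGHT_TOKEN = "7C EC 85 D7 BE A7 79 8E"

def retarget_il(il_text):
    """Retarget a disassembled IL file from .NET to Silverlight references."""
    # Swap the public key token wherever it appears.
    il_text = il_text.replace(DOTNET_TOKEN, SILVERLIGHT_TOKEN)
    # Bump extern versions such as 2:0:0:0 and 3:0:0:0 to Silverlight's 2:0:5:0.
    # (Caution: this naive version also matches the assembly's own .ver line
    # if it happens to be 2:0:0:0; a real tool should only touch extern blocks.)
    il_text = re.sub(r"\.ver [23]:0:0:0", ".ver 2:0:5:0", il_text)
    return il_text
```

Run the output through ILAsm as shown earlier and the resulting assembly should pass Visual Studio's IsSilverlightAssembly check.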

For certain assemblies, this isn't the end of the story.  If your .NET assembly has a strong name, then by modifying its IL, you have effectively rendered it useless.  Aside from disambiguation, strong names are also used for tamper protection.  You can sort of think of them as a CRC32 in this sense.  If you were to modify the IL of a strongly-named assembly and reassemble it, you would get an error like the following when the assembly loads:

StrongNameException 

However, as you know from the fact that we have looked at the raw text of the source code with our own eyes, the strong name does absolutely no encryption of the IL.  That's one of the most common misconceptions about strong names.  They are not used for public key encryption of the assembly; they merely sign a hash of it.  Therefore, we are able to get around this by removing the public key from our assembly before using ILAsm.  Below is what the public key will look like in your IL file.  Just delete this section and run ILAsm.

.publickey = (00 24 00 00 04 80 00 00 94 00 00 00 06 02 00 00
              00 24 00 00 52 53 41 31 00 04 00 00 01 00 01 00
              37 3C 5A 7F 6D B6 3F 30 D8 3F DE E3 17 FE E5 2E
              68 43 16 A9 7C 42 69 5A 05 52 E6 73 C5 AC 58 7E
              B0 00 9F DC 1B 0A 78 57 79 12 79 53 E1 60 EB C9
              ED 49 7C 8C 73 1B 01 A7 BA 57 79 B5 53 83 8B CA
              8D F8 6F 3B BD A5 E4 BA 6A 12 B9 52 F2 E9 A3 FC
              42 17 E4 33 97 92 DC 21 30 57 B9 D3 63 7A F2 43
              73 42 70 18 89 8B 44 B9 D4 5A BA A9 21 A3 D9 E0
              86 20 3C 30 01 A9 B9 BB F4 D8 79 B7 7D 56 5A A9)

Upon using ILAsm to create the binary version of the same IL, you will be able to add your assembly, compile, and run your application without a problem.  However, you can take this one step further by telling ILAsm to sign the assembly using your original strong name key.  To do this, just use the /key command line option to specify the strong name key you would like to use.  Below is the new syntax for re-signing your assembly:

ILAsm DotNet.il /dll /resource:DotNet.res /out:DotNet11.dll /key=..\..\MyStrongNameKey.snk

At this point you have a strongly-named Silverlight assembly created from your existing .NET assembly.

Now, before moving on to explain a more flexible method of reuse, I want to cover a few miscellaneous topics.  First, for those of you who know some IL and are trying to be clever to make this process even simpler, you may think you could just do the following:

.assembly extern mscorlib { auto }

This won't work, as ILAsm will see "auto" and place the 2.0.0.0 version in its place, thus leaving you right where you started.  Also, don't even think about leaving the entire mscorlib part off either.  That won't fool anyone, since ILAsm will detect that it's missing and add it before continuing the assembly process.  You need to explicitly state that you want assembly version 2.0.5.0.

Second, you need to think twice before you add a Silverlight assembly to a .NET application.  In the Visual Studio world, if you add a .NET assembly, you add only that assembly.  But if that assembly is a Silverlight assembly, then you will see all of the associated Silverlight assemblies added for each culture you have.  When I did this on my system, exactly 100 extra files were added to my Bin folder!  That's insane.  So, perhaps the Visual Studio team put an "Add Reference" block in the wrong place!

The File-Level Technique

Now all of this is great.  You can easily access your .NET assemblies in Silverlight.  But many times this isn't even what you need.  You need to remember that every time you reference an assembly in Silverlight, you increase the size of your Silverlight XAP package.  Whereas .NET will only register assembly references in IL when they are actually used, Silverlight will package referenced assemblies in the XAP file regardless of use.  The assemblies will also be registered in the AppManifest.xaml file as assembly parts.  Though the XAP file is nothing more than a ZIP file, which shrinks the size of the assemblies, this still spells "bloat" if all you need is just a few basic types from an assembly that's within your control.  For situations like this, there's a much simpler and much more flexible solution.
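For reference, the assembly parts the packager registers look something like this in AppManifest.xaml (the application assembly name here is just an example):

```xml
<Deployment xmlns="http://schemas.microsoft.com/client/2007/deployment"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            EntryPointAssembly="MyApplication"
            EntryPointType="MyApplication.App"
            RuntimeVersion="2.0.31005.0">
  <Deployment.Parts>
    <!-- every referenced assembly becomes a part, used or not -->
    <AssemblyPart x:Name="MyApplication" Source="MyApplication.dll" />
    <AssemblyPart x:Name="System.Json" Source="System.Json.dll" />
  </Deployment.Parts>
</Deployment>
```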

The solution to this again deals with understanding the internals of your system: whenever you add a file to your project in Visual Studio, all you are really doing is adding a file to an ItemGroup XML section in the .NET project file.  This is just a basic text file that describes the project.  As you may have guessed, the ItemGroup section simply contains groups of items.  In the case of compilation files (i.e. classes, structs, enums, etc...), they are Compile items.  Here's an example of a snippet from a .NET project:

<ItemGroup>
  <Compile Include="Client\PersonClient.cs" />
  <Compile Include="Agent\PersonAgent.cs" />
  <Compile Include="Properties\AssemblyInfo.cs" />
  <Compile Include="Configuration.cs" />
  <Compile Include="Information.cs" />
  <Compile Include="_DataContract\Person.cs" />
  <Compile Include="_ServiceContract\IPersonService.cs" />
</ItemGroup>

Given this information, all you need to do is (1) create a Silverlight version of this assembly, (2) open the project file and (3) copy/paste in the parts you want to use in your Silverlight project with the appropriate relative paths changed.  This will create a link from the Silverlight project's items to the physical items.  No copying is done.  They are pointing to the exact same file.  When they are compiled, there is no need to do any IL changes in your assemblies at all since the Silverlight assembly will be Silverlight and the .NET assembly will be .NET.
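For example, linking the Person data contract from a .NET project into a Silverlight project might look something like this in the Silverlight project file (the relative path is illustrative):

```xml
<ItemGroup>
  <!-- Include points at the physical file in the .NET project; Link controls
       where the item appears in Solution Explorer.  No copy is made. -->
  <Compile Include="..\Contact.Service\_DataContract\Person.cs">
    <Link>_DataContract\Person.cs</Link>
  </Compile>
</ItemGroup>
```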

Now that you know about this under-the-covers approach, you should be aware that this is actually a fully supported option in Visual Studio.  Just go to add an existing item to your project and, instead of clicking Add or just hitting Enter, hit the little arrow next to Add and select "Add As Link".  This will do the exact same thing as our bulk copy/paste method in the project file.  Here's a screen shot of the option in Visual Studio:

AddAsLink

What may be more interesting to you is that this feature may be used anywhere in .NET.  You can use this to reuse any files in your entire system.  It's a very powerful technique for reusing specific items in assemblies.  It comes in very handy when two assemblies need to share classes and creating a third assembly which both may access would lead to needless complexity.

Conclusion

Given these two techniques, you should be able to effectively architect a solution that scales to virtually any number of developers.  The first technique is easy to deploy using a custom utility and post build events, while the second is natively supported by any good version control system.  Keep in mind, though, that when using the first technique you may not always need to do this on every build.  The best approach I've seen for this is to have a centralized location on a network share that contains nightly (or whatever) builds of core assemblies.  Then, a login script will copy each of the assemblies to each developer's machine.  This will cut down on the complexity of compilation and dramatically lower the time to compile any solution.

Regardless of which technique you use, you should feel a sense of freedom knowing of their existence.  This is especially true if all you are doing is trying to share data contracts between .NET and Silverlight.  As I've mentioned in my popular 70+ page "Understanding WCF in Silverlight 2" document, the "Add Service Reference" feature is not something that should be used in production.  In fact, it's painful in development as well.  Using the techniques described here, you can easily share your data contracts between your .NET server and the Silverlight client without the FrontPage/Word 95 style code generation.  For more information on this specific topic, see the aforementioned document.

Links

kick it on DotNetKicks.com

FREE Silverlight Training on the Web



In case you didn't know it, knowledge is free.  In fact, it always has been.  Some cultures make it hard to obtain, but it's free nonetheless.  The Internet gives you extremely close access to this knowledge.  You can randomly choose just about any topic in the world and find at least one article, blog posting, or Wikipedia entry on the topic.  In fact, when I was in college I showed up only once to my Kansas State University Physics II class.  Instead, I kept up with the class from home by watching the MIT OpenCourseWare video courses.  When it comes to Internet-related technologies like Silverlight, knowledge is even easier to find.

I see some of the biggest training companies offering all sorts of great Silverlight courses.  However, these are extremely pricey.  There are also many books on Silverlight coming out.  Again, not free.  But think about it: how do you think the trainers and authors get their information?  When I was offered my Silverlight 2 book deal (since being on a deadline sucks, I turned it down), where do you think I would have gotten my information?  It's all free online.  Here in December 2008, there are all kinds of amazing free resources for learning Silverlight.  You do not need training.  You do not need to buy a book.  Here are some of the resources I've found this year to help bring you from ground zero to being a Silverlight master:

First, there's the 53-part video series at Silverlight.net.  This series covers just about every single topic you will ever see in your Silverlight career.  However, I would consider these to be at the basic level.  They cover the fundamentals of each topic, give great tips, and progressively give more interesting examples as the videos progress.  If all you are going to be doing is under-using Silverlight 2 as an RIA platform and for general [boring] UI development, then this series may be 90% of what you need.  Link: http://silverlight.net/Learn/videocat.aspx?cat=2

Second, there's the 44-part video series from Mike Taulty.  This is the guy behind the MSDN Nuggets videos.  These videos are more at the intermediate-to-advanced level and are somewhat focused on "under-the-covers" development.  Mike doesn't do drag-n-drop videos.  He teaches real technology.  Whereas the previous series will discuss concepts and how to do things "out of the box", Mike's videos show you how to work with things at a more mechanical level, thus giving you a much greater level of control.  If you don't know the topics he's discussing in the videos, you don't know Silverlight.  Link: http://channel9.msdn.com/posts/Dan/Mike-Taulty-44-Silverlight-20-Screencasts/

Third, let's not forget that Microsoft has its annual Mix and PDC conferences.  Microsoft makes sure that the content for these conferences is freely available online.  The Mix videos are very specific and, therefore, should probably be watched on an as-needed basis.  You can just follow the link to see the wide variety of topics.  Since it's a conference, however, some of the information will be marketing-speak, but there's a lot of good stuff in the videos as well.  The PDC, however, is much less marketing-ish and there were a few Silverlight 2 sessions.  Links: http://silverlight.net/learn/videocat.aspx?cat=8 and https://sessions.microsoftpdc.com/public/timeline.aspx.

Fourth, if you're the reading type, then you may prefer the Silverlight 2 e-book at learn-silverlight-tutorial.com.  This e-book covers a ton of information.  Much like the 53-part series, I would mark this down as basic-level.  It touches on a wide variety of topics.  However, much of the information is just that: "a touch".  It's not very deep, but it is rather wide.  Link: http://www.learn-silverlight-tutorial.com/

Fifth, Microsoft has always been good about providing QuickStarts.  These are kind of a cross between visual, text, and hands-on learning.  They are also the typical go-to for anyone new to anything.  The ASP.NET QuickStarts are still incredibly popular these many years later.  The Silverlight ones are quite well done as well.  The topics are basic-to-intermediate and range from general UI controls to cooler stuff like JavaScript/DOM interop.  However, you may feel completely free to absolutely ignore the completely worthless "web services" section.  Whoever wrote that thought he or she was writing about the hopelessly-flawed ASMX, not the image-of-beauty WCF and, therefore, didn't even remotely bother to obey the most fundamental of WCF purposes and practices (i.e. keep your address, binding, and contract away from your implementation!)  Link: http://silverlight.net/quickstarts/

Speaking of WCF, the last resource I want to mention is my document entitled "Understanding WCF in Silverlight 2".  This one has received a lot of attention since I wrote it in November 2008.  In fact, it's now listed on the WCF MSDN home page.  It's there because I cover WCF from the ground up for both .NET and Silverlight in a very deep manner.  If you are new to WCF, SOA, or Silverlight, then this is a good place to start (of course, no bias here.)  I wrote this document to help both people new to WCF and Silverlight as well as those who have been working with either for a while.  Even if you're not too serious about Silverlight, you should still read this detailed document to understand WCF better.  I don't play around with introductory nonsense; I hit the ground running with best practices and proper architectural principles.  Link: http://www.netfxharmonics.com/2008/11/Understanding-WCF-Services-in-Silverlight-2

Though it's not a straight learning resource, I suppose I should also mention that you can always check out the Silverlight tag in my Delicious account: http://delicious.com/quantum00/silverlight.  However, keep in mind that just because I bookmark something, it doesn't mean I'm recommending the resource.  It just means it was interesting and/or provided some value to me.  You can expect this to be updated in the months to come.  I live off of my Delicious account.

Another thing I would like to mention is that if you know WPF and web development, then you almost get Silverlight knowledge naturally.  Silverlight is essentially a subset of WPF for the web.  You just take WPF, rip out a bunch of features, add a handful of topics, move it to the web, and you have Silverlight.  Much of your skills are reusable if you already know WPF.  Actually, a lot of your skills are reusable if you're a .NET developer in general.  Just whip open Reflector and start looking through the framework; you'll see that there's a lot less than what's in the .NET Framework, thus requiring much less learning time.

So, don't waste your money on books.  The blog is the new book.  Don't bother asking your employer for Silverlight training.  OK, well, if you just want some time off from work, sure, go ahead and ask.  Really, though, these resources will give you what you need for your Silverlight development.  In fact, if you were to compare the syllabus for an expensive course with the topics found in the first two sections of videos mentioned (97 of them!), you will see that the ROI for the course is virtually non-existent.

Links Summary

Tim Ferris - Trial by Fire



This is beyond awesome.  Tim Ferris, author of one of the greatest books ever written, Four Hour Work Week, has announced that he has a new show called Trial by Fire.  I'm incredibly excited to hear this.  Tim Ferris is one of my core role models for just about every area of life.  I regularly reread and reference his Four Hour Work Week book and am constantly studying his blog.  In fact, when you read my NetFXHarmonics web site, you are reading Ferris principles applied to the development world.

He calls himself a life hacker.  What it takes others years to master, he tries to learn in days.  This is the primary purpose of his Trial by Fire show.  It's also something I've been studying for years through my research in accelerated learning and experience induction.  Ferris sometimes mentions that his technique is to deconstruct, streamline, and remap.  If you read my recent posts on streamlining WCF and WCF in Silverlight, you've seen a taste of how you can apply these principles to development.  It's how I personally think, act, and speak.

For more information on Tim Ferris, his show, or his book, check out his blog at http://www.fourhourblog.com/.  His blog is essentially an extension to his book, Four Hour Work Week, a book every single person in the world needs to read and reread.  You absolutely must buy this book.  Get it in print or get it in audio, just get it.

Understanding WCF



If you like this document, please consider writing a recommendation for me on my LinkedIn account.

Contents

Introduction

One of the most beautiful things about the Windows Communication Foundation (WCF) is that it's a completely streamlined technology.  When you can provide solutions to a myriad of diverse problems using the same principles, you know you're dealing with a work of genius.  This is the case with WCF.  With a single service implementation, you can provide access to ASMX, PHP, Java, TCP, named pipe, and JSON-based clients by adding a single XML element for each type of connection you want to support.  On the flip side, with a single WCF client you can connect to each of these types of services, again, by adding a single line of XML for each.  It's that simple and streamlined.  Not only that, this client scenario works the same for both .NET and Silverlight.
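As a taste of what that looks like, here's roughly how one service implementation is exposed over three protocols; each endpoint element is one "connection type" (the addresses and the behavior name are illustrative, and the webHttpBinding endpoint also needs an endpoint behavior configured to produce JSON):

```xml
<service name="Contact.Service.PersonService">
  <!-- interoperable SOAP over HTTP (ASMX, PHP, Java clients) -->
  <endpoint address="" binding="basicHttpBinding"
            contract="Contact.Service.IPersonService" />
  <!-- fast binary communication for .NET-to-.NET -->
  <endpoint address="net.tcp://localhost:8081/Person" binding="netTcpBinding"
            contract="Contact.Service.IPersonService" />
  <!-- JSON for AJAX clients -->
  <endpoint address="json" binding="webHttpBinding"
            behaviorConfiguration="JsonBehavior"
            contract="Contact.Service.IPersonService" />
</service>
```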

In this document, I'm going to talk about how to access WCF services using Silverlight 2 without magic.  There will be no proxies, no generated code, no 3rd party utilities, and no disgusting "Add Service Reference" usage.  Just raw WCF.  This document will cover WCF connectivity in quite some depth.  We will talk about service setup; various WCF, SOA, and Silverlight paradigms; client setup; some security issues; and a few supplemental features and techniques to help you aid and optimize service access.  You will learn about various WCF attributes, some interfaces, and a bunch of internals.  Though this document will be in depth, nothing will ever surpass the depth of MSDN.  So, for a fuller discussion of any topic, see the WCF documentation on MSDN.

Even though we're focusing on Silverlight, most of what will be explained will be discussed in a .NET context and then applied to Silverlight 2.  That is, instead of learning .NET WCF and Silverlight WCF separately, you will learn .NET WCF and how to vary it for Silverlight.  This comparative learning method should help you both remember and understand the concepts better.  Before we begin, though, let's set up a particular WCF service.  After all, if we don't have a service, we can't talk about accessing it.

Service Setup In Depth

When working with WCF, you are working with a completely streamlined system.  The most fundamental concept in this system is the ABC.  This concept scales from Hello World to the most complex sales processing system.  That is, for all WCF communication, you need an address, a binding, and a contract.  Actually, this is for any communication anywhere, even when talking to another person.  You have to know to whom, how, and what.  If you don't have these three, then there can't be any communication.

With these three pieces of information, you either create a service-side endpoint which a client will access or a client-side channel which the client will use to communicate with the service.

WCF services are set up using a 3-step method:

  • First, create a service contract with one or more operation contracts.
  • Second, create a service implementation for those contracts. 
  • Third, configure a service host to provide that implementation with an endpoint for that specific contract.

Let's begin by defining a service contract.  This is just a simple .NET interface with the System.ServiceModel.ServiceContractAttribute attribute applied to it.  This interface will contain various operation contracts, which are simply method signatures with the System.ServiceModel.OperationContractAttribute applied to each.  Both of these attributes are in the System.ServiceModel assembly.

Do not under any circumstances apply the ServiceContract attribute directly to the implementation (i.e. the class).  The ability to do this is probably the absolute worst feature in WCF.  It defeats the entire purpose of using WCF: your address, your binding, your contract, and your implementation are completely separate.  Because this essentially makes your implementation your contract, all your configuration files will be incredibly confusing to those of us who know WCF well.  When I look for a contract, I look for something that starts with an "I".  Don't confuse me with "PersonService" as my contract.  Person service means person… service.  Not only that, but later on you will see how to use a contract to access a service.  It makes no sense to have my service access my service; thus, with the implementation being the contract, your code will look painfully confusing to anyone who knows WCF.

Here's the sample contract that we will use for the duration of this document:

using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    [ServiceContract(Namespace = Information.Namespace.Contact)]
    public interface IPersonService
    {
        //- GetPersonData -//
        [OperationContract]
        Person GetPersonData(String personGuid);
    }
}

Keep in mind that when you design for WCF, you need to keep your interfaces as simple as possible.  The general rule of thumb is that you should have somewhere between 3 to 7 operations per service contract.  When you hit the 12-14 mark, it's seriously time to factor out your operations.  This is very important.  As I'll mention again later, in any one of my WCF projects I'll have upwards of dozens of service contracts per service.  You need to continuously keep in mind what your purpose is for creating this service, filtering those purposes through the SOA filter.  Don't design WCF services like you would a framework, which, even then shouldn't have many access points!

The Namespace property set on the attribute specifies the namespace used to logically organize services.  Much like how .NET uses namespaces to separate various classes, structs, and interfaces, SOAP services use namespaces to separate various actions.  The namespace may be arbitrarily chosen; the client and service must simply agree on it.  In this case, the namespace is the URI http://www.netfxharmonics.com/service/Contact/2008/11/.  This namespace will also be used on the client.  This isn't a physical URL (uniform resource locator), but a logical URI (uniform resource identifier).  Despite what some may say, both terms are in active use in daily life.  Neither is more important than the other and neither is "deprecated".  All URLs are URIs, but not all URIs are URLs, as you can see here.
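As an aside, the Information.Namespace.Contact value used in the attribute is nothing more than a string constant.  Its exact shape is up to you; something along these lines (the class layout is my assumption, the URI is the one stated above) keeps the namespace in one place for both the service contract and the data contract:

```csharp
using System;
//+
namespace Contact.Service
{
    public static class Information
    {
        //- @Namespace -//
        public static class Namespace
        {
            //+ must be const so it can be used in attribute arguments
            public const String Contact = "http://www.netfxharmonics.com/service/Contact/2008/11/";
        }
    }
}
```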

Notice that in this interface there is a method signature that returns Person.  Person is a data contract.  Data contracts are classes which have the System.Runtime.Serialization.DataContractAttribute attribute applied to them.  These have one or more data members, which are public or private properties or fields that have the System.Runtime.Serialization.DataMemberAttribute attribute applied to them.  Both of these attributes are in the System.Runtime.Serialization assembly.  This is important to remember; if you forget, you will probably assume them to be in the System.ServiceModel assembly and your contract will never compile.

Notice I said that data members are private or public properties or fields.  That was not a typo.  Unlike the serializer for the System.SerializableAttribute attribute, the serializer for the DataContract attribute allows you to have private data members.  This allows you to hide information from developers while still allowing services to see it.  Related to this is how classes with the DataContract attribute differ from classes with the Serializable attribute.  When you use the Serializable attribute, you are using an opt-out model.  This means that when the attribute is applied to the class, each member is serializable.  You then opt out particular fields (not properties; thus one major inflexibility) using the System.NonSerializedAttribute attribute.  On the other hand, when you apply the DataContract attribute, you are using an opt-in model.  Thus, when you apply this attribute, you must opt in each field or property you wish to be serialized by applying the DataMember attribute.  Now to finally look at the Person data contract:

[DataContract(Namespace = Information.Namespace.Contact)]
public class Person
{
    //- @Guid -//
    [DataMember]
    public String Guid { get; set; }


    //- @FirstName -//
    [DataMember]
    public String FirstName { get; set; }


    //- @LastName -//
    [DataMember]
    public String LastName { get; set; }


    //- @City -//
    [DataMember]
    public String City { get; set; }


    //- @State -//
    [DataMember]
    public String State { get; set; }


    //- @PostalCode -//
    [DataMember]
    public String PostalCode { get; set; }
}

Note how simple this class is.  This is incredibly important.  You need to remember what this class represents: data moving over the wire.  Because of this, you need to make absolutely sure that you are sending only what you need.  Just because your internal "business object" has 10,000 properties doesn't mean that your service client will ever be able to handle it.  You can't get blood from a turnip.  Your business desires will never change the physics of the universe.  You need to design with this specific scenario of service-orientation in mind.  In the case of Silverlight, this is even more important since you are dealing with information that needs to get delegated through a web browser before the plug-in ever sees it.  Not only that, but every time you send an extra property over the wire, you are making your Silverlight application that much less responsive.

When I coach architects on database design, I always remind them to design for the specific system which they'll be using (i.e. SQL Server) and always keep performance, space, and API usability in mind (this is why it's the job of the architect, not the DBA, to design databases!)  In the same way, if you are designing a system that you know will be used over the wire, account for that scenario ahead of time.  Much like security, performance and proper API design aren't "features", they're core parts of the system.  Do not design 10 different classes, each representing a property which will be used in another class which, in turn, will be serialized and sent over the wire.  This will be so absolutely massive that no one will ever be able to handle it.  If you have more than around 15 properties in your entire object graph, it's seriously time to rethink what you want to send.  And, never, ever, ever send an instance of System.Data.DataSet over the wire.  There has never been, is not now, and never will be any reason to ever send any instance of this type anywhere.  It's beyond massive and makes the 10,000 property data transfer object seem lightweight.  The fact that something is serializable doesn't mean that it should be.

This is the main reason you should not apply the Serializable attribute to all classes.  Remember, this attribute follows an opt-out model (and a weak one at that).  If you want your "business objects" to work in your framework as well as over the wire, you need to remove this attribute and apply the DataContract attribute.  This will allow you to specify via the DataMember attribute which properties will be used over the wire, while leaving your existing framework completely untouched.  This is the reason the DataContract attribute exists!  Microsoft realized that the Serializable attribute is not fine-grained enough for SOA purposes.  They also realized that there's no reason to force everyone in the world to write special data transfer objects for every operation.  Even then, use DataContract sparingly.  Just as you should keep as much private as possible and as much internal as possible, you want to keep as much un-serializable as possible.  Less is more.

In my Creating Streamlined, Simplified, yet Scalable WCF Connectivity document, I explain that these contracts are considered public.  That is, both the client and the service need the information.  It's the actual implementation that's private.  The client needs only the above information, whereas the service needs the above information as well as the service implementation.  Therefore, as my document explains, everything mentioned above should be in a publicly accessible assembly separate from the service implementation to maximize flexibility.  This will also allow you to rely on the original contracts instead of relying on a situation where the contracts are converted to metadata over the wire and then converted to sloppily generated contracts.  That's slower, adds latency, adds another point of failure, and completely destroys your hand-crafted, highly-optimized contracts.  Simply add a reference to the same assembly on both the client and server side and you're done.  If multiple people are using the service, just hand out the public assembly.

At this point, many will try to do what I've just mentioned in a Silverlight environment to find that it doesn't seem to work.  That is, when you try to add a reference to a .NET assembly in a Silverlight project, you will get the following error message:

DotNetSilverlightReferenceMessageBox

Fortunately, this isn't the end of the world.  In my document entitled Reusing .NET Assemblies in Silverlight, I explain that this is only a Visual Studio 2008 constraint.  There's absolutely no technical reason why Silverlight can't use .NET assemblies.  Both the assembly and module formats are the same for Silverlight and .NET.  When you try to reference an assembly in a Silverlight project, Visual Studio 2008 does a check to see what version of mscorlib the assembly references.  If it's not 2.X.5.X, then it says it's not a Silverlight assembly.  So, all you need to do is modify your assembly to have it use the appropriate mscorlib file.  Of course, then it's still referencing the .NET System.ServiceModel and System.Runtime.Serialization assemblies.  Not a big deal, just copy/paste the Silverlight references in.  My aforementioned document explains everything you need to automate this procedure.

Therefore, there's no real problem here at all.  You can reuse all your contracts on both the service-side and the client-side, in both .NET and Silverlight environments.  As you will see a bit later, Silverlight follows an async communication model and, therefore, must use async-compatible service contracts.  At that point you may begin to think that you can't simply have a one-stop shop for all your contract needs.  However, this isn't the case.  As it turns out, .NET can do asynchronous communication too, so when you create that new contract, you can keep it right next to your original service contract.  Thus, once again, you have a single point where you keep all your contracts.
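To preview what that async-compatible contract looks like, Silverlight uses the standard WCF Begin/End asynchronous pattern.  Here's a sketch based on the IPersonService contract above (the interface name is illustrative):

```csharp
using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    [ServiceContract(Namespace = Information.Namespace.Contact)]
    public interface IPersonServiceClient
    {
        //- GetPersonData -//
        //+ the Begin/End pair maps to the same GetPersonData operation on the wire
        [OperationContract(AsyncPattern = true)]
        IAsyncResult BeginGetPersonData(String personGuid, AsyncCallback callback, Object state);
        Person EndGetPersonData(IAsyncResult result);
    }
}
```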

Moving on to step 2, we need to use these contracts to create an implementation.  The service implementation is just a class which implements a service contract.  The service implementation for our document here is actually incredibly simple:

using System;
//+
namespace Contact.Service
{
    public class PersonService : Contact.Service.IPersonService
    {
        //- @GetPersonData -//
        public Person GetPersonData(String personGuid)
        {
            return new Person
            {
                FirstName = "John",
                LastName = "Doe",
                City = "Unknown",
                Guid = personGuid,
                PostalCode = "66062",
                State = "KS"
            };
        }
    }
}

That's it.  So, if you already have some logic you know is architecturally sound and you would like to turn it into a service, just create an interface for your class and add some attributes to the interface.  That's your entire service implementation.
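
For reference, the service contract referred to above is just such an attributed interface.  A minimal sketch (matching the implementation shown, with the namespace value omitted for brevity) would look like this:

```csharp
using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    [ServiceContract]
    public interface IPersonService
    {
        //- GetPersonData -//
        [OperationContract]
        Person GetPersonData(String personGuid);
    }
}
```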

Step 3 is to configure a service host with the appropriate endpoints.  In our document, we are going to be using an HTTP-based service.  Thus, after we set up a new web site, we create a Person.svc file in the root and add to it a service directive specifying our service implementation.  Here's the entire Person.svc file:

<%@ ServiceHost Service="Contact.Service.PersonService" %>

No, I'm not joking.  If you keep your implementation in this file as well, then you are not using WCF properly.  In WCF, you keep your address, your binding, your contract, and your implementation completely separate.  By putting your implementation in this file, you are essentially tying the address to the implementation.  This defeats the entire purpose of WCF.  So, again, the above code is all that should ever be in any svc file anywhere.  Sometimes you may have another attribute set on your service directive, but this is basically it.

This is an unconfigured service host.  Thus, we must configure it.  We will do this in the service web site's web.config file.  There's really only one step to this, but that step has a prerequisite: setting up a service endpoint requires a declared service.  Thus, we will declare a service and add an endpoint to it.  An endpoint specifies the WCF ABC: an address (where), a binding (how), and a contract (what).  Below is the entire web.config file up to this point:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.serviceModel>
    <services>
      <service name="Contact.Service.PersonService">
        <endpoint address="" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
      </service>
    </services>
  </system.serviceModel>
</configuration>

This states that there can be "basicHttpBinding" communication through Contact.Service.IPersonService at address Person.svc to Contact.Service.PersonService.  Let's quickly cover each concept here.

The specified address is a relative address.  This means that the value of this attribute is appended onto the base address.  In this case, the base address is the address specified by our web server.  In the case of a service hosted outside of a web server, you can specify an absolute address here.  But, remember, when using a web server, the web server is going to control the IP address and port bindings.  Our service is at Person.svc, thus the base URL is already provided for us.  In this case the address is blank, but you will use this address attribute if you add more endpoints, as you will see later.
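
For comparison, in a self-hosted service (outside a web server) you declare the base address yourself in configuration.  The following fragment is a hypothetical sketch; the port is made up:

```xml
<service name="Contact.Service.PersonService">
  <host>
    <baseAddresses>
      <add baseAddress="http://localhost:8000/Person" />
    </baseAddresses>
  </host>
  <!-- relative address "basic" yields http://localhost:8000/Person/basic -->
  <endpoint address="basic" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
</service>
```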

The binding specifies how the information is to be formatted for transfer.  There's actually nothing too magical about a binding, though.  It's really just a collection of binding elements and pre-configured parameter defaults, which are easily changed in configuration.  Each binding will have at a minimum two binding elements.  One of these is a message encoding binding element, which specifies how the message is formatted.  For example, the message could be text (via the TextMessageEncodingBindingElement class; note: binding elements are in the System.ServiceModel.Channels namespace), binary (via the BinaryMessageEncodingBindingElement class), or some other encoding.  The other required binding element is the transport binding element, which specifies how the message is to go over the wire.  For example, the message could go over HTTP (via the HttpTransportBindingElement), HTTPS (via the HttpsTransportBindingElement), TCP (via the TcpTransportBindingElement), or one of a bunch of others.  A binding may also have other binding elements to add more features.  I'll mention this again later, when we actually use a binding.
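
To see that a binding really is just a stack of binding elements, here is a sketch that hand-builds a rough equivalent of an HTTP/text binding out of the two required element types.  This is purely illustrative; in practice you would normally just use basicHttpBinding:

```csharp
using System.ServiceModel.Channels;

//+ a message encoding element plus a transport element makes a minimal binding
CustomBinding binding = new CustomBinding(
    new TextMessageEncodingBindingElement(),   //+ how the message is formatted
    new HttpTransportBindingElement());        //+ how the message goes over the wire
```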

The last part of an endpoint, the contract, has already been discussed earlier.  One thing that you really need to remember about this, though, is that you are communicating through a contract to the hosted service.  If you are familiar with interface-based development in .NET or COM, then you already have a strong understanding of what this means.  However, let's review.

If a class implements an interface, you can access the instantiated object through the interface.  For example, in the following code, you are able to access the Dude object through the ISpeak interface:

interface ISpeak
{
    void Speak(String text);
}


class Dude : ISpeak
{
    public void Speak(String text)
    {
        //+ speak text
    }
}


public class Program
{
    public void Run()
    {
        ISpeak dude = new Dude();
        dude.Speak("Hello");
    }
}

You can think of accessing a WCF service as being exactly like that.  You can push the comparison even further.  Say the Dude class implemented IEat as well.  Then we can access the instantiated Dude object through the IEat interface.  Here's what I mean:

interface ISpeak
{
    void Speak(String text);
}


interface IEat
{
    void Eat(String nameOfFood);
}


class Dude : ISpeak, IEat
{
    public void Speak(String text)
    {
        //+ speak text
    }

    public void Eat(String nameOfFood)
    {
        //+ eat food
    }
}


public class Program
{
    public void Run()
    {
        IEat dude = new Dude();
        dude.Eat("Pizza");
    }
}

In the same way, when configuring a WCF service, you will add an endpoint for each contract through which you would like your service to be accessed.

Though it's beyond the scope of this document, WCF also allows you to version contracts.  Perhaps you added or removed a parameter from your contract.  Unless you want to break all the clients accessing the service, you must keep the old contract applied to your service (read: keep the old interface on the service class) and keep the old endpoint running by setting up a parallel endpoint.
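
As a sketch of what such parallel endpoints might look like in configuration (the "v2" address and the second contract name are hypothetical), the old and new contracts simply sit side by side on the same service:

```xml
<service name="Contact.Service.PersonService">
  <!-- original clients keep working against the old contract -->
  <endpoint address="" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
  <!-- new clients use the revised contract at a parallel address -->
  <endpoint address="v2" binding="basicHttpBinding" contract="Contact.Service.IPersonService2" />
</service>
```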

You will add a new service endpoint every time you change your version, change your contract, or change your binding.  On a given service, you may have dozens of endpoints.  This is a good thing.  Perhaps you provide for four different bindings, with two of them having two separate configurations each, three different contracts, and two different versions of one of the contracts.  In this document, we are going to start out with one endpoint and add more later.

Now we have set up a complete service.  However, it's an incredibly simple service setup, thus not requiring too much architectural attention.  When you work with WCF in a real project, you will want to organize your WCF infrastructure to be a bit more architecturally friendly.  In my document entitled Creating Streamlined, Simplified, yet Scalable WCF Connectivity, I explain streamlining and simplifying WCF connectivity and how you can use a private/public project model to simplify your solution.

Architectural Overview: Creating Streamlined, Simplified, yet Scalable WCF Connectivity

Contents

Introduction

One of the most awesome things about WCF is that the concepts scale extremely well.  If you understand the ABCs of WCF, then you can do anything from creating a simple Hello World to a complex sales processing service.  It's all based on having an address, a binding, and a contract.  All the other concepts like behaviors, validators, and service factories are simply supplemental to the core of the system.  When you understand the basics, you have the general essence of all of WCF.

Because of this fully-scalable ABC concept, I'm able to use the same pattern for WCF architecture and development for every solution.  This is really nice because it means I don't have to waste time designing a new setup every time a new problem comes along.  In this discussion, I would like to demonstrate how you can create your own extremely efficient WCF solution based on my template.  Along the way, you will also learn a few pieces of WCF internals to help you understand WCF better.

Before I begin the explanation, though, keep in mind that most concepts mentioned here are demonstrated in my Minima Blog Engine 3.1.  This is my training software demonstrating an enormous world of modern technologies.  It's regularly refactored and often fully re-architected to be in line with newer technologies.  This blog engine relies heavily on WCF, as it's a service-oriented blog engine.  Regardless of how many blogs you have (or any other series of "content entries"; for example, the documentation section of the Themelia web site is Minima), you have a single set of services that your entire organization uses.  If you understand the WCF usage in Minima, you will understand WCF very well.

Now, onto the meat (or salad, for the vegetarians) of the discussion...

Service Structure

On any one of my solutions, you will find an X.Service project and an X.ServiceImpl project, where X is the solution "code" (either the solution name or some other word that represents the essence of the solution).  The former is the public .NET project which contains service contracts, data contracts, service configuration, and service clients.  The latter is the private .NET project which contains the service implementations, behaviors, fault management, service hosts, validators, and other service-side-only, black-box portions of the service.  All projects have access to the former; only the service itself will ever even know about the latter.

This is a very simple setup based upon a public/private model, like in public key cryptography.  The idea is that everything private is protected with all your might and everything public is released for anyone, within the context of the solution, to see.

For example, below is the Minima.Service project for Minima Blog Engine 3.1.  Feel free to ignore the folder structure; no one cares about that.  Just because I make each folder a namespace, with prefixed folder names for exclusions, doesn't mean anyone else in the world does.  I find it to be the most optimal way to manage namespaced and non-namespaced groups, but the point of this discussion is the separation of concerns in the projects.

MinimaService

For the time being, simply notice that data contracts and service contracts are considered public.  Everything else in the file is either meaningless to this discussion or will be discussed later.

Here is the Minima.ServiceImpl project for the same solution:

MinimaServiceImpl

Here you can see anything from the host factory to various behaviors to validators to fault management to LINQ-to-SQL capabilities.  Everything here is considered private.  The outside world doesn't need to know, therefore shouldn't know.

For the sake of a more simplified discussion, let's switch from the full-scale solution example of Minima to a smaller "Person" service example.  This "Person" service is part of the overall "Contact" solution.  Here's the Contact.Service for our Contact solution:

PersonService

As you can see, you have a standard data contract, a service contract, and a few other things, which will be discussed in a bit.

For this example, we don't need validators, fault management, behaviors, or host factories; all we need is a simple service in our Contact.ServiceImpl project:

PersonServiceImpl

By utilizing this model, separating the private from the public, you can easily send the Contact.Service assembly to anyone you want without requiring them to create their own client proxy or use the painfully horrible code generated by "Add Service Reference", which ranks in my list as one of the worst code generators, right next to FrontPage 95 and Word 2000.

As a side note, I should mention that I have colleagues who actually take this a step further and make an X.Service.Client project which houses the WCF client classes.  They are basically following a Service/Client/Implementation model, whereas I'm following a Public/Private model.  Just use whichever model makes sense for you.

The last piece needed in this WCF setup is the host itself.  This is just a matter of creating a new folder for the web site root, adding a web.config, and adding a X.svc file.  Period.  This is the entire service web site.

In the Person service example, the service host has only two files: web.config and Person.svc.

Below is the web.config, which declares two endpoints for the same service.  One of the endpoints is a plain, old-fashioned ASMX-style "basic profile" endpoint and the other is one to be used for JSON connectivity.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.serviceModel>
    <behaviors>
      <endpointBehaviors>
        <behavior name="JsonEndpointBehavior">
          <enableWebScript />
        </behavior>
      </endpointBehaviors>
    </behaviors>
    <services>
      <service name="Contact.Service.PersonService">
        <endpoint address="json" binding="webHttpBinding" contract="Contact.Service.IPersonService" behaviorConfiguration="JsonEndpointBehavior" />
        <endpoint address="" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
      </service>
    </services>
  </system.serviceModel>
</configuration>

The Person.svc is even more basic:

<%@ ServiceHost Service="Contact.Service.PersonService" %>

This type of solution scales to any level.  If you want to add a service, just add a new X.svc file and register it as a new service.  If you want to add another endpoint, just add that one line.  It's incredibly simple, scales to solutions of any size, and even works well with non-HTTP services like netTcpBinding services.
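
For example, a netTcpBinding endpoint is just one more line in the same pattern.  The net.tcp address below is hypothetical and assumes a host that supports TCP (self-hosting or a TCP-activated host):

```xml
<endpoint address="net.tcp://localhost:8081/Person" binding="netTcpBinding" contract="Contact.Service.IPersonService" />
```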

Service Metadata

Now let's look at each of these files to see how they are optimized.  First, let's look at the data contract, Person:

using System;
using System.Runtime.Serialization;
//+
namespace Contact.Service
{
    [DataContract(Namespace = Information.Namespace.Contact)]
    public class Person
    {
        //- @Guid -//
        [DataMember]
        public String Guid { get; set; }

        //- @FirstName -//
        [DataMember]
        public String FirstName { get; set; }

        //- @LastName -//
        [DataMember]
        public String LastName { get; set; }

        //- @City -//
        [DataMember]
        public String City { get; set; }

        //- @State -//
        [DataMember]
        public String State { get; set; }

        //- @PostalCode -//
        [DataMember]
        public String PostalCode { get; set; }
    }
}

Everything about this file should be self-explanatory.  There's a data contract attribute on the class and data member attributes on each member.  Simple.  But what's with the data contract namespace?

Well, earlier this year, a co-architect of mine mentioned to me that you can centralize your namespaces in static locations.  Genius.  No more typing the same namespace on each and every data and service contract.  Thus, the following file is included in each of my projects:

using System;
//+
namespace Contact.Service
{
    public class Information
    {
        //- @NamespaceRoot -//
        public const String NamespaceRoot = "http://www.netfxharmonics.com/service/";

        //+
        //- @Namespace -//
        public class Namespace
        {
            public const String Contact = Information.NamespaceRoot + "Contact/2008/11/";
        }
    }
}

If there are multiple services in a project, and in 95%+ of situations there will be, then you can simply add more namespace constants to the Namespace class and reference them from your data and service contracts.  Thus, you never, EVER have to update your service namespaces in more than one location.  You can see this in the service contract as well:

using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    [ServiceContract(Namespace = Information.Namespace.Contact)]
    public interface IPersonService
    {
        //- GetPersonData -//
        [OperationContract]
        Person GetPersonData(String personGuid);
    }
}

This service contract doesn't get much simpler.  There's no reason to discuss it any longer.

Service Implementation

For the sake of our discussion, I'm not going to talk very much at all about the service implementation.  If you want to see a hardcore implementation, go look at my Minima Blog Engine 3.1.  You will see validators, fault management, operation behaviors, message headers, and on and on.  You will seriously learn a lot from the Minima project.

For our discussion, here's our Person service:

using System;
//+
namespace Contact.Service
{
    public class PersonService : Contact.Service.IPersonService
    {
        //- @GetPersonData -//
        public Person GetPersonData(String personGuid)
        {
            return new Person
            {
                FirstName = "John",
                LastName = "Doe",
                City = "Unknown",
                Guid = personGuid,
                PostalCode = "66062",
                State = "KS"
            };
        }
    }
}

Not too exciting, eh?  But, as it is, this is all that's required to create a WCF service implementation.  Just create an everyday ol' class and implement a service contract.

As I've mentioned, though, you could do a ton more in your private service implementation.  To give you a little idea, say you wanted to make absolutely sure that no one turned off metadata exchange for your service.  This is something that I do for various projects, and it's incredibly straightforward: just create a service host factory, which creates the service host and programmatically adds endpoints and modifies behaviors.  Given that WCF is an incredibly streamlined system, I'm able to add other endpoints or other behaviors in the exact same way.

Here's what I mean:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;
//+
namespace Contact.Service.Activation
{
    public class PersonServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
    {
        //- @CreateServiceHost -//
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            ServiceHost host = new ServiceHost(typeof(PersonService), baseAddresses);
            //+ add metadata exchange
            ServiceMetadataBehavior serviceMetadataBehavior = host.Description.Behaviors.Find<ServiceMetadataBehavior>();
            if (serviceMetadataBehavior == null)
            {
                serviceMetadataBehavior = new ServiceMetadataBehavior();
                host.Description.Behaviors.Add(serviceMetadataBehavior);
            }
            serviceMetadataBehavior.HttpGetEnabled = true;
            ServiceEndpoint serviceEndpoint = host.Description.Endpoints.Find(typeof(IMetadataExchange));
            if (serviceEndpoint == null)
            {
                host.AddServiceEndpoint(typeof(IMetadataExchange), MetadataExchangeBindings.CreateMexHttpBinding(), "mex");
            }
            //+
            return host;
        }
    }
}

Then, just modify your service host line:

<%@ ServiceHost Service="Contact.Service.PersonService" Factory="Contact.Service.Activation.PersonServiceHostFactory" %>

My point in mentioning that is to demonstrate how well your service will scale based upon a properly setup base infrastructure.  Each thing that you add will require a linear amount of work.  This isn't like WSE where you need three doctorates in order to modify the slightest thing.

Service Client without Magic

At this point, a few of my colleagues would end their X.Service project and begin an X.Service.Client project.  That's fine.  Keeping the client away from the service metadata is nice, but it's not required.  So, for the sake of this discussion, let's continue with the public/private model instead of the service/client/implementation model.

Thankfully, I have absolutely no colleagues who would ever hit "Add Service Reference".  I've actually never worked with a person who does this, either.  This is good news because, as previously mentioned, the code generated by "Add Service Reference" is a complete disaster.  You also can't modify it directly without risking your changes being overwritten.  Even then, the code is SO bad, you wouldn't want to edit it.  There's no reason to have hundreds or THOUSANDS of lines of code to create a client to your service.

A much simpler and much more manageable solution is to keep your code centralized and DRY (don't repeat yourself).  To do this, you just need to realize that the publicly accessible X.Service project already contains the vast majority of everything needed to create a WCF client.  Because of this, you may access the service using WCF's built in mechanisms.  You don't need anything fancy, it's all built right in.

Remember, WCF simply requires the ABC: an address, a binding, and a contract.  On the client-side, WCF uses this information to create a channel.  This channel implements your service contract, thus allowing you to directly access your WCF service without any extra work required.  You already have the contract, so all you have to do is declare the address and binding.  Here's what I mean:

//+ address
EndpointAddress endpointAddress = new EndpointAddress("http://localhost:1003/Person.svc");
//+ binding
BasicHttpBinding basicHttpBinding = new BasicHttpBinding();
//+ contract
IPersonService personService = ChannelFactory<IPersonService>.CreateChannel(basicHttpBinding, endpointAddress);
//+ just use it!
Person person = personService.GetPersonData("F488D20B-FC27-4631-9FB9-83AF616AB5A6");
String firstName = person.FirstName;

No 3,000-line generated client code, no excess classes, no configuration.  Just direct access to your service.

Of course, if you want to use a configuration file, that's great too.  All you need to do is create a system.serviceModel section in your project or web site and declare your client endpoint.  What's really nice about this in WCF is that, since the concepts of WCF are the same ABCs on both the client and server, most of the configuration is already done for you.  You can just copy and paste the endpoint from the service configuration into your client configuration and add a name attribute.

<system.serviceModel>
  <client>
<endpoint name="PersonServiceBasicHttpBinding" address="http://localhost:1003/Person.svc" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
  </client>
</system.serviceModel>

For the most part, you will also need to copy over any binding configurations from the service side as well, though you wouldn't copy over validation information.

At this point, you could just change the previously used WCF channel code to use a hard-coded endpoint name, but that's an architectural disaster.  You never want to compile configuration information into your system.  Instead of tying them together directly, I like to create a custom configuration for my solution to allow the endpoint configuration to change.  For the sake of this discussion, though, let's just use appSettings (do NOT rely on appSettings for everything!  That's a cop-out.  Create a custom configuration section!).  Here's our appSetting:

<appSettings>
  <add key="PersonServiceActiveEndpoint" value="PersonServiceBasicHttpBinding" />
</appSettings>
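
As a hint of what the recommended custom configuration section could look like, here is a sketch; the section and property names are made up for illustration, and the section would still need to be registered under configSections:

```csharp
using System;
using System.Configuration;
//+
namespace Contact.Service
{
    //+ hypothetical usage: <personService activeEndpoint="PersonServiceBasicHttpBinding" />
    public class PersonServiceSection : ConfigurationSection
    {
        //- @ActiveEndpoint -//
        [ConfigurationProperty("activeEndpoint", IsRequired = true)]
        public String ActiveEndpoint
        {
            get { return (String)this["activeEndpoint"]; }
            set { this["activeEndpoint"] = value; }
        }
    }
}
```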

At this point, I can explain that ServiceConfiguration class found in the public Contact.Service project:

using System;
//+
namespace Contact.Service
{
    public static class ServiceConfiguration
    {
        //- @ActivePersonServiceEndpoint -//
        public static String ActivePersonServiceEndpoint
        {
            get
            {
                return System.Configuration.ConfigurationManager.AppSettings["PersonServiceActiveEndpoint"] ?? String.Empty;
            }
        }
    }
}

As you can see, this class just gives me strongly-typed access to my endpoint name, thus allowing me to access my WCF endpoint via a loosely coupled configuration:

//+ configuration and contract
IPersonService personService = new ChannelFactory<IPersonService>(ServiceConfiguration.ActivePersonServiceEndpoint).CreateChannel();
//+ just use it!
Person person = personService.GetPersonData("F488D20B-FC27-4631-9FB9-83AF616AB5A6");
String firstName = person.FirstName;

When I feel like using a different endpoint, I don't modify the client endpoint; I just add another one with a new name and update the appSettings pointer (if you're into fancy names, this is essentially the bridge pattern).

Now, while this is a great way to directly access data by people who understand WCF architecture, it's probably not a good idea to allow your developers to have direct access to system internals.  Entry-level developers need to focus on the core competency of your company (i.e. sales, marketing, data management, etc), not ponder the awesomeness of system internals.  Thus, to allow other developers to work on a solution without having to remember WCF internals, I normally create two layers of abstraction on top of what I've already shown.

For the first layer, I create a concrete ClientBase class to hide the WCF channel mechanics.  This is the type of class that "Add Service Reference" would have created if your newbie developers accidentally used it.  However, the one we will create won't have meaningless attributes and virtually unmanageable excess code.

Below is our entire client class:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
//+
namespace Contact.Service
{
    public class PersonClient : System.ServiceModel.ClientBase<IPersonService>, IPersonService
    {
        //- @Ctor -//
        public PersonClient(String endpointConfigurationName)
            : base(endpointConfigurationName) { }

        //+
        //- @GetPersonData -//
        public Person GetPersonData(String personGuid)
        {
            return Channel.GetPersonData(personGuid);
        }
    }
}

The pattern here is incredibly simple: create a class which inherits from System.ServiceModel.ClientBase<IServiceContractName> and implements your service contract.  When you implement the class, the only implementation you need to add is a call to the base channel.  In essence, all this class does is accept calls and pass them off to a pre-created channel.  When you add a new operation to your service contract, just implement the interface and add a single line of connecting code to wire up the client class.

The channel creation mechanics that I demonstrated earlier are now provided automatically by the ClientBase class.  You can also modify this class a little by bridging up to a total of 10 different constructors provided by ClientBase.  For example, the following constructor will allow developers to specify a specific binding and endpoint:

public PersonClient(Binding binding, EndpointAddress address)
    : base(binding, address) { }

At this point, we have something that protects developers from having to remember how to create a channel.  However, they still have to mess with configuration names and must remember to dispose of this client object (there's an open channel, remember).  Therefore, I normally add another layer of abstraction.  This one will be directly accessible for developer use.

This layer consists of a series of service agents.  Each service has its own agent, which is essentially a series of static methods providing the most efficient means of making a service call.  Here's what I mean:

using System;
//+
namespace Contact.Service
{
    public static class PersonAgent
    {
        //- @GetPersonData -//
        public static Person GetPersonData(String personGuid)
        {
            using (PersonClient client = new PersonClient(ServiceConfiguration.ActivePersonServiceEndpoint))
            {
                return client.GetPersonData(personGuid);
            }
        }
    }
}

As you can see, the pre-configured service endpoint is automatically used and the PersonClient is automatically disposed at the end of the call (ClientBase<T> implements IDisposable).  If you want to use a different service endpoint, then just change it in your appSettings configuration.

Conclusion

At this point, I've explained every class in each project of my WCF service project model.  It's up to you to decide how to best create and manage your data and service contracts, as well as clients.  But, if you want a streamlined, efficient model for all your service projects, you will want to create a publicly accessible project to house all your reusable elements.

Also, remember you don't need a full-on client class to access a service.  WCF communicates with channels and channel creation simply requires an address, binding, and contract.  If you have that information, just create your channel and make your call.  You can abstract the internals of this by using a ClientBase object, but this is entirely optional.  If the project you are working on requires hardcore WCF knowledge, there's no reason to pretty it up.  However, if non-WCF experts will be working with your system, abstractions are easy to create.


Creating JavaScript Components and ASP.NET Controls



Every now and again I'll actually meet someone who realizes that you don't need a JavaScript framework to make full-scale AJAX applications happen… but rarely in the Microsoft community.  Most people think you need Prototype, jQuery, or the ASP.NET AJAX framework in order to do anything from networking calls to DOM building to component creation.  Obviously, this isn't true.  In fact, when I designed the Brainbench AJAX exam, I specifically designed it to test how effectively you can create your own full-scale JavaScript framework (how well the AJAX developer did at following my design, I have no idea).

So, today I would like to show you how you can create your own strongly-typed, ASP.NET-based JavaScript component without requiring a full framework.  Why would you not have Prototype or jQuery on your web site?  Well, you wouldn't.  Even Microsoft-oriented AJAX experts recognize that jQuery provides an absolutely incredible boost to their applications.  However, when it comes to my primary landing page, I need that page to be extremely tiny.  Thus, I rarely include jQuery or Prototype on that page (remember, Google makes EVERY page a landing page, but I mean the PRIMARY landing page).

JavaScript Component

First, let's create the JavaScript component.  When dealing with JavaScript, if you can't do it without ASP.NET, don't try it in ASP.NET.  You only use ASP.NET to help package the component and make it strongly-typed.  If the implementation doesn't work, then you have more important things to focus on.

Generally speaking, here's the template I follow for any JavaScript component:

window.MyNamespace = window.MyNamespace || {};
//+
//- MyComponent -//
MyNamespace.MyComponent = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            //+ validate and save DOM host
            if (init.host) {
                this.host = init.host;
                //+
                this.DOMElement = $(this.host);
                if(!this.DOMElement) {
                    throw 'Element with id of ' + this.host + ' is required.';
                }
            }
            else {
                throw 'host is required.';
            }
            //+ validate and save parameters
            if (init.myParameter) {
                this.myParameter = init.myParameter;
            }
            else {
                throw 'myParameter is required.';
            }
        }
    }
    ctor.prototype = {
        //- myfunction -//
        myfunction: function(t) {
        }
    };
    //+
    return ctor;
})( );

You may then create the component like the following anywhere in your page:

new MyNamespace.MyComponent({
    host: 'hostName',
    myParameter: 'stuff here'
 });

Now on to a sample component.  But first, take note of the following shortcuts, which allow us to save a lot of typing:

var DOM = document;
var $ = function(id) { return document.getElementById(id); };

Here's a sample Label component:

window.Controls = window.Controls || {};
//+
//- Controls -//
Controls.Label = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            //+ validate and save DOM host
            if (init.host) {
                this._host = init.host;
                //+
                this.DOMElement = $(this._host);
                if(!this.DOMElement) {
                    throw 'Element with id of ' + this._host + ' is required.';
                }
            }
            else {
                throw 'host is required.';
            }
            //+ validate and save parameters
            if (init.initialText) {
                this._initialText = init.initialText;
            }
            else {
                throw 'initialText is required.';
            }
        }
        //+
        this.setText(this._initialText);
    }
    ctor.prototype = {
        //- myfunction -//
        setText: function(text) {
            if(this.DOMElement.firstChild) {
                this.DOMElement.removeChild(this.DOMElement.firstChild);
            }
            this.DOMElement.appendChild(DOM.createTextNode(text));
        }
    };
    //+
    return ctor;
})( );

With the above JavaScript code and a <div id="host"></div> element somewhere in the HTML, we can use the following to create an instance of a label:

window.lblText = new Controls.Label({
    host: 'host',
    initialText: 'Hello World'
});

Now, if we had a button on the screen, we could handle its click event and use it to set the text of the label, as follows:

<div>
    <div id="host"></div>
    <input id="btnChangeText" type="button" value="Change Value" />
</div>
<script type="text/javascript" src="Component.js"></script>
<script type="text/javascript">
    //+ in reality you would use the dom ready event, but this is quicker for now
    window.onload = function( ){
        window.lblText = new Controls.Label({
            host: 'host',
            initialText: 'Hello World'
        });
        window.btnChangeText = $('btnChangeText');
        //+ in reality you would use a multi-cast event
        btnChangeText.onclick = function( ) {
            lblText.setText('This is the new text');
        };
    };
</script>

Thus, components are simple to work with.  You can do this with anything from a simple label to a windowing system to a marquee to any full-scale custom solution.

ASP.NET Control

Once the component works, you may then package it and strongly-type it for ASP.NET.  The steps are very simple, and once you've done them, you can just repeat them (sometimes with a simple copy/paste) to make more components.

First, we need to create a .NET class library and add a reference to the System.Web assembly.  Next, add the JavaScript component to the class library.

Next, in order to make the JavaScript file usable by your class library, you need to make sure it's set as an Embedded Resource.  In Visual Studio 2008, you do this by going to the properties window of the JavaScript file and changing the Build Action to Embedded Resource.

Then, you need to bridge the gap between the ASP.NET and JavaScript world by registering the JavaScript file as a web resource.  To do this you register an assembly-level WebResource attribute with the location and content type of your resource.  This is typically done in AssemblyInfo.cs.  The attribute pattern looks like this:

[assembly: System.Web.UI.WebResource("AssemblyName.FolderPath.FileName", "ContentType")]

Thus, if I were registering a JavaScript file named Label.js in the JavaScript.Controls assembly, under the _Resource/Controls folder, I would register my file like this:

[assembly: System.Web.UI.WebResource("JavaScript.Controls._Resource.Label.js", "text/javascript")]

Now, it's time to create a strongly-typed ASP.NET control.  This is done by creating a class which inherits from the System.Web.UI.Control class.  Every control in ASP.NET, from the TextBox to the GridView, inherits from this base class.

When creating this control, we want to remember that our JavaScript control contains two required parameters: host and initialText.  Thus, we need to add these to our control as properties and validate these on the ASP.NET side of things.

Regardless of your control though, you need to tell ASP.NET what files you would like to send to the client.  This is done with the Page.ClientScript.RegisterClientScriptResource method, which accepts a type and the name of the resource.  Most of the time, the type parameter will just be the type of your control.  The name of the resource must match the web resource name you registered in AssemblyInfo.  This registration is typically done in the OnPreRender method of the control.

The last thing you need to do with the control is the most obvious: do something.  In our case, we need to write the client-side initialization code to the client.

Here's our complete control:

using System;
//+
namespace JavaScript.Controls
{
    public class Label : System.Web.UI.Control
    {
        internal static Type _Type = typeof(Label);


        //+
        //- @HostName -//
        public String HostName { get; set; }


        //- @InitialText -//
        public String InitialText { get; set; }


        //+
        //- @OnPreRender -//
        protected override void OnPreRender(EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptResource(_Type, "JavaScript.Controls._Resource.Label.js");
            //+
            base.OnPreRender(e);
        }


        //- @Render -//
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            if (String.IsNullOrEmpty(HostName))
            {
                throw new InvalidOperationException("HostName must be set");
            }
            if (String.IsNullOrEmpty(InitialText))
            {
                throw new InvalidOperationException("InitialText must be set");
            }
            writer.Write(@"
<script type=""text/javascript"">
(function( ) {
    var onLoad = function( ) {
        window." + ID + @" = new Controls.Label({
            host: '" + HostName + @"',
            initialText: '" + InitialText + @"'
        });
    };
    if (window.addEventListener) {
        window.addEventListener('load', onLoad, false);
    }
    else if (window.attachEvent) {
        window.attachEvent('onload', onLoad);
    }
})( );
</script>
");
            //+
            base.Render(writer);
        }
    }
}

The code written to the client may look kind of crazy, but that's because it's written very carefully.  First, notice it's wrapped in a script tag.  This is required.  Next, notice all the code is wrapped in a (function( ) { })( ) block.  This is a JavaScript containment technique: anything defined inside it exists only for the duration of execution.  In this case, it means that the onLoad variable exists inside the function and only inside the function, and thus will never conflict with anything outside of it.  Next, notice I'm attaching the onLoad logic to the window load event.  This isn't technically the correct way to do it, but it's the way that requires the least code and is only there for the sake of the example.  Ideally, we would write some sort of event helper (or use a prewritten one) which would allow us to bind handlers to events without having to check whether we are dealing with the lameness known as Internet Explorer (it uses window.attachEvent, while real web browsers use addEventListener).
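As an aside, such an event helper is only a few lines.  Here's a minimal sketch; the addEvent name and its exact shape are my own invention for illustration, not part of the control's code:

```javascript
// Minimal cross-browser event-binding helper (illustrative sketch).
// Prefers the standard addEventListener, falls back to IE's attachEvent,
// and as a last resort assigns a DOM0 on<name> handler.
function addEvent(target, name, handler) {
    if (target.addEventListener) {
        target.addEventListener(name, handler, false);
    }
    else if (target.attachEvent) {
        target.attachEvent('on' + name, handler);
    }
    else {
        target['on' + name] = handler;
    }
}
```

With something like this in place, the rendered script could simply call addEvent(window, 'load', onLoad) and skip the inline browser sniffing.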

Now, having this control, we can compile our assembly, add a reference to it from our web site, and register the control with our page or web site.  Since this is a "Controls" namespace, it has the feel that it will contain multiple controls, so it's best to register it in web.config for the entire web site to use.  Here's how this is done:

<configuration>
  <system.web>
    <pages>
      <controls>
        <add tagPrefix="c" assembly="JavaScript.Controls" namespace="JavaScript.Controls" />
      </controls>
    </pages>
  </system.web>
</configuration>

Now we are able to use the control in any page on our web site:

<c:Label id="lblText" runat="server" HostName="host" InitialText="Hello World" />

As mentioned previously, this same technique for creating, packaging and strongly-typing JavaScript components can be used for anything.  Having said that, the example I have just provided borders on the very definition of useless.  No one cares about a stupid host-controlled label.

If you don't want a host-model, but prefer the in-place model, you need to change a few things.  After the changes, you'll have a template for creating any in-place control.

First, remove anything referencing a "host".  This includes client-side validation as well as server-side validation and the Control's HostName property.

Next, put an ID on the script tag.  This ID will be the ClientID suffixed with "ScriptHost" (or whatever you want).  Then, you need to inform the JavaScript control of the ClientID.

Your ASP.NET control should basically look something like this:

using System;
//+
namespace JavaScript.Controls
{
    public class Label : System.Web.UI.Control
    {
        internal static Type _Type = typeof(Label);


        //+
        //- @InitialText -//
        public String InitialText { get; set; }


        //+
        //- @OnPreRender -//
        protected override void OnPreRender(EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptResource(_Type, "JavaScript.Controls._Resource.Label.js");
            //+
            base.OnPreRender(e);
        }


        //- @Render -//
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            if (String.IsNullOrEmpty(InitialText))
            {
                throw new InvalidOperationException("InitialText must be set");
            }
            writer.Write(@"
<script type=""text/javascript"" id=""" + this.ClientID + @"ScriptHost"">
(function( ) {
    var onLoad = function( ) {
        window." + ID + @" = new Controls.Label({
            id: '" + this.ClientID + @"',
            initialText: '" + InitialText + @"'
        });
    };
    if (window.addEventListener) {
        window.addEventListener('load', onLoad, false);
    }
    else if (window.attachEvent) {
        window.attachEvent('onload', onLoad);
    }
})( );
</script>
");
            //+
            base.Render(writer);
        }
    }
}

Now you just need to make sure the JavaScript control knows that it needs to place itself where it has been declared.  To do this, you just create a new element and insert it into the browser DOM immediately before the current script block.  Since we gave the script block an ID, this is simple.  Here's basically what your JavaScript should look like:

window.Controls = window.Controls || {};
//+
//- Controls -//
Controls.Label = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            if (init.id) {
                this._id = init.id;
                //+
                this.DOMElement = DOM.createElement('span');
                this.DOMElement.setAttribute('id', this._id);
            }
            else {
                throw 'id is required.';
            }
            //+ validate and save parameters
            if (init.initialText) {
                this._initialText = init.initialText;
            }
            else {
                throw 'initialText is required.';
            }
        }
        //+
        var scriptHost = $(this._id + 'ScriptHost');
        scriptHost.parentNode.insertBefore(this.DOMElement, scriptHost);
        this.setText(init.initialText);
    }
    ctor.prototype = {
        //- setText -//
        setText: function(text) {
            if(this.DOMElement.firstChild) {
                this.DOMElement.removeChild(this.DOMElement.firstChild);
            }
            this.DOMElement.appendChild(DOM.createTextNode(text));
        }
    };
    //+
    return ctor;
})( );

Notice that the JavaScript control constructor creates a span with the specified ID, grabs a reference to the script host, inserts the element immediately before the script host, then sets the text.

Of course, now that we have made these changes, you can also throw something like the following into your page to use your in-place JavaScript control without ASP.NET:

<script type="text/javascript" id="lblTextScriptHost">
    window.lblText = new Controls.Label({
        id: 'lblText',
        initialText: 'Hello World'
    });
</script>

So, you can create your own JavaScript components without requiring jQuery or Prototype dependencies.  But if you are using jQuery or Prototype (and you should be, even if you are also using ASP.NET AJAX; that's not a full JavaScript framework), you can use this same ASP.NET control technique to package all your controls.

Cross-Browser JavaScript Tracing
No matter what system you are working with, you always need mechanisms for debugging.  One of the most important mechanisms a person can have is tracing.  Being able to see trace output from various places in your application is vital.  This is especially true with JavaScript.  I've been working with JavaScript since 1995, making stuff 12 years ago that would still be interesting today (in fact, I didn't know server-side development existed until 1998!) and I have noticed a clear correlation between the complexity of JavaScript applications and the absolute need for tracing.

Thus, a long, long time ago I built a tracing utility that would help me view all the information I need (and absolutely nothing more or less).  These days this means being able to trace information to a console, dump arrays and objects, and view line-numbered information for future reference.  The utility I've created has since been added to my Themelia suite (pronounced the-meh-LEEUH; as in thistle or the name Thelma), but today I would like to demonstrate it and deliver it separately.

The basis of my tracing utility is the Themelia.Trace namespace.  In this namespace is…. wait… what?  You're sick of listening to me talk?  Fine.  Here's the sample code which demonstrates the primary uses of Themelia.Trace; treat this as your reference documentation:

//+ enables tracing
Themelia.Trace.enable( );
//+ writes text
Themelia.Trace.write('Hello World!');
//+ writes a blank line
Themelia.Trace.addNewLine( );
//+ writes a numbered line
Themelia.Trace.writeLine('…and Hello World again!');
Themelia.Trace.writeLine('Another line…');
Themelia.Trace.writeLine('Yet another…');
Themelia.Trace.writeLine('One more…');
//+
//++ label
//+ writes labeled data to output (e.g. 'variableName (2)')
Themelia.Trace.writeLabeledLine('variableName', 2);
//+
//++ buffer
//+ creates a buffer
var buffer = new Themelia.Trace.Buffer( );
//+ declares beginning of new segment
buffer.beginSegment('Sample');
//+ writes data under specific segment
buffer.write('data here');
//+ nested segment
buffer.beginSegment('Array Data');
//+ write array to buffer
var a = [1,2,3,4,5];
buffer.write(a);
//+ declares end of segment
buffer.endSegment('Array Data');
buffer.beginSegment('Object Data');
//+ write raw object/JSON data
buffer.write({
    color: '#0000ee',
    fontSize: '1.1em',
    fontWeight: 'bold'
});
buffer.endSegment('Object Data');
//+ same thing again
buffer.beginSegment('Another Object');
var o = {
    'personId': 2,
    name: 'david'
};
buffer.write(o);
buffer.endSegment('Another Object');
buffer.endSegment('Sample');
//+ writes all built-up data to output
buffer.flush( );

Notice a few things about this reference sample:

  • First, you must use Themelia.Trace.enable( ) to turn tracing on.  In a production application, you would just comment this line out.
  • Second, Themelia.Trace.writeLine prefixes each line with a line number.  This is especially helpful when dealing with all kinds of async stuff floating around or when dealing with crazy events.
  • Third, you may use Themelia.Trace.writeLabeledLine to output data while giving it a name like "variableName (2)".
  • Fourth, if you want to run a tracer through your application and only later have output, create an instance of Themelia.Trace.Buffer; write text, an array, or an object to it; then call flush( ) to send the data to output.  You may also use beginSegment and endSegment to create nested, indented portions of the output.
  • Fifth, notice you can throw entire arrays of objects/JSON into buffer.write( ) to write it to the screen.  This is especially handy when you want to trace your WCF JSON messages.
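Themelia's actual source isn't shown here, but to make the buffer semantics concrete, here's a rough sketch of how a segment-aware buffer could be implemented.  All names and details below are illustrative assumptions, not Themelia's real code:

```javascript
// Illustrative sketch of a segment-aware trace buffer (assumed implementation).
// Segments indent their contents; flush( ) assembles everything at once.
var Trace = {};
Trace.Buffer = function( ) {
    this._lines = [];
    this._depth = 0;
};
Trace.Buffer.prototype = {
    //- beginSegment: starts a named, indented section -//
    beginSegment: function(name) {
        this._push(name);
        this._depth++;
    },
    //- endSegment: closes the current section -//
    endSegment: function(name) {
        this._depth--;
    },
    //- write: accepts plain text, arrays, or objects -//
    write: function(data) {
        if (data instanceof Array) {
            for (var i = 0; i < data.length; i++) {
                this._push('[' + i + '] ' + data[i]);
            }
        }
        else if (typeof data === 'object' && data !== null) {
            for (var key in data) {
                if (data.hasOwnProperty(key)) {
                    this._push(key + ': ' + data[key]);
                }
            }
        }
        else {
            this._push(data);
        }
    },
    //- flush: returns (and clears) the accumulated output -//
    flush: function( ) {
        var text = this._lines.join('\n');
        this._lines = [];
        return text; // a real version would hand this to the console
    },
    //- internal: indents by current depth -//
    _push: function(text) {
        var indent = new Array(this._depth + 1).join('    ');
        this._lines.push(indent + text);
    }
};
```

The real Themelia.Trace.Buffer writes to whichever console is available rather than returning a string, but the segment/indent bookkeeping is the essential idea.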

Trace to what?

Not everyone knows this, but Firefox, Google Chrome, Safari, and Opera each have their own console for output.  Themelia.Trace works with each console in its own way.  Here are some screen shots to show you what I mean:
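Under the hood, that per-browser support comes down to feature detection.  Here's the general shape of such a check; the getLogger helper below is hypothetical (not Themelia's actual code), and it takes the global object as a parameter just to make the detection explicit and testable:

```javascript
// Illustrative console feature-detection (assumed, not Themelia's real code).
// Returns a logging function bound to whatever console the environment offers,
// or a no-op when no console exists.
function getLogger(global) {
    if (global.console && global.console.log) {
        return function(text) { global.console.log(text); };
    }
    //+ no console available; swallow output silently
    return function(text) { };
}
```

In the browser you would call getLogger(window); tracing then degrades gracefully instead of throwing when a console is missing.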

Firefox

Firefox has had the Firefox Console since version 1.0, which allows you to write just about anything to a separate window.  I did a video on this many years ago, and last year I posted a quick "did you know"-style blog post on it, so there's no reason to cover it again here.  Just watch my Introduction to the Firefox Console for a detailed explanation of using the Firefox Console (you may also opt to watch my Setting up your Firefox Development Environment; it should seriously help you out).

Firefox

Google Chrome

Chrome does things a little differently than any other browser.  Instead of having a "browser"-wide console, each tab has its own console.  Notice "browser" is in quotes: technically, each tab in Chrome is its own mini-browser, so this console-per-tab model makes perfect sense.  To access the console, just hit Alt-` on a specific tab.

Chrome

Safari

In Safari, go to Preferences and, in the Advanced tab, check "Show Develop menu in menu bar".  When you do, the Develop menu will show up.  The output console is at Develop -> Show Web Inspector.

Safari

Opera

In Opera 9, you go to Tools -> Advanced -> Developer Tools and you will see a big box show up at the bottom.  The console is the Error Console tab.

Opera9

Internet Explorer

To use Themelia.Trace with Internet Explorer, install Nikhil's Web Developer Helper.  This is different from the IE Developer Toolbar.

IEWebDevHelper

Firebug

It's important to note that, in many situations, it's actually more effective to rely on Firebug for Firefox or Firebug Lite for Safari, Chrome, IE, and Opera than to use a console directly.  Therefore, Themelia.Trace allows you to set Themelia.Trace.alwaysUseFirebug to true and have all output redirected to Firebug instead of the default console.  Just try it: use the above sample, but put "Themelia.Trace.alwaysUseFirebug = true;" above it.  All data will redirect to Firebug.  Here's a screen shot (this looks basically the same in all browsers):

FirebugLite

There you have it.  A cross-browser solution to JavaScript tracing.

Links

Love Sudoku? Love brain puzzles? Check out my new world-wide Sudoku competition web site, currently in beta, at Sudokian.com.
