
ASP.NET 3.5 Web Site and Application



One of the most horrendous things about ASP.NET 1.1 was that the developers confused a web site and a project.  All that did was allow a severe influx of desktop developers into the web world who had no right to call themselves web developers.  ASP.NET 1.1 even added resx files for web forms, and of course since the file was there, many developers (senior level!) actually thought they were required files.  That didn't stop me from regularly going into CVS and DELETING them.  Worthless.

Fortunately, ASP.NET 2.0 fixed this problem by making sure that people realized that a web site was NOT a project.  This made everything so much easier to work with.  Furthermore, now we had the beautiful CodeFile page directive attribute so that we didn't have to rely on VS for everything.  There was also no need for absolutely ridiculous and redundant designer or resources files for web forms.  The ASP.NET guys were finally conforming to the preexisting conditions of the web, instead of trying to come up with a new [flawed] paradigm.

HOWEVER! Apparently the ASP.NET 3.5 team fell asleep at the wheel because I'm having horrendous flashbacks to the slop of ASP.NET 1.1.  First of all, when you add a web site, you are adding a project.  I don't WANT a csproj file for my web site!  Secondly, web forms have returned to using the completely useless CodeBehind attribute.  It took me QUITE a bit of debugging to finally realize this.  Third, every single web form now has a completely meaningless X.designer.cs file.  This also took me a while to realize.
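
For the record, the two page directives look like this (a minimal sketch; the class and file names are just placeholders).  The CodeFile form is the ASP.NET 2.0 web site style, compiled dynamically on the server; the CodeBehind form is the project style, which relies on VS to compile the page class (and its designer file) into an assembly:

<%@ Page Language="C#" CodeFile="Default.aspx.cs" Inherits="_Default" %>

<%@ Page Language="C#" CodeBehind="Default.aspx.cs" Inherits="MyWebApp._Default" %>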

I realized this when I kept getting an error telling me that type X.Y didn't match type X.Y.  What?  Yes it does!  After I finally fixed that error (can't even remember how), I kept getting that one stupid error telling you that your type is in two separate places.  HOW?  This was a new project!  I hadn't done anything yet!  It turns out that the designer.cs file had become out of date between the time I added my custom control to the page and the time I ran it.  Err… what?  This is beyond frustrating.

There's good news though.  The ASP.NET team wasn't completely asleep.  You can add an ASP.NET web site or an ASP.NET web application.  Yes, I realize there's no REAL difference, but for some reason they decided to make a whimsical split (I suspect it was a political or PM decision-- the ASP.NET team is smarter than that).  Perhaps they wanted to aid the old VB developers, who I would argue have no right to put things on the web anyhow (i.e. they are web coders, not web development professionals!)

If you add an ASP.NET web application, you get the old ASP.NET 1.1 style of hard-to-use nonsense.  On the other hand, if you add an ASP.NET web site, you get the appropriate ASP.NET 2.0 style.  Personally, I say forget both.  I always just create a folder and then "open web site".  Done.  Most of the time, however, I just start a project by checking my continually changing solution template out of subversion.  Again, DONE.  This is why it took me 8 months to finally notice this.  I don't even want to think about how many sloppy intern- or VB6-developer-created applications I'm going to have to clean up based on this painfully flawed design.

Free Templated Data Bound Custom Controls Chapter



Google Book Search, much like most Google products, is a great gift to humanity.  I often find myself going there to read a chapter in a book to quickly get up to speed or to review a topic.  Today, while I was reviewing a few new ASP.NET books, I came across the book ASP.NET AJAX Programming Tricks on Google Books.  The first two chapters are "Http Modules Demystified" and "Templated Data Bound Custom Controls" and are freely viewable.  This is a great reference for anyone looking to learn how to build more powerful custom controls or for anyone who needs a quick refresher.

One thing I did notice is that the chapter looks very much like chapter 29 in ASP.NET 3.5 Unleashed.  In fact, not only is the content the same, the ordering of the content is the same.  Furthermore, they use almost the same "tab control" example.  Ouch.  Before anyone says the P word, I would like to mention that ASP.NET AJAX Programming Tricks was released first.


Developers and Web Developers



(This is a sequel to my Coders and Professional Programmers article)

I'm fairly sure the year was 2001. It was before I did my transition from coder to professional, but it was long after I became a real web developer (1994).  This was the year that the web became severely corrupted by an influx of thousands of MFC/VB developers thinking they were web developers simply because they knew how to drag-n-drop a control onto a canvas and make something appear in a web browser. The influx was, of course, due to the release of ASP.NET. These people were not web developers and that same coder-mill continually throws out unprofessional after unprofessional today.  This was the year I got so upset with the pragmatic, unprofessional web developers running around taking my work that I retired for 3 years to go back to college.

So, what is a web developer?  Surely there is at least one definition per person in the entire industry, but I must say that, at root, it's a person who understands and can proficiently interact with web technologies.  What web technologies?  Today, these include, at a minimum, semantic XHTML, CSS, and modern JavaScript.  In a sense, you could easily mark these as the pillars of web programming.  Without academic and hands-on knowledge of these technologies, there is no web development (yes, both are required-- and despite what the pragmatists think, the former is critical).  Furthermore, this technology list changes over time.  If I were to retire today, I have no right to come back in 5 years claiming to be a web developer.  To be a web developer at that time, I would need to learn a new X, Y, and Z, including their guidelines and best practices.  You must keep up or be left behind.

Having said that, PHP, JSP, and ASP.NET developers often inappropriately call themselves web developers.  Not all PHP, JSP, or ASP.NET developers are like this, especially PHP developers! Respect, respect!  In any case, I can kind of see the confusion here, but even still, a quick realization of what these technologies are should have killed any thoughts of this a long time ago.  These people work with server-side technologies, not web technologies.  The same CGI model used 15 years ago is the same model today.  The only thing these people are doing is creating code that runs on a server and shipping the output.  Period.  That's not web development; this is the same work you would do if you were to build an Excel report.  It's just work behind the scenes that may or may not touch a web browser.  Even then, just because it hits a web browser doesn't mean it's web development.  There are no client-side technologies involved at all.  Without deeply interacting with client-side technologies, there is no web development.  In fact, the inanimate object known as a web server is more of a web developer than server-side-only people.

Web development begins at the point when you begin to ponder the technologies and implementation from the perspective of the client side.  I'm sure most people won't believe me when I say this, but I did web development for my 4 years of high school before I even knew that you could use server-side software to dynamically create pages.  Everything I did was in pure JavaScript and fancy frame manipulation.  This was web development.  I didn't need CGI or Perl.  PHP, JSP, and ASP.NET simply send out a stream, and it just so happens that a web browser may be the one making the request.  The output may be for a web browser, but that in absolutely no way makes it web development.  That's like going to a foreign country, using a translator device, and saying that because you have that device, you speak the language.  You in absolutely no way speak the language!  Worse, some people will defend, virtually to the death, the idea that they do speak the language simply because they know a few words to "fix" the translation!  We see this in server-side-only developers who, because they know a few HTML tags, think they know the technologies.

Most of the time, however, server-side-only developers really think they are web developers, so this simple and obvious explanation won't do.  Therefore, we are forced to make a distinction between web 1.0 and web 2.0 developers.  We often think of web 2.0 in terms of quick, dynamic, and smooth client-side dynamics.  That's the marketing perspective, but it's hardly a definition that satisfies the computer scientist.  The distinction I use is actually a bit more straightforward: web 2.0 development is development from the client-side perspective.  This definition actually reminds me of the definition of a series I learned in my Real Analysis class in college: a mapping from N to R.  How in the WORLD is that a series!?  Isn't a series a set of entities or something?  Well, somehow it's a mapping from N to R (I've yet to hear another professor give this same definition, but the point is that "formal definitions" in mathematics rarely look like their application in reality).

When I talk about web 2.0 to a colleague or a client, I'm talking about web-specific design and implementation from the perspective of the client.  From this perspective, calls are made to various services for interaction with outside data.  In other words, web 2.0 is a client-service model for the web.  In this sense, what is web 1.0?  Just the opposite: development from the server-side perspective.  This is ASP.NET development, for example.  When you are working with ASP.NET, you are working from the perspective of the server and you send data out.  In this model, you have a logically central system with entities accessing it.  In reality, this isn't web development; it's development of something that may or may not do web development for you.  Web 1.0 is a server-client model for the web (notice the word server, instead of service, as seen in the web 2.0 model).  If you are a deep Microsoft developer, you will recognize the web 2.0 paradigm: WPF/WCF allows you to easily create a client-service model, bypassing the client-server model altogether.  You create your client interfaces in WPF and access WCF services as you need them.

From this perspective, what does this mean in terms of the actual technologies?  Well, almost all my web applications are done using the web 2.0 model.  That is, all my programming is done from the perspective of being inside the web browser.  I'll directly modify the DOM and access data via AJAX calls as required.  Some of my applications are pure AJAX; that is, not a single postback in the entire system (like meebo.com-- meebo is a prime example of web 2.0; everything is from the perspective of the client with communication via AJAX services).  In fact, my controls are very Google-ish.  Google is also deep into this model.  See their AdSense, AdWords, or Analytics controls; insert a declarative script and it does the rest from the perspective of the client.  As you can see here, you don't even need the XHR object for web 2.0!
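
To make that concrete, here is a minimal sketch of that declarative pattern (the URL and widget markup are hypothetical, not Google's actual code).  The page author drops in one script element:

<script type="text/javascript" src="http://widgets.example.com/badge.js"></script>

And badge.js renders the control entirely from the client's perspective, right where the script tag sits:

// inside badge.js: write the widget markup into the page as it loads
document.write('<div class="badge">Powered by Example Badge</div>');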

What does ASP.NET AJAX bring?  In this model, ASP.NET AJAX is a web 1.0 technology that gives you the dynamics of web 2.0.  This was actually the entire point behind creating it.  Web 1.0 developers (who are often not web developers at all!) can use their existing server-side perspective and paradigms to implement dynamics on the remote system (in a web 1.0 model the client is the remote entity-- whereas in web 2.0 the services are remote).  ASP.NET AJAX very much allows for a web 2.0 model, but that's not how it's primarily marketed.  As a side note, I should mention that this model for explaining web 1.0 and 2.0 is only a logical representation and therefore can neither be right nor wrong.  The fancy marketing representation kind of works too, but it's often too abstract to have real meaning.

Personally, I think the web 1.0 model of development is counterproductive and encourages sloppy priorities.  The user experience is the point of the system.  Without that, the entire point of the web site is dead.  One of my problems with ASP.NET AJAX is how it's marketed.  The server-perspective model of development encourages development that seems backwards.  Furthermore, because of this, the aforementioned so-called "web developers" continue to spread their disease of pragmatism all over the world, further aiding in the disintegration of quality.  As I originally stated, most of these people don't understand even the basics of semantic XHTML, the single most fundamental aspect of web development, as can be seen in their use of div-soup or <br/> mania.  These people may be awesome server-side professionals, putting my enterprise architecture skills to absolute shame and running circles around me in just about any algorithm or design pattern implementation, but they are only coders when it comes to the web.

After years and years of dealing with people like this, I've come to notice a few signs of web 1.0 coders:

  • If someone says "Firefox?  How's that better than IE?", it goes without saying that this person not only hasn't the first clue about web development, they don't even understand the tool which represents the core purpose of web development: the web browser.  People like this are almost always helpless.  You could try to explain the true power of CSS, the fact that SVG, HTML5, and Canvases are in every web browser except IE, or talk about how Firefox has the architecture of an operating system with its console, it's own registry (about:config), as well as the ability to install apps (extensions), but you're probably only going to get the same pragmatic blank stare of a coder. Fortunately, I haven’t heard say this in at least 3 years.
  • If someone says "I know CSS, here…" and shows you how they used font-size, color, and font-weight on a few elements contained in a table embedded in a table embedded in yet another table, then you have your work cut out for you, because you met a person who thinks HTML is the latest cool technology on the block and hasn't the first clue what CSS really means.  As I've stated in my article "CSS Architecture", CSS is not just a styling technology.  Furthermore, we web 2.0 developers realize that CSS is to be used in harmony with semantic XHTML and therefore understand the dangers of using a table.  These people obviously don't.  Of course, the minute their boss asks for mobile support, they come running to you because they now realize "DOH! Tables are too wide! AHH! Tables make the page size too big!" They will have to learn their lesson eventually.
  • If someone says "I know JavaScript, here… " and they show you a validation function, then you need to explain to them that JavaScript isn't merely a scripting language, but is rather a very powerful object-oriented/functional programming language which often puts strongly-typed languages to shame.  It includes closures, namespaces, an extremely rich object system, object-oriented access levels, multi cast events, and a boat load of core JavaScript objects.  Yet web 1.0 developers haven't the first clue. This problem isn’t nearly as bad as it used to be, though. MSDN magazine devoted some time to the topic in the May 2007 issue and the number of JavaScript experts in the Microsoft community is growing very rapidly.
  • If someone says "Hmm… I don't see the control you are talking about in my toolbox", then you know you are dealing with a coder.  Not only that, you're probably dealing with a person who has never, ever learned what semantic development even is.  Typically people like this will use the dead concept of a WYSIWYG designer to drag-n-drop controls and set properties with their mouse.  Clearly, these people focus more on how something seems to look at the moment, not how the page is actually built.  Pragmatists.  Personally, I’ve never designer support into anything, ever. If you can’t program it, don’t develop it! I personally find it extremely unprofessional to even allow designer-support. The target audience typically has absolutely no concept of the difference between a semantic <h1></h1> and a pragmatic <div id="myHeader"></div>.  Not only will their code cause problems down the road, your code will never integrate with it, which, of course, means you will be rewriting everything. Anyone who understands the importance of semantic XHTML understands the sheer severity of this problem.  You will break a page's structure by relying on a designer.  A designer should only be used by a professional who knows how to fix it's flaws.  Since only a professional would be able to fix the flaws, it follows that only a professional should do web development.  Duh?  For more information on semantic XHTML, see any modern web 2.0 book or my mini-article here (a quick note-- when I was formatting this post in WLW, every one of my list items would start a new list-- if I didn't understand semantic XHTML, I would have been completely stuck [also notice I'm using the semantic term "list item" not the syntactical term "<li />"-- focus on what things ARE, not what they DO-- try focusing on what something IS using a WYDIWYG designer!])

I know I've written about this topic before, but it's just such a critically important topic.  Just because someone does something, that doesn't mean they are in that profession.  I change my own oil and change my own tires, but this doesn't make me a mechanic.  A few months ago I was talking to a guy who actually said that he doesn't care about what he produces, because "it's just a job".  Just a job!?  Why don't you just get another one!  These people need to stop masquerading as web developers, stop undercutting my professional company by offering $3.75/hour unprofessional "development", start upping their own standards, and start showing a little respect to those of us who were here first.  To a lot of us professionals this isn't just a "job"; it's actually become part of us!  Unfortunately, I learned years ago that people don't change.  Not for their marriage, not for their family, and especially not for their careers.  It's hopeless.  Moving on.

Dojo 1.0: Client-Side Web Development Framework



Recently I started a deeper study of the Dojo JavaScript Toolkit 1.0.  When I first got into Dojo, my reaction was something like "my goodness this is complicated", but then I woke up.  In reality, Dojo is not simply a JavaScript framework like Prototype or an animation framework like script.aculo.us; rather, it's an entire client-side web development framework.  You can think of it as being a client-side version of ASP.NET.  Because of this, I'm not going to compare it to Prototype or script.aculo.us, products which have completely different applications than Dojo.

Here is a simple breakdown based on the hello world example found on the Dojo web site.  First let's add the dojo.js file:

<script src="dojoroot/dojo/dojo.js" djConfig="parseOnLoad: true" type="text/javascript"></script>

What in the WORLD is the djConfig attribute?  Well, it's not in any XHTML DTD, that's for sure.  This is something to tell Dojo to, obviously, parse the file on load.  Duh.  That's not really the fun part though.  Check this out: you actually use a PHP-style "require" statement to load specific portions of Dojo:

<script type="text/javascript">
   dojo.require('dijit.form.Button');
</script> 

By doing this, now we can add the following control into our page structure:

<button dojoType="dijit.form.Button" id="hola">
  Hola Mundo!
  <script type="dojo/method" event="onClick">alert('Hola');</script>
</button>

This creates a simple button which alerts 'Hola' when clicked.  As you can see, it gives you a very nice declarative programming model.  I personally think this is incredible because 1) I believe that we should focus on web development from a client-side perspective and therefore create controls on the client, 2) I love declarative programming, and 3) I don't want to build this thing myself.  With Dojo I get a declarative client-side programming model without having to architect the internals.  This is really awesome, since I'm a strong advocate of client-side perspective programming (a.k.a. web 2.0).

One interesting aspect of Dojo is that it loads only the files that you need for that specific page.  This is a rather nice compromise between the Prototype and mooTools methods.  Prototype loads the entire thing into memory, while mooTools allows you to download each section you want.  Dojo, however, installs on the server as a set of files that are available for download, and you retrieve them as you need them.  To add to this, you don't add them via the <script /> element; you add the dojo.js file that way, but you load the others by calling dojo.require( ).  What's nice about this is that you load modules, not files.  This gives a more native programming feel, and it also makes sure you don't load the same file twice.  Here's an example of what I'm talking about:

<script type="text/javascript">
  dojo.require('dojo.parser');
  dojo.require('dijit.form.Button');
  dojo.require('dijit.form.FilteringSelect');
  dojo.require('dijit.form.CheckBox');
</script>

Here you can see that I'm loading up four different modules.  What's interesting about this is that it's kind of similar to how we import .NET namespaces.  However, it's even more like how we load .NET assemblies.  Think of each one of these as being an assembly we need to reference.  In .NET, you add a reference, let Fusion and its buddies find and load the assemblies, and simply use the controls as if they were local; you don't care about "files".  This is very similar to the model presented in Dojo.  It's like you are adding a Dojo "assembly" reference, letting it load it for you, and accessing it as if it were local.  This is all in direct contrast to the model that PHP as well as most JavaScript/AJAX frameworks use.  In PHP and in these frameworks, you typically load "files" directly.  One of the hardest things for a PHP developer to do is make the mental transition from files to "magically loaded" .NET assemblies.  In .NET, assembly names don't need to match their DLL names; in fact, an assembly can span more than a single DLL file!  In the same way, Dojo's module names don't have a one-to-one module-to-file mapping, and Dojo loads whatever physical files you need based on the logical name you request.  Very nice!  The analogy isn't perfect and, like I've already mentioned, it's a bit like the .NET namespace import model as well.

Dojo contains a number of very nice controls as part of its Dojo Widget Library, also known as Dijit.  Many of these are controls that most of us have wanted for years, but have just never had the time to mess with.  For example, Dojo includes a slider control, a dynamically expanding textarea, and a filtering select box (a declarative sketch of the latter follows below).  There's a whole host of others, but these are the ones that I personally have wanted to see in a professional framework for a long time.  There's also a progress bar control and a dialog control for those of you who are into business apps.  Dojo actually provides a tooltip dialog control as well.  This control, as the name suggests, shows a dialog as a tooltip.  The only time I've ever seen a control like this used was on the Facebook login screen, specifically when you check "Remember me".  Another interesting thing related to controls is that Dojo gives you the ability to transform controls into a more beautified form.  It's able to do this because it ships with a few CSS files that give Dojo its initial look and feel.  Furthermore, it also includes a number of themes.  You can see this in a few of the more basic demonstrations in the Dojo documentation.
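
For instance, here is a minimal sketch of a declarative filtering select, following the same dojoType pattern as the button example above (the option values are, of course, just placeholders):

<script type="text/javascript">
  dojo.require('dijit.form.FilteringSelect');
</script>

<select dojoType="dijit.form.FilteringSelect" id="state" name="state">
  <option value="AZ">Arizona</option>
  <option value="CA">California</option>
  <option value="TX">Texas</option>
</select>

As the user types, the list is filtered down to the matching options, and entries that don't match anything are flagged as invalid.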

Dojo also includes many layout controls, including a split container, an accordion, a tab control, and what is called a StackContainer.  This container shows a pane on the screen and gives you next and previous buttons so you can go back and forth between panes.  There's also a rich text editor control.  That should get some people's attention right there.  As cool as that is though, I don't think anything beats the grid control.  The grid is like a combination of ASP.NET's GridView and WPF's Grid.  It allows databinding like the GridView, but it also allows complex row and column adjustment like the Grid.  Technically it's not part of the core of Dojo, but it's incredibly amazing.  To see a great example of a Grid with its code, see this example.  The code for this Grid is so simple that you probably won't even believe your eyes.  Even still, the author of that example writes about the example here.  The Grid really is one of the most powerful controls I've ever seen in a client or server technology.  Again, can you see how Dojo is like a client-side ASP.NET?

It should go without saying at this point, but Dojo also includes various validation controls.  You can actually put an <input /> element on the screen and set dojoType="dijit.form.DateTextBox" and you get an entirely new animal that loads a calendar control when you click in the textbox.  If you prefer to type the date out by hand, the field will be validated automatically.  You can also validate against money.  Look at this example from the Dojo documentation:

<input type="text" name="income1" value="54775.53"
  dojoType="dijit.form.CurrencyTextBox"
  required="true"
  constraints="{fractional:true}"
  currency="USD"
  invalidMessage="Invalid amount.  Include dollar sign, commas, and cents." />

That's seriously detailed.  The data is validated as the user types.  If you want to validate numbers that don't represent money, you can use the NumberTextBox Dojo control (also set via the dojoType attribute).  Or, if you want more powerful validation, use a ValidationTextBox and set the regExp attribute to validate directly against a regular expression.  Your regular expression doesn't have to be inline in the regExp attribute either.  Take a look at this example from the Dojo documentation:

<input type="text" name="zip" value="00000"
  dojoType="dijit.form.ValidationTextBox"
  regExpGen="checkForFiveDigitZipCode"
  required="true"
  invalidMessage="Zip codes after 5, county name before then." />

Here you can see that the JavaScript function (checkForFiveDigitZipCode) is called for validation.  To see these examples run and to see more information on validation in Dojo see the validation page in their documentation.

At this point I think I should mention something.  As many of you know, I'm a strong standards advocate and an extremely outspoken opponent of the mere existence of Internet Explorer.  Having said that, laws exist for a purpose and, frankly, only within the bounds of that purpose.  The purpose of standards is to give us a common ground and to help us have cleaner, more professional work (my "having higher web standards" thing I'm always talking about).  In terms of web browsers, each browser needs to continually keep up with the standards.  Why?  Obviously so web developers can ship out content and not want to change their career every single time they realize that browser X, Y, or Z doesn't support a specific feature.  With this in mind, there is absolutely nothing wrong with Dojo using custom attributes on types.  Dojo is requiring new functionality, but it's also providing that functionality at the same time.  That which it requires, it provides.  This is completely legal within the bounds of the purpose of standards.  So, there is no reason for anyone to start crying about Dojo adding custom attributes.  There was a time when I was a standards extremist (~2003), going so far as to even use a custom DTD on my pages where I would use custom attributes (set via JavaScript), but I've since realized that standards extremism is practically a cult and have run from it.  Also, and you may want to sit down for this one, I think that Dojo holds nicely to semantic web principles.  Notice we aren't creating an input and procedurally making it do validation; rather, we are creating something that is a validation box.  It's not as semantic as a <validationtextbox /> would be, if it were to exist, but as with most things, semanticism (huh?) is a continuum.  If it weren't a continuum, <input type="hidden" /> shouldn't exist or ever be used (I would argue that this does in fact have some semantic value!)  Because of this, I don't see Dojo involving itself with the evils of pragmatism.  However, Dojo would be an evil pragmatic framework with little semantic structure if it instead tried to set up some type of configuration system by setting class names.  Can you even imagine the chaos?  I've done this before as a standards extremist and it's really nasty.

Another thing that's insanely cool about Dojo is the event system.  As we ASP.NET developers know, events aren't simply things you use on visual controls.  No, you use events to notify entities of (...drum roll please...) events.  So, you could have multiple entities monitoring a centralized entity, perhaps a service, and when that centralized entity sends out an update, all the other entities immediately receive the update.  It's the observer pattern, but you may know it as the publish/subscribe model.  Normally when you think of JavaScript events you think of events in terms of visual control events, just as you would with ASP.NET.  With Dojo, however, your event model gets an upgrade with a publisher/subscriber model.  Here's an example for you to ponder:

<script type="text/javascript">
  function Client(name) {
    this._name = name;
    dojo.subscribe('update', this, update);

    function update(args) {
      console.debug(this._name + ': ' + args);
    }
  }

  var Server = {
    sendUpdates: function(message) {
      dojo.publish('update', [message]);
    }
  };

  var clientA = new Client('Client A');
  var clientB = new Client('Client B');
  Server.sendUpdates('event occurred!');
</script>

One thing on which I would like to warn all my .NET colleagues is this: they use Java terminology.  They fire their events, whereas we like our events and think they are doing a great job, so we raise them.  Personally, I've never understood that terminology, especially in systems like the web that allow for event bubbling, where events are RAISED to the top.  OK, enough rant.  Before moving on though, just think about what I keep mentioning: this is like a fully fledged client-side ASP.NET.  You must surely be noticing that by now.

Dojo, of course, also gives you an XMLHttpRequest abstraction layer so that you don't need to mess with all that browser detection nonsense.  The syntax is rather similar to Prototype's very intuitive syntax.  This is fortunate, because not all frameworks have a nice abstraction layer.  I gave up on trying to figure out mooTools' abstraction layer a long time ago.  It's doable, but the complexity curve is very steep.  In Dojo, however, the complexity curve is relatively flat, like Prototype's.  Here's an example based on a Dojo documentation example of a simple XHR call (if you like, you can set 'json' instead of 'text' in handleAs).  This code isn't anything fancy, but that's kind of my point:

<script type="text/javascript">
dojo.xhrGet({
  url: '/file/1.txt',
  handleAs: 'text',
  timeout: 3000,
  load: function(response, ioArgs) {
    alert(response);
    return response;
  },
  error: function(response, ioArgs) {
    console.error('Status code: ', ioArgs.xhr.status);
    return response;
  }
});
</script>

As simple as this is though, you can do much more with Dojo's abstraction layer than what I've seen in any other framework.  For example, there's actually a dojo.io.iframe object to give you the ability to do iframe-based AJAX.  Dojo also includes the dojo.rpc object, which allows for incredibly powerful RPC calls.  Gone are the days of having to choose between creating your own end-to-end communication or completely using a flawed product.  You now have a nice middle ground for your AJAX service access.  In a sense it's kind of similar to .NET remoting, in how it's not SOAP, but it's not quite sockets either.  The dojo.rpc concept is amazing and quite possibly my favorite Dojo feature.  You can expect me to write more about this feature in the future.  I've been completely taken in by this incredible feature and can see an incredible number of applications for it.  If you want to see something else wildly awesome, check out the dojo.data data access layer.  The documentation needs to be developed a bit more, but it's wild.  Whereas dojo.rpc may be similar to .NET remoting, dojo.data is kind of like WCF binding.  You just connect to a built-in or custom data store and you can bind Dojo controls directly to it!  Bind directly to your Flickr.com data store or write your own.  For a few good examples of using this feature, check out this blog entry.

There are also features which we would expect from a client-side framework, like drag-n-drop support, animation support (though barely documented-- here's a PDF of an example), and DOM node selection (see dojo.query).  It also gives you control over internationalization (and Unicode encoding) and the power to handle the back button in AJAX applications (of course by using the # syntax).  There's also the ability to create object-oriented classes with inheritance; a short sketch follows below.  One downside to this feature is that Dojo went back to the days of C++ and impossibly complex object graphing by allowing multiple inheritance.  You may want to set a corporate guideline to stop people from doing this, lest your object graphs become completely unreadable.  In any case, you also have an abstraction for arrays (to help emulate JavaScript 1.6), cookies, strings, and dates.  You are also provided a mechanism for converting the data of an entire form into JSON data.
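
Here is a minimal sketch of that class mechanism using dojo.declare (the class names are just placeholders); note the second argument, which takes a superclass, or an array of them for the multiple inheritance I just warned about:

<script type="text/javascript">
  // declare a base class with a constructor
  dojo.declare('Animal', null, {
    constructor: function(name) { this.name = name; },
    speak: function() { console.debug(this.name + ' makes a sound'); }
  });

  // single inheritance: Dog extends Animal and overrides speak
  dojo.declare('Dog', Animal, {
    speak: function() { console.debug(this.name + ' barks'); }
  });

  var d = new Dog('Rex');
  d.speak(); // 'Rex barks'
</script>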

As if that weren't enough, Dojo also provides a unit testing framework called D.O.H.  You can do anything from simple asserts to full test cases, including grouping test cases (a small sketch follows below).  In addition to the unit testing, Dojo allows you to send informational and debug messages to the Firebug console.  The documentation is packed with more testing samples than you will know what to do with.  Most of them are for testing the Dojo framework itself, but these tests use D.O.H. and give you a world of insight into the variety of things you can do with Dojo.
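
For the curious, a registered test group looks roughly like this (a minimal sketch based on the patterns in Dojo's own test files; the group and test names are placeholders):

<script type="text/javascript">
  // typically executed via Dojo's test runner page, which loads D.O.H. itself
  doh.register('mathTests', [
    // a test can be a plain function...
    function testAddition() {
      doh.assertEqual(4, 2 + 2);
    },
    // ...or an object with a name and a runTest function
    {
      name: 'testTruth',
      runTest: function() {
        doh.assertTrue(true);
      }
    }
  ]);

  doh.run();
</script>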

As far as documentation goes, there is more documentation for Dojo than I have seen in all the other JavaScript/AJAX frameworks combined.  The online "Book of Dojo" is incredibly long.  In fact, some may say that it's too long.  The Dojo people thought of this though, and let you quickly get ROI from their "Dojo for the Attention-Impaired".  This page demonstrates the basic idea behind Dojo by showing a quick Hello World example.  I would recommend you skim through this page, do the demo, then skim through the rest of the book, doing demos as you go.  One thing I should mention about the documentation is that while there is a lot of it, it's very hard to read at times.  I had to read the event system documentation numerous times before I was able to get anything from it.  This is definitely something the Dojo guys should look into.

Another thing they should look into is their use of double quotes in their JavaScript documentation.  This is something most ASP.NET AJAX books do and it really makes the code hard to read and hard to manage (do you really want to escape every single double quote in your HTML controls? ouch!)  It makes about as much sense as using double quotes in your T-SQL code (which, yeah, would require a setting, but that's my point-- it's lame).  The Dojo documentation seems to alternate between authors who respect the JavaScript guideline (') and authors who don't even realize it exists (").  To make things worse, at times they use single quotes in their HTML!  There doesn't seem to be any consistency here.  I'm glad they didn't try to mix their code with any type of server-side work.  I don't even want to try to read double-quoted JavaScript in the midst of PHP, Java, or C# code.  That would be painful to read.  Other than these two concerns, the documentation was fairly exhaustive.  Some developers, however, prefer to learn by seeing.  If you're one of those, you can check out the official Dojo demos.

In terms of deployment, you actually don't even need to install Dojo.  It's on the AOL Content Delivery Network (CDN), so you can quickly include the Dojo entry file from their server and be done with it.  If you really want to download it to your system, you can hit up the Dojo web site and download it from there.  You could also head on over to the Dojo web site that parodies the script.aculo.us web site: dojo.moj.oe.  Also, remember that everything you need is accessible from the single Dojo entry file (often dojo.js) and that you use dojo.require( ) statements to bring in functionality at a module level, not a file level.  Therefore, the AOL CDN method should be perfect for most people.
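
The CDN include is a one-liner; it looks something like this (I'm assuming the 1.0.0 cross-domain build path here-- check the Dojo site for the current URL):

<script src="http://o.aolcdn.com/dojo/1.0.0/dojo/dojo.xd.js" djConfig="parseOnLoad: true" type="text/javascript"></script>

From there, dojo.require( ) calls work as usual; the modules are simply fetched from the CDN instead of your own server.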

Dojo is currently in version 1.0; however, this is an open source 1.0, not a Microsoft 1.0.  When Microsoft has an alpha, it's a preview.  When they have a beta 1, it's pretty nice.  When they have a beta 2, I'm usually using it in production.  When it's RTM as v1.0, everything breaks and I end up removing it.  Google products, as well as Firefox and Dojo, on the other hand are hardcore and stable when they hit 1.0.  Their idea of 1.0 is like Microsoft's idea of an SP2.  Shall I remind everyone that Gmail is STILL marked as beta after all these years?  Dojo is a stable system that has been in development and testing for an extremely long time.

Dojo really is a fascinating client-side web development framework which can bring great elegance as well as a great declarative programming model to your AJAX applications.  Furthermore, given its rich set of controls, Dojo is absolutely perfect for web-based business applications.  Dojo is also probably one of those great technologies destined for complete misunderstanding as a product comparable and/or equal to products that don't even come close to it.  There are people living today that still try to compare Internet Explorer with Firefox, VSS with Subversion, Gimp with Photoshop (oh yes, I went there!), the Zune (which is a media player) with the iPod Touch (which is an Internet device), and Gmail with Yahoo! Mail or Hotmail.  Compare them and you will see there is no comparison.  As I said at the beginning, this neither replaces Prototype nor script.aculo.us, which would be used in more minimal environments.  Dojo is a different framework to be used when a project has different requirements.  Besides, you should never choose one tool as your end-all-be-all for everything.  That's a naive way of thinking, unless you actually think it wise to cut your bread with a butter knife or spread your butter with a bread knife.  Rarely are things in life one-size-fits-all.  Frameworks are free; use them, but use them wisely.  Hopefully many of you will seriously consider using Dojo in your current or future AJAX and ASP.NET projects.


Dojr.NET (Dojo RPC Library .NET 2.0)



In my overview of Dojo, I mentioned that Dojo provides a nice service abstraction layer in the form of dojo.rpc.  This is an absolutely astounding feature, yet it's so simple.  Instead of writing all kinds of functions and setting up an XHR object, Dojo allows you to call server methods using a very simplified syntax.  The model should be familiar to anyone who has worked with SOAP services.  In these types of services, you are given a schema and, depending on what client you are using, you can create a client-side proxy for all interaction with the service.  This is how the dojo.rpc feature works.  When you want to access a service, give Dojo the appropriate service metadata it needs to create a proxy and just call your service functions on the proxy.

Using dojo.rpc

In Dojo, this schema is called a Simple Method Description (SMD) and looks something like this.

var d = {
  'methods':
    [
      {
        'name':'getServerTime',
        'parameters':[
          {'name':'format'}
        ]
      },
      {
        'name':'getServerTimeStamp',
        'parameters' :[
        ]
      }
    ],
    'serviceType':'JSON-RPC',
    'serviceURL':'/json/time/'
}

With this SMD data, you create a proxy by getting an instance of the dojo.rpc.JsonService object, passing the SMD to the constructor, like this:

var timeProxy = new dojo.rpc.JsonService(d);

From here you can call methods on the proxy and set a callback:

timeProxy.getServerTimeStamp( ).addCallback(function(r) { alert(r); });

Upon execution, this line will call the getServerTimeStamp method described in the SMD and route the output through the anonymous function set in the addCallback function.  If you would like, however, you can defer the callback by calling the service now and explicitly releasing the callback later.  In the following example, the first line calls the server immediately and the second releases the callback.

var deferred = timeProxy.getServerTimeStamp( );


deferred.addCallback(function(r) { alert(r); });

This is great, but what about the server?  As it turns out, Dojo sends JSON to the service.  You can see this for yourself by taking a peek at the Request.InputStream stream in ASP.NET:

StreamReader reader = new StreamReader(Request.InputStream);
String data = reader.ReadToEnd( );

Below is the data that was in the stream.  As you can see, this is extremely simple.

{\"params\": [], \"method\": \"getServerTimeStamp\", \"id\": 1}

Providing Server Functionality

Since we are working in .NET, we have at our disposal many mechanisms that can help us deal with various formats, some of which can really help simplify life.  As I explained in my XmlHttp Service Interop Series, providing communication between two different platforms isn't at all difficult, provided that you understand the wire format between them.  In part 3 of that same series, I explained how you could use XML serialization to quickly and powerfully interop with any XML service, including a semi-standard SOAP service.  Furthermore, you aren't limited to XML.  Provided the right serializer, you can do the same with any wire format.  For our purposes here, we need a JSON serializer.  One of my favorites is the Json.NET framework.  However, to keep things simple and to help us focus more on the task at hand, I'm going to use the .NET 3.5 DataContractJsonSerializer object.  If you are working in a .NET 2.0 environment with a tyrannical boss who despises productivity, you should check out Json.NET (or get a new job).

To begin our interop, the first thing we need is a type that will represent this JSON message in the .NET world.  Based on what we saw in the ASP.NET Input Stream, this should be easy enough to build:

[DataContract]
public class DojoMessage
{
    [DataMember(Name = "params")]
    public String[] Params;

    [DataMember(Name = "method")]
    public String Method;

    [DataMember(Name = "id")]
    public Int32 Id = 0;
}

Having that class in place, we can now deserialize ASP.NET's InputStream into an instance of this class using our DataContractJsonSerializer:

DataContractJsonSerializer s = new DataContractJsonSerializer(typeof(DojoMessage));
DojoMessage o = (DojoMessage)s.ReadObject(Request.InputStream);

That's it.  Now you have a strongly typed object where you can access the method and parameter information as you need.  From here it shouldn't be too hard for anyone to use this information to figure out what to do on the server.  After all the logic is in place, the only thing we have left to do is return the data, which isn't that big a deal at all.  The return data is basically plain text, but you can definitely send JSON back if you like.  If you would like to use JSON, you can even use the DataContractJsonSerializer to serialize an object to the ASP.NET Response.OutputStream object:

Object r = GetSomething(o);
s.WriteObject(context.Response.OutputStream, r);

What about a more high-level approach that will allow me to simply write my core functionality without messing with mechanics?  Anyone using ASP.NET AJAX has this already in both their ASMX and WCF/JSON abstractions, but I wanted this functionality for Dojo (and for direct AJAX access).  My requirements were that I wanted to be able to define an attributed service, register it, and move on.  Therefore, I built a Dojo RPC .NET 2.0 library called Dojr.NET (short for Dojo RPC, of course).  Dojr is probably the worst project name I've come up with to date, but it saves me from potential legal stuff from the Dojo Foundation.

Using Dojr.NET

To use Dojr.NET, create a class that inherits from DojoRpcServiceBase and apply the DojoOperationAttribute to each publicly exposed method.  Be sure to also set the dojo.rpc operation name in its constructor; this is the name the Dojo client will see.  Since .NET uses PascalCased methods and JavaScript uses camelCased functions, this is required.  Here is a complete sample class:

namespace NetFX.Web
{
    public class CalculatorService : DojoRpcServiceBase
    {
        [DojoOperation("add")]
        public Int32 Add(Int32 n1, Int32 n2) {
            return n1 + n2;
        }

        [DojoOperation("subtract")]
        public Int32 Subtract(Int32 n1, Int32 n2) {
            return n1 - n2;
        }
    }
}

After this, all you have to do is register the class as an HttpHandler in your web.config file.

<add verb="*" path="*/json/time/*" type="NetFX.Web.TimeService" />

At this point our Dojr.NET service is up and running, but how do we call it?  Actually, the same way you always do with dojo.rpc; nothing changes.  Believe it or not, this is a completely functional example:

var calcProxy = new dojo.rpc.JsonService('json/calc/?smd');
calcProxy.add(2, 3).addCallback(function(r) { alert(r); });

Automatic Service Method Description

But, how did the proxy obtain the required dojo.rpc metadata?  If you look closely at the address given to the proxy you will notice that it's suffixed with ?smd.  When a Dojr.NET service is suffixed with ?smd, it will automatically generate and return the service metadata.  This is similar to putting ?wsdl at the end of an ASMX URI.

Take a look at the metadata that's being automatically generated on the server via the ?smd suffix:

{
  "methods":[
    {
      "name":"add",
      "parameters":[
        {"name":"n1"},
        {"name":"n2"}
      ]
    },
    {
      "name":"subtract",
      "parameters":[
        {"name":"n1"},
        {"name":"n2"}
      ]
    }
  ],
  "serviceType":"JSON-RPC",
  "serviceURL":"http://localhost:3135/Dojo/json/calc/"
}

As you can see, Dojr.NET provides all the metadata required.  Literally all you have to do is inherit from DojoRpcServiceBase, apply the DojoOperationAttribute, and register the class to ASP.NET.  Everything else will be done for you.


Cwalina's Framework Engineering Lecture Posted



In my mind the single most important aspect of any system is usability.  Unless the context states otherwise, when I use the term "optimize" or "efficiency" I am always talking about usability optimization and efficiency.  Things can be fast with a small footprint, but if you can't figure out how to use something right away or continually confuse its methods for its fields, then it doesn't really matter.  Fortunately, Microsoft agrees with this.  The days of every company writing its own coding guidelines are gone and we .NET developers have been unified under the great Framework Design Guidelines ("FDG") that Krzysztof Cwalina and Brad Abrams have so graciously given us.

To help further the unity in the community, Krzysztof recently posted a lecture on his blog entitled "Framework Engineering: Architecting, Designing, and Developing Reusable Libraries".  Here is the abstract of the lecture:

This session covers the main aspects of reusable library design: API design, architecture, and general framework engineering processes. Well-designed APIs are critical to the success of reusable libraries, but there are other aspects of framework development that are equally important, yet not widely covered in literature. Organizations creating reusable libraries often struggle with the process of managing dependencies, compatibility, and other design processes so critical to the success of modern frameworks. Come to this session and learn about how Microsoft creates its frameworks. The session is based on experiences from the development of the .NET Framework and Silverlight, and will cover processes Microsoft uses in the development of managed frameworks.

This video is, of course, not the only video on such an incredibly critical topic.  Many years ago, Brad Abrams (and friends) gave a lecture series to Microsoft employees on the topic of framework design guidelines.  These videos don't just cover the critically important topic of naming guidelines, but also CLR performance topics, interoperability guidelines, security topics, as well as various others.  It's a video series that's essential to serious .NET development.

More important than these videos, though, is the classic work produced by Krzysztof and Brad entitled "Microsoft Framework Design Guidelines".  At the time of this writing, this book has 28 Amazon.com customer reviews and is still at five stars.  Look at a few of the review titles: "A must have for any C# Developer or Architect", "For the individual who wants to rise above the masses", "If you only ever buy one .NET book, make it this one", "AWESOME * 10 = MUST HAVE;" and my personal favorite: "Passionate About Quality?"  These reviews give you a good idea of the level of community acceptance that the framework design guidelines have.  One reviewer even said "I would pay $5 per page for this book, and have found it to be, by far, the most outstandingly useful technical book I've read."  This book covers in detail many of the aspects (and oftentimes more) that have been covered in the videos.  In fact, the videos are actually on the DVD that comes with the book.

The book is also not simply a set of laws.  Throughout the book, Microsoft architects and major Microsoft community leaders like Jeffrey Richter comment on various aspects of the framework.  Sometimes they explain why a rule is stated in a certain way and other times they emphasize how crucially important a rule is.  A few of the comments in the book explain problems in the .NET framework stemming from the fact that the guidelines were still in development (people used to say C# looked like Java-- well, many people used Java's nearly obfuscated coding standard!)  At one point in the book one of the authors describes a usability study for .NET streams and outright admits what most of us already know: .NET streaming is extremely non-intuitive!

Many times I hear people say that the success of the .NET framework comes from its extremely efficient garbage collection model, its flexible common language runtime (in contrast to Java's platform runtime), and its powerful JIT model.  All those things are crucial, but ease of use is even more at the heart of .NET.  Abstraction in framework design can be defined as the increasing of the semantic value or usability of any entity, and it's at this point where we can see .NET far outshine Java and PHP.  I've all but forgotten how to work with pointers, but it's when I forget my coding standard that I'll start to become obsolete.  It's been said that the success of Windows was driven by the very open nature of the Win16/Win32 API.  Similarly, I highly suspect that it's the beautiful abstraction with extremely high usability that explains .NET's sheer success.  There's only so much marketing can do; at some point a product has to stand on its own (even then, programmers can see through marketing!).  This beautiful abstraction and extremely high usability is of course due to the existence and enforcement of the FDG.

To be clear, when I talk about the FDG, I'm not simply talking about FxCop rules.  I typically break the .NET framework rules down into four levels: CLS compliance, FxCop compliance, FDG compliance, and the iDesign standard.  If you do not strictly enforce CLS compliance, then you may very well be completely stuck in the next version of .NET.  Who knows how strict non-Microsoft compilers will become.  FxCop will catch problems in your CLS compliance and it will also catch many of the FDG violations as well.  The FDG rules, however, also cover various aspects of security and performance that only a human can check.  Lastly, when people mention the FDG, many times what they really mean is the iDesign standard, edited by Microsoft Software Legend Juval Lowy.
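
As a quick illustration of the lowest level, CLS compliance is something the C# compiler itself will verify for you once you opt in (a minimal sketch; the class is just a placeholder):

using System;

[assembly: CLSCompliant(true)]

public class Calculator
{
    // the compiler warns here (CS3001/CS3002): unsigned types
    // aren't CLS-compliant in a public signature
    public UInt32 Add(UInt32 n1, UInt32 n2) {
        return n1 + n2;
    }
}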

In fact, I often use the terms "Framework Design Guidelines" and "iDesign standard" interchangeably.  They aren't the same thing, but in some contexts it's acceptable to mix them.  Whereas the FDG is primarily for the public interface of a framework, the iDesign standard covers both the public and the internal.  The term "iDesign standard" may not be familiar to all, but what it represents is.  It's been the de facto .NET coding standard since 2003.  In fact, when you crack open any APress, Wrox, or Sams Publishing book, you will probably be looking at code following the iDesign standard.  Further, the default settings of Visual Studio give you the iDesign code layout.

Every .NET developer knows it and just about everyone follows it.  Some may think the iDesign standard is optional, and since it covers private code, in a sense it is; but, to be sure, if you are following the FDG rules and the iDesign standard, you have immediately chopped the learning curve of your system by an enormous factor.  Also, if you ever go public with your application (i.e. go open source), you will need to make sure you follow the FDG standard (which includes CLS and FxCop compliance) and the iDesign standard.  Otherwise, your system will probably have virtually no acceptance.

In closing, I should mention that Krzysztof Cwalina and Brad Abrams are releasing the 2nd edition of their famous book, due September 29, 2008.  You can, of course, pre-order on Amazon.  You can be sure that I will!


Comic Strip: Enterprise .NET/PHP



Recently I ran across a really awesome web site called ToonDoo.  The web site basically allows you to create your own comic strips using a simple, yet feature-rich Flash application.  Today I learned about a few other web sites that do something similar.  In light of the nonsensical anti-.NET drivel I hear constantly from ignorant outsiders, I thought I would put a few of my encounters into a few strips.  Here is the first…

Comic Strip #2: .NET and PHP Source Code



Here's another conversation that I've had with various PHP programmers over the years.  Actually, this strip is a combination of three separate conversations wrapped up into one.  I think these conversations also give an accurate image of how the ignorant anti-Microsoft cult thinks.  Most of the time these people don't even know their own systems and often assume that since I'm a .NET programmer I'm a complete fool.

.NET and PHP Source Code Comic Strip

March 2008 Web Technology Update



Recently a bunch of technologies have been released and/or updated and I would like to mention a few of them briefly.

First and foremost, Silverlight 2 Beta 1 has finally been released and you may download it immediately.  There is also an accompanying SDK.  You can find a nice development tutorial series on Scott Guthrie's blog.  If you are already familiar with WPF, you can just skim this entire series in less than 5 minutes.  Given that this technology isn't the same as the full WPF and given that it's designed for the web, there will obviously be differences.  It's important to remember that Silverlight 2 isn't simply WPF for the web.  I would call WPF 3.5's XBAP support for IE/Firefox "WPF for the web".  No, this is possibly the biggest web technology improvement since the release of Firefox 1.0, which in turn was the biggest technology release since the printing press.  Alright, alright… since .NET 1.1.  Its support for the dynamic language runtime is going to completely revolutionize our web development.

When reading through Scott's tutorial series (seriously, at least skim it), it's interesting to note that Silverlight 2 allows cross-domain communication.  It does this by reusing the Flash communication policy files.  This is really awesome as it means that you can start accessing resources that Flash has been using for a while.  Being able to dynamically access resources from different domains is critical to the success of web architecture in the future.

Speaking of cross-domain communication, John Resig and I received a very depressing e-mail the other day telling us horrible news: cross-domain communication will probably be removed from Firefox 3 before its official release.  Apparently a bunch of paranoid anti-architects were complaining about the dreaded evils of being able to access resources from different domains.  Um, ok.  Fortunately, however, Firefox 3 has a feature called postMessage that allows you to get around this.  Malte Ubl has produced a library called xssinterface to demonstrate just this concept.  You could, of course, get around this completely with some iframe hacks or some other scripting magic.

Speaking of web browsers, I would like to bring people's attention to a technology that I've been following for some time now: Apple WebKit.  This is basically the brains inside Safari.  I absolutely love the Safari web browser.  It's far and away the easiest web browser to use.  It also has the same keyboard shortcuts as Firefox, which is how I'm able to use it.  It's also incredibly fast, though I should mention that it uses even more memory than Firefox.  My last instance passed 500MB.  Given its lack of an extension or configuration (i.e. about:config) system, it's obviously nowhere near the same caliber as Firefox.  It is, however, my primary web browser and has been since October '07.

The reason I mention WebKit is because, as very few people know, it is an open source project with nightly binaries released on the webkit.org web site.  One of the most interesting things about nightlies is that you can actually watch the progress of development as time goes on.  About every month or so I like to get the latest Firefox nightly.  It's always interesting to see the major experiments that the developers try about 2 months after a major release of Firefox.  There's always some really awesome "teaser" feature in there that later grows into a fully grown technology.  The same can be said for WebKit.

None of that is, however, my primary reason for mentioning WebKit.  As most web developers know, the Acid2 test has been the standard for checking a web browser's compliance with the CSS standard.  I've been pushing this test for a long time, but I've never pushed it as the only test.  There are many things that a web browser must do and many features a web browser must have before it can be considered appropriate for use.  Merely focusing on CSS, while completely ignoring DOM support, JavaScript, and general usability, can lead a browser to be as impossible to use as Opera 9.

As I've said time and time again, I'm not a CSS specialist.  Part of the definition of being a professional web developer is that I have a solid understanding of the inner workings of CSS, including specificity, the various selectors, and how to merge absolute, floating, and relative positioning on the same elements, tasks "coders" see as nearly impossible to learn.  However, my focus is on AJAX interaction as seen from the JavaScript and DOM worlds.  Therefore, we need to have a test for browsers that goes beyond the simple Acid2 test for CSS.  I'm not the only one thinking this way, because recently the Acid3 test was published and it tests CSS, JavaScript, and DOM support.  This is the new standard for web browsers.

So far no web browser has even gotten close, with released-browser scores ranging from the lowest, 39% in Safari, to the best, 50% in Firefox 2.0.0.12.  In terms of non-released software, however, Firefox 3.0b3 has a score between 59% and 61%, depending on its mood (update: b4 is steady at 67%), and the latest WebKit nightly has a score of 90% (watch WebKit progress on Acid3 at http://bugs.webkit.org/show_bug.cgi?id=17064).  That's phenomenal!  The newly released Internet Explorer 8 beta 1 has a score of 17%.  Those of you who have been naively praising the IE team for being YEARS late on getting near the Acid2 test need to wake up and realize this is 2008.  Time moves-- keep up.  Firefox has been close for the longest time and has always had the next-gen's next-gen JavaScript and DOM support, but has only recently completely crossed the finish line of the Acid2 test.  So, they are finally off my watch list there, but I will not stop bugging them until they pass the Acid3 test.

For more information on the Acid3 test, see John Resig's post entitled "Acid 3 Tackles ECMAScript".  He's about as passionate as I am for web standards and Firefox, and his blog is an invaluable resource for all things JavaScript.  His work is so good that I would like to take the time to plug the book he is currently writing: Secrets of the JavaScript Ninja.  I absolutely guarantee you that this book will redefine the entire world of JavaScript and will raise the bar incredibly out of the reach of "coders".  To all of you coders who think you know JavaScript, do a view-source on the Acid3 source code (you may want to bring a change of underwear with you).

Lastly, it's not necessarily a "new" technology, but it's so incredibly phenomenal that I need to mention it: Prototype 1.6.  It's amazing to me that people actually go out of their way to use ASP.NET AJAX 3.5 (I still find the ICallbackEventHandler interface more productive).  ASP.NET AJAX 3.5 is not nearly as bad as extremists think, but the design is still flawed.  Prototype, on the other hand, is absolutely incredible.  I've written about Prototype before, but version 1.6 is even more powerful.  There are A LOT of changes from Prototype 1.5.  It's so good that I no longer call it "prototype/script.aculo.us".  Script.aculo.us is a great animation system, but, honestly, the main reason I used it was for the DOM abstraction in the Builder object.  Prototype now has an Element object to help create DOM objects, thus allowing me to remove Script.aculo.us from most of my projects (it's not as complete as the Builder object, but it allows object chaining-- which greatly increases code readability, conciseness, and understanding!).  The Template object is also amazing as it gives you the ability to go far beyond simple String.Format formatting.  The new Class object for OOP is also great.  It's so much easier to use than Prototype 1.5's.  Also, being able to hide all elements with a particular CSS pattern in one shot is very useful! (for example, $$('div span .cell-block').invoke('hide')).  It even allows you to use CSS 3 selectors on the most dead of web browsers.  It really makes developing for Internet Explorer 6 and 7 bearable!  Even if I have to use ASP.NET AJAX 3.5, I'll still include prototype.js.  If you do anything with JavaScript, you need Prototype!

 

Links

Squid Micro-Blogging Library for .NET 3.5



A few years ago I designed a system that would greatly ease data syndication, data aggregation, and reporting.  The first two components of the system were repackaged and released early last year under the incredibly horrible name "Data Feed Framework".  The idea behind the system was twofold.  The first concept was that you write a SQL statement and you immediately get a fully functional RSS feed with absolutely no more work required.  Here's an example of a DFF SQL statement that creates an RSS feed of SQL Server jobs:

select Id=0,
Title=name,
Description=description
from msdb.dbo.sysjobs
where enabled = 1

The second part of DFF was its ASP.NET control named InfoBlock that would accept an RSS or ATOM feed and display it in a mini-reader window.  The two parts of DFF combine to create the following:

Given the following SQL statement (or more likely a stored procedure)...

select top 10
Id=pc.ContactID, 
Title=pc.FirstName + ' ' + pc.LastName + ': $' + convert(varchar(20), convert(numeric(10,2), sum(LineTotal))), 
Description='', 
LinkTemplate = '/ShowContactInformation/{id}'
from Sales.SalesOrderDetail sod
inner join Sales.SalesOrderHeader soh on soh.SalesOrderID = sod.SalesOrderID
inner join Person.Contact pc on pc.ContactID = soh.SalesPersonID
group by pc.FirstName, pc.LastName, pc.ContactID
order by sum(LineTotal) desc

...we have an automatically updating RSS feed and when that RSS feed is given to an InfoBlock, you get the following:

[image: the feed rendered in an InfoBlock]

InfoBlocks could be placed all over a web site or intranet to give quick and easy access to continually updating information.  The InfoBlock control would also register the feed with modern web browsers that had integrated RSS support.  Furthermore, since it was styled entirely with CSS, there was no reason for it to be a block at all.  It could be a horizontal list, a DOM-based window, or even a ticker, as CSS and modern AJAX techniques allow.

DFF relied on RSS.NET for syndication feed creation and on both RSS.NET and Atom.NET for aggregation.  It also used LLBLGen Pro a bit to access the data from SQL Server.  As I've promised with all my projects, they will be updated as new technologies are publicly released.  Therefore, DFF has been completely updated for .NET 3.5 technologies, including LINQ and WCF.

I've also decided to continue down my slippery slope of a change in product naming philosophy.  I've moved from the Microsoft marketing philosophy of "add more words to the title until it's so long to say that you require an acronym" to the Linux and O'Reilly approaches of "choose a random weird sounding word and leave it be" and "pick a weird animal", respectively.  I've also been moving more towards the idea of picking a cool name and leaving it as is.  This is in contrast to Microsoft's idea of picking an awesome name and then changing it to an impossibly long name right before release (i.e. Sparkle, Acrylic, and Atlas).  Therefore, I decided to rename DFF to Squid.  I think this rivals my Dojr.NET and Prominax (to be released-- someday) projects as having the weirdest and most random name I've ever come up with.  I think it may have something to do with SQL and uhhhh… something about a GUID.  Dunno.

Squid follows the same everything as DFF; however, the dependencies on RSS.NET and ATOM.NET were completely removed.  This was possible due to the awesome syndication support in WCF 3.5.  Also, all reliance on LLBLGen Pro was removed.  LLBLGen Pro (see my training video here) is an awesome system and is the only enterprise-class O/R mapping solution in existence.  NHibernate should not be considered enterprise-class and its usability is almost through the floor.  Free in terms of up-front costs does not mean free in terms of usability (something Linux geeks don't seem to get).  However, given that LINQ is built into .NET 3.5, I decided that all my shared and open-source projects should be using LINQ, not LLBLGen Pro.  The new LLBLGen Pro uses LINQ and, when it's released, should absolutely be used as the primary solution for enterprise-class O/R mapping.

Let me explain a bit about the new syndication feature in WCF 3.5 and how it's used in Squid.  Creating a syndication feed in WCF requires a WCF endpoint just like everything else in WCF.  This endpoint will be part of a service and will have an address, binding, and contract.  Nothing fancy yet as the sweetness is in the details.  Here's part of the contract Squid uses for its feed service (don't be jealous of the VS2008 theme-- see Scott Hanselman's post on VS2008 themes):

namespace Squid.Service
{
    [ServiceContract(Namespace = "http://www.netfxharmonics.com/services/squid/2008/03/")]
    public interface ISquidService
    {
        [OperationContract]
        [WebGet(UriTemplate = "GetFeedByTitle/{title}")]
        Rss20FeedFormatter GetFeedByTitle(String title);


        //+ More code here
    }
}

Notice the WebGet attribute.  This is applied to signify that this will be part of an HTTP GET request.  This relates to the fact that we are using a new WCF 3.5 binding called the WebHttpBinding.  This is the same binding used by JSON and POX services.  There are actually a few new attributes, each of which provides its own treasure chest (see later in this post where I mention a free chapter on the topic).  The WebGet attribute has an awesome property on it called UriTemplate that allows you to match parameters in the request URI to parameters in the WCF operation contract.  That's beyond cool.
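
To make the mapping concrete, here's a quick illustration (the host address is hypothetical):

// With UriTemplate = "GetFeedByTitle/{title}", a request such as
//   GET http://localhost:2001/FeedService.svc/GetFeedByTitle/news
// calls GetFeedByTitle("news"); the {title} segment of the URI binds
// to the String title parameter of the operation.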

The service implementation is extremely straightforward.  All you have to do is create a SyndicationFeed object, populate it with SyndicationItem objects, and return it in the constructor of the Rss20FeedFormatter.  Here's a non-Squid example:

SyndicationFeed feed = new SyndicationFeed();
feed.Title = new TextSyndicationContent("My Title");
feed.Description = new TextSyndicationContent("My Desc");
List<SyndicationItem> items = new List<SyndicationItem>();
items.Add(new SyndicationItem()
{
    Title = new TextSyndicationContent("My Entry"),
    Summary = new TextSyndicationContent("My Summary"),
    PublishDate = new DateTimeOffset(DateTime.Now)
});
feed.Items = items;
//+
return new Rss20FeedFormatter(feed);

You may want to make note that you can create an RSS or ATOM feed directly from a SyndicationFeed instance using the SaveAsRss20 and SaveAsAtom10 methods.
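
For example, here's a minimal sketch of writing the feed out as RSS 2.0 (this assumes the "feed" instance from the previous example; the file name is hypothetical):

using System.Xml;
//+
using (XmlWriter writer = XmlWriter.Create("feed.rss"))
{
    //+ SaveAsAtom10 works the same way for an ATOM feed
    feed.SaveAsRss20(writer);
}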

As with any WCF service, you need a place to host it and you need to configure it.  To create a service, I simply throw down a FeedService.svc file with the following page directive (I'm really not trying to have the ugliest color scheme in the world-- it's just an added bonus):

<%@ ServiceHost Service="Squid.Service.SquidService" %>

The configuration is also fairly straightforward; all we have is our previously mentioned endpoint with an address (blank, to use FeedService.svc directly), binding (webHttpBinding), and contract (Squid.Service.ISquidService).  However, you also need to remember to add the webHttp behavior or else nothing will work for you.

<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="FeedEndpointBehavior">
        <webHttp/>
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="Squid.Service.SquidService">
      <endpoint address=""
                binding="webHttpBinding"
                contract="Squid.Service.ISquidService"
                behaviorConfiguration="FeedEndpointBehavior"/>
    </service>
  </services>
</system.serviceModel>

That's seriously all there is to it: write your contract, write your implementation, create a host, and set configuration.  In other words, creating a syndication feed in WCF is no different than creating a WsHttpBinding or NetTcpBinding service.  However, what about reading an RSS or ATOM feed? This is even simpler.

To read a feed, all you have to do is create an XML reader with the data source of the feed and pass that off to the static Load method of the SyndicationFeed class.  This will return an instance of SyndicationFeed, which you may iterate or, as I'm doing in Squid, transform with LINQ.  I actually liked how my server control used an internal repeater instance and therefore wanted to continue to use it.  So, I kept my ITemplate object (RssListTemplate) the same and used the following LINQ to transform a SyndicationFeed to what my ITemplate was already using:

Object bindingSource = from entry in feed.Items
                       select new SimpleFeedEntry
                       {
                           DateTime = entry.PublishDate.DateTime,
                           Link = entry.Links.First().Uri.AbsoluteUri,
                           Text = entry.Content != null ? ((TextSyndicationContent)entry.Content).Text : entry.Summary.Text,
                           Title = entry.Title.Text
                       };
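
For completeness, obtaining the SyndicationFeed instance in the first place is just as simple; here's a minimal sketch (the feed address is hypothetical):

using System.Xml;
using System.ServiceModel.Syndication;
//+
using (XmlReader reader = XmlReader.Create("http://www.example.com/feed"))
{
    //+ the static Load method does all the parsing work
    SyndicationFeed feed = SyndicationFeed.Load(reader);
}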

Thus, with .NET 3.5 I was able to remove RSS.NET and ATOM.NET completely from the project.  LINQ also, of course, helped me with my database access and therefore allowed me to remove my dependency on my LLBLGen Pro generated DAL:

using (DataContext db = new DataContext(Configuration.DatabaseConnectionString))
{
    var collection = from p in db.FeedCreations
                     where p.FeedCreationTitle == title
                     select p;
    //+ More code here
}

Thus, you can use Squid in your existing .NET 3.5 system with little impact on anything.  Squid is what I use in my Minima blog engine to provide the boxes of information in the sidebar.  I'm able to modify the data in the Snippet table in the Squid database to modify the content and order of my sidebar.  Of course, I can also easily bring in RSS/ATOM content from the web with this as well.

You can get more information on the new web support in WCF 3.5 by reading the chapter "Programmable Web" (free chapter) in the book Essential WCF for .NET 3.5 (click to buy).  This is an amazing book that I highly recommend to all WCF users.

Links

NetFXHarmonics DevServer Released



Two months ago I started work on a project to help me in my AJAX and SOA development.  What I basically needed was a development web server that allowed me to start up multiple web servers at once, monitor server traffic, and bind to specific IP interfaces.  Thus, the creation of NetFXHarmonics DevServer.  I built it completely for myself, but others started to ask for it as well.  When the demand for it became stronger, I realized that I needed to release the project on the web.  Normally I would host it myself, but given the interest from the .NET community, I thought I would put it on CodePlex.  I've only cried twice since putting it on CodePlex, but I'll survive.

NetFXHarmonics DevServer is a web server hosting environment built on WPF and WCF technologies that allows multiple instances of Cassini-like web servers to run in parallel. DevServer also includes tracing capabilities for monitoring requests and responses, request filtering, automatic ViewState and ControlState parsing, visually enhanced HTTP status codes, IP binding modes for both local-only as well as remote access, and easy to use XML configuration.

Using this development server, I am able to simultaneously start multiple web sites and very quickly view everything that happens over the wire, and therefore easily debug JSON and SOAP messages flying back and forth between client and server and between services.  This tool has been a tremendous help for me in the past few months to discover exactly why my services are tripping out without having to enable WCF tracing.  It's also been a tremendous help in managing my own web development server instances for all my projects, each having 3-5 web sites (or segregated service endpoints).

Let me give you a quick run down of the various features in NetFXHarmonics DevServer with a little discussion of each feature's usage:

XML Configuration

NetFXHarmonics DevServer has various projects (and therefore assemblies) with the primary being DevServer.Client, the client application which houses the application's configuration.

In the app.config of DevServer.Client, you have a structure that looks something like the following:

<jampad.devServer>
</jampad.devServer>

This is where all your configuration lives and the various parts of this will be explained in their appropriate contexts in the discussions that follow.

Multiple Web Site Hosting

Inside of the jampad.devServer configuration section in the app.config file, there is a branch called <servers /> which allows you to declare the various web servers you would like to load.  This is all that's required to configure servers.  Each server requires a friendly name, a port, a virtual path, and the physical path.  Given this information, DevServer will know how to load your particular servers.

<servers>
  <server key="SampleWS1" name="Sample Website 1" port="2001"
          virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  </server>
  <server key="SampleWS2" name="Sample Website 2" disabled="true" port="2003"
          virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite2">
  </server>
</servers>

If you want to disable a specific server from loading, use the "disabled" attribute.  All disabled servers will be completely skipped in the loading process.  On the other hand, if you would like to load a single server, you can actually do this from the command line by setting a server key on the <server /> element and by accessing it via a command line argument:

DevServer.Client.exe -serverKey:SampleWS1

In most scenarios you will probably want to load various sets of servers at once.  This is especially true in properly architected service-oriented solutions.  Thus, DevServer includes a concept of startup profiles.  Each profile will include links to a number of keyed servers.  You configure these startup profiles in the <startupProfiles /> section.

<startupProfiles activeProfile="Sample">
  <profile name="Sample">
    <server key="SampleWS1" />
    <server key="SampleWS2" />
  </profile>
</startupProfiles>

This configuration block lives parallel to the <servers /> block and the inclusion of servers should be fairly self-explanatory.  When DevServer starts it will load the profile in the "activeProfile" attribute.  If the activeProfile block is missing, it will be ignored.  If the activeProfile states a profile that does not exist, DevServer will not load.  When using a startup profile, the "disabled" attribute on each server instance is ignored.  That attribute is only for non-startup profile usage.  An activeProfile may also be set via command line:

DevServer.Client.exe -activeProfile:Sample

This will override any setting in the activeProfile attribute of <startupProfiles />.  In fact, the "serverKey" command line argument overrides the activeProfile <startupProfiles /> attribute as well.  Therefore, the order of priority is as follows: command line arguments override profile configuration, and profile configuration overrides the "disabled" attribute.

Most developers don't work on only one project or with only one client.  Or, even if they do, they surely have their own projects as well.  Therefore, you may have even more servers in your configuration:

<server key="ABCCorpMainWS" name="Main Website" port="7001"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\Website">
</server>
<server key="ABCCorpKBService" name="KB Service" port="7003"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\KnowledgeBaseService">
</server>
<server key="ABCCorpProductService" name="Product Service" port="7005"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\ProductService">
</server>

These would be grouped together in their own profile with the activeProfile set to that profile.

<startupProfiles activeProfile="ABCCorp">
  <profile name="ABCCorp">
    <server key="ABCCorpMainWS" />
    <server key="ABCCorpKBService" />
    <server key="ABCCorpProductService" />
  </profile>
  <profile name="Sample">
    <server key="SampleWS1" />
    <server key="SampleWS2" />
  </profile>
</startupProfiles>

What about loading servers from different profiles?  Well, think about it... that's a different profile:

<startupProfiles activeProfile="ABCCorpWithSampleWS1">
  <profile name="ABCCorpWithSampleWS1">
    <server key="SampleWS1" />
    <server key="ABCCorpMainWS" />
    <server key="ABCCorpKBService" />
    <server key="ABCCorpProductService" />
  </profile>
</startupProfiles>

One of the original purposes of DevServer was to allow remote non-IIS access to development web sites.  Therefore, in DevServer you can use the <binding /> configuration element to set either "loopback" (or "localhost") to only allow access from your machine, "any" to allow web access from all addresses, or a specific IP address to bind the web server to a single IP address so that only systems with access to that IP on that interface can access the web site.

In the following example, the first web site is only accessible by the local machine and the second is accessible by others.  This comes in handy both for testing in a virtual machine and for quickly doing demos.  If your evil project manager (forgive the redundancy) wants to see something, bring the web site up on all interfaces and he can poke around from his desk and then have all his complaints and irrational demands ready when he comes to your desk (maybe you want to keep this feature secret).

<server key="SampleWS1" name="Sample Website 1" port="2001"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  <binding address="loopback" />
</server>
<server key="SampleWS2" name="Sample Website 2" port="2003"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite2">
  <binding address="any" />
</server>

Web Site Settings

In addition to server configuration, there is also a bit of general configuration that applies to all instances.  As you can see from the following example, you can add default documents to the existing defaults and you can also set up content type mappings.  A few content types already exist, but you can override them, as the example shows.  In this example, where ".js" is normally sent as text/javascript, you can override it to go to "application/x-javascript" or to something else.

<webServer>
  <defaultDocuments>
    <add name="index.jsx" />
  </defaultDocuments>
  <contentTypeMappings>
    <add extension=".jsx" type="application/x-javascript" />
    <add extension=".js" type="application/x-javascript" override="true" />
  </contentTypeMappings>
</webServer>

Request/Response Tracing

One of the core features of DevServer is the ability to do tracing on the traffic in each server.  Tracing is enabled by adding a <requestTracing /> configuration element to a server and setting the "enabled" attribute to true.

<server key="SampleWS1" name="Sample Website 1" port="2001"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  <binding address="loopback" />
  <requestTracing enabled="true" enableVerboseTypeTracing="false" enableFaviconTracing="true" />
</server>

This will have request/response messages show up in DevServer, which will allow you to view status code, date/time, URL, POST data (if any), response data, request headers, and response headers, as well as parsed ViewState and ControlState for both the request and response.  In addition, each entry is color coded based on its status code.  Different colors will show for 301/302, 500+, and 404.

[image: the DevServer request/response trace list]

When working with the web, you don't always want to see every little thing that happens all the time.  Therefore, by default, you only trace common text-based types like HTML, CSS, JavaScript, JSON, XAML, Text, and SOAP and their content.  If you want to trace images and other things going across, then set "enableVerboseTypeTracing" to true.  However, since there is no need to see big blobs of image data, the data of binary types is not sent to the trace viewer even with enableVerboseTypeTracing.  You can also toggle both tracing and verbose type tracing on each server while it's running.

There's also the ability to view custom content types without seeing all the images and extra types.  This is the purpose of the <allowedContentTypes /> configuration block under <requestTracing />, which is parallel to <servers />.

<requestTracing>
  <allowedContentTypes>
    <add value="application/x-custom-type" />
  </allowedContentTypes>
</requestTracing>

In this case, responses of content-type "application/x-custom-type" are also traced without needing to turn on verbose type tracing.

However, there is another way to control this information.  If you want to see all requests, but want the runtime ability to see various content types, then you can use a client-side filter in the request/response list.  In the box immediately above the request/response list, you can type something like the following:

verb:POST;statuscode:200;file:css;contentType:text/css

Filtering will occur as you type, allowing you to find the particular request you are looking for.  The filter is NOT case sensitive.  You can also clear the request/response list with the clear button.  There is also the ability to copy/paste the particular headers that you want from the headers list by using typical SHIFT-clicking (for range selection) and CTRL-clicking (for single selection).

Request/Response monitoring actually goes a bit further by automatically parsing both ViewState and ControlState for both request (POST) and response data.  Thanks goes to Fritz Onion for granting me permission to use his ViewState parser class in DevServer.

As a Training Tool

When I announce any major project, I always provide an "as a training tool" section to explain how the project can be used for personal training.  NetFXHarmonics DevServer is built using .NET 3.5 and relies heavily on LINQ and WCF with a WPF interface.  It also uses extensive .NET custom configuration for all server configuration.  In terms of LINQ, you can find many examples of how to use both query expression syntax and extension method syntax.  When people first learn LINQ, they think that LINQ is an O/R mapper.  Well, it's not (and probably shouldn't be used for that in enterprise applications!  There is only one enterprise-class O/R mapper: LLBLGen Pro).  LINQ allows Language INtegrated Query in both C# and VB.  So, in DevServer, you will see heavy reliance on LINQ to search List<T> objects and also to transform LINQ database entities to WCF DTOs.

DevServer also relies heavily on WCF for all inter-process communication via named pipes.  The web servers are actually hosted inside of a WCF service, thus segregating the web server loader from the client application in a very SOA friendly manner.  The client application loads the service and then acts as a client to the service, calling on it to start, stop, and kill server instances.  WCF is also used to communicate the HTTP requests inside the web server back to the client, which is itself a WCF service to which the HTTP request is a client.  Therefore, DevServer is an example of how you can use WCF to communicate between AppDomains.
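
A minimal sketch of the named pipe idea follows; all type names and addresses here are hypothetical, not DevServer's actual ones:

using System;
using System.ServiceModel;
//+
ServiceHost host = new ServiceHost(typeof(WebServerHost), new Uri("net.pipe://localhost/devServer"));
host.AddServiceEndpoint(typeof(IWebServerHost), new NetNamedPipeBinding(), "host");
host.Open();
//+ the client application then calls across the AppDomain boundary
IWebServerHost client = ChannelFactory<IWebServerHost>.CreateChannel(
    new NetNamedPipeBinding(),
    new EndpointAddress("net.pipe://localhost/devServer/host"));
client.Start("SampleWS1");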

The entire interface in DevServer is a WPF application that relies heavily on WPF binding for all visual information.  All status information is in a collection to which WPF binds.  Not only that, but all request/response information is also in a collection.  WPF simply binds to the data.  Using WPF, no event handling was required to say "on a click event, obtain SelectedIndex, pull data, then set the text of these TextBox instances".  In WPF, you simply have normal everyday data and WPF controls bind directly to that data, being automatically updated via special interfaces (i.e. INotifyPropertyChanged and INotifyCollectionChanged) or the special generic ObservableCollection<T>.
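
For instance, here's a minimal sketch of the collection side of that (RequestEntry and lstRequest are hypothetical names):

using System.Collections.ObjectModel;
//+
ObservableCollection<RequestEntry> requestList = new ObservableCollection<RequestEntry>();
lstRequest.ItemsSource = requestList;
//+ no event handling required; the bound control updates itself
requestList.Add(new RequestEntry { Url = "/Default.aspx", StatusCode = 200 });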

Since the bindings are completely automated, there also need to be ways to "transform" data.  For example, in the TabItem header I have a little green or red icon showing the status of that particular web server instance.  There was no need to handle this manually.  There is already a property on my web server instance that has a status.  All I need to do is bind the image to my status enumeration and set a TypeConverter which transforms the enumeration value to a specific icon.  When the enumeration is set to Started, the icon is green; when it says "Stopped", the icon is red.  No events are required and the only code required for this scenario is the quick creation of a TypeConverter.
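
Here's a rough sketch of that transformation idea using WPF's IValueConverter (the names below are hypothetical, not DevServer's actual ones):

using System;
using System.Globalization;
using System.Windows.Data;
//+
public class StatusToIconConverter : IValueConverter
{
    public Object Convert(Object value, Type targetType, Object parameter, CultureInfo culture)
    {
        //+ green when started, red otherwise
        return (ServerStatus)value == ServerStatus.Started ? "Images/Green.png" : "Images/Red.png";
    }

    public Object ConvertBack(Object value, Type targetType, Object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}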

Therefore, DevServer is an example of WPF databinding.  I've heard people say that they are more into architecture and WCF and therefore have no interest in learning WPF.  This statement makes no sense.  If you don't want to mess with UI stuff, you need to learn WPF.  Instead of handling events all over the place and manually setting data, you can do whatever it is you do and have WPF just bind to your data.  When it comes to creating quick client applications, WPF is a much more productive platform than Windows Forms… or even the web!

Links

NetFXHarmonics on CodePlex



Well, I finally broke down.  My public projects are now freely available for download on CodePlex.  Here are the current projects on CodePlex:

As far as creating "releases", these are shared-source/open-source projects and, in the model I'm following, "releases" are always going to be obsolete.  Therefore, I will provide ZIP versions of the archived major revisions of a project and the current revision will always be available as source code.  The only exception to this may be DevServer, for which I may do monthly releases or releases based upon major upgrades.  I'm currently working on new major revisions for a few other projects and when they are completed, I will then post them on CodePlex as well.

As a reminder, my projects are always architected to follow the current best practices and idioms for a particular technology and are therefore often fully re-architected based on the current technology.  The reason I do this is for the simple reason that my core specialty is training (technology or not) and that's the driving principle in my projects.  Therefore, on each of my projects there is an "As a Training Tool" section that will explain that project's technology and architecture as well as what else you might be able to learn from it.

As a final note, SVNBridge is working OK for me and has really helped me get over the CodePlex hurdle.  Scott Hanselman was kind enough to encourage me to try SVNBridge again.  I'm honestly glad I did.  The Team System 2008 Team Explorer client which integrates into Visual Studio works every now and again, but I got absolutely sick of everything locking up every time I would save a file.  Not even a check-in!  A simple local save!  How people put up with "connected" version control systems is beyond me.  Do people not realize that Subversion does locking too?  Anyways, SVNBridge works great for both check outs and commits (we don't "check in" in the Subversion world-- we use transactional terminology).  If you want Visual Studio 2008 integration AND speed and power and flexibility with CodePlex, get VisualSVN.  It's an add-on for VS2008 that uses TortoiseSVN behind the scenes.  With that, depending on my mood I can commit in both VS2008 (what I would do when working on refactoring or something) and in the Windows shell (what I would do when working with JavaScript files in the world's best JavaScript IDE: Notepad2).

Spring 2008 Sabbatical



Starting May 23rd, I'm taking another sabbatical to work on my company projects, to continue my seminary work, and to work on my book (to be clear: sabbatical != vacation).  During this time I will be accepting part-time AJAX, WCF, ASP.NET (no graphics work!-- hire a professional graphic designer; they are worth the money!), or general C# 3.0 and .NET 3.5 telecommuting consulting.  I'll assist in projects, but I'm not going to be able to work as senior architect on any projects.  Also remember, as a web developer, it's my duty to make sure my projects work in Mozilla, Opera, Safari, and IE and are in no way IE-specific.  IE-only environments are the absolute most difficult to work with.

Also keep in mind that this is 2008, not 1988, and the primary purpose of modern technology is to allow us to have simpler lives; just about every single aspect of our technology has its root in the Internet allowing us to communicate from anywhere.  What's the point in having web casting and online meeting abilities, or online white boarding, or web-based project management software, or even Google Office if you aren't going to use them in a meaningful way?  Why have e-mail at all if you are going to absolutely rely on the ability to go to the person's office?  The addiction to physical contact is something that needs to be broken in the 21st century.  Stop managing with your physical "field of view" and start managing by results.

I'm a web developer/architect, not a piano mover; I don't need to be in a physical office.  If you are into technology at all, you are into moving your physical resources into a logical cloud.  If I've said it once, I've said it a million times: your associates are your greatest resource and should, therefore, be even more in a logical cloud (as they are humans and would appreciate it more!)  It is inconsistent to pursue logical management of resources and require physical management of personnel.  Not only that, but it costs a lot less (no office space required!)  If your employees don't have enough discipline to work from home, what makes you think they are working in their cubes?  Unless you are working off the failed notion of "hourly management" instead of being a results-oriented manager, you won't have a problem with 100% telecommuting.  Results matter, not "time".  Also, if you don't trust your employees, well… maybe you hired the wrong people (or maybe you have trust issues in general?)  Trust is the foundation of all life.  I could speak volumes on this topic, but I'll leave that to the expert: Timothy Ferriss.  See his blog or get his book for more information.  I'm only an anonymous disciple of his; he is the master and authority on this topic.  Therefore, send your flames (read: insecurities) his way (after you read his book!-- audio also available; they are both worth 100x their weight in gold!)  See also Scott Hanselman's interview with Timothy Ferriss.  His YouTube page is also available.

With regards to the book, let me simply say that it's generically about AJAX communication and I'm not going to give out too many specific details on the project at this point, but I will say this: AJAX + SOA - CSS + Prototype + (ASP.NET Controls) - (ASP.NET AJAX) + WCF + (.NET 2.0 Service Interop) + Silverlight + Development Tools.  Also, I reserve the right to turn it into a video series (likely), make it a completely learning set of reading + video series (even more likely!), or to completely chuck the project.  I don't like to do things the classical way, so whatever I do, you can bet on the fact that I won't do the traditional "book".  As I've always said, the blog is the new book, but for this I think I may use a different paradigm.  I've turned down two book offers so far because I absolutely refuse to throw more paper on a bookshelf or do something that's been done a million times before.

If you are moving from ASP to ASP.NET, from PHP to ASP.NET, from ASMX to WCF 3.5 or want to add AJAX to your solutions drop me an e-mail and let's talk.

Architectural Overview: Using LINQ in WCF



Today I would like to give an architectural overview of my usage of LINQ.  This may actually become the first in a series of architectural discussions on various .NET and AJAX technologies.  In this discussion, I'm going to be talking about the architecture of the next revision of my training blog engine, Minima.  Since the core point of any system is that which goes into and comes out of the system, the goal of this commentary will be to get to the point where LINQ projects data into WCF DTOs.  Let me start by explaining how I organize my types.  Some of you will find this boring, but it's amazing how many times I get questions on this topic.  For good reason too!  These questions show that a person's priorities are in the right place as your type, namespace, and file organization is critical to the manageability and architectural clarity of your system.

However, before we get started, let me state briefly that, as I've said in my post entitled SQL Server Database Model Optimization for Developers, when you design your database structure you should design it with your O/R mapper in mind.  If you don't, then you will probably fall into all kinds of problems as my post describes.  This is incredibly important; however, if you keep to normal everyday normalization procedures, you are probably doing OK for the most part anyway.  Since I've written about that before, there's no reason for me to go into detail here.  Just know that if your database design sucks, your application will probably suck too.  Don't build your house on the sand.

In terms of LINQ, I actually use the VS2008 "LINQ to SQL Classes" template to create the LINQ information.  In most every other area of technology, it's a good practice to avoid wizards and templates like the plague, but when it comes to O/R mapping, you need to be using an automated tool.  If your O/R mapper requires you to do any work (…NHibernate…coughcough*), then you can't afford to work with it.  You need to be focusing on the business logic of your system, not playing around with mechanical nonsense.  As I've said in other contexts, stored procedures and ad hoc SQL are forms of unmanaged code.  When you are managing the mechanics of a system yourself, it's, by definition, unmanaged.  Stored procedures and ad hoc SQL are to LLBLGen/LINQ as ASP/PHP is to ASP.NET as C++ is to .NET languages.  If you are managing the mechanical stuff yourself, you are working with unmanaged code.  When it comes to using managed code, in the context of database access, this is the point of an O/R mapper.  Furthermore, if the O/R mapping software you are using requires you to write up templates or do manual mapping, that's obviously not completely managed code.

Now, when I create LINQ classes, I will create one for each "architectural domain" of the system that I deem necessary.  For example, in a future release of Minima, there will be a LINQ class to handle my HttpHandler and UrlRewriting subsystem and another LINQ class to handle blog interaction.  There needs to be this level of flexibility or my WCF services will know too much about my web environment, and my web site (a WCF client) will then have direct access to the data which the WCF service is intended to abstract.  Therefore, there will be a LINQ class for web site specific mechanics and another LINQ class for service specific mechanics.  Also, when I create the class for a particular domain, I will give it a simple name with the suffix of LINQ.  So, my Minima core LINQ class is CoreLINQ.cs and my Minima service LINQ class is ServiceLINQ.cs.  Simple.

Upon load of the LINQ designer, either after or before I drop in the specific tables required in that particular architectural domain, I'll set my context namespace to <SimpleName>.Data.Context and my entity namespace to <SimpleName>.Data.Entity.  For example, in the Minima example, I'll then have Core.Data.Context and Core.Data.Entity.  One may argue that there's nothing really going on in Core.Data.Context, to which I must respond: yeah, well there's already a lot going on in Core.Data (other data related non-LINQ logic I would create) and Core.Data.Entity.  The reason I say "after or before I drop in the specific tables" is to emphasize the fact that you can change this at a later point.  It's important to keep in mind at this point that LINQ doesn't automatically update its schema with the schema from your database.  LLBLGen Pro does have this feature built in and it does the refreshing in a masterful way, but currently LINQ doesn't have this ability.  Therefore, to do a refresh, you need to do a "CTRL-A, Delete" to delete all the tables, do a refresh in Server Explorer, and then just re-add them.  It's not much work.

Now, moving on to using LINQ.  When I'm working with both LINQ entities (or LLBLGen entities or whatever) and WCF DTOs in my WCF service, I do not bring in the LINQ entity namespace.  The ability to import types in from another namespace is one of the most powerful yet underappreciated features in all of .NET (um… JavaScript needs it!); however, when you have a Person entity in LINQ and a Person DTO, things can get confusing fast.  Therefore, to avoid all potential conflicts, the import is left out and I, instead, keep a series of type aliases at the top of my service classes just under the namespace imports.  Notice also the visual signal in the BlogEntryXAuthor name.  This tells the developer that this is a many-to-many linking table.  In this case it's in the database schema, but if it weren't in there, I could easily alias it as BlogEntryXAuthorLINQ without affecting anyone else.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
//+
using DataContext = Minima.Service.Data.Context.MinimaServiceLINQDataContext;
using AuthorLINQ = Minima.Service.Data.Entity.Author;
using CommentLINQ = Minima.Service.Data.Entity.Comment;
using BlogLINQ = Minima.Service.Data.Entity.Blog;
using BlogEntryLINQ = Minima.Service.Data.Entity.BlogEntry;
using BlogEntryUrlMappingLINQ = Minima.Service.Data.Entity.BlogEntryUrlMapping;
using BlogEntryXAuthorLINQ = Minima.Service.Data.Entity.BlogEntryAuthor;
using LabelLINQ = Minima.Service.Data.Entity.Label;
using LabelXBlogEntryLINQ = Minima.Service.Data.Entity.LabelBlogEntry;
using UserRightLINQ = Minima.Service.Data.Entity.UserRight;
//+

Next, since we are in the context of WCF, we need to discuss validation of incoming information.  The following method is an implementation of a WCF service operation.  As you can see, when a user sends in an e-mail address, there is an immediate validation on the e-mail address that retrieves the author's LINQ entity.  This is why the validation isn't being done in a WCF behavior (even though there are tricks to get data from a behavior too!)  You may also note my camelCasing of instances of LINQ entities.  The purpose of this is to provide an incredibly obvious signal to the brain that this is an object, not simply a type (…as is the point of almost all the Framework Design Guidelines-- buy the book!; 2nd edition due Sept 29 '08).

//- @GetBlogMetaData -//
[MinimaBlogSecurityBehavior(PermissionRequired = BlogPermission.Retrieve)]
public BlogMetaData GetBlogMetaData(String blogGuid)
{
    using (DataContext db = new DataContext(ServiceConfiguration.ConnectionString))
    {
        //+ ensure blog exists
        BlogLINQ blogLinq;
        Validator.EnsureBlogExists(blogGuid, out blogLinq, db);
        //+
        return new BlogMetaData
        {
            Description = blogLinq.BlogDescription,
            FeedTitle = blogLinq.BlogFeedTitle,
            FeedUri = new Uri(blogLinq.BlogFeedUrl),
            Guid = blogLinq.BlogGuid,
            Title = blogLinq.BlogTitle,
            Uri = new Uri(blogLinq.BlogPrimaryUrl),
            CreateDateTime = blogLinq.BlogCreateDate,
            LabelList = new List<Label>(
                blogLinq.Labels.Select(p => new Label
                {
                    Guid = p.LabelGuid,
                    FriendlyTitle = p.LabelFriendlyTitle,
                    Title = p.LabelTitle
                })
            )
        };
    }
}

It would probably be a good idea at this point to step into the Validator class to see what's really going on here.  As you can see in the following class, I have two methods (in reality there are dozens!) and most of it should be obvious.  The validation is obviously in the second method; however, it's the first one that's being directly called.  Notice two things about this: First, notice that I'm passing in my DataContext.  This is to completely obliterate any possibility of overlapping DataContexts and, therefore, any strange locking issues.  Second, notice that I'm pre-registering my messages in a strongly typed Message class (notice also that the members of Message are not static-- the magic of const.)  This last piece could easily be done in a way that provides for nice localization.

Now moving on to the actual validation.  Unless I'm desperately trying to inline some code, I normally declare the LINQ criteria prior to the actual LINQ statement.  Of course, this is exactly what the Func<T, TResult> delegate is doing.  Notice also that I try to bring the semantics of the criteria into the name of the object.  This really helps in making many of your LINQ statements read more naturally: "db.Person.Where(hasEmployees)".

namespace Minima.Service.Validation
{
    internal static class Validator
    {
        //- ~Message -//
        internal class Message
        {
            public const String InvalidEmail = "Invalid author Email";
        }


        //- ~EnsureAuthorExists -//
        internal static void EnsureAuthorExists(String authorEmail, out AuthorLINQ authorLinq, DataContext db)
        {
            EnsureAuthorExists(authorEmail, out authorLinq, Message.InvalidEmail, db);
        }


        //- ~EnsureAuthorExists -//
        internal static void EnsureAuthorExists(String authorEmail, out AuthorLINQ authorLinq, String message, DataContext db)
        {
            Func<AuthorLINQ, Boolean> authorExists = x => x.AuthorEmail == authorEmail;
            authorLinq = db.Authors.SingleOrDefault(authorExists);
            if (authorLinq == null)
            {
                FaultThrower.Throw<ArgumentException>(new ArgumentException(message));
            }
        }
    }
}

In the actual query itself, you can see that the semantics of the method is that a maximum of one author should be returned.  Therefore, I'm able to use the Single or SingleOrDefault methods.  Note that if you use these and the query would return more than one entity, an exception will be thrown, as Single and SingleOrDefault only allow what their names imply.  In this case here, AuthorEmail is the primary key in the database and, by definition, there can be only one (at this point I'm sure about 30% of you are doing Sean Connery impressions).  The difference between Single and SingleOrDefault is simple: when the criteria is not met, Single throws an exception and SingleOrDefault returns the type's default value.  The default of a type is that which the C# "default" keyword will return.  In other words, a reference type will be null and a struct will be something else (i.e. 0 for Int32).  In this case, I'm dealing with my AuthorLINQ class, which is obviously a reference type, and therefore I need to check null on it.  If it's null, then that author doesn't exist and I need to throw a fault (which is what my custom FaultThrower class does).  What's a fault?  That's a topic for a different post.
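
A quick sketch of the difference, using an in-memory list rather than Minima's data:

using System;
using System.Collections.Generic;
using System.Linq;
//+
List<Int32> numbers = new List<Int32> { 1, 2, 3 };
Int32 two = numbers.Single(n => n == 2);            // returns 2
Int32 none = numbers.SingleOrDefault(n => n == 9);  // no match: returns default(Int32), i.e. 0
// numbers.Single(n => n == 9) would throw (no match)
// numbers.SingleOrDefault(n => n > 0) would throw (more than one match)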

As you can see from the method signatures, not only is the author e-mail address being validated, but the LINQ entity is being returned to the caller via an out parameter.  Once I have this authorLinq entity, I can proceed to use its primary key (AuthorId) in various other LINQ queries.  It's critical to remember that you always want to make sure that you are only using validated information.  If you aren't, then you have no idea what will happen to your system.  Therefore, you should ignore all IDs that are sent into a WCF service operation and use only the validated ones.  A thorough discussion of this topic is left for a future discussion.

Now we are finally at the place where LINQ to WCF projection happens.  For clarity, here it is again (no one likes to scroll back and forth):

return new BlogMetaData
{
    Description = blogLinq.BlogDescription,
    FeedTitle = blogLinq.BlogFeedTitle,
    FeedUri = new Uri(blogLinq.BlogFeedUrl),
    Guid = blogLinq.BlogGuid,
    Title = blogLinq.BlogTitle,
    Uri = new Uri(blogLinq.BlogPrimaryUrl),
    CreateDateTime = blogLinq.BlogCreateDate,
    LabelList = new List<Label>(
        blogLinq.Labels.Select(p => new Label
        {
            Guid = p.LabelGuid,
            FriendlyTitle = p.LabelFriendlyTitle,
            Title = p.LabelTitle
        })
    )
};

The basic flow of this is as follows: in DataContext db, in the Blogs table, pull the sub-set where PersonId == AuthorId, then let Select transform that data into a new type.  The DTO projection is obviously happening in the Select method.  This method is akin to a SELECT in SQL.  My point in saying that is to make sure that you are aware that SELECT is not a filter; that's what Where does.  After execution of the Where method, as well as after execution of the Select method, you have an IQueryable<Blog> object, which contains information about the query, but no actual data yet.  LINQ defers execution of SQL statements until they are actually used.  In this case, the data is actually being used when the ToList method is called.  This, of course, returns a List<Blog>, which is exactly what this service operation should do.  What's really nice about this is that WCF loves List<T>.  It's not a big fan of Collection<T>, but List<T> is its friend.  Over the wire it's an Array and when it's being used by a WCF client, it's also a List<T> object.
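
Here's the deferred execution idea in miniature (the table and property names are hypothetical):

var query = db.Blogs
              .Where(b => b.PersonId == authorId)
              .Select(b => new { b.BlogTitle });  // still IQueryable; no SQL has run yet
var list = query.ToList();                        // the SQL executes here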

In closing, I should mention something that I know people are going to ask me about: to project from WCF DTO to LINQ you do the exact same thing.  LINQ isn't a database-specific technology.  You can LINQ between all kinds of things.  Though I use LINQ for my data access in many projects, most of my LINQ usage is actually for searching lists, combining two lists together, or modifying the data that gets bound to the interface.  It's incredibly powerful.

Moving into a non-Minima example: if, for example, you needed to have a person's full name in a WPF ListBox and the name-specific LINQ properties you have are FirstName and LastName, instead of doing tricks in your ItemTemplate, you can just have your ItemsSource use LINQ to sew the FirstName and LastName together.

lstPerson.ItemsSource = personList.Select(p => new
{
    FullName = p.FirstName + " " + p.LastName,
    p.PostalCode,
    Country = p.Country ?? String.Empty
});

The really sweet part about this is the fact that LINQ entities implement the INotifyPropertyChanged interface, so when doing WPF data binding, WPF will automatically update the ListBox when the data changes!  Of course, this doesn't help you if you are doing a serious SOA system.  Therefore, my DTOs normally implement INotifyPropertyChanged as well.  This is not a WPF-specific interface (it lives in System.ComponentModel) and therefore does not tie the business object to any presentation.
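
A DTO set up this way might look like the following minimal sketch (BlogEntry here is hypothetical, not Minima's actual DTO):

using System;
using System.ComponentModel;
//+
public class BlogEntry : INotifyPropertyChanged
{
    private String title;

    //- @PropertyChanged -//
    public event PropertyChangedEventHandler PropertyChanged;

    //- @Title -//
    public String Title
    {
        get { return title; }
        set
        {
            title = value;
            //+ notify any bound control that the value changed
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs("Title"));
            }
        }
    }
}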

That should show you a bit more of how LINQ can work with all kinds of stuff.  Therefore, it shouldn't be hard to figure out how to project from a WCF DTO to LINQ. You could literally copy/paste the LINQ -> DTO code and just switch around a few names.

If you are new to LINQ, then I recommend the book Pro LINQ by Joseph C. Rattz Jr. However, if you are already using LINQ or want a view into its internal mechanics, then I must recommend LINQ in Action by Fabrice Marguerie, Steve Eichert, and Jim Wooley.

Minima 3.0 Released



Every few months I like to release a new open-source project or at least a new major revision of an existing project. Today I would like to introduce Minima 3.0.  This is a completely new Minima Blog Engine that is built on WCF, that is factored into various controls and that introduces a completely new model for ASP.NET development.

As a Training Tool

Normally I leave this for last, but this time I would like to immediately start off by mentioning how Minima 3.0 may act as a training tool.  This will give you a good idea of Minima 3.0's architecture.  Here was the common "As a Training Tool" description for Minima 2.0 (a.k.a. Minima .NET 3.5):

Minima 2.0 could be used as a training tool for ASP.NET, CSS theming, proper use of global.asax, integrating with Windows Live Writer, framework design guidelines, HttpModules, HttpHandlers, HttpHandlerFactories, LINQ, type organization, proper SQL Server table design and naming scheme, XML serialization, and XML-RPC.NET usage.

Here's the new "As a Training Tool" description for Minima 3.0:

Minima 3.0 can be used as a training tool for the same concepts and technologies as Minima 2.0 as well as SOA principles, custom WCF service host factories, custom WCF behaviors, WCF username authentication, custom WCF declarative operation-level security, WCF exception shielding and fault management, custom WCF message header usage, WCF type organization, WCF-LINQ DTO transformation, enhanced WCF clients, using WCF sessions for Captcha verification, SQL Server 2005 schema security, XmlWriter usage, ASP.NET programmatic user control usage, custom configuration sections, WCF JavaScript clients, ASP.NET control JavaScript registration, JavaScript namespaces, WCF JSON services, WCF RSS services, ASP.NET templated databinding, and ASP.NET control componentization.

As you can see, it's an entirely new beast. As you should also be able to guess, I'm not going to use Minima for simply entry level .NET training anymore. With this new feature set, it's going to be my primary tool for intermediate and expert-level .NET training.  In the future, I'll post various blog entries giving lessons on various parts of Minima.

New Features

Since it's nowhere near the purpose of Minima, in no version have I ever claimed to have an extraordinary feature set. In fact, the actual end-user feature set of Minima 3.0 is fundamentally the same as Minima 2.0 except where features are naturally added because of the new architecture.  For example, it's now naturally a multi-blog environment with each blog allowed to have its own blog discovery data, Google sitemap, and other things.

Architecture

There are really three major "pillars" to the architecture of Minima 3.0: WCF, ASP.NET, and my Themelia Foundation (pronounced TH[as in "Thistle"]-MEH-LEE-UH; Koine Greek for "foundations"). It will take more than one blog entry to cover every aspect of Minima's architecture (see my lessons on Themelia), but for now I'll give a very brief overview.  I will explain the ASP.NET and Themelia pillars together.

WCF Architecture

The backend of Minima is WCF and is split up into various services to factor out some of the operations that occur within Minima. Of course, not every single possible operation is included as that would violate the "specificness" of SOA, but the core operations are intact.

The entire Minima security structure is now in WCF using a custom declarative operation-level security implementation.  To set security in Minima, all you have to do on the service side is apply the MinimaBlogSecurityBehavior attribute to an operation and you're all set.  Here's an example:

[MinimaBlogSecurityBehavior(PermissionRequired = BlogPermission.Retrieve)]
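In context, that attribute sits directly on a service-side operation implementation.  Here's a hedged sketch; GetBlogEntry and BlogEntry are illustrative names rather than actual Minima members:

[MinimaBlogSecurityBehavior(PermissionRequired = BlogPermission.Retrieve)]
public BlogEntry GetBlogEntry(String blogEntryGuid)
{
    //+ operation implementation here
}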

Microsoft MVP (ASP.NET) 2009



I'm rather pleased to announce that October 1, 2008 was my day: I was made a Microsoft MVP for ASP.NET.  Thus, as tradition seems to state, I'm posting a blog entry about it.

Thanks to God for not letting me have the MVP until now; the timing is flawless.  Thanks also to my MVP advisor David Silverlight, who got me more serious about the MVP program and admitted to having nominated me.  Next, thanks to Scott Hanselman, who fixed a clog in the MVP pipeline.  Apparently, I was in the system, but completely lost; in the wrong category or something.  He took it upon himself to contact a few people to get the problem fixed.

Thanks also to Brad Abrams for his recommendation to the MVP committee and to Rick Strahl, a fellow MVP, Microsoft ASP.NET AJAX loather, and Subversion lover who showed me that open source developers have equal rights to the MVP title.

To bring in the new [MVP] year, today I took some time and did a massive redesign to my web site featuring the cool MVP logo, which just so happens to fit perfectly in my existing color scheme.  I'll probably be tweaking the site over the next few days as the waves of whimsical changes come my way.

Minima 3.1 Released



I've always thought that one of the best ways to learn or teach a series of technologies is to create either a photo gallery, some forum software, or a blog engine.  Thus, to aid in teaching various areas of .NET (and to have full control over my own blog), I created Minima v1.  Minima v2 came on the scene adding many new features and showing how LINQ can help make your DAL shine.  Then, Minima v3 showed up and demonstrated an enormous load of technologies as well as demonstrating proper architectural principles.

Well, Minima 3.1 is an update to Minima 3.0 and it's still here to help people see various technologies in action.  However, while Minima 3.1 adds various features to the Minima 3.0 base, it's primarily important to note that it's also the first major Themelia 2.0 application (Minima 3.0 was built on Themelia 1.x).  As such, not only is it a prime example of many technologies ranging from WCF to LINQ to ASP.NET controls to custom configuration, but it's also a good way to see how Themelia provides a component model for the web.  In fact, Minima 3.1 is technically a Themelia 2.0 plug-in.

Here's a quick array of new blog features:

  • It's built on Themelia 2.0.  I've already said this, but it's worth mentioning again.  This isn't a classic ASP.NET application.  It's the first Themelia 2.0 application.
  • Minima automatically creates indexes (table of contents) to allow quick viewing of your site.
  • Images are now stored in SQL Server as a varbinary(max) instead of as a file.
  • Themelia CodeParsers are used to automatically turn codes like {Minima{BlogEntry{3324c8df-4d49-4d4a-9878-1e88350943b6}}} into a link to a blog entry, {Minima{BlogEntry{3324c8df-4d49-4d4a-9878-1e88350943b6|Click here for stuff}}} into a renamed blog entry link, and {Minima{AmazonAffiliate{0875526438}}} into an Amazon.com product link with your configured (see below) affiliate ID.
  • Minima now has its own full custom configuration.  Here's an example:
<minima.blog entriesToShow="7" domain="http://www.tempuri.org/">
  <service>
    <authentication defaultUserName="jdoe@tempuri.org" defaultPassword="blogpassword"/>
    <endpoint author="AuthorServiceWs2007HttpBinding" blog="BlogServiceWs2007HttpBinding" comment="CommentServiceWs2007HttpBinding" image="ImageServiceWs2007HttpBinding" label="LabelServiceWs2007HttpBinding" />
  </service>
  <suffix index="Year Review" archive="Blog Posts" label="Label Contents" />
  <display linkAuthorsToEmail="false" blankMessage="There are no entries in this view." />
  <comment subject="New Comment on Blog" />
  <codeParsers>
    <add name="AmazonAffiliate" value="net05c-20" />
  </codeParsers>
</minima.blog>

  • In addition to the normal MinimaComponent (the Themelia component which renders the blog with full interactivity), you may also use the MinimaProxyComponent to view single entries.  For example, you just add a BlogEntryProxy component to a web form and set either the blog entry guid or the blog guid plus the link, and your blog entry will show.  I built this feature in to allow Minima to be used for more than just blogging; it's a content stream.  With this feature I can keep every single page of a web site inside of Minima and no one will ever know.  There's also the MinimaViewerComponent which renders a read-only blog.  This means no rsd.xml, no site map, no commenting, no editing, just viewing.  In fact, the Themelia web site uses this component to render all its documentation.
  • There is also support for adding blog post footers.  Minima 3.1 ships with a FeedBurner footer implementation out of the box.  See the "Implementing" section below for more info.

As a Training Tool

Minima is often used as a training tool for introductory, intermediate, and expert-level .NET.

Minima 2.0 could be used as a training tool for ASP.NET, CSS theming, proper use of global.asax, integrating with Windows Live Writer, framework design guidelines, HttpModules, HttpHandlers, HttpHandlerFactories, LINQ, type organization, proper-SQL Server table design and naming scheme, XML serialization, and XML-RPC.NET usage.

Minima 3.1 can be used as a training tool for the same concepts and technologies as Minima 2.0 as well as SOA principles, custom WCF service host factories, custom WCF behaviors, WCF username authentication, custom WCF declarative operation-level security, WCF exception shielding and fault management, custom WCF message header usage, WCF type organization, WCF-LINQ DTO transformation, enhanced WCF clients, using WCF sessions for Captcha verification, SQL Server 2005 schema security, XmlWriter usage, ASP.NET programmatic user control usage, custom configuration sections, WCF JavaScript clients, ASP.NET control JavaScript registration, JavaScript namespaces, WCF JSON services, WCF RSS services, ASP.NET templated databinding, and ASP.NET control componentization.

Architecture

Probably the most important thing to learn from Minima is architecture.  Minima is built to provide great flexibility.  However, that's not for the faint of heart.  I heard one non-architect and obvious newbie say that it was "over architected".  According to this person, apparently, adding security to your WCF services to protect your private information is "over architecting" something (not to mention the fact that WCF enforces security for username authentication).

In any case, Minima is split into two parts: the service and the web site.  I use Minima in many places, but for all my blogs (or, more accurately, content streams) I have a single centralized, well-protected service set.  All my internal web sites access this central location via the WCF NetNamedPipeBinding.

Implementing

Minima is NOT your every day blog engine.  If your company needs a blog engine for various people on the team, get Community Server; Minima isn't for you.  Minima allows you to plop a blog into any existing web site.  For example, if you have an existing web site, just install Themelia (remember, Minima is a Themelia plug-in), create a new Themelia web domain, and register Minima into that web domain as follows:

<themelia.web>
  <webDomains>
    <add>
      <components>
        <add key="Minima" type="Minima.Web.Routing.MinimaComponent, Minima.Web">
          <parameters>
            <add name="page" value="~/Page_/Blog/Root.aspx" />
            <add name="blogGuid" value="19277C41-7E4D-4AE0-A196-25F45AC48762" />
          </parameters>
        </add>
      </components>
    </add>
  </webDomains>
</themelia.web>

Now, on that Root.aspx page, just add a simple Minima.Web.Controls.MinimaBlog control.  Your blog immediately starts rendering.  Not only that, commenting is automatically supported.  Furthermore, you have a site map, a Windows Live Writer (MetaWeblog API) endpoint, an rsd.xml file, and a wlwmanifest.xml file.  All that just by dropping a control onto a web site without configuring anything in that page.  Of course, you can configure things if you want and you can add more controls to the page as well.  Perhaps you want a label list, an archive list, or a recent entry list.  Just add the appropriate control to the web form.  In fact, the same Minima binaries that you compile with the source are used on each of my web sites with absolutely no changes; they are all just a single control, yet look nothing alike.

Personally, I don't like to add a lot of controls to my web forms.  Thus, I normally add a placeholder control and then add my controls to that placeholder.  Furthermore, here's a snippet from my blog's web form (my entire blog has only one page):

phLabelList.Controls.Add(new Minima.Web.Controls.LabelList { Heading = "Label Cloud", ShowHeading = true, TemplateType = typeof(Minima.Web.Controls.LabelListControlTemplateFactory.SizedTemplate) });
phArchivedEntryList.Controls.Add(new Minima.Web.Controls.ArchivedEntryList { ShowEntryCount = false });
phRecentEntryList.Controls.Add(new Minima.Web.Controls.RecentEntryList());
phMinimaBlog.Controls.Add(new Minima.Web.Controls.MinimaBlog
{
    ShowAuthorSeries = false,
    PostFooterTypeInfo = Themelia.Activation.TypeInfo.GetInfo(Minima.Web.Controls.FeedBurnerPostFooter.Type, "http://feeds.feedburner.com/~s/FXHarmonics"),
    ClosedCommentText = String.Empty,
    DisabledCommentText = String.Empty
});

There's nothing here that you can't do as well.  Most everything there is self-explanatory too.  However, notice the post footer type.  By setting this type, Minima knows to render the FeedBurner post footer at the end of each entry.

Thus, with a simple configuration and a drop of a control, you can add a blog anywhere.  Or, in the case of the Themelia web site, you can add a content stream anywhere.

Here's a snippet from the configuration for the Themelia web site:

<add name="framework" path="framework" defaultPage="/Sequence_/Home.aspx" acceptMissingTrailingSlash="true">
  <components>
    <add key="Minima" type="Minima.Web.Routing.MinimaViewerComponent, Minima.Web">
      <parameters>
        <add name="blogGuid" value="19277C41-7E4D-4AE0-A196-25F45AC48762" />
      </parameters>
    </add>
  </components>
</add>

By looking at the Themelia web site, you can see that Minima isn't being used as a blog engine there, but as a content stream.  Go walk around the documentation at http://themelia.netfxharmonics.com/framework/docs.  I didn't make a bunch of pages; all I did was drop in that component, throw a Minima.Web.Controls.BlogViewer control on the page, and BAM, I have an entire documentation system already built based upon various entries from my blog.

As a side note, if you look on my blog, you will see that each of the Themelia blog entries has a list of links, but the same thing in the Themelia documentation does not have the link list.  This is because I've set IgnoreBlogEntryFooter to true on the BlogViewer control, thus telling Minima to remove all text after the special code.  Thus I can post the same entry in two places.

This isn't a marketing post on why you should use Minima.  If you want to use Minima, go ahead, you can contact me on my web site for help.  However, the point is to learn as much as you can about modern technology using Minima as an example.  It's not meant to be used in major web sites by just anyone at this point (though I use it in production in many places).  Having said that, the next version of Minima will be part of the Themelia suite and will have much more user support and formal documentation.

In conclusion, I say again (and again and again), you may use Minima for your personal training all you want.  That's why it's public. 

Cross-Browser JavaScript Tracing



No matter what system you are working with, you always need mechanisms for debugging.  One of the most important mechanisms a person can have is tracing.  Being able to see trace output from various places in your application is vital.  This is especially true with JavaScript.  I've been working with JavaScript since 1995, making stuff 12 years ago that would still be interesting today (in fact, I didn't know server-side development existed until 1998!) and I have noticed a clear correlation between the complexity of JavaScript applications and the absolute need for tracing.

Thus, a long, long time ago I built a tracing utility that would help me view all the information I need (and absolutely nothing more or less).  These days this means being able to trace information to a console, dump arrays and objects, and be able to view line-numbered information for future reference.  The utility I've created has since been added to my Themelia suite (pronounced the-meh-LEEUH; as in thistle or the name Thelma), but today I would like to demonstrate it and deliver it separately.

The basis of my tracing utility is the Themelia.Trace namespace.  In this namespace is…. wait… what?  You're sick of listening to me talk?  Fine.  Here's the sample code which demonstrates the primary uses of Themelia.Trace, treat this as your reference documentation:

//+ enables tracing
Themelia.Trace.enable( );
//+ writes text
Themelia.Trace.write('Hello World!');
//+ writes a blank line
Themelia.Trace.addNewLine( );
//+ writes a numbered line
Themelia.Trace.writeLine('…and Hello World again!');
Themelia.Trace.writeLine('Another line…');
Themelia.Trace.writeLine('Yet another…');
Themelia.Trace.writeLine('One more…');
//+
//++ label
//+ writes labeled data to output (e.g. 'variableName (2)')
Themelia.Trace.writeLabeledLine('variableName', 2);
//+
//++ buffer
//+ creates a buffer
var buffer = new Themelia.Trace.Buffer( );
//+ declares beginning of new segment
buffer.beginSegment('Sample');
//+ writes data under specific segment
buffer.write('data here');
//+ nested segment
buffer.beginSegment('Array Data');
//+ write array to buffer
var a = [1,2,3,4,5];
buffer.write(a);
//+ declares end of segment
buffer.endSegment('Array Data');
buffer.beginSegment('Object Data');
//+ write raw object/JSON data
buffer.write({
    color: '#0000ee',
    fontSize: '1.1em',
    fontWeight: 'bold'
});
buffer.endSegment('Object Data');
//+ same thing again
buffer.beginSegment('Another Object');
var o = {
    'personId': 2,
    name: 'david'
};
buffer.write(o);
buffer.endSegment('Another Object');
buffer.endSegment('Sample');
//+ writes all built-up data to output
buffer.flush( );

Notice a few things about this reference sample:

  • First, you must use Themelia.Trace.enable( ) to turn tracing on.  In a production application, you would just comment this line out.
  • Second, Themelia.Trace.writeLine prefixes each line with a line number.  This is especially helpful when dealing with all kinds of async stuff floating around or when dealing with crazy events.
  • Third, you may use Themelia.Trace.writeLabeledLine to output data while giving it a name like "variableName (2)".
  • Fourth, if you want to run a tracer through your application and only later on have output, create an instance of Themelia.Trace.Buffer, write text to it, write an array to it, or write an object to it, then call flush( ) to send the data to output.  You may also use beginSegment and endSegment to create nested, indented portions of the output.
  • Fifth, notice you can throw entire arrays or objects/JSON into buffer.write( ) to write them to the screen.  This is especially handy when you want to trace your WCF JSON messages.

Trace to what?

Not everyone knows this, but Firefox, Google Chrome, Safari, and Opera each has its own console for allowing output.  Themelia.Trace works with each console in its own way.  Here are some screen shots to show you what I mean:

Firefox

Firefox has had the Firefox Console since version 1.0, which allows you to write just about anything to a separate window.  I did a video on this many years ago and last year I posted a quick "did you know"-style blog post on it, so there's no reason for me to cover it again here.  Just watch my Introduction to the Firefox Console for a detailed explanation of using the Firefox Console (you may also opt to watch my Setting up your Firefox Development Environment-- it should seriously help you out).

Firefox

Google Chrome

Chrome does things a little differently than any other browser.  Instead of having a "browser"-wide console, each tab has its own console.  Notice "browser" is in quotes.  Technically, each tab in Chrome is its own mini browser, so this console-per-tab model makes perfect sense.  To access this console, just hit Alt-` on a specific tab.

Chrome

Safari

In Safari, go to Preferences, then the Advanced tab, and check "Show Develop menu in menu bar".  When you do this, you will see the Develop menu show up.  The output console is at Develop -> Show Web Inspector.

Safari

Opera

In Opera 9, you go to Tools -> Advanced -> Developer Tools and you will see a big box show up at the bottom.  The console is the Error Console tab.

Opera9

Internet Explorer

To use Themelia.Trace with Internet Explorer, install Nikhil's Web Developer Helper.  This is different from the IE Developer Toolbar.

IEWebDevHelper

Firebug

It's important to note that, in many situations, it's actually more effective to rely on Firebug for Firefox or Firebug Lite for Safari/Chrome, IE, and Opera than to use a console directly.  Therefore, Themelia.Trace allows you to set Themelia.Trace.alwaysUseFirebug to true and have all output redirected to Firebug instead of the default console.  Just try it: use the above sample, but put "Themelia.Trace.alwaysUseFirebug = true;" above it.  All data will redirect to Firebug.  Here's a screen shot (this looks basically the same in all browsers):

FirebugLite

There you have it.  A cross-browser solution to JavaScript tracing.

Links

Love Sudoku? Love brain puzzles? Check out my new world-wide Sudoku competition web site, currently in beta, at Sudokian.com.

Creating JavaScript Components and ASP.NET Controls



Every now and again I'll actually meet someone who realizes that you don't need a JavaScript framework to make full-scale AJAX applications happen… but rarely in the Microsoft community.  Most people think you need Prototype, jQuery, or the ASP.NET AJAX framework in order to do anything from networking calls to DOM building to component creation.  Obviously this isn't true.  In fact, when I designed the Brainbench AJAX exam, I specifically designed it to test how effectively you can create your own full-scale JavaScript framework (now, how well the AJAX developers did at following my design, I have no idea).

So, today I would like to show you how you can create your own strongly-typed ASP.NET-based JavaScript component without requiring a full framework.  Why would you not have Prototype or jQuery on your web site?  Well, you wouldn't.  Even Microsoft-oriented AJAX experts recognize that jQuery provides an absolutely incredible boost to their applications.  However, when it comes to my primary landing page, I need that to be extremely tiny.  Thus, I rarely include jQuery or Prototype on that page (remember, Google makes EVERY page a landing page, but I mean the PRIMARY landing page).

JavaScript Component

First, let's create the JavaScript component.  When dealing with JavaScript, if you can't do it without ASP.NET, don't try it in ASP.NET.  You only use ASP.NET to help package the component and make it strongly-typed.  If the implementation doesn't work, then you have more important things to focus on.

Generally speaking, here's the template I follow for any JavaScript component:

window.MyNamespace = window.MyNamespace || {};
//+
//- MyComponent -//
MyNamespace.MyComponent = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            //+ validate and save DOM host
            if (init.host) {
                this.host = init.host;
                //+
                this.DOMElement = $(this.host);
                if(!this.DOMElement) {
                    throw 'Element with id of ' + this.host + ' is required.';
                }
            }
            else {
                throw 'host is required.';
            }
            //+ validate and save parameters
            if (init.myParameter) {
                this.myParameter = init.myParameter;
            }
            else {
                throw 'myParameter is required.';
            }
        }
    }
    ctor.prototype = {
        //- myfunction -//
        myfunction: function(t) {
        }
    };
    //+
    return ctor;
})( );

You may then create the component like the following anywhere in your page:

new MyNamespace.MyComponent({
    host: 'hostName',
    myParameter: 'stuff here'
 });

Now on to see a sample component, but, first, take note of the following shortcuts, which allow us to save a lot of typing:

var DOM = document;
var $ = function(id) { return document.getElementById(id); };

Here's a sample Label component:

window.Controls = window.Controls || {};
//+
//- Controls -//
Controls.Label = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            //+ validate and save DOM host
            if (init.host) {
                this._host = init.host;
                //+
                this.DOMElement = $(this._host);
                if(!this.DOMElement) {
                    throw 'Element with id of ' + this._host + ' is required.';
                }
            }
            else {
                throw 'host is required.';
            }
            //+ validate and save parameters
            if (init.initialText) {
                this._initialText = init.initialText;
            }
            else {
                throw 'initialText is required.';
            }
        }
        //+
        this.setText(this._initialText);
    }
    ctor.prototype = {
        //- myfunction -//
        setText: function(text) {
            if(this.DOMElement.firstChild) {
                this.DOMElement.removeChild(this.DOMElement.firstChild);
            }
            this.DOMElement.appendChild(DOM.createTextNode(text));
        }
    };
    //+
    return ctor;
})( );

With the above JavaScript code and "<div id="host"></div>" somewhere in the HTML, we can use the following to create an instance of a label:

window.lblText = new Controls.Label({
    host: 'host',
    initialText: 'Hello World'
});

Now, if we had a button on the screen, we could handle its click event and use that to set the text of the label, as follows:

<div>
    <div id="host"></div>
    <input id="btnChangeText" type="button" value="Change Value" />
</div>
<script type="text/javascript" src="Component.js"></script>
<script type="text/javascript">
    //+ in reality you would use the dom ready event, but this is quicker for now
    window.onload = function( ){
        window.lblText = new Controls.Label({
            host: 'host',
            initialText: 'Hello World'
        });
         window.btnChangeText = $('btnChangeText');
         //+ in reality you would use a multi-cast event
         btnChangeText.onclick = function( ) {
            lblText.setText('This is the new text');
         };
    };
</script>

Thus, components are simple to work with.  You can do this with anything from a simple label to a windowing system to a marquee to any full-scale custom solution.

ASP.NET Control

Once the component works, you may then package the HTML and strongly-type it for ASP.NET.  The steps to doing this are very simple and once you do it, you can just repeat the simple steps (sometimes with a simple copy/paste) to make more components.

First, we need to create a .NET class library and add the System.Web assembly.   Next, add the JavaScript component to the .NET class library.

Next, in order to make the JavaScript file usable by your class library, you need to make sure it's set as an Embedded Resource.  In Visual Studio 2008, you do this by going to the properties window of the JavaScript file and changing the Build Action to Embedded Resource.

Then, you need to bridge the gap between the ASP.NET and JavaScript world by registering the JavaScript file as a web resource.  To do this you register an assembly-level WebResource attribute with the location and content type of your resource.  This is typically done in AssemblyInfo.cs.  The attribute pattern looks like this:

[assembly: System.Web.UI.WebResource("AssemblyName.FolderPath.FileName", "ContentType")]

Thus, if I were registering a JavaScript file named Label.js in the JavaScript.Controls assembly, under the _Resource/Controls folder, I would register my file like this:

[assembly: System.Web.UI.WebResource("JavaScript.Controls._Resource.Label.js", "text/javascript")]

Now, it's time to create a strongly-typed ASP.NET control.  This is done by creating a class which inherits from the System.Web.UI.Control class.  Every control in ASP.NET, from the TextBox to the GridView, inherits from this base class.

When creating this control, we want to remember that our JavaScript control contains two required parameters: host and initialText.  Thus, we need to add these to our control as properties and validate these on the ASP.NET side of things.

Regardless of your control though, you need to tell ASP.NET what files you would like to send to the client.  This is done with the Page.ClientScript.RegisterClientScriptResource method, which accepts a type and the name of the resource.  Most of the time, the type parameter will just be the type of your control.  The name of the resource must match the web resource name you registered in AssemblyInfo.  This registration is typically done in the OnPreRender method of the control.

The last thing you need to do with the control is the most obvious: do something.  In our case, we need to write the client-side initialization code to the client.

Here's our complete control:

using System;
//+
namespace JavaScript.Controls
{
    public class Label : System.Web.UI.Control
    {
        internal static Type _Type = typeof(Label);


        //+
        //- @HostName -//
        public String HostName { get; set; }


        //- @InitialText -//
        public String InitialText { get; set; }


        //+
        //- @OnPreRender -//
        protected override void OnPreRender(EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptResource(_Type, "JavaScript.Controls._Resource.Label.js");
            //+
            base.OnPreRender(e);
        }


        //- @Render -//
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            if (String.IsNullOrEmpty(HostName))
            {
                throw new InvalidOperationException("HostName must be set");
            }
            if (String.IsNullOrEmpty(InitialText))
            {
                throw new InvalidOperationException("InitialText must be set");
            }
            writer.Write(@"
<script type=""text/javascript"">
(function( ) {
    var onLoad = function( ) {
        window." + ID + @" = new Controls.Label({
            host: '" + HostName + @"',
            initialText: '" + InitialText + @"'
        });
    };
    if (window.addEventListener) {
        window.addEventListener('load', onLoad, false);
    }
    else if (window.attachEvent) {
        window.attachEvent('onload', onLoad);
    }
})( );
</script>
");
            //+
            base.Render(writer);
        }
    }
}

The code written to the client may look kind of crazy, but that's because it's written very carefully.  First, notice it's wrapped in a script tag.  This is required.  Next, notice all the code is wrapped in a (function( ) { })( ) block.  This is a JavaScript containment technique.  It basically means that anything defined in it exists only for the time of execution.  In this case it means that the onLoad variable exists inside the function and only inside the function, and thus will never conflict with anything outside of it.  Next, notice I'm attaching the onLoad logic to the window load event.  This isn't technically the correct way to do it, but it's the way that requires the least code and is only there for the sake of the example.  Ideally, we would write some sort of event handler utility (or use a prewritten one) which would allow us to bind handlers to events without having to check if we are using the lameness known as Internet Explorer (it uses window.attachEvent while real web browsers use addEventListener).

Now, having this control, we can then compile our assembly, add a reference to our web site, and register the control with our page or our web site.  Since this is a "Controls" namespace, it has the feel that it will contain multiple controls, so it's best to register it in web.config for the entire web site to use.  Here's how this is done:

<configuration>
  <system.web>
    <pages>
      <controls>
        <add tagPrefix="c" assembly="JavaScript.Controls" namespace="JavaScript.Controls" />
      </controls>
    </pages>
  </system.web>
</configuration>

Now we are able to use the control in any page on our web site:

<c:Label id="lblText" runat="server" HostName="host" InitialText="Hello World" />

As mentioned previously, this same technique for creating, packaging and strongly-typing JavaScript components can be used for anything.  Having said that, this example that I have just provided borders on the raw definition of useless.  No one cares about a stupid host-controlled label.

If you don't want a host-model, but prefer the in-place model, you need to change a few things.  After the changes, you'll have a template for creating any in-place control.

First, remove anything referencing a "host".  This includes client-side validation as well as server-side validation and the Control's HostName property.

Next, put an ID on the script tag.  This ID will be the ClientID suffixed with "ScriptHost" (or whatever you want).  Then, you need to inform the JavaScript control of the ClientID.

Your ASP.NET control should basically look something like this:

using System;
//+
namespace JavaScript.Controls
{
    public class Label : System.Web.UI.Control
    {
        internal static Type _Type = typeof(Label);


        //+
        //- @InitialText -//
        public String InitialText { get; set; }


        //+
        //- @OnPreRender -//
        protected override void OnPreRender(EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptResource(_Type, "JavaScript.Controls._Resource.Label.js");
            //+
            base.OnPreRender(e);
        }


        //- @Render -//
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            if (String.IsNullOrEmpty(InitialText))
            {
                throw new InvalidOperationException("InitialText must be set");
            }
            writer.Write(@"
<script type=""text/javascript"" id=""" + this.ClientID + @"ScriptHost"">
(function( ) {
    var onLoad = function( ) {
        window." + ID + @" = new Controls.Label({
            id: '" + this.ClientID + @"',
            initialText: '" + InitialText + @"'
        });
    };
    if (window.addEventListener) {
        window.addEventListener('load', onLoad, false);
    }
    else if (window.attachEvent) {
        window.attachEvent('onload', onLoad);
    }
})( );
</script>
");
            //+
            base.Render(writer);
        }
    }
}

Now you just need to make sure the JavaScript control knows that it needs to place itself where it has been declared.  To do this, you just create a new element and insert it into the browser DOM immediately before the current script block.  Since we gave the script block an ID, this is simple.  Here's basically what your JavaScript should look like:

window.Controls = window.Controls || {};
//+
//- Controls -//
Controls.Label = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            if (init.id) {
                this._id = init.id;
                //+
                this.DOMElement = DOM.createElement('span');
                this.DOMElement.setAttribute('id', this._id);
            }
            else {
                throw 'id is required.';
            }
            //+ validate and save parameters
            if (init.initialText) {
                this._initialText = init.initialText;
            }
            else {
                throw 'initialText is required.';
            }
        }
        //+
        var scriptHost = $(this._id + 'ScriptHost');
        scriptHost.parentNode.insertBefore(this.DOMElement, scriptHost);
        this.setText(init.initialText);
    }
    ctor.prototype = {
        //- setText -//
        setText: function(text) {
            if(this.DOMElement.firstChild) {
                this.DOMElement.removeChild(this.DOMElement.firstChild);
            }
            this.DOMElement.appendChild(DOM.createTextNode(text));
        }
    };
    //+
    return ctor;
})( );

Notice that the JavaScript control constructor creates a span with the specified ID, grabs a reference to the script host, inserts the element immediately before the script host, then sets the text.

Of course, now that we have made these changes, you can just throw something like the following into your page to use your in-place JavaScript control without ASP.NET.  It would look something like this:

<script type="text/javascript" id="lblTextScriptHost">
    window.lblText = new Controls.Label({
        id: 'lblText',
        initialText: 'Hello World'
    });
</script>

So, you can create your own JavaScript components without requiring jQuery or Prototype dependencies, but, if you are using jQuery or Prototype (and you should be! even if you are using ASP.NET AJAX-- that's not a full JavaScript framework), then you can use this same ASP.NET control technique to package all your controls.

Architectural Overview: Creating Streamlined, Simplified, yet Scalable WCF Connectivity

Introduction

One of the most awesome things about WCF is that the concepts scale extremely well.  If you understand the ABCs of WCF, then you can do anything from creating a simple Hello World to a complex sales processing service.  It's all based on having an address, a binding, and a contract.  All the other concepts like behaviors, validators, and service factories are simply supplemental to the core of the system.  When you understand the basics, you have the general essence of all of WCF.

Because of this fully-scalable ABC concept, I'm able to use the same pattern for WCF architecture and development for every solution.  This is really nice because it makes it so that I don't have to waste time designing a new setup every time a new problem comes along.  In this discussion, I would like to demonstrate how you can create your own extremely efficient WCF solution based on my template.  Along the way, you will also learn a few pieces of WCF internals to help you understand WCF better.

Before I begin the explanation though, keep in mind that most concepts mentioned here are demonstrated in my Minima Blog Engine 3.1.  This is my training software demonstrating an enormous world of modern technologies.  It's regularly refactored and often fully re-architected to be in line with newer technologies.  This blog engine relies heavily on WCF as it's a service-oriented blog engine.  Regardless of how many blogs you have (or any other series of "content entries"-- for example, the documentation section of the Themelia web site is Minima), you have a single set of services that your entire organization uses.  If you understand the WCF usage in Minima, you will understand WCF very well.

Now, onto the meat (or salad, for the vegetarians) of the discussion...

Service Structure

On any one of my solutions, you will find a X.Service project and a X.ServiceImpl project, where X is the solution "code" (either the solution name or some other word that represents the essence of the solution).  The former is the public .NET project which contains service contracts, data contracts, service configuration, and service clients.  The latter is the private .NET project which contains the service implementations, behaviors, fault management, service hosts, validators, and other service-side-only, black-box portions of the service.  All projects have access to the former; only the service itself will ever even know about the latter.

This is a very simple setup based upon a public/private model, like in public key cryptography.  The idea is that everything private is protected with all your might and everything public is released for anyone, within the context of the solution, to see.

For example, below is the Minima.Service project for Minima Blog Engine 3.1.  Feel free to ignore the folder structure; no one cares about that.  Just because I make each folder a namespace with prefixed folder names for exclusions doesn't mean anyone else in the world does.  I find it to be the most optimal way to manage namespaced and non-namespaced groups, but the point of this discussion is the separation of concerns in the projects.

MinimaService

For the time being, simply notice that data contracts and service contracts are considered public.  Everything else in the file is either meaningless to this discussion or will be discussed later.

Here is the Minima.ServiceImpl project for the same solution:

MinimaServiceImpl

Here you can see anything from the host factory to various behaviors to validators to fault management to LINQ-to-SQL capabilities.  Everything here is considered private.  The outside world doesn't need to know, therefore shouldn't know.

For the sake of a more simplified discussion, let's switch from the full-scale solution example of Minima to a smaller "Person" service example.  This "Person" service is part of the overall "Contact" solution.  Here's the Contact.Service for our Contact solution:

PersonService

As you can see, you have a standard data contract, a service contract, and a few other things, which will be discussed in a bit.

For this example, we don't need validators, fault management, or behaviors, or host factories, all we need is a simple service in our Contact.ServiceImpl project:

PersonServiceImpl

By utilizing this model separating the private from the public, you can easily send the Contact.Service assembly to anyone you want without requiring them to create their own client proxy or use the painfully horrible code generated by "Add Service Reference", which ranks in my list as one of the worst code generators, right next to FrontPage 95 and Word 2000.

As a side note, I should mention that I have colleagues who actually take this a step further and make a X.Service.Client project which houses the WCF client classes.  They are basically following a Service/Client/Implementation model whereas I'm following a Public/Private model.  Just use whichever model makes sense for you.

The last piece needed in this WCF setup is the host itself.  This is just a matter of creating a new folder for the web site root, adding a web.config, and adding a X.svc file.  Period.  This is the entire service web site.

In the Person service example, the service host has only two files: web.config and Person.svc.

Below is the web.config, which declares two endpoints for the same service.  One of the endpoints is a plain, old-fashioned, ASMX-style "basic profile" endpoint and the other is to be used for JSON connectivity.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.serviceModel>
    <behaviors>
      <endpointBehaviors>
        <behavior name="JsonEndpointBehavior">
          <enableWebScript />
        </behavior>
      </endpointBehaviors>
    </behaviors>
    <services>
      <service name="Contact.Service.PersonService">
        <endpoint address="json" binding="webHttpBinding" contract="Contact.Service.IPersonService" behaviorConfiguration="JsonEndpointBehavior" />
        <endpoint address="" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
      </service>
    </services>
  </system.serviceModel>
</configuration>

The Person.svc is even more basic:

<%@ ServiceHost Service="Person.Service.PersonService" %>

This type of solution scales to any level.  If you want to add a service, just add a new X.svc file and register it as a new service.  If you want to add another endpoint, just add that line.  It's incredibly simple and scales to solutions of any size and even works well with non-HTTP services like netTcpBinding services.
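For example, exposing the same Person service over TCP would simply mean one more endpoint element under the same service element; the address below is illustrative:

<endpoint address="net.tcp://localhost:8001/PersonService" binding="netTcpBinding" contract="Contact.Service.IPersonService" />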

Service MetaData

Now let's look at each of these files to see how they are optimized.  First, let's look at the data contract, Person:

using System;
using System.Runtime.Serialization;
//+
namespace Contact.Service
{
    [DataContract(Namespace = Information.Namespace.Contact)]
    public class Person
    {
        //- @Guid -//
        [DataMember]
        public String Guid { get; set; }


        //- @FirstName -//
        [DataMember]
        public String FirstName { get; set; }


        //- @LastName -//
        [DataMember]
        public String LastName { get; set; }


        //- @City -//
        [DataMember]
        public String City { get; set; }


        //- @State -//
        [DataMember]
        public String State { get; set; }


        //- @PostalCode -//
        [DataMember]
        public String PostalCode { get; set; }
    }
}

Everything about this file should be self-explanatory.  There's a data contract attribute on the class and data member attributes on each member.  Simple.  But what's with the data contract namespace?

Well, earlier this year, a co-architect of mine mentioned to me that you can centralize your namespaces in static locations.  Genius.  No more typing the same namespace on each and every data and service contract.  Thus, the following file is included in each of my projects:

using System;
//+
namespace Contact.Service
{
    public class Information
    {
        //- @NamespaceRoot -//
        public const String NamespaceRoot = "http://www.netfxharmonics.com/service/";


        //+
        //- @Namespace -//
        public class Namespace
        {
            public const String Contact = Information.NamespaceRoot + "Contact/2008/11/";
        }
    }
}

If there are multiple services in a project, and in 95%+ of the situations there will be, then you can simply add more services to the Namespace class and reference them from your data and service contracts.  Thus, you never, EVER have to update your service namespaces in more than one location.  You can see this in the service contract as well:

using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    [ServiceContract(Namespace = Information.Namespace.Contact)]
    public interface IPersonService
    {
        //- GetPersonData -//
        [OperationContract]
        Person GetPersonData(String personGuid);
    }
}

This service contract doesn't get much simpler.  There's no reason to discuss it any longer.
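As a side note, this is also where the namespace centralization pays off: if this solution grew a second service, you would add exactly one constant to the Namespace class.  The Billing entry below is a hypothetical illustration:

//- @Namespace -//
public class Namespace
{
    public const String Contact = Information.NamespaceRoot + "Contact/2008/11/";
    //+ hypothetical second service namespace
    public const String Billing = Information.NamespaceRoot + "Billing/2008/11/";
}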

Service Implementation

For the sake of our discussion, I'm not going to talk very much at all about the service implementation.  If you want to see a hardcore implementation, go look at my Minima Blog Engine 3.1.  You will see validators, fault management, operation behaviors, message headers, and on and on.  You will seriously learn a lot from the Minima project.

For our discussion, here's our Person service:

using System;
//+
namespace Contact.Service
{
    public class PersonService : Contact.Service.IPersonService
    {
        //- @GetPersonData -//
        public Person GetPersonData(String personGuid)
        {
            return new Person
            {
                FirstName = "John",
                LastName = "Doe",
                City = "Unknown",
                Guid = personGuid,
                PostalCode = "66062",
                State = "KS"
            };
        }
    }
}

Not too exciting, eh?  But as it is, this is all that's required to create a WCF service implementation.  Just create an everyday ol' class and implement a service contract.

As I've mentioned, though, you could do a ton more in your private service implementation.  To give you a little idea, say you wanted to make absolutely sure that no one turned off metadata exchange for your service.  This is something that I do for various projects and it's incredibly straightforward: just create a service host factory, which creates the service host and programmatically adds endpoints and modifies behaviors.  Given that WCF is an incredibly streamlined system, I'm able to add other endpoints or other behaviors in the exact same way.

Here's what I mean:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;
//+
namespace Contact.Service.Activation
{
    public class PersonServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
    {
        //- @CreateServiceHost -//
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            ServiceHost host = new ServiceHost(typeof(PersonService), baseAddresses);
            //+ add metadata exchange
            ServiceMetadataBehavior serviceMetadataBehavior = host.Description.Behaviors.Find<ServiceMetadataBehavior>();
            if (serviceMetadataBehavior == null)
            {
                serviceMetadataBehavior = new ServiceMetadataBehavior();
                host.Description.Behaviors.Add(serviceMetadataBehavior);
            }
            serviceMetadataBehavior.HttpGetEnabled = true;
            ServiceEndpoint serviceEndpoint = host.Description.Endpoints.Find(typeof(IMetadataExchange));
            if (serviceEndpoint == null)
            {
                host.AddServiceEndpoint(typeof(IMetadataExchange), MetadataExchangeBindings.CreateMexHttpBinding(), "mex");
            }
            //+
            return host;
        }
    }
}

Then, just modify your service host line:

<%@ ServiceHost Service="Person.Service.PersonService" Factory="Person.Service.PersonServiceFactory" %>

My point in mentioning that is to demonstrate how well your service will scale based upon a properly set up base infrastructure.  Each thing that you add will require a linear amount of work.  This isn't like WSE where you need three doctorates in order to modify the slightest thing.
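To see that linearity, here's a sketch of locking down one more behavior inside the same CreateServiceHost method; whether you want this exact ServiceDebugBehavior setting is your call, but the pattern is the point:

//+ inside CreateServiceHost, after the metadata exchange setup
ServiceDebugBehavior serviceDebugBehavior = host.Description.Behaviors.Find<ServiceDebugBehavior>();
if (serviceDebugBehavior == null)
{
    serviceDebugBehavior = new ServiceDebugBehavior();
    host.Description.Behaviors.Add(serviceDebugBehavior);
}
//+ make sure exception details never leak to clients
serviceDebugBehavior.IncludeExceptionDetailInFaults = false;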

Service Client without Magic

At this point, a few of my colleagues end their X.Service project and begin a X.Service.Client project.  That's fine.  Keeping the client away from the service metadata is nice, but it's not required.  So, for the sake of this discussion, let's continue with the public/private model instead of the service/client/implementation model.

Thankfully, I have absolutely no colleagues who would ever hit "Add Service Reference".  I've actually never worked with a person who does this either.  This is good news because, as previously mentioned, the code generated by "Add Service Reference" is a complete disaster.  You also can't modify it directly without risking your changes being overwritten.  Even then, the code is SO bad, you wouldn't want to edit it.  There's no reason to have hundreds or THOUSANDS of lines of code to create a client to your service.

A much simpler and much more manageable solution is to keep your code centralized and DRY (don't repeat yourself).  To do this, you just need to realize that the publicly accessible X.Service project already contains the vast majority of everything needed to create a WCF client.  Because of this, you may access the service using WCF's built in mechanisms.  You don't need anything fancy, it's all built right in.

Remember, WCF simply requires the ABC: an address, binding, and contract.  On the client-side, WCF uses this information to create a channel.  This channel implements your service contract, thus allowing you to directly access your WCF service without any extra work required.  You already have the contract, so all you have to do is declare the address and binding.  Here's what I mean:

//+ address
EndpointAddress endpointAddress = new EndpointAddress("http://localhost:1003/Person.svc");
//+ binding
BasicHttpBinding basicHttpBinding = new BasicHttpBinding();
//+ contract
IPersonService personService = ChannelFactory<IPersonService>.CreateChannel(basicHttpBinding, endpointAddress);
//+ just use it!
Person person = personService.GetPersonData("F488D20B-FC27-4631-9FB9-83AF616AB5A6");
String firstName = person.FirstName;

No 3,000-line generated client code, no excess classes, no configuration.  Just direct access to your service.

Of course, if you want to use a configuration file, that's great too.  All you need to do is create a system.serviceModel section in your project or web site and declare your service endpoint.  What's really nice about this in WCF is that, since the concepts of WCF are the same ABCs on both the client and the server, most of the configuration is already done for you.  You can just copy and paste the endpoint from the service configuration to your client configuration and add a name attribute.

<system.serviceModel>
  <client>
    <endpoint name="PersonServiceBasicHttpBinding" address="http://localhost:1003/Person.svc" binding="basicHttpBinding" contract="Simple.Service.IPersonService" />
  </client>
</system.serviceModel>

For the most part, you will also need to keep any bindings on the service side as well, though you wouldn't copy over validation information.

At this point you can just change the previously used WCF channel code to use a named endpoint, but hard-coding that name is an architectural disaster.  You never want to compile configuration information into your system.  Instead of tying them directly, I like to create a custom configuration for my solution to allow the endpoint configuration to change.  For the sake of this discussion though, let's just use appSettings (do NOT rely on appSettings for everything!  That's a cop-out.  Create a custom configuration section!).  Here's our appSetting:

<appSettings>
  <add key="PersonServiceActiveEndpoint" value="PersonServiceBasicHttpBinding" />
</appSettings>

At this point, I can explain that ServiceConfiguration class found in the public Contact.Service project:

using System;
//+
namespace Contact.Service
{
    public static class ServiceConfiguration
    {
        //- @ActivePersonServiceEndpoint -//
        public static String ActivePersonServiceEndpoint
        {
            get
            {
                return System.Configuration.ConfigurationManager.AppSettings["PersonServiceActiveEndpoint"] ?? String.Empty;
            }
        }
    }
}

As you can see, this class just gives me strongly-typed access to my endpoint name.  Thus allowing me to access my WCF endpoint via a loosely connected configuration:

//+ configuration and contract
IPersonService personService = new ChannelFactory<IPersonService>(ServiceConfiguration.ActivePersonServiceEndpoint).CreateChannel();
//+ just use it!
Person person = personService.GetPersonData("F488D20B-FC27-4631-9FB9-83AF616AB5A6");
String firstName = person.FirstName;

When I feel like using a different endpoint, I don't modify the client endpoint, but just add another one with a new name and update the appSettings pointer (if you're into fancy names, this is essentially the bridge pattern).

Now, while this is a great way for people who understand WCF architecture to directly access data, it's probably not a good idea to allow your developers to have direct access to system internals.  Entry-level developers need to focus on the core competency of your company (i.e. sales, marketing, data management, etc.), not ponder the awesomeness of system internals.  Thus, to allow other developers to work on a solution without having to remember WCF internals, I normally create two layers of abstraction on top of what I've already shown.

For the first layer, I create a concrete ClientBase class to hide the WCF channel mechanics.  This is the type of class that "Add Service Reference" would have created if your newbie developers accidentally used it.  However, the one we will create won't have meaningless attributes or virtually unmanageable excess code.

Below is our entire client class:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
//+
namespace Contact.Service
{
    public class PersonClient : System.ServiceModel.ClientBase<IPersonService>, IPersonService
    {
        //- @Ctor -//
        public PersonClient(String endpointConfigurationName)
            : base(endpointConfigurationName) { }


        //+
        //- @GetPersonData -//
        public Person GetPersonData(String personGuid)
        {
            return Channel.GetPersonData(personGuid);
        }
    }
}

The pattern here is incredibly simple: create a class which inherits from System.ServiceModel.ClientBase<IServiceContractName> and implements your service contract.  When you implement the class, the only implementation you need to add is a call to the base channel.  In essence, all this class does is accept calls and pass them off to a pre-created channel.  When you add a new operation to your service contract, just implement the interface and add a single line of connecting code to wire up the client class.
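For example, if a hypothetical SavePerson operation were added to IPersonService, the client would grow by exactly one pass-through method:

//- @SavePerson -//
public void SavePerson(Person person)
{
    Channel.SavePerson(person);
}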

The channel creation mechanics that I demonstrated earlier are now provided automatically by the ClientBase class.  You can also modify this class a little by bridging up to a total of 10 different constructors provided by ClientBase.  For example, the following constructor call will allow developers to specify a specific binding and endpoint:

public PersonClient(Binding binding, EndpointAddress address)
    : base(binding, address) { }

At this point, we have something that protects developers from having to remember how to create a channel.  However, they still have to mess with configuration names and must remember to dispose of this client object (there's an open channel, remember).  Therefore, I normally add another layer of abstraction.  This one will be directly accessible for developer use.

This layer consists of a series of service agents.  Each service has its own agent, which is essentially a series of static methods that provide the most efficient means of making a service call.  Here's what I mean:

using System;
//+
namespace Contact.Service
{
    public static class PersonAgent
    {
        //- @GetPersonData -//
        public static Person GetPersonData(String personGuid)
        {
            using (PersonClient client = new PersonClient(ServiceConfiguration.ActivePersonServiceEndpoint))
            {
                return client.GetPersonData(personGuid);
            }
        }
    }
}

As you can see, the pre-configured service endpoint is automatically used and the PersonClient is automatically disposed at the end of the call (ClientBase<T> implements IDisposable).  If you want to use a different service endpoint, then just change it in your appSettings configuration.
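Developer usage then collapses to a single line; the guid is just the sample value used earlier:

Person person = PersonAgent.GetPersonData("F488D20B-FC27-4631-9FB9-83AF616AB5A6");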

Conclusion

At this point, I've explained every class of each project in my WCF service project model.  It's up to you to decide how to best create and manage your data and service contracts as well as clients.  But, if you want a streamlined, efficient model for all your service projects, you will want to create a publicly accessible project to house all your reusable elements.

Also, remember you don't need a full-on client class to access a service.  WCF communicates with channels and channel creation simply requires an address, binding, and contract.  If you have that information, just create your channel and make your call.  You can abstract the internals of this by using a ClientBase object, but this is entirely optional.  If the project you are working on requires hardcore WCF knowledge, there's no reason to pretty it up.  However, if non-WCF experts will be working with your system, abstractions are easy to create.


Understanding WCF



If you like this document, please consider writing a recommendation for me on my LinkedIn account.


Introduction

One of the most beautiful things about the Windows Communication Foundation (WCF) is that it's a completely streamlined technology.  When you can provide solutions to a myriad of diverse problems using the same principles, you know you're dealing with a work of genius.  This is the case with WCF.  With a single service implementation, you can provide access to ASMX, PHP, Java, TCP, named pipe, and JSON-based services by adding a single XML element for each type of connection you want to support.  On the flip side, with a single WCF client you can connect to each of these types of services, again, by adding a single line of XML for each.  It's that simple and streamlined.  Not only that, this client scenario works the same for both .NET and Silverlight.

In this document, I'm going to talk about how to access WCF services using Silverlight 2 without magic.  There will be no proxies, no generated code, no 3rd party utilities, and no disgusting "Add Service Reference" usage.  Just raw WCF.  This document will cover WCF connectivity in quite some depth.  We will talk about service setup; various WCF, SOA, and Silverlight paradigms; client setup; some security issues; and a few supplemental features and techniques to help you aid and optimize service access.  You will learn about various WCF attributes, some interfaces, and a bunch of internals.  Though this document will be in depth, nothing will ever surpass the depth of MSDN.  So, for a fuller discussion of any topic, see the WCF documentation on MSDN.

Even though we're focusing on Silverlight, most of what will be explained will be discussed in a .NET context and then applied to Silverlight 2.  That is, instead of learning .NET WCF and Silverlight WCF separately, you will learn .NET WCF and how to vary it for Silverlight.  This comparative learning method should help you both remember and understand the concepts better.  Before we begin, though, let's set up a specific WCF service.  After all, if we don't have a service, we can't talk about accessing it.

Service Setup In Depth

When working with WCF, you are working with a completely streamlined system.  The most fundamental concept in this system is the ABC.  This concept scales from Hello World to the most complex sales processing system.  That is, for all WCF communication, you need an address, a binding, and a contract.  Actually, this is for any communication anywhere, even when talking to another person.  You have to know to whom, how, and what.  If you don't have these three, then there can't be any communication.

With these three pieces of information, you either create a service-side endpoint which a client will access or a client-side channel which the client will use to communicate with the service.

WCF services are set up using a three-step method:

  • First, create a service contract with one or more operation contracts.
  • Second, create a service implementation for those contracts. 
  • Third, configure a service host to provide that implementation with an endpoint for that specific contract.

Let's begin by defining a service contract.  This is just a simple .NET interface with the System.ServiceModel.ServiceContractAttribute attribute applied to it.  This interface will contain various operation contracts, which are simply method signatures with the System.ServiceModel.OperationContractAttribute applied to each.  Both of these attributes are in the System.ServiceModel assembly.

Do not under any circumstances apply the ServiceContract attribute directly to the implementation (i.e. the class).  The ability to do this is probably the absolute worst feature in WCF.  It defeats the entire purpose of using WCF: your address, your binding, your contract, and your implementation are completely separate.  Because this essentially makes your implementation your contract, all your configuration files will be incredibly confusing to those of us who know WCF well.  When I look for a contract, I look for something that starts with an "I".  Don't confuse me with "PersonService" as my contract.  Person service means person… service.  Not only that, but later on you will see how to use a contract to access a service.  It makes no sense to have my service access my service; thus, with the implementation being the contract, your code will look painfully confusing to anyone who knows WCF.

Here's the sample contract that we will use for the duration of this document:

using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    [ServiceContract(Namespace = Information.Namespace.Contact)]
    public interface IPersonService
    {
        //- GetPersonData -//
        [OperationContract]
        Person GetPersonData(String personGuid);
    }
}

Keep in mind that when you design for WCF, you need to keep your interfaces as simple as possible.  The general rule of thumb is that you should have somewhere between 3 and 7 operations per service contract.  When you hit the 12-14 mark, it's seriously time to factor out your operations.  This is very important.  As I'll mention again later, in any one of my WCF projects I'll have upwards of dozens of service contracts per service.  You need to continuously keep in mind what your purpose is for creating this service, filtering those purposes through the SOA filter.  Don't design WCF services like you would a framework, which, even then, shouldn't have many access points!

The Namespace property set on the attribute specifies the namespace used to logically organize services.  Much like how .NET uses namespaces to separate various classes, structs, and interfaces, SOAP services use namespaces to separate various actions.  The namespace may be arbitrarily chosen; the client and service simply must agree on it.  In this case, the namespace is the URI http://www.netfxharmonics.com/service/Contact/2008/11/.  This namespace will also be on the client.  This isn't a physical URL (universal resource locator), but a logical URI (universal resource identifier).  Despite what some may say, both terms are in active use in daily life.  Neither is more important than the other and neither is "deprecated".  All URLs are URIs, but not all URIs are URLs, as you can see here.
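
For reference, the Information.Namespace.Contact constant used in the attribute could be defined with something as simple as this (a sketch; your own constants class may be shaped differently):

using System;
//+
namespace Contact.Service
{
    public static class Information
    {
        //- @Namespace -//
        public static class Namespace
        {
            //+ the logical URI agreed upon by client and service
            public const String Contact = "http://www.netfxharmonics.com/service/Contact/2008/11/";
        }
    }
}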

Notice that this interface contains an operation that returns Person.  Person is a data contract.  Data contracts are classes which have the System.Runtime.Serialization.DataContractAttribute attribute applied to them.  These have one or more data members, which are public or private properties or fields that have the System.Runtime.Serialization.DataMemberAttribute attribute applied to them.  Both of these attributes are in the System.Runtime.Serialization assembly.  This is important to remember; if you forget, you will probably assume them to be in the System.ServiceModel assembly and your contract will never compile.

Notice I said that data members are private or public properties or fields.  That was not a typo.  Unlike the serializer for the System.SerializableAttribute attribute, the serializer for the DataContract attribute allows you to have private data members.  This allows you to hide information from developers, but allow services to see it.  Related to this is how classes with the DataContract attribute differ from classes with the Serializable attribute.  When you use the Serializable attribute, you are using an opt-out model.  This means that when the attribute is applied to the class, each member is serializable.  You then opt out particular fields (not properties; thus one major inflexibility) using the System.NonSerializedAttribute attribute.  On the other hand, when you apply the DataContract attribute, you are using an opt-in model.  Thus, when you apply this attribute, you must opt in each field or property you wish to be serialized by applying the DataMember attribute.  Now to finally look at the Person data contract:

[DataContract(Namespace = Information.Namespace.Contact)]
public class Person
{
    //- @Guid -//
    [DataMember]
    public String Guid { get; set; }


    //- @FirstName -//
    [DataMember]
    public String FirstName { get; set; }


    //- @LastName -//
    [DataMember]
    public String LastName { get; set; }


    //- @City -//
    [DataMember]
    public String City { get; set; }


    //- @State -//
    [DataMember]
    public String State { get; set; }


    //- @PostalCode -//
    [DataMember]
    public String PostalCode { get; set; }
}

Note how simple this class is.  This is incredibly important.  You need to remember what this class represents: data moving over the wire.  Because of this, you need to make absolutely sure that you are sending only what you need.  Just because your internal "business object" has 10,000 properties doesn't mean that your service client will ever be able to handle it.  You can't get blood from a turnip.  Your business desires will never change the physics of the universe.  You need to design with this specific scenario of service-orientation in mind.  In the case of Silverlight, this is even more important since you are dealing with information that needs to get delegated through a web browser before the plug-in ever sees it.  Not only that, but every extra property you send over the wire makes your Silverlight application that much less responsive.

When I coach architects on database design, I always remind them to design for the specific system which they'll be using (i.e. SQL Server) and always keep performance, space, and API usability in mind (this is why it's the job of the architect, not the DBA, to design databases!)  In the same way, if you are designing a system that you know will be used over the wire, account for that scenario ahead of time.  Much like security, performance and proper API design aren't "features", they're core parts of the system.  Do not design 10 different classes, each representing a property which will be used in another class which, in turn, will be serialized and sent over the wire.  This will be so absolutely massive that no one will ever be able to handle it.  If you have more than around 15 properties in your entire object graph, it's seriously time to rethink what you want to send.  And, never, ever, ever send an instance of System.Data.DataSet over the wire.  There has never been, is not now, and never will be any reason to ever send any instance of this type anywhere.  It's beyond massive and makes the 10,000 property data transfer object seem lightweight.  The fact that something is serializable doesn't mean that it should be.

This is the main reason you should not apply the Serializable attribute to all classes.  Remember, this attribute follows an opt-out model (and a weak one at that).  If you want your "business objects" to work in your framework as well as over the wire, you need to remove this attribute and apply the DataContract attribute.  This will allow you to specify via the DataMember attribute which properties will be used over the wire, while leaving your existing framework completely untouched.  This is the reason the DataContract attribute exists!  Microsoft realized that the Serializable attribute is not fine grained enough for SOA purposes.  They also realized that there's no reason to force everyone in the world to write special data transfer objects for every operation.  Even then, use DataContract sparingly.  Just as you should keep as much private as possible and as much internal as possible, you want to keep as much un-serializable as possible.  Less is more.
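
To see the two models side by side, here's a sketch with two hypothetical classes; the first follows the Serializable opt-out model, the second the DataContract opt-in model:

using System;
using System.Runtime.Serialization;
//+
//+ opt-out model: every field is serialized unless explicitly excluded (fields only)
[Serializable]
public class LegacyPerson
{
    public String Name;
    [NonSerialized]
    public String InternalId;
}


//+ opt-in model: nothing is serialized unless explicitly included
[DataContract]
public class WirePerson
{
    [DataMember]
    public String Name { get; set; }


    //+ no DataMember attribute; this never crosses the wire
    public String InternalId { get; set; }
}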

In my Creating Streamlined, Simplified, yet Scalable WCF Connectivity document, I explain that these contracts are considered public.  That is, both the client and the service need the information.  It's the actual implementation that's private.  The client needs only the above information, whereas the service needs the above information as well as the service implementation.  Therefore, as my document explains, everything mentioned above should be in a publicly accessible assembly separate from the service implementation to maximize flexibility.  This will also allow you to rely on the original contracts instead of relying on a situation where the contracts are converted to metadata over the wire and then converted to sloppily generated contracts.  That's slower, adds latency, adds another point of failure, and completely destroys your hand-crafted, highly-optimized contracts.  Simply add a reference to the same assembly on both the client and server-side and you're done.  If multiple people are using the service, just hand out the public assembly.

At this point, many will try to do what I've just mentioned in a Silverlight environment to find that it doesn't seem to work.  That is, when you try to add a reference to a .NET assembly in a Silverlight project, you will get the following error message:

[Image: DotNetSilverlightReferenceMessageBox, the Visual Studio error dialog shown when referencing a .NET assembly from a Silverlight project]

Fortunately, this isn't the end of the world.  In my document entitled Reusing .NET Assemblies in Silverlight, I explain that this is only a Visual Studio 2008 constraint.  There's absolutely no technical reason why Silverlight can't use .NET assemblies.  Both the assembly and module formats are the same for Silverlight and .NET.  When you try to reference an assembly in a Silverlight project, Visual Studio 2008 does a check to see what version of mscorlib the assembly references.  If it's not 2.X.5.X, then it says it's not a Silverlight assembly.  So, all you need to do is modify your assembly to have it use the appropriate mscorlib file.  Of course, then it's still referencing the .NET System.ServiceModel and System.Runtime.Serialization assemblies.  Not a big deal, just copy/paste the Silverlight references in.  My aforementioned document explains everything you need to automate this procedure.

Therefore, there's no real problem here at all.  You can reuse all your contracts on both the service-side and the client-side in both .NET and Silverlight environments.  As you will see a bit later, Silverlight follows an async communication model and, therefore, must use async-compatible service contracts.  At that time you may begin to think that you can't simply have a one-stop shop for all your contract needs.  However, this isn't the case.  As it turns out, .NET can do asynchronous communication too, so when you create that new contract, you can keep it right next to your original service contract.  Thus, once again, you have a single point where you keep all your contracts.
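
As a quick preview of what that looks like, here's a sketch of an async-compatible version of our service contract using the standard WCF asynchronous pattern; the Name property keeps the wire-level contract name identical to the synchronous version:

using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    [ServiceContract(Namespace = Information.Namespace.Contact, Name = "IPersonService")]
    public interface IPersonServiceAsync
    {
        //- GetPersonData -//
        [OperationContract(AsyncPattern = true)]
        IAsyncResult BeginGetPersonData(String personGuid, AsyncCallback callback, Object state);
        //+ the End method carries no attribute
        Person EndGetPersonData(IAsyncResult result);
    }
}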

Moving on to step 2, we need to use these contracts to create an implementation.  The service implementation is just a class which implements a service contract.  The service implementation for our document here is actually incredibly simple:

using System;
//+
namespace Contact.Service
{
    public class PersonService : Contact.Service.IPersonService
    {
        //- @GetPersonData -//
        public Person GetPersonData(String personGuid)
        {
            return new Person
            {
                FirstName = "John",
                LastName = "Doe",
                City = "Unknown",
                Guid = personGuid,
                PostalCode = "66062",
                State = "KS"
            };
        }
    }
}

That's it.  So, if you already have some logic you know is architecturally sound and you would like to turn it into a service, just create an interface for your class and add some attributes to the interface.  That's your entire service implementation.

Step 3 is to configure a service host with the appropriate endpoints.  In this document, we are going to be using an HTTP-based service.  Thus, after we set up a new web site, we create a Person.svc file in the root and add to it a service directive specifying our service implementation.  Here's the entire Person.svc file:

<%@ ServiceHost Service="Contact.Service.PersonService" %>

No, I'm not joking.  If you keep your implementation in this file as well, then you are not using WCF properly.  In WCF, you keep your address, your binding, your contract, and your implementation completely separate.  By putting your implementation in this file, you are essentially tying the address to the implementation.  This defeats the entire purpose of WCF.  So, again, the above code is all that should ever be in any svc file anywhere.  Sometimes you may have another attribute set on your service directive, but this is basically it.

This is an unconfigured service host.  Thus, we must configure it.  We will do this in the service web site's web.config file.  There's really only one step to this, but that one step has a prerequisite.  The step is this: set up a service endpoint; but this requires a declared service.  Thus, we will declare a service and add an endpoint to it.  An endpoint specifies the WCF ABC: an address (where), a binding (how), and a contract (what).  Below is the entire web.config file up to this point:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.serviceModel>
    <services>
      <service name="Contact.Service.PersonService">
        <endpoint address="" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
      </service>
    </services>
  </system.serviceModel>
</configuration>

This states that there can be "basicHttpBinding" communication through Contact.Service.IPersonService at address Person.svc to Contact.Service.PersonService.  Let's quickly cover each concept here.

The specified address is a relative address.  This means that the value of this attribute is appended onto the base address.  In this case, the base address is the address specified by our web server.  In the case of a service outside of a web server, you can specify an absolute address here.  But, remember, when using a web server, the web server is going to control the IP address and port bindings.  Our service is at Person.svc; thus, the base URL is already provided for us.  In this case the address is blank, but you will use this address attribute if you add more endpoints, as you will see later.

The binding specifies how the information is to be formatted for transfer.  There's actually nothing too magical about a binding, though.  It's really just a collection of binding elements and pre-configured parameter defaults, which are easily changed in configuration.  Each binding will have, at a minimum, two binding elements.  One of these is a message encoding binding element, which will specify how the message is formatted.  For example, the message could be text (via the TextMessageEncodingBindingElement class; note: binding elements are in the System.ServiceModel.Channels namespace), binary (via the BinaryMessageEncodingBindingElement class), or some other encoding.  The other required binding element is the transport binding element, which specifies how the message is to go over the wire.  For example, the message could go over HTTP (via the HttpTransportBindingElement), HTTPS (via the HttpsTransportBindingElement), TCP (via the TcpTransportBindingElement), or even a bunch of others.  A binding may also have other binding elements to add more features.  I'll mention this again later, when we actually use a binding.
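
To make the composition aspect concrete, here's a sketch of a hypothetical custom binding built directly from two binding elements; it sends binary-encoded messages over HTTP:

using System.ServiceModel.Channels;
//+
//+ the encoding element comes first; the transport element must be last
CustomBinding binding = new CustomBinding(
    new BinaryMessageEncodingBindingElement(),
    new HttpTransportBindingElement());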

The last part of an endpoint, the contract, has already been discussed earlier.  One thing that you really need to remember about this, though, is that you are communicating through a contract to the hosted service.  If you are familiar with interface-based development in .NET or COM, then you already have a strong understanding of what this means.  However, let's review.

If a class implements an interface, you can access the instantiated object through the interface.  For example, in the following code, you are able to access the Dude object through the ISpeak interface:

interface ISpeak
{
    void Speak(String text);
}


class Dude : ISpeak
{
    public void Speak(String text)
    {
        //+ speak text
    }
}


public class Program
{
    public void Run()
    {
        ISpeak dude = new Dude();
        dude.Speak("Hello");
    }
}

You can think of accessing a WCF service as being exactly like that.  You can push the comparison even further.  Say the Dude class implemented IEat as well.  Then we can access the instantiated Dude object through the IEat interface.  Here's what I mean:

interface ISpeak
{
    void Speak(String text);
}


interface IEat
{
    void Eat(String nameOfFood);
}


class Dude : ISpeak, IEat
{
    public void Speak(String text)
    {
        //+ speak text
    }


    public void Eat(String nameOfFood)
    {
        //+ eat food
    }
}


public class Program
{
    public void Run()
    {
        IEat dude = new Dude();
        dude.Eat("Pizza");
    }
}

In the same way, when configuring a WCF service, you will add an endpoint for each contract through which you would like your service to be accessed.

Though it's beyond the scope of this document, WCF also allows you to version contracts.  Perhaps you added or removed a parameter from your contract.  Unless you want to break all the clients accessing the service, you must keep the old contract applied to your service (read: keep the old interface on the service class) and keep the old endpoint running by setting up a parallel endpoint.

You will add a new service endpoint every time you change your version, change your contract, or change your binding.  On a given service, you may have dozens of endpoints.  This is a good thing.  Perhaps you provide for four different bindings, with two of them having two separate configurations each, three different contracts, and two different versions of one of the contracts.  In this document, we are going to start out with one endpoint and add more later.
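
As a preview, a service with an extra binding configuration and a versioned contract might eventually be configured with parallel endpoints like this (the "Binary" binding configuration and the IPersonService2 contract are hypothetical):

<service name="Contact.Service.PersonService">
  <!-- addresses are relative to the Person.svc base address -->
  <endpoint address="" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
  <endpoint address="binary" binding="customBinding" bindingConfiguration="Binary" contract="Contact.Service.IPersonService" />
  <endpoint address="v2" binding="basicHttpBinding" contract="Contact.Service.IPersonService2" />
</service>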

Now we have set up a complete service.  However, it's an incredibly simple service setup, thus not requiring too much architectural attention.  When you work with WCF in a real project, you will want to organize your WCF infrastructure to be a bit more architecturally friendly.  In my document entitled Creating Streamlined, Simplified, yet Scalable WCF Connectivity, I explain how to streamline and simplify WCF connectivity and how you can use a private/public project model to simplify your overall service architecture.

Tim Ferris - Trial by Fire



This is beyond awesome.  Tim Ferris, author of one of the greatest books ever written, Four Hour Work Week, has announced that he has a new show called Trial by Fire.  I'm incredibly excited to hear this.  Tim Ferris is one of my core role models for just about every area of life.  I regularly reread and reference his Four Hour Work Week book and am constantly studying his blog.  In fact, when you read my NetFXHarmonics web site, you are reading Ferris principles applied to the development world.

He calls himself a life hacker.  What it takes others years to master, he tries to learn in days.  This is the primary purpose of his Trial by Fire show.  It's also something I've been studying for years through my research in accelerated learning and experience induction.  Ferris sometimes mentions that his technique is to deconstruct, streamline, and remap.  If you read my recent posts on streamlining WCF and WCF in Silverlight, you've seen a taste of how you can apply these principles to development.  It's how I personally think, act, and speak.

For more information on Tim Ferris, his show, or his book, check out his blog at http://www.fourhourblog.com/.  His blog is essentially an extension to his book, Four Hour Work Week, a book every single person in the world needs to read and reread.  You absolutely must buy this book.  Get it in print or get it in audio, just get it.

FREE Silverlight Training on the Web



In case you didn't know it, knowledge is free.  In fact, it always has been.  Some cultures make it hard to obtain, but it's free nonetheless.  The Internet gives you extremely close access to this knowledge.  You can randomly choose just about any topic in the world and find at least one article, blog posting, or Wikipedia entry on the topic.  In fact, when I was in college I only showed up once to my Kansas State University Physics II class.  Instead, I kept up with the class from home by watching the MIT OpenCourseWare video courses.  When it comes to Internet-related technologies like Silverlight, knowledge is even easier to find.

I see some of the biggest training companies offering all sorts of great Silverlight courses.  However, these are extremely pricey.  There are also many books on Silverlight coming out.  Again, not free.  But think about it: how do you think the trainers and authors get their information?  When I was offered my Silverlight 2 book deal (since being on a deadline sucks, I turned it down), where do you think I would get my information?  It's all free online.  Here in December 2008, there are all kinds of amazing free resources for learning Silverlight.  You do not need training.  You do not need to buy a book.  Here are some of these resources that I've found this year to help bring you from ground zero to being a Silverlight master:

First, there's the 53-part video series at Silverlight.net.  This series covers just about every single topic you will ever see in your Silverlight career.  However, I would consider these to be at the basic level.  They cover the fundamentals of each topic, give great tips, and progressively give more interesting examples as the videos progress.  If all you are going to be doing is under-using Silverlight 2 as an RIA platform and for general [boring] UI development, then this series may be 90% of what you need.  Link: http://silverlight.net/Learn/videocat.aspx?cat=2

Second, there's the 44-part video series from Mike Taulty.  This is the guy behind the MSDN Nuggets videos.  These videos are more at the intermediate-to-advanced level and are somewhat focused on "under-the-covers" development.  Mike doesn't do drag-n-drop videos.  He teaches real technology.  Whereas the previous series will discuss concepts and how to do things "out of the box", Mike's videos show you how to work with things at a more mechanical level, thus giving you a much greater level of control.  If you don't know the topics he's discussing in the videos, you don't know Silverlight.  Link: http://channel9.msdn.com/posts/Dan/Mike-Taulty-44-Silverlight-20-Screencasts/

Third, let's not forget that Microsoft has its annual Mix and PDC conferences.  Microsoft makes sure that the content for these conferences is freely available online.  The Mix videos are very specific and, therefore, should probably be watched on an as-needed basis.  You can just follow the link to see the wide variety of topics.  Since it's at a conference, however, some of the information will be marketing-speak, but there's a lot of good stuff in the videos as well.  The PDC, however, is much less marketing-ish and there were a few Silverlight 2 sessions.  Links: http://silverlight.net/learn/videocat.aspx?cat=8 and https://sessions.microsoftpdc.com/public/timeline.aspx.

Fourth, if you're the reading type, then you may prefer the Silverlight 2 e-book at learn-silverlight-tutorial.com.  This e-book covers a ton of information.  Much like the 53-part series, I would mark this down as basic-level.  It touches on a wide variety of topics.  However, much of the information is just that: "a touch".  It's not very deep, but it's rather wide.  Link: http://www.learn-silverlight-tutorial.com/

Fifth, Microsoft has always been good about providing QuickStarts.  These are kind of a cross between visual, text, and hands-on learning.  They are also the typical go-to resource for anyone new to anything.  The ASP.NET quick starts are still incredibly popular these many years later.  The Silverlight ones are quite well done as well.  The topics are basic-to-intermediate and range from general UI controls to cooler stuff like JavaScript/DOM interop.  However, you may feel completely free to absolutely ignore the completely worthless "web services" section.  Whoever wrote that thought he or she was writing about the hopelessly-flawed ASMX, not the image-of-beauty WCF and, therefore, didn't even remotely bother to obey the most fundamental of WCF purposes and practices (i.e. keep your address, binding, and contract away from your implementation!)  Link: http://silverlight.net/quickstarts/

Speaking of WCF, the last resource I want to mention is my document entitled "Understanding WCF in Silverlight 2".  This one has received a lot of attention since I wrote it in November 2008.  In fact, it's now listed on the WCF MSDN home page.  It's there because I cover WCF from the ground up for both .NET and Silverlight in a very deep manner.  If you are new to WCF, SOA, or Silverlight, then this is a good place to start (of course, no bias here.)  I wrote this document to help both people new to WCF and Silverlight as well as those who have been working with either for a while.  Even if you're not too serious about Silverlight, you should still read this detailed document to understand WCF better.  I don't play around with introductory nonsense; I hit the ground running with best practices and proper architectural principles.  Link: http://www.netfxharmonics.com/2008/11/Understanding-WCF-Services-in-Silverlight-2

Though it's not a straight learning resource, I suppose I would also like to mention that you can always check out the Silverlight tag in my Delicious account: http://delicious.com/quantum00/silverlight.  However, keep in mind that just because I bookmark something, it doesn't mean I'm recommending the resource.  It just means it was interesting and/or provided some value to me.  You can expect this to be updated for the months to come.  I live off of my Delicious account.

Another thing I would like to mention is that if you know WPF and web development, then you almost get Silverlight knowledge naturally.  Silverlight is essentially a subset of WPF for the web.  You just take WPF, rip out a bunch of features, add just a handful of topics, move it to the web, and you have Silverlight.  Much of your skills are reusable if you already know WPF.  Actually, a lot of your skills are reusable if you're a .NET developer in general.  Just whip open Reflector and start looking through the framework; you'll see that there's a lot less than what's in the .NET Framework, thus requiring much less learning time.

So, don't waste your money on books.  The blog is the new book.  Don't bother asking your employer for Silverlight training.  OK, well, if you just want some time off from work, sure, go ahead and ask.  Really, though, these resources will give you what you need for your Silverlight development.  In fact, if you were to compare the syllabus for an expensive course with the topics found in the first two sections of videos mentioned (97 of them!), you will see that the ROI for the course is virtually non-existent.


Reusing .NET Assemblies in Silverlight




Introduction

Long before Silverlight 1.0 was released, it was actually called WPF/E or WPF Everywhere.  The idea was to allow you to create WPF-like interfaces in your web browser.  This can be seen in a very small way in Silverlight 1.0.  All it provided was very basic primitive objects with the ability to interact with client-side technologies like JavaScript.  However, with Silverlight 2.0, Silverlight is actually more than what was originally promised with the term "WPF/E".  Silverlight is now far more than a graphical technology.  All this stuff about Silverlight being "WPF for the Web" is more to make the marketing folks happy than anything else.

As a technology parallel to .NET, Silverlight is not part of the .NET family.  Rather, it essentially mirrors the .NET platform to create a new platform inside of a web browser where you have a mini-CLR and a mini-Framework Class Library (FCL).  However, even though they are parallel technologies, you would expect Microsoft to allow some level of reuse between the two.  As it turns out, most topics are completely reusable.  Among other things, Silverlight has delegates, reference types, value types, a System namespace, and the ability to write code in both C# and VB.

Furthermore, despite the rumors, Silverlight also shares the exact same module and assembly format as .NET.  This may seem completely shocking to some people given the fact that Visual Studio 2008 doesn't allow you to reference a .NET assembly in a Silverlight project.  In reality, however, there's no technical reason for this prohibition.  There isn't a single byte difference between a Silverlight and .NET assembly.  One way to see this is by referencing a Silverlight assembly in a .NET project.  Just try it.  It works great.  So, why doesn't Visual Studio allow .NET assemblies in Silverlight projects?

To answer this, we need to understand that just because an optional helper tool (i.e. Visual Studio) doesn't allow something, that doesn't mean the technology itself doesn't.  In this case, the reason why Visual Studio allows a .NET project to reference Silverlight assemblies, but not the other way around is probably because .NET assemblies can normally do more.  For example, .NET has all kinds of XML related entities in its System.Xml assembly.  If Silverlight were to try to use this, it would blow up at runtime.  However, both Silverlight and .NET have an mscorlib assembly thus giving them a sense of brotherhood.  Having said that, Silverlight has the System.Windows.Browser assembly which, upon access in .NET, would make your .NET application explode!  Thus, the Visual Studio restriction laws are flawed.

Fortunately, there are ways around Visual Studio's fascist regime.  I'm going to talk about two different ways of reusing .NET assemblies and code in Silverlight.  The first technique is the more powerful assembly-level technique, while the second is the more flexible file-level technique.  Each technique is useful for its own particular scenarios.  Please keep naive comments of "I'm ALWAYS going to…" and "I'm NEVER going to…" to yourself.  You need to decide which of these techniques (or possibly another technique) to use on a case-by-case basis.

The Assembly-Level Technique

For this technique, you need to understand what's going on under the covers when you try to add a .NET reference to your Silverlight application in Visual Studio.  It's actually incredibly simple.  Visual Studio isn't a monolith that controls all your code from a centralized location; sometimes it uses plug-ins to do its dirty work.

In this case, Visual Studio 2008 uses the Microsoft.VisualStudio.Silverlight .NET assembly.  In this assembly is the Microsoft.VisualStudio.Silverlight.SLUtil class, which contains the IsSilverlightAssembly method.  When you add an assembly to a Silverlight project, this method is called internally to see if your assembly is a Silverlight assembly.  If it is, it will add it.  If not, it won't.  It's just that simple.  But, given that the Silverlight and .NET assembly formats are the same, how can it know?

You may be shocked to find out that the reason behind this is completely artificial: if the assembly references the 2.0.5.X version of the mscorlib assembly, then Visual Studio says that it's a Silverlight assembly!  This test is essentially all the IsSilverlightAssembly method does.  Therefore, if you take your .NET 2.x/3.x assembly and change the version of mscorlib that your assembly references from 2.0.0.0 to 2.0.5.0, you may then add the assembly as a reference.  Now let's talk about this with a more hands-on approach.

Below is the sample code we will be working with for this part of the discussion.  Say this code is placed in an empty .NET project.  When it is compiled, we will have an assembly.  Let's call it DotNet.dll.

using System;
//+
namespace DotNet
{
    public class Test
    {
        public String GetText()
        {
            return String.Format("{0} {1} {2} {3}", "This", "is", "a", "test");
        }
    }
}

Before we go any further, let's discuss the state of the universe at this point.  If you ever try to solve a problem without understanding how the system works, you will at best be hacking the system.  Professionals don't do this.  Therefore, let's try to understand what's going on.

The first thing you need to know is that when you add an assembly to a project in Visual Studio, you are simply telling Visual Studio to tell the compiler what references you have so that when the compiler translates your code into IL, it knows what assemblies to include as "extern assembly" sections.  Even then, only the assemblies that are actually used in your code will have "extern assembly" sections.  Thus, even if you added a reference to every single assembly in your entire system but only used two, the IL would only have two extern sections (i.e. referenced assemblies).  The second thing you need to know is that no matter what, your assemblies will always have a reference to mscorlib.  This is the root of all things and is where System.Object is stored.

To help you understand this, let's take a look at the IL produced by this class.  To look at this IL, we are going to use .NET's ILDasm utility.  Reflector will not be your tool of choice here.  Reflector is awesome for referencing code, but not for working with it.  It's more about form than function.  With ILDasm we are going to run the below command:

ILDasm DotNet.dll /out:DotNet.il

For the sake of your sanity, use the Visual Studio command prompt for this.  Otherwise you will need to either state the absolute path of ILDasm or set the path.

This command will produce two files: DotNet.il and DotNet.res.  The res file is completely meaningless for our discussion and, therefore, will be ignored.  Here is the IL code in DotNet.il:

.assembly extern mscorlib
{
    .publickeytoken = (B7 7A 5C 56 19 34 E0 89)
    .ver 2:0:0:0
}
.assembly DotNet
{
    /** a lot of assembly level attributes have been left out **/


    .hash algorithm 0x00008004
    .ver 1:0:0:0
}
.module DotNet.dll
.imagebase 0x00400000
.file alignment 0x00000200
.stackreserve 0x00100000
.subsystem 0x0003 
.corflags 0x00000001     


.class public auto ansi beforefieldinit DotNet.Test extends [mscorlib]System.Object
{
    .method public hidebysig instance string GetText() cil managed
    {
        .maxstack    4
        .locals init ([0] object[] CS$0$0000)
        IL_0000:    ldstr "{0} {1} {2} {3}"
        IL_0005:    ldc.i4.4
        IL_0006:    newarr [mscorlib]System.Object
        IL_000b:    stloc.0
        IL_000c:    ldloc.0
        IL_000d:    ldc.i4.0
        IL_000e:    ldstr "This"
        IL_0013:    stelem.ref
        IL_0014:    ldloc.0
        IL_0015:    ldc.i4.1
        IL_0016:    ldstr "is"
        IL_001b:    stelem.ref
        IL_001c:    ldloc.0
        IL_001d:    ldc.i4.2
        IL_001e:    ldstr "a"
        IL_0023:    stelem.ref
        IL_0024:    ldloc.0
        IL_0025:    ldc.i4.3
        IL_0026:    ldstr "test"
        IL_002b:    stelem.ref
        IL_002c:    ldloc.0
        IL_002d:    call  string [mscorlib]System.String::Format(string, object[])
        IL_0032:    ret
    }


    .method public hidebysig specialname rtspecialname 
         instance void    .ctor() cil managed
    {
        .maxstack    8
        IL_0000:    ldarg.0
        IL_0001:    call  instance void [mscorlib]System.Object::.ctor()
        IL_0006:    ret
    }
}

Right now we only care about the first section:

.assembly extern mscorlib
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
  .ver 2:0:0:0
}

This ".assembly extern ASSEMBLYNAME" pattern is how your assembly references are stored in your assembly.  In this case, you can see that mscorlib is referenced using both it's version and it's public key.  For our current mission, all we need to do is change the second 0 to a 5.  The public key tokens used in Silverlight are completely different from the ones in .NET, but we are trying to fool Visual Studio, not Silverlight.  This is a compile-time issue, not a runtime-issue.  Speaking more technically, we don't care about the public key token because this information is only used when an assembly is to be loaded.  The correct mscorlib assembly will have already loaded by the Silverlight application itself long before our assembly comes on the scene.  So, in our case, this entire mscorlib reference is really just to make the assembly legal and to fool Visual Studio.

Once you make the change from 2:0:0:0 to 2:0:5:0, all you need to do is use ILAsm to restore the state of the universe (unlike Reflector with C#, ILAsm can put humpty dumpty back together again).  Here's our command for doing this (in this case the resource part is completely optional, but let's add it for completeness):

ilasm DotNet.il /dll /resource:DotNet.res /out:DotNet2.dll

You are now free to reference your .NET assembly in your Silverlight project or application.  As I've already mentioned, Silverlight and .NET have the same assembly format.  There's nothing in Silverlight that stops us from referencing .NET assemblies, it was only Visual Studio stopping us.

At this point you have just the basics of this topic.  However, it's not the end of the story.  As you should be aware, .NET's core assemblies use four-part names.  That is, they have a strong name.  This is used to disambiguate them from other assemblies.  That is, instead of the System assembly being called merely "System", which can easily conflict with other assemblies (obviously written by non-.NET developers who don't realize that System should be reserved), it's actually named "System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089".  When you reference an assembly, you need to make sure to match the name, version, culture, and public key token.  When it comes to using .NET assemblies in Silverlight, this is critically important.

Let's say, for instance, that you created a .NET project which referenced and used entities from the System, System.ServiceModel, and System.Runtime.Serialization assemblies.  In this case, the IL produced by the .NET compiler will create the following three extern assembly sections:

.assembly extern System
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89)
  .ver 2:0:0:0
}
.assembly extern System.ServiceModel
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89)
  .ver 3:0:0:0
}
.assembly extern System.Runtime.Serialization
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89)
  .ver 3:0:0:0
}

Notice the public key token on each.  Here all three are the same, but for other .NET assemblies they may be different.  What's important here, though, is that the keys are used to identify the assemblies for .NET, not Silverlight.  Thus, even though you did add your .NET assembly to your Silverlight application, an exception would be thrown at runtime at the point where your application tries to access something in one of these assemblies.

The following shows you what would happen in the extreme case of trying to use the System.Web assembly in your Silverlight application.  You would get the same error if you tried to access something in one of the above assemblies as well.

[Image: AssemblyException, the runtime exception thrown when a Silverlight application tries to load a .NET-keyed assembly]

As it stands, though, we can fix this just as easily as we fixed the mscorlib problem in Visual Studio.  All we need to do is open our IL and change the public keys and versions to the Silverlight versions.  Below is a list of the common Silverlight assemblies each with their public key token and version:

.assembly extern mscorlib
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System.Core
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System.Net
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System.Runtime.Serialization
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System.Windows
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}
.assembly extern System.Windows.Browser
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E)
  .ver 2:0:5:0
}


//+ note the different public key token in the following
.assembly extern System.ServiceModel
{
  .publickeytoken = (31 BF 38 56 AD 36 4E 35)
  .ver 2:0:5:0
}
.assembly extern System.Json
{
  .publickeytoken = (31 BF 38 56 AD 36 4E 35)
  .ver 2:0:5:0
}

Just use the same ILDasm/Edit/ILAsm procedure already mentioned to tell the assembly to use the appropriate Silverlight assemblies instead of the .NET assemblies.  This is an extremely simple procedure consisting of nothing more than a replace, a procedure that could easily be automated with very minimal effort.  It shouldn't take you much time at all to write a simple .NET application to do this for you.  It would just be a simple .NET to Silverlight converter and validator (to test for assemblies not supported in Silverlight).  Put that application in your Post Build Events (one of the top 5 greatest features of Visual Studio!) and you're done.  No special binary hex value searching necessary.  All you're doing is changing two well documented settings (the public key token and version).
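
As a sketch of such a converter, here's a deliberately naive, hypothetical implementation; it blindly replaces the mscorlib-keyed token and versions in the IL text, whereas a real tool would scope its replacements to the .assembly extern blocks and validate against unsupported assemblies:

using System;
using System.IO;
//+
public static class SilverlightIlConverter
{
    //- @Convert -//
    public static void Convert(String ilPath)
    {
        String il = File.ReadAllText(ilPath);
        //+ swap the .NET public key token for the Silverlight token
        //+ (System.ServiceModel and System.Json would need the 31 BF 38 56 AD 36 4E 35 token instead)
        il = il.Replace("(B7 7A 5C 56 19 34 E0 89)", "(7C EC 85 D7 BE A7 79 8E)");
        //+ move the referenced versions to 2.0.5.0
        il = il.Replace(".ver 2:0:0:0", ".ver 2:0:5:0");
        il = il.Replace(".ver 3:0:0:0", ".ver 2:0:5:0");
        File.WriteAllText(ilPath, il);
    }
}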

For certain assemblies, this isn't the end of the story.  If your .NET assembly has a strong name, then by modifying its IL, you have effectively rendered it useless.  Aside from disambiguation, strong names are also used for tamper protection.  You can sort of think of them as a CRC32 in this sense.  If you were to modify the IL of an assembly with a strong name, you would get a compile-time error like the following:

[Image: StrongNameException, the compile-time error shown when a strongly-named assembly has been modified]

However, as you know by the fact that we have looked at the raw text of the source code with our own eyes, the strong name does absolutely no encryption of the IL.  That's one of the most common misconceptions of strong names.  They are not used for public key encryption of the assembly.  Therefore, we are able to get around this by removing the public key from our assembly before using ILAsm.  Below is what the public key will look like in your IL file.  Just delete this section and run ILAsm.

.publickey = (00 24 00 00 04 80 00 00 94 00 00 00 06 02 00 00
              00 24 00 00 52 53 41 31 00 04 00 00 01 00 01 00
              37 3C 5A 7F 6D B6 3F 30 D8 3F DE E3 17 FE E5 2E
              68 43 16 A9 7C 42 69 5A 05 52 E6 73 C5 AC 58 7E
              B0 00 9F DC 1B 0A 78 57 79 12 79 53 E1 60 EB C9
              ED 49 7C 8C 73 1B 01 A7 BA 57 79 B5 53 83 8B CA
              8D F8 6F 3B BD A5 E4 BA 6A 12 B9 52 F2 E9 A3 FC
              42 17 E4 33 97 92 DC 21 30 57 B9 D3 63 7A F2 43
              73 42 70 18 89 8B 44 B9 D4 5A BA A9 21 A3 D9 E0
              86 20 3C 30 01 A9 B9 BB F4 D8 79 B7 7D 56 5A A9)

Upon using ILAsm to create the binary version of the same IL, you will be able to add your assembly, compile and run your application without a problem.  However, you can take this one step further by telling ILAsm to sign the assembly using your original strong name key.  To do this, just use the key command line option to specify the strong name key you would like to use.  Below is the new syntax for re-signing your assembly:

ILAsm DotNet.il /dll /resource:DotNet.res /out:DotNet11.dll /key=..\..\MyStrongNameKey.snk

At this point you have a strongly-named Silverlight assembly created from your existing .NET assembly.

Now, before moving on to explain a more flexible method of reuse, I want to cover a few miscellaneous topics.  First, for those of you who know some IL and are trying to be clever to make this process even simpler, you may think you could just do the following:

.assembly extern mscorlib { auto }

This won't work, as ILAsm will look for "auto" and place the 2.0.0.0 version in its place, thus leaving you right where you started.  Also, don't even think about leaving the entire mscorlib part off either.  That won't fool anyone, since ILAsm will detect that it's missing and add it before continuing the assembly process.  You need to explicitly state that you want assembly version 2.0.5.0.

Second, you need to think twice before you add a Silverlight assembly to a .NET application.  In the Visual Studio world, if you add a .NET assembly, you add only that assembly.  But, if that assembly is a Silverlight assembly, then you will see all of the associated Silverlight assemblies added for each culture you have.  When I did this on my system, exactly 100 extra files were added to my Bin folder!  That's insane.  So, perhaps the Visual Studio team put an "Add Reference" block in the wrong place!

The File-Level Technique

Now all of this is great.  You can easily access your .NET assemblies in Silverlight.  But, many times this isn't even what you need.  You need to remember that every time you reference an assembly in Silverlight, you increase the size of your Silverlight XAP package.  Whereas .NET and Silverlight will only register assembly references in IL when they are actually used, Silverlight will package referenced assemblies in the XAP file regardless of use.  The assemblies will also be registered in the AppManifest.xaml file as assembly parts.  Though the XAP file is nothing more than a ZIP file, thereby shrinking the size of the assembly, this still spells "bloat" if all you need is just a few basic types from an assembly that's within your control.  For situations like this, there's a much simpler and much more flexible solution.
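
For reference, here's roughly what that AppManifest.xaml registration looks like; the application assembly name, entry point, and runtime version below are hypothetical:

<Deployment xmlns="http://schemas.microsoft.com/client/2007/deployment"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            EntryPointAssembly="SilverlightApplication"
            EntryPointType="SilverlightApplication.App"
            RuntimeVersion="2.0.31005.0">
  <Deployment.Parts>
    <AssemblyPart x:Name="SilverlightApplication" Source="SilverlightApplication.dll" />
    <!-- every referenced assembly becomes an extra part, used or not -->
    <AssemblyPart x:Name="DotNet" Source="DotNet.dll" />
  </Deployment.Parts>
</Deployment>

Now, back to that simpler and more flexible solution.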

The solution to this again deals with understanding the internals of your system: whenever you add a file to your project in Visual Studio, all you are really doing is adding a file to an ItemGroup XML section in the .NET project file.  This is just a basic text file that describes the project.  As you may have guessed, the ItemGroup section simply contains groups of items.  In the case of compilation files (i.e. classes, structs, enums, etc...), they are Compile items.  Here's an example of a snippet from a .NET project:

<ItemGroup>
  <Compile Include="Client\PersonClient.cs" />
  <Compile Include="Agent\PersonAgent.cs" />
  <Compile Include="Properties\AssemblyInfo.cs" />
  <Compile Include="Configuration.cs" />
  <Compile Include="Information.cs" />
  <Compile Include="_DataContract\Person.cs" />
  <Compile Include="_ServiceContract\IPersonService.cs" />
</ItemGroup>

Given this information, all you need to do is (1) create a Silverlight version of this assembly, (2) open the project file and (3) copy/paste in the parts you want to use in your Silverlight project with the appropriate relative paths changed.  This will create a link from the Silverlight project's items to the physical items.  No copying is done.  They are pointing to the exact same file.  When they are compiled, there is no need to do any IL changes in your assemblies at all since the Silverlight assembly will be Silverlight and the .NET assembly will be .NET.
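
For example, the Silverlight project file might link to the contract files like this (the relative paths are hypothetical; the Link metadata controls where each file appears in Solution Explorer):

<ItemGroup>
  <!-- linked items: the Include path points at the .NET project's physical files -->
  <Compile Include="..\Contact.Service\_DataContract\Person.cs">
    <Link>_DataContract\Person.cs</Link>
  </Compile>
  <Compile Include="..\Contact.Service\_ServiceContract\IPersonService.cs">
    <Link>_ServiceContract\IPersonService.cs</Link>
  </Compile>
</ItemGroup>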

Now that you know about this under-the-covers approach, you should be aware that this is actually a fully supported option in Visual Studio.  Just go to add an existing item to your project and instead of clicking add or just hitting enter, hit the little arrow next to add and select "Add As Link".  This will do the exact same thing as what we did in our bulk copy/paste method in the project file.  Here's a screen shot of the option in Visual Studio:

[Image: AddAsLink, the "Add As Link" option on the Add button of Visual Studio's Add Existing Item dialog]

What may be more interesting to you is that this feature may be used anywhere in .NET.  You can use it to reuse any files in your entire system.  It's a very powerful technique for reusing specific items across assemblies.  It comes in very handy when two assemblies need to share classes and creating a third assembly which both may access would lead to needless complexity.

Conclusion

Given these two techniques, you should be able to effectively architect a solution that scales to virtually any number of developers.  The first technique is easy to deploy using a custom utility and post-build events, while the second is natively supported by any good version control system.  Keep in mind, though, that when using the first technique you may not always need to do this on every build.  The best approach I've seen for this is to have a centralized location on a network share that contains nightly (or whatever) builds of core assemblies.  Then, a login script will copy each of the assemblies to each developer's machine.  This will cut down on the complexity of compilation and dramatically lower the time to compile any solution.

Regardless of which technique you use, you should feel a sense of freedom knowing of their existence.  This is especially true if all you are doing is trying to share data contracts between .NET and Silverlight.  As I've mentioned in my popular 70+ page "Understanding WCF in Silverlight 2" document, the "Add Service Reference" feature is not something that should be used in production.  In fact, it's painful in development as well.  Using the techniques described here, you can easily share your data contracts between your .NET server and the Silverlight client without the FrontPage/Word 95 style code generation.  For more information on this specific topic, see the aforementioned document.
