
Firefox 3.0 Beta 1 Features



A few days ago Firefox 3.0 Beta 1 was released.  This is a major revision packed with some seriously awesome features.  Here's a rundown of some of the major features for normal users, power users, and developers (this is not an exhaustive list, but it covers a lot of ground-- also note that I've only tested Firefox 3.0 Beta 1 on Windows):

SQLite - SQLite databases are now used to store cookies, download history, a new concept called "Places", and other things.  Since this information is stored in a series of databases (in *.sqlite files), we can use SQLite front-ends to run SQL CRUD queries against the stored information.  Even if we don't use the SQLite databases directly, developers from all over the world will be able to build very powerful extensions on top of them.  There's already at least one SQLite manager built as a Firefox extension.  Firefox has actually been using SQLite for a while, but until now it's only really been used for XUL and extension development.  If you are unfamiliar with SQLite, you should seriously check into it-- it's really awesome.  It's also the storage system for Google Gears.

Places - As I just mentioned, the new concept of "Places" is also stored in the database.  This feature tracks web surfing trends similarly to how various media players track music listening trends.  So, after a bit of surfing you'll be able to see which pages you visit most often.  Places also shows what bookmarks you have recently tagged, your most recently used tags, and a few other things.  Even if we don't use this feature in Firefox as is, I'm sure more extensions will be built to make Places more useful.  I can already visualize an extension that mashes Places metadata with your Windows Media most-popular metadata to give you a view of all your favorite things in one place.

Tags and Easier Bookmarking - Firefox 3.0 also introduces del.icio.us-like tags to bookmarks.  This isn't that big of a deal to me, because with Firefox 2.0 you could install the del.icio.us bookmark extension to replace the static Firefox bookmarks and let del.icio.us manage all your bookmarks.  It was so integrated that CTRL-D even sent your bookmark to del.icio.us.  The exciting part of Firefox 3.0 tagging is that the next del.icio.us extension will probably be faster and even easier to use, since Firefox now has built-in mechanisms for this.  Using the Firefox 3.0 tags feature by itself is nice too, though.

Coupled with this feature is the ability to simply click on a star to have your page sent to "Places" (it's actually very similar to the star in Gmail).  Another click of the star gives you a nice out-of-the-way box to set tags on the link.  It's actually very similar to what the del.icio.us extension did in Firefox 2.0, thus making me think even more that there will soon be an awesome del.icio.us extension for Firefox 3.0.

ACID2 Test Passed - It's official: Internet Explorer is the only major web browser that doesn't pass the ACID2 test (and it doesn't get anywhere near it).  Firefox has always been close (yes, since v1.0 its rendering has had the shape of a face), but it has finally crossed the finish line.  Internet Explorer's rendering, on the other hand, still looks like someone slaughtered a pig.  If you don't know what the ACID2 test is, it's THE test of a web browser's CSS support: the closer a browser's output is to the reference face, the better its rendering engine.  As you will see in a moment, Internet Explorer is SO far off that it's not even CLOSE to being a 7th generation web browser (…and I do not apologize for bashing IE-- there's always time for that.)

Here are the renderings of Firefox 3.0b1, Opera 9.24, Safari 3.04, and Internet Explorer 7 (and 6) for the ACID 2 test:

Firefox 3.0 Beta 1

Opera 9.24

Safari 3.0.4 (Windows)

Internet Explorer 7 (this is a scaled version-- click for full)

Internet Explorer 6 (also scaled-- click for full).

Sheesh… notice any similarities? If you think IE7 is a major improvement over IE6, think again. It's just IE6.5: a 6th generation browser in a 7th generation skin (i.e. tabs and integrated search).  Adding XMLHttpRequest doesn't make it a 7th generation browser (XMLHttpRequest was NOT in IE before IE7-- before IE7, the IE world had only ActiveX controls and Java proxies for remote scripting.  These are the opposite of standardized components.)  Try adding window.addEventListener, removing that horrendous ClearType, and getting somewhere near the shadow of the ballpark of the ACID2 test, and we'll talk.

JavaScript 1.8 - Some people know it and take it for granted, yet others don't realize it and are offended by it: Firefox has the most powerful JavaScript of any web browser.  Most of us know that Internet Explorer's CSS support is just about nonexistent, but most people don't know that Opera is analogously weak in the area of JavaScript.  Safari is a close second to Firefox.  Firefox is the only web browser that continually and consciously delivers a constant flow of documented JavaScript features.  Internet Explorer is actually pretty good in this area (I know-- it's shocking) and Opera is continually getting better, but Firefox is head and shoulders above everyone else (and none of this even mentions how advanced Firefox' DOM implementation is-- Firefox even has native base64 conversion functions!).

Firefox 1.5 had JavaScript 1.6, which included iterative array methods (e.g. forEach), like those in C# 2.0, as well as E4X.  Firefox 2.0 had JavaScript 1.7, which gave JavaScript a functional programming feel similar to LINQ's functional nature.  Firefox 3.0 now has JavaScript 1.8, which takes JavaScript functional programming to the next level by including lambda-style expression closures.  If you love C# 3.0, you will love JavaScript 1.8.  Firefox 3.0 may or may not also get client-side JSON serialization.  If it does, it should fit nicely with the WCF 3.5 JSON feature.  By now, anyone who still sees Firefox as anti-Microsoft technology needs to repent.
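
To give you a taste, here's a minimal sketch of JavaScript 1.8's expression closures and the new reduce method (note that a page has to opt in with <script type="application/javascript;version=1.8"> to get the 1.8 features):

var square = function(x) x * x;                       // expression closure: no braces, no return
var sum = [1, 2, 3, 4].reduce(function(a, b) a + b);  // reduce( ) is new in JavaScript 1.8
alert(square(5) + ', ' + sum);                        // "25, 10"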

There are also new DOM features, like two new drag events and support for Internet Explorer's clientTop and clientLeft attributes.  Firefox 3.0 also has a scriptable idle service allowing you to check how long a user has been idle.  I wish I had that 8 years ago when I created a web-based screen saver for a kiosk.  Another thing I wish I had years ago is Firefox 3's new getElementsByClassName function.  Since it's native (C++), it's MUCH faster than any hand-rolled JavaScript implementation (see John Resig's benchmarks.)
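
Here's a quick sketch of the native call (assuming a page with elements classed "product"):

var products = document.getElementsByClassName('product');
for (var i = 0; i < products.length; i++) {
    products[i].style.backgroundColor = '#ffc';
}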

For more information on Firefox' powerful development capabilities, check out the MDC (Mozilla Developer Center-- the Firefox equivalent of MSDN).  There you will find detailed references for the DOM, JavaScript, AJAX, XSLT, CSS, SOAP, XML-RPC, SVG, Canvas (which was Silverlight before Silverlight and is native to Firefox, Safari, and Opera-- notice which browser is missing?), XUL, and a whole host of other technologies you probably never knew existed or never knew were native to Firefox.  If you do ANY client-side web development, you need to check out these references and keep them close by.  The samples alone will save you hours of wasted debugging.

Lower Memory Utilization - Now, to be clear, I'm not one of those far too uptight people who cry every time SQL Server uses multiple GBs of memory.  On the contrary, I'm usually ecstatic to see that I'm actually using the memory I paid so much money for.  I'm not too uptight about Firefox using a lot of memory either, as I know it's caching everything it sees.  Since I use Firefox more than anything else, I have no problem with it using more memory than anything else-- and that includes Photoshop.  However, Firefox 3.0 uses a lot less memory.  You could do simple configuration tweaks in Firefox 2.0 to make it use a lot less memory and even release memory when you minimize, all without any extensions, but Firefox 3.0 cleans up memory as you go.  As I was watching Firefox's memory charts, I was shocked to see it return 30MB of memory upon closing a tab.  Now it's Safari that will be the target of the memory-usage paranoid.

Webmail Handlers - This isn't a feature I've seen in action yet, but it's one I'm really hoping comes to Gmail soon.  I'll just quote the release notes: "…web applications, such as your favorite webmail provider, can now be used instead of desktop applications for handling mailto: links from other sites. Similar support is available for other protocols (Web applications will have to first enable this by registering as handlers with Firefox)."  If Gmail does that registration, I'll finally be able to replace Google Chat as my mailto handler.

Offline Applications - This needs to be explicitly utilized by the developers of each particular web application, but now Firefox theoretically doesn't need Google Gears in order to use online applications locally.  Firefox 2.0 already had one interesting offline feature in the form of HTML 5's sessionStorage attribute.  That feature is conceptually similar to ASP.NET's ViewState in that it persists across page refreshes, but not across pages.  Firefox 3 adds two new events for offline functionality: "online" and "offline".  When an application goes offline, the offline event is raised, and similarly for the online event.  I've checked out these events and they are rock-solid.  There are also other offline application features in Firefox 3.0, but they aren't that well documented yet.  You can see an example of the offline application concept by using Google Reader with Google Gears.  I expect this feature to be available in Gmail soon, hopefully without ever needing a plugin.
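
Here's a minimal sketch of the two events in action (the handlers are my own illustration):

// fires when the browser drops to offline mode
window.addEventListener('offline', function( ) {
    alert('Connection lost; switching to local data.');
}, false);

// fires when connectivity returns
window.addEventListener('online', function( ) {
    alert('Back online; synchronizing.');
}, false);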

One Click Website Info - When you click on the website icon in the address bar, you get a box of information telling you a little about the website.  Really, what we're talking about here is SSL websites.  You can click the icon to get a quick view of the SSL information.  I personally just like the idea of not having to double-click.  I know, I'm picky.  It's the little things in life that make the difference, right?

Native viewing of ZIP files - This feature is not that well documented from what I've seen, but it's really awesome!  It allows you to view ZIP and JAR files directly in Firefox 3.0 by using the following pattern: jar:http://www.mywebsite.com/MyZipFile.zip!/.   Thus jar:http://www.davidbetz.net/dotnetcourse/CSharpLanguage.zip!/ (copy, don't click) views the contents of one of my course samples.  You know you're intrigued now.

There are also many new security features, like the forged website blocker, which stops you (or your relatives) from going to verified phishing web sites, and malware protection, which does the same for malware web sites.  There are also user experience enhancements.  Now when you type in the address bar, you are filtering by page title and URL, compared to filtering by just URL previously.  Also, zooming now attempts to zoom images and text, not just text, though I'm not finding it all that successful; Safari on the iPhone/iPod touch still owns that one.  Other development features include support for animated PNGs (APNG), the ability to use a web service as a content handler, support for rgba and hsla colors, and… ready for this? Cross-site XMLHttpRequest!  That's right, we will finally be able to do cross-domain AJAX without script-block hacks!  Other normal user/power user features include a permanent restart button (in Tools->Add-ons), a much better application content-type screen, and a really, really nice page info window which includes a cookie viewer and the supposed ability to enable and disable images, popup windows, cookies, and extension and theme installations per web site.

On the negative side, the new download window is absolutely horrible.  Firefox' download manager and download options actually get worse with each major Firefox release.  The download setup is finally as bad as Safari's.  Firefox 1.0 had absolutely the best download setup I've ever seen.  You could go to the options screen and, with the click of a button, a My Downloads folder was created and downloads would start going there.  That actually made sense!  In Firefox 1.5, they got rid of that awesome selling point, forcing you to make the folder yourself or suffer having all your downloads thrown all over your desktop.  Lame.  At least in Firefox 1.5 you could click the button next to "All files downloaded to:" and have access to your downloads in a folder view of your desktop.  In Firefox 3.0 you can't even do that! I'm never getting to my downloads again! Well, not never, because the Firefox developers have to be smart enough to fix that, and even if they aren't, Firefox has an awesome extension system that allows anyone to make a quick fix using XML, JavaScript and CSS.  Furthermore, the download manager API has been updated so extension developers can do much more.  It's also been moved from RDF to SQLite, thus allowing even more extensibility.

With all these additions, it's not hard to see that Firefox 3.0 is a major upgrade over previous versions, pushing the Firefox dynasty even further ahead of its competition (that is, Opera and Safari-- IE isn't in the ballpark.)  Some would criticize this statement, though, and possibly even say that I have double standards.  They would say that when Firefox gets a feature I proclaim it as awesome and slam other browsers for not having it, but when those other browsers get a feature that Firefox lacks, I ignore it.  To be sure, when other browsers get a feature that Firefox lacks, I very much criticize Firefox for it.  Firefox 2.0's imperfection on the ACID2 test was a good example, and the lousy download manager in Firefox 3.0 beta 1 is another.  I slammed them rather hard for that and submitted/voted for all kinds of other bugs in Firefox.  Furthermore, I love other browsers as well.  For example, because of its beautiful anti-aliasing and support for the CTRL-L and CTRL-K shortcuts, I use Safari about as much as Firefox these days.  Even still, Firefox is leaps and bounds ahead of the rest.  The zip viewer means nothing and SQLite is only "cool"; Places is something I'm not too excited about; it's the JavaScript, CSS, DOM and extension support that actually matters.  Web browsers need to be standards compliant and have a strong development feature set to be acceptable on today's web.  Opera will probably always be flashier, but Firefox will probably always be smarter.

As I stated initially, there's more to Firefox 3.0 than what I've mentioned here.  If you want to know more about any part of Firefox 3.0, just check out the many links above or the developer notes below.  For more developer information, I highly suggest going to the Mozilla Developer Center.  For other information, just check out the release notes and their links.

Related Links

New XAG Feature: Support for C# 3.0 Automatic Properties



One of the nicest features of C# 3.0 is one of the most subtle: automatic properties.  It's really nothing more than syntactical sugar and saves us a little bit of typing, but it's been a big help in making my code more self-documenting.  If you're unfamiliar with automatic properties, here is what one looks like:

public Int32 Id { get; set; }

When that single line is compiled and viewed in Reflector, you get the following:

[CompilerGenerated]
private int <Id>k__BackingField;


public int Id
{
    [CompilerGenerated]
    get
    {
        return this.<Id>k__BackingField;
    }
    [CompilerGenerated]
    set
    {
        this.<Id>k__BackingField = value;
    }
}

The new syntax is equivalent to a classic C# property.  Note that this property has both a get accessor and a set accessor.  This is the only type of automatic property you can create: you need the full { get; set; } for the automatic property to compile; { get; } or { set; } alone won't cut it.  If you need a property with only a get or a set accessor, then you need to use a classic C# property.  However, you can use { get; private set; } for a publicly read-only property.  It will create both accessors, but only the get accessor will be public.  Also keep in mind that the Visual Studio 2008 code-snippet shortcut "prop" now creates an automatic property and "propg" creates an automatic property with a private set accessor.
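
Here's a minimal sketch of the { get; private set; } pattern (the Person class is just an illustration):

public class Person
{
    // Publicly read-only; the compiler still generates both accessors.
    public Int32 Id { get; private set; }

    public Person(Int32 id)
    {
        Id = id;          // legal: we're inside the class
    }
}

// elsewhere...
Person p = new Person(8);
Int32 id = p.Id;          // legal: the get accessor is public
// p.Id = 9;              // compile-time error: the set accessor is private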

Since this feature helps so greatly with the readability of code, I have added a new feature to XAG: minimized properties.  Here is what the classic C# 2.0 syntax would look like for a simple DTO (data transfer object) defined in XAG:

<Assembly xmlns:x="http://www.jampadtechnology.com/xag/2006/11/">
    <SimpleType x:Key="ClassKey" Type="Class" AutoGenerateConstructorsByProperties="True" Namespace="ClassNamespace"  AccessModifier="Public">
        <Properties AccessModifier="Public">
            <Id Type="Int32" />
            <Name Type="String" />
            <Title Type="String" />
        </Properties>
    </SimpleType>
</Assembly>

Using XAG's express type creation, the XML compiles to the following C# code:

using System;

namespace ClassNamespace
{
    public class SimpleType
    {
        private Int32 id;
        private String name;
        private String title;

        public Int32 Id {
            get { return id; }
            set { id = value; }
        }

        public String Name {
            get { return name; }
            set { name = value; }
        }

        public String Title {
            get { return title; }
            set { title = value; }
        }

        public SimpleType(Int32 id, String name, String title) {
            this.Id = id;
            this.Name = name;
            this.Title = title;
        }

        public SimpleType( ) {
        }
    }
}

That's painfully verbose when compared with automatic properties.  The new feature in XAG allows you to choose between a classic property and a minimized property (an automatic property in C# 3.0).  Below is the same XAG DTO done with minimized properties.  In this example, notice that AutoGenerateConstructorsByProperties is set to false (the default).  This is because C# 3.0 has a feature called object initializers, which allows you to set properties when you instantiate an object without needing any special constructor.

<Assembly xmlns:x="http://www.jampadtechnology.com/xag/2006/11/">
  <SimpleType x:Key="ClassKey" Type="Class" Namespace="ClassNamespace" AccessModifier="Public">
    <Properties AccessModifier="Public" Minimized="True">
      <Id Type="Int32" />
      <Name Type="String" />
      <Title Type="String" />
    </Properties>
  </SimpleType>
</Assembly>

By simply setting Minimized to true (and optionally, AutoGenerateConstructorsByProperties to false), you get the following C# 3.0 code:

using System;

namespace ClassNamespace
{
    public class SimpleType
    {
        public Int32 Id { get; set; }
        public String Name { get; set; }
        public String Title { get; set; }

        public SimpleType( ) {
        }
    }
}

You can also use this new Minimized option with the existing options Static (a Boolean) and Mode (Blank, "GetOnly", or "SetOnly"), but you obviously can't use it with the Backing option.  The Backing option has a default value of true, which means that the property is backed by a private field.  There is no such thing as an automatic property with an explicit backing field; that's the entire point of an automatic property.  The following example demonstrates a few legal combinations for properties in XAG.  Notice that you can tell XAG that you want all but a few specified properties to be minimized.

<Assembly xmlns:x="http://www.jampadtechnology.com/xag/2006/11/">
    <SimpleType x:Key="ClassKey" Type="Class" Namespace="ClassNamespace"  AccessModifier="Public">
        <Properties AccessModifier="Public" Minimized="True">
            <Id Type="Int32" />
            <Name Type="String" Static="true" Mode="GetOnly" />
            <Title Type="String" Minimized="False" Backing="False" Mode="GetOnly" />
        </Properties>
    </SimpleType>
</Assembly>

This XML code compiles to the following C# 3.0 class:

using System;

namespace ClassNamespace
{
    public class SimpleType
    {
        public Int32 Id { get; set; }
        public static String Name { get; private set; }

        public String Title {
            get { throw new Exception("The method or operation is not implemented."); }
        }

        public SimpleType( ) {
        }
    }
}

In C# 3.0, you could use that code with an object initializer like this:

SimpleType st = new SimpleType( )
{
    Id = 8
};


Int32 id = st.Id; // id == 8

You can find more information about my XML Assembly Compiler at http://www.jampadtechnology.com/xag/.

Related Links

.NET Framework 3.5 Released



If you don't already know, .NET 3.5 is finally out, and with it came VS 2008.  I've been using it full time for many months now and there are some features I've come to love and others I find completely worthless.  Here is a quick breakdown of what I find cool (be sure to check out the links section to see more resources):

Notice I didn't mention anything about ASP.NET AJAX becoming native (or should I say naive?). This is an incredibly poorly designed technology bordering on the quality of Internet Explorer (ok, ok, not even a leaky nuclear core is quite that bad).  The JavaScript intellisense is a complete joke and only gets in the way, the Sys namespaces pollute the Firebug watch window so you can never see your objects, and the syntax is painfully non-intuitive.  The only nice feature it does have is the ability to access ASMX services from JavaScript.  Having said that, the year is almost 2008.  It's not 2002, and therefore we use WCF, not ASMX.  In WCF 3.5 we can very easily create flexible and powerful REST-based JSON services (adding straight XML support, if needed, with a single endpoint configuration element).  There's just no need to have SOAP turn your 6-byte request into a 300-byte message.  It adds up.  So, ASP.NET AJAX ("Atlas") is completely obsolete in my book. If you want to do real AJAX, then learn the fundamentals, whip out prototype/script.aculo.us, and use WCF 3.5 for your service interaction.
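
To give you an idea of what I mean, here's a minimal sketch of a REST-based JSON service in WCF 3.5 (the service itself is made up; you'd still wire it to a webHttpBinding endpoint with the webHttp behavior in configuration):

using System;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IGreetingService
{
    // GET /greet?name=David returns a JSON-serialized string
    [OperationContract]
    [WebGet(UriTemplate = "greet?name={name}", ResponseFormat = WebMessageFormat.Json)]
    String Greet(String name);
}

public class GreetingService : IGreetingService
{
    public String Greet(String name)
    {
        return "Hello, " + name;
    }
}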

Accelerated C# 2008 - Now, if you're looking for an awesome resource for learning/mastering .NET 3.5 and C# 3.0, I highly recommend the book Accelerated C# 2008 by Trey Nash. It gets right to the point and doesn't mess around with entry-level nonsense. You get the knowledge you need right away, and from it I estimate an experience induction of at least 7 months.

For full .NET Framework 3.5 examples, check out my Minima .NET 3.5 Blog Engine (on which this site runs) and my ESV Bible Web Service 2.0 Framework for .NET 3.5.

Links

ESV Bible Web Service Client for .NET 3.5



A while back, the guys over at the ESV Bible web site announced their new REST-based interface to replace their old SOAP interface.  The new interface provides the same functionality as the old, but allows for 5,000 queries per day (instead of the previous 500) and is based on REST architectural principles.  Because the service is fundamentally chatty, it made sense to switch to REST.  In the context of a Bible web service, it's hard to justify a 200-byte XML message when your actual request is 6 bytes ("John 1").  Also, because the method call is in the URI, the entire call is simplified all the more.

For those of you who are completely unfamiliar with REST interfaces, all you really need to know is that REST is a resource (or noun) based architecture.  That is to say, instead of calling, for example, a "GetItem" method, you simply access an "item" entity.  You access what the thing is, not what the thing does; it's kind of a web-based reversal of encapsulation.  In other words, instead of giving a server a command (a verb), you are accessing the resource directly (a noun).  There's obviously more to REST than this, and you can get more information from the nice article titled "Building Web Services the REST Way".

RESTful architecture really is a nice way of telling a system what you want, not how to get it.  This is really the point of framework design and abstraction in general.  In light of this, it's obvious that, as awesome as REST is, it's not how .NET developers want to think when working on a project.  When I'm working with something, I want to focus on the object at hand, not on URLs and parameters.  For this reason, I built a .NET 3.5 framework that allows easy and efficient access to the new ESV Bible REST web service.  Here are some samples of how to use it:

Here's a simple passage query returning HTML data:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );
String output = service.PassageQuery("Galatians 3:11");

With the flip of a switch you can turn it into plain text:

ESVBibleServiceV2 service = new ESVBibleServiceV2(OutputFormat.PlainText);
String output = service.PassageQuery("Galatians 3:11");

For more flexibility, you may use the provided parameter objects.  Using these in C# 3.0 is seamless thanks to object initializers:

PassageQueryParameters pqp = new PassageQueryParameters( ) { Passage = "John 14:6" };
ESVBibleServiceV2 service = new ESVBibleServiceV2(new PlainTextSettings( )
{
    LineLength = 100,
    Timeout = 30
});
String output = service.PassageQuery(pqp);

Here is a simple sample of accessing the verse of the day (in HTML without the audio link -- optional settings):

ESVBibleServiceV2 service = new ESVBibleServiceV2(new HtmlOutputSettings( )
{
    IncludeAudioLink = false
});
String output = service.DailyVerse( );

You can also access various reading plans via the provided .NET enumeration:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );
String output = service.ReadingPlanQuery(new ReadingPlanQueryParameters( )
{
    ReadingPlan = ReadingPlan.EveryDayInTheWord
});

Searching is also streamlined:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );
String output = service.Query("Justified");

Here is a lengthier example showing how you can use the QueryInfoAsObject method to get information about a query as a strongly-typed object:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );
QueryInfoData result = service.QueryInfoAsObject("Samuel");


Console.WriteLine(result.QueryType);
Console.WriteLine("----------------------");
if (result.QueryType == QueryType.Passage) {
    Console.WriteLine("Passage: " + result.Readable);
    Console.WriteLine("Complete Chapter?: " + result.IsCompleteChapter);
    if (result.AlternateQueryType != QueryType.None) {
        Console.WriteLine(String.Format("Alternate: {0}, {1}", result.AlternateQueryType, result.AlternateResultCount));
    }
}


if (result.HasWarnings) {
    foreach (Warning w in result.Warnings) {
        Console.WriteLine(String.Format("{0}: {1}", w.Code, w.Readable));
    }
}

Here is the output:

QueryInfoAsObject Example Output

For more advanced users, the Crossway XML format is also available:

ESVBibleServiceV2 service = new ESVBibleServiceV2(new CrosswayXmlVersion10Settings( )
{
    IncludeWordIds = true,
    IncludeXmlDeclaration = true
});
String output = service.PassageQuery(new PassageQueryParameters( )
{
    Passage = "Galatians 3"
});
Console.WriteLine(output);

That same XML data is also retrievable as an XmlDocument for pure XML interaction:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );
XmlDocument output = service.PassageQueryAsXmlDocument("Galatians 3");

For more flexible XML interaction, you may use XPath:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );


String output = service.PassageQueryValueViaXPath(new PassageQueryParameters( )
{
    Passage = "Gal 3:4-5",
    XPath = "//crossway-bible/passage/surrounding-chapters/current"
});

Sometimes, however, you will want more than one result from XPath:

String[] output = service.PassageQueryValueViaXPathMulti(new PassageQueryParameters( )
{
    Passage = "Gal 3:4-5",
    XPathSet = new[]
    {
        "//crossway-bible/passage/surrounding-chapters/previous",
        "//crossway-bible/passage/surrounding-chapters/next"                
    }
});

Here's what the result looks like in the debugger:

XPathSet Example Output

I've also copied the documentation for functions and parameters into the .NET XML comments, so you can quickly and easily see what a certain function or parameter does and what its default is:

ESVBibleServiceXmlComment

The new API uses your existing ESV Bible Web Service access key.  To use this key with this framework, you simply add an element called ESVBibleServiceKey to the appSettings section of your configuration file (a sample is provided with the framework).  You may also set it in any one of the parameter objects (e.g. PassageQueryParameters, QueryParameters, etc...), which overrides the key in the configuration file.  Per the API, you can use TEST for testing and IP for general-purpose queries.
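
In other words, assuming the standard appSettings mechanism, the configuration entry looks something like this:

<configuration>
  <appSettings>
    <add key="ESVBibleServiceKey" value="TEST" />
  </appSettings>
</configuration>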

Lastly, I would like to mention that this framework minimizes traffic by only sending options that deviate from defaults. So, for example, if you set IncludeWordIds to false and IncludeXmlDeclaration to true, only the IncludeXmlDeclaration value will be sent over the wire since IncludeWordIds is false by default.
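
Conceptually (and this is purely my own sketch, not the framework's actual source-- even the wire-level option names are illustrative), the settings objects build their query strings something like this:

using System;
using System.Text;

public static class QueryStringBuilder
{
    // Illustrative only: append an option to the query string only when it
    // deviates from its documented default (both of these default to false).
    public static String Build(Boolean includeWordIds, Boolean includeXmlDeclaration)
    {
        StringBuilder query = new StringBuilder( );
        if (includeWordIds)
            query.Append("&include-word-ids=true");
        if (includeXmlDeclaration)
            query.Append("&include-xml-declaration=true");
        return query.ToString( );
    }
}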

You can access this ESV Bible Web Service 2.0 framework on CodePlex at the address in the links section.  Enjoy!

Links

Prototype and Scriptaculous Book



Today I noticed the book "Prototype and script.aculo.us: You never knew JavaScript could do this!" and while you do not need a book to learn P&S, this book will definitely induce a good 6 months to a year of experience into your skill set.  The book is available on Amazon in print or on the book's website in PDF format.

If you only want to know the basics of P&S, then you'll be fine with looking over the Prototype documentation and script.aculo.us samples.  However, regardless of how deep you want to go, you should definitely check out the freely available source code for the book available on the book's website.

As always, let the tools do the work, but don't rely on them for everything.  It's critically important that you understand AJAX development at a deep mechanical level before you start using JavaScript or AJAX frameworks.  If you aren't well-versed in JavaScript and AJAX development, then I highly recommend AdvancED DOM Scripting: Dynamic Web Design Techniques by Jeffrey Sambells.

Related Links

Accelerated Language Learning (Timothy Ferriss)



Many years ago I wrote a paper on accelerated learning and experience induction.  This paper explains how I induce weeks of experience in days, months of experience in weeks, and years of experience in months, and how to dramatically learn new technologies with little to no investment.  I know people who have worked in a field for 4 years, but only have 6 months worth of skill (usually VB developers-- seriously).  I also know people who have worked for 6 months, but have over 4 years of skill (usually Linux geeks; paradoxically, VB developers are usually quicker to learn .NET basics than PHP developers, though they usually switch places in more advanced studies.)  How can anyone expect to gain skill by doing the exact same job for 4 years (e.g. building database-driven interfaces, cleaning data, writing reports)?  Obviously, calendar-years of experience are not directly related to skill-years of experience.  As it turns out, my learning techniques are not uncommon.

Today, author Timothy Ferriss (The 4-Hour Workweek) posted a blog entry about how he learns languages in an incredibly short timeframe.  His post was fascinating to me for many reasons, one of them being that his first step is as follows: "Before you invest (or waste) hundreds and thousands of hours on a language, you should deconstruct it."  This is the same first step in my accelerated learning method.  Apparently I was on to something!  In his deconstruction method, he asks a few key questions and does some component and paradigm comparisons to give you some idea of the language's scope and difficulty.  Based on what you learn from the deconstruction, you should have a good idea of what the language entails.

In my learning system, I refer to this deconstruction as "learning the shell", which is followed by "learning the foundations", then "learning the specifics"-- Shell, Foundations, Specifics-- my SFS (pronounced "sifs") method.  The method exploits Pareto's Law, allowing you to learn the 20% of the technology that gives you 80% of the return.  That's MUCH more than what most so-called "experts" have anyhow!  As it turns out, Timothy Ferriss uses Pareto's Law in his language learning as well.  You can hear about this in his interview with my other role model, Scott Hanselman.

For more information on Timothy Ferriss' other life-optimization work, check out his book The 4-Hour Workweek and his blog.

Related Links

Web Application Security Presentation



Today I found a really nice web application security presentation by Joe Walker.  Honestly, almost none of it is common sense and I would therefore encourage all web developers to check this out.  Also on the same page as the presentation are a number of very good AJAX security links like the XSS (Cross Site Scripting) cheat sheet.

BTW, this type of stuff is touched on in the Brainbench AJAX exam.

Links

Prototype and Scriptaculous



OK, it's time that I come out with it: I've switched to using the prototype and script.aculo.us ("scriptaculous") JavaScript/AJAX frameworks. For the longest time I've sworn my allegiance to manual AJAX/DOM manipulation, as I've always found it to be the absolute most powerful way to get the job done correctly, but as it turns out prototype/scriptaculous provide an incredible level of simplification without taking any of your power from you.  It's the ONLY AJAX framework I've found that didn't suck.  Though I'm a .NET developer, I can't stand the Microsoft ASP.NET AJAX ("Atlas") extensions.  Except for its awesome web service handling, which I use all the time, it's a slap in the face of AJAX development. It's bulky, with hard-to-read source code and incredibly low usability.  It seems to be the opposite of the beauty of C# and .NET in general.  With those technologies, everything just falls together without ever needing to read a thing (assuming you are a qualified web professional who understands the foundational concepts of the system). Sure, you have to look up stuff in the docs, but you don't have to pore over a book on the topic to be productive.  The same can be said for prototype and scriptaculous.

So, what is this thing? It's actually two frameworks: prototype, which is a single JavaScript file, and scriptaculous, which is a series of JavaScript files. Prototype is a foundational JavaScript framework that simplifies the existing client-side JavaScript API into something much more intuitive and widely cross-browser compatible. Finally! Cross-browser compatibility without needing to support it ourselves!  That means we no longer have to do a zillion tests to see how we are supposed to get an element's height. I can just call $('box').getHeight( ) and be done with it! Prototype has classes (types) for Arrays (including a very powerful each( ) function, similar to .NET's ForEach method), Elements (which allows you to modify style, add classes, and get ALL descendants-- not just the children), Events (instead of trying to test for addEventListener or attachEvent, just use Event.observe!), and classes for a ton of other things. To put it simply: prototype gives you a new client-side API. The source code is also incredibly easy to read. It's just the same stuff most of us have already written, but now we don't have to support it.  If we build our applications on prototype, someone else has pre-tested a BUNCH of our system for us!
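
For example (assuming an element with id "box"):

Event.observe(window, 'load', function( ) {
  alert($('box').getHeight( ));   // no browser sniffing required
});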

Scriptaculous is a different beast entirely. While prototype is a new general client-side API, scriptaculous goes more specifically into dynamics. For example, it allows you to turn a normal list into a sortable list with ONE line of JavaScript.  ONE.  Uno.  Eins.  It also allows you to turn one set of divs into a series of draggable elements (products?) and another set of divs into containers those items can be dropped into (shopping carts?) There is also a very nice set of pre-built animations, as well as other common things like an autocompleting textbox and an in-place editable label. These are things I've normally built manually, but now I can use them without micro-managing code.  Code by subtraction RULES!  Scriptaculous is also VERY flexible. Everything you can do in scriptaculous is extremely customizable thanks to JavaScript's flexible dynamic object syntax and native higher-order function capabilities. That means, for example, that when you create a sortable list you can control exactly how it scrolls and set callback functions, all in the same simple line of code. Also, note that scriptaculous uses prototype's API for its lower-level functionality. This is why you will often see the two products named together, as in the various books written on "prototype and scriptaculous".

What about some samples? Well, Prototype and Scriptaculous are both SO simple to work with that I have absolutely no idea how someone can write a book on them. I go to various Borders bookstores about every day (it's my office), so I get to see many different books. When I flip through the prototype/scriptaculous books I get very confused. How can someone take hundreds of pages to write something that could be said in 20 or 30?  Verbosity sucks (yeah I know… look who's talking).  These frameworks are insultingly simple to work with.

Here are a few very quick samples.  For better samples, just download scriptaculous and view the extremely well-documented prototype API online.

Prototype

Want to make a simple AJAX request?

new Ajax.Request('/service/', { 
  method: 'get', 
  onSuccess: function(t) { 
    alert(t.responseText); 
  }
}); 

No XmlHttpRequest object, no COM objects, nothing!

How about updating the content of an element?

Using this element...

<div id="myText"></div> 

...with this JavaScript...

$('myText').update('this is the new text'); 

... you get an updated element!  As you can see, it even uses the typical $ syntax (in addition to $$, $A, $F, $H, $R, and $w!) Just look at the examples in the Prototype API to see more.  You will be shocked to see how easy it is to walk the DOM tree now.  You will also be amazed at how much easier arrays are to manipulate.

Script.aculo.us

Using this XHTML structure...

<ul id="greek">
<li>Alpha</li>
<li>Beta</li>
<li>Gamma</li>
<li>Delta</li>
</ul>

...with this SINGLE line of JavaScript...

Sortable.create('greek');

..., you have a sortable list (try it out-- you will also notice some nice spring-back animations happening too!)

Need a callback when the sort is completed? (well, of course you do!)  Just give the <li> elements a patterned ID ('listid_count')...

<ul id="greek">
<li id="greek_1">Alpha</li>
<li id="greek_2">Beta</li>
<li id="greek_3">Gamma</li>
<li id="greek_4">Delta</li>
</ul>

...and add a function callback and you're done.

Sortable.create('greek', {
  onUpdate: function( ){ 
    alert('something happened');
  } 
});

Ooooooooooooooo scary. IT'S THAT EASY! You don't need a book. Just use the docs and samples online.

Here's another one: want to move an item from one list to another?

Just use these elements...

<ul id="greek">
<li id="greek_1">Alpha</li>
<li id="greek_2">Beta</li>
<li id="greek_3">Gamma</li>
<li id="greek_4">Delta</li>
</ul>
<ul id="hebrew">
<li id="hebrew_1">Aleph</li>
<li id="hebrew_2">Bet</li>
<li id="hebrew_3">Gimmel</li>
<li id="hebrew_4">Dalet</li>
</ul> 

... with this JavaScript.

Sortable.create('greek', { containment: ['greek', 'hebrew'] });
Sortable.create('hebrew', { containment: ['greek', 'hebrew'] });

Want to save the entire state of a list?

var state = Sortable.serialize('greek');

Couple that with the simple prototype Ajax.Request call and you can very quickly save the state of your dynamic application.
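
Putting the two together (the '/savestate/' URL is just an illustration):

new Ajax.Request('/savestate/', {
  method: 'post',
  parameters: Sortable.serialize('greek'),
  onSuccess: function(t) {
    alert('list state saved');
  }
});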

Now close your jaw and stop drooling.  I haven't even shown the drag-n-drop, animations, or visual effects that Scriptaculous provides.  Also, get this: it's all FREE. Just go get it at the links below. Be sure to look over the docs a few times to get some more examples of the prototype functionality and scriptaculous usage. I've thrown out A LOT of my own code without looking back now that I have these amazing frameworks. This is good stuff.

AdvancED DOM Scripting Book

Oh, and as always... be very certain that you know your AJAX before you do this.  I know it goes without saying that you need to be a qualified professional to use powerful tools, but some amateurs and hobbyists (and men who get a hand crushed trying to fix the washing machine) think "Hey! This tool can do it for me! I don't need to know how it works!"  So, make sure you understand the three pillars of AJAX (AJAX Communication, Browser Dynamics, and Modern JavaScript) before you even bother with the powerful frameworks, or else you will be flying blind.  Basically, if you can't recreate the Prototype framework (very easy-to-read code!), you shouldn't be using any JavaScript/AJAX framework.  If you aren't familiar with AJAX Communication, Browser Dynamics, or Modern JavaScript, check out Jeffrey Sambells' amazing book AdvancED DOM Scripting.  It's a guide that covers all the prerequisites for AJAX development, from service communication to DOM manipulation to CSS alteration.  Even if you're an AJAX expert, buy this book!

Links

SQL Server Database Model Optimization for Developers



It's my assessment that most developers have no idea how much a poor database model implementation, or an implementation by a DBA unfamiliar with the data semantics, can affect a system. Furthermore, most developers with whom I have worked don't really understand the internals of SQL Server well enough to make informed decisions for their project. Suggestions concerning the internals of SQL Server are often met with extreme reluctance from developers. This is unfortunate, because it is only when we understand a system's mechanics that we can fully optimize our usage of it. Those familiar with the history of physics will recall the story of when Einstein "broke" space with his special theory of relativity. Before Einstein was able to "fix" space, he had to spend nearly a decade trying to decipher how space worked. Thus was born the general theory of relativity.

It's not a universal rule, but I would have to say that the database model is the heart of any technical solution. Yet, in reality, the database implementation often seems to be one of the biggest bottlenecks of a solution. Sometimes it's a matter of poorly maintained databases, but in my experience it's mostly a matter of a poorly designed implementation. More times than not, the SQL Server database model implementation has been designed by someone with only a cursory knowledge of database modeling, or by someone who is an expert in MySQL or Oracle, not SQL Server.

Database modeling does not end with defining entities and their respective relations. Rather, it extends completely into the implementation. What good is an amazing plan if it is going to be implemented poorly? The implementation phase to which I am referring comes before the actual implementation, yet after what most people refer to as "modeling". It's actually not even a phase with a temporal separation, but rather a phase that requires continual thought and input from the start about the semantic understanding of the real world solution. This phase includes things like data-type management, index design, and security. This phase is the job of the resident architect or senior-level developer, not the job of the DBA. It needs to be overseen by someone who deeply understands both SQL Server and the semantics of the solution. Most of the time, the DBA needs to stay completely away from the original data model and focus on server-specific tasks like monitoring backups and tweaking existing data models based on the specifications that an architect has previously documented. Having said this, I often find that it's not only not the architect or senior developer optimizing a project-- often nobody even cares!

Developers need to start understanding that designing a proper data model based on the real world representation includes minimizing data usage, optimizing performance, and increasing usability (for the solution's O/R mapping). These are not jobs for a DBA. Someone with close knowledge of the project needs to make these decisions. More times than not, a DBA simply does not have the understanding of the project required to make these important decisions. They should stay away from the requirements of the system, leaving those to the architect and senior-level developers. Despite what many well-intentioned DBAs think, they do not own the data. They are merely assistants to the development team leaders.

Let's start off by looking at storage optimization. Developers should be able to look at their data model and notice certain somewhat obvious flaws. For example, suppose you have a table with a few million rows, with each row containing multiple char(36) columns (GUIDs), two datetime columns (8 bytes each), six int columns (4 bytes each)-- two of which are foreign keys to reference/look-up/enumeration tables-- and an int (4 bytes) column which is also the table's primary key and identity. To optimize this table, you absolutely must know the semantics of the solution. For example, if we don't care about recording the seconds of a time, then the two datetime columns should be made smalldatetime columns (4 bytes each). Also, how many possible values could there be in the non-foreign-key int columns? Under 32,767? If so, then these could easily be smallint columns (2 bytes each). Those two changes alone shave 16 bytes off every row (8 from the datetimes, 8 from the four ints), which works out to roughly 45MB across three million rows-- before even touching the keys.

What about the primary key? The architect or senior-level developer should have a fairly good estimate of how large a table will ever become. If this table is simply a list of categories, then what should we do? The common response is to convert it to a tinyint (1 byte). In reality, however, we shouldn't even care about the size of the primary key here. It's completely negligible; even with only 100 rows, switching it to a tinyint could cause all kinds of problems. The table would only be marginally smaller, and your O/R mappers are now using a Byte or Int16 instead of an Int32, which could potentially cause casting problems in your solution. However, if this table tracks transactions, then perhaps you need to make it a bigint (8 bytes). In that case, you need to put forth a strong effort to make sure that you have optimized this table down to its absolute raw core, as those bigint values can add up.
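
Here's a minimal sketch of the casting problem (a hypothetical entity; assume the key was "optimized" from int to smallint, changing the mapped property from Int32 to Int16):

using System;

public class Category
{
    public Int16 Id { get; set; }   // was: public Int32 Id { get; set; }
}

public class Example
{
    public static void Main( )
    {
        Category c = new Category( );
        Int32 requestedId = 100;
        // c.Id = requestedId;      // no longer compiles: no implicit Int32-to-Int16 conversion
        c.Id = (Int16)requestedId;  // casts start spreading through the code base
        Int32 readBack = c.Id;      // widening the other way is still implicit
    }
}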

Now, what about the foreign keys? If they are simply for reference, then the range of values probably isn't that wide. Perhaps there are only 5 different values against which the data is to be constrained. In this case, the column could probably be a tinyint (1 byte). Since a primary key and foreign key must be the same data type, the primary key must also become a tinyint (1 byte). This small change alone could cut your database size by a few hundred MB. It isn't just the foreign key table that drops in size; the references between the two tables are now smaller as well (I hope everyone now understands why you need to have a very good reason before you even think about using a bigint foreign key!) There's something else to notice here as well. Reference tables are very helpful for the developer looking at the raw data, but does there really need to be a constraint in the database? If the table simply contains an Id and Text column with only 8 possible values, then, while the table may be tremendously helpful for documentation purposes, you could potentially drop the foreign key constraint and put the constraint logic in your application. However, keep in mind that this is for millions or possibly billions of rows. If the referencing table contains only a few thousand rows, or if space doesn't have a high opportunity cost (which may be the case if the solution is important enough to actually have that many rows in the first place), then this optimization could cause more problems than it solves. First off, your O/R mapper wouldn't be able to detect the relation. Secondly, and obviously, you wouldn't have the database-level constraint for applications not using the solution's logic.

Another important optimization is performance optimization. Sometimes a table will be used in many joins and will be hit heavily by each of the CRUD (Create, Retrieve, Update, Delete) operations. Depending on how important the column is, you may be able to switch a varchar(10) to a char(10). The column will allocate more space, but your operations may be more efficient. Also, try to avoid using variable-length columns (varchar) as foreign keys. In fact, try to keep your keys as the smallest integer type you possibly can. This is both a space and a performance optimization. It's also important to think very carefully about how the database will be accessed. Perhaps certain columns need extra indexes and others need fewer. Yes, fewer. Indexes are great for speeding up read access, but they slow down insert operations. If you add too many indexes, your database inserts could slow your system to a crawl, and any index defragmentation could leave you with a painfully enormous transaction log or a non-functioning SQL Server.

This is exactly what happened to a company I worked with in 2005. Every Saturday night for several weeks in a row, the IT team would get an automated page from their monitoring service telling them that all their e-commerce web sites were down. Having received the phone call about 2AM, I looked into a few things and noticed that the transaction log had grown to over 80GB for the 60GB database. Being a scientist who refuses to fall into the post hoc ergo propter hoc fallacy, I needed measurements and evidence. The first thing I did was write a stored procedure that would do some basic analysis on the transaction log by pulling data from the fn_dblog( ) function, doing a simple cube, and saving the results into a table for later review. Then I told them that the next time the problem occurred they were to run the stored procedure and call me the next Monday (a polite way of telling them that I'm sleeping at 2AM on Saturdays). Exactly one week later the same thing happened and the IT department ran the stored procedure as instructed (and, yes, waited until Monday to call me, for which I am grateful). Looking over the stored analysis data, I noticed that there were a tremendous number of operations on various table indexes. That gave me the evidence I needed to look more closely at the indexes of each of the 5,000+ tables (yes, that's four digits-- now you know why I needed more information). After looking at the indexes, I realized that the database had been implemented by someone who didn't really understand the purpose of indexing and who probably had an itchy trigger finger on the Index Tuning Wizard. There were anywhere from 6 to 24 indexes on each table. This explained everything. When the weekly (Saturday at 2AM) SQL Server maintenance plan ran, each of the indexes was defragmented to clean up the work done by the high volume of sales that occurred during the week. Each index defragmentation operation was documented in the transaction log, filling the transaction log's 80GB hard drive, thereby functionally disabling SQL Server.

In your index design, be sure to also optimize your index fill factors. Too full and you will cause page splits and bring your system to a crawl. Too empty and you're wasting space. Do not let a DBA do this. Every bit of optimization requires a person deeply knowledgeable about the system to implement a complete database design. After the specifications have been written, the DBA can get involved to run routine maintenance. It is for this reason that DBAs exist. For more information on the internals of SQL Server, see the book Inside SQL Server 2005: The Storage Engine by Kalen Delaney (see also Inside SQL Server 2000). This is a book which should be close at hand for everyone who works with SQL Server. Buy it. Read it. Internalize it.

There's still more to database modeling. You also want to be sure to optimize for usability. Different O/R mapping solutions have different specific guidelines, but some guidelines are rather general. One such guideline is fairly well known: use singular table names. It's so incredibly annoying to see code like "Entries entry = new Entries( );"-- the grammar just doesn't agree. Furthermore, LINQ automatically inflects certain tables. For example, a table called "BlogEntry" will be related to the LINQ entity "BlogEntry" as well as "BlogEntries" in the LINQ data context. Also, be sure to keep in mind that your O/R mapper may have special properties that you'll want to work around. For example, if your O/R mapper creates an entity for every table, and in each created entity there is a special "Code" property for internal O/R mapper use, then you want to make sure to avoid having any columns named "Code". O/R mappers will often work around this for you, but "p.Code" can get really confusing. You should also consider using Latin-style database naming, where you prefix each column with its table name (so-named because Latin words are typically inflected with their sentence meaning, thereby allowing isolated subject/object identification). This is not only a world of help in straight SQL joins, but just think about how Intellisense works: alphabetically. If you prefix all your columns with the table name, then when you hit Ctrl-J you'll have all your mapped properties grouped together. Otherwise, you'll see the "Id" property, and it could be 10-15 internal O/R mapper properties before you find the next actual entity column. Doing this prefixing also usually alleviates conflicts with existing O/R mapper internal properties. This isn't quite as important for LINQ, but not having table-name-prefixed columns in LLBLGen Pro can lead to some headaches.

Clearly, there's more to think about in database design than the entity relation diagramming we learn in Databases 101. It should also be clear that your design will become more optimized the more specific it becomes. A data model designed for any database server probably won't be as efficient as a well-thought-out design for your specific database server and your specific solution requirements. Just keep in mind that your database model is your application's view of reality and is often the heart of the system, and it therefore should be optimized for space, speed, and usability. If you don't do this, you could be wasting many gigabytes of space (and therefore also hundreds of dollars of needless backups), have a painfully inefficient system, and have a hard time figuring out how to access the data in your application.

The Wandering Developer



This has been an interesting week.  I did an experiment to help prove something that deep down we all know anyway: YOU DON'T NEED TO BE AT THE OFFICE TO WORK.  Last weekend I drove to Chicago (from Kansas City) to fix a few problems caused by overworking the previous week, and while on the trip I started my 4-Hour Workweek ("4HWW") training.  The trip was only a Saturday, Sunday, and Monday trip, and I had to be back Tuesday for work.  However, on the way back the 4HWW training made me realize the obvious: I work remotely and I am remote.  DUH!  When I realized that, I immediately turned NORTH (home is south), away from Kansas City, heading towards Minneapolis.  I also called my client to tell him that I would call in remotely for the meeting, as there was absolutely no reason for me to physically be there.  While in Minneapolis I stayed with a relative and worked from an office in their house.  Since there was no boss, no client, and no coworkers to bother me, I was able to have PURE productivity, just as the 4HWW book said I would.

It never really made ANY sense to me why, living in the 21st century, we developers need to physically go to an office to have a boss fight our productivity at every turn.  People just work better when people aren't watching.  DUH!  Therefore, as of right now… I'm done working on site and am extending my consulting business ("Jampad Technology, Inc.") from coast to coast (possibly global soon).  I am no longer going to work at any particular location, but will work from a different city in the United States at various intervals for the next few years (until I get sick of that and change careers completely).  Since I don't own a house, don't have kids, am not married, my car is completely paid off, and I have the lowest rent in the world, I can do this without affecting anything.  Why didn't I do this sooner?  Well, I only did the 4HWW training last weekend.  Phenomenal training!  I'm sick and tired of living out Office Space every day of my life and, as it turns out, my Seminary work isn't going to do itself.  Last year I instituted my quarterly vacation policy (I take a 3-9 day vacation every 3 months) and the success of that naturally led to this next step.  It was either that or continue on the lame 100 Hour Work Week program that most people are on.  Forget that.  I'm sick of working in an office.  Period.

One thing I realized recently makes me feel stupid for not thinking of it sooner.  As a right-brained (as opposed to left-brained) developer, architect, minimalist, and purist, I always try to increase the level of abstraction in my life.  I'm always trying to make things more logically manageable instead of simply physically manageable.  The other day I handed my driver's license to a cashier at a grocery store and she responded, "Wow, you're a long way from home."  I immediately got to thinking what a strange thing that is to say.  First of all, whatever happened to the saying "home is where the heart is"?  Is this something people hang on their kitchen wall but don't ACTUALLY believe?  Is society so bad that people have bumper stickers and plaques of cute little sayings, but don't actually believe them? (obviously, yes)  Secondly, this person was making a statement about my physical, not logical, representation.  When I realized this, it dawned on me that much of the technology world (including myself) is living in a painful contradiction.  We are trying to make everything logically manageable (e.g. Active Directory, the Internet, web server farms), but we just can't seem to have a logical representation of the most important thing of all: people.  There's no reason for me to be in an office every single day, just like there's no reason my web server needs to be with me.  Furthermore, what's with those awesome science fiction scenes in movies where people are remotely (logically) present in meetings via 3D projection from all over the world?  We dream of this stuff, but I'm taking it now.

So, I'm now available to help on projects nationwide.  If you need .NET (C#), ASP.NET, JavaScript/AJAX, LLBLGen Pro/LINQ, Firefox, or XHTML/CSS development, porting, auditing, architecture, or training, all based on best practices, please drop me a line using my i-name account.  My rate varies from project to project, and for certain organizations my rate is substantially discounted.  Also, please note that I will never, ever work with the counterproductive technologies VB or Visual Source Safe (if you want me to set up web-based Subversion on your site, drop me a line!)