Friday, November 30, 2007

How Does Long-Tail Economics Apply to Software Features?

The term "Long-Tail" is traditionally used to describe a business model where the cumulative sales of a high number of different of low demand products or services are greater than the cumulative sales of the few best selling products.  It's been a buzz word lately to describe a whole host of businesses such as Amazon, Google, and Apple iTunes.  The key factor of success in this business model is that the business must keep the cost of inventory and distribution of these products and services low.

When it comes to software features, the opposite usually applies.  Software features aren't traditionally thought of as assets.  As the small features come in, usually per individual client request, they're added to the software.  The features pile in, and the once-pristine architecture becomes muddied by the one-offs and exceptions that are made in order to accommodate these smaller features.  Over time, these features become more and more difficult to maintain as those exceptions in the architecture cause bugs.  People forget that the feature exists, until the one client that uses it finds the bug you introduced while working on a different feature for another client.

In most cases, the only real metric we can gather from this "inventory" of infrequently used features is that they will collectively consist of more lines of code than the highly used features.  Because these features aren't usually thought of as inventory, the true cost of carrying them is never realized.

So what would happen if you were able to sift through your software's features and quantify each one by the number of clients it was sold to, and how much extra (if anything) they paid for it?  How do you quantify the fact that if that feature hadn't made it into the software, those clients may never have purchased the software at all?  Would this number exceed the sales of the frequently used features of the software?

There are companies out there that firmly believe that the Long-Tail concept for software features is a very bad idea.  For instance, the folks at 37Signals take a very minimalist approach to software development.  Only features that are truly in line with the primary function and focus of the product are considered for the software.

So can a Long-tail model work when it comes to a piece of software?  Can you really quantify how much the features in your long-tail cost?  If you decide to cut your long-tail features, can you quantify the possible sales you lose?

Is there a third option?  Can you create a bunch of smaller pieces of software that partition architectures well?  Is it possible to use a standard API that would not be influenced by the needs of the individual long-tail features?  Could you create your own mashup of different features that are built as separate pieces of software to make your solution look like one integrated piece of software targeted towards that client?

Seems like the third option is pretty viable, and that is the model Google and Yahoo are following.  They treat their individual features as individual products.  Individual items that exist in inventory.  Each of these individual pieces of software combines with the others to create a greater whole that fits the needs of each individual customer.  Each of these long-tail software components speaks the same language, but puts no pressure on the others' architecture, though they may share common components.  And because these pieces of inventory can be easily combined with other pieces of inventory, and are maintained individually, the cost of inventory and distribution shrinks, which is a pillar of the long-tail model.

It's an interesting concept that I'd like to explore more.  Anyone else out there have any thoughts or comments?

Firebug's HTML Feature

The next tab over on Firebug's interface is the HTML tab.  When you first click on the tab, you get a collapsed view of the HTML document and a window on the right that shows the CSS properties for the document's body:

image when you first arrive

You can expand and collapse the hierarchy and select different elements.  Highlighting an element will overlay it on the web page itself to show you where that element exists on the page.  Also, there's some breadcrumb navigation in the header of the window that shows your place in the element hierarchy.

In the CSS window on the right, you can see the CSS properties for the currently selected element, as well as any inherited styles from parent elements.

css view

If you switch to the Layout tab on the CSS window, you can see information about the dimensions, padding, border, margin and offset of the element.

layout view

Click on the DOM tab, and you can see information about the elements as objects.  You can inspect their values as they exist on the client.

dom view

Another really cool thing you can do here is edit the HTML and CSS right in the browser and see the changes in real time.  This could save a ton of development time, as often there's quite a bit of overhead in editing HTML in IDEs (especially when you're stuck in a spot where you've got some HTML going on in a code file, uggghhh).

To edit, you can click on the Edit button on the Firebug toolbar, and it switches the document, or the current element you're on, into a text view.  You can edit the HTML from there.

Screen shot of html editing 

You can also edit the CSS in the CSS window, which again is extremely useful for the development of a web page.

Thursday, November 29, 2007

Why is SQL Server 2005 SMO So Slow at Scripting Tables?

We currently use the SQL Server 2005 SMO objects for scripting up our database objects.  As part of our build process we have an executable that loops through the database objects and generates scripts for them.  An example of what we do is below:

// The SMO types used below live in the Microsoft.SqlServer.Management.Smo namespace.
using System;
using Microsoft.SqlServer.Management.Smo;
 
Server server = new Server("(local)");
 
Database db = server.Databases["AdventureWorks"];
 
Console.WriteLine("Beginning Scripting Process.");
Console.WriteLine();
 
ScriptingOptions options = new ScriptingOptions();
 
options.AllowSystemObjects = false;
options.AppendToFile = false;
options.ContinueScriptingOnError = false;
options.ConvertUserDefinedDataTypesToBaseType = false;
options.IncludeHeaders = false;
options.IncludeIfNotExists = false;
options.NoCollation = false;
options.Default = true;
options.ExtendedProperties = true;
options.TargetServerVersion = SqlServerVersion.Version90;
options.LoginSid = false;
options.Permissions = false;
options.Statistics = false;
options.Indexes = true;
options.Triggers = true;
options.ClusteredIndexes = true;
options.NonClusteredIndexes = true;
options.XmlIndexes = true;
options.Bindings = true;
options.WithDependencies = false;
options.DriAllConstraints = true;
options.DriAllKeys = true;
options.DriChecks = true;
options.DriClustered = true;
options.DriDefaults = true;
options.DriForeignKeys = true;
options.DriIncludeSystemNames = true;
options.DriIndexes = true;
options.DriNonClustered = true;
options.DriPrimaryKey = true;
options.DriUniqueKeys = true;
 
Console.WriteLine("Begin Scripting Tables");
foreach (Table table in db.Tables)
{
    if (!table.IsSystemObject)
        table.Script(options);
}
Console.WriteLine("End Scripting Tables");

If you've ever tried using the SQL Server 2005 objects to script a large database, you know it's extremely slow.  Our database has about 700 tables, which is a pretty good-sized database, probably on the high end, but not outrageous.  When using SMO to script each of these entities, it takes about 45 minutes.  Using the old SQL 2000 DMO objects, this took about 5 minutes.  So why the roughly ninefold increase?  Well, I fired up SQL Profiler to take a look at the results, and here's what I found.  Each of these items happens on each iteration of the tables (all 21 of these steps happen per table):

  1. Get schemas, rules and defaults for the database
  2. Get schema for table to be scripted
  3. Get columns with column definitions
  4. Get column names
  5. Get replication info
  6. Get column names
  7. Get foreign keys
  8. Get trigger information
  9. Get constraints
  10. Get indexes
  11. Get full-text change tracking
  12. Get language for table
  13. Get column compute information
  14. Get column names
  15. Get extended properties
  16. Get more extended property info
  17. Get even more extended property info
  18. Get some more extended property info
  19. Get more extended property info
  20. Get CLR integration information
  21. Get more constraint information

Keep in mind that each of these is a SQL query, some with complex joins, and each is run as a separate database call.  Each database call is preceded by a database context switch ("USE [DATABASENAME]").

This is sooooo chatty.  I really hope they do some work on this piece so that it performs.

Another thing I don't understand is that if you script through SQL Server Management Studio, it performs much better.  Why would SSMS use anything other than the SMO objects?  Is it using them differently, in a more performant way?  I've done quite a bit of searching, and have yet to find anything that would suggest a more efficient way to put this code together.

I think the only thing I can do for now is to switch my build process to do an incremental script, and only script the tables that have changed since the last release.  That stinks though, because it's more logic in my build scripts that I don't need.
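
One thing I may experiment with before going down the incremental road is SMO's batching knobs.  The sketch below is untested against our database, so treat it as a starting point rather than a fix: it asks the Server object to initialize Table properties in bulk via SetDefaultInitFields, and hands the whole table list to a single Scripter call with PrefetchObjects turned on, instead of calling Script() once per table.

// Untested sketch: batch the scripting instead of going table-by-table.
// Assumes the same server/db/options setup as the sample above.
server.SetDefaultInitFields(typeof(Table), true);    // pull back Table properties in bulk
 
Scripter scripter = new Scripter(server);
scripter.Options = options;                          // reuse the ScriptingOptions built above
scripter.PrefetchObjects = true;                     // prefetch scripting data for the batch
 
List<SqlSmoObject> tables = new List<SqlSmoObject>();
foreach (Table table in db.Tables)
{
    if (!table.IsSystemObject)
        tables.Add(table);
}
 
// One Script() call for the whole set of tables.
StringCollection script = scripter.Script(tables.ToArray());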

Ordering Adobe Photoshop for Download - Process Hasn't Changed Since 1998

My wife recently left her job and decided to give some full-time parenting and some part-time contracting a whirl.  She does art for the web, and uses Photoshop as her primary tool.  Well, our version of Photoshop was pretty old: version 6, which was released some time in 2001.

She recently got some work that required CS3, so we needed to upgrade.  She went to the Adobe site to see what the upgrade options were, and there weren't any!  None at all.  There is no upgrade option for any version of Adobe Photoshop earlier than version 7.  Pretty wild, considering you're shelling out $650 for a new version, as opposed to $160 for an upgrade.  Boo Adobe.  You've got a lot of loyal folks out there, and with this greedy policy, you're setting yourself up to fail.

So, I came to grips with the fact that we were going to have to part with a good portion of my MacBook Pro fund, and buy the full version.  My wife ordered it from the Adobe site for download, hoping she could download the nearly 800 MB package, install it, and start billing some hours to replenish my hardware fetish account.  Would you think that after giving Adobe $600 they'd give you the software right away?  I would.  But they don't.  You have to wait till they send you the email.  What's that all about?

So 4 hours go by, and she gets an email saying it's ready for download.  She goes to the site, and no download.  Kinda not ready.  The links were there for the download, but when you click them, there's no download.

So I called the support hotline, and got some guy in India named Lance.  Well Lance in India says they need 5 hours for processing of the order to make sure the software is available.  Available?  WTF?  Um, let me check to see if your copy of Photoshop is available, our engineers are feverishly copying the source and compiling a copy for you to enjoy...  Seriously?

On top of that, Lance said that since she ordered so close to the end of the day (2PM PST), that they would not be able to fulfill the order until the next morning.

This is crazy.  Why is this so difficult?  Why do you have to check inventory for a piece of software?  You don't have enough people to do a Ctrl-C Ctrl-V?  Lame Adobe. 

Watch out, there's bits out there that are getting close to what you do, and they're doing it for free.

Tuesday, November 27, 2007

What I Use For My ToDo List

I've been using Remember the Milk for about a year now, and I've really been very pleased with it.  In fact, pleased enough to give them $20 a year to use it.

They've really done a bunch of pretty hard-core development on it, all of it really Google-focused.  They've integrated Google Gears, they've got a Google Gadget, Google Maps integration for tasks with locations, and integration with Google Calendar.  They've got a great mobile site, and integration with IMified.  You can send emails to create new tasks, and there's also a Windows Mobile application that synchronizes with the site.

My initial reason for hopping on board was that it's one of very few todo lists (if not the only one I know of) that lets me share my todo list with other users.  This is most helpful with the honey-do list that my wife shares with me.  I can add items to it, view it any time I want, and print it off for the weekend so I can reluctantly happily do my chores.

My only beef with it is that the task management screen needs a bit of help.  It's obviously AJAXified, and has the usual stuff you've come to expect out of these interfaces.

However, it's got a couple of shortcomings.  If you switch window focus while you're editing a field (for example, the task name), the field saves.  This is a pain because I often have to translate due dates from my school calendar to the task list, so I'm switching windows a lot.  I'd love it if it didn't automatically save when the window loses focus.

The second issue is that the list has a really weird editing model.  As you mouse over items in the task list screen, it shows the details in a panel on the left.  Each item in the list has a checkbox next to it, and in order to edit an item, you have to check the box.

screen shot of task list

That seems a little weird to me.  You can check many items, which makes it a pretty arbitrary decision as to which one ends up being edited.  Seems like they could add an "edit" link for each of those items, or somehow disconnect the checkboxes, which are good for things like marking a bunch of items completed or changing the priority for many items, but not good for acting on individual items.

Anyway, I'm very pleased with this todo list app.  It's been sooo helpful in organizing school, work, and home.

What todo app do you use?

Using Profile in the Firebug Console

I kind of feel bad about my post last night regarding the Firebug console.  I covered very little of the feature and didn't do it much justice.  I'll try and add a little more in this post.

Another aspect of the Firebug console is the profiler.  It allows you to time the various functions that are called while the loaded page renders.  To use it, fire up Firebug and open the Console tab.  You'll see at the top that there is a "Profile" button.  If you click that, it will start up the profiler.  Just clicking that won't actually do all that much; it just tells Firebug to wait for something to happen in the current browser session.

After clicking that button, you should see the following:

profiler started

Then visit the page you want to profile.  When the page is done loading, click the Profile button again, and this will tally up the results and display them in the console like so:

profiler results

You can click on each one of these functions and it will take you right to the line in the client-side source.  If you minify your JavaScript or obfuscate it in any way, these results might be a little difficult to decipher, but you can obviously see the benefits here.

This is truly the kind of functionality you'd expect out of a pricey profiling tool for C# like ANTS.  Great stuff for profiling your client-side application code.

Great stuff.  I promise I'll post more about this tool as I get time.

Monday, November 26, 2007

Using the Firebug Console

Firebug is a plug-in for Firefox that provides a bunch of functionality to help web developers work with client-side technologies.  This has to be one of the best tools I've ever used for web development, and I'm hoping this is the first of a few posts about it.  There's so much to talk about that I'm just going to start going from left to right on the Firebug menu.

You can install Firebug from here.

After installing, you'll see an icon in the corner of Firefox:

 Status Bar

If you click on that icon, you can select "Enable Firebug for this web site".  After clicking that, you should by default end up at the Firebug console.  Here, you can do just about anything you can do in the Visual Studio .NET Immediate window, but with JavaScript instead.

For instance, I can type in the window "document.title":

 Firebug Console Window

There's all sorts of stuff you can do here: execute methods, print cookies, and print variable and DOM element values.

This is such a handy tool.  If you don't have it, get it, you need it.

Sunday, November 25, 2007

Getting ReSharper Working in VS 2008 RTM

I'm a very big ReSharper fan.  I've always been more than willing to throw more hardware at my machine to make sure I can get it to run well.  There are so many features that make me more productive that I think I'll have to post a top 10 or something like that in the future.

Anyway, after installing Visual Studio 2008 the add-in wasn't installed, and I've been having some withdrawal.  I ran across an article that mentioned there was an early RC of ReSharper 3.1 out there that would work.  I found it here.

You have to uninstall any previous versions of ReSharper prior to installing this pre-release version.

After installing the RC, I have ReSharper again.

Now, you won't see any support for the new C# 3.0 compiler features (lambda expressions, extension methods, etc.) until ReSharper 4.  The product manager of ReSharper goes into more detail here, and according to him it looks like JetBrains is shooting to drop that in January.

Deciding When a New API or Standard is Needed

This was an interesting quote I ran across on James Snell's blog.  Not sure if it is his or someone else's:

If someone tells you that a new API or new standard is needed, they typically fall into one of two categories:

  • They either know pretty much everything about a problem and the existing solutions that they can say with authority that a change is needed, or…
  • They have very little real understanding of the problem and know next to nothing about any existing solutions.

There really is no middle ground. Sure, there are lots of people (myself included) who will write up specifications for stuff primarily for the purpose of exploring various alternative solutions and seeking public input… that however, is fundamentally different than asserting that a new standard or protocol or extension or whatever is absolutely necessary to solve a problem.

I think this rings pretty true.  I'm pretty guilty of it myself.  There are a couple of APIs that I've (re)written where I didn't do due diligence on the problem before diving in.

The engineers and architects that I have respected and admired most throughout my career always fell into James' first bullet above (obviously), and they always challenged me when deciding whether something new was needed, or whether I just needed to get more familiar with what already existed.

Saturday, November 24, 2007

View From Where I'm Spending My Thanksgiving

Here are a few photos of the view from my parents' house that I stitched together with Windows Live Photo Gallery.

Panoramic View at Door County

Wednesday, November 21, 2007

Simple Data Access in Visual Studio 2008 Using LINQ to SQL

Often when I have to throw together a quick sample, it will involve some sort of data from the database.  Prior to C# 3.0, the quickest thing I could do was create the sample data structures, put together some SQL, and then use the Microsoft Data Access Application Block to do the data access from C#.

Now, I can do this much faster.  The first thing I'll do is create the table(s) in SQL Server.  Then I'll go to my Visual Studio 2008 project and add a new LINQ to SQL Classes item:

Add DBML File

After that, I'll open the Server Explorer window, expand my connection and my tables, and drag my tables onto the .dbml file's designer surface.

DBML Surface

And now, I can just start writing LINQ against the new data context class.  Below is an example of some LINQ that gets a collection of contacts out of the database.

ContactDataDataContext dataContext = new ContactDataDataContext();
var items = (from p in dataContext.GetTable<Contact>() select p);
Contact[] contacts = items.ToArray<Contact>();
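
If I need something more specific than the whole table, a where clause is about all it takes.  Below is a quick hypothetical example (it assumes the Contact table has FirstName and LastName columns):

ContactDataDataContext dataContext = new ContactDataDataContext();
 
// LINQ to SQL translates this into a parameterized SELECT with a WHERE and ORDER BY.
var smiths = from c in dataContext.GetTable<Contact>()
             where c.LastName == "Smith"
             orderby c.FirstName
             select c;
 
foreach (Contact contact in smiths)
{
    Console.WriteLine("{0} {1}", contact.FirstName, contact.LastName);
}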

Pretty great stuff.  I'm up and running with a SQL backend in a jiff.

How to Get CopySourceAsHTML to Work in Visual Studio 2008

The tool I use to get my source code to look pretty in my blog posts is a Visual Studio add-in called CopySourceAsHTML.  When I do the copy, I follow the instructions here.

Now that I want to start using Visual Studio 2008, I need to get this working.

To do so, go to your C:\Users\<User>\Documents\Visual Studio 2005\Addins directory and copy all the files that start with CopySourceAsHtml (there should be four: a pdb, a dll, a config file, and an add-in manifest) to the C:\Users\<User>\Documents\Visual Studio 2008\Addins directory.

Then, in the Visual Studio 2008 Addins directory, open the CopySourceAsHtml.AddIn file, and change the HostApplication Version nodes to the 9.0 version.  See below.

<HostApplication>
    <Name>Microsoft Visual Studio Macros</Name>
    <Version>9.0</Version>
</HostApplication>
<HostApplication>
    <Name>Microsoft Visual Studio</Name>
    <Version>9.0</Version>
</HostApplication>

Restart Visual Studio 2008 and you should be all set.

Tuesday, November 20, 2007

Simple GET REST Service With WCF

The REST vs. WS-* debate has been pretty active lately.  I grew up in the SOAP school, and haven't spent much time in the REST world, so I thought I'd put together an example to learn a little more about putting together a REST service with the tools I use every day.

The concept behind REST is pretty much what-you-see-is-what-you-get: URLs are used to identify an entity, and HTTP request methods define what you do with that entity.  GET retrieves an entity (or a list of URLs for a collection of entities), POST adds a new entity, PUT updates an entity, and DELETE removes an entity.

The sample I've provided is very simple: it assumes the GET method is used, and does nothing with the URL to determine the exact data item to retrieve.

The contract for the service is pretty simple and generic.  You can use it for every endpoint.

using System.ServiceModel.Channels;
using System.ServiceModel;
 
namespace RESTService
{
    [ServiceContract]
    public interface IRESTService
    {
        [OperationContract(Action = "*", ReplyAction = "*")]
        Message ProcessMessage(Message input);
    }
}

The service implementation for my contacts service looks like the code below.  This is where I would need to put more logic to handle different request methods, and some URL parsing to discover exactly which contact the request is for (e.g., http://localhost:50344/contact.svc/contacts/1234/).

using System.ServiceModel.Channels;
using System.Net;
 
namespace RESTService
{
    public class ContactService : IRESTService
    {
        public Message ProcessMessage(Message request)
        {
            Contact myContact = new Contact();
            myContact.FirstName = "Jim";
            myContact.LastName = "Fiorato";
 
            HttpResponseMessageProperty responseProperties = new HttpResponseMessageProperty();
 
            responseProperties.StatusCode = HttpStatusCode.OK;
            Message response = Message.CreateMessage(request.Version, request.Headers.Action, myContact);
 
            response.Properties[HttpResponseMessageProperty.Name] = responseProperties;
 
            return response;
        }
    }
}
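
As a rough sketch of what that extra logic might look like, the HTTP method for the request is available through the HttpRequestMessageProperty stuffed into the message's Properties collection, and the requested URL comes through the To header.  Something along these lines (untested, and the URL parsing is deliberately naive) could sit at the top of ProcessMessage; the variable names are just illustrative:

// The HTTP method (GET, POST, PUT, DELETE) rides along in the message properties.
HttpRequestMessageProperty requestProperties =
    (HttpRequestMessageProperty)request.Properties[HttpRequestMessageProperty.Name];
 
// The full request URL, e.g. http://localhost:50344/contact.svc/contacts/1234/
Uri requestUri = request.Headers.To;
 
if (requestProperties.Method == "GET")
{
    // Naive parse: treat the last URL segment as the contact id.
    string[] segments = requestUri.AbsolutePath.TrimEnd('/').Split('/');
    string contactId = segments[segments.Length - 1];
 
    // ...look the contact up by contactId here instead of hard-coding it...
}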

My small Contact data structure is pretty simple:

using System.Runtime.Serialization;
 
[DataContract(Namespace = "http://tempuri.org/Contact")]
public class Contact
{
    [DataMember]
    public string FirstName;
 
    [DataMember]
    public string LastName;
}

And this portion of my web.config is of note as well:

<system.serviceModel>
    <bindings>
        <customBinding>
            <binding name="RESTBinding">
                <textMessageEncoding messageVersion="None" />
                <httpTransport />
            </binding>
        </customBinding>
    </bindings>
    <services>
        <service name="RESTService.ContactService">
            <host>
                <baseAddresses>
                    <add baseAddress="http://localhost:50344/" />
                </baseAddresses>
            </host>
            <endpoint address="contacts" binding="customBinding" bindingConfiguration="RESTBinding"
                contract="RESTService.IRESTService" />
        </service>
    </services>
</system.serviceModel>

So, when I visit my site with the URL http://localhost:50344/contact.svc/contacts, I'll get the following page.

Screen shot of REST service

Obviously, this is quite a bit more straightforward than the SOAP/WS-* alternative.  If I were to create this service with the SOAP tools in WCF, I'd get a page with a list of operations, and if I clicked on one of those, I'd get a test form (if I was on the local server) for providing some post data to test my service.

I think it's thoughtful that WCF provides the support to do REST-style services like this.  It's a bit more plumbing to write on my end.  As you can see from the service implementation, I've got to handle the request on my own.  But that request parsing facility should be fairly easy to abstract: write it once and use it often.

I'm attracted to this format of service delivery.  But before I hop on board, I need to do more investigation into how a client would develop against this service with the tools we've become accustomed to in the world of WSDL.  I need to think about not only the straightforward service references, but also the automatic JSON proxy we get from ASP.NET AJAX.

What's So Great About the Amazon Kindle (and it has nothing to do with books)

IMO, what's so great about the Amazon Kindle is the concept that they've integrated EVDO service into the device cost.  This completely blows my mind.  A wireless device, on a 3G cell network, and you've got no monthly or yearly service subscription.

Now, it's not like the Kindle has a browser or an email client (although it looks like you can somehow email documents to yourself, but I imagine you email them to the Amazon Kindle site and download from there).  I'm leaving the blogs out, because you DO have to pay monthly ($.99 per month, if you're freaking crazy) to subscribe.  But there are a handful of other devices that already have restricted internet-capable applications, and could really, truly benefit from the wireless service.

Are we going to see the next generation of iPods or Zunes with 3G wireless capability and the cost of wireless service baked in?  How about the next Sony PSP, Nintendo DS?

I think it will be interesting to see how this pans out.  If it succeeds, we may start to see more and more devices with built-in wireless service, and less and less restrictions on what you can do with it.

Monday, November 19, 2007

Refactoring MSBuild Scripts - Extracting Common Tasks

I've spent a bit of time working with MSBuild lately, so I've been recognizing some smells in the scripts I've been writing.  One stinky thing I found myself doing was repeating the same task over and over again.  So, in this post I thought I would highlight something that's fairly synonymous with "Extract Method" in the world of C# refactoring.

For this release, the QA team needed a bunch of deployments per QA build.  Each needed a different configuration, but our application always needs the same things regardless of configuration: a database, a bunch of files, and an IIS site.  So basically what I ended up with was a bunch of common build artifacts that only differed in things like file paths, database names, and backup locations.

What I want to focus on here, though, are the small, one- to five-line tasks that get called over and over.  Since they are such succinct tasks, I didn't feel they warranted having their own MSBuild file; however, they were big enough to be extracted into their own targets.

Below is a very brief example:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Default" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" >
    <Target Name="Default">
        <CallTarget Targets="DeployNumber1;DeployNumber2" />
    </Target>
 
    <Target Name="DeployNumber1">
        <MSBuild Projects="buildDatabase.proj" Properties="DBName=Database1" />
        <Exec Command="sqlcmd.exe -S (local) -d Database1 -Q &quot;CREATE USER [UserForInstance1] FOR LOGIN [UserForInstance1]&quot; -E -b -I" />
        <Exec Command="sqlcmd.exe -S (local) -d Database1 -Q &quot;EXEC sp_addrolemember N'db_owner', N'UserForInstance1'&quot; -E -b -I" />
        <Message Text="Deploy 1 Done" />
    </Target>
 
    <Target Name="DeployNumber2">
        <MSBuild Projects="buildDatabase.proj" Properties="DBName=Database2" />
        <Exec Command="sqlcmd.exe -S (local) -d Database2 -Q &quot;CREATE USER [UserForInstance2] FOR LOGIN [UserForInstance2]&quot; -E -b -I" />
        <Exec Command="sqlcmd.exe -S (local) -d Database2 -Q &quot;EXEC sp_addrolemember N'db_owner', N'UserForInstance2'&quot; -E -b -I" />
        <Message Text="Deploy 2 Done" />
    </Target>
 
</Project>

What this project does is two deployments of the same database, with different names and different users.  The two targets above aren't all that large, and are fairly maintainable in this context, but add a few hundred more lines of targets and tasks, and it just adds to the alphabet soup.  No matter how you slice it though, it's still a cut-and-paste job, and when I'm blogging, I try to avoid that.  :)

So, what I want to do here is extract the MSBuild and Exec tasks out of the DeployNumber1 and DeployNumber2 targets, and move them to their own target.  Now, there's a trick to doing this, because MSBuild inherently keeps track of when a target has been run, and by default will only allow you to run a single target once within the same project.  You can use the Inputs and Outputs attributes on the Target element in some cases to force a target to run again, but that doesn't necessarily apply here.

The best way I have found around this is to use the MSBuild task and have the project call itself to execute the target, passing in the database name and user name needed to run the database build steps.

The refactored project is shown below:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Default" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" >
    <Target Name="Default">
        <CallTarget Targets="DeployNumber1;DeployNumber2" />
    </Target>
 
    <Target Name="DeployNumber1">
        <MSBuild Projects="$(MSBuildProjectFile)" Properties="DBName=Database1;DBUser=UserForInstance1;"
                Targets="DeployDatabase" />
        <Message Text="Deploy 1 Done" />
    </Target>
 
    <Target Name="DeployNumber2">
        <MSBuild Projects="$(MSBuildProjectFile)" Properties="DBName=Database2;DBUser=UserForInstance2;"
                Targets="DeployDatabase" />
        <Message Text="Deploy 2 Done" />
    </Target>
 
    <Target Name="DeployDatabase">
        <MSBuild Projects="buildDatabase.proj" Properties="DBName=$(DBName)" />
        <Exec Command="sqlcmd.exe -S (local) -d $(DBName) -Q &quot;CREATE USER [$(DBUser)] FOR LOGIN [$(DBUser)]&quot; -E -b -I" />
        <Exec Command="sqlcmd.exe -S (local) -d $(DBName) -Q &quot;EXEC sp_addrolemember N'db_owner', N'$(DBUser)'&quot; -E -b -I" />
    </Target>
 
</Project>

Saturday, November 17, 2007

Event Validation in ASP.NET

Recently at work, we ran into an issue where we needed to turn Page.EnableEventValidation on in our application.  Yes, would you believe it, we had to turn it ON.  Why did we have it off, you ask?  Because when we migrated our app from 1.1 to 2.0, we didn't have the time to diagnose and fix the issues caused by leaving the event validation on.

Not only did we not have the time to find out what was causing it, we didn't have the time to research what event validation is and why it's there.  Well, two years later, I've made time to figure out what it is.

The purpose of event validation is to ensure that the posted value(s) of select lists are values that came from the application, and aren't something that has been injected into the page using malicious client script.

The example below shows how to test how event validation behaves.  I add a drop-down list to the page, then, through JavaScript, I dynamically add a value to the drop-down list.

<%@ Page Language="C#" %>
 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
 
<script runat="server">
 
</script>
 
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
        <div>
            <asp:DropDownList ID="myDropDownList" runat="Server" AutoPostBack="true">
                <asp:ListItem />
            </asp:DropDownList>
        </div>
    </form>
 
    <script language="javascript" type="text/javascript">
        var dropDownList = document.getElementById('<%= myDropDownList.ClientID %>');
        dropDownList.options[1] = new Option('Text 1', 'Value1');
    </script>
 
</body>
</html>

When I choose the item from the drop down list that was added dynamically, I get the following error:

Screen Shot of Error

Under the covers, what happens is that ASP.NET makes a hash of all of the values that exist for the drop down list while the page is still on the server, and then stores that hash in a hidden input on the page.

<input type="hidden" name="__EVENTVALIDATION"
            id="__EVENTVALIDATION"
            value="/wEWAwKp1OaKAQLPw96oCwLPw96oC/Pix6BjqjWDWDMfBB7u0UhhYz8u" />

When the page posts back, it verifies the posted values against what exists in that hash.  If it can't find a match, and you have event validation turned on, you'll get the exception.

Now, the example above is a very reasonable thing to do.  However, if you want to do it, you've only got two options when it comes to event validation.

The first option is to turn off event validation for the page.  Not the best plan, because now it's up to you to make sure that your values are validated.

The second option is to tell the server all of the values you could potentially be adding dynamically.  To tell the server the possible values, you use the ClientScript.RegisterForEventValidation method.  Again, not a perfect option, as sometimes you'll be able to identify these values, but sometimes you won't be able to anticipate all of them (an AJAX'ified drop-down list, for instance).

Below is the same page as above, but it uses the ClientScript.RegisterForEventValidation method in the overridden Render method (this method can only be called during the render phase of the page lifecycle).

<%@ Page Language="C#" %>
 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
 
<script runat="server">
    protected override void Render(HtmlTextWriter writer)
    {
        ClientScript.RegisterForEventValidation(myDropDownList.UniqueID, "Value1");
        base.Render(writer);
    }
</script>
 
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
        <div>
            <asp:DropDownList ID="myDropDownList" runat="Server" AutoPostBack="true">
                <asp:ListItem />
            </asp:DropDownList>
        </div>
    </form>
 
    <script language="javascript" type="text/javascript">
        var dropDownList = document.getElementById('<%= myDropDownList.ClientID %>');
        dropDownList.options[1] = new Option('Text 1', 'Value1');
    </script>
 
</body>
</html>

Windows Vista Reliability and Performance Monitor Helped Me Get to Sleep

So, I've been having some issues lately with getting my laptop to sleep correctly.  It's really been a pain.  Shut the lid, then the hard drive would spin for a while, a blue screen, then a restart, then a message asking me to check for a solution that didn't exist.  So basically, I'd close the lid when I was done with it at work, and by the time I got to the train to go home, I'd have half a battery, and half a train ride home that was very boring.

I'd been hearing about the Vista Reliability and Performance Monitor, but through some very cursory poking around, I couldn't find it and thought it might be a Vista Ultimate-only thing (I have Vista Business).  But if I had been Vista savvy (or even a half-wit), I'd have used the search.

Anyway, this thing basically gives you a running reliability score over time, based upon application failures, hardware failures, Windows failures, and other miscellaneous failures.  Along with the score, you can see what you installed on each day.

So, all I had to do was look at the reliability monitor and find out when my reliability started to tank.  Then I could cross-reference that with the applications I messed with at the time and home in on a potential issue.

So, to get to this thing, click on the circle (the Start orb) and search for "Reliability".  You'll get the Reliability and Performance Monitor.  Click on Reliability in the tree menu on the left, and you should see the following.

Reliability Monitor

You can see how bad my issue was.  The OS was blue-screening multiple times almost daily.  So, I just started scrolling this thing backwards, looking for the reliability score to go back up.  I found the period when things started to go wrong, and started to look for anything I might have installed.  Here's what I found:

Reliability Monitor

As you can see, I had a lot of fun with the Cisco VPN client that day.  I've been struggling with that one for a while, and I kind of suspected it had something to do with the sleep issue.  It did, but very indirectly.  The real culprit was that I had installed Microsoft Virtual PC because I gave up on getting the Cisco VPN client running on my computer.  So, last night, I uninstalled Virtual PC, and voila!  All is well.  I can finally sleep again.

Thursday, November 15, 2007

Using the Yield Keyword

So, this has to be one of the more handy and obscure features of C# in .NET 2.0.

The yield keyword can only be used inside an iterator block, and what it allows you to do is return a sequence of items from a method without explicitly building up a collection to hold them.  The example I put together below is pretty simple.  All it does is get a bunch of rows from a database, load them into objects, and use the yield keyword to return them as the collection that comes back from the method.

Here's the method utilizing the yield keyword.  Notice there's no instantiation of a collection to put the objects in; the yield keyword tells the compiler to generate that plumbing for you.

// Database and DatabaseFactory come from the Enterprise Library Data Access Application Block.
using System.Collections.Generic;
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;
 
public static IEnumerable<MyObject> GetAllMyObjects()
{
    Database db = DatabaseFactory.CreateDatabase("default");
    using (IDataReader reader = db.ExecuteReader("GetAllMyObjects", new object[0]))
    {
        while (reader.Read())
        {
            yield return new MyObject(reader.GetInt32(0), reader.GetString(1));
        }
    }
}

And here's the calling method.

public static void Main(string[] args)
{
    foreach (MyObject myObject in GetAllMyObjects())
    {
        Console.WriteLine(myObject.Name);
    }
    Console.ReadLine();
}

Pretty cool stuff; it cuts out a few lines of code.  Now, what the compiler does with this is equally interesting.  If you open the resulting binary up with Reflector, you'll find that the compiler actually turns the "GetAllMyObjects" method into a class that implements IEnumerable<MyObject>.  In that class, all the DB work is moved into the generated MoveNext() method.  Here's what it looks like:

private bool MoveNext()
{
    try
    {
        switch (this.<>1__state)
        {
            case 0:
                this.<>1__state = -1;
                this.<db>5__1 = DatabaseFactory.CreateDatabase("default");
                this.<reader>5__2 = this.<db>5__1.ExecuteReader("GetAllMyObjects", new object[0]);
                this.<>1__state = 1;
                while (this.<reader>5__2.Read())
                {
                    this.<>2__current = new MyObject(this.<reader>5__2.GetInt32(0), this.<reader>5__2.GetString(1));
                    this.<>1__state = 2;
                    return true;
                Label_0091:
                    this.<>1__state = 1;
                }
                this.<>1__state = -1;
                if (this.<reader>5__2 != null)
                {
                    this.<reader>5__2.Dispose();
                }
                break;
 
            case 2:
                goto Label_0091;
        }
        return false;
    }
    fault
    {
        ((IDisposable) this).Dispose();
    }
}

Smart folks over there inside the C# team.
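
The state machine above also explains a behavior that's easy to miss: nothing in the original method body runs until something actually calls MoveNext() (in other words, until you start iterating), and it only runs as far as the caller pulls it.  Here's a contrived sketch with no database involved (the names are mine, not from the sample above):

public static IEnumerable<int> Numbers()
{
    Console.WriteLine("Iterator started");
    for (int i = 1; i <= 1000; i++)
    {
        yield return i;
    }
}
 
public static void DeferredExecutionDemo()
{
    IEnumerable<int> numbers = Numbers();   // nothing runs yet, nothing is printed
 
    foreach (int number in numbers)         // "Iterator started" prints here
    {
        Console.WriteLine(number);
        if (number == 3)
            break;                          // only the first three values are ever produced
    }
}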

Wednesday, November 14, 2007

It's Hard Out There For a PMP

Too often, I'll get a question from a business analyst or project manager along the lines of "How much effort would it be to add this column to this table?"  It drives me nuts on two levels.

  1. I have enough control issues as is.  I don't care much for letting some developers touch my database, much less a project manager doing data modeling for me.
  2. What's the freaking requirement?  What's the client want?

So often as engineers, we hear solutions from the functional folks and rarely ever hear the business problem.  When asked this question, I'll usually ignore it completely and try to work backwards.  I'll respond with "What are you trying to do?", and the response will be something like, "I want to search for this and I was thinking we can put it in this column."  And as you'd expect, we'll do this for a couple of rounds until we get to the requirement.  Sometimes it's like pulling teeth.

I understand it's hard for PMs and BAs with a technical background.  I'd have a hard time with it too.  What I worry about is that not every engineer will challenge the "How hard is it to add this column?" question.  What you'll almost always find is that once the requirement is defined, the solution ends up looking nothing like what the project manager had in mind.

Thursday, November 08, 2007

Parallel Build Tasks in MSBuild 2008

One of the things I've been struggling with for some time is that my build process has a rather long-running step (thanks to the chatty SQL Server 2005 SMO), and all the other steps wait for it to complete. The long-running step is fairly inconsequential to the steps that follow, and I've always wanted that step of the build to run in parallel with the rest of the tasks.

With the upcoming release of Visual Studio 2008, this will be built in, which is very handy. The MSBuild task now has a new boolean attribute called BuildInParallel. As far as I can tell, the MSBuild task is the only task that has this new attribute. Using that in conjunction with the new /m:<Number of CPUs> MSBuild command line argument, you can get the projects provided to the MSBuild task to build in parallel, based on the number of CPUs you specify.

Below are three MSBuild project files that show a simple example of this feature.

This is the long running project, which sleeps for 15 seconds:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
    <Target Name="Test">
        <Message Text="Long Running - Start" />
        <Exec Command="Sleep.exe 15" />
        <Message Text="Long Running - End" />
    </Target>
</Project>

This is the project I want to run in parallel to the long running project, which sleeps for 5 seconds:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
    <Target Name="Test">
        <Message Text="Fast - Start" />
        <Exec Command="Sleep.exe 5" />
        <Message Text="Fast - End" />
    </Target>
</Project>

And finally, this is the root project, the one I provide to MSBuild at the command line (note the BuildInParallel attribute on the MSBuild task). Any tasks that occur after the parallel MSBuild task will not start until all the projects in that MSBuild task have completed.

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
    <Target Name="Test">
        <Message Text="Begin Build" />
        <MSBuild Projects="longrunning.proj;fast.proj" BuildInParallel="true" />
        <Message Text="End Build" />
    </Target>
</Project>
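
For reference, assuming the root project above is saved as root.proj (the file name is just for illustration), the two runs below amount to something like the following, with the serial run simply omitting the /m switch:

msbuild.exe root.proj
msbuild.exe root.proj /m:2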

This is the result when I run the build without passing the /m:<Number of CPUs> argument to MSBuild (click the image to see full size):

serial output

This is the result when I run the build telling MSBuild to use 2 CPUs (click the image to see full size).

parallel output

Nice, reduced my build by 5 seconds.

One thing that I find odd is that if you look closely, the message from the root build project, "End Build", occurs before the long-running task's end message, "Long Running - End". I've done some checking to make sure that subsequent tasks wait for both of the projects in that MSBuild task to complete, and they do. There must be something about the ordering of that message output that's messed up.

Gotta build a rig like Hanselman and get my build running on 4 CPUs. That would be lovely.