Wednesday, February 27, 2008

Domain Specific Languages

Domain specific languages are a hot topic lately on the ALT.NET mailing list.  DSLs aren't necessarily something all of us think about every day, but they are definitely something we use on a daily basis.  If you're working on a software project, it's likely you've been working with at least one domain specific language.

So, what is a Domain Specific Language?  A DSL is a language that you work with, or that you build, that is tailored to solve a specific domain problem.  Some great examples that you probably use every day are Google's query syntax and MSBuild.  If you think about how you use these tools, you can see how each has developed a language that solves a specific domain problem.

You can usually classify domain specific languages into two categories: internal DSLs and external DSLs.

Internal DSLs are languages that you create as a tool to help you develop your end product.  These languages are usually tied pretty closely to the underlying base language, resulting in an intermingling of base language constructs with your DSL.  Have you created objects with conditionals and predicates for searching your entities?  If so, that's usually an example of an internal DSL.
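
To make that concrete, here's a minimal sketch of what an internal DSL can look like in C#.  The OrderQuery type and its chained methods are names I've made up purely for illustration; they aren't from any library:

using System;
using System.Collections.Generic;

// A tiny fluent "query" DSL embedded in C#. The chained methods read almost
// like a sentence, but they are still ordinary C# calls, so base language
// constructs and the DSL end up intermingled.
public class OrderQuery
{
    private readonly List<string> criteria = new List<string>();

    public OrderQuery Where(string field)          { criteria.Add(field); return this; }
    public OrderQuery IsGreaterThan(decimal value) { criteria.Add("> " + value); return this; }
    public OrderQuery And(string field)            { criteria.Add("AND " + field); return this; }
    public OrderQuery IsBefore(DateTime date)      { criteria.Add("< " + date.ToShortDateString()); return this; }

    public override string ToString() { return String.Join(" ", criteria.ToArray()); }
}

// Usage:
// var overdueOrders = new OrderQuery()
//     .Where("Total").IsGreaterThan(500m)
//     .And("DueDate").IsBefore(DateTime.Today);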

The second kind, the external DSL, is a language that you provide to your users so they can do their work.  You'll usually find these in rich reporting tools, and in business rule engines that allow end users to write their own business rules.  These languages usually hide the underlying language completely, and provide extremely expressive and terse constructs that give the user a spoken-language-like environment for doing work.

So what's the difference between a DSL and an API or a framework?  I think the semantics are important.  There's a big difference between a facility and a dialect, and between a framework and a language.  When you recognize that what you are building is a language, you begin to design things differently.  As an example, your perspective may shift from providing factories and service layers to using expression builders.

Why do you care about recognizing when you're using a DSL?  This distinction is important because the perspective you use during design changes drastically once you realize you're writing a language and not an API.  Once you're designing from the right perspective, you'll find that you're providing a language that gives you and/or your user the ability to do extremely interesting things with your software.

What makes them difficult to use and implement?  There's usually more effort in designing a language than in designing a framework or an API.  Where you can make technical decisions and assumptions in an API, you need to ensure fluidity and expressiveness in a DSL.  And with that expressiveness come an infinite number of combinations of conditions, and an exponential number of test cases.

References

Domain-specific programming language (Wikipedia)
DomainSpecificLanguage (Martin Fowler's bliki)

Tuesday, February 26, 2008

Great People Don't Just Leave, You Let Them Go

I've been talking to a lot of people about this lately, and I feel like I'm the only one who thinks this way.  I strongly feel that you can retain people for an indefinite amount of time, provided you give them a fantastic place to work.  I don't believe in the argument "They need to move on in their career" or "It was their first job".  In my opinion, those statements are a cop-out for the mistake of not communicating well with your team, and a failure to be sensitive to their needs.

Building a rich relationship with your team members, so that communication stays open, is imperative to retention.  And I'm not talking about penciling in a lunch with team members and mandating that they "be open with me".  It takes more than telling them to be open with you to achieve openness.  Earn their respect and trust by actively listening, and then by taking the time to truly think about what they do and why they do it.  If you don't feel like you have a great feel for that, think about whether any other team members have a great relationship with the person, and talk to them to find out more.

You've got to be sensitive to what makes them happy, what pisses them off, and what conditions are ideal (or not ideal) for them to work in.  You've got to recognize the difference between the productive and the destructive items on their plate, and remove the destructive tasks immediately.  If you can't remove one right away, take a second to think about whether losing this great person is worth the benefit of that destructive task.  If you've made that argument, and it's still something you absolutely can't remove right away, put a plan together, and communicate that plan and its progress regularly, to show your all-star that you're committed to making sure they enjoy what they are doing.

Take the time right now to think about each of your great team members (hopefully this is everyone on your team).  Think about what their current areas of responsibility are.  Are any of those areas destructive?  Now think about their disposition.  Are they showing signs of being disengaged?

If the answer to either of those questions is yes, then DO SOMETHING ABOUT IT RIGHT NOW.  Send them an email saying you know the destructive task on their plate is a problem, and that you're focused on doing something about it.  Call them and tell them that you recognize their environment isn't ideal right now, and that you're going to do something about it first thing in the morning.  If they aren't engaged, meet with them first thing and ask them why, and keep probing until you get to the root of the problem.

As I read this back, it all sounds so simple, and I'm sure we've all heard it before, but it always gets lost in the shuffle.  Don't let it get lost: great people are so hard to find that you can't afford to let them go.  I strongly believe that if you're attentive, and you take the time to realize there isn't a single more important thing you do every day than retaining your great people, then you'll never let a great person go.

Friday, February 15, 2008

How Do You Tell QA That They Are Wasting Their Time?

Recently I've been struggling with inefficiencies in our process, specifically at the handoff between development and quality assurance.  As our product grows, so does the demand the product puts on quality assurance.  And as those demands increase, the QA team demands more of the development team to make sure everything is communicated.  What developers could just change before now has to be documented and communicated so that QA can get their coverage.

What's frustrating to me is that all of this documentation and communication brings serious inefficiency and waste.  Maybe it's just an issue with our process, or our state of mind, or our people.  I'm just hoping there's someone out there with some thoughts or insight.  I'll give an example.

For this latest release, we decided to stay on top of things and upgrade to VS.NET 2008 and the 3.5 Framework.  So, the framework upgrade needed to be communicated to QA.  We needed to give them a spec on the work (which resulted in this post, trying to convince them that testing this wasn't all that important), and they had to dedicate resources to go through that document and deliver a test plan.

So, after the QA person went through the document, they started sending me emails about how to test this.  First came the question, "How would the 3.5 framework upgrade be tested?"  The answer to that is that it may or may not throw an error; we may not even use any 3.5-only framework features at this point.  That was followed by, "Let's just say, for the sake of argument, that the framework was installed incorrectly.  Would there be a way for end users to notice it?"  This is where it starts to get frustrating for me.  So much waste.  I want to scream, "Just come over to my freaking desk and I'll show you the freaking framework in Add/Remove Programs!"

What I'm really conflicted about is the fact that telling the QA team "don't worry about testing that" is like speaking Japanese to them.  They just don't understand those words.  And that's fine.  I think that's good in a way, but it's wasting my time.

Does anyone out there have any advice on how best to communicate this?  How do you convince a QA team that isn't super-technical that a technical feature, improvement, or fix is safe and doesn't need to be tested?  Is there anything more I can do, or should I just grin and bear it?  I'd love your words of wisdom.  Thanks in advance.

Thursday, February 14, 2008

Coming to Grips With the Fact That Not Everyone is an Intelligent Teenage Girl

I was listening to This American Life the other day, and Ira Glass asked a question that I thought was extremely insightful. 

The story was about a female television writer who had the idea of creating a Jeopardy-like quiz show with teenage girls as the contestants.  In her vision of the show, the girls would intelligently answer challenging questions, influencing other teenage girls at a time in their lives when the presentation of intelligence isn't the highest priority.  As it turned out, the girls didn't answer the questions intelligently.  What the writer thought would happen didn't happen at all, and in order to save the show, questions like "What is the capital of Utah?" turned into "Who has the best ass, Luke Perry or Brad Pitt?"

After the woman told the story, Ira Glass asked the question:

"Do you think this was one of those situations where you make the incorrect assumption that everyone thinks the way that you do?"

That struck a chord with me.  I've been troubled by this lately, in that I feel as if I'm just projecting the way I think onto others.  I feel like I'm being judgmental about how folks who do what I do spend their time.

For instance, the other day, I received a company-wide email from the VP of the company, asking people to write some blog posts and to send any ideas his way.  I, of course, had a few, and was chomping at the bit to start writing.  When I asked the VP what kind of response he got from our 100-person company, it turns out mine was the only response.

One out of 100 was willing, even eager, to contribute and get their name out on a site that gets a pretty good amount of traffic.  I couldn't figure out why anyone wouldn't want to do it.  It was puzzling to me.

I need to be better at understanding that people have different interests than I do, and different perspectives on how best to spend their time and on what kinds of things are valuable to them.

I'm just going to keep telling myself that not everyone has to be an intelligent teenage girl.

Wednesday, February 13, 2008

Thoughts on Computer Science Undergrad Curriculum from a Computer Science Student

I lucked out in 1999 when I scored a job at a technology startup doing HTML.  It was a complete career switch for me.  I had been teaching restaurant managers how to manage and lead people, handle inventory, recruit new team members, and deliver a quality product and service.  Then my best friend called me at the perfect time and told me to teach myself HTML and apply for a job.  So I grabbed Sams Teach Yourself HTML in 24 Hours and did just that: applied, and started formatting content for static web sites.

Since then, I've learned a bit more and have fallen head over heels for the art of computer programming, system design, and the software development lifecycle.  However, I still felt like I was missing something that everyone else who had a computer science degree had.  I didn't know anything about bits and bytes.  I didn't know what assembly language looked like.  I'd never de-allocated a pointer, written a linked list, or done my own garbage collection. 

So, a few years ago, I enrolled at an institution known for its computer science program, and got started on my Bachelor's in Computer Science.

So, I think I have an interesting perspective on the whole "why can't programmers program" discussion that started with Joel Spolsky's blog post a few months ago about the state of the undergraduate curriculum and its need for restructuring.

I'm not sure restructuring is where I'd start.  I think the same rules apply here as when proposing a new standard.  You'd better know everything about the problem before suggesting a brand new solution.

What I see as the problem is the pressure the professional community is putting on the education system to change the curriculum.  And what we've got now is a teaching staff trying to teach modern software development technology and methodology that they've never used or been taught themselves.

The teaching staff is severely lacking in expertise in the technology and methodology they are teaching.  They're being forced to use Java.  They don't know Java.  They have no idea there's an ArrayList<E> class.  They're pressured to teach TDD, but they've never experienced the feeling of one of those unit tests saving their ass.

In the CS courses that I've had so far, I've yet to see an instructor that comes even close to some of the most talented engineers that I've worked with.  Our instructors should be the best developers out there.  They should know their subject inside and out.

At this point, I'd rather have the instructors speaking to what they know.  Let them teach convoluted, uncommented, functional programs in C if that's what they are experts in.  I'd rather be taught how to punch cards and feed them to a machine if the instructor is going to speak to me knowledgeably about the process, know all the little details about what's going on with those cards, and tell me what kinds of things they got burned by in the past.

Compare it to the mathematics curriculum.  Tonight I had to spend half an hour and three pages solving a trigonometric integral.  God knows that wasn't the most efficient way to do it, and you can be sure I'll never have to do it again.  But I learned it, and now I know how the calculator probably does it.  And now I can make assumptions about what is going on under the hood.  That experience is invaluable.  And all that said, if I were to ask my teacher a question, she'd be able to rattle off the answer instantly and make me feel like a total idiot.  She knows what she's doing.  She can rattle off exactly why my approach won't work.

I'm not getting that with the CS curriculum, but that's what I want.  That's what I'm missing.

I think ultimately we've swung too far to the other side, and we're missing what we really need in the CS undergraduate program.  We need to learn the mechanics and the details of constructing software in school.  Once we're done with school, we learn the art of programming in the workplace.  We'll never get that expertise in an institution.  And the more we press institutions to deliver the art of programming, the fewer programmers we get in the workforce who will be in a position to become masters.

Monday, February 11, 2008

Common Reason ADO.NET Data Services Don't Work - Reason 2 - Invalid Primary Key Name

I was helping a reader diagnose an issue with ADO.NET Data Services over the last week.  Basically, just about every table he had in his model wasn't being exposed through the REST service that ADO.NET Data Services provided when he was using LINQ to SQL entities.  When he used the ADO.NET Entity Framework to generate his model, all was well.

As it turned out, the issue had to do with his primary key name.  Apparently, if you are using ADO.NET Data Services on top of a LINQ to SQL DataContext object, you need to make sure that your primary key is defined as "[ENTITY]ID" or "ID".  Anything else doesn't seem to work.

So, if your table looks like this:

USE [BlogExampleDB]
GO
/****** Object: Table [dbo].[TestTable]    Script Date: 02/11/2008 18:01:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[TestTable](
    [MyPrimaryKey] [int] IDENTITY(1,1) NOT NULL,
    [Column1] [nchar](10) NULL,
 CONSTRAINT [PK_TestTable] PRIMARY KEY CLUSTERED
(
    [MyPrimaryKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

Your LINQ to SQL entity (dbml) will look like this:

[Image: the resulting LINQ to SQL entity in the dbml designer]

If you then plug that into your ADO.NET Data Service, and then visit your service, you end up with this:

<?xml version="1.0" encoding="iso-8859-1" standalone="yes"?>
<service xml:base="http://localhost:51499/Astoria/MySimpleService.svc/"
        xmlns:atom="http://www.w3.org/2005/Atom"
        xmlns:app="http://www.w3.org/2007/app" xmlns="http://www.w3.org/2007/app">
    <workspace>
        <atom:title>Default</atom:title>
    </workspace>
</service>

Your entity is nowhere to be found.

It all has to do with the name of your primary key.  In this example, if I change the primary key to either "TestTableID" or "ID", I'll see my entity show up.

Things to note about this:

  1. You don't have to change your underlying data model.  You only need to change the LINQ to SQL entity (see the sketch after this list).
  2. This only seems to affect LINQ to SQL entities, and not ADO.NET EDMs.
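
To illustrate note 1, here's roughly what the renamed key looks like in code.  This is a hand-written sketch of LINQ to SQL's attribute mapping rather than the actual designer output, but it shows the idea: the CLR property becomes TestTableID, while the Name attribute keeps it mapped to the original MyPrimaryKey column.

using System.Data.Linq.Mapping;

[Table(Name = "dbo.TestTable")]
public class TestTable
{
    // Renaming the property to TestTableID (or ID) is what makes ADO.NET
    // Data Services expose the entity; the database column stays MyPrimaryKey.
    [Column(Name = "MyPrimaryKey", IsPrimaryKey = true, IsDbGenerated = true,
            DbType = "Int NOT NULL IDENTITY")]
    public int TestTableID { get; set; }

    [Column(DbType = "NChar(10)")]
    public string Column1 { get; set; }
}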

Thursday, February 07, 2008

Using Yahoo UI Charts With ADO.NET (Astoria) Data Services

Recently I've been working on a time tracking application for my wife so that she can track time and generate invoices for her fledgling consulting business.

*Shameless plug: If you need any UI design help, she's awesome, really.  Her designs are sharp and professional, and she'll even build it out in HTML for you.  You can contact me at jfiorato at gmail dot com if you're interested in her services.*

One of the things I wanted to add to this application is some charts that display interesting metrics.  I'd had some mild exposure to YUI's charts, and found the Flash rendering of charts preferable to the image generation you get with Google's recently released Chart API.

To connect ADO.NET Data Services with the Yahoo Chart API, I'm going to do the following:

  1. Create my simple ADO.NET Data Service.
  2. Add a reference to the ADO.NET Data Services client side library.
  3. Add references to the Yahoo UI Chart API.
  4. Make a service call to the ADO.NET Data Service through JavaScript.
  5. Pipe the results to a Yahoo DataSource.
  6. Provide the DataSource to the Yahoo Chart API.

So, the first thing I'm going to do is create my data service.  This involves creating a LINQ to SQL class and making it available as a service.  You can follow most of the instructions from my post about creating a simple REST service with ADO.NET Data Services.
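
For reference, here's a minimal sketch of that service class.  I'm assuming the December 2007 CTP naming (the Microsoft.Data.Web namespace and the WebDataService<T> base class) and a LINQ to SQL DataContext called TimeTrackerDataContext; adjust the names to whatever your project uses:

using Microsoft.Data.Web;   // ADO.NET Data Services CTP namespace (assumption)

public class TimeTrackerDataService : WebDataService<TimeTrackerDataContext>
{
    // Open up read access to every resource container so the TimeEntries
    // set is exposed through the REST endpoint.
    public static void InitializeService(IWebDataServiceConfiguration config)
    {
        config.SetResourceContainerAccessRule("*", ResourceContainerRights.AllRead);
    }
}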

Next I'll add my references to the ADO.NET Data Services Client Side UI Library to the ScriptManager:

<asp:ScriptManager ID="ScriptManager1" runat="server">
    <Scripts>
        <asp:ScriptReference Name="MicrosoftAjaxDataService.js" />
    </Scripts>
</asp:ScriptManager>

Then I'll do as the Yahoo Charts API documentation tells me: add my div tag for the chart, followed by references to all the Yahoo JavaScript libraries.

<div id="myContainer" />
<!-- Dependencies -->
<script type="text/javascript" src="http://yui.yahooapis.com/2.4.1/build/yahoo-dom-event/yahoo-dom-event.js"></script>
<script type="text/javascript" src="http://yui.yahooapis.com/2.4.1/build/element/element-beta-min.js"></script>
<script type="text/javascript" src="http://yui.yahooapis.com/2.4.1/build/datasource/datasource-beta-min.js"></script>
<script type="text/javascript" src="http://yui.yahooapis.com/2.4.1/build/json/json-beta-min.js"></script>
<!-- OPTIONAL: Connection (enables XHR) -->
<script type="text/javascript" src="http://yui.yahooapis.com/2.4.1/build/connection/connection-min.js"></script>
<!-- Source files -->
<script type="text/javascript" src="http://yui.yahooapis.com/2.4.1/build/charts/charts-experimental-min.js"></script>

Once these are all put together, I'll make my call to the ADO.NET Data Service through JavaScript, loop through the results, add them to a Yahoo DataSource object, then hand that DataSource to the Chart object.

<script type="text/javascript">
 
    var myAstoriaService = new Sys.Data.DataService("Services/TimeTrackerDataService.svc");
    myAstoriaService.query("/TimeEntries?$orderby=Date", ServiceCall_Success, ServiceCall_Error);
 
    function ServiceCall_Success(result, context, operation)
    {
        Sys.Debug.clearTrace();
        Sys.Debug.trace(result.length);
        var dataSet = new Array();
        for(var i = 0; i < result.length; i++)
        {
            dataSet[i] = { Date: YAHOO.util.Date.format(result[i].Date, "DD/MM/YYYY"), Duration: result[i].Duration.toString() };
        }
 
        YAHOO.widget.Chart.SWFURL = "http://yui.yahooapis.com/2.4.0/build/charts/assets/charts.swf";
 
        var myDataSource = new YAHOO.util.DataSource(dataSet);
        myDataSource.responseType = YAHOO.util.DataSource.TYPE_JSARRAY;
        myDataSource.responseSchema =
        {
            fields: [ "Date","Duration" ]
        };
 
        var myChart = new YAHOO.widget.ColumnChart( "myContainer", myDataSource,
        {
            xField: "Date",
            yField: "Duration"
        });
    }
 
    function ServiceCall_Error(error, context, operation)
    {
        alert(error);
    }
</script>

Now, the only bummer about this is that I have to use the ADO.NET Data Services client libraries.  The Yahoo DataSource is perfectly capable of taking a URL that returns either XML or JSON and populating itself.  However, it appears that the JSON that ADO.NET Data Services generates isn't exactly valid JSON, according to www.json.org and the Yahoo JSON parser.

I guess there isn't a whole lot of code here anyway, so it's not too bad.  Another limitation is that when you're building charts, you usually need to do some grouping, and some sorting of those groups, to get interesting chart data.  From what limited documentation there is right now, I can't find a way to do any grouping of the data through the URL addressing scheme.

Monday, February 04, 2008

How to Get ADO.NET Data Services (Astoria) Working With Forms Authentication

UPDATE: Since I wrote this post, I've given a presentation on Astoria, with an updated Forms Authentication example. This post has a link to the slides and demo code.


As I was playing around with ADO.NET Data Services, I was having some trouble getting the services to work in my forms-authentication-enabled site.  Every time I tried going to the service, I kept getting redirected to the login page, even if I was already logged in.

The first thing I tried was using the location node in the web.config to exclude my service.  That didn't work.  Then I tried changing my authorization node to allow all users.  That still didn't work.  The only thing I could get to work was to remove the forms authentication node altogether.

Obviously, removing security from my site wasn't an option, so I started to poke around in Reflector a bit. Still no dice. I had no luck finding anything that could possibly be checking forms authentication and trying to redirect to the login page.

So, one thing I thought I'd try was adding some information to the serviceModel node in the web.config.  I had been wondering how that node interacted with Astoria, because in my experience with WCF, the service configuration nodes are extremely important.  So I tossed some really basic info into the config to see if that did it.  I put the following in my web.config:

<system.serviceModel>
    <services>
        <service name="MyDataService">
            <endpoint address="" binding="webHttpBinding" contract="Microsoft.Data.Web.IRequestHandler" />
        </service>
    </services>
</system.serviceModel>

And as you can guess (or not), that worked like a charm.  Hopefully someone from the ADO.NET team will have a look-see at why this is the case.

Sunday, February 03, 2008

Common Reason ADO.NET Data Services Don't Work With ADO.NET Entity Framework and Linq to SQL

I'm playing around a little bit with ADO.NET Data Services today, and had some trouble getting ADO.NET Entity Framework to work.  My ATOM document was returning an empty result set.  So, when I went to my page, I'd get results that looked like this:

<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<service xml:base="http://localhost:60618/TimeTracker/Services/TimeTrackerDataService.svc/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:app="http://www.w3.org/2007/app" xmlns="http://www.w3.org/2007/app">
    <workspace>
        <atom:title>Default</atom:title>
    </workspace>
</service>

I double checked that I was allowing the appropriate read rights to the resources, as I had this line in my service:

config.SetResourceContainerAccessRule("*", ResourceContainerRights.AllRead);

As it turns out, this is caused by the designer file for the edmx or dbml having an incorrect connection string, or a connection string that's missing from the default constructor.  If you go through the ADO.NET Data Services wizard and don't check the "Save entity connection settings..." checkbox, you'll end up with the empty service document shown above.


Whether you're using LINQ to SQL or the ADO.NET Entity Framework, you resolve it the same way.  Just open up the designer file for the edmx or dbml, and change the default constructor so that it populates the connection string correctly.
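
For the LINQ to SQL case, a rough sketch of the fix looks like the following.  The class name and connection string name are hypothetical, and the real designer file has more plumbing (mapping source, partial methods), but the important part is that the parameterless constructor hands a connection string to the base DataContext:

using System.Configuration;
using System.Data.Linq;

public partial class TimeTrackerDataContext : DataContext
{
    // ADO.NET Data Services creates the context through this parameterless
    // constructor, so it has to supply the connection string itself.
    public TimeTrackerDataContext()
        : base(ConfigurationManager.ConnectionStrings["TimeTrackerConnectionString"].ConnectionString)
    {
    }
}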

Saturday, February 02, 2008

Struggling to Find People to Follow On Twitter? Here's a Few Good Ones

http://twitter.com/god - hasn't tweeted in a while, so maybe the lord is up to something...

http://twitter.com/dwightkschrute - Fact.  A beet farmer invented Twitter.

http://twitter.com/darthvader - If the dark side is your bag.

http://twitter.com/chucknorris - Chuck Norris doesn't twitter, twitter can't help but send messages about him.

http://twitter.com/WilliamShatner - Must. Keep. Under 140 characters.

http://twitter.com/insults - If you really want to insult someone's mother, and don't have any material.

http://twitter.com/GhostWhisperer - 312 followers representative of the number of people that watch the show?  Probably.

http://twitter.com/ord - If you like to snicker at poor travelers' woes, this one is for you.

Deploying ASP.NET Web Application Projects With MSBuild 2008 In Less Than 25 Lines

One of the things I've had to do recently is create a deployment script for my ASP.NET Web Application Project.  Not one of those project-less web projects, but an old-school VS 2003 style web project. 

The deployment of the web project needed to be part of a larger set of deployments, consisting of a database and some classes to be used as an API.  Additionally, I needed to deploy two versions: one for new installations, and another targeted at upgrading the previous version.  The upgrade deployment needed to leave the usual suspects, like the web.config and the themes directories, untouched.

I wanted the whole thing automated and running from a build server, not the desktop of an engineer.  The process needed to be consistent, and ideally a single click would do it.

Because of all these requirements, I couldn't use tools like the "Publish" menu option in the IDE, or the Web Application Deployment Project.  So, I used the next best thing, MSBuild.

For this post, I'm going to focus solely on the upgrade version of the web project, and stay away from the database and API deployments.  Database deployments really depend on your versioning scheme.  K. Scott Allen is currently putting together some posts about a database versioning scheme that might be helpful if you don't have one you're happy with right now.

So, I want to do the following with my build script:

  1. Compile the web application and put its output in a directory called "Releases\[Version]\NewInstall\bin".
  2. Read the project file (csproj in my case) to find all the content files and folders that I need to run the application, and move those to the "Releases\[Version]\NewInstall\" directory.
  3. Copy the contents of the "Releases\[Version]\NewInstall" directory to the "Releases\[Version]\Upgrade" directory, excluding all the files that I don't want to overwrite on upgrade (in this case, the web.config and the themes directory).

To do the first item, I'm just going to use the MSBuild task to call the solution, providing the output directory.  I created properties for the version number and the build directory, knowing that I'll probably want to use them elsewhere.  I also separated the version number from the build directory, because I'll probably want to use the version number on its own as well.

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
    <PropertyGroup>
        <VersionNumber>1.0.0</VersionNumber>
        <BuildRoot>..\Releases\$(VersionNumber)\</BuildRoot>
        <NewInstallDir>$(BuildRoot)NewInstall\</NewInstallDir>
        <UpgradeDir>$(BuildRoot)Upgrade\</UpgradeDir>
    </PropertyGroup>
    <Target Name="Build">
        <MSBuild Projects="DeploymentProject.sln" Properties="OutputPath=$(NewInstallDir)bin\" />
    </Target>
</Project>

The next thing I need to do is copy over the content files.  I want to make sure I don't deploy any of the source files.  Instead of using regex or rolling something of my own to filter out the source files, I can use the information in the .csproj.

One handy thing you can do with MSBuild is import another project, and in this case we can import the .csproj.  Once I do that, I have access to all the properties and items of the imported project, including the list of content files.  The .csproj keeps track of the files and categorizes them into references, files to be compiled, content files, empty folders, and unknown (None) items.

<Import Project="DeploymentProject\DeploymentProject.csproj"/>

With the collection of content files, empty folders, and unknown items, I can move all of my non-compiled files to the deployment directory.  I'm going to put this into the same target as my compile task.  I'm not a big fan of chaining targets together with dependencies; I find it confusing.

Here is what gets added right after the MSBuild task:

<Copy SourceFiles="@(Content->'DeploymentProject\%(RelativeDir)%(FileName)%(Extension)')"
      DestinationFiles="@(Content->'$(NewInstallDir)%(RelativeDir)%(FileName)%(Extension)')" />
<Copy SourceFiles="@(None->'DeploymentProject\%(RelativeDir)%(FileName)%(Extension)')"
      DestinationFiles="@(None->'$(NewInstallDir)%(RelativeDir)%(FileName)%(Extension)')" />
<MakeDir Directories="@(Folder->'$(NewInstallDir)%(RelativeDir)')" />

I'm using an MSBuild technique called transforms, which is explained in more detail at that link.

The next thing I need to do is copy the output to the upgrade directory.  We can do this much more easily now that you can create items on the fly with MSBuild 2008.  The example below speaks to this new feature:

<CreateItem Include="$(NewInstallDir)**" Exclude="**\App_Themes\**;**\Web.config">
    <Output ItemName="UpgradeFiles" TaskParameter="Include" />
</CreateItem>
<Copy SourceFiles="@(UpgradeFiles)"
      DestinationFiles="@(UpgradeFiles->'$(UpgradeDir)%(RecursiveDir)%(FileName)%(Extension)')" />
<MakeDir Directories="@(Folder->'$(UpgradeDir)%(RelativeDir)')" />

The last MakeDir task in there gets all the empty folders from the project directory.

Here's what it looks like put together:

  1 <?xml version="1.0" encoding="utf-8"?>
  2 <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
  3     <Import Project="DeploymentProject\DeploymentProject.csproj"/>
  4     <PropertyGroup>
  5         <VersionNumber>1.0.0</VersionNumber>
  6         <BuildRoot>Releases\$(VersionNumber)\</BuildRoot>
  7         <NewInstallDir>$(BuildRoot)NewInstall\</NewInstallDir>
  8         <UpgradeDir>$(BuildRoot)Upgrade\</UpgradeDir>
  9     </PropertyGroup>
10     <Target Name="Build">
11         <MSBuild Projects="DeploymentProject.sln" Properties="OutputPath=..\$(NewInstallDir)bin\" />
12         <Copy SourceFiles="@(Content->'DeploymentProject\%(RelativeDir)%(FileName)%(Extension)')"
13               DestinationFiles="@(Content->'$(NewInstallDir)%(RelativeDir)%(FileName)%(Extension)')" />
14         <Copy SourceFiles="@(None->'DeploymentProject\%(RelativeDir)%(FileName)%(Extension)')"
15               DestinationFiles="@(None->'$(NewInstallDir)%(RelativeDir)%(FileName)%(Extension)')" />
16         <MakeDir Directories="@(Folder->'$(NewInstallDir)%(RelativeDir)')" />
17         <CreateItem Include="$(NewInstallDir)**" Exclude="**\App_Themes\**;**\Web.config">
18             <Output ItemName="UpgradeFiles" TaskParameter="Include" />
19         </CreateItem>
20         <Copy SourceFiles="@(UpgradeFiles)"
21               DestinationFiles="@(UpgradeFiles->'$(UpgradeDir)%(RecursiveDir)%(FileName)%(Extension)')" />
22         <MakeDir Directories="@(Folder->'$(UpgradeDir)%(RelativeDir)')" />
23     </Target>
24 </Project>
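
As a final note, running this from the build server is just a matter of calling MSBuild against the script and overriding the version property on the command line, something along the lines of MSBuild.exe Deploy.proj /p:VersionNumber=1.0.1 (where Deploy.proj is whatever you named the file above).  That's the single click I was after.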