Sunday, December 13, 2009

HTPC Build

Last year I decided it was time to update my home theater PC. The old one was still running Windows XP with Windows Media Center 2005. The hardware was dated and I wasn't really using it much anymore, opting instead for my cable box and its much better high-definition support.

So, I decided to do the build this time on my own, and I thought I'd detail it here for reference.

My requirements for the HTPC were:


  • High definition playback and recording (at least two shows at once)

  • A/V Component Form Factor for Case w/ built in IR Receiver

  • HDMI output to receiver/television

  • Blu-Ray Player

  • Energy Efficient

High definition playback and recording (at least two shows at once)


For the HD support, I opted to go with the SiliconDust HDHR-US HDHomeRun Networked Digital TV Tuner. Of the many setups I tried, this was the easiest to get working in Windows Media Center. Additionally, you can use the broadcast signal throughout your house on any of your other PCs with VLC or other software. It's a fantastic device, and they've got great support.

A/V Component Form Factor for Case w/ built in IR Receiver


This was important to me because I didn't want a tower sitting around in my family room. Additionally, I didn't want any more cables for the kids to jack with, so IR receiver cables were not ideal. I chose the Antec case for the form factor and IR receiver, but was also pleased with its cooling capabilities. One downside to this case is that the front display has very poor contrast, making it difficult to read from 8' or more.

HDMI output to receiver/television


Again, fewer cables is better. Having a single HDMI out to the receiver, which is forwarded on to the television, just makes everything simpler. When shopping for motherboards, I wanted to make sure they had this on board with a decent audio chipset. Asus has a great track record of quality motherboards. The downside is that at the time there were no HDMI 1.3 motherboards, so audio formats like Dolby TrueHD and DTS-HD aren't supported. It looks like you can find HDMI 1.3-capable motherboards now.

Blu-Ray Player


Might as well, right? Internal Blu-ray drives only run $50-$60 more than other optical drives. I opted not to get a DVD writer because I rarely do any DVD writing, and if I did, I'd do it from my laptop and not my HTPC.

Energy Efficient


There are a few energy-efficient aspects to this PC. First, the motherboard supports intelligent standby, drawing bare-minimum power until the remote is used or a show needs to be recorded. Second, the chassis and CPU fan speeds are controlled by the motherboard. Third, I installed a 2.5" notebook hard drive for the OS drive, which is far less power hungry than a standard 3.5" disk.

So, that's the giddy-up. Happy to field any questions/comments on your own experiences.

Parts List


Total: $1,257.84 + tax + shipping

Saturday, October 24, 2009

Picture Downloader - A Small Study in Affordable Scalability

Scalability always seems to be the poster child for YAGNI. Lately, however, the barrier to entry for scaling out is decreasing as services that provide elastic infrastructure become wildly abundant. Scaling a website is easier now than ever for just about anyone, but for some reason upfront scalability design still seems to be a bit taboo.

I wanted to test how difficult it would be to do an upfront scalable implementation and pay little or nothing until the scaling actually needed to happen. So I decided to dive in and see how I would create a site that starts out serving a small number of pages, doing a little bit of work, and using a little bit of storage, with a small bill to match, but with the ability to scale out the page serving, the work, and the storage infinitely.

Recently my dad was asking how to download all the photos from a Flickr set so he could burn them to a CD for my grandmother. There are a few desktop apps and Firefox extensions for it, but everything requires you to download and install software or use a certain browser. So I thought a web application that looks at a photo site, zips up the original images in a set or album, and lets the user download them would be a good sample application for this test. You can see the application here.



Overview


The application itself can be divided into three parts: the web application, which serves as the interface for both the user and the workers; the workers, which download the images and zip them up; and the storage, which holds the zipped sets for download.

Web App


The web app is a pretty vanilla Ruby on Rails application. It serves up a single page for the user to enter information about the set/album they want to download, and it also provides a REST interface for the workers to talk to. This is where the PostgreSQL database resides, storing information about the downloads that have been requested.
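To give a rough idea of what that worker-facing REST interface looks like, it boils down to a couple of small JSON actions. This is just a sketch; the controller, routes, column names, and state values below are assumptions, not the actual app's code, and it ignores locking/race concerns for brevity:

# Hypothetical sketch of the worker-facing interface (names are made up).
class JobsController < ApplicationController
  # GET /jobs/pickup.json -- hand the oldest queued job to a polling worker.
  def pickup
    job = Job.find(:first, :conditions => { :state => 'queued' },
                           :order      => 'created_at')
    if job
      job.update_attributes(:state => 'working')
      render :json => job
    else
      head :no_content
    end
  end

  # POST /jobs/:id/complete -- a worker reports where the finished zip lives.
  def complete
    job = Job.find(params[:id])
    job.update_attributes(:state => 'done', :zip_url => params[:zip_url])
    head :ok
  end
end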

To get cheap scalability, it's deployed to a host called Heroku. Heroku is an infrastructure that provides process-level scaling for Ruby web applications. When you deploy an application to Heroku, it compiles it into a "slug", and this "slug" can be started as a process on a server. Heroku runs on Amazon's EC2 infrastructure and deploys the "slug" to a user-specified number of load-balanced processes, with processing slots available across the virtual servers they have running at Amazon. You pay for the number of processes you want to have. One process is free, so you can host a low-traffic website there for no money at all and pay more as you scale up.

Workers


The workers are Ruby scripts that do the actual downloading and zipping of the set/album. Workers do periodic lightweight polling of the web app, looking for new jobs to work on. Once a job is completed, the zip is moved to the shared storage, and the worker tells the web app that the job is done and where the zip file lives.
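In spirit, each worker is little more than a polling loop like the following. Again, just a sketch: the endpoint paths, JSON keys, and the stubbed helpers are made up for illustration, not the real Picture Downloader code:

# worker.rb -- minimal sketch of a worker's polling loop (names are assumptions).
require 'net/http'
require 'uri'
require 'json'

WEB_APP       = 'http://example.com'
POLL_INTERVAL = 30 # seconds between polls when the queue is empty

# Stubs standing in for the real work: pull the originals, build the zip,
# push it to shared storage, and report back to the web app.
def download_and_zip(set_url)
  '/tmp/set.zip'
end

def upload_to_s3(zip_path)
  "http://s3.example.com/#{File.basename(zip_path)}"
end

def report_done(job_id, zip_url)
  Net::HTTP.post_form(URI.parse("#{WEB_APP}/jobs/#{job_id}/complete"),
                      'zip_url' => zip_url)
end

loop do
  response = Net::HTTP.get_response(URI.parse("#{WEB_APP}/jobs/pickup.json"))
  if response.code == '200'
    job = JSON.parse(response.body)
    zip_path = download_and_zip(job['set_url'])
    report_done(job['id'], upload_to_s3(zip_path))
  else
    sleep POLL_INTERVAL # nothing queued; this is the lightweight poll
  end
end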

To get cheap scalability, the workers run in Amazon's EC2 environment. There are two workers per server; when an EC2 instance starts up, the code is updated from Git and the processes are started as service daemons. The EC2 instances are started by the web application, and they shut themselves down after an hour of inactivity. The web application determines how many EC2 instances to start based upon the number of jobs in the queue. At $.03 per hour, I pay nothing for no activity, and very little as the application scales up.
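The scale-up decision in the web app is simple arithmetic, something along these lines. The jobs-per-instance ratio, the WorkerInstance model, and the launch stub are assumptions for illustration, standing in for whatever EC2 client call you use:

# Sketch of the web app's scale-up decision (names and numbers are assumptions).
JOBS_PER_INSTANCE = 4 # two workers per server, a couple of queued jobs each

def launch_worker_instance
  # Placeholder for the EC2 RunInstances call via your AWS library of choice.
end

def scale_workers
  queued  = Job.count(:conditions => { :state => 'queued' })
  running = WorkerInstance.count(:conditions => { :state => 'running' })
  needed  = (queued.to_f / JOBS_PER_INSTANCE).ceil

  (needed - running).times { launch_worker_instance }
  # No scale-down logic here: instances shut themselves down after an hour idle.
end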

Storage


Since the web app and the workers are physically separated, I needed shared storage that could accommodate some potentially very large zip files. Additionally, I needed a network that could serve such files reliably for download.

To get cheap scalability, I used Amazon's S3 for this shared storage. I get great copy speed from the EC2 instances to S3, since they're both on Amazon's network, and again, I only pay for what I use. At a minimum, I have to pay for storage of the EC2 image, and on top of that I pay for the zip files stored there, as well as for data transfer and requests. You can see the pricing here, and you'll see that it's very affordable. The zip files are deleted after a month to control storage costs.
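For what it's worth, pushing a finished zip from a worker up to S3 is only a few lines with the aws-s3 gem. The bucket name, key scheme, and credential handling below are made up for the sake of the sketch:

# Sketch of a worker uploading a finished zip with the aws-s3 gem (names are assumptions).
require 'rubygems'
require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV['AMAZON_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AMAZON_SECRET_ACCESS_KEY']
)

def upload_zip(job_id, zip_path)
  key = "zips/#{job_id}.zip"
  AWS::S3::S3Object.store(key, open(zip_path), 'picture-downloader-zips')
  "http://s3.amazonaws.com/picture-downloader-zips/#{key}"
end

# A periodic task can then delete objects older than a month to keep the bill down.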

Conclusion


Overall this project took me very little time, around 30 hours. With it, I've got a solution that costs me nearly nothing if it isn't used, and would be affordable if it were used quite a bit. That said, it's a pretty simple application, and the lines of separation between web app, workers, and storage were very clear, so your mileage may vary.

Also, you were probably wondering about database scaling. In this situation the PostgreSQL scaling would happen vertically, adding more dedicated resources to the database server rather than more databases to the pool, which Heroku supports as well.

I do need to give credit to my colleagues at the Nextpoint Lab who initially put together the scalable design for our software (which gets rave reviews, I'm just sayin'), as it was essentially the blueprint for this design.

I think my conclusion is that if the situation is right, upfront design and implementation of a scalable solution doesn't necessarily have to cost you an arm and a leg to get it done, nor does it need to be a huge gamble if the scale isn't needed immediately.

Saturday, July 25, 2009

Are You Following Your Team's Information Stream?

If not, you suck. Really, you do. You sucky suck suck. You're a bad, bad manager. Sorry to be the bearer of bad news.

It's sad, really, because you've got a great team that's filling the atmosphere with fantastic information, and you're not taking the time to soak it in. Chances are, most people on your team are on Twitter. Chances are also that a solid percentage of your team has a blog somewhere. This is likely the most valuable information you can possibly consume. But you don't, so you suck.

I don't think you're alone though... I think that most managers' only hook into their team's digital personality is IM status, and that's just sad (though I've seen some really good IM statuses). But did you know that your most introverted engineer has 3,000 status updates on Twitter? Did you know that your project manager has a blog about Haskell coding? Did you know that your QA engineer is a semi-pro kickboxer?

Your team is giving you feedback, intentionally if indirectly. I think it's time you accept that this stream is like an uber-informative facial expression. Your team is letting you know how their day went, what they think you're doing wrong (or right), and what they think the company is doing wrong (or right). Maybe it's a bit passive-aggressive, but who cares, if they're telling you something you should be hearing? Is there really a better channel for them?

As a manager, you've always got time to learn more about the craft, the practices, the people... and your team members' streams are full of great information about all of it. Believe it or not, they're thinking about work just as much as you are.

Your team is likely doing very, very interesting stuff outside of work. Having an idea of what that is, and showing that you're interested in it, goes a long way toward strengthening your relationship. Think about walking in on a Monday morning and saying "Saw you went hang gliding, how awesome was that?" instead of sounding like a fucking tool at the water cooler saying "How was your weekend?" and getting the same old "Good, yours?", followed by the awkward silence and then "ok, have a good one". Get involved, and you'll have something interesting to talk about.

Your team would appreciate it. As a manager, your team looks up to you, and knowing that you're paying attention and interested in what they're doing makes them feel appreciated. The more appreciated they feel through this information stream, the better off everyone is. You're all growing.

So, here's the good news; I can retract the sucky bad manager claim if you start following your team's information stream today. Do it, or you still suck.

Tuesday, July 21, 2009

Response From programmer.grrl

You should read Amy G's response to my last post:

Companies are Addicted to Profit Like Smokers are Addicted to Nicotine?

It's good.

Monday, July 20, 2009

Define success...

Amy G wrote a response to my post about why I think that large development projects don't work.

My concern is, so what? Just because a project fell short of what was envisioned, blew the timeline, or damaged the team, does that mean it wasn't successful?

My answer to that is: no, it doesn't mean that at all. But only along the same lines as this: if you smoke a pack of cigarettes today, does that mean it's going to kill you? No, it probably won't. You'll likely live, but you'll also likely smoke another pack tomorrow, and the next day, and then the answer changes.

Now, as if it weren't bad enough that I equated large development projects to smoking cigarettes, I'm going to quote DeMarco and Lister, who I think she feels are icons of an unattainable utopian relationship between management/organizations and the folks in the trenches (but I don't want to put words in her mouth; you should ask her yourself).
Historians long ago formed an abstraction about different theories of value: The Spanish Theory, for one, held that only a fixed amount of value existed on earth, and therefore the path to the accumulation of wealth was to learn to extract it more efficiently from the soil or from people's backs. Then there was the English Theory that held that value could be created through ingenuity and technology. So the English had an Industrial Revolution, while the Spanish spun their wheels trying to exploit the land and the Indians in the New World. They moved huge quantities of gold across the ocean, and all they got for their effort was enormous inflation (too much gold money chasing too few usable goods).

Value in software comes from having a great understanding of the domain and the gap that the software is trying to fill, and coming up with innovative and smart solutions to fill that gap. Innovations and smart solutions don't come from people who are constantly working overtime, or constantly feeling defeated because of another missed timeline or unrealized feature.

But value in software is hard to gauge, especially when something of lower value can sell really well in the immediate future. Some businesses are great at selling products that are of lower value. But without value, any success will always be fleeting.

Thursday, July 09, 2009

Merge Tracking with Subversion 1.5

A year or so ago, a few of us at the office piled into a conference room to watch a webinar about all the new features in Subversion 1.5. One of those features that sounded cool, but was conceptually opaque to me, was merge tracking. Recently, though, I've had to get my merge on and wanted to see what it was all about, and it turned out to be one of those things I wish I'd taken the time to do sooner.

The bottom line is that Subversion now actually has a workflow for keeping your branch in sync with the trunk, so you've got fewer surprises when it's time to put your work back into the trunk. You no longer need to re-branch in order to get the latest trunk changes into your branch. You can just tell SVN to get any changes from the trunk since you either made the branch or last took changes from the trunk.

The workflow is pretty simple. You make your branch, you work on it for a while, and then you say, "I'm going to be merging into the trunk eventually, I'd better make sure that goes smoothly," so you update your branch with the latest from the trunk:


$ pwd
/home/user/my-calc-branch

$ svn merge http://svn.example.com/repos/calc/trunk
--- Merging r345 through r356 into '.':
U button.c
U integer.c


Since we didn't specify revisions, you can see that Subversion is fully aware of which revision of the trunk your branch came from. Subversion is also going to store all the revisions that you've just merged into your branch. You can run this to see them:


$ cd my-calc-branch

$ svn mergeinfo http://svn.example.com/repos/calc/trunk
r341
r342
r343

r388
r389
r390


You can also see which changes from the trunk you're missing:


$ svn mergeinfo http://svn.example.com/repos/calc/trunk --show-revs eligible
r391
r392
r393
r394
r395


So, you can repeat this process often, which I'd recommend. The more frequently you integrate the changes from the trunk, the more unicorns and rainbows come merge time.

Another benefit of Subversion keeping the merge information around is that you can undo merges:


$ svn merge -c -303 http://svn.example.com/repos/calc/trunk
--- Reverse-merging r303 into 'integer.c':
U integer.c


That's just for keeping your branch in sync with the trunk. How about when you want to move your changes into the trunk once your work is complete? Well, all this merge history will be taken into account when you decide to merge back into the trunk. Subversion will understand what has already been pulled in from the trunk (or any branch, for that matter), and only merge the revisions that aren't in the merge history.

This is a very handy feature; it's a shame I waited this long to dive in.

(Note: all these examples I shamelessly copied from the svn book)

Monday, July 06, 2009

Large-Scale Project Doesn't Equal Large-Scale Development

Jeff and I were having a discussion on Twitter last month about 37signals' "Getting Real" philosophy. Specifically, Ben's post initiated the conversation, and his post was based on this one from Signal vs. Noise. What we were discussing was how the "Getting Real" philosophy applies (or doesn't) to other organizations that have justifiably complicated problems to solve, or have to work with legacy systems, or have to deliver up-front estimates on work to be completed, or have a QA team.

My point was that ultimately, "Getting Real" is delivered in an extremely prescriptive manner by the folks at 37signals. Their philosophy is contrived in a vacuum that they've built with well-packaged and well-priced software, which I'm sure they worked very hard to create, but, unfortunately, that model isn't in the stars for everyone.

Eventually, our Twitter conversation ended with me promising Jeff a blog post on why I thought "Getting Real" does apply to large-scale development. The reason is that we need to stop thinking that large-scale development is necessary. Just because the project may be relatively large, the development doesn't need to be equivalently large. The concept of parallelizing development with a large team to deliver a large amount of code for a large project all at once is, in my opinion, well... broke.

The only successful large projects I've been a part of have consisted of small development releases: small teams focused on one or a few chunks of functionality, delivered in short iterations, with the team deferring as many decisions as late as possible. And this is where I think the "Getting Real" philosophy fits in really well, and works even if it doesn't fit into the other aspects of the organization.

Take "large scale development" off the table, eliminate it from your vocabulary. Keep it short and sweet, even if the rest of your organization can't be that way.

With the "large scale development" mind set, the odds are stacked against you. You'll find that you've fallen short on what you've envisioned, fallen short on your timeline, and probably done some irreparable damage to your talented team.

Saturday, March 14, 2009

Memoization with Javascript

Memoization is a fairly common optimization where the results of a function are cached, so that subsequent calls to that same function with the same arguments can skip the execution and return the result from the cache. With interpreted languages, memoizing frequently called functions with a limited domain of arguments can save quite a bit of execution time. Obviously, you need to balance this speed against the amount of memory your page will consume. Finding DOM objects is a pretty good application of memoization.


function memoize(obj, func) {
  // Declared with var so each memoized function gets its own cache;
  // without var these would leak into the global scope and be shared.
  var the_func = obj[func];
  var cache = {};

  return function() {
    // Build a cache key by joining the arguments.
    var key = Array.prototype.join.call(arguments, '_');
    if (!(key in cache))
      cache[key] = the_func.apply(obj, arguments);
    return cache[key];
  };
}

var ElementFinder = {
  findById: function(id) {
    console.log('Calling Function with id: %s', id);
    return document.getElementById(id);
  }
};

ElementFinder.findById = memoize(ElementFinder, 'findById');

function load() {
  for (var i = 0; i < 10; i++) {
    console.log(ElementFinder.findById('my_div').id);
  }
}

Tuesday, March 10, 2009

How Firefox 3.1 Uses Trace Trees to Optimize Javascript Runtimes

Version 3.1 of Firefox should bring a pretty marked improvement in Javascript performance because of a compilation optimization the Firefox team is putting in place for that release. Trace-based compilation identifies loops and records frequently visited paths within those loops to determine what needs to be compiled.

Traditional just-in-time compilation usually happens at the method/function level, where the need for compilation is simply determined by whether or not the function has been visited. When a function is visited for the first time, the control flow graph for the entire function is created and then compiled into the proper instruction set. This can cause quite a bit of unnecessary overhead given the way Javascript applications are delivered via the internets.

With trace compilation, the compiler first determines the loop headers, since the most frequently called code usually lives inside a loop. To find loop headers, it keeps a counter of back branches (ones that send control back to a previous location), and once that counter hits a certain threshold, the loop is identified as "hot." Subsequent paths starting at that loop header are then recorded and compiled.

In this scheme, the interpreter and the compiler hand control back and forth to each other, which itself can be somewhat expensive. However, all along the way, whenever interpretation of a branch occurs within these "hot" traces, the recorder keeps track and compiles the alternate paths.

There's a bunch more information on this method of compilation, including information that's specific to the TraceMonkey implementation.

Saturday, February 28, 2009

Manipulating scope with the Javascript "with" statement

The with statement in Javascript adds the properties of an expression's result to the scope chain of the current execution context. It's really similar to the VB With statement. Here's a simple example:

function showDivStyle() {
  with ($('my_div')) {
    console.log(style.backgroundColor);
  }
}

showDivStyle(); //logs the string "yellow" at this point.
It gets even more interesting when you combine it with a closure, since the closure carries that extended scope along with it.
function showDivStyle() {
  var myClosure;

  with ($('my_div')) {
    myClosure = function() {
      console.log(style.backgroundColor);
    };
  }

  return myClosure;
}

theClosure = showDivStyle();
theClosure(); //logs the string "yellow" at this point.

Growl4Rails - Rails == Growl4???

I've had a few people ask how to set up Growl4Rails without using Rails. It's pretty easy as is, although you may be a little miffed by the directory structure. Feel free to go in and munge with the source if you'd like it to be more flexible, or to update it to suit your needs. All of the path dependencies are in the CSS files.


First, grab the source from GitHub:
git clone --depth 1 git://github.com/jfiorato/growl4rails.git
After that, create images/growl4rails, javascripts/growl4rails, and stylesheets/growl4rails directories under your public folder.

From the Growl4Rails source, copy the contents of the public/images, public/javascripts, and public/stylesheets directories to your newly created folders, respectively.

Then, make sure to copy prototype.js and effects.js to your javascripts directory.

Once all the resources are there, all you need to do is add the includes to your layout/masterpage/template:

<script src="javascripts/prototype.js" type="text/javascript"></script>
<script src="javascripts/effects.js" type="text/javascript"></script>

<script type="text/javascript" langauge="javascript">
var growl4rails_duration = 5000;
var growl4rails_max_showing = 5;
</script>

<script src="javascripts/growl4rails/growl4rails.js" type="text/javascript"></script>
<link href="stylesheets/growl4rails/growl4rails.css" media="screen" rel="stylesheet" type="text/css">


Friday, February 27, 2009

Foolish writers and readers are created for each other

Probably like a lot of people, I've been thinking a lot lately about the hoo-ha between Atwood/Spolsky and Robert Martin on the importance of code quality and unit tests. But it's not the suggestion that code quality is of little value, or that writing unit tests is thriftless, that was the subject of my reflections.

What's got me thinking about it all is how both sides of the argument are extremely anecdotal. Atwood, Spolsky and Martin are all being very prescriptive about subjects that are extremely idiosyncratic, and these guys professionally represent two extreme ends of the programming spectrum: big-company consulting and small-team software R&D. I'd even go so far as to say they are different professions.

Anyway, I've come to the conclusion that I need to be more careful when being so prescriptive in my writing, and I need to do a much better job at qualifying my opinions on not-so-cut-and-dry matters.

"Foolish writers and readers are created for each other."
- Horace Walpole

Working Remotely Successfully

Before I started here at Nextpoint, I had some doubts that working remotely would work out all that well. Yes, not having to put pants on in the morning and being able to make a trip to the pantry every half hour for more Thin Mints certainly has its appeal, but I was worried about being "that guy in Chicago".

However, 4 months in, I've been extremely happy with the way it's all panned out. I think there are some key arrangements and tools we've had in place that have made it successful. Keep in mind that our team is small, 4 people (not including me), so your mileage may vary.

Regularly make trips to see the team in person

This is the most important thing, as it allows me to personally connect with the team. We get a meal or two in, catch up on the water cooler talk, and keep the personal connections strong. I'll usually take some time as well to give the team a tour of what I've been working on.

I've found that if I don't make these trips regularly, I start to feel disconnected and actually become a little distressed. Being up there lets me tap into the vibe and gets me pumped about what I'm doing. I'm fortunate enough to be able to do it every other week, since the office is within driving distance and I can keep the trips to a single day. But I think that if the distance were greater, once a month or once every two months would be a good interval as well.

Group chat is a must

Group chat is a key tool that has made working remotely successful. Justin found a fantastic group chat feature of AIM that lets you create a group and then invite other AIM screen names to be members of it. Each AIM screen name adds the group as a buddy, and all instant messages are blasted out to each of the members. This eliminates the need to organize everyone into a chat; you just chat, and whoever is online gets the messages.

We had also tried out Yammer, but we weren't all keen on more apps/configurations/browser tabs in order to use the tool. We already use Adium, so just adding another account was simple.

Daily IN/OUT status

Being remote, it's hard to know what everyone is working on. It's equally difficult to let everyone know that you're making progress. I wanted everyone to know that I wasn't just sitting at home watching Days of Our Lives.

Providing "what I'm going to work on today" when we get in, and a "here's what I did today" is a great way to keep up with what everyone is working on. We've been using Basecamp for tasks and Wiki, and with it, you get a group chat tool Campfire. This is different than the AIM tool, because there's a history kept, and you don't have to be online to catch up. This way, I can see what everyone else is working on, and what they've done. If we didn't have Campfire, a tool like Yammer could work well.

Video conferencing isn't necessary

When I first started, I thought for sure that we'd need a video/audio conferencing solution as well as a virtual whiteboard. At first I tried to make a point of calling with Skype, but that was difficult to do as a group (plus video of me with my cat humping the bed in the background was a little difficult for the rest of the team to watch). So then I bought a Skype speakerphone, which ended up being a waste of money. Then Justin (we call him MacGreerver) crafted a speakerphone out of an old set of iPod headphones, which worked 100x better than the $160 speakerphone I purchased.

We haven't used any of these things in a couple of months now. The AIM group chat has made them completely unnecessary. Any time we need to talk one on one, we chat. When we need to talk as a group, we group chat. When we need to leave status, we use Campfire.

These tools and ideas have really made working remotely successful. If we adopt any new tools or ideas, I'll make sure to update this post.

So, as I sit here, with no pants on and minty-chocolate breath, I'm extremely happy that I gave it a try.

Wednesday, January 21, 2009

Update to Growl4Rails

Growl4Rails is now production ready. I've added the ability to show multiple growls at once, as well as improved support for IE. Here's a screen shot of the multiple Growls:



Unfortunately, the multiple growls thing was an entire rewrite of the code, so the usage is quite different.

Now, when you set up the includes, you specify the default duration and the max number of growls to show. These arguments are optional and default to 3000 milliseconds and 3, respectively.


<%= growl4rails_includes(3000, 5) %>

Also, when showing the growls, the args are now a single hash, with named keys:

<script type="text/javascript" language="javascript">
Growl4Rails.showGrowl({
image_path:"/images/download.png",
title:"Foo Bar.pdf",
message:"File is ready for download."
});
</script>


When you want to handle the click event, you can wire up the event like so:

var growl_id = Growl4Rails.showGrowl({
  image_path: "/images/download.png",
  title: "Foo Bar.pdf",
  message: "File is ready for download."
});

document.observe(growl_id + ':clicked', function(event) {
  console.log('Growl %s was clicked.', Event.findElement(event).id);
});

Again, it's open source, so if you want to contribute, or if you have any feedback, you can check out the project on GitHub.

Saturday, January 17, 2009

Javascript Templating

Writing HTML with Javascript has always been a necessary evil, and for me a painful thing to look at in code. Usually we have to do this when we've got some set of data in a collection on the client, and we need to loop through the collection and build HTML.

Back in the day, before I was privy to Javascript templating, I'd do this:


var myArray = ['John', 'Steve', 'Bill'];
var myHTML = '';
for (var i = 0; i < myArray.length; i++) {
  myHTML += '<div>Hello, my name is ' + myArray[i] + '.</div>';
}
var container = document.getElementById('myContainer');
container.innerHTML = myHTML;

There are a few things wrong with this. It's everything bad about the close intermingling of two different languages, or having one language write the other. It's also fairly inefficient: each concatenation allocates additional intermediate strings on every iteration. And finally, what if my data structure gets more complicated (which in this case, there's no doubt it will)? If the array becomes multi-dimensional, this code gets even more gnarly.

Today, with the arrival of Javascript frameworks like Prototype, jQuery and Dojo, doing this work becomes not only trivial, but much better performing. I'm going to use Prototype for my examples here, but you'll find the other frameworks only differ slightly in semantics.

// use a more descriptive structure
var myData = [{name: 'John'}, {name: 'Steve'}, {name: 'Bill'}];

// Template is a Prototype class
var template = new Template('<div>Hello, my name is #{name}.</div>');

// each is a handy Prototype array extension
myData.each(function(item) {
  // the $('foo') syntax is Prototype shorthand for getElementById.
  $('myContainer').insert(template.evaluate(item));
});

Definitely a much cleaner approach. With the old way of doing things, the code gets messier and harder to maintain as the complexity of what you have to display grows. With the template approach, the markup stays in one place and the code grows far more gracefully.