It’s been several months since I’ve had a chance to update The Great Web Framework Shootout, but this weekend I decided that it was time to dig in and freshen things up a bit.
Not only have most of the frameworks seen new releases since the last revision, but I finally decided to move all of the tests over to Amazon’s “release” version of the Ubuntu LTS AMI.
Below is a quick summary of what’s new in this revision:
- All tests were performed on the updated Ubuntu LTS AMI (ami-fbbf7892 ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20110719.manifest.xml)
- The updated AMI was configured with Python 2.6.5, PHP 5.3.2, Ruby 1.9.2p290, Apache 2.2.14 (default config), mod_wsgi 2.8 (embedded mode), and mod_passenger 3.0.9
- Rails 2.x and 3.0 were dropped from the “full stack(ish)” tests in favor of Rails 3.1.
- CakePHP 1.2 was dropped from the PHP tests in favor of 1.3, but Symfony and Yii were added as they seem to have considerable market share.
- CakePHP’s caching engine was incorrectly configured during the last round of tests, and this has been corrected.
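For anyone curious what that fix involves, here is a minimal sketch of enabling CakePHP 1.3's file-based cache in `app/config/core.php`. The exact settings used in these tests aren't shown in this post, so the values below are assumptions:

```php
<?php
// app/config/core.php (CakePHP 1.3) — illustrative sketch, not the exact test config.
Configure::write('debug', 0);        // production mode; debug > 0 defeats most caching
Cache::config('default', array(
    'engine'   => 'File',            // file-backed cache; APC or Memcache are also options
    'duration' => 3600,              // cache lifetime in seconds (assumed value)
));
```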
Circle me on Google+ to keep track of further updates, and feel free to contact me there with any questions or comments.
[A lot of the information below is out of date. Please see the new framework shootout page for the latest benchmarks.]
As I briefly mentioned in Round 1, this whole thing came about as an experiment to satisfy my own curiosity. Unfortunately, I wasn’t expecting these posts to draw the amount of attention they have received, and several people informed me of a few “issues” with the first round. Since my initial approach to this topic was somewhat casual, I didn’t take the time to perform each test in a “proper scientific fashion.” Although this was clearly stated in the introduction to Round 1, it unfortunately resulted in performance estimates that were somewhat less than accurate.
After input from various people much smarter than myself, I quickly went to work tweaking my test environment and building “proper” test apps. In the midst of this, a conversation about PHP accelerators prompted me to put PHP under the spotlight, which brought about Round 2 as an interim round. That gave me a chance to demonstrate the necessity of PHP acceleration, and it only further solidified my opinion of PHP as an inferior web development language (remember, that’s just my opinion).
Which brings us to Round 3. A lot of work has gone into “doing it right” this time, so I am fairly confident that these results are a much more accurate estimate of each test subject’s performance. Remember, benchmark test code typically has no real-world value, so “performance estimates” are about all I can promise here. Your mileage will vary. As a wise person once said:
“All this benchmarking is doing is proving what we already know: More code takes longer to execute.” – Ben Bangert (dev lead of Pylons)
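Bangert’s point is easy to demonstrate outside of any web framework. Here is a minimal Python sketch (the two handlers are hypothetical, not taken from the actual test apps) timing a trivial response against one that does extra per-request work:

```python
import timeit

def lean():
    # The "hello world" case: return a constant string.
    return "Hello, World!"

def heavier():
    # Simulate extra per-request work (template-ish string building).
    parts = ["Hello"]
    for i in range(100):
        parts.append(str(i))
    return ", ".join(parts)

lean_t = timeit.timeit(lean, number=10000)
heavy_t = timeit.timeit(heavier, number=10000)
print(lean_t < heavy_t)  # True: the handler that does more work is slower
```

No framework needed: more code per request simply takes longer to execute.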