I started a new Rails project last week. The customer had done an unusually good job of working out the site look-and-feel ahead of time, so my first day on the project I grabbed the HTML mockup of the first page we were going to implement and turned it into a Rails app. It was still static HTML -- there was no code at all in the application, and no database yet -- but the page was being rendered by the new Rails application. I carved the page up into a layout, main template, and appropriate partials, and got ready to start implementing features the next day.
But before I called it a day, I had an idea that seemed promising. I spent about 10 minutes (certainly not more than 15) going through that HTML looking for anything representing a feature that needed to be implemented. That meant links and buttons, plus anything that needed to be generated dynamically. Whenever I found something like that, I added a CSS class to the markup: "unimpl".
Then, using Chris Pederick's Web Developer Toolbar extension in Firefox, it became a simple matter to quickly highlight all of the not-yet-implemented features in the application:
Just use CSS selector syntax in the dialog to select all elements with class "unimpl":
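As a sketch (the markup here is hypothetical, not from the actual project), an annotated mockup might look like this, with a matching rule you could drop into a development-only stylesheet if you'd rather not click through the toolbar:

```html
<!-- Hypothetical mockup fragment: the "unimpl" class marks anything
     that still needs real code behind it -- links, buttons, and
     placeholders for dynamically generated content. -->
<ul class="actions">
  <li><a href="#" class="unimpl">Export to CSV</a></li>
  <li><button class="unimpl">Delete selected</button></li>
  <li><span class="unimpl">(last-updated date goes here)</span></li>
</ul>

<!-- A development-only rule that highlights the same elements the
     toolbar's ".unimpl" selector finds. -->
<style>
  .unimpl { outline: 2px dashed red; }
</style>
```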
Here's what the result looks like (demonstrated with the Streamlined sample application rather than my current project):
(By the way, those features are really implemented in Streamlined; I just marked that application up as an example.)
This provides a great, easy way of keeping track of things that remain to be done. As I implement those features, I replace static HTML with generated HTML, and I just delete the "unimpl" class as I go. Sometimes I partially implement a large part of the page; in that case, I move "unimpl" from the surrounding element to the individual pieces inside that I haven't completed yet.
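For instance (again with made-up markup), a partially implemented sidebar might go from one big unimplemented block to a couple of smaller ones:

```html
<!-- Before: the whole sidebar is still static mockup HTML. -->
<div id="sidebar" class="unimpl">
  ...
</div>

<!-- After: the recent-items list is now generated by Rails, so
     "unimpl" moves inward to the pieces that are still static. -->
<div id="sidebar">
  <ul id="recent-items">
    <!-- now rendered by a partial -->
  </ul>
  <form class="unimpl" action="#">
    <input type="text" name="q"> <button>Search</button>
  </form>
  <p class="unimpl">(review count goes here)</p>
</div>
```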
This is no substitute for requirements, or an issue tracking system, or anything like that. But it's already proven very convenient as I implement features, trying to decide what's next. It's particularly nice when I have just a few minutes available … maybe not enough time to tackle a big feature, but I can quickly scan to see if there are any small things I could fix.
It takes just a little bit of discipline and consistency on the part of the development team, but it was really easy to annotate that first mockup, and now that the layout and commonly used partials are done, doing subsequent mockups will be even easier.
This morning I became annoyed by the three-click process to highlight those elements using the toolbar (how lazy is that?) so I hacked TextMate Footnotes to add a quick toggle link in the footnotes div at the bottom of the page:
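The toggle itself doesn't need much. Something along these lines (a hand-rolled sketch, not the actual Footnotes patch) is the kind of link that can go in the footnotes div -- clicking it flips a class on the body, and a stylesheet rule highlights the marked elements only while that class is present:

```html
<!-- Sketch of a toggle link for the footnotes div. The highlight rule
     only applies while <body> carries the "show-unimpl" class. -->
<style>
  .show-unimpl .unimpl { outline: 2px dashed red; }
</style>
<a href="#"
   onclick="document.body.classList.toggle('show-unimpl'); return false;">
  Toggle unimplemented features
</a>
```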
If you're interested in that, let me know. If it's useful to people, I'll polish it up and submit it for inclusion in footnotes.
For a while now I've been using Duane Johnson's TextMate Footnotes plugin with my Rails development. It's been the biggest boost to my productivity since I started using Rails. I kind of assumed that most Rails developers were using it, but apparently it's not as widely known as it should be.
I first learned about footnotes from Geoff Grosenbach, when he interviewed me last year for the Ruby on Rails Podcast. Geoff mentioned that it links lines from a Rails stack trace (displayed in the browser in development mode) so that clicking will open the appropriate file in TextMate, positioned to the correct line. Nice!
But it does more than that. When there is no error and your page renders correctly, the plugin adds "footnotes" to it: a little div at the bottom of the page with useful features for development and diagnosis:
You can show the contents of the current session, current cookies, the parameters passed to the controller, and the last 200 lines of the Rails log. But the most useful things are the links that ask TextMate to open the controller, view, layout, and other important files that Rails used to build that page. (It only does all this when the app is running in development mode, of course.)
If you're doing Rails development on OS X, install it this way:
$ script/plugin install -x http://macromates.com/svn/Bundles/trunk/Bundles/Rails.tmbundle/Support/plugins/footnotes
If you aren't on OS X, I know there are ways to define URL schemes like the "txmt" scheme TextMate defines on OS X. What are you waiting for? Arrange for "txmt:" URLs to open your favorite editor appropriately, install footnotes, modify it (to remove the "only on OS X" code), and have fun!
It sounded intriguing, but I really didn't have time to investigate it. Then, at the Lone Star Software Symposium in early November, Daniel Steinberg mentioned FIT and how interesting it is. And last week, Mike Clark sent me a draft of some early work he's done with FIT. By the rule of three, then, I suddenly had to spend the time on it ... when three people of that caliber are talking about something, it deserves attention.
And FIT does deserve attention. Ward designed it to address one of the more difficult parts of Extreme Programming: the idea that customers should specify -- and ideally write -- automated acceptance tests. FIT is a fascinating approach to that problem. Naturally, the programmers must help, but they help in very small ways, primarily by writing tiny, simple adapter classes that hook application objects into FIT.
FIT is still in the early stages, and there are numerous problems to be solved. But it has the potential to work really well, at least partly because it is simple and adaptable rather than feature-complete and all-encompassing. If you haven't looked at it and played with it, make the time to do so.
I sent back that Mike Clark had written about CruiseControl and an alternative, AntHill, earlier in the week. "I'm on it," Daniel replied. As I was looking up Mike's blog entry, I noticed that Erik was the one who told Mike about AntHill.
What I didn't know was that AntHill was written by another guy we know, Maciej Zawadzki. Also unbeknownst to me, Maciej was sitting behind me. And later in the talk, Erik got around to mentioning both CruiseControl and AntHill. Fun.
One part of the discussion centered on the decision to build each version of the pet store according to the vendor's recommended "best practices". The result is that neither version of the app has an architecture I would recommend to clients -- but more importantly, the .NET application is optimized for raw performance, while the J2EE application is aimed more at flexibility, maintainability, and robustness.
Jason Hunter had a really nice suggestion for the next round: instead of measuring the performance of the two implementations, let's set up two development teams, one for each version of the application, neither of them having seen the code before. Give them new features to implement, and see how long it takes. Given the existing code quality (poor) and some of the choices made in the J2EE version, it still wouldn't be a really representative test. But it would at least be interesting, which is more than you can say about the original.
The meeting Tuesday night was a discussion of tools, and Dave started it off with the following intentionally provocative statement: "It is impossible to do quality software engineering with an IDE." Of course, there were a few people who took issue with that, but we quickly converged on something we could all agree on: while IDEs offer a lot of useful tools, you should never allow the IDE to shape your project. The project should assume its natural shape, and the tools should support that. We also agreed that the fastest way to go down the wrong path in that regard is to let the IDE handle builds.
The problem, of course, is that IDEs tend to impose a particular view of project structure. And more often than not, they only support particular compilation tools. So if you want to build part of your system using another tool, or a custom code generator, that part of your build can't be automated by the IDE. So you have to make a choice between automation and the natural technologies for developing your system. You can't win unless you take the build process away from the IDE and use a more general-purpose tool.
There's always a tradeoff between special-purpose and general-purpose tools. Special-purpose tools can make life a lot easier when your needs match what they were designed for, but they aren't easily adapted to other situations. General-purpose tools are never quite as easy as a specialized tool at its best, but they're life savers when you're thrown a curve ball.
I think the choice between specialization and generality is particularly important for software development. Part of this is because even software projects that seem superficially similar tend to differ in many surprising respects. (In other words, software projects nearly always throw you a curve ball.) Additionally, there is the problem that we still don't understand software development very well in any fundamental sense, so folks building specialized tools often make mistakes when deciding where to restrict flexibility. (Another of Duncan's recent blog entries aims straight at this, and scores a bullseye -- I don't know how many copies of that Gosling paper I've printed and given to clients over the past few years.)
A great example of the way seemingly benign restrictions can bite you later is make. That program is shockingly general, at least if you measure that by the number of uses to which it's been put that the designer never envisioned. But it has one unfortunate limitation -- a bias toward operations within a single directory -- that renders it a poor choice for a task very close to its design center: building Java programs.
As Duncan points out, EJBs were designed for very narrow problems, and might be very good for those problems. (I'm not so sure even that's true, but that's another argument.) But they enforce a very narrow, specialized programming model. No threads. No ... well, no lots of things. You give up a lot to program in an EJB container, and that might be worth it if your problems are close enough to EJB's strengths that you really get a lot in return. But as Mike points out, most systems are warped by EJBs, without seeing any compensating benefit.
It's instructive to contrast EJBs with the other core element of J2EE: servlets. Within the externally imposed constraints of the request/response style of HTTP, they are nearly as general-purpose as you can get. "Here's the request: deal with it and get back to me with a response." There are a lot of other things that servlet containers can do for a servlet, but the servlet has to ask. And by comparison with EJBs, servlets are broadly applicable and extremely successful. (After that exercise, if you want to get depressed, think about the technology that was sacrificed on the EJB altar: Jini.)
When looking at a software development technology, favor generality over specialized tools that claim to make things easy. Unless the tool is aimed at a very well understood domain, and your system is squarely within that domain, chances are you'll regret going the special-purpose route.
I don't know how many years it's been since Alan Kay said that, and people still don't listen.
I'm working on an application now that uses a Swing-based client, deployed using JNLP (i.e., Java Web Start). The client talks to a SOAP-based web service. It's written using Apache SOAP, which has worked out pretty well, but for various reasons I've been looking into reimplementing the web service part using Sun's JAX-RPC.
Some background: JNLP-based apps are deployed from a web server, and launched (the first time, at least) from a web page. But they're full-fledged Java apps that get downloaded as JAR files and run on the client machine. So the best way to package it is to put the whole client -- the JAR files, the HTML pages, and the JNLP file -- in the WAR file with the server-side stuff. And that's the way our build system works. We compile the whole thing, build the client JAR, and then build the WAR that includes the client.
In JAX-RPC (at least, in Sun's implementation of it) they've tried to make the process easy. The service is defined as an RMI interface, and the implementation also fits RMI conventions. So far, so good. Then you compile the service definition and the implementation class. Then things get bad. In an effort to make this all simple, they've gone too far. The next step is that you package your server and a SOAP mapping file into a WAR file, and pass that WAR to a tool called "wsdeploy". That tool reads your WAR, generates stub and tie classes that are used to hook the client and server code into the SOAP infrastructure, and then repackages your server, along with the tie classes and the server-side SOAP support, into a brand new WAR file.
All of that has to be done before you have the stub files that you need to properly compile your client code. So our nice, clean, 3-step build process (compile, package client, package server) grows to include, among other things, a wsdeploy step that unpacks the intermediate WAR, generates the stubs and ties, and builds a brand new WAR.
That'll sure shorten the edit/compile/debug cycle and keep my Ant scripts clean.
It turns out that there is another tool, xrpcc, that generates the stubs and ties in a more conventional way, but I haven't been able to find any documentation for it. It, too, uses an XML-based config file to define the SOAP mapping -- a different flavor of XML, and equally undocumented as far as I can tell.
I'm all in favor of a good out-of-the-box experience. But it's frustrating to hit a brick wall when you try to move beyond that to real work.