Grails with Jason Rudolph

Grails committer Jason Rudolph showed off the power of Grails at last night's Northern Virginia Java Users Group (NovaJUG) using a technique guaranteed to impress. He started with a JDK, a Grails installation, and an empty directory. In a few minutes, he had a skeletal Java web application created and functioning in a web browser, ready for enhancement. Audience members literally oohed and aahed.

[Photo: Jason Rudolph]
Grails is a rapid web application development framework for Groovy that allows Java developers to hold their heads higher when Rails advocates gloat about their Ruby framework. Grails provides both a development environment to make coding easier by generating code, and an MVC web framework with a servlet front controller, domain objects that easily tie into a database, and tag libraries for scripting Groovy Server Pages. "It's absolutely everything you need, soup to nuts, to start building a web application," he said.

Grails is in release 0.6, with a 1.0 release candidate currently in development. Because Grails code is written in Groovy, the framework offers syntax enhancements and features unavailable directly in Java, while still allowing full Java integration and reuse of existing Java libraries. Like Rails, Grails favors convention over configuration to minimize configuration files and tedious coding. It builds upon Spring and Hibernate.

During his presentation, Jason showed how to create domain classes and controller classes, and how to customize GSP pages. He added constraints to domain values with simple declarations in the domain class, and showed how to change the default error messages displayed on the web page when constraint validation fails. It was refreshing to watch him make a simple change to a Groovy source file and then see the result just by reloading the web page. No build step. No deployment step. In development mode, Grails watches for file changes and performs the build and deployment steps for you, he said. Grails writes regular messages to the log file when it performs this hidden work to remind you not to use this feature in production.

Jason touched upon some of the similarities and differences between Grails and Ruby on Rails. The philosophy of persistence is different. Grails considers your domain class to be the source of record. It talks to the database to create and update tables during development. Rails considers the database the source of record for domain entities, and creates domain classes appropriately. You can switch the database auto-create features off if you're using an existing database, he said. The Grails development team is looking at using Middlegen in a future release to generate domain objects from an existing database schema, he said. Jason said a quality shared by Grails and Rails is that they both work best when used on greenfield projects without an existing database or codebase.

Other features provided directly by Grails or through plugins:
  • Custom URL mapping
  • Alternate domain implementations, such as mapping domain objects to EJB3 entity beans
  • Exposing actions as web services
  • Web page flow
  • Many taglibs
  • Authentication and authorization
  • Search-engine integration (e.g., Lucene)
Future releases are expected to add support for:
  • An object-relational mapping language
  • Using JSP custom tag libraries inside GSP pages
  • JPA integration
  • Generating the domain classes from the database, as mentioned above.
In addition to working on Grails, Jason is a principal at consulting company Relevance LLC and the author of Getting Started With Grails (available for free download). He'll be speaking next week in London at Grails eXchange, including a session on using EJB entity beans with Grails.

Slides and code from his presentation are available on his website.

Creating a simple rules engine using the Java scripting API

Part 2 of my IBM developerWorks article, Invoke dynamic languages dynamically, creates a simple rules engine using the Java scripting API. Business rules, written in a combination of Ruby, Groovy, and JavaScript, determine whether a borrower qualifies for a variety of home loans. I used a rules engine as a sample application because it seemed more compelling than another hello-world application, and it also seemed like an interesting use of the scripting API.

The Java scripting API, also known as JSR-223, is a viable basis for a rules engine when a full-blown business rules product isn't needed, because it offers several of the same benefits. For instance, when business rules are stored as external scripts, the scripting API:
  • Allows you to work easily with large sets of rapidly changing rules
  • Allows frequent and flexible additions and changes to rules
  • Separates rules from processing logic
  • Centralizes rules and makes them easier to manage
The Java scripting API fulfills those design goals because scripting code can be kept external to the main Java application, and can be discovered, read and invoked at run time. These same advantages are provided by rules-engine products such as Drools, Jess, or JRules. However, you derive additional advantages by using scripting languages to hold your rules and the Java scripting API to invoke them:
  • Easy to program: Use a scripting language -- or several -- of your choice
  • Free and easy to set up (partially built into Java SE 6)
  • Small number of required external dependencies
  • No need to learn a complex declarative business-rules language. For example, here's a sample from Drools:
    rule "Approve if not rejected"
        salience -100
        agenda-group "approval"
        when
            not Rejection()
            p : Policy(approved == false, policyState : status)
            exists Driver(age > 25)
            Process(status == policyState)
        then
            log("APPROVED: due to no objections.");
            p.setApproved(true);
    end
    
What would the design of a rules engine based on the Java scripting API look like? The ScriptMortgageQualifier class in part 2 of my article shows one such design. It stores the business objects the external rules need for decision-making in the ScriptEngine's context, and receives rule execution results through a separate shared Java object, also stored in the ScriptEngine context. Rules (scripts) are responsible for recording their decisions in the shared Java object, which the main Java code inspects after the rules are run to determine what action to take.
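
The shared-object design described above can be sketched in a few lines of Java. This is a minimal illustration, not the article's actual ScriptMortgageQualifier code; the RuleRunner and RuleResult names are stand-ins I've invented.

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class RuleRunner {

    /** Shared result object that rule scripts populate with their decisions. */
    public static class RuleResult {
        private boolean approved;
        private String message;
        public void setApproved(boolean a) { approved = a; }
        public boolean isApproved()        { return approved; }
        public void setMessage(String m)   { message = m; }
        public String getMessage()         { return message; }
    }

    /** Evaluates one rule script, exposing the borrower data and the shared
        result object to the script through the engine's context. */
    static RuleResult runRule(ScriptEngine engine, String script, Object borrower)
            throws ScriptException {
        RuleResult result = new RuleResult();
        engine.put("borrower", borrower);  // input the rule reads
        engine.put("result", result);      // output the rule writes
        engine.eval(script);               // the rule records its decision in 'result'
        return result;
    }

    public static void main(String[] args) throws ScriptException {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("javascript");
        if (js == null) {                  // engine availability varies by JDK
            System.out.println("No JavaScript engine on this JVM");
            return;
        }
        RuleResult r = runRule(js, "result.setApproved(true);", new Object());
        System.out.println(r.isApproved());
    }
}
```

After the rules run, the main Java code inspects the RuleResult to decide what to do next, exactly as the design above describes.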

In my sample application, I use individual files to store the rules. The application scans the rules directory on each pass and executes whatever rule scripts it finds there. An advantage of using the Java scripting API to find the rule scripts is the rules can be written in any of dozens of languages supported by script-engine implementations. The rules engine doesn't care what language the rules are written in as long as the applicable script engine and interpreter can be loaded at runtime, such as being supplied by JARs in the classpath. In my sample, I coded rules in Groovy, JavaScript, and Ruby.
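
A rough sketch of that scan-and-dispatch loop might look like the following, using getEngineByExtension to match each rule file to an installed engine. The class name and directory layout here are my own assumptions, not the article's code.

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import java.io.File;
import java.nio.file.Files;

public class RuleDirectoryScanner {

    /** Returns a file name's extension ("rb", "groovy", "js", ...), or "" if none. */
    static String extensionOf(String fileName) {
        int dot = fileName.lastIndexOf('.');
        return (dot < 0) ? "" : fileName.substring(dot + 1);
    }

    /** Executes every rule script found in rulesDir, silently skipping files
        whose language has no script engine installed on this JVM. */
    static void runRules(File rulesDir) throws Exception {
        ScriptEngineManager mgr = new ScriptEngineManager();
        File[] files = rulesDir.listFiles();
        if (files == null) return;  // missing, or not a directory
        for (File f : files) {
            ScriptEngine engine = mgr.getEngineByExtension(extensionOf(f.getName()));
            if (engine == null) continue;  // no engine for this language
            engine.eval(new String(Files.readAllBytes(f.toPath())));
        }
    }

    public static void main(String[] args) throws Exception {
        runRules(new File(args.length > 0 ? args[0] : "rules"));
    }
}
```

Because the engine is chosen per file, dropping a new .groovy or .rb rule into the directory requires no change to the Java code.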

Another possible way of structuring rule logic would be to have the rules themselves set additional attributes that other rules could then use (that is, learn from). For instance, say one set of rules runs and determines that the prospective home purchaser has a bank balance of $10 million. The rule could set a property (a global script variable) called VIP (very important person) to true. As a global variable, the property would be available in the ScriptEngine context and passed along to the next rule to be run. That next rule could use different logic based on the fact that this borrower is a VIP.
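
Those global script variables are backed by a javax.script Bindings object, so the handoff can be demonstrated without any particular script language. In this sketch the "rules" are plain Java methods standing in for scripts, and the VIP fact mirrors the hypothetical example above; the class and method names are mine, not the article's.

```java
import javax.script.Bindings;
import javax.script.SimpleBindings;

public class SharedFacts {

    /** One Bindings object shared across rule evaluations; with a real
        ScriptEngine this would serve as the engine's global scope. */
    static Bindings facts = new SimpleBindings();

    /** A first rule decides the borrower is a VIP and records the fact. */
    static void firstRule(long bankBalance) {
        if (bankBalance >= 10_000_000L) {
            facts.put("VIP", Boolean.TRUE);
        }
    }

    /** A later rule branches on the fact the earlier rule recorded. */
    static String secondRule() {
        return Boolean.TRUE.equals(facts.get("VIP"))
                ? "offer concierge terms"
                : "offer standard terms";
    }

    public static void main(String[] args) {
        firstRule(10_000_000L);
        System.out.println(secondRule());
    }
}
```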

The above example begins to reveal the shortcomings of designing a rules engine around the scripting API. Most formal rules engines have the notion that all rules are considered to be in effect at all times. Setting a fact such as "customer has VIP status" in one rule should be taken into consideration by all rules to determine if that new fact changes other facts. But satisfying that feature by invoking external rules stored as scripts would require script writers to order the rules in the proper sequence. Trying to sequence your business rules correctly to account for fact-dependencies is error-prone -- and impossible when the rules have mutual dependencies. This limitation of requiring rules to be run in a proper sequence is certainly where you would want to consider using a better rules engine.

Rule sequencing isn't the only disadvantage to executing rules stored as external scripts. Writing business rules in Groovy, Ruby, or another scripting language has several disadvantages:
  • Rules in scripting languages are written imperatively rather than declaratively
  • Complex business logic written imperatively might require deeply nested conditional statements, which makes the rules hard to read and prone to error
  • To avoid the above problem of coding deeply nested if-then statements in your script, you might be tempted to write code that processes a decision table -- reinventing the wheel built by better rules engines
  • The temptation to write your business rules in multiple scripting languages could become a maintenance headache
In other words, the Java scripting API will not always work as the best solution when your application needs a rules engine. However, the Java scripting API allows business rules to be stored externally, to be written in a language that probably is easier to read than Java, and lets the rules change regularly and fairly easily without having to rebuild your application. If you don't mind writing your business rules in a procedural language instead of a dedicated, declarative rules language, the scripting API could be a good solution. It fills the gap between those times when writing business rules as Java code inside your application has gotten out of hand and when graduating to a fully fledged rules engine isn't yet necessary.

If you're trying to decide whether your application calls for a dedicated rules engine, the Jess website has a good article, Some Guidelines For Deciding Whether To Use A Rules Engine.

Returning from Ruby or JavaScript called from the Java Scripting API

Since the Java Scripting API makes it easy to execute external scripts written in a variety of dynamic languages, I tried to find a consistent way to return early from top-level code written in JavaScript and Ruby. My goal was to be able to structure short Ruby and JavaScript scripts by coding everything at the "top level," that is, outside of any defined function, method, or class. That way, the Ruby or JavaScript scriptlets would be easier to write and I could eval them from Java without having to call a specific function or method by name.

After hunting around, I found no simple or easy way a JavaScript or Ruby script could return early from being evaluated when the scripting code is outside of a function or method. A return statement is not allowed outside a function in JavaScript, nor is it allowed outside a method in Ruby. The only consistent language feature I found that guaranteed early script exit was for the code to throw an exception.

If you're unfamiliar with the Java Scripting API (JSR-223, Scripting for the Java Platform), it was added in Java Standard Edition 6 to provide a consistent way to embed scripting-language interpreters into a Java application. The API's javax.script package contains classes and interfaces that let you call and share data with an external script written in dozens of scripting languages, including powerful dynamic languages like Ruby and Groovy. The Java Scripting API is based primarily on the Apache Jakarta Bean Scripting Framework project, but provides extra features and is now built into the Java platform. You can use the Scripting API in Java 1.5 by adding the new packages, available by downloading the JSR-223 reference implementation.

Here is what I set out to accomplish.

I wanted to be able to pass Java objects to scripts written in Ruby and JavaScript and let those scripts process the shared Java objects. The goal was to take advantage of the cleaner, more concise syntax these languages offer and allow end-users the ability to supply the Ruby and JavaScript code. That was why I didn't want to require script providers to code their logic inside a method or function. But by placing all code at the top level, the script writer would have no language feature available to return early from script processing.

For example, the Java code that called the script would look something like:
// Java objects to share with the scripts:
String textToProcess = ... // Text for scripts to process
int myStatus = ...         // Some type of status indicator
// etc.
ScriptEngineManager scriptEngineMgr = new ScriptEngineManager();
ScriptEngine rubyEngine = scriptEngineMgr.getEngineByName("ruby");
rubyEngine.put("textToProcess", textToProcess);
rubyEngine.put("status", Integer.valueOf(myStatus));
// ...
// Put a shared object the script will use to return results.
ResultsObject result = new ResultsObject();
rubyEngine.put("result", result);
// Read Ruby script from external source and execute it
String rubyScript = ...
rubyEngine.eval(rubyScript);
// Read results set by the script.
Long resultCode = result.getResultCode();
// etc...
The Ruby script would look something like:
# Don't process the text if the status is greater than 200
if $status > 200
  return   # <-- This is illegal Ruby!
end
# Process the $textToProcess text...
...
although the conditions under which the script writer would want to exit could be far more complicated and might not be expressible as a single if-else statement.

The problem here is the Ruby script has no simple, clear way to prevent the entire script from being run, short of raising an exception. It is possible to work around the problem by requiring the script to be coded inside a method. You could also require script writers to code around the problem by wrapping all code inside a needless outer loop and using a break statement to serve the purpose of a return statement.

The above code could thus be replaced by:
1.times do
  # Don't process the text if the status is greater than 200
  if $status > 200
    break # This does work.
  end
  # Process the $textToProcess text...
  ...
end
An extra outer loop should work for JavaScript, too.

The problem with using an outer loop to provide a script return is that it requires the script writer to code the loop. That solution violates my goal of making the scripts as easy as possible to write -- and read.

My eventual solution, which I'm not satisfied with, was to allow the script to perform the equivalent of a top-level return statement by throwing an exception. To make the solution more palatable and cleaner for the script writer, I created a Java class that would throw the actual exception. The Java class also permits the script to return an optional reason message when exiting.

Here is the revised Java code that would call the scripts:
// Java objects to share with the scripts:
String textToProcess = ... // Text for scripts to process
int myStatus = ...         // Some type of status indicator
// etc.
ScriptEngineManager scriptEngineMgr = new ScriptEngineManager();
ScriptEngine rubyEngine = scriptEngineMgr.getEngineByName("ruby");
rubyEngine.put("textToProcess", textToProcess);
rubyEngine.put("status", Integer.valueOf(myStatus));
// ...
// Put a shared object the script will use to return results.
ResultsObject result = new ResultsObject();
rubyEngine.put("result", result);
// Add an object scripts can call to exit early from processing.
rubyEngine.put("scriptExit", new ScriptEarlyExit());
// Read Ruby script from external source and execute it
String rubyScript = ...
rubyEngine.eval(rubyScript);
// Read results of the script.
Long resultCode = result.getResultCode();
// etc...
The Java code now supplies all scripts with a ScriptEarlyExit object they can use to invoke the equivalent of a return statement. Here is the ScriptEarlyExit class:
/** Object passed to all scripts so they can indicate an early exit. */
public class ScriptEarlyExit {
    public void withMessage(String msg) throws ScriptEarlyExitException {
        throw new ScriptEarlyExitException(msg);
    }
    public void noMessage() throws ScriptEarlyExitException {
        throw new ScriptEarlyExitException(null);
    }
}
The ScriptEarlyExitException class is a simple Exception subclass:
/** Internal exception so ScriptEarlyExit methods can exit scripts early */
public class ScriptEarlyExitException extends Exception {
    public ScriptEarlyExitException(String msg) {
        super(msg);
    }
}
With the ScriptEarlyExit object made available to scripts by the call to rubyEngine.put("scriptExit", new ScriptEarlyExit()), any script in any language should now be able to exit early. The Ruby script revised to use the new object would be coded like:
# Don't process the text if the status is greater than 200
if $status > 200
  $scriptExit.with_message 'Not processing because of invalid status'
end
# Continue processing
...
The Java method call from the script provides a consistent, fairly clean way to return early from script processing. I tested calling this ScriptEarlyExit object from Ruby using JRuby 1.0, from JavaScript using the Rhino interpreter built into Sun's Java 1.6, and from Groovy 1.0. It worked well with them all.

This solution did require solving another problem. Using a Java exception to end script processing means the script engine is going to bubble up a javax.script.ScriptException back to Java. I needed a way to determine whether that exception was a real ScriptException or my fake ScriptEarlyExitException.

The solution was to check the script exception message to see if my special exception was embedded in the string. The code ended up looking like:
try {
    rubyEngine.eval(rubyScript);
} catch (ScriptException se) {
    // Re-throw exception unless it's our early-exit exception.
    if (se.getMessage() == null ||
            !se.getMessage().contains("ScriptEarlyExitException")) {
        throw se; // a real ScriptException
    }
    // Set script result message if early-exit exception embedded.
    // Will not work with Java 6's included JavaScript engine.
    Throwable t = se.getCause();
    while (t != null) {
        if (t instanceof ScriptEarlyExitException) {
            result.setExitMessage(t.getMessage());
            break;
        }
        t = t.getCause();
    }
}
The catch block examines the exception's message for the "ScriptEarlyExitException" string, and ignores the ScriptException if found. The code in the catch block then looks to see if one of the causes of the ScriptException was the ScriptEarlyExitException. If so, the ScriptEarlyExitException exception's message string will hold the value set when the script called the withMessage method on the shared ScriptEarlyExit object. That is, when Ruby calls:
$scriptExit.with_message 'Not processing because of invalid status'
ScriptEarlyExitException.getMessage() will contain the string "Not processing because of invalid status". The catch clause copies that string into the ResultsObject's exitMessage property using the code:
result.setExitMessage(t.getMessage());
As the comment in the above code indicates, retrieving the "exit" message from the Rhino JavaScript engine doesn't work -- or at least, finding and parsing the exit string out of the resulting ScriptException is more tedious. That's because the Rhino script engine does not chain the original Java exception as a cause of the resulting ScriptException. With Rhino, the loop:
Throwable t = se.getCause();
while (t != null) {
    if (t instanceof ScriptEarlyExitException) {
        result.setExitMessage(t.getMessage());
        break;
    }
    t = t.getCause();
}
never finds a ScriptEarlyExitException.

As I mentioned, this solution of having scripts call a method on a shared Java object in order to exit script processing early by throwing an exception isn't elegant. But it does work to let scripts execute the equivalent of a top-level "return" statement. This solution likely will work with other JSR-223 scripting engines besides the ones I tested. It seems, though, that there must be a better way. Groovy, by the way, permits a return statement in top-level code. That's pretty nice.

Are you a Ruby or JavaScript pro with a better solution? Is there an easier way for Ruby or JavaScript to return from a script even when the script code is outside a method or function? If you have a better technique to share, please post a comment here or email me at the address shown in the right-hand column under the "Feedback" heading. If you post a comment, please forgive that comments are moderated before appearing; there is no indication of that when you click the "Post" button.

Still using StringBuffer? That’s sooo Java 1.4

Pop quiz: Hashtable is to HashMap as StringBuffer is to ... <fill in the blank>

Answer: StringBuilder.

I recently worked on a Java project where the target environment was Java 1.5. Although Java 1.5 has been out for almost three years, the client was just upgrading to it to take advantage of its language features and APIs.

While working on the project, I noticed most developers continued to use the StringBuffer class when StringBuilder would have been the better choice. In asking around, most developers said they were unaware of StringBuilder.

In case you're using Java 1.5 or 1.6 but not yet using StringBuilder, StringBuilder is an unsynchronized version of the tried-and-true StringBuffer class. Most of StringBuffer's public methods are synchronized so that multiple threads can safely read and modify the buffer. But since a StringBuffer is almost always used to build up a string within a method, or across several method calls in a single-threaded context, that synchronization is overkill. An article in Dr. Dobb's Journal in June 2006 estimated that switching from StringBuffer to StringBuilder could speed string building by 38%.

That's why Sun added StringBuilder to the language in JDK 5. None of StringBuilder's methods is synchronized, so the class is not meant to be used when multiple threads need to access the string. In multi-threaded contexts, you will want to use StringBuffer. But consider your own code. How many times have you needed to share a StringBuffer between multiple threads? You'll probably find that StringBuilder is often the better choice.
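
Because StringBuilder and StringBuffer share the same API, the switch is usually just a type change at the point of construction. Here is a small illustrative helper (not code from the project described) showing the typical single-threaded usage:

```java
public class JoinDemo {

    /** Joins parts with sep using an unsynchronized StringBuilder. */
    static String join(String[] parts, String sep) {
        StringBuilder sb = new StringBuilder();  // was: new StringBuffer()
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append(sep);
            sb.append(parts[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(join(new String[] {"red", "green", "blue"}, ", "));
    }
}
```

Since sb never escapes the method, no other thread can ever touch it, and StringBuffer's locking would buy nothing.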

Independence Day in D.C.

Yesterday saw another great celebration on the National Mall in Washington of our nation's declared independence. Two hundred thirty-one years ago, the Continental Congress adopted Thomas Jefferson's draft of the Declaration of Independence.

[Photo: "Thomas Jefferson" looks on as "Benjamin Franklin" reads the Declaration of Independence on the National Mall in Washington, D.C.]
We began the morning at the National Archives, where the original Declaration of Independence is stored, for the annual dramatic reading of the document by men portraying three of the original signers: John Adams, Thomas Jefferson and Benjamin Franklin. Last year, the last couple of paragraphs were read by two men of our armed forces who were wounded in Iraq or Afghanistan. One of the men suffered head injuries, and his reading was stilted and slurred, yet he bravely read through the document. It brought tears to many in the crowd assembled on the steps outside the archives and spilling out onto Pennsylvania Avenue.

This year, they brought a veteran of World War II to read the last part of the Declaration, and filmmaker Ken Burns talked about his upcoming World War II documentary, The War, which recounts the war from soldiers who fought it. I heard no mention of any active war going on, or of any of the men and women fighting in it. Iraq already seems like a war we're fighting to forget.

[Photo: The rockets' red glare lights up the boats on the Potomac River during the fireworks finale.]
We watched a little of the Independence Day parade down Pennsylvania Avenue, walked through the exhibits and listened to music at the Smithsonian Folklife Festival on the Mall, then returned home in the afternoon to watch the fireworks from our balcony.

At around 5 p.m., a lightning storm prompted police to evacuate the open areas of the Mall and the Marine Corps Memorial. Officers asked picnickers and others staking out seats for the concert and fireworks to seek shelter in the various museums and memorials. The storm passed through after about an hour, and the 8 p.m. concert at the Capitol began on time, as did the fireworks an hour later. Last year we watched the fireworks from the Lincoln Memorial. This year, we were able to enjoy the view from our home in Rosslyn.

The fireworks show was great, as usual, but this year I thought it was marred a bit by two orbiting police helicopters, one to the east of the Mall and one to the west. Security was visibly tighter this year, the terror tenor of our times.

And to put another damper on an otherwise perfect evening, three men who put on the fireworks display were hurt and burned, one seriously, when unexploded fireworks went off about 15 minutes after the finale. I was still looking toward the Lincoln Memorial and saw two or three fireworks explode at ground level. May the injured fireworkers recover fully.

Eclipse 3.2 JUnit runner gets confused connecting to server?

I opened an Eclipse project today, ran a unit test, and got a socket exception I'd never seen before. The project was one I had set aside a few weeks ago after playing with the NetBeans 6 preview release.

After opening the project in Eclipse, I went straight to one of the JUnit test classes, made a small tweak to one of the test methods, then hit my usual Alt-Shift-X + T keyboard shortcut to run the test case with JUnit. Instead of seeing a green or red bar, Eclipse just sat there staring at me, saying it was running the test class with JUnit. The console view showed the red "terminate" button in bright red, indicating the run was proceeding, albeit at an exceedingly slow pace. After about 30 seconds, the console displayed:
Could not connect to:  : 3393
java.net.ConnectException: Connection refused: connect
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:519)
at java.net.Socket.connect(Socket.java:469)
at java.net.Socket.&lt;init&gt;(Socket.java:366)
at java.net.Socket.&lt;init&gt;(Socket.java:179)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.connect(RemoteTestRunner.java:560)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:377)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
A socket connection error? I was just trying to run a local JUnit test, not connect with any remote server.

My first theory was I must have been playing with remote debugging for this application a few weeks ago and configured Eclipse to connect with a remote JVM. I spent a minute going through the Eclipse configuration for the JUnit test to check out its settings. I saw nothing set for any remote JUnit connection. (I'm not even sure Eclipse's JUnit runner can do that.) Everything looked right, so I ran the test again and got the same connection refused exception.

My second theory was that I hadn't rebuilt the application since upgrading to JSE 1.6.0_01 from 1.6.0, and that Eclipse was doing its best to find a running 1.6.0 JVM to connect with. (This seemed far-fetched, but a rebuild only took a couple of seconds.) A rebuild didn't solve the problem.

My third theory was I had been using NetBeans for so long I must have forgotten how to run the JUnit test in Eclipse. Perhaps I was telling Eclipse to debug a remote application instead of running JUnit. I ran the test again, this time through the menu option. No luck.

That sent me searching the web for the solution. I found it pretty quickly, but not the underlying reason behind the problem.

The solution was to restart Eclipse. Why this worked I don't know, since I had just launched Eclipse minutes before. Apparently the JUnit runner thread in Eclipse attaches to an Eclipse server thread to run the tests. It would seem the client thread was trying to connect to the wrong port (3393) or that the server thread that had been listening on port 3393 for runtime requests failed. Either way, I would have expected Eclipse to log the error. Strangely, the only item in the Eclipse error log said:
Warnings while parsing the commands from the 'org.eclipse.ui.commands'
and 'org.eclipse.ui.actionDefinitions' extension points.
with a sub-message saying:
Commands should really have a category: plug-in='org.codehaus.groovy.eclipse',
id='org.codehaus.groovy.eclipse.debug.ui.testShortcut.debug',
categoryId='org.eclipse.debug.ui.category.debug'
Well, I did recently install the Groovy plugin. Did that cause the problem? If so, Eclipse thinks not being able to connect with the JUnit runtime is just a warning?

Anyone have the real answer as to what caused Eclipse to get so confused while trying to launch the JUnit runner? None of the web pages I viewed talking about the problem mentioned the cause for the failure.

IBM Strikes Out in Second Life

I left the real world yesterday to "attend" a technical briefing in Second Life, hosted by IBM, on what Web 2.0 means for business. I want to congratulate IBM for experimenting with virtual worlds. But in this case, the pretend physical nature of the online briefing detracted from the message and added nothing discernible. I spent more time fighting the Second Life client application than I did listening to the IBM presenters.

[Photo: My generically clad avatar looks at the right side of the virtual stage during IBM's technical briefing.]
The technical briefing had to begin a half-hour early to allow IBM and the two- or three-dozen attendees to work out the technical kinks, including the basics like learning to walk and sit in Second Life. For those who haven't heard much about Second Life, it is a 3-dimensional online virtual world in which everyone is represented by a (usually) human-appearing image you can move around using the Second Life client application. The virtual world allows you to see and interact with real people who also are visiting Second Life. Second Life includes isolated islands and a mainland filled with various structures and objects created by Second Life owners and visitors.

Holding a meeting in a 3D virtual world promises new tools for collaboration. You could hold a main meeting, break out into smaller groups as needed while still easily rejoining the main group, share notes, share software, demo software on a virtual computer in the virtual world, and draw on whiteboards that can be stretched to fit your needs, using colored pens that never run dry. I'm unsure what capabilities Second Life provides today to do any of these things, but I don't think talking is one of them. Attendees of the IBM session had to dial into a regular conference telephone line to hear the presenters.

I say IBM struck out by holding this technical briefing in Second Life because the presenters merely talked, showed slides, and provided handouts. You don't need a 3D virtual world to do these things. The bad part was that Second Life detracted from the actual content of the briefing: I had to deal with virtual-world mechanics instead of simply listening, reading, and thinking.

First, I had trouble finding the conference room. The coordinates IBM provided took me to what looked like a sand-dune filled desert with a beautiful virtual sunset on the horizon. The only other thing I could see was one or two other virtual attendees walking around aimlessly. Flapping my arms eventually got me there. You see, the presentation was held on a platform floating in space above the ground. You had to fly up several meters to see it. (In Second Life you can fly.) Strike one.

Second, the Second Life client isn't very stable. It froze and crashed while I was trying to move around. Strike two.

Viewing the IBM slideshow in Second Life
Here I am trying to view the slides in Second Life. I captured this screenshot when
the slide was in focus.
Third, it was difficult to view the slides in Second Life. I first had to figure out which keystrokes I needed to zoom my vision onto the virtual projector screen. Once I mastered that skill, I discovered the slides took a long time to paint and focus on my (fairly powerful) PC. Sometimes a slide would just start to appear on my screen as the presenter moved on to the next one, which blanked out the slide I was frantically trying to read, and then the next slide would take 5 to 10 seconds to start painting on my virtual screen. Strike three.

Fourth, Second Life forces you to create a new name for yourself while visiting. You can choose a first name, but you have to choose from a list of Second Life family names. As a result, you can't tell who the IBM speaker at the podium is without someone translating that "Foobar Frobney" (or whatever) is really IBM employee Alfredo Gutierrez. Strike, um, three and a half.

Even though Second Life's virtual-world wasn't the best forum for this technical briefing, I want to give IBM credit for trying. Virtual reality holds promise for providing better, more natural tools for online collaboration than simple slideshows and telephone conference lines. However, IBM will need to learn to use the best tool for the job. If you are just going to talk and show slides, there are more effective technologies today than Second Life.

Sun Tech Days in D.C. a Mini JavaOne

I spent last Thursday at the International Trade Center in Washington attending Sun Tech Days 2007, the last stop in a 15-city world technology tour showcasing what's new in Java and Solaris. Here are some highlights of the day, and notes from the keynote address by Sun Microsystems CEO Jonathan Schwartz. Overall, the day was like a mini JavaOne: exposure to new technologies without actually teaching you how to use them. With most technical sessions lasting just 50 minutes, exposure is about all one can expect.

Session highlights

  • Sun considers GlassFish a production-ready JEE 5 application server.
    I hadn't been following the GlassFish project, so it was good to learn about its relative maturity. GlassFish V2 (in beta) adds clustering support.
  • Java 6 added features to JMX to make managed-bean development easier using annotations.
    I learned this in an aside during a JMX talk that focused on JMX features in Java 5. Yes, Java 5 has been out for almost three years, but Sun treats it like new technology because most companies and developers haven't migrated from 1.3 or 1.4. And it was good to hear Sun advocating and explaining JMX because developers could benefit from using its instrumentation and monitoring features in their applications, but the JMX learning curve has always been steep.
  • jMaki tries to simplify Ajax development by unifying the APIs of popular Ajax libraries.
    jMaki provides JSP tags that help you call Ajax components from other Ajax frameworks, such as Dojo, Script.aculo.us, Yahoo UI Widgets, and Google's Ajax framework.
  • Sun is focusing more on JRuby than Groovy because of Rails.
    This isn't actually new, but it was refreshing to hear a Sun engineer acknowledge that Sun's newfound excitement over JRuby is based on Rails's current sexiness quotient, and that attention to other JVM languages like Groovy likely will suffer a little as a result.
  • Web 2.0 is still a vague concept.
    After I left the session on "AJAX and Web 2.0 Frameworks," two attendees mentioned that they still had no idea what Web 2.0 means. The speaker never once defined it, and she left no time at the end of her presentation for questions.
  • Sun engineer evangelists can get pretty tired after a 15-city world tour.
    One engineer played music from his laptop, drowning out part of another engineer's talk, and joked that audience members should use the corners of the room to relieve themselves. Another engineer went through his slides with the enthusiasm of a cow chewing cud, saying things like "as you can see in the code here" while the actual code was hidden because his NetBeans display devoted only about 40% of its real estate to the code window, and he was too tired to enlarge the window or scroll the code to the right.
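The Java 6 JMX convenience mentioned in the session highlights is easy to show: the @MXBean annotation (new in Java 6) frees a management interface from the rigid "FooMBean" naming convention that Standard MBeans require. Here is a minimal sketch; the bean and attribute names are my own invention:

```java
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.MXBean;
import javax.management.ObjectName;

public class JmxAnnotationDemo {

    // Java 6 addition: @MXBean marks this as a management interface
    // even though it doesn't follow the *MBean naming convention.
    @MXBean
    public interface RequestStats {
        int getRequestCount();
    }

    public static class RequestStatsImpl implements RequestStats {
        private volatile int requestCount;
        public int getRequestCount() { return requestCount; }
        public void increment() { requestCount++; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=RequestStats");
        RequestStatsImpl stats = new RequestStatsImpl();
        server.registerMBean(stats, name);   // now browsable in JConsole
        stats.increment();
        // Read the attribute back through the MBean server
        System.out.println(server.getAttribute(name, "RequestCount"));
    }
}
```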

Jonathan Schwartz's keynote address

Someone who didn't lack enthusiasm was CEO and President Jonathan Schwartz. He started the morning with a keynote address giving his vision for Sun's future. Schwartz is a thought-provoking speaker, and his confidence makes you want to believe in his vision for success. As much as I think some of Sun's products are pretty cool, I still wonder whether Sun will succeed in differentiating its products sufficiently to win over the market.

Sun CEO Jonathan Schwartz
Sun Microsystems CEO Jonathan Schwartz
speaks at Sun Tech Days 2007 on June 7
in Washington, D.C.
Schwartz's talk focused on market trends and how Sun will be there to satisfy the needs sparked by those trends. The number of consumers accessing the Internet keeps growing, he said, at the same time the cost of opening an Internet business keeps falling. "Barriers to entry have plummeted to near nothing," he said. As consumers go online, they'll want to store photos, post blogs, access bank accounts, and other services. The continuing growth of Internet consumers and businesses will require ever-better and larger network infrastructure, servers and storage--and Sun hopes to be there selling those products.

Schwartz said the growth of Internet access will come mostly from consumers using mobile devices. More of the world uses a phone to access the Internet than a PC, he said. The United States is an anomaly in so many people owning expensive computers, he said. If you're going to meet the world's demand for mobile online access, "you're going to need to figure out how to work with that."

One way Sun is going to work with the growth of mobile Internet devices will be to develop a new mobile-phone software platform called JavaFX. JavaFX, which includes JavaFX Mobile, is intended to make developing mobile phone software easier, and will run existing Java ME applications. Sun announced JavaFX at JavaOne last month, and released JavaFX Script, a new declarative scripting language to build "rich content" applications. Schwartz said Sun will release JavaFX as free open source software. The license will be partly restrictive. The JavaFX website says handset manufacturers will need to purchase an OEM license to embed JavaFX on their devices. Perhaps JavaFX will solve some of the difficulties developing mobile applications for Java ME, where every application has to be customized for nearly every possible device.

Schwartz spoke at some length of Sun's commitment to open source software. He said Sun spent $500 million to release Solaris as open source, both in staff time and the cost of procuring intellectual property. Sun also provides a free open source Java development kit, a free IDE (NetBeans), a free JEE server (GlassFish), and other open source projects that must cost Sun a large amount of money.

"Unfortunately the most important
audience we have to serve has
no money and spends no money.
It's you. And we love you."
--Jonathan Schwartz
Sun's large investment in the free, open source software business is still one area I haven't figured out. My question has always been: will Sun ever generate enough goodwill or product synergies to sell hardware to go along with that free software? Schwartz seemed to acknowledge the difficulty of translating the gift of software into sales of profitable hardware: "Unfortunately the most important audience we have to serve has no money and spends no money," Schwartz said. "It's you [the developer]," he said with a laugh, "and we love you."

Will the love from the developer community transform into money? For example, Schwartz said that 70% of Sun's free Solaris operating system is installed on non-Sun hardware, like Dells and HPs. He said Solaris's exposure beyond Sun hardware opens the door to new customers. But do companies or developers installing Solaris for free on an x86 platform ever end up buying Sun blade servers, Sun disk arrays or Sun tape storage?

Sun CEO Jonathan Schwartz
Jonathan Schwartz at Sun Tech Days
He concluded by saying Sun will succeed by migrating away from commodity products toward innovative, market-leading products. Schwartz said the marketplace is split into segments growing faster than Moore's Law and those growing slower, which I took to mean the capabilities or innovations in some market segments are doubling every two years. The segments growing faster than Moore's Law, he said, are the consumer Internet, high-performance computing, simulations/analytics, and software as a service. Those growing slower are payroll, ERP, general ledger, and CRM. Products in the "slower" market segments will get cheaper every year.

Although he acknowledged Sun makes its payroll selling commodity products attractive to the "slower" market segments, the growth (and profit) is in the faster segments. "If you're in our business," he said, "you don't want to hang out too long at the bottom."

For innovative products, he mentioned Sun's focus on power-efficient hardware, its new Sun Fire X4500 storage server with 24 terabytes capacity at less than $2 per gigabyte, and its portable, self-contained data center that fits inside a standard shipping container (Project Blackbox). The primary costs in running a data center today, he said, are the people who run it, real estate, and electricity, in that order. Sun is therefore focusing on products that reduce the need for human intervention, fit in a smaller area, and consume less power.

Will the focus on high-margin, innovative hardware help Sun succeed? As a longtime Java developer, I have a special warm feeling for Sun. Sun gave us the Java platform. Sun donated the popular Tomcat web application server to open source. And Sun's recent attention back to Java developers has been heartening, with its major improvements to NetBeans, its development of GlassFish, and Java's new dynamic/scripting-language support, including the development of Ruby as a first-class JVM language. Yet most of what I heard from Schwartz on Thursday was that giving us all this software for free costs a lot of money, and Sun's focus needs to be on selling innovative hardware. What I didn't hear Schwartz explain was why Sun is focusing on free, open source software for developers -- how it helps Sun's bottom line -- and so I left without a strong sense that the support will continue.

Josh Bloch’s Java puzzlers tonight in D.C.

I just heard that Joshua Bloch from Google will be in downtown D.C. tonight presenting "Java Puzzlers' Greatest Hits." If you haven't seen his presentation at a conference or JUG meeting, I highly recommend attending. Bloch throws Java code snippets or questions related to Java up on the wall, and audience members puzzle out the answers. I learned things about Java that surprised me when I first heard his talk in 2004. The simple snippets of Java code in his questions often don't do precisely what you'd think -- because they won't compile, because of primitive integer overflow, because the code stumbles into a collections corner case, or some other Java language subtlety.

Since his 2004 talk, Bloch has written a book on the puzzlers, Java Puzzlers: Traps, Pitfalls, and Corner Cases (Addison-Wesley, 2005) with Neal Gafter -- and also left Sun Microsystems to join Google as chief Java architect.

Here are the details of Bloch's presentation, hosted by Google. RSVP to pittsburgh@google.com.

When: 6:00pm
Date: Today, Tuesday, March 27
Where:
The Renaissance Mayflower Hotel, Senate Room
1127 Connecticut Ave. NW
Washington, DC 20036
Metrorail: Farragut North or Farragut West (head north on Connecticut)
Hotel phone: 202-347-3000

Eclipse gave a surprising left jab while unboxing

I'm working on a project for a client to integrate an existing web-based mortgage application with a large mortgage-loan consolidator. The existing application has a large code base originally targeted for Java 1.3. We needed to create an integration API and wanted to take advantage of some of the concurrency classes introduced in Java 1.5. The client gave approval to use 1.5 for the new code.

Upgrading went smoothly. We had to rename an existing package that included the new enum keyword, but the 1.3 code easily upgraded to 1.5. Being able to use 1.5 was nice because I like the generics support, the simplified "for-each" syntax to iterate over collections, and the new java.util.concurrent package. Many of the new concurrency features are also available for earlier versions of Java in the backport-util-concurrent library. I like Eclipse's support for 1.5, such as typing "foreach[Ctrl-SPACE]" and having Eclipse make a pretty good guess at filling in the simplified for-each loop code, including the collection reference to iterate over, and picking a good name for the temporary iterator variable.

For example, if you start a method:
public CreditReport getTriMergeForBorrower(
    Borrower borrower, Set<CreditReport> creditReports
) {
and you want to loop over the creditReports to search for the correct one for this borrower, you can type:
foreach[Ctrl-SPACE]
and Eclipse will replace that with:
for (CreditReport report : creditReports) {
}
Eclipse looks "up" in the code to find the nearest iterable, fills in the correct type for the iteration temporary variable, and gives the temporary variable a reasonable name. Pretty nice. But as the title of this entry implies, I got an unexpected jab from Eclipse, or more accurately, my reliance on it.
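Filled in by hand, the finished method might look like the sketch below. This is a hypothetical illustration: the Borrower and CreditReport classes, their fields, and the getBorrower() accessor are my inventions; only the method signature and the Eclipse-generated loop come from the text above.

```java
import java.util.HashSet;
import java.util.Set;

public class TriMergeFinder {

    public static class Borrower {
        private final String name;
        public Borrower(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // Hypothetical stand-in for the real business object.
    public static class CreditReport {
        private final Borrower borrower;
        private final boolean triMerge;
        public CreditReport(Borrower borrower, boolean triMerge) {
            this.borrower = borrower;
            this.triMerge = triMerge;
        }
        public Borrower getBorrower() { return borrower; }
        public boolean isTriMerge() { return triMerge; }
    }

    // The method from the text, with the Eclipse-generated for-each
    // loop completed to find this borrower's tri-merge report.
    public static CreditReport getTriMergeForBorrower(
            Borrower borrower, Set<CreditReport> creditReports) {
        for (CreditReport report : creditReports) {
            if (report.getBorrower() == borrower && report.isTriMerge()) {
                return report;
            }
        }
        return null;   // no tri-merge report for this borrower
    }

    public static void main(String[] args) {
        Borrower ann = new Borrower("Ann");
        Set<CreditReport> reports = new HashSet<CreditReport>();
        reports.add(new CreditReport(ann, false));
        reports.add(new CreditReport(ann, true));
        CreditReport match = getTriMergeForBorrower(ann, reports);
        System.out.println(match.isTriMerge());   // prints true
    }
}
```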

Eclipse is aware of another "new" feature in 1.5: autoboxing and unboxing. Autoboxing is the term for the Java compiler allowing you to write code that treats primitives as their equivalent object types (an int treated as if it were a java.lang.Integer, for example). Auto-unboxing is the opposite: using an object when the expression calls for a primitive. If you haven't been able to use Java 1.5 on a project, here's a short introduction.

With autoboxing, you can write code like this (from the above-referenced Sun website):
public static void main(String[] args) {
    Map<String, Integer> m = new TreeMap<String, Integer>();
    for (String word : args) {
        Integer freq = m.get(word);
        m.put(word, (freq == null ? 1 : freq + 1));
    }
    System.out.println(m);
}
Notice how it appears you can pass an int as the parameter to be inserted into the TreeMap. Even though you really can't put a primitive into a collection like a map, the compiler (not the Java runtime) corrects this "incorrect" coding by inserting hidden code to create an Integer object to wrap the primitive int value. Autoboxing makes the code look a little cleaner: Let the compiler do the work rather than the programmer.
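The hidden work is nothing more than calls to the wrapper class. Here is a hand-desugared sketch of the same loop showing roughly what the compiler generates (the class and method names are mine):

```java
import java.util.Map;
import java.util.TreeMap;

public class WordFreq {

    // Desugared version of the autoboxed loop above: the compiler
    // inserts Integer.valueOf(...) to box and intValue() to unbox.
    public static Map<String, Integer> count(String[] words) {
        Map<String, Integer> m = new TreeMap<String, Integer>();
        for (String word : words) {
            Integer freq = m.get(word);
            m.put(word, (freq == null
                    ? Integer.valueOf(1)                      // boxing, written out
                    : Integer.valueOf(freq.intValue() + 1))); // unbox, add, re-box
        }
        return m;
    }

    public static void main(String[] args) {
        System.out.println(count(new String[] {"to", "be", "or", "not", "to", "be"}));
        // prints {be=2, not=1, or=1, to=2}
    }
}
```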

I've known you could write autoboxing code like the above for a couple of years. In August 2004, two months before Java 1.5's general release, I attended a talk by Joshua Bloch and Neal Gafter at the Denver Java Users Group, introducing the new features in Java 1.5 (Tiger). But what I hadn't considered was that you could sometimes write auto-unboxing code without realizing it. It happened to me with code like:
if (creditReport != null && creditReport.isTriMerge()) {
    // do some processing
}
When testing the code, it threw a NullPointerException. When I saw the stack trace, I said Huh? The creditReport reference obviously isn't null when invoking isTriMerge, so where did the NPE come from? A moment later, it hit me. The isTriMerge method must be returning a Boolean, not a boolean as I had assumed when I looked at the business object's API using Eclipse's Ctrl-SPACE to show options. I let Eclipse fill in the isTriMerge method name when I typed the if statement. Eclipse didn't complain about the syntax, so my natural assumption was a method named "isXXX" would return a boolean.

Instead, the CreditReport business object uses object wrappers for all its primitive-value fields, so when the object is persisted in the database, it can use NULL values to indicate "unspecified." The getters all return objects.

When writing code that takes advantage of unboxing, you can end up creating seemingly odd statements like:
if (creditReport.isTriMerge() != null &&
    creditReport.isTriMerge()
) {
    // Do something
}
where you check a value for null and whether the value is true. It looks unusual, compared to the pre-Java 1.5 version:
if (creditReport.isTriMerge() != null &&
    creditReport.isTriMerge().booleanValue()
) {
    // Do something
}
which is what the compiler is actually inserting into the .class file's bytecode.

Unexpected NullPointerExceptions occur when you don't realize that your "primitive" value is really an object being dereferenced inside hidden code. I found this big discussion of automatic unboxing on TheServerSide from three years ago, debating whether unboxing is a good thing.
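The trap is easy to reproduce in a few lines. In this sketch, CreditReport is a hypothetical stand-in for the real business object, with the same wrapper-typed getter that bit me:

```java
public class UnboxingNpeDemo {

    // Stand-in for the real business object: the getter returns the
    // wrapper type Boolean so a persisted NULL can mean "unspecified".
    public static class CreditReport {
        private Boolean triMerge;   // null until explicitly set
        public Boolean isTriMerge() { return triMerge; }
        public void setTriMerge(Boolean triMerge) { this.triMerge = triMerge; }
    }

    // True only when the flag is non-null and true; the extra null check
    // guards the hidden booleanValue() call the compiler inserts.
    public static boolean isTriMergeSafely(CreditReport report) {
        return report != null
                && report.isTriMerge() != null
                && report.isTriMerge();   // safe to auto-unbox here
    }

    public static void main(String[] args) {
        CreditReport report = new CreditReport();   // triMerge is still null
        try {
            // Looks null-safe, but auto-unboxing the null Boolean throws
            if (report != null && report.isTriMerge()) {
                System.out.println("tri-merge report");
            }
        } catch (NullPointerException e) {
            System.out.println("NPE from unboxing a null Boolean");
        }
        System.out.println(isTriMergeSafely(report));   // prints false
    }
}
```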

So, live and learn. I don't see autoboxing/unboxing as bad. Programmers just need to be cognizant of whether a variable holds a reference to a primitive object wrapper rather than a true primitive value, and whether a method returns an object rather than a primitive. And if you want to be cautious, you can ask Eclipse to warn you about possible unboxing problems. Eclipse (I'm using 3.2) has a Java code setting: Window | Preferences | Java | Compiler | Errors/Warnings | Potential programming problems | Boxing and unboxing conversions. You can tell Eclipse to flag boxing/unboxing in the code as a warning or error. That way, you're less likely to receive an unexpected jab from hidden unboxing code. Eclipse's default is to ignore boxing and unboxing in the code as long as it's legal syntax. (I'm sure IDEA and NetBeans can do the same thing. If you're an IDEA or NetBeans user and want to post those IDE equivalents in a comment, please do.)