Koen Aers jBPM EclipseWorld sneak preview

Even though Koen Aers from JBoss had to be up early Thursday to give a jBPM presentation at EclipseWorld 2007, he kindly stopped by our Northern Virginia Java Users Group (NovaJUG) meeting Wednesday night to talk about business process management in general and JBoss's jBPM platform specifically.

[Photo: Koen Aers presents on jBPM at the NovaJUG meeting Nov. 7. Taken with my phone.]
JBoss jBPM is an open source business process management platform that helps separate business processes and tasks from the rest of the business logic. The platform defines a process definition language, and provides a Java library to execute and persist the business processes. It can be used in a standard Java SE application as well as deployed to servlet containers -- not just JBoss Application Server.

Business process engines can make applications easier to write, but they have received a bad reputation, Aers said. The reputation stems from the fact that most business process management systems are behemoths that take up half your hard disk and come with a steep learning curve. jBPM's core library, by contrast, is only about 500KB, not counting its Hibernate database persistence layer, and developers can learn and use just the small part of the platform they need.

BPM engines don't need to be complex. At their core, he said, business process engines boil down to the management of state: what state is each instance of a business process in at the moment, and what internal or human activities trigger a transition to a new state.
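
To make that concrete, here is a toy sketch in plain Java -- not jBPM's actual API, and the state and signal names are invented -- of what "managing state" means: a table of transitions plus an instance that records where one execution currently stands:

import java.util.HashMap;
import java.util.Map;

/** A toy process "engine": states, transitions, and the current state of one instance. */
public class ToyProcessInstance {

    // state name -> (signal name -> next state name)
    private final Map<String, Map<String, String>> transitions =
            new HashMap<String, Map<String, String>>();
    private String currentState = "start";

    public void addTransition(String from, String signal, String to) {
        Map<String, String> outgoing = transitions.get(from);
        if (outgoing == null) {
            outgoing = new HashMap<String, String>();
            transitions.put(from, outgoing);
        }
        outgoing.put(signal, to);
    }

    /** A signal -- an internal event or a human action -- moves the instance to a new state. */
    public void signal(String signal) {
        Map<String, String> outgoing = transitions.get(currentState);
        if (outgoing == null || !outgoing.containsKey(signal)) {
            throw new IllegalStateException(
                    "No transition '" + signal + "' out of state " + currentState);
        }
        currentState = outgoing.get(signal);
    }

    public String getCurrentState() {
        return currentState;
    }

    public static void main(String[] args) {
        ToyProcessInstance loanRequest = new ToyProcessInstance();
        loanRequest.addTransition("start", "submit", "awaiting approval");
        loanRequest.addTransition("awaiting approval", "approve", "approved");
        loanRequest.signal("submit");
        System.out.println(loanRequest.getCurrentState()); // prints: awaiting approval
    }
}

A real BPMS layers persistence, task lists, timers, and history on top of this, but the heart of it is exactly that current-state bookkeeping.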

Here are my notes from Koen Aers's jBPM presentation.

Why use a process language?
  • Simplifies an application by extracting the state-management logic.
  • Improves communication: process languages should support graphical modeling that maps to executable notation.
  • Provides an automatically persisted execution history that can be used for business intelligence.
What is a Business Process Management System?
  • A tool that allows an analyst to model workflows (business processes) and hand over results to a developer, who will add the details to make it executable.
  • With modeling, the more expressive the modeling notation, the harder it is to make the model executable.
  • Thus the choice of modeling notation is important.
    Popular modeling notations:
    BPMN: a pure modeling notation. No automatic translations to code.
    BPEL: The purpose is to orchestrate web services and publish result as a new web service.
    XPDL: A format for storing process models.
  • A big repository that holds executable processes, persists the execution state of the processes, and records history of what happened during the process executions.
JBoss jBPM uses its own notation, called jPDL. The jBPM architecture was built to add a "process virtual machine" on top of the Java VM. It would be responsible for executing the processes stored in jPDL, BPEL, XPDL, a web page-flow language or any similar language that defines a process flow. However, currently the engine supports only JBoss's jPDL. Bull is working to add XPDL support to jBPM using Bonita, which should be released under an LGPL license next year, Aers said.

jPDL is an XML language defined by a schema. The language is extensible to support custom business processes, and it also supports defining Java actions that can be invoked at numerous points as the business process changes state.
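
In jBPM 3, such an action is just a Java class that implements the ActionHandler interface and gets referenced from the jPDL file. The sketch below is my own illustration rather than anything shown in the presentation; the class name, variable name, and message are invented, and the ActionHandler/ExecutionContext usage reflects the jBPM 3 API as I recall it:

import org.jbpm.graph.def.ActionHandler;
import org.jbpm.graph.exe.ExecutionContext;

/** Hypothetical action referenced from a jPDL node or transition event. */
public class NotifyReviewerAction implements ActionHandler {

    private static final long serialVersionUID = 1L;

    // jBPM calls execute() when process execution reaches the point in the
    // jPDL definition that names this class.
    public void execute(ExecutionContext executionContext) throws Exception {
        // Process variables set earlier in the process are available through the context.
        Object requestId = executionContext.getVariable("requestId");
        System.out.println("Loan request " + requestId + " is ready for review");
    }
}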

Aers showed a demo of coding a business process using JBoss's visual process designer, an Eclipse plugin. The plugin lets you edit the jPDL both as XML and visually. Aers is a developer for the designer tool.

Before he started working on the jBPM designer tool, Aers said, he would code jPDL using straight XML with an editor that supported auto-completion from the XSD. The designer is primarily a marketing tool, he said, to support people's expectations of what a powerful BPMS must provide. "If you go to a presentation and you don't have a [graphical] designer, then you suck" in the customer's view, he said.

Grails with Jason Rudolph

Grails committer Jason Rudolph showed off the power of Grails at last night's Northern Virginia Java Users Group (NovaJUG) using a technique guaranteed to impress. He started with a JDK, a Grails installation, and an empty directory. In a few minutes, he had a skeletal Java web application created and functioning in a web browser, ready for enhancement. Audience members literally oohed and aahed.

[Photo: Jason Rudolph]
Grails is a rapid web application development framework for Groovy that allows Java developers to hold their heads higher when Rails advocates gloat about their Ruby framework. Grails provides both a development environment to make coding easier by generating code, and an MVC web framework with a servlet front controller, domain objects that easily tie into a database, and tag libraries for scripting Groovy Server Pages. "It's absolutely everything you need, soup to nuts, to start building a web application," he said.

Grails is in release 0.6, with a 1.0 release candidate currently in development. Because code is written in Groovy, Grails offers syntax enhancements and features unavailable directly in Java, while still allowing full Java integration and reuse of existing Java libraries. Like Rails, Grails favors convention over configuration to minimize configuration files and tedious coding. It builds upon Spring and Hibernate.

During his presentation, Jason showed how to create domain classes and controller classes, and how to customize GSP pages. He added constraints to domain values with simple declarations in the domain class, and showed how to change the default error messages displayed on the web page when constraint validation fails. It was refreshing to watch him make a simple change to a Groovy source file and then see the result just by reloading the web page. No build step. No deployment step. In development mode, Grails watches for file changes and performs the build and deployment steps for you, he said. Grails writes regular messages to the log file when it performs this hidden work, to remind you not to use this feature in production.

Jason touched upon some of the similarities and differences between Grails and Ruby on Rails. The philosophy of persistence is different. Grails considers your domain class to be the source of record. It talks to the database to create and update tables during development. Rails considers the database the source of record for domain entities, and creates domain classes appropriately. You can switch the database auto-create features off if you're using an existing database, he said. The Grails development team is looking at using Middlegen in a future release to generate domain objects from an existing database schema, he said. Jason said a quality shared by Grails and Rails is they both work best when used on greenfield projects without an existing database or codebase.

Other features provided directly by Grails or through plugins:
  • Custom URL mapping
  • Alternate domain implementations, such as mapping domain objects to EJB3 entity beans
  • Exposing actions as web services
  • Web page flow
  • Many taglibs
  • Authentication and authorization
  • Integrating a search engine (e.g. Lucene).
Future releases are expected to add support for:
  • An object-relational mapping language
  • Using JSP custom tag libraries inside GSP pages
  • JPA integration
  • Generating the domain classes from the database, as mentioned above.
In addition to working on Grails, Jason is a principal at consulting company Relevance LLC and the author of Getting Started With Grails (available for free download). He'll be speaking next week in London at Grails eXchange, including a session on using EJB entity beans with Grails.

Slides and code from his presentation are available on his website.

Creating a simple rules engine using the Java scripting API

Part 2 of my IBM developerWorks article, Invoke dynamic languages dynamically, creates a simple rules engine using the Java scripting API. Business rules, written in a combination of Ruby, Groovy, and JavaScript, determine whether a borrower qualifies for a variety of home loans. I used a rules engine as a sample application because it seemed more compelling than another hello-world application, and it also seemed like an interesting use of the scripting API.

The Java scripting API, also known as JSR-223, is a viable basis for a rules engine when a full-blown business rules engine isn't needed, because it offers several of the benefits you get from a regular rules engine. For instance, when business rules are stored as external scripts, the scripting API:
  • Allows you to work easily with large sets of rapidly changing rules
  • Allows frequent and flexible additions and changes to rules
  • Separates rules from processing logic
  • Centralizes rules and makes them easier to manage
The Java scripting API fulfills those design goals because scripting code can be kept external to the main Java application, and can be discovered, read and invoked at run time. These same advantages are provided by rules-engine products such as Drools, Jess, or JRules. However, you derive additional advantages by using scripting languages to hold your rules and the Java scripting API to invoke them:
  • Easy to program: Use a scripting language -- or several -- of your choice
  • Free and easy to set up (partially built into Java SE 6)
  • Small number of required external dependencies
  • No need to learn a complex declarative business-rules language. For example, here's a sample from Drools:
    rule "Approve if not rejected"
    salience -100
    agenda-group "approval"
    when
    not Rejection()
    p : Policy(approved == false, policyState:status)
    exists Driver(age > 25)
    Process(status == policyState)
    then
    log("APPROVED: due to no objections.");
    p.setApproved(true);
    end
    
What would the design of a rules engine based on the Java scripting API look like? The ScriptMortgageQualifier class in part 2 of my article shows one such design. It stores business objects that the external rules will use in decision-making in the ScriptEngine's context, and receives rule execution results in a separate shared Java object stored in the ScriptEngine context. Rules (scripts) are responsible for storing results of their decisions in the shared Java object, which the main Java code inspects after the rules are run to determine what action to take.
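
Here's a bare-bones sketch of that pattern -- not the actual ScriptMortgageQualifier code from the article -- with a made-up borrowerIncome value and RuleResult holder, and a JavaScript rule written inline instead of loaded from a file:

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class TinyRuleRunner {

    /** Hypothetical shared object the rules write their decision into. */
    public static class RuleResult {
        private boolean qualified;
        private String reason;
        public void setQualified(boolean qualified) { this.qualified = qualified; }
        public boolean isQualified() { return qualified; }
        public void setReason(String reason) { this.reason = reason; }
        public String getReason() { return reason; }
    }

    public static void main(String[] args) throws ScriptException {
        ScriptEngineManager manager = new ScriptEngineManager();
        ScriptEngine engine = manager.getEngineByName("JavaScript");

        RuleResult result = new RuleResult();
        engine.put("borrowerIncome", Integer.valueOf(85000)); // business data for the rule
        engine.put("result", result);                         // where the rule records its decision

        // Normally this script would be read from an external rule file.
        String rule =
            "if (borrowerIncome > 50000) {" +
            "  result.setQualified(true);" +
            "  result.setReason('Income above minimum');" +
            "}";
        engine.eval(rule);

        // The main application inspects the shared result after the rules run.
        System.out.println(result.isQualified() + ": " + result.getReason());
    }
}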

In my sample application, I use individual files to store the rules. The application scans the rules directory on each pass and executes whatever rule scripts it finds there. An advantage of using the Java scripting API to find the rule scripts is the rules can be written in any of dozens of languages supported by script-engine implementations. The rules engine doesn't care what language the rules are written in as long as the applicable script engine and interpreter can be loaded at runtime, such as being supplied by JARs in the classpath. In my sample, I coded rules in Groovy, JavaScript, and Ruby.
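
The lookup itself is a one-liner on ScriptEngineManager. Here's a rough sketch of that loop, assuming a hypothetical rules directory holding files named like approve.groovy, limits.js, and vip.rb; the shared business objects and results object from above would be put into each engine before the eval call:

import java.io.File;
import java.io.FileReader;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class RulesDirectoryRunner {

    public static void main(String[] args) throws Exception {
        ScriptEngineManager manager = new ScriptEngineManager();
        File[] ruleFiles = new File("rules").listFiles(); // hypothetical rules directory

        for (File ruleFile : ruleFiles) {
            String name = ruleFile.getName();
            String extension = name.substring(name.lastIndexOf('.') + 1);

            // The extension picks whichever engine is registered for it; Groovy,
            // JRuby, etc. are found only if their JARs are on the classpath.
            ScriptEngine engine = manager.getEngineByExtension(extension);
            if (engine == null) {
                System.err.println("No script engine for " + name + "; skipping");
                continue;
            }
            // Shared business objects and the results object would be put() here.
            engine.eval(new FileReader(ruleFile));
        }
    }
}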

Another possible way of structuring rule logic would be to have the rules themselves set additional attributes that other rules could then use (that is, learn from). For instance, say one set of rules runs and determines that the prospective home purchaser has a bank balance of $10 million. The rule could set a property (a global script variable) called VIP (very important person) to true. As a global variable, the property would be available in the ScriptEngine context and passed along to the next rule to be run. That next rule could use different logic based on the fact that this borrower is a VIP.

The above example begins to reveal the shortcomings of designing a rules engine around the scripting API. Most formal rules engines treat all rules as being in effect at all times. Setting a fact such as "customer has VIP status" in one rule should be taken into consideration by all other rules to determine whether that new fact changes other facts. But satisfying that feature by invoking external rules stored as scripts would require script writers to order the rules in the proper sequence. Trying to sequence your business rules correctly to account for fact dependencies is error prone -- and impossible when the rules have mutual dependencies. This limitation, requiring rules to run in a proper sequence, is exactly where you would want to consider using a better rules engine.

Rule sequencing isn't the only disadvantage to executing rules stored as external scripts. Writing business rules in Groovy, Ruby, or another scripting language has several drawbacks:
  • Rules in scripting languages are written imperatively rather than declaratively
  • Complex business logic written imperatively might require deeply nested conditional statements, which makes the rules hard to read and prone to error
  • To avoid the above problem of coding deeply nested if-then statements in your script, you might be tempted to write code that processes a decision table -- reinventing the wheel built by better rules engines
  • The temptation to write your business rules in multiple scripting languages could become a maintenance headache
In other words, the Java scripting API will not always work as the best solution when your application needs a rules engine. However, the Java scripting API allows business rules to be stored externally, to be written in a language that probably is easier to read than Java, and lets the rules change regularly and fairly easily without having to rebuild your application. If you don't mind writing your business rules in a procedural language instead of a dedicated, declarative rules language, the scripting API could be a good solution. It fills the gap between those times when writing business rules as Java code inside your application has gotten out of hand and when graduating to a fully fledged rules engine isn't yet necessary.

If you're trying to decide whether your application calls for a dedicated rules engine, the Jess website has a good article, Some Guidelines For Deciding Whether To Use A Rules Engine.

Returning from Ruby or JavaScript called from the Java Scripting API

Since the Java Scripting API makes it easy to execute external scripts written in a variety of dynamic languages, I tried to find a consistent way to return early from top-level code written in JavaScript and Ruby. My goal was to be able to structure short Ruby and JavaScript scripts by coding everything at the "top level," that is, outside of any defined function, method, or class. That way, the Ruby or JavaScript scriptlets would be easier to write and I could eval them from Java without having to call a specific function or method by name.

After hunting around, I found no simple or easy way a JavaScript or Ruby script could return early from being evaluated when the scripting code is outside of a function or method. A return statement is not allowed outside a function in JavaScript, nor is it allowed outside a method in Ruby. The only consistent language feature I found that guaranteed early script exit was for the code to throw an exception.

If you're unfamiliar with the Java Scripting API (JSR-223, Scripting for the Java Platform), it was added in Java Standard Edition 6 to provide a consistent way to embed scripting-language interpreters into a Java application. The API's javax.script package contains classes and interfaces that let you call and share data with an external script written in any of dozens of scripting languages, including powerful dynamic languages like Ruby and Groovy. The Java Scripting API is based primarily on the Apache Jakarta Bean Scripting Framework project, but provides extra features and is now built into the Java platform. You can use the Scripting API in Java 1.5 by adding the new packages, which are available by downloading the JSR-223 reference implementation.

Here is what I set out to accomplish.

I wanted to be able to pass Java objects to scripts written in Ruby and JavaScript and let those scripts process the shared Java objects. The goal was to take advantage of the cleaner, more concise syntax these languages offer and allow end-users the ability to supply the Ruby and JavaScript code. That was why I didn't want to require script providers to code their logic inside a method or function. But by placing all code at the top level, the script writer would have no language feature available to return early from script processing.

For example, the Java code that called the script would look something like:
// Java objects to share with the scripts:
String textToProcess = ... // Text for scripts to process
int myStatus = ...         // Some type of status indicator
// etc.
ScriptEngineManager scriptEngineMgr = new ScriptEngineManager();
ScriptEngine rubyEngine = scriptEngineMgr.getEngineByName("ruby");
rubyEngine.put("textToProcess", textToProcess);
rubyEngine.put("status", Integer.valueOf(myStatus));
// ...
// Put a shared object the script will use to return results.
ResultsObject result = new ResultsObject();
rubyEngine.put("result", result);
// Read Ruby script from external source and execute it
String rubyScript = ...
rubyEngine.eval(rubyScript);
// Read results set by the script.
Long resultCode = result.getResultCode();
// etc...
The Ruby script would look something like:
# Don't process the text if the status is greater than 200
if $status > 200
  return   # <-- This is illegal Ruby!
end
# Process the $textToProcess text...
...
although the conditions under which the script writer would want to exit could be a lot more complicated and might not be easily structured around a single if-else statement.

The problem here is the Ruby script has no simple, clear way to prevent the entire script from being run, short of raising an exception. It is possible to work around the problem by requiring the script to be coded inside of a method. You also could require script writers to code around the problem by wrapping all code inside a needless outer loop and using a break statement to serve the purpose of a return statement.

The above code could thus be replaced by:
1.times do
  # Don't process the text if the status is greater than 200
  if $status > 200
    break # This does work.
  end
  # Process the $textToProcess text...
  ...
end
An extra outer loop should work for JavaScript, too.

The problem with using an outer loop to provide a script return is that it requires the script writer to code the loop. That solution violates my goal of making the scripts as easy as possible to write -- and read.

My eventual solution, which I'm not satisfied with, was to allow the script to perform the equivalent of a top-level return statement by throwing an exception. To make the solution more palatable and cleaner for the script writer, I created a Java class that would throw the actual exception. The Java class also permits the script to return an optional reason message when exiting.

Here is the revised Java code that would call the scripts:
// Java objects to share with the scripts:
String textToProcess = ... // Text for scripts to process
int myStatus = ...         // Some type of status indicator
// etc.
ScriptEngineManager scriptEngineMgr = new ScriptEngineManager();
ScriptEngine rubyEngine = scriptEngineMgr.getEngineByName("ruby");
rubyEngine.put("textToProcess", textToProcess);
rubyEngine.put("status", Integer.valueOf(myStatus));
// ...
// Put a shared object the script will use to return results.
ResultsObject result = new ResultsObject();
rubyEngine.put("result", result);
// Add an object scripts can call to exit early from processing.
rubyEngine.put("scriptExit", new ScriptEarlyExit());
// Read Ruby script from external source and execute it
String rubyScript = ...
rubyEngine.eval(rubyScript);
// Read results of the script.
Long resultCode = result.getResultCode();
// etc...
The Java code now supplies all scripts with a ScriptEarlyExit object they can use to invoke the equivalent of a return statement. Here is the ScriptEarlyExit class:
/** Object passed to all scripts so they can indicate an early exit. */
public class ScriptEarlyExit {

    public void withMessage(String msg) throws ScriptEarlyExitException {
        throw new ScriptEarlyExitException(msg);
    }

    public void noMessage() throws ScriptEarlyExitException {
        throw new ScriptEarlyExitException(null);
    }
}
The ScriptEarlyExitException class is a simple Exception subclass:
/** Internal exception so ScriptEarlyExit methods can exit scripts early. */
public class ScriptEarlyExitException extends Exception {

    public ScriptEarlyExitException(String msg) {
        super(msg);
    }
}
With the ScriptEarlyExit object made available to scripts by the call to rubyEngine.put("scriptExit", new ScriptEarlyExit()), any script in any language should now be able to exit early. The Ruby script revised to use the new object would be coded like:
# Don't process the text if the status is greater than 200
if $status > 200
  $scriptExit.with_message 'Not processing because of invalid status'
end
# Continue processing
...
The Java method call from the script provides a consistent, fairly clean way to return early from script processing. I tested calling this ScriptEarlyExit object from Ruby using JRuby 1.0, from JavaScript using the Rhino interpreter built into Sun's Java 1.6, and from Groovy 1.0. It worked well with them all.

This solution did require solving another problem. Using a Java exception to end script processing means the script engine is going to bubble up a javax.script.ScriptException back to Java. I needed a way to determine whether that exception was a real ScriptException or my fake ScriptEarlyExitException.

The solution was to check the script exception message to see if my special exception was embedded in the string. The code ended up looking like:
try {
    rubyEngine.eval(rubyScript);
} catch (ScriptException se) {
    // Re-throw exception unless it's our early-exit exception.
    if (se.getMessage() == null ||
            !se.getMessage().contains("ScriptEarlyExitException")) {
        throw se; // a real ScriptException
    }
    // Set script result message if early-exit exception embedded.
    // Will not work with Java 6's included JavaScript engine.
    Throwable t = se.getCause();
    while (t != null) {
        if (t instanceof ScriptEarlyExitException) {
            result.setExitMessage(t.getMessage());
            break;
        }
        t = t.getCause();
    }
}
The catch block examines the exception's message for the "ScriptEarlyExitException" string, and ignores the ScriptException if found. The code in the catch block then looks to see if one of the causes of the ScriptException was the ScriptEarlyExitException. If so, the ScriptEarlyExitException exception's message string will hold the value set when the script called the withMessage method on the shared ScriptEarlyExit object. That is, when Ruby calls:
$scriptExit.with_message 'Not processing because of invalid status'
the ScriptEarlyExitException.getMessage() call will contain the string "Not processing because of invalid status". The catch clause sets that string on the ResultsObject's exitMessage property using the code:
result.setExitMessage(t.getMessage());
As the comment in the above code indicates, retrieving the "exit" message from the Rhino JavaScript engine doesn't work, or at least finding and parsing the exit string out of the resulting ScriptException is more tedious. That's because the Rhino script engine does not wrap the caught Java exception into the resulting exception chain. With Rhino, the loop:
Throwable t = se.getCause();
while (t != null) {
    if (t instanceof ScriptEarlyExitException) {
        result.setExitMessage(t.getMessage());
        break;
    }
    t = t.getCause();
}
never finds a ScriptEarlyExitException.

As I mentioned, this solution of having scripts call a method on a shared Java object in order to exit script processing early by throwing an exception isn't elegant. But it does work to let scripts execute the equivalent of a top-level "return" statement. This solution likely will work with other JSR-223 scripting engines besides the ones I tested. It seems, though, that there must be a better way. Groovy, by the way, permits a return statement in top-level code. That's pretty nice.

Are you a Ruby or JavaScript pro with a better solution? Is there an easier way for Ruby or JavaScript to return from a script even when the script code is outside a method/function? If you would like to share better techniques, please post a comment here or email me at the address shown in the right-hand column under the "Feedback" heading. If you post a comment on this blog, I ask your forgiveness in that comments are moderated before appearing, but there is no indication of that when you click the "Post" button.

Still using StringBuffer? That’s sooo Java 1.4

Pop quiz: Hashtable is to HashMap as StringBuffer is to ... <fill in the blank>

Answer: StringBuilder.

I recently worked on a Java project where the target environment was Java 1.5. Although Java 1.5 has been out for almost three years, the client was just upgrading to it to take advantage of its language features and APIs.

While working on the project, I noticed most developers continued to use the StringBuffer class when StringBuilder would have been the better choice. In asking around, most developers said they were unaware of StringBuilder.

In case you're using Java 1.5 or 1.6 but not yet using StringBuilder, StringBuilder is an unsynchronized version of the tried-and-true StringBuffer class. Most of StringBuffer's public methods are synchronized so that multiple threads can safely read and modify the string. But since a StringBuffer is almost always used to build up a string within a single method, or over several method calls in a single-threaded context, that synchronization is overkill. An article in Dr. Dobb's Journal in June 2006 estimated that switching from StringBuffer to StringBuilder could speed string building by 38%.

That's why Sun added StringBuilder to the language in JDK 5. None of StringBuilder's methods is synchronized, so the class is not meant to be used when multiple threads need to access the string. In multi-threaded contexts, you will want to use StringBuffer. But consider your own code. How many times have you needed to share a StringBuffer between multiple threads? You'll probably find that StringBuilder is often the better choice.
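
Here's the typical single-threaded case, a hypothetical helper method, where StringBuilder is the drop-in choice; the append calls are exactly the ones StringBuffer offers:

public class CsvLineBuilder {

    // Building a string inside one method: no other thread ever sees the builder,
    // so StringBuffer's synchronization would buy nothing here.
    public static String toCsvLine(String[] fields) {
        StringBuilder line = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) {
                line.append(',');
            }
            line.append(fields[i]);
        }
        return line.toString();
    }

    public static void main(String[] args) {
        System.out.println(toCsvLine(new String[] {"amount", "rate", "term"}));
        // prints: amount,rate,term
    }
}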

Eclipse 3.2 JUnit runner gets confused connecting to server?

I opened an Eclipse project today, ran a unit test, and got a socket exception I'd never seen before. The project was one I had set aside a few weeks ago after playing with the NetBeans 6 preview release.

After opening the project in Eclipse, I went straight to one of the JUnit test classes, made a small tweak to one of the test methods, then hit my usual Alt-Shift-X + T keyboard shortcut to run the test case with JUnit. Instead of seeing a green or red bar, Eclipse just sat there staring at me, saying it was running the test class with JUnit. The console view showed the "terminate" button in bright red, indicating the run was proceeding, albeit at an exceedingly slow pace. After about 30 seconds, the console displayed:
Could not connect to:  : 3393
java.net.ConnectException: Connection refused: connect
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:519)
at java.net.Socket.connect(Socket.java:469)
at java.net.Socket.<init>(Socket.java:366)
at java.net.Socket.<init>(Socket.java:179)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.connect(RemoteTestRunner.java:560)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:377)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
A socket connection error? I was just trying to run a local JUnit test, not connect with any remote server.

My first theory was I must have been playing with remote debugging for this application a few weeks ago and configured Eclipse to connect with a remote JVM. I spent a minute going through the Eclipse configuration for the JUnit test to check out its settings. I saw nothing set for any remote JUnit connection. (I'm not even sure Eclipse's JUnit runner can do that.) Everything looked right, so I ran the test again and got the same connection refused exception.

My second theory was that I hadn't rebuilt the application since upgrading to JSE 1.6.0_01 from 1.6.0, and that Eclipse was doing its best to find a running 1.6.0 JVM to connect with. (This seemed far-fetched, but a rebuild only took a couple of seconds.) A rebuild didn't solve the problem.

My third theory was I had been using NetBeans for so long I must have forgotten how to run the JUnit test in Eclipse. Perhaps I was telling Eclipse to debug a remote application instead of running JUnit. I ran the test again, this time through the menu option. No luck.

That sent me searching the web for the solution. I found it pretty quickly, but not the underlying reason behind the problem.

The solution was to restart Eclipse. Why this worked I don't know, since I had just launched Eclipse minutes before. Apparently the JUnit runner thread in Eclipse attaches to an Eclipse server thread to run the tests. It would seem the client thread was trying to connect to the wrong port (3393) or that the server thread that had been listening on port 3393 for runtime requests failed. Either way, I would have expected Eclipse to log the error. Strangely, the only item in the Eclipse error log said:
Warnings while parsing the commands from the 'org.eclipse.ui.commands'
and 'org.eclipse.ui.actionDefinitions' extension points.
with a sub-message saying:
Commands should really have a category: plug-in='org.codehaus.groovy.eclipse',
id='org.codehaus.groovy.eclipse.debug.ui.testShortcut.debug',
categoryId='org.eclipse.debug.ui.category.debug'
Well, I did recently install the Groovy plugin. Did that cause the problem? If so, Eclipse thinks not being able to connect with the JUnit runtime is just a warning?

Anyone have the real answer as to what caused Eclipse to get so confused while trying to launch the JUnit runner? None of the web pages I viewed talking about the problem mentioned the cause for the failure.

Sun Tech Days in D.C. a Mini JavaOne

I spent last Thursday at the International Trade Center in Washington attending Sun Tech Days 2007, the last stop in a 15-city world technology tour showcasing what's new in Java and Solaris. Here are some highlights of the day, and notes from the keynote address by Sun Microsystems CEO Jonathan Schwartz. Overall, the day was like a mini JavaOne: exposure to new technologies without actually teaching you how to use them. With most technical sessions lasting just 50 minutes, exposure is about all one can expect.

Session highlights

  • Sun considers GlassFish a production-ready JEE 5 application server.
    I hadn't been following the GlassFish project, so it was good to learn about its relative maturity. GlassFish V2 (in beta) adds clustering support.
  • Java 6 added features to JMX to make managed-bean development easier using annotations.
    I learned this in an aside during a JMX talk that focused on JMX features in Java 5. Yes, Java 5 has been out for almost three years, but Sun treats it like new technology because most companies and developers haven't migrated from 1.3 or 1.4. And it was good to hear Sun advocating and explaining JMX because developers could benefit from using its instrumentation and monitoring features in their applications, but the JMX learning curve has always been steep.
  • jMaki tries to simplify Ajax development by unifying the APIs of popular Ajax libraries.
    jMaki provides JSP tags that help you call Ajax components from other Ajax frameworks, such as Dojo, Script.aculo.us, Yahoo UI Widgets, and Google's Ajax framework.
  • Sun is focusing more on JRuby than Groovy because of Rails.
    This isn't actually new, but it was refreshing to hear a Sun engineer acknowledge that Sun's newfound excitement over JRuby is based on Rails' current sexiness quotient, and that attention to other JVM languages like Groovy likely will suffer a little as a result.
  • Web 2.0 is still a vague concept.
    After I left the session on "AJAX and Web 2.0 Frameworks," two attendees mentioned that they still had no idea what Web 2.0 means. The speaker never once defined it, and she left no time at the end of her presentation for questions.
  • Sun engineer evangelists can get pretty tired after a 15-city world tour.
    One engineer played music from his laptop, drowning out part of another engineer's talk, and joked that audience members should use the corners of the room to relieve themselves. Another engineer went through his slides with the enthusiasm of a cow chewing cud, saying things like "as you can see in the code here" while the actual code was hidden because his NetBeans display had only about 40% of the real-estate showing the code window, and he was too tired to open the window or scroll the code to the right.

Jonathan Schwartz's keynote address

Someone who didn't lack enthusiasm was CEO and President Jonathan Schwartz. He started the morning with a keynote address giving his vision for Sun's future. Schwartz is a thought-provoking speaker whose confidence makes you want to believe in his vision for success. As much as I think some of Sun's products are pretty cool, I still wonder whether Sun will succeed in differentiating its products sufficiently to win over the market.

[Photo: Sun Microsystems CEO Jonathan Schwartz speaks at Sun Tech Days 2007 on June 7 in Washington, D.C.]
Schwartz's talk focused on market trends and how Sun will be there to satisfy the needs sparked by those trends. The number of consumers accessing the Internet keeps growing, he said, at the same time the cost of opening an Internet business keeps falling. "Barriers to entry have plummeted to near nothing," he said. As consumers go online, they'll want to store photos, post blogs, access bank accounts, and other services. The continuing growth of Internet consumers and businesses will require ever-better and larger network infrastructure, servers and storage--and Sun hopes to be there selling those products.

Schwartz said the growth of Internet access will come mostly from consumers using mobile devices. More of the world uses a phone to access the Internet than a PC, he said. The United States is an anomaly in so many people owning expensive computers, he said. If you're going to meet the world's demand for mobile online access, "you're going to need to figure out how to work with that."

One way Sun is going to work with the growth of mobile Internet devices will be to develop a new mobile-phone software platform called JavaFX. JavaFX, which includes JavaFX Mobile, is intended to make developing mobile phone software easier, and will run existing Java ME applications. Sun announced JavaFX at JavaOne last month, and released JavaFX Script, a new declarative scripting language to build "rich content" applications. Schwartz said Sun will release JavaFX as free open source software. The license will be partly restrictive. The JavaFX website says handset manufacturers will need to purchase an OEM license to embed JavaFX on their devices. Perhaps JavaFX will solve some of the difficulties developing mobile applications for Java ME, where every application has to be customized for nearly every possible device.

Schwartz spoke at some length of Sun's commitment to open source software. He said Sun spent $500 million to release Solaris as open source, both in staff time and the cost of procuring intellectual property. Sun also provides a free open source Java development kit, a free IDE (NetBeans), a free JEE server (GlassFish), and other open source projects that must cost Sun a large amount of money.

"Unfortunately the most important
audience we have to serve has
no money and spends no money.
It's you. And we love you."
--Jonathan Schwartz
Sun's large investment in the free, open source software business is still one area I haven't figured out. My question has always been whether Sun will ever generate enough goodwill or product synergies to sell hardware to go along with that free software. Schwartz seemed to acknowledge the difficulty of turning the gift of software into sales of profitable hardware: "Unfortunately the most important audience we have to serve has no money and spends no money," Schwartz said. "It's you [the developer]," he said with a laugh, "and we love you."

Will the love from the developer community transform into money? For example, Schwartz said that 70% of Sun's free Solaris operating system is installed on non-Sun hardware, like Dells and HPs. He said Solaris's exposure beyond Sun hardware opens the door to new customers. But do companies or developers installing Solaris for free on an x86 platform ever end up buying Sun blade servers, Sun disk arrays or Sun tape storage?

[Photo: Jonathan Schwartz at Sun Tech Days]
He concluded by saying Sun will succeed by migrating away from commodity products toward innovative, market-leading products. Schwartz said the marketplace is split into segments growing faster than Moore's Law and those growing slower, which I took to mean that the capabilities or innovations in some market segments are doubling every two years or so. The segments growing faster than Moore's Law, he said, are the consumer Internet, high-performance computing, simulations/analytics, and software as a service. Those growing slower are payroll, ERP, general ledger, and CRM. Products in the "slower" market segments will get cheaper every year.

Although he acknowledged Sun makes its payroll selling commodity products attractive to the "slower" market segments, the growth (and profit) is in the faster segments. "If you're in our business," he said, "you don't want to hang out too long at the bottom."

For innovative products, he mentioned Sun's focus on power-efficient hardware, its new Sun Fire X4500 storage server with 24 terabytes capacity at less than $2 per gigabyte, and its portable, self-contained data center that fits inside a standard shipping container (Project Blackbox). The primary costs in running a data center today, he said, are people to run them, real estate, and electricity, in that order. Sun is therefore focusing on products that reduce the need for human intervention, fit in a smaller area, and consume less power.

Will the focus on high-margin, innovative hardware help Sun succeed? As a longtime Java developer, I have a special warm feeling for Sun. Sun gave us the Java platform. Sun donated the popular Tomcat web application server to open source. And Sun's recent attention back to Java developers has been heartening, with its major improvements to NetBeans, its development of GlassFish, and Java's new dynamic/scripting-language support, including the development of Ruby as a first-class JVM language. Yet most of what I heard from Schwartz on Thursday was that giving us all this software for free costs a lot of money, and that Sun's focus needs to be on selling innovative hardware. What I didn't hear Schwartz explain was why Sun is focusing on free, open source software for developers -- how it helps Sun's bottom line -- so I didn't come away with a strong feeling that the support will continue.

Josh Bloch’s Java puzzlers tonight in D.C.

I just heard that Joshua Bloch from Google will be in downtown D.C. tonight presenting "Java Puzzlers' Greatest Hits." If you haven't seen his presentation at a conference or JUG meeting, I highly recommend attending. Bloch throws Java code snippets or questions related to Java up on the wall, and audience members puzzle-out the answers. I learned things about Java that surprised me when I first heard his talk in 2004. The simple snippets of Java code in his questions often don't do precisely what you'd think -- because they won't compile, because of primitive integer overflow, because the code stumbles into a collections corner case, or other Java language subtlety.
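
To give a taste of the integer-overflow variety (my own trivial example, not one of Bloch's actual puzzlers): guess what this prints before running it.

public class OverflowTeaser {
    public static void main(String[] args) {
        // Looks like it should print a positive number...
        System.out.println(Math.abs(Integer.MIN_VALUE));
        // ...but it prints -2147483648, because the positive value 2147483648
        // doesn't fit in an int, and the negation overflows back to MIN_VALUE.
    }
}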

Since his 2004 talk, Bloch has written a book on the puzzlers, Java Puzzlers: Traps, Pitfalls, and Corner Cases (Addison-Wesley, 2005) with Neal Gafter -- and also left Sun Microsystems to join Google as chief Java architect.

Here are the details of Bloch's presentation, hosted by Google. RSVP to pittsburgh@google.com.

When: 6:00pm
Date: Today, Tuesday, March 27
Where:
The Renaissance Mayflower Hotel, Senate Room
1127 Connecticut Ave. NW
Washington, DC 20036
Metrorail: Farragut North or Farragut West (head north on Connecticut)
Hotel phone: 202-347-3000

Eclipse gave a surprising left jab while unboxing

I'm working on a project for a client to integrate an existing web-based mortgage application to work with a large mortgage-loan consolidator. The existing application has a large code base originally targeted for Java 1.3. We needed to create an integration API and wanted to take advantage of some of the concurrency classes introduced in Java 1.5. The client gave approval to use 1.5 for the new code.

Upgrading went smoothly. We had to rename an existing package that included the new enum keyword, but the 1.3 code easily upgraded to 1.5. Being able to use 1.5 was nice because I like the generics support, the simplified "for-each" syntax to iterate over collections, and the simplified concurrent package. Many of the new concurrency features also are available for earlier versions of Java in the backport-util-concurrent library. I like Eclipse's support for 1.5, such as typing "foreach[Ctrl-SPACE]" and having Eclipse make a pretty good guess at filling in the simplified for-each loop code, including the collection reference to iterate over, and picking a good name for the temporary iterator variable.

For example, if you start a method:
public CreditReport getTriMergeForBorrower(
        Borrower borrower, Set<CreditReport> creditReports
) {
and you want to loop over the creditReports to search for the correct one for this borrower, you can type:
foreach[Ctrl-SPACE]
and Eclipse will replace that with:
for (CreditReport report : creditReports) {
}
Eclipse looks "up" in the code to find the nearest iterable, fills in the correct type for the iteration temporary variable, and gives the temporary variable a reasonable name. Pretty nice. But as the title of this entry implies, I got an unexpected jab from Eclipse, or more accurately, my reliance on it.

Eclipse is aware of another "new" feature in 1.5: autoboxing and unboxing. Autoboxing is the term for the Java compiler allowing you to write code that treats primitives as their equivalent object types (an int treated as if it were a java.lang.Integer, for example). Auto-unboxing is the opposite: using an object when the expression calls for a primitive. If you haven't been able to use Java 1.5 on a project, here's a short introduction.

With autoboxing, you can write code like this (from the above-referenced Sun website):
public static void main(String[] args) {
    Map<String, Integer> m = new TreeMap<String, Integer>();
    for (String word : args) {
        Integer freq = m.get(word);
        m.put(word, (freq == null ? 1 : freq + 1));
    }
    System.out.println(m);
}
Notice how it appears you can pass an int as the parameter to be inserted into the TreeMap. Even though you really can't put a primitive into a collection like a map, the compiler (not the Java runtime) corrects this "incorrect" coding by inserting hidden code to create an Integer object to wrap the primitive int value. Autoboxing makes the code look a little cleaner: Let the compiler do the work rather than the programmer.
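
For the curious, here is roughly what the boxed put line in that loop desugars to -- my hand-written approximation, wrapped in a hypothetical countWord method, of the calls the compiler inserts:

import java.util.Map;

class BoxingDesugared {
    // Roughly what the compiler generates for "m.put(word, (freq == null ? 1 : freq + 1))".
    static void countWord(Map<String, Integer> m, String word) {
        Integer freq = m.get(word);
        if (freq == null) {
            m.put(word, Integer.valueOf(1));                   // box the int literal 1
        } else {
            m.put(word, Integer.valueOf(freq.intValue() + 1)); // unbox, add, re-box
        }
    }
}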

I've known you could write autoboxing code like the above for a couple of years. In August 2004, two months before Java 1.5's general release, I attended a talk by Joshua Bloch and Neal Gafter at the Denver Java Users Group introducing the new features in Java 1.5 (Tiger). But what I hadn't considered was that you sometimes could write auto-unboxing code without realizing it. It happened to me with code like:
if (creditReport != null && creditReport.isTriMerge()) {
    // do some processing
}
When testing the code, it threw a NullPointerException. When I saw the stack trace, I said Huh? The creditReport reference obviously isn't null when invoking isTriMerge, so where did the NPE come from? A moment later, it hit me. The isTriMerge method must be returning a Boolean, not a boolean as I had assumed when I looked at the business object's API using Eclipse's Ctrl-SPACE to show options. I let Eclipse fill in the isTriMerge method name when I typed the if statement. Eclipse didn't complain about the syntax, so my natural assumption was a method named "isXXX" would return a boolean.

Instead, the CreditReport business object uses object wrappers for all its primitive-value fields, so when the object is persisted in the database, it can use NULL values to indicate "unspecified." The getters all return objects.

When writing code that takes advantage of unboxing, you can end up creating seemingly odd statements like:
if (creditReport.isTriMerge() != null &&
        creditReport.isTriMerge()
) {
    // Do something
}
where you check a value for null and whether the value is true. It looks unusual, compared to the pre-Java 1.5 version:
if (creditReport.isTriMerge() != null &&
        creditReport.isTriMerge().booleanValue()
) {
    ....
which is what the compiler is actually inserting into the .class file's bytecode.

Unexpected NullPointerExceptions occur when you don't realize that your "primitive" value is really an object being dereferenced inside hidden code. I found this big discussion of automatic unboxing on TheServerSide from three years ago, debating whether unboxing is a good thing.

So, live and learn. I don't see autoboxing/unboxing as bad. Programmers just need to be cognizant of whether a variable holds a reference to a primitive object wrapper rather than a true primitive value, and whether a method returns an object rather than a primitive. And if you want to be cautious, you can ask Eclipse to warn you about possible unboxing problems. Eclipse (I'm using 3.2) has a Java code setting: Window | Preferences | Java | Compiler | Errors/Warnings | Potential programming problems | Boxing and unboxing conversions. You can tell Eclipse to flag boxing/unboxing in the code as a warning or error. That way, you're less likely to receive an unexpected jab from hidden unboxing code. Eclipse's default is to ignore boxing and unboxing in the code as long as it's legal syntax. (I'm sure IDEA and NetBeans can do the same thing. If you're an IDEA or NetBeans user and want to post those IDE equivalents in a comment, please do.)

Ajax Architecture with Stuart Halloway

When should you use Ajax? Whenever you want to create a rich client application with the universal reach of the Internet. Stuart Halloway, speaking Friday afternoon at this fall's Northern Virginia Software Symposium, predicts Ajax will be part of nearly all web applications within the next year. Stuart's first session of the day focused on the architectural issues involving Ajax, from technical features to selling Ajax to management. In 90 minutes he discussed:
  • the reasons to use Ajax
  • how to introduce Ajax at your company
  • the resistance you'll encounter when you do
  • the tools and libraries to use in Ajax development
  • the architectural decisions you'll need to make
  • how Ajax isn't a panacea for all applications
Stuart demonstrated Ajax in action by showing how to build several incarnations of a web form in which, once you enter a U.S. Zip code, the city and state fields fill in automatically via an asynchronous JavaScript call to the server. Low-tech stuff compared to Google Maps, but simple enough to demonstrate several ways to get the job done.

The interest in Ajax has been increasing over the last couple of years not because of a breakthrough in technology, Stuart said, but because of a breakthrough in how we look at using asynchronous JavaScript to make web forms more dynamic. Instead of having web user-interface developers deal with the vagaries of different browser versions, different implementations of JavaScript, different implementations of Cascading Style Sheets, and different implementations of the web page document object model, Jesse James Garrett showed them in February 2005 how to hide browser differences behind an adaptive interface provided by a library like Prototype or Dojo. Garrett coined the term Ajax in his seminal paper, "Ajax: A New Approach to Web Applications." Of course, it also helps, Stuart added, that the functionality provided by Internet Explorer and Firefox has converged over the years.

The reason to use Ajax in your web applications is to create a better experience for your users. Ajax allows your web page to communicate to the server in order to update the page "behind the user's back," making the application more responsive to the user's actions without having to reload the page.

The best way to introduce Ajax at your company, he said, is in non-core web applications. Depending on your company's culture, Stuart said, you can sell Ajax either as proven technology -- XML, HTTP requests, and JavaScript -- or by saying "Ajax is the revolution and we're all on board." When introducing Ajax, he said, stay "degradable." That is, ensure your web application still works if the user turns off JavaScript or uses a browser that doesn't support it. The fewer negative issues you create, the more the benefits will shine through and convince others to introduce Ajax into more web applications. If you want to be conservative, he said, wait until the web MVC frameworks, like JavaServer Faces, provide better support for Ajax in their page widgets.

Stuart mentioned several open source development tools and JavaScript libraries to use in your Ajax applications:
  • Firefox
    Consider the Firefox browser (with its extensions that follow) your development platform and Internet Explorer as your deployment platform, Stuart said.
  • JavaScript Shell
    a Firefox bookmarklet that allows you to dynamically run JavaScript statements against your current page in a debugging window. This tool is useful, Stuart said, "for poking around the page to figure out what's broken."
  • FireBug
    a Firefox add-on with debugging features to monitor your page's JavaScript, CSS, and HTML. One feature allows you to spy on all HTTP traffic JavaScript functions send to the server.
  • Web Developer
    a Firefox add-on toolbar that allows you to disable JavaScript and cookies, view and modify a page's CSS, view a page's generated source rather than the HTML originally loaded, and use a host of other useful development tools.
  • Tamper Data
    a Firefox add-on that logs all web navigation. It not only allows you to see what requests and responses are traveling between the Ajax components and the server, but it allows you to modify them or completely stop the request and see how the application reacts.
Ajax Libraries

Stuart mentioned several Ajax libraries. Since JavaScript libraries generally don't trample on each other, he said, you can often use more than one in a web application.
  • Prototype
    A survey at an Ajaxian conference showed that more than half of the Ajax developers were using Prototype, Stuart said. Most of the rest were using Scriptaculous (next). Prototype allows you to register multiple event handlers to events (Event.observe()). You can register events outside of the HTML widget to allow you to separate concerns: your HTML page designer doesn't have to worry about coding the JavaScript events. Prototype also provides the Ajax.Request function that works as a factory to return the appropriate function that works with the user's browser version.
  • Scriptaculous
    A library built on Prototype to provide page effects (highlight, fade), drag and drop, auto-complete and other features.
  • Dojo
    This is a heavy-weight "kitchen sink" library, Stuart said, that provides almost everything you need for an Ajax application.
  • Google Web Toolkit
    This toolkit allows Java developers to build front-end components in Java. The toolkit converts the Java code to JavaScript and HTML.
  • Direct Web Remoting
    Stuart said DWR is No. 3 in popularity, behind Prototype and Scriptaculous. It works much like Java RMI: you write JavaScript stubs that run in the browser and make RMI-style calls to the server. It assumes you have Java running on the server side.
Introducing Ajax into an application raises two key architectural questions, Stuart said. These are the questions to think about early and thoroughly because changing the answers later could undermine previous development work.
  1. What to send on the wire?
    When JavaScript calls to the server for updated information, does the server send back HTML, XML, JavaScript, or JSON? HTML is view centric, JavaScript is code-centric, and XML and JSON are model-centric. (See also this blog for a discussion.)

    Stuart said he believes 95% of all Ajax traffic soon will be HTML. It's developer-efficient (no parsing or XML creation), and the snippet of HTML can simply be rendered on the page. JSON will dominate applications that focus on data exchange because it is easier to parse than XML. XML will be the loser. Returning JavaScript lets you send code for the browser to evaluate.

  2. What library to use?
    Choose a library that supports the features you need. The library should hide the browser differences in the XMLHttpRequest. Prototype does this and builds upon JavaScript to make it more like a regular programming language.

    Use Scriptaculous if you need to add page effects and you'd like to use its widgets.

    Dojo is the library beloved by Java programmers, Stuart said. It's bigger than all the other libraries, but it does more. Its API provides fixes for Back-button issues, client-side data storage, and other features.

    Stuart dislikes Google Web Toolkit. "I think it's architecturally wrong," he said, but I think I zoned out when he described the reasons for his dislike.

Although Ajax is useful, it won't be a panacea for all applications, Stuart said. Learning to design with Ajax correctly will involve some of the same growing pains the development community learned in creating n-tier applications. Stuart likened the future growing pains in the Ajax world to the pains Java programmers suffered in implementing n-tier web applications using EJB. "We're going to make a ton of mistakes" implementing Ajax applications, he said. "Asynchronous is hard. Asynchronous is as hard as threads, except you don't have an API in front of you reminding you how hard it is."