Tag Archives: software architecture

Working With JMS on JBoss Web Profile

If you haven’t worked with messaging systems yet, you’d better do so soon. Messaging is a key element in the architecture of scalable applications. It used to be a separate product, but now it’s pervasive, even natively embedded in some programming languages.

Let me help you use messaging in a JavaEE application server. The Java application code is portable between application servers, but unfortunately each application server has its own way of configuring the messaging system. Since I can’t cover all of them, I will concentrate on JBoss 7 or later (JavaEE 6/7). JBoss ships with a messaging system called HornetQ, an open source message-oriented middleware. It offers queues (point-to-point) and topics (publisher-subscriber); I’m covering only queues in this post. Queues and topics can be used within a single JavaEE application or across several applications. They are a kind of integration pattern, though sadly less popular than web services. Since I’m not dealing with integration here, I will narrow this post even further, to a queue that is accessible locally only.

I recently had to use a queue to asynchronously generate large files. Users were waiting too long for a response from the server after requesting those files, and the problem became serious when multiple users made the request simultaneously. By using a queue, I was able to generate the files asynchronously, so users no longer had to wait, and in sequence, avoiding excessive use of memory and IO.

To put some pepper on the issue, we were using the JBoss Web Profile, which doesn’t support messaging. We would have needed to migrate to the JBoss Full Profile, but that would have required migrating all development machines and all server environments; otherwise the deployment descriptor with the queue configuration would break the deployment everywhere. Migrating to the full profile would also bring along several additional services that we don’t need at all, just to consume more resources. So, I had to figure out how to make the messaging system work in the web profile.

The first idea that came to my mind was to simply identify the messaging configuration in the full profile (standalone-full.xml) and copy it to the web profile (standalone.xml). I started by adding the extension module:

<extensions>
...
<extension module="org.jboss.as.messaging"/>
...
</extensions>

and with it comes its rather long subsystem configuration:

<subsystem xmlns="urn:jboss:domain:messaging:1.4">
  <hornetq-server>
    <persistence-enabled>true</persistence-enabled>
    <journal-type>NIO</journal-type>
    <journal-min-files>2</journal-min-files>
    <connectors>
      <netty-connector name="netty" socket-binding="messaging"/>
      <netty-connector name="netty-throughput" 
            socket-binding="messaging-throughput">
        <param key="batch-delay" value="50"/>
      </netty-connector>
      <in-vm-connector name="in-vm" server-id="0"/>
    </connectors>
    <acceptors>
      <netty-acceptor name="netty" socket-binding="messaging"/>
      <netty-acceptor name="netty-throughput"
            socket-binding="messaging-throughput">
        <param key="batch-delay" value="50"/>
        <param key="direct-deliver" value="false"/>
      </netty-acceptor>
      <in-vm-acceptor name="in-vm" server-id="0"/>
    </acceptors>
    <security-settings>
      <security-setting match="#">
        <permission type="send" roles="guest"/>
        <permission type="consume" roles="guest"/>
        <permission type="createNonDurableQueue" roles="guest"/>
        <permission type="deleteNonDurableQueue" roles="guest"/>
      </security-setting>
    </security-settings>
    <address-settings>
      <address-setting match="#">
        <dead-letter-address>jms.queue.DLQ</dead-letter-address>
        <expiry-address>jms.queue.ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <max-size-bytes>10485760</max-size-bytes>
        <page-size-bytes>2097152</page-size-bytes>
        <address-full-policy>PAGE</address-full-policy>
        <message-counter-history-day-limit>
            10
        </message-counter-history-day-limit>
      </address-setting>
    </address-settings>
    <jms-connection-factories>
      <connection-factory name="InVmConnectionFactory">
        <connectors>
          <connector-ref connector-name="in-vm"/>
        </connectors>
        <entries>
          <entry name="java:/ConnectionFactory"/>
        </entries>
      </connection-factory>
      <connection-factory name="RemoteConnectionFactory">
        <connectors>
          <connector-ref connector-name="netty"/>
        </connectors>
        <entries>
          <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
        </entries>
      </connection-factory>
      <pooled-connection-factory name="hornetq-ra">
        <transaction mode="xa"/>
        <connectors>
          <connector-ref connector-name="in-vm"/>
        </connectors>
        <entries>
          <entry name="java:/JmsXA"/>
        </entries>
      </pooled-connection-factory>
    </jms-connection-factories>
    <jms-destinations>
      <jms-queue name="ExpiryQueue">
        <entry name="java:/jms/queue/ExpiryQueue"/>
      </jms-queue>
      <jms-queue name="DLQ">
        <entry name="java:/jms/queue/DLQ"/>
      </jms-queue>
    </jms-destinations>
  </hornetq-server>
</subsystem>

No changes from the original. I just copied and pasted the entire messaging subsystem. Then I added the socket bindings, just in case I needed queues and topics for integration purposes later on:

<socket-binding-group
   name="standard-sockets"
   default-interface="public"
   port-offset="${jboss.socket.binding.port-offset:0}">
  ...
  <socket-binding name="messaging" port="5445"/>
  <socket-binding name="messaging-group"
     port="0"
     multicast-address="${jboss.messaging.group.address:231.7.7.7}"
     multicast-port="${jboss.messaging.group.port:9876}"/>
  <socket-binding name="messaging-throughput" port="5455"/>
  ...
</socket-binding-group>

Finally, I added to the EJB subsystem a reference to the resource adapter defined above, as follows:

<subsystem xmlns="urn:jboss:domain:ejb3:1.4">
  ...
  <mdb>
    <resource-adapter-ref
       resource-adapter-name="${ejb.resource-adapter-name:hornetq-ra}"/>
    <bean-instance-pool-ref pool-name="mdb-strict-max-pool"/>
  </mdb>
  ...
</subsystem>

And it worked! Be aware that you can simply use the full profile to make messaging work; there is no need for all this configuration. But keep in mind that the full profile loads additional things you don’t need at all, such as:

  • org.jboss.as.cmp: container-managed persistence, deprecated in favour of JPA.
  • org.jboss.as.jacorb: an implementation of CORBA.
  • org.jboss.as.jsr77: abstracts manageable aspects of the J2EE architecture to provide a model for implementing instrumentation and information access.

On the application side I did three things:

1 – add the deployment descriptor hornetq-jms.xml to the WEB-INF folder to automatically create the queue during deployment. The descriptor has the following content:

<?xml version="1.0" encoding="UTF-8"?>
<messaging-deployment xmlns="urn:jboss:messaging-deployment:1.0">
  <hornetq-server>
    <jms-destinations>
      <jms-queue name="FileGenerationQueue">
        <entry name="/queue/FileGeneration"/>
      </jms-queue>
    </jms-destinations>
  </hornetq-server>
</messaging-deployment>

2 – create an MDB (Message-Driven Bean) to listen to the queue and process messages as they arrive. For example:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(name="FileGenerationQueue", activationConfig = {
     @ActivationConfigProperty(propertyName = "destination",
                               propertyValue = "queue/FileGeneration"),
     @ActivationConfigProperty(propertyName = "destinationType",
                               propertyValue = "javax.jms.Queue"),
     @ActivationConfigProperty(propertyName = "acknowledgeMode",
                               propertyValue = "Auto-acknowledge")})
public class LargeFileGenerationBean implements MessageListener {
  @Override
  public void onMessage(Message message) {
    // Code that will process messages coming from the queue.
  }
}
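
As a sketch of what that processing could look like, assuming the queue carries the serializable DataCriteria object sent in step 3 below (generateFile is a hypothetical helper, not part of any API, and javax.jms.ObjectMessage and javax.jms.JMSException must be imported):

  @Override
  public void onMessage(Message message) {
    try {
      if (message instanceof ObjectMessage) {
        // Recover the criteria object placed on the queue by the managed bean.
        DataCriteria dc = (DataCriteria) ((ObjectMessage) message).getObject();
        generateFile(dc); // hypothetical helper that writes the large file
      }
    } catch (JMSException e) {
      // Log the failure; the container takes care of redelivery.
    }
  }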

3 – and modify a request-scoped managed bean to send messages to the queue. For example:

import java.io.Serializable;

import javax.annotation.Resource;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.Session;

@ManagedBean
@RequestScoped
public class MyManagedBean implements Serializable {

  @Resource(mappedName="java:/ConnectionFactory")
  private ConnectionFactory connectionFactory;

  @Resource(mappedName="java:/queue/FileGeneration")
  private Queue queue;

  // Serializable class encapsulating data criteria.
  private DataCriteria dc;

  public String sendMessageToQueue() {
    // Connection is AutoCloseable as of JMS 2.0 (Java EE 7).
    try (Connection connection = connectionFactory.createConnection()) {
      Session session = connection.createSession(false,
          Session.AUTO_ACKNOWLEDGE);
      MessageProducer producer = session.createProducer(queue);
      connection.start();

      ObjectMessage message = session.createObjectMessage(dc);
      producer.send(message);
    } catch (JMSException e) {
      // Handle or log the failure here.
    }
    return null; // stay on the current view
  }
}

Let me know if you run into any issues, and together we can find a solution and make this better.

What Comes Next

In my previous post I explained why I left JavaEE behind. Now, I’m going to explain the reasoning process I used to decide what I’ll learn, teach and use next. The criteria I’m using are:

  • Cloud friendly: the technology should be ready to scale horizontally, without constraints, additional products or exponential use of resources.
  • Learning curve: I should be able to learn and teach fast, even if it requires changing the way I think about programming. I recognize I have lots of new concepts to learn before I can realise the advantages of other technologies.
  • Performance: everything I write should be faster than any interpreted language. I know that premature optimization is a bad idea, but I need a technology that is fast enough even when I decide to postpone optimizations.
  • Community: the community doesn’t need to be big, but it should be active and kind to newcomers. The majority of the libraries it produces should be open source.
  • Reusability: I should be able to reuse the libraries I’m used to, or find equivalent ones.
  • Coverage: I should be able to write the same kind of software I’m used to and not be limited if I decide to do more.
  • Documentation: the technology should be well documented, with books, websites, blogs, wikis and teaching materials.
  • Development Stack: the stack should implement MVC for web applications, database migration, database abstraction, SSL, authentication, authorization, REST web services, etc.

The fundamental choice starts with the programming language. It must support functional programming in order to be cloud friendly, and it should discourage mutable state, to prevent concurrency issues and make it difficult for programmers to write code with side effects. I already use Java, but at this point I have to eliminate it, because mutable state is the default behaviour in the language. To avoid it, we have to write a whole bunch of additional code, which needs to be tested like everything else. Java requires design patterns to overcome the deficiencies of the language, and static analysis tools, such as SonarQube, to keep the code safe; unfortunately, all of that demands a significant effort that has nothing to do with the business problem we are solving.
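
To make that concrete, here is a minimal sketch (the Order class is a made-up example) of the ceremony Java demands for a simple immutable value type; none of it relates to the business problem being solved:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// A minimal immutable value type in Java: final class, final fields,
// defensive copies, no setters.
public final class Order {
  private final String id;
  private final List<String> items;

  public Order(String id, List<String> items) {
    this.id = id;
    // Defensive copy so callers cannot mutate our internal state.
    this.items = Collections.unmodifiableList(new ArrayList<>(items));
  }

  public String getId() { return id; }
  public List<String> getItems() { return items; }

  // "Changing" the order means building a whole new instance.
  public Order withItem(String item) {
    List<String> copy = new ArrayList<>(items);
    copy.add(item);
    return new Order(id, copy);
  }
}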

“Most people talk about Java the language, and this may sound odd coming from me, but I could hardly care less. At the core of the Java ecosystem is the JVM.” James Gosling (2011, TheServerSide)

Despite the Java programming language being out of the picture, the Java Virtual Machine (JVM) is still relevant. It’s a portable and mature platform capable of running multiple programming languages on several operating systems, with transparent memory and thread management, and with performance peaks comparable to C/C++. What Java doesn’t do for us, other programming languages do, running in the same virtual machine and reusing the existing Java ecosystem, which is huge! Therefore, languages that have their own compilers and independent virtual machines are discarded, because they will hardly reach the maturity and popularity of the JVM and the Common Language Runtime (CLR). So, at this point we discard Erlang, Haskell, Go, and all other languages that don’t run on the JVM or CLR.

I have mentioned the CLR, the .NET runtime, but I have no experience with it so far, so I have to narrow my choices to JVM-hosted languages. I couldn’t find any official or reliable survey about the popularity of JVM languages, but I did find several polls showing that Groovy, Scala, Clojure and JRuby are the top four JVM-hosted languages, in no specific order.

Groovy’s popularity is due to the fact that it looks very much like Java, but without Java’s accumulated historical problems. As a result, Groovy’s learning curve for Java developers is by far the lowest, compared to Scala and Clojure. Scala comes next, with its richer type system and object orientation. We’re able to map our Java knowledge onto Scala, but the language is so full of possibilities that it’s the hardest one to master; we struggle to read other people’s code because developers have so much freedom to express themselves. Clojure, on the other hand, is the hardest one to start programming in, because of its radical differences from Java, but it’s the easiest one to master because of its simplicity. We do a lot more with less code, and the code is readable as long as you know functional programming principles. Since JRuby didn’t perform well in the polls above, I’m discarding it for the moment.

The chart below shows job trends in the US, according to indeed.com. It reflects the size and influence of the communities gravitating around those languages. Groovy has been performing well since the beginning, but it is now threatened by Scala, although it isn’t clear yet which one will stand out. Interest in Clojure is increasing constantly, if shyly, as functional programming becomes popular and the available learning material helps to flatten the introductory learning curve. In any case, it still has a long way to go.

Groovy, Scala and Clojure have at least the same coverage as Java, with the advantage of writing less to do more. There is absolutely no problem that can be solved only with Java. In fact, concurrency problems are far more complex to solve in Java, making those alternatives much more interesting.

In order to master those programming languages, I had a look at the volume and quality of the documentation available. This is very hard to measure. For those who like numbers, I have compiled the following table:

Language   Appeared   Google results   StackOverflow questions
Groovy     2003       602K             11,468
Scala      2003       1,510K           35,207
Clojure    2007       350K             9,278

The problem is that this table can be interpreted in many different ways:

  1. These numbers are far from precise; they change every day because of the nature of the internet.
  2. Looking at the volume, we might conclude that the more entries we get, the more documentation we can find, but it can also be a sign of complexity, taking a lot more documentation to explain a thing. Therefore, the fact that Clojure has fewer entries doesn’t mean it is less documented than Scala or Groovy.
  3. Some languages are older than others, accumulating exposure to the community and thus producing more content. But in that case, older content is counted even though it is hardly relevant nowadays.

I can say that the documentation I found was good enough to answer all my questions so far.

The last point is the development stack: a set of libraries and frameworks covering most of the needs of a regular enterprise developer. The following table shows a non-exhaustive list:

Feature                Groovy             Scala                      Clojure
Build Tool             Gradle             SBT                        Leiningen
Persistence            Grails             Slick                      HoneySQL
Database Migration     Grails             Play Framework             Joplin
MVC                    Grails             Play Framework, Lift       Compojure + Ring
Security               Spring Security    SecureSocial, Silhouette   Buddy
Testing                Spock              ScalaTest                  Expectations
IDE Support            IntelliJ, Eclipse  IntelliJ, Eclipse          IntelliJ, Eclipse, LightTable, Emacs, NightCode, Vim
RESTful Web Services   Grails             Spray, Play Framework      Liberator

Notice that Grails appears several times in the Groovy column. It’s a web framework offering a good deal of productivity thanks to its convention-over-configuration approach. The same happens in the Scala column with several occurrences of Play Framework. While the approach followed by Groovy and Scala offers more productivity and reproducibility, it also reduces the flexibility of the architecture, making it hard to replace an inefficient part with a more efficient one. Clojure is more concerned with the architecture and offers a separate library for every feature. If a competing library becomes more efficient than the one we are using, we can easily integrate the new library and gradually replace the inefficient one.

My strategy to use those three technologies from now on is the following:

  • Groovy: when I find a chaotic and inefficient JavaEE application, I will propose migrating it to Groovy + Grails. That will make the project economically viable again and recover the time wasted on complexity. The team can start writing Groovy code right away in the same project, gradually replacing the Java code and JavaEE dependencies.
  • Scala: the main advantage of the Scala stack is its reactive platform, offering an unprecedented performance boost for concurrent applications. So, when performance is one of the main requirements of the application and the team is smart and organized, I will suggest Scala as the way to go.
  • Clojure: for everything else I will suggest Clojure, which is very productive, simple and has excellent performance. That’s by far the best programming experience I’ve ever had.

In summary, I will still use JavaEE for existing, well-designed applications, but I will use Groovy to save chaotic JavaEE applications from complete failure. Scala and Clojure will be used for new projects, depending on their characteristics and context of use.

The Consequences of Deferring Project Jigsaw

Mark Reinhold announced in July 2012 that Oracle was planning to withdraw Project Jigsaw from Java 8, because Jigsaw would delay its release, planned for September 2013 (one year from now). This date is known because Oracle has adopted a two-year release roadmap for Java, so September 2013 is two years after the release of Java 7.

According to Jigsaw’s website…

“The goal of this Project is to design and implement a standard module system for the Java SE Platform, and to apply that system to the Platform itself and to the JDK. The original goal of this Project was to design and implement a module system focused narrowly upon the goal of modularizing the JDK, and to apply that system to the JDK itself. The growing demand for a truly standard module system for the Java Platform motivated expanding the scope of the Project to produce a module system that can ultimately become a JCP-approved part of the Java SE Platform and also serve the needs of the ME and EE Platforms.”

They also say:

“Jigsaw was originally intended for Java 7 but was deferred to Java 8.”

Now they want to defer it to Java 9 🙁 More details of their decision-making are available in a Q&A post on Reinhold’s blog, where you can read and follow the discussion. Here is my opinion:

Without Jigsaw, I believe it’s very difficult to put Java everywhere. Without Jigsaw, the idea of multi-platform is being restricted to servers in an age of smartphones and tablets. Jigsaw may be “late for the train”, but deferring it leaves Java late for the entire platform ecosystem.

Observing the market, we can see that development is becoming platform-dependent (iOS, Android, etc.). Only Java can counter this trend, because of its long experience with multi-platform implementations, and the time to do it is NOW! Otherwise, in 3 or 4 years there will be no Java on devices, and the development community will have learned to live without it. Java will then be basically a server-side technology.

The reasoning behind my prediction is the following: mobile devices are limited in terms of resources, and a modular JVM would allow the creation of tailored JVMs that respect the constraints of each device. I put myself in the shoes of those device manufacturers: “I wouldn’t distribute something in my products that might negatively impact the user experience in terms of performance.” That was the argument (at least the public one) Apple used to avoid distributing the Flash plugin in iOS’s browser, and probably the reason Adobe definitively gave up on Flash for mobile devices. A modular JVM would greatly simplify Oracle’s negotiations with many device players. It would become reasonable for Apple to include Java as a language for iPad and iPhone applications; Google could finally embed the JVM into Android to evolve faster with new Java language features, needing only a module extending the JVM to Android’s specific capabilities; it might even be possible to save Nokia from bankruptcy 😀

You may wonder whether Apple and Google would ever adopt the JVM as a standard runtime platform. Have you heard about opportunity cost? Our current choices and activities block other possible choices and activities, and the tricky part is to choose the opportunity that is least costly or most profitable. With that in mind, consider that Java was not an option when those companies made their decisions, precisely because it wasn’t modular. If Java had been modular and Apple had adopted it, the iOS platform would have at least three times more apps than Android. “Java” was in Google’s strategy to catch up with Apple; only Java could allow Google to do it in such a short period of time. So, it’s not so simple to ignore Java.

Now, Oracle vs. Google: of course the effort to move Java forward should be economically viable, and in order to use Java, Google would have to spend some money. Unfortunately, Oracle and Google work with different currencies. While Oracle thinks in terms of licenses, Google thinks in terms of advertising. These currencies are very difficult to convert between, because a license is a cost while advertising is profit. Therefore, Oracle will never reach a deal that increases Google’s costs, but a deal that decreases Google’s profit would be possible. In other words, Oracle could take a percentage of Google’s profit on advertising sold through Java apps, in exchange for making Java available for Android. Google makes this kind of deal with a lot of companies, like Yahoo, AOL and others. Why not with Oracle?

If Oracle doesn’t give the JDK team all the resources it needs to make Jigsaw a reality in Java 8, Oracle will be completely out of the pervasive-computing game very soon. Without breaking the JDK into manageable and efficient pieces, Oracle won’t have arguments to convince the industry that Java is the way to go in the long run.

Before deciding to drop Jigsaw, I beg Oracle to think about the consequences! They must ignore the fixed release roadmap and accept the difficulty of the task. We can stay happy with Java 7 (it’s not widely adopted anyway) as long as Jigsaw is on the way in Java 8. The fixed release cycle can come back after Java 8.

I would love to be wrong and be taken by surprise by an official Oracle announcement of definitive support for JavaFX on Apple and Android devices at the next JavaOne 😉 However, I think the likelihood is very low 🙁

Architects Need a Pragmatic Software Development Process

I have been a non-stop software architect since 2006. In that time, I have realized that it’s really hard to perform the role of architect in an organization that has no software development process, or an oversimplified one. When development is not reasonably organized, project managers don’t find room in their schedules to implement architectural recommendations. They probably have the time, people and resources, but since they don’t have a precise idea of the team’s productivity, they are afraid of accepting new non-functional requirements or changing existing ones. I’ve noticed that, in a chaotic environment, people become excessively pragmatic and averse to change.

Architects expect the organization to adopt a more predictable and transparent software process. That way, it’s possible to visualize the impact of recommendations and negotiate when they will be implemented. At a minimum, they expect a process with iterations inspired by the classical PDCA (Plan, Do, Check, Act) cycle, because it has feedback loops, which are the foundation of continuous improvement.

The figure below depicts what could be considered a pragmatic software process.

Iterations overlap in time in order to optimize the allocation of people, the use of resources, and the feedback from previous iterations. Each iteration is performed in a fixed period of time; this period depends on the context and tends to shrink as the organization matures. An iteration is composed of four phases (plan, do, check and act) and five events that may occur according to the planning. They are:

  • T1: The beginning of the iteration, starting with its planning. The scope of the planning covers only the period of the current iteration; it should not be mixed up with the general project planning, which is produced in one of the initial iterations to plan all the other iterations. All members of the team participate in the planning.
  • T2: The execution of what was planned for the iteration starts. All members of the team must have something to do within the scope of the iteration, and nothing planned for future iterations should be done in the current one. People may produce all sorts of output, such as documents, code, reports, meeting minutes, etc.
  • T3: Everything that is produced should be checked. Documents should be reviewed, code should be tested, user interfaces and integrations with other systems should be verified, etc. All issues found must be registered, to be solved in due time.
  • T4: Solve all issues found during the check phase and release all planned deliverables. Everybody should deliver something. If there is not enough time to solve some of the issues, they must be included in the planning of the next iteration with the highest priority. Statistics should be produced during this phase in order to compare the planning with the execution. The planning of the next iteration also starts at this point, taking advantage of the experience from the previous iteration.
  • T5: Once everything is released, the current iteration finishes. T2 of the next iteration starts immediately, because most people and resources are already available.

T1 to T5 repeat, each in a fixed period of time, until the end of the project. This suggestion is process-agnostic, so it can be implemented no matter which software process we claim to have in place, or which modern process we can think of.

In addition to the process, there are also some good practices:

  1. Consider everything that describes how the system implements business needs as use cases. These may in fact be functional and non-functional requirements, user stories, scenarios, etc., but there must be only one sort of artefact describing business needs.
  2. Write use cases in a way that the text can be reused to: a) help business people to visualize how their needs will work; b) guide testers on the exploratory tests; and c) help the support team to prepare the user manual.
  3. Avoid technical terms in use cases. If really needed, technical details may be documented in another artefact, such as use case realizations.
  4. If needed, create use case realizations using UML models only. Representing use case realisations as documents implies a huge overhead. Any necessary textual information can be added in the comments area of the UML elements.
  5. Fix the size of use cases according to the effort to realize them. For example: we can fix the maximum size of a use case at one week. If the estimate is higher than that, the use case must be split in two. If the estimate is far lower than that, the use case must be merged with another closely related use case. By simply counting the number of use cases, we immediately know the effort and resources required to execute the project, as the sketch after this list illustrates. This fixed size is a parameter for comparing the planning with the execution; by performing this comparison after every iteration, we gradually learn how precise our estimates are becoming.
  6. Use a wiki to document use cases and other required documentations, such as test cases, release notes, etc. Create a wiki page for each use case and use signs to indicate what is still pending to be released. The advantages of the wiki are: a) use cases are immediately available for all stakeholders as they are gathered; b) stakeholders can follow the evolution of the use cases by following updates in the page; c) it’s possible to know everyone who contributed to the use case and what exactly they did; and d) it’s possible to add comments to use cases, preserving all the discussion around it.
  7. If the organization has business processes, which is another thing that architects also love, then put references in the business process’ activities pointing to the use cases that implement them. A reference is a link to the page where the use case is published on the wiki.
  8. Follow up use cases using an issue tracking system, such as Jira. Each use case must have a corresponding Jira ticket, and every detail of the use case’s planning, execution, checking and delivery must be registered in that ticket. The advantages of linking Jira tickets with use cases are: a) Jira tickets represent the execution of the planning, and their figures can be compared with the planning, generating statistics on which managers can rely; b) we know exactly who contributed to the use case, what they did, and for how long; and c) it’s an important source of lessons learned.
  9. Test, test, test! It must be an obsessive compulsive behaviour. Nothing goes out without passing through the extensive test session.
  10. Constantly train the team on the technologies in use and provide all the bibliography they need. The more technical knowledge we have inside the team, the higher our capability to solve problems and increase productivity.
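
As a back-of-the-envelope illustration of practice 5, here is a sketch of the estimation arithmetic, with entirely hypothetical numbers (40 use cases, a fixed size of one week each, a team of four):

public class EffortEstimate {
  public static void main(String[] args) {
    int useCases = 40;        // counted directly on the wiki
    int weeksPerUseCase = 1;  // the fixed maximum size from practice 5
    int teamSize = 4;         // use cases that can be executed in parallel

    // With fixed-size use cases, total effort is a simple multiplication.
    int effortInWeeks = useCases * weeksPerUseCase;
    // Rough duration, assuming use cases can be worked on in parallel.
    int durationInWeeks = (int) Math.ceil((double) effortInWeeks / teamSize);

    System.out.println("Effort: " + effortInWeeks + " person-weeks");
    System.out.println("Duration: " + durationInWeeks + " weeks with "
        + teamSize + " people");
  }
}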

Working this way, everything becomes quantifiable, predictable, comparable and traceable.

From the practices above we can extract the traceability flow from business to the lowest IT level, as depicted in the figure below.

Business process elements such as swimlanes and activities may inspire actors and use cases. Use cases and actors are documented on wiki pages. Each use case and actor has a page on the wiki, with a unique URL that can be used to refer to the element in email messages, documents, Jira tickets and so on. A Jira ticket is created for each use case, containing a link to the use case’s wiki page; the wiki page can also link back to the ticket, since the ticket has a unique URL too. Jira tickets can be automatically linked to the source code through the version control system (SVN) and declaratively linked to the system’s features and user interfaces. Since it’s possible to create mock-ups in wiki pages, we also link those wiki pages to user interfaces, to compare the mock-ups with the final user interface. Finally, we have actors linked to security roles.

I admit that architects are not the most qualified people to define and implement a software development process in the organization (they actually believe more in the Programming Motherfucker philosophy :D), but they are constantly willing to contribute to having one in place. Just as they have instruments to monitor servers, releases, tests, performance and so on, they want project managers to have instruments to estimate effort, predict events, anticipate problems and, therefore, produce better planning and results. A warning: whatever we put in our software process that is not quantifiable or measurable will become an expensive overhead.

Choosing Between Vaadin and JSF

With the recent release of PrimeFaces 3.0, JSF finally reaches an unprecedented level of maturity and utility that puts it face to face with other popular Rich Internet Application (RIA) options, such as Google Web Toolkit (GWT), ExtJS, Vaadin, Flex and others. This open source project has also proved to be very active and on a constant growth path.

I have been working with JSF + PrimeFaces since October 2010, when I started developing the project JUG Management, a web application conceived to manage user groups or communities focused on a certain domain of knowledge, whose members are constantly sharing information and attending social and educational events. JSF is a standard Java framework for building user interfaces for web applications, with well-established development patterns, built upon the experience of many pre-existing Java web development frameworks. It is component-based and renders user interfaces on the server side, sending pre-processed web content, such as HTML, JavaScript and CSS, to clients (web browsers). My experience with this technology is openly available on java.net.

Meanwhile, I had the opportunity to create a Proof of Concept (PoC) to compare JSF and Vaadin, in order to help developers and architects decide between them. Vaadin is a web application framework for RIA that offers a robust server-side architecture, in contrast to other JavaScript libraries and browser plugin-based solutions. The business logic runs on the server, while a richer user interface, based on Google Web Toolkit (GWT), is fully rendered by the web browser, ensuring a fluent user experience.

The result of the PoC was surprisingly interesting 🙂 It ended up proposing both technologies instead of eliminating one of them. Exploring available books, articles, blogs and websites, I found that despite both being able to implement all sorts of web applications, each technology has special characteristics that optimize it for certain kinds of applications. In practical terms, if we find that JSF is better for a certain kind of application, that’s because it would take more time and code to do the same with Vaadin, and the inverse is also true. To understand this, we have to visit two fundamental concepts that directly impact web applications:

  • Context of Use considers the user who will operate the application, the environment surrounding that user, and the device the user is interacting with.
  • Information Architecture considers the user of the application again, the business domain in which he or she works, and the content managed in that domain.

Notice in the figure below that the user is always the center of attention in both concepts. That’s because we are evaluating two frameworks that directly impact the way users interact with web applications.

Visiting the concepts above we have:

Environment
Some applications are available for internal purposes only, such as those on the intranet; other applications are used by external users, such as the company website.

Users of internal applications are more homogeneous and limited in number, which means the UI can be a bit more complex, to allow faster user interactions. That explains the fight between Microsoft Office and Google Docs: Google Docs is not yet fully accepted in the office environment because it has fewer features than Microsoft Office, which is, on the other hand, more complex and more expensive. However, with a limited number of users and a larger number of features, it becomes acceptable to spend on training sessions to profit from the productivity features.

A company website targets heterogeneous users in unlimited environments. It is not possible to train all these people, so simpler user interfaces with short, self-explanatory interactions are desirable.

Considering the environment, we would recommend Vaadin for homogeneous users in limited environments and JSF for heterogeneous users in unlimited environments.

Device
Different devices demand multiple sets of UI components, designed to look great from small to large screens. Fortunately, both frameworks have components supporting the full range of screen sizes, from regular desktops to mobile devices. The problem is that different devices bring different connectivity capabilities, and the application should be ready to deal with low bandwidth and reduced transfer rates. Here, Vaadin seems more suitable for multiple devices, as long as the variety of devices is not too extensive, because the user interface is rendered locally, using JavaScript, and it has richer Ajax support to optimize the exchange of application data with the server.

Business Domain
In principle, good quality UI frameworks such as JSF and Vaadin can implement any business domain. The question is how experienced the team is with the technology, or how short the learning curve to master it is. Business is about timing, and the technology offering the best productivity will certainly win. If your team has previous experience with Swing, then Vaadin is the natural choice. If the previous experience was more web-oriented, manipulating HTML, CSS and scripts, then JSF is recommended.

Content
Content is a very relevant criterion for choosing between Vaadin and JSF. If the application needs to deal with voluminous content of any type, such as long textual descriptions, videos, presentations, animations, graphics, charts and so on, then JSF is recommended over Vaadin, because JSF’s web content rendering strategy profits from all content types supported by web browsers, without the need for additional plugins or tags. Support for multiple content types is only available in Vaadin through plugins, which must be individually assessed before adoption.

User
Last, but not least, we have the user, who is the most important criterion when choosing a UI framework. We would emphasize two aspects:

  1. The user population: the larger the target population, the higher the concerns about application compatibility. The application must deal with several versions and types of browsers, operating systems, and computers with different memory capacities and monitor resolutions, all without failures or security issues. For larger populations, the most appropriate technology is the most compatible one in a cross-platform environment, which is the case of JSF, since it uses a balanced combination of HTML, JavaScript and CSS, while Vaadin relies only on JavaScript and CSS. Smaller populations, however, would profit more from Vaadin, because cross-browser compatibility is, and will remain, very hard work done by Vaadin’s development team behind the scenes.
  2. The user’s tasks: if the application is operated intensively by users, it is expected to have more user tasks implemented. On the other hand, if the application is rarely used, or has short intervals of intensive use, then there is a lower concentration of user tasks. According to the PoC, Vaadin provides the best support for delivering user tasks with rich interaction, because of its fast visual response. JSF is less optimized where user interaction is concerned.

In conclusion, instead of discarding one of these frameworks, consider keeping both on the shelf of the company’s architectural choices, and visit the criteria above to make sure you are using the right technology for the expected solution. A simple way to apply those criteria is to assign weights to each criterion according to the project’s characteristics, decide which technology is appropriate for each criterion, and sum the weights for each technology. The highest total elects the technology to be used in the project.
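
For illustration, here is a tiny sketch of that scoring, with made-up weights and picks for a hypothetical public-facing, content-heavy project:

public class FrameworkChoice {
  public static void main(String[] args) {
    // Criteria from the post: environment, device, business domain,
    // content and user. The weights (1-5) and picks below are hypothetical.
    String[] criteria = {"Environment", "Device", "Business Domain",
                         "Content", "User"};
    String[] pick     = {"JSF", "Vaadin", "Vaadin", "JSF", "JSF"};
    int[]    weight   = {5, 2, 3, 4, 4};

    int jsf = 0, vaadin = 0;
    for (int i = 0; i < criteria.length; i++) {
      // Add each criterion's weight to the technology it favours.
      if ("JSF".equals(pick[i])) jsf += weight[i];
      else vaadin += weight[i];
    }
    System.out.println("JSF: " + jsf + ", Vaadin: " + vaadin);
    System.out.println("Choose " + (jsf >= vaadin ? "JSF" : "Vaadin"));
  }
}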