
Tuesday, August 26, 2008

To Service or Not to Service...that's the question!

Service Oriented Architecture, or SOA, has been discussed, defined, re-defined, argued, incorporated and touted. In this blog, I hope to share some thoughts on deciding whether code should be abstracted into a service or not. It is based on discussions with many of my esteemed colleagues over the years. Some of the discussions have also been with my 3+ year old kids...;-)))) What the kids said, I take very very seriously lol!

During software development, I have often heard the term abstraction of common code, or DRYing out common code. In other words, my architect or lead saying "This is business logic, it should not be intermixed with your UI code. You should move this to a business delegate so other callers can use it as well." or "This code looks like it can be re-used by other callers, it should be abstracted out to a common class or function." Highly valued and respected advice.

Abstracting common code to a place where it can be re-used time and again by different callers makes for a very strong logical argument: the "DRY" (Don't Repeat Yourself) principle.

As the developer, the follow up question is often "So where exactly should I move it to?" A method in the same class? A different class? A service?

At this point, IMHO, interrogation of the scope or breadth of the abstraction is required in order to determine where to extract the common piece of functionality to. In other words, "Who will be the consumers of this piece of code?" is the question of most importance IMO.

If the common piece of functionality is strongly tied to the application using it and callers from any other application would NOT benefit from the abstraction, then the answer becomes as simple as having the common piece of functionality defined in a Class or Function within the application so that it may be re-used.
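To make that concrete, here is a minimal sketch (the class and business rule are invented for illustration, not from any real application) of pulling a duplicated rule into one shared class within the application:

```java
// Hypothetical example: a sales-tax rule, once copied into several callers,
// now defined exactly once so the UI layer and a batch job both reuse it.
public class PriceUtils {

    // Single definition of the rule; integer cents avoid floating point drift.
    public static long withTaxCents(long cents, long taxPercent) {
        return cents + cents * taxPercent / 100;
    }

    public static void main(String[] args) {
        System.out.println(withTaxCents(10000, 7)); // prints 10700
    }
}
```

Any caller within the application now invokes PriceUtils.withTaxCents rather than repeating the formula.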

Now if the benefits of the common code can be reaped by callers from different applications then this piece of code needs to be distributable or shareable so that callers from different applications can readily use it.

One way that the distribution and sharing can be achieved is to place the common code in a software library. For the J2EE developer, I am talking about a jar; for a C developer, a lib file. This software library is then made accessible to callers, and the callers no longer need to repeat the common code.

The above approach works well when the library will not suffer much "FLUX". Flux in the code means distributing the changes to all consumers. Which applications do I need to distribute the jar/library to? In addition, what if we require two versions of the code to be available? There is a coordination penalty to be suffered here. The solution might work if the number of consumers is very low. In addition, this piece of code is specific to Java consumers. What if this code would be beneficial to consumers written in different languages?

Using a distributed computing approach, one can centralize the solution and also make it available to consumers in different languages (via CORBA, REST, SOAP). This eliminates the problem of having to distribute the jar/code to all consumers. Multiple versions of the same piece of software can be simultaneously supported using separate end points, for example, "/orderservice/v1", "/orderservice/v2".
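A toy sketch of the versioned end point idea (the router below is purely illustrative; a real container would map servlets or resources to these paths):

```java
// Illustrative only: two versions of an order lookup kept live at once by
// dispatching on the path prefix, the way /orderservice/v1 and
// /orderservice/v2 would be mapped in a real deployment.
public class VersionedRouter {

    public String handle(String path, Integer id) {
        if (path.startsWith("/orderservice/v1")) {
            return "order " + id;
        }
        if (path.startsWith("/orderservice/v2")) {
            return "order " + id + " (with status)";
        }
        throw new IllegalArgumentException("unknown version: " + path);
    }
}
```

Consumers on v1 keep working untouched while new consumers adopt v2.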

We are moving from a local to a global model and losing some control. What I mean is that when the library was self contained and distributed to only a handful of applications, things were simple and more controlled. By exposing the code as a distributed service, we are opening it up to a much wider audience and thus more exposure.

In deciding that a service will be distributed, one will quite likely find themselves addressing some non-functional requirements:

Semantics (Inter operability with different Consumers):

What languages will the consumers of the distributed code be written in? For example, C, C++, Java, or maybe only Java! This is one of the most difficult questions to answer. One might at this point pose the question "Why not just design the distributed code to simply be semantic agnostic?". For one, the native features and structures provided are often unusable across languages, and wrappers or translations are required. Translation to provide for cross language consumption has a price that has to be paid. For example, with an RMI service one can depend on a total Java solution and reap the benefits of the structures and semantics therein.

Quality of Service (QOS):

How much will the service be loaded, i.e., what is the frequency of requests? What is the acceptable failure rate? Are there maintenance windows that can be defined? What is the acceptable response time? How large is the payload exchanged? Scalability and fail over are some of the major players. These are some of the QOS requirements that need to be addressed.

Security:

Unlike the library distribution where the consumers are known and potentially trusted, as the service has now become "public", security becomes a valid concern. Does a consumer require authentication? Can a consumer execute the operation (authorization)? Does the data transferred between client and server require encryption?

Transactions:

A process or piece of code, once abstracted, might have been a participant in a more global transaction. Consider, for example, an application that initially books a hotel room, a car and a flight reservation as part of one single transaction. If we abstract out the flight booking to a service, in light of other applications wanting to book flights, what do we do about our original process that requires all bookings to succeed or none at all?
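One pattern for this situation (my addition, not from the original discussion) is compensating actions: since a remote flight service cannot join the local transaction, each completed step registers an "undo", and a failure runs the undos in reverse. A sketch with invented names:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch: each successful booking pushes a compensating action;
// if a later booking fails, compensations run in reverse (LIFO) order.
public class TripBooking {
    final List<String> log = new ArrayList<String>();
    private final Deque<Runnable> compensations = new ArrayDeque<Runnable>();

    void book(final String name, boolean succeeds) {
        if (!succeeds) throw new RuntimeException(name + " booking failed");
        log.add("booked " + name);
        compensations.push(new Runnable() {
            public void run() { log.add("cancelled " + name); }
        });
    }

    void rollback() {
        while (!compensations.isEmpty()) compensations.pop().run();
    }

    public static void main(String[] args) {
        TripBooking trip = new TripBooking();
        try {
            trip.book("hotel", true);
            trip.book("car", true);
            trip.book("flight", false); // the remote flight service fails
        } catch (RuntimeException e) {
            trip.rollback(); // cancels car, then hotel
        }
        System.out.println(trip.log);
    }
}
```

This gives "all or nothing" behavior in effect, though not the isolation guarantees of the original single transaction.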

Synchronicity:
Some processes are synchronous while others do not have to be. For example, when the wife commands that the house be vacuumed, it had better be done immediately. If the kids ask for a new toy, it can be provided later on. When designing the code, one needs to determine whether the process has to be done synchronously or not. I guess this would apply equally to a library routine present in a jar.
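In code, the difference might look like this (a sketch; the chores are stand-ins for real work):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: the vacuuming call blocks the caller until it is done; the toy
// request is handed to an executor so the caller can move on and collect
// the result later (or never).
public class SyncVsAsync {

    static String vacuumHouse() { return "house vacuumed"; }

    public static void main(String[] args) throws Exception {
        // Synchronous: nothing else happens until this returns.
        System.out.println(vacuumHouse());

        // Asynchronous: submitted now, completed whenever the pool gets to it.
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> toy = pool.submit(new Callable<String>() {
            public String call() { return "toy delivered"; }
        });
        System.out.println(toy.get()); // we wait here only to show the result
        pool.shutdown();
    }
}
```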

Volatility:
It is arguable that this requirement falls under the functional requirement umbrella. Questions to ask include "Will the service undergo much flux?", "Do changes need to be backward compatible?", "How can separate versions of the service co-exist?" and "What is the transition strategy from version a to version b of the service?"

Discovery:

How will the shared piece of code be discovered to be usable by consumers? Registry, fixed URIs? Again security is coexistent with this requirement.

Having distributed code involves considerable thought. It is almost impossible to provide canned answers to the above as the requirement variance is large from company to company, use case to use case.

I will express some opinions based on the above:

Prophets are a rare commodity. Even if they exist, their reliability is often questionable. I feel it is often best to adopt a direction that is sound at given moment of time and space (space accounts for effort ;-)) Trying to sound smart here ;-).

If there is going to be a very limited number of semantically equivalent consumers of a duplicated piece of functionality, it is better to lean toward localization of the same. Following the direction of a shared function or library would be preferred.

If the functionality has the prospects (put on prophetic hat here) of being used by consumers with different semantics, consider a service. Note that even different semantics can be bridged without a service, for example via the Java Native Interface (JNI).

If the functionality will be consumed by different consumers using the same semantics and one expects the number of consumers to be large, consider a distributed environment whose semantics satisfy the majority of the consumers. Thinking RMI, EJB here for Java consumers. For example, in a total Java shop, if RMI services can be accommodated, why think of CORBA or SOAP?

If the functionality will be consumed by different consumers using different semantics, consider a language neutral implementation of the service such as CORBA, SOAP, POX, JMS etc.

Design consumers for change and flexibility. If consumers of a service or piece of functionality are developed so that they can easily adapt to change, we have a win-win situation.

To illustrate the same, consider a client that needs information about products (product identifiers, name and description) in order to function.

If the code were designed as follows:





public interface ProductDAO {
public Product getProduct(Integer id);
}




with an initial implementation as shown below:



public class ProductDAOImpl implements ProductDAO {
public Product getProduct(Integer id) {
ProductModel model = findProductModel(id); // internal lookup, e.g., against the database
Product prod = mapToClient(model);
return prod;
}
}




Then if the code was abstracted to a web service, the client would only need to provide a different implementation as shown below:



public class ProductSoapDaoImpl implements ProductDAO {
public Product getProduct(Integer id) {
ProductDTO dto = makeSoapCall(id);
Product prod = mapProduct(dto);

return prod;
}

private ProductDTO makeSoapCall(Integer id) {
....
}
}





The point to note is that we have an "Agile" client, i.e., a client that has been designed to adapt to change. The Product class used is still a Product as far as the client application is concerned. How it's obtained is immaterial. Deciding where a client needs to be agile is again a prophetic task! I hope to point out with the above that if "change or re-use" is anticipated, develop for the same, regardless of which layer of the architecture you are a part of.

Just because we have an SOA environment does not mean that every piece of re-usable code needs to be a distributed service. Put on your best prophetic hat when thinking about the semantics of a service. Remember, securing is enduring.

Criss Angel is just freaking awesome!

Wednesday, August 20, 2008

Easy Mock Class Extensions: Byte code enhancement, Interface elimination, Mocking final classes/methods

Today, I had a very constructive discussion with some colleagues. One of the primary purposes served by interfaces has been to support "mocking" in tests, especially in cases where there is at most one implementation at any given time. After thinking of EasyMock Class Extensions, where one can mock a class, I raised the question of the value of the interface in such a case, as one could just as easily mock the implementation, which started a very interesting discussion.

Now, with mock frameworks like JMock or EasyMock, mocking of classes is possible, but they explicitly declare that classes or methods declared final cannot be mocked. I thought at the time about how the instrumentation API could be leveraged to assist in mocking a final class or a class with final methods.

One typical case where we propagate the pattern of an interface plus implementation is in the DAO layer. If an API is designed as an external API, i.e., others will use the developed API and/or the API will be deployed in unknown environments where specific implementations might be in use, then the separation of the DAO interface and implementation is justified. In a standard/isolated case, i.e., a case where one has decided on the database provider and the O/R mapping strategy, the value of the overhead of implementing interfaces is diminished, as at any real given time there would typically be only one concrete implementation. So it begs the question as to why the DAO interface is even required.

One argument that I have heard as the case for interfaces has been testability. For example:

Service service = new ServiceImpl();
service.setDAO(new DAOMockImpl());
service.executeServiceMethod();

In the above example, the DAO is mocked with an implementation of the DAO interface.

The same could easily be achieved by mocking the DAO implementation class itself.

Anyway, I wanted to see if anyone has in fact handled the case of using byte code enhancement to handle the final problem with mocks. I did find an excellent example of the same in Xzajo's weblog, which gives a very descriptive account of how byte code enhancement can be utilized to allow proxying of final classes/methods. Very very nice! My example, shown below, demonstrates how the same can be achieved in a Maven/EasyMock environment and utilizes the code from Xzajo's blog.

One change that I have is that we probably do not want all loaded classes to be devoid of their "final" keyword. For this reason, I propose a method of specifying the packages one would like to be enhanced.

The instrumentation class utilizes the Byte Code Engineering Library (BCEL). We could easily change the same to use some other library if required. I have defined a class called FinalizerRemover which implements ClassFileTransformer; the implementation details are shown below:



public byte[] transform(ClassLoader loader, String className, Class redefiningClass,
    ProtectionDomain domain, byte[] bytes) throws IllegalClassFormatException {
  byte[] returnedBytes = bytes;

  try {
    ClassParser parser = new ClassParser(new ByteArrayInputStream(bytes), className);
    JavaClass clazz = parser.parse();
    if (instrument(clazz.getPackageName())) {
      if (clazz.isFinal()) {
        System.out.println("Changing final modifier of class:" + clazz);
        clazz.setAccessFlags(clazz.getAccessFlags() & (~Constants.ACC_FINAL));
      }
      Method[] methods = clazz.getMethods();
      for (Method m : methods) {
        if (m.isPublic() && m.isFinal()) {
          System.out.println("Transforming Method:" + m);
          m.setAccessFlags(m.getAccessFlags() & (~Constants.ACC_FINAL));
          System.out.println("Transformed Method:" + m);
        }
      }
    }
    returnedBytes = clazz.getBytes();
  }
  catch (Exception e) {
    e.printStackTrace();
  }

  return returnedBytes;
}





In the above example, I have a method check to see whether the class requires transformation or not. I use an argument to the VM, -Dtpackages, a comma separated list of packages requiring transformation.
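A sketch of what that check could look like (the helper below is my illustration; the actual implementation in the project may differ):

```java
// Hypothetical: -Dtpackages=com.welflex,com.foo is read once, and a class is
// transformed only when its package starts with one of the listed prefixes.
public class PackageFilter {
    private final String[] prefixes;

    public PackageFilter(String tpackages) {
        prefixes = tpackages == null ? new String[0] : tpackages.split(",");
    }

    public boolean instrument(String packageName) {
        for (String prefix : prefixes) {
            if (packageName.startsWith(prefix.trim())) {
                return true;
            }
        }
        return false;
    }
}
```

The transformer would construct one of these from System.getProperty("tpackages") and consult it per class.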

The code is present in a maven project called FinalizerRemover, producing a jar file called finalizerremover.jar.

A test maven module (FinalizeOverrideProject) is defined that contains DAOs which are final classes. A service layer class refers to the DAO as shown below:




public class ProductServiceImpl {
  private ProductDAOImpl productDAO;

  public final void setProductDAO(ProductDAOImpl productDAO) {
    this.productDAO = productDAO;
  }

  public Product getProduct(Integer id) {
    return productDAO.getProduct(id);
  }
}





The ProductDAOImpl referenced above is an actual implementation class and not an interface.
The test code for the ProductServiceImpl is as follows:



private ProductDAOImpl productDAOMock;

@Before
public void setUp() {
  // EasyMock class extension (org.easymock.classextension.EasyMock) mocks classes
  productDAOMock = EasyMock.createMock(ProductDAOImpl.class);
}

@Test
public void testGetProduct() {
  ProductServiceImpl impl = new ProductServiceImpl();
  impl.setProductDAO(productDAOMock);

  Integer id = 123;
  Product p = new Product(id, "Porsche Boxster");

  EasyMock.expect(productDAOMock.getProduct(id)).andReturn(p);
  EasyMock.replay(productDAOMock);
  Product result = impl.getProduct(id);

  assertNotNull(result);
  EasyMock.verify(productDAOMock);
}





If run without the byte code enhancement, the above test would not work, as it would not be possible for EasyMock to "mock" the final DAO class. However, as the DAO class is enhanced to strip out the final modifiers, mocking is possible.

To enable the instrumentation to work when we run "mvn test", the following changes are effected in the maven pom to specify the instrumentation jar and the packages to be instrumented:

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>

<version>2.4.2</version>
<configuration>
<argLine>-Dtpackages="com.welflex" -javaagent:${settings.localRepository}/com/welflex/finalizeRemover/finalizeremover/1.0-SNAPSHOT/finalizeremover-1.0-SNAPSHOT.jar</argLine>
<useSystemClassLoader>true</useSystemClassLoader>

</configuration>
</plugin>

Maven's surefire plugin is configured to run with the instrumentation jar. The instrumentation jar is obtained from the local maven repo.

The example was run on a Maven 2.0.9, Java 1.6.X environment. In order to run the example, issue a "mvn install" from the FinalizeRemover project. The operation will result in the deployment of the instrumentation library in the local repository. Execute a "mvn test" on the FinalizeOverrideProject to test and view the example.

Conclusion:

Think more about redundant interfaces; orthogonality has a price. More later...

Downloads: The code for the example can be obtained from HERE!

Sunday, August 17, 2008

What's in a name?

I have often had discussions with people regarding the better way to represent acronyms in Java. Should a class that represents a Billing Telephone Number be named BTN or Btn? I often fall back to the Java language classes for reference. URL, UUID and URI seem to support the full uppercase naming convention. But is that the correct way? It has to be more than a matter of convention and style to sway one way or the other.

I have been reading the excellent book "Java Puzzlers" by Bloch and Gafter and find that it often makes me think "WTF?" when reading parts of it. I walk away humbled regarding my understanding of the Java language. I truly recommend this book as a read for any Java enthusiast. Just, please, don't use the examples for interview questions. Some hapless bloke like me wouldn't stand a chance :-)

Anyway, back to the naming conventions. Consider the following example:

public class Naming {
  public static void main(String args[]) {
    System.out.println(A.BTN.NUMBER);
  }
}

class A {
  static class BTN {
    static Integer NUMBER = 9789999;
  }

  static Btn BTN = new Btn();
}

class Btn {
  Integer NUMBER = 12345678;
}


Will it even compile, considering class A has a static variable called BTN and a nested class called BTN? For the sake of discussion, let's say the class does compile; what then will be the run time output? Will it print 9789999 from the static nested class of A, or will it print 12345678 from the instance of class Btn?

If you surmised the answer to be the latter, you are right; the code does in fact print out 12345678. Now why is that?

The Java Language Specification (JLS 6.5.2) states that when a variable and a type have the same name, the variable takes precedence. In other words, the variable name obscures the type name.

Based on the above, if the program mentioned above had used the Java naming conventions, i.e., the static nested class of A had been named Btn, the problem would not have surfaced.

That said, following the conventions regarding the naming of Java classes, packages and variables is quite essential.

Lessons Learned:

1. Use camel case starting with a capital letter for class names.
2. For static constants, use ALL CAPITALS.
3. Package names go in all lower case.
4. Avoid top level package or domain names for variable names. For example, don't name classes String or Object, and don't name variables java, net, org etc. Read the book for more.

The book is really nice; some examples seem to indicate warped cases where the normal developer would not even tread. Regardless, it is worth a read. Let me leave you with some other pitfalls, maybe to convince you to want a read ;-).


public static void main(String args[]) {
  System.out.println(2.00 - 1.10);
  System.out.println('A' + 'B');
}



What gets printed? Enjoy!

Friday, August 8, 2008

JAXRS, JBoss RestEasy, Maven, Spring...rock on!

I'm drowning my audience with REST related code. What can I do, the hunger is biting me bad! JSR 311/JAXRS is an API; different providers provide their own implementations of the same. I have already explored RESTLET's implementation and how it applies to my SpringRestlet example. Note the common theme running here...I need a framework to be able to provide a Spring hook, otherwise I turn my head the other way :-). Yeah, a Spring fan (those who remember the movie "Swimfan" feel my obsession)!

That said, we move on to the example. RestEasy is a JBoss project that allows one to build RESTful web services. I like their philosophy where they say the goal is to build an easy way to speak REST.

As of this blog, RestEasy JAXRS is at 1.0 BETA 5. Some of the things I liked about the implementation right away:

1. Support for a Client API to talk to Restful Webservice
2. Easy Spring Integration
3. Maven Support
4. Decent Documentation
5. Easy to get going with a Starter Webservice
6. JBoss Backing..we have Hibernate after all ;-)
7. Last but not least, it is led by a member of the JSR

In my previous blog on the RESTLET framework's support for JAXRS and Spring, I had developed my own custom servlet. With RestEasy, I did not have to do the same. My webapp's web.xml has exactly the same configuration as mentioned in the RestEasy documentation. One major issue that I encountered is that I could not get auto discovery of Spring managed Resources and Providers (as advertised by the framework) without annotating the corresponding classes with the Spring @Component annotation. Maybe there is something I am missing. Regardless, I have filed a bug report in the RestEasy jira; let's see what surfaces from the same.

With the Resteasy version of my Spring/Restlet example, the webapp module has only 3 classes: an OrderNotFoundProvider class that translates all OrderNotFoundExceptions to meaningful REST responses, an OrderResource and a ProductsResource. I do not have any other Java artifacts in this module. A provider class for the Products that manages the marshalling specifics (JSON marshalling/unmarshalling) has been moved to the common maven module to be shared among the client and web modules. Regardless, look at my webapp module now; all we have is Resource and Exception management classes :-). Gone are the Application, Servlet, ApplicationConfig...etc etc classes.

Unlike in the previous JAXRS example that I had provided, in this example I have changed the client to use RestEasy's JAXRS client support. I must admit, I am rather impressed by their effort with the same. So what are the changes in the client? The OrderClient has changed to:
@ConsumeMime("application/xml") 
public interface OrderClient {

/**
* Create an Order.
*
* @param orderDTO Order DTO
* @return OrderDTO with created id
* @throws IOException If an error occurs
*/

@POST
@Path("/order")
@ProduceMime("application/xml")
@ConsumeMime("application/xml")
public OrderDTO createOrder(OrderDTO orderDTO) throws IOException;

/**
* Updates an Order.
*
* @param orderDTO Order DTO
*/

@PUT
@Path("/order/{id}")
@ProduceMime("application/xml")
public void updateOrder(OrderDTO orderDTO, @PathParam ("id") Long id);

/**
* Retrieves an Order with the specified <code>orderId</code>.
*
* @param orderId Order Id
* @return OrderDTO
* @throws OrderNotFoundException if order is not found
* @throws IOException if an error occurs
*/

@GET
@Path("/order/{id}")
@ProduceMime("application/xml")
public OrderDTO getOrder(@PathParam("id") Long orderId) throws OrderNotFoundException, IOException;

/**
* Deletes an Order with the specified <code>orderId</code>
*
* @param orderId order Id
* @throws OrderException If an error occurs
*/

@DELETE
@Path("/order/{id}")
@ProduceMime("application/xml")
public void deleteOrder(@PathParam("id") Long orderId) throws OrderException;
}


Note the use of JAXRS annotations in the OrderClient. Code is being shared, always a good thing :-). The OrderClient definition is sufficient to talk to the webservice using Resteasy code. The same can be accomplished with the following lines from a consumer:

ResteasyProviderFactory.initializeInstance(); 
RegisterBuiltin.register(ResteasyProviderFactory.getInstance());

OrderClient client = ProxyFactory.create(OrderClient.class, "http://localhost:9090/IntegrationTest");

The first two lines initialize the Resteasy client code. The third line is where a Proxy is created for the OrderClient. In the spirit of my rest/spring example, I have decided to provide an abstraction where I delegate to the proxy mentioned above as shown below ;-) :
public class OrderClientImpl implements OrderClient { 
private final OrderClient delegate;

/**
* @param uri Server Uri
*/

public OrderClientImpl(String uri) {
delegate = ProxyFactory.create(OrderClient.class, uri);
}

public OrderDTO createOrder(OrderDTO orderDTO) throws IOException {
return delegate.createOrder(orderDTO);
}

public void deleteOrder(Long orderId) throws OrderException {
delegate.deleteOrder(orderId);
}

public OrderDTO getOrder(Long orderId) throws OrderNotFoundException, IOException {
return delegate.getOrder(orderId);
}

public void updateOrder(OrderDTO orderDTO, Long id) {
delegate.updateOrder(orderDTO, id);
}

}

I choose to depend on my integration test and/or the consumer of the service to initialize the Resteasy framework via plugins, listeners or what have you.

Above said, the code just works nicely. I feel that with the Resteasy JAXRS implementation, I have reduced the amount of coding required to build a REST web service that integrates with my favorite framework, Spring. Onward and upward, you Resteasy folks!

The Resteasy version of the JAXRS Spring/RestEasy/Maven/Dozer project can be downloaded from HERE!
The example was run using JDK 1.6.X and Apache Maven 2.0.9 on a Linux environment.
Enjoy! I am off to look at CXF from the apache foundation next..:-) ...

Wednesday, August 6, 2008

Singleton and Lazy Initialization

Instantiating a singleton lazily has always been a challenge in pre-JDK 1.5 days due to the semantics of the volatile modifier and/or synchronization penalties. Double checked locking has been discussed quite a bit. I found this one technique of lazily loading a singleton that I thought I'd share. I must admit, I am late in realizing the same :-(. In lawyer's terms, if this has been "asked and answered", too bad! There must be a few slow people like me out there...:-), hopefully ;-) The principle used is the "lazy initialization holder class idiom" (JLS 12.4.1).
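For contrast, here is what the double checked locking version looks like on JDK 1.5+, where volatile has the stronger memory model semantics. It is shown only to highlight what the holder idiom avoids:

```java
// Double checked locking: correct only with a volatile field (JDK 1.5+ memory
// model), and it still pays a volatile read on every access; for comparison only.
public class DclSingleton {
    private static volatile DclSingleton instance;

    private DclSingleton() {}

    public static DclSingleton instance() {
        DclSingleton result = instance;
        if (result == null) {
            synchronized (DclSingleton.class) {
                result = instance;
                if (result == null) {
                    instance = result = new DclSingleton();
                }
            }
        }
        return result;
    }
}
```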

The singleton class has a Holder class which creates the Singleton instance as shown below:

public class Singleton { 
static {
Watcher.singletonLoaded = true;
}

/**
* Prevent instantiation.
*/

private Singleton() {}

/**
* @return Lazily loaded instance.
*/

public static Singleton instance() {
return SingletonHolder.INSTANCE;
}

private static final class SingletonHolder {
static {
Watcher.singletonHolderLoaded = true;
}
public static final Singleton INSTANCE = new Singleton();
}
}


In the above code, when Singleton.instance() is invoked for the first time, the side effect of loading the holder class and the creation of the Singleton INSTANCE occurs.

This is kinda nice as there is no synchronization; it depends on the fact that class loading is serialized, i.e., two threads will not load the same class twice with the same class loader at the same time.

The Watcher class shown above is only a simple way to track when the class has been loaded.

The following unit test demonstrates when the class is loaded and when the Singleton instance is created.
public class SingletonTest { 
@Test public void test() throws ClassNotFoundException {
Class.forName("Singleton");

assertTrue("Singleton class must have been loaded", Watcher.singletonLoaded);
assertFalse("Single Holder should not have been loaded", Watcher.singletonHolderLoaded);

Singleton.instance();

assertTrue("Singleton Holder must have been loaded", Watcher.singletonHolderLoaded);
}
}


The Watcher class shown above is a simple static class with booleans as indicators.

Monday, August 4, 2008

Restlet, JAXRS, JSR 311, Maven, Spring and more.

I have been wanting to alter my Restlet example that utilizes Spring, Restlet and Maven to use the JAXRS (JSR 311) API. The Restlet project supports JAXRS.

For the sake of simplicity, I did not change the client code in any way, i.e., preferring to use the Restlet API for invocations. In addition, JAXRS does not provide for any client API specifications ;-)

I changed the OrderResource as follows:
@Component @Path("/order")

public class OrderResource {
private static final Logger log = Logger.getLogger(OrderResource.class);

@Autowired private OrderService orderService;
@Autowired private MapperIF beanMapper;

public OrderResource() {
super();
}

private OrderDTO persistOrder(OrderDTO orderDTO) {
if (log.isDebugEnabled()) {
log.debug("Persisting order:" + orderDTO);
}

Order order = (Order) beanMapper.map(orderDTO, Order.class);

if (log.isDebugEnabled()) {
log.debug("Mapped Order" + order);
}

orderService.persist(order);

if (log.isDebugEnabled()) {
log.debug("Mapping persisted order to OrderDTO:" + order);
}
orderDTO = (OrderDTO) beanMapper.map(order, OrderDTO.class);

if (log.isDebugEnabled()) {
log.debug("Returning mapped order:" + orderDTO);
}

return orderDTO;
}

@Path("/{id}") @ConsumeMime("application/xml") @PUT public void updateOrder(
@PathParam("id") String id, OrderDTO orderDTO) {
if (log.isDebugEnabled()) {
log.debug("Enter Update Order, Id=" + id + ", Order DTO:" + orderDTO);
}
Long idLong = new Long(id);
orderDTO.setOrderId(idLong);
orderDTO = persistOrder(orderDTO);

if (log.isDebugEnabled()) {
log.debug("Order Persisted:" + orderDTO);
}
}

@ProduceMime("application/xml") @ConsumeMime("application/xml") @POST public OrderDTO storeOrder(
OrderDTO orderDTO) {
orderDTO = persistOrder(orderDTO);

return orderDTO;
}

@GET @Path("/{id}") @ProduceMime("application/xml") public OrderDTO getOrder(
@PathParam("id") String id) throws OrderNotFoundException {
Long orderId = new Long(id);

Order order = null;

try {
order = orderService.getOrder(orderId);
}
catch (OrderNotFoundException nfe) {
log.error("Order Not Found", nfe);
throw nfe;
}

log.info("Order found..");

OrderDTO orderDTO = (OrderDTO) beanMapper.map(order, OrderDTO.class);

return orderDTO;
}

@DELETE @Path("/{id}") public void deleteOrder(@PathParam("id") String id) {
Long orderId = new Long(id);
orderService.delete(orderId);
}

public void validate() {
Assert.notNull(orderService);
}
}


Notable Changes to the OrderResource:


  1. The @Path annotation on the OrderResource class tells the container that the OrderResource will handle calls of the context /order.
  2. The @Path annotation on some of the methods of the OrderResource denote specifics of the path.
  3. @ConsumeMime and @ProduceMime annotations indicate the mime types that will be consumed or produced by the method respectively.
  4. @POST, @GET, @PUT, @DELETE denote the different HTTP methods and a method annotated with one of these annotations will handle the request of the specific type.
  5. @PathParam denotes a parameter that will be available for the method.

In the above example, we have eliminated code that extends a Restlet Resource class. We have used annotations to specify which HTTP methods the resource supports. We have also eliminated the Representation concept from the methods in favor of @ProduceMime and @ConsumeMime, which help define what mime types can be produced and consumed by the method respectively. JAXRS introduces the concept of Providers that help in marshalling/unmarshalling different mime types. Providers are annotated with the @Provider annotation. In addition, in the case of exceptions, exception Providers can also be developed that determine the response to be provided to a client.

If a method is annotated with @ConsumeMime or @ProduceMime of type "application/xml" and the object part of the method argument or return type is a JAXB object, i.e., an object that has the annotation @XmlRootElement, automatic JAXB marshalling is accomplished. The OrderDTO is one such object.


The earlier example also had a ProductsResource, which only supported the MIME type "application/json". The updated ProductsResource is shown below:

@Component @Path("/products") public class ProductsResource {

private static final Logger log = Logger.getLogger(ProductsResource.class);

@Autowired private ProductService productService;
@Autowired private MapperIF beanMapper;

private Set<ProductDTO> map(Set<Product> products) {
Set<ProductDTO> productDTOs = new HashSet<ProductDTO>();

for (Product product : products) {
ProductDTO productDTO = (ProductDTO) beanMapper.map(product, ProductDTO.class);
productDTOs.add(productDTO);
}

return productDTOs;
}

/**
* Gets a {@link ProductListDTO} of Products that are supported.
*
* @return a List of Products
*/
@GET @ProduceMime( { "application/json" }) public ProductListDTO getProducts() {
log.debug("Enter getProducts()");

Set<Product> products = productService.getProducts();
Set<ProductDTO> productDTOs = map(products);

if (log.isDebugEnabled()) {
log.debug("Returning Products:" + productDTOs);
}

return new ProductListDTO(productDTOs);
}
}


Unlike JAXB, where annotations determine the marshalling semantics, for the JSON marshalling I had to do some customization where we specifically detail how the marshalling should occur.

I have been discussing this with Jerome on the Restlet discussion forum, and it may become easier in the future to simply specify the mime type and not have to worry about the conversion.

Until then, we can accomplish the conversion to JSON using a custom Provider as shown below:
@Provider
@ProduceMime("application/json")
public class ProductProvider implements MessageBodyWriter<ProductListDTO> {
private static final Logger log = Logger.getLogger(ProductProvider.class);

public long getSize(ProductListDTO t) {
return -1;
}

public boolean isWriteable(Class<?> type, Type genericType, Annotation[] annotations) {
return true;
}

public void writeTo(ProductListDTO t, Class<?> type, Type genericType, Annotation[] annotations,
MediaType mediaType, MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream) throws IOException,
WebApplicationException {

log.debug("Write To of ProductProvider invoked...");

JSONArray jsonArray = new JSONArray();

for (ProductDTO product : t.getProducts()) {
jsonArray.put(product.getProductId()).put(product.getName()).put(product.getDescription());
}

OutputStreamWriter writer = new OutputStreamWriter(entityStream);

try {
writer.write(jsonArray.toString());
writer.flush();
}
catch (IOException e) {
log.error("Error Writing JSON Array:", e);
throw e;
}

log.debug("Exit Write To of ProductProvider");
}
}
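The provider above writes the products out as one flat JSON array of values. As a plain-Java sketch of the same output shape, without the org.json dependency (the product values here are made up for illustration):

```java
// Builds the same flat JSON array shape as the ProductProvider above:
// each product contributes its fields, in order, to a single array.
// Everything is emitted as a quoted string here for simplicity,
// whereas the provider puts the id through as a number.
public class JsonSketch {

    public static String toJsonArray(String[][] products) {
        StringBuilder sb = new StringBuilder("[");
        boolean first = true;
        for (String[] product : products) {
            for (String field : product) {
                if (!first) sb.append(",");
                sb.append("\"").append(field).append("\"");
                first = false;
            }
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        String[][] products = { { "1", "Widget", "A widget" } };
        System.out.println(toJsonArray(products));
        // ["1","Widget","A widget"]
    }
}
```

A flat array like this loses the per-product grouping; an array of objects would be friendlier to clients, which is part of why better JSON support would be welcome.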



When an Order is not found, an OrderNotFoundException is thrown. The same is translated to a Response of HTTP code 404 to the consumer via the following Provider:

@Provider public class OrderNotFoundProvider implements ExceptionMapper<OrderNotFoundException> {

public Response toResponse(OrderNotFoundException exception) {
return Response.status(Response.Status.NOT_FOUND).build();
}
}
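Conceptually, the runtime keeps a registry from exception type to response and consults it when a resource method throws. A plain-Java sketch of that idea (this is not the JAX-RS machinery; IllegalArgumentException stands in for OrderNotFoundException):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the ExceptionMapper idea: a registry maps exception types
// to HTTP status codes, with 500 as the fallback for unmapped types.
public class ExceptionRegistry {

    private final Map<Class<? extends Exception>, Integer> statusByType =
            new HashMap<Class<? extends Exception>, Integer>();

    public void register(Class<? extends Exception> type, int status) {
        statusByType.put(type, status);
    }

    // Look up the thrown exception's type; default to 500 if unmapped.
    public int toStatus(Exception e) {
        Integer status = statusByType.get(e.getClass());
        return status != null ? status : 500;
    }

    public static void main(String[] args) {
        ExceptionRegistry registry = new ExceptionRegistry();
        registry.register(IllegalArgumentException.class, 404);
        System.out.println(registry.toStatus(new IllegalArgumentException())); // 404
        System.out.println(registry.toStatus(new RuntimeException()));         // 500
    }
}
```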




So how do all these components tie together? I have largely based the glue code on a very nice example from the Restlet wiki about JAXRS support. I have an OrderConfig class, shown below, that indicates the supported MediaType mappings, the Resource classes and the custom Provider classes:

public class OrderConfig extends ApplicationConfig {

public Set<Class<?>> getResourceClasses() {
Set<Class<?>> rrcs = new HashSet<Class<?>>();

rrcs.add(OrderResource.class);
rrcs.add(ProductsResource.class);

return rrcs;
}

@Override public Map<String, MediaType> getMediaTypeMappings() {
Map<String, MediaType> map = new HashMap<String, MediaType>();

map.put("html", MediaType.TEXT_HTML_TYPE);
map.put("xml", MediaType.APPLICATION_XML_TYPE);
map.put("json", MediaType.APPLICATION_JSON_TYPE);

return map;
}

public Set<Class<?>> getProviderClasses() {
Set<Class<?>> rrcs = new HashSet<Class<?>>();
rrcs.add(ProductProvider.class);
rrcs.add(OrderNotFoundProvider.class);

return rrcs;
}
}


The Restlet OrderApplication class from my earlier example is now an instance of JaxRsApplication, to which the above mentioned OrderConfig class is attached:

public class OrderApplication extends JaxRsApplication {

/**
* Class Constructor. Attaches the {@link OrderConfig} class.
*
* @param context Restlet Context
*/
public OrderApplication(Context context) {
super(context);
attach(new OrderConfig());
}
}


One issue we need to address is how resource classes, mapper beans, services etc. will get autowired and injected, i.e., where is the Spring hook? The JaxRsApplication class supports custom resource-creation factories. This hook is utilized by creating a custom Spring ObjectFactory that instantiates and provides Spring-managed beans. Setting the hook into the JaxRsApplication is achieved using a custom Restlet ServerServlet as shown below:


public class SpringServlet extends ServerServlet {


public Application createApplication(Context context) {
JaxRsApplication application = (JaxRsApplication) super.createApplication(context);

// Set the Object Factory to Spring Object Factory
application.setObjectFactory(new SpringObjectFactory(getWebApplicationContext()
.getAutowireCapableBeanFactory()));

return application;
}

private static class SpringObjectFactory implements ObjectFactory {
private final AutowireCapableBeanFactory beanFactory;

public SpringObjectFactory(AutowireCapableBeanFactory beanFactory) {
this.beanFactory = beanFactory;
}

public <T> T getInstance(Class<T> jaxRsClass) throws InstantiateException {

@SuppressWarnings("unchecked")
T object = (T) beanFactory.createBean(jaxRsClass,
AutowireCapableBeanFactory.AUTOWIRE_AUTODETECT, false);

return object;
}
}

public WebApplicationContext getWebApplicationContext() {
return WebApplicationContextUtils.getRequiredWebApplicationContext(getServletContext());
}
}
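The pattern at work here is a pluggable factory: the runtime asks a factory for resource instances instead of calling new itself, so a Spring-backed factory can hand back fully wired beans. A self-contained sketch of that pattern (the interface and classes here are illustrative, not the Restlet or Spring API):

```java
// Sketch of the factory-hook pattern the SpringServlet relies on:
// resource creation is delegated to an ObjectFactory, so swapping in a
// Spring-backed factory changes how instances are built without
// touching the code that uses them.
public class FactoryHookDemo {

    public interface ObjectFactory {
        <T> T getInstance(Class<T> type);
    }

    // Default behavior: plain reflective instantiation, no injection.
    public static class DefaultFactory implements ObjectFactory {
        public <T> T getInstance(Class<T> type) {
            try {
                return type.newInstance();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }

    // Stand-in for a JAX-RS resource class the runtime would create.
    public static class GreetingResource {
        public String greet() { return "hello"; }
    }

    public static void main(String[] args) {
        ObjectFactory factory = new DefaultFactory();
        GreetingResource resource = factory.getInstance(GreetingResource.class);
        System.out.println(resource.greet()); // hello
    }
}
```

In the servlet above, the Spring-backed factory plays the role of DefaultFactory, except it calls AutowireCapableBeanFactory.createBean so that @Autowired fields are populated before the instance is returned.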

I could not use Restlet's SpringServerServlet for the implementation as it expects a Restlet Resource, Router, etc.

Thoughts on JSR 311 and JAXRS going forward:

I quite like the JSR 311 method of creating web services. I found the clean separation of Providers and Resources via annotations really helpful. The use of annotations makes the Resource code very easy to read, and the Resource code itself no longer works with Representations as before. I would like to introduce WADL and WADL2JAVA into the example at some point. I would also like to see better JSON support. In addition, I am curious how other implementations of JAXRS work, and in particular how they provide for easy integration with my favorite framework, Spring. One thing I regret is the lack of a Client API in the specification.

Environment on which the example was run:

OS - Windows Vista, JDK 1.6.x, Maven 2.0.8.

The JAXRS, Spring, Maven, Dozer example can be downloaded from HERE.

If you are unable to run the example, as always, ping me and I will be glad to help if I can :-)