Thursday, March 31, 2016

How does Spring's DispatcherServlet work

In Spring webmvc, there is a special servlet which is the portal between the servlet container and the Spring webmvc framework: the DispatcherServlet. A Spring web application usually maps all requests to this DispatcherServlet.

A servlet's job is to take in an HttpServletRequest, do all the business processing, and finally return an HttpServletResponse.

1. Basic process flow of DispatcherServlet

How does DispatcherServlet work

The most important components for understanding DispatcherServlet are 4 interfaces: HandlerMapping, HandlerAdapter, ViewResolver and View. Every interface has ONE core method. So in short, you can think of just 4 methods getting the main logic done in DispatcherServlet. The lines in the chart above also show how the result of each method feeds into the next.

2. More description

2.1 HandlerMapping

In a spring web application, there are usually a lot of @RequestMapping annotated methods within many @Controller annotated classes. HandlerMapping solves this problem: "Which method in which class should be used to process the current request?" Spring has default HandlerMapping instances built in. For example in spring 4.2.3.RELEASE, BeanNameUrlHandlerMapping and DefaultAnnotationHandlerMapping are used by default. Here is the type hierarchy of HandlerMapping.

image

In the HandlerMapping hierarchy, 2 classes are defaults in DispatcherServlet, although one of them is deprecated. The Spring documentation recommends using RequestMappingHandlerMapping in place of the deprecated DefaultAnnotationHandlerMapping.

The return type of getHandler(…) is HandlerExecutionChain. A HandlerExecutionChain is a combination of 1 handler plus any number of interceptors. You can think of it as a wrapper around the real handler. One thing to notice is that the type of the handler is Object, which means the handler that processes the incoming request can be any class! That's the ultimate flexibility spring provides.

But since the handler type is Object, which exposes no real business methods, how does DispatcherServlet use this "Object handler" to process the http request? That's exactly what HandlerAdapter does.

2.2 HandlerAdapter

The key method in the HandlerAdapter interface is handle(…, Object handler). The last parameter is the "Object handler" returned from HandlerMapping, unwrapped from the HandlerExecutionChain. In this handle method, a method of a @Controller class finally gets invoked.

image

One thing to notice is that HandlerMapping and HandlerAdapter are usually closely coupled. Although the type of the handler returned by HandlerMapping is Object in the method signature, it's actually some type that can only be recognized by a certain HandlerAdapter implementation. For example, the handler returned by RequestMappingHandlerMapping is actually an instance of org.springframework.web.method.HandlerMethod. Only RequestMappingHandlerAdapter knows how to invoke a HandlerMethod.

The return type of a HandlerAdapter's handle(…) method is ModelAndView. As the name says, it has the model and the view (or view name) in it.

2.3 ViewResolver

The task of ViewResolver is to find a View instance by the view's name. The return type of the method resolveViewName(…) is View.

image

2.4 View

The key method of a View instance is render(Map<String,?> model, …); here the final html result sent back to the user is created. The first parameter is the model data used to create the page. At this point the main flow of processing an incoming http request is over; the created html page will be sent back by the servlet container.

3. More

Besides all the above interfaces, another important interface you may come into contact with is HandlerExceptionResolver.

image

If you want to set a default view for all exceptions, or set different views for different types of exception, extending SimpleMappingExceptionResolver is very convenient.

See tutorial "Exception handling in spring mvc/rest application" on how to take care of exceptions in Spring webmvc.

What are the default handlers/resolvers in Spring's DispatcherServlet

Spring webmvc's DispatcherServlet relies on several important notions.

  • HandlerMapping - finds which method of which controller should process the current http request.
  • HandlerAdapter - uses HandlerMapping's result to actually perform the request handling, returning a ModelAndView object.
  • ViewResolver - finds the view object by the view's name in HandlerAdapter's return. This one plus the previous 2 form the main logic of spring's DispatcherServlet.
  • HandlerExceptionResolver - finds the correct method to handle any exception that occurred in the previous process.
  • LocaleResolver - resolves the Locale.
  • ThemeResolver - finds the theme.

The first 4 are the key components in spring webmvc's DispatcherServlet.

Here is how you can find out the default values for a certain spring webmvc version. All these default values are defined in a properties file, DispatcherServlet.properties, in the same directory as the DispatcherServlet class. So in an Eclipse maven project, you can simply open the spring-webmvc jar among the maven dependencies and open this file, like below.

image

Tuesday, March 29, 2016

How to use distributed JMeter to test netty server performance

JMeter is good at performance testing. Usually we need more than one host to send traffic to one server to see how the server behaves. This many-to-one test scenario is called distributed testing. In JMeter's world, the hosts that generate traffic are called 'jmeter servers'; you can think of them as slaves. The one on which you set up the test plan and control the start/stop of the others is called the 'jmeter client'; you can think of this one as the master.

image

Netty is a high performance NIO framework. In this demo, a simple echo server written with Netty 4 is what's going to be tested. The demo uses 5 other hosts to generate traffic to the echo server over long-lived connections. Each host establishes 20,000 connections to the server, 100,000 connections in total across the 5 hosts.

This article will show you:

  1. Basic concepts of JMeter, like Thread Group, Logic Controller, Sampler, Timer and Listener
  2. How to set up a JMeter test plan with a loop, thread group, timer, sampler and listeners
  3. How to run JMeter in a distributed test
  4. How to read the test result
  5. A basic understanding of Netty's performance. Since only 5 hosts are used to generate traffic, we can't fully show the capacity of the server, only some idea of the workload at 100,000 connections.

The JMeter version used in this article is 2.13. The server under test works as an echo server: it just sends back what is received. The echo server's code can be found at the end of this article.

1. JMeter basic elements

Every test you run is called a 'test plan'; it's a container for other elements. In a 'test plan', there is usually a 'thread group', which defines the concurrency of your test, like how many threads you want to use to run the 'sampler'.

What is a 'sampler'? It's the real action the test performs, like sending an http request, a tcp request or making a JDBC connection. A 'Timer' can make your test wait for some time before performing the next action. The name 'Listener' is a little abstract, but what a 'listener' does is just show you the result of the test. You can have more than one listener to show the result in different formats, e.g. in a table or in a chart.

JMeter also provides a bunch of 'logic controllers' to help you control the test logic. In this demo, a loop is used to send TCP requests repeatedly.

2. Configure the JMeter test plan

A test plan is the running unit of JMeter; it has all the test logic defined in it. In distributed testing, the test plan only needs to be defined once, on the master, and will be submitted to all slave hosts automatically.

Let's define a test plan that sends TCP requests repeatedly. The steps are similar for other kinds of tests.

2.1 Add 'Thread Group'

Right click the test plan and add a thread group.

image

The default thread group only has 1 thread. Change it to 20,000.

image

2.2 Add Loop

We need each thread to send a TCP request, receive the response, then wait for a few seconds and send again. So we need a loop; in this loop we send a TCP request and sleep. Right click the thread group and add a loop controller.

image

For this demo, let's choose loop forever.

image

2.3 Add Sampler

Now we have a big group of threads ready to do something. The sampler defines what the real task is. Right click on the loop controller and add a TCP sampler.

image

The TCP sampler needs some configuration.

image

Besides the ip and port of the server, there are 3 places that need configuration. First, check 're-use connection' for a long-lived connection. Second, set the string 'send message\n' as the payload sent to the server. Finally, set the EOL byte; JMeter's default is 0x00, and since we send string messages, '\n' is the delimiter for every message.

2.4 Add timer

Add a wait after each TCP request. Right click the TCP sampler and add a constant timer.

image

Change the delay to 5000 ms.

image

Now the test logic is done, but we need to add listeners to check the test result.

2.5 Add listeners

We add 2 listeners by right-clicking on the test plan. One is 'Aggregate Report' to see the statistical result of the test. The other listener displays the response time as a line chart (it's a JMeter plugin from the plugin standard set, see http://jmeter-plugins.org/). The test plan is ready to run; it now looks like below.

image

3. Run the test locally

Select 'Run' -> 'Start' to run the test locally. We can see the test result in the listeners. The 'Aggregate Report' listener looks like below.

image

The 'Response Times Over Time' listener shows the response time as a line chart.

image

So far so good. The server seems to handle clients from 1 host very easily. Next we are going to add more hosts to generate traffic. This is the so-called distributed testing.

4. Run the test remotely

4.1 Prerequisites

  • Every slave host has the same JMeter version installed (2.13 in this demo).
  • All hosts are in the same subnet, to reduce network delay.

4.2 Change the properties file of the 'master' jmeter

Modify the ${JMeter_Install_Dir}/bin/jmeter.properties file to add all slaves' IPs.

image

Also in this file, change the jmeter mode to 'StrippedAsynch' to reduce the master's workload. This is necessary for performance tests with large concurrency.

image

Remember to change this only on the master host. Now if you start JMeter on the master host, you can see the slaves in the menu.

image

Now the master, or we can say the jmeter client, is ready.

4.3 Start JMeter engine on every slave host

Run the 'jmeter-server' command on every slave host.

image

4.4 Run remotely

On the master, click the 'Remote Start All' button.

image

Very soon you will find the active thread count goes up to 100k (20k x 5 slaves).

image

Now the test result looks like below.

image

image

Check the server's workload: cpu usage is still quite low. On our 2-CPU server (Intel Xeon E5-2650 2.0GHz, 20M cache, 32 hardware threads in total), only about 2 cores are used.

image

Although the echo server has no real logic, the performance of the Netty framework is impressive.

5. More

Here is the source of our Netty 4 echo server. It has 2 classes. First the ServerHandler.java:

package com.shengwang.demo;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class ServerHandler extends ChannelInboundHandlerAdapter {

  @Override
  public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    ctx.writeAndFlush(msg);  // loop message back
  }
}

Then the main class.

package com.shengwang.demo;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

import java.io.IOException;

public class NettyServer {

  public static void main(String[] args) throws IOException, InterruptedException {
    NioEventLoopGroup bossGroup = new NioEventLoopGroup();
    NioEventLoopGroup workerGroup = new NioEventLoopGroup();
    ServerBootstrap bootstrap = new ServerBootstrap();
    bootstrap.group(bossGroup, workerGroup);
    bootstrap.channel(NioServerSocketChannel.class);
    
    // ===========================================================
    // 1. define a separate thread pool to execute handlers with
    // slow business logic. e.g database operation
    // ===========================================================
    final EventExecutorGroup group = new DefaultEventExecutorGroup(1500); // thread pool of 1500
    
    bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
      @Override
      protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(new StringEncoder()); 
        pipeline.addLast(new StringDecoder()); 
        
        //===========================================================
        // 2. run handler with slow business logic 
        //    in separate thread from I/O thread
        //===========================================================
        pipeline.addLast(group,"serverHandler",new ServerHandler()); 
      }
    });
    
    bootstrap.childOption(ChannelOption.SO_KEEPALIVE, true);
    bootstrap.bind(19000).sync();
  }
}

This echo server depends on netty 4.0.34.

    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
      <version>4.0.34.Final</version>
    </dependency>

Thursday, March 24, 2016

Netty tutorial - hello world example

Netty is an NIO client-server framework which enables quick and easy development of network applications. In this tutorial the basic concepts of Netty are introduced, along with a hello world level example. This hello world example, based on Netty 4, has a server and a client, including a heartbeat between them and POJO sending and receiving.

1. Concepts

Netty's high performance relies on NIO. Netty has several important concepts: channel, pipeline, and inbound/outbound handlers.

A Channel can be thought of as a tunnel that I/O requests go through. Every Channel has its own pipeline. On the API level, the most used channels are io.netty.channel.socket.nio.NioServerSocketChannel for a socket server and io.netty.channel.socket.nio.NioSocketChannel for a socket client.

The pipeline is one of the most important notions in Netty. You can treat a pipeline as a bi-directional queue. The queue is filled with inbound and outbound handlers. Every handler works like a servlet filter. As the names say, "Inbound" handlers only process read-in I/O events, "Outbound" handlers only process write-out I/O events, and "InOutbound" handlers process both. For example, a pipeline configured with 5 handlers looks like below.

image

This pipeline is equivalent to the following logic. The input I/O event is processed by handlers 1-3-4-5. The output is processed by handlers 5-2.

image

In a real project, the first input handler, handler 1 in the chart above, is usually a decoder. The last output handler, handler 2 in the chart above, is usually an encoder. The last InOutbound handler usually does the real business: it processes the input data object and sends the reply back. In real usage, the last business logic handler often executes in a different thread from the I/O thread so that the I/O is not blocked by any time-consuming tasks (see the example below).

A decoder transfers the read-in ByteBuf into the data structure used by the business logic above, e.g. it turns the byte stream into POJOs. If a frame has not been fully received, the decoder simply waits (without blocking the I/O thread) until the frame is complete, so the next handler never needs to deal with a partial frame.

An encoder transfers the internal data structure into a ByteBuf that will finally be written out by the socket.

How does an event flow through all the handlers? One thing to notice is that every handler is responsible for propagating the event to the next handler. A handler needs to explicitly invoke a method of ChannelHandlerContext to trigger the next handler to work (a small pass-through example follows the two lists below). Those methods include:

Inbound event propagation methods:

  • ChannelHandlerContext.fireChannelRegistered()
  • ChannelHandlerContext.fireChannelActive()
  • ChannelHandlerContext.fireChannelRead(Object)
  • ChannelHandlerContext.fireChannelReadComplete()
  • ChannelHandlerContext.fireExceptionCaught(Throwable)
  • ChannelHandlerContext.fireUserEventTriggered(Object)
  • ChannelHandlerContext.fireChannelWritabilityChanged()
  • ChannelHandlerContext.fireChannelInactive()
  • ChannelHandlerContext.fireChannelUnregistered()

Outbound event propagation methods:

  • ChannelHandlerContext.bind(SocketAddress, ChannelPromise)
  • ChannelHandlerContext.connect(SocketAddress, SocketAddress, ChannelPromise)
  • ChannelHandlerContext.write(Object, ChannelPromise)
  • ChannelHandlerContext.flush()
  • ChannelHandlerContext.read()
  • ChannelHandlerContext.disconnect(ChannelPromise)
  • ChannelHandlerContext.close(ChannelPromise)
  • ChannelHandlerContext.deregister(ChannelPromise)

The demo in this article sets up a heartbeat between client and server to keep the long-lived connection alive. Netty's IdleStateHandler is used to fire a heartbeat on idle. In this IdleStateHandler, fireUserEventTriggered() is invoked to trigger the next handler into action.

2. Hello world example using Netty 4

This example has 1 server and 1 client. A long-lived connection is used for data transfer. A heartbeat message is sent from server to client if there has been no data between them for 5 seconds. The heartbeat message carries a timestamp of the sending time. The client does nothing with the heartbeat but simply sends it back to the server. The server can then print the loopback delay by subtracting the sending time from the receiving time.

This example shows:

  • How to send/recv POJOs with the help of an encoder/decoder
  • How to add a heartbeat to a long-lived connection

The pipeline of demo server looks like below.

image

The IdleStateHandler is located at the very beginning of the pipeline, so it can judge whether there is traffic in or out even if the input traffic is in a wrong frame format.

The pipeline of demo client looks like below.

image

 

2.1 Add netty dependency

    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
      <version>4.0.34.Final</version>
    </dependency>

Add netty to your pom.xml if maven is used.

2.2 Define Common Classes

There are 3 classes used by both server and client: a POJO class LoopBackTimeStamp.java, whose instances are sent and received, an encoder class TimeStampEncoder.java and a decoder class TimeStampDecoder.java.

First the LoopBackTimeStamp.java

package com.shengwang.demo;

import java.nio.ByteBuffer;

public class LoopBackTimeStamp {
  private long sendTimeStamp;
  private long recvTimeStamp;

  public LoopBackTimeStamp() {
    this.sendTimeStamp = System.nanoTime();
  }

  public long timeLapseInNanoSecond() {
    return recvTimeStamp - sendTimeStamp;
  }

  /**
   * Transfer 2 long numbers to a 16-byte byte[]; every 8 bytes represent one long number.
   * @return
   */
  public byte[] toByteArray() {

    final int byteOfLong = Long.SIZE / Byte.SIZE;
    byte[] ba = new byte[byteOfLong * 2];
    byte[] t1 = ByteBuffer.allocate(byteOfLong).putLong(sendTimeStamp).array();
    byte[] t2 = ByteBuffer.allocate(byteOfLong).putLong(recvTimeStamp).array();

    for (int i = 0; i < byteOfLong; i++) {
      ba[i] = t1[i];
    }

    for (int i = 0; i < byteOfLong; i++) {
      ba[i + byteOfLong] = t2[i];
    }
    return ba;
  }

  /**
   * Transfer a 16-byte byte[] back to 2 long numbers; every 8 bytes represent one long number.
   * @param content
   */
  public void fromByteArray(byte[] content) {
    int len = content.length;
    final int byteOfLong = Long.SIZE / Byte.SIZE;
    if (len != byteOfLong * 2) {
      System.out.println("Error on content length");
      return;
    }
    ByteBuffer buf1 = ByteBuffer.allocate(byteOfLong).put(content, 0, byteOfLong);
    ByteBuffer buf2 = ByteBuffer.allocate(byteOfLong).put(content, byteOfLong, byteOfLong);
    buf1.rewind();
    buf2.rewind();
    this.sendTimeStamp = buf1.getLong();
    this.recvTimeStamp = buf2.getLong();
  }
  
  // getter/setter ignored
}

The LoopBackTimeStamp class holds 2 long numbers. It also has 2 methods: toByteArray() transfers the 2 internal long numbers into a byte array of 16 bytes, and fromByteArray() works in reverse, changing a 16-byte array back into the 2 long numbers.

Then the encoder and decoder. The encoder TimeStampEncoder transfers a LoopBackTimeStamp object into a byte array that can be sent out.

package com.shengwang.demo.codec;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;

import com.shengwang.demo.LoopBackTimeStamp;

public class TimeStampEncoder extends MessageToByteEncoder<LoopBackTimeStamp> {
  @Override
  protected void encode(ChannelHandlerContext ctx, LoopBackTimeStamp msg, ByteBuf out) throws Exception {
    out.writeBytes(msg.toByteArray());
  }
}

The decoder transfers bytes received from the socket into a LoopBackTimeStamp object for the business handler to process.

package com.shengwang.demo.codec;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

import java.util.List;

import com.shengwang.demo.LoopBackTimeStamp;

public class TimeStampDecoder extends ByteToMessageDecoder {

  @Override
  protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
    final int messageLength = Long.SIZE/Byte.SIZE *2;
    if (in.readableBytes() < messageLength) {
      return;
    }
    
    byte [] ba = new byte[messageLength];
    in.readBytes(ba, 0, messageLength);  // read a full 16-byte frame (guaranteed by the check above)
    LoopBackTimeStamp loopBackTimeStamp = new LoopBackTimeStamp();
    loopBackTimeStamp.fromByteArray(ba);
    out.add(loopBackTimeStamp);
  }
}

The decoder tries to read 16 bytes as a whole, then creates a LoopBackTimeStamp object from this 16-byte array. If fewer than 16 bytes have been received, it simply returns and is called again when more data arrives, so only complete frames are delivered downstream.

2.3 Define server classes

Besides the above 3 common classes, server and client each have 2 classes of their own: a main class plus a handler with the real logic. The logic handler for the server, ServerHandler.java, is as follows.

package com.shengwang.demo;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;

public class ServerHandler extends ChannelInboundHandlerAdapter {

  @Override
  public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    LoopBackTimeStamp ts = (LoopBackTimeStamp) msg;
    ts.setRecvTimeStamp(System.nanoTime());
    System.out.println("loop delay in ms : " + 1.0 * ts.timeLapseInNanoSecond() / 1000000L);
  }

  // Here is how we send out the heartbeat when the connection has been idle too long
  @Override
  public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
    if (evt instanceof IdleStateEvent) {
      IdleStateEvent event = (IdleStateEvent) evt;
      if (event.state() == IdleState.ALL_IDLE) { // idle for no read and write
        ctx.writeAndFlush(new LoopBackTimeStamp());
      }
    }
  }

  @Override
  public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    // Close the connection when an exception is raised.
    cause.printStackTrace();
    ctx.close();
  }
}

All three methods are overridden. The first, channelRead(), reads the loopback message and prints out the time spent on the round trip. The second method handles the event fired by IdleStateHandler (you may want to scroll up to review how the server pipeline is configured); when idle for too long, a LoopBackTimeStamp object is sent out as a heartbeat.

Another class for server is a main class NettyServer.java.

package com.shengwang.demo;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.timeout.IdleStateHandler;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;
import java.io.IOException;
import com.shengwang.demo.codec.TimeStampDecoder;
import com.shengwang.demo.codec.TimeStampEncoder;

public class NettyServer {

  public static void main(String[] args) throws IOException, InterruptedException {
    NioEventLoopGroup bossGroup = new NioEventLoopGroup();
    NioEventLoopGroup workerGroup = new NioEventLoopGroup();
    ServerBootstrap bootstrap = new ServerBootstrap();
    bootstrap.group(bossGroup, workerGroup);
    bootstrap.channel(NioServerSocketChannel.class);
    
    // ===========================================================
    // 1. define a separate thread pool to execute handlers with
    //    slow business logic. e.g database operation
    // ===========================================================
    final EventExecutorGroup group = new DefaultEventExecutorGroup(1500); //thread pool of 1500
    
    bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
      @Override
      protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast("idleStateHandler",new IdleStateHandler(0,0,5)); // add with name
        pipeline.addLast(new TimeStampEncoder()); // add without name, name auto generated
        pipeline.addLast(new TimeStampDecoder()); // add without name, name auto generated
        
        //===========================================================
        // 2. run handler with slow business logic 
        //    in separate thread from I/O thread
        //===========================================================
        pipeline.addLast(group,"serverHandler",new ServerHandler()); 
      }
    });
    
    bootstrap.childOption(ChannelOption.SO_KEEPALIVE, true);
    bootstrap.bind(19000).sync();
  }
}

Most of the main code is boilerplate for initializing a netty server; pay attention to how those handlers are added to the pipeline and how the business logic handler runs in a separate thread.

2.4 Define client classes

The client, like the server, also has 2 classes: a main class plus a handler. The ClientHandler class, like the ServerHandler class, is also an inbound handler that only processes incoming messages.

package com.shengwang.demo;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class ClientHandler extends ChannelInboundHandlerAdapter {

  @Override
  public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    LoopBackTimeStamp ts = (LoopBackTimeStamp) msg;
    ctx.writeAndFlush(ts); // received message is sent back directly
  }

  @Override
  public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    // Close the connection when an exception is raised.
    cause.printStackTrace();
    ctx.close();
  }
}

The client reads a message and directly sends it back to complete the loop.

The main class for client, NettyClient.java,  shows below.

package com.shengwang.demo;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

import com.shengwang.demo.codec.TimeStampDecoder;
import com.shengwang.demo.codec.TimeStampEncoder;

public class NettyClient {

  public static void main(String[] args) {
    NioEventLoopGroup workerGroup = new NioEventLoopGroup();
    Bootstrap b = new Bootstrap();
    b.group(workerGroup);
    b.channel(NioSocketChannel.class);

    b.handler(new ChannelInitializer<SocketChannel>() {
      @Override
      public void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline().addLast(new TimeStampEncoder(),new TimeStampDecoder(),new ClientHandler());
      }
    });
    
    String serverIp = "192.168.203.156";
    b.connect(serverIp, 19000);
  }
}

The demo client connects to a hard-coded ip and port.

Finally the project hierarchy looks like:

image

 

3. Run it

First let's run the server, then open another window to run the client. After the client connects, you see that every 5 seconds a loopback trip message is printed out on screen.

image

Furthermore, this demo is also used to coarsely estimate the hardware requirements in our project for a server supporting a large number of long-lived client connections. When running the NettyServer on a server with 2 CPUs (Xeon E5-2650 2.0GHz, 20M cache, 8 cores, 16 threads each) and 32G RAM, the workload looks like below with 264,000 connections.

image

6 hosts are used as clients to run NettyClient, so every host has about 40,000 connections. The connections on the same client host trigger heartbeats at the same time, so the cpu usage roughly reflects this workload. If the heartbeats were scattered a little, the cpu workload would drop noticeably.

Thursday, March 17, 2016

Hello world Spark example

In this article there are 3 hello world level demos. The difference between them is where Spark gets its input data.

  • In the first one, Spark gets its input data from an external socket server. This method is often used in the testing or development phase.
  • In the second demo, Spark gets its input data from a Flume agent.
  • In the third one, Spark gets its input data from Kafka. The last one is probably the most common in real projects.

1. Brief

Flume, Spark and Kafka are all Apache projects.

Flume is mainly designed to move data, large amounts of data, from one place to another, especially from the non-big-data world into the big-data world, e.g. from log files to HDFS. Usually the role of a Flume agent in the cluster is data producer.

Spark is designed to process large-scale data in a near-real-time way. Input data is processed in consecutive batches of operations, usually map-reduce operations. If the batch interval is set to 10 seconds, a map-reduce operation processes the data arriving in that time window every 10 seconds. A working scenario is a Spark job whose input is an infinite stream of lines of web access log. Usually the role of Spark in the cluster is data consumer.

Kafka, like JMS in a traditional non-big-data application, can be used to connect Flume, the producer, to Spark, the consumer.

2. Spark demos

All three spark demos just get input from a stream and print it to the screen. Every demo is very simple and only has one main class. You can replace the print-out with some real task for your own use. In this article all demos run on CDH release 5.4.9 (flume 1.5.0, spark 1.3.0, kafka 2.0).

Pay attention to how they get the input stream as well as how they process the stream. The generic types of the streams are different: String, SparkFlumeEvent and Tuple2<String,String> for demos 1, 2 and 3 respectively.

2.1 Data from external socket server.

In demo1, Spark gets its input from an external socket server. Add a dependency on spark-streaming_xxx to your pom.

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.10</artifactId>
      <version>1.3.0</version>
    </dependency>

The main class is defined like below.

package com.shengwang.demo;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;


public class NakedSparkMain {
  private static final int INTERVAL = 5000;
  
  private static final String DATA_HOST = "localhost";
  private static final int DATA_PORT = 11111;
  
  public static void main(String[] args) {
    NakedSparkMain main = new NakedSparkMain();
    main.createJavaStreamingDStream();
  }
  
  private void createJavaStreamingDStream() {
    // =========================================
    // init Streaming context
    // =========================================
    SparkConf conf = new SparkConf().setAppName("naked-spark application");
    JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(INTERVAL));

    // =========================================
    // Data input - get data from socket server
    // =========================================
    JavaReceiverInputDStream<String> stringStream = ssc.socketTextStream(DATA_HOST, DATA_PORT); 
    
    // =========================================
    // Data process part - print every line 
    // =========================================
    stringStream.foreach(x -> {
      x.foreach(y -> {
        System.out.println("----- message -----");
        System.out.println("input = " + y);
      });
      return null;
    });
    
    ssc.start();
    ssc.awaitTermination();
  }
}

2.2 Data from Flume sink

In demo2, Spark gets its input from a flume avro agent. Add a dependency on spark-streaming-flume_xxx to your pom.

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-flume_2.10</artifactId>
      <version>1.3.0</version>
    </dependency>

The main class is defined like below.

package com.sanss.qos.spark.stream.flume;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.flume.FlumeUtils;
import org.apache.spark.streaming.flume.SparkFlumeEvent;

public class FlumeMain {
  private static final int INTERVAL = 5000;
  
  private static final String FLUME_HOST = "localhost";
  private static final int FLUME_PORT = 11111;
  
  public static void main(String[] args) {
    FlumeMain main = new FlumeMain();
    main.createJavaStreamingDStream();
  }
  
  private void createJavaStreamingDStream() {
    // =========================================
    // init Streaming context
    // =========================================
    SparkConf conf = new SparkConf().setAppName("flume-to-spark application");
    JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(INTERVAL));
    
    // =========================================
    // Data input - get data from flume agent
    // =========================================
    JavaReceiverInputDStream<SparkFlumeEvent> flumeStream = FlumeUtils.createStream(ssc, FLUME_HOST, FLUME_PORT); 
    
    // =========================================
    // Data process part - print every line 
    // =========================================
    flumeStream.foreach(x -> {
      x.foreach(y -> {
        System.out.println("----- message -----");
        System.out.println("input = " + getMessageFromFlumeEvent(y));
      });
      return null;
    });
    
    ssc.start();
    ssc.awaitTermination();
  }

  private String getMessageFromFlumeEvent(SparkFlumeEvent x) {
    return new String (x.event().getBody().array());
  }
}

This demo hardcodes the flume sink ip and port to localhost and 11111; make sure the flume configuration matches the code.

2.3 Data from Kafka

In demo3, Spark gets its input from a kafka topic. Add a dependency on spark-streaming-kafka_xxx to your pom.

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka_2.10</artifactId>
      <version>1.3.0</version>
    </dependency>

The main class is defined like below.

package com.sanss.qos.spark.stream.flume;

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class KafkaSparkMain {
  private static final int INTERVAL = 5000;
  
  private String zookeeper = "cdh01:2181";   // hardcode zookeeper
  private String groupId = "qosTopic";
  private String topicName = "mytopic1";
  private Map<String, Integer> topics = new HashMap<>();
  
  public static void main(String[] args) {
    KafkaSparkMain main = new KafkaSparkMain();
    main.createJavaStreamingDStream();
  }
  
  private void createJavaStreamingDStream() {
    // =========================================
    // init Streaming context
    // =========================================
    SparkConf conf = new SparkConf().setAppName("kafka-to-spark application");
    JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(INTERVAL));

    // =========================================
    // Data input - get data from kafka topic
    // =========================================
    topics.put(topicName, 1);
    JavaPairReceiverInputDStream<String, String> kafkaStream = KafkaUtils.createStream(ssc, zookeeper, groupId, topics);  
  
    // =========================================
    // Data process part - print every line 
    // =========================================
    kafkaStream.foreach(x -> {
      x.foreach(y -> {
        System.out.println("----- message -----");
        System.out.println("input = " + y._1 + "=>" + y._2);
      });
      return null;
    });
    
    ssc.start();
    ssc.awaitTermination();
  }
}

This demo hardcodes reading data from the kafka topic 'mytopic1'; make sure you have data published to that topic.

3. Run the demo

For simplicity, we only run the first demo; the other 2 give the same result but need more external environment. The first one only needs an external socket server, which the linux netcat tool can provide. That's also why the first one is so convenient during the development phase.

First let's start a socket server listening on port 11111 (the DATA_PORT hardcoded in the demo) with netcat.

image

Now start our spark application on a server with spark installed.

spark-submit --master local[*]  --class com.shengwang.demo.NakedSparkMain  /usr/sanss/qos-spark-1.0.jar

After the spark application runs, you will find it dumps a lot of log every 5 seconds, because we set the batch duration to 5 seconds in the program. Now type something in the netcat window.

image

You will see the message printed out in the spark output like below.

image

Wednesday, March 16, 2016

How to send message to Flume (avro or thrift) in Java

Apache Flume is mostly used to transfer data from the "out-of-Hadoop" world into the Hadoop family. In Flume, every message transferred is called an 'Event', an input is called a "Source" and an output is called a "Sink".

1. Flume Source & Sink type

Flume has many input and output types.

Here are some widely used source (input) types:

  • "syslog" is comming with Unix system. 
  • "avro" and "thrift", these 2 are very similar. Both are apache projects.  Flume will listen on a given port for RPC call and data is encoded by Avro or Thrift way repectively.
  • "netcat"  listens on a given port and turns each line of text into an event. Just like a normal socket server. Every line is an Event.
  • "exec", get source from a external command, like 'tail -f you_log_file'
  • "http", A source type which accepts Flume Events by HTTP POST and GET.
  • "jms", allows convert message from JMS. e.g.  ActiveMQ, into flume event
  • "kafka", reads message from Apache Kafka topic.

Here are some widely used sink (output) types:

  • "logger", log event at INFO level. Typically useful for testing/debugging purpose.
  • "hdfs", writes events into the Hadoop Distributed File System (HDFS)
  • "hive", write events into a Hive table or partition
  • "hbase" and "asynchbase", writes data to HBase in sync or async way.
  • "avro" and "thrift", events sent to this sink are turned into Avro or Thrift events and sent to the configured hostname / port pair.
  • "null" just discards all events like /dev/null
  • "kafka", public data to kafka topic.

2. Flume client for Avro/Thrift source.

If you are using an avro or thrift source as message input, the following is a java client to send messages into flume. This is useful for development or testing. For me, a client like this helps me debug my spark code.

package com.shengwang.demo;

import java.nio.charset.Charset;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class DemoMain {

  public static void main(String[] args) throws EventDeliveryException {
    String ip = "192.168.203.156";
    int port = 34343;
    
//  RpcClient client = RpcClientFactory.getDefaultInstance(ip, port); // Avro
    RpcClient client = RpcClientFactory.getThriftInstance(ip, port);  // Thrift
    Event event = EventBuilder.withBody("hello flume",Charset.forName("UTF8"));
    client.append(event);
    client.close();
  }
}

You need to add the following dependency to your pom.xml file.

    <dependency>
      <groupId>org.apache.flume</groupId>
      <artifactId>flume-ng-sdk</artifactId>
      <version>1.5.0</version>
    </dependency>

Since I'm using CDH 5.4.9, the flume version is 1.5.0 in my environment. Choose the version that matches yours. The project hierarchy looks like below.

image

3. Run it

In my lab's configuration, flume has a thrift source and outputs events to kafka, so I can use the kafka console consumer to print out the messages. The message is sent from my Java client to Flume, transferred to a Kafka topic by flume, and finally gets consumed and printed out on the command line.

So first start kafka-console-consumer, a tool that comes with the kafka package, on the server running kafka; it will block and wait for messages. Then run our DemoMain client. The message "hello flume" will print out on screen.

image

Friday, March 4, 2016

Java-based Spring mvc configuration - with Spring security

This article assumes you already understand how to configure Spring mvc with java-based configuration and without Spring security. If not, first check the article "Java-based Spring mvc configuration - without Spring security".

This article is a hello world level spring mvc application with following features:

  • All configuration is java-based
  • Spring security enabled
  • User/password/rolename are stored in a database (MySQL, in this demo)
  • The tables used to store username/password/rolename are not spring security's default schema. (Spring security has a default table schema for user and authority storage, but it's not flexible enough.)

This article can also be used as a hello world level example of how to use Spring security in a Spring webmvc application.

0. What you need

  • Maven 3.2+
  • Spring 4 +
  • Spring security 4.+

1.Brief about using spring security

To use Spring security in your project you need to define your own class, MyUserDetailsService in this demo, which implements the interface UserDetailsService (read-only user info) or the interface UserDetailsManager (can create new users). These 2 interfaces are defined in Spring security. Spring security also provides an in-memory implementation of these interfaces, but it's only for prototyping or testing, not for real usage.

The user information type used in Spring security is an interface called UserDetails. To keep your code clean, you can implement this interface on your user entity definition, so the entity loaded from the database can be used in the spring security framework directly.

Spring security internally uses servlet filters to fulfill all its functions. You need to create a DaoAuthenticationProvider and set your own MyUserDetailsService on it.

2. Cheat sheet

step1. Define the entity classes, implementing spring security's interfaces (UserDetails and GrantedAuthority for user and authority respectively).

step2. Define a class that implements UserDetailsService or UserDetailsManager. In this demo it's MyUserDetailsService.java.

step3. Define a java config class that extends WebSecurityConfigurerAdapter. In this demo it's SecurityConfig.java. There are 3 things defined in this class.

  • A bean of type DaoAuthenticationProvider, which uses the UserDetailsService implementation defined in step 2.
  • A config method that uses the just-defined DaoAuthenticationProvider bean. (A config method is a method annotated with @Autowired; the method name is not important, and the arguments of the method are autowired from the spring context.)
  • An overridden configure(HttpSecurity http) method that sets all access rules based on url patterns.

step4. Define a java class that extends AbstractSecurityWebApplicationInitializer. In this demo it's SecurityWebApplicationInitializer.java.

step5. Finally make sure the SecurityConfig from step 3 is loaded. In this demo just add SecurityConfig to getRootConfigClasses() in the class that extends AbstractAnnotationConfigDispatcherServletInitializer.

All the types named in these steps come from Spring itself.

Still not clear on what to do? Don't worry, check the demo code below first, then come back to see whether you've got it.

3. Demo

3.1. Define Entity class

There are 2 entity classes: one for username+password, the other for the user's rolename. One user can have more than one role.

package com.shengwang.demo.model;

import org.springframework.security.core.userdetails.UserDetails;
// omit other import


@Entity
@Table(name="t_user")
public class UserEntity implements UserDetails{

  @Id
  @GeneratedValue(strategy=GenerationType.IDENTITY)
  private Long userId;
  private String username;
  private String password;
  private Boolean enabled;

  // Make sense to Eager fetch user's authorities
  @OneToMany(mappedBy="user",fetch=FetchType.EAGER)
  private List<AuthorityEntity> authorities;

  // omit getter/setter
  
  // methods from UserDetails interface
  @Override
  public boolean isAccountNonExpired() {
    return true;  // database has no mapping fields, always true
  }

  @Override
  public boolean isAccountNonLocked() {
    return true;  // database has no mapping fields, always true
  }

  @Override
  public boolean isCredentialsNonExpired() {
    return true;  // database has no mapping fields, always true
  }

  @Override
  public boolean isEnabled() {
    return enabled;
  }
}

This UserEntity implements spring security's UserDetails interface. For simplicity, it only has 3 non-trivial fields besides the primary key: username, password and enabled. The other entity is defined in AuthorityEntity.java.

package com.shengwang.demo.model;

import org.springframework.security.core.GrantedAuthority;
// omit other import

@Entity
@Table(name="t_authority")
public class AuthorityEntity implements GrantedAuthority{

  private static final long serialVersionUID = 1204090309640166925L;

  @Id
  @GeneratedValue(strategy=GenerationType.IDENTITY)
  private long id;
  private String roleName;

  //bi-directional many-to-one association to User
  @ManyToOne
  @JoinColumn(name="userId")
  private UserEntity user;
  
  // omit getter/setter
  
  // mothod from interface GrantedAuthority
  @Override
  public String getAuthority() {
    return this.roleName;
  }  
}

The AuthorityEntity implements spring security's GrantedAuthority interface. The only non-trivial field is a string for the role name. The point of these entity definitions is the interfaces they implement.

3.2 Define a class that implements UserDetailsService

Here is our MyUserDetailsService (choose a better name in your real project). It only has 1 method, which accesses the database to find a user by username.

package com.shengwang.demo.service;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.stereotype.Service;

import com.shengwang.demo.model.UserRepository;

@Service
public class MyUserDetailsService implements UserDetailsService {

  @Autowired
  UserRepository userRepo;
  
  @Override
  public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
    return userRepo.findByUsername(username);
  }
}

The method loadUserByUsername() accesses the database to load the user entity. Since the user entity class implements the UserDetails interface, it can be returned directly.

3.3 Define config class extends from WebSecurityConfigurerAdapter

This SecurityConfig class, extending WebSecurityConfigurerAdapter, is the key to the Spring security configuration. In fact it will eventually cause Spring to create the "springSecurityFilterChain". This class carries the annotation @EnableWebSecurity.

package com.shengwang.demo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.security.authentication.dao.DaoAuthenticationProvider;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

import com.shengwang.demo.service.MyUserDetailsService;

@EnableWebSecurity

public class SecurityConfig extends WebSecurityConfigurerAdapter {
  Logger logger = LoggerFactory.getLogger(SecurityConfig.class);

//  // In memory authentication, only for prototype or demo
//  @Autowired
//  public void configAuthenticationProvider(AuthenticationManagerBuilder auth) throws Exception {
//    logger.info("hello");
//    auth.inMemoryAuthentication().withUser("user").password("password").roles("USER");
//  }

  /**
   * 1. Define a DaoAuthenticationProvider bean. This bean use our own
   * UserDetailsService implementation for load user from database
   */
  @Bean
  DaoAuthenticationProvider daoAuthenticationProvider(MyUserDetailsService myUserDetailsService) {
    DaoAuthenticationProvider daoAuthenticationProvider = new DaoAuthenticationProvider();
    daoAuthenticationProvider.setUserDetailsService(myUserDetailsService);
    return daoAuthenticationProvider;
  }

  /**
   * 2. Use above bean for Authentication
   */
  @Autowired
  public void configAuthenticationProvider(AuthenticationManagerBuilder auth,
      DaoAuthenticationProvider daoAuthenticationProvider) throws Exception {
    auth.authenticationProvider(daoAuthenticationProvider);
  }

  /**
   * 3. Define access rules based on request url pattern. 
   * The rules in this demo: 
   *    Anyone can access homepage / 
   *    Anyone can access static resources like /imgs/** 
   *    Only "ADMIN" role can access url under /admin/** 
   *    Only "USER" role can access url under /**
   */
  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests().antMatchers("/").permitAll(); 
    http.authorizeRequests().antMatchers("/imgs/**").permitAll(); 
    http.authorizeRequests().antMatchers("/admin/**").hasRole("ADMIN");
    http.authorizeRequests().antMatchers("/**").hasRole("USER").and().formLogin();
  }
}

Notice the three things defined here: a bean of type DaoAuthenticationProvider, a method that sets the authentication builder to use that bean, and finally an overridden method to set up your own access rules based on http urls.

3.4 Define a class that extends AbstractSecurityWebApplicationInitializer

This class finally gets the springSecurityFilterChain registered. Although it's empty, it's mandatory!

package com.shengwang.demo;

import org.springframework.security.web.context.AbstractSecurityWebApplicationInitializer;

public class SecurityWebApplicationInitializer extends AbstractSecurityWebApplicationInitializer { 

}

3.5 Load the SecurityConfig into the root context

Load the SecurityConfig class into the Spring web root context.

package com.shengwang.demo;

import javax.servlet.Filter;

import org.springframework.web.filter.DelegatingFilterProxy;
import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

public class MyWebApplicationInitializerWithSpringSecurity extends AbstractAnnotationConfigDispatcherServletInitializer {

  @Override
  protected Class<?>[] getRootConfigClasses() {
    // compare to no spring security scenario
    // the only change is adding SecurityConfig.class 
    return new Class[] {SecurityConfig.class,RootConfig.class};
  }

  @Override
  protected Class<?>[] getServletConfigClasses() {
    return new Class[] {DispatcherConfig.class};
  }
  
  @Override
  protected String[] getServletMappings() {
    return new String[] { "/" };
  }

  // register other filters used in application
  @Override
  protected Filter[] getServletFilters() {
    return new Filter[] { new DelegatingFilterProxy("mdcInsertingServletFilter") };
  }
}

All done. Now the project hierarchy may look like below.

image

4. Run it

Suppose we have our entity tables ready. User 'admin' has both ADMIN role and USER role. User 'tom' has only USER role.

imageimage

We have a simple index page, as below, with 3 links: the first to /admin, the second to /user, the last to a static resource /imgs/sample.jpg, just to test the access rules we set in SecurityConfig.java.

image

Clicking the last link directly shows you the image without asking you to log in.

image

 

If you click the first link, to /admin, a default (and ugly) login page appears asking you to log in.

image

If we log in as a user WITHOUT the ADMIN role, like above, the access to /admin results in 403 - Access Denied.

image

Thursday, March 3, 2016

Replace jpa persistence.xml with java based configuration in Spring

JPA's /META-INF/persistence.xml with transaction-type set to "RESOURCE_LOCAL" can be replaced with Spring configuration.

The demo persistence.xml looks like below.

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
  xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
  <persistence-unit name="hibernate-persistence"  transaction-type="RESOURCE_LOCAL">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <class>com.shengwang.demo.security.model.UserEntity</class>
    <class>com.shengwang.demo.security.model.AuthorityEntity</class>
    <exclude-unlisted-classes>false</exclude-unlisted-classes>
    <properties>
      <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver" />
      <property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/spring" />
      <property name="javax.persistence.jdbc.user" value="root" />
      <property name="javax.persistence.jdbc.password" value="IHave1Dream!" />
      <property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5Dialect"/>
      
      <!-- ================================== -->
      <!-- some optional hibernate properties -->
      <!-- ================================== -->
      <property name="hibernate.hbm2ddl.auto" value="create"/>
      <property name="hibernate.hbm2ddl.import_files" value="insert-data.sql"/>
      <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.ImprovedNamingStrategy"/> 
    </properties>
  </persistence-unit>
</persistence>

This xml file can be replaced with the following spring config.

package com.shengwang.demo;

// omit import

@Configuration
@EnableJpaRepositories(basePackages = "com.shengwang.demo.model")
public class RootConfig {

  @Bean
  public BasicDataSource dataSource() {
    // org.apache.commons.dbcp.BasicDataSource
    BasicDataSource basicDataSource = new BasicDataSource();
    basicDataSource.setDriverClassName("com.mysql.jdbc.Driver");
    basicDataSource.setUrl("jdbc:mysql://localhost:3306/spring");
    basicDataSource.setUsername("root");
    basicDataSource.setPassword("IHave1Dream!");
    return basicDataSource;
  }

  @Bean
  public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
    LocalContainerEntityManagerFactoryBean entityManagerFactory = new LocalContainerEntityManagerFactoryBean();
//    entityManagerFactory.setPersistenceUnitName("hibernate-persistence");
    entityManagerFactory.setDataSource(dataSource);
    entityManagerFactory.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
    entityManagerFactory.setJpaDialect(new HibernateJpaDialect());
    entityManagerFactory.setPackagesToScan("com.shengwang.demo.model");
    
    entityManagerFactory.setJpaPropertyMap(hibernateJpaProperties());
    return entityManagerFactory;
  }

  private Map<String, ?> hibernateJpaProperties() {
    HashMap<String, String> properties = new HashMap<>();
    properties.put("hibernate.hbm2ddl.auto", "create");
    properties.put("hibernate.show_sql", "false");
    properties.put("hibernate.format_sql", "false");
    properties.put("hibernate.hbm2ddl.import_files", "insert-data.sql");
    properties.put("hibernate.ejb.naming_strategy", "org.hibernate.cfg.ImprovedNamingStrategy");
    
    properties.put("hibernate.c3p0.min_size", "2");
    properties.put("hibernate.c3p0.max_size", "5");
    properties.put("hibernate.c3p0.timeout", "300"); // 5mins
    
    return properties;
  }

  @Bean
  public JpaTransactionManager transactionManager(EntityManagerFactory emf) {
    //org.springframework.orm.jpa.JpaTransactionManager
    JpaTransactionManager jpaTransactionManager = new JpaTransactionManager();
    jpaTransactionManager.setEntityManagerFactory(emf);
    return jpaTransactionManager;
  }

}

In the Java-based config, first define a DataSource bean, then use this data source to create the entityManagerFactory bean, and finally use the entityManagerFactory bean to create a transactionManager bean.
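
A quick way to verify the Java-based config is to bootstrap it directly; here is a minimal sketch (the Main class below is hypothetical, not part of the demo):

package com.shengwang.demo;

import javax.persistence.EntityManagerFactory;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class Main {
  public static void main(String[] args) {
    AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(RootConfig.class);
    // the same EntityManagerFactory that persistence.xml used to provide
    EntityManagerFactory emf = ctx.getBean(EntityManagerFactory.class);
    System.out.println("Mapped entities: " + emf.getMetamodel().getEntities());
    ctx.close();
  }
}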

Wednesday, March 2, 2016

Java-based Spring mvc configuration - without Spring security

Spring 3+ supports using Java-based configuration in place of the traditional web.xml. You can replace web.xml with any class that implements the interface org.springframework.web.WebApplicationInitializer (which has only one method, onStartup()).

Usually, all you need to provide in a web.xml for a Spring MVC application is:

  1. Spring config for the root context
  2. Spring config for the servlet context (for Spring MVC, it's the config for org.springframework.web.servlet.DispatcherServlet)
  3. Where to map Spring's DispatcherServlet (usually "/")
  4. Filters, if any

To facilitate use of the WebApplicationInitializer interface, Spring also provides several abstract classes with methods that directly ask for the information listed above. The most used is org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer.

In this article, there are 2 demos: the first uses WebApplicationInitializer directly, the second uses AbstractAnnotationConfigDispatcherServletInitializer. Spring 4.2 is used in the demos.

This article only shows the basics when Spring Security is not used. For more practical usage with Spring Security, please check another article, "Java-based Spring mvc configuration - with Spring security".

0. Java config classes for root and servlet context

Both demos use the same Java config classes to configure the root and servlet contexts.

The Java config for the root context is named RootConfig.java.

package com.shengwang.demo;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

import ch.qos.logback.classic.helpers.MDCInsertingServletFilter;

@Configuration
@ComponentScan({"com.shengwang.demo.service","com.shengwang.demo.model"})
public class RootConfig {
  
  // logback's filter for easy logging info. e.g client's IP
  @Bean
  public MDCInsertingServletFilter mdcInsertingServletFilter() {
    return new MDCInsertingServletFilter();
  }
}

The root context here has service and model beans registered, but in practice it's really not that strict. Also, a filter bean provided by logback is configured in the root context. This "mdcInsertingServletFilter" bean is actually a filter and will be used later.

The Java config for the dispatcher servlet is called DispatcherConfig.java.

package com.shengwang.demo;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.ViewResolver;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;
import org.springframework.web.servlet.view.InternalResourceViewResolver;


@Configuration
@EnableWebMvc
@ComponentScan("com.shengwang.demo.controller")

public class DispatcherConfig extends WebMvcConfigurerAdapter{
  
  // To use jsp
  @Bean
  ViewResolver internalViewResolver() {
    // the view resolver bean ...
    InternalResourceViewResolver resolver = new InternalResourceViewResolver();
    resolver.setPrefix("/WEB-INF/");
    resolver.setSuffix(".jsp");
    return resolver;
  }
  
  // config static resources
  @Override
  public void addResourceHandlers(ResourceHandlerRegistry registry) {
    // Don't forget the ending "/" for location or you will hit 404.
    registry.addResourceHandler("/img/**").addResourceLocations("/static/images/");
    registry.addResourceHandler("/js/**").addResourceLocations("/static/js/");
    registry.addResourceHandler("/css/**").addResourceLocations("/static/css/");
  }
}

The servlet context contains controller beans and a view resolver bean to render JSP pages, just for the demo. Usually it extends WebMvcConfigurerAdapter, so you can configure web-mvc-specific behaviours, such as static resource mapping or async support, by overriding the corresponding methods. (WebMvcConfigurerAdapter works as a default web mvc config with a bunch of empty methods.)
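
As an illustration (a sketch, not part of the demos; the ExtraMvcConfig class is hypothetical), other behaviours are customized the same way, by overriding more of WebMvcConfigurerAdapter's empty methods:

package com.shengwang.demo;

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.AsyncSupportConfigurer;
import org.springframework.web.servlet.config.annotation.DefaultServletHandlerConfigurer;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableWebMvc
public class ExtraMvcConfig extends WebMvcConfigurerAdapter {

  // let unmatched requests fall through to the container's default servlet
  @Override
  public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
    configurer.enable();
  }

  // the async support mentioned above: e.g. set a default timeout
  @Override
  public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
    configurer.setDefaultTimeout(30000); // 30 seconds
  }
}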

The content of RootConfig.java and DispatcherConfig.java is listed here for completeness of the demos. Just remember: RootConfig.java is for the root context, DispatcherConfig.java for the servlet context. They are used in the following demos. For both demos, the context hierarchy looks like below.

image

 

1. Demo One - Use WebApplicationInitializer directly

MyWebApplicatonInitializerWithoutSpringSecurity.java implements the interface WebApplicationInitializer. Spring MVC doesn't need this class to be a bean; no @Bean or @Component annotation is required, because the Servlet 3.0 container discovers it automatically through Spring's SpringServletContainerInitializer.

package com.shengwang.demo;

import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletRegistration;

import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.filter.DelegatingFilterProxy;
import org.springframework.web.servlet.DispatcherServlet;


public class MyWebApplicatonInitializerWithoutSpringSecurity implements WebApplicationInitializer {
  
  @Override
  public void onStartup(ServletContext container) throws ServletException {
    useRootContext(container);
    useDispatcherContext(container);
    addFilter(container);
  }

  private void useRootContext(ServletContext container) {
    // Create the 'root' Spring application context
    AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext();
    rootContext.register(RootConfig.class); // <-- Use RootConfig.java

    // Manage the lifecycle of the root application context
    container.addListener(new ContextLoaderListener(rootContext));
  }
  
  private void useDispatcherContext(ServletContext container) {
    // Create the dispatcher servlet's Spring application context
    AnnotationConfigWebApplicationContext dispatcherContext = new AnnotationConfigWebApplicationContext();
    dispatcherContext.register(DispatcherConfig.class); // <-- Use DispatcherConfig.java

    // Define mapping between <servlet> and <servlet-mapping>
    ServletRegistration.Dynamic dispatcher = container.addServlet("dispatcher", new DispatcherServlet(
        dispatcherContext));
    dispatcher.addMapping("/");
    dispatcher.setLoadOnStartup(1);
  }

  private void addFilter(ServletContext container) {
    String filterName = "WhatEverYouWantToNameYourFilter";
    String filterBeanName = "mdcInsertingServletFilter"; // bean defined in RootConfig.java
    // use "/*" (not "/") so the filter applies to every request
    container.addFilter(filterName, new DelegatingFilterProxy(filterBeanName)).addMappingForUrlPatterns(null, false, "/*");
  }
}
}

2. Demo Two - Use AbstractAnnotationConfigDispatcherServletInitializer for simplicity

Define your own class extending this abstract class.

package com.shengwang.demo;

import javax.servlet.Filter;

import org.springframework.web.filter.DelegatingFilterProxy;
import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;


public class MyWebApplicationInitializerWithoutSpringSecurity extends AbstractAnnotationConfigDispatcherServletInitializer {

  // return null here if you don't need a separate root context
  @Override
  protected Class<?>[] getRootConfigClasses() {
    return new Class[] {RootConfig.class};
  }
  
  // return null here if you don't need a separate servlet context
  @Override
  protected Class<?>[] getServletConfigClasses() {
    return new Class[] {DispatcherConfig.class};
  }
  
  // map dispatcherServlet to "/"
  @Override
  protected String[] getServletMappings() {
    return new String[] { "/" };
  }

  // returned filters will be registered to the dispatcherServlet
  @Override
  protected Filter[] getServletFilters() {
    // bean defined in RootConfig.java
    String beanNameOfFilter = "mdcInsertingServletFilter";
    return new Filter[] { new DelegatingFilterProxy(beanNameOfFilter) };
  }
}

There are 4 overridden methods, each with a very clear purpose.

3.  Misc

To use Java-based Spring MVC config to replace web.xml, don't forget to configure the maven war plugin in pom.xml to ignore the "missing web.xml" error.

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <version>2.2</version>

    <!-- ignore missing web.xml error -->
    <configuration>
      <failOnMissingWebXml>false</failOnMissingWebXml>
    </configuration>
  </plugin>

See Also

Simplified Web Configuration with Spring by Josh Long
