Monday, June 30, 2014

JPA 2.1 and LockModeType

Below is a self-explanatory test case.
    public void testLock() {
        /*
         * Find by primary key: search for an entity of the specified class and
         * primary key. If the entity instance is contained in the persistence
         * context, it is returned from there. Note that only the find(...)
         * method of the entity manager tries to retrieve the entity from the
         * persistence context first.
         */
        Merchant retailer = this.getMerchantDao().findById(Merchant.class, 111L);
        Map<String, Object> hints = new HashMap<String, Object>();
        /*
         * For the standard hint properties, refer to "Lock Mode Properties and
         * Use" in the JPA 2.1 specification.
         * If no 'javax.persistence.lock.timeout' hint is supplied, the
         * underlying SQL will be:
         *   select MERCHANT_ID from MERCHANT where MERCHANT_ID=111 for update
         * If this hint is provided with value 0, the SQL will be:
         *   select MERCHANT_ID from MERCHANT where MERCHANT_ID=111 for update nowait
         * If this hint is provided with a value greater than 0, the SQL will be:
         *   select MERCHANT_ID from MERCHANT where MERCHANT_ID=111 for update wait XXX
         * (The hint is measured in milliseconds while SQL uses seconds, so a
         * hint of 10 milliseconds produces SQL that waits 0 seconds.)
         */
        hints.put("javax.persistence.lock.timeout", 10000);
        /*
         * The entity manager won't query all fields of the entity in this
         * case; it simply performs "select MERCHANT_ID from MERCHANT where
         * MERCHANT_ID=111 for update" and won't reload the entity.
         * The OPTIMISTIC lock mode must work with version checking.
         */
        this.entityManager.lock(retailer, LockModeType.PESSIMISTIC_READ, hints);
        // must call refresh() to reload the entity
        this.getEntityManager().refresh(retailer, LockModeType.PESSIMISTIC_READ);
        // retailer = this.getMerchantDao().findById(Merchant.class, 111l);
    }
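A side note on the unit mismatch mentioned in the comments: the 'javax.persistence.lock.timeout' hint is specified in milliseconds, while Oracle's FOR UPDATE WAIT clause takes whole seconds, so small hint values truncate to zero. A trivial sketch of that conversion (the exact rounding is provider-specific; integer truncation is assumed here):

```java
public class LockTimeoutHint {
    // The JPA hint is in milliseconds; Oracle's FOR UPDATE WAIT takes whole
    // seconds, so providers typically truncate when translating the hint.
    static int waitSeconds(int hintMillis) {
        return hintMillis / 1000;
    }

    public static void main(String[] args) {
        System.out.println(waitSeconds(10));    // a 10 ms hint yields WAIT 0
        System.out.println(waitSeconds(10000)); // the 10000 ms hint above yields WAIT 10
    }
}
```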

Friday, June 27, 2014

How to emulate network timeout?

Before diving into how to emulate a network timeout, we first have to understand its details.

Socket Timeout

We already know that a TCP connection is established by a three-way handshake:
- the client sends SYN (client: SYN_SENT)
- the server responds SYN+ACK (client: SYN_SENT, server: SYN_RCVD)
- the client responds ACK (client: ESTABLISHED; server, once it gets the ACK: ESTABLISHED)
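From the client's point of view, the time allowed for this handshake is bounded by the connect timeout. A minimal sketch in Java (connecting to a local listener so the handshake completes well within the bound; the helper name is mine):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class ConnectTimeoutDemo {

    // Attempts the TCP three-way handshake, bounded by timeoutMillis; a
    // SocketTimeoutException (an IOException) means it didn't finish in time.
    static boolean connectWithin(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return socket.isConnected();
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // a local listener completes the handshake immediately
        try (ServerSocket server = new ServerSocket(0)) {
            System.out.println(connectWithin("127.0.0.1", server.getLocalPort(), 1000));
        }
    }
}
```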

Client Connects to a Nonexistent Port

In this case, the host's IP is valid but nothing is listening on the TCP port; in general the client will get 'connection refused'.

Client Connects to a Nonexistent IP

In this case, the client will get 'host unreachable'.

Packet Lost During Establishing Connection

As the handshake involves three packets, any of them may be lost in the network.

SYN of client lost

With no ACK for its SYN, the client keeps retransmitting the SYN. If all retransmissions finally fail, a connection timeout occurs.

SYN+ACK of server lost

With no ACK for its SYN+ACK, the server keeps retransmitting it; if retransmission finally fails, a connection timeout occurs on the server side. Meanwhile the client gets no SYN+ACK either, so it behaves as in the previous case.

ACK of client lost

In this case, the client regards the connection as established, while the server is still in SYN_RCVD. If the client's subsequent data segments (which also carry the ACK) are lost as well, the server never reaches ESTABLISHED; with no ACK for its data, the client keeps retransmitting and finally gets a connection timeout.
Refer to 深入理解socket网络异常 ("Understanding socket network exceptions in depth").

Emulate Socket Timeout

On *nix systems, we can achieve this with the 'iptables' command; a full iptables reference can be found in its manual pages.
By means of 'iptables', we will emulate 2 kinds of timeout. Suppose IGPE is deployed on a remote server, and the IGPE port is 9090.

Emulate Connection Timeout

Connection timeout means no connection was established at all; that is, the TCP handshake failed. On a connection timeout, the client can be sure that no request data reached the backend.
Use the command below to drop the server's response packets that have the SYN flag set:
iptables -A OUTPUT -p tcp -m tcp --tcp-flags SYN SYN --sport 9090 -j DROP
Since losing any packet during the handshake triggers a connection timeout, there are several ways to achieve this.
The arguments of --tcp-flags are a little confusing and I don't completely understand them; refer to the manual page. In the rule above, I drop all outgoing packets whose SYN flag is set and whose source TCP port is 9090.
Use below command to view your rules:
iptables -L -v

And then remember to use below command to clear all your rules:
iptables -F

Emulate Read Timeout

Read timeout means the connection was established successfully, but the client fails to get a response (waits until the timeout). In this case, the client can't be sure whether the request data reached the backend or not (that is, whether the request was handled by the backend).
Use the command below to drop the server's data response packets (PSH flag set).
iptables -A OUTPUT -p tcp -m tcp --tcp-flags PSH PSH --sport 9090 -j DROP
- The PSH flag indicates the packet carries application data, not only control information. (More precisely, PSH tells the receiving TCP stack to deliver buffered data to the application immediately, without waiting for the buffer to fill.)
- I only want to drop the response data packets, and emulate a read timeout this way.
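Independently of iptables, the client-side effect of a read timeout can be reproduced in pure Java: the connection is established, but no response arrives within the socket's read timeout. A minimal sketch (the local silent server stands in for a backend whose response packets are dropped; class and method names are mine):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutDemo {

    // Connects to a local server that accepts but never replies, and reports
    // whether the read timed out. The handshake succeeds, so this is a read
    // timeout, not a connection timeout.
    static String probe() throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread silent = new Thread(() -> {
                try (Socket accepted = server.accept()) {
                    Thread.sleep(2000); // accept the connection, then stay silent
                } catch (Exception e) {
                    // server shutting down
                }
            });
            silent.start();

            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                client.setSoTimeout(500); // read timeout in milliseconds
                client.getInputStream().read(); // blocks waiting for a response
                return "got data";
            } catch (SocketTimeoutException e) {
                // connection established, but no response arrived in time
                return "read timeout";
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(probe()); // prints "read timeout"
    }
}
```

The same pattern (setSoTimeout on the client socket) is what HTTP clients expose as the "read timeout" setting, as opposed to the connect timeout used during the handshake.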

The Rationale

In a client/server deployment, the server port is fixed while the client TCP port is picked randomly. For the INPUT chain (client requests), the source port is the client's TCP port; for the OUTPUT chain (server responses), the source port is the server's TCP port. This must be kept clear.
In my first try, I attempted to emulate a connection timeout with the rule below:
iptables -A INPUT -p tcp -m tcp --dport 9090 -j DROP

It doesn't result in a connection timeout: since all incoming packets are dropped, no response to the request SYN is ever returned, and the client regards the host as unreachable (not a connection timeout). The key to emulating a connection timeout is to drop only the server's SYN-flagged packets (the SYN+ACK); the client then keeps issuing SYN packets and finally regards it as a timeout.
To emulate a read timeout, we must affect the OUTPUT chain.
There also seem to be built-in tools on Linux that can emulate both situations; refer to netem.

Thursday, November 07, 2013

Understanding Vmstat Output

Below is a Vmstat output:
procs    -----------memory-------------- ---swap--  -----io----  --system--   -----cpu--------
r  b          swpd    free    buff   cache     si     so    bi     bo       in      cs  us  sy id wa st
2 0    2573144 12404  1140  47128  185  263  185  299  3173  3705  92  8  0    0 0
3 0    2574708 12304  1188  47436  192  187  192  234  3079  3468  92  8  0    0 0
Under Procs we have
       r: The number of processes waiting for run time or placed in run queue or are already executing (running)
       b: The number of processes in uninterruptible sleep. (b=blocked queue, waiting for resource (e.g. filesystem I/O blocked, inode lock))

If the number of runnable threads (r) divided by the number of CPUs is greater than one -> possible CPU bottleneck

(The (r) column should be compared with the number of CPUs (logical CPUs, as in uptime) to see whether we have enough CPUs or too many threads.)

High numbers in the blocked processes column (b) indicate slow disks.

(r) should always be higher than (b); if it is not, it usually means you have a CPU bottleneck

Note: “cat /proc/cpuinfo” displays the CPU info on the machine
>cat /proc/cpuinfo|grep processor|wc -l
output: 16

Remember that we need to know the number of CPUs on our server, because the vmstat r value should never exceed the number of CPUs. An r value of 13 is perfectly acceptable for a 16-CPU server, while a value of 16 would be a serious problem for a 12-CPU server.

Whenever the value of the r column exceeds the number of CPUs on the server, tasks are forced to wait for execution. There are several solutions to managing CPU overload, and these alternatives are:
1.      Add more processors (CPUs) to the server.
2.      Load balance the system tasks by rescheduling large batch tasks to execute during off-peak hours.
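The rule of thumb above (compare r with the number of logical CPUs) is easy to encode; Runtime.availableProcessors() reports the same logical CPU count that counting "processor" lines in /proc/cpuinfo gives:

```java
public class RunQueueCheck {
    // r exceeding the number of logical CPUs means tasks must wait for execution.
    static boolean cpuOverloaded(int runQueue, int cpus) {
        return runQueue > cpus;
    }

    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("logical CPUs: " + cpus);
        System.out.println(cpuOverloaded(13, 16)); // r=13 on 16 CPUs: acceptable
        System.out.println(cpuOverloaded(16, 12)); // r=16 on 12 CPUs: overloaded
    }
}
```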

Under Memory we have:

swpd: shows how many blocks are swapped out to disk (paged); the amount of virtual memory used.
            Note: you can see the swap area configured on the server using "cat /proc/swaps"
>cat /proc/meminfo
>cat /proc/swaps
Filename                        Type            Size    Used    Priority
/dev/dm-7                       partition       16777208        21688   -1

free: The amount of Idle Memory
buff: Memory used as buffers, like before/after I/O operations
cache: Memory used as cache by the Operating System

Under Swap we have:
si: Amount of memory swapped in from disk (/s). This shows page-ins
so: Amount of memory swapped to disk (/s). This shows page-outs. If the so column is consistently zero, there are no page-outs.

Ideally, si and so should be at 0 most of the time, and we definitely don’t like to see more than 10 blocks per second.

Under IO we have:
bi: Blocks received from block device - Read (like a hard disk)(blocks/s)
bo: Blocks sent to a block device – Write(blocks/s)

Under System we have:
in: The number of interrupts per second, including the clock.
cs: The number of context switches per second.

(A context switch occurs when the currently running thread is different from the previously running thread, so it is taken off of the CPU.)

It is not uncommon to see the context switch rate be approximately the same as device interrupt rate (in column)

If cs is high, it may indicate too much process switching is occurring, wasting CPU time on switching overhead.
If cs is higher than sy, the system is doing more context switching than actual work.

High r with high cs -> possible lock contention
Lock contention occurs whenever one process or thread attempts to acquire a lock held by another process or thread. The more granular the available locks, the less likely one process/thread will request a lock held by the other. (For example, locking a row rather than the entire table, or locking a cell rather than the entire row.)

When you are seeing blocked processes or high values on waiting on I/O (wa), it usually signifies either real I/O issues where you are waiting for file accesses or an I/O condition associated with paging due to a lack of memory on your system.

Note: the memory, swap, and I/O statistics are in blocks, not in bytes. In Linux, blocks are usually 1,024 bytes (1 KB).

Under CPU we have:
These are percentages of total CPU time.
       us: % of CPU time spent in user mode (not running kernel code, not able to access kernel resources). Time spent running non-kernel code. (user time, including nice time)
       sy: % of CPU time spent running kernel code. (system time)
       id: % of CPU  idle time
       wa: % of CPU time spent waiting for IO.

To measure true idle time, look at id+wa together:
- if id=0%, it does not mean all CPU is consumed, because wait (wa) can be 100%, with the CPU only waiting for an I/O to complete
- if wa=0%, it does not mean there are no I/O waits: as long as some threads keep the CPU busy, other threads waiting for I/O are masked by the running threads

If process A is running and process B is waiting on I/O, the wait% still would have a 0 number.
A 0 number doesn't mean I/O is not occurring, it means that the system is not waiting on I/O.
If process A and process B are both waiting on I/O, and there is nothing that can use the CPU, then you would see that column increase.

- if wait% is high, it does not necessarily mean there is an I/O performance problem; it can be an indication that I am doing some I/O while the CPU is not kept busy at all
- if id% is high then likely there is no CPU or I/O problem

To measure cpu utilization measure us+sy together (and compare it to physc):
- if us+sy is always greater than 80%, then CPU is approaching its limits 
- if us+sy = 100% -> possible CPU bottleneck
- if sy is high, your application is issuing many system calls to the kernel and asking the kernel to work; it measures how heavily the application uses kernel services.
- if sy is higher than us, the system is spending less time on real work (not good)

Monitor the system with vmstat:
>nohup vmstat -n 10 60479 > myvmstatfile.dat &
One week of virtual-memory stats spaced at ten-second intervals (less the last one) is 60,479 ten-second intervals.

> nohup vmstat -n 3 5|awk '{now=strftime("%Y-%m-%d %T "); print now $0}' > myvmstatfile.dat &
This appends a timestamp to each line of the vmstat output (the output file name is an example).

Friday, July 12, 2013

Dive into Spring test framework - Part2

Part 1 - Dive into Spring test framework (JUnit 3.8)

Now let's turn our eyes to the Spring TestContext framework.

The general idea of the Spring TestContext framework

Spring 3.x has deprecated the JUnit 3.8 class hierarchy, so let's have a look at the Spring TestContext framework. Below is a test class built on TestContext.
package com.mpos.lottery.te.draw.dao;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.annotation.Rollback;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;
import org.springframework.test.context.transaction.AfterTransaction;
import org.springframework.test.context.transaction.BeforeTransaction;
import org.springframework.test.context.transaction.TransactionConfiguration;

import com.mpos.lottery.te.common.dao.ShardKeyContextHolder;

/**
 * Spring TestContext Framework. If extending from
 * <code>AbstractTransactionalJUnit4SpringContextTests</code>, you don't need to
 * declare <code>@RunWith</code>, <code>@TestExecutionListeners</code> (3
 * default listeners) and <code>@Transactional</code>. Refer to
 * {@link AbstractTransactionalJUnit4SpringContextTests} for more information.
 * <p>
 * The legacy JUnit 3.8 class hierarchy is deprecated.
 *
 * @author Ramon Li
 */
// @RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "/spring-service.xml", "/spring-dao.xml",
        "/spring-shard-datasource.xml" })
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = false)
// @TestExecutionListeners(listeners = { TransactionalTestExecutionListener.class,
//         ShardAwareTestExecutionListener.class })
// @Transactional
public class GameDaoTest extends AbstractTransactionalJUnit4SpringContextTests {
    private Log logger = LogFactory.getLog(GameDaoTest.class);
    // Must declare @Autowired (by type) or @Resource (JSR-250, by name)
    // explicitly, otherwise Spring won't inject the dependency.
    @Autowired
    private GameDao gameDao;
    @PersistenceContext(unitName = "lottery_te")
    private EntityManager entityManager;

    public GameDaoTest() {
        logger.debug("GameDaoTest()");
        // As the Spring test framework creates an auto-rollback transaction
        // before setting up a test case (even at @BeforeTransaction time the
        // data source has already been determined), we must set the shard key
        // before the transaction is created; otherwise the default data source
        // of <code>ShardKeyRoutingDataSource</code> will be returned if one
        // has been set.
        ShardKeyContextHolder.setShardKey(new Integer("2"));
    }

    @BeforeTransaction
    public void verifyInitialDatabaseState() {
        // logic to verify the initial state before a transaction is started
        logger.debug("@BeforeTransaction:verifyInitialDatabaseState()");
        logger.debug("EntityManager:" + this.entityManager);
        logger.debug("gameDao:" + this.gameDao);
    }

    @Before
    public void setUpTestDataWithinTransaction() {
        // set up test data within the transaction
        logger.debug("@Before:setUpTestDataWithinTransaction()");
    }

    @Test
    // overrides the class-level defaultRollback setting
    @Rollback(true)
    public void test_2() {
        // logic which uses the test data and modifies database state
        logger.debug("test_2()");
    }

    @Test
    public void test_1() {
        logger.debug("test_1()");
        String sql = "select TYPE_NAME from GAME_TYPE where GAME_TYPE_ID=9";
        // setShardKey() won't take effect here; the data source was already
        // chosen when the transaction started.
        ShardKeyContextHolder.setShardKey(new Integer("1"));
        // Map<String, Object> result1 = this.getJdbcTemplate().queryForMap(sql);
        // ShardKeyContextHolder.setShardKey(new Integer("2"));
        // Map<String, Object> result2 = this.getJdbcTemplate().queryForMap(sql);

        // Avoid false positives when testing ORM code [Spring manual]
        this.entityManager.flush();
    }

    @After
    public void tearDownWithinTransaction() {
        // execute "tear down" logic within the transaction.
        logger.debug("@After:tearDownWithinTransaction()");
    }

    @AfterTransaction
    public void verifyFinalDatabaseState() {
        // logic to verify the final state after the transaction has rolled back
        logger.debug("@AfterTransaction:verifyFinalDatabaseState()");
    }
}

To be honest, Spring's automatic transaction rollback is good if you know it well and maintain your test code with great care. The downside is that it enlarges the transaction boundary: normally the boundary of a transaction is the invocation of a service method, but the Spring test framework enlarges it to the whole test method.
This incurs the two issues below:

  • Hibernate flush. Without a select on a given entity, Hibernate won't flush that entity's DML to the underlying database until the transaction commits or flush() is called explicitly.
  • Hibernate lazy loading. If you have ever tried to deserialize an entity outside a transaction, you will know what I mean.
Below is my base test class which all transactional integration tests should inherit from.
package com.mpos.lottery.te.test.integration;

import static org.junit.Assert.assertEquals;

import java.util.Calendar;
import java.util.Date;
import java.util.UUID;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;
import org.springframework.test.context.transaction.AfterTransaction;
import org.springframework.test.context.transaction.BeforeTransaction;
import org.springframework.test.context.transaction.TransactionConfiguration;
import org.springframework.test.context.transaction.TransactionalTestExecutionListener;

import com.mpos.lottery.te.config.MLotteryContext;
import com.mpos.lottery.te.gamespec.prize.Payout;
import com.mpos.lottery.te.hasplicense.domain.License;
import com.mpos.lottery.te.trans.domain.Transaction;

/**
 * This test runs against <code>DispatchServlet</code> directly, which means we
 * must support looking up the <code>ApplicationContext</code> from the
 * <code>ServletContext</code>.
 * <p>
 * Spring TestContext Framework. If extending from
 * <code>AbstractTransactionalJUnit4SpringContextTests</code>, you don't need to
 * declare <code>@RunWith</code>, <code>@TestExecutionListeners</code> (3
 * default listeners) and <code>@Transactional</code>. Refer to
 * {@link AbstractTransactionalJUnit4SpringContextTests} for more information.
 * <p>
 * The legacy JUnit 3.8 class hierarchy is deprecated. Under the new Spring
 * TestContext framework, a field or property must be annotated with
 * <code>@Autowired</code> or <code>@Resource</code> (<code>@Autowired</code>
 * in conjunction with <code>@Qualifier</code>) explicitly to let Spring
 * inject the dependency automatically.
 *
 * @author Ramon Li
 */
// @RunWith(SpringJUnit4ClassRunner.class)

// Refer to the doc of WebContextLoader.
@ContextConfiguration(loader = WebApplicationContextLoader.class, locations = { "spring/spring-core.xml",
        "spring/spring-core-dao.xml", "spring/game/spring-raffle.xml", "spring/game/spring-ig.xml",
        "spring/game/spring-extraball.xml", "spring/game/spring-lotto.xml", "spring/game/spring-toto.xml",
        "spring/game/spring-lfn.xml", "spring/spring-3rdparty.xml", "spring/game/spring-magic100.xml",
        "spring/game/spring-digital.xml" })
// this annotation defines the transaction manager for each test case.
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
// As our TEST extending from AbstractTransactionalJUnit4SpringContextTests,
// below 3 listeners have been registered by default, and it will be inherited
// by subclass.
// @TestExecutionListeners(listeners = {ShardAwareTestExecutionListener.class})
// @Transactional
public class BaseTransactionalIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests {
    private static Log logger = LogFactory.getLog(BaseTransactionalIntegrationTest.class);
    /**
     * The data source is always auto-wired to a javax.sql.DataSource named
     * 'dataSource', even when there are multiple data sources. It means there
     * must be a DataSource bean named 'dataSource' and a
     * <code>PlatformTransactionManager</code> named 'transactionManager'.
     *
     * @see AbstractTransactionalJUnit4SpringContextTests#setDataSource(javax.sql.DataSource)
     */
    @PersistenceContext(unitName = "lottery_te")
    protected EntityManager entityManager;

    /** Do something here if you want to configure the test case during initialization. */
    public BaseTransactionalIntegrationTest() {
        // initialize MLottery context.
        initializeMLotteryContext();
        // enable HASP license
        enableLicense();
    }

    // run once for the current test suite.
    @BeforeClass
    public static void beforeClass() {
    }

    /**
     * Logic to verify the initial state before a transaction is started.
     * <p>
     * The @BeforeTransaction methods declared in a superclass will be run
     * after those of the current class. Supported by
     * {@link TransactionalTestExecutionListener}.
     */
    @BeforeTransaction
    public void verifyInitialDatabaseState() throws Exception {
    }

    /**
     * Set up test data within the transaction.
     * <p>
     * The @Before methods of a superclass will be run before those of the
     * current class. No other ordering is defined.
     * <p>
     * NOTE: any before methods (for example, methods annotated with JUnit 4's
     * <code>@Before</code>) and any after methods (such as methods annotated
     * with JUnit 4's <code>@After</code>) are executed within the transaction.
     */
    @Before
    public void setUpTestDataWithinTransaction() {
    }

    /**
     * Execute "tear down" logic within the transaction.
     * <p>
     * The @After methods declared in a superclass will be run after those of
     * the current class.
     */
    @After
    public void tearDownWithinTransaction() {
    }

    /**
     * Logic to verify the final state after the transaction has rolled back.
     * <p>
     * The @AfterTransaction methods declared in a superclass will be run after
     * those of the current class.
     */
    @AfterTransaction
    public void verifyFinalDatabaseState() {
    }

    @AfterClass
    public static void afterClass() {
    }
    // ----------------------------------------------------------------
    // ----------------------------------------------------------------

    protected void initializeMLotteryContext() {
        logger.debug("Retrieved an ApplicationContext(" + this.applicationContext + ").");
    }

    protected void printMethod() {
        StringBuffer lineBuffer = new StringBuffer("+");
        for (int i = 0; i < 120; i++) {
            lineBuffer.append("-");
        }
        String line = lineBuffer.toString();

        // Get the test method. Index 1 means the direct caller of printMethod().
        StackTraceElement eles[] = new Exception().getStackTrace();
        String className = eles[1].getClassName();
        int index = className.lastIndexOf(".");
        className = className.substring(index == -1 ? 0 : index + 1);

        String method = className + "." + eles[1].getMethodName();
        StringBuffer padding = new StringBuffer();
        for (int i = 0; i < line.length(); i++) {
            padding.append(" ");
        }
        String methodSig = (method + padding.toString()).substring(0, line.length() - 3);
        logger.debug(line);
        logger.debug("| " + methodSig + "|");
        logger.debug(line);
    }

    protected void enableLicense() {
        // issue a HASP license that expires one year from now (details omitted)
        Calendar cal = Calendar.getInstance();
        cal.setTime(new Date());
        cal.set(Calendar.YEAR, cal.get(Calendar.YEAR) + 1);
    }

    protected String uuid() {
        UUID uuid = UUID.randomUUID();
        String uuidStr = uuid.toString();
        return uuidStr.replace("-", "");
    }

    // ----------------------------------------------------------------
    // ----------------------------------------------------------------

    protected void assertTransaction(Transaction expectedTrans, Transaction actualTrans) {
        assertEquals(expectedTrans.getId(), actualTrans.getId());
        assertEquals(expectedTrans.getGameId(), actualTrans.getGameId());
        assertEquals(expectedTrans.getTotalAmount().doubleValue(),
                actualTrans.getTotalAmount().doubleValue(), 0);
        assertEquals(expectedTrans.getTicketSerialNo(), actualTrans.getTicketSerialNo());
        assertEquals(expectedTrans.getDeviceId(), actualTrans.getDeviceId());
        assertEquals(expectedTrans.getMerchantId(), actualTrans.getMerchantId());
        assertEquals(expectedTrans.getType(), actualTrans.getType());
        assertEquals(expectedTrans.getOperatorId(), actualTrans.getOperatorId());
        assertEquals(expectedTrans.getTraceMessageId(), actualTrans.getTraceMessageId());
        assertEquals(expectedTrans.getResponseCode(), actualTrans.getResponseCode());
    }

    protected void assertTicket(BaseTicket expectTicket, BaseTicket actualTicket) {
        assertEquals(expectTicket.getSerialNo(), actualTicket.getSerialNo());
        assertEquals(expectTicket.getStatus(), actualTicket.getStatus());
        assertEquals(expectTicket.getTotalAmount().doubleValue(),
                actualTicket.getTotalAmount().doubleValue(), 0);
        assertEquals(expectTicket.getMultipleDraws(), actualTicket.getMultipleDraws());
        assertEquals(expectTicket.getMobile(), actualTicket.getMobile());
        assertEquals(expectTicket.getCreditCardSN(), actualTicket.getCreditCardSN());
        assertEquals(expectTicket.getDevId(), actualTicket.getDevId());
        assertEquals(expectTicket.getMerchantId(), actualTicket.getMerchantId());
        assertEquals(expectTicket.getOperatorId(), actualTicket.getOperatorId());
        assertEquals(expectTicket.getTicketFrom(), actualTicket.getTicketFrom());
        assertEquals(expectTicket.getTicketType(), actualTicket.getTicketType());
        assertEquals(expectTicket.getTransType(), actualTicket.getTransType());
        assertEquals(expectTicket.isCountInPool(), actualTicket.isCountInPool());
        assertEquals(expectTicket.getGameInstance().getId(), actualTicket.getGameInstance().getId());
        assertEquals(expectTicket.getPIN(), actualTicket.getPIN());
    }

    protected void assertPayout(Payout exp, Payout actual) {
        assertEquals(exp.getTransaction().getId(), actual.getTransaction().getId());
        assertEquals(exp.getGameId(), actual.getGameId());
        assertEquals(exp.getGameInstanceId(), actual.getGameInstanceId());
        assertEquals(exp.getDevId(), actual.getDevId());
        assertEquals(exp.getMerchantId(), actual.getMerchantId());
        assertEquals(exp.getOperatorId(), actual.getOperatorId());
        assertEquals(exp.getTicketSerialNo(), actual.getTicketSerialNo());
        assertEquals(exp.getBeforeTaxObjectAmount().doubleValue(), actual.getBeforeTaxObjectAmount()
                .doubleValue(), 0);
        assertEquals(exp.getBeforeTaxTotalAmount().doubleValue(), actual.getBeforeTaxTotalAmount()
                .doubleValue(), 0);
        assertEquals(exp.getTotalAmount().doubleValue(), actual.getTotalAmount().doubleValue(), 0);
        assertEquals(exp.getNumberOfObject(), actual.getNumberOfObject());
    }

    // ----------------------------------------------------------------
    // ----------------------------------------------------------------

    public EntityManager getEntityManager() {
        return entityManager;
    }

    public void setEntityManager(EntityManager entityManager) {
        this.entityManager = entityManager;
    }
}
What happens if a single test method makes 2 separate requests?

In my project, there is a service named 'sell' for clients to make a sale, and a corresponding service named 'enquiry' to query that sale.

Now we plan to test the 'enquiry' service and write a test case named 'testEnquiry'. OK, how do we prepare the test data of the sale which will be queried? There are at least 2 options.

Prepare test data and import them into database before running test

With this approach, there is a chance that your prepared test data doesn't meet the specification of the 'sell' service. Say your prepared test data writes a column named 'gameId' while the 'sell' service doesn't write that column: your test case will pass, yet in the production environment the 'enquiry' service will fail.

Call 'sell' service in 'testEnquiry' method

The pseudocode looks like this:
public void testEnquiry() {
    callSellService();
    callEnquiryService();
    // assert output
}
callSellService() and callEnquiryService() run in one single transaction. Here is a real case from my project: callSellService() generates tickets (a List), and callEnquiryService() queries the tickets generated by the sale service, then marshals them into XML.
What surprised me is that the ticket entities retrieved by callEnquiryService() are the same as the ticket entities generated by callSellService(). I mean they are the same Java objects, not merely objects with the same fields/properties.
However, in production, many fields of the tickets retrieved by callEnquiryService() are missing, as in the production environment callSellService() and callEnquiryService() are two completely different transactions.

Which option is better? Or 3rd option?

I prefer the 2nd option: prepare test data through real transactions. Then how do we deal with its problem? After some research, the solution is simple and effective.
public void testEnquiry() {
    callSellService();
    this.entityManager.flush();
    this.entityManager.clear();
    callEnquiryService();
    // assert output
}
  • this.entityManager.flush() flushes all entity state to the underlying database. This must be called, otherwise all entity changes are lost.
  • this.entityManager.clear() clears the persistence context and puts all entities in the detached state, so any subsequent call to the entity manager loads new entity instances.

Maybe DbUnit is another choice, however that would mean converting my SQL scripts into XML, and that is a big challenge.

Monday, December 24, 2012

Prepare to learn Groovy

I plan to learn Groovy, as there are 2 great frameworks written in Groovy: Gradle and Grails. Besides, if Grinder ever supports Groovy, that will be a great plus.

Gradle is a next-generation build tool; in my opinion, it will replace Maven and Ant.
Below is my build.gradle for one of my projects:

apply plugin: 'war'
// the 'war' plugin applies the 'java' plugin automatically
apply plugin: 'java'
apply plugin: 'eclipse'
// run the web application
apply plugin: 'jetty'
/**
 * Gradle includes 2 phases: configuration and execution.
 *
 * In the configuration phase, all code except doFirst() and doLast() is executed from top to bottom of the script.
 * 'dependsOn' doesn't make any sense in the configuration phase. For example, take the 'jar' and 'svnrev' tasks: if we put
 * 'svnrev' after 'jar', then the variable 'svnrev.lastRev' can't be resolved in the 'jar' task, as it hasn't been
 * initialized at all.
 *
 * In the execution phase, the task dependency mechanism works. Be reminded that only doFirst() and doLast() are
 * executed in the execution phase, and Gradle finishes the configuration of the whole script first to determine
 * which tasks should be executed and in what order.
 */
logger.quiet(">> Start building of $_name.$version.")
/**
 * Watch out: you can't write 'compileJava.options.encoding = $_sourcecode_encoding'; if you do,
 * "Could not find property '$_sourcecode_encoding'" is thrown. '$_xxx' can only be used inside a String.
 */
compileJava.options.encoding = _sourcecode_encoding
compileTestJava.options.encoding = _sourcecode_encoding
// Define a temporary variable.
//_tmp="define a temporary variable"
//logger.quiet(">> Define a temporary variable: _tmp: $_tmp")
// Properties added by the java plugin
sourceCompatibility = "1.6"
targetCompatibility = "1.6"
// Properties added by the war plugin
webAppDirName = "src/main/WWW"
configurations {
    provided {
        description = 'Non-exported compile-time dependencies.'
    }
}
/**
 * In Gradle, dependencies are grouped into configurations, and there are 4 pre-defined configurations:
 * compile, runtime, testCompile and testRuntime. In general, each later configuration contains the dependencies of the previous one.
 */
dependencies {
    // configurationName dependencyNotation1, dependencyNotation2, ...
    // compile group: 'commons-collections', name: 'commons-collections', version: '3.2'
    provided files('lib/DEV/j2ee/servlet-api.jar')
    compile fileTree(dir: 'lib', include: '**/*.jar', exclude: 'DEV/**/*.jar')
    /**
     * The dependency below results in an exception:
     *      Circular dependency between tasks. Cycle includes [task ':compileJava', task ':classes'].
     * sourceSets.main.output is generated by the task 'compileJava'; if we declare the dependency here, the task
     * 'compileJava' will depend on its own output, and a circular dependency occurs.
     */
    //compile sourceSets.main.output
    testCompile fileTree(dir: "lib", include: "DEV/**/*.jar")
}
sourceSets {
    /**
     * The Java plugin defines two standard source sets, called main and test.
     * Changing the project layout; the default project layout is as below:
     *  - src/main/java   Production Java source
     *  - src/main/resources  Production resources
     *  - src/test/java   Test Java source
     *  - src/test/resources  Test resources
     *  - src/sourceSet/java  Java source for the given source set
     *  - src/sourceSet/resources Resources for the given source set
     * Refer to the Gradle Java plugin documentation for more information.
     */
    main {
        compileClasspath = compileClasspath + configurations.provided
        //compileClasspath.collect().each({println it})
        resources {
            srcDir 'src/main/resource'
        }
    }
    test {
        java {
            srcDir 'src/test/unittest'
            // integration tests need a database, and the test data must be imported first.
            srcDir 'src/test/integration'
        }
        resources {
            srcDir 'src/test/resource'
        }
    }
}
// Retrieve the last revision of the project from Subversion.
task svnrev {
    // use ant to retrieve the revision.
    ant.taskdef(resource: 'org/tigris/subversion/svnant/svnantlib.xml') {
        classpath {
            fileset(dir: 'lib/DEV/svnant-1.2.1', includes: '*.jar')
        }
    }
    ant.svn(javahl: 'false', svnkit: 'true', username: "${_svn_user}", password: "${_svn_password}", failonerror: 'false') {
        info(target: "${_svn_source_url}", propPrefix: 'svninfo')
    }
    // retrieve a property of the ant project and assign it to a task property
    ext.lastRev = ant.getProject().properties['svninfo.lastRev']
    // retrieve a property of the gradle project
    //getProject().properties['buildFile']
}
import org.gradle.api.java.archives.internal.DefaultManifest
import org.gradle.api.internal.file.IdentityFileResolver
task generateManifest {
    // define a task property
    ext.m = new DefaultManifest(new IdentityFileResolver())
    // add some attributes
    m.attributes([
        'Implementation-Title': "$_name",
        'Implementation-Version': "${version}_${svnrev.lastRev}",
        'Implementation-Vendor': "$_company",
        'Created-By': _team,
        'Build-Time': new Date()
    ])
    //manifest.writeTo('build/')
}
war.dependsOn 'generateManifest'
war {
    archiveName = _name + ".war"
    manifest = generateManifest.m
}
// Define a global variable
def user_tag
task svntag << {
    def console = System.console()
    if (console) {
        user_tag = console.readLine("> Please enter your tag (${version}): ")
        if (!user_tag) {
            logger.error "Please give a tag definition."
            System.exit(0)
        }
    } else {
        logger.error "Cannot get console."
    }
    /**
     * This logic must live in doFirst()/doLast() (here, the '<<' block), otherwise it would try to make a tag
     * on every build, since it would run in the configuration phase.
     */
    ant.svn(javahl: 'false', svnkit: 'true', username: "${_svn_user}", password: "${_svn_password}", failonerror: 'false') {
        ant.copy(srcurl: "${_svn_source_url}", desturl: "${_svn_tag_url}/${user_tag}", message: "Create tag: ${_svn_tag_url}/${user_tag}")
    }
}
task dist(type: Zip) {
    description = "Build a distribution package containing the war and shell scripts."
    archiveName = _name + "_v${version}.zip"
    // if 'include' is used here, gradle reports 'Skipping task ':zip' as it has no source files'...why?
    // include 'config'
    from('.') {
        include 'README.txt'
        include 'CHANGELOG.txt'
    }
    from war.destinationDir
    into('bin') {
        from('bin'){
            include ''
            include ''
            include ''
        }
    }
    from 'etc/manual'
    doLast {
        //print 'source of zip:' + project['zip'].source.each({println it})
    }
}
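The configuration-vs-execution distinction described in the script's header comment is easy to see with a toy task. This is just a sketch; the task name is made up:

```groovy
task phases {
    // configuration phase: this line runs on every Gradle invocation,
    // even when this task is not requested
    println '>> configuring phases'
    doLast {
        // execution phase: this line runs only when 'gradle phases' is invoked
        println '>> executing phases'
    }
}
```

Running `gradle phases` prints both lines; running any other task still prints the configuration line, which is exactly why tagging logic must not live outside doFirst()/doLast().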

Grails is competing with Play!; however, after trying Scala for a period of time, I was dismayed, as its learning curve is really steep. At present I haven't even given Groovy a simple try; my only experience is the Gradle build script, and what makes me dizzy is its closures. Maybe after putting more effort into learning Groovy, those closures will become friendlier to me.
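For the record, a Groovy closure is just a block of code that can be assigned to a variable and passed to a method, which is what every `{ ... }` in the build script above is doing. A minimal sketch in plain Groovy (all names made up):

```groovy
// a closure assigned to a variable, taking one parameter
def greet = { name -> "Hello, $name" }
assert greet('Gradle') == 'Hello, Gradle'

// Gradle's DSL is mostly methods that accept a closure and run it
def configure(Closure body) {
    body.call()
}
configure { println 'inside a closure' }
```

Seen this way, `war { ... }` or `dependencies { ... }` are simply method calls whose last argument is a closure.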

Grinder is a great distributed performance testing tool; at present it only supports Jython and Clojure. Python is my 2nd language; in fact I know it better than either Scala or Groovy. I have found that someone else is also looking for Groovy support in Grinder and has put real effort into implementing it.