Friday, August 10, 2007

Configuring Subversion under Ubuntu.

http://wiki.ubuntu.org.cn/SubVersion
This is a document from the Ubuntu Chinese community site. I followed it through an install; not bad, it worked.
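For my own future reference, the gist of that page comes down to a few commands. A rough sketch, where the repository path is my own example rather than the wiki's:

    sudo apt-get install subversion
    sudo svnadmin create /home/svn/repo
    svn import ./myproject file:///home/svn/repo/myproject -m "initial import"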

Monday, June 18, 2007

Installing Ubuntu 7.04 from the hard disk

Because I was quite unhappy with the boot splash screen of Ubuntu 6.10, I have been meaning to upgrade to 7.04. But I just heard that the 7.04 splash screen is just as dated, so I'll stop here and wait a while... The unlucky part is that my laptop's optical drive seems to be broken, which forces me to focus on figuring out how to install from the hard disk. I found some material online and decided to write it down now; even though I'm not upgrading yet, as the saying goes, prepare in good times for the bad, and you'll never be caught unprepared.
-----------------------------
1. Download ubuntu-7.04-alternate-i386.iso from http://releases.ubuntu.com/feisty/ and put it in C:\, making sure that C: is a FAT32 partition (this is the killer for me: my Windows partitions are all NTFS, apparently because the initrd, or whatever it is, does not support NTFS).
2. Download the following files from http://archive.ubuntu.com/ubuntu/dists/feisty/main/installer-i386/current/images/hd-media/ and copy them to C:\ as well:
initrd.gz
vmlinuz
3. Some guides include this step and some don't, so I'll record it here anyway: download grub_for_dos-0.4.2, extract grldr from it and copy it to C:\, then edit C:\BOOT.INI and add one line: C:\GRLDR="GRUB"
4. Reboot into grub; when the menu appears, press C to enter grub's command-line mode, then type the following commands to start the installer:
grub> kernel (hd0,0)/vmlinuz root=/dev/ram ramdisk_size=256000 devfs=mount,dall
grub> initrd (hd0,0)/initrd.gz
grub> boot
Heavens, I have no idea what /dev/ram means. Why isn't it /dev/hd0? (Presumably root=/dev/ram points the kernel at the ramdisk loaded from initrd.gz; at this stage the installer runs entirely from memory, not from a disk partition.)
In theory, you should now see the installer screen!
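Side note: instead of typing those three commands at the grub prompt every time, the same lines can apparently be saved as a grub4dos menu entry. A sketch, assuming grldr picks up a menu.lst in the root of C: (the entry title is my own):

    title Install Ubuntu 7.04 from hard disk
    kernel (hd0,0)/vmlinuz root=/dev/ram ramdisk_size=256000 devfs=mount,dall
    initrd (hd0,0)/initrd.gz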

Tuesday, May 29, 2007

Asynchronous calls and remote callbacks using Lingo Spring Remoting

A very good technical article; at the very least it can help you clarify remote object references (pass-by-value vs. pass-by-reference), along with some nice programming tricks.
The coolest part, I think, is that Lingo can expose some methods of an interface as synchronous and others as asynchronous. In the article's sample code, though, I at first couldn't figure out how solve() ends up asynchronous while cancel() and registerCallback() stay synchronous. (The convention, as the article explains below, appears to be that a method is asynchronous when it returns void and declares no checked exceptions; solve() qualifies, while cancel() and registerCallback() both throw OptimizerException.)

See:
http://jroller.com/page/sjivan?entry=asynchronous_calls_and_callbacks_using

As mentioned in my previous blog entry, Lingo is the only Spring Remoting implementation that supports asynchronous calls and remote callbacks. Today I'll cover all the nitty-gritty details of the async/callback functionality, along with the limitations and gotchas.

Asynchronous method invocation and callback support in Lingo is an awesome feature, and there are several use cases where it is an absolute must. Let's consider a simple and rather common use case: you have a server-side application (say an optimizer) for which you want to write a remote client API. The API has methods like solve(), which are long-running, and methods like cancel(), which stops an in-progress solve.

A synchronous API under such circumstances is not really suitable, since the solve() method could take a very long time to complete. It could be handled by having the client code spawn its own thread and do its own exception management, but this gets really kludgy; plus, you have to worry about network timeout issues. You might be thinking "I'll just use JMS if I need an asynchronous programming model". You could use JMS, but think about the API you'd be exposing: a generic JMS API where the client registers JMS listeners and sends messages to JMS destinations using the JMS API. Compare this to a remote API where the client works with the service interface itself.
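To make the contrast concrete, here is a minimal sketch (my own illustration, not from the article) of that do-it-yourself threading approach, assuming a blocking version of the OptimizerService interface introduced below:

public class DiyAsyncClient {

    // Spawn a client-side thread so a blocking solve() doesn't freeze the caller.
    public static void solveInBackground(final OptimizerService service) {
        Thread solver = new Thread(new Runnable() {
            public void run() {
                try {
                    service.solve(); // blocks until the optimizer finishes
                } catch (RuntimeException e) {
                    // Exception management is now the client's problem, and any
                    // network timeout surfaces here as an opaque runtime failure.
                    System.err.println("solve failed: " + e.getMessage());
                }
            }
        });
        solver.setDaemon(true);
        solver.start();
    }
}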

Lingo combines the elegance of Spring Remoting with the ability to make asynchronous calls. Let's continue with our Optimizer example and implement a solution using Lingo and Spring.

OptimizerService interface

public interface OptimizerService {

    void registerCallback(OptimizerCallback callback) throws OptimizerException;

    void solve();

    void cancel() throws OptimizerException;
}

The solve() method is asynchronous, while cancel() and registerCallback(..) are not. By convention, asynchronous methods must not have a return value and must not throw exceptions. The registerCallback(..) method registers a client callback with the Optimizer. For an argument to act as a remote callback, it must implement java.util.EventListener or java.rmi.Remote; in this example the OptimizerCallback interface extends java.util.EventListener. If the argument implements neither of these interfaces, it must implement java.io.Serializable, and it will then be passed by value.

OptimizerCallback interface

public interface OptimizerCallback extends EventListener {

    void setPercentageComplete(int pct);

    void error(OptimizerException ex);

    void solveComplete(float solution);
}

The callback API has a method for the Optimizer to report the percentage complete, a method to report an error during the solve() process (remember that solve() is asynchronous, so it cannot throw an exception directly), and finally the solveComplete(..) callback to hand the client the solution when the solve finishes.

OptimizerService implementation

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class OptimizerServiceImpl implements OptimizerService {

    private static final Log LOG = LogFactory.getLog(OptimizerServiceImpl.class);

    private OptimizerCallback callback;
    private volatile boolean cancelled = false;

    public void registerCallback(OptimizerCallback callback) {
        LOG.info("registerCallback() called ...");
        this.callback = callback;
    }

    public void solve() {
        LOG.info("solve() called ...");
        float currentSolution = 0;

        // Simulate a long-running solve: ~100 seconds, reporting progress
        // once per second and checking for cancellation between steps.
        for (int i = 1; i <= 100; i++) {
            try {
                currentSolution += i;
                Thread.sleep(1000);
                if (callback != null) {
                    callback.setPercentageComplete(i);
                }
                if (cancelled) {
                    break;
                }
            } catch (InterruptedException e) {
                System.err.println(e.getMessage());
            }
        }
        if (callback != null) {
            callback.solveComplete(currentSolution);
        }
    }

    public void cancel() throws OptimizerException {
        LOG.info("cancel() called ...");
        cancelled = true;
    }
}

The solve() method sleeps for a while and calls setPercentageComplete(..) on the callback registered by the client. The code is pretty self-explanatory.

Optimizer Application context - optimizerContext.xml

We now need to export this service using Lingo Spring Remoting. The typical Lingo Spring configuration, as described in the Lingo docs and samples, is:

<?xml version="1.0" encoding="UTF-8"?>

<beans>
    <bean id="optimizerServiceImpl" class="org.sanjiv.lingo.server.OptimizerServiceImpl" singleton="true"/>

    <bean id="optimizerServer" class="org.logicblaze.lingo.jms.JmsServiceExporter" singleton="true">
        <property name="destination" ref="optimizerDestination"/>
        <property name="service" ref="optimizerServiceImpl"/>
        <property name="serviceInterface" value="org.sanjiv.lingo.common.OptimizerService"/>
        <property name="connectionFactory" ref="jmsFactory"/>
    </bean>

    <bean id="jmsFactory" class="org.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="tcp://localhost:61616"/>
        <property name="useEmbeddedBroker">
            <value>true</value>
        </property>
    </bean>

    <bean id="optimizerDestination" class="org.activemq.message.ActiveMQQueue">
        <constructor-arg index="0" value="optimizerDestinationQ"/>
    </bean>
</beans>

In this example I'm embedding a JMS broker in the Optimizer process. However, you are free to use an external JMS broker and change the JMS connection factory configuration appropriately.

Note : The above optimizerContext.xml is the typical configuration found in the Lingo docs/examples,
but it is not the ideal configuration. It has some serious limitations, which I'll cover in a bit
along with the preferred "server" configuration.

OptimizerServer

The "main" class that exports the OptimizerService simply needs to instantiate the "optimizerServer" bean from the optimizerContext.xml file.

import org.springframework.context.support.FileSystemXmlApplicationContext;

public class OptimizerServer {

    public static void main(String[] args) {
        if (args.length == 0) {
            System.err.println("Usage : java org.sanjiv.lingo.server.OptimizerServer <applicationContext.xml>");
            System.exit(-1);
        }
        String applicationContext = args[0];

        System.out.println("Starting Optimizer ...");
        FileSystemXmlApplicationContext ctx = new FileSystemXmlApplicationContext(applicationContext);

        // Instantiating the "optimizerServer" bean starts the JMS service exporter.
        ctx.getBean("optimizerServer");

        System.out.println("Optimizer Started.");

        ctx.registerShutdownHook();
    }
}

The Client

In order for the client to look up the remote OptimizerService, we need to configure the client-side Spring application context as follows:

Client Application Context - clientContext.xml

<?xml version="1.0" encoding="UTF-8"?>

<beans>
    <bean id="optimizerService" class="org.logicblaze.lingo.jms.JmsProxyFactoryBean">
        <property name="serviceInterface" value="org.sanjiv.lingo.common.OptimizerService"/>
        <property name="connectionFactory" ref="jmsFactory"/>
        <property name="destination" ref="optimizerDestination"/>
        <property name="remoteInvocationFactory" ref="invocationFactory"/>
    </bean>

    <bean id="jmsFactory" class="org.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="tcp://localhost:61616"/>
    </bean>

    <bean id="optimizerDestination" class="org.activemq.message.ActiveMQQueue">
        <constructor-arg index="0" value="optimizerDestinationQ"/>
    </bean>

    <bean id="invocationFactory" class="org.logicblaze.lingo.LingoRemoteInvocationFactory">
        <constructor-arg>
            <bean class="org.logicblaze.lingo.SimpleMetadataStrategy">
                <constructor-arg value="true"/>
            </bean>
        </constructor-arg>
    </bean>
</beans>

Now all a client needs to do is obtain a handle on the remote OptimizerService by looking up the bean "optimizerService" configured in clientContext.xml.

OptimizerCallback implementation

Before going over the sample Optimizer client code, let's first write a sample implementation of the OptimizerCallback interface, the one the client will register with the remote Optimizer by invoking registerCallback(..).

public class OptimizerCallbackImpl implements OptimizerCallback {

    private boolean solveComplete = false;
    private OptimizerException callbackError;
    private final Object mutex = new Object();

    public void setPercentageComplete(int pct) {
        System.out.println("+++ OptimizerCallback :: " + pct + "% complete..");
    }

    public void error(OptimizerException ex) {
        System.out.println("+++ OptimizerCallback :: Error occurred during solve: " + ex.getMessage());
        synchronized (mutex) {
            callbackError = ex;
            solveComplete = true;
            mutex.notifyAll();
        }
    }

    public void solveComplete(float solution) {
        System.out.println("+++ OptimizerCallback :: Solve complete with answer: " + solution);
        synchronized (mutex) {
            solveComplete = true;
            mutex.notifyAll();
        }
    }

    public void waitForSolveComplete() throws OptimizerException {
        // The flag is read and written only while holding the mutex, so a
        // notification between the check and the wait cannot be lost.
        synchronized (mutex) {
            while (!solveComplete) {
                try {
                    mutex.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                    break;
                }
            }
            if (callbackError != null) {
                throw callbackError;
            }
        }
    }
}

OptimizerClient

import org.springframework.context.support.FileSystemXmlApplicationContext;

public class OptimizerClient {

    public static void main(String[] args) throws InterruptedException {
        if (args.length == 0) {
            System.err.println("Usage : java org.sanjiv.lingo.client.OptimizerClient <applicationContext.xml>");
            System.exit(-1);
        }

        String applicationContext = args[0];
        FileSystemXmlApplicationContext ctx = new FileSystemXmlApplicationContext(applicationContext);

        OptimizerService optimizerService = (OptimizerService) ctx.getBean("optimizerService");
        OptimizerCallbackImpl callback = new OptimizerCallbackImpl();

        try {
            optimizerService.registerCallback(callback);
            System.out.println("Client :: Callback Registered.");

            optimizerService.solve();
            System.out.println("Client :: Solve invoked.");

            Thread.sleep(8 * 1000);
            System.out.println("Client :: Calling cancel after 8 seconds.");

            optimizerService.cancel();
            System.out.println("Client :: Cancel finished.");
            //callback.waitForSolveComplete();

        } catch (OptimizerException e) {
            System.err.println("An error was encountered : " + e.getMessage());
        }
    }
}

The test client registers a callback and calls the asynchronous method solve(). Note that the solve method in our sample OptimizerService implementation takes ~100 seconds to complete. The client then prints the message "Client :: Solve invoked.". If the solve() call is indeed invoked asynchronously by Lingo under the hood, this message should be printed to the console immediately, not after 100 seconds. The client then calls cancel() after 8 seconds have elapsed.

Here's the output when we run the Optimizer server and client:
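(The original post showed a console screenshot here which hasn't survived; reconstructed from the println statements in the code above, the client console interleaving looks roughly like this, with timings and percentages illustrative:)

    Client :: Callback Registered.
    Client :: Solve invoked.
    +++ OptimizerCallback :: 1% complete..
    +++ OptimizerCallback :: 2% complete..
    ...
    +++ OptimizerCallback :: 8% complete..
    Client :: Calling cancel after 8 seconds.
    +++ OptimizerCallback :: 9% complete..
    ...
    +++ OptimizerCallback :: 100% complete..
    +++ OptimizerCallback :: Solve complete with answer: 5050.0
    Client :: Cancel finished.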

Notice that the solve method has been called asynchronously, and that after 8 seconds the client makes the cancel() call; however, the server does not seem to receive this call and continues with its setPercentageComplete(..) callbacks.

I asked this question on the Lingo mailing list but did not get a response. This misbehaviour was pretty serious, because it meant that while an asynchronous call like solve() returned immediately on the client, the client could not make another call like cancel() until the solve() method had finished executing on the server, which defeats the purpose of a method like cancel().

Lingo and ActiveMQ are open source, so I rolled up my sleeves and ran the whole thing through a debugger. Debugging multithreaded applications can get tricky, but after spending several hours I was able to get to the bottom of this issue.

Recollect that we exported the OptimizerService using the class org.logicblaze.lingo.jms.JmsServiceExporter in optimizerContext.xml. On examining the source, I found that this class creates a single JMS Session which listens for messages on the configured destination ("optimizerDestinationQ" in our example); when messages are received, it invokes a Lingo listener which translates the inbound message into a method invocation on the exported OptimizerServiceImpl service object.

The JMS spec clearly states

A Session object is a single-threaded context for producing and consuming messages.
...
It serializes execution of message listeners registered with its message consumers.

Basically, a single JMS Session is not suitable for receiving concurrent messages. I now understood why the cancel() method wasn't being invoked until the solve() method completed, but this behavior still didn't make sense from an API usage perspective.
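To see this serialization outside of Lingo entirely, here is a small self-contained sketch (my own illustration; the broker URL and queue name are assumptions): two consumers share one Session, so their listeners run strictly one at a time even when two messages are waiting.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class SingleSessionDemo {

    public static void main(String[] args) throws Exception {
        // Any JMS 1.1 ConnectionFactory will do; ActiveMQ shown for consistency.
        ConnectionFactory factory = new org.activemq.ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("demoQ");

        MessageListener slowListener = new MessageListener() {
            public void onMessage(Message message) {
                System.out.println("delivery started");
                try { Thread.sleep(3000); } catch (InterruptedException ignored) { }
                System.out.println("delivery finished");
            }
        };
        // Two consumers on the SAME session: deliveries are still serialized,
        // so "started"/"finished" pairs never interleave.
        session.createConsumer(queue).setMessageListener(slowListener);
        session.createConsumer(queue).setMessageListener(slowListener);
        connection.start();

        // Send two messages from a separate session; the second listener
        // invocation does not begin until the first one returns.
        Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = producerSession.createProducer(queue);
        producer.send(producerSession.createTextMessage("one"));
        producer.send(producerSession.createTextMessage("two"));
    }
}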

Fortunately, Spring 2.0 added support classes for receiving concurrent messages, which is exactly what we need (yep, Spring rocks!). There are a few different support classes, such as DefaultMessageListenerContainer, SimpleMessageListenerContainer, and ServerSessionMessageListenerContainer.

The ServerSessionMessageListenerContainer "dynamically manages JMS Sessions, potentially using a pool of Sessions that receive messages in parallel". This class "builds on the JMS ServerSessionPool SPI, creating JMS ServerSessions through a pluggable ServerSessionFactory".

I tried altering optimizerContext.xml to use this class:

optimizerContextPooledSS.xml

<?xml version="1.0" encoding="UTF-8"?>

<beans>
    <bean id="optimizerServiceImpl" class="org.sanjiv.lingo.server.OptimizerServiceImpl" singleton="true"/>

    <bean id="optimizerServerListener" class="org.logicblaze.lingo.jms.JmsServiceExporterMessageListener">
        <property name="service" ref="optimizerServiceImpl"/>
        <property name="serviceInterface" value="org.sanjiv.lingo.common.OptimizerService"/>
        <property name="connectionFactory" ref="jmsFactory"/>
    </bean>

    <bean id="optimizerServer" class="org.springframework.jms.listener.serversession.ServerSessionMessageListenerContainer">
        <property name="destination" ref="optimizerDestination"/>
        <property name="messageListener" ref="optimizerServerListener"/>
        <property name="connectionFactory" ref="jmsFactory"/>
    </bean>

    <bean id="jmsFactory" class="org.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="tcp://localhost:61616"/>
        <property name="useEmbeddedBroker">
            <value>true</value>
        </property>
    </bean>

    <bean id="optimizerDestination" class="org.activemq.message.ActiveMQQueue">
        <constructor-arg index="0" value="optimizerDestinationQ"/>
    </bean>
</beans>

Unfortunately, the behavior was still the same: cancel() was not executed on the server until solve() completed. I posted this question on the Spring user list but did not get a response. This class uses the ServerSessionPool SPI, so I'm not sure whether the problem lies in the Spring class, in the ActiveMQ implementation of this SPI, or in something I've done wrong.

Anyway, I was able to successfully configure the DefaultMessageListenerContainer class and observed the desired behavior. In contrast to ServerSessionMessageListenerContainer, DefaultMessageListenerContainer "creates a fixed number of JMS Sessions to invoke the listener, not allowing for dynamic adaptation to runtime demands". While ServerSessionMessageListenerContainer would have been ideal, DefaultMessageListenerContainer is good enough for most use cases, as you'd typically want some sort of thread-pooled execution on the server anyway.

optimizerContextPooled.xml

<?xml version="1.0" encoding="UTF-8"?>

<beans>
    <bean id="optimizerServiceImpl" class="org.sanjiv.lingo.server.OptimizerServiceImpl" singleton="true"/>

    <bean id="optimizerServerListener" class="org.logicblaze.lingo.jms.JmsServiceExporterMessageListener">
        <property name="service" ref="optimizerServiceImpl"/>
        <property name="serviceInterface" value="org.sanjiv.lingo.common.OptimizerService"/>
        <property name="connectionFactory" ref="jmsFactory"/>
    </bean>

    <bean id="optimizerServer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
        <property name="concurrentConsumers" value="20"/>
        <property name="destination" ref="optimizerDestination"/>
        <property name="messageListener" ref="optimizerServerListener"/>
        <property name="connectionFactory" ref="jmsFactory"/>
    </bean>

    <bean id="jmsFactory" class="org.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="tcp://localhost:61616"/>
        <property name="useEmbeddedBroker">
            <value>true</value>
        </property>
    </bean>

    <bean id="optimizerDestination" class="org.activemq.message.ActiveMQQueue">
        <constructor-arg index="0" value="optimizerDestinationQ"/>
    </bean>
</beans>
Note : Although some Lingo examples create the destination as a Topic (ActiveMQTopic)
with the org.logicblaze.lingo.jms.JmsServiceExporter class, you must use a Queue when
using multiple JMS sessions for concurrent message retrieval, since a message sent to a
Topic is delivered to all listeners, which is not what we want.

Here's the result when using optimizerContextPooled.xml

You can download the complete source for this here and run the sample server and client. JRoller doesn't allow uploading .zip files, so I've uploaded the sample as a .jar file instead. The source distribution has a Maven 1.x project file; to build, simply run "maven". To run the optimizer server without pooled JMS listeners, run startOptimizer.bat under dist/bin/. To run with pooled JMS listeners, run startOptimizerPooled.bat, and to run the test client, run startClient.bat.

I am using this architecture to provide a remote API for our C++ optimizer. The C++ optimizer has a thin JNI layer which loads the Spring application context file, and the OptimizerServiceImpl has a bunch of native methods which are tied to the underlying C++ optimizer functionality using the JNI function RegisterNatives(). Do you Lingo? I'd like to hear how others are using Lingo/Spring Remoting.

Monday, March 12, 2007

Configuring vsftpd under Ubuntu.

Ugh, this little thing is a pain. I found a post online, followed it, and it works, so I'm just pasting the link here.
http://linux.hiweed.com/node/1080
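In case the link rots, the essentials are roughly these (the config values are my own typical example, not necessarily identical to the post's):

    sudo apt-get install vsftpd

Then edit /etc/vsftpd.conf, for example:

    anonymous_enable=NO
    local_enable=YES
    write_enable=YES

and restart the daemon:

    sudo /etc/init.d/vsftpd restart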

Wednesday, February 28, 2007

Version management during the software integration-testing phase.

The software integration-testing phase here means the period after the development team has released the first testable build, and the version management that goes with it. Two points need clarifying: 1) this is not to say that software testing starts only after the first release; testing accompanies the entire software development life cycle, and it does not necessarily mean testing code, it can also include reviewing documents. 2) Why do I say "integration testing"? Because in The Art of Software Testing, "system testing" refers to non-functional testing, so I use "integration testing" to mean functional testing of the system (as opposed to unit testing).
This reminds me of the testing I organized a while back for the 安徽通彩网 project. It was chaotic: the test team and the development team lacked effective communication, in large part because version management was a mess. When we discussed a bug, the bug was not pinned to a well-defined software version; we always said "the latest version", yet obviously many bugs were not in the latest version. If we always say "the latest version", it really means we have no version management at all. Furthermore, although the test team adopted an iterative testing approach, the development team had no clearly defined development process. In fact, the testing process the test team adopts should depend on the development team's process: if development is iterative, each iteration yields a testable build, and the test team should naturally iterate as well. For that project's testing I defined a three-day test iteration, within which all test cases had to be re-executed and the test environment rebuilt, and the software version was in effect defined by the test team... All I can say now is that at least the test team followed a testing process, and having one is better than having none.
Yesterday I read the document "Version Control with Subversion". The section on branches defines a version-management process for the release phase, in which branches play the leading role (the document describes several branch patterns, including the release branch and the feature branch); the one used here is the release branch. Quoting directly:
Here's where version control can help. The typical procedure looks like this:
• Developers commit all new work to the trunk. Day-to-day changes are committed to /trunk: new features, bugfixes, and so on.
• The trunk is copied to a “release” branch. When the team thinks the software is ready for release (say, a 1.0 release), then /trunk might be copied to /branches/1.0.
• Teams continue to work in parallel. One team begins rigorous testing of the release branch, while another team continues new work (say, for version 2.0) on /trunk. If bugs are discovered in either location, fixes are ported back and forth as necessary (whether a bug is found in the trunk or in the branch, the fix should be merged to the other side). At some point, however, even that process stops. The branch is "frozen" for final testing right before a release.
• The branch is tagged and released. When testing is complete, /branches/1.0 is copied to /tags/1.0.0 as a reference snapshot. The tag is packaged and released to customers (the release should be declared jointly by the development and test teams; for example, once the test team confirms that all tests pass, the development team can publish a new version).
• The branch is maintained over time. While work continues on /trunk for version 2.0, bugfixes continue to be ported from /trunk to /branches/1.0. When enough bugfixes have accumulated, management may decide to do a 1.0.1 release: /branches/1.0 is copied to /tags/1.0.1, and the tag is packaged and released (even after the software has officially shipped, bugs may still be found; once fixed, a bugfix release like this is published).
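For reference, the procedure above maps onto just a handful of Subversion commands; a sketch, with the repository URL as my own placeholder:

    svn copy http://svn.example.com/repos/trunk \
             http://svn.example.com/repos/branches/1.0 \
             -m "Create the 1.0 release branch"

    svn copy http://svn.example.com/repos/branches/1.0 \
             http://svn.example.com/repos/tags/1.0.0 \
             -m "Tag the 1.0.0 release"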