
Per-unit-test profiling

Posted by Mike Haller on Monday, November 16, 2009 at 23:01 in Java
As a preparation for long-term performance measurements, (sometimes called Continuous Performance Management or CPM), I tried to automate the creation of JProfiler profiling snapshots and, along the way, came up with the following side product: Automated per-test method profiling.

Basically, the idea is to go from here:

to here

"What's so great about this?" you might wonder.

Read the full article for fancy screenshots and code snippets.

(Btw, the test cases are due to this question. Usually, I don't do String performance tests. ;-))

Well, I'm the kind of guy who finds it amazing how deeply lots of code is buried within all those abstractions we're dealing with in today's applications. There are tons of libraries we use all day, for the sake of reusability. The complexity behind library functions is (luckily) often hidden away. However, there's a subtle drawback: we're stacking code upon code upon code without being aware of how much our applications suffer from the pressure. Or better: how much performance suffers from layer to layer, with each additional, thick stratum (the geological term for a tier or layer).

So, this is one way of getting dirty and occasionally looking under the hood:

  1. Install JProfiler 6

  2. Enable profiling for all unit tests by adding the JProfiler agent to the
    default JRE installation within Eclipse: Window > Preferences > Java > Installed JREs > Edit > Arguments
    "-agentpath:S:\Program Files (x86)\jprofiler6\bin\windows\jprofilerti.dll=
    config.xml" "-Xbootclasspath/a:S:\Program Files (x86)\jprofiler6\bin\agent.jar"

    (Line breaks added for clarity; it's actually a single line.)

  3. Copy the config.xml from the JProfiler samples and add the following three
    exclude filters:

    <config version="4.3">
      <nextId id="100" />
      <licenseKey key="....." />
      <session id="666" name="Samples" startFrozen="false">
        <group type="exclusive" name="Default excludes">
          <filter type="exclusive" name="org.junit." />
          <filter type="exclusive" name="com.jprofiler.api" />
          <filter type="exclusive" name="de.mhaller.jprofiler." />
        </group>
      </session>
    </config>

  4. Implement a JUnit 4 runner; I call it ProfilerRunner. It wraps each test
    method invocation, runs the method ten times to collect enough samples, and
    saves a snapshot named after the test class and method:

    import java.io.File;
    import org.junit.internal.runners.statements.InvokeMethod;
    import org.junit.runners.BlockJUnit4ClassRunner;
    import org.junit.runners.model.*;
    import com.jprofiler.api.agent.Controller;

    public class ProfilerRunner extends BlockJUnit4ClassRunner {

    	public ProfilerRunner(Class<?> klass) throws InitializationError {
    		super(klass);
    	}

    	@Override
    	protected Statement methodInvoker(final FrameworkMethod method, final Object test) {
    		return new InvokeMethod(method, test) {
    			@Override
    			public void evaluate() throws Throwable {
    				startupProfiling();
    				// Repeat the test method to get enough samples
    				for (int i = 0; i < 10; i++) {
    					super.evaluate();
    				}
    				shutdownProfiling(method, test);
    			}
    		};
    	}

    	protected void startupProfiling() {
    		Controller.startCPURecording(true);
    	}

    	protected void shutdownProfiling(FrameworkMethod method, Object test) {
    		Controller.stopCPURecording();
    		String className = test.getClass().getSimpleName();
    		String testName = method.getMethod().getName();
    		String filename = String.format("output/test-%s-%s.jps", className, testName);
    		Controller.saveSnapshot(new File(filename));
    	}
    }

  5. Add the @RunWith annotation to some tests, so they will be run with the profiler:

    @RunWith(ProfilerRunner.class)
    public class StringCharTest {

    	@Test
    	public void testNewString() throws Exception {
    		// ...
    	}
    }

  6. Run the test case and see the following output
    JProfiler> Protocol version 28
    JProfiler> Using JVMTI
    JProfiler> Using dynamic instrumentation
    JProfiler> Time measurement: elapsed time
    JProfiler> CPU profiling enabled
    JProfiler> Hotspot compiler enabled
    JProfiler> Starting org/eclipse/jdt/internal/junit/runner/RemoteTestRunner ...
    JProfiler> Saving snapshot C:\...\test-StringCharTest-testCacheStrings.jps ...
    JProfiler> Done.
    JProfiler> Saving snapshot C:\...\test-StringCharTest-testCharToString.jps ...
    JProfiler> Done.
    JProfiler> Saving snapshot C:\...\test-StringCharTest-testSubstring.jps ...
    JProfiler> Done.
    JProfiler> Saving snapshot C:\...\test-StringCharTest-testNewString.jps ...
    JProfiler> Done.

    Now you've reached a point where you can easily view the call stack for each test case. That alone would be
    interesting enough to spend days digging in the deep mud. But since I want to go even further, I'm going to import
    those snapshots into a database, so I can analyze them over time. Open the .jps files in JProfiler:

  7. Execute an Ant task to convert the .jps files into .xml files:

    <taskdef name="export"
         classname="com.jprofiler.ant.ExportTask"
         classpath="S:\Program Files (x86)\jprofiler6\bin\ant.jar" />
    <export snapshotfile="C:\....\output\test.jps">
      <view name="CallTree" file="output/calltree.xml"/>
    </export>

  8. Parse and import the XML files into a MySQL database table:
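The import step could look something like this minimal sketch, which turns an exported call tree into SQL INSERT statements. Note the assumptions: a `<node>` element with class/methodName/time attributes is my guess at the export schema (check what JProfiler actually writes), and the `profiling` table is made up:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class SnapshotImporter {

    // Turns an exported call tree into INSERT statements. The <node> element
    // with class/methodName/time attributes is an assumed schema; adjust it
    // to whatever the JProfiler export actually produces.
    public static List<String> toInserts(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList nodes = doc.getElementsByTagName("node");
        List<String> stmts = new ArrayList<String>();
        for (int i = 0; i < nodes.getLength(); i++) {
            Element n = (Element) nodes.item(i);
            stmts.add(String.format(
                    "INSERT INTO profiling (class, method, time_micros) VALUES ('%s', '%s', %s);",
                    n.getAttribute("class"), n.getAttribute("methodName"), n.getAttribute("time")));
        }
        return stmts;
    }

    public static void main(String[] args) throws Exception {
        String sample = "<tree><node class=\"java.lang.String\" methodName=\"substring\" time=\"120\"/></tree>";
        for (String stmt : toInserts(sample)) {
            System.out.println(stmt);
        }
    }
}
```

In a real setup you'd feed the values to a PreparedStatement over JDBC instead of building SQL strings by hand.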

Of course, it would be much cooler if this were integrated into Eclipse: a call stack profiling snapshot displayed every time I select a method, showing the aggregate, average and historically recorded profiling numbers from previous runs. I wish I had the time to implement a little SWT charting to show that.

Anyway, the next things to do for finishing this little exercise would be:

  • Find a way to tell Eclipse JUnit Runner to run all tests with the ProfilerRunner without specifying the @RunWith annotation everywhere

  • Find a way to reduce the CPU overhead this little gimmick adds (a test case currently executes about 500× slower than usual, which makes the approach rather impractical), perhaps by running it on another machine and relying only on precalculated data within the IDE

  • Find a way to integrate this into a Maven/Hudson CI environment and get smart reports out of it. I wish I could get reports like: "Warning! One of your test cases, which was running fine for the last X days, lost Y % performance in the last two days. Check it out and fix it before the release (which, according to JIRA, is Z days from now)."
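That last report boils down to comparing the latest measurement against a trailing baseline. A minimal sketch, with a made-up threshold and made-up numbers (real data would come from the snapshot database):

```java
public class RegressionCheck {

    // Returns true if the latest run is more than thresholdPercent slower
    // than the average of the previous runs.
    public static boolean regressed(double[] previousMillis, double latestMillis, double thresholdPercent) {
        double sum = 0;
        for (double t : previousMillis) {
            sum += t;
        }
        double baseline = sum / previousMillis.length;
        return latestMillis > baseline * (1 + thresholdPercent / 100.0);
    }

    public static void main(String[] args) {
        double[] history = { 100, 104, 98, 101 };
        // prints "true": 130 ms is more than 20% above the ~100 ms baseline
        System.out.println(regressed(history, 130, 20));
    }
}
```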

William Louth
This "excessive stacking" is the reason why call stack performance analysis (sampling) is not suitable in production, especially with around/interceptor-based frameworks like Spring, Guice and Seam.

By the way, you might be interested in this article.
Wow, having all that integrated into Maven/Hudson/CI would be really cool. Although I'm not sure the performance of "normal" unit tests really relates to the "real world" performance of your framework/library/application. I think only realistic integration tests (as automated as possible, of course) can deliver relevant performance figures...



My name is Mike Haller and I'm a software developer and architect at Bosch Software Innovations in Germany. I love programming, playing games and reading books. I like good food, making photos and learning and mentoring about the craftsmanship of commercial software development. Stack Overflow profile for mhaller