Playing with Atomicity and the volatile modifier

In the course of some reading I've been doing for Whirlycache, I started to wonder what effect Java's volatile modifier really has. So I wrote a test to find out.

Basically, it's just 32 threads, each running a loop of 10 million iterations and auto-incrementing two long fields, one of which is marked volatile. The results consistently look something like this:

    Number of loops: 320000000
    Non-volatile long: 134162630
    Volatile long: 29800659

We can learn two things here: Java's auto-increment operator is indeed not atomic, since it involves a separate read and a write (but that's not real news), and volatile made about 4.5 times more of a difference than I thought it would in this test.

The surprising thing was that the volatile long ended up much lower than the non-volatile long. I expected the opposite, but it turns out that because each thread has to read the volatile field from main memory on every access, each operation's result is frequently clobbered by another thread concurrently writing to the same field. When the field isn't marked volatile, each thread can work on a locally cached copy of the field (in a register or CPU cache) and actually has a chance of doing some work on it before writing the modified value back into main memory.
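The post doesn't include the test's source, but the setup can be sketched roughly like this (class and field names are my own, and the iteration count is scaled down; `volatileCounter++` is still a read-modify-write, so neither counter reaches the true total):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VolatileIncrementTest {
    // Two shared counters; only one is volatile.
    static long plainCounter = 0;
    static volatile long volatileCounter = 0;

    public static void main(String[] args) throws InterruptedException {
        final int THREADS = 32;
        final int ITERATIONS = 100_000; // the original test used 10 million
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        CountDownLatch done = new CountDownLatch(THREADS);

        for (int t = 0; t < THREADS; t++) {
            pool.submit(() -> {
                for (int i = 0; i < ITERATIONS; i++) {
                    plainCounter++;    // not atomic: read, add, write
                    volatileCounter++; // volatile guarantees visibility, not atomicity
                }
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();

        System.out.println("Number of loops: " + (long) THREADS * ITERATIONS);
        System.out.println("Non-volatile long: " + plainCounter);
        System.out.println("Volatile long: " + volatileCounter);
    }
}
```

Because updates are lost in both cases, both printed counters come out at or below the loop total; only the gap between them varies from run to run.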

I also found a very good article today on IBM developerWorks about Java synchronization.

Whirlycache Added to

Over the past few weeks, I've been working on some ideas for a Java object cache, and now there's some code to back it up. Seth Fitzsimmons was kind enough to go to the trouble of writing some code and setting up some project space; that's where you can monitor our progress.

From what I can tell, Whirlycache is the fastest Java object cache around today. I'd love to know if I am wrong about that.

Spam, Reinvented… Again: Asciispam

Just got this in my inbox. It successfully bypassed two Bayesian filters, and since it arrived, I have received another one as well. It's a little tricky for a filter to catch, because the scoring engine will be somewhat fooled by the low occurrence rate of strings like “yp52”. This could get ugly.
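To see why, here's a rough sketch (with made-up numbers, names, and a simplified scoring rule) of how a naive Bayesian scorer handles tokens it has never seen in training: unknown strings like “yp52” contribute a near-neutral probability, which drags the whole message's score toward the middle.

```java
import java.util.Map;

public class BayesSketch {
    // Hypothetical learned table of token -> P(spam | token);
    // a real filter builds this from training corpora.
    static double tokenSpamProb(Map<String, Double> learned, String token) {
        // A token never seen in training contributes a near-neutral score,
        // so a message made of random strings hovers around 0.5 or below.
        return learned.getOrDefault(token, 0.4); // 0.4 is a common neutral default
    }

    // Combine per-token probabilities (naive Bayes style).
    static double combined(Map<String, Double> learned, String[] tokens) {
        double spam = 1.0, ham = 1.0;
        for (String t : tokens) {
            double p = tokenSpamProb(learned, t);
            spam *= p;
            ham *= (1.0 - p);
        }
        return spam / (spam + ham);
    }
}
```

A message of nothing but unseen junk tokens scores below the usual spam threshold, while a single strong token like a known spam word pushes the score near 1.0.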

    vslm128u  1475396z      d7n47rx405      gs4v49ordg2f0n61
      548q      ma23      c705rh  ci9dg5    k2    o3xc    tv
      0h41      87z4    86w97a      yorcw9  7l    5ymc    e7
      30y7      7ue1    yp52          4zdl  8v    3u80    t5
      0z32u4rzm51m8e    81br          e66g        k7y4
      lqv9      k1bg    gob3          5gfq        0ggv
      6z30      93gv    ex57fb      mq9ozs        7o68
      s7ou      47fn      2710z0  68a880          9c56
    18v5qbfg  h51l38e2      fqgd3ekmgr        90ar771svgug    
        6gt4ug3q65w4    1317174h0d9u670i    ub6g0dxm  s8hfgg20
      qlh682  8mi2zl      0f12        53      77cu01  036w89
      z8g4      52pv      yo68        9v        c7e54xokw6
      t4v286w4            18gt    38            e216oaz8ez
        8yk8812e79        z3mmkk73y8              6xi105
          ky51093041      6ixy    f0            890v65836g
      39tz      6788      re4t        4a        3i00qe1xwm
      vr8845  bez372      xeph        4r      kxfwa1  ygsyc8
      nq30m9sp564b      3wqv576q02ou08i3    9t0ht271  gllm7f4e

PRX on Slashdot

My good friends (and favorite client!) at PRX were featured on Slashdot today. Inevitably, this brought a ton of unexpected traffic, which momentarily took the site offline. After tweaking some HTTP/1.1 keepalive settings in Apache and increasing the max heap size on the JVM, things seem to be pretty stable.
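The post doesn't give the actual settings, but the kind of tuning described looks roughly like this (values are illustrative, not the ones actually used):

```
# httpd.conf: HTTP/1.1 keepalive tuning
KeepAlive On
KeepAliveTimeout 2          # drop idle connections quickly under load
MaxKeepAliveRequests 100

# JVM: start the servlet container with a larger max heap, e.g.
# JAVA_OPTS="-Xmx512m"
```

A short keepalive timeout matters most under a traffic spike: it frees Apache worker slots that would otherwise sit idle holding open connections.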

Their /. experience seems to have been pretty similar to mine: traffic tops out at around 1.5Mbps and stays pretty solid for about a day. As a reference, we're talking about an increase in traffic like this:


I have yet to figure out why some sysadmins can't handle a slashdotting. By my estimates, a reasonably well-designed system running on an $800 desktop machine ought to be able to handle spikes like this.