In general, caching is a good thing. Creating a temporary local copy of some data can dramatically speed up repeated access to that data, especially if it’s hard to recreate or slow to access. However, if the local copy falls out of sync with the real data it can sometimes be disastrous. Most of the hard work in caching is finding ways to ensure that the cached copy is fresh. This is called ensuring “cache coherency”.
For example, web browsers ask web servers whether the remote content has changed since a given time. If it has, the cached copy is discarded and a fresh one is fetched; if not, the cache is used. This saves time because the simple “Is it fresh?” question is only a few bytes instead of the full size of the content. IE 5 had a painful misfeature: it didn’t send the “Is it fresh?” question for subsidiary files like .css, .js, and .jpg files. So, even if the main .html page was up to date, sometimes IE used the wrong version of the supporting data.
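The server’s half of that exchange is easy to sketch. Here’s a minimal, hypothetical illustration in Python (the function name and argument handling are my own, not any particular server’s API) of the If-Modified-Since / 304 Not Modified decision:

```python
from email.utils import parsedate_to_datetime

def freshness_status(if_modified_since, resource_mtime):
    """Decide like a web server: 304 if the client's cached copy is still
    fresh, 200 if the client must download the content again.

    if_modified_since: the client's "Is it fresh?" header value (or None)
    resource_mtime:    the real content's last-modified time (Unix seconds)
    """
    if if_modified_since is not None:
        cached_at = parsedate_to_datetime(if_modified_since).timestamp()
        if resource_mtime <= cached_at:
            return 304  # Not Modified: a few bytes instead of the full body
    return 200          # send the whole (possibly large) resource
```

The asymmetry is the whole point: a 304 answer costs a handful of header bytes, while a 200 answer costs the full content.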
Flash caches incorrectly
Last week I discovered a colossal caching problem with Flash MX 2004 (the authoring software, not the Flash Player plugged into your browser). When switching to MX 2004, most users (me included) notice that compile time is dramatically improved. One of the many reasons is that Flash caches object files behind the scenes. That is, if you have a class defined in a .as file, Flash compiles it to a .aso file and stores that object file deep in a prefs directory. On subsequent compilations, if the .as file is older than the .aso file, the latter is reused, possibly saving several seconds.
Well, Macromedia screwed it up. By making the test be “Is it older?” they opened themselves up to a cache coherency problem. Consider this scenario:
Say Alice and Bob are working on the same CVS project. Alice checks out some code one morning and starts programming. She makes some changes to Foo.as. Meanwhile, Bob speedily makes a critical bug fix to Bar.as and commits his change. An hour later, Alice finishes her Foo.as work and compiles her .fla. It fails because of the bug in Bar.as. Bob tells her, “I fixed that bug. Just do a cvs update and recompile.” She does, but the bug persists.
Why? It’s all about timestamps. Here’s what happened:
- Alice checked out Bar.as with timestamp of 10:00 AM
- Bob edited and checked in Bar.as with timestamp of 10:30 AM
- Alice compiled at 11:00 AM, creating a Bar.aso file with an 11:00 AM timestamp
- Alice cvs updated at 11:30 AM, getting Bar.as with a 10:30 AM timestamp
- Alice recompiled her .fla, but Bar.aso is “newer” than Bar.as, so Flash ignored Bar.as and used the buggy Bar.aso instead.
So, Alice is stuck with an out-of-date .aso file that Flash won’t fix.
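The flaw is easy to reproduce in miniature. Here’s a sketch in Python (my reconstruction of the “is the source newer?” test, not Macromedia’s actual code) that replays Alice’s timeline with fake timestamps:

```python
import os, tempfile

def aso_is_reusable(src, obj):
    """Flash MX 2004's apparent test: reuse the cached object file unless
    the source is strictly newer. That comparison is the bug."""
    return os.path.exists(obj) and os.path.getmtime(src) <= os.path.getmtime(obj)

def write_with_mtime(path, text, mtime):
    with open(path, "w") as f:
        f.write(text)
    os.utime(path, (mtime, mtime))  # backdate the file to the given time

d = tempfile.mkdtemp()
src, obj = os.path.join(d, "Bar.as"), os.path.join(d, "Bar.aso")

# Timestamps as minutes past 9:00 AM:
write_with_mtime(src, "buggy code", 60)    # 10:00 - Alice's checkout
write_with_mtime(obj, "object code", 120)  # 11:00 - Alice compiles
write_with_mtime(src, "fixed code", 90)    # 11:30 - cvs update delivers
                                           #         Bob's 10:30 fix
print(aso_is_reusable(src, obj))           # True: the stale .aso wins
```

Even though Bar.as now contains the fix, its timestamp is older than Bar.aso’s, so the check reuses the buggy object file.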
There are two easy workarounds to this problem, but both of them require manual intervention.
Touch the .as file
By editing the .as files, you make them newer again and Flash rebuilds the .aso files. Instead of manually editing, the Unix “touch” command comes in handy. Here’s a one-line command to touch all of your project’s .as files to ensure they are newest:
find . -name '*.as' | xargs touch
If you do a lot of ActionScript work, though, this is a weak workaround because you may have updated .as files somewhere else in your classpath.
Blow away the .aso files
Like in the browser world, the answer here is “Clear your cache!” By deleting the .aso files you can force Flash to rebuild them all, at the expense of compile time. The trick is that you need to know where they are. Here’s a command that works on Mac OS X only (all one line):
find ~/Library/Application*Support/Macromedia/Flash*MX*2004/en/Conf*/Classes \
-name '*.aso' | perl -lpe 'unlink $_'
You have to remember to run one of these every time you might introduce an older-but-changed .as file (often from a source repository like CVS or from a network drive).
What Macromedia should do
How can Macromedia fix this problem? Very simply! This problem was solved by caching C compilers. There are two popular ways to ensure synchronicity between source and object:
- Make the object have the EXACT same timestamp as the source
- Cache some representation of the source with the object
In the former case, the algorithm for cache testing is this:
if (timestamp(object file) == timestamp(source file))
    reuse object file
else
    recompile source file
    set timestamp(object file) = timestamp(source file)
In any POSIX environment, this timestamp setting can be accomplished with the stat() and utime() functions.
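A minimal sketch of that algorithm in Python (the function names and return convention are mine; `os.stat()` and `os.utime()` wrap the POSIX calls mentioned above):

```python
import os

def cache_hit(src, obj):
    """Reuse the object only when its timestamp EXACTLY matches the source's."""
    return (os.path.exists(obj)
            and os.stat(obj).st_mtime_ns == os.stat(src).st_mtime_ns)

def compile_if_needed(src, obj, compile_fn):
    if cache_hit(src, obj):
        return False                   # cache hit: object reused, no work done
    compile_fn(src, obj)               # recompile
    st = os.stat(src)
    os.utime(obj, ns=(st.st_atime_ns, st.st_mtime_ns))  # copy the source's
    return True                        # exact timestamp onto the object
```

Because the test is equality rather than “newer than”, an older-but-changed source file (like Alice’s updated Bar.as) no longer matches the object’s timestamp and correctly forces a rebuild.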
This exactitude of timestamps can be problematic when the source and object files are on different drives. For example, if one is on a network drive and the other on a local drive, and the two computers disagree on the current time, the timestamps may not match.[1]
A better solution is to keep a representation of the source along with the object. Copying the whole source is possible, but wasteful. Instead, a hash of the source can be computed and stored. Then, the next time you want to compile, recompute the hash (much faster than recompiling) and compare. If it matches, use the object; otherwise recompile. The fantastic ccache project uses MD4 hashes of C source code (after macro interpolation) to detect cache hits and misses.
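Here’s a sketch of the hash approach (the file layout and names are my own invention, and ccache’s actual scheme is more involved; MD5 stands in for MD4 here since it’s what Python’s standard library guarantees):

```python
import hashlib, os

def file_hash(path):
    """Hash of the source's contents -- timestamps play no part."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def hash_cache_hit(src, obj):
    """The object is valid iff the hash stored beside it matches the
    source's CURRENT hash, regardless of any timestamps."""
    hash_file = obj + ".hash"
    if not (os.path.exists(obj) and os.path.exists(hash_file)):
        return False
    with open(hash_file) as f:
        return f.read() == file_hash(src)

def compile_with_hash_cache(src, obj, compile_fn):
    if hash_cache_hit(src, obj):
        return False                    # cache hit
    compile_fn(src, obj)                # recompile
    with open(obj + ".hash", "w") as f:
        f.write(file_hash(src))         # remember exactly what we compiled
    return True
```

Rehashing is much faster than recompiling, and the check is immune to clock skew, backdated files, and every other timestamp pathology in this post.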
This is an easy mistake to make. For the vast majority of users, Macromedia’s caching solution is a pure win. It’s only when you run into cases where timestamps do not increase monotonically that one encounters problems.
The best advice for programmers implementing a caching solution: be paranoid and consider your failure cases first. If you cannot judge your failure cases, or cannot afford any failure, you shouldn’t cache at all. In cases like compilation, where correctness is paramount, correct-but-slow data is always better than fast-but-crappy data. This is even more true in financial settings. Imagine a scenario where ATMs cached data about account balances: all a gang of crooks would need to do is hit two ATMs at the same time. The same goes for e-voting machines without paper trails. Sometimes it’s better not to cache at all, because you can spend so much developer effort ensuring that the cache is good that you lose the very time you set out to save.
[1] This is a problem I’ve experienced with SMB. Generally, I try to allow a two-second timestamp window. If the machines are both NTP-synched, then they should always agree to that level of precision, one hopes.