Re: Re: Shrinking ext3 directories
On Jun 21, 2002 05:28 +0200, Daniel Phillips wrote:
> I ran a bakeoff between your new half-md4 and dx_hack_hash on Ext2. As
> predicted, half-md4 does produce very even bucket distributions. For 200,000
> entries:
>
>   half-md4:     2872 avg bytes filled per 4k block (70%)
>   dx_hack_hash: 2853 avg bytes filled per 4k block (69%)
>
> but guess which was faster overall?
>
>   half-md4:     user 0.43  system 6.88  real 0:07.33  CPU 99%
>   dx_hack_hash: user 0.43  system 6.40  real 0:06.82  CPU 100%
> This is quite reproducible: dx_hack_hash is always faster by about 6%. This
> must be due entirely to the difference in hashing cost, since half-md4
> produces measurably better distributions. Now what do we do?
While I normally advocate the "cheapest" way of implementing a given
solution (and dx_hack_hash is definitely the lowest-cost hash function
we could reasonably have), I would still be inclined to go with half-MD4
for this. A few reasons for that:
1) CPUs are getting faster all the time
2) it is a well-understood algorithm that has very good behaviour
3) it is much harder to spoof MD4 than dx_hack_hash
4) it is probably better to have the most uniform hash function we can
find than to do lots more block split/coalesce operations, so the
extra cost of half-MD4 may be a benefit overall
It would be interesting to re-run this test to create a few million
entries, but with periodic deletes.
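A create/delete mix could be sketched something like the following. This is
scaled way down so it runs quickly; the directory name, file counts, and
batch size are placeholders I made up, and a real run would use a few
million entries on the ext2/ext3 test partition:

```shell
# Create files, and after every $BATCH creates delete every second file
# from that batch, so leaf blocks see coalescing as well as splitting.
TESTDIR=${TESTDIR:-/tmp/hash_mix_test}  # placeholder scratch directory
mkdir -p $TESTDIR
N=1000; BATCH=100                       # a real run would be far larger
i=0
for f in `seq -f "file_name_%07g" 1 $N`; do
	touch $TESTDIR/$f
	i=$((i + 1))
	if [ $((i % BATCH)) -eq 0 ]; then
		# remove the odd-numbered files from the batch just created
		for g in `seq -f "file_name_%07g" $((i - BATCH + 1)) 2 $i`; do
			rm -f $TESTDIR/$g
		done
	fi
done
echo "`ls $TESTDIR | wc -l` files remain"
```

Half of each batch is deleted, so 500 of the 1000 files survive, and the
directory has been forced through both split and coalesce paths.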
Hmm, now that I think about it, split/coalesce operations are only
important on create and delete, while the hash cost is paid for each
lookup as well. It would be interesting to see the comparison with
a test something like this (sorry, I don't have a system with both
hash functions working right now):
	for d in `seq -f "directory_name_%05g" 1 10000`; do
		mkdir $TESTDIR/$d
		for f in `seq -f "file_name_%05g" 1 10000`; do
			touch $TESTDIR/$d/${d}_$f
		done; done
	umount $TESTDIR; mount -o noatime $DEV $TESTDIR
	for d in `seq -f "directory_name_%05g" 10000 -1 1`; do
		for f in `seq -f "file_name_%05g" 10000 -1 1`; do
			stat $TESTDIR/$d/${d}_$f > /dev/null
		done; done
Having the longer filenames will put more load on the hash function,
so we will see whether we are really paying a big price for its overhead,
and the stat pass removes all of the creation time and disk dirtying,
hopefully leaving us with just the pure lookup costs.
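A scaled-down version of that lookup-only measurement could look like this
(the directory name and file count are placeholders; a real run would
umount/remount first, as above, to drop cached directory blocks):

```shell
# Build a small tree once, then time only the stat pass so that
# creation cost and disk dirtying are excluded from the measurement.
T=${T:-/tmp/lookup_cost_test}           # placeholder test directory
mkdir -p $T
for f in `seq -f "file_name_%05g" 1 500`; do
	touch $T/$f
done
start=`date +%s`
for f in `seq -f "file_name_%05g" 500 -1 1`; do
	stat $T/$f > /dev/null
done
end=`date +%s`
echo "lookup pass: `expr $end - $start` seconds for 500 stats"
```

At this scale the pass finishes in well under a second; the point is only
that the timed region contains nothing but lookups.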