# $Id: README,v 1.4 2002/04/10 13:31:30 ksb Exp $

When you have an ftp "pub/incoming" directory, local users swap files
into and out of it all the time: they park a big data set there for a
buddy to pull, or a buddy pokes a file in for them.  This is really
common, but eventually the spool area fills with old crud that you are
not sure can be deleted.  So you code a crontab line to find files
older than a few days and delete them with "xargs rm", then someone
complains that the spool area is only 5% full -- why did you delete
that file?

I've got a program for you.  You can run this program like your
"find + xargs" command from cron.  It only deletes files when the
spool partition is over a set threshold, and only deletes about what
it takes to get the partition back under the wire.  It is given a list
of directories to scan, builds a weighted list of files it would like
to delete (by size and age), then deletes just enough files to do the
trick.  It is cheap enough to run a few times a day (at least) and
does almost nothing if the partition already has enough free
resources.

For example, to keep ~ftp/pub/incoming clean and fresh:

	kruft -S75 -I90 /var/spool/ftp/pub/incoming

A longer (more real) example: the SA+C MTP process creates some
wreckage in many scattered "OldTarBalls" directories under
/home/stage[0-9].  Each subdirectory of each OldTarBalls/ is named for
the $year.$month it was created, and each of them holds many tar files
we can delete.  This shell script (using xapply, another ksb tool)
cleans them up with kruft(tm).  We might use xargs here.

	#!/bin/ksh
	# clean up the OldTarBalls dirs -- via kruft -- ksb
	find ${1-/home/stage?} -type d -name OldTarBalls -print |
	xapply -f 'find %1/[1-9]???.[0-1]? -type d -print -prune 2>/dev/null' - |
	xargs /usr/local/etc/kruft -a -S70 -I85
	exit 0

-- ksb, 01 June 2001
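
P.S. The "run it from cron a few times a day" usage might look like the
crontab entry below.  This is only a sketch: the schedule is made up,
and it assumes kruft is installed as /usr/local/etc/kruft with the
thresholds from the first example above.

```shell
# Hypothetical crontab entry: run kruft three times a day.
# It is nearly a no-op while the partition is under the -S threshold.
0 2,10,18 * * * /usr/local/etc/kruft -S75 -I90 /var/spool/ftp/pub/incoming
```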