# $Id: TODO,v 1.11 2010/12/15 19:09:07 ksb Exp $

Read lsof output to exclude any open files, maybe under an option.
Something like popen("[op] lsof -snbP -F aDis0", "r") would be nice.
That emits NUL-terminated fields in lines like:
	"p601^@\na ^@D0x6801^@s4096^@i2^@\n"

I think we need to be able to fake a hard limit for a directory.
Imagine a spool that should use no more than 2Gb under a directory:
today one must express that in terms of the whole filesystem above it.
We should be able to say:
	kruft -d 2g -a /var/spool/lpc/*
(I think 'd' excludes 'SI'.)  We'd build the size of the directory as
we stat the files, then feed that into the heap code.

Maybe give it an option to use something other than unlink(2) to purge
target files, like the "purge" program's own note about sending the
names to be deleted to a filter on stdout (-O).  There are lots of
minor issues with odd characters in filenames (-z from xapply fixes
that); we could just provide -Oz and let a pipe cope as needed.

Alternatively we could fork a process per file (-x cmd).  In that code
we might expand several %escapes, as xapply does, to start a process
that reduces the size of the file.  Like xapply we should substitute
%f, %u, the %(mixer) and %[dicer], and some based on how much we still
need to free to get under our space/inode goal (%b, %c).  Let's let %t
make a tempfile under $TMPDIR (when specified), and rm it after the
command, so something like:
	kruft -x "tail -100 %f >%t && cp %t %f || truncate -r /dev/null %f" ...
would replace a file with its last 100 lines, or truncate it.

After the process runs we'd have to re-stat(2) the file to see how
effective the command was.  If it was totally ineffective we should
count those and track the count in a percent escape (as well as
non-zero exit codes).  That makes this pretty smart (compared to not
doing it).  We could even (shudder) take -P and use an xclate/xapply
machine to do the work (too much for the problem at hand, but easy
[for me] to do).
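A minimal sketch of reading that -F0 field stream: with -F0 each field
is NUL terminated and its tag is the first byte, and a record ends at
a newline, so we can walk the buffer with strlen().  Only the device
(D) and inode (i) fields matter for matching files we stat(2)
ourselves; the struct and function names here are made up for
illustration, not kruft's actual code:

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* One open file as reported by lsof: device + inode is enough
 * to match it against our own stat(2) results. */
struct open_file {
	unsigned long dev;
	unsigned long ino;
};

/* Parse a buffer of "lsof -F aDis0" output.  Fields are NUL
 * terminated with a one-byte tag; records end in '\n'.  Fill out[]
 * with up to max (dev,ino) pairs, return how many we found.      */
static size_t
parse_lsof(const char *buf, size_t len, struct open_file *out, size_t max)
{
	size_t n = 0;
	struct open_file cur = {0, 0};
	const char *p = buf, *end = buf + len;

	while (p < end) {
		if (*p == '\n') {	/* end of record: emit if complete */
			if (cur.dev != 0 && cur.ino != 0 && n < max)
				out[n++] = cur;
			cur.dev = cur.ino = 0;
			p++;
			continue;
		}
		switch (*p) {		/* field tag, then its value */
		case 'D':
			cur.dev = strtoul(p + 1, NULL, 0);	/* "0x6801" */
			break;
		case 'i':
			cur.ino = strtoul(p + 1, NULL, 10);
			break;
		default:		/* p (pid), a (access), s (size): skip */
			break;
		}
		p += strlen(p) + 1;	/* step over the NUL to the next field */
	}
	return n;
}
```

Any file whose (dev,ino) shows up in that list we'd skip as "open",
rather than purge it out from under a running process.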
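The per-directory cap could be as simple as the bookkeeping below:
sum the sizes as we stat each file under the directory, and the excess
over the cap becomes the byte goal we feed to the heap code.  A tiny
sketch with hypothetical names (real code would use st_blocks from the
stat structures, not a flat array):

```c
#include <stddef.h>
#include <assert.h>

/* Given the sizes of the nfile files under a directory and a cap in
 * bytes (-d 2g), return how many bytes we must free to get under it. */
static long long
dir_excess(const long long *sizes, size_t nfile, long long cap)
{
	long long used = 0;
	size_t i;

	for (i = 0; i < nfile; i++)
		used += sizes[i];
	return used > cap ? used - cap : 0;
}
```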
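The -x expansion could start from a loop like the one below, which
handles just %f, %b, and %%; %u, %c, %t, the %(mixer) and %[dicer]
would bolt on as more cases.  This is a sketch, not xapply's actual
expander, and the function name is made up:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Expand a few proposed -x %escapes into out (outz bytes):
 * %f -> the current file name, %b -> bytes still to free,
 * %% -> a literal %.  Unknown escapes are kept verbatim.  */
static void
expand_cmd(char *out, size_t outz, const char *tmpl,
    const char *file, long long bytes_left)
{
	size_t o = 0;
	const char *p;

	for (p = tmpl; *p != '\0' && o + 1 < outz; p++) {
		if (*p != '%') {
			out[o++] = *p;
			continue;
		}
		p++;
		if (*p == 'f')
			o += snprintf(out + o, outz - o, "%s", file);
		else if (*p == 'b')
			o += snprintf(out + o, outz - o, "%lld", bytes_left);
		else if (*p == '%')
			out[o++] = '%';
		else if (*p == '\0') {	/* trailing bare % */
			out[o++] = '%';
			break;
		} else {		/* unknown escape: keep it */
			out[o++] = '%';
			if (o + 1 < outz)
				out[o++] = *p;
		}
		if (o + 1 >= outz)
			break;		/* truncated: stop cleanly */
	}
	out[o < outz ? o : outz - 1] = '\0';
}
```

The expanded string would go to the shell (or an execvp vector), one
child per target file.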
Under -n (and -z) we should gauge the effectiveness of our picks: that
is, keep a running count of the target blocks/inodes we want to free
and quit when we reach that goal.  Under -x we could provide the goal
via percent expanders (one for space reduction, one for inode
reduction): for example %b == the number of bytes left to free on the
filesystem, %c == the number of inodes we must still remove.  We might
try truncate, then unlink; or provide a filter that does that and use
it under -O above.  I don't need a shredder option here; use -x.
							-- ksb, Dec 2010
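The goal tracking and the re-stat(2) effectiveness check above could
share one bit of bookkeeping.  A sketch, assuming we stat the file
before and after the -x command runs; the struct, fields, and function
name are all hypothetical:

```c
#include <assert.h>

struct goal {
	long long bytes_left;	/* bytes we still want to free */
	long long inodes_left;	/* inodes we still want to remove */
	long ineffective;	/* -x runs that freed nothing */
	long failed;		/* -x runs that exited non-zero */
};

/* Account for one processed file: before/after are the stat(2) sizes
 * around the -x command, unlinked is set when the file is gone.
 * Return 1 when both goals are met, so kruft may stop early.      */
static int
account(struct goal *g, long long before, long long after,
    int unlinked, int exit_status)
{
	long long freed = before - after;

	if (exit_status != 0)
		g->failed++;
	if (freed <= 0 && !unlinked)
		g->ineffective++;	/* totally ineffective: count it */
	if (freed > 0)
		g->bytes_left -= freed;
	if (unlinked)
		g->inodes_left -= 1;
	return g->bytes_left <= 0 && g->inodes_left <= 0;
}
```

The counters are exactly what the %b/%c expanders and the proposed
"ineffective runs" escape would report.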