# $Id: INSTALL,v 1.4 2009/03/10 14:47:58 ksb Exp $

Install this on systems where you trust most of the Users.  If a login
wants to torque the system, kicker would be a great way to start.  (Of
course crontabs are just about as bad.)

The normal master source rules apply, and you need mkcmd as usual.  Get
msrc0, msrc, mkcmd, and install Pkgs from your favorite Pundits site
(ftp.physics.purdue.edu, ftp.npcguild.org, www.npcguild.org/~ksb).

The plan
--------
Configure at(1)'s queuedefs file before you get too far along here,
because if you can't reconfigure batch this won't help you much.  Then
install the program and the spool directories.  Add cron support.  Then
install some test tasks (and remove them).

Queuedefs and allow lists
-------------------------
See the manual page for queuedefs on your system.  Usually it has a
format like:

	queue-name "." joblimit "j" nice "n" [ sleep "w" ]

The default for the batch queue is almost always:

	b.2j2n90w

If you want more than just the default "a" and "b" queues, add them.  I
use "r Report", "z Compress", "c Copy", and "x cleanup" for my log
reporting tasks:

	r.1j3n
	z.1j9n
	c.1j2n
	x.1j0n

This means that one task of each type can be running at a time, which
keeps the host running well, but not so loaded that it thrashes.

On some hosts the default is that very few people can use cron/at/batch.
You might need to touch "cron.deny" and remove "cron.allow", and/or
touch "at.deny" and remove "at.allow", in /usr/lib/cron, /var/at, or
some such.  It's in the manual page for at(1), mostly.

The driver (kicker)
-------------------
The program is trivial, about 300 lines in kicker.mc and kicker.m.  Read
it.  It just forks a shell as each user it finds owning a file and drops
that file in a batch queue.  It is not setuid; it is just run from
root's crontab or the system startup scripts.

Install it with msrc, or use "makeme" to build it locally.
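The driver's loop can be paraphrased in shell.  This is only a sketch:
the real kicker is a C program built from kicker.mc, and the function
names (queue_for, kick_one) and the echoed dry-run command are invented
here for illustration, not taken from the source.

```shell
#!/bin/sh
# Rough paraphrase of kicker's per-file step; names are hypothetical.

# Map a task file name to its batch queue letter: a leading letter
# followed by "." or "-" names the queue, otherwise default to "b".
queue_for () {
	case "$1" in
	[A-Za-z].*|[A-Za-z]-*) printf '%s\n' "$1" | cut -c1 ;;
	*)                     echo b ;;
	esac
}

# For each task file, kicker in effect runs (as root) something like:
#   su OWNER -c "at -q QUEUE -f TASKFILE now"
# Here we only echo the command, as a dry run.
kick_one () {
	owner=$(ls -l "$1" | awk '{print $3}')
	q=$(queue_for "$(basename "$1")")
	echo "su $owner -c \"at -q $q -f $1 now\""
}
```

The real program also handles the directory forms and error cases; read
kicker.mc for the authoritative logic.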
The spool directories
---------------------
Tasks are stored in a directory structure like this: a directory for
system boot and one for system shutdown, one for every hour of the day,
plus one for "end-of-day" and optionally one for "top of the hour":

	00		injection time for midnight tasks
	01		01:00 hours
	02		02:00
	...		...
	22		22:00 hours
	23		23:00 hours
	eod		23:55 hours (or so)
	boot		when system boot starts run level 3
	shutdown	when run level 3 closes (which doesn't help at all)
	top		run at the top of every hour [optionally]

The modes on the base directory (/var/kicker) should not be easy to
change; maybe root-only for write.  The time-based directories
(/var/kicker/01 and the like) can be either "group write" or owned by a
special user to allow updates.  {I'd use op, installus, or vinst -U, as
you might guess.}

So you get to set some site policy.  You can use the stock rules in the
Makefile.host recipe to get started (make install_spool).  They let
anyone in group "adm" (or "sys") add tasks to the kicker cycle.  That
might mean that the logins that can use kicker are in group adm, or can
run a program setgid adm to put a file in place (aka "op" or
"installus").

This is version 1.21.  In versions of kicker before 1.10, don't use
"installus", because files in the OLD dirs will still be executed!  In
later versions it is OK, because "OLD" is never searched for task files.

Some of our move-to-production tasks just install the files as root with
a tar file we get from the Customer.  When my Customers want to hose us
there are easier ways than the move-to-production system to do it!

The cron support
----------------
Once again you get to set policy.  The file kicker.cron has the lines
from my /etc/crontab on a FreeBSD host.  Delete the "root " part for a
Sun crontab.  You can delete the "top" directory from all the hour
entries (except eod) if you don't want that support, and you might rmdir
/var/kicker/top.  Once you put these tasks in root's crontab, kicker is
ready to go.
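The spool tree above can be created with a few lines of shell.  This is
a sketch: the BASE default here is a throw-away demo path so you can try
it unprivileged; for a real install run it as root with BASE=/var/kicker
(or just use "make install_spool" as described above, which also sets
the group policy).

```shell
#!/bin/sh
# Create the kicker spool layout; BASE=/tmp/... is a demo assumption.
BASE="${BASE:-/tmp/kicker-spool-demo}"
umask 022
h=0
while [ $h -le 23 ]; do
	mkdir -p "$BASE/$(printf '%02d' $h)"	# 00 .. 23, one per hour
	h=$((h + 1))
done
mkdir -p "$BASE/eod" "$BASE/boot" "$BASE/shutdown" "$BASE/top"
chmod 755 "$BASE"	# keep the base directory hard to change
```

Remember to loosen the modes (group write, or a special owner) on the
hour directories afterward, per your site policy.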
To make any "boot" tasks work you'll have to install the "kicker.sh"
file in /etc/init.d with links (Sun style) or in /usr/local/etc/rc.d
(BSD style).  I would, because it saves you all the requests to add
stuff to the system boot sequence.

The "shutdown" tasks tend to be run at the next boot, because there is
not enough time to get them queued before the host halts.  That's a bug,
kinda.

Tasks
-----
In the time-based spool directories we keep task files owned by the
login that should run them.  The first letter of a task's name is the
batch queue to run the job in, if it has a slash (/), dot (.), or hyphen
(-) after it; visually:

	QUEUE/TASK
	QUEUE.TASK
	QUEUE-TASK

If it doesn't have a queue name then we use "b" (the lowest-priority
batch queue).  E.g. 16/netlabel/R.traffic, when owned by "log", might be
a traffic report for netlabel that runs from the "R" queue at 16:00.
You'd have to define the "R" queue in batch's queuedefs file (Solaris
and the like), or take the BSD-style default where higher letters are
niced more.

Testing
-------
Add a test task for the next hour, like:

	#!/bin/sh
	( id ; pwd ; env ) | Mail -s "`date`" myself
	exit 0

as /var/kicker/17/b.testing; make sure it is owned by "myself" in your
primary login group (or a group you are not in normally, as you like).
Run "kicker -v 17" as the superuser to watch it (not) work, and read
the e-mail.

If that worked, you should wait for 17:00:20 to see if the task runs
from cron.  Wait for the test e-mail.  If you got what you expected all
is well; else you'll have to debug the data flow:

	cron -> kicker -> /var/kicker/XX -> Q.task(uid.gid) -> execute

Setting the allow/deny files for at/batch is the most common issue;
check that before you tell me kicker doesn't work.

-- ksb, March 2009