Archive for October, 2008

CRONTAB AT A GLANCE

Posted: October 29, 2008 in LINUX

How do I add a cron job under Linux or a UNIX-like operating system?

A. Cron jobs are used to schedule commands to be executed periodically, i.e. to set up commands that will run repeatedly at a set time.

crontab is the command used to install, deinstall or list the tables used to drive the cron daemon in Vixie Cron. Each user can have their own crontab, and though these are files in /var/spool/cron/crontabs, they are not intended to be edited directly. You need to use the crontab command to edit or set up your own cron jobs.

To edit your crontab file, type the following command:
$ crontab -e
Syntax of crontab
A cron job entry looks as follows:
1 2 3 4 5 /path/to/command arg1 arg2

Where,

1: Minute (0-59)
2: Hours (0-23)
3: Day of month (1-31)
4: Month (1-12 [12 == December])
5: Day of the week (0-7 [7 or 0 == Sunday])
/path/to/command – Script or command name to schedule
The same five-field structure can be easily remembered with the following diagram:

* * * * * command to be executed
- - - - -
| | | | |
| | | | +----- Day of week (0 - 7) (Sunday = 0 or 7)
| | | +------- Month (1 - 12)
| | +--------- Day of month (1 - 31)
| +----------- Hour (0 - 23)
+------------- Minute (0 - 59)

Example(s)
If you want a script named /root/backup.sh to run every day at 3am, the crontab entry would look as follows:
(a) Install your cron job:
# crontab -e
(b) Append the following entry:
0 3 * * * /root/backup.sh

Run five minutes after midnight, every day:
5 0 * * * /path/to/command

Run at 2:15pm on the first of every month:
15 14 1 * * /path/to/command

Run at 10pm on weekdays:
0 22 * * 1-5 /path/to/command

Run 23 minutes after midnight, 2am, 4am and so on, every day:
23 0-23/2 * * * /path/to/command

Run at 5 minutes after 4am every Sunday:
5 4 * * sun /path/to/command

Use of operators
An operator allows you to specify multiple values in a field. There are three operators:

The asterisk (*) : This operator specifies all possible values for a field. For example, an asterisk in the hour time field would be equivalent to every hour or an asterisk in the month field would be equivalent to every month.
The comma (,) : This operator specifies a list of values, for example: “1,5,10,15,20,25”.
The dash (-) : This operator specifies a range of values, for example: “5-15” days , which is equivalent to typing “5,6,7,8,9,….,13,14,15” using the comma operator.
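For example (the script path below is only a placeholder), the operators can be combined in a single entry; the following runs at 8am and 8pm, Monday through Friday:
0 8,20 * * 1-5 /path/to/report.sh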
How do I disable email output?
By default, the output of a command or a script (if any is produced) will be emailed to your local email account. To stop receiving email output from crontab, append >/dev/null 2>&1 to the entry. For example:
0 3 * * * /root/backup.sh >/dev/null 2>&1
To mail the output to a particular email account, let us say vivek@nixcraft.in, define the MAILTO variable in your crontab (and do not redirect the output away):
MAILTO="vivek@nixcraft.in"
0 3 * * * /root/backup.sh
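If you would rather keep the output on disk than mail or discard it, one possible variation (the log file path here is only an example) is to append it to a log file:
0 3 * * * /root/backup.sh >>/var/log/backup.log 2>&1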

Task: List your crontab jobs
Type the following command:
# crontab -l
To remove or erase all crontab jobs, type the following command:
# crontab -r
Use special strings to save time
Instead of the first five fields, you can use one of eight special strings. This not only saves time but also improves readability.

Special string Meaning
@reboot Run once, at startup.
@yearly Run once a year, “0 0 1 1 *”.
@annually (same as @yearly)
@monthly Run once a month, “0 0 1 * *”.
@weekly Run once a week, “0 0 * * 0”.
@daily Run once a day, “0 0 * * *”.
@midnight (same as @daily)
@hourly Run once an hour, “0 * * * *”.

Run ntpdate every hour:
@hourly /path/to/ntpdate
Make a backup everyday:
@daily /path/to/backup/script.sh
Understanding the /etc/crontab file and /etc/cron.d/* directories
/etc/crontab is the system crontab file. It is usually used only by the root user or by daemons to configure system-wide jobs. Individual users must use the crontab command to install and edit their jobs, as described above. /var/spool/cron/ or /var/cron/tabs/ is the directory for personal user crontab files; it should be backed up along with users' home directories.

Typical /etc/crontab file entries:

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly

Additionally, cron reads the files in the /etc/cron.d/ directory. Usually system daemons such as sa-update or sysstat place their cron jobs here. As the root user or superuser you can use the following directories to configure cron jobs, and you can drop your scripts into them directly. The run-parts command runs the scripts or programs found in a directory; it is invoked via /etc/crontab.

Directory Description
/etc/cron.d/ Put all scripts here and call them from /etc/crontab file.
/etc/cron.daily/ Run all scripts once a day
/etc/cron.hourly/ Run all scripts once an hour
/etc/cron.monthly/ Run all scripts once a month
/etc/cron.weekly/ Run all scripts once a week
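
For instance, a minimal drop-in file for /etc/cron.d/ (the file name and script path below are hypothetical examples) uses the same seven-field format as /etc/crontab, including the user field:

# /etc/cron.d/diskreport - sample drop-in cron job
MAILTO=root
30 6 * * * root /usr/local/bin/diskreport.sh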

How do I put scripts in the above directories?
Here is a sample shell script (clean.cache) that removes cached files older than 10 days. The script is created directly in the /etc/cron.daily/ directory, i.e. create a file called /etc/cron.daily/clean.cache:

#!/bin/bash
CROOT="/tmp/cachelighttpd/"
DAYS=10
LUSER="lighttpd"
LGROUP="lighttpd"

# start cleaning
/usr/bin/find ${CROOT} -type f -mtime +${DAYS} | xargs -r /bin/rm

# if directory deleted by some other script just get it back
if [ ! -d $CROOT ]
then
/bin/mkdir -p $CROOT
/bin/chown ${LUSER}:${LGROUP} ${CROOT}
fi
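
Once the file is in place, remember to make it executable; you can also run it once by hand to check that it works (assuming the cache directory exists):

# chmod 755 /etc/cron.daily/clean.cache
# /etc/cron.daily/clean.cache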

iptables

Posted: October 23, 2008 in LINUX, SYSTEM UTILITY

#!/bin/sh
#
# iptables.sh
#
# An example of a simple iptables configuration. This script
# can enable ‘masquerading’ and will open user definable ports.
#
###################################################################
# Begin variable declarations and user configuration options ######
#
# Set the location of iptables (default).
IPTABLES=/sbin/iptables

# Local Interfaces
# This is the WAN interface that is our link to the outside world.
# For pppd and pppoe users.
# WAN_IFACE="ppp0"
WAN_IFACE="eth0"
#
# Local Area Network (LAN) interface.
#LAN_IFACE="eth0"
LAN_IFACE="eth1"

# Our private LAN address(es), for masquerading.
LAN_NET="192.168.1.0/24"

# For static IP, set it here!
#WAN_IP="1.2.3.4"

# Set a list of public server port numbers here…not too many!
# These will be open to the world, so use caution. The example is
# sshd, and HTTP (www). Any services included here should be the
# latest version available from your vendor. Comment out to disable
# all Public services. Do not put any ports to be forwarded here,
# this is for direct access only.
#PUBLIC_PORTS="22 80 443"
PUBLIC_PORTS="22"

# If we want to do port forwarding, this is the host
# that will be forwarded to.
#FORWARD_HOST="192.168.1.3"

# A list of ports that are to be forwarded.
#FORWARD_PORTS="25  80"

# If you get your public IP address via DHCP, set this.
DHCP_SERVER=66.21.184.66

# If you need identd for a mail server, set this.
MAIL_SERVER=

# A list of unwelcome hosts or nets. These will be denied access
# to everything, even our ‘Public’ services. Provide your own list.
#BLACKLIST="11.22.33.44 55.66.77.88"

# A list of “trusted” hosts and/or nets. These will have access to
# ALL protocols, and ALL open ports. Be selective here.
#TRUSTED="1.2.3.4/8  5.6.7.8"

## end user configuration options #################################
###################################################################

# Any and all addresses from anywhere.
ANYWHERE="0/0"

# These modules may need to be loaded:
modprobe ip_conntrack_ftp
modprobe ip_nat_ftp

# Start building chains and rules #################################
#
# Let’s start clean and flush all chains to an empty state.
$IPTABLES -F
$IPTABLES -X

# Set the default policies of the built-in chains. If no match for any
# of the rules below, these will be the defaults that IPTABLES uses.
$IPTABLES -P FORWARD DROP
$IPTABLES -P OUTPUT ACCEPT
$IPTABLES -P INPUT DROP

# Accept localhost/loopback traffic.
$IPTABLES -A INPUT -i lo -j ACCEPT

# Get our dynamic IP now from the Inet interface. WAN_IP will be the
# address we are protecting from outside addresses.
[ -z "$WAN_IP" ] &&\
  WAN_IP=`ifconfig $WAN_IFACE |grep inet |cut -d : -f 2 |cut -d \  -f 1`

# Bail out with error message if no IP available! Default policy is
# already set, so all is not lost here.
[ -z "$WAN_IP" ] && echo "$WAN_IFACE not configured, aborting." && exit 1

WAN_MASK=`ifconfig $WAN_IFACE |grep Mask |cut -d : -f 4`
WAN_NET="$WAN_IP/$WAN_MASK"

## Reserved IPs:
#
# We should never see these private addresses coming in from outside
# to our external interface.
$IPTABLES -A INPUT -i $WAN_IFACE -s 10.0.0.0/8      -j DROP
$IPTABLES -A INPUT -i $WAN_IFACE -s 172.16.0.0/12   -j DROP
$IPTABLES -A INPUT -i $WAN_IFACE -s 192.168.0.0/16  -j DROP
$IPTABLES -A INPUT -i $WAN_IFACE -s 127.0.0.0/8     -j DROP
$IPTABLES -A INPUT -i $WAN_IFACE -s 169.254.0.0/16  -j DROP
$IPTABLES -A INPUT -i $WAN_IFACE -s 224.0.0.0/4     -j DROP
$IPTABLES -A INPUT -i $WAN_IFACE -s 240.0.0.0/5     -j DROP
# Bogus routing
$IPTABLES -A INPUT -s 255.255.255.255 -d $ANYWHERE -j DROP

# Unclean
$IPTABLES -A INPUT -i $WAN_IFACE -m unclean -m limit \
  --limit 15/minute -j LOG --log-prefix "Unclean: "
$IPTABLES -A INPUT -i $WAN_IFACE -m unclean -j DROP

## LAN access and masquerading
#
# Allow connections from our own LAN’s private IP addresses via the LAN
# interface and set up forwarding for masqueraders if we have a LAN_NET
# defined above.
if [ -n “$LAN_NET” ]; then
echo 1 > /proc/sys/net/ipv4/ip_forward
$IPTABLES -A INPUT -i $LAN_IFACE  -j ACCEPT
# $IPTABLES -A INPUT -i $LAN_IFACE -s $LAN_NET -d $LAN_NET  -j ACCEPT 
$IPTABLES -t nat -A POSTROUTING -s $LAN_NET -o $WAN_IFACE -j MASQUERADE
fi

## Blacklist
#
# Get the blacklisted hosts/nets out of the way, before we start opening
# up any services. These will have no access to us at all, and will
# be logged.
for i in $BLACKLIST; do
$IPTABLES -A INPUT -s $i -m limit --limit 5/minute \
   -j LOG --log-prefix "Blacklisted: "
$IPTABLES -A INPUT -s $i -j DROP
done

## Trusted hosts/nets
#
# This is our trusted host list. These have access to everything.
for i in $TRUSTED; do
$IPTABLES -A INPUT -s $i -j ACCEPT
done

# Port Forwarding
#
# Which ports get forwarded to which host. This is one to one
# port mapping (ie 80 -> 80) in this case.
[ -n “$FORWARD_HOST” ] &&\
for i in $FORWARD_PORTS; do
   $IPTABLES -A FORWARD -p tcp -s $ANYWHERE -d $FORWARD_HOST \
     --dport $i -j ACCEPT
   $IPTABLES -t nat -A PREROUTING -p tcp -d $WAN_IP --dport $i \
     -j DNAT --to $FORWARD_HOST:$i
done

## Open, but Restricted Access ports
#
# Allow DHCP server (their port 67) to client (to our port 68) UDP
# traffic from outside source.
[ -n “$DHCP_SERVER” ] &&\
$IPTABLES -A INPUT -p udp -s $DHCP_SERVER --sport 67 \
   -d $ANYWHERE --dport 68 -j ACCEPT

# Allow ‘identd’ (to our TCP port 113) from mail server only.
[ -n “$MAIL_SERVER” ] &&\
$IPTABLES -A INPUT -p tcp -s $MAIL_SERVER  -d $WAN_IP --dport 113 -j ACCEPT

# Open up Public server ports here (available to the world):
for i in $PUBLIC_PORTS; do
$IPTABLES -A INPUT -p tcp -s $ANYWHERE -d $WAN_IP --dport $i -j ACCEPT
done

# So I can check my home POP3 mailbox from work. Also, so I can ssh
# in to home system. Only allow connections from my workplace’s
# various IPs. Everything else is blocked.
$IPTABLES -A INPUT -p tcp -s 255.10.9.8/29 -d $WAN_IP --dport 110 -j ACCEPT

## ICMP (ping)
#
# ICMP rules, allow the bare essential types of ICMP only. Ping
# request is blocked, ie we won’t respond to someone else’s pings,
# but can still ping out.
$IPTABLES -A INPUT  -p icmp  --icmp-type echo-reply \
   -s $ANYWHERE -d $WAN_IP -j ACCEPT
$IPTABLES -A INPUT  -p icmp  --icmp-type destination-unreachable \
   -s $ANYWHERE -d $WAN_IP -j ACCEPT
$IPTABLES -A INPUT  -p icmp  --icmp-type time-exceeded \
   -s $ANYWHERE -d $WAN_IP -j ACCEPT

# Identd Reject
#
# Special rule to reject (with rst) any identd/auth/port 113
# connections. This will speed up some services that ask for this,
# but don’t require it. Be careful, some servers may require this
# one (IRC for instance).
#$IPTABLES -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset

###################################################################
# Build a custom chain here, and set the default to DROP. All
# other traffic not allowed by the rules above, ultimately will
# wind up here, where it is blocked and logged, unless it passes
# our stateful rules for ESTABLISHED and RELATED connections. Let
# connection tracking do most of the worrying! We add the logging
# ability here with the ‘-j LOG’ target. Outgoing traffic is
# allowed as that is the default policy for the ‘output’ chain.
# There are no restrictions placed on that in this script.

# New chain…
$IPTABLES -N DEFAULT
# Use the ‘state’ module to allow only certain connections based
# on their ‘state’.
$IPTABLES -A DEFAULT -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPTABLES -A DEFAULT -m state --state NEW -i ! $WAN_IFACE -j ACCEPT
# Enable logging for anything that gets this far.
$IPTABLES -A DEFAULT -j LOG -m limit --limit 30/minute --log-prefix "Dropping: "
# Now drop it, if it has gotten here.
$IPTABLES -A DEFAULT -j DROP

# This is the ‘bottom line’ so to speak. Everything winds up
# here, where we bounce it to our custom built ‘DEFAULT’ chain
# that we defined just above. This is for both the FORWARD and
# INPUT chains.

$IPTABLES -A FORWARD -j DEFAULT
$IPTABLES -A INPUT   -j DEFAULT

echo "Iptables firewall is up `date`."

##-- eof iptables.sh
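
To try the script, run it as root and then inspect the resulting rule set. How rules are persisted across reboots varies by distribution, so the iptables-save line below is only one possibility (the save path shown is the Red Hat-style location), and the script path is just an example:

# sh /path/to/iptables.sh
# iptables -L -n -v
# iptables-save > /etc/sysconfig/iptables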

CLAMAV STARTUP and UPDATE SCRIPT

Posted: October 21, 2008 in SENDMAIL
  • Create the signature-updater script
  • cat > clamav_update << "EOF"
  • #!/bin/sh
    /usr/local/bin/freshclam --quiet --stdout --datadir /usr/local/share/clamav --log /var/log/clamav/clam-update.log
    EOF
  • make the script executable
    • chmod 700 clamav_update
  • copy the script to /etc/cron.hourly or create an entry in cron
  • execute the script to update the software
  • create a startup script (/etc/rc.d/clamav)
    • #!/bin/sh
      
      FOO_BIN=/usr/sbin/clamd
      test -x $FOO_BIN || exit 5
      
      case "$1" in
          start)
      	echo "Starting `$FOO_BIN -V`"
      	$FOO_BIN
      
      	;;
          stop)
      	echo "Shutting down `$FOO_BIN -V`"
      	killall $FOO_BIN
      
      	;;
          restart)
      	$0 stop
      	$0 start
      
      	;;
          *)
      	echo "Usage: $0 {start|stop|restart}"
      	exit 1
      	;;
      esac
    • Another Clamav startup script

    create an init script for ClamAV (/etc/init.d/clamd):

    #!/bin/bash
    
    TMPDIR=/tmp
    PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/X11R6/bin
    
    case "$1" in
        start)
            echo "Starting ClamAV..."
            if [ -S /tmp/clamd ]; then
              echo "ClamAV is already running!"
            else
              /usr/local/bin/freshclam -d -c 10 --datadir=/usr/local/share/clamav
              /usr/local/sbin/clamd
            fi
            echo "ClamAV is now up and running!"
        ;;
        stop)
            echo "Shutting down ClamAV..."
            array=(`ps ax | grep -iw '/usr/local/bin/freshclam' | grep -iv 'grep' \
    		       | awk '{print $1}' | cut -f1 -d/ | tr '\n' ' '`)
            element_count=${#array[@]}
            index=0
            while [ "$index" -lt "$element_count" ]
            do
              kill -9 ${array[$index]}
              let "index = $index + 1"
            done
            array=(`ps ax | grep -iw '/usr/local/sbin/clamd' | grep -iv 'grep' \
    		       | awk '{print $1}' | cut -f1 -d/ | tr '\n' ' '`)
            element_count=${#array[@]}
            index=0
            while [ "$index" -lt "$element_count" ]
            do
              kill -9 ${array[$index]}
              let "index = $index + 1"
            done
            if [ -S /tmp/clamd ]; then
              rm -f /tmp/clamd
            fi
            echo "ClamAV stopped!"
        ;;
        restart)
            $0 stop  && sleep 3
            $0 start
        ;;
        *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
    esac
    exit 0

    chmod 755 /etc/init.d/clamd

    Now we start ClamAV:

    /etc/init.d/clamd start

    You will now notice some clamd processes (which use the socket /tmp/clamd) and a freshclam process which is responsible for getting the newest virus signature updates. The signature files are located under /usr/local/share/clamav. The command

    /usr/local/bin/freshclam -d -c 10 --datadir=/usr/local/share/clamav

    in our clamd init script makes sure that freshclam checks for new signatures 10 times per day.

    In order to start ClamAV at boot time do the following:

    ln -s /etc/init.d/clamd /etc/rc2.d/S20clamd
    ln -s /etc/init.d/clamd /etc/rc3.d/S20clamd
    ln -s /etc/init.d/clamd /etc/rc4.d/S20clamd
    ln -s /etc/init.d/clamd /etc/rc5.d/S20clamd
    ln -s /etc/init.d/clamd /etc/rc0.d/K20clamd
    ln -s /etc/init.d/clamd /etc/rc1.d/K20clamd
    ln -s /etc/init.d/clamd /etc/rc6.d/K20clamd

      Running clamav under daemontools

      From : http://qmail.jms1.net/clamav/daemontools.shtml

      I have been running ClamAV under daemontools on my server for several months now, with excellent results. This web page explains how I’m doing it.


      Create the clamav user

      The “clamd” and “freshclam” programs are designed to run in the background. For security reasons, we don’t want these processes to run as root, so we will create a new userid called “clamav” under which these processes will run.

      The procedure for doing this is usually specific to your OS and distribution; this example shows how to do it using CentOS Linux.

      # useradd -M -d /nohome clamav

      If you will be using clamav in conjunction with qmail-scanner, you may wish to make the clamav user a member of the qscand group. This is normally done using a command like this:

      # usermod -a -G qscand clamav

      If you will be using clamav in conjunction with simscan, you will need to make the clamav user a member of the simscan group. This is normally done using a command like this:

      # usermod -a -G simscan clamav


      Compile clamav

      The next step, obviously, is to install clamav. I do this using the instructions which are included in the source code package. The process looks like this:

      $ tar xzf clamav-0.91.1.tar.gz
      $ cd clamav-0.91.1
      $ ./configure
      Lots of messages, hopefully no error messages
      $ make
      Lots of messages, hopefully no error messages

      If you are upgrading and already have these services running, you need to shut them down before continuing.
      $ sudo svc -d /service/clamd /service/freshclam
      Password: You will not see the password as you type it

      $ sudo make install
      Lots of messages, hopefully no error messages

      If you had shut these services down before, you should start them up again immediately.
      $ sudo svc -u /service/clamd /service/freshclam


      Configuring clamd

      The normal procedure to scan a file for viruses is to run the program “clamscan” with the filename(s) on the command line. The problem with this is that when clamscan starts, it has to read the virus definitions into memory, a process which can take several seconds on some machines, and which takes a non-zero amount of time for any machine. For a mail server which is scanning every incoming message, and which may be processing hundreds or thousands of messages per hour, this overhead can seriously slow the machine down.

      The “clamd” process runs in the background. When it starts, it reads the virus database into memory one time, and then it listens for commands from clients. The “clamdscan” program is such a client- it passes the filenames from its command line to clamd and has clamd do the actual virus scanning, since it already has the virus definitions in memory. This can make a HUGE difference on a server, whether it’s heavily loaded or not.

      As a test, I just tried “clamscan” and “clamdscan” against the same file- clamscan took 1.582 seconds, while clamdscan only took 0.187 seconds.
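
      If you want to reproduce this comparison on your own machine (the file path below is just a placeholder), the shell's time builtin is enough:

      $ time clamscan /path/to/somefile
      $ time clamdscan /path/to/somefile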

      The “clamd” program is configured using a clamd.conf file, which by default is installed in /usr/local/etc. The changes you need to make are as follows (in each change below, the commented line shows the original contents, and the line that follows shows what it needs to be changed to).

      You must comment out this line or clamd will not run.
      # Comment or remove the line below.
      Example
      #Example

      Log file locking is not necessary under daemontools.
      # By default the log file is locked for writing – the lock protects against
      # running clamd multiple times (if want to run another clamd, please
      # copy the configuration file, change the LogFile variable, and run
      # the daemon with --config-file option).
      # This option disables log file locking.
      # Default: no
      #LogFileUnlock yes
      LogFileUnlock yes

      The multilog program (part of daemontools) takes care of this automatically.
      # Maximum size of the log file.
      # Value of 0 disables the limit.
      # You may use ‘M’ or ‘m’ for megabytes (1M = 1m = 1048576 bytes)
      # and ‘K’ or ‘k’ for kilobytes (1K = 1k = 1024 bytes). To specify the size
      # in bytes just don’t use modifiers.
      # Default: 1M
      #LogFileMaxSize 2M
      LogFileMaxSize 0

      Again, multilog takes care of this automatically.
      # Log time with each message.
      # Default: no
      #LogTime yes
      LogTime no

      I like to see log messages whenever files are found to NOT contain viruses, both because it shows me that clamd is working correctly, and because it can provide a log of which files were scanned when. You may not want or need these logs on your own server- if not, then don’t make this change.
      # Also log clean files. Useful in debugging but drastically increases the
      # log size.
      # Default: no
      #LogClean yes
      LogClean yes

      I like to see EVERYTHING that clamd is doing. You may not want or need to see this much detail on your own server- if not, then don’t make this change.
      # Enable verbose logging.
      # Default: no
      #LogVerbose yes
      LogVerbose yes

      I like to explicitly document where the virus definitions are stored, just to prevent confusion later on. You may not want or need to do this on your own server- if not, then don’t make this change. This location, /usr/local/share/clamav, is where the default installation procedure puts the files. You will need to know this if you’re going to install simscan.
      # Path to the database directory.
      # Default: hardcoded (depends on installation options)
      #DatabaseDirectory /var/lib/clamav
      DatabaseDirectory /usr/local/share/clamav

      This is a GOOD thing to do.
      # Remove stale socket after unclean shutdown.
      # Default: no
      #FixStaleSocket yes
      FixStaleSocket yes

      This option tells clamd what userid it should run as.
      # Run as another user (clamd must be started by root to make this option
      # working).
      # Default: don’t drop privileges
      #User clamav
      User clamav

      This allows clamd to access files using any “group” privileges it may have. Without this, clamd will not try to use any “group” privileges, and will only access files which are readable by the entire world.
      # Initialize supplementary group access (clamd must be started by root).
      # Default: no
      #AllowSupplementaryGroups no
      AllowSupplementaryGroups yes

      THIS IS ABSOLUTELY REQUIRED in order for clamd to run under daemontools!!!
      # Don’t fork into background.
      # Default: no
      #Foreground yes
      Foreground yes
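
      Collected in one place, the non-default lines in /usr/local/etc/clamd.conf end up looking roughly like this after the changes above (only a sketch; everything else keeps its default):

      #Example
      LogFileUnlock yes
      LogFileMaxSize 0
      LogTime no
      LogClean yes
      LogVerbose yes
      DatabaseDirectory /usr/local/share/clamav
      FixStaleSocket yes
      User clamav
      AllowSupplementaryGroups yes
      Foreground yes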


      Configuring freshclam

      The freshclam program checks for updated virus definition files and, if it finds them, downloads and installs them automatically. It then sends a message to clamd, telling it to read the new definitions into memory, and can also call another program that we specify. We will be using this “call another program” capability to inform qmail-scanner and/or simscan to update its version database, so the headers that they add to email messages will have accurate version numbers.

      To configure freshclam, we will edit a file called freshclam.conf, which will be found in the same directory where we found the clamd.conf file (above.) This is a list of the changes we need to make:

      You must comment out this line or freshclam will not run.
      # Comment or remove the line below.
      Example
      #Example

      This must match the DatabaseDirectory line in the clamd.conf file.
      # Path to the database directory.
      # WARNING: It must match clamd.conf’s directive!
      # Default: hardcoded (depends on installation options)
      DatabaseDirectory /var/lib/clamav
      DatabaseDirectory /usr/local/share/clamav

      This tells freshclam what userid it should run as. I normally un-comment this line, just to make it obvious that it runs as the clamav user.
      # By default when started freshclam drops privileges and switches to the
      # “clamav” user. This directive allows you to change the database owner.
      # Default: clamav (may depend on installation options)
      #DatabaseOwner clamav
      DatabaseOwner clamav

      This tells freshclam to send the notification to clamd whenever new virus definitions are downloaded. THIS IS REQUIRED for clamd to start using the new virus definitions as soon as they are available, otherwise clamd will only check the files once every 30 minutes to see if they have been updated. This directive actually points to the clamd.conf file which configured the running clamd process- freshclam reads the “LocalSocket” (or “TCPSocket“) line in this file in order to contact the clamd process.
      # Send the RELOAD command to clamd.
      # Default: no
      #NotifyClamd /path/to/clamd.conf
      NotifyClamd /usr/local/etc/clamd.conf

      This tells freshclam to run an external program whenever new virus definitions are downloaded. We will be using this to update the qmail-scanner and simscan version databases.
      # Run command after successful database update.
      # Default: disabled
      #OnUpdateExecute command
      OnUpdateExecute /usr/local/sbin/freshclam-good

      This tells freshclam to run an external program whenever it tries to download new virus definitions and encounters an error. I use this to email myself a notification, so that I can check on things and make sure any problems are taken care of.
      # Run command when database update process fails.
      # Default: disabled
      #OnErrorExecute command
      OnErrorExecute /usr/local/sbin/freshclam-bad

      THIS IS ABSOLUTELY REQUIRED in order for freshclam to run under daemontools!!!
      # Don’t fork into background.
      # Default: no
      #Foreground yes
      Foreground yes
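
      Put together, the freshclam.conf changes above amount to roughly the following lines (again only a sketch; everything else stays at its default):

      #Example
      DatabaseDirectory /usr/local/share/clamav
      DatabaseOwner clamav
      NotifyClamd /usr/local/etc/clamd.conf
      OnUpdateExecute /usr/local/sbin/freshclam-good
      OnErrorExecute /usr/local/sbin/freshclam-bad
      Foreground yes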

      As you can see, whenever freshclam downloads new virus definitions, or whenever it runs into a problem with the process, it calls a script whose name you specify in the freshclam.conf file. I use two scripts, which I call “freshclam-good” and “freshclam-bad”, for this purpose.

      Below are examples of what my own scripts look like; however, you should really write your own scripts, because your needs may not be the same as mine (in fact, I'm pretty sure they are not the same as mine).

      My “freshclam-good” script looks something like this:

      #!/bin/sh
      #
      # freshclam-good
      #
      # if you want to be notified whenever the virus
      # definitions are updated, add some code here to
      # send yourself an email or whatever.

      # update qmail-scanner and simscan version files.
      if [ -e /var/qmail/bin/qmail-scanner-queue.pl ]
      then
        /var/qmail/bin/qmail-scanner-queue.pl -z
      fi

      if [ -e /usr/local/sbin/update-simscan ]
      then
        /usr/local/sbin/update-simscan
      fi

      exit 0

      My “freshclam-bad” script looks something like this (the email addresses have been changed, of course):

      #!/bin/sh
      #
      # freshclam-bad
      #
      # if you want to be notified whenever there is a
      # problem updating the virus definitions, add some
      # code here to send yourself an email or whatever.

      # email notification to phone
      PATH="/usr/bin:/bin:/var/qmail/bin"

      cat <<EOF | qmail-inject
      From: System <postmaster@domain.xyz>
      To: Phone <1234567890@cell.carrier.xyz>
      Subject: freshclam error

      The freshclam program has encountered an error.
      EOF

      exit 0

      And as you can tell from the sample freshclam.conf file, these scripts are in the /usr/local/sbin directory. They are owned by root and have permissions “0755“.

      The “update-simscan” program you see mentioned is available on my simscan page. It’s basically a setuid wrapper which runs the “simscanmk -g” command as root, and is necessary because the scripts you see here will be running as the “clamav” user, which doesn’t have write permission to the /var/qmail/control/simversions.cdb file.


      Building the service directories

      We will be setting up two different services- one for clamd and one for freshclam.

      I keep the daemontools services on my servers organized with the physical service directories all under /var/service.

      # cd /var/service
      # mkdir -m 1755 clamd
      # mkdir -m 0755 clamd/log
      # cd clamd
      # wget -O run http://qmail.jms1.net/clamav/service-clamd-run

      # chmod 755 run
      # cd log
      # wget -O run http://qmail.jms1.net/scripts/service-any-log-run

      # chmod 755 run

      # cd /var/service
      # mkdir -m 1755 freshclam
      # mkdir -m 0755 freshclam/log
      # cd freshclam
      # wget -O run http://qmail.jms1.net/clamav/service-freshclam-run

      # chmod 755 run
      # cd log
      # wget -O run http://qmail.jms1.net/scripts/service-any-log-run

      # chmod 755 run

      Here are the download links for the run scripts.

      File: service-clamd-run
      Size: 902 bytes
      MD5: 33d53c07d4b156c09ba8ede35af92e3c
      SHA-1: 681945dc77b94800e2a3fae4de6b629dfba4bf98
      RIPEMD-160: 56f1c050a684d399485729ca90500e567fb1bf0c
      PGP Signature: service-clamd-run.asc
      File: service-freshclam-run
      Size: 921 bytes
      MD5: 40c089b19883b6d7a9fe60c374c9998b
      SHA-1: b96bb58b046cee6af28de929a732a587671d4b98
      RIPEMD-160: b6c80e24ad189ffed09bdf8b7d0f29bfd0af7f2a
      PGP Signature: service-freshclam-run.asc

      Starting the services

      Starting the services is just like starting any other daemontools services- simply create a symbolic link from the “/service” directory to wherever the physical service directory is.

      # cd /service
      # ln -s /var/service/clamd .
      # ln -s /var/service/freshclam .

      Wait a few seconds, and then use svstat to make sure the services are running correctly. You should see up-times of two or more seconds.

      # svstat /service/clamd /service/freshclam
      /service/clamd: up (pid 7172) 7 seconds
      /service/freshclam: up (pid 7190) 7 seconds
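
      From here the services are controlled like any other daemontools service with svc; for example:

      # svc -t /service/clamd        # send TERM so supervise restarts clamd (e.g. after editing clamd.conf)
      # svc -d /service/freshclam    # take freshclam down
      # svc -u /service/freshclam    # bring it back up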

      Yahoo! Messenger for Unix/Linux

      Send instant messages to your Windows and Unix friends!

      System Requirements

      Linux: Yahoo! Messenger runs on the Intel chipset and has been tested on RedHat 6.2, 7.2, 8, and 9, and on Debian Woody.

      FreeBSD: Yahoo! Messenger has been tested on FreeBSD 4.5.

      These packages require X Windows, GTK 1.2 or greater, openssl 0.9.6 or greater and gdk-pixbuf 0.8.0 or greater

      Upgrading your Client

      If you are upgrading your client from a previous version of Yahoo! Messenger, please remove the older version first before installing this version.

      In addition, due to changes in the base libraries, please rename the preferences file in the directory $HOME/.ymessenger from .ymessenger/preferences to .ymessenger/preferences.old.
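
      In a shell, that rename is simply:

      $ mv $HOME/.ymessenger/preferences $HOME/.ymessenger/preferences.old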

      The new client will automatically configure your setup. You will not lose any offline or archived messages in this process.

      Other notes

      This client uses an un-GNOMEified version of GtkHTML 0.8, which is under the LGPL. Download the source.

      Note: You can use the md5sum utility to verify the correctness of the downloaded file. The checksums are provided with each file.
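
      For example, for the Debian package mentioned below, the check would look like this (compare the printed value with the checksum published next to the download):

      $ md5sum ymessenger_1.0.4_1_i386.deb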

      Installation Instructions

      RedHat Linux

      1. Save the appropriate file to your machine:
        RedHat 6.x
        RedHat 7.x
        RedHat 8.0
        RedHat 9
      2. Log in as root and type: rpm -i <filename> with the appropriate filename depending on your version to install the application.
      3. Run /usr/bin/ymessenger from X Window to launch the application.

      Debian Linux

      1. Save the file to your machine.
      2. Log in as root and type: dpkg -i ymessenger_1.0.4_1_i386.deb to install the application.
      3. Run /usr/bin/ymessenger from X Window to launch the application.

      FreeBSD Installation

      1. Save the file to your machine.
      2. Log in as root and type: pkg_add fbsd4.ymessenger.tgz to install the application.
      3. Run /usr/bin/ymessenger from X Window to launch the application.

      Download

      http://pager.yahoo.com/ar/unix.php

      Tested with REDHAT 9 and FEDORA 1/2/4/5.

      Download a single file using wget

      $ wget http://www.cyberciti.biz/here/lsst.tar.gz
      $ wget ftp://ftp.freebsd.org/pub/sys.tar.gz

      Download multiple files on command line using wget

      $ wget http://www.cyberciti.biz/download/lsst.tar.gz ftp://ftp.freebsd.org/pub/sys.tar.gz ftp://ftp.redhat.com/pub/xyz-1rc-i386.rpm

      OR

      i) Create a variable that holds all the URLs and later use a BASH for loop to download all the files:
      $ URLS="http://www.cyberciti.biz/download/lsst.tar.gz ftp://ftp.freebsd.org/pub/sys.tar.gz ftp://ftp.redhat.com/pub/xyz-1rc-i386.rpm http://xyz.com/abc.iso"
      ii) Use the for loop as follows:
      $ for u in $URLS; do wget $u; done
      iii) However, a better way is to put all the URLs in a text file and use the -i option of wget to download all the files:

      (a) Create a text file using vi
      $ vi /tmp/download.txt
      Add the list of URLs:
      http://www.cyberciti.biz/download/lsst.tar.gz
      ftp://ftp.freebsd.org/pub/sys.tar.gz
      ftp://ftp.redhat.com/pub/xyz-1rc-i386.rpm
      http://xyz.com/abc.iso
      (b) Run wget as follows:
      $ wget -i /tmp/download.txt
      (c) Force wget to resume a download
      You can use the -c option of wget. This is useful when you want to finish a download started by a previous instance of wget after the network connection was lost. In such a case, add the -c option as follows:
      $ wget -c http://www.cyberciti.biz/download/lsst.tar.gz
      $ wget -c -i /tmp/download.txt
      Please note that not all FTP/HTTP servers support the download resume feature.

      Force wget to download all files in background, and log the activity in a file:

      $ wget -cb -o /tmp/download.log -i /tmp/download.txt

      OR

      $ nohup wget -c -o /tmp/download.log -i /tmp/download.txt &

      nohup runs the given COMMAND (in this example wget) with hangup signals ignored, so that the command can continue running in the background after you log out.

      Limit the download speed to a given number of bytes/kilobytes per second.

      This is useful when you download a large file, such as an ISO image. Recently one of our admins started to download a SuSE Linux DVD on a production server for evaluation purposes. Soon wget started to eat up all the bandwidth. No need to predict the end result of such a disaster.
      $ wget -c -o /tmp/susedvd.log --limit-rate=50k ftp://ftp.novell.com/pub/suse/dvd1.iso

      Use the m suffix for megabytes (--limit-rate=1m). The above command limits the retrieval rate to 50KB/s. It is also possible to specify a disk quota for automatic retrievals to avoid a disk DoS attack. The following command will be aborted when the quota (100MB+) is exceeded.
      $ wget -cb -o /tmp/download.log -i /tmp/download.txt --quota=100m

      F) Use an HTTP username/password on an HTTP server:
      $ wget --http-user=foo --http-password=bar http://cyberciti.biz/vivek/csits.tar.gz

      G) Download all mp3 or pdf files from a remote FTP server:
      Generally you can use shell special characters, aka wildcards, such as *, ?, and [] to specify selection criteria for files. The same can be used with FTP servers while downloading files.
      $ wget ftp://somedom.com/pub/downloads/*.pdf
      OR
      $ wget -g on ftp://somedom.com/pub/downloads/*.pdf

      H) Use aget when you need multithreaded HTTP downloads:
      aget fetches HTTP URLs in a manner similar to wget, but segments the retrieval into multiple parts to increase download speed. It can be many times as fast as wget in some circumstances (it is just like FlashGet under MS Windows, but with a CLI):
      $ aget -n=5 http://download.soft.com/soft1.tar.gz
      The above command will download soft1.tar.gz in 5 segments.

      Command to resume file download with wget

      After reading the man page, I found the -c option. It will continue getting a partially downloaded file. This is useful when you want to finish a download started by a previous instance of wget, or by another program.
      $ wget -c http://ftp.ussg.iu.edu/linux/ubuntu-releases/5.10/ubuntu-5.10-install-i386.is

      Here is a quick tip: if you wish to perform an unattended download of a large file, such as a Linux DVD ISO, use wget as follows:

      wget -bqc http://path.com/url.iso

      Where,

      => -b : Go to background immediately after startup. If no output file is specified via the -o option, output is redirected to wget-log.

      => -q : Turn off wget's output, i.e. save disk space.

      => -c : Resume broken download i.e. continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program.

      This tip will save you time while downloading large ISO images from the internet.

      You can also use the nohup command to keep a command running after you exit from the shell prompt:
      $ nohup wget http://domain.com/dvd.iso &