Intrusion Detection: Network Security beyond the Firewall
(Publisher: John Wiley & Sons, Inc.)
Author(s): Terry Escamilla
ISBN: 0471290009
Publication Date: 11/01/98



The chief advantages of the statistical anomaly approach include the following:

  Well-understood statistical techniques can be used, provided that the underlying assumptions about the data are valid.
  The set of variables that track behavior does not require a significant amount of memory storage.
  Statistical techniques also provide ways to deal with time. Moving averages, smoothing techniques, weighting, and interval multipliers all offer ways to refine the accuracy of what the system detects.
  Simple thresholds of behaviors, such as failed logins, are easily understood by operators.
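To make the moving-average idea concrete, here is a small hypothetical sketch of one such refinement: an exponentially weighted moving average of a single metric (failed logins per interval), with an alert when an observation deviates too far from the baseline. The smoothing factor, deviation threshold, and sample data are illustrative assumptions, not values taken from any particular product.

```python
# Hypothetical sketch: flag anomalies in a single per-user metric
# (failed logins per interval) with an exponentially weighted moving
# average. Alpha and the threshold are illustrative, not from any IDS.

def make_detector(alpha=0.3, threshold=3.0):
    state = {"avg": None, "dev": 1.0}

    def observe(count):
        """Return True if `count` deviates too far from the baseline."""
        if state["avg"] is None:           # first observation seeds the baseline
            state["avg"] = float(count)
            return False
        deviation = abs(count - state["avg"])
        anomalous = deviation > threshold * state["dev"]
        # Smooth both the running average and the mean deviation,
        # weighting recent behavior more heavily than old behavior.
        state["avg"] = alpha * count + (1 - alpha) * state["avg"]
        state["dev"] = alpha * deviation + (1 - alpha) * state["dev"]
        return anomalous

    return observe

observe = make_detector()
counts = [2, 1, 3, 2, 2, 25]               # a sudden burst of failed logins
flags = [observe(c) for c in counts]       # only the burst is flagged
```

Note that the weighting factor determines how quickly the baseline adapts: a larger alpha tracks recent behavior closely, while a smaller alpha preserves long-term history.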

Concerns about statistical anomaly approaches include the following:

  The underlying assumptions about the data may not be statistically sound.
  Combining values from different variables also might be statistically incorrect.
  Establishing the baseline is often a challenge. How do you know what is normal for all of the users, networks, applications, and other entities at your site?
  Not all users exhibit consistent behavior. Some employees may log in at different times each day, execute different commands somewhat randomly, and access resources in unpredictable ways. Experienced users are the usual example of highly variable behavior.
  A hacker who knows that intrusions are detected based on statistical behavior can evade detection by avoiding the activities that are measured and choosing an alternative attack instead.
  An attacker who uses multiple accounts can spread abusive behavior among the accounts without exceeding thresholds.
  Statistical measures make no provision for the order of events. A pattern-matching engine can detect a race condition attack, but a statistical engine cannot.
  Understanding when intrusive behavior begins to be averaged out over time is not easy. Alternating days of heavy and light use of a resource tend to be averaged out over time. Therefore, more complex statistical techniques may be required if users' behaviors vary widely, but complicated statistical models make the results harder to interpret.
  Setting thresholds for indicating intrusive events requires experience. How do you know when someone has read too many files?
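The multiple-accounts concern above can be illustrated with a small hypothetical sketch: five failed logins against each of four accounts stay under a per-account threshold, but grouping the same events by source address exposes the aggregate abuse. The threshold values and event data are invented for illustration.

```python
# Hypothetical sketch: per-account thresholds miss abuse spread over
# several accounts, but grouping by source address exposes it.
from collections import Counter

PER_ACCOUNT_LIMIT = 10                     # illustrative thresholds
PER_SOURCE_LIMIT = 10

# Twenty failed logins from one address, five against each account.
events = [("192.0.2.7", f"user{i}") for i in range(4) for _ in range(5)]

by_account = Counter(account for _, account in events)
by_source = Counter(source for source, _ in events)

account_alerts = [a for a, n in by_account.items() if n > PER_ACCOUNT_LIMIT]
source_alerts = [s for s, n in by_source.items() if n > PER_SOURCE_LIMIT]
```

Per-account counting raises no alerts here; only the per-source view reveals the pattern, which is why a detector benefits from aggregating the same events along more than one dimension.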

Statistical approaches have been applied to pattern-matching problems, too. Successful projects have been developed for fingerprint systems, robotics, manufacturing, and voice recognition systems. Additional research is ongoing to find the best fit for statistical techniques in intrusion detection.

Do you need both types of IDS engines? At this time, anomaly detection and pattern matching appear to be complementary: each catches intrusions that the other misses, so you benefit more from having both tools than from a single type. Most current research IDS projects rely on both statistical techniques and pattern-matching tests to catch intruders. See www.csl.sri.com/emerald or www.csl.sri.com/nides for examples.

Real Time or Interval Based

Vulnerability scanners, which look for weaknesses in your environment, normally are run on an interval basis. The idea is to occasionally inspect your network and systems for weaknesses. The problem with interval scanning is that you might discover a problem only after an intruder has damaged your data.

You also can run anomaly detectors and pattern matchers in batch or interval mode, although the more useful approach is to run a real-time version of these monitors. Running a product in real time naturally has performance and resource-consumption consequences. A real-time IDS that exhibits adaptive behavior is not commercially available yet but should appear in the future. The idea is to monitor a subset of the total range of events and increase the number of events you want to monitor only when something interesting happens on the system. The challenge is to define the minimum initial set of events to monitor and to know when to start logging other events. Picking the wrong initial set of events might cause you to miss some intrusions.
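A minimal sketch of the adaptive idea, assuming invented event names and a simple escalation trigger, might look like the following:

```python
# Hypothetical sketch of adaptive monitoring: watch a small initial
# set of events, and widen the set only when a trigger event occurs.
# Event names and the trigger are invented for illustration.
BASE_EVENTS = {"login_failure", "su_attempt"}
EXPANDED_EVENTS = BASE_EVENTS | {"file_open", "exec", "net_connect"}

class AdaptiveMonitor:
    def __init__(self):
        self.watched = set(BASE_EVENTS)

    def handle(self, event):
        """Log the event if it is being watched; escalate on a trigger."""
        if event not in self.watched:
            return None                    # ignored while monitoring is narrow
        if event == "su_attempt":          # something interesting happened
            self.watched = set(EXPANDED_EVENTS)
        return event

monitor = AdaptiveMonitor()
logged = [monitor.handle(e)
          for e in ("file_open", "login_failure", "su_attempt", "file_open")]
```

The first file access is ignored, but the same event is logged after the su attempt widens the watched set, which is exactly the risk the text describes: events occurring before escalation are lost.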

As it turns out, both real-time and interval scanning IDSs are needed. When the risk of an event is low, checking for problems on an interval basis is recommended. When the threat is high or the consequences are serious, watching for intrusions in real time is required. Which events are monitored in real time, as opposed to scanned for occasionally, should be configurable by the customer. In this way, you can decide what is important in your environment.

Data Source

The two main categories of information that an IDS examines are network data and system data. Network traffic is usually obtained by activating a network adapter in promiscuous mode. Most network IDS vendors recommend that you dedicate a system on the network to sniffing and analyzing traffic. Note that this data source comes for free. That is, you do not need to turn on any special auditing or logging features in your products to capture network traffic.
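Once frames have been captured from the promiscuous-mode adapter, the analyzer must decode them. The following hypothetical sketch parses the source, destination, and protocol fields from a raw IPv4 header; the hand-built header stands in for bytes a sniffer would capture, since real capture requires privileged access to the adapter.

```python
# Hypothetical sketch: decoding a captured IPv4 header. Real capture
# needs a promiscuous-mode adapter (raw socket or libpcap); the
# hand-built header below stands in for captured bytes.
import socket
import struct

def parse_ipv4(packet):
    """Return (source, destination, protocol) from a raw IPv4 header."""
    protocol = packet[9]                      # e.g. 6 = TCP, 17 = UDP
    source = socket.inet_ntoa(packet[12:16])
    destination = socket.inet_ntoa(packet[16:20])
    return source, destination, protocol

# A minimal 20-byte IPv4 header: TCP, 10.0.0.1 -> 10.0.0.2.
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 20, 0, 0, 64, 6, 0,
                     socket.inet_aton("10.0.0.1"),
                     socket.inet_aton("10.0.0.2"))
src, dst, proto = parse_ipv4(header)
```

A network IDS layers its statistical counters and attack signatures on top of decoded fields like these.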

The traditional source of system information is the audit trails emitted by the kernel. With a few exceptions, audit logs contain sufficient detail to track activities to individual users. RACF and similar mainframe security products have long been praised for their auditing capabilities. Even though auditing can generate significant amounts of information, no other facility is available for gathering a comprehensive picture of system activities.
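A hypothetical sketch of the first step in audit-trail analysis is reducing records to per-user activity, so that actions can be tracked to individual users. The record layout shown is an assumed format, not any particular kernel's audit format.

```python
# Hypothetical sketch: reducing audit-trail records to per-user
# activity. The (timestamp, user, event, target) layout is an assumed
# format, not any real kernel's audit record structure.
from collections import defaultdict

AUDIT_LINES = [
    "09:14:02 alice open /etc/passwd",
    "09:14:05 bob   exec /bin/sh",
    "09:14:09 alice open /etc/shadow",
]

def summarize(lines):
    """Map each user to the events recorded against that user."""
    per_user = defaultdict(list)
    for line in lines:
        _, user, event, target = line.split(None, 3)
        per_user[user].append((event, target))
    return dict(per_user)

summary = summarize(AUDIT_LINES)
```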

IDS vendors are under increasing pressure to examine application logs. Indeed, one can argue that all the interesting subject-object interactions will be occurring at the application level rather than the OS level in the future. If you look at the /etc/passwd file or NT Registry in the near term and find no significant user accounts, it’s probably because all of the users and groups are defined in application-specific data stores. Databases are the obvious example, but other examples are not hard to find. Because the majority of interesting transactions will be occurring at the application level, IDS vendors will need to apply their expertise to developing additional models and patterns specifically for individual applications. Because many IDS tools are being deployed with firewalls, you can expect future IDS releases to analyze firewall logs for intrusions. Axent’s ITA already supports analysis for a number of firewall logs. The Computer Misuse Detection System from SAIC also monitors log files from Raptor, Interlock, and CyberShield.
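As a hypothetical sketch of the kind of firewall-log analysis described above, the following flags a source address whose denied connections touch several distinct ports, a simple port-scan indicator. The log format and the three-port threshold are illustrative assumptions, not the format of any vendor's firewall.

```python
# Hypothetical sketch: flag a source whose denied connections touch
# several distinct ports, a simple port-scan indicator. The log
# format and the three-port threshold are illustrative assumptions.
from collections import defaultdict

FIREWALL_LOG = [
    "deny tcp 192.0.2.7 -> 10.0.0.1:23",
    "permit tcp 10.0.0.9 -> 10.0.0.1:80",
    "deny tcp 192.0.2.7 -> 10.0.0.1:21",
    "deny tcp 192.0.2.7 -> 10.0.0.1:25",
]

ports_probed = defaultdict(set)
for line in FIREWALL_LOG:
    fields = line.split()
    if fields[0] == "deny":
        source, destination = fields[2], fields[4]
        ports_probed[source].add(destination.rsplit(":", 1)[1])

suspects = [src for src, ports in ports_probed.items() if len(ports) >= 3]
```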

