CYBRARY STUDY GUIDE
Prepare Yourself to Pass the CISSP Exam
Review terms and concepts such as security management, configuration policy, information classification, access control and accountability with this study guide. With more than 200 topics covered, this study guide will help ensure that you're fully prepared to earn your CISSP certification. Need to go deeper? Take our CISSP training course.
After completing our CISSP training course, you might feel that you’re ready to take on the CISSP exam. However, in order to ensure that you’ll be as successful as possible, you’ll need to complement your training with our free CISSP exam study guide. The six-hour exam is pretty challenging, and it requires full comprehension of all the learning modules presented in the course curriculum. Luckily, our study guide provides more than enough information for you to review.
Defining Security Management
Security management concepts and principles are key components of a security policy and its supporting procedures. They encompass important documents such as policies, practices, and guidelines that establish the framework for a secure information system. These documents describe the organization's information assets and lay out its security procedures. The main objectives and goals of security are defined within the CIA Triad, the three main security principles: confidentiality, integrity and availability. Security controls must address one or more of these three principles:
Confidentiality: The guarding of sensitive information through rigorous measures to prevent exposure of the information to, or sharing of it with, unauthorized persons. Once the information is intentionally or unintentionally released, confidentiality is lost. Breaches of confidentiality include stealing files, shoulder surfing, and screen recording.
Integrity: The practice of maintaining data consistency and ensuring the information hasn't been altered or compromised in any way. This applies to data in active use, data in storage, and data in transit.
Availability: Ensuring that data can be accessed at any time by authorized persons.
Risk is a preexisting hazard that may cause damage or loss. It does not imply certainty that the hazard will materialize, only that it has the potential to occur. Risk management is applied to ascertain the presence of risk, measure the potential threat, and determine how to manage it. By taking assertive steps to prevent or manage a known risk, the resulting damage can be contained. Risk identification is the process of determining the existing threats within an organization. An organization can be impacted by various types of risk, and it is the responsibility of that organization to know what they are. Disaster preparedness is a key aspect of this awareness, whether the disasters are of natural origin or caused by accidents. Disasters of natural origin refer to…
…storms, fires, tornadoes, earthquakes or any events that occur in the environment. This also pertains to incidents that cause damage to a business and its internal environment, for example an electrical malfunction that results in a fire, water damage from clogged sewage pipes, or power outages. Another form of risk is equipment failure from encountering an internet virus or hackers who target the system. Situations like these can be catastrophic to a business if internal data is compromised or lost, or if service is disrupted in a way that impedes or, in severe cases, shuts down the organization's day-to-day processes. Internal risks are threats that exist within the organization's personnel, arising from detrimental actions or behavior of an employee. Internal data that's considered highly sensitive is a prime target for theft. An employee can steal and replicate that data for monetary gain, or illegally download software or programs. These incidents create legal liability issues for the business such as lawsuits and loss of profit.
Risk management is the process of establishing what threats pose a risk to an organization, identifying the vulnerabilities that determine the threat level, and deciding how the risk should be addressed. In many cases this entails establishing a risk management team to initiate the process of discovering threats and vulnerabilities, defining the organization's assets, and developing a response plan to manage risks. Risk management is composed of three key concepts…
…threat, either natural or man-made that could cause damage to an organization; vulnerability, the existing weaknesses from flawed policies or loopholes that could be taken advantage of by a malicious entity; and controls, which are methods to improve defense against known threats, prevent disaster, and correct system weaknesses to reduce vulnerabilities.
Identifying Threats and Vulnerabilities in Risk Management
Identifying threats and vulnerabilities is important because threats can arise from several sources, such as the natural environment or human influence. They can also be the result of technical errors or unintended consequences of an employee's actions. Threats and vulnerabilities can be ascertained by creating a list of items as shown in Table 1.1:
Table 1.1 exemplifies the relationship among threats, vulnerabilities, and risk. For example, an employee inadvertently releases a password or file that exposes confidential information to would-be hackers.
Assessing Asset Value
Pinpointing the assets that are essential to an organization and making certain those assets are preserved is another critical component of risk management. Identifying threats without a clear understanding of assets leads to an imbalanced approach to dealing with potential threats. Identifying an organization's assets aids in the development of countermeasures so that, in the face of a threat, those assets are fully protected. If an organization has limited finances, it's crucial to…
…assess the value of its assets through a quantitative assessment (the company’s financials) or qualitative assessment, its value to the overall strategy.
Quantitative Assessment: Risk Analysis Process
A quantitative assessment measures the monetary value of all the elements combined in a risk assessment, as well as the assets and threats of a risk analysis. Components of the risk analysis process include: property value, residual damage, rate of occurrence (threats), safeguard effectiveness, safeguard expenses, and unknowns and probabilities. All of these components need to be quantified, and the assessment should account for all incurred costs, such as lost hours, repairs, and damage to or replacement of equipment. The three steps of quantitative assessment are as follows:
Single loss expectancy (SLE): the estimated loss from a single occurrence of a threat against an asset. The SLE is calculated as the asset value multiplied by its exposure factor, the percentage of damage that a realized threat would have on the asset.
Annual rate of occurrence (ARO): the estimated number of times an incident may occur within a year.
Annual loss expectancy (ALE): combines the SLE and ARO to determine the magnitude of the risk. The ALE is calculated as the SLE multiplied by the ARO.
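The three steps above can be sketched as simple arithmetic. The dollar figures below are hypothetical examples, not values from the guide:

```python
# Quantitative risk assessment: SLE, ARO, and ALE.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = asset value x exposure factor (fraction of the asset lost per incident)."""
    return asset_value * exposure_factor

def annual_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x annual rate of occurrence."""
    return sle * aro

# Hypothetical: a $100,000 server, 25% damage per incident,
# with an incident expected once every two years (ARO = 0.5).
sle = single_loss_expectancy(100_000, 0.25)   # 25000.0
ale = annual_loss_expectancy(sle, 0.5)        # 12500.0
print(sle, ale)
```

The ALE is the figure most often compared against the annual cost of a safeguard when deciding how to handle the risk.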
Qualitative Assessment: Risk Analysis Process
Assessing monetary value to the elements of a risk analysis can be challenging. Incorporating qualitative components into the process will help evaluate the quantitative component. A qualitative assessment rates the degree of threats and sensitivity of confidential assets then places them into categories based on their rating. The following ratings can be applied:
Low: When loss of a member (part) would be a minor setback that could be tolerated for a brief time.
Medium: When loss of a member (part) could result in some degree of damage to the organization or a moderate expense to fix the damage.
High: When loss of a member (part) would result in severe compromise of trust between the organization and its clients/employees, and could result in legal action or loss of profit and earnings.
How to Handle Risk
Risk can be handled in four ways:
implementing defense mechanisms to prevent or reduce the risk, referred to as risk reduction; securing insurance to transfer part or all of the potential cost of a loss to a third party, or risk transference; assuming the potential cost and losses if the risk comes to fruition, referred to as risk acceptance; and ignoring a risk, which is termed risk rejection. These tactics can be used individually or in combination to address and handle risks.
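The choice between reduction and the other options is often framed as a cost/benefit calculation: a safeguard is worthwhile when the ALE it eliminates exceeds its annual cost. A minimal sketch, with hypothetical figures:

```python
# Cost/benefit test for risk reduction.
# All dollar amounts are illustrative, not from the guide.

def safeguard_value(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """Annual value of a safeguard:
    (ALE before safeguard) - (ALE after safeguard) - (annual safeguard cost).
    A positive result suggests risk reduction pays for itself; a negative
    result suggests acceptance or transference may be more economical."""
    return ale_before - ale_after - annual_cost

# Hypothetical: $12,500 ALE reduced to $2,500 by a $4,000/year control.
print(safeguard_value(12_500, 2_500, 4_000))  # 6000
```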
Security Policies and Procedures
Security policies are official, authorized documents created in compliance with the security philosophy of an organization. These documents provide an overview of the organization's assets and the degree of protection each asset or group of assets has. Well-crafted, coherent security policies outline a set of rules that users in the organization should follow when connecting to network resources. They contain a listing of permissible and impermissible activities while clarifying and outlining security responsibilities. Security policies aren't mandates for how the policies should be applied, but rather guidelines for administrators to reference when developing a plan of action and the resulting reactions. Security policies can be formulated to satisfy the following needs:
Advisory policies – in place so that all employees are aware of the consequences of their actions.
Informative policies – designed to inform and educate employees about company procedures.
Regulatory policies – make certain the organization is compliant with local, state and federal laws.
Because security policies contain comprehensive information, it's helpful to break the policy down into sub-documents, with each one covering a specific topic. These documents would include:
User Policy – reviews the appropriate use of various items such as confidential filing systems or Internet access.
Configuration Policy – clarifies what applications are to be configured on the network and assigns a particular build for each system. This is significant for making certain all network systems follow a set organization, reducing troubleshooting time.
Patch Management Policy – explains the testing and distribution of updates being applied. Once a patch is approved, it's incorporated into the standard build, validating that all new systems comply with the approved patch level.
Infrastructure Policy – explicitly defines how the system is to be managed and maintained, and who is charged with that responsibility. It also addresses: service quality; checking and controlling the systems; processing and consolidating logs; managing change; the network addressing scheme; and the naming standard.
User Account Policy – clearly defines which users have clearance and what permissions that entails. Make certain this follows the PC configuration policy. This can be managed by limiting user permissions.
Other policies – depending on the organization, there will be other policies covering miscellaneous items such as encryption, password requirements, remote access, emailing sensitive information, and others.
The Objectives of a Security Policy
The first objective is to guide the technical team in its choice of equipment, although the policy itself will likely not specify which equipment or designs are to be used. Once a decision is made or the equipment is in place, the second objective is to advise the team in arranging the equipment. The policy might state that the team will be tasked with blocking certain websites, but won't specifically list which sites. The third objective defines the responsibilities of users and administrators. This aids in the process of evaluating the proficiency of security measures. The fourth objective breaks down the ensuing ramifications of policy violation. The last objective would be to…
…define and clarify the reactions to network threats. It’s important to also clarify the process for escalating items that might go unrecognized on the network. Each member of the team should be prepared to employ an action plan in the event of a breach to the network.
Standards, Guidelines and Procedures
Standards, guidelines, and procedures comprise three elements of policy implementation. They present the specifics of the policy: how it should be applied and what standards and procedures should be practiced. Standards are itemized requirements that must be met in order to satisfy a policy but do not define the method of implementation. Guidelines are instructions or suggestions for how policies or procedures should be implemented. They usually allow some flexibility for situations that require adjustments within policy boundaries. Procedures are the most…
…definitive security documents. They outline specific step-by-step applications of secure configurations to meet policy requirements. These documents incorporate certain technologies and devices used and the wording may change in tandem with equipment upgrades or changes.
Roles and Responsibilities in Security Policy
The delineation of roles and responsibilities is critical to the implementation of security policies. It also strengthens an organization's security protocol through division of responsibilities. The table below illustrates the various roles and responsibilities of those involved in the implementation of a security policy:
Information Classification in Security
Organizations classify their data based on various factors, and not all data holds the same value. Depending on the user and their designated role, the data will have greater or lesser value. Information such as formulas or product-development plans is of high value, and having that data compromised in any way could be catastrophic for an enterprise; thus such data receives a much higher classification. Classifying information is intended to support system integrity, confidentiality and restricted access in order to preserve value and minimize risk. It also rates an organization's information assets. Each level of classification should carry specific requirements and procedures. Two common examples of a classification system are…
…military information classification and commercial information classification. Both models have introduced terminology for qualifying information.
Security Training and Awareness
Human error is often the weak link in security due to a lack of awareness on the employee's part about the consequences of improper actions and how those actions ultimately impact the system as a whole. Security awareness is a critical component in reducing the incidence of security breaches or breakdowns, but it is commonly overlooked. Security awareness programs are an effective strategy for raising employees' awareness of their role in security, ensuring a comprehensive understanding of security policies and of the ramifications their actions have on overall security. Employees should be educated on a policy's basic components and their benefits to the organization. It's important for security awareness training to be developed and disseminated differently within the organization. There are three distinct groups to which security awareness training should be administered: end users, data handlers and management. The three doctrines of Security Awareness:
Awareness, Training and Education. Security awareness is the collective awareness among company members about the critical need and value of security and security controls. When personnel demonstrate a coherent understanding of security, they’re deemed “security aware.” Training programs are designed to instruct users on specific skills and are conducted in a classroom environment or can be implemented through individualized training. Security training is conducted over a short period.
Access Control and Accountability
Access control and accountability are imperative to understanding computer and network security. These two methods are used to secure property, data and networks from intended or unintended corruption. These two concepts, in combination with auditing, are used to sustain the Confidentiality, Integrity, and Availability (CIA) security concept and to control access to networks and equipment using Remote Authentication Dial-In User Service (RADIUS) and Terminal Access Controller Access Control System (TACACS/TACACS+). Access control is the process of…
…administering user permissions for resources and services. Mechanisms such as a smart card or a user name and password are examples of how access control is implemented. Other examples include routers, virtual private networks (VPNs), remote access servers (RAS), and wireless access points (WAPs). Access control can also be applied to a document or shared service via a network operating system (NOS). The three main constructs of access control are:
Discretionary Access Control (DAC)
Mandatory Access Control (MAC)
Role-Based Access Control (RBAC)
Discretionary Access Control (DAC)
Discretionary Access Control (DAC) allows the owner of a system or device to manage access control at his or her own discretion. By contrast, with Mandatory Access Control (MAC) and Role-Based Access Control (RBAC), access to information is governed by an established set of rules. DAC is the most common set-up for access control and includes setting permissions on files, folders, and shared information. Access control is implemented in every mode or forum in which information is found in your organization. This consists of electronic data as well as hard-copy files, photographs, displays, and communication packets. With DAC, an access control list (ACL) is the file that lists the users who have authorized access to resources and the type of access they are permitted. In the case of discretionary authentication, an ACL can become extensive as individual users are added, which may complicate system management. There are several risks associated with DAC:
Software might be used or updated by unauthorized personnel. Classified information could be exposed accidentally or deliberately by users who don’t have authorized access. Auditing of files might be problematic.
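An ACL under DAC can be pictured as a per-resource table of users and the actions each is permitted, granted at the owner's discretion. A minimal sketch, with hypothetical resource and user names:

```python
# Minimal DAC sketch: each resource carries an access control list (ACL)
# mapping users to the actions the owner has granted them.

acl = {
    # resource        user ->  permitted actions (owner-granted)
    "payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}},
}

def is_allowed(resource: str, user: str, action: str) -> bool:
    """Grant access only if the resource's ACL lists the user with that action."""
    return action in acl.get(resource, {}).get(user, set())

print(is_allowed("payroll.xlsx", "bob", "read"))   # True
print(is_allowed("payroll.xlsx", "bob", "write"))  # False
```

As the text notes, each new user adds another ACL entry per resource, which is why discretionary ACLs tend to grow hard to manage.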
Mandatory Access Control (MAC)
Mandatory Access Control (MAC) is typically included in the operating system being used; MAC controls are present in most Windows, Unix, and Linux operating systems. Mandatory access control technically performs as multilevel security. Users are placed into categories and tagged with security labels to show what level of clearance they're operating with, permitting licensed or cleared persons a certain level of access. Mandatory controls are usually fixed codes, individually assigned to each object or resource. MAC techniques reduce the need for…
…ongoing maintenance of ACLs because authorization decisions are built into the hierarchy. When establishing a MAC policy, clients are not authorized to change permissions or rights associated with objects.
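The label-comparison idea can be sketched as follows; the level names and the simple "read at or below your clearance" rule are illustrative, not a full MAC model:

```python
# Sketch of mandatory access control: subjects and objects carry fixed
# security labels, and the system (not the owner) decides access by
# comparing levels. Level names are illustrative.

LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def mac_read_allowed(subject_clearance: str, object_label: str) -> bool:
    """A subject may read an object only at or below its own clearance."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

print(mac_read_allowed("secret", "confidential"))  # True
print(mac_read_allowed("confidential", "secret"))  # False
```

Because the decision is built into the label hierarchy, no per-user ACL entries need to be maintained, which is the maintenance saving the text describes.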
Role-Based Access Control (RBAC)
Role-Based Access Control (RBAC) integrates mandatory and discretionary formats with advanced applications. Access to information is based on the specific role a user is assigned within the organization. For instance, employees who work in product development would be permitted access to confidential information while someone in another department would be denied access. RBAC is a level up from DAC and MAC, allowing administrators to enforce security policies that reflect the structure of an organization. RBAC classifies users by common functions and access needs. When structuring a system of user groups…
…you can program the access levels for various resources within the system. Access to different resources is assigned to user groups as roles. When roles are correlated to a resource, the system verifies the requester's role and then determines whether access is granted. A role-based system provides a more comprehensive form of systematic controls. It requires more development and a higher investment, but has wider flexibility in comparison to MAC.
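The role-to-permission indirection can be sketched as two lookups: users map to roles, and roles map to permissions. The role, user, and permission names below are hypothetical:

```python
# Sketch of role-based access control: permissions attach to roles that
# mirror job functions, and users inherit them through role assignment.

role_permissions = {
    "product-dev": {"read:specs", "write:specs"},
    "sales":       {"read:pricelist"},
}
user_roles = {"carol": {"product-dev"}, "dave": {"sales"}}

def rbac_allowed(user: str, permission: str) -> bool:
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

print(rbac_allowed("carol", "write:specs"))  # True
print(rbac_allowed("dave", "write:specs"))   # False
```

Changing what a job function may do means editing one role, not every user, which is why RBAC scales with organizational structure.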
Identification and Authentication: Access Control Systems
Identification and authentication are integral to an access control system. Identification is carried out by the user or service supplying the system with a user ID. Authentication is the process of verifying the identity of the user or service requesting access. Both the sender and recipient can verify the other as the legitimate party with whom they're trying to communicate. If parties wishing to communicate or exchange information cannot verify each other, trust is compromised, deterring further activity. Authentication can be based on three factors:
A code only the user knows, such as a PIN.
An object the user has been granted possession of, such as a smart card.
A biometric characteristic, such as a fingerprint.
Using Passwords as an Authentication Method
Of the three authentication methods, passwords are the most widely used, though they're the easiest to decode, as many users choose passwords that are easy to remember, such as an anniversary or birthday. There are cases where passwords are only used once (a "one-time" password). These provide the highest level of security because a new password is required for each new log-on. The preference among users tends to be "static passwords", a password that's created and saved, then used for subsequent log-ons. The longer the password remains unchanged, the higher the probability of it being compromised. It's common practice for security administrators to…
…require frequent changing of passwords whether it’s every two weeks, quarterly, or after a certain number of log-ins, the frequency of these required changes depends on the level of confidentiality of the data the passwords protect.
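A password-aging check that scales the maximum age with data sensitivity can be sketched as below. The specific thresholds are hypothetical, chosen only to illustrate the idea that more sensitive data warrants more frequent changes:

```python
# Sketch of a password-aging check: force a change after a maximum age
# that depends on data sensitivity. Thresholds are illustrative.

from datetime import date, timedelta

MAX_AGE = {
    "low":    timedelta(days=180),
    "medium": timedelta(days=90),
    "high":   timedelta(days=14),
}

def password_expired(last_changed: date, sensitivity: str, today: date) -> bool:
    """True if the password is older than the policy allows for this tier."""
    return today - last_changed > MAX_AGE[sensitivity]

print(password_expired(date(2024, 1, 1), "high", date(2024, 2, 1)))  # True
print(password_expired(date(2024, 1, 1), "low",  date(2024, 2, 1)))  # False
```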
Using Tokens as an Authentication Method
Tokens are the second type of authentication method – an object or device the user holds. Tokens are considered to have a higher degree of security, being more difficult to access or falsify. These can be credit card-size memory cards or smart cards, or keypads that are used to supply static and dynamic passwords. Smart cards are assigned a personal identification number (PIN), allowing user control over the token. These devices are often used to supply…
…one-time passwords because of their added security. Because they can be used in conjunction with a password, they provide a layer of multi-factor security.
Biometrics as an Authentication Method
Biometrics is the third type of authentication method. This form of ID verification is applied through a behavioral or physiological characteristic unique to an individual user. Of the three types, biometrics provides the purest means of authentication, but is also the most expensive. Biometric systems work by recording physical information that is highly precise and unique to the individual user. Biometrics include factors such as voice recognition, facial scans, iris / retinal scans, fingerprint / palm scans, or any other scans that rely on a physical characteristic of an individual. Biometric systems offer…
…various degrees of accuracy, measured by the percentage of Type I and Type II errors they produce. Type I errors, the false rejection rate, measure the number of authorized users that were denied access by the system. Type II errors, the false acceptance rate, measure the number of unauthorized users that were mistakenly permitted access. The point at which the Type I and Type II error rates are equal is known as the Crossover Error Rate (CER) and indicates the overall accuracy of the system. The lower the CER, the better.
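Finding the CER can be sketched as locating the sensitivity setting where the false rejection and false acceptance rates meet. The measurement tuples below are invented for illustration, not real biometric data:

```python
# Sketch: estimate the crossover error rate (CER) from FRR (Type I) and
# FAR (Type II) measurements taken at several sensitivity settings.

def crossover_error_rate(settings):
    """Return (sensitivity, rate) at the point where |FRR - FAR| is smallest."""
    best = min(settings, key=lambda s: abs(s[1] - s[2]))
    sensitivity, frr, far = best
    return sensitivity, (frr + far) / 2

# (sensitivity setting, FRR %, FAR %) — illustrative measurements.
# Raising sensitivity increases false rejections and lowers false accepts.
data = [(1, 10.0, 0.5), (2, 5.0, 2.0), (3, 3.0, 3.0), (4, 1.0, 8.0)]
print(crossover_error_rate(data))  # (3, 3.0)
```

Here the curves cross at setting 3 with a CER of 3%; a system whose curves crossed at 1% would be the more accurate of the two.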
Access Control Types
The three types of access control offer different levels of protection, and each can be configured based on the needs of the organization. This affords the security administrator extensive discretionary control over security mechanisms and reinforces the organization's security as a whole.
The main objective of security control mechanisms is to prevent, identify, or recover from problems. Preventive controls are used to impede breaches of security or invasive attacks on the system; detective controls scan the system for harmful agents; and corrective controls repair the systems after damaging attacks.
To apply these measures, controls can be administrative, technical, or physical. Administrative controls are the rules and procedures implemented by the organization; security awareness training, password administration, and background checks are preventive administrative controls. Technical controls are instrumental in protecting the IT infrastructure. These include the restriction of access to systems through user authentication, network segmentation, and protecting information through encryption and antivirus programs. Physical controls protect organizations against theft, loss, and unauthorized access. These include alarm systems, gated entries, locks, guard dogs, video monitoring systems, the securing of computer equipment, and management of cabling infrastructure.
Preventive and detective control types can be integrated with administrative, technical and physical applications to create the following pairings:
Preventive administrative controls deal with the functions that support the access control objectives and include structural policies and procedures, background checks, contractual agreements, employee termination procedures, user security training, behavior standards, and user-permission procedures for obtaining access to networks and data.
Preventive technical controls implement technology to execute access control policies. These controls can be built into an operating system, administered through software applications, or supported by hardware/software. Some common preventive technical controls are anti-malware software; authentication methods such as biometrics, tokens, and passwords; and hardened user interfaces.
Preventive physical controls restrict physical access to areas with systems holding confidential information or areas used for storage of backup data files. Often a protective border is in place to block unwanted access to the restricted area.
Detective administrative controls can be implemented to prevent future security violations or to detect existing violations. The mechanisms implemented by this control pairing are mandatory user training, least privilege, separation of duties, policies / procedures, and random and regular audits.
Detective technical controls use technical processes to detect breaches of security policy. These processes include intrusion detection systems and automatically generated violation reports built from audit trail information. These reports can reveal variations from normal operating procedures and known patterns of unauthorized access. Audit records should be protected at the highest level of security in the system because of their critical informational value.
Detective physical controls rely on sensors or cameras to detect a violation. These devices still rely on human discernment to determine whether the violation is authentic.
Multi Factor Authentication
Multifactor authentication is the combination of two or more authentication factors, such as using a token along with a PIN. A user must have…
…the token device and the PIN for a successful log-on. Multifactor authentication boosts system security.
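The token-plus-PIN combination can be sketched as two independent checks that must both succeed. The PIN, one-time code, and hashing scheme below are illustrative, not a real token protocol:

```python
# Sketch of two-factor verification: a token-generated one-time code
# (something you have) plus a PIN (something you know).

import hashlib
import hmac

def verify_two_factor(pin: str, otp: str,
                      stored_pin_hash: str, expected_otp: str) -> bool:
    """Both factors must check out; constant-time comparisons avoid leaks."""
    pin_ok = hmac.compare_digest(
        hashlib.sha256(pin.encode()).hexdigest(), stored_pin_hash)
    otp_ok = hmac.compare_digest(otp, expected_otp)
    return pin_ok and otp_ok

stored = hashlib.sha256(b"4921").hexdigest()      # hypothetical enrolled PIN
print(verify_two_factor("4921", "831204", stored, "831204"))  # True
print(verify_two_factor("4921", "000000", stored, "831204"))  # False
```

An attacker who steals the PIN alone, or the token alone, still fails the combined check, which is the security gain the text describes.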
Single Sign-On (SSO)
Most information systems are constructed from multiple systems, resources and data stores that users will require access to. Each necessitates access control, which entails the ongoing renewal of passwords; users will often reuse the same password rather than create numerous codes, or write passwords down to keep track of them, either of which can compromise security. Single sign-on handles this problem by requiring users to authenticate once to a single authority, then allowing them access to all other protected systems and resources without having to re-authenticate. This type of set-up has a couple of disadvantages:
…it's expensive; and if an unauthorized individual is able to gain access, that person then has open access to all system resources. Passwords should never be stored or sent in clear text. Systems such as Kerberos, NetSP, and KryptoKnight can be used to implement single sign-on.
Kerberos is the preferred single sign-on authentication system in many medium and large information systems. It's designed to centralize the authentication information for any user or entity requesting access to resources. Kerberos uses symmetric key cryptography and assigns tickets to the entity that requests access. When a user attempts to access the local system, a local agent or process dispatches an authentication attempt to the Kerberos ticket-granting server (TGS). The TGS delivers the encrypted credentials for the user attempting to access the system. The local agent decodes the credentials using the user-supplied password. If the right password has been supplied, the user is validated and assigned authentication tickets, allowing access to other Kerberos-authenticated services. A user is also assigned a set of cipher keys that can be used to encrypt all data sessions. All services and users in the system are given tickets from the TGS and are authenticated by an authentication server (AS). This provides a single source of authority to track and authenticate users. Realms can trust one another, which helps ensure the scalability of Kerberos systems. Kerberos is applied by a…
…Key Distribution Center (KDC) holding the data that allows Kerberos clients to authenticate. The information is stored in a database that permits single sign-on. The KDC and the client establish trust relationships using secret key cryptography, and through those relationships the KDC is able to verify which services a host and user can access. Kerberos addresses the sensitivity and integrity of information but not its availability. All the secret keys are held on the TGS, and authentication is performed on the Kerberos TGS and the authentication servers. A client's secret key and session keys are temporarily stored on the client workstation as well, creating a risk of exposure to harmful agents such as malicious code; and because passwords are used to request service from Kerberos, an unauthorized user could try to decode a password to forge access.
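The ticket exchange described above can be sketched loosely: the TGS issues a ticket bound to the user's password-derived secret key, and only a client that knows the password can validate it. This models the flow only; real Kerberos uses authenticated symmetric encryption with timestamps and session keys, not the bare HMAC used here, and the user names and passwords are hypothetical:

```python
# Loose sketch of the Kerberos-style exchange: KDC/TGS holds each user's
# secret key (derived from the password); a ticket issued by the TGS can
# only be validated by a client that knows the password.

import hashlib
import hmac

def secret_key(password: str) -> bytes:
    """Derive a user's secret key from the password (simplified)."""
    return hashlib.sha256(password.encode()).digest()

def tgs_issue_ticket(user: str, stored_key: bytes) -> bytes:
    """TGS side: bind the ticket to the user's secret key."""
    return hmac.new(stored_key, f"ticket:{user}".encode(), "sha256").digest()

def client_validate(user: str, password: str, ticket: bytes) -> bool:
    """Client side: recompute with the password-derived key and compare."""
    expected = hmac.new(secret_key(password),
                        f"ticket:{user}".encode(), "sha256").digest()
    return hmac.compare_digest(ticket, expected)

kdc_db = {"erin": secret_key("correct horse")}   # KDC stores secret keys
ticket = tgs_issue_ticket("erin", kdc_db["erin"])
print(client_validate("erin", "correct horse", ticket))  # True
print(client_validate("erin", "wrong pass", ticket))     # False
```

Note how a wrong password yields the wrong key and the ticket fails to validate, which is the "decode the credentials with the user-supplied password" step in the text.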
Secure European System and Applications in a Multivendor Environment (SESAME)
The Secure European System and Applications in a Multivendor Environment (SESAME) project was formed to counteract weaknesses in the Kerberos system. SESAME uses public key cryptography to distribute secret keys, employs the MD5 and CRC32 one-way functions, and issues two certificates or tickets: one certificate verifies authentication, as in Kerberos, and the other specifies the access privileges assigned to a client. SESAME is similar to…
…Kerberos in that it has inherent weaknesses. SESAME authenticates using only a portion of a message, not the entire message. It is also vulnerable to password guessing.
KryptoKnight and NetSP
KryptoKnight, a system developed by IBM, provides single sign-on, key distribution, and authentication services for computers with widely varying functions and capabilities. KryptoKnight employs a trusted Key Distribution Center (KDC) that logs the secret key of each party. The difference in functionality between Kerberos and KryptoKnight is the user-to-user relationship among the parties and the KDC, instead of the standard server/client relationship. To apply single sign-on, the KDC has a party’s secret key that is a password hash. Introductory communication from…
…the party to the KDC consists of the user's name and a value that is a function of a nonce (a randomly generated, one-time-use authenticator) and the password. The KDC verifies the user and issues a ticket encrypted with the user's secret key. The ticket is then decoded by the user and applied for authentication to access services from other servers on the system. NetSP is based on KryptoKnight and uses a workstation as an authentication server.
Centralized Access Control: Access Control Systems
Access control requirements vary, so access control systems can be just as diverse. Generally, access control systems fall into two categories: centralized access control and decentralized (distributed) access control. Depending on the needs and environment of an organization, one system will be more fitting than the other. A centralized access control system keeps user IDs, rights, and permissions in a database on a central server. Remote Authentication Dial-In User Service (RADIUS), TACACS, and DIAMETER are common centralized access control systems.
Remote Authentication Dial-In User Service (RADIUS) and DIAMETER
Remote Authentication Dial-In User Service (RADIUS) uses a client/server model to provide authentication, authorization, and accounting (AAA) for remote dial-up access while protecting the system from unwarranted access. RADIUS centralizes user administration by keeping all user profiles in one location that remote services can access. To authenticate to a RADIUS server, users enter their credentials, which are sent encrypted in an Access-Request packet to the RADIUS server. The server then accepts or denies the credentials. If the RADIUS server accepts the credentials, it dispatches an Access-Accept packet and the user is authenticated. If it refuses the credentials, it sends an Access-Reject packet. In some instances the RADIUS server challenges the credentials: an Access-Challenge packet is sent, requesting additional information from the user in order to complete authentication. For users connecting through remote dial-up access, RADIUS provides callback security, where the server terminates the connection and re-establishes it by dialing a pre-assigned telephone number to which the user's modem is attached. This offers another layer of security against unwarranted access via dial-up connections. Because RADIUS has been a proven success, an upgraded version called DIAMETER was developed. DIAMETER can be used for all forms of remote connectivity, not just dial-up.
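The accept/reject/challenge decision flow can be modeled as a toy function. The packet-type names (Access-Accept, Access-Reject, Access-Challenge) are real RADIUS terms, but the user store and function below are invented for illustration; a real RADIUS server speaks UDP with encrypted attribute-value pairs, not plain dictionaries.

```python
# Hypothetical user store: password plus whether a one-time password is required.
USERS = {
    "alice": {"password": "s3cret", "needs_otp": True},
    "bob":   {"password": "hunter2", "needs_otp": False},
}

def radius_server(username, password, otp=None):
    user = USERS.get(username)
    if user is None or user["password"] != password:
        return "Access-Reject"          # credentials refused
    if user["needs_otp"] and otp is None:
        return "Access-Challenge"       # request additional information
    return "Access-Accept"              # user authenticated

print(radius_server("bob", "hunter2"))             # Access-Accept
print(radius_server("alice", "s3cret"))            # Access-Challenge
print(radius_server("alice", "s3cret", "123456"))  # Access-Accept
print(radius_server("eve", "guess"))               # Access-Reject
```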
Terminal Access Controller Access Control System
There are three versions of Terminal Access Controller Access Control System (TACACS): TACACS, Extended TACACS (XTACACS), and TACACS+. Each version authenticates users and denies access to those without a verified username and password. TACACS combines the authentication and authorization functions. XTACACS separates the authentication, authorization, and auditing functions, giving the administrator more control over implementation. TACACS+ also separates authentication, authorization, and auditing, and adds support for two-factor authentication. The TACACS authentication process is comparable to RADIUS and offers the same functionality, except that RADIUS is an open standard while TACACS+ is Cisco-proprietary. This has kept TACACS from achieving the same popularity as RADIUS.
Decentralized/Distributed Access Control
A decentralized access control system keeps user IDs, rights, and permissions in different locations on the network. These locations are often spread across different subnets, with the data placed on servers close to the users requesting access, and they typically use linked or associated databases.
Methods Used to Bypass Access Control
Attackers attempt a range of tactics and schemes to bypass or decode access control mechanisms, making access control one of the most vulnerable and targeted security mechanisms.

Password Attacks: Access control on most systems is achieved with a username and password. One weakness is that users are lax about maintaining password security, a habit attackers are well aware of and exploit to seize passwords. Two types of attacks are commonly used: dictionary attacks and brute-force attacks.

Dictionary Attacks: A dictionary attack uses a fixed dictionary file that a program scans to find a match with a user's password. Passwords are typically stored in a hashed format, so most password-cracking programs use a method called comparative analysis: commonly used variations of the words in the dictionary file are hashed, and each resulting hash is compared to the stored password hash. If a match is found, the password is cracked. Passwords that are commonly known, or dictionary-based words, will fall to a dictionary attack quickly.

Brute-Force Attacks: A brute-force attack systematically tries every possible combination of letters, numbers, and symbols in an attempt to discover passwords for user accounts. Modern hardware makes brute-force attacks feasible even against fairly strong passwords; however, length improves a password's resistance, because longer passwords require more time to crack. Still, many passwords of 14 characters or fewer can be cracked within days. One variation of the brute-force attack uses a rainbow table: the hashes of candidate passwords are pre-computed before the attack is launched and stored in a file called the rainbow table. Captured password hashes are then compared against the table's entries, allowing matching passwords to be recovered in a matter of seconds.
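The comparative-analysis idea behind dictionary attacks, and the precompute-then-look-up idea behind rainbow tables, can be sketched as below. This is a simplified illustration: it assumes unsalted SHA-256 hashes, and real rainbow tables use hash chains and reduction functions rather than a plain lookup table.

```python
import hashlib

def hash_password(pw):
    return hashlib.sha256(pw.encode()).hexdigest()

def dictionary_attack(stored_hash, wordlist):
    # "Comparative analysis": hash each candidate and compare to the stored hash.
    for word in wordlist:
        if hash_password(word) == stored_hash:
            return word
    return None

wordlist = ["password", "letmein", "dragon", "sunshine"]
print(dictionary_attack(hash_password("dragon"), wordlist))   # dragon
print(dictionary_attack(hash_password("Xk9#qT2v"), wordlist)) # None

# Rainbow-table variation: precompute every hash once, so each captured
# hash is then cracked with a constant-time lookup instead of a fresh scan.
rainbow = {hash_password(w): w for w in wordlist}
print(rainbow.get(hash_password("letmein")))                  # letmein
```

Salting passwords defeats the precomputation step, since each salt would require its own table.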
Back Door Attacks:
A back door gives an attacker access to a system while bypassing normal access controls. Back doors are sometimes placed in a system deliberately, to allow a programmer to debug and modify code during a test run of the software. Another type of back door can be planted in a system by malicious code, allowing uncontrolled access to systems or services. Software programs that have been used as back doors include Virtual Network Computing (VNC), Back Orifice, NetBus, Sub7 (or SubSeven), PC Anywhere, and Terminal Services. The malicious code can also be concealed in another application, in what's known as a Trojan horse.
Spoofing, Man-in-the-Middle and Replay Attacks
Spoofing: Spoofing alters a packet at the IP or TCP level. The attacker sends a packet bearing the IP address of a known and trusted host to the target host, gaining access as an impostor. The attacker can also masquerade as known services such as Web, FTP, and email.

Man-in-the-Middle Attacks: A man-in-the-middle attack is used to intercept information transmitted between two hosts. The attacker positions itself between the two hosts while remaining invisible to them. This is achieved by altering routing information and DNS values, stealing IP addresses, or poisoning ARP caches to impersonate the two legitimate hosts. A man-in-the-middle position lets an attacker capture logon credentials or sensitive data in transit, and even modify that data before forwarding it to the intended host. To defend against a man-in-the-middle attack, implement DNS protection by restricting access to DNS records and the name-caching system. Replay Attacks:
A replay attack, also known as a playback attack, is similar to a man-in-the-middle attack. In a replay attack, the attacker records the traffic between a client and server, then resends the packets to the server with minor changes to the source IP address and the time stamp on each packet. This lets the attacker rejoin the previous communication session with the server and access data. To protect a system from this type of attack, apply strict sequencing rules and time stamps so that retransmitted packets are not accepted as valid.
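The sequencing-and-timestamp defense can be sketched as a simple guard. This is an illustrative toy, not a real protocol implementation: the field names and the 30-second freshness window are invented for the example.

```python
import time

class ReplayGuard:
    """Accept a packet only if its timestamp is fresh and its sequence
    number has not been seen before."""
    def __init__(self, max_age=30.0):
        self.max_age = max_age
        self.seen = set()

    def accept(self, seq, timestamp):
        if time.time() - timestamp > self.max_age:
            return False        # stale timestamp: likely a replayed capture
        if seq in self.seen:
            return False        # duplicate sequence number: replay
        self.seen.add(seq)
        return True

guard = ReplayGuard()
now = time.time()
print(guard.accept(1, now))        # True  - fresh, unseen
print(guard.accept(1, now))        # False - same packet resent
print(guard.accept(2, now - 300))  # False - recorded five minutes ago
```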
Exploits and Attacks to Gain Control
Denial of Service (DoS) and Distributed Denial of Service (DDoS) Attacks: Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks target and absorb resources to the extent that those resources or services can no longer be used. This is a more surreptitious form of attack, as the ID of an authorized user isn't required. These attacks often abuse the same mechanisms used for network connectivity and host-availability tests. Here are examples of DoS and DDoS attacks:

Smurf: an attack that abuses the Internet Control Message Protocol (ICMP) echo request used by the ping function. The originating (source) site sends an altered, or spoofed, ping packet to the broadcast address of a network (the bounce site), with the target site's address carried as the packet's source. This causes the bounce site to broadcast the request to all of the devices on its local network, and those devices all send echo replies to the target system, which is flooded with the responses.

Buffer Overflow: an attack in which a process is fed more data than it has capacity to handle. If the process isn't equipped to deal with the excess data, it reacts in unexpected ways that an attacker can exploit.

Ping of Death: a version of the buffer overflow attack. It exploits a flaw in ICMP handling by sending an ECHO packet of more than 65,535 octets of data, which can overflow system variables and crash the system.

Teardrop: an attack that targets the reassembly of fragmented packets. The attacker alters the length and fragmentation-offset fields in sequential UDP packets and transmits them to a system. When the system attempts to reassemble the packets from the fragments, the fragments overlap, giving the system contradictory instructions about how the fragments are offset. The end result: the target system crashes.
SYN Flood: a method where the attacker exploits the buffer space used during the three-way Transmission Control Protocol (TCP) handshake that initializes a session. A source host requesting a connection sends a TCP SYN request to the destination host, which responds with a SYN/ACK (synchronize-acknowledgement). Normally the source host then sends a final ACK packet, but in a SYN flood the attacker sends a barrage of SYN requests without ever sending the final ACK. This leaves the target system waiting for responses that never come, eventually causing it to crash or become unusable. TCP Hijacking: In TCP hijacking, the session between a trusted client and a network server is taken over. The attacker substitutes its IP address for that of the trusted client. Once the session is hijacked, the attacker can create a new back-door account or access files and services available to the legitimate host. This type of attack usually happens after a trusted client has connected to the network server. More on Social Engineering, Dumpster Diving and Software Exploitation:
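The resource exhaustion behind a SYN flood can be modeled with a toy backlog of half-open connections. This is a conceptual sketch only; the class, its capacity, and the client tuples are invented, and real TCP stacks use much larger backlogs plus defenses such as SYN cookies.

```python
class SynBacklog:
    """Toy model: each SYN holds a slot until the final ACK completes the
    handshake; SYNs that never get an ACK fill the backlog."""
    def __init__(self, capacity=5):
        self.capacity = capacity
        self.half_open = set()

    def on_syn(self, client):
        if len(self.half_open) >= self.capacity:
            return False              # backlog full: new SYN dropped
        self.half_open.add(client)    # server replies SYN/ACK, waits for ACK
        return True

    def on_ack(self, client):
        self.half_open.discard(client)  # handshake complete, slot freed

backlog = SynBacklog(capacity=5)
for i in range(5):                     # attacker sends SYNs, never ACKs
    backlog.on_syn(("attacker", i))
print(backlog.on_syn(("victim", 0)))   # False - legitimate client refused
```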
Social Engineering: Social engineering is not a computer exploit but an easy tactic for attackers seeking sensitive information that can compromise information systems, and it's challenging to develop an effective defense against it. Social engineering uses social interactions and relationships to obtain information, such as passwords, that grants entry into a protected system. Some examples: an attacker persuades an authorized user to hand over access to a device or takes unwarranted actions such as requesting passwords, or befriends and solicits a colleague for sensitive information. The best defense against social engineering is awareness training and reinforcement of security policies regarding the disclosure of information. Dumpster Diving: Like social engineering, dumpster diving is not an active attack against a system. Here the attacker forages through discarded information, thoroughly rummaging through trash to locate items of value such as credit card statements, password lists, and organization charts. The haul may also include phone numbers and usernames, information that can be used in social engineering attacks. Software Exploitation: Software exploitation is not a concentrated attack executed in a single hit, but a strategic exploitation of weaknesses in the code of a software program. Once vulnerabilities in the operating system are identified, an attacker can use them to infiltrate resources and data.
Monitoring and Intrusion Detection
Monitoring: Monitoring ensures that authenticated users are held accountable for their actions while logged onto a system, and it also tracks unauthorized or abnormal activities and system failures. Accountability is achieved by recording the activities of users and of the system services that form the operating environment and security mechanisms. A log of activities provides a record for troubleshooting analysis and supplies evidentiary material for legal situations. Auditing is the process of going back and reviewing these logs; it is typically built into many operating systems. Audits can be used to measure a system's health and performance: system crashes may indicate defective programs or invasive attempts from an unauthorized source, and the logs leading up to a crash can help determine the cause.

Intrusion Detection System (IDS): An Intrusion Detection System (IDS) is a detective access control system programmed for ongoing monitoring of network activity, tracing scanning and probing activities or other red flags that indicate unauthorized access attempts, in real time. An IDS can also be programmed to scan for potential attacks, follow an attacker's actions, send out alerts warning of a pending attack, scan the system for weak points, and implement mechanisms that prevent unauthorized access. It can also trace system failures and diagnose system performance. Damaging or invasive events detected by an IDS can originate from viruses, malicious code, external connections, trusted internal users engaging in unauthorized activities, and unauthorized access attempts from trusted locations. IDS systems can be split into two general categories:
network-based intrusion detection systems (NIDS) and host-based intrusion detection systems (HIDS), based on where they are deployed. IDSs employ two different mechanisms to detect malicious events: knowledge-based (signature-based) detection and behavior-based detection. The two methods use different tactics to detect intrusions.
Host-Based IDS for Detection
A Host-Based IDS (HIDS) is installed on an individual system, and its function is to protect that system. Similar to a virus scanner in both function and design, HIDSs are more reliable than NIDSs at detecting attacks against individual systems because they can scrutinize events in greater detail than a network-based IDS. HIDSs rely on audit trails and system logs. Audit trails are very reliable for tracking system events and flagging suspicious activity, which can be anything from the modification of permissions to the disabling of certain system security settings. One downside is that HIDSs don't do well at detecting denial-of-service (DoS) attacks, particularly those that consume bandwidth. Also, an HIDS consumes system resources on the computer being monitored, reducing that system's performance.
Network-Based IDS (NIDS) for Detection
A network-based IDS (NIDS) records and evaluates network traffic, examining each packet as it traverses the network. A single NIDS can monitor an expansive network if installed on that network's backbone, or it can safeguard multiple hosts on a single network segment. A NIDS provides an added layer of defense between the firewall and the hosts, though it may have trouble keeping up with heavy traffic volumes or the sheer amount of data flowing through the network; in that case an attack could go undetected. Additionally, NIDSs don't perform well on switched networks, particularly if the switches lack a monitoring port. NIDSs rely on sensors placed at various locations on the network. The sensors are typically lightweight computers dedicated solely to traffic monitoring. This allows the bastion host to be hardened against attack, decreasing the NIDS's vulnerabilities, and allows the NIDS to operate in stealth mode, invisible to the network; an attacker would have to know the exact location and system ID of the NIDS to detect its presence. A NIDS imposes little overhead on the overall performance of the network.
Knowledge-Based and Behavior-Based IDS
Knowledge-Based IDS: A knowledge-based IDS, also known as signature-based, relies on a database of known attack signatures. Knowledge-based systems examine data and try to match it to a pattern in the signature database. If an event matches a signature, the IDS registers that an attack has happened or is happening and responds with an alert, an alarm, or a modification to the firewall configuration. The main weakness of a knowledge-based IDS is that its effectiveness depends on known attack methods: upgraded or altered versions of known attacks often go undetected. A knowledge-based IDS is therefore only as effective as its signature database, so the database must be kept up to date. Behavior-Based IDS:
A behavior-based IDS, also referred to as a statistical, profile-based (anomaly detection), or heuristics-based IDS, models normal activities and events on the system and scans for abnormal activity that could indicate malicious behavior. This allows a behavior-based IDS to catch new and unknown vulnerabilities, attacks, and intrusion methods. Behavior-based IDSs are known to produce false positives, or false alarms, because patterns of normal activity are fluid and can change from day to day. This is their main weakness: if an IDS produces too many false positives, security administrators become less inclined to respond to the red flags.
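The anomaly-detection idea can be sketched with a simple statistical baseline: learn the mean and standard deviation of a metric (the metric, numbers, and threshold below are invented for illustration), then flag observations far from the mean. Real behavior-based IDSs use far richer models, but the false-positive trade-off is the same: a tighter threshold catches more attacks and more ordinary variation alike.

```python
import statistics

def train_baseline(samples):
    # Learn what "normal" looks like, e.g. logins per hour.
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    # Flag observations more than `threshold` standard deviations from the mean.
    return abs(value - mean) > threshold * stdev

normal_logins = [10, 12, 11, 9, 13, 10, 12, 11]
mean, stdev = train_baseline(normal_logins)
print(is_anomalous(11, mean, stdev))   # False - within normal behavior
print(is_anomalous(95, mean, stdev))   # True  - possible intrusion (or a false positive)
```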
Honeypots for Intrusion Detection
A honeypot serves a similar function to an IDS in that it also detects intrusion attempts. Honeypots are also used as decoys that lure attackers away from vulnerable systems by appearing to be valuable systems themselves. Honeypots usually contain counterfeit files, services, and databases to entice and entrap an intruder, which makes them a useful complement to IDS monitoring.
The Three Types of Penetration Tests
Penetration testing, also referred to as ethical hacking, tests a system's defenses against attack and performs a detailed analysis of the system's weaknesses. A penetration test can also be used to determine how the system reacts to an attack and what information can be collected from the system. The three types of penetration tests: Full Knowledge Test: The penetration testing team has the most extensive knowledge possible about the system to be tested. This test replicates an attack that might be mounted by an informed employee of the organization. Partial Knowledge Test: The team has only knowledge that might be relevant to a specific type of attack. Zero Knowledge Test: The team starts with no information about the system and must gather it as part of the testing process, then simulates an attack by an outsider with no prior knowledge of the information system. Penetration testing is often described as being either white box or black box testing. With white box testing, the penetration testing team has access to the internal system code, allowing them to go in with more knowledge and target weaknesses in the known code more specifically. With black box testing, the team does not have access to internal code; this simulates an attack by someone with no knowledge of the internal system, resulting in tactics and attacks that are much more general and involve enumerating what may be inside the system.
Alternative Methods for Testing Security
Application Security: This type of testing is for organizations that expose core business functionality through web-based applications. Application security testing examines and qualifies the controls over the application and its process flow. Denial-of-Service (DoS): Examines a network's vulnerability to DoS attacks. War Dialing: A systematic method of calling a range of telephone numbers to identify modems, remote-access devices, and maintenance connections that could provide a path into an organization's network. Wireless Network: Examines the controls over an organization's wireless access policies and identifies improperly configured or rogue devices that create additional security exposure. Social Engineering:
A testing method that uses social-interaction techniques (typically against the organization's employees, suppliers, and contractors) to draw out information and penetrate the organization's systems.
OSI Reference Model: Network Security
The Open Systems Interconnection (OSI) reference model was developed by the International Organization for Standardization (ISO) in 1984 to define network communications and describe the flow of data on a network. The OSI reference model is made up of seven layers, starting with the physical connection and ending with the application. The seven OSI layers are:
Physical Layer, Data-Link Layer, Network Layer, Transport Layer, Session Layer, Presentation Layer, and Application Layer. Each layer performs a specific function and can have one or more sublayers. The upper three layers (Application, Presentation, and Session) define functions focused on the application; the lower four layers (Transport, Network, Data-Link, and Physical) define functions focused on end-to-end transport and delivery of data from the source to the destination.
OSI Layer Definitions
The Physical Layer corresponds to the physical elements of the transmission medium, such as signaling specifications, cable types, and interfaces. It also characterizes voltage levels, physical data rates, and transmission distances.

The Data-Link Layer handles the transport of data across one particular link or medium. Data at this layer is packaged into frames. Data-link specifications cover physical addresses, frame sequencing, flow control, and physical topology. At this level frames are transformed into bits when transmitted across the medium and reassembled into frames when received from the medium. Bridges and switches operate at the data-link layer. In IEEE 802, the data-link layer is divided into two sublayers: the upper sublayer, Logical Link Control (LLC), which administers the communication between devices, and the lower sublayer, Media Access Control (MAC), which manages protocol access to the physical medium.

The Network Layer handles data routing: data is packaged into packets at this layer, and the layer establishes the mechanisms that make routing possible, including logical addressing and how routes are determined. It also defines how packets are fragmented into smaller packets to support media with smaller maximum transmission unit (MTU) sizes.

The Transport Layer handles several functions, including the selection of protocols. This layer provides dependable, transparent transport of data segments from the upper layers. Its most significant functions are error recovery (retransmission) and flow control, which limits congestion by sending data at a rate the network can support, depending on the choice of protocol. Multiplexing of incoming data for different flows to applications on the same host is also performed here. Messages are labeled with a sequence number at the transmitting end; at the receiving end the segments are reassembled, inspected for errors, and acknowledged, and the incoming message is reordered when segments are received out of order.

The Session Layer establishes how to start, manage, and end communication sessions between applications. Communication sessions consist of the service requests and responses that flow between applications on different devices. This includes the control and management of multiple bidirectional messages, so that an application can be alerted if only a portion of a series of messages is complete, supplying the presentation layer with a complete view of an incoming data stream.

The Presentation Layer ensures that data transmitted by an application on the source system can be interpreted by the application layer on the destination, implementing data representation through a range of coding and conversion functions: character-representation formats such as text encoding, picture encoding, encryption, and voice/video codecs are defined at this layer.

The Application Layer provides network communication services to the end user or operating system. It communicates with software by defining communication resources, evaluating network availability, and distributing information services. It also provides synchronization between peer applications that run on separate systems.
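The Transport Layer's sequencing and reassembly can be sketched as follows. This is a toy illustration: real TCP sequence numbers count bytes rather than segments, and the payloads here are invented.

```python
def reassemble(segments):
    # Sort arriving segments by sequence number, then join their payloads.
    ordered = sorted(segments, key=lambda seg: seg[0])
    return "".join(payload for _, payload in ordered)

# Segments arrive out of order; sequence numbers restore the original stream.
arrived = [(2, "lo, "), (1, "Hel"), (4, "rld!"), (3, "wo")]
print(reassemble(arrived))   # Hello, world!
```

A gap in the sequence numbers is what tells the receiver a segment was lost and must be retransmitted.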
Inter-OSI Layer Interaction
When a host receives a data transmission from another host on the network, the data is processed at each OSI layer and passed to the next higher layer in order to present a useful data transmission to the end user. Part of this processing involves the headers and trailers created by the sending host's software or hardware, which are placed before or after the data handed down from each higher layer. Each layer adds its own header (and, in some cases, a trailer), usually to each data packet in the flow. The following is a breakdown of the processing at each OSI layer:
The Physical Layer (Layer 1) ensures bit synchronization and places the received binary pattern into a buffer. After decoding the incoming signal into a bit stream, it alerts the Data-Link Layer (Layer 2) that a frame has been received. Layer 1 acts as a delivery system for a stream of bits across the medium.

The Data-Link Layer (Layer 2) examines the frame check sequence (FCS) in the trailer to determine whether errors occurred in transmission; if an error is discovered, the frame is discarded. The host then inspects the data-link address to determine whether it is the intended recipient and whether to process the data further. If the data is addressed to the host, the data between the Layer 2 header and trailer is passed up to the Network Layer (Layer 3) software.

The Network Layer (Layer 3) checks the destination address. If the address is the current host's address, processing continues and the data after the Layer 3 header is passed to the Transport Layer (Layer 4) software. In this way Layer 3 provides end-to-end delivery.

The Transport Layer (Layer 4) performs error recovery. To detect errors, identifying pieces of data are encoded in the Layer 4 header along with acknowledgment information. After error recovery and reordering of the incoming data, it is sent to the Session Layer (Layer 5).

The Session Layer (Layer 5) verifies that a series of messages is complete. The Layer 5 header includes fields indicating the sequence of the packet in the data stream and the position of the data packet in the flow. After this layer has confirmed that all flows are complete, it passes the data after the Layer 5 header to the Presentation Layer (Layer 6).

The Presentation Layer (Layer 6) defines and controls the data format of the transmission, converting the data into the proper format specified in the Layer 6 header. Typically this header is included only in initialization flows, not with every data packet that's sent.
After the data formats have been converted, the data following the Layer 6 header is forwarded to the Application Layer (Layer 7). The Application Layer (Layer 7) processes the final header and inspects the end-user data. The header signals agreement on operating standards between the applications on the two hosts, indicating the values for all parameters, so it is usually transmitted and received only at application initialization time. Processing is not only handed between adjacent OSI layers on one computer: each layer must also interact with its corresponding layer on the other computer. To communicate with the same layer on another computer, each layer defines additional data bits in the header (and, in some instances, the trailer) generated by the sending host's software or hardware. The receiving host reads the headers and trailers generated by the corresponding layers of the sending host in order to determine how each layer should process the data and interact within that structure.
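The encapsulation and de-encapsulation described above can be sketched with strings standing in for headers. This is a conceptual toy: real layers add binary headers (and a Layer 2 trailer), not bracketed labels.

```python
LAYERS = ["Application", "Presentation", "Session", "Transport",
          "Network", "Data-Link", "Physical"]

def encapsulate(data):
    for layer in LAYERS:              # Layer 7's header is added first...
        data = "[" + layer + "]" + data
    return data                       # ...so Layer 1's ends up outermost

def decapsulate(pdu):
    for layer in reversed(LAYERS):    # the receiver strips Layer 1 first
        prefix = "[" + layer + "]"
        if not pdu.startswith(prefix):
            raise ValueError("malformed " + layer + " header")
        pdu = pdu[len(prefix):]       # pass what remains to the next layer up
    return pdu

wire = encapsulate("GET /index.html")
print(wire.startswith("[Physical]"))  # True - outermost header
print(decapsulate(wire))              # GET /index.html
```

Each layer on the receiver only reads its own peer's header, which is why the two hosts' corresponding layers can cooperate without understanding the other layers' formats.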
The Four TCP/IP Layers
The TCP/IP Application Layer: Provides communication services to applications and acts as a go-between for the network and the application. It also handles presentation and the administration of communication sessions. It encompasses the Application, Presentation, and Session Layers of the OSI reference model. Examples: HTTP, POP3, and SNMP.

The TCP/IP Transport Layer: Addresses multiple functions, including the selection of protocols, error recovery, and flow control. This layer can support retransmission (error recovery) and may use flow control to curtail congestion by sending data at a rate the network can handle, or, depending on the choice of protocol, it may bypass these functions. Multiplexing of received data for various flows to applications on the same host is also performed, as is reordering of the incoming data stream when packets arrive out of order. This corresponds to the Transport Layer of the OSI reference model. Examples include TCP and UDP, which are called Transport Layer, or Layer 4, protocols: TCP provides connection-oriented service, while UDP offers connectionless service.

The TCP/IP Internetwork Layer: Also known as the Internet Layer, this defines end-to-end delivery of packets and the logical addressing needed to accomplish it. It also defines the process of routing and how routes are learned, and how packets are fragmented into smaller packets to work with media that have smaller maximum transmission unit sizes. It corresponds to the Network Layer of the OSI reference model. However, while the OSI network-layer protocols offer both connection-oriented service (Connection-Mode Network Service (CMNS), X.25) and Connectionless Network Service (CLNS), IP provides only connectionless network service. Most routing protocols are network-layer protocols assigned an IP protocol number.
Border Gateway Protocol (BGP) is different in that it runs over TCP, using a TCP port number, and Intermediate System-to-Intermediate System (IS-IS) runs directly over the data-link layer. The TCP/IP Network Interface Layer:
This handles the physical characteristics of the transmission medium, and transferring data across one particular link or medium. It specifies delivery across an individual link as well as the physical layer specifications. It encompasses the Data Link Layer and Physical Layer of the OSI reference model. Examples include: Ethernet and Frame Relay.
Types of TCP/IP Protocols
Transmission Control Protocol/Internet Protocol (TCP/IP) comprises a suite of protocols originally developed for the U.S. Department of Defense (DoD) in the 1970s to support the construction of the Internet. The core protocols are: Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Internet Protocol (IP), Address Resolution Protocol (ARP), Reverse Address Resolution Protocol (RARP), and Internet Control Message Protocol (ICMP).

Transmission Control Protocol (TCP), the most commonly used of these protocols, accounts for the bulk of the traffic on a TCP/IP network. TCP is a connection-oriented protocol that offers full-duplex communication. TCP safeguards data delivery across any IP link by implementing controls such as connection startup, flow control, slow start, and acknowledgments. Received TCP packets are reordered to match the original transmission sequence numbers. Because missing or corrupted packets must be retransmitted, TCP carries significant overhead.

User Datagram Protocol (UDP), unlike TCP, is connectionless: it does not establish a virtual circuit or contact the destination before delivering the data. UDP also provides no error correction and no sequencing of packet segments, as it is not concerned with the order in which segments arrive at the destination; for this reason it is labeled an "unreliable protocol." UDP therefore has much less overhead, making it an optimal choice for applications, such as streaming video or audio, that are not seriously harmed by occasional packet loss. Both TCP and UDP use port numbers to interact with the upper layers. Internet Protocol (IP):
…is a commonly used network layer protocol that implements unique IP addresses to identify or define each distinct host or end system on a network. This IP address allows communication between hosts on an IP network. Each IP packet contains the source IP address (the sender) and the destination IP address (the recipient). Intermediary devices between the sender and recipient make routing determinations based on the packet’s destination IP address. Address Resolution Protocol (ARP) maps a destination IP address to the physical hardware address, called the MAC address, of the recipient host. An ARP request containing the recipient IP address, the sender’s IP address and MAC address is sent to each host within a subnet when the destination MAC address is not listed in the ARP table. When a device receives the ARP request and owns that IP address, it transmits the corresponding MAC address to the sender of the ARP request. Reverse Address Resolution Protocol (RARP) maps a MAC address to an IP address. When the MAC address is known but the IP address is not, the host sends out a RARP packet that includes its MAC address and a request for the IP address that should be assigned to that MAC address. A RARP server then responds with the correct IP address. Internet Control Message Protocol (ICMP) is a management and messaging protocol for the Internet Protocol (IP). It reports errors and supplies additional information relevant to IP packet processing, such as alerting hosts of an alternate route to a destination if there are issues with an existing route, and it can help find the source of the problem with that route. The ‘PING’ command is a utility that uses an ICMP echo request to test connectivity between two points on the network.
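The source and destination addresses described above travel in the fixed 20-byte IPv4 header of every packet. A sketch of decoding that header with Python’s struct module (field layout per the IPv4 specification; the sample addresses in any usage are invented for illustration):

```python
import struct
import socket

def parse_ipv4_header(raw: bytes) -> dict:
    # The first 20 bytes of an IPv4 packet: version/IHL, TOS, total length,
    # identification, flags/fragment offset, TTL, protocol, checksum,
    # source IP, destination IP -- all in network (big-endian) byte order.
    ver_ihl, tos, total_len, ident, frag, ttl, proto, csum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,              # 1 = ICMP, 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),   # sender
        "dst": socket.inet_ntoa(dst),   # recipient; routers key on this
    }
```

Intermediary routers make their forwarding decision from the `dst` field, exactly as the paragraph above describes.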
Transfer and Application Layer Protocols
Telnet is an application layer protocol. Users that run a Telnet client program are able to connect to a remote Telnet system. Telnet uses TCP destination port 23 and is widely used to control routers and switches. The disadvantage of Telnet is that data is sent in plain text, leaving passwords vulnerable to interception; SSH is a more secure option for remote logins. File Transfer Protocol (FTP) is used in TCP/IP networks. One of the more popular protocols, FTP is TCP-based. When an FTP client links up with an FTP server, a TCP connection is made to the FTP server’s port 21. Data is transmitted over a separate FTP data connection, another TCP connection, established to well-known port 20. This prevents file transfer interference on the control connection. Trivial File Transfer Protocol (TFTP) is a more simplified file transfer protocol that offers a small group of features, doesn’t require a lot of memory to load, and takes minimal time to program. TFTP uses User Datagram Protocol (UDP), with no verification of connection or delivery and no error recovery (on the transport layer). TFTP does use application layer recovery by embedding a small header between the UDP header and the data. Simple Network Management Protocol (SNMP) is an application layer protocol that manages IP devices. With SNMP, network administrators can control parameters on a device remotely and supervise network performance over a given duration. The three versions of SNMP are: SNMP version 1 (SNMPv1), SNMP version 2 (SNMPv2), and SNMP version 3 (SNMPv3). Simple Mail Transfer Protocol (SMTP):
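The protocols above are distinguished by their well-known ports and transports. A small Python lookup table summarizing them (TFTP’s port 69 and SNMP’s port 161 are the standard assignments, though not stated in the text above):

```python
# Service name -> (transport, well-known port)
WELL_KNOWN_PORTS = {
    "telnet":      ("tcp", 23),   # remote login, plain text
    "ftp-control": ("tcp", 21),   # FTP command channel
    "ftp-data":    ("tcp", 20),   # separate FTP data connection
    "tftp":        ("udp", 69),   # trivial file transfer over UDP
    "snmp":        ("udp", 161),  # management queries to devices
}

def port_for(service: str) -> int:
    """Return the well-known port number for a service."""
    transport, port = WELL_KNOWN_PORTS[service]
    return port
```

FTP’s two entries make its design visible: commands and data travel over distinct TCP connections so that a long transfer cannot interfere with the control channel.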
…provides e-mail services to IP devices over the Internet. Two mail servers use SMTP to exchange email. After the email is transmitted, users can access their mail from the server and read it. This is done via any mail client, which uses different protocols, such as Post Office Protocol 3 (POP3), to connect to the server. SMTP is assigned well-known port 25 for both TCP and UDP, though SMTP applications use only TCP port 25. BOOTP is a protocol that allows a booting host to configure itself by obtaining its IP address, IP gateway, and other data from a remote server. BOOTP allows numerous network hosts to be centrally managed on a single server without having to configure each host independently.
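Python’s standard library mirrors this split between composing a message and submitting it over SMTP. A sketch using email.message (the addresses and the server name are hypothetical, and the smtplib call is shown only as a comment since it needs a live mail server on TCP port 25):

```python
from email.message import EmailMessage
# import smtplib  # would perform actual delivery over TCP port 25

def build_message(sender: str, recipient: str,
                  subject: str, body: str) -> EmailMessage:
    """Compose an RFC 822-style message ready for SMTP submission."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

# Delivery sketch (not run here; server name is illustrative):
#   with smtplib.SMTP("mail.example.com", 25) as server:
#       server.send_message(msg)
```

Retrieval by the recipient would then go through a different protocol entirely, such as POP3, as the paragraph above notes.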
Communication and Network Security
A network is defined as a series of two or more computers connected together to communicate and exchange information and other resources, such as centralized data and software. Networks are built around a cable connection or a wireless connection that employs radio wave or infrared signals. For a network to work properly it must supply connections, communications, and services. Connections are provided by the hardware or physical components needed to attach a system to the network. This includes the network medium, the hardware that physically connects one computer to another, and the network interface, the hardware that connects a computer to the network medium, usually a network interface card (NIC). Communications deals with the network protocols that are in place to...
…outline the rules that direct network communication between the networked computers. Through the network protocols, computers with different operating systems and software are able to communicate with each other. Services are the resources that a computer shares with the rest of the networked computers. An example would be a shared printer.
Types of Networks
Networks can be divided into two types with respect to how data is stored, how network security is administered, and how the computers on the network interact. The two types are:
On a Peer-To-Peer (P2P) Network: When computers are connected on a network, each computer functions as a server that shares data and services, or as a client that utilizes data or services on another computer. Security parameters are established by the computer’s owner, and that owner/user also selects which resources are to be shared with other users. A peer-to-peer network is typically limited to between 15 and 20 computers. A Server/Client Network: Also called server networks, they’re made up of one or more dedicated computers configured as servers. The server administers access to all shared files and peripherals, operates the Network Operating System (NOS), and handles security and access to resources. Computers in this network, called client computers, connect to the server to access available resources. The following are some common Network Operating Systems: Microsoft’s Windows NT Server 4, Windows 2000 Server, and Novell’s NetWare. Before the release of Windows NT, most dedicated servers worked only as hosts. Windows NT allows these servers to operate as an individual workstation as well.
Types of Network Topologies
The mapping of a LAN design is called its topology. There are four categories: Star topology, Bus topology, Ring topology and Mesh topology. Hybrid combinations of these topologies also exist. Star topology – all computers and devices are connected to a main hub or switch. The hub or switch collects and distributes the flow of data within the network. Star topology is the most common type of network and follows the Ethernet standard. Bus topology – in this arrangement computers and devices are connected to a single linear cable called a trunk. The trunk is also referred to as the backbone or a segment. Each end of the trunk must be terminated to prevent the signal from rebounding back up the cable. Ring topology – computers and devices are connected to a closed loop cable. Here there are no terminating ends, so if one system crashes the entire network goes down. Each computer functions as a repeater and boosts the signal before sending it to the next station. In Ring topology data is sent through the network by way of a token. If the token isn’t carrying any data, a computer waiting to send data seizes it, attaches the data and the electronic address to the token and sends it on. Once the token is received by the destination computer, it strips the data and the token is sent on. This is why it’s named a token ring network. Mesh topology –
all computers and devices are connected with many redundant interconnections between network nodes. There are two types: full mesh and partial mesh. With full mesh topology, every computer or device has a link connecting it to every other computer or device in the network. This form of mesh topology is expensive but offers a high degree of redundancy should one of the nodes fail; if this occurs, network traffic can be redirected through any of the other nodes. Full mesh is usually reserved for backbone networks. Partial mesh topology configures some of the devices in a full mesh scheme while other devices are connected to only one or two other devices in the network. Partial mesh topology is less expensive to put in place and offers less redundancy than full mesh topology. Partial mesh is typically found in subnetworks connected to a fully meshed backbone.
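The cost difference between full and partial mesh follows directly from the link count: a full mesh of n nodes needs n(n-1)/2 point-to-point links, which grows quadratically with the number of nodes. A quick sketch:

```python
def full_mesh_links(nodes: int) -> int:
    # Every node links to every other node; each link is counted once.
    return nodes * (nodes - 1) // 2
```

full_mesh_links(5) is 10, while full_mesh_links(20) is already 190 — which is why full mesh is usually reserved for backbone networks and partial mesh for the edges.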
Using Coaxial Cables to Build Networks
Coaxial Cable: Coaxial cable has two conductors contained in the sheath, one inside the other. A copper core runs through the center of the cable and transmits the electrical signals. The core is solid copper or made of intertwined copper strands. A layer of insulation surrounds the core, and around that is the second conductor, which is made of braided copper mesh. The second conductor serves as the cable’s ground. The framework is encased in an insulating sheath made of PVC or Teflon. Here are the two types of coaxial cable: Thick Ethernet: Thick Ethernet cable, also called 10Base5, is graded as RG-8. A station is attached to the main cable by a vampire tap; the name derives from the metal tooth that pinches the cable. The tap connects to an external transceiver with a 15-pin AUI connector, to which you attach a cable that connects to the station. DIX is an acronym for the companies that worked on this configuration: Digital, Intel, and Xerox. A second option is the N-series connector. The N connector is a male/female screw-and-barrel configuration. A CB radio uses the PL-259 connector, which is similar in appearance to the N connector. Thin Ethernet:
Thin Ethernet, known as Thinnet or 10Base2, is a thin coaxial cable, smaller in diameter than Thick Ethernet cable. Thin Ethernet coaxial cable is RG-58. With Thin Ethernet, BNC connectors are used to attach workstations to the network. The BNC connector is locked securely with a slight twist motion.
Twisted Pair Cables Used to Build Networks
Twisted Pair Cable: Twisted pair cable installed in a star topology is what is used in LANs today. The two varieties are Shielded Twisted Pair (STP) cable and Unshielded Twisted Pair (UTP) cable; LANs primarily use UTP cable. UTP cable has eight separate conductors, as opposed to only two in coaxial cable. The eight wires are arranged in four twisted pairs, and each conductor is a single insulated wire. The twists inhibit interference from signals on other wire pairs and act as a buffer against outside interference. The connectors used for twisted pair cables are called RJ-45; telephone cables use the same type of connector, as twisted pair cable has long been used in telephone installations. Its use in LANs is more recent, but twisted pair has largely replaced coaxial cable in the data networking world. Twisted pair cable is more flexible than coaxial cable, making it easier to work with during installation, and qualified telephone cable installers already have a base knowledge for installing LAN cables. UTP Cable Grades: UTP cable has different grades or “categories” formulated by the Electronics Industry Association (EIA) and the Telecommunications Industry Association (TIA). Category 3 and Category 5 are the two most significant UTP grades used for LANs. Category 3 cable was designed for telephone networks and was eventually used for Ethernet. Category 3 cable is sufficient for 10 Mbps Ethernet networks, but not for Fast Ethernet, except under special conditions. While it’s feasible to use a Category 3 cable installation to build a standard Ethernet network, most new UTP cable installations today use at least Category 5 cable. Category 5 UTP is best suited for 100BaseTX Fast Ethernet networks running at 100 Mbps. Along with the formally ratified EIA/TIA categories, there are other UTP cable grades yet to be standardized, such as a cable standard known as Level 5 currently being marketed by the company Anixter, Inc.
This cable grade is being tagged with names like Enhanced Category 5. It increases the bandwidth of Category 5 from 100 to 350 MHz, making it capable of running the most updated Gigabit Ethernet protocol at 1,000 Mbps (1 Gbps). STP Cable Grades:
Shielded twisted pair cable is similar in composition to UTP. There are two pairs of wires with a foil or mesh shielding around each pair. STP shielding is of better quality than that found in UTP, which helps in installations where electromagnetic interference is an issue due to the proximity of electrical equipment. The STP cable types were standardized by IBM, the developer of the Token Ring protocol. STP networks use Type 1A for longer cable runs and Type 6A for shorter patch cables. Type 1A has two pairs of 22-gauge wires with foil shielding, and Type 6A contains two pairs of 26-gauge wires with foil or mesh shielding. IBM data connectors (IDCs) are used in Token Ring STP networks. Most Token Ring LANs today use UTP cable.
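The UTP grades discussed above can be summarized in a small lookup table. The Category 5 and Enhanced Category 5 bandwidth figures come from the text; Category 3’s 16 MHz rating is the standard figure, though not stated above, and Enhanced Category 5 is a marketed grade rather than a ratified standard:

```python
# UTP cable grade -> rated bandwidth and typical network use
CABLE_GRADES = {
    "Category 3":          {"bandwidth_mhz": 16,
                            "typical_use": "10 Mbps Ethernet"},
    "Category 5":          {"bandwidth_mhz": 100,
                            "typical_use": "100BaseTX Fast Ethernet"},
    "Enhanced Category 5": {"bandwidth_mhz": 350,
                            "typical_use": "Gigabit Ethernet (1 Gbps)"},
}

def supports_fast_ethernet(grade: str) -> bool:
    # Fast Ethernet needs at least Category 5's 100 MHz rating.
    return CABLE_GRADES[grade]["bandwidth_mhz"] >= 100
```

This mirrors the rule of thumb in the text: Category 3 suffices for 10 Mbps Ethernet but not Fast Ethernet, while new installations use at least Category 5.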
Fiber Optic Cables
Fiber optic cable is a much different type of medium in the network structure. Rather than transmitting signals over copper conductors in the form of electrical voltages, fiber optic cables transmit pulses of light over a glass or plastic conductor. Fiber optic cable is resistant to electromagnetic interference and less prone to attenuation than copper cables. Attenuation is a signal’s propensity to weaken as it travels over a cable. With copper cables, signals weaken to the point of becoming indecipherable after 100 to 500 meters. In some cases fiber optic cables can extend distances up to 120 kilometers without significant signal weakening. Fiber optic cable is therefore the preferred medium for installations that span long distances. Additionally, it is more secure than copper because a fiber optic link can’t be tapped without disturbing the normal communication over that link. Fiber optic cable comes in two types: single-mode and multimode. The difference between them lies in the thickness of the core and the cladding. Single-mode fiber uses a single-wavelength laser as a light source, and is able to…
…transmit signals for significantly long distances. Single-mode fiber is commonly found in outdoor installations such as cable television networks that span great distances. This type of cable is less applicable to LAN installations because it is more costly than multimode and has a higher bend radius. Multimode fiber uses a light emitting diode (LED) as a light source instead of a laser and carries multiple wavelengths. As opposed to single-mode, multimode fiber cannot span great distances but it bends around corners better and is much cheaper.
Wireless Network Functions
Conventional Ethernet networks rely on cables connected to computers via hubs and switches. This limits a computer’s mobility and requires that even portable computers be physically connected to access the network. The alternative is wireless networking. Wireless networks use network cards, called wireless network adapters, that use radio signals or infrared (IR) signals to send and receive data via a Wireless Access Point (WAP). The Wireless Access Point uses an RJ-45 port to connect to a 10BASE-T or 10/100BASE-T Ethernet hub or switch, and incorporates a radio transceiver, encryption, and communications software. It converts conventional Ethernet signals into wireless Ethernet signals that are transmitted to wireless network adapters, and performs the reverse transfer of signals from wireless network adapters to the conventional Ethernet network. WAP devices come in…
…different standards, and some provide the Cable Modem Router and Switch functions.
Wireless Network Standards
In the early days of wireless networking, products were single-vendor proprietary solutions that were technically incompatible with wireless network products from other vendors. In 1997 the IEEE 802.11 wireless Ethernet standard was developed. Wireless network products developed to this standard can be vendor neutral. The IEEE 802.11 wireless Ethernet standard consists of the IEEE 802.11b standard, the IEEE 802.11a standard, and the more recent IEEE 802.11g standard. IEEE 802.11 was the original standard for wireless networks, endorsed in 1997. It operated at a maximum speed of 2 Mbps with sustainable interoperability of wireless products from different vendors. Yet the standard was found to have some issues that complicated compatibility between devices. To maintain compatibility, partnering companies formed the Wireless Ethernet Compatibility Alliance (WECA), later renamed the Wi-Fi Alliance, to ensure product compatibility. The term Wi-Fi is now attributed to any IEEE 802.11 wireless network product that has passed the Wi-Fi Alliance certification tests. IEEE 802.11b, also known as…
…11 Mbps Wi-Fi, operates at a maximum speed of 11 Mbps, slightly faster than 10BASE-T Ethernet. Most IEEE 802.11b hardware functions at four speeds, using three different encoding types depending on speed and range. It operates at 11 Mbps or 5.5 Mbps using quaternary phase-shift keying/complementary code keying (QPSK/CCK), at 2 Mbps using differential quaternary phase-shift keying (DQPSK), and at 1 Mbps using differential binary phase-shift keying (DBPSK). As distances vary and signal strength is enhanced or weakened, IEEE 802.11b hardware switches to the most suitable data-encoding method. Wireless networks running the IEEE 802.11b standard operate on the 2.4 GHz radio frequency band used by many portable phones, wireless speakers, security devices, and Bluetooth short-range networking. Though widespread use of these products can create conditions for interference, the short range of wireless networks (indoor ranges up to 300 feet and outdoor ranges up to 1,500 feet, depending on the product) minimizes this. Spread-spectrum signaling is a method used by many devices to decrease potential interference. IEEE 802.11b networks connect to wired Ethernet networks or can be used as independent networks. IEEE 802.11a uses the 5 GHz frequency band, offering higher speeds with a maximum of 54 Mbps. The 5 GHz frequency band also helps diminish interference from devices operating on the lower-frequency band used by IEEE 802.11b networks. IEEE 802.11a hardware maintains relatively high speeds at both short and long distances. Because IEEE 802.11a uses the 5 GHz band as opposed to the 2.4 GHz band used by IEEE 802.11b, standard IEEE 802.11a hardware cannot communicate with 802.11b hardware. Dual-band hardware resolves this issue: it is compatible with either IEEE 802.11a or IEEE 802.11b networks, allowing a user the flexibility of moving between an IEEE 802.11b wireless network at home or a coffee house and a faster IEEE 802.11a office network.
IEEE 802.11g, also known as Wireless-G, integrates the compatibility of IEEE 802.11b with the speed of IEEE 802.11a at long distances. The standard was ratified in 2003; however, many network vendors were selling products based on the draft IEEE 802.11g standard before the final standard was approved, and this early hardware was slower and less in accordance with the specification’s promises. In some cases, compatibility issues with early-release IEEE 802.11g hardware can be resolved through firmware upgrades.
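The standards discussed above differ mainly in frequency band and maximum speed, and the band determines which hardware can interoperate. A summary as Python data (speeds and bands are taken from the text; the 1999 ratification years for 802.11a/b are standard history not stated above):

```python
WIFI_STANDARDS = {
    "802.11":  {"year": 1997, "band_ghz": 2.4, "max_mbps": 2},
    "802.11b": {"year": 1999, "band_ghz": 2.4, "max_mbps": 11},
    "802.11a": {"year": 1999, "band_ghz": 5.0, "max_mbps": 54},
    "802.11g": {"year": 2003, "band_ghz": 2.4, "max_mbps": 54},
}

def interoperable(a: str, b: str) -> bool:
    # Hardware on different frequency bands cannot communicate directly;
    # dual-band hardware works around this by supporting both bands.
    return WIFI_STANDARDS[a]["band_ghz"] == WIFI_STANDARDS[b]["band_ghz"]
```

This captures why 802.11a gear cannot talk to 802.11b gear, while 802.11g retains compatibility with 802.11b on the shared 2.4 GHz band.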
Wireless Network Modes
Wireless networks work in one of two modes, labeled topologies: ad-hoc mode and infrastructure mode. The mode utilized depends on whether the objective is for computers to communicate directly with each other, or via a WAP. -In ad-hoc mode, data is sent directly to and from the wireless network adapters connected to the computers. This eliminates the requirement to purchase a WAP, and throughput rates between two wireless network adapters can be twice as fast as when a network adapter connects through a WAP. However, a network in ad-hoc mode can’t link up to a wired network. An ad-hoc network is also referred to as a peer-to-peer network. -In infrastructure mode…
…data between computers is delivered via a WAP. WAP allows connectivity to a wired network, affording expansion of a wired network with wireless capability. Wired and wirelessly networked computers can communicate with each other. WAP also lengthens the wireless network’s range as its placement between two wireless network adapters doubles their range. Routers and firewalls are also incorporated in some WAPs. The router provides shared Internet access between all your computers, and the firewall protects the network.
Use of Bluetooth in Networking
Bluetooth operates in the 2.4 GHz frequency spectrum and allows a maximum data connection speed of 723 Kbps. Bluetooth consumes lower power levels than wireless LAN technologies (802.11x), so its radio signal is not as strong as that of 802.11. Bluetooth uses peer-to-peer networking and does not need line of sight between any of the connected devices. Bluetooth also has multiple-connection capability for devices in a point-to-multipoint fashion. Bluetooth is not exclusively used with cell phones and PDAs; other networked devices use it as well. As with any networked device, the value of Bluetooth grows with the number of devices that can connect to it (a network effect). The three classifications of Bluetooth:
Class 1, with a range of up to 100m and 100mW of power. Class 2, the most commonly implemented classification, with a range of up to 10m and 2.5mW of power. Class 3, with a range of up to 1m and 1mW of power.
Using IrDA in Networking
IrDA 1.1 uses infrared (IR) signals to send data. Average distance is 3 to 6 meters, though some IR technologies have a maximum distance of 1.5 miles. Because IR signals are used to transmit data, distance capability for long-range IR varies with weather conditions (such as humidity). IR is a…
…line-of-sight technology operable on a clear path between the transmitter and receiver.
Primary Networking Devices
Hubs and repeaters operate at the physical layer of the OSI model and transfer incoming data to all other ports on the device. These devices don’t read frames or packets; they boost the signal and broadcast to all ports. Repeaters do not segment broadcast or collision domains and are considered protocol transparent because they don’t interact with upper-layer protocols, such as IP, IPX, DECnet, etc. Bridges and Layer 2 switches operate at the data-link layer of the OSI model. Bridges learn the MAC layer addresses of each node on their segments and construct a list of MAC addresses and ports, pinpointing which interface a particular MAC is connected to. If an incoming frame does not carry a destination MAC address found on the list, bridges forward the frame to all ports except the originating port from which the frame came. If the destination MAC address is in the list, bridges send the frame through the port to which the destination MAC address is attached. If the destination MAC address is on the same port from which the frame came, the bridge filters (drops) the frame. Bridges are store-and-forward devices: they store the complete incoming frame and verify the checksum before forwarding the frame. If a checksum error is found, the frame is discarded. Switches use fast integrated circuits that minimize latency. “Cut-through mode” is when…
…a switch doesn’t wait for the entire frame to enter its buffer but immediately forwards the frame once it reads the destination MAC address. This raises the occurrence rate of error frames, as the entire frame is forwarded without complete error inspection. Ports on a bridge or switch are separate collision domains, but all ports in a switch are in the same broadcast domain, as bridges and switches do not control broadcasts; rather, they transmit broadcasts to all ports. Collision Domains: A collision domain is a set of network interface cards (NICs) for which a frame sent by one NIC could result in a collision with a frame sent by any other NIC in the same collision domain. In a collision domain all devices on the network compete for the same bandwidth. Broadcast Domains: A broadcast domain is a set of NICs for which a broadcast frame sent by one NIC is received by all other NICs in the same broadcast domain. Layer-2 Switching and Routing: The major difference between Layer-2 switching and routing is that switching occurs at Layer 2 of the OSI reference model and routing occurs at Layer 3. Switches forward frames based on MAC address information, while routers forward packets based on logical addresses (IP addresses). Routers and Layer-3 switches operate in the network layer of the OSI model and determine forwarding actions based on network layer addresses, such as IP addresses. Routers separate both collision and broadcast domains, as each router interface is a separate broadcast domain defined by a separate subnetwork. Routers are protocol-based and are capable of forwarding packets of routable protocols. Routers run routing protocols, such as Routing Information Protocol (RIP), Interior Gateway Routing Protocol (IGRP), Open Shortest Path First (OSPF), Intermediate System-to-Intermediate System (IS-IS), Enhanced Interior Gateway Routing Protocol (EIGRP), and Border Gateway Protocol (BGP), to find the most optimal paths to a destination.
Routers exchange information about destination networks and their interface status by using these protocols. Routers can also be set up with static routes via manual configuration. LAN switches that run routing protocols and can interact with routers as peers are called Layer-3 switches. Layer-3 switches offload local traffic from wide-area network (WAN) routers by performing network-layer forwarding within the local-area networks (LANs). Routers and Layer-3 switches make forwarding determinations based on IP addresses, not MAC addresses, and both share route information through the dynamic routing protocol they participate in.
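The learn/forward/filter behavior described for bridges above can be sketched as a MAC-address-table simulation. This is a toy model of transparent bridging, not a real bridge implementation:

```python
class LearningBridge:
    """Toy model of transparent-bridge MAC learning and forwarding."""

    def __init__(self):
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src_mac: str, dst_mac: str, in_port: int) -> str:
        self.mac_table[src_mac] = in_port          # learn the source MAC
        out_port = self.mac_table.get(dst_mac)
        if out_port is None:
            return "flood"                         # unknown destination:
                                                   # forward to all other ports
        if out_port == in_port:
            return "filter"                        # same segment: drop frame
        return f"forward to port {out_port}"       # known, different segment
```

A cut-through switch would make the same decision but begin forwarding as soon as the destination MAC is read, before the whole frame (and its checksum) has arrived.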
Types of Ethernet
A range of network technologies can be used to establish network connections, including Ethernet, Fiber Distribution Data Interface (FDDI), Copper Distribution Data Interface (CDDI), Token Ring, and Asynchronous Transfer Mode (ATM). Ethernet is the most common choice in installed networks because of its affordability and scalability to higher bandwidths. Ethernet: Ethernet is based on the Institute of Electrical and Electronics Engineers (IEEE) standard IEEE 802.3. Ethernet is based on carrier sense multiple access collision detect (CSMA/CD) technology, which mandates that transmitting stations stand by for a period of time when a collision occurs. Coaxial cable was the initial physical medium established in the Ethernet standard. Coaxial Ethernet cable comes in two categories: Thicknet (10Base5) and Thinnet (10Base2). These cables vary in size and length. Ethernet coaxial cable lengths can be long and are prone to electromagnetic interference (EMI) and eavesdropping. Wired networks typically use twisted-pair media for connections to the desktop. Twisted-pair also comes in two categories: unshielded twisted-pair (UTP) and shielded twisted-pair (STP). One pair of insulated copper wires intertwined with each other forms a twisted pair; the wires are twisted around each other to diminish interference and crosstalk. STP and UTP are prone to high attenuation, so the lines are usually restricted to a maximum distance of 100 meters between devices. Also, these cables have high susceptibility to EMI and eavesdropping. 10BaseT UTP cable is the most commonly used cable. Another option is fiber optic cable (10BaseFL), which carries light signals produced either by light emitting diodes (LEDs) or laser diodes (LDs) as opposed to electrical signals. These cables support much higher transmission speeds and extended distances but are more expensive. The benefit of fiber optic is its insusceptibility to EMI and eavesdropping.
They also have minimal attenuation, which allows them to connect active devices that are up to 2 km apart. Again, the expense of these devices is something to consider, and cable installation is complex. Fast Ethernet: Fast Ethernet runs at 100 Mbps and is based on the IEEE 802.3u standard. Fast Ethernet retains the Ethernet cabling schemes, CSMA/CD operation, and all upper-layer protocol operations, and has backward compatibility with 10 Mbps Ethernet. This is possible because the two devices at each end of a network connection can automatically configure link capabilities so that they both operate at a common level. This involves the detection and implementation of the highest common bandwidth and self-negotiation of half-duplex or full-duplex operation. Because of this, Fast Ethernet is also known as 10/100 Mbps Ethernet. Gigabit Ethernet:
Gigabit Ethernet is an extension of the Fast Ethernet standard using the same IEEE 802.3 Ethernet frame format. Gigabit Ethernet has a maximum throughput of 1,000 Mbps (1 Gbps). As with Fast Ethernet, Gigabit Ethernet is compatible with earlier Ethernet standards; the difference is that the physical layer has been adjusted to raise data transmission speeds by merging two technologies: the IEEE 802.3 Ethernet standard and the American National Standards Institute (ANSI) X3T11 FibreChannel standard. IEEE 802.3 provided the basics of frame format, CSMA/CD, full duplex, and other elements of Ethernet. FibreChannel provided a base of high-speed ASICs, optical components, and encoding/decoding and serialization mechanisms. The resulting protocol is termed IEEE 802.3z Gigabit Ethernet. Gigabit Ethernet accommodates several cabling types, referred to as 1000BaseX. Table 3.8 breaks down the cabling specifications for each type.
The Token Ring
The IEEE standard for Token Ring is IEEE 802.5. Token Ring was created by IBM for the forwarding of data on a logical unidirectional ring. Token Ring, like Ethernet, is a LAN technology that supports shared media access for several connected hosts and operates at the Data-Link Layer. Token Ring networks transmit a small frame, called a token, to the hosts on the network. A host that holds the token has the right to transmit a frame onto the ring. Once a station has the token, it converts it into a data frame, attaches the data for transmission, and forwards the frame to the next station. The ring will not contain a free token until the data frame is received by the source station and tagged as read and copied, whereupon the token is returned to the ring. Only one station at a time can transmit, which averts collisions on the Token Ring network. A Token Ring network has a bandwidth of 4 Mbps or 16 Mbps. At the higher speed, hosts are allowed to release a new token once transmission of a frame is completed. This early token release boosts efficiency by allowing more than one host to transmit a frame during the original token’s round trip. One station is responsible for acting as a ring monitor to provide recovery from runaway frames or tokens; the ring monitor will remove frames that have circled the ring once if no other station removes them. Traditional Token Ring networks use…
…Multistation Access Units (MSAUs) to provide connectivity between hosts. MSAUs have several ports that a host can link to, with a B connector for Type 2 cabling or an RJ-45 connector for Category 5 UTP cabling. Internally, an MSAU chains its host connections together to form a ring segment, and the Ring-In and Ring-Out connectors of an MSAU can be connected to other MSAUs to form a complete ring topology. Token Ring also enables an optional priority system that permits stations assigned a higher priority value to access the network more often than the standard process permits. Eight levels of priority are implemented using a 3-bit reservation field and a 3-bit priority field. As a frame passes, a station sets a higher priority in the reservation field, reserving the token. The transmitting station then dispatches a token with the higher priority set. After the high-priority station completes sending its frame, it releases a token with the normal or previous priority.
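The token-passing sequence described above — seize the free token, attach data, let the frame circle the ring back to the sender, then release a new token — can be sketched as a simple simulation. The station names are hypothetical and the model ignores priority and early token release:

```python
def token_ring_transmit(stations: list, sender: str, payload: str) -> list:
    """Return (station, payload) pairs in the order the frame passes them.

    The frame travels around the unidirectional ring from the sender;
    every downstream station repeats it, and it is removed when it
    returns to the sender, which then releases a new free token.
    """
    n = len(stations)
    start = stations.index(sender)
    # Frame visits every other station exactly once, then returns home.
    return [(stations[(start + i) % n], payload) for i in range(1, n)]
```

Because only the token holder may transmit, this scheme avoids collisions entirely, at the cost of waiting for the token to come around.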
Token Ring Operation
One station on the Token Ring is chosen to be the Active Monitor (AM). This station strips continuously circulating frames that a defective transmitting station has failed to remove. As a frame passes the AM, the AM sets the frame's monitor bit. If a frame arrives with the monitor bit already set, the AM assumes that the original sender was unable to remove the frame from the ring. The AM then discards the frame, sends a Token Soft Error message to the Ring Error Monitor, and generates a new token. The AM also supplies timing to ring stations: it inserts a 24-bit propagation delay to prevent the end of a frame from wrapping back onto its beginning, and it verifies that a data frame or token is seen every 10 milliseconds. Standby Monitors and Ring Error Monitors are also present on Token Ring networks. A Standby Monitor assumes the AM's responsibilities if the AM leaves the ring or stops performing its functions. Ring Error Monitors may also be present on the ring to collect ring status and error information. If a station receives no further frames, either a data frame or a token, from its upstream neighbor, it transmits a beacon MAC frame, which contains the beaconing station's MAC address and the address of its nearest active upstream neighbor (NAUN) and indicates that the problem lies between the two stations. The adapter beacons continuously until frames are received again.
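The Active Monitor's use of the monitor bit can be reduced to one simple rule, sketched here as a toy function (not a protocol implementation): set the bit on first sight, strip the frame on second sight.

```python
# Toy sketch of the Active Monitor's monitor-bit rule described above.
# A frame is modeled as a dict; real 802.5 adapters work on raw bits.

def active_monitor_pass(frame: dict) -> str:
    """Apply the AM rule to a frame passing the Active Monitor."""
    if frame.get("monitor_bit"):
        # Second pass: the sender failed to strip its own frame.
        return "discard frame, report Token Soft Error, issue new token"
    frame["monitor_bit"] = True  # first pass: mark the frame as seen
    return "forward frame"
```

On the first pass the frame is marked and forwarded; if it ever comes around again with the bit still set, the AM removes it and restores the ring with a fresh token.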
Early Token Release (ETR)
With ETR, a token is released immediately after the sending station transmits its frame; the sending station does not wait for the data frame to circle the ring. Stations running ETR are compatible with stations not running it. With ETR, a free token can circle the ring alongside multiple data frames.
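The efficiency gain from ETR can be shown with back-of-the-envelope arithmetic (the timings below are invented for illustration, not from the standard): without ETR the sender effectively holds the token for the frame time plus the full ring round trip, while with ETR it holds it only for the frame time.

```python
# Illustrative utilization calculation for early token release.
# frame_time_us and ring_latency_us are hypothetical example values.

def utilization(frame_time_us: float, ring_latency_us: float, etr: bool) -> float:
    """Fraction of token-holding time spent actually sending data."""
    held = frame_time_us if etr else frame_time_us + ring_latency_us
    return frame_time_us / held

# e.g. a 500-microsecond frame on a ring with 250 microseconds of latency:
without_etr = utilization(500, 250, etr=False)  # 500 / 750
with_etr = utilization(500, 250, etr=True)      # 500 / 500
```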
Areas of the Network
The network is categorized based on the traffic’s originating point and its destination. This can be:
Trusted – a private sector of the network that requires shielding against security threats and attacks. Traffic originating from less trusted areas of the firewall is blocked, enhancing the security of the computers inside.
Untrusted – areas of the network, such as the Internet segment of the firewall, that are exposed to security threats.
Demilitarized Zone (DMZ) – an area, such as a web server segment, that supports computers or services used both by authorized internal users and by untrusted external individuals. The DMZ sits between the trusted and untrusted zones. When classified from within the private trusted network, the DMZ is considered untrusted, and traffic originating from the DMZ is blocked in this case.
A firewall configuration is made up of an inside trusted interface and an outside untrusted interface. Firewalls that are jointly configured can have a DMZ set up between them. The perimeter router provides the Internet Service Provider (ISP) connection. More advanced firewall models, known as three-pronged firewalls, have no fewer than three interfaces: an inside trusted interface, an outside untrusted interface, and a DMZ interface connecting to an area that is partially trusted.
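The zone model above can be sketched as a default policy: traffic is implicitly permitted only when it flows from a more-trusted zone to a less-trusted one, and anything else must be explicitly allowed. The trust levels and function below are our own illustration, not any vendor's configuration syntax.

```python
# Hypothetical default-policy sketch for the trusted / DMZ / untrusted model.
# Higher number = more trusted.
TRUST_LEVEL = {"trusted": 2, "dmz": 1, "untrusted": 0}

def default_policy(src_zone: str, dst_zone: str) -> bool:
    """Permit traffic only from a higher-trust zone to a lower-trust zone."""
    return TRUST_LEVEL[src_zone] > TRUST_LEVEL[dst_zone]
```

Under this rule, inside hosts can reach the DMZ and the Internet, the DMZ cannot initiate connections into the trusted network, and untrusted traffic is blocked everywhere unless an explicit exception is configured.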
Common Data Network Services
File Transfer Protocol (FTP): FTP is a TCP-based application with many options and features, such as changing directories, using wildcard characters in file listings, transferring multiple files at once, and supporting a variety of character sets and file formats. It can be set up for anonymous access without a password, or it can be configured to require a username and password, and it offers an interface resembling a UNIX file directory. When an FTP client attempts to connect to an FTP server, a TCP connection is made to the FTP server's well-known port 21. A username and password are requested from the FTP client, which the server uses to determine which files are available to that user; this security corresponds to the file security on the server's platform. All the commands used to administer the transfer of files are sent across this control connection, giving the user a range of commands to set transfer options and perform other actions. Files are sent over a separate FTP data connection on TCP port 20, which prevents a file transfer from interfering with the control session.
Secure File Transfer Protocol (SFTP): SFTP is a secure alternative to FTP that adds encryption and authentication; it performs secure file transfer over SSH or SSH-2. Like FTP, SFTP can be used to transfer files between a client and a server over a network, including remote servers. SFTP can be used exclusively for file transfer access, or it can provide system command access as well. It can limit users to their home directories, is not susceptible to "flashfxp"-style transfer utilities, and is much less vulnerable to exploitation than FTP. It can be configured to authenticate users with certificates and passwords.
Secure Shell (SSH) and Secure Shell version 2 (SSH-2):
SSH is an open standard used for remote administration and file transfer over a network. An encrypted tunnel is created between an SSH client and an SSH server, and the client is authenticated to the server. SSH is the safer alternative to clear-text Telnet sessions, which are inherently vulnerable. SSH uses port 22 and can substitute for both FTP and Telnet. SSH communications are encrypted with the International Data Encryption Algorithm (IDEA), and Rivest, Shamir, and Adleman (RSA) methods are used for key exchange; keys are destroyed and recreated every hour. SSH is used to defend against:
IP spoofing or IP source routing: the attacker forges the source IP address in his packets to pose as a trusted source.
DNS spoofing: the attacker forges name server records in the DNS.
Real-time data modification: an intermediary host hijacks active communication and impersonates both parties in their exchange. The attacker receives information sent by the real sender, alters it, and forwards it to the recipient on behalf of the sender.
Authentication replay attacks: the attacker records the stream of data and strips all user replies from the stream to establish a connection.
If an attacker compromises a workstation where SSH is used and gains root access privileges, he can then modify the SSH application to his liking. Secure Shell version 2 (SSH-2) is a security-enhanced version of SSH and should be used in its place.
Trivial File Transfer Protocol (TFTP): TFTP is a stripped-down version of FTP. It has a limited set of features, doesn't require much memory to load, and can be implemented quickly. There is no browsing capability; it only sends and receives files. TFTP has been used to seize router configuration files by recording a terminal session during configuration and then storing that configuration on a TFTP server. During configuration, the server can be accessed to retrieve or save configuration data on the network. The disadvantage is that, unlike FTP, no session authentication occurs, which makes TFTP an open target.
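The split between FTP's control connection (port 21) and data connection is visible in the protocol itself: in active mode the client sends a PORT command whose argument encodes the client address and data port as six decimal bytes, with the port computed as p1*256 + p2 (RFC 959). A small decoder:

```python
# Decode an FTP PORT command argument (RFC 959): "h1,h2,h3,h4,p1,p2"
# where the data port is p1*256 + p2. The function name is our own.

def parse_port_command(arg: str) -> tuple:
    """Return (ip_address, data_port) from a PORT command argument."""
    h1, h2, h3, h4, p1, p2 = (int(x) for x in arg.split(","))
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

# parse_port_command("192,168,1,10,7,138") -> ("192.168.1.10", 1930)
```

This is why a firewall that only inspects port 21 can still miss FTP data transfers: the data connection's port is negotiated inside the control channel.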
Types of Data Networks
Data networks are two or more computers connected to share information, hardware, programs, and so forth. To communicate on the network, every computer requires a network interface card (NIC), a transmission medium, a Network Operating System (NOS), and a network connectivity device. Data networks can be categorized and labeled according to the geographical area the network covers. The four geographically defined networks are: a Local Area Network (LAN), a Campus Area Network (CAN), a Metropolitan Area Network (MAN), and a Wide Area Network (WAN). According to the network's infrastructure, there are three additional network descriptions: the Internet, an intranet, and an extranet. The most common network definitions are LAN, WAN, and the Internet.
Wide Area Networks (WAN)
A Wide Area Network (WAN) is a collection of physically or logically interconnected subnetworks that covers a larger geographic area than a LAN and extends across metropolitan, regional, and national boundaries. Differences between WANs and LANs:
WANs cover long distances. WAN speeds are slower. WANs have both on-demand and permanent connectivity; LANs have permanent connections between stations. WANs can use public or private network transports; LANs use private network transports.
Internet, Intranet and Extranet
Internet: The Internet is a TCP/IP-based WAN that was originally developed for the U.S. Department of Defense (DoD). Internet Service Providers (ISPs) connect the global collection of public and private networks that comprises the Internet.
Intranet: An intranet is a logical network that uses an organization's internal physical network infrastructure and can extend over large areas. Intranets employ TCP/IP and HTTP standards, which are used to run corporate web sites accessed by all employees on the intranet. This method is more secure and controlled than publishing corporate web sites on the World Wide Web.
Extranet: An extranet shares similarities with the intranet: it is a private network that uses Internet protocols and can be used to publish corporate web sites. The main difference is that users outside of the organization, such as business associates, have access to the system.
Dedicated Lines
A dedicated line, also called a leased line or a point-to-point link, is a communications line that transmits continuously, rather than switching on and off as transmission is required. These lines run over a dedicated analog or digital point-to-point connection that can interconnect different types of networks. Synchronous circuits require a common clock so that the receiving circuit knows exactly when each frame bit arrives. Several dedicated line speeds are used, and they are based on the standard digital signal level 0 (DS-0) rate of 64 kbps. The T-carriers are the most common dedicated lines in North America. A T1 carrier can carry 24 DS-0s for a capacity of 1.544 Mbps. A T3 carrier, also commonly called a DS-3, consists of 672 individual DS-0 channels (28 T1 lines) and supports data rates of approximately 45 Mbps. The E1 carrier is the most common dedicated line in Europe and other countries and can carry 30 DS-0s for a capacity of 2.048 Mbps. WAN Switching:
WAN switching is used with networks that operate beyond the single point-to-point connection. There are two types of WAN switching: Circuit Switching and Packet Switching.
In a circuit-switched network, a dedicated point-to-point connection, or circuit, is established for transmission between the sender and receiver. Circuit switching is commonly used by telephone companies. Integrated Services Digital Network (ISDN) is an example of a circuit-switched network. It provides on-demand WAN connectivity and has been widely used for connectivity between routers. ISDN employs digital signals to support faster speeds than analog; transmission speeds run up to 64 kbps per channel. For Internet connectivity, ISDN has been bumped down the ranks by competing technologies such as Digital Subscriber Line (DSL), Asymmetric Digital Subscriber Line (ADSL), cable modems, and faster analog modems. Still, ISDN remains a common method for short-term connectivity between routers and is frequently used as a backup link when the primary leased line or Frame Relay connection goes down.
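The dedicated-line capacities quoted earlier (DS-0, T1, T3, E1) follow from simple arithmetic on the 64 kbps DS-0 rate; a T1 adds 8 kbps of framing overhead, and an E1 frame has 32 timeslots of which 30 carry user data.

```python
# Deriving the T-carrier and E1 rates from the 64 kbps DS-0 building block.
DS0_KBPS = 64

t1_payload = 24 * DS0_KBPS   # 24 DS-0s = 1536 kbps of payload
t1_total = t1_payload + 8    # plus 8 kbps framing = 1544 kbps (1.544 Mbps)
e1_total = 32 * DS0_KBPS     # 32 slots (30 usable) = 2048 kbps (2.048 Mbps)
t3_channels = 28 * 24        # a T3 carries 28 T1s = 672 DS-0 channels
```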
Packet-Switched Networks (PSN)
In a packet-switched network (PSN), nodes share bandwidth by exchanging small data units called packets. Unlike circuit-switched networks, packet-switched networks divide information into packets that are forwarded hop by hop based on each router's routing table; at the destination, the packets are reassembled using their original sequence numbers. PSNs are more economical than dedicated circuits because they create virtual circuits, which are used as needed. Examples of PSNs:
X.25: a connection-oriented packet-switching network, defined by the International Telecommunication Union (ITU-T), in which packets are transmitted over virtual circuits. The ITU-T specification defines point-to-point communication between Data Terminal Equipment (DTE) and Data Circuit-Terminating Equipment (DCE), such as a Data Service Unit/Channel Service Unit (DSU/CSU), and supports both switched virtual circuits (SVCs) and permanent virtual circuits (PVCs). The DTE and DCE functions are performed by routers and other devices: routers are typically DTEs connected to modems or packet switches, which perform the DCE function. X.25 was designed to support most systems connected to the network; it has evolved into an international standard and is more widely used outside the United States.
Link Access Procedure, Balanced (LAPB): developed for use with X.25. LAPB defines methods for exchanging frames, monitoring frame sequence and missing frames, and carrying out frame acknowledgement and retransmission when necessary.
Frame Relay: a higher-performance, connection-oriented WAN technology. It is the successor to X.25 and LAPB and operates at speeds from 56 Kbps to 45 Mbps, with versatile deployment options. It works by statistically multiplexing several data streams over a single link; each data stream is called a virtual circuit (VC). There are two models of Frame Relay VC: Permanent Virtual Circuits (PVCs) and Switched Virtual Circuits (SVCs). Each VC is assigned an identifier to keep it unique, called a Data Link Connection Identifier (DLCI), which is established on a per-leg basis along the transmission path. A DLCI must be unique and agreed upon by two adjacent Frame Relay devices; as long as the two agree, the value can be any valid number and does not have to be the same end to end. Valid DLCI numbers are 16-1007; 0-15 and 1008-1023 are reserved. The logical connection between the Frame Relay (FR) switch and the customer premises equipment (CPE) is also identified by the DLCI.
Switched Multimegabit Data Service (SMDS): a high-speed, connectionless, packet-switched public network service. It is transmitted over a SONET ring with a maximum service area of 30 miles. It provides bandwidth to organizations that transmit large amounts of data over WANs on a bursty or incremental basis.
Asynchronous Transfer Mode (ATM): a connection-oriented, high-bandwidth, low-delay transport technology that uses both switching and multiplexing. It handles the transmission of voice, data, and video across service provider networks and uses 53-byte, fixed-size cells rather than variable-length frames. It can provide bandwidth on demand, making it ideal for bursty applications. ATM relies on high-speed, high-bandwidth media such as fiber optics.
Voice over IP (VoIP): a multi-service digital access technology that carries various types of traffic, including data, voice, audio, and video, in IP packets. Carrying multiple services over one network has advantages in cost, functionality, and interoperability.
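The Frame Relay DLCI ranges given above (16-1007 usable, 0-15 and 1008-1023 reserved) can be captured in a small validity check:

```python
# Validity check for Frame Relay DLCI values: 16-1007 are usable for
# user virtual circuits; 0-15 and 1008-1023 are reserved.

def dlci_is_usable(dlci: int) -> bool:
    """True if the DLCI can identify a user virtual circuit."""
    return 16 <= dlci <= 1007
```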
Network Address Translation (NAT)
Organizations that use private IP addresses can keep private addressing inside the network while still using the Internet by implementing Network Address Translation (NAT). NAT is defined in RFC 1631 and allows hosts that don't have a valid registered IP address to communicate through the Internet. Hosts using private addresses, or addresses that aren't Internet-routable, can therefore communicate with other hosts on the web. This is achieved by substituting a registered IP address for the private address when interacting with other hosts on the Internet: NAT changes the private IP addresses to publicly registered IP addresses inside each IP packet. There are several types of NAT: Static NAT; Dynamic NAT; and overloading NAT with Port Address Translation (PAT). Definitions of each are as follows:
Static NAT: the IP addresses have a fixed one-to-one relationship, allowing the NAT router to configure a one-to-one mapping between each private address and the registered address used on its behalf. Supporting two IP hosts on a private network requires a second static one-to-one mapping using a second IP address in the public address domain; the number of private hosts supported corresponds to the number of registered IP addresses.
Dynamic NAT: similar to static NAT in that the NAT router creates a one-to-one mapping between an inside local and an inside global address and modifies the IP addresses in packets as they exit and enter the inside network; however, this occurs automatically. This is achieved by setting up a pool of possible inside global addresses and defining criteria for the set of inside local IP addresses whose traffic should be translated with NAT. With a dynamic NAT router, the inside local address list can contain more IP addresses than the inside global address pool. The router assigns registered public IP addresses from the inside global address pool until all are allocated. If a new packet arrives that needs a NAT entry but all the pooled IP addresses are already assigned, the router discards the packet. The user then needs to retry until a NAT entry times out, freeing an address for the next host that sends a packet. This limitation can be resolved with the use of Port Address Translation.
Port Address Translation (PAT): PAT, also known as overloading NAT, is implemented in networks where the majority of IP hosts need to connect to the Internet. If private IP addresses are used, the NAT router would otherwise require an extensive list of registered IP addresses, since with static NAT each private IP host requiring Internet access needs a publicly registered IP address.
If a large portion of the IP hosts need Internet access during business hours, a substantial number of registered IP addresses would also be required. These situations can be resolved by overloading with Port Address Translation. Overloading allows NAT to support many clients with a minimal number of public IP addresses. To support many inside local IP addresses with only a few inside global, publicly registered IP addresses, NAT overload implements Port Address Translation (PAT), translating the port number as well as the IP address. When the dynamic mapping is established, NAT selects an inside global IP address and assigns a unique port number to that address. The NAT router records every unique combination of inside local IP address and port, along with its translation to the inside global address and a unique port number associated with that address. Because the port number field has 16 bits, NAT overload can use more than 65,000 port numbers, allowing it to scale well without requiring many registered IP addresses. NAT is also useful for organizations that use a network number registered to another company instead of private addresses: if an organization uses a network number registered to a different organization and both have Internet access, NAT can translate both the source and the destination IP addresses. Both the originating and destination addresses must be modified as the packet passes through the NAT router.
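The PAT bookkeeping described above can be modeled as a table mapping each unique (inside local IP, port) pair to the single inside global address plus a unique translated port. This is a toy model of the idea, not real router behavior; the class name and port numbering are our own.

```python
# Toy model of PAT (NAT overload): many inside hosts share one inside
# global address, distinguished by a unique translated port per flow.

class PatTable:
    def __init__(self, global_ip: str, first_port: int = 1024):
        self.global_ip = global_ip
        self.next_port = first_port
        self.table = {}  # (inside_ip, inside_port) -> (global_ip, global_port)

    def translate(self, inside_ip: str, inside_port: int) -> tuple:
        """Return the (global_ip, global_port) for this inside flow."""
        key = (inside_ip, inside_port)
        if key not in self.table:
            self.table[key] = (self.global_ip, self.next_port)
            self.next_port += 1  # each new flow gets a fresh port
        return self.table[key]
```

Two inside hosts can even use the same source port: each flow still gets its own translated port, which is how one registered address supports tens of thousands of simultaneous flows.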
Connecting Systems to a Remote Location
There are various methods of connecting systems to a remote location. From the network layer up, a remote connection is the same as a direct LAN connection, but the data-link and physical layers can differ.
Public Switched Telephone Network (PSTN): PSTN is the standard telephone service and uses copper-based twisted-pair cable. A modem can be connected to the line to transmit data to any location. Modems convert the digital signals to analog and transmit them over the PSTN; calls are fed into a central office and routed to their destinations. PSTN connections are slow, and the quality of the connection depends on location and the state of the cables. Most modems support the Plug and Play standard, so operating systems can detect them and install the correct drivers. With external modems, the IRQ and I/O address are those assigned to the serial port that links the modem to the system. Most systems provide two serial ports; by convention, COM1 and COM3 share IRQ4, and COM2 and COM4 share IRQ3. A universal asynchronous receiver-transmitter (UART) chip maintains the communication.
A Virtual Private Network (VPN): a VPN serves as the connection between a remote system and a server on a private network, using the Internet as the carrier. A remote user can access the Internet through a modem and connect to an ISP. A secured connection is then established between the remote system and the network server, protecting transmitted information by using tunneling. One protocol that provides tunneling is the Point-to-Point Tunneling Protocol (PPTP). PPTP works with PPP to secure a connection between the client computer and a server; once tunneling is activated, the system transmits data by encapsulating the PPP data.
Integrated Services Digital Network (ISDN):
ISDN was developed to replace the analog telephone system, although Digital Subscriber Line (DSL) and cable television (CATV) services are newer systems with faster performance at lower cost. No modem is needed with ISDN; it uses dial-up service and can connect to different sites. ISDN uses the same wiring as PSTN but delivers higher speeds; although the wiring is the same, additional equipment is required at the terminal locations, and the telephone company provides a U-interface with a four-wire connection. Because of the speed, the length of the connection is limited.
Digital Subscriber Line (DSL): also referred to as xDSL, DSL is a broad label for an array of digital communication services that use standard telephone lines. Its data transmission is faster than both PSTN and ISDN. DSL operates at a higher frequency than standard phone services and supports special signaling schemes. DSL is a direct, always-on connection. DSL services include:
High-bit-rate Digital Subscriber Line (HDSL): commonly used by larger organizations in place of a dedicated leased line. HDSL has a maximum length of 12,000 feet and transmits full-duplex at a rate of 1.544 Mbps using two wire pairs.
Symmetrical Digital Subscriber Line (SDSL): supports the same upstream and downstream speeds. The maximum cable length is 10,000 feet, and it can transmit at 1.544 Mbps or 2.048 Mbps using a single wire pair.
Asymmetric Digital Subscriber Line (ADSL): has faster downstream rates than upstream rates: anywhere between 1.544 and 8.4 Mbps downstream and a maximum of 640 Kbps upstream. The maximum cable length for ADSL is 18,000 feet.
Rate-Adaptive Digital Subscriber Line (RADSL): can adapt its transmission speed according to the link length and signal quality. Transmission rates vary from 640 Kbps to 2.2 Mbps downstream and 272 Kbps to 1.088 Mbps upstream, with a connection length of 10,000 to 18,000 feet. Like ADSL, it is used for Internet/intranet access, remote LAN access, virtual private networking, video-on-demand, and voice-over-IP; the transmission speed is dynamically adjusted to match the line.
ADSL Lite: a low-end Internet connection solution, ADSL Lite has a transmission rate of up to 1 Mbps downstream and up to 512 Kbps upstream, with a connection length of 18,000 feet.
Very-high-bit-rate Digital Subscriber Line (VDSL): has a transmission rate of 12.96 to 51.84 Mbps downstream and 1.6 to 2.3 Mbps upstream, with a connection length of 1,000 to 4,500 feet.
ISDN Digital Subscriber Line (IDSL): has a transmission rate of up to 144 Kbps full-duplex and a connection length of 18,000 feet.
Cable Television (CATV): CATV uses broadband transmission, which allows one network medium to carry multiple signals simultaneously. CATV can stream Internet data as fast as TV signals; however, the connections are not secure. Because the bandwidth is shared, users who access the network with Windows can see others on the same segment, which obviously compromises security; firewalls help resolve this issue. A CATV connection can't be used to connect a PC to an office LAN.
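For exam review, the DSL variants above can be condensed into a quick-reference table; the figures are repeated from the text above (not independently verified), and the helper function is our own.

```python
# Quick-reference summary of the DSL variants described in the text.
DSL_VARIANTS = {
    "HDSL":      {"down": "1.544 Mbps",       "up": "1.544 Mbps",       "max_length_ft": 12000},
    "SDSL":      {"down": "1.544/2.048 Mbps", "up": "same as down",     "max_length_ft": 10000},
    "ADSL":      {"down": "1.544-8.4 Mbps",   "up": "up to 640 Kbps",   "max_length_ft": 18000},
    "RADSL":     {"down": "0.64-2.2 Mbps",    "up": "0.272-1.088 Mbps", "max_length_ft": 18000},
    "ADSL Lite": {"down": "up to 1 Mbps",     "up": "up to 512 Kbps",   "max_length_ft": 18000},
    "VDSL":      {"down": "12.96-51.84 Mbps", "up": "1.6-2.3 Mbps",     "max_length_ft": 4500},
    "IDSL":      {"down": "144 Kbps duplex",  "up": "144 Kbps duplex",  "max_length_ft": 18000},
}

def max_loop_length(variant: str) -> int:
    """Return the maximum cable length in feet for a DSL variant."""
    return DSL_VARIANTS[variant]["max_length_ft"]
```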
Remote Access Requirements to Establish a Network Connection
The following is required to establish a remote network connection:
Common protocols – connected computers must share common protocols at the data-link layer and above. Both computers require a data-link layer protocol that supports point-to-point connections, such as PPP or SLIP, and they must share a common network/transport layer protocol, such as TCP/IP or IPX.
TCP/IP configuration – when using TCP/IP to connect to the host network, a system must have an IP address and the other configuration parameters required for that network. Most remote networking solutions rely on DHCP to assign these configuration parameters automatically.
Host and remote software – the remote system uses a client program that can set up the connection.
Security – protective mechanisms are required for the host computer and the other systems on its network to administer access to network resources.
All About VPNs: Applications and Remote Access
A Virtual Private Network (VPN) allows a remote user to create a VPN connection to the corporate network over the Internet. Users can dial up a local Internet service provider (ISP) rather than making long-distance calls to connect to a corporate network. Through this connection, a virtual private network is established between the remote user and the corporate VPN server across the Internet. Either a dedicated line or dial-up can be used to reach the ISP when establishing a VPN connection. VPNs provide secure remote connections across a public network (the Internet), protecting confidentiality, preventing corruption of data, and preserving its authenticity. VPN connections are therefore remote access connections that provide the same level of security available on a LAN. This is accomplished through tunneling, which transmits data from one computer to another by encapsulating the data packets in an additional header. The additional header contains routing information so that the encapsulated payload can be transmitted across the public network. Both the client and the tunnel server must use the same tunneling protocol for a tunnel to be established. Tunneling technology can be based on either a Layer 2 or a Layer 3 tunneling protocol. It's important to remember that tunneling by itself is not a true substitute for encryption; when the highest level of security is required, the strongest possible encryption should be used within the VPN itself. There are two types of configurations for a VPN:
A client-to-gateway VPN: used when a remote user connects to a private network using a VPN. The user can link to the network with any dial-up provider or from a separate LAN with Internet access rather than over the phone system.
A gateway-to-gateway VPN: used to establish a permanent link between two VPN servers on different networks, each with its own Internet connectivity.
There are various VPN applications based on these basic VPN configurations and the network infrastructure:
Remote Access – based on a client-to-gateway VPN
Intranet Access – based on a gateway-to-gateway VPN
Extranet Access – based on a gateway-to-gateway VPN
Remote Access VPN: organizations will often create their own VPN connections via the Internet to give remote users private access to a shared network through their ISP(s). Email messaging and software applications can also be accessed through a remote VPN. Through analog, ISDN, DSL, cable, dial-up, and mobile IP technologies, VPNs are deployed over large network infrastructures.
Intranet Access VPN: gateway-to-gateway VPNs let an organization extend its internal network to remote branch offices. These VPNs establish a secure point of contact between two end devices, usually two routers. A user on a remote LAN connected to the local router can communicate with the other LAN via this connection. Access to particular data through an intranet VPN is governed by the organization's security policy. Data can also be protected by using dedicated circuits; Frame Relay, Asynchronous Transfer Mode (ATM), and point-to-point circuits are examples of VPN infrastructures.
Extranet Access VPN: extranet access VPNs are similar to intranet access VPNs but allow remote access for agents, business partners, and other pertinent associates. Extranet VPNs enable these connections to the organization's secured network, using a combination of remote access and intranet access infrastructures. The distinction is the defined authorizations assigned to these users: some degree of security is needed to administer access to the network, protect network resources, and prohibit unauthorized users from accessing the information.
Integrating VPN in a Routed Intranet: organizations work with a range of data that must be treated differently according to confidentiality. Since highly sensitive data requires the greatest degree of protection, it needs to be separated and isolated from the rest of the organization's network. This can create accessibility problems for users not physically connected to the isolated LAN. VPNs allow such a LAN to be physically connected to the rest of the organization's network but separated by a VPN server; the VPN server functions as an access control mechanism rather than routing traffic freely to the rest of the network. Users therefore require authorization to form a VPN connection with the separated server to access protected data, and encryption can be used when users communicate across the VPN to ensure security.
VPN and Remote Access Protocols: tunneling protocols are used to encrypt packets of data and transmit them through a public network. The two widely used VPN protocols are the Point-to-Point Tunneling Protocol (PPTP) and the Layer 2 Tunneling Protocol (L2TP); IP Security (IPSec) is also used for encryption. Other remote access protocols include the Point-to-Point Protocol (PPP), RADIUS, and TACACS.
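The tunneling idea running through this section, wrapping the original packet in a new outer header that carries the public-network routing information, can be sketched in a few lines. The header layout here is invented purely for illustration; real protocols such as PPTP and L2TP define their own binary formats.

```python
# Conceptual sketch of tunnel encapsulation: the original packet becomes
# the payload of a new packet with an outer routing header. The "TUN"
# header format is hypothetical, not any real protocol.

def encapsulate(inner_packet: bytes, tunnel_src: str, tunnel_dst: str) -> bytes:
    """Wrap a packet in an outer header naming the tunnel endpoints."""
    outer_header = f"TUN {tunnel_src}->{tunnel_dst}|".encode()
    return outer_header + inner_packet

def decapsulate(outer_packet: bytes) -> bytes:
    """Strip the outer header and recover the original packet."""
    _header, _sep, payload = outer_packet.partition(b"|")
    return payload
```

Routers on the public network only ever look at the outer header; the inner packet (which would also be encrypted in a real VPN) is carried opaquely end to end.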
Point-to-Point Protocol (PPP)
Point-to-Point Protocol (PPP) is used to implement TCP/IP over point-to-point connections. Its basic function is encapsulating Network Layer (Layer 3) protocol information over point-to-point links. PPP uses its own framing method, which allows encapsulation of any Layer 3 protocol. Because PPP uses a point-to-point structure, no mapping of protocol addresses is required. PPP utilizes the Link Control Protocol (LCP) to communicate between a PPP client and host. LCP tests the link between client and PPP host and specifies PPP client configuration. PPP has several capabilities that make it adaptable to various setups:
Multiplexing of network layer protocols
Link configuration
Link quality testing
Authentication
Header compression
Error detection
Link parameter negotiation
For authentication, PPP has a number of options, including: Password Authentication Protocol (PAP), Shiva Password Authentication Protocol (SPAP), Challenge Handshake Authentication Protocol (CHAP), and Extensible Authentication Protocol (EAP). These protocols offer varying levels of protection. All require an established username and password, which can be configured on the router or on a TACACS or RADIUS authentication server.
Password Authentication Protocol (PAP): a clear-text exchange of username and password data. After a user dials in, a username request is sent. After a username is entered, a password request is sent out. All communications are transmitted in clear text with no encryption. PAP is a one-way authentication between the router and the host.

Shiva Password Authentication Protocol (SPAP): a reversible encryption mechanism. A client uses SPAP when connecting to a Shiva LANRover. This authentication method is more secure than PAP but less secure than CHAP.

Challenge Handshake Authentication Protocol (CHAP): enhanced security compared with PAP. It uses an encrypted challenge-response authentication method. The remote router holds the usernames and passwords, but they are not transmitted as they are with PAP. With CHAP, when a user dials in, the access server issues a challenge message to the remote user after the PPP link is established. The remote end responds with a one-way hash, generally computed with MD5. If the value of the hash matches, authentication is granted; if it doesn't match, the connection is ended. CHAP sends out a challenge every two minutes for the duration of the connection, and if authentication fails at any time, the connection is ended. The frequency of challenges is administered by the access server.

Extensible Authentication Protocol (EAP): an authentication protocol that can be extended with additional authentication methods installed separately. It provides a flexible authentication mechanism to approve a remote access connection. Types of EAP:
EAP with MD5-Challenge uses the same challenge handshake protocol as PPP-based CHAP, but the challenges and responses are sent as EAP messages. A typical use for MD5-Challenge is to authenticate non-Windows remote access clients. EAP with MD5-Challenge does not support encryption of connection data.

Protected Extensible Authentication Protocol (PEAP) is primarily used to authenticate wireless users with a username and password.

Extensible Authentication Protocol-Transport Layer Security (EAP-TLS) is used to enable remote access authentication with a smart card or a public key certificate.
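The CHAP exchange described above can be sketched in a few lines. Per the CHAP specification (RFC 1994), the response is the MD5 hash of the session identifier, the shared secret, and the challenge, so the password itself never crosses the link. The identifier and secret values below are illustrative:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Access server: issue a random challenge after the PPP link comes up.
challenge = os.urandom(16)
identifier = 1

# Remote client: hash the challenge with the shared secret and reply.
secret = b"shared-password"
response = chap_response(identifier, secret, challenge)

# Server: recompute the hash from its own copy of the secret and compare.
authenticated = response == chap_response(identifier, secret, challenge)
```

Because only the hash travels over the link, an eavesdropper never sees the secret, and the random per-session challenge prevents replaying a captured response.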
Point-to-Point Tunneling Protocol (PPTP)
Point-to-Point Tunneling Protocol (PPTP) is an extension of PPP that takes advantage of the authentication, compression, and encryption mechanisms of PPP. PPTP is the most commonly used protocol for dial-up remote access, but only for single client-to-server connections, as it allows just one point-to-point connection per session. It encapsulates PPP frames into IP datagrams for transmission over an IP network. PPTP tunnels the PPP frame within a Generic Routing Encapsulation (GRE) header, carried as IP protocol 47, and maintains its control connection over TCP port 1723. For PPTP traffic to pass through a firewall, the firewall must allow TCP port 1723 and IP protocol 47. PPTP is commonly used by Windows clients for asynchronous (nonparallel) communications. It allows IP, IPX, or NetBEUI traffic to be encrypted and then encapsulated in an IP header. PPTP uses Microsoft Point-to-Point Encryption (MPPE) and compression from PPP. PPTP tunnels must be established by using the same authentication mechanisms as PPP connections (PAP, CHAP, and EAP).
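The firewall requirement above (TCP port 1723 for the control channel, IP protocol 47 for the GRE-encapsulated data) can be expressed as a toy packet filter. The function and field names are illustrative, not a real firewall API:

```python
# Toy filter for the two openings PPTP needs through a firewall.
# IP protocol numbers: 6 = TCP, 47 = GRE.

def allows_pptp(ip_protocol, dst_port=None):
    if ip_protocol == 47:                      # GRE tunnel carrying PPP frames
        return True
    if ip_protocol == 6 and dst_port == 1723:  # PPTP control connection
        return True
    return False
```

Everything else is denied by this rule set; a real firewall would combine these two rules with the rest of its policy.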
Layer 2 Tunneling Protocol (L2TP)
Layer 2 Tunneling Protocol (L2TP) is a hybrid of PPTP and Layer 2 Forwarding (L2F). It uses the same authentication mechanisms as PPTP, but its tunneling protocol is more advanced because it relies on IPSec for encryption. Like PPTP, it uses a single point-to-point connection per session. L2TP also provides encryption for IP, IPX, or NetBEUI traffic and transmits it over any medium that supports point-to-point datagram delivery, such as IP, X.25, Frame Relay, or ATM networks. This blending of L2TP and IPSec is known as L2TP/IPSec. When using IP as its datagram transport, L2TP can be used as a tunneling protocol over the Internet. L2TP tunnels must be authenticated by using the same authentication mechanisms as PPP connections. PPP encryption is not used because it does not meet L2TP's security requirements: it can provide confidentiality but not per-packet authentication, integrity, or replay protection. Instead, data encryption is provided by IPSec, which uses Data Encryption Standard (DES) or Triple DES (3DES) with encryption keys produced by IPSec's Internet Key Exchange (IKE) negotiation process. L2TP/IPSec uses the source and destination IP addresses for authentication and places this information inside the encrypted part of the packet, so NAT servers cannot modify those addresses. NAT Traversal (NAT-T), a newer function of L2TP/IPSec, enables you to use L2TP to connect to an L2TP server when the client is located behind a NAT server. However, the client, the server, and the NAT server must all support NAT-T. New VPN server installations should use L2TP rather than PPTP.
IP Security Protocol (IPSec)
IPSec is a set of standards that support protected transfer of information across an IP internetwork. The IPSec Encapsulating Security Payload (ESP) tunnel mode supports the encapsulation and encryption of entire IP datagrams for secure transfer across a private or public IP internetwork. IPSec requires the two computers engaged in communications to negotiate the highest common security policy. The source of communication uses IPSec to encrypt the data before transmitting it across the network. Once the data is received, the destination computer decrypts the information before passing it to the destination process. This encryption and decryption process is performed transparently.
Remote Authentication Dial-In User Service (RADIUS) and DIAMETER
Remote Authentication Dial-In User Service (RADIUS) is a client/server-based system that supports authentication, authorization, and accounting (AAA) services for remote user access while safeguarding the system from unauthorized access. RADIUS provides centralized user administration by keeping a record of all user profiles in one location that all remote services can access. To authenticate against a RADIUS server, user credentials are required. That information is encrypted and sent to the RADIUS server in an Access-Request packet. Once the credentials are received, the RADIUS server accepts, rejects, or challenges the information. If the credentials are accepted, the RADIUS server sends an Access-Accept packet and the user is authenticated. If the credentials are rejected, the RADIUS server sends an Access-Reject packet. If the information is challenged, it sends an Access-Challenge packet requesting additional information from the user, which the RADIUS server will use for authentication. For remote dial-up access, RADIUS also supports callback security, where the server terminates the connection and establishes a new connection by dialing a predefined telephone number attached to the user's modem. Callback security works as an extra layer of protection against unwarranted access over dial-up connections. Because of the success of RADIUS, DIAMETER was developed. An upgraded version of RADIUS, DIAMETER is designed for use with all methods of remote connectivity, not just dial-up.
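The Access-Request / Accept / Reject / Challenge decision described above can be sketched as a toy server-side function. The credential store and the rule for when to issue a challenge are invented for illustration; in a real deployment the request arrives as an encrypted Access-Request packet:

```python
# Toy RADIUS-style decision logic modeling the server's three verdicts.

USERS = {"alice": "s3cret"}    # assumed central credential store
NEEDS_MORE_INFO = {"bob"}      # users who must answer a challenge first

def radius_decide(username, password):
    if username in NEEDS_MORE_INFO:
        return "Access-Challenge"   # request additional information
    if USERS.get(username) == password:
        return "Access-Accept"      # user is authenticated
    return "Access-Reject"          # credentials rejected
```

Keeping this decision on one central server is what gives RADIUS its single point of user administration.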
Terminal Access Controller Access Control System
The three versions of Terminal Access Controller Access Control System (TACACS) are TACACS, Extended TACACS (XTACACS), and TACACS+. Each version authenticates users and prohibits access to those without a verified username/password pairing. TACACS combines the authentication and authorization functions. XTACACS allows the separation of the authentication, authorization, and auditing functions, giving administrators finer control over its deployment. TACACS+ also allows the division of authentication, authorization, and auditing, and additionally provides two-factor authentication. The authentication process with TACACS is similar to RADIUS, and the two parallel each other in functionality. However, RADIUS follows an Internet standard, while TACACS is a proprietary protocol, a difference that has made TACACS less popular than RADIUS.
E-mail is one of the most commonly used Internet services. Its infrastructure is a system of e-mail servers that use the Simple Mail Transfer Protocol (SMTP) to accept messages from clients and relay them to other e-mail servers, and e-mail clients that use Post Office Protocol version 3 (POP3) or Internet Message Access Protocol (IMAP) to send and retrieve e-mail to and from the server. These protocols give us the efficiency we have become used to when sending and receiving e-mail; however, they are not secure and lack adequate mechanisms to ensure confidentiality, integrity, and availability. There are methods to incorporate secure e-mail messaging. A security policy is a critical measure for sustaining e-mail security. The e-mail security policy must define acceptable use policies for e-mail, specifying the activities that can and cannot be performed over the organization's e-mail infrastructure. This permits the transmission of work-focused messaging but curtails transmission of personal e-mail. Illegal, immoral, or offensive content can also be prohibited, as can personal business conducted over e-mail. Access control over e-mail will ensure that users have access only to their own inbox and e-mail archive databases. The mechanisms and processes used to control the organization's e-mail infrastructure should be clarified. End users don't necessarily have to know the mechanics of e-mail management, but they need to be informed of the policies that define what is considered private communication, whether e-mail is saved and stored in archives for future reference, and whether e-mail is subject to review for violations by an auditor.
Email Security Issues
POP3, IMAP, and SMTP are the protocols that support e-mail. These protocols provide no security: no encryption, no source verification, and no integrity checking. Because transmitted e-mail is not encrypted, interception and unauthorized access are real risks. E-mail protocols offer no confirmation of a valid sender or source of a message, so e-mail address spoofing is readily accomplished. E-mail headers can be altered at their source and during transmission. It is also possible to deliver e-mail directly to a user's inbox on an e-mail server by connecting directly to the server's SMTP port. Integrity checks are not incorporated into e-mail messaging to ensure that a message was not altered during transmission. Additionally, e-mail itself can be used as an attack method. Attackers most commonly use attachments to deliver malicious code, such as viruses, worms, and Trojan horses. Mailbombing is another method: this denial of service (DoS) attack dispatches large quantities of e-mail messages to a user's inbox or through an SMTP server, flooding the system with messages and potentially consuming storage capacity or processing capability. Lastly, SPAM is perhaps the mildest form of e-mail attack, but it creates a headache for the recipient and abuses system resources both locally and over the Internet. Though SPAM blockers exist, it is difficult to eliminate spam completely because the source of the messages is usually spoofed.
Email Security Solutions
There are several protocols, services, and solutions that can be implemented to add security to an existing e-mail infrastructure, including S/MIME, MOSS, PEM, and PGP.

Secure Multipurpose Internet Mail Extensions (S/MIME): implements e-mail authentication through X.509 digital certificates and privacy through Public Key Cryptography Standard (PKCS) encryption. Two types of messages can be created using S/MIME: signed messages and enveloped messages. A signed message offers integrity and sender authentication. An enveloped message offers integrity, sender authentication, and confidentiality. All major e-mail vendors support S/MIME.

MIME Object Security Services (MOSS): provides authenticity, confidentiality, integrity, and non-repudiation for e-mail messages. It uses the Message Digest 2 (MD2) and MD5 algorithms; Rivest, Shamir, and Adleman (RSA) public key cryptography; and Data Encryption Standard (DES) to support authentication and encryption services.

Privacy Enhanced Mail (PEM): an e-mail encryption mechanism that provides authentication, integrity, and confidentiality. It uses RSA, DES, and X.509.

Pretty Good Privacy (PGP): an asymmetric public-private key system that uses the IDEA algorithm to encrypt, decrypt, and digitally sign files and e-mail messages. It is not a standard but is widely supported on the Internet. Benefits of PGP:
PGP creates your key pair, which is your public and private key. PGP allows you to store other users’ public keys on a local key ring. The sender uses the recipient’s public key to encrypt messages. The recipient uses his or her own private key (or secret key) to decrypt those messages.
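The public-key mechanics behind PGP can be illustrated with textbook RSA on deliberately tiny numbers. Real keys are hundreds of digits long, and PGP actually uses the recipient's public key to wrap a symmetric session key rather than encrypting the message directly; this sketch shows only the encrypt-with-public, decrypt-with-private relationship:

```python
# Textbook RSA with toy parameters -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                  # modulus, part of both keys
e = 17                     # public exponent
d = 2753                   # private exponent: (e * d) % ((p-1)*(q-1)) == 1

message = 65                          # plaintext encoded as a number < n
ciphertext = pow(message, e, n)       # sender uses the recipient's PUBLIC key
recovered = pow(ciphertext, d, n)     # recipient uses his or her PRIVATE key
```

Only the holder of `d` can reverse the exponentiation, which is exactly the property the text describes: the public key encrypts, and only the matching private key decrypts.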
Voice Communications in Network Security
With the merging of voice, data, and video through technologies such as Voice over IP (VoIP), securing voice communications has become a network security concern. When voice communications take place within a network infrastructure, issues of confidentiality, integrity, and authentication are critical. Private Branch Exchange (PBX) and Plain Old Telephone Service (POTS) voice communications are inherently vulnerable to interception, eavesdropping, and tapping. Physical security is required to maintain control over voice communications within the physical areas of the organization; external security of voice communications is primarily the responsibility of the telephone company. PBX systems can be abused by attackers, known as "phreakers," to evade toll charges and conceal their identity. Phreakers can potentially gain access to personal voicemail, reroute or delete messages, and redirect inbound and outbound calls. Security measures to block phreaking include logical or technical controls, administrative controls, and physical controls:
Replace remote access or long-distance calling through the PBX with a credit card or calling card system.
Restrict dial-in and dial-out features to authorized users only.
Use unpublished phone numbers that are outside the prefix block range of your voice numbers for your dial-in modems.
Block or disable any unassigned access codes or accounts.
Define an acceptable use policy.
Log and audit all activities on the PBX and review the audit trails regularly.
Disable maintenance modems and accounts.
Change all default configurations, especially passwords and capabilities related to administrative or privileged features.
Block remote calling.
Deploy Direct Inward System Access (DISA) technologies to reduce PBX fraud by external parties.

Tools used by phreakers are known as colored boxes:

Black boxes are used to manipulate line voltages to steal long-distance services. They are usually custom-built circuit boards with a battery and wire clips.
Red boxes are used to simulate the tones of coins being deposited into a pay phone. They are usually small tape recorders.
Blue boxes are used to generate 2600 Hz tones to interact directly with telephone network trunk systems. This could be a whistle, a tape recorder, or a digital tone generator.
White boxes are used to control the phone system. A white box is a dual-tone multifrequency (DTMF) generator. It can be a custom-built device or one of the pieces of equipment that most telephone repair personnel use.
Encryption in Cryptography
Algorithms are the basis of cryptography. Encryption, a type of cryptography, refers to the mechanism of scrambling information so it cannot be deciphered or read by an unauthorized observer. An algorithm is a procedure for taking the original message, called plaintext, and using instructions combined with a message key to create a scrambled message, referred to as ciphertext. A cryptographic key is a piece of data used to encrypt plaintext to ciphertext, ciphertext to plaintext, or both. The word crypto derives from the Greek kruptos, meaning hidden. The goal of cryptography is to conceal information so that only the intended recipients can "unhide" it. Concealing the information is called encryption; unhiding it is called decryption. There are two subclasses of algorithms: block ciphers and stream ciphers. Block ciphers process "blocks" or chunks of text in series, while stream ciphers encrypt data one bit or byte at a time.

Encryption: Encryption is a form of cryptography that "scrambles" plaintext into unintelligible ciphertext. Encryption is the basis of security measures such as digital signatures, digital certificates, and the public key infrastructure (PKI). Computer-based encryption techniques use keys to encrypt and decrypt data. A key is a variable, a large binary number. Key length is measured in bits, and the more bits in a key, the more challenging the key is to "crack." A key is only one aspect of the encryption process; it is combined with an encryption algorithm to produce the ciphertext. Encryption techniques are classified as either symmetric or asymmetric, depending on the number of keys that are used.
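The claim that more bits make a key harder to crack is easy to quantify: every additional bit doubles the number of keys a brute-force attacker must try. A quick arithmetic check:

```python
# Keyspace sizes for common key lengths.
keyspace_56 = 2 ** 56     # DES-sized key
keyspace_57 = 2 ** 57     # one extra bit
keyspace_128 = 2 ** 128   # AES-sized key

# One extra bit exactly doubles the search space...
doubling = keyspace_57 == 2 * keyspace_56

# ...so a 128-bit key is 2**72 times harder to brute-force than a 56-bit key.
ratio = keyspace_128 // keyspace_56
```

This exponential growth is why moving from 56-bit DES to 128-bit or longer keys puts exhaustive search out of practical reach.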
A symmetric algorithm uses the same key for encrypting and decrypting data. Symmetric algorithms supply confidentiality by encrypting data or messages. Previous and current symmetric key encryption algorithms include Data Encryption Standard (DES), Triple DES (3DES), Advanced Encryption Standard (AES), International Data Encryption Algorithm (IDEA), Blowfish, and RC4. Advantages of symmetric keys:

Speed: The algorithms used with symmetric encryption are fast, so they interfere less with system performance and are particularly effective for encrypting large amounts of data.
Strength: Symmetric algorithms are hard to break without the correct key. Well-tested symmetric algorithms such as 3DES and AES are almost impossible to decipher without the correct key. Encrypted data can also be encrypted a second or even third time.

Disadvantages of symmetric keys:

Poor key distribution mechanism: There is no simple method to securely distribute a shared secret, so wide-scale deployment of symmetric keys is difficult.
Single key: There is a single key or shared secret, and when it is compromised, the impact is extensive. Because that one key may be shared with some or many, symmetric keys are not suited to providing integrity, authentication, or nonrepudiation.

Characteristics of specific symmetric algorithms:
DES: 56-bit key, U.S. government standard until 1998, not considered strong enough for today's standards, relatively slow.
Triple DES: Performs three DES operations, equivalent of a 168-bit key, more secure than DES, widely used, relatively slow.
AES: Variable key lengths, latest standard for U.S. government use, replacing DES.
IDEA: 128-bit key, requires licensing for commercial use.
Blowfish: Variable key length, free algorithm, extremely fast.
RC4: Variable key length, stream cipher, effectively in the public domain.
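The defining property of every symmetric cipher listed above, one shared key for both encryption and decryption, can be demonstrated with a toy XOR stream cipher. Unlike DES or AES, this construction is trivially breakable and is shown for illustration only:

```python
import itertools

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR each byte against a repeating key. NOT secure."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

key = b"shared-secret"
ciphertext = xor_stream(b"attack at dawn", key)   # encrypt with the key
plaintext = xor_stream(ciphertext, key)           # the SAME key decrypts
```

Applying the same operation with the same key reverses the encryption, which is exactly why securely distributing that one shared secret is the hard part of symmetric cryptography.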
Asymmetric algorithms use different keys to encrypt and decrypt data. An example of asymmetric encryption is public key cryptography. Public key cryptography uses two keys that form a key pair: the public key and the private key. The key that encrypts the plaintext cannot be used to decrypt the ciphertext; the public key encrypts the plaintext, and the private key decrypts the ciphertext.

Public key: provided to those who send you encrypted data.
Private key: a key in the sole possession of the user.

When a plaintext message is encrypted using the public key, only the possessor of the private key can decrypt the ciphertext. When a plaintext message is encrypted using the private key, it can be decrypted by anyone who has the public key, and there is certainty that the message originated with the possessor of the private key. Asymmetric keys provide authentication, integrity, and nonrepudiation. They can also support confidentiality when used for key management. There are pros and cons to using asymmetric keys:
When using a public key and a private key, the public key can be given to anyone who intends to send encrypted information, but only the recipient can decrypt it. This helps preserve data confidentiality. A private key can be used to produce a digital signature, which verifies the identity of the sender (the possessor of the private key). This helps provide authentication and nonrepudiation. The main drawback of asymmetric keys is speed: asymmetric algorithms are slower than symmetric algorithms because of the complexity involved in encrypting and decrypting data, so they are not efficient at providing confidentiality for large amounts of data. Features of specific asymmetric algorithms:

RSA: Variable-length key, de facto standard for public key encryption.
Diffie-Hellman: Variable-length key, used to securely establish a shared secret.
Elliptic curve cryptography: Variable-length key, not yet fast enough for widespread implementation.

Cryptography is used to secure confidentiality, integrity, identification and authentication, and nonrepudiation. Still, it is possible for an attacker to decrypt the information given enough time and persistence. The strength of symmetric and asymmetric keys is derived from the length of the key and the algorithm used to encrypt the data.

DES and Triple DES: DES uses a single 64-bit key (56 bits of data and 8 bits of parity) and operates on data in 64-bit chunks. Each round consists of a substitution phase, wherein the data is substituted with pieces of the key, and a permutation phase. Substitution occurs in S-boxes; permutation, also known as diffusion, occurs in P-boxes. Both processes occur in the "F module" of the cipher. DES security is rooted in the fact that substitution operations are non-linear. Permutation operations supply another layer of security by scrambling the already partially encrypted message.
Triple DES (3DES) is a technique that applies the DES cipher multiple times to enhance its security. In 3DES, two or three 56-bit DES subkeys are combined to form a single 112-bit or 168-bit 3DES key. The result is ciphertext that resists all currently known brute-force attacks and techniques. For 112-bit security, two different DES keys are used and one is repeated; for 168-bit security, three different DES keys are chained together.
Advanced Encryption Standard (Rijndael)
Because of its small 56-bit key, DES can no longer defend against coordinated brute-force attacks using modern cryptanalysis. The National Institute of Standards and Technology (NIST) has appointed the Advanced Encryption Standard (AES) as the authorized Federal Information Processing Standard for all non-confidential communications by the U.S. government, and NIST anticipates applications in the private sector as well. Rijndael was chosen by NIST from a group that included four other finalists: MARS, RC6, Serpent, and Twofish. Rijndael also defends successfully against side-channel attacks such as power- and timing-based attacks, which monitor the time it takes to encrypt a message or the slight changes in power usage during the encryption and decryption processes. These attacks are sophisticated enough that attackers can obtain the keys used by a device. Rijndael uses iterative rounds, like the International Data Encryption Algorithm (IDEA).

A hashing algorithm is used to secure data integrity. A hash is a one-way mathematical function (OWF) that creates a fixed-size value. Common hash algorithms currently in use:
MD4: Produces a 128-bit message digest, very fast, appropriate for medium-security usage.
MD5: Produces a 128-bit message digest, fast, more secure than MD4, and widely used.
SHA-1: Produces a 160-bit message digest, standard for the U.S. government, but slower than MD5.
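The digest sizes listed above are easy to confirm with Python's standard hashlib module (MD4 is omitted because hashlib does not guarantee its availability):

```python
import hashlib

msg = b"The quick brown fox jumps over the lazy dog"

md5_digest = hashlib.md5(msg).digest()    # 128-bit digest
sha1_digest = hashlib.sha1(msg).digest()  # 160-bit digest

# A hash is fixed-size regardless of input length.
long_digest = hashlib.sha1(b"x" * 100_000).digest()
```

The one-way property means there is no inverse function: given only a digest, the original message cannot be recovered except by guessing inputs.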
Components of Public Key Infrastructure
Using asymmetric key pairs is easy to apply; however, when use expands beyond a small community, potential vulnerabilities emerge. If a private key is compromised, it is difficult to locate and remove that key. The security infrastructure developed to address these problems is known as a public key infrastructure (PKI). PKI uses asymmetric key pairs and combines software, encryption technologies, and services to safeguard the security of communications. RFC 2459 defines the X.509 PKI, the PKI defined for use on the Internet, which incorporates certificates, certification authorities (CAs), certificate management tools, and certificate-enabled applications.

Components of a PKI: A PKI uses public key encryption technologies to bind public keys to their owners and to assist with safe distribution of keys across networks. PKI provides a range of services, technologies, protocols, and standards that allow the distribution and management of a strong and scalable information security system. Setting up a PKI allows an organization to conduct business electronically with these assurances:
The person or process sending a transaction is the actual originator. The person or process receiving a transaction is the actual receiver. The integrity of the data has not been compromised.
Digital Certificates: PKI
X.509 was developed from the X.500 standard. X.500 is a directory service standard that was ratified by the International Telecommunications Union (ITU). The objective was to develop an accessible, easy-to-use electronic directory of people available to all Internet users. The X.500 directory standard specifies a common root of a hierarchical tree: picture an upside-down tree, with the root at the top level and all other containers below it. A CN= prefix marks a common name, C= precedes a country, and O= precedes an organization. Each X.500 local directory is considered a directory system agent (DSA). A DSA can represent either a single organization or multiple organizations. Each DSA connects to the others through a directory information tree (DIT), a hierarchical naming scheme that provides the naming context for objects within a directory. X.509 is the standard used to define what makes up a digital certificate.
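The naming attributes above combine into a distinguished name that uniquely locates an entry in the tree. The person and organization in this sketch are invented examples:

```python
# Assemble an X.500-style distinguished name from the attribute types
# described above: CN (common name), O (organization), C (country).

def distinguished_name(cn, o, c):
    return f"CN={cn},O={o},C={c}"

dn = distinguished_name("Jane Doe", "Example Corp", "US")
```

Reading the components right to left (country, then organization, then person) traces a path from the root of the tree down to the individual entry.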
Certificate Policies and Rules
A CA can issue a certificate for various reasons, but it must specify exactly what the certificate will be used for. The set of rules that describes how certificates may be used is called a certificate policy. The X.509 standard defines a certificate policy as "a named set of rules that indicates the applicability of a certificate." Different organizations have different security requirements: one organization may use digital certificates to secure e-mail, TestKing wants a digital certificate for their online store, and the Department of Defense wants a digital certificate to secure top-secret information on nuclear submarines. The certificate policy is a plaintext document that is assigned a unique object identifier (OID).

Certificate Practice Statements: It's vital to have a policy in place that conveys what is going to be done. A certificate practice statement (CPS) describes how the CA manages the certificates it issues. If a CA does not publish a CPS, users should consider finding another CA.

Revocation:
Certificates are rendered void when the information contained in the certificate is outdated or no longer trusted. This can occur when a company switches Internet Service Providers, relocates, or the administrator listed on the certificate changes. Some policies are technically oriented, while others outline the process followed to create and manage certificates; with security systems, it's important to make certain the CA has a policy covering each required item. One of the main reasons to revoke a certificate is that the private key has been compromised in some way. A compromised key should be revoked immediately, and all certificate users should be notified of the date from which the certificate is no longer valid. Once users are notified, the CA has the responsibility to immediately change the status of the certificate to revoked. The date of revocation must also be published, along with the last date that communications were considered trustworthy. When a certificate revocation request is sent to a CA, the CA must authenticate the request with the certificate owner. Once that is done, the certificate is revoked and notification is sent out. A certificate issued by a CA lists an expiration date that specifies how long the certificate is valid. If a certificate needs to be revoked before that date, the CA can be instructed to add the certificate to its certificate revocation list (CRL). When a certificate is revoked, the CA administrator must provide a reason code. PKI-enabled applications are designed to check a CA's updated CRL, and they will not operate if they cannot verify that the certificate has not been added to the CRL.
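The check a PKI-enabled application performs before trusting a certificate can be sketched as follows. The serial numbers, dates, and CRL contents are invented for illustration:

```python
import datetime

# Hypothetical CRL: serial number -> (reason code, revocation date).
CRL = {
    "1A2B3C": ("KeyCompromise", datetime.date(2024, 3, 1)),
}

def certificate_trusted(serial, not_after, today):
    """Reject certificates that are expired or appear on the CRL."""
    if today > not_after:   # past the expiration date listed on the cert
        return False
    if serial in CRL:       # explicitly revoked by the CA before expiry
        return False
    return True
```

Expiration is handled automatically by the date on the certificate itself; the CRL exists precisely for revocations that must take effect before that date.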
PKI Standards and Protocols
Without standards and protocols, PKI would become unsustainable. The Public-Key Cryptography Standards (PKCS) are established protocols used for securing the exchange of information through PKI. The PKCS standards were developed by RSA Laboratories:

PKCS #1: RSA Cryptography Standard outlines the encryption of data using the RSA algorithm. Its purpose is the development of digital signatures and digital envelopes. PKCS #1 also describes syntax for RSA public keys and private keys: the public-key syntax is used for certificates, while the private-key syntax is used for encrypting private keys. (#2 and #4 were merged into #1.)

PKCS #3: Diffie-Hellman Key Agreement Standard outlines the use of the Diffie-Hellman Key Agreement, a method of sharing a secret key between two parties. The secret key is used to encrypt ongoing data transfer between the two parties. Whitfield Diffie and Martin Hellman developed the Diffie-Hellman algorithm in the 1970s as the first asymmetric cryptographic system. Diffie-Hellman overcomes the issues of symmetric key systems because management of the keys is less difficult.

PKCS #5: Password-Based Cryptography Standard defines a method for encrypting a string with a secret key that is derived from a password. The result of the method is an octet string (a string of 8-bit bytes).

PKCS #6: Extended-Certificate Syntax Standard deals with extended certificates, which are made up of the X.509 certificate plus additional attributes. The additional attributes and the X.509 certificate can be verified using a single public-key operation. The issuer that signs the extended certificate is the same as the one that signs the X.509 certificate.

PKCS #7:
Cryptographic Message Syntax Standard is the foundation for the Secure/Multipurpose Internet Mail Extensions (S/MIME) standard. It is also compatible with Privacy-Enhanced Mail (PEM) and can be used in several different architectures of key management.

PKCS #8: Private-Key Information Syntax Standard describes a method of communicating private-key information that includes the use of public-key algorithms and additional attributes. PKCS #8 is primarily used for encrypting private keys when they are being transmitted between computers.

PKCS #9: Selected Attribute Types defines the types of attributes for use in extended certificates (PKCS #6), digitally signed messages (PKCS #7), and private-key information (PKCS #8).

PKCS #10: Certification Request Syntax Standard describes syntax for certification requests. A certification request consists of a distinguished name, a public key, and additional attributes. Certification requests are sent to a CA, which then issues the certificate.

PKCS #11: Cryptographic Token Interface Standard specifies an application program interface for token devices that hold encrypted information and perform cryptographic functions, such as smart cards and USB pigtails.

PKCS #12: Personal Information Exchange Syntax Standard specifies a portable format for storing or transporting a user's private keys and certificates. It ties into both PKCS #8 (communication of private-key information) and PKCS #11 (Cryptographic Token Interface Standard). Portable formats include diskettes, smart cards, and Personal Computer Memory Card International Association (PCMCIA) cards.

PKI standards and protocols are "living documents," meaning they are fluid and always changing. Additional standards are always being suggested, but before they can be recognized as standards they are put through extensive testing and scrutiny.
Key Management Life Cycle
Certificates and keys have a finite lifespan, and various factors determine how long a particular key remains valid: a key may be compromised or revoked, and every key carries an expiration date. As is the case with a driver's license or credit card, keys are considered valid for a finite amount of time. Once that time period has expired, the key must be renewed or replaced. Centralized versus Decentralized Keys: PKI applications use different types of key management. The hierarchical model uses centralized key management, with all of the public keys stored within one location. Older applications of PGP used decentralized key management, because keys were contained in a user's key ring and no one entity was superior to another. The choice between centralized and decentralized key management correlates to the size of the organization. With older versions of PGP, you could only hold the keys of those PGP users you trusted. For larger organizations where thousands of employees are required to use digital signatures when communicating, managing PGP keys would be impractical. In either case (centralized or decentralized), a secure method of storing those keys must be established.
Software Storage of an Archived Key
Software storage of an archived key is where the key is kept on a disk or other type of removable media. When you need to provide another user with a key, you can copy the key to a floppy disk and use the copy to perform the operation. When the key is in use, it's transferred to active memory on the computer. To protect the integrity of the key, it can be stored in an approved cryptographic module. When the copy of the private key is no longer needed, the media that was used to copy it must be destroyed. Software storage is easier and less expensive, but it is also more vulnerable to compromise than a hardware solution.
Hardware Storage of a Key
Hardware storage of a key is its placement on a hardware storage medium, such as a smart card or hardware security module (HSM). HSMs also generate the keys on the hardware device itself as a substitute for transmitting a private key over a network connection or other medium. When a user is given a key, the smart card that holds the key is programmed and then given to the user. This method of key storage is very difficult to corrupt, but it requires specialized equipment, making it more costly than the software storage solution.
Private Key Protection
Private Key Protection: The storage of private keys in a secure location is mandatory when dealing with PKI. Many organizations take private keys for corporate CAs completely offline, store them in a secure place, and use them only when they need to generate a new key. Key Escrow: Private key escrow is a process where the CA maintains a copy of the private key associated with the public key that has been signed by the CA. This gives the CA full access to all information encrypted using the public key from a user's certificate. A corporate PKI solution usually incorporates a key escrow element. Employees are obligated to adhere to security policies that grant the corporation full access to all intellectual property generated by a user for the company as part of that person's terms of employment. A corporation is required to have the ability to access data an employee produces to maintain the operations of the business. Key escrow also helps an organization minimize the impact of lost or forgotten passwords.
Certification Expiration and Revocation List
When a certificate is created, it is stamped with Valid From and Valid To dates. The interval between these dates is the period during which the certificate and key pair are valid. Once a certificate's validity period has expired, it must be either renewed or destroyed. Certification Revocation List: The X.509 standard mandates that CAs publish CRLs. The basic information contained in a CRL is the revocation status of the certificates that the CA manages. There are several variations of a revocation list:
A simple CRL is a container that holds the list of revoked certificates. A simple CRL contains the name of the CA, the time and date the CRL was published, and when the next CRL will be published. A simple CRL is a single file that continues to grow over time; its size is kept in check because only information about each revoked certificate is included, not the certificate itself. Delta CRLs were created to handle issues that simple CRLs cannot: size and distribution. Although a simple CRL contains only certain information about each revoked certificate, it can still become a large file. In a Delta CRL configuration, a base CRL is sent out to all end parties to initialize their copies of the CRL. After the base CRL is sent out, updates known as deltas are sent out on a periodic basis to inform the end parties of changes.
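The base-plus-delta mechanism can be sketched with plain Python sets. A real CRL is a signed ASN.1 structure carrying revocation dates and reasons, so this only shows the distribution logic:

```python
# Base CRL published initially: the full set of revoked serial numbers.
base_crl = {"1001", "1002", "1003"}

def apply_delta(local_crl, delta):
    # A delta carries only the serials revoked since the base was
    # issued, so end parties merge it instead of re-downloading the
    # full list.
    return local_crl | set(delta)

def is_revoked(serial, crl):
    return serial in crl

# An end party initializes from the base, then applies a periodic delta.
crl = apply_delta(base_crl, ["1004", "1005"])
```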
The M of N Control Policy
This is a backup process for public and private key material across multiple systems or devices. It prevents any single party from re-creating the private and public key material from the backup: the key material is backed up and then mathematically distributed across several systems or devices. Usually three people are assigned specific, separate job responsibilities within different portions of the organization; this separation impedes attempts to recover keys without permission. The mathematical scheme supports up to 255 users for the splitting activity. Assuming a key can be used throughout its validity period without revocation, it is then renewed. Identity verification is not required to obtain a new certificate: if the certificate is in good standing and the key is renewed with the same CA, the old key can be used to sign the request for the new key, and trust between the renewer and the CA rests on the person's established credentials. Key update is a second type of renewal, in which a new key is produced by modifying the existing key. The process of key renewal depends on the user and CA requirements. The same process applies to a CA's key pair, as those keys undergo renewal as well; a CA can also use its old key to sign the new key. The PKI renewal process is performed by creating three new keys:
The CA produces another self-signed certificate. This time, the CA signs the new public key using the old private key that is about to expire. Next, the CA server signs the old public keys with the new private key. This is done to avoid an overlap between the new key activation and old key expiration. Lastly, the new public key is signed with the new private key. The reason for these steps is based on two important points: first, since a CA verifies the credentials of other parties, rigorous steps need to be implemented when renewing the CA's own certificate; second, creating numerous keys makes the changeover from old keys to new keys transparent to the end user. When a key pair and certificate have expired, they must be destroyed. If the key pair was used for digital signatures, the private key portion should be destroyed to prevent future signing attempts. Key pairs used for privacy purposes can be archived in case they are needed to decrypt archived data that was encrypted with them. The digital certificate must be added to the CRL as soon as the certificate is no longer valid. This occurs whether or not the private key is archived for future use. Depending on the sensitivity level, the extra step of notifying individuals who use the certificate of its invalid status may also be needed.
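The M of N splitting described at the start of this section is commonly realized with a threshold secret-sharing scheme such as Shamir's: any m of n shares recover the key material, while fewer reveal nothing. The sketch below is a toy implementation over a small prime field for illustration only, not something to protect real keys with:

```python
import random

# A Mersenne prime large enough for the demo; real schemes work over
# larger fields or byte-wise over GF(256).
P = 2**61 - 1

def split(secret, m, n):
    # A random polynomial of degree m-1 whose constant term is the
    # secret; each share is one point on the polynomial.
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, m=3, n=5)       # 5 custodians, any 3 suffice
recovered = combine(shares[:3])
```

Any three custodians can reconstruct the key, but no one or two of them can, which is exactly the separation the M of N control is meant to enforce.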
Key Pair Usage
Key pairs are used in a range of functions. With most PKI implementations, only single key pairs are used. Sometimes a CA needs to generate multiple key pairs in situations where backup private keys are required but the possibility of a forged digital signature must be acknowledged. For example, if someone is the backup operator, that person is responsible for the backup of all data, including users' private keys. If that individual holds a grievance, they could use a private key to forge a signature for personal gain. The recipient of that signature, say the CFO, would have no reason to distrust the message and its content. To avoid scenarios such as this, many public key infrastructures support the use of dual keys. In the example above, the CFO has two separate key pairs. The first key pair is used for authentication or encryption, while the second key pair is used for digital signatures. The private key used for authentication and encryption can still be backed up for safekeeping. The second private key would never be backed up and would not provide the security loophole that using single keys creates. The CFO could continue using his second private key for signing emails without fear of the key being misused.
All About The Central Processing Unit (CPU)
The central processing unit (CPU) is the computer component that controls system operations. It comprises an arithmetic logic unit (ALU), which runs all arithmetic and logical operations, and a control unit, which decodes instructions and implements the requested instructions. Older computers employed multiple chips for these tasks. Some functions require support chips, which are often referred to collectively as a chip set. The CPU relies on two inputs to run its operations: instructions and data. The data is passed to the CPU, where it is shaped and processed in either supervisor or problem state. In problem state, the CPU works on the data with nonprivileged instructions. In supervisor state, the CPU executes privileged instructions. There are two basic forms of CPUs produced for today's computer systems:
Reduced Instruction Set Computing (RISC) uses simple instructions that require a reduced number of clock cycles. Complex Instruction Set Computing (CISC) performs multiple operations for a single instruction. A CPU can also be characterized by the execution features it supports; however, both the hardware and software must be able to accommodate these features, including: multitasking, when the CPU handles several tasks at once; multiprogramming, the same idea as multitasking except that the CPU alternates between processes rather than executing them simultaneously; multithreading, which runs multiple concurrent tasks within a single process; and multiprocessing, where a system contains more than one CPU.
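Multithreading, for instance, can be demonstrated with Python's threading module: two concurrent tasks execute within a single process and share its memory (multiprocessing would instead place each worker in its own process with a separate address space):

```python
import threading

# Each thread sums one chunk of the data and writes its result into a
# list that both threads share -- possible only because threads live in
# the same process address space.
def worker(chunk, results, idx):
    results[idx] = sum(chunk)

data = list(range(1000))
results = [0, 0]
threads = [
    threading.Thread(target=worker, args=(data[:500], results, 0)),
    threading.Thread(target=worker, args=(data[500:], results, 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(results)  # same answer as summing the whole list at once
```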
Types of Computer Memory
Computer memory retains data that is to be processed by the CPU. Computer memory falls into two categories: nonvolatile memory, which is retained even when the computer is powered off; and volatile memory, whose data is lost when the computer loses power. Read-only memory (ROM): this is a type of nonvolatile memory that is preserved when the computer is powered off. When a ROM chip is manufactured, its contents are "burned in" and can't be modified. These chips contain what's known as "bootstrap" data and the power-on self test (POST) diagnostics the computer uses to start up prior to loading the operating system. Random access memory (RAM): a type of volatile memory; when the computer loses power, all data held in RAM is lost. The CPU relies on RAM for readable and writable memory for processing tasks. RAM maintains its data only while power is supplied to it. "Random" refers to the CPU being able to access or move data to and from any addressable RAM location on the system. Cache memory:
This component contains a specialized, high-speed memory chip that keeps frequently accessed data located on or near the CPU. Cache memory is faster than RAM but more expensive. The CPU operates in a perpetual cycle of requesting data, processing it, and executing code, and it relies on fast access to that data; this is why the memory chip is located close to the CPU for fast retrieval and execution. Caches are arranged in layers. The highest layer is known as the level 1 (L1) cache and is closest to the CPU. Modern computers can also have level 2 (L2) and level 3 (L3) cache memory. Virtual memory: The operating system uses virtual memory for its management functions. Located on the hard disk drive, virtual memory can be utilized if the system is running low on RAM. A swap file is an example of virtual memory. Accessing virtual memory requires a good amount of system overhead and can slow down processing. Active memory: Data that is directly accessible to the CPU and held in memory is called active memory. Data held externally to the computer system's active memory is held in storage, or secondary memory.
Security of Data Storage Devices
Data storage devices are magnetic and optical devices or media designed to retain data. Storage devices include: hard drives, floppy disks, magnetic tapes, compact discs (CDs), digital video disks (DVDs), flash memory cards and flash drives. Three issues to take into account with the security of data storage devices:
Even after information has been erased, data residue may remain on storage devices, making the data retrievable. The same applies to a disk that has been reformatted. Data storage devices are vulnerable to theft. Data storage devices are also susceptible to unauthorized access, particularly removable storage devices such as floppy disks, flash memory cards and flash drives. Input and Output Devices: Input and output devices can introduce security risks to a system. A vulnerability known as TEMPEST, for example, can weaken the security of data displayed on a monitor: electromagnetic emanations from the monitor can be read from a remote location. Printouts are also security risks should an unauthorized person access sensitive documents.
Security Policy and Computer Architecture
A security policy is a critical component of the design and implementation of information systems. This document outlines the set of rules, practices, and procedures that specify how the system should manage, safeguard, and circulate sensitive information. Thus its objective is to educate and guide the design, development, implementation, testing and maintenance of the information system. The three most important security rules and principles are the following:
The rule of least privilege is a vital component of the design of computers and operating systems. Operating system processes should be designed to run in user mode whenever possible; the more processes that run in privileged mode, the higher the risk of a malicious incident in which an unauthorized user corrupts or gains control of the system. The rule of separation of privilege relies on the use of granular access permissions, or specific permissions for each type of privileged operation. This gives designers fine-grained control when assigning the rights to run certain functions, rather than granting full system access. Accountability is vital in the security design. Today's computing is based on the client/server model, where users operate on independent computers with access to resources and services, giving clients a range of computing and storage capabilities.
Vulnerabilities and Safeguards
Vulnerabilities: Desktop systems contain various forms of data, some more sensitive than others, so safeguard measures to secure that data are required. Some users may have limited security awareness, which the underlying architecture has to compensate for. Client systems can be gateways to critical information systems on a network. Communications hardware can also harbor vulnerable points of access into a distributed environment. Modems connected to computers that are linked to a larger network can expose the network to dial-in attacks. The same applies to data downloaded from the web that carries malicious code such as Trojan horses. The storage devices on client computers may not be protected from physical intrusion or theft, and data on client computers may not be secured with a proper backup. Safeguards: Distributed environments rely on multiple security mechanisms to ensure vulnerabilities are removed, monitored, or remedied. Clients are required to adhere to procedures that implement safeguards on their contents and their users' activities. These safeguards include:
Email screening to block malicious software that could penetrate the system, and email policies that specify appropriate use and limit potential liability. Download/upload policies must be followed to allow incoming and outgoing data to be screened and suspect materials to be blocked. Imposing access controls, which can include multifactor authentication and/or biometrics, to monitor and deny access to client computers and prevent unwarranted access to servers and services. Graphical user interface mechanisms and database management systems should be implemented, and their use required, to restrict and manage access to critical information. The encryption of sensitive files and data stored on client computers. The isolation of processes that run in user and supervisory mode so that unapproved access to privileged processes and capabilities is prevented. Protection zones should be implemented so that compromise of a client computer does not compromise the entire network. Disks and other sensitive materials should be safeguarded from unwarranted access. Client computers should be backed up regularly. Security awareness training should be available for client computer users. Client computers and their storage devices should be provided with safeguards against environmental hazards. Client computers should be included in disaster recovery and business continuity planning.
Using Security Mechanisms to Enhance Security
To enhance security, mechanisms should be established and implemented to control processes and applications. These mechanisms include process isolation, protection rings, and the trusted computer base (TCB). Process Isolation: Process isolation, enforced by the operating system, maintains a high level of system trust by enforcing memory boundaries. Without process isolation, processes could write into each other's memory space, compromising data or possibly making the system unstable. The operating system must also block unauthorized users from entering areas of the system to which they should not have access. These restrictions are enforced through the use of a virtual machine, which gives the user the impression of having full access to the system while, in reality, processes are completely isolated. Further, some operating systems also use hardware isolation to increase system security; with hardware isolation, the processes are segmented both logically and physically. Single-State and Multistate Systems: Single-state and multistate systems were developed to meet the requirements of handling sensitive government information with categories such as sensitive, secret, and top secret. These systems determine whether the sensitive information processed and retained on a system is managed by the system itself or by an administrator. Single-state systems: also known as dedicated systems, are programmed to process a single category of information and are dedicated to one mode of operation. The system administrator is responsible for maintaining policy and procedures, and for delegating which users have access and what level of access to the system. Multistate systems: allow multiple users to log in to the system and access various types of data according to the users' levels of clearance. Multistate systems can run as compartmentalized systems and assign data on a need-to-know basis. Rings of Protection:
The operating system uses rings of protection to determine the privilege level at which code executes, giving the operating system various levels at which to run code or restrict access. Rings are organized into domains, or layers, with the most privileged domain located in the center and the least privileged domain in the outermost ring. See figure 5.1. Layer 0 is the most trusted level; the operating system kernel resides at this level, and any process running at layer 0 is said to be operating in privileged mode. Layer 1 contains nonprivileged portions of the operating system. Layer 2 is where I/O drivers, low-level operations, and utilities reside. Layer 3 is where applications and processes operate; this is the level at which individuals usually interact with the operating system, and applications operating here are said to be working in user mode. As shown, access rights decline as the ring number increases, so the most trusted processes are placed in the center rings, and system components reside in the appropriate ring according to the principle of least privilege. This systematic arrangement ensures that processes have only the minimum privileges necessary to perform their functions. Trusted Computer Base (TCB): The trusted computer base (TCB), defined in the U.S. Department of Defense standard known as "the Orange Book" (DoD Standard 5200.28), is the totality of protection mechanisms within a computer, including hardware, software, controls, and processes, that collaborate to form a trusted base that enforces an organization's security policy. This is the trusted portion of an information system that can be relied on to maintain the security policy. The TCB handles confidentiality and integrity and monitors four basic functions: Input/output operations: these can present a security concern because operations from the outer rings might interface with rings of increased protection.
Execution domain switching: another security concern, because applications running in one domain often draw on applications or services in other domains. Memory protection: this must be monitored to verify confidentiality and integrity in storage. Process activation: the security risk here lies in the reading and processing of status information and file access lists, which are vulnerable to compromises of confidentiality in a multiprogramming environment. The reference monitor is an important element of the TCB; it is an abstract mechanism for validating access to objects by authorized subjects. The security kernel, which processes user and application requests for system access, implements the reference monitor. Because it is charged with maintaining control of authorized access, it must be safeguarded from any alteration and tested for anomalies.
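The reference monitor concept can be made concrete with a small sketch; the class, subjects, and policy below are invented for illustration, not drawn from any real security kernel. The essential properties are that every access request passes through one mediation point and that every decision is recorded:

```python
class ReferenceMonitor:
    """Mediates every subject-to-object access against an ACL policy."""

    def __init__(self, acl):
        # acl maps (subject, object) pairs to the set of granted rights.
        self.acl = acl
        # Every decision, allowed or denied, is appended to an audit log.
        self.audit_log = []

    def access(self, subject, obj, right):
        allowed = right in self.acl.get((subject, obj), set())
        self.audit_log.append((subject, obj, right, allowed))
        return allowed

# Hypothetical policy: alice may only read the payroll database.
rm = ReferenceMonitor({("alice", "payroll.db"): {"read"}})
```

Because all requests funnel through `access`, the monitor is complete (always invoked) and auditable, which is why the security kernel implementing it must itself be tamperproof.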
Information Security Models
Information security models are methods used to formalize security policies: they provide a precise set of rules that a computer can follow to implement the fundamental security concepts, processes, and procedures contained in a security policy. These models can be abstract or intuitive. State Machine Model: The state machine model describes a system that is always secure regardless of the operational state it is in. According to the state machine model, a state is a snapshot of a system at a specific moment in time. The state machine model derives from the computer science definition of a finite state machine (FSM), integrating an external input with an internal machine state to model all types of systems, including parsers, decoders, and interpreters. Given an input and a state, an FSM transitions to another state and may create an output. A transition takes place when accepting input or producing output and always results in a new state. All state transitions must be examined, and if all components of a state meet the requirements of the security policy, that state is considered secure. When each state transitions only to other secure states, the system is rendered a secure state machine. Many other security models are influenced by the secure state concept. Bell-LaPadula Model: The Bell-LaPadula model was developed to formalize the U.S. Department of Defense (DoD) multilevel security policy. The DoD classifies resources into four different levels; in ascending order from least sensitive to most sensitive, these are: Unclassified, Confidential, Secret, and Top Secret. Under the Bell-LaPadula model, a subject with any level of clearance can access resources at or below its clearance level; however, only those resources that a person needs access to are made available. For example, an individual cleared for the Secret level has access only to documents labeled at or below Secret that they need to know.
With these restrictions, the Bell-LaPadula model preserves the confidentiality of objects. It does not acknowledge integrity or availability of objects. The Bell-LaPadula model is based on the state machine model. It also implements mandatory access controls and the lattice model. The lattice tiers are the classification levels used by the security policy of the organization. In this model, secure states are delimited by two rules called properties:
The Simple Security Property (SS Property) states that a subject at a specific classification level cannot read data with a higher classification level. The * Security Property (* Property) states that a subject at a specific classification level cannot write data to a lower classification level. Subjects: A subject is an active entity that is seeking rights to a resource or object. A subject can be a person, a program, or a process. Objects: An object is a passive entity, such as a file or a storage resource. In some cases, an item can be a subject in one context and an object in another. The Bell-LaPadula model does not address integrity, availability, access control management, or file sharing. It also does not impede covert channels, mechanisms that allow data to be communicated outside of normal, expected, or detectable methods.
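The two properties reduce to simple comparisons over the classification lattice. A minimal sketch, assuming a purely linear lattice (real mandatory access control also handles compartments and need-to-know):

```python
# The DoD linear lattice, least to most sensitive.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level, object_level):
    # Simple Security Property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # * Property: no write down (prevents leaking secrets to lower levels).
    return LEVELS[subject_level] <= LEVELS[object_level]
```

Together the two rules guarantee that information can only flow upward in the lattice, which is exactly the confidentiality goal of the model.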
The Biba and Clark-Wilson Integrity Models
The Biba model was developed as a direct analogue to the Bell-LaPadula model and is also a state machine model based on a classification lattice with mandatory access controls. It was developed to address three integrity issues: the prevention of object modification by unauthorized subjects; the prevention of unauthorized object modification by authorized subjects; and the protection of internal and external object consistency. In this model there are three axioms: The Simple Integrity Axiom (SI Axiom), which states that a subject at a specific classification level cannot read data with a lower classification level. The * Integrity Axiom (* Axiom), which states that a subject at a specific classification level cannot write data to a higher classification level. A subject at one level of integrity cannot invoke a subject at a higher level of integrity. The Biba model addresses only integrity, not confidentiality or availability. Its main focus is safeguarding objects from external threats; it regards internal threats as handled by appropriate programs. Access control management is not addressed by the Biba model, and there is no function that allows modification of an object's or subject's classification level. In addition, it does not prevent covert channels. Clark-Wilson Integrity Model:
The Clark-Wilson model is an integrity model that was developed after the Biba model and addresses integrity protection from a different perspective. Instead of using a lattice structure, it implements a subject-program-object, or three-part, relationship: subjects access objects exclusively through programs, with no direct access. The Clark-Wilson model offers integrity through two principles: well-formed transactions and separation of duties. Well-formed transactions take the form of programs, the means by which subjects are able to access objects. Each program is restricted in terms of what it can or can't do to an object, effectively limiting the subject's capabilities. If the programs are properly designed, then the triple relationship succeeds in protecting the integrity of the object. Separation of duties is the method of dividing critical functions into two or more parts, each of which must be handled by a different subject. This prevents authorized subjects from making unauthorized modifications to objects, further protecting the integrity of the object. The Clark-Wilson model requires auditing along with the above-mentioned principles. Auditing tracks modifications to objects as well as inputs from outside the system.
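The triple relationship can be sketched as a transformation procedure standing between subjects and a constrained data item. The account names, authorization sets, and well-formedness rule below are invented for illustration:

```python
# Constrained data item: subjects may never touch this dict directly;
# all changes go through the transfer procedure below.
balances = {"acct1": 100}

def transfer(user, authorized_users, acct, amount):
    # Separation of duties: only subjects authorized for this procedure
    # may invoke it.
    if user not in authorized_users:
        return False
    # Well-formed transaction: the integrity constraint (no negative
    # balance) is enforced inside the program, not left to the subject.
    if balances[acct] + amount < 0:
        return False
    balances[acct] += amount
    return True
```

Because the procedure is the only path to the data, a properly designed program set preserves the object's integrity no matter what the subject attempts.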
The Information Flow Model
The information flow model is based on a state machine model and consists of objects, state transitions, and lattice states. Information flow models are constructed to block unauthorized, insecure, or restricted information flow, either between subjects and objects at the same classification level or between subjects and objects at different classification levels. The model permits authorized information flows within the same classification level or between different classification levels, while preventing all unauthorized information flows between or among the classification levels. The Bell-LaPadula model and the Biba model are both information flow models: Bell-LaPadula concentrates on blocking information flow from a high security level to a low security level, while Biba focuses on preventing information from flowing from a low security level to a high security level.
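Biba's axioms are the mirror image of Bell-LaPadula's, which a short sketch over an assumed linear integrity lattice makes concrete:

```python
# An illustrative integrity lattice, lowest to highest integrity.
LEVELS = {"Low": 0, "Medium": 1, "High": 2}

def can_read(subject_level, object_level):
    # Simple Integrity Axiom: no read down -- a subject may not
    # consume lower-integrity (less trustworthy) data.
    return LEVELS[subject_level] <= LEVELS[object_level]

def can_write(subject_level, object_level):
    # * Integrity Axiom: no write up -- a subject may not contaminate
    # higher-integrity data.
    return LEVELS[subject_level] >= LEVELS[object_level]
```

Where Bell-LaPadula forces information upward to protect confidentiality, these rules force it downward so that trusted data is never derived from untrusted sources.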
The Noninterference Model
The noninterference model is based on the information flow model but addresses how the actions of a subject at a higher security level affect the system state or the actions of a subject at a lower security level. In this model, the actions of the higher security level subject should have no influence on the actions of a subject at a lower security level; essentially, the higher security subject's activity should go unnoticed at the lower level.
The Take-Grant Model
The Take-Grant model is a confidentiality-based model that uses a directed graph to specify the rights that can be passed from one subject to another or from a subject to an object. The model permits subjects with the take right to take rights from other subjects, and subjects with the grant right to grant rights to other subjects.
The Access Control Matrix
An access control matrix is a table that states a subject's access rights on an object. A subject's access rights can include read, write, and execute. Each column of the access control matrix is called an Access Control List (ACL), while each row is called a capability list. An ACL is attached to the object and outlines the actions each subject can perform on that object. A capability list is attached to the subject and outlines the actions that subject is allowed to perform on each object. The access matrix model follows discretionary access control, because the entries in the matrix are at the discretion of an individual who has authority over the table.
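Because the matrix, its columns, and its rows are three views of the same data, the relationship is easy to show in code. The subjects, objects, and rights below are illustrative:

```python
# The matrix itself: (subject, object) -> set of rights.
matrix = {
    ("alice", "file1"): {"read", "write"},
    ("alice", "file2"): {"read"},
    ("bob",   "file1"): {"read"},
}

def acl(obj):
    # A column of the matrix: for one object, which subjects hold
    # which rights (the view stored with the object).
    return {s: r for (s, o), r in matrix.items() if o == obj}

def capability_list(subject):
    # A row of the matrix: for one subject, which rights it holds on
    # each object (the view carried by the subject).
    return {o: r for (s, o), r in matrix.items() if s == subject}
```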
The Brewer and Nash Model
The Brewer and Nash model has similarities with the Bell-LaPadula model and is also referred to as the Chinese Wall model. This model allows access controls to change dynamically based on a user's past activity. Applied to a single integrated database, it seeks to create security domains that are sensitive to the notion of conflict of interest (COI). Data is created with indications of which security domains are potentially in conflict, and any subject with access to one domain in a conflict class is blocked from accessing any other domain in the same conflict class. This structure is based on data isolation within each conflict class to shield users from potential conflict of interest scenarios.
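The history-dependent rule can be sketched as follows; the companies and conflict classes are invented for illustration:

```python
# Which conflict-of-interest class each data domain belongs to.
conflict_classes = {"BankA": "banks", "BankB": "banks", "OilCo": "oil"}

# Per-user history: conflict class -> the company first accessed in it.
history = {}

def request_access(user, company):
    coi = conflict_classes[company]
    accessed = history.setdefault(user, {})
    # Deny if the user has already accessed a *different* company in
    # the same conflict class -- this is the wall going up dynamically.
    if coi in accessed and accessed[coi] != company:
        return False
    accessed[coi] = company
    return True
```

Before any access, a consultant could work with either bank; the first access to BankA dynamically walls off BankB while leaving unrelated classes, like oil companies, open.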
Employees and Operational Security
Operations security deals with the daily activities required to preserve the confidentiality, integrity and availability (CIA) of a system after it has been developed and deployed. This involves using hardware controls, media controls, and subject controls designed to safeguard against asset threats, as well as daily activities such as the handling of attacks and violations, appropriate administrative management and control, and establishing a threshold to determine notable violations. Because an employee can impact operational security, it's important to screen and verify new employees in terms of background, experience, level of education and skill set. Some organizations perform background checks as part of the vetting process. During the hiring process, a probationary period can be instituted in which the individual is informed whether they must obtain special qualifications or security clearances for the job, and may be asked to sign a non-compete, nondisclosure, and possibly a non-solicitation agreement. Once the candidate has been hired, there are additional operational security controls that can be implemented, such as an orientation, separation of duties, job rotation, least privilege, mandatory vacations, audit controls, and effective termination practices. New-Hire Orientation: A new-hire orientation training program can be instituted to make certain new employees are aware of and familiar with the organization's policies. The objective should be to educate new employees on the established security policies and processes of the organization, and the acceptable use those policies define. Going forward, security awareness can be perpetuated by sending the occasional security-awareness email or newsletter that reinforces good security practices. Policy reviews can also be conducted so employees can go over current policies and sign a copy indicating their agreement.
Separation of Duties: The separation of duties is the process of dividing a given task into smaller components so that more than one person has a role in completing it. This correlates with the principle of least privilege and prevents authorized subjects from making unauthorized modifications to objects, further protecting the integrity of those objects.
Job Rotation: Job rotation allows an organization to detect fraudulent behavior more readily. It also provides job redundancy and backup.
Least Privilege: The rule of least privilege mandates that employees have access only to the resources needed to complete their job tasks. This inhibits resource misuse. Over time, however, privilege creep can occur: as employees move from job to job they accumulate rights and access they no longer need, and those stale rights should be removed.
Mandatory Vacations: Employees who never take vacation time aren't always honorable workers. They may have skipped vacation because they're engaged in fraudulent activities; remaining on the job gives them the opportunity to execute their scheme while appearing dedicated to their work. Such activity can be exposed when an employee is required to take vacation time, and a week should be ample time for illicit activities to surface.
Termination: Employee termination is sometimes a required action. Standardized termination procedures should be executed to protect the organization and its resources. These protocols ensure equal treatment of employees and prevent any opportunity for a former employee to destroy or damage company property. Steps that should be incorporated include: revoking computer access at the time of notification; monitoring the employee while he or she gathers personal effects; making certain the employee is at no time left alone after the termination process; verifying that the employee returns company identification and any company property; and escorting the employee from the building.
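The least-privilege review described above can be sketched in code. The role names, rights names, and the role-to-rights mapping below are purely hypothetical, invented for illustration:

```python
# Hypothetical mapping of roles to the rights each role actually requires.
ROLE_RIGHTS = {
    "accountant": {"ledger:read", "ledger:write"},
    "auditor": {"ledger:read", "audit:read"},
}

def privilege_creep(user_rights, current_role):
    """Return the rights a user holds beyond what the current role requires.

    These surplus rights are candidates for removal under least privilege.
    """
    return user_rights - ROLE_RIGHTS[current_role]

# A user who moved from accountant to auditor but kept the old rights:
carried_over = {"ledger:read", "ledger:write", "audit:read"}
print(sorted(privilege_creep(carried_over, "auditor")))
# ['ledger:write'] is the stale right that should be revoked
```

In practice such a review would run against a directory service or IAM system rather than a dictionary, but the set difference is the heart of it.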
Threats, Vulnerabilities and Attacks
A threat is any incident that can cause damage to a system and create a loss of confidentiality, integrity, or availability. Threats can be deliberate or accidental. A vulnerability is a latent weakness in a system that can be exposed by a threat. Decreasing system vulnerability reduces overall risk and can also limit the impact of threats on the system.
Threats: Threats can be classified into several categories, including malicious activities, accidental loss and inappropriate activities.
Malicious Activities: Malicious activities are deliberate threats, usually carried out for personal gain or destruction. These include actions such as software cracking, keyloggers, viruses, shoulder surfing, password guessing, and any actions that are prohibited, destructive, or done for gain. Also included is theft, which covers the swiping of information or trade secrets for profit or unauthorized disclosure, as well as physical looting.
Accidental Loss: Accidental loss is a loss sustained involuntarily. It can include input errors and omissions by an operator, or accounting errors introduced into the data through faulty processing procedures.
Inappropriate Activities: Inappropriate activities may not fall into the malicious category but can still be grounds for dismissal. These include using organizational systems to store inappropriate content such as pornographic, political, or violent material; sexual or racial harassment; the waste of organizational resources; and the abuse of privileges, such as unauthorized access to information that compromises the confidentiality of sensitive company information.
Exploiting Vulnerabilities to Launch Attacks
Default and Maintenance Accounts: Default and maintenance accounts are weaknesses that can be exploited to access information systems, especially accounts that still have preset or easily guessed passwords. Access to hardware by maintenance personnel can also qualify as a security violation.
Data-Scavenging Attacks: Data scavenging is the method of assembling bits of data over time and gradually piecing them together to obtain useful information. There are two types: keyboard attacks, which use normal utilities and tools to garner information available to ordinary system users sitting at the keyboard, and laboratory attacks, which use advanced and specialized electronic equipment.
Initial Program Load Vulnerabilities: The startup of a system is referred to as the initial program load (IPL), and it harbors a unique set of vulnerabilities. During the IPL process the system administrator brings up the facility's system and can put it into single-user mode, void of important security features. In single-user mode the administrator can access unauthorized programs or data, reset passwords, modify various resources, and reassign data ports or communications lines. In a local area network (LAN), a system administrator could also override the system's security settings by booting the system from a tape, CD-ROM, or floppy disk.
Social Engineering: In social engineering, an attacker employs social skills to gather, from an unsuspecting user, the information needed to compromise an information system. This can be sensitive information such as a password that secures access to a system. Social engineering can be achieved through: Impersonation – the attacker impersonates an authorized person and uses that person's credentials to solicit information or to persuade an unsuspecting user to alter system settings. Intimidation – verbal abuse directed at the user, or threatening behavior, used to coerce the user into permitting access or releasing information. Flattery – positive reinforcement used to impel the user into giving access or information.
Network Address Hijacking: An attacker may be able to redirect traffic from a server or network device to his or her own system through address modification, known as network address hijacking. This method allows the perpetrator to seize traffic to and from the devices for data analysis or modification, or to obtain password information from the server in order to access user accounts. By rerouting the data output, the intruder can gain administrator terminal functions and circumvent the system logs.
Auditing, Monitoring and Intrusion Detection
Operational security requires ongoing review of an operational system to verify that its security controls are operating correctly and effectively. Consistent auditing and monitoring achieve this, and both rely on accountability.
Auditing and Audit Trails: Effective auditing relies on accountability, which is managed by logging the activities of users and of the system services that maintain the operating environment and the security mechanisms. If a user's actions can't be verified, that individual cannot be held accountable for a specific action, rendering auditing ineffective because security policies cannot be enforced. Logging can help retrace actions and events, provide evidence for prosecution, and support problem reports and analysis. The process of analyzing logs is called auditing and is an inherent function of an operating system. An audit trail is created by logging security incidents; it is a running file of records that provides documentary evidence of user actions. Trails may isolate specific events or capture all of the activity on a system, and they can be used to identify whether a user has violated security policies. An audit trail allows a security administrator to monitor user activity over time, and it can include information about additions, omissions, or alterations to the data within a system. Audit trails are not protective controls, as they are usually examined after the event.
Monitoring: System monitoring is critical to all of the domains of information security. The main purpose of monitoring is the discovery of violations such as unauthorized or abnormal computer usage. Network utilities such as Snort and tcpdump are commonly used by organizations to monitor network traffic for suspicious activity and anomalies. Failure recognition and response, which includes reporting methods, is a critical part of monitoring. An intrusion detection system (IDS) is another monitoring mechanism. It is a technical detective access control designed to constantly monitor network activity and to trace, in real time, any scanning and probing activities or patterns that appear to be attempts at unauthorized access to the information system. An IDS can also be programmed to scan for attacks, track an attacker's movements, alert an administrator to an ongoing attack, run system diagnostics for possible weaknesses, and put defensive measures in place to block additional unauthorized access. IDSs can also be used to identify system failures and to gauge system performance. Attacks discovered by an IDS can come from external connections, viruses, malicious code, trusted internal users attempting unauthorized activities, and unauthorized access attempts from trusted locations. Clipping levels play a significant role in system monitoring. A clipping level allows a user to make an occasional error before an investigation is triggered; it functions as a violation threshold that must be surpassed before violations are escalated or a follow-up response occurs. Once that threshold is surpassed, investigation or notification begins.
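As a rough sketch, a clipping level can be implemented as a per-user counter compared against a threshold; the threshold value and the event type below are arbitrary assumptions for illustration:

```python
from collections import defaultdict

CLIPPING_LEVEL = 3            # assumed threshold: tolerated errors before follow-up
failed_logins = defaultdict(int)

def record_failed_login(user):
    """Count a failed login; escalate only once the clipping level is exceeded."""
    failed_logins[user] += 1
    if failed_logins[user] > CLIPPING_LEVEL:
        return f"ALERT: investigate {user}"
    return "logged"

for attempt in range(4):
    result = record_failed_login("jsmith")
print(result)  # the fourth failure crosses the threshold and triggers the alert
```

A production system would also age counters out over a time window so that occasional mistakes spread across weeks don't accumulate into a false alarm.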
Controls for Operational Security
Operational security is executed through various types of controls. These controls offer varying degrees of protection and fall into six broad categories:
Preventive controls – designed to reduce the damage and frequency of unintentional errors and to prevent unauthorized access to the system. Data validation mechanisms are examples of preventive operational security controls.
Detective controls – used to detect errors once they have occurred. These controls go into effect after the incident and can be used to trace an unauthorized transaction for prosecution, or to reduce an error's adverse influence on the system by catching it early. An audit trail is an example of a detective operational security control.
Corrective or recovery controls – designed to alleviate the consequences of a loss event through data recovery procedures. Redundant systems, RAID and tape backup are examples of corrective operational security controls.
Deterrent or directive controls – employed to encourage compliance with external controls and to impede violations. These controls are meant to enhance other types of controls. An administrative policy stating that those who place unauthorized modems on the network could be fired is an example of a deterrent operational security control.
Application controls – mechanisms designed into a software application to reduce and trace the software's operational irregularities.
Transaction controls – used to provide control over the various phases of a data transaction. Some common types of transaction controls include: input controls, designed to ensure that transactions are properly implemented and entered only once into the system; processing controls, designed to ensure that transactions are valid and accurate and that erroneous entries are reprocessed correctly and promptly; output controls, designed to safeguard the confidentiality of output and to verify its integrity by comparing the input transaction with the output data; change controls, designed to protect data integrity while adjustments are made to the system's configuration; and test controls, implemented during the testing of a system to prevent violations of confidentiality and to preserve transaction integrity.
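Input controls of the kind described above amount to validating each transaction and ensuring it enters the system only once. A minimal sketch follows; the field names and validation rules are invented for illustration:

```python
seen_ids = set()  # input control: each transaction may enter the system only once

def validate_transaction(tx):
    """Apply simple input controls; return a list of violations (empty = accept)."""
    errors = []
    if tx.get("id") in seen_ids:
        errors.append("duplicate transaction")
    if not isinstance(tx.get("amount"), (int, float)) or tx["amount"] <= 0:
        errors.append("amount must be a positive number")
    if not tx.get("account"):
        errors.append("missing account")
    if not errors:
        seen_ids.add(tx["id"])  # record the id only for accepted transactions
    return errors

print(validate_transaction({"id": 1, "amount": 250.0, "account": "A-100"}))  # []
print(validate_transaction({"id": 1, "amount": -5, "account": ""}))
# ['duplicate transaction', 'amount must be a positive number', 'missing account']
```

Processing and output controls would sit downstream of this gate, re-checking the same invariants after each transformation rather than trusting the input stage alone.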
Orange Book Controls
The Orange Book is one of the National Security Agency's Rainbow Series of books on evaluating "Trusted Computer Systems". It is the main book in the Rainbow Series and defines the Trusted Computer System Evaluation Criteria (TCSEC). The TCSEC outlines hierarchical divisions of security, ranging from D (least secure) to A (most secure). The Orange Book also identifies assurance requirements for secure computer operations, applied to ensure that a trusted computing base's security policy has been correctly implemented and that the system's security features effectively enforce that policy. Two types of assurance are defined in the Orange Book:
Operational assurance – examines the fundamental features and structure of a system. These include system architecture, system integrity, covert channel analysis, trusted facility management, and trusted recovery. Life cycle assurance – concerned with the controls and standards required for constructing and maintaining a system. These include security testing, design specification and verification, and configuration management.
Covert Channel Analysis
A covert channel is a communication channel that is not normally used for system communications and is therefore not protected by the system's security mechanisms. This makes it a vulnerability that could be exploited to subvert a system's security policy. The two common types of covert channels are:
Covert storage channels – these convey information by altering stored data on a resource, such as a hard disk shared by two subjects at different security levels. For example, a program can transfer information to a less secure program by changing the amount or the pattern of free hard disk space, or by changing the characteristics of a file. Covert timing channels – channels in which one process signals information to another by modulating its use of an observable system resource in such a way that it affects the real response time observed by the second process. This usually involves a system clock or timing device; information is conveyed through timing measurements such as the duration required to perform an operation, the amount of CPU time expended, or the interval between two events.
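A covert timing channel can be demonstrated in a few lines: the sender leaks bits by varying the delay before each observable event on a shared resource, and the receiver recovers them from the inter-event intervals. The specific delay values below are arbitrary, chosen only so the two cases are easy to tell apart:

```python
import time

def covert_send(bits, events, short=0.02, long=0.12):
    """Sender: encode each bit as the delay before the next observable event."""
    events.append(time.monotonic())          # reference timestamp
    for bit in bits:
        time.sleep(long if bit else short)   # a long gap signals 1, a short gap 0
        events.append(time.monotonic())

def covert_receive(events, threshold=0.07):
    """Receiver: recover the bits by timing the gaps between observed events."""
    return [1 if b - a > threshold else 0 for a, b in zip(events, events[1:])]

events = []
covert_send([1, 0, 1, 1, 0], events)
print(covert_receive(events))  # recovers [1, 0, 1, 1, 0]
```

Here the shared list stands in for any resource both processes can observe; in a real channel it might be CPU load, disk activity, or response latency, which is why such channels are hard to close completely.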
Trusted Facility Management
Trusted facility management is the selection of a specific user to administer the security functions of a system. The Orange Book imposes requirements on it at the B2 and B3 levels: B2 systems require that the trusted computing base support separate operator and administrator functions, while B3 systems require that the functions for which the security administrator is responsible be explicitly identified. This mandates that the security administrator use those defined functions only after taking a distinct action to assume the security administrator role on the system. Other functions that can be performed in the security administrator role should be confined to those essential to the security role. Trusted facility management is also committed to the concept of least privilege and correlates with the separation of duties and need-to-know concepts.
Trusted Recovery, Failure Preparation and System Recovery
Trusted Recovery: A system failure is a serious security risk because security controls might be bypassed when the system is not functioning normally. Trusted recovery is designed to prevent this type of compromise in the event of a system failure. It is required for B3-level and A1-level systems and allows a system to be restarted without disrupting its required protection levels, and to recover and roll back without compromise after the failure. Two processes are involved in trusted recovery: preparing for system failure and recovering from system failure.
Failure Preparation: Preparing for system failure entails running regular backups of all essential data. This preparation must allow full data recovery in a protected and orderly manner while preserving the continued security of the system. This process may also be needed if a system issue, such as a faulty database or any kind of violation, is detected, or if the system needs to be stopped and restarted.
System Recovery: Secure system recovery methods include rebooting the system into single-user mode so that no other user access is permitted; recovering all file systems that were active at the time of the failure; restoring any lost or corrupted files from the most recent backups; recovering the required security settings; and confirming the integrity of security-critical files, such as the system password file. Once these processes have been successfully performed and the system's data is secure, users can be allowed to access the system.
About Operations Controls
Operations controls are the methods used to preserve operational security. These include resource protection, hardware controls, software controls, privileged-entity controls, media controls, and physical access controls.
Resource Protection: Resource protection safeguards an organization's computing resources, such as the hardware, software, and data it owns and uses, from both loss and compromise. It is designed to decrease the impact that can result from unauthorized access to and/or alteration of data by limiting the opportunities for misuse. Hardware resources that require protection include communications devices such as routers, firewalls, gateways, switches, modems, and access servers; storage devices such as floppy disks, removable drives, external hard drives, tapes, and cartridges; processing systems such as file servers, mail servers, Internet servers, backup servers, and tape drives; standalone computers; and printers and fax machines. Software resources that require protection include program libraries and source code; software application packages; and operating system software and system utilities. Data resources that require protection include backup data; user data files; password files; operating data directories; and system logs and audit trails.
Hardware Controls:
Hardware controls cover hardware maintenance, hardware accounts, diagnostic ports and physical hardware. Hardware maintenance usually requires that support and operations staff, vendors, or service providers have physical or logical access to a system. Security controls are essential during this access and can include background investigations of the service personnel and supervising and escorting of maintenance personnel. Most computer systems have built-in maintenance accounts, usually supervisor-level accounts created at the factory with default passwords that are widely known. These passwords, and if possible the account names, should be changed; as an alternative, the account can be disabled until it is needed. If an account is used remotely, the maintenance provider can be authenticated by using callback or encryption. Most systems also have diagnostic ports through which troubleshooting can be done, usually offering direct access to the hardware. These ports should be used only by authorized personnel and should be protected against internal and external unauthorized access. Physical controls, including locks and alarms, are used for data processing hardware components, including operator terminals and keyboards, media storage cabinets, server equipment, data centers, modem pools, and telecommunication circuit rooms.
Software Controls: Software controls entail software support and the administration of the software that is and can be used in a system. The components of software controls are anti-virus management, software testing, software utilities, safe software storage, and backup controls. Anti-virus management involves controlling which applications and utilities can be installed or executed on a system, to limit the potential for viruses, unexpected software interactions, and the subversion of security controls.
Vigorous software testing is required to ascertain the compatibility of custom software applications with the system and to discover any problematic or unforeseen software interactions. Software testing should also be performed for software upgrades. System utilities and software utilities can impact the integrity of system operations as well as logical access controls, so both should be governed by security policy. Secure software storage requires the implementation of both logical and physical access controls to ensure that software and backup copies have not been modified without proper authorization. Backup controls are used to ensure that backup data is stored securely and to test the restore accuracy of a backup system.
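Testing restore accuracy, as the backup controls above require, often means verifying that a restored copy matches the original byte for byte. One common way to sketch this is by comparing cryptographic digests (the function names here are my own):

```python
import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(original_path, restored_path):
    """A restore is trustworthy only if the digests match exactly."""
    return sha256_of(original_path) == sha256_of(restored_path)
```

In practice the digest would be computed and stored at backup time, then compared after a test restore, so corruption in either the media or the restore process is caught before it matters.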
Privileged Entity Controls
Privileged entity access, also known as privileged operations functions, is the expanded or special access to computing resources given to operators and system administrators. This expanded or special access can often be arranged into specific classes, where users are assigned to a corresponding class based on their job title or job requirements. Examples of privileged entity operator functions include special access to system controls; access to special parameters; and access to the system control program.
Media Resource Protection and Security Controls
Media Resource Protection: Media resource protection falls into two classes: media security controls, which are used to monitor and block threats to confidentiality, integrity and authenticity, and media viability controls, which are implemented to preserve the proper working condition of the media.
Media Security Controls: Media security controls are designed to prevent the violation or loss of sensitive information when media is stored outside the system. This is achieved through logging the use of data media, which provides accountability and assists in physical inventory control; physical access controls, which block unauthorized personnel from accessing the media; and sanitization of the media, which prevents data residue and ensures the safe and proper disposal of the data media. Three methods can be used for media sanitization: overwriting, degaussing, and destruction.
Overwriting replaces the data on the media with new patterns, typically in multiple passes. Degaussing can use AC or DC erasure: in AC erasure, the media is degaussed by applying an alternating field that is reduced in amplitude over time from an initial high value; in DC erasure, the media is saturated by applying a unidirectional field. Physical destruction of paper reports, diskettes and optical media is required before disposal; techniques include shredding or burning documentation, breaking up CD-ROMs and diskettes, or destroying them with acid. As a best practice, paper reports should be destroyed by personnel with the appropriate level of security clearance.
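The overwriting method can be sketched as below. Note the caveat in the comments: on SSDs and on journaling or copy-on-write file systems, overwriting in place gives no real guarantee, so treat this purely as an illustration of the concept:

```python
import os

def overwrite_and_delete(path, passes=3):
    """Overwrite a file's contents with random bytes before unlinking it.

    Caveat: wear-leveling (SSDs) and journaling or copy-on-write file systems
    may retain old blocks elsewhere, so degaussing or physical destruction
    remains the only certain method for truly sensitive media.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace the old contents in place
            f.flush()
            os.fsync(f.fileno())       # push each pass to the device
    os.remove(path)
```

The pass count of three is an arbitrary choice for the sketch; real sanitization standards specify their own pattern and pass requirements per media type.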
Media Viability and Physical Access Controls
Media Viability Controls: The viability of data media can be preserved with numerous physical controls, whose objective is to protect the media from damage during handling, short- and long-term use, and transportation. Appropriate labeling of media is important to the system recovery process: labels can identify the type of media and any special handling instructions, and logged serial numbers or bar codes support retrieval during a system recovery. Proper handling of the media is vital and requires treating the media with care and cleanliness and protecting it from physical damage during transportation to archive sites. Finally, storing media in clean, dust-free environments, and where possible at optimal humidity levels, will preserve its viability.
Physical Access Controls: Control of physical access to hardware and software resources is also required. All users are subject to a certain level of control and accountability when accessing physical resources. Some personnel will require special physical access to perform their functions effectively. These include IT department personnel, cleaning and maintenance personnel, service contract personnel, as well as consultants, contractors, and temporary staff.
Malicious Code, Viruses and Worms
Malicious Code: Malicious code comprises a range of programmed computer security threats that exploit various network, operating system, software, and physical security vulnerabilities to deliver malicious payloads to computer systems. It is any programmed code specifically designed to damage, penetrate, or break a system, and includes viruses, worms, Trojan horses, denial-of-service tools, logic bombs, and back doors. Some harmful code, such as viruses and Trojan horses, travels via unsuspecting users who unknowingly spread it from system to system, while other code, such as worms, spreads quickly through vulnerable systems without requiring any user interaction.
Viruses: Viruses are one of the earliest forms of malicious code to attack information systems and remain pervasive, with major outbreaks occurring regularly. Viruses harbor malicious payloads, some of which can cause the complete destruction of data stored on the local hard drive. It was estimated that there were approximately 65,000 strains of viruses on the Internet in early 2004. Viruses have two main functions: propagation and destruction. The propagation function defines how a virus moves from system to system, while the virus's payload executes the malicious and often catastrophic activity designed by the virus writer. Viruses fall into three broad categories based on their propagation methods: boot sector viruses, file viruses and macro viruses. Details and definitions below:
Master Boot Record (MBR) viruses are the oldest form of virus. They attack the MBR of the floppy disk or hard disk drive the computer uses to load the operating system during the boot process. This type of virus spreads through infected floppy disks and was highly effective when floppy disks were the main means of sharing files between systems. File viruses infect executable files, such as those with .exe, .com or .bat extensions, and rely on unknowing users to run the infected file; social engineering is usually employed to coax the user into executing it. Alternatively, the virus may replace an operating system file and is triggered when the operating system attempts to execute that file. Macro viruses exploit the scripting functionality of common software applications, such as those in the Microsoft Office suite, that may be loaded onto the system. This type of virus represents the most advanced form of virus program and first appeared in the mid-1990s.
Worms: Worms carry the same harmful potential as viruses, but they don't rely on users to spread. The Internet Worm was the first major security incident to occur on the Internet. Since then, hundreds of new worms have been released onto the Internet, and their catastrophic potential leaves it in a perpetual state of risk. This calls upon system administrators to be proactive in ensuring the latest security patches are applied to their Internet-connected systems.
Logic Bombs, Trojan Horses and Active Content
Logic Bombs: Logic bombs are malicious code that lies dormant in a system until activated by the occurrence of one or more logical conditions, at which point it delivers its malicious payload to unsuspecting computer users. Simple logic bombs may be triggered by the system date or time, while others use more advanced conditions such as the removal of a file or user account, or a change to permissions and access controls. Many viruses and Trojan horses, such as the famous Michelangelo virus, contain a logic bomb component.
Trojan Horses: A Trojan horse conceals a piece of malicious code inside an apparently useful program. Some Trojan horses are fairly benign, while others wipe out all the data on a system, causing extensive damage in a short period of time. Back Orifice is a well-known Trojan horse for the Windows operating system. To release Back Orifice onto a system, an attacker places it within the installation package of a useful application or utility. When an unknowing user installs that application or utility, they also install Back Orifice, which then runs in the background and gives the attacker remote administrative access to the target computer.
Active Content: Active content on the websites users visit is another avenue of attack. Active content is delivered by web applications, based on technologies such as Java applets and ActiveX controls, that are downloaded to users' computers for execution. This minimizes the load on the web server and improves response time. However, an unaware user may download malicious active content, known as hostile applets, from an untrusted source and allow it to run on their system, creating a major vulnerability. Hostile applets can inflict a range of damage, from denial-of-service attacks that consume system resources to the theft and/or destruction of data. Most web browsers can be configured to allow active content to be automatically downloaded, installed, and executed only from trusted sites; in addition, a policy should be put in place to ensure proper user control of active content.
What is Spyware?
Spyware applications are usually similar in deployment to Trojan horses: they are installed when an unsuspecting user downloads and installs a free application from the Internet. More advanced spyware applications, however, can be installed on a user's computer when the user visits an untrusted website with a vulnerable browser. Defenses against spyware include not downloading or installing adware-supported applications, applying the latest security patches for both the operating system and the browser, and switching to a more secure browser.
SQL Injection and Malicious Users
Creators of applications that accept user input should be cognizant of malicious users who target and exploit possible vulnerabilities in the protocol or application. An example is malformed input, or SQL injection, targeted at database applications. An attacker can attempt to inject database or SQL commands to disrupt the normal operation of the database, causing it to malfunction and leak information. The attacker searches for web applications into which SQL commands can be inserted, using probes such as 1=1 followed by a comment marker (--), or a single quote ('), to test the database for vulnerabilities. Feedback from the database application reveals whether the database is susceptible to attack. These forms of attack can be thwarted by applying pre-validation, post-validation, and client-side validation.
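The single-quote probe and the server-side validation defense can be shown concretely with SQLite; the table, column names, and data below are invented purely for the demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: user input is concatenated straight into the SQL text.
    query = "SELECT secret FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized query: the input is bound as data and never parsed as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # [('s3cret',)]  : the injection leaks every row
print(lookup_safe(payload))    # []             : the payload matches no name
```

Binding parameters this way is the core server-side defense; input pre-validation and client-side checks then serve as additional layers rather than the only barrier.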
System Development Life Cycle (SDLC)
The System Development Life Cycle (SDLC) is a structure for system development. Its purpose is to manage the development process and implement security at each stage of that process. The principal elements of the SDLC are listed in "Generally Accepted Principles and Practices for Securing Information Technology Systems" (SP 800-14, National Institute of Standards and Technology, September 1996) and "Security Considerations in the Information System Development Life Cycle" (SP 800-64, National Institute of Standards and Technology, October 2003). The five stages of the SDLC are listed in NIST SP 800-14 as follows:
Initiation – the beginning phase, which determines the need for the system and documents its purpose. It includes measuring the sensitivity of the system and of the data to be processed, called a sensitivity assessment.
Development/Acquisition – involves the design, development, programming and acquisition of the system. In this stage programmers develop the application code while concentrating on security measures, making certain that input and output controls, audit mechanisms, and file-protection schemes are used.
Implementation – covers testing, security testing, accreditation, and installation of the system once application coding has been completed. The testing should be handled by auditors or quality assurance engineers, not the programmers: if the code is written and verified by the same individuals, errors can go unnoticed and security functions can be bypassed, so assigning separate duties is important.
Operation/Maintenance – covers the activities required to operate and maintain the system, including security operations, modification or addition of hardware and/or software, administration, operational assurance, monitoring, and audits.
Disposal – addresses the disposition of the system and its components and products, such as hardware, software, and information; disk sanitization; archiving of files; and moving of equipment. This stage is reached when the system is no longer required.
Application Development: The Waterfall Model
The development of quality software applications is not attainable without a development process model. A process model guides the project’s procedures and activities and represents the life cycle of a project. It ensures that the application meets the customer’s requirements and that development adheres to the budget and time schedule. Several process models have been developed over the last twenty-plus years; historically, some were static and others did not allow checkpoints. Two process models, the waterfall model and the spiral model, offer different approaches to the project life cycle:
The Waterfall Model: The waterfall model is one of the most widely known development process models. It runs on the presumption that a series of tasks can be completed within a single stage and that each stage flows logically into the next. One stage must be finished before moving on to the next, and project milestones serve as the transition and assessment points between stages. Each following stage may require modifications only to the immediately preceding stage, so developers never have to go back several stages. If modification is required, the stage has not been officially completed; the modifications must be done and the project milestone met before the stage is officially recognized as completed. The stages of the waterfall model are:

- System feasibility
- Software plans and requirements
- Product design
- Detailed design
- Code
- Integration
- Implementation
- Operations and maintenance

This model is optimal for projects whose requirements can be clearly outlined and are not likely to change in the future. The waterfall model establishes a fixed order and is easily documented, but it is not practical for more complex projects.
The Spiral Model
The spiral model is based on the ongoing need to refine the requirements and estimates for a project. This model is useful for rapid application development of small projects. Each stage of the model opens with design goals and is completed after client review, which affords a collaborative approach and fosters synergy between the development team and the client: the client has direct input in every stage through feedback and approval. Each stage also requires a risk assessment, in which estimated costs to complete the project are discussed and schedules are revised; at this point a decision can be made to move forward or to abort the project. The spiral model does not incorporate clear checkpoints, which may create confusion during the process.
Cost Estimation Models
Cost estimation models are not development process models; they are used to estimate the cost of software development projects. An early example is the Basic COCOMO Model, which estimates software development effort and cost as a function of the size of the software product in source instructions. This model uses two equations:
The number of man-months (MM) required to develop the most common type of software product, expressed in thousands of delivered source instructions (KDSI):

MM = 2.4(KDSI)^1.05

The development schedule (TDEV) in months:

TDEV = 2.5(MM)^0.38

A more advanced model, the Intermediate COCOMO Model, factors in hardware constraints, personnel quality, use of advanced tools, and other attributes, along with their aggregate impact on overall project costs. An even more advanced model, the Detailed COCOMO Model, factors in the effects of these additional attributes on the costs of individual project phases. The Function Point Measurement Model, another cost estimation model, does not require the user to estimate the number of delivered source instructions. Instead, it focuses on functions: external input types, external output types, logical internal file types, external interface file types, and external inquiry types. These functions are counted, weighted according to complexity, and used to estimate software development effort. A third type of model applies the Rayleigh curve to software development cost and effort estimation. The Software Life Cycle Model (SLIM) uses this approach: it makes its estimates from the number of lines of source code, modified by a manpower buildup index (MBI), which estimates the rate at which staff are added to the project, and a productivity factor (PF), which is based on the technology used.
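The two Basic COCOMO equations above can be sketched in a few lines of Python. The 32-KDSI input below is an illustrative figure, not taken from the text.

```python
# Basic COCOMO estimate, a sketch of the two equations above.
# KDSI = thousands of delivered source instructions.
def cocomo_basic(kdsi: float) -> tuple[float, float]:
    mm = 2.4 * kdsi ** 1.05    # effort in man-months
    tdev = 2.5 * mm ** 0.38    # development schedule in months
    return mm, tdev

mm, tdev = cocomo_basic(32)    # a hypothetical 32-KDSI product
print(f"effort: {mm:.1f} MM, schedule: {tdev:.1f} months")
```

Note that effort grows slightly faster than linearly with size (the 1.05 exponent), while the schedule grows much more slowly than effort (the 0.38 exponent), reflecting the limits of adding staff to compress a schedule.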
Software Maintenance and Change Control
Information security should be a significant aspect of the software development process. This ensures secure applications are used at the optimal level while minimizing development costs and code reworking. Change management is a formalized process designed to control any changes made to systems and programs: it examines each request, determines its feasibility and impact, and produces a timeline to implement approved changes. The change-management process offers all stakeholders time for strategic input before changes are made. The six steps in change management:
- Define change-management processes and practices
- Receive change requests
- Plan and document the implementation of changes
- Implement and monitor the changes
- Evaluate and report on implemented changes
- Modify the change-management plan, if necessary

During the maintenance stage, one approach is to divide it into three sub-stages: request control, change control, and release control.

Request control – manages the users’ requests for modifications to the software product and collects the information used to administer this process. Steps in this process include:
- Establishing the priorities of requests
- Estimating the cost of the changes requested
- Determining the interface that is presented to the user

Change control – the principal step in the maintenance stage, which handles the following issues:
- Recreating and analyzing the problem
- Developing the changes and corresponding tests
- Performing quality control
- The tool types to be used in implementing the changes
- The documentation of the changes
- The restriction of the changes’ effects on other parts of the code
- Recertification and accreditation, if required

Release control – implements the latest release of the software; it involves determining which requests will be included in the new release, archiving the release, configuration management, quality control, distribution, and acceptance testing.
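The six change-management steps above can be sketched as a minimal request tracker. The class, status names, and sample request are illustrative, not part of any standard.

```python
# Minimal change-request tracker following the six steps above.
# All names and statuses are illustrative, not a standard API.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    description: str
    status: str = "received"      # received -> planned -> implemented -> evaluated
    history: list = field(default_factory=list)

    def advance(self, new_status: str, note: str = "") -> None:
        """Record the old status in an audit trail, then move on."""
        self.history.append((self.status, note))
        self.status = new_status

cr = ChangeRequest("Add audit logging to the login module")
cr.advance("planned", "impact analysis approved")
cr.advance("implemented", "deployed to staging")
cr.advance("evaluated", "no regressions found")
print(cr.status)   # evaluated
```

The point of the `history` list is the audit trail that change control requires: every transition, with its justification, remains documented.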
Object-Oriented Programming (OOP)
Object-oriented programming (OOP) is a modular form of programming that allows pieces of software to be reusable and interchangeable between programs. The practice of reusing tested and reliable objects is a more effective method of programming and reduces programming costs. Because it utilizes modules, a programmer can easily make changes to an existing program. Code from one class can be passed down to another through the process of inheritance; thus, new modules that inherit features from existing objects can be incorporated into the program. These objects can be managed through an object program library that controls and manages the deposit and issuance of tested objects to users. To safeguard against disclosure incidents and violations of the integrity of objects, security controls must be implemented for the program library. In addition, objects can be made accessible to users through Object Request Brokers (ORBs), which are designed to support interaction among heterogeneous, distributed environments. ORBs find and distribute objects across networks and can be considered middleware, as they reside between two other entities. An ORB is a component of the Object Request Architecture (ORA), a high-level framework for a distributed environment developed by the Object Management Group (OMG). The other components of the ORA are object services, application objects, and common facilities. The OMG has also developed the Common Object Request Broker Architecture (CORBA), an industry standard that allows programs written in different languages, running on different platforms and operating systems, to interface and communicate with each other. To use this compatible interchange, a user requires a small amount of initial code and an Interface Definition Language (IDL) file; the IDL file identifies the methods, classes, and objects that are the interface targets.
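Inheritance, as described above, lets a new class reuse tested code from an existing one. A minimal sketch, with illustrative class names:

```python
# Inheritance: a new class reuses (inherits) tested code from an
# existing class. Class and method names are illustrative.
class Account:
    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount: float) -> None:
        self.balance += amount

class AuditedAccount(Account):
    """Inherits Account's behavior and adds an audit trail."""
    def __init__(self, owner: str):
        super().__init__(owner)
        self.log = []

    def deposit(self, amount: float) -> None:
        super().deposit(amount)          # reuse the tested parent code
        self.log.append(("deposit", amount))

acct = AuditedAccount("alice")
acct.deposit(50.0)
print(acct.balance, acct.log)
```

`AuditedAccount` did not re-implement the balance arithmetic; it inherited it, which is exactly the cost-saving reuse the text describes.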
All About Database Management
A database system can be used to retain and process data in one or more tables without needing to write specific programs to run these functions. A Database Management System (DBMS) provides high-level commands to work on the data in the database. The DBMS allows the database administrator to control all components of the database, including its design, performance, and security. The following are examples of various types of databases:
Hierarchical databases – link records in a tree structure. Each record is restricted to one owner, so a hierarchical database often cannot be correlated to structures in the real world. Mesh (network) databases – more flexible than hierarchical databases; they employ a lattice structure in which each record can have multiple parent and child records. Relational databases – consist of a collection of tables linked by their primary keys. Relational designs are the most commonly used by organizations, and most relational databases use SQL as their query language. Object-oriented databases – designed to overcome some of the limitations of large relational databases. Object-oriented databases don’t employ a high-level language such as SQL, but they support modeling and the creation of data as objects.
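A quick sketch of the relational model described above, using Python's built-in sqlite3 module: two tables linked by a primary key and queried with SQL. Table and column names are illustrative.

```python
# Two relational tables linked through a primary key, queried with SQL.
# Uses Python's built-in sqlite3; the schema is illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, "
            "dept_id INTEGER REFERENCES dept(id))")
con.execute("INSERT INTO dept VALUES (1, 'Security')")
con.execute("INSERT INTO emp VALUES (10, 'Bob', 1)")

# Join the two tables through the key relationship
row = con.execute("SELECT emp.name, dept.name FROM emp "
                  "JOIN dept ON emp.dept_id = dept.id").fetchone()
print(row)   # ('Bob', 'Security')
```

The `JOIN ... ON` clause is what "linked by their primary keys" means in practice: rows in separate tables are correlated at query time rather than stored together.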
Transaction Processing in Database Management
In database management, transaction management is required to ensure that only one user at a time can modify data and that transactions are valid and complete. When more than one user modifies the database at the same time (known as concurrency), controls must be applied so that modifications made by one user do not impact modifications made by another. This can be accomplished by data locking. Locking resolves the issues associated with concurrency and ensures successful completion of all transactions. Locking also supports isolation of transactions, allowing all transactions to run in complete separation from one another even though multiple transactions may be running at any time. The result is serializability: a set of concurrent transactions produces the same database state that would be achieved if the transactions were executed serially, i.e., one after another. Serializability is important to ensure the data in the database is correct at all times; however, many transactions do not require full isolation. The degree to which a transaction is prepared to accept inconsistent data is called its isolation level, i.e., the degree to which one transaction must be separated from other transactions. A lower isolation level increases concurrency at the expense of data correctness, while a higher isolation level ensures that data is correct but can negatively affect concurrency. If a transaction executes without errors, its modifications become a permanent part of the database. Every transaction that alters data either transitions the database to a new point of consistency and is committed, or is rolled back to the original state of consistency. Transactions are never left in an in-between state in which the database is inconsistent.
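The lost-update problem that locking prevents can be sketched with an in-memory counter instead of a real database. The account balance and thread counts are illustrative; the lock plays the role of the record lock described above.

```python
# Data locking sketch: a lock serializes concurrent modifications so
# one writer's read-modify-write cannot clobber another's.
import threading

balance = 0
lock = threading.Lock()

def credit(amount: int, times: int) -> None:
    global balance
    for _ in range(times):
        with lock:                 # acquire the "record lock"
            balance += amount      # the update runs in isolation

threads = [threading.Thread(target=credit, args=(1, 10_000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)   # 40000: correct only because the lock was held
```

Without the `with lock:` block, two threads could read the same old balance and each write back an increment of it, silently losing updates, which is exactly the concurrency hazard the text describes.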
All transactions must form a logical unit of work and exhibit four properties, called the ACID properties (Atomicity, Consistency, Isolation, and Durability). These properties dictate that:
- Atomicity: A transaction must be an atomic unit of work, i.e., either all of its data modifications are performed or none are.
- Consistency: When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction’s modifications to maintain data integrity, and all internal data structures must be correct at the end of the transaction.
- Isolation: Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transaction. A transaction either sees data in its original state, before another concurrent transaction modified it, or it sees the data after the second transaction has completed; it never sees an in-between state. This is referred to as serializability, because it makes it possible to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.
- Durability: After a transaction has completed, its effects are permanently in place in the system. The modifications must persist even in the event of a system failure.

When a transaction begins, the DBMS must hold several resources until the end of the transaction to protect its ACID properties. If data is altered, the modified rows must be protected with exclusive locks that block any other transaction from reading them, and those exclusive locks must be retained until the transaction is committed or rolled back.
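Atomicity can be demonstrated with sqlite3: a transfer between two accounts either commits in full or, after a failure midway, is rolled back with no in-between state. The table, account names, and simulated failure are illustrative.

```python
# Atomicity sketch with sqlite3: either all modifications in the
# transaction commit, or rollback leaves the data untouched.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE acct (name TEXT PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO acct VALUES ('a', 100), ('b', 0)")
con.commit()

try:
    con.execute("UPDATE acct SET balance = balance - 50 WHERE name = 'a'")
    raise RuntimeError("simulated failure mid-transaction")
    con.execute("UPDATE acct SET balance = balance + 50 WHERE name = 'b'")
    con.commit()
except RuntimeError:
    con.rollback()                 # undo the partial debit

balances = dict(con.execute("SELECT name, balance FROM acct"))
print(balances)   # {'a': 100, 'b': 0}: no in-between state survives
```

The failure struck after the debit but before the credit; because `rollback()` discarded the whole unit of work, no money vanished, which is the atomicity guarantee stated above.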
A data warehouse is an electronic repository of data drawn from multiple databases and made available to users for queries. The data has been merged, integrated, and summarized so it can be used for trend analysis and business decision-making, offering a strategic view.
To produce a data warehouse, data is retrieved from an operational database, repetitive content is removed, and the data is normalized. The data is then transferred into a relational database and can be analyzed by using On-Line Analytical Processing (OLAP) and statistical modeling tools. Data that is kept in a data warehouse must be maintained to ensure that it is timely and valid.
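The extract, clean, and load steps described above can be sketched with plain Python. The source rows, field names, and normalization rules are illustrative.

```python
# ETL sketch: extract operational rows, normalize them, drop
# repetitive content, then load for analysis. Data is illustrative.
operational_rows = [
    {"customer": "ACME ", "region": "east", "sale": 100},
    {"customer": "acme",  "region": "EAST", "sale": 100},   # duplicate
    {"customer": "Globex", "region": "west", "sale": 250},
]

def normalize(row):
    """Put free-form operational fields into a canonical form."""
    return {"customer": row["customer"].strip().lower(),
            "region": row["region"].lower(),
            "sale": row["sale"]}

seen, warehouse = set(), []
for row in map(normalize, operational_rows):
    key = (row["customer"], row["region"], row["sale"])
    if key not in seen:            # remove repetitive content
        seen.add(key)
        warehouse.append(row)

print(len(warehouse))   # 2 distinct facts ready for OLAP queries
```

Note that the duplicate was only detectable after normalization: "ACME " and "acme" are the same customer once case and whitespace are canonicalized, which is why normalization precedes deduplication in the text.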
The Data Mining Process
Data mining is the process of analyzing data to identify and interpret patterns and relationships in the data. The end result of data mining is metadata, or data about data. The patterns gleaned from the data can help organizations get a clearer perspective on their competitors and understand the behavior and patterns of their customers in order to carry out strategic marketing. Information acquired from the metadata should be returned to the data warehouse so it is available for future queries and metadata analyses. Data mining is also useful in security situations: it can monitor for anomalies, determine whether aggregation or inference problems exist, and analyze audit information.
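A minimal sketch of the security use mentioned above: deriving metadata (a frequency summary) from raw audit records and flagging anomalies. The log entries and the threshold rule are illustrative.

```python
# Data-mining sketch: derive metadata (a pattern summary) from raw
# audit records, then flag anomalous users. Data is illustrative.
from collections import Counter

audit_log = ["alice", "alice", "bob", "alice", "mallory",
             "mallory", "mallory", "mallory", "mallory", "bob"]

counts = Counter(audit_log)              # metadata: data about the data
mean = len(audit_log) / len(counts)      # mean logins per user
anomalies = [user for user, n in counts.items() if n > mean]
print(counts, anomalies)
```

The `Counter` is the metadata; the comparison against the mean is a (deliberately crude) anomaly rule. Real audit analysis would use richer features and baselines, but the shape of the process is the same: summarize, then look for outliers.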
What is a Data Dictionary?
A data dictionary is a database for system developers: it logs all of the data structures used by an application. Sophisticated data dictionaries incorporate application generators that use the data logged in the dictionary to automate some of the program production tasks. The data dictionary interacts with the DBMS, the program library, applications, and the information security system. A data dictionary can also be organized with one primary data dictionary and secondary data dictionaries. The primary data dictionary supplies a foundation of data definitions and central control, while the secondary dictionaries support separate development projects, serve as backups to the primary dictionary, and act as a partition between the development and test databases.
The Knowledge Management Tool
Knowledge management is a comprehensive tool in that it attempts to utilize all of the knowledge of the organization. It ties together databases, document management, business processes, and information systems; it interprets the data originating from these systems and automates the extraction of knowledge. This knowledge discovery process takes the form of data mining, with three main approaches:
Classification approach — used for pattern discovery and for large databases that need to be condensed to only a few individual records. Probabilistic approach — used in planning and control systems, and in applications that involve ambiguity. Statistical approach — used to construct rules and generalize patterns in the data.
Business Continuity Planning and Disaster Recovery Planning
Natural disasters are a threat every organization must be prepared for. Earthquakes and tornadoes, as well as man-made disasters such as arson or explosions, can jeopardize the very existence of an organization. Every organization therefore requires business continuity and disaster recovery planning to manage the impact of such disasters. Business Continuity and Disaster Recovery Planning involve the preparation, testing, and updating of the policies and procedures required to protect critical business assets from major disruptions to normal operations. Business Continuity Planning (BCP) involves the assessment of risks to organizational processes and the creation of policies, plans, and procedures to effectively deal with those risks, while Disaster Recovery Planning (DRP) describes the protocol in place for the organization to return to normal operations after a disaster.
The Business Continuity Planning (BCP) Process
The BCP process, as defined by (ISC)2, has four stages:
- Project Scope and Planning
- Business Impact Assessment (BIA)
- Continuity Planning
- Plan Approval and Implementation
Project Scope and Planning
Scope and plan initiation is the first stage in the creation of a business continuity plan (BCP). It involves drafting the scope of the plan and the other elements needed to define its framework. This phase should include careful analysis of the organization’s operations and support services as they relate to crisis response and planning. Scope planning can include creating a detailed account of the work required, listing the resources to be used, and defining the management practices to be employed.
Business Organization Analysis
An analysis of the business organization is one of the first action steps for those responsible for business continuity planning. This analysis is used to take stock of all departments and individuals who have a stake in the BCP process. This could include:
- Operational departments that are responsible for the organization’s core services
- Critical support services, such as the IT department, the plant maintenance department, and other groups responsible for maintaining the systems that support the operational departments
- Senior staff members responsible for the continued viability of the organization

The business organization analysis is usually executed by the individuals leading the BCP effort. A thorough review of the analysis should be a group task of the BCP team.
BCP Team Selection
The BCP team should not consist exclusively of the IT and/or security departments. Instead, the team should include, at a minimum: members from each of the organization’s departments that perform its core services; members from the key support departments identified by the organizational analysis; IT representatives with technical knowledge of the topics covered by the BCP; security representatives with knowledge of the BCP process; legal representatives familiar with corporate legal, regulatory, and contractual responsibilities; and members of senior management. This ensures the inclusion of the knowledgeable individuals who maintain the day-to-day operations of the business and keeps them informed about the plan’s specifics before implementation.
Resource Requirements: The Three Phase Process
After the business organization analysis has been performed, the team should turn to examination of resources required by the BCP effort. This involves three phases:
- BCP development, which consumes resources as the BCP team works through the four stages of the BCP process. A significant resource will be the hours and effort invested by members of the BCP team and the support staff.
- BCP testing, training, and maintenance, which will require some hardware and software resources.
- BCP implementation, which is activated when a disaster occurs and the BCP team elects to perform a full-scale deployment of the business continuity plan. This critical phase requires significant resources, including the utilization of “hard” resources.
Business Impact Assessment (BIA)
The purpose of a Business Impact Assessment (BIA) is to produce a document that outlines the resources critical to the continued viability of the organization, the vulnerabilities and threats that could compromise those resources, the probability that those threats will occur, and the impact a sustained disruption would have on the organization. Adverse effects could be financial or operational. A vulnerability assessment is a vital element of the BIA process and has three primary objectives:
Criticality Prioritization – involves identifying the critical business unit processes and assessing the adverse effects of an unforeseen disruption to those processes. Downtime Estimation – assists in evaluating the Maximum Tolerable Downtime (MTD) the organization can afford and still remain viable. Often the discovery is that the MTD is much shorter than expected: the organization can tolerate only a much briefer period of interruption than was assumed. Resource Requirements – involves pinpointing the resource requirements for the critical processes; the most time-sensitive processes should receive the most resource support.
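Criticality prioritization and downtime estimation come together in a simple ranking: processes with the shortest MTD get resources first. The process names and MTD figures below are illustrative.

```python
# Criticality prioritization sketch: rank processes by maximum
# tolerable downtime (MTD, in hours); the shortest MTD gets
# resources first. Names and figures are illustrative.
processes = [
    {"name": "payroll",        "mtd_hours": 72},
    {"name": "order intake",   "mtd_hours": 4},
    {"name": "public website", "mtd_hours": 24},
]

by_priority = sorted(processes, key=lambda p: p["mtd_hours"])
for rank, p in enumerate(by_priority, start=1):
    print(rank, p["name"], p["mtd_hours"])
```

Sorting on MTD makes the qualitative idea concrete: a process the business can only survive without for four hours outranks one it can lose for three days, regardless of how visible each process is day to day.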
Priority and Risk Identification
The first BIA task is identifying the business priorities that are most vital to the daily operations of the organization. This entails producing a detailed list of business processes and ranking them in order of importance. One approach is to split the task among team members, with each person drafting a list of the important functions within their own department; the lists are then merged into a master prioritized list for the entire organization. Priority identification is a qualitative method that helps establish business priorities. As a further step, the BCP team can draft a list of organizational assets and attach an asset value (AV), in monetary terms, to each; these figures are used in the final BIA steps to create a financially based BIA. The BCP team should also establish the maximum tolerable downtime (MTD) for each business function. Risk Identification: The next phase of the Business Impact Assessment is outlining both the natural risks and the man-made risks the organization is vulnerable to. Natural risks include hurricanes and tornadoes; earthquakes; mudslides and avalanches; and volcanic eruptions. Man-made risks include terrorist acts, wars, and civil unrest; theft and vandalism; fires and explosions; power outages; building collapses; transportation failures; and labor unrest.
The Likelihood Assessment
The next phase of the Business Impact Assessment is to identify the probability of each risk occurring. The assessment is based on an annualized rate of occurrence (ARO), which indicates the number of times the organization expects to experience a given disaster each year. An ARO should be assigned to each identified risk. These figures should be estimated from corporate history and experience, and from the advice of experts such as meteorologists, seismologists, fire prevention professionals, and other consultants.
The Impact Assessment
In the impact assessment, the BCP team should carefully study the data gathered during risk identification and likelihood assessment, then evaluate the repercussions each of the identified risks would have on the viability of the organization if it were to occur. There are three metrics the BCP team needs to examine:
The exposure factor (EF) – the amount of damage a risk poses to the asset, expressed as a percentage of the asset’s value. The single loss expectancy (SLE) – the monetary loss expected each time the risk materializes (SLE = AV × EF). The annualized loss expectancy (ALE) – the monetary loss the business expects to see as a result of the risk impacting the asset over the course of a year (ALE = SLE × ARO). The BCP team should also factor in the non-monetary consequences an interruption would have on the organization, including loss of goodwill among the organization’s client base, loss of employees after prolonged downtime, social and ethical responsibilities to the community, and negative publicity.
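The three quantitative metrics above chain together arithmetically: SLE = AV × EF, and ALE = SLE × ARO. A quick sketch, with illustrative asset value, exposure factor, and ARO figures:

```python
# Quantitative BIA metrics from the list above:
#   SLE = AV * EF     (loss per occurrence)
#   ALE = SLE * ARO   (expected loss per year)
# All input figures are illustrative.
def sle(asset_value: float, exposure_factor: float) -> float:
    return asset_value * exposure_factor

def ale(single_loss: float, annual_rate: float) -> float:
    return single_loss * annual_rate

av = 500_000        # asset value in dollars
ef = 0.40           # 40% of the asset destroyed per incident
aro = 0.25          # one incident expected every four years

loss = sle(av, ef)          # 200000.0 per occurrence
print(ale(loss, aro))       # 50000.0 expected annualized loss
```

The ALE figure is what justifies spending on mitigation: a control costing less than $50,000 per year that eliminates this risk is, on these numbers, worth buying.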
The Continuity Planning Process
Continuity Planning is concerned with developing and implementing a continuity strategy to reduce the damage a risk could inflict if it occurs. The first step of Continuity Planning is to develop a strategy that bridges the Business Impact Assessment and the rest of the Continuity Planning stage. Strategy Development: During the strategy development stage, the BCP team must identify which risks will be handled by the business continuity plan, based on the prioritized list created in the previous phases. It is not feasible to implement provisions and protocols guaranteeing zero downtime for each and every possible risk, so the team must review the maximum tolerable downtime (MTD) estimates and decide which risks are deemed acceptable and which must be mitigated by BCP continuity provisions. Once the BCP team has established which risks require mitigation and the scope of resources to be committed to each mitigation task, the next stage of Continuity Planning, the provisions and processes stage, may begin. Provisions and Processes: The provisions and processes stage is the crux of Continuity Planning. Here, the BCP team develops the procedures and mechanisms that will mitigate the risks deemed unacceptable during the strategy development stage. Three groups of assets must be safeguarded through the application of provisions and processes:
- People: The BCP must ensure the safety of personnel before, during, and after an emergency. Once this is accomplished, plans can be implemented to allow employees to conduct both their BCP and operational tasks as effectively as possible given the circumstances. To assure the completion of BCP tasks, employees should be provided with all of the resources needed to successfully execute their assigned tasks, including shelter and food where required.
- Buildings and facilities: Organizations that require specialized facilities for emergency operations, such as operations centers, warehouses, distribution/logistics centers, and repair/maintenance depots, as well as standard office facilities, manufacturing plants, and so on, require full availability of these facilities to sustain continued viability. The BCP should therefore establish mechanisms and procedures to fortify existing facilities against the risks defined in the strategy development stage. If reinforcement of these facilities is not possible, the BCP should identify alternative sites where business operations can resume immediately, or within a timeframe shorter than the maximum tolerable downtime.
- Infrastructure: Every organization relies on a functional infrastructure to conduct its critical processes. For most organizations, an integral part of this infrastructure is an IT system made up of servers, workstations, and critical communications links between sites. One BCP priority is to determine how these systems will be safeguarded against the risks identified during the strategy development stage, implementing protective measures such as computer-safe fire suppression systems and uninterruptible power supplies; redundant components or completely redundant systems and communications links can also be applied to protect business operations.
Plan Approval and Implementation
Once the BCP team has completed the design phase, the BCP document should be submitted to the organization’s senior management, including the chief executive officer (CEO), chairman, and board of directors, for approval (unless senior management was involved throughout the development stages of the plan). The BCP team should provide a detailed description of the plan’s purpose and specific provisions. Once the BCP has been approved by senior management, the BCP team can begin putting the plan into action. An implementation schedule should be developed that uses the committed resources to accomplish the established process and provision objectives as promptly as the scope of the modifications and the organizational climate allow. After all of the resources are fully deployed, the BCP team should supervise an appropriate BCP maintenance program to verify that the plan remains responsive to evolving business needs.
Training and education are vital to the implementation of the BCP. All personnel who will actively participate in the plan should receive training on the overall plan and on their individual roles and responsibilities, and everyone in the organization should attend a plan overview briefing. Members who are assigned specific BCP responsibilities should be trained and evaluated on their BCP tasks to measure efficiency and to ensure they can complete those tasks when disaster strikes. To allow for unexpected situations, at least one backup person should be trained for every BCP task, providing redundancy in case the person assigned to a task cannot reach the workplace during an emergency.
Documentation is a crucial step in the BCP process and carries three important benefits:
Documentation provides a written continuity document for BCP team members to reference in the event of an emergency, and in the absence of senior BCP team members to monitor the process. Documentation functions as an informational archive of the BCP process that will guide future personnel looking for clarity and purpose of various procedures and implement necessary changes in the plan. Documentation assists in catching flaws in the plan. It also allows draft documents of the plan to be given to non-BCP team members for a “sanity check.”
Continuity Planning Goals
The BCP document should outline the objectives of continuity planning as proposed by the BCP team and senior management. The central goal of BCP is to protect and sustain the continuous operation of the organization in emergency situations; additional goals specific to the organization’s needs can be added in this section. Statement of Importance: The statement of importance conveys the criticality of the BCP to the organization, often in the form of a letter from senior management urging all personnel to take the planning process seriously. Statement of Priorities: The statement of priorities flows from the priorities officially outlined in the BIA. It lists the functions considered integral to the continued operation of the organization, in order of importance, including those functions required to sustain business operations in emergency situations. Statement of Organizational Responsibility: The statement of organizational responsibility is issued by senior management and can be integrated into the statement of importance. It reiterates the organization’s commitment to Business Continuity Planning and informs employees, vendors, and affiliates of their responsibility to take an active role in assisting with the BCP process. Statement of Urgency and Timing:
The statement of urgency and timing conveys the importance of implementing the BCP and presents the timetable determined by the BCP team and agreed to by senior management. This statement is shaped by the urgency senior management assigns to the BCP process. If this statement is incorporated into the statement of priorities and the statement of organizational responsibility, the timetable should be included as a separate document; otherwise, the timetable and this statement can be placed in the same document.
Risk Assessment and Acceptance/Mitigation
Risk Assessment: The risk assessment section of the BCP documentation reviews the decision-making process performed during the Business Impact Assessment (BIA). It should include a review of all the risks identified during the BIA, as well as the quantitative and qualitative analyses performed to evaluate those risks. For the quantitative analysis, the actual AV, EF, ARO, SLE, and ALE figures should be included. For the qualitative analysis, the rationale behind the risk ranking should be provided to the reader.
Risk Acceptance/Mitigation: The risk acceptance/mitigation section contains the end result of the strategy development stage of the BCP process. It reviews each risk identified in the risk analysis portion of the document and describes one of two outcomes:
For risks that were deemed acceptable, it should detail why the risk was considered acceptable as well as potential future incidents that might call for reconsideration of this determination. For risks that were deemed unacceptable, it should detail the provisions and processes put into place to alleviate the risk to the organization’s continued viability.
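The quantitative figures referenced above follow the standard formulas SLE = AV × EF and ALE = SLE × ARO. A minimal Python sketch of these calculations, with invented asset values for illustration:

```python
# Hedged sketch of the standard quantitative risk formulas (AV, EF, SLE,
# ARO, ALE) mentioned above. The asset value, exposure factor, and
# annualized rate of occurrence below are illustrative only.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV x EF: expected loss from a single occurrence of the risk."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO: expected yearly loss from the risk."""
    return sle * aro

# Example: a $200,000 facility, a flood expected to destroy 30% of it
# (EF = 0.30), occurring roughly once every 10 years (ARO = 0.1).
sle = single_loss_expectancy(200_000, 0.30)   # 60000.0
ale = annualized_loss_expectancy(sle, 0.1)    # 6000.0
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")
```

The ALE figure is what the BCP team weighs against the annual cost of a mitigation when deciding whether a risk is acceptable.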
The Vital Records Program
The BCP documentation should detail a vital records program for the organization. This document specifies the storage of important business records and the methods for producing and storing backup copies of those records.
Emergency Response Guidelines
The emergency response guidelines detail organizational and individual responsibilities for prompt response to an emergency situation. ERGs should provide the first employees who encounter an emergency with the protocol to follow to activate provisions of the BCP that do not activate automatically. ERGs should cover:
prompt response procedures; who is notified; and secondary response procedures to activate until the entire BCP team is assembled.
Disaster Recovery Planning: Natural Disasters
Disaster recovery planning (DRP) is the outlining of all the potential disasters the organization might encounter and the development of the processes required to contend with those disasters if they materialize. An effective DRP should be designed as a series of processes that kick into gear with minimal delay. Key personnel should receive comprehensive training to ensure smooth operation in the face of disaster, and the first responders on the scene should be able to promptly begin the recovery effort in an organized fashion. Natural disasters are extreme occurrences caused by natural forces beyond human control. These occurrences range from hurricanes, for which today’s technology can provide advance warning, to earthquakes, which can cause wide-scale destruction without warning. An effective disaster recovery plan should provide methods for immediate response to both predictable and unpredictable disasters:
Earthquakes usually occur along fault lines that exist in many areas of the world. If an organization is located in an earthquake zone, the DRP should include procedures to be followed when an earthquake occurs.
Floods can occur near rivers and other bodies of water and are usually the result of heavy rainfall. Flood warnings are most prevalent during the rainy season, when rivers and other bodies of water overflow their banks. Flash floods, on the other hand, can occur during storms or when torrential downpours persist and overwhelm the ecosystem. Additionally, floods can occur when dams rupture.
Storms are one of the most common natural disasters and take many forms. Severe storms can produce torrential rainfall, increasing the risk of flash floods. Hurricanes and tornadoes, which carry wind speeds in excess of 100 miles per hour, can weaken the structural foundations of homes and buildings and create widespread debris such as fallen trees. Thunderstorms carry the risk of lightning of varying intensity, which can inflict serious damage on sensitive electronic components and threaten power lines.
Fires can occur naturally, from lightning or wildfires during the dry season, and can cause extensive destruction.
Deliberate and accidental man-made disasters bring a myriad of risks to an organization. Some of the more common man-made disasters that need to be considered when preparing a business continuity plan and disaster recovery plan include: fires, explosions, terrorist attacks, labor unrest, theft and vandalism:
Man-made fires tend to be more contained than wildfires. They can arise from carelessness, faulty electrical wiring, or improper fire protection practices, and they can affect buildings, facilities, or server rooms.
Explosions can result from a number of man-made factors and can be accidental, such as gas leaks, or intentional, such as a bomb blast. The resulting damage from bombings and explosions is similar to that caused by a large-scale fire.
Acts of terrorism pose a significant challenge to disaster recovery planners because of their erratic nature and the difficulty of predicting them. However, planners must make certain resources are not overextended against a terrorist threat at the expense of threats that are more likely to occur.
Labor unrest or strikes should receive the same consideration as a fire or storm in the disaster recovery plan. A strike could suddenly arise from ongoing resentments or other labor-related concerns previously undetected. The BCP and DRP teams should identify possible sources of labor unrest and consider alternative plans in case it occurs.
Theft and vandalism represent the same kind of threat as a terrorist attack, but on a much smaller scale; however, they are far more likely to occur than a terrorist attack. A business continuity and disaster recovery plan should include effective preventative measures to reduce the frequency of these incidents, as well as contingency plans to mitigate the damage theft and vandalism have on an organization’s ongoing operations.
There are a number of action steps to be taken in designing an efficient disaster recovery plan that will facilitate the quick restoration of normal business operations and the resumption of activity at the main business location. These action steps include:
prioritizing business units, crisis management, emergency communications, and the actual recovery process. This recovery phase could include features such as cold sites, warm sites or hot sites.
Emergency Response Plan
The disaster recovery plan should outline the protocol key personnel should follow upon discovering that a disaster is unfolding or is imminent. The protocol will depend on the type of disaster that strikes, the personnel responding to the emergency, and the window of time available to evacuate facilities and/or shut down equipment. Because these procedures will likely be performed in the midst of an unfolding crisis, the plan should include a checklist of tasks arranged in order of priority, with the most critical tasks first.
The disaster recovery plan should include a list of personnel to be contacted in the event of a disaster. Normally, this will include essential members of the DRP team as well as those personnel responsible for critical disaster recovery tasks throughout the organization. The personnel notification list should include an alternate means of contact for each member and a backup person in case the primary contact is unreachable or can’t make it to the recovery site. This checklist should be distributed to all personnel who might respond to a disaster, which will assist in prompt notification of key personnel.
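The fallback structure described above can be sketched as a small data model; the names, numbers, and `notify` helper below are invented for illustration:

```python
# Hedged sketch of a personnel notification list: each entry carries an
# alternate contact method and a backup person, as the plan above requires.
# All names and contact details are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    name: str
    primary_phone: str
    alternate_contact: str                    # e.g., personal cell or home e-mail
    backup_person: Optional["Contact"] = None

def notify(contact: Contact, reachable: set) -> str:
    """Return the name of the person actually notified, falling back to
    the backup person if the primary contact is unreachable."""
    if contact.name in reachable:
        return contact.name
    if contact.backup_person is not None:
        return notify(contact.backup_person, reachable)
    raise RuntimeError(f"No reachable contact for {contact.name}")

backup = Contact("J. Doe (backup)", "555-0102", "jdoe@example.com")
lead = Contact("A. Smith (DRP lead)", "555-0101", "asmith@example.com", backup)

# The DRP lead is unreachable, so notification falls back to the backup.
print(notify(lead, reachable={"J. Doe (backup)"}))
```

Keeping the backup chain explicit in the list itself is what allows responders to work through it mechanically under stress.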
Business Unit Priorities
To efficiently stabilize the ongoing processes of an organization when a disaster occurs, the recovery plan should identify the business units with the highest priority; these units should be reinstated first. It’s important for the DRP team to identify those business units and reach consensus on the order of prioritization. This is similar to the prioritization task the BCP team performed during the Business Impact Assessment (BIA) and can be based on the resulting BIA documentation. In addition to listing units in prioritized order, a breakdown of the processes within each business unit should also be drafted, again in order of priority. This breakdown clarifies which processes merit the highest priority, since not every function performed by the highest-priority business unit qualifies as top priority. In such cases it might be prudent to restore the highest-priority unit to 50 percent capacity and then move on to lower-priority units to reinstate some minimum operating capacity across the organization before attempting complete recovery.
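The phased restoration idea above can be sketched as a simple ordering routine; the unit names and capacity percentages below are invented for illustration:

```python
# Hedged sketch of phased recovery: restore the highest-priority unit to
# partial capacity, bring the remaining units to a minimum operating level,
# then return everything to full capacity in priority order.

business_units = [
    {"name": "Order Processing", "priority": 1},   # illustrative units
    {"name": "Customer Support", "priority": 2},
    {"name": "Marketing",        "priority": 3},
]

def recovery_sequence(units, partial=50, minimum=25):
    """Yield (unit name, target capacity %) steps in recovery order."""
    ranked = sorted(units, key=lambda u: u["priority"])
    # Phase 1: highest-priority unit to partial capacity.
    yield ranked[0]["name"], partial
    # Phase 2: minimum capacity across the remaining units.
    for unit in ranked[1:]:
        yield unit["name"], minimum
    # Phase 3: full recovery, again in priority order.
    for unit in ranked:
        yield unit["name"], 100

for name, capacity in recovery_sequence(business_units):
    print(f"{name}: restore to {capacity}%")
```

The point of the staged percentages is exactly the trade-off described above: broad minimum capability first, complete recovery second.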
Crisis Management for Disaster Recovery
An efficient disaster recovery plan should help assuage the panic that sets in once a disaster strikes. Employees who are most likely to be at ground zero, such as security guards and technical personnel, should be trained in the disaster recovery procedures and know the proper notification procedures and immediate response mechanisms. Ongoing training on disaster recovery responsibilities should also be provided, along with crisis training if the budget permits. This extra measure ensures that some personnel will know the disaster protocol and can offer guidance to other employees who didn’t receive comprehensive training.
Emergency Communications for Disaster Recovery
Communication is critical in the disaster recovery process. An organization should be able to communicate both internally and externally when a disaster strikes. A disaster of any significance can be assumed to attract attention within the local community; if an organization is unable to inform persons outside the organization of its recovery status, the public may assume that the organization is unable to recover. It is equally critical that internal communications are sustained during a disaster so employees know what is expected of them. If an incident such as a tornado destroys communication lines, it’s important to have alternative means of communicating both internally and externally.
Hot, Cold and Warm Sites
Hot Sites: A hot site is the direct opposite of a cold site in terms of functionality. It is a backup facility that is fully operational and equipped with the necessary hardware, software, telecommunication lines, and network connectivity to allow an organization to be up and running almost immediately. A hot site has all required servers, workstations, and communications links and can function as a branch office or data center that is online and connected to the production network. In addition, a backup of the data from the systems at the primary site is held on servers at the hot site. This can be a replication copy of the data from the production servers, replicated to the hot site in real time so that an exact duplicate of the systems is ready if and when required; alternatively, the data can be restored to the servers from the most up-to-date replicated copies. Hot sites greatly reduce or eliminate downtime for the organization. The disadvantage of this type of facility is cost: maintaining a fully functional hot site essentially doubles the organization’s budget for hardware, software, and services and requires additional manpower to maintain the site.
Cold Sites: A cold site is a facility that is large enough to handle the operational load of the organization and has the appropriate electrical and environmental support systems. The drawback is that a cold site has no online computing facilities, no active broadband communications links, and no part of the production network. A cold site may have a portion of the needed equipment to resume operations, but installation time would be required before data could be restored to servers. Because it has no operating computing base or communication links, a cold site is inexpensive to maintain, as it doesn’t require the upkeep of workstations and servers.
The challenge is the amount of time and work involved in setting up fundamental resources before the site is fully operational.
Warm Sites: A warm site is the middle ground between a hot site and a cold site. Although it’s not as well equipped as a hot site, it has a portion of the necessary hardware, software, data circuits, and other resources needed to quickly restore normal business operations. This equipment is usually preconfigured and primed to run the applications that support the organization’s operations. However, there is no data replication to the servers, and a backup copy is not available on-site; the bulk of the data must be brought to the site and restored to the standby servers. Activation of a warm site typically takes 12 hours from the time a disaster is declared. In exchange, warm sites avoid the significant telecommunications and personnel costs inherent in maintaining a near-real-time copy of the operational data environment.
Alternate Recovery Sites
Alternate recovery sites are significant to the disaster recovery plan because they give organizations a backup location to maintain operations and minimize, or even eliminate, downtime in the event of a disaster. An organization may require temporary facilities where data can be restored to servers and business functions can resume. Without such a facility, the organization would be forced to relocate and replace equipment before normal operations could resume, which can demand extensive resources, including labor, time, and money, and could leave the organization no longer economically viable. With an alternate recovery site available, an organization can restart its business operations when the primary site is rendered unsound by the disaster. There are many options for alternate recovery sites, but the four most commonly used in disaster recovery planning are cold sites, warm sites, hot sites, and mobile sites. When determining the location for these sites, it’s important that they be in a different geographic area; if the alternate site is within close proximity of the primary site, it’s vulnerable to the same disaster.
A mobile site is one or more self-contained trailers that have all of the environmental control systems necessary to sustain a safe computing environment. Larger corporations sometimes keep these sites on a “fly-away” basis, ready to activate and send to any operating location around the world via air, rail, sea, or surface transportation. Smaller firms might negotiate with a mobile site vendor in the local area to provide these services on an as-needed basis.
Mutual Assistance Agreements (MAAs)
Mutual Assistance Agreements (MAAs) provide an alternate processing option. Under an MAA, two organizations commit to helping each other in a disaster situation by sharing computing facilities or other technological resources. This support reduces the expenditure either organization would otherwise incur to establish and maintain expensive alternate processing sites such as hot sites, warm sites, cold sites, and mobile processing sites. However, there are a few disadvantages to Mutual Assistance Agreements:
MAAs are difficult to administer. Both parties implicitly trust that support will be provided in the event of a disaster, yet the unaffected organization could dishonor the agreement. Both organizations should be located within reasonable proximity of each other to expedite the transportation of personnel between sites; however, if the locations are too close, both organizations may be subject to the same disaster. Confidentiality issues also often make organizations reluctant to share data with each other. Despite these concerns, an MAA may be a useful disaster recovery solution, especially if cost is an overriding factor.
Database Recovery Definitions
For organizations that depend on databases as part of their business process, the DRP team should cover database recovery planning in the disaster recovery strategy. Various methods can be used to protect the database, such as electronic vaulting, remote journaling, and remote mirroring. Each technique has its own benefits and drawbacks, and the DRP team should carefully review the organization’s computing requirements and available resources in order to select the option best suited to the organization. The definitions are below:
Electronic vaulting is the process of backing up the data in a database and sending it to a remote site through bulk transfers. The remote site can be a designated alternate recovery site, such as a warm site, or an external location used to preserve backup data. When data is stored off-site, a time delay should be factored in between the moment a disaster is declared and the time the backup site is ready for use.
Remote journaling involves backing up the data in a database and transferring it to a remote site more frequently, usually once every hour. It also involves transferring copies of the transaction logs that record all transactions since the previous bulk transfer. Remote journaling and electronic vaulting are similar in that the transaction logs transferred to the remote site are not applied to a live database server but are maintained on a backup device. When a disaster ensues, technicians retrieve the appropriate transaction logs and apply them to the production database.
Remote mirroring is the most sophisticated and most costly database backup solution. With remote mirroring, a live database server is maintained at the remote site. The remote server receives copies of database alterations as they’re applied to the production server at the main location, allowing the remote or mirrored server to take over at any time. Although remote mirroring is a popular option for organizations, it demands high infrastructure and manpower costs to support the mirrored server.
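The difference between bulk transfers and log shipping can be sketched in a few lines of Python; the data structures below are invented stand-ins, not a real backup product’s API:

```python
# Hedged sketch contrasting electronic vaulting and remote journaling:
# vaulting ships periodic full snapshots of the database, while journaling
# ships only the transaction-log entries recorded since the last transfer.
# Neither applies the data to a live server (that would be remote mirroring).

class RemoteBackupDevice:
    """Stands in for the backup device at the remote site."""
    def __init__(self):
        self.bulk_copies = []   # snapshots received via electronic vaulting
        self.journal = []       # transaction-log entries received via journaling

def electronic_vault(database: dict, remote: RemoteBackupDevice) -> None:
    # Bulk transfer: send a full snapshot of the database.
    remote.bulk_copies.append(dict(database))

def remote_journal(txn_log: list, last_sent: int, remote: RemoteBackupDevice) -> int:
    # Send only the transactions recorded since the previous transfer,
    # and return the new high-water mark.
    remote.journal.extend(txn_log[last_sent:])
    return len(txn_log)

remote = RemoteBackupDevice()
db = {"acct:1": 100}
electronic_vault(db, remote)                  # periodic bulk copy
log = [("acct:1", +50), ("acct:1", -20)]      # transactions since the snapshot
sent = remote_journal(log, 0, remote)         # hourly log shipment
print(len(remote.bulk_copies), len(remote.journal))  # 1 2
```

After a disaster, technicians would restore the latest snapshot and then replay the journaled transactions against it, which is exactly the recovery sequence described above.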
Documentation for the Disaster Recovery Plan
The disaster recovery plan should be fully documented, and proper training should be given to all members who will be involved in the disaster recovery effort. When developing a training plan, the DRP team should consider orientation training for new employees; training for members taking on a new role in the disaster recovery plan; periodic reviews of the plan for all team members; and refresher training for all other employees.
Testing and Maintenance for the Disaster Recovery Plan
The disaster recovery plan should also be tested periodically to check for flaws and to make sure that the plan remains sound and in step with the evolving needs of the organization. The types of tests that can be run will vary according to the level of recovery facility (cold, warm, etc.) available to the organization. The five main tests are the following:
The checklist test is the distribution of copies of the disaster recovery checklists to the DRP team and other key personnel for review. This ensures that key personnel are informed of their responsibilities and review the information on a periodic basis. It also allows for spot-checking of erroneous or obsolete information and revision of items that require updating due to changes within the organization, and it identifies situations in which key personnel have left the organization; in those situations, the disaster recovery responsibilities assigned to those employees should be reassigned.
The structured walk-through involves role-play of a disaster scenario by the DRP team. The test moderator selects a specific scenario and presents the details to the team at the time of the test. The DRP team members then review copies of the disaster recovery plan and discuss the appropriate responses and any areas of the plan that are problematic for that particular type of disaster.
The simulation test is similar to the structured walk-through. Here the DRP team members are given a test scenario and asked to develop an appropriate response, and these responses are then tested for efficiency. This may involve scheduling around non-critical business activities and the use of some operational personnel.
The parallel test entails the relocation of key personnel to the alternate recovery site and the activation of site activation procedures. During this test, operations at the main facility are not interrupted.
The full-interruption test is similar to the parallel test, but operations at the primary site are shut down and transferred to the recovery site.
Types of Computer Crimes
Computer crimes consist of situations where computers are used as a tool to plan or commit a crime, and situations where a computer or a network is the victim of the crime. The most common types of computer crimes include:
Denial of Service (DoS) and Distributed Denial of Service (DDoS); password theft; network invasions; emanation eavesdropping; social engineering; unlawful content, such as child pornography; fraud; software piracy; dumpster diving; malicious code; spoofing of IP addresses; information warfare, which is a barrage on the information infrastructure of a nation and could include attacks on military or government networks, communication systems, power grids, and the financial sector; espionage; destruction or alteration of information; masquerading; embezzlement; and the use of computers in the planning of terrorism.
The Common Law System
There are three principal categories of laws in the legal system referred to as the Common Law system: criminal law, civil law, and administrative law. Each is used to address different circumstances and to levy different penalties upon the perpetrator.
Criminal law serves to sustain social peace and safety and consists of the laws that the police and other law enforcement agencies enforce. It includes laws against acts such as murder, assault, robbery, and arson. Several criminal laws are in place to protect society against computer crime, including the Computer Fraud and Abuse Act, the Electronic Communications Privacy Act, and the Identity Theft and Assumption Deterrence Act. These laws are developed by elected representatives who serve in the legislative branch of government and must adhere to the country’s constitution. All laws are open to judicial review by the courts; if a court finds that a law is unconstitutional, it has the power to render it invalid.
Civil law is designed to preserve social order and comprises the laws that govern issues requiring impartial arbitration, such as contract disagreements and real estate transactions. Civil laws also lay the groundwork for the executive branch of government to execute its duties. Like criminal laws, civil laws are put into effect by the elected representatives who serve in the legislative branch of government and are subject to the same constitutional restraints and judicial review procedures.
Administrative law is designed to administer authoritative decisions by outlining the procedures to be adhered to within a federal agency. It is not developed by the legislative branch of government but is declared in the Code of Federal Regulations (CFR). Although administrative law isn’t dependent upon an act of the legislative branch to obtain the force of law, it must be in accordance with all existing civil and criminal laws; government entities can’t apply rules that conflict with existing laws passed by the legislature. Additionally, administrative law must be in accordance with the country’s constitution and is subject to judicial review. Other aspects of law within the common law system that pertain to information systems are intellectual property and privacy laws.
Intellectual Property Law Categories
Intellectual property law consists of a number of categories designed to protect the intellectual property of the author. These categories include the following:
Patent law protects inventions and processes, ornamental designs, and new varieties of plants. It provides the owner of the patent with the legal right to prevent others from using or reproducing the object covered by the patent for a specified period of time. Where a patent obtained by an individual builds on other patents, the individual must obtain permission from the owner(s) of the earlier patent(s) to exploit the new patent. In the United States, patents that protect inventions and processes are granted for a period of 20 years from the date of application, patents that protect ornamental designs are granted for a period of 14 years, and patents that protect new varieties of plants are granted for a period of 17 years. Once the patent on an invention or design has expired, anyone is free to make, use, or sell the invention or design.
A copyright protects original works of authorship and the rights of the author to solely control the reproduction, adaptation, public distribution, and performance of those works. Copyrights can also apply to software and databases. Copyright law has two provisions that address uses of copyrighted material by educators, researchers, and librarians.
Trade secret law protects and maintains the confidentiality of proprietary technical or business-related information. For protection, the owner must invest resources to develop the information, the information must be of value to the owner’s business, and the information must not be generally apparent.
A trademark establishes the identity of an owner, vendor, or manufacturer. It can be a name, a word, a symbol, a color, a sound, a product shape, a device, or any combination of these that gives a product a unique identity distinguishing it from those manufactured or sold by competitors.
A warranty is a contract that binds an organization to its product.
There are two types of warranties. An implied warranty is a non-verbal, unwritten promise created by state law that passes from a manufacturer or merchant to the customer. There are in turn two types of implied warranties: the implied warranty of fitness for a particular purpose, a commitment made by the merchant when the consumer relies on the merchant’s advice that the product is suited for a specific purpose; and the implied warranty of merchantability, the merchant’s or manufacturer’s promise that the product sold to the consumer is fit to be sold and will perform the functions it is intended to perform.
An express warranty is explicitly given by the manufacturer or merchant to the customer when a sales transaction takes place. This type of warranty defines the offer to remedy any problems or deficiencies with the product.
Information Privacy and Privacy Laws
Privacy is the legal protection from unauthorized publication of an individual’s personally identifiable information (PII). This right to privacy is embodied in the following basic principles:
Notice – regarding collection, use, and disclosure of PII
Choice – to opt out or opt in regarding disclosure of PII to third parties
Access – by consumers to their PII to permit review and correction of information
Security – to protect PII from unauthorized disclosure
Enforcement – of applicable privacy policies and obligations
Organizational Privacy Policies
Organizations establish and disclose privacy policies outlining their approach to handling PII. These usually entail:
Statement of the organization’s commitment to privacy.
The type of information the organization would collect. This could include names, addresses, credit card numbers, phone numbers, etc.
Retaining and using e-mail correspondence.
Information gathered through cookies and Web server logs and how that information is used.
How information is shared with affiliates and strategic partners.
Mechanisms to secure information transmissions, such as encryption and digital signatures.
Mechanisms to protect PII stored by the organization.
Evaluation of information protection practices.
Means for the user to access and correct PII held by the organization.
Rules for disclosing PII to outside parties.
Providing PII that is legally required.
Privacy-Related Legislation and Guidelines
Critical legislation and suggested guidelines for privacy include:
The Cable Communications Policy Act – provides for judicious use of PII by cable operators internally but places restrictions on disclosures to third parties.
The Children’s Online Privacy Protection Act (COPPA) – provides protection for children under the age of 13.
Customer Proprietary Network Information Rules – pertain to telephone companies and curb their use of customer information both internally and with third parties.
The Financial Services Modernization Act (Gramm-Leach-Bliley) – mandates that financial institutions give customers clear descriptions of the institution’s policies and procedures for protecting the PII of customers.
Telephone Consumer Protection Act – regulates communications between companies and consumers, such as in telemarketing.
The 1973 U.S. Code of Fair Information Practices – declares that there must be no personal data record-keeping systems whose very existence is secret; that there must be a way for a person to discover what information about them is on record and how it is used; that there must be a way for a person to prevent information about them from being used for purposes other than its original intent; and that any organization creating, maintaining, using, or disseminating records of identifiable personal data must ensure the integrity of the data for its intended use and must take precautions to prevent misuse of that data.
The Health Insurance Portability and Accountability Act (HIPAA) – includes Privacy and Security Rules and standards for electronic transactions and code sets.
The Platform for Privacy Preferences (P3P)
Electronic Monitoring for Privacy
Another area relating to privacy practices is electronic monitoring, including keystroke monitoring, e-mail monitoring, and the use of surveillance cameras, badges, and magnetic entry cards. The important issues in electronic monitoring are that the monitoring process is conducted in a lawful manner and implemented in a consistent fashion. Organizations that monitor employee e-mail should inform employees that e-mail is being monitored, for example by means of a prominent logon banner or some other posted notification. Additionally, the organization should make certain the monitoring process is applied uniformly, with explicit terms defining acceptable usage of the e-mail system as well as backup procedures for e-mail archiving, and should not provide a guarantee of e-mail privacy. These terms should be outlined in the organization’s e-mail usage policy.
Computer Security, Privacy and Crime Laws
The laws, regulations, and mandates about the protection of computer-related information are as follows:
The U.S. Fair Credit Reporting Act of 1970 deals with consumer reporting agencies.
The U.S. Racketeer Influenced and Corrupt Organizations (RICO) Act of 1970 addresses criminal and civil offenses involving racketeers affecting the operation of legitimate businesses; crimes detailed in this act include mail fraud, securities fraud, and the use of a computer to perpetrate fraud.
The U.S. Code of Fair Information Practices of 1973 pertains to personal record keeping.
The U.S. Privacy Act of 1974 applies to federal agencies; it protects information about private individuals contained in federal databases and regulates access to those databases. This Act assigns the U.S. Treasury Department the duties of applying physical security practices, information management methods, and computer and network controls.
The Foreign Intelligence Surveillance Act of 1978 (FISA) covers electronic monitoring and physical searches. It allows for electronic surveillance and physical searches without a search warrant in cases of international terrorism, spying, or acts of sabotage conducted by a foreign power or its agent, and it is not intended for use in prosecuting U.S. citizens.
The Organization for Economic Cooperation and Development (OECD) Guidelines of 1980 address data collection limitations, data integrity, specification of the purpose for data collection, data use restrictions, information security safeguards, transparency, participation by the individual on whom the data is being collected, and accountability of the data controller.
The Medical Computer Crime Act of 1984 addresses illegal access to or modification of electronic medical records through phone or data networks.
The Federal Computer Crime Law of 1984 was the first computer crime law passed in the U.S.; it was enhanced in 1986 and modified in 1994.
This law acknowledges classified defense or foreign relations information, records of financial institutions or credit reporting agencies, and government computers. Unlawful access or access that abuses authorization became a felony for classified information and a misdemeanor for financial information. This law made it a misdemeanor to willingly access a U.S. Government computer illegally or beyond authorization if the U.S. government’s use of the computer would be affected.
The Computer Fraud and Abuse Act
The Computer Fraud and Abuse Act of 1986 was amended in 1996 and enhanced the Federal Computer Crime Law of 1984 by introducing three new crimes:
- Using a federal interest computer to further an intended fraud.
- Altering, damaging, or destroying information in a federal interest computer, or preventing the use of the computer or information, resulting in a loss of $1,000 or more or potentially impairing medical treatment.
- Trafficking in computer passwords if it affects interstate or foreign commerce or permits unauthorized access to government computers.
Computer Security Legislation
- The Electronic Communications Privacy Act of 1986 deals with eavesdropping and the interception of message contents without distinguishing between private and public systems. This law updated the federal privacy clause in the Omnibus Crime Control and Safe Streets Act of 1968 to include digitized voice, data, or video, whether transmitted over wire, microwave, or fiber optics. Court warrants are required to intercept wire or oral communications, except for phone companies, the FCC, and police officers operating with the consent of one of the parties.
- The Computer Security Act of 1987 requires federal government agencies to conduct security-related training, to identify sensitive systems, and to develop a security plan for those sensitive systems. A category of sensitive information called Sensitive But Unclassified (SBU) has to be taken into account. This category, formerly known as Sensitive Unclassified Information (SUI), covers information below the government's classified level that is valuable enough to protect, such as medical information, financial information, and research and development knowledge. This act also divided the government's responsibility for security between the National Institute of Standards and Technology (NIST) and the National Security Agency (NSA): NIST was given the duty of monitoring information security in general, mainly for the commercial and SBU arenas, while NSA retained the duties for cryptography for classified government and military applications. The Computer Security Act also established the national Computer System Security and Privacy Advisory Board (CSSPAB), a twelve-member advisory group of experts in computer and telecommunications systems security.
- The British Computer Misuse Act of 1990 deals with computer-related criminal offenses.
- The Federal Sentencing Guidelines of 1991 outline punishment procedures for those found guilty of breaking federal law.
- The OECD Guidelines to Serve as a Total Security Framework of 1992 include laws, policies, technical and administrative measures, and education.
- The Communications Assistance for Law Enforcement Act of 1994 mandates that all communications carriers make wiretaps possible.
Achievements of the Computer Abuse Amendments Act of 1994
The Computer Abuse Amendments Act of 1994 achieved the following:
- Redefined "federal interest computer" as a computer used in interstate commerce or communications.
- Covered viruses and worms.
- Included intentional damage as well as damage done with "reckless disregard of substantial and unjustifiable risk."
- Limited imprisonment for unintentional damage to one year.
- Provided for civil action to obtain compensatory damages or other relief.
The Paperwork Reduction Act of 1980
The Paperwork Reduction Act of 1980 was enhanced in 1995 and provides Information Resources Management (IRM) directives for the U.S. Government. This law established the Office of Information and Regulatory Affairs (OIRA) in the Office of Management and Budget (OMB). Under the Paperwork Reduction Act, agencies must:
- Oversee information resources to improve the integrity, quality, and utility of information to all users.
- Oversee information resources to protect privacy and security.
- Appoint a senior official, who reports directly to the Secretary of the Treasury, to ensure that all duties assigned by the Act are achieved.
- Identify and provide security protections, in accordance with the Computer Security Act of 1987, commensurate with the degree of harm and risk potentially resulting from the misuse, loss, or unlawful access of information collected by an agency or maintained on behalf of an agency.
- Apply and enforce applicable policies, procedures, standards, and guidelines on privacy, confidentiality, security, disclosure, and sharing of information collected or maintained by or for the agency.
Important Computer Privacy Laws
- The Council Directive (Law) on Data Protection for the European Union (EU) of 1995 declares that each EU nation is to apply protections similar to those of the OECD Guidelines.
- The Economic and Protection of Proprietary Information Act of 1996 addresses industrial and corporate espionage and expands the definition of property to include proprietary economic information, in order to cover the theft of this information.
- The Kennedy-Kassebaum Health Insurance Portability and Accountability Act of 1996 (HIPAA) addresses personal health care information privacy, security, transactions and code sets, unique identifiers, and health plan portability in the United States.
- The National Information Infrastructure Protection Act of 1996 amended the Computer Fraud and Abuse Act of 1986 and is modeled after the OECD Guidelines for the Security of Information Systems. It addresses the safeguarding of confidentiality, integrity, and availability of data and systems. This approach is designed to encourage other countries to adopt a similar framework, creating a more uniform approach to handling computer crime in the existing global information infrastructure.
- The Information Technology Management Reform Act (ITMRA) of 1996, also known as the Clinger-Cohen Act, relieves the General Services Administration of responsibility for procurement of automated systems and contract appeals. OMB provides guidance, policy, and control for information technology procurement. Together with the Paperwork Reduction Act, as enhanced, this Act delineates OMB's responsibilities for overseeing agency practices regarding information privacy and security.
- Title I of the Economic Espionage Act of 1996 deals with acts of economic espionage and the national security components of the crime. The Act also defines the theft of trade secrets as a federal crime.
- The Digital Millennium Copyright Act (DMCA) of 1998 prohibits trafficking in, manufacturing, or selling anything designed to circumvent copyright protection mechanisms. It also addresses ISPs that unknowingly host copyrighted material posted by subscribers. If the ISP is notified that the material is copyrighted, the ISP must remove it. Additionally, if the posting party proves that the removed material was of "lawful use," the ISP must restore the material and notify the copyright owner within 14 business days. Two important rulings regarding the DMCA were made in 2001; they involved DeCSS, a program that bypasses the Content Scrambling System (CSS) software used to prevent the viewing of DVD movie disks on unlicensed platforms.
- The Uniform Computer Information Transactions Act of 1999 (UCITA) is concerned with libraries' access to and use of software packages, as well as the licensing practices of software vendors.
- The Electronic Signatures in Global and National Commerce Act of 2000 ("ESIGN") governs the use of electronic records and signatures in interstate and foreign commerce. It protects the validity and legal effect of contracts entered into electronically. A key provision of the act mandates that businesses obtain electronic consent or confirmation from consumers before delivering electronically any information that a law normally requires to be in writing. The legislation is intended to protect consumers' rights under consumer protection laws and goes to considerable lengths to meet this goal: a business must receive confirmation, in electronic format, that the consumer consents to receiving electronically information formerly provided in written form. This provision ensures that the consumer has access to the Internet and understands the basics of electronic communications.
- The Provide Appropriate Tools Required to Intercept and Obstruct Terrorism (PATRIOT) Act of 2001 allows for the subpoena of electronic records, the monitoring of Internet communications, and the search and seizure of information on live systems, backups, and archives.
- The Generally Accepted Systems Security Principles (GASSP) are not established laws but are principles founded in the OECD Guidelines. They state that:
  - Computer security supports the mission of the organization.
  - Computer security is an integral element of sound management.
  - Computer security should be cost-effective.
  - Systems owners have security responsibilities outside their organizations.
  - Computer security responsibilities and accountability should be made explicit.
  - Computer security requires a comprehensive and integrated approach.
  - Computer security should be periodically reassessed.
  - Computer security is constrained by societal factors.
- The E-Government Act of 2002, Title III of which is the Federal Information Security Management Act (FISMA), deals with information security controls over information resources that support Federal operations and assets.
What is Computer Forensics?
Computer forensics is the investigation of computer crimes with the objective of identifying and prosecuting the perpetrator. It involves the collection, examination, and safeguarding of information from and related to computer systems that can be used to identify and prosecute the perpetrator. For this information to be admissible in a court of law as evidence, standard computer forensics methods must be used to protect the integrity of that evidence. Because information stored on a computer is in digital format, there are particular challenges involved in the investigation of computer crimes. Investigators and prosecutors have a compressed time frame to conduct their investigation and may impose upon the normal business procedures of an organization. When gathering evidence, there might be complications in obtaining key information, as it might be stored on the same computer as data needed for the normal conduct of business.
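The integrity requirement above is commonly met in practice by computing a cryptographic hash of acquired evidence (for example, a disk image) at collection time and recomputing it before analysis and presentation in court. A minimal sketch in Python, using a hypothetical image file name:

```python
import hashlib

def hash_evidence(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of an evidence file (e.g., a disk image).

    The digest recorded at acquisition time can be recomputed later to
    demonstrate that the evidence was not altered while in custody.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so multi-gigabyte images do not exhaust memory.
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (hypothetical file name):
# original = hash_evidence("suspect_disk.img")  # recorded in the evidence log
# verified = hash_evidence("suspect_disk.img")  # recomputed before analysis
# assert original == verified, "evidence integrity check failed"
```

A mismatch between the recorded and recomputed digests indicates the evidence was modified and would undermine its admissibility.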
Categories of Evidence in Computer Forensics
To be admissible in a court of law, evidence must be relevant, legally permissible, reliable, and correctly identified, with its integrity preserved. The gathering, handling, and preservation of evidence are priorities. The evidence gathered at a computer crime scene is usually intangible and susceptible to easy alteration without being traceable. Because of this risk, evidence must be handled carefully and properly monitored throughout the evidence life cycle, which covers the evidence gathering and application process: discovery and recognition, protection, recording, collection, identification, preservation, transportation, presentation in a court of law, and the return of evidence to the owner. The gathering of evidence can include collecting all relevant storage media, obtaining an image of the hard disk before cutting power, taking and printing a screen shot, and avoiding degaussing equipment. Preservation of evidence includes archiving and logging all information related to the computer crime until investigation procedures and legal proceedings are completed; safeguarding magnetic media from deletion; storing evidence in the appropriate environment, both onsite and offsite; and defining, documenting, and following strict methods for securing and accessing evidence both onsite and offsite. Evidence gathered for a court of law falls into different categories, such as:
- Best evidence: the original or source evidence rather than a copy or duplicate.
- Secondary evidence: a replication of the evidence or an oral description of its contents; not as solid as best evidence.
- Direct evidence: proves or disproves a specific act through oral testimony based on information gathered firsthand by a witness.
- Conclusive evidence: incontrovertible evidence that trumps all other categories of evidence.
- Opinions, a category that can be divided into two types: expert opinions, which can offer an opinion based on personal expertise and facts, and nonexpert opinions, which can testify only as to facts.
- Circumstantial evidence: inference of information from other, intermediate, relevant facts.
- Hearsay evidence: evidence obtained from a source other than firsthand knowledge. Hearsay evidence is considered weak and is generally not admissible in court. Computer-generated records and other business records are considered hearsay evidence because the information cannot be proven to be implicitly accurate and reliable. However, records may be admitted as evidence under certain exceptions, when they are:
  - Made during the regular conduct of business and authenticated by witnesses familiar with their use
  - Relied upon in the regular course of business
  - Made by a person with knowledge of the records
  - Made by a person with information transmitted by a person with knowledge
  - Made at or near the time of occurrence of the act being investigated
  - In the custody of the witness on a regular basis
Chain of Custody
Because of the critical nature of evidence, it is crucial that its continuity be preserved and documented. A chain of custody, also referred to as a chain of evidence, must be produced to show how the gathered evidence went from the crime scene to the courtroom. Policies and procedures dealing with the management of evidence must be followed. Evidence management starts at the crime scene. When a crime scene is being processed, each piece of evidence must be sealed inside an evidence bag that has two-sided tape that allows it to be sealed shut. Each evidence bag must be tagged to identify the evidence and marked with a case number, the date and time the evidence was secured, the name of the investigator who discovered the evidence, and the name or badge number of the person who obtained the evidence. An evidence log should also be created. Information contained in the log should include a description of each piece of evidence, serial numbers, identifying marks or numbers, and any other information required by policy or local law. The log also records the chain of custody, chronicling who possessed the evidence after it was initially tagged, transported, and locked in storage, and which individuals had access to the evidence while it was held in storage.
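The tagging and logging practice above can be modeled as one record per tagged item, with custody transfers appended over time. A minimal sketch, with illustrative field names and sample values (not a mandated format):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EvidenceItem:
    """One tagged piece of evidence and its chain of custody."""
    case_number: str
    description: str
    collected_at: datetime
    collected_by: str   # investigator who discovered the item
    received_by: str    # name or badge number of the person who obtained it
    custody_log: list = field(default_factory=list)

    def transfer(self, to_person: str, when: datetime, reason: str) -> None:
        # Transfers are appended, never edited, so the log shows an
        # unbroken chain from crime scene to courtroom.
        self.custody_log.append((when, to_person, reason))

# Hypothetical example entry:
item = EvidenceItem(
    case_number="2024-0117",
    description="Laptop, serial ABC123, seized from office desk",
    collected_at=datetime(2024, 1, 17, 9, 30),
    collected_by="Investigator Lee",
    received_by="Badge 4421",
)
item.transfer("Evidence locker A", datetime(2024, 1, 17, 11, 0), "storage")
```

The append-only custody log mirrors the requirement that every handoff be documented chronologically.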
The Computer Crime Investigation Process
Due to the ongoing business procedures of an organization, a computer crime investigation is complicated by several factors, and the investigation process could affect critical operations. As such, it's important to have an action plan in place for handling reports of suspected computer crimes, and a designated committee should be created beforehand. This committee should establish prior liaison with law enforcement; decide when and whether to call in law enforcement; set protocols for reporting computer crimes and for handling and processing reports of computer crime; prepare for and conduct investigations; and ensure the proper collection of evidence. When a computer crime is suspected, precautionary measures should be taken so as not to alert the suspect once the crime has been reported. The first step should be a preliminary investigation to determine whether a crime has been committed. This preliminary investigation could entail inspection of audit records and system logs, interviewing witnesses, and assessing the damage. It's very important to know when disclosure to authorities is required by law, and the timing of this disclosure is critical. Law enforcement agencies in the United States are bound by the Fourth Amendment to the U.S. Constitution, which requires that a warrant be obtained prior to a search for evidence. Private citizens are not bound by the Fourth Amendment and can search for evidence without a warrant. The exception is when a private individual is asked to search for evidence by a member of law enforcement; in that situation a warrant is required because the private individual is acting as an agent of law enforcement. An exception to the search warrant requirement for law enforcement officers is the Exigent Circumstances Doctrine: if probable cause is apparent and destruction of the evidence is believed to be imminent, the search can be conducted without the delay of obtaining a warrant.
Role of the First Responder
The first responder is the first person to encounter a crime scene and has the expertise and skill to deal with the incident. The first responder may be an officer, security personnel, or a member of the IT staff or incident response team, and is responsible for determining the magnitude and scope of the crime scene, securing it, and preserving evidence. Securing the scene is critical to both criminal investigations and internal incidents; both use computer forensics to collect evidence, and the methods for investigating internal policy violations and criminal law violations are basically the same. Depending on the circumstances, internal investigations may not need the involvement of law enforcement. Once the crime scene has been established, the first responder must set up a perimeter to contain it. Protecting the crime scene requires blocking off the area where evidence resides, and everything contained in that area should be treated as possible evidence. This includes functioning and nonfunctioning workstations, laptops, servers, handheld PDAs, manuals, and any other items in the area of the crime. Until the crime scene has been processed, all non-investigating persons should be prevented from entering the area, and those present at the time of the incident should be documented. The first responder must not touch anything contained in the crime scene. Preserving volatile evidence is another responsibility of the first responder. Traditional forensics may also be used to ascertain the identity of the individual behind the crime; law enforcement may collect DNA, fingerprints, hair, fibers, or other physical evidence.
The Computer Crime Investigator
When the investigator arrives on the scene, the first responder's first priority is to give the investigator as much information as possible. If the first responder touched or came into contact with anything, it is critical that the investigator be alerted so that this can be included in the report. Any observations should be noted, as they may offer insight into resolving the incident. If a member of the incident response team arrives first and collects some evidence, the person in charge of the team should turn that evidence over to the investigator along with any relevant information. If more than one team member collected evidence, documentation needs to be provided to the investigator detailing what each person saw and did. The appointed investigator should clearly communicate that they are leading the process and that all information and decisions must be approved by them. A chain of custody should also be established: there must be a record of who handled or possessed evidence during the course of the investigation. If the first responder has conducted an initial search for evidence, the investigator will need to determine what qualifies as evidence and where it resides. If additional evidence is discovered, the perimeter securing the scene may change. Once the boundaries are established, the investigator will either call on crime scene technicians to process the scene or perform the duties of the technician. The investigator, or a designated person, stays at the scene until all evidence has been properly collected and transported.
The Crime Scene Technician
Crime scene technicians are individuals who have been trained in computer forensics and have the knowledge, skills, and tools necessary to process a crime scene. Technicians are in charge of safeguarding and preserving evidence through meticulous procedure. The technician may obtain data from a system's memory and take images of hard disks before shutting them down. All physical evidence is sealed in a bag and tagged to identify it as a specific piece of evidence, and information describing the evidence is added to a log so that a precise inventory of each piece exists. Evidence is packaged to reduce the risk of exposure or damage, such as that from electrostatic discharge or jostling during transport. Once evidence reaches its destination, it is kept under lock and key to prevent tampering until it is properly examined and analyzed. Those involved in the investigation process have different responsibilities, and the people in each role must have specific knowledge to perform their role properly.
Liability in Disasters
The senior management of an organization has the duty of protecting the organization from losses resulting from natural disasters, malicious code, compromise of proprietary information, damage to reputation, violation of the law, employee privacy suits, and stockholder suits. Senior management must adhere to the prudent man rule, which obligates them to perform their duties with the same diligence and care that ordinary, prudent people would exercise under similar circumstances. Exercising due care means that senior management must apply mechanisms to prevent the organization's IT infrastructure from being used as a tool to attack another organization's IT system. Failure to follow the prudent man rule can make an individual liable under the Federal Sentencing Guidelines of 1997.
The (ISC)2 Code of Ethics
In order to impart proper computing behavior, ethics should be woven into organizational policy and further refined into an organizational ethical computing policy. Many organizations have contended with the issue of ethical computing and have generated guidelines for ethical behavior. The (ISC)2 Code of Ethics mandates that Certified Information Systems Security Professionals (CISSPs) shall:
- Conduct themselves in accordance with the highest standards of moral, ethical, and legal behavior.
- Not commit or be a party to any unlawful or unethical act that may negatively affect their professional reputation or the reputation of their profession.
- Appropriately report activity related to the profession that they believe to be unlawful, and cooperate with resulting investigations.
- Support efforts to promote understanding and acceptance of prudent information security measures throughout the public, private, and academic sectors of our global information society.
- Provide competent service to their employers and clients, and avoid any conflicts of interest.
- Execute responsibilities in a manner consistent with the highest standards of their profession.
- Not misuse the information with which they come into contact during the course of their duties, and maintain the confidentiality of all information in their possession that is so identified.
The Computer Ethics Institute’s Ten Commandments of Computer Ethics
The Coalition for Computer Ethics, embodied as the Computer Ethics Institute (CEI), concentrates on the intersection of advances in information technologies, ethics, and corporate and public policy. The CEI brings together industrial, academic, and public policy organizations and is concerned with the ethical issues associated with the advancement of information technologies in society. It has asserted the following Ten Commandments of Computer Ethics:
1. Thou shalt not use a computer to harm other people.
2. Thou shalt not interfere with other people's computer work.
3. Thou shalt not snoop around in other people's computer files.
4. Thou shalt not use a computer to steal.
5. Thou shalt not use a computer to bear false witness.
6. Thou shalt not copy or use proprietary software for which you have not paid.
7. Thou shalt not use other people's computer resources without authorization or proper compensation.
8. Thou shalt not appropriate other people's intellectual output.
9. Thou shalt think about the social consequences of the program you are writing or the system you are designing.
10. Thou shalt use a computer in ways that ensure consideration and respect for your fellow humans.
The Internet Activities Board (IAB) Ethics and the Internet
Under the Internet Activities Board (IAB) Ethics and the Internet, outlined in RFC 1087, activity is defined as objectionable and unethical if it purposely:
- Seeks to gain unauthorized access to the resources of the Internet
- Destroys the integrity of computer-based information
- Disrupts the intended use of the Internet
- Wastes resources such as people, capacity, and computers through such actions
- Compromises the privacy of users
- Involves negligence in the conduct of Internet-wide experiments
The U.S. Department of Health, Education and Welfare Code of Fair Information Practices
The United States Department of Health, Education, and Welfare has established a list of fair information practices that concentrates on the privacy of individually identifiable personal information. The practices declare:
- There must not be personal data record-keeping systems whose very existence is secret.
- There must be a way for a person to find out what information about them is in a record and how it is used.
- There must be a way for a person to prevent information about them that was obtained for one purpose from being used or made available for other purposes without their consent.
- Any organization creating, maintaining, using, or disseminating records of identifiable personal data must ensure the reliability of the data for their intended use and must take precautions to prevent misuse of that data.
The Organization for Economic Cooperation and Development (OECD)
The Organization for Economic Cooperation and Development (OECD) also established guidelines for ethical computing:
- The Collection Limitation Principle affirms that there should be limits on the gathering of personal data, and any such data should be obtained by lawful and justified means and, where appropriate, with the knowledge or consent of the data subject.
- The Data Quality Principle affirms that personal data should be relevant to the purposes for which they are to be used and, to the extent necessary for those purposes, should be accurate, complete, and kept up-to-date.
- The Purpose Specification Principle affirms that the purposes for the collection of personal data should be specified no later than the time of data collection, and that subsequent use should be limited to the fulfillment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose.
- The Use Limitation Principle affirms that personal data should not be disclosed, made accessible, or otherwise used except with the consent of the data subject or by the authority of the law.
- The Security Safeguards Principle affirms that personal data should be protected by sound security safeguards against such risks as loss or unauthorized access, destruction, use, modification, or disclosure of data.
- The Openness Principle affirms that there should be a policy of openness about developments, practices, and policies with respect to personal data. Methods should be readily available to establish the existence and nature of personal data and the main purposes of their use, as well as the identity and usual residence of the data controller.
- The Individual Participation Principle states that an individual should have the right:
  - To obtain from a data controller, or otherwise, confirmation of whether or not the data controller has data relating to him;
  - To have data relating to him communicated to him within a reasonable time, at a charge (if any) that is not excessive, in a reasonable manner, and in a form that is readily intelligible to him;
  - To be given reasons if a request is denied, and to be able to challenge such denial;
  - To challenge data relating to him and, if the challenge is successful, to have the data erased, rectified, completed, or amended.
- The Accountability Principle affirms that a data controller should be accountable for complying with measures that give effect to the principles stated above.
- Transborder Issues: a member country should abstain from restricting transborder transmissions of personal data between itself and another member country, except where the latter does not yet substantially observe these guidelines or where the re-export of such data would bypass its domestic privacy legislation. A member country may also enforce restrictions in respect of certain categories of personal data for which its domestic privacy legislation includes specific regulations, in view of the nature of those data, and for which the other member country provides no equivalent protection.
About Physical Security
Physical security pertains to facility construction and location; facility security, including physical access controls and technical controls; and security maintenance. Its function is to safeguard against physical threats such as fire and smoke; water; earthquakes, landslides, and volcanoes; storms; sabotage and vandalism; explosions and other forms of destruction; building collapse; toxic materials; power outages; equipment failure; and personnel loss. In most cases, a disaster recovery plan or a business continuity plan will be required if a severe physical threat occurs. The security controls that manage physical security fall into three groups: administrative controls, technical controls, and physical controls. Each is defined as follows:
- Administrative physical security controls include facility construction and selection, site management, personnel controls, awareness training, and emergency response and procedures.
- Technical physical security controls include access controls; intrusion detection; alarms; closed-circuit television (CCTV); monitoring; heating, ventilating, and air conditioning (HVAC); power supplies; and fire detection and suppression.
- Physical controls for physical security include fencing, lighting, locks, construction materials, mantraps, security guards, and guard dogs.
Administrative Physical Security Controls
Administrative physical security controls are related to the use of proper administrative processes. These processes include facility requirements planning for proper emergency protocol, personnel control, and proper facility security management.

Facility Requirements Planning: Without appropriate control over the physical environment, no amount of administrative, technical, or logical access controls can offer effective security to an organization. Control over the physical environment starts with organizing the security requirements for a facility. A secure facility plan details the security needs of the organization and highlights methods or mechanisms to implement effective security. Such a plan is created through a process called critical path analysis, an organized effort to identify relationships between mission-critical applications, methods, and procedures and all of the necessary supporting elements. When critical path analysis is performed properly, it produces a comprehensive picture of the interdependencies and interactions necessary to support the organization. Once the analysis is complete, the results supply an itemized list of elements to secure physically. This needs to be accomplished in the preliminary stages of the construction of a facility. One of the central physical security elements established during the construction stage is identifying and designing a secure site to house the organization's IT infrastructure and operations. The security needs of the organization should be the chief concern when selecting a site. The site should be accessible to both employees and external services without being close to possible hazards or areas with a high crime rate. Another issue to be considered is the level of susceptibility to natural disasters in the area.
The site should not be prone to earthquakes, mud slides, sinkholes, fires, floods, hurricanes, tornadoes, falling rocks, snow, rainfall, ice, humidity, heat, extreme cold, and so forth. The site should also be within reachable distance of emergency services, such as police, fire, and hospitals or medical facilities. Secure Facility Design: The proper security applications for a facility must be planned prior to the construction of the facility. There are several security measures to take into account in the design process, including the combustibility of construction materials, load rating, placement, and control of items such as walls, doors, ceilings, flooring, HVAC, power, water, sewage, gas, etc. The walls are required to have an acceptable fire rating. Closets or rooms that store media must have a high fire rating. The same applies to ceilings, which, like floors, must also have a secure weight-bearing rating. Additionally, the floor needs to be grounded against static buildup and must use a non-conducting surface material in the data center. Electrical cables must be contained in metal conduit, and data cables must be enclosed in raceways. The data center should be without windows, but if there are windows they must be translucent and shatterproof. Doors should be fortified against forced entry and have a fire rating equal to the walls. Also, emergency exits must be clearly marked and monitored or alarmed. Personnel safety should be the primary concern. The facility should also be supplied with backup power sources. Not all locations within a facility should have equal access. Areas that contain valuable assets or are of vital importance should have restricted access. Valuable and confidential assets should be placed in the maximum protection area provided by a facility. Work areas and visitor areas should also be planned for. Walls or partitions can be put in place to divide similar but distinct work areas.
These partitions can impede casual eavesdropping or shoulder surfing, which is a method of collecting information from a system by observing the monitor or the use of the keyboard by the operator. Floor-to-ceiling walls should be used to partition off areas with varying levels of sensitivity and confidentiality. Computer rooms should be designed to support the operation of the IT infrastructure and to block unauthorized physical access. Facility Security Management: Audit trails and emergency procedures fall under the category of facility security management. These are areas of the administrative security controls that don’t involve the initial planning of the facility but are required to preserve security on an ongoing basis. In information systems, an audit trail is a chronicle of events that concentrates on a specific type of activity, such as detecting security violations, performance problems, and design and programming flaws in applications. In physical security, audit trails are access control logs and are critical in tracing unlawful access attempts and identifying the perpetrators who made them. These are detective rather than preventative controls. To work as an effective tool, access logs must be audited regularly and must record the date, time, and location of the access attempt; the success or failure of the attempt; the identity of the individual who attempted access; and, where applicable, the identity of the supervisor who altered the relevant access privileges. Some audit trail systems can also send alerts to a security officer when several or back-to-back access failures occur. The implementation of emergency procedures, and proper training of employees on these procedures, is another important aspect of administrative physical controls. These procedures should be clearly stated, readily accessible, and updated when required. They should include emergency system shutdown protocol, evacuation procedures, employee training and awareness programs, periodic drills, and periodic equipment and systems tests. Facilities that use restricted areas to control physical security will need to address facility visitors. Escorts can be assigned to visitors, and a visitor’s access and activities should be observed carefully. Administrative Personnel Controls: Administrative personnel controls are administrative processes that are typically implemented by the Human Resources (HR) department during employee hiring and termination.
These often consist of pre-employment screening, which entails checking employment and professional references, background checks, or credit checks for sensitive positions; ongoing employee checks, including security clearance checks for employees who have access to confidential information and ongoing employee ratings or reviews by their supervisors; and post-employment procedures, including exit interviews, removal of network access and the changing of passwords, and the return of computer inventory or laptops.
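As an illustration, the access-log fields required of a physical audit trail (described under facility security management above) can be modeled as a simple record, along with the back-to-back-failure alert condition some audit systems support. The field names, threshold, and sample values here are hypothetical sketches, not drawn from any particular product.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AccessLogEntry:
    """One physical access attempt, capturing the fields an audit trail must record."""
    timestamp: datetime            # date and time of the access attempt
    location: str                  # where the attempt occurred
    subject: str                   # identity of the individual attempting access
    success: bool                  # whether the attempt succeeded
    privilege_changed_by: Optional[str] = None  # supervisor who altered privileges, if any

def consecutive_failures(entries, threshold: int = 3) -> bool:
    """Alert condition: `threshold` back-to-back failed access attempts."""
    streak = 0
    for e in entries:
        streak = streak + 1 if not e.success else 0
        if streak >= threshold:
            return True
    return False

log = [
    AccessLogEntry(datetime(2024, 1, 15, 2, 30), "data center door 3", "jdoe", False),
    AccessLogEntry(datetime(2024, 1, 15, 2, 31), "data center door 3", "jdoe", False),
    AccessLogEntry(datetime(2024, 1, 15, 2, 32), "data center door 3", "jdoe", False),
]
print(consecutive_failures(log))  # True: three failures in a row
```

Because the log records identity, time, and location, such entries support the detective role described above: tracing who attempted access, when, and where.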
Physical Access Controls
There are several types of physical access control methods that can be applied to administer, monitor, and manage access to a facility. These physical access control mechanisms range from deterrents to detection mechanisms. Facilities that have different sections, divisions, or areas designated as public, private, or restricted should have specialized physical access controls, monitoring, and prevention mechanisms for each of the designated areas. These methods can be used to separate, isolate, and control access to the areas of the facility and include fences, gates, turnstiles, and mantraps; security guards and guard dogs; badges, keys, and locks; motion detectors and alarms; as well as adequate lighting. A fence can be used to cordon off different areas and can include a wide range of components, materials, and construction methods. It can come in various constructs: painted stripes on the ground, chain link fences, barbed wire, or cement walls. Various types of fences are effective in keeping out different types of intruders: fences that are 3 to 4 feet high hinder casual trespassers; fences that are 6 to 7 feet high are difficult to climb; and fences that are 8 feet high with three strands of barbed wire deter aggressive intruders. Gates can be used to control entry and exit points in a fence. The deterrent level of a gate must be equivalent to the deterrent level of the fence to maintain effective security of the fence as a whole. Additionally, hinges and locking mechanisms of the gate should be fortified to diminish tampering, destruction, or removal. And as an extra layer of security, gates can be protected by security guards, guard dogs, or CCTV. A turnstile is a specialized gate that allows only one person at a time to gain entry to a building or room, and often permits entry but not exit, or vice versa. A mantrap is a double set of doors that is often guarded by security personnel. It’s designated as a holding area until an individual’s identity is verified and authenticated. If that information is verified and they are cleared for entry, the inner door opens, allowing them to enter the facility. If they are not cleared for entry, both doors remain locked until an escort arrives to escort them off the property or arrest them for trespassing. Locks and keys are a basic form of security and authorization mechanism. A user requires the correct key or combination to gain entry; such users are considered authorized. Key-based locks are the most utilized and inexpensive forms of physical access control devices. Combination locks offer a wider range of control, as they can be configured with multiple valid access combinations. Security guards may be stationed around a perimeter or inside to oversee access points or watch detection and surveillance monitors. They can respond to a variety of conditions or situations and are trained to recognize attack and intrusion activities and patterns. Security guards are an effective form of security control when immediate, onsite situation response and quick decision-making are required. There are a number of disadvantages to utilizing, maintaining, and relying upon security guards. Not all environments and facilities are designed to accommodate security guards, and no security guard can provide 100 percent reliability. In situations where their lives may be endangered, a security guard may be more concerned with self-protection than with preserving the security of the facility. Guard dogs can be an effective alternative to security guards. They can be deployed as a perimeter security control and have proven to be an effective deterrent mechanism. However, guard dogs require significant and ongoing maintenance and impose serious insurance and liability requirements.
A badge, or ID card, is a physical form of identification or an electronic access control device. Examples such as name tags or smart cards can use several methods of authentication to provide authorization to access a facility, designated security areas, or secured workstations. Badges typically include photos and magnetic strips with encoded data, as well as specific information about the user to help verify identity. Badges may also be used in locations where physical access is monitored by security guards; in that case, the badge is a form of visual ID inspected by security. Alternatively, badges can be used with scanning devices that read the magnetic strip, in which case the badge can be used either for identification or for authentication. Effective lighting is another method that’s typically implemented in perimeter security control. Its chief purpose is to deter would-be intruders, trespassers, and prowlers, who are more inclined to attempt unlawful entry in the dark. Though lighting is helpful, it’s not a guaranteed deterrent and should be used only as an added security measure rather than the primary method. Effective lighting should also not expose the locations of security guards, guard dogs, and patrol posts. A motion detector is a device that senses movement in a pinpointed area. When a motion detector picks up significant movement in the environment, it triggers an alarm, which can act as a deterrent, a notification mechanism, or a repellant to would-be intruders. Deterrent alarms may also trigger doors to close and engage locks, making further intrusion more difficult. Alarms that trigger repellants will blare a siren and activate lights; these types of alarms are used to hinder the intruder from furthering their activities and, hopefully, encourage them to leave the premises. Alarms that trigger notification are often silent, but they log data about the unfolding incident and notify security administrators, security guards, and law enforcement.
When motion detectors and alarms are used, secondary verification methods should be used to avoid false triggers, which can happen when birds, animals, or authorized persons accidentally trip the alarm. Using two or more detection systems, and requiring two or more triggers to occur before the alarm sounds, can diminish the rate of false alarms while preserving certainty in sensing actual intrusions or attacks. A closed-circuit television (CCTV) system is a security tactic similar to motion detectors and alarms, but it is not an automated detection-and-response system. CCTV relies on designated personnel to monitor the captured video, observe suspicious and malicious activities, and trigger alarms. CCTV is usually not employed as the main detection mechanism but as a secondary tactic that is reviewed after an alert from an automated system occurs.
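The dual-trigger verification described above can be sketched in a few lines; the sensor names and the two-confirmation threshold are illustrative assumptions, not taken from any particular alarm product.

```python
def should_trigger_alarm(sensor_readings: dict, required_confirmations: int = 2) -> bool:
    """Raise the alarm only when enough independent sensors agree, reducing false triggers."""
    confirmations = sum(1 for tripped in sensor_readings.values() if tripped)
    return confirmations >= required_confirmations

# A bird trips only the motion sensor -> no alarm.
print(should_trigger_alarm({"motion": True, "infrared": False, "vibration": False}))  # False
# An intruder trips both motion and infrared sensors -> alarm.
print(should_trigger_alarm({"motion": True, "infrared": True, "vibration": False}))   # True
```

The design choice mirrors the text: a single tripped sensor is treated as a possible false positive, while agreement between independent detection systems is treated as a genuine intrusion.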
Technical Physical Security Controls
The technical physical security controls used to administer physical access include smart cards, dumb cards, proximity readers, and biometrics. Others include audit trails, access logs, and intrusion detection systems (IDSs). Smart cards are similar to credit cards in appearance and contain an embedded magnetic strip, bar code, or integrated circuit chip. They can contain machine-readable ID information about the authorized user for verification purposes. A dumb card is an ID card that usually has a photo and printed details about the authorized user. Dumb cards are used in environments where security guards are posted. A proximity reader can be a passive device, a field-powered device, or a transponder. The proximity device is worn or held by the authorized user. When the user approaches a proximity reader, it identifies the user and determines whether they have authorized access. A passive device reflects or otherwise alters the electromagnetic field generated by the reader; this alteration is picked up by the reader. A field-powered device has electronics that are triggered upon entering the electromagnetic field generated by the reader. A transponder device is self-powered and sends out a signal received by the reader. Intrusion detection systems monitor physical activity and are designed to detect attempted entry, breach, or attack by an unauthorized user. These systems may include security guards, automated access controls, and motion detectors, as well as burglar alarms. Physical intrusion detection systems can scan for vibrations, movement, temperature changes, sound, changes in electromagnetic fields, and so on.
Environmental and Personnel Safety
Under all circumstances, the most important element of physical security is the safeguarding of human life. This is the main goal of all security methods. Flooding, fires, the release of toxic materials, and natural disasters jeopardize human life as well as the stability of a facility. Preserving the environment of a facility is an integral function in upholding safety for personnel. Although natural disasters cannot be averted, their impact can be mitigated by building facilities able to withstand them, securing high fire ratings, and installing proper mechanisms such as sprinkler systems. Basic elements such as power, noise, and temperature fluctuations should also be examined. Electrical Power Supply: Electronic equipment, including computer systems, is impacted by the quality of the power supply. The power supplied by electric companies fluctuates in consistency. Most electronic equipment requires clean power to function efficiently, and equipment disturbance or damage due to fluctuations in the quality of the power supply is common. Power supply issues such as spikes, surges, sags, brownouts, and blackouts impact the stability and operation of electronic equipment. Organizations can implement devices designed to protect electronic equipment from power supply problems. These devices include surge suppressors and uninterruptible power supplies (UPS):
A surge suppressor can be used to diminish the effects of voltage spikes and surges that exist in commercial power sources and smooth out power variations. Even with this protection, there is no known mechanism that fully protects electronic equipment from a close lightning strike. Surge suppressors are available at several outlets, including local computer dealers and superstores. Most power strips with surge protection have indicator lights. If the light goes out, it indicates the unit has lost its surge protection capability and needs to be replaced. If the light begins flashing, it signals the power strip is failing and should be replaced immediately. Surge suppressors smooth out power fluctuations and protect computer equipment from power glitches up to a point. For complete protection from power surges and outages, an uninterruptible power supply (UPS) is recommended. A UPS is an inline battery backup installed between the electronic equipment and the wall outlet. In addition to surge protection, a UPS functions as a battery when the power dips or fails. It will also signal a warning when the power source is above or below acceptable levels. Several UPS models can work with computer systems to activate a safe shutdown in the event of a complete loss of power. This is handled by software that runs in the background and sends a signal through one of the computer’s COM ports when power is down. The length of time a UPS can act as a backup power source for a system depends on its battery capacity and the power demands of the equipment. A monitor is one of the main power drains, so to keep a system online as long as possible during a power failure, turn off the monitor immediately. A more robust UPS device requires its own line and circuit breaker. When considering a UPS, it’s important to factor in the amount of protection that’s required.
The watt rating must be sufficient to supply the electronic equipment and all its peripherals, with enough runtime to safely shut down the system. This can be calculated by adding the power ratings of all pieces of equipment to be sustained by the UPS. Make note of whether a laser printer can be supported by the UPS, as laser printers often require more power than a UPS is able to provide.
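The sizing arithmetic described above can be sketched as follows; the device names, wattages, and the 25 percent headroom margin are illustrative assumptions rather than vendor figures.

```python
def total_load_watts(devices: dict) -> int:
    """Sum the rated power draw of every device the UPS must sustain."""
    return sum(devices.values())

# Hypothetical equipment list with assumed wattages.
devices = {
    "server": 450,          # watts
    "monitor": 60,
    "network_switch": 40,
}
load = total_load_watts(devices)

# Leaving headroom above the measured load is common practice;
# the 25% margin here is an assumption, not a standard figure.
recommended_rating = int(load * 1.25)
print(load, recommended_rating)  # 550 687
```

A UPS rated below the summed load would not reliably carry the equipment through a safe shutdown, which is why the calculation starts from the total rather than any single device.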
Electromagnetic Interference (EMI) and Radio Frequency Interference (RFI)
Electromagnetic interference (EMI) can create disruptions in the functioning of electronic equipment and can affect the quality of communications, transmissions, and playback. It can also impact data transmissions that depend on electromagnetic transport mechanisms, such as telephone, cellular, television, audio, radio, and network mechanisms. There are two types of EMI: common mode EMI, generated by the difference in power between the live and ground wires of a power source or operating electronic equipment; and traverse mode EMI, generated by the difference in power between the live and neutral wires of a power source or operating electronic equipment. Radio frequency interference (RFI) is similar to EMI and can impact the same systems. RFI is produced by a number of common electrical appliances, such as fluorescent lights, electrical cables, electric space heaters, computers, elevators, motors, and electromagnets.
HVAC, Water and Fire Detection in Electronic-Heavy Environments
Heating, Ventilating, and Air Conditioning (HVAC): Maintaining the environment involves maintenance of the heating, ventilating, and air conditioning (HVAC) mechanisms. This is vital in computer and server rooms, which should be kept at a temperature of 60 to 75 degrees Fahrenheit (15 to 23 degrees Celsius), with humidity sustained between 40 and 60 percent. The humidity level is significant in these rooms because high humidity can cause corrosion, while excessively low humidity can cause static electricity. Water: Physical security policies should be able to deal with water leakage, even if leaks are not a common occurrence. Water leaks can cause extensive damage to electronic equipment, especially while it is operating. Also, electricity that is exposed to water presents a serious risk of electrocution to personnel. It’s important to locate server rooms away from water sources if possible. Water detection circuits can also be installed on the floor around mission-critical systems; these circuits will trigger an alarm if water encroaches upon the equipment. In addition to monitoring for water leaks, the facility’s capacity to withstand severe rain or flooding should also be evaluated. Fire Detection and Fire Suppression: Fire is a serious risk in environments that have a lot of electronic equipment. Fire detection and fire suppression systems must be installed to preserve the safety of personnel as well as the electronic equipment. Along with the protection of human life, fire detection and suppression is intended to diminish damage caused by fire, smoke, heat, and suppression materials, especially to IT infrastructure. One of the main elements of fire control is awareness training for personnel. Those in training should know the use and location of fire suppression mechanisms in the facility, as well as designated evacuation routes. Other details that can be included in fire response training:
cardiopulmonary resuscitation (CPR) training, emergency shutdown procedures, and a pre-established rendezvous location or safety verification mechanism. Addressing fire detection and suppression also entails reviewing the possible contamination and damage caused by a fire. The destructive elements of a fire include smoke and heat, but also the suppression material, such as water or soda acid. Smoke is damaging to most storage devices; heat can impact any electronic or computer component; and suppression media can cause short circuits, initiate corrosion, or otherwise render equipment useless. All of these potentials must be addressed when designing a fire response system. Fire Detection Systems: Installing an automated detection and suppression system is vital to efficiently protecting a facility from fire. There are several types of fire detection systems: Fixed temperature detection systems trigger suppression when a certain temperature is reached. Rate of rise temperature detection systems trigger suppression when the speed at which the temperature changes reaches a critical level. Flame actuated systems trigger suppression based on the infrared energy of flames. Smoke actuated systems trigger suppression based on photoelectric or radioactive ionization sensors. Most of these fire detection systems can be linked to fire response service notification mechanisms. When suppression is triggered, these linked systems will alert the local fire responders and request aid using an automated message or alarm.
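To make the distinction between the first two detection types concrete, here is a minimal sketch; the 135°F fixed threshold and 15°F-per-minute rate threshold are assumed values for illustration, not figures from the text or from any standard.

```python
FIXED_THRESHOLD_F = 135.0         # assumed absolute trigger temperature
RATE_THRESHOLD_F_PER_MIN = 15.0   # assumed rate-of-rise trigger

def fixed_temperature_trigger(current_temp_f: float) -> bool:
    """Fixed temperature detection: trigger once an absolute temperature is reached."""
    return current_temp_f >= FIXED_THRESHOLD_F

def rate_of_rise_trigger(prev_temp_f: float, current_temp_f: float,
                         minutes_elapsed: float) -> bool:
    """Rate of rise detection: trigger when temperature climbs too quickly."""
    rate = (current_temp_f - prev_temp_f) / minutes_elapsed
    return rate >= RATE_THRESHOLD_F_PER_MIN

print(fixed_temperature_trigger(140.0))        # True: above the absolute threshold
print(rate_of_rise_trigger(70.0, 90.0, 1.0))   # True: 20°F per minute
print(rate_of_rise_trigger(70.0, 75.0, 1.0))   # False: 5°F per minute
```

The key difference is that a rate-of-rise detector can fire while the room is still relatively cool, because it reacts to how fast conditions are changing rather than to an absolute reading.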
Using Fire Suppression Systems to Protect Electronics
There are different types of fire extinguishers that can handle the suppression of different types of fires. If an extinguisher is used improperly, or the wrong type of fire extinguisher is used, the fire could escalate and intensify instead of being suppressed. Additionally, fire extinguishers are to be used only when a fire is still in its beginning stage. Complications can arise if the fire suppression material used in a fire extinguisher damages equipment, creates significant smoke, or causes other effects that can result in collateral damage. When implementing a fire suppression system, make certain the type of system has the capability to suppress the fire without destroying the equipment in the process. The different types of fire suppression systems include water discharge systems and gas discharge systems. Here are the four main types of water discharge systems:
A wet pipe system – also known as a closed head system; the pipes are always full of water. A dry pipe system – contains compressed air that is released when the system is triggered, opening a water valve that causes the pipes to fill and discharge water. A deluge system – another dry pipe system that uses larger pipes and therefore a significantly larger volume of water. Deluge systems are not appropriate for environments that include electronic equipment. A preaction system – a combination of dry pipe and wet pipe systems. The system exists as a dry pipe until the initial stages of a fire are detected, at which point the pipes are filled with water. The water is released only after the sprinkler head activation triggers are melted by sufficient heat. If the fire is quenched before the sprinklers are triggered, the pipes can be manually emptied and reset. Preaction systems are the most appropriate water-based system for environments that include both electronic equipment and personnel in the same locations. Gas discharge systems tend to be more effective than water discharge systems. However, gas discharge systems are not advisable for environments where personnel are located, as a gas discharge removes the oxygen from the air, making it hazardous to personnel. Gas discharge systems use a pressurized gaseous suppression medium, such as CO2 or Halon. Halon is a very efficient fire suppression compound; however, it’s damaging to the ozone layer, and it converts to toxic gases at 900 degrees Fahrenheit. It is usually replaced by a more ecological and less toxic medium. The replacements for Halon include: Heptafluoropropane (HFC-227ea), also known as FM-200. Trifluoromethane (HCFC-23), also known as FE-13. FE-13 molecules absorb heat, making it impossible for the air in the room to support combustion; it is considered to be one of the safest clean agents.
Inergen (IG541), a combination of three different gases: nitrogen, argon, and carbon dioxide. When released, it lowers the oxygen content in a room to the point that the fire cannot be sustained. CEA-410 or CEA-308. NAF-S-III (HCFC Blend A). Argon (IG55) or Argonite (IG01).
Failure of Electronic Equipment
Failure of electronic equipment is unavoidable, regardless of the quality of the equipment an organization has in place. It’s critical to be prepared for equipment failure to sustain the ongoing availability of the IT infrastructure and help protect the integrity and availability of its resources. Preparing for equipment failure entails many processes. In some non-mission-critical situations, knowing where to obtain replacement parts within a 48-hour replacement timeline is sufficient. In other situations, onsite replacement parts are mandatory; in this case, the faster the required response time in returning a system to full capacity, the higher the cost of maintaining such a solution. Costs include storage, transportation, pre-purchasing, and maintaining onsite installation and restoration expertise. In some circumstances, the maintenance of onsite replacements can’t be supported. For these situations, establishing a service level agreement (SLA) with the hardware vendor, which explicitly details the vendor’s response time in the event of an equipment failure emergency, is essential. Aging hardware should also be scheduled for replacement and/or repair. The scheduling of these repairs should be based on the mean time to failure (MTTF) and mean time to repair (MTTR) estimates for each device. MTTF is the anticipated typical functional lifetime of the device within its operating environment, while MTTR is the average length of time required to perform a repair on the device. Often a device will require numerous repairs before a catastrophic failure is expected. All devices should be replaced before their MTTF expires. When a device is under repair, another solution or a backup device should be in place to fill in for the duration of the repair.
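A minimal sketch of the MTTF-based replacement rule above, under the assumption (not stated in the text) that devices are flagged slightly before their MTTF is reached; the hour figures and 10 percent safety margin are hypothetical.

```python
def needs_replacement(hours_in_service: float, mttf_hours: float,
                      margin: float = 0.10) -> bool:
    """Flag a device for replacement once its service time reaches
    MTTF minus a safety margin, so it is retired before MTTF expires."""
    return hours_in_service >= mttf_hours * (1.0 - margin)

# With an assumed 100,000-hour MTTF, the cutoff falls at 90,000 hours.
print(needs_replacement(85_000, 100_000))  # False: still under the cutoff
print(needs_replacement(92_000, 100_000))  # True: past the cutoff, schedule replacement
```

The margin exists because MTTF is only an average: retiring hardware at the cutoff, rather than at the MTTF itself, reduces the chance of a catastrophic failure while a backup device covers the repair window.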