Monitoring Controls Status

Description
This lesson covers IT service delivery controls and system monitoring, which includes:
  • Hardware
  • Software
  • Centralized system logging
  • Network device monitoring
  • Uptime and downtime reporting
This unit also discusses log management, how to effectively manage log events, and data file controls. The lesson also covers the use of anti-virus software, email and mobile code security policies, as well as maintenance controls.

Transcript

Alright, so now let's think about monitoring the status of our controls. We know that continuous monitoring is an important consideration. We need to know that the controls we've put in place are doing the job they're expected to do. If you recall, I talked about planned inputs, expected behavior, and planned outputs, or expected outputs.

So if we think about the delivery of our services, a typical IT department provides a lot of different services. We have the monitoring of our systems, controls on our data files, control systems, system access controls, anti-virus software, maintenance controls, and providing an appropriate test and development environment.

For system monitoring, as I was saying, continuous monitoring is what we're after here. If we can constantly keep an eye on the controls that protect our assets, then we can find out about problems as soon as possible. This could be problems with hardware or software, monitoring all of our network devices (our routers, switches, proxies, and firewalls), and keeping track of uptime and downtime. Some of these things could be folded into your metrics requirements as well, but doing the monitoring to begin with is critical.

What about log management? We know that we need logging for important transactions with our applications or operating systems. We also need to log security events. For instance, if you're getting messages from your logging system saying that someone is attempting to log in as an administrator again and again and again, that could be evidence that there is some kind of brute force attack going on against one of your administrator accounts. Maybe your applications are experiencing problems and certain functions are failing because you're running low on memory or processor; your logging should indicate this. You might have problems with your operating system, or with the network connections. Having the right tools in place to send alerts as needed is what we're thinking about. You might have a syslog aggregation server, something like ArcSight, where you're gathering logs from all of the different devices in your environment and having them go to one place so you can centralize the monitoring and management of those log events.

We need to think about user access logs. When people log in to a system, successfully or, especially, unsuccessfully, that should be logged somewhere. Again, that goes back to the idea that we want to be able to spot suspicious activity. There's also the expectation that when users connect to a system, they might see a warning banner which says, "You're connecting to a resource that this organization owns. Your activity is being monitored. If you do something objectionable, or illegal, we will prosecute," and so on.

Maybe you've got some policies and procedures in place dealing with your passwords. We know that we should be using upper and lower case letters, numbers, and special characters. Passwords typically should be changed every 30 days; sometimes organizations go as long as 90 days. Maybe you're allowed to reuse a password after you've changed it ten times, or maybe you're never allowed to reuse a password.
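To make those password rules a bit more concrete, here is a minimal sketch of a policy check, assuming the rules mentioned above (upper and lower case, digits, special characters, a 30-day maximum age) plus an assumed minimum length, since the lesson doesn't name one. It is illustrative only and not tied to any particular directory service or product.

```python
# Minimal sketch of a password-policy check. Thresholds are examples only.
import re
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=30)   # rotation period mentioned in the lesson
MIN_LENGTH = 12                         # assumed value; the lesson does not specify a length

def password_meets_policy(password: str) -> bool:
    """Check the complexity rules described above: upper case, lower case, digits, special characters."""
    return (
        len(password) >= MIN_LENGTH
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

def password_expired(last_changed: datetime) -> bool:
    """Flag accounts whose password is older than the maximum age."""
    return datetime.now() - last_changed > MAX_PASSWORD_AGE

if __name__ == "__main__":
    print(password_meets_policy("Tr0ub4dor&3x!"))                    # True: meets all four classes
    print(password_expired(datetime.now() - timedelta(days=45)))     # True: due for rotation
```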
Your organization will decide which of these password policy variations best fits its requirements; these are some considerations to think about. Maybe you fail to log in correctly three times and now that login becomes disabled. It might be disabled permanently until the helpdesk turns it back on, or maybe it just gets disabled for fifteen minutes. Those are different variations of that type of control.

We have to keep track of our privileged login accounts: people that are logging in as root, administrator, or domain administrator, or onto a domain controller, people logging in to Active Directory. These are the critical pieces of the infrastructure in the organization, so the controls on those login accounts should be correspondingly tightened to respond to the extra threat. Maybe we change the passwords every 30 days, and no longer time period is allowed. It could be that you've got to save those passwords in an off-site location in case they're needed in an emergency. Some organizations might put a hard copy of the accounts and their associated passwords into a safe. That way it can only be accessed by maybe two or three people that have the combination to the safe, and those might be the highest level, most trusted individuals in the organization.

What about maintenance logins? We have backup operators, we have people that deal with optimizing databases, and there could be accounts that are used for other types of regular maintenance on a system. These are favorite targets for hackers, because when these maintenance accounts get created, sometimes they are left with default passwords or default privileges. This, again, goes back to the idea of auditing your systems to understand what gets created when you install a database or a web server. Why are these extra accounts here? Why don't they have the same controls on their passwords as the other accounts? You want to find those problems before they become low-hanging fruit for a hacker. In general, if an account for maintenance is not needed, we should just disable it. There's no reason to even leave it active.

We want to think also about controls on our data files. This means everything from a table in a database to a log file on a system, an event log. It could be files detailing how the networking is set up, or how your routing is set up: files that are used to control the system's behavior, or its overall configuration. You want to think about having logical access controls, otherwise known as technical controls. So if someone needs a certain capability, that should most likely be assigned to their role, and then they get assigned to a group which has that role. That provides an extra level of protection and makes the overhead of dealing with the maintenance a little bit easier.

Then there are transaction processing controls. If we've got processing going on within databases, or financial transactions, there should be some expectation that the transaction is being monitored and validated to make sure that it's correct before it's written to the database, for instance. We want to know that the transaction completed successfully and that the results were correct before we decide to save that information. If there is an inconsistency, or some other problem, typically that transaction is rolled back, and then it might be submitted again, or the user needs to attempt the transaction again. We have to think about application processing controls as well: input controls and input validation.
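Before getting into those input controls, it may help to make the validate-then-commit idea concrete. Here is a minimal sketch using Python's built-in sqlite3 module; the table, columns, and the overdraft rule are hypothetical examples rather than anything prescribed by the lesson. If the validation check fails, the change is rolled back and can be resubmitted.

```python
# Minimal sketch of a transaction-processing control: validate before commit, roll back on failure.
# The table, columns, and validation rule are hypothetical examples.
import sqlite3

def post_transfer(conn: sqlite3.Connection, account_id: int, amount: float) -> bool:
    try:
        with conn:  # opens a transaction; commits on success, rolls back if an exception escapes
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?",
                (amount, account_id),
            )
            (balance,) = conn.execute(
                "SELECT balance FROM accounts WHERE id = ?", (account_id,)
            ).fetchone()
            if balance < 0:  # validation: the result must be consistent before we keep it
                raise ValueError("transfer would overdraw the account")
        return True            # validated and committed
    except ValueError:
        return False           # rolled back; the transaction can be corrected and resubmitted

conn = sqlite3.connect(":memory:")
with conn:
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES (1, 100.0)")

print(post_transfer(conn, 1, -250.0))  # False: validation failed, change rolled back
print(post_transfer(conn, 1, 50.0))    # True: validated and committed
```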
Input validation means trying to make sure that when input is accepted from a user, or from some other entity like another software program, the input is in the correct format, has the correct range of values, and has the correct length or size. We don't want to facilitate things like SQL injection or cross-site scripting. We also want to make sure, for instance, that we have unique logins and passwords on all of our systems; you shouldn't be reusing the same password on all of the systems that you have access to. Sometimes we use CAPTCHA tools, which try to prevent automated entry of login data: some kind of distorted letters and characters might be visible, and someone has to look at that image and manually type it in. These are all controls that enhance the security for access to our applications.

Then we can think a little bit about some of our processing controls. What about batch totals? If I've got 1,000 records to process and my report shows that I processed 972, now I know that I've got some missing information. A good control in that case should spit out a report, and then the information can be processed again until it is done correctly. Total number of items would relate somewhat to batch-type controls, or batch totals. We also want to do things like exception reporting, which I talked about earlier. Maybe you processed all of your transactions, but certain ones are irregular, for whatever reason, or something got skipped. We want to be able to provide a mechanism to detect that automatically so that we don't have to rely on a manual process. (A small sketch of this kind of check appears a little further down.)

Then we have output controls. What about things like negotiable instruments? We've got stocks and bonds and other financial documents, and we want to make sure that we've got the right logical and physical controls for items like this. Maybe the printer that produces the document is in a secured area with limited physical access. We want to think about our event logs, too. This could be event logs for lots of different things: related to applications, related to the operating system, related to network events. How is that information retained? How is it used, and how is it guaranteed to be available when it's needed?

Another important component of managing our environments is anti-virus software. There are obvious reasons for using this: we want to be able to find viruses and worms in the environment. As I was mentioning earlier, viruses are attached to programs that might be used in the course of doing business. Sometimes viruses are introduced to the environment because of careless user activity: clicking on attachments in emails, or going to dangerous websites. So effective detection and elimination of viruses is a critical function. The same thing would apply to worms. Worms can replicate without human interaction, so they have a certain difference in the way that they're dangerous to the environment. They can use up all of your available network bandwidth as they multiply and infect other systems. Since worms don't need human interaction, they can multiply much more quickly than a virus can. And, of course, worms don't attach themselves, or are not attached, rather, to a file; they exist on their own. So it's kind of a different beast as far as what kind of controls you need to detect and eradicate a worm.

We also have to consider our mobile devices. A lot of organizations are using a bring-your-own-device, or BYOD, policy, where everybody can bring in their favorite version of a mobile phone or a tablet.
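Returning to the batch totals and exception reporting mentioned a moment ago, here is a minimal sketch of that kind of automated check. The record format and the irregularity rule are made up for illustration; the point is simply to compare what was processed against what was expected and to flag exceptions without relying on a manual review.

```python
# Minimal sketch of batch-total and exception-reporting controls (record format is made up).
from dataclasses import dataclass

@dataclass
class Record:
    record_id: int
    amount: float
    processed: bool

def batch_report(expected_count: int, records: list[Record]) -> dict:
    """Compare what was processed against what was expected and flag exceptions."""
    processed = [r for r in records if r.processed]
    exceptions = [r.record_id for r in processed if r.amount <= 0]  # example irregularity rule
    return {
        "expected": expected_count,
        "processed": len(processed),
        "missing": expected_count - len(processed),
        "exceptions": exceptions,
    }

# The lesson's example: 1,000 records expected, only 972 processed.
batch = [Record(i, 10.0, processed=(i <= 972)) for i in range(1, 1001)]
batch[4].amount = -10.0  # one irregular record to demonstrate exception reporting
print(batch_report(expected_count=1000, records=batch))
# {'expected': 1000, 'processed': 972, 'missing': 28, 'exceptions': [5]}
```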
We also have to deal with staff that use laptops, but generally, when we consider mobile code, we're talking about a handheld device, not usually a desktop or laptop computer. So how is this mobile device managed? Is it managed with group policy, or do you just expect people to do the right thing? There is sometimes a blend between what the organization can provide and what the user is expected to do, depending on the make and model of the device they're using.

A little bit here about some email technology. We have Multipurpose Internet Mail Extensions, or MIME. This allows a lot of the multimedia functions that email evolved to support, things like audio, video, and images: being able to embed different components into an email to make it a richer experience for the person reading the email or interacting with some of its resources.

We need to think about the security policy for mobile devices. Going back to handheld devices, users might do certain things on their own, interacting with external servers, or the mobile devices might be managed somewhat by internal servers where the security policies can be enforced more rigidly. If we let users do whatever they want with their mobile devices, then we're at our highest risk level. So trying to find a balance between security and functionality can be a challenge.

Then we think about our maintenance controls. I talked a little bit earlier about backup and recovery, as far as managing where the tapes go, how long they're retained, and so on. We also have to consider the verification that those backups are actually being performed correctly. Can you do a test restore of your data and verify that the data is correct and complete? (A small sketch of that kind of check follows at the end of this transcript.)

We have to think about management of our projects as it relates to maintenance controls. How do we know that the risk analysis phase of project management was done correctly? There should be some workflow involved, some kinds of checklists showing what needs to be done and the fact that it was done. Maybe someone signs off on each individual step, and then when it finally gets to the end point, we can verify that all of the different things were done correctly and in the right order.

Configuration management is also important. This relates back to change control, so that when changes are made to a system, we understand what needs to be documented, who needs to review it, and who needs to approve it, and then we can refer back to that information in the event of problems. Authorizing change is quite often a group effort. You might have representatives from different teams: the network team, the security team, the developers, the database management team, and so on. They all have some input and some perspective on making sure that they all agree that a change is acceptable to perform and that it won't cause other problems.

Sometimes we have to deal with emergency changes. This is sometimes called a break-fix scenario: you see there's a problem, you need to fix it right now, so you go ahead and fix it and then you file the change control paperwork later, because there was no option to wait for the typical change control process, which may take several days, or even a week or more, depending on how large your organization is. So this kind of goes back to the idea that it's easier to apologize than to ask for permission. Sometimes that's the reality of dealing with emergencies in an IT environment.
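Finally, going back to verifying that backups are actually being performed correctly: here is a minimal sketch of a test-restore check that compares checksums of restored files against the originals. The directory paths are placeholders, and a real verification would also cover things like database consistency, but it illustrates the idea of proving the data is correct and complete.

```python
# Minimal sketch of a test-restore verification: compare checksums of restored files
# against the originals. Directory paths are placeholders.
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original_dir: Path, restored_dir: Path) -> list[str]:
    """Return a list of problems; an empty list means the restore looks correct and complete."""
    problems = []
    for original in original_dir.rglob("*"):
        if not original.is_file():
            continue
        restored = restored_dir / original.relative_to(original_dir)
        if not restored.is_file():
            problems.append(f"missing from restore: {original}")
        elif file_digest(original) != file_digest(restored):
            problems.append(f"contents differ: {original}")
    return problems

if __name__ == "__main__":
    issues = verify_restore(Path("/data/live"), Path("/data/test-restore"))
    print("restore verified" if not issues else "\n".join(issues))
```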
