
Thursday, August 2, 2018

Notes 6

# The chair of the steering committee should be a senior person (executive level manager) with the authority to make decisions.
- The chief information officer (CIO) would not normally be the chair, although the CIO or a representative would be a member to provide input on organization-wide strategies.

# The project steering committee provides overall direction and is also responsible for monitoring project costs and project schedules. A project steering committee usually consists of a senior representative from each function that will be affected by the new system and would be the most appropriate group to approve the RFP. The project sponsor provides funding for the project.

# Other factors are meaningless in the absence of proper alignment of IT security with business and IT objectives. Even if top management approves a policy, it should be questioned if it is not in line with business objectives.

# The board of directors of any organization has ultimate responsibility for the development of the IS security function. The security committee performs as per the direction of the board. The IS department is responsible for executing the policy. The IS audit department needs to ensure proper implementation of the IS security policy and report any deviation to management.

# The board of directors of any organization has ultimate responsibility for IT governance. The IT strategy committee advises the board, while the IT steering committee monitors the board-approved IT governance policy and facilitates deployment of IT resources for specific projects in support of business plans. The audit committee looks after audit issues and controls.

# The project steering committee is ultimately responsible for overall project management of IT-related projects. It provides direction and monitors costs and project schedules. The audit committee is not involved in monitoring projects. User management and system development management are involved in projects to the extent of their roles; however, responsibility lies with the project steering committee. User management assumes ownership of the project and the resulting system and should review and approve deliverables as they are defined and accomplished.

# The project steering committee provides overall direction and is also responsible for monitoring project costs and project schedules. A project steering committee usually consists of a senior representative from each function that will be affected by the new system and would be the most appropriate group to approve the RFP. The project sponsor provides funding for the project. The IS strategy committee advises the board of directors on IT initiatives.

# The project sponsor is the manager in charge of the business function, the owner of the data, and the owner of the system under development. Providing functional specifications through functional users is the responsibility of the project sponsor, so requirement specifications are ultimately the responsibility of the project sponsor.

Sunday, July 22, 2018

Notes 5

# The first step in a risk-based audit approach is to gather information about the business and industry to evaluate the inherent risks. After completing the assessment of the inherent risks, the next step is to complete an assessment of the internal control structure. The controls are then tested and, on the basis of the test results, substantive tests are carried out and assessed.

# The IS auditor must identify the assets, look for vulnerabilities, and then identify the threats and the likelihood of occurrence.

# A bottom-up approach begins by defining operational-level requirements and policies, which are derived and implemented as the result of risk assessments.

# The primary consideration should be documentation of identified risk. In order to manage and control a risk, it must first be recognized as a risk. Only after documentation should other factors be considered.

# The audit charter should be independent of the IS department and the IT steering committee.

# An action plan in case of disruption of services is included in the BCP policy.

# The audit compendium includes a summary of critical audit observations for higher management.

# The results of the risk management process are used for making security policy decisions.

# The attribute sampling method (a control is either present or absent) is useful when testing for compliance.

# Compliance testing involves verification of processes.
- Substantive testing involves verification of transactions or data.
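The attribute sampling idea above can be put into a few lines: each sampled item is a yes/no check (control present or absent), and the sample deviation rate estimates the population deviation rate. A minimal Python sketch with invented population data:

```python
import random

# Attribute sampling for a compliance test: each item is a binary check
# (control present or absent). Population data below is invented.
random.seed(7)  # fixed seed so the sketch is repeatable
population = [True] * 950 + [False] * 50   # True = control was applied
sample = random.sample(population, 60)      # 60-item attribute sample

# The sample deviation rate estimates the population deviation rate.
deviation_rate = sample.count(False) / len(sample)
print(f"sample deviation rate: {deviation_rate:.1%}")
```

The auditor would then compare the sample deviation rate against a tolerable deviation rate to conclude whether the control operated effectively.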

Thursday, July 19, 2018

Notes 4

# Three main accuracy measures used for a biometric solution are:
- False-Acceptance Rate (FAR),
- False-Rejection Rate (FRR),
- Cross-Error Rate (CER) or Equal-Error Rate (EER)

- The most important overall quantitative performance indicator for a biometric system is the CER or EER.
- A low EER is a combination of a low FRR and a low FAR. The CER or EER is the rate at which the FAR and FRR are equal.
- The most effective biometric control system is the one with the lowest CER or EER. A low FRR or a low FAR alone does not measure the overall efficiency of the device.
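The CER/EER crossover described above can be illustrated with a short sketch; the FAR/FRR measurements below are invented for illustration:

```python
# Hypothetical FAR/FRR measurements at increasing sensitivity thresholds.
thresholds = [1, 2, 3, 4, 5]
far = [0.20, 0.12, 0.06, 0.03, 0.01]  # FAR falls as sensitivity rises
frr = [0.01, 0.02, 0.06, 0.10, 0.15]  # FRR rises as sensitivity rises

# The EER/CER is the point where FAR and FRR are (closest to) equal.
best = min(zip(thresholds, far, frr), key=lambda t: abs(t[1] - t[2]))
eer_threshold, eer = best[0], (best[1] + best[2]) / 2
print(eer_threshold, eer)  # threshold 3, EER 0.06
```

A device with the lower EER found this way is the more effective one overall.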

# Control self-assessment (CSA) is a technique that allows managers and work teams directly involved in business units, functions or processes to participate in assessing the organization's risk management and control processes. Teams understand the business process, define the controls and generate an assessment of how well the controls are working. This is best achieved during the preliminary survey phase.

# Reference Monitor:
- Mechanism that checks that each request by a subject to access and use an object is as per the security policy.
- In operating systems architecture a reference monitor concept defines a set of design requirements on a reference validation mechanism, which enforces an access control policy over subjects' (e.g., processes and users) ability to perform operations (e.g., read and write) on objects (e.g., files and sockets) on a system.
- A reference monitor is implemented via a security kernel, which is a hardware/software/firmware mechanism.

# Address Resolution Protocol (ARP)
- A protocol used to map an IP (network-layer) address to a physical address such as an Ethernet (MAC) address. A host wishing to obtain a physical address broadcasts an ARP request onto the TCP/IP network. The host on the network that has the IP address in the request then replies with its physical hardware address.

# Access control analyzer
- An access control analyzer is an audit utility for analyzing how well access controls have been implemented and maintained within an access control package.

# Reverse ARP (RARP)
- Used by a host to discover its IP address. In this case, the host broadcasts its physical address and a RARP server replies with the host's IP address.

# A review of system configuration files for the control options used would show the level of access available to different users.
- Both log files are detective in nature.
- Job descriptions of users will not provide details about access levels.

# Network security reviews include reviewing router access control lists, port scanning, internal and external connections to the system, etc.

# Information security, business continuity and risk management should be considered while developing the IT plan, but all this will add value only if the IT plan is in line with the business plan.

# Unprotected password files represent the greatest risk. Such files should be stored in encrypted form.

# General operating system access control functions include logging user activities, logging events, etc.
# Network control feature - logging data communication access activities.
# Database control function - verifying user authorization at the field level.
# Application-level access control function - changing data files.

# The very first step in reviewing an organization's IT strategic plan is to review and understand the business plan. Without understanding the context in which the business operates and its expansion plans, a review of the strategic plan may not be effective. To evaluate the IT strategic plan, the IS auditor would first need to familiarize him/herself with the business plan. Alignment of IT processes with the business is an important consideration; however, first one needs to understand the business.

# Authentication: The process of verifying who you are. When you log on to a PC with a user name and password you are authenticating. Authentication is about who somebody is.

# Authorization: The process of verifying that you have access to something. Gaining access to a resource (e.g. a directory on a hard disk) because the permissions configured on it allow you access is authorization. Authorization is about what you are allowed to do.

#
- The risk that many users can claim to be a specific user is better addressed by a proper authentication process than by authorization.
- Without an appropriate authorization process, it will be impossible to establish functional limits and accountability.
- The authorization process will not directly prevent the sharing of user accounts; other controls are required for that.
- In the absence of a proper authorization process, the principle of least privilege cannot be assured.

# False-Acceptance Rate (FAR):
FAR is the rate of acceptance of unauthorized persons, i.e., the biometric will allow an unauthorized person to access the system.
- The most important performance indicator when protecting highly sensitive data is the false-acceptance rate (FAR).
- This is a fail-unsafe condition, i.e., an unauthorized individual may be granted access.
- A low FAR is most desirable when the system is used to protect highly sensitive data.

Equal Error Rate (EER) or CER is best indicator when overall performance is to be evaluated.

# The risk of false acceptance cannot be eliminated. The risk of a biometric device may be minimized, but it will never be zero, because that would imply an unacceptably high risk of false rejection.
# The fingerprint reader does not need to be protected in itself by a password.
# The usage of biometric protection on PCs does not provide assurance that unauthorized access will be impossible.

# Both CPM and PERT are techniques for estimating project duration and timeline. However, PERT is more reliable than CPM for estimating project duration. The advantage of PERT over CPM is that CPM considers only a single duration, while PERT considers three different scenarios, i.e. optimistic (best), pessimistic (worst) and normal (most likely), and on the basis of the three scenarios a single critical path is arrived at.
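The three-scenario estimate PERT uses is the standard weighted average E = (O + 4M + P) / 6. A small sketch (the example task durations are invented):

```python
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """PERT expected duration: weighted average of the three scenarios
    (CPM, by contrast, would use a single duration)."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Example task: best case 4 days, most likely 6 days, worst case 14 days.
print(pert_estimate(4, 6, 14))  # 7.0 days
```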

# Digital Signature:

Step 1: Create Hash (Message digest) of the message.
Step 2: Encrypt the hash (as derived above) with private key of the sender.

Upon receiving the message, the recipient will perform the following functions:
Step 1: Independently calculate the hash value of the message.
Step 2: Decrypt the digital signature using the public key of the sender. If the recipient is able to decrypt the signature successfully with the sender's public key, this proves authentication, i.e., the message was in fact sent by the sender. It also ensures non-repudiation, i.e., the sender cannot repudiate having sent the message.

Step 3: Compare the value derived under step (1) with the value derived under step (2). If both tally, this proves the integrity of the message.

# A digital signature is created by encrypting the hash of the message. The encrypted hash cannot be forged without knowing the private key of the sender.

# Digital Signature is created in below two steps:

Step 1: Create Hash (Message digest) of the message.
Step 2: Encrypt the hash (as derived above) with private key of the sender.

If the sender is customer, hash to be encrypted by using customer’s (sender’s) private key.
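The two-step flow above can be sketched with textbook RSA on deliberately tiny numbers. This is purely illustrative (a real implementation would use a vetted crypto library and full-size keys); the primes, exponents and message are made up:

```python
import hashlib

# Toy RSA key pair (INSECURE, illustration only).
p, q = 61, 53
n = p * q          # 3233, public modulus
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % ((p-1)*(q-1)) == 1

def sign(message: bytes) -> int:
    # Step 1: hash the message. Step 2: encrypt the hash with the
    # sender's PRIVATE key. (Reducing mod n is a toy-size shortcut.)
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # The recipient independently hashes the message, then decrypts the
    # signature with the sender's PUBLIC key; a match proves integrity,
    # authentication and non-repudiation.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"pay vendor 100"
sig = sign(msg)
print(verify(msg, sig))  # True
```

A tampered message would hash to a different value, so verification would fail.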

Wednesday, July 18, 2018

Notes 3

# Function of IDS - Obtaining evidence on intrusive activity.

# Function of FireWall:
- Control the access on the basis of defined rule
- Blocking access to websites for unauthorised users
- Preventing access to servers for unauthorised users

# Main problem in operating IDSs is the recognition (detection) of events that are not really security incidents—false positives (i.e. false alarm).

# Concerns of biometric implementation:
- instances of false rejection rate.
- instances of false acceptance rate.

# Denial of service is a type of attack and is not a problem in the operation of IDSs.

# The BEST method to detect an intrusion is to actively monitor unsuccessful logins.
- Deactivating the user ID is a preventive method, not a detective one.

# An IDS cannot detect attacks that come in the form of encrypted traffic. So if an organization has misunderstood that an IDS can also detect encrypted traffic and has designed its control strategy accordingly, that is a major concern.

# ‘War Driving’ - Used by hackers for unauthorized access to wireless infrastructure. War driving is a technique in which a wireless-equipped computer is used to locate and gain access to wireless networks. This is done by driving or walking in and around buildings. ‘War driving’ is also used by auditors to test wireless security.
- WPA-2 is an encryption standard and not a technique to test the security.
- War dialling is a technique for gaining access to a computer or a network through the dialling of defined blocks of telephone numbers.

# Following are the best practices for wireless (wi-fi) security:
- Enable MAC (Media Access Control) address filtering.
- Enable Encryption to protect data in transit.
- Disable SSID (service set identifier) broadcasting.
- Disable DHCP (Dynamic Host Configuration Protocol).

# A randomly generated PSK is stronger than a MAC-based PSK.
- WEP (Wired equivalent privacy) has been shown to be a very weak encryption technique and can be cracked within minutes.

# The risk management process is about making specific, security-related decisions, such as the level of acceptable risk.

# Of all types of firewall, the application-level firewall provides the greatest security (as it works at the application layer of the OSI model).
- An application gateway works at the application layer of the OSI model and a circuit gateway works at the session layer.
- An application gateway has a different proxy for each service, whereas a circuit gateway has a single proxy for all services.
Therefore, an application gateway works in a more detailed (granular) way than the others.

# Of all firewall implementation structures, the screened-subnet firewall provides the greatest security (as it implements two packet-filtering routers and one bastion host). It acts as a proxy, and a direct connection between the internal network and the external network is not allowed. A screened-subnet firewall is also used as a demilitarized zone (DMZ).
The difference between a screened-subnet firewall and a screened-host firewall is that the screened-subnet firewall uses two packet-filtering routers, whereas the screened-host firewall uses only one. Both work on the concept of a bastion host and proxy.

# An application gateway works at the application layer of the OSI model and is effective in blocking specific applications, such as FTP and HTTPS. A circuit gateway firewall is able to prevent paths or circuits, not applications, from entering the organization's network.

# Application-level gateway
Of all types of firewall, the application-level firewall provides the greatest security (as it works at the application layer of the OSI model). An application-level gateway is the best way to protect against hacking because it can define detailed rules that describe the type of user or connection that is or is not permitted. It analyzes each packet in detail at the application level of the OSI model, which means it reviews the commands of each higher-level protocol such as HTTP, FTP, etc.

# Firewall Security can be compromised when all the installation options are kept open.

# Audit Charter outlines the overall authority, scope and responsibilities of the Internal Audit Function. Functions of External Audit are governed by Engagement letters.

# Kerberos is a network authentication protocol for client-server applications that can be used to restrict access to the database to authorized users.
- Vitality detection and multimodal biometrics are controls against spoofing and mimicry attacks.
- Before-image/after-image logging of database transactions is a detective control.
- Kerberos is a preventive control.

# Kerberos 
1. Kerberos is a single sign-on tool used to protect networks and related resources.
2. Kerberos works in an Open Network Environment (ONE), sometimes also known as a Distributed Computing Environment (DCE), and manages authentication in diverse environments.
3. In Kerberos, both client and server are authenticated.
4. The purpose of Kerberos is to avoid spoofing attacks.
5. Important components/parts of a Kerberos system include:

Authenticator
Credential
Kerberos Authentication Server(KAS)
Kerberos Database
Session Key
Ticket
Ticket Granting Server (TGS)
Timestamp
User or Client

6. Client identities are stored in the Kerberos database.
7. A ticket contains the user identity, a session key, a timestamp, etc.
8. Every ticket has a unique session key.
9. Tickets can be reused.
10. The Kerberos server maintains a history of previous user requests and sessions.

Tuesday, July 17, 2018

Notes 2

# Statistical sampling minimizes the detection risk.
- Detection risk is the chance that an auditor will not find material misstatements in an entity's financial statements. Detection risk is the risk that the auditor will conclude that no material errors are present when in fact there are.
- Using statistical sampling, probability of error can be objectively quantified and hence detection risk can be minimized.
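As a sketch of why statistical sampling lets the probability of error be objectively quantified: the textbook normal-approximation formula for the sample size of a proportion, n = z²·p·(1−p)/E², ties the sample size to a chosen confidence level and tolerable error. The figures below are illustrative:

```python
import math

z = 1.96                # z-score for 95% confidence
p = 0.05                # expected population deviation rate
tolerable_error = 0.04  # precision the auditor will accept

# Normal-approximation sample size for estimating a proportion.
n = math.ceil(z ** 2 * p * (1 - p) / tolerable_error ** 2)
print(n)  # 115
```

Tightening the tolerable error or raising the confidence level increases the required sample size, which is the trade-off the auditor controls.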

# Compliance testing checks for the presence of controls. Compliance testing determines whether controls are being applied in compliance with policy. This includes tests to determine whether new accounts were appropriately authorized.

# Bottom-Up Approach: 
Start with testing of individual units such as programs or modules and work upward until a complete system is tested.
Advantages of bottom-up: (i) Test can be started even before all programs are complete (ii) Errors in critical modules can be found early.

# Top-Down Approach: 
Test starts from broader level and then gradually moves towards individual programs and modules.
Advantages of top-down: (i) Interface error can be detected earlier (ii) confidence in the system is achieved earlier.

System testing includes (i) Recovery testing (ii) Security testing (iii) Load testing (iv) Volume testing (v) Stress testing & (vi) Performance testing.


#  A company is planning to install a network-based intrusion detection system (IDS) to protect the web site that it hosts. Where should the device be installed?
A. On the local network B. Outside the firewall C. In the demilitarized zone (DMZ) D. On the server that hosts the web site

The correct answer is: C

Explanation:
The local network doesn't make sense. The public website will be facing the Internet, and the local network should always be behind another layer of firewalls. Traffic for the website will never reach the local network, because the website has to be in a DMZ. As for the server that hosts the website: the IDS is a standalone device, with a very specialized mission that requires lots of pattern matching. So, it's better to have it in a separate, custom-built box. A "soft IDS" that shares hardware with a website could be easily flooded by your typical script-kiddie attacks. So we only have to decide the order of the boxes.

- Internet-IDS-Firewall, or Internet-Firewall-IDS.
Firewalls define DMZs, remember. Internet-IDS-Firewall would be option B and Internet-Firewall-IDS is option C. Firewalls are built to be the first line of defense and face the Internet. The analysis an IDS has to make is usually more complicated (I assume we're talking traditional firewalls, not "next-generation firewalls"), so it's good for the firewall to do the "coarse work" for it. Take a very simple example: if there's only one website on the DMZ, the firewall can filter everything except ports 80 and 443 into the website's address. Then all the IDS has to do is examine the HTTP traffic for web vulnerabilities, XSS, and the like. So option C is clearly the best.

# Actively managing compliance with the contract terms for the outsourced services is the responsibility of IT management.
- Compliance with regulatory requirements is in purview of compliance or legal team.
- Payment is in scope of finance team.
- Penalty for non-compliance is by-product of managing compliance with contract.

# The primary activity of a CA is to issue certificates and to validate the identity and authenticity of the entity owning the certificate, and the integrity of the certificates issued by that CA.
- CAs are not responsible for the secured communication channel. Private keys are not made available in the public domain.

# The board of directors of any organization has ultimate responsibility for the development of the IS security function.
- The security committee performs as per the direction of the board.
- The IS department is responsible for executing the policy.
- The IS audit department needs to ensure proper implementation of the IS security policy and report any deviation to management.

Reference Links

https://atmanjunath.wordpress.com/
https://www.auditscripts.com/free-resources/cisa-exam-references/cisa-practice-tests/
http://passcisa.blogspot.com/


Monday, July 16, 2018

Notes 1

Shoulder surfing - Attack wherein any person nearby could "look over the shoulder" of the user to obtain the password.
Piggybacking - Unauthorized persons following authorized persons into restricted areas.
Dumpster diving - Attack wherein critical information is obtained from the trash.
Impersonation - refers to someone acting as an employee in an attempt to retrieve desired information.

# As highly complex criteria can be set in CIS, it is the best technique to identify transactions as per predefined criteria. Continuous and Intermittent Simulation (CIS) is a moderately complex set of programs that, during a process run of a transaction, simulates the instruction execution of its application. As each transaction is entered, the simulator decides whether the transaction meets certain predetermined criteria and, if so, audits the transaction. If not, the simulator waits until it encounters the next transaction that meets the criteria. Audit hooks, which are of low complexity, focus on specific conditions instead of detailed criteria in identifying transactions for review. ITF is incorrect because its focus is on test versus live data.

# A warm site has the basic infrastructure facilities, such as power, air conditioning and networking, and some of the computers. However, not all computing devices are installed. Hence, before resumption of services from a warm site, timely availability of hardware is a major concern. A cold site basically provides space and basic infrastructure. No communication equipment or computers are installed. A cold site is characterized by at least providing electricity and HVAC (heating, ventilation and air conditioning). No other computing facilities are available at a cold site.

# It is the responsibility of the IT steering committee to ensure the efficient use of IT resources.
# Strategy committee is responsible for advising board members about new projects.

# Absence of a project steering committee represents a major risk. A steering committee would provide a liaison between the IS department and the user department. It monitors the IT project prioritization as per business requirements.

# The responsibilities of a telecommunications analyst include reviewing network load requirements in terms of current and future transaction volumes (choice B), assessing the impact
of network load or terminal response times and network data transfer rates (choice C), and recommending network balancing procedures and improvements (choice D).
- Monitoring systems performance and tracking problems as a result of program changes (choice A) would put the analyst in a self-monitoring role.

# Social engineering is based on the divulgence of private information through dialogues, interviews, inquiries, etc., in which a user may be indiscreet regarding his/her own or others' personal data.
- A sniffer is a computer tool to monitor the traffic in networks.
- Back doors are computer programs left by hackers to exploit vulnerabilities.
- Trojan horses are computer programs that pretend to supplant a real program; thus, the functionality of the program is not authorized and is usually malicious in nature.

# For unit testing the appropriate strategy is the white-box approach (as both involve testing of internal logic). Unit testing involves testing of an individual program or module. In white-box testing, program logic is tested. It is applicable to unit testing and interface testing. White-box testing examines the internal structure of a module.
In black-box testing, only functionality is tested. Program logic is not tested, and hence black-box testing is not relevant for unit testing.
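A minimal illustration of the distinction: a white-box unit test targets each internal logic path of a module, which a black-box test (inputs vs. expected outputs only) cannot guarantee. `classify_login` is a made-up example function:

```python
def classify_login(failed_attempts: int) -> str:
    # Three internal paths a white-box test should cover.
    if failed_attempts >= 5:
        return "locked"
    elif failed_attempts >= 3:
        return "warned"
    return "ok"

# White-box unit tests: one case per branch of the internal logic.
assert classify_login(0) == "ok"
assert classify_login(3) == "warned"
assert classify_login(5) == "locked"
print("all branches covered")
```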

Saturday, June 30, 2018

Glossary

Important CISA Glossary Items:

A:

B:


Tuesday, June 26, 2018

Domain 4 (Part 2)

- IT support services (help desk and management of configuration, change, release, problem and incident)
- IT delivery services (SLA, management of IT finance, capacity, service continuity, availability).

> Problem management vs incident management - The purpose of problem management is to reduce the number of incidents, while incident management is all about restoring the affected system to its normal operational state as quickly as possible.

# Media sanitization 
- process of eradicating data to the point where it is impossible to restore.
- an organization must select the correct sanitization techniques for the types of media it uses for storing data.

Enterprise backend hardware: print server, file server, application server, web server, proxy server, database server, smart phones, PDA, firewall, IDS(intrusion detection system), IPS(intrusion prevention system), switches, routers, LPN, storage devices, memory and flash cards.

IS risks: viruses, spyware and other malicious programs, data theft, data loss, data corruption, loss of storage devices.
IS security controls: data encryption, granular control, IS security training, enforcing a desktop lock policy, antivirus policy, use of approved and secure devices, including return and privacy information in the data file.

# RFID (Radio frequency identification):
- Uses radio frequency to identify objects that are tagged. A tag consists of a chip and an antenna. The chip stores the ID of the object, and the antenna receives the signal.
- An active tag draws power from a battery and can transmit its ID over a longer distance.
- A passive tag gets its energy from the power radiated by the RFID reader.
- Application area of RFID: asset management, hardware tracking, authenticity verification, matching, process control, access control, supply chain management.

> Risk of RFID
1. A direct attack on RFID system can disrupt business process.
2. A hacker can gain access to RFID information stored in your system.
3. RFID technology can appear as a threat to a non-RFID network.


Monday, June 25, 2018

Domain 3 (Part 2)

# CMM levels help to improve software life cycle processes.
Level 5: optimizing (continuous improvement).
Level 4: managed (quantitative quality).
Level 3: defined (documented process).
Level 2: repeatable (disciplined management process).
Level 1: Initial (Adhoc, individual effort).

# CMMI (capability maturity model integration)
The purpose of CMMI is to integrate various software maturity models including CMM into a single model. Just like CMM, CMMI also has five maturity levels, but the description of each level is not similar to the CMM. CMMI levels:

Level 5 - optimizing (focuses on process improvement)
Level 4 - quantitatively managed (process is measured and controlled)
Level 3 - defined (process is characterized and proactive)
Level 2 - managed (process is characterized for projects and often reactive)
Level 1 - initial (poorly controlled process, which can be unpredictable and reactive)

Business case - Gives the necessary information to decide whether or not to start a project. It is developed from the results of the feasibility study, which is done during the project-planning phase.

Software size estimation methods:
1) SLOC (source lines of code)
2) Function point analysis - It considers the following parameters:
-> Number of user inputs -> Number of outputs -> Number of user inquiries -> Number of files -> Number of external interfaces
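An unadjusted function point count is simply a weighted sum of the five parameters listed above. The weights below are the commonly cited average-complexity weights, and the counts are invented for illustration:

```python
# Commonly cited average-complexity weights per parameter (illustrative).
weights = {"user_inputs": 4, "outputs": 5, "inquiries": 4,
           "files": 10, "external_interfaces": 7}
# Invented counts for a hypothetical system.
counts = {"user_inputs": 12, "outputs": 8, "inquiries": 5,
          "files": 6, "external_interfaces": 2}

unadjusted_fp = sum(counts[k] * weights[k] for k in weights)
print(unadjusted_fp)  # 48 + 40 + 20 + 60 + 14 = 182
```

In full function point analysis, this unadjusted count is then scaled by a value adjustment factor based on system characteristics.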

# Time box management
- This project management technique is used to deploy software project within a short and fixed time frame using fixed resources.
- It can be used with rapid application development type projects.
- Advantage: Preventing project cost overrun and delay.

- Project controlling activities: managing project scope, resources and risks.

# Project risks
1) The risks that impact Business benefit: project sponsors are responsible for mitigating this risk.
2) Project risks: project manager is responsible for project risks.

Project risk management process consists of five steps:
1) Inventory risks
2) Assess risks
3) Mitigate risks
4) Discover risks
5) Review and evaluate

- The errors caused by unauthorized access are the main problem with online programming methods.

Categories of program debug tools
1. Logic path monitors: identify the errors in program logics
2. Memory dumps: identify inconsistency in data or parameters.
3. Output analyzers: check the accuracy of the results after execution.

- The certification and accreditation process starts after successful completion of final acceptance test.

# Certification process - Assesses standard controls (operational, management, technical) in an information system. It examines the level of compliance with policies, standards, guidelines, processes and procedures. The goal of the certification process is to determine whether the controls are operating correctly, producing the expected outcome and meeting the security requirements. The outcome of the certification process helps to reassess and update the system security plan.

# Accreditation - Senior management's decision to authorize IS operation and accept the risks (risks to IT assets, operations and individuals).
It is considered as a form of quality control, which challenges IS managers and staff to implement highly effective security controls in the organization’s IT systems.

# Changeover (cutover or go-live technique)
This is an approach to migrate the existing users of an old system to a newly developed system. It is also known as cutover since it cuts users off from the old system and moves them to the new system.

- Parallel changeover: The old system is kept running while the new system is started, so both the new and old systems run at the same time. In this approach the users use both systems, which helps to identify any problems the users face while using the new system. When users gain confidence in the new system, the full changeover to the new system takes place.

- Phased changeover: This approach breaks down the old system into several deliverable modules. The first deliverable module of the old system is replaced with the first deliverable module of the new system. Similarly, all the other new modules replace the old modules. Thus, the changeover to the new system takes place.
Risk: IT resource challenges, extended project life-cycle, running change management for the old system.

- Abrupt changeover: On a specific date and time, the old system is changed over to the new system and the use of the old system is discontinued.
Risk: assets safeguard, data integrity, system effectiveness and efficiency.

- The main objective of a post implementation review is to assess and measure how much value the project has delivered to the business.

# EDI (electronic data interchange)
- Usually, EDI is used to transmit invoices, shipping orders and purchase orders.
- An EDI system requires the following components:
1. Communication software
2. Transaction software
3. Access to standards

When reviewing an EDI, an auditor should consider:
1. The proprietary version of EDI. Most large organizations have their own proprietary EDI.
2. Publicly available commercial EDI (this approach is less costly but has more security risks)

# Traditional EDI
1. Communication handler: A process that handles data transmission over dial-up lines or other public networks.
2. EDI interface: It manages and controls the data path between the communication handler and the application. The two components of the EDI interface are the EDI translator (converts data between EDI format and the proprietary format) and the application interface (used for data movement and data mapping).
3. Application system: A program that processes data before sending to and after receiving from the trading partners. Web-based EDI is used for generic network access.

# EDI risks
1. Transaction authorization (the main risk in an EDI system)
2. Loss of business continuity
3. Deletion or manipulation of transactions
4. Duplicate EDI transmission and data loss.
5. Loss of transaction confidentiality

The IS auditor can verify the evaluation objectives of EDI by reviewing the following:
1. Encryption in place
2. The existence of checks for data editing
3. Validity and reasonableness checks for each transaction
4. Logging of each inbound transaction.
5. Verifying the number and value of transaction with control totals
6. Using segment count totals
7. Using transaction set count totals
8. Using batch control totals
9. Sender’s validity against other trading partners
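Several of the checks above (record counts, value totals, batch control totals) can be sketched in a few lines. This is a hypothetical illustration; the function and field names are invented, not part of any EDI standard:

```python
# Hypothetical sketch: verifying a batch of inbound EDI transactions
# against the sender's control totals (record count and value total).
def verify_batch(transactions, expected_count, expected_total):
    """Return True only if the batch matches both control totals."""
    count = len(transactions)
    total = sum(t["amount"] for t in transactions)
    return count == expected_count and total == expected_total

inbound = [{"id": 1, "amount": 100.0}, {"id": 2, "amount": 250.5}]
print(verify_batch(inbound, expected_count=2, expected_total=350.5))  # True
print(verify_batch(inbound, expected_count=3, expected_total=350.5))  # False: a record was lost
```

A mismatch in either total signals duplicate transmission, data loss, or manipulation, which is exactly what the controls above are meant to detect.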

Some other EDI audit options are:
1. Audit monitor: Installed on the EDI computer to capture transactions so that an auditor can review them.
2. Expert systems: An audit monitor that can determine the significance of a transaction based on audit rules and prepare a report for the auditors.

# DSS (decision support system)
- DSS mainly focuses more on effectiveness and less on efficiency.
- Prototype is the preferred DSS development and design approach.
- The true evaluation of a DSS is whether it can improve management’s decision-making process.

# Data oriented system development:
A software development method where data and data structures are used to represent software requirements. Elimination of data transformation errors is the major advantage of this method.

# Object oriented system development:
A programming technique, not a software development methodology, where data and procedures are treated as a single entity. It has the advantages of managing unrestricted types of data, modeling complex relationships and adapting to a changing environment.

# Component based development:
An extension of object-oriented system development. In this technique, various components are assembled to deliver their services through defined interfaces. The purpose of the interfaces is to let application programs communicate with each other regardless of their source languages and operating platforms.
Advantages: shorter development time; programmers can focus more on the business functionality of the application; promotes modularity; the ability to combine code across languages and reuse it; lower development cost; and the option to buy only the components the system needs rather than a complete solution with unneeded features.

# Application controls
1. Data input
2. Data processing
3. Output function


# Input controls 

- Input control is assured by the followings:
A. Input authorization
B. Batch control and balancing

A. Input authorization:
- It ensures that all data input is authorized and approved by the responsible department or management. Input authorization types:
1. Signature on batch forms or source documents
2. Online access controls
3. Unique passwords
4. Terminal or client work station identification
5. Source document

B. Batch control and balancing
- Batch balancing is about making sure that the files or documents each transaction creates are added to the batch, processed and accepted by the system.

Input control techniques
1. Transaction log
2. Reconciliation of data-whether all data received are properly recorded and processed.
3. Documentation
4. Error correction procedures
5. Anticipation
6. Transmittal log
7. Cancellation of source document


# Data processing controls and procedures 

The processing controls consist of:
A. Data validation and editing procedures
B. Processing controls
C. Data file control procedures

A. Data validation and editing procedures:
- Input data need to be validated and edited as soon as they are generated. Data validation means finding data errors, incomplete/missing data and inconsistencies in the data.
- Edit controls are used before data are processed in order to prevent inaccurate data processing.

-Sequence check
-Limit check
-Range check
-Validity check
-Reasonableness check
-Check digit - A mathematically calculated value added to the original data to make sure that the data have not been altered. It is used to detect transcription and transposition errors.
-Completeness check
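One widely used check-digit scheme (not named above, offered here purely as an illustration) is the Luhn algorithm, which catches all single-digit transcription errors and most adjacent transpositions:

```python
def luhn_check_digit(payload: str) -> int:
    """Compute the Luhn check digit for a string of decimal digits."""
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:        # double every second digit, rightmost first
            d *= 2
            if d > 9:
                d -= 9        # same as summing the two digits of d
        total += d
    return (10 - total % 10) % 10

def luhn_valid(number: str) -> bool:
    """Validate a number whose last digit is its Luhn check digit."""
    return luhn_check_digit(number[:-1]) == int(number[-1])

print(luhn_check_digit("7992739871"))   # 3
print(luhn_valid("79927398713"))        # True
print(luhn_valid("97927398713"))        # False: first two digits transposed
```

The last call shows the transposition-detection property the notes describe: swapping two adjacent digits invalidates the check digit.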


B. Data processing controls:
- It ensures that data are complete and accurate. Data processing control techniques include manual recalculation, editing, run-to-run totals, programmed controls, limit checks on amounts, reasonableness verification of calculated amounts, reconciliation of file totals and exception reports.

C. Data file control procedures:
- It helps to make sure that only authorized processing is performed on the data; no unauthorized process may be performed on the data. The contents of data tables or files can be divided into the following categories:
1. System control parameters: any changes in these data can change the way the system functions.
2. Standing data: not frequently changed. Example: suppliers' names, addresses etc.
3. Master data/balance data: current balances and total balances, which are frequently updated by new transactions. Audit trails must be present to prevent unauthorized changes to these data.
4. Transaction logs: these logs are controlled by validation checks, exception reports, control totals etc.

Important controls for data files: 

Before and after image reporting - The data file before and after the processing need to be recorded to analyze the processing impact on the database.
Error reporting and handling - Those who input the data should not review and authorize the error correction.
Source file retention 
Version usage - It is necessary to process the correct version of the file, because an older version may not reflect all the required processing.
Data file security - Use to prevent unauthorized access to the data
One-for-one checking - Each source document is checked against the processed output to make sure that all the documents have been processed.
Transaction logs - The activities that need to be recorded are input time, username, input terminal/computer details etc. These activity data help to generate audit trails and can be used to find errors or warnings and to restore the system if any technical problem occurs.
Parity check - This is used to detect transmission errors in the data. When a parity check is applied to a single character, it is called a vertical or column check. When a parity check is applied to all the data, it is called a longitudinal (horizontal) or row check. Using both types of parity check simultaneously greatly increases the error-detection capability, beyond what is possible when only one type of parity check is used.
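A minimal sketch of the two parity flavors (the function names are mine, not a standard API; even parity is assumed):

```python
def vertical_parity(byte: int) -> int:
    """Even-parity bit for a single character (vertical / column check)."""
    return bin(byte).count("1") % 2   # 1 when the count of 1-bits is odd

def longitudinal_parity(block: bytes) -> int:
    """Parity byte across a whole block (longitudinal / row check):
    XOR of all bytes, i.e. a column-wise parity for each bit position."""
    out = 0
    for b in block:
        out ^= b
    return out

print(vertical_parity(0b1011))               # 1 (three 1-bits, odd count)
print(longitudinal_parity(b"\x01\x02\x03"))  # 0 (1 XOR 2 XOR 3)
```

Checking both values on the receiving side lets a single-bit error be located at the intersection of the failing row and column.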

# Output controls 
- Output controls ensure consistent and secure delivery of data. The data also need to be presented to users in the proper format. The output controls are:
1. Logging and storage of forms in secure place
2. Computer-generated forms and signatures: All computer-generated forms should be compared with the physical copies of the forms. Someone should be accountable for any issues, exceptions or unwanted modifications of the forms.
3. Distribution of report: the report should be distributed to the person authorized to receive it.
4. Balancing & reconciling: Procedures to find errors in output reports should be established, and reports should be delivered to the concerned department for review and correction.
5. Output report retention: There should be a report retention schedule, and the report retention policy should follow legal regulations, if there are any.
6. Report receipt: The recipient of a report should sign the record or logbook. This makes sure that sensitive reports are distributed properly.

# Tasks of IS Auditor in application controls 
1. Identifying the important applications and their components, understanding the flow of information among the applications or systems, and gaining knowledge about the application by reading available documents and interviewing IS personnel.
2. Identifying the strengths and weaknesses of application controls, and evaluating the impact of the identified weaknesses.
3. Understanding the functionality of the applications by reviewing the system documentation.

# Data integrity tests 
Data integrity tests consist of a number of substantive tests. They aim to test data accuracy, consistency and authorization.
1. Relational integrity: Enforced at the data element or record level. Relational integrity can be maintained through built-in data validation routines in the applications. It can also be implemented in the database by defining input constraints and data characteristics in the tables.
2. Referential integrity: Tests the relationships between entities in the tables of a database. Referential integrity helps to maintain the integrity of interrelationships in the relational database model (RDBMS). A relational database establishes relations among various tables using references between primary and foreign keys; referential integrity tests make sure that every such reference points to an existing row in the referenced table.
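As a hedged illustration (the table names are invented), SQLite can enforce referential integrity with a foreign key, rejecting a reference to a non-existent row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
             " customer_id INTEGER REFERENCES customers(id))")

conn.execute("INSERT INTO customers VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders VALUES (10, 1)")       # valid reference

try:
    conn.execute("INSERT INTO orders VALUES (11, 99)")  # customer 99 does not exist
except sqlite3.IntegrityError as err:
    print("referential integrity violation:", err)
```

An IS auditor's referential integrity test amounts to the same check done after the fact: every foreign key value must match a primary key in the referenced table.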

# Data integrity in online transaction 
- The integrity of online data is maintained by four principles (ACID).
Atomicity: A transaction is either complete or it is not. If a transaction cannot complete because of a problem, the database must go back to its state before the transaction began, which ensures atomicity.
Consistency: after each transaction, the database should go from its previous consistent state to another consistent state.
Isolation: every transaction should be isolated from other transactions and should see the database in a consistent state.
Durability: when a transaction is considered complete, then the database should retain the data even after any hardware or software failure.
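Atomicity in particular can be demonstrated with SQLite's transaction rollback (the account data below are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("a", 100), ("b", 0)])
conn.commit()

try:
    with conn:  # one transaction: commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'a'")
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass

# The partial debit was backed out: the database returned to its prior state.
print(conn.execute("SELECT balance FROM accounts WHERE name = 'a'").fetchone()[0])  # 100
```

Because the failure occurred before the transaction completed, the database rolled back to the state before the transaction, exactly as the atomicity principle requires.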

> The main advantage of component-based development is the compatibility of the resulting system with multiple platforms and environments.
> Inadequate software baseline can result in project scope creep.
> An IS auditor reviewing agile project software development can expect post-iteration reviews that document the lessons learned.
> Checksum in data is used for integrity testing.
> The transaction journal records transaction activity. Therefore, comparing the transaction journal with the authorized data source will reveal any unauthorized input from a terminal (a specific computer).
> A console log printout does not record transaction activity from a terminal.
> An automated suspense file only shows the transactions that need action.
> No modification is allowed once data are in the warehouse.
> A warehouse is just a copy of the original transaction data and it is used for query and analysis.
> Metadata works as a table of contents for the warehouse, which is why metadata is considered the most important design element of a data warehouse.
> RAD is a management technique.

Sunday, June 24, 2018

Domain 4 (Part 1)

RTO & RPO

- An RTO of 2 hours indicates that the organization needs to ensure that system downtime does not exceed 2 hours.
- An RPO of 2 hours indicates that the organization needs to ensure that data loss does not exceed 2 hours of captured data.
- In any given scenario, for critical systems, RTO is zero or near zero. Similarly, for critical data, RPO is zero or near zero.
- The lower the RTO/RPO, the higher the cost of maintaining the environment.
- A low RTO/RPO indicates that disaster tolerance is low. Conversely, if disaster tolerance is low, the RTO/RPO should be low.
- When RTO is low, a mirrored site or hot site is recommended.
- When RPO is low, mirror imaging or real-time replication for data backup is recommended.
- Where RPO is zero, a synchronous data backup strategy is to be used.
- Both RTO and RPO are based on time parameters. The lower the time requirements, the higher the cost of recovery strategies.
- RTO (Acceptable System Downtime)
- RPO (Acceptable Data Loss)

Alternate Recovery Site
- A mirrored site is the fastest mode of recovery, followed by a hot site.
- A cold site is the slowest mode of recovery.
- For critical systems, mirrored/hot sites are the appropriate option.
- For non-critical systems, a cold site is the appropriate option.
- A reciprocal agreement involves the lowest expenditure in terms of recovery arrangements.

# Mirrored Site
Already Available - Space & Basic Infra, All IT equipment, Updated database
Required - hardly anything

# Hot Site
Already Available - Space & Basic Infra, All IT equipment
Required - Updated database

# Warm Site
Already Available - Space & Basic Infra, Some IT equipment
Required - Needed IT equipment, Updated database

# Cold Site
Already Available - Space & Basic Infra
Required - Needed equipment, Updated database

# Mobile Site
- Mobile sites are processing facilities mounted on a transportable vehicle and kept ready to be delivered.
- A mobile site is a vehicle ready with all necessary computer equipment, and it can be moved to any cold or warm site depending upon the need. The need for a mobile site depends upon the scale of operations.

# Reciprocal Agreement
- Reciprocal agreements are the agreements wherein two organizations (mostly having similar processing) agree to help each other in case of emergency. Reciprocal agreements are the least expensive because they usually rely on agreement between two firms. However, they are the least reliable.


Physical Network Media:

- Fiber-optic cables have proven to be more secure than other media. They have very low transmission loss, are not affected by EMI and are the preferred choice for high volumes and long-distance calls.

# Attenuation:
- Weakening of signals during transmission.
- Exists in both wired and wireless transmissions.
- Length of wire impacts the severity of attenuation.

# Electromagnetic Interference (EMI):
- EMI is a disturbance generated by an external source that affects an electrical circuit.
- The disturbance may degrade the performance of the circuit or even stop it from functioning. In the case of a data path, these effects can range from an increase in error rate to a total loss of the data.
- EMI is caused by electrical storms or noisy electrical equipment (e.g. motors, fluorescent lighting, radio transmitters etc.)

# Cross-Talks:
- Crosstalk is electromagnetic interference from one unshielded twisted pair to another twisted pair, normally running in parallel.
- Only occurs in wired communication

# Sags, Spikes, and surges:
- Only occurs in wired communication

# Multipath Interference:
- Only occurs in wireless communication

- Using separate conduits for data cables and electrical cables, minimizes the risk of data corruption due to an induced magnetic field created by electrical current.

# Alternate Routing:
- Method of routing information via an alternative medium, such as copper cable or fiber optics.

Last mile circuit protection:
Last mile circuit protection provides redundancy for local communication loop.

Long haul network diversity:
Long haul network diversity provides redundancy for long distance availability.

# Diverse Routing:
- Diverse routing is the method of routing traffic through split-cable facilities or duplicate-cable facilities.

- In alternate routing alternate type of cables are used such as fiber optics or copper cable
- In diverse routing same type of cable is used either in duplicate or by splitting.


Gateway - Application Layer (7th Layer)

Switch stores MAC address in a look up table

# Bridges:
Act as store-and-forward devices in moving frames toward their destination. This is achieved by analyzing the MAC header of a data packet: by examining the MAC address, the bridge can decide where to direct the frame, storing frames as needed before forwarding them.

# Backup schemes:

Full Backup - A backup of the full database is taken every time, irrespective of earlier backups.
Incremental Backup - Backup is taken only of data changed since the last backup (the last backup can be either a full or an incremental backup).
Differential Backup - Backup is taken only of data changed since the last full backup.

# Differential Backup
Full backup taken on Monday
- On Tuesday backup taken for changes made after Monday’s backup
- On Wednesday backup taken for changes made after Monday’s backup (ignoring Tuesday’s backup)
- On Thursday backup taken for changes made after Monday’s backup (ignoring Tuesday’s  & Wednesday’s backup)
- On Friday backup taken for changes made after Monday’s backup (ignoring Tuesday’s , Wednesday’s  & Thursday’s backup)

# Incremental Backup
Full backup taken on Monday
- On Tuesday backup taken for changes made after Monday’s backup
- On Wednesday backup taken for changes made after Tuesday’s backup
- On Thursday backup taken for changes made after Wednesday’s Backup
- On Friday backup taken for changes made after Thursday’s Backup

# Storage Capacity for each backup Scheme:
-Full Backup- Requires more time and storage capacity as compared to other two schemes.
-Differential- Requires less time and storage capacity as compared to full backup but more time and storage capacity as compared to Incremental.
-Incremental- Requires less time and storage capacity as compared to other two schemes.

# Restoration Capability for each backup Scheme:
-Full Backup- Fastest of all three schemes.
-Differential- Slower than Full backup but faster than incremental.
-Incremental-Slowest of all three schemes.
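The restoration differences can be sketched by listing which backup sets a failure after Friday's backup would require under each scheme (the weekday labels follow the examples above and are illustrative):

```python
# Full backup on Monday, one backup each following day,
# failure occurring after Friday's backup has completed.
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def restore_chain(scheme: str) -> list:
    """Backups that must be restored, in order, for each scheme."""
    if scheme == "full":
        return ["Fri"]            # the latest full backup alone suffices
    if scheme == "differential":
        return ["Mon", "Fri"]     # last full + latest differential only
    if scheme == "incremental":
        return DAYS               # last full + every incremental since
    raise ValueError(scheme)

print(restore_chain("full"))          # ['Fri']
print(restore_chain("differential"))  # ['Mon', 'Fri']
print(restore_chain("incremental"))   # ['Mon', 'Tue', 'Wed', 'Thu', 'Fri']
```

The chain lengths mirror the note above: full restores fastest (one set), differential next (two sets), incremental slowest (the whole chain).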


Wednesday, June 20, 2018

Domain 3 (Part 1)

Online Auditing Techniques:

- SCARF (System Control Audit Review File). An embedded (built-in) audit module is used to continuously monitor transactions. It records only those transactions of special audit significance, such as transactions above a specified limit or transactions involving deviations/exceptions. Useful when regular processing cannot be interrupted.

- Snapshots - Snaps (pictures) are taken of transactions as they move through various stages in the application system. Both before-processing and after-processing images of the transactions are captured. Auditors can verify the correctness of the processing by checking the before-processing and after-processing images. Useful when an audit trail is required.

- ITF (Integrated Test Facility) - Fictitious entities/transactions are created in the live production environment. Processed results and expected results are compared to verify that systems are operating correctly. Test data need to be isolated from actual production data.

- CIS (Continuous and Intermittent Simulation) - This technique can be used whenever the application system uses a database management system (DBMS). The DBMS reads the transaction, which is passed to CIS. If the transaction meets the selection criteria, CIS examines it for correctness. CIS determines whether any discrepancies exist between the results it produces and those the application system produces; thus, CIS replicates or simulates the application system's processing. Best technique when transactions meeting certain criteria need to be examined.

- Audit Hook - Audit routines embedded in application software to capture suspicious transactions. Criteria for suspicious transactions are designed by auditors as per their requirements. Helps the IS auditor act before an error or an irregularity gets out of hand.

# Audit trail (Snapshot)
# Fictitious entity in LIVE production (ITF)
# Early detection (Audit Hook)
# Simulates the application system processing (CIS)
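A toy sketch of the SCARF idea described above (the limit and field names are invented): the embedded module copies audit-significant transactions to a review file while normal processing continues uninterrupted.

```python
AUDIT_LIMIT = 10_000
scarf_file = []  # the system control audit review file

def process_transaction(txn: dict) -> dict:
    """Normal transaction processing, with an embedded audit module."""
    if txn["amount"] > AUDIT_LIMIT:   # criterion of special audit significance
        scarf_file.append(txn)        # recorded for later review by the auditor
    return txn                        # regular processing is never interrupted

process_transaction({"id": 1, "amount": 500})
process_transaction({"id": 2, "amount": 25_000})
print([t["id"] for t in scarf_file])  # [2]
```

Only the transaction above the limit lands in the review file, which is what makes SCARF suitable when regular processing cannot be interrupted.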


Testing in SDLC

# Unit Testing
- Testing of an individual program or module, done during the development stage
- White box approach (i.e. testing of internal program logic) is applied

# Integrated/Interface Testing 
- Dictionary meaning of integrate is 'to connect'
- Testing of connection of two or more module or components that pass information from one area to another

# Parallel testing 
- Process of comparing results of the old and new system.
- To ensure that the implementation of a new system meets user requirements.

# Pilot Testing 
- Takes place first at one location to review performance. The purpose is to see if the new system operates satisfactorily in one place before implementing it at other locations.

# Regression Testing
- Meaning = 'act of going back' or to 'return'
- Ensures that changes or corrections in a program have not introduced any new errors.
- Data used for regression testing should be the same as the data used in previous tests.

# Sociability Testing
- Meaning = 'ability to have companionship with others'
- To ensure that a new or modified system can work in its specified environment without adversely impacting existing systems.

# System Testing
- Full-fledged test that includes stress/load/security/recovery and other tests
Security Testing - Testing of appropriate access control and other security measures.
Recovery Testing - Checking system's ability to recover after a hardware or software failure.
Load Testing - Testing of performance of system during peak hours (processing large quantity of data)
Volume Testing - Testing to determine maximum volume of records (data) the application can handle.
Stress Testing - Testing to determine maximum number of concurrent users/services the application can process.
Performance Testing - Comparing the performance of the system to other equivalent system using well defined benchmarks.

# Bottom-Up Approach
- Testing starts with individual units such as individual programs or modules and works upward until a complete system is tested.
- Testing can be started even before all programs are complete
- Errors in critical modules are found early.

# Top-Down Approach
- Testing starts at the broader (system) level and then gradually moves toward individual programs and modules
- Interface errors can be detected earlier
- Confidence in the system is achieved earlier
- Appropriate for prototype development.

# Alpha Testing 
- Testing done by internal user
- Done prior to beta testing
- May not involve testing of full functionality

# Beta Testing 
- Testing done by external user
- Done after alpha testing
- Generally, involves testing of full functionality


Check Digit/Parity Bits/Checksum/Cyclic Redundancy Checksums (CRC)/Redundancy Checksums/Forward Error Control/Atomicity

Check Digit:
- Mathematically calculated value that is added to data to ensure that the original data have not been altered.
- Helps in avoiding transposition and transcription errors.
- Ensure data accuracy

Parity Bits:
- Requires adding an extra bit to the data, called a parity bit. This bit simply says whether the number of 1 bits is odd or even. Generally, the parity bit is 1 if the number of 1 bits is odd and 0 if the number of 1 bits is even.
- This parity is verified by receiving computer to ensure data completeness and data integrity during transmission.
- Parity bits are used to check for completeness of data transmissions. A parity check is a hardware control that detects data errors when data are read from one computer to another, from memory or during transmission.

Checksum:
- Checksums work exactly like parity but can also identify complex errors by increasing the complexity of the arithmetic.

Cyclic Redundancy Checksums (CRC)/Redundancy Checksums:
- A more advanced version of checksums, obtained by further increasing the complexity of the arithmetic.
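The difference can be illustrated with a byte transposition, which a simple additive checksum misses but a CRC catches. The additive function below is a toy example; `binascii.crc32` is Python's standard CRC-32 implementation:

```python
import binascii

def additive_checksum(data: bytes) -> int:
    """Toy checksum: sum of all bytes modulo 256 (position-insensitive)."""
    return sum(data) % 256

original = b"\x01\x02"
swapped = b"\x02\x01"  # the two bytes transposed in transmission

print(additive_checksum(original) == additive_checksum(swapped))  # True: error missed
print(binascii.crc32(original) == binascii.crc32(swapped))        # False: error caught
```

Because a sum ignores byte order while a CRC does not, the CRC detects the reordering error; this is the "increased complexity of the arithmetic" the notes refer to.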

Forward Error Control:
- Works on the same principle as CRC; however, FEC also corrects the errors it finds. FEC provides the receiver with the ability to correct errors.
- Used to detect and correct transmission errors.

Atomicity:
A transaction must be all-or-nothing: it must either happen fully or not happen at all. The principle of atomicity requires that a transaction be completed in its entirety or not at all. If an error or interruption occurs, all changes made up to that point are backed out.

Parity bits or checksums (a higher version of parity bits) or CRC (a higher version of checksums):
- To identify transmission error
- To ensure completeness
- To ensure integrity
- First preference goes to CRC. If CRC is not among the options, preference goes to checksum. If neither CRC nor checksum is an option, preference goes to parity bits.


PERT-CPM-Gantt Chart-FPA-Timebox:

PERT or CPM: To estimate project duration or timelines. First preference to be given to PERT.

Gantt Chart: To monitor the project or track any milestone

FPA or SLOC: To estimate software size. First preference to be given to FPA. SLOC = source lines of code. SLOC is a direct method while FPA is an indirect method. FPA is arrived at on the basis of the number and complexity of inputs, outputs, files, interfaces and queries. FPA is more reliable than SLOC.
When the objective is software size estimation, first preference goes to FPA.
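A hedged sketch of an unadjusted function point count based on the components FPA uses. The average complexity weights below (inputs 4, outputs 5, inquiries 4, internal files 10, external interfaces 7) are the commonly cited Albrecht values; treat them and the example counts as illustrative:

```python
# Commonly cited average complexity weights for function point analysis
# (illustrative; real counts classify each component as low/avg/high).
AVG_WEIGHTS = {
    "inputs": 4, "outputs": 5, "inquiries": 4,
    "internal_files": 10, "external_interfaces": 7,
}

def unadjusted_fp(counts: dict) -> int:
    """Sum of component counts times their average complexity weights."""
    return sum(counts[k] * AVG_WEIGHTS[k] for k in counts)

counts = {"inputs": 10, "outputs": 8, "inquiries": 5,
          "internal_files": 3, "external_interfaces": 2}
print(unadjusted_fp(counts))  # 40 + 40 + 20 + 30 + 14 = 144
```

Unlike SLOC, nothing here depends on lines of code, which is why FPA can be applied before any code exists.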

Timebox Management: To prevent project cost overruns and delays from scheduled delivery

Earned Value Analysis (EVA):

-Budget to date
-Actual spending to date
-Estimate to complete
-Estimate at completion
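The standard earned value formulas tie these four quantities together; the project figures below are illustrative:

```python
def earned_value(bac, pv, ev, ac):
    """bac: budget at completion, pv: planned value (budget to date),
    ev: earned value of work done, ac: actual cost (actual spending to date)."""
    cpi = ev / ac        # cost performance index (< 1 means over budget)
    spi = ev / pv        # schedule performance index (< 1 means behind schedule)
    eac = bac / cpi      # estimate at completion
    etc = eac - ac       # estimate to complete
    return cpi, spi, eac, etc

cpi, spi, eac, etc = earned_value(bac=100_000, pv=50_000, ev=40_000, ac=45_000)
print(spi, round(eac), round(etc))  # 0.8 112500 67500
```

Here the project is both over budget (CPI below 1) and behind schedule (SPI of 0.8), so the estimate at completion exceeds the original budget.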


# Function point analysis (FPA) - To estimate efforts required to develop software.

# Decision Support System (DSS)
- Supports the semi-structured problem (and not only structured problem).
- Should be flexible and adaptable to changing requirements and scenarios.
- Decision tree is used as a questionnaire to lead a user through a series of choices until a conclusion is reached.
- Interactive System

# RISK Factors for Implementation of DSS
- Inability to specify purpose or usage patterns in advance.
- Inability to predict and cushion impact on all parties.
- Non-existent or unwilling users/ Multiple users or implementers/ Disappearing users, implementers and maintainers.
- Lack or loss of support/ Lack of experience with similar systems
- Technical problems and cost effectiveness issues.


# Agile Development:

- Dictionary meaning of agile is ‘able to move quickly and easily’.
- Allows the programmer to just start writing a program without spending much time on pre-planning documentation.
- Less importance is placed on formal paper-based deliverables, with the preference being to produce releasable software in short iterations, typically ranging from 4 to 8 weeks.
- At the end of each iteration, the team considers and documents what worked well and what could have worked better, and identifies improvements to be implemented in subsequent iterations.
- Some programmers prefer agile because they do not want to be involved in tedious planning exercises.
- Major risk associated with agile development is lack of documentation.
- In agile approach reviews are done to identify lessons learned for future use in the project.

Object Oriented System Development (OOSD):

- OOSD is a programming technique and not a software development methodology.
- Object here refers to small piece of program that can be used individually or in combination with other objects.
- In Object oriented language, application is made up of smaller components (objects).
- One of the major benefits of object-oriented design and development is the ability to reuse objects.
- ‘Encapsulation’: one object interacts with another only through its defined interface. It is common practice for any particular object to call other objects to perform its work.

# Encapsulation
Encapsulation is a mechanism where you bind your data and code together as a single unit. It also means hiding your data in order to make it safe from any modification. What does this mean? The best way to understand encapsulation is to look at the example of a medical capsule, where the drug is always safe inside the capsule. Similarly, through encapsulation the methods and variables of a class are well hidden and safe.

- Permits enhanced degree of security over data.

When you create an object in an object-oriented language, you can hide the complexity of the internal workings of the object. As a developer, there are two main reasons why you would choose to hide complexity.

The first reason is to provide a simplified and understandable way to use your object without the need to understand the complexity inside. As mentioned above, a driver doesn't need to know how an internal combustion engine works. It is sufficient to know how to start the car, how to engage the transmission if you want to move, how to provide fuel, how to stop the car, and how to turn off the engine. You know to use the key, the shifter (and possibly clutch), the gas pedal and the brake pedal to accomplish each of these operations. These basic operations form an interface for the car. Think of an interface as the collection of things you can do to the car without knowing how each of those things works.

Hiding the complexity of the car from the user allows anyone, not just a mechanic, to drive a car. In the same way, hiding the complex functionality of your object from the user allows anyone to use it and to find ways to reuse it in the future regardless of their knowledge of the internal workings. This concept of keeping implementation details hidden from the rest of the system is key to object-oriented design.

Take a look at the CombustionEngine class below. Notice that it has only two public methods:
start() and stop()

Those public methods can be called from outside of the object. All of the other functions are private, meaning that they are not publicly visible to the rest of the application and cannot be called from outside of the object.

package engine {
    public class CombustionEngine {
     
        public function CombustionEngine() {}

        private function engageChoke():void {}
        private function disengageChoke():void {}
        private function engageElectricSystem():void {}
        private function powerSolenoid():void {}
        private function provideFuel():void {}
        private function provideSpark():void {}
     
        public function start():void {
            engageChoke();
            engageElectricSystem();
            powerSolenoid();
            provideFuel();
            provideSpark();
            disengageChoke();
        }
      public function stop():void {}
    }
}

You would use this class as follows:

var carEngine:CombustionEngine = new CombustionEngine();
carEngine.start();
carEngine.stop();

The second reason for hiding complexity is to manage change. Today most of us who drive use a vehicle with a gasoline-powered internal combustion engine. However, there are gas-electric hybrids, pure electric motors, and a variety of internal combustion engines that use alternative fuels. Each of those engine types has a different internal mechanism, yet we are able to drive each of them because that complexity has been hidden. This means that, even though the mechanism which propels the car changes, the system itself functions the same way from the user's perspective.


# Inheritance
In OOP, computer programs are designed in such a way that everything is an object, and objects interact with one another. Inheritance is one such concept, where the properties of one class can be inherited by another. It helps to reuse code and establish relationships between different classes.

A child inherits properties from its parents. Similarly, in Java, there are two classes:
1. Parent class (Super or Base class)
2. Child class (Subclass or Derived class )

A class which inherits properties is known as the child class, whereas a class whose properties are inherited is known as the parent class.
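A minimal sketch of this relationship (shown in Python rather than Java; the `Vehicle`/`Car` names are invented for illustration):

```python
class Vehicle:                       # parent class (super/base class)
    def __init__(self, wheels):
        self.wheels = wheels

    def describe(self):
        return f"vehicle with {self.wheels} wheels"

class Car(Vehicle):                  # child class (subclass/derived class)
    def __init__(self):
        super().__init__(wheels=4)   # reuse the parent's initialiser

car = Car()
print(car.describe())                # describe() is inherited from Vehicle
```

The child class reuses the parent's code (`__init__`, `describe`) without redefining it, which is the point of inheritance.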

# Polymorphism
- Polymorphism is a generic term that means 'many shapes'. More precisely, polymorphism is the ability to request that the same operation be performed by a wide range of different types of things. It is the ability of an object to change behavior at compile time or at runtime.

In OOP, polymorphism is achieved using several techniques: method overloading, operator overloading, and method overriding.

- Method Overloading
Method overloading is the ability to define several methods, all with the same name but with different parameter lists.

public class MyLogger
{
    public void LogError(Exception e)
    {
        // Implementation goes here
    }

    public bool LogError(Exception e, string message)
    {
        // Implementation goes here
    }
}
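The overloading above is resolved at compile time. The runtime side of polymorphism, method overriding, can be sketched as follows (a Python illustration; the `Shape` classes are invented for this example):

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):                  # overrides Shape.area
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):                  # a different override of the same method
        return 3.14159 * self.radius ** 2

# the same call resolves to different implementations at runtime
shapes = [Square(2), Circle(1)]
areas = [s.area() for s in shapes]
```

Each object answers the same `area()` request in its own way, which is the "many shapes" idea in practice.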


# Prototyping:

- Process of creating systems through controlled trial and error.
- An early sample or model built to test a concept or process: a small-scale working system used to test assumptions. The assumptions may be about user requirements, program design or internal logic.
- An important advantage of prototyping is that it can provide the organization with significant time and cost savings.
- By focusing mainly on what the user wants and sees, developers may miss some of the controls that come from the traditional systems development approach; therefore, a potential risk is that the finished system will have poor controls.
- Top-down testing methods are MOST effective during the initial phases of prototyping.
- In prototyping, changes in designs and requirements occur quickly and are seldom documented or approved; hence, change control becomes more complicated with prototyped systems.

# Rapid Application Development:

- RAD includes use of:

> Small, well-trained development teams
> Prototypes
> Tools to support modelling, prototyping and component reusability
> A central repository
> Rigid limits on development time frames

- RAD enables the organisation to develop systems quickly while reducing development cost and maintaining quality.
- RAD relies on the usage of a prototype that can be updated continually to meet changing user or business requirements.


# Steps in Benchmarking Process:
(1) Plan (decide which processes are to be benchmarked)
(2) Research (decide where and with whom benchmarking is to be done)
(3) Observe (visit and observe the processes of the benchmarking partners)
(4) Analyse (analyse the gap between the organisation's processes and the benchmarking partner's processes)
(5) Adopt (implement the best practices followed by the benchmarking partner)
(6) Improve (continuous improvement)

- Parity bits are used to check for completeness of data transmissions.
- Check digits are a control check for accuracy.
- Detailed program logic is tested in White Box Testing
- The primary purpose of a system test is to evaluate the system functionally.
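The parity-bit idea from the first bullet above can be sketched as follows (even parity is assumed for illustration):

```python
def even_parity_bit(data: bytes) -> int:
    """Even parity: the parity bit is chosen so the total count of 1-bits is even."""
    ones = sum(bin(byte).count("1") for byte in data)
    return ones % 2

def check(data: bytes, parity: int) -> bool:
    # the receiver recomputes the parity; a mismatch flags a transmission error
    return even_parity_bit(data) == parity

p = even_parity_bit(b"\x07")     # 0b00000111 has three 1-bits, so parity is 1
ok = check(b"\x07", p)           # True: data arrived intact
bad = check(b"\x06", p)          # False: a single flipped bit is detected
```

Note that parity detects any single-bit error but misses errors that flip an even number of bits.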

# Throughput 
The maximum rate of production, or the maximum rate at which something can be processed. In data transmission, network throughput is the amount of data moved successfully from one place to another in a given time period, typically measured in bits per second (bps), as in megabits per second (Mbps) or gigabits per second (Gbps).
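A quick worked example of the bps arithmetic (the transfer figures are invented):

```python
def throughput_mbps(bytes_moved: int, seconds: float) -> float:
    # throughput = bits successfully moved per unit time
    bits = bytes_moved * 8
    return bits / seconds / 1_000_000   # convert bits/s to megabits/s

# e.g. 125 MB moved in 10 seconds is 100 Mbps
rate = throughput_mbps(125_000_000, 10)
print(rate)   # 100.0
```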

- In white box testing, program logic is tested. In black box, only functionality is tested.

- Configuration Management involves procedures throughout the software life cycle (from requirements analysis to maintenance) to identify, define and baseline software items in the system, and thus provides a basis for problem management, change management and release management.

- Ideally, stress testing should be carried out in a test environment using live workloads.

- Data integrity testing examines the accuracy, completeness, consistency and authorization of data.
- Relational integrity testing detects modification to sensitive data by the use of control totals.
- Domain integrity testing verifies that data conforms to specifications.
- Referential integrity testing ensures that data exists in its parent or original file before it exists in the child or another file.

# Control total - Used to ensure that batch data is completely and accurately transferred between two systems.

- A control total is frequently used as an easily recalculated control. A check digit is a method of verifying the accuracy of a single data item, such as a credit card number. Although a checksum is an excellent control over batch completeness and accuracy, it is not easily recalculated and is therefore not as commonly used in financial systems as a control total. Checksums are frequently used in data transfer as part of encryption protocols.
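A control total over a batch can be sketched as follows (the `amount` field and figures are invented for illustration):

```python
def control_total(batch):
    # sum a numeric field over the whole batch, e.g. invoice amounts
    return sum(rec["amount"] for rec in batch)

sent = [{"amount": 100}, {"amount": 250}, {"amount": 75}]
sender_total = control_total(sent)      # transmitted alongside the batch

received = list(sent)                   # what the target system actually got
# the receiver recalculates the total; a match indicates the batch
# transferred completely and accurately
assert control_total(received) == sender_total
```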

# Application Controls:
Controls over input, processing and output functions. They include methods for ensuring that:
-Only complete, accurate and valid data are entered and updated in computer systems.
-Processing accomplishes the correct task
-Processing results meet the expectations
-Data are maintained

Check Digit - A numeric value that has been calculated mathematically is added to data to ensure that the original data have not been altered or that an incorrect, but valid, value has been substituted. This control is effective in detecting transposition and transcription errors.
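One widely used check-digit scheme for credit card numbers is the Luhn algorithm (named here as an illustration; the notes above do not specify a scheme). It catches transcription errors and most adjacent transposition errors:

```python
def luhn_valid(number: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:            # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9            # equivalent to summing the two digits
        total += d
    return total % 10 == 0        # valid when the total is a multiple of 10

print(luhn_valid("79927398713"))  # True: well-known Luhn test number
print(luhn_valid("97927398713"))  # False: adjacent transposition detected
```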

Completeness Check - A field should always contain data rather than zeros or blanks. A check of each byte of that field should be performed to determine that some form of data, not blanks or zeros, is present.

Duplicate Check - New transactions are matched to those previously input to ensure that they have not already been entered.

# Buffer overflow - Poorly written code, especially in web-based applications, is often exploited by hackers using this technique.
# Brute-force attack - Used to crack passwords.

# Sequence Check - The control number follows sequentially and any sequence or duplicated control numbers are rejected or noted on an exception report for follow-up purposes.
# Limit Check - Data should not exceed a predetermined amount.
# Range Check - Data should be within a predetermined range of values.
# Validity Check - Programmed checking of the data validity in accordance with predetermined criteria. For example, a payroll record contains a field for marital status and the acceptable status codes are M or S. If any other code is entered, the record should be rejected.
# Reasonableness Check - Input data are matched to predetermined reasonable limits or occurrence rates. For example, a manufacturer usually receives orders for no more than 50 items. If an order for more than 50 items is received, the computer program should be designed to print the record with a warning indicating that the order appears unreasonable.
# Table look-ups - Input data comply with predetermined criteria maintained in a computerized table of possible values. For example, the input clerk enters a city code of 1 to 10. This number corresponds with a computerized table that matches the code to a city name.
# Existence check - Data are entered correctly and agree with valid predetermined criteria. For example, a valid transaction code must be entered in the transaction code field.
# Key Verification - The keying process is repeated by a separate individual using a machine that compares the original keystrokes to the repeated keyed input. For example, the worker number is keyed twice and compared to verify the keying process.
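Several of the checks above can be sketched together as one input-validation routine (field names and limits are invented, reusing the examples in the notes):

```python
def validate_order(record: dict) -> list:
    """Apply several classic input validation checks to one record.
    Field names and limits are illustrative assumptions."""
    errors = []
    # completeness check: the field must contain data, not blanks
    if not record.get("customer_id"):
        errors.append("customer_id missing")
    # validity check: marital status must be a predefined code (M or S)
    if record.get("marital_status") not in ("M", "S"):
        errors.append("invalid marital status")
    # limit check: order quantity must not exceed a predetermined amount
    if record.get("quantity", 0) > 50:
        errors.append("quantity exceeds limit")
    # range check: city code must fall within the predetermined range 1..10
    if not 1 <= record.get("city_code", 0) <= 10:
        errors.append("city code out of range")
    return errors

clean = validate_order({"customer_id": "C1", "marital_status": "M",
                        "quantity": 10, "city_code": 3})   # no errors
```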

# Function Point Analysis - An indirect method of measuring the size of an application by considering the number and complexity of its inputs, outputs and files.

# Input Control procedure
- must ensure that every transaction to be processed is entered, processed and recorded accurately and correctly.

# Logic path monitors 
- Report on the sequence of steps executed by a program. This provides the programmer with clues to logic errors, if any, in the program.

# Run to run totals 
- Provide the ability to verify data values through the stages of application processing. They ensure that data read into the computer were accepted and then applied to the updating process.

# Automated system balancing
- Would be the best way to ensure that no transactions are lost as any imbalance between total inputs and total outputs would be reported for investigation and correction.

Wednesday, May 16, 2018

Domain 5 (Part 3)

# Biometrics
Biometric controls are more reliable than other forms of access control.

Lifecycle - Enrolment -> transmission and storage -> verification -> identification and termination

Three main accuracy measures used for a biometric solution are:
- False-Acceptance Rate (FAR) (i.e access given to unauthorised person)
- False-Rejection Rate (FRR), (i.e. access rejected to authorised person)
- Crossover Error Rate (CER) or Equal-Error Rate (EER) (i.e. the rate at which FAR is equal to FRR)

# FAR and FRR are inversely related: as a general rule, when one decreases, the other increases, and vice versa.
# Most important performance indicator for biometric system is false-acceptance rate (FAR).
# Most important overall quantitative performance indicator for biometric system is CER or EER.
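The FAR/FRR trade-off can be sketched numerically (the match scores below are invented): raising the decision threshold lowers FAR but can only raise FRR.

```python
# invented example match scores: higher = stronger claimed match
genuine  = [0.9, 0.8, 0.85, 0.7, 0.95]   # attempts by authorised users
impostor = [0.4, 0.3, 0.6, 0.2, 0.5]     # attempts by unauthorised persons

def far(threshold):
    # false-acceptance rate: fraction of impostor attempts accepted
    return sum(s >= threshold for s in impostor) / len(impostor)

def frr(threshold):
    # false-rejection rate: fraction of genuine attempts rejected
    return sum(s < threshold for s in genuine) / len(genuine)

print(far(0.45), frr(0.45))   # a loose threshold accepts some impostors
print(far(0.65), frr(0.65))   # a stricter threshold lowers FAR
```

The threshold at which `far` and `frr` cross gives the CER/EER; a lower EER indicates a more accurate biometric system.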

# ‘Retina Scan’, ‘Iris Scan’ has the highest reliability and lowest false-acceptance rate (FAR) among the current biometric methods.

# Biometric-Attacks
Replay - Residual Biometrics Characteristics
Brute-Force - Sending numerous request
Cryptographic - Attack on cryptography or encryption
Mimic - Faking the characteristics

# IDS & IPS
Three IDS -
(i) signature/rule based - Intrusion is identified on the basis of known types of attacks. Such known patterns are stored in the form of signatures. New attacks cannot be identified.
(ii) statistical - Any activity that falls outside the scope of normal behaviour is flagged as an intrusion.
(iii) neural network - Like statistical, with added self-learning functionality.

# Neural network creates its own database. More effective in detecting fraud.
# Statistical based IDS generates most false positives (false alarms).

Four components of IDS - (i) sensor (ii) analyzer (iii) admin console and (iv) user interface

# The sensor collects data and sends it to the analyzer for analysis.
# The most important concern in an IDS implementation is attacks that are not identified/detected by the IDS.

# Challenges of IDS
- An IDS will not be able to detect application-level vulnerabilities, back doors into applications, or attacks within encrypted traffic.

# Challenges of IPS
- Threshold limits that are too high or too low will reduce the effectiveness of an IPS.
- An IPS may itself become a threat when an attacker sends commands to a large number of hosts protected by the IPS to make them dysfunctional.


# OSI Architecture
- The data link layer works with MAC addresses, whereas the network layer works with IP addresses.
- The network layer inserts the IP address and handles routing, whereas the transport layer ensures proper delivery.

# Transport layer - Reliable delivery or connection oriented or congestion control or order of sequence.
# Session layer - establishing connection.
# Presentation layer - acceptable format.
# Application layer - end user.


1st Layer [Physical Layer]         
-Physical layer is concerned with electrical and physical specifications for devices.
-Provides hardware for data (bit) transmission.

2nd Layer  [Data Link Layer]         
-The bit stream received from the physical layer is converted into data packets for the network layer.
-Data packets received from the network layer are converted into a bit stream for the physical layer.
-uses MAC address.

3rd Layer  [Network Layer]           
-inserts address and provides routing service.
-uses IP address.
-Provides confidentiality, authentication, and data integrity services.

4th Layer  [Transport Layer]         
-ensures packet reaches its destination
-congestion control
-concerned with reliability of data transfer between two systems.
-ensure that data reaches its destination.
-make sure that packets on the receiving system are delivered in proper sequence
-uses connection-oriented protocols.
-implements a flow control mechanism that can detect congestion, reduce data transmission rates during congestion and increase transmission rates when the network appears to no longer be congested.
# Reliable delivery, Connection oriented, delivery in Proper order, Congestion control

5th Layer  [Session Layer]
- establishes, manages and terminates the connection between the application layers.
- Control connection
- Establish security for the user application

6th Layer  [Presentation Layer]     
-converts data into presentable form.
-provides services such as encryption, text compression and re-formatting
-formatting of data

7th Layer  [Application Layer]
-provides interface for the user.
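The transport layer's connection-oriented, ordered, reliable delivery can be sketched with a minimal localhost TCP echo (Python sockets; the port is chosen ephemerally):

```python
import socket
import threading

# TCP (transport layer) guarantees the bytes arrive intact and in order
server = socket.create_server(("127.0.0.1", 0))   # 0 = pick a free port
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # echo the received bytes back

t = threading.Thread(target=echo_once)
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")            # reliable, connection-oriented send
    reply = client.recv(1024)

t.join()
server.close()
```

The application never sees packets, routing or retransmission; those are handled by the layers below, which is the point of the layered model.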

# Wet (Water-Based) Sprinkling System [WBS] and Dry Pipe Sprinkling System [DPSS]:
In a WBS, water always remains in the system piping.
A WBS is more effective and reliable.
Disadvantage: exposes the facility to water damage if a pipe leaks or breaks.

A DPSS does not have water in the pipes until an electronic fire alarm activates the water pump to send water into the system.
Comparatively less effective and reliable.
Advantage: does not expose the facility to water damage even if a pipe leaks or breaks.

# Halon Gas System
- Halon gas removes oxygen from air thus starving the fire.
- They are not safe for human life.
- There should be audible alarm and brief delay before discharge to permit time for evacuation.
- Halon gas is banned as it adversely affects the ozone layer.
- Popular replacements are FM-200 & Argonite.

# FM-200 Gas
- FM-200 is colorless & odorless gas.
- FM-200 is safe to be used when people are present.
- FM-200 is environmentally friendly.
- It is commonly used as a gaseous fire suppression agent.

# What is Argonite Gas?
- Argonite is a mixture of 50% Argon & 50% Nitrogen.
- It is used as a gaseous fire suppression agent.
- Though it is environmentally friendly and non-toxic, people have suffocated after breathing argon by mistake.

# CO2
- CO2 systems release pressurised CO2 gas into the protected area to replace the oxygen required for combustion.
- Unlike Halon, FM-200 and Argonite, CO2 cannot sustain human life.
- In most countries, it is illegal for such systems to be set to automatic release if any human may be in the area.
- CO2 installations are permitted where no humans are regularly present such as unmanned data centres.

# As per the CRM, FM-200 and Argonite gases are safe for human life. However, note that although Argonite is environmentally friendly and non-toxic, people have suffocated after breathing argon by mistake.

# CO2 & Halon gases are not safe for human life.

# Single Signon (SSO)
Example - Kerberos - An authentication service used to validate services and users in a distributed computing environment (DCE).
-In a DCE, both users and servers authenticate themselves.
-In SSO, unauthorized access will have a major impact.
-Unauthorised access can best be controlled by Kerberos.

Tuesday, May 15, 2018

Domain 5 (Part 2)

# Logical Access Control

Four main categories of access control are:

Mandatory access control (MACs) - Cannot be controlled or modified by normal users or data owners
Discretionary access control (DACs) - Activated or modified by the data owners at their discretion
Role-based access control
Rule-based access control

- MACs are a better choice in terms of data security than DACs.

Steps for implementing logical access controls:
- Inventory of IS resources.
- Classification of IS resources.
- Grouping/labelling of IS resources.
- Creation of an access control list.

# First step in data classification is to identify the owner of the data/application.
# Automated password management tool works as best preventive control and ensures compliance with password management policy.
# Preference to be given to preventive controls as compared to detective or deterrent controls.
# Preference to be given to automated controls as compared to manual controls.
# The prime objective of a review of logical access controls is to ensure that access has been assigned as per the organisation’s authorization.


# Logical steps for data classification:
- Inventory of Information Assets.
- Establish ownership.
- Classification of IS resources.
- Labelling of IS resources.
- Creation of access control list.

# Data owner/system owner is ultimately responsible for defining the access rules.
# Accountability for the maintenance of proper security controls over information assets resides with the data owner/system owner.
# Greatest benefit of well defined data classification policy is decreased cost of control.

# Objective of data protection/ classification of information assets:
- Ensure integrity/confidentiality of data
- Establish appropriate access control guidelines.
- Reduction in cost of protecting assets.

# Data classification must take into account following requirements:
-Legal/Regulatory/Contractual
-Confidentiality
-Integrity
-Availability

# Asymmetric Encryption
- For confidentiality, message has to be encrypted using receiver’s public key.
- For authentication, HASH of the message has to be created and HASH to be encrypted using sender’s private key. Hash is also known as message digest.
- For integrity, HASH of the message has to be created and HASH to be encrypted using sender’s private key.

# To ensure ‘confidentiality & authentication’:
-Hash of the message to be encrypted using sender’s private key (to ensure authentication/non-repudiation)
-Message to be encrypted using receiver’s public key (to ensure confidentiality)

# To ensure ‘confidentiality & authentication & integrity’:
-Message to be encrypted using receiver’s public key (to ensure confidentiality)
-Hash of the message to be encrypted using sender’s private key (to ensure authentication/non-repudiation and integrity)

# Sender's private key will not ensure confidentiality

A public key infrastructure (PKI) - A set of hardware, software, people, policies, and procedures needed to create, manage, distribute, use, store, and revoke digital certificates.

Use of PKI (Public Key Infrastructure) :
         Step 1: Encrypt the message by symmetric key
         Step 2: Encrypt the above symmetric key using public key of receiver.
         Step 3: Send 'encrypted message' and 'encrypted symmetric key' to receiver.
         Step 4: Receiver will decrypt 'symmetric key' using private key of receiver.
         Step 5: With the help of above 'symmetric Key' receiver can decrypt the message.

# Encryption of symmetric session key is considered as an efficient use of PKI

# Symmetric key Cryptographic system - Most common is Data Encryption Standard (DES)
DES - A 56-bit key is used to encrypt/decrypt, and 8 bits are used for parity checking.
AES - DES is being replaced by AES, a public algorithm that supports keys from 128 bits to 256 bits in size.

# Elements of PKI
Certifying authority (CA) is solely responsible for issuance of digital certificate and managing the certificate throughout its life cycle.
Registration authority (RA) is responsible for identifying and authenticating subscribers, but does not sign or issue certificates.
Digital certificate - Composed of public key and information about the owner of public key.

# The time gap between updates of the CRL (certificate revocation list) is critical and also poses a risk in certificate verification.

# Process involved in PKI: 
- Applicant will apply for digital certificate from Certifying Authority (CA).
- Certifying Authority (CA) delegates the process for verification of information (as supplied by applicant) to Registration Authority (RA).
- Registration Authority (RA) validates the information and if information is correct, tells Certifying Authority (CA) to issue the certificate.
- Certifying Authority issues the certificate and manages the same through its life cycle.
- Certifying Authority (CA) maintains a list of certificates which have been revoked/terminated before its expiry date. This list is known as certificate revocation list (CRL).
- Certifying Authority (CA) will also have Certification Practice Statement (CPS) in which standard operating procedure (SOP) for issuance of certificate and other relevant details are documented.

Thursday, May 10, 2018

Domain 5 (Part -1)

Digital Signature ensures:
- Integrity (i.e. the message has not been tampered with)
- Authentication (i.e. the message has actually been sent by the sender)
- Non-repudiation (i.e. the sender cannot later deny sending the message)

- Digital signature does not provide confidentiality of the message.
- Digital signature encrypts the hash of the message (and not the message). Hence digital signature does not provide confidentiality or privacy.
- For encryption of the hash of the message, private key of the sender is to be used.
- Non-repudiation provides the strongest evidence that a specific transaction/action has occurred; no one can deny the transaction/action.
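The role of the hash (message digest) in integrity checking can be sketched with stdlib hashing; note that a real digital signature would also encrypt this digest with the sender's private key, which is omitted here:

```python
import hashlib

message = b"Transfer 100 to account 42"        # invented example message
digest = hashlib.sha256(message).hexdigest()   # the message digest (hash)

# the receiver recomputes the hash over what arrived;
# any tampering with the message changes the digest
intact = hashlib.sha256(b"Transfer 100 to account 42").hexdigest() == digest
tampered = hashlib.sha256(b"Transfer 900 to account 42").hexdigest() == digest
```

Because the hash alone can be recomputed by anyone, only the private-key encryption of the digest adds authentication and non-repudiation.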

# Best practices for Wireless (Wi-Fi) security:
- Enable MAC address filtering.
- Enable Encryption to protect data in transit.

- Disable SSID (service set identifier) broadcasting.
- Disable DHCP (Dynamic Host Configuration Protocol).

# ‘War Driving’
- Used by hackers to gain unauthorised access to wireless infrastructure. A wireless-equipped computer is used to locate and gain access to wireless networks. Also used by auditors to test wireless security.

# WPA-2 (Wi-Fi Protected Access) is the strongest encryption standard for the wireless connection.

# Confidentiality of the data transmitted in a wireless LAN is BEST protected, if the session is encrypted using dynamic keys (as compared to static keys)

Encryption Technique: The techniques will protect data in transit and not on device.
WEP - Wired Equivalent Privacy [Weak]
WPA - Wi-Fi Protected Access [Medium]
WPA-2 - Wi-Fi Protected Access 2 [Strong]

SSID - The technical term for a network name; broadcasting it makes the network visible to all. When setting up a wireless home network, we give it a name to distinguish it from other networks in the neighbourhood.

War walking - Similar to war driving, but done on foot to locate wireless networks.
War chalking - Drawing symbols in public places to advertise discovered Wi-Fi networks.


# Types of Firewall

Application Level [7 - Application Layer] - Provides the greatest security. Works on the concept of a bastion host and proxy server, with a separate proxy for each service. Controls applications such as FTP, HTTP, etc.
Circuit Level [5 - Session Layer] - Works on the concept of a bastion host and proxy server, with the same proxy for all services.
Stateful Inspection [3 - Network Layer] - Allows traffic from outside only if it is in response to traffic from internal hosts.
Packet Filtering [3 - Network Layer] - Allows or denies packets based on the IP address and port number of the source and destination.
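A packet-filtering decision can be sketched as a rule table keyed on source IP and destination port (the rules themselves are invented). The final default-deny line reflects the 'deny all, allow specific' principle:

```python
# hypothetical rule set: (source address prefix, destination port, action)
RULES = [
    ("10.0.0.", 22, "deny"),    # block SSH from the 10.0.0.x range
    ("", 80, "allow"),          # allow HTTP from any source ("" matches all)
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    # first matching rule wins
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and dst_port == port:
            return action
    return "deny"               # default-deny when no rule matches

decision = filter_packet("192.168.1.9", 80)   # allowed by the HTTP rule
```

Real packet filters match on proper CIDR networks rather than string prefixes; this sketch only shows the rule-evaluation logic.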

# Types of Firewall Implementation

Screened Host:
- One Packet Filtering Router
- One Bastion Host

Dual Homed:
- One Packet Filtering Router.
- One Bastion host with two NIC (Network Interface Card).
- More restrictive form of screened host.

Screened Subnet [DMZ]:
- Two Packet Filtering Router
- One Bastion Host

# Screened Subnet Firewall (DMZ) provides greatest security environment.

# Bastion host
Both application-level and circuit-level firewalls work on the concept of a bastion host. On the Internet, a bastion host is the only host computer that a company allows to be addressed directly from the public network, and it is designed to protect the rest of the network from exposure. Bastion hosts are heavily fortified against attack.

Common characteristics of a bastion host are as follows:
-Its operating system is hardened, in the sense that only essential services are installed on it.
-The system should have all unnecessary services disabled, unneeded ports closed, unused applications removed and unnecessary administrative tools removed; i.e., vulnerabilities are reduced to the extent possible.
-It is configured to require additional authentication before a user is granted access to proxy services.
-It is configured to access only specific hosts.

# Proxy
A proxy is a middleman. A proxy stands between the internal and external networks and will not allow direct communication between the two. Proxy technology can work at different layers of the OSI model. A proxy-based firewall that works at a lower layer (the session layer) is referred to as a circuit-level proxy. A proxy-based firewall that works at a higher layer (the application layer) is called an application-level proxy.

- Most robust configuration in firewall rule is ‘deny all traffic and allow specific traffic’ (as against ‘allow all traffic and deny specific traffic’).