
Saturday, June 30, 2018

Glossary

Important CISA Glossary Items:

A:

B:


Tuesday, June 26, 2018

Domain 4 (Part 2)

- IT support services (helpdesk; configuration, change, release, problem and incident management)
- IT delivery services (SLAs; IT financial, capacity, service continuity and availability management).

> Problem management vs incident management - The purpose of problem management is to reduce the number of incidents, while incident management is all about restoring the affected system to its normal operational state as quickly as possible.

# Media sanitization 
- process of eradicating data to the point where it is impossible to restore.
- an organization must select the correct sanitization techniques for the types of media it uses for storing data.

Enterprise backend hardware: print server, file server, application server, web server, proxy server, database server, smartphones, PDAs, firewall, IDS (intrusion detection system), IPS (intrusion prevention system), switches, routers, LPN, storage devices, memory and flash cards.

IS Risks: viruses, spyware and other malicious programs, data theft, data loss, data corruption, loss of storage devices.
IS security controls: data encryption, granular control, IS security training, enforcing a desktop lock policy, an antivirus policy, use of approved and secure devices, and including return and privacy information in the data file.

# RFID (Radio frequency identification):
- Uses radio frequency to identify objects that are tagged. A tag consists of a chip and an antenna. The chip stores the ID of the object, and the antenna receives the signal.
- An active tag draws power from a battery and can transmit its ID over a longer distance.
- A passive tag gets its energy from the power radiated by the RFID reader.
- Application areas of RFID: asset management, hardware tracking, authenticity verification, matching, process control, access control, supply chain management.

> Risk of RFID
1. A direct attack on an RFID system can disrupt business processes.
2. A hacker can gain access to RFID information stored in your system.
3. RFID technology can appear as a threat to a non-RFID network.


Monday, June 25, 2018

Domain 3 (Part 2)

# CMM levels help to improve software life cycle processes.
Level 5: optimizing (continuous improvement).
Level 4: managed (quantitative quality).
Level 3: defined (documented process).
Level 2: repeatable (disciplined management process).
Level 1: initial (ad hoc, individual effort).

# CMMI (capability maturity model integration)
The purpose of CMMI is to integrate various software maturity models, including CMM, into a single model. Like CMM, CMMI also has five maturity levels, but the descriptions of the levels differ from those of CMM. CMMI levels:

Level 5 - optimizing (focus on continuous process improvement)
Level 4 - quantitatively managed (process is measured and controlled)
Level 3 - defined (process is characterized and proactive)
Level 2 - managed (process is characterized and often reactive)
Level 1 - initial (poorly controlled process, which can be unpredictable and reactive)

Business case - Provides the information needed to decide whether or not to start a project. It is developed from the results of the feasibility study, which is done during the project-planning phase.

Software size estimation methods:
1) SLOC (source lines of code)
2) Function point analysis - It considers the following parameters (see the sketch after this list):
-> Number of user inputs -> Number of outputs -> Number of user inquiries -> Number of files -> Number of external interfaces
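
To make the parameters concrete, here is a minimal Python sketch of an unadjusted function point count. The weights are the commonly quoted IFPUG average-complexity values and the counts are invented, so treat both as assumptions rather than part of these notes.

WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def unadjusted_fp(counts):
    # weighted sum of the five FPA parameters listed above
    return sum(counts[k] * WEIGHTS[k] for k in WEIGHTS)

counts = {"inputs": 20, "outputs": 15, "inquiries": 10, "files": 6, "interfaces": 4}
print(unadjusted_fp(counts))     # 20*4 + 15*5 + 10*4 + 6*10 + 4*7 = 283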

# Time box management
- This project management technique is used to deliver a software project within a short, fixed time frame using fixed resources.
- It can be used with rapid application development type projects.
- Advantage: prevents project cost overruns and delays.

- Project controlling activities: managing project scope, resources and risks.

# Project risks
1) Risks that impact the business benefit: project sponsors are responsible for mitigating these risks.
2) Project risks: project manager is responsible for project risks.

Project risk management process consists of five steps:
1) Inventory risks
2) Assess risks
3) Mitigate risks
4) Discovery risks
5) Review and evaluate

- Errors caused by unauthorized access are the main problem with online programming methods.

Categories of program debug tools
1. Logic path monitors: identify errors in program logic
2. Memory dumps: identify inconsistencies in data or parameters
3. Output analyzers: check the accuracy of the results after execution

- The certification and accreditation process starts after successful completion of final acceptance test.

# Certification process - Assesses standard controls (operational, management, technical) in an information system. It examines the level of compliance with policies, standards, guidelines, processes and procedures. The goal of the certification process is to determine whether the controls are operating correctly, producing the expected outcome and meeting the security requirements. The outcome of the certification process helps to reassess and update the system security plan.

# Accreditation - Senior management’s decision that authorizes IS operation and accepts the risks (risks to IT assets, operations and individuals).
It is considered as a form of quality control, which challenges IS managers and staff to implement highly effective security controls in the organization’s IT systems.

# Changeover (cutover or go-live technique)
This is an approach to migrate the existing users of an old system to a newly developed system. It is also known as cutover since it cuts the users over from the old system and moves them to the new system.

- Parallel changeover: The old system is kept running while the new system is brought up, so both the new and the old system run at the same time. In this approach the users use both systems, which helps to identify any problems the users face while using the new system. When users gain confidence in the new system, the full changeover to the new system takes place.

- Phased changeover: This approach breaks the old system down into several deliverable modules. The first deliverable module of the old system is replaced with the first deliverable module of the new system; similarly, all the other new modules replace the old modules, and thus the changeover to the new system takes place.
Risk: IT resource challenges, extended project life cycle, running change management for the old system.

- Abrupt changeover: On a specific date and time, the old system is cut over to the new system and use of the old system is discontinued.
Risk: assets safeguard, data integrity, system effectiveness and efficiency.

- The main objective of a post implementation review is to assess and measure how much value the project has delivered to the business.

# EDI (electronic data interchange)
- Usually, EDI is used to transmit invoices, shipping orders and purchase orders.
- An EDI system requires the following components:
1. Communication software
2. Transaction software
3. Access to standards

When reviewing an EDI system, an auditor should consider:
1. Proprietary versions of EDI. Most large organizations have their own proprietary EDI.
2. Publicly available commercial EDI (this approach is less costly but has more security risks).

# Traditional EDI
1. Communication handler: A process that handles data transmission over dial-up lines or other public networks.
2. EDI interface: Manages and controls the data path between the communication handler and the application. The two components of the EDI interface are the EDI translator (converts data from the EDI format to the proprietary format) and the application interface (used for data movement and data mapping).
3. Application system: A program that processes data before sending to and after receiving from the trading partners.
Web-based EDI is used for generic network access.

# EDI risks
1. Transaction authorization (the main risk in an EDI system)
2. Loss of business continuity
3. Deletion or manipulation of transactions
4. Duplicate EDI transmission and data loss
5. Loss of transaction confidentiality

The IS auditor can verify the evaluation objective of EDI by reviewing the following:
1. Encryption in place
2. The existence of checks for data editing
3. Validity and reasonability check for each transaction
4. Logging of each inbound transaction.
5. Verifying the number and value of transaction with control totals
6. Using segment count totals
7. Using transaction set count totals
8. Using batch control totals
9. Sender’s validity against other trading partners

Some other EDI audit options are:
1. Audit monitor: installed on the EDI computer to capture transactions so that an auditor can review them.
2. Expert systems: an audit monitor that can determine the significance of a transaction based on audit rules and prepare a report for the auditors.

# DSS (decision support system)
- DSS mainly focuses more on effectiveness and less on efficiency.
- Prototype is the preferred DSS development and design approach.
- The true evaluation of a DSS is whether it can improve management’s decision-making process.

# Data oriented system development:
A software development method where data and data structures are used to represent software requirements. Elimination of data transformation error is the major advantage of this method.

# Object oriented system development:
A programming technique, not a software development methodology, in which data and procedures are treated together as an entity (object). Its advantages include managing unrestricted types of data, modeling complex relationships and adapting to a changing environment.

# Component based development:
An extension of object-oriented system development. In this technique, various components are assembled to deliver their services through defined interfaces. The purpose of the interfaces is to let application programs communicate with each other regardless of their source languages and operating platforms.
Advantages: shorter development time; programmers can focus more on the business functionality of the application; promotes modularity; ability to combine code across languages and to reuse code; lower development cost; allows an organization to buy only the components it needs rather than a complete solution containing features that are not required.

# Application controls
1. Data input
2. Data processing
3. Output function


# Input controls 

- Input control is assured by the followings:
A. Input authorization
B. Batch control and balancing

A. Input authorization:
- It ensures that all the data input are authorized and approved by the responsible department or management. Input authorization types:
1. Signature on batch forms or source documents
2. Online access controls
3. Unique passwords
4. Terminal or client work station identification
5. Source document

B. Batch control and balancing
- Batch balancing is all about making sure that each transaction creates files or documents, which are added to the batch, processed and accepted by the system.

Input control techniques
1. Transaction log
2. Reconciliation of data-whether all data received are properly recorded and processed.
3. Documentation
4. Error correction procedures
5. Anticipation
6. Transmittal log
7. Cancellation of source document


# Data processing controls and procedures 

The processing controls consist of:
A. Data validation and editing procedures
B. Processing controls
C. Data file control procedures

A. Data validation and editing procedures:
- Input data need to be validated and edited as soon as they are generated. Data validation means finding out data errors, incomplete/missing data and inconsistency in data.
- Edit controls are used before data are processed in order to prevent inaccurate data processing.

-Sequence check
-Limit check
-Range check
-Validity check
-Reasonableness check
-Check digit - A numeric value is added to the original data to make sure that the data have not been altered. It is used to detect transcription and transposition errors (see the sketch after this list).
-Completeness check
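
To illustrate the check digit edit control above, here is a minimal Python sketch using the Luhn algorithm, one common check-digit scheme (used for credit card numbers); the choice of Luhn is an assumption for illustration only.

def luhn_check_digit(payload):
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:          # every second digit from the right is doubled
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def is_valid(number_with_check_digit):
    return luhn_check_digit(number_with_check_digit[:-1]) == int(number_with_check_digit[-1])

print(is_valid("79927398713"))   # True: 3 is the correct check digit for 7992739871
print(is_valid("79927938713"))   # False: transposing the adjacent digits 3 and 9 is detected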


B. Data processing controls:
- It ensures that data are complete and accurate. Data processing control techniques include manual recalculation, editing, run-to-run totals, programmed controls, limit checks on amounts, reasonableness verification of calculated amounts, reconciliation of file totals and exception reports.

C. Data file control procedures:
- These procedures help to make sure that only authorized processing is performed on the data. The content of data tables or files can be divided into the following categories:
1. System control parameters: any changes in these data can change the way system functions.
2. Standing data: they are not frequently changed. Example: suppliers' names, addresses etc.
3. Master data/balance data: these are current balances and total balances, which are frequently updated by new transactions. Audit trails must be present to prevent improper changes to these data.
4. Transaction logs: these logs are controlled by validation checks, exception report, control totals etc.

Important controls for data files: 

Before and after image reporting - The data file before and after the processing need to be recorded to analyze the processing impact on the database.
Error reporting and handling - Those who input the data should not review and authorize the error correction.
Source file retention 
Version usage - It is necessary for processing the correct version of the file because older version may not need to run all the procedures.
Data file security - Use to prevent unauthorized access to the data
One-for-one checking - It is used to make sure that all the documents are being processed.
Transaction logs - The activities that need to be recorded are input time, username, input terminal/computer details, etc. These activity data help to generate audit trails and can be used to find errors or warnings and to restore the system if any technical problem occurs.
Parity check - Used to detect transmission errors in data. When a parity check is applied to a single character, it is called a vertical or column check; when a parity check is applied to all the data, it is called a horizontal or row check. Using both types of parity check simultaneously greatly increases the error detection capability, which is not possible when only one type of parity check is used.

# Output controls 
- Output control ensures consistent and secure delivery of data. The data also need to be presented to the users in proper format. The output controls are:
1. Logging and storage of forms in secure place
2. Computer generated forms and signatures: all the computer-generated forms should be compared with the physical copy of the forms. One should be accountable for any issues, exceptions or unwanted modification of the forms.
3. Distribution of report: the report should be distributed to the person authorized to receive it.
4. Balancing & reconciling: procedures to find errors in the output report should be established, and the report should be delivered to the concerned department for review and correction.
5. Output report retention: there should be a report retention schedule, and the retention policy should follow legal regulations, if there are any.
6. Report receipt: the recipient of the reports should sign in the record or logbook. It will make sure that the sensitive reports are being distributed properly.

# Tasks of IS Auditor in application controls 
1. Identifying the important applications and their components, understanding the flow of information among the applications or systems, and gaining knowledge about the application by reading available documents and interviewing IS personnel.
2. Identifying the strengths and weaknesses of the application controls, and evaluating the impact of the identified weaknesses.
3. Understanding the functionality of the applications by reviewing the system documentation.

# Data integrity tests 
Data integrity tests consist of a number of substantive tests. It aims to test the data accuracy, consistency and authorization.
1. Relational integrity: performed at the data element or record level. Relational integrity can be maintained by building data validation routines into the applications, or by defining input constraints and data characteristics in the database tables.
2. Referential integrity: tests the relationships between entities in the tables of a database. Referential integrity helps to maintain the integrity of interrelationships in the relational database model (RDBMS). A relational database establishes relationships among tables using references between primary and foreign keys; referential integrity tests make sure that all these references exist in the original (parent) table.
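
A minimal sketch of a referential integrity check, using Python's built-in sqlite3 module; the table and column names are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")           # SQLite enforces foreign keys only when enabled
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
                    id INTEGER PRIMARY KEY,
                    customer_id INTEGER NOT NULL REFERENCES customer(id),
                    amount REAL)""")

conn.execute("INSERT INTO customer VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders VALUES (100, 1, 250.0)")      # OK: the parent row exists

try:
    conn.execute("INSERT INTO orders VALUES (101, 99, 10.0)")  # no customer with id 99
except sqlite3.IntegrityError as e:
    print("referential integrity violation:", e)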

# Data integrity in online transaction 
- The integrity of online data is maintained by four principles (ACID).
Atomicity: a transaction is either completed in full or not at all. If a transaction cannot be completed because of a problem, the database must go back to its state before the transaction, which ensures atomicity.
Consistency: after each transaction, the database should go from its previous consistent state to another consistent state.
Isolation: every transaction should be isolated and it should have access to the database in a consistent state.
Durability: when a transaction is considered complete, then the database should retain the data even after any hardware or software failure.
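
A minimal sketch of atomicity in an online transaction, again using Python's sqlite3; the transfer scenario, table and amounts are assumptions.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 500.0), (2, 100.0)])
conn.commit()

def transfer(amount, fail=False):
    with conn:          # one transaction: committed on success, rolled back on any exception
        conn.execute("UPDATE account SET balance = balance - ? WHERE id = 1", (amount,))
        if fail:
            raise RuntimeError("simulated failure before the credit leg is applied")
        conn.execute("UPDATE account SET balance = balance + ? WHERE id = 2", (amount,))

try:
    transfer(200, fail=True)
except RuntimeError:
    pass

print(conn.execute("SELECT id, balance FROM account ORDER BY id").fetchall())
# [(1, 500.0), (2, 100.0)]  -> the partial debit was backed out, so atomicity holds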

> The main advantages of component-based development are the compatibility of the development system with multiple platforms and environments.
> Inadequate software baseline can result in project scope creep.
> An IS auditor reviewing agile software development can expect to find post-iteration reviews that document the lessons learned.
> Checksum in data is used for integrity testing.
> The transaction journal is responsible for recording transaction activity. Therefore, comparing the transaction journal with the authorized data source will reveal whether there was any unauthorized input from a terminal (a specific computer).
> A console log printout does not record transaction activity from a terminal.
> An automated suspense file only shows the transactions that need action.
> No modification is allowed once data are in the warehouse.
> A warehouse is just a copy of the original transaction data and it is used for query and analysis.
> Metadata works as a table of contents for a warehouse. That is why metadata is considered the most important design element of a data warehouse.
> RAD is a management technique.

Sunday, June 24, 2018

Domain 4 (Part 1)

RTO & RPO

- RTO of 2 hours indicates that the organization needs to ensure that its system downtime does not exceed 2 hours.
- RPO of 2 hours indicates that the organization needs to ensure that its data loss does not exceed 2 hours of captured data.
- In any given scenario, for critical systems, RTO is zero or near zero. Similarly, for critical data, RPO is zero or near zero.
- The lower the RTO/RPO, the higher the cost of maintaining the environment.
- A low RTO/RPO indicates that disaster tolerance is low; conversely, if disaster tolerance is low, the RTO/RPO should be low.
- when RTO is low, mirrored site or hot site is recommended.
- when RPO is low, mirror imaging or real time replication for data back-up is recommended.
- where RPO is zero, synchronous data backup strategy to be used.
- Both RTO & RPO are based on time parameters. The lower the time requirements, the higher the cost of recovery strategies.
- RTO (Acceptable System Downtime)
- RPO (Acceptable Data Loss)

Alternate Recovery Site
- Mirrored site is fastest mode of recovery and then hot site.
- Cold site is slowest mode of recovery.
- For critical system, mirrored/hot sites are appropriate option.
- For non-critical system, cold site is appropriate option.
- Reciprocal agreement will have lowest expenditure in terms of recovery arrangement.

# Mirrored Site
Already Available - Space & Basic Infra, All IT equipment, Updated database
Required - hardly anything

# Hot Site
Already Available - Space & Basic Infra, All IT equipment
Required - Updated database

# Warm Site
Already Available - Space & Basic Infra, Some IT equipment
Required - Needed IT equipment, Updated database

# Cold Site
Already Available - Space & Basic Infra
Required - Needed equipment, Updated database

# Mobile Site
- Mobile sites are processing facilities mounted on a transportable vehicle and kept ready to be delivered.
- A mobile site is a vehicle ready with all necessary computer equipment, and it can be moved to any cold or warm site depending upon the need. The need for a mobile site depends upon the scale of operations.

# Reciprocal Agreement
- Reciprocal agreements are the agreements wherein two organizations (mostly having similar processing) agree to help each other in case of emergency. Reciprocal agreements are the least expensive because they usually rely on agreement between two firms. However, they are the least reliable.


Physical Network Media:

- Fiber-optic cables have proven to be more secure than the other media. They have very low transmission loss, are not affected by EMI and are the preferred choice for high volumes and long distances.

# Attenuation:
- Weakening of signals during transmission.
- Exists in both wired and wireless transmissions.
- Length of wire impacts the severity of attenuation.

# Electromagnetic Interference (EMI):
- EMI is a disturbance generated by an external source that affects an electrical circuit.
- The disturbance may degrade the performance of the circuit or even stop it from functioning. In the case of a data path, these effects can range from an increase in error rate to a total loss of the data.
- EMI is caused by electrical storms or noisy electrical equipment (e.g., motors, fluorescent lighting, radio transmitters).

# Cross-Talks:
- Crosstalk is electromagnetic interference from one unshielded twisted pair to another twisted pair, normally running in parallel.
- Only occurs in wired communication

# Sags, Spikes, and surges:
- Only occurs in wired communication

# Multipath Interference:
- Only occurs in wireless communication

- Using separate conduits for data cables and electrical cables, minimizes the risk of data corruption due to an induced magnetic field created by electrical current.

# Alternate Routing:
- Method of routing information via an alternative medium, such as copper cable or fiber optics.

Last mile circuit protection:
Last mile circuit protection provides redundancy for local communication loop.

Long haul network diversity:
Long haul network diversity provides redundancy for long distance availability.

# Diverse Routing:
- Diverse routing is the method of routing traffic through split-cable facilities or duplicate-cable facilities.

- In alternate routing, an alternative type of medium is used, such as fiber optics or copper cable.
- In diverse routing, the same type of cable is used, either duplicated or split.


Gateway - Application Layer (7th Layer)

Switch stores MAC address in a look up table

# Bridges:
Bridges act as store-and-forward devices in moving frames toward their destination. This is achieved by analyzing the MAC header of a data packet; by examining the MAC address, the bridge can decide how to direct the frame to its destination.

# Backup schemes:

Full Backup - A backup of the full database is taken every time, irrespective of earlier backups.
Incremental Backup - A backup is taken only of data changed since the last backup (the last backup can be either a full or an incremental backup).
Differential Backup - A backup is taken only of data changed since the last full backup (the reference point is always the last full backup).

# Differential Backup
Full backup taken on Monday
- On Tuesday backup taken for changes made after Monday’s backup
- On Wednesday backup taken for changes made after Monday’s backup (ignoring Tuesday’s backup)
- On Thursday backup taken for changes made after Monday’s backup (ignoring Tuesday’s  & Wednesday’s backup)
- On Friday backup taken for changes made after Monday’s backup (ignoring Tuesday’s , Wednesday’s  & Thursday’s backup)

# Incremental Backup
Full backup taken on Monday
- On Tuesday backup taken for changes made after Monday’s backup
- On Wednesday backup taken for changes made after Tuesday’s backup
- On Thursday backup taken for changes made after Wednesday’s Backup
- On Friday backup taken for changes made after Thursday’s Backup

# Storage Capacity for each backup Scheme:
-Full Backup- Requires more time and storage capacity as compared to other two schemes.
-Differential- Requires less time and storage capacity as compared to full backup but more time and storage capacity as compared to Incremental.
-Incremental- Requires less time and storage capacity as compared to other two schemes.

# Restoration Capability for each backup Scheme:
-Full Backup- Fastest of all three schemes.
-Differential- Slower than Full backup but faster than incremental.
-Incremental-Slowest of all three schemes.
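
A minimal Python sketch, not tied to any particular backup tool, showing which backup sets must be restored after a failure on a given day under each scheme; it mirrors the Monday-to-Friday example above.

WEEK = ["Mon", "Tue", "Wed", "Thu", "Fri"]     # full backup on Monday, a daily backup every day after

def restore_chain(scheme, failure_day):
    days = WEEK[: WEEK.index(failure_day) + 1]
    if scheme == "full" or len(days) == 1:
        return [days[-1]]                      # only the most recent full backup is needed
    if scheme == "differential":
        return [days[0], days[-1]]             # Monday's full backup + the latest differential
    if scheme == "incremental":
        return days                            # Monday's full backup + every incremental since
    raise ValueError(scheme)

print(restore_chain("differential", "Thu"))    # ['Mon', 'Thu']
print(restore_chain("incremental", "Thu"))     # ['Mon', 'Tue', 'Wed', 'Thu'] -> slowest restoration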


Wednesday, June 20, 2018

Domain 3 (Part 1)

Online Auditing Techniques:

- SCARF (System Control Audit Review File) - An embedded (built-in) audit module is used to continuously monitor transactions. It records only those transactions that are of special audit significance, such as transactions above a specified limit or transactions related to deviations/exceptions. Useful when regular processing cannot be interrupted (see the sketch after this section).

- Snapshots - snaps (pictures) are taken of the transactions as transaction moves through various stages in the application system. Both before-processing and after-processing images of the transactions are captured. Auditors can verify the correctness of the processing by checking before-processing and after-processing images of the transactions. Useful when audit trail is required.

- ITF (Integrated Test Facility) - Fictitious entities/transactions are created in the live production environment. Processed results and expected results are compared to verify that systems are operating correctly. Test data need to be isolated from actual production data.

- CIS (Continuous and Intermittent Simulation) - This technique can be used whenever the application system uses the database management system (DBMS). DBMS reads the transaction which is passed to CIS. If transaction is as per selected criteria, then CIS examines the transaction for correctness. CIS determines whether any discrepancies exist between the result it produces and those the application system produces. Thus, CIS replicates or simulates the application system processing. Best technique when transactions meeting certain criteria needs to be examined.

- Audit Hook - These are audit software that captures suspicious transactions. Criteria for suspicious transactions are designed by auditors as per their requirement. Helps the IS auditor to act before an error or an irregularity gets out of hand.

# Audit trail (Snapshot)
# Fictitious entity in LIVE production (ITF)
# Early detection (Audit Hook)
# Simulates the application system processing (CIS)
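
A minimal Python sketch of an embedded audit module in the SCARF/audit-hook style: transactions meeting an assumed audit-significance criterion (amount above a limit, or an override flag) are captured to a review file while normal processing continues.

AUDIT_LIMIT = 10_000
scarf_file = []                                   # stands in for the system control audit review file

def audit_filter(txn):
    # called by the application for every transaction; normal processing is never interrupted
    if txn["amount"] > AUDIT_LIMIT or txn.get("override"):
        scarf_file.append(txn)                    # captured for later review by the IS auditor

for txn in [{"id": 1, "amount": 500},
            {"id": 2, "amount": 25_000},
            {"id": 3, "amount": 800, "override": True}]:
    audit_filter(txn)

print(scarf_file)                                 # transactions 2 and 3 are retained for audit review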


Testing in SDLC

# Unit Testing
- Testing of an individual program or module, done during the development stage
- White box approach (i.e. testing of internal program logic) is applied

# Integrated/Interface Testing 
- Dictionary meaning of integrate is 'to connect'
- Testing of connection of two or more module or components that pass information from one area to another

# Parallel testing 
- Process of comparing results of the old and new system.
- To ensure that the implementation of a new system meets user requirements.

# Pilot Testing 
- Takes place first at one location to review the performance. The purpose is to see if the new system operates satisfactorily in one place before implementing it at other locations.

# Regression Testing
- Meaning = 'act of going back' or to 'return'
- Ensures that changes or corrections in a program have not introduced any new errors.
- Data used for regression testing should be same as the data used in previous tests.

# Sociability Testing
- Meaning = 'ability to have companionship with others'
- To ensure that new or modified system can work in the specified environment without adversely impacting existing system.

# System Testing
- Full-fledged test that includes stress/load/security/recovery and other tests
Security Testing - Testing of appropriate access control and other security measures.
Recovery Testing - Checking system's ability to recover after a hardware or software failure.
Load Testing - Testing of performance of system during peak hours (processing large quantity of data)
Volume Testing - Testing to determine maximum volume of records (data) the application can handle.
Stress Testing - Testing to determine maximum number of concurrent users/services the application can process.
Performance Testing - Comparing the performance of the system to other equivalent system using well defined benchmarks.

# Bottom-Up Approach
- Testing starts with individual units such as individual programs or modules and works upward until a complete system is tested.
- Testing can be started even before all programs are complete.
- Errors in critical modules can be found early.

# Top-Down Approach
- Testing starts from the broader system level and then gradually moves towards individual programs and modules.
- Interface errors can be detected earlier.
- Confidence in the system is achieved earlier.
- Appropriate for prototype development.

# Alpha Testing 
- Testing done by internal user
- Done prior to beta testing
- May not involve testing of full functionality

# Beta Testing 
- Testing done by external user
- Done after alpha testing
- Generally, involves testing of full functionality


Check Digit/Parity Bits/Checksum/Cyclic Redundancy Checksums (CRC)/Redundancy Checksums/Forward Error Control/Atomicity

Check Digit:
- Mathematically calculated value that is added to data to ensure that the original data have not been altered.
- Helps in detecting transposition and transcription errors.
- Ensure data accuracy

Parity Bits:
- Requires adding an extra bit on the data. This extra bit is called a parity bit. This bit simply says whether the number of 1 bits is odd or even. Generally the parity bit is 1 if the number of 1 bits is odd and 0 if the sum of the 1 bits is even.
- This parity is verified by receiving computer to ensure data completeness and data integrity during transmission.
- Parity bits are used to check for completeness of data transmissions. A parity check is a hardware control that detects data errors when data are read from one computer to another, from memory or during transmission.

Checksum:
- Checksums work on the same principle as parity but are also able to identify more complex errors, by increasing the complexity of the arithmetic.

Cyclic Redundancy Checksums (CRC)/Redundancy Checksums:
- A more advanced version of checksums, obtained by further increasing the complexity of the arithmetic.

Forward Error Control:
- Works on same principle as CRC. However FEC also corrects the error. FEC provides the receiver with the ability to correct errors.
- To detect & correct transmission error.

Atomicity:
Transactions must be all-or-nothing. That is, a transaction must either fully happen or not happen at all. The principle of atomicity requires that a transaction be completed in its entirety or not at all. If an error or interruption occurs, all changes made up to that point are backed out.

Parity bits or checksum (higher version of parity bit) or CRC (higher version of checksum):
- To identify transmission error
- To ensure completeness
- To ensure integrity
- First preference is given to CRC. If CRC is not available as an option, then preference is given to checksum. If neither CRC nor checksum is an option, then preference is given to parity bits.
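
A minimal Python sketch contrasting a single parity bit with a CRC: a two-bit corruption leaves the parity unchanged but is caught by the CRC (the message content is invented).

import zlib

def parity_bit(data):
    # 1 if the count of 1-bits is odd, 0 if even (matching the convention above)
    return sum(bin(b).count("1") for b in data) % 2

original = b"PAY 100.00 TO VENDOR 42"
sender_parity, sender_crc = parity_bit(original), zlib.crc32(original)

# Corrupt two bits in transit: the overall 1-bit count changes by an even number,
# so the single parity bit still matches, but the CRC does not.
corrupted = bytearray(original)
corrupted[0] ^= 0x01
corrupted[5] ^= 0x01
corrupted = bytes(corrupted)

print(parity_bit(corrupted) == sender_parity)    # True  -> parity misses this error
print(zlib.crc32(corrupted) == sender_crc)       # False -> CRC detects it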


PERT-CPM-Gantt Chart-FPA-Timebox:

PERT or CPM: To estimate project duration or timelines. First preference to be given to PERT.

Gantt Chart: To monitor the project or track any milestone

FPA or SLOC: To estimate software size. First preference to be given to FPA. SLOC = Source line of code. SLOC is direct method while FPA is indirect method. FPA is arrived on the basis of number and complexity of inputs, outputs, files, interfaces and queries. FPA is more reliable than SLOC.
When objective is to identify software size estimation-first preference to be given to FPA

Timebox Management: To prevent project cost overruns and delays from scheduled delivery

Earned Value Analysis (EVA):

-Budget to date
-Actual spending to date
-Estimate to complete
-Estimate at completion
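
A minimal sketch of the earned value calculations behind the four items above, assuming the standard PMI definitions (planned value, earned value, actual cost); all figures are invented.

BAC = 100_000        # budget at completion
PV  = 40_000         # budget to date (planned value)
AC  = 45_000         # actual spending to date
EV  = 35_000         # budgeted cost of the work actually completed (earned value)

CPI = EV / AC                      # cost performance index
SPI = EV / PV                      # schedule performance index
ETC = (BAC - EV) / CPI             # estimate to complete
EAC = AC + ETC                     # estimate at completion

print(f"CPI={CPI:.2f} SPI={SPI:.2f} ETC={ETC:,.0f} EAC={EAC:,.0f}")
# CPI=0.78 SPI=0.88 ETC=83,571 EAC=128,571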


# Function point analysis (FPA) - To estimate efforts required to develop software.

# Decision Support System (DSS)
- Supports the semi-structured problem (and not only structured problem).
- Should be flexible and adaptable to changing requirements and scenarios.
- Decision tree is used as a questionnaire to lead a user through a series of choices until a conclusion is reached.
- Interactive System

# RISK Factors for Implementation of DSS
- Inability to specify purpose or usage patterns in advance.
- Inability to predict and cushion impact on all parties.
- Non-existent or unwilling users/ Multiple users or implementers/ Disappearing users, implementers and maintainers.
- Lack or loss of support/ Lack of experience with similar systems
- Technical problems and cost effectiveness issues.


# Agile Development:

- Dictionary meaning of agile is ‘able to move quickly and easily’.
- Allows the programmer to just start writing a program without spending much time on pre-planning documentation.
- Less importance is placed on formal paper-based deliverables, with the preference being to produce releasable software in short iterations, typically ranging from 4 to 8 weeks.
- At the end of each iteration, the team considers and documents what worked well and what could have worked better, and identifies improvements to be implemented in subsequent iterations.
- Some programmers prefer agile because they do not want to be involved in tedious planning exercises.
- Major risk associated with agile development is lack of documentation.
- In agile approach reviews are done to identify lessons learned for future use in the project.

Object Oriented System Development (OOSD):

- OOSD is a programming technique and not a software development methodology.
- Object here refers to small piece of program that can be used individually or in combination with other objects.
- In Object oriented language, application is made up of smaller components (objects).
- One of the major benefits of object-oriented design and development is the ability to reuse objects.
- ‘Encapsulation’ hides an object’s internal workings; objects interact with one another through defined interfaces, and any particular object may call another object to perform its work.

# Encapsulation
Encapsulation is a mechanism where you bind your data and code together as a single unit. It also means to hide your data in order to make it safe from any modification. What does this mean? The best way to understand encapsulation is to look at the example of a medical capsule, where the drug is always safe inside the capsule. Similarly, through encapsulation the methods and variables of a class are well hidden and safe.

- Permits enhanced degree of security over data.

When you create an object in an object-oriented language, you can hide the complexity of the internal workings of the object. As a developer, there are two main reasons why you would choose to hide complexity.

The first reason is to provide a simplified and understandable way to use your object without the need to understand the complexity inside. As mentioned above, a driver doesn't need to know how an internal combustion engine works. It is sufficient to know how to start the car, how to engage the transmission if you want to move, how to provide fuel, how to stop the car, and how to turn off the engine. You know to use the key, the shifter (and possibly clutch), the gas pedal and the brake pedal to accomplish each of these operations. These basic operations form an interface for the car. Think of an interface as the collection of things you can do to the car without knowing how each of those things works.

Hiding the complexity of the car from the user allows anyone, not just a mechanic, to drive a car. In the same way, hiding the complex functionality of your object from the user allows anyone to use it and to find ways to reuse it in the future regardless of their knowledge of the internal workings. This concept of keeping implementation details hidden from the rest of the system is key to object-oriented design.

Take a look at the CombustionEngine class below. Notice that it has only two public methods:
start() and stop()

Those public methods can be called from outside of the object. All of the other functions are private, meaning that they are not publicly visible to the rest of the application and cannot be called from outside of the object.

package engine {
    public class CombustionEngine {
     
        public function CombustionEngine() {}

        private function engageChoke():void {}
        private function disengageChoke():void {}
        private function engageElectricSystem():void {}
        private function powerSolenoid():void {}
        private function provideFuel():void {}
        private function provideSpark():void {}
     
        public function start():void {
            engageChoke();
            engageElectricSystem();
            powerSolenoid();
            provideFuel();
            provideSpark();
            disengageChoke();
        }
      public function stop():void {}
    }
}

You would use this class as follows:

var carEngine:CombustionEngine = new CombustionEngine();
carEngine.start();
carEngine.stop();

The second reason for hiding complexity is to manage change. Today most of us who drive use a vehicle with a gasoline-powered internal combustion engine. However, there are gas-electric hybrids, pure electric motors, and a variety of internal combustion engines that use alternative fuels. Each of those engine types has a different internal mechanism, yet we are able to drive each of them because that complexity has been hidden. This means that, even though the mechanism that propels the car changes, the system itself functions the same way from the user's perspective.


# Inheritance
In OOP, computer programs are designed so that everything is an object and objects interact with one another. Inheritance is one such concept, where the properties of one class can be inherited by another. It helps to reuse code and to establish a relationship between different classes.

A child inherits the properties from his father. Similarly, in Java, there are two classes:
1. Parent class (Super or Base class)
2. Child class (Subclass or Derived class )

A class which inherits the properties is known as Child Class whereas a class whose properties are inherited is known as Parent class.
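
A minimal sketch of the parent/child relationship described above; the notes reference Java, but Python is used here for brevity and the class names are invented.

class Employee:                          # parent (super or base) class
    def __init__(self, name):
        self.name = name

    def describe(self):
        return f"{self.name} is an employee"

class Auditor(Employee):                 # child (sub or derived) class: inherits Employee's properties
    def describe(self):                  # reuses the inherited data, overrides the behaviour
        return f"{self.name} is an IS auditor"

print(Employee("Asha").describe())       # Asha is an employee
print(Auditor("Rafi").describe())        # Rafi is an IS auditor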

# Polymorphism
- Polymorphism is a generic term that means 'many shapes'. More precisely, polymorphism means the ability to request that the same operations be performed by a wide range of different types of things. Polymorphism is the ability of an object to change behavior at compile time or runtime.

In OOP, polymorphism is achieved by using several techniques: method overloading, operator overloading and method overriding.

- Method Overloading:
Method overloading is the ability to define several methods all with the same name.

public class MyLogger
{
    public void LogError(Exception e)
    {
        // Implementation goes here
    }

    // Overload: same method name, different parameter list
    public bool LogError(Exception e, string message)
    {
        // Implementation goes here
        return true;   // indicate whether logging succeeded
    }
}


# Prototyping:

- Process of creating systems through controlled trial and error.
- An early sample or model to test a concept or process. A small scale working system used to test the assumptions. Assumptions may be about user requirements, program design or internal logic.
- This method of system development can provide the organization with significant time and cost savings.
– By focusing mainly on what the user wants and sees, developers may miss some of the controls that come from the traditional systems development approach; therefore, a potential risk is that the finished system will have poor controls.
- Important advantage of prototyping is that it provides significant cost and time savings.
- Top-down approach testing methods is MOST effective during the initial phases of Prototyping.
- In prototyping, changes in the designs and requirements occur quickly and are seldom documented or approved; hence, change control becomes more complicated with prototyped systems.

# Rapid Application Development:

- RAD includes use of:

> Small and well trained development teams.
> Prototypes
> Tools to support modelling, prototyping and component re-usability.
> Central repository
> Rigid limits on development time frames

- RAD enables the organisation to develop systems quickly while reducing development cost and maintaining quality.
- RAD relies on the usage of a prototype that can be updated continually to meet changing user or business requirements.


# Steps in Benchmarking Process:
(1)Plan (for what processes benchmarking is to be done)
(2)Research (from where and with whom benchmarking is to be done)
(3)Observe (visit and observe processes of benchmarking partners)
(4)Analyse (analyzing the gap between organisation’s processes and benchmarking partner’s processes)
(5)Adopt (implement the best practices followed by the benchmarking partner)
(6)Improve (continuous improvement)

- Parity bits are used to check for completeness of data transmissions.
- Check digit are a control check for accuracy.
- Detailed program logic is tested in White Box Testing
- The primary purpose of a system test is to evaluate the system functionally.

# Throughput 
Maximum rate of production or the maximum rate at which something can be processed. In data transmission, network throughput is the amount of data moved successfully from one place to another in a given time period, and typically measured in bits per second (bps), as in megabits per second (Mbps) or gigabits per second (Gbps).

- In white box testing, program logic is tested. In black box, only functionality is tested.

- Configuration Management involves procedure throughout the software life cycle (from requirement analysis to maintenance) to identify, define and baseline software items in the system and thus provide a basis of problem management, change management and release management.

- Ideally, stress testing should be carried out in a test environment using live workloads.

- Data integrity testing examines the accuracy, completeness, consistency and authorization of data.
- Relational integrity testing detects modification to sensitive data by the use of control totals.
- Domain integrity testing verifies that data conforms to specifications.
- Referential integrity testing ensures that data exists in its parent or original file before it exists in the child or another file.

# control total - Used to ensure that batch data is completely and accurately transferred between two systems.

- A control total is frequently used as an easily recalculated control. A check digit is a method of verifying the accuracy of a single data item, such as a credit card number. Although a check sum is an excellent control over batch completeness and accuracy, it is not easily recalculated and, therefore, is not as commonly used in financial systems as a control total. Check sums are frequently used in data transfer as part of encryption protocols.

# Application Controls:
Controls over input, processing and output functions. They include method for ensuring that:
-Only complete, accurate and valid data are entered and updated in computer systems.
-Processing accomplishes the correct task
-Processing results meet the expectations
-Data are maintained

Check Digit - A numeric value that has been calculated mathematically is added to data to ensure that the original data have not been altered or an incorrect, but valid, value substituted. This control is effective in detecting transposition and transcription errors.

Completeness Check - A field should always contain data rather than zeros or blanks. A check of each byte of that field should be performed to determine that some form of data, not blanks or zeros, is present.

Duplicate Check - New transactions are matched to those previously input to ensure that they have not already been entered.

# buffer overflow - Poorly written code, especially in web-based applications, is often exploited by hackers using this technique.
# brute-force attack - Used to crack passwords.

# Sequence Check - The control number follows sequentially and any sequence or duplicated control numbers are rejected or noted on an exception report for follow-up purposes.
# Limit Check - Data should not exceed a predetermined amount.
# Range Check - Data should be within a predetermined range of values.
# Validity Check - Programmed checking of the data validity in accordance with predetermined criteria. For example, a payroll record contains a field for marital status and the acceptable status codes are M or S. If any other code is entered, the record should be rejected.
# Reasonableness Check - Input data are matched to predetermined reasonable limits or occurrence rates. For example, a manufacturer usually receives orders for no more than 50 items. If an order for more than 50 items is received, the computer program should be designed to print the record with a warning indicating that the order appears unreasonable.
# Table look-ups - Input data comply with predetermined criteria maintained in a computerized table of possible values. For example, the input clerk enters a city code of 1 to 10. This number corresponds with a computerized table that matches the code to a city name.
# Existence check - Data are entered correctly and agree with valid predetermined criteria. For example, a valid transaction code must be entered in the transaction code field.
# Key Verification - The keying process is repeated by a separate individual using a machine that compares the original keystrokes to the repeated keyed input. For example, the worker number is keyed twice and compared to verify the keying process.
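
A minimal Python sketch applying a few of the edit checks above (completeness, validity, range/limit) to a payroll record; the field names and the 80-hour limit are assumptions, with the marital status codes taken from the validity check example above.

def edit_checks(record):
    errors = []
    if not record.get("employee_id"):                        # completeness check
        errors.append("employee_id missing")
    if record.get("marital_status") not in ("M", "S"):       # validity check (codes M or S, per the example above)
        errors.append("invalid marital status")
    if not 0 < record.get("hours_worked", 0) <= 80:          # range/limit check (80 is an assumed limit)
        errors.append("hours_worked outside 1-80")
    return errors

print(edit_checks({"employee_id": "E17", "marital_status": "X", "hours_worked": 95}))
# ['invalid marital status', 'hours_worked outside 1-80']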

# Function Point Analysis - An indirect method of measuring the size of an application by considering the number and complexity of its inputs, outputs and files.

# Input Control procedure
- must ensure that every transaction to be processed is entered, processed and recorded accurately and correctly.

# Logic path monitors 
- Report on the sequence of steps executed by a program. This provides the programmer with clues to logic errors, if any, in the program.

# Run to run totals 
- Provide the ability to verify data values through the stages of application processing. They ensure that data read into the computer were accepted and then applied to the updating process.

# Automated system balancing
- Would be the best way to ensure that no transactions are lost as any imbalance between total inputs and total outputs would be reported for investigation and correction.
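
A minimal Python sketch of automated system balancing using batch control totals (record count and amount total) computed before and after processing; the field names and figures are assumptions.

def control_totals(batch):
    # record count plus an amount total, used as the batch control totals
    return len(batch), round(sum(txn["amount"] for txn in batch), 2)

input_batch = [{"id": 1, "amount": 100.00}, {"id": 2, "amount": 250.50}, {"id": 3, "amount": 75.00}]
output_batch = [{"id": 1, "amount": 100.00}, {"id": 2, "amount": 250.50}]    # one transaction lost in processing

if control_totals(input_batch) != control_totals(output_batch):
    print("out of balance:", control_totals(input_batch), "vs", control_totals(output_batch))
    # out of balance: (3, 425.5) vs (2, 350.5) -> reported for investigation and correction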