Online Auditing Techniques:
- SCARF (System Control Audit Review File) - An embedded (inbuilt) audit module is used to continuously monitor transactions. It records only those transactions that are of special audit significance, such as transactions above a specified limit or transactions involving deviations/exceptions. Useful when regular processing cannot be interrupted.
- Snapshots - Pictures (snaps) of a transaction are taken as it moves through various stages of the application system. Both before-processing and after-processing images of the transaction are captured, so auditors can verify the correctness of processing by comparing the two. Useful when an audit trail is required.
- ITF (Integrated Test Facility) - Fictitious entities/transactions are created in the live production environment. Processed results are compared with expected results to verify that the system is operating correctly. Test data must be isolated from actual production data.
- CIS (Continuous and Intermittent Simulation) - Can be used whenever the application system uses a database management system (DBMS). The DBMS reads each transaction and passes it to CIS; if the transaction meets the selected criteria, CIS examines it for correctness. CIS determines whether any discrepancies exist between the results it produces and those the application system produces - in effect, CIS replicates or simulates the application system's processing. Best technique when transactions meeting certain criteria need to be examined.
- Audit Hook - Audit software embedded in the application that captures suspicious transactions. The criteria for what counts as suspicious are designed by auditors as per their requirements. Helps the IS auditor act before an error or irregularity gets out of hand.
# Audit trail (Snapshot)
# Fictitious entity in LIVE production (ITF)
# Early detection (Audit Hook)
# Simulates the application system processing (CIS)
Testing in SDLC
# Unit Testing
- Testing of an individual program or module, done during the development stage
- White box approach (i.e. testing of internal program logic) is applied
# Integrated/Interface Testing
- Dictionary meaning of integrate is 'to connect'
- Testing of the connection of two or more modules or components that pass information from one area to another
# Parallel testing
- Process of comparing results of the old and new system.
- To ensure that the implementation of a new system meets user requirements.
# Pilot Testing
- Takes place first at one location to review performance. The purpose is to see whether the new system operates satisfactorily in one place before implementing it at other locations.
# Regression Testing
- Meaning = 'act of going back' or to 'return'
- Ensures that changes or corrections in a program have not introduced any new errors.
- Data used for regression testing should be the same as the data used in previous tests.
# Sociability Testing
- Meaning = 'ability to have companionship with others'
- To ensure that a new or modified system can work in the specified environment without adversely impacting existing systems.
# System Testing
- Full-fledged test that includes stress/load/security/recovery and other tests
Security Testing - Testing of appropriate access control and other security measures.
Recovery Testing - Checking system's ability to recover after a hardware or software failure.
Load Testing - Testing of performance of system during peak hours (processing large quantity of data)
Volume Testing - Testing to determine maximum volume of records (data) the application can handle.
Stress Testing - Testing to determine maximum number of concurrent users/services the application can process.
Performance Testing - Comparing the performance of the system to other equivalent systems using well-defined benchmarks.
# Top-Down Approach
- Testing starts from the broader system level and gradually moves down towards individual programs and modules
- Interface errors can be detected earlier
- Confidence in the system is achieved earlier
- Appropriate for prototype development.
# Bottom-Up Approach
- Testing starts with individual units such as individual programs or modules and works upward until the complete system is tested
- Testing can be started even before all programs are complete
- Errors in critical modules can be found early.
# Alpha Testing
- Testing done by internal user
- Done prior to beta testing
- May not involve testing of full functionality
# Beta Testing
- Testing done by external user
- Done after alpha testing
- Generally, involves testing of full functionality
Check Digit/Parity Bits/Checksum/Cyclic Redundancy Checksums (CRC)/Redundancy Checksums/Forward Error Control/Atomicity
Check Digit:
- A mathematically calculated value that is added to data to ensure that the original data have not been altered.
- Helps in detecting transposition and transcription errors.
- Ensures data accuracy
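The notes above do not name a specific algorithm, but a common check-digit scheme is the Luhn algorithm used on credit-card numbers. A minimal sketch in Python:

```python
def luhn_check_digit(partial: str) -> int:
    """Compute the Luhn check digit for a string of digits.

    Starting from the rightmost digit, every other digit is doubled
    (and reduced by 9 if the double exceeds 9); the check digit is
    chosen so the grand total becomes a multiple of 10.
    """
    total = 0
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:          # every other digit from the right is doubled
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

# Classic worked example: account number 7992739871 gets check digit 3.
assert luhn_check_digit("7992739871") == 3
# A transposition (adjacent digits swapped) yields a different check
# digit, so the altered number fails verification:
assert luhn_check_digit("7992739817") != 3
```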
Parity Bits:
- Requires adding an extra bit to the data, called a parity bit, which records whether the number of 1 bits is odd or even. With even parity, the parity bit is 1 if the number of 1 bits is odd and 0 if it is even.
- This parity is verified by receiving computer to ensure data completeness and data integrity during transmission.
- Parity bits are used to check for completeness of data transmissions. A parity check is a hardware control that detects data errors when data are read from memory or communicated from one computer to another.
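The parity calculation above can be sketched directly (even parity, i.e. the convention described in the notes where data plus parity bit together contain an even number of 1s):

```python
def even_parity_bit(data: int) -> int:
    """Parity bit for an integer's binary representation: 1 when the
    count of 1 bits is odd, so data + parity together are even."""
    return bin(data).count("1") % 2

def check_even_parity(data: int, parity: int) -> bool:
    """Receiver-side check: total number of 1 bits must be even."""
    return (bin(data).count("1") + parity) % 2 == 0

# 0b1011 has three 1 bits (odd), so the parity bit is 1.
assert even_parity_bit(0b1011) == 1
assert check_even_parity(0b1011, 1)
# A single bit flipped in transit is detected:
assert not check_even_parity(0b1010, 1)
```

Note the limitation this implies: two flipped bits leave the parity unchanged, which is why the more complex arithmetic of checksums and CRCs described below is preferred.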
Checksum:
- Checksums work on the same principle as parity bits but can also identify more complex errors by increasing the complexity of the arithmetic.
Cyclic Redundancy Checksums (CRC)/Redundancy Checksums:
- A more advanced version of checksums, again achieved by increasing the complexity of the arithmetic.
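The progression from a simple additive checksum to a CRC can be illustrated with Python's standard library (`zlib.crc32` is a real stdlib function; the additive checksum is a simplified sketch, and the message bytes are invented):

```python
import zlib

def additive_checksum(data: bytes) -> int:
    """Naive checksum: sum of byte values modulo 256. It misses errors
    that cancel out, e.g. two bytes swapped (the sum is unchanged)."""
    return sum(data) % 256

msg     = b"PAY 100 TO A"
swapped = b"PAY 001 TO A"   # transposition error in transit

# The additive checksum cannot see the transposition...
assert additive_checksum(msg) == additive_checksum(swapped)
# ...but the CRC, using more complex (position-sensitive) arithmetic,
# catches it.
assert zlib.crc32(msg) != zlib.crc32(swapped)
```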
Forward Error Control:
- Works on the same principle as CRC; however, FEC also corrects the error. FEC provides the receiver with the ability to correct errors without requesting retransmission.
- To detect & correct transmission error.
Atomicity:
A transaction must be all-or-nothing; that is, it must either happen fully or not happen at all. The principle of atomicity requires that a transaction be completed in its entirety or not at all. If an error or interruption occurs, all changes made up to that point are backed out.
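Atomicity can be demonstrated with Python's built-in sqlite3 module, whose connection context manager rolls back every change in a transaction when any statement fails (the table, column names and amounts here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 100)")
conn.commit()

# A funds transfer must be all-or-nothing: debit and credit together.
try:
    with conn:  # the 'with' block is one transaction
        conn.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
        raise RuntimeError("simulated failure mid-transaction")
        conn.execute("UPDATE account SET balance = balance + 30 WHERE id = 2")
except RuntimeError:
    pass  # the context manager has already rolled the transaction back

# The partial debit was backed out: both balances are unchanged.
balances = [row[0] for row in conn.execute("SELECT balance FROM account ORDER BY id")]
assert balances == [100, 100]
```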
Parity bits or checksum (higher version of parity bit) or CRC (higher version of checksum):
- To identify transmission error
- To ensure completeness
- To ensure integrity
- First preference to CRC. If CRC is not given as an option, then prefer checksum. If neither CRC nor checksum is an option, then prefer parity bits.
PERT-CPM-Gantt Chart-FPA-Timebox:
PERT or CPM: To estimate project duration or timelines. First preference to be given to PERT.
Gantt Chart: To monitor the project or track any milestone
FPA or SLOC: To estimate software size. First preference to be given to FPA. SLOC = source lines of code. SLOC is a direct method while FPA is an indirect method: FPA is arrived at on the basis of the number and complexity of inputs, outputs, files, interfaces and queries. FPA is more reliable than SLOC.
When objective is to identify software size estimation-first preference to be given to FPA
Timebox Management: To prevent project cost overruns and delays from scheduled delivery
Earned Value Analysis (EVA):
-Budget to date
-Actual spending to date
-Estimate to complete
-Estimate at completion
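Under the usual earned value conventions (EV = budgeted value of work actually performed, PV = budget to date, AC = actual spending to date), the estimates above follow from two simple indices. A sketch with made-up figures, using the common formulas CPI = EV/AC, SPI = EV/PV, EAC = BAC/CPI (other EAC variants exist):

```python
def earned_value_metrics(bac, ev, pv, ac):
    """Return (CPI, SPI, estimate at completion, estimate to complete)."""
    cpi = ev / ac        # cost performance index
    spi = ev / pv        # schedule performance index
    eac = bac / cpi      # estimate at completion
    etc = eac - ac       # estimate to complete
    return cpi, spi, eac, etc

# Illustrative project: $100k budget, 40% of work done,
# 50% planned by now, $50k actually spent.
cpi, spi, eac, etc = earned_value_metrics(bac=100_000, ev=40_000, pv=50_000, ac=50_000)
assert cpi == 0.8 and spi == 0.8   # over cost and behind schedule
assert eac == 125_000              # estimate at completion
assert etc == 75_000               # estimate to complete
```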
# Function point analysis (FPA) - To estimate efforts required to develop software.
# Decision Support System (DSS)
- Supports semi-structured problems (not only structured problems).
- Should be flexible and adaptable to changing requirements and scenarios.
- Decision tree is used as a questionnaire to lead a user through a series of choices until a conclusion is reached.
- Interactive System
# RISK Factors for Implementation of DSS
- Inability to specify purpose or usage patterns in advance.
- Inability to predict and cushion impact on all parties.
- Non-existent or unwilling users/ Multiple users or implementers/ Disappearing users, implementers and maintainers.
- Lack or loss of support/ Lack of experience with similar systems
- Technical problems and cost effectiveness issues.
# Agile Development:
- Dictionary meaning of agile is ‘able to move quickly and easily’.
- Allows the programmer to just start writing a program without spending much time on pre-planning documentation.
- Less importance is placed on formal paper-based deliverables, with the preference being to produce releasable software in short iterations, typically ranging from 4 to 8 weeks.
- At the end of each iteration, the team considers and documents what worked well and what could have worked better, and identifies improvements to be implemented in subsequent iterations.
- Some programmers prefer agile because they do not want to be involved in tedious planning exercises.
- Major risk associated with agile development is lack of documentation.
- In agile approach reviews are done to identify lessons learned for future use in the project.
Object Oriented System Development (OOSD):
- OOSD is a programming technique and not a software development methodology.
- Object here refers to small piece of program that can be used individually or in combination with other objects.
- In Object oriented language, application is made up of smaller components (objects).
- One of the major benefits of object-oriented design and development is the ability to reuse objects.
- Objects interact through well-defined interfaces; it is common practice for one object to call another object to perform part of its work. The hiding of an object's internal data and workings behind that interface is called 'encapsulation'.
# Encapsulation
Encapsulation is a mechanism where you bind your data and code together as a single unit. It also means to hide your data in order to make it safe from any modification. What does this mean? The best way to understand encapsulation is to look at the example of a medical capsule, where the drug is always safe inside the capsule. Similarly, through encapsulation the methods and variables of a class are well hidden and safe.
- Permits enhanced degree of security over data.
When you create an object in an object-oriented language, you can hide the complexity of the internal workings of the object. As a developer, there are two main reasons why you would choose to hide complexity.
The first reason is to provide a simplified and understandable way to use your object without the need to understand the complexity inside. For example, a driver doesn't need to know how an internal combustion engine works. It is sufficient to know how to start the car, how to engage the transmission if you want to move, how to provide fuel, how to stop the car, and how to turn off the engine. You use the key, the shifter (and possibly clutch), the gas pedal and the brake pedal to accomplish each of these operations. These basic operations form an interface for the car. Think of an interface as the collection of things you can do to the car without knowing how each of those things works.
Hiding the complexity of the car from the user allows anyone, not just a mechanic, to drive a car. In the same way, hiding the complex functionality of your object from the user allows anyone to use it and to find ways to reuse it in the future regardless of their knowledge of the internal workings. This concept of keeping implementation details hidden from the rest of the system is key to object-oriented design.
Take a look at the CombustionEngine class below. Notice that it has only two public methods:
start() and stop()
Those public methods can be called from outside of the object. All of the other functions are private, meaning that they are not publicly visible to the rest of the application and cannot be called from outside of the object.
package engine {
    public class CombustionEngine {
        public function CombustionEngine() {}

        private function engageChoke():void {}
        private function disengageChoke():void {}
        private function engageElectricSystem():void {}
        private function powerSolenoid():void {}
        private function provideFuel():void {}
        private function provideSpark():void {}

        public function start():void {
            engageChoke();
            engageElectricSystem();
            powerSolenoid();
            provideFuel();
            provideSpark();
            disengageChoke();
        }

        public function stop():void {}
    }
}
You would use this class as follows:
var carEngine:CombustionEngine = new CombustionEngine();
carEngine.start();
carEngine.stop();
The second reason for hiding complexity is to manage change. Today most of us who drive use a vehicle with a gasoline-powered internal combustion engine, but there are gas-electric hybrids, pure electric motors, and a variety of internal combustion engines that use alternative fuels. Each of those engine types has a different internal mechanism, yet we are able to drive each of them because that complexity has been hidden. This means that even though the mechanism that propels the car changes, the system itself functions the same way from the user's perspective.
# Inheritance
In OOP, computer programs are designed so that everything is an object, and objects interact with one another. Inheritance is one such concept, where the properties of one class can be inherited by another. It helps to reuse code and establish a relationship between different classes.
A child inherits the properties from his father. Similarly, in Java, there are two classes:
1. Parent class (Super or Base class)
2. Child class (Subclass or Derived class)
A class which inherits the properties is known as Child Class whereas a class whose properties are inherited is known as Parent class.
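The same parent/child relationship can be sketched in Python (the class names are invented for illustration):

```python
class Vehicle:                # parent (super/base) class
    def __init__(self, wheels: int):
        self.wheels = wheels

    def describe(self) -> str:
        return f"vehicle with {self.wheels} wheels"

class Car(Vehicle):           # child (sub/derived) class inherits from Vehicle
    def __init__(self):
        super().__init__(wheels=4)   # reuse the parent's initializer

car = Car()
# Car never defined describe() itself; it inherits it from Vehicle,
# which is the code reuse the notes above describe.
assert car.describe() == "vehicle with 4 wheels"
```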
# Polymorphism
- Polymorphism is a generic term that means 'many shapes'. More precisely, polymorphism is the ability to request that the same operation be performed by a wide range of different types of things. It is the ability of an object to change behavior at compile time or runtime.
In OOP, polymorphism is achieved using several techniques: method overloading, operator overloading, and method overriding.
- Method Overloading
Method overloading is the ability to define several methods, all with the same name but with different parameter lists.
public class MyLogger
{
    public void LogError(Exception e)
    {
        // Implementation goes here
    }

    public bool LogError(Exception e, string message)
    {
        // Implementation goes here
    }
}
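Method overriding, the other technique named above, replaces an inherited method in the child class so that the same call behaves differently at runtime. A minimal Python sketch (class names are invented):

```python
class Logger:
    def log(self, message: str) -> str:
        return f"LOG: {message}"

class AuditLogger(Logger):
    def log(self, message: str) -> str:   # overrides the inherited method
        return f"AUDIT: {message}"

# The same call name resolves to different behavior depending on the
# object's actual type - runtime polymorphism.
assert Logger().log("ok") == "LOG: ok"
assert AuditLogger().log("ok") == "AUDIT: ok"
```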
# Prototyping:
- Process of creating systems through controlled trial and error.
- An early sample or model to test a concept or process. A small scale working system used to test the assumptions. Assumptions may be about user requirements, program design or internal logic.
- This method of system development can provide the organization with significant time and cost savings.
- By focusing mainly on what the user wants and sees, developers may miss some of the controls that come from the traditional systems development approach; therefore, a potential risk is that the finished system will have poor controls.
- Top-down approach testing methods is MOST effective during the initial phases of Prototyping.
- In prototyping, changes in the designs and requirements occur quickly and are seldom documented or approved; hence, change control becomes more complicated with prototyped systems.
# Rapid Application Development:
- RAD includes use of:
> Small and well trained development teams.
> Prototypes
> Tools to support modelling, prototyping and component re-usability.
> Central repository
> Rigid limits on development time frames
- RAD enables the organisation to develop systems quickly while reducing development cost and maintaining quality.
- RAD relies on the usage of a prototype that can be updated continually to meet changing user or business requirements.
# Steps in Benchmarking Process:
(1)Plan (for what processes benchmarking is to be done)
(2)Research (from where and with whom benchmarking is to be done)
(3)Observe (visit and observe processes of benchmarking partners)
(4)Analyse (analyzing the gap between organisation’s processes and benchmarking partner’s processes)
(5)Adopt (implement the best practices followed by the benchmarking partner)
(6)Improve (continuous improvement)
- Parity bits are used to check for completeness of data transmissions.
- Check digit are a control check for accuracy.
- Detailed program logic is tested in White Box Testing
- The primary purpose of a system test is to evaluate the system functionally.
# Throughput
The maximum rate of production, or the maximum rate at which something can be processed. In data transmission, network throughput is the amount of data moved successfully from one place to another in a given time period, typically measured in bits per second (bps), as in megabits per second (Mbps) or gigabits per second (Gbps).
- In white box testing, program logic is tested. In black box, only functionality is tested.
- Configuration Management involves procedure throughout the software life cycle (from requirement analysis to maintenance) to identify, define and baseline software items in the system and thus provide a basis of problem management, change management and release management.
- Ideally, stress testing should be carried out in a test environment using live workloads.
- Data integrity testing examines the accuracy, completeness, consistency and authorization of data.
- Relational integrity testing detects modification to sensitive data by the use of control totals.
- Domain integrity testing verifies that data conforms to specifications.
- Referential integrity testing ensures that data exists in its parent or original file before it exists in the child or another file.
# control total - Used to ensure that batch data is completely and accurately transferred between two systems.
- A control total is frequently used as an easily recalculated control. A check digit is a method of verifying the accuracy of a single data item, such as a credit card number. Although a checksum is an excellent control over batch completeness and accuracy, it is not easily recalculated and, therefore, is not as commonly used in financial systems as a control total. Checksums are frequently used in data transfer as part of encryption protocols.
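Recomputing a control total on the receiving side is trivial, which is why it is favored for financial batch transfers. A minimal sketch (the field names and amounts are invented):

```python
batch = [
    {"id": 1, "amount": 250},
    {"id": 2, "amount": 100},
    {"id": 3, "amount": 475},
]

# Control totals transmitted alongside the batch by the sending system:
sent_record_count = 3
sent_amount_total = 825

# The receiving system recalculates both and compares. A mismatch means
# the batch was transferred incompletely or inaccurately.
assert len(batch) == sent_record_count                        # completeness
assert sum(r["amount"] for r in batch) == sent_amount_total   # accuracy
```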
# Application Controls:
Controls over input, processing and output functions. They include methods for ensuring that:
-Only complete, accurate and valid data are entered and updated in computer systems.
-Processing accomplishes the correct task
-Processing results meet the expectations
-Data are maintained
Check Digit - A numeric value that has been calculated mathematically is added to data to ensure that the original data have not been altered or an incorrect, but valid, value substituted. This control is effective in detecting transposition and transcription errors.
Completeness Check - A field should always contain data rather than zeros or blanks. A check of each byte of that field should be performed to determine that some form of data, not blanks or zeros, is present.
Duplicate Check - New transactions are matched to those previously input to ensure that they have not already been entered.
# Buffer Overflow - Poorly written code, especially in web-based applications, is often exploited by hackers using this technique.
# Brute-Force Attack - A technique used to crack passwords.
# Sequence Check - The control number follows sequentially and any sequence or duplicated control numbers are rejected or noted on an exception report for follow-up purposes.
# Limit Check - Data should not exceed a predetermined amount.
# Range Check - Data should be within a predetermined range of values.
# Validity Check - Programmed checking of the data validity in accordance with predetermined criteria. For example, a payroll record contains a field for marital status and the acceptable status codes are M or S. If any other code is entered, the record should be rejected.
# Reasonableness Check - Input data are matched to predetermined reasonable limits or occurrence rates. For example, a manufacturer usually receives orders for no more than 50 items. If an order for more than 50 items is received, the computer program should be designed to print the record with a warning indicating that the order appears unreasonable.
# Table look-ups - Input data comply with predetermined criteria maintained in a computerized table of possible values. For example, the input clerk enters a city code of 1 to 10. This number corresponds with a computerized table that matches the code to a city name.
# Existence check - Data are entered correctly and agree with valid predetermined criteria. For example, a valid transaction code must be entered in the transaction code field.
# Key Verification - The keying process is repeated by a separate individual using a machine that compares the original keystrokes to the repeated keyed input. For example, the worker number is keyed twice and compared to verify the keying process.
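Several of the edit checks above can be sketched as a single validation pass over an input record (the field names, limits and codes below are invented for illustration, echoing the examples in the notes):

```python
def validate(record: dict, seen_ids: set) -> list:
    """Apply limit, range, validity, completeness and duplicate checks.
    Returns a list of error strings; an empty list means the record passes."""
    errors = []
    if record["quantity"] > 50:                       # limit check
        errors.append("limit: quantity exceeds 50")
    if not (1 <= record["city_code"] <= 10):          # range check
        errors.append("range: city_code outside 1-10")
    if record["marital_status"] not in ("M", "S"):    # validity check
        errors.append("validity: bad marital_status")
    if not record["name"].strip():                    # completeness check
        errors.append("completeness: name is blank")
    if record["id"] in seen_ids:                      # duplicate check
        errors.append("duplicate: id already entered")
    seen_ids.add(record["id"])
    return errors

seen = set()
good = {"id": 1, "quantity": 10, "city_code": 3, "marital_status": "M", "name": "A"}
assert validate(good, seen) == []
bad = {"id": 1, "quantity": 99, "city_code": 3, "marital_status": "X", "name": "A"}
assert validate(bad, seen) == [
    "limit: quantity exceeds 50",
    "validity: bad marital_status",
    "duplicate: id already entered",
]
```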
# Function Point Analysis - An indirect method of measuring the size of an application by considering the number and complexity of its inputs, outputs and files.
# Input Control procedure
- Must ensure that every transaction to be processed is entered, processed and recorded accurately and completely.
# Logic path monitors
- Report on the sequence of steps executed by a program. This provides the programmer with clues to logic errors, if any, in the program.
# Run to run totals
- Provide the ability to verify data values through the stages of application processing. They ensure that data read into the computer were accepted and then applied to the updating process.
# Automated system balancing
- Would be the best way to ensure that no transactions are lost as any imbalance between total inputs and total outputs would be reported for investigation and correction.
- SCARF (System Control Audit Review File). An embedded (inbuilt) audit module is used to continuously monitor the transactions. Records only those transactions which are of special audit significance such as transactions above specified limit or transactions related to deviation/exception. Useful when regular processing can not be interrupted.
- Snapshots - snaps (pictures) are taken of the transactions as transaction moves through various stages in the application system. Both before-processing and after-processing images of the transactions are captured. Auditors can verify the correctness of the processing by checking before-processing and after-processing images of the transactions. Useful when audit trail is required.
- ITF (Integrated Test Facility) - Fictitious entities/transactions are created in live production environment. Processed results and actual results are compared to verify that systems are operating correctly. Need to isolate test data from actual production data.
- CIS (Continuous and Intermittent Simulation) - This technique can be used whenever the application system uses the database management system (DBMS). DBMS reads the transaction which is passed to CIS. If transaction is as per selected criteria, then CIS examines the transaction for correctness. CIS determines whether any discrepancies exist between the result it produces and those the application system produces. Thus, CIS replicates or simulates the application system processing. Best technique when transactions meeting certain criteria needs to be examined.
- Audit Hook - These are audit software that captures suspicious transactions. Criteria for suspicious transactions are designed by auditors as per their requirement. Helps the IS auditor to act before an error or an irregularity gets out of hand.
# Audit trail (Snapshot)
# Fictitious entity in LIVE production (ITF)
# Early detection (Audit Hook)
# Simulates the application system processing (CIS)
Testing in SDLC
# Unit Testing
- Testing of individual program or module testing done during development stage
- White box approach (i.e. testing of internal program logic) is applied
# Integrated/Interface Testing
- Dictionary meaning of integrate is 'to connect'
- Testing of connection of two or more module or components that pass information from one area to another
# Parallel testing
- Process of comparing results of the old and new system.
- To ensure that the implementation of a new system meets user requirements.
# Pilot Testing
- Takes place first at one location to review the performance.The purpose is to see if the new system operates satisfactorily in one place before implementing it at other locations.
# Regression Testing
- Meaning = 'act of going back' or to 'return'
- Ensures that changes or corrections in a program have not introduced any new errors.
- Data used for regression testing should be same as the data used in previous tests.
# Sociability Testing
- Meaning = 'ability to have companionship with others'
- To ensure that new or modified system can work in the specified environment without adversely impacting existing system.
# System Testing
- Full pledge test that includes stress/load/security/recovery and other tests
Security Testing - Testing of appropriate access control and other security measures.
Recovery Testing - Checking system's ability to recover after a hardware or software failure.
Load Testing - Testing of performance of system during peak hours (processing large quantity of data)
Volume Testing - Testing to determine maximum volume of records (data) the application can handle.
Stress Testing - Testing to determine maximum number of concurrent users/services the application can process.
Performance Testing - Comparing the performance of the system to other equivalent system using well defined benchmarks.
# Top-Down Approach
- Testing starts with individual units such as individual programs or modules and work upward until a complete system is tested.
- Test can be started even before all programs are complete
- Errors in critical modules can be found early.
# Bottom-Up Approach
- Test starts from broader level and then gradually moves towards individual programs and modules
- Interface error can be detected earlier
- Confidence in the system is achieved earlier
- Appropriate for prototype development.
# Alpha Testing
- Testing done by internal user
- Done prior to beta testing
- May not involve testing of full functionality
# Beta Testing
- Testing done by external user
- Done after alpha testing
- Generally, involves testing of full functionality
Check Digit/Parity Bits/Checksum/Cyclic Redundancy Checksums (CRC)/Redundancy Checksums/Forward Error Control/Atomicity
Check Digit:
- Mathematically calculated value that is added to data to ensure that the original data have not been altered.
- Helps in avoiding transposition and transcription errors.
- Ensure data accuracy
Parity Bits:
- Requires adding an extra bit on the data. This extra bit is called a parity bit. This bit simply says whether the number of 1 bits is odd or even. Generally the parity bit is 1 if the number of 1 bits is odd and 0 if the sum of the 1 bits is even.
- This parity is verified by receiving computer to ensure data completeness and data integrity during transmission.
- Parity bits are used to check for completeness of data transmissions. A parity check is a hardware control that detects data errors when data are read from one computer to another, from memory or during transmission.
Checksum:
- Checksum are exactly same as parity but able to identify complex errors also by increasing the complexity of the arithmetic.
Cyclic Redundancy Checksums (CRC)/Redundancy Checksums:
- More advanced version of checksums by increasing the complexity of the arithmetic.
Forward Error Control:
- Works on same principle as CRC. However FEC also corrects the error. FEC provides the receiver with the ability to correct errors.
- To detect & correct transmission error.
Atomicity:
Transaction must be all-or-nothing. That is, the transaction must either fully happen, or not happens at all. The principle of atomicity requires that a transaction be completed in its entirety or not at all. If an error or interruption occurs, all changes made upto that points are backed out.
Parity bits or checksum (higher version of parity bit) or CRC (higher version of checksum):
- To identify transmission error
- To ensure completeness
- To ensure integrity
- First preference to CRC. If CRC is not there as option then preference to be given to Checksum. If CRC and Checksum both are not there in option then preference to be given to Parity Bits.
PERT-CPM-Gantt Chart-FPA-Timebox:
PERT or CPM: To estimate project duration or timeless. First preference to be given to PERT.
Gantt Chart: To monitor the project or track any milestone
FPA or SLOC: To estimate software size. First preference to be given to FPA. SLOC = Source line of code. SLOC is direct method while FPA is indirect method. FPA is arrived on the basis of number and complexity of inputs, outputs, files, interfaces and queries. FPA is more reliable than SLOC.
When objective is to identify software size estimation-first preference to be given to FPA
Timebox Management: To prevent project cost overruns and delays from scheduled delivery
Earned Value Analysis (EVA):
-Budget to date
-Actual spending to date
-Estimate to complete
-Estimate at completion
# Function point analysis (FPA) - To estimate efforts required to develop software.
# Decision Support System (DSS)
- Supports the semi-structured problem (and not only structured problem).
- Should be flexible and adoptable to changing requirements and scenarios.
- Decision tree is used as a questionnaire to lead a user through a series of choices until a conclusion is reached.
- Interactive System
# RISK Factors for Implementation of DSS
- Inability to specify purpose or usage patterns in advance.
- Inability to predict and cushion impact on all parties.
- Non-existent or unwilling users/ Multiple users or implementers/ Disappearing users, implementers and maintainers.
- Lack or loss of support/ Lack of experience with similar systems
- Technical problems and cost effectiveness issues.
# Agile Development:
- Dictionary meaning of agile is ‘able to move quickly and easily’.
- Allows the programmer to just start writing a program without spending much time on pre-planning documentation.
- Less importance is placed on formal paper-based deliverables, with the preference being to produce releasable software in short iterations, typically ranging from 4 to 8 weeks.
- At the end of each iteration, the team considers and documents what worked well and what could have worked better, and identifies improvements to be implemented in subsequent iterations.
- Some programmers prefer agile because they do not want to be involved in tedious planning exercises.
- Major risk associated with agile development is lack of documentation.
- In agile approach reviews are done to identify lessons learned for future use in the project.
Object Oriented System Development (OOSD):
- OOSD is a programming technique and not a software development methodology.
- Object here refers to small piece of program that can be used individually or in combination with other objects.
- In Object oriented language, application is made up of smaller components (objects).
- One of the major benefits of object-oriented design and development is the ability to reuse objects.
- ‘encapsulation’ in which one object interacts with another object. This is a common practice whereby any particular object may call other object to perform its work.
– A major benefit of object-oriented development is the ability to reuse objects.
# Encapsulation
Encapsulation is a mechanism where you bind your data and code together as a single unit. It also means to hide your data in order to make it safe from any modification. What does this mean? The best way to understand encapsulation is to look at the example of a medical capsule, where the drug is always safe inside the capsule. Similarly, through encapsulation the methods and variables of a class are well hidden and safe.
- Permits an enhanced degree of security over data.
When you create an object in an object-oriented language, you can hide the complexity of the internal workings of the object. As a developer, there are two main reasons why you would choose to hide complexity.
The first reason is to provide a simplified and understandable way to use your object without the need to understand the complexity inside. For example, a driver doesn't need to know how an internal combustion engine works. It is sufficient to know how to start the car, how to engage the transmission if you want to move, how to provide fuel, how to stop the car, and how to turn off the engine. You know to use the key, the shifter (and possibly clutch), the gas pedal and the brake pedal to accomplish each of these operations. These basic operations form an interface for the car. Think of an interface as the collection of things you can do to the car without knowing how each of those things works.
Hiding the complexity of the car from the user allows anyone, not just a mechanic, to drive a car. In the same way, hiding the complex functionality of your object from the user allows anyone to use it and to find ways to reuse it in the future regardless of their knowledge of the internal workings. This concept of keeping implementation details hidden from the rest of the system is key to object-oriented design.
Take a look at the CombustionEngine class below. Notice that it has only two public methods:
start() and stop()
Those public methods can be called from outside of the object. All of the other functions are private, meaning that they are not publicly visible to the rest of the application and cannot be called from outside of the object.
package engine {
    public class CombustionEngine {
        public function CombustionEngine() {}

        // Internal steps, hidden from callers
        private function engageChoke():void {}
        private function disengageChoke():void {}
        private function engageElectricSystem():void {}
        private function powerSolenoid():void {}
        private function provideFuel():void {}
        private function provideSpark():void {}

        // Public interface: the only operations visible outside the object
        public function start():void {
            engageChoke();
            engageElectricSystem();
            powerSolenoid();
            provideFuel();
            provideSpark();
            disengageChoke();
        }

        public function stop():void {}
    }
}
You would use this class as follows:
var carEngine:CombustionEngine = new CombustionEngine();
carEngine.start();
carEngine.stop();
The second reason for hiding complexity is to manage change. Today most of us who drive use a vehicle with a gasoline-powered internal combustion engine, but there are gas-electric hybrids, pure electric motors, and a variety of internal combustion engines that use alternative fuels. Each of those engine types has a different internal mechanism, yet we are able to drive each of them because that complexity has been hidden. This means that, even though the mechanism which propels the car changes, the system itself functions the same way from the user's perspective.
# Inheritance
In OOP, computer programs are designed so that everything is an object, and objects interact with one another. Inheritance is the concept whereby the properties of one class can be acquired by another. It helps to reuse code and establishes a relationship between different classes.
Just as a child inherits properties from a parent, in Java there are two kinds of classes:
1. Parent class (Super or Base class)
2. Child class (Subclass or Derived class)
A class which inherits the properties is known as the child class, whereas the class whose properties are inherited is known as the parent class.
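A minimal Java sketch of this parent/child relationship (the Vehicle and Car class names are illustrative, not from the notes):

```java
// Parent (super/base) class: its methods are inherited by subclasses.
class Vehicle {
    int wheels() { return 4; }
    String describe() { return "a vehicle with " + wheels() + " wheels"; }
}

// Child (sub/derived) class: reuses Vehicle's code via "extends".
class Car extends Vehicle {
    String honk() { return "beep"; } // behavior added by the child
}

public class InheritanceDemo {
    public static void main(String[] args) {
        Car car = new Car();
        // Car never defines wheels() or describe(); both are inherited from Vehicle.
        System.out.println(car.describe()); // a vehicle with 4 wheels
        System.out.println(car.honk());     // beep
    }
}
```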
# Polymorphism
- Polymorphism is a generic term that means 'many shapes'. More precisely, polymorphism means the ability to request that the same operation be performed by a wide range of different types of things. It is the ability of an object to change behavior at compile time or at runtime.
In OOP, polymorphism is achieved by using several techniques: method overloading, operator overloading, and method overriding.
- Method Overloading
Method overloading is the ability to define several methods, all with the same name but with different parameter lists.
public class MyLogger
{
    public void LogError(Exception e)
    {
        // Implementation goes here
    }

    // Same name, different parameter list: the compiler selects the right method
    public bool LogError(Exception e, string message)
    {
        // Implementation goes here (placeholder return so the sketch compiles)
        return true;
    }
}
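The notes also name method overriding, the runtime form of polymorphism; a small Java sketch (the Animal and Dog class names are illustrative):

```java
class Animal {
    String speak() { return "..."; }
}

class Dog extends Animal {
    @Override
    String speak() { return "woof"; } // replaces the parent's version
}

public class OverridingDemo {
    public static void main(String[] args) {
        Animal pet = new Dog();          // declared type Animal, actual type Dog
        System.out.println(pet.speak()); // woof: the override is resolved at runtime
    }
}
```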
# Prototyping:
- Process of creating systems through controlled trial and error.
- An early sample or model to test a concept or process. A small scale working system used to test the assumptions. Assumptions may be about user requirements, program design or internal logic.
- This method of system development can provide the organization with significant time and cost savings.
- By focusing mainly on what the user wants and sees, developers may miss some of the controls that come from the traditional systems development approach; therefore, a potential risk is that the finished system will have poor controls.
- Top-down approach testing methods is MOST effective during the initial phases of Prototyping.
- In prototyping, changes in the designs and requirements occur quickly and are seldom documented or approved; hence, change control becomes more complicated with prototyped systems.
# Rapid Application Development:
- RAD includes use of:
> Small and well trained development teams.
> Prototypes
> Tools to support modelling, prototyping and component re-usability.
> Central repository
> Rigid limits on development time frames
- RAD enables the organisation to develop systems quickly while reducing development cost and maintaining quality.
- RAD relies on the usage of a prototype that can be updated continually to meet changing user or business requirements.
# Steps in Benchmarking Process:
(1) Plan (decide which processes are to be benchmarked)
(2) Research (decide from where and with whom benchmarking is to be done)
(3) Observe (visit and observe the processes of benchmarking partners)
(4) Analyse (analyse the gap between the organisation's processes and the benchmarking partner's processes)
(5) Adopt (implement the best practices followed by the benchmarking partner)
(6) Improve (continuous improvement)
- Parity bits are used to check for completeness of data transmissions.
- Check digits are a control check for accuracy.
- Detailed program logic is tested in White Box Testing
- The primary purpose of a system test is to evaluate the system functionally.
# Throughput
Maximum rate of production, or the maximum rate at which something can be processed. In data transmission, network throughput is the amount of data moved successfully from one place to another in a given time period, and is typically measured in bits per second (bps), as in megabits per second (Mbps) or gigabits per second (Gbps).
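A quick worked example of that definition (the figures are illustrative): moving 500 megabits of data in 4 seconds gives a throughput of 125 Mbps.

```java
public class ThroughputDemo {
    // Throughput = data moved successfully / elapsed time
    public static double throughputMbps(double megabits, double seconds) {
        return megabits / seconds;
    }

    public static void main(String[] args) {
        System.out.println(throughputMbps(500, 4)); // 125.0
    }
}
```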
- In white box testing, program logic is tested. In black box, only functionality is tested.
- Configuration Management involves procedures throughout the software life cycle (from requirements analysis to maintenance) to identify, define and baseline the software items in the system, and thus provides a basis for problem management, change management and release management.
- Ideally, stress testing should be carried out in a test environment using live workloads.
- Data integrity testing examines the accuracy, completeness, consistency and authorization of data.
- Relational integrity testing detects modification to sensitive data by the use of control totals.
- Domain integrity testing verifies that data conforms to specifications.
- Referential integrity testing ensures that data exists in its parent or original file before it exists in the child or another file.
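A referential integrity check of this kind can be sketched as follows, assuming simple in-memory parent and child record sets (the customer/order names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class ReferentialIntegrityCheck {
    // Returns child foreign keys that have no matching record in the parent file
    public static List<String> orphans(Set<String> parentKeys, List<String> childForeignKeys) {
        List<String> result = new ArrayList<>();
        for (String fk : childForeignKeys) {
            if (!parentKeys.contains(fk)) result.add(fk);
        }
        return result;
    }

    public static void main(String[] args) {
        Set<String> customers = Set.of("C1", "C2");          // parent file keys
        List<String> orderCustomerIds = List.of("C1", "C3"); // child file foreign keys
        // C3 exists in the child file but not the parent file: integrity violation
        System.out.println(orphans(customers, orderCustomerIds)); // [C3]
    }
}
```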
# control total - Used to ensure that batch data is completely and accurately transferred between two systems.
- A control total is frequently used as an easily recalculated control. A check digit is a method of verifying the accuracy of a single data item, such as a credit card number. Although a check sum is an excellent control over batch completeness and accuracy, it is not easily recalculated and, therefore, is not as commonly used in financial systems as a control total. Check sums are frequently used in data transfer as part of encryption protocols.
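A control total can be sketched as follows: both the sending and receiving system recompute the same easily recalculated total over the batch (here a simple sum of an amount field; the figures are illustrative) and compare:

```java
public class ControlTotalDemo {
    // Control total: sum of a numeric field across all records in the batch
    public static long controlTotal(long[] amounts) {
        long total = 0;
        for (long a : amounts) total += a;
        return total;
    }

    public static void main(String[] args) {
        long[] sent = {100, 250, 75};     // batch leaving the sending system
        long[] received = {100, 250, 75}; // batch arriving at the receiving system
        // Matching totals indicate the batch transferred completely and accurately
        System.out.println(controlTotal(sent) == controlTotal(received)); // true
    }
}
```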
# Application Controls:
Controls over input, processing and output functions. They include methods for ensuring that:
-Only complete, accurate and valid data are entered and updated in computer systems.
-Processing accomplishes the correct task
-Processing results meet the expectations
-Data are maintained
Check Digit - A numeric value that has been calculated mathematically is added to data to ensure that the original data have not been altered or an incorrect, but valid, value substituted. This control is effective in detecting transposition and transcription errors.
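The check digit scheme used on credit card numbers (the Luhn algorithm) is a concrete instance: it catches all single-digit transcription errors and most adjacent transpositions. A sketch:

```java
public class LuhnCheck {
    // Validates a number whose last digit is a Luhn check digit.
    public static boolean isValid(String number) {
        int sum = 0;
        boolean doubleIt = false; // double every second digit, starting from the right
        for (int i = number.length() - 1; i >= 0; i--) {
            int d = number.charAt(i) - '0';
            if (doubleIt) {
                d *= 2;
                if (d > 9) d -= 9; // same as summing the two digits of the product
            }
            sum += d;
            doubleIt = !doubleIt;
        }
        return sum % 10 == 0; // valid when the weighted sum is a multiple of 10
    }

    public static void main(String[] args) {
        System.out.println(isValid("79927398713")); // true: well-known Luhn test value
        System.out.println(isValid("79927398715")); // false: one digit altered
    }
}
```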
Completeness Check - A field should always contain data rather than zeros or blanks. A check of each byte of that field should be performed to determine that some form of data, not blanks or zeros, is present.
Duplicate Check - New transactions are matched to those previously input to ensure that they have not already been entered.
# Buffer overflow - Poorly written code, especially in web-based applications, is often exploited by hackers using this technique.
# brute-force attack is used to crack passwords.
# Sequence Check - The control number follows sequentially; any out-of-sequence or duplicated control numbers are rejected or noted on an exception report for follow-up purposes.
# Limit Check - Data should not exceed a predetermined amount.
# Range Check - Data should be within a predetermined range of values.
# Validity Check - Programmed checking of the data validity in accordance with predetermined criteria. For example, a payroll record contains a field for marital status and the acceptable status codes are M or S. If any other code is entered, the record should be rejected.
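The limit, range and validity checks above can be sketched as simple input edits (the field names and thresholds are illustrative):

```java
public class InputEdits {
    // Limit check: data should not exceed a predetermined amount
    public static boolean limitCheck(int amount, int limit) {
        return amount <= limit;
    }

    // Range check: data should fall within a predetermined range of values
    public static boolean rangeCheck(int value, int min, int max) {
        return value >= min && value <= max;
    }

    // Validity check: value must be one of the predetermined codes,
    // e.g. marital status M or S as in the payroll example above
    public static boolean validityCheck(char maritalStatus) {
        return maritalStatus == 'M' || maritalStatus == 'S';
    }

    public static void main(String[] args) {
        System.out.println(limitCheck(45, 50));      // true: within the limit
        System.out.println(rangeCheck(120, 1, 100)); // false: outside the range
        System.out.println(validityCheck('X'));      // false: record should be rejected
    }
}
```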
# Reasonableness Check - Input data are matched to predetermined reasonable limits or occurrence rates. For example, a manufacturer usually receives orders for no more than 50 items. If an order for more than 50 items is received, the computer program should be designed to print the record with a warning indicating that the order appears unreasonable.
# Table look-ups - Input data comply with predetermined criteria maintained in a computerized table of possible values. For example, the input clerk enters a city code of 1 to 10. This number corresponds with a computerized table that matches the code to a city name.
# Existence check - Data are entered correctly and agree with valid predetermined criteria. For example, a valid transaction code must be entered in the transaction code field.
# Key Verification - The keying process is repeated by a separate individual using a machine that compares the original keystrokes to the repeated keyed input. For example, the worker number is keyed twice and compared to verify the keying process.
# Function Point Analysis - An indirect method of measuring the size of an application by considering the number and complexity of its inputs, outputs and files.
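In standard (IFPUG-style) function point analysis, an unadjusted count is built from weighted inputs, outputs, inquiries and files, then scaled by a value adjustment factor. A simplified sketch with illustrative component counts, the "average" complexity weights, and an assumed degree of influence:

```java
public class FunctionPointSketch {
    // Unadjusted function points: weighted sum of the five component types,
    // using the IFPUG "average" complexity weights (4, 5, 4, 10, 7)
    public static int unadjustedFp(int ei, int eo, int eq, int ilf, int eif) {
        return ei * 4 + eo * 5 + eq * 4 + ilf * 10 + eif * 7;
    }

    // Adjusted FP = UFP * (0.65 + 0.01 * total degree of influence, TDI in 0..70)
    public static double adjustedFp(int ufp, int tdi) {
        return ufp * (0.65 + 0.01 * tdi);
    }

    public static void main(String[] args) {
        // Illustrative counts: 10 inputs, 8 outputs, 6 inquiries, 4 internal files,
        // 2 external interface files
        int ufp = unadjustedFp(10, 8, 6, 4, 2);
        System.out.println(ufp); // 158
        System.out.println(adjustedFp(ufp, 30)); // scaled by VAF of 0.95
    }
}
```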
# Input Control procedure
- must ensure that every transaction to be processed is entered, processed and recorded accurately and completely.
# Logic path monitors
- Report on the sequence of steps executed by a program. This provides the programmer with clues to logic errors, if any, in the program.
# Run to run totals
- Provide the ability to verify data values through the stages of application processing. They ensure that data read into the computer were accepted and then applied to the updating process.
# Automated system balancing
- Would be the best way to ensure that no transactions are lost as any imbalance between total inputs and total outputs would be reported for investigation and correction.