Tuesday, December 6, 2011

Software Design concept

Design concepts

The design concepts provide the software designer with a foundation from which more sophisticated methods can be applied. A set of fundamental design concepts has evolved. They are:

    Abstraction - Abstraction is the process or result of generalization by reducing the information content of a concept or an observable phenomenon, typically in order to retain only information which is relevant for a particular purpose.
    Refinement - It is the process of elaboration. A hierarchy is developed by decomposing a macroscopic statement of function in a stepwise fashion until programming language statements are reached. In each step, one or several instructions of a given program are decomposed into more detailed instructions. Abstraction and Refinement are complementary concepts.
    Modularity - Software architecture is divided into components called modules.
    Software Architecture - It refers to the overall structure of the software and the ways in which that structure provides conceptual integrity for a system. A good software architecture will yield a good return on investment with respect to the desired outcome of the project, e.g. in terms of performance, quality, schedule and cost.
    Control Hierarchy - A program structure that represents the organization of a program component and implies a hierarchy of control.
    Structural Partitioning - The program structure can be divided both horizontally and vertically. Horizontal partitions define separate branches of modular hierarchy for each major program function. Vertical partitioning suggests that control and work should be distributed top down in the program structure.
    Data Structure - It is a representation of the logical relationship among individual elements of data.
    Software Procedure - It focuses on the processing details of each module individually.
    Information Hiding - Modules should be specified and designed so that information contained within a module is inaccessible to other modules that have no need for such information.
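The interplay of modularity and information hiding can be illustrated with a small sketch (the `Stack` class and its `_items` attribute are illustrative examples, not from the text):

```python
# A minimal sketch of information hiding: the module exposes push/pop/peek,
# while the underlying storage (_items) is an internal detail by convention.
class Stack:
    def __init__(self):
        self._items = []  # leading underscore: internal, not part of the interface

    def push(self, value):
        self._items.append(value)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]

    def __len__(self):
        return len(self._items)

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2
print(len(s))    # 1
```

Because other modules only use the public operations, the internal representation can later change (say, to a linked list) without affecting them.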

For more detail, please visit: www.gurukpo.com

Wednesday, November 9, 2011

CPU speed


clock speed

Also called clock rate, the speed at which a microprocessor executes instructions. Every computer contains an internal clock that regulates the rate at which instructions are executed and synchronizes all the various computer components. The CPU requires a fixed number of clock ticks (or clock cycles) to execute each instruction. The faster the clock, the more instructions the CPU can execute per second.
Clock speeds are expressed in megahertz (MHz) or gigahertz (GHz).
The internal architecture of a CPU has as much to do with a CPU's performance as the clock speed, so two CPUs with the same clock speed will not necessarily perform equally. Whereas an Intel 80286 microprocessor requires 20 clock cycles to multiply two numbers, an Intel 80486 or later processor can perform the same calculation in a single clock tick. At the same clock rate, the newer processor would therefore be 20 times faster than the older one for that operation. In addition, some microprocessors are superscalar, which means that they can execute more than one instruction per clock cycle.
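The effect of architecture on effective speed can be shown with a back-of-the-envelope calculation. The cycle counts follow the 80286/80486 multiply comparison above; the shared 66 MHz clock rate is an assumed figure for illustration:

```python
# Effective multiply throughput = clock_hz / cycles_per_multiply.
clock_hz = 66_000_000  # assume both chips run at the same 66 MHz clock

cycles_286 = 20  # 80286: 20 cycles per multiply (per the text)
cycles_486 = 1   # 80486: 1 cycle per multiply

mults_286 = clock_hz / cycles_286  # 3.3 million multiplies/second
mults_486 = clock_hz / cycles_486  # 66 million multiplies/second
print(mults_486 / mults_286)       # 20.0 -- same clock, 20x the throughput
```

The ratio depends only on the cycle counts, which is why clock speed alone is a poor basis for comparing different CPU designs.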
Like CPUs, expansion buses also have clock speeds. Ideally, the CPU clock speed and the bus clock speed should be the same so that neither component slows down the other. In practice, the bus clock speed is often slower than the CPU clock speed, which creates a bottleneck. This is why new local buses, such as AGP, have been developed.

Friday, October 21, 2011

Software Process Engineering


Software systems come and go through a series of passages that account for their inception,
initial development, productive operation, upkeep, and retirement from one generation to
another. This article categorizes and examines a number of methods for describing or modeling
how software systems are developed. It begins with background and definitions of traditional
software life cycle models that dominate most textbook discussions and current software
development practices. This is followed by a more comprehensive review of the alternative
models of software evolution that are of current use as the basis for organizing software
engineering projects and technologies.

Background

Explicit models of software evolution date back to the earliest projects developing large software
systems in the 1950's and 1960's (Hosier 1961, Royce 1970). Overall, the apparent purpose of
these early software life cycle models was to provide a conceptual scheme for rationally
managing the development of software systems. Such a scheme could therefore serve as a basis
for planning, organizing, staffing, coordinating, budgeting, and directing software development
activities.

Since the 1960's, many descriptions of the classic software life cycle have appeared (e.g., Hosier
1961, Royce 1970, Boehm 1976, Distaso 1980, Scacchi 1984, Somerville 1999). Royce (1970)
originated the formulation of the software life cycle using the now familiar "waterfall" chart,
displayed in Figure 1. The chart summarizes in a single display how developing large software
systems is difficult because it involves complex engineering tasks that may require iteration and
rework before completion. These charts are often employed during introductory presentations,
for people (e.g., customers of custom software) who may be unfamiliar with the various
technical problems and strategies that must be addressed when constructing large software
systems (Royce 1970).

These classic software life cycle models usually include some version or subset of the following
activities:

System Initiation/Planning: where do systems come from? In most situations, new

feasible systems replace or supplement existing information processing mechanisms
whether they were previously automated, manual, or informal.

Requirement Analysis and Specification: identifies the problems a new software system is
supposed to solve, its operational capabilities, its desired performance characteristics, and
the resource infrastructure needed to support system operation and maintenance.

Functional Specification or Prototyping: identifies and potentially formalizes the objects
of computation, their attributes and relationships, the operations that transform these
objects, the constraints that restrict system behavior, and so forth.

Partition and Selection (Build vs. Buy vs. Reuse): given requirements and functional
specifications, divide the system into manageable pieces that denote logical subsystems,
then determine whether new, existing, or reusable software systems correspond to the
needed pieces.

Architectural Design and Configuration Specification: defines the interconnection and
resource interfaces between system subsystems, components, and modules in ways
suitable for their detailed design and overall configuration management.

Detailed Component Design Specification: defines the procedural methods through which
the data resources within the modules of a component are transformed from required
inputs into provided outputs.

Component Implementation and Debugging: codifies the preceding specifications into
operational source code implementations and validates their basic operation.

Software Integration and Testing: affirms and sustains the overall integrity of the
software system architectural configuration through verifying the consistency and
completeness of implemented modules, verifying the resource interfaces and
interconnections against their specifications, and validating the performance of the
system and subsystems against their requirements.

Documentation Revision and System Delivery: packaging and rationalizing recorded
system development descriptions into systematic documents and user guides, all in a
form suitable for dissemination and system support.

Deployment and Installation: providing directions for installing the delivered software
into the local computing environment, configuring operating systems parameters and user
access privileges, and running diagnostic test cases to assure the viability of basic system
operation.

Training and Use: providing system users with instructional aids and guidance for
understanding the system's capabilities and limits in order to effectively use the system.

Software Maintenance: sustaining the useful operation of a system in its host/target
environment by providing requested functional enhancements, repairs, and performance
improvements.
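The ordered activities above can be sketched as a simple linear pipeline (the phase names are abridged and the phase bodies are placeholders, not a real process tool):

```python
# A minimal sketch of the classic waterfall ordering: each phase consumes the
# artifact produced by the previous one.
PHASES = [
    "initiation/planning",
    "requirements analysis",
    "functional specification",
    "partition and selection",
    "architectural design",
    "detailed design",
    "implementation and debugging",
    "integration and testing",
    "documentation and delivery",
    "deployment and installation",
    "training and use",
    "maintenance",
]

def run_waterfall(system_concept):
    artifact = system_concept
    for phase in PHASES:
        # In a real project each phase may iterate and rework earlier
        # artifacts; the strict linear flow here is the idealized model.
        artifact = f"{phase} output for ({artifact})"
    return artifact

print(run_waterfall("payroll system").startswith("maintenance output"))  # True
```

As Royce himself noted, the value of the chart is pedagogical: real projects require iteration and rework that this linear sketch deliberately omits.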

Thursday, August 11, 2011

Centralized and Distributed Database

Distributed and Centralized Databases
A distributed database is defined as a collection of logically related databases that are connected with each other through a network. A distributed database management system (DDBMS) is used for managing a distributed database. Each site has its own database and operating system.

A centralized database keeps all its data in one place, whereas a distributed database spreads data across different places. Because all data resides at a single site, a centralized database can become a bottleneck, and data availability is not as good as in a distributed database. The following advantages of distributed databases make the difference between the two clear.

Users can issue commands from any location to access data, and this does not affect the working of the database. A distributed database allows us to store a copy of the data at different locations. The advantage is that when a user wants to access data, the nearest site (location) can provide it, so access takes less time.

There are multiple sites (computers) in a distributed database, so if one site fails the system does not become useless: the other sites can continue working because, as noted above, a copy of the data is installed at every location. This is not the case with a centralized database.

New nodes (computers) can be added to the network at any time without any difficulty.
Users do not need to know about the physical storage of data; this is known as distribution transparency. Ideally, a DBMS should not expose the details of where each file is stored; in other words, a DBMS should be distribution transparent.
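The "nearest available site" behavior described above can be sketched as follows (the site names and distances are hypothetical):

```python
# Sketch of replica selection in a distributed database: every site holds a
# copy of the data, so a read is served by the nearest site that is still up.
sites = {
    "delhi":  {"distance_ms": 5,  "up": True},
    "mumbai": {"distance_ms": 20, "up": True},
    "pune":   {"distance_ms": 35, "up": True},
}

def read_from_nearest(sites):
    live = [(info["distance_ms"], name)
            for name, info in sites.items() if info["up"]]
    if not live:
        raise RuntimeError("no site available")
    return min(live)[1]  # smallest distance wins

print(read_from_nearest(sites))  # delhi (nearest site)
sites["delhi"]["up"] = False     # one site failing does not stop the system
print(read_from_nearest(sites))  # mumbai (next nearest live replica)
```

The second call shows the availability advantage: when the nearest replica fails, reads transparently fall back to the next nearest site.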

Read More : http://www.gurukpo.com

Wednesday, August 10, 2011

Software Process in Software Engineering

Process : A process is a series of steps that involves
  • Activities
  • Constraints
  • Resources
     and that produces the intended result.
  • Processes are composed of subprocesses.
  • Processes have entry and exit criteria.
Software Process :
                            A software process encompasses the set of activities and subprocesses through which software systems are developed and evolved.
Processes are of two types:
1) Process Management Processes
2) Project Development Processes
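The points above, activities guarded by entry and exit criteria and composed into subprocesses, can be sketched as follows (the criteria and state keys are hypothetical examples):

```python
# Sketch of a process as activities with entry and exit criteria.
def can_start(state):   # entry criterion: requirements must be approved
    return state.get("requirements_approved", False)

def is_done(state):     # exit criterion: tests must have passed
    return state.get("tests_passed", False)

def development_process(state):
    if not can_start(state):
        raise RuntimeError("entry criteria not met")
    # subprocesses: design, code, test (placeholders)
    state["designed"] = True
    state["coded"] = True
    state["tests_passed"] = True
    if not is_done(state):
        raise RuntimeError("exit criteria not met")
    return state

result = development_process({"requirements_approved": True})
print(result["tests_passed"])  # True
```

Modeling criteria explicitly makes it easy to check, at any point, whether a process may begin or may be declared complete.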

for more details visit http://www.gurukpo.com/