Monday, July 30, 2007

Hazards pipelining

Hazards are situations that prevent the next instruction in the instruction stream from executing during its designated clock cycle. Hazards reduce the performance from the ideal speedup gained by pipelining.
There are three classes of hazards:
Structural Hazards:
They arise from resource conflicts when the hardware cannot support all possible combinations of instructions in simultaneous overlapped execution.
Data Hazards:
They arise when an instruction depends on the result of a previous instruction in a way that is exposed by the overlapping of instructions in the pipeline.
Control Hazards:
They arise from the pipelining of branches and other instructions that change the PC.

Clarify my doubt, friends...

Can anyone explain Control Hazards to me with a simple example? I could not understand it.

Advantages and Disadvantages of Pipelining

Advantages and Disadvantages

Pipelining does not help in all cases, and there are several disadvantages associated with it. An instruction pipeline is said to be fully pipelined if it can accept a new instruction every clock cycle. A pipeline that is not fully pipelined has wait cycles that delay the progress of the pipeline.

Advantages of pipelining:

The cycle time of the processor is reduced, thus increasing instruction bandwidth in most cases.

Advantages of not pipelining:
The processor executes only a single instruction at a time. This prevents branch delays (in effect, every branch is delayed) and problems with serial instructions being executed concurrently. Consequently the design is simpler and cheaper to manufacture.
The instruction latency in a non-pipelined processor is slightly lower than in a pipelined equivalent. This is because extra flip-flops must be added to the data path of a pipelined processor.
A non-pipelined processor will have a stable instruction bandwidth. The performance of a pipelined processor is much harder to predict and may vary more widely between different programs.

Structural Hazards

A "hazard" is a circumstance, arising because of pipelining, that either makes implementing an instruction difficult or causes errors in execution.

There are three classes of hazards:

1.Structural Hazards:
When a machine is pipelined, the overlapped execution of instructions requires pipelining of functional units and duplication of resources to allow all possible combinations of instructions in the pipeline. If some combination of instructions cannot be accommodated because of a resource conflict, the machine is said to have a structural hazard.

Common instances of structural hazards arise when :

  1. Some functional unit is not fully pipelined
  2. Some resource has not been duplicated enough to allow all combinations of instructions in the pipeline to execute.

Example: A machine may have only one register-file write port, but in some cases the pipeline might want to perform two writes in a clock cycle.
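To make the write-port example concrete, here is a small sketch (purely illustrative; the cycle numbers and the one-cycle port occupancy are assumed, not from the text) of how a single register-file write port forces the second of two simultaneous writes to slip by a cycle:

```python
# Sketch (hypothetical pipeline): two instructions want to write the
# register file in the same cycle, but there is only one write port,
# so the second write must stall one cycle.

def schedule_writebacks(writeback_requests):
    """Assign each requested write-back cycle to a free write-port cycle.

    writeback_requests: list of cycles at which instructions *want* to
    write the register file. Returns the cycles actually granted,
    delaying a request whenever the single port is already taken.
    """
    granted = []
    next_free = 0
    for wanted in writeback_requests:
        actual = max(wanted, next_free)   # stall until the port is free
        granted.append(actual)
        next_free = actual + 1            # port busy for one cycle
    return granted

# Two instructions both wanting to write in cycle 4:
print(schedule_writebacks([4, 4]))  # the second write slips to cycle 5
```

With a second write port (or by scheduling the instructions apart), the structural hazard disappears.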

2.Data Hazards:

Data hazards occur when the pipeline changes the order of read/write accesses to operands so that the order differs from the order seen by sequentially executing instructions on the unpipelined machine.

Example:
ADD R1, R2, R3: suppose the result of the addition is placed in R1 at the end of the fourth clock cycle.

SUB R4, R5, R1: here R1's value is needed in the first clock cycle itself. This is a data hazard, as an instruction depends on the result of a previous instruction.
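The cycle mismatch in the ADD/SUB example can be counted with a small sketch (the classic five-stage IF/ID/EX/MEM/WB pipeline is assumed here, along with the common trick of writing the register file in the first half of a cycle and reading it in the second):

```python
# Sketch (assumed 5-stage pipeline, no forwarding): the producer
# writes its result in WB, the consumer reads its operands in ID.
# Count the stall cycles the consumer needs.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def stalls_needed(producer_issue, consumer_issue):
    """Cycles the consumer must wait so its ID-stage read does not
    come before the producer's WB-stage write (same-cycle
    write-then-read is assumed to work)."""
    write_cycle = producer_issue + STAGES.index("WB")
    read_cycle = consumer_issue + STAGES.index("ID")
    return max(0, write_cycle - read_cycle)

# ADD issued in cycle 0, SUB in cycle 1:
print(stalls_needed(0, 1))  # SUB must stall 2 cycles
```

If the instructions are already far enough apart, the function returns 0 and no stall is needed.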

3.Control Hazards:

They arise from the pipelining of branches and other instructions that change the PC.

hazards in pipeline

Pipeline Hazards

There are situations, called hazards, that prevent the next instruction in the instruction stream from executing during its designated clock cycle. Hazards reduce the performance from the ideal speedup gained by pipelining.

There are three classes of hazards:

Structural Hazards. They arise from resource conflicts when the hardware cannot support all possible combinations of instructions in simultaneous overlapped execution.

When a machine is pipelined, the overlapped execution of instructions requires pipelining of functional units and duplication of resources to allow all possible combinations of instructions in the pipeline. If some combination of instructions cannot be accommodated because of a resource conflict, the machine is said to have a structural hazard.

Common instances of structural hazards arise when

Some functional unit is not fully pipelined. Then a sequence of instructions using that unpipelined unit cannot proceed at the rate of one per clock cycle.
Some resource has not been duplicated enough to allow all combinations of instructions in the pipeline to execute.
Example 1: A machine may have only one register-file write port, but in some cases the pipeline might want to perform two writes in a clock cycle.

Data Hazards. They arise when an instruction depends on the result of a previous instruction in a way that is exposed by the overlapping of instructions in the pipeline.

A major effect of pipelining is to change the relative timing of instructions by overlapping their execution. This introduces data and control hazards. Data hazards occur when the pipeline changes the order of read/write accesses to operands so that the order differs from the order seen by sequentially executing instructions on the unpipelined machine.

Control Hazards.They arise from the pipelining of branches and other instructions that change the PC.

Eliminating Hazards

In computer architecture, a hazard is a potential problem that can happen in a pipelined processor. There are typically three types of hazards: data hazards, branching hazards (control hazards), and structural hazards.
Instructions in a pipelined processor are performed in several stages, so that at any given time several instructions are being executed, and instructions may not be completed in the desired order.
A hazard occurs when two or more of these simultaneous (possibly out of order) instructions conflict.

Data hazards
Main article: Data dependency
Data hazards occur when data is modified. Ignoring potential data hazards can result in race conditions (sometimes known as race hazards). There are three situations a data hazard can occur in:
Read after Write (RAW) or True dependency: An operand is modified and read soon after. Because the first instruction may not have finished writing to the operand, the second instruction may use incorrect data.
Write after Read (WAR) or Anti dependency: Read an operand and write soon after to that same operand. Because the write may have finished before the read, the read instruction may incorrectly get the new written value.
Write after Write (WAW) or Output dependency: Two instructions that write to the same operand are performed. The first one issued may finish second, and therefore leave the operand with an incorrect data value.
The operands involved in data hazards can reside in memory or in a register.
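The three situations above can be spotted mechanically from which registers each instruction reads and writes. A small sketch (instruction encoding as (writes, reads) register sets is my own illustrative convention):

```python
# Sketch: classify the dependency between two instructions, with
# instruction a issued before instruction b.

def classify(a_writes, a_reads, b_writes, b_reads):
    hazards = []
    if a_writes & b_reads:
        hazards.append("RAW")   # true dependency
    if a_reads & b_writes:
        hazards.append("WAR")   # anti dependency
    if a_writes & b_writes:
        hazards.append("WAW")   # output dependency
    return hazards

# ADD R1,R2,R3 followed by SUB R4,R1,R5: SUB reads what ADD writes.
print(classify({"R1"}, {"R2", "R3"}, {"R4"}, {"R1", "R5"}))  # ['RAW']
```

Note that a single pair of instructions can exhibit more than one of these at once (for example, both reading and writing the same register).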

Structural hazards
A structural hazard occurs when a part of the processor's hardware is needed by two or more instructions at the same time. A structural hazard might occur, for instance, if a program were to execute a branch instruction followed by a computation instruction. Because they are executed in parallel, and because branching is typically slow (requiring a comparison, program counter-related computation, and writing to registers), it is quite possible (depending on architecture) that the computation instruction and the branch instruction will both require the ALU (arithmetic logic unit) at the same time.

Branch (control) hazards
Branching hazards (also known as control hazards) occur when the processor is told to branch - i.e., if a certain condition is true, then jump from one part of the instruction stream to another - not necessarily to the next instruction sequentially. In such a case, the processor cannot tell in advance whether it should process the next instruction (when it may instead have to move to a distant instruction).
This can result in the processor doing unwanted actions.
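As a rough illustration of the cost (the numbers are assumed for the sketch, not from the text): if a taken branch is fetched in some cycle but its outcome is only resolved a few stages later, every instruction fetched in between may be down the wrong path and must be squashed.

```python
# Sketch: with one fetch per cycle and a branch resolved at the end
# of an assumed stage, list the cycles whose fetches were wrong-path.

def fetched_wrong_path(branch_fetch_cycle, resolve_stage):
    return list(range(branch_fetch_cycle + 1,
                      branch_fetch_cycle + resolve_stage))

# Branch fetched in cycle 0, resolved at the end of stage 3:
print(fetched_wrong_path(0, 3))  # cycles 1 and 2 fetched the wrong path
```

The later the branch is resolved, the more fetched instructions must be thrown away, which is why control hazards hurt deep pipelines the most.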

Eliminating hazards
There are several established techniques for either preventing hazards from occurring, or working around them if they do.
Bubbling the Pipeline
Bubbling the pipeline (a technique also known as a pipeline break or pipeline stall) is a method for preventing data, structural, and branch hazards from occurring. As instructions are fetched, control logic determines whether a hazard could/will occur. If this is true, then the control logic inserts NOPs into the pipeline. Thus, before the next instruction (which would cause the hazard) is executed, the previous one will have had sufficient time to complete and prevent the hazard. If the number of NOPs is equal to the number of stages in the pipeline, the processor has been cleared of all instructions and can proceed free from hazards. This is called flushing the pipeline. All forms of stalling introduce a delay before the processor can resume execution.
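The control logic described above can be sketched as follows (the two-bubble distance and the (name, writes, reads) instruction format are assumptions of the sketch):

```python
# Sketch: insert NOP bubbles in front of any instruction that reads a
# register written by the instruction immediately before it.

NOP = ("NOP", set(), set())
BUBBLES = 2  # assumed gap needed between a write and a dependent read

def bubble(program):
    """program: list of (name, writes, reads) tuples. Returns the
    instruction stream actually issued, NOPs included."""
    issued = []
    for inst in program:
        if issued:
            prev = issued[-1]
            if prev[1] & inst[2]:       # previous write feeds this read
                issued.extend([NOP] * BUBBLES)
        issued.append(inst)
    return [name for name, _, _ in issued]

prog = [("ADD", {"R1"}, {"R2", "R3"}),
        ("SUB", {"R4"}, {"R1", "R5"})]
print(bubble(prog))  # ['ADD', 'NOP', 'NOP', 'SUB']
```

Independent instructions pass through with no bubbles at all, which is why compilers try to schedule unrelated work between dependent pairs.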

Eliminating data hazards
Forwarding
Forwarding involves feeding output data into a previous stage of the pipeline. For instance, let's say we want to write the value 3 to register 1, (which already contains a 6), and then add 7 to register 1 and store the result in register 2, i.e.:
Instruction 0: Register 1 = 6
Instruction 1: Register 1 = 3
Instruction 2: Register 2 = Register 1 + 7 = 10
Following execution, register 2 should contain the value 10. However, if Instruction 1 (write 3 to register 1) does not completely exit the pipeline before the second instruction starts execution, it means that Register 1 does not contain the value 3 when Instruction 2 performs its addition. In such an event, Instruction 2 adds 7 to the old value of register 1 (6), and so register 2 would contain 13 instead, i.e:
Instruction 0: Register 1 = 6
Instruction 1: Register 1 = 3
Instruction 2: Register 2 = Register 1 + 7 = 13
This error occurs because Instruction 2 reads Register 1 before Instruction 1 has committed/stored the result of its write operation to Register 1. So when Instruction 2 is reading the contents of Register 1, register 1 still contains 6, not 3.
Forwarding (described below) helps correct such errors by depending on the fact that the output of Instruction 1 (which is 3) can be used by subsequent instructions before the value 3 is committed to/stored in Register 1.
Forwarding is implemented by feeding back the output of an instruction into the previous stage(s) of the pipeline as soon as the output of that instruction is available. Forwarding applied to our example means that we do not wait to commit/store the output of Instruction 1 in Register 1 (in this example, the output is 3) before making that output available to the subsequent instruction (in this case, Instruction 2). The effect is that Instruction 2 uses the correct (the more recent) value of Register 1: the commit/store was made immediately and not pipelined.
With forwarding enabled, the ID/EX stage of the pipeline now has two inputs - the value read from the register specified (in this example, the value 6 from Register 1), and the new value of Register 1 (in this example, this value is 3) which is sent from the next stage (EX/MEM). Additional control logic is used to determine which input to use.
See feed-forward.
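The forwarding mux described above can be sketched like this (the EX/MEM latch representation is an assumption of the sketch):

```python
# Sketch: the ID/EX stage picks between the value read from the
# register file and the newer value coming back from the EX/MEM latch.

def operand_value(reg, regfile_value, ex_mem_latch):
    """ex_mem_latch: (dest_reg, result) of the instruction one stage
    ahead, or None. Prefer the forwarded result when it targets reg."""
    if ex_mem_latch is not None and ex_mem_latch[0] == reg:
        return ex_mem_latch[1]       # forwarded, not yet committed
    return regfile_value             # value from the register file

# Register 1 still holds 6 in the file, but Instruction 1's result 3
# is sitting in EX/MEM. Instruction 2 computes Register 1 + 7:
print(operand_value("R1", 6, ("R1", 3)) + 7)  # 10, not 13
```

This is exactly the "additional control logic" mentioned above: a comparator on the destination register plus a multiplexer on the operand.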
Register renaming
Register renaming removes WAR and WAW hazards by mapping the program's architectural registers onto a larger set of physical registers, so that two writes to the "same" register no longer conflict.

RAW WAR WAW

A "hazard" is a circumstance that arises because of pipelining, that either will make implementing an instruction difficult or will cause errors in execution (i.e. the pipelining changes the semantics of instruction execution).

There are three basic types of hazard, with subdivisions possible.

Structural Hazards
You can't use a single piece of hardware for two things at once. A Princeton (von Neumann) architecture runs into a structural hazard on ld or st instructions, since there is only one path to memory and the pipeline is attempting to fetch an instruction on every cycle.

Data Hazards
An instruction may depend on data that is not available yet. There are three types of data hazards:

Read After Write (RAW)
The code calls for a read to occur after a write, but the pipeline causes the read to occur first. This is the most common (and is the only one that can occur in the simple example pipeline).

Write After Read (WAR)
The code calls for a write to occur after a read, but the pipeline causes the write to occur first. This can occur in a longer pipeline in which some instructions read registers in a late stage while others write registers in an early stage.

Write After Write (WAW)
The code calls for two writes to occur; the pipeline reorders them. This can occur in a pipeline in which register writes can occur in more than one pipeline stage.

Control Hazards
A decision needs to be made as to whether to take a branch, before the information to make the decision is available.

There are also three ways to fix a hazard.

Document
Simply document the instruction set as supporting the semantics as defined by the pipeline (in other words, punt). This is the origin of delayed branch and delayed load instructions. It results in non-intuitive instruction semantics, and is tied to a particular pipeline implementation. If the implementation changes then the hazards change, and you find yourself making a screwy semantics work. You can't just document the new semantics like you did the old, since you have legacy code to support.

Stall
Delay the execution of the second instruction until the hazard is resolved. This preserves reasonable semantics, but slows things down.
An improvement on simply stalling, which can be used to resolve control hazards, is branch prediction: take a guess as to whether the branch is likely to be taken or not, and continue executing with whatever guess you've taken. If you guessed wrong, you cancel any instructions that you started to execute, with a penalty no greater than what you would have had if you'd just stalled.
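One common guessing scheme (not described in the text above, but widely used) is a two-bit saturating counter per branch: two mispredictions in a row are needed to flip the prediction, so an occasionally-not-taken loop branch stays predicted taken. A small sketch:

```python
# Sketch: a 2-bit saturating counter predictor. States 0-1 predict
# not-taken, states 2-3 predict taken; each outcome nudges the state.

class TwoBitPredictor:
    def __init__(self):
        self.state = 2                 # start weakly predicting taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True, True, False, True]   # a loop branch, mostly taken
correct = 0
for taken in outcomes:
    if p.predict() == taken:
        correct += 1
    p.update(taken)
print(correct)  # 3 of 4 guessed right on this pattern
```

On a wrong guess the pipeline squashes the speculatively fetched instructions, so the penalty is bounded by the plain stall cost, as noted above.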

Forward
The data you need may actually exist somewhere in the system, just not where you want it to be. This can be resolved by adding extra data paths, which "forward" the data to where it is needed. This is the ideal solution, since it neither messes up the semantics nor slows down the system. But, if the data is simply not there yet, then it can't be done.




Thursday, July 26, 2007

Pipelining

In computers, a pipeline is the continuous and somewhat overlapped movement of instructions to the processor, or of the arithmetic steps taken by the processor to perform an instruction. Pipelining is the use of a pipeline. Without a pipeline, a computer processor gets the first instruction from memory, performs the operation it calls for, and then goes to get the next instruction from memory, and so forth. While fetching (getting) the instruction, the arithmetic part of the processor is idle. It must wait until it gets the next instruction. With pipelining, the computer architecture allows the next instructions to be fetched while the processor is performing arithmetic operations, holding them in a buffer close to the processor until each instruction operation can be performed. The staging of instruction fetching is continuous. The result is an increase in the number of instructions that can be performed during a given time period.

Pipelining is sometimes compared to a manufacturing assembly line in which different parts of a product are being assembled at the same time although ultimately there may be some parts that have to be assembled before others are. Even if there is some sequential dependency, the overall process can take advantage of those operations that can proceed concurrently.

Computer processor pipelining is sometimes divided into an instruction pipeline and an arithmetic pipeline. The instruction pipeline represents the stages in which an instruction is moved through the processor, including its being fetched, perhaps buffered, and then executed. The arithmetic pipeline represents the parts of an arithmetic operation that can be broken down and overlapped as they are performed. Pipelines and pipelining also apply to computer memory controllers and moving data through various memory staging places.
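The throughput gain described above can be put in numbers (a sketch under ideal assumptions: a k-stage pipeline, one stage per cycle, and no stalls):

```python
# Sketch: pipelining does not shorten one instruction's latency, it
# raises throughput. Cycles for n instructions, pipelined vs. not:

def unpipelined_cycles(n, k):
    return n * k                 # each instruction takes all k stages alone

def pipelined_cycles(n, k):
    return k + (n - 1)           # fill the pipe once, then one per cycle

n, k = 100, 5
print(unpipelined_cycles(n, k), pipelined_cycles(n, k))  # 500 104
```

For large n the speedup approaches k, the number of stages, which is the "ideal speedup" that hazards eat into.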

DIFFERENCE BETWEEN RISC AND CISC?

CISC (Complex Instruction Set Computers), RISC (Reduced Instruction Set Computers)
Initially there were only CISC processors, and the complexity of the architecture grew from programmers wanting to do more with the assembly language. These assembly language commands were not portable from one chip family to another, so after a while of developing complex instructions the developers of these instruction sets were caught in their own complexity: it would be too costly to develop an entirely new instruction hierarchy, even after learning that the RISC architecture could be faster. RISC also seemed like a step back from the CISC model, but it was proved to be faster. So instead they tried to adopt some of the concepts of RISC into the CISC format.

CISC:
1) CISC has very powerful, complex instructions that take multiple cycles to execute. In the CISC structure, 8-10 cycles is average to execute an instruction.
2) Any instruction can reference memory.
3) CISC is poorly pipelined, if at all.
4) CISC instructions can vary in size; the format is flexible.
This creates a problem for a pipeline, because it is difficult at best to fill the pipeline in a useful way. It generally doesn't work.
5) CISC instructions are interpreted by microcode. This is how CISC deals with the complex instructions: a microprogram takes the original instruction and breaks it down into instructions the hardware can deal with.
6) CISC has powerful ways of accessing memory.

RISC:
1) Simple instructions that can be completed every cycle. Execution overlaps so that one instruction completes each cycle; this is the basis of pipelining.
2) Only Load and Store instructions can reference memory.
3) RISC is highly pipelined.
4) RISC instructions are a fixed size. In the MIPS instruction set they are 32 bits.
5) RISC instructions are executed directly by the hardware.
6) RISC has limited ways of accessing memory.
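Point 4 is what makes RISC decoding simple: with a fixed 32-bit word, decoding is just fixed-position bit-field extraction. A sketch using the standard MIPS R-type field layout:

```python
# Sketch: decode a MIPS R-type instruction word by slicing fixed
# bit fields (opcode 6, rs 5, rt 5, rd 5, shamt 5, funct 6 bits).

def decode_rtype(word):
    return {
        "opcode": (word >> 26) & 0x3F,
        "rs":     (word >> 21) & 0x1F,
        "rt":     (word >> 16) & 0x1F,
        "rd":     (word >> 11) & 0x1F,
        "shamt":  (word >> 6)  & 0x1F,
        "funct":  word         & 0x3F,
    }

# add $8, $9, $10  ->  opcode 0, rs 9, rt 10, rd 8, funct 0x20
word = (0 << 26) | (9 << 21) | (10 << 16) | (8 << 11) | (0 << 6) | 0x20
print(decode_rtype(word)["rd"])  # 8
```

A variable-length CISC instruction, by contrast, cannot be decoded until its length is known, which is part of why CISC pipelines poorly (point 4 of the CISC list).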

Friday, July 13, 2007

THE REASON FOR MICROPROGRAMMING

Microcode was originally developed as a simpler method of developing the control logic for a computer. Initially CPU instruction sets were "hard wired". Each machine instruction (add, shift, move) was implemented directly with circuitry. This provided fast performance, but as instruction sets grew more complex, hard-wired instruction sets became more difficult to design and debug.
Microcode alleviated that problem by allowing CPU design engineers to write a microprogram to implement a machine instruction rather than design circuitry for it. Even late in the design process, microcode could easily be changed, whereas hard wired instructions could not. This greatly facilitated CPU design and led to more complex instruction sets.
Another advantage of microcode was the implementation of more complex machine instructions. In the 1960s through the late 1970s, much programming was done in assembly language, a symbolic equivalent of machine instructions. The more abstract and higher level the machine instruction, the greater the programmer productivity. The ultimate extension of this were "Directly Executable High Level Language" designs. In these each statement of a high level language such as PL/I would be entirely and directly executed by microcode, without compilation. The IBM Future Systems project and Data General Fountainhead Processor were examples of this.
Microprogramming also helped alleviate the memory bandwidth problem. During the 1970s, CPU speeds grew more quickly than memory speeds. Numerous acceleration techniques such as memory block transfer, memory pre-fetch and multi-level caches helped reduce this. However high level machine instructions (made possible by microcode) helped further. Fewer more complex machine instructions require less memory bandwidth. For example complete operations on character strings could be done as a single machine instruction, thus avoiding multiple instruction fetches.

Hardwired control

The main difference between a computer with hardwired control unit and one with microprogrammed control unit consists in the way in which the control unit passes from a state to another in order to generate the control signals:
• In a hardwired unit, a state corresponds to a phase, characterized by the activation of a phase signal. In a phase, certain control signals are generated that are required to execute an instruction.
• In a microprogrammed unit, a state corresponds to a microinstruction, which encodes the micro-operations that should be executed during the same clock cycle. To execute an instruction of the computer, a sequence of microinstructions must be executed.
A microprogrammed control unit has two main functions:
• The control function, which defines the micro-operations that should be executed. This definition usually comprises the selection of the operands, selection of the operation to be executed, selection of the destination for a result, etc.
• The sequencing function, which defines the address of the next microinstruction to be executed. This definition refers to identifying the source for the next address, controlling the test conditions, or generating the address value directly.
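The two functions above can be sketched as a tiny control store, where each microinstruction carries its control signals plus a next-address field (the micro-routine and signal names below are made up for illustration):

```python
# Sketch: a microinstruction as (control_signals, next_address).
# A hypothetical three-step fetch micro-routine loops on itself.

control_store = {
    0: ({"MAR<-PC"},          1),   # control function + sequencing function
    1: ({"MDR<-Mem", "PC+1"}, 2),
    2: ({"IR<-MDR"},          0),   # back to fetch the next instruction
}

def run(start, steps):
    addr, trace = start, []
    for _ in range(steps):
        signals, nxt = control_store[addr]
        trace.append(sorted(signals))   # signals asserted this cycle
        addr = nxt                      # sequencing: pick next microinstruction
    return trace

print(run(0, 3))
```

In a hardwired unit this same behaviour would be fixed combinational logic; here it is data in a table, which is why microcode is easy to change late in the design.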

Tuesday, July 10, 2007

Visit this site for "MicroProgrammed versus hardwired control unit"

http://www.cs.binghamton.edu/~reckert/hardwire3new.html
Microprocessor is the central processing unit in a computer. To make it work it will need lots of other ICs around it (like ROM, RAM and timers).
Microcontroller is like a computer inside a single chip. A typical microcontroller has all necessary parts (CPU, ROM, RAM, timers) integrated inside one IC.
Basically, a microcontroller is a device which integrates a number of the components of a microprocessor system onto a single microchip.
So a microcontroller combines onto the same microchip :
· The CPU core
· Memory (both ROM and RAM)
· Some parallel digital I/O

The microprocessor is the integration of a number of useful functions into a single IC package. These functions are: the ability to execute a stored set of instructions to carry out user-defined tasks, and the ability to access external memory chips to both read and write data from and to the memory.

Impedance is an electrical term that refers to how much a device impedes the flow of current, and is measured in ohms. While there is no set standard, low impedance usually refers to a range of between 120 and 800 ohms, and high impedance refers to anything above 800 ohms.
Impedance is how much a device resists the flow of an AC signal, such as audio. Impedance is similar to resistance, which is how much a device resists the flow of a DC signal. Both impedance and resistance are measured in ohms.

Saturday, July 7, 2007

GRAY CODE APPLICATION

History and practical application:

Reflected binary codes were applied to mathematical puzzles before they became known to engineers. The French engineer Émile Baudot used Gray codes in telegraphy in 1878. He received the French Legion of Honor medal for his work. The Gray code is sometimes attributed, incorrectly,[5] to Elisha Gray (in Principles of Pulse Code Modulation, K. W. Cattermole,[6] for example).

Frank Gray, who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube-based apparatus. The method and apparatus were patented in 1953 and the name of Gray stuck to the codes. The "PCM tube" apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code.[7]
The use of his eponymous codes that Gray was most interested in was to minimize the effect of error in the conversion of analog signals to digital; his codes are still used today for this purpose, and others.

Part of front page of Gray's patent, showing PCM tube (10) with reflected binary code in plate (15)

Rotary encoder for angle-measuring devices marked in 3-bit binary-reflected Gray code (BRGC)

Gray codes are used in angle-measuring devices in preference to straightforward binary encoding. This avoids the possibility that, when several bits change in the binary representation of an angle, a misread could result from some of the bits changing before others. This application benefits from the cyclic nature of Gray codes, because the first and last values of the sequence differ by only one bit.

The binary-reflected Gray code can also be used to serve as a solution guide for the Tower of Hanoi problem. It also forms a Hamiltonian cycle on a hypercube, where each bit is seen as one dimension.
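The binary-reflected Gray code has a very short conversion in both directions, which makes the single-bit-change property easy to check (a standard construction, sketched here in Python):

```python
# Sketch: binary-reflected Gray code conversion. Successive values
# (including the wrap-around from last to first) differ in one bit.

def to_gray(n):
    return n ^ (n >> 1)

def from_gray(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [to_gray(i) for i in range(8)]
print([format(c, "03b") for c in codes])
# ['000', '001', '011', '010', '110', '111', '101', '100']

# single-bit difference between cyclic neighbours:
print(all(bin(codes[i] ^ codes[(i + 1) % 8]).count("1") == 1
          for i in range(8)))  # True
```

The cyclic check is exactly the property the angle-encoder application relies on: no intermediate misread is possible when only one bit ever changes.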

Due to the Hamming distance properties of Gray codes, they are sometimes used in Genetic Algorithms. They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.

Gray codes are also used in labelling the axes of Karnaugh maps.
When Gray codes are used in computers to address program memory, the computer uses less power because fewer address lines change as the program counter advances.
In modern digital communications, Gray codes play an important role in error correction. For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise.

Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies; such logic is said to operate in different "clock domains". The technique is fundamental to the design of large chips that operate with many different clock frequencies.

Arithmetic Logic Unit

Arithmetic logic unit

A typical schematic symbol for an ALU: A & B are operands; R is the output; F is the input from the Control Unit; D is an output status
The arithmetic logic unit (ALU) is a digital circuit that performs arithmetic operations (addition, subtraction, etc.) and logic operations (Exclusive Or, AND, etc.) between two numbers. The ALU is a fundamental building block of the central processing unit of a computer.
Many types of electronic circuits need to perform some type of arithmetic operation, so even the circuit inside a digital watch has a tiny ALU that keeps adding 1 to the current time, keeps checking whether it should beep the timer, and so on.
Among the most complex electronic circuits are those built inside the chips of modern microprocessors such as the Pentium, so these processors contain a powerful and very complex ALU. In fact, a modern microprocessor (or mainframe) may have multiple cores, each core with multiple execution units, and each unit with multiple ALUs.
Many other circuits contain ALUs: GPUs like the ones on NVidia and ATI graphics cards, FPUs like the old 80387 co-processor, and digital signal processors like the ones found in Sound Blaster sound cards, CD players and high-definition TVs. All of these contain several powerful and complex ALUs.
Numerical Systems
An ALU must process numbers using the same format as the rest of the digital circuit. For modern processors, that almost always is the two's complement binary number representation. Early computers used a wide variety of number systems, including one's complement, sign-magnitude format, and even true decimal systems, with ten tubes per digit.
ALUs for each of these numeric systems had different designs, and that influenced the current preference for two's complement, as this is the representation that makes additions and subtractions easiest for an ALU to calculate.

Practical overview
Most of the computer’s actions are performed by the ALU. The ALU gets data from processor registers. This data is processed and the results of this operation are stored into ALU output registers. Other mechanisms move data between these registers and memory.
A Control Unit controls the ALU, by setting circuits that tell the ALU what operations to perform.
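As a sketch of the idea, here is a hypothetical 8-bit ALU in Python, with the operation selector standing in for the F input from the Control Unit. The operation names and flag set are illustrative, not any real processor's:

```python
MASK = 0xFF  # 8-bit datapath

def alu(a, b, op):
    """Tiny 8-bit ALU sketch: op selects the function (like the F input
    from the control unit); returns (result, zero_flag, carry_flag)."""
    if op == "ADD":
        full = a + b
    elif op == "SUB":
        full = a + ((~b + 1) & MASK)   # two's-complement subtraction
    elif op == "AND":
        full = a & b
    elif op == "OR":
        full = a | b
    elif op == "XOR":
        full = a ^ b
    else:
        raise ValueError(op)
    result = full & MASK               # truncate to the datapath width
    return result, result == 0, full > MASK
```

Note how subtraction reuses the adder by negating the second operand in two's complement, which is exactly why that representation is preferred.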

hamming codes for detecting errors in more than one bit

Hamming code


In telecommunication, a Hamming code is a linear error-correcting code named after its inventor, Richard Hamming. Hamming codes can detect and correct single-bit errors, and can detect (but not correct) double-bit errors. In other words, the Hamming distance between the transmitted and received code-words must be zero or one for reliable communication.
In contrast, the simple parity code cannot detect errors where two bits are transposed (or, more generally, where an even number of bits are flipped), nor can it correct the errors it does detect.
In mathematical terms, Hamming codes are a class of binary linear codes. For each integer m > 1 there is a code with parameters [2^m − 1, 2^m − m − 1, 3]. The parity-check matrix of a Hamming code is constructed by listing all nonzero columns of length m; these columns are pairwise linearly independent.



If more error-correcting bits are included with a message, and if those bits can be arranged such that different incorrect bits produce different error results, then bad bits could be identified. In a 7-bit message, there are seven possible single bit errors, so three error control bits could potentially specify not only that an error occurred but also which bit caused the error.
Hamming studied the existing coding schemes, including two-of-five, and generalized their concepts. To start with, he developed a nomenclature to describe the system, including the number of data bits and error-correction bits in a block. For instance, parity includes a single bit for any data word, so assuming ASCII words of 7 bits, Hamming described this as an (8,7) code, with eight bits in total, of which 7 are data. The repetition example would be (3,1), following the same logic. The information rate is the second number divided by the first; for our repetition example, 1/3.


Hamming also noticed the problems with flipping two or more bits, and described this as the "distance" (it is now called the Hamming distance, after him). Parity has a distance of 2, as any two bit flips will be invisible. The (3,1) repetition has a distance of 3, as three bits need to be flipped in the same triple to obtain another code word with no visible errors. A (4,1) repetition (each bit is repeated four times) has a distance of 4, so flipping two bits can be detected, but not corrected. When three bits flip in the same group there can be situations where the code corrects towards the wrong code word.


Hamming was interested in two problems at once; increasing the distance as much as possible, while at the same time increasing the information rate as much as possible. During the 1940s he developed several encoding schemes that were dramatic improvements on existing codes. The key to all of his systems was to have the parity bits overlap, such that they managed to check each other as well as the data.

General algorithm

Although any number of algorithms can be created, the following general algorithm positions the parity bits at powers of two to ease calculation of which bit was flipped upon detection of incorrect parity.
All bit positions that are powers of two are used as parity bits. (positions 1, 2, 4, 8, 16, 32, 64, etc.), see A000079 at the On-Line Encyclopedia of Integer Sequences.
All other bit positions are for the data to be encoded. (positions 3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 17, etc.), see A057716 at the On-Line Encyclopedia of Integer Sequences.
Each parity bit calculates the parity for some of the bits in the code word. The position of the parity bit determines the sequence of bits that it alternately checks and skips.
Position 1 (n=1): skip 0 bit (0=n-1), check 1 bit (n), skip 1 bit (n), check 1 bit (n), skip 1 bit (n), etc.
Position 2 (n=2): skip 1 bit (1=n-1), check 2 bits (n), skip 2 bits (n), check 2 bits (n), skip 2 bits (n), etc.
Position 4 (n=4): skip 3 bits (3=n-1), check 4 bits (n), skip 4 bits (n), check 4 bits (n), skip 4 bits (n), etc.
Position 8 (n=8): skip 7 bits (7=n-1), check 8 bits (n), skip 8 bits (n), check 8 bits (n), skip 8 bits (n), etc.
Position 16 (n=16): skip 15 bits (15=n-1), check 16 bits (n), skip 16 bits (n), check 16 bits (n), skip 16 bits (n), etc.
Position 32 (n=32): skip 31 bits (31=n-1), check 32 bits (n), skip 32 bits (n), check 32 bits (n), skip 32 bits (n), etc.
General rule for position n: skip n-1 bits, check n bits, skip n bits, check n bits...
And so on.
In other words, the parity bit at position 2^k checks bits in those positions whose binary representation has bit k set. Conversely, bit 13, i.e. 1101 in binary, is checked by the parity bits at positions 1000 (8), 0100 (4) and 0001 (1).
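The parity-bit placement above can be sketched for the classic (7,4) code, where the parity bits occupy positions 1, 2 and 4 and the data bits positions 3, 5, 6 and 7 (function names are my own):

```python
def hamming_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming code word.
    Parity bits sit at positions 1, 2, 4 (1-indexed); data at 3, 5, 6, 7."""
    d3, d5, d6, d7 = d
    p1 = d3 ^ d5 ^ d7          # checks positions with bit 0 set: 1, 3, 5, 7
    p2 = d3 ^ d6 ^ d7          # checks positions with bit 1 set: 2, 3, 6, 7
    p4 = d5 ^ d6 ^ d7          # checks positions with bit 2 set: 4, 5, 6, 7
    return [p1, p2, d3, p4, d5, d6, d7]

def hamming_decode(w):
    """Return (data_bits, flipped_position); position 0 means no error."""
    w = list(w)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s4 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # binary position of the flipped bit
    if syndrome:
        w[syndrome - 1] ^= 1          # correct the single-bit error
    return [w[2], w[4], w[5], w[6]], syndrome
```

Flipping any one bit of a code word makes the syndrome spell out that bit's position, exactly as the positional scheme above promises.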

BOOTING

The process of starting a computer and loading the operating system is referred to as “the bootstrap process”, or simply “booting”.
In computing, booting (booting up) is a bootstrapping process that starts operating systems when the user turns on a computer system. A boot sequence is the set of operations the computer performs when it is switched on that loads an operating system.

In computing, bootstrapping refers to a process where a simple system activates another more complicated system that serves the same purpose. The term is most often applied to the process of starting up a computer, in which a mechanism is needed to execute the software program that is responsible for executing software programs (the operating system).

When a computer is first powered on, it doesn't have an operating system in memory. The computer's hardware alone cannot perform complex actions such as loading a program from disk, so an apparent paradox exists: to load the operating system into memory, one appears to need to have an operating system already loaded.

The solution is to use a special small program, called a bootstrap loader, bootstrap or boot loader. This program's only job is to load other software for the operating system to start.

BOOTING DEVICE

A boot device is any device that must be initialized prior to loading the operating system. This includes the primary input device (keyboard), the primary output device (display), and the initial program load device (floppy drive, hard drive, CD-ROM, USB flash drive, etc.).
In a modern BIOS, the user can select one of several interfaces from which to boot. These include: hard disk, floppy, SCSI, CDROM, Zip, LS-120, a network interface card using PXE, or USB (USB-FDD, USB-ZIP, USB-CDROM, USB-HDD).
For example, one can install Microsoft Windows on the first hard disk and Linux on the second. By changing the BIOS boot device, the user can select the operating system to load.

Friday, July 6, 2007

EXPANSION SLOTS

An opening in a computer where a circuit board can be inserted to add new capabilities to the computer. Nearly all personal computers except portables contain expansion slots for adding more memory, graphics capabilities, and support for special devices. The boards inserted into the expansion slots are called expansion boards, expansion cards, cards, add-ins, and add-ons.
Expansion slots for PCs come in two basic sizes: half- and full-size. Half-size slots are also called 8-bit slots because they can transfer 8 bits at a time. Full-size slots are sometimes called 16-bit slots. In addition, modern PCs include PCI slots for expansion boards that connect directly to the PCI bus
ADD-INS
(1) A component you can add to a computer or other device to increase its capabilities. Add-ins can increase memory or add graphics or communications capabilities to a computer. They can come in the form of expansion boards, cartridges, or chips. The term add-in is often used instead of add-on for chips you add to a board that is already installed in a computer. In contrast, add-on almost always refers to an entire circuit board.
(2) A software program that extends the capabilities of larger programs. For example, there are many Excel add-ins designed to complement the basic functionality offered by Excel. In the Windows environment, add-ins are becoming increasingly common thanks to OLE 2.0.
ADD-ONS
Refers to a product designed to complement another product. For example, there are numerous add-on boards available that you can plug into a personal computer to give it additional capabilities. Another term for add-on board is expansion board.
Add-on products are also available for software applications. For example, there are add-on report-generation programs that attach to popular database products such as dBASE, giving them additional report-generation and graphics capabilities.
The terms add-on and add-in are often, but not always, used synonymously. The term add-in can refer to individual chips you can insert into boards that are already installed in your computer. Add-on, on the other hand, almost always refers to an entire circuit board, cartridge, or program.

normalization advantage

The advantages of normalizing floating-point numbers are:
1) The representation is unique: there is exactly one way to write a real number in such a form.
2) It is easy to compare two normalized numbers: you separately test the sign, exponent and mantissa.
3) In normalized form, a fixed-size mantissa uses all its 'digit cells' to store significant digits.
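These properties can be illustrated with Python's `math.frexp`, which returns a float's normalized mantissa and exponent. The comparison helper is a sketch of property 2 that is valid for positive numbers only:

```python
import math

# math.frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= |m| < 1
# for nonzero x: the unique normalized form (property 1).
m, e = math.frexp(12.0)   # 12.0 == 0.75 * 2**4

# Property 2, for positive numbers only: compare the exponent first,
# then the mantissa, like an ordinary tuple comparison.
def less_than(x, y):
    mx, ex = math.frexp(x)
    my, ey = math.frexp(y)
    return (ex, mx) < (ey, my)
```

Because the mantissa is always in [0.5, 1), a larger exponent always means a larger positive number, which is exactly what makes the digit-by-digit comparison of property 2 work.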

Thursday, July 5, 2007

General comments

Dear girls

Some of you seem to repeat the contents of posts that have already been posted by somebody else.
So, before posting, just read all the posts and comments, and then proceed.
You can also post questions or doubts about concepts you don't understand.
Please read others' posts and give your comments, if any.

rajitha

Excellent work!!

Dear girls,

I am extremely happy to see the initiative you all have taken to post on the blog.
Very good girls!! Keep it up!
I hope this enthusiasm stays all through this semester.

rajitha

Energy - Efficient System Architecture (EESA)

EESA aims to Improve System Energy Efficiencies


Reducing Power Consumption:

Minimizing power consumption is an objective most computing users have. For laptop users, it affects battery life. For desktop users, it affects cooling needs and therefore can impact distracting fan noise in an office. For server users, the cost of power and cooling are a growing part of data center operating expenses. To help address these challenges, researchers in the Systems Technology Lab, part of the Intel Corporate Technology Group, are focused on establishing a new computing architecture based on power management. Intel’s goal is to be the leader in computing performance per watt. Instead of looking at small architectural changes to achieve incremental power savings, Intel researchers envision an all-new Energy-Efficient System Architecture (EESA) designed to increase performance per watt by managing voltage and frequency.

EESA uses sensors to identify when a system function is idle and then sends that component to its lowest state of power consumption. EESA is expected to result in world-class energy efficiency that will extend Intel innovation and leadership in computing performance per watt. Intel is inviting developers from the PC industry to help define specifications and interfaces in the areas of I/O device optimization, system power conversion, and sensor architecture.

Five Key Technology Areas Comprise EESA:
The developers of EESA envision five technologies working together to optimize energy efficiency. Those functions are:
  • Fine-Grain Power Management
  • I/O Optimization
  • System Power Conversion
  • Client Sensor Architecture
  • Power Policy Management

Fine-Grain Power Management (FGPM):
The Fine-Grain Power Management (FGPM) function is central to EESA and provides more precise control over power levels inside the system. Today, the system sleep function is initiated only when all components are idle; FGPM instead allows an individual idle component to be sent to its lowest power state.


I/O Optimization:
I/O Optimization technology can lower power consumption in three user-facing areas: self-refreshing displays, power-managed I/O, and self-refreshing audio. I/O Optimization technology looks at the interfaces to these devices to find ways to reduce their dependency on processing cycles from the system. If system functions can be put into idle state more often, power consumption will be lowered.


System Power Conversion:
The System Power Conversion function streamlines how power is managed from its source to the circuits that use it. Repeated power conversions in today’s computers result in delivery efficiencies as low as 50 percent—meaning that half the power is consumed in conversion processes themselves.


Client Sensor Architecture:
Client Sensor Architecture standardizes how sensors communicate back to FGPM. Today there is no mechanism for many sensors to report their information back to a central management function. Client Sensor Architecture is designed to organize sensors by identifying their capabilities, monitoring their functions, and defining an orderly process to get information to FGPM in a systematic way.


Power Policy Management:
Power Policy Management technology incorporates all of the above information and controls, as shown in Figure 2, to maximize overall power efficiency for the platform. More information about the component parts of EESA will be released over the next several IDF cycles.

What is Computer Architecture?

Computer Architecture is the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.
A little more descriptive:
"Computer Architecture" refers to the fixed internal structure of the CPU (ie. electronic switches to represent logic gates) to perform logical operations, and may also include the built-in interface (i.e. opcodes) by which hardware resources (ie. CPU, memory, and also motherboard, peripherals) may be used by the software.

BUS

1) A collection of wires through which data is transmitted from one part of a computer to another. You can think of a bus as a highway on which data travels within a computer. When used in reference to personal computers, the term bus usually refers to internal bus. This is a bus that connects all the internal computer components to the CPU and main memory. There's also an expansion bus that enables expansion boards to access the CPU and memory.
All buses consist of two parts -- an address bus and a data bus. The data bus transfers actual data whereas the address bus transfers information about where the data should go.
The size of a bus, known as its width, is important because it determines how much data can be transmitted at one time. For example, a 16-bit bus can transmit 16 bits of data, whereas a 32-bit bus can transmit 32 bits of data.
Every bus has a clock speed measured in MHz. A fast bus allows data to be transferred faster, which makes applications run faster. On PCs, the old ISA bus is being replaced by faster buses such as PCI.
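The width and clock speed together set a bus's peak transfer rate; a back-of-the-envelope sketch (one transfer per clock assumed, which ignores real-world arbitration and protocol overhead):

```python
def peak_bandwidth_bytes_per_sec(width_bits, clock_hz):
    # One transfer per clock cycle assumed; real buses lose some
    # cycles to arbitration and protocol overhead.
    return (width_bits // 8) * clock_hz

# A 32-bit PCI bus clocked at 33 MHz peaks at 132 MB/s by this estimate.
```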
Nearly all PCs made today include a local bus for data that requires especially fast transfer speeds, such as video data. The local bus is a high-speed pathway that connects directly to the processor.
Several different types of buses are used on Apple Macintosh computers. Older Macs use a bus called NuBus, but newer ones use PCI.
(2) In networking, a bus is a central cable that connects all devices on a local-area network (LAN). It is also called the backbone.


Wednesday, July 4, 2007

what is computer architecture???

In computer engineering, computer architecture is the conceptual design and fundamental operational structure of a computer system. It is a blueprint and functional description of requirements (especially speeds and interconnections) and design implementations for the various parts of a computer — focusing largely on the way by which the central processing unit (CPU) performs internally and accesses addresses in memory.
It may also be defined as the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.
Computer architecture comprises at least three main subcategories [1]
Instruction set architecture, or ISA, is the abstract image of a computing system that is seen by a machine language (or assembly language) programmer, including the instruction set, memory address modes, processor registers, and address and data formats.
Microarchitecture, also known as computer organization, is a lower-level, more concrete description of the system that involves how the constituent parts of the system are interconnected and how they interoperate in order to implement the ISA [2]. The size of a computer's cache, for instance, is an organizational issue that generally has nothing to do with the ISA.
System Design which includes all of the other hardware components within a computing system such as:
system interconnects such as computer buses and switches
memory controllers and hierarchies
CPU off-load mechanisms such as direct memory access
issues like multi-processing.
Once both the ISA and microarchitecture have been specified, the actual device needs to be designed in hardware. This design process is often called implementation. Implementation is usually not considered architectural definition, but rather hardware design engineering.

Who is Von Neumann?

Von Neumann was the first person to suggest the concept of the stored program. This concept states that there is no difference between computer instructions and data. More importantly, he suggested that it was not necessary to have separate storage locations for a computer's program and its data -- hence today, computer instructions are stored in primary memory for execution, along with other data.
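The stored-program idea can be sketched with a toy machine whose instructions and data share one memory list. The instruction format here is entirely hypothetical; the point is only that the program lives in the same memory as its operands:

```python
# Minimal stored-program machine sketch: instructions and data occupy
# the same memory, exactly as the stored-program concept requires.
memory = [
    ("LOAD", 4),    # 0: acc = memory[4]
    ("ADD", 5),     # 1: acc += memory[5]
    ("STORE", 6),   # 2: memory[6] = acc
    ("HALT", 0),    # 3: stop
    7,              # 4: data
    35,             # 5: data
    0,              # 6: result goes here
]

pc, acc = 0, 0
while True:
    op, addr = memory[pc]   # fetch: an instruction is just memory content
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break
```

After the run, memory location 6 holds the sum of locations 4 and 5. Because a STORE could just as easily target locations 0 through 3, the same mechanism lets a program overwrite its own instructions, which is the flip side discussed under the drawbacks below.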


Drawbacks of Von Neumann's design:

There are drawbacks to the von Neumann design. Aside from the von Neumann bottleneck described below, program modifications can be quite harmful, either by accident or design. In some simple stored-program computer designs, a malfunctioning program can damage itself, other programs, or the operating system, possibly leading to a crash. A buffer overflow is one very common example of such a malfunction. The ability for programs to create and modify other programs is also frequently exploited by malware. Malware might use a buffer overflow to smash the call stack and overwrite the existing program, and then proceed to modify other program files on the system to propagate the compromise. Memory protection and other forms of access control can help protect against both accidental and malicious program modification.

MOTHERBOARD

The "motherboard" is a central element of the personal computer, the main circuitboard to which one connects memory, peripherals and other devices, which extend the capabilities of the computer. Motherboard (the art group) may be described as a collective of artists and 'techsperts' gathered around the core members Per Platou and Amanda Steggell, for various projects. The majority of their work has taken the form of installations and performative live art happenings, mediated and modulated by the intermediary influence of the net, and often integrate audience participation and interaction. Through their ambitious vehicles they explore the materiality and resistance of the net as a mediating instance.

INSTRUCTION SET

Classification of Instruction Sets

The instruction sets can be differentiated by:
Operand storage in the CPU
Number of explicit operands per instruction
Operand location
Operations
Type and size of operands

The type of internal storage in the CPU is the most basic differentiation. The major choices are
a stack (the operands are implicitly on top of the stack)
an accumulator (one operand is implicitly the accumulator)
a set of registers (all operands are explicit either registers or memory locations)
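The three storage styles can be contrasted on the statement C = A + B. The instruction sequences below are illustrative rather than any real ISA's, and only the stack version is interpreted:

```python
# Hypothetical instruction sequences for C = A + B in each style:
#   Stack:       PUSH A; PUSH B; ADD; POP C        (operands implicit on the stack)
#   Accumulator: LOAD A; ADD B; STORE C            (one operand implicitly the accumulator)
#   Register:    LOAD R1,A; LOAD R2,B; ADD R3,R1,R2; STORE C,R3

# Toy interpreter for the stack version only:
def run_stack(program, mem):
    stack = []
    for op, *arg in program:
        if op == "PUSH":
            stack.append(mem[arg[0]])    # operand fetched from memory
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)          # both operands implicit: top of stack
        elif op == "POP":
            mem[arg[0]] = stack.pop()    # result stored back to memory
    return mem

mem = run_stack([("PUSH", "A"), ("PUSH", "B"), ("ADD",), ("POP", "C")],
                {"A": 2, "B": 3, "C": 0})
```

Notice that the ADD instruction names no operands at all: that is precisely what "the operands are implicitly on top of the stack" means.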

Motherboard

A motherboard is the central or primary circuit board making up a complex electronic system, such as a modern computer. It is also known as a mainboard, baseboard, system board, or, on Apple computers, a logic board, and is sometimes abbreviated as mobo.[1]
The basic purpose of the motherboard, like a backplane, is to provide the electrical and logical connections by which the other components of the system communicate.
A typical desktop computer is built with the microprocessor, main memory, and other essential components on the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices are typically attached to the motherboard via edge connectors and cables, although in modern computers it is increasingly common to integrate these "peripherals" into the motherboard.

Difference between Sequential and Combinational Circuit

Combinational logic refers to circuits whose output depends only on the present value of the inputs. As soon as the inputs change, the information about the previous inputs is lost; that is, combinational logic circuits have no memory. In many applications, information about the input values at a certain instant of time is required at some future time. Although every digital system is likely to have combinational circuits, most systems encountered in practice also include memory elements, which require that the system be described in terms of sequential logic. Circuits whose outputs depend not only on the present input values but also on past input values are known as sequential logic circuits. The mathematical model of a sequential circuit is usually referred to as a sequential machine.

A general block diagram of a sequential circuit is shown below:

The diagram consists of a combinational circuit to which memory elements are connected to form a feedback path. The memory elements are devices capable of storing binary information within them. The combinational part of the circuit receives two sets of input signals: one primary (coming from the circuit environment) and one secondary (coming from the memory elements). The particular combination of secondary input variables at a given time is called the present state of the circuit. The secondary input variables are also known as the state variables.
The block diagram shows that the external outputs in a sequential circuit are a function not only of external inputs but also of the present state of the memory elements. The next state of the memory elements is also a function of external inputs and the present state. Thus a sequential circuit is specified by a time sequence of inputs, outputs, and internal states.
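The distinction can be sketched in Python: the adder below is combinational (its output is a pure function of the present inputs), while the counter is sequential (its output also depends on stored state). Both examples are illustrative:

```python
# Combinational: the output is a pure function of the present inputs.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Sequential: the output depends on the present input AND stored state;
# the state attribute plays the role of the memory elements above.
class TwoBitCounter:
    def __init__(self):
        self.state = 0                       # present state (memory element)
    def clock(self):
        self.state = (self.state + 1) % 4    # next state = f(present state)
        return self.state
```

Calling `full_adder(1, 1, 0)` always gives the same answer, but successive calls to `clock()` give different answers because the present state feeds back into the next-state computation.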

Expansion Slot
A slot located inside a computer on the motherboard or riser board that allows additional boards to be connected to it. Below is a listing of some of the expansion slots commonly found in IBM compatible computers as well as other brands of computers and a graphic illustration of a motherboard and its expansion slots.
Common types of expansion slots:
AGP
AMR
CNR
EISA
ISA
PCI
VESA
SMPS:
A switched-mode power supply, switch-mode power supply, or SMPS, is an electronic power supply unit (PSU) that incorporates a switching regulator — an internal control circuit that switches power transistors (such as MOSFETs) rapidly on and off in order to stabilize the output voltage or current. Switching regulators are used as replacements for linear regulators when higher efficiency, smaller size or lighter weight are required. They are, however, more complicated, and their switching currents can cause noise problems if not carefully suppressed. As with any offline electronic system employing peak-hold AC-DC conversion, simple SMPS designs may have a poor power factor. The power-output-to-cost crossover point between SMPS and linear regulating alternatives has been falling since the early 1980s as SMPS technology was developed and integrated into dedicated silicon chips. By early 2006 even very low power linear regulators had become more expensive than SMPS, as the cost of the copper and iron used in transformers increased abruptly on world markets.
SMPS can also be classified into four types according to the input and output waveforms, as follows.
AC in, DC out: rectifier, off-line converter
DC in, DC out: voltage converter, or current converter, or DC to DC converter
AC in, AC out: frequency changer, cycloconverter
DC in, AC out: inverter
AC and DC are abbreviations for alternating current and direct current.
SMPS and linear power supply comparison
There are two main types of regulated power supplies available: SMPS and Linear. The reasons for choosing one type or the other can be summarized as follows.
Size and weight — Linear power supplies use a transformer operating at the mains frequency of 50/60 Hz. This low-frequency transformer is several times larger and heavier than a corresponding transformer in an SMPS, which runs at typical frequencies of 50 kHz to 1 MHz.
Output voltage — Linear power supplies regulate the output by using a higher voltage in the initial stages and then expending some of it as heat to produce a lower, regulated voltage. This voltage drop is necessary and cannot be eliminated by improving the design, even in theory. SMPSs can produce output voltages which are lower than the input voltage, higher than the input voltage, or even inverted relative to the input voltage, making them versatile and better suited for widely variable input voltages.
Efficiency, heat, and power dissipation — A linear supply regulates the output voltage or current by expending excess power as heat, which is inefficient. A regulated SMPS will regulate the output using duty cycle control, which draws only the power required by the load. In all SMPS topologies, the transistors are always switched fully on or fully off. Thus, ideally, an SMPS is 100% efficient. The only heat generated is in the non-ideal aspects of the components. Switching losses in the transistors, on-resistance of the switching transistors, equivalent series resistance in the inductor and capacitors, and rectifier voltage drop will lower the SMPS efficiency. However, by optimizing SMPS design, the amount of power loss and heat can be minimized. A good design can have an efficiency of 95%.
Complexity — A linear regulator ultimately consists of a power transistor, voltage regulating IC and a noise filtering capacitor. An SMPS typically contains a controller IC, one or several power transistors and diodes as well as power transformer, inductor and filter capacitors. Multiple voltages can be generated by one transformer core. For this an SMPS has to use duty cycle control. Both need a careful selection of their transformers. Due to the high operating frequencies in SMPS, the stray inductance and capacitance of the printed circuit board traces become important.
Radio frequency interference — The current in a SMPS is switched on and off sharply, and contains high frequency spectral components. This high-frequency current can generate undesirable electromagnetic interference. EMI filters and RF shielding are needed to reduce the disruptive interference. Linear PSUs generally do not produce interference, and are used to supply power where radio interference must not occur.
Electronic noise at the output terminals — Inexpensive linear PSUs with poor regulation may experience a small AC voltage "riding on" the DC output at twice mains frequency (100/120 Hz). These "ripples" are usually on the order of millivolts, and can be suppressed with larger filter capacitors or better voltage regulators. This small AC voltage can cause problems or interference in some circuits; for example, analog security cameras powered by switching power supplies may have unexpected brightness ripples or other banded distortions in the video they produce. Quality linear PSUs will suppress ripples much better. SMPS usually do not exhibit ripple at the power-line frequency, but do have generally noisier outputs than linear PSUs. The noise is usually correlated with the SMPS switching frequency.
Acoustic noise — Linear PSUs typically give off a faint, low frequency hum at mains frequency, but this is seldom audible (vibration of windings in the transformer is responsible). SMPSs, with their much higher operating frequencies, are not usually audible to humans (unless they have a fan, in the case of most computer SMPSs). A malfunctioning SMPS may generate high-pitched sounds, since they do in fact generate acoustic noise at the oscillator frequency.
Power factor — Linear PSUs have low power factors because current is drawn from the mains at the peaks of the voltage sinusoid. The current drawn by a simple SMPS is uncorrelated to the supply's input voltage waveform, so early SMPS designs have a mediocre power factor as well, and their use in personal computers and compact fluorescent lamps presents a growing problem for power distribution. An SMPS with power factor correction (PFC) can reduce this problem greatly, and PFC is required by some electricity regulation authorities (European ones in particular).
Electronic noise at the input terminals — In a similar fashion, very low cost SMPS may couple electrical switching noise back onto the mains power line. Linear PSUs rarely do this.
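The duty-cycle regulation mentioned in the efficiency comparison above can be sketched with the idealized buck (step-down) converter relation, average output = duty cycle times input voltage. This is a simplification that ignores losses and assumes continuous conduction:

```python
def buck_output(v_in, duty):
    # Ideal buck converter: average output = duty cycle * input voltage.
    assert 0.0 <= duty <= 1.0
    return v_in * duty

def duty_for_target(v_in, v_out):
    # The controller adjusts the duty cycle to hit the target output,
    # drawing only the power the load requires.
    return v_out / v_in
```

Stepping a 12 V input down to 5 V, for example, calls for a duty cycle of 5/12, and the transistor is always either fully on or fully off, which is why the ideal conversion dissipates no heat.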

How an SMPS works

Block diagram of a mains operated AC-DC SMPS with output voltage regulation.

Input rectifier stage

AC, half-wave and full wave rectified signals
If the SMPS has an AC input, then its first job is to convert the input to DC. This is called rectification. The rectifier circuit can be configured as a voltage doubler by the addition of a switch operated either manually or automatically. This is a feature of larger supplies to permit operation from nominally 120 volt or 240 volt supplies. The rectifier produces an unregulated DC voltage which is then sent to a large filter capacitor. The current drawn from the mains supply by this rectifier circuit occurs in short pulses around the AC voltage peaks. These pulses have significant high frequency energy which reduces the power factor. Special control techniques can be employed by the following SMPS to force the average input current to follow the sinusoidal shape of the AC input voltage thus the designer should try correcting the power factor. A SMPS with a DC input does not require this stage. A SMPS designed for AC input can often be run from a DC supply, as the DC passes through the rectifier stage unchanged. (The user should check the manual before trying this, though most supplies are quite capable of such operation even though no clue is provided in the manual!)
If an input range switch is used, the rectifier stage is usually configured to operate as a voltage doubler when operating on the low-voltage (~120 VAC) range and as a straight rectifier when operating on the high-voltage (~240 VAC) range. If an input range switch is not used, then a full-wave rectifier is usually used, and the downstream inverter stage is simply designed to be flexible enough to accept the wide range of DC voltages that the rectifier stage will produce. In higher-power SMPS, some form of automatic range switching may be used.
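The arithmetic behind the range switch can be sketched as follows (the voltages are nominal, illustrative figures, not taken from the original text): the peak of a rectified sine is Vrms × √2, and a voltage doubler charges two capacitors in series to roughly twice that peak, so doubling on the 120 V range yields about the same DC bus voltage as straight rectification on the 240 V range.

```python
import math

def peak_dc(vrms):
    """Peak of a rectified sinusoid with the given RMS voltage."""
    return vrms * math.sqrt(2)

v_240 = peak_dc(240)          # straight rectification on the 240 VAC range
v_doubled = 2 * peak_dc(120)  # voltage doubler on the 120 VAC range

# The downstream inverter sees roughly the same DC bus either way.
print(round(v_240), round(v_doubled))  # prints: 339 339
```

In practice diode drops, ripple, and mains tolerance shift these numbers somewhat, which is why the inverter stage must still accept a range of DC input voltages.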
Electric power is normally not used in the form in which it is produced or distributed; practically all electronic systems require some form of power conversion. A device that transfers electric energy from the source to the load using electronic circuits is referred to as a power supply, although "power converter" would be a more accurate term for such a device. A typical application of a power supply is to convert utility AC voltage into the regulated DC voltages required by electronic equipment. Nowadays, in most power supplies providing more than a few watts, the energy flow is controlled with power semiconductors that continuously switch on and off at high frequency. Such devices are referred to as switch mode power supplies, or SMPS. In general, SMPS can be classified into four types according to the form of the input and output voltages: AC to DC (off-line power supply or rectifier); DC to DC (voltage converter); AC to AC (frequency changer or cycloconverter); DC to AC (inverter).

Expansion Slots

An opening in a computer where a circuit board can be inserted to add new capabilities to the computer. Nearly all personal computers except portables contain expansion slots for adding more memory, graphics capabilities, and support for special devices. The boards inserted into the expansion slots are called expansion boards, expansion cards, cards, add-ins, and add-ons.
Expansion slots for PCs come in two basic sizes: half- and full-size. Half-size slots are also called 8-bit slots because they can transfer 8 bits at a time. Full-size slots are sometimes called 16-bit slots. In addition, modern PCs include PCI slots for expansion boards that connect directly to the PCI bus.

SMPS

A switched-mode power supply, switch-mode power supply, or SMPS, is an electronic power supply unit (PSU) that incorporates a switching regulator — an internal control circuit that switches power transistors (such as MOSFETs) rapidly on and off in order to stabilize the output voltage or current. Switching regulators are used as replacements for linear regulators when higher efficiency, smaller size or lighter weight are required. They are, however, more complicated, and their switching currents can cause noise problems if not carefully suppressed. As with any offline electronic system employing peak-hold AC-DC conversion, simple SMPS designs may have a poor power factor. The power-output-to-cost crossover point between SMPS and linear regulating alternatives has been falling since the early 1980s as SMPS technology was developed and integrated into dedicated silicon chips. In early 2006 even very low power linear regulators became more expensive than SMPS when the cost of the copper and iron used in transformers increased abruptly on world markets.
SMPS can also be classified into four types according to the input and output waveforms, as follows.
AC in, DC out: rectifier, off-line converter
DC in, DC out: voltage converter, current converter, or DC-to-DC converter
AC in, AC out: frequency changer, cycloconverter
DC in, AC out: inverter
AC and DC are abbreviations for alternating current and direct current.
