Saturday, September 27, 2008

Software/Hardware Driven Identification (Daisy Chain)
This is significantly faster than a pure software approach. A daisy chain is used to identify the device requesting service.

Daisy Chain Polling Arrangement

Daisy chaining is used for level sensitive interrupts, which act like a wired 'OR' gate. Any requesting device can take the interrupt line low, and keep it asserted low until it is serviced.

Because more than one device can assert the shared interrupt line simultaneously, some method must be employed to ensure device priority. This is done using the interrupt acknowledge signal generated by the processor in response to an interrupt request.

Each device is connected to the same interrupt request line, but the interrupt acknowledge line is passed through each device, from the highest priority device first, to the lowest priority device last.

After preserving the required registers, the microprocessor generates an interrupt acknowledge signal. This is gated through each device. If device 1 generated the interrupt, it will place its identification signal on the data bus, which is read by the processor, and used to generate the address of the interrupt-service routine. If device 1 did not request the servicing, it will pass the interrupt acknowledge signal on to the next device in the chain. Device 2 follows the same procedure, and so on.

Tuesday, July 29, 2008

RISC VS CISC


The simplest way to examine the advantages and disadvantages of RISC architecture is by contrasting it with its predecessor: CISC (Complex Instruction Set Computer) architecture.
Multiplying Two Numbers in Memory
On the right is a diagram representing the storage scheme for a generic computer. The main memory is divided into locations numbered from (row) 1: (column) 1 to (row) 6: (column) 4. The execution unit is responsible for carrying out all computations. However, the execution unit can only operate on data that has been loaded into one of the six registers (A, B, C, D, E, or F). Let's say we want to find the product of two numbers - one stored in location 2:3 and another stored in location 5:2 - and then store the product back in location 2:3.
The CISC Approach
The primary goal of CISC architecture is to complete a task in as few lines of assembly as possible. This is achieved by building processor hardware that is capable of understanding and executing a series of operations. For this particular task, a CISC processor would come prepared with a specific instruction (we'll call it "MULT"). When executed, this instruction loads the two values into separate registers, multiplies the operands in the execution unit, and then stores the product in the appropriate register. Thus, the entire task of multiplying two numbers can be completed with one instruction:
MULT 2:3, 5:2
MULT is what is known as a "complex instruction." It operates directly on the computer's memory banks and does not require the programmer to explicitly call any loading or storing functions. It closely resembles a command in a higher level language. For instance, if we let "a" represent the value of 2:3 and "b" represent the value of 5:2, then this command is identical to the C statement "a = a * b."
One of the primary advantages of this system is that the compiler has to do very little work to translate a high-level language statement into assembly. Because the length of the code is relatively short, very little RAM is required to store instructions. The emphasis is put on building complex instructions directly into the hardware.
The RISC Approach
RISC processors only use simple instructions that can be executed within one clock cycle. Thus, the "MULT" command described above could be divided into three separate commands: "LOAD," which moves data from the memory bank to a register, "PROD," which finds the product of two operands located within the registers, and "STORE," which moves data from a register to the memory banks. In order to perform the exact series of steps described in the CISC approach, a programmer would need to code four lines of assembly:
LOAD A, 2:3
LOAD B, 5:2
PROD A, B
STORE 2:3, A
At first, this may seem like a much less efficient way of completing the operation. Because there are more lines of code, more RAM is needed to store the assembly level instructions. The compiler must also perform more work to convert a high-level language statement into code of this form.
Difference Between RISC and CISC:

CISC
  • Emphasis on hardware
  • Includes multi-clock complex instructions
  • Memory-to-memory: "LOAD" and "STORE" incorporated in instructions
  • Small code sizes, high cycles per second
  • Transistors used for storing complex instructions

RISC
  • Emphasis on software
  • Single-clock, reduced instructions only
  • Register to register: "LOAD" and "STORE" are independent instructions
  • Low cycles per second, large code sizes
  • Spends more transistors on memory registers

However, the RISC strategy also brings some very important advantages. Because each instruction requires only one clock cycle to execute, the entire program will execute in approximately the same amount of time as the multi-cycle "MULT" command. These RISC "reduced instructions" require fewer transistors of hardware space than the complex instructions, leaving more room for general purpose registers. Because all of the instructions execute in a uniform amount of time (i.e. one clock), pipelining is possible.
Separating the "LOAD" and "STORE" instructions actually reduces the amount of work that the computer must perform. After a CISC-style "MULT" command is executed, the processor automatically erases the registers. If one of the operands needs to be used for another computation, the processor must re-load the data from the memory bank into a register. In RISC, the operand will remain in the register until another value is loaded in its place.
The Performance Equation
The following equation is commonly used for expressing a computer's performance ability:

time/program = (instructions/program) × (cycles/instruction) × (time/cycle)
The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction. RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program.
RISC Roadblocks
Despite the advantages of RISC based processing, RISC chips took over a decade to gain a foothold in the commercial world. This was largely due to a lack of software support.
Although Apple's Power Macintosh line featured RISC-based chips and Windows NT was RISC compatible, Windows 3.1 and Windows 95 were designed with CISC processors in mind. Many companies were unwilling to take a chance with the emerging RISC technology. Without commercial interest, processor developers were unable to manufacture RISC chips in large enough volumes to make their price competitive.
Another major setback was the presence of Intel. Although their CISC chips were becoming increasingly unwieldy and difficult to develop, Intel had the resources to plow through development and produce powerful processors. Although RISC chips might surpass Intel's efforts in specific areas, the differences were not great enough to persuade buyers to change technologies.
The Overall RISC Advantage
Today, the Intel x86 is arguably the only chip which retains CISC architecture. This is primarily due to advancements in other areas of computer technology. The price of RAM has decreased dramatically. In 1977, 1MB of DRAM cost about $5,000. By 1994, the same amount of memory cost only $6 (when adjusted for inflation). Compiler technology has also become more sophisticated, so that the RISC use of RAM and emphasis on software has become ideal.

TIGHTLY AND LOOSELY COUPLED SYSTEM

Tightly Coupled System
- Tasks and/or processors communicate in a highly synchronized fashion
- Communicates through a common shared memory
- Shared memory system
Loosely Coupled System
- Tasks or processors do not communicate in a synchronized fashion
- Communicates by message passing packets
- Overhead for data exchange is high
- Distributed memory system

Wednesday, July 16, 2008

Target instructions prefetch

A processor is provided that includes an execution pipeline, which executes a programmed flow of instructions, and an instruction pointer generator configured to generate an instruction pointer. The processor also includes a branch prediction circuit configured to receive the instruction pointer. In response to the instruction pointer, the branch prediction circuit determines whether the instruction corresponding to the instruction pointer includes a branch that is predicted taken and, if so, provides to the execution pipeline at least one target instruction corresponding to that instruction.

Monday, July 14, 2008

Throughput

In communication networks, such as Ethernet or packet radio, throughput is the average rate of successful message delivery over a communication channel. The data may be delivered over a physical or logical link or over a wireless channel, or measured as it passes through a certain network node, such as data passed between two specific computers. Throughput is usually measured in bits per second (bit/s or bps), and sometimes in data packets per second or data packets per time slot.
The system throughput or aggregate throughput is the sum of the data rates that are delivered to all terminals in a network.
The throughput can be analyzed mathematically by means of queueing theory, where the load in packets per time unit is denoted arrival rate λ, and the throughput in packets per time unit is denoted departure rate μ.

Throughput

In computer technology, throughput is the amount of work that a computer can do in a given time period. Historically, throughput has been a measure of the comparative effectiveness of large commercial computers that run many programs concurrently. An early throughput measure was the number of batch jobs completed in a day.

Throughput

Throughput is the rate at which a computer or network sends or receives data. It is therefore a good measure of the channel capacity of a communications link, and connections to the internet are usually rated in terms of how many bits they pass per second (bit/s).

System Call

A system call is a request made by any arbitrary program to the operating system for performing tasks -- picked from a predefined set -- which the said program does not have required permissions to execute in its own flow of execution. Most operations interacting with the system require permissions not available to a user level process, i.e. any I/O performed with any arbitrary device present on the system or any form of communication with other processes requires the use of system calls.


Saturday, July 12, 2008

Interrupts

An interrupt is an event in hardware that triggers the processor to jump from its current program counter to a specific point in the code. Interrupts are designed to be special events whose occurrence cannot be predicted precisely (or at all). The MSP has many different kinds of events that can trigger interrupts, and for each one the processor will send the execution to a unique, specific point in memory. Each interrupt is assigned a word-long segment at the upper end of memory. This is enough memory for a jump to the location in memory where the interrupt will actually be handled.

Interrupts in general can be divided into two kinds: maskable and non-maskable. A maskable interrupt is an interrupt whose trigger event is not always important, so the programmer can decide that the event should not cause the program to jump. A non-maskable interrupt (like the reset button) is so important that it should never be ignored; the processor will always jump to this interrupt when it happens. Often, maskable interrupts are turned off by default to simplify the default behavior of the device, and special control registers allow them to be turned on, individually or as a group.

Interrupts generally have a "priority": when two interrupts happen at the same time, the higher-priority interrupt takes precedence over the lower-priority one. Thus if a peripheral timer goes off at the same time as the reset button is pushed, the processor will ignore the peripheral timer because the reset is more important (higher priority).
A two-phase-clock generator which generates a first clock and a nonoverlapping second clock from an input clock by utilizing gate delays, comprising:

a first floating inverter and a second floating inverter each having an input, an output, a first supply terminal and a second supply terminal, said input of said first floating inverter and said input of said second floating inverter being coupled to said input clock in antiphase, and said first supply terminals of each of said first and second floating inverters connected to a supply voltage;

a first output buffer having an input coupled to said output of said first floating inverter and having an output that provides said first clock, said output further being provided as feedback to said second supply terminal of said second floating inverter; and

a second output buffer having an input coupled to said output of said second floating inverter and having an output that provides said nonoverlapping second clock, said output further being provided as feedback to said second supply terminal of said first floating inverter.


A two-phase clock generator generates a nonoverlapping two-phase clock from a unipolar input clock by utilizing gate delays in first and second signal paths. The output of each signal path is fed over a cross-coupled feedback path back to a logic gate in the respective other signal path. Each logic gate is a floating inverter having a first supply terminal connected to a supply voltage, and having a second supply terminal that is the feed point for the respective feedback signal from the output of the other signal path.



ADDRESSING METHODS


ABSOLUTE (DIRECT) ADDRESSING

- The address of the operand is given explicitly as part of the instruction



IMPLIED ADDRESSING

- The address is implied by the instruction (e.g., in a one-address machine, the address
of the second operand is implied as being the accumulator)


IMMEDIATE ADDRESSING

- The operand is given explicitly as part of the instruction, so no memory
access is required. The operand could also follow immediately after the instruction.


INDIRECT ADDRESSING
- The effective address of the operand is in the register or main memory location
whose address appears in the instruction. It can have more than one level.


INDEXED ADDRESSING
- The effective address (EA) of the operand is generated by adding an index register
value (X) to the direct address (DA)
- EA = X + DA


BASE ADDRESSING

- The effective address of the operand is generated by adding base register value (B)
to the address
- EA = B + DA


SELF-RELATIVE ADDRESSING

- Effective address is a sum of a direct address and a program counter contents (PC).
EA = DA + PC


AUGMENTED ADDRESSING

- Effective address is a concatenation of the contents of the augmented address
register (AAR) and direct address.
EA = AAR || DA
(AAR often specifies a page and DA is an address within this
particular page)


BLOCK ADDRESSING

- Address of the first word in the block is given. Length of the block is usually specified
in the instruction; or also the last address can be given; or special end-of-block
character can be given; or blocks may have fixed length. Very useful in the
secondary storage management.

INTERRUPTS

see this to know about interrupts: http://en.wikipedia.org/wiki/Interrupt

IMPLEMENTATIONS OF SYSTEM CALLS

Typical implementations
Implementing system calls requires a control transfer which involves some sort of architecture-specific feature. A typical way to implement this is to use a software interrupt or trap. Interrupts transfer control to the kernel, so software simply needs to set up a register with the desired system call number and execute the software interrupt.
For many RISC processors this is the only feasible implementation, but CISC architectures such as x86 support additional techniques. One example is SYSCALL/SYSRET which is very similar to SYSENTER/SYSEXIT (the two mechanisms were created by Intel and AMD independently, but do basically the same thing). These are "fast" control transfer instructions that are designed to quickly transfer control to the kernel for a system call without the overhead of an interrupt. Linux 2.5 began using this on the x86, where available; formerly it used the INT instruction, where the system call number was placed in the EAX register before interrupt 0x80 was executed.[1]
An older x86 mechanism is called a call gate and is a way for a program to literally call a kernel function directly using a safe control transfer mechanism the kernel sets up in advance. This approach has been unpopular, presumably due to the requirement of a far call which uses x86 memory segmentation and the resulting lack of portability it causes, and existence of the faster instructions mentioned above.

INTERRUPT:

An interrupt is an event in hardware that triggers the processor to jump from its current program counter to a specific point in the code. Interrupts are designed to be special events whose occurrence cannot be predicted precisely .

TYPES:

In general, interrupts can be divided into two kinds: maskable and non-maskable.

A maskable interrupt is an interrupt whose trigger event is not always important, so the programmer can decide that the event should not cause the program to jump.

A non-maskable interrupt (like the reset button) is so important that it should never be ignored. The processor will always jump to this interrupt when it happens.

what is a system call

A system call is a request made by any arbitrary program to the operating system for performing tasks -- picked from a predefined set -- which the said program does not have required permissions to execute in its own flow of execution. Most operations interacting with the system require permissions not available to a user level process, i.e. any I/O performed with any arbitrary device present on the system or any form of communication with other processes requires the use of system calls.

The OS executes at the highest level of privilege and allows the applications to request services via system calls, which are often implemented through interrupts. If allowed, the system enters a higher privilege level, executes a specific set of instructions which the interrupting program has no direct control over, then returns control to the former flow of execution. This concept also serves as a way to implement security.

Friday, July 11, 2008

interrupt service

An interrupt service routine (ISR) is a software routine that hardware invokes in response to an interrupt. An ISR examines the interrupt, determines how to handle it, handles it, and then returns a logical interrupt value. If no further handling is required because the device is disabled or data is buffered, the ISR notifies the kernel with a SYSINTR_NOP return value. An ISR must execute very quickly to avoid slowing down the operation of the device and the operation of all lower-priority ISRs.
Although an ISR might move data from a CPU register or a hardware port into a memory buffer, in general it relies on a dedicated interrupt thread, called the interrupt service thread (IST), to do most of the required processing. If additional processing is required, the ISR returns a logical interrupt value other than SYSINTR_NOP to the kernel, which then maps the physical interrupt number to a logical interrupt value.

INTERRUPT

A signal informing a program that an event has occurred. When a program receives an interrupt signal, it takes a specified action (which can be to ignore the signal). Interrupt signals can cause a program to suspend itself temporarily to service the interrupt.
Interrupt signals can come from a variety of sources. For example, every keystroke generates an interrupt signal. Interrupts can also be generated by other devices, such as a printer, to indicate that some event has occurred. These are called hardware interrupts. Interrupt signals initiated by programs are called software interrupts. A software interrupt is also called a trap or an exception.
PCs support 256 types of software interrupts and 15 hardware interrupts. Each type of software interrupt is associated with an interrupt handler -- a routine that takes control when the interrupt occurs. For example, when you press a key on your keyboard, this triggers a specific interrupt handler. The complete list of interrupts and associated interrupt handlers is stored in a table called the interrupt vector table, which resides in the first 1 K of addressable memory.
Also see the list of IRQ numbers in the Quick Reference section of Webopedia


Interrupt

In computing, an interrupt is an asynchronous signal from hardware indicating the need for attention or a synchronous event in software indicating the need for a change in execution. A hardware interrupt causes the processor to save its state of execution via a context switch, and begin execution of an interrupt handler. Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing.
An act of interrupting is referred to as an interrupt request.

Friday, July 4, 2008

Microprogram

A sequence of microinstructions that are in special storage where they can be dynamically accessed to perform various functions.

Thursday, July 3, 2008

FUNCTIONS OF CONTROL UNIT

Functions!!!
The control unit can be thought of as the brain of the CPU. Based on the instructions it decodes, it controls how the other parts of the CPU, and in turn the rest of the computer system, should work so that the instruction is executed correctly. There are two types of control units. The first type is the hardwired control unit; hardwired control units are constructed using digital circuits and, once formed, cannot be changed. The other type is the microprogrammed control unit, which decodes and executes instructions by means of executing microprograms.

CISC AND RISC ARCHITECTURE

CISC Architecture:
CISC (Complex Instruction Set Computer) architecture means hardwiring the processor (building the functionality directly into its circuitry) with complex instructions that would be difficult to create using basic instructions.

CISC is especially popular in 80x86-type processors. This type of architecture has an elevated cost because of the advanced functions etched onto the silicon.

Instructions are of variable length and may sometimes require more than one clock cycle. Because CISC-based processors can only process one instruction at a time, the processing time is a function of the size of the instruction.

RISC Architecture:
Processors with RISC (Reduced Instruction Set Computer) technology do not have hardwired, advanced functions.

Programs must therefore be translated into simple instructions which complicates development and/or requires a more powerful processor. Such architecture has a reduced production cost compared to CISC processors.

In addition, instructions, simple in nature, are executed in just one clock cycle, which speeds up program execution when compared to CISC processors. Finally, these processors can handle multiple instructions simultaneously by processing them in parallel.

Parallel Processing

Parallel Processing:
Parallel processing consists of simultaneously executing instructions from the same program on different processors. This involves dividing a program into multiple processes handled in parallel in order to reduce execution time.

CONTROL MEMORY




Microprogram control

Microprogrammed Control
The control signals needed in each step of instruction execution can be generated by the finite state machine method, also called hardwired control, or, alternatively, by the microprogrammed control method discussed below.
Basic Concepts of Microprogramming:
Control word (CW):

A word with each bit for one of the control signals. Each step of the instruction execution is represented by a control word with all of the bits corresponding to the control signals needed for the step set to one.
Microinstruction:
Each step in a sequence of steps in the execution of a certain machine instruction is considered as a microinstruction, and it is represented by a control word. All of the bits corresponding to the control signals that need to be asserted in this step are set to 1, and all others are set to 0 (horizontal organization).
Microprogram:
Composed of a sequence of microinstructions corresponding to the sequence of steps in the execution of a given machine instruction.
Microprogramming:
The method of generating the control signals by properly setting the individual bits in a control word of a step.

Two phase clock generator


Tuesday, July 1, 2008

APPLICATIONS OF LOGIC MICROOPERATIONS

Selective-set Operation

Used to force selected bits of a register into logic-1 by using the OR operation

Example: 0100 ∨ 1000 = 1100


Selective-complement (toggling) Operation

Used to force selected bits of a register to be complemented by using the XOR operation

Example: 0001 ⊕ 1000 = 1001

APPLICATIONS OF LOGIC MICROOPERATIONS

Set (Preset) Microoperation:

Forces all bits to 1's by ORing them with a value whose bits are all 1's
Example: 100110 ∨ 111111 = 111111


Clear (Reset) Microoperation:

Forces all bits to 0's by ANDing them with a value whose bits are all 0's
Example: 100110 ∧ 000000 = 000000

Defn microoperation & APPLICATIONS OF LOGIC MICROOPERATIONS

•The operations on the data in registers are called microoperations.





APPLICATIONS OF LOGIC MICROOPERATIONS


http://209.85.175.104/search?q=cache:r89BkQ2u8UMJ:calab.kaist.ac.kr/~hyoon/courses/cs311/cs311_2006/Ch4.ppt+applications+of+logic+microoperation&hl=en&ct=clnk&cd=4&gl=in

DESCRIPTION OF TRI-STATE BUFFER


A tri-state buffer is a useful device that allows us to control when current passes through the device, and when it doesn't. A tri-state buffer has two inputs: a data input x and a control input c. The control input acts like a valve. When the control input is active, the output is the input. That is, it behaves just like a normal buffer. The "valve" is open.
When the control input is not active, the output is "Z" (high impedance). The "valve" is closed, and no electrical current flows through. Thus, even if x is 0 or 1, that value does not flow through.

Monday, June 30, 2008

Gates

The manipulation of binary information is done by logic circuits called gates. Gates are blocks of hardware that produce signals of binary 1 or 0 when input logic requirements are satisfied. A variety of logic gates are commonly used in digital computer systems. Each gate has a distinct graphic symbol and its operation can be described by means of an algebraic expression. The input-output relationship of the binary variables for each gate can be represented in tabular form by a truth table.

Saturday, June 28, 2008

REGISTER TRANSFER AND MICROOPERATIONS

Refer http://artoa.hanbat.ac.kr/lecture_data/computer_architecture/02.pdf

SHIFT REGISTER


A register that is capable of shifting data one bit at a time is called a shift register. The logical configuration of a serial shift register consists of a chain of flip-flops connected in cascade, with the output of one flip-flop being connected to the input of its neighbour. The operation of the shift register is synchronous; thus each flip-flop is connected to a common clock. Using D flip-flops forms the simplest type of shift-registers.
For more details, see the following sites:
scitec.uwichill.edu.bb/cmp/online/P10F/shift.htm
www.doctronics.co.uk/4014.htm

Difference between LAN and WAN

LAN versus WAN
To define a LAN
Up to now we've been talking about Ethernet and I've made reference to the fact that Ethernet is a LAN.
A LAN is a Local Area Network. Local is generally referred to a network contained within a building or an office or a campus.
Examples:
You might have a LAN for example on a University campus or between office blocks in an office park.
A big corporate perhaps like Anglo American, would generally have a LAN that might span several buildings.
To set up a LAN is, relatively speaking, cheap. If you want to put an extra couple of network points or an extra couple of devices on the network, it's not very expensive to do that.
To define a WAN
Using a similar example, a Wide Area Network is a network that connects campuses.
What I'm going to do is write down some short descriptions of what a WAN is:
1. A WAN is generally slow. If we compare that to a LAN: we said that Ethernet could run at up to 1000 Mbit/s, while currently, certainly in South Africa, the fastest WAN is 155 Mbit/s. So in a LAN we can talk at up to 1000 Mbit/s, whereas in a WAN, at the moment, in South Africa, we can only reach about a tenth of that speed.
2. WANs are expensive. If we look at the path of telecommunications: if we need to connect two offices, one in Pretoria and one in Johannesburg, it's an expensive operation even for a slow line.
One of the differences between a WAN (Wide Area Network) and a LAN (Local Area Network) is the set-up cost. WAN generally are to connect remote offices and when we talk about remote offices we generally refer to the remote offices as those that are outside the campus. For example, if we have an office in Pretoria and we have an office in Cape Town, these are remote offices. There is no chance that we can connect the LAN between Cape Town and Pretoria. In a LAN we connect local offices whereas in a WAN we can connect remote offices.

Logic Microoperation

http://www.mans.edu.eg/FacEng/english/computers/PDFS/PDF3/1.3.pdf

DIFFERENCE BETWEEN ARITHMETIC AND LOGICAL SHIFT

All shifts can be categorized as logical, arithmetic, or circular. In a logical shift, a zero enters at the input of the shifter and the bit shifted out is clocked into the carry flip-flop of the CCR. An arithmetic shift left is identical to a logical shift left, but an arithmetic shift right causes the most significant bit, the sign bit, to be propagated right. This action preserves the correct sign of a two's complement value. For example, if the bytes 00101010 and 10101010 are shifted one place right (arithmetically), the results are 00010101 and 11010101, respectively.

PROGRAM COUNTER

program counter

PC, or "instruction address register"...
A register in the central processing unit that contains the address of the next instruction to be executed. The PC is automatically incremented after each instruction is fetched, so that it points to the following instruction. It is not normally manipulated like an ordinary register; instead, special instructions alter the flow of control by writing a new value to the PC, e.g. JUMP and CALL.

RIGHT AND LEFT BIT SHIFTING OPERATORS

Bit shifting, as the name signifies, shifts the bits in a byte (or word). There are basically two directions in which the bits can be shifted: right or left. Thus we have two types of bit shifting operator.
If you think about it, bit shifting needs two pieces of data: the value whose bits are to be shifted, and the number of positions to shift by. These are exactly the two operands the operators take!
Right Bit Shifting Operator (>>)
Syntax: res = var >> num;
This shifts all bits in the variable var num places to the right and stores the result in the variable res. So, for example, if var has the following bit pattern:
var = 00110101 (decimal 53)
And we do the following operation:
res = var >> 2;
We would get res as:
res = 00001101 (decimal 13)
As you can see, shifting the bits to the right discards two bits on the right and introduces two 0s on the left.
Left Bit Shift Operator (<<)
It is similar to the right shift operator except that the direction of shifting is opposite. The following example should make this clear:
var = 00110101 (decimal 53)
And we do the following operation:
res = var << 2;
We would get res as:
res = 11010100 (decimal 212)
Here it is just the opposite: bits are discarded from the left and new 0s are introduced on the right.

ARITHMETIC RIGHT SHIFT


For example, in the x86 instruction set, the SAR instruction (arithmetic right shift) divides a signed number by a power of two, rounding towards negative infinity.[1] However, the IDIV instruction (signed divide) divides a signed number, rounding towards zero. So a SAR instruction cannot be substituted for an IDIV by power of two instruction nor vice versa.

Friday, June 27, 2008

Bit Shift Operators

A processor register has a fixed number of bits available for storing a value, so it is possible to "shift out" bits at one end of the register and "shift in" bits at the other end. The shift count is typically limited to the register width (e.g. taken modulo 32 on 32-bit machines).
Bit shift operators perform bitwise operations on the binary representation of an integer rather than on its numerical value. They do not operate on pairs of corresponding bits; instead, the digits in a register are moved, or shifted, to the left or right by the distance specified by a number.

More information: http://www.roseindia.net/java/master-java/bitwise-bitshift-operators.shtml

LOGIC GATE

A logic gate is an elementary building block of a digital circuit. Most logic gates have two inputs and one output. At any given moment, every terminal is in one of two binary states, low (0) or high (1), represented by different voltage levels. The logic state of a terminal can, and generally does, change often as the circuit processes data. In most logic gates, the low state is approximately zero volts (0 V), while the high state is approximately five volts positive (+5 V).
There are seven basic logic gates: AND, OR, XOR, NOT, NAND, NOR, and XNOR.

ARITHMETIC SHIFT

In computer programming, an arithmetic shift is a shift operator, sometimes known as a signed shift (though it is not restricted to signed operands). For binary numbers it is a bitwise operation that shifts all of the bits of its operand; every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled in. What makes an arithmetic shift different from a logical shift is what the empty bit positions are filled with.

CIRCULAR SHIFT

In computer science, a circular shift is a shift operator that shifts all bits of its operand. Unlike an arithmetic shift, a circular shift does not preserve a number's sign bit or distinguish a number's exponent from its mantissa. Unlike a logical shift, the vacant bit positions are not filled in with zeros but are filled in with the bits that are shifted out of the sequence.
Circular shifts are used often in cryptography as part of the permutation of bit sequences.

Example
If the bit sequence 0001 0111 were subjected to a circular shift of one bit position...
to the left would yield: 0010 1110
to the right would yield: 1000 1011.
If the bit sequence 0001 0111 were subjected to a circular shift of three bit positions...
to the left would yield: 1011 1000
to the right would yield: 1110 0010.

logical right shift

In computer science, a logical shift is a shift operator that shifts all the bits of its operand. Unlike an arithmetic shift, a logical shift does not preserve a number's sign bit or distinguish a number's exponent from its mantissa; every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled in, generally with zeros (compare with a circular shift).
A logical shift is often used when its operand is being treated as a sequence of bits rather than as a number.

LOGICAL SHIFT


Logical shifts can be useful as efficient ways of performing multiplication or division of unsigned integers by powers of two. Shifting left by n bits on a signed or unsigned binary number has the effect of multiplying it by 2^n. Shifting right by n bits on an unsigned binary number has the effect of dividing it by 2^n (rounding towards 0).

Shift operations

http://en.wikipedia.org/wiki/Bitwise_operation

Thursday, June 26, 2008

DEVICES THAT HAVE USED SYMBIAN OS.

On November 16, 2006, the 100 millionth smartphone running the OS was shipped.[6]
Ericsson R380 (2000) was the first commercially available phone based on Symbian OS. As with the modern "FOMA" phones, this device was closed, and the user could not install new C++ applications. Unlike those, however, the R380 could not even run Java applications, and for this reason, some have questioned whether it can properly be termed a 'smartphone'.
Nokia 9210 Communicator smartphone (32-bit 66 MHz ARM9-based RISC CPU) (2001), 9300 Communicator (2004), 9500 Communicator (2004) using the Nokia Series 80 interface
UIQ interface:
Used for PDAs such as Sony Ericsson P800 (2002), P900 (2003), P910 (2004), P990 (2005), W950 (2006), M600 (2006), P1 (2007), W960 (2007), G700 (2008), G900 (2008), G702 (2008), Motorola A920, A925, A1000, RIZR Z8, RIZR Z10, DoCoMo M1000, BenQ P30, P31 and Nokia 6708 using this interface.
Nokia S60 (2002)
Nokia S60 is used in various phones, the first being the Nokia 7650, then the Nokia 3650, followed by the Nokia 3620/3660, Nokia 6600, Nokia 7610, Nokia 6670 and Nokia 3230. The Nokia N-Gage and Nokia N-Gage QD gaming/smartphone combos are also S60 platform devices. It was also used on other manufacturers' phones such as the Siemens SX1, Sendo X, Panasonic X700, Panasonic X800, Samsung SGH-D730, SGH-D720 and the Samsung SGH-Z600. Recent, more advanced devices using S60 include the Nokia 6620, Nokia 6630, the Nokia 6680, Nokia 6681 and Nokia 6682, a next generation Nseries, including the Nokia N70, Nokia N71, Nokia N72, Nokia N73, Nokia N75, Nokia N80, Nokia N81, Nokia N82, Nokia N90, Nokia N91, Nokia N92, Nokia N93 and Nokia N95, and the enterprise (i.e. business) model Eseries, including the Nokia E50, Nokia E51 Nokia E60, Nokia E61, Nokia E62, Nokia E65, and Nokia E70. For an up to date list, refer to the Symbian S60 website.
Nokia 7710 (2004) using the Nokia Series 90 interface.
Nokia 6120 classic, Nokia 6121 classic
Fujitsu, Mitsubishi, Sony Ericsson and Sharp phones for NTT DoCoMo in Japan, using an interface developed specifically for DoCoMo's FOMA "Freedom of Mobile Access" network brand. This UI platform is called MOAP "Mobile Oriented Applications Platform" and is based on the UI from earlier Fujitsu FOMA models.

Symbian os

How does Symbian OS work?
As operating system software, Symbian OS provides the underlying routines and services for application software. For example, email software that interacts with a user through a mobile phone screen and downloads email messages to the phone's inbox over a mobile network or WiFi is using the communication protocols and file management routines provided by Symbian OS.
Symbian OS technology has been designed with these key points in mind:
to provide power, memory and input & output resource management specifically required in mobile devices
to deliver an open platform that complies with global telecommunications and Internet standards
to provide tools for developing mobile software for business, media and other applications
to ensure the wide availability of applications and accessories for different user requirements
to facilitate wireless connectivity for a variety of networks

Wednesday, June 25, 2008

MOTHER BOARD

The main circuit board of a microcomputer. The motherboard contains the connectors for attaching additional boards. Typically, the motherboard contains the CPU, BIOS, memory, mass storage interfaces, serial and parallel ports, expansion slots, and all the controllers required to control standard peripheral devices, such as the display screen, keyboard, and disk drive. Collectively, all these chips that reside on the motherboard are known as the motherboard's chipset.
On most PCs, it is possible to add memory chips directly to the motherboard. You may also be able to upgrade to a faster PC by replacing the CPU chip. To add additional core features, you may need to replace the motherboard entirely.

NORMALIZATION ADVANTAGE

The advantages of normalizing floating-point numbers are:
1) The representation is unique: there is exactly one way to write a real number in such a form.
2) It is easy to compare two normalized numbers: you separately test the sign, exponent and mantissa.
3) In a normalized form, a fixed-size mantissa uses all the 'digit cells' to store significant digits.

Difference between decoder and encoder

Encoding is the process of converting the original message into a coded one, whereas decoding is the process of taking a coded message and converting it back to the original message. The same algorithm may be applied for both encoding and decoding, or the two may differ. The words cipher and decipher can be applied in the same way.
An encoder is a device that is used to convert a signal or certain data into code. This kind of conversion is done for a variety of reasons, the most common being data compression. Other reasons for using encoders include data encryption for making the data secure and translating data from one code to another new or existing code. Encoders may be analog or digital devices. In analog devices, the encoding is done using analog circuitry, while in digital encoders the encoding is done using program algorithms. Some examples of encoders are multiplexers, compressors, and linear and rotary encoders.

A decoder, on the other hand, performs the reverse of an encoder. It is a device that is used to decode an encoded signal or data, in order to retrieve the data that was encoded in the first place. Encoders and decoders usually function in tandem, i.e. an application that uses an encoder would ideally also require a decoder. There are different types of decoders, e.g. demultiplexers.

Overflow

OVERFLOW

*carry out is the extra 1 bit we generate that doesn’t fit.
*overflow is the condition where the answer is incorrect.
*With unsigned addition, we have carry out iff we have overflow.

With signed addition, this is not the case:
– (-3)+(-4) produces a carry out but no overflow (-7 is the right answer).
– 4+4 and 4+5 do not produce a carry out, but produce overflow (-8 and -7 are the wrong answers).
– (-4)+(-5) produces a carry out and overflows.

How can we tell if we had a true overflow?
If the carry in and carry out of the most significant bit are different, we have a problem.

Operating System

tri state buffer

Another type of special gate output is called tristate, because it has the ability to provide three different output modes: current sinking ("low" logic level), current sourcing ("high"), and floating ("high-Z", or high-impedance). Tristate outputs are usually found as an optional feature on buffer gates. Such gates require an extra input terminal to control the "high-Z" mode, and this input is usually called the enable.

Tuesday, June 24, 2008

Application of GRAY CODE

1. The first application is the calculation of the weighted polynomial of a group code...
2. The second application is finding the solution to a variation of the Tower of Hanoi problem...
3. The third application is finding a Hamiltonian path in a generalized hypercube network...

MULTIPLEXER(MUX)

In electronics, a multiplexer or mux (occasionally the term muldex is also found, for a combination multiplexer-demultiplexer) is a device that performs multiplexing; it selects one of many analog or digital input signals and outputs that into a single line.
An electronic multiplexer makes it possible for several signals to share one expensive device or other resource, for example one A/D converter or one communication line, instead of having one device per input signal.
In electronics, a demultiplexer (or demux) is a device taking a single input signal and selecting one of many data-output-lines, which is connected to the single input. A multiplexer is often used with a complementary demultiplexer on the receiving end.

signed magnitude representation

There are many schemes for representing negative integers with patterns of bits. One scheme is sign-magnitude. It uses one bit (usually the leftmost) to indicate the sign: "0" indicates a positive integer, and "1" indicates a negative integer. The rest of the bits are used for the magnitude of the number.

BUS

When referring to a computer, the bus (also known as the address bus, data bus, or local bus) is a data connection between two or more devices connected to the computer. A computer bus is a method of transmitting data from one part of the computer to another. The bus connects all devices to the computer's CPU and main memory. The data bus transfers the actual data, whereas the address bus transfers information about where the data should go.
For example, a bus enables a computer processor to communicate with the memory, or a video card to communicate with the memory.
A bus can be parallel or serial, and today all computers use two types of buses: an internal or local bus and an external bus. An internal bus enables communication between internal components, such as a computer's video card and memory, while an external bus can communicate with external components, such as a SCSI scanner.
A computer's or device's bus speed or throughput is measured in bits per second or megabytes per second.

Tri State Buffer Gate

http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/CompOrg/tristate.html

Picture of ROM, RAM

[image: RAM]

[image: ROM]

Picture of Transistor and Logic Gate

[images: logic gate, transistor]
Register Allocation

Register allocation is the process of multiplexing a large number of target program variables onto a small number of CPU registers. The goal is to keep as many operands as possible in registers to maximise the execution speed of software programs. Register allocation can happen over a basic block (local register allocation), over a whole function/procedure (global register allocation), or in-between functions as a calling convention (interprocedural register allocation).

Register Transfer

In computer science, register transfer language (RTL) is a term used to describe a kind of intermediate representation (IR) that is very close to assembly language, such as that which is used in a compiler. Academic papers and textbooks also often use a form of RTL as an architecture-neutral assembly language. RTL is also the name of a specific IR used in the GNU Compiler Collection, and several other compilers, such as Zephyr[1]

Micro Operation

In computer central processing units, micro-operations (also known as micro-ops or μops) are detailed low-level instructions used in some designs to implement complex machine instructions (sometimes termed macro-instructions in this context).
Various forms of μops have long been the basis for traditional microcode routines used to simplify the implementation of a particular CPU design, or perhaps just the sequencing of certain multi-step operations or addressing modes. More recently, μops have also been employed in a different way to let modern "CISC" processors more easily handle asynchronous, parallel, and speculative execution: as with traditional microcode, one or more table lookups (or equivalent) are done to locate the appropriate μop sequence based on the encoding and semantics of the machine instruction (the decoding or translation step); however, instead of rigid μop sequences controlling the CPU directly from a microcode ROM, the sequences are dynamically issued (i.e. buffered) before being used.
This dynamic method means that the fetch and decode stages can be more detached from the execution units than is feasible in a more traditional microcoded (or "hard-wired") design. As this allows a degree of freedom regarding execution order, it makes some extraction of instruction-level parallelism out of a normal single-threaded program possible (provided that dependencies are checked, etc.). The buffering also opens up for certain analysis and reordering of code sequences "on the fly" in order to dynamically optimize the mapping and scheduling of μops onto machine resources (such as ALUs, load/store units, etc.). This often means intermixed sequences of sub-operations (μops) generated for different machine instructions (forming partially reordered machine instructions).

Overflow

Arithmetic Overflow

The term arithmetic overflow, or simply overflow, has the following meanings.
In a digital computer, the condition that occurs when a calculation produces a result that is greater in magnitude than what a given register or storage location can store or represent.
In a digital computer, the amount by which a calculated value is greater than that which a given register or storage location can store or represent. Note that the overflow may be placed at another location.
Most computers distinguish between two kinds of overflow condition. A carry occurs when the result of an addition or subtraction, considering the operands and result as unsigned numbers, does not fit in the result. Therefore, it is useful to check the carry flag after adding or subtracting numbers that are interpreted as unsigned values. An overflow proper occurs when the result does not have the sign that one would predict from the signs of the operands (e.g. a negative result when adding two positive numbers). Therefore, it is useful to check the overflow flag after adding or subtracting numbers that are represented in two's complement form (i.e. they are considered signed numbers).
There are several methods of handling overflow:
Design: by selecting correct data types, both length and signed/unsigned.
Avoidance: by carefully ordering operations and checking operands in advance, it is possible to ensure that the result will never be larger than can be stored.
Handling: if overflow is anticipated, it can be detected when it happens and handled with further processing. Example: it is possible to add two numbers, each two bytes wide, using just byte additions in steps: first add the low bytes, then add the high bytes; a carry out of the low bytes is arithmetic overflow of the byte addition, and it must be detected so that the sum of the high bytes can be incremented. CPUs generally have a way of detecting this to support addition of numbers larger than their register size, typically using a status bit.
Propagation: if a value is too large to be stored, it can be assigned a special value indicating that overflow has occurred, and all successive operations then return this flag value. This is useful so that the problem can be checked for once at the end of a long calculation rather than after each step. This is often supported in floating-point hardware, called FPUs.
Ignoring: this is the most common approach, but it gives incorrect results and can compromise a program's security.
Division by zero is not a form of arithmetic overflow. Mathematically, division by zero within the reals is explicitly undefined; it is not that the value is too large but rather that it has no value.


Advantage and Disadvantage of Complements

Advantage of 1's complement:
1. No need to compare numbers.
2. The sign bit can be treated like any other bit.
3. Addition and subtraction can be handled uniformly by the adder, provided the subtrahend is input as a negative number.
Disadvantage of 1's complement:
1. An end-around carry, if generated, needs to be added back in.
2. '0' has two representations: all 0s and all 1s, representing positive and negative zero respectively.
Advantage of 2's complement:
1. In 2's complement there is only one way to represent 0. This simplifies the representation scheme.
2. The biggest advantage of 2's complement over 1's complement is that no extra hardware is needed to detect when a result drops below zero. Subtraction is done by adding the two's complement of the subtrahend and discarding the carry out:
Ex. 11110 - 11101: the two's complement of 11101 is 00011; 11110 + 00011 = 100001; discarding the carry gives 00001.
Ex. 100000 - 011101: the two's complement of 011101 is 100011; 100000 + 100011 = 1000011; discarding the carry gives 000011.
Also note that the MSB is effectively a sign bit (0 = positive, 1 = negative), so you get easy sign checking with 2's complement as well. Most modern systems employ 2's complement for representing signed values.
Disadvantage of 2's complement:
1. Forming the 2's complement requires an extra addition step (invert, then add 1), which slows down the circuit and offsets some of the transistor savings.

High impedence

In electronics, high impedance (also known as hi-Z, tri-stated, or floating) is the state of an output terminal which is not currently driven by the circuit. In digital circuits, it means that the signal is neither driven to a logical high nor to a logical low level - hence "tri-stated". Such a signal can be seen as an open circuit (or "floating" wire) because connecting it to a (low-impedance) circuit will not affect that circuit; it will instead itself be pulled to the same voltage as the actively driven output. The combined input/output pins found on many ICs are actually tri-state-capable outputs which have been internally connected to inputs. This is the basis for bus systems in computers, among many other uses.

Monday, June 23, 2008

Self-complementing Codes....

Self-complementing Codes.....

  • A self-complementing code is one in which the 9's complement of a decimal digit is represented by the 1's complement of its binary code.
    • Ex: The 9's complement of 7 is 2 in decimal. In 2421 code, 7 = 1101 and 2 = 0010, which are bitwise complements of each other.
  • Note: if a weighted code is self-complementing, the total weight must be 9. In this respect, BCD (8421) code is not self-complementing.
  • XS-3 code is another example of a self-complementing code.

Excess-3

Excess-3


Excess-3 binary coded decimal (XS-3), also called biased representation or Excess-N, is a numeral system used on some older computers that uses a pre-specified number N as a biasing value. It is a way to represent values with a balanced number of positive and negative numbers. In XS-3, numbers are represented as decimal digits, and each digit is represented by four bits as the BCD value plus 3 (the "excess" amount):

  • The smallest binary number represents the smallest value (i.e. 0 − excess value).
  • The greatest binary number represents the largest value (i.e. 2^N − excess value − 1).
Decimal  Binary | Decimal  Binary | Decimal  Binary | Decimal  Binary
  -3     0000   |    1     0100   |    5     1000   |    9     1100
  -2     0001   |    2     0101   |    6     1001   |   10     1101
  -1     0010   |    3     0110   |    7     1010   |   11     1110
   0     0011   |    4     0111   |    8     1011   |   12     1111

To encode a number such as 127, then, one simply encodes each of the decimal digits as above, giving (0100, 0101, 1010).

The primary advantage of XS-3 coding over BCD coding is that a decimal number can be nines' complemented (for subtraction) as easily as a binary number can be ones' complemented; just invert all bits.

Adding in Excess-3 works on a different algorithm than BCD coding or regular binary numbers. When you add two XS-3 numbers together, the raw result is not an XS-3 number. For instance, when you add 1 and 0 in XS-3, the answer seems to be 4 instead of 1. To correct this, after adding each digit you subtract 3 (binary 0011) if the digit sum was less than decimal 10, and add 3 if it was greater than or equal to decimal 10.