Monday, June 30, 2008

Gates

The manipulation of binary information is done by logic circuits called gates. Gates are blocks of hardware that produce signals of binary 1 or 0 when input logic requirements are satisfied. A variety of logic gates are commonly used in digital computer systems. Each gate has a distinct graphic symbol and its operation can be described by means of an algebraic expression. The input-output relationship of the binary variables for each gate can be represented in tabular form by a truth table.
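The input-output behaviour described above can be sketched in a few lines of Python (purely for illustration; the function names are our own, not part of any standard library):

```python
# Model basic gates as functions of binary inputs (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b
def NOT(a):    return 1 - a

# Print each gate's truth table by enumerating all input combinations.
for gate in (AND, OR, XOR):
    print(gate.__name__)
    for a in (0, 1):
        for b in (0, 1):
            print(f"  {a} {b} | {gate(a, b)}")
```

Each printed table is exactly the tabular input-output relationship the paragraph describes.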

Saturday, June 28, 2008

REGISTER TRANSFER AND MICROOPERATIONS

Refer http://artoa.hanbat.ac.kr/lecture_data/computer_architecture/02.pdf

SHIFT REGISTER


A register that is capable of shifting data one bit at a time is called a shift register. The logical configuration of a serial shift register consists of a chain of flip-flops connected in cascade, with the output of one flip-flop connected to the input of its neighbour. The operation of the shift register is synchronous: each flip-flop is connected to a common clock. The simplest shift registers are built from D flip-flops.
More details can be found at the following sites:
scitec.uwichill.edu.bb/cmp/online/P10F/shift.htm
www.doctronics.co.uk/4014.htm
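A minimal software model of the flip-flop chain (a sketch, not a hardware description): on each clock tick every flip-flop takes on its neighbour's value, and the serial input enters at one end.

```python
def shift_register(state, serial_in):
    """One clock tick: each flip-flop takes its neighbour's value,
    and serial_in enters the first flip-flop."""
    return [serial_in] + state[:-1]

state = [0, 0, 0, 0]              # 4 D flip-flops, all cleared
for bit in [1, 0, 1, 1]:          # shift in the serial sequence 1,0,1,1
    state = shift_register(state, bit)
print(state)                      # [1, 1, 0, 1]
```

After four clock ticks the whole 4-bit word has been loaded serially, one bit per tick.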

Difference between LAN and WAN

LAN versus WAN
To define a LAN
Up to now we've been talking about Ethernet and I've made reference to the fact that Ethernet is a LAN.
A LAN is a Local Area Network. "Local" generally refers to a network contained within a building, an office, or a campus.
Examples:
You might have a LAN for example on a University campus or between office blocks in an office park.
A big corporation, such as Anglo American, would generally have a LAN that might span several buildings.
Setting up a LAN is, relatively speaking, cheap. If you want to put an extra couple of network points or an extra couple of devices on the network, it's not very expensive to do that.
To define a WAN
Using a similar example, a Wide Area Network is a network that connects campuses.
What I'm going to do is write down some short descriptions of what a WAN is:
1. A WAN is generally slow. We said that Ethernet could run at up to 1000 Mbit/s; currently, certainly in South Africa, the fastest WAN is 155 Mbit/s. So in a LAN we can talk at up to 1000 Mbit/s, whereas in a WAN, today in South Africa, we can only manage roughly a tenth of that speed.
2. WANs are expensive. If we look at the cost of telecommunications, connecting two offices, one in Pretoria and one in Johannesburg, is an expensive operation even for a slow line.
One of the differences between a WAN (Wide Area Network) and a LAN (Local Area Network) is the set-up cost. WANs are generally used to connect remote offices, by which we mean offices outside the campus. For example, if we have an office in Pretoria and an office in Cape Town, these are remote offices; there is no way to run a LAN between Cape Town and Pretoria. In a LAN we connect local offices, whereas in a WAN we connect remote offices.

Logic Microoperation

http://www.mans.edu.eg/FacEng/english/computers/PDFS/PDF3/1.3.pdf

DIFFERENCE BETWEEN ARITHMETIC AND LOGICAL SHIFT

All shifts can be categorized as logical, arithmetic, or circular. In a logical shift, a zero enters at the input of the shifter and the bit shifted out is clocked into the carry flip-flop of the CCR. An arithmetic shift left is identical to a logical shift left, but an arithmetic shift right causes the most significant bit, the sign bit, to be propagated right. This action preserves the correct sign of a two's complement value. For example, if the bytes 00101010 and 10101010 are shifted one place right (arithmetically), the results are 00010101 and 11010101, respectively.
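The logical/arithmetic distinction can be sketched in Python. Python integers are unbounded, so the byte width is modeled explicitly; the function names are illustrative, not standard:

```python
def logical_shift_right(value, n, width=8):
    """Vacated positions are filled with zeros."""
    return (value & ((1 << width) - 1)) >> n

def arithmetic_shift_right(value, n, width=8):
    """The sign bit (MSB) is propagated into the vacated positions."""
    value &= (1 << width) - 1
    sign = value >> (width - 1)
    for _ in range(n):
        value = (value >> 1) | (sign << (width - 1))
    return value

# The two bytes from the text, shifted one place right arithmetically:
assert arithmetic_shift_right(0b00101010, 1) == 0b00010101
assert arithmetic_shift_right(0b10101010, 1) == 0b11010101
# A logical shift of the negative byte fills with 0 instead:
assert logical_shift_right(0b10101010, 1) == 0b01010101
```

Note how only the arithmetic shift keeps the top bit set for 10101010, preserving the two's complement sign.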

PROGRAM COUNTER

program counter

PC, or "instruction address register"...
A register in the central processing unit that contains the address of the next instruction to be executed. The PC is automatically incremented after each instruction is fetched, so that it points to the following instruction. It is not normally manipulated like an ordinary register; instead, special instructions alter the flow of control by writing a new value to the PC, e.g. JUMP and CALL.
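A toy fetch loop makes the auto-increment visible (this is a hypothetical mini-machine invented for illustration; the opcodes and program are not from any real instruction set):

```python
# Hypothetical mini-machine: a program is a list of (opcode, operand) pairs.
program = [("LOAD", 5), ("JUMP", 0), ("HALT", None)]

pc = 0
trace = []
for _ in range(4):                # run a few fetch-execute cycles
    opcode, operand = program[pc]
    pc += 1                       # the PC auto-increments after each fetch
    trace.append(opcode)
    if opcode == "JUMP":
        pc = operand              # JUMP alters flow by writing a new PC value
    elif opcode == "HALT":
        break
print(trace)                      # ['LOAD', 'JUMP', 'LOAD', 'JUMP']
```

Because the JUMP at address 1 keeps writing 0 back into the PC, the machine loops between the first two instructions and never reaches HALT.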

RIGHT AND LEFT BIT SHIFTING OPERATORS

Bit shifting, as the name suggests, shifts the bits in a byte (or bytes). There are basically two directions in which the bits can be shifted: to the right or to the left. Thus we have two bit shifting operators.
If you think about it, it's clear that bit shifting needs two pieces of data: the value whose bits are to be shifted, and the number of positions to shift by. These are exactly the two operands the operators take.
Right Bit Shifting Operator (>>)
Syntax: res = var >> num;
This would shift all bits in the variable var, num places to the right which would get stored to the variable res. So for example if var has the following bit structure:
var = 00110101 (decimal 53)
And we do the following operation:
res = var >> 2;
We would get res as:
res = 00001101 (decimal 13)
As you can see, shifting the bits to the right discards two bits from the right and introduces two 0s on the left.
Left Bit Shift Operator (<<)
It is similar to the right shift operator except that the direction of shifting is opposite. The following example should be enough to explain it:
var = 00110101 (decimal 53)
And we do the following operation:
res = var << 2;
We would get res as:
res = 11010100 (decimal 212)
Just the opposite here: bits are discarded from the left and new 0s are introduced on the right.
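Both examples above can be checked directly in Python. One caveat: Python integers are unbounded, so the left-shift result is masked with 0xFF here to mimic a single byte:

```python
var = 0b00110101            # decimal 53
res = var >> 2
assert res == 0b00001101    # decimal 13

res = (var << 2) & 0xFF     # mask to 8 bits to mimic a byte register
assert res == 0b11010100    # decimal 212
```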

ARITHMETIC RIGHT SHIFT


For example, in the x86 instruction set, the SAR instruction (arithmetic right shift) divides a signed number by a power of two, rounding towards negative infinity.[1] However, the IDIV instruction (signed divide) divides a signed number, rounding towards zero. So a SAR instruction cannot be substituted for an IDIV by a power of two, nor vice versa.
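Python's own operators illustrate the same two rounding rules: `>>` rounds toward negative infinity (SAR-style), while truncating division rounds toward zero (IDIV-style):

```python
import math

x = -7
# SAR-style: arithmetic right shift divides by 2, rounding toward -infinity
assert x >> 1 == -4
assert math.floor(x / 2) == -4
# IDIV-style: signed division rounds toward zero
assert int(x / 2) == -3
```

For -7 / 2, the two conventions land on different integers (-4 vs -3), which is exactly why the instructions are not interchangeable.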

Friday, June 27, 2008

Bit Shift Operators

A processor's registers have a fixed number of bits for storing numbers, so it is possible to "shift out" bits of a register at one end and "shift in" bits at the other end. On many architectures the shift count is taken modulo the register width (for example, modulo 32 on a 32-bit machine).
The bit shift operators perform bitwise operations on the binary representation of an integer rather than on its numerical value. Unlike the other bitwise operators, they do not operate on pairs of corresponding bits; instead, the digits are moved, or shifted, left or right within a register by the distance specified by a number.

More information: http://www.roseindia.net/java/master-java/bitwise-bitshift-operators.shtml

LOGIC GATE

A logic gate is an elementary building block of a digital circuit . Most logic gates have two inputs and one output. At any given moment, every terminal is in one of the two binary conditions low (0) or high (1), represented by different voltage levels. The logic state of a terminal can, and generally does, change often, as the circuit processes data. In most logic gates, the low state is approximately zero volts (0 V), while the high state is approximately five volts positive (+5 V).
There are seven basic logic gates: AND, OR, XOR, NOT, NAND, NOR, and XNOR.

ARITHMETIC SHIFT

In computer programming, an arithmetic shift is a shift operator, sometimes known as a signed shift (though it is not restricted to signed operands). For binary numbers it is a bitwise operation that shifts all of the bits of its operand; every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled in. What makes an arithmetic shift different from a logical shift is what the empty bit positions are filled with.

CIRCULAR SHIFT

In computer science, a circular shift is a shift operator that shifts all bits of its operand. Unlike an arithmetic shift, a circular shift does not preserve a number's sign bit or distinguish a number's exponent from its mantissa. Unlike a logical shift, the vacant bit positions are not filled in with zeros but are filled in with the bits that are shifted out of the sequence.
Circular shifts are used often in cryptography as part of the permutation of bit sequences.

Example
If the bit sequence 0001 0111 were subjected to a circular shift of one bit position...
to the left would yield: 0010 1110
to the right would yield: 1000 1011.
If the bit sequence 0001 0111 were subjected to a circular shift of three bit positions...
to the left would yield: 1011 1000
to the right would yield: 1110 0010.
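The rotations above can be sketched in Python (the 8-bit width is modeled explicitly; the function names are illustrative):

```python
def rotl(value, n, width=8):
    """Circular left shift: bits leaving the top re-enter at the bottom."""
    n %= width
    mask = (1 << width) - 1
    return ((value << n) | (value >> (width - n))) & mask

def rotr(value, n, width=8):
    """Circular right shift, expressed as a complementary left rotation."""
    return rotl(value, width - (n % width), width)

# The four examples from the text:
assert rotl(0b00010111, 1) == 0b00101110
assert rotr(0b00010111, 1) == 0b10001011
assert rotl(0b00010111, 3) == 0b10111000
assert rotr(0b00010111, 3) == 0b11100010
```

Unlike a logical shift, no bits are lost: rotating left by n and then right by n restores the original value.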

logical right shift

In computer science, a logical shift is a shift operator that shifts all the bits of its operand. Unlike an arithmetic shift, a logical shift does not preserve a number's sign bit or distinguish a number's exponent from its mantissa; every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled in, generally with zeros (compare with a circular shift).
A logical shift is often used when its operand is being treated as a sequence of bits rather than as a number.

LOGICAL SHIFT


In computer science, a logical shift is a shift operator that shifts all the bits of its operand. Unlike an arithmetic shift, a logical shift does not preserve a number's sign bit or distinguish a number's exponent from its mantissa; every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled in, generally with zeros (compare with a circular shift).
A logical shift is often used when its operand is being treated as a sequence of bits rather than as a number.
Logical shifts can be useful as efficient ways of performing multiplication or division of unsigned integers by powers of two. Shifting left by n bits on a signed or unsigned binary number has the effect of multiplying it by 2n. Shifting right by n bits on an unsigned binary number has the effect of dividing it by 2n (rounding towards 0).
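These identities are easy to verify in Python (for unsigned-style values):

```python
x = 13
assert x << 3 == x * 2**3 == 104   # shift left by n multiplies by 2^n
assert x >> 2 == x // 2**2 == 3    # shift right by n divides by 2^n, rounding toward 0
```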

Shift operations

http://en.wikipedia.org/wiki/Bitwise_operation

Thursday, June 26, 2008

DEVICES THAT HAVE USED SYMBIAN OS.

On November 16, 2006, the 100 millionth smartphone running the OS was shipped.[6]
Ericsson R380 (2000) was the first commercially available phone based on Symbian OS. As with the modern "FOMA" phones, this device was closed, and the user could not install new C++ applications. Unlike those, however, the R380 could not even run Java applications, and for this reason, some have questioned whether it can properly be termed a 'smartphone'.
Nokia 9210 Communicator smartphone (32-bit 66 MHz ARM9-based RISC CPU) (2001), 9300 Communicator (2004), 9500 Communicator (2004) using the Nokia Series 80 interface
UIQ interface:
Used for PDAs such as Sony Ericsson P800 (2002), P900 (2003), P910 (2004), P990 (2005), W950 (2006), M600 (2006), P1 (2007), W960 (2007), G700 (2008), G900 (2008), G702 (2008), Motorola A920, A925, A1000, RIZR Z8, RIZR Z10, DoCoMo M1000, BenQ P30, P31 and Nokia 6708 using this interface.
Nokia S60 (2002)
Nokia S60 is used in various phones, the first being the Nokia 7650, then the Nokia 3650, followed by the Nokia 3620/3660, Nokia 6600, Nokia 7610, Nokia 6670 and Nokia 3230. The Nokia N-Gage and Nokia N-Gage QD gaming/smartphone combos are also S60 platform devices. It was also used on other manufacturers' phones such as the Siemens SX1, Sendo X, Panasonic X700, Panasonic X800, Samsung SGH-D730, SGH-D720 and the Samsung SGH-Z600. Recent, more advanced devices using S60 include the Nokia 6620, Nokia 6630, the Nokia 6680, Nokia 6681 and Nokia 6682, a next generation Nseries, including the Nokia N70, Nokia N71, Nokia N72, Nokia N73, Nokia N75, Nokia N80, Nokia N81, Nokia N82, Nokia N90, Nokia N91, Nokia N92, Nokia N93 and Nokia N95, and the enterprise (i.e. business) model Eseries, including the Nokia E50, Nokia E51 Nokia E60, Nokia E61, Nokia E62, Nokia E65, and Nokia E70. For an up to date list, refer to the Symbian S60 website.
Nokia 7710 (2004) using the Nokia Series 90 interface.
Nokia 6120 classic, Nokia 6121 classic
Fujitsu, Mitsubishi, Sony Ericsson and Sharp phones for NTT DoCoMo in Japan, using an interface developed specifically for DoCoMo's FOMA ("Freedom of Mobile Access") network brand. This UI platform is called MOAP ("Mobile Oriented Applications Platform") and is based on the UI from earlier Fujitsu FOMA models.

Symbian os

How does Symbian OS work?
As an operating system, Symbian OS provides the underlying routines and services for application software. For example, an email application that interacts with the user through a mobile phone screen and downloads email messages to the phone's inbox over a mobile network or WiFi is using the communication protocols and file management routines provided by Symbian OS.
Symbian OS technology has been designed with these key points in mind:
to provide power, memory and input & output resource management specifically required in mobile devices
to deliver an open platform that complies with global telecommunications and Internet standards
to provide tools for developing mobile software for business, media and other applications
to ensure the wide availability of applications and accessories for different user requirements
to facilitate wireless connectivity for a variety of networks

Wednesday, June 25, 2008

MOTHER BOARD

The main circuit board of a microcomputer. The motherboard contains the connectors for attaching additional boards. Typically, the motherboard contains the CPU, BIOS, memory, mass storage interfaces, serial and parallel ports, expansion slots, and all the controllers required to control standard peripheral devices, such as the display screen, keyboard, and disk drive. Collectively, all these chips that reside on the motherboard are known as the motherboard's chipset.
On most PCs, it is possible to add memory chips directly to the motherboard. You may also be able to upgrade to a faster PC by replacing the CPU chip. To add additional core features, you may need to replace the motherboard entirely.

NORMALIZATION ADVANTAGE

The advantages of normalizing floating-point numbers are:
1) The representation is unique: there is exactly one way to write a real number in such a form.
2) It is easy to compare two normalized numbers: you separately test the sign, exponent and mantissa.
3) In normalized form, a fixed-size mantissa uses all its 'digit cells' to store significant digits.

Difference between decoder and encoder

Encoding is the process of converting the original message into a coded one, whereas decoding is the process of taking a coded message and converting it back to the original message. The process can be an algorithm that is applied for decoding and encoding, or they could be different. The words cipher and decipher can be applied in the same way.
An encoder is a device that is used to convert a signal or certain data into code. This kind of conversion is done for a variety of reasons, the most common being data compression. Other reasons for using encoders include data encryption for making the data secure and translating data from one code to another new or existing code. Encoders may be analog or digital devices. In analog devices, the encoding is done using analog circuitry, while in digital encoders the encoding is done using program algorithms. Some examples of encoders are multiplexers, compressors, and linear and rotary encoders.

A decoder, on the other hand, functions the reverse of an encoder. It is a device that is used to decode an encoded signal or data. It does this to help retrieve the data that was encoded in the first place. Both encoders and decoders usually function in tandem, i.e., an application that uses an encoder would ideally also require a decoder. There are different types of decoders, e.g. demultiplexers.

Overflow

OVERFLOW

*carry out is the extra 1 bit we generate that doesn’t fit.
*overflow is the condition where the answer is incorrect.
*With unsigned addition, we have carry out iff we have overflow.

With signed addition, this is not the case:
– (-3)+(-4) produces a carry out but no overflow (-7 is the right answer).
– 4+4 and 4+5 do not produce a carry out, but produce overflow (-8 and -7 are the wrong answers).
– (-4)+(-5) produces a carry out and overflows.

How can we tell if we had a true overflow?
If the carry in and carry out of the most significant bit are different, we have a problem.
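The carry-in/carry-out rule can be sketched for a 4-bit adder (illustrative Python, not a hardware description; the function name is our own):

```python
def add4(a, b):
    """Add two 4-bit two's complement values; report carry-out and overflow."""
    a &= 0xF
    b &= 0xF
    total = a + b
    carry_out = (total >> 4) & 1
    # carry INTO the most significant bit = carry out of bit position 2
    carry_in_msb = (((a & 0x7) + (b & 0x7)) >> 3) & 1
    overflow = carry_in_msb != carry_out
    return total & 0xF, carry_out, overflow

# (-3) + (-4): carry out, but no overflow (result 0b1001 is -7, correct)
assert add4(-3, -4) == (0b1001, 1, False)
# 4 + 4: no carry out, but overflow (result 0b1000 would read as -8)
assert add4(4, 4) == (0b1000, 0, True)
```

The overflow flag is set exactly when the carry into the MSB differs from the carry out of it, matching all three bullet cases above.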

Operating System

tri state buffer

Another type of special gate output is called tristate, because it has the ability to provide three different output modes: current sinking ("low" logic level), current sourcing ("high"), and floating ("high-Z", or high-impedance). Tristate outputs are usually found as an optional feature on buffer gates. Such gates require an extra input terminal to control the "high-Z" mode, and this input is usually called the enable.

Tuesday, June 24, 2008

Application of GRAY CODE

1. The first application is calculation of the weighted polynomial of a group code...
2. The second application is finding the solution to a variation of the Tower of Hanoi problem...
3. The third application is finding a Hamiltonian path in a generalized hypercube network...

MULTIPLEXER(MUX)

In electronics, a multiplexer or mux (occasionally the term muldex is also found, for a combination multiplexer-demultiplexer) is a device that performs multiplexing; it selects one of many analog or digital input signals and outputs that into a single line.
An electronic multiplexer makes it possible for several signals to share one expensive device or other resource, for example one A/D converter or one communication line, instead of having one device per input signal.
In electronics, a demultiplexer (or demux) is a device taking a single input signal and selecting one of many data-output-lines, which is connected to the single input. A multiplexer is often used with a complementary demultiplexer on the receiving end.

signed magnitude representation

There are many schemes for representing negative integers with patterns of bits. One scheme is sign-magnitude. It uses one bit (usually the leftmost) to indicate the sign: "0" indicates a positive integer, and "1" indicates a negative integer. The rest of the bits are used for the magnitude of the number.
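A sketch of the encoding in Python (8-bit width; the function name is illustrative):

```python
def sign_magnitude(n, width=8):
    """Encode n in sign-magnitude: the leftmost bit is the sign,
    the remaining width-1 bits hold the magnitude."""
    sign = 1 if n < 0 else 0
    magnitude = abs(n)
    assert magnitude < (1 << (width - 1)), "magnitude does not fit"
    return (sign << (width - 1)) | magnitude

assert format(sign_magnitude(5), "08b") == "00000101"
assert format(sign_magnitude(-5), "08b") == "10000101"
```

Note that +5 and -5 differ only in the sign bit, and that this scheme (unlike two's complement) gives two encodings of zero: 00000000 and 10000000.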

BUS

When referring to a computer, the bus (also known as the address bus, data bus, or local bus) is a data connection between two or more devices connected to the computer. A computer bus is a method of transmitting data from one part of the computer to another. The bus connects all devices to the computer's CPU and main memory. The data bus transfers the actual data, whereas the address bus transfers information about where the data should go.
For example, a bus enables a computer processor to communicate with the memory, or a video card to communicate with the memory.
A bus can be parallel or serial, and today's computers use two types of buses: an internal or local bus and an external bus. An internal bus enables communication between internal components, such as a video card and memory, while an external bus communicates with external components, such as a SCSI scanner.
A computer or device's bus speed, or throughput, is typically measured in bits per second or megabytes per second.

Tri State Buffer Gate

http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/CompOrg/tristate.html

Picture of ROM, RAM

RAM (image not preserved)

ROM (image not preserved)

Picture of Transistor and Logic Gate

Logic gate and transistor (images not preserved)

Register Allocation

Register allocation is the process of multiplexing a large number of target program variables onto a small number of CPU registers. The goal is to keep as many operands as possible in registers to maximise the execution speed of software programs. Register allocation can happen over a basic block (local register allocation), over a whole function/procedure (global register allocation), or in-between functions as a calling convention (interprocedural register allocation).

Register Transfer

In computer science, register transfer language (RTL) is a term used to describe a kind of intermediate representation (IR) that is very close to assembly language, such as that which is used in a compiler. Academic papers and textbooks also often use a form of RTL as an architecture-neutral assembly language. RTL is also the name of a specific IR used in the GNU Compiler Collection and several other compilers, such as Zephyr.[1]

Micro Operation

In computer central processing units, micro-operations, also known as micro-ops or μops, are detailed low-level instructions used in some designs to implement complex machine instructions (sometimes termed macro-instructions in this context).
Various forms of μops have long been the basis for traditional microcode routines, used to simplify the implementation of a particular CPU design or perhaps just the sequencing of certain multi-step operations or addressing modes. More recently, μops have also been employed in a different way, to let modern "CISC" processors more easily handle asynchronous, parallel and speculative execution: as with traditional microcode, one or more table lookups (or equivalent) are done to locate the appropriate μop sequence based on the encoding and semantics of the machine instruction (the decoding or translation step); however, instead of rigid μop sequences controlling the CPU directly from a microcode ROM, sequences are dynamically issued (i.e. buffered) before being used.
This dynamic method means that the fetch and decode stages can be more detached from the execution units than is feasible in a more traditional microcoded (or "hard-wired") design. As this allows a degree of freedom regarding execution order, it makes it possible to extract some instruction-level parallelism from a normal single-threaded program (provided that dependencies are checked, etc.). The buffering also opens the way for analysis and reordering of code sequences "on the fly", in order to dynamically optimize the mapping and scheduling of μops onto machine resources (such as ALUs and load/store units). This often means intermixed sequences of sub-operations (μops) generated for different machine instructions (forming partially reordered machine instructions).

Overflow

Arithmetic Overflow

The term arithmetic overflow, or simply overflow, has the following meanings.
In a digital computer, the condition that occurs when a calculation produces a result that is greater in magnitude than what a given register or storage location can store or represent.
In a digital computer, the amount by which a calculated value is greater than that which a given register or storage location can store or represent. Note that the overflow may be placed at another location.
Most computers distinguish between two kinds of overflow condition. A carry occurs when the result of an addition or subtraction, considering the operands and result as unsigned numbers, does not fit in the result. Therefore, it is useful to check the carry flag after adding or subtracting numbers that are interpreted as unsigned values. An overflow proper occurs when the result does not have the sign that one would predict from the signs of the operands (e.g. a negative result when adding two positive numbers). Therefore, it is useful to check the overflow flag after adding or subtracting numbers that are represented in two's complement form (i.e. they are considered signed numbers).
There are several methods of handling overflow:
Design: by selecting correct data types, both length and signed/unsigned.
Avoidance: by carefully ordering operations and checking operands in advance, it is possible to ensure that the result will never be larger than can be stored.
Handling: if it is anticipated that overflow may occur, it can be detected when it happens and handled with further processing. For example, it is possible to add two numbers, each two bytes wide, using just byte addition in steps: first add the low bytes, then add the high bytes. If the low-byte addition carries out, this is arithmetic overflow of the byte addition, and it is necessary to detect it and increment the sum of the high bytes. CPUs generally have a way of detecting this, typically via a status bit, to support addition of numbers larger than their register size.
Propagation: if a value is too large to be stored, it can be assigned a special value indicating that overflow has occurred, and all successive operations then return this flag value. This is useful so that the problem can be checked for once at the end of a long calculation rather than after each step. This is often supported in floating-point hardware, called FPUs.
Ignoring: this is the most common approach, but it gives incorrect results and can compromise a program's security.
Division by zero is not a form of arithmetic overflow. Mathematically, division by zero within the reals is explicitly undefined; it is not that the value is too large but rather that it has no value.
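The byte-at-a-time addition described under "Handling" can be sketched like this (Python; the explicit widths and the function name are illustrative):

```python
def add16_bytewise(a, b):
    """Add two 16-bit unsigned values one byte at a time, propagating carry."""
    lo = (a & 0xFF) + (b & 0xFF)
    carry = lo >> 8                  # overflow of the low-byte addition
    hi = (a >> 8) + (b >> 8) + carry # fold the carry into the high bytes
    return ((hi & 0xFF) << 8) | (lo & 0xFF), hi >> 8  # (result, final carry)

assert add16_bytewise(0x12FF, 0x0001) == (0x1300, 0)   # low-byte carry absorbed
assert add16_bytewise(0xFFFF, 0x0001) == (0x0000, 1)   # overflow of the full 16 bits
```

The final carry returned here is exactly what a CPU's carry status bit would report after the high-byte add.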


Advantage and Disadvantage of Complements

Advantage of 1's complement:
1. No need to compare numbers.
2. The sign bit can be treated like any other bit.
3. Addition and subtraction can be handled uniformly by the adder, provided the subtrahend is input as a negative number.
Disadvantage of 1's complement:
1. An end-around carry, if generated, needs to be added back in.
2. '0' has two representations, with all 0's and all 1's representing positive and negative zero respectively.
Advantage of 2's complement:
1. In 2's complement there is only one way to represent 0. This simplifies our representation scheme.
2. The biggest advantage of 2's complement over 1's complement is that you don't need extra hardware to detect when a result drops below zero.
3. The MSB is effectively a sign bit (0 = positive, 1 = negative), so you get easy sign checking as well. Most modern systems employ 2's complement for representing signed values.
Subtraction is done by adding the 2's complement of the subtrahend and dropping the carry out of the top bit:
Ex. 11110 - 11101: 11110 + 00011 = 100001, drop the carry => 00001
Ex. 100000 - 011101: 100000 + 100011 = 1000011, drop the carry => 000011
Disadvantage of 2's complement:
1. Forming a 2's complement requires an extra add/subtract step, which slows down the circuit and offsets some of the transistor savings.
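The subtraction example with 11110 - 11101 can be checked with a small sketch (5-bit values, Python; function names are illustrative):

```python
def ones_complement(n, width=5):
    """Invert every bit within the given width."""
    return (~n) & ((1 << width) - 1)

def twos_complement(n, width=5):
    """1's complement plus one, wrapped to the given width."""
    return (ones_complement(n, width) + 1) & ((1 << width) - 1)

# Subtraction via 2's complement: 11110 - 11101
a, b = 0b11110, 0b11101
total = a + twos_complement(b)        # 11110 + 00011 = 100001
assert total & 0b11111 == 0b00001     # dropping the carry gives the answer

# 1's complement has two zeros; 2's complement has only one
assert ones_complement(0b00000) == 0b11111
assert twos_complement(0b00000) == 0b00000
```

The last two assertions show concretely why 2's complement's single zero is listed as an advantage.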

High impedence

In electronics, high impedance (also known as hi-Z, tri-stated, or floating) is the state of an output terminal which is not currently driven by the circuit. In digital circuits, it means that the signal is neither driven to a logical high nor to a logical low level; hence "tri-stated". Such a signal can be seen as an open circuit (or "floating" wire) because connecting it to a (low impedance) circuit will not affect that circuit; it will instead itself be pulled to the same voltage as the actively driven output. The combined input/output pins found on many ICs are actually tri-state capable outputs which have been internally connected to inputs. This is the basis for bus systems in computers, among many other uses.

Monday, June 23, 2008

Self-complementing Codes....

Self-complementing Codes.....

  • A self-complementing code is one whose 9's complement in decimal is the 1's complement in binary.
    • Ex: The 9's complement of 7 is 2 in decimal. In 2421 code, 7 = 1101 and 2 = 0010.
  • Note: if a weighted code is self-complementing, the total weight must be 9. In this respect, BCD code is not self-complementing.
  • XS3 code is another example of a self-complementing code.

Excess-3

Excess-3


Excess-3 binary coded decimal (XS-3), also called biased representation or Excess-N, is a numeral system used on some older computers that uses a pre-specified number N as a biasing value. It is a way to represent values with a balanced number of positive and negative numbers. In XS-3, numbers are represented as decimal digits, and each digit is represented by four bits as the BCD value plus 3 (the "excess" amount):

  • The smallest binary number represents the smallest value. (i.e. 0 - Excess Value)
  • The greatest binary number represents the largest value. (i.e. 2N - Excess Value - 1)
Decimal  Binary    Decimal  Binary    Decimal  Binary    Decimal  Binary
-3       0000      1        0100      5        1000      9        1100
-2       0001      2        0101      6        1001      10       1101
-1       0010      3        0110      7        1010      11       1110
 0       0011      4        0111      8        1011      12       1111

To encode a number such as 127, then, one simply encodes each of the decimal digits as above, giving (0100, 0101, 1010).

The primary advantage of XS-3 coding over BCD coding is that a decimal number can be nines' complemented (for subtraction) as easily as a binary number can be ones' complemented; just invert all bits.

Adding in Excess-3 works on a different algorithm than BCD coding or regular binary numbers. When you add two XS-3 numbers, the raw result is not an XS-3 number: for instance, adding 1 and 0 in XS-3 appears to give 4 instead of 1. To correct this, after adding each digit you subtract 3 (binary 0011) if the digit sum is less than decimal 10, and add 3 if the digit sum is greater than or equal to decimal 10.
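The digit-wise addition rule can be sketched in Python (function names are illustrative; the carry-out test `raw >= 16` is equivalent to the digit sum being 10 or more, since both operands carry a bias of 3):

```python
def xs3_encode(digit):
    """XS-3 code for a decimal digit: the BCD value plus 3."""
    return digit + 3

def xs3_add_digit(a, b):
    """Add two XS-3 coded digits; return (XS-3 result digit, decimal carry)."""
    raw = a + b                      # each operand carries a bias of 3
    if raw >= 16:                    # carry out of 4 bits -> decimal carry
        return (raw - 16) + 3, 1     # add 3 to re-bias the low digit
    return raw - 3, 0                # no carry: subtract the extra bias

assert xs3_add_digit(xs3_encode(1), xs3_encode(0)) == (xs3_encode(1), 0)
assert xs3_add_digit(xs3_encode(7), xs3_encode(5)) == (xs3_encode(2), 1)  # 7+5 = 12
# nines' complement is just a bitwise invert: 9 - 2 = 7
assert (~xs3_encode(2)) & 0xF == xs3_encode(7)
```

The last assertion demonstrates the self-complementing property that is XS-3's main selling point over plain BCD.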

GRAY CODE

GRAY CODE

The reflected binary code, also known as Gray code after Frank Gray, is a binary numeral system where two successive values differ in only one digit.

The reflected binary code was originally designed to prevent spurious output from electromechanical switches. Today, Gray codes are widely used to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems.

Bell Labs researcher Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name." He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process." The code was later named after Gray by others who used it. Two different 1953 patent applications give "Gray code" as an alternative name for the "reflected binary code"; one of those also lists "minimum error code" and "cyclic permutation code" among the names. A 1954 patent application refers to "the Bell Telephone Gray code".
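The standard conversions between binary and Gray code (binary-to-Gray is n XOR (n >> 1); the inverse folds the shifted bits back in) can be sketched and the one-bit-difference property checked:

```python
def binary_to_gray(n):
    return n ^ (n >> 1)

def gray_to_binary(g):
    n = 0
    while g:        # fold successive right-shifts back in with XOR
        n ^= g
        g >>= 1
    return n

codes = [binary_to_gray(i) for i in range(8)]
for a, b in zip(codes, codes[1:]):
    # successive Gray codes differ in exactly one bit
    assert bin(a ^ b).count("1") == 1
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(16))
```

The loop verifies exactly the defining property: any two successive values differ in only one digit.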

Applications of Gray Code

http://en.wikipedia.org/wiki/Gray_code

Saturday, June 21, 2008

GRAY CODE

The reflected binary code, also known as Gray code after Frank Gray, is a binary numeral system where two successive values differ in only one digit.
The reflected binary code was originally designed to prevent spurious output from electromechanical switches. Today, Gray codes are widely used to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems

Define computer architecture.

The design of a computer system. It sets the standard for all devices that connect to it and all the software that runs on it. It is based on the type of programs that will run (business, scientific) and the number of programs that run concurrently.

Thursday, June 19, 2008

ERROR DETECTION CODE

Error detection is the ability to detect the presence of errors caused by noise or other impairments during transmission from the transmitter to the receiver.
The Most Common Error Detection Code used is the PARITY BIT
What is a parity bit?
A parity bit is an extra bit included with a binary message to make the total number of 1's either even or odd. A parity bit is an error detection mechanism that can only detect an odd number of errors.
During transfer of information from one location to another, the parity bit is handled as follows:
At the sending end the message is applied to a parity generator, and the message including the parity bit is transmitted to its destination. At the receiving end all the incoming bits are applied to a parity checker that checks for the adopted parity. An odd number of bit errors flips the parity and is therefore detected; an even number of errors leaves the parity unchanged and goes undetected. One advantage of odd parity is that an all-0's word can never be a valid code word, since there is always at least one bit that is 1.
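The generator/checker pair described above can be sketched in Python (even parity by default; the names are illustrative):

```python
def parity_bit(message, even=True):
    # Parity generator: pick the extra bit so the total number of 1's
    # (message plus parity bit) comes out even (or odd).
    p = sum(message) % 2          # 1 if the message has an odd number of 1's
    return p if even else 1 - p

def check_parity(received, even=True):
    # Parity checker: valid when the overall count of 1's matches
    # the adopted parity.
    ones = sum(received) % 2
    return ones == 0 if even else ones == 1
```

Flipping one bit of a valid word makes the check fail; flipping two bits makes it pass again, which is exactly the even-number-of-errors blind spot.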

encoding

Encoding is the process of transforming information from one format into another. The opposite operation is called decoding.
There are a number of more specific meanings that apply in certain contexts:
Encoding (in cognition) is a basic perceptual process of interpreting incoming stimuli; technically speaking, it is a complex, multi-stage process of converting relatively objective sensory input (e.g., light, sound) into subjectively meaningful experience.
Character encoding is a code that pairs a set of natural language characters (such as an alphabet or syllabary) with a set of something else, such as numbers or electrical pulses.
Text encoding uses a markup language to tag the structure and other features of a text to facilitate processing by computers. (See also Text Encoding Initiative.)
Semantics encoding of formal language A in formal language B is a method of representing all terms (e.g. programs or descriptions) of language A using language B.
Electronic encoding transforms a signal into a code optimized for transmission or storage, generally done with a codec.
Neural encoding is the way in which information is represented in neurons.
Memory encoding is the process of converting sensations into memories.
Encryption transforms information for secrecy.

SOME DETAILS

SEQUENTIAL CIRCUITS:
The state or logic level of combinational logic circuit outputs only depends on the present inputs to the circuit. For sequential circuits, the output depends on the sequence of inputs. In most sequential circuits, the inputs are sequenced by connecting one or more of the outputs to the inputs of the circuit creating what is known as feedback. When feedback is employed, the state of the circuit becomes time dependent.
One important parameter to consider when analyzing sequential circuits is the time taken for the output of the circuit to respond to a change in inputs, this is referred to as the propagation delay (tp).
LATCHES:
A latch is a bistable multivibrator device that can store one bit (‘0’ or ‘1’) of data. Because of their storing capacity, latches are sometimes referred to as bistable memory devices. Latches may be used in groups of 4,8,16 or 32 to temporarily store a nibble, byte or word of data. They are also used often in microprocessor-based designs. Some texts make no distinction between latches and flip-flops. Note however that there is an essential difference in the ways they are triggered.

FLIP-FLOPS:
Flip-flops are synchronous bistable storage devices capable of storing one bit. In this case synchronous means that the output state only changes at a specified point on a triggering input called the clock (C). That is, the output changes are synchronised with the clock signal.
The main difference between latches and flip-flops is the method used to change their states. Latches are level sensitive, or level-triggered. This means that the outputs are dependent on the voltage level applied, not on any signal transition. Flip-flops are edge-triggered, that is that they depend on the transition of a signal. This may either be a LOW-to-HIGH (rising edge) or a HIGH-to-LOW (falling edge) transition.
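The level-triggered versus edge-triggered distinction can be imitated in software. This is only a behavioural sketch, not a gate-level model:

```python
class DLatch:
    # Level-sensitive: transparent while the enable input is high.
    def __init__(self):
        self.q = 0

    def tick(self, d, enable):
        if enable:            # output follows input while enabled
            self.q = d
        return self.q         # otherwise the stored bit is held

class DFlipFlop:
    # Edge-triggered: samples D only on a LOW-to-HIGH clock transition.
    def __init__(self):
        self.q = 0
        self.prev_clk = 0

    def tick(self, d, clk):
        if clk == 1 and self.prev_clk == 0:   # rising edge
            self.q = d
        self.prev_clk = clk
        return self.q
```

Note how the flip-flop ignores a change on D while the clock sits high, whereas the latch would pass it straight through.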

Clear my doubt




1's complement and 2's complement

Hi all,

I have a few doubts in the 1's and 2's complement representation. Generally negative numbers can be represented using either 1's complement or 2's complement representation.

1's complement ---reverse all the bits

2's complement ---reverse all the bits + 1

i.e 1's complement of 2 ( 0000 0010 ) is -2 ( 1111 1101 )

But when a number and its complement are added, the result must be a zero, right?? But in this case 0000 0010 + 1111 1101 = 1111 1111 == [??]

Shouldn't we be getting a zero as the result???
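For what it's worth, the sum above is consistent with 1's complement arithmetic: the all-ones pattern 1111 1111 is the 1's complement representation of zero ("negative zero"). A quick 8-bit check in Python (illustrative):

```python
def ones_complement(n, bits=8):
    # Invert every bit within the given width.
    return n ^ ((1 << bits) - 1)

x = 0b00000010                 # +2
neg_x = ones_complement(x)     # 1111 1101, i.e. -2 in 1's complement
total = x + neg_x              # 1111 1111
# Inverting 1111 1111 gives 0000 0000, so the all-ones pattern is
# the 1's complement "negative zero": the sum is a zero after all.
```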

Ripple Adder

http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Comb/adder.html

ABOUT COMPLEMENTS

http://en.wikipedia.org/wiki/Signed_number_representations#Ones.27_complement

half adder

A half adder is a logical circuit that performs an addition operation on two binary digits. The half adder produces a sum and a carry value which are both binary digits.
The drawback of this circuit is that in the case of a multi-bit addition, it cannot accept a carry-in from a previous stage.

A half adder has two inputs, generally labelled A and B, and two outputs, the sum S and carry C. S is the XOR of A and B, and C is the AND of A and B. Essentially the output of a half adder is the sum of two one-bit numbers, with C being the most significant of these two outputs.
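Those two equations translate directly into code. A minimal Python sketch:

```python
def half_adder(a: int, b: int):
    # Sum is the XOR of the inputs; carry is the AND.
    s = a ^ b
    c = a & b
    return s, c   # C is the more significant of the two output bits
```

half_adder(1, 1) gives (0, 1): sum 0 with carry 1, i.e. binary 10.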

Register

REGISTER is a special, high-speed storage area within the CPU. All data must be represented in a register before it can be processed. For example, if two numbers are to be multiplied, both numbers must be in registers, and the result is also placed in a register. (The register can contain the address of a memory location where data is stored rather than the actual data itself.)
The number of registers that a CPU has and the size of each (number of bits) help determine the power and speed of a CPU. For example, a 32-bit CPU is one in which each register is 32 bits wide. Therefore, each CPU instruction can manipulate 32 bits of data.
Usually, the movement of data in and out of registers is completely transparent to users, and even to programmers. Only assembly language programs can manipulate registers. In high-level languages, the compiler is responsible for translating high-level operations into low-level operations that access registers.

Representation of negative numbers

fixed-point representation system - a radix numeration system in which the location of the decimal point is fixed by convention

floating-point representation system - a radix numeration system in which the location of the decimal point is indicated by an exponent of the radix; in the floating-point representation system, 0.0012 is represented as 0.12 × 10^-2, where -2 is the exponent

decoder

A decoder is a device which does the reverse of an encoder, undoing the encoding so that the original information can be retrieved. The same method used to encode is usually just reversed in order to decode.
In digital electronics this would mean that a decoder is a multiple-input, multiple-output logic circuit that converts coded inputs into coded outputs, where the input and output codes are different, e.g. n-to-2^n, BCD decoders.
Enable inputs must be on for the decoder to function; otherwise its outputs assume a single "disabled" output code word. Decoding is necessary in applications such as data multiplexing, 7-segment displays and memory address decoding.
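As a concrete example, a 2-to-4 decoder with an enable input might behave like this Python sketch (illustrative, not a gate-level model):

```python
def decoder_2to4(a1, a0, enable=1):
    # n-to-2^n decoder: exactly one output line goes high when enabled.
    if not enable:
        return [0, 0, 0, 0]       # the single "disabled" output code word
    outputs = [0, 0, 0, 0]
    outputs[(a1 << 1) | a0] = 1   # select the line named by the input code
    return outputs
```

The 2-bit input code picks one of the 4 output lines; with enable low, no line is driven.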

LAN

A local area network (LAN) is a group of computers and associated devices that share a common communications line or wireless link. Typically, connected devices share the resources of a single processor or server within a small geographic area (for example, within an office building). Usually, the server has applications and data storage that are shared in common by multiple computer users. A local area network may serve as few as two or three users (for example, in a home network) or as many as thousands of users (for example, in an FDDI network).
In some situations, a wireless LAN may be preferable to a wired LAN because it is cheaper to install and maintain.
Major local area network technologies are: Ethernet, Token Ring, FDDI
ETHERNET
Ethernet is the most widely-installed local area network ( LAN) technology. An Ethernet LAN typically uses coaxial cable or special grades of twisted pair wires. Ethernet is also used in wireless LANs. The most commonly installed Ethernet systems are called 10BASE-T and provide transmission speeds up to 10 Mbps.
TOKEN RINGS
A Token Ring network is a local area network (LAN) in which all computers are connected in a ring or star topology and a bit- or token-passing scheme is used in order to prevent the collision of data between two computers that want to send messages at the same time. The token scheme can also be used with bus topology LANs.
FDDI
FDDI (Fiber Distributed Data Interface) is a set of ANSI and ISO standards for data transmission on fiber optic lines in a local area network (LAN) that can extend in range up to 200 km (124 miles). The FDDI protocol is based on the Token Ring protocol.
An FDDI network contains two token rings, one for possible backup in case the primary ring fails. The primary ring offers up to 100 Mbps capacity. If the secondary ring is not needed for backup, it can also carry data, extending capacity to 200 Mbps.

Karnaugh Map

The Karnaugh map, also known as a Veitch diagram (KV-map or K-map for short), is a tool to facilitate the simplification of Boolean algebra IC expressions. The Karnaugh map reduces the need for extensive calculations by taking advantage of human pattern-recognition and permitting the rapid identification and elimination of potential race hazards. The Karnaugh map was invented in 1952 by Edward W. Veitch. It was further developed in 1953 by Maurice Karnaugh, a telecommunications engineer at Bell Labs, to help simplify digital electronic circuits. In a Karnaugh map the boolean variables are transferred (generally from a truth table) and ordered according to the principles of Gray code in which only one variable changes between squares. Once the table is generated and the output possibilities are transcribed, the data is arranged into the largest even group possible and the minterm is generated through the axiom laws of boolean algebra.

USB

USB (Universal Serial Bus) is a plug-and-play interface between a computer and add-on devices (such as audio players, joysticks, keyboards, telephones, scanners, and printers). With USB, a new device can be added to your computer without having to add an adapter card or even having to turn the computer off. USB supports a data speed of 12 megabits per second. A single USB port can be used to connect up to 127 peripheral devices, such as mice, modems, and keyboards. A USB flash drive is a NAND-type flash memory data storage device integrated with a USB (universal serial bus) connector. USB flash drives are typically removable and rewritable, much shorter than a floppy disk (1-4 inches or 25-102 mm), and weigh less than 2 ounces.

RAM

RAM
Random access memory (usually known by its acronym, RAM) is a type of computer data storage. Today it takes the form of integrated circuits that allow the stored data to be accessed in any order, i.e. at random.
TYPES OF RAM
DRAM
Dynamic random access memory (DRAM) is a type of random access memory that stores each bit of data in a separate capacitor within an integrated circuit. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory.
The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to six transistors in SRAM. This allows DRAM to reach very high density. Like SRAM, it is in the class of volatile memory devices, since it loses its data when the power supply is removed. Unlike SRAM however, data may still be recovered for a short time after power-off.
SRAM
SRAM (static RAM) is random access memory (RAM) that retains data bits in its memory as long as power is being supplied. Unlike dynamic RAM (DRAM), which stores bits in cells consisting of a capacitor and a transistor, SRAM does not have to be periodically refreshed. Static RAM provides faster access to data and is more expensive than DRAM. SRAM is used for a computer's cache memory and as part of the random access memory digital-to-analog converter on a video card.

demultiplexer

In electronics, a demultiplexer (or demux) is a device taking a single input signal and selecting one of many data-output-lines, which is connected to the single input. A multiplexer is often used with a complementary demultiplexer on the receiving end.
An electronic multiplexer can be considered as a multiple-input, single-output switch, and a demultiplexer as a single-input, multiple-output switch. The schematic symbol for a multiplexer is an isosceles trapezoid with the longer parallel side containing the input pins and the short parallel side containing the output pin. In a 2-to-1 multiplexer, a select (sel) input connects the desired input to the output.

multiplexer

In electronics, a multiplexer or mux (occasionally the term muldex is also found, for a combination multiplexer-demultiplexer) is a device that performs multiplexing; it selects one of many analog or digital input signals and outputs that into a single line.
An electronic multiplexer makes it possible for several signals to share one expensive device or other resource, for example one A/D converter or one communication line, instead of having one device per input signal.
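Behaviourally, the switch analogy for a multiplexer and its complementary demultiplexer is easy to sketch in Python (illustrative only):

```python
def mux(inputs, sel):
    # Multiple-input, single-output switch: route the selected input out.
    return inputs[sel]

def demux(value, sel, n_outputs):
    # Single-input, multiple-output switch: drive the selected line,
    # leave the others at 0.
    outputs = [0] * n_outputs
    outputs[sel] = value
    return outputs
```

Feeding a demux output list back through a mux with the same select recovers the original value, which is exactly how the pair is used across a shared line.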

2's complements

A two's complement negative number multiplying circuit in which no complementing of the multiplier or multiplicand is required, no special cases need be detected, no complementing of the result is required and fewer transfer paths are needed.

Half Adders, Full Adders, Ripple Carry Adders

http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Comb/adder.html

complements

Why do we use complements to simplify the subtraction operation?

subtrend and minuend

Subtraction Terminology
Subtraction is the arithmetic operation in which the difference between two numbers is calculated. general equation: a - b = c
'a' is called the minuend, 'b' is called the subtrahend, while 'c' is called the difference. 5 - 3 = 2
5 is the minuend
3 is the subtrahend
2 is the difference of 5 and 3
- is the arithmetic subtraction operator
= is the equals operator
Subtraction is related to addition as follows. If a + b = c, then c - b = a and c - a = b.
If you subtract a larger number from a smaller number, then the difference is a negative number (i.e. it is less than zero). Negative numbers are prefixed with a dash - character. 3 - 5 = -2
3 minus 5 equals a negative 2 (i.e. -2)
5 subtracted from 3 is a difference of -2
the difference between 3 and 5 is -2

2's complement with examples

As is the case with the 1's complement, the 2's complement of a positive number is no different from the representation of that number using sign-magnitude notation.

Example 1
Store the integer +27 in a byte using sign-magnitude form and using the two's complement form.

+27 is stored exactly the same using either the sign-magnitude notation or the 2's complement notation
The number +27 is stored as: 0001 1011
The MSB is 0 since the number is positive.
0001 1011 is 27 in binary.

Example 2
What is the decimal integer value of 0101 1001 stored in a byte if the representation is:
a) sign-magnitude form
b) 2's complement form?

Solution:
There is no difference between the forms. Since the MSB is 0, the number is positive. The other 7 bits, 101 1001, in decimal form are 89. The integer stored is +89.


Problems
Store these integers in bytes using the 2's complement notation:
a) + 36 b) +130


Storing Negative Integers in Bytes Using Two's Complement Notation
We indicated in the last section that the formal way to determine the 1's complement of a negative integer to be stored in binary in n bits is to subtract the integer in binary from 2^n - 1.

It turned out that the short way to do this is to simply invert all the bits of the representation of the corresponding positive integer.

Example 4: To find the 1's complement of -77 we note that +77 in binary is 0100 1101. Inverting bits, we determine that the representation of -77 in binary in 1's complement form is 1011 0010.

The formal way to determine the 2's complement of a negative integer to be stored in binary in n bits is to subtract the integer in binary from 2^n.

To determine the 1's complement of a negative integer to be stored in n bits you subtract from 2^n - 1, and to find the 2's complement of a negative integer to be stored in n bits you subtract from 2^n. It follows that the 2's complement is just 1 more than the 1's complement.

This gives us a method to determine quickly the 2's complement of a negative integer.

Rule: To determine the 2's complement of a negative integer, determine the 1's complement and add 1.

Example 5
Store -27 in a byte using 2's complement notation.

Steps 1 and 2 determine the 1's complement:

Step 1: +27 in binary is 0001 1011.

Step 2: Invert bits to yield 1110 0100. Then the 1's complement of -27 is 1110 0100.

Step 3 is the additional step needed to find the 2's complement.

Step 3: Add 1 to the 1's complement:

1110 0100
+ 1
1110 0101

The 2's complement of -27 is 1110 0101.
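The two routes (subtracting from 2^n, or inverting the bits and adding 1) can be checked against each other in Python (8 bits, names illustrative):

```python
def twos_complement(n, bits=8):
    # Formal definition: subtract the integer from 2^n.
    return (1 << bits) - n

n = 27
ones = n ^ 0b11111111     # Step 2: invert bits -> 1110 0100
twos = ones + 1           # Step 3: add 1       -> 1110 0101
assert twos == twos_complement(n)   # both routes agree
```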

Definition of Ripple Adder

When multiple full adders are used with the carry ins and carry outs chained together then this is called a ripple carry adder because the correct value of the carry bit ripples from one bit to the next.
It is possible to create a logical circuit using several full adders to add multiple-bit numbers. Each full adder inputs a Cin, which is the Cout of the previous adder. This kind of adder is a ripple carry adder, since each carry bit "ripples" to the next full adder. Note that the first (and only the first) full adder may be replaced by a half adder.
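A behavioural sketch of that chaining in Python (bit lists are LSB first; illustrative only):

```python
def full_adder(a, b, cin):
    # Sum and carry-out of three input bits.
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(a_bits, b_bits):
    # Least-significant bit first; each stage's carry-out "ripples"
    # into the next stage's carry-in.
    carry = 0
    result = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry
```

Adding 3 ([1, 1, 0, 0]) and 5 ([1, 0, 1, 0]) yields 8 ([0, 0, 0, 1]) with no final carry.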

Definition of Computer Architecture

computer architecture - the art of assembling logical elements into a computing device; the specification of the relation between parts of a computer system
computer architecture - (computer science) the structure and organization of a computer's hardware or system software; "the architecture of a computer's system software"
architecture
structure - the manner of construction of something and the arrangement of its parts; "artists must study the structure of the human body"; "the structure of the benzene molecule"

Definition of Encoder

An encoder is a device used to change a signal (such as a bitstream) or data into a code. The code may serve any of a number of purposes such as compressing information for transmission or storage, encrypting or adding redundancies to the input code, or translating from one code to another. This is usually done by means of a programmed algorithm, especially if any part is digital, while most analog encoding is done with analog circuitry.

Wednesday, June 18, 2008

Don’t Cares during design
Don’t cares arise in only two cases:
1) When the designer truly does not care what the output provides.
2) When the input will never occur

DECODER

In digital electronics this would mean that a decoder is a multiple-input, multiple-output logic circuit that converts coded inputs into coded outputs, where the input and output codes are different, e.g. n-to-2^n, BCD decoders. The simplest decoder circuit would be an AND gate because the output of an AND gate is "High" (1) only when all its inputs are "High". The input code generally has fewer bits than the output code. Each input code produces a different output code word.

latches

In electronics, a latch is a kind of bistable multivibrator, an electronic circuit which has two stable states and thereby can store one bit of information. Today the word is mainly used for simple transparent storage elements, while slightly more advanced non-transparent (or clocked) devices are described as flip-flops. Informally, as this distinction is quite new, the two words are sometimes used interchangeably.

flipflop

Also referred to as a bistable gate, a flip-flop is a type of circuit that is interconnected with like circuits to form logic gates in digital integrated circuits, such as memory chips and microprocessors. The name “flip-flop” comes from the circuit’s nature of alternating between two states when a current is applied to the circuit (for example, 1 to 0 or 0 to 1). A flip-flop will maintain its state indefinitely until it receives an input pulse, called a trigger, which forces it to alternate its state. Once the circuit changes state it remains in that state until another trigger is received.

fundamental digital definitions

http://dept-info.labri.u-bordeaux.fr/~strandh/Teaching/AMP/Common/Strandh-Tutorial/Dir.html This link contains basic digital definitions

karnaugh map and don't care conditions

KARNAUGH'S MAP:
The Karnaugh map, also known as a Veitch diagram (KV-map or K-map for short), is a tool to facilitate the simplification of Boolean algebra IC expressions. The Karnaugh map reduces the need for extensive calculations by taking advantage of human pattern-recognition and permitting the rapid identification and elimination of potential race hazards.
The Karnaugh map was invented in 1952 by Edward W. Veitch. It was further developed in 1953 by Maurice Karnaugh, a telecommunications engineer at Bell Labs, to help simplify digital electronic circuits.
In a Karnaugh map the boolean variables are transferred (generally from a truth table) and ordered according to the principles of Gray code in which only one variable changes between squares. Once the table is generated and the output possibilities are transcribed, the data is arranged into the largest even group possible and the minterm is generated through the axiom laws of boolean algebra.
DONT CARE CONDITIONS:
Karnaugh maps also allow easy minimizations of functions whose truth tables include "don't care" conditions (that is sets of inputs for which the designer doesn't care what the output is) because "don't care" conditions can be included in a ring to make it larger but do not have to be ringed. They are usually indicated on the map with a hyphen/dash/X in place of the number. The value can be a "0," "1," or the hyphen/dash/X depending on if one can use the "0" or "1" to simplify the KM more. If the "don't cares" don't help you simplify the KM more, then use the hyphen/dash/X.

information about latches and flip-flops

A latch is a level-sensitive memory element. As long as the latch enable is active, the latch is transparent, meaning the input appears on the output. When the enable goes inactive, the input is stored into the latch, the latch is closed, and a change in input will not be reflected on the latch's output. A flip-flop is an edge-sensitive memory element, constructed using two latches. In a flip-flop, the input is sampled only at the rising or falling edge of the clock; after that the input is considered a don't care.