DYNAMIC RAM
INTRODUCTION:
Dynamic random-access memory (DRAM) is a type of random-access memory that stores each bit of data in a separate capacitor within an integrated circuit. The
capacitor can be either charged or discharged; these two states are taken to
represent the two values of a bit, conventionally called 0 and 1. Since even
"nonconducting" transistors always leak a small amount, the
capacitors will slowly discharge, and the information eventually fades unless
the capacitor charge is refreshed periodically. Because
of this refresh requirement, it is a dynamic memory as opposed to static random-access
memory (SRAM) and other static types of memory. Unlike flash memory, DRAM is volatile: it loses its data quickly when power is removed, although it does exhibit limited data remanence.
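To make the refresh requirement concrete, here is a minimal Python sketch (my own illustration, not from the text) that models a cell's stored charge as an exponentially decaying voltage. The time constant, refresh interval, and sense threshold are assumed values chosen only to show the idea, not datasheet figures.

```python
# Minimal sketch of DRAM leakage and refresh; all constants below are
# illustrative assumptions, not datasheet values.
import math

V_FULL = 1.0               # volts stored for a logical 1 (assumed)
SENSE_THRESHOLD = 0.6      # minimum voltage still readable as a 1 (assumed)
TAU = 0.5                  # leakage time constant in seconds (assumed)
REFRESH_INTERVAL = 0.064   # refresh period of 64 ms (typical order of magnitude)

def voltage_after(t, v0=V_FULL, tau=TAU):
    """Cell voltage after t seconds of leakage, modeled as exponential decay."""
    return v0 * math.exp(-t / tau)

# Without refresh, a stored 1 eventually fades below the readable threshold.
t = 0.0
while voltage_after(t) > SENSE_THRESHOLD:
    t += 0.001
print(f"without refresh, a stored 1 becomes unreadable after ~{t:.3f} s")

# With periodic refresh, the charge is restored to V_FULL before that happens.
v_at_refresh = voltage_after(REFRESH_INTERVAL)
print(f"voltage when the refresh arrives: {v_at_refresh:.3f} V "
      f"(still above {SENSE_THRESHOLD} V, so the bit is read correctly and rewritten)")
```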
DRAM is widely used in digital electronics
where low-cost and high-capacity memory is required. One of the largest
applications for DRAM is the main memory (colloquially called the "RAM") in modern computers, as well as the main memory of components such as graphics cards (where the "main memory" is called the graphics memory). In contrast, SRAM, which is faster and more
expensive than DRAM, is typically used where speed is of greater concern than
cost, such as the cache memories in processors.
The advantage of DRAM is its structural
simplicity: only one transistor and a capacitor are
required per bit, compared to four or six transistors in SRAM. This allows DRAM
to reach very high densities.
The transistors and capacitors used are extremely small; billions can fit on a
single memory chip. Due to the dynamic nature of its memory cells, DRAM consumes relatively large amounts of power, and several techniques exist to manage this power consumption.[2]
Figure: basic structure of a DRAM cell array, and the principles of operation for reading a simple 4-by-4 DRAM array.
DRAM is usually arranged in a rectangular array of charge storage cells consisting of one capacitor and one transistor per data bit. The figure above shows a simple example with a four-by-four cell matrix. Some DRAM matrices are many thousands of cells in height and width.[8][9]
The long horizontal lines connecting each row
are known as word-lines. Each column of cells is composed of two bit-lines,
each connected to every other storage cell in the column (the illustration above does not include this important detail). They are generally known as the "+" and "−" bit-lines.
To read a bit of data from a cell, the following sequence of operations takes place:
1. The sense amplifiers are disconnected.
2. The bit-lines are precharged to exactly equal voltages that are in between high and low logic levels (e.g., 0.5 V if the two levels are 0 and 1 V). The bit-lines are physically symmetrical to keep the capacitance equal, and therefore at this time their voltages are equal.[10]
3. The precharge circuit is switched off. Because
the bit-lines are relatively long, they have enough capacitance to maintain the precharged voltage for a brief
time. This is an example of dynamic logic.[10]
4. The desired row's word-line is then driven high to connect a cell's storage capacitor to its bit-line. This causes the transistor to conduct, transferring charge from the storage cell to the connected bit-line (if the stored value is 1) or from the connected bit-line to the storage cell (if the stored value is 0). Since the capacitance of the bit-line is typically much higher than the capacitance of the storage cell, the voltage on the bit-line changes only very slightly: it rises if the storage cell was charged and falls if it was discharged (e.g., to 0.54 V and 0.45 V, respectively). As the other bit-line of the pair holds 0.50 V, there is a small voltage difference between the two twisted bit-lines (this charge sharing is worked through numerically in the sketch after this list).[10]
5. The sense amplifiers are now connected to the bit-line pairs. Positive feedback then occurs from the cross-connected inverters, thereby amplifying the small voltage difference between the odd and even row bit-lines of a particular column until one bit-line is fully at the lowest voltage and the other is at the maximum high voltage. Once this has happened, the row is "open" (the desired cell data is available).[10]
6. All storage cells in the open row are sensed simultaneously, and the sense amplifier outputs are latched. A column address then selects which latch bit to connect to the external data bus. Reads of different columns in the same row can be performed without a row-opening delay because, for the open row, all data has already been sensed and latched.[10]
7. While reading of columns in an open row is occurring, current flows back up the bit-lines from the outputs of the sense amplifiers, recharging the storage cells. This reinforces (i.e., "refreshes") the charge in the storage cell, increasing the voltage in the storage capacitor if it was charged to begin with, or keeping it discharged if it was empty. Note that due to the length of the bit-lines there is a fairly long propagation delay for the charge to be transferred back to the cell's capacitor. This takes significant time past the end of sense amplification, and thus overlaps with one or more column reads.[10]
8. When reading of all the columns in the currently open row is complete, the word-line is switched off to disconnect the storage cell capacitors from the bit-lines (the row is "closed"). The sense amplifier is switched off, and the bit-lines are precharged again.[10]
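The following Python sketch works through the arithmetic of the read cycle above. The capacitance values and supply voltage are assumptions of mine, chosen so that the charge-sharing step lands near the 0.54 V / 0.45 V figures used in step 4; a real device's values will differ.

```python
# Numerical sketch of one DRAM read cycle (steps 1-8 above).
# C_BITLINE, C_CELL and V_DD are illustrative assumptions, not real device values.
C_BITLINE = 10.0        # bit-line capacitance (arbitrary units, assumed much larger)
C_CELL = 1.0            # storage-cell capacitance (assumed)
V_DD = 1.0              # high logic level
V_PRECHARGE = V_DD / 2  # step 2: bit-lines precharged halfway between 0 and V_DD

def share_charge(v_bitline, v_cell):
    """Step 4: the word-line opens the cell and charge redistributes."""
    total_charge = C_BITLINE * v_bitline + C_CELL * v_cell
    return total_charge / (C_BITLINE + C_CELL)  # common voltage after sharing

def sense(v_active, v_reference):
    """Step 5: positive feedback drives the higher line to V_DD, the lower to 0 V."""
    return (V_DD, 0.0) if v_active > v_reference else (0.0, V_DD)

def read_cell(stored_bit):
    v_cell = V_DD if stored_bit else 0.0            # charge held by the cell's capacitor
    v_shared = share_charge(V_PRECHARGE, v_cell)    # small swing on the "+" bit-line
    v_sensed, _ = sense(v_shared, V_PRECHARGE)      # the "-" bit-line stays at 0.5 V
    # Step 7: the amplified bit-line voltage flows back into the still-connected
    # cell, restoring ("refreshing") the charge the read disturbed.
    v_restored = v_sensed
    return v_shared, v_sensed, v_restored

for bit in (1, 0):
    shared, sensed, restored = read_cell(bit)
    print(f"stored {bit}: bit-line after charge sharing = {shared:.3f} V, "
          f"after sensing = {sensed:.1f} V, cell restored to {restored:.1f} V")
```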
CONCLUSION:
The most significant change introduced by synchronous DRAM (SDRAM), and the primary reason it has supplanted asynchronous DRAM, is the support for multiple internal banks inside the DRAM chip. Using a few "bank address" bits that accompany each command, a second bank can be activated and begin reading data while a read from the first bank is in progress. By alternating banks, an SDRAM device can keep the data bus continuously busy, in a way that asynchronous DRAM cannot.
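To illustrate the bank-interleaving idea, here is a simplified Python sketch of my own; the activation and burst latencies are assumed round numbers, not the timings of any particular SDRAM part.

```python
# Simplified timing model of bank interleaving; the cycle counts are assumptions.
ACTIVATE_CYCLES = 3   # cycles from activating a row until its data can burst (assumed)
BURST_CYCLES = 4      # cycles of data transferred per read burst (assumed)

def total_cycles(num_bursts, banks):
    """Cycles needed for num_bursts back-to-back read bursts with the given banks."""
    if banks == 1:
        # One bank: every burst must wait for its own activation first.
        return num_bursts * (ACTIVATE_CYCLES + BURST_CYCLES)
    # Two or more banks: while one bank bursts data, the next bank is already
    # being activated, so after the first activation the bus streams continuously
    # (valid here because ACTIVATE_CYCLES <= BURST_CYCLES).
    return ACTIVATE_CYCLES + num_bursts * BURST_CYCLES

for banks in (1, 2):
    cycles = total_cycles(num_bursts=8, banks=banks)
    utilization = 8 * BURST_CYCLES / cycles
    print(f"{banks} bank(s): {cycles} cycles for 8 bursts, "
          f"data bus busy {utilization:.0%} of the time")
```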