ANSWERS: 2
  • Random access memory (usually known by its acronym, RAM) is a type of data store used in computers that allows the stored data to be accessed in any order, that is, at random rather than only in sequence. In contrast, other types of memory devices (such as magnetic tapes, disks, and drums) can access data on the storage medium only in a predetermined order, due to constraints in their mechanical design. Generally, RAM in a computer is considered main memory or primary storage: the working area used for loading, displaying, and manipulating applications and data. This type of RAM usually comes in the form of integrated circuits (ICs), commonly called memory sticks or RAM sticks because they are manufactured as small circuit boards with plastic packaging, about the size of a few sticks of chewing gum. Most personal computers have slots for adding and replacing memory sticks.

Most RAM can be both written to and read from, so "RAM" is often used interchangeably with "read-write memory." In this sense RAM is the "opposite" of ROM, though more accurately it is the opposite of sequential access memory.

Overview

Computers use RAM to hold program code and data during computation. A defining characteristic of RAM is that all memory locations can be accessed at almost the same speed; most other storage technologies have inherent delays for reading a particular bit or byte. Many types of RAM are volatile, which means that unlike some other forms of computer storage, such as disk storage and tape storage, they lose all data when the computer is powered down. Modern RAM generally stores a bit of data either as a charge in a capacitor, as in dynamic RAM, or as the state of a flip-flop, as in static RAM.

Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster hard drive, called a RAM disk. Unless the memory used is non-volatile, a RAM disk loses the stored data when the computer is shut down.
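The defining property above, that any location is reachable in the same time regardless of address, can be illustrated with a toy sketch. This is not a model of real hardware, and both class names are made up for the illustration; it only contrasts direct indexing with tape-style stepping:

```python
# Toy illustration (not real hardware): random access vs. sequential access.
# Both stores hold the same bytes; they differ only in how a read reaches them.

class RandomAccessStore:
    """Any address is reached directly, like RAM."""
    def __init__(self, data):
        self.cells = list(data)

    def read(self, address):
        return self.cells[address]           # one step, regardless of address


class SequentialStore:
    """The head must pass every cell before the target, like tape."""
    def __init__(self, data):
        self.cells = list(data)
        self.steps = 0                       # head movements for the last read

    def read(self, address):
        self.steps = 0
        value = None
        for position in range(address + 1): # rewind, then step forward
            self.steps += 1
            value = self.cells[position]
        return value


ram = RandomAccessStore(b"HELLO")
tape = SequentialStore(b"HELLO")
print(ram.read(4) == tape.read(4))  # same data comes back either way
print(tape.steps)                   # but the tape head had to move 5 times
```

The point is the cost model, not the data: the tape's read time grows with the address, while the RAM-like store's does not.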
However, volatile memory can retain its data when the computer is shut down if it has a separate power source, usually a battery. Some types of RAM can detect, or with error-correcting codes even correct, random faults in the stored data, called memory errors, using parity.

History

Early main memory systems built from vacuum tubes behaved much like modern RAM, except that they failed frequently. Core memory, which used wires threaded through small ferrite electromagnetic cores, also had roughly uniform access time. The term "core" is still used by some programmers to describe the RAM main memory of a computer. The basic concepts of tube and core memory carry over to modern RAM implemented with integrated circuits.

Alternative primary storage mechanisms usually involved a non-uniform delay for memory access. Delay line memory used a sequence of sound wave pulses in mercury-filled tubes to hold a series of bits. Drum memory acted much like the modern hard disk, storing data magnetically in continuous circular bands.

Recent developments

Several types of non-volatile RAM, which preserve data while powered down, are currently under development. The technologies used include carbon nanotubes and the magnetic tunnel effect. In summer 2003, a 128 kB magnetic RAM (MRAM) chip manufactured with 0.18 µm technology was introduced; the core technology of MRAM is based on the magnetic tunnel effect. In June 2004, Infineon Technologies unveiled a 16 MB prototype, again based on 0.18 µm technology. As for carbon nanotube memory, the startup Nantero built a functioning 10 GB prototype array in 2004.

The Memory Wall

The term "memory wall", coined in "Hitting the Memory Wall: Implications of the Obvious", refers to the growing disparity between CPU and memory speed. From 1986 to 2000, CPU speed improved at an annual rate of 55% while memory speed improved at only 10%.
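Compounding those two annual rates shows how quickly the gap opens up. The 55% and 10% figures are the ones quoted above; the 1986-2000 window matches the text:

```python
# Compound the annual improvement rates quoted above over 1986-2000.
cpu_rate, mem_rate = 0.55, 0.10
years = 2000 - 1986  # 14 years

cpu_speedup = (1 + cpu_rate) ** years   # ~462x overall
mem_speedup = (1 + mem_rate) ** years   # ~3.8x overall
gap = cpu_speedup / mem_speedup         # ~120x relative divergence

print(f"CPU: {cpu_speedup:.0f}x, memory: {mem_speedup:.1f}x, gap: {gap:.0f}x")
```

Even though both curves grow exponentially, the difference in exponents means the processor ends the period over a hundred times further ahead of memory than it started, which is the disparity the "memory wall" refers to.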
Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance. Currently, CPU speed improvements have slowed significantly, partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in its Platform 2015 documentation: "First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat (more on power consumption below). [Intel's new Tri-Gate transistors could address this problem.] Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called Von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address."

The RC delays in signal transmission were also noted in "Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures", which projects a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014. Data on Intel processors clearly shows a slowdown in performance improvements in recent processors. However, Intel's new Core 2 processors (codenamed Conroe) show a significant improvement over the previous Pentium 4 processors.

Shadow RAM

Shadow RAM is RAM whose contents are copied from read-only memory (ROM) to allow shorter access times, as ROM is in general slower than RAM. The original ROM is disabled and the new location in RAM is write-protected. This process is called shadowing.

DRAM packaging

For economic reasons, the large (main) memories found in personal computers, workstations, and non-handheld game consoles (such as the PlayStation and Xbox) normally consist of dynamic RAM (DRAM). Other parts of the computer, such as cache memories and data buffers in hard disks, normally use static RAM (SRAM).

General DRAM packaging formats

Dynamic random access memory (DRAM) is produced as integrated circuits (ICs) bonded and mounted into plastic packages with metal pins for connection to control signals and buses. Today, these DRAM packages are in turn often assembled into plug-in modules for easier handling. Some standard module types are:

* DRAM chip (integrated circuit, or IC)
  o Dual in-line package (DIP)
* DRAM (memory) modules
  o Single in-line pin package (SIPP)
  o Single in-line memory module (SIMM)
  o Dual in-line memory module (DIMM)
  o Rambus modules are technically DIMMs, but are usually referred to as RIMMs due to their proprietary slot.
  o Small outline DIMM (SO-DIMM), a smaller version of the DIMM used in laptops. Comes in versions with:
    + 72 pins (32-bit)
    + 144 pins (64-bit)
    + 200 pins (72-bit)
  o Small outline RIMM (SO-RIMM), a smaller version of the RIMM used in laptops.
* Stacked vs. non-stacked RAM modules
  o Stacked RAM modules use two RAM wafers stacked on top of each other. This allows large modules (such as a 512 MB or 1 GB SO-DIMM) to be manufactured from cheaper low-density wafers. Stacked chip modules draw more power.

Common DRAM modules

Common DRAM packages, from top to bottom:

1. DIP 18-pin (DRAM chip, usually pre-FPRAM)
2. SIPP (usually FPRAM)
3. SIMM 30-pin (usually FPRAM)
4. SIMM 72-pin (so-called "PS/2 SIMM", usually EDO RAM)
5. DIMM 168-pin (SDRAM)
6. DIMM 184-pin (DDR SDRAM)
7. DIMM 240-pin (DDR2 SDRAM)
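As a rough back-of-the-envelope illustration of how the SDRAM-family modules in the list differ, peak transfer rate can be estimated from the bus clock, the number of transfers per clock (one for SDR, two for DDR/DDR2), and the 64-bit (8-byte) data width of a standard DIMM. The module names in the comments are the usual marketing designations, but treat the whole thing as a sketch rather than a datasheet:

```python
# Peak transfer rate = bus clock (MHz) * transfers per clock * bus width (bytes).
BUS_WIDTH_BYTES = 8  # 64-bit data path on standard SDR/DDR/DDR2 DIMMs

def peak_mb_per_s(bus_clock_mhz, transfers_per_clock):
    """Theoretical peak module bandwidth in MB/s."""
    return bus_clock_mhz * transfers_per_clock * BUS_WIDTH_BYTES

print(peak_mb_per_s(133, 1))  # PC133 SDRAM (168-pin DIMM): 1064 MB/s
print(peak_mb_per_s(200, 2))  # DDR-400 / PC-3200 (184-pin): 3200 MB/s
print(peak_mb_per_s(333, 2))  # DDR2-667 / PC2-5300 (240-pin): 5328 MB/s
```

This is also why DDR module names like PC-3200 encode the peak bandwidth in MB/s: double data rate moves two 8-byte transfers per clock cycle instead of one.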
  • Methinks the A+ person is back... Just because you're anonymous doesn't mean it isn't obvious when you ask an exam question.

Copyright 2023, Wired Ivy, LLC

Answerbag