What is a type of memory that can hold data for long periods of time even when there is no power to the computer?

Generic SoC Architecture Components

Sanjeeb Mishra, ... Vijayakrishnan Rousseau, in System on Chip Interfaces for Low Power Design, 2016

Memory

Memory is one of the fundamental components of a system; every system contains at least some form of it. A number of technologies are used to make memory devices, but all of them can be classified into two categories: volatile and nonvolatile memory. Let's quickly review the two classifications here and then discuss them in detail in Chapter 7.

Volatile memory

Volatile memory is memory that can keep information only while it is powered up. In other words, volatile memory requires power to maintain the stored information.

Nonvolatile memory

Nonvolatile memory is memory that can keep information even when it is powered off. In other words, nonvolatile memory requires power while the data is being stored; however, once the data is stored, nonvolatile memory technologies do not require power to maintain it.

Volatile versus nonvolatile memory

As we can see, nonvolatile and volatile memory are fundamentally different by definition. At first it may seem that nobody would prefer volatile memory over nonvolatile memory, because the data is important and power is uncertain. However, there are a few reasons that both types of memory are in use and will continue to be in use:

First and foremost, volatile memory is typically faster than nonvolatile memory, so it is usually faster to operate on data held in volatile memory. And since power is available anyway while the data is being operated on or processed, volatility is not a concern at that point.

Since volatile memory inherently loses data, the mechanism to retain data in volatile memory is to keep refreshing the data content; by refreshing, we mean reading the data and writing it back in a cycle. Since memory refresh consumes significant power, volatile memory cannot replace nonvolatile memory for practical purposes.

There is a memory hierarchy so that the systems can get the best of both worlds with limited compromises. A typical memory hierarchy in a computer system would look like Figure 3.11.

■ Figure 3.11. Typical memory hierarchy of a computer system.

So, as depicted in Figure 3.11, the CPU processes data from volatile memory, which is fast, while the data in volatile memory is continuously backed by nonvolatile memory. It must be noted that if the memory the CPU is talking to is slow, it will slow down the whole system irrespective of how fast the CPU is, because the CPU will be blocked waiting for data from the memory device. However, fast memory devices are quite costly. In practice, therefore, computer systems today have multiple layers in the memory hierarchy to alleviate the problem.

We can see that volatile memory has multiple layers in the hierarchy, while nonvolatile memory typically has a single layer. The layers in the memory hierarchy, from bottom to top, typically become faster, costlier, and smaller. The fundamental principle behind this multilayer hierarchy is called locality of reference: during a given small period of time, data accesses generally fall within a predictable address region, and the region of locality switches only at intervals. Therefore, the data in a locality can be transferred to the fastest memory so that the CPU can process it quickly. This works not only in theory but in practice as well. Details of memory evolution and the various interfaces these memory devices use are discussed in Chapter 7.
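As a rough illustration of locality of reference (not part of the chapter itself), the Python sketch below sums the same two-dimensional structure twice, once in the order it is laid out in memory and once by jumping between rows on every access. The measured gap is modest in an interpreted language and much larger in lower-level code, but the principle is the one the memory hierarchy exploits.

import time

# Illustrative sketch only: compare memory-friendly and memory-unfriendly
# traversal order of the same nested list.
N = 1500
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def row_major_sum(m):
    total = 0
    for i in range(N):
        for j in range(N):      # consecutive elements of one row: good locality
            total += m[i][j]
    return total

def column_major_sum(m):
    total = 0
    for j in range(N):
        for i in range(N):      # a different row object on every access: poor locality
            total += m[i][j]
    return total

for fn in (row_major_sum, column_major_sum):
    start = time.perf_counter()
    fn(matrix)
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f} s")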

URL: https://www.sciencedirect.com/science/article/pii/B9780128016305000037

Domain 6: Security Architecture and Design

Eric Conrad, ... Joshua Feldman, in Eleventh Hour CISSP (Second Edition), 2014

RAM and ROM

RAM is volatile memory used to hold instructions and data of currently running programs. It loses integrity after loss of power. RAM memory modules are installed into slots on the computer motherboard.

ROM (Read-Only Memory) is nonvolatile: data stored in ROM maintains integrity after loss of power. A computer's Basic Input/Output System (BIOS) firmware is stored in ROM. While ROM is “read only,” some types of ROM may be written to via flashing, as we will see shortly in Section “Flash memory.”

URL: https://www.sciencedirect.com/science/article/pii/B9780124171428000066

Domain 6

Eric Conrad, ... Joshua Feldman, in CISSP Study Guide (Second Edition), 2012

RAM and ROM

RAM is volatile memory used to hold instructions and data of currently running programs. It loses integrity after loss of power. RAM memory modules are installed into slots on the computer motherboard. Read-only memory (ROM) is nonvolatile: Data stored in ROM maintains integrity after loss of power. The basic input/output system (BIOS) firmware is stored in ROM. While ROM is “read only,” some types of ROM may be written to via flashing, as we will see shortly in the Flash Memory section.

Note

The volatility of RAM is a subject of ongoing research. Historically, it was believed that DRAM lost integrity after loss of power. The “cold boot” attack has shown that RAM has remanence; that is, it may maintain integrity seconds or even minutes after power loss. This has security ramifications, as encryption keys usually exist in plaintext in RAM; they may be recovered by “cold booting” a computer off a small OS installed on DVD or USB key and then quickly dumping the contents of memory. A video on the implications of cold boot, Lest We Remember: Cold Boot Attacks on Encryption Keys, is available at http://citp.princeton.edu/memory/. Remember that the exam sometimes simplifies complex matters. For the exam, simply remember that RAM is volatile (though not as volatile as we once believed).
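As a toy illustration only (real key-recovery tools such as aeskeyfind search for structured artifacts like AES key schedules rather than raw randomness), the Python sketch below flags high-entropy windows in an assumed memory dump file as candidate key material. The file name and the entropy threshold are arbitrary choices for illustration.

import math
from collections import Counter

WINDOW = 32        # bytes per window (the size of a 256-bit key)
THRESHOLD = 4.6    # bits/byte; a rough heuristic, plain text and zero-fill score far lower

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def candidate_key_offsets(path: str):
    with open(path, "rb") as f:
        dump = f.read()
    for offset in range(0, len(dump) - WINDOW, WINDOW):
        if shannon_entropy(dump[offset:offset + WINDOW]) >= THRESHOLD:
            yield offset

if __name__ == "__main__":
    for off in candidate_key_offsets("memory.dump"):   # hypothetical dump file
        print(f"possible key material at offset {off:#x}")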

URL: https://www.sciencedirect.com/science/article/pii/B9781597499613000078

Data Hiding Forensics

Nihad Ahmad Hassan, Rami Hijazi, in Data Hiding Techniques in Windows OS, 2017

Windows Forensics

Capture Volatile Memory

DumpIt

Belkasoft

FTK® Imager

Capture Disk Drive

Using FTK® Imager to Acquire Disk Drive

Deleted Files Recovery

Acquiring Disk Drive Images Using ProDiscover Basic

Analyzing the Digital Evidence for Deleted Files and Other Artifacts

Windows Registry Analysis

Windows Registry Startup Location

Checking Installed Programs

Connected USB Devices

Most Recently Used List

UserAssist Forensics

Internet Programs Investigation

Forensic Analysis of Windows Prefetch Files

Windows Minidump Files Forensics

Windows Thumbnail Forensics

File Signature Analysis

File Attributes Analysis

Discover Hidden Partitions

Detect Alternative Data Streams

Investigating Windows Volume Shadow Copy

Virtual Memory Analysis

Windows Password Cracking

Password Hashes Extraction

Ophcrack

Offline Windows Password and Registry Editor: Bootdisk/CD

Trinity Rescue Kit

Host Protected Area and Device Configuration Overlay Forensics

Examining Encrypted Files

TCHunt

Cracking TrueCrypt Encrypted Volume Passwords

Password Cracking Techniques for Encrypted Files

URL: https://www.sciencedirect.com/science/article/pii/B9780128044490000063

Case Processing

David Watson, Andrew Jones, in Digital Forensics Processing and Procedures, 2013

Appendix 25 Some Evidence Found in Volatile Memory

The evidence recovered from volatile memory acquisition will vary depending on the device being acquired, but may include, among other items (a short collection sketch follows this list):

available physical memory;

BIOS information;

clipboard information;

command history;

cron jobs;

current system uptime;

driver information;

hot fixes installed;

installed applications;

interface configurations;

listening ports;

local users;

logged on users;

malicious code that is run from memory rather than disk;

network cards;

network information;

network passwords;

network status;

open DLL files;

open files and registry handles;

open files;

open network connections;

operating system and version;

pagefile location;

passwords and crypto keys;

plaintext versions of encrypted material;

process memory;

process to port mapping;

processes running;

registered organization;

registered owner;

remote users;

routing information;

service information;

shares;

system installation date;

system time;

the memory map;

the VAD tree;

time zone;

total amount of physical memory;

unsaved files;

user IDs and passwords.
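Several of the items above can be illustrated with a short live-response sketch in Python, assuming the third-party psutil package is installed (pip install psutil). This is only an illustration of the kind of data involved, not a substitute for a validated acquisition tool, and listing network connections may require administrative privileges on some platforms.

import datetime
import psutil   # third-party package, assumed installed

def snapshot():
    # System uptime and physical memory
    boot = datetime.datetime.fromtimestamp(psutil.boot_time())
    print("System boot time:", boot)
    print("Physical memory :", psutil.virtual_memory().total, "bytes")

    # Logged-on users
    print("Logged-on users :")
    for user in psutil.users():
        print("  ", user.name, "since", datetime.datetime.fromtimestamp(user.started))

    # Running processes
    print("Running processes:")
    for proc in psutil.process_iter(["pid", "name", "username"]):
        print("  ", proc.info["pid"], proc.info["name"], proc.info["username"])

    # Listening ports
    print("Listening ports :")
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN:
            print("  ", conn.laddr, "pid", conn.pid)

if __name__ == "__main__":
    snapshot()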

URL: https://www.sciencedirect.com/science/article/pii/B9781597497428000091

Collecting evidence

John Sammons, in The Basics of Digital Forensics (Second Edition), 2015

Alert!

Evidence in RAM

A computer’s volatile memory (RAM) can contain some very valuable evidence, including running processes, executed console commands, passwords in clear text, unencrypted data, instant messages, Internet protocol addresses, and Trojan horse(s) (Shipley and Reeve, 2006).

Conducting and documenting a live collection

Now comes the tricky part. It’s time to get focused. Once you start, you should work uninterruptedly until the process is complete. To do otherwise only invites mistakes. Before getting underway, gather everything you will need: report forms, pens, memory capture tools, and so on. Every interaction with the computer will need to be noted. You could use an action/response approach (“I did this … The computer did that.”).

If the desktop isn’t visible, you can move the mouse slightly to wake it up. If that fails to bring up the desktop, pressing a single key should solve the problem. You should, of course, document which key was depressed in your notes.

Now that you can see the desktop, the first thing to note is the date and time as it appears on the computer. Next, record the visible icons and running applications. You don’t want to stop there. Documenting the running processes could help identify any malware that is in residence on the computer. The running processes can be documented by accessing the task manager. Why would that matter? One of the more popular defenses, especially in child pornography cases, is to claim that the contraband images were deposited by an unknown third party by way of a Trojan horse.

Now it’s time to use a validated memory capture tool to collect that volatile evidence in the RAM. After this step is complete, the process ends with proper shutdown. The proper shutdown allows any running application a chance to write any artifacts to the disk, allowing us to recover them later.
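As a minimal illustration of the action/response note-taking described above, a short Python sketch follows; the notes file name and the example entries are hypothetical.

import datetime

NOTES_FILE = "live_collection_notes.txt"   # arbitrary name for illustration

def log_step(action: str, response: str) -> None:
    # Every interaction with the machine gets a timestamped entry.
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(NOTES_FILE, "a", encoding="utf-8") as notes:
        notes.write(f"[{stamp}] ACTION  : {action}\n")
        notes.write(f"[{stamp}] RESPONSE: {response}\n")

# Example entries mirroring the workflow in the text:
log_step("Moved mouse to wake the display", "Desktop became visible")
log_step("Recorded on-screen date and time", "Matches examiner's clock within 2 minutes")
log_step("Opened Task Manager to document running processes", "Process list photographed")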

URL: https://www.sciencedirect.com/science/article/pii/B9780128016350000048

Performance issues and design choices in delay-tolerant network (DTN) algorithms and protocols☆

J. Morgenroth, ... L. Wolf, in Advances in Delay-Tolerant Networks (DTNs) (Second Edition), 2021

13.4 The curse of copying—I/O performance matters

Traditional Internet Protocol (IP) stacks adopt the notion of “streaming,” in which a limited amount of data may be buffered but most of the data is sent out right away. If the outgoing link is currently unavailable or overloaded, packets are discarded. End-to-end data loss is usually prevented by end-to-end retransmissions of higher-level protocols. In a DTN the outgoing link may be unavailable over an extended period of time and data has to be stored on nodes. The DTN architecture (Cerf et al., 2007) further requires such storage to be persistent so that stored data survives system restarts.

Commercial Ethernet switches employ the “store-and-forward” paradigm in which frames are received, buffered (usually in RAM), and subsequently forwarded. While this allows switches to do error checking, it also requires enough temporary storage for at least a single frame. While Ethernet frames are limited in size, ADUs (which are transformed into PDUs by the DTN Engine) in a DTN are “possibly long” (Cerf et al., 2007) and normally not limited in size. When talking about the Bundle Protocol (BP), PDUs do in fact have a limited size of 1.8 × 10¹⁹ bytes (because a self-delimiting numeric value (SDNV) can only hold 2⁶⁴ − 1 values).

So, a DTN Engine has to persistently store PDUs of significant size. While the performance of IP stacks is usually limited by the processing capabilities, DTN Engines will likely be limited by the storage bandwidth. Since PDUs have to be stored and retrieved during forwarding, the attainable throughput cannot exceed 50% of the storage bandwidth. Since persistent storage usually involves a hard disk drive (HDD) or flash memory, the storage bandwidth is significantly lower than RAM used in the switches. This makes it clear that DTNs are not a good match for streaming applications (many small ADUs) because the overhead per ADU is comparably high. Furthermore, supporting ADUs of arbitrary size causes certain handling problems, which will be discussed in this section.
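A back-of-the-envelope sketch of that 50% bound follows; the bandwidth figures are illustrative and not taken from the chapter. Every forwarded byte must be written to storage once and read back once, so a storage device with a single shared bandwidth B can sustain at most B/2 of forwarding traffic.

def max_forwarding_throughput(storage_bandwidth_mbyte_s: float) -> float:
    # Each byte is stored and later retrieved, so forwarding uses the
    # storage bandwidth twice.
    return storage_bandwidth_mbyte_s / 2.0

for name, bw in [("HDD (~100 MB/s)", 100.0), ("SATA SSD (~500 MB/s)", 500.0)]:
    print(f"{name}: at most {max_forwarding_throughput(bw):.0f} MB/s of PDU traffic")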

13.4.1 Problem statement

On conventional DTN nodes, volatile memory is usually in the form of RAM and persistent memory in the form of flash memory or a hard disk. While RAM cannot be used as persistent storage and is also more expensive than flash or hard disk, it offers significant performance benefits. In Fig. 13.3, we show a network throughput measurement of the DTN2 reference implementation with PDUs stored in RAM or on HDD. The attained throughput when using RAM is between 2.7 and 19.1 times faster compared with storing bundles on HDD. This clearly shows that storing or buffering ADUs in RAM can offer significant performance benefits. However, since RAM is volatile and will be lost on node restarts, not all PDUs can be stored in it. Those requiring special reliability (custody) have to be stored persistently before custody is accepted.

Fig. 13.3. DTN2 network throughput (Pöttner et al., 2011a) (log y-axis).

Even when the performance of the storage back end is sufficient to support high throughput, copying data can also drastically impact performance. In Fig. 13.4 we show a traditional DTN Engine in which data arrives at a CL and is immediately handed over to a storage module. This storage is likely not in RAM because the PDU has to be persistently stored and can be of arbitrary size that can easily exceed the RAM. When the routing module then takes care of the PDU, data is copied to the next storage module. When the PDU is forwarded, data is again copied from the storage module to the CL (and the respective storage module) to allow sending the PDU. Even when keeping PDUs in RAM, copying is expensive and impacts performance.

Fig. 13.4. DTN Engine PDU handling with (slow) copying of blocks.

When keeping PDUs in persistent memory, copying has to be avoided as much as possible because the performance impact is even more significant.

13.4.2 Design advice: Central block storage mechanism

To allow the DTN Engine to achieve high performance, copying of block data has to be avoided as much as possible. The ideal case is shown in Fig. 13.5, in which a central storage component takes care of the PDU. The PDU enters the DTN Engine on the left side and is directly stored in the central component. Subsequently, references to the PDU are passed along until the PDU is forwarded to the next hop. In Fig. 13.6, we show a measurement with IBR-DTN, with and without a central block storage module. The performance increase of central storage that avoids copying is between 5.6% and 80.4%, depending on the size of the ADU.

Fig. 13.5. DTN Engine PDU handling with central storage.

Fig. 13.6. IBR-DTN throughput with and without block copying (Pöttner et al., 2011a).

Another issue with performance is the application programming interface (API). Sending and receiving applications have to be able to create and retrieve ADUs as fast as possible. In most implementations, copying the data at this point cannot be avoided. However, in an implementation that is ideal from the performance perspective, this copying would also be avoided by letting the application directly access the central storage component.
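A minimal Python sketch of the central-storage idea follows; the class and method names are illustrative and do not reflect the API of DTN2, IBR-DTN, or any other implementation. The payload is written once, and the other modules only ever see an opaque handle plus zero-copy views of the stored bytes.

import itertools

class CentralBlockStore:
    def __init__(self):
        self._blocks = {}
        self._ids = itertools.count(1)

    def store(self, payload: bytes) -> int:
        handle = next(self._ids)
        self._blocks[handle] = payload           # the single copy lives here
        return handle

    def view(self, handle: int) -> memoryview:
        return memoryview(self._blocks[handle])  # zero-copy read access

    def release(self, handle: int) -> None:
        del self._blocks[handle]

store = CentralBlockStore()
handle = store.store(b"PDU payload arriving from the convergence layer")

# Routing decides on the next hop using only the handle and metadata...
next_hop = "dtn://neighbor"

# ...and the outgoing convergence layer streams the payload without copying it.
outgoing = store.view(handle)
print(f"forwarding {len(outgoing)} bytes to {next_hop}")
store.release(handle)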

13.4.3 Design advice: Hybrid storage

As argued earlier, fast storage such as RAM is usually expensive and volatile. Persistent storage such as HDD is slow and cheap, while solid-state drives (SSDs) lie in between: they are faster than HDDs but also more expensive. It is a characteristic of DTNs that traffic patterns are bursty. During a contact, data has to be transferred as fast as possible because, especially for short contacts, time is precious. When no other node is in range, I/O performance is of minor importance. A hybrid storage approach that combines the benefits of fast-but-expensive and slow-but-cheap storage is a good match for this kind of traffic pattern.

Fig. 13.7 shows the concept, in which two layers of storage are combined (Patterson and Hennessy, 2005). On write accesses, data is first written to volatile memory. Custody PDUs need to be written directly into persistent storage before custody is accepted (write-through). However, conventional PDUs may be forwarded while residing in volatile memory; these PDUs can be written to persistent storage whenever there is time (write-back). For write accesses, hybrid storage allows a certain amount of data to be stored at the native speed of the volatile storage. When the volatile storage is exceeded, storage performance drops to that of the persistent storage. This pattern is a good match for the bursty traffic of typical DTNs.

Fig. 13.7. Hybrid storage architecture.
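A minimal Python sketch of this write-through/write-back policy follows, with illustrative names and an in-memory stand-in for the persistent tier (which in practice would be flash or HDD).

class HybridStorage:
    def __init__(self):
        self.volatile = {}       # fast, lost on restart
        self.persistent = {}     # stand-in for flash/HDD: slow, survives restart
        self._dirty = []         # volatile PDUs awaiting write-back

    def store(self, pdu_id: str, payload: bytes, custody: bool = False) -> None:
        if custody:
            self.persistent[pdu_id] = payload    # write-through before accepting custody
        else:
            self.volatile[pdu_id] = payload      # buffered now, written back later
            self._dirty.append(pdu_id)

    def flush(self) -> None:
        # Run during inter-contact time, when I/O bandwidth is idle.
        while self._dirty:
            pdu_id = self._dirty.pop()
            self.persistent[pdu_id] = self.volatile.pop(pdu_id)

storage = HybridStorage()
storage.store("pdu-1", b"best-effort bundle")             # buffered in RAM
storage.store("pdu-2", b"custody bundle", custody=True)   # safely persisted first
storage.flush()                                           # later, between contacts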

For read accesses, it is desirable to exploit the performance of the volatile memory. However, the DTN Engine or the storage component would have to preload PDUs into volatile memory. In a network with predicted or scheduled contacts (see Section 13.6), this is quite feasible: since the DTN Engine knows which neighbor is going to show up next, ADUs for this neighbor can be preloaded and transferred at the bandwidth of the volatile storage. The prediction of opportunistic contacts, however, is outside the scope of this chapter. In any case, ADUs that have not been preloaded into the volatile buffer have to be read out of persistent memory. Fortunately, flash as well as HDDs have the property that read access is faster (in terms of data rate) than write access. Therefore, preloading ADUs produces a smaller performance advantage than buffering write accesses.

The volatile buffer of the hybrid storage should be able to handle all data that is transferred during one contact. This ensures that data transfer can happen at maximum speed. For networks with a maximum contact duration of t_contact,max and a networking link with a data rate r, the amount of volatile buffer that is necessary can be calculated as t_contact,max × r. Furthermore, the intercontact time should be long enough to flush the volatile buffer into persistent storage.
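A quick worked example of the t_contact,max × r rule, using illustrative numbers (a 30 s contact over a 54 Mbit/s link) that are not taken from the chapter:

def volatile_buffer_bytes(t_contact_max_s: float, rate_bit_s: float) -> float:
    # Buffer must absorb everything transferable during one maximum-length contact.
    return t_contact_max_s * rate_bit_s / 8.0

needed = volatile_buffer_bytes(30.0, 54e6)
print(f"volatile buffer needed: {needed / 1e6:.1f} MB")   # ~202.5 MB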

URL: https://www.sciencedirect.com/science/article/pii/B9780081027936000138

Digital Building Blocks

Sarah L. Harris, David Harris, in Digital Design and Computer Architecture, 2022

5.5.4 Area and Delay

Flip-flops, SRAMs, and DRAMs are all volatile memories, but each has different area and delay characteristics. Table 5.6 shows a comparison of these three types of volatile memory. The data bit stored in a flip-flop is available immediately at its output. But flip-flops take at least 20 transistors to build. Generally, the more transistors a device has, the more area, power, and cost it requires. DRAM latency is longer than that of SRAM because its bitline is not actively driven by a transistor. DRAM must wait for charge to move (relatively) slowly from the capacitor to the bitline. DRAM also fundamentally has lower throughput than SRAM, because it must refresh data periodically and after a read. DRAM technologies such as synchronous DRAM (SDRAM) and double data rate (DDR) SDRAM have been developed to overcome this problem. SDRAM uses a clock to pipeline memory accesses. DDR SDRAM, sometimes called simply DDR, uses both the rising and falling edges of the clock to access data, thus doubling the throughput for a given clock speed. DDR was first standardized in 2000 and ran at 100 to 200 MHz. Later standards, DDR2, DDR3, and DDR4, increased the clock speeds, with speeds in 2021 being over 3 GHz.

Table 5.6. Memory comparison

Memory Type    Transistors per Bit Cell    Latency
Flip-flop      ~20                         Fast
SRAM           6                           Medium
DRAM           1                           Slow

Memory latency and throughput also depend on memory size; larger memories tend to be slower than smaller ones if all else is the same. The best memory type for a particular design depends on the speed, cost, and power constraints.
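As a worked example of the double-data-rate arithmetic (with illustrative values: DDR4-3200 on a standard 64-bit channel), data moves on both clock edges, so transfers per second are twice the clock rate, and peak bandwidth is transfers per second times the bus width.

def peak_bandwidth_bytes_s(clock_hz: float, bus_width_bits: int, ddr: bool = True) -> float:
    # DDR transfers data on both the rising and falling clock edges.
    transfers_per_s = clock_hz * (2 if ddr else 1)
    return transfers_per_s * bus_width_bits / 8

bw = peak_bandwidth_bytes_s(clock_hz=1.6e9, bus_width_bits=64)   # DDR4-3200
print(f"peak bandwidth: {bw / 1e9:.1f} GB/s")                    # 25.6 GB/s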

URL: https://www.sciencedirect.com/science/article/pii/B9780128200643000052

Windows Forensic Analysis

Ryan D. Pittman, Dave Shaver, in Handbook of Digital Forensics and Investigation, 2010

Pagefile.sys and Hiberfil.sys

Like their UNIX/Linux counterparts, Windows systems often need to swap data out of volatile memory to a location on the disk. However, whereas most *nix systems have a whole partition (small as it may sometimes be) dedicated to this swap space, Windows systems tend to use one single file, pagefile.sys. Pagefile.sys (or simply, the page file) is created when Windows is installed and is generally 1 to 1.5 times the size of the installed system RAM on XP systems. The settings for the size of the page file, as well as whether the file is cleared at shutdown or disabled entirely, can be found in the SYSTEM\<ControlSet###>\Control\Session Manager\Memory Management registry subkey.
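On a live Windows system, those Memory Management values can be read with the standard-library winreg module, as in the sketch below; during an examination the SYSTEM hive would normally be parsed offline instead, so this is only to show where the settings live.

import winreg   # Windows-only: part of the standard library on Windows

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    for value_name in ("PagingFiles", "ClearPageFileAtShutdown", "DisablePagingExecutive"):
        try:
            value, _ = winreg.QueryValueEx(key, value_name)
            print(f"{value_name}: {value}")
        except FileNotFoundError:
            print(f"{value_name}: not present")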

The page file has intrigued examiners for years because, theoretically, it could contain data that was held in memory long after a system was powered down; these data could include unpacked executables, unencrypted passwords, encryption and communications keys, live chat messages, and more. However, the challenge has always been how to extract usable data from the mass of digital detritus often found within pagefile.sys. One strategy is to use a tool like strings.exe (http://technet.microsoft.com/en-us/sysinternals/bb897439.aspx) or BinText (www.foundstone.com/us/resources/proddesc/bintext.htm) to attempt to pull out user-readable text from the page file. This can be effective, but even the elimination of all the “machine code” characters can leave the investigator looking through line after line of “48dfhs9bn” and “%__<>” strings, unable to discern the meaning of the seemingly random data. Another strategy is to look for recognizable data structures. As just a few examples, looking for executable headers (\x4D\x5A\x90), searching for URL prefixes (e.g., http:// or www.), or locating the text PRIVMSG (which precedes each message sent in many IRC chat clients) could pay dividends, depending on the type of investigation. Further, understanding the geographic relationship between data can be helpful. Consider the e-mail login prompt in Figure 5.47.

Figure 5.47. Windows live e-mail login prompt.

To the user the password appears masked by dots; however, the computer sees the underlying data, and the password to be used is held in RAM. If a search of the pagefile.sys reveals the user's e-mail address, it is not out of the realm of possibility that the user's password could be in close proximity and easily identified, particularly if it is a user-friendly word or phrase.
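A small Python sketch of the signature-search strategy described above follows, run against an acquired copy of the page file. The path is hypothetical (the live pagefile.sys is locked while Windows is running), and a real implementation would read the file in chunks rather than all at once.

SIGNATURES = {
    "executable header": b"\x4d\x5a\x90",
    "http URL": b"http://",
    "www URL": b"www.",
    "IRC message": b"PRIVMSG",
}

def scan_pagefile(path: str):
    with open(path, "rb") as f:
        data = f.read()   # illustrative only; chunked reads are preferable for large files
    for label, sig in SIGNATURES.items():
        offset = data.find(sig)
        while offset != -1:
            yield label, offset
            offset = data.find(sig, offset + 1)

if __name__ == "__main__":
    for label, offset in scan_pagefile("pagefile_copy.bin"):   # hypothetical acquired image
        print(f"{label} at offset {offset:#x}")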

The hiberfil.sys is similar to the page file, but rather than being used as active swap space, the hiberfil.sys is a repository for the contents of RAM (in a compressed format) when a system is told to hibernate (such as when the lid of a laptop is closed).

Vista handles hibernation a bit differently than previous Windows versions in that it has three related modes: sleep, hibernation, and hybrid sleep-hibernation. In sleep mode, the system continues to supply minimal power to RAM maintaining the contents and not requiring the system to use the hiberfil.sys. Hibernation, on the other hand, causes the contents of RAM to be saved to the hiberfil.sys for restoration when the system “wakes up.” The hybrid sleep-hibernation mode takes advantage of both techniques, continuing to supply low-level power to RAM and saving the contents to the hiberfil.sys for redundancy. The SandMan Project is specifically aimed at assisting investigators in performing forensic analysis of Windows hibernation files (http://sandman.msuiche.net/).

Many examiners have also begun to encounter ReadyBoost used in conjunction with Vista systems. ReadyBoost uses up to 4GB of flash memory (usually in the form of a USB device or flash card) as a memory cache (virtual memory); specifically, Vista uses the flash memory to store data important for the function of the memory manager. An advanced version of ReadyBoost is also listed as a feature for Windows 7, removing the 4GB size restriction for utilized flash memory. Although the user can enjoy the speed gains from ReadyBoost, its use has little impact for the forensic examiner. A file called Readyboost.sfcache is created on the flash media used for ReadyBoost, but the file is (unfortunately for the examiner) 128-bit AES encrypted and represents nothing other than that the device was used for that purpose.

URL: https://www.sciencedirect.com/science/article/pii/B9780123742674000057

Implementation of organic RRAM with ink-jet printer: from design to using in RFID-based application

Toan Dao Thanh, ... Christos Volos, in Mem-elements for Neuromorphic Circuits with Artificial Intelligence Applications, 2021

17.1 Introduction

It is now well established from a variety of studies that memories are generally classified into two fundamental categories: volatile memory and nonvolatile memory [2,3,17]. In general, volatile memory needs power to maintain the stored information. In contrast, the stored information is retained in nonvolatile memory when the power supply is turned off. Fig. 17.1 shows a taxonomy of memories. Volatile memory can be categorized into static random-access memory (SRAM) and dynamic random-access memory (DRAM), while nonvolatile memory may be divided into several groups (see Fig. 17.1). In numerous electronic devices (portable devices, sensors in wireless sensor nodes, embedded systems, etc.), a matter of considerable concern is battery life [16]. A great deal of previous research into memory has focused on how to extend the operational lifetime of such electronic devices [20,38,42].

Figure 17.1. Taxonomy of memories, including two main categories: volatile memory and nonvolatile memory.

In recent years, there has been an increasing amount of literature on the memristor, a potential candidate for emerging memory technologies [9,10,17,21,31]. Chua suggested that resistance switching memories were memristors [5]. Compared to existing memory devices, the power consumption of a memristor was smaller [9,10]. Taherinejad et al. explored the possible storage of multi-bit data in a single memristor [30]. A memristor-based memory cell had smaller noise margins and stored non-binary data [39]. Discovering applications of the memristor for developing emerging memories is still an attractive research trend [7,18,25,34,35].

The development of the Industry 4.0 era has received considerable attention. In recent years, there has been increasing interest in portable devices, smart phones, smart homes, and smart cities. As a result, the demand for memories has increased significantly [6,11,26,29]. Embedded memory, emerging memory technologies, and in-memory computing have received considerable attention recently. Resistive RAM (RRAM, or ReRAM) is a potential candidate for emerging nonvolatile memory technologies [1,14,24,37,40]. Compared with current RAM or read-only memory (ROM), resistive RAM is suited to faster computing and higher density [4,13,33,36,41].

From the viewpoint of manufacturing, printed electronics is an emerging fabrication method because of its lithography-free or vacuum-free processing [24]. Moreover, screen and ink-jet printing technologies have been investigated and used in electronic manufacturing [1,14]. The ink-jet method in particular has been applied to fabricate electronic devices because it offers advantages such as low cost and ease of use [1,14,24]. Recently, there has been renewed interest in electronics based on organic materials [19,27]. The key advances can be summarized as follows: low-temperature processing, low cost, and mechanical flexibility [4,13,15,23,33]. Therefore, organic RRAM is promising for novel storage and/or information-processing technologies. Organic RRAM promotes emerging applications of flexible electronics. The synthesis of Au nanoparticles and their application to fabricate an organic RRAM device were reported in [12]. Dao introduced a high-performance organic resistive device to illustrate the application of Au nanoparticles in an RRAM array [8].

This chapter summarizes the implementation of an organic RRAM with an ink-jet printer. The design process is presented in Section 17.2, while the fabrication process is reported in Section 17.3. Section 17.4 introduces a real application of the fabricated organic RRAM for RFID.

URL: https://www.sciencedirect.com/science/article/pii/B9780128211847000268

What part of the computer holds data for long periods of time even when there is no power to the computer?

Computer memory is divided into main (or primary) memory and auxiliary (or secondary) memory. Main memory holds instructions and data when a program is executing, while auxiliary memory holds data and programs not currently in use and provides long-term storage.

What is a type of memory that requires electricity to hold data?

Volatile memory is memory that requires electric current to retain data. When the power is turned off, all data is erased. Volatile memory is often contrasted with non-volatile memory, which does not require power to maintain the data storage state.

What holds data in memory?

Primary storage, also known as main storage or memory, is the area in a computer in which data is stored for quick access by the computer's processor. The terms random access memory (RAM) and memory are often used as synonyms for primary or main storage.

What is RAM explain?

RAM (random access memory) is a computer's short-term memory, where the data that the processor is currently using is stored. Your computer can access RAM memory much faster than data on a hard disk, SSD, or other long-term storage device, which is why RAM capacity is critical for system performance.