Which device is likely to use an ARM (Advanced RISC Machine) CPU (central processing unit)?

Programming Languages Classification

Charles Shipley, Stephen Jodis, in Encyclopedia of Information Systems, 2003

III.A.1 Von Neumann Architecture

The von Neumann architecture—the fundamental architecture upon which nearly all digital computers have been based—has a number of characteristics that have had an immense impact on the most popular programming languages. These characteristics include a single, centralized control, housed in the central processing unit, and a separate storage area, primary memory, which can contain both instructions and data. The instructions are executed by the CPU, and so they must be brought into the CPU from the primary memory. The CPU also houses the unit that performs operations on operands, the arithmetic and logic unit (ALU), and so data must be fetched from primary memory and brought into the CPU in order to be acted upon. The primary memory has a built-in addressing mechanism, so that the CPU can refer to the addresses of instructions and operands. Finally, the CPU contains a register bank that constitutes a kind of “scratch pad” where intermediate results can be stored and consulted with greater speed than could primary memory.
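
To make these characteristics concrete, here is a minimal sketch, in C, of a toy stored-program machine: a single memory array holds both instructions and data, a program counter steps through the instructions, and a small register bank acts as the scratch pad. The instruction encoding and opcode names are invented purely for illustration.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy stored-program machine: one memory array holds both the instructions
 * and the data, a program counter (pc) selects the next instruction, and a
 * small register bank serves as the CPU's scratch pad. The 16-bit encoding
 * (4-bit opcode, 4-bit register, 8-bit address) is invented for illustration. */

enum { OP_HALT = 0, OP_LOAD, OP_ADD, OP_STORE };

int main(void) {
    uint16_t mem[64] = {0};   /* single memory: instructions AND data */
    int32_t  regs[4] = {0};   /* register bank ("scratch pad")        */

    /* program, starting at address 0: r0 = mem[20]; r0 += mem[21]; mem[22] = r0 */
    mem[0] = (OP_LOAD  << 12) | (0 << 8) | 20;
    mem[1] = (OP_ADD   << 12) | (0 << 8) | 21;
    mem[2] = (OP_STORE << 12) | (0 << 8) | 22;
    mem[3] = (OP_HALT  << 12);

    /* data */
    mem[20] = 7;
    mem[21] = 35;

    for (uint16_t pc = 0; ; pc++) {
        uint16_t insn = mem[pc];                 /* fetch instruction from memory */
        unsigned op = insn >> 12, r = (insn >> 8) & 0xF, addr = insn & 0xFF;
        if (op == OP_HALT) break;                /* decode and execute            */
        if (op == OP_LOAD)  regs[r]  = mem[addr];
        if (op == OP_ADD)   regs[r] += mem[addr];
        if (op == OP_STORE) mem[addr] = (uint16_t)regs[r];
    }
    printf("mem[22] = %d\n", (int)mem[22]);      /* prints 42 */
    return 0;
}
```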

URL: https://www.sciencedirect.com/science/article/pii/B0122272404001386

Security in embedded systems*

J. Rosenberg, in Rugged Embedded Systems, 2017

3.1 Processor Architectures and Security Flaws

The Von Neumann architecture, also known as the Princeton architecture, is a computer architecture based on that described in 1945 by the mathematician and physicist John Von Neumann. He described an architecture for an electronic digital computer with parts consisting of a processing unit containing an arithmetic logic unit (ALU) and processor registers, a control unit containing an instruction register and program counter (PC), a memory to store both data and instructions, external mass storage, and input and output mechanisms. The meaning has evolved to be any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus.

The design of a Von Neumann architecture is simpler than the more modern Harvard architecture which is also a stored-program system but has one dedicated set of address and data buses for reading data from and writing data to memory, and another set of address and data buses for fetching instructions. A stored-program digital computer is one that keeps its program instructions, as well as its data, in read-write, random-access memory.

A stored-program design also allows for self-modifying code. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions, which had to be done manually in early designs. This became less important when index registers and indirect addressing became usual features of machine architecture. Another use was to embed frequently used data in the instruction stream using immediate addressing. Self-modifying code has largely fallen out of favor, since it is usually hard to understand and debug, as well as being inefficient under modern processor pipelining and caching schemes.

On a large scale, the ability to treat instructions as data is what makes assemblers, compilers, linkers, loaders, and other automated programming tools possible. One can “write programs which write programs.” On a smaller scale, repetitive I/O-intensive operations such as the BITBLT image manipulation primitive or pixel & vertex shaders in modern 3D graphics were considered inefficient to run without custom hardware. These operations could be accelerated on general purpose processors with “on the fly compilation” (“just-in-time compilation”) technology, e.g., code-generating programs—one form of self-modifying code that has remained popular.
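
As a concrete illustration of treating instructions as data, here is a minimal sketch of run-time code generation, assuming Linux on x86-64; the machine-code bytes and the mapping flags are specific to that platform, and a production JIT would map the page writable first and make it executable only afterwards.

```c
/* Minimal "instructions are data" sketch (Linux, x86-64 assumed): a few
 * machine-code bytes are written into memory at run time and then executed,
 * which is the essence of just-in-time code generation. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 encoding of: int f(int x) { return x + 1; }
       8d 47 01   lea eax, [rdi+1]
       c3         ret                                   */
    unsigned char code[] = { 0x8d, 0x47, 0x01, 0xc3 };

    /* allocate a writable AND executable page (real JITs switch W -> X) */
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;

    memcpy(page, code, sizeof code);          /* treat the bytes as data...   */
    int (*f)(int) = (int (*)(int))page;       /* ...then treat them as code   */
    printf("f(41) = %d\n", f(41));            /* prints 42 */

    munmap(page, 4096);
    return 0;
}
```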

There are drawbacks to the Von Neumann design especially when it comes to security, which was not even conceived as a problem until the 1980s. Program modifications can be quite harmful, either by accident or design. Since the processor just executes the word the PC points to, there is effectively no distinction between instructions and data. This is precisely the design flaw that attackers use to perform code injection attacks and it leads to the theme of the inherently secure processor: the processor cooperates in security.
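
A minimal sketch of the kind of flaw such attacks exploit is shown below; the function and buffer names are hypothetical, and the point is only that nothing in the architecture prevents bytes copied past the end of the buffer from later being executed as instructions.

```c
#include <string.h>

/* Classic code-injection target (names are hypothetical): the caller's input
 * is copied into a fixed 16-byte stack buffer with no length check. A longer
 * input overruns the buffer and can overwrite the saved return address; on a
 * processor that draws no distinction between instructions and data, the
 * injected bytes can then be executed. Mitigations such as non-executable
 * stacks (W^X), stack canaries, and bounds-checked copies exist precisely
 * because the architecture itself does not stop this. */
void parse_request(const char *input) {
    char buf[16];
    strcpy(buf, input);      /* UNSAFE: no bounds check */
    /* ... use buf ... */
}
```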

URL: https://www.sciencedirect.com/science/article/pii/B9780128024591000063

Instruction Sets

Marilyn Wolf, in Computers as Components (Fourth Edition), 2017

What we learned

Both the von Neumann and Harvard architectures are in common use today.

The programming model is a description of the architecture relevant to instruction operation.

ARM is a load-store architecture. It provides a few relatively complex instructions, such as saving and restoring multiple registers.

The PIC16F is a very small, efficient microcontroller.

The C55x provides a number of architectural features to support the arithmetic loops that are common in digital signal processing code.

The C64x organizes instructions into execution packets to enable parallel execution.

URL: https://www.sciencedirect.com/science/article/pii/B9780128053874000029

DSP Architectures

Lars Wanhammar, in DSP Integrated Circuits, 1999

8.3.1 Harvard Architecture

In the classical von Neumann architecture the ALU and the control unit are connected to a single memory that stores both the data values and the program instructions. During execution, an instruction is read from the memory and decoded, appropriate operands are fetched from the memory, and, finally, the instruction is executed. The main disadvantage is that memory bandwidth becomes the bottleneck in such an architecture.

The most common operation a standard DSP processor must be able to perform efficiently is multiply-and-accumulate. This operation should ideally be performed in a single instruction cycle. This means that two values must be read from memory and (depending on organization) one value must be written, or two or more address registers must be updated, in that cycle. Hence, a high memory bandwidth is just as important as a fast multiply-and-accumulate operation.
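
As a concrete illustration, here is a minimal C sketch of the FIR filter inner loop that such processors are designed around; each iteration is one multiply-and-accumulate requiring two memory reads, which is why memory bandwidth matters as much as the multiplier itself.

```c
/* Minimal FIR filter inner loop: each tap is one multiply-accumulate (MAC)
 * requiring two reads (a coefficient and a delayed sample). On a DSP the goal
 * is to sustain one such MAC, including both reads, per instruction cycle. */
float fir(const float *coeff, const float *delay, int ntaps) {
    float acc = 0.0f;
    for (int k = 0; k < ntaps; k++)
        acc += coeff[k] * delay[k];   /* MAC: read coefficient, read sample, accumulate */
    return acc;
}
```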

Several memory buses and on-chip memories are therefore used so that reads and writes to different memory units can take place concurrently. Furthermore, pipelining is used extensively to increase the throughput. Two separate memories are used in the classical Harvard architecture as shown in Figure 8.5. One of the memories is used exclusively for data while the other is used for instructions. The Harvard architecture therefore achieves a high degree of concurrency. Current DSP architectures use multiple buses and execution units to achieve even higher degrees of concurrency. Chips with multiple DSP processors and a RISC processor are also available.

Figure 8.5. Harvard architecture

URL: https://www.sciencedirect.com/science/article/pii/B9780127345307500088

Hardware and Software for Digital Signal Processors

Lizhe Tan, Jean Jiang, in Digital Signal Processing (Third Edition), 2019

14.8 Summary

1.

The Von Neumann architecture consists of a single, shared memory for programs and data, a single bus for memory access, an arithmetic unit, and a program control unit. The Von Neumann processor performs its fetch and execution cycles serially.

2.

The Harvard architecture has two separate memory spaces dedicated to program code and to data, respectively, two corresponding address buses, and two data buses for accessing the two memory spaces. The Harvard processor can fetch and execute in parallel.

3.

The special DSP hardware units include a MAC unit dedicated to DSP filtering operations, a shifter unit for scaling, and address generators for circular buffering.

4.

The fixed-point DSP uses integer arithmetic. The Q-15 data format is preferred for fixed-point systems because it helps avoid overflow (a Q-15 sketch is given after this list).

5.

The floating-point processor uses floating-point arithmetic. The standard floating-point formats include the IEEE single-precision and double-precision formats.

6.

The architectures and features of fixed-point processors and floating-point processors were briefly reviewed.

7.

Implementing digital filters on a fixed-point DSP system requires scaling the filter coefficients into Q-15 format and scaling the adder input so that overflow during MAC operations is avoided.

8.

The floating-point processor is easy to code with floating-point arithmetic, so a prototype can be developed quickly. However, it is less efficient than the fixed-point processor in terms of the number of instructions it must execute.

9.

The fixed-point processor, using fixed-point arithmetic, takes much more effort to code, but it requires the fewest instructions for the CPU to execute.

10.

Additional real-time DSP examples are provided, including adaptive filtering, signal quantization and coding, and sample rate conversion.
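
The following is a minimal C sketch of the Q-15 arithmetic mentioned in items 4 and 7 above: values in [-1, 1) are held in 16-bit integers with 15 fractional bits, products are rescaled by a 15-bit right shift, and results are saturated to avoid overflow. The helper names are hypothetical, and real DSP libraries differ in detail.

```c
#include <stdio.h>
#include <stdint.h>

typedef int16_t q15_t;

/* Convert a float in [-1, 1) to Q-15: scale by 2^15 and clamp. */
static q15_t float_to_q15(float x) {
    float s = x * 32768.0f;
    if (s >  32767.0f) s =  32767.0f;
    if (s < -32768.0f) s = -32768.0f;
    return (q15_t)s;
}

/* Fixed-point FIR: 16x16 -> 32-bit MACs accumulate in 32 bits, then the sum
 * is shifted back to Q-15 scale and saturated so overflow cannot wrap around. */
static q15_t fir_q15(const q15_t *coeff, const q15_t *x, int ntaps) {
    int32_t acc = 0;
    for (int k = 0; k < ntaps; k++)
        acc += (int32_t)coeff[k] * (int32_t)x[k];   /* MAC */
    acc >>= 15;                                     /* back to Q-15 scale */
    if (acc >  32767) acc =  32767;                 /* saturate */
    if (acc < -32768) acc = -32768;
    return (q15_t)acc;
}

int main(void) {
    /* 3-tap moving average: coefficients and samples pre-scaled to Q-15 */
    q15_t h[3] = { float_to_q15(1.0f/3), float_to_q15(1.0f/3), float_to_q15(1.0f/3) };
    q15_t x[3] = { float_to_q15(0.5f),   float_to_q15(0.25f),  float_to_q15(-0.5f) };
    q15_t y = fir_q15(h, x, 3);
    printf("y = %d (Q-15) = %f\n", y, y / 32768.0);   /* roughly (0.5 + 0.25 - 0.5) / 3 */
    return 0;
}
```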

URL: https://www.sciencedirect.com/science/article/pii/B9780128150719000142

Brain-like intelligence

Zhongzhi Shi, in Intelligence Science, 2021

14.1 Introduction

The 21st century is the century of the intelligence revolution. Advanced technology with intelligence science at the core and life science in the lead will set off a new high-tech revolution—the intelligent technology revolution. In particular, the combination of intelligent technology, biotechnology and nanotechnology, and the development of the intelligent machine with biological characteristics will be the breakthrough of the high-tech revolution in the 21st century.

The intelligence revolution will create the history of human postcivilization. Different from the energy revolution which realized the conversion and utilization of energy, the intelligence revolution has the potential to realize the conversion and utilization of intelligence, that is, people give their own intelligence to machines, and intelligent machines transform human intelligence into machine intelligence and release human intelligence; people may also transform machine intelligence into human intelligence and make use of it. If the steam engine magically created the industrial society, then the intelligent machine can likewise miraculously create the intelligent society.

In 1936 Turing put forward the great idea of the Turing machine, modeled on how the human brain processes information, which laid the theoretical foundation of the modern computer [1]. Turing tried to build “a brain” and first raised the notion that putting programs into a machine can make a single machine perform multiple functions.

Since the 1960s, the von Neumann architecture has been the mainstream of computer architecture. The following problems exist in classical computer technologies:

1.

Moore’s law shows that devices will reach the limit of physical miniaturization in the coming 10-15 years.

2.

Limited by the structure of the data bus, programming is difficult and energy consumption is high when large-scale, complex problems are processed.

3.

There is no advantage in analyses that are complex, varied, real-time, and dynamic.

4.

The technology cannot meet the demand of processing the huge amount of information in the “digital world.” Of the sea of data produced every day, 80% is raw and unprocessed, and much of that raw data has a half-life of only three hours.

5.

After a long-term endeavor, the calculating speed of a computer has reached up to one quadrillion times that of a human, but its level of intelligence remains low.

Studying the human brain, researching the methods and algorithms of the brain’s processes, and developing brain-like intelligence have now become urgent requirements [2]. Much attention worldwide is now being paid to research in brain science and intelligence science. On January 28, 2013, the EU launched the Human Brain Project (HBP), investing €1 billion to fund research and development over the following decade. The goal is to use supercomputers to simulate the human brain across multiple stages and layers and to help people understand its function. Former US President Obama announced an important project on April 2, 2013, expected to run for about 10 years at a total cost of about $1 billion. This project, called Brain Research through Advancing Innovative Neurotechnologies (BRAIN), aims to study the functions of billions of neurons, to explore human perception, behavior, and consciousness, and to find methods to cure brain-related diseases such as Alzheimer’s disease.

IBM has promised to devote $1 billion to the commercial application of its cognitive computing platform, Watson. Google has purchased nine robotics companies and one machine learning company, including Boston Dynamics. J. Rothberg, the father of high-throughput sequencing, and Yale University professor Xu Tian established a new biotech company that combines deep learning with biomedical technology in the research and development of new medicines and diagnostic technologies.

Among several major frontier science and technology projects, China’s Brain Project (CBP) on Brain Science and Brain-like Intelligence Technology has attracted much attention; it was initiated by the Chinese government in 2018. On March 22, 2018, Beijing’s brain science and brain-like research center was established. In May 2018, the Shanghai brain science and brain-like research center was established in Zhangjiang Laboratory. The establishment of these two centers marks the beginning of the China Brain Project. “Intelligence +” supports China’s high-quality economic development and will comprehensively promote the arrival of the intelligence revolution.

URL: https://www.sciencedirect.com/science/article/pii/B9780323853804000142

Parallel hardware and parallel software

Peter S. Pacheco, Matthew Malensek, in An Introduction to Parallel Programming (Second Edition), 2022

2.1.1 The von Neumann architecture

The “classical” von Neumann architecture consists of main memory, a central-processing unit (CPU) or processor or core, and an interconnection between the memory and the CPU. Main memory consists of a collection of locations, each of which is capable of storing both instructions and data. Every location has an address and the location's contents. The address is used to access the location, and the contents of the location is the instruction or data stored in the location.

The central processing unit is logically divided into a control unit and a datapath. The control unit is responsible for deciding which instructions in a program should be executed, and the datapath is responsible for executing the actual instructions. Data in the CPU and information about the state of an executing program are stored in special, very fast storage, called registers. The control unit has a special register called the program counter. It stores the address of the next instruction to be executed.

Instructions and data are transferred between the CPU and memory via the interconnect. This has traditionally been a bus, which consists of a collection of parallel wires and some hardware controlling access to the wires. More recent systems use more complex interconnects. (See Section 2.3.4.) A von Neumann machine executes a single instruction at a time, and each instruction operates on only a few pieces of data. See Fig. 2.1.

Figure 2.1. The von Neumann architecture.

When data or instructions are transferred from memory to the CPU, we sometimes say the data or instructions are fetched or read from memory. When data are transferred from the CPU to memory, we sometimes say the data are written to memory or stored. The separation of memory and CPU is often called the von Neumann bottleneck, since the interconnect determines the rate at which instructions and data can be accessed. The potentially vast quantity of data and instructions needed to run a program is effectively isolated from the CPU. In 2021, CPUs are capable of executing instructions more than one hundred times faster than they can fetch items from main memory.
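
A hypothetical micro-benchmark sketch of this effect follows: the same number of additions is performed once while streaming a large array from main memory and once while repeatedly reusing a small, cache-resident array. On typical hardware the second loop finishes far sooner; the exact ratio depends on the machine.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical micro-benchmark sketch: the same number of additions, once
 * streaming over a large array (limited by memory traffic) and once looping
 * over a small, cache-resident array (limited by the CPU). The gap between
 * the two timings is one visible face of the von Neumann bottleneck. */

#define BIG   (32 * 1024 * 1024)   /* 32M doubles: far larger than any cache */
#define SMALL (4 * 1024)           /* 4K doubles: fits comfortably in L1     */
#define REPS  (BIG / SMALL)

static double sum(const double *a, long n, long reps) {
    double s = 0.0;
    for (long r = 0; r < reps; r++)
        for (long i = 0; i < n; i++)
            s += a[i];
    return s;
}

int main(void) {
    double *big = calloc(BIG, sizeof *big);
    double *tiny = calloc(SMALL, sizeof *tiny);
    if (!big || !tiny) return 1;

    clock_t t0 = clock();
    double s1 = sum(big, BIG, 1);            /* memory-bound: every element fetched from RAM */
    clock_t t1 = clock();
    double s2 = sum(tiny, SMALL, REPS);      /* compute-bound: data stays in cache           */
    clock_t t2 = clock();

    printf("streaming: %.3fs   cache-resident: %.3fs   (%g, %g)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, s1, s2);
    free(big); free(tiny);
    return 0;
}
```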

To better understand this problem, imagine that a large company has a single factory (the CPU) in one town and a single warehouse (main memory) in another. Furthermore, imagine that there is a single two-lane road joining the warehouse and the factory. All the raw materials used in manufacturing the products are stored in the warehouse. Also, all the finished products are stored in the warehouse before being shipped to customers. If the rate at which products can be manufactured is much larger than the rate at which raw materials and finished products can be transported, then it's likely that there will be a huge traffic jam on the road, and the employees and machinery in the factory will either be idle for extended periods or will have to reduce the rate at which they produce finished products.

To address the von Neumann bottleneck and, more generally, improve computer performance, computer engineers and computer scientists have experimented with many modifications to the basic von Neumann architecture. Before discussing some of these modifications, let's first take a moment to discuss some aspects of the software that is used in both von Neumann systems and more modern systems.

URL: https://www.sciencedirect.com/science/article/pii/B9780128046050000099

Stefan Edelkamp, Stefan Schrödl, in Heuristic Search, 2012

8.3 Model of Computation

Recent hardware developments deviate significantly from the von Neumann architecture; for example, the next generation of processors features multiple cores and several levels of processor cache (see Fig. 8.1). Consequences such as cache anomalies are well known; for example, recursive programs like Quicksort perform unexpectedly well in practice when compared to other theoretically stronger sorting algorithms.

Figure 8.1. The memory hierarchy.

The commonly used model for comparing the performances of external algorithms consists of a single processor, small internal memory that can hold up to M data items, and unlimited secondary memory. The size of the input problem (in terms of the number of records) is abbreviated by N. Moreover, the block size B governs the bandwidth of memory transfers. It is often convenient to refer to these parameters in terms of blocks, so we define m = M/B and n = N/B. It is usually assumed that at the beginning of the algorithm the input data is stored in contiguous blocks on external memory, and the same must hold for the output. Only the number of block reads and writes is counted; computations in internal memory do not incur any cost (see Fig. 8.2). An extension of the model considers D disks that can be accessed simultaneously. When using disks in parallel, the technique of disk striping can be employed to essentially increase the block size by a factor of D. Successive blocks are distributed across different disks. Formally, this means that if we enumerate the records from zero, the i-th block of the j-th disk contains records iDB + jB through iDB + (j+1)B − 1. Usually, it is assumed that the internal memory is large enough to hold at least one block from each disk, that is, M ≥ DB.

Figure 8.2. The external memory model.
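
As a small worked example of the striping formula above, the following hypothetical helper maps a record number r to the disk, the block index on that disk, and the offset within the block, for D disks and block size B.

```c
/* Disk-striping address calculation (hypothetical helper): with D disks and
 * block size B, records are laid out so that the i-th block of the j-th disk
 * holds records iDB + jB ... iDB + (j+1)B - 1. Given a record number r, the
 * code below recovers the disk j, the block index i on that disk, and the
 * offset within the block. */
#include <stdio.h>

typedef struct { long disk, block, offset; } Location;

static Location locate(long r, long D, long B) {
    long g = r / B;                     /* global block number               */
    Location loc;
    loc.disk   = g % D;                 /* blocks round-robin over the disks */
    loc.block  = g / D;                 /* block index on that disk          */
    loc.offset = r % B;                 /* record's position inside the block */
    return loc;
}

int main(void) {
    /* example: D = 4 disks, B = 8 records per block */
    Location loc = locate(100, 4, 8);
    printf("record 100 -> disk %ld, block %ld, offset %ld\n",
           loc.disk, loc.block, loc.offset);   /* disk 0, block 3, offset 4 */
    return 0;
}
```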

We distinguish two general approaches of external memory algorithms: either we can devise algorithms to solve specific computational problems while explicitly controlling secondary memory access, or we can develop general-purpose external memory data structures, such as stacks, queues, search trees, priority queues, and so on, and then use them in algorithms that are similar to their internal memory counterparts.

URL: https://www.sciencedirect.com/science/article/pii/B9780123725127000080

High-Performance Techniques for Big Data Processing

Philipp Neumann Prof, Dr, Julian Kunkel Dr, in Knowledge Discovery in Big Data from Astronomy and Earth Observation, 2020

7.2.1 Cache-Based Systems

Typically, processing of data in a computer follows the von Neumann architecture (von Neumann, 1993): A central processing unit (CPU) executes one instruction after the other. Each instruction causes one of the available processing units to perform modifications of the data stored in a memory system. In fact, most processors (e.g., standard processors for desktop PCs such as the Intel Xeon family) implement a memory hierarchy: to speed up search for and access of data, the slow, but large, main memory is extended by a hierarchy of smaller memory segments, so-called caches; see Fig. 7.1A. Depending on the underlying strategy for reading and writing data, data are only fetched from/written to main memory/higher-level cache if they are not found in a lower-level cache. Operations (such as floating point arithmetics) are performed in the smallest and thus fastest memory – the so-called registers, which are embedded in the processing unit itself and typically hold between 4 and 16 floating point numbers. Floating point operations, that is, additions and multiplications, can be carried out simultaneously for all numbers within the considered registers (vectorization), analogously to component-wise additions and multiplications on vectors in linear algebra. Making efficient use of this vectorization requires, however, data to be aligned in memory, so that they are loaded contiguously into a register for processing. Vector widths have been increasing from 128 to 512 bits over the last years, implying an increase from 4/2 to 16/8 float/double values that can be processed at a time; see Table 7.1 for an overview of hardware, corresponding register widths, and supported vector instruction sets.

Fig. 7.1. (A) Cache architecture and (B) multicore architecture with shared L3 cache.

Table 7.1. Overview of commodity hardware, instruction sets, and vectorization properties. Architecture: codename of underlying hardware. Instruction set: supported vectorization method. SSE, Streaming SIMD Extensions; AVX, Advanced Vector Extensions; Number of registers, number of registers per compute core; Register width, size of a register (bits).

Architecture | Vendor | Instruction set | Number of registers | Register width
Nehalem | Intel | SSE | 8/16 | 128 bit
Sandy Bridge, Ivy Bridge | Intel | AVX | 16 | 256 bit
Bulldozer, Piledriver, Jaguar | AMD | AVX | 16 | 256 bit
Haswell, Broadwell, Kaby Lake | Intel | AVX2 | 16 | 256 bit
Excavator, Zen | AMD | AVX2 | 16 | 256 bit
Skylake-X | Intel | AVX-512 | 32 | 512 bit
Knights Landing (Xeon Phi) | Intel | AVX-512 | 32 | 512 bit

Modern processors provide multiple functional units, which can operate simultaneously to manipulate data. An example is given by the latest Intel Skylake-X compute cores, featuring two AVX-512 fused multiply-add units. Thus, a CPU can execute multiple instructions on multiple scalar values concurrently. Therefore, a CPU (or the compiler for the system) keeps track of data dependencies to ensure that the computation result is identical to a sequential execution. In the case of a data-intensive application, the use of cache-based architectures implies that if data in a small cache or in the registers can be reused, less time has to be spent for looking them up in memory – data access and processing are accelerated, respectively.
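
To make the vectorization described above concrete, here is a minimal sketch assuming an AVX-capable x86 processor (compiled with, e.g., -mavx): two 32-byte-aligned float arrays are added eight elements at a time using 256-bit registers, which is exactly the aligned, contiguous access pattern the text calls for.

```c
#include <stdio.h>
#include <immintrin.h>   /* AVX intrinsics: 256-bit registers, 8 floats each */

/* Minimal vectorization sketch (assumes an AVX-capable x86 CPU; compile with
 * e.g. -mavx). Two 32-byte-aligned float arrays are added eight elements at a
 * time; the alignment allows the aligned load/store forms to be used. */
#define N 32

int main(void) {
    _Alignas(32) float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    for (int i = 0; i < N; i += 8) {            /* one iteration = 8 floats */
        __m256 va = _mm256_load_ps(&a[i]);      /* aligned 256-bit load     */
        __m256 vb = _mm256_load_ps(&b[i]);
        __m256 vc = _mm256_add_ps(va, vb);      /* 8 additions at once      */
        _mm256_store_ps(&c[i], vc);             /* aligned 256-bit store    */
    }

    printf("c[10] = %.1f\n", c[10]);            /* 10 + 20 = 30.0 */
    return 0;
}
```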

URL: https://www.sciencedirect.com/science/article/pii/B9780128191545000175

Advances in Computers

Amjad Ali, Khalid Saifullah Syed, in Advances in Computers, 2013

2 Modern Microprocessor Based Computer Systems

The basic physical organization of a modern computer, based on the von Neumann architecture model, comprises five units, namely memory, control, arithmetic-&-logic, input, and output. The central processing unit (CPU) comprises the control and arithmetic-&-logic units. The functioning of a computer is precisely the execution of instructions to process data by its CPU. The instructions are the primitive operations that the CPU may execute, such as moving the contents of a memory location (called a register) to another memory location within the CPU, or adding the contents of two CPU registers. The control unit fetches the data/instruction from the system memory or main memory, sometimes also referred to as random access memory (RAM). The data is then processed by the arithmetic-&-logic unit, sequentially, according to the instructions decoded by the control unit. Storing both the data and the instructions in a single main memory unit is an essential feature of the von Neumann architecture. The input and output units provide the interface between the computer and the human.

Not only the CPU but also the memory system plays a crucial role in determining the overall computational performance of the computer. The memory system of a modern computer is a complicated one. A number of smaller and faster memory units, called cache memories or simply caches, are placed between the CPU and the main memory. The idea of a cache memory is to bring only the part of the program data currently needed from main memory into the cache, to speed up data access by the CPU. The cache memories form a memory hierarchy consisting of a number of levels according to their distance from the CPU. The access time and the size of the data increase as the hierarchy level gets farther from the CPU. The memory hierarchy (combining smaller and faster caches with larger, slower, and cheaper main memory) behaves most of the time like a fast and large memory. This is mainly because the caches exploit the locality of memory references, also called the principle of locality, which is often exhibited by computer programs. Common types of locality of reference include spatial locality (local in space) and temporal locality (local in time). Spatial locality of reference occurs when a program accesses data that is stored contiguously (for example, elements of an array) within a short period of time. Caches exploit spatial locality by prefetching from main memory some data contiguous to the requested item. Temporal locality of reference occurs when a program accesses a used data item again after a short period of time (for example, in a loop). Caches exploit temporal locality by retaining recently used data in the cache for a certain period of time. Note that locality of reference is a property of computer programs but is exploited in the memory system design through the caches. This indicates that, while coding, a programmer should take care to develop the code so as to enhance both types of locality of reference for efficient cache utilization. This can be achieved by coding in such a way that data is accessed in a sequential/contiguous fashion and, if it has to be reused, is accessed again as soon as possible, as the sketch below illustrates.
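
The sketch below contrasts two traversals of the same matrix: the row-major loop walks memory contiguously and benefits from spatial locality, while the column-major loop strides through memory and largely defeats the cache. The array size and names are chosen only for illustration.

```c
#include <stdio.h>

#define ROWS 2048
#define COLS 2048

static double grid[ROWS][COLS];   /* C stores each row contiguously (row-major) */

/* Cache-friendly: the inner loop walks consecutive addresses, so every cache
 * line fetched from main memory is fully used (spatial locality). */
static double sum_row_major(void) {
    double s = 0.0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            s += grid[i][j];
    return s;
}

/* Same arithmetic, but the inner loop jumps COLS*sizeof(double) bytes per
 * access, so on a large matrix nearly every access misses the cache. */
static double sum_col_major(void) {
    double s = 0.0;
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            s += grid[i][j];
    return s;
}

int main(void) {
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            grid[i][j] = 1.0;
    printf("%f %f\n", sum_row_major(), sum_col_major());  /* same sums, very different speed */
    return 0;
}
```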

A modern CPU (microprocessor) executes (at least) one instruction per clock cycle. Each type of CPU architecture has its unique set of instructions, called its instruction set architecture (ISA). The instruction set architecture of a computer can be thought of as the language that the computer can understand. Based on the type of ISA, there are two important classes of modern (microprocessor-based) computer architectures: CISC (Complex Instruction Set Computer) architecture and RISC (Reduced Instruction Set Computer) architecture. The basic CISC architecture is essentially the von Neumann architecture in the sense of storing both the instructions and the data inside a common memory unit. On the other hand, the basic RISC architecture has two entirely separate memory spaces for the instructions and the data, a feature first introduced in the Harvard architecture to overcome the bottleneck in the von Neumann architecture caused by the shared data-instruction path between the CPU and the memory. The CISC philosophy is that the ISA has a large number of instructions (and addressing modes as well) with varying numbers of required clock cycles and execution times. Also, certain instructions can perform multiple primitive operations. The RISC philosophy is that the ISA has a small number of primitive instructions for ease of hardware manufacturing, and thus complicated operations are performed, at the program level, by combining simpler ones. Due to its very nature, a RISC architecture is usually faster and more efficient than a comparable CISC architecture. However, due to the continuing quest for enhancement and flexibility, today a CPU executing an ISA based on CISC may exhibit certain characteristics of RISC and vice versa. Thus, the features of CISC and RISC architectures have been morphing into each other. Classic CISC architecture examples include VAX (by DEC), PDP-11 (by DEC), Motorola 68000 (by Freescale/Motorola), and x86 (mainly by Intel). Modern x86-64-based processors like Pentium (by Intel) and Athlon (by AMD) basically evolved from the classic CISC architecture x86, but they exhibit several RISC features. Currently, Xeon (by Intel) and Opteron (by AMD) are two quite prominent market icons based on the x86-64 architecture. Famous RISC architecture examples include MIPS (by MIPS Technologies), POWER (mainly by IBM), SPARC (mainly by SUN/Oracle), ALPHA (by DEC), and ARM for embedded systems (by ARM Ltd.).

Today, Intel and AMD are the two major vendors in the microprocessor industry, each with their own line of CPU architectures. The x86-64 CPUs from Intel and AMD, which basically emerged as CISC architectures, now incorporate a number of RISC features, especially to provide for instruction-level parallelism (ILP) (details later on). Interestingly, today the microprocessors (from Intel and AMD) implement the RISC feature of separate memory spaces for the data and the instructions (at least for the Level-1 cache).

Another main specialty of a modern CPU is that a number of CPU cores are fused together on a single chip/die with a common integrated memory controller for all the cores. Dual-core CPU chips were initially introduced around the year 2005 but, as of the year 2013, 12/16-core CPU chips are commonly available in the market, although the price may increase manyfold with a linear increase in the number of cores per chip. Moreover, getting the best performance out of a larger number of cores in a single CPU chip is currently a challenging task, mainly due to memory bandwidth limitations. A multicore CPU provides more clock cycles by summing the clock cycles contributed by each of its cores, thus keeping the well-known Moore’s law effective, to some extent, even today. In fact, multicore chips help tackle the issues of high power requirements and heat dissipation that arise when the same cores sit in separate CPU chips instead of being part of a single CPU chip [4]. Increasing the clock frequency of a single (silicon-based) CPU core is virtually no longer feasible due to physical and practical obstacles. Multicore technology is the proposed and accepted solution to this limitation.

Another sophisticated architectural innovation in several modern CPU architectures is the multithreading facility per CPU core. A physical core provides more than one (usually two) logical processors, which the application at hand may be able to exploit. Common realizations of this concept include hyperthreading, simultaneous multithreading (SMT), and chip multithreading (CMT). A concise introduction to this topic and to the overall features of modern processors is given by Hager and Wellein ([5], 1–36). Implications of several of the architectural features of modern processors (especially multicore, multithreading, and ILP) are discussed in the coming sections.
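
As a small illustration, the following POSIX sketch (assuming Linux/glibc) reports the number of online logical processors; on a core with two-way SMT or hyperthreading enabled, this count is typically twice the number of physical cores.

```c
#include <stdio.h>
#include <unistd.h>

/* Minimal POSIX sketch (Linux/glibc assumed): the number of online logical
 * processors. With two-way SMT/hyperthreading enabled this is typically twice
 * the number of physical cores, since each core exposes two hardware threads. */
int main(void) {
    long logical = sysconf(_SC_NPROCESSORS_ONLN);
    printf("online logical processors: %ld\n", logical);
    return 0;
}
```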

URL: https://www.sciencedirect.com/science/article/pii/B9780124080898000033
