
Unit-3

Logical Versus Physical Address Space

An address generated by the CPU is commonly referred to as a logical address, whereas an address
seen by the memory unit—that is, the one loaded into the memory-address register of the memory—
is commonly referred to as a physical address.

The compile-time and load-time address-binding methods generate identical logical and physical
addresses. However, the execution-time address binding scheme results in differing logical and
physical addresses. In this case, we usually refer to the logical address as a virtual address. We use
logical address and virtual address interchangeably in this text. The set of all logical addresses
generated by a program is a logical address space. The set of all physical addresses corresponding
to these logical addresses is a physical address space. Thus, in the execution-time address-binding
scheme, the logical and physical address spaces differ.




The run-time mapping from virtual to physical addresses is done by a hardware device called the
memory-management unit (MMU). The base register is now called a relocation register. The value
in the relocation register is added to every address generated by a user process at the time the
address is sent to memory (see Figure 8.4). For example, if the base is at 14000, then an attempt by
the user to address location 0 is dynamically relocated to location 14000; an access to location 346
is mapped to location 14346.
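The relocation-register mapping described above can be sketched in a few lines of Python. This is a software illustration only, not hardware; the base value 14000 is taken from the example in the text.

```python
# A minimal sketch of how an MMU with a relocation register maps
# logical (virtual) addresses to physical addresses.

RELOCATION_REGISTER = 14000  # base value, as in the example above

def translate(logical_address):
    """Add the relocation register to every CPU-generated address."""
    return RELOCATION_REGISTER + logical_address

print(translate(0))    # logical 0   -> physical 14000
print(translate(346))  # logical 346 -> physical 14346
```

The user program never sees the real physical addresses; it deals only with logical addresses starting at 0, and the addition happens on every memory access.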



Swapping

A process must be in memory to be executed. A process, however, can be swapped temporarily out
of memory to a backing store and then brought back into memory for continued execution (Figure
8.5). Swapping makes it possible for the total physical address space of all processes to exceed the
real physical memory of the system, thus increasing the degree of multiprogramming in a system.




Standard Swapping

Standard swapping involves moving processes between main memory and a backing store. The
backing store is commonly a fast disk. It must be large enough to accommodate copies of all
memory images for all users, and it must provide direct access to these memory images. The system
maintains a ready queue consisting of all processes whose memory images are on the backing store
or in memory and are ready to run. Whenever the CPU scheduler decides to execute a process, it
calls the dispatcher. The dispatcher checks to see whether the next process in the queue is in
memory. If it is not, and if there is no free memory region, the dispatcher swaps out a process
currently in memory and swaps in the desired process. It then reloads registers and transfers control
to the selected process.
The context-switch time in such a swapping system is fairly high.
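The dispatcher's swap-in/swap-out decision described above can be sketched as a toy simulation. All names here are illustrative, not a real OS API; memory is modeled as a list that holds at most a fixed number of resident process ids.

```python
# Toy sketch of the dispatcher logic in standard swapping.
# MAX_RESIDENT models limited physical memory (illustrative only).

MAX_RESIDENT = 2

in_memory = ["P1", "P2"]      # memory images currently resident
backing_store = ["P3"]        # memory images swapped out to disk

def dispatch(next_process):
    """Ensure next_process is in memory, swapping if necessary."""
    if next_process not in in_memory:
        if len(in_memory) >= MAX_RESIDENT:
            victim = in_memory.pop(0)       # swap out a resident process
            backing_store.append(victim)
        backing_store.remove(next_process)  # swap in the desired process
        in_memory.append(next_process)
    # ...reload registers and transfer control to next_process...
    return next_process

dispatch("P3")
print(in_memory)       # P3 is now resident; P1 was swapped out
print(backing_store)
```

The expensive part in a real system is the disk transfer of whole memory images, which is why the context-switch time is high.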

Contiguous Memory Allocation
The main memory must accommodate both the operating system and the various user processes.
We therefore need to allocate main memory in the most efficient way possible. This section explains
one early method, contiguous memory allocation.

The memory is usually divided into two partitions: one for the resident operating system and one
for the user processes. We can place the operating system in either low memory or high memory.
The major factor affecting this decision is the location of the interrupt vector. Since the interrupt
vector is often in low memory, programmers usually place the operating system in low memory as
well. Thus, in this text, we discuss only the situation in which the operating system resides in low
memory. The development of the other situation is similar.
We usually want several user processes to reside in memory at the same time. We therefore need
to consider how to allocate available memory to the processes that are in the input queue waiting
to be brought into memory. In contiguous memory allocation, each process is contained in a single
section of memory that is contiguous to the section containing the next process.

Memory Allocation

One of the simplest methods for allocating memory is to divide memory into several fixed-sized
partitions. Each partition may contain exactly one process. Thus, the degree of multiprogramming
is bound by the number of partitions. In this multiple partition method, when a partition is free, a
process is selected from the input queue and is loaded into the free partition. When the process
terminates, the partition becomes available for another process. This method was originally
used by the IBM OS/360 operating system (called MFT) but is no longer in use.
The method described next is a generalization of the fixed-partition scheme (called MVT); it is used
primarily in batch environments. Many of the ideas presented here are also applicable to a time-
sharing environment in which pure segmentation is used for memory management.
In the variable-partition scheme, the operating system keeps a table indicating which parts of
memory are available and which are occupied. Initially, all memory is available for user processes
and is considered one large block of available memory, a hole. Eventually, as you will see, memory
contains a set of holes of various sizes.
As processes enter the system, they are put into an input queue. The operating system takes into
account the memory requirements of each process and the amount of available memory space in
determining which processes are allocated memory. When a process is allocated space, it is loaded
into memory, and it can then compete for CPU time. When a process terminates, it releases its
memory, which the operating system may then fill with another process from the input queue.
At any given time, then, we have a list of available block sizes and an input queue. The operating
system can order the input queue according to a scheduling algorithm. Memory is allocated to
processes until, finally, the memory requirements of the next process cannot be satisfied—that is,
no available block of memory (or hole) is large enough to hold that process. The operating system
can then wait until a large enough block is available, or it can skip down the input queue to see
whether the smaller memory requirements of some other process can be met.
In general, as mentioned, the memory blocks available comprise a set of holes of various sizes
scattered throughout memory. When a process arrives and needs memory, the system searches the
set for a hole that is large enough for this process. If the hole is too large, it is split into two parts.
One part is allocated to the arriving process; the other is returned to the set of holes. When
a process terminates, it releases its block of memory, which is then placed back in the set of holes.
If the new hole is adjacent to other holes, these adjacent holes are merged to form one larger hole.


At this point, the system may need to check whether there are processes waiting for memory and
whether this newly freed and recombined memory could satisfy the demands of any of these waiting
processes.
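The hole bookkeeping described above, splitting a hole on allocation and merging adjacent holes on release, can be sketched with a simple free list. This is illustrative only; real allocators track far more state.

```python
# Simplified free-list sketch: holes are (start, size) pairs.

holes = [(0, 100)]  # initially one large hole of 100 units

def allocate(size):
    """Carve `size` units out of the first big-enough hole."""
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            if hole_size == size:
                holes.pop(i)                                  # hole consumed exactly
            else:
                holes[i] = (start + size, hole_size - size)   # split the hole
            return start
    return None  # no hole is large enough

def free(start, size):
    """Return a block to the hole set, merging adjacent holes."""
    holes.append((start, size))
    holes.sort()
    merged = [holes[0]]
    for s, sz in holes[1:]:
        last_s, last_sz = merged[-1]
        if last_s + last_sz == s:          # adjacent holes: coalesce
            merged[-1] = (last_s, last_sz + sz)
        else:
            merged.append((s, sz))
    holes[:] = merged

a = allocate(30)   # block at 0; hole shrinks to (30, 70)
b = allocate(20)   # block at 30; hole shrinks to (50, 50)
free(a, 30)        # holes: (0, 30) and (50, 50)
free(b, 20)        # adjacent to both: merges back to (0, 100)
print(holes)
```

After both blocks are freed, the adjacent holes coalesce back into the single original hole, which is exactly the merging step the text describes.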
This procedure is a particular instance of the general dynamic storage allocation problem, which
concerns how to satisfy a request of size n from a list of free holes. There are many solutions to this
problem. The first-fit, best-fit, and worst-fit strategies are the ones most commonly used to select a
free hole from the set of available holes.
• First fit. Allocate the first hole that is big enough. Searching can start either at the beginning
of the set of holes or at the location where the previous first-fit search ended. We can stop
searching as soon as we find a free hole that is large enough.
• Best fit. Allocate the smallest hole that is big enough. We must search the entire list, unless
the list is ordered by size. This strategy produces the smallest leftover hole.
• Worst fit. Allocate the largest hole. Again, we must search the entire list, unless it is sorted
by size. This strategy produces the largest leftover hole, which may be more useful than the
smaller leftover hole from a best-fit approach.
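The three strategies can be compared on the same set of free holes with a short sketch. Only hole sizes are modeled here; a real allocator would also track addresses.

```python
# Sketch of first-fit, best-fit, and worst-fit hole selection
# (sizes only; illustrative, not a real allocator).

def first_fit(holes, n):
    """First hole that is big enough, scanning from the start."""
    return next((h for h in holes if h >= n), None)

def best_fit(holes, n):
    """Smallest hole that is big enough: smallest leftover."""
    fits = [h for h in holes if h >= n]
    return min(fits, default=None)

def worst_fit(holes, n):
    """Largest hole: largest leftover."""
    fits = [h for h in holes if h >= n]
    return max(fits, default=None)

holes = [20, 50, 10, 35]
print(first_fit(holes, 15))  # 20: first big-enough hole found
print(best_fit(holes, 15))   # 20: leaves the smallest leftover (5)
print(worst_fit(holes, 15))  # 50: leaves the largest leftover (35)
```

Note that best fit and worst fit must examine every hole unless the list is kept sorted by size, while first fit can stop at the first success.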

Paging

The basic method for implementing paging involves breaking physical memory into fixed-sized
blocks called frames and breaking logical memory into blocks of the same size called pages. When
a process is to be executed, its pages are loaded into any available memory frames from their source
(a file system or the backing store). The backing store is divided into fixed-sized blocks that are the
same size as the memory frames or clusters of multiple frames. This rather simple idea has great
functionality and wide ramifications. For example, the logical address space is now totally separate
from the physical address space, so a process can have a logical 64-bit address space even though
the system has fewer than 2^64 bytes of physical memory.

The hardware support for paging is illustrated in Figure 8.10. Every address generated by the CPU
is divided into two parts: a page number (p) and a page offset (d). The page number is used as an
index into a page table. The page table contains the base address of each page in physical memory.
This base address is combined with the page offset to define the physical memory address that is
sent to the memory unit. The paging model of memory is shown in Figure 8.11.
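The translation just described, splitting an address into page number p and offset d and looking p up in the page table, can be sketched as follows. The page size and the page-table contents are assumed values for illustration.

```python
# Sketch of paging address translation. With a power-of-two page
# size, // and % extract the high-order and low-order bits.

PAGE_SIZE = 1024                 # assumed page size
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (toy data)

def to_physical(logical_address):
    p = logical_address // PAGE_SIZE   # page number (index into page table)
    d = logical_address % PAGE_SIZE    # page offset within the page
    frame = page_table[p]              # base frame from the page table
    return frame * PAGE_SIZE + d       # physical address sent to memory

print(to_physical(1030))  # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054
```

In hardware this split is free: with a 1024-byte page the offset is simply the low 10 bits of the address and the page number is the remaining high bits.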




