What is Memory Management in an Operating System

In a mono-programming or uni-programming system, main memory is divided into two parts: one part for the operating system and the other for the job that is currently executing. Consider the figure below for a better understanding.

[Figure: memory partition]

Partition 2 is allocated to a user process, but part of partition 2 is wasted, as indicated by the white area in the figure. In a multiprogramming environment, the user space is divided into a number of partitions, one partition per process. This subdivision is carried out dynamically by the operating system; that task is known as Memory Management. Efficient memory management is possible with multiprogramming.

If only a few processes are kept in main memory, then much of the time all of them may be waiting for I/O and the processor will sit idle. Thus, memory needs to be allocated efficiently to pack as many processes into memory as possible.


Logical vs Physical Address Space

An address generated by the CPU is called a logical address, whereas the corresponding address produced by the memory management unit and actually seen by the memory is called a physical address.

For example, J1 is a program written by a user, and its size is 100 KB. However, the program is loaded into main memory from 2100 KB to 2200 KB; this actual load address in main memory is called the physical address.

The set of all logical addresses generated by a program is referred to as the Logical Address Space. The set of physical addresses corresponding to these logical addresses is referred to as the Physical Address Space.

In our example, 0 to 100 KB is the logical address space and 2100 to 2200 KB is the physical address space; therefore:

Physical address = Logical address + Contents of the relocation register
2200 KB = 100 KB + 2100 KB (contents of the relocation register)
[Figure: logical vs physical address]

The runtime mapping from logical to physical addresses is done by the memory management unit (MMU), which is a hardware device.
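
To make the translation concrete, here is a minimal sketch in C of what a relocation-register MMU does, using the numbers from the example above. The variable names (relocation_register, logical_address) are illustrative only, not part of any real hardware interface.

#include <stdio.h>

/* Minimal sketch of relocation-register address translation.
   The value 2100 mirrors the load address in the example above (in KB). */
int main(void)
{
    unsigned int relocation_register = 2100; /* start of the partition (KB)       */
    unsigned int logical_address = 100;      /* address generated by the CPU (KB) */

    /* The MMU adds the relocation register to every logical address. */
    unsigned int physical_address = relocation_register + logical_address;

    printf("Logical %u KB -> Physical %u KB\n", logical_address, physical_address);
    return 0;
}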


Swapping

Swapping is a method of improving main memory utilization. For example, suppose main memory holds 10 processes, that this is its maximum capacity, and that the CPU is currently executing process number 9. In the middle of its execution, process 9 needs I/O, so the CPU switches to another job and process 9 is moved out to disk. Another process is then loaded into main memory in place of process 9. When process 9 completes its I/O operation, it is moved from the disk back into main memory.

Moving a process from main memory to disk is said to be Swap Out, and moving it from disk back to main memory is said to be Swap In. This mechanism is known as Swapping. We can achieve efficient memory utilization with swapping.

[Figure: swapping]

Swapping requires a backing store, which is commonly a fast disk. It must be large enough to accommodate copies of the memory images of all users' processes. When a process is swapped out, its executable image is copied to the backing store; when it is swapped in, the image is copied into a new block allocated by the memory manager.
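
The swap-out/swap-in sequence described above can be sketched as a toy simulation in C. The process structure and the swap_out/swap_in helpers are hypothetical names used only to show the state change; they are not a real operating-system interface.

#include <stdio.h>

/* Toy model of swapping: process 9 is swapped out while it waits for I/O
   and swapped back in when the I/O completes. */
enum location { IN_MEMORY, ON_BACKING_STORE };

struct process {
    int id;
    enum location where;
};

static void swap_out(struct process *p)
{
    p->where = ON_BACKING_STORE;               /* image copied to the backing store */
    printf("Process %d swapped out\n", p->id);
}

static void swap_in(struct process *p)
{
    p->where = IN_MEMORY;                      /* image copied into a newly allocated block */
    printf("Process %d swapped in\n", p->id);
}

int main(void)
{
    struct process p9 = { 9, IN_MEMORY };
    swap_out(&p9);   /* process 9 blocks on I/O; its memory is given to another job */
    swap_in(&p9);    /* I/O completes; process 9 returns to main memory             */
    return 0;
}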


Memory Management Requirements

There are many methods and policies for memory management. To evaluate these methods, consider the five requirements listed below.

  1. Relocation
  2. Protection
  3. Sharing
  4. Logical Organization
  5. Physical Organization

1. Relocation

Relocation is the mechanism that converts a logical address into a physical address. An address generated by the CPU is said to be a logical address; an address generated by the memory management unit is said to be a physical address.

Physical Address = Contents of Relocation Register + Logical Address

Relocation is necessary when a process is swapped back in from the backing store to main memory. Ideally, the process would occupy the same location it had before being swapped out, but sometimes that is not possible, and in that case the process must be relocated to a different region of memory.

[Figure: memory relocation]

2. Protection

The word protection means securing memory against unauthorized use. The operating system can protect memory with the help of base and limit registers. The base register holds the starting address of the process, and the limit register specifies the boundary of that job, which is why the limit register is also said to be a fencing register.

[Figure: memory protection]

The base register holds the smallest legal physical memory address, and the limit register contains the size of the process. An access is authorized only if the logical address is less than the contents of the limit register; otherwise, a trap to the operating system is generated. The physical address (Logical + Base) therefore always lies between the base and the base plus the limit, so it causes no problems; any address outside that range is trapped by the operating system.
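
The check can be written out as a small C sketch. It assumes the simple base/limit scheme described above; the register values 300040 and 120900 are made-up example numbers, and the translate helper is purely illustrative.

#include <stdio.h>
#include <stdlib.h>

/* Sketch of the base/limit protection check. */
static unsigned int translate(unsigned int logical, unsigned int base, unsigned int limit)
{
    if (logical >= limit) {                 /* outside the process's address space */
        fprintf(stderr, "Trap: addressing error\n");
        exit(EXIT_FAILURE);
    }
    return base + logical;                  /* legal physical address */
}

int main(void)
{
    unsigned int base = 300040, limit = 120900;  /* example register contents */
    printf("Physical: %u\n", translate(500, base, limit));    /* authorized access */
    printf("Physical: %u\n", translate(200000, base, limit)); /* traps to the OS   */
    return 0;
}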

3. Sharing

Protection mechanisms are generally also required when several processes access the same portion of main memory. Accessing the same portion of main memory by a number of processes is said to be Sharing. If a number of processes are executing the same program, it is advantageous to allow each process to access the same copy of the program rather than to keep its own copy. If each process maintained a separate copy of that program, a lot of main memory would be wasted.

For example, if 3 users want to prepare their resumes using a word processor, the three users share the single copy of the word processor on the server instead of each holding an individual copy.

4. Logical Organization

We know that the main memory in a computer system is organized as a linear, or one-dimensional, address space that consists of a sequence of bytes or words. Secondary memory at its physical level is similarly organized. Although this organization closely mirrors the actual machine hardware, it does not correspond to the way in which programs are typically constructed. Most programs are organized into modules, some of which are unmodifiable (read-only, execute-only) and some of which contain data that may be modified. If the operating system and computer hardware can effectively deal with user programs and data in the form of modules of some sort, then a number of advantages can be realized.

5. Physical Organization

Computer memory is organized into at least two levels: main memory and secondary memory. Main memory provides fast access at a relatively high cost. In addition, main memory is volatile; that is, it does not provide permanent storage. Secondary memory is slower and cheaper than main memory, and it is usually not volatile. Thus, secondary memory of large capacity can be provided for long-term storage of programs and data, while a smaller main memory holds the programs and data currently in use.


Dynamic Loading and Dynamic Linking

The word loading means bringing a program or module from a secondary storage device into main memory. There are two types of loading:

  1. Compile time loading
  2. Runtime loading

When all the routines are loaded into main memory at compile time, this is said to be Static Loading or Compile-Time Loading. When routines are loaded into main memory at the time of execution, this is said to be Dynamic Loading. For example, consider a small program in the C language.

#include<stdio.h>
int main() 
{
	printf("Welcome to world of O.S.");
	return 0;
}

Suppose the program occupies 1 KB in secondary storage but hundreds of KBs in main memory, because the library routines declared in header files such as stdio.h must be loaded into main memory to execute the program. Generally, this loading is done at execution time. The meaning of dynamic loading in a single statement: with dynamic loading, a routine is not loaded until it is called.

The main advantage of dynamic loading is that an unused routine is never loaded, so we obtain better memory-space utilization. This scheme is particularly useful when large amounts of code are needed to handle infrequently occurring cases.
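
On POSIX systems, loading a routine only when it is needed can be demonstrated with the dlopen interface. This is just an illustrative sketch, not part of the original example; the library name "libm.so.6" is Linux-specific, and error handling is kept minimal.

#include <stdio.h>
#include <dlfcn.h>   /* dlopen, dlsym, dlclose; link with -ldl on older glibc */

int main(void)
{
    /* The math library is not loaded until this call executes. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the cos routine inside the freshly loaded library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);  /* the library can be unloaded when no longer needed */
    return 0;
}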

Linking the library files before execution is said to be static linking, and linking at execution time is said to be dynamic linking. Some operating systems support only static linking, but most modern systems also support dynamic linking. The definition of dynamic linking in a single statement: linking is postponed until execution time.


Memory Management Functions

Memory management is concerned with four functions (a small illustrative sketch follows the list):

  • Keeping track of memory: recording how much memory is allocated to each job.
  • Determining memory policy: deciding when a job gets memory; here, the job gets all of its memory when it is scheduled.
  • Allocation of memory: all required memory is allocated to the job.
  • De-allocation of memory: when the job is done, all memory allocated to it is freed.
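
The following toy C sketch shows the bookkeeping these four functions imply. The job_memory structure and the allocate_for_job/deallocate_for_job helpers are invented for illustration and do not correspond to any real kernel interface.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One bookkeeping record per job. */
struct job_memory {
    int    job_id;
    size_t bytes;    /* keeping track: total memory allocated to the job */
    void  *block;
};

static int allocate_for_job(struct job_memory *j, int id, size_t bytes)
{
    j->block = malloc(bytes);       /* allocation of memory                        */
    if (!j->block)
        return -1;                  /* policy here: the job gets all of it or none */
    j->job_id = id;
    j->bytes  = bytes;
    return 0;
}

static void deallocate_for_job(struct job_memory *j)
{
    free(j->block);                 /* de-allocation: freed when the job is done */
    memset(j, 0, sizeof *j);
}

int main(void)
{
    struct job_memory j;
    if (allocate_for_job(&j, 1, 64 * 1024) == 0) {
        printf("Job %d holds %zu bytes\n", j.job_id, j.bytes);
        deallocate_for_job(&j);
    }
    return 0;
}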