
Gigabytes of Memory and File Structure Impact

Author: DAOUDI Samir | Context: MSc Software Engineering – Computer Structure

Computers run thanks to the collaboration of different components. The execution of our applications is handled by the CPU, which interprets the machine code and executes the different instructions. The first step performed is loading the application's code, or part of it, into memory, which is a critical resource used for this purpose.

For a long time, much effort has been spent on increasing memory size, and we have jumped from kilobytes of memory to gigabytes. But memory sizes are still not sufficient, especially for the latest software solutions (graphics, calculation, multimedia, etc.). Computer memory has some properties that can be summarized as follows:

Positive:
– Very fast access time.
– Can be used to hold temporary calculation results.
– The allocated space can be non-contiguous.
– Characterized by direct access.

Negative:
– Volatile memory.
– May cause competition between processes and even deadlocks.
– Reduced size compared to hard disks.

As D. Gookin stated, 'If computers were a sport, memory would be the competition' [1]. I think that this competition between processes is due to the limited size of memory.

If, in the future, memory size can be increased to many billions of bytes, this could have many consequences for the IT field.

Before discussing the impact of huge memory sizes on file structures, let us first review the different file structures.
Files are in fact collections of records stored in rows [2]; file structures differ in the way these records are stored on the hard disk, accessed, and presented to the users.

There are different structures for how files are organized internally; the simplest ones are listed below (a short sketch contrasting sequential and indexed access follows the list):

– Binary file: a collection of bits that can be loaded as-is into memory for processing.
– Sequential file: a file whose records are accessed sequentially; to access the xth record, we have to browse from the beginning through the (x-1)th record.
– Indexed file: a file that can be accessed randomly and quickly thanks to a structure known as an index.
– Random access file: a file in which the data can be accessed in any order [3].
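To make the difference between sequential and indexed access concrete, here is a minimal Python sketch. It is my own illustration, using a hypothetical length-prefixed record format rather than a layout from the lecture notes: reading the xth record sequentially means passing over every earlier record, while an in-memory index lets us seek straight to it.

```python
import struct

def write_records(path, records):
    """Write variable-length records, each prefixed with a 4-byte length (hypothetical format)."""
    with open(path, "wb") as f:
        for rec in records:
            f.write(struct.pack("<I", len(rec)))
            f.write(rec)

def read_sequential(path, x):
    """Sequential access: skip records 0..x-1 one by one to reach record x."""
    with open(path, "rb") as f:
        for _ in range(x + 1):
            (length,) = struct.unpack("<I", f.read(4))
            data = f.read(length)
        return data

def build_index(path):
    """Build an in-memory index: record number -> byte offset of the record."""
    index = []
    with open(path, "rb") as f:
        offset = 0
        while True:
            header = f.read(4)
            if not header:
                break
            (length,) = struct.unpack("<I", header)
            index.append(offset)
            f.seek(length, 1)            # skip over the record body
            offset += 4 + length
    return index

def read_indexed(path, index, x):
    """Indexed (random) access: seek directly to record x, no scanning needed."""
    with open(path, "rb") as f:
        f.seek(index[x])
        (length,) = struct.unpack("<I", f.read(4))
        return f.read(length)

records = [b"alpha", b"bravo", b"charlie", b"delta"]
write_records("demo.dat", records)
idx = build_index("demo.dat")
assert read_sequential("demo.dat", 2) == read_indexed("demo.dat", idx, 2) == b"charlie"
```

The index itself lives in memory here, which is exactly why the size of memory matters for this kind of file structure.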

The impact of very large memory would be seen mainly in indexed files, and especially in the case of doubly indexed files. A double index is used as a solution to cope with very large indexes: another index is created for the index itself. With very large memories, I think we would keep just one level of indexes.
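As a rough illustration of this point (a hypothetical sketch with invented numbers, not taken from the coursework): with a two-level index, only a small outer index is kept in memory and each lookup must also consult a block of the inner index, whereas if memory can hold the whole index, the outer level becomes unnecessary.

```python
import bisect

# Single-level index: every key maps directly to a disk offset.
# Feasible only if the whole index fits in memory.
single_level = {key: key * 100 for key in range(1_000)}

def lookup_one_level(key):
    """One probe of an in-memory dictionary gives the record's offset."""
    return single_level[key]

# Two-level index: the inner index is split into fixed-size blocks
# (as if each block lived on disk); the outer index keeps only the
# first key of each block, so it stays small enough to fit in memory.
BLOCK_SIZE = 100
inner_blocks = [
    [(key, key * 100) for key in range(start, start + BLOCK_SIZE)]
    for start in range(0, 1_000, BLOCK_SIZE)
]
outer_index = [block[0][0] for block in inner_blocks]   # first key of each block

def lookup_two_levels(key):
    """First find the right inner block, then search inside that block."""
    block_no = bisect.bisect_right(outer_index, key) - 1
    for k, offset in inner_blocks[block_no]:            # simulated block read
        if k == key:
            return offset
    raise KeyError(key)

assert lookup_one_level(437) == lookup_two_levels(437) == 43700
```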

It would also be an improvement in that the competition for memory resources would decrease, and memory would no longer be considered an expensive resource.
Binary and sequential files would keep their structure, as they have been specifically designed this way for specific usages (video, audio, applications, etc.).

Hard drives will not be affected. However large memory becomes, hard drives will still exist, simply because they are permanent storage devices whereas memory is volatile.
From my personal point of view, the increase in memory capacity will have the following benefits and problems:

Benefits:
– Reduced competition between processes.
– Faster execution of applications: no disk I/O is needed to store/load pages or temporary information.
– Programs can be completely loaded into memory.

Problems:
– Addressing a huge memory requires more bits in the address field of the instructions (see the sketch after this list).
– If the instructions have to be redesigned, many other changes may follow.
– The garbage collector's task can become harder.
– The translation of memory addresses (physical <-> virtual) may become more complex.
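As a quick check of the address-field point (my own back-of-the-envelope sketch, not from the lecture notes): the number of address bits grows only with the logarithm of the memory size, but every extra bit still has to fit into the instruction encoding.

```python
import math

def address_bits(memory_bytes):
    """Minimum number of bits needed to address every byte of the memory."""
    return math.ceil(math.log2(memory_bytes))

for label, size in [("4 GB", 4 * 2**30),
                    ("1 TB", 2**40),
                    ("16 EB", 2**64)]:
    print(f"{label:>6}: {address_bits(size)} address bits")

# 4 GB needs 32 bits, 1 TB needs 40 bits, and a full 64-bit address
# space (16 exabytes) needs 64 bits in the address field.
```

So going from 4 GB to a full 64-bit address space doubles the address width from 32 to 64 bits, which is exactly the kind of instruction-format change mentioned above.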

References:

[1] Dan Gookin, 'PCs For Dummies', 2009. ISBN: 978-0-470-46542-4.
[2] A. A. Puntambekar, 'Data and File Structures', 2009. ISBN: 9788184317015.
[3] Lecture notes – Week 7, 'Databases and File Structure', p. 6.