The second section of this document presents information on various modern microprocessors. It includes an introduction to the microprocessor and a comparison of microprocessors in various devices such as laptops, desktops, servers and embedded systems. The last part of the section explains the trends that affect the performance and design of modern microprocessors.
Section 1
Operating System
1.0 Introduction
Fedora is an open source operating system built on the Linux kernel. The first release of Fedora was on 6th November 2003 as Fedora Core 1. The latest stable version was released on 2nd November 2010 as Fedora 14, codenamed Laughlin. Fedora is developed by a community of developers collectively known as the Fedora Project. The project was founded by Warren Togami in December 2002 and is sponsored by Red Hat.
The next version of Fedora (Fedora 15 / Lovelock) is scheduled for release on 17th May 2011. Its new features include:
Implementation of GNOME 3
Implementation of systemd
OpenOffice replaced by LibreOffice
Currently, Fedora ranks third among the most popular Linux-based operating systems, Ubuntu being first, followed by Mint.
(Fedora 14 screenshot)
1.1 Memory Management
Memory management is a field of computer science which develops techniques to deal efficiently with a computer's memory. Basically, memory management involves allocating sections of memory to various programs at their request, and then freeing that memory so it can be used again. A good memory management technique maximizes processing efficiency. Memory management is a compromise between quantity (available Random Access Memory) and performance (access time).
A good memory management system must carry out the following tasks:
Allocation of memory blocks for different tasks
Allowing the sharing of memory
Protection of memory which is in use, to prevent one user from changing a task completed by another user
Optimization of available memory
Different operating systems implement different approaches to memory management, and therefore their performance varies. Some of the memory management techniques used by Fedora are listed below (a short sketch after the list shows how a few of them can be inspected on a running system):
Virtual Memory
Garbage collection
Swapping
Memory hierarchy
Overcommit accounting
Out-of-memory (OOM) handling
Drop caches
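Several of these mechanisms can be observed directly on a Fedora machine. The following is a minimal sketch, assuming a Linux system with procfs mounted (standard on Fedora); the helper name read_meminfo is just an illustration:

```python
# Minimal sketch: inspecting memory-management state on a Linux/Fedora
# system through procfs.

def read_meminfo():
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # first field is the kB value
    return info

mem = read_meminfo()
print("Total RAM: %d kB" % mem["MemTotal"])
print("Free RAM:  %d kB" % mem["MemFree"])
print("Swap used: %d kB" % (mem["SwapTotal"] - mem["SwapFree"]))

# Overcommit accounting policy (0 = heuristic, 1 = always, 2 = strict).
with open("/proc/sys/vm/overcommit_memory") as f:
    print("Overcommit mode:", f.read().strip())
```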
1.2 Virtual Memory
Virtual memory is one of the most commonly used memory management techniques in modern personal computers. It was developed for multitasking kernels to handle the problem of there being insufficient RAM for multiple programs to operate simultaneously. Virtual memory allows the computer to look at areas of RAM which have not been used for some time and copy those areas onto the hard disk, so that only the instructions and data currently used by the processor are kept in RAM. The operating system carries this out by creating a temporary file (known as a swap file) and placing it on the hard disk when RAM is not sufficient. This increases the space in RAM and allows new applications to be loaded, because the areas of RAM that were not used recently have been moved to the hard disk.
So, virtual memory basically extends the user's primary memory by treating the hard disk as if it were additional RAM.
(Virtual memory)
Virtual memory is implemented in Fedora because it is a multitasking and multiuser operating system. These features require memory protection and the ability to execute different processes simultaneously whose cumulative size can be greater than the primary memory available in the system. These requirements can be met by implementing virtual memory.
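As a small illustration, the sketch below (assuming Linux and Python's standard mmap module) reserves far more virtual address space than it ever backs with physical pages; pages are committed only when a byte is actually touched:

```python
# Minimal sketch: virtual address space can exceed the RAM actually used.
# Assumes Linux; the mapping may fail if it exceeds the kernel's
# overcommit limit, in which case try a smaller SIZE.
import mmap

SIZE = 4 * 1024**3            # 4 GiB of virtual address space

region = mmap.mmap(-1, SIZE)  # anonymous mapping, not backed by a file
region[0] = 1                 # touching a byte faults in a single page,
region[SIZE - 1] = 1          # so only two pages become resident
print("Reserved", SIZE // 1024**3, "GiB of virtual address space")
region.close()
```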
1.2.1 Advantages and Disadvantages
Advantages of Virtual Memory
1. Virtual memory allows the machine to function as if it had more RAM than it actually does. The machine can therefore run more applications at a given time, because virtual memory increases the amount of primary space available.
2. An application or process can run on a system even when there isn't enough main memory for it. This is achieved by the implementation of virtual memory, because it increases the available RAM space by copying areas of memory which weren't recently used onto the hard disk.
3. Since hard disk space is much cheaper than RAM space, users need not spend a lot of money upgrading their RAM.
Disadvantages of Virtual Memory
1. There will be a significant loss of system performance if the machine depends too much on virtual memory. This is because the operating system must constantly swap information between RAM and the hard disk, and because the read/write speed of a hard disk is much slower than that of RAM and hard disk technology does not allow quick access to small pieces of data at a time. RAM is much faster than a hard disk because RAM is built with integrated circuit technology, while a hard disk is based on magnetic technology, which is much slower.
This can be prevented by ensuring the system has enough RAM installed so that the RAM can handle all the tasks the user runs on a daily basis. Such a setup will ensure the best functioning of the machine.
2. The implementation of virtual memory requires the allocation of a certain portion of hard disk space for its use. This leaves less hard disk space for the end user. That is, if a system has a 20GB hard disk and 2GB of its space is allocated for virtual memory, then the user cannot use that 2GB of space, as it is reserved for virtual memory.
This problem can be resolved by having a hard disk with enough space for the user's requirements, so that the allocation of a portion of the hard disk will not result in insufficient space.
3. The machine might become unstable as a result of the constant swapping of information between hard disk and RAM.
1.3 Garbage Collection
Garbage collection is a kind of automated memory management technique in which the operating system eliminates objects, data or other regions of memory which are no longer in use by the machine or the program. This technique is necessary for the operating system to function well, because the memory available in a machine is always finite, and failing to eliminate this unwanted data will result in significant performance loss and unnecessary use of memory.
In Fedora, garbage collection is mostly broken down into three stages:
Pruning
Trashing
Deleting
1.3.1 Pruning
In this stage, the system identifies unwanted objects, builds or data, and they are detached from certain tags according to the garbage collection policies set by Fedora. These policies allow rules based on tag, signature and package.
No objects, builds or data are deleted in this stage. The unwanted objects, builds or data are merely identified here, and they eventually get deleted in the subsequent stages.
1.3.2 Trashing
This is the stage in which the system looks over the objects, builds or data which were untagged in the pruning stage. These objects, builds or data are then tagged with a trashcan tag, which instructs the system to send them for deletion. The garbage collector sends an object, build or piece of data for deletion only if it meets the following requirements:
1. The object, build or data has been untagged for at least 5 days
2. There are no protection keys registered on the object, build or data
1.3.3 Deleting
This is the final stage of garbage collection. In this stage, all the objects, builds or data are evaluated one final time for any mistakes in their tags. An object, build or piece of data is only removed after it has been tagged with the trashcan tag for longer than the grace period (4 weeks by default).
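The trashing and deleting rules above boil down to two time-based checks. The sketch below is a minimal illustration of that logic; the function and variable names are hypothetical, not Fedora's actual garbage collector code:

```python
# Minimal sketch of the trashing/deleting decisions described above.
# Names are illustrative only, not Fedora's real implementation.
from datetime import datetime, timedelta

UNTAGGED_WAIT = timedelta(days=5)   # must be untagged at least 5 days
GRACE_PERIOD = timedelta(weeks=4)   # default grace period before deletion

def should_trash(untagged_since, has_protection_key, now=None):
    """A build may receive the trashcan tag only if it has been untagged
    long enough and carries no protection key."""
    now = now or datetime.now()
    return not has_protection_key and now - untagged_since >= UNTAGGED_WAIT

def should_delete(trashed_since, now=None):
    """A trashed build is deleted once it has carried the trashcan tag
    for longer than the grace period."""
    now = now or datetime.now()
    return now - trashed_since > GRACE_PERIOD
```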
Section 2
Computer System Architecture
2.0 Introduction
A microprocessor, also known as a logic chip, is an integrated circuit which provides the whole or almost all of the central processing unit (CPU) of a computer on a single chip. A microprocessor is generally designed to perform logical and arithmetic operations.
(AMD Athlon Processor)
Microprocessors were introduced in the early 1970s and were used in electronic calculators, which used Binary Coded Decimal (BCD) for computations. These were 4-bit microprocessors, and they were soon used in devices like printers, terminals and various types of automation.
The first general purpose commercially produced microprocessor was introduced by Intel. This microprocessor, called the Intel 4004, was also the first complete CPU on a chip. The technology used for the development of the Intel 4004 was silicon gate technology, which increased the number of transistors on each microprocessor and so increased the speed of computation. It had a clock speed of 108KHz and 2300 transistors, with ports for Input/Output (I/O), RAM and Read Only Memory (ROM). The Intel 4004 could execute about 92,000 instructions per second, making each instruction cycle 10.8 microseconds. This follows because a microprocessor capable of performing 92,000 instructions per second spends 1/92000 of a second on each instruction, which works out to about 10.8 microseconds.
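As a quick check of that arithmetic:

```python
# Worked check of the instruction time quoted above.
instructions_per_second = 92_000
cycle_us = 1_000_000 / instructions_per_second   # microseconds per instruction
print(round(cycle_us, 2))                        # 10.87, i.e. the ~10.8 quoted
```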
The microprocessor works as an artificial brain for the computer. It receives and gives out various instructions to the other components present in a computer. All microprocessors work on the basis of logic. This is achieved through the three following components of a microprocessor, which form its main features:
1. Set of digital instructions
2. Bandwidth
3. Clock speed
2.1 Growth of the microprocessor
Presently, in the digital age, only a negligible number of electronic gadgets lack a microprocessor. This is because of the rapid development in this field. Today, these devices perform a multitude of advanced tasks which can only be achieved by implementing a microprocessor in them.
The quick development of various fields like automobiles, weather forecasting, medicine, communication and space technology can be credited to the introduction of the microprocessor. This is a result of the microprocessor's capacity to make quick and reliable decisions.
The microprocessor also made the automation of various difficult manual jobs possible. This has resulted in better speed, efficiency and accuracy in many aspects of our lives. The potential of the microprocessor is still enormous, as there is still room for further development.
2.2 Microprocessor design strategy
Microprocessor design and architecture vary depending upon the manufacturer and the requirements of the device the microprocessor will be serving. Some of the design strategies are as follows
Complex Instruction Set Computers (CISC)
In this architecture, most of the work is done by the microprocessor itself. Here, a single instruction can carry out several low-level operations. Low-level operations are operations like loads from memory, arithmetic operations and other such operations.
Example: Motorola 68k processors
Reduced Instruction Set Computers (RISC)
This is a CPU architecture in which the instruction set is kept small and simple, so more of the work is performed by the software itself. This keeps the load on the processor very low and leads to faster execution of instructions.
Example: AMD 29k processors.
2.3 Microprocessors in different devices
Based on the various machines and the tasks they carry out, microprocessors can be broadly categorized into
Desktop microprocessor
Laptop microprocessor
Server microprocessor
Embedded system microprocessor
2.3.1 Desktop Microprocessors
A desktop is a general purpose personal computer which is used in a single location.
(Desktop PC)
As desktops are used in one location, there is a constant supply of power to the system. Also, most desktop cases are effectively ventilated to minimize the temperature rise in the system. As there is no problem with power or ventilation, and there is enough space in a desktop to install cooling devices, the microprocessors are primarily designed to deliver high performance.
As desktop microprocessors are designed primarily for performance, they have the following features (a sketch after this list shows how a few of them can be read from a running system):
(compared to laptop and embedded system microprocessors)
Higher number of transistors, higher maximum temperature
Die size (physical surface area of the die on the wafer) is larger
Processor frequency is higher
Supports larger cache sizes
CPU multiplier is higher
Bus/core ratio is higher
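On a Linux system, some of these characteristics (frequency, cache size) can be read straight from procfs. A minimal sketch, assuming the usual x86-style /proc/cpuinfo format:

```python
# Minimal sketch: reading processor frequency and cache size on Linux.

def cpu_summary():
    """Collect the first logical CPU's fields from /proc/cpuinfo."""
    fields = {}
    with open("/proc/cpuinfo") as f:
        for line in f:
            if ":" not in line:
                continue                      # blank separator lines
            key, value = (p.strip() for p in line.split(":", 1))
            fields.setdefault(key, value)     # keep the first CPU's values
    return fields

info = cpu_summary()
print("Model:     ", info.get("model name"))
print("Frequency: ", info.get("cpu MHz"), "MHz")
print("Cache size:", info.get("cache size"))
```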
2.3.2 Laptop Microprocessors
A laptop is also a general purpose personal computer, intended to be mobile. A laptop has most of the desktop hardware integrated into it, including a keyboard, display, speakers and a touchpad. A laptop is powered from the mains using an AC adapter, but it also has a rechargeable battery attached so that it can function without a constant supply of electricity from an AC adapter until the battery drains out. Modern laptops also include a wireless adapter, camera, microphone, HDMI port, touchscreen and a GSM SIM slot to support better communication and user experience.
(A general purpose laptop)
As laptops are meant to be mobile, they are designed to be compact, with all the hardware and other peripheral devices fastened together; this eases moving the laptop from one place to another.
As laptops are very compact and are designed to run on battery most of the time, the processor for such a machine must address a multitude of issues. These include ventilation, power management and performance.
Power management is an important issue, as laptops should use battery power as efficiently as possible. Therefore a laptop processor should use less power than other microprocessors such as desktop, server or embedded system microprocessors. A laptop processor also needs to generate minimal heat, as all the components are placed together in a very compact space, where an increase in heat would damage much of the hardware.
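One concrete power management mechanism is dynamic frequency scaling, where the processor lowers its clock speed when there is little work to do. A minimal sketch for inspecting it, assuming a Linux kernel that exposes the standard cpufreq sysfs interface (as Fedora does):

```python
# Minimal sketch: inspecting CPU frequency scaling on a Linux laptop.

def read(path):
    with open(path) as f:
        return f.read().strip()

base = "/sys/devices/system/cpu/cpu0/cpufreq"
print("Governor:    ", read(base + "/scaling_governor"))
print("Current freq:", read(base + "/scaling_cur_freq"), "kHz")
print("Min freq:    ", read(base + "/scaling_min_freq"), "kHz")
print("Max freq:    ", read(base + "/scaling_max_freq"), "kHz")
```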
As laptop microprocessors are designed mostly for power efficiency rather than raw performance, they have the following features
(compared to desktop and server microprocessors)
Lower number of transistors
Minimal temperature increase
Die size (physical surface area of the die on the wafer) is very small
Processor frequency is lower
Supports a moderate cache size
CPU multiplier is lower
Bus/core ratio is lower
2.3.3 Server Microprocessors
A server is a single computer, or a series of computers and software, that links numerous computer systems and/or electronic devices together. There are many types of servers, such as email servers, database servers, web servers, enterprise servers and print servers.
Examples of servers include Dell's PowerEdge servers or the HP Superdome, which is a high-end server by HP. Servers have dedicated operating systems developed for them, such as the Windows Server family and Ubuntu Server.
The Intel Itanium 9300 is a microprocessor dedicated to work in business servers. It is one of the most advanced processors available today. This processor is made up of more than two billion transistors on a single die, and supports up to four cores per die and 24MB of on-die L3 cache.
(A typical server room)
A server performs a wide range of tasks and services for numerous clients, and so the performance of the microprocessor isn't the only important aspect. The microprocessor also needs to address issues like redundancy, availability, scalability and interoperability. To accomplish all of this, a server microprocessor must have the following attributes (one of them is illustrated in a sketch after this list):
High frequency operation
Simultaneous multithreading
Optimized memory system
Large cache
Sharing of caches
Excellent remote access service
Non-uniform memory access
Clustering
Excellent power management
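One of these attributes, simultaneous multithreading, can be observed on a running Linux server by comparing logical CPUs with physical cores. A minimal sketch, assuming an x86-style /proc/cpuinfo:

```python
# Minimal sketch: detecting simultaneous multithreading (SMT) on Linux.
import os

def physical_cores():
    """Count unique (physical id, core id) pairs in /proc/cpuinfo."""
    cores, phys = set(), None
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("physical id"):
                phys = line.split(":")[1].strip()
            elif line.startswith("core id"):
                cores.add((phys, line.split(":")[1].strip()))
    return len(cores)

logical = os.cpu_count()
physical = physical_cores()
print("Logical CPUs:  ", logical)
print("Physical cores:", physical)
if physical and logical > physical:
    print("SMT active: %d threads per core" % (logical // physical))
```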
As server microprocessors are designed mainly for performance, they have the following features
(compared to embedded system, desktop and laptop microprocessors)
Highest number of transistors
Die size (physical surface area of the die on the wafer) is the largest
Processor frequency is highest
Supports a high cache size
CPU multiplier is very high
Bus/core ratio is higher
2.3.4 Embedded system microprocessors
An embedded system is a computer system which is designed to perform only one or a few dedicated tasks, often with real-time constraints. These systems are generally embedded as a subsystem in a larger system. Applications of embedded systems range from small systems such as microwave ovens and watches to aircraft electronics and communications.
As the area of operation of an embedded system varies across fields, the microprocessor used in these systems also varies. Still, there are certain essential areas that the microprocessor should address, namely
Response time
Cost
Portability
Power management
Fault tolerance
Motorola's 68k family of processors, which were popular in computers and workstations in the 1980s and early 1990s, are now widely used in embedded systems.
Attributes of the microprocessor in a typical embedded system are
(compared to server, laptop and desktop microprocessors)
Lowest number of transistors
Die size (physical surface area of the die on the wafer) is the smallest
Processor frequency is lowest
Supports little or no cache
CPU multiplier is low
Bus/core ratio is very low
2.4 Microprocessor trends
Since the introduction of the first microprocessor by Intel in 1971, the development of the microprocessor from the 1970s to the present day has been mind-boggling.
The table above illustrates the development of microprocessors over time.
We can infer from the table that performance is one of the features that increases as new microprocessors are developed. This is due to the ever-increasing performance requirements of both software and hardware.
A microprocessor's performance can be increased by
1. Increasing the number of cores
A core is the part of the processor that performs the reading and executing of instructions. Processors initially had only one core. Multicore processors were developed because traditional multi-processing could not keep up with the increasing demand for performance. As the number of cores increases, the operating system can allocate applications to different cores, which results in better performance and an improved multitasking experience.
Even though the number of cores per processor is increasing, which is supposed to enhance performance, in many circumstances this increased performance is not apparent. This is a result of software being unable to exploit multi-core processors, because it was not designed to work on them. It is solved by upgrading the software to be compatible with multicore processors.
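As a small illustration, the sketch below uses Python's standard multiprocessing module to spread a CPU-bound task across all available cores; the task itself is an arbitrary stand-in:

```python
# Minimal sketch: spreading CPU-bound work across cores, so the OS can
# schedule each worker process on its own core.
from multiprocessing import Pool
import os

def busy_sum(n):
    """A deliberately CPU-bound task."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [5_000_000] * os.cpu_count()
    with Pool() as pool:               # one worker per core by default
        results = pool.map(busy_sum, jobs)
    print("Ran %d tasks across %d cores" % (len(jobs), os.cpu_count()))
```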
2. Using a high-speed cache bus
The cache bus is a dedicated bus that the processor uses to communicate with the cache memory. A high-speed cache bus increases performance because it reduces the time required to read or modify frequently accessed data.
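The value of keeping frequently accessed data close to the processor can be glimpsed even from Python: the sketch below reads the same array sequentially (cache-friendly) and in a shuffled order (cache-hostile); both do the same number of reads, but the shuffled pass is usually measurably slower, although interpreter overhead mutes the effect:

```python
# Minimal sketch: cache-friendly vs cache-hostile access order.
import random
import time

N = 1 << 22                     # ~4 million elements
data = list(range(N))

def timed_sum(order):
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return time.perf_counter() - start

sequential = list(range(N))     # neighbouring reads share cache lines
shuffled = sequential[:]
random.shuffle(shuffled)        # each read is likely a cache miss

print("sequential: %.3f s" % timed_sum(sequential))
print("shuffled:   %.3f s" % timed_sum(shuffled))
```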
Microprocessor design is another crucial part of microprocessor development. The trend in design is that the size of microprocessors keeps getting smaller while the number of transistors in each microprocessor increases.
According to Moore's law, formulated by Gordon Moore, the number of transistors in a microprocessor doubles every 1.5 years while the size of the processor remains the same. This was achieved by decreasing the size of the transistors, and it has held for the last three decades. But currently, the size of the transistors and the number of transistors per microprocessor are reaching a saturation point where any further shrinking would cause electric current leakage. To handle this, scientists believe that new materials should be used instead of silicon, which calls for investment in nanotechnology research.
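As a rough worked example of the doubling claim (the doubling period is variously quoted at 1.5 to 2 years; 2 years is used here, which lines up with the roughly two-billion-transistor Itanium 9300 mentioned in section 2.3.3):

```python
# Doubling from the Intel 4004's 2,300 transistors (1971) onwards.
transistors, year = 2300, 1971
while year < 2010:
    transistors *= 2            # one doubling period
    year += 2
print("Projected transistors by %d: %s" % (year, format(transistors, ",")))
# 20 doublings: 2300 * 2**20 is roughly 2.4 billion by 2011.
```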
Conclusion
Information technology has created a revolution. Today, most organizations cannot function effectively without an IT department.
In this document, we have talked about memory management and microprocessors. The document emphasizes the importance of memory management and of the microprocessor in a computer.
Memory management is a crucial part of any operating system, because poor memory management can bring about the failure of the operating system. If the operating system does not implement appropriate memory management techniques, the system may also crash constantly.
The document contains an elaborate explanation of virtual memory and garbage collection.
The microprocessor, on the other hand, is the most important piece of hardware in a system. It is regarded as the brain of the system, as it controls and coordinates all the parts of the system. A computer will not function if there is no microprocessor installed on the motherboard.
In the latter portion of this document, the writer explained the development of the microprocessor and how microprocessors became a necessity in today's world. The differences between the microprocessors of various machines such as desktops, servers, laptops and embedded systems are also covered. The last part of this section explains the major trends influencing microprocessors in terms of development and design.
Frequently Asked Questions
Some of the frequently asked questions are
1. Why is memory management important?
Memory management is important because it ensures that the memory installed in the system is efficiently managed. This ensures effective performance of the machine.
2. State some commonly used memory management techniques
Virtual Memory
Swapping
Garbage Collection
3. Why is power management important in laptop microprocessors?
As laptops are made to be used on the move, they are powered by rechargeable batteries which hold limited power. Therefore, a laptop microprocessor should be power efficient to maximize the usage time of the laptop.
4. What is Moore's law?
Moore's law is an observation by Gordon Moore which says that the number of transistors in a microprocessor will double every 1.5 years while the size of the processor remains the same.
Limitations and Extension
1. Limitation:
The system slows down because it depends too heavily on virtual memory
Extension:
There is a significant loss of system performance when the system relies heavily on virtual memory, because it takes additional time to access data stored on the hard disk. This happens due to insufficient RAM. It can be resolved either by closing some applications or, if the user requires those applications to run simultaneously, by installing more RAM in the system
2. Limitation:
The number of transistors that can be put into a microprocessor will eventually reach a saturation point, after which it will be impossible to add any more transistors without consequences such as current leakage
Extension:
It is true that, at some point in the future, the number of transistors per processor will hit its maximum, and it will then be impossible to add any more transistors because the size of the transistor cannot be reduced any further. To handle this issue, motherboards which can support multiple processors should be developed.
Appendices
(Fedora logo)
(Fedora 14 screenshot)
(Virtual memory representation, showing how virtual memory works)
(An AMD Athlon 64 processor)
(PC)
(Laptop)
(Servers)
(Microprocessor development image, showing the exponential growth of microprocessors)