1. Briefly describe Moore's Law. What are the implications of this law? Are there any practical limitations to Moore's Law?
Moore's Law is an empirical observation that the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years. It describes a long-term trend in the history of computing hardware and serves as a rule of thumb in the computer industry for the growth of computing power over time.
Attributed to Gordon E. Moore, the co-founder of Intel, it states that the growth of computing power follows an empirical exponential law. Moore originally proposed a 12-month doubling period and later revised it to 24 months. The mathematics of repeated doubling has led some to predict that within 30-50 years computers will become more intelligent than human beings.
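To make the arithmetic of doubling concrete, the short Python sketch below projects a transistor count forward under a fixed doubling period. The starting count and time spans are illustrative assumptions, not figures from Moore's paper.

```python
# Illustrative sketch of exponential doubling; starting values are assumed.
def projected_transistors(start_count, years, doubling_period_years=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    return start_count * 2 ** (years / doubling_period_years)

# Hypothetical example: start from 2,300 transistors (roughly the first
# microprocessors of the early 1970s) and project forward.
start = 2_300
for years in (10, 20, 40):
    print(f"After {years} years: {projected_transistors(start, years):,.0f} transistors")
```

Under a two-year doubling period the count grows about a thousand-fold every 20 years, which is why even modest-sounding doubling rates produce dramatic long-run change.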
The capabilities of many digital electronic devices are strongly linked to Moore's law: processing speed, memory capacity, sensors, and even the number and size of pixels in digital cameras.
All of these are improving at (roughly) exponential rates as well. This has dramatically increased the usefulness of digital electronics in nearly every segment of the world economy. Moore's law thus describes a driving force of technological and social change in the late 20th and early 21st centuries.
The law has several related formulations.
Transistors per integrated circuit. The most popular formulation is the doubling of the number of transistors on integrated circuits every two years. By the end of the 1970s, Moore's law had become known as the limit for the number of transistors on the most complex chips, and recent trends show that this rate was maintained into 2007.
Density at minimum cost per transistor. This is the formulation given in Moore's 1965 paper. It is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is lowest. As more transistors are put on a chip, the cost to make each transistor decreases, but the chance that the chip will not work due to a defect increases. In 1965, Moore examined the density of transistors at which cost is minimized and observed that, as transistors were made smaller through advances in photolithography, this number would increase at "a rate of roughly a factor of two per year".
Power consumption. The power consumption of computing nodes doubles roughly every 18 months.
Hard disk storage cost per unit of information. A similar law (sometimes called Kryder's Law) has held for hard disk storage cost per unit of information. The rate of progression in disk storage over the past decades has actually sped up more than once, corresponding to the utilization of error-correcting codes, the magnetoresistive effect and the giant magnetoresistive effect. The current rate of increase in hard drive capacity is roughly similar to the rate of increase in transistor count.
Network capacity. According to Gerald (Gerry) Butters, the former head of Lucent's Optical Networking Group at Bell Labs, there is another version, called Butters' Law of Photonics, a formulation that deliberately parallels Moore's law. Butters' law says that the amount of data coming out of an optical fiber doubles every nine months.
Thus, the cost of transmitting a bit over an optical network decreases by half every nine months. The availability of wavelength-division multiplexing (WDM) increased the capacity that could be placed on a single fiber by as much as a factor of 100. Optical networking and dense wavelength-division multiplexing (DWDM) are rapidly bringing down the cost of networking, and further progress seems assured; as a result, the wholesale price of data traffic collapsed during the dot-com bubble. Nielsen's Law says that the bandwidth available to users increases by 50% annually.
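As a rough comparison of these related growth laws, the Python sketch below converts each quoted doubling period into an equivalent annual growth factor. The doubling periods are the ones cited above; the conversion itself is simple compounding, so treat the output as back-of-the-envelope figures.

```python
# Convert a doubling period (in months) into an equivalent annual growth factor.
# The periods below are the ones quoted in the text; the math is simple compounding.
def annual_growth_factor(doubling_period_months):
    return 2 ** (12.0 / doubling_period_months)

laws = {
    "Moore's law (transistor count, ~24 months)": 24,
    "Power consumption (~18 months)": 18,
    "Butters' law (optical capacity, ~9 months)": 9,
}
for name, months in laws.items():
    print(f"{name}: x{annual_growth_factor(months):.2f} per year")

# Nielsen's law is quoted directly as ~50% per year (a factor of 1.5),
# which corresponds to a doubling period of roughly 21 months.
```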
2. What is a quad core processor? What advantages does it offer users?
Quad-core processors are computer central processing units (CPUs) that have four separate processing cores contained in a single device. Intel and AMD, two popular CPU manufacturers, both produce quad-core processors. Quad-core processors carry several advantages over single-core processors, though there is skepticism as to how much of an advantage they offer the average computer user.
Multitasking. Perhaps the most significant benefit of quad-core processors is their ability to handle several applications at the same time. When you run several programs on a single-core processor, it slows down because it must share its time among all of them. With a quad-core processor, each core can take responsibility for a different process, so even running four demanding programs at once is possible without much delay from a lack of processing power.
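To illustrate how independent work can be spread across cores, the sketch below uses Python's standard multiprocessing module to run four CPU-bound tasks in parallel. The workload (summing squares) is only a stand-in for any demanding program, and the worker count of four simply mirrors a quad-core CPU.

```python
# Minimal sketch: spread four CPU-bound tasks across four worker processes.
# The workload is a placeholder for any demanding program.
from multiprocessing import Pool

def heavy_task(n):
    """Stand-in for a CPU-bound program."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    workloads = [10_000_000] * 4        # four demanding "programs"
    with Pool(processes=4) as pool:     # one worker per core on a quad-core CPU
        results = pool.map(heavy_task, workloads)
    print(results)
```

On a quad-core machine the four tasks finish in roughly the time one task takes, whereas a single core must run them one after another.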
Future Programs. One of the frequently cited benefits of quad-core processors is that they are "future proof." As of summer 2009, there are not many programs that can utilize the full power of a quad-core processor, but programs and games capable of using multiple cores in parallel will be developed in the future. If and when this happens, computers without multiple cores will quickly become obsolete, while those with quad-core processors will likely remain useful until developers write programs that can utilize an even greater number of cores.
Taxing Processes. Another area in which quad-core processors will yield significant benefits is in processes that require calculations on large amounts of data, such as rendering 3D graphics, compressing CDs or DVDs and audio and video editing. Enterprise resource planning and customer relationship management applications also see a noticeable benefit with quad-core processors.
Power Consumption. The integrated architecture of a quad-core processor uses less power than if the four cores were split into separate physical units. This is important, since the amount of electricity required by computer power supplies has risen quickly in recent years. Also, newer CPUs are beginning to use a 45 nm process, which requires less power and produces less heat than the larger 65 nm process it replaces.
Criticism. Until programs take full advantage of multiple cores, there will not be a significant difference in performance between quad-core and dual-core processors, and perhaps even quad-core and single-core processors. Considering the rapid progress of computer technology, there may be processors with eight, ten or more cores by the time programs are developed that properly utilize parallel processing of many cores.
3. What would be an advantage for a university computer lab to install thin clients rather than standard desktop personal computers? Can you identify any disadvantages?
A thin client is a compact, stripped-down computer used as an access point for server-based computing. It has fewer parts and requires fewer components to run; hence, it offers numerous cost-efficiency benefits. Although the benefits of thin clients are considerable, their disadvantages must also be weighed.
Thin client computing fits many work environments. Since thin clients do not need to be in the same place as their server, the setup offers benefits that are largely practical: clients can be taken into the harshest workplaces, such as dusty desert camps, and can be deployed even after a natural disaster.
Thin clients are also well suited to environments where space is at a premium. A thin client is inherently space-conserving, since it comes in one piece with only the monitor showing and the unit hidden behind it. Some even mount on walls with only the peripherals and the monitor exposed.
Even workplaces with very little budget for air conditioning can expect to gain from thin clients in their facilities. The absence of moving parts means less heat is generated, mainly because thin clients use solid-state devices such as flash storage instead of hard drives.
However, as ideal as server-based computing may seem, there are notable disadvantages concerning cost and performance. Below is a rundown of advantages and disadvantages to consider before deciding to use thin client computing in a university computer lab.
Advantages of Thin Client Computing:
Lower Operational Costs – In an office environment where several workstations access a single server, operational costs are reduced in the following ways:
* Setting up a device takes less than ten minutes.
* The lifespan of thin clients is very long, since there are no moving parts inside each unit. The only parts that need regular replacement are the peripherals, which are external to the unit. This brings cost efficiency in maintenance: when something breaks on the client's end, the fix can be as simple as swapping in a replacement unit. Even wear and tear is barely noticeable.
* Energy efficiency – A thin client unit consumes roughly 20 W to 40 W, as opposed to a regular desktop PC, which consumes 60 W to 110 W during operation. In addition, thin clients themselves need little to no air conditioning, which means lower operating costs; whatever cooling is needed is concentrated at the server area. (A rough cost sketch follows this list of advantages.)
* Work efficiency – The thin client work environment can be far-reaching and extensive; it can provide quick access to remotely located workers who also operate on server-based computing.
Superior Security – Since users access the server only over network connections, security measures such as different access levels for different users can be implemented. This way, users with lower access levels will not be able to see, know about, or, in the worst case, hack into the confidential files and applications of the organization, since these are all secured at the server's end. It is also a way of securing data in the event of a natural disaster: the server is the only machine that needs to survive, since it is the main location of all saved data, and new clients can easily be connected to it immediately afterwards as long as it is intact.
Lower Malware Infection Risks – There is very little chance of malware reaching the server from a thin client. The only traffic between client and server is keyboard and mouse input going to the server and screen images coming back. Thin clients get their software from the server itself, so software updates, virus scanning, and patches are applied only on the server. At the same time, the server is the machine that processes the information and stores it afterwards.
Highly Reliable – Organizations can expect continuous service for long durations, since thin clients can have a lifespan of more than five years. Because thin clients are built with solid-state components, they suffer less wear and tear from constant use.
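As a rough illustration of the energy-efficiency point above, the sketch below estimates the annual electricity saving for a lab of thin clients versus desktops. The wattages are the ranges quoted above; the lab size, usage hours, and electricity price are assumptions for illustration, and server-side power is not included.

```python
# Back-of-the-envelope energy comparison for a computer lab.
# Wattages come from the text; lab size, hours, and price are assumed.
# Server-side power consumption is deliberately left out.
THIN_CLIENT_WATTS = 30      # midpoint of the 20-40 W range
DESKTOP_WATTS = 85          # midpoint of the 60-110 W range
UNITS = 50                  # assumed number of lab seats
HOURS_PER_YEAR = 2_000      # assumed usage: ~8 h/day, ~250 days/year
PRICE_PER_KWH = 0.15        # assumed electricity price in dollars

def annual_cost(watts_per_unit):
    kwh = watts_per_unit * UNITS * HOURS_PER_YEAR / 1_000
    return kwh * PRICE_PER_KWH

saving = annual_cost(DESKTOP_WATTS) - annual_cost(THIN_CLIENT_WATTS)
print(f"Estimated annual electricity saving: ${saving:,.0f}")
```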
Disadvantages of Thin Client Computing:
Client Organizations Are Subject to Limitations – Since thin clients do most of their processing on the server, there will be setups where rich media access is disabled. Some of these restrictions stem from the poor performance that results when several thin clients access multimedia simultaneously. Heavy, resource-hungry applications such as Flash animations and video streaming can slow the performance of both the server and the clients. In organizations where video conferencing and webinars are often carried out, the presentation of materials and webcam/video communications can be adversely affected.
Requires a Superior Network Connection – A network with latency or lag issues can greatly affect thin clients, even rendering them unusable, because the output of processing will not be transmitted smoothly from the server to the client. In such cases the thin client becomes very hard to use, since the server's responsiveness affects both the visual and the processing performance of the client. Even printing tasks have been observed to hog bandwidth in some thin client setups, which can affect the work going on at other units.
A Thin Client Work Environment Is Cost Intensive – Before converting regular workstations into a thin client environment, a comparative cost analysis should be performed. Thin client setups have been found to be cost efficient only when deployed on a large scale. A setup using a given number of regular PCs should be compared against one using a dedicated server and the same number of thin clients.
In some cases the cost of installing the server itself is already far more expensive than all the regular workstations combined. This is aside from the fact that a thin client unit can cost as much as a fully-equipped PC. Nevertheless, some argue that the benefits of thin clients, as far as cost and maintenance efficiency are concerned, will offset the initial costs. Besides, as a capitalized investment, the costs can be spread out for at least five years.
Still, the fees for the various licenses involved, including software for every station, Client Access Licenses (CALs) for clients and the server, and license tracking and management, can tie up a substantial amount of business funds that may take a long time to recover. Thus, smaller organizations are advised to consider such costs carefully before venturing into server-based or thin client computing.
Single Point of Failure Affects All – If the server goes down, every thin client connected to it becomes barely usable. No matter how many clients are connected, if the server becomes inaccessible, all work processes will come to a standstill thereby adversely affecting business-hour productivity.