During one of the recent computing conferences I heard a speaker say:
With all the laws of computing fighting against us, the industry needs open interfaces to allow innovations...
I was curious about who all are fighting against us 🙂. In this post I have tried to put together a summary of different laws related to computing (especially computing hardware trends, parallel performance and efficiency) that I came across over the years. Let's get started...!
Update: If you prefer watching a video instead of reading, you can look at my first screencast attempt here: First Screencast : Summary Of Computing Laws!
Amdahl's Law

In parallel computing this law from Gene Amdahl is used to predict the theoretical speedup of an application when using multiple processors. It can be summarized as:
The speedup of a program using multiple processors is limited by the time needed for the sequential fraction of the program.
If an application has a fraction $p$ that can run in parallel and the remaining fraction $(1 - p)$ is sequential, the maximum speedup achieved using $N$ processors can be formulated as:

$$S(N) = \frac{1}{(1 - p) + \frac{p}{N}}$$

As $N$ becomes very large, $\frac{p}{N}$ approaches $0$ and the speedup tends to $\frac{1}{1 - p}$. This analysis neglects other potential bottlenecks in computing systems such as memory bandwidth and I/O bandwidth. References Wikipedia, AFIPS Spring Joint Computer Conference paper (1967)
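As a quick illustration, the formula can be evaluated in a few lines of Python (the function name and the sample numbers are my own, not from any library):

```python
# A minimal sketch of Amdahl's Law.

def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of the work parallelizable, even 1000 processors stay
# below the 1/(1-p) = 10x ceiling.
print(amdahl_speedup(0.9, 10))
print(amdahl_speedup(0.9, 1000))
```

Note how the second call barely improves on the first despite 100x more processors: the sequential 10% dominates.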
Bell's Law

In 1972 Gordon Bell made the following observation about how low cost, general purpose computing architectures evolve, become mainstream and eventually die out:
Roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of a new industry.
Technology advances in semiconductors, storage and networks enable a new, lower cost computer class to serve a new need and disrupt an existing class. During the past decades there have been many examples including mainframes (1960s), minicomputers (1970s), personal computers with LAN (1980s), web browsers (1990s), cloud computing (2000s), handheld devices (2010s) and wireless sensor networks enabling IoT during the last decade. References Wikipedia
Dennard Scaling

In 1974 Robert H. Dennard and his colleagues explained how CPU manufacturers were able to raise clock frequencies from one generation to the next without significantly increasing overall power consumption:
As transistors get smaller, their power density stays constant, so that the power use stays in proportion with area; both voltage and current scale (downward) with length.
This law is also known as MOSFET scaling. Dennard scaling appeared to break down in the 2005-2006 time period, as it ignored the leakage current and threshold voltage which establish a baseline of power per transistor. References Wikipedia, IEEE Journal of Solid-State Circuits paper (1974)
Grosch's Law

In 1953 Herb Grosch stated the following in relation to computer performance and cost:
Giving added economy only as the square root of the increase in speed.
In other words, computer performance increases as the square of the cost. As an example, if computer A costs twice as much as computer B, you should expect computer A to be four times as fast as computer B. References Wikipedia, Journal of the Optical Society of America paper (1953)
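The relationship from the example above can be sketched in a couple of lines (the function name is illustrative, and the baseline is normalized to 1):

```python
# Grosch's Law sketch: performance grows as the square of cost.

def expected_speed(cost_ratio: float) -> float:
    """Relative speed of a machine costing `cost_ratio` times a baseline."""
    return cost_ratio ** 2

print(expected_speed(2))  # twice the cost -> 4x the speed
print(expected_speed(3))  # three times the cost -> 9x the speed
```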
Gunther's Law (aka Universal Scalability Law)
In 1993 Neil Gunther presented a conceptual framework for understanding, evaluating, comparing and improving the scalability of a system:
$$C(N) = \frac{N}{1 + \alpha (N - 1) + \beta N (N - 1)}$$

where $C(N)$ is the relative capacity of a computational platform, $N$ represents the number of physical processors (or users driving the software), and the parameters $\alpha$ and $\beta$ represent the levels of contention (e.g., queuing for shared resources) and coherency delay (i.e., latency for data to become consistent) in the system respectively. This is an extension of Amdahl's law in the sense that it accounts for the additional overhead due to interprocess communication (at application, middleware, OS or hardware level). References Wikipedia, CMG Conference paper (1993), USL article
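The USL curve is easy to explore numerically. A minimal sketch (the contention and coherency values below are made up for illustration, not measured from any real system):

```python
# Sketch of the Universal Scalability Law.

def usl_capacity(n: int, alpha: float, beta: float) -> float:
    """Relative capacity C(N) for N processors (or users)."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

# With nonzero coherency delay (beta), capacity peaks and then declines
# (retrograde scaling), unlike Amdahl's monotonically rising curve.
caps = [usl_capacity(n, alpha=0.05, beta=0.001) for n in range(1, 129)]
best = max(range(len(caps)), key=caps.__getitem__) + 1
print(best, round(caps[best - 1], 2))
```

Unlike Amdahl's law, adding processors beyond the peak actually reduces throughput, which matches what is often observed on contended systems.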
Gustafson's Law (aka Gustafson–Barsis's Law)
In 1988 John L. Gustafson and Edwin H. Barsis wrote an article Reevaluating Amdahl's Law showing how an application can retain scalability on a large number of processors by increasing the problem size:
If you apply $N$ processors to a task that has serial fraction $s$, scaling the task to take the same amount of time as before, the speedup is:

$$S(N) = N - s\,(N - 1)$$
Gustafson's law addresses the shortcomings of Amdahl's law (which assumes a fixed problem size) by proposing that programmers can increase the size of problems to fully exploit larger computing systems. In a way the law redefines efficiency, due to the possibility that limitations imposed by the sequential part of a program may be countered by increasing the total amount of computation. References Wikipedia, Communications of the ACM paper (1988)
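The contrast with Amdahl's law is easy to see numerically; a minimal sketch (function name and sample values are illustrative):

```python
# Gustafson's Law sketch: scaled speedup for serial fraction s on n processors.

def gustafson_speedup(s: float, n: int) -> float:
    """Scaled speedup S(N) = N - s(N - 1)."""
    return n - s * (n - 1)

# With a 10% serial fraction, the scaled speedup keeps growing
# almost linearly with processor count:
print(round(gustafson_speedup(0.1, 100), 2))   # 90.1
print(round(gustafson_speedup(0.1, 1000), 2))  # 900.1
```

Under Amdahl's fixed-size assumption the same 10% serial fraction would cap speedup at 10x; by growing the problem with the machine, Gustafson's model keeps nearly all processors busy.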
Koomey's Law

Jonathan Koomey observed the following trend, going back to the 1950s, regarding the energy efficiency of computers:
At a fixed computing load, the amount of battery you need will fall by a factor of two every year and a half.
Koomey re-examined the data in 2011 and found that this law had slowed, i.e. a doubling every 2.6 years instead of every 1.57 years (due to the end of Dennard scaling and the slowing of Moore's Law). References Wikipedia, IEEE Annals of the History of Computing paper (2011)
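The impact of that slowdown compounds quickly; a small sketch (the doubling periods come from the figures above, the ten-year horizon is my own choice):

```python
# Koomey's Law sketch: efficiency (computations per joule) doubles
# every `doubling_period` years.

def efficiency_gain(years: float, doubling_period: float) -> float:
    """Factor by which energy efficiency grows over `years`."""
    return 2.0 ** (years / doubling_period)

# Over a decade: the original 1.57-year trend vs the slowed 2.6-year trend.
print(round(efficiency_gain(10, 1.57), 1))
print(round(efficiency_gain(10, 2.6), 1))
```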
Little's Law

In 1961 John Little provided (and proved) a simple and intuitive approach for assessing the efficiency of queuing systems, which can be summarized as:
The long-term average number of customers ($L$) in a stable system is equal to the long-term average effective arrival rate ($\lambda$) multiplied by the average time ($W$) a customer spends in the system.
This can be expressed algebraically as $L = \lambda W$. Little's Law can be used in almost any queuing system and applied in different fields, from running a small coffee shop to the performance of processor memory subsystems. References Wikipedia, Journal of Operations Research paper (1961)
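Taking the coffee shop as a worked example (the numbers are made up for illustration):

```python
# Little's Law sketch: L = lambda * W.

def avg_customers(arrival_rate: float, avg_time: float) -> float:
    """Average number of customers L, given arrival rate and time in system."""
    return arrival_rate * avg_time

# 2 customers arrive per minute and each spends 5 minutes inside,
# so on average 10 customers are in the shop at any moment.
print(avg_customers(2.0, 5.0))
```

The same arithmetic applies to a memory subsystem: outstanding requests = request rate x average latency.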
Moore's Law

How many times have you heard the following by now? I am curious!
In 1965 Gordon Moore, who later co-founded Intel, made the following observation regarding transistor density:
The number of transistors in a dense integrated circuit doubles about every two years.
Moore's insight became a prediction, which in turn became the golden rule known as Moore's Law. The period is often quoted as 18 months because of a prediction by Intel executive David House (a combination of the effect of more transistors and the transistors being faster). References Wikipedia, Electronics Volume 38 paper (1965)
Pareto Principle (aka 80/20 Rule)

For many events, roughly 80% of the effects come from 20% of the causes.
This principle is applicable to various domains and gives an insight into where one can devote effort so as to increase productivity and performance. In computing, the Pareto principle can be applied to optimization efforts (80% of the improvement comes from 20% of the effort). Vilfredo Pareto developed this principle in the context of the distribution of income and wealth among the population. References Wikipedia
Rock's Law

This law is related to the cost of production of semiconductor chips and is named after Arthur Rock (a prominent investor who invested early and heavily in Intel, Apple and Teledyne):
The cost of a semiconductor chip fabrication plant doubles every four years.
This is also known as Moore's second law and can be seen as the economic flip side of Moore's Law: the latter results in declining costs per unit for processors, while the former results in increasing capital costs to produce a new line of processors. References Wikipedia, Contemporary Physics paper (2005)
Sun and Ni’s Law
There are many applications that are bounded by memory capacity rather than by processing speed. This law, also known as the memory-bounded speedup approach, is concerned with the memory efficiency of parallel programs:
As computing power increases the corresponding increase in problem size is constrained by the system’s memory capacity.
In such systems each processor has an independent small memory, and the overall memory capacity of the system increases proportionately with the number of processors. Instead of fixing the execution time as in Gustafson's Law, the size of the problem should be increased such that the overall memory capacity can be efficiently utilized. References Wikipedia, IEEE Supercomputing Conference paper (1990)
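A sketch of the memory-bounded speedup formula from the Sun-Ni paper, where $f$ is the sequential fraction and $G(n)$ models how the parallel workload grows as memory scales $n$-fold (the function names are mine; the special cases $G(n)=1$ and $G(n)=n$ recovering Amdahl's and Gustafson's laws are as described in the paper):

```python
# Memory-bounded speedup sketch (Sun-Ni):
#   S(n) = (f + (1 - f) G(n)) / (f + (1 - f) G(n) / n)
from typing import Callable

def memory_bounded_speedup(f: float, n: int, G: Callable[[int], float]) -> float:
    """Speedup with sequential fraction f, n processors, workload growth G(n)."""
    g = G(n)
    return (f + (1 - f) * g) / (f + (1 - f) * g / n)

# G(n) = 1 (fixed problem size) recovers Amdahl's law;
# G(n) = n (fixed execution time) recovers Gustafson's law.
print(memory_bounded_speedup(0.5, 4, lambda n: 1))  # Amdahl: 1.6
print(memory_bounded_speedup(0.5, 4, lambda n: n))  # Gustafson: 4 - 0.5*3 = 2.5
```

Any $G(n)$ between those two extremes describes a problem whose growth is limited by memory rather than by time.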
Wirth's Law

This is an adage related to computer performance, named after Niklaus Wirth:
Software is getting slower more rapidly than hardware is becoming faster.
That's all for this post! If you have a favourite law that I have missed, let me know and I will be happy to add it here.