
Architecture of IBM System/360

The paper "Architecture of the IBM System/360" introduces four innovations: a flexible storage mechanism that provides large capacities, high speeds, data protection, and program relocation; general-purpose function that supports numerous applications across scientific, commercial, real-time, logical data processing, and many other sectors; an improved I/O system that provides parallel processing, command chaining, and channel operation; and machine-language compatibility. In addition to these features, the paper also sheds some light on the problems faced in making the system program-compatible at the bit level across all programming models in the various sectors. Some features of the System/360 are still in use: it pioneered the eight-bit byte found on every computer today, many companies use x86 servers that embrace System/360 principles, and the IBM floating-point architecture and instruction sets remain in use as well.

Summary of the Case for the Reduced Instruction Set Computer

Complex architectural designs were once proven cost-effective. For example, microprogrammed control, rather than hardwired control, allowed more complex architectures to be implemented cost-effectively, and complex architectures helped narrow the gap between memory speed and CPU speed. However, the authors of this paper argue that the trend of each architecture being more cost-effective than its predecessors cannot continue forever, and that added complexity can sometimes be more detrimental than useful. To support this claim, they compare two architectural approaches, CISC and RISC, along with other factors such as code density, support for high-level languages (HLLs), and marketing strategy. For example, memory was once very expensive, which drove programmers to compact their code by making it more complex.
This trend was harmful because complex instructions and addressing modes required more bits, which made code error-prone. Manufacturers also claimed to provide more powerful instruction sets to support HLLs, but there is little evidence that they actually did so. Furthermore, the difficulty of building CISC architectures had its consequences, such as the invention of semiconductor memory, which was cheaper and faster; the use of cache memory further reduced the speed gap between CPU and memory. Special-purpose instructions were not always faster than simple sequential instructions: in one example, replacing a single high-level instruction with several simple instructions led to a 45% reduction in the time to execute the function. The design time for CISC is also longer than for RISC architectures, which forced manufacturers to plan far ahead when designing a complex architecture. In contrast, RISC architectures are simple to design and implement, and they fit on a single chip. Their shorter design time and better use of chip area leave silicon available for on-chip caches, larger and faster transistors, and pipelining. This suggests that RISC architectures are a step ahead of CISC and, if designed properly to support high-level languages, can be the more reasonable architecture.

Source:

Solution:

Max. transistor count in the 1970s = 29,000; max. clock frequency = 5 × 10^6 Hz
Max. transistor count in the 1980s = 1,200,000; max. clock frequency = 25 × 10^6 Hz
Max. transistor count in the 1990s = 95,000,000; max. clock frequency = 600 × 10^6 Hz
Max. transistor count in the 2000s = 410,000,000; max. clock frequency = 2.4 × 10^9 Hz
Max. transistor count in the 2010s = 1,400,000,000; max. clock frequency = 2.9 × 10^9 Hz

Rate of increase of transistors from the 70s to the 80s = ((1,200,000 − 29,000) / 29,000) × 100 = 4038%
The rates of increase for the subsequent decades are:
80s to 90s = 3400%
90s to 00s = 876%
00s to 10s = 241%

Growth in clock frequency from the 70s to the 80s = ((25 × 10^6 − 5 × 10^6) / (5 × 10^6)) × 100 = 400%
The growth rates for the subsequent decades are:
80s to 90s = 2300%
90s to 00s = 300%
00s to 10s = 21%

Plot of the number of transistors versus feature size

Source:

Solution:

a) According to the chip statistics for Processor A shown in the table, we have:
Die size = 400 mm² = 4 cm²
Estimated defect rate = 0.30 per cm²
Process-complexity factor N = 5 for the 130 nm process
Assume a wafer yield of 100%.

The Bose-Einstein formula for die yield is:

Die yield = Wafer yield × 1 / (1 + Defects per unit area × Die area)^N

Therefore:

Die yield = 100% × 1 / (1 + 0.30 × 4)^5 = 1 / (2.20)^5 = 0.0194

Hence the die yield for Processor A is approximately 0.0194.

b) From the chip-statistics table we can see that the transistor counts of Processors A and B are nearly equal (276 and 279 million, respectively). However, Processor A has a much lower defect rate than Processors B and C. One reason for this difference could be the manufacturing (feature) sizes of the three processors. We know that "as die size increases, the defect density decreases" [1], and the table shows a significant difference in the die sizes and manufacturing sizes of the three processors. Hence we can conclude that Processor A's larger manufacturing size lets the manufacturer reduce complexity, and hence the defect rate, giving a higher die yield than Processors B and C.

[1] Source:

Solution:

Moore's Law states that "the number of transistors on a chip roughly doubles every two years."
This trend, stated by Moore for integrated circuits, can be formulated for the given problem as:

Number of transistors in 2025 = 2^(x/2) × Number of transistors in 2015

where x denotes the difference between the target and base years (here 2025 and 2015), i.e. x = 10 years. Hence:

Number of transistors in 2025 = 2^(10/2) × Number of transistors in 2015

which tells us that the number of transistors on a chip in 2025 should be 32 times the number in 2015.

Source:

Solution:

From the problem statement we have the following:
Parallelizable fraction of the application = 50%
Number of processors in case 1 = N; number of processors in case 2 = 1000

According to Amdahl's Law:

Speedup = 1 / ((1 − Fraction) + Fraction / Number of processors)

Substituting the values for both cases gives the following speedups:

Speedup in case 1 = 1 / ((1 − 0.50) + 0.50 / N)
Speedup in case 2 = 1 / ((1 − 0.50) + 0.50 / 1000) = 1.998

Source:

Solution:

From the given table we have the following inputs:
Peak power consumption of the 2 GHz processor = 60 W
Power consumption of the DRAM = 2.3 W
Power consumption of the HDD = 7.9 W

From this data we can find the total power consumption of one server:

Total power consumption = processor peak power + DRAM power + HDD power = 60 W + 2.3 W + 7.9 W = 70.2 W

One cooling door dissipates 14 kW = 14,000 W. Hence:

Number of servers that can be cooled = power dissipated by the cooling door / power consumption per server = 14,000 W / 70.2 W = 199.4 ≈ 200 servers

Therefore, approximately 200 servers can be served by one cooling door.
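The formulas worked through in the solutions above (the Bose-Einstein die-yield model, Amdahl's Law, and the cooling-door arithmetic) can be sanity-checked with a short script. This is a minimal sketch: the function names are mine, and all constants are the values given in the problems.

```python
def die_yield(wafer_yield, defect_density, die_area, n):
    """Bose-Einstein die-yield model:
    yield = wafer_yield / (1 + defect_density * die_area)^N."""
    return wafer_yield / (1.0 + defect_density * die_area) ** n

def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's Law: speedup = 1 / ((1 - f) + f / n)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_processors)

# Processor A: 4 cm^2 die, 0.30 defects/cm^2, N = 5, 100% wafer yield
print(round(die_yield(1.0, 0.30, 4.0, 5), 4))   # -> 0.0194

# 50% parallelizable application on 1000 processors
print(round(amdahl_speedup(0.50, 1000), 3))     # -> 1.998

# Cooling door: 14 kW dissipated, 70.2 W consumed per server
print(round(14000 / 70.2, 1))                   # -> 199.4
```

The script reproduces the three numeric answers above (0.0194, 1.998, and roughly 200 servers).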
Solution:

a) Along with the table we have the following additional information: new FP cycles = 13, and processor speed = 400 MHz = 400 × 10^6 Hz.

First we find the old and new CPIs using:

CPI = Σ(Instruction count × Cycles) / Total instruction count

CPI_old = (30×1 + 20×2 + 40×3 + 10×5) / (30 + 20 + 40 + 10) = 240/100 = 2.4
CPI_new = (30×1 + 20×2 + 40×3 + 10×13) / (30 + 20 + 40 + 10) = 320/100 = 3.2

Now we calculate the MIPS ratings:

MIPS_old = (400 × 10^6) / (2.4 × 10^6) = 166.66
MIPS_new = (400 × 10^6) / (3.2 × 10^6) = 125

b) Given: wafer yield = 75%, old die size = 12 mm² = 0.12 cm², new die size = 10 mm² = 0.10 cm², defect rate = 2 per cm², and N = 4.

First we calculate the old and new die yields:

Die yield = Wafer yield × 1 / (1 + Defects per unit area × Die area)^N

Die yield_old = 0.75 × 1 / (1 + 2 × 0.12)^4 = 0.3172
Die yield_new = 0.75 × 1 / (1 + 2 × 0.10)^4 = 0.3616

Next, the cost per processor, taking a 10 cm wafer diameter and a $1000 wafer cost:

Dies per wafer = (π × (Wafer diameter / 2)^2) / Die area − (π × Wafer diameter) / √(2 × Die area)
Cost of die = Cost of wafer / (Dies per wafer × Die yield)

Dies per wafer_old = (π × (10/2)^2) / 0.12 − (π × 10) / √(2 × 0.12) = 590.37
Cost of die_old = 1000 / (590 × 0.3172) = $5.34

Dies per wafer_new = (π × (10/2)^2) / 0.10 − (π × 10) / √(2 × 0.10) = 715.15
Cost of die_new = 1000 / (715 × 0.3616) = $3.86

From these answers we can conclude that decreasing the die size also decreased the cost per die, and that as the CPI increased, the MIPS decreased.

c) We are asked to calculate the theoretical limit of the best possible overall speedup, i.e. assuming the FP instructions could execute in zero cycles:

New CPI = (30×1 + 20×2 + 40×3 + 10×0) / (30 + 20 + 40 + 10) = 190/100 = 1.9
Speedup_overall = Execution time_old / Execution time_new = 240/190 = 1.263
MIPS_new = (400 × 10^6) / (1.9 × 10^6) = 210.52

B) Solution:

In physics, power is the amount of energy transferred per unit time.
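Before the discussion of mobile power, the CPI and MIPS arithmetic from part (a) of the solution above can be re-checked with a short script. This is a sketch only: the instruction mix is the one given in the table, and the helper names are mine.

```python
# Instruction mix as (count, cycles-per-instruction) pairs from the table
mix_old = [(30, 1), (20, 2), (40, 3), (10, 5)]
mix_new = [(30, 1), (20, 2), (40, 3), (10, 13)]   # FP now takes 13 cycles

def cpi(mix):
    """Weighted CPI = sum(count * cycles) / total instruction count."""
    return sum(count * cycles for count, cycles in mix) / sum(count for count, _ in mix)

def mips(clock_hz, cpi_value):
    """MIPS = clock rate / (CPI * 10^6)."""
    return clock_hz / (cpi_value * 1e6)

clock = 400e6  # 400 MHz
print(cpi(mix_old), cpi(mix_new))           # -> 2.4 3.2
print(round(mips(clock, cpi(mix_old)), 2))  # -> 166.67
print(round(mips(clock, cpi(mix_new)), 2))  # -> 125.0
```

The outputs match the hand calculation: raising the FP cycle count from 5 to 13 pushes the CPI from 2.4 to 3.2 and drops the MIPS rating from about 166.7 to 125.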
Mobile CPU power is, in general, directly proportional to clock rate, and it does affect battery life: a higher clock rate drains the battery faster. In practice, however, battery life depends largely on the user's usage pattern. For example, the battery of a user who frequently plays online mobile games will drain much faster than that of a user who does not play games or run other energy-hungry applications. On the other hand, if we reduce the clock speed by 20%, operations will take more time (applications may not run as smoothly on the newer device as on the older one), which will ultimately hamper the performance of the new device. Hence, in my opinion, the mobile company should not reduce the clock speed for the sole purpose of extending battery life. A more feasible option could be "dash charging" technology, which delivers more charge to the battery in less time and does not affect the performance of the new device.

Sources: extend-battery-life/
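The 20% clock-reduction trade-off discussed above can be made concrete with a back-of-the-envelope model. This is an illustrative sketch only: it assumes power scales linearly with clock rate and battery energy is fixed, and it ignores voltage scaling and idle power, which change the picture on real devices.

```python
def runtime_scale(clock_scale):
    """If dynamic power scales linearly with clock rate and the battery
    energy is fixed, battery runtime scales as 1 / clock_scale.
    (Simplified model: ignores voltage scaling and idle power.)"""
    return 1.0 / clock_scale

scale = 0.80  # clock reduced by 20%
print(runtime_scale(scale))  # -> 1.25: ~25% longer battery life, but a
                             #    CPU-bound task also takes ~25% longer
```

Under this simple model the battery lasts about 25% longer, but CPU-bound work also takes about 25% longer, which matches the argument above that reducing the clock hampers the new device's performance.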