What would be the IEEE 754 double-precision floating-point representation of n = 1.32487359893280124981233898124124 × 10^-17? For the explanation, I want you to document the steps you perform, in this order:
(1) What is n in decimal fixed-point form (ddd.ddddd);
(2) What is n in binary fixed-point form (bbb.bbbb), storing the first 110 bits following the binary point;
(3) What is the normalized binary number, written in the form 1.bbbbb...bbb × 2^e, storing 54 bits following the binary point;
(4) What are the 52 mantissa bits, after the bits in bit positions -53, -54, ... are eliminated using the round-to-nearest, ties-to-even mode (exclude the leading 1. part);
(5) What is the biased exponent in decimal and in binary?
(6) Write the 64 bits of the number in the order s e m; and (7) write the final answer as a 16-hex-digit number.
Please do it in order with a brief explanation for each point. I have a test coming up and the questions will likely be asked in this format.
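The hand derivation is what the seven steps above ask for, but the final bit pattern and hex answer can be sanity-checked in Python. This is a minimal sketch, assuming a standard CPython environment; the helper name show_double is just a placeholder. It packs the value into its raw IEEE 754 double-precision encoding with the struct module, splits the 64 bits into s, e, m, and prints the 16 hex digits.

```python
import struct

def show_double(x: float) -> None:
    # Raw IEEE 754 double encoding of x, viewed as a big-endian 64-bit integer.
    # Python's float literal parsing already applies round-to-nearest, ties-to-even.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    s = bits >> 63                      # 1 sign bit
    e = (bits >> 52) & 0x7FF            # 11 biased-exponent bits (bias 1023)
    m = bits & ((1 << 52) - 1)          # 52 mantissa bits
    print("s =", s)
    print(f"e = {e:011b}  (decimal {e}, unbiased {e - 1023})")
    print(f"m = {m:052b}")
    print(f"hex = {bits:016X}")

show_double(1.32487359893280124981233898124124e-17)
```

For step (5), note that the biased exponent printed here is the unbiased exponent e from step (3) plus 1023.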