Hi All,
I am trying to understand how fixed point and floating point are
implemented in hardware. This is what I understand so far:
1) Fixed point means the radix point sits at a fixed position in the
word. Floating point has some bits set aside for the mantissa and some
for the exponent, i.e. the radix point "floats" (see the sketch after
this list).
2) Floating point has more dynamic range than fixed point for a given
number of bits.
3) Floating point is harder to implement in hardware than fixed point,
but is closer to representing real-world values.
4) Fixed point is better when your application has power consumption
requirements but doesn't care as much about precision.
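
To make point 1 concrete, here is a small C sketch of how I picture it
(the Q8.8 split is just my arbitrary choice for illustration, nothing
standard): the same 16-bit pattern can be read as a plain integer or as
a fixed-point value with an implied radix point.

#include <stdint.h>
#include <stdio.h>

/* Interpret a 16-bit pattern as Q8.8 fixed point: top 8 bits integer,
 * bottom 8 bits fraction, i.e. value = raw / 2^8. The hardware never
 * sees this split; it is purely how we read the bits. */
static double q8_8_to_double(int16_t raw)
{
    return (double)raw / 256.0;  /* divide by 2^8 */
}

int main(void)
{
    int16_t raw = 0x0280;                            /* 0000 0010 1000 0000 */
    printf("as integer: %d\n", raw);                 /* prints 640 */
    printf("as Q8.8:    %f\n", q8_8_to_double(raw)); /* prints 2.500000 */
    return 0;
}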
All of this is good. But I am trying to visualize fixed and floating
point in hardware. What I am trying to ask is: in fixed point, is there
a separate datapath for the fraction and for the integer part? I
wouldn't think so, because at the end of the day it's just bits and it's
up to us how we interpret them. I am sure, though, that fixed point and
floating point hardware are architecturally different. (My guess at a
fixed-point multiply is sketched below.)
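
If my "just bits" intuition is right, a fixed-point multiply needs no
separate fraction datapath at all: it is one ordinary integer multiply
plus a shift. Here is a minimal sketch of that, again assuming Q8.8
operands (my choice):

#include <stdint.h>
#include <stdio.h>

/* Q8.8 multiply: both operands carry an implied scale of 2^-8, so the
 * raw product carries 2^-16; one right shift by 8 restores Q8.8. The
 * datapath is a single integer multiplier -- the "fraction" exists only
 * in how we interpret the result. */
static int16_t q8_8_mul(int16_t a, int16_t b)
{
    int32_t prod = (int32_t)a * (int32_t)b;  /* full 32-bit product */
    return (int16_t)(prod >> 8);             /* drop extra fraction bits */
}

int main(void)
{
    int16_t a = 0x0280;  /* 2.5 in Q8.8 */
    int16_t b = 0x0180;  /* 1.5 in Q8.8 */
    int16_t c = q8_8_mul(a, b);
    printf("0x%04X = %f\n", (uint16_t)c, c / 256.0);  /* 0x03C0 = 3.750000 */
    return 0;
}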
So one good question to ask is: how is a fixed point multiplier
implemented, and how is a floating point multiplier implemented? Can
someone give a real-world processor example of each? It would be much
appreciated.
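
For the floating point side, here is my software-level guess at what
the stages of a floating-point multiplier do (IEEE 754 single precision;
rounding modes, subnormals, infinities, and NaNs all ignored for brevity
-- a sketch of the idea, not how any real FPU is built): the sign is an
XOR, the exponents add with the bias subtracted once, and the
significands go through an ordinary integer multiply followed by a
normalization step. Is this roughly right?

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Multiply two IEEE 754 single-precision floats by hand, mirroring the
 * stages a hardware FP multiplier would pipeline. Simplified: truncates
 * instead of rounding; no subnormals, infinities, or NaNs. */
static float fp_mul(float fa, float fb)
{
    uint32_t a, b;
    memcpy(&a, &fa, sizeof a);
    memcpy(&b, &fb, sizeof b);

    uint32_t sign = (a ^ b) & 0x80000000u;             /* sign: one XOR */
    int32_t  exp  = (int32_t)((a >> 23) & 0xFF)        /* exponents add, */
                  + (int32_t)((b >> 23) & 0xFF) - 127; /* bias removed once */

    uint64_t ma = (a & 0x7FFFFFu) | 0x800000u;  /* restore hidden 1 bit */
    uint64_t mb = (b & 0x7FFFFFu) | 0x800000u;
    uint64_t prod = ma * mb;                    /* 24x24 -> 48-bit multiply */

    /* Normalize: a product of two values in [1,2) lies in [1,4), so at
     * most one right shift is needed. */
    if (prod & (1ULL << 47)) {
        prod >>= 1;
        exp += 1;
    }
    uint32_t mant = (uint32_t)(prod >> 23) & 0x7FFFFFu; /* drop hidden bit */

    uint32_t r = sign | ((uint32_t)exp << 23) | mant;
    float fr;
    memcpy(&fr, &r, sizeof fr);
    return fr;
}

int main(void)
{
    printf("%f\n", fp_mul(2.5f, 1.5f));  /* prints 3.750000 */
    return 0;
}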