Maker Pro

Writing Scalable Software in C++


Skybuck Flying

I don't agree with that.

Write large parts of code, do a check once.

Voila, the only problem is you will have two copies of the code.

Bye,
Skybuck.
 

Frank Birbacher

Hi!

Skybuck said:
Lol such big statements LOL.

I visited that newsgroup two times.

And I never plan to revisit it again unless I have a really really really
really really strange question.


..oO(OMG, we're stuck with him/her here.)

If you managed to get yourself a bad reputation by visiting the
comp.lang.c group just two times you should be thinking about your
behaviour. You seem to be too enthusiastic.

Most people here stick to a wisdom:
1st collect your thoughts
2nd order your thoughts
3rd communicate
These people don't respond to their own posts ten times.

Frank
 

Skybuck Flying

No,

That newsgroup is full of retards, that's all LOL.

Bye,
Skybuck.
 

MooseFET

For addition and subtraction probably.

For multiplication and division some performance could be lost for 32 bits, but it
would still be faster than emulating it.

Multiply doesn't take very long to do. The instructions for it can be
easily inlined. For a divide, finding 2^N/X and then multiplying is
sometimes quicker. There are tricks for doing 2^N/X quickly.

A 256-bit value divided by a 64-bit value yielding a 64-bit result can be done
this way in about 1/3rd the time of the actual divide on an 8-bit machine.

Whatever the case may be.

The point is that the detection is the overhead; if the CPU can do the detection,
that overhead might disappear ! ;) :)

It wouldn't go away. You are adding parts and logic and choices to be
made to every instruction in the CPU. This uses up transistors and
time. Doing stuff takes time.
 

MooseFET

Do some actual _measurements_ and find out, rather than guessing. Emulating
64-bit operations even when not required is almost always cheaper in both
programmer and CPU time than trying to detect and handle cases in which not
to use emulation.

"Rules of Optimization:
Rule 1: Don't do it.
Rule 2 (for experts only): Don't do it yet."
- M.A. Jackson

"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other single reason - including blind
stupidity."
- W.A. Wulf

"We should forget about small efficiencies, say about 97% of the time:
premature optimization is the root of all evil."
- Donald Knuth

I forgot who said these:

No amount of optimizing the implementation of the slow algorithm will
turn it into the fast one.

Optimize after there are zero bugs. There never are zero bugs.
 

MooseFET

No, you can write the code once and compile it twice.

Make that "compile it more than once". The truth may be that it only
gets compiled 1.9 times. If both flavors of the code are seen by the
compiler, at the same time, the compiler may not make twice as much
code as for the single version. It will shift the order of some
instructions to move common ones out of the sections that differ.


[...]
Are you talking to yourself? _Every single person_ commenting on this
thread is telling you you're wrong.

I think he knows he is lost. The insults are to cover the
embarrassment he feels.
 

Skybuck Flying

It's the other people that started the insults, not me !

They think they know everything, well that's definitely not the case !

Bye,
Skybuck.
 

Rudy Velthuis

Stephen said:
"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other single reason -
including blind stupidity." - W.A. Wulf

Skybuck is very good in the "blind stupidity" department, though. <g>
 

Stephen Sprunk

Skybuck Flying said:
For addition and subtraction probably.

For multiplication and division some performance could be lost for 32 bits,
but it would still be faster than emulating it.

Not likely. On common 64-bit machines, all operations take the same amount
of time regardless of whether they're 32- or 64-bit, so there's no potential
speedup. So, you're only talking about potential benefits of not using
emulation on older 32-bit machines. The performance of detecting the 32-bit
case and then branching to either the 32- or 64-bit code paths (or, in the
16/32-bit equivalent, setting a bit in the segment descriptor) will usually
outweigh the savings you'll get from not needing to emulate 64-bit
operations. Even if it's not a certain victory, the programmer cost of the
code complexity will likely decide things in such a case -- especially since
it only benefits people with outdated machines.
Whatever the case may be.

The point is that the detection is the overhead; if the CPU can do the
detection, that overhead might disappear ! ;) :)

Adding that detection logic into the CPU will just change where the overhead
is paid for; that cost has to be paid _somewhere_.

You seem to think that counting instructions is how to measure speed. That
hasn't been true on x86 since the days of the 486, or possibly even earlier.
Memory latency, cache (both instruction and data) hit rates, BPU and BHT
misses, utilization of varying types of functional units, parallelism, OOE,
and various other things mean the _only_ way to determine what's fastest is
to actually write the code and test it -- and the answers may be different
depending on the chips being used.

You are postulating chips that do not exist (this mythical BitMode) and that
the makers have shown no interest in making. You also ignore the cost of
figuring out what to set the BitMode to, as if that were free. You further
ignore how width-independent instructions are supposed to know how much data
to load/store, or how the compiler is supposed to efficiently reserve space
for such when the data types are not known at compile time.

S
 

Ron Natalie

MooseFET said:
You claim the above and then go on to say the below:


The "typedef" declares a new type.

No it does not. It makes an alias for an existing type. You can't
distinguish between the typedef and the original type either through
overloading or typeid or anything else.
 

MooseFET

No it does not.

Yes it does, in all the ways that matter to the argument with Skybuck.
It causes a new name to be associated with a type. This makes it a
declaration of a type. Just because C doesn't do as strict of type
checking as some other languages doesn't make it not a declaration of a
type. After the typedef has been done there is a new symbol that is a
type.
 

Frithiof Andreas Jensen

"David Brown" <[email protected]> skrev i en meddelelse

If you learn to use Usenet properly before trying to post this stuff, it
would be a lot easier to get you back on the path of sane software
development. It's not worth spending time answering you if you can't
write questions or comments that make sense.

It's this particular troll's mode of operation. News2020 was more fun.
 

Frithiof Andreas Jensen

"Frederick Williams" <"Frederick Williams"@antispamhotmail.co.uk.invalid>
skrev i en meddelelse
I hope that this doesn't sound impolite, but why are you posting to
sci.electronics.design and alt.math?

....because he knows that there are always a few people in
'sci.electronics.design' that will take the bait!
 

Frithiof Andreas Jensen

Most people here stick to a wisdom:
1st collect your thoughts

"SkyBuck troll tard vs 1.0": Fatal Exception: Brain dropped out on floor at
birth ... continuing.
 

JosephKK

Skybuck Flying [email protected] posted to sci.electronics.design:
Yes very simply statement.

Achieving this in a scalable way is what this thread is all about.

Re-writing code, or writing double code, or even using multiple
libraries is not really what this is about.

It's nearly impossible to achieve without hurting performance. The only
solutions might be C++ templates or generics; I'm not even sure how easy
it would be to switch between two generated classes at runtime.

Bye,
Skybuck.

Can't speak for other libraries, but the integer and floating point
routines in the GCC C++ libraries are written as templates. Just don't
expect to change the CPU ALU mode on the fly. (at least not on
x86_64, PPC, MIPS or SPARC architectures.)
 

JosephKK

Skybuck Flying [email protected] posted to sci.electronics.design:
For addition and subtraction probably.

For multiplication and division some performance could be lost for 32
bits, but it would still be faster than emulating it.

Whatever the case may be.

The point is that the detection is the overhead; if the CPU can do the
detection, that overhead might disappear ! ;) :)

Bye,
Skybuck.

Precisely why we use typing and let the compiler do it. It moves the
overhead of the detection clear out of the running program.
 

Skybuck Flying

You have valid points.

Just because an if statement/switch to 32-bit code proved faster on my
system with a simple test program doesn't mean it will always be faster,
or always be faster on other chips.

I am pretty much done investigating this issue.

I am now convinced extending the code with int64s is OK.

And I will do it only at those places where it's absolutely necessary;
the rest will remain 32 bits.

So the current code base is being converted/upgraded to a mix of 32-bit
and 64-bit numbers.

I haven't looked at the compare statements and copy statements for 64
bits yet; there's some more overhead there.

Some new algorithm parts are needed to cope with 64 bits as well, so the
program will probably be a bit slower anyway.

It's the price to pay lol ;)

Bye,
Skybuck.
 