Maker Pro

Writing Scalable Software in C++

Well I just had an idea which might be interesting after all:

64 bit emulated mul and div are probably slow.

So if it's possible to switch to 32 bit maybe some speed gains can be
achieved !

So for addition and subtraction the 64 bit emulated versions are always
called.

But for multiplication and division the 32 bit version might be called when
possible and the 64 bit emulated version when absolutely necessary.

I shall inspect what Delphi does for 64 bit (<-emulated) multiplication and
division ;)

Bye,
Skybuck.



What are you multiplying and dividing by?

If you're multiplying by or dividing by powers of 2, bit shifts are
much faster than multiplications or divisions.
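
For example (a minimal sketch, using unsigned integers; for signed values a division by a power of two is not a plain shift when the value can be negative):

#include <cstdint>
#include <cassert>

int main() {
    std::uint64_t x = 1000;

    std::uint64_t a = x * 8;   // multiply by a power of two...
    std::uint64_t b = x / 8;   // ...and divide by one

    assert(a == (x << 3));     // same result as shifting left by 3
    assert(b == (x >> 3));     // same result as shifting right by 3 (unsigned only)
    return 0;
}

Modern compilers usually perform this strength reduction automatically when the divisor is a compile-time constant, so doing the shift by hand mostly matters when the power of two is only known at run time.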
 

Skybuck Flying

David Brown said:
So if your program needs 32 bits, use 32 bits. If it needs 64 bits, use
64 bits.

Yes, a very simple statement.

Achieving this in a scalable way is what this thread is all about.

Re-writing code, writing the code twice, or even using multiple libraries is
not really what this is about.

It's nearly impossible to achieve without hurting performance. The only
solutions might be C++ templates or generics, and I'm not even sure how easy
it would be to switch between the two generated classes at runtime.

Bye,
Skybuck.
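
For what it's worth, a rough sketch of the template route (illustrative names, not anyone's posted code): the routine is written once, and the compiler generates a 32 bit and a 64 bit version of it from the same source.

#include <cstdint>

// Written once; the compiler stamps out one copy per offset type.
template <typename OffsetType>
OffsetType blockOffset(OffsetType blockNumber, OffsetType blockSize) {
    return blockNumber * blockSize;   // 32 bit or 64 bit multiply, depending on the instantiation
}

int main() {
    // Both widths exist in the same program; the source is not duplicated.
    std::uint32_t a = blockOffset<std::uint32_t>(100u, 512u);
    std::uint64_t b = blockOffset<std::uint64_t>(10000000000ULL, 512ULL);
    (void)a;
    (void)b;
    return 0;
}

Switching between the two generated versions at run time still needs some mechanism on top of this, such as the virtual methods discussed later in the thread.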
 

Skybuck Flying

David Brown said:
If you learn to use Usenet properly before trying to post this stuff, it
would be a lot easier to get you back on the path of sane software
development. It's not worth spending time answering you if you can't
write questions or comments that make sense.

What I wrote above is pretty clear to me, even my mother could understand
that ! ;)

Bye,
Skybuck.
 

Skybuck Flying

What I wrote is really simple.

if FileSize < 2^32 bits then 32 bit case
if FileSize > 2^32 bits then 64 bit case.

Of course the compiler doesn't know at compile time, because the files are
opened at runtime.

Not even the programmer knows what the size of the file will be.

Bye,
Skybuck.
 

Skybuck Flying

MooseFET said:
This statement is incorrect. C, C++, Borland Pascal and its
descendants, and just about every other language I can think of allow
you to declare a new type to be the same as a simple type, allow
conditional compiles, and allow include files. You don't need to have
two copies of the source code.


{$ifdef bit32}
blablabla
{$endif}
{$ifdef bit64}
blablabla
{$endif}

^^ Still have to write two versions BLEH !

I wouldn't call that "Scalable Software" :)

It doesn't even scale properly at runtime.

Only one can be chosen at compile time.

Bye,
Skybuck.
 

Default User

Frederick said:
I hope that this doesn't sound impolite, but why are you posting to
sci.electronics.design and alt.math?


He's a well-known troll in comp.lang.c, looks like he's decided to
expand his business.



Brian
 

David Brown

Skybuck said:
What I wrote is really simple.

if FileSize < 2^32 bits then 32 bit case
if FileSize > 2^32 bits then 64 bit case.

Of course the compiler doesn't know at compile time, because the files are
opened at runtime.

Not even the programmer knows what the size of the file will be.

Finally you have managed to explain what you want, after all this
absurdly long-winded drivel. Had you said this at the start, someone
would have told you the answer.

When you are reading a file from a disk, any extra time spent by a
32-bit cpu doing 64-bit arithmetic to handle the file offsets will be
totally and utterly irrelevant. Thus if you want to handle such large
files, you use 64-bit integers.

If you really are interested in learning to develop software, you should
first learn what's important.
 

MooseFET

Incorrect. C and C++ certainly do not.

You claim the above and then go on to say the below:
You can #define or typedef
something that appears to be a type but they aren't distinct types.

The "typedef" declares a new type. It has a place in the symbol table
of the compiler where it keeps track of types. The compiler knows
that it is equivelent to the simple type it was declared. This gives
all the ability needed to do what the OP is asking for. If the
"typedef" and "#declare# didn't exist then he would be right in his
claims. Since they do he is wrong.
You're just conditionally compiling which type you are using (which
accomplishes what you want). The distinction is an important one.
A typedef isn't separately resolvable from the type it aliases.

I don't think you understand the argument. In Borland Pascal and C
you can do this: (perhaps slightly wrong C.)

#define tfoo int
#include "stuff.inc"
#undef tfoo
#define tfoo long int
#include "stuff.inc"

You can end up with two versions of the same code, one for "int" and
another for "long int". There is nothing useful that the OP is
talking about that can't be done already. He is just making a new way
to do each thing.
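
As a concrete, hedged version of that trick in C/C++ (file and macro names invented here): the included file must not have an include guard, and it has to paste the chosen name into its identifiers, otherwise the second inclusion collides with the first.

/* sum_impl.inc -- deliberately no include guard; included once per element type.
   Expects TFOO (the element type) and TFOO_NAME (the function name) to be defined. */
TFOO TFOO_NAME(const TFOO *values, int count) {
    TFOO total = 0;
    for (int i = 0; i < count; ++i)
        total += values[i];
    return total;
}

// main.cpp -- stamps out a 32 bit and a 64 bit version of the same included source.
#include <cstdint>
#include <cstdio>

#define TFOO std::uint32_t
#define TFOO_NAME sum_u32
#include "sum_impl.inc"
#undef TFOO
#undef TFOO_NAME

#define TFOO std::uint64_t
#define TFOO_NAME sum_u64
#include "sum_impl.inc"
#undef TFOO
#undef TFOO_NAME

int main() {
    std::uint32_t small[] = {1, 2, 3};
    std::uint64_t large[] = {1, 2, 3};
    std::printf("%u %llu\n", (unsigned)sum_u32(small, 3),
                (unsigned long long)sum_u64(large, 3));
    return 0;
}

(In C++ a template does the same job with less machinery, which is part of what this thread is arguing about.)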
 

MooseFET

{$ifdef bit32}
blablabla
{$endif}
{$ifdef bit64}
blablabla
{$endif}

^^ Still have to write two versions BLEH !

You haven't thought about it. You don't need to make two copies. In
Borland Pascal, the exact same code can be used twice. You don't need
two copies of it.



I wouldn't call that "Scalable Software" :)

It doesn't even scale properly at runtime.

We already told you about virtual methods. They do the scaling at run
time.

Only one can be chosen at compile time.

That is not true.
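
To make the virtual-method point concrete (a hedged C++ sketch with invented names, not anyone's posted code): the width check happens once at run time, when the file size becomes known, and every later call goes through the same interface.

#include <cstdint>
#include <memory>

struct OffsetMath {                        // common interface, width-agnostic
    virtual ~OffsetMath() = default;
    virtual std::uint64_t scale(std::uint64_t block, std::uint64_t size) const = 0;
};

struct OffsetMath32 : OffsetMath {         // does its arithmetic in 32 bits
    std::uint64_t scale(std::uint64_t block, std::uint64_t size) const override {
        return static_cast<std::uint32_t>(block) * static_cast<std::uint32_t>(size);
    }
};

struct OffsetMath64 : OffsetMath {         // does its arithmetic in 64 bits
    std::uint64_t scale(std::uint64_t block, std::uint64_t size) const override {
        return block * size;
    }
};

// Chosen once, at run time, e.g. right after the file is opened.
std::unique_ptr<OffsetMath> pickMath(std::uint64_t fileSize) {
    if (fileSize <= UINT32_MAX)
        return std::make_unique<OffsetMath32>();
    return std::make_unique<OffsetMath64>();
}

Whether the per-call virtual dispatch ends up cheaper than simply doing 64-bit arithmetic everywhere is something better measured than guessed, as is pointed out later in the thread.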
 

mpm

What I wrote above is pretty clear to me, even my mother could understand
that ! ;)

Bye,
Skybuck.

First of all, if your mother can understand YOU, then she probably
"can" understand everything right down to the subatomic particles!!!
So, I really don't think your proof of clarity really establishes all
that much...

Besides, I have a much better task for you:

Instead of worrying about simple compile time directives, why don't
you think about execution time directives? Say, on-the-fly
recompilation to run (re-configure) programs to operate on AVAILABLE
hardware. (For example, a navy destroyer or aircraft carrier that has
just sustained heavy battle damage).

That way, you're not limiting your brain power to just plain old 32
and/or 64 bits.
You can incorporate a whole host of new variables and other
considerations!!

Maybe you can write a program to control ship's navigation and run it
on a toaster oven when the going gets tough. Or maybe a universal
iPod-based fire control radar? Or even, get the washer/dryer machine
to play DVDs? Maybe at the same time it's doing the other two jobs
just mentioned. Should be pretty simple.

All you need to do is just put in some more conditional
statements, and maybe get the latest DLL and COM libraries, but I
don't know why it wouldn't work?

Bye.
-mpm
 

Skybuck Flying

Lots of code will have to be 64 bit.

My guess is the performance impact will be noticeable ! ;)

Even if it wasn't, that's no reason for sloppy coding :)

Code might be re-used for something else sometime ;)

Bye,
Skybuck.
 

Skybuck Flying

Using conditionals means one will be compiled and the other won't be.

It's that simple.

(You might use a different way of writing the conditionals, but the concept
remains the same; if not, give an example ;))

Bye,
Skybuck.
 

Skybuck Flying

Default User said:
He's a well-known troll in comp.lang.c, looks like he's decided to
expand his business.

Lol such big statements LOL.

I visited that newsgroup two times.

And I never plan to revisit it again unless I have a really really really
really really strange question.

Bye,
Skybuck ;)
 

Stephen Sprunk

Skybuck Flying said:
Lots of code will have to be 64 bit.

My guess is the performance impact will be noticeable ! ;)

Do some actual _measurements_ and find out, rather than guessing. Emulating
64-bit operations even when not required is almost always cheaper in both
programmer and CPU time than trying to detect and handle cases in which not
to use emulation.

"Rules of Optimization:
Rule 1: Don't do it.
Rule 2 (for experts only): Don't do it yet."
- M.A. Jackson

"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other single reason - including blind
stupidity."
- W.A. Wulf

"We should forget about small efficiencies, say about 97% of the time:
premature optimization is the root of all evil."
- Donald Knuth
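
In that spirit, a rough measurement sketch (not from the thread; invented names, and the numbers will vary with compiler, optimisation flags and CPU -- on a 64-bit machine the two loops should come out nearly identical):

#include <chrono>
#include <cstdint>
#include <cstdio>

template <typename UInt>
static UInt hammer(UInt x, long iterations) {
    for (long i = 0; i < iterations; ++i)
        x = x * 2654435761u + 12345u;      // multiply-heavy busy work
    return x;
}

int main() {
    const long n = 100000000L;             // 100 million iterations

    auto t0 = std::chrono::steady_clock::now();
    volatile std::uint32_t r32 = hammer<std::uint32_t>(1u, n);
    auto t1 = std::chrono::steady_clock::now();
    volatile std::uint64_t r64 = hammer<std::uint64_t>(1u, n);
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::milliseconds;
    std::printf("32 bit: %lld ms   64 bit: %lld ms   (results %u %llu)\n",
                (long long)std::chrono::duration_cast<ms>(t1 - t0).count(),
                (long long)std::chrono::duration_cast<ms>(t2 - t1).count(),
                (unsigned)r32, (unsigned long long)r64);
    return 0;
}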


S
 

Stephen Sprunk

Skybuck Flying said:
The world is not completely 64 bit. The world is not static, it fluctuates.

Sometimes the program only needs 32 bits, sometimes 64 bits.

Always choosing 64 bits would hurt performance LOL.

Not if you have a 64-bit machine; even if you're using a 32-bit machine,
emulating 64-bit operations will hurt performance less than trying to
detect the appropriate choice and then acting on that information.

S
 

Stephen Sprunk

Skybuck Flying said:
Absolute nonsense.

If I want I can write a computer program that runs 32 bit when possible
and 64 bit emulated when needed.

Yes, it's entirely possible to do that.
My computer program will outperform your "always 64 emulated" program WITH
EASE.

No, it won't. Post an actual test program using your method, and I'll
produce a program that does the same thing with my method, and we can
compare runtimes.
The only problem is that I have to write each piece of code twice.

A 32 bit version and a 64 bit version.

No, you can write the code once and compile it twice.
I simply instantiate the necessary object and run it.

First of all, you must pay the cost of determining which type to use. Even
ignoring that, tracking down which code path to execute for that object at
runtime will be slower than simply using 64-bit operations (which may or may
not need to be emulated) all the time.
Absolutely no big deal.

The only undesirable property of this solution is two code bases.

Wrong. You only need one code base, but the poor performance of such a
solution will be a "big deal".
Your lack of programming language knowledge and experience is definitely
showing.

Are you talking to yourself? _Every single person_ commenting on this
thread is telling you you're wrong.

S
 

dave

In comp.arch Stephen Sprunk said:
Yes, it's entirely possible to do that.


No, it won't. Post an actual test program using your method, and I'll
produce a program that does the same thing with my method, and we can
compare runtimes.


No, you can write the code once and compile it twice.


First of all, you must pay the cost of determining which type to use. Even
ignoring that, tracking down which code path to execute for that object at
runtime will be slower than simply using 64-bit operations (which may or may
not need to be emulated) all the time.


Wrong. You only need one code base, but the poor performance of such a
solution will be a "big deal".


Are you talking to yourself? _Every single person_ commenting on this
thread is telling you you're wrong.

And at least one person (me) put him in a kill file after reading the first
3 of his posts.

--
 

Skybuck Flying

Stephen Sprunk said:
Not if you have a 64-bit machine; even if you're using a 32-bit machine,
emulating 64-bit operations will hurt performance less than trying to
detect the appropriate choice and then acting on that information.

For addition and subtraction probably.

For multiplication and division, some of the cost could be reduced by using 32 bits
where possible, which would still be faster than emulating it.

Whatever the case may be.

The point is that the detection is the overhead; if the CPU can do the detection, that
overhead might disappear ! ;) :)

Bye,
Skybuck.
 

Skybuck Flying

Actually what I wrote only applies to doing the check each time.

If the check only has to be done once for large parts of code, there will
definitely be performance gains achievable ! ;)

(I already wrote that elsewhere but ok ;))

Bye,
Skybuck.
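
A small sketch of that last point (invented names again): the width check is done once, before the hot loop, instead of on every call.

#include <cstdint>

static std::uint64_t offset32(std::uint64_t block) {
    return static_cast<std::uint32_t>(block) * 512u;   // 32-bit multiply
}
static std::uint64_t offset64(std::uint64_t block) {
    return block * 512u;                               // 64-bit multiply
}

std::uint64_t sumOffsets(std::uint64_t fileSize, std::uint64_t blockCount) {
    // Decide once, outside the loop, instead of testing fileSize on every iteration.
    std::uint64_t (*offset)(std::uint64_t) =
        (fileSize <= UINT32_MAX) ? offset32 : offset64;

    std::uint64_t total = 0;
    for (std::uint64_t b = 0; b < blockCount; ++b)
        total += offset(b);
    return total;
}

Whether the indirect call ends up cheaper than just doing the 64-bit arithmetic unconditionally is, again, something to measure rather than assume.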
 