Maker Pro

Larkin, Power BASIC cannot be THAT good:


Jasen Betts

Not at all. I'm comparing all the myriad BASICs over the years.

"True BASIC" was just another proprietary extension. Its silly name
didn't mean that anybody else would ever follow suit.

Even Microsoft has released at least four versions incompatible with
each other.

And that's not counting the GUI versions, or the non-Windows/DOS versions.
 

Tim Williams

Jasen Betts said:
True, although one problem with *NIX command line utilities is that
there's no standardization in how control parameters are passed to them.

??? There is:
int argc, char *argv[]
which is considerably more standardisation than the Windows XP or Vista
command line has.

In MS-DOS, it's even easier: you get one string up to 127 characters long.
It's up to you to play with it, which usually means tokenization first
(which is just what the C runtime does when it cooks up your args).

How it's programmed doesn't matter. He's probably referring to switches and
order (switches before arguments?).
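
A minimal C sketch of that (the "-v" switch and names here are made up):
argc/argv standardises how the arguments are delivered, but each program
still decides what counts as a switch and whether order matters. This one
accepts the switch before or after the positional argument.

/* Sketch: a made-up "-v"/"/v" switch may appear before or after the
 * positional argument; the first non-switch token is kept. */
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    const char *path = NULL;   /* first non-switch token */
    int verbose = 0;

    for (int i = 1; i < argc; i++) {
        if (argv[i][0] == '-' || argv[i][0] == '/') {
            if (strcmp(argv[i] + 1, "v") == 0)
                verbose = 1;
            else
                fprintf(stderr, "unknown switch: %s\n", argv[i]);
        } else if (path == NULL) {
            path = argv[i];    /* keep the first positional argument */
        }                      /* later positional arguments are ignored */
    }

    printf("path=%s verbose=%d\n", path ? path : "(none)", verbose);
    return 0;
}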

Tim
 

Nobody

True, although one problem with *NIX command line utilities is that
there's no standardization in how control parameters are passed to them.

??? There is:
int argc, char *argv[]
which is considerably more standardisation than the Windows XP or Vista
command line has.

In MS-DOS, it's even easier: you get one string up to 127 characters long.
It's up to you to play with it, which usually means tokenization first
(which is just what the C runtime does when it cooks up your args).

This can be a nuisance, as the program's documentation tends to document
its behaviour in terms of distinct arguments, without documenting how the
string is parsed into arguments. This can be an issue if one of the
arguments is an arbitrary string which may contain spaces, quotes etc.
Nowadays, you can usually rely upon it using the MSVCRT parser, but you
occasionally run into exceptions.

It also doesn't help that _spawnvp() et al concatenate their arguments
without quoting, so the program doesn't necessarily get the exact same
list of arguments which the caller provided.
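
A rough sketch of the quoting you end up doing by hand so that arguments
containing spaces survive a trip through a flat command line (the program
name and helper here are made up; this handles only the simple case, while
the full MSVCRT rules also involve backslashes before quotes):

/* Build a flat command line from an argv-style array, quoting arguments
 * that contain spaces so the receiving program's C runtime splits them
 * back into the same tokens. Simple cases only: no embedded quotes. */
#include <stdio.h>
#include <string.h>

static void append_arg(char *cmdline, size_t size, const char *arg)
{
    size_t len = strlen(cmdline);
    int needs_quotes = strchr(arg, ' ') != NULL;

    snprintf(cmdline + len, size - len, "%s%s%s%s",
             len ? " " : "",
             needs_quotes ? "\"" : "", arg,
             needs_quotes ? "\"" : "");
}

int main(void)
{
    const char *args[] = { "plot.exe", "my data file.csv", "-fast", NULL };
    char cmdline[1024] = "";

    for (int i = 0; args[i] != NULL; i++)
        append_arg(cmdline, sizeof cmdline, args[i]);

    puts(cmdline);   /* plot.exe "my data file.csv" -fast */
    return 0;
}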
 

Tim Williams

Nobody said:
This can be a nuisance, as the program's documentation tends to document
its behaviour in terms of distinct arguments, without documenting how the
string is parsed into arguments. This can be an issue if one of the
arguments is an arbitrary string which may contain spaces, quotes etc.

In those cases, either the string would have to go at the end (how else can
you tell it's an arbitrary string?), or it would have to be encapsulated
somehow (such as in quotes).

Last time I wrote a command line parser, I happened to be writing in
assembly. There, it grabbed a byte and checked whether it was a token,
delimiter or switch (i.e., "/"). If it was a switch, it scanned for the
WORD "?/" and so on (little-endian byte order reads "/?" as "?/" when
loaded as a word. Odd at a glance, and not useful for switches longer than
two characters, but interesting in its own way.) After each switch was
found, a flag was set indicating it had been found, and therefore it
should not be checked for again. In that particular program, switches
could go anywhere in the command line, before or after the argument. The
first token found that wasn't a switch was copied as the argument (the
path, in this case); any subsequent arguments were ignored.
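
For what it's worth, the same two-byte trick looks like this in C (a
hedged sketch assuming a little-endian machine; the helper name is made
up): the characters "/?" loaded as a 16-bit word compare equal to a
constant written byte-reversed, i.e. "?/".

/* On a little-endian machine the two characters "/?" read as the
 * 16-bit word 0x3F2F, i.e. '?' in the high byte and '/' in the low
 * byte. One word compare replaces two byte compares. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int is_help_switch(const char *p)
{
    uint16_t w;
    memcpy(&w, p, 2);                        /* load two chars as a word */
    return w == (uint16_t)('?' << 8 | '/');  /* "/?" on little-endian */
}

int main(void)
{
    printf("%d\n", is_help_switch("/? foo"));  /* prints 1 on x86 */
    printf("%d\n", is_help_switch("/x foo"));  /* prints 0 */
    return 0;
}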

Meanwhile, in QBasic, I've done plenty of command lines, but since COMMAND$
is so tempting, and because QB sadly doesn't provide any tokenizing
functions, I usually just go lame and do something like OPEN COMMAND$ FOR
INPUT AS #1, no switches at all. OTOH, in C, you get all parameters
tokenized already, so it's quite easy to look at them in order. That's kind
of nice. (Hmm, if QB had the foresight, it could be COMMAND$(n) instead!)

Tim
 

Tim Williams

AZ Nomad said:
Sheesh. It's a phone call for you. The 70's want their interpreted
language technology back.

Hah. Interpreters rule. Unbeaten for debugging, especially for the range
of constructs you can build and test on the fly.

Actually, is PowerBasic interpreted, or is it only compiled? FreeBASIC is
compile-only.

Tim
 

Nobody

Yes, but there's been some progress made in the last forty years. BASIC is
a piece of shit and utterly unusable unless stacked to the gills with
proprietary extensions. Use any of these proprietary extensions, and the
code is no longer portable to any other compiler.

I use python when I want to use an interpreted language.

Okay language, lots of features, large userbase. Its main drawback is that
it's slower than a heavily-sedated snail. Proprietary extensions aren't
really an issue when there's only ever likely to be one implementation.

Other than having a larger user base, it doesn't really seem to have any
advantages over Lisp.

For quick computation and data-processing tasks, I'd pick Haskell, but
that's only an option if you can operate outside of the imperative
paradigm. That's likely to be an issue for EEs, as embedded programming
tends to be heavily state-oriented.
 

Fred Bartoli

John Larkin said:
.NET is hardly a universal, portable platform. If all you write is
text-mode ANSI C, sure, it's fairly portable... as long as you keep
your ints and longs and stuff under control.



As an engineer, I'm more interested in number crunching, simulation,
graphics, generating embedded code images, stuff like that. There are
a few nice collections of Basic subroutines around - curve fitting,
FFTs, stuff like that - so I don't always write everything from
scratch. And I do often reuse code.

I haven't used a linked list in decades. PCs are so fast these days
you can waste enormous numbers of cycles on brute-force methods, like
linear searching, to save programming time.

Well done!
You've just got your ticket for a hiring interview at M$ ;-)
 

Nobody

How many lines of code is that, John?

I'm willing to bet that there are few other languages that could pull it
off in as few lines of code. Even with something like Python, since
graphics aren't native, you end up using one of the windowing toolkits
such as wxGTK or wxPython, and "just making a stupid plot" tends to
consume a lot more code than one might expect.

MatPlotLib (matplotlib.sourceforge.net) seems to have become the standard
plotting library for Python. NumPy and SciPy (scipy.org) are also useful
tools.

But if you're cobbling together a program for a specific task and for
personal use, the optimal language is usually whichever one you're most
fluent in.
 

Nobody

I'd like to see a high-level language similar to Python become a popular
choice for embedded programming... these days it still tends to be either
C or various proprietary flavors of BASIC. I suppose some of this is
driven by how little RAM many microcontrollers have (e.g., many Atmel AVRs
now have >64kB of flash ROM but <4kB of RAM... ouch!)

High-level languages tend to require a fairly substantial run-time
infrastructure, which tends to make them a poor fit for the lower-end of
embedded programming. Even C is too high-level for a lot of the things you
might want to do on a PIC10/12; if you're using instruction cycles for
timing, there isn't really much choice but assembler.

Also, dynamic memory allocation (especially with garbage collection, but
even without it) doesn't fit well with real-time programming.
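
The usual workaround at that level is a pool of statically allocated
blocks, so allocation time is bounded and there is no fragmentation or
collector; a minimal sketch (sizes and names are arbitrary):

/* Fixed pool of identical blocks: allocation scans at most POOL_BLOCKS
 * flags, so the worst case is bounded and known at compile time. */
#include <stddef.h>

#define POOL_BLOCKS 8

typedef struct { unsigned char data[32]; } block_t;

static block_t pool[POOL_BLOCKS];
static unsigned char in_use[POOL_BLOCKS];

static block_t *pool_alloc(void)
{
    for (size_t i = 0; i < POOL_BLOCKS; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return &pool[i];
        }
    }
    return NULL;               /* pool exhausted: caller must cope */
}

static void pool_free(block_t *b)
{
    in_use[b - pool] = 0;
}

int main(void)
{
    block_t *b = pool_alloc();
    if (b != NULL)
        pool_free(b);
    return 0;
}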

At the higher end (e.g. mobile phones with an ARM, a few MB of RAM/flash
and a full OS), Java seems to be the most popular choice, particularly
with many of the ARM variants having hardware support for executing Java
bytecode directly.
 

Martin Brown

Yes. I expect someone just like Larkin is responsible for the glacially
slow graphics and charting in XL2007 (and also for degrading the chart
trend line polynomial fit to give the same wrong answer as LINEST).
But my linear searches are 50x as fast as whatever nonsense they are
doing.

Even if that were true, a decent Shell sort (worst case O(N^1.5)) would
beat your poxy O(N^2) algorithm for N > 2500. In a real example with
random unsorted data as input, Shell's sort behaves more like O(N^1.3).

And an O(N log2 N) sort would beat it hollow at N > ~500.
You really are clueless about algorithms and their relative speeds.

Linear search, or sorting by straight insertion, is typically only faster
for N < 50. Hardware engineers tend to code the even slower bubble sort.
It may be fast enough for your small datasets, but it doesn't scale well.
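
For reference, a sketch of such a Shell sort in C, using Knuth's gap
sequence (1, 4, 13, 40, ...), which gives roughly the O(N^1.5) worst case
quoted above:

#include <stdio.h>

static void shell_sort(double a[], int n)
{
    int gap = 1;
    while (gap < n / 3)
        gap = 3 * gap + 1;              /* 1, 4, 13, 40, ... */

    for (; gap >= 1; gap /= 3) {
        /* insertion sort over elements that are 'gap' apart */
        for (int i = gap; i < n; i++) {
            double v = a[i];
            int j = i;
            while (j >= gap && a[j - gap] > v) {
                a[j] = a[j - gap];
                j -= gap;
            }
            a[j] = v;
        }
    }
}

int main(void)
{
    double x[] = { 3.2, -1.0, 7.5, 0.0, 2.2 };
    shell_sort(x, 5);
    for (int i = 0; i < 5; i++)
        printf("%g ", x[i]);
    printf("\n");
    return 0;
}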

Regards,
Martin Brown
 

Nobody

And an O(N log2 N) sort would beat it hollow at N > ~500. You really are
clueless about algorithms and their relative speeds.

Linear search, or sorting by straight insertion, is typically only faster
for N < 50. Hardware engineers tend to code the even slower bubble sort.
It may be fast enough for your small datasets, but it doesn't scale well.

But is the absolute difference in run-time between the algorithms more
than the difference in programming time?

If you only run a program once, 30 seconds coding plus 1 minute run-time
is an improvement over 5 minutes coding plus 1 second run-time.

Even if a program gets used regularly, computers are cheaper than people.
For bespoke software, buying a faster computer is often cheaper than
paying more to make the software go faster.
 

Nobody

How about PostScript?

It makes PIC assembler look intuitive. PostScript was designed to minimise
the burden on the interpreter, not the programmer.
 

Spehro Pefhany

It makes PIC assembler look intuitive. PostScript was designed to minimise
the burden on the interpreter, not the programmer.

Hence the use of floating-point math for everything?
 

Nobody

Eh, maybe. A procedural language is not necessarily a good fit for a state
machine, nor for relay ladder logic. How good a fit is Haskell? Can you
demonstrate?

I wasn't suggesting using Haskell for embedded programming, but for
"support" tasks, e.g. crunching data.

You can use it for imperative programming. There are some cases where it
would be better than an imperative language, as you get to define the
evaluation semantics.
 

Fred Bartoli

Nobody said:
But is the absolute difference in run-time between the algorithms more
than the difference in programming time?

If you only run a program once, 30 seconds coding plus 1 minute run-time
is an improvement over 5 minutes coding plus 1 second run-time.

Even if a program gets used regularly, computers are cheaper than people.
For bespoke software, buying a faster computer is often cheaper than
paying more to make the software go faster.

Depends whether the programmer and the user are the same person or not
(from the user's POV).
Depends whether there is one user or many.
Depends whether the users are customers or not.

In the latter case, for example, you have to count the wasted time, the
extra computing resources, or whatever else, for all your customers.
Compare that with your increased production cost, plus some profit, shared
among all your customers, and you get something more meaningful, and
probably not the same conclusions.

M$ is pretty great at wasting the time and resources of their customers.
The point is that their sloppiness (their customers' time) costs them
almost nothing. Except some bad reputation in the long term...
 

Nobody

Hence the use of floating-point math for everything?

FP math is pretty much essential for (real) graphics.

Integer-based graphics APIs have (or had) a place in low-resolution video
displays where you can see individual pixels, and where the graphics are
dominated by unscaled bitmaps and (mostly orthogonal) straight lines.

Integer coordinates don't make sense if you're operating at 300+ DPI, in
physical dimensions (mm or points rather than pixels), with scaled images,
thick (>1 pixel) lines, Bezier curves, arbitrary transformations, etc.
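
As a small illustration (not PostScript's actual internals): mapping a
point given in millimetres through a rotate-and-scale transform onto a
300 DPI device has to stay in floating point until the final pixel, or
the rounding errors of intermediate integer coordinates become visible.

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = acos(-1.0);
    const double dpi = 300.0;
    const double mm_to_dots = dpi / 25.4;     /* 25.4 mm per inch */
    const double angle = 30.0 * pi / 180.0;   /* 30 degree rotation */

    double x_mm = 10.0, y_mm = 5.0;

    /* rotate, then scale into device space; keep everything in FP */
    double xd = (x_mm * cos(angle) - y_mm * sin(angle)) * mm_to_dots;
    double yd = (x_mm * sin(angle) + y_mm * cos(angle)) * mm_to_dots;

    /* round only once, at the very end */
    printf("device pixel: (%ld, %ld)\n", lround(xd), lround(yd));
    return 0;
}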
 

Nobody

Yeah, but even 16kB of flash ROM and an equal amount of RAM can get quite a
lot done.

You aren't going to be able to run Python in that ;)
It seems to me that it's the highly unequal ratio we now see
between flash ROM and RAM that makes many high-level languages difficult to
implement.

No, just the numbers really. High-level languages are designed for desktop
and server systems. As RAM has increased from a few megabytes through
hundreds of megabytes into gigabytes, software has followed suit. No-one
is going to worry if the interpreter uses a megabyte before it even gets
to printing "hello, world".

At times it just seems like we've regressed a bit in terms of how much
we can accomplish on a given piece of hardware, even with a HLL... take
a look at some of the old "handheld computers" from the 1980s, e.g.,
http://www.rskey.org/detail.asp?manufacturer=Casio&model=PB-2000C (32kB
RAM, runs interpreted C or assembly) or even
http://www.rskey.org/detail.asp?manufacturer=Casio&model=FX-795P (16kB
RAM, runs BASIC). Or even the "early advanced" HP calculators, like the
HP 28s -- 128kB ROM, 32kB RAM -- incredibly powerful for its day.

But by today's standards, BASIC and C are both low-level languages. BASIC
has int, float, string, and arrays thereof; C has int, float, struct, and
array (strings are just arrays of 8-bit ints).

Modern HLLs have lists, tuples, dictionaries, objects (OOP), functions as
data values, closures, continuations, iterators, and arbitrary-precision
arithmetic.

To put it in perspective: the megabyte of RAM that a modern HLL might use
probably costs less than 16K did in the 1980s. If (say) $20-worth of RAM
was a reasonable amount to use back then, why isn't it reasonable now?
 

Spehro Pefhany

FP math is pretty much essential for (real) graphics.

Integer-based graphics APIs have (or had) a place in low-resolution video
displays where you can see individual pixels, and where the graphics are
dominated by unscaled bitmaps and (mostly orthogonal) straight lines.

Integer coordinates don't make sense if you're operating at 300+ DPI, in
physical dimensions (mm or points rather than pixels), with scaled images,
thick (>1 pixel) lines, Bezier curves, arbitrary transformations, etc.

It was a little OTT in the 1980s. PostScript printers had more
powerful CPUs than many of the computers of the day. Not what I'd call
minimizing the burden. They scaled it back with PDF (and with the
limitations for Type 1 fonts).
 

Nobody

It was a little OTT in the 1980s. PostScript printers had more
powerful CPUs than many of the computers of the day. Not what I'd call
minimizing the burden.

I was referring specifically to the interpreter, rather than the graphics
library. The graphics library was certainly heavy-duty, but that was
inevitable for generating production-quality graphics.

The performance of the LaserWriter relative to the early Macs doesn't seem
so extreme when you consider that an office might have had a dozen Macs
and one LaserWriter.

Also, pushing the work onto the printer allows the same PostScript data to
be used for both the office printer and the Linotronic used for final
production. That won't work with pre-rendered bitmaps (and generating a
full-page, 2450 DPI bitmap on an early Mac was out of the question).

They scaled it back with PDF (and with the limitations for Type 1 fonts).

These are interpreter limitations, designed to make the code more like
"data" and less like "code". The graphics library is essentially unchanged
(e.g. PDF doesn't support alpha-blended bitmaps, even though it's
straightforward to implement on a monitor).
 

Martin Brown

Nobody said:
But is the absolute difference in run-time between the algorithms more
than the difference in programming time?

In most high-level languages, basic things like efficient sorting, hash
tables and searching are already available in a library somewhere.
RTFM and use them rather than reinvent the wheel badly.

Electronics engineers are more used to this approach than software
engineers. They do not automatically try to roll their own electrolytic
capacitors or make transistors from scratch every time.
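
In C, for instance, the library routine is qsort() from <stdlib.h>; a
minimal sketch of using it rather than hand-rolling a sort:

#include <stdio.h>
#include <stdlib.h>

/* comparison callback for qsort(): ascending doubles */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);   /* avoids the overflow-prone x - y trick */
}

int main(void)
{
    double data[] = { 3.2, -1.0, 7.5, 0.0, 2.2 };
    size_t n = sizeof data / sizeof data[0];

    qsort(data, n, sizeof data[0], cmp_double);

    for (size_t i = 0; i < n; i++)
        printf("%g ", data[i]);
    printf("\n");
    return 0;
}
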
If you only run a program once, 30 seconds coding plus 1 minute run-time
is an improvement over 5 minutes coding plus 1 second run-time.

For a use-once-and-throw-away program that may be OK. But these quick
hacks have a nasty tendency to end up being used again and again. After
the fifth time of using it, the second method wins out handsomely.

Even if a program gets used regularly, computers are cheaper than people.
For bespoke software, buying a faster computer is often cheaper than
paying more to make the software go faster.

That depends on how often the program is used. Certain commercial
programs that are widely used are a lot slower than they should be due
to stupidity in the design producing an inefficient bloatware solution.
It is always a risk in the time-to-market versus performance tradeoff.

On time, on budget, on spec - pick any two for hardware. You are lucky
to get one of these on target with the average software house.

Regards,
Martin Brown
 